
PCS5I102 COMPUTER GRAPHICS (3-0-1)

Module – I (12 hours)


Overview of Graphics System: Video Display Units, Raster-Scan and Random Scan Systems, Graphics Input
and Output Devices. Output Primitives: Line drawing Algorithms: DDA and Bresenham’s Line Algorithm,
Circle drawing Algorithms: Midpoint Circle Algorithm and Bresenham’s Circle drawing Algorithm. Two
Dimensional Geometric Transformation: Basic Transformation (Translation, Rotation, Scaling) Matrix
Representation, Composite Transformations, Reflection, Shear, Transformation between coordinate systems.
Module – II (12 hours)
Two Dimensional Viewing: Window-to-Viewport Coordinate Transformation. Line Clipping (Cohen-Sutherland Algorithm) and Polygon Clipping (Sutherland-Hodgman Algorithm), Aliasing and Antialiasing, Half Toning, Thresholding, Dithering. Polygon Filling: Seed Fill Algorithm, Scan Line Algorithm. Two Dimensional
Object Representations: Spline Representation, Bezier Curves, B-Spline Curves. Fractal Geometry: Fractal
Classification and Fractal Dimension.
Module – III (8 hours)
Three Dimensional Geometric and Modeling Transformations: Translation, Rotation, Scaling, Reflections,
shear, Composite Transformation. Projections: Parallel Projection, Perspective Projection. Visible Surface
Detection Methods: Back-Face Detection, Depth Buffer, A-Buffer, Scan-Line Algorithm, Painter's Algorithm.
Module – IV (8 hours)
Illumination Models: Basic Models, Displaying Light Intensities.
Surface Rendering Methods: Polygon Rendering Methods: Gouraud Shading, Phong Shading. Computer
Animation: Types of Animation, Key frame Vs. Procedural Animation, Methods of Controlling Animation,
Morphing. Introduction to Virtual Reality and Augmented Reality.
Textbook:
1. Computer Graphics, D. Hearn and M.P. Baker (C Version), Pearson Education.
Reference Books:

1. Computer Graphics: Principles and Practice, J.D. Foley, A. van Dam, S.K. Feiner, Addison-Wesley.
2. Procedural Elements for Computer Graphics, David Rogers, TMH.
3. Computer Graphics: Algorithms and Implementations, D.P. Mukherjee, D. Jana, PHI.
4. Computer Graphics, Z. Xiang, R.A. Plastock, Schaum's Outlines, McGraw-Hill.
5. Computer Graphics, S. Bhattacharya, Oxford University Press.

Course Name: Computer Graphics


Course Code: PCS5I102
Course Educational Objective

Module:01

Syllabus
Module – I (12 hours)
Overview of Graphics System: Video Display Units, Raster-Scan and
Random Scan Systems, Graphics Input and Output Devices. Output
Primitives: Line drawing Algorithms: DDA and Bresenham’s Line
Algorithm, Circle drawing Algorithms: Midpoint Circle Algorithm and
Bresenham’s Circle drawing Algorithm. Two Dimensional Geometric
Transformation: Basic Transformation (Translation, Rotation, Scaling)
Matrix Representation, Composite Transformations, Reflection, Shear,
Transformation between coordinate systems.

Overview of Graphics System: What is Computer Graphics?


Computer graphics is the art of drawing pictures, lines, charts, etc. on a computer with the help of programming. A computer graphics image is made up of a number of pixels. A pixel is the smallest addressable graphical unit represented on the computer screen.

Introduction
 A computer is an information processing machine. The user needs to communicate with the computer, and computer graphics is one of the most effective and commonly used ways of communicating with the user.
 It displays information in the form of graphical objects such as pictures, charts, diagrams and graphs.
 Graphical objects convey more information in less time and in easily understandable formats, for example the statistical graphs shown in a stock exchange.
 In computer graphics, pictures or graphics objects are presented as a collection of discrete pixels.
 We can control the intensity and color of each pixel, which decide how the picture looks.
 The special procedure that determines which pixels provide the best approximation to the desired picture or graphics object is known as Rasterization.
 The process of representing a continuous picture or graphics object as a collection of discrete pixels is called Scan Conversion.

Advantages of computer graphics


 Computer graphics is one of the most effective and commonly used ways of communicating with a computer.
 It provides tools for producing pictures of “real-world” objects as well as synthetic objects such as mathematical surfaces in 4D, and of data that have no inherent geometry, such as survey results.
 It has the ability to show moving pictures, thus making it possible to produce animations.
 With computer graphics we can control an animation by adjusting the speed, the portion of the picture in view, the amount of detail shown, and so on.
 It provides tools called motion dynamics, with which the user can move objects as well as the observer as required, for example a walk-through made by a builder to show a flat's interior and surroundings.
 It provides a facility called update dynamics, with which we can change the shape, color and other properties of objects.
 With recent developments in digital signal processing and audio synthesis chips, interactive graphics can now provide audio feedback along with graphical feedback.

Application of computer graphics


 User interfaces: - Visual objects on screen that communicate with the user are one of the most useful applications of computer graphics.
 Plotting of graphs and charts in industry, business, government and educational organizations: drawings like bar charts, pie charts and histograms are very useful for quick and good decision making.
 Office automation and desktop publishing: - Used for the creation and dissemination of information, and for in-house creation and printing of documents that contain text, tables, graphs and other forms of drawn or scanned images or pictures.
 Computer aided drafting and design: - Uses graphics to design components and systems such as automobile bodies, structures of buildings, etc.
 Simulation and animation: - The use of graphics in simulation makes mathematical models and mechanical systems more realistic and easier to study.
 Art and commerce: - Graphics provides many tools that allow users to make their pictures animated and attractive, which are used in advertising.
 Process control: - Nowadays automated processes are displayed graphically on the screen.
 Cartography: - Computer graphics is also used to represent geographic maps, weather maps, oceanographic charts, etc.
 Education and training: - Computer graphics can be used to generate models of physical, financial and economic systems, which can be used as educational aids.
 Image processing: - Used to process images by changing their properties.

Display devices
 Display devices are also known as output devices.
 Most commonly used output device in a graphics system is a video monitor.

Cathode-ray-tubes

Fig. 1.1: - Cathode ray tube.

 It is an evacuated glass tube.


 An electron gun at the rear of the tube produces a beam of electrons which is directed towards the screen of the tube by a high voltage, typically 15,000 to 20,000 volts.
 The inner side of the screen is coated with a phosphor substance which gives off light when it is struck by electrons.
 The control grid controls the velocity of the electrons before they hit the phosphor.
 The control grid voltage determines how many electrons are actually in the electron beam: the more negative the control voltage, the fewer the electrons that pass through the grid.
 Thus the control grid controls the intensity of the spot where the beam strikes the screen.
 The focusing system concentrates the electron beam so it converges to a small point when it hits the phosphor coating.
 The deflection system directs the beam, deciding the point where the beam strikes the screen.
 The deflection system of the CRT consists of two pairs of parallel plates: vertical and horizontal deflection plates.
 The voltages applied to the vertical and horizontal deflection plates control the vertical and horizontal deflection respectively.
 There are two techniques used for producing images on the CRT screen:
1. Vector scan/Random scan display.
2. Raster scan display.
 A vector scan display directly traces out only the desired lines on the CRT.
 If we want a line between points p1 and p2, we directly drive the beam deflection circuitry, which moves the beam directly from point p1 to p2.
 If we do not want to display the line from p1 to p2 but just move there, we blank the beam as we move it.
 To move the beam across the CRT, information about both magnitude and direction is required. This information is generated with the help of a vector graphics generator.
 Fig. 1.2 shows the architecture of a vector display. It consists of a display controller, CPU, display buffer memory and CRT.
 The display controller is connected as an I/O peripheral to the CPU.
 The display buffer stores the computer-produced display list or display program.
 The program contains point and line plotting commands with endpoint coordinates, as well as character plotting commands.
 The display controller interprets the commands and sends digital point coordinates to a vector generator.
 The vector generator then converts the digital coordinate values to analog voltages for the beam deflection circuits, which displace the electron beam onto points on the CRT's screen.
 In this technique the beam is deflected from endpoint to endpoint, hence this technique is also called random scan.
 As the beam strikes the phosphor-coated screen it emits light, but that light decays after a few milliseconds; therefore it is necessary to cycle through the display list to refresh the screen at least 30 times per second to avoid flicker.
 As the display buffer is used to store the display list and is used for refreshing, it is also called the refresh buffer.

Raster scan display

 The display image is stored in the form of 1s and 0s in the refresh buffer.
 The video controller reads this refresh buffer and produces the actual image on the screen.
 It scans one line at a time from top to bottom and then back to the top.

Fig. 1.4: - Raster scan CRT.

 In this method the horizontal and vertical deflection signals are generated to move the beam all over the screen in the pattern shown in fig. 1.4.
 The beam is swept back and forth from left to right.
 When the beam is moved from left to right it is ON.
 When the beam is moved from right to left it is OFF; this process of moving the beam from right to left after completion of a row is known as Horizontal Retrace.
 When the beam reaches the bottom of the screen, it is made OFF and rapidly retraced back to the top left to start again; this process of moving back to the top is known as Vertical Retrace.
 The screen image is maintained by repeatedly scanning the same image. This process is known as Refreshing of the Screen.
 In raster scan displays a special area of memory is dedicated to graphics only. This memory is called the Frame Buffer.
 The frame buffer holds the set of intensity values for all the screen points.
 The intensity values are retrieved from the frame buffer and displayed on the screen one row at a time.
 Each screen point is referred to as a pixel or pel (picture element).
 Each pixel can be specified by its row and column numbers.
 The system can be a simple black and white system or a color system.
 In a simple black and white system each pixel is either ON or OFF, so only one bit per pixel is needed.
 Additional bits are required when color and intensity variations are to be displayed; up to 24 bits per pixel are included in high quality display systems.
 On a black and white system with one bit per pixel the frame buffer is commonly called a Bitmap, and for systems with multiple bits per pixel the frame buffer is often referred to as a Pixmap.

Difference between random scan and raster scan


Electron Beam
Raster scan: the electron beam is swept across the screen, one row at a time, from top to bottom.
Random scan: the electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution
Raster scan: resolution is poor, because the raster system produces zigzag lines that are plotted as discrete point sets.
Random scan: resolution is good, because the system produces smooth line drawings; the CRT beam directly follows the line path.

Picture Definition
Raster scan: picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area.
Random scan: picture definition is stored as a set of line-drawing instructions in a display file.

Realistic Display
Raster scan: the capability of this system to store intensity values for each pixel makes it well suited for the realistic display of scenes containing shadows and color patterns.
Random scan: these systems are designed for line drawing and cannot display realistic shaded scenes.

Drawing an Image
Raster scan: screen points/pixels are used to draw an image.
Random scan: mathematical functions are used to draw an image.
Direct-view storage tubes (DVST)

Fig. 1.7: - Direct-view storage tube.

 In a raster scan display we refresh the screen to maintain the screen image.


 The DVST gives an alternative method of maintaining the screen image.
 The DVST uses a storage grid which stores the picture information as a charge distribution just behind the phosphor-coated screen.
 The DVST consists of two electron guns: a primary gun and a flood gun.
 The primary gun stores the picture pattern and the flood gun maintains the picture display.
 The primary gun emits high speed electrons which strike the storage grid to draw the picture pattern.
 As the electron beam strikes the storage grid at high speed, it knocks out electrons from the storage grid, leaving a net positive charge.
 The knocked-out electrons are attracted towards the collector.
 The net positive charge on the storage grid is nothing but the picture pattern.
 Continuous low speed electrons from the flood gun pass through the control grid and are attracted to the positively charged areas of the storage grid.

 The low speed electrons then penetrate the storage grid and strike the phosphor coating without affecting the positive charge pattern on the storage grid.
 During this process the collector just behind the storage grid smooths out the flow of flood electrons.

Advantage of DVST
 Refreshing of CRT is not required.
 Very complex pictures can be displayed at very high resolution without flicker.
 Flat screen.

Disadvantages of DVST
 They do not display color and are available with only a single level of line intensity.
 Erasing requires removal of the charge on the storage grid, so the erasing and redrawing process takes several seconds.
 Selective erasing of parts of the screen is not possible.
 It cannot be used for dynamic graphics applications, as erasing produces an unpleasant flash over the entire screen.
 It has poor contrast as a result of the comparatively low accelerating potential applied to the flood electrons.
 The performance of the DVST is somewhat inferior to that of the refresh CRT.

Flat Panel Display


 The term flat panel display refers to a class of video devices that have reduced volume, weight and power requirements compared to a CRT.
 As flat panel displays are thinner than CRTs, we can hang them on walls or wear them on our wrists.
 Since we can even write on some flat panel displays, they will soon be available as pocket notepads.
 We can separate flat panel displays into two categories:
1. Emissive displays: - emissive displays (or emitters) are devices that convert electrical energy into light, e.g. plasma panels, thin-film electroluminescent displays and light emitting diodes.
2. Non-emissive displays: - non-emissive displays (or non-emitters) use optical effects to convert sunlight or light from some other source into graphics patterns, e.g. the LCD (Liquid Crystal Display).

Plasma Panel Displays

Fig. 1.8: - Basic design of a plasma-panel display device.


 These are also called gas discharge displays.
 A plasma panel is constructed by filling the region between two glass plates with a mixture of gases that usually includes neon.
 A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal ribbons is built into the other glass panel.
 Firing voltages applied to a pair of horizontal and vertical conductors cause the gas at the intersection of the two conductors to break down into a glowing plasma of electrons and ions.
 Picture definition is stored in a refresh buffer, and the firing voltages are applied to refresh the pixel positions 60 times per second.
 Alternating current methods are used to provide faster application of the firing voltages and thus brighter displays.
 Separation between pixels is provided by the electric field of the conductors.
 One disadvantage of plasma panels is that they were strictly monochromatic devices, showing only one color other than black (like black and white).

Light Emitting Diode (LED)


 In this display a matrix of multi-color light emitting diodes is arranged to form the pixel positions in the display, and the picture definition is stored in a refresh buffer.
 Similar to the scan line refreshing of a CRT, information is read from the refresh buffer and converted to voltage levels that are applied to the diodes to produce the light pattern on the display.

Liquid Crystal Display (LCD)

Fig. 1.10: - Light twisting shutter effect used in design of most LCD.
 It is generally used in small systems such as calculators and portable laptops.
 This non-emissive device produces a picture by passing polarized light from the surroundings or from an internal light source through a liquid crystal material that can be aligned to either block or transmit the light.
 The term liquid crystal refers to the fact that these compounds have a crystalline arrangement of molecules yet flow like a liquid.
 It consists of two glass plates, each with a light polarizer at right angles to the other, sandwiching the liquid crystal material between the plates.
 Rows of horizontal transparent conductors are built into one glass plate, and columns of vertical conductors are put into the other plate.
 The intersection of two conductors defines a pixel position.
 In the ON state, polarized light passing through the material is twisted so that it will pass through the opposite polarizer.
 In the OFF state, it is reflected back towards the source.
 We apply a voltage to the two intersecting conductors to align the molecules so that the light is not twisted.
 This type of flat panel device is referred to as a passive matrix LCD.
 In an active matrix LCD, transistors are used at each (x, y) grid point.
 Transistors cause the crystals to change their state quickly and also control the degree to which the state has been changed.
 A transistor can also serve as a memory for the state until it is changed.
 So transistors keep cells ON at all times, giving a brighter display than if the cells had to be refreshed periodically.

Advantages of LCD display


 Low cost.
 Low weight.
 Small size.
 Low power consumption.

Three dimensional viewing devices


 Graphics monitors that display three dimensional scenes are devised using a technique that reflects a CRT image from a vibrating flexible mirror.
 The vibrating mirror changes its focal length due to the vibration, which is synchronized with the display of an object on the CRT.
 Each point on the object is reflected from the mirror into a spatial position corresponding to the distance of that point from a viewing position.
 A very good example of this system is the GENISCO SPACE GRAPH system, which uses a vibrating mirror to project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system is also capable of showing 2D cross sections at different depths.

Application of 3D viewing devices


 In medicine, to analyze data from ultrasonography.
 In geology, to analyze topological and seismic data.
 In design, for viewing solid objects and 3D viewing of objects.

Stereoscopic and virtual-reality systems

Stereoscopic system

Fig. 1.12: - stereoscopic views.

 Stereoscopic viewing does not produce true three dimensional images, but it produces a 3D effect by presenting a different view to each eye of an observer so that the scene appears to have depth.
 To obtain this we first need two views of the object, generated with viewing directions corresponding to each eye.
 We can construct the two views as computer generated scenes with different viewing positions, or we can use a stereo camera pair to photograph some object or scene.
 When we simultaneously see the left view with the left eye and the right view with the right eye, the two views merge and produce an image which appears to have depth.
 One way to produce a stereoscopic effect is to display each of the two views on alternate refresh cycles of a raster system.
 The screen is viewed through glasses, with each lens designed in such a way that it acts as a rapidly alternating shutter synchronized to block out one of the views.

Virtual-reality

Fig. 1.13: - virtual reality.

 Virtual reality is a system which produces images in such a way that we feel our surroundings are what is set in the display device, when in actuality they are not.
 In virtual reality the user can step into a scene and interact with the environment.
 A headset containing an optical system to generate the stereoscopic views is commonly used in conjunction with interactive input devices to locate and manipulate objects in the scene.
 A sensor in the headset keeps track of the viewer's position, so that the front and back of objects can be seen as the viewer “walks through” and interacts with the display.
 Virtual reality can also be produced with stereoscopic glasses and a video monitor instead of a headset. This provides a low cost virtual reality system.
 A sensor on the display screen tracks the head position and adjusts the image depth accordingly.

Raster graphics systems

Simple raster graphics system

Fig. 1.14: - Architecture of a simple raster graphics system.

 Raster graphics systems have an additional processing unit called the video controller or display controller.
 Here the frame buffer can be anywhere in system memory, and the video controller accesses it to refresh the screen.
 In sophisticated raster systems, other processors are used as co-processors in addition to the video controller to accelerate the system.

Raster graphics system with a fixed portion of the system memory reserved for the frame buffer

Fig. 1.15: - Architecture of a raster graphics system with a fixed portion of the system memory reserved for the frame buffer.

 A fixed area of the system memory is reserved for the frame buffer, and the video controller can directly access that frame buffer memory.
 Frame buffer locations and screen positions are referred to in Cartesian coordinates.
 For many graphics monitors the coordinate origin is defined at the lower left screen corner.
 The screen surface is then represented as the first quadrant of a two dimensional system, with positive x values increasing from left to right and positive y values increasing from bottom to top.

Basic refresh operation of video controller

Fig. 1.16: - Basic video controller refresh operation.

 Two registers, X and Y, are used to store the coordinates of the screen pixels.
 Initially X is set to 0 and Y is set to Ymax.
 The value stored in the frame buffer for this pixel position is retrieved and used to set the intensity of the CRT beam.
 Then the X register is incremented by one.
 This procedure is repeated until X becomes equal to Xmax.
 Then X is reset to 0, Y is decremented by one, and the above procedure is repeated.
 This whole procedure is repeated until Y becomes equal to 0, which completes one refresh cycle. The controller then resets the registers to the top left corner, i.e. X = 0 and Y = Ymax, and the refresh process starts for the next refresh cycle.
 Since the screen must be refreshed at a rate of 60 frames per second, the simple procedure illustrated in the figure cannot be accommodated by typical RAM chips.
 To speed up pixel processing, the video controller retrieves multiple pixel values at a time using more registers and simultaneously refreshes a block of pixels.
 In this way it can speed up refreshing and accommodate refresh rates of 60 frames per second or more.
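The refresh loop described above can be sketched in C as below. This is a minimal illustration only, assuming a small hypothetical frame buffer array and a stand-in setBeamIntensity routine in place of real video hardware and registers.

#include <stdio.h>

#define XMAX 7                 /* illustrative screen width - 1  */
#define YMAX 4                 /* illustrative screen height - 1 */

/* Hypothetical frame buffer: one intensity value per pixel. */
static int framebuffer[YMAX + 1][XMAX + 1];

/* Stand-in for driving the CRT beam intensity at (x, y). */
static void setBeamIntensity(int x, int y, int intensity)
{
    printf("pixel (%d,%d) -> intensity %d\n", x, y, intensity);
}

/* One refresh cycle: the Y register counts down from YMAX to 0,
 * and for each scan line the X register counts up from 0 to XMAX. */
static void refreshCycle(void)
{
    for (int y = YMAX; y >= 0; y--)
        for (int x = 0; x <= XMAX; x++)
            setBeamIntensity(x, y, framebuffer[y][x]);
    /* The registers are now effectively reset to X = 0, Y = YMAX. */
}

int main(void)
{
    framebuffer[2][3] = 1;     /* light one pixel                       */
    refreshCycle();            /* a real controller runs ~60 cycles/sec */
    return 0;
}

In practice, as the notes point out, a real controller fetches a block of pixel values per memory access rather than one value at a time.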
Raster-graphics system with a display processor
Fig. 1.17: - Architecture of a raster-graphics system with a display processor.

 One way of designing a raster system is to have a separate display coprocessor.


 The purpose of the display processor is to free the CPU from graphics work.
 Display processors have their own separate memory for fast operation.
 The main work of the display processor is digitizing a picture definition given in an application program into a set of pixel intensity values for storage in the frame buffer.
 This digitization process is called scan conversion.
 The display processor also performs many other functions, such as generating various line styles (dashed, dotted, or solid), displaying color areas, and performing transformations for manipulating objects.
 It also interfaces with interactive input devices such as a mouse.
 To reduce memory requirements in raster scan systems, methods have been devised for organizing the frame buffer as a linked list and encoding the intensity information.
 One way to do this is to store each scan line as a set of integer pairs, where one number indicates the number of adjacent pixels on the scan line that have the same intensity and the second stores the intensity value; this technique is called run-length encoding (see the sketch after this list).
 A similar approach, when pixel intensities change linearly, is to encode the raster as a set of rectangular areas (cell encoding).
 A disadvantage of encoding is that when run lengths are small it can require more memory than the original frame buffer.
 It is also difficult for the display controller to process the raster when many short runs are involved.
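As a rough illustration of run-length encoding (not from the text), the following C sketch packs one scan line into (run length, intensity) pairs; the array names and sizes are invented for the example.

#include <stdio.h>

/* Encode one scan line as (run length, intensity) pairs.
 * Returns the number of pairs written into runs[][2]. */
static int runLengthEncode(const int *scanline, int width, int runs[][2])
{
    int npairs = 0;
    int i = 0;
    while (i < width) {
        int intensity = scanline[i];
        int count = 1;
        while (i + count < width && scanline[i + count] == intensity)
            count++;                   /* extend the current run          */
        runs[npairs][0] = count;       /* number of adjacent equal pixels */
        runs[npairs][1] = intensity;   /* the shared intensity value      */
        npairs++;
        i += count;
    }
    return npairs;
}

int main(void)
{
    int line[10] = {0, 0, 0, 1, 1, 7, 7, 7, 7, 0};
    int runs[10][2];                   /* worst case: one pair per pixel  */
    int n = runLengthEncode(line, 10, runs);
    for (int k = 0; k < n; k++)
        printf("%d pixels at intensity %d\n", runs[k][0], runs[k][1]);
    return 0;
}

The worst-case array size illustrates the disadvantage mentioned above: with many short runs, the encoding can exceed the raw frame buffer size.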

Random- scan system

Fig. 1.18: - Architecture of a simple random-scan system.

 An application program is input and stored in the system memory along with a graphics package.
 Graphics commands in the application program are translated by the graphics package into a display file stored in the system memory.
 This display file is used by the display processor to refresh the screen.
 The display processor cycles through each command in the display file once during every refresh cycle.
 Sometimes the display processor in a random scan system is also known as a display processing unit or a graphics controller.
 In this system pictures are drawn by directing the electron beam along the component lines of the picture.
 Lines are defined by their coordinate endpoints.
 These input coordinate values are converted to x and y deflection voltages.
 A scene is then drawn one line at a time.

Graphics input devices

Keyboards
 Keyboards are used for entering text strings. They are efficient devices for inputting non-graphic data such as picture labels.
 Cursor control keys and function keys are common features on general purpose keyboards.
 Many everyday graphics uses of the keyboard include issuing commands and controlling applications.

Mouse
 A mouse is a small hand-held box used to position the screen cursor.
 A wheel, roller or optical sensor directs the pointer according to the movement of the mouse.
 Up to three buttons are placed on top of the mouse for signaling the execution of some operation.
 Nowadays more advanced mice are available which are very useful in graphics applications, for example the Z mouse.

Trackball and Spaceball


 A trackball is a ball that can be rotated with the fingers or palm of the hand to produce cursor movement. Potentiometers attached to the ball measure the amount and direction of rotation.
 Trackballs are often mounted on keyboards or a Z mouse.
 A spaceball provides six degrees of freedom, i.e. three-dimensional input.
 In a spaceball, strain gauges measure the amount of pressure applied to the ball to provide input for spatial positioning and orientation as the ball is pushed or pulled in various directions.
 Spaceballs are used for 3D positioning and selection operations in virtual reality systems, modeling, animation, CAD and other applications.

Joysticks
 A joystick consists of a small vertical lever mounted on a base that is used to steer the screen cursor around.
 Most joysticks select screen positions according to the actual movement of the stick (lever).
 Some joysticks work based on the pressure applied to the stick.
 Sometimes the joystick is mounted on a keyboard, and sometimes it is used alone.
 Movement of the stick defines the movement of the cursor.
 In a pressure-sensitive stick, the pressure applied to the stick decides the movement of the cursor. This pressure is measured using strain gauges.
 These pressure-sensitive joysticks are also called isometric joysticks; they are non-movable sticks.

Data glove
 A data glove is used to grasp virtual objects.
 The glove is constructed with a series of sensors that detect hand and finger motions.
 Electromagnetic coupling between transmitter and receiver antennas is used to provide the position and orientation of the hand.
 The transmitter and receiver antennas can each be structured as a set of three mutually perpendicular coils, forming a 3D Cartesian coordinate system.
 Input from the glove can be used to position or manipulate objects in a virtual scene.

Digitizer
 A digitizer is a common device for drawing, painting or interactively selecting coordinate positions on an object.
 One type of digitizer is the graphics tablet, which inputs two dimensional coordinates by activating a hand cursor or stylus at selected positions on a flat surface.
 A stylus is a flat pencil-shaped device that is pointed at positions on the tablet.

Image Scanner
 An image scanner scans drawings, graphs, color and black-and-white photos or text, which can be stored for computer processing, by passing an optical scanning mechanism over the information to be stored.
 Once we have an internal representation of a picture we can apply transformations to it.
 We can also apply various image processing methods to modify the picture.
 For scanned text we can apply modification operations.

Touch Panels
 As the name suggests, touch panels allow displayed objects or screen positions to be selected with the touch of a finger.
 A typical application is selecting processing options shown as graphical icons.
 Some systems, such as plasma panels, are designed with touch screens.
 Other systems can be adapted for touch input by fitting a transparent touch-sensing mechanism over the screen.
 Touch input can be recorded with the following methods:
1. Optical methods
2. Electrical methods
3. Acoustical methods

Optical method
 An optical touch panel employs a line of infrared LEDs along one vertical and one horizontal edge.
 The edges opposite the edges containing the LEDs contain light detectors.
 When we touch a particular position, the light paths along that line break, and from the broken light lines the coordinate values are measured.
 In case two lines are cut, the average of both pixel positions is taken.
 The LEDs operate at infrared frequencies, so the light is not visible to the user.

Electrical method
 An electrical touch panel is constructed with two transparent plates separated by a small distance.
 One plate is coated with a conducting material and the other is coated with a resistive material.
 When the outer plate is touched, it comes into contact with the inner plate.
 When both plates touch, a voltage drop is created across the resistive plate, which is converted into the coordinate values of the selected position.

Acoustical method
 In an acoustical touch panel, high frequency sound waves are generated in the horizontal and vertical directions across a glass plate.
 When we touch the screen, the waves along that line are reflected from the finger.
 These reflected waves reach the transmitter position again, and the time difference between sending and receiving is measured and converted into coordinate values.

Light pens
 Light pens are pencil-shaped devices used to select positions by detecting the light coming from points on the CRT screen.
 An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electronic pulse that causes the coordinate position of the electron beam to be recorded.

Voice systems
 Voice systems are used to accept voice commands in some graphics workstations.
 They are used to initiate graphics operations.
 A voice system matches input against a predefined dictionary of words and phrases.
 The dictionary is set up for a particular operator by recording his or her voice.
 Each word is spoken several times, and the system analyzes the word and establishes a frequency pattern for that word along with the corresponding function to be performed.
 When the operator speaks a command, it is matched against the predefined dictionary and the desired action is performed.
Coordinate representations
 With few exceptions, general-purpose graphics packages are designed to be used with Cartesian coordinate specifications.
 If coordinate values for a picture are specified in some other reference frame, they must be converted to Cartesian coordinates before being given as input to the graphics package.
 Special-purpose packages may allow the use of other coordinate frames that suit the application.
 In general, several different Cartesian reference frames are used to construct and display a scene.
 We construct the shape of individual objects in separate coordinate systems called modeling coordinates (sometimes local coordinates or master coordinates).
 Once individual object shapes have been specified, we place the objects into appropriate positions in the scene using world coordinates.
 Finally, the world-coordinate description of the scene is transferred to one or more output device reference frames for display. These display coordinate systems are referred to as “device coordinates” or “screen coordinates”.
 Generally a graphics system first converts world-coordinate positions to normalized device coordinates, in the range from 0 to 1, before the final conversion to specific device coordinates.
 An initial modeling coordinate position (xmc, ymc) is thus transferred to a device coordinate position (xdc, ydc) with the sequence (xmc, ymc) → (xwc, ywc) → (xnc, ync) → (xdc, ydc).
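The sequence above can be illustrated as a chain of simple conversion functions. This is a hedged sketch only: the function names, window limits, and the 640 x 480 screen size are assumptions invented for the example.

#include <stdio.h>

/* Modeling -> world: place the object at scene position (ox, oy). */
static void modelToWorld(double xmc, double ymc, double ox, double oy,
                         double *xwc, double *ywc)
{
    *xwc = xmc + ox;
    *ywc = ymc + oy;
}

/* World -> normalized device coordinates in the range [0, 1], for a
 * world window from (wxmin, wymin) to (wxmax, wymax). */
static void worldToNormalized(double xwc, double ywc,
                              double wxmin, double wymin,
                              double wxmax, double wymax,
                              double *xnc, double *ync)
{
    *xnc = (xwc - wxmin) / (wxmax - wxmin);
    *ync = (ywc - wymin) / (wymax - wymin);
}

/* Normalized -> integer device (screen) coordinates. */
static void normalizedToDevice(double xnc, double ync,
                               int width, int height, int *xdc, int *ydc)
{
    *xdc = (int)(xnc * (width - 1) + 0.5);   /* round to nearest pixel */
    *ydc = (int)(ync * (height - 1) + 0.5);
}

int main(void)
{
    double xwc, ywc, xnc, ync;
    int xdc, ydc;
    modelToWorld(2.0, 3.0, 10.0, 20.0, &xwc, &ywc);
    worldToNormalized(xwc, ywc, 0.0, 0.0, 100.0, 100.0, &xnc, &ync);
    normalizedToDevice(xnc, ync, 640, 480, &xdc, &ydc);
    printf("device position: (%d, %d)\n", xdc, ydc);
    return 0;
}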

Graphic Function
 A general purpose graphics package provides the user with a variety of functions for creating and manipulating pictures.
 The basic building blocks for pictures are referred to as output primitives. They include character strings and geometric entities such as points, straight lines, curved lines, filled areas, and shapes defined with arrays of color points.
 Input functions are used to control and process the various input devices such as mouse, tablet, etc.
 Control operations are used for controlling and housekeeping tasks such as clearing the display screen.
 All such built-in functions which we can use for our purposes are known as graphics functions.

Points and Lines


 Point plotting is done by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use.
 Line drawing is done by calculating intermediate positions along the line path between two specified endpoint positions.
 The output device is then directed to fill in those positions between the endpoints with some color.
 For some devices, such as a pen plotter or random scan display, a straight line can be drawn smoothly from one endpoint to the other.
 Digital devices display a straight line segment by plotting discrete points between the two endpoints.
 Discrete coordinate positions along the line path are calculated from the equation of the line.
 For a raster video display, the line intensity is loaded into the frame buffer at the corresponding pixel positions.
 Reading from the frame buffer, the video controller then plots the screen pixels.
 Screen locations are referenced with integer values, so plotted positions may only approximate actual line positions between the two specified endpoints.
 For example, the line position (12.36, 23.87) would be converted to pixel position (12, 24).
 This rounding of coordinate values to integers causes lines to be displayed with a stair-step appearance (“the jaggies”), as represented in fig. 2.1.

Fig. 2.1: - Stair step effect produced when line is generated as a series of pixel positions.
 The stair-step shape is noticeable in low resolution systems, and we can improve its appearance somewhat by displaying lines on high resolution systems.
 More effective techniques for smoothing raster lines are based on adjusting pixel intensities along the line paths.
 For the raster, device-level algorithms discussed here, object positions are specified directly in integer device coordinates.
 Pixel positions are referenced by scan-line number and column number, as illustrated in the following figure.

Fig. 2.2: - Pixel positions referenced by scan-line number and column number.

To load an intensity value into the frame buffer at position (𝑥, 𝑦), we assume we have the procedure 𝑠𝑒𝑡𝑝𝑖𝑥𝑒𝑙(𝑥, 𝑦); similarly, to retrieve the current frame buffer intensity we assume we have the procedure 𝑔𝑒𝑡𝑝𝑖𝑥𝑒𝑙(𝑥, 𝑦).

Line Drawing Algorithms


The Cartesian slope-intercept equation for a straight line is 𝑦 = 𝑚𝑥 + 𝑏, with 𝑚 representing the slope and 𝑏 the intercept. The two endpoints of the line are given, say (𝑥1, 𝑦1) and (𝑥2, 𝑦2).

Fig. 2.3: - Line path between endpoint positions.

 We can determine the value of the slope m from the equation:
𝑚 = (𝑦2 − 𝑦1)/(𝑥2 − 𝑥1)


 We can determine the value of the intercept b from the equation:
𝑏 = 𝑦1 − 𝑚 ∗ 𝑥1
 For a given interval ∆𝑥 along a line, we can compute the corresponding 𝑦 interval ∆𝑦 as:
∆𝑦 = 𝑚 ∗ ∆𝑥
 Similarly for ∆𝑥:
∆𝑥 = ∆𝑦/𝑚
 For lines with slope |𝑚| < 1, ∆𝑥 can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection voltage is then set proportional to ∆𝑦 as calculated from the above equation.
 For lines with slope |𝑚| > 1, ∆𝑦 can be set proportional to a small vertical deflection voltage, and the corresponding horizontal deflection voltage is then set proportional to ∆𝑥 as calculated from the above equation.
 For lines with slope 𝑚 = 1, ∆𝑥 = ∆𝑦 and the horizontal and vertical deflection voltages are equal.

DDA Algorithm
 The digital differential analyzer (DDA) is a scan conversion line drawing algorithm based on calculating either ∆𝑦 or ∆𝑥 using the above equations.
 We sample the line at unit intervals in one coordinate and find the corresponding integer values nearest the line path for the other coordinate.
 Consider first a line with positive slope less than or equal to 1:
We sample at unit 𝑥 intervals (∆𝑥 = 1) and calculate each successive 𝑦 value as follows:
𝑦 = 𝑚 ∗ 𝑥 + 𝑏
𝑦1 = 𝑚 ∗ (𝑥 + 1) + 𝑏
In general, 𝑦𝑘 = 𝑚 ∗ (𝑥 + 𝑘) + 𝑏, and
𝑦𝑘+1 = 𝑚 ∗ (𝑥 + 𝑘 + 1) + 𝑏
Now write these equations in the form:
𝑦𝑘+1 − 𝑦𝑘 = (𝑚 ∗ (𝑥 + 𝑘 + 1) + 𝑏) − (𝑚 ∗ (𝑥 + 𝑘) + 𝑏)
𝑦𝑘+1 = 𝑦𝑘 + 𝑚
This is computed quickly, since addition is fast compared to multiplication.
 In the above equation 𝑘 takes integer values starting from 1, increasing by 1 until the final endpoint is reached.
 As 𝑚 can be any real number between 0 and 1, the calculated 𝑦 values must be rounded to the nearest integer.
 Now consider a line with positive slope greater than 1:
We interchange the roles of 𝑥 and 𝑦, that is, we sample at unit 𝑦 intervals (∆𝑦 = 1) and calculate each succeeding 𝑥 value as:
𝑥 = (𝑦 − 𝑏)/𝑚
𝑥1 = ((𝑦 + 1) − 𝑏)/𝑚
In general, 𝑥𝑘 = ((𝑦 + 𝑘) − 𝑏)/𝑚, and
𝑥𝑘+1 = ((𝑦 + 𝑘 + 1) − 𝑏)/𝑚
Now write these equations in the form:
𝑥𝑘+1 − 𝑥𝑘 = (((𝑦 + 𝑘 + 1) − 𝑏)/𝑚) − (((𝑦 + 𝑘) − 𝑏)/𝑚)
𝑥𝑘+1 = 𝑥𝑘 + 1/𝑚
 Both of the above equations are based on the assumption that lines are processed from the left endpoint to the right endpoint.
 If we process a line from the right endpoint to the left endpoint, then:
If ∆𝑥 = −1, the equation becomes 𝑦𝑘+1 = 𝑦𝑘 − 𝑚
If ∆𝑦 = −1, the equation becomes 𝑥𝑘+1 = 𝑥𝑘 − 1/𝑚
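The derivation above leads directly to the following C sketch of the DDA algorithm. setPixel is a hypothetical stand-in for a frame buffer write; sampling along the axis of greater change covers both slope cases and both drawing directions.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Hypothetical stand-in for writing a pixel into the frame buffer. */
static void setPixel(int x, int y)
{
    printf("(%d, %d)\n", x, y);
}

static int roundToInt(double v) { return (int)floor(v + 0.5); }

/* DDA line from (x1, y1) to (x2, y2). */
static void ddaLine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;
    /* Sample along the coordinate of greater change:
     * |m| <= 1 -> unit steps in x, |m| > 1 -> unit steps in y. */
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    if (steps == 0) {                   /* degenerate: endpoints equal */
        setPixel(x1, y1);
        return;
    }
    double xInc = (double)dx / steps;   /* +-1 or +-1/m */
    double yInc = (double)dy / steps;   /* +-m or +-1   */
    double x = x1, y = y1;

    setPixel(x1, y1);
    for (int k = 0; k < steps; k++) {
        x += xInc;                      /* x(k+1) = x(k) + 1/m, or x + 1 */
        y += yInc;                      /* y(k+1) = y(k) + m,   or y + 1 */
        setPixel(roundToInt(x), roundToInt(y));
    }
}

int main(void)
{
    ddaLine(2, 1, 10, 5);               /* a slope < 1 example */
    return 0;
}

The floating point additions and rounding in the loop are exactly the source of the speed and accuracy trade-offs listed next.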

Advantages of DDA algorithm


 It is a faster method than directly using the line equation 𝑦 = 𝑚𝑥 + 𝑏 at each step.
 It is a simple algorithm.
Disadvantages of DDA algorithm
 Floating point arithmetic is time consuming.
 It has poor endpoint accuracy.

Bresenham’s Line Algorithm


 Bresenham developed an accurate and efficient raster line-generating algorithm that scan converts lines using only incremental integer calculations, and that can be adapted to display circles and other curves.
 The figures below show sections of a display screen where straight line segments are to be drawn.

Fig. 2.4: - Section of a display screen where a straight line segment is to be plotted, starting from the pixel at column 10 on scan line 11.

Fig. 2.5: - Section of a display screen where a negative slope line segment is to be plotted, starting from the pixel at column 50 on scan line 50.
 The vertical axes show scan-line positions and the horizontal axes identify pixel columns.
 Sampling at unit 𝑥 intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step.
 To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
 Pixel positions along a line path are then determined by sampling at unit 𝑥 intervals.
 Starting from the left endpoint (𝑥0 , 𝑦0 ) of a given line, we step to each successive column and plot the pixel whose scan-line 𝑦 value is closest to the line path.
 Assuming we have determined that the pixel at (𝑥𝑘 , 𝑦𝑘 ) is to be displayed, we next need to decide which pixel to plot in column 𝑥𝑘 + 1.
 Our choices are the pixels at positions (𝑥𝑘 + 1, 𝑦𝑘 ) and (𝑥𝑘 + 1, 𝑦𝑘 + 1).
 Let us see the mathematical calculation used to decide which pixel position to light up.
 We know that equation of line is:
𝑦 = 𝑚𝑥 + 𝑏
Now for position 𝑥𝑘 + 1.
𝑦 = 𝑚(𝑥𝑘 + 1) + 𝑏
 Now calculate the distance between the actual line's 𝑦 value and the lower pixel as 𝑑1, and the distance between the actual line's 𝑦 value and the upper pixel as 𝑑2.
𝑑1 = 𝑦 − 𝑦𝑘
𝑑1 = 𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 .........................................................................................................................................(1)
𝑑2 = (𝑦𝑘 + 1) − 𝑦
𝑑2 = (𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏 ..............................................................................................................................(2)
 Now calculate 𝑑1 − 𝑑2 from equations (1) and (2):
𝑑1 − 𝑑2 = (𝑦 − 𝑦𝑘) − ((𝑦𝑘 + 1) − 𝑦)
𝑑1 − 𝑑2 = {𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 } − {(𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏}
𝑑1 − 𝑑2 = {𝑚𝑥𝑘 + 𝑚 + 𝑏 − 𝑦𝑘 } − {𝑦𝑘 + 1 − 𝑚𝑥𝑘 − 𝑚 − 𝑏}
𝑑1 − 𝑑2 = 2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1 ..................................................................................................................(3)
 Now substitute 𝑚 = ∆𝑦/∆𝑥 in equation (3):
𝑑1 − 𝑑2 = 2(∆𝑦/∆𝑥)(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1 .......................................................................................................(4)
 Now the decision parameter 𝑝𝑘 for the 𝑘th step in the line algorithm is given by:
𝑝𝑘 = ∆𝑥(𝑑1 − 𝑑2)
𝑝𝑘 = ∆𝑥(2∆𝑦/∆𝑥(𝑥𝑘 + 1) – 2𝑦𝑘 + 2𝑏 – 1)
𝑝𝑘 = 2∆𝑦𝑥𝑘 + 2∆𝑦 − 2∆𝑥𝑦𝑘 + 2∆𝑥𝑏 − ∆𝑥
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥 ……………………………………………………….………………………(5)
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 𝐶 (𝑊ℎ𝑒𝑟𝑒 𝐶𝑜𝑛𝑠𝑡𝑎𝑛𝑡 𝐶 = 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥) ...................................(6)
 The sign of 𝑝𝑘 is the same as the sign of 𝑑1 − 𝑑2, since ∆𝑥 > 0 for our example.
 The parameter 𝐶 is a constant independent of pixel position, and it is eliminated in the recursive calculation of 𝑝𝑘.
 Now, if 𝑝𝑘 is negative we plot the lower pixel; otherwise we plot the upper pixel.
 Successive decision parameters are obtained using incremental integer calculations as:
𝑝𝑘+1 = 2∆𝑦𝑥𝑘+1 − 2∆𝑥𝑦𝑘+1 + 𝐶
 Now subtract 𝑝𝑘 from 𝑝𝑘+1:
𝑝𝑘+1 − 𝑝𝑘 = 2∆𝑦𝑥𝑘+1 − 2∆𝑥𝑦𝑘+1 + 𝐶 − 2∆𝑦𝑥𝑘 + 2∆𝑥𝑦𝑘 − 𝐶
𝑝𝑘+1 − 𝑝𝑘 = 2∆𝑦(𝑥𝑘+1 − 𝑥𝑘) − 2∆𝑥(𝑦𝑘+1 − 𝑦𝑘)
But 𝑥𝑘+1 = 𝑥𝑘 + 1, so that 𝑥𝑘+1 − 𝑥𝑘 = 1, and therefore
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦 − 2∆𝑥(𝑦𝑘+1 − 𝑦𝑘)
 Here the term 𝑦𝑘+1 − 𝑦𝑘 is either 0 or 1, depending on the sign of parameter 𝑝𝑘.
 This recursive calculation of decision parameters is performed at each integer 𝑥
position starting at the left coordinate endpoint of the line.
 The first decision parameter 𝑝0 is calculated using equation (5); the first time, we need to take the constant part into account, so:
𝑝𝑘 = 2∆𝑦𝑥𝑘 − 2∆𝑥𝑦𝑘 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑏 − ∆𝑥
Now 𝑆𝑢𝑏𝑠𝑡𝑖𝑡𝑢𝑡𝑒 𝑏 = 𝑦0 – 𝑚𝑥0
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − 𝑚𝑥0 ) − ∆x
Now Substitute 𝑚 = ∆𝑦/𝛥𝑥
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥(𝑦0 − (∆𝑦/∆𝑥)𝑥0) − ∆x
𝑝0 = 2∆𝑦𝑥0 − 2∆𝑥𝑦0 + 2∆𝑦 + 2∆𝑥𝑦0 − 2∆𝑦𝑥0 − ∆x
𝑝0 = 2∆𝑦 − ∆x
 Let’s see Bresenham’s line drawing algorithm for |𝑚| < 1
1. Input the two line endpoints and store the left endpoint in (𝑥0 , 𝑦0 ).
2. Load (𝑥0 , 𝑦0 ) into the frame buffer; that is, plot the first point.
3. Calculate constants ∆𝑥, ∆𝑦, 2∆𝑦, and 2∆𝑦 − 2∆𝑥, and obtain the starting
value for the decision parameter as

𝑝0 = 2∆𝑦 − ∆𝑥
4. At each 𝑥𝑘 along the line, starting at 𝑘 = 0, perform the following test:
If 𝑝𝑘 < 0, the next point to plot is (𝑥𝑘 + 1, 𝑦𝑘 ) and
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦
Otherwise, the next point to plot is (𝑥𝑘 + 1, 𝑦𝑘 + 1) and
𝑝𝑘+1 = 𝑝𝑘 + 2∆𝑦 − 2∆𝑥
5. Repeat step-4 ∆𝑥 times.
 Bresenham's algorithm is generalized to lines with arbitrary slope by considering the symmetry between the various octants and quadrants of the 𝑥𝑦 plane.
 For lines with positive slope greater than 1, we interchange the roles of the 𝑥 and 𝑦 directions.
 We can also revise the algorithm to draw a line from the right endpoint to the left endpoint; then both 𝑥 and 𝑦 decrease as we step from right to left.
 When 𝑑1 − 𝑑2 = 0 we can choose either the lower or the upper pixel, but once we choose the lower one we should choose the lower pixel in all such cases for that line, and likewise if we choose the upper one.
 For negative slopes the procedure is similar, except that now one coordinate decreases as the other increases.
 Special cases are handled separately: horizontal lines (∆𝑦 = 0), vertical lines (∆𝑥 = 0) and diagonal lines with |∆𝑥| = |∆𝑦| can each be loaded directly into the frame buffer without processing them through the line plotting algorithm.
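For the basic case of slope 0 ≤ m < 1, processed left to right, the five steps above can be sketched in C as follows; setPixel is again a hypothetical stand-in, and the remaining slope cases follow from the symmetry arguments just described.

#include <stdio.h>

static void setPixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Bresenham's line algorithm for 0 <= m < 1, left endpoint first. */
static void bresenhamLine(int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;                 /* step 3: p0 = 2*dy - dx    */
    int twoDy = 2 * dy;
    int twoDyMinusDx = 2 * (dy - dx);
    int x = x0, y = y0;

    setPixel(x, y);                      /* step 2: plot first point  */
    while (x < xEnd) {                   /* step 5: repeat dx times   */
        x++;
        if (p < 0) {
            p += twoDy;                  /* keep y:     p += 2*dy          */
        } else {
            y++;                         /* choose y+1                     */
            p += twoDyMinusDx;           /* p += 2*dy - 2*dx               */
        }
        setPixel(x, y);                  /* step 4: plot the chosen pixel  */
    }
}

int main(void)
{
    bresenhamLine(20, 10, 30, 18);       /* dx = 10, dy = 8, m = 0.8  */
    return 0;
}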

Circle
 A circle is defined as the set of points that are all at a given distance r from a center position say (𝑥𝑐 , 𝑦𝑐 ).
Properties of Circle
 The distance relationship is expressed by the Pythagorean theorem in Cartesian
coordinates as:
(𝑥 − 𝑥𝑐)2 + (𝑦 − 𝑦𝑐)2 = 𝑟2
 We could use this equation to calculate circular boundary points by incrementing 𝑥 by 1 at every step from 𝑥𝑐 − 𝑟 to 𝑥𝑐 + 𝑟 and calculating the corresponding 𝑦 values at each position as:
(𝑥 − 𝑥𝑐)2 + (𝑦 − 𝑦𝑐)2 = 𝑟2
(𝑦 − 𝑦𝑐)2 = 𝑟2 − (𝑥 − 𝑥𝑐)2
𝑦 − 𝑦𝑐 = ±√(𝑟2 − (𝑥 − 𝑥𝑐)2)
𝑦 = 𝑦𝑐 ± √(𝑟2 − (𝑥 − 𝑥𝑐)2)
 But this is not the best method for generating a circle, because it requires a large number of calculations, which take more time to execute.
 Also, the spacing between the plotted pixel positions is not uniform, as shown in the figure below.

Fig. 2.8: - Positive half of a circle, showing non-uniform spacing between calculated pixel positions.
 We can adjust the spacing by stepping through 𝑦 values and calculating 𝑥 values whenever the absolute value of the slope of the circle is greater than 1, but this increases the computation and processing required.
 Another way to eliminate the non-uniform spacing is to draw the circle using polar coordinates 𝑟 and 𝜃.
 Calculating the circle boundary using the polar equations gives the pair of equations:
𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃
𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃
 When the display is produced using these equations with a fixed angular step size, the circle is plotted with uniform spacing.
 The step size ∆𝜃 is chosen according to the application and the display device.
 For a more continuous boundary on a raster display we can set the step size at 1/𝑟. This plots pixel positions that are approximately one unit apart.
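As a rough sketch (not from the text), the two direct methods can be written in C as below; setPixel is a hypothetical stand-in and PI is defined locally. Neither method is as efficient as the midpoint approach developed next.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

static void setPixel(int x, int y) { printf("(%d, %d)\n", x, y); }

static int roundToInt(double v) { return (int)floor(v + 0.5); }

/* Explicit equation: y = yc +- sqrt(r^2 - (x - xc)^2).
 * Spacing becomes non-uniform where the slope exceeds 1. */
static void circleByEquation(int xc, int yc, int r)
{
    for (int x = xc - r; x <= xc + r; x++) {
        double dy = sqrt((double)r * r - (double)(x - xc) * (x - xc));
        setPixel(x, roundToInt(yc + dy));   /* upper half */
        setPixel(x, roundToInt(yc - dy));   /* lower half */
    }
}

/* Polar form with fixed angular step 1/r: roughly unit spacing. */
static void circleByPolar(int xc, int yc, int r)
{
    double dtheta = 1.0 / r;
    for (double theta = 0.0; theta < 2.0 * PI; theta += dtheta)
        setPixel(roundToInt(xc + r * cos(theta)),
                 roundToInt(yc + r * sin(theta)));
}

int main(void)
{
    circleByEquation(0, 0, 8);
    circleByPolar(0, 0, 8);
    return 0;
}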
 Computation can be reduced by considering the symmetry property of circles: the shape of the circle is similar in each quadrant.

𝑦 axis and similarly for third and fourth quadrant from second and first respectively using
 We can obtain pixel position in second quadrant from first quadrant using reflection about

reflection about 𝑥 axis.


 We can take one step further and note that there is also symmetry between octants.
Circle sections in adjacent octant within one quadrant are symmetric with respect to the

This symmetry condition is shown in figure below where point (𝑥, 𝑦) on one circle sector is
450 line dividing the two octants.

mapped in other seven sector of circle.

(-Y, X) (Y, X)
 Taking advantage of this symmetry property of circle we can generate all pixel
Fig. 2.9: - symmetry of circle.

position on boundary of circle by calculating only one sector from 𝑥 = 0 to 𝑥 = 𝑦.


 Determining pixel positions along the circumference of a circle using either of the two equations shown above still requires a large amount of computation.
 More efficient circle algorithms are based on incremental calculation of decision parameters, as in the Bresenham line algorithm.
 Bresenham's line algorithm can be adapted to circle generation by setting up a decision parameter for finding the closest pixel to the circumference at each sampling step.
 The Cartesian circle equation is nonlinear, so square root evaluations would be required to compute pixel distances from the circular path.

 Bresenham's circle algorithm avoids these square root calculations by comparing the squares of the pixel separation distances.
 The midpoint method uses a direct distance comparison: it tests the midpoint between two pixels to determine whether this midpoint is inside or outside the circle boundary.
 This method is easily applied to other conics as well.
 The midpoint approach generates the same pixel positions as the Bresenham circle algorithm.
 The error involved in locating pixel positions along any conic section using the midpoint test is limited to one-half the pixel separation.

Midpoint Circle Algorithm


 As in the raster line algorithm, we sample at unit intervals and determine the closest pixel position to the specified circle path at each step.
 We are given a radius 𝑟 and center (𝑥𝑐 , 𝑦𝑐 ).
 We first set up our algorithm to calculate circular path coordinates for a center (0, 0), and then transfer each calculated pixel position to the center (𝑥𝑐 , 𝑦𝑐 ) by adding 𝑥𝑐 to 𝑥 and 𝑦𝑐 to 𝑦.
 Along the circle section from 𝑥 = 0 to 𝑥 = 𝑦 in the first quadrant, the slope of the curve varies from 0 to −1, so we can take unit steps in the positive 𝑥 direction over this octant and use a decision parameter to determine which of the two possible 𝑦 positions is closer to the circular path.
 Positions in the other seven octants are then obtained by symmetry.
 For the decision parameter we use the circle function:
𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥, 𝑦) = 𝑥2 + 𝑦2 − 𝑟2
 Any point on the boundary satisfies 𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥, 𝑦) = 0; if the point is inside the circle the function value is negative, and if the point is outside the circle the function value is positive. This can be summarized as:
𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥, 𝑦) < 0 if (𝑥, 𝑦) is inside the circle boundary
𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥, 𝑦) = 0 if (𝑥, 𝑦) is on the circle boundary
𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥, 𝑦) > 0 if (𝑥, 𝑦) is outside the circle boundary
 We evaluate this function at the midpoint positions between pixels near the circular path at each sampling step, and we set up an incremental calculation for this function as we did in the line algorithm.
 The figure below shows the midpoint between the two candidate pixels at sampling position 𝑥𝑘 + 1.

Fig. 2.10: - Midpoint between candidate pixels at sampling position 𝑥𝑘 + 1 along a circle path (the circle is 𝑥2 + 𝑦2 − 𝑟2 = 0).
 Assuming we have just plotted the pixel at (𝑥𝑘 , 𝑦𝑘 ), we next need to determine whether the pixel at position (𝑥𝑘 + 1, 𝑦𝑘 ) or the one at position (𝑥𝑘 + 1, 𝑦𝑘 − 1) is closer to the circle boundary.
 To find which pixel is closer, we use a decision parameter evaluated at the midpoint between the two candidate pixels:
𝑝𝑘 = 𝑓𝑐𝑖𝑟𝑐𝑙𝑒 (𝑥𝑘 + 1, 𝑦𝑘 − 1/2)
𝑝𝑘 = (𝑥𝑘 + 1)2 + (𝑦𝑘 − 1/2)2 − 𝑟2
 If 𝑝𝑘 < 0, this midpoint is inside the circle and the pixel on scan line 𝑦𝑘 is closer to the circle boundary. Otherwise, the midpoint is outside or on the boundary and we select the pixel on scan line 𝑦𝑘 − 1.
Algorithm for Midpoint Circle Generation
1. Input radius 𝑟 and circle center (𝑥𝑐 , 𝑦𝑐 ), and obtain the first point on the
circumference of a circle centered on the origin as
(𝑥0 , 𝑦0 ) = (0, 𝑟)
2. Calculate the initial value of the decision parameter as
𝑝0 = 5/4 − 𝑟
(which is rounded to 𝑝0 = 1 − 𝑟 for an integer radius 𝑟)
3. At each 𝑥𝑘 position, starting at 𝑘 = 0, perform the following test:
If 𝑝𝑘 < 0, the next point along the circle centered on (0, 0) is (𝑥𝑘 + 1, 𝑦𝑘 ) &
𝑝𝑘+1 = 𝑝𝑘 + 2𝑥𝑘+1 + 1
Otherwise, the next point along the circle is (𝑥𝑘 + 1, 𝑦𝑘 − 1) &
𝑝𝑘+1 = 𝑝𝑘 + 2𝑥𝑘+1 + 1 − 2𝑦𝑘+1
Where 2𝑥𝑘+1 = 2𝑥𝑘 + 2, & 2𝑦𝑘+1 = 2𝑦𝑘 − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (𝑥, 𝑦) onto the circular path centered on
(𝑥𝑐 , 𝑦𝑐 ) and plot the coordinate values:
𝑥 = 𝑥 + 𝑥𝑐 , 𝑦 = 𝑦 + 𝑦𝑐
6. Repeat steps 3 through 5 until 𝑥 ≥ 𝑦.
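The complete procedure translates into the following C sketch; plotCirclePoints applies the eight-way symmetry of steps 4 and 5, setPixel is a hypothetical stand-in, and the initial decision parameter uses the rounded value 1 − r.

#include <stdio.h>

static void setPixel(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Steps 4-5: map (x, y) into all eight octants around (xc, yc). */
static void plotCirclePoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

static void midpointCircle(int xc, int yc, int r)
{
    int x = 0, y = r;            /* step 1: first point (0, r)       */
    int p = 1 - r;               /* step 2: p0 = 5/4 - r, rounded    */

    plotCirclePoints(xc, yc, x, y);
    while (x < y) {              /* step 6: one octant, x = 0 to y   */
        x++;
        if (p < 0) {
            p += 2 * x + 1;      /* midpoint inside: keep y          */
        } else {
            y--;                 /* midpoint outside: select y - 1   */
            p += 2 * x + 1 - 2 * y;
        }
        plotCirclePoints(xc, yc, x, y);
    }
}

int main(void)
{
    midpointCircle(10, 10, 8);
    return 0;
}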
2D Transformation

Transformation
Changing the position, shape, size, or orientation of an object on the display is known as transformation.

Basic Transformation
 The basic transformations are three: Translation, Rotation, and Scaling.
 These three are known as the basic transformations because, in combination, they can be used to obtain other transformations.

Translation

Fig. 3.1: - Translation.


 Translation is a transformation used to reposition an object along a straight line path from one coordinate location to another.
 It is a rigid body transformation, so we need to translate the whole object.
 We translate a two dimensional point by adding the translation distances 𝒕𝒙 and 𝒕𝒚 to the original coordinate position (𝒙, 𝒚) to move to the new position (𝒙′, 𝒚′):
𝒙′ = 𝒙 + 𝒕𝒙 and 𝒚′ = 𝒚 + 𝒕𝒚
 The translation distance pair (𝒕𝒙, 𝒕𝒚) is called a Translation Vector or Shift Vector.
 We can represent this as a single matrix equation in column vector form:
𝑷′ = 𝑷 + 𝑻
[𝒙′]   [𝒙]   [𝒕𝒙]
[𝒚′] = [𝒚] + [𝒕𝒚]
 We can also represent it in row vector form as:
𝑷′ = 𝑷 + 𝑻
[𝒙′ 𝒚′] = [𝒙 𝒚] + [𝒕𝒙 𝒕𝒚]
 Since column vector representation is standard mathematical notation, and since many graphics packages like GKS and PHIGS use column vectors, we will also follow the column vector representation.
 Example: - Translate the triangle [A (10, 10), B (15, 15), C (20, 10)] 2 units in the x direction and 1 unit in the y direction.
We know that
𝑃′ = 𝑃 + 𝑇
so each vertex moves by (2, 1):
𝐴′ = (10 + 2, 10 + 1), 𝐵′ = (15 + 2, 15 + 1), 𝐶′ = (20 + 2, 10 + 1)
 Final coordinates after translation are [A′ (12, 11), B′ (17, 16), C′ (22, 11)].

Rotation
 It is a transformation used to reposition an object along a circular path in the xy-plane.
 To generate a rotation we specify a rotation angle 𝜽 and the position of the
Rotation Point (Pivot Point) (𝒙𝒓, 𝒚𝒓 ) about which the object is to be rotated.
 Positive value of rotation angle defines counter clockwise rotation and
negative value of rotation angle defines clockwise rotation.
 We first find the equation of rotation when pivot point is at coordinate origin(𝟎, 𝟎).

Fig. 3.2: - Rotation of a point from (x, y) to (x′, y′) through angle θ about the origin.


 From the figure we can write:
x = r cos φ
y = r sin φ
and
x′ = r cos(θ + φ) = r cos φ cos θ − r sin φ sin θ
y′ = r sin(θ + φ) = r cos φ sin θ + r sin φ cos θ
 Now replace r cos φ with x and r sin φ with y in the above equations:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
 We can write it in column-vector matrix form as:
P′ = R · P
[x′]   [cos θ   −sin θ] [x]
[y′] = [sin θ    cos θ] [y]
 Rotation about an arbitrary point is illustrated in the figure below.

Fig. 3.3: - Rotation about pivot point (x_r, y_r).
 Transformation equation for rotation of a point about pivot point (𝒙𝒓, 𝒚𝒓 ) is:
𝒙′ = 𝒙𝒓 + (𝒙 − 𝒙𝒓 ) 𝐜𝐨𝐬 𝜽 − (𝒚 − 𝒚𝒓 ) 𝐬𝐢𝐧 𝜽
𝒚′ = 𝒚𝒓 + (𝒙 − 𝒙𝒓) 𝐬𝐢𝐧 𝜽 + (𝒚 − 𝒚𝒓) 𝐜𝐨𝐬 𝜽
 These equations differ from rotation about the origin, and the matrix representation is also different.
 Its matrix equation can be obtained by a simple method that we will discuss later in this chapter.
 Rotation is also a rigid-body transformation, so we need to rotate each point of the object.
 Example: - Locate the new position of the triangle [A (5, 4), B (8, 3), C (8, 8)]
after its rotation by 90o clockwise about the origin.
As rotation is clockwise we take θ = −90°.
P′ = R · P
[x′]   [cos(−90°)  −sin(−90°)] [x]   [ 0  1] [x]
[y′] = [sin(−90°)   cos(−90°)] [y] = [−1  0] [y]
 Final coordinates after rotation are [A′ (4, −5), B′ (3, −8), C′ (8, −8)].

Scaling

Fig. 3.4: - Scaling.


 It is a transformation used to alter the size of an object.
 This operation is carried out by multiplying each coordinate value (x, y) by scaling factors s_x and s_y respectively.
 So equation for scaling is given by:
𝒙′ = 𝒙 ∙ 𝒔𝒙
𝒚′ = 𝒚 ∙ 𝒔𝒚
 These equations can be represented in column-vector matrix form as:
P′ = S · P
[x′]   [s_x   0 ] [x]
[y′] = [ 0   s_y] [y]
 Any positive value can be assigned to (s_x, s_y).
 Values less than 1 reduce the size while values greater than 1 enlarge the size of the object; the object remains unchanged when both factors are 1.
 Equal values of s_x and s_y produce Uniform Scaling, while different values produce Differential Scaling.
 Objects transformed with the above equations are both scaled and repositioned.
 A scaling factor less than 1 moves the object closer to the origin, while a factor greater than 1 moves it away from the origin.
 We can control the position of the object after scaling by keeping one position, called the fixed point (x_f, y_f), unchanged by the scaling transformation.

Fig. 3.5: - Fixed point scaling.


 Equation for scaling with fixed point position as (𝒙𝒇 , 𝒚𝒇 ) is:
𝒙′ = 𝒙𝒇 + (𝒙 − 𝒙𝒇 )𝒔𝒙 𝒚′ = 𝒚𝒇 + (𝒚 − 𝒚𝒇 )𝒔𝒚
𝒙′ = 𝒙𝒇 + 𝒙𝒔𝒙 − 𝒙𝒇 𝒔𝒙 𝒚′ = 𝒚𝒇 + 𝒚𝒔𝒚 − 𝒚𝒇 𝒔𝒚
𝒙′ = 𝒙𝒔𝒙 + 𝒙𝒇 (𝟏 − 𝒔𝒙 ) 𝒚′ = 𝒚𝒔𝒚 + 𝒚𝒇 (𝟏 − 𝒔𝒚 )
 The matrix equation for this will be discussed in a later section.
 Polygons are scaled by applying the scaling equations to each vertex and redrawing, while other bodies such as circles and ellipses are scaled using their defining parameters. For example, an ellipse is scaled by scaling its semi-major axis, semi-minor axis and center point and redrawing at that position.
 Example: - Consider square with left-bottom corner at (2, 2) and right-top
corner at (6, 6) apply the transformation which makes its size half.
As we want half size, the scale factors are s_x = 0.5, s_y = 0.5, and the coordinates of the square are [A (2, 2), B (6, 2), C (6, 6), D (2, 6)].
P′ = S · P
[x′]   [0.5   0 ] [2 6 6 2]   [1 3 3 1]
[y′] = [ 0   0.5] [2 2 6 6] = [1 1 3 3]
 Final coordinates after scaling are [A′ (1, 1), B′ (3, 1), C′ (3, 3), D′ (1, 3)].

Matrix Representation and homogeneous coordinates


 Many graphics applications involve sequences of geometric transformations.
 For example in design and picture construction application we perform
Translation, Rotation, and scaling to fit the picture components into their
proper positions.
 For efficient processing we will reformulate transformation sequences.
 We have matrix representation of basic transformation and we can express it
in the general matrix form as:
𝑷′ = 𝑴𝟏 ∙ 𝑷 + 𝑴𝟐
Where P and P′ are the initial and final point positions, M1 contains rotation and scaling terms, and M2 contains translational terms associated with pivot point, fixed point and repositioning.
 For efficient utilization we must calculate all sequence of transformation in
one step and for that reason we reformulate above equation to eliminate the
matrix addition associated with translation terms in matrix 𝑴𝟐.
 We can accomplish this by expanding the 2×2 matrix representation to 3×3 matrices.
 This allows us to express all transformations as matrix multiplications, but we need to represent each vertex position (x, y) with the homogeneous coordinate triple (x_h, y_h, h), where x = x_h/h and y = y_h/h; thus we can also write the triple as (h·x, h·y, h).
 For two-dimensional geometric transformations we can choose h to be any nonzero value, so there are infinitely many homogeneous representations for each coordinate position (x, y).
 A convenient choice is h = 1, the multiplicative identity, so (x, y) is represented as (x, y, 1).
 Expressing positions in homogeneous coordinates allows us to represent all geometric transformation equations as matrix multiplications.
 Let’s see each representation with 𝒉 = 𝟏
Translation

P′ = T(t_x, t_y) · P
[x′]   [1  0  t_x] [x]
[y′] = [0  1  t_y] [y]
[1 ]   [0  0   1 ] [1]
NOTE: - The inverse of the translation matrix is obtained by putting −t_x & −t_y in place of t_x & t_y.
Rotation

P′ = R(θ) · P
[x′]   [cos θ  −sin θ  0] [x]
[y′] = [sin θ   cos θ  0] [y]
[1 ]   [  0       0    1] [1]
NOTE: - The inverse of the rotation matrix is obtained by replacing θ with −θ.
Scaling

P′ = S(s_x, s_y) · P
[x′]   [s_x   0   0] [x]
[y′] = [ 0   s_y  0] [y]
[1 ]   [ 0    0   1] [1]
NOTE: - The inverse of the scaling matrix is obtained by replacing s_x & s_y with 1/s_x & 1/s_y.
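As a sketch of how these 3×3 homogeneous matrices look in code, the following C fragment builds the three basic matrices and applies one to a point (the Mat3 type and function names are illustrative, not from any particular package):

#include <math.h>

typedef struct { double m[3][3]; } Mat3;   /* 3x3 homogeneous matrix */

Mat3 translate(double tx, double ty) {
    Mat3 t = {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
    return t;
}
Mat3 rotate(double theta) {                /* theta in radians, about origin */
    Mat3 r = {{{cos(theta), -sin(theta), 0},
               {sin(theta),  cos(theta), 0},
               {0, 0, 1}}};
    return r;
}
Mat3 scale(double sx, double sy) {
    Mat3 s = {{{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}}};
    return s;
}
/* P' = M . P for the homogeneous point (x, y, 1) */
void transformPoint(Mat3 M, double *x, double *y) {
    double xn = M.m[0][0] * *x + M.m[0][1] * *y + M.m[0][2];
    double yn = M.m[1][0] * *x + M.m[1][1] * *y + M.m[1][2];
    *x = xn;  *y = yn;
}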
Composite Transformation
 We can set up a matrix for any sequence of transformations as a composite
transformation matrix by calculating the matrix product of individual
transformation.
 For column matrix representation of coordinate positions, we form composite
transformations by multiplying matrices in order from right to left.

Translations
 Two successive translations are performed as:
P′ = T(t_x2, t_y2) · {T(t_x1, t_y1) · P}
P′ = {T(t_x2, t_y2) · T(t_x1, t_y1)} · P
      [1  0  t_x2] [1  0  t_x1]
P′ =  [0  1  t_y2] [0  1  t_y1] · P
      [0  0   1  ] [0  0   1  ]
      [1  0  t_x1 + t_x2]
P′ =  [0  1  t_y1 + t_y2] · P
      [0  0       1     ]
P′ = T(t_x1 + t_x2, t_y1 + t_y2) · P
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
 This concept can be extended for any number of successive translations.

Rotations
 Two successive rotations are performed as:
P′ = R(θ2) · {R(θ1) · P}
P′ = {R(θ2) · R(θ1)} · P
      [cos θ2  −sin θ2  0] [cos θ1  −sin θ1  0]
P′ =  [sin θ2   cos θ2  0] [sin θ1   cos θ1  0] · P
      [  0        0     1] [  0        0     1]
      [cos θ1 cos θ2 − sin θ1 sin θ2   −(sin θ1 cos θ2 + cos θ1 sin θ2)   0]
P′ =  [sin θ1 cos θ2 + cos θ1 sin θ2     cos θ1 cos θ2 − sin θ1 sin θ2    0] · P
      [              0                                 0                  1]
      [cos(θ1 + θ2)  −sin(θ1 + θ2)  0]
P′ =  [sin(θ1 + θ2)   cos(θ1 + θ2)  0] · P
      [     0              0        1]
P′ = R(θ1 + θ2) · P
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate respectively.
 This concept can be extended for any number of successive rotations.
Scalings
 Two successive scalings are performed as:
P′ = S(s_x2, s_y2) · {S(s_x1, s_y1) · P}
P′ = {S(s_x2, s_y2) · S(s_x1, s_y1)} · P
      [s_x2   0   0] [s_x1   0   0]
P′ =  [ 0   s_y2  0] [ 0   s_y1  0] · P
      [ 0     0   1] [ 0     0   1]
      [s_x1·s_x2      0      0]
P′ =  [    0      s_y1·s_y2  0] · P
      [    0          0      1]
P′ = S(s_x1·s_x2, s_y1·s_y2) · P
Here P′ and P are column vectors of the final and initial point coordinates respectively.
 This concept can be extended for any number of successive scalings.
 Example: - Applying the combined scaling matrix S(6, 6) to the points p (2, 2) and q (8, 8):
      [6  0  0] [2  8]   [12  48]
P′ =  [0  6  0] [2  8] = [12  48]
      [0  0  1] [1  1]   [ 1   1]
 Final coordinates after the scalings are p′ (12, 12) and q′ (48, 48).

General Pivot-Point Rotation

Fig. 3.6: - General pivot point rotation: (a) original position of object and pivot point; (b) translation of object so that pivot point (x_r, y_r) is at the origin; (c) rotation about the origin; (d) translation of object so that the pivot point is returned to position (x_r, y_r).
 For rotating object about arbitrary point called pivot point we need to apply
following sequence of transformation.
1. Translate the object so that the pivot-point coincides with the coordinate origin.
2. Rotate the object about the coordinate origin with specified angle.
3. Translate the object so that the pivot-point is returned to its original position (i.e.
Inverse of step-1).
 Let’s find matrix equation for this
𝑷′ = 𝑻(𝒙𝒓 , 𝒚𝒓 ) ∙ [𝑹(𝜽) ∙ {𝑻(−𝒙𝒓 , −𝒚𝒓 ) ∙ 𝑷}]
𝑷′ = {𝑻(𝒙𝒓 , 𝒚𝒓 ) ∙ 𝑹(𝜽) ∙ 𝑻(−𝒙𝒓 , −𝒚𝒓 )} ∙ 𝑷
𝟏 𝟎 𝒙𝒓 𝐜𝐨𝐬 𝜽 − 𝐬𝐢𝐧 𝜽 𝟎 𝟏 𝟎 −𝒙𝒓
𝑷′ = [ 𝟎 𝟏 𝒚𝒓 ] [ 𝐬𝐢𝐧 𝜽 𝐜𝐨𝐬 𝜽 𝟎] [𝟎 𝟏 −𝒚𝒓 ] ∙ 𝑷
𝟎 𝟎 𝟏 𝟎 𝟎 𝟏 𝟎 𝟎 𝟏
𝐜𝐨𝐬 𝜽 − 𝐬𝐢𝐧 𝜽 𝒙𝒓(𝟏 − 𝐜𝐨𝐬 𝜽) + 𝒚𝒓 𝐬𝐢𝐧 𝜽
𝑷′ = [ 𝐬𝐢𝐧 𝜽 𝐜𝐨𝐬 𝜽 𝒚𝒓(𝟏 − 𝐜𝐨𝐬 𝜽) − 𝒙𝒓 𝐬𝐢𝐧 𝜽] ∙ 𝑷
𝟎 𝟎 𝟏
P′ = R(x_r, y_r, θ) · P
Here P′ and P are column vectors of the final and initial point coordinates respectively, and (x_r, y_r) are the coordinates of the pivot point.
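The composite matrix derived above can be applied directly in code; a minimal C sketch (function name is illustrative):

#include <math.h>

/* Rotate (x, y) by angle theta (radians) about pivot (xr, yr),
   using the entries of T(xr,yr).R(theta).T(-xr,-yr) derived above. */
void rotateAboutPivot(double *x, double *y,
                      double xr, double yr, double theta)
{
    double c = cos(theta), s = sin(theta);
    double xn = c * *x - s * *y + xr * (1 - c) + yr * s;
    double yn = s * *x + c * *y + yr * (1 - c) - xr * s;
    *x = xn;  *y = yn;
}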

General Fixed-Point Scaling

Fig. 3.7: - General fixed point scaling: (a) original position of object and fixed point; (b) translate object so that fixed point (x_f, y_f) is at the origin; (c) scale object with respect to the origin; (d) translate object so that the fixed point is returned to position (x_f, y_f).


 For scaling an object while keeping the position of one point, called the fixed point, unchanged, we apply the following sequence of transformations.
1. Translate the object so that the fixed-point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin with specified scale
factors.
3. Translate the object so that the fixed-point is returned to its original position (i.e.
Inverse of step-1).
 Let’s find matrix equation for this
𝑷′ = 𝑻(𝒙𝒇 , 𝒚𝒇 ) ∙ [𝑺(𝒔𝒙 , 𝒔𝒚 ) ∙ {𝑻(−𝒙𝒇 , −𝒚𝒇 ) ∙ 𝑷}]
𝑷′ = {𝑻(𝒙𝒇 , 𝒚𝒇 ) ∙ 𝑺(𝒔𝒙 , 𝒔𝒚 ) ∙ 𝑻(−𝒙𝒇 , −𝒚𝒇 )} ∙ 𝑷
      [1  0  x_f] [s_x   0   0] [1  0  −x_f]
P′ =  [0  1  y_f] [ 0   s_y  0] [0  1  −y_f] · P
      [0  0   1 ] [ 0    0   1] [0  0    1 ]
      [s_x   0   x_f(1 − s_x)]
P′ =  [ 0   s_y  y_f(1 − s_y)] · P
      [ 0    0        1      ]
P′ = S(x_f, y_f, s_x, s_y) · P
 Example: - Consider again the square [A (2, 2), B (6, 2), C (6, 6), D (2, 6)] and make its size half, keeping its center fixed. The scale factors are s_x = 0.5, s_y = 0.5 and the fixed point is (x_f, y_f) = (4, 4).
P′ = S(x_f, y_f, s_x, s_y) · P
      [s_x   0   x_f(1 − s_x)] [2 6 6 2]
P′ =  [ 0   s_y  y_f(1 − s_y)] [2 2 6 6]
      [ 0    0        1      ] [1 1 1 1]
      [0.5   0   2] [2 6 6 2]   [3 5 5 3]
P′ =  [ 0   0.5  2] [2 2 6 6] = [3 3 5 5]
      [ 0    0   1] [1 1 1 1]   [1 1 1 1]
 Final coordinates after scaling are [A′ (3, 3), B′ (5, 3), C′ (5, 5), D′ (3, 5)].

General Scaling Directions

Fig. 3.8: - General scaling direction.


 Parameter 𝒔𝒙 and 𝒔𝒚 scale the object along 𝒙 and 𝒚 directions. We can scale an
object in other directions by rotating the object to align the desired scaling
directions with the coordinate axes before applying the scaling
transformation.
 Suppose we apply scaling factor 𝒔𝟏 and 𝒔𝟐 in direction shown in figure than
we will apply following transformations.
1. Perform a rotation so that the direction for 𝒔𝟏 and 𝒔𝟐 coincide with 𝒙 and 𝒚 axes.
2. Scale the object with specified scale factors.
3. Perform opposite rotation to return points to their original orientations. (i.e.
Inverse of step-1).
 Let's find the matrix equation for this:
P′ = R⁻¹(θ) · [S(s_1, s_2) · {R(θ) · P}]
P′ = {R⁻¹(θ) · S(s_1, s_2) · R(θ)} · P
      [ cos θ  sin θ  0] [s_1   0   0] [cos θ  −sin θ  0]
P′ =  [−sin θ  cos θ  0] [ 0   s_2  0] [sin θ   cos θ  0] · P
      [   0      0    1] [ 0    0   1] [  0        0   1]
      [s_1 cos²θ + s_2 sin²θ    (s_2 − s_1) cos θ sin θ    0]
P′ =  [(s_2 − s_1) cos θ sin θ   s_1 sin²θ + s_2 cos²θ     0] · P
      [          0                          0              1]
Here 𝑷′ and 𝑷 are column vector of final and initial point coordinate
respectively and 𝜽 is the angle between actual scaling direction and our
standard coordinate axes.

Other Transformation
 Some package provides few additional transformations which are useful in
certain applications. Two such transformations are reflection and shear.

Reflection

 A reflection is a transformation that produces a mirror image of an object.


 The mirror image for a two –dimensional reflection is generated relative to
an axis of reflection by rotating the object 180o about the reflection axis.
 The reflected image depends on the position of the axis of reflection. Transformation matrices for a few common positions are discussed here.

Transformation matrix for reflection about the line 𝒚 = 𝟎 , 𝒕𝒉𝒆 𝒙 𝒂𝒙𝒊𝒔.


Fig. 3.9: - Reflection about x - axis.


 This transformation keeps the x values the same but flips (changes the sign of) the y values of coordinate positions.

1 0 0
[0 −1 0]
0 0 1

Transformation matrix for reflection about the line 𝒙 = 𝟎 , 𝒕𝒉𝒆 𝒚 𝒂𝒙𝒊𝒔.

Fig. 3.10: - Reflection about y - axis.
 This transformation keeps the y values the same but flips (changes the sign of) the x values of coordinate positions.

−1 0 0
[ 0 1 0]
0 0 1

Transformation matrix for reflection about the 𝑶𝒓𝒊𝒈𝒊𝒏.

Fig. 3.11: - Reflection about origin.


 This transformation flips (changes the sign of) both the x and y values of coordinate positions.

−1 0 0
[ 0 −1 0]
0 0 1

Transformation matrix for reflection about the line 𝒙 = 𝒚 .

Fig. 3.12: - Reflection about x=y line.


 This transformation interchanges the x and y values of coordinate positions.
0 1 0
[1 0 0]
0 0 1
Transformation matrix for reflection about the line 𝒙 = −𝒚 .


Fig. 3.12: - Reflection about x=-y line.


 This transformation interchanges and negates the x and y values of coordinate positions (x′ = −y, y′ = −x).

0 −1 0
[−1 0 0]
0 0 1

 Example: - Find the coordinates after reflection of the triangle [A (10, 10), B
(15, 15), C (20, 10)] about x axis.
1 0 0 10 15 20
𝑃′ = [0 −1 0] [10 15 10 ]
0 0 1 1 1 1
10 15 20
𝑃′ = [−10 −15 −10]
1 1 1
 Final coordinate after reflection are [A’ (10, -10), B’ (15, -15), C’ (20, -10)]

Shear

 A transformation that distorts the shape of an object, such that the transformed shape appears as if the object were composed of internal layers that slide over each other, is called shear.
 Two common shearing transformations are those that shift coordinate x values and those that shift y values.

Shear in 𝒙 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏 .
Fig. 3.13: - Shear in x-direction.
 Shear relative to 𝑥 − 𝑎𝑥𝑖𝑠 that is 𝑦 = 0 line can be produced by following equation:
𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ 𝒚 , 𝒚′ = 𝒚
 Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 𝟎
[𝟎 𝟏 𝟎]
𝟎 𝟎 𝟏
Here 𝒔𝒉𝒙 is shear parameter. We can assign any real value to 𝒔𝒉𝒙.
 We can generate 𝑥 − 𝑑𝑖𝑟𝑒𝑐𝑡𝑖𝑜𝑛 shear relative to other reference line 𝑦 = 𝑦𝑟𝑒𝑓 with
following equation:
𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ (𝒚 − 𝒚𝒓𝒆𝒇) , 𝒚′ = 𝒚
 Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 −𝒔𝒉𝒙 ∙ 𝒚𝒓𝒆𝒇
[𝟎 𝟏 𝟎 ]
𝟎 𝟎 𝟏
 Example: - Shear the unit square in x direction with shear parameter ½
relative to line 𝑦 = −1. Here 𝑦𝑟𝑒𝑓 = −1 and 𝑠ℎ𝑥 = 0.5
Coordinates of unit square are [A (0, 0), B (1, 0), C (1, 1), D (0, 1)].
      [1  0.5  −0.5·(−1)] [0 1 1 0]
P′ =  [0   1       0    ] [0 0 1 1]
      [0   0       1    ] [1 1 1 1]
      [1  0.5  0.5] [0 1 1 0]   [0.5  1.5  2  1]
P′ =  [0   1    0 ] [0 0 1 1] = [ 0    0   1  1]
      [0   0    1 ] [1 1 1 1]   [ 1    1   1  1]
 Final coordinates after shear are [A′ (0.5, 0), B′ (1.5, 0), C′ (2, 1), D′ (1, 1)].

Shear in 𝒚 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏 .


Fig. 3.14: - Shear in y-direction.


 Shear relative to 𝑦 − 𝑎𝑥𝑖𝑠 that is 𝑥 = 0 line can be produced by following equation:
𝒙′ = 𝒙 , 𝒚′ = 𝒚 + 𝒔𝒉𝒚 ∙ 𝒙
 Transformation matrix for that is:
[ 1    0  0]
[sh_y  1  0]
[ 0    0  1]
Here 𝒔𝒉𝒚 is shear parameter. We can assign any real value to 𝒔𝒉𝒚.
 We can generate 𝑦 − 𝑑𝑖𝑟𝑒𝑐𝑡𝑖𝑜𝑛 shear relative to other reference line 𝑥 = 𝑥𝑟𝑒𝑓 with
following equation:
𝒙′ = 𝒙, 𝒚′ = 𝒚 + 𝒔𝒉𝒚 ∙ (𝒙 − 𝒙𝒓𝒆𝒇)
 Transformation matrix for that is:
[ 1    0        0       ]
[sh_y  1  −sh_y · x_ref ]
[ 0    0        1       ]
 Example: - Shear the unit square in y direction with shear parameter ½
relative to line 𝑥 = −1. Here 𝑥𝑟𝑒𝑓 = −1 and 𝑠ℎ𝑦 = 0.5
Coordinates of unit square are [A (0, 0), B (1, 0), C (1, 1), D (0, 1)].
      [ 1   0      0     ] [0 1 1 0]
P′ =  [0.5  1  −0.5·(−1) ] [0 0 1 1]
      [ 0   0      1     ] [1 1 1 1]
      [ 1   0   0 ] [0 1 1 0]   [ 0   1  1   0 ]
P′ =  [0.5  1  0.5] [0 0 1 1] = [0.5  1  2  1.5]
      [ 0   0   1 ] [1 1 1 1]   [ 1   1  1   1 ]
 Final coordinates after shear are [A′ (0, 0.5), B′ (1, 1), C′ (1, 2), D′ (0, 1.5)].
Module – II (12 hours)
The Viewing Pipeline
 Window: The area selected in world coordinates for display is called the window. It defines what is to be viewed.
 Viewport: The area on a display device onto which the window image is mapped is called the viewport. It defines where to display.
 In many cases the window and viewport are rectangles, but other shapes may also be used.
 In general, finding the device coordinates of the viewport from the world coordinates of the window is called the viewing transformation.
 Sometimes we refer to this viewing transformation as the window-to-viewport transformation, but in general it involves more steps.

Fig. 3.1: - A viewing transformation using standard rectangles for the window and viewport.
 Now we see steps involved in viewing pipeline.

MC → Construct World-Coordinate Scene Using Modeling-Coordinate Transformations → WC → Convert World Coordinates to Viewing Coordinates → VC → Map Viewing Coordinates to Normalized Viewing Coordinates using Window-Viewport Specifications → NVC → Map Normalized Viewport to Device Coordinates → DC

Fig. 3.2: - 2D viewing pipeline.
 As shown in the figure above, first of all we construct the world-coordinate scene using modeling-coordinate transformations.
 Then we convert world coordinates to viewing coordinates.
 Then we map viewing coordinates to normalized viewing coordinates, in which we obtain values between 0 and 1.
 At last we convert normalized viewing coordinates to device coordinates using device-driver software which provides the device specification.
 Finally the device coordinates are used to display the image on the display screen.
 By changing the viewport position on the screen we can see the image at different places on the screen.
 By changing the size of the window and viewport we can obtain zoom-in and zoom-out effects as required.
 A fixed-size viewport with a smaller window gives a zoom-in effect, and a fixed-size viewport with a larger window gives a zoom-out effect.
 Viewports are generally defined within the unit square so that graphics packages are more device independent; these coordinates are called normalized viewing coordinates.
Viewing Coordinate Reference Frame

Fig. 3.3: - A viewing-coordinate frame is moved into coincidence with the world frame in
two steps: (a) translate the viewing origin to the world origin, and then (b) rotate to align
the axes of the two systems.
 We can obtain the viewing reference frame in any direction and at any position.
 To handle such a condition, first of all we translate the viewing reference frame origin to the standard (world) frame origin and then we rotate it to align the axes of the two systems.
 In this way we can adjust the window in any reference frame.
 This is described by the following transformation matrix:
M_{WC,VC} = R · T
 Where T is the translation matrix and R is the rotation matrix.
Window-To-Viewport Coordinate Transformation
 Mapping of window coordinate to viewport is called window to viewport transformation.
 We do this using transformation that maintains relative position of window coordinate into
viewport.
 That means center coordinates in window must be remains at center position in viewport.

 We find the relative position by the equations:
(x_v − x_vmin)/(x_vmax − x_vmin) = (x_w − x_wmin)/(x_wmax − x_wmin)
(y_v − y_vmin)/(y_vmax − y_vmin) = (y_w − y_wmin)/(y_wmax − y_wmin)
 Solving by making viewport position as subject we obtain:
𝐱 𝐯 = 𝐱 𝐯𝐦𝐢𝐧 + (𝐱 𝐰 − 𝐱 𝐰𝐦𝐢𝐧 )𝐬𝐱
𝐲𝐯 = 𝐲𝐯𝐦𝐢𝐧 + (𝐲𝐰 − 𝐲𝐰𝐦𝐢𝐧 )𝐬𝐲
 Where the scaling factors are:
s_x = (x_vmax − x_vmin)/(x_wmax − x_wmin)
s_y = (y_vmax − y_vmin)/(y_wmax − y_wmin)
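A direct C sketch of these mapping equations (function and parameter names are illustrative):

/* Map a world point (xw, yw) in the window to (xv, yv) in the viewport. */
void windowToViewport(double xw, double yw,
                      double xwmin, double xwmax, double ywmin, double ywmax,
                      double xvmin, double xvmax, double yvmin, double yvmax,
                      double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);  /* x scaling factor */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);  /* y scaling factor */
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}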
 We can also map the window to the viewport with a sequence of transformations:
1. Perform a scaling transformation using a fixed-point position of (x_wmin, y_wmin) that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
 To maintain relative proportions we take s_x = s_y. If the two are not equal, the image is stretched or contracted in the x or y direction when displayed on the output device.
 Characters are handled in two different ways: one way is to simply maintain relative position like other primitives; the other is to maintain a standard character size even when the viewport is enlarged or reduced.
 A number of display devices can be used in an application, and for each we can use a different window-to-viewport transformation. This mapping is called the workstation transformation.

Fig. 3.4: - workstation transformation.


 As shown in figure two different displays devices are used and we map different window-
to-viewport on each one.
Clipping Operations
 Generally, any procedure that identifies those portions of a picture that are either inside
or outside of a specified region of space is referred to as a clipping algorithm, or simply
clipping. The region against which an object is to clip is called a clip window.
 Clip window can be general polygon or it can be curved boundary.
Application of Clipping
 It can be used for displaying particular part of the picture on display screen.
 Identifying visible surface in 3D views.
 Antialiasing.
 Creating objects using solid-modeling procedures.
 Displaying multiple windows on same screen.
 Drawing and painting.
Point Clipping
 In point clipping we eliminate points that are outside the clipping window and draw points that are inside it.
 Here we consider the clipping window to be a rectangular boundary with edges (x_wmin, x_wmax, y_wmin, y_wmax).
 So, to find whether a given point is inside or outside the clipping window we use the following inequalities:
x_wmin ≤ x ≤ x_wmax
y_wmin ≤ y ≤ y_wmax
 If both inequalities are satisfied, the point is inside; otherwise the point is outside the clipping window.
Line Clipping
 Line clipping involves several possible cases.
1. Completely inside the clipping window.
2. Completely outside the clipping window.
3. Partially inside and partially outside the clipping window.

Fig. 3.5: - Line clipping against a rectangular window.
 A line that is completely inside is displayed completely. A line that is completely outside is eliminated from the display. For a partially inside line we need to calculate the intersections with the window boundary and find which part is inside the clipping boundary and which part is eliminated.
 Several line-clipping methods have been developed; some of them are discussed below.
Cohen-Sutherland Line Clipping
 This is one of the oldest and most popular line-clipping procedures.

Region and Region Code


 In this we divide whole space into nine region and assign 4 bit code to each endpoint of
line depending on the position where the line endpoint is located.

1001 1000 1010

0001 0000 0010

0101 0100 0110

Fig. 3.6: - Region codes for the nine regions around the clipping window.


 Figure 3.6 shows the code for a line endpoint falling within each particular area.
 The code is derived by setting particular bits according to the position of the area:
Set bit 1: for left of the clipping window.
Set bit 2: for right of the clipping window.
Set bit 3: for below the clipping window.
Set bit 4: for above the clipping window.
 Bits as mentioned above are set to 1 and the others are 0.

Algorithm
Step-1:
Assign region code to both endpoint of a line depending on the position where the line endpoint is
located.

Step-2:
If both endpoints have code ‘0000’
Then the line is completely inside.
Otherwise
Perform a logical AND between the two codes.

If the result of the logical AND is non-zero

Line is completely outside the clipping window.
Otherwise
Calculate the intersection points with the boundaries one by one. Divide the line into two parts at each intersection point.
Recursively call the algorithm for both line segments.
Step-3:
Draw the line segments that are completely inside and eliminate the segments found to be completely outside.

Intersection points calculation with clipping window boundary


 For intersection calculation we use line equation “𝑦 = 𝑚𝑥 + 𝑏”.
 ‘𝑥′ is constant for left and right boundary which is:
o for left “𝑥 = 𝑥𝑤𝑚𝑖𝑛 ”
o for right “𝑥 = 𝑥𝑤𝑚𝑎𝑥 ”
 So we calculate 𝑦 coordinate of intersection for this boundary by putting
values of 𝑥 depending on boundary is left or right in below equation.
𝒚 = 𝒚𝟏 + 𝒎(𝒙 − 𝒙𝟏 )
 ′𝑦′ coordinate is constant for top and bottom boundary which is:
o for top “𝑦 = 𝑦𝑤𝑚𝑎𝑥”
o for bottom “𝑦 = 𝑦𝑤𝑚𝑖𝑛 ”
 So we calculate 𝑥 coordinate of intersection for this boundary by putting
values of 𝑦 depending on boundary is top or bottom in below equation.
x = x_1 + (y − y_1)/m
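Putting the region codes, trivial accept/reject tests and intersection formulas together, a compact C sketch (written iteratively rather than recursively; all names are illustrative) might look like this:

#define LEFT   1   /* bit 1 */
#define RIGHT  2   /* bit 2 */
#define BOTTOM 4   /* bit 3 */
#define TOP    8   /* bit 4 */

static int encode(double x, double y,
                  double xwmin, double xwmax, double ywmin, double ywmax)
{
    int code = 0;                       /* build the 4-bit region code */
    if (x < xwmin) code |= LEFT;
    if (x > xwmax) code |= RIGHT;
    if (y < ywmin) code |= BOTTOM;
    if (y > ywmax) code |= TOP;
    return code;
}

/* Clips the segment (x1,y1)-(x2,y2); returns 1 if any part is visible. */
int clipLine(double *x1, double *y1, double *x2, double *y2,
             double xwmin, double xwmax, double ywmin, double ywmax)
{
    int c1 = encode(*x1, *y1, xwmin, xwmax, ywmin, ywmax);
    int c2 = encode(*x2, *y2, xwmin, xwmax, ywmin, ywmax);
    while (c1 | c2) {
        double x, y;
        int out;
        if (c1 & c2) return 0;          /* trivially outside */
        out = c1 ? c1 : c2;             /* pick an endpoint that is outside */
        if (out & TOP)         { x = *x1 + (*x2 - *x1) * (ywmax - *y1) / (*y2 - *y1); y = ywmax; }
        else if (out & BOTTOM) { x = *x1 + (*x2 - *x1) * (ywmin - *y1) / (*y2 - *y1); y = ywmin; }
        else if (out & RIGHT)  { y = *y1 + (*y2 - *y1) * (xwmax - *x1) / (*x2 - *x1); x = xwmax; }
        else                   { y = *y1 + (*y2 - *y1) * (xwmin - *x1) / (*x2 - *x1); x = xwmin; }
        if (out == c1) { *x1 = x; *y1 = y; c1 = encode(x, y, xwmin, xwmax, ywmin, ywmax); }
        else           { *x2 = x; *y2 = y; c2 = encode(x, y, xwmin, xwmax, ywmin, ywmax); }
    }
    return 1;                           /* inside, possibly after clipping */
}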
Polygon Clipping
 For polygon clipping we need to modify the line clipping procedure because in line
clipping we need to consider about only line segment while in polygon clipping we need
to consider the area and the new boundary of the polygon after clipping.
Sutherland-Hodgeman Polygon Clipping
 To correctly clip a polygon we process the polygon boundary as a whole against each window edge.
 This is done by processing all polygon vertices against each clip-rectangle boundary one by one.
 Beginning with the initial set of polygon vertices we first clip against the left boundary and
produce new sequence of vertices.
 Then that new set of vertices is clipped against the right boundary clipper, a bottom
boundary clipper and a top boundary clipper, as shown in figure below.

Fig. 3.11: - Clipping a polygon against successive window boundaries.

in → Left Clipper → Right Clipper → Bottom Clipper → Top Clipper → out

Fig. 3.12: - Processing the vertices of the polygon through boundary clippers.
 There are four possible cases when processing vertices in sequence around the perimeter of a
polygon.
Fig. 3.13: - Clipping a polygon against successive window boundaries.
 As shown in case 1: if both vertices are inside the window, we add only the second vertex to the output list.
 In case 2: if the first vertex is inside the boundary and the second vertex is outside, only the edge intersection with the window boundary is added to the output vertex list.
 In case 3: if both vertices are outside the window boundary, nothing is added to the output list.
 In case 4: if the first vertex is outside and the second vertex is inside the boundary, we add both the intersection point with the window boundary and the second vertex to the output list.
 When polygon clipping against one boundary is done, we clip against the next window boundary (a sketch of a single boundary clipper is given below).
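As a hedged C sketch of the four cases, the following routine clips a vertex list against a single (left) boundary; the other three clippers are analogous. The Point type and the function names are illustrative:

typedef struct { double x, y; } Point;

/* Intersection of edge a-b with the left boundary x = xwmin. */
static Point intersectLeft(Point a, Point b, double xwmin)
{
    Point p;
    p.x = xwmin;
    p.y = a.y + (b.y - a.y) * (xwmin - a.x) / (b.x - a.x);
    return p;
}

/* Clip closed polygon in[0..n-1] against x = xwmin; returns new count. */
int clipLeft(Point in[], int n, Point out[], double xwmin)
{
    int i, m = 0;
    for (i = 0; i < n; i++) {
        Point v1 = in[i], v2 = in[(i + 1) % n];
        int in1 = v1.x >= xwmin, in2 = v2.x >= xwmin;
        if (in1 && in2)                                    /* case 1 */
            out[m++] = v2;
        else if (in1 && !in2)                              /* case 2 */
            out[m++] = intersectLeft(v1, v2, xwmin);
        /* case 3: both outside -> add nothing */
        else if (!in1 && in2) {                            /* case 4 */
            out[m++] = intersectLeft(v1, v2, xwmin);
            out[m++] = v2;
        }
    }
    return m;
}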
 We illustrate this method by simple example.

Fig. 3.14: - Clipping a polygon against left window boundaries.


 As shown in the figure above, clipping against the left boundary, vertices 1 and 2 are found to be outside the boundary. Moving to vertex 3, which is inside, we calculate the intersection and add both the intersection point and vertex 3 to the output list.
 Moving to vertex 4, vertices 3 and 4 are both inside, so we add vertex 4 to the output list; similarly from 4 to 5 we add 5 to the output list. From 5 to 6 we move from inside to outside, so we add the intersection point to the output list, and finally from 6 to 1 both vertices are outside the window so we add nothing.
 Convex polygons are correctly clipped by the Sutherland-Hodgeman algorithm, but concave polygons may be displayed with extraneous lines.
 To overcome this problem, one possible solution is to divide the polygon into a number of small convex polygons and process them one by one.
 Another approach is to use the Weiler-Atherton algorithm.
Weiler-Atherton Polygon Clipping
 In this algorithm vertex processing procedure for window boundary is modified so that
concave polygon also clip correctly.
 This can be applied for arbitrary polygon clipping regions as it is developed for visible
surface identification.
 Main idea of this algorithm is instead of always proceeding around the polygon edges as
vertices are processed we sometimes need to follow the window boundaries.
 Other procedure is similar to Sutherland-Hodgeman algorithm.
 For clockwise processing of polygon vertices we use the following rules:
o For an outside to inside pair of vertices, follow the polygon boundary.
o For an inside to outside pair of vertices, follow the window boundary in a clockwise direction.
 We illustrate it with example:
[Figure: Weiler-Atherton clipping of a concave polygon with vertices v1–v7 and intersection points v1′–v7′; processing resumes and stops at window-boundary intersections.]
 As shown in the figure, we start from v1 and move clockwise towards v2, adding the intersection point and the next point to the output list by following the polygon boundary; then from v2 to v3 we add v3 to the output list.
 From v3 to v4 we calculate the intersection point, add it to the output list, and follow the window boundary.
 Similarly from v4 to v5 we add the intersection point and the next point and follow the polygon boundary; next we move from v5 to v6, add the intersection point and follow the window boundary; finally v6 to v1 is outside so nothing is added.
 This way we get two separate polygon sections after clipping.

ALIASING
Aliasing is an effect of displaying a high-resolution image on a low-resolution display. The aliasing effects are also called artifacts or distortion. The following effects may occur due to aliasing:
a) Jagged Profile: it is especially noticed where there is a high contrast between the interior and exterior of the object.
b) Picket Fence Problem: it occurs when a user attempts to scan-convert an object that will not fit exactly in the raster.


c) Staircase Artifact: in a low-resolution display, slanted lines may have unequal brightness because the distance between two diagonal neighbor pixels is about 1.4, whereas the distance between two horizontal or vertical neighbors is 1.
ANTIALIASING
The aliasing effect can be reduced by adjusting intensities of the pixel along the
line. The process of adjusting intensities of the pixels along the line to minimize the
effect of aliasing the called antialiasing.
Methods of antialiasing:
I. Increasing Resolution
II. Unweighted Area Sampling
III. Weighted Area Sampling
I. INCREASING RESOLUTION:
Aliasing can be minimized by increasing the resolution of the raster display. By doubling the resolution, the line passes through twice as many columns of pixels and therefore has twice as many jags, but each jag is half as large in the x and y directions.
II. UNWEIGHTED AREA SAMPLING:
In unweighted area sampling, the intensity of a pixel is proportional to the amount of line area occupied by the pixel. This technique produces noticeably better results than full/zero-intensity pixels.
III. WEIGHTED AREA SAMPLING:
Here equal areas contribute unequally, i.e. a small area closer to the pixel center contributes more intensity than one at a greater distance. Intensity thus depends on the area occupied as well as the distance from the pixel center.
HALFTONING:
It is a technique for obtaining increased visual resolution with a minimum number of intensity levels. It decreases the overall resolution of the image and is most suitable where the resolution of the original image is lower than that of the output device but the image has more intensity levels than the output device.
THRESHOLDING:
Halftoning results in a loss of spatial resolution, which is visible when displaying a low-resolution image. To improve on this, thresholding is used. In thresholding the output image has the same size as the original image but only two intensity levels:
If I(x, y) ≥ T then I′(x, y) = white, else I′(x, y) = black,
where I is the original image, I′ is the thresholded image and T is the threshold.
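A minimal C sketch of this rule applied to a grayscale image stored as a flat array (the function name and the two output levels are illustrative):

/* Threshold a width*height grayscale image in place. */
void thresholdImage(unsigned char *img, int width, int height,
                    unsigned char T, unsigned char white, unsigned char black)
{
    int i;
    for (i = 0; i < width * height; i++)
        img[i] = (img[i] >= T) ? white : black;   /* two output levels only */
}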
DITHERING:
Dithering is the process by which we create the illusion of colors that are not actually present. It is done by a random arrangement of pixels. It is a digital halftoning process used to approximate a color that cannot be displayed, using uniform dots of the display device.
The classical dithering algorithms include ordered (matrix) dithering and error-diffusion dithering such as the Floyd-Steinberg algorithm.
SOLID AREA SCAN CONVERSION


Scan conversion is used to draw lines, circles and ellipses as common objects. When we want to design a solid area, it is a little difficult using scan conversion alone. Using edges and joints we can design a polygon, which is closed in nature, but when multiple solid objects are present, the complexity of finding overlaps increases.
Inside Outside Test:
To find out whether a pixel is inside or outside the polygon, the inside-outside test (even-odd test) is performed.
 A ray is extended from the pixel in the horizontal or vertical direction; if the ray intersects the boundary:
o an odd number of times, then the pixel is inside;
o an even number of times, then the pixel is outside.
Coherence:
It is a property by which we can improve the efficiency of a program. It can be defined as: edges and pixels are likely to have the same characteristics in the polygon, meaning that if one pixel is inside the polygon, its adjacent pixels are also likely to be inside the polygon.

Filled-Area Primitives

 In practice we often use polygons which are filled with some color or pattern inside.
 There are two basic approaches to area filling on raster systems.
 One way to fill an area is to determine the overlap intervals for each scan line that crosses the area.
 Another method is to start from a given interior position and paint outwards from this point until we encounter the boundary.
Scan-Line Polygon Fill Algorithm
 Figure below shows the procedure for scan-line filling algorithm.

Fig. 2.17: - Interior pixels along a scan line passing through a polygon area.
 For each scan line crossing a polygon, the algorithm locates the intersection points of the scan line with the polygon edges.
 These intersection points are sorted from left to right.
 Frame-buffer positions between each pair of intersection points are set to the specified fill color.
 Scan lines that intersect at vertex positions require special handling.
 For vertex we must look at the other endpoints of the two line segments of the polygon
which meet at this vertex.
 If these points lie on the same side (up or down) of the scan line, then that vertex counts as two intersection points.
 If they lie on opposite sides of the scan line, then the point is counted as single intersection.
 This is illustrated in figure below

Fig. 2.18: - Intersection points along the scan line that intersect polygon vertices.
 As shown in Fig. 2.18, some scan lines intersect vertices of the polygon. For scan line 1, the other endpoints (B and D) of the two line segments meeting at the vertex lie on the same side of the scan line, hence the vertex counts as two intersections, giving two pairs: 1 - 2 and 3 - 4 (points 2 and 3 are actually the same point). For scan line 2 the other endpoints (D and F) of the two line segments lie on opposite sides of the scan line, hence the vertex counts as a single intersection, again resulting in two pairs: 1 - 2 and 3 - 4. For scan line 3, two vertices are intersection points.
 For vertex F the other endpoints E and G of the two line segments lie on the same side of the scan line, whereas for vertex H the other endpoints G and I lie on opposite sides. Therefore at vertex F there are two intersections and at vertex H there is only one. This again gives two pairs: 1 - 2 and 3 - 4, where points 2 and 3 are actually the same point.
 Coherence methods often involve incremental calculations applied along a single scan line or
between successive scan lines.
 In determining edge intersections, we can set up incremental coordinate calculations along
any edge by exploiting the fact that the slope of the edge is constant from one scan line to
the next.
 Figure below shows three successive scan-lines crossing the left edge of polygon.

Fig. 2.18: - adjacent scan line intersects with polygon edge.

 For the figure above we can write the slope equation for the polygon boundary as:
m = (y_{k+1} − y_k)/(x_{k+1} − x_k)
 Since the change in y coordinates between two scan lines is simply
y_{k+1} − y_k = 1
 the slope equation can be rewritten as
m = 1/(x_{k+1} − x_k)
x_{k+1} − x_k = 1/m
x_{k+1} = x_k + 1/m
 Each successive x intercept can thus be calculated by adding the inverse of the slope and rounding to the nearest integer.
 For parallel execution of this algorithm we assign each scan line to separate
processor in that case instead of using previous 𝑥 values for calculation we
use initial 𝑥 values by using equation as.

x_k = x_0 + k/m
 Using this equation we can perform integer evaluation of 𝑥 intercept by
initializing a counter to 0, then incrementing counter by the value of ∆𝑥 each
time we move up to a new scan line.
 When the counter value becomes equal to or greater than ∆𝑦, we increment the
current 𝑥 intersection value by 1 and decrease the counter by the value ∆𝑦.
 This procedure is seen by following figure.

Fig. 2.19: - A line with slope 7/3 and the integer calculation of its x intercepts using x_{k+1} = x_k + ∆x/∆y.
 Steps for the above procedure (suppose m = 7/3):
1. Initially, set the counter to 0 and the increment to 3 (which is ∆x).
2. When moving to the next scan line, increment the counter by adding ∆x.
3. When the counter is equal to or greater than 7 (which is ∆y), increment the x intercept (in other words, the x intercept for this scan line is one more than for the previous scan line), and decrement the counter by 7 (which is ∆y).
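A short C sketch of this counter scheme (the array and function names are illustrative; the while loop also covers slopes less than 1):

/* Compute x intercepts for n successive scan lines of an edge with
   slope dy/dx, starting from intercept x0, using integer arithmetic. */
void edgeIntercepts(int x0, int dx, int dy, int n, int xInt[])
{
    int x = x0, counter = 0, k;
    for (k = 0; k < n; k++) {
        xInt[k] = x;              /* x intercept for scan line k */
        counter += dx;            /* step up to the next scan line */
        while (counter >= dy) {   /* may trigger more than once if slope < 1 */
            x++;                  /* advance the x intercept */
            counter -= dy;
        }
    }
}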


 To efficiently perform a polygon fill, we can first store the polygon boundary
in a sorted edge table that contains all the information necessary to process
the scan lines efficiently.
 We use bucket sort to store the edge sorted on the smallest 𝑦 value of each
edge in the correct scan line positions.
 Only the non-horizontal edges are entered into the sorted edge table.
 Figure below shows one example of storing edge table.
[Figure: a polygon and its sorted edge table, with each scan-line bucket listing the entries (maximum y, x intercept, inverse slope) for the non-horizontal edges that begin at that scan line.]
Fig. 2.20: - A polygon and its sorted edge table.
 Each entry in the table for a particular scan line contains the maximum y value of the edge, the x-intercept value of the edge, and the inverse slope of the edge.
 For each scan line the edges are kept in sorted order from left to right.
 Then we process the scan lines from the bottom to the top of the polygon, producing an active edge list for each scan line crossing the polygon boundaries.
 The active edge list for a scan line contains all edges crossed by that line, with iterative coherence calculations used to obtain the edge intersections.

Inside-Outside Tests
 In area filling and other graphics operations we often need to find whether a particular point is inside or outside the polygon.
 To decide which region is inside and which is outside, most graphics packages use either the odd-even rule or the nonzero winding number rule.

Odd Even Rule

 It is also called the odd parity rule or even-odd rule.
 Conceptually draw a line from any position p to a distant point outside the coordinate extents of the object and count the number of edges crossed by this line; if the count is odd, then p is an interior point, otherwise p is an exterior point.
 To obtain an accurate edge count we must make sure the selected line does not pass through any vertices.
 This is shown in figure 2.21(a).
Fig. 2.21: - Identifying interior and exterior region for a self-intersecting polygon.

Nonzero Winding Number Rule


 This method counts the number of times the polygon edges wind around a particular
point in the counterclockwise direction. This count is called the winding number, and the
interior points of a two- dimensional object are defined to be those that have a nonzero
value for the winding number.

 We apply this rule by initializing the winding number to 0 and then drawing a line from the point p to a distant point beyond the coordinate extents of the object.
 The line we choose must not pass through vertices.
 Moving along that line, we count the crossing edges: we add 1 to the winding number if an edge crosses our line from right to left, and subtract 1 if an edge crosses from left to right.
 If the final value of the winding number is nonzero, the point is interior; if the winding number is zero, the point is exterior.
 This is shown in figure 2.21(b).
 One way to determine the direction of an edge crossing is to take the vector cross product of a vector U, along the line from p to the distant point, with the edge vector E for each edge that crosses the line.
 If the z component of the cross product U × E for a particular edge is positive, that edge crosses from right to left and we add 1 to the winding number; otherwise the edge crosses from left to right and we subtract 1 from the winding number.

Comparison between Odd Even Rule and Nonzero Winding Rule


 For standard polygons and simple object both rule gives same result but for more
complicated shape both rule gives different result which is illustrated in figure 2.21.

Scan-Line Fill of Curved Boundary Areas


 Scan-line fill of region with curved boundary is more time consuming as intersection
calculation now involves nonlinear boundaries.
 For simple curve such as circle or ellipse scan line fill process is straight forward process.
 We calculate the two scan line intersection on opposite side of the curve.
 This is same as generating pixel position along the curve boundary using standard equation of
curve.
 Then we fill the color between two boundary intersections.
 Symmetry property is used to reduce the calculation.
 Similar method can be used for fill the curve section.
Boundary Fill Algorithm/ Edge Fill Algorithm
 In this method, edges of the polygons are drawn. Then starting with some seed, any
point inside the polygon we examine the neighbouring pixels to check whether the
boundary pixel is reached.
 If boundary pixels are not reached, pixels are highlighted and the process is continued
until boundary pixels are reached.
 Boundary-defined regions may be either 4-connected or 8-connected, as shown in the figure below.

(a) Four connected region (b) Eight connected region

Fig. 2.22: - Neighbor pixel connected to one pixel.


 If a region is 4-connected, then every pixel in the region may be reached by a
combination of moves in only four directions: left, right, up and down.
 For an 8-connected region every pixel in the region may be reached by a combination of
moves in the two horizontal, two vertical, and four diagonal directions.
 In some cases, an 8-connected algorithm is more accurate than the 4-connected
algorithm. This is illustrated in Figure below. Here, a 4-connected algorithm produces the
partial fill.

Seed

Fig. 2.23: - partial filling resulted due to using 4-connected algorithm.


 The following procedure illustrates the recursive method for filling a 4-connected region
with color specified in parameter fill color (f-color) up to a boundary color specified with
parameter boundary color (b-color).
 Procedure (in C):
void boundary_fill4(int x, int y, int f_color, int b_color)
{
    /* Fill until the boundary color is reached; skip already-filled pixels. */
    if (getpixel(x, y) != b_color && getpixel(x, y) != f_color) {
        putpixel(x, y, f_color);
        boundary_fill4(x + 1, y, f_color, b_color);   /* right */
        boundary_fill4(x, y + 1, f_color, b_color);   /* up    */
        boundary_fill4(x - 1, y, f_color, b_color);   /* left  */
        boundary_fill4(x, y - 1, f_color, b_color);   /* down  */
    }
}
 Note: the 'getpixel' function gives the color of the specified pixel and the 'putpixel' function draws the pixel with the specified color.
 The same procedure can be modified for an 8-connected region algorithm by including four additional statements to test diagonal positions, such as (x + 1, y + 1).
 This procedure requires considerable stacking of neighbouring points, so more efficient methods are generally employed.
 This method fill horizontal pixel spans across scan lines, instead of proceeding to 4
connected or 8 connected neighbouring points.
 Then we need only stack a beginning position for each horizontal pixel span, instead of
stacking all unprocessed neighbouring positions around the current position.
 Starting from the initial interior point with this method, we first fill in the contiguous span
of pixels on this starting scan line.
 Then we locate and stack starting positions for spans on the adjacent scan lines, where spans
are defined as the contiguous horizontal string of positions bounded by pixels displayed in
the area border color.
 At each subsequent step, we unstack the next start position and repeat the process.
 An example of how pixel spans could be filled using this approach is illustrated for the 4-
connected fill region in Figure below.

Fig. 2.24: - Boundary fill across pixel spans for a 4-connected area.

Flood-Fill Algorithm
 Sometimes it is required to fill in an area that is not defined within a single color boundary.
 In such cases we can fill areas by replacing a specified interior color instead of searching
for a boundary color.
 This approach is called a flood-fill algorithm. Like boundary fill algorithm, here we start
with some seed and examine the neighbouring pixels.
 However, here pixels are checked for a specified interior color instead of boundary color
and they are replaced by new color.
 Using either a 4-connected or 8-connected approach, we can step through pixel positions
until all interior point have been filled.
 The following procedure illustrates the recursive method for filling 4-connected region
using flood-fill algorithm.
 Procedure (in C):
void flood_fill4(int x, int y, int new_color, int old_color)
{
    /* Replace the old interior color until a different color is met. */
    if (getpixel(x, y) == old_color) {
        putpixel(x, y, new_color);
        flood_fill4(x + 1, y, new_color, old_color);
        flood_fill4(x, y + 1, new_color, old_color);
        flood_fill4(x - 1, y, new_color, old_color);
        flood_fill4(x, y - 1, new_color, old_color);
    }
}
 Note: the 'getpixel' function gives the color of the specified pixel and the 'putpixel' function draws the pixel with the specified color.

TWO DIMENSION OBJECT REPRESENATION


INTRODUCTION:
Design of objects involves designing lines, curves and surfaces. Curve design is complex. Curves are classified into two broad classes:
i. Namable: namable curves are from classical geometry, those that can be analyzed mathematically by equations, i.e. planes, spheres, parabolas etc.
ii. Unnamable: industrial shapes demand aesthetic looks and a variety of properties, and are conventionally not described by the namable curves.
Curve Continuity
A complete curve is often made up of segments, so it is important to understand how individual segments can be connected. To guarantee a smooth transition from one section of the piecewise curve to the next, we can enforce continuity conditions at the link points.
There are two types of curve continuities:
 Geometric
 Parametric
Geometric Continuity:
G0: the two curve sections meet at a point.
G1: the two curve sections meet at a point and their tangent vectors have the same direction at the meeting point.
Gn: every pair of the first n derivatives of the two segments has the same direction at the point.
Spline Representations
 A spline is a flexible strip used to produce a smooth curve through a designated set of points.
 Several small weights are attached to spline to hold in particular position.
 Spline curve is a curve drawn with this method.
 The term spline curve now referred to any composite curve formed with polynomial
sections satisfying specified continuity condition at the boundary of the pieces.
 A spline surface can be described with two sets of orthogonal spline curves.

Interpolation and approximation splines


 We specify spline curve by giving a set of coordinate positions called control points. This
indicates the general shape of the curve.
 Interpolation Spline: - When curve section passes through each control point, the curve
is said to interpolate the set of control points and that spline is known as Interpolation

Spline.
Fig. 4.9: -interpolation spline. Fig. 4.10: -Approximation spline.

 Approximation Spline: - When curve section follows general control point path without
necessarily passing through any control point, the resulting curve is said to approximate
the set of control points and that curve is known as Approximation Spline.
 Spline curve can be modified by selecting different control point position.
 We can apply transformation on the curve according to need like translation scaling etc.
 The convex polygon boundary that encloses a set of control points is called convex hull.

Fig. 4.11: -convex hull shapes for two sets of control points.
 A poly line connecting the sequence of control points for an approximation spline is
usually displayed to remind a designer of the control point ordering. This set of connected
line segment is often referred as control graph of the curve.
 Control graph is also referred as control polygon or characteristic polygon.
Fig. 4.12: -Control-graph shapes for two different sets of control points.

Parametric continuity condition


 For smooth transition from one curve section on to next curve section we put
various continuity conditions at connection points.
 Let the parametric coordinate functions be
x = x(u), y = y(u), z = z(u),    u_1 ≤ u ≤ u_2
 Then zero-order parametric continuity (C0) means the curves simply meet, i.e. the last point of the first curve section and the first point of the second curve section are the same.
 First-order parametric continuity (C1) means the first parametric derivatives are the same for both curve sections at the intersection point.
 Second-order parametric continuity (C2) means both the first and second parametric derivatives of the two curve sections are the same at the intersection.
 Higher-order parametric continuity is obtained similarly.

Fig. 4.13: - Piecewise construction of a curve by joining two curve segments uses different
orders of continuity: (a) zero-order continuity only, (b) first-order continuity, and (c) second-
order continuity.
 First order continuity is often sufficient for general application but some graphics package
like cad requires second order continuity for accuracy.

Geometric continuity condition


 Another method for joining two successive curve sections is to specify condition for
geometric continuity.
 Zero order geometric continuity (g0) is same as parametric zero order continuity that two
curve section meets.
 First-order geometric continuity (G1) means that the parametric first derivatives are proportional at the intersection of two successive sections, but their magnitudes need not be equal.
 Second-order geometric continuity (G2) means that both the first and second parametric derivatives are proportional at the intersection of two successive sections, but their magnitudes need not be equal.
Cubic Spline Interpolation Methods
 Cubic splines are mostly used for representing path of moving object or existing
object shape or drawing.
 Sometimes it also used for design the object shapes.
 Cubic splines give reasonable computation cost compared to higher-order splines and are more stable compared to lower-order polynomial splines, so they are often used for modeling curve shapes.
 Cubic interpolation splines obtained by fitting the input points with piecewise
cubic polynomial curve that passes through every control point.

Fig. 4.14: -A piecewise continuous cubic-spline interpolation of n+1 control


points.
𝑝𝑘 = (𝑥𝑘 , 𝑦𝑘 , 𝑧𝑘 ) Where, k=0, 1, 2, 3 ..., n
 Parametric cubic polynomial for this curve is given by
𝑥(𝑢) = 𝑎𝑥𝑢3 + 𝑏𝑥𝑢2 + 𝑐𝑥𝑢 + 𝑑𝑥
𝑦(𝑢) = 𝑎𝑦𝑢3 + 𝑏𝑦𝑢2 + 𝑐𝑦𝑢 + 𝑑𝑦
𝑧(𝑢) = 𝑎𝑧𝑢3 + 𝑏𝑧𝑢2 + 𝑐𝑧𝑢 + 𝑑𝑧
𝑤ℎ𝑒𝑟𝑒( 0 ≤ 𝑢 ≤ 1)
 For above equation we need to determine for constant a, b, c and d the polynomial
representation for each of n curve section.
 This is obtained by settling proper boundary condition at the joints.
 Now we will see common method for settling this condition.

Natural Cubic Splines
 The natural cubic spline is a mathematical representation of the original drafting spline.
 We require the curve to have C2 continuity, meaning the first and second parametric derivatives of adjacent curve sections are the same at the control points.
 For n+1 control points we have n curve sections and 4n polynomial coefficients to find.
 Each interior control point supplies four boundary conditions: the two curve sections on either side of it must have the same first and second derivatives there, and each curve section passes through the control point.
 We get two more conditions because p_0 (the first control point) and p_n (the last control point) are the endpoints of the curve.
 We still require two more conditions to obtain all coefficient values.
 One approach is to set the second derivatives at p_0 and p_n to 0. Another approach is to add one extra dummy point at each end, i.e. we add p_{−1} and p_{n+1}; then all the original control points are interior and we get 4n boundary conditions.
 Although it is a mathematical model, it has the major disadvantage that a change in any control point changes the entire curve.
 So it does not allow local control and we cannot modify part of the curve.
Hermite Interpolation
 It is named after the French mathematician Charles Hermite.
 It is an interpolating piecewise cubic polynomial with a specified tangent at each control point.
 It can be adjusted locally because each curve section depends only on its endpoints.
 The boundary conditions for any curve section are:
p(0) = p_k
p(1) = p_{k+1}
p′(0) = dp_k
p′(1) = dp_{k+1}
Where dp_k and dp_{k+1} are the values of the parametric derivatives at points p_k and p_{k+1} respectively.
 Vector equation of cubic spline is:
𝑝(𝑢) = 𝑎𝑢3 + 𝑏𝑢2 + 𝑐𝑢 + 𝑑
 Where x component of p is
 𝑥(𝑢) = 𝑎𝑥𝑢3 + 𝑏𝑥𝑢2 + 𝑐𝑥𝑢 + 𝑑𝑥 and similarly y & z components
 The matrix form of the above equation is
P(u) = [u³  u²  u  1] [a]
                      [b]
                      [c]
                      [d]
 The derivative of p(u) is p′(u) = 3au² + 2bu + c
 The matrix form of p′(u) is
P′(u) = [3u²  2u  1  0] [a]
                        [b]
                        [c]
                        [d]
 Now substitute the endpoint values u = 0 and u = 1 in the above equations and combine all four boundary conditions in matrix form:
[p_k    ]   [0 0 0 1] [a]
[p_k+1  ] = [1 1 1 1] [b]
[dp_k   ]   [0 0 1 0] [c]
[dp_k+1 ]   [3 2 1 0] [d]
 Now solving for the polynomial coefficients:
[a]   [ 2  −2   1   1] [p_k    ]          [p_k    ]
[b] = [−3   3  −2  −1] [p_k+1  ]  = M_H · [p_k+1  ]
[c]   [ 0   0   1   0] [dp_k   ]          [dp_k   ]
[d]   [ 1   0   0   0] [dp_k+1 ]          [dp_k+1 ]
 Now put this into the equation for p(u):
                   [ 2  −2   1   1] [p_k    ]
p(u) = [u³ u² u 1] [−3   3  −2  −1] [p_k+1  ]
                   [ 0   0   1   0] [dp_k   ]
                   [ 1   0   0   0] [dp_k+1 ]
p(u) = p_k(2u³ − 3u² + 1) + p_{k+1}(−2u³ + 3u²) + dp_k(u³ − 2u² + u) + dp_{k+1}(u³ − u²)
p(u) = p_k H_0(u) + p_{k+1} H_1(u) + dp_k H_2(u) + dp_{k+1} H_3(u)
Where 𝐻𝑘(u) for k=0 , 1 , 2 , 3 are referred to as blending functions because that
blend the boundary constraint values for curve section.
 Shape of the four hermit blending function is given below.

Fig. 4.15: -the hermit blending functions.


 Hermite curves are used in digitizing applications where we input the approximate curve slopes, i.e. dp_k and dp_{k+1}.
 But in applications where these slopes are difficult to approximate, Hermite curves cannot be used.
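A small C sketch that evaluates one Hermite section from its endpoints and tangents using the blending functions above (2D points; the Pt type and function name are illustrative):

typedef struct { double x, y; } Pt;

/* Evaluate one Hermite section at parameter u in [0, 1]. */
Pt hermite(Pt pk, Pt pk1, Pt dpk, Pt dpk1, double u)
{
    double u2 = u * u, u3 = u2 * u;
    double h0 =  2*u3 - 3*u2 + 1;   /* blends p_k     */
    double h1 = -2*u3 + 3*u2;       /* blends p_{k+1} */
    double h2 =  u3 - 2*u2 + u;     /* blends dp_k    */
    double h3 =  u3 - u2;           /* blends dp_{k+1} */
    Pt p;
    p.x = h0*pk.x + h1*pk1.x + h2*dpk.x + h3*dpk1.x;
    p.y = h0*pk.y + h1*pk1.y + h2*dpk.y + h3*dpk1.y;
    return p;
}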

Bezier Curves and Surfaces


 It is developed by French engineer Pierre Bezier for the Renault automobile bodies.
 It has number of properties and easy to implement so it is widely available in
various CAD and graphics package.

Bezier Curves
 Bezier curve section can be fitted to any number of control points.
 Number of control points and their relative position gives degree of the Bezier
polynomials.
 With the interpolation spline Bezier curve can be specified with boundary condition
or blending function.
 Most convenient method is to specify Bezier curve with blending function.
 Consider we are given n+1 control point position from p0 to pn where pk = (xk, yk, zk).
 These are blended to give the position vector p(u), which describes the path of the approximating Bezier curve:
p(u) = ∑_{k=0}^{n} p_k BEZ_{k,n}(u),    0 ≤ u ≤ 1
Where BEZ_{k,n}(u) = C(n, k) u^k (1 − u)^{n−k}
And C(n, k) = n!/(k!(n − k)!)
 We can also solve Bezier blending function by recursion as follow:
𝐵𝐸𝑍𝑘,𝑛 (𝑢) = (1 − 𝑢)𝐵𝐸𝑍𝑘,𝑛−1 (𝑢) + 𝑢𝐵𝐸𝑍𝑘−1,𝑛−1 (𝑢) 𝑛>𝑘≥1
Here BEZk,k(u) = uᵏ and BEZ0,k(u) = (1 − u)ᵏ

 Parametric equations from the vector equation can be obtained as follows:
x(u) = ∑ₖ₌₀ⁿ xk BEZk,n(u)
y(u) = ∑ₖ₌₀ⁿ yk BEZk,n(u)
z(u) = ∑ₖ₌₀ⁿ zk BEZk,n(u)
 Bezier curve is a polynomial of degree one less than the number of control points.
 Below figure shows some possible curve shapes by selecting various control point.

Fig. 4.20: - Examples of 2D Bezier curves generated with different numbers of control points.
 Efficient method for determining coordinate positions along a Bezier curve can
be set up using recursive calculation
 For example, successive binomial coefficients can be calculated as
C(n, k) = ((n − k + 1)⁄k) C(n, k − 1),   n ≥ k

Properties of Bezier curves


 It always passes through first control point i.e. p(0) = p0
 It always passes through last control point i.e. p(1) = pn
 Parametric first order derivatives of a Bezier curve at the endpoints can be
obtain from control point coordinates as:
𝑝′ (0) = −𝑛𝑝0 + 𝑛𝑝1
𝑝′ (1) = −𝑛𝑝𝑛−1 + 𝑛𝑝𝑛
 Parametric second order derivatives of endpoints are also obtained by control point
coordinates as:
𝑝′′ (0) = 𝑛(𝑛 − 1)[(𝑝2 − 𝑝1 ) − (𝑝1 − 𝑝0 )]
𝑝′′ (1) = 𝑛(𝑛 − 1)[(𝑝𝑛−2 − 𝑝𝑛−1 ) − (𝑝𝑛−1 − 𝑝𝑛 )]
 Bezier curve always lies within the convex hull of the control points.
 Bezier blending function is always positive.
 Sum of all Bezier blending function is always 1.
∑ₖ₌₀ⁿ BEZk,n(u) = 1
 So any curve position is simply the weighted sum of the control point positions.
 Bezier curve smoothly follows the control points without erratic oscillations.
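To make the blending-function form concrete, here is a small Python sketch (illustrative only; the control points are arbitrary examples) that evaluates p(u) as the weighted sum of 2D control points:

    from math import comb

    # Evaluate a Bezier point from the Bernstein blending functions
    # BEZ(k,n)(u) = C(n,k) * u^k * (1-u)^(n-k).
    def bezier_point(points, u):
        n = len(points) - 1
        bez = [comb(n, k) * u**k * (1 - u)**(n - k) for k in range(n + 1)]
        x = sum(b * px for b, (px, _) in zip(bez, points))
        y = sum(b * py for b, (_, py) in zip(bez, points))
        return (x, y)

    # The curve passes through the first and last control points:
    pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
    print(bezier_point(pts, 0.0), bezier_point(pts, 1.0))   # (0.0, 0.0) (4.0, 0.0)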

Design Technique Using Bezier Curves


 For obtaining closed Bezier curve we specify first and last control point at same
position.
Fig. 4.21: - A closed Bezier curve generated by specifying the first and last control points at the same location.
 If we specify multiple control points at the same position, that position gets more weight and the curve is pulled towards it.

Fig. 4.22: - A Bezier curve can be made to pass closer to a given coordinate position by assigning multiple control points at that position.
 Bezier curve can be fitted for any number of control points but it requires
higher order polynomial calculation.
 Complicated Bezier curve can be generated by dividing whole curve into several
lower order polynomial curves. So we can get better control over the shape of
small region.
 Since Bezier curve passes through first and last control point it is easy to join
two curve sections with zero order parametric continuity (C0).
 For first order continuity we place the end point of the first curve and the start point of the second curve at the same position, and make the last two control points of the first curve and the first two control points of the second curve collinear. The second control point of the second curve is then placed at position
pn + (pn − pn−1)
Fig. 4.23: -Zero and first order continuous curve by putting control point at
proper place.
 Similarly for second order continuity the third control point of second curve
in terms of position of the last three control points of first curve section as
𝑝𝑛−2 + 4(𝑝𝑛 − 𝑝𝑛−1)
 C2 continuity can be unnecessarily restrictive; especially for a cubic curve it leaves only one control point free for adjusting the shape of the curve.

Cubic Bezier Curves


 Many graphics packages provide only cubic spline functions, because cubics give reasonable design flexibility with a moderate amount of calculation.
 Cubic Bezier curves are generated using 4 control points.
 4 blending function obtained by substituting n=3
𝐵𝐸𝑍0,3 (𝑢) = (1 − 𝑢)3
𝐵𝐸𝑍1,3 (𝑢) = 3𝑢(1 − 𝑢)2
𝐵𝐸𝑍2,3 (𝑢) = 3𝑢2 (1 − 𝑢)
𝐵𝐸𝑍3,3 (𝑢) = 𝑢3
 Plots of this Bezier blending function are shown in figure below

Fig. 4.24: -Four Bezier blending function for cubic curve.


 The form of blending functions determines how control points affect the
shape of the curve for values of parameter u over the range from 0 to 1.
At u = 0, BEZ0,3(u) is the only nonzero blending function, with value 1. At u = 1, BEZ3,3(u) is the only nonzero blending function, with value 1.
 So the cubic Bezier curve always passes through p0 and p3.
 The other blending functions affect the shape of the curve at intermediate values of the parameter u.
 𝐵𝐸𝑍1,3 (𝑢) is maximum at 𝑢 = 1⁄3and 𝐵𝐸𝑍2,3 (𝑢) is maximum at 𝑢 = 2⁄3
 The blending functions are nonzero over the entire range of u, so the cubic Bezier curve does not allow local control of its shape.
 At end point positions parametric first order derivatives are :
𝑝′ (0) = 3(𝑝1 − 𝑝0 )
𝑝′ (1) = 3(𝑝3 − 𝑝2 )
 And second order parametric derivatives are.
𝑝′′ (0) = 6(𝑝0 − 2𝑝1 + 𝑝2 )
𝑝′′ (1) = 6(𝑝1 − 2𝑝2 + 𝑝3 )
 This expression can be used to construct piecewise curve with C1 and C2 continuity.
 Now we represent the polynomial expression for the blending functions in matrix form:
p(u) = [u³ u² u 1] ∙ MBEZ ∙ [p0 p1 p2 p3]ᵀ

        [−1  3 −3 1]
MBEZ =  [ 3 −6  3 0]
        [−3  3  0 0]
        [ 1  0  0 0]
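A matching sketch of this matrix form (again illustrative; the control points are hypothetical), with MBEZ as given above:

    # Sketch: evaluate a cubic Bezier point via p(u) = U . M_BEZ . P.
    M_BEZ = [[-1,  3, -3, 1],
             [ 3, -6,  3, 0],
             [-3,  3,  0, 0],
             [ 1,  0,  0, 0]]

    def cubic_bezier(p0, p1, p2, p3, u):
        U = [u**3, u**2, u, 1]
        c = [sum(U[i] * M_BEZ[i][k] for i in range(4)) for k in range(4)]
        pts = (p0, p1, p2, p3)        # c[k] is the weight of control point pk
        return tuple(sum(c[k] * pts[k][d] for k in range(4)) for d in range(2))

    print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))   # (2.0, 1.5)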

 We can add additional parameter like tension and bias as we did with the
interpolating spline.

Bezier Surfaces
 Two sets of orthogonal Bezier curves can be used to design an object surface by
an input mesh of control points.
 By taking Cartesian product of Bezier blending function we obtain parametric vector
function as:
p(u, v) = ∑ⱼ₌₀ᵐ ∑ₖ₌₀ⁿ pj,k BEZj,m(v) BEZk,n(u)
 𝑝𝑗,𝑘 Specifying the location of the (m+1) by (n+1) control points.
 Figure below shows Bezier surfaces plot, control points are connected by
dashed line and curve is represented by solid lines.

Fig. 4.25: -Bezier surfaces constructed for (a) m=3, n=3, and (b) m=4, n=4.
Dashed line connects the control points.
 Each curve of constant u is plotted by varying v over interval 0 to 1. And
similarly we can plot for constant v.
 Bezier surfaces have same properties as Bezier curve, so it can be used in
interactive design application.
 For each surface patch we first select a mesh of control points in the xy plane and then select elevations in the z direction.
 We can put two or more surfaces together and form required surfaces using
method similar to curve section joining with continuity C0, C1, and C2 as per
need.

B-Spline Curves and Surfaces


 B-Spline is most widely used approximation spline.
 It has two advantage over Bezier spline
1. Degree of a B-Spline polynomial can be set independently of the number
of control points (with certain limitation).
2. B-Spline allows local control.
 The disadvantage of B-Splines is that they are more complex than Bezier splines.

B-Spline Curves
 General expression for B-Spline curve in terms of blending function is given by:
p(u) = ∑ₖ₌₀ⁿ pk Bk,d(u),   umin ≤ u ≤ umax, 2 ≤ d ≤ n + 1
Where pk is input set of control points.
 The range of parameter u is now depends on how we choose the B-Spline
parameters.
 B-Spline blending function Bk,d are polynomials of degree d-1 , where d can
be any value in between 2 to n+1.
 We can set d=1 but then curve is only point plot.
 By defining blending function for subintervals of whole range we can achieve local
control.
 Blending function of B-Spline is solved by Cox-deBoor recursion formulas as follows.
Bk,1(u) = 1 if uk ≤ u < uk+1, 0 otherwise
Bk,d(u) = ((u − uk)⁄(uk+d−1 − uk)) Bk,d−1(u) + ((uk+d − u)⁄(uk+d − uk+1)) Bk+1,d−1(u)
 The selected set of subinterval endpoints uj is referred to as a knot vector.
 We can choose any values for the subinterval endpoints, but they must satisfy uj ≤ uj+1.
 Values of 𝑢𝑚𝑖𝑛 and 𝑢𝑚𝑎𝑥 depends on number of control points, degree d, and knot vector.
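A direct Python transcription of the Cox-deBoor recursion above (a sketch; it assumes half-open knot intervals and skips the 0/0 terms that arise with repeated knots):

    # Cox-deBoor recursion: B(k,d) built from B(k,d-1) and B(k+1,d-1).
    def B(k, d, u, knots):
        if d == 1:
            return 1.0 if knots[k] <= u < knots[k + 1] else 0.0
        left = right = 0.0
        if knots[k + d - 1] != knots[k]:
            left = (u - knots[k]) / (knots[k + d - 1] - knots[k]) * B(k, d - 1, u, knots)
        if knots[k + d] != knots[k + 1]:
            right = (knots[k + d] - u) / (knots[k + d] - knots[k + 1]) * B(k + 1, d - 1, u, knots)
        return left + right

    knots = [0, 1, 2, 3, 4, 5, 6, 7]                   # uniform, d = 4, n = 3
    print(sum(B(k, 4, 3.5, knots) for k in range(4)))  # sums to 1.0 on [u3, u4]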
 Figure below shows local control
Fig. 4.26: -Local modification of B-Spline curve.
 B-Spline allows adding or removing control points in the curve without changing the
degree of curve.
 B-Spline curve lies within the convex hull of at most d+1 control points so
that B-Spline is tightly bound to input positions.
 For any u in between ud−1 and un+1, the sum of all blending functions is 1, i.e. ∑ₖ₌₀ⁿ Bk,d(u) = 1
 There are three general classification for knot vectors:
o Uniform
o Open uniform
o Non uniform

Properties of B-Spline Curves


 It has degree d-1 and continuity Cd-2 over range of u.
 For n+1 control point we have n+1 blending function.
 Each blending function 𝐵𝑘,𝑑 (𝑢) is defined over d subintervals of the total range of
u, starting at knot value uk.
 The range of u is divided into n+d subintervals by the n+d+1 values specified in the
knot vector.
 With knot values labeled as {𝑢0 , 𝑢1 , … , 𝑢𝑛+𝑑 } the resulting B-Spline curve is defined
only in interval from knot values 𝑢𝑑−1 up to knot values 𝑢𝑛+1
 Each spline section is influenced by d control points.
 Any one control point can affect at most d curve section.

Uniform Periodic B-Spline


 When spacing between knot values is constant, the resulting curve is called a
uniform B-Spline.
 For example {0.0,0.1,0.2, … ,1.0} or {0,1,2,3,4,5,6,7}
 Uniform B-Splines have periodic blending functions: for given values of n and d all blending functions have the same shape, and each successive blending function is simply a shifted version of the previous one.
𝐵𝑘,𝑑 (𝑢) = 𝐵𝑘+1,𝑑 (𝑢 + ∆𝑢) = 𝐵𝑘+2,𝑑 (𝑢 + 2∆𝑢)
Where ∆𝑢 is interval between adjacent knot vectors.

Cubic Periodic B-Spline


 It commonly used in many graphics packages.
 It is particularly useful for generating closed curve.
 If any three consecutive control points are identical the curve passes through that
coordinate position.
 Here for cubic curve d = 4 and n = 3 knot vector spans d+n+1 =4+3+1=8 so it is
{0,1,2,3,4,5,6,7}
 Boundary conditions for the periodic cubic B-Spline curve are obtained from the general equation p(u) = ∑ₖ₌₀ⁿ pk Bk,d(u). They are:
p(0) = (1⁄6)(p0 + 4p1 + p2)
p(1) = (1⁄6)(p1 + 4p2 + p3)
p′(0) = (1⁄2)(p2 − p0)
p′(1) = (1⁄2)(p3 − p1)
 The matrix formulation for a cubic periodic B-Spline with four control points can then be written as
p(u) = [u³ u² u 1] ∙ MB ∙ [p0 p1 p2 p3]ᵀ
Where
           [−1  3 −3 1]
MB = (1⁄6) [ 3 −6  3 0]
           [−3  0  3 0]
           [ 1  4  1 0]
 We can also modify the B-Spline equation to include a tension parameter t.
 The periodic cubic B-Spline with tension matrix then has the form:
−𝑡 12 − 9𝑡 9𝑡 − 12 𝑡
1 3𝑡 12𝑡 − 18 18 − 15𝑡 0
𝑀𝐵𝑡 = [ ]
6 −3𝑡 0 3𝑡 0
𝑡 6 − 2𝑡 𝑡 0
When t = 1 𝑀𝐵𝑡 = 𝑀𝐵
 We can obtain the cubic B-Spline blending functions for the parametric range 0 to 1 by converting the matrix representation into polynomial form. For t = 1 we have:
B0,3(u) = (1⁄6)(1 − u)³
B1,3(u) = (1⁄6)(3u³ − 6u² + 4)
B2,3(u) = (1⁄6)(−3u³ + 3u² + 3u + 1)
B3,3(u) = (1⁄6)u³

Open Uniform B-Splines


 This class is a cross between uniform and non-uniform B-Splines.
 Sometimes it is treated as a special type of uniform B-Spline, and sometimes as a non-uniform B-Spline.
 For open uniform B-Spline (open B-Spline) the knot spacing is uniform
except at the ends where knot values are repeated d times.
 For example {0,0,1,2,3,3} for d=2 and n=3, and {0,0,0,0,1,2,2,2,2} for d=4 and n=4.
 For any values of parameter d and n we can generate an open uniform knot
vector with integer values using the calculations as follow:
uj = 0            for 0 ≤ j < d
uj = j − d + 1    for d ≤ j ≤ n
uj = n − d + 2    for j > n
Where 0 ≤ j ≤ n + d
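A small Python sketch that generates such a knot vector from the three-case rule above:

    # Open uniform knot vector u_j for 0 <= j <= n + d.
    def open_uniform_knots(n, d):
        knots = []
        for j in range(n + d + 1):
            if j < d:
                knots.append(0)              # first d knots repeat 0
            elif j <= n:
                knots.append(j - d + 1)      # interior knots increase by 1
            else:
                knots.append(n - d + 2)      # last d knots repeat the maximum
        return knots

    print(open_uniform_knots(3, 2))   # [0, 0, 1, 2, 3, 3]
    print(open_uniform_knots(4, 4))   # [0, 0, 0, 0, 1, 2, 2, 2, 2]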
 Open uniform B-Spline is similar to Bezier spline if we take d=n+1 it will
reduce to Bezier spline as all knot values are either 0 or 1.
 For example cubic open uniform B-Spline with d=4 have knot vector
{0,0,0,0,1,1,1,1}.
 Open uniform B-Spline curve passes through first and last control points.
 Also slope at each end is parallel to line joining two adjacent control points at that
end.
 So geometric condition for matching curve sections are same as for Bezier curves.
 For closed curve we specify first and last control point at the same position.

Non Uniform B-Spline


 For this class of spline we can specify any values and interval for knot vector.
 For example {0,1,2,3,3,4}, and {0,0,1,2,2,3,4}
 This gives more flexible curve shapes: each blending function has its own shape and is defined over its own intervals.
 By increasing knot multiplicity we produce variation in curve shape and also
introduce discontinuities.
 Multiple knot value also reduces continuity by 1 for each repeat of particular value.
 We can solve non uniform B-Spline using similar method as we used in uniform B-
Spline.
 For set of n+1 control point we set degree d and knot values.
 Then using the recurrence relations we can obtain blending function or
evaluate curve position directly for display of the curve.

B-Spline Surfaces
 B-Spline surface formation is similar to that of Bezier splines: orthogonal sets of curves are used, and two surfaces are connected using the same methods as for Bezier surfaces.
 The vector equation of a B-Spline surface is given by the Cartesian product of B-Spline blending functions:
p(u, v) = ∑ₖ₁₌₀ⁿ¹ ∑ₖ₂₌₀ⁿ² pk1,k2 Bk1,d1(u) Bk2,d2(v)
 Where 𝑝𝑘1,𝑘2 specify control point position.
 It has same properties as B-Spline curve.
What are Fractals?
Fractals are very complex pictures generated by a computer from a single formula. They are created
using iterations. This means one formula is repeated with slightly different values over and over again,
taking into account the results from the previous iteration.
Fractals are used in many areas such as −
 Astronomy − For analyzing galaxies, rings of Saturn, etc.
 Biology/Chemistry − For depicting bacteria cultures, Chemical reactions, human anatomy,
molecules, plants,
 Others − For depicting clouds, coastline and borderlines, data compression, diffusion, economy,
fractal art, fractal music, landscapes, special effect, etc.

Generation of Fractals
Fractals can be generated by repeating the same shape over and over again, as shown in the following figure. Figure (a) shows an equilateral triangle. In figure (b), the triangle is repeated to create a star-like shape. In figure (c), the star shape of figure (b) is repeated again and again to create a new shape.
We can do unlimited number of iteration to create a desired shape. In programming terms, recursion is
used to create such shapes.

Geometric Fractals
Geometric fractals deal with shapes found in nature that have non-integer, or fractal, dimensions. To geometrically construct a deterministic (nonrandom) self-similar fractal, we start with a given geometric shape, called the initiator. Subparts of the initiator are then replaced with a pattern, called the generator.

As an example, if we use the initiator and generator shown in the above figure, we can construct a pattern by repeating the replacement. Each straight-line segment in the initiator is replaced with four equal-length line segments at each step. The scaling factor is 1/3, so the fractal dimension is D = ln 4 / ln 3 ≈ 1.2619. Also, the total length of the curve increases by a factor of 4/3 at each step, so the length of the fractal curve tends to infinity as more detail is added, as shown in the following figure −
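A minimal Python sketch of this initiator/generator construction (each segment is replaced by four segments of one-third the length, which gives the dimension D = ln 4 / ln 3 stated above):

    from math import cos, sin, radians

    # One generator step: replace every segment by four 1/3-length segments.
    def koch_step(points):
        s, c = sin(radians(60)), cos(radians(60))
        out = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
            a = (x0 + dx, y0 + dy)                            # 1/3 point
            apex = (a[0] + dx*c - dy*s, a[1] + dx*s + dy*c)   # middle third rotated 60 deg
            b = (x0 + 2*dx, y0 + 2*dy)                        # 2/3 point
            out += [a, apex, b, (x1, y1)]
        return out

    curve = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(3):                    # three iterations of the generator
        curve = koch_step(curve)
    print(len(curve) - 1, "segments")     # 4**3 = 64 segments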

Module:03

Module – III (8 hours)


Three Dimensional Geometric and Modeling
Transformations: Translation, Rotation, Scaling,
Reflections, shear, Composite Transformation.
Projections: Parallel Projection, Perspective
Projection. Visible Surface Detection Methods:
Back-Face Detection, Depth Buffer, A- Buffer, Scan-
Line Algorithm, Painters Algorithm.
Three Dimensional Display Methods
3D Translation

Fig. 5.1: - 3D Translation.

 Similar to 2D translation, which used 3x3 matrices, 3D translation uses 4x4 matrices with homogeneous coordinates (x, y, z, h).

 In 3D translation, a point (x, y, z) is translated by amounts tx, ty and tz to the location (x′, y′, z′):
x′ = x + tx
y′ = y + ty
z′ = z + tz
 Let's see the matrix equation:
P′ = T ∙ P
[x′]   [1 0 0 tx]   [x]
[y′] = [0 1 0 ty] ∙ [y]
[z′]   [0 0 1 tz]   [z]
[1 ]   [0 0 0 1 ]   [1]

 Example: Translate the given point P(10, 10, 10) in 3D space with translation factor T(10, 20, 5).
P′ = T ∙ P
[x′]   [1 0 0 10]   [10]   [20]
[y′] = [0 1 0 20] ∙ [10] = [30]
[z′]   [0 0 1 5 ]   [10]   [15]
[1 ]   [0 0 0 1 ]   [1 ]   [1 ]
Final coordinate after translation is P′(20, 30, 15).
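The same computation in a few lines of Python (a sketch using plain nested lists for the homogeneous matrix):

    # Apply a 4x4 homogeneous transformation matrix to a 3D point.
    def transform(M, p):
        col = (p[0], p[1], p[2], 1)
        return tuple(sum(M[r][c] * col[c] for c in range(4)) for r in range(3))

    def translation(tx, ty, tz):
        return [[1, 0, 0, tx],
                [0, 1, 0, ty],
                [0, 0, 1, tz],
                [0, 0, 0, 1]]

    print(transform(translation(10, 20, 5), (10, 10, 10)))   # (20, 30, 15)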

Rotation
 For 3D rotation we need to pick an axis to rotate about.
 The most common choices are the X-axis, the Y-axis, and the Z-axis
Coordinate-Axes Rotations


Fig. 5.2: - 3D Rotations.

Z-Axis Rotation
 Two dimensional rotation equations can be easily converted into 3D z-axis rotation equations.
 For rotation about the z axis we leave the z coordinate unchanged:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
z′ = z
Where parameter θ specifies the rotation angle.
 The matrix equation is written as P′ = Rz(θ) ∙ P:
[x′]   [cos θ −sin θ 0 0]   [x]
[y′] = [sin θ  cos θ 0 0] ∙ [y]
[z′]   [0      0     1 0]   [z]
[1 ]   [0      0     0 1]   [1]

X-Axis Rotation
 Transformation equations for x-axis rotation are obtained from the z-axis rotation equations by the cyclic replacement x → y → z → x.
 For rotation about the x axis we leave the x coordinate unchanged:
y′ = y cos θ − z sin θ
z′ = y sin θ + z cos θ
x′ = x
Where parameter θ specifies the rotation angle.
 The matrix equation is written as P′ = Rx(θ) ∙ P:
[x′]   [1 0      0     0]   [x]
[y′] = [0 cos θ −sin θ 0] ∙ [y]
[z′]   [0 sin θ  cos θ 0]   [z]
[1 ]   [0 0      0     1]   [1]

Y-Axis Rotation
 Transformation equations for y-axis rotation are obtained from the x-axis rotation equations by the cyclic replacement x → y → z → x.
 For rotation about the y axis we leave the y coordinate unchanged:
z′ = z cos θ − x sin θ
x′ = z sin θ + x cos θ
y′ = y
Where parameter θ specifies the rotation angle.
 The matrix equation is written as P′ = Ry(θ) ∙ P:
[x′]   [ cos θ 0 sin θ 0]   [x]
[y′] = [ 0     1 0     0] ∙ [y]
[z′]   [−sin θ 0 cos θ 0]   [z]
[1 ]   [ 0     0 0     1]   [1]

 Example: Rotate the point P(5, 5, 5) by 90° about the z axis.
P′ = Rz(θ) ∙ P
[x′]   [cos 90 −sin 90 0 0]   [5]   [0 −1 0 0]   [5]   [−5]
[y′] = [sin 90  cos 90 0 0] ∙ [5] = [1  0 0 0] ∙ [5] = [ 5]
[z′]   [0       0      1 0]   [5]   [0  0 1 0]   [5]   [ 5]
[1 ]   [0       0      0 1]   [1]   [0  0 0 1]   [1]   [ 1]
Final coordinate after rotation is P′(−5, 5, 5).

General 3D Rotations when the rotation axis is parallel to one of the standard axes
 Three steps are required to complete such a rotation:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
 This can be represented in equation form as:
𝑷′ = 𝑻−𝟏 ∙ 𝑹(𝜽) ∙ 𝑻 ∙ 𝑷
General 3D Rotations when the rotation axis is inclined in an arbitrary direction
 When object is to be rotated about an axis that is not parallel to one of the coordinate
axes, we need rotations to align the axis with a selected coordinate axis and to bring the
axis back to its original orientation.
 Five steps are required to complete such a rotation.
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.
 We can transform rotation axis onto any of the three coordinate axes. The Z-axis is a reasonable
choice.
 We are given line in the form of two end points P1 (x1,y1,z1), and P2 (x2,y2,z2).
 We will see procedure step by step.
1) Translate the object so that the rotation axis passes through the coordinate origin.

Fig. 5.3: - Translation of vector V.


 For the translation in step one we bring the first end point to the origin; the transformation matrix for this is:
    [1 0 0 −x1]
T = [0 1 0 −y1]
    [0 0 1 −z1]
    [0 0 0  1 ]
2) Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
 This task can be completed by two rotations: first a rotation about the x axis and then a rotation about the y axis.
 But here we do not know rotation angle so we will use dot product and vector product.

 Let's write the rotation axis in vector form:
V = P2 − P1 = (x2 − x1, y2 − y1, z2 − z1)
 The unit vector along the rotation axis is obtained by dividing the vector by its magnitude:
u = V⁄|V| = ((x2 − x1)⁄|V|, (y2 − y1)⁄|V|, (z2 − z1)⁄|V|) = (a, b, c)

Fig. 5.4: - Projection of u on YZ-Plane.


 Now we need the cosine and sine of the angle α between u′ and the z axis, where u′ is the projection of u on the YZ plane. For that we take the dot product and cross product of u′ and uz.
 The coordinates of u′ are (0, b, c), since projecting onto the YZ plane sets the x value to zero.
u′ ∙ uz = |u′||uz| cos α
cos α = (u′ ∙ uz)⁄(|u′||uz|) = ((0, b, c) ∙ (0, 0, 1))⁄(√(b² + c²) ∙ 1) = c⁄d,   where d = √(b² + c²)
And the cross product:
u′ × uz = ux |u′||uz| sin α   and also   u′ × uz = ux ∙ b
Comparing magnitudes:
|u′||uz| sin α = b
√(b² + c²) ∙ (1) sin α = b
d sin α = b
sin α = b⁄d
Now we have sin α and cos α, so we can write the matrix for rotation about the x axis:
        [1 0      0     0]     [1 0    0    0]
Rx(α) = [0 cos α −sin α 0]  =  [0 c⁄d −b⁄d 0]
        [0 sin α  cos α 0]     [0 b⁄d  c⁄d 0]
        [0 0      0     1]     [0 0    0    1]
 After performing the above rotation, u is rotated into u″ in the XZ plane, with coordinates (a, 0, √(b² + c²)). Rotation about the x axis leaves the x coordinate unchanged; u″ lies in the XZ plane so its y coordinate is zero, and its z component equals the magnitude of u′.
 Now rotate u″ about the y axis so that it coincides with the z axis.

Fig. 5.5: - Rotation of u about X-axis.


For that we repeat the above procedure with u″ and uz to find the matrix for rotation about the y axis.
u″ ∙ uz = |u″||uz| cos β
cos β = (u″ ∙ uz)⁄(|u″||uz|) = ((a, 0, √(b² + c²)) ∙ (0, 0, 1))⁄1 = √(b² + c²) = d
And
u″ × uz = uy |u″||uz| sin β   and also   u″ × uz = uy ∙ (−a)
Comparing magnitudes:
|u″||uz| sin β = −a
(1) sin β = −a
sin β = −a
Now we have sin β and cos β, so we can write the matrix for rotation about the y axis:
        [ cos β 0 sin β 0]     [ d 0 −a 0]
Ry(β) = [ 0     1 0     0]  =  [ 0 1  0 0]
        [−sin β 0 cos β 0]     [ a 0  d 0]
        [ 0     0 0     1]     [ 0 0  0 1]

 Now by combining both rotations we can make the rotation axis coincide with the z axis.
3) Perform the specified rotation about that coordinate axis.
 Since we have aligned the rotation axis with the z axis, the matrix for rotation about the z axis is:
        [cos θ −sin θ 0 0]
Rz(θ) = [sin θ  cos θ 0 0]
        [0      0     1 0]
        [0      0     0 1]
4) Apply inverse rotations to bring the rotation axis back to its original orientation.
 This step is the inverse of step 2.
5) Apply the inverse translation to bring the rotation axis back to its original position.
 This step is the inverse of step 1.

So finally sequence of transformation for general 3D rotation is

𝑷′ = 𝑻−𝟏 ∙ 𝑹𝒙−𝟏(𝜶) ∙ 𝑹𝒚−𝟏(𝜷) ∙ 𝑹𝒛(𝜽) ∙ 𝑹𝒚(𝜷) ∙ 𝑹𝒙(𝜶) ∙ 𝑻 ∙ 𝑷
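As a sketch of step 2 (a hypothetical helper, reusing the axis components (a, b, c) and d = √(b² + c²) from the derivation above):

    from math import sqrt

    # Build Rx(alpha) and Ry(beta) that align a unit axis u = (a, b, c)
    # with the z axis, as derived above. Assumes d != 0 (axis not along x).
    def align_axis_with_z(a, b, c):
        d = sqrt(b*b + c*c)               # magnitude of u projected on the YZ plane
        Rx = [[1, 0,    0,    0],         # rotates u into the xz plane
              [0, c/d, -b/d,  0],
              [0, b/d,  c/d,  0],
              [0, 0,    0,    1]]
        Ry = [[ d, 0, -a, 0],             # rotates u'' onto the z axis
              [ 0, 1,  0, 0],
              [ a, 0,  d, 0],
              [ 0, 0,  0, 1]]
        return Rx, Ry

    Rx, Ry = align_axis_with_z(0.0, 0.6, 0.8)   # example unit axis
    print(Rx[1][1], Ry[0][0])                    # c/d = 0.8, d = 1.0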

Scaling
 It is used to resize the object in 3D space.
 We can apply uniform as well as non uniform scaling by selecting proper scaling factor.
 Scaling in 3D is similar to scaling in 2D. Only one extra coordinate need to consider into it.

Coordinate Axes Scaling


Fig. 5.6: - 3D Scaling.

 Simple coordinate-axes scaling is performed as P′ = S ∙ P:
[x′]   [sx 0  0  0]   [x]
[y′] = [0  sy 0  0] ∙ [y]
[z′]   [0  0  sz 0]   [z]
[1 ]   [0  0  0  1]   [1]

 Example: Scale the line AB with coordinates A(10, 20, 10) and B(20, 30, 30) with scale factor S(3, 2, 4).
P′ = S ∙ P
[Ax′ Bx′]   [3 0 0 0]   [10 20]   [30  60]
[Ay′ By′] = [0 2 0 0] ∙ [20 30] = [40  60]
[Az′ Bz′]   [0 0 4 0]   [10 30]   [40 120]
[1   1  ]   [0 0 0 1]   [1  1 ]   [1   1 ]
Final coordinates after scaling are A′(30, 40, 40) and B′(60, 60, 120).

Fixed Point Scaling

Fig. 5.7: - 3D Fixed point scaling.


 Fixed point scaling is used when we require scaling of object but particular point must be
at its original position.
 Fixed point scaling matrix can be obtained in three step procedure.
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using coordinate axes scaling.
3. Translate the fixed point back to its original position.

𝑷′ = 𝑻(𝒙𝒇, 𝒚𝒇, 𝒛𝒇) ∙ 𝑺(𝒔𝒙, 𝒔𝒚, 𝒔𝒛) ∙ 𝑻(−𝒙𝒇, −𝒚𝒇, −𝒛𝒇) ∙ 𝑷


 Let's see its equation:
     [1 0 0 xf]   [sx 0  0  0]   [1 0 0 −xf]
P′ = [0 1 0 yf] ∙ [0  sy 0  0] ∙ [0 1 0 −yf] ∙ P
     [0 0 1 zf]   [0  0  sz 0]   [0 0 1 −zf]
     [0 0 0 1 ]   [0  0  0  1]   [0 0 0  1 ]

     [sx 0  0  (1 − sx)xf]
P′ = [0  sy 0  (1 − sy)yf] ∙ P
     [0  0  sz (1 − sz)zf]
     [0  0  0   1        ]

Other Transformations

Reflections
 Reflection means mirror image produced when mirror is placed at require position.
 When mirror is placed in XY-plane we obtain coordinates of image by just changing the
sign of z coordinate.

 The transformation matrix for reflection about the XY plane is:
      [1 0  0 0]
RFz = [0 1  0 0]
      [0 0 −1 0]
      [0 0  0 1]
 Similarly, the transformation matrix for reflection about the YZ plane is:
      [−1 0 0 0]
RFx = [ 0 1 0 0]
      [ 0 0 1 0]
      [ 0 0 0 1]
 And the transformation matrix for reflection about the XZ plane is:
      [1  0 0 0]
RFy = [0 −1 0 0]
      [0  0 1 0]
      [0  0 0 1]

Shears
 Shearing transformation can be used to modify object shapes.
 They are also useful in 3D viewing for obtaining general projection transformations.
 Here we use shear parameters 'a' and 'b'.
 The shear matrix for the z axis is:
      [1 0 a 0]
SHz = [0 1 b 0]
      [0 0 1 0]
      [0 0 0 1]
 Similarly, the shear matrix for the x axis is:
      [1 0 0 0]
SHx = [a 1 0 0]
      [b 0 1 0]
      [0 0 0 1]
 And the shear matrix for the y axis is:
      [1 a 0 0]
SHy = [0 1 0 0]
      [0 b 1 0]
      [0 0 0 1]
Viewing Pipeline

Modeling Coordinates → (Modeling Transformation) → World Coordinates → (Viewing Transformation) → Viewing Coordinates → (Projection Transformation) → Projection Coordinates → (Workstation Transformation) → Device Coordinates
Fig. 5.8: - General 3D viewing pipeline.


 Steps involved in 3D pipeline are similar to the process of taking a photograph.
 As shown in figure that initially we have modeling coordinate of any object which we want
to display on the screen.
 By applying the modeling transformation we convert modeling coordinates to world coordinates, which specify which part or portion of the scene is to be displayed.
 Then by applying viewing transformation we obtain the viewing coordinate which is fitted
in viewing coordinate reference frame.
 Then in case of three dimensional objects we have three dimensions coordinate but we
need to display that object on two dimensional screens so we apply projection
transformation on it which gives projection coordinate.
 Finally projection coordinate is converted into device coordinate by applying workstation
transformation which gives coordinates which is specific to particular device.

Viewing Co-ordinates.
 Generating a view of an object is similar to photographing the object.
 We can take a photograph from any side, with any angle and orientation of the camera.
 Similarly, we can specify the viewing coordinate system in any arbitrary direction.

Fig. 5.9: -A right handed viewing coordinate system, with axes Xv, Yv, and Zv, relative to a
world- coordinate scene.

Specifying the view plane


 We decide the view for a scene by first establishing the viewing coordinate system, also referred to as the view reference coordinate system.
 Then projection plane is setup in perpendicular direction to Zv axis.
 Then positions in the scene are transferred to viewing coordinates, and the viewing coordinates are projected onto the view plane.
 The origin of our viewing coordinate system is called view reference point.
 The view reference point is often chosen to be close to or on the surface of some object in the scene, but we can also choose other points.
 Next we select positive direction for the viewing Zv axis and the orientation of the view
plane by specifying the view plane normal vector N.
 Finally we choose the up direction for the view by specifying a vector V called the view up
vector. Which specify orientation of camera.
 View up vector is generally selected perpendicular to normal vector but we can select
any angle between V & N.
 By fixing view reference point and changing direction of normal vector N we get different
views of same object this is illustrated by figure below.

Fig. 5.10: -Viewing scene from different direction with a fixed view-reference point.

Transformation from world to viewing coordinates


 Before projecting onto the view plane, the object description needs to be transferred from world to viewing coordinates.
 It is same as transformation that superimposes viewing coordinate system to world coordinate
system.
 It requires the following basic transformations:
1) Translate the view reference point to the origin of the world coordinate system.
2) Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes.

Fig. 5.11: - Aligning a viewing system with the world-coordinate axes using a sequence of translate-
rotate transformations.
 The figure shows the steps of the transformation.
 Consider the view reference point in the world coordinate system at position (x0, y0, z0); to align the view reference point with the world origin we perform a translation with matrix:
    [1 0 0 −x0]
T = [0 1 0 −y0]
    [0 0 1 −z0]
    [0 0 0  1 ]
 We now require a rotation sequence of up to three coordinate-axis rotations, depending on the direction we choose for N.

 In the general case N is in an arbitrary direction, and we can align it with the world coordinate axes by the rotation sequence Rz ∙ Ry ∙ Rx.

 Another method for generating the rotation transformation matrix is to calculate the unit uvn vectors and form the composite rotation matrix directly.
 Here
n = N⁄|N| = (n1, n2, n3)
u = (V × N)⁄|V × N| = (u1, u2, u3)
v = n × u = (v1, v2, v3)
 This method also automatically adjusts the direction for u so that v is perpendicular to n.

 The composite rotation matrix for the viewing transformation is then:
    [u1 u2 u3 0]
R = [v1 v2 v3 0]
    [n1 n2 n3 0]
    [0  0  0  1]
 This aligns u to Xw axis, v to Yw axis and n to Zw axis.

 Finally, the composite matrix for the world-to-viewing coordinate transformation is given by:
Mwc,vc = R ∙ T

 This transformation is applied to object’s coordinate to transfer them to the viewing reference
frame.
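A sketch of this uvn construction (hand-rolled cross product and normalization; the vector values are hypothetical examples):

    from math import sqrt

    def cross(p, q):
        return (p[1]*q[2] - p[2]*q[1],
                p[2]*q[0] - p[0]*q[2],
                p[0]*q[1] - p[1]*q[0])

    def normalize(p):
        m = sqrt(sum(c*c for c in p))
        return tuple(c / m for c in p)

    # Build the uvn viewing vectors from view-plane normal N and view-up V.
    def uvn(N, V):
        n = normalize(N)
        u = normalize(cross(V, N))
        v = cross(n, u)          # automatically perpendicular to n and u
        return u, v, n

    u, v, n = uvn(N=(0, 0, 1), V=(0, 1, 0))
    print(u, v, n)   # ((1,0,0), (0,1,0), (0,0,1)) for this simple view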

Projections
 Once world-coordinate descriptions of the objects in a scene are converted to viewing
coordinates, we can project the three-dimensional objects onto the two-dimensional view
plane.
 Process of converting three-dimensional coordinates into two-dimensional scene is known as
projection.
 There are two projection methods namely.
1. Parallel Projection.
2. Perspective Projection.
 Lets discuss each one.

Parallel Projections
Fig. 5.12: - Parallel projection.


 In a parallel projection, coordinate positions are transformed to the view plane along
parallel lines, as shown in the, example of above Figure.
 We can specify a parallel projection with a projection vector that defines the direction for
the projection lines.
 It is further divide into two types.
1. Orthographic parallel projection.
2. Oblique parallel projection.

Orthographic parallel projection


Fig. 5.13: - Orthographic parallel projection.


 When the projection lines are perpendicular to the view plane, we have an orthographic
parallel projection.
 Orthographic projections are most often used to produce the front, side, and top views of
an object, as shown in Fig.

Fig. 5.14: - Orthographic parallel projection.


 Engineering and architectural drawings commonly use orthographic projections, because lengths and angles are accurately depicted and can be measured from the drawings.
 We can also form orthographic projections that display more than one face of an object. Such views are called axonometric orthographic projections. A very good example is the isometric projection.
 Transformation equations for an orthographic parallel projection are straight forward.
 If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) in
viewing coordinates is transformed to projection coordinates as
𝒙𝒑 = 𝒙, 𝒚𝒑 = 𝒚
Fig. 5.15: - Orthographic parallel projection.


 Where the original z-coordinate value is preserved for the depth information needed in
depth cueing and visible-surface determination procedures.

Oblique parallel projection.


Fig. 5.16: - Oblique parallel projection.


 An oblique projection is obtained by projecting points along parallel lines that are not
perpendicular to the projection plane.
 Coordinate of oblique parallel projection can be obtained as below.

Fig. 5.17: - Oblique parallel projection.


 As shown in the figure (X,Y,Z) is a point of which we are taking oblique projection (Xp,Yp)
on the view plane and point (X,Y) on view plane is orthographic projection of (X,Y,Z).
 Now from the figure, using trigonometric rules we can write:
xp = x + L cos ∅
yp = y + L sin ∅
 Length L depends on the angle α and the z coordinate of the point to be projected:
tan α = z⁄L
L = z⁄tan α
L = z L1,   where L1 = 1⁄tan α
 Now put the value of L into the projection equations:
xp = x + z L1 cos ∅
yp = y + z L1 sin ∅
 The transformation matrix for these equations is:
            [1 0 L1 cos ∅ 0]
Mparallel = [0 1 L1 sin ∅ 0]
            [0 0 0        0]
            [0 0 0        1]
 This matrix can be used for any parallel projection. For an orthographic projection L1 = 0, so the whole term multiplying the z component is zero.
 When tan α = 1 the projection is known as a Cavalier projection.
 When tan α = 2 the projection is known as a Cabinet projection.
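A short Python sketch of the oblique projection equations (angles in degrees; L1 = 1/tan α; the default angle values are arbitrary):

    from math import cos, sin, tan, radians

    # Oblique parallel projection onto the view plane z = 0.
    # tan(alpha) = 1 gives a cavalier projection; tan(alpha) = 2, a cabinet one.
    def oblique_project(x, y, z, alpha_deg=45.0, phi_deg=30.0):
        L1 = 1.0 / tan(radians(alpha_deg))
        phi = radians(phi_deg)
        return (x + z * L1 * cos(phi), y + z * L1 * sin(phi))

    print(oblique_project(1.0, 1.0, 1.0))   # the depth z is sheared into x and y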

Perspective Projection
Fig. 5.18: - Perspective projection.


 In perspective projection object positions are transformed to the view plane along lines that
converge to a point called the projection reference point (or center of projection or
vanishing point).

Fig. 5.19: - Perspective projection.


 Suppose we set the projection reference point at position zprp along the zv axis, and we
place the view plane at zvp as shown in Figure above. We can write equations describing
coordinate positions along this perspective projection line in parametric form as
𝒙′ = 𝒙 − 𝒙𝒖
𝒚′ = 𝒚 − 𝒚𝒖
𝒛′ = 𝒛 − (𝒛 − 𝒛𝒑𝒓𝒑)𝒖
 Here parameter u takes values from 0 to 1, depending on the positions of the object, the view plane, and the projection reference point.
For obtaining the value of u we put z′ = zvp and solve the equation for z′:
z′ = z − (z − zprp)u
zvp = z − (z − zprp)u
u = (zvp − z)⁄(zprp − z)
 Now substituting this value of u into the equations for x′ and y′ we obtain:
xp = x((zprp − zvp)⁄(zprp − z)) = x(dp⁄(zprp − z))
yp = y((zprp − zvp)⁄(zprp − z)) = y(dp⁄(zprp − z)),   where dp = zprp − zvp
 Using three-dimensional homogeneous-coordinate representations, we can write the perspective projection transformation in matrix form as:
[xh]   [1 0  0         0            ]   [x]
[yh] = [0 1  0         0            ] ∙ [y]
[zh]   [0 0 −zvp⁄dp   zvp(zprp⁄dp) ]   [z]
[h ]   [0 0 −1⁄dp     zprp⁄dp      ]   [1]
 In this representation, the homogeneous factor is
h = (zprp − z)⁄dp
and   xp = xh⁄h,   yp = yh⁄h

 There are a number of special cases of the perspective transformation equations.
 If the view plane is taken to be the uv plane, then zvp = 0 and the projection coordinates are:
xp = x(zprp⁄(zprp − z)) = x(1⁄(1 − z⁄zprp))
yp = y(zprp⁄(zprp − z)) = y(1⁄(1 − z⁄zprp))
 If we take the projection reference point at the origin, then zprp = 0 and the projection coordinates are:
xp = x(zvp⁄z) = x(1⁄(z⁄zvp))
yp = y(zvp⁄z) = y(1⁄(z⁄zvp))
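A sketch of the general case in Python (projection reference point and view plane both on the zv axis; the test points are arbitrary):

    # Perspective projection onto the view plane z = zvp, with the
    # projection reference point at z = zprp.
    def perspective_project(x, y, z, zprp, zvp):
        dp = zprp - zvp                 # distance from PRP to the view plane
        h = (zprp - z) / dp             # homogeneous factor derived above
        return (x / h, y / h)           # xp = xh / h, yp = yh / h

    # Points farther from the view plane project smaller:
    print(perspective_project(1, 1, -1, zprp=2, zvp=0))   # ~(0.667, 0.667)
    print(perspective_project(1, 1, -4, zprp=2, zvp=0))   # ~(0.333, 0.333)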

 The vanishing point for any set of lines that are parallel to one of the principal axes of an
object is referred to as a principal vanishing point
 We control the number of principal vanishing points (one, two, or three) with the
orientation of the projection plane, and perspective projections are accordingly classified as
one-point, two-point, or three- point projections.

The number of principal vanishing points in a projection is determined by the number of principal axes intersecting the view plane.
Parallel Projection
 This method generates view from solid object by projecting parallel lines onto the display plane.
 By changing viewing position we can get different views of 3D object onto 2D display screen.

Fig. 4.1: - different views object by changing viewing plane position.


 Above figure shows different views of objects.
 This technique is used in Engineering & Architecture drawing to represent an object with
a set of views that maintain relative properties of the object e.g.:- orthographic projection.

Perspective projection
 This method generates a view of a 3D object by projecting points onto the display plane along converging paths.
Fig. 4.2: - perspective projection
 This will display object smaller when it is away from the view plane and of nearly same
size when closer to view plane.
 It will produce more realistic view as it is the way our eye is forming image.

Depth cueing
 Many times depth information is important so that we can identify for a particular viewing
direction which are the front surfaces and which are the back surfaces of display object.
 Simple method to do this is depth cueing in which assign higher intensity to closer object
& lower intensity to the far objects.
 Depth cuing is applied by choosing maximum and minimum intensity values and a range of
distance over which the intensities are to vary.
 Another application is to modeling effect of atmosphere.

Visible line and surface Identification


 In this method we first identify visible lines or surfaces by some method.
 Then display visible lines with highlighting or with some different color.
 Another way is to display hidden lines with dashed lines, or simply not display hidden lines.
 But not drawing hidden lines loses some information.
 Similar method we can apply for the surface also by displaying shaded surface or color surface.
 Some visible surface algorithm establishes visibility pixel by pixel across the view plane.
Other determines visibility of object surface as a whole.

Classification of Visible-Surface Detection Algorithms


 It is broadly divided into two parts
o Object-Space methods
o Image-Space methods
 Object space method compares objects and parts of objects to each other within the
scene definition to determine which surface is visible.
 In image space algorithm visibility is decided point by point at each pixel position on the
projection plane.

Back-Face Detection
 Back-face detection is a simple and fast object-space method.
 It identifies the back faces of a polyhedron based on inside-outside tests.
 A point (x, y, z) is "inside" a polygon surface if Ax + By + Cz + D < 0, where A, B, C, and D are the constants of the polygon's plane equation.
 We can simplify test by taking normal vector N= (A, B, C) of polygon surface and vector
V in viewing direction from eye as shown in figure

 Then we check condition if 𝑉 ∙ 𝑁 > 0 then polygon is back face.


Fig. 6.1:- vector V in the viewing direction and back-face normal vector N of a polyhedron.

 If we convert the object description to projection coordinates and our viewing direction is parallel to zv, then V = (0, 0, Vz) and
V ∙ N = Vz C
 So now we only need to check the sign of C.
 In a right-handed viewing system V is along the negative zv axis, and in that case the polygon is a back face if C < 0.
 Also, we cannot see any face for which C = 0.
 So, in general, for a right-handed system the polygon is a back face if C ≤ 0.
 A similar method can be used for a left-handed system.
 In a left-handed system V is along the positive z direction, and the polygon is a back face if C ≥ 0.
 For a single convex polyhedron such as the pyramid by examining parameter C
for the different plane we identify back faces.
 So far the scene contains only non overlapping convex polyhedral, back face method
works properly.
 For other object such as concave polyhedron as shown in figure below we
need to do more tests for determining back face.

Fig. 6.2:-view of a concave polyhedron with one face partially hidden by other faces.
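A tiny Python sketch of the test for a right-handed viewing system (V along the negative zv axis, so only the sign of the plane coefficient C matters):

    # Back-face test for a polygon with plane equation Ax + By + Cz + D = 0,
    # in a right-handed viewing system (viewing direction along -zv).
    def is_back_face(A, B, C, D):
        return C <= 0        # V . N reduces to the sign of C

    print(is_back_face(0, 0, -1, 5))   # True: faces away from the viewer
    print(is_back_face(0, 0,  1, 5))   # False: potentially visible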

Depth Buffer Method/ Z Buffer Method


Algorithm
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),
depth(x, y) = 0, refresh(x, y) = Ibackgnd
2. For each position on each polygon surface, compare depth values to the previously stored values in the depth buffer to determine visibility:
 Calculate the depth z for each (x, y) position on the polygon.
 If z > depth(x, y), then set
depth(x, y) = z, refresh(x, y) = Isurf(x, y)
Where Ibackgnd is the background intensity value and Isurf(x, y) is the projected intensity of the surface at position (x, y).

 It is image space approach.


 It compares surface depth at each pixel position on the projection plane.
 It is also referred to as z-buffer method since generally depth is measured in z-direction.
 Each surface of the scene is process separately one point at a time across the surface.

Fig. 6.3:- At view plane position (x, y), surface s1 has smallest depth from the view plane and
so is visible at that position.
 We are starting with pixel position of view plane and for particular surface of object.
 If we take orthographic projection of any point (x,y,z) of the surface on the view plane we
get two dimension coordinate (x,y) for that point to display.
 Here we take an (x, y) position on the plane and find at what depth the particular surface lies.
 We can implement depth buffer algorithm in normalized coordinates so that z values
range from 0 at the back clipping plane to zmax at the front clipping plane.
 Zmax value can be 1 for unit cube or the largest value.
 Here two buffers are required. A depth buffer to store depth value of each (x,y) position
and refresh buffer to store corresponding intensity values.
 Initially depth buffer value is 0 and refresh buffer value is intensity of background.
 Each polygon surface is then processed one at a time, scan line by scan line.
 Calculate the z values at each (x,y) pixel position.
 If the calculated depth value is greater than the value stored in the depth buffer, it is replaced with the newly calculated value, and the intensity of that point is stored in the refresh buffer at position (x, y).
 Depth values are calculated from the plane equation Ax + By + Cz + D = 0 as:
z = (−Ax − By − D)⁄C

Fig. 6.4: - From position (x, y) on a scan line, the next position across the line has coordinates (x+1, y), and the position immediately below on the next line has coordinates (x, y−1).
 For a horizontal line, the next pixel's z value can be calculated by putting x′ = x + 1 in the above equation:
z′ = (−A(x + 1) − By − D)⁄C
z′ = z − A⁄C
 Similarly, for a vertical line, the pixel below the current pixel has y′ = y − 1, so its z value can be calculated as:
z′ = (−Ax − B(y − 1) − D)⁄C
z′ = z + B⁄C
 If we move along a polygon boundary, this improves performance by eliminating extra calculation. Moving top to bottom along the boundary we get x′ = x − 1/m and y′ = y − 1, so the z value is obtained as:
z′ = (−A(x − 1⁄m) − B(y − 1) − D)⁄C
z′ = z + (A⁄m + B)⁄C
 Alternately we can use midpoint method to find the z values.
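A compact Python sketch of the algorithm (the polygon's pixel list and its depth and intensity callables are hypothetical stand-ins for the scan-line machinery):

    WIDTH, HEIGHT = 4, 4
    depth_buf = [[0.0] * WIDTH for _ in range(HEIGHT)]      # 0 = back clipping plane
    refresh_buf = [['bg'] * WIDTH for _ in range(HEIGHT)]   # background intensity

    def render_polygon(pixels, depth, intensity):
        for x, y in pixels:
            z = depth(x, y)                  # from the plane equation
            if z > depth_buf[y][x]:          # closer than what is stored
                depth_buf[y][x] = z
                refresh_buf[y][x] = intensity(x, y)

    # Two overlapping "polygons"; the closer one (z = 0.9) wins the shared pixel.
    render_polygon([(1, 1), (2, 1)], lambda x, y: 0.4, lambda x, y: 'red')
    render_polygon([(2, 1)], lambda x, y: 0.9, lambda x, y: 'blue')
    print(refresh_buf[1])   # ['bg', 'red', 'blue', 'bg']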

Module:04
Illumination Models: Basic Models, Displaying Light
Intensities. Surface Rendering Methods: Polygon Rendering
Methods: Gouraud Shading, Phong Shading. Computer
Animation: Types of Animation, Key frame Vs. Procedural
Animation, Methods of Controlling Animation, Morphing.
Introduction to Virtual Reality and Augmented Reality.
ILLUMINATION AND SHADING
Light Source: Light sources that illuminate an object are of two types:
 Light-emitting sources: bulb, sun, etc.
 Light-reflecting sources: wall of a room, etc.
i. Point Source: The dimensions of the light source are smaller than the size of the object.
ii. Distributed Light: The dimensions of the light source and the object are approximately the same.
iii. Light Source Attenuation: A basic property of light is that it loses intensity the further it travels from its source. The intensity of light from the sun, for example, changes in proportion to the distance from the sun. The technical name for this is light attenuation.
Then the illumination equation is
I = Ia Ka + Ip Kd (N ∙ L)
Ia = intensity of ambient light / background light
Ka = ambient reflection coefficient
Ip = intensity of light coming from the point source
Kd = diffuse reflectivity
iv. Ambient Light: If, instead of self-luminosity, there is a diffuse, non-directional source of light, then the product of multiple reflections of light from the many surfaces present in the environment is called ambient light.
Specular Reflection
When we illuminate a shiny surface such as polished metal, we observe a highlight, or bright spot, on the shiny surface. This phenomenon of reflection of incident light in a concentrated region around the specular reflection angle is called specular reflection.
N: normal vector.
R: unit vector in the direction of total specular reflection.
L: unit vector towards the point light source.
V: unit vector pointing to the viewer from the surface.

Light source
 When we see any object we see reflected light from that object. Total reflected light is the
sum of contribution from all sources and reflected light from other object that falls on the
object.
 So that the surface which is not directly exposed to light may also visible if nearby object is
illuminated.
 The simplest model for light source is point source. Rays from the source then follows
radial diverging paths from the source position.

Fig. 6.5:- Diverging ray paths from a point light source.


 This light source model is reasonable approximation for source whose size is small compared
to the size of object or may be at sufficient distance so that we can see it as point source.
For example sun can be taken as point source on earth.
 A nearby source such as the long fluorescent light is more accurately modelled as a
distributed light source.
 In this case the illumination effects cannot be approximated with point source because
the area of the source is not small compare to the size of object.
 When light falls on a surface, part of the light is reflected and part is absorbed. The amounts of reflected and absorbed light depend on the properties of the object's surface. For example, a shiny surface reflects more light, while a dull surface reflects less.

Basic Illumination Models/ Shading Model/ Lighting Model


 These models give simple and fast method for calculating the intensities of light for various
reflections.

Ambient Light
 This is a simple way to model combination of light reflection from various surfaces to
produce a uniform illumination called ambient light, or background light.
 Ambient light has no directional properties. The amount of ambient light incident on all
the surfaces and object are constant in all direction.
 If consider that ambient light of intensity 𝐼𝑎 and each surface is illuminate with 𝐼𝑎 intensity
then resulting reflected light is constant for all the surfaces.

Diffuse Reflection
 When some intensity of light is falls on object surface and that surface reflect light in all
the direction in equal amount then the resulting reflection is called diffuse reflection.
 Ambient light reflection is approximation of global diffuse lighting effects.
 Diffuse reflections are constant over each surface independent of our viewing direction.
 The amount of reflected light depends on the parameter Kd, the diffuse reflection coefficient or diffuse reflectivity.
 Kd is assigned a value between 0 and 1 depending on the reflecting properties: shiny surfaces reflect more light, so Kd is assigned a larger value, while dull surfaces are assigned smaller values.
 If surface is exposed to only ambient light we calculate ambient diffuse reflection as:
𝐼𝑎𝑚𝑏𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑎
Where 𝐼𝑎 the ambient light is falls on the surface.
 Practically most of times each object is illuminated by one light source so now we discuss
diffuse reflection intensity for point source.

 We assume that the diffuse reflections from the source are scattered with equal intensity in all directions, independent of the viewing direction. Such surfaces are sometimes referred to as ideal diffuse reflectors, or Lambertian reflectors.
 This is modelled by Lambert's cosine law. This law states that the radiant energy from any small surface area dA in any direction ∅n relative to the surface normal is proportional to cos ∅n.
Fig. 6.6:- Radiant energy from a surface area dA in direction Φn relative to the surface normal
direction.
 As shown, the reflected light intensity does not depend on the viewing direction, so for Lambertian reflection the intensity of light is the same in all viewing directions.
 Even though there is equal light distribution in all direction from perfect reflector the
brightness of a surface does depend on the orientation of the surface relative to light
source.
 As the angle between the surface normal and the incident light direction increases, the light falling on the surface decreases.

Fig. 6.7:- An illuminated area projected perpendicular to the path of the incoming light rays.
 If we denote the angle of incidence between the incoming light and the surface normal as θ, then the projected area of a surface patch perpendicular to the light direction is proportional to cos θ.
 If 𝐼𝑙 is the intensity of the point light source, then the diffuse reflection
equation for a point on the surface can be written as
𝐼𝑙,𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑙 𝑐𝑜𝑠𝜃
 A surface is illuminated by a point source only if the angle of incidence is in the range 0° to 90°; for other values of θ the light source is behind the surface.

Fig. 6.8:-Angle of incidence 𝜃 between the unit light-source direction vector L and the unit
surface normal N.
 As shown in figure N is the unit normal vector to surface and L is unit vector in direction
of light source then we can take dot product of this to is:
𝑁 ∙ 𝐿 = cos 𝜃
And
𝐼𝑙,𝑑𝑖𝑓𝑓 = 𝐾𝑑 𝐼𝑙 (𝑁 ∙ 𝐿)
 Now in practical ambient light and light source both are present and so total
diffuse reflection is given by:
𝐼𝑑𝑖𝑓𝑓 = 𝐾𝑎 𝐼𝑎 + 𝐾𝑑 𝐼𝑙 (𝑁 ∙ 𝐿)
Here a separate ambient reflection coefficient Ka is used for the ambient term, as in many graphics packages, instead of Kd.

Specular Reflection and the Phong Model.


 When we look at an illuminated shiny surface, such as polished metal we see a highlight,
or bright spot, at certain viewing directions. This phenomenon is called specular reflection,
is the result of total, or near total reflection of the incident light in a concentrated region

around the specular reflection angle.


Fig. 6.9:-Specular reflection angle equals angle of incidence 𝜃.
 Figure shows specular reflection direction at a point on the illuminated surface. The
specular reflection angle equals the angle of the incident light.
 Here we use R as the unit vector in the direction of reflection, L as the unit vector pointing towards the light source, N as the unit normal vector, and V as the unit vector in the viewing direction.
 Objects other than ideal reflectors exhibit specular reflection over a finite range of viewing positions around vector R. Shiny surfaces have a narrow specular reflection range, and dull surfaces have a wide specular reflection range.
 The Phong specular reflection model, or simply the Phong model, sets the intensity of specular reflection proportional to cosⁿˢ∅. The angle ∅ varies between 0° and 90°.
 Values assigned to the specular reflection parameter ns are determined by the type of surface we want to display: a shiny surface is assigned a large ns value (near 100) and a dull surface a small one (near 1).
 Intensity of specular reflection depends on the material properties of the
surface and the angle of incidence as well as specular reflection coefficient,
𝒘(𝜽) for each surfaces.
 Then specular reflection is given by:
𝐼𝑠𝑝𝑒𝑐 = 𝑤(𝜃)𝐼𝑙 𝑐𝑜𝑠 𝑛𝑠 ∅
Where 𝐼𝑙 is the intensity of light source and ∅ is angle between viewing direction V
and specular reflection direction R.
 Since ∅ is angle between two unit vector V and R we can put 𝑐𝑜𝑠∅ = 𝑉 ∙ 𝑅.
 And also for many surfaces 𝑤(𝜃) is constant so we take specular reflection
constant as Ks so equation becomes.
𝐼𝑠𝑝𝑒𝑐 = 𝐾𝑠 𝐼𝑙 (𝑉 ∙ 𝑅)𝑛𝑠
 Vector r is calculated in terms of vector L and N as shown in figure
Fig. 6.10:- Calculation of vector R by considering projection onto the direction of
the normal vector N.
𝑅 + 𝐿 = (2𝑁 ∙ 𝐿)𝑁
𝑅 = (2𝑁 ∙ 𝐿)𝑁 − 𝐿
 A somewhat simplified Phong model calculates the halfway vector H and uses the dot product of N and H instead of V and R.
 Here H is calculated as follow:
𝐿+𝑉
𝐻 = |𝐿 + 𝑉|

Combined Diffuse and Specular Reflections with Multiple Light Sources
 For a single point light source we can combine both diffuse and specular reflections by adding the intensities due to each reflection:
I = Idiff + Ispec
I = Ka Ia + Kd Il (N ∙ L) + Ks Il (N ∙ H)ⁿˢ
 And for multiple sources we can extend this equation as:
I = Ka Ia + ∑ᵢ₌₁ⁿ Ilᵢ [Kd (N ∙ Lᵢ) + Ks (N ∙ Hᵢ)ⁿˢ]
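A Python sketch of the single-source equation (unit vectors as 3-tuples; the coefficient values are arbitrary examples, not recommended constants):

    from math import sqrt

    def dot(p, q):
        return sum(a * b for a, b in zip(p, q))

    def normalize(p):
        m = sqrt(dot(p, p))
        return tuple(c / m for c in p)

    # I = Ka*Ia + Il*(Kd*(N.L) + Ks*(N.H)^ns), with the halfway vector H.
    def phong_intensity(N, L, V, Ia=0.2, Il=1.0, Ka=0.1, Kd=0.6, Ks=0.3, ns=50):
        H = normalize(tuple(l + v for l, v in zip(L, V)))
        diffuse = Kd * max(dot(N, L), 0.0)          # zero if source behind surface
        specular = Ks * max(dot(N, H), 0.0) ** ns
        return Ka * Ia + Il * (diffuse + specular)

    N, V = (0, 0, 1), (0, 0, 1)
    L = normalize((0, 1, 1))
    print(round(phong_intensity(N, L, V), 3))   # ~0.45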

Properties of Light
 Light is an electromagnetic wave. Visible light is have narrow band in electromagnetic
spectrum nearly 400nm to 700nm light is visible and other bands not visible by human
eye.

Fig. 6.11:- Electromagnetic spectrum.


 Electromagnetic spectrum shown in figure shows other waves are present in spectrum like
microwave infrared etc.
 Frequencies from 4.3 × 10^14 hertz (red) to 7.5 × 10^14 hertz (violet) are in the visible range.
 We can specify different color by frequency f or by wavelength λ of the wave.

 We can find the relation between f and λ as follows:
c = λf
 Frequency is constant for all the material but speed of the light and wavelength are material
dependent.
 For producing white light source emits all visible frequency light.
 Some frequencies of the incident light are reflected and some are absorbed by the object. The reflected frequency determines the color we see; it is called the dominant frequency (hue), and the corresponding reflected wavelength is called the dominant wavelength.
 Other property are purity and brightness. Brightness is perceived intensity of light.
Intensity is the radiant energy emitted per unit time, per unit solid angle and per unit
projected area of the source.
 Purity or saturation of the light describes how washed out or how “pure” the color of the light
appears.
 Dominant frequency and purity both collectively refers as chromaticity.
 If two color source combined to produce white light they are called complementary color of
each other. For example red and cyan are complementary color.
 Typical color models that are uses to describe combination of light in terms of dominant
frequency use three colors to obtain reasonable wide range of colors, called the color
gamut for that model.
 Two or three colors are used to obtain other colors in the range are called primary colors.

RGB Color Model


 Based on tristimulus theory of vision our eye perceives color through stimulate one of
three visual pigments in the cones of the retina.

 These visual pigments have peak sensitivity at red, green and blue color.
 So combining these three colors we can obtain wide range of color this concept is used
in RGB color model.

 As shown in figure this model is represented as unit cube.


Fig. 6.13:- The RGB color model.

 Origin represent black color and vertex (1,1,1) is white.


 Vertex of the cube on the axis represents primary color R, G, and B.
 In the RGB color model any color intensity is obtained by addition of the primary colors:
Cλ = RR + GG + BB
 Where R, G, and B are the amounts of the corresponding primary colors.
 Since the model is bounded within the unit cube, each value varies between 0 and 1, and a color is represented as a triplet (R, G, B). For example, magenta is represented as (1, 0, 1).
 Shades of gray are represented along the main diagonal of cube from black to white
vertex.
 For half way gray scale we use triplets (0.5,0.5,0.5).
Phong Illumination Model
Here maximum specular reflection occurs when α = 0 and falls off sharply as α increases. The rapid fall-off is approximated by cosⁿα.
n is the specular reflection parameter, determined by the type of surface. The value of n typically varies from 1 to 100 depending on the surface material.
 For a perfect reflector, n is infinity.
 For a rough surface, n is near 1.
Ispec = Ks ∙ Il ∙ (V ∙ R)ⁿ

Halfway Vector / Blinn–Phong reflection model


Use the additional halfway vector H. It is also called the modified Phong reflection model because the direction of H is halfway between the direction of the light source and the viewer.
H = (L + V)⁄|L + V|
Ispec = Ks ∙ Il ∙ (N ∙ H)ⁿ

SHADING:
We can shade any surface by calculating the surface normal at each visible point and applying the described illumination model at that point.
Different Types of Shading
I. Constant intensity shading
II. Gouraud shading
III. Phong shading
IV. Halftone shading

CONSTANT INTENSITY SHADING:


It is also called flat shading. The illumination model is applied only once for each polygon to determine a single intensity value. This method is valid under the following assumptions:
1. The light source is at infinite distance, so N ∙ L is constant across the polygon face.
2. The viewer is at infinite distance, so V ∙ R is constant over the surface.
3. The polygon represents the actual surface being modeled and is not an approximation to a curved surface.
Gouraud Shading:
1. It is also called interpolated shading.
2. The polygon surface is displayed by linearly interpolating intensity values
across the surface.
3. Intensity values of each polygon are matched with the values of adjacent polygons along
the common edges, so intensity discontinuities at the edges are removed.
It needs the following calculations:
1. Determine the average normal vector at each polygon vertex.
2. Apply the illumination model to each polygon vertex to determine the
vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.
Let N1, N2, N3 be the normals of the three surfaces sharing vertex V. Then
Nv = ΣNi / |ΣNi| = (N1 + N2 + N3) / |N1 + N2 + N3|
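A short C sketch of this vertex-normal averaging (function and type names are
assumptions):

    /* Sketch of step 1 of Gouraud shading: average the normals of all
       polygons sharing a vertex, then renormalize. */
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    Vec3 vertex_normal(const Vec3 *face_normals, int count) {
        Vec3 sum = { 0.0, 0.0, 0.0 };
        for (int i = 0; i < count; i++) {   /* Nv = sum(Ni) / |sum(Ni)| */
            sum.x += face_normals[i].x;
            sum.y += face_normals[i].y;
            sum.z += face_normals[i].z;
        }
        double len = sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
        Vec3 nv = { sum.x/len, sum.y/len, sum.z/len };
        return nv;
    }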
For a scan line crossing the polygon, the intensities at the edge intersections a and b are
Ia = (Ya − Y2)/(Y1 − Y2) · I1 + (Y1 − Ya)/(Y1 − Y2) · I2
Ib = (Yb − Y2)/(Y3 − Y2) · I3 + (Y3 − Yb)/(Y3 − Y2) · I2
and the intensity of an interior point P is
Ip = (Xb − Xp)/(Xb − Xa) · Ia + (Xp − Xa)/(Xb − Xa) · Ib
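The same interpolation written as a small C sketch (parameter names mirror the
equations above and are otherwise illustrative):

    /* Intensity at height Y along the edge from vertex (Y1, I1) to (Y2, I2). */
    double edge_intensity(double Y, double Y1, double I1, double Y2, double I2) {
        return (Y - Y2) / (Y1 - Y2) * I1 + (Y1 - Y) / (Y1 - Y2) * I2;
    }

    /* Intensity at X between the edge intersections (Xa, Ia) and (Xb, Ib). */
    double scanline_intensity(double X, double Xa, double Ia,
                              double Xb, double Ib) {
        return (Xb - X) / (Xb - Xa) * Ia + (X - Xa) / (Xb - Xa) * Ib;
    }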
Phong Shading:
It is also called normal-vector interpolation shading.
Here we interpolate the normal vector rather than the intensity. It proceeds as follows:
1. Determine the average unit normal vector at each polygon vertex.
2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to determine projected pixel
intensities for the surface points.
Along an edge between vertices 1 and 2, the interpolated normal at scan-line height Y is
N = (Y − Y2)/(Y1 − Y2) · N1 + (Y1 − Y)/(Y1 − Y2) · N2
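A C sketch of this normal interpolation, renormalizing before the illumination model is
applied (names are assumptions):

    /* Sketch of Phong shading's edge interpolation of normals. */
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    Vec3 interp_normal(double Y, double Y1, Vec3 N1, double Y2, Vec3 N2) {
        double w1 = (Y - Y2) / (Y1 - Y2);
        double w2 = (Y1 - Y) / (Y1 - Y2);
        Vec3 n = { w1*N1.x + w2*N2.x, w1*N1.y + w2*N2.y, w1*N1.z + w2*N2.z };
        double len = sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        n.x /= len; n.y /= len; n.z /= len;   /* keep it a unit normal */
        return n;
    }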
Halftone Shading:
Many display devices are bi-level:
 They can produce only two intensity levels.
 In such cases we can create an apparent increase in the
number of available intensities. This is achieved by incorporating multiple
pixel positions into the display of each intensity value.
 Basic idea: when we view a small area from a large distance, our eyes
average the fine details within the small area and record only
the overall intensity of the area.
This phenomenon of apparently increasing the number of available intensities
by considering the combined intensity of multiple pixels is known as
halftoning.
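A minimal C sketch of 2×2 halftone cells: four bi-level pixels yield five apparent
intensity levels. The particular mask order below is one common choice, not the only
one:

    /* Sketch of halftoning with 2x2 pixel patterns on a bi-level device. */
    #include <stdio.h>

    /* Order in which pixels of the 2x2 cell are turned on. */
    static const int mask[2][2] = { { 0, 2 },
                                    { 3, 1 } };

    /* level: desired intensity 0..4; prints the 2x2 cell ('#' = on). */
    void halftone_cell(int level) {
        for (int y = 0; y < 2; y++) {
            for (int x = 0; x < 2; x++)
                putchar(mask[y][x] < level ? '#' : '.');
            putchar('\n');
        }
    }

    int main(void) {
        for (int level = 0; level <= 4; level++) {
            printf("level %d:\n", level);
            halftone_cell(level);
        }
        return 0;
    }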
ANIMATION:
Animation literally means "giving life to". It generally refers to any time sequence
of visual changes in a scene. The changes in the scene are made by transformations
(translation, scaling, rotation). Some applications of animation are in entertainment,
such as cartoons. We can also produce animation by changing lighting effects or other
parameters and procedures associated with illumination and rendering.
PRODUCTION TECHNIQUE:
The overall animation of an entire project is referred to as a production. A production is
broken into major parts referred to as sequences. A sequence is usually identified by an
associated staging area. A production usually consists of one to a dozen sequences.
Each sequence is broken down into one or more shots. Each shot is a continuous camera
recording. Each shot is broken down into the individual frames of film. A frame is a
single recorded image.
Animation is a trial & error process that involves feedback
from one step to the previous step.
Straight Ahead: proceeding from a starting point and developing the motion continuously.
Pose to Pose: key frames are identified and intermediate frames are interpolated.
Story Board: consists of key frames; it is the animation outline produced by the master
animator.
Inbetweening: producing the intermediate frames between key frames (a minimal sketch
follows this list). It is done by associate animators.
Model Sheet: consists of a number of drawings for each figure in various poses.
Exposure Sheet: records information for each frame, such as the sound track and camera
moves.
Story Reel: may be produced, in which the story board frames are recorded.
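As referenced above, here is a minimal C sketch of inbetweening as simple linear
interpolation of a key-frame position; real systems use more elaborate interpolation, so
this is only an illustrative assumption:

    /* Sketch of inbetweening: interpolate a point between two key frames.
       t runs from 0 (first key) to 1 (second key). */
    typedef struct { double x, y; } Point2;

    Point2 inbetween(Point2 key1, Point2 key2, double t) {
        Point2 p;
        p.x = (1.0 - t) * key1.x + t * key2.x;
        p.y = (1.0 - t) * key1.y + t * key2.y;
        return p;
    }
    /* e.g. with 3 in-between frames, evaluate at t = 0.25, 0.5, 0.75. */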
Types of Animation:
1. Conventional Animation:
First write a script of the story. Then a series of pictures is drawn for the important
moments of the story; this is called the story board. Once the story board is created,
actual animation will start. The final animation is achieved by filling the gaps between
adjacent key frames.
2. Computer Assistance:
Many stages of conventional animation seem ideally suited to computer assistance;
in particular, inbetweening and coloring can be done with seed-filling algorithms.
Before the computer can be used, the drawings must be digitized. This can be done by
optical scanning, by tracing, or by drawing directly on a data tablet.
By placing several small low-resolution frames of an animation in a rectangular array,
the equivalent of a pencil test can be generated using the pan-zoom feature available
in some frame buffers.
Types of Animation Systems

1. Scripting Systems:
Scripting systems were the earliest type of motion-control systems. The animator writes a
script in the animation language; thus, the user must learn this language, and the system
is not interactive. One scripting system is ASAS (Actor Script Animation Language),
which has a syntax similar to LISP. ASAS introduced the concept of an actor, i.e., a
complex object which has its own animation rules. For example, in animating a bicycle,
the wheels will rotate in their own coordinate system and the animator doesn't have to
worry about this detail. Actors can communicate with other actors by sending messages
and so can synchronize their movements. This is similar to the behavior of objects in
object-oriented languages.
2. Procedural Animation:
Procedures are used that define movement over time. These might be procedures that use
the laws of physics (physically based modeling) or animator-generated methods. An
example is a motion that is the result of some other action (this is called a "secondary
action"), for example throwing a ball which hits another object and causes the second
object to move.
3. Representational Animation:
This technique allows an object to change its shape during the animation.
There are three subcategories. The first is the animation of articulated objects, i.e.,
complex objects composed of connected rigid segments. The second is soft-object
animation, used for deforming and animating the deformation of objects, e.g. skin over
a body or facial muscles. The third is morphing, which is the changing of one shape
into another quite different shape. This can be done in two or three dimensions.
4. Stochastic Animation:
This uses stochastic processes to control groups of objects, such as in
particle systems. Examples are fireworks, fire, waterfalls, etc.
5. Behavioral Animation:
Objects or "actors" are given rules about how they react to their environment. Examples
are schools of fish or flocks of birds, where each individual behaves according to a set
of rules defined by the animator.
MORPHING
Morphing is a familiar technology used to produce special effects in images and videos.
Morphing is common in the entertainment industry and is widely used in movies,
animation, games, etc. In addition to its use in the entertainment industry, morphing can
be used in computer-based training, electronic book illustrations, presentations,
education, etc. Morphing software is widely available on the internet.
The animation industry is looking for advanced technology to produce special effects in
its movies; the growing audience of the animation industry is not satisfied with movies
with simple animation. Here comes the significance of morphing.
The word "morphing" comes from the word "metamorphosis", which means a change of
shape, appearance or form. Morphing is done by coupling image warping with color
interpolation. Morphing is the process in which the source image is gradually distorted
and vanishes while the target image is produced. So the earlier images in the sequence
are similar to the source image and the last images are similar to the target image. The
middle image of the sequence is the average of the source image and the target image.
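A minimal C sketch of the color-interpolation half of morphing, a plain cross-dissolve;
the geometric warping step is omitted, and the buffer names are assumptions:

    /* Sketch of a cross-dissolve that blends source into target as t: 0 -> 1.
       Real morphing also warps the images geometrically before blending. */
    #include <stddef.h>

    /* src, tgt, out: gray-scale pixel buffers of equal size (values 0..255). */
    void cross_dissolve(const unsigned char *src, const unsigned char *tgt,
                        unsigned char *out, size_t n, double t) {
        for (size_t i = 0; i < n; i++)
            out[i] = (unsigned char)((1.0 - t) * src[i] + t * tgt[i] + 0.5);
    }
    /* At t = 0.5 the result is the average of the source and target images,
       matching the "middle image" described above. */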
Introduction to Virtual Reality and Augmented Reality:
Virtual Reality: Today virtual reality (VR) technology is applied to advanced fields of
medicine, engineering, education, design, training, and entertainment. VR is a computer
interface which tries to mimic the real world beyond the flat monitor to give an
immersive 3D (three-dimensional) visual experience. Often it is hard to reconstruct the
scales and distances between objects in static 2D images; the third dimension helps
bring depth to objects.
Virtual reality (VR) is a computer-generated scenario that simulates experience through
senses and perception. The immersive environment can be similar to the real world, or
it can be fantastical, creating an experience not possible in our physical reality.
Augmented reality systems may also be considered a form of VR that layers virtual
information over a live camera feed into a headset, or through a smartphone or tablet
device, giving the user the ability to view three-dimensional images.
Current VR technology most commonly uses virtual reality headsets or multi-projected
environments, sometimes in combination with physical environments or props, to
generate realistic images, sounds and other sensations that simulate a user's physical
presence in a virtual or imaginary environment. A person using virtual reality equipment
is able to "look around" the artificial world, move around in it, and interact with virtual
features or items. The effect is commonly created by VR headsets consisting of a
head-mounted display with a small screen in front of the eyes, but can also be created
through specially designed rooms with multiple large screens.
VR systems that include transmission of vibrations and other sensations to the user
through a game controller or other devices are known as haptic systems. This tactile
information is generally known as force feedback in medical, video gaming and military
training applications.
Augmented Reality: Augmented Reality (AR) is a general term for a collection of
technologies used to blend computer-generated information with the viewer's natural
senses. A simple example of AR is using a spatial display (digital projector) to augment
a real-world object (a wall) for a presentation. As you can see, it's not a new idea, but a
real revolution has come with advances in mobile personal computing such as tablets
and smartphones.
Since mobile 'smart' devices have become ubiquitous, 'Augmented Reality Browsers'
have been developed to run on them. AR browsers utilise the device's sensors (camera
input, GPS, compass, et al) and superimpose useful information in a layer on top of the
image from the camera which, in turn, is viewed on the device's screen.
AR browsers can retrieve and display graphics, 3D objects, text, audio, video, etc., and
use geospatial or visual 'triggers' (typically images, QR codes, point cloud data) in the
environment to initiate the display.
AR is being used in an increasing variety of ways, from providing point-of-sale
information to shoppers, tourist information on landmarks, computer enhancement of
traditional printed media, to service information for on-site engineers; the number of
applications is huge.
There are a number of different development platforms on the market. The main mobile
device platforms are Junaio (now withdrawn), Aurasma and Layar. There's also a
plethora of different apps with novel ideas for AR applications if you search for them
with your app provider, from interactive museum displays to overlaying medical
information over a patient.
There are some exciting emergent display technologies which are nearing
commercialization. A series of 'Head-Up Display' (HUD) devices are coming to market
which will provide a 'hands-free' projection of AR information via devices integrated in
spectacle-like screens; examples include Google's Project Glass, which is now in beta
release to selected users in the States.
Time will tell the level of adoption such hardware will reach, since the wearer looks
somewhat unusual wearing the device, but it won't be long before they become more
discreet; the 'bionic contact lens' is already in development.
Virtual Reality vs. Augmented Reality

What is Virtual Reality?
Virtual reality (VR) is an artificial, computer-generated simulation or recreation of a
real-life environment or situation. It immerses the user by making them feel like they
are experiencing the simulated reality firsthand, primarily by stimulating their vision
and hearing.
VR is typically achieved by wearing a headset like Facebook's Oculus equipped with the
technology, and is used prominently in two different ways:
 To create and enhance an imaginary reality for gaming, entertainment, and play
(such as video and computer games, or 3D movies with a head-mounted display).
 To enhance training for real-life environments by creating a simulation of reality
where people can practice beforehand (such as flight simulators for pilots).
Virtual reality is possible through a coding language known as VRML (Virtual Reality
Modeling Language), which can be used to create a series of images and specify what
types of interactions are possible for them.
What is Augmented Reality?
Augmented reality (AR) is a technology that layers computer-generated enhancements
atop an existing reality in order to make it more meaningful through the ability to
interact with it. AR is developed into apps and used on mobile devices to blend digital
components into the real world in such a way that they enhance one another, but can
also be told apart easily.
AR technology is quickly coming into the mainstream. It is used to display score
overlays on telecast sports games and to pop out 3D emails, photos or text messages on
mobile devices. Leaders of the tech industry are also using AR to do amazing and
revolutionary things with holograms and motion-activated commands.
Augmented Reality vs. Virtual Reality
Augmented reality and virtual reality are inverse reflections of one another with what
each technology seeks to accomplish and deliver for the user. Virtual reality offers a
digital recreation of a real-life setting, while augmented reality delivers virtual elements
as an overlay to the real world.
How are Virtual Reality and Augmented Reality Similar?
Technology
Augmented and virtual realities both leverage some of the same types of technology,
and they each exist to serve the user with an enhanced or enriched experience.
Entertainment
Both technologies enable experiences that are becoming more commonly expected and
sought after for entertainment purposes. While in the past they seemed merely a figment
of a science-fiction imagination, new artificial worlds come to life under the user's
control, and deeper layers of interaction with the real world are also achievable. Leading
tech moguls are investing in and developing new adaptations and improvements, and
releasing more and more products and apps that support these technologies for the
increasingly savvy users.
Science and Medicine
Additionally, both virtual and augmented realities have great potential in changing the
landscape of the medical field by making things such as remote surgeries a real
possibility. These technologies have already been used to treat and heal psychological
conditions such as Post Traumatic Stress Disorder (PTSD).
How do Augmented and Virtual Realities Differ?
Purpose
Augmented reality enhances experiences by adding virtual components such as digital
images, graphics, or sensations as a new layer of interaction with the real world.
Contrastingly, virtual reality creates its own reality that is completely computer
generated and driven.
Delivery Method
Virtual reality is usually delivered to the user through a head-mounted or hand-held
controller. This equipment connects people to the virtual reality, and allows them to
control and navigate their actions in an environment meant to simulate the real world.
Augmented reality is being used more and more in mobile devices such as laptops,
smartphones, and tablets to change how the real world and digital images and graphics
intersect and interact.
How do they work together?
It is not always virtual reality vs. augmented reality; they do not always operate
independently of one another, and in fact are often blended together to generate an even
more immersive experience. For example, haptic feedback (the vibration and sensation
added to interaction with graphics) is considered an augmentation. However, it is
commonly used within a virtual reality setting in order to make the experience more
lifelike through touch.
Virtual reality and augmented reality are great examples of experiences and interactions
fueled by the desire to become immersed in a simulated land for entertainment and play,
or to add a new dimension of interaction between digital devices and the real world.
Alone or blended together, they are undoubtedly opening up worlds, both real and
virtual alike.