Computer Graphics Modules Wise Questions & Notes - Aeraxia - in

Computer graphics encompasses all visual representations on computers, including images, models, and animations, and is essential for user interfaces, design, and entertainment. Its applications range from computer art and CAD to education and training, using techniques such as rasterization and rendering to create realistic visuals. The document also compares random scan and raster scan displays, detailing their advantages, disadvantages, and applications.

Uploaded by Mohan Barhate

Q1) What is computer graphics? State its representative uses.

The term computer graphics includes almost everything on computers that is not text or sound. Today almost every computer can do some graphics, and people have come to expect to control their computers through icons and pictures rather than just by typing. Computer graphics started with the display of data on hard-copy plotters and cathode ray tubes (CRTs). It has grown to include the creation, storage, and manipulation of models and images of objects using algorithms and data structures.
Definition :-
• Computer graphics is one of the most effective and commonly used ways to communicate processed information to the user. It displays information in the form of graphics objects such as pictures, charts, graphs, and diagrams instead of plain text.
• Computer graphics is a rendering (service) tool for the generation and manipulation of images.
• Computer graphics is the technology that deals with designs and pictures on computers.
• Computer graphics is the subfield of Computer Science that studies synthesizing and manipulating visual content digitally.
Computer graphics is made up of 4 components :-
• Image: an image is a combination of pixels, a visual representation of something.
• Model: a 3D representation of something is called a model.
• Rendering: rendering is the process of generating an image from a 2D or 3D model (or models, which collectively form a scene file) by means of computer programs. The result of such a process can also be called a rendering.
• Animation: the technique of creating the illusion of movement using successive images is called animation.
Representative Uses of Computer Graphics :-
• User interfaces: GUI, etc.
• Business, science and technology: histograms, bar and pie charts,
etc.
• Office automation and electronic publishing: text, tables, graphs, hypermedia systems, etc.
• Computer-aided design (CAD): structures of building, automobile
bodies, etc.
• Simulation and animation for scientific visualization and
entertainment: flight simulation, games, movies, virtual reality, etc.
• Art and commerce: terminals in public places such as museums, etc.
• Cartography: map making.
Q2) Applications of Computer Graphics
Some of the applications of computer graphics are:

1. Computer Art:
Using computer graphics we can create fine and commercial art with animation and paint packages. These packages provide facilities for designing object shapes and specifying object motion. Cartoon drawing, painting, and logo design can also be done.

2. Computer-Aided Drawing:
Designing of buildings, automobiles, and aircraft is done with the help of computer-aided drawing. This helps in providing minute detail in the drawing and producing more accurate, sharper drawings with better specifications.

3. Presentation Graphics:
For the preparation of reports or summaries of:
1) financial reports
2) statistical reports
3) mathematical reports
4) scientific reports
5) economic data for research reports and managerial reports.
Moreover, bar graphs, pie charts, and time charts can be created using the tools present in computer graphics.

4. Entertainment:
Computer graphics finds a major part of its utility in the movie and game industries. It is used for creating motion pictures, music videos, television shows, and cartoon animation films. In the game industry, where focus and interactivity are the key factors, computer graphics helps in providing such features efficiently.

5. Education:
Computer-generated models are extremely useful for teaching a huge number of concepts and fundamentals in an easy-to-understand manner. Using computer graphics, many educational models can be created that generate more interest among students in the subject.

6. Training:
Specialised training systems such as simulators can be used to train candidates in a way that can be grasped in a short span of time with better understanding. Creating training modules using computer graphics is simple and very useful.

7. Visualisation:
Today the need to visualise things has increased drastically. Visualisation is needed in many advanced technologies: data visualisation helps in finding insights in data, and to check and study the behaviour of processes around us we need appropriate visualisation, which can be achieved through proper use of computer graphics.

8.Image Processing:
Various kinds of photographs or images require editing in order
to be used in different places. Processing of existing images into
refined ones for better interpretation is one of the many
applications of computer graphics.

9. Machine Drawing:
Computer graphics is very frequently used for designing, modifying, and creating various machine parts and whole machines. The main reason for using computer graphics for this purpose is that the precision and clarity obtained from such drawings are essential for the safe manufacturing of machines.

10. Graphical User Interface:
The use of pictures, images, icons, pop-up menus, and graphical objects helps in creating a user-friendly environment where working is easy and pleasant. Using computer graphics we can create an atmosphere where everything can be automated and anyone can get the desired action performed easily.
Q3) Random Scan and Raster Scan Display:

Random Scan Display:

A Random Scan System uses an electron beam which operates like a pencil to create a line image on the CRT screen. The picture is constructed out of a sequence of straight-line segments. Each line segment is drawn on the screen by directing the beam to move from one point on the screen to the next, where its x and y coordinates define each point. After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second. The process is shown in fig:

Random-scan monitors are also known as vector displays, stroke-writing displays, or calligraphic displays.

Advantages:

1. A CRT has the electron beam directed only to the parts of the
screen where an image is to be drawn.
2. Produce smooth line drawings.
3. High Resolution

Disadvantages:

1. Random-scan monitors cannot display realistic shaded scenes.

Raster Scan Display:

A Raster Scan Display is based on intensity control of pixels in the form of a rectangular box called a raster on the screen. Information about on and off pixels is stored in a refresh buffer or frame buffer. Televisions in our houses are based on the raster scan method. The raster scan system can store information for each pixel position, so it is suitable for the realistic display of objects. Raster scan provides a refresh rate of 60 to 80 frames per second.

The frame buffer is also known as the raster or bitmap, and the positions in it are called picture elements or pixels. Beam retracing is of two types: horizontal retrace and vertical retrace. When the beam finishes a scan line and returns to the left side of the screen, one line down, it is called horizontal retrace. When the beam reaches the bottom right of the screen and returns to the top left corner, it is called vertical retrace, as shown in fig:
Types of Scanning or travelling of beam in Raster Scan

1. Interlaced Scanning
2. Non-Interlaced Scanning

In non-interlaced scanning, each horizontal line of the screen is traced in sequence from top to bottom. In interlaced scanning, the odd-numbered lines are traced by the electron beam first, and then, in the next cycle, the even-numbered lines are traced.

A non-interlaced display refreshed at 30 frames per second gives noticeable flicker. Interlacing the same 30 frames presents 60 fields (half-frames) per second, which reduces the flicker.

Advantages:

1. Realistic images
2. Millions of different colors can be generated
3. Shaded scenes are possible.

Disadvantages:

1. Low resolution
2. Expensive


Q4) Differentiate between Random and Raster Scan Display:

Random Scan vs. Raster Scan:

1. Random scan has high resolution; raster scan resolution is low.
2. Random scan is more expensive; raster scan is less expensive.
3. Any modification, if needed, is easy in random scan; modification is difficult in raster scan.
4. Solid patterns are hard to fill in random scan; easy to fill in raster scan.
5. In random scan the refresh rate depends on the picture (its number of lines); in raster scan it does not depend on the picture.
6. Random scan draws only the screen areas containing the image; raster scan scans the whole screen.
7. Random scan uses beam penetration technology for color; raster scan uses shadow mask technology.
8. Random scan does not use interlacing; raster scan uses interlacing.
9. Random scan is restricted to line-drawing applications; raster scan is suitable for realistic display.
Q5) Application Of Random Scan

In random-scan systems, an application program and a graphics package are stored together in system memory. Graphics commands within the program are translated by the graphics package into a display file, which is then stored in system memory. The display processor accesses this display file to refresh the screen; during each refresh cycle, it cycles through each command in the display file program.

The display processor in a random-scan system is sometimes


referred to as a display processing unit or a graphics controller.
To draw graphic patterns on a random-scan system, the electron
beam is directed along the component lines of the picture. Lines
are defined by specifying the coordinates of their endpoints.
These input coordinate values are then converted into x and y
deflection voltages. The scene is drawn one line at a time, with
the beam positioned to fill in the line between the specified
endpoints.
By understanding the architecture and components of raster-scan
systems and random-scan systems, we gain insight into the inner
workings of interactive graphics systems, which play a significant
role in various applications such as computer-aided design,
gaming, and multimedia.
Q6) Scan Conversion Definition

Scan conversion is the process of representing graphics objects as a collection of pixels. The graphics objects are continuous; the pixels used are discrete, and each pixel can be in either an on or an off state.

The circuitry of the computer's video display device is capable of converting binary values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel off and 1 by pixel on. Using this ability, a graphics computer represents pictures made of discrete dots.

Any graphics model can be reproduced with a dense matrix of dots or points. Most human beings think of graphics objects as points, lines, circles, and ellipses. For generating these graphical objects, many algorithms have been developed.

Advantage of developing algorithms for scan conversion

1. Algorithms can generate graphics objects at a faster rate.


2. Using algorithms memory can be used efficiently.
3. Algorithms can develop a higher level of graphical objects.

Examples of objects which can be scan converted

1. Point
2. Line
3. Sector
4. Arc
5. Ellipse
6. Rectangle
7. Polygon
8. Characters
9. Filled Regions

The process of converting is also called rasterization. The implementation of these algorithms varies from one computer system to another: some are implemented in software, some in hardware or firmware, and some in various combinations of hardware, firmware, and software.

Pixel or Pel:

The term pixel is a short form of picture element. It is also called a point or dot. It is the smallest picture unit accepted by display devices. A picture is constructed from hundreds of such pixels. Pixels are generated using commands; lines, circles, arcs, characters, and curves are drawn with closely spaced pixels. To display a digit or letter, a matrix of pixels is used.

The closer the dots or pixels are, the better and crisper the picture will be: the picture will not appear jagged or unclear if pixels are closely spaced. So the quality of the picture is directly proportional to the density of pixels on the screen.

Pixels are also defined as the smallest addressable unit or element


of the screen. Each pixel can be assigned an address as shown in
fig:
Different graphics objects can be generated by setting different intensities and colors of pixels. Each pixel has a coordinate value, represented using a row and a column.

P(5, 5) is used to represent the pixel in the 5th row and the 5th column. Each pixel has an intensity value, which is stored in a memory of the computer called the frame buffer (also called a refresh buffer). This memory is a storage area for pixel values, from which pictures are displayed; it is also called digital memory. Inside the buffer, the image is stored as a pattern of binary digits (0s and 1s), so an array of 0s and 1s represents the picture. On black-and-white monitors, black pixels are represented using 1s and white pixels using 0s. A frame buffer with one bit per pixel is called a bitmap; with multiple bits per pixel it is called a pixmap.
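As a quick worked example of these definitions, the memory a frame buffer needs follows directly from the resolution and the bits per pixel. A minimal sketch (the function name is ours, for illustration only):

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Total frame-buffer memory needed for a width x height raster."""
    total_bits = width * height * bits_per_pixel
    return total_bits // 8  # assume the bit count divides evenly into bytes

# A 1-bit-per-pixel bitmap for a 640 x 480 screen:
bitmap = frame_buffer_bytes(640, 480, 1)    # 38400 bytes
# A 24-bit-per-pixel pixmap for the same screen:
pixmap = frame_buffer_bytes(640, 480, 24)   # 921600 bytes
print(bitmap, pixmap)
```

The 24-fold difference is exactly the jump from a bitmap to a 24-bit pixmap.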

Q8) Architecture of Raster Scan Display:


Raster Scan Display basically employs a Cathode Ray Tube (CRT) or an LCD panel for display. The CRT works just like the picture tube of a television set. The viewing surface is coated with a layer of arrayed phosphor dots. At the back of the CRT is a set of electron guns (cathodes) that produce a controlled stream of electrons called an electron beam. The phosphor material emits light when struck by these high-energy electrons. The architecture of raster and random scan display devices is shown in the diagram below.

The frequency and intensity of the emitted light depend on the type of phosphor material used and the energy of the electrons. To produce a picture on the screen, the directed electron beam starts at the top of the screen. It scans rapidly from left to right along a row of phosphor dots, then returns to the leftmost position one line down, and scans again, repeating this to cover the entire screen. The return of the beam to the leftmost position, one line down, is called horizontal retrace.
Q9) Rasterization:

Imagine a picture you drew with perfect lines and curves. That's
a vector image. Rasterization takes that vector image and
translates it into a grid of colored squares – the pixels. The more
pixels used, the closer the rasterized image will resemble the
original vector image.

• Analogy: Think of a mosaic – a picture made from tiny


colored tiles. Rasterization is like taking a perfect painting
and recreating it using a limited number of colored tiles.
The more tiles you use, the more accurate the recreation
will be.
Q9) Rendering:

Rendering is like taking a virtual photograph in a computer-


generated world. Imagine a 3D scene with objects, lights, and
textures. Rendering software calculates how light would bounce
around the scene, creating shadows, reflections, and realistic
textures on the objects. The final output is a digital image that
looks like a real-world photograph of the 3D scene.
• Example: Imagine a car commercial. The car you see
driving might not be real – it could be a 3D model.
Rendering software creates the final image or animation
sequence showing the car with realistic lighting, reflections,
and motion blur.

Q10) Resolution:
Resolution refers to the number of pixels packed into a digital
image. Think of it as the density of the tiny squares (pixels) that

make up the image. The more pixels you have, the higher the
resolution and the sharper the image will be.

• Imagine a photograph: A low-resolution photo might be


blurry because it doesn't have enough pixels to capture all
the details. A high-resolution photo will be much sharper
because it has a higher density of pixels, capturing finer
details.
Q11) Aspect Ratio:

Aspect ratio describes the proportional relationship between the


width and height of an image or screen. It's like the image's
shape. Common aspect ratios include:

• 16:9: This is the widescreen format used in most modern


TVs and computer monitors. Imagine holding a rectangular
sheet of paper horizontally – that's a 16:9 aspect ratio (wider
than tall).
• 4:3: This is the standard-definition TV format, also used in
some digital cameras. Its shape is noticeably closer to a
square than 16:9, though still slightly wider than tall.
• 1:1: This is a square format often used for social media
profile pictures. Imagine a perfectly square piece of paper –
that's a 1:1 aspect ratio.
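An aspect ratio is just the width and height reduced by their greatest common divisor, which a few lines of Python can show (a hypothetical helper for illustration):

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce width:height to its simplest integer ratio."""
    g = gcd(width, height)
    return width // g, height // g

print(aspect_ratio(1920, 1080))  # (16, 9)
print(aspect_ratio(640, 480))    # (4, 3)
```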
Q12) Screen Resolution vs. Image Resolution:

While both terms deal with resolution, they apply to different


aspects of the digital world. Here's a breakdown to clarify the
distinction:

Screen Resolution:
• What it is: Screen resolution refers to the number of pixels
that make up a display, like a computer monitor, phone
screen, or TV. It's expressed in width x height format (e.g.,
1920 x 1080 pixels).
• Impact: Screen resolution determines how much detail and
sharpness you see on your screen. Higher resolution means
more pixels, leading to a sharper and crisper image.
Image Resolution:
• What it is: Image resolution refers to the number of pixels
that make up a digital image (e.g., a photograph, graphic, or
screenshot). It's also expressed in width x height format.
• Impact: Image resolution determines the level of detail and
clarity of the image itself. Higher resolution images contain
more pixels and can capture finer details.
Analogy to understand the difference:

Imagine a movie theater screen. The screen resolution is like the


number of seats in the theater. A larger theater with more seats
(higher resolution) can accommodate a bigger audience and
provide a more immersive experience. The image resolution is
like the quality of the film itself. A high-quality film with high
resolution will show finer details and sharper visuals, regardless
of the screen size (number of seats).

Examples:
• Scenario: You have a high-resolution image (3840 x 2160
pixels) displayed on a low-resolution screen (1024 x 768
pixels).
• Result: The image will be shrunk to fit the screen. You
might see some loss of detail because the screen doesn't
have enough pixels to display the full resolution of the
image.
• Scenario: You have a low-resolution image (640 x 480
pixels) displayed on a high-resolution screen (1920 x 1080
pixels).
• Result: The image will be blown up to fit the screen. This
can result in a pixelated or blurry image because the
software has to stretch the limited number of pixels to fill
the larger space.

In essence:
• Screen resolution is a property of the display device.
• Image resolution is a property of the image file itself.

Understanding this difference helps you choose the right image


resolution for your needs. For example, a high-resolution image
might be suitable for printing, while a lower resolution image
might be sufficient for a web page where file size is a concern.
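The two scenarios above can be made concrete by computing the uniform scale factor needed to fit an image onto a screen. This is a hypothetical helper for illustration: a scale below 1 means the image is shrunk (detail lost), above 1 means it is stretched and may look pixelated:

```python
def display_scale(image_wh, screen_wh):
    """Uniform scale factor that fits an image onto a screen, preserving aspect ratio."""
    iw, ih = image_wh
    sw, sh = screen_wh
    # The limiting dimension determines the scale.
    return min(sw / iw, sh / ih)

# High-resolution image on a low-resolution screen: shrunk to about 27%.
s1 = display_scale((3840, 2160), (1024, 768))
# Low-resolution image on a high-resolution screen: blown up 2.25x.
s2 = display_scale((640, 480), (1920, 1080))   # 2.25
print(s1, s2)
```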
Q1) Bresenham Line Drawing Algorithm
Line Drawing Algorithms-
In computer graphics, popular algorithms used to generate lines are-

1. Digital Differential Analyzer (DDA) Line Drawing Algorithm


2. Bresenham Line Drawing Algorithm
3. Mid Point Line Drawing Algorithm

Bresenham Line Drawing Algorithm
Procedure-
Given-
• Starting coordinates = (X0, Y0)

• Ending coordinates = (Xn, Yn)


The points generation using Bresenham Line Drawing Algorithm
involves the following steps-
Step-01:
Calculate ΔX and ΔY from the given input.
These parameters are calculated as-
• ΔX = Xn – X0
• ΔY =Yn – Y0

Step-02:

Calculate the decision parameter Pk.


It is calculated as-
Pk = 2ΔY – ΔX
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point depending on the value of the decision parameter Pk, using the following two cases-
Case-01: If Pk < 0, then
• Pk+1 = Pk + 2ΔY
• Xk+1 = Xk + 1
• Yk+1 = Yk
Case-02: If Pk >= 0, then
• Pk+1 = Pk + 2ΔY – 2ΔX
• Xk+1 = Xk + 1
• Yk+1 = Yk + 1
Step-04:

Keep repeating Step-03 until the end point is reached, i.e. for ΔX iterations (one new point per unit step in X).
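The steps above can be sketched in Python. This is a minimal sketch for the case handled here (0 ≤ slope ≤ 1 with X0 < Xn); a full implementation would also cover the remaining octants:

```python
def bresenham(x0, y0, xn, yn):
    """Bresenham line scan conversion for 0 <= slope <= 1 and x0 < xn."""
    dx, dy = xn - x0, yn - y0
    p = 2 * dy - dx              # initial decision parameter Pk
    x, y = x0, y0
    points = [(x, y)]
    while x < xn:
        if p < 0:                # Case-01: keep the same y
            p += 2 * dy
        else:                    # Case-02: step y up as well
            y += 1
            p += 2 * dy - 2 * dx
        x += 1
        points.append((x, y))
    return points

print(bresenham(9, 18, 14, 22))
# [(9, 18), (10, 19), (11, 20), (12, 20), (13, 21), (14, 22)]
```

Running it on Problem-01 below reproduces the table of points exactly.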
PRACTICE PROBLEMS BASED ON BRESENHAM LINE
DRAWING ALGORITHM-

Problem-01:

Calculate the points between the starting coordinates (9, 18) and ending
coordinates (14, 22).

Solution-

Given-
• Starting coordinates = (X0, Y0) = (9, 18)

• Ending coordinates = (Xn, Yn) = (14, 22)

Step-01:

Calculate ΔX and ΔY from the given input.


• ΔX = Xn – X0 = 14 – 9 = 5
• ΔY =Yn – Y0 = 22 – 18 = 4
Step-02:

Calculate the decision parameter.


Pk = 2ΔY – ΔX = 2 × 4 – 5 = 3
So, decision parameter Pk = 3

Step-03:

As Pk >= 0, so case-02 is satisfied.

Thus,
• Pk+1 = Pk + 2ΔY – 2ΔX = 3 + (2 x 4) – (2 x 5) = 1
• Xk+1 = Xk + 1 = 9 + 1 = 10

• Yk+1 = Yk + 1 = 18 + 1 = 19
Similarly, Step-03 is executed until the end point is reached, i.e. ΔX = 5 times (one step for each unit increase in X from 9 to 14).

Pk    Pk+1    Xk+1    Yk+1
-     -       9       18
3     1       10      19
1     -1      11      20
-1    7       12      20
7     5       13      21
5     3       14      22
Problem-02:
Calculate the points between the starting coordinates (20, 10) and
ending coordinates (30, 18).
Solution-
Given-
• Starting coordinates = (X0, Y0) = (20, 10)

• Ending coordinates = (Xn, Yn) = (30, 18)

Step-01:
Calculate ΔX and ΔY from the given input.
• ΔX = Xn – X0 = 30 – 20 = 10
• ΔY =Yn – Y0 = 18 – 10 = 8
Step-02:
Calculate the decision parameter.
Pk = 2ΔY – ΔX = 2 × 8 – 10 = 6
So, decision parameter Pk = 6
Step-03:
As Pk >= 0, so case-02 is satisfied.
Thus,
• Pk+1 = Pk + 2ΔY – 2ΔX = 6 + (2 x 8) – (2 x 10) = 2
• Xk+1 = Xk + 1 = 20 + 1 = 21

• Yk+1 = Yk + 1 = 10 + 1 = 11

Similarly, Step-03 is executed until the end point is reached, i.e. ΔX = 10 times (one step for each unit increase in X from 20 to 30).
Pk    Pk+1    Xk+1    Yk+1
-     -       20      10
6     2       21      11
2     -2      22      12
-2    14      23      12
14    10      24      13
10    6       25      14
6     2       26      15
2     -2      27      16
-2    14      28      16
14    10      29      17
10    6       30      18
Advantages of Bresenham Line Drawing Algorithm-
The advantages of Bresenham Line Drawing Algorithm are-
• It is easy to implement.
• It is fast and incremental.
• It executes quickly: because it uses only integer arithmetic, it is faster than the DDA algorithm.
• The points generated by this algorithm are more accurate than those of the DDA algorithm.
• It uses integer (fixed-point) arithmetic only.
Disadvantages of Bresenham Line Drawing Algorithm-
The disadvantages of Bresenham Line Drawing Algorithm are-
• Though it improves the accuracy of the generated points, the resulting line is still not smooth.
• This algorithm handles only basic line drawing.
• It cannot handle diminishing jaggies.
Q2) Midpoint Ellipse Algorithm:
This is an incremental method for scan converting an ellipse that is centered at the origin in standard position, i.e., with the major and minor axes parallel to the coordinate axes. It is very similar to the midpoint circle algorithm. Because of the four-way symmetry property, we need to consider only the part of the elliptical curve in the first quadrant.
Let's first rewrite the ellipse equation and define the function f that can be used to decide whether the midpoint between two candidate pixels is inside or outside the ellipse:

f(x, y) = b²x² + a²y² − a²b²

with f(x, y) < 0 when (x, y) is inside the ellipse, f(x, y) = 0 on the ellipse, and f(x, y) > 0 outside it.

Now divide the elliptical curve from (0, b) to (a, 0) into two parts at the point Q where the slope of the curve is −1.

The slope of the curve defined by f(x, y) = 0 is dy/dx = −fx/fy, where fx and fy are the partial derivatives of f(x, y) with respect to x and y. We have fx = 2b²x and fy = 2a²y, hence we can monitor the slope value during the scan conversion process to detect Q. Our starting point is (0, b). Suppose that the coordinates of the last scan-converted pixel upon entering step i are (xi, yi). We are to select either T(xi + 1, yi) or S(xi + 1, yi − 1) to be the next pixel. The midpoint of T and S is used to define the following decision parameter.
Algorithm:
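The algorithm listing itself appears to have been lost with the figure, so the following Python sketch follows the standard midpoint ellipse procedure described above (region 1 up to the point Q where the slope reaches −1, then region 2); the function name and return format are ours:

```python
def midpoint_ellipse(a, b):
    """Scan-convert an ellipse centered at the origin; returns first-quadrant pixels."""
    points = []
    x, y = 0, b
    a2, b2 = a * a, b * b
    dx, dy = 2 * b2 * x, 2 * a2 * y   # fx and fy, updated incrementally

    # Region 1: |slope| < 1, i.e. while 2*b^2*x < 2*a^2*y
    d1 = b2 - a2 * b + 0.25 * a2      # f evaluated at the first midpoint (1, b - 1/2)
    while dx < dy:
        points.append((x, y))
        if d1 < 0:                    # midpoint inside: choose T (x+1, y)
            x += 1
            dx += 2 * b2
            d1 += dx + b2
        else:                         # midpoint outside: choose S (x+1, y-1)
            x += 1
            y -= 1
            dx += 2 * b2
            dy -= 2 * a2
            d1 += dx - dy + b2

    # Region 2: |slope| >= 1, step y down each time
    d2 = b2 * (x + 0.5) ** 2 + a2 * (y - 1) ** 2 - a2 * b2
    while y >= 0:
        points.append((x, y))
        if d2 > 0:                    # midpoint outside: keep x
            y -= 1
            dy -= 2 * a2
            d2 += a2 - dy
        else:                         # midpoint inside: step x as well
            y -= 1
            x += 1
            dx += 2 * b2
            dy -= 2 * a2
            d2 += dx - dy + a2
    return points

pts = midpoint_ellipse(8, 6)
print(pts[0], pts[-1])   # (0, 6) (8, 0)
```

The other three quadrants follow from the four-way symmetry: each point (x, y) also yields (−x, y), (x, −y), and (−x, −y).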
Q3) Aliasing And Anti-Aliasing

Aliasing :
In computer graphics, the process by which smooth curves and other
lines become jagged because the resolution of the graphics device or
file is not high enough to represent a smooth curve.
In the line drawing algorithms, we have seen that not all rasterized locations match the true line, and we have to select the optimum raster locations to represent a straight line. This problem is more severe on low-resolution screens, where a line appears like a stair-step, as shown in the figure below. This effect is known as aliasing. It is especially noticeable for lines with very gentle or very steep slopes.

Anti-Aliasing - Computer Graphics


Antialiasing is a computer graphics method that removes the aliasing
effect. The aliasing effect occurs when rasterised images have jagged
edges, sometimes called "jaggies" (an image rendered using pixels).
Technically, jagged edges are a problem that arises when scan
conversion is done with low-frequency sampling, also known as under-
sampling, this under-sampling causes distortion of the image. Moreover,
when real-world objects made of continuous, smooth curves are
rasterised using pixels, aliasing occurs.
Under-sampling is an important factor in anti-aliasing. The information
in the image is lost when the sample size is too small. When sampling is
done at a frequency lower than the Nyquist sampling frequency, under-
sampling takes place. We must have a sampling frequency that is at least
two times higher than the highest frequency appearing in the image in
order to prevent this loss.
Anti-Aliasing Methods:
A high-resolution display, post-filtering (super-sampling), pre-filtering
(area sampling), and pixel phasing are the techniques used to remove
aliasing. The explanations of these are given below:
1. Using High-Resolution Display - Displaying objects at a greater
resolution is one technique to decrease aliasing impact and boost
the sampling rate. When using high resolution, the jaggies are
reduced to a size that renders them invisible to the human eye. As
a result, sharp edges get blurred and appear smooth.
Real-Life Applications:
For example, OLED displays and retina displays in Apple products
both have high pixel densities, which results in jaggies that are so
microscopic that they are blurry and invisible to the human eye.
2. Post-Filtering or Super-Sampling - With this technique, we reduce
the adequate pixel size while improving the sampling resolution by
treating the screen as though it were formed of a much finer grid.
The screen resolution, however, does not change. Now, the
average pixel intensity is determined from the average of the
intensities of the subpixels after each subpixel's intensity has been
calculated. In order to display the image at a lesser resolution or
screen resolution, we do sampling at a higher resolution, a process
known as supersampling. Due to the fact that this process is carried
out after creating the rasterised image, this technique is also known
as post filtration.
Real-Life Applications:
The finest image quality in gaming is produced with SSAA (Super-
sample Antialiasing) or FSAA (full-scene Antialiasing). It is
frequently referred to as the "pure AA," which is extremely slow
and expensive to compute. When no better AA techniques were
available, this technique was frequently utilised in the beginning.
Other SSAA modes are available, including 2X, 4X, 8X, and
others that indicate sampling that is done x times (greater than) the
present resolution.
MSAA (multisampling Antialiasing), a quicker and more accurate
version of super-sampling AA, is a better AA type.
Its computational cost is lower. Companies that produce graphics
cards, such as CSAA by NVIDIA and CFAA by AMD, are
working to improve and advance super-sampling techniques.
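The super-sampling idea above can be sketched in a few lines, assuming a hypothetical `render(u, v)` callback that returns the scene's intensity at a continuous point; each displayed pixel is the average of a factor × factor grid of subsamples:

```python
def supersample(render, width, height, factor=2):
    """Post-filtering: average a factor x factor grid of subsamples per pixel."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            total = 0.0
            for sy in range(factor):
                for sx in range(factor):
                    # subsample position inside the pixel, offset to sample centers
                    u = px + (sx + 0.5) / factor
                    v = py + (sy + 0.5) / factor
                    total += render(u, v)
            row.append(total / factor ** 2)
        image.append(row)
    return image

# A hard vertical edge at x = 1.5: white (1.0) to the left, black (0.0) to the right.
edge = lambda u, v: 1.0 if u < 1.5 else 0.0
print(supersample(edge, 3, 1, factor=2))  # [[1.0, 0.5, 0.0]]
```

The middle pixel straddles the edge, so it gets the intermediate value 0.5 instead of snapping to black or white; that gradation is exactly what softens the jaggies.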
3. Pre-Filtering or Area-Sampling - The areas of each pixel's overlap
with the objects displayed are taken into account while calculating
pixel intensities in area sampling. In this case, the computation of
pixel colour is centred on the overlap of scene objects with a pixel
region.
Example: Let's say a line crosses two pixels. A pixel that covers a
larger portion of the line (90%) displays 90% intensity, whereas a
pixel that covers a smaller portion (10%) displays 10% intensity.
If a pixel region overlaps with multiple colour areas, the final pixel
colour is calculated as the average of those colours. Pre-filtering is
another name for this technique because it is used before
rasterising the image. Some basic graphics algorithms are used to
complete it.
4. Pixel Phasing - It is a method to eliminate aliasing. In this case,
pixel coordinates are altered to virtually exact positions close to
object geometry. For dispersing intensities and aiding with pixel
phasing, some systems let you change the size of individual pixels.
Application of Anti-Aliasing:
1. Compensating for Line Intensity Differences - Although a diagonal line is 1.414 times longer than a horizontal line, when both are plotted on a raster display the number of pixels used to depict them is the same. The longer line therefore appears less intense. Anti-aliasing techniques allocate pixel intensity in accordance with the length of the line to compensate for this loss of intensity.
2. Anti-Aliasing Area Boundaries - Jaggies along area boundaries
can be eliminated using anti-aliasing principles. These techniques
can be used to smooth out area borders in scanline algorithms. If
moving pixels is an option, they are moved to positions nearer the
edges of the area. Other techniques modify the amount of pixel
area inside the boundary by adjusting the pixel intensity at the
boundary position. Area borders are effectively rounded off using
these techniques.
Q4) Even-Odd Method & Winding Number Method - Inside & Outside
Test of a Polygon
Introduction :
A polygon may be represented as a number of line segments
connected end to end to form a closed figure.
Polygons can be represented in two ways –
(i) in outline form, using move commands, and
(ii) as solid objects, by setting high the pixels inside the polygon,
including the pixels on the boundary.
To determine whether a point lies inside a polygon or not, in
computer graphics, we have two methods :
(a) Even-Odd method (odd-parity rule)
(b) Winding Number method
Even-Odd method :
One way to determine whether a point lies inside a polygon is to
construct a line segment between the point (P) to be examined and
a known point outside the polygon. The number of times this line
segment intersects the polygon boundary is then counted. The point
(P) is an internal point if the number of polygon edges intersected by
this line is odd; otherwise, the point is an external point.
In the figure, the line segment from A crosses a single edge, hence
point A is inside the polygon. The point B is also inside the polygon,
as the line segment from B crosses three (odd) edges. But point C is
outside the polygon, as the line segment from C crosses two (even)
edges.
[Figure: Polygon]
But this even-odd test fails when the intersection point is a vertex.
To handle this case, we have to make a few modifications.
We must look at the other end points of the two segments of the
polygon which meet at this vertex. If these points lie on the same
side of the constructed line A'P', then the intersection point counts
as an even number of intersections. But if they lie on opposite
sides of the constructed line A'P', then the intersection point counts
as a single intersection.
[Figure: Polygon]
As we can see, the line segment A'P' intersects the boundary at M,
which is a vertex, and L & Z are the other end points of the two
segments meeting at M. L & Z lie on the same side of the line
segment A'P', so the count is taken as even.
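The even-odd rule can be sketched in a few lines of code. This is a minimal sketch (the function name and the use of a horizontal ray are our own choices); it applies a half-open rule to edge endpoints, which handles the vertex case described above without a separate end-point check:

```python
def is_inside_even_odd(point, polygon):
    """Even-odd (odd-parity) test: cast a horizontal ray from `point`
    and count how many polygon edges it crosses."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Half-open rule: each edge includes only its lower endpoint,
        # so a ray passing through a shared vertex is not counted twice.
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the ray y = py
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing lies to the right of the point
                inside = not inside   # flip parity on each crossing
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(is_inside_even_odd((2, 2), square))   # interior point -> True
print(is_inside_even_odd((5, 2), square))   # exterior point -> False
```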
Winding Number Method :
An alternative method for determining whether a point is interior
to a polygon is the winding number method. Conceptually, one can
stretch a piece of elastic between the point (P) to be checked and a
point on the polygon boundary.
Imagine that the elastic is tied firmly to the point (P) to be checked,
while the other end slides along the boundary of the polygon until
it has made one complete circuit. Then we check how many times
the elastic has been wound around the point. If it is wound at least
once, the point is inside. If there is no net winding, the point is
outside.
In this method, instead of just counting the intersections, we give
each boundary line crossed a direction number, and we sum these
direction numbers. The direction number indicates the direction in
which the polygon edge was drawn relative to the line segment we
constructed for the test.
Example : To test a point (xi, yi), let us consider a horizontal line
segment y = yi which runs from outside the polygon to (xi, yi). We
find all the sides which cross this line segment.
Now there are two ways for a side to cross. The side could start
below the line, cross it, and end above the line; in this case we
give the side direction number -1. Or the edge could start above
the line and finish below it; in this case, it is given direction
number 1. The sum of the direction numbers for the sides that
cross the constructed horizontal line segment yields the "Winding
Number" for the point.
If the winding number is non-zero, the point is interior to the
polygon; otherwise, it is exterior to the polygon.
[Figure: Polygon]
In the above figure, the line segment crosses 4 edges with
direction numbers 1, -1, 1 & -1 respectively, so :
Winding Number = 1 + (-1) + 1 + (-1) = 0
So the point P is outside the polygon. An edge has direction
number -1 when it starts below the line segment & finishes
above it. Similarly, an edge has direction number +1 when it starts
above the line segment & finishes below it.
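A direction-number sketch of the winding number test, following the convention above (a below-to-above crossing counts -1, an above-to-below crossing counts +1) along a horizontal segment running from the left up to the test point; the function name is our own:

```python
def winding_number(point, polygon):
    """Sum direction numbers of edges crossing a horizontal segment that
    runs from outside the polygon (far left) to `point`."""
    px, py = point
    total = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):              # edge crosses the segment's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < px:                    # crossing lies left of the point
                total += -1 if y2 > y1 else 1   # below->above: -1, above->below: +1
    return total

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(winding_number((2, 2), square))    # non-zero: inside
print(winding_number((-1, 2), square))   # zero: outside
```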
Q5) Comparison between DDA and Bresenham Line Drawing
Algorithms
In computer graphics, there are two common algorithms used for
drawing a line on the screen: the DDA (Digital Differential
Analyser) algorithm and the Bresenham line algorithm.
The main distinction between the DDA algorithm and the
Bresenham line algorithm is that the DDA algorithm uses floating
point values, whereas Bresenham uses rounding and works with
integers. The DDA algorithm involves multiplication as well as
division, whereas in the Bresenham algorithm addition and
subtraction are the main operations performed.
Let us see the differences between the DDA algorithm and the
Bresenham line drawing algorithm:
S.NO  DDA Line Algorithm                       Bresenham Line Algorithm
1.    DDA stands for Digital Differential      It has no full form.
      Analyzer.
2.    DDA algorithm is less efficient than     It is more efficient than the
      the Bresenham line algorithm.            DDA algorithm.
3.    The calculation speed of the DDA         The calculation speed of the
      algorithm is less than that of the       Bresenham line algorithm is
      Bresenham line algorithm.                faster than that of the DDA
                                               algorithm.
4.    DDA algorithm is costlier than the       Bresenham line algorithm is
      Bresenham line algorithm.                cheaper than the DDA algorithm.
5.    DDA algorithm has less precision         It has more precision or
      or accuracy.                             accuracy.
6.    In the DDA algorithm, the calculation    In this, the calculation is
      is more complex.                         simple.
7.    In the DDA algorithm, optimization       In this, optimization is
      is not provided.                         provided.
Q6) Flood Fill Algorithm:
In this method, a point or seed which is inside the region is selected.
This point is called a seed point. Then a four-connected or eight-
connected approach is used to fill the region with a specified color.
The flood fill algorithm has many characteristics similar to boundary
fill, but it is more suitable when the boundary has multiple colors.
When the boundary is of many colors and the interior is to be filled
with one color, we use this algorithm.
In the flood fill algorithm, we start from a specified interior point
(x, y) and reassign all pixels that are currently set to a given interior
color with the desired color. Using either a 4-connected or
8-connected approach, we then step through pixel positions until all
interior points have been repainted.
Disadvantage:
1. Very slow algorithm.
2. May fail for large polygons.
3. The initial (seed) pixel requires knowledge about the surrounding pixels.
Algorithm:
1. Procedure floodfill (x, y, fill_color, old_color: integer)
2. If (getpixel (x, y) = old_color)
3. {
4. setpixel (x, y, fill_color);
5. floodfill (x+1, y, fill_color, old_color);
6. floodfill (x-1, y, fill_color, old_color);
7. floodfill (x, y+1, fill_color, old_color);
8. floodfill (x, y-1, fill_color, old_color);
9. }
10. End Procedure
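The recursive procedure above can overflow the call stack on large regions, which is one reason it may fail for large polygons. A common workaround, sketched here on a small grid of colour values (the grid stands in for getpixel/setpixel), replaces recursion with an explicit stack and uses the 4-connected approach:

```python
def flood_fill(grid, x, y, fill_color):
    """Iterative 4-connected flood fill on a 2D list of colour values."""
    old_color = grid[y][x]
    if old_color == fill_color:        # nothing to do; avoids an infinite loop
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == old_color:
            grid[cy][cx] = fill_color
            # push the four neighbours (right, left, down, up)
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

# 0 = interior colour, 1 = boundary colour
grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
flood_fill(grid, 1, 1, 7)
print(grid)  # interior 0s become 7, boundary 1s are untouched
```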
Q7) DDA Algorithm: Advantages and Disadvantages
Line Drawing Algorithms-
In computer graphics, popular algorithms used to generate lines are-
1. Digital Differential Analyzer (DDA) Line Drawing Algorithm
2. Bresenham Line Drawing Algorithm
3. Mid Point Line Drawing Algorithm
DDA Algorithm-
DDA Algorithm is the simplest line drawing algorithm.
Procedure-
Given-
• Starting coordinates = (X0, Y0)
• Ending coordinates = (Xn, Yn)
The points generation using DDA Algorithm involves the
following steps-
Step-01:

Calculate ΔX, ΔY and M from the given input.
These parameters are calculated as-
• ΔX = Xn – X0
• ΔY = Yn – Y0
• M = ΔY / ΔX

Step-02:
Find the number of steps or points in between the starting and
ending coordinates.
if (absolute (ΔX) > absolute (ΔY))
    Steps = absolute (ΔX);
else
    Steps = absolute (ΔY);
Step-03:
Suppose the current point is (Xp, Yp) and the next point is (Xp+1,
Yp+1). Find the next point using the following three cases-
Case-01: If M < 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + M
Case-02: If M = 1, then Xp+1 = Xp + 1 and Yp+1 = Yp + 1
Case-03: If M > 1, then Xp+1 = Xp + (1 / M) and Yp+1 = Yp + 1
Each generated point is rounded off to the nearest integer.
Step-04:
Keep repeating Step-03 until the end point is reached or the number
of generated new points (including the starting and ending points)
equals the steps count.
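The four steps can be sketched as follows (a minimal version; the half-up rounding matches the worked tables below, whereas Python's built-in round() rounds .5 to the nearest even integer):

```python
def dda_line(x0, y0, x1, y1):
    """DDA: step the axis of greater extent by 1 per step, the other
    axis by the slope, rounding each generated point to a pixel."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))          # Step-02: number of new points
    x_inc, y_inc = dx / steps, dy / steps  # Step-03 increments
    points = []
    x, y = float(x0), float(y0)
    for _ in range(steps):
        x += x_inc
        y += y_inc
        # round half up (valid for non-negative coordinates)
        points.append((int(x + 0.5), int(y + 0.5)))
    return points

# Same endpoints as Problem-02: (5, 6) to (13, 10)
print(dda_line(5, 6, 13, 10))
```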
PRACTICE PROBLEMS BASED ON DDA ALGORITHM-
Problem-01:
Calculate the points between the starting point (5, 6) and ending
point (8, 12).
Solution-
Given-
• Starting coordinates = (X0, Y0) = (5, 6)
• Ending coordinates = (Xn, Yn) = (8, 12)
Step-01:
Calculate ΔX, ΔY and M from the given input.
• ΔX = Xn – X0 = 8 – 5 = 3
• ΔY =Yn – Y0 = 12 – 6 = 6
• M = ΔY / ΔX = 6 / 3 = 2
Step-02:
Calculate the number of steps.
As |ΔX| = 3 < |ΔY| = 6, the number of steps = |ΔY| = 6
Step-03:
As M > 1, so case-03 is satisfied.
Now, Step-03 is executed until Step-04 is satisfied.
Xp   Yp   Xp+1   Yp+1   Round off (Xp+1, Yp+1)
5    6    5.5    7      (6, 7)
          6      8      (6, 8)
          6.5    9      (7, 9)
          7      10     (7, 10)
          7.5    11     (8, 11)
          8      12     (8, 12)

Problem-02:
Calculate the points between the starting point (5, 6) and ending
point (13, 10).
Solution-
Given-
• Starting coordinates = (X0, Y0) = (5, 6)
• Ending coordinates = (Xn, Yn) = (13, 10)
Step-01:
Calculate ΔX, ΔY and M from the given input.
• ΔX = Xn – X0 = 13 – 5 = 8
• ΔY =Yn – Y0 = 10 – 6 = 4
• M = ΔY / ΔX = 4 / 8 = 0.50

Step-02:
Calculate the number of steps.
As |ΔX| = 8 > |ΔY| = 4, the number of steps = |ΔX| = 8
Step-03:
As M < 1, so case-01 is satisfied.
Now, Step-03 is executed until Step-04 is satisfied.
Xp   Yp   Xp+1   Yp+1   Round off (Xp+1, Yp+1)
5    6    6      6.5    (6, 7)
          7      7      (7, 7)
          8      7.5    (8, 8)
          9      8      (9, 8)
          10     8.5    (10, 9)
          11     9      (11, 9)
          12     9.5    (12, 10)
          13     10     (13, 10)
Problem-03:
Calculate the points between the starting point (1, 7) and ending
point (11, 17).
Solution-
Given-
• Starting coordinates = (X0, Y0) = (1, 7)
• Ending coordinates = (Xn, Yn) = (11, 17)
Step-01:
Calculate ΔX, ΔY and M from the given input.
• ΔX = Xn – X0 = 11 – 1 = 10
• ΔY =Yn – Y0 = 17 – 7 = 10
• M = ΔY / ΔX = 10 / 10 = 1
Step-02:
Calculate the number of steps.
As |ΔX| = |ΔY| = 10, the number of steps = 10
Step-03:
As M = 1, so case-02 is satisfied.
Now, Step-03 is executed until Step-04 is satisfied.
Xp   Yp   Xp+1   Yp+1   Round off (Xp+1, Yp+1)
1    7    2      8      (2, 8)
          3      9      (3, 9)
          4      10     (4, 10)
          5      11     (5, 11)
          6      12     (6, 12)
          7      13     (7, 13)
          8      14     (8, 14)
          9      15     (9, 15)
          10     16     (10, 16)
          11     17     (11, 17)
Advantages of DDA Algorithm-
The advantages of DDA Algorithm are-
• It is a simple algorithm.
• It is easy to implement.
• It avoids using the multiplication operation which is costly
in terms of time complexity.
Disadvantages of DDA Algorithm-
The disadvantages of DDA Algorithm are-
• There is an extra overhead of using the round off() function.
• Using the round off() function increases the time complexity of
the algorithm.
• The resulting lines are not smooth because of the round off()
function.
• The points generated by this algorithm are not accurate.

Q8) Mid Point Circle Drawing Algorithm

Circle Drawing Algorithms-
In computer graphics, popular algorithms used to generate circles are-
1. Mid Point Circle Drawing Algorithm
2. Bresenham's Circle Drawing Algorithm

Mid Point Circle Drawing Algorithm-
The mid point circle drawing algorithm calculates all the points of
one octant; the points for the other octants are generated using the
eight-way symmetry property of the circle.
Procedure-
Given-
• Centre point of Circle = (X0, Y0)
• Radius of Circle = R
The points generation using Mid Point Circle Drawing Algorithm
involves the following steps-

Step-01:
Assign the starting point coordinates (X0, Y0) as-
• X0 = 0
• Y0 = R

Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 1 – R
Step-03:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point of the first octant depending on the value of the
decision parameter Pk, using the following two cases-
Case-01: If Pk < 0, then
Xk+1 = Xk + 1, Yk+1 = Yk and Pk+1 = Pk + 2Xk+1 + 1
Case-02: If Pk >= 0, then
Xk+1 = Xk + 1, Yk+1 = Yk – 1 and Pk+1 = Pk + 2Xk+1 + 1 – 2Yk+1
Step-04:
If the given centre point (X0, Y0) is not (0, 0), then do the
following and plot the point-
• Xplot = Xc + X0
• Yplot = Yc + Y0
Here, (Xc, Yc) denotes the current point generated in Step-03
assuming the centre is (0, 0), and (X0, Y0) is the given centre.

Step-05:
Keep repeating Step-03 and Step-04 until Xk+1 >= Yk+1.

Step-06:
Step-05 generates all the points for one octant.
To find the points for the other seven octants, follow the eight-way
symmetry property of the circle.
This is depicted by the following figure-
[Figure: eight-way symmetry of a circle]

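The steps above can be sketched in code (a minimal version; names are our own). With the loop condition x <= y it also emits the boundary point where x equals y, e.g. (7, 7) for R = 10:

```python
def midpoint_circle(xc, yc, r):
    """Mid point circle: generate one octant for centre (0, 0), then apply
    eight-way symmetry and translate every point by the centre (xc, yc)."""
    x, y = 0, r
    p = 1 - r                      # Step-02: initial decision parameter
    octant = []
    while x <= y:                  # Step-05: first octant only
        octant.append((x, y))
        x += 1
        if p < 0:                  # Case-01: keep y
            p += 2 * x + 1
        else:                      # Case-02: step y inward
            y -= 1
            p += 2 * x + 1 - 2 * y
    points = set()
    for px, py in octant:          # eight-way symmetry, then translate
        for sx, sy in [(px, py), (py, px)]:
            points.update({(xc + sx, yc + sy), (xc - sx, yc + sy),
                           (xc + sx, yc - sy), (xc - sx, yc - sy)})
    return points

pts = midpoint_circle(0, 0, 10)
print((6, 8) in pts and (10, 0) in pts and (0, -10) in pts)
```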
PRACTICE PROBLEMS BASED ON MID POINT CIRCLE
DRAWING ALGORITHM-

Problem-01:
Given the centre point coordinates (0, 0) and radius as 10, generate all the
points to form a circle.
Solution-
Given-
• Centre Coordinates of Circle (X0, Y0) = (0, 0)
• Radius of Circle = 10
Step-01:
Assign the starting point coordinates (X0, Y0) as-
• X0 = 0
• Y0 = R = 10

Step-02:
Calculate the value of initial decision parameter P0 as-
P0 = 1 – R = 1 – 10 = -9

Step-03:
As P0 < 0, case-01 is satisfied.

Thus,
• Xk+1 = Xk + 1 = 0 + 1 = 1
• Yk+1 = Yk = 10
• Pk+1 = Pk + 2 x Xk+1 + 1 = -9 + (2 x 1) + 1 = -6
Step-04:
This step is not applicable here as the given centre point coordinates
are (0, 0).
Step-05:

Step-03 is executed similarly until Xk+1 >= Yk+1 as follows-

Pk    Pk+1   (Xk+1, Yk+1)
             (0, 10)
-9    -6     (1, 10)
-6    -1     (2, 10)
-1    6      (3, 10)
6     -3     (4, 9)
-3    8      (5, 9)
8     5      (6, 8)
Algorithm calculates all the points of octant-1 and terminates.
Now, the points of octant-2 are obtained using the mirror effect by
swapping X and Y coordinates.
Octant-1 Points   Octant-2 Points
(0, 10)           (8, 6)
(1, 10)           (9, 5)
(2, 10)           (9, 4)
(3, 10)           (10, 3)
(4, 9)            (10, 2)
(5, 9)            (10, 1)
(6, 8)            (10, 0)
These are all the points for Quadrant-1.

Now, the points for the rest of the circle are generated by following
the signs of the other quadrants.
The other points can also be generated by calculating each octant
separately.
Here, all the points have been generated with respect to quadrant-1-

Quadrant-1 (X,Y)   Quadrant-2 (-X,Y)   Quadrant-3 (-X,-Y)   Quadrant-4 (X,-Y)
(0, 10)            (0, 10)             (0, -10)             (0, -10)
(1, 10)            (-1, 10)            (-1, -10)            (1, -10)
(2, 10)            (-2, 10)            (-2, -10)            (2, -10)
(3, 10)            (-3, 10)            (-3, -10)            (3, -10)
(4, 9)             (-4, 9)             (-4, -9)             (4, -9)
(5, 9)             (-5, 9)             (-5, -9)             (5, -9)
(6, 8)             (-6, 8)             (-6, -8)             (6, -8)
(8, 6)             (-8, 6)             (-8, -6)             (8, -6)
(9, 5)             (-9, 5)             (-9, -5)             (9, -5)
(9, 4)             (-9, 4)             (-9, -4)             (9, -4)
(10, 3)            (-10, 3)            (-10, -3)            (10, -3)
(10, 2)            (-10, 2)            (-10, -2)            (10, -2)
(10, 1)            (-10, 1)            (-10, -1)            (10, -1)
(10, 0)            (-10, 0)            (-10, 0)             (10, 0)
These are all the points of the Circle.
Problem-02:

Given the centre point coordinates (4, 4) and radius as 10, generate all
the points to form a circle.

Solution-

Given-
• Centre Coordinates of Circle (X0, Y0) = (4, 4)
• Radius of Circle = 10

As stated in the algorithm,
• We first calculate the points assuming the centre coordinates are
(0, 0).
• At the end, we translate the circle.
Step-01, Step-02 and Step-03 are already completed in Problem-01.
Now, we find the values of Xplot and Yplot using the formula given
in Step-04 of the main algorithm-
• Xplot = Xc + X0 = Xc + 4
• Yplot = Yc + Y0 = Yc + 4
The following table shows the generation of points for Quadrant-1-
(Xk+1, Yk+1)   (Xplot, Yplot)
(0, 10)        (4, 14)
(1, 10)        (5, 14)
(2, 10)        (6, 14)
(3, 10)        (7, 14)
(4, 9)         (8, 13)
(5, 9)         (9, 13)
(6, 8)         (10, 12)
(8, 6)         (12, 10)
(9, 5)         (13, 9)
(9, 4)         (13, 8)
(10, 3)        (14, 7)
(10, 2)        (14, 6)
(10, 1)        (14, 5)
(10, 0)        (14, 4)
These are all the points for Quadrant-1.

The following table shows the points for all the quadrants (here the
symmetry is applied about the translated centre (4, 4))-

Quadrant-1 (X,Y)   Quadrant-2 (-X,Y)   Quadrant-3 (-X,-Y)   Quadrant-4 (X,-Y)
(4, 14)            (4, 14)             (4, -6)              (4, -6)
(5, 14)            (3, 14)             (3, -6)              (5, -6)
(6, 14)            (2, 14)             (2, -6)              (6, -6)
(7, 14)            (1, 14)             (1, -6)              (7, -6)
(8, 13)            (0, 13)             (0, -5)              (8, -5)
(9, 13)            (-1, 13)            (-1, -5)             (9, -5)
(10, 12)           (-2, 12)            (-2, -4)             (10, -4)
(12, 10)           (-4, 10)            (-4, -2)             (12, -2)
(13, 9)            (-5, 9)             (-5, -1)             (13, -1)
(13, 8)            (-5, 8)             (-5, 0)              (13, 0)
(14, 7)            (-6, 7)             (-6, 1)              (14, 1)
(14, 6)            (-6, 6)             (-6, 2)              (14, 2)
(14, 5)            (-6, 5)             (-6, 3)              (14, 3)
(14, 4)            (-6, 4)             (-6, 4)              (14, 4)
These are all the points of the Circle.
Advantages of Mid Point Circle Drawing Algorithm-
The advantages of Mid Point Circle Drawing Algorithm are-
• It is a powerful and efficient algorithm.
• The entire algorithm is based on the simple equation of circle
X^2 + Y^2 = R^2.
• It is easy to implement from the programmer's perspective.
• This algorithm is used to generate curves on raster displays.

Disadvantages of Mid Point Circle Drawing Algorithm-
The disadvantages of Mid Point Circle Drawing Algorithm are-
• Accuracy of the generated points is an issue in this algorithm.
• The circle generated by this algorithm is not smooth.
• This algorithm is time consuming.

Important Points
1. Circle drawing algorithms take advantage of the 8-way symmetry
property of the circle.
2. Every circle has 8 octants, and the circle drawing algorithm
generates all the points for one octant.
3. The points for the other 7 octants are generated by changing the
signs of the X and Y coordinates.
4. To take advantage of the 8-way symmetry property, the circle
must be formed assuming that the centre point coordinates are
(0, 0).
5. If the centre coordinates are other than (0, 0), then we add the
centre's X and Y coordinate values to each point of the circle
generated by assuming (0, 0) as the centre.
Unit-3 – 2D Transformation & Viewing

Transformation
Changing Position, shape, size, or orientation of an object on display is known as transformation.

Basic Transformation
• Basic transformation includes three transformations: Translation, Rotation, and Scaling.
• These three are known as basic transformations because, with a combination of these three
transformations, we can obtain any transformation.

Translation

Fig. 3.1: - Translation.

• It is a transformation used to reposition an object along a straight-line path from one coordinate
location to another.
• It is a rigid-body transformation, so we need to translate the whole object.
• We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate
position (x, y) to move it to the new position (x′, y′) as:
x′ = x + tx and y′ = y + ty
• The translation distance pair (tx, ty) is called a Translation Vector or Shift Vector.
• We can represent this as a single matrix equation in column vector form:
P′ = P + T
[x′]   [x]   [tx]
[y′] = [y] + [ty]
• We can also represent it in row vector form:
P′ = P + T
[x′ y′] = [x y] + [tx ty]
• Since the column vector representation is standard mathematical notation, and since many graphics packages
like GKS and PHIGS use column vectors, we will also follow the column vector representation.
• Example: - Translate the triangle [A (10, 10), B (15, 15), C (20, 10)] 2 units in the x direction and 1 unit in the y
direction.
We know that
P′ = P + T = [x] + [tx]
             [y]   [ty]
For point A (10, 10):
A′ = [10] + [2] = [12]
     [10]   [1]   [11]
For point B (15, 15):
B′ = [15] + [2] = [17]
     [15]   [1]   [16]
For point C (20, 10):
C′ = [20] + [2] = [22]
     [10]   [1]   [11]
• Final coordinates after translation are [A' (12, 11), B' (17, 16), C' (22, 11)].

Rotation
• It is a transformation used to reposition an object along a circular path in the XY-plane.
• To generate a rotation we specify a rotation angle θ and the position of the Rotation Point (Pivot
Point) (xr, yr) about which the object is to be rotated.
• A positive value of the rotation angle defines counter-clockwise rotation and a negative value defines
clockwise rotation.
• We first find the equation of rotation when the pivot point is at the coordinate origin (0, 0).
Fig. 3.2: - Rotation.


• From the figure we can write:
x = r cos ∅
y = r sin ∅
and
x′ = r cos(∅ + θ) = r cos ∅ cos θ − r sin ∅ sin θ
y′ = r sin(∅ + θ) = r cos ∅ sin θ + r sin ∅ cos θ
• Now replace r cos ∅ with x and r sin ∅ with y in the above equations:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
• We can write this as a column vector matrix equation:
P′ = R ∙ P
[x′]   [cos θ   −sin θ] [x]
[y′] = [sin θ    cos θ] [y]
• Rotation about an arbitrary point is illustrated in the figure below.

Fig. 3.3: - Rotation about pivot point.

• The transformation equations for rotation of a point about a pivot point (xr, yr) are:
x′ = xr + (x − xr) cos θ − (y − yr) sin θ
y′ = yr + (x − xr) sin θ + (y − yr) cos θ
• These equations differ from rotation about the origin, and the matrix representation is also different.
• The matrix equation can be obtained by a simple method that we will discuss later in this chapter.
• Rotation is also a rigid-body transformation, so we need to rotate each point of the object.
• Example: - Locate the new position of the triangle [A (5, 4), B (8, 3), C (8, 8)] after its rotation by 90°
clockwise about the origin.
As the rotation is clockwise we take θ = −90°.
P′ = R ∙ P
P′ = [cos(−90)   −sin(−90)] [5 8 8]
     [sin(−90)    cos(−90)] [4 3 8]
P′ = [ 0   1] [5 8 8]
     [−1   0] [4 3 8]
P′ = [ 4   3   8]
     [−5  −8  −8]
• Final coordinates after rotation are [A' (4, -5), B' (3, -8), C' (8, -8)].

Scaling

Fig. 3.4: - Scaling.
• It is a transformation used to alter the size of an object.
• This operation is carried out by multiplying the coordinate values (x, y) by scaling factors (sx, sy)
respectively.
• So the equations for scaling are given by:
x′ = x ∙ sx
y′ = y ∙ sy
• These equations can be represented as a column vector matrix equation:
P′ = S ∙ P
[x′]   [sx   0 ] [x]
[y′] = [0    sy] [y]
• Any positive value can be assigned to (sx, sy).
• Values less than 1 reduce the size while values greater than 1 enlarge the size of the object; the object
remains unchanged when both factors are 1.
• The same values of sx and sy produce Uniform Scaling, and different values of sx and sy produce
Differential Scaling.
• Objects transformed with the above equations are both scaled and repositioned.
• A scaling factor with value less than 1 will move the object closer to the origin, while a scaling factor with
value greater than 1 will move the object away from the origin.
• We can control the position of an object after scaling by keeping one position, called the Fixed Point (xf, yf),
unchanged; that point will remain the same after the scaling transformation.

Fig. 3.5: - Fixed point scaling.


• The equations for scaling with a fixed point position (xf, yf) are:
x′ = xf + (x − xf) sx          y′ = yf + (y − yf) sy
x′ = xf + x sx − xf sx         y′ = yf + y sy − yf sy
x′ = x sx + xf (1 − sx)        y′ = y sy + yf (1 − sy)
• The matrix equation for the same will be discussed in a later section.
• Polygons are scaled by applying the scaling to their vertex coordinates and redrawing, while other bodies like
circles and ellipses are scaled using their defining parameters. For example, an ellipse is scaled using its
semi-major axis, semi-minor axis and centre point, and redrawn at that position.
• Example: - Consider a square with left-bottom corner at (2, 2) and right-top corner at (6, 6). Apply the
transformation which makes its size half.
As we want the size halved, the scale factors are sx = 0.5, sy = 0.5, and the coordinates of the square are
[A (2, 2), B (6, 2), C (6, 6), D (2, 6)].
P′ = S ∙ P
P′ = [0.5   0 ] [2 6 6 2]
     [0    0.5] [2 2 6 6]
P′ = [1 3 3 1]
     [1 1 3 3]
• Final coordinates after scaling are [A' (1, 1), B' (3, 1), C' (3, 3), D' (1, 3)].

Matrix Representation and homogeneous coordinates


• Many graphics applications involve sequences of geometric transformations.
• For example, in design and picture construction applications we perform Translation, Rotation, and Scaling
to fit the picture components into their proper positions.
• For efficient processing we will reformulate transformation sequences.
• We have matrix representations of the basic transformations, and we can express them in the general matrix
form as:
P′ = M1 ∙ P + M2
where P and P′ are the initial and final point positions, M1 contains the rotation and scaling terms, and M2
contains the translational terms associated with pivot points, fixed points and repositioning.
• For efficient utilization we must calculate the whole sequence of transformations in one step, and for that
reason we reformulate the above equation to eliminate the matrix addition associated with the translation
terms in matrix M2.
• We can accomplish this by expanding the 2X2 matrix representations into 3X3 matrices.
• This allows us to convert all transformations into matrix multiplications, but we need to represent a vertex
position (x, y) with the homogeneous coordinate triple (xh, yh, h), where x = xh / h and y = yh / h; thus we can
also write the triple as (h ∙ x, h ∙ y, h).
• For two-dimensional geometric transformations we can take the value of h to be any positive number, so we
can get infinitely many homogeneous representations for a coordinate value (x, y).
• But a convenient choice is to set h = 1, as it is the multiplicative identity; then (x, y) is represented as (x, y, 1).
• Expressing positions in homogeneous coordinates allows us to represent all geometric transformation
equations as matrix multiplications.
• Let's see each representation with h = 1.
Translation
P′ = T(tx, ty) ∙ P
[x′]   [1  0  tx] [x]
[y′] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]
NOTE: - The inverse of the translation matrix is obtained by putting −tx & −ty instead of tx & ty.
Rotation
P′ = R(θ) ∙ P
[x′]   [cos θ  −sin θ  0] [x]
[y′] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]
NOTE: - The inverse of the rotation matrix is obtained by replacing θ by −θ.
Scaling
P′ = S(sx, sy) ∙ P
[x′]   [sx  0   0] [x]
[y′] = [0   sy  0] [y]
[1 ]   [0   0   1] [1]
NOTE: - The inverse of the scaling matrix is obtained by replacing sx & sy by 1/sx & 1/sy respectively.
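With these 3x3 forms, composing transformations is just matrix multiplication. A minimal sketch using plain Python lists (helper names are our own; no graphics library is assumed):

```python
import math

def mat_mul(a, b):
    """Multiply a 3x3 matrix by a 3x3 matrix or a 3x1 column vector."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(len(b[0]))]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotation(theta_deg):
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# A point (x, y) becomes the homogeneous column (x, y, 1); P' = M . P.
# Two successive translations of p(2, 3) by (4, 3) then (-1, 2):
m = mat_mul(translation(-1, 2), translation(4, 3))
print(mat_mul(m, [[2], [3], [1]]))
```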

Composite Transformation
• We can set up a matrix for any sequence of transformations as a composite transformation matrix by
calculating the matrix product of individual transformation.
• For column matrix representation of coordinate positions, we form composite transformations by
multiplying matrices in order from right to left.

Translations
• Two successive translations are performed as:
P′ = T(tx2, ty2) ∙ {T(tx1, ty1) ∙ P}
P′ = {T(tx2, ty2) ∙ T(tx1, ty1)} ∙ P
P′ = [1  0  tx2] [1  0  tx1]
     [0  1  ty2] [0  1  ty1] ∙ P
     [0  0  1  ] [0  0  1  ]
P′ = [1  0  tx1 + tx2]
     [0  1  ty1 + ty2] ∙ P
     [0  0  1        ]
P′ = T(tx1 + tx2, ty1 + ty2) ∙ P
Here P′ and P are column vectors of the final and initial point coordinates respectively.
• This concept can be extended for any number of successive translations.
Example: Obtain the final coordinates after two translations on point p(2, 3) with translation vectors
(4, 3) and (−1, 2) respectively.
P′ = T(tx1 + tx2, ty1 + ty2) ∙ P
P′ = [1  0  4 + (−1)] [2]   [1  0  3] [2]   [5]
     [0  1  3 + 2   ] [3] = [0  1  5] [3] = [8]
     [0  0  1       ] [1]   [0  0  1] [1]   [1]
Final coordinates after the translations are p′(5, 8).

Rotations
• Two successive rotations are performed as:
P′ = R(θ2) ∙ {R(θ1) ∙ P}
P′ = {R(θ2) ∙ R(θ1)} ∙ P
P′ = [cos θ2  −sin θ2  0] [cos θ1  −sin θ1  0]
     [sin θ2   cos θ2  0] [sin θ1   cos θ1  0] ∙ P
     [0        0       1] [0        0       1]
P′ = [cos θ1 cos θ2 − sin θ1 sin θ2   −(sin θ1 cos θ2 + cos θ1 sin θ2)   0]
     [sin θ1 cos θ2 + cos θ1 sin θ2    cos θ1 cos θ2 − sin θ1 sin θ2     0] ∙ P
     [0                                0                                 1]
P′ = [cos(θ1 + θ2)  −sin(θ1 + θ2)  0]
     [sin(θ1 + θ2)   cos(θ1 + θ2)  0] ∙ P
     [0              0             1]
P′ = R(θ1 + θ2) ∙ P
Here P′ and P are column vectors of the final and initial point coordinates respectively.
• This concept can be extended for any number of successive rotations.
Example: Obtain the final coordinates after two rotations on point p(6, 9) with rotation angles 30° and
60° respectively.
P′ = R(θ1 + θ2) ∙ P
P′ = [cos(30 + 60)  −sin(30 + 60)  0]       [0  −1  0] [6]   [−9]
     [sin(30 + 60)   cos(30 + 60)  0] ∙ P = [1   0  0] [9] = [ 6]
     [0              0             1]       [0   0  1] [1]   [ 1]
Final coordinates after the rotations are p′(−9, 6).

Scaling
• Two successive scalings are performed as:
P′ = S(sx2, sy2) ∙ {S(sx1, sy1) ∙ P}
P′ = {S(sx2, sy2) ∙ S(sx1, sy1)} ∙ P
P′ = [sx2  0    0] [sx1  0    0]
     [0    sy2  0] [0    sy1  0] ∙ P
     [0    0    1] [0    0    1]
P′ = [sx1 ∙ sx2  0          0]
     [0          sy1 ∙ sy2  0] ∙ P
     [0          0          1]
P′ = S(sx1 ∙ sx2, sy1 ∙ sy2) ∙ P
Here P′ and P are column vectors of the final and initial point coordinates respectively.
• This concept can be extended for any number of successive scalings.
Example: Obtain the final coordinates after two scalings on line pq [p(2, 2), q(8, 8)] with scaling factors
(2, 2) and (3, 3) respectively.
P′ = S(sx1 ∙ sx2, sy1 ∙ sy2) ∙ P
P′ = [2 ∙ 3  0      0]       [6  0  0] [2  8]   [12  48]
     [0      2 ∙ 3  0] ∙ P = [0  6  0] [2  8] = [12  48]
     [0      0      1]       [0  0  1] [1  1]   [ 1   1]
Final coordinates after the scalings are p′(12, 12) and q′(48, 48).

General Pivot-Point Rotation

Fig. 3.6: - General pivot point rotation.
(a) Original position of object and pivot point. (b) Translation of object so that pivot point (xr, yr) is at origin.
(c) Rotation about origin. (d) Translation of object so that pivot point is returned to position (xr, yr).


• For rotating an object about an arbitrary point, called the pivot point, we need to apply the following
sequence of transformations:
1. Translate the object so that the pivot point coincides with the coordinate origin.
2. Rotate the object about the coordinate origin by the specified angle.
3. Translate the object so that the pivot point is returned to its original position (i.e. the inverse of step-1).
• Let's find the matrix equation for this:
P′ = T(xr, yr) ∙ [R(θ) ∙ {T(−xr, −yr) ∙ P}]
P′ = {T(xr, yr) ∙ R(θ) ∙ T(−xr, −yr)} ∙ P
P′ = [1  0  xr] [cos θ  −sin θ  0] [1  0  −xr]
     [0  1  yr] [sin θ   cos θ  0] [0  1  −yr] ∙ P
     [0  0  1 ] [0       0      1] [0  0   1 ]
P′ = [cos θ  −sin θ  xr(1 − cos θ) + yr sin θ]
     [sin θ   cos θ  yr(1 − cos θ) − xr sin θ] ∙ P
     [0       0      1                       ]
P′ = R(xr, yr, θ) ∙ P
Here P′ and P are column vectors of the final and initial point coordinates respectively, and (xr, yr) are the
coordinates of the pivot point.
• Example: - Locate the new position of the triangle [A (5, 4), B (8, 3), C (8, 8)] after its rotation by 90°
clockwise about the centroid.
The pivot point is the centroid of the triangle, so:
xr = (5 + 8 + 8) / 3 = 7,   yr = (4 + 3 + 8) / 3 = 5
As the rotation is clockwise we take θ = −90°.
P′ = R(xr, yr, θ) ∙ P
P′ = [cos θ  −sin θ  xr(1 − cos θ) + yr sin θ] [5 8 8]
     [sin θ   cos θ  yr(1 − cos θ) − xr sin θ] [4 3 8]
     [0       0      1                       ] [1 1 1]
P′ = [cos(−90)  −sin(−90)  7(1 − cos(−90)) + 5 sin(−90)] [5 8 8]
     [sin(−90)   cos(−90)  5(1 − cos(−90)) − 7 sin(−90)] [4 3 8]
     [0          0         1                           ] [1 1 1]
P′ = [ 0  1  7(1 − 0) − 5(1)] [5 8 8]   [ 0  1   2] [5 8 8]
     [−1  0  5(1 − 0) + 7(1)] [4 3 8] = [−1  0  12] [4 3 8]
     [ 0  0  1              ] [1 1 1]   [ 0  0   1] [1 1 1]
P′ = [6  5  10]
     [7  4   4]
     [1  1   1]
• Final coordinates after rotation are [A' (6, 7), B' (5, 4), C' (10, 4)].
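The composite T(xr, yr) ∙ R(θ) ∙ T(−xr, −yr) can be checked numerically. A small sketch (helper names are our own; cos/sin are rounded so that cos(−90°) comes out exactly 0):

```python
import math

def mat_mul(a, b):
    """Multiply a 3x3 matrix by a matrix with 3 rows (any column count)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(len(b[0]))]
            for i in range(3)]

def pivot_rotation(xr, yr, theta_deg):
    """Compose translate-to-origin, rotate, translate-back into one matrix."""
    c = round(math.cos(math.radians(theta_deg)), 10)
    s = round(math.sin(math.radians(theta_deg)), 10)
    t_back = [[1, 0, xr], [0, 1, yr], [0, 0, 1]]
    rot = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    t_to = [[1, 0, -xr], [0, 1, -yr], [0, 0, 1]]
    return mat_mul(t_back, mat_mul(rot, t_to))

# Triangle A(5,4), B(8,3), C(8,8) rotated -90 degrees about the centroid (7, 5);
# the triangle's columns are A, B, C in homogeneous form.
m = pivot_rotation(7, 5, -90)
print(mat_mul(m, [[5, 8, 8], [4, 3, 8], [1, 1, 1]]))
```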

General Fixed-Point Scaling

Fig. 3.7: - General fixed point scaling.
(a) Original position of object and fixed point. (b) Translate object so that fixed point (xf, yf) is at origin.
(c) Scale object with respect to origin. (d) Translate object so that fixed point is returned to position (xf, yf).


• For scaling an object while the position of one point, called the fixed point, remains the same, we need to
apply the following sequence of transformations:
1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the coordinate origin with the specified scale factors.
3. Translate the object so that the fixed point is returned to its original position (i.e. the inverse of step-1).
• Let's find the matrix equation for this:
P′ = T(xf, yf) ∙ [S(sx, sy) ∙ {T(−xf, −yf) ∙ P}]
P′ = {T(xf, yf) ∙ S(sx, sy) ∙ T(−xf, −yf)} ∙ P
P′ = [1  0  xf] [sx  0   0] [1  0  −xf]
     [0  1  yf] [0   sy  0] [0  1  −yf] ∙ P
     [0  0  1 ] [0   0   1] [0  0   1 ]
P′ = [sx  0   xf(1 − sx)]
     [0   sy  yf(1 − sy)] ∙ P
     [0   0   1         ]
P′ = S(xf, yf, sx, sy) ∙ P
Here P′ and P are column vectors of the final and initial point coordinates respectively, and (xf, yf) are the
coordinates of the fixed point.
• Example: - Consider the square with left-bottom corner at (2, 2) and right-top corner at (6, 6). Apply the
transformation that makes its size half such that its center remains the same.
The fixed point is the center of the square, so:
𝑥𝑓 = 2 + (6 − 2)/2 = 4, 𝑦𝑓 = 2 + (6 − 2)/2 = 4
As we want half the size, the scale factors are 𝑠𝑥 = 0.5, 𝑠𝑦 = 0.5, and the coordinates of the square are [A (2,
2), B (6, 2), C (6, 6), D (2, 6)].
𝑃′ = 𝑆(𝑥𝑓, 𝑦𝑓, 𝑠𝑥, 𝑠𝑦) ∙ 𝑃
𝑃′ = [𝑠𝑥 𝟎  𝑥𝑓(1 − 𝑠𝑥)] [2 6 6 2]
     [𝟎  𝑠𝑦 𝑦𝑓(1 − 𝑠𝑦)] [2 2 6 6]
     [𝟎  𝟎  1         ] [1 1 1 1]
𝑃′ = [0.5 0   4(1 − 0.5)] [2 6 6 2]
     [0   0.5 4(1 − 0.5)] [2 2 6 6]
     [0   0   1         ] [1 1 1 1]
𝑃′ = [0.5 0   2] [2 6 6 2]
     [0   0.5 2] [2 2 6 6]
     [0   0   1] [1 1 1 1]
𝑃′ = [3 5 5 3]
     [3 3 5 5]
     [1 1 1 1]
• Final coordinates after scaling are [A’ (3, 3), B’ (5, 3), C’ (5, 5), D’ (3, 5)].
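The translate-scale-translate sequence collapses into the single fixed-point scaling formula, which can be checked with a small Python sketch (the helper name is illustrative):

```python
def scale_about_fixed_point(points, xf, yf, sx, sy):
    """Scale 2D points about fixed point (xf, yf).

    Equivalent to translate(-xf,-yf), scale(sx,sy), translate(xf,yf):
    x' = xf + (x - xf)*sx, y' = yf + (y - yf)*sy.
    """
    return [(xf + (x - xf) * sx, yf + (y - yf) * sy) for x, y in points]

square = [(2, 2), (6, 2), (6, 6), (2, 6)]
print(scale_about_fixed_point(square, 4, 4, 0.5, 0.5))
```

Applied to the square of the example with center (4, 4) and sx = sy = 0.5, it yields (3, 3), (5, 3), (5, 5), (3, 5).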

General Scaling Directions


Fig. 3.8: - General scaling directions: scale factors 𝒔𝟏 and 𝒔𝟐 are applied along two orthogonal directions at angle 𝜽 to the coordinate axes.


• Parameters 𝒔𝒙 and 𝒔𝒚 scale the object along the 𝒙 and 𝒚 directions. We can scale an object in other directions
by rotating the object to align the desired scaling directions with the coordinate axes before applying the
scaling transformation.
• Suppose we apply scaling factors 𝒔𝟏 and 𝒔𝟐 in the directions shown in the figure; then we apply the following
transformations.
1. Perform a rotation so that the direction for 𝒔𝟏 and 𝒔𝟐 coincide with 𝒙 and 𝒚 axes.
2. Scale the object with specified scale factors.
3. Perform opposite rotation to return points to their original orientations. (i.e. Inverse of step-1).
• Let’s find the matrix equation for this:
𝑷′ = 𝑹−𝟏(𝜽) ∙ [𝑺(𝒔𝟏, 𝒔𝟐) ∙ {𝑹(𝜽) ∙ 𝑷}]
𝑷′ = {𝑹−𝟏(𝜽) ∙ 𝑺(𝒔𝟏, 𝒔𝟐) ∙ 𝑹(𝜽)} ∙ 𝑷
𝑷′ = [ 𝐜𝐨𝐬 𝜽 𝐬𝐢𝐧 𝜽 𝟎] [𝒔𝟏 𝟎  𝟎] [𝐜𝐨𝐬 𝜽 −𝐬𝐢𝐧 𝜽 𝟎]
     [−𝐬𝐢𝐧 𝜽 𝐜𝐨𝐬 𝜽 𝟎] [𝟎  𝒔𝟐 𝟎] [𝐬𝐢𝐧 𝜽  𝐜𝐨𝐬 𝜽 𝟎] ∙ 𝑷
     [ 𝟎     𝟎     𝟏] [𝟎  𝟎  𝟏] [𝟎      𝟎     𝟏]
𝑷′ = [𝒔𝟏 𝐜𝐨𝐬𝟐 𝜽 + 𝒔𝟐 𝐬𝐢𝐧𝟐 𝜽   (𝒔𝟐 − 𝒔𝟏) 𝐜𝐨𝐬 𝜽 𝐬𝐢𝐧 𝜽   𝟎]
     [(𝒔𝟐 − 𝒔𝟏) 𝐜𝐨𝐬 𝜽 𝐬𝐢𝐧 𝜽   𝒔𝟏 𝐬𝐢𝐧𝟐 𝜽 + 𝒔𝟐 𝐜𝐨𝐬𝟐 𝜽   𝟎] ∙ 𝑷
     [𝟎                        𝟎                        𝟏]
Here 𝑷′ and 𝑷 are column vectors of the final and initial point coordinates respectively, and 𝜽 is the angle
between the desired scaling direction and the standard coordinate axes.


Other Transformation
• Some packages provide a few additional transformations which are useful in certain applications. Two
such transformations are reflection and shear.

Reflection

• A reflection is a transformation that produces a mirror image of an object.


• The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by
rotating the object 180° about the reflection axis.
• Reflection gives an image based on the position of the axis of reflection. Transformation matrices for a few
positions are discussed here.

Transformation matrix for reflection about the line 𝒚 = 𝟎 , 𝒕𝒉𝒆 𝒙 𝒂𝒙𝒊𝒔.

Fig. 3.9: - Reflection about the x-axis: the original position (vertices 1, 2, 3) above the axis maps to the reflected position (1’, 2’, 3’) below it.


• This transformation keeps the x values the same but flips (changes the sign of) the y values of coordinate
positions.

1 0 0
[0 −1 0]
0 0 1

Transformation matrix for reflection about the line 𝒙 = 𝟎 , 𝒕𝒉𝒆 𝒚 𝒂𝒙𝒊𝒔.

Fig. 3.10: - Reflection about the y-axis: the original position (vertices 1, 2, 3) maps to the reflected position (1’, 2’, 3’) on the other side of the axis.


• This transformation keeps the y values the same but flips (changes the sign of) the x values of coordinate
positions.

−1 0 0
[ 0 1 0]
0 0 1

Transformation matrix for reflection about the 𝑶𝒓𝒊𝒈𝒊𝒏.


Fig. 3.11: - Reflection about the origin: each point (x, y) maps to (−x, −y).


• This transformation flips (changes the sign of) both the x and y values of coordinate positions.

−1 0 0
[ 0 −1 0]
0 0 1

Transformation matrix for reflection about the line 𝒙 = 𝒚 .

Fig. 3.12: - Reflection about the line x = y.


• This transformation interchanges the x and y values of coordinate positions.

0 1 0
[1 0 0]
0 0 1


Transformation matrix for reflection about the line 𝒙 = −𝒚 .

Fig. 3.12: - Reflection about the line x = −y.


• This transformation interchanges the x and y values of coordinate positions and changes their signs.

0 −1 0
[−1 0 0]
0 0 1

• Example: - Find the coordinates after reflection of the triangle [A (10, 10), B (15, 15), C (20, 10)] about the x
axis.
𝑃′ = [1  0 0] [10  15  20]
     [0 −1 0] [10  15  10]
     [0  0 1] [ 1   1   1]
𝑃′ = [ 10   15   20]
     [−10  −15  −10]
     [  1    1    1]
• Final coordinates after reflection are [A’ (10, −10), B’ (15, −15), C’ (20, −10)].
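All five reflection matrices above reduce to simple coordinate rules, which can be collected in a small Python sketch (names are illustrative):

```python
def reflect(points, axis):
    """Reflect 2D points about one of the standard axes of reflection."""
    rules = {
        'x':      lambda x, y: (x, -y),    # about the x axis (y = 0)
        'y':      lambda x, y: (-x, y),    # about the y axis (x = 0)
        'origin': lambda x, y: (-x, -y),   # about the origin
        'y=x':    lambda x, y: (y, x),     # about the line x = y
        'y=-x':   lambda x, y: (-y, -x),   # about the line x = -y
    }
    rule = rules[axis]
    return [rule(x, y) for x, y in points]

print(reflect([(10, 10), (15, 15), (20, 10)], 'x'))
```

For the worked example this reproduces A’ (10, −10), B’ (15, −15), C’ (20, −10).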

Shear

• A transformation that distorts the shape of an object such that the transformed shape appears as if the
object were composed of internal layers that slide over each other is called a shear.
• Two common shearing transformations are those that shift coordinate x values and those that shift y
values.

Shear in 𝒙 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏 .

Fig. 3.13: - Shear in the x-direction: a square before shear becomes a parallelogram after shear.


• Shear relative to 𝑥 − 𝑎𝑥𝑖𝑠 that is 𝑦 = 0 line can be produced by following equation:


𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ 𝒚 , 𝒚′ = 𝒚
• Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 𝟎
[𝟎 𝟏 𝟎]
𝟎 𝟎 𝟏
Here 𝒔𝒉𝒙 is shear parameter. We can assign any real value to 𝒔𝒉𝒙.
• We can generate 𝑥 − 𝑑𝑖𝑟𝑒𝑐𝑡𝑖𝑜𝑛 shear relative to other reference line 𝑦 = 𝑦𝑟𝑒𝑓 with following equation:
𝒙′ = 𝒙 + 𝒔𝒉𝒙 ∙ (𝒚 − 𝒚𝒓𝒆𝒇) , 𝒚′ = 𝒚
• Transformation matrix for that is:
𝟏 𝒔𝒉𝒙 −𝒔𝒉𝒙 ∙ 𝒚𝒓𝒆𝒇
[𝟎 𝟏 𝟎 ]
𝟎 𝟎 𝟏
• Example: - Shear the unit square in the x direction with shear parameter ½ relative to the line 𝑦 = −1.
Here 𝑦𝑟𝑒𝑓 = −1 and 𝑠ℎ𝑥 = 0.5.
Coordinates of the unit square are [A (0, 0), B (1, 0), C (1, 1), D (0, 1)].
𝑃′ = [1 𝑠ℎ𝑥 −𝑠ℎ𝑥 ∙ 𝑦𝑟𝑒𝑓] [0 1 1 0]
     [0 1   0          ] [0 0 1 1]
     [0 0   1          ] [1 1 1 1]
𝑃′ = [1 0.5 −0.5 ∙ (−1)] [0 1 1 0]
     [0 1   0          ] [0 0 1 1]
     [0 0   1          ] [1 1 1 1]
𝑃′ = [1 0.5 0.5] [0 1 1 0]
     [0 1   0  ] [0 0 1 1]
     [0 0   1  ] [1 1 1 1]
𝑃′ = [0.5 1.5 2 1]
     [0   0   1 1]
     [1   1   1 1]
• Final coordinates after shear are [A’ (0.5, 0), B’ (1.5, 0), C’ (2, 1), D’ (1, 1)].

Shear in 𝒚 − 𝒅𝒊𝒓𝒆𝒄𝒕𝒊𝒐𝒏 .

Fig. 3.14: - Shear in the y-direction: a square before shear becomes a parallelogram after shear.


• Shear relative to 𝑦 − 𝑎𝑥𝑖𝑠 that is 𝑥 = 0 line can be produced by following equation:
𝒙′ = 𝒙 , 𝒚′ = 𝒚 + 𝒔𝒉𝒚 ∙ 𝒙
• Transformation matrix for that is:
𝟏 𝟎 𝟎
[𝒔𝒉𝒚 𝟏 𝟎]
𝟎 𝟎 𝟏
Here 𝒔𝒉𝒚 is shear parameter. We can assign any real value to 𝒔𝒉𝒚.
• We can generate 𝑦 − 𝑑𝑖𝑟𝑒𝑐𝑡𝑖𝑜𝑛 shear relative to other reference line 𝑥 = 𝑥𝑟𝑒𝑓 with following equation:
𝒙′ = 𝒙, 𝒚′ = 𝒚 + 𝒔𝒉𝒚 ∙ (𝒙 − 𝒙𝒓𝒆𝒇)
• Transformation matrix for that is:
𝟏 𝟎 𝟎
[𝒔𝒉𝒚 𝟏 −𝒔𝒉𝒚 ∙ 𝒙𝒓𝒆𝒇]
𝟎 𝟎 𝟏
• Example: - Shear the unit square in the y direction with shear parameter ½ relative to the line 𝑥 = −1.
Here 𝑥𝑟𝑒𝑓 = −1 and 𝑠ℎ𝑦 = 0.5.
Coordinates of the unit square are [A (0, 0), B (1, 0), C (1, 1), D (0, 1)].
𝑃′ = [1   0 0          ] [0 1 1 0]
     [𝑠ℎ𝑦 1 −𝑠ℎ𝑦 ∙ 𝑥𝑟𝑒𝑓] [0 0 1 1]
     [0   0 1          ] [1 1 1 1]
𝑃′ = [1   0 0          ] [0 1 1 0]
     [0.5 1 −0.5 ∙ (−1)] [0 0 1 1]
     [0   0 1          ] [1 1 1 1]
𝑃′ = [1   0 0  ] [0 1 1 0]
     [0.5 1 0.5] [0 0 1 1]
     [0   0 1  ] [1 1 1 1]
𝑃′ = [0   1 1 0  ]
     [0.5 1 2 1.5]
     [1   1 1 1  ]
• Final coordinates after shear are [A’ (0, 0.5), B’ (1, 1), C’ (1, 2), D’ (0, 1.5)].
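Both shear equations can be checked in a few lines of Python (the helper name is illustrative; it applies the x- and y-shear formulas directly rather than building the matrices):

```python
def shear(points, shx=0.0, shy=0.0, xref=0.0, yref=0.0):
    """Shear 2D points: x-shear relative to line y = yref,
    y-shear relative to line x = xref (use one at a time)."""
    return [(x + shx * (y - yref), y + shy * (x - xref)) for x, y in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(shear(square, shx=0.5, yref=-1))  # x-direction example above
print(shear(square, shy=0.5, xref=-1))  # y-direction example above
```

The two calls reproduce the worked examples: (0.5, 0), (1.5, 0), (2, 1), (1, 1) for the x-shear and (0, 0.5), (1, 1), (1, 2), (0, 1.5) for the y-shear.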

The Viewing Pipeline


• Window: The area selected in world coordinates for display is called the window. It defines what is to be viewed.
• Viewport: The area on a display device on which the window image is displayed (mapped) is called the viewport. It
defines where to display.
• In many cases the window and viewport are rectangles, but other shapes may also be used as window and
viewport.
• In general, finding the device coordinates of the viewport from the world coordinates of the window is called the
viewing transformation.
• Sometimes this viewing transformation is referred to as the window-to-viewport transformation, but in
general it involves more steps.

Fig. 3.1: - A viewing transformation using standard rectangles for the window and viewport.
• Now we see steps involved in viewing pipeline.


Fig. 3.2: - The 2D viewing pipeline: construct the world-coordinate scene using modeling-coordinate transformations (MC → WC), convert world coordinates to viewing coordinates (WC → VC), map viewing coordinates to normalized viewing coordinates using window-viewport specifications (VC → NVC), and map normalized coordinates to device coordinates (NVC → DC).


• As shown in the figure above, first of all we construct the world-coordinate scene using modeling-coordinate
transformations.
• Then we convert world coordinates to viewing coordinates using the viewing transformation.
• Then we map viewing coordinates to normalized viewing coordinates, in which we obtain values
between 0 and 1.
• At last we convert normalized viewing coordinates to device coordinates using device-driver software,
which provides the device specification.
• Finally the device coordinates are used to display the image on the display screen.
• By changing the viewport position on the screen we can see the image at different places on the screen.
• By changing the sizes of the window and viewport we can obtain zoom-in and zoom-out effects as per
requirement.
• A fixed-size viewport with a smaller window gives a zoom-in effect, and a fixed-size viewport with a larger
window gives a zoom-out effect.
• Viewports are generally defined within the unit square so that graphics packages are more device
independent; these coordinates are called normalized viewing coordinates.
Viewing Coordinate Reference Frame

Fig. 3.3: - A viewing-coordinate frame is moved into coincidence with the world frame in two steps: (a)
translate the viewing origin to the world origin, and then (b) rotate to align the axes of the two systems.
• We can define the viewing reference frame in any direction and at any position.
• For handling such a condition, first of all we translate the reference-frame origin to the standard reference-frame
origin and then rotate it to align it with the standard axes.
• In this way we can adjust the window in any reference frame.
• This is illustrated by the following transformation matrix:

𝐌𝐰𝐜,𝐯𝐜 = 𝐑𝐓
• Where T is translation matrix and R is rotation matrix.
Window-To-Viewport Coordinate Transformation
• Mapping of window coordinates to the viewport is called the window-to-viewport transformation.
• We do this using a transformation that maintains the relative position of window coordinates in the viewport.
• That means the center coordinates in the window must remain at the center position in the viewport.
• We find the relative position by the following equations:
(𝐱𝐯 − 𝐱𝐯𝐦𝐢𝐧)/(𝐱𝐯𝐦𝐚𝐱 − 𝐱𝐯𝐦𝐢𝐧) = (𝐱𝐰 − 𝐱𝐰𝐦𝐢𝐧)/(𝐱𝐰𝐦𝐚𝐱 − 𝐱𝐰𝐦𝐢𝐧)
(𝐲𝐯 − 𝐲𝐯𝐦𝐢𝐧)/(𝐲𝐯𝐦𝐚𝐱 − 𝐲𝐯𝐦𝐢𝐧) = (𝐲𝐰 − 𝐲𝐰𝐦𝐢𝐧)/(𝐲𝐰𝐦𝐚𝐱 − 𝐲𝐰𝐦𝐢𝐧)
• Solving with the viewport position as the subject we obtain:
𝐱𝐯 = 𝐱𝐯𝐦𝐢𝐧 + (𝐱𝐰 − 𝐱𝐰𝐦𝐢𝐧)𝐬𝐱
𝐲𝐯 = 𝐲𝐯𝐦𝐢𝐧 + (𝐲𝐰 − 𝐲𝐰𝐦𝐢𝐧)𝐬𝐲
• Where the scaling factors are:
𝐬𝐱 = (𝐱𝐯𝐦𝐚𝐱 − 𝐱𝐯𝐦𝐢𝐧)/(𝐱𝐰𝐦𝐚𝐱 − 𝐱𝐰𝐦𝐢𝐧)
𝐬𝐲 = (𝐲𝐯𝐦𝐚𝐱 − 𝐲𝐯𝐦𝐢𝐧)/(𝐲𝐰𝐦𝐚𝐱 − 𝐲𝐰𝐦𝐢𝐧)
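The mapping equations translate directly to code; a minimal Python sketch, with window and viewport given as (xmin, ymin, xmax, ymax) tuples (an assumed convention, and the function name is illustrative):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map a world point (xw, yw) from window to viewport coordinates.

    win and vp are (xmin, ymin, xmax, ymax) rectangles.
    """
    xwmin, ywmin, xwmax, ywmax = win
    xvmin, yvmin, xvmax, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # x scaling factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # y scaling factor
    return (xvmin + (xw - xwmin) * sx, yvmin + (yw - ywmin) * sy)

# The window center maps to the viewport center, preserving relative position:
print(window_to_viewport(5, 5, (0, 0, 10, 10), (100, 100, 300, 200)))
```

With sx ≠ sy the image is stretched or contracted, as noted below.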
• We can also map window to viewport with the set of transformation, which include following sequence
of transformations:
1. Perform a scaling transformation using a fixed-point position of (xWmin,ywmin) that scales the window
area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
• For maintaining relative proportions we take 𝐬𝐱 = 𝐬𝐲. If the two are not equal, the image is stretched
or contracted in the x or y direction when displayed on the output device.
• Characters are handled in two different ways: one is to simply maintain their relative position like other
primitives; the other is to maintain a standard character size even when the viewport is enlarged or
reduced.
• A number of display devices can be used in an application, and for each we can use a different window-to-
viewport transformation. This mapping is called the workstation transformation.

Fig. 3.4: - workstation transformation.

• As shown in figure two different displays devices are used and we map different window-to-viewport on
each one.
Clipping Operations
• Generally, any procedure that identifies those portions of a picture that are either inside or outside of a
specified region of space is referred to as a clipping algorithm, or simply clipping. The region against
which an object is to be clipped is called a clip window.
• A clip window can be a general polygon or it can have a curved boundary.
Application of Clipping
• It can be used for displaying particular part of the picture on display screen.
• Identifying visible surface in 3D views.
• Antialiasing.
• Creating objects using solid-modeling procedures.
• Displaying multiple windows on same screen.
• Drawing and painting.
Point Clipping
• In point clipping we eliminate those points which are outside the clipping window and draw points which
are inside the clipping window.
• Here we consider the clipping window to be a rectangular boundary with edges (xwmin, xwmax, ywmin, ywmax).
• So for finding whether a given point is inside or outside the clipping window we use the following inequalities:
𝒙𝒘𝒎𝒊𝒏 ≤ 𝒙 ≤ 𝒙𝒘𝒎𝒂𝒙
𝒚𝒘𝒎𝒊𝒏 ≤ 𝒚 ≤ 𝒚𝒘𝒎𝒂𝒙
• If both inequalities are satisfied then the point is inside; otherwise the point is outside the clipping
window.
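The two inequalities translate directly to code; a minimal Python sketch with illustrative names:

```python
def clip_points(points, xwmin, ywmin, xwmax, ywmax):
    """Keep only the points inside the rectangular clip window (inclusive)."""
    return [(x, y) for x, y in points
            if xwmin <= x <= xwmax and ywmin <= y <= ywmax]

# (5, 9) lies outside the 4x4 window and is eliminated:
print(clip_points([(1, 1), (5, 9), (3, 4)], 0, 0, 4, 4))
```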
Line Clipping
• Line clipping involves several possible cases.
1. Completely inside the clipping window.
2. Completely outside the clipping window.
3. Partially inside and partially outside the clipping window.

Fig. 3.5: - Line clipping against a rectangular window: (a) before clipping, (b) after clipping. Lines entirely inside the window (P1P2) are retained, lines entirely outside are removed, and partially visible lines (P5P6, P7P8, P9P10) are clipped at the window boundary.

• A line which is completely inside is displayed completely. A line which is completely outside is eliminated from
the display. For a partially visible line we need to calculate its intersections with the window boundary and find
which part is inside the clipping boundary and which part is eliminated.
• For line clipping several scientists have proposed different methods to solve this clipping problem. Some of them
are discussed below.
Cohen-Sutherland Line Clipping
• This is one of the oldest and most popular line-clipping procedures.

Region and Region Code


• In this method we divide the whole space into nine regions and assign a 4-bit code to each endpoint of the line
depending on the position where the endpoint is located.

1001 1000 1010

0001 0000 0010

0101 0100 0110

Fig. 3.6: - Region codes for the nine areas around the clipping window.


• Figure 3.6 shows the code for a line endpoint that falls within a particular area.
• The code is derived by setting a particular bit according to the position of the area:
Set bit 1: for left of the clipping window.
Set bit 2: for right of the clipping window.
Set bit 3: for below the clipping window.
Set bit 4: for above the clipping window.
• A bit that is set as mentioned above is 1 and the others are 0.

Algorithm
Step-1:
Assign a region code to both endpoints of the line depending on the position where each endpoint is located.

Step-2:
If both endpoints have code ‘0000’
Then the line is completely inside.
Otherwise
Perform a logical AND between the two codes.

If the result of the logical AND is non-zero


The line is completely outside the clipping window.
Otherwise
Calculate the intersection points with the boundaries one by one.
Divide the line into two parts at each intersection point.
Recursively call the algorithm for both line segments.


Step-3:
Draw the line segments which are completely inside and eliminate the line segments found completely
outside.

Intersection points calculation with clipping window boundary


• For intersection calculation we use line equation “𝑦 = 𝑚𝑥 + 𝑏”.
• ‘𝑥′ is constant for left and right boundary which is:
o for left “𝑥 = 𝑥𝑤𝑚𝑖𝑛”
o for right “𝑥 = 𝑥𝑤𝑚𝑎𝑥”
• So we calculate the 𝑦 coordinate of intersection for these boundaries by putting the value of 𝑥, depending on
whether the boundary is left or right, in the equation below.
𝒚 = 𝒚𝟏 + 𝒎(𝒙 − 𝒙𝟏)
• ′𝑦′ coordinate is constant for top and bottom boundary which is:
o for top “𝑦 = 𝑦𝑤𝑚𝑎𝑥”
o for bottom “𝑦 = 𝑦𝑤𝑚𝑖𝑛”
• So we calculate the 𝑥 coordinate of intersection for these boundaries by putting the value of 𝑦, depending on
whether the boundary is top or bottom, in the equation below.
𝒙 = 𝒙𝟏 + (𝒚 − 𝒚𝟏)/𝒎
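The region-code assignment and the trivial accept/reject tests of Steps 1 and 2 can be sketched as follows (function names are illustrative; the recursive intersection subdivision is omitted for brevity):

```python
# Bit masks for the four half-planes (bit 1 = left ... bit 4 = above)
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    """4-bit Cohen-Sutherland region code of point (x, y)."""
    code = 0
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BELOW
    elif y > ywmax:
        code |= ABOVE
    return code

def trivial_test(p1, p2, win):
    """Return 'accept', 'reject', or 'clip' for segment p1-p2 against win."""
    c1 = region_code(*p1, *win)
    c2 = region_code(*p2, *win)
    if c1 == 0 and c2 == 0:
        return 'accept'   # both endpoints inside: code 0000
    if c1 & c2:
        return 'reject'   # logical AND non-zero: completely outside
    return 'clip'         # needs intersection calculation

win = (0, 0, 10, 10)  # xwmin, ywmin, xwmax, ywmax
print(trivial_test((2, 2), (8, 8), win))
print(trivial_test((-5, 1), (-1, 9), win))
```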
Liang-Barsky Line Clipping
• The line-clipping approach given by Liang and Barsky is faster than Cohen-Sutherland line clipping.
It is based on analysis of the parametric equations of the line, which are as below.
𝒙 = 𝒙𝟏 + 𝒖∆𝒙
𝒚 = 𝒚𝟏 + 𝒖∆𝒚
Where 0 ≤ 𝑢 ≤ 1, ∆𝑥 = 𝑥2 − 𝑥1 and ∆𝑦 = 𝑦2 − 𝑦1.

Algorithm
1. Read two end points of line 𝑃1(𝑥1, 𝑦1) and 𝑃2(𝑥2, 𝑦2)
2. Read two corner vertices, left top and right bottom of window: (𝑥𝑤𝑚𝑖𝑛, 𝑦𝑤𝑚𝑎𝑥) and (𝑥𝑤𝑚𝑎𝑥, 𝑦𝑤𝑚𝑖𝑛)
3. Calculate values of parameters 𝑝𝑘 and 𝑞𝑘 for 𝑘 = 1, 2, 3, 4 such that,
𝑝1 = −∆𝑥, 𝑞1 = 𝑥1 − 𝑥𝑤𝑚𝑖𝑛
𝑝2 = ∆𝑥, 𝑞2 = 𝑥𝑤𝑚𝑎𝑥 − 𝑥1
𝑝3 = −∆𝑦, 𝑞3 = 𝑦1 − 𝑦𝑤𝑚𝑖𝑛
𝑝4 = ∆𝑦, 𝑞4 = 𝑦𝑤𝑚𝑎𝑥 − 𝑦1
4. If 𝑝𝑘 = 0 for any value of 𝑘 = 1, 2, 3, 4 then,
Line is parallel to 𝑘𝑡ℎ boundary.

If corresponding 𝑞𝑘 < 0 then,


Line is completely outside the boundary. Therefore, discard the line segment and go to Step
8.
Otherwise
Check line is horizontal or vertical and accordingly check line end points with
corresponding boundaries.

If line endpoints lie within the bounded area


Then use them to draw line.
Otherwise

Use boundary coordinates to draw line. And go to Step 8.
5. For 𝑘 = 1, 2, 3, 4 calculate 𝑟𝑘 for nonzero values of 𝑝𝑘 and 𝑞𝑘 as follows:
𝑟𝑘 = 𝑞𝑘/𝑝𝑘 , 𝑓𝑜𝑟 𝑘 = 1, 2, 3, 4
6. Find 𝑢1 𝑎𝑛𝑑 𝑢2 as given below:
𝑢1 = max{0, 𝑟𝑘|𝑤ℎ𝑒𝑟𝑒 𝑘 𝑡𝑎𝑘𝑒𝑠 𝑎𝑙𝑙 𝑣𝑎𝑙𝑢𝑒𝑠 𝑓𝑜𝑟 𝑤ℎ𝑖𝑐ℎ 𝑝𝑘 < 0}
𝑢2 = min{1, 𝑟𝑘|𝑤ℎ𝑒𝑟𝑒 𝑘 𝑡𝑎𝑘𝑒𝑠 𝑎𝑙𝑙 𝑣𝑎𝑙𝑢𝑒𝑠 𝑓𝑜𝑟 𝑤ℎ𝑖𝑐ℎ 𝑝𝑘 > 0}
7. If 𝑢1 ≤ 𝑢2 then
Calculate endpoints of clipped line:
𝑥1′ = 𝑥1 + 𝑢1∆𝑥
𝑦1′ = 𝑦1 + 𝑢1∆𝑦
𝑥2′ = 𝑥1 + 𝑢2∆𝑥
𝑦2′ = 𝑦1 + 𝑢2∆𝑦
Draw line (𝑥1′, 𝑦1′, 𝑥2′, 𝑦2′);
8. Stop.

Advantages
1. More efficient.
2. Only requires one division to update 𝑢1 and 𝑢2.
3. Window intersections of line are calculated just once.
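The steps above can be sketched compactly in Python (an illustrative implementation returning the clipped endpoints, or None when the line is fully rejected; the special-case handling of step 4 collapses into the general parameter updates):

```python
def liang_barsky(x1, y1, x2, y2, xwmin, ywmin, xwmax, ywmax):
    """Clip a segment to the window; return clipped endpoints or None."""
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                                   # p1..p4
    q = [x1 - xwmin, xwmax - x1, y1 - ywmin, ywmax - y1]     # q1..q4
    u1, u2 = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:              # line parallel to this boundary
            if qk < 0:
                return None      # parallel and completely outside
        else:
            r = qk / pk
            if pk < 0:
                u1 = max(u1, r)  # potential entering intersection
            else:
                u2 = min(u2, r)  # potential leaving intersection
    if u1 > u2:
        return None              # line lies entirely outside the window
    return (x1 + u1 * dx, y1 + u1 * dy, x1 + u2 * dx, y1 + u2 * dy)

# A horizontal line crossing the whole window is clipped to its edges:
print(liang_barsky(-5, 5, 15, 5, 0, 0, 10, 10))
```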

Nicholl-Lee-Nicholl Line Clipping


• By creating more regions around the clip window the NLN algorithm avoids multiple clipping of an
individual line segment.
• In Cohen-Sutherland line clipping, multiple intersection points of a line are sometimes calculated
before the actual window-boundary intersection is found or the line is completely rejected.
• These multiple intersection calculations are avoided in the NLN line-clipping procedure.
• NLN line clipping performs fewer comparisons and divisions, so it is more efficient.
• But NLN line clipping cannot be extended to three dimensions, while the Cohen-Sutherland and Liang-Barsky
algorithms can be easily extended to three dimensions.
• For a given line we first find in which of the nine regions the first endpoint falls; only the three
regions shown in the figure below are considered, and if the point falls in another region we
transform it into one of these three regions.

Fig. 3.7: - Three possible positions for a line endpoint 𝒑𝟏 in the NLN line-clipping algorithm: (a) 𝒑𝟏 in the window, (b) 𝒑𝟏 in an edge region, (c) 𝒑𝟏 in a corner region.
• We can also extend this procedure for all nine regions.
• When p1 is inside the window we divide the whole area into the following regions:


Fig. 3.8: - Clipping region when p1 is inside the window.


• When p1 is in an edge region we divide the whole area into the following regions:

Fig. 3.9: - Clipping region when p1 is in edge region.


• When p1 is in a corner region we divide the whole area into the following regions:

Fig. 3.10: - Two possible sets of clipping region when p1 is in corner region.
• Regions are named in such a way that the name of the region in which 𝒑𝟐 falls gives the window edges which
intersect the line.
• For example, region LT says that the line needs to be clipped at the left and top boundaries.
• For finding in which region the line 𝒑𝟏𝒑𝟐 falls we compare the slope of the line to the slopes of the
boundaries:
𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝑩𝟏 < 𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝟐 < 𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝑩𝟐
Where 𝒑𝟏𝒑𝑩𝟏 and 𝒑𝟏𝒑𝑩𝟐 are the boundary lines.
• For example, when p1 is in an edge region, to check whether p2 is in region LT we use the following condition:

𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝑻𝑹 < 𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝟐 < 𝒔𝒍𝒐𝒑𝒆 𝒑𝟏𝒑𝑻𝑳

(𝒚𝑻 − 𝒚𝟏)/(𝒙𝑹 − 𝒙𝟏) < (𝒚𝟐 − 𝒚𝟏)/(𝒙𝟐 − 𝒙𝟏) < (𝒚𝑻 − 𝒚𝟏)/(𝒙𝑳 − 𝒙𝟏)
• After checking the slope condition we need to check whether the line crosses zero, one, or two edges.
• This can be done by comparing the coordinates of 𝑝2 with the coordinates of the window boundary.
• For the left and right boundaries we compare 𝑥 coordinates, and for the top and bottom boundaries we compare 𝑦
coordinates.
• If the line does not fall in any defined region then we clip the entire line.
• Otherwise we calculate the intersection.
• After finding the region we calculate the intersection point using the parametric equations, which are:
𝒙 = 𝒙𝟏 + (𝒙𝟐 − 𝒙𝟏)𝒖
𝒚 = 𝒚𝟏 + (𝒚𝟐 − 𝒚𝟏)𝒖
• For the left or right boundary 𝑥 = 𝑥𝐿 or 𝑥𝑅 respectively, with 𝑢 = (𝑥𝐿/𝑅 − 𝑥1)/(𝑥2 − 𝑥1), so that 𝑦 can be
obtained from the parametric equation as below:
𝒚 = 𝒚𝟏 + ((𝒚𝟐 − 𝒚𝟏)/(𝒙𝟐 − 𝒙𝟏)) (𝒙𝑳 − 𝒙𝟏)
• Keep the portion which is inside and clip the rest.

Polygon Clipping
• For polygon clipping we need to modify the line-clipping procedure, because in line clipping we consider
only a line segment while in polygon clipping we need to consider the area and the new
boundary of the polygon after clipping.
Sutherland-Hodgeman Polygon Clipping
• To correctly clip a polygon we process the polygon boundary as a whole against each window edge.
• This is done by processing all polygon vertices against each clip-rectangle boundary one by one.
• Beginning with the initial set of polygon vertices, we first clip against the left boundary and produce a new
sequence of vertices.
• Then that new set of vertices is passed to the right boundary clipper, the bottom boundary clipper,
and the top boundary clipper, as shown in the figure below.

Fig. 3.11: - Clipping a polygon against successive window boundaries.

Fig. 3.12: - Processing the vertices of the polygon through the left, right, bottom, and top boundary clippers in sequence.
• There are four possible cases when processing vertices in sequence around the perimeter of a polygon.

Fig. 3.13: - The four possible cases when processing a pair of successive polygon vertices against a window boundary.
• As shown in case 1: if both vertices are inside the window, we add only the second vertex to the output list.
• In case 2: if the first vertex is inside the boundary and the second vertex is outside, only the
edge’s intersection with the window boundary is added to the output vertex list.
• In case 3: if both vertices are outside the window boundary, nothing is added to the output list.
• In case 4: if the first vertex is outside and the second vertex is inside the boundary, we add both the intersection
point with the window boundary and the second vertex to the output list.
• When polygon clipping against one boundary is done, we clip against the next window boundary.
• We illustrate this method with a simple example.

Fig. 3.14: - Clipping a polygon (vertices 1–6) against the left window boundary, generating intersection points 1’–5’.


• As shown in the figure above, clipping against the left boundary, vertices 1 and 2 are found to be on the outside of
the boundary. Then we move to vertex 3, which is inside; we calculate the intersection and add both the
intersection point and vertex 3 to the output list.
• Then we move to vertex 4: vertices 3 and 4 are both inside, so we add vertex 4 to the output list;
similarly from 4 to 5 we add 5 to the output list. Then from 5 to 6 we move from inside to outside, so we add the
intersection point to the output list, and finally from 6 to 1 both vertices are outside the window so we add
nothing.
• Convex polygons are correctly clipped by the Sutherland-Hodgeman algorithm, but concave polygons
may be displayed with extraneous lines.
• To overcome this problem, one possible solution is to divide the polygon into a number of small
convex polygons and then process them one by one.
• Another approach is to use the Weiler-Atherton algorithm.
Weiler-Atherton Polygon Clipping
• In this algorithm the vertex-processing procedure for window boundaries is modified so that concave polygons
are also clipped correctly.
• It can be applied to arbitrary polygon-clipping regions, as it was developed for visible-surface
identification.
• The main idea of this algorithm is that instead of always proceeding around the polygon edges as vertices are
processed, we sometimes need to follow the window boundaries.
• The rest of the procedure is similar to the Sutherland-Hodgeman algorithm.
• For clockwise processing of polygon vertices we use the following rules:
o For an outside-to-inside pair of vertices, follow the polygon boundary.
o For an inside-to-outside pair of vertices, follow the window boundary in a clockwise direction.
• We illustrate it with an example:

Fig. 3.14: - Clipping a concave polygon (a) with the Weiler-Atherton algorithm generates the two separate polygon areas in (b).
• As shown in the figure, we start from v1 and move clockwise towards v2, adding the intersection point and the next
point to the output list by following the polygon boundary; then from v2 to v3 we add v3 to the output list.
• From v3 to v4 we calculate the intersection point, add it to the output list, and follow the window boundary.
• Similarly from v4 to v5 we add the intersection point and the next point and follow the polygon boundary; next
we move from v5 to v6, add the intersection point, and follow the window boundary; finally v6 to v1 is
outside, so nothing is added.
• In this way we get two separate polygon sections after clipping.

Unit-4 – 3D Concept & Object Representation

Three Dimensional Display Methods


Parallel Projection
• This method generates a view of a solid object by projecting parallel lines onto the display plane.
• By changing the viewing position we can get different views of a 3D object on the 2D display screen.

Fig. 4.1: - different views object by changing viewing plane position.


• The figure above shows different views of an object.
• This technique is used in engineering and architectural drawings to represent an object with a set of views
that maintain relative proportions of the object, e.g. orthographic projection.

Perspective projection
• This method generates a view of a 3D object by projecting points onto the display plane along converging
paths.

Fig. 4.2: - perspective projection


• This displays an object smaller when it is away from the view plane and at nearly full size when it is close
to the view plane.
• It produces a more realistic view, as this is the way our eyes form images.

Depth cueing
• Many times depth information is important so that, for a particular viewing direction, we can identify
which are the front surfaces and which are the back surfaces of a displayed object.


• A simple method to do this is depth cueing, in which higher intensity is assigned to closer objects and lower
intensity to farther objects.
• Depth cueing is applied by choosing maximum and minimum intensity values and a range of distances over
which the intensities are to vary.
• Another application is modeling the effect of the atmosphere.

Visible line and surface Identification


• In this method we first identify visible lines or surfaces by some method.
• Then we display visible lines highlighted or in some different color.
• Another way is to display hidden lines as dashed lines, or simply not display them.
• But not drawing hidden lines loses some information.
• A similar method can be applied to surfaces, by displaying shaded or colored surfaces.
• Some visible-surface algorithms establish visibility pixel by pixel across the view plane; others
determine visibility of object surfaces as a whole.

Surface Rendering
• A more realistic image is produced by setting surface intensities according to the light reflected from each surface
and the characteristics of that surface.
• This gives more intensity to shiny surfaces and less to dull surfaces.
• It also applies higher intensity where more light falls and less where less light falls.

Exploded and Cutaway views


• Many times the internal structure of an object needs to be shown. For example, in machine drawings the internal
assembly is important.
• For displaying such views, the upper cover of the body is removed (cut away) so that the internal parts are
visible.

Three dimensional stereoscopic views


• This method displays computer-generated scenes with apparent depth.
• The graphics monitors which display three-dimensional scenes are devised using a technique that
reflects a CRT image from a vibrating flexible mirror.
Fig. 4.3: - A 3D display system using a vibrating mirror: a timing and control system synchronizes the vibrating flexible mirror with the CRT, and the viewer sees the projected 3D image.
Fig. 4.3: - 3D display system uses a vibrating mirror.


• The vibrating mirror changes its focal length due to the vibration, which is synchronized with the display of an
object on the CRT.
• Each point on the object is reflected from the mirror into a spatial position corresponding to the distance
of that point from the viewing position.
• A very good example of this system is the GENISCO SPACE GRAPH system, which uses a vibrating mirror to
project 3D objects into a 25 cm by 25 cm by 25 cm volume. This system is also capable of showing 2D cross-
sections at different depths.
• Another way is stereoscopic views.
• Stereoscopic views do not truly produce three-dimensional images, but they produce a 3D effect by presenting
a different view to each eye of an observer so that the scene appears to have depth.
• To obtain this we first need to obtain two views of the object, generated from viewing directions
corresponding to each eye.
• We can construct the two views as computer-generated scenes with different viewing positions, or we can
use a stereo camera pair to photograph an object or scene.
• When we see both views simultaneously, the left view with the left eye and the right view with the right eye, the
two views merge and produce an image which appears to have depth.
• One way to produce the stereoscopic effect is to display each of the two views with a raster system on
alternate refresh cycles.
• The screen is viewed through glasses with each lens designed to act as a rapidly
alternating shutter that is synchronized to block out one of the views.

Polygon Surfaces
• A polygonal surface can be thought of as a surface composed of polygonal faces.
• The most commonly used boundary representation for a three-dimensional object is a set of polygon
surfaces that enclose the object interior.

Polygon Tables
• Representation of vertex coordinates, edges, and other properties of polygons in table form is called a
polygon table.
• Polygon data tables can be organized into two groups: geometric tables and attribute tables.
• Geometric tables contain vertex coordinates and the other parameters which specify the geometry of the polygon.
• Attribute tables store other information like color, transparency, etc.
• A convenient way to represent the geometric table is with three different tables, namely a vertex table, an edge
table, and a polygon table.


Vertex Table        Edge Table      Polygon Surface Table
V1: X1, Y1, Z1      E1: V1, V2      S1: E1, E2, E3
V2: X2, Y2, Z2      E2: V2, V3      S2: E3, E4, E5, E6
V3: X3, Y3, Z3      E3: V3, V1
V4: X4, Y4, Z4      E4: V3, V4
V5: X5, Y5, Z5      E5: V4, V5
                    E6: V5, V1

Fig. 4.4: - Geometric data table representation for two adjacent polygon surfaces S1 and S2 sharing edge E3.


• The vertex table stores each vertex included in the polygon mesh.
• The edge table stores each edge, with pointers for its two endpoint vertices back into the vertex table.
• The polygon table stores each surface of the polygon mesh, with an edge pointer for each edge of the surface.
• This three-table representation stores each vertex only once and similarly each edge only once, so it
avoids storing common vertices and edges repeatedly and is a memory-efficient
method.
• Another method is to represent the mesh with two tables, a vertex table and a polygon table, but it is inefficient
as it stores common edges multiple times.
• Since the tables have many entries for a large number of polygons, we need to check for consistency, as
errors may occur during input.
• For dealing with that we add extra information to the tables. For example, the figure below shows the edge table
of the above example with information about the surfaces in which each edge is present.

E1: V1, V2, S1


E2: V2, V3, S1
E3: V3, V1, S1, S2
E4: V3, V4, S2
E5: V4, V5, S2
E6: V5, V1, S2

Fig. 4.5: - Edge table of above example with extra information as surface pointer.
• Now if any surface entry in the polygon table refers to an edge in the edge table, we can verify whether that edge
actually belongs to that surface; if not, an error is detected and may be corrected if sufficient information is
stored.
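The three-table representation, including the extra surface pointers used for the consistency check, can be sketched with plain Python dictionaries (the vertex coordinates here are placeholders):

```python
# Geometric tables for the two-surface mesh of Fig. 4.4;
# each vertex and each edge is stored exactly once.
vertex_table = {
    'V1': (0.0, 1.0, 0.0), 'V2': (-1.0, 0.0, 0.0), 'V3': (0.0, 0.0, 0.0),
    'V4': (0.0, -1.0, 0.0), 'V5': (1.0, 0.0, 0.0),
}
# Each edge: its two endpoint vertices plus the surfaces sharing it
# (the extra information added for consistency checking).
edge_table = {
    'E1': ('V1', 'V2', ['S1']),
    'E2': ('V2', 'V3', ['S1']),
    'E3': ('V3', 'V1', ['S1', 'S2']),
    'E4': ('V3', 'V4', ['S2']),
    'E5': ('V4', 'V5', ['S2']),
    'E6': ('V5', 'V1', ['S2']),
}
polygon_table = {'S1': ['E1', 'E2', 'E3'], 'S2': ['E3', 'E4', 'E5', 'E6']}

def check_consistency(polys, edges):
    """Every edge listed for a surface must point back to that surface."""
    return all(s in edges[e][2] for s, es in polys.items() for e in es)

print(check_consistency(polygon_table, edge_table))
```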


Plane Equations
• To produce a display of a 3D object we must process the input data representation of the object through several procedures.
• This processing sometimes needs the orientation of a polygon face, which can be obtained from the vertex coordinate values and the equation of the polygon's plane.
• Equation of plane is given as
𝐴𝑥 + 𝐵𝑦 + 𝐶𝑧 + 𝐷 = 0
• Where (x, y, z) is any point on the plane and A, B, C, D are constants. We find them by writing the plane equation for three non-collinear points on the plane and solving the simultaneous equations for the ratios A/D, B/D, and C/D:
(A⁄D)x1 + (B⁄D)y1 + (C⁄D)z1 = −1
(A⁄D)x2 + (B⁄D)y2 + (C⁄D)z2 = −1
(A⁄D)x3 + (B⁄D)y3 + (C⁄D)z3 = −1
• Solving by determinants:

      |1 y1 z1|      |x1 1 z1|      |x1 y1 1|       |x1 y1 z1|
  A = |1 y2 z2|  B = |x2 1 z2|  C = |x2 y2 1|  D = −|x2 y2 z2|
      |1 y3 z3|      |x3 1 z3|      |x3 y3 1|       |x3 y3 z3|
• By expanding a determinant we get
𝐴 = 𝑦1(𝑧2 − 𝑧3) + 𝑦2(𝑧3 − 𝑧1) + 𝑦3(𝑧1 − 𝑧2)
𝐵 = 𝑧1(𝑥2 − 𝑥3) + 𝑧2(𝑥3 − 𝑥1) + 𝑧3(𝑥1 − 𝑥2)
𝐶 = 𝑥1(𝑦2 − 𝑦3) + 𝑥2(𝑦3 − 𝑦1) + 𝑥3(𝑦1 − 𝑦2)
𝐷 = −𝑥1(𝑦2𝑧3 − 𝑦3𝑧2) − 𝑥2(𝑦3𝑧1 − 𝑦1𝑧3) − 𝑥3(𝑦1𝑧2 − 𝑦2𝑧1)
• These values of A, B, C, D are then stored in the polygon data structure with the other polygon data.
• The orientation of the plane is described by the normal vector to the plane.

Fig. 4.6: - the vector N normal to the surface.
• Here N = (A, B, C), where A, B, C are the plane coefficients.
• When we are dealing with polygon surfaces that enclose an object interior, we define the side of each face that points toward the object interior as the inside face and the other side as the outside face.
• We can calculate the normal vector N for any particular surface as the cross product of two vectors in the plane, taken in counterclockwise order in a right-handed system:
𝑁 = (𝑣2 − 𝑣1) × (𝑣3 − 𝑣1)
• Now N gives values A, B, C for that plane and D can be obtained by putting these values in plane
equation for one of the vertices and solving for D.
• Using plane equation in vector form we can obtain D as
𝑁 ∙ 𝑃 = −𝐷
• The plane equation is also used to find the position of any point relative to the plane surface, as follows:


If 𝐴𝑥 + 𝐵𝑦 + 𝐶𝑧 + 𝐷 ≠ 0, the point (x, y, z) is not on the plane.
If 𝐴𝑥 + 𝐵𝑦 + 𝐶𝑧 + 𝐷 < 0, the point (x, y, z) is inside the surface.
If 𝐴𝑥 + 𝐵𝑦 + 𝐶𝑧 + 𝐷 > 0, the point (x, y, z) is outside the surface.
• These sign tests are valid in a right-handed system provided the plane parameters A, B, C, and D were calculated using vertices selected in counterclockwise order when viewing the surface from outside to inside.
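The coefficient expansions and sign tests above translate directly into a small helper. A sketch (the function names are ours), assuming the counterclockwise vertex order stated in the text:

```python
def plane_coefficients(p1, p2, p3):
    """Plane coefficients A, B, C, D from three non-collinear points,
    using the determinant expansions above; (A, B, C) is the normal N."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    A = y1*(z2 - z3) + y2*(z3 - z1) + y3*(z1 - z2)
    B = z1*(x2 - x3) + z2*(x3 - x1) + z3*(x1 - x2)
    C = x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)
    D = -x1*(y2*z3 - y3*z2) - x2*(y3*z1 - y1*z3) - x3*(y1*z2 - y2*z1)
    return A, B, C, D

def classify(point, coeffs):
    """Return 'on', 'inside', or 'outside' per the sign tests above."""
    x, y, z = point
    A, B, C, D = coeffs
    v = A*x + B*y + C*z + D
    return "on" if v == 0 else ("inside" if v < 0 else "outside")
```

For instance, the three points (0,0,0), (1,0,0), (0,1,0) give the plane z = 0 with normal (0, 0, 1), and a point with negative z is classified as inside.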

Polygon Meshes

Fig. 4.7: - A triangle strip formed with 11 triangles connecting 13 vertices.
Fig. 4.8: - A quadrilateral mesh containing 12 quadrilaterals constructed from a 5 by 4 input vertex array.
• A polygon mesh is a collection of vertices, edges, and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling.
• An edge can be shared by two or more polygons, and a vertex is shared by at least two edges.
• Polygon mesh is represented in following ways
o Explicit representation
o Pointer to vertex list
o Pointer to edge list

Explicit Representation
• In explicit representation each polygon stores all its vertex coordinates in order in memory:
𝑃 = ((𝑥1, 𝑦1, 𝑧1), (𝑥2, 𝑦2, 𝑧2), … , (𝑥𝑛, 𝑦𝑛, 𝑧𝑛))
• It is fast to process but requires more memory, since coordinates shared between polygons are stored repeatedly.

Pointer to Vertex list


• In this method each vertex is stored once in a vertex list, and each polygon contains pointers to its vertices:
𝑉 = ((𝑥1, 𝑦1, 𝑧1), (𝑥2, 𝑦2, 𝑧2), … , (𝑥𝑛, 𝑦𝑛, 𝑧𝑛))
• A polygon with vertices 3, 4, and 5 is then represented as 𝑃 = (𝑣3, 𝑣4, 𝑣5).
• It saves considerable space, but common edges are difficult to find.

Pointer to Edge List


• Here each polygon has pointers into an edge list, and the edge list has pointers into the vertex list; each edge needs two vertex pointers:
𝑉 = ((𝑥1, 𝑦1, 𝑧1), (𝑥2, 𝑦2, 𝑧2), … , (𝑥𝑛, 𝑦𝑛, 𝑧𝑛))
𝐸 = ((𝑣1, 𝑣2), (𝑣2, 𝑣3), … , (𝑣𝑛, 𝑣𝑚))
𝑃 = (𝐸1, 𝐸2, … , 𝐸𝑛)
• This approach is more memory efficient, and common edges are easy to find.
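A minimal sketch of the pointer-to-edge-list scheme, using Python list indices as the "pointers". The quad geometry and the names `V`, `E`, `P` are illustrative assumptions, not data from the text:

```python
# A unit square split into two triangles that share the diagonal edge.
V = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # vertex list
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]       # edges as vertex indices
P = [(0, 1, 4), (4, 2, 3)]                         # polygons as edge indices

def shared_edges(poly_a, poly_b):
    """Common edges are found directly by intersecting edge-index sets --
    the ease-of-lookup advantage mentioned above."""
    return set(poly_a) & set(poly_b)
```

Here `shared_edges(P[0], P[1])` immediately identifies the diagonal (edge index 4) as the common edge, something the pointer-to-vertex-list form cannot do without extra searching.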

Spline Representations
• A spline is a flexible strip used to produce a smooth curve through a designated set of points.
• Several small weights are attached to the spline to hold it in a particular position.
• A spline curve is a curve drawn with this method.


• The term spline curve now refers to any composite curve formed from polynomial sections that satisfy specified continuity conditions at the boundaries of the pieces.
• A spline surface can be described with two sets of orthogonal spline curves.

Interpolation and approximation splines


• We specify a spline curve by giving a set of coordinate positions, called control points, which indicate the general shape of the curve.
• Interpolation Spline: - When curve section passes through each control point, the curve is said to
interpolate the set of control points and that spline is known as Interpolation Spline.

Fig. 4.9: -interpolation spline. Fig. 4.10: -Approximation spline.

• Approximation Spline: - When curve section follows general control point path without necessarily
passing through any control point, the resulting curve is said to approximate the set of control points
and that curve is known as Approximation Spline.
• Spline curve can be modified by selecting different control point position.
• We can apply transformation on the curve according to need like translation scaling etc.
• The convex polygon boundary that encloses a set of control points is called convex hull.

Fig. 4.11: -convex hull shapes for two sets of control points.
• A polyline connecting the sequence of control points for an approximation spline is usually displayed to remind a designer of the control-point ordering. This set of connected line segments is often referred to as the control graph of the curve.
• Control graph is also referred as control polygon or characteristic polygon.


Fig. 4.12: -Control-graph shapes for two different sets of control points.

Parametric continuity condition


• For smooth transition from one curve section on to next curve section we put various continuity
conditions at connection points.
• Let the parametric coordinate functions be
𝑥 = 𝑥(𝑢), 𝑦 = 𝑦(𝑢), 𝑧 = 𝑧(𝑢)   for 𝑢1 ≤ 𝑢 ≤ 𝑢2
• Zero-order parametric continuity (C⁰) means simply that the curves meet, i.e. the last point of the first curve section and the first point of the second curve section are the same.
• First-order parametric continuity (C¹) means the first parametric derivatives are the same for both curve sections at the intersection point.
• Second-order parametric continuity (C²) means both the first and second parametric derivatives of the two curve sections are the same at the intersection.
• Higher-order parametric continuity is obtained similarly.

Fig. 4.13: - Piecewise construction of a curve by joining two curve segments uses different orders of
continuity: (a) zero-order continuity only, (b) first-order continuity, and (c) second-order continuity.
• First-order continuity is often sufficient for general applications, but some graphics packages, such as CAD systems, require second-order continuity for accuracy.

Geometric continuity condition


• Another method for joining two successive curve sections is to specify conditions for geometric continuity.
• Zero-order geometric continuity (G⁰) is the same as zero-order parametric continuity: the two curve sections meet.
• First-order geometric continuity (G¹) means the parametric first derivatives are proportional at the intersection of the two sections; their directions match, but their magnitudes need not be equal.
• Second-order geometric continuity (G²) means both the first and second parametric derivatives are proportional at the intersection of the two sections, again without the magnitudes necessarily being equal.


Cubic Spline Interpolation Methods


• Cubic splines are mostly used for representing the path of a moving object, or for fitting an existing object shape or drawing.
• Sometimes they are also used to design object shapes.
• Compared to higher-order polynomials, cubic splines require less computation and are more stable; compared to lower-order polynomials, they are more flexible for modeling arbitrary curve shapes. So cubic splines are often used for modeling curve shapes.
• A cubic interpolation spline is obtained by fitting the input points with a piecewise cubic polynomial curve that passes through every control point.

Fig. 4.14: -A piecewise continuous cubic-spline interpolation of n+1 control points.


𝑝𝑘 = (𝑥𝑘, 𝑦𝑘, 𝑧𝑘),   where k = 0, 1, 2, …, n
• The parametric cubic polynomial for each curve section is given by
𝑥(𝑢) = 𝑎𝑥u³ + 𝑏𝑥u² + 𝑐𝑥u + 𝑑𝑥
𝑦(𝑢) = 𝑎𝑦u³ + 𝑏𝑦u² + 𝑐𝑦u + 𝑑𝑦
𝑧(𝑢) = 𝑎𝑧u³ + 𝑏𝑧u² + 𝑐𝑧u + 𝑑𝑧
where 0 ≤ 𝑢 ≤ 1
• For the equations above we need to determine the four constants a, b, c, and d in the polynomial representation for each of the n curve sections.
• This is obtained by setting proper boundary conditions at the joints.
• Now we will see common methods for setting these conditions.

Natural Cubic Splines


• A natural cubic spline is a mathematical representation of the original drafting spline.
• We require the curve to have C² continuity, meaning the first and second parametric derivatives of adjacent curve sections are the same at the control points.
• For n+1 control points we have n curve sections and 4n polynomial constants to find.
• Each interior control point supplies four boundary conditions: the two curve sections on either side of the point must have the same first and second derivatives there, and each of those sections passes through the point.
• We get two more conditions from p0 (the first control point) being the start of the curve and pn (the last control point) being its end.
• We still require two more conditions to obtain all the coefficient values.
• One approach is to set the second derivatives at p0 and pn to 0. Another approach is to add one extra dummy point at each end, i.e. p−1 and pn+1; then all original control points are interior and we get the needed 4n boundary conditions.
• Although it is an exact mathematical model, it has a major disadvantage: a change in any one control point changes the entire curve.
• So it does not allow local control, and we cannot modify just part of the curve.

Hermite Interpolation
• It is named after the French mathematician Charles Hermite.


• It is an interpolating piecewise cubic polynomial with a specified tangent at each control point.
• It can be adjusted locally, because each curve section depends only on its endpoint constraints.
• The boundary conditions for the curve section between pk and pk+1 are:
𝑝(0) = 𝑝𝑘
𝑝(1) = 𝑝𝑘+1
𝑝′(0) = 𝑑𝑝𝑘
𝑝′(1) = 𝑑𝑝𝑘+1
Where dpk and dpk+1 are the values of the parametric derivatives at points pk and pk+1 respectively.
• Vector equation of cubic spline is:
𝑝(𝑢) = 𝑎u³ + 𝑏u² + 𝑐𝑢 + 𝑑
• Where the x component of p is 𝑥(𝑢) = 𝑎𝑥u³ + 𝑏𝑥u² + 𝑐𝑥u + 𝑑𝑥, and similarly for the y and z components.
• The matrix form of the above equation is
𝑃(𝑢) = [u³  u²  u  1] ∙ [𝑎  𝑏  𝑐  𝑑]ᵀ
• The derivative of p(u) is 𝑝′(𝑢) = 3𝑎u² + 2𝑏𝑢 + 𝑐
• The matrix form of p′(u) is
𝑃′(𝑢) = [3u²  2u  1  0] ∙ [𝑎  𝑏  𝑐  𝑑]ᵀ
• Now substitute the endpoint values u = 0 and u = 1 into the above equations and combine all four boundary conditions in matrix form:

[pk   ]   [0 0 0 1] [a]
[pk+1 ] = [1 1 1 1] [b]
[dpk  ]   [0 0 1 0] [c]
[dpk+1]   [3 2 1 0] [d]
• Solving for the polynomial coefficients:

[a]   [ 2 −2  1  1] [pk   ]         [pk   ]
[b] = [−3  3 −2 −1] [pk+1 ]  = MH ∙ [pk+1 ]
[c]   [ 0  0  1  0] [dpk  ]         [dpk  ]
[d]   [ 1  0  0  0] [dpk+1]         [dpk+1]

Where MH is the Hermite matrix.
• Now put this into the equation for p(u):
𝑝(𝑢) = [u³  u²  u  1] ∙ MH ∙ [pk  pk+1  dpk  dpk+1]ᵀ
𝑝(𝑢) = pk(2u³ − 3u² + 1) + pk+1(−2u³ + 3u²) + dpk(u³ − 2u² + u) + dpk+1(u³ − u²)
𝑝(𝑢) = pk H0(u) + pk+1 H1(u) + dpk H2(u) + dpk+1 H3(u)
Where Hk(u) for k = 0, 1, 2, 3 are referred to as blending functions, because they blend the boundary constraint values for the curve section.
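The blending-function form of a Hermite section can be evaluated directly. A small sketch (the function name is ours), using the H0..H3 polynomials derived above:

```python
def hermite_point(pk, pk1, dpk, dpk1, u):
    """Evaluate one Hermite section at parameter u in [0, 1].
    pk, pk1 are endpoint positions; dpk, dpk1 are endpoint tangents
    (all given as 3-tuples)."""
    h0 = 2*u**3 - 3*u**2 + 1      # blends pk
    h1 = -2*u**3 + 3*u**2         # blends pk+1
    h2 = u**3 - 2*u**2 + u        # blends dpk
    h3 = u**3 - u**2              # blends dpk+1
    return tuple(h0*a + h1*b + h2*c + h3*d
                 for a, b, c, d in zip(pk, pk1, dpk, dpk1))
```

At u = 0 only H0 is nonzero and the section returns pk; at u = 1 only H1 is nonzero and it returns pk+1, matching the boundary conditions above.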


• The shapes of the four Hermite blending functions are shown below.

Fig. 4.15: -the hermit blending functions.


• Hermite curves are used in digitizing applications where we can input the approximate curve slopes, i.e. dpk and dpk+1.
• But in applications where this slope input is difficult to approximate, Hermite curves cannot be used.

Cardinal Splines
• Like Hermite splines, cardinal splines are interpolating piecewise cubics with specified endpoint tangents at the boundary of each section.
• But with this spline we do not have to input the values of the endpoint tangents.
• In a cardinal spline, the slope at a control point is calculated from the two immediately neighboring control points.
• A cardinal spline section is completely specified by four control points.

Fig. 4.16: -parametric point function p(u) for a cardinal spline section between control points pk and pk+1.
• The middle two control points are the endpoints of the curve section, and the other two are used to calculate the slopes at those endpoints.
• The boundary conditions for a cardinal spline section are:
𝑝(0) = 𝑝𝑘
𝑝(1) = 𝑝𝑘+1
𝑝′(0) = ½(1 − 𝑡)(𝑝𝑘+1 − 𝑝𝑘−1)
𝑝′(1) = ½(1 − 𝑡)(𝑝𝑘+2 − 𝑝𝑘)


Where parameter t is called tension parameter since it controls how loosely or tightly the cardinal spline
fit the control points.

Fig. 4.17: -Effect of the tension parameter on the shape of a cardinal spline section.
• When t = 0 this class of curves is referred to as Catmull-Rom splines, or Overhauser splines.
• Using a method similar to that for the Hermite spline we can obtain:
𝑝(𝑢) = [u³  u²  u  1] ∙ Mc ∙ [pk−1  pk  pk+1  pk+2]ᵀ
• Where the cardinal matrix is

       [−s   2−s   s−2    s ]
  Mc = [2s   s−3   3−2s  −s ]
       [−s    0     s     0 ]
       [ 0    1     0     0 ]

• With 𝑠 = (1 − 𝑡)⁄2
• Putting the value of Mc into the equation for p(u) and expanding gives
𝑝(𝑢) = pk−1(−su³ + 2su² − su) + pk((2 − s)u³ + (s − 3)u² + 1) + pk+1((s − 2)u³ + (3 − 2s)u² + su) + pk+2(su³ − su²)
𝑝(𝑢) = pk−1 CAR0(u) + pk CAR1(u) + pk+1 CAR2(u) + pk+2 CAR3(u)
Where the polynomials CARk(u) for k = 0, 1, 2, 3 are the cardinal blending functions.
• Figure below shows this blending function shape for t = 0.
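A cardinal section can be evaluated the same way from its four blending polynomials. A sketch (the function name is ours) with s = (1 − t)/2 as above, so that t = 0 gives a Catmull-Rom segment:

```python
def cardinal_point(pkm1, pk, pk1, pk2, u, t=0.0):
    """Evaluate one cardinal-spline section at parameter u in [0, 1].
    pkm1..pk2 are the four control points (3-tuples); t is the tension
    parameter, with s = (1 - t) / 2."""
    s = (1 - t) / 2
    c0 = -s*u**3 + 2*s*u**2 - s*u                    # CAR0
    c1 = (2 - s)*u**3 + (s - 3)*u**2 + 1             # CAR1
    c2 = (s - 2)*u**3 + (3 - 2*s)*u**2 + s*u         # CAR2
    c3 = s*u**3 - s*u**2                             # CAR3
    return tuple(c0*a + c1*b + c2*c + c3*d
                 for a, b, c, d in zip(pkm1, pk, pk1, pk2))
```

At u = 0 the section returns pk and at u = 1 it returns pk+1 for any tension value, since the outer control points only shape the interior of the section.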


Fig. 4.18: -The cardinal blending function for t=0 and s=0.5.

Kochanek-Bartels spline
• It is an extension of the cardinal spline.
• Two additional parameters are introduced into the constraint equations defining the Kochanek-Bartels spline to provide more flexibility in adjusting the shape of curve sections.
• The boundary conditions for a Kochanek-Bartels section are:
𝑝(0) = 𝑝𝑘
𝑝(1) = 𝑝𝑘+1
𝑝′(0) = ½(1 − 𝑡)[(1 + 𝑏)(1 − 𝑐)(𝑝𝑘 − 𝑝𝑘−1) + (1 − 𝑏)(1 + 𝑐)(𝑝𝑘+1 − 𝑝𝑘)]
𝑝′(1) = ½(1 − 𝑡)[(1 + 𝑏)(1 + 𝑐)(𝑝𝑘+1 − 𝑝𝑘) + (1 − 𝑏)(1 − 𝑐)(𝑝𝑘+2 − 𝑝𝑘+1)]
Where t is the tension parameter, the same as used in the cardinal spline.
• B is bias parameter and C is the continuity parameter.
• In this spline parametric derivatives may not be continuous across section boundaries.
• Bias B is used to adjust the amount that the curve bends at each end of section.

Fig. 4.19: -Effect of bias parameter on the shape of a Kochanek-Bartels spline section.
• Parameter c controls the continuity of the tangent vectors across section boundaries. If c is nonzero, there is a discontinuity in the slope of the curve across section boundaries.


• It is used for animation paths; in particular, abrupt changes in motion can be simulated with nonzero values of parameter c.

Bezier Curves and Surfaces


• It was developed by the French engineer Pierre Bezier for the design of Renault automobile bodies.
• It has a number of useful properties and is easy to implement, so it is widely available in various CAD and graphics packages.

Bezier Curves
• A Bezier curve section can be fitted to any number of control points.
• The number of control points and their relative positions determine the degree of the Bezier polynomial.
• As with interpolation splines, a Bezier curve can be specified with boundary conditions or with blending functions.
• The most convenient method is to specify the Bezier curve with blending functions.
• Suppose we are given n+1 control point positions p0 to pn, where pk = (xk, yk, zk).
• These are blended to give the position vector p(u), which describes the path of the approximating Bezier curve:

p(u) = ∑k=0..n pk BEZk,n(u),     0 ≤ u ≤ 1

Where BEZk,n(u) = C(n, k) uᵏ (1 − u)ⁿ⁻ᵏ
And C(n, k) = n!⁄(k! (n − k)!)
• We can also obtain the Bezier blending functions by recursion:
BEZk,n(u) = (1 − u) BEZk,n−1(u) + u BEZk−1,n−1(u),     n > k ≥ 1
With BEZk,k(u) = uᵏ and BEZ0,k(u) = (1 − u)ᵏ

• The parametric equations follow from the vector equation:
x(u) = ∑k=0..n xk BEZk,n(u)
y(u) = ∑k=0..n yk BEZk,n(u)
z(u) = ∑k=0..n zk BEZk,n(u)
• Bezier curve is a polynomial of degree one less than the number of control points.
• Below figure shows some possible curve shapes by selecting various control point.


Fig. 4.20: -Example of 2D Bezier curves generated by different number of control points.
• An efficient method for determining coordinate positions along a Bezier curve can be set up using recursive calculations.
• For example, successive binomial coefficients can be calculated as
C(n, k) = ((n − k + 1)⁄k) C(n, k − 1),     n ≥ k
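The recursive binomial-coefficient idea above gives a compact Bezier evaluator. A sketch for 2D control points (the function name is ours):

```python
def bezier_point(ctrl, u):
    """Evaluate p(u) = sum pk * BEZ(k, n, u) for n+1 2D control points,
    building C(n, k) incrementally via C(n,k) = C(n,k-1)*(n-k+1)/k."""
    n = len(ctrl) - 1
    x = y = 0.0
    c = 1.0                       # C(n, 0)
    for k, (px, py) in enumerate(ctrl):
        if k > 0:
            c *= (n - k + 1) / k  # successive binomial coefficient
        b = c * u**k * (1 - u)**(n - k)   # BEZ(k, n, u)
        x += px * b
        y += py * b
    return (x, y)
```

For the quadratic with control points (0,0), (1,2), (3,0), the blending values at u = 0.5 are 0.25, 0.5, 0.25, giving the point (1.25, 1.0); the endpoints are reproduced exactly at u = 0 and u = 1.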

Properties of Bezier curves


• It always passes through first control point i.e. p(0) = p0
• It always passes through last control point i.e. p(1) = pn
• Parametric first order derivatives of a Bezier curve at the endpoints can be obtain from control point
coordinates as:
𝑝′(0) = −𝑛𝑝0 + 𝑛𝑝1
𝑝′(1) = −𝑛𝑝𝑛−1 + 𝑛𝑝𝑛
• Parametric second order derivatives of endpoints are also obtained by control point coordinates as:
𝑝′′(0) = 𝑛(𝑛 − 1)[(𝑝2 − 𝑝1) − (𝑝1 − 𝑝0)]
𝑝′′(1) = 𝑛(𝑛 − 1)[(𝑝𝑛−2 − 𝑝𝑛−1) − (𝑝𝑛−1 − 𝑝𝑛)]
• Bezier curve always lies within the convex hull of the control points.
• Bezier blending function is always positive.
• Sum of all Bezier blending function is always 1.
𝑛

∑ 𝐵𝐸𝑍𝑘,𝑛(𝑢) = 1
𝑘=0
• So any curve position is simply the weighted sum of the control point positions.
• Bezier curve smoothly follows the control points without erratic oscillations.

Design Technique Using Bezier Curves


• To obtain a closed Bezier curve we specify the first and last control points at the same position.



Fig. 4.21: -A closed Bezier Curve generated by specifying the first and last control points at the same
location.
• If we specify multiple control points at the same position, that position gets more weight and the curve is pulled toward it.

Fig. 4.22: -A Bezier curve can be made to pass closer to a given coordinate position by assigning multiple
control point at that position.
• A Bezier curve can be fitted to any number of control points, but a large number requires higher-order polynomial calculations.
• A complicated Bezier curve can instead be generated by dividing the whole curve into several lower-order polynomial curves, which gives better control over the shape of each small region.
• Since a Bezier curve passes through its first and last control points, it is easy to join two curve sections with zero-order parametric continuity (C0).
• For first-order continuity we put the endpoint of the first curve and the start point of the second curve at the same position, and make the last two points of the first curve and the first two points of the second curve collinear, with the second control point of the second curve at position
𝑝𝑛 + (𝑝𝑛 − 𝑝𝑛−1)
• so that the control points around the joint are equally spaced.


Fig. 4.23: -Zero and first order continuous curve by putting control point at proper place.
• Similarly, for second-order continuity the third control point of the second curve is placed, in terms of the positions of the last three control points of the first curve section, at
𝑝𝑛−2 + 4(𝑝𝑛 − 𝑝𝑛−1)
• Requiring C2 continuity can be unnecessarily restrictive; especially for cubic curves it leaves only one control point free for adjusting the shape of the curve.

Cubic Bezier Curves


• Many graphics packages provide only cubic spline functions, because cubics give reasonable design flexibility with moderate calculation.
• Cubic Bezier curves are generated with four control points.
• The four blending functions are obtained by substituting n = 3:
BEZ0,3(u) = (1 − u)³
BEZ1,3(u) = 3u(1 − u)²
BEZ2,3(u) = 3u²(1 − u)
BEZ3,3(u) = u³
• Plots of this Bezier blending function are shown in figure below

Fig. 4.24: -Four Bezier blending function for cubic curve.


• The form of the blending functions determines how the control points affect the shape of the curve as the parameter u varies from 0 to 1:
At u = 0, BEZ0,3(u) is the only nonzero blending function, with value 1.
At u = 1, BEZ3,3(u) is the only nonzero blending function, with value 1.
• So the cubic Bezier curve always passes through p0 and p3.
• The other blending functions affect the shape of the curve at intermediate values of u.
• BEZ1,3(u) is maximum at u = 1⁄3 and BEZ2,3(u) is maximum at u = 2⁄3.
• Each blending function is nonzero over the entire range of u, so the cubic Bezier form does not allow local control of the curve shape.
• At end point positions parametric first order derivatives are :
𝑝′(0) = 3(𝑝1 − 𝑝0)
𝑝′(1) = 3(𝑝3 − 𝑝2)
• And second order parametric derivatives are.
𝑝′′(0) = 6(𝑝0 − 2𝑝1 + 𝑝2)
𝑝′′(1) = 6(𝑝1 − 2𝑝2 + 𝑝3)
• This expression can be used to construct piecewise curve with C1 and C2 continuity.
• The polynomial expressions for the blending functions can be written in matrix form:
𝑝(𝑢) = [u³  u²  u  1] ∙ MBEZ ∙ [p0  p1  p2  p3]ᵀ

         [−1   3  −3  1]
  MBEZ = [ 3  −6   3  0]
         [−3   3   0  0]
         [ 1   0   0  0]
• We can add additional parameter like tension and bias as we did with the interpolating spline.
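The matrix form above can be evaluated directly, one coordinate at a time. A sketch (the names `M_BEZ` and `cubic_bezier` are ours):

```python
# Cubic Bezier in matrix form: p(u) = U . M_BEZ . G, where U = [u^3 u^2 u 1]
# and G holds one coordinate of each of the four control points p0..p3.
M_BEZ = [[-1, 3, -3, 1],
         [3, -6, 3, 0],
         [-3, 3, 0, 0],
         [1, 0, 0, 0]]

def cubic_bezier(g, u):
    """g: four scalars, e.g. the x components of p0..p3."""
    U = [u**3, u**2, u, 1]
    # Row vector U times M_BEZ gives the four blending values BEZk,3(u)...
    blend = [sum(U[i] * M_BEZ[i][j] for i in range(4)) for j in range(4)]
    # ...which are then dotted with the geometry vector.
    return sum(b * p for b, p in zip(blend, g))
```

With g = [0, 1, 2, 3] the curve starts at 0 (u = 0), ends at 3 (u = 1), and at u = 0.5 the blending values 0.125, 0.375, 0.375, 0.125 give 1.5.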

Bezier Surfaces
• Two sets of orthogonal Bezier curves can be used to design an object surface from an input mesh of control points.
• Taking the Cartesian product of Bezier blending functions gives the parametric vector function

p(u, v) = ∑j=0..m ∑k=0..n pj,k BEZj,m(v) BEZk,n(u)

• With pj,k specifying the locations of the (m+1) by (n+1) control points.
• Figure below shows Bezier surfaces plot, control points are connected by dashed line and curve is
represented by solid lines.

Fig. 4.25: -Bezier surfaces constructed for (a) m=3, n=3, and (b) m=4, n=4. Dashed line connects the
control points.


• Each curve of constant u is plotted by varying v over the interval 0 to 1, and similarly for curves of constant v.
• Bezier surfaces have the same properties as Bezier curves, so they can be used in interactive design applications.
• For each surface patch we first select a mesh of control points in the XY plane and then select elevations in the Z direction.
• We can put two or more patches together to form the required surface, using a method similar to joining curve sections, with C0, C1, or C2 continuity as needed.

B-Spline Curves and Surfaces


• The B-Spline is the most widely used approximation spline.
• It has two advantages over the Bezier spline:
1. The degree of a B-Spline polynomial can be set independently of the number of control points (with certain limitations).
2. B-Splines allow local control.
• The disadvantage of the B-Spline curve is that it is more complex than the Bezier spline.

B-Spline Curves
• The general expression for a B-Spline curve in terms of blending functions is:

p(u) = ∑k=0..n pk Bk,d(u),     umin ≤ u ≤ umax,  2 ≤ d ≤ n + 1

Where the pk are the input control points.
• The range of the parameter u depends on how we choose the B-Spline parameters.
• The B-Spline blending functions Bk,d are polynomials of degree d−1, where d can be any value from 2 to n+1.
• We could set d = 1, but then the curve is only a point plot.
• By defining the blending functions over subintervals of the whole range we can achieve local control.
• The B-Spline blending functions are defined by the Cox-deBoor recursion formulas:

Bk,1(u) = 1  if uk ≤ u < uk+1,  0 otherwise

Bk,d(u) = ((u − uk)⁄(uk+d−1 − uk)) Bk,d−1(u) + ((uk+d − u)⁄(uk+d − uk+1)) Bk+1,d−1(u)
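The Cox-deBoor recursion can be coded almost verbatim. A sketch (the function name is ours) that treats 0/0 terms as 0, a common convention when knot values repeat:

```python
def bspline_blend(k, d, u, knots):
    """Cox-deBoor recursion for the blending function B(k, d) at u.
    knots is the knot vector; 0/0 ratios are taken as 0."""
    if d == 1:
        return 1.0 if knots[k] <= u < knots[k + 1] else 0.0
    total = 0.0
    den1 = knots[k + d - 1] - knots[k]
    if den1 != 0:
        total += (u - knots[k]) / den1 * bspline_blend(k, d - 1, u, knots)
    den2 = knots[k + d] - knots[k + 1]
    if den2 != 0:
        total += (knots[k + d] - u) / den2 * bspline_blend(k + 1, d - 1, u, knots)
    return total
```

With the uniform knot vector {0, 1, …, 7}, d = 4, and n = 3, the four blending functions sum to 1 for any u in the valid range [u3, u4], and each function is zero outside its d subintervals of support.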
• The selected set of subinterval endpoints 𝑢𝑗 is referred to as a knot vector.
• We can set any value as a subinterval end point but it must follow 𝑢𝑗 ≤ 𝑢𝑗+1
• Values of 𝑢𝑚𝑖𝑛 and 𝑢𝑚𝑎𝑥 depends on number of control points, degree d, and knot vector.
• Figure below shows local control

Fig. 4.26: -Local modification of B-Spline curve.


• B-Splines allow adding or removing control points without changing the degree of the curve.
• A B-Spline curve lies within the convex hull of at most d+1 control points, so the B-Spline is tightly bound to the input positions.


• For any u between 𝑢𝑑−1 and 𝑢𝑛+1, the sum of all blending functions is 1: ∑k=0..n Bk,d(u) = 1
• There are three general classification for knot vectors:
o Uniform
o Open uniform
o Non uniform

Properties of B-Spline Curves


• The curve has degree d−1 and continuity Cᵈ⁻² over the range of u.
• For n+1 control points we have n+1 blending functions.
• Each blending function 𝐵𝑘,𝑑(𝑢) is defined over d subintervals of the total range of u, starting at knot
value uk.
• The range of u is divided into n+d subintervals by the n+d+1 values specified in the knot vector.
• With knot values labeled as {𝑢0, 𝑢1, … , 𝑢𝑛+𝑑} the resulting B-Spline curve is defined only in interval from
knot values 𝑢𝑑−1 up to knot values 𝑢𝑛+1
• Each spline section is influenced by d control points.
• Any one control point can affect at most d curve section.

Uniform Periodic B-Spline


• When the spacing between knot values is constant, the resulting curve is called a uniform B-Spline.
• For example, {0.0, 0.1, 0.2, …, 1.0} or {0, 1, 2, 3, 4, 5, 6, 7}.
• Uniform B-Splines have periodic blending functions: for given values of n and d, all blending functions have the same shape, and each successive blending function is simply a shifted version of the previous one:
Bk,d(u) = Bk+1,d(u + ∆u) = Bk+2,d(u + 2∆u)
Where ∆u is the interval between adjacent knot values.

Cubic Periodic B-Spline


• It is commonly used in many graphics packages.
• It is particularly useful for generating closed curves.
• If any three consecutive control points are identical, the curve passes through that coordinate position.
• For a cubic curve, d = 4 and (with four control points) n = 3, so the knot vector spans d+n+1 = 4+3+1 = 8 values, e.g. {0, 1, 2, 3, 4, 5, 6, 7}.
• The boundary conditions for a periodic cubic B-Spline section are obtained from the general B-Spline equation p(u) = ∑k=0..n pk Bk,d(u); they are:
p(0) = (1⁄6)(p0 + 4p1 + p2)
p(1) = (1⁄6)(p1 + 4p2 + p3)
p′(0) = (1⁄2)(p2 − p0)
p′(1) = (1⁄2)(p3 − p1)

• The matrix formulation for a periodic cubic B-Spline with four control points can then be written as
𝑝(𝑢) = [u³  u²  u  1] ∙ MB ∙ [p0  p1  p2  p3]ᵀ
Where

             [−1   3  −3  1]
MB = (1⁄6) ∙ [ 3  −6   3  0]
             [−3   0   3  0]
             [ 1   4   1  0]
• We can also modify the B-Spline equation to include a tension parameter t.
• The periodic cubic B-Spline with tension then has the matrix

              [ −t     12−9t   9t−12   t]
MBt = (1⁄6) ∙ [ 3t    12t−18  18−15t   0]
              [−3t      0       3t     0]
              [  t     6−2t      t     0]

When t = 1, MBt = MB.
• We can obtain the cubic B-Spline blending functions for the parametric range 0 to 1 by converting the matrix representation into polynomial form; for t = 1 we have
B0,3(u) = (1⁄6)(1 − u)³
B1,3(u) = (1⁄6)(3u³ − 6u² + 4)
B2,3(u) = (1⁄6)(−3u³ + 3u² + 3u + 1)
B3,3(u) = (1⁄6)u³

Open Uniform B-Splines


• This class is a cross between uniform B-Splines and nonuniform B-Splines.
• Sometimes it is treated as a special type of uniform B-Spline, and sometimes as a nonuniform B-Spline.
• For an open uniform B-Spline (open B-Spline), the knot spacing is uniform except at the ends, where knot values are repeated d times.
• For example, {0, 0, 1, 2, 3, 3} for d = 2 and n = 3, and {0, 0, 0, 0, 1, 2, 2, 2, 2} for d = 4 and n = 4.
• For any values of the parameters d and n we can generate an open uniform knot vector with integer values using the following rule:
uj = 0            for 0 ≤ j < d
uj = j − d + 1    for d ≤ j ≤ n
uj = n − d + 2    for j > n
Where 0 ≤ j ≤ n + d.
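The piecewise rule above can be sketched as a small generator (the function name is ours); it reproduces the example knot vectors in the text:

```python
def open_uniform_knots(n, d):
    """Integer open-uniform knot vector for n+1 control points of
    order d, with end knots repeated d times (j runs 0..n+d)."""
    knots = []
    for j in range(n + d + 1):
        if j < d:
            knots.append(0)              # repeated start knots
        elif j <= n:
            knots.append(j - d + 1)      # interior, uniformly spaced
        else:
            knots.append(n - d + 2)      # repeated end knots
    return knots
```

Taking d = n + 1 (e.g. d = 4, n = 3) yields {0, 0, 0, 0, 1, 1, 1, 1}, the all-0-or-1 knot vector for which the open B-Spline reduces to a Bezier curve, as noted below.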
• An open uniform B-Spline is similar to a Bezier spline; if we take d = n+1 it reduces to a Bezier spline, as all knot values are then either 0 or 1.
• For example, a cubic open uniform B-Spline with d = 4 has knot vector {0, 0, 0, 0, 1, 1, 1, 1}.
• An open uniform B-Spline curve passes through its first and last control points.
• Also, the slope at each end is parallel to the line joining the two adjacent control points at that end.
• So the geometric conditions for matching curve sections are the same as for Bezier curves.
• For a closed curve we specify the first and last control points at the same position.

Non Uniform B-Spline


• For this class of splines we can specify any values and intervals for the knot vector.
• For example, {0, 1, 2, 3, 3, 4} and {0, 0, 1, 2, 2, 3, 4}.
• It gives more flexible curve shapes: each blending function has a different shape when plotted, over different intervals.
• By increasing knot multiplicity we produce variations in the curve shape and can also introduce discontinuities.
• A multiple knot value also reduces the continuity by one for each repeat of a particular value.
• We can evaluate nonuniform B-Splines using the same method used for uniform B-Splines.
• For a set of n+1 control points we set the degree d and the knot values.
• Then, using the recurrence relations, we can obtain the blending functions or evaluate curve positions directly for display of the curve.

B-Spline Surfaces
• B-Spline surfaces are formed like Bezier surfaces: an orthogonal set of curves is used, and two surface patches are connected with the same method used for Bezier surfaces.
• The vector equation of a B-Spline surface is given by the Cartesian product of B-Spline blending functions:

p(u, v) = ∑k1=0..n1 ∑k2=0..n2 pk1,k2 Bk1,d1(u) Bk2,d2(v)

• Where pk1,k2 specify the control point positions.
• It has the same properties as the B-Spline curve.

Unit-5 – 3D Transformation and Viewing
3D Translation

Fig. 5.1: - 3D Translation.

• Similar to 2D translation, which used 3x3 matrices, 3D translation uses 4x4 matrices with homogeneous coordinates (x, y, z, h).
• In 3D translation the point (x, y, z) is translated by amounts tx, ty, and tz to the location (x′, y′, z′):
x′ = x + tx
y′ = y + ty
z′ = z + tz
• In matrix form:
P′ = T ∙ P

[x′]   [1 0 0 tx] [x]
[y′] = [0 1 0 ty] [y]
[z′]   [0 0 1 tz] [z]
[1 ]   [0 0 0 1 ] [1]
• Example: Translate the point P(10, 10, 10) in 3D space with translation factors T(10, 20, 5).
P′ = T ∙ P

[x′]   [1 0 0 10] [10]   [20]
[y′] = [0 1 0 20] [10] = [30]
[z′]   [0 0 1 5 ] [10]   [15]
[1 ]   [0 0 0 1 ] [1 ]   [1 ]

The final coordinate after translation is P′(20, 30, 15).
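The worked example can be checked with a small sketch of the homogeneous translation (the function name is ours):

```python
def translate3d(point, tx, ty, tz):
    """Apply the 4x4 homogeneous translation matrix T to a 3D point."""
    T = [[1, 0, 0, tx],
         [0, 1, 0, ty],
         [0, 0, 1, tz],
         [0, 0, 0, 1]]
    p = [point[0], point[1], point[2], 1]   # homogeneous column (x, y, z, 1)
    # Matrix-vector product T . p; the fourth (h) component stays 1.
    return tuple(sum(T[i][j] * p[j] for j in range(4)) for i in range(3))
```

Translating P(10, 10, 10) by (10, 20, 5) reproduces the result P′(20, 30, 15) from the example.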

Rotation
• For 3D rotation we need to pick an axis to rotate about.
• The most common choices are the X axis, the Y axis, and the Z axis.

Coordinate-Axes Rotations


Fig. 5.2: - 3D Rotations.

Z-Axis Rotation
• The two-dimensional rotation equations can easily be converted into 3D Z-axis rotation equations.
• For rotation about the Z axis we leave the z coordinate unchanged:
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
z′ = z
Where parameter θ specifies the rotation angle.
• The matrix equation is:
P′ = Rz(θ) ∙ P

[x′]   [cos θ  −sin θ  0  0] [x]
[y′] = [sin θ   cos θ  0  0] [y]
[z′]   [  0       0    1  0] [z]
[1 ]   [  0       0    0  1] [1]

X-Axis Rotation
• The transformation equations for X-axis rotation are obtained from the Z-axis rotation equations by the cyclic replacement
x → y → z → x
• For rotation about the X axis we leave the x coordinate unchanged:
y′ = y cos θ − z sin θ
z′ = y sin θ + z cos θ
x′ = x
Where parameter θ specifies the rotation angle.
• The matrix equation is:
P′ = Rx(θ) ∙ P

[x′]   [1    0       0    0] [x]
[y′] = [0  cos θ  −sin θ  0] [y]
[z′]   [0  sin θ   cos θ  0] [z]
[1 ]   [0    0       0    1] [1]

Y-Axis Rotation


• The transformation equations for Y-axis rotation are obtained from the X-axis rotation equations by the same cyclic replacement
x → y → z → x
• For rotation about the Y axis we leave the y coordinate unchanged:
z′ = z cos θ − x sin θ
x′ = z sin θ + x cos θ
y′ = y
Where parameter θ specifies the rotation angle.
• The matrix equation is:
P′ = Ry(θ) ∙ P

[x′]   [ cos θ  0  sin θ  0] [x]
[y′] = [   0    1    0    0] [y]
[z′]   [−sin θ  0  cos θ  0] [z]
[1 ]   [   0    0    0    1] [1]
• Example: Rotate the point P(5, 5, 5) by 90° about the Z-axis.
𝑃′ = 𝑅𝑧(𝜃) ∙ 𝑃

[x′]   [cos 90  −sin 90  0  0] [5]   [0  −1  0  0] [5]   [−5]
[y′] = [sin 90   cos 90  0  0]∙[5] = [1   0  0  0]∙[5] = [ 5]
[z′]   [   0        0    1  0] [5]   [0   0  1  0] [5]   [ 5]
[1 ]   [   0        0    0  1] [1]   [0   0  0  1] [1]   [ 1]

Final coordinate after rotation is P′ (−5, 5, 5).
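The Z-axis rotation can be sketched in Python (illustrative helper name; Rx and Ry follow the same pattern with rows permuted cyclically):

```python
import math

def rotate_z(point, theta):
    """Rotate homogeneous point [x, y, z, 1] by theta radians about the Z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s, 0, 0],
         [s,  c, 0, 0],
         [0,  0, 1, 0],
         [0,  0, 0, 1]]
    return [sum(R[r][k] * point[k] for k in range(4)) for r in range(4)]

p = rotate_z([5, 5, 5, 1], math.radians(90))
# cos 90° = 0 and sin 90° = 1, so p is approximately [-5, 5, 5, 1]
```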

General 3D Rotations when rotation axis is parallel to one of the standard axis
• Three steps are required to complete such a rotation:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
• This can be represented in equation form as:
𝑷′ = 𝑻−𝟏 ∙ 𝑹(𝜽) ∙ 𝑻 ∙ 𝑷
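The three-step sequence T⁻¹ ∙ R(θ) ∙ T can be sketched in Python for an axis parallel to the Z-axis passing through (x0, y0) (function names are illustrative):

```python
import math

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def rotate_about_parallel_axis(x0, y0, theta):
    """Rotation about a line parallel to the Z-axis through (x0, y0):
    translate the axis to the origin, rotate, translate back."""
    return matmul(translation(x0, y0, 0),
                  matmul(rotation_z(theta), translation(-x0, -y0, 0)))

M = rotate_about_parallel_axis(1, 1, math.radians(90))
p = [sum(M[r][k] * [2, 1, 0, 1][k] for k in range(4)) for r in range(4)]
# (2, 1) rotated 90° about (1, 1) lands at approximately (1, 2)
```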

General 3D Rotations when rotation axis is inclined in arbitrary direction


• When object is to be rotated about an axis that is not parallel to one of the coordinate axes, we need
rotations to align the axis with a selected coordinate axis and to bring the axis back to its original
orientation.
• Five steps are required to complete such a rotation:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.
• We can transform rotation axis onto any of the three coordinate axes. The Z-axis is a reasonable choice.
• We are given line in the form of two end points P1 (x1,y1,z1), and P2 (x2,y2,z2).
• We will see procedure step by step.
1) Translate the object so that the rotation axis passes through the coordinate origin.
Fig. 5.3: - Translation of vector V.


• For step one we bring the first end point to the origin; the transformation matrix for this is

    [1  0  0  −x1]
T = [0  1  0  −y1]
    [0  0  1  −z1]
    [0  0  0    1]
2) Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
• This task can be completed by two rotations: first a rotation about the X-axis, then a rotation about the Y-axis.
• Since the rotation angles are not known directly, we use the dot product and cross product.
• Let's write the rotation axis in vector form.
𝑽 = 𝑷𝟐 − 𝑷𝟏 = (𝒙𝟐 − 𝒙𝟏, 𝒚𝟐 − 𝒚𝟏, 𝒛𝟐 − 𝒛𝟏)
• The unit vector along the rotation axis is obtained by dividing the vector by its magnitude:
u = V/|V| = ((x2 − x1)/|V|, (y2 − y1)/|V|, (z2 − z1)/|V|) = (a, b, c)
Fig. 5.4: - Projection of u on YZ-Plane.


• Now we need the cosine and sine of the angle α between the unit vector u and the XZ-plane. For that we take the projection u′ of u on the YZ-plane and then find the dot product and cross product of u′ and uz.
• The coordinates of u′ are (0, b, c): projecting onto the YZ-plane sets the x component to zero.

u′ ∙ uz = |u′||uz| cos α

cos α = (u′ ∙ uz)/(|u′||uz|) = ((0, b, c) ∙ (0, 0, 1))/√(b² + c²) = c/d,  where d = √(b² + c²)

And
u′ × uz = ux|u′||uz| sin α = ux ∙ b

Comparing magnitudes:
|u′||uz| sin α = b
√(b² + c²) ∙ (1) sin α = b
d sin α = b
sin α = b/d
• Now that we have sin α and cos α we can write the matrix for rotation about the X-axis:

        [1    0       0     0]
Rx(α) = [0  cos α  −sin α  0]
        [0  sin α   cos α  0]
        [0    0       0     1]

        [1   0     0    0]
      = [0  c/d  −b/d  0]
        [0  b/d   c/d  0]
        [0   0     0    1]
• After performing this rotation, u is rotated into u″ in the XZ-plane, with coordinates (a, 0, √(b² + c²)). Rotation about the X-axis leaves the x coordinate unchanged; u″ lies in the XZ-plane so its y coordinate is zero, and its z component equals the magnitude of u′.
• Now rotate u″ about the Y-axis so that it coincides with the Z-axis.
Fig. 5.5: - Rotation of u about X-axis.


• We repeat the above procedure with u″ and uz to find the matrix for rotation about the Y-axis.

u″ ∙ uz = |u″||uz| cos β

cos β = (u″ ∙ uz)/(|u″||uz|) = ((a, 0, √(b² + c²)) ∙ (0, 0, 1))/1 = √(b² + c²) = d,  where d = √(b² + c²)

And
u″ × uz = uy|u″||uz| sin β = uy ∙ (−a)

Comparing magnitudes:
|u″||uz| sin β = −a
(1) sin β = −a
sin β = −a
• Now that we have sin β and cos β we can write the matrix for rotation about the Y-axis:

        [ cos β  0  sin β  0]
Ry(β) = [   0    1    0    0]
        [−sin β  0  cos β  0]
        [   0    0    0    1]

        [ d  0  −a  0]
      = [ 0  1   0  0]
        [ a  0   d  0]
        [ 0  0   0  1]
• Combining both rotations aligns the rotation axis with the Z-axis.
3) Perform the specified rotation about that coordinate axis.
• Since the rotation axis is now aligned with the Z-axis, the matrix for rotation about the Z-axis is

        [cos θ  −sin θ  0  0]
Rz(θ) = [sin θ   cos θ  0  0]
        [  0       0    1  0]
        [  0       0    0  1]

4) Apply inverse rotations to bring the rotation axis back to its original orientation.
• This step is the inverse of step number 2.
5) Apply the inverse translation to bring the rotation axis back to its original position.
• This step is the inverse of step number 1.

So finally the sequence of transformations for general 3D rotation is

𝑷′ = 𝑻⁻¹ ∙ 𝑹𝒙⁻¹(α) ∙ 𝑹𝒚⁻¹(β) ∙ 𝑹𝒛(𝜽) ∙ 𝑹𝒚(β) ∙ 𝑹𝒙(α) ∙ 𝑻 ∙ 𝑷
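The five-step composite can be sketched in Python, building each matrix from the derivation above (illustrative names; the sketch assumes d = √(b² + c²) is nonzero, i.e. the axis is not already parallel to the X-axis):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def general_rotation(p1, p2, theta):
    """Matrix for rotation by theta about the axis through p1 and p2."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    vx, vy, vz = x2 - x1, y2 - y1, z2 - z1
    mag = math.sqrt(vx * vx + vy * vy + vz * vz)
    a, b, c = vx / mag, vy / mag, vz / mag     # unit vector (a, b, c) along the axis
    d = math.sqrt(b * b + c * c)               # assumed nonzero in this sketch

    T      = [[1, 0, 0, -x1], [0, 1, 0, -y1], [0, 0, 1, -z1], [0, 0, 0, 1]]
    T_inv  = [[1, 0, 0,  x1], [0, 1, 0,  y1], [0, 0, 1,  z1], [0, 0, 0, 1]]
    Rx     = [[1, 0, 0, 0], [0,  c / d, -b / d, 0], [0,  b / d, c / d, 0], [0, 0, 0, 1]]
    Rx_inv = [[1, 0, 0, 0], [0,  c / d,  b / d, 0], [0, -b / d, c / d, 0], [0, 0, 0, 1]]
    Ry     = [[d, 0, -a, 0], [0, 1, 0, 0], [ a, 0, d, 0], [0, 0, 0, 1]]
    Ry_inv = [[d, 0,  a, 0], [0, 1, 0, 0], [-a, 0, d, 0], [0, 0, 0, 1]]
    ct, st = math.cos(theta), math.sin(theta)
    Rz = [[ct, -st, 0, 0], [st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

    # P' = T_inv . Rx_inv . Ry_inv . Rz . Ry . Rx . T . P
    M = T_inv
    for step in (Rx_inv, Ry_inv, Rz, Ry, Rx, T):
        M = matmul(M, step)
    return M
```

With p1 = (0, 0, 0) and p2 = (0, 0, 1) the axis is the Z-axis itself and the composite reduces to Rz(θ).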

Scaling
• It is used to resize an object in 3D space.
• We can apply uniform as well as non-uniform scaling by selecting proper scaling factors.
• Scaling in 3D is similar to scaling in 2D; only one extra coordinate needs to be considered.

Coordinate Axes Scaling
Fig. 5.6: - 3D Scaling.


• Simple coordinate axis scaling can be performed as below.
𝑷′ = 𝑺 ∙ 𝑷
[x′]   [sx  0   0   0] [x]
[y′] = [ 0  sy  0   0]∙[y]
[z′]   [ 0  0   sz  0] [z]
[1 ]   [ 0  0   0   1] [1]
• Example: Scale the line AB with end points A(10, 20, 10) and B(20, 30, 30) with scale factor S(3, 2, 4).
𝑃′ = 𝑆 ∙ 𝑃

[Ax′ Bx′]   [3  0  0  0] [10  20]   [30   60]
[Ay′ By′] = [0  2  0  0]∙[20  30] = [40   60]
[Az′ Bz′]   [0  0  4  0] [10  30]   [40  120]
[ 1   1 ]   [0  0  0  1] [ 1   1]   [ 1    1]

Final coordinates after scaling are A′ (30, 40, 40) and B′ (60, 60, 120).
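A Python sketch of coordinate-axis scaling (illustrative helper; it reproduces the example above):

```python
def scale(point, sx, sy, sz):
    """Scale homogeneous point [x, y, z, 1] relative to the coordinate origin."""
    S = [[sx, 0, 0, 0],
         [0, sy, 0, 0],
         [0, 0, sz, 0],
         [0, 0, 0, 1]]
    return [sum(S[r][k] * point[k] for k in range(4)) for r in range(4)]

a = scale([10, 20, 10, 1], 3, 2, 4)
b = scale([20, 30, 30, 1], 3, 2, 4)
print(a, b)  # [30, 40, 40, 1] [60, 60, 120, 1]
```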

Fixed Point Scaling
Fig. 5.7: - 3D Fixed point scaling.


• Fixed-point scaling is used when we need to scale an object while keeping a particular point at its original position.
• Fixed point scaling matrix can be obtained in three step procedure.
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using coordinate axes scaling.
3. Translate the fixed point back to its original position.
• Its equation is:
𝑷′ = 𝑻(𝒙𝒇, 𝒚𝒇, 𝒛𝒇) ∙ 𝑺(𝒔𝒙, 𝒔𝒚, 𝒔𝒛) ∙ 𝑻(−𝒙𝒇, −𝒚𝒇, −𝒛𝒇) ∙ 𝑷

     [1  0  0  xf] [sx  0   0   0] [1  0  0  −xf]
P′ = [0  1  0  yf]∙[ 0  sy  0   0]∙[0  1  0  −yf] ∙ P
     [0  0  1  zf] [ 0  0   sz  0] [0  0  1  −zf]
     [0  0  0   1] [ 0  0   0   1] [0  0  0    1]

     [sx  0   0   (1 − sx)xf]
P′ = [ 0  sy  0   (1 − sy)yf] ∙ P
     [ 0  0   sz  (1 − sz)zf]
     [ 0  0   0        1    ]
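The composite fixed-point scaling matrix can be sketched directly in Python (illustrative helper; note that the fixed point maps to itself):

```python
def fixed_point_scale(point, s, f):
    """Scale homogeneous point [x, y, z, 1] by (sx, sy, sz) about fixed point (fx, fy, fz)."""
    sx, sy, sz = s
    fx, fy, fz = f
    M = [[sx, 0, 0, (1 - sx) * fx],
         [0, sy, 0, (1 - sy) * fy],
         [0, 0, sz, (1 - sz) * fz],
         [0, 0, 0, 1]]
    return [sum(M[r][k] * point[k] for k in range(4)) for r in range(4)]

print(fixed_point_scale([1, 2, 3, 1], (2, 2, 2), (1, 2, 3)))  # [1, 2, 3, 1]: the fixed point is unchanged
```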

Other Transformations

Reflections
• Reflection produces a mirror image of an object about a plane placed at the required position.
• When the mirror is placed in the XY-plane, we obtain the coordinates of the image by simply changing the sign of the z coordinate.
• The transformation matrix for reflection about the XY-plane is:

      [1  0   0  0]
RFz = [0  1   0  0]
      [0  0  −1  0]
      [0  0   0  1]

• Similarly, the transformation matrix for reflection about the YZ-plane is:

      [−1  0  0  0]
RFx = [ 0  1  0  0]
      [ 0  0  1  0]
      [ 0  0  0  1]

• And the transformation matrix for reflection about the XZ-plane is:

      [1   0  0  0]
RFy = [0  −1  0  0]
      [0   0  1  0]
      [0   0  0  1]

Shears
• Shearing transformations can be used to modify object shapes.
• They are also useful in 3D viewing for obtaining general projection transformations.
• Here we use shear parameters ‘a’ and ‘b’.
• The shear matrix for the Z-axis is:

      [1  0  a  0]
SHz = [0  1  b  0]
      [0  0  1  0]
      [0  0  0  1]

• Similarly, the shear matrix for the X-axis is:

      [1  0  0  0]
SHx = [a  1  0  0]
      [b  0  1  0]
      [0  0  0  1]

• And the shear matrix for the Y-axis is:

      [1  a  0  0]
SHy = [0  1  0  0]
      [0  b  1  0]
      [0  0  0  1]
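A quick Python check of a reflection and a Z-axis shear (the values of a and b are illustrative):

```python
def apply(M, p):
    """Apply a 4x4 homogeneous transform M to point p = [x, y, z, 1]."""
    return [sum(M[r][k] * p[k] for k in range(4)) for r in range(4)]

# Reflection about the XY-plane: negate the z coordinate.
RF_z = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

# Z-axis shear with parameters a and b: x' = x + a*z, y' = y + b*z.
a, b = 0.5, 1.0
SH_z = [[1, 0, a, 0], [0, 1, b, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

print(apply(RF_z, [1, 2, 3, 1]))  # [1, 2, -3, 1]
print(apply(SH_z, [1, 2, 3, 1]))  # [2.5, 5.0, 3, 1]
```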
Viewing Pipeline

Modeling Coordinates → Modeling Transformation → World Coordinates → Viewing Transformation → Viewing Coordinates → Projection Transformation → Projection Coordinates → Workstation Transformation → Device Coordinates

Fig. 5.8: - General 3D viewing pipeline.


• Steps involved in 3D pipeline are similar to the process of taking a photograph.
• As shown in figure that initially we have modeling coordinate of any object which we want to display on
the screen.
• By applying the modeling transformation we convert modeling coordinates to world coordinates, which describe the part of the scene to be displayed.
• Then by applying the viewing transformation we obtain viewing coordinates, fitted in the viewing-coordinate reference frame.
• For three-dimensional objects we have three-dimensional coordinates, but we need to display the object on a two-dimensional screen, so we apply the projection transformation, which gives projection coordinates.
• Finally, projection coordinates are converted into device coordinates by applying the workstation transformation, which gives coordinates specific to a particular device.

Viewing Co-ordinates.
• Generating a view of an object is similar to photographing the object.
• We can take a photograph from any side, with any angle and orientation of the camera.
• Similarly, we can specify the viewing coordinate system in any direction and orientation.

Fig. 5.9: -A right handed viewing coordinate system, with axes Xv, Yv, and Zv, relative to a world-
coordinate scene.

Specifying the view plane


• We decide the view for a scene by first establishing the viewing coordinate system, also referred to as the view-reference coordinate system.
• Then projection plane is setup in perpendicular direction to Zv axis.
• Positions in the scene are transferred to viewing coordinates, and the viewing coordinates are then projected onto the view plane.
• The origin of our viewing coordinate system is called the view reference point.
• The view reference point is often chosen to be close to or on the surface of some object in the scene, but any point can be chosen.
• Next we select the positive direction for the viewing Zv axis and the orientation of the view plane by specifying the view-plane normal vector N.
• Finally we choose the up direction for the view by specifying a vector V called the view-up vector, which specifies the orientation of the camera.
• The view-up vector is generally selected perpendicular to the normal vector, but we can select any angle between V and N.
• By fixing the view reference point and changing the direction of the normal vector N we get different views of the same object, as illustrated in the figure below.

Fig. 5.10: -Viewing scene from different direction with a fixed view-reference point.

Transformation from world to viewing coordinates


• Before projecting onto the view plane, object descriptions must be transferred from world to viewing coordinates.
• This is the transformation that superimposes the viewing coordinate system onto the world coordinate system.
• It requires the following basic transformations:
1) Translate the view reference point to the origin of the world coordinate system.
2) Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes.

Fig. 5.11: - Aligning a viewing system with the world-coordinate axes using a sequence of translate-rotate
transformations.
• The figure shows the steps of this transformation.
• If the view reference point in world coordinates is at position (𝑥0, 𝑦0, 𝑧0), then to align it with the world origin we translate with the matrix:

    [1  0  0  −x0]
T = [0  1  0  −y0]
    [0  0  1  −z0]
    [0  0  0    1]
• Now we require rotation sequence up-to three coordinate axis rotations depending upon direction we
choose for N.
• In general case N is at arbitrary direction then we can align it with word coordinate axes by rotation
sequence 𝑅𝑧 ∙ 𝑅𝑦 ∙ 𝑅𝑥.
• Another method for generating the rotation transformation matrix is to calculate unit uvn vectors and form the composite rotation matrix directly.
• Here
n = N/|N| = (n1, n2, n3)
u = (V × N)/|V × N| = (u1, u2, u3)
v = n × u = (v1, v2, v3)
• This method also automatically adjusts the direction for u so that v is perpendicular to n.
• Then the composite rotation matrix for the viewing transformation is:

    [u1  u2  u3  0]
R = [v1  v2  v3  0]
    [n1  n2  n3  0]
    [ 0   0   0  1]
• This aligns u to Xw axis, v to Yw axis and n to Zw axis.
• Finally composite matrix for world to viewing coordinate transformation is given by:
• 𝑀𝑤𝑐,𝑣𝑐 = 𝑅 ∙ 𝑇
• This transformation is applied to object’s coordinate to transfer them to the viewing reference frame.
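The uvn construction can be sketched in Python (illustrative names; a minimal version without error handling):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def world_to_viewing(view_ref, N, V):
    """Composite matrix M = R . T mapping world coordinates into the uvn viewing frame."""
    n = normalize(N)              # viewing z-axis
    u = normalize(cross(V, n))    # viewing x-axis
    v = cross(n, u)               # viewing y-axis (automatically perpendicular to n)
    x0, y0, z0 = view_ref
    T = [[1, 0, 0, -x0], [0, 1, 0, -y0], [0, 0, 1, -z0], [0, 0, 0, 1]]
    R = [[u[0], u[1], u[2], 0],
         [v[0], v[1], v[2], 0],
         [n[0], n[1], n[2], 0],
         [0, 0, 0, 1]]
    return [[sum(R[i][k] * T[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
```

With the view reference point at the origin, N along +z and V along +y, the matrix reduces to the identity.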

Projections
• Once world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we
can project the three-dimensional objects onto the two-dimensional view plane.
• Process of converting three-dimensional coordinates into two-dimensional scene is known as projection.
• There are two projection methods:
1. Parallel projection.
2. Perspective projection.
• Let's discuss each one.

Parallel Projections
Fig. 5.12: - Parallel projection.
• In a parallel projection, coordinate positions are transformed to the view plane along parallel lines, as
shown in the, example of above Figure.
• We can specify a parallel projection with a projection vector that defines the direction for the projection
lines.
• It is further divide into two types.
1. Orthographic parallel projection.
2. Oblique parallel projection.

Orthographic parallel projection
Fig. 5.13: - Orthographic parallel projection.


• When the projection lines are perpendicular to the view plane, we have an orthographic parallel
projection.
• Orthographic projections are most often used to produce the front, side, and top views of an object, as
shown in Fig.

Fig. 5.14: - Orthographic parallel projection.


• Engineering and architectural drawings commonly use orthographic projections, because lengths and angles are depicted accurately and can be measured from the drawings.
• We can also form orthographic projections that display more than one face of an object. Such views are called axonometric orthographic projections; a very good example is the isometric projection.
• Transformation equations for an orthographic parallel projection are straightforward.
• If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) in viewing coordinates is transformed to projection coordinates as
𝒙𝒑 = 𝒙,  𝒚𝒑 = 𝒚
Fig. 5.15: - Orthographic parallel projection.


• Where the original z-coordinate value is preserved for the depth information needed in depth cueing
and visible-surface determination procedures.

Oblique parallel projection.
Fig. 5.16: - Oblique parallel projection.


• An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to
the projection plane.
• Coordinate of oblique parallel projection can be obtained as below.
Fig. 5.17: - Oblique parallel projection.


• As shown in the figure, (X, Y, Z) is the point being projected, (Xp, Yp) is its oblique projection on the view plane, and (X, Y) is its orthographic projection on the view plane.
• From the figure, using trigonometric rules, we can write
𝒙𝒑 = 𝒙 + 𝑳 𝐜𝐨𝐬 ∅
𝒚𝒑 = 𝒚 + 𝑳 𝐬𝐢𝐧 ∅
• The length L depends on the angle α and the z coordinate of the point to be projected:
tan α = z/L
L = z/tan α = z·L1,  where L1 = 1/tan α
• Substituting L into the projection equations:
𝒙𝒑 = 𝒙 + 𝒛𝑳𝟏 𝐜𝐨𝐬 ∅
𝒚𝒑 = 𝒚 + 𝒛𝑳𝟏 𝐬𝐢𝐧 ∅
• The transformation matrix for these equations is:

            [1  0  L1 cos ∅  0]
Mparallel = [0  1  L1 sin ∅  0]
            [0  0     0      0]
            [0  0     0      1]

• This matrix can be used for any parallel projection. For an orthographic projection L1 = 0, so the terms multiplying the z component vanish.
• When tan α = 1 the projection is known as a Cavalier projection.
• When tan α = 2 the projection is known as a Cabinet projection.
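An oblique projection onto the plane z = 0 can be sketched in Python (illustrative function; a cavalier projection uses α = 45°):

```python
import math

def oblique_project(x, y, z, alpha, phi):
    """Project (x, y, z) onto the view plane along an oblique direction:
    xp = x + z*L1*cos(phi), yp = y + z*L1*sin(phi), with L1 = 1/tan(alpha)."""
    L1 = 1 / math.tan(alpha)
    xp = x + z * L1 * math.cos(phi)
    yp = y + z * L1 * math.sin(phi)
    return xp, yp

# Cavalier projection (tan(alpha) = 1): lines perpendicular to the plane keep their length.
xp, yp = oblique_project(1, 1, 2, math.radians(45), math.radians(30))
```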

Perspective Projection
Fig. 5.18: - Perspective projection.


• In perspective projection object positions are transformed to the view plane along lines that converge to
a point called the projection reference point (or center of projection or vanishing point).
Fig. 5.19: - Perspective projection.


• Suppose we set the projection reference point at position zprp along the zv axis, and we place the view
plane at zvp as shown in Figure above. We can write equations describing coordinate positions along this
perspective projection line in parametric form as
x′ = x − x·u
y′ = y − y·u
z′ = z − (z − zprp)·u
• Parameter u takes values from 0 to 1, depending on the positions of the object, the view plane, and the projection reference point.
• To obtain the value of u we set z′ = zvp and solve the equation for z′:
z′ = z − (z − zprp)·u
zvp = z − (z − zprp)·u
u = (zvp − z)/(zprp − z)
• Substituting this value of u into the equations for x′ and y′ we obtain:
xp = x((zprp − zvp)/(zprp − z)) = x(dp/(zprp − z))
yp = y((zprp − zvp)/(zprp − z)) = y(dp/(zprp − z)),  where dp = zprp − zvp
• Using three-dimensional homogeneous-coordinate representations, we can write the perspective-projection transformation in matrix form as:

[xh]   [1  0      0             0     ] [x]
[yh] = [0  1      0             0     ]∙[y]
[zh]   [0  0  −zvp⁄dp   zvp(zprp⁄dp)] [z]
[h ]   [0  0  −1⁄dp        zprp⁄dp  ] [1]

• In this representation, the homogeneous factor is
h = (zprp − z)/dp
and
xp = xh⁄h,  yp = yh⁄h
• There are a number of special cases of the perspective transformation equations.
• If the view plane is taken to be the uv plane, then 𝒛𝒗𝒑 = 𝟎 and the projection coordinates are:
xp = x(zprp/(zprp − z)) = x(1/(1 − z⁄zprp))
yp = y(zprp/(zprp − z)) = y(1/(1 − z⁄zprp))
• If we take the projection reference point at the origin, then 𝒛𝒑𝒓𝒑 = 𝟎 and the projection coordinates are:
xp = x(zvp/z) = x(1/(z⁄zvp))
yp = y(zvp/z) = y(1/(z⁄zvp))
• The vanishing point for any set of lines that are parallel to one of the principal axes of an object is referred to as a principal vanishing point.
• We control the number of principal vanishing points (one, two, or three) with the orientation of the projection plane; perspective projections are accordingly classified as one-point, two-point, or three-point projections.
• The number of principal vanishing points in a projection is determined by the number of principal axes
intersecting the view plane.
View Volumes and General Projection Transformations
Fig. 5.20: - View volume of parallel and perspective projection.


• Based on the view window we can generate different images of the same scene.
• The volume that appears on the display is known as the view volume.
• Given the specification of the view window, we can set up a view volume using the window boundaries.
• Only those objects within the view volume will appear in the generated display on an output device; all
others are clipped from the display.
• The size of the view volume depends on the size of the window, while the shape of the view volume
depends on the type of projection to be used to generate the display.
• A finite view volume is obtained by limiting the extent of the volume in the zv direction.
• This is done by specifying positions for one or two additional boundary planes. These zv-boundary planes
are referred to as the front plane and back plane, or the near plane and the far plane, of the viewing
volume.
• Orthographic parallel projections are not affected by view-plane positioning, because the projection
lines are perpendicular to the view plane regardless of its location.
• Oblique projections may be affected by view-plane positioning, depending on how the projection
direction is to be specified.

General Parallel-Projection Transformation


• Here we will obtain transformation matrix for parallel projection which is applicable to both
orthographic as well as oblique projection.
Fig. 5.21: - General parallel projection.


• As shown on figure parallel projection is specified with a projection vector from the projection reference
point to the view window.
• Now we apply a shear transformation so that the view volume becomes a regular parallelepiped and the projection vector becomes parallel to the normal vector N.
Fig. 5.22: - Shear operation in General parallel projection.


• Let’s consider projection vector 𝑽𝒑 = (𝒑𝒙, 𝒑𝒚, 𝒑𝒛).
• We need to determine the elements of a shear matrix that will align the projection vector 𝑽𝒑 with the
view plane normal vector N. This transformation can be expressed as
𝑽𝒑′ = 𝑴𝒑𝒂𝒓𝒂𝒍𝒍𝒆𝒍 ∙ 𝑽𝒑

      [0 ]
Vp′ = [0 ]
      [pz]
      [1 ]

• where 𝑴𝒑𝒂𝒓𝒂𝒍𝒍𝒆𝒍 is equivalent to the parallel-projection matrix and represents a z-axis shear of the form

            [1  0  a  0]
Mparallel = [0  1  b  0]
            [0  0  1  0]
            [0  0  0  1]

• Now from the above equation we can write

[0 ]   [1  0  a  0] [px]
[0 ] = [0  1  b  0]∙[py]
[pz]   [0  0  1  0] [pz]
[1 ]   [0  0  0  1] [1 ]

• From the matrix we can write:
0 = px + a·pz
0 = py + b·pz
So
a = −px/pz,  b = −py/pz
• Thus, we have the general parallel-projection matrix in terms of the elements of the projection vector:

            [1  0  −px⁄pz  0]
Mparallel = [0  1  −py⁄pz  0]
            [0  0     1     0]
            [0  0     0     1]

• For an orthographic parallel projection, px = py = 0, and Mparallel is the identity matrix.

General Perspective-Projection Transformations


• The projection reference point can be located at any position in the viewing system, except on the view
plane or between the front and back clipping planes.
Fig. 5.23: - General perspective projection.


• We can obtain the general perspective-projection transformation with the following two operations:
1. Shear the view volume so that the center line of the frustum is perpendicular to the view plane.
2. Scale the view volume with a scaling factor that depends on 1/z .
• A shear operation to align a general perspective view volume with the projection window is shown in
Figure.
Fig. 5.24: - Shear and scaling operation in general perspective projection.
• With the projection reference point at a general position (xprp, yprp, zprp), the transformation involves a combined z-axis shear and a translation:

         [1  0  a  −a·zprp]
Mshear = [0  1  b  −b·zprp]
         [0  0  1     0   ]
         [0  0  0     1   ]

Where the shear parameters are
a = −(xprp − (xwmin + xwmax)/2)/zprp
b = −(yprp − (ywmin + ywmax)/2)/zprp
• Points within the view volume are transformed by this operation as
x′ = x + a(z − zprp)
y′ = y + b(z − zprp)
z′ = z
• After the shear we apply a scaling operation:
x′′ = x′((zprp − zvp)/(zprp − z)) + xprp((zvp − z)/(zprp − z))
y′′ = y′((zprp − zvp)/(zprp − z)) + yprp((zvp − z)/(zprp − z))
• The homogeneous matrix for this transformation is:

         [1  0  −xprp⁄(zprp − zvp)   xprp·zvp⁄(zprp − zvp)]
Mscale = [0  1  −yprp⁄(zprp − zvp)   yprp·zvp⁄(zprp − zvp)]
         [0  0            1                     0          ]
         [0  0  −1⁄(zprp − zvp)         zprp⁄(zprp − zvp) ]
• Therefore the general perspective-projection transformation is obtained by equation:
𝑴𝒑𝒆𝒓𝒔𝒑𝒆𝒄𝒕𝒊𝒗𝒆 = 𝑴𝒔𝒄𝒂𝒍𝒆 ∙ 𝑴𝒔𝒉𝒆𝒂𝒓
Q1) Classification of visible surface detection algorithms
In a realistic graphics display, we have to identify those parts of a scene that are visible from a chosen viewing position. The various algorithms used for this are referred to as visible-surface detection methods or hidden-surface elimination methods.

Types of Visible Surface Detection Methods:


o Object-space methods and
o Image-space methods
Visible-surface detection algorithms are broadly classified according to
whether they deal with object definitions directly or with their projected
images. These two approaches are called object-space methods and image-
space methods, respectively.
An object-space method compares objects and parts of objects to each other
within the scene definition to determine which surfaces, as a whole, we should
label as visible. In an image-space algorithm, visibility is decided point by
point at each pixel position on the projection plane. Most visible-surface
algorithms use image-space methods, although object space methods can be
used effectively to locate visible surfaces in some cases. Line display
algorithms, on the other hand, generally use object-space methods to identify
visible lines in wire frame displays, but many image-space visible-surface
algorithms can be adapted
easily to visible-line detection.
Visible Surface Detection Methods:
We will see four methods for Detecting Visible surface Methods. They are:
1. Back Face Detection Method
2. Depth Buffer Method
3. Scan line Method
4. Depth Sorting Method

Q2) Explain Back-Face Detection Method


When we project 3-D objects on a 2-D screen, we need to detect the faces
that are hidden on 2D.
Back-face detection, also known as the plane-equation method, is an object-space method in which objects and parts of objects are compared to find the visible surfaces. Consider a triangular surface whose visibility needs to be decided. The idea is to check whether the triangle faces away from the viewer. If it does, we discard it for the current frame and move on to the next one. Each surface has a normal vector: if this normal vector points toward the center of projection, the surface is a front face and can be seen by the viewer; if it points away from the center of projection, it is a back face and cannot be seen.

Algorithm for left-handed system :

1) Compute N for every face of object.


2) If (C.(Z component) > 0)
then a back face and don't draw
else
front face and draw

The Back-face detection method is very simple. For the left-handed


system, if the Z component of the normal vector is positive, then it is a
back face. If the Z component of the vector is negative, then it is a front
face.
Algorithm for right-handed system :

1) Compute N for every face of object.


2) If (C.(Z component) < 0)
then a back face and don't draw
else
front face and draw

Thus, for the right-handed system, if the Z component of the normal


vector is negative, then it is a back face. If the Z component of the vector
is positive, then it is a front face.

Back-face detection can identify all the hidden surfaces in a scene that
contain non-overlapping convex polyhedra.

Recalling the polygon surface equation :

Ax + By + Cz + D < 0
While determining whether a surface is back-face or front face, also consider
the viewing direction. The normal of the surface is given by :

N = (A, B, C)
A polygon is a back face if Vview.N > 0. But it should be kept in mind that after
application of the viewing transformation, viewer is looking down the negative
Z-axis. Therefore, a polygon is back face if :

(0, 0, -1).N > 0


or if C < 0
Viewer will also be unable to see surface with C = 0, therefore, identifying a
polygon surface as a back face if : C <= 0.

Considering (a),

V.N = |V||N|Cos(angle)
if 0 <= angle < 90, then
cos(angle) > 0 and V.N > 0
Hence, back face.

Considering (b),

V.N = |V||N|Cos(angle)
if 90 < angle <= 180, then
cos(angle) < 0 and V.N < 0
Hence, front face.
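The C <= 0 test for a viewer looking down the negative Z-axis can be sketched as (illustrative function name):

```python
def is_back_face(normal):
    """Back-face test after the viewing transformation: the viewer looks down
    the negative Z-axis, V_view = (0, 0, -1), so the face is hidden when
    V_view . N >= 0, i.e. when C <= 0 with N = (A, B, C)."""
    A, B, C = normal
    return C <= 0   # C == 0 means the face is seen edge-on and is also not visible

print(is_back_face((0, 0, -2)))  # True: normal points away from the viewer
print(is_back_face((0, 0, 3)))   # False: front face
```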

Limitations :
1) This method works fine for convex polyhedra, but not necessarily for
concave polyhedra.
2) This method can only be used on solid objects modeled as a polygon mesh.

Q3) Explain Area Subdivision Algorithm

It was invented by John Warnock and is also called the Warnock algorithm. It is based on a divide-and-conquer strategy and uses the fundamental idea of area coherence to resolve visibility. It classifies polygons into two cases, trivial and non-trivial.

Trivial cases are handled easily. Non-trivial cases are divided into four equal subwindows, and the subwindows are further subdivided recursively until every polygon can be classified as a trivial case.
Classification Scheme
It divides polygons into four categories:

1. Inside surface
2. Outside surface
3. Overlapping surface
4. Surrounding surface

1. Inside surface: a surface that is completely inside the surrounding window or specified boundary, as shown in fig (c).

2. Outside surface: a polygon surface that is completely outside the surrounding window, as shown in fig (a).

3. Overlapping surface: a surface that is partially inside and partially outside the window area, as shown in fig (c).

4. Surrounding surface: a polygon surface that completely encloses the surrounding window, as shown in fig (b).
Q4) Z-Buffer or Depth-Buffer method
When viewing a picture containing opaque objects and surfaces, objects that lie behind the objects closer to the eye cannot be seen. To obtain a realistic screen image, these hidden surfaces must be removed. The identification and removal of these surfaces is called the hidden-surface problem.

Z-buffer, which is also known as the Depth-buffer method is one of the


commonly used method for hidden surface detection. It is an Image
space method. Image space methods are based on the pixel to be
drawn on 2D. For these methods, the running time complexity is the
number of pixels times number of objects. And the space complexity is
two times the number of pixels because two arrays of pixels are
required, one for frame buffer and the other for the depth buffer.

The Z-buffer method compares surface depths at each pixel position on


the projection plane. Normally z-axis is represented as the depth. The
algorithm for the Z-buffer method is given below :

Algorithm :

First of all, initialize the depth of each pixel:
    d(i, j) = infinite (max depth)
Initialize the color value for each pixel:
    c(i, j) = background color

For each polygon, do the following steps:

for (each pixel (i, j) in polygon's projection)
{
    find depth z of polygon at (x, y) corresponding to pixel (i, j)

    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color of polygon at that point;
    }
}
Let’s consider an example to understand the algorithm in a better way.
Assume the polygon given is as below :

In starting, assume that the depth of each pixel is infinite.

As the z value i.e, the depth value at every place in the given
polygon is 3, on applying the algorithm, the result is:
Now, let’s change the z values. In the figure given below, the z
values goes from 0 to 3.

In starting, the depth of each pixel will be infinite as :

Now, the z values generated on the pixel will be different which


are as shown below :
Therefore, in the Z buffer method, each surface is processed separately
one position at a time across the surface. After that the depth values i.e,
the z values for a pixel are compared and the closest i.e, (smallest z)
surface determines the color to be displayed in frame buffer. The z
values, i.e, the depth values are usually normalized to the range [0, 1].
When the z = 0, it is known as Back Clipping Pane and when z = 1, it is
called as the Front Clipping Pane.

In this method, 2 buffers are used :

• Frame buffer
• Depth buffer

Calculation of depth :
As we know that the equation of the plane is :

ax + by + cz + d = 0, this implies

z = -(ax + by + d)/c, c!=0


Calculation of each depth could be very expensive, but the computation
can be reduced to a single add per pixel by using an increment method
as shown in figure below :

Let’s denote the depth at point A as Z and at point B as Z’. Therefore :

AX + BY + CZ + D = 0 implies
Z = (-AX - BY - D)/C ------------(1)

Similarly, Z' = (-A(X + 1) - BY -D)/C ----------(2)

Hence from (1) and (2), we conclude :

Z' = Z - A/C ------------(3)


Hence, calculation of depth can be done by recording the plane
equation of each polygon in the (normalized) viewing coordinate system
and then using the incremental method to find the depth Z.
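As a quick numerical check of equation (3), the sketch below evaluates depth both directly from equation (1) and incrementally along a scanline. The plane coefficients are illustrative assumptions, not taken from the text.

```python
# Hypothetical plane 2x + y + 4z - 8 = 0, chosen for illustration.
A, B, C, D = 2.0, 1.0, 4.0, -8.0

def depth(x, y):
    """Direct evaluation, Eq. (1): Z = (-AX - BY - D)/C."""
    return (-A * x - B * y - D) / C

# Incremental evaluation across a scanline at y = 1: each step in x
# only needs one subtraction, Eq. (3): Z' = Z - A/C.
z = depth(0, 1)
for x in range(1, 5):
    z = z - A / C
    assert abs(z - depth(x, 1)) < 1e-9  # matches direct evaluation
```

The incremental value agrees with the direct formula at every pixel, which is why recording the plane equation per polygon makes per-pixel depth cheap.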
So, to summarize, it can be said that this approach compares surface
depths at each pixel position on the projection plane. Object depth is
usually measured from the view plane along the z-axis of a viewing
system.
Example :

Let S1, S2, S3 be surfaces. The surface closest to the projection
plane is the visible surface. The computer starts (arbitrarily)
with surface S1 and puts its depth value into the buffer. It does
the same for the next surface, then checks each overlapping pixel
to see which surface is closer to the viewer and displays the
appropriate color. At view-plane position (x, y), surface S1 has
the smallest depth from the view plane, so it is visible at that
position.
Points to remember :
1) Z buffer method does not require pre-sorting of polygons.
2) This method can be executed quickly even with many polygons.
3) This can be implemented in hardware to overcome the speed
problem.
4) No object to object comparison is required.
5) This method can be applied to non-polygonal objects.
6) Hardware implementations of this algorithm are available in some
graphics workstations.
7) The method is simple to use and does not require additional data
structure.
8) The z-value of a polygon can be calculated incrementally.
9) It cannot be applied to transparent surfaces, i.e., it deals only
with opaque surfaces.
10) If only a few objects in the scene are to be rendered, this method
is less attractive because of the additional buffer and the overhead
involved in updating it.
11) Time may be wasted drawing hidden objects.
Q5) Explain Traditional Animation Techniques

Traditional animation is a technique in which each frame of an
animation is drawn by hand. The process is very time-consuming
and labor-intensive, but it can produce beautiful and expressive
animation.
• Storyboarding: The first step is to create a storyboard, a
series of sketches that map out the plot of the animation.
• Character design: The animators then design the characters,
giving them a unique look and personality.
• Keyframing: The key animator then draws the key frames of the
animation, which are the most important poses in a scene.
• In-betweening: In-betweeners then draw the frames in between
the key frames, creating a smooth flow of movement.
• Cels and painting: The drawings are then transferred to
transparent sheets of celluloid (cels) and painted.
• Backgrounds: Background artists paint the backgrounds for the
animation.
• Filming: Finally, the cels are photographed one frame at a time,
creating the illusion of movement.

Traditional animation is a classic art form that has been used to create some
of the most beloved animated films of all time. Although computer
animation has become more popular in recent years, traditional animation
is still used today in some films and television shows.
Q7) Describe Various Principles Of Traditional Animation
Animation is defined as a series of images changing rapidly to
create an illusion of movement: each image is replaced by a new
image that is shifted slightly. The animation industry has a huge
market nowadays. To make an effective animation, there are some
principles to be followed.

Principles of Animation:

There are 12 major principles for an effective and
easy-to-communicate animation.

1. Squash and Stretch:
This principle works on the physical properties that are
expected to change during any action. Ensuring proper
squash and stretch makes the animation more convincing.
For example: when we drop a ball from a height, its
physical shape changes; when the ball touches the
surface, it flattens slightly, which should be depicted
properly in the animation.

2. Anticipation:
Anticipation works on action. An action is broadly divided into
3 phases:

1. Preparation phase
2. Movement phase
3. Finish

In anticipation, we prepare the audience for the
action, which helps make the animation look more
realistic. For example: the actions a batsman performs
before hitting the ball with the bat come under
anticipation; these are the actions in which the
batsman prepares to hit the ball.

3. Arcs:
In reality, humans and animals move in arcs, so
introducing arcs increases realism. This principle
also helps us achieve realism through projectile
motion. For example, the movement of a bowler's
hand while bowling follows a projectile-like arc.
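As an illustrative sketch (all names and values are assumptions, not from the text), an animated object can be placed along a natural arc by sampling projectile motion instead of moving it in a straight line:

```python
# Sample a projectile arc: constant horizontal velocity, parabolic height.
def arc_positions(v0x, v0y, g=9.8, frames=5, dt=0.1):
    """Return (x, y) positions of a projectile for the first few frames."""
    pts = []
    for f in range(frames):
        t = f * dt
        x = v0x * t                      # constant horizontal velocity
        y = v0y * t - 0.5 * g * t * t    # gravity bends the path into an arc
        pts.append((x, y))
    return pts

path = arc_positions(10.0, 10.0)  # launch at 10 units/s horizontally and vertically
```

Placing the object at these sampled positions, frame by frame, gives the curved motion the principle asks for.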

4. Slow in - Slow out:
While performing animation, one should always keep
in mind that, in reality, objects take time to
accelerate and to slow down. To make an animation
look realistic, we should always pay attention to its
slow-in and slow-out proportions. For example, a
vehicle takes time to accelerate when it starts, and
similarly takes time to come to a stop.
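One common way to realize slow in - slow out in computer animation is an easing curve; the smoothstep polynomial below is an illustrative choice, not a formula prescribed by the text:

```python
# Smoothstep easing: slow near t = 0 and t = 1, fastest in the middle.
def ease_in_out(t):
    """Map linear time t in [0, 1] to an eased value in [0, 1]."""
    return t * t * (3 - 2 * t)

# Frame positions for an object moving from x = 0 to x = 100 over 11 frames:
positions = [100 * ease_in_out(f / 10) for f in range(11)]
```

The spacing between consecutive positions is small near the start and end and large in the middle, which is exactly the slow-in/slow-out effect.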

5. Appeal:
Animation should be appealing to the audience and
easy to understand. The lettering or font style used
should be easily readable and pleasing to the
audience. Lack of symmetry and overly complicated
character designs should be avoided.
6. Timing:
The velocity with which an object moves affects the
animation a lot, so speed should be handled with care.
For example, a fast-moving object can portray an
energetic person, while a slow-moving object can
symbolize a lethargic one. A slowly moving action uses
more frames than a fast-moving one.

7. 3D Effect:
By giving 3D effects, we can make an animation more
convincing and effective. In a 3D effect, we place the
object in a 3-dimensional (X-Y-Z) space, which
improves its realism. For example, a square gives a 2D
effect, but a cube gives a 3D effect, which appears
more realistic.

8. Exaggeration:
Exaggeration deals with physical features and
emotions. In animation, we represent emotions and
feelings in exaggerated form to make them more
expressive. If there is more than one element in a
scene, it is necessary to balance the various
exaggerated elements to avoid conflicts.
9. Staging:
Staging is defined as the presentation of the
primary idea, mood or action. It should always be
presented in a clear and simple manner. The purpose
of this principle is to avoid unnecessary detail and
focus only on the important features. The primary
idea should always be clear and unambiguous.

10. Secondary Action:
Secondary actions support the primary or main
action and enrich the animation as a whole. For
example, when a person drinks hot tea, the facial
expressions, the movement of the hands, etc., come
under secondary actions.

11. Follow Through:
It refers to motion that continues even after the
main action is completed. This type of action helps
generate more realistic animations. For example,
even after throwing a ball, the hands continue to
move.

12. Overlap:
It deals with the way the second action starts
before the first action ends. For example, consider
drinking tea with the right hand while holding a
sandwich in the left hand: while we are still
drinking the tea, the left hand starts moving toward
the mouth, showing the second action interfering
before the end of the first.
