5CS4-04 U4
Lecture 20
In a 2D system we use only two coordinates, X and Y, but in 3D an extra coordinate, Z, is added. 3D graphics techniques and their applications are fundamental to the entertainment, games, and computer-aided design industries, and they remain a continuing area of research in scientific visualization.
Furthermore, 3D graphics components are now a part of almost every personal computer
and, although traditionally intended for graphics-intensive software such as games, they are
increasingly being used by other applications.
Parallel Projection
Parallel projection discards the z-coordinate: parallel lines from each vertex on the object are extended until they intersect the view plane. In parallel projection, we specify a direction of projection instead of a center of projection.
In parallel projection, the distance from the center of projection to the projection plane is infinite. In this type of projection, we connect the projected vertices by line segments which correspond to connections on the original object.
Parallel projections are less realistic, but they are good for exact measurements. In this type of projection, parallel lines remain parallel but angles are not preserved. The various types of parallel projection are shown in the following hierarchy.
Notes By: - Arvind Sharma (Head, Department of Computer Science and Engineering)
Unit-4 B.Tech. - V Semester Computer Graphics and Multimedia
Orthographic Projection
In orthographic projection, the direction of projection is normal to the projection plane. There are three types of orthographic projections −
There are three types of orthographic projections −
Front Projection
Top Projection
Side Projection
Oblique Projection
In oblique projection, the direction of projection is not normal to the projection plane. In oblique projection, we can view the object better than in orthographic projection.
There are two types of oblique projections − Cavalier and Cabinet. In Cavalier projection, the projectors make a 45° angle with the projection plane, so the projection of a line perpendicular to the view plane has the same length as the line itself; the foreshortening factors for all three principal directions are equal.
In Cabinet projection, the projectors make a 63.4° angle with the projection plane, and lines perpendicular to the viewing surface are projected at ½ their actual length. Both projections are shown in the following figure −
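As a sketch of how an oblique projection can be computed (the function name and parameters below are illustrative, not from the notes): a point (x, y, z) is dropped onto the z = 0 plane by offsetting it along the projector direction, with foreshortening factor L = 1 for Cavalier and L = ½ for Cabinet.

```python
import math

def oblique_project(x, y, z, angle_deg=45.0, L=1.0):
    """Project a 3D point onto the z = 0 plane with oblique projectors.

    L is the foreshortening factor for the z direction:
    L = 1 gives a Cavalier projection, L = 0.5 a Cabinet projection.
    angle_deg is the angle the receding z-axis makes on the drawing.
    """
    a = math.radians(angle_deg)
    return (x + z * L * math.cos(a), y + z * L * math.sin(a))

# A unit edge along z keeps its full length under Cavalier (L = 1)
# and half its length under Cabinet (L = 0.5).
```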
Isometric Projections
Orthographic projections that show more than one side of an object are called axonometric
orthographic projections. The most common axonometric projection is an isometric
projection where the projection plane intersects each coordinate axis in the model
coordinate system at an equal distance. In this projection, parallelism of lines is preserved but angles are not. The following figure shows an isometric projection −
Perspective Projection
In perspective projection, the distance from the center of projection to the projection plane is finite, and the size of an object varies inversely with distance, which looks more realistic.
Distances and angles are not preserved, and parallel lines do not remain parallel; instead, they all converge at a single point called the center of projection or projection reference point. There are three types of perspective projection, shown in the following chart.
The following figure shows all the three types of perspective projection −
Lecture 21
Spline Curves
A spline curve is a mathematical representation for which it is easy to build an interface that allows a user to design and control the shape of complex curves and surfaces. The general approach is that the user enters a sequence of points, and a curve is constructed whose shape closely follows this sequence. The points are called control points. A curve that actually passes through each control point is called an interpolating curve; a curve that passes near the control points but not necessarily through them is called an approximating curve.
[Figure: an interpolating curve and an approximating curve through the same control points.]
Once we establish this interface, to change the shape of the curve we just move the control points. The easiest example to help us understand how this works is a curve that is like the graph of a function, such as y = x². This is a special case of a polynomial function.
Polynomial curves
Polynomials have the general form: y = a + bx + cx² + dx³ + …
The degree of a polynomial is the highest power whose coefficient is nonzero. For example, if c is nonzero but d and all higher coefficients are zero, the polynomial is of degree 2. The shapes that polynomials can make are as follows:
degree 0: Constant; only a is nonzero. Example: y = 3. A constant, uniquely defined by one point.
degree 1: Linear; b is the highest nonzero coefficient. Example: y = 1 + 2x. A line, uniquely defined by two points.
degree 2: Quadratic; c is the highest nonzero coefficient. Example: y = 1 − 2x + x². A parabola, uniquely defined by three points.
degree 3: Cubic; d is the highest nonzero coefficient. Example: y = −1 − (7/2)x + (3/2)x³. A cubic curve (which can have an inflection, at x = 0 in this example), uniquely defined by four points.
The degree-three polynomial – known as a cubic polynomial – is the one most typically chosen for constructing smooth curves in computer graphics. It is used because
1. it is the lowest-degree polynomial that can support an inflection – so we can make interesting curves, and
2. it is very well behaved numerically – the curves it produces will usually be smooth rather than jumpy.
So now we can write a program that constructs cubic curves. The user enters four control points,
and the program solves for the four coefficients a, b, c and d which cause the polynomial to pass through the four control points. Below, we work through a specific example. Typically the interface would allow the user to enter control points by clicking them in with the mouse. For example, say the user has entered control points (−1, 2), (0, 0), (1, −2), (2, 0), as indicated by the dots in the figure to the left. Then the computer solves for the coefficients a, b, c, d and might draw the curve shown going through the control points, using a loop something like this:

glBegin(GL_LINE_STRIP);
for (x = -3; x <= 3; x += 0.25)
    glVertex2f(x, a + b * x + c * x * x + d * x * x * x);
glEnd();

Note that the computer is not really drawing the curve. Actually, all it is doing is drawing straight line segments through a sampling of points that lie on the curve. If the sampling is fine enough, the curve will appear to the user as a continuous smooth curve. The solution for a, b, c, d is obtained by simultaneously solving the four linear equations below, which come from the constraint that the curve must pass through the four points:

general form: a + bx + cx² + dx³ = y
point (−1, 2): a − b + c − d = 2
point (0, 0): a = 0
point (1, −2): a + b + c + d = −2
point (2, 0): a + 2b + 4c + 8d = 0

This can be written in matrix form Ma = y, or (one row for each equation):

$$\begin{bmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ -2 \\ 0 \end{bmatrix}$$
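The 4 × 4 system above can be solved with ordinary Gaussian elimination. A minimal pure-Python sketch (the names `solve` and `pts` are illustrative, not from the notes):

```python
def solve(A, y):
    """Solve A.x = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Control points (-1,2), (0,0), (1,-2), (2,0): each row is [1, x, x^2, x^3].
pts = [(-1, 2), (0, 0), (1, -2), (2, 0)]
A = [[1.0, x, x * x, x ** 3] for x, _ in pts]
a, b, c, d = solve(A, [y for _, y in pts])
```

For these control points the solver yields a = 0, b = −8/3, c = 0, d = 2/3, and the resulting cubic passes through all four points.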
Lecture 22
Bezier Curves
The Bezier curve was developed by the French engineer Pierre Bézier. These curves can be generated under the control of other points: approximate tangents formed by the control points are used to generate the curve. A Bezier curve with control points $P_{0}, P_{1}, \dots, P_{n}$ can be represented mathematically as −
$$P(t) = \sum_{i=0}^{n} P_{i}B_{i}^{n}(t), \quad 0 \le t \le 1$$
where the $B_{i}^{n}(t)$ are the Bernstein basis polynomials. The simplest Bézier curve is the straight line from the point $P_{0}$ to $P_{1}$.
Bezier curves have the following properties −
They generally follow the shape of the control polygon, which consists of the segments joining the control points.
They always pass through the first and last control points.
They are contained in the convex hull of their defining control points.
The degree of the polynomial defining the curve segment is one less than the number of defining polygon points. Therefore, for 4 control points, the degree of the polynomial is 3, i.e. a cubic polynomial.
A Bezier curve generally follows the shape of the defining polygon.
The direction of the tangent vector at each end point is the same as that of the vector determined by the first and last segments.
The convex hull property for a Bezier curve ensures that the polynomial smoothly follows the control points.
No straight line intersects a Bezier curve more times than it intersects its control polygon.
They are invariant under an affine transformation.
Bezier curves exhibit global control: moving a control point alters the shape of the whole curve.
A given Bezier curve can be subdivided at a point t = t0 into two Bezier segments which join together at the point corresponding to the parameter value t = t0.
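A common way to evaluate a Bezier curve at a parameter t is de Casteljau's algorithm of repeated linear interpolation, which is equivalent to summing the Bernstein form. A minimal sketch (function and variable names are illustrative):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeatedly interpolating
    between adjacent control points until one point remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]  # cubic: 4 control points
```

Evaluating at t = 0 and t = 1 returns the first and last control points, illustrating the endpoint-interpolation property listed above.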
B-Spline Curves
The Bezier curve produced by the Bernstein basis function has limited flexibility.
First, the number of specified polygon vertices fixes the order of the
resulting polynomial which defines the curve.
The second limiting characteristic is that the value of the blending
function is nonzero for all parameter values over the entire curve.
The B-spline basis contains the Bernstein basis as a special case, and the B-spline basis is non-global.
A B-spline curve is defined as a linear combination of control points $P_{i}$ and B-spline basis functions $N_{i,k}(t)$, given by
$$C(t) = \sum_{i=0}^{n}P_{i}N_{i,k}(t), \quad n \geq k-1, \quad t \in [t_{k-1}, t_{n+1}]$$
Where,
{$P_{i}$ : i = 0, 1, 2, …, n} are the control points.
k is the order of the polynomial segments of the B-spline curve. Order k means that the curve is made up of piecewise polynomial segments of degree k − 1.
The $N_{i,k}(t)$ are the “normalized B-spline blending functions”. They are described by the order k and by a non-decreasing sequence of real numbers, normally called the “knot sequence”:
$$\{t_{i} : i = 0, \dots, n + k\}$$
The $N_{i,k}$ functions are described as follows −
$$N_{i,1}(t) = \left\{\begin{matrix} 1, & \text{if } t \in [t_{i}, t_{i+1}) \\ 0, & \text{otherwise} \end{matrix}\right.$$
and if k > 1,
$$N_{i,k}(t) = \frac{t-t_{i}}{t_{i+k-1}-t_{i}} N_{i,k-1}(t) + \frac{t_{i+k}-t}{t_{i+k} - t_{i+1}} N_{i+1,k-1}(t)$$
with
$$t \in [t_{k-1},t_{n+1})$$
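This Cox-de Boor style recursion translates almost directly into code. A minimal sketch (the function name `N` and the convention of dropping a term whose knot span is zero are standard conventions, not spelled out in the notes):

```python
def N(i, k, t, knots):
    """Normalized B-spline blending function N_{i,k}(t), computed recursively."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    total = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:  # skip the term when the knot span is empty (0/0 treated as 0)
        total += (t - knots[i]) / d1 * N(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        total += (knots[i + k] - t) / d2 * N(i + 1, k - 1, t, knots)
    return total

# Uniform knot sequence for n + 1 = 4 control points, order k = 3
# (so n + k + 1 = 7 knot values):
knots = [0, 1, 2, 3, 4, 5, 6]
```

On the valid parameter range $[t_{k-1}, t_{n+1})$ the basis functions are non-negative and sum to 1, matching the properties listed below.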
Properties of B-spline Curve
B-spline curves have the following properties −
The sum of the B-spline basis functions for any parameter value is 1.
Each basis function is positive or zero for all parameter values.
Each basis function has precisely one maximum value, except for k=1.
The maximum order of the curve is equal to the number of vertices of the defining polygon.
The degree of the B-spline polynomial is independent of the number of vertices of the defining polygon.
B-splines allow local control over the curve surface, because each vertex affects the shape of the curve only over the range of parameter values where its associated basis function is nonzero.
Lecture 23
Rotation
3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation along with the axis of rotation. We can perform 3D rotation about the X, Y, and Z axes. The rotations are represented in matrix form as below −
$$R_{x}(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & \cos\theta & -\sin\theta & 0\\ 0 & \sin\theta & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_{y}(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0\\ 0 & 1 & 0 & 0\\ -\sin\theta & 0 & \cos\theta & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \quad R_{z}(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0\\ \sin\theta & \cos\theta & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}$$
The following figure explains the rotation about various axes −
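As a quick numerical check of the Z-axis rotation, the sketch below (pure Python; the names `rot_z` and `apply` are illustrative) rotates the point (1, 0, 0) by 90°, which should land on (0, 1, 0):

```python
import math

def rot_z(theta):
    """4x4 homogeneous rotation about Z, applied to a column vector [x, y, z, 1]."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def apply(M, p):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(M[r][c] * p[c] for c in range(4)) for r in range(4)]

# Rotating (1, 0, 0) by 90 degrees about Z.
p90 = apply(rot_z(math.pi / 2), [1, 0, 0, 1])
```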
Scaling
You can change the size of an object using scaling transformation. In the scaling process, you
either expand or compress the dimensions of the object. Scaling can be achieved by multiplying
the original coordinates of the object with the scaling factor to get the desired result. The
following figure shows the effect of 3D scaling −
In a 3D scaling operation, three coordinates are used. Let us assume that the original coordinates are (X, Y, Z), the scaling factors are $(S_{x}, S_{y}, S_{z})$ respectively, and the produced coordinates are (X′, Y′, Z′). This can be represented mathematically as shown below −
$$S = \begin{bmatrix} S_{x}& 0& 0& 0\\ 0& S_{y}& 0& 0\\ 0& 0& S_{z}& 0\\ 0& 0& 0& 1 \end{bmatrix}$$
P′ = P∙S
$$[{X}' \:\:\: {Y}' \:\:\: {Z}' \:\:\: 1] = [X \:\:\: Y \:\:\: Z \:\:\: 1] \begin{bmatrix} S_{x}& 0& 0& 0\\ 0& S_{y}& 0& 0\\ 0& 0& S_{z}& 0\\ 0& 0& 0& 1 \end{bmatrix} = [X \cdot S_{x} \:\:\: Y \cdot S_{y} \:\:\: Z \cdot S_{z} \:\:\: 1]$$
Shear
A transformation that slants the shape of an object is called the shear transformation. Like in 2D
shear, we can shear an object along the X-axis, Y-axis, or Z-axis in 3D.
As shown in the above figure, there is a coordinate P. You can shear it to get a new coordinate P′, which can be represented in 3D matrix form as below −
$$Sh = \begin{bmatrix} 1 & sh_{x}^{y} & sh_{x}^{z} & 0 \\ sh_{y}^{x} & 1 & sh_{y}^{z} & 0 \\ sh_{z}^{x} & sh_{z}^{y} & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
P′ = P ∙ Sh
$X' = X + sh_{x}^{y}Y + sh_{x}^{z}Z$
$Y' = sh_{y}^{x}X + Y + sh_{y}^{z}Z$
$Z' = sh_{z}^{x}X + sh_{z}^{y}Y + Z$
Transformation Matrices
A transformation matrix is a basic tool for transformation. A matrix with n × m dimensions is multiplied with the coordinates of objects. Usually 3 × 3 or 4 × 4 matrices are used for transformation. For example, consider the following matrices for the various operations:
$$T = \begin{bmatrix} 1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ t_{x}& t_{y}& t_{z}& 1 \end{bmatrix} \quad S = \begin{bmatrix} S_{x}& 0& 0& 0\\ 0& S_{y}& 0& 0\\ 0& 0& S_{z}& 0\\ 0& 0& 0& 1 \end{bmatrix} \quad Sh = \begin{bmatrix} 1& sh_{x}^{y}& sh_{x}^{z}& 0\\ sh_{y}^{x}& 1& sh_{y}^{z}& 0\\ sh_{z}^{x}& sh_{z}^{y}& 1& 0\\ 0& 0& 0& 1 \end{bmatrix}$$
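With the row-vector convention P′ = P∙M used in these notes, transformations compose by matrix multiplication. A minimal sketch (the function names are illustrative) that scales by 2 and then translates by (1, 0, 0) as one composite matrix:

```python
def mat_mul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vec_mat(p, M):
    """Row vector times matrix: P' = P . M."""
    return [sum(p[k] * M[k][j] for k in range(4)) for j in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

# Scale first, then translate: the composite is S . T in this convention.
M = mat_mul(scaling(2, 2, 2), translation(1, 0, 0))
```

Applying M to the point (1, 1, 1) first doubles it to (2, 2, 2), then shifts it to (3, 2, 2).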
Lecture 24
Perspective Projection vs. Parallel Projection
1. Perspective projection represents or draws objects so that they resemble the real thing. Parallel projection is used in drawing objects when perspective projection cannot be used.
2. Perspective projection represents objects in a three-dimensional way. Parallel projection is much like seeing objects through a telescope, letting parallel light rays into the eyes, which produces visual representations without depth.
3. Parallel projection may be best for architectural drawings, in cases wherein measurements are necessary; otherwise it is better to use perspective projection.
4. Perspective projections require a distance between the viewer and the target point. In parallel projection the center of projection is at infinity, while in perspective projection the center of projection is at a point.
5. Types of perspective projection: one-point, two-point, and three-point. Types of parallel projection: orthographic and oblique.
Lecture 25
Perspective Projection
Perspective projections are used to produce images which look natural. When we view scenes in everyday life, far-away items appear small relative to nearer items. This is called perspective foreshortening. A side effect of perspective foreshortening is that parallel lines appear to converge on a vanishing point. An important feature of perspective projections is that they preserve straight lines; this allows us to project only the end-points of 3D lines and then draw a 2D line between the projected endpoints.
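The foreshortening and vanishing-point behaviour can be sketched numerically. Assuming a simple pinhole model with the eye at the origin and viewplane z = d (an illustrative setup, not the notes' exact camera), points on two parallel lines receding in z project closer and closer together:

```python
def project(x, y, z, d=1.0):
    """Perspective projection onto the plane z = d, eye at the origin.
    (d and the eye placement are illustrative assumptions.)"""
    return (x * d / z, y * d / z)

# Two parallel 3D lines receding in z, sampled at increasing depth.
line_a = [project(1.0, 1.0, z) for z in (1, 10, 100, 1000)]
line_b = [project(-1.0, 1.0, z) for z in (1, 10, 100, 1000)]
```

The gap between the two projected lines shrinks with depth: they appear to converge toward a vanishing point.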
Objects in the real world appear smaller as they move further
away.
Perspective projection depends on the relative position of the eye and the viewplane.
In the usual arrangement the eye lies on the z-axis and the viewplane is the xy plane.
To determine the projection of a 3D point, connect the point and the eye by a straight line; the point where the line intersects the viewplane is the projected point.
Synthetic Camera
Perspective projections, while providing a realistic view of an object, are rather restrictive. They require the eye to lie on a coordinate axis and the viewplane to coincide with a coordinate plane. If we wish to view an object from a different point of view, we must rotate the model of the object. This causes an awkward mix of modelling (describing the objects to be viewed) and viewing (rendering a picture of the object). We will develop a flexible method for viewing that is completely separate from modelling; this method is called the synthetic camera. A synthetic camera is a way to describe a camera (or eye) positioned and oriented in 3D space. The system has three principal ingredients:
1. A viewplane in which a window is defined.
2. A coordinate system called viewing coordinate system(VCS) sometimes called
the UVN system.
3. An eye defined in VCS.
The view plane is defined by a point on the plane called the View Reference Point (VRP) and a normal to the viewplane called the View Plane Normal (VPN). These are defined in the world coordinate system. The viewing coordinate system is defined as follows:
The origin is the VRP.
One axis of the coordinate system is given by the VPN; this is known as the n-axis.
The second axis is found from the View Up Vector (VUP); this is known as the v-axis.
The third axis u is calculated as u = n × v.
In order for a rendering application to achieve the required view, the user would need to specify the following parameters:
The VRP ($\vec r = (r_{x}, r_{y}, r_{z})$).
To choose a VPN ($\vec n$), the user would simply select a point in the area of interest in the scene: the user should select some point which (s)he would like to appear as the center of the rendered view; call this point $\vec{scene}$. The vector $\vec{norm}$, a vector lying along $\vec n$, can then be calculated:
$$\vec{norm} = \vec{scene} - \vec{VRP}, \qquad \vec n = \frac{\vec{norm}}{|\vec{norm}|}$$
Finally, the upward vector must be a unit vector perpendicular to $\vec n$. Let the user enter an approximate up vector $\vec{up}$, and write $\vec{up}' = \vec{up} - k\vec n$. Requiring $\vec{up}' \cdot \vec n = 0$ gives
$$(\vec{up} - k\vec n)\cdot\vec n = 0 \;\Rightarrow\; \vec{up}\cdot\vec n - k|\vec n|^{2} = 0 \;\Rightarrow\; k = \frac{\vec{up}\cdot\vec n}{|\vec n|^{2}} = \vec{up}\cdot\vec n$$
so, finally,
$$\vec{up}' = \vec{up} - (\vec{up}\cdot\vec n)\,\vec n, \qquad \vec v = \frac{\vec{up}'}{|\vec{up}'|}$$
The vector $\vec u$ can now be calculated: $\vec u = \vec n \times \vec v$. With the viewing coordinate system set up, a window in the viewplane can be defined by giving minimum and maximum u and v values. The centre of the window (CW) does not have to be the VRP. The eye can be given any position $\vec e = (e_{u}, e_{v}, e_{n})$ in the viewing coordinate system. It is usually positioned at some negative value on the n-axis, $\vec e = (0, 0, -e_{n})$.
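The UVN construction described above can be sketched as follows (the name `camera_basis` and its helpers are illustrative):

```python
def normalize(a):
    m = sum(x * x for x in a) ** 0.5
    return tuple(x / m for x in a)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_basis(vrp, scene, up):
    """Build the UVN viewing basis: n points from the VRP toward the scene
    point, v is the component of 'up' perpendicular to n, and u = n x v."""
    n = normalize(tuple(s - r for s, r in zip(scene, vrp)))
    k = sum(a * b for a, b in zip(up, n))                  # k = up . n
    v = normalize(tuple(a - k * b for a, b in zip(up, n)))  # up' = up - k n
    u = cross(n, v)
    return u, v, n
```

The resulting u, v, n vectors are mutually orthogonal unit vectors, as required of a viewing coordinate system.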
The components of the synthetic camera can be changed to provide different views and
animation effects;
Moving the VRP along a given path will provide a sequence of views which show a 'walk through' or a 'fly-by' of the object, with the viewer always looking at the same location.
Changing the direction of the VPN is equivalent to swivelling your head, or 'panning'.
Changing the direction of the $\vec v$ vector allows the user to see the object rotate in the viewplane, as if the user were tilting his/her head to the left or right.
Describing Objects in Viewing Coordinates
We have developed a method for specifying the location and orientation of the synthetic
camera. In order to draw projections of models in this system, we need to be able to represent our real-world coordinates in terms of $\vec u$, $\vec v$ and $\vec n$.
Converting from one coordinate system to another:
(x,y)in three dimensions;(x,y,z)letMand subtract r⃗ from both sides:(x,y,z)−r⃗ multiply across
by M−1and write (x,y,z) as a vector, p⃗ (abc)M is made up of orthogonal unit
vectors; M−1=MT(abc)expanding −r⃗ MT, we
Notes By: - Arvind Sharma (Head, Department of Computer Science and Engineering) Page 74
get;−r⃗ MT========r⃗ +au⃗ +bv⃗ r⃗ +(ab)(u⃗ v⃗ )r⃗ +(abc)⎛⎝u⃗ v⃗ n⃗ ⎞⎠⎛⎝u⃗ v⃗ n⃗
⎞⎠=
xnynz⎞⎠
We now have a method for converting world coordinates to viewing coordinates of the
synthetic camera. We need to transform all objects from world coordinates to viewing
coordinates, this will simplify the later operations of clipping, projection etc. We should have
a separate data structure to hold the viewing coordinates of an object. The model itself
remains uncorrupted and we can have many different views of the model.
Generalized Perspective Projection
We must find the location $(u_1', v_1')$ of the projection of the point $\vec p = (u_1, v_1, n_1)$ onto the viewplane. We will use the parametric form of a vector; this vector is known as a ray. A ray is the path of a vector which starts at one point (when t = 0); as t increases, the vector moves along the ray until t = 1, at which point the vector reaches the end point.
$$\vec r(t) = \vec e\,(1-t) + \vec p\,t, \qquad \vec r(0) = \vec e, \quad \vec r(1) = \vec p$$
where $\vec e$ is the eye and $\vec p$ is the end point. This equation is valid for values of t between 0 and 1. We wish to find the coordinates of the ray as it pierces the viewplane; this occurs when $r_n(t) = 0$. The best way to do this is to find at what 'time' $t'$ the ray strikes the viewplane, so:
$$r_n(t') = e_n(1-t') + p_n t' = 0 \;\Rightarrow\; e_n - t'(e_n - p_n) = 0 \;\Rightarrow\; t' = \frac{e_n}{e_n - p_n}$$
Substituting $t'$ into $r_u(t)$ and $r_v(t)$ we get:
$$u' = e_u(1-t') + p_u t'$$
Rearranging gives:
$$u' = \frac{p_u e_n - e_u p_n}{e_n - p_n}, \qquad \text{and similarly for } v': \quad v' = \frac{p_v e_n - e_v p_n}{e_n - p_n}$$
This gives us the coordinates of the point (u, v, n) when projected onto the view plane. If the eye is on the n-axis, which is the usual case, then both $e_u$ and $e_v$ are zero, and $u'$ and $v'$ simplify to:
$$u' = \frac{p_u e_n}{e_n - p_n}, \qquad v' = \frac{p_v e_n}{e_n - p_n}$$
Note that u′ and v′ do not depend on t, this means that every point on the ray projects to the
same point on the viewplane. Even points behind the eye (t<0 ) are projected to the same
point on the viewplane. These points will be eliminated later.
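The ray derivation can be checked numerically: project a point from a general eye position, and confirm that the result lies on the viewplane (n = 0) and agrees with the ray equation evaluated at t′ (the name `project_uvn` is illustrative):

```python
def project_uvn(p, e):
    """Project p = (pu, pv, pn) onto the viewplane n = 0 from eye e = (eu, ev, en),
    using u' = (pu*en - eu*pn)/(en - pn) and v' = (pv*en - ev*pn)/(en - pn)."""
    pu, pv, pn = p
    eu, ev, en = e
    t = en / (en - pn)                      # 'time' at which the ray pierces n = 0
    u = (pu * en - eu * pn) / (en - pn)
    v = (pv * en - ev * pn) / (en - pn)
    # the same point, straight from the ray equation r(t) = e(1-t) + p t:
    ray = (eu * (1 - t) + pu * t, ev * (1 - t) + pv * t, en * (1 - t) + pn * t)
    return (u, v), ray
```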
When manipulating 3D entities it is useful to have an additional quantity which retains a
measure of depth of a point. As our analysis stands we have lost information about the depth
of the points because all points are projected onto the viewplane with a depth of zero. We
would like to have something which preserves the depth ordering of points, this quantity will
be called pseudodepth, and to simplify later calculation we will define it as;
$$n' = \frac{p_n e_n}{e_n - p_n}$$
An increase in actual depth $p_n$ causes an increase in $n'$, as required. The simplified equations for $u'$, $v'$ and $n'$ can be re-written as follows:
$$u' = \frac{p_u e_n}{e_n - p_n} = \frac{p_u}{1 - p_n/e_n}, \qquad v' = \frac{p_v e_n}{e_n - p_n} = \frac{p_v}{1 - p_n/e_n}, \qquad n' = \frac{p_n e_n}{e_n - p_n} = \frac{p_n}{1 - p_n/e_n}$$
We can now write a matrix to implement the above transformation; this is called the Perspective Transformation:
$$\hat M_p = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -\frac{1}{e_n} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The projection P′ of a point P can now be written as:
$$\vec p\,' = (p_u \;\; p_v \;\; p_n \;\; 1)\,\hat M_p$$
followed by a divide by the resulting fourth (homogeneous) component, $1 - p_n/e_n$.
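A sketch of the perspective transformation in code (row-vector convention; the name `perspective_transform` is illustrative). Multiplying by the matrix changes only the homogeneous component, to $1 - p_n/e_n$; the homogeneous divide then reproduces the u′, v′, n′ formulas:

```python
def perspective_transform(p, en):
    """Apply the perspective transformation to p = (pu, pv, pn, 1) for an eye
    at n-coordinate en, then perform the homogeneous divide."""
    pu, pv, pn, w = p
    # row vector times M_p: only the 4th component changes, to w - pn/en
    q = (pu, pv, pn, w - pn / en)
    return tuple(c / q[3] for c in q[:3])
```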
At this stage we have a method for transforming a point from world coordinates to viewing coordinates and then projecting that point onto the view plane, i.e.
$$\vec P\,'(p_u', p_v', p_n', 1) = \vec P_{xyz}\,\hat A_{wv}\,\hat M_p$$
The human brain perceives depth in a scene because we have two eyes separated in space, so each eye "sees" a slightly different view, and the brain uses these differences to estimate relative distance. These two views can be artificially generated by setting up a synthetic camera with two "eyes", each offset slightly from the n-axis. Each eye will result in a different projection. If each projection is displayed to the user in a different colour, and the user has appropriately filtered glasses, the 2D display will appear to have depth. Other 3D viewing systems include virtual-reality headsets, which have two built-in displays, one for each eye, and LCD goggles, which block the right eye when the left-eye image is being displayed on a large screen and vice versa; this cycle must occur 50 times a second if the animation is to be smooth.
The View Volume
We must define precisely the region in space that is to be projected and drawn. This region is
called the view-volume. In the general case, only a small fraction of the model falls within the field of view of the camera. The part of the model that falls outside of the camera's view must be identified and discarded as soon as possible to avoid unnecessary computation.
The view volume is defined in viewing coordinates. The eye and the window defined on the view plane together define a double-sided pyramid extending forever in both directions. To limit the view volume to a finite size, we can define a front plane n = F and a back plane n = B; these are sometimes known as the hither and yon planes. Now the view volume becomes a frustum (truncated pyramid).
We will later develop a clipping algorithm which will clip any part of the world which lies outside of the view volume. The effect of clipping to the front plane is to remove objects that lie behind the eye or too close to it. The effect of clipping to the back plane is to remove objects that are too far away, which would appear as indistinguishable spots. We can move the front and back planes close to each other to produce "cutaway" drawings of complex objects.
Clipping against a volume like a frustum would be a complex process, but if we apply the perspective transformation to all our points, the clipping process becomes trivial. The view volume is defined after the matrix $\hat M_{wv}$ has been applied to each point in world coordinates. The effect of applying the perspective transformation is called pre-warping. If we apply pre-warping to the view volume, it gets distorted into a more manageable shape.
We will first examine the effect of pre-warping on key points in the view volume. First we need to calculate the v-coordinate $v_2$ of $P_2(u_2, v_2, n_2)$, an arbitrary point lying on the line from the eye through $P_1(0, w_t, 0)$, where $w_t$ represents the top of the window defined on the view plane.
From the equation of a line, $v = m(n - n_1) + v_1$, with slope
$$m = \frac{w_t - e_v}{0 - e_n}$$
we get
$$v_2 = \left(\frac{w_t - e_v}{-e_n}\right)n_2 + w_t$$
so $P_2 = \left(0,\; \left(\frac{w_t - e_v}{-e_n}\right)n_2 + w_t,\; n_2\right)$. If we now apply pre-warping to the v-coordinate of $P_2$:
$$v_2' = \frac{p_v e_n - e_v p_n}{e_n - p_n} = \frac{\left(\left(\frac{w_t - e_v}{-e_n}\right)n_2 + w_t\right)e_n - e_v n_2}{e_n - n_2} = \frac{-w_t n_2 + e_v n_2 + w_t e_n - e_v n_2}{e_n - n_2} = \frac{w_t(e_n - n_2)}{e_n - n_2} = w_t$$
So pre-warping $P_2$ gives us the point which lies on the plane $v = w_t$.
Therefore the effect of pre-warping is to transform all points on the plane representing the top of the view volume to points on the plane $v = w_t$; this plane is parallel to the un-plane. It can be similarly shown that the other three sides of the view volume are transformed to planes parallel to the coordinate planes.
What happens to the front and back planes?
If we take a point on the back plane, $P_3(u_3, v_3, B)$, and apply pre-warping to the n-coordinate:
$$n_3' = \frac{p_n e_n}{e_n - p_n} = \frac{B e_n}{e_n - B} = \frac{B}{1 - B/e_n}$$
so we can see that the back plane has been moved to the plane $n = \frac{B}{1 - B/e_n}$. This plane is parallel to the original plane. Similarly, the front plane will have been moved to $n = \frac{F}{1 - F/e_n}$.
Applying pre-warping to the eye gives $n' = \frac{e_n e_n}{e_n - e_n} = \infty$. This means that the eye has been moved to infinity.
In summary, pre-warping has moved the walls of the frustum-shaped view volume to the following planes:
$$u = w_l, \quad u = w_r, \quad v = w_t, \quad v = w_b, \quad n = \frac{F}{1 - F/e_n}, \quad n = \frac{B}{1 - B/e_n}$$
Note that each of these planes is parallel to a coordinate plane. The frustum-shaped view volume has become a parallelepiped.
The Normalization Transformation
The final stage of the transformation process is to map the projected points to their final
position on the viewport on screen. We will combine this view-volume-to-viewport mapping with the pre-warping matrix; this will allow us to perform all the necessary calculations to
transform a point in world coordinates to pixel coordinates on screen in one matrix
multiplication.
The u and v coordinates will be converted to x and y screen coordinates and to simplify
calculation later we will scale the n coordinates (pseudo-depth) to a range between 0 and 1
(scale the front plane to 0 and the back plane to 1).
First we need to translate the view-volume to the origin; this can be done by applying the following translation matrix:
$$\hat T_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -w_l & -w_b & -\frac{e_n F}{e_n - F} & 1 \end{pmatrix}$$
Next the view-volume needs to be scaled to the width and height of the viewport. At this stage we will normalize the pseudo-depth to a range of 0 to 1. To scale the n-coordinate, we need to scale by:
$$\frac{1 - 0}{\frac{e_n B}{e_n - B} - \frac{e_n F}{e_n - F}} = \frac{(e_n - B)(e_n - F)}{e_n B(e_n - F) - e_n F(e_n - B)} = \frac{(e_n - B)(e_n - F)}{e_n^{2}(B - F)}$$
Therefore the scaling matrix required is:
$$\hat S = \begin{pmatrix} \frac{v_r - v_l}{w_r - w_l} & 0 & 0 & 0 \\ 0 & \frac{v_t - v_b}{w_t - w_b} & 0 & 0 \\ 0 & 0 & \frac{(e_n - B)(e_n - F)}{e_n^{2}(B - F)} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Finally we need to translate the scaled view volume to the position of the viewport:
$$\hat T_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ v_l & v_b & 0 & 1 \end{pmatrix}$$
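The translate-scale-translate steps described above can be applied directly to a point. A minimal sketch (the function name, and the assumption that (wl, wb, wr, wt) bound the window and (vl, vb, vr, vt) the viewport, are illustrative): window corners should map to viewport corners, and the pre-warped front and back planes to pseudo-depths 0 and 1.

```python
def normalize_point(u, v, n, win, vp, en, F, B):
    """Map a pre-warped point to viewport coordinates and [0, 1] pseudo-depth:
    translate the view volume to the origin, scale it, then translate it to
    the viewport. win = (wl, wb, wr, wt), vp = (vl, vb, vr, vt)."""
    wl, wb, wr, wt = win
    vl, vb, vr, vt = vp
    Fp = en * F / (en - F)          # pre-warped front plane, F/(1 - F/en)
    Bp = en * B / (en - B)          # pre-warped back plane,  B/(1 - B/en)
    x = (u - wl) * (vr - vl) / (wr - wl) + vl
    y = (v - wb) * (vt - vb) / (wt - wb) + vb
    z = (n - Fp) / (Bp - Fp)        # pseudo-depth normalized to [0, 1]
    return x, y, z
```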