GRAPHICS
334 CS
1434-35
Second Semester
UNIT 1 : INTRODUCTION
Computer graphics has become a powerful tool for the rapid and economical
production of pictures.
Today, computer graphics are used routinely in such diverse fields as science,
art, engineering, business, industry, medicine, government, entertainment,
advertising, education, training, and home applications.
Pixel
Basic Graphics System
Data plotting is still one of the most common graphics applications, but now it
is easy to generate graphs for highly complex data relationships.
Computer-Aided Design
For some design applications, objects are first displayed in a wire-frame out-
line that shows the overall shape and internal features of the objects.
Wire-frame displays also allow designers to quickly see the effects of interactive
adjustments to design shapes without waiting for the object surfaces to be fully
generated.
Virtual-reality environments
Data Visualizations
The term business visualization is used in connection with data sets related to
commerce, industry, and other nonscientific areas.
Computer Art
Both fine art and commercial art make use of computer-graphics methods.
Artists now have available a variety of computer methods and tools, including
specialized hardware, commercial software packages (such as Lumena), symbolic
mathematics programs (such as Mathematica), CAD packages, desktop publishing
software, and animation systems that provide facilities for designing object shapes
and specifying object motions.
Entertainment
Sometimes graphics images are combined with live actors and scenes, and
sometimes the films are completely generated using computer-rendering and
animation techniques.
Image processing
UNIT 2: Overview of Graphics Systems
Basic Operation
A beam of electrons (i.e., cathode rays) is emitted by the electron gun; it passes
through focusing and deflection systems that direct the beam towards the specified
position on the phosphor screen. The phosphor then emits a small spot of light at
every position contacted by the electron beam. Since the light emitted by the phosphor
fades very quickly, some method is needed for maintaining the screen picture. One of
the simplest ways to maintain a picture on the screen is to redraw the image rapidly.
This type of display is called Refresh CRT.
Components of CRT:
2. Control Grid
It is the next element, which follows the cathode. It almost covers the cathode,
leaving a small opening for electrons to come out. The intensity of the electron beam is
controlled by setting voltage levels on the control grid. A high negative voltage
applied to the control grid will shut off the beam by repelling electrons and
stopping them from passing through the small hole at the end of control grid
structure. A smaller negative voltage on the control grid will simply decrease
the number of electrons passing through. Thus we can control the brightness
of a display by varying the voltage on the control grid.
3. Accelerating Anode
They are positively charged anodes which accelerate the electrons towards
phosphor screen.
4. Focusing System
The focusing system is needed to force the electron beam to converge into a small
spot as it strikes the screen; otherwise the electrons would repel each other and
the beam would spread out as it approaches the screen. Electrostatic focusing
is commonly used in television and computer-graphics monitors.
5. Phosphor Coating
When the accelerating electron beam collides with the phosphor screen, a part
of kinetic energy is converted into light and heat. When the electrons in the
beam collide with the phosphor coating they are stopped and their kinetic
energy is absorbed by the phosphor.
There are two techniques for producing images on the CRT screen.
1. Raster Scan Displays.
2. Random Scan Displays.
1. Raster Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan
display, based on television technology.
In a raster-scan system, the electron beam is swept across the screen, one row
at a time, from the top to bottom. Each row is referred to as a scan line.
As the electron beam moves across a scan line, the beam intensity is turned on
and off to create a pattern of illuminated spots.
Picture definition is stored in a memory area called the refresh buffer or frame
buffer, where the term frame refers to the total screen area.
These stored color values are retrieved from the refresh buffer and used to
control the intensity of the electron beam as it moves from spot to spot across the
screen. In this way, the picture is painted on the screen one line at a time, as shown
below.
Thus, an aspect ratio of 4:3, for example, means that a horizontal line plotted
with four points has the same length as a vertical line plotted with three points.
The range of colors or shades of gray that can be displayed on a raster system
depends on both the types of phosphor used in the CRT and the number of bits per
pixel available in the frame buffer.
For a simple black and white system, each screen point is either on or off, so
only one bit per pixel is needed to control the intensity of the screen positions.
A bit value of 1, for example, indicates that the electron beam is to be turned
on at that position, and a value of 0 turns the beam off.
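As a rough illustration (the 640 x 480 resolution used here is an assumed figure, not
one given in these notes), the following small C program computes how much memory a
frame buffer needs for a 1-bit black-and-white system and for a 24-bit color system:

#include <stdio.h>

/* Frame-buffer storage: one value per pixel, bitsPerPixel bits wide. */
int main(void)
{
    long width  = 640;      /* assumed pixels per scan line */
    long height = 480;      /* assumed number of scan lines */

    long bw  = width * height * 1  / 8;   /* 1 bit per pixel (black and white)     */
    long rgb = width * height * 24 / 8;   /* 24 bits per pixel (8 each for R,G,B)  */

    printf("Black-and-white frame buffer: %ld bytes\n", bw);
    printf("24-bit color frame buffer:    %ld bytes\n", rgb);
    return 0;
}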
Interlacing
First, all points on the even-numbered (solid) scan lines are displayed; and
then all points along the odd-numbered (dashed) lines are displayed.
2. Random Scan Displays
When operated as a random-scan display unit, a CRT has the electron beam
directed only to those parts of the screen where a picture is to be displayed.
Pictures are generated as line drawings, with the electron beam tracing out the
component lines one after the other.
Picture definitions are stored as a set of line-drawing commands in an area of
memory referred to as the display list.
To display a specified picture, the system cycles through the set of commands
in the display list, drawing each component line in turn.
After all line-drawing commands have been processed, the system cycles back
to the first line command in the list.
Random displays produce smooth line drawings because the CRT beam
directly follows the line path.
Raster displays produce jagged lines that are plotted as discrete point sets.
Display File
The commands present in the display file contain two fields, an operation code
(opcode) and an operand. The opcode identifies the command, such as draw line or
move cursor, and the operand provides the coordinates of a point needed to process
the command.
It is also necessary to assign meaning to the possible opcodes before we can
proceed to interpret them.
MOVE 1
LINE 2
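As an illustrative sketch (not part of the original notes), a display file can be held
as an array of opcode/operand records and interpreted in a loop; the moveTo and lineTo
routines below are assumed placeholders for the system's actual drawing functions:

/* One display-file command: an opcode plus its operand coordinates. */
typedef struct {
    int opcode;   /* 1 = MOVE, 2 = LINE (as assigned above) */
    int x, y;     /* operand: the point to move or draw to  */
} DisplayCmd;

/* Assumed drawing routines provided elsewhere by the system. */
void moveTo(int x, int y);
void lineTo(int x, int y);

/* Interpret the display file by cycling through its commands in turn. */
void interpret(const DisplayCmd *list, int count)
{
    for (int i = 0; i < count; ++i) {
        switch (list[i].opcode) {
        case 1: moveTo(list[i].x, list[i].y); break;   /* MOVE */
        case 2: lineTo(list[i].x, list[i].y); break;   /* LINE */
        }
    }
}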
Raster Scan Display                           Random Scan Display
--------------------------------------------  --------------------------------------------
It draws the image by scanning one row        It draws the image by directing the electron
at a time.                                    beam directly to the part of the screen
                                              where the image is to be drawn.
Resolution is generally limited to the        It has a higher resolution than a raster
pixel size.                                   scan system.
Lines are jagged and curves are less          Line plots are straight and curves are
smooth.                                       smooth.
More suited to area-drawing applications,     More suited to line-drawing applications,
e.g. monitors, television.                    e.g. CROs, pen plotters.
Color CRT Monitors
A CRT monitor displays color pictures by using a combination of phosphors that
emit different-colored light. The emitted light from the different phosphors merges
to form a single perceived color, which depends on the particular set of phosphors
that have been excited.
Beam-penetration method
This technique is used in Random Scan Monitors. In this technique, the inside
of CRT is coated with two layers of phosphor, usually red & green.
The displayed color depends on how far the electron beam penetrates into the
phosphor layers. The outer layer is of red phosphor and inner layer is of green
phosphor. A beam of slow electrons excites only the outer red layer.
A beam of fast electrons penetrates the outer red layer and excites the inner
green layer. At intermediate beam speeds, combination of red and green light is
emitted and two additional colors orange and yellow are displayed.
The beam acceleration voltage controls the speed of the electrons and hence
the screen color at any point on the screen.
Shadow-mask method
The shadow mask technique produces a much wider range of colors than the
beam penetration technique. Hence this technique is commonly used in raster scan
displays including color T.V.
In shadow mask technique, the CRT screen has three phosphor color dots at
each pixel position. One phosphor dot emits red light, another emits a green light
and the third one emits a blue light. The CRT has three electron guns one for each
dot, a shadow mask grid just behind the phosphor coated screen.
The shadow mask grid consists of series of holes aligned with the phosphor dot
patterns. As shown in figure, the three electron beams are deflected and focused as a
group onto the shadow mask and when they pass through a hole onto a shadow
mask they excite a dot triangle.
A dot triangle consists of 3 small phosphor dots of red, green and blue color.
These phosphor dots are arranged so that each electron beam can activate only its
corresponding color dot when it passes through the shadow mask.
A dot triangle when activated appears as a small dot on the screen which has
color combination of three small dots in the dot triangle. By varying the intensity of
the three electron beams we can obtain different colors in the shadow mask CRT.
Flat-Panel Displays
The term flat-panel display refers to a class of video devices that have reduced
volume, weight, and power requirements compared to a CRT.
A significant feature of flat-panel displays is that they are thinner than CRTs,
and we can hang them on walls or wear them on our wrists.
We can even write on some flat-panel displays, and they are also available as
pocket notepads. There are two categories of flat-panel displays:
1. Emissive displays
2. Non emissive displays
1. Emissive displays:
They convert electrical energy into light. Plasma panels, thin-film electroluminescent
displays, and light-emitting diodes are examples of emissive displays.
Plasma Panels:
It has narrow plasma tubes that are lined up together horizontally to make a
display. The tubes, which operate in the same manner as standard plasma displays,
are filled with xenon and neon gas.
Their inside walls are partly coated with either red, green, or blue phosphor,
which together produce the full color spectrum.
The tubes are packed together vertically and are sandwiched between two thin
and lightweight glass or plastic retaining plates.
The display electrodes in the tubular display run across its front,
perpendicular to the tubes, while the address electrodes are on the back, parallel to
the tubes.
When current runs through any pair of intersecting display and control
electrodes, an electric charge prompts gas in the tube to discharge and emit
ultraviolet light at the intersection point, which in turn causes the phosphor coating
to emit visible light.
Thin-Film Electroluminescent Displays:
They are similar in construction to plasma panels. The difference is that the region
between the glass plates is filled with a phosphor, such as zinc sulfide doped with
manganese, instead of gas.
Electrical energy is absorbed by the manganese atoms, which then release the
energy as a spot of light similar to the glowing plasma effect in a plasma panel.
Liquid Crystal Display
They are commonly used in small systems, such as laptop computers and
calculators. They are non emissive devices.
UNIT 3: Input Devices
Graphics workstations can make use of various devices for data input. Most
systems have a keyboard and one or more additional devices specifically designed for
interactive input. These include a mouse, trackball, spaceball, and joystick. Some
other input devices used in particular applications are digitizers, dials, button boxes,
data gloves, touch panels, image scanners and voice systems.
When we press a key on the keyboard, the keyboard controller places a code
corresponding to the key pressed, in a part of its memory called keyboard buffer.
This code is called the scan code. The keyboard controller informs the CPU of the
computer about the key pressed with the help of interrupt signals. The CPU then
reads the scan code from the Keyboard Buffer.
For specialized tasks, input to a graphics application may come from a set of
buttons, dials.
Button and switches are often used to input predefined functions, and dials
are common devices for entering scalar values.
Trackballs and Spaceballs
A trackball is a ball device that can be rotated with the fingers or palm of the
hand to produce screen-cursor movement.
Potentiometers, connected to the ball, measure the amount and direction of
rotation. Laptop keyboards are often equipped with a trackball to eliminate the extra
space required by a mouse.
An extension of the two dimensional trackball concept is the spaceball, which
provides six degrees of freedom.
Unlike the trackball, a spaceball does not actually move. Strain gauges
measure the amount of pressure applied to the spaceball to provide input for spatial
positioning and orientation as the ball is pushed or pulled in various directions.
Spaceball are used for 3-D positioning and selection operations in virtual-
reality systems, modeling, animation, CAD and other applications.
Joysticks
Data Gloves
Data gloves can be used to grasp a virtual object. The glove is constructed with
a series of sensors that detect hand and finger motions.
Electromagnetic coupling between transmitting antennas and receiving
antennas is used to provide information about the position and orientation of the
hand.
Digitizers
A common device for drawing, painting, or interactively selecting positions is a
digitizer. These devices can be designed to input coordinate values in either a two
dimensional or a three dimensional space.
In engineering or architectural applications, a digitizer is often used to scan a
drawing or object and to input a set of discrete coordinate positions.
Image Scanners
We can also apply various image-processing methods to modify the array
representation of the picture.
Touch Panels
Touch panels allow displayed object or screen positions to be selected with the
touch of a finger.
A typical application of touch panels is for the selection of processing options
that are represented as a menu of graphical icons.
Some monitors, such as plasma panels are designed with touch screens.
Light Pen
Pencil shaped devices are used to select screen positions by detecting the light
coming from points on the CRT screen.
They are sensitive to the short burst of light emitted from the phosphor coating
at the instant the electron beam strikes a particular point.
Voice System
Speech recognizers are used with some graphics workstations as input devices
for voice commands.
The voice system input can be used to initiate graphics operations or to enter
data. These systems operate by matching an input against a predefined dictionary of
words and phrases.
UNIT 4: Graphics Output Primitives
Line Drawing Algorithms
A straight line segment in a scene is defined by the coordinate positions for
the endpoints of the segment.
To display the line on a raster monitor, the graphics system must first project
the endpoints to integer screen coordinates and determine the nearest pixel positions
along the line path between the two endpoints.
Rasterization
As a cathode ray tube (CRT) raster display is considered a matrix of discrete
finite area cells (pixels), each of which can be made bright, it is not possible to
directly draw a straight line from one point to another.
The process of determining which pixels provide the best approximation to the
desired line is properly known as rasterization.
Digital Differential Analyzer Algorithm
One technique for obtaining a rasterized straight line is to solve the differential
equation for a straight line, i.e.
    dy/dx = constant, or Δy/Δx = (y2 − y1)/(x2 − x1)
If we have to draw a line from the point (x1, y1) to (x2, y2), then let us take
    Length = abs(x2 − x1)   if abs(x2 − x1) ≥ abs(y2 − y1)
    Length = abs(y2 − y1)   otherwise
and let dx and dy be the small increments along x and y for each step:
    dx = (x2 − x1) / Length
    dy = (y2 − y1) / Length
Starting from (x1, y1), successive points along the line are then
    xi+1 = xi + dx
    yi+1 = yi + dy
The actual DDA algorithm is as follows:
    dx = (x2 - x1) / Length
    dy = (y2 - y1) / Length
    x = x1      // the running x value
    y = y1      // the running y value
    begin
        i = 1
        while ( i <= Length )
            glVertex2i( round(x), round(y) )   // plot the nearest pixel
            x = x + dx
            y = y + dy
            i = i + 1
        end while
    end
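A runnable C version of the same simple DDA is sketched below; setPixel is an assumed
plotting helper (it could just as well call glVertex2i), and the loop plots both
endpoints:

#include <math.h>
#include <stdlib.h>

void setPixel(int x, int y);   /* assumed plotting routine */

/* Simple DDA: step in equal fractional increments and round to the nearest pixel. */
void ddaLine(int x1, int y1, int x2, int y2)
{
    int length = abs(x2 - x1);
    if (abs(y2 - y1) > length)
        length = abs(y2 - y1);
    if (length == 0) { setPixel(x1, y1); return; }

    double dx = (double)(x2 - x1) / length;
    double dy = (double)(y2 - y1) / length;
    double x = x1, y = y1;

    for (int i = 0; i <= length; ++i) {
        setPixel((int)lround(x), (int)lround(y));
        x += dx;
        y += dy;
    }
}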
Example:
Consider the line from (0,0) to (5,5). Use the simple DDA to rasterize this line.
Evaluating the steps in the algorithm yields initial calculation as
x1 = 0 ; y1 = 0 ; x2 = 5 ; y2 = 5 ; Length = 5 ; dy = 1 ; dx = 1 ; x= 0 ; y= 0
    i    glVertex    x    y
    1    (0,0)       0    0
    2    (1,1)       1    1
    3    (2,2)       2    2
    4    (3,3)       3    3
    5    (4,4)       4    4
(The accompanying figure plots these pixels on a grid running from 0 to 5 on both axes.)
Bresenham’s Line Algorithm
The algorithm seeks to select the optimum raster locations to represent a
straight line. To accomplish this, the algorithm always increments by one unit in
either x or y, depending on the slope of the line.
The increment in the other variable, either zero or one, is determined by
examining the distance between the actual line and the nearest grid locations. This
distance is called the error.
For example, in the above diagram, after rasterizing the pixel at (0,0) we have to
choose whether to rasterize pixel (1,0) or (1,1).
If the slope of the required line through (0,0) is greater than ½, then rasterize the
point at (1,1); if it is less than ½, then rasterize (1,0).
That is,
If ½ <= (dy/dx) <=1 then (error >= 0)
plot(1,1)
else if 0 <= (dy/dx) < ½ then (error < 0)
plot(1,0)
end if
If we have to draw a line from (x1,y1) to (x2,y2) with 0 ≤ slope ≤ 1, then the
Bresenham algorithm is
Algorithm:
1. Input the two endpoints (x1, y1) and (x2, y2), and plot the first point (x1, y1).
2. Calculate dx = x2 - x1 and dy = y2 - y1.
3. Obtain the starting value for the decision variable
   p0 = 2*dy - dx
4. At each xk along the line, starting at k = 0, perform the following test.
5. If (pk < 0), the next point to plot is
   {
   (xk + 1, yk)
   pk+1 = pk + 2*dy
   }
   else
   {
   (xk + 1, yk + 1)
   pk+1 = pk + 2*dy - 2*dx
   }
   end if
6. Repeat steps 4 and 5 dx times; this generates the remaining dx pixels and ends at
   (x2, y2), as the example below shows.
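A C sketch of these steps for the case 0 ≤ slope ≤ 1 and x1 < x2 (setPixel is again an
assumed plotting helper):

void setPixel(int x, int y);   /* assumed plotting routine */

/* Bresenham line drawing for slopes between 0 and 1 with x1 < x2. */
void bresenhamLine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    int p  = 2 * dy - dx;          /* initial decision variable p0 */
    int x  = x1, y = y1;

    setPixel(x, y);                /* plot the first endpoint */
    for (int k = 0; k < dx; ++k) { /* step 5 repeated dx times */
        x = x + 1;
        if (p < 0) {
            p = p + 2 * dy;        /* keep the same y */
        } else {
            y = y + 1;             /* move diagonally */
            p = p + 2 * dy - 2 * dx;
        }
        setPixel(x, y);
    }
}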
Example:
To draw line from (20,10) to (30,18);
x1 = 20; y1 = 10;
x2 = 30; y2 = 18;
p0 = 2 * dy - dx = 2 * 8 - 10 = 6;
k p x y
0 6 21 11
1 2 22 12
2 -2 23 12
3 14 24 13
4 10 25 14
5 6 26 15
6 2 27 16
7 -2 28 16
8 14 29 17
9 10 30 18
Circle Generating Algorithm
Properties of Circles
A Circle is defined as the set of points that are all at a given distance r from a
center position (xc,yc). For any circle point (x,y), this distance relationship is
expressed as
    (x − xc)² + (y − yc)² = r²
    (y − yc)² = r² − (x − xc)²
    y = yc ± √( r² − (x − xc)² )
Another way to calculate points along the circular boundary is to use polar
coordinates r and θ.
Expressing the circle equation in parametric polar form yields the pair of equations
x = xc + r cos θ and y = yc + r sin θ
Midpoint Circle Algorithm
For a given radius r and screen center position (xc, yc), we have to calculate
pixel positions around a circular path centered at the coordinate origin (0,0).
We decide the next pixel to be selected depending upon the value of the
decision variable d as
    dold = F(xp + 1, yp − 1/2)
         = (xp + 1)² + (yp − 1/2)² − R²
where the circle function is F(x, y) = x² + y² − R².

Case 1: If dold < 0 then
the pixel E is chosen and
the next midpoint will be (xp + 2, yp − 1/2).
The value of the next decision variable will be:
    dnew = F(xp + 2, yp − 1/2)
         = (xp + 2)² + (yp − 1/2)² − R²
The difference between dold and dnew gives
    dnew = dold + (2xp + 3)
therefore ∆E = 2xp + 3

Proof:
    dold = F(xp + 1, yp − 1/2)
         = (xp + 1)² + (yp − 1/2)² − R²
         = (xp² + 2xp + 1) + (yp² − yp + 1/4) − R²
    dnew = F(xp + 2, yp − 1/2)
         = (xp + 2)² + (yp − 1/2)² − R²
         = (xp² + 4xp + 4) + (yp² − yp + 1/4) − R²
    dnew − dold
         = (xp² + 4xp + 4) + (yp² − yp + 1/4) − R² − [ (xp² + 2xp + 1) + (yp² − yp + 1/4) − R² ]
         = 2xp + 3
Hence, we prove that dnew − dold = 2xp + 3.

Case 2: If dold ≥ 0 then
the pixel SE is chosen and
the next midpoint will be (xp + 2, yp − 3/2).
The value of the next decision variable will be:
    dnew = F(xp + 2, yp − 3/2)
         = (xp + 2)² + (yp − 3/2)² − R²
The difference between dold and dnew gives
    dnew = dold + (2xp − 2yp + 5)
therefore ∆SE = 2xp − 2yp + 5

Proof:
    dold = F(xp + 1, yp − 1/2)
         = (xp + 1)² + (yp − 1/2)² − R²
         = (xp² + 2xp + 1) + (yp² − yp + 1/4) − R²
    dnew = F(xp + 2, yp − 3/2)
         = (xp + 2)² + (yp − 3/2)² − R²
         = (xp² + 4xp + 4) + (yp² − 3yp + 9/4) − R²
    dnew − dold
         = (xp² + 4xp + 4) + (yp² − 3yp + 9/4) − R² − [ (xp² + 2xp + 1) + (yp² − yp + 1/4) − R² ]
         = 2xp − 2yp + 5
Hence, we prove that dnew − dold = 2xp − 2yp + 5.

The initial decision variable is based on the initial pixel location (0, R) and the first
midpoint (1, R − 1/2).
Therefore,
    d0 = F(1, R − 1/2)
       = 1² + (R − 1/2)² − R²
       = 1 + (R² − R + 1/4) − R²
       = 5/4 − R
The actual algorithm is as follows:
    // initialize variables
    x = 0
    y = radius
    d = (5.0 / 4.0) - radius
    glVertex2f( x, y );
    while ( y > x )
    {
        if ( d < 0 )
        {
            d = d + (2 * x + 3)
            x = x + 1
        }
        else
        {
            d = d + (2 * x - 2 * y + 5)
            x = x + 1
            y = y - 1
        }
        glVertex2f( x, y );
    }
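The loop above generates positions for one octant only (from (0, radius) down to the
45° line). A common companion routine, sketched here in C under the assumption of a
setPixel helper, plots the eight symmetric points so that a full circle centered at
(xc, yc) is obtained; calling circlePoints(xc, yc, x, y) in place of glVertex2f inside
the loop draws the complete circle:

void setPixel(int x, int y);   /* assumed plotting routine */

/* Plot the eight symmetric points of (x, y) for a circle centered at (xc, yc). */
void circlePoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}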
UNIT 5: Geometric Transformation.
A transformation T is the process of moving a point in space from one position
to another.
The functions that are available in all graphics packages are those for
translation, rotation and scaling.
If we represent the coordinate positions and the translation distances as column
vectors,
    P = [ x ]      P' = [ x' ]      T = [ tx ]
        [ y ]           [ y' ]          [ ty ]
so that x' = x + tx and y' = y + ty,
this allows us to write the two-dimensional translation equations in the matrix
form as
    P' = P + T.
Rigid-body translation
A Translation is said to be a rigid body translation if it moves the objects
without deformation.
That is, every point on the object is translated by the same amount. A straight
line segment is translated by applying the transformation equation to each of the two
endpoints and redrawing the line between the new endpoints.
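As a small sketch of this (the coordinate values below are made up for illustration),
translating a segment simply adds the same pair of translation distances to both
endpoints:

typedef struct { float x, y; } Point2;

/* Translate both endpoints of a line segment by (tx, ty). */
void translateSegment(Point2 *p1, Point2 *p2, float tx, float ty)
{
    p1->x += tx;  p1->y += ty;
    p2->x += tx;  p2->y += ty;
}

/* Example: translating the segment (10,10)-(30,60) by (20,5)
   gives the segment (30,15)-(50,65). */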
Translation
glColor3f (0.0, 1.0, 0.0);
Tri();
glPopMatrix();
glutSwapBuffers();
glFlush ();
}
void init (void)
{ glClearColor (1.0, 1.0, 1.0, 0.0);
glMatrixMode(GL_PROJECTION);
gluOrtho2D(0.0, 200.0, 0.0, 150.0);
}
void Tri()
{ glBegin(GL_TRIANGLES);
glVertex2i(10, 10);
glVertex2i(60, 10);
glVertex2i(30, 60);
glEnd();
}
Two Dimensional Scaling.
Scaling transformation is applied to alter the size of the object.
A simple two dimensional scaling operation is performed by multiplying object
positions (x, y) by scaling factors sx and sy to produce the transformed coordinates
(x', y'):
    x' = x · sx
    y' = y · sy
Scaling factor sx scales an object in the x direction, while sy scales in the y
direction.
The basic two-dimensional scaling equation can be written in the matrix form as
    [ x' ]   [ sx   0 ] [ x ]
    [ y' ] = [  0  sy ] [ y ]
or
    P' = S * P
Rules for Scaling
1. Any positive value can be assigned to the scaling factors sx and sy.
2. Values less than 1 will reduce the size of the objects.
3. Values greater than 1 produce enlargements.
4. Specifying a value of 1 for both sx and sy leaves the size of object
unchanged.
Uniform Scaling
When sx and sy are assigned the same value, a uniform scaling is produced
which maintains relative object proportions.
Differential Scaling
Unequal values for sx and sy results in a differential scaling that is often used
in design applications.
Polygon Scaling
Polygons are scaled by applying transformations to each vertex, then
regenerating the polygon using the transformed vertices.
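A minimal C sketch of this vertex-by-vertex scaling (about the coordinate origin; the
Vertex2 type is an assumption for illustration):

typedef struct { float x, y; } Vertex2;

/* Scale every vertex of a polygon about the origin. */
void scalePolygon(Vertex2 *v, int n, float sx, float sy)
{
    for (int i = 0; i < n; ++i) {
        v[i].x *= sx;
        v[i].y *= sy;
    }
}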
Scaling
glutDisplayFunc(PatternSegment);
glutMainLoop();
return 0;
}
void PatternSegment(void)
{
void Tri();
glClear (GL_COLOR_BUFFER_BIT);
glPushMatrix();
glColor3f (1.0, 0.0, 0.0);
Tri();
glScalef(10.0,10.0,0.0);
glColor3f (0.0, 1.0, 0.0);
Tri();
glPopMatrix();
glutSwapBuffers();
glFlush ();
}
void init (void)
{ glClearColor (1.0, 1.0, 1.0, 0.0);
glMatrixMode(GL_PROJECTION);
gluOrtho2D(0.0, 200.0, 0.0, 150.0);
}
void Tri()
{ glBegin(GL_TRIANGLES);
glVertex2i(10, 10);
glVertex2i(60, 10);
glVertex2i(30, 60);
glEnd();
}
Two Dimensional Rotation.
A rotation transformation of an object is generated by specifying a rotation axis
and a rotation angle.
All points of the object are then transformed to new positions by rotating the
points through the specified angle about the rotation axis.
A two dimensional rotation of an object is obtained by repositioning the object
along a circular path in the xy plane.
If r is the constant distance of the point from the origin and φ is the original
angular position of the point (x, y), then

    x = r cos φ,    y = r sin φ                                      ------------------ A

The transformed coordinates, after rotation through an angle θ, can be expressed as

    x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ
    y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ                ------------------ B

Substituting the value of A in B we get

    x' = x cos θ − y sin θ
    y' = x sin θ + y cos θ
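These equations translate directly into code; the following C sketch rotates a point
(x, y) about the origin by an angle theta given in radians:

#include <math.h>

/* Rotate (x, y) about the origin by theta radians:
   x' = x cos(theta) - y sin(theta)
   y' = x sin(theta) + y cos(theta)                  */
void rotatePoint(float x, float y, float theta, float *xr, float *yr)
{
    *xr = x * cosf(theta) - y * sinf(theta);
    *yr = x * sinf(theta) + y * cosf(theta);
}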
Rotation
Tri();
glRotatef(15.0,0,0,1);
glColor3f (0.0, 1.0, 0.0);
Tri();
glPopMatrix();
glutSwapBuffers();
glFlush ();
}
Matrix Transformation
Using homogeneous coordinates, each basic transformation can be written as a 3 x 3
matrix.

Translation    T = [ 1  0  tx ]
                   [ 0  1  ty ]
                   [ 0  0   1 ]

Scaling        S = [ sx  0  0 ]
                   [ 0  sy  0 ]
                   [ 0   0  1 ]

Rotation       R = [ cos θ  -sin θ  0 ]
                   [ sin θ   cos θ  0 ]
                   [   0       0    1 ]

Translation
    [ x' ]   [ 1  0  tx ] [ x ]
    [ y' ] = [ 0  1  ty ] [ y ]
    [ 1  ]   [ 0  0   1 ] [ 1 ]

Scaling
    [ x' ]   [ sx  0  0 ] [ x ]
    [ y' ] = [ 0  sy  0 ] [ y ]
    [ 1  ]   [ 0   0  1 ] [ 1 ]

Rotation
    [ x' ]   [ cos θ  -sin θ  0 ] [ x ]
    [ y' ] = [ sin θ   cos θ  0 ] [ y ]
    [ 1  ]   [   0       0    1 ] [ 1 ]
Simple Transformation
(Figures: Before Scaling, After Scaling)
c. Scale a line between (0,1) and (2,1) to twice its length; the left-hand
endpoint does not move.
Scaling the x coordinate by 2 about the origin leaves the left endpoint (0,1) fixed,
because its x coordinate is 0.
For the first point: sx = 2, x = 0, y = 1
    [ x' ]   [ 2  0 ] [ 0 ]   [ 0 ]
    [ y' ] = [ 0  1 ] [ 1 ] = [ 1 ]
For the second point: x = 2, y = 1
    [ x' ]   [ 2  0 ] [ 2 ]   [ 4 ]
    [ y' ] = [ 0  1 ] [ 1 ] = [ 1 ]
The scaled line runs from (0,1) to (4,1), which is twice the original length.
3. Rotation of the coordinates (1,0) by 45° about the origin
    [ x' ]   [ cos 45°  -sin 45° ] [ x ]
    [ y' ] = [ sin 45°   cos 45° ] [ y ]
From the table of trigonometric values, cos 45° = sin 45° = √2/2 ≈ 0.707.
Substituting these values in the matrix gives
    [ x' ]   [ 0.707  -0.707 ] [ 1 ]   [ 0.707 ]
    [ y' ] = [ 0.707   0.707 ] [ 0 ] = [ 0.707 ]
so the rotated point is approximately (0.707, 0.707).
Composite Transformation
1. Find the transformation matrix for scaling by 2 with fixed point (2,1), and apply
the composite transformation to the line between (2,1) and (4,1).
First find the transformation matrix for scaling by 2 with fixed point (2,1):
1. Translate the fixed point (2,1) to the origin.
2. Scale by 2.
3. Translate the origin back to the point (2,1).
The matrix that translates the origin to the point (2,1) is T(2,1), and the composite
matrix is
        [ 1  0  2 ] [ 2  0  0 ] [ 1  0  -2 ]   [ 2  0  -2 ]
    M = [ 0  1  1 ] [ 0  2  0 ] [ 0  1  -1 ] = [ 0  2  -1 ]
        [ 0  0  1 ] [ 0  0  1 ] [ 0  0   1 ]   [ 0  0   1 ]
Applying M to the first endpoint (2,1):
    [ 2  0  -2 ] [ 2 ]   [ 2 ]
    [ 0  2  -1 ] [ 1 ] = [ 1 ]
    [ 0  0   1 ] [ 1 ]   [ 1 ]
Applying M to the second endpoint (4,1):
    [ 2  0  -2 ] [ 4 ]   [ 6 ]
    [ 0  2  -1 ] [ 1 ] = [ 1 ]
    [ 0  0   1 ] [ 1 ]   [ 1 ]
The fixed point (2,1) does not move, and the line (2,1)-(4,1) is scaled to (2,1)-(6,1),
twice its original length.
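The same composition can be carried out in code with 3 x 3 homogeneous matrices; the
helpers below are an illustrative sketch, not part of the original notes:

/* 3x3 homogeneous matrix product: r = a * b. */
void matMul3(const float a[3][3], const float b[3][3], float r[3][3])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            r[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
        }
}

/* Scale by (sx, sy) about the fixed point (fx, fy):
   M = T(fx, fy) * S(sx, sy) * T(-fx, -fy)            */
void fixedPointScale(float sx, float sy, float fx, float fy, float m[3][3])
{
    float t1[3][3] = {{1, 0, fx}, {0, 1, fy}, {0, 0, 1}};    /* translate back      */
    float s [3][3] = {{sx, 0, 0}, {0, sy, 0}, {0, 0, 1}};    /* scale               */
    float t2[3][3] = {{1, 0, -fx}, {0, 1, -fy}, {0, 0, 1}};  /* translate to origin */
    float tmp[3][3];
    matMul3(s, t2, tmp);
    matMul3(t1, tmp, m);
}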
UNIT 6: Clipping
Clipping is the process of extracting and identifying the elements of a scene or picture
that lie inside or outside a specified region, called the clipping region.
Clipping is useful for copying, moving or deleting a portion of a scene or
picture.
Example:
The classical "cut and paste" operation in a windowing system.
Clipping Window
A section of a two dimensional scene that is selected for display is called a
clipping window, because all parts of the scene outside the selected section are
clipped off.
Viewport
Objects inside the clipping window are mapped to the viewport and it is the
viewport that is then positioned within the display window. The clipping window
selects what we want to see. The viewport indicates where it is to be viewed on the
output device.
Window to Viewport transformation
The mapping of a two-dimensional, world-coordinate scene description to
device coordinates is called a two-dimensional viewing transformation.
After clipping, the unit square viewport is mapped to the output device. In
other systems, the normalization and clipping is performed before viewport
transformation. The viewport boundaries are given in screen coordinates relative to
the display-window position.
If a coordinate position is at the centre of the clipping window, for instance, it
would be mapped to the centre of the viewport.
Position(Xw,Yw) in the clipping window is mapped into position (Xv,Yv) in the
associated viewport.
We can obtain the transformation from the world coordinates to viewport
coordinates with the sequence as shown below
Step 1: Scale the clipping window to the size of the viewport using a fixed-point
position of (xwmin, ywmin).
Step 2: Translate (xwmin, ywmin) to (xvmin, yvmin).
To transform the world-coordinate point into the same relative position within
the viewport, we require that
    (xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
    (yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)
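Solving these relations for (xv, yv) gives the usual scale-and-offset mapping; a C
sketch (window and viewport limits passed as parameters) is:

/* Map a world-coordinate point (xw, yw) in the clipping window
   to the point (xv, yv) in the associated viewport. */
void windowToViewport(float xw, float yw,
                      float xwmin, float xwmax, float ywmin, float ywmax,
                      float xvmin, float xvmax, float yvmin, float yvmax,
                      float *xv, float *yv)
{
    float sx = (xvmax - xvmin) / (xwmax - xwmin);   /* x scaling factor */
    float sy = (yvmax - yvmin) / (ywmax - ywmin);   /* y scaling factor */

    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}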
Clipping Algorithms
Generally, any procedure that eliminates those portions of a picture that are
either inside or outside of a specified region of space is referred to as a clipping
algorithm or simply clipping.
Usually a clipping region is a rectangle in standard position, although we could
use any shape for a clipping application.
The following are few two dimensional algorithms
1. Point Clipping
2. Line Clipping(Straight line segment)
3. Fill Area Clipping(Polygons)
4. Curve Clipping
5. Text Clipping
Two Dimensional Point Clipping
For a Clipping rectangle in standard position, we save a two-dimensional point
P = (x,y) for display if the following inequalities are satisfied
xmin <= x <=xmax
ymin <= y <=ymax
If any one of these four inequalities is not satisfied, the point is clipped (not
saved for display)
Although point clipping is applied less often than line or polygon clipping, it is
useful in various situations, particularly when pictures are modeled with particle
systems.
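The test is a direct pair of comparisons, as in this minimal C sketch:

/* Return 1 if the point (x, y) lies inside (or on) the clipping
   rectangle, 0 if it is clipped. */
int clipPoint(float x, float y,
              float xmin, float xmax, float ymin, float ymax)
{
    return (xmin <= x && x <= xmax &&
            ymin <= y && y <= ymax);
}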
Two dimensional Line Clipping
Cohen-Sutherland line Clipping
The Cohen-Sutherland two-dimensional line-clipping algorithm divides the plane
into a number of sections (specifically nine regions), and each line endpoint is
assigned its own unique 4-bit binary number, called an out code or region code.
And each bit position is used to indicate whether the point is inside or outside
one of the clipping-window boundaries.
    1001 | 1000 | 1010
    -----+------+-----
    0001 | 0000 | 0010        (0000 = clipping window)
    -----+------+-----
    0101 | 0100 | 0110
The nine binary region codes for identifying the position of a line endpoint relative
to the clipping-window boundaries.
One possible ordering with the bit positions numbered 1 through 4 from
right to left.
Thus, for this ordering, the rightmost position (bit 1) references the left
clipping-window boundary, and the leftmost position (bit 4) references the top
window boundary.
A value of 1 (or true) in any bit position indicates that the endpoint is
outside of that window border. Similarly, a value of 0 (or false) in any bit position
indicates that the end point is not outside (it is inside or on) the corresponding
window edge.
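With this bit ordering (bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top,
matching the nine codes shown above), the region code of an endpoint can be computed as
in the following C sketch. A line whose two endpoint codes are both 0000 is trivially
accepted; if the bitwise AND of the two codes is nonzero, the line is trivially rejected.

#define LEFT_EDGE   0x1   /* bit 1 */
#define RIGHT_EDGE  0x2   /* bit 2 */
#define BOTTOM_EDGE 0x4   /* bit 3 */
#define TOP_EDGE    0x8   /* bit 4 */

/* Compute the 4-bit Cohen-Sutherland region code of point (x, y)
   with respect to the clipping window [xmin, xmax] x [ymin, ymax]. */
int regionCode(float x, float y,
               float xmin, float xmax, float ymin, float ymax)
{
    int code = 0;
    if (x < xmin) code |= LEFT_EDGE;
    if (x > xmax) code |= RIGHT_EDGE;
    if (y < ymin) code |= BOTTOM_EDGE;
    if (y > ymax) code |= TOP_EDGE;
    return code;                 /* 0000 means the point is inside */
}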
Polygon clipping
A polygon is a collection of lines, so we might think that line clipping can be
used directly for polygon clipping.
However, when a closed polygon is clipped as a collection of lines with a line
clipping algorithm, the original closed polygon becomes one or more open polylines or
discrete line segments. Thus we need to modify the line clipping algorithm to clip
polygons.
We consider a polygon as a closed solid area; hence after clipping it should remain
closed. To achieve this we require an algorithm that will generate the additional line
segments which make the clipped polygon a closed area.
The lines a-b, c-d, d-e, f-g, g-h, i-j are added to polygon description to make it closed.
Sutherland Hodgeman polygon clipping
A polygon can be clipped by processing its boundary as a whole against each
window edge. This is achieved by processing all polygon vertices against each clip
rectangle boundary in turn.
Beginning with the original set of polygon vertices, we could first clip the polygon
against the left rectangle boundary to produce a new sequence of vertices.
The new set of vertices could then be successively passed to a right boundary
clipper, a top boundary clipper & a bottom boundary clipper as shown in fig.
At each step a new set of polygon vertices is generated and passed to the next
window boundary clipper. This is the fundamental idea in the Sutherland Hodgeman
algorithm.
The output of the algorithm is a list of polygon vertices, all of which are on the
visible side of the clipping plane. This is achieved by processing the two vertices of
each edge of the polygon against the clipping boundary or plane.
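As a sketch of what one boundary clipper does (here the left boundary x = xmin; the
Point2D type and the array-based output are simplifying assumptions), each polygon edge
from s to p contributes zero, one, or two vertices to the output list:

typedef struct { float x, y; } Point2D;

/* Clip one polygon edge (s -> p) against the left boundary x = xmin.
   Appends the resulting vertices to out[] and returns the new count n. */
int clipEdgeLeft(Point2D s, Point2D p, float xmin, Point2D out[], int n)
{
    int sIn = (s.x >= xmin);
    int pIn = (p.x >= xmin);

    if (sIn && pIn) {                 /* both inside: output p          */
        out[n++] = p;
    } else if (sIn && !pIn) {         /* leaving: output intersection   */
        float t = (xmin - s.x) / (p.x - s.x);
        out[n].x = xmin;
        out[n].y = s.y + t * (p.y - s.y);
        n++;
    } else if (!sIn && pIn) {         /* entering: intersection, then p */
        float t = (xmin - s.x) / (p.x - s.x);
        out[n].x = xmin;
        out[n].y = s.y + t * (p.y - s.y);
        n++;
        out[n++] = p;
    }                                 /* both outside: output nothing   */
    return n;
}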
Curves
A Curve is a continuous map from a one-dimensional space to an n-dimensional space.
Properties of Curves
Local properties:
continuity.
position at a specific place on the curve.
direction at a specific place on the curve.
curvature .
Global properties:
Types of Curves
Quadratic Curves: They are curves of 2nd order. The equation for a quadratic curve is
    x(t) = at² + bt + c
Cubic Curves: They are curves of 3rd order. The equation for a cubic curve is
    x(t) = at³ + bt² + ct + d
If there are only two points they define a line (1st order).
If there are three points they define a quadratic curve (2nd order).
Four points define a cubic curve (3rd order).
Bezier Curve
A Bezier curve is a parametric curve that is used to model smooth curves which can be
scaled indefinitely.
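For example, a cubic Bezier curve defined by four control points P0..P3 can be
evaluated at a parameter t in [0, 1] using the Bernstein form; a C sketch:

typedef struct { float x, y; } Pt;

/* Evaluate a cubic Bezier curve at parameter t (0 <= t <= 1)
   from its four control points using the Bernstein polynomials:
   B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3   */
Pt bezierCubic(Pt p0, Pt p1, Pt p2, Pt p3, float t)
{
    float u  = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * t;
    float b2 = 3.0f * u * t * t;
    float b3 = t * t * t;

    Pt r;
    r.x = b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x;
    r.y = b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y;
    return r;
}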
Projection
Projection is the mapping of 3D coordinates to 2D coordinates; it transforms points
from the camera coordinate system to the screen.
Parallel projection
The center of projection is at infinity, and the direction of projection (DOP) is the
same for all points.
Perspective projection
Maps points onto “view plane” along projectors emanating from “center of
projection” (COP).
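For a center of projection at the origin and a view plane at z = d, similar triangles
give the projected coordinates xp = d·x/z and yp = d·y/z; a small C sketch of this (the
choice of view-plane distance d is an assumption for illustration):

/* Perspective-project a camera-space point (x, y, z) onto the
   view plane z = d, with the center of projection at the origin. */
void perspectiveProject(float x, float y, float z, float d,
                        float *xp, float *yp)
{
    *xp = d * x / z;   /* similar triangles: xp / d = x / z */
    *yp = d * y / z;
}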