
GEOMETRIC TRANSFORMATIONS
 Basic transformations:
   Translation
   Scaling
   Rotation
 Purposes:
   To move the position of objects
   To alter the shape / size of objects
   To change the orientation of objects
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 Two-dimensional translation
   A rigid-body transformation: it moves objects without deformation
   Translate an object by adding offsets to its coordinates to generate new coordinate positions
   Let tx, ty be the translation distances; then
      x' = x + tx        y' = y + ty
   In matrix form, where T is the translation vector:
      P' = [x']  =  [x]  +  [tx]        P' = P + T
           [y']     [y]     [ty]
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 We could translate an object by applying the equation to every point of the object.
 Because each line in an object is made up of an infinite set of points, however, this process would take infinitely long.
 Fortunately, we can translate all the points on a line by translating only the line's endpoints and drawing a new line between the new endpoints.
 This figure translates the "house" by (3, -4).
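As a minimal sketch of translating the endpoints in code (the Point2D type and the translatePoint name are illustrative, not from the slides):

#include <stdio.h>

/* Illustrative 2D point type. */
typedef struct { double x, y; } Point2D;

/* Translate a point by the offsets (tx, ty): x' = x + tx, y' = y + ty. */
Point2D translatePoint(Point2D p, double tx, double ty)
{
    Point2D q = { p.x + tx, p.y + ty };
    return q;
}

int main(void)
{
    /* Translate the two endpoints of a line by (3, -4), as in the "house" example. */
    Point2D a = { 1.0, 1.0 }, b = { 3.0, 1.0 };
    Point2D a2 = translatePoint(a, 3.0, -4.0);
    Point2D b2 = translatePoint(b, 3.0, -4.0);
    printf("(%.1f, %.1f) -> (%.1f, %.1f)\n", a.x, a.y, a2.x, a2.y);
    printf("(%.1f, %.1f) -> (%.1f, %.1f)\n", b.x, b.y, b2.x, b2.y);
    return 0;
}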
TRANSLATION EXAMPLE
 [Figure: a triangle with vertices (1, 1), (3, 1), and (2, 3) plotted on x-y axes.]
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 Two-dimensional rotation
   A rotation axis (pivot point) and a rotation angle θ are specified
   Convert coordinates into polar form for the derivation:
      x = r cos φ        y = r sin φ
   To rotate the point through an angle θ, the new coordinates are
      x' = r cos(φ + θ) = r cos φ cos θ - r sin φ sin θ = x cos θ - y sin θ
      y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ = x sin θ + y cos θ
   In matrix form:
      R = [cos θ   -sin θ]        P' = R · P
          [sin θ    cos θ]
 Rotation about a point (xr, yr):
      x' = xr + (x - xr) cos θ - (y - yr) sin θ
      y' = yr + (x - xr) sin θ + (y - yr) cos θ
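A minimal C sketch of rotation about a pivot point, assuming an illustrative Point2D type (the names are not from the slides):

#include <math.h>

typedef struct { double x, y; } Point2D;

/* Rotate point p by angle theta (radians, counterclockwise) about the pivot (xr, yr):
     x' = xr + (x - xr) cos(theta) - (y - yr) sin(theta)
     y' = yr + (x - xr) sin(theta) + (y - yr) cos(theta) */
Point2D rotateAboutPivot(Point2D p, double xr, double yr, double theta)
{
    double c = cos(theta), s = sin(theta);
    Point2D q;
    q.x = xr + (p.x - xr) * c - (p.y - yr) * s;
    q.y = yr + (p.x - xr) * s + (p.y - yr) * c;
    return q;
}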
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 This figure shows the rotation of the house by 45 degrees.
 [Figure: the "house" before and after a 45° rotation about the origin.]
 Positive angles are measured counterclockwise (from x towards y)
 For negative angles, you can use the identities:
      cos(-θ) = cos(θ) and sin(-θ) = -sin(θ)
ROTATION EXAMPLE
 [Figure: a triangle with vertices (3, 1), (5, 1), and (4, 3) plotted on x-y axes.]
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 Two-dimensional scaling
   Alter the size of an object by multiplying its coordinates by scaling factors sx and sy:
      x' = x · sx        y' = y · sy
   In matrix form, where S is a 2-by-2 scaling matrix:
      [x']   [sx   0 ] [x]        P' = S · P
      [y'] = [ 0   sy] [y]
   Choosing a fixed point (xf, yf), e.g. the object's centroid, as the centre of scaling:
      x' = x · sx + xf (1 - sx)
      y' = y · sy + yf (1 - sy)
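A minimal C sketch of fixed-point scaling, again assuming an illustrative Point2D type:

typedef struct { double x, y; } Point2D;

/* Scale point p by (sx, sy) about the fixed point (xf, yf):
     x' = x*sx + xf*(1 - sx)
     y' = y*sy + yf*(1 - sy) */
Point2D scaleAboutFixedPoint(Point2D p, double sx, double sy,
                             double xf, double yf)
{
    Point2D q;
    q.x = p.x * sx + xf * (1.0 - sx);
    q.y = p.y * sy + yf * (1.0 - sy);
    return q;
}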
BASIC TWO-DIMENSIONAL
GEOMETRIC TRANSFORMATIONS
 In this figure, the house is scaled by 1/2 in x and 1/4 in y
 Notice that the scaling is about the origin:
   The house is smaller and closer to the origin
SCALING
 If the scale factor had been greater than 1, the house would be larger and farther away
 Note: Objects grow and move!
 [Figure: the scaled house plotted on x-y axes. Note: the house shifts position relative to the origin.]
SCALING EXAMPLE
 [Figure: a triangle with vertices (1, 1), (3, 1), and (2, 3) plotted on x-y axes.]
HOMOGENEOUS COORDINATES
 A point (x, y) can be re-written in homogeneous coordinates as (xh, yh, h)
 The homogeneous parameter h is a non-zero value such that:
      x = xh / h        y = yh / h
 We can then write any point (x, y) as (h·x, h·y, h)
 We can conveniently choose h = 1 so that (x, y) becomes (x, y, 1)
WHY HOMOGENEOUS
COORDINATES?
 Mathematicians commonly use homogeneous coordinates as they allow scaling factors to be removed from equations
 We will see in a moment that all of the transformations we discussed previously can be represented as 3×3 matrices
 Using homogeneous coordinates allows us to use matrix multiplication to calculate transformations – extremely efficient!
HOMOGENEOUS COORDINATES
 Combine the geometric transformations into single 3-by-3 matrices
 Expand each 2D coordinate to a 3D coordinate with the homogeneous parameter
 Two-dimensional translation matrix:
      [x']   [1  0  tx] [x]
      [y'] = [0  1  ty] [y]
      [1 ]   [0  0  1 ] [1]
 Two-dimensional rotation matrix:
      [x']   [cos θ  -sin θ  0] [x]
      [y'] = [sin θ   cos θ  0] [y]
      [1 ]   [  0       0    1] [1]
 Two-dimensional scaling matrix:
      [x']   [sx  0   0] [x]
      [y'] = [0   sy  0] [y]
      [1 ]   [0   0   1] [1]
INVERSE TRANSFORMATIONS
 Inverse translation matrix:
      T^-1 = [1  0  -tx]
             [0  1  -ty]
             [0  0   1 ]
 Inverse rotation matrix:
      R^-1 = [ cos θ  sin θ  0]
             [-sin θ  cos θ  0]
             [  0      0     1]
 Inverse scaling matrix:
      S^-1 = [1/sx    0    0]
             [ 0    1/sy   0]
             [ 0     0     1]
REFLECTION
COMPOSITE TRANSFORMATION
VIEWING IN 2 DIMENSIONS
 In 2D, a 'world' consists of an infinite plane, defined in 'world' coordinates, i.e. metres or other units appropriate to the model
 We normally pick an area of the 2D plane to view, referred to as the viewing 'window'
 On our display device, we need to allocate an area for display, referred to as the 'viewport', in device-specific coordinates
 We "Clip" objects outside of the window
 We "Translate" coordinates to fit the viewport
 We "Scale" to the device coordinates
 Window – A world-coordinate area selected for display is called a window.
   - defines what is to be displayed
 Viewport – An area on a display device to which a window is mapped is called a viewport.
   - defines where it is to be displayed
 Clipping – The technique of not showing the part of the drawing in which one is not interested is called clipping.
 Transformation – The mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation, windowing transformation, or normalization transformation.
VIEWING IN 2D - VIEWPORT
 [Figure: a window in world coordinates mapped to a 250 × 250 pixel viewport in device coordinates.]
 [Figure: a point (xw, yw) in the clipping window (xwmin..xwmax, ywmin..ywmax) maps to (xv, yv) in the normalized viewport (xvmin..xvmax, yvmin..yvmax).]
 Maintain relative size and position between the clipping window and the viewport:
      (xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
      (yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
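A minimal C sketch of the window-to-viewport mapping implied by these equations (the function name is an illustrative assumption):

/* Map a world point (xw, yw) in the clipping window to (xv, yv) in the
   viewport, preserving relative position. sx and sy are the scale factors. */
void windowToViewport(double xw, double yw,
                      double xwmin, double xwmax, double ywmin, double ywmax,
                      double xvmin, double xvmax, double yvmin, double yvmax,
                      double *xv, double *yv)
{
    double sx = (xvmax - xvmin) / (xwmax - xwmin);
    double sy = (yvmax - yvmin) / (ywmax - ywmin);
    *xv = xvmin + (xw - xwmin) * sx;
    *yv = yvmin + (yw - ywmin) * sy;
}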
2D VIEWING TRANSFORMATION
PIPELINE
 Modeling Coordinates → construct the world-coordinate scene using modeling-coordinate transformations
 World Coordinates → convert world coordinates to viewing coordinates
 Viewing Coordinates → transform viewing coordinates to normalized coordinates
 Normalized Coordinates → map normalized coordinates to device coordinates
 Device Coordinates
Clipping Operations
 Clipping algorithm: identifies those portions of a picture that are either inside or outside of a specified region of space.
 Clip window: the region against which an object is to be clipped.
 • Point Clipping
 • Line Clipping (straight-line segments)
 • Area Clipping (polygons)
 • Text Clipping
POINT CLIPPING
 [Figure: a clip rectangle bounded by x = xmin, x = xmax, y = ymin, y = ymax, with corner (xmax, ymax) and a point (x1, y1).]
 For a point (x, y) to be inside the clip rectangle:
      xmin ≤ x ≤ xmax
      ymin ≤ y ≤ ymax
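A minimal C sketch of the point-clipping test (the function name is an illustrative assumption):

/* Point clipping: returns 1 if (x, y) lies inside the clip rectangle. */
int clipPoint(double x, double y,
              double xmin, double xmax, double ymin, double ymax)
{
    return (xmin <= x && x <= xmax) && (ymin <= y && y <= ymax);
}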
CLIPPING IN 2D
 Need to clip primitives against the sides of the viewing window
   e.g. lines or polygons
 We only see what is inside the window
COHEN-SUTHERLAND CLIPPING
ALGORITHM
 This is an efficient method of accepting or rejecting lines that do not intersect the window edges.
 Assign a binary 4-bit code to each vertex:
   First bit: above top of window, y > ymax
   Second bit: below bottom, y < ymin
   Third bit: to the right of the right edge, x > xmax
   Fourth bit: to the left of the left edge, x < xmin
 The 4-bit code is called an outcode
COHEN-SUTHERLAND 2D OUTCODES
      1001 | 1000 | 1010
      0001 | 0000 | 0010
      0101 | 0100 | 0110
 (The centre region, 0000, is the clip window.)
COHEN-SUTHERLAND ALGORITHM
      1001 | 1000 | 1010
      0001 | 0000 | 0010
      0101 | 0100 | 0110
 If both endpoint codes are 0000: trivial acceptance; else:
 Do a logical AND of the outcodes (reject if non-zero)
COHEN-SUTHERLAND ALGORITHM
 [Figure: example line segments with endpoint outcodes such as 1000, 0001 and 0000 drawn over the nine outcode regions.]
 Take the logical AND of the codes of the two endpoints;
 reject the line if the result is non-zero – trivial rejection.
LINE INTERSECTION
 Now we need to intersect line segments with the edges of the clip rectangle.
 Select any clip edge, do trivial line-splitting, and feed the two new lines back into the algorithm – known as re-entrant clipping.
 Alternatively, express the line in parametric form to handle vertical lines (i.e. x = x(t), y = y(t), parameter t):
   Substitute for x or y.
   Solve for t.
 Need to perform 4 intersection checks for each line.
OUTCODE COMPUTATION
typedef unsigned int outcode;
enum {TOP = 0x1, BOTTOM = 0x2, RIGHT = 0x4, LEFT = 0x8};

/* Compute the Cohen-Sutherland outcode of point (x, y)
   against the clip rectangle [xmin, xmax] x [ymin, ymax]. */
outcode CompOutCode(double x, double y,
                    double xmin, double xmax,
                    double ymin, double ymax)
{
    outcode code = 0;
    if (y > ymax)
        code |= TOP;
    else if (y < ymin)
        code |= BOTTOM;
    if (x > xmax)
        code |= RIGHT;
    else if (x < xmin)
        code |= LEFT;
    return code;
}
COHEN-SUTHERLAND LINE CLIPPING
 For those lines that we cannot immediately accept or reject, we successively clip against each boundary (a full clip loop is sketched below).
 Then check the new endpoint for its region code.
 How to find a boundary intersection: to find the y coordinate at a vertical boundary, substitute the boundary's x value into the equation of the line being clipped.
 Other algorithms:
   Faster: Cyrus-Beck
   Even faster: Liang-Barsky
   Even faster: Nicholl-Lee-Nicholl
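Putting the pieces together, a sketch of the full Cohen-Sutherland clip loop; it reuses the outcode constants and CompOutCode routine shown earlier, while clipAndDrawLine and drawLine are names assumed for this sketch:

void drawLine(double x0, double y0, double x1, double y1);  /* assumed drawing routine */

void clipAndDrawLine(double x0, double y0, double x1, double y1,
                     double xmin, double xmax, double ymin, double ymax)
{
    outcode out0 = CompOutCode(x0, y0, xmin, xmax, ymin, ymax);
    outcode out1 = CompOutCode(x1, y1, xmin, xmax, ymin, ymax);

    for (;;) {
        if ((out0 | out1) == 0) {             /* trivial accept */
            drawLine(x0, y0, x1, y1);
            return;
        }
        if ((out0 & out1) != 0)               /* trivial reject */
            return;

        /* Pick an endpoint that lies outside and clip it to one boundary. */
        outcode out = out0 ? out0 : out1;
        double x, y;
        if (out & TOP)         { x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax; }
        else if (out & BOTTOM) { x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin; }
        else if (out & RIGHT)  { y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax; }
        else                   { y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin; }

        if (out == out0) { x0 = x; y0 = y; out0 = CompOutCode(x0, y0, xmin, xmax, ymin, ymax); }
        else             { x1 = x; y1 = y; out1 = CompOutCode(x1, y1, xmin, xmax, ymin, ymax); }
    }
}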
LIANG-BARSKY LINE CLIPPING
ALGORITHM
 Treats the undecided lines of Cohen-Sutherland more efficiently.
 Defines the clipping window by the intersection of four half-planes.
 [Figure: a line segment from (x0, y0) to (xend, yend) crossing the clip window bounded by xwmin, xwmax, ywmin, ywmax.]
Parametric presentation:
      x = x0 + u (xend - x0),   y = y0 + u (yend - y0),   0 ≤ u ≤ 1.
A point on the line is contained in the clipping window iff:
      xwmin ≤ x0 + u (xend - x0) ≤ xwmax,
      ywmin ≤ y0 + u (yend - y0) ≤ ywmax.
This can be expressed as u·pk ≤ qk, k = 1, 2, 3, 4, where
      p1 = x0 - xend,   q1 = x0 - xwmin;
      p2 = xend - x0,   q2 = xwmax - x0;
      p3 = y0 - yend,   q3 = y0 - ywmin;
      p4 = yend - y0,   q4 = ywmax - y0.
In the inequality u·pk ≤ qk, if pk < 0 (pk > 0), then traversing from (x0, y0) to (xend, yend) by increasing u from -∞ to +∞ takes the line from the outside (inside) half-plane to the inside (outside) one, with respect to the k-th border.
The intersection of the line's infinite extension with the k-th border occurs at u = qk / pk.
We calculate and update u0 and uend progressively for the k = 1, 2, 3, 4 borders (left, right, bottom, top).
If pk < 0, u0 is updated, since the progression is from the outside to the inside half-plane. Similarly, if pk > 0, uend is updated.
u0 is the maximum of 0 and all the entering values qk / pk. uend is the minimum of 1 and all the leaving values qk / pk. The feasibility condition u0 ≤ uend is progressively checked. The line is completely outside if uend < u0.
Notice that qk / pk doesn't need an actual division, since the comparison of q/p with q'/p' can be done by comparing q·p' with q'·p, and the quotient q/p can be stored as the pair (q, p).
Only if uend ≥ u0, given by (q0, p0) and (qend, pend), are the actual ends of the clipped portion calculated.
This is more efficient than the Cohen-Sutherland algorithm, which computes intersections with the clipping-window borders for each undecided line as part of the feasibility tests.
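A C sketch of the per-boundary Liang-Barsky test described above (the clipTest name is an illustrative assumption; for simplicity it performs the division that the slides note can be avoided):

/* Process one boundary's (p, q) pair, updating the entering/leaving
   parameters u0 and uend.  Returns 0 if the line can be rejected. */
int clipTest(double p, double q, double *u0, double *uend)
{
    if (p < 0.0) {                 /* line proceeds outside -> inside: entering */
        double r = q / p;
        if (r > *uend) return 0;
        if (r > *u0)   *u0 = r;
    } else if (p > 0.0) {          /* line proceeds inside -> outside: leaving */
        double r = q / p;
        if (r < *u0)   return 0;
        if (r < *uend) *uend = r;
    } else if (q < 0.0) {          /* line parallel to the boundary and outside it */
        return 0;
    }
    return 1;
}

/* The caller tests the four boundaries in turn, with u0 = 0 and uend = 1 initially:
     dx = xend - x0, dy = yend - y0;
     clipTest(-dx, x0 - xwmin, &u0, &uend)   left
     clipTest( dx, xwmax - x0, &u0, &uend)   right
     clipTest(-dy, y0 - ywmin, &u0, &uend)   bottom
     clipTest( dy, ywmax - y0, &u0, &uend)   top
   If all four succeed, the clipped endpoints are P(u0) and P(uend). */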
PSEUDOCODE
Pre-calculate Ni and select a point PEi on each edge;
for each line segment to be clipped
    if P1 = P0 then the line is degenerate, so clip it as a point;
    else
    begin
        tE = 0; tL = 1;
        for each candidate intersection with a clip edge
            if Ni • D ≠ 0 then  {ignore edges parallel to the line}
            begin
                calculate t;  {of the line and clip-edge intersection}
                use the sign of Ni • D to categorize it as PE or PL;
                if PE then tE = max(tE, t);
                if PL then tL = min(tL, t);
            end
        if tE > tL then return nil
        else return P(tE) and P(tL) as the true clip intersections
    end
PARAMETRIC LINE-CLIPPING
ALGORITHM
• Introduced by Cyrus and Beck in 1978
• Efficiency improved by Liang and Barsky
• Essentially find the parameter t from P(t) = P0 + (P1 - P0)t:
      Ni • [P(t) - PEi] = 0
      Ni • [P0 + (P1 - P0)t - PEi] = 0
      Ni • [P0 - PEi] + Ni • [P1 - P0] t = 0
      t = Ni • [P0 - PEi] / (-Ni • D),   where D = (P1 - P0)
PARAMETRIC LINE-CLIPPING
ALGORITHM (CONT.)
      Ni • D < 0  ⇒  PE (angle > 90°)
      Ni • D > 0  ⇒  PL (angle < 90°)
• Formally, intersections are classified as PE (potentially entering) or PL (potentially leaving) on the basis of the angle between P0P1 and Ni
• Determine tE or tL for each intersection
• Select the line segment that has the maximum tE and the minimum tL
• If tE > tL, the line is trivially rejected
CYRUS-BECK ALGORITHM
(PSEUDOCODE)
CLIPPING POLYGONS
 Clip polygons by clipping successively against all 4 sides
 Can be implemented as a pipelined algorithm (i.e. special hardware can do the work)
 Recursively test each edge:
   Form a new edge with the next vertex
   Call with the new edge
Sutherland-Hodgman Polygon Clipping
 The polygon passes through a pipeline of clippers: Left Clipper → Right Clipper → Bottom Clipper → Top Clipper.
 At each step, a new sequence of output vertices is generated and passed to the next window-boundary clipper.
Sutherland-Hodgman Clipping
Algorithm
 Four cases of polygon clipping, for an edge from S to P tested against one boundary (inside/outside):
   Case 1: S and P both inside – output the second vertex P
   Case 2: S inside, P outside – output the intersection point
   Case 3: S and P both outside – no output
   Case 4: S outside, P inside – output the intersection point first, then the vertex P
SUTHERLAND-HODGMAN ALGORITHM
 Input:
   v1, v2, … vn, the vertices defining the polygon
   A single infinite clip edge with inside/outside information
 Output:
   v'1, v'2, … v'm, the vertices of the clipped polygon
 Do this 4 times (once per clip edge, i.e. ne times)
 Traverse the vertices (edges)
 Add vertices one at a time to the output polygon
   Use the inside/outside info
   Compute edge intersections
SUTHERLAND-HODGMAN ALGORITHM
 Can be done incrementally
 If the first point is inside, add it. If outside, don't add it
 Move around the polygon from vi to vn and back to v1
 Check each pair vi, vi+1 with respect to the clip edge
   Need vi, vi+1's inside/outside status
 Add vertices one at a time. There are 4 cases:
Sutherland-Hodgman Algorithm
foreach polygon P
    P' = P
    foreach clipping edge (there are 4) {
        Clip polygon P' to the clipping edge
        foreach edge (vi, vi+1) in polygon P'
            Check the clipping cases (there are 4)
            • Case 1: Output vi+1
            • Case 2: Output intersection point
            • Case 3: No output
            • Case 4: Output intersection point & vi+1
    }
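A C sketch of clipping a polygon against a single boundary in the Sutherland-Hodgman style (the Point2D type and the inside/intersect helpers are illustrative assumptions, not part of the original slides):

typedef struct { double x, y; } Point2D;

/* Clip polygon 'in' (n vertices) against one boundary, writing the result
   to 'out' and returning the new vertex count.  'inside' tests a point
   against the boundary; 'intersect' returns the edge/boundary intersection. */
int clipAgainstEdge(const Point2D *in, int n, Point2D *out,
                    int (*inside)(Point2D),
                    Point2D (*intersect)(Point2D, Point2D))
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Point2D s = in[i];              /* current edge runs from s ... */
        Point2D p = in[(i + 1) % n];    /* ... to p                     */
        if (inside(p)) {
            if (!inside(s))
                out[m++] = intersect(s, p);   /* case 4: outside -> inside */
            out[m++] = p;                     /* cases 1 and 4             */
        } else if (inside(s)) {
            out[m++] = intersect(s, p);       /* case 2: inside -> outside */
        }                                     /* case 3: no output         */
    }
    return m;
}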
CLIPPING TO A BOUNDARY
 Do an inside test for each point in the sequence,
   insert new points when an edge crosses the window boundary,
   remove points outside the window boundary
 [Figure: a polygon P1–P5 clipped against one window boundary. Intersection points P' and P'' are inserted where edges cross the boundary, and the outside vertices P3, P4 and P5 are removed, leaving P1, P2, P' and P''.]
ALGORITHM
 (Flowchart for clipping one input vertex P against a clip edge E, Sutherland-Hodgman style:)
 On the first point: save it as F = P.
 Otherwise: if the segment SP intersects E, compute the intersection point I and output vertex I.
 Set S = P; if S is on the inside (left) side of E, output vertex S.
 On the Close Polygon entry: if the segment SF intersects E, compute the intersection point I and output vertex I; then exit.
SUTHERLAND-HODGMAN ALGORITHM
 [Animation: a polygon clipped step by step against the window boundaries.]
FINAL RESULT
 [Figure: the clipped polygon. Note the extra edges XY and ZW introduced by clipping!]
ISSUES WITH SUTHERLAND-HODGMAN
ALGORITHM
 Clipping of a concave polygon
 Can produce two CONNECTED areas (joined by extraneous boundary edges)
Sutherland-Hodgman Algorithm
 Loop round the vertices, testing each against all 4 clipping planes in sequence
 Call the algorithm again with the new edge formed by the next vertex – re-entrant
 No storage requirement between stages
 Easy to implement in hardware
Text Clipping
There are several techniques that can be used to provide text clipping in a graphics package. The choice of clipping method depends on how characters are generated and what requirements we have for displaying character strings.

All-or-none string clipping
• If all of the string is inside the clip window, we keep it.
• Otherwise the string is discarded.

All-or-none character clipping
• Here we discard only those characters that are not completely inside the window.

Clipping the components of individual characters
• We treat characters in much the same way that we treated lines.
• If an individual character overlaps a clip window boundary, we clip off the parts of the character that are outside the window.
INPUT DEVICES
 Logical Input Devices
   Categorized based on functional characteristics.
   Each device transmits a particular kind of data.
   The different types of data are called input primitives.
 Physical Input Devices
   Categorized based on the physical machine.
LOGICAL INPUT DEVICES
 Locator Devices
   A device that allows the user to specify one coordinate position
   Input primitive: coordinate position (x, y)
   Examples: mouse, keyboard (cursor-position keys), tablet (digitizer), trackballs, light pens
   Applications: interactive drawing and editing, graph digitizing
 [Figures: a digitizer tablet; a joystick and a trackball.]
LOGICAL INPUT DEVICES
 String Devices
   Input primitive: a string of characters
   Example: keyboard
   Applications: text input
LOGICAL INPUT DEVICES
 Valuator Devices
   Input primitive: scalar values (typically between 0 and 1)
   Examples: control dials, sensing devices, joysticks
   Applications: input of graphics parameters, graphic representation of analog values, process simulation, games
LOGICAL INPUT DEVICES
 Choice Devices
   Input primitive: a selection from a list of options
   Examples: mouse, keyboard (function keys), touch panel, etc.
   Applications: interactive menu selection, program control
LOGICAL INPUT DEVICES
 Pick Devices
   Input primitive: selection of a part of the screen
   Examples: mouse, cursor keys, tablet
   Applications: interactive editing and positioning
PHYSICAL INPUT DEVICES
 Keyboard (input functions: String, Choice, Locator)
 Mouse (input functions: Locator, Pick, Choice)
 Joystick (input functions: Locator, Valuator)
 Knob (Valuator)
 Tablet (Locator, Pick)
3D INTERACTION DEVICES (RECENT ADDITIONS)
 These devices are used in advanced rendering methods and virtual-reality systems to provide information about three-dimensional positions and motion. A few examples are:
   1. DATA GLOVES
   2. SPACE BALLS
   3. IMAGING SENSORS
3D INPUT DEVICES
 Gloves
   Attach an electromagnetic tracker to the hand
 Pinch gloves
UNSUCCESSFUL 3D INPUT DEVICES
 Commercial failures:
   Spaceball
   Flymouse
SOME CURRENT INPUT DEVICE RESEARCH
 Non-standard input devices
   Reconfigurable devices
   Tool handles/props, with attached sensors
 Passive input devices
   Would like to separate the user from the devices
   Voice recognition without a headset
     not successful yet
   Image-based analysis
     video camera trained on the user
THREE DIMENSIONAL
GRAPHICS
 It is the field of computer graphics that deals with generating and displaying three-dimensional objects in a two-dimensional space (e.g. a display screen)
 In addition to color and brightness, a 3-D pixel adds a depth property that indicates where the point lies on the imaginary z-axis.
THREE DIMENSIONAL GRAPHICS
 When many 3-D pixels are combined, each with its own depth value, the result is a 3-D surface, called a texture.
 Objects are created on a 3-D stage where the current view is derived from the camera and light sources, similar to the real world.
COORDINATE REFERENCE
 This coordinate reference defines the position and orientation for the plane of the camera film.
THREE-DIMENSIONAL DISPLAY METHODS
Parallel projection
 Project points on the object surface along parallel lines onto the display plane.
 Parallel lines are still parallel after projection.
 Used in engineering and architectural drawings.
PARALLEL PROJECTION
 By selecting different viewing positions, we can project visible points on the object onto the display plane to obtain different two-dimensional views of the object.
 [Figure: top view, side view and front view of an object obtained by parallel projection.]
THREE-DIMENSIONAL DISPLAY METHODS
Perspective projection
 Project points to the display plane along converging paths.
 This is the way that our eyes and a camera lens form images, so the displays are more realistic.
PERSPECTIVE PROJECTION
 It has two major characteristics:
   Objects appear smaller as their distance from the observer increases.
   Foreshortening: the dimensions of an object along the line of sight appear relatively shorter than dimensions across the line of sight.
THREE-DIMENSIONAL DISPLAY
METHODS
Depth cueing
 Identify which is the front and which is the back of displayed objects
 For wireframe displays: vary the intensity of objects according to their distance from the viewing position
 Also used to model the atmosphere
THREE-DIMENSIONAL DISPLAY METHODS
 Depth cueing
   To easily identify the front and back of displayed objects.
   Depth information can be included using various methods.
   A simple method is to vary the intensity of objects according to their distance from the viewing position.
   E.g. lines closest to the viewing position are displayed with the highest intensities and lines farther away are displayed with decreasing intensities.
DEPTH CUEING
 Another application is modeling the effect of the atmosphere on the pixel intensity of objects.
 More distant objects appear dimmer to us than nearer objects due to light scattering by dust particles, smoke, etc.
THREE-DIMENSIONAL DISPLAY METHODS
Visible line and surface identification
 Highlight the visible lines or display them in a different color
 Display nonvisible lines as dashed lines
 Remove the nonvisible lines
THREE-DIMENSIONAL DISPLAY
METHODS
Surface rendering
 Set the surface intensity of objects according to
   Lighting conditions in the scene
   Assigned surface characteristics
SURFACE RENDERING
 Lighting specifications include the intensity and positions of light sources and the general background illumination required for a scene.
 Surface properties include the degree of transparency and how rough or smooth the surfaces are to be.
THREE-DIMENSIONAL DISPLAY METHODS
 Exploded and cutaway views
   Used for objects with a hierarchical structure, to include internal details.
   These views show the internal structure and relationships of the object parts.
THREE-DIMENSIONAL DISPLAY
METHODS
Cutaway view
 Remove part of the visible surfaces to show the internal structure.
STEREOPSIS
 The result of the two slightly different views of the external world that our laterally displaced eyes receive.
STEREOSCOPIC DISPLAY
 Stereoscopic images are easy to do badly, hard to do well, and impossible to do correctly.
STEREOSCOPIC DISPLAYS
 Stereoscopic display systems create a three-dimensional image (versus a perspective image) by presenting each eye with a slightly different view of a scene.
 Two approaches: time-parallel and time-multiplexed.
THREE-DIMENSIONAL AND STEREOSCOPIC VIEWS
THREE-DIMENSIONAL GEOMETRIC AND
MODELING TRANSFORMATIONS
 Some Basics
 3D Translations
 3D Scaling
 3D Rotation
 3D Reflections
 Transformations
SOME BASICS
• Basic geometric types:
  – Scalars s
  – Vectors v
  – Points p
• Transformations
  – Types of transformation: rotation, translation, scale, reflections, shears
  – Matrix representation
  – Order
  – P' = T(P)
3D Point
 We will consider points as column vectors. Thus, a typical point with coordinates (x, y, z) is represented as:
      [x]
      [y]
      [z]
3D TRANSLATIONS.
 P is translated to P' by T, called the translation matrix:
      T = [1  0  0  tx]
          [0  1  0  ty]
          [0  0  1  tz]
          [0  0  0  1 ]
3D TRANSLATIONS.
      [x']   [1  0  0  tx] [x]
      [y'] = [0  1  0  ty] [y]        P' = T · P
      [z']   [0  0  1  tz] [z]
      [1 ]   [0  0  0  1 ] [1]
3D TRANSLATIONS.
 An object is translated in 3D by transforming each of the defining points of the object.
 [Figure: a point (x, y, z) translated to (x', y', z') by T = T(tx, ty, tz).]
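A minimal C sketch of building a 4-by-4 homogeneous translation matrix and applying it to a point (the function names are illustrative assumptions):

/* Build T(tx, ty, tz) for column-vector points (x, y, z, 1). */
void makeTranslation(double T[4][4], double tx, double ty, double tz)
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            T[r][c] = (r == c) ? 1.0 : 0.0;   /* start from the identity */
    T[0][3] = tx;
    T[1][3] = ty;
    T[2][3] = tz;
}

/* Apply a 4x4 transform M to the homogeneous point p, writing the result to q. */
void applyTransform(const double M[4][4], const double p[4], double q[4])
{
    for (int r = 0; r < 4; r++)
        q[r] = M[r][0]*p[0] + M[r][1]*p[1] + M[r][2]*p[2] + M[r][3]*p[3];
}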
3D SCALING
 P is scaled to P' by S, called the scaling matrix:
      S = [sx  0   0   0]
          [0   sy  0   0]
          [0   0   sz  0]
          [0   0   0   1]
3D SCALING
 Scaling with respect to the coordinate origin:
      [x']   [sx  0   0   0] [x]
      [y'] = [0   sy  0   0] [y]        P' = S · P
      [z']   [0   0   sz  0] [z]
      [1 ]   [0   0   0   1] [1]
3D SCALING
 Scaling with respect to a selected fixed position (xf, yf, zf):
   1. Translate the fixed point to the origin
   2. Scale the object relative to the coordinate origin
   3. Translate the fixed point back to its original position
      T(xf, yf, zf) · S(sx, sy, sz) · T(-xf, -yf, -zf) =
          [sx  0   0   (1 - sx) xf]
          [0   sy  0   (1 - sy) yf]
          [0   0   sz  (1 - sz) zf]
          [0   0   0        1     ]
3D REFLECTIONS
 About an axis: equivalent to a 180˚ rotation about that axis
 About a plane: e.g. reflection relative to the xy plane (a z-reflection):
      RFz = [1  0   0  0]
            [0  1   0  0]
            [0  0  -1  0]
            [0  0   0  1]
3D SHEARING
• Modify object shapes
• Useful for perspective projections:
  – e.g. drawing a cube (3D) on a screen (2D)
  – Alter the values of x and y by an amount proportional to the distance from zref
      Mzshear = [1  0  shzx  -shzx · zref]
                [0  1  shzy  -shzy · zref]
                [0  0   1         0      ]
                [0  0   0         1      ]
SHEARS
      SHz = [1  0  a  0]
            [0  1  b  0]
            [0  0  1  0]
            [0  0  0  1]
ROTATION
 Positive rotation angles produce counterclockwise rotations about a coordinate axis
ROTATION
COORDINATE-AXES ROTATIONS
 Rotation about the z axis:
      [x']   [cos θ  -sin θ  0  0] [x]       x' = x cos θ - y sin θ
      [y'] = [sin θ   cos θ  0  0] [y]       y' = x sin θ + y cos θ
      [z']   [  0       0    1  0] [z]       z' = z
      [1 ]   [  0       0    0  1] [1]
      P' = Rz(θ) · P
COORDINATE-AXES ROTATIONS
 The other axis rotations follow from the cyclic permutation x → y → z → x
COORDINATE-AXES ROTATIONS
 Rotation about the x axis:
      [x']   [1    0       0    0] [x]       x' = x
      [y'] = [0  cos θ  -sin θ  0] [y]       y' = y cos θ - z sin θ
      [z']   [0  sin θ   cos θ  0] [z]       z' = y sin θ + z cos θ
      [1 ]   [0    0       0    1] [1]
      P' = Rx(θ) · P
COORDINATE-AXES ROTATIONS
 Rotation about the y axis:
      [x']   [ cos θ  0  sin θ  0] [x]       x' = z sin θ + x cos θ
      [y'] = [   0    1    0    0] [y]       y' = y
      [z']   [-sin θ  0  cos θ  0] [z]       z' = z cos θ - x sin θ
      [1 ]   [   0    0    0    1] [1]
      P' = Ry(θ) · P
GENERAL THREE-DIMENSIONAL ROTATIONS
 An object is to be rotated about an axis that is parallel to one of the coordinate axes:
   1. Translate the object so that the rotation axis coincides with the parallel coordinate axis
   2. Perform the specified rotation about that axis
   3. Translate the object so that the rotation axis is moved back to its original position
      P' = T^-1 · Rx(θ) · T · P
      R(θ) = T^-1 · Rx(θ) · T
GENERAL THREE-DIMENSIONAL ROTATIONS
 An object is to be rotated about an axis that is not parallel to one of the coordinate axes:
   1. Translate the object so that the rotation axis passes through the coordinate origin.
   2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
   3. Perform the specified rotation about that coordinate axis.
   4. Apply inverse rotations to bring the rotation axis back to its original orientation.
   5. Apply the inverse translation to bring the rotation axis back to its original position.
3D TRANSFORMATIONS
 [Figure: translation, rotation, scaling, etc. applied to a point P1 → P1' in a 3D coordinate system.]
VISIBLE-SURFACE DETECTION METHODS
 Determine what is visible within a scene from a chosen viewing position
 Two approaches:
   Object-space methods: decide which object, as a whole, is visible
   Image-space methods: the visibility is decided point by point
 Most visible-surface algorithms use image-space methods
 Sometimes, these methods are referred to as hidden-surface elimination
APPROACHES
 Back-Face Removal
 Depth Buffer
 A-Buffer
 Scanline
 Depth Sorting
 BSP Tree
 Area Subdivision
 Octree
BACK-FACE REMOVAL (CULLING)
 Used to remove unseen polygons from a convex, closed polyhedron
 Does not completely solve the hidden-surface problem, since one polyhedron may obscure another
BACK-FACE REMOVAL (CULLING)
 Compute the equation of the plane for each polygon
 A point (x, y, z) is behind a polygon surface if
      Ax + By + Cz + D < 0
 Determining a back face:
   A polygon is a back face if Vview · N > 0, where Vview is the viewing direction and N is the polygon normal
   In projection coordinates, we need to consider only the z component of the normal vector N
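A minimal C sketch of the back-face test (the function name and sign convention are illustrative assumptions; adjust the convention to match how the surface normals are oriented):

/* Back-face test: N = (A, B, C) is the polygon's plane normal and
   (vx, vy, vz) is the viewing direction.  Returns 1 (back face) when
   the viewing vector and the normal point the same way, i.e. Vview . N > 0. */
int isBackFace(double A, double B, double C,
               double vx, double vy, double vz)
{
    double dot = vx * A + vy * B + vz * C;
    return dot > 0.0;
}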
DEPTH-BUFFER (Z-BUFFER)
 The Z-buffer has memory corresponding to each pixel location
   Usually 16 to 20 bits per location
DEPTH-BUFFER (Z-BUFFER)
 Initialize:
   Each z-buffer location ← maximum z value
   Each frame-buffer location ← background color
 For each polygon:
   Compute z(x, y), the polygon depth at pixel (x, y)
   If z(x, y) < the z-buffer value at pixel (x, y), then
     z-buffer(x, y) ← z(x, y)
     pixel(x, y) ← color of the polygon at (x, y)
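A minimal C sketch of the depth-buffer initialization and per-pixel test described above (buffer sizes and names are illustrative assumptions):

#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static double zbuf[HEIGHT][WIDTH];          /* depth stored per pixel      */
static unsigned int frame[HEIGHT][WIDTH];   /* packed RGB color per pixel  */

/* Initialize: every depth to "far away", every pixel to the background. */
void clearBuffers(unsigned int background)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            zbuf[y][x]  = DBL_MAX;
            frame[y][x] = background;
        }
}

/* Per-pixel depth test used while scan-converting a polygon:
   keep the fragment only if it is nearer than what is already stored. */
void plotIfNearer(int x, int y, double z, unsigned int color)
{
    if (z < zbuf[y][x]) {
        zbuf[y][x]  = z;
        frame[y][x] = color;
    }
}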
DEPTH CALCULATION
 Calculate the z value on the polygon's plane:
      Ax + By + Cz + D = 0   ⇒   z = (-Ax - By - D) / C
 Incremental calculation, where z(x, y) is the depth at position (x, y):
      z(x+1, y) = (-A(x+1) - By - D) / C = z(x, y) - A/C
      z(x, y+1) = (-Ax - B(y+1) - D) / C = z(x, y) - B/C
DEPTH-BUFFER (Z-BUFFER)
 Advantages/Disadvantages:
   Requires lots of memory
   Linear performance
   Polygons may be processed in any order
   Modifications are needed to implement antialiasing, transparency, and translucency effects
   Commonly implemented in hardware → very fast
DEPTH-BUFFER (Z-BUFFER)
 [Figure: a scene rendered with backface culling alone versus with the z-buffer algorithm.]
ACCUMULATION BUFFER (A-BUFFER)
 An extension of the depth buffer for dealing with anti-aliasing, area averaging, transparency, and translucency
 The depth-buffer method identifies only one visible surface at each pixel position
   It cannot accumulate color values for more than one transparent or translucent surface
 Even more memory intensive
 Widely used for high-quality rendering
ACCUMULATION BUFFER (A-BUFFER)
 Each position in the A-buffer has two fields:
   Depth field: stores a depth value
   Surface data field:
     RGB intensity components
     Opacity parameter (percent of transparency)
     Depth
     Percent of area coverage
     Surface identifier
SCAN LINE METHOD
 Intersect each polygon with a particular scan line and solve the hidden-surface problem for just that scan line
   Requires a depth buffer equal to only one scan line
   Requires the entire scene data at the time of scan conversion
 Maintain an active polygon list and an active edge list
 Can implement antialiasing as part of the algorithm
DEPTH SORTING
 We need a partial ordering (not a total ordering) of the polygons
   The ordering indicates which polygon obscures which polygon
   Some polygons may not obscure each other
 Simple cases:
 [Figure: simple depth-sorting cases.]
DEPTH SORTING
 We make the following tests for each polygon that has a depth overlap with S.
 If any one of these tests is true, no reordering is necessary for S and the polygon being tested:
   Polygon S is completely behind the overlapping surface relative to the viewing position
   The overlapping polygon is completely in front of S relative to the viewing position
   The boundary-edge projections of the two polygons onto the view plane do not overlap
DEPTH SORTING
 Example
 [Figure: example of overlapping surfaces being depth-sorted.]
DEPTH SORTING
 Cyclically overlapping surfaces that alternately obscure one another
   We can divide the surfaces to eliminate the cyclic overlaps