Computer Graphics Notes (302)
Computer Engineering Department
Miss D. Khumalo

This handout includes copies of the slides that will be used in lectures, together with some suggested exercises. These notes do not constitute a complete transcript of all the lectures and they are not a substitute for textbooks. They are intended to give a reasonable synopsis of the subjects discussed, but they give neither complete descriptions nor all the background material.
Computer Graphics & Image Processing
◆ Introduction
◆ Colour and displays
◆ Image processing
◆ 2D computer graphics

What are Computer Graphics & Image Processing?
[Diagram: computer graphics turns a scene description into a digital image; image analysis & computer vision turn a digital image back into a scene description; image processing turns one digital image into another.]
Why bother with CG & IP?
✦ All visual computer output depends on CG
◆ printed output (laser/ink jet/phototypesetter)
◆ monitor (CRT/LCD/plasma/DMD)
◆ all visual computer output consists of real images generated by the computer from some internal digital image
✦ Much other visual imagery depends on CG & IP
◆ TV & movie special effects & post-production
◆ most books, magazines, catalogues, flyers, brochures, junk mail, newspapers, packaging, posters

What are CG & IP used for?
◆ 2D computer graphics
■ graphical user interfaces: Mac, Windows, X,…
■ graphic design: posters, cereal packets,…
■ typesetting: book publishing, report writing,… (430 thousand printing companies worldwide, £250 billion annual turnover)
◆ Image processing
■ photograph retouching: publishing, posters,…
■ photocollaging: satellite imagery,…
■ art: new forms of artwork based on digitised images
◆ 3D computer graphics
■ visualisation: scientific, medical, architectural,…
■ Computer Aided Design (CAD)
■ entertainment: special effects, games, movies,… (20 million users worldwide)
Course Structure
✦ Background [3L]
■ images, human vision, displays
✦ 2D computer graphics
[Diagram: the course map — Background, 2D CG, IP, 3D CG.]

Course books
◆ Computer Graphics: Principles & Practice
■ Foley, van Dam, Feiner & Hughes, Addison-Wesley, 1990
● Older version: Fundamentals of Interactive Computer Graphics
❖ Foley & van Dam, Addison-Wesley, 1982
Background
✦ what is a digital image?
◆ what are the constraints on digital images?
✦ what hardware do we use?

Later on in the course we will ask:
✦ how does human vision work?
◆ what are the limits of human vision?
◆ what can we get away with given these constraints & limits?
✦ how do we represent colour?
✦ how do displays & printers work?
◆ how do we fool the human eye into seeing what we want it to see?
What is an image?
✦ a two-dimensional function
✦ value at any point is an intensity or colour
✦ not digital!

What is a digital image?
✦ a contradiction in terms
◆ if you can see it, it's not digital
◆ if it's digital, it's just a collection of numbers
✦ a sampled and quantised version of a real image
✦ a rectangular array of intensity or colour values
Image capture
✦ a variety of devices can be used, e.g.
◆ cameras
(example images from www.hll.mpg.de and www.nuggetlab.com)

Image capture example
[Figure: a photograph of Heidelberg alongside the array of pixel intensity values that represents it, beginning 103 59 12 80 56 12 34 30 1 78 79 21 …]

Image display
✦ to display a digital image we reconstruct a real image on some device, e.g.
◆ CRT — computer monitor, TV set
◆ LCD — portable computer, video projector
Different ways of displaying the same digital image
[Figure: the same image rendered by nearest-neighbour (e.g. LCD), Gaussian (e.g. CRT), and half-toning (e.g. laser printer).]
✦ the display device has a significant effect on the appearance of the displayed image

Sampling
✦ a digital image is a rectangular array of intensity values
✦ each value is called a pixel
◆ "picture element"
✦ sampling resolution is normally measured in pixels per inch (ppi) or dots per inch (dpi)
■ computer monitors have a resolution around 100 ppi
■ laser and ink jet printers have resolutions between 300 and 1200 ppi
■ typesetters have resolutions between 1000 and 3000 ppi
Sampling resolution
[Figure: the same image sampled at 256×256, 128×128, 64×64 and 32×32.]

Quantisation
✦ each intensity value is a number
✦ for digital storage the intensity values must be quantised
■ limits the number of different intensities that can be stored
■ limits the brightest intensity that can be stored
✦ how many intensity levels are needed for human consumption?
■ 8 bits often sufficient
■ some applications use 10, 12 or 16 bits
■ more detail later in the course
✦ colour is stored as a set of numbers
■ usually as 3 numbers of 5–16 bits each
■ more detail later in the course
Quantisation levels
[Figure: the same image quantised at 8 bits (256 levels), 7 bits (128 levels), 6 bits (64 levels), 5 bits (32 levels), 4 bits (16 levels), 3 bits (8 levels), 2 bits (4 levels) and 1 bit (2 levels).]

Storing images in memory
✦ 8 bits became a de facto standard for greyscale images
◆ 8 bits = 1 byte
◆ 16 bits is now being used more widely; 16 bits = 2 bytes
◆ an 8-bit image of size W × H can be stored in a block of W × H bytes
◆ one way to do this is to store pixel[x][y] at memory location base + x + W × y
■ memory is 1D, images are 2D
[Figure: a 5-pixel-wide example in which pixel (1,2) is stored at base + 1 + 5 × 2.]
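A minimal Python sketch of this addressing scheme, assuming an 8-bit greyscale image stored row by row in a flat bytearray:

    # pixel[x][y] lives at offset x + W*y (base is 0 for a bytearray)
    W, H = 5, 3                      # image width and height
    image = bytearray(W * H)         # one byte per pixel

    def set_pixel(image, x, y, value):
        image[x + W * y] = value

    def get_pixel(image, x, y):
        return image[x + W * y]

    set_pixel(image, 1, 2, 255)      # pixel (1,2) is at offset 1 + 5*2 = 11
    assert get_pixel(image, 1, 2) == 255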
Double buffering
◆ if we allow the currently displayed image to be updated then we may see bits of the image being displayed halfway through the update
■ this can be visually disturbing, especially if we want the illusion of smooth animation
◆ double buffering solves this problem: we draw into one frame buffer and display from the other
■ when drawing is complete we flip buffers
[Diagram: the bus feeds Buffer A and Buffer B; an output stage (e.g. DAC) drives the display from whichever buffer is currently shown.]

Modern graphics cards
◆ most graphics processing is now done on a separate graphics card
◆ the CPU communicates primitive data over the bus to the special-purpose Geometry Processing Unit (GPU)
◆ there is additional video memory on the graphics card, mostly used for storing textures, which are mostly used in 3D games
[Diagram: the bus feeds the GPU, which draws into Buffer A or Buffer B using texture memory; an output stage (e.g. DAC) drives the display.]
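A minimal sketch of the buffer-flip logic in Python; draw() and show() are hypothetical stand-ins for a real rendering API:

    # two frame buffers: one displayed, one being drawn into
    buffers = [bytearray(640 * 480), bytearray(640 * 480)]
    front, back = 0, 1

    def render_frame(draw, show):
        global front, back
        draw(buffers[back])          # draw the new frame off-screen
        front, back = back, front    # flip when drawing is complete
        show(buffers[front])         # display never sees a half-drawn image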
2D Computer Graphics
✦ lines
■ how do I draw a straight line?
✦ curves
■ how do I specify curved lines?
✦ clipping
■ what about lines that go off the edge of the screen?
✦ filled areas
✦ transformations
■ scaling, rotation, translation, shearing
✦ applications

Drawing a straight line
◆ a straight line can be defined by y = mx + c, where m is the slope of the line and c is the intercept
◆ a mathematical line is "length without breadth"
◆ a computer graphics line is a set of pixels
◆ which pixels do we need to turn on to draw a given line?
Which pixels do we use?
◆ there are two reasonably sensible alternatives:
■ every pixel through which the line passes: for lines of slope less than 45° we can have either one or two pixels in each column
■ the "closest" pixel to the line in each column: for lines of slope less than 45° we always have just one pixel in every column
◆ in general, use the second ✔

A line drawing algorithm — preparation 1
✦ pixel (x,y) has its centre at real co-ordinate (x,y)
◆ it thus stretches from (x-½, y-½) to (x+½, y+½)
Beware: not every graphics system uses this convention. Some put real co-ordinate (x,y) at the bottom left hand corner of the pixel.
A line drawing algorithm — preparation 2
✦ the line goes from (x0,y0) to (x1,y1)
✦ the line lies in the first octant (0 ≤ m ≤ 1)
✦ x0 < x1

Bresenham's line drawing algorithm 1
(assumes integer end points)

Initialisation:
  d = (y1 - y0) / (x1 - x0)
  x = x0
  yi = y0
  y = y0
  DRAW(x,y)

Iteration:
  WHILE x < x1 DO
    x = x + 1
    yi = yi + d
    y = ROUND(yi)
    DRAW(x,y)
  END WHILE

J. E. Bresenham, "Algorithm for Computer Control of a Digital Plotter", IBM Systems Journal, 4(1), 1965
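A minimal Python sketch of the naive (floating-point) algorithm above, assuming integer end points in the first octant with x0 < x1:

    def draw_line_naive(x0, y0, x1, y1, draw):
        d = (y1 - y0) / (x1 - x0)   # slope, 0 <= d <= 1 in the first octant
        x, yi = x0, float(y0)
        draw(x, y0)
        while x < x1:
            x += 1
            yi += d                  # exact y of the line at this column
            draw(x, round(yi))       # nearest pixel in the column

    draw_line_naive(0, 0, 8, 3, lambda x, y: print(x, y))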
Bresenham's line drawing algorithm 2
◆ this slide and the next show how we can optimise Bresenham's algorithm for use on an architecture where integer operations are much faster than floating point
◆ the naïve algorithm involves floating point arithmetic & rounding inside the loop ⇒ slow
◆ Speed up A:
■ separate the integer and fractional parts of yi (into y and yf)
■ replace rounding by an IF
● removes the need to do rounding

  d = (y1 - y0) / (x1 - x0)
  x = x0
  yf = 0
  y = y0
  DRAW(x,y)
  WHILE x < x1 DO
    x = x + 1
    yf = yf + d
    IF ( yf > ½ ) THEN
      y = y + 1
      yf = yf - 1
    END IF
    DRAW(x,y)
  END WHILE

Bresenham's line drawing algorithm 3
◆ Speed up B:
■ multiply all operations involving yf by 2(x1 - x0)
● yf = yf + dy/dx → yf = yf + 2dy
● yf > ½ → yf > dx
● yf = yf - 1 → yf = yf - 2dx
■ removes the need to do floating point arithmetic if end-points have integer co-ordinates

  dy = (y1 - y0)
  dx = (x1 - x0)
  x = x0
  yf = 0
  y = y0
  DRAW(x,y)
  WHILE x < x1 DO
    x = x + 1
    yf = yf + 2dy
    IF ( yf > dx ) THEN
      y = y + 1
      yf = yf - 2dx
    END IF
    DRAW(x,y)
  END WHILE
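A minimal Python sketch of the integer-only version (speed-ups A and B combined), again assuming integer end points in the first octant with x0 < x1:

    def draw_line_bresenham(x0, y0, x1, y1, draw):
        dx, dy = x1 - x0, y1 - y0
        x, y, yf = x0, y0, 0
        draw(x, y)
        while x < x1:
            x += 1
            yf += 2 * dy            # fractional part, scaled by 2*dx
            if yf > dx:             # i.e. the fractional part exceeds 1/2
                y += 1
                yf -= 2 * dx
            draw(x, y)

    draw_line_bresenham(0, 0, 8, 3, lambda x, y: print(x, y))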
Bresenham's algorithm for floating point end points

  d = (y1 - y0) / (x1 - x0)
  x = ROUND(x0)
  yi = y0 + d * (x - x0)
  y = ROUND(yi)
  yf = yi - y
  DRAW(x,y)
  WHILE x < (x1 - ½) DO
    x = x + 1
    yf = yf + d
    IF ( yf > ½ ) THEN
      y = y + 1
      yf = yf - 1
    END IF
    DRAW(x,y)
  END WHILE

If your end-points are not integers then these kinds of optimisations may not be appropriate. In this case, it is still useful to incorporate speed up A, but not speed up B.

Bresenham's algorithm — more details
✦ we assumed that the line is in the first octant
◆ can do the fifth octant by swapping end points
✦ therefore need four versions of the algorithm
[Figure: the eight octants, numbered 1st to 8th.]
Exercise: work out what changes need to be made to the algorithm for it to work in each of the other three octants.
Midpoint line drawing algorithm 1
✦ first work out the iterative step
◆ it is often easier to work out what should be done on each iteration and only later work out how to initialise and terminate the iteration
✦ given that a particular pixel is on the line, the next pixel must be either immediately to the right (E) or to the right and up one (NE)
✦ use a decision variable, evaluated at the midpoint between the two candidate pixels, to determine which way to go
◆ the line is ax + by + c = 0, with a = (y1 - y0), b = -(x1 - x0), c = x1·y0 - x0·y1

Midpoint line drawing algorithm 2
✦ the decision variable needs to make a decision at point (x+1, y+½):
d = a(x+1) + b(y+½) + c
✦ evaluate the decision variable at this point: if d ≥ 0 then go NE; if d < 0 then go E
✦ if we go E then the new decision variable is at (x+2, y+½):
d' = a(x+2) + b(y+½) + c = d + a
✦ if we go NE then the new decision variable is at (x+2, y+1½):
d' = a(x+2) + b(y+1½) + c = d + a + b
Midpoint line drawing algorithm 3

Initialisation:
  a = (y1 - y0)
  b = -(x1 - x0)
  c = x1 y0 - x0 y1
  x = ROUND(x0)
  y = ROUND(y0 - (x - x0)(a / b))
  d = a * (x+1) + b * (y+½) + c     (the first decision point)
  DRAW(x,y)

Iteration:
  WHILE x < (x1 - ½) DO
    x = x + 1
    IF d < 0 THEN
      d = d + a                     (E case: just increment x)
    ELSE
      d = d + a + b                 (NE case: increment x & y)
      y = y + 1
    END IF
    DRAW(x,y)
  END WHILE

If end-points have integer co-ordinates then all operations can be in integer arithmetic.

Midpoint — comments
✦ this version only works for lines in the first octant
◆ extend to other octants as for Bresenham
✦ it is not immediately obvious that Bresenham and Midpoint give identical results, but it can be proven that they do
✦ the Midpoint algorithm can be generalised to draw arbitrary circles & ellipses
◆ Bresenham can only be generalised to draw circles with integer radii
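A minimal Python sketch of the midpoint algorithm for the first octant, assuming integer end points; the decision variable is doubled so that all arithmetic stays integral:

    def draw_line_midpoint(x0, y0, x1, y1, draw):
        a = y1 - y0                  # line is a*x + b*y + c = 0
        b = -(x1 - x0)
        c = x1 * y0 - x0 * y1
        x, y = x0, y0
        # 2 * (a*(x+1) + b*(y+1/2) + c), kept as an integer
        d = 2 * a * (x + 1) + b * (2 * y + 1) + 2 * c
        draw(x, y)
        while x < x1:
            x += 1
            if d < 0:
                d += 2 * a           # E: stay on this row
            else:
                d += 2 * (a + b)     # NE: move up a row
                y += 1
            draw(x, y)

    draw_line_midpoint(0, 0, 8, 3, lambda x, y: print(x, y))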
Midpoint circle algorithm 2
✦ the decision variable needs to make a decision at point (x+1, y-½):
d = (x+1)² + (y-½)² - r²
Exercise 1: complete the circle algorithm for the second octant.
Exercise 2: complete the circle algorithm for the entire circle.

Taking circles further
✦ the algorithm can be easily extended to circles not centred at the origin
Bezier cubic
✦ a Bezier cubic is defined by its four control points, Pi ≡ (xi, yi):
P(t) = (1-t)³ P0 + 3t(1-t)² P1 + 3t²(1-t) P2 + t³ P3
✦ the weighting functions are the Bernstein polynomials:
b0(t) = (1-t)³   b1(t) = 3t(1-t)²   b2(t) = 3t²(1-t)   b3(t) = t³
✦ the weighting functions sum to one: Σi=0..3 bi(t) = 1
✦ a Bezier curve therefore lies within the convex hull of its control points
Pierre Bézier worked for Renault in the 1960s
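A minimal Python sketch that evaluates a Bezier cubic from the Bernstein weights above; control points are (x, y) tuples:

    def bezier(p0, p1, p2, p3, t):
        b = [(1 - t) ** 3,
             3 * t * (1 - t) ** 2,
             3 * t ** 2 * (1 - t),
             t ** 3]                      # the four Bernstein polynomials
        pts = (p0, p1, p2, p3)
        x = sum(w * p[0] for w, p in zip(b, pts))
        y = sum(w * p[1] for w, p in zip(b, pts))
        return x, y

    print(bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))   # point at t = 1/2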
Drawing a Bezier cubic – naïve method
◆ draw as a set of short line segments equispaced in parameter space, t:

  (x0,y0) = Bezier(0)
  FOR t = 0.05 TO 1 STEP 0.05 DO
    (x1,y1) = Bezier(t)
    DrawLine( (x0,y0), (x1,y1) )
    (x0,y0) = (x1,y1)
  END FOR

◆ problems:
■ cannot fix a number of segments that is appropriate for all possible Beziers: too many or too few segments
■ distance in real space, (x,y), is not linearly related to distance in parameter space, t

Examples
[Figure: the same curve drawn with ∆t=0.2, ∆t=0.1 and ∆t=0.05; the tick marks are spaced 0.05 apart in t.]
Drawing a Bezier cubic – sensible method
✦ adaptive subdivision: if the curve is adequately approximated by a straight line, draw a line between P0 and P3 (we already know how to do this); otherwise subdivide the curve into two halves and recurse on each:

  Procedure DrawCurve( curve )
    IF Flat( curve ) THEN
      DrawLine( P0, P3 )
    ELSE
      SubdivideCurve( curve, left, right )
      DrawCurve( left )
      DrawCurve( right )
    END IF
  END DrawCurve

✦ need to specify some tolerance for when a straight line is an adequate approximation
◆ e.g. when the Bezier lies within half a pixel width of the straight line along its entire length; this requires some straightforward calculations
Checking for flatness
✦ we need to know the distance from a control point C to the line segment AB
✦ the nearest point on the infinite line is P(s) = (1-s)A + sB, where AB · CP(s) = 0:
(xB - xA)(xP(s) - xC) + (yB - yA)(yP(s) - yC) = 0
⇒ s = [ (xB - xA)(xC - xA) + (yB - yA)(yC - yA) ] / [ (xB - xA)² + (yB - yA)² ]
  = (AB · AC) / |AB|²

Special cases
✦ if s < 0 or s > 1 then the distance from point C to the line segment AB is not the same as the distance from point C to the infinite line AB
✦ in these cases the distance is |AC| or |BC| respectively
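A minimal Python sketch of the flatness test's core calculation: the distance from point C to line segment AB, handling the s<0 and s>1 special cases by clamping:

    import math

    def dist_point_segment(a, b, c):
        abx, aby = b[0] - a[0], b[1] - a[1]
        acx, acy = c[0] - a[0], c[1] - a[1]
        s = (abx * acx + aby * acy) / (abx ** 2 + aby ** 2)  # AB.AC / |AB|^2
        s = max(0.0, min(1.0, s))       # clamp: nearest point of the segment
        px, py = a[0] + s * abx, a[1] + s * aby
        return math.hypot(c[0] - px, c[1] - py)

    print(dist_point_segment((0, 0), (4, 0), (5, 3)))  # s>1 case: |BC|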
Subdividing a Bezier cubic into two halves
✦ a Bezier cubic can be easily subdivided into two smaller Bezier cubics:

  Q0 = P0                            R0 = ⅛P0 + ⅜P1 + ⅜P2 + ⅛P3
  Q1 = ½P0 + ½P1                     R1 = ¼P1 + ½P2 + ¼P3
  Q2 = ¼P0 + ½P1 + ¼P2               R2 = ½P2 + ½P3
  Q3 = ⅛P0 + ⅜P1 + ⅜P2 + ⅛P3         R3 = P3

Exercise: prove that the Bezier cubic curves defined by Q0, Q1, Q2, Q3 and R0, R1, R2, R3 match the Bezier cubic curve defined by P0, P1, P2, P3 over the ranges t∈[0,½] and t∈[½,1] respectively.

The effect of different tolerances
◆ [Figure: the same Bezier curve drawn with four different tolerances: 100, 20, 5 and 0.2.]
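A minimal Python sketch of the subdivision formulas above, built from repeated midpoint averaging (de Casteljau at t = ½):

    def mid(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

    def subdivide(p0, p1, p2, p3):
        q1 = mid(p0, p1)                 # 1/2 P0 + 1/2 P1
        m = mid(p1, p2)
        r2 = mid(p2, p3)                 # 1/2 P2 + 1/2 P3
        q2 = mid(q1, m)                  # 1/4 P0 + 1/2 P1 + 1/4 P2
        r1 = mid(m, r2)                  # 1/4 P1 + 1/2 P2 + 1/4 P3
        q3 = mid(q2, r1)                 # 1/8 P0 + 3/8 P1 + 3/8 P2 + 1/8 P3
        return (p0, q1, q2, q3), (q3, r1, r2, p3)

    left, right = subdivide((0, 0), (1, 2), (3, 2), (4, 0))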
What if we have no tangent vectors?
◆ base each cubic piece on the four surrounding data points
◆ at each data point the curve must depend solely on the three surrounding data points   Why?
■ define the tangent at each point as the direction from the preceding point to the succeeding point
● tangent at P1 is ½(P2 - P0), at P2 is ½(P3 - P1)
◆ this is the basis of Overhauser's cubic

Overhauser's cubic
◆ a method for generating Bezier curves which match Overhauser's model
■ simply calculate the appropriate Bezier control point locations from the given points
● e.g. given points A, B, C, D, the Bezier control points are:
P0 = B   P1 = B + (C-A)/6
P3 = C   P2 = C - (D-B)/6
◆ Overhauser's cubic interpolates its controlling data points
■ good for control of movement in animation
■ not so good for industrial design because moving a single point modifies the surrounding four curve segments
● compare with Bezier where moving a single point modifies just the two segments connected to that point
Clipping
✦ what about lines that go off the edge of the screen?
◆ need to clip them so that we only draw the part of the line that is actually on the screen
✦ clipping points against a rectangle: need to check against four edges:
x = xL,  x = xR,  y = yB,  y = yT

Clipping lines against a rectangle — naïvely
P1 to P2 = (x1, y1) to (x2, y2)
P(t) = (1-t)P1 + tP2
x(t) = (1-t)x1 + t·x2
y(t) = (1-t)y1 + t·y2
to intersect with x = xL:
  if (x1 = x2) then no intersection
  else xL = (1-tL)x1 + tL·x2  ⇒  tL = (xL - x1) / (x2 - x1)
  if (0 ≤ tL ≤ 1) then the line segment intersects x = xL at (x(tL), y(tL))
  else the line segment does not intersect the edge
■ do this operation for each of the four edges
Exercise: once you have the four intersection calculations, work out how to determine which bit of the line is actually inside the rectangle.
Clipping lines against a rectangle — examples
[Figure: example lines clipped against the rectangle xL ≤ x ≤ xR, yB ≤ y ≤ yT.]
◆ you can naïvely check every line against each of the four edges
■ this works but is obviously inefficient
◆ adding a little cleverness improves efficiency enormously
■ Cohen-Sutherland clipping algorithm

Cohen-Sutherland clipper 1
◆ make a four bit code, one bit for each inequality:
A ≡ x < xL   B ≡ x > xR   C ≡ y < yB   D ≡ y > yT
◆ evaluate this for both endpoints of the line:
Q1 = A1B1C1D1   Q2 = A2B2C2D2
[Figure: the nine regions around the rectangle and their ABCD codes, e.g. 0001, 0101, 1001.]
Ivan Sutherland is one of the founders of Evans & Sutherland, manufacturers of flight simulator systems.
Cohen-Sutherland clipper 2
◆ Q1 = Q2 = 0
■ both ends in rectangle: ACCEPT
◆ Q1 ∧ Q2 ≠ 0
■ both ends outside and in the same half-plane: REJECT
◆ otherwise
■ need to intersect the line with one of the edges and start again
■ the 1 bits tell you which edge to clip against
● you must always re-evaluate Q and recheck the above tests after doing a single clip
[Example: clipping an endpoint against the edge y = yB gives
y1'' = yB,  x1'' = x1' + (x2 - x1')·(yB - y1') / (y2 - y1')
assuming the algorithm always tries to intersect with xL or xR before yB or yT.]
Try some other cases of your own devising.

Cohen-Sutherland clipper 3
◆ if a code has more than a single 1 then you cannot tell which is the best edge to clip against: simply select one and loop again
◆ horizontal and vertical lines are not a problem   Why not?
◆ need a line drawing algorithm that can cope with floating-point endpoint co-ordinates   Why?
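A minimal Python sketch of the Cohen-Sutherland outcode tests and clipping loop described above:

    def outcode(x, y, xL, xR, yB, yT):
        code = 0
        if x < xL: code |= 1     # A
        if x > xR: code |= 2     # B
        if y < yB: code |= 4     # C
        if y > yT: code |= 8     # D
        return code

    def clip_line(p1, p2, xL, xR, yB, yT):
        (x1, y1), (x2, y2) = p1, p2
        while True:
            q1 = outcode(x1, y1, xL, xR, yB, yT)
            q2 = outcode(x2, y2, xL, xR, yB, yT)
            if q1 == 0 and q2 == 0:
                return (x1, y1), (x2, y2)    # ACCEPT
            if q1 & q2:
                return None                  # REJECT: same half-plane
            q = q1 or q2                     # a nonzero code: clip that end
            if q & 1:   x, y = xL, y1 + (y2 - y1) * (xL - x1) / (x2 - x1)
            elif q & 2: x, y = xR, y1 + (y2 - y1) * (xR - x1) / (x2 - x1)
            elif q & 4: y, x = yB, x1 + (x2 - x1) * (yB - y1) / (y2 - y1)
            else:       y, x = yT, x1 + (x2 - x1) * (yT - y1) / (y2 - y1)
            if q == q1: x1, y1 = x, y        # re-evaluate Q and loop again
            else:       x2, y2 = x, y

    print(clip_line((-1, 2), (5, 4), 0, 4, 0, 3))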
Polygon filling
✦ which pixels do we turn on?
◆ those whose centres lie inside the polygon
● this is a naïve assumption, but is sufficient for now

Scanline polygon fill algorithm
❶ take all polygon edges and place them in an edge list (EL), sorted on lowest y value
❷ start with the first scanline that intersects the polygon; get all edges which intersect that scanline and move them to an active edge list (AEL)
❸ for each edge in the AEL: find the intersection point with the current scanline; sort these into ascending order on the x value
❹ fill between pairs of intersection points
❺ move to the next scanline (increment y); remove edges from the AEL if endpoint < y; move new edges from EL to AEL if start point ≤ y; if any edges remain in the AEL go back to step ❸

Scanline polygon fill example
[Figure: a polygon being filled scanline by scanline.]

Scanline polygon fill details
◆ how do we efficiently calculate the intersection points?
◆ how do we handle horizontal edges?
■ can throw them out of the edge list, they contribute nothing
Clipping polygons
[Figure: a polygon clipped against a rectangle.]

Sutherland-Hodgman polygon clipping 1
◆ clips an arbitrary polygon against an arbitrary convex polygon
■ the basic algorithm clips an arbitrary polygon against a single infinite clip edge
● so we reduce a complex algorithm to a simpler one which we call recursively
■ the polygon is clipped against one edge at a time, passing the result on to the next stage
Sutherland & Hodgman, "Reentrant Polygon Clipping", Comm. ACM, 17(1), 1974
Sutherland-Hodgman polygon clipping 2
◆ the algorithm progresses around the polygon checking if each edge crosses the clipping line and outputting the appropriate points
[Figure: the four cases for an edge from s to e against the clip line — both inside: output e; leaving (s inside, e outside): output the intersection i; both outside: output nothing; entering (s outside, e inside): output i and e.]
Exercise: the Sutherland-Hodgman algorithm may introduce new edges along the edge of the clipping polygon — when does this happen and why?

2D transformations
✦ scale
✦ rotate
✦ translate
✦ (shear)
✦ why?
◆ it is extremely useful to be able to transform predefined objects to an arbitrary location, orientation, and size
◆ any reasonable graphics package will include transforms
■ 2D: Postscript
■ 3D: OpenGL
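A minimal Python sketch of one Sutherland-Hodgman clipping stage, following the four cases above; inside() and intersect() depend on which clip edge is being processed:

    def clip_against_edge(polygon, inside, intersect):
        out = []
        s = polygon[-1]                      # edge from s to e, wrapping round
        for e in polygon:
            if inside(e):
                if not inside(s):
                    out.append(intersect(s, e))   # entering: output i and e
                out.append(e)
            elif inside(s):
                out.append(intersect(s, e))       # leaving: output i only
            s = e                                  # both outside: output nothing
        return out

    # e.g. clip against the infinite edge x >= 0:
    poly = [(-1, 0), (2, 0), (2, 2), (-1, 2)]
    ix = lambda s, e: (0.0, s[1] + (e[1] - s[1]) * (0 - s[0]) / (e[0] - s[0]))
    print(clip_against_edge(poly, lambda p: p[0] >= 0, ix))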
Basic 2D transformations
◆ scale about origin by factor m:   x' = mx,  y' = my
◆ rotate about origin by angle θ:   x' = x cosθ - y sinθ,  y' = x sinθ + y cosθ
◆ translate along vector (xo,yo):   x' = x + xo,  y' = y + yo
◆ shear parallel to x axis by factor a:   x' = x + ay,  y' = y

Matrix representation of transformations
✦ scale about origin, factor m:
  [x']   [m 0] [x]
  [y'] = [0 m] [y]
✦ rotate about origin, angle θ:
  [x']   [cosθ -sinθ] [x]
  [y'] = [sinθ  cosθ] [y]
✦ do nothing: identity:
  [x']   [1 0] [x]
  [y'] = [0 1] [y]
✦ shear parallel to x axis, factor a:
  [x']   [1 a] [x]
  [y'] = [0 1] [y]
Homogeneous 2D co-ordinates
◆ translations cannot be represented using simple 2×2 matrix multiplication on 2D vectors, so we switch to homogeneous co-ordinates:
(x, y, w) ≡ (x/w, y/w)
◆ an infinite number of homogeneous co-ordinates map to every 2D point
◆ w = 0 represents a point at infinity
◆ usually take the inverse transform to be: (x, y) ≡ (x, y, 1)

Matrices in homogeneous co-ordinates
✦ scale about origin, factor m:
  [x']   [m 0 0] [x]
  [y'] = [0 m 0] [y]
  [w']   [0 0 1] [w]
✦ rotate about origin, angle θ:
  [x']   [cosθ -sinθ 0] [x]
  [y'] = [sinθ  cosθ 0] [y]
  [w']   [0     0    1] [w]
✦ do nothing: identity:
  [x']   [1 0 0] [x]
  [y'] = [0 1 0] [y]
  [w']   [0 0 1] [w]
✦ shear parallel to x axis, factor a:
  [x']   [1 a 0] [x]
  [y'] = [0 1 0] [y]
  [w']   [0 0 1] [w]
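A minimal Python sketch of 2D homogeneous transforms as 3×3 matrices (row-major lists of lists), matching the matrices above:

    import math

    def translate(xo, yo): return [[1, 0, xo], [0, 1, yo], [0, 0, 1]]
    def scale(m):          return [[m, 0, 0], [0, m, 0], [0, 0, 1]]
    def rotate(theta):
        c, s = math.cos(theta), math.sin(theta)
        return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

    def apply(M, p):
        x, y, w = (sum(M[r][c] * v for c, v in enumerate((p[0], p[1], 1)))
                   for r in range(3))
        return x / w, y / w          # back from homogeneous to 2D

    print(apply(translate(3, 4), (1, 1)))    # -> (4.0, 5.0)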
Translation by matrix algebra
  [x']   [1 0 xo] [x]
  [y'] = [0 1 yo] [y]
  [w']   [0 0 1 ] [w]
In homogeneous co-ordinates:   x' = x + w·xo,  y' = y + w·yo,  w' = w
In conventional co-ordinates:  x'/w' = x/w + xo,  y'/w' = y/w + yo

Concatenating transformations
◆ often necessary to perform more than one transformation on the same object
◆ can concatenate transformations by multiplying their matrices, e.g. a shear followed by a scaling:

shear:  [x']   [1 a 0] [x]        scale:  [x'']   [m 0 0] [x']
        [y'] = [0 1 0] [y]                [y''] = [0 m 0] [y']
        [w']   [0 0 1] [w]                [w'']   [0 0 1] [w']

both:   [x'']   [m 0 0] [1 a 0] [x]   [m ma 0] [x]
        [y''] = [0 m 0] [0 1 0] [y] = [0 m  0] [y]
        [w'']   [0 0 1] [0 0 1] [w]   [0 0  1] [w]
Concatenation is not commutative
✦ be careful of the order in which you concatenate transformations, e.g. rotate by 45° then scale by 2 along the x axis:

  [2 0 0] [√2/2 -√2/2 0]   [√2   -√2  0]
  [0 1 0] [√2/2  √2/2 0] = [√2/2 √2/2 0]
  [0 0 1] [0     0    1]   [0    0    1]

but scale by 2 along the x axis then rotate by 45°:

  [√2/2 -√2/2 0] [2 0 0]   [√2 -√2/2 0]
  [√2/2  √2/2 0] [0 1 0] = [√2  √2/2 0]
  [0     0    1] [0 0 1]   [0   0    1]

Scaling about an arbitrary point
◆ scale by a factor m about point (xo,yo):
❶ translate point (xo,yo) to the origin
❷ scale by m about the origin
❸ translate the origin back to (xo,yo)

  ❶ [x']    [1 0 -xo] [x]      ❷ [x'']    [m 0 0] [x']      ❸ [x''']    [1 0 xo] [x'']
    [y']  = [0 1 -yo] [y]        [y'']  = [0 m 0] [y']        [y''']  = [0 1 yo] [y'']
    [w']    [0 0  1 ] [w]        [w'']    [0 0 1] [w']        [w''']    [0 0 1 ] [w'']

altogether:
  [x''']   [1 0 xo] [m 0 0] [1 0 -xo] [x]
  [y'''] = [0 1 yo] [0 m 0] [0 1 -yo] [y]
  [w''']   [0 0 1 ] [0 0 1] [0 0  1 ] [w]

Exercise: show how to perform rotation about an arbitrary point.
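A minimal Python sketch of scaling about an arbitrary point by composing the three matrices above (reusing translate, scale and apply from the earlier sketch):

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    def scale_about(m, xo, yo):
        # translate to origin, scale, translate back (rightmost acts first)
        return matmul(translate(xo, yo),
                      matmul(scale(m), translate(-xo, -yo)))

    M = scale_about(2, 1, 1)
    print(apply(M, (2, 2)))     # -> (3.0, 3.0): doubled distance from (1,1)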
Bounding boxes
◆ when working with complex objects, bounding boxes can be used to speed up some operations
[Figure: a complex object and its axis-aligned bounding box, defined by BBL, BBR, BBB, BBT.]

Clipping with bounding boxes
◆ do a quick accept/reject/unsure test against the bounding box, then apply clipping to only the unsure objects:
BBL > xR ∨ BBR < xL ∨ BBB > yT ∨ BBT < yB ⇒ REJECT
BBL ≥ xL ∧ BBR ≤ xR ∧ BBB ≥ yB ∧ BBT ≤ yT ⇒ ACCEPT
otherwise ⇒ clip at next higher level of detail
Object inclusion with bounding boxes
◆ including one object (e.g. a graphics file) inside another can be easily done if bounding boxes are known and used
◆ use the eight values (BBL, BBR, BBB, BBT of the original and PL, PR, PB, PT of the destination region) to translate and scale the original to the appropriate position in the destination document
[Figure: an example company logo image with its bounding box, placed into a destination document.]

Bit block transfer (BitBlT)
◆ it is sometimes preferable to predraw something and then copy the image to the correct position on the screen as and when required
■ e.g. icons
■ e.g. games
◆ copying an image from place to place is essentially a memory operation
■ can be made very fast
■ e.g. a 32×32 pixel icon can be copied, say, 8 adjacent pixels at a time, if there is an appropriate memory copy operation
Application 1: user interface
✦ tend to use objects that are quick to draw
◆ straight lines
◆ filled rectangles
✦ complicated bits done using predrawn icons
✦ typefaces also tend to be predrawn

Application 2: typography
◆ typeface: a family of letters designed to look good together
■ usually has upright (roman/regular), italic (oblique), bold and bold-italic members
[Examples: "abcd efgh ijkl mnop" set in Gill Sans, Times, Garamond and Arial.]
◆ two forms of typeface are used in computer graphics
■ pre-rendered bitmaps
● single resolution (don't scale well)
● use BitBlT to put into frame buffer
■ outline definitions
● multi-resolution (can scale)
● need to render (fill) to put into frame buffer

These notes are mainly set in Gill Sans, a lineale (sans-serif) typeface designed by Eric Gill for Monotype, 1928–30. The lowercase italic p is particularly interesting. Mathematics is mainly set in Times New Roman, a roman typeface commissioned by The Times in 1931, the design supervised by Stanley Morison.
Application 3: Postscript
◆ industry standard rendering language for printers
◆ developed by Adobe Systems
◆ basic features
■ object outlines made up of lines, arcs & Bezier curves
■ objects can be filled or stroked
■ whole range of 2D transformations can be applied to objects
■ typeface handling built in
● typefaces are defined using Bezier curves
■ halftoning
■ can define your own functions in the language

Examples which are Bezier-friendly
[Examples: "abcdQRST2345&" set in Helvetica (1957) and Palatino (1950).]

3D Computer Graphics
✦ 3D ⇨ 2D projection
✦ 3D versions of 2D operations
◆ clipping, transforms, matrices, curves & surfaces
✦ 3D scan conversion
◆ depth-sort, BSP tree, z-buffer, A-buffer
✦ sampling
✦ lighting
✦ ray tracing
Viewing volume
[Figure: the viewing plane (screen plane) and the volume of space visible through it.]

Geometry of perspective projection
[Figure: the eye at (0,0,0), the screen plane at distance d, a point (x,y,z) projecting to (x',y',d).]
x' = x · d/z
y' = y · d/z
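A minimal Python sketch of this projection, assuming the camera at the origin looking down the z-axis with the screen plane at distance d:

    def project(x, y, z, d):
        return x * d / z, y * d / z   # (x', y') on the screen plane z = d

    print(project(2.0, 1.0, 4.0, 2.0))   # -> (1.0, 0.5)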
Perspective projection with an arbitrary camera
◆ we have assumed that:
■ screen centre at (0,0,d)
■ screen parallel to xy-plane
■ z-axis into screen
■ y-axis up and x-axis to the right
■ eye (camera) at origin (0,0,0)
◆ the following slides show how to transform an arbitrary camera into this standard configuration

3D transformations
◆ 3D homogeneous co-ordinates: (x, y, z, w) → (x/w, y/w, z/w)
◆ 3D transformation matrices, e.g.:

translation:      identity:      rotation about x-axis:
[1 0 0 tx]        [1 0 0 0]      [1  0     0    0]
[0 1 0 ty]        [0 1 0 0]      [0 cosθ -sinθ  0]
[0 0 1 tz]        [0 0 1 0]      [0 sinθ  cosθ  0]
[0 0 0 1 ]        [0 0 0 1]      [0  0     0    1]
3D transformations are not commutative
[Figure: a 90° rotation about the z-axis followed by a 90° rotation about the x-axis gives the opposite-facing result from doing the two rotations in the other order.]

Viewing transform 1
✦ the problem: to transform an arbitrary co-ordinate system to the default viewing co-ordinate system
✦ camera specification in world co-ordinates:
◆ eye (camera) at (ex,ey,ez)
◆ look point (centre of screen) at (lx,ly,lz)
◆ up along vector (ux,uy,uz)
■ perpendicular to el (the line from eye to look point)
Viewing transform 2
◆ translate the eye point, (ex,ey,ez), to the origin, (0,0,0):

  T = [1 0 0 -ex]
      [0 1 0 -ey]
      [0 0 1 -ez]
      [0 0 0  1 ]

◆ scale so that the eye point to look point distance, |el|, is the distance from the origin to the screen centre, d:
|el| = √((lx-ex)² + (ly-ey)² + (lz-ez)²)

  S = [d/|el| 0      0      0]
      [0      d/|el| 0      0]
      [0      0      d/|el| 0]
      [0      0      0      1]

Viewing transform 3
◆ need to align the line el with the z-axis
■ first transform e and l into the new co-ordinate system:
e'' = S × T × e = 0,   l'' = S × T × l
■ then rotate e''l'' into the yz-plane, rotating about the y-axis:

  R1 = [cosθ 0 -sinθ 0]
       [0    1  0    0]
       [sinθ 0  cosθ 0]
       [0    0  0    1]

θ = arccos( l''z / √(l''x² + l''z²) )
which takes (l''x, l''y, l''z) to (0, l''y, √(l''x² + l''z²))
Viewing transform 4
◆ having rotated the viewing vector onto the yz plane, rotate it about the x-axis so that it aligns with the z-axis:
l''' = R1 × l''

  R2 = [1 0     0    0]
       [0 cosφ -sinφ 0]
       [0 sinφ  cosφ 0]
       [0 0     0    1]

φ = arccos( l'''z / √(l'''y² + l'''z²) )
which takes (0, l'''y, l'''z) to (0, 0, √(l'''y² + l'''z²)) = (0, 0, d)

Viewing transform 5
◆ the final step is to ensure that the up vector actually points up, i.e. along the positive y-axis
■ actually need to rotate the up vector about the z-axis so that it lies in the positive y half of the yz plane:
u'''' = R2 × R1 × u

  R3 = [cosψ -sinψ 0 0]
       [sinψ  cosψ 0 0]
       [0     0    1 0]
       [0     0    0 1]

ψ = arccos( u''''y / √(u''''x² + u''''y²) )
why don't we need to multiply u by S or T?
■ u is a vector rather than a point, and vectors do not get translated
■ scaling u by a uniform scaling matrix would make no difference to the direction in which it points
Viewing transform 6
◆ we can now transform any point in world co-ordinates to the equivalent point in viewing co-ordinates:

  [x']                          [x]
  [y'] = R3 × R2 × R1 × S × T × [y]
  [z']                          [z]
  [w']                          [w]

◆ in particular: e → (0,0,0) and l → (0,0,d)

Another transformation example
■ a well known graphics package (Open Inventor) defines a cylinder to be:
● centre at the origin, (0,0,0)
● radius 1 unit
● height 2 units, aligned along the y-axis
■ this is the only cylinder that can be drawn, but the package has a complete set of 3D transformations
■ we want to draw a cylinder of:
● radius 2 units
● the centres of its two ends located at (1,2,3) and (2,4,5)
❖ its length is thus 3 units
A variety of transformations
[Diagram: object in object co-ordinates → (modelling transform) → object in world co-ordinates → (viewing transform) → object in viewing co-ordinates → (projection) → object in 2D screen co-ordinates.]
■ the modelling transform and viewing transform can be multiplied together to produce a single matrix taking an object directly from object co-ordinates into viewing co-ordinates
■ either or both of the modelling transform and viewing transform matrices can be the identity matrix
● e.g. objects can be specified directly in viewing co-ordinates, or directly in world co-ordinates
■ this is a useful set of transforms, not a hard and fast model of how things should be done

Clipping in 3D
✦ clipping against a volume in viewing co-ordinates
[Figure: a viewing pyramid defined by the screen half-width a, half-height b and screen distance d.]
a point (x,y,z) can be clipped against the pyramid by checking it against four planes:
x > -z·a/d,  x < z·a/d,  y > -z·b/d,  y < z·b/d
What about clipping in z?
◆ need to at least check for z < 0 to stop things behind the camera from projecting onto the screen (oops!)
◆ can also have front and back clipping planes: z > zf and z < zb
■ the resulting clipping volume is called the viewing frustum

Clipping in 3D — two methods (which is best?)
✦ clip against the viewing frustum
◆ need to clip against six planes:
x = -z·a/d,  x = z·a/d,  y = -z·b/d,  y = z·b/d,  z = zf,  z = zb
✦ project to 2D (retaining z) and clip against the axis-aligned cuboid
◆ still need to clip against six planes:
x = -a,  x = a,  y = -b,  y = b,  z = zf,  z = zb
■ these are simpler planes against which to clip
■ this is equivalent to clipping in 2D with two extra clips for z
Bounding volumes & clipping
✦ can be very useful for reducing the amount of work involved in clipping
✦ what kind of bounding volume?
◆ axis aligned box
◆ sphere

Curves in 3D
✦ same as curves in 2D, with an extra co-ordinate for each point
✦ e.g. Bezier cubic in 3D, with Pi ≡ (xi, yi, zi):
P(t) = (1-t)³ P0 + 3t(1-t)² P1 + 3t²(1-t) P2 + t³ P3
Surfaces in 3D: polygons
✦ lines generalise to planar polygons
◆ 3 vertices (triangle) must be planar
◆ > 3 vertices, not necessarily planar
[Figure: a non-planar "polygon" — one vertex in front of the other three, which are all in the same plane; rotating it about the vertical axis shows the ambiguity: should the result be one shape or the other? which is preferable?]

Splitting polygons into triangles
◆ some graphics processors accept only triangles
◆ an arbitrary polygon with more than three vertices isn't guaranteed to be planar; a triangle is
Bezier patch
✦ a Bezier patch generalises the Bezier curve P(t) = Σi=0..3 bi(t) Pi to two parameters:
P(s,t) = Σi=0..3 Σj=0..3 bi(s) bj(t) Pi,j
where the bi are the Bernstein polynomials as before:
b0(t) = (1-t)³   b1(t) = 3t(1-t)²   b2(t) = 3t²(1-t)   b3(t) = t³
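A minimal Python sketch that evaluates a bicubic Bezier patch from a 4×4 grid of (x,y,z) control points using the tensor-product formula above:

    def bernstein(t):
        return ((1 - t) ** 3, 3 * t * (1 - t) ** 2,
                3 * t ** 2 * (1 - t), t ** 3)

    def patch_point(P, s, t):
        bs, bt = bernstein(s), bernstein(t)
        return tuple(sum(bs[i] * bt[j] * P[i][j][k]
                         for i in range(4) for j in range(4))
                     for k in range(3))

    # flat example patch: z = 0 everywhere
    P = [[(i, j, 0.0) for j in range(4)] for i in range(4)]
    print(patch_point(P, 0.5, 0.5))   # -> (1.5, 1.5, 0.0)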
Continuity between Bezier patches
✦ each patch is smooth within itself
✦ ensuring continuity in 3D:
◆ C0 — continuous in position
■ the four edge control points must match
◆ C1 — continuous in both position and tangent vector
■ the four edge control points must match
■ the two control points on either side of each of the four edge control points must be co-linear with both the edge point and each other, and be equidistant from the edge point

Drawing Bezier patches
◆ in a similar fashion to Bezier curves, Bezier patches can be drawn by approximating them with planar polygons
◆ simple method
■ select appropriate increments in s and t and render the resulting quadrilaterals
◆ tolerance-based method
■ check if the Bezier patch is sufficiently well approximated by a quadrilateral; if so, use that quadrilateral
■ if not, then subdivide it into two smaller Bezier patches and repeat on each
● subdivide in different dimensions on alternate calls to the subdivision function
■ having approximated the whole Bezier patch as a set of (non-planar) quadrilaterals, further subdivide these into (planar) triangles
● be careful to not leave any gaps in the resulting surface!
3D scan conversion
✦ lines
✦ polygons
◆ depth sort
◆ Binary Space-Partitioning tree
◆ z-buffer
◆ A-buffer
✦ ray tracing

3D line drawing
◆ given a list of 3D lines we draw them by:
■ projecting end points onto the 2D screen
■ using a line drawing algorithm on the resulting 2D lines
◆ this produces a wireframe version of whatever objects are represented by the lines
Hidden line removal
◆ by careful use of cunning algorithms, lines that are hidden by surfaces can be removed from the projected version of the objects
■ still just a line drawing
■ will not be covered further in this course

3D polygon drawing
◆ given a list of 3D polygons we draw them by:
■ projecting vertices onto the 2D screen
● but also keep the z information
■ using a 2D polygon scan conversion algorithm on the resulting 2D polygons
◆ in what order do we draw the polygons?
■ some sort of order on z
● depth sort
● Binary Space-Partitioning tree
◆ is there a method in which order does not matter?
● z-buffer
Depth sort algorithm
❶ transform all polygon vertices into viewing co-ordinates and project these into 2D, keeping the z information
❷ calculate a depth ordering for the polygons
❸ resolve any ambiguities caused by polygons overlapping in z
❹ draw the polygons in depth order from back to front
■ "painter's algorithm": later polygons draw on top of earlier polygons
◆ steps ❶ and ❷ are simple, step ❹ is 2D polygon scan conversion, step ❸ requires more thought
[Figure: examples of polygons overlapping in z and the orders in which they can be drawn.]
Resolving ambiguities: algorithm
✦ for the rearmost polygon, P, in the list, we need to compare each polygon, Q, which overlaps P in z
◆ the question is: can I draw P before Q?
❶ do the polygons' y extents not overlap?
❷ do the polygons' x extents not overlap?
❸ is P entirely on the opposite side of Q's plane from the viewpoint?
❹ is Q entirely on the same side of P's plane as the viewpoint?
(the tests get more expensive as we go down the list)
◆ if all 4 tests fail, repeat ❸ and ❹ with P and Q swapped (i.e. can I draw Q before P?); if true, swap P and Q
◆ otherwise split either P or Q by the plane of the other, throw away the original polygon and insert the two pieces into the list

Depth sort: comments
◆ the depth sort algorithm produces a list of polygons which can be scan-converted in 2D, backmost to frontmost, to produce the correct image
◆ reasonably cheap for a small number of polygons, becomes expensive for large numbers of polygons
◆ the ordering is only valid from one particular viewpoint
Back face culling: a time-saving trick
◆ if a polygon is a face of a closed polyhedron and faces backwards with respect to the viewpoint then it need not be drawn at all, because front-facing faces would later obscure it anyway
■ saves drawing time at the cost of one extra test per polygon
■ assumes that we know which way a polygon is oriented
◆ back face culling can be used in combination with any 3D scan-conversion algorithm

Binary Space-Partitioning trees
◆ BSP trees provide a way of quickly calculating the correct depth order:
■ for a collection of static polygons
■ from an arbitrary viewpoint
◆ the BSP tree trades off an initial time- and space-intensive pre-processing step against a linear display algorithm (O(N)) which is executed whenever a new viewpoint is specified
◆ the BSP tree allows you to easily determine the correct order in which to draw polygons by traversing the tree in a simple way
BSP tree: basic idea
◆ a given polygon will be correctly scan-converted if:
■ all polygons on the far side of it from the viewer are scan-converted first
■ then it is scan-converted
■ then all the polygons on the near side of it are scan-converted

Making a BSP tree
◆ given a set of polygons
■ select an arbitrary polygon as the root of the tree
■ divide all remaining polygons into two subsets:
❖ those in front of the selected polygon's plane
❖ those behind the selected polygon's plane
● any polygons through which the plane passes are split into two polygons and the two parts put into the appropriate subsets
■ make two BSP trees, one from each of the two subsets
● these become the front and back subtrees of the root
◆ may be advisable to make, say, 20 trees with different random roots to be sure of getting a tree that is reasonably well balanced
Drawing a BSP tree Scan-line algorithms
◆insteadof drawing one polygon at a time:
◆if the viewpoint is in front of the root’s polygon’s
modify the 2D polygon scan-conversion algorithm to
plane then:
■ draw the BSP tree for the back child of the root handle all of the polygons at once
■ draw the root’s polygon ◆the algorithm keeps a list of the active edges in all
■ draw the BSP tree for the front child of the root
polygons and proceeds one scan-line at a time
◆otherwise: ■ there is thus one large active edge list and one (even larger) edge list
■ draw the BSP tree for the front child of the root ● enormous memory requirements
■ draw the root’s polygon
◆still
fill in pixels between adjacent pairs of edges on
■ draw the BSP tree for the back child of the root
the scan-line but:
■ need to be intelligent about which polygon is in front and therefore what
colours to put in the pixels
■ every edge is used in two pairs:
one to the left and one to the right of it
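A minimal Python sketch of BSP building and back-to-front traversal as described above; side(), split(), viewpoint_in_front() and render() are hypothetical stand-ins for the real geometry:

    class Node:
        def __init__(self, poly, front=None, back=None):
            self.poly, self.front, self.back = poly, front, back

    def build(polys, side, split):
        if not polys:
            return None
        root, rest = polys[0], polys[1:]
        front, back = [], []
        for p in rest:
            s = side(root, p)           # +1 front, -1 back, 0 straddles
            if s == 0:
                f, b = split(root, p)   # split straddling polygon in two
                front.append(f); back.append(b)
            elif s > 0:
                front.append(p)
            else:
                back.append(p)
        return Node(root, build(front, side, split), build(back, side, split))

    def draw(node, viewpoint_in_front, render):
        # back-to-front: far subtree, then root, then near subtree
        if node is None:
            return
        if viewpoint_in_front(node.poly):
            draw(node.back, viewpoint_in_front, render)
            render(node.poly)
            draw(node.front, viewpoint_in_front, render)
        else:
            draw(node.front, viewpoint_in_front, render)
            render(node.poly)
            draw(node.back, viewpoint_in_front, render)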
z-buffer polygon scan conversion
✦ depth sort & BSP-tree methods involve clever sorting algorithms followed by the invocation of the standard 2D polygon scan conversion algorithm
✦ by modifying the 2D scan conversion algorithm we can remove the need to sort the polygons
◆ makes hardware implementation easier

z-buffer basics
✦ store both colour and depth at each pixel
✦ when scan converting a polygon:
◆ calculate the polygon's depth at each pixel
◆ if the polygon is closer than the current depth stored at that pixel
■ then store both the polygon's colour and depth at that pixel
■ otherwise do nothing
z-buffer algorithm

  FOR every pixel (x,y)
    Colour[x,y] = background colour ;
    Depth[x,y] = infinity ;
  END FOR ;
  FOR each polygon
    FOR every pixel (x,y) in the polygon's projection
      z = polygon's z-value at pixel (x,y) ;
      IF z < Depth[x,y] THEN
        Depth[x,y] = z ;
        Colour[x,y] = polygon's colour at (x,y) ;
      END IF ;
    END FOR ;
  END FOR ;

This is essentially the 2D polygon scan conversion algorithm with depth calculation and depth comparison added.

z-buffer example
[Figure: the depth buffer after drawing each of three polygons in turn; pixels keep depth ∞ until a polygon covers them, and a closer polygon (e.g. depth 5) overwrites a farther one (e.g. depth 6).]
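A minimal Python sketch of the algorithm above; polygons are given here as pre-computed lists of (x, y, z, colour) samples to keep the geometry out of the way:

    INF = float("inf")
    W, H = 4, 3
    depth = [[INF] * W for _ in range(H)]
    colour = [["bg"] * W for _ in range(H)]

    def scan_convert(samples):
        for x, y, z, col in samples:
            if z < depth[y][x]:          # closer than what is stored
                depth[y][x] = z
                colour[y][x] = col       # otherwise do nothing

    scan_convert([(0, 0, 6.0, "red"), (1, 0, 6.0, "red")])
    scan_convert([(1, 0, 5.0, "blue")])  # closer polygon wins at (1,0)
    print(colour[0])                     # -> ['red', 'blue', 'bg', 'bg']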
Interpolating depth values 1
◆ just as we incrementally interpolate x as we move down the edges of the polygon, we can incrementally interpolate z:
■ as we move down the edges of the polygon
■ as we move across the polygon's projection
[Diagram: vertices (x1,y1,z1), (x2,y2,z2), (x3,y3,z3) project to (x1',y1',d), (x2',y2',d), (x3',y3',d), with xa' = xa·d/za and ya' = ya·d/za.]

Interpolating depth values 2
◆ we thus have 2D vertices, with added depth information: [(xa', ya'), za]
◆ we can interpolate x and y in 2D:
x' = (1-t)·x1' + t·x2'
y' = (1-t)·y1' + t·y2'
◆ but z must be interpolated in 3D:
1/z = (1-t)·(1/z1) + t·(1/z2)
[Figure: a ladder drawn in perspective — the point halfway between front and back in 2D (measure with a ruler if you do not believe it) is not the point halfway between front and back in 3D (count the rungs on the ladder).]
Comparison of methods

  Algorithm   | Complexity  | Notes
  Depth sort  | O(N log N)  | Need to resolve ambiguities
  Scan line   | O(N log N)  | Memory intensive
  BSP tree    | O(N)        | O(N log N) pre-processing step
  z-buffer    | O(N)        | Easy to implement in hardware

◆ BSP is only useful for scenes which do not change
◆ as the number of polygons increases, the average size of a polygon decreases, so the time to draw a single polygon decreases
◆ z-buffer is easy to implement in hardware: simply give it polygons in any order you like
◆ other algorithms need to know about all the polygons before drawing a single one, so that they can sort them into order

Putting it all together — a summary
✦ a 3D polygon scan conversion algorithm needs to include:
◆ a 2D polygon scan conversion algorithm
◆ 2D or 3D polygon clipping
◆ projection from 3D to 2D
◆ some method of ordering the polygons so that they are drawn in the correct order
Sampling
◆ all of the methods so far take a single sample for each pixel at the precise centre of the pixel
■ i.e. the value for each pixel is the colour of the polygon which happens to lie exactly under the centre of the pixel
◆ this leads to:
■ stair step (jagged) edges to polygons
■ small polygons being missed completely
■ thin polygons being missed completely or split into small pieces

Anti-aliasing
◆ these artefacts (and others) are jointly known as aliasing
◆ methods of ameliorating the effects of aliasing are known as anti-aliasing
■ in signal processing aliasing is a precisely defined technical term for a particular kind of artefact
■ in computer graphics its meaning has expanded to include most undesirable effects that can occur in the image
● this is because the same anti-aliasing techniques which ameliorate true aliasing artefacts also ameliorate most of the other artefacts
Anti-aliasing method 1: area averaging
◆ average the contributions of all polygons to each pixel
■ e.g. assume pixels are square and we just want the average colour in the square
■ Ed Catmull developed an algorithm which does this:
● works a scan-line at a time
● clips all polygons to the scan-line
● determines the fragment of each polygon which projects to each pixel
● determines the amount of the pixel covered by the visible part of each fragment
● pixel's colour is a weighted sum of the visible parts
■ expensive algorithm!

Anti-aliasing method 2: super-sampling
◆ sample on a finer grid, then average the samples in each pixel to produce the final colour
■ for an n×n sub-pixel grid, the algorithm would take roughly n² times as long as just taking one sample per pixel
◆ can simply average all of the sub-pixels in a pixel or can do some sort of weighted average
The A-buffer
◆ a polygon can cover a pixel:
● totally
● partially
● not at all
■ sub-pixel sampling is only required in the case of pixels which are partially covered by the polygon
◆ each polygon's coverage of a pixel is stored as a 4×8 bit mask, e.g.:

  11111111
  00011111
  00000011
  00000000

1 = polygon covers this sub-pixel; 0 = polygon doesn't cover this sub-pixel; sampling is done at the centre of each of the sub-pixels.
L. Carpenter, "The A-buffer: an antialiased hidden surface method", SIGGRAPH 84, 103–8
The use of 4×8 bits is because of the original architecture on which this was implemented. You could use any number of sub-pixels: a power of 2 is obviously sensible.
A-buffer: example
◆ to get the final colour of the pixel you need to average together all visible bits of polygons
[Figure: masks for polygons A (frontmost), B and C (backmost), the resulting visible sub-pixel colours, and the final pixel colour.]

Making the A-buffer more efficient
◆ if a polygon totally covers a pixel then:
■ do not need to calculate a mask, because the mask is all 1s
■ all masks currently in the list which are behind this polygon can be discarded
■ any subsequent polygons which are behind this polygon can be immediately discounted (without calculating a mask)
◆ in most scenes, therefore, the majority of pixels will have only a single entry in their list of masks
A-buffer: calculating masks
◆ clip polygon to pixel
◆ calculate the mask for each edge bounded by the right hand side of the pixel
■ there are few enough of these that they can be stored in a look-up table
◆ XOR all masks together
[Example: the edge masks XORed together give the polygon's coverage mask.]

A-buffer: comments
◆ the A-buffer algorithm essentially adds anti-aliasing to the z-buffer algorithm in an efficient way
◆ most operations on masks are AND, OR, NOT, XOR
■ very efficient boolean operations
◆ why 4×8?
■ algorithm originally implemented on a machine with 32-bit registers (VAX 11/780)
■ on a 64-bit register machine, 8×8 is more sensible
◆ what does the A stand for in A-buffer?
■ anti-aliased, area averaged, accumulator
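A minimal Python sketch of 4×8 coverage masks as 32-bit integers, with XOR combination and bit-counting for area coverage; the mask values here are made up for illustration:

    def mask_from_rows(rows):        # rows: four 8-character '0'/'1' strings
        return int("".join(rows), 2)

    edge1 = mask_from_rows(["00111111", "00011111", "00000000", "00000000"])
    edge2 = mask_from_rows(["00000000", "00000011", "00000000", "00000000"])

    coverage = edge1 ^ edge2             # XOR the per-edge masks together
    area = bin(coverage).count("1") / 32 # fraction of the pixel covered
    print(f"{area:.3f}")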
A-buffer: extensions
◆ as presented, the algorithm assumes that a mask has a constant depth (z value)
■ can modify the algorithm and perform approximate intersection between polygons
◆ can save memory by combining fragments which start life in the same primitive
■ e.g. two triangles that are part of the decomposition of a Bezier patch
◆ can extend to allow transparent objects

Illumination & shading
◆ until now we have assumed that each polygon is a uniform colour and have not thought about how that colour is determined
◆ things look more realistic if there is some sort of illumination in the scene
◆ we therefore need a mechanism for determining the colour of a polygon based on its surface properties and the positions of the lights
◆ we will, as a consequence, need to find ways to shade polygons which do not have a uniform colour
Comments on reflection
◆ the surface can absorb some wavelengths of light
■ e.g. shiny gold or shiny copper
◆ specular reflection has "interesting" properties at glancing angles owing to occlusion of micro-facets by one another
◆ plastics are good examples of surfaces with:
■ specular reflection in the light's colour
■ diffuse reflection in the plastic's colour

Calculating the shading of a polygon
◆ gross assumptions:
■ there is only diffuse (Lambertian) reflection
■ all light falling on a polygon comes directly from a light source
● there is no interaction between polygons
■ no polygon casts shadows on any other
● so we can treat each polygon as if it were the only polygon in the scene
■ light sources are considered to be infinitely distant from the polygon
● the vector to the light is the same across the whole polygon
◆ observation:
■ the colour of a flat polygon will be uniform across its surface, dependent only on the colour & position of the polygon and the colour & position of the light sources
Diffuse shading calculation
I = Il kd cosθ = Il kd (N · L)
where:
L is a normalised vector pointing in the direction of the light source
N is the normal to the polygon
Il is the intensity of the light source
kd is the proportion of light which is diffusely reflected by the surface
I is the intensity of the light reflected by the surface
Use this equation to set the colour of the whole polygon and draw the polygon using a standard polygon scan-conversion routine.

Diffuse shading: comments
◆ can have different Il and different kd for different wavelengths (colours)
◆ watch out for cosθ < 0
■ implies that the light is behind the polygon and so it cannot illuminate this side of the polygon
◆ do you use one-sided or two-sided polygons?
■ one sided: only the side in the direction of the normal vector can be illuminated
● if cosθ < 0 then both sides are black
■ two sided: the sign of cosθ determines which side of the polygon is illuminated
● need to invert the sign of the intensity for the back side
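A minimal Python sketch of the diffuse equation above for a one-sided polygon, with vectors as (x, y, z) tuples:

    import math

    def normalise(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def diffuse(N, L, I_l, k_d):
        cos_theta = sum(n * l for n, l in zip(normalise(N), normalise(L)))
        return I_l * k_d * max(0.0, cos_theta)  # clamp: light behind polygon

    print(diffuse((0, 0, 1), (0, 3, 4), I_l=1.0, k_d=0.8))   # -> 0.64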
Gouraud shading
◆ for a polygonal model, calculate the diffuse illumination at each vertex rather than for each polygon
■ calculate the normal at the vertex, and use this to calculate the diffuse illumination at that point
■ the normal can be calculated directly if the polygonal model was derived from a curved surface
■ each 2D vertex thus carries depth and colour, [(x1', y1'), z1, (r1, g1, b1)], and the colour is interpolated across the polygon in a similar manner to z

Flat vs Gouraud shading
◆ note how the interior is smoothly shaded but the outline remains polygonal
[Figure: the same object flat-shaded and Gouraud-shaded.]
Specular reflection
✦ Phong developed an easy-to-calculate approximation to specular reflection:
I = Il ks cosⁿα = Il ks (R · V)ⁿ
where:
L is a normalised vector pointing in the direction of the light source
R is the vector of perfect reflection
N is the normal to the polygon
V is a normalised vector pointing at the viewer
Il is the intensity of the light source
ks is the proportion of light which is specularly reflected by the surface
n is Phong's ad hoc "roughness" coefficient
I is the intensity of the specularly reflected light
◆ N.B. Phong's approximation to specular reflection ignores (amongst other things) the effects of glancing incidence

Phong shading
◆ similar to Gouraud shading, but calculate the specular component in addition to the diffuse component
◆ therefore need to interpolate the normal across the polygon in order to be able to calculate the reflection vector
■ each vertex carries [(x1', y1'), z1, (r1, g1, b1), N1]
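A minimal Python sketch of Phong's specular term (reusing normalise from the diffuse sketch); R is the mirror reflection of the normalised light vector L about the normal N:

    def reflect(L, N):
        d = 2 * sum(l * n for l, n in zip(L, N))
        return tuple(d * n - l for l, n in zip(L, N))   # R = 2(N.L)N - L

    def specular(N, L, V, I_l, k_s, n):
        N, L, V = normalise(N), normalise(L), normalise(V)
        r_dot_v = max(0.0, sum(r * v for r, v in zip(reflect(L, N), V)))
        return I_l * k_s * r_dot_v ** n

    print(specular((0, 0, 1), (0, 0, 1), (0, 0, 1), 1.0, 0.5, 10))  # -> 0.5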
Examples
[Figure: spheres rendered with varying proportions of specular reflection: 100%, 75%, 50%, …]

The gross assumptions revisited
◆ only diffuse reflection
■ we now have a method of approximating specular reflection
◆ no shadows
■ need to do ray tracing to get shadows
◆ lights at infinity
■ can add local lights at the expense of more calculation
● need to interpolate the L vector
Shading: overall equation
◆ the overall shading equation can thus be considered to be the ambient illumination plus the diffuse and specular reflections from each light source:
I = Ia ka + Σi Ii kd (Li · N) + Σi Ii ks (Ri · V)ⁿ
■ the more lights there are in the scene, the longer this calculation will take

Illumination & shading: comments
◆ how good is this shading equation?
■ gives reasonable results but most objects tend to look as if they are made out of plastic
■ Cook & Torrance have developed a more realistic (and more expensive) shading model which takes into account:
● micro-facet geometry (which models, amongst other things, the roughness of the surface)
● Fresnel's formulas for reflectance off a surface
■ there are other, even more complex, models
◆ is there a better way to handle inter-object interaction?
■ "ambient illumination" is, frankly, a gross approximation
■ distributed ray tracing can handle specular inter-reflection
■ radiosity can handle diffuse inter-reflection
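A minimal Python sketch of the full equation, summing ambient, diffuse and specular terms per light (reusing the diffuse and specular sketches above):

    def shade(N, V, lights, I_a, k_a, k_d, k_s, n):
        I = I_a * k_a
        for I_l, L in lights:            # each light: (intensity, direction)
            I += diffuse(N, L, I_l, k_d)
            I += specular(N, L, V, I_l, k_s, n)
        return I

    print(shade((0, 0, 1), (0, 0, 1), [(1.0, (0, 0, 1))],
                I_a=0.1, k_a=1.0, k_d=0.6, k_s=0.3, n=20))   # -> 1.0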
Ray tracing
◆ a powerful alternative to polygon scan-conversion techniques
◆ given a set of 3D objects, shoot a ray from the eye through the centre of every pixel and see what it hits

Ray tracing: examples
[Figure: example ray-traced images.]
Intersection of a ray with an object 2
◆ sphere
ray: P = O + sD,  s ≥ 0
sphere: (P - C)·(P - C) - r² = 0
substituting the ray into the sphere equation gives a quadratic in s:
a = D·D
b = 2D·(O - C)
c = (O - C)·(O - C) - r²
d = b² - 4ac
s1 = (-b + √d) / 2a,   s2 = (-b - √d) / 2a
d < 0 (√d imaginary): no intersection; d ≥ 0 (√d real): take the nearest intersection with s ≥ 0
◆ cylinder, cone, torus
■ all similar to sphere
■ much more on this in the Part II Advanced Graphics course

Ray tracing: shading
◆ once you have the intersection of a ray with the nearest object you can also:
■ calculate the normal to the object at that intersection point
■ shoot rays from that point to all of the light sources, and calculate the diffuse and specular reflections off the object at that point
● this (plus ambient illumination) gives the colour of the object (at that point)
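A minimal Python sketch of the ray-sphere intersection above; it returns the nearest s ≥ 0, or None. O, D, C are (x, y, z) tuples:

    import math

    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sub(u, v): return tuple(a - b for a, b in zip(u, v))

    def ray_sphere(O, D, C, r):
        oc = sub(O, C)
        a = dot(D, D)
        b = 2 * dot(D, oc)
        c = dot(oc, oc) - r * r
        d = b * b - 4 * a * c
        if d < 0:
            return None                     # sqrt(d) imaginary: ray misses
        s1 = (-b - math.sqrt(d)) / (2 * a)  # nearer root first
        s2 = (-b + math.sqrt(d)) / (2 * a)
        for s in (s1, s2):
            if s >= 0:
                return s                    # nearest hit in front of origin
        return None

    print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # -> 4.0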
Ray tracing: shadows
◆ because you are tracing rays from the intersection point to the light, you can check whether another object is between the intersection and the light and is hence casting a shadow
■ also need to watch for self-shadowing

Ray tracing: reflection
◆ if a surface is totally or partially reflective then new rays can be spawned to find the contribution to the pixel's colour given by the reflection
■ this is perfect (mirror) reflection
Ray tracing: transparency & refraction
◆ objects can be totally or partially transparent
■ this allows objects behind the current one to be seen through it
◆ transparent objects can have refractive indices
■ bending the rays as they pass through the objects
◆ transparency + reflection means that a ray can split into two parts

Sampling in ray tracing
◆ single point
■ shoot a single ray through the pixel's centre
◆ super-sampling for anti-aliasing
■ shoot multiple rays through the pixel and average the result
■ regular grid, random, jittered, Poisson disc
◆ adaptive super-sampling
■ shoot a few rays through the pixel, check the variance of the resulting values; if similar enough stop, otherwise shoot some more rays
Types of super-sampling 1
◆ regular grid
■ divide the pixel into a number of sub-pixels and shoot a ray through the centre of each
■ problem: can still lead to noticeable aliasing unless a very high resolution sub-pixel grid is used
◆ random
■ shoot N rays at random points in the pixel

Types of super-sampling 2
◆ Poisson disc
■ shoot N rays at random points in the pixel, with the proviso that no two rays shall pass through the pixel closer than ε to one another
■ for N rays this produces a better looking image than pure random sampling
■ very hard to implement properly

Types of super-sampling 3
◆ jittered
■ divide the pixel into N sub-pixels and shoot one ray at a random point in each sub-pixel
■ an approximation to Poisson disc sampling
■ for N rays it is better than pure random sampling
■ easy to implement
[Figure: jittered, Poisson disc and pure random sampling patterns.]

More reasons for wanting to take multiple samples per pixel
◆ super-sampling is only one reason why we might want to take multiple samples per pixel
◆ many effects can be achieved by distributing the multiple samples over some range
■ called distributed ray tracing
● N.B. distributed means distributed over a range of values
◆ can work in two ways
❶ each of the multiple rays shot through a pixel is allocated a random value from the relevant distribution(s)
● all effects can be achieved this way with sufficient rays per pixel
❷ each ray spawns multiple rays when it hits an object
● this alternative can be used, for example, for area lights
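A minimal Python sketch of jittered super-sampling as described above: one ray at a random point in each of n×n sub-pixels, averaged; trace() is a hypothetical stand-in for a real ray tracer returning an intensity:

    import random

    def jittered_sample(px, py, n, trace):
        total = 0.0
        for i in range(n):
            for j in range(n):
                x = px + (i + random.random()) / n   # random point in sub-pixel
                y = py + (j + random.random()) / n
                total += trace(x, y)
        return total / (n * n)                       # average of the samples

    print(jittered_sample(10, 20, 4, lambda x, y: 1.0))  # constant scene -> 1.0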
189 190
Finite aperture
[images: left, a pinhole camera; below, a finite aperture camera; below left, 12 samples per pixel; below right, 120 samples per pixel]

Area vs point light source
[images: an area light source produces soft shadows; a point light source produces hard shadows]
Distributed ray tracing for specular reflection
◆ previously we could only calculate the effect of perfect reflection
◆ we can now distribute the reflected rays over a range of directions around the perfect-reflection direction

Handling direct illumination
✦ diffuse reflection
◆ handled by ray tracing and polygon scan conversion
◆ assumes that the object is a perfect Lambertian reflector
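A minimal Python sketch of the Lambertian diffuse term for a single point light (kd and the vector names are our own assumptions):

    def lambertian(kd, light_intensity, N, L):
        """Diffuse reflection off a perfect Lambertian surface:
        I = light_intensity * kd * max(N.L, 0), where N is the unit surface
        normal and L the unit vector from the surface point to the light."""
        n_dot_l = N[0]*L[0] + N[1]*L[1] + N[2]*L[2]
        return light_intensity * kd * max(n_dot_l, 0.0)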
Handling indirect illumination: 1
✦ diffuse to specular
◆ handled by distributed ray tracing

Handling indirect illumination: 2
✦ diffuse to diffuse
◆ handled by radiosity
■ covered in the Part II Advanced Graphics course
Multiple inter-reflection
✦ light may reflect off many surfaces on its way from the light to the camera: (diffuse | specular)*
✦ standard ray tracing and polygon scan conversion can handle a single diffuse or specular bounce: diffuse | specular
✦ distributed ray tracing can handle multiple specular bounces: (diffuse | specular) (specular)*
✦ radiosity can handle multiple diffuse bounces: (diffuse)*
✦ the general case, (diffuse | specular)*, cannot be handled by any efficient algorithm

Hybrid algorithms
✦ polygon scan conversion and ray tracing are the two principal 3D rendering mechanisms
◆ each has its advantages
■ polygon scan conversion is faster
■ polygon scan conversion can be implemented easily in hardware
■ ray tracing produces more realistic looking results
✦ hybrid algorithms exist
◆ these generally use the speed of polygon scan conversion for most of the work and use ray tracing only to achieve particular special effects
Commercial uses
✦ polygon scan conversion
■ in particular z-buffer and A-buffer
◆ used almost everywhere
■ games
■ user interfaces
■ most special effects
✦ ray tracing
◆ used when realism or beauty is absolutely crucial
■ advertising
■ some special effects

Surface detail
✦ so far we have assumed perfectly smooth, uniformly coloured surfaces
✦ real life isn't like that:
◆ multicoloured surfaces
■ e.g. a painting, a food can, a page in a book
◆ bumpy surfaces
■ e.g. almost any surface! (very few things are perfectly smooth)
◆ textured surfaces
■ e.g. wood, marble
Texture mapping
✦ each 3D object is parameterised in (u,v) space
✦ each pixel maps to some part of the surface
✦ that part of the surface maps to part of the texture
[images: all surfaces are smooth and of uniform colour; most surfaces are textured with 2D texture maps; the pillars are textured with a solid texture]
Parameterising a primitive
✦ polygon: give (u,v) coordinates for three vertices, or treat as part of a plane
✦ plane: give u-axis and v-axis directions in the plane
✦ cylinder: one axis goes up the cylinder, the other around the cylinder

Sampling texture space
[diagram: a pixel projected into (u,v) texture space]
Sampling texture space: finding the value
✦ nearest neighbour: the sample value is the nearest pixel value to the sample point
✦ bilinear reconstruction: the sample value is the weighted mean of the four pixels around the sample point
[diagram: the three standard methods]

Sampling texture space: interpolation methods
✦ nearest neighbour
◆ fast, with many artefacts
✦ bilinear
◆ reasonably fast, blurry
✦ bicubic
◆ gives better results
◆ uses 16 values (4×4) around the sample location
◆ but runs at one quarter the speed of bilinear
✦ can we get any better?
◆ many slower techniques offering slightly higher quality
◆ biquadratic is an interesting trade-off
■ uses 9 values (3×3) around the sample location
■ faster than bicubic, slower than bilinear; results seem to be nearly as good as bicubic
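A minimal Python sketch of bilinear reconstruction (the texture is a 2D array indexed [row][column]; we assume u, v already lie within the texture):

    def bilinear(tex, u, v):
        """Sample the texture at real-valued (u, v): the weighted mean of
        the four texels surrounding the sample point."""
        h, w = len(tex), len(tex[0])
        x0, y0 = int(u), int(v)
        fx, fy = u - x0, v - y0
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        top = (1 - fx) * tex[y0][x0] + fx * tex[y0][x1]
        bot = (1 - fx) * tex[y1][x0] + fx * tex[y1][x1]
        return (1 - fy) * top + fy * bot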
Texture mapping examples
[images: nearest-neighbour vs bicubic; look at the bottom right hand corner of the distorted image to compare the two interpolation methods]

Down-sampling
✦ if the pixel covers quite a large area of the texture, then it will be necessary to average the texture across that area, not just take a sample in the middle of the area
Solid textures
✦ texture mapping applies a 2D texture to a surface: colour = f(u,v)
✦ solid textures have colour defined for every point in space: colour = f(x,y,z)
✦ permits the modelling of objects which appear to be carved out of a material

What can a texture map modify?
✦ any (or all) of the colour components
◆ ambient, diffuse, specular
✦ transparency
◆ "transparency mapping"
✦ reflectiveness
✦ but also the surface normal
◆ "bump mapping"
The workings of the human visual system
✦ to understand the requirements of displays (resolution, quantisation and colour) we need to know how the human eye works...

Structure of the human eye
✦ the lens of the eye forms an image of the world on the retina: the back surface of the eye
✦ the retina is an array of light detection cells
✦ the fovea is the high resolution area of the retina
✦ the optic nerve takes signals from the retina to the visual cortex in the brain
The retina
✦ consists of about 150 million light receptors
✦ retina outputs information to the brain along the optic nerve
◆ there are about one million nerve fibres in the optic nerve
◆ the retina performs significant pre-processing to reduce the number of signals from 150M to 1M
◆ pre-processing includes:
■ averaging multiple inputs together
■ colour signal processing
■ local edge detection

Light detectors in the retina
✦ two classes
◆ rods
◆ cones
✦ cones come in three types
◆ sensitive to short, medium and long wavelengths
◆ allow you to see in colour
✦ the cones are concentrated in the macula, at the centre of the retina
✦ the fovea is a densely packed region in the centre of the macula
◆ contains the highest density of cones
◆ provides the highest resolution vision
www.stlukeseye.com
Foveal vision
✦ 150,000 cones per square millimetre in the fovea
■ high resolution
■ colour
✦ outside fovea: mostly rods
■ lower resolution
● many rods' inputs are combined to produce one signal to the visual cortex in the brain
■ principally monochromatic
● there are very few cones, so little input available to provide colour information to the brain
◆ provides peripheral vision
● allows you to keep the high resolution region in context
● without peripheral vision you would walk into things, be unable to find things easily, and generally find life much more difficult

Distribution of rods & cones
[graph: receptor density across the retina; cones are concentrated in the fovea]
• cones in the fovea are squished together more tightly than outside the fovea: higher resolution vision
• as the density of cones drops, the gaps between them are filled with rods
Detailed structure of retinal processing
✦ a lot of pre-processing occurs in the retina before signals are passed to the brain

Some of the processing in the eye
✦ discrimination
■ discriminates between different intensities and colours
✦ adaptation
■ adapts to changes in illumination level and colour
■ can see about 1:100 contrast at any given time
■ but can adapt to see light over a range of 10¹⁰
✦ edge detection and edge enhancement
■ visible in e.g. Mach banding effects
Intensity differentiation
✦ results for a "normal" viewer
◆ a human can distinguish about a 2% change in intensity for much of the range of intensities
◆ discrimination becomes rapidly worse as you get close to the darkest or brightest intensities that you can currently see
[graph: ΔI/I plotted against I, roughly constant at 0.02 over the middle of the range]

Simultaneous contrast
✦ the eye performs a range of non-linear operations
✦ for example, as well as responding to changes in overall light, the eye responds to local changes
The centre square is the same intensity in all four cases but does not appear to be, because your visual system is taking the local contrast into account
Each of the nine rectangles is a constant colour but you will see each rectangle being
slightly brighter at the end which is near a darker rectangle and slightly darker at the
end which is near a lighter rectangle
Summary of what human eyes do...
✦ sample the image that is projected onto the retina
✦ adapt to changing conditions
✦ perform non-linear pre-processing
◆ makes it very hard to model and predict behaviour
✦ combine a large number of basic inputs into a much smaller set of signals
◆ which encode more complex data
■ e.g. presence of an edge at a particular location with a particular orientation, rather than intensity at a set of locations

Implications of vision on resolution
◆ the acuity of the eye is measured as the ability to see a white gap, 1 minute wide, between two black lines
■ about 300dpi at 30cm
■ this corresponds to about 2 cone widths on the fovea
◆ resolution decreases as contrast decreases
◆ colour resolution is much worse than intensity resolution
■ this is exploited in TV broadcast
● analogue television broadcasts the colour signal at half the horizontal resolution of the intensity signal
Illuminants have different characteristics
◆ sunlight is reasonably uniform
◆ incandescent light bulbs are very red
◆ sodium street lights emit almost pure yellow
[graphs: intensity against wavelength for daylight and for an incandescent light bulb]
www.gelighting.com/na/business_lighting/education_resources/learn_about_light/

Illuminant × reflection = reflected light
illuminant intensity × surface reflectivity = intensity received by the eye
[graphs: daylight × reflectivity, and incandescent light bulb × reflectivity, each plotted against wavelength]
Comparison of illuminants
[images: the same scene under an incandescent light bulb and a camera flash bulb]
compare these things:
❖ colour of the monkey's nose and paws: more red under certain lights
❖ oranges & yellows (similar in all)
❖ blues & violets (considerably different)

Representing colour
✦ we need a mechanism which allows us to represent colour in the computer by some set of numbers
◆ preferably a small set of numbers which can be quantised to a fairly small number of bits each
✦ we will discuss:
◆ Munsell's artists' scheme
■ which classifies colours on a perceptual basis
◆ the mechanism of colour vision
■ how colour perception works
◆ various colour spaces
■ which quantify colour based on either physical or perceptual models of colour
Munsell’s colour classification system Munsell’s colour classification system
✦three axes ✦ any two adjacent colours are a standard “perceptual”
■ hue the dominant colour distance apart
■ value bright colours/dark colours
■ chroma vivid colours/dull colours
◆ worked out by testing it on people
◆can represent this as a 3D graph ◆ a highly irregular space
■ e.g. vivid yellow is much brighter than vivid blue
invented by Albert H. Munsell, an American artist, in 1905 in an attempt to systematically classify colours
Colour signals sent to the brain
◆ the signal that is sent to the brain is pre-processed by the retina:
■ long + medium + short = luminance
■ long − medium = red-green
■ long + medium − short = yellow-blue
◆ this theory explains:
■ colour-blindness effects
■ why red, yellow, green and blue are perceptually important colours
■ why you can see e.g. a yellowish red but not a greenish red

Chromatic metamerism
◆ many different spectra will induce the same response in our cones
■ the values of the three perceived values can be calculated as:
● l = k ∫ P(λ) l(λ) dλ
● m = k ∫ P(λ) m(λ) dλ
● s = k ∫ P(λ) s(λ) dλ
■ k is some constant, P(λ) is the spectrum of the light incident on the retina
■ two different spectra (e.g. P₁(λ) and P₂(λ)) can give the same values of l, m, s
■ we can thus fool the eye into seeing (almost) any colour by mixing correct proportions of some small number of lights
Mixing coloured lights
✦ by mixing different amounts of red, green, and blue lights we can generate a wide range of responses in the human eye
[diagram: a colour cube with red, green and blue axes, each light running from fully off to fully on]

XYZ colour space
FvDFH Sec 13.2.2
✦ not every wavelength can be represented as a mix of red, green, and blue lights
✦ but matching & defining coloured light with a mixture of three fixed primaries is desirable
✦ CIE define three standard primaries: X, Y, Z
Y matches the human eye's response to light of a constant intensity at each wavelength (luminous-efficiency function of the eye)
X, Y, and Z are not themselves colours; they are used for defining colours – you cannot make a light that emits one of these primaries
XYZ colour space was defined in 1931 by the Commission Internationale de l'Éclairage (CIE)
CIE chromaticity diagram
✦ chromaticity values are defined in terms of x, y, z:
    x = X/(X+Y+Z),  y = Y/(X+Y+Z),  z = Z/(X+Y+Z)   ∴ x + y + z = 1
■ ignores luminance
■ can be plotted as a 2D function
◆ pure colours (single wavelength) lie along the outer curve
◆ all other colours are a mix of pure colours and hence lie inside the curve
◆ points outside the curve do not exist as colours
[figure: the chromaticity horseshoe with wavelengths 410nm to 700nm marked along the outer curve; FvDFH Fig 13.24, Colour plate 2]

Colour spaces
◆ CIE XYZ, Yxy
◆ Uniform
■ equal steps in any direction make equal perceptual differences
■ CIE L*u*v*, CIE L*a*b*
◆ Pragmatic
■ used because they relate directly to the way that the hardware works
■ RGB, CMY, CMYK
◆ Munsell-like
■ used in user-interfaces
■ considered to be easier to use for specifying colour than are the pragmatic colour spaces
■ map easily to the pragmatic colour spaces
■ HSV, HLS
XYZ is not perceptually uniform; Luv was designed to be more uniform
[figures: each ellipse shows how far you can stray from the central point before a human being notices a difference in colour]

Luv colour space
L is luminance and is orthogonal to u and v, the two colour axes

Lab space
✦ another CIE colour space
✦ based on complementary colour theory
◆ see slide 49 (Colour signals sent to the brain)
✦ also aims to be perceptually uniform
    L* = 116 (Y/Yn)^(1/3) − 16
    a* = 500 [(X/Xn)^(1/3) − (Y/Yn)^(1/3)]
    b* = 200 [(Y/Yn)^(1/3) − (Z/Zn)^(1/3)]
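A Python sketch of this conversion; the white point (Xn, Yn, Zn) is an assumption (D65 values used here), and the CIE standard replaces the cube root with a linear segment for very small ratios, which this sketch omits:

    def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
        """XYZ -> L*a*b* using the cube-root formulae above."""
        fx, fy, fz = (X/Xn)**(1/3), (Y/Yn)**(1/3), (Z/Zn)**(1/3)
        L = 116.0 * fy - 16.0
        a = 500.0 * (fx - fy)
        b = 200.0 * (fy - fz)
        return L, a, b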
Lab space
✦ this visualisation shows those colours in Lab space which a human can perceive
✦ again we see that human perception of colour is not uniform
◆ perception of colour diminishes at the white and black ends of the L axis
◆ the maximum perceivable chroma differs for different hues

RGB space
✦ all display devices which output light mix red, green and blue lights to make colour
◆ televisions, CRT monitors, video projectors, LCD screens
✦ nominally, RGB space is a cube
✦ the device puts physical limitations on:
◆ the range of colours which can be displayed
◆ the brightest colour which can be displayed
◆ the darkest colour which can be displayed
RGB in XYZ space
✦ CRTs and LCDs mix red, green, and blue to make all other colours
✦ the red, green, and blue primaries each map to a point in XYZ space
✦ any colour within the resulting triangle can be displayed
■ any colour outside the triangle cannot be displayed
■ for example: CRTs cannot display very saturated purple, turquoise, or yellow
FvDFH Figs 13.26, 13.27

CMY space
✦ printers make colour by mixing coloured inks
✦ the important difference between inks (CMY) and lights (RGB) is that, while lights emit light, inks absorb light
◆ cyan absorbs red, reflects blue and green
◆ magenta absorbs green, reflects red and blue
◆ yellow absorbs blue, reflects green and red
✦ CMY is, at its simplest, the inverse of RGB
✦ CMY space is nominally a cube
Ideal and actual printing ink reflectivities
[graphs: ideal (sharp cut-off) and actual (imperfect) reflectivity curves for the printing inks]

CMYK space
✦ in real printing we use black (key) as well as CMY
✦ why use black?
◆ inks are not perfect absorbers
◆ mixing C + M + Y gives a muddy grey, not black
◆ lots of text is printed in black: trying to align C, M and Y perfectly for black text would be a nightmare
Using K
✦ if we print using just CMY then we can get up to 300% ink at any point on the paper
✦ removing the achromatic portion of CMY and replacing it with K reduces the maximum possible ink coverage to 200%

Colour spaces for user-interfaces
✦ RGB and CMY are based on the physical devices which produce the coloured output
✦ RGB and CMY are difficult for humans to use for selecting colours
✦ Munsell's colour system is much more intuitive:
◆ hue: what is the principal colour?
◆ value: how light or dark is it?
◆ chroma: how vivid or dull is it?
✦ computer interface designers have developed basic transformations of RGB which resemble Munsell's human-friendly system
HSV: hue saturation value
✦ three axes, as with Munsell
◆ hue and value have same meaning
◆ the term "saturation" replaces the term "chroma"
◆ designed by Alvy Ray Smith in 1978
◆ algorithm to convert HSV to RGB and back can be found in Foley et al., Figs 13.33 and 13.34

HLS: hue lightness saturation
✦ a simple variation of HSV
◆ hue and saturation have same meaning
◆ the term "lightness" replaces the term "value"
✦ designed to address the complaint that HSV has all pure colours having the same lightness/value as white
◆ designed by Metrick in 1979
◆ algorithm to convert HLS to RGB and back can be found in Foley et al., Figs 13.36 and 13.37
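Foley et al. give the full algorithms; as a flavour, here is a minimal Python sketch of the RGB to HSV direction (the standard hexcone construction; inputs assumed in [0,1]):

    def rgb_to_hsv(r, g, b):
        """Return (hue in degrees, saturation, value)."""
        v = max(r, g, b)
        c = v - min(r, g, b)                 # "chroma"
        s = 0.0 if v == 0.0 else c / v
        if c == 0.0:
            h = 0.0                          # achromatic: hue is undefined
        elif v == r:
            h = 60.0 * (((g - b) / c) % 6.0)
        elif v == g:
            h = 60.0 * ((b - r) / c + 2.0)
        else:
            h = 60.0 * ((r - g) / c + 4.0)
        return h, s, v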
Summary of colour spaces
◆ the eye has three types of colour receptor
◆ therefore we can validly use a three-dimensional co-ordinate system to represent colour
◆ XYZ is one such co-ordinate system
■ Y is the eye's response to intensity (luminance)
■ X and Z are, therefore, the colour co-ordinates
● same Y, change X or Z ⇒ same intensity, different colour
● same X and Z, change Y ⇒ same colour, different intensity
◆ there are other co-ordinate systems with a luminance axis
■ L*a*b*, L*u*v*, HSV, HLS
◆ some other systems use three colour co-ordinates
■ RGB, CMY
■ luminance can then be derived as some function of the three
● e.g. in RGB: Y = 0.299 R + 0.587 G + 0.114 B

Image display
✦ a handful of technologies cover over 99% of all display devices
◆ active displays
■ cathode ray tube: declining use
■ liquid crystal display: rapidly increasing use
■ plasma displays: increasing use
■ digital mirror displays: increasing use in video projectors
◆ printers (passive displays)
■ laser printers: the traditional office printer
■ ink jet printers: low cost, rapidly increasing in quality, the traditional home printer
■ commercial printers: for high volume
Basic cathode ray tube I
◆ the heater emits electrons, which are accelerated by the potential difference between cathode and anode
◆ electrons hitting the front screen excite the phosphor, making it emit visible light
[diagram: vacuum tube with heater, cathode, anode (10–25 kV), electron beam, phosphor coating; light is emitted by the excited phosphor]

Basic cathode ray tube II
◆ focussing coils magnetically focus the electron beam to a small spot on the CRT faceplate, making a single dot
◆ changing the voltage between the cathode and anode changes the intensity of the electron beam, and hence the intensity of the dot
[diagrams: the horizontal scan voltage ramps once per scan line (64µs for PAL television); the vertical scan voltage ramps once per field (20ms for PAL television)]
Liquid crystal displays I
◆ liquid crystals can twist the polarisation of light
◆ basic control is by the voltage that is applied across the liquid crystal: either on or off, transparent or opaque
◆ greyscale can be achieved with some types of liquid crystal by varying the voltage
◆ colour is achieved with colour filters

Liquid crystal displays II
there are two polarisers at right angles to one another on either side of the liquid crystal: under normal circumstances these would block all light
there are liquid crystal directors: micro-grooves which align the liquid crystal molecules next to them
the liquid crystal molecules try to line up with one another; the micro-grooves on each side are at right angles to one another, which forces the crystals' orientations to twist gently through 90° as you go from top to bottom, causing the polarisation of the light to twist through 90° and making the pixel transparent
liquid crystal molecules are polar: they have a positive and a negative end
applying a voltage across the liquid crystal causes the molecules to stand on their ends, ruining the twisting phenomenon, so light cannot get through and the pixel is opaque
a three LCD video projector, with colour made by devoting one LCD panel to
each of red, green and blue, and by splitting the light using dichroic mirrors
which pass some wavelengths and reflect others
Plasma displays II
◆ plasma displays have been commercially available since 1993
■ but have been widely marketed since about 2000
◆ advantages
■ can be made larger than LCD panels
● although LCD panels are getting bigger
■ much thinner than CRTs for equivalent screen sizes
◆ disadvantages
[sidebar: January 2004: Samsung release an LCD TV as big as a plasma TV at about twice the cost. Will plasma survive the challenge?]

Digital micromirror devices I
◆ developed by Texas Instruments
■ often referred to as Digital Light Processing (DLP) technology
◆ invented in 1987, following ten years' work on deformable mirror devices
◆ manufactured like a silicon chip!
■ a standard 5 volt, 0.8 micron, CMOS process
■ micromirrors are coated with a highly reflective aluminium alloy
■ each mirror is 16×16µm²
Digital micromirror devices II
◆ used increasingly in video projectors
◆ widely available from late 1990s
◆ colour is achieved using either three DMD chips or one chip and a rotating colour filter

Electrophoretic displays I
✦ electronic paper widely used in e-books
✦ iRex iLiad, Sony Reader, Amazon Kindle
✦ 200 dpi passive display

Electrophoretic displays II
✦ transparent capsules ~40µm diameter
◆ filled with dark oil
◆ negatively charged 1µm titanium dioxide particles
✦ electrodes in substrate attract or repel the white particles
✦ image persists with no power consumption

Electrophoretic displays III
✦ colour filters over individual pixels
✦ flexible substrate using plastic semiconductors (Plastic Logic)
Printers
✦ many types of printer
◆ ink jet
■ sprays ink onto paper
◆ laser printer
■ uses a laser to lay down a pattern of charge on a drum; this picks up charged toner which is then pressed onto the paper
◆ commercial offset printer
■ an image of the whole page is put on a roller
■ this is repeatedly inked and pressed against the paper to print thousands of copies of the same thing
✦ all make marks on paper
◆ essentially binary devices: mark/no mark

Printer resolution
✦ laser printer
◆ 300–1200dpi
✦ ink jet
◆ used to be lower resolution & quality than laser printers but now have comparable resolution
✦ phototypesetter for commercial offset printing
◆ 1200–2400dpi
✦ bi-level devices: each pixel is either on or off
◆ black or white (for monochrome printers)
◆ ink or no ink (in general)
What about greyscale?
◆ achieved by halftoning
■ divide image into cells; in each cell draw a spot of the appropriate size for the intensity of that cell
■ on a printer each cell is m×m pixels, allowing m²+1 different intensity levels
■ e.g. 300dpi with 4×4 cells ⇒ 75 cells per inch, 17 intensity levels
■ phototypesetters can make 256 intensity levels in cells so small you can only just see them
◆ an alternative method is dithering
■ dithering photocopies badly, halftoning photocopies well
[images: halftoning; dithering]
What about colour?
✦ generally use cyan, magenta, yellow, and black inks (CMYK)
✦ inks absorb colour
◆ c.f. lights, which emit colour
◆ CMY is the inverse of RGB

How do you produce halftoned colour?
◆ print four halftone screens, one in each colour
◆ carefully angle the screens to prevent interference (moiré) patterns
[table: standard screen rulings (in lines per inch)]
Four colour halftone screens
✦ standard angles:
◆ Cyan 15°
◆ Black 45°
◆ Magenta 75°
◆ Yellow 90°
Magenta, Cyan & Black are at 30° relative to one another; Yellow (the least distinctive printing colour) is at 15° relative to Magenta and Cyan

Range of printable colours
[chromaticity diagrams: a: colour photography (diapositive); b: high-quality offset printing; c: newspaper printing]
Beyond four colour printing
◆ printers can be built to do printing in more colours
■ gives a better range of printable colours
◆ six colour printing
■ for home photograph printing
■ dark & light cyan, dark & light magenta, yellow, black
◆ eight colour printing
■ 3× cyan, 3× magenta, yellow, black
■ or 2× cyan, 2× magenta, yellow, 3× black
◆ twelve colour printing
■ 3× cyan, 3× magenta, yellow, black, red, green, blue, orange

The extra range of colour
✦ this gamut is for so-called HiFi colour printing
◆ uses cyan, magenta, yellow, plus red, green and blue inks
Laser printer
[diagram of a laser printer mechanism]

Ink jet printers
[diagram of an ink jet printer mechanism]
Dye sublimation printers
◆ dye sublimation gives true greyscale
[diagram: a pixel sized heater vaporises dye from a dye sheet]

Image processing
◆ filtering
■ convolution
■ nonlinear filtering
Filtering by convolution:
    f′(x, y) = Σ_{i=−∞}^{+∞} Σ_{j=−∞}^{+∞} h(i, j) × f(x − i, y − j)
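A direct (unoptimised) Python sketch of this convolution, with the kernel centred on its middle element and pixels off the edge of the image treated as zero:

    def convolve(image, kernel):
        """f'(x,y) = sum_i sum_j h(i,j) * f(x-i, y-j) for a finite kernel."""
        H, W = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        cy, cx = kh // 2, kw // 2
        out = [[0.0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                acc = 0.0
                for i in range(kh):
                    for j in range(kw):
                        sy, sx = y - (i - cy), x - (j - cx)
                        if 0 <= sy < H and 0 <= sx < W:
                            acc += kernel[i][j] * image[sy][sx]
                out[y][x] = acc
        return out

    # e.g. the basic 3x3 blurring filter of the next slide:
    # blurred = convolve(img, [[1/9, 1/9, 1/9]] * 3)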
Example filters - averaging/blurring

Basic 3×3 blurring filter:
    1/9 ×   1 1 1
            1 1 1
            1 1 1

Gaussian 3×3 blurring filter:
    1/16 ×  1 2 1
            2 4 2
            1 2 1

Gaussian 5×5 blurring filter:
    1/112 × 1 2  4 2 1
            2 6  9 6 2
            4 9 16 9 4
            2 6  9 6 2
            1 2  4 2 1

Example filters - edge detection

Prewitt filters (horizontal, vertical, diagonal):
     1 0 -1      1  1  1      1  1  0      0  1  1
     1 0 -1      0  0  0      1  0 -1     -1  0  1
     1 0 -1     -1 -1 -1      0 -1 -1     -1 -1  0

Roberts filters:
     1  0       0  1
     0 -1      -1  0

Sobel filters (horizontal, vertical, diagonal):
     1 0 -1      1  2  1      2  1  0      0  1  2
     2 0 -2      0  0  0      1  0 -1     -1  0  1
     1 0 -1     -1 -2 -1      0 -1 -2     -2 -1  0
Example filter - horizontal edge detection

Horizontal edge detection filter:
     1  1  1
     0  0  0
    -1 -1 -1

[worked example: convolving this filter with an image whose upper rows have value 100 and lower rows value 0 produces values up to 300 along the horizontal edge, falling through 200 and 100 to 0 in the flat regions]
Median filtering
✦ not a convolution method
✦ the new value of a pixel is the median of the values of all the pixels in its neighbourhood

Median filter - example
[images: original; median filtered]
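A minimal Python sketch of the median filter (3×3 neighbourhood by default, clipped at the image edges):

    def median_filter(image, radius=1):
        """Replace each pixel by the median of its (2r+1)x(2r+1) neighbourhood."""
        H, W = len(image), len(image[0])
        out = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                hood = [image[sy][sx]
                        for sy in range(max(0, y - radius), min(H, y + radius + 1))
                        for sx in range(max(0, x - radius), min(W, x + radius + 1))]
                hood.sort()
                out[y][x] = hood[len(hood) // 2]
        return out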
Median filter - limitations
✦ copes well with shot (impulse) noise
✦ not so good at other types of noise
[images: original; with random noise added; median filtered. In this example the median filter reduces the noise but doesn't eliminate it]

Point processing
✦ each pixel's value is modified
✦ the modification function only takes that pixel's value into account
    p′(i, j) = f{ p(i, j) }
Point processing: inverting an image
[graph: the mapping function f(p) runs from white at p = black down to black at p = white]

Point processing: improving an image's contrast
[graph: the mapping function f(p) from input value p (black to white) to output value]
Point processing: modifying the output of a filter
[images: in the raw filter output, black = edge, grey = indeterminate, white = no edge; after thresholding, black or white = edge, mid-grey = no edge]

Point processing: gamma correction
■ the intensity displayed on a CRT is related to the voltage on the electron gun by: i ∝ V^γ
■ the gun voltage is directly related to the stored pixel value p′, so i ∝ V^γ ∝ p′^γ
■ gamma correction therefore stores p′ = p^(1/γ), so that the displayed intensity is proportional to the intended pixel value p: i ∝ V^γ ∝ p′^γ ∝ p
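A one-line Python sketch of gamma correction (γ = 2.2 is a typical CRT value, assumed here):

    def gamma_correct(p, gamma=2.2, pmax=255):
        """p' = pmax * (p/pmax)^(1/gamma), so that the displayed intensity
        i, proportional to p'^gamma, is proportional to the original p."""
        return round(pmax * (p / pmax) ** (1.0 / gamma))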
Image compositing
✦ merging two or more images together

Simple compositing
✦ copy pixels from one image to another
◆ only copying the pixels you want
◆ use a mask to specify the desired pixels
Alpha blending for compositing
✦ instead of a simple boolean mask, use an alpha mask
◆ the value of the alpha mask determines how much of each image to blend together to produce the final pixel: d = αa + (1 − α)b

Arithmetic operations
✦ images can be manipulated arithmetically
◆ simply apply the operation to each pixel location in turn
✦ multiplication
◆ used in masking
✦ subtraction (difference)
◆ used to compare images
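A minimal Python sketch of the alpha blend above (a, b and the alpha mask are equal-sized 2D arrays; the convention that alpha weights image a is our assumption):

    def alpha_blend(a, b, alpha):
        """d(x,y) = alpha(x,y)*a(x,y) + (1 - alpha(x,y))*b(x,y), alpha in [0,1]."""
        return [[alpha[y][x] * a[y][x] + (1.0 - alpha[y][x]) * b[y][x]
                 for x in range(len(a[0]))]
                for y in range(len(a))]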
Difference example
✦ take the difference between the two images: d = 1 − |a − b|
◆ the two images are taken from slightly different viewpoints
◆ black = large difference, white = no difference
[images: a, b, d]

Halftoning & dithering
✦ mainly used to convert greyscale to binary
◆ e.g. printing greyscale pictures on a laser printer
◆ 8-bit to 1-bit
✦ is also used in colour printing, normally with four colours: cyan, magenta, yellow, black
Exercise: phototypesetters may use halftone cells up to size 16×16, with 256 entries; either construct a halftone dither matrix for a cell that large or, better, an algorithm to generate an appropriate halftone dither matrix
1-to-1 pixel mapping
✦ a simple modification of the ordered dither method can be used
◆ turn a pixel on if its intensity is greater than (or equal to) the value of the corresponding cell in the dither matrix
e.g. with the 4×4 dither matrix d:
     1  9  3 11
    15  5 13  7
     4 12  2 10
    14  8 16  6
quantise the 8-bit pixel value:  q(i,j) = p(i,j) div 15
find the binary value:  b(i,j) = ( q(i,j) ≥ d(i mod 4, j mod 4) )

Error diffusion
✦ error diffusion gives a more pleasing visual result than ordered dither
✦ method:
◆ work left to right, top to bottom
◆ map each pixel to the closest quantised value
◆ pass the quantisation error on to the pixels to the right and below, and add in the errors before quantising these pixels
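A Python sketch of the 1-to-1 ordered dither above, using the 4×4 matrix just given:

    DITHER4 = [[ 1,  9,  3, 11],
               [15,  5, 13,  7],
               [ 4, 12,  2, 10],
               [14,  8, 16,  6]]

    def ordered_dither(image):
        """q = p div 15 maps an 8-bit pixel to 0..17; the output pixel is
        on iff q >= the corresponding dither matrix entry."""
        return [[1 if (p // 15) >= DITHER4[i % 4][j % 4] else 0
                 for j, p in enumerate(row)]
                for i, row in enumerate(image)]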
Error diffusion - example (1)
✦ map 8-bit pixels to 1-bit pixels
◆ quantise and calculate new error values
    pixel value f(i,j) in 0–127: output 0, error e(i,j) = f(i,j)
    pixel value f(i,j) in 128–255: output 1, error e(i,j) = f(i,j) − 255
◆ each 8-bit value is calculated from the pixel and the error values passed on from the pixels to the left and above:
    f(i,j) = p(i,j) + ½ e(i−1,j) + ½ e(i,j−1)

Error diffusion - example (2)
[worked example: a small image is processed pixel by pixel, in order (0,0), (1,0), (0,1), (1,1); e.g. the value 60 quantises to 0 with error +60, passing +30 to each of the pixels to its right and below; 80 picks up the +30 to become 110, quantising to 0 with error +110 and passing +55 on; the accumulated value 137 quantises to 1 with error −118, passing −59 on]
Error diffusion
◆ Floyd & Steinberg developed the error diffusion method in 1975
■ often called the "Floyd-Steinberg algorithm"
◆ their original method diffused the errors in the following proportions:
    7/16 to the pixel to the right
    3/16, 5/16 and 1/16 to the pixels below-left, directly below, and below-right
[diagram: pixels above and to the left of the current pixel have already been processed; the error is passed only to pixels still to be processed]
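A minimal Python sketch of Floyd-Steinberg error diffusion to 1 bit:

    def floyd_steinberg(image):
        """image: rows of 8-bit values; returns rows of 0/1 values."""
        H, W = len(image), len(image[0])
        f = [[float(p) for p in row] for row in image]   # working copy
        out = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                new = 255.0 if f[y][x] >= 128.0 else 0.0
                out[y][x] = 1 if new > 0.0 else 0
                err = f[y][x] - new
                if x + 1 < W:
                    f[y][x + 1] += err * 7 / 16          # right
                if y + 1 < H:
                    if x > 0:
                        f[y + 1][x - 1] += err * 3 / 16  # below-left
                    f[y + 1][x] += err * 5 / 16          # below
                    if x + 1 < W:
                        f[y + 1][x + 1] += err * 1 / 16  # below-right
        return out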
Halftoning & dithering - examples
[images: original; thresholded; halftoned with a very fine screen; ordered dither; error diffused]
◆ thresholding: <128 ⇒ black, ≥128 ⇒ white
◆ halftoning: the larger the cell size, the more intensity levels available; the smaller the cell, the less noticeable the halftone dots
◆ halftoned with a very fine screen: the regular dither pattern is clearly visible
◆ error diffused: more random than ordered dither and therefore looks more attractive to the human eye

Encoding & compression
✦ introduction
✦ various coding schemes
◆ difference, predictive, run-length, quadtree
✦ transform coding
◆ Fourier, cosine, wavelets, JPEG
[figure: the raw 8-bit pixel values of an example image, shown as a grid of numbers]
✦ lossless coding lets the original pixel values be reconstructed exactly
✦ lossy coding discards some information in exchange for greater compression
Symbol encoding on raw data (an example of symbol encoding)
✦ pixels are encoded by variable length symbols
◆ the length of the symbol is determined by the frequency of the pixel value's occurrence
e.g.
    p    P(p)   Code 1   Code 2
    0    0.19   000      11
    1    0.25   001      01
    2    0.21   010      10
    3    0.16   011      001
    …
with Code 1 each pixel requires 3 bits; with Code 2 each pixel requires 2.7 bits on average

Quantisation as a compression method (an example of quantisation)
✦ quantisation, on its own, is not normally used for compression because of the visual degradation of the resulting image
✦ however, an 8-bit to 4-bit quantisation using error diffusion would compress an image to 50% of the space
Difference mapping (an example of mapping)
◆ every pixel in an image will be very similar to those either side of it
◆ a simple mapping is to store the first pixel value and, for every other pixel, the difference between it and the previous pixel
e.g. 67 73 74 69 53 54 52 49 127 125 125 126
  →  67, +6, +1, −5, −16, +1, −2, −3, +78, −2, 0, +1

Difference mapping - example (1)
    Difference    Percentage of pixels
    0             3.90%
    -8..+7        42.74%
    -16..+15      61.31%
    -32..+31      77.58%
    -64..+63      90.35%
    -128..+127    98.08%
    -255..+255    100.00%
Difference mapping - example (2) (an example of mapping and symbol encoding combined)
✦ this is a very simple variable length code
    Difference value        Code           Code length   Percentage of pixels
    -8..+7                  0XXXX          5             42.74%
    -40..-9, +8..+39        10XXXXXX       8             38.03%
    -255..-41, +40..+255    11XXXXXXXXX    11            19.23%
average 7.29 bits/pixel: 91% of the space of the original 8 bit/pixel image

Predictive mapping (an example of mapping)
◆ when transmitting an image left-to-right top-to-bottom, we already know the values above and to the left of the current pixel
◆ predictive mapping uses those known pixel values to predict the current pixel value, and maps each pixel value to the difference between its actual value and the prediction
e.g. prediction:  p̂(i,j) = ½ p(i,j−1) + ½ p(i−1,j)
difference (this is what we transmit):  d(i,j) = p(i,j) − p̂(i,j)
Run-length encoding (an example of symbol encoding)
✦ based on the idea that images often contain runs of identical pixel values
◆ method:
■ encode runs of identical pixels as run length and pixel value
■ encode runs of non-identical pixels as run length and pixel values

Run-length encoding - example (1)
◆ run length is encoded as an 8-bit value:
■ first bit determines type of run
● 0 = identical pixels, 1 = non-identical pixels
■ other seven bits code length of run
● binary value of run length − 1 (run length ∈ {1,…,128})
◆ pixels are encoded as 8-bit values
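A Python sketch of the identical-pixel half of this scheme (a full encoder would also emit non-identical runs, with the top bit of the header byte set):

    def rle_encode(pixels):
        """Encode a sequence of 8-bit pixels as (header, value) byte pairs:
        header = run length - 1 (top bit 0 = run of identical pixels)."""
        out, i = [], 0
        while i < len(pixels):
            run = 1
            while (i + run < len(pixels) and run < 128
                   and pixels[i + run] == pixels[i]):
                run += 1
            out.append(run - 1)      # 7-bit length, top bit 0
            out.append(pixels[i])
            i += run
        return out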
Run-length encoding - example (2)
◆ works well for computer generated imagery
◆ not so good for real-life imagery
◆ especially bad for noisy images
[images with compression ratios: 19.37%, 44.06%, 99.76%]

CCITT fax encoding
✦ fax images are binary
◆ transmitted digitally at relatively low speed over the ordinary telephone lines
◆ compression is vital to ensuring efficient use of bandwidth
✦ 1D CCITT group 3
◆ binary image is stored as a series of run lengths
◆ don't need to store pixel values!
✦ 2D CCITT group 3 & 4
◆ predict this line's runs based on previous line's runs
◆ encode differences
Transform coding
◆ transform N pixel values into coefficients of a set of N basis functions
◆ the basis functions should be chosen so as to squash as much information into as few coefficients as possible
◆ quantise and encode the coefficients

Mathematical foundations
✦ each of the N pixels, f(x), is represented as a weighted sum of coefficients, F(u):
    f(x) = Σ_{u=0}^{N−1} F(u) H(u,x)
H(u,x) is the array of weights

e.g. for N = 8, with the basis H(u,x) tabulated below, the pixel values
    f(x) = 79 73 63 71 73 79 81 89    (x = 0…7)
decompose into the coefficients
    F(u) = 76, −4.5, +0, +4.5, −2, +1.5, +2, +1.5

    u=0:  +1 +1 +1 +1 +1 +1 +1 +1
    u=1:  +1 +1 +1 +1 −1 −1 −1 −1
    u=2:  +1 +1 −1 −1 +1 +1 −1 −1
    u=3:  +1 +1 −1 −1 −1 −1 +1 +1
    u=4:  +1 −1 +1 −1 +1 −1 +1 −1
    u=5:  +1 −1 +1 −1 −1 +1 −1 +1
    u=6:  +1 −1 −1 +1 +1 −1 −1 +1
    u=7:  +1 −1 −1 +1 −1 +1 +1 −1
Calculating the coefficients
✦ the coefficients can be calculated from the pixel values using this equation (the forward transform):
    F(u) = Σ_{x=0}^{N−1} f(x) h(x,u)
◆ compare this with the equation for a pixel value, from the previous slide (the inverse transform):
    f(x) = Σ_{u=0}^{N−1} F(u) H(u,x)

Walsh-Hadamard transform
✦ "square wave" transform
✦ h(x,u) = (1/N) H(u,x)
[figure: the first sixteen Walsh basis functions, numbered 0–15]
(Hadamard basis functions are the same, but numbered differently!)
invented by Walsh (1923) and Hadamard (1893); the two variants give the same results for N a power of 2
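A Python sketch of the N = 8 Walsh-Hadamard transform pair, using the basis tabulated two slides back; running forward() on the example pixel values 79 73 63 71 73 79 81 89 reproduces the coefficients 76, −4.5, 0, +4.5, −2, +1.5, +2, +1.5:

    H = [[+1,+1,+1,+1,+1,+1,+1,+1],
         [+1,+1,+1,+1,-1,-1,-1,-1],
         [+1,+1,-1,-1,+1,+1,-1,-1],
         [+1,+1,-1,-1,-1,-1,+1,+1],
         [+1,-1,+1,-1,+1,-1,+1,-1],
         [+1,-1,+1,-1,-1,+1,-1,+1],
         [+1,-1,-1,+1,+1,-1,-1,+1],
         [+1,-1,-1,+1,-1,+1,+1,-1]]      # H(u,x) for N = 8

    def forward(f):
        """F(u) = sum_x f(x) h(x,u), with h(x,u) = H(u,x)/N."""
        N = len(f)
        return [sum(f[x] * H[u][x] for x in range(N)) / N for u in range(N)]

    def inverse(F):
        """f(x) = sum_u F(u) H(u,x)."""
        N = len(F)
        return [sum(F[u] * H[u][x] for u in range(N)) for x in range(N)]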
2D transforms
◆ the two-dimensional versions of the transforms are an extension of the one-dimensional cases

one dimension:
    forward:  F(u) = Σ_{x=0}^{N−1} f(x) h(x,u)
    inverse:  f(x) = Σ_{u=0}^{N−1} F(u) H(u,x)

two dimensions:
    forward:  F(u,v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) h(x,y,u,v)
    inverse:  f(x,y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} F(u,v) H(u,v,x,y)

2D Walsh basis functions
[figure: the Walsh basis functions for N=4]
◆ in general, there are N² basis functions operating on an N×N portion of an image
Discrete Fourier transform (DFT)
✦ forward transform:
    F(u) = Σ_{x=0}^{N−1} f(x) (1/N) e^{−i2πux/N}
✦ inverse transform:
    f(x) = Σ_{u=0}^{N−1} F(u) e^{i2πxu/N}
◆ thus:
    h(x,u) = (1/N) e^{−i2πux/N}
    H(u,x) = e^{i2πxu/N}

DFT – alternative interpretation
◆ the DFT uses complex coefficients to represent real pixel values
◆ it can be reinterpreted as:
    f(x) = Σ_{u=0}^{N/2} A(u) cos(2πux/N + θ(u))
■ where A(u) and θ(u) are real values
◆ a sum of weighted & offset sinusoids
Discrete cosine transform (DCT)
✦ forward transform:
    F(u) = α(u) Σ_{x=0}^{N−1} f(x) cos( (2x+1)uπ / 2N )
✦ inverse transform:
    f(x) = Σ_{u=0}^{N−1} α(u) F(u) cos( (2x+1)uπ / 2N )
where:
    α(z) = √(1/N)   for z = 0
    α(z) = √(2/N)   for z ∈ {1, 2, …, N−1}

DCT basis functions
[figure: the first eight DCT basis functions, showing the values of H(u,x) for N=8]
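A Python sketch of the 1D DCT pair as defined above:

    import math

    def alpha(z, N):
        return math.sqrt(1.0 / N) if z == 0 else math.sqrt(2.0 / N)

    def dct(f):
        """F(u) = alpha(u) * sum_x f(x) cos((2x+1)u*pi / 2N)."""
        N = len(f)
        return [alpha(u, N) * sum(f[x] * math.cos((2*x + 1) * u * math.pi / (2*N))
                                  for x in range(N))
                for u in range(N)]

    def idct(F):
        """f(x) = sum_u alpha(u) F(u) cos((2x+1)u*pi / 2N)."""
        N = len(F)
        return [sum(alpha(u, N) * F[u] * math.cos((2*x + 1) * u * math.pi / (2*N))
                    for u in range(N))
                for x in range(N)]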
Haar transform: wavelets
◆ "square wave" transform, similar to Walsh-Hadamard
◆ Haar basis functions get progressively more local
■ c.f. Walsh-Hadamard, where all basis functions are global
◆ simplest wavelet transform

Haar basis functions
[figure: the first sixteen Haar basis functions, numbered 0–15]
Karhunen-Loève transform (KLT)
the "eigenvector", "principal component", or "Hotelling" transform
✦ based on statistical properties of the image source
✦ theoretically the best transform encoding method
✦ but different basis functions for every different image source
✦ if we assume a statistically random image source
■ all images are then equally likely
◆ the KLT basis functions are very similar to the DCT basis functions
■ the DCT basis functions are much easier to compute and use
◆ therefore use the DCT for statistically random image sources
first derived by Hotelling (1933) for discrete data; by Karhunen (1947) and Loève (1948) for continuous data

JPEG: a practical example
✦ compression standard
■ JPEG = Joint Photographic Expert Group
✦ three different coding schemes:
◆ baseline coding scheme
■ based on DCT, lossy
■ adequate for most compression applications
◆ extended coding scheme
■ for applications requiring greater compression or higher precision or progressive reconstruction
◆ independent coding scheme
■ lossless, doesn't use DCT
JPEG sequential baseline scheme
◆ input and output pixel data limited to 8 bits
◆ DCT coefficients restricted to 11 bits
◆ three step method:
    image → DCT transform → Quantisation → Variable length encoding → JPEG encoded image
the following slides describe the steps involved in the JPEG compression of an 8 bit/pixel image

JPEG example: DCT transform
✦ subtract 128 from each (8-bit) pixel value
✦ subdivide the image into 8×8 pixel blocks
✦ process the blocks left-to-right, top-to-bottom
✦ calculate the 2D DCT for each block
[2D DCT of a block: the most important coefficients are in the top left hand corner]
JPEG example: quantisation
✦ quantise each coefficient, F(u,v), using the values in the quantisation matrix Z(u,v) and the formula:
    F̂(u,v) = round( F(u,v) / Z(u,v) )
    Z(u,v) =  16  11  10  16  24  40  51  61
              12  12  14  19  26  58  60  55
              14  13  16  24  40  57  69  56
              14  17  22  29  51  87  80  62
              18  22  37  56  68 109 103  77
              24  35  55  64  81 104 113  92
              49  64  78  87 103 121 120 101
              72  92  95  98 112 100 103  99

JPEG example: symbol encoding
✦ the DC coefficient (mean intensity) is coded relative to the DC coefficient of the previous 8×8 block
✦ each non-zero AC coefficient is encoded by a variable length code
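A Python sketch of the quantisation step (Z is the quantisation matrix above):

    Z = [[16, 11, 10, 16,  24,  40,  51,  61],
         [12, 12, 14, 19,  26,  58,  60,  55],
         [14, 13, 16, 24,  40,  57,  69,  56],
         [14, 17, 22, 29,  51,  87,  80,  62],
         [18, 22, 37, 56,  68, 109, 103,  77],
         [24, 35, 55, 64,  81, 104, 113,  92],
         [49, 64, 78, 87, 103, 121, 120, 101],
         [72, 92, 95, 98, 112, 100, 103,  99]]

    def quantise(F):
        """Quantise an 8x8 block of DCT coefficients: round(F(u,v)/Z(u,v))."""
        return [[round(F[u][v] / Z[u][v]) for v in range(8)] for u in range(8)]

    def dequantise(Fq):
        """Decoder side: multiply back by the quantisation matrix."""
        return [[Fq[u][v] * Z[u][v] for v in range(8)] for u in range(8)]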
Course Structure – a review
✦ Background [3L]
■ images, human vision, displays
✦ 2D computer graphics [4L]
■ lines, curves, clipping, polygon filling, transformations
✦ 3D computer graphics [6L]
■ projection (3D→2D), surfaces, clipping, transformations, lighting, filling, ray tracing, texture mapping
✦ Image processing [3L]
■ filtering, compositing, half-toning, dithering, encoding, compression

What next?
✦ Advanced graphics
◆ Modelling, splines, subdivision surfaces, complex geometry, more ray tracing, radiosity, animation
✦ Human-computer interaction
◆ Interactive techniques, quantitative and qualitative evaluation, application design
✦ Information theory and coding
◆ Fundamental limits, transforms, coding
✦ Computer vision
◆ Inferring structure from images
And then?
✦ Graphics
◆ multi-resolution modelling
◆ animation of human behaviour
◆ æsthetically-inspired image processing
✦ HCI
◆ large displays and new techniques for interaction
◆ emotionally intelligent interfaces
◆ applications in education and for special needs
◆ design theory