L2v1 Image Formation


Introduction to Computer Vision

Image formation
Cameras
• First photograph due to Niepce (following slide)
• First on record shown in the book - 1822
• Basic abstraction is the pinhole camera
– lenses required to ensure image is not too dark
– various other abstractions can be applied

Camera obscura: XVIth Century.


First Photograph

Oldest surviving photograph: Joseph Niepce, 1826
– took 8 hours on a pewter plate

Photograph of the first photograph: stored at UT Austin

Niepce later teamed up with Daguerre, who eventually created Daguerreotypes


Camera obscura: the pre-camera
• Known during the classical period in China and Greece
  (e.g. Mo-Ti, China, 470 BC to 390 BC)

Illustration of a camera obscura
Freestanding camera obscura at UNC Chapel Hill (photo by Seth Ilys)
Pinhole cameras
• Abstract camera model: a box with a small hole in it
• Pinhole cameras work in practice
• Perspective projection: distant objects appear smaller
Perspective Projection

A scene point (X, Y, Z) projects through the pinhole O onto the image plane at focal distance f:

x = f·X / Z,   y = f·Y / Z
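A minimal numpy sketch of this mapping (the focal length and points below are made-up values, used only for illustration):

```python
import numpy as np

def project_pinhole(points, f):
    """Perspective projection x = f*X/Z, y = f*Y/Z for an N x 3 array of points."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

# Two scene points on the same ray through the pinhole land on the same image point.
pts = np.array([[1.0, 2.0, 4.0],
                [2.0, 4.0, 8.0]])
print(project_pinhole(pts, f=1.0))   # both rows -> [0.25, 0.5]
```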
Parallel lines meet: vanishing points
• Each set of parallel lines (= direction) meets at a different point
  – the vanishing point for that direction
• Sets of parallel lines on the same plane lead to collinear vanishing points
  – the line is called the horizon for that plane
• Good way to spot faked images
  – scale and perspective don't work
  – vanishing points behave badly
  – supermarket tabloids are a great source
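To make the first bullet concrete, the sketch below (hypothetical line directions and f = 1) projects points farther and farther along two parallel 3D lines; their images approach the same vanishing point (f·dX/dZ, f·dY/dZ):

```python
import numpy as np

f = 1.0
d = np.array([1.0, 0.0, 2.0])            # common direction of the parallel lines
starts = [np.array([0.0, 0.0, 1.0]),     # two different lines with that direction
          np.array([0.0, 1.0, 1.0])]

for a in starts:
    for t in (1, 10, 100, 1000):
        X, Y, Z = a + t * d
        print(t, (f * X / Z, f * Y / Z))
# Both lines converge toward (f*d_x/d_z, f*d_y/d_z) = (0.5, 0.0), the vanishing point.
```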
Weak perspective
• Issue
  – perspective effects, but not over the scale of individual objects
  – collect points into a group at about the same depth, then divide each point by the depth of its group
  – Adv: easy
  – Disadv: wrong
Orthographic projection

x = m·X,   y = m·Y

When the camera is at a (roughly constant) distance from the scene, take m = 1.
Marc Pollefeys
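The sketch below (made-up points and focal length, not from the slides) contrasts the three projection models: full perspective divides by each point's own depth, weak perspective divides the whole group by one reference depth, and orthographic drops the depth entirely (m = 1):

```python
import numpy as np

f = 1.0
pts = np.array([[1.0, 0.5, 10.0],        # a small object centred around Z ~ 10
                [1.2, 0.4, 10.5],
                [0.9, 0.6,  9.8]])

persp = f * pts[:, :2] / pts[:, 2:3]     # x = f*X/Z, using each point's own depth
z0 = pts[:, 2].mean()                    # one representative depth for the group
weak = (f / z0) * pts[:, :2]             # weak perspective: constant magnification f/z0
ortho = pts[:, :2].copy()                # orthographic: x = X, y = Y (m = 1)

print(persp, weak, ortho, sep="\n\n")
```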
• Pinhole too big: many directions are averaged, blurring the image
• Pinhole too small: diffraction effects blur the image
• Generally, pinhole cameras are dark, because a very small set of rays from a particular point hits the screen
The reason for lenses
Thin Lens: Projection

Figure: thin-lens projection onto the image plane; f is the focal length and z the distance along the optical axis.

Spherical lens surface: parallel rays are refracted to a single point.


Thin Lens: Properties
1. Any ray entering a thin lens parallel to the optical axis must go through the focus on the other side
2. Any ray entering through the focus on one side will be parallel to the optical axis on the other side
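As a quick numerical check of these two rules (a sketch with invented values, not part of the original slides), the code below traces the two special rays from an object point and intersects them behind the lens; the image distance it finds agrees with the classical thin-lens relation 1/z′ + 1/z = 1/f:

```python
import numpy as np

f = 0.05     # focal length (m), made-up value
z = 0.30     # object distance in front of the lens (m), made-up value
y = 0.01     # object height (m)

# Rule 1: a ray arriving parallel to the axis at height y leaves the lens toward
# the back focus (z' = f, height 0):  y1(z') = y * (1 - z'/f)
# Rule 2: a ray through the front focus leaves the lens parallel to the axis,
# at the height where it crossed the lens plane:
y2 = -y * f / (z - f)                       # constant height of ray 2 behind the lens

# Intersection of the two rays:  y * (1 - z'/f) = y2  ->  z' = f*z / (z - f)
z_image = f * z / (z - f)
print(z_image, 1.0 / (1.0 / f - 1.0 / z))   # same image distance (thin-lens relation)
print(y2, -y * z_image / z)                 # image height matches magnification -z'/z
```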
Limits of the Thin Lens Model

3 assumptions:
1. all rays from a point are focused onto 1 image point
   • remember the thin-lens small-angle assumption
2. all image points in a single plane
3. magnification m = f' / z0 is constant

Deviations from this ideal are aberrations.
Aberrations

2 types:

1. geometrical: small for paraxial rays

2. chromatic: the refractive index is a function of wavelength

Geometrical aberrations

• spherical aberration
• astigmatism
• distortion
• coma

Aberrations are reduced by combining lenses.


Spherical aberration

• rays parallel to the axis do not converge
• outer portions of the lens yield smaller focal lengths


Astigmatism
Different focal length for inclined rays
Distortion
magnification/focal length is different for different angles of inclination

• pincushion (tele-photo)
• barrel (wide-angle)

Can be corrected! (if parameters are known)
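As an illustration of such a correction (a simplified sketch; the polynomial radial model is one common parameterization, e.g. the Brown model, and the coefficients below are invented), distorted coordinates can be mapped back to ideal ones by inverting the radial model with a few fixed-point iterations:

```python
import numpy as np

def distort(xy, k1, k2):
    """Forward radial model: barrel (k1 < 0) or pincushion (k1 > 0)."""
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=10):
    """Invert the radial model by fixed-point iteration (needs known parameters)."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

pts = np.array([[0.3, 0.2], [-0.5, 0.4]])     # normalized image coordinates
d = distort(pts, k1=-0.2, k2=0.05)            # simulate barrel distortion
print(undistort(d, k1=-0.2, k2=0.05) - pts)   # residual ~ 0 when parameters are known
```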


Chromatic aberration

• rays of different wavelengths are focused in different planes
• cannot be removed completely
• sometimes achromatization is achieved for more than 2 wavelengths


Lens materials

Figure: transmission ranges of lens materials vs. wavelength (nm): Crown Glass, Fused Quartz & Fused Silica, Calcium Fluoride (9000 nm), Germanium (14000 nm), Zinc Selenide (18000 nm), Sapphire (6000 nm), Plastic (PMMA).

Additional considerations: humidity and temperature resistance, weight, price, ...


Milestones:
• Photographs (Niepce, “La Table Servie,” 1822) [image: Collection Harlingue-Viollet]
• Daguerreotypes (1839)
• Photographic Film (Eastman, 1889)
• Cinema (Lumière Brothers, 1895)
• Color Photography (Lumière Brothers, 1908)
• Television (Baird, Farnsworth, Zworykin, 1920s)
• CCD Devices (1970)
• more recently, CMOS
Transformations between frames
Digital cameras (cont’d)
Image digitization

• Sampling: measure the value of an image at a finite number of points.
• Quantization: represent the measured value (i.e., voltage) at the sampled point by an integer.
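A minimal sketch of both steps on a 1-D signal (the continuous "voltage" below is made up; the same idea applies per pixel in 2-D):

```python
import numpy as np

# A continuous signal, standing in for the analog value read off a sensor.
signal = lambda t: 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)   # values in [0.1, 0.9]

# Sampling: measure the signal at a finite number of points.
t = np.linspace(0.0, 1.0, 16)
samples = signal(t)

# Quantization: represent each measured value by an integer (8 bits -> 0..255).
quantized = np.round(samples * 255).astype(np.uint8)
print(quantized)
```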
Image digitization (cont’d)

Figure: sampling and quantization.
What is an image?

Figure: a grayscale image at 8 bits/pixel; intensity values range from 0 to 255.
What is an image? (cont’d)
• We can think of a (grayscale) image as a function f from R² to R (or a 2D signal):
  – f(x, y) gives the intensity at position (x, y)
  – A digital image is a discrete (sampled, quantized) version of this function
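Continuing this view, the sketch below defines an arbitrary continuous f(x, y), samples it on a grid, and quantizes it to 8 bits, yielding a digital image:

```python
import numpy as np

# A continuous image function f: R^2 -> R (intensity), here an arbitrary example.
f = lambda x, y: 0.5 + 0.5 * np.cos(x) * np.sin(y)

# Sample on a regular grid ...
xs, ys = np.meshgrid(np.linspace(0, np.pi, 64), np.linspace(0, np.pi, 64))
# ... and quantize the measured values to integers: the digital image.
img = np.round(f(xs, ys) * 255).astype(np.uint8)
print(img.shape, img.dtype, img.min(), img.max())   # (64, 64) uint8 0 255
```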
Image Sampling - Example

Panels: original image; sampled by a factor of 2; sampled by a factor of 4; sampled by a factor of 8. (Images have been resized for easier comparison.)
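Subsampling by a factor of k can be sketched by keeping every k-th pixel in each direction (ignoring the pre-filtering a real pipeline would apply; `img` here is just a random stand-in array):

```python
import numpy as np

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in image

by2 = img[::2, ::2]   # 64 x 64
by4 = img[::4, ::4]   # 32 x 32
by8 = img[::8, ::8]   # 16 x 16
print(by2.shape, by4.shape, by8.shape)
```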
Image Quantization - Example

Panels: 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel), 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), 2 gray levels (1 bit/pixel).
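Reducing an 8-bit image to k bits per pixel amounts to keeping only the top k bits of each value (a minimal sketch; `img` is again a random stand-in):

```python
import numpy as np

def requantize(img, bits):
    """Map an 8-bit image onto 2**bits gray levels (still stored in 8 bits)."""
    step = 256 // (2 ** bits)
    return (img // step) * step

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # stand-in image
print(requantize(img, 5))   # 32 gray levels
print(requantize(img, 1))   # 2 gray levels
```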


Sampling Example
An 8×8 grid of sampled pixel values:

255   0  45  38  90 167  78  45
 20 160  80  23  43  74 109   0
 45  54  78  23 187 186 100  31
 45 175  85 230  45 100   0 200
 84 123   0 247 209 209 213  76
 83  90  43 254   0  78 254  34
 50 208  63 211 101  89  24  40
120   0 200  45  60  34  80 237


Sampling Example (cont…)

Panels: 128×128, 64×64, 32×32, 16×16.


Quantization – color values
• Convert continuous color information into discrete bit-encoded information
Quantization Example

Panels: 24-bit, 8-bit, 4-bit.


Colour Images

• Color images are composed of three color channels – red, green, and blue – which combine to create most of the colors we can see.
Colour images

f(x, y) = [ r(x, y), g(x, y), b(x, y) ]ᵀ
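In code this is simply three 2-D arrays stacked along a third axis (a small sketch with made-up channel values):

```python
import numpy as np

h, w = 4, 6
r = np.full((h, w), 200, dtype=np.uint8)   # red channel   r(x, y)
g = np.full((h, w),  80, dtype=np.uint8)   # green channel g(x, y)
b = np.full((h, w),  30, dtype=np.uint8)   # blue channel  b(x, y)

f = np.stack([r, g, b], axis=-1)           # f(x, y) = [r, g, b] at each pixel
print(f.shape)                             # (4, 6, 3)
print(f[0, 0])                             # [200  80  30]
```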
Color sensing in camera: Color filter array
• In traditional systems, color filters are applied to a single
layer of photodetectors in a tiled mosaic pattern.

Bayer grid: each 2×2 block contains two green filters, one red, and one blue.

Why more green? The human luminance sensitivity function peaks in the green part of the spectrum.


Color sensing in camera: Color filter array

Figure: per-pixel red, green, and blue samples are combined by demosaicing (interpolation) into the full-color output.
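A rough sketch of bilinear demosaicing for an RGGB Bayer layout (just one simple interpolation scheme, assumed here for illustration; real camera pipelines are considerably more sophisticated):

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2-D Bayer mosaic with an RGGB layout. Returns an H x W x 3 image."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1   # R at even rows/cols
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1   # B at odd rows/cols
    g_mask = 1 - r_mask - b_mask                        # G on the remaining pixels

    # Bilinear interpolation kernels for the sparse channels.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    out = np.zeros((H, W, 3))
    for c, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        out[..., c] = convolve2d(raw * mask, k, mode="same", boundary="symm")
    return out

raw = np.random.randint(0, 256, (8, 8)).astype(float)   # stand-in mosaic
print(demosaic_bilinear(raw).shape)                     # (8, 8, 3)
```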
Alternative Color Spaces
• RGB (CIE), RnGnBn (TV – National Television System Committee)
• XYZ (CIE)
• UVW (UCS de la CIE), U*V*W* (UCS modified by the CIE)
• YUV, YIQ, YCbCr
• YDbDr
• DSH, HSV, HLS, IHS
• Munsell color space (cylindrical representation)
• CIELuv
• CIELab
• SMPTE-C RGB
• YES (Xerox)
• Kodak Photo CD, YCC, YPbPr, ...
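As one concrete example of these conversions (the BT.601 full-range RGB → YCbCr form used by JPEG, shown as a sketch; the other spaces listed above use different matrices and offsets):

```python
import numpy as np

# ITU-R BT.601 full-range RGB -> YCbCr (the variant used in JPEG/JFIF).
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: array of shape (..., 3) with values in 0..255."""
    return rgb @ M.T + np.array([0.0, 128.0, 128.0])

print(rgb_to_ycbcr(np.array([255.0, 0.0, 0.0])))   # pure red -> Y ~ 76, Cr near the top
```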
