2 Image Formation and Representation
Fall 2024
What is a Digital Image?
• Digital Image
— a two-dimensional function f(x, y), where x and y are spatial coordinates
— the amplitude of f at any point (x, y) is called the intensity or gray level of the image at that point
• Pixel
— the elements of a digital image
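As a concrete sketch of this definition (NumPy is an assumption here; the slides do not name a library), a grayscale digital image is just a 2-D array of intensities, and a pixel is one array element:

```python
import numpy as np

# A tiny 4x4 8-bit grayscale image: f[x, y] is the gray level at (x, y).
f = np.array([[  0,  64, 128, 255],
              [ 32,  96, 160, 224],
              [ 16,  80, 144, 208],
              [  8,  72, 136, 200]], dtype=np.uint8)

x, y = 1, 2
print(f"gray level at ({x}, {y}):", f[x, y])  # each element is one pixel
```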
Sources for Images
Electromagnetic (EM) energy spectrum
Major uses
Gamma-ray imaging: nuclear medicine and astronomical observations
X-rays: medical diagnostics, industry, astronomy, etc.
Ultraviolet: lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations
Visible and infrared bands: light microscopy, astronomy, remote sensing, industry, and law enforcement
Microwave band: radar
Radio band: medicine (such as MRI) and astronomy
Examples: Gamma-Ray Imaging
Examples: X-Ray Imaging
Examples: Ultraviolet Imaging
Examples: Light Microscopy Imaging
Examples: Visual and Infrared Imaging
Examples: Automated Visual Inspection
(Figure: results of automated reading of the plate content by the system, and the area in which the imaging system detected the plate.)
Example of Radar Image
Examples: MRI (Radio Band)
Examples: Ultrasound Imaging
Image Processing → Computer Vision
Image Processing Problems
Elements of Visual Perception
Structure of the Human Eye
• Cornea: tough, transparent tissue that covers the anterior surface of the eye.
• Sclera: opaque membrane that encloses the remainder of the optic globe.
• Choroid: lies below the sclera; contains a network of blood vessels that serve as the major source of nutrition to the eye.
• Lens: absorbs both infrared and ultraviolet light via proteins within the lens structure; in excessive amounts, this radiation can cause damage to the eye.
• Retina: innermost membrane of the eye, lining the inside of the wall's entire posterior portion. When the eye is properly focused, light from an object outside the eye is imaged on the retina.
Receptors
• Receptors (neurons) are distributed over the surface of the retina.
• Receptors are divided into two classes: cones and rods.
• Cones (6-7 million)
– located primarily in the central portion of the retina (the fovea); muscles controlling the eye rotate the eyeball until the image falls on the fovea.
– highly sensitive to color.
– each cone is connected to its own nerve end, so humans can resolve fine details with them.
– cone vision is called photopic or bright-light vision.
• Rods (75-150 million)
– distributed over the retinal surface.
– several rods are connected to a single nerve end, which reduces the amount of discernible detail.
– serve to give a general, overall picture of the field of view.
– sensitive to low levels of illumination.
– rod vision is called scotopic or dim-light vision.
Blind Spot
• Figure 2.2 shows the density of rods and cones for a cross
section of the right eye passing through the region of
emergence of the optic nerve from the eye.
• The absence of receptors in this area results in the so-
called blind spot (see Fig. 2.1). Except for this region, the
distribution of receptors is radially symmetric about the
fovea.
Image Formation in the Eye
• The principal difference between the lens of the eye and an
ordinary optical lens is that the former is flexible.
• The distance between the center of the lens and the retina (called
the focal length) varies from approximately 17 mm to about 14
mm, as the refractive power of the lens increases from its
minimum to its maximum.
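A worked example of this geometry (the object height and viewing distance, 15 m and 100 m, are illustrative values, not from the slide): with the lens at its relaxed focal length of about 17 mm, similar triangles give the height h of the retinal image:

```latex
\frac{h}{17\,\text{mm}} = \frac{15\,\text{m}}{100\,\text{m}}
\quad\Longrightarrow\quad
h = 17\,\text{mm} \times 0.15 = 2.55\,\text{mm}
```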
Brightness Adaptation and Discrimination
• Two phenomena clearly demonstrate that perceived
brightness is not a simple function of intensity.
• The first is based on the fact that the visual system tends to
undershoot or overshoot around the boundary of regions of
different intensities.
• Figure 2.7(a) shows a striking example of this phenomenon.
• Although the intensity of the stripes is constant, we actually
perceive a brightness pattern that is strongly scalloped,
especially near the boundaries [Fig. 2.7(b)].
• These seemingly scalloped bands are called Mach bands after
Ernst Mach, who first described the phenomenon in 1865.
• The second phenomenon, called simultaneous contrast, is
related to the fact that a region’s perceived brightness does
not depend simply on its intensity, as Fig. 2.8 demonstrates.
• All the center squares have exactly the same intensity.
However, they appear to the eye to become darker as the
background gets lighter.
Optical Illusions
Light and EM Spectrum
\[
\lambda \nu = c, \qquad E = h\nu
\]
where λ is the wavelength, ν the frequency, and h is Planck's constant:
\[
c = 2.998 \times 10^{8}\ \text{m/s}, \qquad h = 6.62607004 \times 10^{-34}\ \text{m}^{2}\,\text{kg/s}
\]
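As a quick worked example of these relations (the wavelength 550 nm is an assumed value for green visible light):

```latex
E = h\nu = \frac{hc}{\lambda}
  = \frac{(6.626 \times 10^{-34})(2.998 \times 10^{8})}{550 \times 10^{-9}}
  \approx 3.6 \times 10^{-19}\,\text{J}
```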
Image sensors transform illumination energy into digital images.
Image Acquisition Using a Single Sensor
Image Acquisition Using Sensor Strips
Image Acquisition Process
A Simple Image Formation Model
Some Typical Ranges of Illumination
• Illumination
Lumen — a unit of light flow or luminous flux
Lumen per square meter (lm/m²) — the metric unit of measure for illuminance of a surface
– On a cloudy day, the sun may produce less than 10,000 lm/m² of illumination on the surface of the Earth
Some Typical Ranges of Reflectance
• Reflectance
Gray Level
• The intensity of a monochrome image f at coordinates (x, y) is called the gray level l of the image at that point.
• Gray scale = [L_min, L_max]; common practice is to shift the interval to [0, L − 1], where 0 = black and L − 1 = white.
Image Sampling and Quantization
• Sampling: digitizing the coordinate values
• Quantization: digitizing the amplitude (intensity) values
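A minimal NumPy sketch of the two steps (the image contents and parameter choices are assumptions for illustration): sampling digitizes the coordinates by keeping every 4th sample, and quantization digitizes the amplitudes down to L = 8 levels:

```python
import numpy as np

f = np.random.randint(0, 256, size=(256, 256)).astype(np.uint8)  # toy input image

# Sampling: digitize the coordinates by keeping every 4th row and column.
sampled = f[::4, ::4]                      # 256x256 -> 64x64

# Quantization: digitize the amplitudes down to L = 8 gray levels.
L = 8
step = 256 // L
quantized = (sampled // step) * step       # each pixel snaps to one of 8 levels

print(sampled.shape, np.unique(quantized).size)  # (64, 64) and at most 8 levels
```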
Coordinate Convention
Representing Digital Images
Coloured Image
Spatial and Intensity Resolution
• Spatial resolution
— a measure of the smallest discernible detail in an image
— stated as line pairs per unit distance, dots (pixels) per unit distance, or dots per inch (dpi)
• Intensity resolution
— the smallest discernible change in intensity level
— stated as a number of bits: 8 bits, 12 bits, 16 bits, etc.
Image Interpolation
Image Interpolation: Nearest Neighbor Interpolation

\[
f_1(x_2, y_2) = f(\mathrm{round}(x_2), \mathrm{round}(y_2)) = f(x_1, y_1)
\]
\[
f_1(x_3, y_3) = f(\mathrm{round}(x_3), \mathrm{round}(y_3)) = f(x_1, y_1)
\]
Image Interpolation: Bilinear Interpolation

\[
f_2(x, y) = a x + b y + c x y + d
\]

where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x, y).
Image Interpolation: Bicubic Interpolation
• The intensity value assigned to point (x, y) is obtained by the following equation:

\[
f_3(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij}\, x^{i} y^{j}
\]
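To make these schemes concrete, here is a small hand-rolled sketch of nearest-neighbor and bilinear resizing via inverse mapping (bicubic is omitted for brevity; a production implementation would use a library resize function):

```python
import numpy as np

def resize(f, new_h, new_w, method="nearest"):
    """Resize a 2-D grayscale image by inverse-mapping each output pixel."""
    h, w = f.shape
    out = np.zeros((new_h, new_w), dtype=float)
    for i in range(new_h):
        for j in range(new_w):
            # Map the output coordinates back into input coordinates.
            x = i * (h - 1) / (new_h - 1)
            y = j * (w - 1) / (new_w - 1)
            if method == "nearest":
                out[i, j] = f[round(x), round(y)]
            else:  # bilinear: weighted average of the 4 nearest neighbors
                x0, y0 = int(x), int(y)
                x1, y1 = min(x0 + 1, h - 1), min(y0 + 1, w - 1)
                dx, dy = x - x0, y - y0
                out[i, j] = (f[x0, y0] * (1 - dx) * (1 - dy) +
                             f[x1, y0] * dx * (1 - dy) +
                             f[x0, y1] * (1 - dx) * dy +
                             f[x1, y1] * dx * dy)
    return out

f = np.array([[0, 100], [100, 200]], dtype=float)
print(resize(f, 4, 4, "nearest"))
print(resize(f, 4, 4, "bilinear"))
```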
Examples: Interpolation
Distance Measures
• Given pixels p, q, and z with coordinates (x, y), (s, t), and (u, v) respectively, the distance function D has the following properties:
a. D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
b. D(p, q) = D(q, p)
c. D(p, z) ≤ D(p, q) + D(q, z)
The following are the different distance measures for p(x, y) and q(s, t):
a. Euclidean distance:
\[
D_e(p, q) = \left[(x - s)^2 + (y - t)^2\right]^{1/2}
\]
Introduction to Mathematical Operations in DIP
\[
H[f(x, y)] = g(x, y)
\]
\[
\begin{aligned}
H[a_i f_i(x, y) + a_j f_j(x, y)] &= H[a_i f_i(x, y)] + H[a_j f_j(x, y)] && \text{(additivity)} \\
&= a_i H[f_i(x, y)] + a_j H[f_j(x, y)] && \text{(homogeneity)} \\
&= a_i g_i(x, y) + a_j g_j(x, y)
\end{aligned}
\]

H is said to be a linear operator if it satisfies both properties above; H is said to be a nonlinear operator if it does not meet this qualification.
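A quick numerical check of these definitions (the arrays and coefficients are arbitrary choices): the sum of all pixel values passes the additivity/homogeneity test, while the max operator fails it, so sum is linear and max is nonlinear:

```python
import numpy as np

f1 = np.array([[0, 2], [2, 3]], dtype=float)
f2 = np.array([[6, 5], [4, 7]], dtype=float)
a1, a2 = 1.0, -1.0

# Sum operator: H[a1*f1 + a2*f2] equals a1*H[f1] + a2*H[f2] -> linear.
print(np.sum(a1 * f1 + a2 * f2), a1 * np.sum(f1) + a2 * np.sum(f2))

# Max operator: the two sides differ -> nonlinear.
print(np.max(a1 * f1 + a2 * f2), a1 * np.max(f1) + a2 * np.max(f2))
```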
Arithmetic Operations
Example: Addition of Noisy Images for Noise Reduction

\[
\bar{g}(x, y) = \frac{1}{K} \sum_{i=1}^{K} g_i(x, y)
\]
If each corrupted image is g_i(x, y) = f(x, y) + n_i(x, y), where the noise n_i has zero mean and is uncorrelated across images, then the expected value of the average is the noiseless image:

\[
\begin{aligned}
E[\bar{g}(x, y)] &= \frac{1}{K} \sum_{i=1}^{K} E[g_i(x, y)] \\
&= \frac{1}{K} \sum_{i=1}^{K} E[f(x, y) + n_i(x, y)] \\
&= f(x, y) + E\left[\frac{1}{K} \sum_{i=1}^{K} n_i(x, y)\right] \\
&= f(x, y)
\end{aligned}
\]
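A minimal simulation of this result (the noise level, image, and K are assumed values): averaging K noisy copies reduces the RMS error by roughly a factor of √K:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)                 # noiseless image
K = 100
noisy = [f + rng.normal(0, 20, f.shape) for _ in range(K)]  # g_i = f + n_i

g_bar = np.mean(noisy, axis=0)               # average of K noisy images

print("single image RMS error:", np.sqrt(np.mean((noisy[0] - f) ** 2)))
print(f"average of {K} RMS error:", np.sqrt(np.mean((g_bar - f) ** 2)))
```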
An Example of Image Subtraction: Mask Mode Radiography
Mask image h(x, y): an X-ray image of a region of a patient's body, captured before injection of the contrast medium.
Live images f(x, y): X-ray images of the same region, captured at TV rates after injection of the contrast medium.
Subtracting the mask from each live image yields a movie showing how the contrast medium propagates through the various arteries in the area being observed.
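A sketch of the subtraction step itself, with synthetic arrays standing in for the mask and a live frame:

```python
import numpy as np

h = np.full((8, 8), 120.0)        # mask image: anatomy before contrast injection
f = h.copy()                      # live image: same anatomy ...
f[3:5, 2:6] += 40.0               # ... plus a contrast-filled vessel

g = f - h                         # subtraction leaves only the vessel
print(np.count_nonzero(g), "pixels changed by the contrast medium")
```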
An Example of Image Multiplication
Set and Logical Operations
• Let A be the elements of a gray-scale image. The elements of A are triplets of the form (x, y, z), where x and y are spatial coordinates and z denotes the intensity at the point (x, y):

\[
A = \{(x, y, z) \mid z = f(x, y)\}
\]

• The complement of A is denoted A^c:

\[
A^c = \{(x, y, K - z) \mid (x, y, z) \in A\}, \qquad K = 2^k - 1
\]

where k is the number of intensity bits used to represent z.
Spatial Operations
• Single-pixel operations
Alter the values of an image's pixels one at a time based on their intensity:
\[
s = T(z)
\]
e.g., the negative transformation s = (L − 1) − z.
• Neighborhood operations
The value of the output pixel at (x, y) is determined by a specified operation involving the pixels of the input image whose coordinates lie in a neighborhood S_xy centered on (x, y), as sketched below.
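A minimal sketch of one such neighborhood operation, the 3×3 local average (the operation choice and test image are assumptions):

```python
import numpy as np

def neighborhood_average(f):
    """Each output pixel is the mean of the 3x3 neighborhood S_xy in the input."""
    h, w = f.shape
    g = np.zeros_like(f, dtype=float)
    for x in range(h):
        for y in range(w):
            # Clip the neighborhood at the image borders.
            window = f[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2]
            g[x, y] = window.mean()
    return g

f = np.random.randint(0, 256, (6, 6)).astype(float)
print(neighborhood_average(f))
```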
Geometric Spatial Transformations
• Forward mapping:
\[
(x, y) = T\{(v, w)\}
\]
Scans the pixels of the input image and, at each location (v, w), computes the spatial location (x, y) of the corresponding pixel in the output image.
It is possible for two or more input pixels to be transformed to the same location in the output image.
• Inverse mapping:
\[
(v, w) = T^{-1}\{(x, y)\}
\]
Scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image.
It then interpolates among the nearest input pixels to determine the intensity of the output pixel value.
Inverse mappings are more efficient to implement than forward mappings (and are what MATLAB uses).
Example: Image Rotation and Intensity Interpolation
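In that spirit, a minimal inverse-mapping rotation sketch (nearest-neighbor interpolation and the 45° angle are simplifying choices):

```python
import numpy as np

def rotate(f, theta):
    """Rotate image f by theta radians about its center using inverse mapping."""
    h, w = f.shape
    cx, cy = (h - 1) / 2, (w - 1) / 2
    g = np.zeros_like(f)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for x in range(h):
        for y in range(w):
            # Inverse mapping: where did output pixel (x, y) come from?
            v = cos_t * (x - cx) + sin_t * (y - cy) + cx
            u = -sin_t * (x - cx) + cos_t * (y - cy) + cy
            vi, ui = int(round(v)), int(round(u))
            if 0 <= vi < h and 0 <= ui < w:       # keep pixels inside the input
                g[x, y] = f[vi, ui]               # nearest-neighbor interpolation
    return g

f = np.zeros((9, 9)); f[4, :] = 255               # a horizontal line
print(rotate(f, np.pi / 4))
```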
Image Registration
• The input and output images are available, but the transformation function is unknown.
Goal: estimate the transformation function and use it to register the two images.
• One of the principal approaches for image registration is to use tie points (also called control points):
➢ corresponding points whose locations are known precisely in the input and output (reference) images.
With four pairs of tie points, we obtain eight equations and can solve for the eight coefficients of a bilinear approximation to the transformation:

\[
x = c_1 v + c_2 w + c_3 v w + c_4
\]
\[
y = c_5 v + c_6 w + c_7 v w + c_8
\]
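A short sketch of solving for these coefficients from four tie-point pairs (the coordinates are made-up values):

```python
import numpy as np

# Four tie points: (v, w) in the input image, (x, y) in the reference image.
vw = np.array([(0, 0), (0, 10), (10, 0), (10, 10)], dtype=float)
xy = np.array([(1, 2), (2, 13), (12, 1), (14, 13)], dtype=float)

# Each row is [v, w, v*w, 1], matching x = c1*v + c2*w + c3*v*w + c4.
A = np.column_stack([vw[:, 0], vw[:, 1], vw[:, 0] * vw[:, 1], np.ones(4)])
c_x = np.linalg.solve(A, xy[:, 0])   # c1..c4
c_y = np.linalg.solve(A, xy[:, 1])   # c5..c8

v, w = 5.0, 5.0                      # map an arbitrary input point
print(np.dot([v, w, v * w, 1], c_x), np.dot([v, w, v * w, 1], c_y))
```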
Image Transform
\[
T(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, r(x, y, u, v)
\]

where r(x, y, u, v) is the forward transformation kernel.
• Given T(u, v), the original image f(x, y) can be recovered using the inverse transformation of T(u, v), with inverse kernel s(x, y, u, v):

\[
f(x, y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u, v)\, s(x, y, u, v)
\]
Example: Image Denoising by Using a Transform
2-D Fourier Transform
\[
T(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, e^{-j 2\pi (u x / M + v y / N)}
\]
\[
f(x, y) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} T(u, v)\, e^{\,j 2\pi (u x / M + v y / N)}
\]
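The double sum above is exactly what np.fft.fft2 computes, which this small check sketches (the 4×4 input is arbitrary):

```python
import numpy as np

f = np.arange(16, dtype=float).reshape(4, 4)
M, N = f.shape

# Direct evaluation of T(u, v) = sum_x sum_y f(x, y) e^{-j2pi(ux/M + vy/N)}.
T = np.zeros((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        for x in range(M):
            for y in range(N):
                T[u, v] += f[x, y] * np.exp(-2j * np.pi * (u * x / M + v * y / N))

print(np.allclose(T, np.fft.fft2(f)))   # True: both compute the 2-D DFT
```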