Digital Image Processing: Chapter 1: Introduction
Printing industry
Textile industry
1922: image produced from a photographic reproduction made using punched tape
These images were not processed by computer. (Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Digital Images in the Early Era
Applications
Nuclear Medicine and astronomical observations
Principle: inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors.
Positron Emission Tomography
The principle is the same as in X-ray tomography. The patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected, and a tomographic image is created.
Bone Scan
Gamma Ray Imaging
PET Image
Gamma Ray Imaging
Cygnus loop
X-Ray Imaging
Principle:
- X-rays are generated using an X-ray tube, which is a vacuum tube with a cathode and an anode. The cathode is heated, causing the release of free electrons.
- When high-speed electrons strike a nucleus, energy is released in the form of X-ray radiation.
- The energy (penetrating power) of the X-rays is controlled by the voltage applied across the anode, and the number of X-rays by the current applied to the filament in the cathode.
Ultraviolet Imaging
Ultraviolet fluorescence phenomenon
Examples: cholesterol, Taxol, microprocessor, nickel oxide thin film, organic superconductor
Washington, D.C.
The image is produced by sensors that measure reflected energy from different sections of the EM spectrum.
Hurricane Andrew
Night-time lights of the world
Dr. Basant Kumar
Motilal Nehru National Institute of Technology, Allahabad
Chapter 2
Digital Image Fundamentals
Eye’s Blind Spot
Distribution of Light Receptors
Distribution of Light Receptors (Contd.)
Image Formation in the Eye
Perception takes place by the relative excitation of light receptors, which transform
radiant energy into electrical impulses that ultimately are decoded by the brain
HVS Characteristics
- Subjective brightness is a logarithmic function of the light intensity incident on the eye
- The total range of distinct intensity levels the eye can discriminate simultaneously is small compared with the total adaptation range
- The current sensitivity level of the visual system is called the brightness adaptation level
- At adaptation level Ba, the intersecting curve represents the range of subjective brightness the eye can perceive (mL = millilambert)
Brightness Discrimination
- The ability of the eye to discriminate between changes in light intensity at any specific adaptation level
- The quantity ∆Ic/I, where ∆Ic is the increment of illumination discriminable 50% of the time against background illumination I, is called the Weber ratio
Brightness Discrimination
Small Weber ratio: a small percentage change in intensity is discriminable --- good brightness discrimination
Large Weber ratio: a large percentage change in intensity is required --- poor brightness discrimination
➢The curve shows that brightness discrimination is poor at low levels of illumination and improves significantly as background illumination increases
➢At low illumination levels, vision is carried out by rods
➢At high illumination levels, vision is carried out by cones
➢The number of different intensities a person can see at any one point in a monochrome image is about one to two dozen
Visual Perception
Perceived brightness is not a simple function of intensity
Mach Band Effect
Simultaneous Contrast
Some Optical Illusions
(a) Outline of a square is seen clearly. (b) Outline of a circle is seen. (c) Two horizontal line segments are of the same length, but one appears shorter than the other.
Electromagnetic Spectrum
Color Lights
Properties of Light
Image Sensing
Images are generated by the combination of the illumination
source and the reflection or absorption of energy from that source
by elements of the scene being imaged
Imaging Sensors
Image Acquisition
Image Acquisition
Image Acquisition using Sensor Arrays
CCD Camera
CCD Vs CMOS
Image Formation Model
Image Formation Model
Image Sampling and Quantization
Discretizing coordinate values is called Sampling
Discretizing the amplitude values is called Quantization
Image Sampling and Quantization
Digital Image
Digital Image Representation
Digital Image Representation
Digital Image Representation
Representing Digital Images
• Digital image
– M × N array
– L discrete intensities, a power of 2
• L = 2^k
• Integers in the interval [0, L − 1]
• Dynamic range: ratio of maximum to minimum intensity
– Low: the image has a dull, washed-out gray look
• Contrast: difference between the highest and lowest intensities
– High: the image has high contrast
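The relations above can be checked numerically. This is an illustrative numpy sketch (the variable names are ours, not from the slides), taking dynamic range as the ratio of maximum to minimum intensity:

```python
import numpy as np

k = 8
L = 2 ** k                      # number of discrete intensity levels: 256

img = np.array([[12, 200], [45, 255]], dtype=np.uint8)

# Dynamic range taken here as max/min intensity (min clipped to >= 1
# to avoid division by zero for images containing pure black).
dynamic_range = img.max() / max(img.min(), 1)
contrast = int(img.max()) - int(img.min())   # highest minus lowest intensity

print(L, dynamic_range, contrast)
```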
Saturation and Noise
Spatial and Intensity Resolution
• Resolution: dots (pixels) per unit distance
• dpi: dots per inch
Spatial Resolution Example
Variation of Number of Intensity Levels
Effects of Varying the Number of Intensity
Levels
Effects of Varying the Number of Intensity
Levels
Image Interpolation
• Using known data to estimate values at unknown locations
• Used for zooming, shrinking, rotating, and geometric corrections
• Nearest-neighbor interpolation
– Uses the closest pixel to estimate the intensity
– Simple, but has a tendency to produce artifacts
• Bilinear interpolation
– Uses the 4 nearest neighbors to estimate the intensity
– Much better result
– Equation used: v(x, y) = ax + by + cxy + d
• Bicubic interpolation
– Uses the 16 nearest neighbors of a point
– Equation used: v(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_ij x^i y^j
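Bilinear interpolation from the 4 nearest neighbors can be sketched as follows (an illustrative numpy snippet, not from the slides; nearest-neighbor interpolation would simply round the coordinates instead):

```python
import numpy as np

def bilinear(img, y, x):
    """Estimate intensity at non-integer (y, x) from the 4 nearest pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    a, b = y - y0, x - x0          # fractional offsets
    return ((1 - a) * (1 - b) * img[y0, x0] + (1 - a) * b * img[y0, x1]
            + a * (1 - b) * img[y1, x0] + a * b * img[y1, x1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # centre of the 4 pixels -> 15.0
```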
Zooming using Various Interpolation Techniques
Some Basic Relationships between Pixels
• Neighbors of a pixel
– There are three kinds of neighbors of a pixel:
• N4(p) 4-neighbors: the set of horizontal and vertical neighbors
• ND(p) diagonal neighbors: the set of 4 diagonal neighbors
• N8(p) 8-neighbors: union of 4-neighbors and diagonal neighbors
  O            O   O          O O O
O X O            X            O X O
  O            O   O          O O O
 N4(p)          ND(p)          N8(p)
Some Basic Relationships between Pixels
• Adjacency:
– Two pixels that are neighbors and have the same grey-
level (or some other specified similarity criterion) are
adjacent
– Pixels can be 4-adjacent, diagonally adjacent, 8-adjacent,
or m-adjacent.
• m-adjacency (mixed adjacency):
– Two pixels p and q of the same value (or specified similarity) are m-adjacent if either
• (i) q and p are 4-adjacent, or
• (ii) p and q are diagonally adjacent and do not have any common 4-adjacent neighbors of the same value.
• The two conditions cannot hold simultaneously.
• An example of adjacency:
Some Basic Relationships Between Pixels
• Path:
– The length of the path
– Closed path
• Connectivity in a subset S of an image
– Two pixels are connected if there is a path between them that lies
completely within S.
• Connected component of S:
– The set of all pixels in S that are connected to a given pixel in S.
• Region of an image
• Boundary, border or contour of a region
• Edge: a path of one or more pixels that separates two regions of significantly different gray levels.
Distance Measures
• Distance measures
– Distance function: a function of two points, p = (x, y) and q = (s, t), that satisfies three criteria:
(a) D(p, q) ≥ 0 (with D(p, q) = 0 iff p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z)
– The Euclidean distance: De(p, q) = [(x − s)² + (y − t)²]^(1/2)
– The city-block (Manhattan) distance: D4(p, q) = |x − s| + |y − t|
– The chessboard distance: D8(p, q) = max(|x − s|, |y − t|)
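The three measures can be compared directly; a small illustrative sketch (plain Python, names ours):

```python
def distances(p, q):
    """Euclidean (De), city-block (D4), and chessboard (D8) distances
    between pixels p = (x, y) and q = (s, t)."""
    x, y = p
    s, t = q
    de = ((x - s) ** 2 + (y - t) ** 2) ** 0.5
    d4 = abs(x - s) + abs(y - t)          # city-block
    d8 = max(abs(x - s), abs(y - t))      # chessboard
    return de, d4, d8

print(distances((0, 0), (3, 4)))   # (5.0, 7, 4)
```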
Distance Measures- Example
Arithmetic Operations
• Array operations between images
• Carried out between corresponding pixel pairs
• The four arithmetic operations:
s(x, y) = f(x, y) + g(x, y)
d(x, y) = f(x, y) − g(x, y)
p(x, y) = f(x, y) × g(x, y)
v(x, y) = f(x, y) ÷ g(x, y)
Image Subtraction
Image Subtraction Application
Shading correction by image multiplication
(and division)
g(x, y) = h(x, y) × f(x, y)
Masking (ROI) using image multiplication
Arithmetic Operations
• To guarantee that the full range of an arithmetic operation between images is captured into a fixed number of bits, the following approach is performed on image f:
fm = f − min(f)
which creates an image whose minimum value is 0. Then the scaled image is
fs = K [ fm / max(fm) ]
whose values are in the range [0, K].
Example: for an 8-bit image, setting K = 255 gives a scaled image whose intensities span the full range from 0 to 255.
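That two-step scaling can be sketched in numpy (illustrative code; it assumes max(fm) > 0, i.e. the image is not constant):

```python
import numpy as np

def scale_to_range(f, K=255):
    """Shift f so its minimum is 0, then scale so its maximum is K."""
    fm = f - f.min()                 # fm = f - min(f)
    fs = K * (fm / fm.max())         # fs = K * fm / max(fm)
    return fs

f = np.array([[-30.0, 10.0], [50.0, 90.0]])   # e.g. result of a subtraction
fs = scale_to_range(f, K=255)
print(fs.min(), fs.max())   # 0.0 255.0
```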
Set and Logical Operations
• Elements of a set are the coordinates of pixels (ordered pairs of integers) representing regions (objects) in an image
– Union
– Intersection
– Complement
– Difference
• Logical operations
– OR
– AND
– NOT
– XOR
Digital Image Processing, 3rd ed.
Gonzalez & Woods
Chapter 2
Digital Image Fundamentals
A − B = A ∩ Bᶜ
Set operations involving gray-scale images:
A ∪ B = { max(a, b) | a ∈ A, b ∈ B }
The union of two gray-scale sets is an array formed from the maximum intensity between pairs of spatially corresponding elements.
Logical Operations
Spatial Operations
• Single-pixel operations
– For example, transformation to obtain the
negative of an 8-bit image
Spatial Operations
Neighborhood operations
For example, compute the average value of the pixels in a rectangular neighborhood of size m × n centered on (x, y)
Spatial Operations
Interpolation
Image Registration
• Used for aligning two or more images of the same scene
• The input and output images are available, but the specific transformation that produced the output is unknown
• Estimate the transformation function and use it to register the two images
• Input image: the image that we wish to transform
• Reference image: the image against which we want to register the input
• Principal approach: use tie points (also called control points), which are corresponding points whose locations are known precisely in the input and reference images
Image Registration
Vector and Matrix Operations
• RGB images
• Multispectral images
Image Transforms
Image Transforms
Intensity Transformations and
Spatial Filtering
Background
• Image domains
– Spatial domain techniques operate directly on the pixels of an image
– Transform domain techniques operate on transform coefficients
• Two main categories of spatial processing
– Intensity transformation: operates on single pixels of an image
(e.g., contrast manipulation, image thresholding)
– Spatial filtering: operates on a group of neighboring pixels
(e.g., image sharpening)
Chapter 3
Intensity Transformations and Spatial
Filtering
Basic Intensity Transformation Functions
• Values of pixels
– before processing: r
– after processing: s
• These values are related by an expression of the form
s = T(r)
• Typically, the process starts at the top left of the input image and proceeds pixel by pixel in a horizontal scan, one row at a time
• Either ignore outside neighbors or pad the image with a border of 0s or some other specified intensity values
Intensity Transformations Functions
Image Negatives
• Negative of an image with intensity levels in the range [0, L-1]
is obtained using the expression
s = L − 1 − r
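For instance (an illustrative numpy sketch of the negative transformation on an 8-bit image):

```python
import numpy as np

L = 256                                   # 8-bit image: intensities in [0, 255]
r = np.array([[0, 100], [200, 255]], dtype=np.uint8)
s = (L - 1 - r.astype(np.int32)).astype(np.uint8)   # s = L - 1 - r
print(s)   # [[255 155] [ 55   0]]
```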
Fourier Spectrum
Gamma correction
• A variety of devices used for image capture, printing, and display respond according to a power law
• The process used to correct these power-law response phenomena is called gamma correction
• CRT devices have an intensity-to-voltage response that is a power function with γ = 1.8 to 2.5
• Different monitors / monitor settings require different gamma correction values
• Current image standards do not contain the value of gamma with which an image was created
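The power-law transform s = c·r^γ can be sketched as follows (illustrative numpy code, names ours; intensities are normalized to [0, 1] before applying the exponent, a common convention):

```python
import numpy as np

def gamma_correct(img, gamma, c=1.0):
    """Power-law transform s = c * r**gamma on intensities scaled to [0, 1]."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
# To pre-compensate a display with gamma 2.2, apply the inverse exponent,
# which brightens mid-tones (exponent < 1):
out = gamma_correct(img, 1 / 2.2)
print(out)
```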
Gamma correction
Contrast Enhancement using Power-law
Transformation
Contrast Enhancement using Power-law
Transformation
Piecewise-Linear Transformation Functions
• Contrast stretching: expands the range of intensity levels in an image so that it spans the full intensity range
– Control points: (r1, s1), (r2, s2)
– If r1 = s1 and r2 = s2: a linear transformation that produces no change in the intensity levels
– If r1 = r2, s1 = 0, and s2 = L − 1: the transformation becomes a thresholding function that creates a binary image (Fig. 3.10(d))
– Intermediate values of (r1, s1), (r2, s2) produce various degrees of spread in the intensity levels of the output image, thus affecting its contrast
– Typical use: (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L − 1) (Fig. 3.10(c))
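The piecewise-linear mapping through the two control points can be sketched as follows (an illustrative numpy implementation; the `max(..., 1)` guards against degenerate control points are our addition):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretch through control points (r1, s1), (r2, s2)."""
    r = img.astype(np.float64)
    out = np.empty_like(r)
    lo = r < r1
    hi = r > r2
    mid = ~(lo | hi)
    out[lo] = s1 / max(r1, 1) * r[lo]                       # segment below r1
    out[mid] = s1 + (s2 - s1) * (r[mid] - r1) / max(r2 - r1, 1)
    out[hi] = s2 + (L - 1 - s2) * (r[hi] - r2) / max(L - 1 - r2, 1)
    return np.clip(out, 0, L - 1).astype(np.uint8)

img = np.array([[50, 100, 150, 200]], dtype=np.uint8)
# Full stretch: (r1, s1) = (rmin, 0), (r2, s2) = (rmax, L - 1)
print(contrast_stretch(img, 50, 0, 200, 255))   # [[  0  85 170 255]]
```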
Contrast Stretching
Intensity-level Slicing
Highlighting a specific range of intensities in an image
Bit-Plane Slicing- Example
Image Reconstruction using Bit-Planes
Histogram Processing
• In probabilistic methods of image processing, intensity values are treated as random quantities
• The histogram of a digital image with intensity levels in the range [0, L − 1] is the discrete function
h(rk) = nk
where rk is the kth intensity value and nk is the number of pixels in the image with intensity rk
• Normalized histogram:
p(rk) = nk / MN, for k = 0, 1, 2, …, L − 1
• The sum of all components of a normalized histogram is equal to 1
• A high-contrast image covers a wide range of the intensity scale, and the pixel distribution is almost uniform
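The normalized histogram can be computed in a few lines (an illustrative numpy sketch; `np.bincount` counts n_k for each intensity):

```python
import numpy as np

def normalized_histogram(img, L=256):
    """p(r_k) = n_k / (M*N) for k = 0..L-1."""
    M, N = img.shape
    h = np.bincount(img.ravel(), minlength=L)   # h(r_k) = n_k
    return h / (M * N)

img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = normalized_histogram(img)
print(p[0], p[1], p.sum())   # 0.5 0.25 1.0
```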
Image Histogram
Histogram Equalization
✓ Used to enhance the appearance of an image and to
make certain features more visible
✓ An attempt to equalize the number of pixels with each
particular value
✓ Allocate more gray levels where there are most pixels
and allocate fewer levels where there are few pixels.
✓ The discrete form of the transformation is
s_k = T(r_k) = (L − 1) Σ_{j=0}^{k} p_r(r_j) = ((L − 1)/MN) Σ_{j=0}^{k} n_j,   k = 0, 1, 2, …, L − 1
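The transformation above is a cumulative sum of the normalized histogram, applied as a lookup table; a minimal illustrative numpy sketch:

```python
import numpy as np

def equalize(img, L=256):
    """s_k = (L-1) * sum_{j<=k} p_r(r_j), applied as a lookup table."""
    M, N = img.shape
    p = np.bincount(img.ravel(), minlength=L) / (M * N)   # normalized histogram
    s = np.round((L - 1) * np.cumsum(p)).astype(np.uint8)  # transformation T
    return s[img]                                          # apply T via lookup

img = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))  # dark, low-contrast
eq = equalize(img)
print(img.max(), eq.max())   # 63 255 (intensities now span the full scale)
```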
Histogram Equalization- Example
Histogram Equalization- Example
Histogram Matching ( Specification)
Spatial Filtering
• Correlation
– A function of the displacement of the filter
– Used for matching between images
• Convolution
– Rotate the filter by 180°
• Padding an image (for a filter of size m × n):
– m − 1 rows of 0s at the top and bottom and
– n − 1 columns of 0s at the left and right
• Cropping the result
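The rotate-pad-slide procedure can be sketched directly (an illustrative numpy implementation, names ours; for brevity this version pads by m//2 and n//2 so the output keeps the input size, rather than producing the full m−1/n−1 padded result and cropping):

```python
import numpy as np

def convolve2d(f, w):
    """'Same'-size 2-D convolution: rotate w by 180 degrees, zero-pad f, slide."""
    m, n = w.shape
    wr = w[::-1, ::-1]                       # 180-degree rotation
    pad_y, pad_x = m // 2, n // 2
    fp = np.pad(f, ((pad_y, pad_y), (pad_x, pad_x)))   # zero padding
    out = np.zeros_like(f, dtype=np.float64)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            out[y, x] = np.sum(wr * fp[y:y + m, x:x + n])
    return out

f = np.zeros((5, 5)); f[2, 2] = 1.0          # unit impulse
w = np.arange(9, dtype=np.float64).reshape(3, 3)
# Convolving an impulse with w reproduces w (the defining property):
print(convolve2d(f, w)[1:4, 1:4])
```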
Correlation and Convolution of a 2-D filter
Correlation and Convolution of a 2-D filter
Spatial Filtering
Smoothing Spatial Filters
• Box filter
• Weighted average filter
– Gives some pixels more importance (weight) at the expense of others
– Basic strategy: an attempt to reduce blurring in the smoothing process
Order-Statistics Filters
• Median
– In a 3 × 3 neighborhood, the median is the 5th largest value
– In a 5 × 5 neighborhood, the median is the 13th largest value
• The principal function of median filtering is to force pixels with distinct intensities to be more like their neighbors
• Other order-statistics filters
– Max filter: finds the brightest points in the image
– Min filter: finds the darkest points in the image
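A median filter forcing an isolated outlier to match its neighborhood can be sketched as follows (illustrative numpy code; the replicate padding for edge pixels is our choice, not specified in the slides):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel by the median of its size x size neighborhood
    (edge pixels handled by replicate padding)."""
    k = size // 2
    fp = np.pad(img, k, mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(fp[y:y + size, x:x + size])
    return out

img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                      # isolated "salt" noise pixel
print(median_filter(img)[2, 2])      # 10: the outlier is forced to match its neighbors
```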
Sharpening Spatial Filters
∂f/∂x = f(x + 1) − f(x)

∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)
Sharpening Spatial Filters
• Edges in digital images are often ramp-like transitions in intensity
– The first derivative results in thick edges, due to its nonzero value along the ramp
– The second derivative produces a double edge one pixel thick, separated by zeros
– Applied in the x-direction and in the y-direction
Unsharp Masking and Highboost Filtering
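Unsharp masking subtracts a blurred version of the image from the image itself and adds the resulting mask back; with a weight k > 1 this becomes highboost filtering. A minimal illustrative sketch (the 3 × 3 box blur and replicate padding are our assumptions):

```python
import numpy as np

def highboost(f, k=1.0):
    """g = f + k * (f - f_blur); k = 1 is unsharp masking, k > 1 is highboost.
    The blur here is a 3x3 box average (replicate-padded)."""
    fp = np.pad(f.astype(np.float64), 1, mode='edge')
    blur = sum(fp[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    mask = f - blur                   # the unsharp mask
    return f + k * mask

step = np.array([[0, 0, 100, 100]] * 4, dtype=np.float64)
g = highboost(step, k=1.0)
print(g[1])   # undershoot/overshoot on either side of the edge accentuates it
```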
1-D illustration of Unsharp Masking
First Order derivative for image sharpening- The Gradient
• First derivatives in image processing are magnitude of the
gradient
• For a function f( x, y), the gradient of f at coordinates ( x, y) is
defined as 2-D column vector
∇f ≡ grad(f) = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T
Gradient Operators
In some implementations,
it is more suitable to
approximate square root
operation with absolute
values
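The absolute-value approximation |g_x| + |g_y| can be sketched with simple forward differences (illustrative numpy code; production implementations usually use Sobel masks instead):

```python
import numpy as np

def gradient_magnitude(f):
    """Approximate |grad f| with |g_x| + |g_y| using forward differences."""
    f = f.astype(np.float64)
    gx = np.zeros_like(f); gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]    # df/dx ~ f(x+1) - f(x)
    gy[:-1, :] = f[1:, :] - f[:-1, :]    # df/dy ~ f(y+1) - f(y)
    return np.abs(gx) + np.abs(gy)       # cheaper than the square root

f = np.zeros((4, 4)); f[:, 2:] = 50      # vertical edge between columns 1 and 2
gm = gradient_magnitude(f)
print(gm[0])   # nonzero only at the edge column
```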
Gradient for Edge Enhancement
Combining Sharpening Enhancement Methods
Combining Sharpening Enhancement Methods
Combining Sharpening Enhancement Methods
Filtering in the Frequency Domain
Background
• Image domains
– Spatial domain techniques operate directly on the
pixels of an image
– Transform domain
• Fourier Transform (FT)
• Wavelet Transform
• Discrete Cosine Transform (DCT)
Background
Review
• Sampling and Fourier Transform of sampled
functions
- Sampling theorem, aliasing, reconstruction
from sampled data
• Discrete Fourier Transform (DFT) of one
variable
• 2-D DFT and its Inverse
• Properties of 2-D DFT
Sampling
FT of Impulse Train
• Intermediate result: the Fourier transform of the impulse train
s_ΔT(t) = Σ_{n=−∞}^{+∞} δ(t − nΔT)
is
S(µ) = (1/ΔT) Σ_{n=−∞}^{+∞} δ(µ − n/ΔT)
The FT of an impulse train with period ΔT is also an impulse train, whose period is 1/ΔT
Sampling Theorem
Extraction of F(µ)
Aliasing
Fourier Transform of Sampled Signal
• The Fourier transform of a sampled band-limited (discrete) signal:
F[k] = Σ_{n=0}^{N−1} f[n] e^{−j2πnk/N},   0 ≤ k ≤ N − 1
f[n] = (1/N) Σ_{k=0}^{N−1} F[k] e^{j2πnk/N},   0 ≤ n ≤ N − 1
(Images taken from Gonzalez & Woods, Digital Image Processing (2002))
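The forward formula can be evaluated directly and checked against a library FFT (an illustrative numpy sketch):

```python
import numpy as np

def dft_1d(f):
    """Direct evaluation of F[k] = sum_n f[n] exp(-j 2 pi n k / N)."""
    N = len(f)
    n = np.arange(N)
    return np.array([np.sum(f * np.exp(-2j * np.pi * n * k / N))
                     for k in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
F = dft_1d(f)
# Agrees with the FFT (which uses the same convention, 1/N on the inverse):
print(np.allclose(F, np.fft.fft(f)))   # True
```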
2D Impulse Train
f(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} F(µ, ν) e^{j2π(µx + νy)} dµ dν
Example- 2-D Function FT
Analogous 1-D function FT
2D continuous signals (cont.)
• 2-D sampling is accomplished by the sampling function
s_{ΔXΔY}(x, y) = Σ_{m=−∞}^{+∞} Σ_{n=−∞}^{+∞} δ(x − nΔX, y − mΔY)
Over-sampled Under-sampled
Aliasing in Images
• A continuous function f(x, y) of two continuous variables can be band-limited in general only if it extends infinitely in both coordinate directions
• Since we cannot sample a function infinitely, aliasing is always present in digital images
– Spatial aliasing: due to under-sampling
– Temporal aliasing: related to the time intervals between images in a sequence of images
– Example: the "wagon wheel" effect, in which wheels with spokes in a sequence of images appear to be rotating backward
– Reason: the frame rate is too slow compared with the speed of wheel rotation in the image sequence
Example: aliasing
Assume a noiseless imaging system, which takes a fixed number of samples, 96 × 96 pixels
Table 2 (contd.)
Summary of DFT Pairs
2-D DFT Periodicity
- Transform data in the interval 0 to M − 1 consist of two back-to-back half periods meeting at point M/2
Image Spectrum
Basics of Filtering in the Frequency
Domain
• Each term of F(u, v) contains all values of f(x, y)
Basics of Filtering in the Frequency
Domain
• The slowest-varying frequency component (u = v = 0) is proportional to the average intensity of an image
• As we move away from the origin of the transform, the low frequencies correspond to the slowly varying intensity components of an image
– e.g., smooth intensity variations on the walls and floor in an image of a room, or a cloudless sky in an outdoor scene
• As we move further away from the origin, the higher frequencies begin to correspond to faster and faster intensity changes in the image
– e.g., sharp changes in intensity, such as the edges of objects and other components of an image, and noise
Basics of Filtering in the Frequency
Domain
• Filtering techniques in the frequency domain
are based on modifying the FT to achieve a
specific objective and then computing the
inverse DFT to get back to the image domain
• Two components of the FT to which we have
access
– Transform magnitude (Spectrum): useful
– Phase angle: generally is not very useful
Image and its Fourier Spectrum
Response of DC Blocking Filter
Basics of Filtering in the Frequency
Domain
• Lowpass filter: a filter H(u, v) that attenuates high frequencies while passing low frequencies; it blurs the image
2-D DFT Periodicity
Steps in Frequency Domain Filtering
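The standard steps can be sketched end to end (an illustrative numpy implementation following the usual Gonzalez & Woods procedure: pad to 2M × 2N, center the spectrum with (−1)^(x+y), transform, filter, invert, un-center, crop; the function names are ours):

```python
import numpy as np

def filter_frequency_domain(f, make_H):
    """Pad to 2M x 2N, center with (-1)^(x+y), DFT, multiply by H(u, v),
    inverse DFT, undo centering, crop back to M x N."""
    M, N = f.shape
    P, Q = 2 * M, 2 * N
    fp = np.zeros((P, Q)); fp[:M, :N] = f              # 1. zero-pad
    x, y = np.meshgrid(np.arange(Q), np.arange(P))
    fp = fp * (-1.0) ** (x + y)                        # 2. center the spectrum
    F = np.fft.fft2(fp)                                # 3. DFT
    G = make_H(P, Q) * F                               # 4. apply the filter
    g = np.real(np.fft.ifft2(G)) * (-1.0) ** (x + y)   # 5-6. invert, un-center
    return g[:M, :N]                                   # 7. crop

# Sanity check: an all-pass filter (H = 1) returns the input unchanged.
f = np.random.rand(8, 8)
g = filter_frequency_domain(f, lambda P, Q: np.ones((P, Q)))
print(np.allclose(f, g))   # True
```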
Frequency Domain Filters
Image Smoothing using Frequency
Domain Filters
• Lowpass filtering: high-frequency attenuation
• Three types
– Ideal (very sharp)
– Butterworth: transition between the two extremes
• Filter order high: approaches the ideal filter
• Filter order low: more like the Gaussian
– Gaussian (very smooth)
Image Smoothing using Frequency
Domain Filters
• Ideal Lowpass Filters (ILPF)
– Passes without attenuation all frequencies within a circle of radius D0 from the origin and cuts off all frequencies outside this circle
– The point of transition between H(u, v) = 1 and H(u, v) = 0 is called the cutoff frequency
Ideal Low-pass Filter
Image Smoothing using Frequency
Domain Filters
• One way to establish a set of standard cutoff frequency loci is to compute circles that enclose specified amounts of the total image power PT:
P_T = Σ_{u=0}^{P−1} Σ_{v=0}^{Q−1} P(u, v)
• A circle of radius D0 encloses α percent of the power, where
α = 100 [ Σ_u Σ_v P(u, v) / P_T ]
and the sum is taken over points (u, v) inside the circle
ILPF Filter Performance
Image Smoothing using Frequency
Domain Filters
• Butterworth Lowpass Filter (BLPF)
– BLPF of order n, with cutoff frequency at a distance D0 from the origin:
H(u, v) = 1 / (1 + [D(u, v)/D0]^{2n})
– Unlike the ILPF, the BLPF transfer function does not have a sharp discontinuity that gives a clear cutoff between passed and filtered frequencies
– Smooth transition between low and high frequencies
BLPF Performance
Image Smoothing using Frequency
Domain Filters
• Gaussian Lowpass Filter (GLPF)
– GLPF with cutoff frequency at a distance D0 from the origin:
H(u, v) = e^{−D²(u,v)/2D0²}
– Like the BLPF, the GLPF transfer function has no sharp discontinuity; the transition between low and high frequencies is smooth, and the filter produces no ringing
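The three lowpass transfer functions can be built on a centered frequency rectangle as follows (an illustrative numpy sketch; `D` computes the distance D(u, v) from the center of a P × Q rectangle):

```python
import numpy as np

def D(P, Q):
    """Distance D(u, v) from the center of a P x Q frequency rectangle."""
    u = np.arange(P) - P / 2
    v = np.arange(Q) - Q / 2
    V, U = np.meshgrid(v, u)
    return np.sqrt(U ** 2 + V ** 2)

def ilpf(P, Q, D0):
    return (D(P, Q) <= D0).astype(float)             # 1 inside radius D0, else 0

def blpf(P, Q, D0, n):
    return 1.0 / (1.0 + (D(P, Q) / D0) ** (2 * n))   # smooth roll-off

def glpf(P, Q, D0):
    d = D(P, Q)
    return np.exp(-d ** 2 / (2.0 * D0 ** 2))         # Gaussian, no ringing

H = glpf(64, 64, D0=16)
print(H[32, 32])   # 1.0 at the center (zero frequency)
```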
GLPF Response
GLPF Performance
Image Sharpening using Frequency
Domain Filters
• Highpass filtering
• Low frequency attenuation
• Three types
– Ideal
– Butterworth
– Gaussian
H_HP(u, v) = 1 − H_LP(u, v)
Image Sharpening using Frequency
Domain Filters
• Ideal Highpass Filter (IHPF)
H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0
Image Sharpening using Frequency
Domain Filters
• Butterworth Highpass Filter (BHPF)
H(u, v) = 1 / (1 + [D0/D(u, v)]^{2n})
Image Sharpening using Frequency
Domain Filters
• Gaussian Highpass Filters (GHPF)
H(u,v) = 1 − e^(−D²(u,v) / 2D0²)
The Laplacian in the Frequency Domain
• The Laplacian can be implemented in the
frequency domain using the filter

H(u,v) = −4π²(u² + v²)

∇²f(x,y) = ℱ⁻¹{ H(u,v) F(u,v) }

g(x,y) = f(x,y) + c ∇²f(x,y),  with c = −1 here
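A hedged sketch of this sharpening pipeline (NumPy assumed). Normalizing the Laplacian before combining it with the image is a practical scaling choice, not something the formula itself prescribes:

```python
import numpy as np

def laplacian_sharpen(img, c=-1.0):
    # H(u,v) = -4*pi^2*(u^2 + v^2), with centered, normalized frequencies
    rows, cols = img.shape
    u = (np.arange(rows) - rows // 2) / rows
    v = (np.arange(cols) - cols // 2) / cols
    V, U = np.meshgrid(v, u)
    h = -4.0 * np.pi**2 * (U**2 + V**2)
    f = np.fft.fftshift(np.fft.fft2(img))
    lap = np.real(np.fft.ifft2(np.fft.ifftshift(h * f)))
    peak = np.abs(lap).max()
    if peak > 0:                # scale the Laplacian into a comparable range
        lap = lap / peak
    return img + c * lap        # g = f + c * laplacian, c = -1
```

Because H = 0 at the origin, a constant image is left untouched; only intensity transitions contribute to the correction term.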
Homomorphic Filtering
Selective Filtering
• Bandreject and Bandpass Filters
• Notch Filters
Image Restoration and
Reconstruction
Preview
• Principal goal of restoration: to improve an
image in some predefined sense
• Attempts to recover an image that has been
degraded, using a priori knowledge of the
degradation phenomenon
Model of Image Restoration/Degradation
• A degradation function H together with an
additive noise term η(x,y) operates on an input image
f(x,y) to produce a degraded image g(x,y)
• Given g(x,y), some knowledge about the
degradation function H, and some knowledge
about the additive noise term η(x,y), the
objective of restoration is to obtain an
estimate f̂(x,y) of the original image
Noise Models
• Principal noise sources in digital images arise
during acquisition and/or transmission
– Environmental conditions
– Quality of sensing elements
• Light
• Temperature
– Interference in the transmission channel
• e.g. in a wireless network, noise due to lightning or other
atmospheric disturbances
Noise Models
• Some noise models
– White noise: when the Fourier spectrum of noise is
constant
– Gaussian (normal) noise: used frequently in practice
– Rayleigh noise
– Erlang (gamma)
– Exponential
– Uniform
– Impulse (salt-and-pepper)
– Speckle
– Periodic
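Two of the models listed above can be simulated directly, which is useful when testing restoration filters; a sketch (NumPy assumed, function names and parameter defaults illustrative):

```python
import numpy as np

def add_gaussian_noise(img, mean=0.0, sigma=20.0, seed=0):
    # Gaussian (normal) noise: frequently used in practice
    rng = np.random.default_rng(seed)
    return img + rng.normal(mean, sigma, img.shape)

def add_salt_pepper(img, pa=0.1, pb=0.1, seed=0):
    # Impulse noise: pa = probability of pepper (0), pb = probability of salt (255)
    rng = np.random.default_rng(seed)
    out = img.copy().astype(float)
    r = rng.random(img.shape)
    out[r < pa] = 0.0
    out[r > 1.0 - pb] = 255.0
    return out
```

With pa = pb = 0.1, roughly 10% of the pixels become black and 10% white, matching the bipolar noise used in the median-filter examples later in the chapter.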
Noise Probability Density Functions
Noise Models - Practical Examples
Effect of Noise Addition
Periodic Noise - Example
Noise Models
• Estimation of noise parameters
– May be known partially from sensor specifications,
but it is usually necessary to estimate them for a
particular imaging arrangement
• Capture a set of images of “flat” environments
– e.g. in the case of an optical sensor, image a solid gray board that is
illuminated uniformly; the resulting images are good indicators
of system noise
Estimation of noise parameters
Performance of Spatial Filters in Noise
Reduction
Digital Image Processing, 3rd ed.
Gonzalez & Woods
Chapter 5
Image Restoration and Reconstruction
Outline
• A model of the image degradation / restoration process
• Noise models
• Restoration in the presence of noise only – spatial
filtering
• Periodic noise reduction by frequency domain filtering
• Linear, position-invariant degradations
• Estimating the degradation function
• Inverse filtering
g(x,y) = f(x,y) + η(x,y)
G(u,v) = F(u,v) + N(u,v)
Spatial filters for de-noising
additive noise
• Techniques similar to image enhancement
• Mean filters
• Order-statistics filters
• Adaptive filters
Mean filters
• Arithmetic mean

f̂(x,y) = (1/mn) Σ_{(s,t)∈S_xy} g(s,t)

(S_xy: the m×n window centered at (x,y))

• Geometric mean

f̂(x,y) = [ Π_{(s,t)∈S_xy} g(s,t) ]^(1/mn)
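Both mean filters can be sketched with a straightforward sliding window (NumPy assumed; the geometric mean is computed through logarithms, with a small epsilon as an assumed guard against log(0)):

```python
import numpy as np

def arithmetic_mean(img, m=3, n=3):
    # f_hat(x,y) = (1/mn) * sum of g(s,t) over the m-by-n window S_xy
    pad = np.pad(img, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + m, j:j + n].mean()
    return out

def geometric_mean(img, m=3, n=3):
    # f_hat(x,y) = (product of g(s,t)) ** (1/mn), computed via logs
    eps = 1e-12  # assumed guard against log(0)
    return np.exp(arithmetic_mean(np.log(img + eps), m, n))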
Figure: original image vs. image with additive Gaussian noise (m = 0, σ = 20)
• Contraharmonic mean

f̂(x,y) = Σ_{(s,t)∈S_xy} g(s,t)^(Q+1) / Σ_{(s,t)∈S_xy} g(s,t)^Q

Q = −1: harmonic mean
Q = 0: arithmetic mean
Q > 0: removes pepper noise; Q < 0: removes salt noise
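A sketch of the contraharmonic mean (NumPy assumed; the epsilon offset is an assumed guard so that zero-valued pepper pixels do not make negative powers blow up):

```python
import numpy as np

def contraharmonic(img, q, m=3, n=3):
    # f_hat = sum g^(Q+1) / sum g^Q over the window; Q>0 removes pepper, Q<0 salt
    eps = 1e-12
    pad = np.pad(img.astype(float) + eps, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = pad[i:i + m, j:j + n]
            out[i, j] = (w**(q + 1)).sum() / (w**q).sum()
    return out
```

Picking the wrong sign of Q makes things worse rather than better, which is exactly what the "wrong sign" figure below illustrates: a pepper pixel filtered with Q = −1.5 stays dark.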
Figure: pepper noise (black dots) reduced by contraharmonic filtering
with Q = 1.5; salt noise (white dots) reduced with Q = −1.5
Wrong sign in contra-harmonic filtering
Order-statistics filters
• Median filter
f̂(x,y) = median_{(s,t)∈S_xy} {g(s,t)}

• Max/min filters

f̂(x,y) = max_{(s,t)∈S_xy} {g(s,t)}

f̂(x,y) = min_{(s,t)∈S_xy} {g(s,t)}
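All three order-statistic filters share the same window mechanics and differ only in which statistic they keep; a sketch (NumPy assumed, function name illustrative):

```python
import numpy as np

def order_statistic(img, stat, m=3, n=3):
    # Apply median / max / min over the m-by-n neighborhood S_xy
    fn = {"median": np.median, "max": np.max, "min": np.min}[stat]
    pad = np.pad(img, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = fn(pad[i:i + m, j:j + n])
    return out
```

A single bright impulse disappears under the median (it is outvoted by its neighbors), is preserved by the max filter, and is spread by it; this is why the median excels at salt-and-pepper noise.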
Figure: image corrupted by bipolar (salt-and-pepper) noise with
Pa = Pb = 0.1, and the results of one, two, and three passes of a
3×3 median filter
Figure: pepper noise reduced by a max filter; salt noise reduced by
a min filter
Order-statistics filters (cont.)
• Midpoint filter
f̂(x,y) = ½ [ max_{(s,t)∈S_xy} {g(s,t)} + min_{(s,t)∈S_xy} {g(s,t)} ]
Figure: image corrupted by uniform noise (m = 0, σ² = 800) plus
bipolar noise (Pa = Pb = 0.1), filtered with a 5×5 arithmetic mean
filter, 5×5 geometric mean filter, 5×5 median filter, and 5×5
alpha-trimmed mean filter (d = 5)
Adaptive filters
◼ Adapted to the behavior based on the
statistical characteristics of the image inside
the filter region Sxy
◼ Improved performance vs. increased
complexity
◼ Example: Adaptive local noise reduction filter
Adaptive local noise reduction
filter
◼ Simplest statistical measures: mean and variance
◼ Parameters known on the local region S_xy:
◼ g(x,y): noisy image pixel value
◼ σ_η²: noise variance (assumed known a priori)
◼ m_L: local mean
◼ σ_L²: local variance
Adaptive local noise reduction
filter (cont.)
◼ Desired behavior:
◼ If σ_η² is zero, return g(x,y)
◼ If σ_L² > σ_η², return a value close to g(x,y)
◼ If σ_L² = σ_η², return the arithmetic mean m_L
◼ Formula

f̂(x,y) = g(x,y) − (σ_η² / σ_L²) [ g(x,y) − m_L ]
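A sketch of this filter (NumPy assumed). Clipping the variance ratio at 1 whenever σ_L² falls below σ_η² is a standard practical adjustment, since the formula would otherwise overshoot:

```python
import numpy as np

def adaptive_local_filter(g, noise_var, m=3, n=3):
    # f_hat = g - (sigma_eta^2 / sigma_L^2) * (g - m_L), ratio clipped to 1
    pad = np.pad(g, ((m // 2,), (n // 2,)), mode="edge")
    out = np.zeros(g.shape)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            w = pad[i:i + m, j:j + n]
            m_l, var_l = w.mean(), w.var()
            ratio = 1.0 if var_l <= noise_var else noise_var / var_l
            out[i, j] = g[i, j] - ratio * (g[i, j] - m_l)
    return out
```

The three desired behaviors fall out directly: zero noise variance returns g unchanged, high local variance (edges) keeps values close to g, and flat regions collapse to the local mean.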
Figure: image with additive Gaussian noise (m = 0, σ² = 1000),
filtered with a 7×7 arithmetic mean filter, a 7×7 geometric mean
filter, and the 7×7 adaptive local noise reduction filter
Periodic noise reduction
◼ Pure sine wave
◼ Appears as a conjugate pair of impulses in the
frequency domain

f(x,y) = A sin(u₀x + v₀y)

F(u,v) = −j (A/2) [ δ(u − u₀/2π, v − v₀/2π) − δ(u + u₀/2π, v + v₀/2π) ]
Periodic noise reduction (cont.)
◼ Bandreject filters
◼ Bandpass filters
◼ Notch filters
◼ Optimum notch filtering
Bandreject filters
* Reject frequencies in an isotropic band about the origin
Figure: noisy image, its spectrum, the bandreject filter, and the
filtered result
Bandpass filters
◼ H_bp(u,v) = 1 − H_br(u,v)
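The ideal versions of both filters can be sketched directly from the distance to the center of the spectrum (NumPy assumed; C0 is the band's center radius and W its width, following the usual parameterization):

```python
import numpy as np

def ideal_bandreject(shape, c0, w):
    # H = 0 inside the band of width W centered at radius C0, else 1
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    d = np.sqrt(U**2 + V**2)
    h = np.ones(shape)
    h[(d >= c0 - w / 2) & (d <= c0 + w / 2)] = 0.0
    return h

def ideal_bandpass(shape, c0, w):
    # Complement of the bandreject filter: H_bp = 1 - H_br
    return 1.0 - ideal_bandreject(shape, c0, w)
```

Applied to a spectrum containing the conjugate impulse pair of a sinusoidal noise pattern, the bandreject version removes both impulses in one step, provided they lie on the chosen radius.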
Notch filters
◼ Reject(or pass) frequencies in predefined
neighborhoods about a center frequency
Figure: ideal, Butterworth, and Gaussian notch filters
Figure: image degraded by horizontal scan lines, its DFT, the notch
pass and notch reject filters, and the corresponding filtered results
A model of the image
degradation /restoration process
g(x,y) = f(x,y) * h(x,y) + η(x,y)
G(u,v) = F(u,v) H(u,v) + N(u,v)
(for a linear, position-invariant system)
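When H(u,v) is known and the noise term is ignored, the model above can be inverted directly; a sketch of that direct inverse filter (NumPy assumed, noiseless assumption stated up front; zeroing frequencies where |H| is tiny is an assumed safeguard against division blow-up):

```python
import numpy as np

def inverse_filter(g, h, threshold=0.1):
    # Direct inverse filtering: F_hat = G / H, skipping tiny |H| values
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    F_hat = np.zeros_like(G)
    mask = np.abs(H) > threshold
    F_hat[mask] = G[mask] / H[mask]
    return np.real(np.fft.ifft2(F_hat))
```

With noise present, G/H = F + N/H, and N/H dominates wherever H is small; this is why the later slides limit the inverse to a radius around the origin.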
Linear, position-invariant
degradation
Properties of the degradation function H
◼ Linear system
◼ H[af1(x,y)+bf2(x,y)]=aH[f1(x,y)]+bH[f2(x,y)]
◼ Position(space)-invariant system
◼ H[f(x,y)]=g(x,y)
◼ H[f(x-a, y-b)]=g(x-a, y-b)
◼ cf. 1-D signals:
◼ LTI (linear time-invariant) systems
Linear, position-invariant
degradation model
◼ Linear system theory is well developed
◼ Non-linear, position-dependent models
◼ may be more general and accurate
◼ but are difficult to solve computationally
◼ Image restoration: find H(u,v) and apply
inverse process
◼ Image deconvolution
Estimation by image
observation
◼ Take a window in the image
◼ Simple structure
◼ Strong signal content
◼ Estimate the original image in the window
H_s(u,v) = G_s(u,v) / F̂_s(u,v)

(G_s: the observed degraded window; F̂_s: the estimate of the
original image in the window)
Estimation by experimentation
◼ If the image acquisition system is ready
◼ Obtain the impulse response
Figure: original image, and degraded versions for k = 0.0025,
k = 0.001, and k = 0.00025
Estimation by modeling: example
Figure: results when the inverse-filter ratio is cut off outside
70% and 85% radial limits
DIGITAL IMAGE PROCESSING
Submitted to
Dr. Basant Kumar
(Associate Professor)
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
ALLAHABAD-211004, INDIA
Spectrum of White Light
Fig 1.1 Color Spectrum seen by passing white light through a prism
•Rods
–Long and thin
–Large quantity (~ 100 million)
–Provide scotopic vision (i.e., dim light vision or at low
illumination)
–Only extract luminance information and provide a general overall
picture
• Cones
–Short and thick, densely packed in fovea (center of retina)
–Much fewer (~ 6.5 million) and less sensitive to light than rods
–Provide photopic vision (i.e., bright light vision or at high illumination)
–Help resolve fine details as each cone is connected to its own nerve end
–Responsible for color vision
❑ Mesopic vision: the intermediate range in which both rods and cones are active
Primary Colors:
Defined by the CIE in 1931:
Red = 700 nm
Green = 546.1 nm
Blue = 435.8 nm
Fig 1.4 Sensitivity Curve
Primary and Secondary Colors
Additive primary colors: RGB, used
for light sources such as color
monitors
Hue and saturation taken together are called chromaticity.
The amounts of red (X), green (Y), and blue (Z) needed to form any
particular color are called the tristimulus values.
Perceptual Attributes of Color
❑ Brightness (value): perceived luminance
❑ Chrominance
• Hue
o specifies the color tone (redness, greenness, etc.)
o depends on the peak wavelength
• Saturation
o describes how pure the color is
o depends on the spread (bandwidth) of the light spectrum
o reflects how much white light is added
RGB Color Model
Purpose of color models: to facilitate
the specification of colors in some
standard.
Hidden faces
of the cube
Safe RGB Colors
Safe RGB colors: a subset of
RGB colors.
There are 216 colors common in
most operating systems.
Trichromatic coefficients:
x = X/(X+Y+Z),  y = Y/(X+Y+Z),  z = Z/(X+Y+Z),  so x + y + z = 1
Fig 1.9 Chromaticity Diagram. Points on the boundary are fully saturated colors.
C= Cyan
CMY and CMYK Color Models M = Magenta Y = Yellow
K = Black
•Primary colors for pigment
–Defined as one that subtracts/absorbs a primary color
of light & reflects the other two
•CMY – Cyan, Magenta, Yellow
–Complementary to RGB
–Proper mix of them produces black
HSI Color Model
RGB, CMY models are not good for human interpreting
HSI Color model:
Hue: dominant color
Saturation: relative purity (inversely proportional to the amount
of white light added)
Intensity: brightness
Hue and saturation together carry the color information.
Hue and Saturation on Color Planes
Figure: an RGB image and its hue, saturation, and intensity components
C1 = Color No. 1
C2 = Color No. 2
Pseudocolor rendition
of Jupiter’s moon Io
A close-up
Basics of Full Color Image Processing
Methods:
1. Per-color-component processing: process each component separately.
2. Vector processing: treat each pixel as a vector to be processed.
Example of per-color-component processing: smoothing an image by smoothing each
RGB component separately (shown below)
Example: Full Color Image and Various Color Space
Components
Color image
CMYK components
RGB components
HSI components
Color Transformation
Used to transform colors to colors:
s_i = T_i(r_1, r_2, …, r_n),  i = 1, 2, …, n
where r_i and s_i are the color components of the input and output pixels.
Color complement replaces each color with its opposite color in the color circle of
the Hue component. This operation is analogous to image negative in a gray scale
image.
Color Transformation Example
Color Slicing Transformation
We can perform “slicing” in color space: if the color of a pixel is farther
than a threshold distance from a desired color, we set that pixel to some
specific color such as gray; otherwise we keep the original color unchanged.
Color Slicing Transformation Example
Original image
Tonal Correction Examples
Color Balancing Correction Examples
Histogram Equalization of a Full Color Image
Histogram equalization of a color image can be performed by adjusting color
intensity uniformly while leaving color unchanged.
The HSI model is suitable for histogram equalization where only Intensity (I)
component is equalized.
where r and s are intensity components of input and output color image.
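A hedged sketch of this I-only equalization (NumPy assumed; intensity is approximated here as the RGB mean, and rescaled outputs can leave the RGB gamut, so clipping may be needed for display):

```python
import numpy as np

def equalize_intensity(rgb):
    # Equalize only I = (R+G+B)/3, scaling each pixel's RGB by I_new / I
    img = rgb.astype(float)
    inten = img.mean(axis=2)
    hist, bins = np.histogram(inten, bins=256, range=(0.0, 255.0))
    cdf = hist.cumsum() / hist.sum()                 # equalization mapping s = T(r)
    i_new = np.interp(inten.ravel(), bins[:-1], cdf * 255.0).reshape(inten.shape)
    scale = i_new / np.maximum(inten, 1e-9)          # guard against I = 0
    return img * scale[..., None]
```

Because all three channels of a pixel are multiplied by the same factor, the ratios between R, G, and B are preserved, which is the sense in which "color is left unchanged."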
Histogram Equalization of a Full Color Image
Color Image Smoothing
Let Sxy denote the set of coordinates defining a neighborhood centered at (x, y) in an RGB
color image. The average of the RGB component vectors in this neighborhood is
Thus, we conclude that smoothing by neighborhood averaging can be carried out on a per-
color-plane basis.
Fig 6.39(a) through (c) show the HSI components of the image. Fig 6.40(a) shows smoothed, full-color RGB image.
Color Image Sharpening
From vector analysis, we know that the Laplacian of a vector is defined as a vector whose
components are equal to the Laplacian of the individual scalar components of the input
vector. In the RGB color system, the Laplacian of vector c in Eq.(6.4-2) is
which tells us that we can compute the Laplacian of a full-color image by computing the
Laplacian of each component image separately.
Figure 6.41(b) shows a similarly sharpened image based on the HSI components in Fig.6.39. This result was
generated by combining the Laplacian of the intensity component with the unchanged hue and saturation
components.
There are 2 methods for color segmentation:
1. Segmented in HSI color space:
A thresholding function based on color information in H and S Components. We
rarely use I component for color image segmentation.
If we want to segment an image based on color, we naturally think first of the
HSI color space, because color is conveniently represented in the hue image.
(a),(b),(c)
Fig: HSI components of noisy color image in fig . (a)Hue, (b) Saturation , (c) Intensity
Color Image Compression
Image Segmentation
Image segmentation divides an image into regions that are
connected and have some similarity within the region and
some difference between adjacent regions.
The goal is usually to find individual objects in an image.
A point is detected at a location where |R| ≥ T,
where T is a nonnegative threshold
Line Detection
Ramp edge: in practice, digital images have edges that are blurred and
noisy, so they are more closely modeled as ramp edges
Roof edge: roof edges appear in range imaging, when thin objects
(such as pipes) are closer to the sensor than the equidistant background
Edge Detection
Observations
- Magnitude of the first derivative can be used to
detect the presence of an edge at a point in an image
- Sign of the second derivative can be used to determine
whether an edge pixel lies on the dark or light side of an
edge (positive at the beginning of the ramp and negative
at the end of the ramp)
- Zero crossings can be used for locating centers of
thick edges
Edge Detection in Presence of Noise
• First-order derivatives:
– The gradient of an image f(x,y) at location (x,y) is
defined as the vector:

∇f = [G_x, G_y]^T = [∂f/∂x, ∂f/∂y]^T

– The magnitude of this vector:

∇f = mag(∇f) = [G_x² + G_y²]^(1/2)

– The direction of this vector:

α(x,y) = tan⁻¹(G_y / G_x)

– It points in the direction of the greatest rate of change
of f at location (x,y)
Properties of the Gradient
Direction of an edge at any arbitrary point (x, y) is orthogonal to the direction α(x, y) , of
the gradient vector at the point
Example: zoomed section of an image containing a straight edge segment;
each square corresponds to a pixel
Gradient Operator Masks
Masks of size 2×2 are simple, but they are not as useful for computing
edge direction because they are not symmetric about a center point
Prewitt operators
Sobel operators
Preferable: better noise-suppression (smoothing) characteristics
Figure: Prewitt and Sobel masks
Gradient Operator Masks
M(x,y) ≈ |g_x| + |g_y|
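A sketch of Sobel gradient computation, using the absolute-value approximation of the magnitude given above (NumPy assumed, function name illustrative):

```python
import numpy as np

def sobel_gradient(img):
    # Return the |gx| + |gy| magnitude and the gradient angle, 3x3 Sobel masks
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = pad[i:i + 3, j:j + 3]
            gx[i, j] = (w * kx).sum()
            gy[i, j] = (w * ky).sum()
    return np.abs(gx) + np.abs(gy), np.arctan2(gy, gx)
```

On a vertical step edge the response is entirely in gx, so the gradient angle is 0 (pointing across the edge), consistent with the property that the edge direction is orthogonal to the gradient direction.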
Sobel Operator Results : Example
Figure: image f and its Sobel components G_x and G_y. Fine details
act as noise and are therefore enhanced by the derivative operators.
Gradient Angle: Example
In general, angle images are not as useful as gradient magnitude images
for edge detection, but they complement the information extracted from
an image using the magnitude of the gradient
Gradient Operator after smooth filtering
Sometimes a threshold is applied to the gradient image: pixels with
values greater than the threshold are shown white and the others black.
Figure: thresholded gradient image, and the result of smoothing the
image before computing and thresholding the gradient
Gradient Diagonal Operators: Example
Second-order derivatives
Canny [1986]
Marr-Hildreth edge detector (1980)
Marr and Hildreth
argued -
1) Intensity changes are dependent on image scale, so their
detection requires operators of different sizes
2) A sudden intensity change gives rise to a peak or trough in
the first derivative or, equivalently, to a zero crossing in the
second derivative
Two salient features:
1) It should be a differential operator capable of computing first- or
second-order derivatives at every point in an image
2) It should be capable of being tuned to act at any desired scale,
so that large operators can be used to detect blurry edges and small
operators to detect sharply focused fine detail
Marr-Hildreth edge detector
The algorithm
Edge Detection
The Laplacian of Gaussian (LoG)
Canny edge detector
Basic objectives:
Step 1 – Smooth image
Create a smoothed image from the original using a Gaussian
function:

G(x,y) = e^(−(x² + y²) / 2σ²)

f_s(x,y) = G(x,y) ⋆ f(x,y)
Step 2 – Compute Gradient
We locate the edges using the gradient magnitude and direction,
with the same method as basic edge detection:

g_x = ∂f_s/∂x,   g_y = ∂f_s/∂y

M(x,y) = √(g_x² + g_y²)

α(x,y) = tan⁻¹(g_y / g_x)
Step 3 – Nonmaxima suppression
Edges generated using gradient typically contain wide
ridges around local maxima.
We will locate the local maxima using non-
maxima suppression method.
We will define a number of discrete orientations. For
example in a 3x3 region 4 directions can be defined.
We have to quantize all possible edge directions into
four
Define a range of angles for the four possible directions
Step 4 – Double Thresholding
The received image may still contain false edge
points.
We will reduce them using hysteresis (or double)
thresholding.
Let T_L and T_H be low and high thresholds; the suggested ratio
T_L : T_H is 1:3 or 2:3. Define g_NH(x,y) and g_NL(x,y) to be
g_N(x,y) thresholded with T_H and T_L respectively.
Clearly g_NL(x,y) contains all the points of g_NH(x,y); we
separate them by subtraction:

g_NL(x,y) = g_NL(x,y) − g_NH(x,y)

Now g_NL(x,y) and g_NH(x,y) hold the “weak” and the “strong”
edge pixels, respectively.
Step 4 – Double Thresholding
The algorithm for building the edges is:
1. Locate the next unvisited edge pixel p in g_NH(x,y).
2. Mark as valid all the weak pixels in g_NL(x,y) that are
connected to p.
3. If not all non-zero pixels in g_NH(x,y) have been visited,
return to step 1.
4. Set all the non-marked pixels in g_NL(x,y) to 0.
5. Return g_NH(x,y) + g_NL(x,y).
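The double-thresholding-plus-connectivity step above can be sketched as follows (NumPy assumed; 8-connectivity is used for "connected to p", which is the common choice):

```python
import numpy as np
from collections import deque

def hysteresis(mag, t_low, t_high):
    # Keep weak pixels (> t_low) only if 8-connected to a strong pixel (> t_high)
    strong = mag > t_high
    weak = (mag > t_low) & ~strong
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < mag.shape[0] and 0 <= nj < mag.shape[1]
                        and weak[ni, nj]):
                    weak[ni, nj] = False   # mark as valid, visit once
                    out[ni, nj] = True
                    q.append((ni, nj))
    return out
```

Weak pixels that are isolated from every strong pixel never get marked, so they are dropped as the false edge points the text describes.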
Example 1
Example 2
Edge Linking introduction
Local processing
1. Compute the gradient magnitude and angle arrays, M(x,y)
and α(x,y), of the input image, f(x,y) .
2. Form a binary image, g, whose value at any pair of
coordinates (x,y) is given by:
3. Scan the rows of g and fill (set to 1) all gaps (sets of 0s) in each row
that do not exceed a specified length, K. Note that, by definition, a
gap is bounded at both ends by one or more 1s. The rows are
processed individually with no memory between them.
4. To detect gaps in any other direction, θ, rotate g by this angle and
apply the horizontal scanning procedure in step 3. Rotate the result
back by -θ.
Example –Edge linking using local processing
Global processing
Good for unstructured environments.
All pixels are candidate for linking.
Need for predefined global properties.
Trivial application
Given n points, we want to find the subsets that lie on straight lines:
Find all lines determined by every pair of points
Find all subsets of points that are close to particular lines
This requires n(n−1)/2 ~ n² lines and n · n(n−1)/2 ~ n³
point-to-line comparisons.
Hough transform
The xy-plane:
Consider a point (xi,yi). The general equation
of a straight line : yi = axi +b.
Many lines pass through (xi,yi). They all
satisfy : yi = axi +b for varying values of a and
b.
Hough transform
(xj, yj) also has a line in parameter space
associated with it. – line 2.
Unless line1 and line 2 are parallel they
intersects at some point (a’, b’) where a’ is the
slope and b’ the intercept of the line containing
both (xj, yj) and (xi,yi).
Figure: the ab parameter space; point (x_i, y_i) maps to the line
b = −x_i a + y_i, point (x_j, y_j) maps to b = −x_j a + y_j, and the
two lines intersect at (a′, b′)
Edge Linking and Boundary Detection
Global Processing via the Hough Transform
Hough transform
Using y_i = a x_i + b, the slope a approaches infinity as lines
approach the vertical direction.
Solution: the normal representation of a line,

x cos θ + y sin θ = ρ
Hough transform
◼ Each point is mapped to a sinusoid in the
ρθ-plane, where
ρ – the distance between the line and the origin
θ – the angle between the distance vector and the
positive x-axis
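Accumulating votes in a discretized ρθ-plane can be sketched as follows (NumPy assumed; the number of θ subdivisions and the ρ quantization step of 1 are illustrative choices):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    # Vote in (rho, theta) space using x*cos(theta) + y*sin(theta) = rho
    diag = int(np.ceil(np.hypot(*shape)))            # largest possible |rho|
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # offset rho to keep it >= 0
    return acc, thetas, diag
```

Collinear points produce sinusoids that all pass through one accumulator cell, so the cell count equals the number of points on the line; for the vertical line x = 3, that cell sits at θ = 0, ρ = 3.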
Edge Linking and Boundary Detection
Hough Transform Example
The intersection of the
curves corresponding
to points 1,3,5
2,3,4
1,4
Hough transform – Edge
linking
Algorithm :
1. Obtain a binary edge image using any of the
techniques discussed earlier in this section
2. Specify subdivisions in the ρθ –plane.
Edge Linking and Boundary Detection
Hough Transform Example
Thresholding
Based on the histogram of gray-level intensity.
Basic Global Thresholding
Optimum Global Thresholding (Otsu’s Method)
Multiple Threshold
Variable Thresholding
Thresholding
This algorithm works very well for finding thresholds when the histogram
is suitable.
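The basic global algorithm iterates T = ½(m1 + m2), where m1 and m2 are the means of the two classes the current T induces, until T stabilizes; a sketch (NumPy assumed, and it assumes both classes are actually populated, as with a bimodal histogram):

```python
import numpy as np

def basic_global_threshold(img, eps=0.5):
    # Iterate T = average of the two class means until the change is below eps
    t = img.mean()                       # common choice of initial estimate
    while True:
        low, high = img[img <= t], img[img > t]
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

On a clearly bimodal distribution the threshold lands between the two modes, which is the situation where the text says the algorithm works very well.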
Example – Multiple Thresholding
Variable Thresholding – Example
Region-Based Segmentation
•It states that a region is coherent if all the pixels of that region are
homogeneous with respect to some characteristics such as colour,
intensity, texture, or other statistical properties
•Selection of the initial seed: initial seeds that represent the ROI are
typically given by the user, but can also be chosen automatically.
The seeds can be either single or multiple
•Assume seed point indicated by underlines. Let the seed pixels 1 and 9
represent the regions C and D, respectively
•If the difference is less than or equal to 4 (i.e. T=4), merge the pixel
with that region. Otherwise, merge the pixel with the other region.
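The seed-and-threshold growth described above can be sketched for a single seed (NumPy assumed; 4-connectivity and comparison against the seed value are illustrative choices, since the text leaves both open):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, t):
    # Grow from seed, adding 4-neighbors whose value is within T of the seed
    grown = np.zeros(img.shape, dtype=bool)
    grown[seed] = True
    seed_val = img[seed]
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and not grown[ni, nj]
                    and abs(img[ni, nj] - seed_val) <= t):
                grown[ni, nj] = True
                q.append((ni, nj))
    return grown
```

With T = 4 and a seed of value 1, neighboring pixels with values up to 5 are merged into the region, while a pixel of value 9 is excluded, mirroring the worked example.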
Region Growing-Example
Split and Merge Algorithm
• Region growing algorithm is slow
Region-Based Segmentation
Region Splitting
•This process is repeated for each quadrant until all the regions meet
the required homogeneity criteria. If the regions are too small, then
the division process is stopped.