Digital Image Processing - 1
Figure: an image represented as a function f(row, col) of row and column coordinates.
Dr. Chandran Saravanan, NIT Durgapur, India
Image Acquisition
Images are typically generated by illuminating a scene and absorbing the energy reflected by the objects in that scene.
Typical notions of illumination and scene can be way off:
X-rays of a skeleton
Ultrasound of an unborn baby
Electron-microscope images of molecules
Imaging Sensor
Example sensor resolutions: from a 16 x 24 element array up to 12 megapixels (4200 x 2800 pixels).
The more intensity levels used, the finer the level of detail discernible in an image.
Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.
Number of Bits   Number of Intensity Levels   Examples
1                2                            0, 1
2                4                            00, 01, 10, 11
4                16                           0000, 0101, 1111
8                256                          00110011, 01010101
16               65,536                       1010101010101010
Figure: the same image quantised at 16 grey levels (4 bpp), 8 grey levels (3 bpp), 4 grey levels (2 bpp), and 2 grey levels (1 bpp).
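As an illustrative sketch (not from the slides), the following numpy snippet reduces an 8-bit image to a given number of bits per pixel; the function name requantise and the random test image are my own choices.

```python
import numpy as np

def requantise(image, bits):
    """Reduce an 8-bit greyscale image to 2**bits intensity levels."""
    levels = 2 ** bits
    step = 256 // levels                  # width of each quantisation bin
    return (image // step) * step         # map every pixel to the bottom of its bin

# Example: a small random 8-bit image shown at 4, 3, 2 and 1 bits per pixel
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
for b in (4, 3, 2, 1):
    print(b, "bpp:\n", requantise(img, b))
```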
Diagonal neighbours ND(p); 8 neighbours N8(p), where N8(p) is the union of the 4 neighbours N4(p) and the diagonal neighbours ND(p)
Adjacency
V is the set of intensity values used to define adjacency.
For pixels p = (x, y) and q = (s, t):
D4(p,q) = |x - s| + |y - t| (city-block distance)
D8(p,q) = max(|x - s|, |y - t|) (chessboard distance)
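A small Python sketch of these two distance measures, assuming p and q are given as (x, y) and (s, t) tuples:

```python
def d4(p, q):
    """City-block distance between pixels p = (x, y) and q = (s, t)."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):
    """Chessboard distance between pixels p = (x, y) and q = (s, t)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

print(d4((0, 0), (2, 3)))  # 5
print(d8((0, 0), (2, 3)))  # 3
```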
The classical bell-shaped, symmetric histogram has most of the frequency counts bunched in the middle, with the counts dying off out in the tails. From a physical science / engineering point of view, the normal distribution is the distribution that occurs most often in nature.
Image I = {5,4,2,3,7,4,6,5,3,6}
J(4)=(I(3)+I(4)+I(5))/3=(2+3+7)/3=4
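A quick numpy check of this worked example; the use of np.convolve and the 0-based index 2 for the slide's 1-based position 4 are my own choices:

```python
import numpy as np

I = np.array([5, 4, 2, 3, 7, 4, 6, 5, 3, 6], dtype=float)

# 3-point moving average; 'valid' keeps only positions with a full window
J = np.convolve(I, np.ones(3) / 3, mode="valid")

# J[2] covers I(3), I(4), I(5) in the slide's 1-based indexing
print(J[2])  # 4.0
```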
3 x 3 box (averaging) filter mask:
1/9 x
1 1 1
1 1 1
1 1 1

3 x 3 weighted-average filter mask:
1/16 x
1 2 1
2 4 2
1 2 1
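A hedged sketch of applying both masks with scipy; the random placeholder image and the "reflect" border handling are assumptions, not from the slides:

```python
import numpy as np
from scipy.ndimage import convolve

box = np.ones((3, 3)) / 9.0                      # 3x3 box (averaging) mask
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0          # 3x3 weighted-average mask

img = np.random.rand(64, 64)                     # placeholder greyscale image
smooth_box = convolve(img, box, mode="reflect")
smooth_weighted = convolve(img, weighted, mode="reflect")
```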
Filtering continues
Smoothing filters are used for blurring and for noise reduction.
Blurring is used in preprocessing tasks such as removal of small details and bridging small gaps in lines or curves.
Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
The output of a smoothing linear spatial filter is the average of the pixels contained in the neighbourhood of the filter mask.
These are also called averaging filters or lowpass filters.
Averaging filters blur edges, since edge pixels are averaged with their neighbours.
An averaging filter whose coefficients are all equal is called a box filter.
Averaging and Median Filter
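This slide presumably showed result images; as a rough illustration, the following scipy sketch compares a 3 x 3 averaging filter with a 3 x 3 median filter on salt-and-pepper noise (the noise levels and flat test image are my own choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

img = np.full((64, 64), 128, dtype=float)        # flat grey test image

# add salt-and-pepper noise
rng = np.random.default_rng(0)
noisy = img.copy()
mask = rng.random(img.shape)
noisy[mask < 0.05] = 0
noisy[mask > 0.95] = 255

mean_out = uniform_filter(noisy, size=3)         # 3x3 averaging: impulses get smeared out
median_out = median_filter(noisy, size=3)        # 3x3 median: impulses are removed outright

print(np.abs(mean_out - img).mean(), np.abs(median_out - img).mean())
```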
Laplacian filter masks (each pair differs only in sign):

0  1  0        0 -1  0
1 -4  1       -1  4 -1
0  1  0        0 -1  0

1  1  1       -1 -1 -1
1 -8  1       -1  8 -1
1  1  1       -1 -1 -1
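A minimal sketch of applying the first Laplacian mask and using it for sharpening; the placeholder image and border mode are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

img = np.random.rand(64, 64)                       # placeholder greyscale image
edges = convolve(img, laplacian, mode="reflect")   # Laplacian (second-derivative) response

# Sharpening: subtract the Laplacian when the centre coefficient is negative
sharpened = img - edges
```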
The formula for the 2-dimensional inverse discrete Fourier transform is:

f(x,y) = (1/MN) Σu Σv F(u,v) exp(j2π(ux/M + vy/N)), for x = 0, ..., M-1 and y = 0, ..., N-1

The discrete Fourier transform is actually the sampled Fourier transform, so it contains a set of samples that represents the image.
In the above formula f(x,y) denotes the image and F(u,v) denotes its discrete Fourier transform.
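For illustration, numpy's FFT routines implement this transform pair; the snippet below simply checks that the inverse DFT recovers the original image (the small random test image is a stand-in):

```python
import numpy as np

img = np.random.rand(8, 8)               # small test image f(x, y)

F = np.fft.fft2(img)                     # forward DFT F(u, v)
recovered = np.fft.ifft2(F)              # inverse DFT (numpy applies the 1/MN factor here)

print(np.allclose(img, recovered.real))  # True: the image is recovered exactly
```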
Frequency Filters
Frequency filters process an image in the frequency domain.
The image is Fourier transformed, multiplied by the filter function, and then transformed back into the spatial domain.
Attenuating high frequencies results in a smoother image in the spatial domain.
Attenuating low frequencies enhances the edges.
G(k,l) = F(k,l) x H(k,l)
where F is the input image in the Fourier domain, H is the filter function, and G is the filtered image.
The inverse Fourier transform is then applied to G to obtain the filtered image in the spatial domain.
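A minimal numpy sketch of this pipeline; the Gaussian lowpass transfer function, its cutoff d0, and the placeholder image are my own assumptions:

```python
import numpy as np

def gaussian_lowpass(shape, d0):
    """Gaussian lowpass transfer function H(u, v) with cutoff d0, centred on the DC term."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    d2 = U**2 + V**2
    return np.exp(-d2 / (2.0 * d0**2))

img = np.random.rand(128, 128)

F = np.fft.fftshift(np.fft.fft2(img))                # Fourier transform, DC moved to the centre
H = gaussian_lowpass(img.shape, d0=20)
G = F * H                                            # G(k, l) = F(k, l) x H(k, l)
smoothed = np.fft.ifft2(np.fft.ifftshift(G)).real    # back to the spatial domain
```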
Frequency Filters continues
There are basically three different kinds of filters: lowpass, highpass, and bandpass filters.
A lowpass filter attenuates high frequencies and retains low frequencies unchanged; the result in the spatial domain is equivalent to that of a smoothing filter.
A highpass filter, on the other hand, yields edge enhancement or edge detection in the spatial domain, because edges contain many high frequencies.
Frequency Filters continues
A bandpass filter attenuates very low and very high frequencies but retains a middle band of frequencies. Bandpass filtering can be used to enhance edges (suppressing low frequencies) while reducing the noise at the same time (attenuating high frequencies).
The drawback of a sharp (ideal) lowpass filter function is a ringing effect that occurs along the edges of the filtered spatial domain image.
Better results can be achieved with a Gaussian-shaped filter function. The advantage is that the Gaussian has the same shape in the spatial and Fourier domains.
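To see the ringing effect, the sketch below applies an ideal (sharp cutoff) lowpass filter to a step edge; the test image, cutoff value, and printed slice are my own choices:

```python
import numpy as np

def ideal_lowpass(shape, d0):
    """Ideal lowpass: pass frequencies within distance d0 of the centre, cut the rest."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    return (np.sqrt(U**2 + V**2) <= d0).astype(float)

# A step edge: a sharp cutoff in frequency produces ripples (ringing) near the edge
img = np.zeros((128, 128))
img[:, 64:] = 1.0

F = np.fft.fftshift(np.fft.fft2(img))
out = np.fft.ifft2(np.fft.ifftshift(F * ideal_lowpass(img.shape, 15))).real
print(out[64, 50:60].round(2))   # oscillations around 0 reveal the ringing
```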
A commonly used discrete approximation to the Gaussian is the Butterworth filter.
Frequency Filters continues
One difference is that the computational cost of the spatial Gaussian filter increases with the standard deviation (i.e., with the size of the filter kernel), whereas the cost of a frequency-domain filter is independent of the filter function.
Hence the spatial Gaussian filter is more appropriate for narrow lowpass filters, while the Butterworth filter is a better implementation for wide lowpass filters.
Butterworth Lowpass Filters
Introduced in 1930 by the British engineer and physicist Stephen Butterworth, it is a maximally flat magnitude filter.
Successively closer approximations to the ideal response are obtained with increasing numbers of filter elements of the right values.
The basic Butterworth low-pass filter can be modified to give low-pass, high-pass, band-pass and band-stop functionality.
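In image processing the Butterworth lowpass transfer function is usually written H(u,v) = 1 / (1 + (D(u,v)/D0)^(2n)), where D is the distance from the centre of the frequency plane, D0 the cutoff and n the order. Below is a sketch of building and applying such a mask; the parameter values and placeholder image are assumptions:

```python
import numpy as np

def butterworth_lowpass(shape, d0, n):
    """Butterworth lowpass H(u,v) = 1 / (1 + (D/d0)^(2n)),
    where D is the distance from the centre of the (shifted) frequency plane."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

img = np.random.rand(128, 128)
F = np.fft.fftshift(np.fft.fft2(img))
H = butterworth_lowpass(img.shape, d0=30, n=2)   # order-2 Butterworth, cutoff 30
smoothed = np.fft.ifft2(np.fft.ifftshift(F * H)).real
```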
Gaussian Filters
A Gaussian filter is a filter whose impulse response is a Gaussian function.
Gaussian filters have the property of having no overshoot to a step function input while minimizing the rise and fall time.
The focal element receives the heaviest weight (having the highest Gaussian value) and neighbouring elements receive smaller weights as their distance to the focal element increases.
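A small sketch constructing such a spatial Gaussian mask, so the centre (focal) weight is visibly the largest; the kernel size and sigma are my own choices:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Spatial Gaussian mask: the focal (centre) element gets the largest weight,
    and the weights fall off with distance from the centre."""
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax)
    kernel = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()          # normalise so the weights sum to 1

print(gaussian_kernel(3, sigma=1.0).round(3))
```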
Gaussian Filters Example
Homomorphic Filtering
Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain.
This concept was developed in the 1960s by Thomas Stockham, Alan V. Oppenheim, and Ronald W. Schafer at MIT.
Homomorphic filtering is most commonly used for correcting non-uniform illumination in images.
Homomorphic Filtering cont
Homomorphic filter is used for image enhancement.
It simultaneously normalizes the brightness and increases contrast.
Homomorphic filtering is used to remove multiplicative noise.
Multiplicative noise refers to an unwanted random signal that gets multiplied into some relevant signal during capture, transmission, or other processing.
Examples include dark spots caused by dust in the lens or on the image sensor, and variations in the gain of individual elements of the image sensor array.
Homomorphic Filtering cont
Illumination and reflectance are not separable, but their approximate locations in the frequency domain may be identified.
Since illumination and reflectance combine multiplicatively, the components are made additive by taking the logarithm of the image intensity, so that these multiplicative components of the image can be separated linearly in the frequency domain.
Illumination variations can be thought of as multiplicative noise, and can be reduced by filtering in the log domain.
Homomorphic Filtering cont
To make the illumination of an image more even, the high-frequency components are increased and the low-frequency components are decreased,
because the high-frequency components are assumed to represent mostly the reflectance in the scene,
whereas the low-frequency components are assumed to represent mostly the illumination in the scene.
That is, high-pass filtering is used to suppress low frequencies and amplify high frequencies, in the log-intensity domain.
Homomorphic Filtering cont
I(x,y) = L(x,y) R(x,y), where I is the image, L is the scene illumination, and R is the scene reflectance.
Illumination typically varies slowly across the image compared to reflectance, which can change quite abruptly at object edges.
In homomorphic filtering we first transform the multiplicative components to additive components by moving to the log domain:
ln(I(x,y)) = ln(L(x,y) R(x,y))
ln(I(x,y)) = ln(L(x,y)) + ln(R(x,y))
Homomorphic Filtering cont
Then we use a high-pass filter in the log domain to remove the low-frequency illumination component while preserving the high-frequency reflectance component.
The basic steps in homomorphic filtering are: log transform, Fourier transform, high-pass filtering, inverse Fourier transform, and exponentiation.
In this image the background illumination changes gradually from the top-left corner to the bottom-right corner of the image.
Let's use homomorphic filtering to correct this non-uniform illumination.
The first step is to convert the input image to the log domain.
The next step is Gaussian high-pass filtering: we high-pass filter the log-transformed image in the frequency domain.
We compute the FFT of the log-transformed image with zero-padding.
Then we apply the high-pass filter and compute the inverse FFT.
We crop the image back to the original unpadded size.
The last step is to apply the exponential function to invert the log transform and get the homomorphic filtered image.
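A hedged end-to-end sketch of these steps in numpy; the Gaussian high-frequency-emphasis transfer function and the gain and cutoff values (gamma_l, gamma_h, d0) are assumptions rather than values from the slides:

```python
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_l=0.5, gamma_h=2.0):
    """Sketch of homomorphic filtering: boost high frequencies (reflectance) and
    attenuate low frequencies (illumination) in the log domain.
    gamma_l < 1 and gamma_h > 1 are assumed gains, d0 an assumed cutoff."""
    rows, cols = img.shape
    log_img = np.log1p(img.astype(float))              # step 1: log transform

    # step 2: zero-padded FFT of the log image
    P, Q = 2 * rows, 2 * cols
    F = np.fft.fftshift(np.fft.fft2(log_img, s=(P, Q)))

    # Gaussian high-frequency-emphasis transfer function
    u = np.arange(P) - P // 2
    v = np.arange(Q) - Q // 2
    V, U = np.meshgrid(v, u)
    D2 = U**2 + V**2
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0**2)))

    # steps 3-4: apply the filter, inverse FFT, crop back to the original size
    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real[:rows, :cols]

    return np.expm1(filtered)                          # step 5: invert the log transform

out = homomorphic_filter(np.random.rand(64, 64) * 255)
```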
Fast Fourier Transform
The fast Fourier transform (FFT) is a discrete Fourier transform algorithm which reduces the number of computations from O(N^2) to O(N lg N), where lg is the base-2 logarithm; optimised higher-radix (base-4 and base-8) FFTs can be a further 20-30% faster than base-2 FFTs.
FFTs were first discussed by Cooley and Tukey (1965), although Gauss had actually described the critical factorization step as early as 1805 (Bergland 1969, Strang 1993).
FFT continues
The first stage breaks the 16 point signal into two signals each consisting of 8 points.
The second stage decomposes the data into four signals of 4 points.
This pattern continues until there are N signals composed of a single point.
An interlaced decomposition is used each time a signal is broken in two, that is, the signal is separated into its even and odd numbered samples.
The best way to understand this is by inspecting Fig. 12-2 until you grasp the pattern.
There are log2(N) stages required in this decomposition, i.e., a 16 point signal (2^4) requires 4 stages, a 512 point signal (2^9) requires 9 stages, a 4096 point signal (2^12) requires 12 stages, etc.
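A small sketch of the interlaced (even/odd) decomposition for a 16 point signal, printing the number of sub-signals after each stage:

```python
import numpy as np

def decompose(signal):
    """Interlaced (even/odd) decomposition used at each FFT stage."""
    return signal[0::2], signal[1::2]      # even-numbered and odd-numbered samples

x = np.arange(16)                           # a 16 point signal: log2(16) = 4 stages
stage = [x]
while len(stage[0]) > 1:
    stage = [half for s in stage for half in decompose(s)]
    print(len(stage), "signals of", len(stage[0]), "point(s)")
```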
FFT continues
On the left, the sample numbers of the original signal are listed along with their binary equivalents.
On the right, the rearranged sample numbers are listed, also along with their binary equivalents.
The important idea is that the binary numbers are the reversals of each other.
For example, sample 3 (0011) is exchanged with sample number 12 (1100).
Likewise, sample number 14 (1110) is swapped with sample number 7 (0111), and so forth.
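A small sketch of this bit-reversal sorting for a 16 point (4-bit) signal, reproducing the swaps mentioned above:

```python
def bit_reverse(i, bits):
    """Reverse the binary representation of index i using the given number of bits."""
    result = 0
    for _ in range(bits):
        result = (result << 1) | (i & 1)
        i >>= 1
    return result

# 16 point signal -> 4-bit indices; sample 3 (0011) swaps with 12 (1100), 14 with 7, etc.
for i in range(16):
    print(i, format(i, "04b"), "->", bit_reverse(i, 4))
```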