Module 3
Image Enhancement in Spatial and Frequency Domain
Image Enhancement
➢Enhancement is generally one of the preprocessing methods used on an
image so that it is more suitable for further processing.
➢Selection of the enhancement technique for an image depends on the application area in
which that image is used.
➢Another issue related to enhancement is that evaluating the improvement in image
quality produced by an enhancement process is very subjective and hard to
standardize: an image that is good in one person’s opinion may not be good in
another person’s opinion.
Image Enhancement Examples
[Example images taken from Gonzalez & Woods, Digital Image Processing (2002)]
Spatial & Frequency Domains
There are two broad categories of image enhancement
techniques
• Spatial domain techniques
In spatial domain methods, pixel values are manipulated directly to
obtain an enhanced image.
• Frequency domain techniques
In frequency domain methods, the Fourier transform of the image is computed first to
convert the image into the frequency domain. The transform is then manipulated, and the
modified spectrum is transformed back to the spatial domain to view the enhanced image.
Contd..
➢Neighborhood Processing (Spatial Filtering):
• In neighborhood processing, the operation performed on each
pixel depends not only on its own intensity value but also on the
intensity values of its neighboring pixels.
• This means that the processing involves a local area (or
neighborhood) of pixels around each pixel being processed.
• Common operations in neighborhood processing include edge
detection, blurring, and sharpening.
Point Operations
➢Point operations are zero-memory operations in which the gray level value of
an individual pixel in the input image is mapped to the gray level of the corresponding pixel
in the output image using a transformation T().
➢In this case T() is referred to as a grey level transformation function or a point
processing operation
➢Point processing operations take the form
s=T(r)
➢where s refers to the processed image pixel value and r refers to the original
image pixel value.
Point Operations Example:
Negative Images
The negative of an image with intensity levels in the range [0 ,L-1] is
obtained by using the expression
s=L-1-r
where s is the transformed pixel value,
r is the original pixel value, and
L is the number of possible gray levels in the image (for an n-bit
image, the number of possible gray levels is L = 2^n, and the intensities lie in the range
[0, L-1]).
➢Reversing the intensity levels of an image in this manner produces
the equivalent of a photographic negative.
[Figure: original image and its negative]
Negative Images (contd..)
The negative transformation function can be represented as
➢ Since the maximum pixel value is 255, we need a minimum of 8 bits to store one
gray level value, so the image can be considered an 8-bit image.
➢ Each pixel value ‘s’ in the transformed image can be calculated using the
equation s = (256 - 1) - r = 255 - r.
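A minimal sketch of the negative transformation in NumPy (the array values are illustrative):

import numpy as np

def negative(img, L=256):
    # s = (L - 1) - r, applied to every pixel
    return (L - 1) - img.astype(np.int32)

img = np.array([[0, 64, 128],
                [192, 255, 30]], dtype=np.uint8)
print(negative(img))    # [[255 191 127]
                        #  [ 63   0 225]]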
Negative Images (contd..)
Tutorial
Point Operations Example:
Contrast Stretching
Question
Suppose we have a grayscale image represented by
the following pixel values:
Contd..
Calculate the contrast-stretched pixel values using the below formula for
each pixel in the image
Contd..
When r=160, s=
When r=180, s=
When r= 200, s=
When r= 220, s=
When r=240, s=255
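The formula referred to above appears as a figure and is not reproduced here. A common choice, consistent with r = 240 mapping to s = 255, is min-max (linear) contrast stretching; a sketch under that assumption, with hypothetical pixel values:

import numpy as np

def minmax_stretch(img, L=256):
    # Assumed formula: s = (r - r_min) / (r_max - r_min) * (L - 1)
    r_min, r_max = img.min(), img.max()
    return (img.astype(float) - r_min) / (r_max - r_min) * (L - 1)

img = np.array([[160, 180, 200],
                [220, 240, 240]])
print(np.round(minmax_stretch(img)))    # 160 maps to 0, 240 maps to 255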
Question
Suppose we have a grayscale image represented by
the following pixel values:
Contd..
Calculate the contrast-stretched pixel values using the below formula for
each pixel in the image
Point Operations Example:
Thresholding
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Question
Contd..
Intensity-level slicing without background:
In this approach we set one value (say, white) for all the values in
the range of interest and another value (say, black) for all other
intensities. This transformation produces a binary image: for a range of
interest [A, B], the transformation function sets s = L - 1 when A ≤ r ≤ B and s = 0 otherwise.
Contd..
Intensity-level slicing with background:
This approach brightens (or darkens) the desired range of intensities
but leaves all other intensity levels in the image unchanged: for a range of
interest [A, B], the transformation function sets s = L - 1 (say) when A ≤ r ≤ B and
s = r otherwise (a sketch of both cases is given below).
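A sketch of both slicing variants in NumPy, assuming an 8-bit image and an illustrative range of interest [A, B]:

import numpy as np

def slice_without_background(img, A, B, L=256):
    # Binary output: white inside [A, B], black elsewhere
    return np.where((img >= A) & (img <= B), L - 1, 0)

def slice_with_background(img, A, B, L=256):
    # Brighten the range of interest, keep all other intensities unchanged
    return np.where((img >= A) & (img <= B), L - 1, img)

img = np.array([[10, 120], [150, 240]], dtype=np.uint8)
print(slice_without_background(img, 100, 200))   # [[  0 255]
                                                 #  [255   0]]
print(slice_with_background(img, 100, 200))      # [[ 10 255]
                                                 #  [255 240]]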
Contd..
Question 1
➢Perform intensity-level slicing with the following transformation function.
The image is
Contd..
➢Answer is
Question 2
➢Perform intensity-level slicing with the following transformation function.
The image is
Contd..
➢Answer is
University question
Point Operations Example:
BIT EXTRACTION (BIT PLANE SLICING)
➢Bit-plane slicing is a technique used in image processing to break down
the binary representation of an image’s pixel intensity values into its
individual bit planes.
➢Each bit plane represents the contribution of a specific bit (either 0 or
1) to the overall pixel intensity
Example
➢Consider the image
Contd..
➢Represent each intensity value in binary notation in 3 bits
100 011 101 010
011 110 100 110
010 010 110 101
111 110 100 001
Significance of bit plane
extraction
➢By analyzing the contribution of each bit plane to the
overall image quality, compression techniques can prioritize
encoding the most significant bit planes while potentially
discarding or compressing the less significant ones. This can
lead to more efficient compression with minimal loss of
image quality
➢We can reconstruct the image from bit plane images using
the equation
I(i, j) = Σ (n = 1 to N) 2^(n-1) · I_n(i, j)
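A sketch that extracts the bit planes of the 3-bit example above (the decimal values are obtained from the binary table) and reconstructs the image with this equation:

import numpy as np

img = np.array([[4, 3, 5, 2],
                [3, 6, 4, 6],
                [2, 2, 6, 5],
                [7, 6, 4, 1]], dtype=np.uint8)   # decimal form of the 3-bit example

N = 3                                             # number of bit planes
planes = [(img >> (n - 1)) & 1 for n in range(1, N + 1)]   # I_1 (LSB) ... I_N (MSB)

# Reconstruction: I(i, j) = sum over n of 2^(n-1) * I_n(i, j)
recon = sum((2 ** (n - 1)) * planes[n - 1] for n in range(1, N + 1))
print(np.array_equal(recon, img))                 # True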
University question
Question
➢Perform bit plane slicing in the following image
➢ANSWER
Dynamic Range Compression (cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Log functions are particularly useful when the input grey level values may have
an extremely large range of values.
In the following example, the Fourier transform of an image is put through a log
transform to reveal more detail:
s = log(1 + r)
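A sketch of the log transformation; the scaling constant c below is an assumption, chosen so that the output fills [0, 255] for display (the form s = log(1 + r) above omits it):

import numpy as np

def log_transform(img):
    # s = c * log(1 + r), with c chosen so the maximum input maps to 255
    img = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + img.max())
    return c * np.log(1.0 + img)

# Typical use: compress the dynamic range of a Fourier spectrum before display
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(np.random.rand(64, 64))))
display = log_transform(spectrum)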
Limitations of Point Operations
➢Global Adjustments:
Point operations often apply global adjustments to the entire
image, which may not be suitable for images with diverse content and
lighting conditions.
➢Sensitivity to Outliers:
Some point operations, especially those involving linear
transformations, can be sensitive to outliers or extreme pixel values.
➢Limited Spatial Information:
Point operations do not consider spatial relationships between
pixels, which may limit their effectiveness in certain tasks that require
contextual information.
Contd..
➢Loss of Information:
Aggressive contrast adjustments or thresholding can lead to a loss
of subtle details or information in the image.
[Figure: a neighbourhood about a point (x, y) in an image f(x, y)]
Simple Neighbourhood
Operations
Some simple neighbourhood operations include:
◦ Average: Set the pixel value to the average of the neighbourhood
◦ Min: Set the pixel value to the minimum in the
neighbourhood
◦ Max: Set the pixel value to the maximum in the
neighbourhood
◦ Median: The median value of a set of numbers is the
midpoint value in that set (e.g. from the set [1, 7, 15, 18,
24] 15 is the median). Sometimes the median works
better than the average
Neighborhood operations in the spatial domain are mainly
implemented using the concept of filters.
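A minimal sketch of these four operations for a single 3*3 neighbourhood (the values are illustrative):

import numpy as np

nbhd = np.array([[1, 7, 15],
                 [18, 24, 5],
                 [3, 9, 11]])       # a 3x3 neighbourhood around the centre pixel

print(nbhd.mean())                   # average filter: new value of the centre pixel
print(nbhd.min(), nbhd.max())        # min filter / max filter
print(np.median(nbhd))               # median filter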
What is a Spatial Filter?
➢ A spatial filter consists of
(1) a neighborhood, (typically a small rectangle)
(2) a predefined operation that is performed on the image pixels encompassed by the
neighborhood.
Contd..
➢ If the operation performed on the image pixels is
linear, then the filter is called a linear spatial filter.
Otherwise, the filter is nonlinear
Filter Parameters
❖ Many possible filter parameters (size, weights,
function, etc)
• Filter size (size of neighbourhood):
Mechanics of linear spatial filtering
The equation for linear spatial filtering can be
written as follows.
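The equation itself appears as a figure in the original slides; the standard form (as in Gonzalez & Woods), for a mask w of size m*n with m = 2a + 1 and n = 2b + 1, is

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) · f(x + s, y + t)

i.e. the new value at (x, y) is the sum of products of the mask coefficients and the image pixels under the mask.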
Linear spatial filtering
(contd..)
➢The steps for performing linear spatial
filtering can be summarized as below (a sketch follows the list):
i. Generate the proper filter mask (explained in the next slide).
ii. Compute the new value of each image pixel as the sum of products of the filter
coefficients and the image pixel values, with the center of the filter
aligned with the corresponding image pixel.
iii. While applying the operation to the image, parts of the filter may not overlap
the image at the boundary pixels. To avoid this, the image should be
properly padded at the boundaries before applying the filter.
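A minimal sketch of these steps for a 3*3 averaging mask, using zero padding and the sum-of-products (correlation) form; the function and variable names are illustrative:

import numpy as np

def linear_filter(img, mask):
    # Step (i): the mask is supplied by the caller (e.g. a 3x3 averaging mask)
    m, n = mask.shape
    a, b = m // 2, n // 2
    # Step (iii): pad the image so the mask fits at the boundary pixels
    padded = np.pad(img.astype(float), ((a, a), (b, b)), mode='constant')
    out = np.zeros(img.shape, dtype=float)
    # Step (ii): sum of products with the mask centred on each pixel
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            out[x, y] = np.sum(mask * padded[x:x + m, y:y + n])
    return out

avg_mask = np.ones((3, 3)) / 9.0
img = np.arange(25, dtype=float).reshape(5, 5)
print(linear_filter(img, avg_mask))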
Contd..
➢Padding can be performed in several ways
(i) Zero padding: In zero padding, extra pixels are
added around the input image, and these pixels are typically
assigned a value of zero.
Contd..
(ii)Pixel Replication: Replicate padding involves copying the border
pixels of the input image to create the padding.
Contd..
iii) Mirror Padding: Mirror padding, also known as reflective padding,
mirrors the pixels of the input image along the edges to create padding
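The three padding schemes map directly onto NumPy's pad modes; a small sketch:

import numpy as np

img = np.array([[1, 2],
                [3, 4]])

print(np.pad(img, 1, mode='constant'))   # (i)   zero padding
print(np.pad(img, 1, mode='edge'))       # (ii)  pixel replication
print(np.pad(img, 1, mode='reflect'))    # (iii) mirror (reflective) padding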
Generating Spatial Filter Masks
➢Generating a linear spatial filter of size m*n requires that we
specify mn mask coefficients.
➢In turn, these coefficients are selected based on what the filter is
supposed to do, keeping in mind that all we can do with linear
filtering is to implement a sum of products.
➢For example, suppose that we want to replace the pixels in an
image by the average intensity of a 3*3 neighborhood centered on
those pixels. The average value at any location (x, y) in the image
is the sum of the nine intensity values in the neighborhood
centered on (x, y), divided by 9. Therefore, a linear filtering
operation with a mask whose coefficients are all 1/9 implements the
desired averaging.
SMOOTHING
SPATIAL FILTERS
Smoothing Linear Spatial Filters/Low
Pass Filters/Averaging filters
➢Smoothing filters are used for blurring and for noise reduction
➢Smoothing linear spatial filters simply take the average of all of the
pixels in a neighbourhood around a central value, so these filters are
known as averaging filters.
➢Replacing the value of every pixel in an image by the average of the
intensity levels in the neighborhood defined by the filter mask
results in an image with reduced “sharp” transitions in
intensities. Because random noise typically consists of sharp transitions
in intensity levels, smoothing reduces noise.
➢However, edges (which almost always are desirable features of an
image) also are characterized by sharp intensity transitions, so
averaging filters have the undesirable side effect that they blur edges
Contd..
➢Another use of averaging filters is in the reduction of “irrelevant” detail in
an image. By “irrelevant” we mean pixel regions that are small with respect
to the size of the filter mask
[Figure: the image at the top left is an original image of size 500*500 pixels;
the subsequent images show the image after filtering with averaging filters
of increasing size: 3, 5, 9, 15 and 35]
Contd..
Types of averaging filter masks
(i) Standard averaging (box filter) mask:
1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
Contd..
➢ To make it computationally efficient, the filtering mask can
be represented as the constant 1/9 multiplying a 3*3 mask of all 1s.
Contd..
(ii) Weighted averaging filter
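The mask itself appears as a figure; a commonly used 3*3 weighted-averaging mask (the one given in Gonzalez & Woods), which weights the centre pixel most heavily, is

1/16 ×   1  2  1
         2  4  2
         1  2  1

The weights sum to 16, so dividing by 16 keeps the overall intensity unchanged.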
Contd..
(iii) Directional smoothing
➢To prevent edges from blurring while smoothing, directional
smoothing can be used.
➢Here spatial averages are computed in several
directions.
Example
➢ Consider the following image segment. Perform spatial
averaging using a 3*3 mask.
University Question
Order-Statistic (Nonlinear)
Smoothing Filters
➢Order-statistic filters are nonlinear spatial filters whose response is
based on ordering (ranking) the pixels contained in the image area
encompassed by the filter, and then replacing the value of the center
pixel with the value determined by the ranking result
➢Examples are
(i) Median filter
(ii) Min filter
(iii) Max filter
Median filters
➢It replaces the value of a pixel by the median of the intensity values in
the neighborhood of that pixel (the original value of the pixel is included
in the computation of the median).
➢Median filters are quite popular because, for certain types of random
noise, they provide excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing filters of similar size.
Median filters are particularly effective in the presence of impulse noise,
also called salt-and-pepper noise because of its appearance as white
and black dots superimposed on an image
Contd..
➢The median of a set of values is such that half the values in the set are
less than or equal to the median value and half are greater than or equal
to the median value.
➢ In order to perform median filtering at a point in an image, we first sort
the values of the pixels in the neighborhood, determine their median, and
assign that value to the corresponding pixel in the filtered image.
Contd..
➢ In general, median filtering is much better suited than averaging
for the removal of salt-and-pepper noise. This is illustrated in the figure.
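A small sketch (with illustrative values) of why the median handles an impulse better than the average:

import numpy as np

# A 3x3 neighbourhood whose centre has been corrupted by salt noise (255)
nbhd = np.array([[20, 22, 21],
                 [19, 255, 23],
                 [22, 20, 21]])

print(nbhd.mean())       # the average is pulled far above the true local level
print(np.median(nbhd))   # the median stays close to the uncorrupted values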
Min and Max Filters
➢MIN filter replaces the value of a pixel by the
minimum of the intensity values in the
neighborhood of that pixel.
➢MAX filter similarly replaces the value of a pixel by the
maximum of the intensity values in its neighborhood.
Tutorial
➢ Consider the following image. Apply the following smoothing filters of size 3*3
at image position (2,2) and find the resultant pixel value
(i) Averaging
(ii) Weighted averaging
(iii) Median filter
(iv) Min filter
(v) Max filter
Contd..
➢ Therefore, linear spatial filtering can be
considered as a correlation or convolution operation
of the filter mask with every pixel in the image.
➢While applying a correlation or convolution
operation to an image, parts of the filter may not
overlap the image at the boundary pixels. To
avoid this, the image should be properly padded
with zeros at the boundaries before applying the filter.
Convolution and Correlation of 2D Signal
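The worked example for this slide is not reproduced here; a minimal sketch of the relationship between the two operations, using scipy.ndimage (an illustrative choice of library), is:

import numpy as np
from scipy.ndimage import correlate, convolve

img = np.arange(25, dtype=float).reshape(5, 5)
w = np.array([[0., 1., 0.],
              [1., 2., 1.],
              [0., 0., 0.]])      # an asymmetric mask makes the difference visible

corr = correlate(img, w, mode='constant')
conv = convolve(img, w, mode='constant')
flipped = correlate(img, np.flip(w), mode='constant')   # mask rotated by 180 degrees

print(np.allclose(conv, flipped))   # True: convolution = correlation with the flipped mask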
Sharpening Spatial Filters
➢The principal objective of sharpening is to highlight
transitions in intensity .
➢We saw that image blurring could be accomplished in the
spatial domain by pixel averaging in a neighborhood. Because
averaging is analogous to integration, it is logical to conclude
that sharpening can be accomplished by spatial differentiation.
➢That is, sharpening spatial filters exploit the characteristic of the
differentiation operation.
➢Main application is highlighting the edges of an image
➢Known as High pass filters
➢A process that has been used for many years by the printing
and publishing industry to sharpen images consists of
subtracting an unsharp (smoothed) version of an image from the
original image. This process is called unsharp masking.
➢It is also known as the edge crispening process.
➢ It consists of the following steps:
1. Blur the original image.
2. Subtract the blurred image from the original (the resulting
difference is called the mask.)
3. Add the mask to the original.
Mathematical Expression of unsharp
masking
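The expression itself appears as a figure; the standard formulation (as given in Gonzalez & Woods) is

g_mask(x, y) = f(x, y) - f_blurred(x, y)
g(x, y) = f(x, y) + k · g_mask(x, y)

where f_blurred is the smoothed version of the input f, k = 1 gives unsharp masking, and k > 1 gives highboost filtering.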
Illustration of mechanics of
unsharp masking
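A minimal sketch of the three steps, using a simple 3*3 box blur for the smoothing step (the choice of blur and of k is an assumption):

import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(img, k=1.0):
    img = img.astype(float)
    blurred = uniform_filter(img, size=3)   # 1. blur the original image
    mask = img - blurred                    # 2. subtract the blurred image from the original
    return img + k * mask                   # 3. add the mask back (k > 1 gives highboost)

img = np.random.randint(0, 256, (8, 8))
sharpened = unsharp_mask(img, k=1.0)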
Example
Image Enhancement in
Frequency domain
Convolution in the spatial domain is equivalent to multiplication in
frequency domain
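A quick numerical check of this statement (a sketch, not part of the original slides), using circular convolution so that it matches the DFT exactly:

import numpy as np

f = np.random.rand(8, 8)
h = np.random.rand(8, 8)

# Route 1: multiply the two DFTs elementwise and transform back
via_dft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

# Route 2: circular convolution computed directly in the spatial domain
direct = np.zeros_like(f)
for x in range(8):
    for y in range(8):
        for s in range(8):
            for t in range(8):
                direct[x, y] += f[s, t] * h[(x - s) % 8, (y - t) % 8]

print(np.allclose(via_dft, direct))   # True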
➢We begin by observing in Eq. (4.5-15) that each term of F(u, v) contains all values of
f(x, y), modified by the values of the exponential terms. Thus, with the exception of
trivial cases, it usually is impossible to make direct associations between specific
components of an image and its transform.
Contd..
➢However, some general statements can be made about the relationship
between the frequency components of the Fourier transform and spatial
features of an image.
➢ For instance, because frequency is directly related to spatial rates of
change, it is not difficult intuitively to associate frequencies in the
Fourier transform with patterns of intensity variations in an image.
➢The slowest varying frequency component(u=0,v=0) is proportional to
the average intensity of an image.
➢As we move away from the origin of the transform, the low frequencies
correspond to the slowly varying intensity components of an image. In
an image of a room, for example, these might correspond to smooth
intensity variations on the walls and floor.
Contd..
➢As we move further away from the origin, the higher frequencies begin
to correspond to faster and faster intensity changes in the image. These
are the edges of objects and other components of an image characterized
by abrupt changes in intensity.
The basic steps of image enhancement in
frequency domain are
[Block diagram: the transform F(u, v) of the input image is multiplied elementwise by the
filter function H(u, v); images taken from Gonzalez & Woods, Digital Image Processing (2002)]
Contd..
Step 3: Compute the DFT of the image fp(x, y). Suppose the
DFT is represented as F(u, v).
Step 4: Generate a real, symmetric filter function, H(u, v), of
size P*Q with center at coordinates (P/2, Q/2).
Step 5: Form the product G(u, v) by multiplying F(u, v) by
H(u, v) using array (elementwise) multiplication (the basic theory is that the
product of two functions in the frequency domain corresponds to
convolution in the spatial domain).
Contd..
Step 6: Compute the inverse DFT of G(u, v), take its real part,
and multiply the result by (-1)^(x+y), so that we obtain the
processed image in the spatial domain.
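A sketch of the whole procedure in NumPy, assuming the standard preliminary steps (pad the M*N image to size P*Q = 2M*2N and multiply by (-1)^(x+y) to centre the transform) and using a Gaussian lowpass H(u, v) as an illustrative filter:

import numpy as np

def frequency_filter(f, D0=30.0):
    M, N = f.shape
    P, Q = 2 * M, 2 * N                          # pad to size P x Q
    fp = np.zeros((P, Q))
    fp[:M, :N] = f                               # padded image fp(x, y)

    x, y = np.meshgrid(np.arange(P), np.arange(Q), indexing='ij')
    fp = fp * (-1.0) ** (x + y)                  # centre the transform

    F = np.fft.fft2(fp)                          # Step 3: DFT of fp(x, y)

    D2 = (x - P / 2) ** 2 + (y - Q / 2) ** 2     # squared distance from (P/2, Q/2)
    H = np.exp(-D2 / (2 * D0 ** 2))              # Step 4: real, symmetric filter H(u, v)

    G = F * H                                    # Step 5: elementwise product

    g = np.real(np.fft.ifft2(G)) * (-1.0) ** (x + y)   # Step 6: real part, undo centering
    return g[:M, :N]                             # crop back to the original size

img = np.random.rand(64, 64)
smoothed = frequency_filter(img, D0=20.0)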
Correspondence Between Filtering in the
Spatial and Frequency Domains
Image Smoothing Using Frequency
Domain Filters
Smoothing Frequency Domain Filters
➢Here the filters are constructed in such a way that
multiplying the filter with the frequency-domain
representation of the image produces the desired result.
➢We consider three types of lowpass filters in
frequency domain:
(i) Ideal lowpass filter
(ii) Butterworth lowpass filter
(iii) Gaussian lowpass filter
A 2-D lowpass filter that passes without attenuation all frequencies within a
circle of radius D0 from the origin and “cuts off” all frequencies outside this
circle is called an ideal lowpass filter (ILPF). D0 is the cut off frequency
H(u, v) = 1   if D(u, v) ≤ D0
H(u, v) = 0   if D(u, v) > D0
Ideal Low Pass Filter (cont…)
The visual representation of the ideal lowpass filter can be given as:
[Figure: an image, its Fourier spectrum, and a series of ideal lowpass
filters of radius 5, 15, 30, 80 and 230 superimposed on top of it]
Ideal Low Pass Filter (cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: original image and the results of ideal lowpass filtering with
filters of radius 5, 80 and 230]
A Butterworth lowpass filter (BLPF) of order n, with cutoff frequency at a distance D0
from the origin, is defined as
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))
Butterworth Lowpass Filter
(cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: original image and the results of Butterworth lowpass filtering
of order 2 with cutoff radius 5, 80 and 230]
A Gaussian lowpass filter (GLPF) with cutoff frequency D0 is defined as
H(u, v) = e^(-D²(u, v) / (2·D0²))
Gaussian Lowpass Filters
(cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: original image; result of filtering with a Gaussian filter with cutoff
radius 5; results of filtering with an ideal lowpass filter of radius 15, a
Butterworth filter of order 2 and cutoff radius 15, and a Gaussian filter with
cutoff radius 15]
Lowpass Filtering Examples
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Lowpass Filtering Examples
(cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
COMPARISON BETWEEN LOW PASS FILTERS
Sharpening in the Frequency Domain
➢In the frequency domain representation of an image, low
frequency components correspond to areas of slow intensity
change (smooth areas) and high frequency components
correspond to areas of sharp intensity change (edges).
Contd..
➢Highpass filters are precisely the reverse of
lowpass filters, so:
Hhp(u, v) = 1 - Hlp(u, v)
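A sketch that builds the three lowpass transfer functions defined earlier and their highpass complements via Hhp(u, v) = 1 - Hlp(u, v); the size and cutoff values are illustrative:

import numpy as np

def distance_grid(P, Q):
    u, v = np.meshgrid(np.arange(P), np.arange(Q), indexing='ij')
    return np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)   # D(u, v)

def ideal_lp(D, D0):
    return (D <= D0).astype(float)

def butterworth_lp(D, D0, n=2):
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

def gaussian_lp(D, D0):
    return np.exp(-(D ** 2) / (2.0 * D0 ** 2))

D = distance_grid(256, 256)
for H_lp in (ideal_lp(D, 30), butterworth_lp(D, 30), gaussian_lp(D, 30)):
    H_hp = 1.0 - H_lp       # the corresponding highpass filter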
Sharpening Frequency Domain
Filters
➢Here the filters are constructed in such a way that
multiplying the filter with the frequency-domain
representation of the image produces the desired result.
➢We consider three types of highpass filters in
frequency domain:
(i) Ideal highpass filter
(ii) Butterworth highpass filter
(iii) Gaussian highpass filter
Ideal High Pass Filters (cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Butterworth High Pass Filters (cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: results of Butterworth highpass filtering of order 2 with
D0 = 15 and D0 = 80]
Gaussian High Pass Filters
(cont…)
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: results of Gaussian highpass filtering with D0 = 15 and D0 = 80]
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
[Figure: original image and the corresponding high-frequency emphasis result]