
DIGITAL IMAGE PROCESSING

18CS741
[As per Choice Based Credit System (CBCS) scheme]
(Effective from the academic year 2018-2019)
SEMESTER – VII
MODULE 2
Notes

Prepared By
Athmaranjan K
Associate Professor
Dept. of Information Science & Eng.
Srinivas Institute of Technology, Mangaluru



DIGITAL IMAGE PROCESSING 18CS741 Module 2

MODULE 2
SYLLABUS
Image Enhancement In The Spatial Domain: Some Basic Gray Level Transformations, Histogram
Processing, Enhancement Using Arithmetic/Logic Operations, Basics of Spatial Filtering, Smoothing Spatial
Filters, Sharpening Spatial Filters, Combining Spatial Enhancement Methods.
Textbook 1: Ch.3

Textbook 1: Rafael C. Gonzalez, Richard E. Woods, and Steven L. Eddins, Digital Image Processing, Prentice Hall, 2nd edition, 2008

ATHMARANJAN K DEPT OF ISE, SRINIVAS INSTITUTE OF TECHNOLOGY MANGALURU Page 2



IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN


What is Image Enhancement?
It is the process of improving the quality of an image for a specific application.
Why is Image Enhancement required?
• To highlight the important details in an image.
• To remove noise.
• To make an image look more appealing.
Image enhancement approaches fall into two broad categories:
1. Spatial domain methods
2. Frequency domain methods.
The term spatial domain refers to the image plane itself, and approaches in this category are based on direct
manipulation of pixels in an image.
Frequency domain processing techniques are based on modifying the Fourier transform of an image. In this method the image is converted from the spatial domain to the frequency domain using the Fourier transform, processed there, and then converted back to the spatial domain.
IMAGE ENHANCEMENT IN SPATIAL DOMAIN:
Enhancing an image provides better contrast and more visible detail compared to the non-enhanced image. Image enhancement has many useful applications: it is used to enhance medical images, images captured in remote sensing, satellite images, etc. As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression:
g(x, y) = T[f(x, y)]
Where f(x, y) is the input image, g(x, y) is the output image and T is an operator on f defined over some neighbourhood of (x, y). T can be applied to a set of images or to a single image.
Basic Implementation of this Equation on a single image is shown in below Figure:
• Operator T is applied on a single image.
• (x, y) is an arbitrary location in the image, and the region around this pixel is known as the neighbourhood of (x, y).
• A spatial domain process consists of moving the origin of the neighbourhood from pixel to pixel and applying the operator T to the pixels in the neighbourhood to yield the output at that location.


The simplest form of T is when the neighbourhood is of size 1 x 1 (that is, a single pixel). In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form:
s = T(r)
Where s is the output image pixel value and r is the input image pixel value.
What is a gray level or intensity level transformation function? Explain the gray level transformation
function for contrast enhancement.
Spatial domain processes will be denoted by the expression:
g(x, y) = T[f(x, y)]
Where f(x, y) is the input image, g(x, y) is the output image and T is an operator applied to the image pixels. The simplest form of T is when the neighbourhood is of size 1 x 1 (that is, a single pixel). In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form:
s = T(r)
Where s is the output image pixel value and r is the input image pixel value.

ATHMARANJAN K DEPT OF ISE, SRINIVAS INSTITUTE OF TECHNOLOGY MANGALURU Page 4


DIGITAL IMAGE PROCESSING 18CS741 Module 2
Gray level transformation function for contrast enhancement.
• Fig (a) shows the effect of a transformation that produces an image of higher contrast than the original.
• Image regions below m are darkened and those above m are brightened. This is also known as contrast stretching.
• Fig (b) shows a transformation T(r) that produces a two-level (binary) image.
• Here the transformation at any point in the image depends only on the gray level at that point; this is also known as point processing.

SOME BASIC GRAY LEVEL TRANSFORMATIONS


************Explain basic intensity or Gray level transformation function
OR
********Explain the following gray level transformation functions with a neat graph: i) Linear
Transformation ii) Log transformation iii) Power law transformation.
The basic intensity or gray level transformation function is given by:
s = T(r)
Where s is the output image pixel value (after processing), r is the input image pixel value (before processing) and T is the transformation function that maps each value of r to a value of s, typically implemented with a lookup table.
The three basic types of gray level transformation functions frequently used for image enhancement are:
1. Linear Transformation Function: It consists of Image Negative and Image Identity.
2. Logarithmic Transformation Function: It consists of Log functions and Inverse log functions

3. Power-Law Transformation Function: It consists of nth root and nth power functions
1. Linear Transformation Function
Linear transformation includes simple identity and negative transformation.
a) Image Identity: The identity transformation is represented by a straight line in which each intensity value of the input image is mapped directly to the same intensity value in the output image, so the output image is identical to the input; hence the name identity transformation. It is shown below. It is of little practical use in digital image processing.

b) Image Negative: Here also the intensity values lie in the range [0, L-1]. The image negative is given by the function:
s = L – 1 – r
In an image negative the intensity values of the pixels are reversed, producing the equivalent of a photographic negative. It is as shown in the figure below:

Use: Particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size; the negative image is often much easier to analyse.
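As a quick sketch of the transformation s = L – 1 – r (assuming numpy and an 8-bit gray-level range; the function name and sample values are ours, not from the textbook):

```python
import numpy as np

def image_negative(img, L=256):
    # s = L - 1 - r applied to every pixel of an L-level image
    return (L - 1) - img.astype(np.int32)

img = np.array([[0, 64], [128, 255]])
neg = image_negative(img)  # 0 becomes 255, 255 becomes 0
```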
2. Logarithmic Transformation Function:
Transformation function can be represented as: s = c log (1 + r) where c is a constant scaling factor and r ≥ 0
• This transformation can be used to compress or expand the range of gray levels.
• The output is a higher-contrast or lower-contrast image depending on the function applied.
• The shape of the log curve is as shown below:

The log transformation maps a narrow range of low input intensity levels into a wider range of output levels; the opposite is true of higher input values. We use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values. The inverse log transformation has the opposite effect.
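A minimal numpy sketch of s = c log(1 + r), with c chosen (our assumption) so that the brightest input maps to L – 1:

```python
import numpy as np

def log_transform(img, L=256):
    # s = c * log(1 + r); c scales so the brightest input maps to L - 1
    c = (L - 1) / np.log(1 + img.max())
    return np.round(c * np.log1p(img.astype(np.float64))).astype(np.int32)

img = np.array([[0, 10], [100, 255]])
out = log_transform(img)  # dark values such as 10 are expanded upward
```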
3. Power-Law Transformations
It consists of nth root and nth power functions and can be represented as:
s = c r^γ
where c and γ (gamma) are positive constants. When γ > 1 the power law acts as an nth-power transformation, and when γ < 1 it acts as an nth-root transformation. It is also known as gamma correction.
Plots of s versus r for various values of gamma are shown below:


As in the case of the log transformation, power-law curves with fractional values of gamma map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Curves generated with gamma > 1 have exactly the opposite effect of those generated with gamma < 1. When gamma = 1, the curve reduces to the identity transformation.
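A small numpy sketch of gamma correction, normalizing r to [0, 1] and taking c = L – 1 (both our assumptions; the function name is hypothetical):

```python
import numpy as np

def gamma_correct(img, gamma, L=256):
    # s = c * r**gamma with r normalized to [0, 1] and c = L - 1
    r = img.astype(np.float64) / (L - 1)
    return np.round((L - 1) * r ** gamma).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
dark_expanded = gamma_correct(img, 0.4)  # gamma < 1 brightens dark values
identity = gamma_correct(img, 1.0)       # gamma = 1 leaves the image unchanged
```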
PIECEWISE-LINEAR TRANSFORMATION FUNCTIONS
A complementary approach to the methods which we discussed so far is to use piecewise linear functions.
The principal advantage of piecewise linear functions over the basic types of functions which we have
discussed thus far is that the form of piecewise functions can be arbitrarily complex. The principal
disadvantage of piecewise functions is that their specification requires considerably more user input.
*****Explain the following transformations
The 3 piecewise-linear transformation functions are:
1. Contrast Stretching
2. Gray Level or Intensity level slicing
3. Bit plane slicing
CONTRAST STRETCHING
Contrast is the difference between the highest and lowest gray levels of an image. Low-contrast images can result from:

• Poor illumination
• Lack of dynamic range in the imaging sensor
• Wrong setting of a lens aperture in a digital camera
These effects can be overcome by contrast stretching. The basic idea is to increase the contrast of an image by making the darker portions darker and the brighter portions brighter. The transformation function that represents contrast stretching is shown below:

Above figure shows a typical transformation used for contrast stretching. The locations of points (r1, s1) and
(r2, s2) control the shape of the transformation function.
• If r1 = s1 and r2 = s2, the transformation is a linear function that produces no change in gray levels.
• If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image, as illustrated in the figure below:

Intermediate values of (r1, s1) and (r2, s2) produce various degrees of spread in the gray levels of the output image, thus affecting its contrast. In general, r1 ≤ r2 and s1 ≤ s2 is assumed so that the function is single valued and monotonically increasing.
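A piecewise-linear stretch through the two control points can be sketched in numpy as follows (function name and sample values are ours; the endpoint segments assume r1 > 0 and r2 < L – 1 map linearly to the corners, as in the usual diagram):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # Piecewise-linear mapping through (r1, s1) and (r2, s2); r1 <= r2, s1 <= s2
    r = img.astype(np.float64)
    out = np.empty_like(r)
    lo, hi = r <= r1, r >= r2
    mid = ~lo & ~hi
    out[lo] = s1 / r1 * r[lo] if r1 > 0 else s1
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[hi] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[hi] - r2) if r2 < L - 1 else s2
    return np.round(out).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
stretched = contrast_stretch(img, r1=100, s1=50, r2=150, s2=205)
```

Dark pixels (below r1) are pushed darker and bright pixels (above r2) are pushed brighter, which is exactly the stretching described above.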
GRAY-LEVEL SLICING:
It is used to highlight a specific range of gray levels in an image. It can be implemented using two
approaches.
One approach is to display a high value for all gray levels in the range of interest and a low value for all other
gray levels. This transformation, shown below produces a binary image.

The second approach brightens the desired range of gray levels but preserves the background and gray-level
tonalities in the image. This transformation, shown below:

Use: Its application includes enhancing features such as masses of water in satellite imagery and enhancing
flaws in X-ray images
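Both approaches are one-line masks in numpy (function names and the range [100, 200] are ours, for illustration only):

```python
import numpy as np

def slice_binary(img, lo, hi, L=256):
    # Approach 1: high value inside [lo, hi], low value elsewhere (binary output)
    return np.where((img >= lo) & (img <= hi), L - 1, 0)

def slice_preserve(img, lo, hi, L=256):
    # Approach 2: brighten the range of interest, keep all other levels unchanged
    return np.where((img >= lo) & (img <= hi), L - 1, img)

img = np.array([[10, 120], [140, 250]])
b = slice_binary(img, 100, 200)
p = slice_preserve(img, 100, 200)
```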
BIT-PLANE SLICING:
Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by
specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the
image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit plane 7 for

the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest order bits in the bytes
comprising the pixels in the image and plane 7 contains all the high-order bits.
The various bit planes for an image are shown in the figure below. Note that the higher-order bits contain the majority of the visually significant data, while the other bit planes contribute the more subtle details in the image.

Use: It is useful for analyzing the relative importance played by each bit of the image, a process that aids in
determining the adequacy of the number of bits used to quantize each pixel.
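Extracting bit-plane k of an 8-bit image is a shift and a mask in numpy (a sketch; the function name and sample pixels are ours):

```python
import numpy as np

def bit_plane(img, k):
    # Extract bit-plane k (0 = least significant bit, 7 = most significant bit)
    return (img >> k) & 1

img = np.array([[0b10110101, 0b00000001]], dtype=np.uint8)
plane7 = bit_plane(img, 7)  # MSB plane: carries most of the visual structure
plane0 = bit_plane(img, 0)  # LSB plane: fine, often noise-like detail
```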
USE OF HISTOGRAM STATISTICS FOR IMAGE ENHANCEMENT
*********Define histogram and normalized histogram.
Histogram:
The histogram of an image is a graphical representation of the frequency of pixel intensity (gray-level) values. In an image histogram, the X axis shows the gray-level intensities and the Y axis shows the frequency of these intensities.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function of the form
H(rk) = nk
Where rk is the kth gray level and nk is the number of pixels in the image having the gray level rk.
Normalized Histogram:
A normalized histogram is given by the equation
P(rk) = nk / n for k = 0, 1, 2, …, L-1
P(rk) gives an estimate of the probability of occurrence of gray level rk. The sum of all components of a normalized histogram is equal to 1. Histogram plots are simply plots of H(rk) = nk versus rk.
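Both definitions can be sketched in a few lines of numpy (function names are ours; L = 8 levels assumed for the toy image):

```python
import numpy as np

def histogram(img, L=8):
    # H(r_k) = n_k: count of pixels at each gray level
    return np.bincount(img.ravel(), minlength=L)

def normalized_histogram(img, L=8):
    # P(r_k) = n_k / n; the components sum to 1
    return histogram(img, L) / img.size

img = np.array([[0, 1, 1], [2, 2, 2]])
p = normalized_histogram(img)
```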

• In a dark image the components of the histogram are concentrated on the low (dark) side of the gray scale.
• In a bright image, the histogram components are biased towards the high side of the gray scale.
• The histogram of a low-contrast image is narrow and centered towards the middle of the gray scale.
• The components of the histogram of a high-contrast image cover a broad range of the gray scale. The net effect is an image that shows a great deal of gray-level detail and has a high dynamic range.

HISTOGRAM EQUALIZATION
Discuss histogram equalization for contrast enhancement in digital image processing
Histogram equalization is a common technique for enhancing the appearance of images. Suppose we have an image which is predominantly dark. Its histogram would then be skewed towards the lower end of the gray scale, with all the image detail compressed into the dark end. If we could stretch out the gray levels at the dark end to produce a more uniformly distributed histogram, the image would become much clearer.
• Histogram equalization automatically determines a transformation function that seeks to produce an output image with a uniform histogram.
• Histogram equalization is a simple image enhancement method applied to the entire image.

Let r denote the gray levels of the image to be enhanced, treated as a continuous variable. The range of r is [0, 1], with r = 0 representing black and r = 1 representing white. The transformation function is of the form
s = T(r), where 0 ≤ r ≤ 1
It produces a level s for every pixel value r in the original image.

The transformation function is assumed to satisfy two conditions:
1. T(r) is single valued and monotonically increasing in the interval 0 ≤ r ≤ 1, and
2. 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
The transformation function must be single valued so that the inverse transformation exists. The monotonically increasing condition preserves the increasing order from black to white in the output image. The second condition guarantees that the output gray levels will be in the same range as the input levels.
The gray levels of the image may be viewed as random variables in the interval [0, 1]. The most fundamental descriptor of a random variable is its probability density function (PDF); let pr(r) and ps(s) denote the probability density functions of the random variables r and s respectively. A basic result from elementary probability theory states that if pr(r) and T(r) are known and T⁻¹(s) satisfies condition (1), then the probability density function of the transformed variable s is given by
ps(s) = pr(r) |dr/ds|
Thus, the PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function. The cumulative distribution function (CDF) of r is used as the transformation:
s = T(r) = ∫ (0 to r) pr(w) dw

where w is a dummy variable of integration. Differentiating the CDF gives ds/dr = pr(r), so ps(s) = pr(r) |dr/ds| = pr(r) · 1/pr(r) = 1 for 0 ≤ s ≤ 1; that is, using the CDF as the transformation yields a uniform density.
For discrete values we deal with probabilities and summations instead of probability density functions and integrals. The probability of occurrence of gray level rk in an image is approximated by
p(rk) = nk / n, k = 0, 1, …, L-1
Where n is the total number of pixels in the image, nk is the number of pixels that have gray level rk, and L is
the total number of possible gray levels in the image.
The discrete form of the transformation function s = T(r) is given by:
sk = T(rk) = (L – 1) Σ (j = 0 to k) p(rj), k = 0, 1, …, L-1
Thus, a processed (output) image is obtained by mapping each pixel with level rk in the input image into a
corresponding pixel with level sk in the output image. The transformation T(rk) is known as histogram
equalization.
HISTOGRAM MATCHING (SPECIFICATION)
In some image enhancement applications we wish the processed image to have a particular histogram shape rather than a uniform one; in that case the histogram matching method is used. The goal of histogram matching is to take an input image and generate an output image based upon the shape of a specified (or reference) histogram. Histogram matching is also known as histogram specification.
*****What is histogram matching or specification? Explain
Histogram matching is the process of generating an output image for an input image based upon the shape of
a specific or reference histogram.
• Obtain the histogram for both the input image and the specified image (same method as in histogram
equalization).
• Obtain the cumulative distribution function CDF for both the input image and the specified image
(same method as in histogram equalization).
• Calculate the transformation T to map the old intensity values to new intensity values for both the
input image and specified image (same method as in histogram equalization).

Problem 1:
Perform histogram equalization of an image whose pixel intensity distribution is given below:
Gray Levels 0 1 2 3 4 5 6 7
No. of Pixels 790 1023 850 656 329 245 122 81
Answer:
Gray Level (rk) (input) | No. of Pixels (nk) | PDF p(rk) = nk / n | CDF | Transformed level sk = CDF × max gray value (7) | Equalized gray level (output)
0 790 0.19 0.19 1.33 1
1 1023 0.25 0.44 3.08 3
2 850 0.21 0.65 4.55 5
3 656 0.16 0.81 5.67 6
4 329 0.08 0.89 6.23 6
5 245 0.06 0.95 6.65 7
6 122 0.03 0.98 6.86 7
7 81 0.02 1.00 7.00 7
n = 4096
There are only 5 gray level values in the output.

NOTE: For the above problem the maximum gray level value is taken as 7 (that is, levels 0 to 7 for a 3-bit image). Here n is the sum of all pixel counts = 4096.
[Figures: histogram of the input image, a plot of rk versus p(rk) = nk / n, and histogram of the equalized image, a plot of sk versus p(sk).]
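The equalization table above can be reproduced with a short numpy sketch (the helper name `equalization_lut` is ours):

```python
import numpy as np

def equalization_lut(counts, L):
    # s_k = round((L - 1) * CDF(r_k)): the discrete equalization mapping
    cdf = np.cumsum(counts) / np.sum(counts)
    return np.round((L - 1) * cdf).astype(int)

counts = np.array([790, 1023, 850, 656, 329, 245, 122, 81])  # n = 4096
sk = equalization_lut(counts, L=8)
# sk reproduces the "Equalized gray level" column: [1, 3, 5, 6, 6, 7, 7, 7]
```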

Problem 2
Apply Histogram mapping or specification to the following image (8 x 8)

Answer:
Step1: Find the histogram equalized gray level values for the input (Original) image
Gray Level (rk) (input) | No. of Pixels (nk) | PDF p(rk) = nk / n | CDF | Transformed level sk = CDF × max gray value (7) | Equalized gray level (output)

0 8 0.125 0.125 0.875 1


1 10 0.156 0.281 1.967 2
2 10 0.156 0.437 3.059 3
3 2 0.031 0.468 3.276 3
4 12 0.187 0.655 4.585 5
5 16 0.25 0.905 6.335 6
6 4 0.0625 0.9675 6.77 7
7 2 0.031 1.00 7.00 7
n = 64
Step2: Find the histogram equalized gray level values for the target image.
Gray Level (rk) (input) | No. of Pixels (nk) | PDF p(rk) = nk / n | CDF | Transformed level sk = CDF × max gray value (7) | Equalized gray level (output)
0 0 0 0 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 20 0.3125 0.3125 2.1875 2
5 20 0.3125 0.625 4.375 4
6 16 0.25 0.875 6.125 6
7 8 0.125 1.00 7.00 7
n = 64

Step 3: Final Mapping process
Gray Level (rk) (input) | Equalized gray level, input image | Equalized gray level, target image | Final histogram-mapped gray level (output)
0 1 0 4
1 2 0 4
2 3 0 5
3 3 0 5
4 5 2 6
5 6 4 6
6 7 6 7
7 7 7 7

NOTE:
In the first row the input gray level 0 is equalized to the value 1. The equalized target image does not contain the value 1, so it is mapped to the next higher available equalized value, which is 2; the target gray level corresponding to this equalized value 2 is 4, so 4 is the final mapped value for input 0.
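The whole three-step mapping of Problem 2 can be sketched in numpy (helper names are ours; the two histograms are read off the tables above):

```python
import numpy as np

def equalization_lut(counts, L):
    # s_k = round((L - 1) * CDF(r_k)) for a histogram given as counts
    cdf = np.cumsum(counts) / np.sum(counts)
    return np.round((L - 1) * cdf).astype(int)

def match_histogram(src_counts, ref_counts, L=8):
    # Equalize both histograms, then send each source level to the first
    # reference level whose equalized value is >= the source's (as in the NOTE)
    s_eq = equalization_lut(src_counts, L)
    r_eq = equalization_lut(ref_counts, L)
    mapping = np.empty(L, dtype=int)
    for k in range(L):
        candidates = np.nonzero(r_eq >= s_eq[k])[0]
        mapping[k] = candidates[0] if candidates.size else L - 1
    return mapping

src = np.array([8, 10, 10, 2, 12, 16, 4, 2])   # original image histogram
ref = np.array([0, 0, 0, 0, 20, 20, 16, 8])    # target image histogram
mapped = match_histogram(src, ref)
# mapped reproduces the final column: [4, 4, 5, 5, 6, 6, 7, 7]
```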

ENHANCEMENT USING ARITHMETIC/LOGIC OPERATIONS
****Explain applications of Arithmetic and Logical operations in digital image processing
Arithmetic/logic operations involving images are performed on a pixel-by-pixel basis between two or more
images. It is widely used in image enhancement.
The four arithmetic operations involving images are:
1. Addition: g(x, y) = f(x, y) + h(x, y)
2. Subtraction: g(x, y) = f(x, y) - h(x, y)
3. Multiplication: g(x, y) = f(x, y) * h(x, y)
4. Division: g(x, y) = f(x, y) / h(x, y)
Addition:
One of the most interesting applications of the image averaging (or image addition) operation is suppressing the noise component of images. In this case, the addition operation is used to take the average of several noisy images obtained from a given scene, under the assumption that at every pair of coordinates (x, y) the noise is uncorrelated and has zero average value.
An important application of image averaging is in the field of astronomy, where imaging with very low light
levels is routine, causing sensor noise frequently to render single images virtually useless for analysis.
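A small synthetic numpy sketch illustrates the effect (the flat "clean" image and Gaussian noise parameters are made up for the demonstration):

```python
import numpy as np

# Averaging K noisy observations of the same scene: the zero-mean noise
# largely cancels, so the mean image is much closer to the clean one
# (the noise standard deviation drops as 1/sqrt(K)).
rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = [clean + rng.normal(0.0, 20.0, clean.shape) for _ in range(64)]
averaged = np.mean(noisy, axis=0)

err_single = np.abs(noisy[0] - clean).mean()  # error of one noisy frame
err_avg = np.abs(averaged - clean).mean()     # error after averaging 64 frames
```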
Subtraction:
One of the main applications of image subtraction is:
In the area of medical imaging called mask mode radiography: In this case h(x, y), the mask, is an X-ray
image of a region of a patient’s body captured by an intensified TV camera located opposite an X-ray source.
The procedure consists of injecting a contrast medium into the patient’s bloodstream, taking a series of
images of the same anatomical region as h(x, y), and subtracting this mask from the series of incoming
images after injection of the contrast medium. The net effect of subtracting the mask from each sample in the
incoming stream of TV images is that the areas that are different between f(x, y) and h(x, y) appear in the
output image as enhanced detail.
Multiplication and Division operations are used:
1. In shading correction
2. Masking or Region of interest operations.
Logic operations such as AND, OR and NOT similarly operate on a pixel-by-pixel basis.
• Performing the NOT operation on a black, 8-bit pixel (a string of eight 0’s) produces a white pixel.
• The AND and OR operations are used for masking; that is, for selecting sub-images in an image.

• In the AND and OR image masks, light represents a binary 1 and dark represents a binary 0. Masking is sometimes referred to as region of interest (ROI) processing.
• In terms of enhancement, masking is used primarily to isolate an area for processing, to highlight that area and differentiate it from the rest of the image.
• Logic operations are also frequently used in conjunction with morphological operations.
BASICS OF SPATIAL FILTERING
The name filter is borrowed from frequency domain processing; it refers to accepting (passing) or rejecting certain frequency components. A filter that passes low frequencies is called a low-pass filter, and one that passes high frequencies is called a high-pass filter. Similar smoothing can be accomplished directly on the image itself using spatial filters, also called masks, templates, kernels, or windows.
What is Mask in an image?
A mask is a small sub-image, with the same dimensions as the neighbourhood, that is used to modify a larger image.
The size of a mask must be odd (i.e. 3×3, 5×5, etc.) to ensure it has a center; the smallest meaningful size is 3×3.
The values in a filter mask (sub-image) are referred to as coefficients, rather than pixels.

What is spatial filter? Explain the types of spatial filters


Spatial filtering is a process by which we can perform filtering operations directly on the pixels of an image
by moving the filter mask from point to point in an image such that the center of the mask traverses all image
pixels. At each point (x, y), the response of the filter at that point is calculated using a predefined
relationship.
TYPES OF SPATIAL FILTERS:
1. Linear spatial filter: It modifies an input image by replacing the value at each pixel with some linear
function of the values of nearby pixels. Moreover, this linear function is assumed to be independent
of the pixel's location (x, y). For linear spatial filtering the response is given by a sum of products of
the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
2. Non-Linear spatial filters also operate on neighbourhoods. In general, however, their operation is based conditionally on the values of the pixels in the neighbourhood under consideration, and they do not explicitly use coefficients in the sum-of-products manner described for linear filters.
Noise reduction can be achieved effectively with a nonlinear filter whose basic function is to compute
the median gray-level value in the neighbourhood in which the filter is located.

Explain the working mechanism of spatial filtering.
The filtering process consists simply of moving the filter mask from point to point in an image. At each point
(x, y), the response of the filter at that point is calculated using a predefined relationship. For linear spatial
filtering, the response is given by a sum of products of the filter coefficients and the corresponding image
pixels in the area spanned by the filter mask.
For the 3 x 3 mask shown below;

The result (or response), R, of linear filtering with the filter mask at a point (x, y) in the image is:
g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) f(x + s, y + t)
w(0, 0) coincides with the image value f(x, y), indicating that the mask is centred at (x, y) when the computation of the sum of products takes place.
For a mask of size m x n, we assume that m = 2a + 1 and n = 2b + 1, where a and b are nonnegative integers.
The size of mask must be odd (i.e. 3×3, 5×5, etc.) to ensure it has a center. The smallest meaningful size is
3×3.

It is common practice to simplify the notation by using the following expression:
R = w1 z1 + w2 z2 + … + wmn zmn = Σ (i = 1 to mn) wi zi
where the w’s are mask coefficients, the z’s are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients in the mask. For example, for a mask of size 3 x 3 the response at any point (x, y) in the image is given by:
R = w1 z1 + w2 z2 + … + w9 z9 = Σ (i = 1 to 9) wi zi
Noise reduction can be achieved effectively with a nonlinear filter whose basic function is to compute the
median gray-level value in the neighbourhood in which the filter is located.
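The linear sum-of-products response at one mask position can be sketched in numpy (the function name, sample window, and box mask are ours for illustration):

```python
import numpy as np

def response(window, mask):
    # R = sum over the neighbourhood of (mask coefficient * pixel value)
    return float(np.sum(window * mask))

window = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])
box = np.full((3, 3), 1 / 9)  # 3x3 averaging mask: all coefficients 1/9
R = response(window, box)     # the average of the nine pixels under the mask
```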

SMOOTHING SPATIAL FILTERS
• Smoothing filters are used for blurring and for noise reduction.
• Blurring is used in pre-processing steps, such as removal of small details from an image prior to (large) object extraction, and bridging of small gaps in lines or curves.
• Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
There are two types of smoothing spatial filters:
1. Linear Smoothing spatial filter
2. Non-Linear Smoothing spatial filter

SMOOTHING LINEAR SPATIAL FILTERS


**************What is smoothing filter? Explain the different spatial filters used for image
smoothing.
Smoothing filters are used for blurring and for noise reduction. The output (response) of a smoothing, linear
spatial filter is simply the average of the pixels contained in the neighbourhood of the filter mask. These
filters are sometimes called averaging filters or low-pass filters.
The idea behind smoothing filters is to replace the value of every pixel in an image by the average of the gray levels in the neighbourhood defined by the filter mask; this results in an image with reduced “sharp” transitions in gray levels. The most obvious application of smoothing is noise reduction.
These filters have the undesirable side effect that they blur edges.
Different types of Smoothing filters are:
1. Mean or Box Filter
2. Weighted Average Filter
MEAN OR BOX FILTERS:
A spatial averaging filter in which all coefficients are equal is sometimes called a box filter. A major use of
averaging filters is in the reduction of “irrelevant” detail in an image.
Below figure shows a 3 x 3 smoothing filter which yields the standard average of the pixels under the mask. This can best be seen by substituting the coefficients of the mask into the response expression, which gives
R = (1/9) (z1 + z2 + … + z9),


which is the average of the gray levels of the pixels in the 3 x 3 neighbourhood defined by the mask.
The idea here is that it is computationally more efficient to have coefficients valued 1. At the end of the
filtering process the entire image is divided by 9. An m x n mask would have a normalizing constant equal to
1/mn.
WEIGHTED AVERAGE FILTER:
• Pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others.
• The pixel at the centre of the mask is multiplied by a higher value than any other, giving this pixel more importance in the calculation of the average.
• The other pixels are inversely weighted as a function of their distance from the centre of the mask.
• The diagonal terms are further away from the centre than the orthogonal neighbours (by a factor of √2) and are thus weighted less than these immediate neighbours of the centre pixel.

ORDER-STATISTICS FILTERS
Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking) the pixels
contained in the image area encompassed by the filter, and then replacing the value of the center pixel with
the value determined by the ranking result.

Different types of order statistic filters are:
1. Median Filter: It replaces the value of a pixel by the median of the gray levels in the neighbourhood of
that pixel.
For certain types of random noise, median filters provide excellent noise-reduction capabilities, with
considerably less blurring than linear smoothing filters of similar size.
For example, suppose that a 3 x 3 neighbourhood has values (10, 20, 20, 20, 15, 20, 20, 25, 100). These
values are sorted as (10, 15, 20, 20, 20, 20, 20, 25, 100), which results in a median of 20. Thus, the principal
function of median filters is to force points with distinct gray levels to be more like their neighbours.
2. Max Filter: useful for finding the brightest (maximum pixel value) point in an image.
3. Min Filter: useful for finding the darkest (minimum pixel value) spot in an image.
Problem:
Consider the image given below and calculate the output at pixel (2, 2) if smoothing is done using a 3 x 3 neighbourhood with the filters given below:
1. Box/Mean filter
2. Weighted Average filter
3. Median filter
4. Min Filter
5. Max Filter

For Box filter


For Weighted Average filter

For Median filter: write the pixels of the mask in ascending order and find the median:
0, 1, 2, 4, 5, 6, 7, 8, 9 → median = 5
For the 3 x 3 neighbourhood:
Min filter: minimum value = 0
Max filter: maximum value = 9
***Give example for order statistics or Non-linear filters in spatial domain
Example: an X-ray image of a circuit board heavily corrupted by salt-and-pepper (impulse) noise. Median filtering is much better suited than linear smoothing for the removal of salt-and-pepper noise.
SHARPENING SPATIAL FILTERS
The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been
blurred, either in error or as a natural effect of a particular method of image acquisition. We saw that image
blurring could be accomplished in the spatial domain by pixel averaging in a neighbourhood. Since averaging
is analogous to integration, it is logical to conclude that sharpening could be accomplished by spatial
differentiation.
Applications of Image sharpening: ranging from electronic printing and medical imaging to industrial
inspection and autonomous guidance in military systems
BASIC FOUNDATION
Explain the fundamental properties of First and second order derivatives in a digital image processing
Image sharpening filters are based on first- and second-order derivatives. The derivatives of a digital function are defined in terms of differences.
Definition for first derivatives:
 Must be zero in flat segments (areas of constant gray-level values)
 Must be nonzero at the onset of a gray-level step or ramp;
 Must be nonzero along ramps.
Definition for second derivatives:
 Must be zero in flat areas;
 Must be nonzero at the onset and end of a gray-level step or ramp;
 Must be zero along ramps of constant slope
A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference:

∂f/∂x = f(x + 1) − f(x)

Similarly, we define a second-order derivative as the difference:

∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)
Example: A simple image with its 1-D gray level profile along the centre of the image including isolated
noise point is shown below:

Simplified 1-D gray level profile of the image is shown below:


• First, we note that the first-order derivative is nonzero along the entire ramp, while the second-order derivative is nonzero only at the onset and end of the ramp.
• We conclude that first-order derivatives produce thicker edges, and second-order derivatives have a stronger response to fine detail, such as thin lines and isolated points.
• Hence a second-order derivative enhances fine detail (including noise) much more than a first-order derivative.
• First-order derivatives generally have a stronger response to a gray-level step. Second-order derivatives produce a double response at step changes in gray level.
USE OF SECOND DERIVATIVES FOR ENHANCEMENT–THE LAPLACIAN
*********Explain the Laplacian second derivatives for spatial enhancement
The Laplacian filter, defined with respect to the x and y coordinates for a function (image) f(x, y) of two variables, is:

∇²f = ∂²f/∂x² + ∂²f/∂y²

For the partial second-order derivative in the x-direction:

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)

For the partial second-order derivative in the y-direction:

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
The digital implementation of the two-dimensional Laplacian is obtained by summing these two components:

∇²f = [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)] − 4f(x, y)

Using this equation we can design the Laplacian filter. It can be implemented using the mask shown below:

 0   1   0
 1  -4   1
 0   1   0

The basic Laplacian filter can be modified by subtracting the Laplacian-filtered image from the original image f(x, y) in order to obtain a sharpened result:

g(x, y) = f(x, y) − ∇²f(x, y)

(this form applies when the centre coefficient of the Laplacian mask is negative).
Application: the Laplacian highlights gray-level discontinuities in an image and deemphasizes regions with slowly varying gray levels. This tends to produce images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.

USE OF FIRST DERIVATIVES FOR ENHANCEMENT—THE GRADIENT
***********Describe how the first order derivatives are used for Image sharpening
First derivatives in image processing are implemented using the magnitude of the gradient. For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector:

∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T

Its magnitude is ∇f = [Gx² + Gy²]^(1/2). To approximate the magnitude of the gradient, absolute values are used instead of squares and square roots:

∇f ≈ |Gx| + |Gy|
Gradient of an image measures the change in image function f(x, y) in X and Y directions.
A 3 x 3 region of an image (the z’s are gray-level values) and masks used to compute the gradient at point
labeled z5 is shown below:

In figure z5 denotes f(x, y), z1 denotes f(x-1, y-1) and so on.


Example: Gx = z8 − z5 and Gy = z6 − z5.
From the above equation:

∇f ≈ |z8 − z5| + |z6 − z5|
Roberts operator: the Roberts cross-gradient operator for the above 3 x 3 mask can be written as:
Gx = (z9 − z5), Gy = (z8 − z6)


Therefore, using the absolute-value approximation: ∇f ≈ |z9 − z5| + |z8 − z6|
These differences are implemented with the 2 x 2 masks:

Gx =  -1   0        Gy =   0  -1
       0   1               1   0

Roberts cross-gradient operator masks

Sobel operator: It is a first-order derivative estimator in which we can specify whether the edge detector is sensitive to horizontal or vertical edges, or both.
The Sobel operator for the 3 x 3 mask can be written as:

Gx =  -1  -2  -1        Gy =  -1   0   1
       0   0   0              -2   0   2
       1   2   1              -1   0   1

Sobel operator masks
All the mask coefficients in the Roberts and Sobel operators sum to zero, as expected for a derivative operator. Since the centre coefficient of the Sobel masks is zero, the operator does not include the original value of the centre pixel; it calculates the difference between the pixel values to the right and left of the centre (or above and below it).
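Putting the two Sobel masks together with the |Gx| + |Gy| approximation gives the complete gradient computation at one pixel. A sketch with a made-up vertical edge:

```python
# Standard 3x3 Sobel masks (row convention: Gx responds to horizontal
# edges, Gy to vertical edges).
SOBEL_X = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]
SOBEL_Y = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve_at(image, r, c, mask):
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total += mask[dr + 1][dc + 1] * image[r + dr][c + dc]
    return total

def sobel_magnitude_at(image, r, c):
    gx = convolve_at(image, r, c, SOBEL_X)
    gy = convolve_at(image, r, c, SOBEL_Y)
    return abs(gx) + abs(gy)  # gradient magnitude approximation

# Vertical edge: left columns dark, right column bright.
img = [[0, 0, 100],
       [0, 0, 100],
       [0, 0, 100]]
print(sobel_magnitude_at(img, 1, 1))  # |Gx|=0, |Gy|=400, so 400
```

Only Gy responds here because the intensity changes horizontally across a vertical edge; on a horizontal edge the roles of the two masks would be reversed.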

QUESTION BANK
MODULE 2
1. What is an Image Enhancement? Why is it required? (4 marks)
2. What is a gray level or intensity level transformation function? Explain the gray level transformation function for contrast enhancement. (8 marks)
3. Explain the following gray level transformation functions with a neat graph: i) Linear transformation ii) Log transformation iii) Power law transformation. (8 marks)
4. Explain the following transformations: i) Contrast stretching ii) Gray level or intensity level slicing iii) Bit plane slicing. (8 marks)
5. Define histogram and normalized histogram; discuss histogram equalization for contrast enhancement. (8 marks)
6. What is histogram matching or specification? Explain. (6 marks)
7. Explain applications of arithmetic and logical operations in digital image processing. (8 marks)
8. What is a spatial filter? Explain the mechanics of spatial filtering. (6 marks)
9. Explain the types of spatial filters. (4 marks)
10. Explain the different spatial filters used for image smoothing. (10 marks)
11. Explain order statistics or non-linear filters in the spatial domain. (5 marks)
12. Give an example for order statistics or non-linear filters in the spatial domain. (2 marks)
13. Explain the Laplacian second derivatives for spatial enhancement. (6 marks)
14. Describe how the first order derivatives are used for image sharpening. (10 marks)