
Digital Image Processing

Dr S Govindaraju
Associate Professor
Department of Computer Science
Sri Ramakrishna College of Arts & Science
Coimbatore - 641 006
Tamil Nadu, India

UNIT-3

IMAGE ENHANCEMENT

INTRODUCTION

The objective of image enhancement is to improve the interpretability of the information present in images for human viewers. An enhancement algorithm is one that yields a better-quality image for a particular application, either by suppressing noise or by increasing image contrast. Image-enhancement algorithms are employed to emphasise, sharpen or smoothen image features for display and analysis. Enhancement methods are application specific and are often developed empirically. Image-enhancement techniques emphasise specific image features to improve the visual perception of an image. They can be classified into two broad categories:

(1) spatial domain methods, and (2) transform domain methods.

The spatial domain method operates directly on pixels, whereas the transform domain method operates on the Fourier transform of an image and then transforms the result back to the spatial domain. Elementary enhancement techniques are often histogram-based because they are simple and fast, and they achieve acceptable results for some applications. Unsharp masking sharpens edges by subtracting a portion of a filtered component from the original image. The technique of unsharp masking has become a popular enhancement tool to assist in diagnosis.
IMAGE ENHANCEMENT IN SPATIAL DOMAIN

The spatial domain technique deals with the manipulation of pixel values. It can be broadly classified into (i) point operation, (ii) mask operation, and (iii) global operation.

Point Operation

In point operation, each pixel is modified by an equation that does not depend on other pixel values. The point operation is illustrated in Fig. 5.1.

The point operation is represented by

g(m, n) = T[f(m, n)]   (5.1)

In point operation, T operates on one pixel, so there exists a one-to-one mapping between the input image f(m, n) and the output image g(m, n).

Mask Operation

In mask operation, each pixel is modified according to the values in a small neighbourhood. Examples of mask operations are spatial low-pass filtering using a box filter and median filtering. In mask operation, the operator T operates on the neighbourhood of pixels.

Here, a mask is a small matrix whose values are often termed weights. Each mask has an origin. The origins of symmetric masks are usually their centre pixel position. For non-symmetric masks, any pixel location may be chosen as the origin, depending on the intended use.

Global Operation
In global operation, all pixel values in the image are taken into
consideration. Usually, frequency domain operations are global
operations.

ENHANCEMENT THROUGH POINT OPERATION:

In point operation, each pixel value is mapped to a new pixel value. Point operations are basically memoryless operations: the enhancement at any point depends only on the image value at that point. The point operation maps the input image f(m, n) to the output image g(m, n), as illustrated in Fig. 5.3. From the figure, it is obvious that every pixel of f(m, n) with the same gray level maps to a single gray value in the output image.
TYPES OF POINT OPERATION

Some examples of point operation include (i) brightness modification, (ii) contrast manipulation, and (iii) histogram manipulation.
Brightness Modification

The brightness of an image depends on the values associated with the pixels of the image. When changing the brightness of an image, a constant is added to or subtracted from the luminance of all sample values. The brightness of an image can be increased by adding a constant value to each and every pixel of the image. Similarly, the brightness can be decreased by subtracting a constant value from each and every pixel of the image.

(a) Increasing the Brightness of an Image A simple method to increase the brightness of an image is to add a constant value to each and every pixel of the image. If f[m, n] represents the original image, then a new image g[m, n] is obtained by adding a constant k to each pixel of f[m, n]. This is represented by

g[m, n] = f[m, n] + k

The original image f[m, n] is shown in Fig. 5.4 (a), and the enhanced image g[m, n] is shown in Fig. 5.4 (b). Here, the value of k is chosen to be 50, so each pixel in f[m, n] is increased by fifty. The corresponding MATLAB code is shown in Fig. 5.5.

(b) Decreasing the Brightness of an Image The brightness of an image can be decreased by subtracting a constant k from all the pixels of the input image f[m, n]. This is represented by

g[m, n] = f[m, n] - k

The original and the brightness-suppressed images are shown in Figs 5.6 (a) and (b) respectively. The brightness is suppressed by subtracting a value of 70 from each and every pixel of the original image. The corresponding MATLAB code is given.
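The brightness rules above are simple point operations. The chapter's listings are in MATLAB; as a minimal language-neutral sketch, the same additive rule can be written in Python. The function name, the list-of-rows image representation, and the clamping to the 8-bit range [0, 255] are illustrative assumptions, not from the text:

```python
def adjust_brightness(image, k):
    """Point operation g[m, n] = f[m, n] + k, clamped to [0, 255].
    `image` is a list of rows (lists) of integer gray levels;
    pass a negative k to decrease the brightness."""
    return [[min(255, max(0, p + k)) for p in row] for row in image]

f = [[10, 100], [200, 250]]
brighter = adjust_brightness(f, 50)   # k = 50, as in Fig. 5.4
darker = adjust_brightness(f, -70)    # k = -70, as in Fig. 5.6
```

Clamping is one reasonable out-of-range policy; MATLAB's `uint8` arithmetic saturates in the same way.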
Contrast Adjustment

Contrast adjustment is done by scaling all the pixels of the image by a constant k. It is given by

g[m, n] = k * f[m, n]

Changing the contrast of an image changes the range of luminance values present in the image. Specifying a value of k above 1 will increase the contrast by making bright samples brighter and dark samples darker, thus expanding the range used. A value below 1 will do the opposite, compressing the samples into a smaller range of values. An original image and its contrast-manipulated images are illustrated, and the corresponding MATLAB code is given.
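In the same illustrative Python style as before (hypothetical function name, clamped 8-bit output), the multiplicative contrast rule g[m, n] = k * f[m, n] can be sketched as:

```python
def adjust_contrast(image, k):
    """Scale every pixel by a constant k and round, clamping to
    [0, 255]. k > 1 expands the luminance range; k < 1 compresses it."""
    return [[min(255, max(0, round(p * k))) for p in row] for row in image]

f = [[50, 100], [150, 200]]
high = adjust_contrast(f, 1.5)   # bright samples pushed brighter
low = adjust_contrast(f, 0.5)    # range compressed
```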
HISTOGRAM MANIPULATION

Histogram manipulation modifies the histogram of an input image so as to improve the visual quality of the image. In order to understand histogram manipulation, one should have some basic knowledge about the histogram of an image. The following section gives a basic idea about the histogram of an image and the histogram-equalisation technique used to improve its visual quality.

(a) Histogram The histogram of an image is a plot of the number of occurrences of each gray level in the image against the gray-level values. The histogram provides a convenient summary of the intensities in an image, but it is unable to convey any information regarding spatial relationships between pixels. The histogram provides insight about image contrast and brightness:

1. The histogram of a dark image will be clustered towards the lower gray levels.

2. The histogram of a bright image will be clustered towards the higher gray levels.

3. For a low-contrast image, the histogram will not be spread equally; it will be concentrated in a narrow range of gray levels.

4. For a high-contrast image, the histogram will have an equal spread over the gray levels.

Image brightness may be improved by modifying the histogram of the image.

HISTOGRAM EQUALISATION:

Histogram equalisation is a process that attempts to spread out the gray levels in the image so that they are evenly distributed across their range. It reassigns the brightness values of pixels based on the image histogram, so that the histogram of the resultant image is as flat as possible. Histogram equalisation provides a more visually pleasing result.

Procedure to Perform Histogram Equalisation

Histogram equalisation is done by performing the following steps:

1. Find the running sum of the histogram values.

2. Normalise the values from Step (1) by dividing by the total number of pixels.

3. Multiply the values from Step (2) by the maximum gray-level value and round.

4. Map the gray-level values to the results from Step (3) using a one-to-one correspondence.
From the results of histogram equalisation, it is obvious that histogram equalisation makes it possible to see minor variations within regions that appeared nearly uniform in the original image. In this example, histogram equalisation allows one to interpret the original image as a "baby in the cradle".
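The four steps above can be sketched directly in Python (the function name and the list-of-rows image representation are illustrative assumptions, not from the text):

```python
def equalise_histogram(image, levels=256):
    """Histogram equalisation following the four steps above:
    running sum of the histogram, normalisation by the pixel count,
    scaling by the maximum gray level, and a one-to-one remapping."""
    pixels = [p for row in image for p in row]
    # Step 1: histogram and its running sum (cumulative histogram)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    running, cdf = 0, []
    for count in hist:
        running += count
        cdf.append(running)
    # Steps 2 and 3: normalise by total pixels, scale by max gray level
    n = len(pixels)
    mapping = [round(c / n * (levels - 1)) for c in cdf]
    # Step 4: one-to-one remapping of each gray level
    return [[mapping[p] for p in row] for row in image]
```

For a toy 2x2 image with four gray levels, the occupied levels are pushed towards the top of the range, flattening the cumulative histogram.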
5.6 LINEAR GRAY-LEVEL
TRANSFORMATION

A linear transformation of an image is a function that maps each pixel's gray-level value into another gray-level value at the same position according to a linear function:

Input image f(m, n) → Linear Transformation T → Output image g(m, n)

The linear transformation is given by g(m, n) = T[f(m, n)].
5.6.1 Image Negative or Inverse Transformation

The inverse transformation reverses light and dark. An example of inverse transformation is an image negative. A negative image is obtained by subtracting each pixel from the maximum pixel value. For an 8-bit image, the negative can be obtained by reverse scaling of the gray levels, according to the transformation g(m, n) = 255 - f(m, n).

The graphical representation of negation is shown in Fig. 5.12. Negative images are useful in the display of medical images and in producing negative prints of images. The MATLAB code to determine the negative of the input image and the corresponding output are shown in Figs 5.13 and 5.14 respectively.
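As a one-line Python sketch of the same transformation (the function name is an illustrative assumption):

```python
def negative(image):
    """Image negative for an 8-bit image: g(m, n) = 255 - f(m, n)."""
    return [[255 - p for p in row] for row in image]
```

Applying the transformation twice returns the original image, since 255 - (255 - p) = p.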
5.7 NONLINEAR GRAY-LEVEL
TRANSFORMATION

A non-linear transformation maps small equal intervals into non-equal intervals. Some of the non-linear transformations discussed in this section are (i) thresholding, (ii) gray-level slicing, (iii) logarithmic transformation, (iv) exponential transformation, and (v) power law transformation.

5.7.1 Thresholding

Thresholding is required to extract a part of an image which contains all the information. Thresholding is a part of a more general segmentation problem. Thresholding can be broadly classified into (i) hard thresholding, and (ii) soft thresholding.

(a) Hard Thresholding In hard thresholding, pixels having intensity lower than the threshold t are set to zero, and pixels having intensity greater than the threshold are set to 255 or left at their original intensity, depending on the effect that is required. This type of hard thresholding allows us to obtain a binary image from a grayscale image.

Application of Hard Thresholding Hard thresholding can be used to obtain a binary image from a grayscale image. The grayscale mapping which allows us to obtain a binary image from a grayscale image is shown in Fig. 5.15.

The mathematical expression for hard thresholding is given below:

g(m, n) = 0 for f(m, n) < t, and g(m, n) = 255 otherwise.

Here, f(m, n) represents the input image, g(m, n) represents the output image, and t represents the threshold parameter.
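Both variants of hard thresholding described above can be sketched in Python (the function name and the `binary` flag are illustrative assumptions, not from the text):

```python
def hard_threshold(image, t, binary=True):
    """Hard thresholding: pixels below the threshold t go to 0;
    pixels at or above t go to 255 (binary=True) or keep their
    original intensity (binary=False)."""
    if binary:
        return [[0 if p < t else 255 for p in row] for row in image]
    return [[0 if p < t else p for p in row] for row in image]
```

With `binary=True` this yields a binary image from a grayscale image, as the text describes.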
5.7.2 Gray-level Slicing

The purpose of gray-level slicing is to highlight a specific range of gray values. Two different approaches can be adopted for gray-level slicing.
(a) Gray-level Slicing without Preserving
Background

This displays high values for a range of interest and low values in other areas. The main drawback of this approach is that the background information is discarded. The pictorial representation of this approach is shown in Fig. 5.19. The MATLAB code which performs gray-level slicing without preserving the background is shown in Fig. 5.20 and the corresponding output is shown in Fig. 5.21.

(b) Gray-level Slicing with Background In gray-level slicing with background, the objective is to display high values for the range of interest and the original gray-level values in other areas. This approach preserves the background of the image. The graphical representation of this approach to gray-level slicing is shown in Fig. 5.22.

Fig. 5.19 Gray-level slicing without background

The MATLAB code which performs gray-level slicing by preserving background information is shown in Fig. 5.23 and the corresponding output is shown in Fig. 5.24. By comparing Fig. 5.21 with Fig. 5.24, it is obvious that the background information is preserved in Fig. 5.24. The value of the threshold t1 is fixed as 204 and the value of the threshold t2 is fixed as 255 in both gray-level slicing without background and with background.

% This program performs gray-level slicing without background
clc;
clear all;
close all;
x = imread('goldfish.tif');
x = imresize(x, [256 256]);
y = double(x);
[m, n] = size(y);
L = double(255);
a = double(round(L/1.25));
b = double(round(2*L/2));
for i = 1:m
    for j = 1:n
        if (y(i,j) >= a && y(i,j) <= b)
            z(i,j) = L;
        else
            z(i,j) = 0;
        end
    end
end
imshow(uint8(y));
figure, imshow(uint8(z));
5.7.3 Logarithmic Transformation

The logarithmic transformation is given by

g(m, n) = c log(f(m, n) + 1)
This type of mapping spreads out the lower gray levels. For an 8-bit image, the lowest gray level is zero and the highest gray level is 255. It is desirable to map 0 to 0 and 255 to 255. The function g(m, n) = c log(f(m, n) + 1) spreads out the lower gray levels.

The MATLAB code which performs the logarithmic transformation is shown in Fig. 5.25 and the corresponding result is shown in Fig. 5.26.

5.7.4 Exponential Transformation

The non-linear transformation that has been used mainly in conjunction with the logarithmic transformation in multiplicative filtering operations is the exponential transformation. The effect of an exponential transfer

Fig. 5.24 Result of gray-level slicing with background preservation

% This code performs logarithmic transformation
a = imread('crow.jpg');
L = 255;
c = L/log10(1+L);
d = c*log10(1+double(a));
imshow(a), title('Original Image')
figure, imshow(uint8(d)), title('Log Transformation Image')

function on edges in an image is to compress low-contrast edges while expanding high-contrast edges. It generally produces an image with less visible detail than the original, and so it is not a desirable enhancement transformation.

5.7.5 Gamma Correction or Power Law Transformation

The intensity of light generated by a physical device such as a CRT is not a linear function of the applied signal. The intensity produced at the surface of the display is approximately the applied voltage raised to the power of 2.5.

The numerical value of the exponent of this power function is termed gamma. This non-linearity must be compensated in order to achieve correct reproduction of intensity. The power law transformation is given by

g(m, n) = [f(m, n)]^γ

where f(m, n) is the input image and g(m, n) is the output image. Gamma (γ) can take either integer or fractional values. The MATLAB code that performs gamma correction is shown in Fig. 5.27 and the corresponding output is shown in Fig. 5.28. Figure 5.28 shows the power-law transformed image for two different values of gamma. When the value of gamma is less than one, the image appears to be a dark image, and when the value of gamma is greater than one, the image appears to be a bright image.

clc
clear all
close all
a = imread('myna.jpg');
a = rgb2gray(a);
gamma = 1.1;
d = double(a).^gamma;
imshow(a), title('Original Image')
figure, imshow(uint8(d)), title('Power Law Transformation')
5.8 LOCAL OR NEIGHBOURHOOD
OPERATION

In neighbourhood operation, the pixels in an image are modified based on some function of the pixels in the neighbourhood. Linear spatial filtering is often referred to as convolving a mask with an image. The masks are sometimes called convolution masks or convolution kernels.
1. Spatial Filtering

Spatial filtering involves replacing the pixel value corresponding to the centre of the kernel with the sum of the original pixel values in the region covered by the kernel, each multiplied by the corresponding kernel weight.

2. Linear Filtering

In linear filtering, each pixel in the input image is replaced by a linear combination of the intensities of the neighbouring pixels. That is, each pixel value in the output image is a weighted sum of the pixels in the neighbourhood of the corresponding pixel in the input image. Linear filtering can be used to smoothen an image as well as sharpen it. A spatially invariant linear filter can be implemented using a convolution mask. If different filter weights are used for different parts of the image, then the linear filter is spatially varying.

3. Mean Filter or Averaging Filter or Low-pass Filter

The mean filter replaces each pixel by the average of all the values in the local neighbourhood. The size of the neighbourhood controls the amount of filtering. In a spatial averaging operation, each pixel is replaced by a weighted average of its neighbourhood pixels. The low-pass filter preserves the smooth regions in the image and removes the sharp variations, leading to a blurring effect. The 3 by 3 spatial mask which can perform the averaging operation is given below:

3 by 3 low-pass spatial mask = 1/9 *
1 1 1
1 1 1
1 1 1

Similarly, the 5 by 5 averaging mask = 1/25 *
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1

It is to be noted that the sum of the elements is equal to 1 in the case of a low-pass spatial mask. The blurring effect increases with the size of the mask. Normally, the size of the mask is odd so that the central pixel can be located exactly.
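The chapter's listings apply such masks with nested MATLAB loops; as a minimal pure-Python sketch of the same idea, the helper below correlates a mask with an image (the function name and the leave-borders-unchanged policy are illustrative assumptions, not from the text):

```python
def filter2d(image, mask, norm):
    """Apply `mask` to `image` and divide each weighted sum by `norm`;
    border pixels, where the mask does not fit, are left unchanged."""
    m, n = len(image), len(image[0])
    k = len(mask) // 2
    out = [row[:] for row in image]
    for i in range(k, m - k):
        for j in range(k, n - k):
            acc = 0
            for u in range(-k, k + 1):
                for v in range(-k, k + 1):
                    acc += mask[u + k][v + k] * image[i + u][j + v]
            out[i][j] = round(acc / norm)
    return out

# 3 by 3 averaging (low-pass) mask: all ones, normalised by 9
box3 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

A single bright spike of 90 in a dark 3x3 image is averaged down to 10 at the centre, illustrating the blurring effect of the mean filter.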
5.8.4 Limitations of Averaging Filter

The limitations of averaging filters are given below:

1. The averaging operation leads to blurring of the image, and blurring affects feature localisation.

2. If the averaging operation is applied to an image corrupted by impulse noise, the impulse noise is attenuated and diffused but not removed.

3. A single pixel with a very unrepresentative value can affect the mean value of all the pixels in its neighbourhood significantly.
5.8.5 Weighted Average Filter

The mask of a weighted average filter is given by

1 2 1
2 4 2
1 2 1

From the mask, it is obvious that the pixels nearest to the centre are weighted more than the distant pixels, hence the name weighted average filter. The pixel to be updated is replaced by the sum of the nearby pixel values times the weights given in the matrix, divided by the sum of the coefficients in the matrix.
5.8.6 Bartlett Filter

Fig. 5.35 Effect of averaging filter for salt-and-pepper noise

The Bartlett filter is a triangle-shaped filter in the spatial domain. A Bartlett filter is obtained by convolving two box filters in the spatial domain, or equivalently by taking the product of the two box filters in the frequency domain.

The 3 by 3 box filter is given by

1/9 *
1 1 1
1 1 1
1 1 1

From this, a Bartlett window in the spatial domain is obtained by convolving the 3 by 3 box filter with itself:

Bartlett window = 1/81 *
1 2 3 2 1
2 4 6 4 2
3 6 9 6 3
2 4 6 4 2
1 2 3 2 1
5.8.7 Gaussian Filter

Fig. 5.36 Effect of averaging filter for Gaussian noise

Gaussian filters are a class of linear smoothing filters with the weights chosen according to the shape of a Gaussian function. The Gaussian kernel is widely used for smoothing purposes. The Gaussian filter in the continuous space is given by

h(m, n) = [1/(√(2π)σ) e^(−m²/2σ²)] · [1/(√(2π)σ) e^(−n²/2σ²)]

The above expression shows that a Gaussian filter is separable. The Gaussian smoothing filter is a very good filter for removing noise drawn from a normal distribution. Gaussian smoothing is a particular class of averaging in which the kernel is a two-dimensional Gaussian. Gaussian functions have the following properties that make them useful in image processing:

(i) Gaussian functions are rotationally symmetric in two dimensions. The meaning of the term 'rotationally symmetric' is that the amount of smoothing performed by the filter is the same in all directions. Since a Gaussian filter is rotationally symmetric, it will not bias subsequent edge detection in any particular direction.

(ii) The Fourier transform of a Gaussian function is itself a Gaussian function. The Fourier transform of a Gaussian has a single lobe in the frequency spectrum. Images are often corrupted by high-frequency noise, and the desirable features of the image are distributed in both the low- and high-frequency spectrum. The single lobe in the Fourier transform of a Gaussian means that the smoothed image will be corrupted less by unwanted high-frequency contributions, while most of the signal properties will be retained.

(iii) The degree of smoothing is governed by the variance σ². A larger σ implies a wider Gaussian filter and greater smoothing.

(iv) Two-dimensional Gaussian functions are separable. This property implies that large Gaussian filters can be implemented very efficiently: a two-dimensional Gaussian convolution can be performed by convolving the image with a one-dimensional Gaussian and then convolving the result with the same one-dimensional filter oriented orthogonal to the Gaussian used in the first stage.

Gaussian filters can be applied recursively to create a Gaussian pyramid. A Gaussian filter can be generated from Pascal's triangle, as shown in Fig. 5.38.

3x3 Gaussian Kernel The generation of a 3x3 Gaussian mask from Pascal's triangle is illustrated in Fig. 5.39. The 3 by 3 Gaussian kernel can be generated from the third row of Pascal's triangle as given below.

The third row of Pascal's triangle is 1 2 1.

The sum of the elements in the third row of Pascal's triangle is 1 + 2 + 1 = 4.

The Gaussian kernel is the outer product of this row with itself:

1/4 * [1; 2; 1] * 1/4 * [1 2 1] = 1/16 *
1 2 1
2 4 2
1 2 1

Generation of a 4x4 Gaussian Kernel The 4 x 4 Gaussian kernel can be generated from the fourth row of Pascal's triangle.

The fourth row of Pascal's triangle is 1 3 3 1.

The sum of the elements in the fourth row is 1 + 3 + 3 + 1 = 8.

The 4x4 Gaussian kernel is generated as

1/8 * [1; 3; 3; 1] * 1/8 * [1 3 3 1] = 1/64 *
1 3 3 1
3 9 9 3
3 9 9 3
1 3 3 1
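The Pascal's-triangle construction above generalises to any kernel size; a pure-Python sketch (the function names are illustrative assumptions, not from the text):

```python
def pascal_row(n):
    """Row n of Pascal's triangle (row 1 is [1], row 3 is [1, 2, 1])."""
    row = [1]
    for _ in range(n - 1):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

def gaussian_kernel(n):
    """n-by-n Gaussian kernel as the outer product of the nth row of
    Pascal's triangle with itself, returned with its normalising factor."""
    r = pascal_row(n)
    s = sum(r)            # 4 for [1, 2, 1], 8 for [1, 3, 3, 1]
    kernel = [[a * b for b in r] for a in r]
    return kernel, s * s  # divide the kernel by s*s when filtering
```

Dividing the 3x3 kernel by 16 (or the 4x4 kernel by 64) makes the weights sum to 1, as required of a smoothing mask; the outer-product form also demonstrates the separability property from the previous section.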
