Chapter III - Image Enhancement


Enhancing an image provides better contrast and a more detailed image compared to the
non-enhanced image. Image enhancement has many applications. It is used to enhance medical
images, images captured in remote sensing, images from satellites, etc.
Spatial Domain Techniques
These techniques are based on gray level mappings, where the type of mapping used depends
on the criterion chosen for enhancement. As an example, consider the problem of enhancing the
contrast of an image. Let r and s denote any gray level in the original and enhanced image
respectively. Then for every pixel with level r in the original image we create a pixel in the
enhanced image with level s = T(r), where T is the mapping shown below.

Figure1: Spatial Domain Technique


3.1 Point processing
The simplest kind of range transformations are those independent of position (x, y).
The transformation function is given below:

s = T ( r ), this is called point processing

where r is the pixel value of the input image and s is the pixel value of the output image. T is a
transformation function that maps each value of r to a value of s. Image enhancement can be done
through gray level transformations, which are discussed below.

3.1.1 Gray level transformation


There are three basic gray level transformations:

 Linear

 Logarithmic

 Power – law
The overall graph of these transitions has been shown in figure 2.

Figure 2: Graph of gray level transformations

1. Linear transformation
First we will look at the linear transformation. Linear transformation includes the simple identity and negative
transformations.

 Identity transformation

The identity transformation is shown by a straight line. In this transformation, each value of the input image is directly
mapped to the same value of the output image, so there is no difference between the input image and the output image.
Hence it is called the identity transformation. It has been shown below.

Figure 3: Graph of linear transformations

 Negative transformation
This is a linear transformation. In the negative transformation, each value of the input image is subtracted from
L – 1 and the result is mapped onto the output image.

The result is somewhat like this.


Figure 4: Input Image on left and its negative image on right

In this case the following transition has been done.

s = (L – 1) – r

Since the input image of Einstein is an 8 bpp image, the number of levels in this image is 256. Putting
L = 256 in the equation, we get

s = 255 – r

So each pixel value is subtracted from 255, and the result image is shown above. What happens is that
the lighter pixels become dark and the darker pixels become light, which results in the image negative.

It has been shown in the graph below.

Figure 5: Graph for image negative
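As a minimal sketch, the negative transformation above can be written with NumPy; the small sample array here stands in for a real grayscale image:

```python
import numpy as np

# Negative transformation s = (L - 1) - r for an 8 bpp image (L = 256).
def negative(image, L=256):
    return (L - 1) - image

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))  # 0 -> 255, 255 -> 0: light and dark are swapped
```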

2. Logarithmic transformations:

Logarithmic transformation further contains two types of transformation: the log transformation and the inverse log
transformation.

 Log transformation

The log transformations can be defined by this formula


s = c log(r + 1).

where s and r are the pixel values of the output and the input image and c is a constant. The value 1 is added
to each pixel value of the input image because if there is a pixel intensity of 0 in the image, log(0)
is undefined. So 1 is added, to make the minimum argument of the log at least 1.

During log transformation, the dark pixel values in an image are expanded compared to the higher pixel
values, which are compressed. This results in the following image
enhancement. The value of c in the log transform adjusts the amount of enhancement you are looking for.

Figure 6: Example of log transform

The inverse log transform is opposite to log transform.
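A short sketch of the log transform; the scaling constant c = 255/ln(256) is a common choice (an assumption here, not fixed by the text) that maps the full 8-bit range back onto 0–255:

```python
import numpy as np

# Log transform s = c*log(1 + r): dark values expand, bright values compress.
def log_transform(image):
    c = 255.0 / np.log(256.0)  # assumed scaling so that 255 maps to 255
    s = c * np.log1p(image.astype(np.float64))
    return np.round(s).astype(np.uint8)

img = np.array([[0, 10, 100, 255]], dtype=np.uint8)
print(log_transform(img))  # the dark value 10 is pushed far up the range
```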

3. Power – Law transformations

There are two further transformations in power law transformations: the nth power and nth root
transformations. These transformations can be given by the expression:

s = c·r^γ

This symbol γ is called gamma, due to which this transformation is also known as the gamma transformation.

Varying the value of γ varies the enhancement of the image. Different display devices / monitors have
their own gamma correction, which is why they display their images at different intensities.

This type of transformation is used for enhancing images for different types of display devices. The gamma of
different display devices is different. For example, the gamma of a CRT lies between 1.8 and 2.5, which means
the image displayed on a CRT is dark.

CORRECTING GAMMA.

s = c·r^(1/γ)

s = c·r^(1/2.5)
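As a sketch, gamma correction for an assumed display gamma of 2.5 (the CRT case above), with intensities normalized to [0, 1] and c = 1:

```python
import numpy as np

# Gamma correction s = c * r^(1/gamma), applied on normalized intensities.
def gamma_correct(image, gamma=2.5, c=1.0):
    r = image.astype(np.float64) / 255.0
    s = c * np.power(r, 1.0 / gamma)
    return np.round(s * 255.0).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img))  # mid-tones are brightened to offset a dark display
```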

The same image but with different gamma values has been shown here.
Figure 7: Example of power law transform

Figure 8: Power law transforms are used to darken the image. Different curves highlight different detail
3.1.2 Thresholding
Image thresholding is a simple, yet effective, way of partitioning an image into a foreground and background.
From a grayscale image, thresholding can be used to create binary images.

 Purpose

The purpose of thresholding is to extract those pixels from an image which represent
an object (either text or other line image data such as graphs or maps). Though the desired information is binary,
the pixels represent a range of intensities. Thus the objective of binarization is to mark pixels that belong
to true foreground regions with a single intensity and background regions with a different intensity.

Figure 9: Threshold, Density slicing

In many vision applications, it is useful to be able to separate out the regions of the image
corresponding to objects in which we are interested, from the regions of the image that
correspond to background. Thresholding often provides an easy and convenient way to perform
this segmentation on the basis of the different intensities or colors in the foreground and
background regions of an image.

The input to a thresholding operation is typically a grayscale or color image. In the simplest
implementation, the output is a binary image representing the segmentation. Black pixels
correspond to background and white pixels correspond to foreground (or vice versa). In simple
implementations, the segmentation is determined by a single parameter known as the intensity
threshold. In a single pass, each pixel in the image is compared with this threshold. If the pixel's
intensity is higher than the threshold, the pixel is set to, say, white in the output. If it is less than
the threshold, it is set to black.

In more sophisticated implementations, multiple thresholds can be specified, so that a band of


intensity values can be set to white while everything else is set to black. For color or multi-
spectral images, it may be possible to set different thresholds for each color channel, and so
select just those pixels within a specified cuboid in RGB space. Another common variant is to
set to black all those pixels corresponding to background, but leave foreground pixels at their
original color/intensity (as opposed to forcing them to white), so that this information is not
lost.
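The single-threshold operation described above can be sketched as follows, using the usual polarity (above the threshold becomes white):

```python
import numpy as np

# Binarize: pixels brighter than T become white (255), the rest black (0).
def threshold(image, T=128):
    return np.where(image > T, 255, 0).astype(np.uint8)

img = np.array([[30, 130], [200, 90]], dtype=np.uint8)
print(threshold(img))
```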

3.1.3 Piecewise Linear Transformation Function


Rather than using a well-defined mathematical function, we can use an arbitrary user-defined
transform. The images below show a contrast-stretching linear transform used to add contrast to a
poor quality image.

1. Contrast stretching
One of the simplest piecewise linear functions is contrast stretching. Low contrast images can
result from poor illumination or a wrong setting of the lens aperture during image acquisition.
Contrast stretching is a process that expands the range of intensity levels in an image.

The contrast of an image is a measure of its dynamic range, or the "spread" of its histogram.
Contrast stretching is a simple image enhancement technique that attempts to improve the
contrast in an image by `stretching' the range of intensity values it contains to span a desired
range of values, e.g. the full range of pixel values that the image type concerned allows.
Figure 10 shows a typical transformation used for contrast stretching.

Figure 10: Transformation used for contrast stretching

Let l, m and n be the slopes of the three line segments. Then

S = l·r for 0 ≤ r ≤ a

  = m·(r – a) + v for a ≤ r ≤ b

  = n·(r – b) + w for b ≤ r ≤ L − 1

L → maximum gray value


Figure 11: Example of contrast stretching

How It Works

Before the stretching can be performed it is necessary to specify the upper and lower pixel
value limits over which the image is to be normalized. Often these limits will just be the
minimum and maximum pixel values that the image type concerned allows. For example for 8-
bit graylevel images the lower and upper limits might be 0 and 255. Call the lower and the
upper limits a and b respectively.

The simplest sort of normalization then scans the image to find the lowest and highest pixel
values currently present in the image.
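The scan-and-scale normalization just described can be sketched as follows; `a` and `b` are the chosen output limits, and `c`, `d` the lowest and highest values found in the image:

```python
import numpy as np

# Linear normalization: map the present range [c, d] onto the target [a, b].
def normalize(image, a=0, b=255):
    c, d = float(image.min()), float(image.max())
    out = (image.astype(np.float64) - c) * (b - a) / (d - c) + a
    return out.astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
print(normalize(img))  # 50 -> 0 and 200 -> 255: the range is stretched
```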

FOR EXAMPLE:

Consider the following image, whose matrix is:

100 100 100 100

100 100 100 100
100 100 100 100
100 100 100 100

The maximum value in this matrix is 100, and so is the minimum.

Contrast = maximum pixel intensity − minimum pixel intensity

= 100 − 100

= 0

A contrast of 0 means that this image has no contrast.

Contrast stretching the image in Figure 12a produces what is shown in Figure 13a. The image
now takes on the full 8-bit range, and correspondingly the new histogram is spread out over the
range 0–255, resulting in an image that subjectively looks far better to the human eye. However,
the drawback to modifying the histogram of an image in such a manner comes at the expense
of greater "graininess." If the original image is of rather low contrast and does not contain
much information, stretching the contrast can only accomplish so much.

Contrast stretching is a common technique, and can be quite effective if utilized properly. In


the field of medical imaging, an x-ray camera that consists of an array of x-ray detectors creates
what are known as digital radiographs, or digital x-ray images. The detectors accumulate charge
proportional to the amount of x-ray illumination they receive, which depends on the quality of
the x-ray beam and the object being imaged. A high-density object means fewer x-rays pass
through the object to eventually reach the detectors (hence the beam is said to be attenuated),
which results in such higher density areas appearing darker.

Figure 12:. X-ray image courtesy of SRS-X, http://www.radiology.co.uk/srs-x. (a) Low contrast
chest x-ray image, (b) Low contrast histogram.

Figure 13. (a) Contrast-stretched chest x-ray image, (b) Modified histogram.
2. Gray-level slicing

Highlighting a specific range of gray levels in an image is often desired.

Applications include enhancing features such as masses of water, crop regions, or
certain elevation areas in satellite imagery.

Another application is enhancing flaws in x-ray images. There are two main
approaches:

 highlight a range of intensities while diminishing all others to a constant low level.

 highlight a range of intensities but preserve all others.

Figure 14 illustrates the intensity level slicing process. The left figure shows a
transformation function that highlights a range [A, B] while diminishing all the others.
The right figure highlights a range [A, B] but preserves all the others.

s = T(r) = 255 if A ≤ r ≤ B, and 0 otherwise

Figure14: intensity level slicing process


3. Bit-plane slicing

Instead of highlighting gray level ranges, highlighting the contribution made to total image
appearance by specific bits might be desired. Suppose that each pixel in an image is represented by
8 bits. Imagine the image is composed of eight 1-bit planes, ranging from bit plane 0 (LSB) to bit plane
7 (MSB).

In terms of 8-bits bytes, plane 0 contains all lowest order bits in the bytes comprising the pixels in
the image and plane 7 contains all high order bits.

Figure 15: Bit plane slicing

Separating a digital image into its bit planes is useful for analyzing the relative importance played by
each bit of the image; it helps determine the adequacy of the number of bits used to quantize
each pixel, which is useful for image compression.

In terms of bit-plane extraction for an 8-bit image, the binary image for bit plane 7 is
obtained by processing the input image with a thresholding gray-level transformation function that
maps all levels between 0 and 127 to one level (e.g. 0) and maps all levels from 128 to 255 to
another (e.g. 255).

Example :
Digitally, an image is represented in terms of pixels. These pixels can be expressed further in
terms of bits. Consider the image ‘coins.png’ and the pixel representation of the image.
Figure 16: Example of bit plane slicing
Consider the pixels that are bounded within the yellow line. The binary formats
for those values are (8-bit representation)

The binary format for the pixel value 167 is 10100111


Similarly, for 144 it is 10010000
This 8-bit image is composed of eight 1-bit planes.
Plane 1 contains the lowest order bit of all the pixels in the image.

And plane 8 contains the highest order bit of all the pixels in the image.
Figure 17: The 8 bit-planes of a gray-scale image (the one on left). There are eight because the
original image uses eight bits per pixel.

For an image having 256 grey levels i.e. from 0 to 255 each level can be represented by 8 bits
where 00000000 represents black and 11111111 represents white.
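The bit-plane extraction above can be sketched with a shift and a mask; the two pixel values are the ones worked through in the example:

```python
import numpy as np

# Extract bit plane k of an 8-bit image: plane 7 is the MSB, plane 0 the LSB.
def bit_plane(image, k):
    return (image >> k) & 1

img = np.array([[167, 144]], dtype=np.uint8)  # 10100111 and 10010000 in binary
print(bit_plane(img, 7))  # MSB plane: both bits are 1
print(bit_plane(img, 0))  # LSB plane: 1 for 167, 0 for 144
```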

Example 1: Show bit plane slicing of the following image.

2 4 0
1 7 6
3 5 3
Solution → maximum grey level = 7, hence the image can be represented by three bits.

Example 2: Obtain the digital negative of the following image. The image contains 256 grey levels.

121 205 217 156 151

139 127 157 117 125


250 200 100 90 80

61 110 255 60 98

121 210 156 171 205

Solution → number of grey levels = 256, from 0 to 255

S = 255 – r

134 50 38 99 104

116 128 98 138 130

5 55 155 165 175

194 145 0 195 157

134 45 99 84 50
Example 3: For the given image find:

i) Digital negative and


ii) Bit plane slicing
4 5 6 7

7 5 2 0

3 4 6 5

2 3 6 1

Solution: digital negative

S = 7 – r

3 2 1 0
0 2 5 7
4 3 1 2
5 4 1 6

Bit plane slicing.


Example 4: → For the following image perform contrast stretching, given
r2 = 5, r1 = 3, S2 = 6, S1 = 2.

f(x, y) =
4 3 2 1
3 1 2 4
5 1 6 2
2 3 5 6
Solution:

l, m and n are the slopes; r1 = 3, r2 = 5, s1 = 2, s2 = 6

l = s1 / r1 = 2 / 3 = 0.66

m = (s2 − s1) / (r2 − r1) = (6 − 2) / (5 − 3) = 2

n = (L − s2) / (L − r2) = (7 − 6) / (7 − 5) = 0.5

S = l·r for 0 ≤ r ≤ 3

  = m·(r – a) + v for 3 ≤ r ≤ 5

  = n·(r – b) + w for 5 ≤ r ≤ 7

where a = r1 = 3, b = r2 = 5, v = s1 = 2, w = s2 = 6, and L = 7 is the maximum grey value.

r varies from 0 to 7; find the values of s for all r values.

r s
0 l·r = 0.66 × 0 = 0
1 l·r = 0.66 × 1 = 0.66
2 l·r = 0.66 × 2 = 1.32
3 m(r – a) + v = 2(3 − 3) + 2 = 2
4 m(r – a) + v = 2(4 − 3) + 2 = 4
5 n(r – b) + w = 0.5(5 − 5) + 6 = 6
6 n(r – b) + w = 0.5(6 − 5) + 6 = 6.5
7 n(r – b) + w = 0.5(7 − 5) + 6 = 7

Hence the contrast stretched image after rounding off is:

4 2 1 1
2 1 1 4
6 1 7 1
1 2 6 7
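The worked mapping above can be checked with a short sketch; rounding is done half-up (so 6.5 becomes 7), as in the solution, and the maximum grey value 7 is written as L − 1 with L = 8 levels:

```python
# Three-segment contrast stretch from Example 4 (r1=3, r2=5, s1=2, s2=6).
def stretch_piecewise(r, r1=3, r2=5, s1=2, s2=6, L=8):
    l = s1 / r1                      # slope of the lower segment
    m = (s2 - s1) / (r2 - r1)        # slope of the middle segment
    n = (L - 1 - s2) / (L - 1 - r2)  # slope of the upper segment
    if r < r1:
        s = l * r
    elif r <= r2:
        s = m * (r - r1) + s1
    else:
        s = n * (r - r2) + s2
    return int(s + 0.5)              # round half up, e.g. 6.5 -> 7

f = [[4, 3, 2, 1], [3, 1, 2, 4], [5, 1, 6, 2], [2, 3, 5, 6]]
print([[stretch_piecewise(r) for r in row] for row in f])
```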

Example 5: Perform intensity level (grey level) slicing on the 3 bpp image below for r1 = 3 and r2 = 5.
Draw the modified image using the with-background and without-background transformations.

2 1 2 2 1
2 3 4 5 2
6 2 7 6 0
2 6 6 5 1
0 3 2 2 1

Solution:
Without background (clipping):

0 0 0 0 0
0 7 7 7 0
0 0 0 0 0
0 0 0 7 0
0 7 0 0 0

With background:

2 1 2 2 1
2 7 7 7 2
6 2 7 6 0
2 6 6 7 1
0 7 2 2 1

Example 6: For the 3-bit 4 x 4 size image below, perform the following operations.

1) Negation
2) Thresholding with T = 4
3) Intensity level slicing with r1 = 2 and r2 = 5
4) Bit plane slicing for the MSB and LSB planes
5) Clipping with r1 = 2 and r2 = 5

1 2 3 0

2 4 6 7
5 2 4 3

3 2 6 1

Solution. 1) Negation

S=(L-1)-r

= (8-1)-r

=7-r

6 5 4 7

5 3 1 0
G(x,y) =
2 5 3 4

4 5 1 6

2) Thresholding

g(x, y) = 7 if f(x, y) < 4
        = 0 if f(x, y) ≥ 4

7 7 7 7
7 0 0 0
0 7 0 7
7 7 0 7
3) Intensity level slicing

Without background:

0 7 7 0
7 7 0 0
7 7 7 7
7 7 0 0

With background:

1 7 7 0
7 7 6 7
7 7 7 7
7 7 6 1

4) Bit plane slicing

MSB plane:

0 0 0 0
0 1 1 1
1 0 1 0
0 0 1 0

LSB plane:

1 0 1 0
0 0 0 1
1 0 0 1
1 0 0 1


plane
3.2 Neighborhood Processing (Filtering)

Image filtering can be grouped into two types depending on the effect:

 Low pass filters (Smoothing)


Low pass filtering (aka smoothing), is employed to remove high spatial frequency noise
from a digital image. The low-pass filters usually employ moving window operator which
affects one pixel of the image at a time, changing its value by some function of a local
region (window) of pixels. The operator moves over the image to affect all the pixels in
the image.
 High pass filters (Edge Detection, Sharpening)
A high-pass filter can be used to make an image appear sharper. These filters emphasize
fine details in the image - the opposite of the low-pass filter. High-pass filtering works in
the same way as low-pass filtering; it just uses a different convolution kernel.

When filtering an image, each pixel is affected by its neighbors, and the net effect of filtering is
moving information around the image.

3.2.1 Low Pass Filtering

A low pass filter is the basis for most smoothing methods. An image is smoothed by decreasing
the disparity between pixel values, averaging nearby pixels. Using a low pass filter tends to
retain the low frequency information within an image while reducing the high frequency
information. An example is an array of ones divided by the number of elements within the
kernel, such as the 3 by 3 averaging kernel shown in the examples below.
1. Averaging ( Mean) Filter

Averaging (Mean) filtering is easy to implement. It is used as a method of smoothing images,


reducing the amount of intensity variation between one pixel and the next resulting in reducing
noise in images.

The idea of mean filtering is simply to replace each pixel value in an image with the mean
(`average') value of its neighbors, including itself. This has the effect of eliminating pixel values
which are unrepresentative of their surroundings. Mean filtering is usually thought of as a
convolution filter. Like other convolutions it is based around a kernel, which represents the
shape and size of the neighborhood to be sampled when calculating the mean.
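A minimal sketch of 3x3 mean filtering with explicit loops; border pixels are kept unchanged here (one possible convention, matching the worked examples later in this section):

```python
import numpy as np

# Replace each interior pixel with the mean of its 3x3 neighbourhood.
def mean_filter(image):
    out = image.astype(np.float64).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = image[i-1:i+2, j-1:j+2].mean()
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 100.0  # one noisy pixel
print(mean_filter(img))  # the spike is averaged down: 100 -> (8*10+100)/9 = 20
```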

Figure 18 : Original image on left and it’s low pass filtered image on right

We can see that the filtered image (right) has been blurred a little compared to the original input
(left). As mentioned earlier, the low pass filter can be used for denoising. Let's test it: first, to
make the input a little dirty, we spray some salt and pepper noise on the image, and then apply the
mean filter. You can observe that after applying the averaging filter the salt and pepper noise is
reduced.
Figure 19 : Image with salt and pepper noise on left and it’s low pass filtered image on right

Examples on Low Pass Filtering

1) Averaging filter masks

(1/9) ×
1 1 1
1 1 1
1 1 1

(1/25) ×
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1

Example 1: Filter the following image using 3 x 3 neighbourhood averaging without zero padding.

10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
Solution: Place the 3 x 3 mask on the image, starting from the top left corner. Keep the border pixels
as they are; perform convolution and change the value of the centre pixel. Shift the mask towards the
right and then downwards.

The result after convolution is

10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10

10 10 10 10 10 10 10 10
23.3 23.3 23.3 23.3 23.3 23.3 23.3 23.3
36.6 36.6 36.6 36.6 36.6 36.6 36.6 36.6
50 50 50 50 50 50 50 50

50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50

We can see that the low frequency regions have remained unchanged, but the sharp edge between 10 and 50
has become blurred. This kind of averaging filter is good for removing Gaussian noise; it achieves
filtering by blurring the noise. Some other low pass averaging masks are:

(1/6) ×
0 1 0
1 2 1
0 1 0

(1/10) ×
1 1 1
1 2 1
1 1 1

Example 2: Filter the following image using 3 x 3 neighbourhood averaging, assuming i) zero padding
ii) pixel replication.

1 2 3 2
4 2 5 1
1 2 6 3
2 4 6 7

Solution: The 3 x 3 averaging mask is

(1/9) ×
1 1 1
1 1 1
1 1 1

i) Zero padding.
0 0 0 0 0 0

0 1 1.88 1.66 1.22 0

0 1.33 2.88 2.88 2.22 0

0 1.66 3.55 4 3.11 0

0 1 2.33 3.11 2.44 0

0 0 0 0 0 0

Round off

ii)Pixel Replication

1 1 2 3 2 2
1 1 2 3 2 2
4 4 2 5 1 1
1 1 2 6 3 3
2 2 4 6 7 7
2 2 4 6 7 7

1 1 2 3 2 2

1 2 2.55 2.44 2.33 2


Round off.
4 2 2.88 2.88 2.88 1

1 2.44 3.55 4 4.33 3

2 2.22 3.66 5 5.77 7

2 2 4 6 7 7
Figure 20 : Mean filter has some effect on the salt and pepper noise but not much. It just
made them blurred.

Median filter

Figure 20: Effect of median filter

Much better. Unlike the previous filter, which just uses the mean value, this time we use the
median. Median filtering is a nonlinear operation often used in image processing to reduce "salt
and pepper" noise.

Low pass median filtering steps

Steps to perform median filtering:

1) Assume a 3 x 3 empty mask

2) Place the mask at the left hand corner

3) Arrange the 9 pixels under the mask in ascending order

4) Choose the median from these 9 values

5) Place the median at the centre

6) Move the mask in a similar fashion from left to right and top to bottom.
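The steps above can be sketched directly (borders kept as-is, an assumption consistent with the averaging examples):

```python
import numpy as np

# 3x3 median filter: sort the nine neighbours and take the middle value.
def median_filter(image):
    out = image.copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.median(image[i-1:i+2, j-1:j+2])
    return out

img = np.full((5, 5), 10)
img[2, 2] = 250  # salt noise
print(median_filter(img))  # the outlier is replaced by the median, 10
```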

Example 1: Apply the median filter to the following image.

10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
10 250 10 10 10 10 10 10

10 10 10 10 10 10 10 10
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50

Solution: Place the mask over the noisy pixel and arrange the nine pixels under it in ascending order:

(10, 10, 10, 10, 10, 10, 10, 10, 250)

The median is the 5th value, which is 10; place this value at the centre.

Move the mask from left to right and repeat the procedure over the whole image;
we get the following image.

10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
10 10 10 10 10 10 10 10
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50
50 50 50 50 50 50 50 50

We can see that the salt & pepper noise is removed (the 250 has been replaced by 10), and unlike
the averaging filter, the edge between 10 and 50 is not blurred.

Example 2: Given the following image, what is the output using a 3 x 3 averaging filter and a median filter?

3 3 3 3 3 3
3 3 3 3 3 3
3 3 10 10 3 3
3 10 3 3 3 3
3 3 3 3 8 3
3 3 3 3 3 3
3 3 3 3 3 3

3.2.2 High Pass Filtering

A high pass filter is the basis for most sharpening methods. An image is sharpened when
contrast is enhanced between adjoining areas with little variation in brightness or darkness.

A high pass filter tends to retain the high frequency information within an image while reducing
the low frequency information. The kernel of the high pass filter is designed to increase the
brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a
single positive value at its center, which is completely surrounded by negative values. The
following array is an example of a 3 by 3 kernel for a high pass filter:
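A sketch of high-pass filtering with the centre-8 kernel (shown here unnormalized; the examples below scale it by 1/9, which only changes the output by a constant factor). Border outputs are left at zero:

```python
import numpy as np

# High-pass kernel: centre 8, all eight neighbours -1 (sums to zero,
# so perfectly flat regions produce a response of 0).
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def high_pass(image):
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = (kernel * image[i-1:i+2, j-1:j+2]).sum()
    return out

img = np.full((5, 5), 10.0)
img[:, 3:] = 50.0  # vertical edge between columns 2 and 3
print(high_pass(img))  # zero on flat areas, strong response along the edge
```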

Example 1: Apply the high pass filter to the following image.


Solution: The high pass mask is

(1/9) ×
-1 -1 -1
-1 8 -1
-1 -1 -1

Example 2: Apply low pass and high pass spatial masks to the following image, and prove that the
high pass output equals the original image minus the low pass output, using zero padding.

Solution:

The low pass mask is

(1/9) ×
1 1 1
1 1 1
1 1 1

and the high pass mask is

(1/9) ×
-1 -1 -1
-1 8 -1
-1 -1 -1

Applying the high pass mask gives (1/9)(9·f(x, y) − Σ window) = f(x, y) − window mean, which is
exactly the original image minus the low pass output.
3.2.3 The High-Boost Filter
It is often desirable to emphasize high frequency components representing the image details
without eliminating low frequency components representing the basic form of the signal. In this
case, the high-boost filter can be used to enhance high frequency component while still keeping
the low frequency components:

• The high-boost filter can be used to enhance high frequency components while still
keeping the low frequency components.
• The high boost filter is composed of an all pass filter and an edge detection filter (such as the
Laplacian filter). Thus, it emphasizes edges and results in a sharper image.
• The high-boost filter is a simple sharpening operator in signal and image processing.
• It is used for amplifying high frequency components of signals and images. The
amplification is achieved via a procedure which subtracts a smoothed version of the
data from the original one.
• In image processing, we can sharpen the edges of an image through this amplification and
obtain a clearer image.
• High boost filtering is expressed in equation form as follows:

FHB – high boost filtered image

FLP – low pass filtered image

FHP – high pass filtered image

HLP – low pass filter

HHP – high pass filter

HHB – high boost filter

High boost = (A − 1)·Original + High pass

FHB(x, y) = (A − 1)·f(x, y) + FHP(x, y)

High boost = A·Original − Low pass

where "A" is a constant.
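A sketch of high-boost filtering following the equations above; the low-pass part is taken as a 3x3 mean (an assumed choice, not fixed by the text), and borders are left unsmoothed:

```python
import numpy as np

# High boost: F_HB = (A-1)*f + F_HP = A*f - f_LP, with f_LP a 3x3 mean.
def high_boost(image, A=2.0):
    smooth = image.astype(np.float64).copy()
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            smooth[i, j] = image[i-1:i+2, j-1:j+2].mean()
    return A * image - smooth

img = np.full((5, 5), 10.0)
img[2, 2] = 40.0  # a small bright detail
print(high_boost(img))  # unaffected pixels stay near 10; the detail is boosted
```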


 

Figure 21 : (left) Original image, (right) High boost image

3.3 Histograms
Histograms are graphs of a distribution of data designed to show centering, dispersion (spread), and
shape (relative frequency) of the data. Histograms can provide a visual display of large amounts of data
that are difficult to understand in tabular or spreadsheet form. Usually a histogram has bars that
represent the frequency of occurrence of data values in the whole data set.

A histogram has two axes, the x axis and the y axis.

The x axis contains the event whose frequency you have to count.

The y axis contains the frequency.

The different heights of the bars show the different frequencies of occurrence of the data.

A typical histogram looks like the figure below.

Figure 21: Sample histogram


 Histogram of an image

An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in
a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific
image a viewer will be able to judge the entire tonal distribution at a glance.

The histogram of an image, like other histograms, also shows frequency. But an image histogram shows
the frequency of pixel intensity values: the x axis shows the gray level intensities
and the y axis shows the frequency of these intensities.

Figure 22: An image and its histogram

The x axis of the histogram shows the range of pixel values. Since it is an 8 bpp image, it has
256 levels of gray, so the x axis starts at 0 and ends at 255, with tick marks every 50. The y axis
shows the count of these intensities.

As you can see from the graph, most of the bars with high frequency lie in the first half,
which is the darker portion. That means the image we have is dark, and this can be confirmed
from the image itself.

 Applications of Histograms

Histograms have many uses in image processing. The first, as discussed above,
is the analysis of the image: we can predict properties of an image by just looking at its histogram.
It is like looking at an x-ray of a bone.

The second use of the histogram is for brightness purposes. Histograms have wide application in
image brightness, and not only in brightness: histograms are also used in adjusting the contrast of
an image.

Another important use of the histogram is to equalize an image.

And last but not least, the histogram has wide use in thresholding. This is mostly used in
computer vision.
3.3.1 Linear stretching of histograms

Linear stretching the image in Figure 23a produces what is shown in Figure 24. The image now takes on
the full 8-bit range, and correspondingly the new histogram is spread out over the range 0–255,
resulting in an image that subjectively looks far better to the human eye. If the original image is
of low contrast and does not contain much information, stretching the contrast can only
accomplish so much.

Figure 23. : (a) Low contrast Prutah image, (b) Low contrast histogram.

Figure 24. (a) Contrast-stretched Prutha image, (b) Modified histogram.


 Contrast
Contrast is the difference between minimum and maximum pixel intensity

Figure 25: Contrast stretching

In this method we do not alter the basic shape of the histogram, but we spread it so as to cover the entire
range. We do this by using a straight line equation having slope (Smax − Smin)/(rmax − rmin),

where

Smax → maximum grey level of the output image

Smin → minimum grey level of the output image

rmax → maximum grey level of the input image

rmin → minimum grey level of the input image

S = T(r) = ((Smax − Smin)/(rmax − rmin))·(r − rmin) + Smin

This transformation function shifts and stretches the grey level range of the input image to occupy the
entire dynamic range (Smin, Smax).
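The stretching formula can be checked one grey level at a time; this sketch rounds half-up as the worked examples do, and uses the values of Example 1 below (rmin = 2, rmax = 6 stretched onto 0..7):

```python
# Linear histogram stretch: map [rmin, rmax] onto [smin, smax].
def stretch_level(r, rmin, rmax, smin, smax):
    s = (smax - smin) / (rmax - rmin) * (r - rmin) + smin
    return int(s + 0.5)  # round half up

mapping = {r: stretch_level(r, 2, 6, 0, 7) for r in range(2, 7)}
print(mapping)
```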
Example 1:

Perform histogram stretching on the following image so that the new image has a dynamic range of (0, 7).

Grey level 0 1 2 3 4 5 6 7
No. of pixels 0 0 50 60 50 20 10 0

Solution:

rmin = 2, rmax = 6, Smin = 0, Smax = 7

S = ((Smax − Smin)/(rmax − rmin))·(r − rmin) + Smin

= ((7 − 0)/(6 − 2))·(r − 2) + 0

= (7/4)·(r − 2)

∴ for r = 2, s = 0

for r = 3, s = 1.75 ≈ 2

for r = 4, s = 3.5 ≈ 4

for r = 5, s = 5.25 ≈ 5

for r = 6, s = 7

So the mapping is: 2 → 0, 3 → 2, 4 → 4, 5 → 5, 6 → 7.
Modified histogram is
Grey level 0 1 2 3 4 5 6 7
No. of pixels 50 0 60 0 50 20 0 10

Original histogram (left) and histogram after stretching (right), with the number of pixels on the vertical axis.

Example 2:

Grey level 0 1 2 3 4 5 6 7
No. of pixel 100 90 85 70 0 0 0 0

Perform histogram stretching so that new image has a dynamic range of(0,7)

Solution :

rmin=0 , rmax=3 Smin=0 Smax=7

S = ((Smax − Smin)/(rmax − rmin))·(r − rmin) + Smin

= (7/3)·(r − 0) + 0

∴ for r = 0, s = 0

for r = 1, s = 2.33 ≈ 2

for r = 2, s = 4.67 ≈ 5

for r = 3, s = 7

So the mapping is: 0 → 0, 1 → 2, 2 → 5, 3 → 7.
Grey level 0 1 2 3 4 5 6 7
No. of pixel 100 0 90 0 0 85 0 70

Original histogram (left) and histogram after stretching (right), with the number of pixels on the vertical axis.

Example 3:
Given the frequency table.

Grey level 0 1 2 3 4 5 6 7
No. of pixel 0 0 a b c d e 0
Perform linear stretching

Example 4: Perform histogram stretching on the 8 x 8, 8-level grey image shown in the table.

Grey level (rk) 0 1 2 3 4 5 6 7


No. of pixel(pk) 0 0 5 20 20 19 0 0

Solution:

rmin=2 , rmax=5
Smin=0 smax=7

∴ s = ((7 − 0)/(5 − 2))·(r − 2) + 0

= (7/3)·(r − 2)

for r = 2, s = 0

for r = 3, s = 2.33 ≈ 2

for r = 4, s = 4.66 ≈ 5

for r = 5, s = 7

The resulting mapping has s = 0 corresponding to r = 2; therefore the pixels that have grey level r = 2
in the original image are mapped to level s = 0 in the output image, and likewise for the other levels.

Modified histogram is

Grey level(rk) 0 1 2 3 4 5 6 7
No. of pixels(pk) 5 0 20 0 0 20 0 19

Original histogram (left) and histogram after stretching (right), with the number of pixels on the vertical axis.

3.3. 2 Histogram Equalization


In linear stretching the shape of the histogram remains the same, but there are many applications where we
need a flat histogram. Histogram equalization is a process that attempts to spread out the gray levels in an
image so that they are evenly distributed across their range; it reassigns
the brightness values of pixels based on the image histogram, making the histogram of the resultant
image as flat as possible. Histogram equalization provides more visually pleasing results across a
wider range of images.

 Example Histogram Equalization algorithm

This technique is similar to histogram stretching: it tries to flatten the histogram to
create a better quality image. A good image is one which has an equal number of pixels at all its
grey levels. The histogram equalization method treats the image as a probability distribution.

PDF (probability density function)

As its name suggests, the PDF gives the probability of each number in the data set; you can say that it
basically gives the count or frequency of each element.

How pdf is calculated:

We will calculate the PDF in two different ways. First from a matrix (an image is nothing more
than a two-dimensional matrix), and then from a histogram.

Consider this matrix:

1 2 7 5 6
7 2 3 4 5
0 1 5 7 3
1 2 5 6 7
6 1 0 3 4

To calculate the PDF of this matrix, we take each value in turn and count how many times that value
appears in the whole matrix. The counts can then be represented in a histogram, or in a table like
the one below.

0 2 2/25

1 4 4/25

2 3 3/25

3 3 3/25
4 2 2/25

5 4 4/25

6 3 3/25

7 4 4/25
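As a quick check, the table above can be reproduced in a few lines of Python. This is a sketch using only the standard library; the matrix is the one from the text.

```python
from collections import Counter

# The 5x5 example matrix from the text.
matrix = [
    [1, 2, 7, 5, 6],
    [7, 2, 3, 4, 5],
    [0, 1, 5, 7, 3],
    [1, 2, 5, 6, 7],
    [6, 1, 0, 3, 4],
]

pixels = [v for row in matrix for v in row]
counts = Counter(pixels)          # frequency of each gray level
total = len(pixels)               # 25 pixels in all

# PDF: count of each level divided by the total number of pixels.
pdf = {level: counts.get(level, 0) / total for level in range(8)}

for level in range(8):
    print(level, counts.get(level, 0), f"{counts.get(level, 0)}/{total}")
```

The printed counts match the table: level 0 appears twice, level 1 four times, and so on.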

CALCULATING PDF FROM HISTOGRAM

A histogram shows the frequency of each gray level value, for example for an 8 bits per pixel
image. To calculate its PDF, we simply look at the count of each bar on the vertical axis and
divide it by the total count.

Note that a PDF is not, in general, monotonically increasing. To obtain a monotonically
increasing function we calculate the CDF.

CDF (cumulative distribution function)

The CDF is the cumulative sum of the values calculated by the PDF: each entry is the sum of the
current PDF value and all the previous ones.

HOW IT IS CALCULATED?

We calculate the CDF from the histogram (the PMF). Since the histogram is not monotonically
increasing, we make it grow monotonically: we keep the first value as it is, then add the first
value to the second, and so on.

In the resulting CDF the first value of the PDF remains as it is, the second value of the PDF is
added to the first, and so on; the final value equals 1. Now the
function is growing monotonically, which is a necessary condition for histogram equalization.
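The running-sum construction can be sketched directly in plain Python; the PDF values below are those of the 5x5 matrix example above.

```python
# PDF of the 5x5 example matrix, gray levels 0..7.
pdf = [2/25, 4/25, 3/25, 3/25, 2/25, 4/25, 3/25, 4/25]

# CDF: keep the first value, then add each PDF value to the running total.
cdf = []
running = 0.0
for p in pdf:
    running += p
    cdf.append(running)

print(cdf)  # non-decreasing, final value 1.0
```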

The histogram equalization process is summarized in the following steps.

1) Calculate the PDF: pr(rk) = nk / n
   where nk is the number of pixels with grey level rk and n is the total number of pixels in
   the image. (Dividing by n normalizes the histogram.)
2) Form the cumulative histogram (CDF): sk = Σ pr(rk)
3) Multiply these values by the maximum grey level value: (L - 1) × sk, where L is the
   number of grey levels.
4) Round off the values.
5) Map each original grey level to the result of step 4 by a one-to-one correspondence.
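The steps above can be collected into one helper function. This is a sketch in plain Python, rounding half up as the worked examples below do.

```python
def equalize_histogram(hist, levels):
    """Map each gray level to its equalized level:
    PDF -> CDF -> scale by (L - 1) -> round."""
    n = sum(hist)
    running = 0.0
    new_levels = []
    for count in hist:
        running += count / n                                  # PDF and CDF
        new_levels.append(int((levels - 1) * running + 0.5))  # scale, round
    return new_levels

# Example 1 below: counts 6, 14, 5 at levels 3, 4, 5 of an 8-level image.
print(equalize_histogram([0, 0, 0, 6, 14, 5, 0, 0], 8))
```

This prints [0, 0, 0, 2, 6, 7, 7, 7], matching the mapping 3 → 2, 4 → 6, 5 → 7 derived in Example 1.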

Example 1: Perform histogram equalization of the image.

4 4 4 4 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 4 4 4 4

Solution.→ maximum value = 5, we need 3 bits to represent the number.

Hence there are eight possible gray levels from 0 to 7.

Let us represent image in the form of table.

Gray level 0 1 2 3 4 5 6 7

No. of pixels nk 0 0 0 6 14 5 0 0

Gray level  No. of pixels nk  PDF pk = nk/N  CDF sk = Σ pr(rk)  7 × sk  Equalized level
0           0                 0              0                  0       0
1           0                 0              0                  0       0
2           0                 0              0                  0       0
3           6                 0.24           0.24               1.68    2
4           14                0.56           0.8                5.6     6
5           5                 0.2            1                  7       7
6           0                 0              1                  7       7
7           0                 0              1                  7       7

N = 25

Original image:

4 4 4 4 4
3 4 5 4 3
3 5 5 5 3
3 4 5 4 3
4 4 4 4 4

After histogram equalization (3 → 2, 4 → 6, 5 → 7):

6 6 6 6 6
2 6 7 6 2
2 7 7 7 2
2 6 7 6 2
6 6 6 6 6

New image

Gray level 0 1 2 3 4 5 6 7
No. of pixels 0 0 6 0 0 0 14 5
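The per-pixel remapping in this example is a straightforward lookup; a sketch:

```python
# Example 1 image and the mapping read off the table: 3 -> 2, 4 -> 6, 5 -> 7.
image = [
    [4, 4, 4, 4, 4],
    [3, 4, 5, 4, 3],
    [3, 5, 5, 5, 3],
    [3, 4, 5, 4, 3],
    [4, 4, 4, 4, 4],
]
mapping = {3: 2, 4: 6, 5: 7}

# Replace every pixel by its new gray level.
equalized = [[mapping[p] for p in row] for row in image]
for row in equalized:
    print(row)
```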

Example2: Equalize the given histogram.

Gray level. 0 1 2 3 4 5 6 7

No. of pixels 790 1023 850 656 329 245 122 81

Solution.→ No. Of gray levels =8

= 0 to 7

(Bar chart of the original histogram: counts 790, 1023, 850, 656, 329, 245, 122, 81 at gray levels 0 to 7.)

Gray level  nk    PDF pr = nk/N  CDF sk  7 × sk  New gray level
0           790   0.19           0.19    1.33    1
1           1023  0.25           0.44    3.08    3
2           850   0.21           0.65    4.55    5
3           656   0.16           0.81    5.67    6
4           329   0.08           0.89    6.23    6
5           245   0.06           0.95    6.65    7
6           122   0.03           0.98    6.86    7
7           81    0.02           1       7       7

N = 4096

Old gray level  No. of pixels  New gray level
0               790            1
1               1023           3
2               850            5
3               656            6
4               329            6
5               245            7
6               122            7
7               81             7
(Bar chart of the equalized histogram: counts 790, 1023, 850, 985, 448 at gray levels 1, 3, 5, 6, 7.)

Equalized gray level  No. of pixels
0                     0
1                     790
2                     0
3                     1023
4                     0
5                     850
6                     656 + 329 = 985
7                     245 + 122 + 81 = 448

Here the dark, left-concentrated histogram becomes a more evenly spread histogram.

Example 3: Equalize following histogram.

Gray level. 0 1 2 3 4 5 6 7

No. of pixels 100 90 50 20 0 0 0 0

Solution:

(Original histogram: bar chart of the counts 100, 90, 50, 20 at gray levels 0 to 3.)

Gray level  nk   PDF pr = nk/N  CDF sk  7 × sk  New gray level
0           100  0.384          0.384   2.688   3
1           90   0.346          0.73    5.11    5
2           50   0.1923         0.9223  6.456   6
3           20   0.0769         1       7       7
4           0    0              1       7       7
5           0    0              1       7       7
6           0    0              1       7       7
7           0    0              1       7       7

N = 260

Modified histogram:

Gray level     0  1  2  3    4  5   6   7
No. of pixels  0  0  0  100  0  90  50  20

Example 4: Perform histogram equalization for 8 x 8 image shown in table.

Gray level. 0 1 2 3 4 5 6 7
(rk)
No. of 8 10 10 2 12 16 4 2
pixels (pk)

Solution: L =8

Original histogram

(Bar chart of the counts 8, 10, 10, 2, 12, 16, 4, 2 at gray levels 0 to 7.)

Gray level rk  No. of pixels pk  PDF pk(rk) = nk/N  CDF sk = Σ pr(rk)  7 × sk  Level
0              8                 0.12               0.12               0.84    1
1              10                0.15               0.27               1.89    2
2              10                0.15               0.42               2.94    3
3              2                 0.031              0.45               3.15    3
4              12                0.18               0.63               4.41    4
5              16                0.25               0.88               6.16    6
6              4                 0.06               0.94               6.58    7
7              2                 0.03               0.97               6.79    7

N = 64

Old gray level  Old no. of pixels  New gray level  New no. of pixels
0               8                  1               0 → 0
1               10                 2               1 → 8
2               10                 3               2 → 10
3               2                  3               3 → 10 + 2 = 12
4               12                 4               4 → 12
5               16                 6               5 → 0
6               4                  7               6 → 16
7               2                  7               7 → 4 + 2 = 6
New Histogram
(Bar chart of the counts 0, 8, 10, 12, 12, 0, 16, 6 at gray levels 0 to 7.)

Example 5: Given a histogram see what happens when we equalize if twice.

Gray level. 0 1 2 3

nk 70 20 7 3

Solution:

Gray level rk  No. of pixels nk  PDF   CDF   sk × 3  Round off  New nk
0              70                0.7   0.7   2.1     2          70 (at level 2)
1              20                0.2   0.9   2.7     3
2              7                 0.07  0.97  2.91    3          20 + 7 + 3 = 30 (at level 3)
3              3                 0.03  1     3       3

N = 100

∴ The modified gray levels are

Gray level  0  1  2   3
nk          0  0  70  30

Gray level rk  No. of pixels pk  PDF  CDF  sk × 3  Round off  Modified nk
0              0                 0    0    0       0          0
1              0                 0    0    0       0          0
2              70                0.7  0.7  2.1     2          70
3              30                0.3  1    3       3          30

N = 100
Now equalize again

∴ We get Gray level. 0 1 2 3

nk 0 0 70 30

This is the same as what we obtained after the first equalization.

Hence, equalizing twice gives the same result, i.e. it causes no change in the histogram.
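This idempotence can be checked numerically. A sketch reusing the same PDF, CDF, scale, round steps; the `remap` helper accumulates the old counts at their new levels.

```python
def equalize_levels(hist, levels):
    # PDF -> CDF -> scale by (L - 1) -> round half up.
    n, running, out = sum(hist), 0.0, []
    for count in hist:
        running += count / n
        out.append(int((levels - 1) * running + 0.5))
    return out

def remap(hist, new_levels, levels):
    # Accumulate the old pixel counts at their new gray levels.
    result = [0] * levels
    for old, new in enumerate(new_levels):
        result[new] += hist[old]
    return result

hist = [70, 20, 7, 3]
once = remap(hist, equalize_levels(hist, 4), 4)
twice = remap(once, equalize_levels(once, 4), 4)
print(once, twice)  # equalizing a second time changes nothing
```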

3.3.4 Histogram Specification

In histogram equalization the user has no control over the process: it always produces an
approximation to a uniform histogram. It is often desirable to have a method in which a
particular target histogram can be specified. The histogram specification method allows us to
exercise control over the process by specifying the target histogram.

The algorithm for histogram specification is as follows.

1) Find the mapping table of histogram equalization of the input image.
2) Specify the desired histogram.
3) Equalize the desired histogram.
4) Perform the mapping process so that each value from step 1 is mapped to the closest
   value from step 3.
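A minimal sketch of these four steps in plain Python. `equalize_levels` repeats the equalization mapping from section 3.3.2, and ties in step 4 are broken toward the lower gray level (an assumption, since the text does not say).

```python
def equalize_levels(hist, levels):
    # PDF -> CDF -> scale by (L - 1) -> round half up.
    n, running, out = sum(hist), 0.0, []
    for count in hist:
        running += count / n
        out.append(int((levels - 1) * running + 0.5))
    return out

def specify_histogram(src_hist, target_hist, levels):
    s = equalize_levels(src_hist, levels)      # step 1: equalize the image
    g = equalize_levels(target_hist, levels)   # step 3: equalize the target
    # Step 4: map each source level to the target level whose
    # equalized value is closest (ties broken toward the lower level).
    return [min(range(levels), key=lambda z: (abs(g[z] - sk), z)) for sk in s]

src = [790, 1023, 850, 656, 329, 245, 122, 81]
target = [0, 0, 0, 614, 819, 1230, 819, 614]
print(specify_histogram(src, target, 8))
```

For the histograms of Example 1 below this yields the mapping 0 → 3, 1 → 4, 2 → 5, 3 → 6, 4 → 6, 5 → 7, 6 → 7, 7 → 7.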

Example 1: Given histogram (a) & (b) modify histogram (a) as given by histogram (b)
(a)

Gray level. 0 1 2 3 4 5 6 7

No. of pixels 790 1023 850 656 329 245 122 81

(b)

Gray level. 0 1 2 3 4 5 6 7

No. of pixels 0 0 0 614 819 1230 819 614

Solution. → Equalize histogram (a)

Gray level  nk    PDF   CDF   sk × 7  Round off  New nk
0           790   0.19  0.19  1.33    1          790
1           1023  0.25  0.44  3.08    3          1023
2           850   0.21  0.65  4.55    5          850
3           656   0.16  0.81  5.67    6          985 (levels 3 and 4 combined)
4           329   0.08  0.89  6.23    6
5           245   0.06  0.95  6.65    7          448 (levels 5, 6 and 7 combined)
6           122   0.03  0.98  6.86    7
7           81    0.02  1     7       7

N = 4096
Now equalize histogram (b).

Gray level  nk    PDF   CDF   sk × 7  Round off
0           0     0     0     0       0
1           0     0     0     0       0
2           0     0     0     0       0
3           614   0.15  0.15  1.05    1
4           819   0.20  0.35  2.45    2
5           1230  0.30  0.65  4.55    5
6           819   0.20  0.85  5.95    6
7           614   0.15  1     7       7

N = 4096

Mapping: each equalized level of histogram (a) is mapped to the gray level of (b) whose
equalized value is closest.

1 → gray level 3 (nk of old = 790)
3 → gray level 4 (nk of old = 1023)
5 → gray level 5 (nk of old = 850)
6 → gray level 6 (nk of old = 985)
7 → gray level 7 (nk of old = 448)

Modified histogram:

Gray level     0  1  2  3    4     5    6    7
No. of pixels  0  0  0  790  1023  850  985  448
Plot histogram for modified image.

(Bar chart: counts 790, 1023, 850, 985, 448 at gray levels 3 to 7.)

Example 2: Perform histogram specification on 8 x8 image shown in table.

Gray level. 0 1 2 3 4 5 6 7

No. of 8 10 10 2 12 16 4 2
pixels

The target histogram is

Gray level. 0 1 2 3 4 5 6 7

No. of pixels 0 0 0 0 20 20 16 8

Solution: → Histogram equalization of 1st image.

Gray level  nk  PDF    CDF   sk × 7  Round off  New nk
0           8   0.12   0.12  0.84    1          8
1           10  0.15   0.27  1.89    2          10
2           10  0.15   0.42  2.94    3          10 + 2 = 12 (levels 2 and 3 combined)
3           2   0.031  0.45  3.15    3
4           12  0.18   0.63  4.41    4          12
5           16  0.25   0.88  6.16    6          16
6           4   0.06   0.94  6.58    7          4 + 2 = 6 (levels 6 and 7 combined)
7           2   0.03   0.97  6.79    7

N = 64

Histogram equalization of 2nd image.

Gray level  nk  PDF   CDF   sk × 7  Round off
0           0   0     0     0       0
1           0   0     0     0       0
2           0   0     0     0       0
3           0   0     0     0       0
4           20  0.31  0.31  2.17    2
5           20  0.31  0.62  4.34    4
6           16  0.25  0.87  6.09    6
7           8   0.12  0.99  6.93    7

N = 64

Matching the equalized values of the image with the equalized values of the target:

Image equalized level  No. of pixels  Target equalized level  Mapped gray level
2                      10             2                       4
4                      12             4                       5
6                      16             6                       6
7                      6              7                       7

Modified histogram:

Gray level     0  1  2  3  4   5   6   7
No. of pixels  0  0  0  0  10  12  16  6

(Left: bar chart of the target histogram, counts 20, 20, 16, 8 at gray levels 4 to 7. Right: bar chart of the mapped histogram, counts 10, 12, 16, 6 at gray levels 4 to 7.)

Histogram of target Mapped histogram

3.4. Image Enhancement in Frequency Domain


Frequency domain techniques are suited for processing an image according to its frequency
content. The principle behind frequency domain methods of image enhancement is to
compute a 2-D discrete transform of the image, manipulate the transform coefficients by an
operator M, and then perform the inverse transform. The orthogonal transform of the image
has two components: magnitude and phase. The magnitude carries the frequency content of
the image, while the phase is needed to restore the image back to the spatial domain. The
usual orthogonal transforms are the discrete cosine transform, the discrete Fourier transform,
the Hartley transform, etc. The transform domain enables operation on the frequency content
of the image, so high frequency content such as edges and other subtle information can easily
be enhanced. Frequency domain methods operate on the Fourier transform of an image.
• Edges and sharp transitions (e.g. noise) in an image contribute significantly to the high
frequency content of its Fourier transform.
• Low frequency content in the Fourier transform is responsible for the general
appearance of the image over smooth areas.
The concept of filtering is easier to visualize in the frequency domain. Therefore, enhancement
of an image f(x, y) can be done in the frequency domain based on the DFT. This is particularly
useful when the spatial extent of the point spread sequence h(x, y) is large, since by the
convolution theorem

g(x, y) = h(x, y) * f(x, y)

where g(x, y) is the enhanced image.

Types of frequency domain filters

• Low pass filter
• High pass filter
• Band pass filter

Low pass filtering produces blurring, while high pass filtering produces sharpening.

Enhancement of an image f(m, n) can be done in the frequency domain based on its DFT F(u, v).
Figure 26: Block diagram of image enhancement

Figure 27:

• We can directly design a transfer function H(u,v) and implement the enhancement in the
frequency domain as follows:
Figure 28: Basic steps for filtering in frequency domain

There are three basic steps to frequency domain filtering:

1. The image must be transformed from the spatial domain into the frequency domain using the Fast
Fourier transform.

2. The resulting complex image must be multiplied by a filter (that usually has only real values).

3. The filtered image must be transformed back to the spatial domain.

3.4.1 General Steps for Filtering


1. Multiply the input image by (-1)^(x+y) (centering)
2. Compute F(u, v) (DFT)
3. Multiply F(u, v) by H(u, v) (filtering)
4. Compute the inverse DFT of H(u, v) F(u, v)
5. Obtain the real part of the result
6. Multiply by (-1)^(x+y) (decentering)

Multiplication in the frequency domain is a convolution in the spatial domain.
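The six steps above translate almost line for line into NumPy. This is a sketch; the transfer function `H` is assumed to be already centered in the array.

```python
import numpy as np

def frequency_filter(image, H):
    """Steps 1-6: center, DFT, multiply by H, inverse DFT,
    real part, decenter."""
    M, N = image.shape
    x, y = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    center = (-1.0) ** (x + y)           # step 1: (-1)^(x+y) centering
    F = np.fft.fft2(image * center)      # step 2: DFT
    G = F * H                            # step 3: filtering
    g = np.fft.ifft2(G)                  # step 4: inverse DFT
    g = np.real(g)                       # step 5: real part
    return g * center                    # step 6: decentering

# Sanity check: an all-pass filter (H = 1) must return the image unchanged.
img = np.arange(16.0).reshape(4, 4)
out = frequency_filter(img, np.ones((4, 4)))
```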

Frequency Bands
Image Fourier Spectrum

Figure 29: An image and its Fourier Transform

3.4.2 Low pass filter


In the spatial domain the image is f(x, y); in the frequency domain it is F(u, v). Filtering
multiplies the spectrum by a transfer function H(u, v):

G(u, v) = F(u, v) · H(u, v), where H(u, v) is the filter

and g(x, y) is the inverse transform of G(u, v).

Three types of lowpass filter:


Ideal (very sharp)
Butterworth (tunable)
Gaussian (very smooth)

1. H(u,v): Ideal Low Pass Filter

H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0

where D(u, v) = √(u² + v²) and D0 is the cut-off frequency.

(Plot: H(u, v) is 1 inside the circle of radius D0 and 0 outside.)

Figure 30 : Plot of Ideal Low pass filter transfer function
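Building the ideal low pass transfer function as an array is a one-liner on a distance grid. A NumPy sketch, with D measured from the center of the spectrum:

```python
import numpy as np

def ideal_lowpass(shape, D0):
    """H(u, v) = 1 where D(u, v) <= D0, else 0 (centered spectrum)."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from center
    return (D <= D0).astype(float)

H = ideal_lowpass((5, 5), D0=1.0)
print(H)  # a "plus" of ones around the center, zeros elsewhere
```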

The Ringing Problem

G(u, v) = F(u, v) · H(u, v)

By the convolution theorem this is equivalent to

g(x, y) = f(x, y) * h(x, y)

The inverse Fourier transform of the ideal low pass H(u, v) is a sinc-shaped h(x, y); its
oscillating side lobes cause ringing (at a radius related to D0) in addition to the intended blur.

2. Butterworth Lowpass Filter

Figure 31: Plot of Butterworth Low pass filter transfer function

The BLPF of order n, with a cutoff frequency distance D0, is defined as

H(u, v) = 1 / (1 + (D(u, v) / D0)^(2n))

where D(u, v) = √(u² + v²).
Unlike the ideal filter, there is no clear cutoff between passed and filtered frequencies.

(Plots of the filter in the image domain and the frequency domain.)

3. Gaussian Low Pass Filters

The Gaussian low pass filter (GLPF) has the transfer function

H(u, v) = e^(-D²(u,v) / (2σ²))

where σ is a measure of the spread of the Gaussian curve. Letting σ = D0, the cutoff frequency:

H(u, v) = e^(-D²(u,v) / (2D0²))

The inverse Fourier transform of a GLPF is also a Gaussian, so a spatial Gaussian filter will have
no ringing.

Figure 32: Plot of Gaussian Low pass filter transfer function

3.4.3 Image Sharpening: High Pass Filters


1. Ideal Highpass Filter

H(u, v) = 0 if D(u, v) ≤ D0
H(u, v) = 1 if D(u, v) > D0

where D0 is the cut-off frequency and D(u, v) is the distance from the point (u, v) to the origin
(center) of the frequency plane.

(Plot: H(u, v) is 0 inside the circle of radius D0 and 1 outside.)

Figure 33: Plot of Ideal high pass filter transfer function

Figure 34: Ideal highpass Filters: example

2. Butterworth Highpass Filters

The BHPF of order n with cutoff frequency D0 is defined as

H(u, v) = 1 / (1 + (D0 / D(u, v))^(2n))

(Plot: the transfer function rises smoothly from 0 at the center and passes through 0.5 at
D(u, v) = D0.)

Figure 35: Plot of Butterworth high pass filter transfer function

Figure 36: Butterworth Highpass Filters: Example

3. Gaussian Highpass Filters

The Gaussian highpass filter is the complement of the Gaussian lowpass filter:

H(u, v) = 1 - e^(-D²(u,v) / (2D0²))

Like the GLPF, its spatial counterpart is also Gaussian, so it produces no ringing.

Figure 37: Gaussian Highpass Filters: Example
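Since each highpass transfer function here is one minus its lowpass counterpart, the Gaussian pair can be sketched together (NumPy, centered spectrum, σ = D0):

```python
import numpy as np

def gaussian_lowpass(shape, D0):
    """H(u, v) = exp(-D^2(u, v) / (2 * D0^2)) with sigma = D0."""
    M, N = shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from center
    return np.exp(-D2 / (2.0 * D0 ** 2))

def gaussian_highpass(shape, D0):
    # Complement of the lowpass transfer function.
    return 1.0 - gaussian_lowpass(shape, D0)

Hlp = gaussian_lowpass((64, 64), D0=16.0)
Hhp = gaussian_highpass((64, 64), D0=16.0)
```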
