
UNIT – III

IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN

INTRODUCTION
The aim of image enhancement is to process an image so that the result is more suitable than
the original image for a specific application.
• It is often the first step in digital image processing.
• It improves the subjective quality of an image by working with the existing data.
• Image enhancement includes gray-level and contrast manipulation, noise reduction, edge
crispening and sharpening, filtering, interpolation and magnification, pseudo-coloring,
and so on.

Image enhancement techniques can be divided into two broad categories:


1. Spatial domain methods
2. Frequency domain methods
Spatial domain refers to the image plane itself; spatial domain methods are based on direct
manipulation of the pixels in an image.
Frequency (transform) domain methods first transform an image into the transform domain,
perform the processing there, and then compute the inverse transform to bring the results back
into the spatial domain.

Spatial Domain Methods

There are two types:
• Intensity transformations (or point operations)
• Spatial filtering (or neighborhood operations)

Spatial domain methods are procedures that operate directly on these pixels. Spatial domain
processes will be denoted by the expression
g(x, y) = T[f(x, y)]
where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined
over some neighborhood of (x, y).

The principal approach in defining a neighborhood about a point (x, y) is to use a square or
rectangular subimage area centered at (x, y), as Fig 3.1 shows.

The center of the subimage is moved from pixel to pixel, starting, say, at the top left corner.
Fig 3.1 A 3×3 neighborhood about a point (x, y) in an image

• The type of operation performed on the neighboring input pixel values is called a spatial
filter (also a spatial mask, kernel, template, window, or neighborhood operation).
• It consists of moving the origin of the neighborhood from pixel to pixel and applying the
operator T to the pixels in the neighborhood to yield the output at that location.
• Thus, for any specific location (x, y), the value of the output image g at those coordinates
is equal to the result of applying T to the neighborhood with origin at (x, y) in f.
• The smallest possible neighborhood is of size 1×1. In this case, g depends only on the
value of f at the single point (x, y), and T becomes an intensity (or gray-level or mapping)
transformation function of the form
s = T(r)
where s is the intensity of g at (x, y) and r is the intensity of f at (x, y).
• Enhancement approaches whose results depend only on the intensity at a point are called
point processing techniques.
Some basic intensity transformation functions (or point operations):
• Image negatives
• Log transformations
• Power-law (gamma) transformations
• Piecewise-linear transformations
  -- contrast stretching
  -- intensity-level slicing
  -- bit-plane slicing
Image Negatives
The negative of an image with gray levels in the range [0, L-1] is obtained by using the image
negative transformation shown in Fig 3.2, which is given by the expression
s = L - 1 - r
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative.

Fig 3.2 Basic Intensity Transformations
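As a quick illustration, the following NumPy sketch applies the negative transformation to a
synthetic 8-bit ramp image (the image and variable names are illustrative, not from the text):

import numpy as np

L = 256
img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))   # synthetic 8-bit ramp
neg = ((L - 1) - img.astype(np.int32)).astype(np.uint8) # s = L - 1 - r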


Log Transformations
The general form of the log transformation shown in Fig 3.2 is
s = c log(1 + r)
where c is a constant and it is assumed that r ≥ 0. The shape of the log curve shows that this
transformation maps a narrow range of low gray-level values in the input image into a wider
range of output levels. The opposite is true of higher values of input levels.
Power-Law Transformations
Power-law transformations have the basic form
s = c r^γ
where s is the output pixel value,
r is the input pixel value, and
c and γ are positive constants.
Different levels of enhancement can be obtained for various values of γ. This technique is
commonly called gamma correction and is used in monitor displays. Curves generated with
values of γ > 1 have exactly the opposite effect as those generated with values of γ < 1. The
identity transformation is obtained when c = γ = 1.
Fig 3.3 Plots of the equation s = c r^γ for various values of γ (c = 1 in all cases)
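A minimal sketch of both the log and power-law transforms, assuming intensities normalized to
[0, 1] and 8-bit output; the rescaling by log 2 is one normalization choice, not prescribed by
the text:

import numpy as np

img = np.tile(np.arange(256, dtype=np.uint8), (8, 1))  # synthetic 8-bit ramp
r = img / 255.0                                        # normalize to [0, 1]

# Log transformation s = c log(1 + r); dividing by log(2) rescales the
# output so that r = 1 maps back to c.
c = 1.0
s_log = c * np.log1p(r) / np.log(2.0)
log_img = np.clip(s_log * 255.0, 0, 255).astype(np.uint8)

# Power-law (gamma) transformation s = c * r**gamma
gamma = 0.4                                  # gamma < 1 expands dark levels
s_gamma = c * np.power(r, gamma)
gamma_img = np.clip(s_gamma * 255.0, 0, 255).astype(np.uint8)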

Piecewise-Linear Transformation Functions

Contrast stretching:
One of the simplest piecewise-linear functions is the contrast-stretching transformation.
Low-contrast images can result from poor illumination, lack of dynamic range in the imaging
sensor, or even a wrong setting of the lens aperture during image acquisition.
Contrast stretching is a process that expands the range of intensity levels in an image.

Fig 3.4 Contrast stretching transformation


The transformation used for contrast stretching is shown in Fig 3.4. The values (r1, s1) and (r2, s2)
determine the shape of the transformation.
If r1 = s1 and r2 = s2, we get a linear (identity) transformation.
If r1 = r2, s1 = 0 and s2 = L-1, we get a thresholding function.

s = T(r) = a1 r,              0 ≤ r < r1,   with s1 = T(r1)
         = a2 (r - r1) + s1,  r1 ≤ r < r2,  with s2 = T(r2)
         = a3 (r - r2) + s2,  r2 ≤ r ≤ L-1
where the slopes a1, a2 and a3 control the result of contrast stretching.
If a1 = a2 = a3 = 1, there is no change in gray levels.

If a1 = a3 = 0 and r1 = r2, the function becomes a thresholding function and the result is a binary image.
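One way to sketch this in NumPy is with np.interp, which applies the three linear segments
implied by the control points (r1, s1) and (r2, s2); the slopes a1, a2, a3 are then determined
by the control points rather than chosen freely (the image and values are illustrative):

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    # Piecewise-linear stretch through (0, 0), (r1, s1), (r2, s2), (L-1, L-1);
    # np.interp evaluates the three linear segments in one vectorized call.
    r = img.astype(np.float64)
    out = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return out.astype(np.uint8)

# Map the narrow band [90, 160] of a low-contrast image onto the full range
img = np.random.randint(90, 161, size=(64, 64), dtype=np.uint8)
stretched = contrast_stretch(img, r1=90, s1=0, r2=160, s2=255)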

Thresholding function:

Thresholding is used to extract the part of an image that contains the information of interest;
it is a special case of the more general segmentation problem.
In thresholding, pixels with intensity below a threshold m are set to 0 and pixels with intensity
at or above the threshold are set to the maximum level, L-1 (255 for an 8-bit image).
It generates a binary image:

s = T(r) = 0,    r < m
         = L-1,  r ≥ m
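A one-line NumPy sketch (the threshold value m and the random test image are illustrative):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
m = 128                                              # illustrative threshold
binary = np.where(img < m, 0, 255).astype(np.uint8)  # 0 below m, L-1 otherwise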

Gray-level slicing:
Highlighting a specific range of gray levels in an image often is desired.
There are several ways of doing gray level slicing, but most of them are variations of two basic
themes. One approach is to display a high value for all gray levels in the range of interest and a
low value for all other gray levels. The second approach brightens the desired range of gray
levels and remaining gray levels are unchanged.
Fig 3.5 (a) This transformation highlights range [A, B] of gray levels and reduce all others
to a constant level (b) This transformation highlights range [A, B] but preserves all other
levels.
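Both approaches in a short NumPy sketch (the bounds A, B and the constant levels are
illustrative choices):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
A, B = 100, 180                       # range of interest
in_range = (img >= A) & (img <= B)

# Approach 1: high value inside [A, B], low constant elsewhere (Fig 3.5a)
sliced1 = np.where(in_range, 255, 10).astype(np.uint8)

# Approach 2: brighten [A, B], preserve all other levels (Fig 3.5b)
sliced2 = img.copy()
sliced2[in_range] = 255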

Bit-Plane Slicing:
Suppose that each pixel in an image is represented by 8 bits. The image is composed of eight 1-bit
planes, ranging from bit-plane 0 to bit-plane 7.
Bit-plane 0 contains all the lowest-order bits in the bytes comprising the pixels in the image, and
plane 7 contains all the highest-order bits.
Note that the higher-order bit planes contain the majority of the visually significant data; the
lower-order planes contribute the subtler intensity details in the image.
Separating a digital image into its bit planes is useful for analyzing the relative importance of
each bit of the image.
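A short NumPy sketch of bit-plane extraction via shifting and masking (the synthetic image is
illustrative):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# planes[k] holds bit-plane k: 1 where bit k of a pixel is set, else 0
planes = [((img >> k) & 1).astype(np.uint8) for k in range(8)]

# Reconstruction from the two highest-order planes alone keeps most of the
# visually significant data
approx = ((planes[7] << 7) | (planes[6] << 6)).astype(np.uint8)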

Fig 3.6 Bit-plane representation of an 8-bit image


Histogram Processing:
• The histogram of a digital image with intensity levels in the range [0, L-1] is
the discrete function
h(rk) = nk
where rk is the kth intensity value and
nk is the number of pixels in the image with intensity rk.
• The normalized histogram is
p(rk) = nk / MN,  k = 0, 1, 2, ..., L-1
where M and N are the row and column dimensions of the image.
• p(rk) is an estimate of the probability of occurrence of intensity level rk in
an image.
The sum of all components of a normalized histogram is equal to 1.
Histogram plots are simple plots of h(rk) = nk versus rk.
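A minimal sketch of computing h(rk) and p(rk) with NumPy (the random test image is
illustrative):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
M, N = img.shape

h = np.bincount(img.ravel(), minlength=256)  # h(rk) = nk for k = 0..255
p = h / (M * N)                              # normalized histogram p(rk)
assert np.isclose(p.sum(), 1.0)              # components sum to 1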
In a dark image the components of the histogram are concentrated on the low (dark)
side of the gray scale. In a bright image the histogram components are biased toward
the high side of the gray scale. The histogram of a low-contrast image is narrow and
centered toward the middle of the gray scale. The components of the histogram of a
high-contrast image cover a broad range of the gray scale. The net effect is an image
that shows a great deal of gray-level detail and has a high dynamic range.
Fig 3.7 Four basic image types (dark, light, low contrast, high contrast) and their corresponding
histograms

Histogram Equalization:

Consider for a moment continuous functions, and let the variable r represent the gray levels of
the image to be enhanced. We assume that r has been scaled to the interval [0, L-1], with r = 0
representing black and r = L-1 representing white. We focus attention on transformations of the form
s = T(r),  0 ≤ r ≤ L-1
that produce a level s for every pixel value r in the original image. For reasons that will become
obvious shortly, we assume that the transformation function T(r) satisfies the following conditions:

(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ L-1; and

(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.

The inverse transformation from s back to r is denoted
r = T^(-1)(s),  0 ≤ s ≤ L-1

Fig.3.8 A gray-level transformation function that is both single valued and monotonically
increasing.
In the discrete version:
– The probability of occurrence of gray level rk in an image is
pr(rk) = nk / n,  k = 0, 1, 2, ..., L-1
where
n : the total number of pixels in the image
nk : the number of pixels that have gray level rk
L : the total number of possible gray levels in the image
The transformation function (called the histogram equalization or linearization
transformation) is
sk = T(rk) = (L-1) Σ(j=0 to k) pr(rj) = (L-1) Σ(j=0 to k) nj / n,  k = 0, 1, 2, ..., L-1
Thus, an output image is obtained by mapping each pixel with level rk in the input
image into a corresponding pixel with level sk.
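A compact NumPy sketch of this mapping, using a cumulative sum for the running total of pr(rj);
the synthetic low-contrast image is illustrative:

import numpy as np

def equalize(img, L=256):
    # sk = (L-1) * sum_{j<=k} nj / n, computed with a cumulative sum,
    # then applied to the image as a lookup table.
    h = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(h) / img.size                 # running sum of pr(rj)
    s = np.round((L - 1) * cdf).astype(np.uint8)  # the mapping T(rk)
    return s[img]

img = np.random.randint(80, 160, size=(64, 64), dtype=np.uint8)  # low contrast
eq = equalize(img)   # histogram now spreads over roughly the full range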
Spatial Filtering:
The mechanics of spatial filtering are illustrated in Fig 3.10 below.

The process consists simply of moving the filter mask from point to point in an image. At each
point (x, y), the response of the filter at that point is calculated using a predefined relationship.

The response is given by a sum of products of the filter coefficients and the corresponding image
pixels in the area spanned by the filter mask.

Fig 3.10 The mechanics of linear spatial filtering using a 3×3 filter mask

For a mask of size m × n, the response R at a particular location (x, y) is given by
R = w1 z1 + w2 z2 + ... + wmn zmn = Σ(i=1 to mn) wi zi
where the w's are the filter coefficients, the z's are the values of the image gray levels corresponding
to those coefficients, and mn is the total number of coefficients in the mask.
For the general 3×3 mask shown in Fig 3.11, the response at any point (x, y) in the image is given
by
R = Σ(s=-1 to 1) Σ(t=-1 to 1) w(s, t) f(x+s, y+t)

Fig 3.11 Representation of a general 3×3 spatial filter mask
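A straightforward, unoptimized sketch of these mechanics in Python/NumPy, using zero padding at
the borders (one of several common border choices; the text does not specify one):

import numpy as np

def linear_filter(f, w):
    # Slide the mask w over f; at each (x, y) the response is the sum of
    # products of the coefficients and the pixels under the mask.
    m, n = w.shape
    a, b = m // 2, n // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))  # zero padding
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g

f = np.random.randint(0, 256, size=(32, 32)).astype(np.float64)
w = np.ones((3, 3)) / 9.0      # a 3x3 box mask, as an example
g = linear_filter(f, w)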

• The term spatial filtering refers to filtering operations that are performed directly on the
pixels of an image by moving the filter mask from point to point. Spatial filters fall into two
categories:
– Smoothing spatial filters
– Sharpening spatial filters
Smoothing Spatial Filters:

(1) Smoothing is often used to reduce noise within an image.

(2) Image smoothing is a key image-enhancement technique that can remove noise from images, so
it is a necessary functional module in various image-processing software.

(3) Image smoothing is a method of improving the quality of images.

(4) Smoothing is performed by both spatial and frequency filters.

Smoothing Linear Filters:

The output (response) of a smoothing linear spatial filter is simply the average of the pixels
contained in the neighborhood of the filter mask.
• Also called averaging filters or lowpass filters.
• They replace the value of every pixel in an image with the average of the intensity levels in
the neighborhood defined by the filter mask.
• This reduces "sharp" transitions in intensities.
• Random noise typically consists of sharp transitions.
• Edges are also characterized by sharp intensity transitions, so averaging filters have the
undesirable side effect of blurring edges.
• If all the coefficients in the filter are equal, it is also called a box filter.
Fig 3.12 Averaging filter
In this filter all weights are equal (hence the name box filter). Using it gives the average of
the pixels under the mask, which can best be seen by substituting the coefficients of the mask
into the response equation.

Weighted average filter

In this filter, pixels are multiplied by different coefficients, thus giving more importance (weight) to
some pixels at the expense of others.
The pixel at the center of the mask is multiplied by a higher value than any other, thus giving
this pixel more importance in the calculation of the average.

Fig 3.13 Weighted averaging filter

Weighted averaging attempts to reduce blurring in the smoothing process.

The general implementation for filtering an M × N image with a weighted averaging filter of size
m × n (m and n odd) is given by the expression
g(x, y) = [Σ(s=-a to a) Σ(t=-b to b) w(s, t) f(x+s, y+t)] / [Σ(s=-a to a) Σ(t=-b to b) w(s, t)]
where a = (m-1)/2 and b = (n-1)/2.
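A sketch comparing the two kinds of mask, assuming the common 3×3 box and 1-2-1 weighted
patterns (which may differ in detail from Figs 3.12 and 3.13) and using scipy.ndimage.correlate
to apply them:

import numpy as np
from scipy.ndimage import correlate

f = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)

box = np.ones((3, 3)) / 9.0            # all weights equal, normalized by 9

weighted = np.array([[1., 2., 1.],     # center weighted most,
                     [2., 4., 2.],     # normalized by the coefficient sum 16
                     [1., 2., 1.]]) / 16.0

smooth_box = correlate(f, box, mode='nearest')
smooth_wtd = correlate(f, weighted, mode='nearest')  # somewhat less blurring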
Order-Statistics Filters:

Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking)
the pixels contained in the image area encompassed by the filter and then replacing the value of
the center pixel with the value determined by the ranking result.
The best-known example in this category is the median filter, which, as its name implies, replaces
the value of the center pixel with the median of the gray levels in the neighborhood of that pixel.
Median filters are quite popular because, for certain types of random noise, they provide excellent
noise-reduction capabilities with considerably less blurring than linear smoothing filters of
similar size.
Median filters are particularly effective in the presence of salt-and-pepper noise.
The median represents the 50th percentile of a ranked set of numbers, while the 100th or 0th
percentile results in the so-called max filter or min filter, respectively.
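A short sketch of median filtering on an image corrupted with salt-and-pepper noise, using
scipy.ndimage.median_filter (the noise fraction and sizes are illustrative):

import numpy as np
from scipy.ndimage import median_filter

img = np.full((64, 64), 128, dtype=np.uint8)     # flat test image
# Corrupt 5% of the pixels with salt-and-pepper noise
noisy = img.copy()
spots = np.random.rand(64, 64) < 0.05
noisy[spots] = np.random.choice([0, 255], size=int(spots.sum())).astype(np.uint8)

denoised = median_filter(noisy, size=3)          # 3x3 median neighborhood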

Sharpening Spatial Filters

• The objective of sharpening is to highlight transitions in intensity.

• Its uses range from printing and medical imaging to industrial inspection and autonomous
guidance in military systems.
• Averaging is analogous to integration, so sharpening is analogous to spatial differentiation.
• Thus, image differentiation enhances edges and other discontinuities (such as noise) and
deemphasizes areas with slowly varying intensities.

Foundation:

• A definition for a first-order derivative

(1) must be zero in areas of constant intensity,
(2) must be nonzero at the onset of an intensity step or ramp, and
(3) must be nonzero along ramps.

• A definition for a second-order derivative

(1) must be zero in constant areas,
(2) must be nonzero at the onset and end of an intensity step or ramp, and
(3) must be zero along ramps of constant slope.

• The first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x+1) - f(x)

• The second-order derivative is
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
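A worked 1-D example of both differences on a synthetic scan line (the signal is illustrative):

import numpy as np

# A scan line with a flat segment, a downward ramp, a flat segment and a step
f = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 1, 6, 6, 6], dtype=np.float64)

first  = f[1:] - f[:-1]                 # f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]   # f(x+1) + f(x-1) - 2 f(x)

# first  -> nonzero all along the ramp and at the step
# second -> nonzero only at the onset/end of the ramp and around the step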


Use of Second Derivatives for Image Sharpening - The Laplacian:

The approach basically consists of defining a discrete formulation of the second-order derivative
and then constructing a filter mask based on that formulation.

Development of the method:

The simplest isotropic derivative operator is the Laplacian, which, for an image f(x, y) of two
variables, is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²
Because derivatives of any order are linear operations, the Laplacian is a linear operator. We use
the following notation for the partial second-order derivative in the x-direction:
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
and, similarly, in the y-direction:
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)
The discrete Laplacian of two variables is therefore
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)

Fig.3.14. (a) Filter mask used to implement the digital Laplacian (b) Mask used to
implement an extension of this equation that includes the diagonal neighbors. (c) and (d)
Two other implementations of the Laplacian.

In a Laplacian filter the sum of all filter coefficients is zero, and the center coefficient can be
taken as positive or negative. The Laplacian filter highlights edges in an image.
Because the Laplacian is a derivative operator, its use highlights gray-level discontinuities in an
image and deemphasizes regions with slowly varying gray levels.
The resultant image g(x, y) is obtained using the Laplacian operator as follows:
g(x, y) = f(x, y) - ∇²f(x, y)  if the center coefficient of the Laplacian mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)  if the center coefficient of the Laplacian mask is positive
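A sketch of Laplacian sharpening using the negative-center mask of Fig 3.14(a);
scipy.ndimage.correlate is one convenient way to apply the mask, and the test image is
illustrative:

import numpy as np
from scipy.ndimage import correlate

f = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)

# Laplacian mask with a negative center (coefficients sum to zero)
lap_mask = np.array([[0.,  1., 0.],
                     [1., -4., 1.],
                     [0.,  1., 0.]])

lap = correlate(f, lap_mask, mode='nearest')
g = np.clip(f - lap, 0, 255)   # subtract because the center coefficient is negative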
• Unsharp Masking (see the sketch after these steps)
– Read the original image f(x, y)
– Blur the original image to obtain f'(x, y)
– Mask = f(x, y) - f'(x, y)
– g(x, y) = f(x, y) + Mask

• High-Boost Filtering

– Read the original image f(x, y)
– Blur the original image to obtain f'(x, y)
– Mask = f(x, y) - f'(x, y)
– g(x, y) = f(x, y) + k·Mask, where k > 1
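A sketch of both procedures, using a 5×5 box blur for f'(x, y); the blur kernel size and k = 2.5
are illustrative choices, not prescribed by the text:

import numpy as np
from scipy.ndimage import uniform_filter

f = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)

blurred = uniform_filter(f, size=5)   # f'(x, y): a smoothed copy of f
mask = f - blurred                    # Mask = f(x, y) - f'(x, y)

unsharp    = np.clip(f + mask, 0, 255)        # unsharp masking (k = 1)
high_boost = np.clip(f + 2.5 * mask, 0, 255)  # high-boost filtering (k > 1)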

Use of First Derivatives for Image Sharpening - The Gradient:

Image sharpening can also be done using first-order derivatives.

First derivatives in image processing are implemented using the magnitude of the gradient.
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional
column vector
∇f = [gx, gy]^T = [∂f/∂x, ∂f/∂y]^T
The magnitude of this vector is given by
∇f = mag(∇f) = [gx² + gy²]^(1/2)
It is common practice to approximate the magnitude of the gradient by using absolute values
instead of squares and square roots:
∇f ≈ |gx| + |gy|
Two other definitions proposed by Roberts [1965] in the early development of digital image
processing use cross differences:
gx = z9 - z5 and gy = z8 - z6
where z1, ..., z9 denote the intensities of the 3×3 neighborhood, numbered left to right and top
to bottom (see Fig 3.15).

Fig 3.15 Roberts cross-gradient operator

We compute the magnitude of the gradient as
∇f = [(z9 - z5)² + (z8 - z6)²]^(1/2)
If we use absolute values, then substituting these quantities gives the following approximation
to the gradient:
∇f ≈ |z9 - z5| + |z8 - z6|
Sobel operator
The Sobel operator highlights edges and discontinuities in an image.
In the Sobel operator, too, the sum of all filter coefficients is zero.
The idea behind using a weight value of 2 for the center coefficients is to achieve some smoothing
by giving more importance to the center point.

Fig 3.16 Filter masks of the Sobel operator
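A sketch of computing the Sobel gradient and the |gx| + |gy| magnitude approximation; the masks
follow the standard Sobel pattern, which Fig 3.16 may present with a different sign convention:

import numpy as np
from scipy.ndimage import correlate

f = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)

sobel_x = np.array([[-1., -2., -1.],   # responds to horizontal edges
                    [ 0.,  0.,  0.],
                    [ 1.,  2.,  1.]])
sobel_y = sobel_x.T                    # responds to vertical edges

gx = correlate(f, sobel_x, mode='nearest')
gy = correlate(f, sobel_y, mode='nearest')

grad = np.abs(gx) + np.abs(gy)   # |gx| + |gy| approximation to the magnitude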
