Unit-3 Part1
INTRODUCTION
The aim of image enhancement is to process an image so that the result is more suitable than
the original image for a specific application.
• It is a first step in digital image processing.
• It basically improves the subjective quality of the images by working with the existing
data.
• Image enhancement includes gray-level and contrast manipulation, noise reduction, edge
crispening and sharpening, filtering, interpolation and magnification, pseudocoloring,
and so on.
Spatial domain methods are procedures that operate directly on the pixels of an image. Spatial
domain processes are denoted by the expression

g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined
over some neighborhood of (x, y).
The principal approach in defining a neighborhood about a point (x, y) is to use a square or
rectangular subimage area centered at (x, y), as Fig. 3.1 shows.
The center of the subimage is moved from pixel to pixel starting, say, at the top left corner.
Fig.3.1 A 3×3 neighborhood about a point (x, y) in an image
• The type of operation performed on the neighboring input pixel values is called a spatial
filter (also called a spatial mask, kernel, template, or window; the operation itself is a
neighborhood operation).
• It consists of moving the origin of the neighborhood from pixel to pixel and applying the
operator T to the pixels in the neighborhood to yield the output at that location.
• Thus, for any specific location (x, y), the value of the output image g at those coordinates
is equal to the result of applying T to the neighborhood with origin at (x, y) in f.
• The smallest possible neighborhood is of size 1×1. In this case, g depends only on the
value of f at the single point (x, y), and T becomes an intensity (or gray-level, or mapping)
transformation function of the form

s = T(r)

where s is the intensity of g at (x, y) and r is the intensity of f at (x, y).
• Enhancement approaches whose results depend only on the intensity at a point are called
point processing techniques.
Some basic intensity transformation functions (or) point operations:
• Image negatives
• Log transformations
• Power-law (gamma) transformations
• Piecewise-linear transformations
--contrast stretching
--intensity-level slicing
--bit-plane slicing
Image Negatives
The negative of an image with gray levels in the range [0, L-1] is obtained by using the image
negative transformation shown in Fig.3.2, which is given by the expression
s = L - 1 - r
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative.
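As a quick illustration, the negative transformation is a one-line operation in NumPy; the small synthetic image below is an illustrative assumption, not from the text:

import numpy as np

# A tiny synthetic 8-bit image (L = 256), just for demonstration.
f = np.array([[0, 64, 128],
              [192, 230, 255]], dtype=np.uint8)

L = 256
s = ((L - 1) - f.astype(np.int32)).astype(np.uint8)   # s = L - 1 - r

print(s)   # 0 -> 255, 255 -> 0; mid-grays are mirrored about the center

Dark regions become light and vice versa, which is useful for visualizing white or gray detail embedded in dark regions.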
Contrast stretching:
One of the simplest piecewise linear functions is a contrast-stretching transformation.
Low-contrast images can result from poor illumination, lack of dynamic range in the imaging
sensor, or even a wrong setting of the lens aperture during image acquisition.
Contrast stretching is a process that expands the range of intensity levels in an image.
If r1 = r2, s1 = 0, and s2 = L-1, the transformation becomes a thresholding function and the
result is a binary image.
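A minimal NumPy sketch of piecewise-linear contrast stretching follows; the function name and the control points (r1, s1) and (r2, s2) are illustrative assumptions, with 0 < r1 < r2 < L-1 assumed:

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # Map [0, r1] -> [0, s1], [r1, r2] -> [s1, s2], [r2, L-1] -> [s2, L-1].
    # Assumes 0 < r1 < r2 < L-1.
    r = f.astype(np.float64)
    out = np.empty_like(r)
    lo, hi = r < r1, r > r2
    mid = ~(lo | hi)
    out[lo] = (s1 / r1) * r[lo]
    out[mid] = s1 + (s2 - s1) * (r[mid] - r1) / (r2 - r1)
    out[hi] = s2 + (L - 1 - s2) * (r[hi] - r2) / (L - 1 - r2)
    return np.clip(out, 0, L - 1).astype(np.uint8)

For example, contrast_stretch(f, 70, 20, 180, 230) expands the narrow input range [70, 180] onto the wider output range [20, 230].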
Thresholding function:
Thresholding is used to extract the part of an image that contains the information of interest.
Thresholding is a special case of the more general segmentation problem.
In thresholding, pixels with intensity below the threshold m are set to 0 and pixels with
intensity at or above m are set to L-1 (255 for an 8-bit image), which generates a binary image:

s = T(r) = 0,     for r < m
         = L - 1, for r ≥ m
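A short NumPy sketch of this thresholding function (the threshold value m is a parameter the caller chooses):

import numpy as np

def threshold(f, m, L=256):
    # s = 0 for r < m, s = L-1 for r >= m: produces a binary image.
    return np.where(f < m, 0, L - 1).astype(np.uint8)

# e.g. g = threshold(f, m=128) maps everything below 128 to black, the rest to white.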
Gray-level slicing:
Highlighting a specific range of gray levels in an image often is desired.
There are several ways of doing gray-level slicing, but most of them are variations of two basic
themes. One approach is to display a high value for all gray levels in the range of interest and a
low value for all other gray levels. The second approach brightens the desired range of gray
levels but leaves the remaining gray levels unchanged.
Fig 3.5 (a) This transformation highlights the range [A, B] of gray levels and reduces all
others to a constant level (b) This transformation highlights the range [A, B] but preserves
all other levels.
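Both slicing approaches reduce to a masked assignment in NumPy; the function names and the output levels chosen here are illustrative:

import numpy as np

def slice_binary(f, A, B, high=255, low=10):
    # Approach 1: levels in [A, B] -> high, all other levels -> low.
    return np.where((f >= A) & (f <= B), high, low).astype(np.uint8)

def slice_preserve(f, A, B, high=255):
    # Approach 2: brighten levels in [A, B], leave the rest unchanged.
    return np.where((f >= A) & (f <= B), high, f).astype(np.uint8)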
Bit-Plane Slicing:
Suppose that each pixel in an image is represented by 8 bits. The image is composed of eight 1-bit
planes, ranging from bit-plane 0 to bit plane 7.
Bit-plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image and plane 7
contains all the high-order bits.
Note that the higher-order bit planes contain the majority of the visually significant data, while
the lower-order planes contribute the finer intensity details in the image.
Separating a digital image into its bit planes is useful for analyzing the relative importance of
each bit of the image.
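Extracting bit planes is a matter of shifting and masking; a small sketch (the function name is an illustrative choice):

import numpy as np

def bit_planes(f):
    # planes[k] is a binary image holding bit k of every 8-bit pixel;
    # planes[7] is the most significant plane, planes[0] the least.
    return [((f >> k) & 1).astype(np.uint8) for k in range(8)]

# An approximate reconstruction from the two highest planes keeps most
# of the visually significant content:
# planes = bit_planes(f)
# approx = (planes[7] << 7) | (planes[6] << 6)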
Histogram Equalization:
Consider for a moment continuous functions, and let the variable r represent the gray levels of
the image to be enhanced. We assume that r has been normalized to the interval [0, L-1], with
r = 0 representing black and r = L-1 representing white. We focus attention on transformations
of the form

s = T(r)

that produce a level s for every pixel value r in the original image. For reasons that will become
obvious shortly, we assume that the transformation function T(r) satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ L-1; and
(b) 0 ≤ T(r) ≤ L-1 for 0 ≤ r ≤ L-1.
Fig.3.8 A gray-level transformation function that is both single-valued and monotonically
increasing.
In the discrete version:
– The probability of occurrence of gray level rk in an image is

pr(rk) = nk / n,   k = 0, 1, 2, ..., L-1

where
n : the total number of pixels in the image
nk : the number of pixels that have gray level rk
L : the total number of possible gray levels in the image
The transformation function (called the histogram equalization or histogram linearization
transformation) is

sk = T(rk) = (L-1) Σj=0..k pr(rj) = (L-1) Σj=0..k nj/n,   k = 0, 1, 2, ..., L-1
Thus, an output image is obtained by mapping each pixel with level rk in the input
image into a corresponding pixel with level sk.
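The discrete mapping above translates directly into a short NumPy routine; the function name is an illustrative choice:

import numpy as np

def histogram_equalize(f, L=256):
    hist = np.bincount(f.ravel(), minlength=L)    # n_k for each gray level
    p = hist / f.size                             # p_r(r_k) = n_k / n
    cdf = np.cumsum(p)                            # running sum of p_r(r_j)
    s = np.round((L - 1) * cdf).astype(np.uint8)  # s_k = (L-1) * sum
    return s[f]                                   # map each r_k to s_k

The array s acts as a lookup table implementing sk = T(rk), so indexing it with the image applies the transformation to every pixel at once.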
Spatial Filtering:
The mechanics of spatial filtering are illustrated in the figure below.
The process consists simply of moving the filter mask from point to point in an image. At each
point (x, y), the response of the filter at that point is calculated using a predefined relationship.
The response is given by a sum of products of the filter coefficients and the corresponding image
pixels in the area spanned by the filter mask, as illustrated for a 3×3 mask in Fig. 3.10.
Fig 3.10 The mechanics of linear spatial filtering using a 3×3 filter mask
The response R at a particular location (x, y) is given by

R = w1 z1 + w2 z2 + ... + wmn zmn = Σi=1..mn wi zi

where the w's are the filter coefficients, the z's are the values of the image gray levels corresponding
to those coefficients, and mn is the total number of coefficients in the mask.
For the general 3×3 mask, the response at any point (x, y) in the image is

R = w1 z1 + w2 z2 + ... + w9 z9 = Σi=1..9 wi zi
• The term spatial filtering refers to filtering operations that are performed directly on the pixels
of an image. The process consists simply of moving the filter mask from point to point in the
image. Spatial filters fall into two categories (a code sketch of the basic mechanics follows
this list):
– Smoothing spatial filters
– Sharpening spatial filters
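A direct, unoptimized sketch of the sum-of-products mechanics described above; the function name is illustrative, and replicate padding at the borders is an assumption (real implementations handle borders in various ways):

import numpy as np

def linear_filter(f, w):
    # Move the mask w over the image; at each (x, y) the response is
    # the sum of products of the coefficients and the pixels under them.
    m, n = w.shape
    a, b = m // 2, n // 2
    p = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode='edge')
    g = np.empty(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * p[x:x + m, y:y + n])
    return g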
Smoothing Spatial Filters:
The output (response) of a smoothing, linear spatial filter is simply the average of the pixels
contained in the neighborhood of the filter mask.
These are also called averaging filters or lowpass filters.
• They work by replacing the value of every pixel in an image with the average of the intensity
levels in the neighborhood defined by the filter mask.
• This reduces "sharp" transitions in intensities.
• Random noise typically consists of sharp transitions.
• Edges are also characterized by sharp intensity transitions, so averaging filters have the
undesirable side effect of blurring edges.
• If all coefficients in the filter are equal, it is also called a box filter.
Fig 3.12 Averaging filter
Use of the first filter gives the standard average of the pixels under the mask. This can best be
seen by substituting the coefficients of the mask into the response:

R = (1/9) (z1 + z2 + ... + z9) = (1/9) Σi=1..9 zi
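A standalone sketch of 3×3 box filtering using shifted array slices instead of explicit loops (the padding mode is an assumption):

import numpy as np

def box_filter_3x3(f):
    # Replace each pixel with the mean of its 3x3 neighborhood
    # (all nine weights equal to 1/9).
    H, W = f.shape
    p = np.pad(f.astype(np.float64), 1, mode='edge')
    g = np.zeros((H, W))
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            g += p[1 + dx:1 + dx + H, 1 + dy:1 + dy + W]
    return (g / 9.0).astype(np.uint8)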
Order-statistics filters are nonlinear spatial filters whose response is based on ordering (ranking)
the pixels contained in the image area encompassed by the filter, and then replacing the value of
the center pixel with the value determined by the ranking result.
The best-known example in this category is the median filter, which, as its name implies, replaces
the value of a center pixel by the median of the gray levels in the neighborhood of that pixel.
Median filters are quite popular because, for certain types of random noise, they provide excellent
noise-reduction capabilities, with considerably less blurring than linear smoothing filters of
similar size.
Median filters are particularly effective in the presence of salt-and-pepper noise.
The median represents the 50th percentile of a ranked set of numbers, while the 100th or 0th
percentile results in the so-called max filter or min filter, respectively.
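A corresponding sketch of a 3×3 median filter, stacking the nine shifted neighborhoods and taking the median along the new axis (padding mode is an assumption):

import numpy as np

def median_filter_3x3(f):
    # Replace each pixel with the median of its 3x3 neighborhood;
    # very effective against salt-and-pepper noise.
    H, W = f.shape
    p = np.pad(f, 1, mode='edge')
    shifted = [p[1 + dx:1 + dx + H, 1 + dy:1 + dy + W]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return np.median(np.stack(shifted), axis=0).astype(f.dtype)

Replacing np.median with np.max or np.min in the last line yields the max and min filters mentioned above.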
Foundation:
• The first-order derivative of a one-dimensional function f(x) is approximated by the difference f(x+1) - f(x).
The approach basically consists of defining a discrete formulation of the second-order derivative
and then constructing a filter mask based on that formulation.
The simplest isotropic derivative operator is the Laplacian, which, for an image f(x, y) of two
variables, is defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²
Because derivatives of any order are linear operations, the Laplacian is a linear operator. We use
the following notation for the partial second-order derivative in the x-direction:

∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)

and, similarly, in the y-direction:

∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)

so the discrete Laplacian of two variables is

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
Fig.3.14. (a) Filter mask used to implement the digital Laplacian (b) Mask used to
implement an extension of this equation that includes the diagonal neighbors. (c) and (d)
Two other implementations of the Laplacian.
In the Laplacian filter the sum of all filter coefficients is zero, and the center coefficient can be
taken as positive or negative.
Because the Laplacian is a derivative operator, its use highlights gray-level discontinuities
(edges) in an image and deemphasizes regions with slowly varying gray levels.
The resultant image g(x, y) is obtained by combining the original image with the Laplacian as follows:

g(x, y) = f(x, y) - ∇²f(x, y)   if the center coefficient of the Laplacian mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)   if the center coefficient of the Laplacian mask is positive
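A sketch of Laplacian sharpening using the discrete formula above with the negative-center mask, so the detail is added back via g = f - ∇²f (function name and padding are illustrative):

import numpy as np

def laplacian_sharpen(f):
    H, W = f.shape
    p = np.pad(f.astype(np.float64), 1, mode='edge')
    # Discrete Laplacian: up + down + left + right - 4 * center.
    lap = (p[0:H, 1:W + 1] + p[2:H + 2, 1:W + 1] +
           p[1:H + 1, 0:W] + p[1:H + 1, 2:W + 2] -
           4.0 * p[1:H + 1, 1:W + 1])
    # Center coefficient of the mask is negative, so subtract the Laplacian.
    return np.clip(f - lap, 0, 255).astype(np.uint8)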
• Unsharp Masking (see the sketch after these steps)
– Read the original image f(x, y)
– Blur the original image to obtain f'(x, y)
– Mask = f(x, y) - f'(x, y)
– g(x, y) = f(x, y) + Mask
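The four steps above, sketched with a 3×3 box blur standing in for the blurring step (the choice of blur is an assumption; any lowpass filter works):

import numpy as np

def unsharp_mask(f):
    H, W = f.shape
    p = np.pad(f.astype(np.float64), 1, mode='edge')
    # Step 2: blur the original image (3x3 box average here).
    blurred = sum(p[1 + dx:1 + dx + H, 1 + dy:1 + dy + W]
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0
    mask = f - blurred                                   # Step 3: mask = f - f'
    return np.clip(f + mask, 0, 255).astype(np.uint8)    # Step 4: g = f + mask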
It is common practice to approximate the magnitude of the gradient by using absolute values
instead of squares and square roots:

∇f ≈ |Gx| + |Gy|

Two other definitions proposed by Roberts [1965] in the early development of digital image
processing use cross differences:

Gx = z9 - z5 and Gy = z8 - z6

If we use absolute values, then substituting these quantities into the equation above gives the
following approximation to the gradient:

∇f ≈ |z9 - z5| + |z8 - z6|
Sobel operator
The Sobel operator highlights edges in an image, i.e., it highlights intensity discontinuities.
In the Sobel masks, too, the sum of all filter coefficients is zero, so they give a response of zero
in areas of constant gray level.
The idea behind using a weight of 2 for the center coefficients is to achieve some smoothing.
Using the 3×3 neighborhood labeling z1, ..., z9, the Sobel approximations to the partial
derivatives are

Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3) and Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
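A sketch of the Sobel gradient magnitude using the |Gx| + |Gy| approximation (mask orientation conventions vary between texts; function name and padding are illustrative):

import numpy as np

def sobel_magnitude(f):
    # Sobel masks: the weight 2 on the center row/column adds smoothing.
    gx = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=np.float64)   # responds to horizontal edges
    gy = gx.T                                          # responds to vertical edges
    H, W = f.shape
    p = np.pad(f.astype(np.float64), 1, mode='edge')
    Gx = np.zeros((H, W))
    Gy = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            win = p[i:i + H, j:j + W]
            Gx += gx[i, j] * win
            Gy += gy[i, j] * win
    return np.clip(np.abs(Gx) + np.abs(Gy), 0, 255).astype(np.uint8)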