Unit II Image Enhancement
Introduction
• Image enhancement techniques are designed to improve
the quality of an image as perceived by a human being.
• Image enhancement can be performed both in the spatial
as well as in the frequency domain.
• Image enhancement approaches fall into two broad
categories: spatial domain methods and frequency domain
methods.
• The term spatial domain refers to the image plane itself,
and approaches in this category are based on direct
manipulation of pixels in an image.
• Frequency domain processing techniques are based on
modifying the Fourier transform of an image.
• Enhancing an image provides better contrast and more detail
than the non-enhanced image.
• Image enhancement has many important applications: it is
used to enhance medical images, images captured in remote
sensing, satellite images, etc.
• Spatial domain methods are procedures that operate directly
on these pixels. Spatial domain processes will be denoted
by the expression
• g(x, y) = T[f(x, y)]
• where f(x, y) is the input image, g(x, y) is the processed
image, and T is an operator on f, defined over some
neighborhood of (x, y).
• The principal approach in defining a neighborhood about a
point (x, y) is to use a square or rectangular subimage area
centered at (x, y), as shown in the figure.
• For any specific location (x0, y0), the value of the output image
g at those coordinates is equal to the result of applying T to
the neighborhood with origin at (x0, y0) in f.
• For example, suppose that the neighborhood is a square of size
3 × 3 and that operator T is defined as "compute the average
intensity of the pixels in the neighborhood."
• Consider an arbitrary location in an image, say (100, 150).
• The result at that location in the output image, g(100, 150), is the
sum of f(100, 150) and its 8-neighbors, divided by 9.
• The center of the neighborhood is then moved to the next adjacent
location and the procedure is repeated to generate the next value
of the output image “g”
• Typically, the process starts at the top left of the input image and
proceeds pixel by pixel in a horizontal (vertical) scan, one row
(column) at a time.
• The center of the sub image is moved from pixel to pixel
starting at the top left corner.
• The operator T is applied at each location (x, y) to yield the
output, g, at that location
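As a sketch, the 3 × 3 averaging operator T described above can be written as follows (assuming NumPy and a grayscale image stored as a 2-D array; zero-padding at the borders is one of several possible choices and is not prescribed by the text):

```python
import numpy as np

def average_filter(f, size=3):
    """Slide a size x size neighborhood over f and replace each pixel
    with the average intensity of its neighborhood (the operator T
    described above). Image borders are zero-padded here."""
    pad = size // 2
    padded = np.pad(f, pad, mode="constant")  # zero-pad the borders
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            # neighborhood with origin (center) at (x, y) in f
            g[x, y] = padded[x:x + size, y:y + size].mean()
    return g

# g(1, 1) is the sum of f(1, 1) and its 8-neighbors, divided by 9
f = np.arange(9).reshape(3, 3).astype(float)
g = average_filter(f)
print(g[1, 1])  # 4.0: the average of the values 0..8
```

The two nested loops mirror the pixel-by-pixel scan described above: the center of the neighborhood moves from the top-left corner across the image one location at a time.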
• The simplest form of T is when the neighborhood is of size
1*1 (that is, a single pixel).
• In this case, g depends only on the value of f at (x, y), and T
becomes a gray-level (also called an intensity or mapping)
transformation function of the form s = T(r), where r is the
gray level of a pixel in the input image and s is the
corresponding gray level in the output image.
• T is a transformation function that maps each value of r to
a value of s.
• Enhancement is the process of manipulating an image so
that the result is more suitable than the original for a
specific application.
• The word specific is important, because it establishes at the
outset that enhancement techniques are problem-oriented.
• For example, a method that is quite useful for enhancing X-
ray images may not be the best approach for enhancing
infrared images.
• There is no general “theory” of image enhancement.
• When an image is processed for visual interpretation, the
viewer is the ultimate judge of how well a particular
method works. When dealing with machine perception,
enhancement is easier to quantify.
• For example, in an automated character-recognition system,
the most appropriate enhancement method is the one that
results in the best recognition rate, leaving aside other
considerations such as computational requirements of one
method versus another.
• Regardless of the application or method used, image
enhancement is one of the most visually appealing areas of
image processing.
Gray level transformation functions for contrast
enhancement
BASIC GRAY LEVEL TRANSFORMATIONS: Spatial
Domain
• 1. Image negative
• 2. Log transformations
• 3. Power law transformations
• 4. Piecewise-Linear transformation functions
IMAGE NEGATIVE
• The negative of an image with gray levels in the range [0,
L-1] is obtained by the negative transformation S = T(r),
given by
• S = L - 1 - r
• where r is the gray level value at pixel (x, y)
• and L is the number of gray levels in the image (so L - 1 is
the largest gray level)
• The result looks like a photographic negative. It is useful
for enhancing white details embedded in dark regions of the
image.
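A minimal sketch of the negative transformation, assuming NumPy and an 8 bpp grayscale image (L = 256):

```python
import numpy as np

def negative(f, L=256):
    """Negative transformation s = (L - 1) - r, applied to every pixel.
    L is the number of gray levels (256 for an 8 bpp image)."""
    return (L - 1) - f

# a dark pixel (r = 10) maps to a bright one (s = 245), and vice versa
f = np.array([[0, 10], [128, 255]], dtype=np.uint8)
print(negative(f))  # [[255 245] [127   0]]
```

Because the operation depends only on the value at each pixel, this is the simplest form of T: a 1 × 1 neighborhood.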
• Figure: some basic gray-level transformation functions used
for image enhancement, plotted against the input gray level r.
• In this case the transformation S = (L - 1) - r has been
applied. Since the input image of Einstein is an 8 bpp (bits
per pixel) image, the number of gray levels is 256, so
S = 255 - r.
• The intensity-level slicing function highlights the range
[A, B] and leaves other intensities unchanged.
• Intensity level slicing:
There are applications in which it is of interest to highlight
a specific range of intensities in an image.
Some of these applications include enhancing features in
satellite imagery, such as masses of water, and enhancing
flaws in X-ray images.
The method, called intensity-level slicing, can be
implemented in several ways, but most are variations of
two basic themes. One approach is to display in one value
(say, white) all the values in the range of interest and in
another (say, black) all other intensities.
• This transformation, shown in above Fig (a), produces a
binary image.
• The second approach, based on the transformation in Fig.
(b), brightens (or darkens) the desired range of intensities,
but leaves all other intensity levels in the image unchanged
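The two slicing approaches above can be sketched as follows (assuming NumPy and an 8 bpp image; the function names are illustrative, not standard):

```python
import numpy as np

def slice_binary(f, A, B, L=256):
    """First approach: display white for intensities in the range
    [A, B] and black for all other intensities (binary output)."""
    return np.where((f >= A) & (f <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(f, A, B, L=256):
    """Second approach: brighten the range [A, B] to white but leave
    all other intensity levels in the image unchanged."""
    g = f.copy()
    g[(f >= A) & (f <= B)] = L - 1
    return g

f = np.array([100, 150, 200], dtype=np.uint8)
print(slice_binary(f, 120, 180))    # [  0 255   0]
print(slice_preserve(f, 120, 180))  # [100 255 200]
```

Only the middle pixel falls inside [120, 180]; the first approach discards everything else, while the second keeps it intact.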
• Bit-Plane Slicing:
Pixel values are integers composed of bits. For example,
values in a 256-level gray-scale image are composed of 8
bits (one byte).
Instead of highlighting intensity-level ranges, we could
highlight the contribution made to total image appearance
by specific bits.
• An 8-bit image may be considered as being composed of
eight one-bit planes, with plane 1 containing the lowest-
order bit of all pixels in the image, and plane 8 all the
highest-order bits.
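Extracting a single bit plane can be sketched with a shift and a mask (assuming NumPy; the plane numbering follows the text, with plane 1 the lowest-order bit and plane 8 the highest):

```python
import numpy as np

def bit_plane(f, plane):
    """Extract one bit plane of an 8-bit image.
    plane 1 holds the lowest-order bit, plane 8 the highest-order bit."""
    return (f >> (plane - 1)) & 1

f = np.array([0b10110101], dtype=np.uint8)  # a pixel with value 181
print(bit_plane(f, 1))  # [1]  lowest-order bit
print(bit_plane(f, 2))  # [0]
print(bit_plane(f, 8))  # [1]  highest-order bit
```

Summing plane k scaled by 2^(k-1) over all eight planes reconstructs the original pixel values, which is why the high-order planes carry most of the visually significant image content.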