
UNIT II IMAGE ENHANCEMENT

Introduction
• Image enhancement techniques are designed to improve
the quality of an image as perceived by a human being.
• Image enhancement can be performed both in the spatial
as well as in the frequency domain.
• Image enhancement approaches fall into two broad
categories: spatial domain methods and frequency domain
methods.
• The term spatial domain refers to the image plane itself,
and approaches in this category are based on direct
manipulation of pixels in an image.
• Frequency domain processing techniques are based on
modifying the Fourier transform of an image.
• Enhancing an image provides better contrast and more visible detail compared to the non-enhanced image.
• Image enhancement has many practical applications: it is used to enhance medical images, images captured in remote sensing, satellite images, etc.
• Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression
g(x, y) = T[f(x, y)]
• where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined over some neighborhood of (x, y).
• The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y), as shown in the figure.
• For any specific location (x0, y0), the value of the output image g at those coordinates is equal to the result of applying T to the neighborhood with origin at (x0, y0) in f.
• For example, suppose that the neighborhood is a square of size 3 × 3 and that the operator T is defined as "compute the average intensity of the pixels in the neighborhood."
• Consider an arbitrary location in an image, say (100, 150).
• The result at that location in the output image, g(100, 150), is the sum of f(100, 150) and its 8-neighbors, divided by 9.
• The center of the neighborhood is then moved to the next adjacent
location and the procedure is repeated to generate the next value
of the output image “g”
• Typically, the process starts at the top left of the input image and
proceeds pixel by pixel in a horizontal (vertical) scan, one row
(column) at a time.
• The center of the sub image is moved from pixel to pixel
starting at the top left corner.
• The operator T is applied at each location (x, y) to yield the
output, g, at that location
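• As a rough illustration of these mechanics (a Python/NumPy sketch; the text itself gives no code, and the function name average_3x3 is mine), the following applies the operator T = "average of the 3 × 3 neighborhood" by scanning the image pixel by pixel:

```python
import numpy as np

def average_3x3(f):
    """Apply T = 'average of the 3x3 neighborhood' at every pixel of f.

    The image is zero-padded by one pixel on each side so the kernel
    center can visit every pixel, including those on the border."""
    f = f.astype(float)
    padded = np.pad(f, 1, mode='constant')      # pad the borders with 0's
    g = np.zeros_like(f)
    rows, cols = f.shape
    for x in range(rows):                        # scan row by row
        for y in range(cols):
            # 3x3 neighborhood centered at (x, y) in the padded image
            neighborhood = padded[x:x + 3, y:y + 3]
            g[x, y] = neighborhood.sum() / 9.0   # operator T: the average
    return g

# Example: g(100, 150) is the average of f(100, 150) and its 8-neighbors
f = np.random.randint(0, 256, size=(256, 256))
g = average_3x3(f)
```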
• The simplest form of T is when the neighborhood is of size
1*1 (that is, a single pixel).
• In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form s = T(r), where r denotes the intensity of a pixel in the input image and s the corresponding intensity in the output image.
• T is a transformation function that maps each value of r to a corresponding value of s.
• Enhancement is the process of manipulating an image so
that the result is more suitable than the original for a
specific application.
• The word specific is important, because it establishes at the
outset that enhancement techniques are problem-oriented.
• for example, a method that is quite useful for enhancing X-
ray images may not be the best approach for enhancing
infrared images.
• There is no general “theory” of image enhancement.
• When an image is processed for visual interpretation, the
viewer is the ultimate judge of how well a particular
method works. When dealing with machine perception,
enhancement is easier to quantify.
• For example, in an automated character-recognition system,
the most appropriate enhancement method is the one that
results in the best recognition rate, leaving aside other
considerations such as computational requirements of one
method versus another.
• Regardless of the application or method used, image
enhancement is one of the most visually appealing areas of
image processing.
Gray level transformation functions for contrast
enhancement
BASIC GRAY LEVEL TRANSFORMATIONS: Spatial
Domain
• 1. Image negative
• 2. Log transformations
• 3. Power law transformations
• 4. Piecewise-Linear transformation functions
IMAGE NEGATIVE
• The negative of an image with gray levels in the range [0, L-1] is obtained by the negative transformation S = T(r), given by
• S = L - 1 - r
• Where r = gray level value at pixel (x, y)
• L is the number of gray levels in the image, so L - 1 is the largest gray level
• The result looks like a photographic negative. It is useful for enhancing white details embedded in dark regions of an image.
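• A minimal sketch of the negative transformation in Python/NumPy (illustration only; names are assumed, not from the text):

```python
import numpy as np

def negative(image, L=256):
    """Image negative: s = (L - 1) - r, applied to every pixel."""
    return ((L - 1) - image.astype(np.int32)).astype(np.uint8)

# For an 8-bit image this reduces to s = 255 - r
r = np.array([[0, 64], [128, 255]], dtype=np.uint8)
s = negative(r)            # [[255, 191], [127, 0]]
```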
• Some basic gray-level
transformation functions used
for image enhancement
• In this case the following transformation has been applied:
• S = (L - 1) - r. Since the input image of Einstein is an 8 bpp (bits per pixel) image, the number of levels in this image is 256. Putting 256 into the equation, we get
• S = 255 - r
• So, each pixel value is subtracted from 255, and the resulting image is shown above: the lighter pixels become dark and the darker pixels become light, producing the image negative.
• This is shown in the graph below (output gray level s plotted against input gray level r).
LOGARITHMIC TRANSFORMATIONS:
• Logarithmic transformation contains two types: the log transformation and the inverse-log transformation.
LOG TRANSFORMATIONS:
• The log transformation can be defined by the formula
• S = c log(r + 1)
• Where S and r are the pixel values of the output and the input image and c is a constant. The value 1 is added to each pixel value of the input image because if there is a pixel intensity of 0 in the image, then log(0) is undefined. So 1 is added to make the minimum value at least 1.
• During log transformation, the dark pixel values in an image are expanded relative to the higher pixel values, while the higher pixel values are compressed. This results in enhanced detail in the darker regions of the image.
ANOTHER WAY TO REPRESENT LOG TRANSFORMATIONS
• The log transformation enhances details in the darker regions of an image at the expense of detail in the brighter regions: s = T(r) = c log(1 + r)
• Here c is a constant and r ≥ 0
• The shape of the curve shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range of output values.
• The opposite is true for the high gray-level values of the input image.
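• A short Python/NumPy sketch of the log transformation (the choice of c that maps the maximum input to L − 1 is a common convention, assumed here rather than taken from the text):

```python
import numpy as np

def log_transform(image, L=256):
    """s = c * log(1 + r). Here c is chosen so that the maximum input
    value maps to L - 1 (a common convention, not mandated by the text)."""
    r = image.astype(float)
    c = (L - 1) / np.log(1 + r.max())
    s = c * np.log(1 + r)
    return s.astype(np.uint8)

# Dark values are spread out, bright values are compressed
r = np.array([[0, 10, 100, 255]], dtype=np.uint8)
print(log_transform(r))    # approximately [[  0 110 212 255]]
```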
POWER-LAW TRANSFORMATIONS
• Power-law transformations include the nth-power and nth-root transformations. They can be given by the expression: S = c r^γ
• This symbol γ is called gamma, due to which this
transformation is also known as gamma transformation.
• Varying the value of γ varies the enhancement of the image.
• Different display devices / monitors have their own gamma correction, which is why they display the same image at different intensities. In the expression above, c and γ are positive constants.
• Curves generated with values of γ > 1 have exactly the
opposite effect as those generated with values of γ < 1.
• The response of many devices used for image capture,
printing, and display obey a power law.
• By convention, the exponent in a power-law equation is
referred to as gamma
• The process used to correct these power-law response
phenomena is called gamma correction or gamma encoding.
• For example, cathode ray tube (CRT) devices have an
intensity-to-voltage response that is a power function, with
exponents varying from approximately 1.8 to 2.5
• The above equation can be rewritten as: S = c (r + ε)^γ

• Plots of the gamma equation s = c r^γ for various values of γ (c = 1 in all cases). Each curve was scaled independently so that all curves would fit in the same graph.
• Our interest here is in the shapes of the curves, not in their relative values.
(a) Intensity ramp image. (b) Image as viewed on a simulated
monitor with a gamma of 2.5. (c) Gamma corrected image.
(d) Corrected image as viewed on the same monitor
• In this case, gamma correction consists of using the transformation s = r^(1/2.5) = r^0.4 to preprocess the image before inputting it into the monitor (Figure (c)).
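• A Python/NumPy sketch of gamma correction (the scaling of r to [0, 1] before applying the power law is an assumption of this illustration, not stated in the text):

```python
import numpy as np

def gamma_correct(image, gamma, c=1.0, L=256):
    """Power-law (gamma) transformation s = c * r**gamma.

    The image is first scaled to [0, 1], transformed, then rescaled
    back to [0, L-1] (a common normalization; the text uses c = 1)."""
    r = image.astype(float) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# Pre-correcting for a monitor with gamma 2.5: apply gamma = 1/2.5 = 0.4
ramp = np.arange(256, dtype=np.uint8).reshape(1, -1)   # intensity ramp
corrected = gamma_correct(ramp, gamma=1 / 2.5)
```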
HISTOGRAM PROCESSING
• In digital image processing, the histogram is a graphical representation of the intensity distribution of a digital image.
• Let rk, for k = 0, 1, 2, ..., L-1, denote the intensities of an L-level digital image, f(x, y). The unnormalized histogram of f is defined as
h(rk) = nk, for k = 0, 1, 2, ..., L-1
• where nk is the number of pixels in f with intensity rk, and the subdivisions of the intensity scale are called histogram bins. Similarly, the normalized histogram of f is defined as
p(rk) = h(rk) / MN = nk / MN
• where, as usual, M and N are the number of image rows and columns, respectively. Mostly, we work with normalized histograms, which we refer to simply as histograms or image histograms.
• The sum of p(rk) for all values of k is always 1.
• The components of p (rk) are estimates of the probabilities
of intensity levels occurring in an image.
• Histograms are simple to compute and are also suitable for
fast hardware implementations, thus making histogram-
based techniques a popular tool for real-time image
processing.
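• A small Python/NumPy sketch of computing the unnormalized and normalized histograms as defined above (the function name is illustrative):

```python
import numpy as np

def histograms(image, L=256):
    """Return the unnormalized histogram h(rk) = nk and the
    normalized histogram p(rk) = nk / (M*N) of an L-level image."""
    M, N = image.shape
    h = np.bincount(image.ravel(), minlength=L)   # nk for k = 0..L-1
    p = h / (M * N)                               # sums to 1
    return h, p

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
h, p = histograms(img)
assert abs(p.sum() - 1.0) < 1e-12
```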
• Histogram shape is related to image appearance. For
example, Fig shows images with four basic intensity
characteristics: dark, light, low contrast, and high contrast;
the image histograms are also shown.
• We note in the dark image that the most populated
histogram bins are concentrated on the lower (dark) end of
the intensity scale.
• Similarly, the most populated bins of the light image are
biased toward the higher end of the scale
• An image with low contrast has a narrow histogram located
typically toward the middle of the intensity scale, figure c
• The components of the histogram of the high-contrast
image cover a wide range of the intensity scale, and the
distribution of pixels is not too far from uniform, with few
bins being much higher than the others.
• Intuitively, it is reasonable to conclude that an image whose
pixels tend to occupy the entire range of possible intensity
levels and, in addition, tend to be distributed uniformly, will
have an appearance of high contrast and will exhibit a large
variety of gray tones.
• The net effect will be an image that shows a great deal of
gray-level detail and has a high dynamic range
HISTOGRAM EQUALIZATION
• Assuming initially continuous intensity values, let the
variable r denote the intensities of an image to be
processed.
• As usual, we assume that r is in the range [0, L-1], with r = 0
representing black and r = L− 1 representing white.
• For r satisfying these conditions, we focus attention on transformations (intensity mappings) of the form s = T(r), 0 ≤ r ≤ L − 1, that produce an output intensity value, s, for a given intensity value r in the input image.
(a) Monotonically increasing function, showing how multiple values can map to a single value. (b) Strictly monotonically increasing function. This is a one-to-one mapping, both ways.
• We assume that:
(a) T(r) is a monotonically increasing function in the interval 0 ≤ r ≤ L − 1; and
(b) 0 ≤ T(r) ≤ L − 1 for 0 ≤ r ≤ L − 1.
• The condition in (a) that T(r) be monotonically increasing guarantees that output intensity values will never be less than corresponding input values, thus preventing artifacts created by reversals of intensity.
• Condition (b) guarantees that the range of output intensities is the same as that of the input.
• Finally, condition (a'), that T(r) be strictly monotonically increasing, guarantees that the mappings from s back to r will be one-to-one, thus preventing ambiguities.
• The intensity of an image may be viewed as a random
variable in the interval [0 ,L-1 ].Let pr (r) and ps (s) denote
the PDFs of intensity values r and s in two different images.
• The subscripts on p indicate that pr and ps are different
functions.
• A fundamental result from probability theory is that if pr(r) and T(r) are known, and T(r) is continuous and differentiable over the range of values of interest, then the PDF (probability density function) of the transformed (mapped) variable s can be obtained as
ps(s) = pr(r) |dr/ds|     ... (1)
• A transformation function of particular importance in image processing is
s = T(r) = (L − 1) ∫0^r pr(w) dw
• where w is a dummy variable of integration. The integral on the right side is the cumulative distribution function (CDF) of random variable r.
• Because PDFs always are positive, and the integral of a function is the area under the function, this transformation function is monotonically increasing and so satisfies condition (a).
• Substituting this transformation into Eq. (1) gives ps(s) = 1/(L − 1) for 0 ≤ s ≤ L − 1; that is, the output intensities are uniformly distributed.
• These are the values of the equalized histogram.
• Observe that the transformation yielded only five distinct
intensity levels. Because r0 = 0 was mapped to s0 = 1, there
are 790 pixels in the histogram equalized image with this
value
• Also, there are 1023 pixels with a value of s1 = 3 and 850 pixels with a value of s2 = 5. However, both r3 and r4 were mapped to the same value, 6, so there are (656 + 329) = 985 pixels in the equalized image with this value.
• Similarly, there are (245 +122+81) = 448 pixels with a
value of 7 in the histogram equalized image. Dividing these
numbers by MN = 4096 yielded the equalized histogram
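• The discrete equalization just described can be sketched in Python/NumPy as follows (rounding the scaled CDF to the nearest integer is the usual discrete convention; names are illustrative):

```python
import numpy as np

def equalize(image, L=256):
    """Discrete histogram equalization:
    s_k = round((L - 1) * sum_{j <= k} p_r(r_j)), i.e. the CDF scaled to [0, L-1]."""
    M, N = image.shape
    hist = np.bincount(image.ravel(), minlength=L)
    p_r = hist / (M * N)
    cdf = np.cumsum(p_r)
    s = np.round((L - 1) * cdf).astype(np.uint8)   # mapping r_k -> s_k
    return s[image]                                 # apply as a lookup table

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
eq = equalize(img)
```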
• Because a histogram is an approximation to a PDF, and no
new allowed intensity levels are created in the process,
perfectly flat histograms are rare in practical applications of
histogram equalization using the method just discussed.
• Thus, unlike its continuous counterpart, it cannot be proved in general that discrete histogram equalization produces a perfectly uniform histogram.
• However, the method still has the advantage of producing intensity values that tend to span the entire gray scale.
• It also has the advantage that it is fully automatic.
• This automatic, "hands-off" characteristic is important.
• The inverse transformation from s back to r, r = T⁻¹(s), satisfies conditions (a') and (b) defined earlier only if all intensity levels are present in the input image.
• This implies that none of the bins of the image histogram
are empty.
• Although the inverse transformation is not used in
histogram equalization, it plays a central role in the
histogram-matching scheme developed
Histogram equalization. (a) Original histogram. (b)
Transformation function. (c) Equalized histogram.
HISTOGRAM MATCHING (SPECIFICATION)
• histogram equalization produces a transformation function that
seeks to generate an output image with a uniform histogram.
• When automatic enhancement is desired, this is a good
approach to consider because the results from this technique
are predictable and the method is simple to implement
• However, there are applications in which histogram
equalization is not suitable.
• In particular, it is useful sometimes to be able to specify the
shape of the histogram that we wish the processed image to
have. The method used to generate images that have a
specified histogram is called histogram matching or histogram
specification.
In image processing, histogram matching or
histogram specification is the transformation of an
image so that its histogram matches a specified
histogram. The well-known histogram equalization
method is a special case in which the specified
histogram is uniformly distributed.
Corresponding histogram-equalized images
• Consider for a moment continuous intensities r and z which,
as before, we treat as random variables with PDFs pr(r) and
pz(z) respectively. Here, r and z denote the intensity levels
of the input and output (processed) images, respectively.
• We can estimate pr(r) from the given input image, and pz(z)
is the specified PDF that we wish the output image to have

• Let s be a random variable with the property
s = T(r) = (L − 1) ∫0^r pr(w) dw
• where w is a dummy variable of integration.
• Next, define a function G of the variable z with the property
G(z) = (L − 1) ∫0^z pz(v) dv = s
• where v is a dummy variable of integration. It follows from the preceding two equations that G(z) = s = T(r) and, therefore, that z must satisfy the condition
z = G⁻¹(s) = G⁻¹[T(r)]
An image whose intensity levels have a specified PDF can be
obtained using the following procedure
• Obtain pr(r) from the input image.
• Use the specified PDF pz(z) to obtain the function G(z).
• Compute the inverse transformation z = G⁻¹(s); this is a mapping from s to z, the latter being the values that have the specified PDF.
• Obtain the output image by first equalizing the input image. The pixel values in this image are the s values. For each pixel with value s in the equalized image, perform the inverse mapping z = G⁻¹(s) to obtain the corresponding pixel in the output image. When all pixels have been processed with this transformation, the PDF of the output image, pz(z), will be equal to the specified PDF.
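• The procedure above can be sketched in Python/NumPy as follows (inverting G by searching for the smallest z with G(z) ≥ s is one common implementation choice; the triangular target PDF in the example is made up):

```python
import numpy as np

def match_histogram(image, target_pdf, L=256):
    """Histogram specification: build T(r) from the input image,
    G(z) from the specified PDF, then map each pixel via z = G^-1(T(r))."""
    M, N = image.shape
    p_r = np.bincount(image.ravel(), minlength=L) / (M * N)
    T = np.round((L - 1) * np.cumsum(p_r))          # s = T(r), one value per r
    G = np.round((L - 1) * np.cumsum(target_pdf))   # G(z), one value per z
    # Invert G: for each s, pick the smallest z with G(z) >= s
    z_map = np.searchsorted(G, T).clip(0, L - 1).astype(np.uint8)
    return z_map[image]

# Example with a made-up (triangular) target PDF
z = np.arange(256)
pdf = z / z.sum()
out = match_histogram(np.random.randint(0, 256, (64, 64), dtype=np.uint8), pdf)
```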
FUNDAMENTALS OF SPATIAL FILTERING
• Spatial filtering is used in a broad spectrum of image
processing applications, so a solid understanding of filtering
principles is important
• “Filtering” refers to passing, modifying, or rejecting
specified frequency components of an image. For example,
a filter that passes low frequencies is called a low pass
filter.
• The net effect produced by a lowpass filter is to smooth an
image by blurring it. We can accomplish similar smoothing
directly on the image itself by using spatial filters
• Spatial filtering modifies an image by replacing the value of
each pixel by a function of the values of the pixel and its
neighbors.
• If the operation performed on the image pixels is linear,
then the filter is called a linear spatial filter. Otherwise, the
filter is a nonlinear spatial filter.
THE MECHANICS OF LINEAR SPATIAL
FILTERING
• A linear spatial filter performs a sum-of-products operation
between an image f and a filter kernel, w.
• The kernel is an array whose size defines the neighborhood
of operation, and whose coefficients determine the nature of
the filter.
• Other terms used to refer to a spatial filter kernel are mask,
template, and window. We use the term filter kernel or
simply kernel.
• At any point (x, y) in the image, the response, g(x, y), of the filter is the sum of products of the kernel coefficients and the image pixels encompassed by the kernel:
g(x, y) = Σ(s=−1..1) Σ(t=−1..1) w(s, t) f(x + s, y + t)   (for a 3 × 3 kernel)
• As coordinates x and y are varied, the center of the kernel
moves from pixel to pixel, generating the filtered image, g,
in the process
• Observe that the center coefficient of the kernel, w(0, 0),
aligns with the pixel at location (x ,y).
• For a kernel of size m*n , we assume that m = 2a+ 1 and
n=2b+1, where a and b are nonnegative integers. This
means that our focus is on kernels of odd size in both
coordinate directions.
• In general, linear spatial filtering of an image of size M × N with a kernel of size m × n is given by the expression
g(x, y) = Σ(s=−a..a) Σ(t=−b..b) w(s, t) f(x + s, y + t)
• where x and y are varied so that the center (origin) of the kernel visits every pixel in f once.
SPATIAL CORRELATION AND CONVOLUTION
• Spatial correlation is illustrated graphically in Figure below
• Correlation consists of moving the center of a kernel over
an image, and computing the sum of products at each
location.
• The mechanics of spatial convolution are the same, except
that the correlation kernel is rotated by 180°.
• Thus, when the values of a kernel are symmetric about its
center, correlation and convolution yield the same result.
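• A small Python/NumPy sketch of spatial correlation and convolution (zero padding at the borders is assumed; with an impulse as input, the two operations visibly differ by a 180° rotation of the kernel):

```python
import numpy as np

def correlate2d(f, w):
    """Spatial correlation: slide the kernel center over f and take the
    sum of products at each location (zero padding at the borders)."""
    m, n = w.shape
    a, b = m // 2, n // 2                        # kernel half-sizes
    padded = np.pad(f.astype(float), ((a, a), (b, b)), mode='constant')
    g = np.zeros(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * padded[x:x + m, y:y + n])
    return g

def convolve2d(f, w):
    """Spatial convolution = correlation with the kernel rotated by 180 degrees."""
    return correlate2d(f, np.rot90(w, 2))

f = np.zeros((5, 5)); f[2, 2] = 1               # a discrete impulse
w = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
# Correlating with an impulse yields a rotated copy of the kernel;
# convolving reproduces the kernel itself.
print(correlate2d(f, w))
print(convolve2d(f, w))
```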
The mechanics of linear spatial filtering using a 3 × 3 kernel. The pixels are shown as squares to simplify the graphics. Note that the origin of the image is at the top left, but the origin of the kernel is at its center. Placing the origin at the center of spatially symmetric kernels simplifies writing expressions for linear filtering.
• Near the borders of the image, part of w lies outside f, so the summation is undefined in that area. A solution to this problem is to pad f with enough 0's on either side.
• In general, if the kernel is of size 1 × m, we need (m − 1)/2 zeros on either side of f in order to handle the beginning and ending configurations of w with respect to f.
SMOOTHING (LOWPASS) SPATIAL FILTERS
• Smoothing (also called averaging) spatial filters are used to reduce sharp transitions in intensity. Because random noise typically consists of sharp transitions in intensity, an obvious application of smoothing is noise reduction. Smoothing prior to image resampling to reduce aliasing is another use.
• Smoothing is used to reduce irrelevant detail in an image,
where “irrelevant” refers to pixel regions that are small with
respect to the size of the filter kernel.
• Another application is for smoothing the false contours that
result from using an insufficient number of intensity levels
in an image
• Smoothing filters are used in combination with other
techniques for image enhancement, such as the histogram
processing techniques
• Convolving a smoothing kernel with an image blurs the
image, with the degree of blurring being determined by the
size of the kernel and the values of its coefficients.
• In addition to being useful in countless applications of
image processing, lowpass filters are fundamental, in the
sense that other important filters, including sharpening
(highpass), bandpass, and bandreject filters can be derived
from lowpass filters
BOX FILTER KERNELS
• The simplest, separable lowpass filter kernel is the box
kernel, whose coefficients have the same value (typically 1).
• The name “box kernel” comes from a constant kernel
resembling a box when viewed in 3-D.
• We showed a 3 ×3 box filter in fig a)
• An m*n box filter is an m×n array of 1’s, with a
normalizing constant in front, whose value is 1 divided by
the sum of the values of the coefficients (i.e., 1/mn when all
the coefficients are 1’s)
• This normalization, which we apply to all lowpass kernels,
has two purposes.
• First, the average value of an area of constant intensity
would equal that intensity in the filtered image, as it should.
• Second, normalizing the kernel in this way prevents
introducing a bias during filtering; that is, the sum of the
pixels in the original and filtered images will be the same
• Because in a box kernel all rows and columns are identical,
the rank of these kernels is 1
• The figure below shows a test pattern of size 1024 × 1024 pixels, and (b)-(d) the results of lowpass filtering with box kernels of sizes 3 × 3, 11 × 11, and 21 × 21.
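• A minimal sketch of constructing a normalized box kernel in Python/NumPy and checking the properties noted above (unit sum, hence no bias, and rank 1); names are illustrative:

```python
import numpy as np

def box_kernel(m, n):
    """An m x n box kernel: an array of 1's with normalizing constant
    1/(m*n) in front, so the coefficients sum to 1."""
    return np.ones((m, n)) / (m * n)

w = box_kernel(3, 3)
print(w.sum())                      # approximately 1.0 -> no bias is introduced
print(np.linalg.matrix_rank(w))     # 1 -> all rows/columns identical (separable)
# Filtering a region of constant intensity leaves that intensity unchanged:
constant_patch = np.full((3, 3), 100.0)
print(np.sum(w * constant_patch))   # approximately 100.0
```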
LOWPASS GAUSSIAN FILTER KERNEL
• Because of their simplicity, box filters are suitable for quick
experimentation and they often yield smoothing results that
are visually acceptable.
• They are useful also when it is desired to reduce the effect of
smoothing on edges
• However, box filters have limitations that make them poor
choices in many applications.
• For example, a defocused lens is often modeled as a lowpass
filter, but box filters are poor approximations to the blurring
characteristics of lenses
• Another limitation is the fact that box filters favor blurring
along perpendicular directions. In applications involving
images with a high level of detail, or with strong
geometrical components, the directionality of box filters
often produces undesirable results. These are but two
applications in which box filters are not suitable.
• The kernels of choice in applications such as those just
mentioned are circularly symmetric (also called isotropic,
meaning their response is independent of orientation).
• As it turns out, Gaussian kernels of the form
w(s, t) = G(s, t) = K e^(−(s² + t²)/(2σ²))
are circularly symmetric and also separable.
• Two other fundamental properties of Gaussian functions are that the product and convolution of two Gaussians are Gaussian functions also.
• Table shows the mean and standard deviation of the
product and convolution of two 1-D Gaussian functions, f
and g (remember, because of separability, we only need a
1-D Gaussian to form a circularly symmetric 2-D function).
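• A Python/NumPy sketch of building a circularly symmetric Gaussian kernel and checking its separability (the kernel size and σ are arbitrary example values):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Circularly symmetric Gaussian kernel w(s,t) = exp(-(s^2+t^2)/(2*sigma^2)),
    normalized so its coefficients sum to 1 (the usual lowpass normalization)."""
    a = size // 2
    s, t = np.meshgrid(np.arange(-a, a + 1), np.arange(-a, a + 1))
    w = np.exp(-(s**2 + t**2) / (2 * sigma**2))
    return w / w.sum()

w = gaussian_kernel(5, sigma=1.0)
# Separability: the 2-D kernel equals the outer product of a 1-D Gaussian with itself
g1 = np.exp(-np.arange(-2, 3)**2 / 2.0)
g1 /= g1.sum()
assert np.allclose(w, np.outer(g1, g1))
```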
SHARPENING (HIGHPASS) SPATIAL FILTERS
• Sharpening highlights transitions in intensity. Uses of
image sharpening range from electronic printing and
medical imaging to industrial inspection and autonomous
guidance in military systems.
• Because averaging is analogous to integration, it is logical to conclude that sharpening can be accomplished by spatial differentiation.
• Image differentiation enhances edges and other
discontinuities (such as noise) and de-emphasizes areas
with slowly varying intensities
• sharpening filters are based on first- and second-order
derivatives, respectively
• Derivatives of a digital function are defined in terms of
differences. There are various ways to define these
differences. However, we require that any definition we use
for a first derivative:
• 1. Must be zero in areas of constant intensity.
• 2. Must be nonzero at the onset of an intensity step or ramp.
• 3. Must be nonzero along intensity ramps.
• Similarly, any definition of a second derivative
• 1. Must be zero in areas of constant intensity.
• 2. Must be nonzero at the onset and end of an intensity
step or ramp.
• 3. Must be zero along intensity ramps.
• A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
∂f/∂x = f(x + 1) − f(x)
• The second-order derivative of f(x) is defined as the difference
∂²f/∂x² = f(x + 1) + f(x − 1) − 2 f(x)
first- and second-order derivatives of a digital
function
• The values denoted by the small squares are the intensity
values along a horizontal intensity profile (the dashed line
connecting the squares is included to aid visualization)
• The actual numerical values of the scan line are shown inside
the small boxes
• the scan line contains three sections of constant intensity, an
intensity ramp, and an intensity step. The circles indicate the
onset or end of intensity transitions
• When computing the first derivative at a location x, we
subtract the value of the function at that location from the
next point,
• Similarly, to compute the second derivative at x, we use the
previous and the next points in the computation
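• A short Python/NumPy sketch of these difference definitions applied to a made-up scan line with the same structure (constant sections, an intensity ramp, and a step); the numbers are illustrative, not those of the figure:

```python
import numpy as np

# Made-up scan line: constant section, ramp, constant section, step, constant section
f = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 1, 6, 6, 6], dtype=float)

# First derivative: f(x + 1) - f(x)
first = f[1:] - f[:-1]
# Second derivative: f(x + 1) + f(x - 1) - 2 f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]

print(first)    # zero on constant sections, nonzero along the ramp and at the step
print(second)   # nonzero only at the onset/end of the ramp and around the step
```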
IMAGE SHARPENING USING FREQUENCY
DOMAIN FILTERS
• An image can be smoothed by attenuating the high-
frequency components of its Fourier transform.
• Because edges and other abrupt changes in intensities are
associated with high-frequency components, image
sharpening can be achieved in the frequency domain by
high pass filtering, which attenuates the low-frequency
components without disturbing high-frequency information
in the Fourier transform.
• The filter functions H(u, v) are understood to be discrete functions of size P × Q; that is, the discrete frequency variables are in the range u = 0, 1, 2, ..., P − 1 and v = 0, 1, 2, ..., Q − 1.
• Sharpening concerns edges and fine detail, which are characterized by sharp transitions in image intensity.
• Such transitions contribute significantly to high frequency
components of Fourier transform. Intuitively, attenuating
certain low frequency components and preserving high
frequency components result in sharpening.
• The intended goal is to do the reverse operation of lowpass filters: where a lowpass filter attenuates frequencies, a highpass filter passes them, and vice versa.
• A highpass filter is obtained from a given lowpass filter using the equation
• Hhp(u, v) = 1 − Hlp(u, v)
• Where Hlp(u, v) is the transfer function of the lowpass filter. That is, when the lowpass filter attenuates frequencies, the highpass filter passes them, and vice versa.
Top row: Perspective plot, image representation, and cross section of a typical ideal high-pass filter. Middle and bottom rows: the same sequence for typical Butterworth and Gaussian high-pass filters.
• IDEAL HIGH-PASS FILTER: A 2-D ideal high-pass filter (IHPF) is defined as
H(u, v) = 0, if D(u, v) ≤ D0
H(u, v) = 1, if D(u, v) > D0
• where D0 is the cutoff frequency and D(u, v) is the distance from point (u, v) to the center of the P × Q frequency rectangle.
• As intended, the IHPF is the opposite of the ILPF in the
sense that it sets to zero all frequencies inside a circle of
radius D0 while passing, without attenuation, all frequencies
outside the circle. As in case of the ILPF, the IHPF is not
physically realizable.
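• A Python/NumPy sketch of an IHPF applied in the frequency domain (the centered-spectrum convention via fftshift is an implementation choice of this illustration, and names are mine):

```python
import numpy as np

def ideal_highpass(P, Q, D0):
    """Ideal highpass transfer function: H(u,v) = 0 if D(u,v) <= D0, else 1,
    where D(u,v) is the distance from (u,v) to the center of the P x Q
    frequency rectangle. Equivalently, H_hp = 1 - H_lp."""
    u = np.arange(P) - P / 2
    v = np.arange(Q) - Q / 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)
    return (D > D0).astype(float)

# Frequency-domain sharpening sketch (centered spectrum convention assumed)
f = np.random.rand(64, 64)
F = np.fft.fftshift(np.fft.fft2(f))
H = ideal_highpass(*F.shape, D0=10)
g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
```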
Fig.: Spatial representation of typical (a) ideal, (b) Butterworth, and (c) Gaussian frequency-domain high-pass filters, and corresponding intensity profiles through their centers.
PIECEWISE-LINEAR TRANSFORMATION
FUNCTIONS
• Contrast Stretching:
One of the simplest piecewise linear functions is a contrast-
stretching transformation.
Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even a wrong setting of a lens aperture during image acquisition. The transformation has the form s = T(r).
• Figure (a) shows a typical transformation used for contrast stretching. The locations of points (r1, s1) and (r2, s2) control the shape of the transformation function. If r1 = s1 and r2 = s2, the transformation is a linear function that produces no changes in gray levels. If r1 = r2, s1 = 0 and s2 = L-1, the transformation becomes a thresholding function that creates a binary image.
• The figure also shows an 8-bit image with low contrast, and the result of contrast stretching, obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L-1), where rmin and rmax denote the minimum and maximum gray levels in the image, respectively. Thus, the transformation function stretched the levels linearly from their original range to the full range [0, L-1].
• Also shown is the result of using the thresholding function defined previously, with r1 = r2 = m, the mean gray level in the image.
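• A Python/NumPy sketch of contrast stretching with breakpoints (r1, s1) and (r2, s2) (using np.interp to build the piecewise-linear mapping is an implementation choice; the tiny offset in the thresholding example keeps the breakpoints strictly increasing):

```python
import numpy as np

def contrast_stretch(image, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching through (r1, s1) and (r2, s2).
    np.interp builds the segments (0,0)-(r1,s1)-(r2,s2)-(L-1,L-1)."""
    r = image.astype(float)
    s = np.interp(r, [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

img = np.random.randint(60, 120, size=(64, 64), dtype=np.uint8)   # low contrast
# (r1, s1) = (rmin, 0), (r2, s2) = (rmax, L-1): stretch to the full range
stretched = contrast_stretch(img, img.min(), 0, img.max(), 255)
# r1 = r2 = m (the mean) -> thresholding; a tiny offset keeps the
# breakpoints strictly increasing for np.interp
m = float(img.mean())
binary = contrast_stretch(img, m, 0, m + 1e-6, 255)
```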
• Gray-level Slicing:
Highlighting a specific range of gray levels in an image
often is desired.
Applications include enhancing features such as masses of
water in satellite imagery and enhancing flaws in X-ray
images.
There are several ways of doing level slicing, but most of
them are variations of two basic themes.
One approach is to display a high value for all gray levels
in the range of interest and a low value for all other gray
levels.
This transformation function highlights range [A, B] and reduces all other intensities to a lower level.
An alternative function highlights range [A, B] but leaves all other intensities unchanged.
• Intensity level slicing:
There are applications in which it is of interest to highlight
a specific range of intensities in an image.
Some of these applications include enhancing features in
satellite imagery, such as masses of water, and enhancing
flaws in X-ray images.
The method, called intensity-level slicing, can be
implemented in several ways, but most are variations of
two basic themes. One approach is to display in one value
(say, white) all the values in the range of interest and in
another (say, black) all other intensities.
• This transformation, shown in above Fig (a), produces a
binary image.
• The second approach, based on the transformation in Fig.
(b), brightens (or darkens) the desired range of intensities,
but leaves all other intensity levels in the image unchanged
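• A short Python/NumPy sketch of the two intensity-level slicing approaches just described (the range [A, B] and all names are illustrative):

```python
import numpy as np

def slice_binary(image, A, B, L=256):
    """Approach 1: white for intensities in [A, B], black for all others."""
    return np.where((image >= A) & (image <= B), L - 1, 0).astype(np.uint8)

def slice_preserve(image, A, B, L=256):
    """Approach 2: brighten the range [A, B], leave other intensities unchanged."""
    out = image.copy()
    out[(image >= A) & (image <= B)] = L - 1
    return out

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
highlighted = slice_binary(img, 100, 150)    # binary image
brightened = slice_preserve(img, 100, 150)   # background preserved
```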
• Bit-Plane Slicing:
Pixel values are integers composed of bits. For example,
values in a 256-level gray-scale image are composed of 8
bits (one byte).
Instead of highlighting intensity-level ranges, we could
highlight the contribution made to total image appearance
by specific bits
• an 8-bit image may be considered as being composed of
eight one-bit planes, with plane 1 containing the lowest-
order bit of all pixels in the image, and plane 8 all the
highest-order bits.
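• A minimal Python/NumPy sketch of bit-plane slicing for an 8-bit image (names are illustrative):

```python
import numpy as np

def bit_planes(image):
    """Decompose an 8-bit image into its eight one-bit planes.
    Plane 1 holds the lowest-order bit, plane 8 the highest-order bit."""
    return [(image >> k) & 1 for k in range(8)]

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
planes = bit_planes(img)                     # planes[0] = plane 1, planes[7] = plane 8
# The planes reconstruct the original image exactly:
recon = sum(plane.astype(np.uint16) << k for k, plane in enumerate(planes))
assert np.array_equal(recon, img)
```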
