Chapter 3
Image Enhancement in the
Spatial Domain
Principal Objective of
Enhancement
Process an image so that the result is
more suitable than the original image for
a specific application.
Suitability is defined by each application:
a method that is quite useful for
enhancing one image may not be the
best approach for enhancing another.
2
2 domains
Spatial Domain : (image plane)
Techniques are based on direct manipulation of
pixels in an image
Frequency Domain :
Techniques are based on modifying the Fourier
transform of an image
There are also enhancement techniques based
on various combinations of methods from these
two categories.
3
Good images
For human visual perception:
the visual evaluation of image quality is a highly
subjective process,
so it is hard to standardize the definition of a good
image.
A certain amount of trial and error usually is
required before a particular image
enhancement approach is selected.
Spatial Domain
Procedures that operate
directly on pixels.
g(x,y) = T[f(x,y)]
where
f(x,y) is the input image
g(x,y) is the processed
image
T is an operator on f
defined over some
neighborhood of (x,y)
Mask/Filter
Neighborhood of a point (x,y)
can be defined by using a
square/rectangular (commonly
used) or circular subimage
area centered at (x,y)
The center of the subimage
is moved from pixel to pixel,
starting at the top left
corner
Point Processing
Neighborhood = 1x1 pixel
g depends on only the value of f at (x,y)
T = gray level (or intensity or mapping)
transformation function
s = T(r)
Where
r = gray level of f(x,y)
s = gray level of g(x,y)
7
Contrast Stretching
Produces higher
contrast than the
original by
darkening the levels
below m in the original
image and
brightening the levels
above m in the original
image
8
Thresholding
Produce a two-level
(binary) image
Mask Processing or Filter
Neighborhood is bigger than 1x1 pixel
Use a function of the values of f in a
predefined neighborhood of (x,y) to determine
the value of g at (x,y)
The values of the mask coefficients determine
the nature of the process
Used in techniques such as
Image Sharpening
Image Smoothing
10
3 basic gray-level
transformation functions
(Figure: plots of the transformations -- identity,
negative, log, inverse log, nth power and nth root --
with input gray level r on the x-axis and output
gray level s on the y-axis.)
Linear functions:
negative and identity
transformations
Logarithm functions:
log and inverse-log
transformations
Power-law functions:
nth power and nth root
transformations
11
Identity function
(Same figure: input gray level r vs. output gray level s.)
Output intensities
are identical to input
intensities.
Included in the
graph only for
completeness.
12
Image Negatives
(Same figure: the negative transformation is the
line from (0, L-1) down to (L-1, 0).)
An image with gray levels in
the range [0, L-1],
where L = 2^n ; n = 1, 2, ...
Negative transformation :
s = L - 1 - r
Reverses the intensity
levels of an image.
Suitable for enhancing white
or gray detail embedded in
dark regions of an image,
especially when the black
areas are dominant in size.
13
Image Negatives
Example (3x3 patch, L = 256):
Original:  254 250 254    Negative:  1 5 1
           255 250 250               0 5 5
           254 250 255               1 5 0
14
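The negative transformation s = (L - 1) - r is easy to sketch in code. The following is a minimal illustration on a nested-list image (the function name is ours, not the book's), reproducing the 3x3 patch above:

```python
def negative(image, L=256):
    """Negative transformation: s = (L - 1) - r, applied per pixel."""
    return [[(L - 1) - r for r in row] for row in image]

patch = [[254, 250, 254],
         [255, 250, 250],
         [254, 250, 255]]
neg = negative(patch)  # bright values map to dark ones: [[1,5,1],[0,5,5],[1,5,0]]
```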
Example of Negative Image
15
Log Transformations
(Same figure: the log curve rises steeply for small r.)
s = c log (1+r)
c is a constant
and r >= 0
The log curve maps a narrow
range of low gray-level
values in the input image
into a wider range of
output levels.
Used to expand the values
of dark pixels in an image
while compressing the
higher-level values.
16
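A minimal sketch of the log transformation, with c chosen (our assumption, a common choice) so that the maximum input level maps back to the maximum output level:

```python
import math

def log_transform(image, L=256):
    """s = c * log(1 + r), with c = (L-1)/log(L) so that r = L-1 maps to s = L-1."""
    c = (L - 1) / math.log(L)
    return [[round(c * math.log(1 + r)) for r in row] for row in image]

# dark values are expanded: a pixel of 10 is pushed up to roughly 110
out = log_transform([[0, 10, 255]])
```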
Example of Logarithm Image
17
Inverse Logarithm
Transformations
Do the opposite of the log transformations:
used to expand the values of bright pixels
in an image while compressing the
darker-level values.
18
Power-Law Transformations
s = c r^gamma
(Figure: plots of s = c r^gamma for various values
of gamma; c and gamma are positive constants.)
Power-law curves with
fractional values of gamma map a
narrow range of dark input
values into a wider range of
output values, with the
opposite being true for
higher values of input
levels.
c = gamma = 1 : identity function
19
Gamma correction
Cathode ray tube (CRT)
devices have an
intensity-to-voltage
response that is a
power function, with
gamma varying from 1.8 to 2.5:
the picture becomes
darker.
Gamma correction is
done by preprocessing
the image before
inputting it to the
monitor (gamma = 2.5) with
s = c r^(1/gamma),
i.e. 1/gamma = 1/2.5 = 0.4
20
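A minimal sketch of gamma correction. The constant c here is our choice (it keeps the output in [0, L-1] without normalizing the input first); the book simply writes s = c r^(1/gamma):

```python
def gamma_correct(image, gamma=2.5, L=256):
    """Preprocess with s = c * r**(1/gamma) so a monitor's r**gamma response cancels.
    c = (L-1)**(1 - 1/gamma) keeps the maximum level fixed at L-1."""
    c = (L - 1) ** (1 - 1 / gamma)
    return [[round(c * r ** (1 / gamma)) for r in row] for row in image]

# midtones are brightened before display; endpoints are unchanged
out = gamma_correct([[0, 64, 255]])
```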
Another example : MRI
(a) the picture is predominantly
dark
(b) result after power-law
transformation with gamma = 0.6, c = 1
(c) transformation with gamma = 0.4
(best result)
(d) transformation with gamma = 0.3
(under acceptable level)
(panels: a top-left, b top-right, c bottom-left, d bottom-right)
21
Effect of decreasing gamma
When gamma is reduced too much, the
image begins to lose contrast to the
point where it starts to have a
very slight washed-out look, especially in
the background
22
Another example
(panels: a top-left, b top-right, c bottom-left, d bottom-right)
(a) image has a washed-out
appearance; it needs a
compression of gray levels, i.e.
gamma > 1
(b) result after power-law
transformation with gamma = 3.0
(suitable)
(c) transformation with gamma = 4.0
(suitable)
(d) transformation with gamma = 5.0
(high contrast, the image has
areas that are too dark, some
detail is lost)
23
Piecewise-Linear
Transformation Functions
Advantage:
The form of piecewise
functions can be
arbitrarily complex
Disadvantage:
Their specification
requires considerably
more user input
24
Contrast Stretching
increase the dynamic
range of the gray
levels in the image
(b) a low-contrast
image
(c) result of contrast
stretching: (r1,s1) =
(rmin,0) and (r2,s2) =
(rmax,L-1)
(d) result of
thresholding
25
Gray-level slicing
Highlights a specific
range of gray levels in an
image, by
displaying a high value for all
gray levels in the range of
interest and a low value for
all other gray levels
(a) transformation highlights
range [A,B] of gray levels and
reduces all others to a constant
level
(b) transformation highlights
range [A,B] but preserves all
other levels
26
Bit-plane slicing
One 8-bit byte
Bit-plane 7
(most significant)
Bit-plane 0
(least significant)
Highlighting the contribution
made to total image
appearance by specific bits
Suppose each pixel is
represented by 8 bits
Higher-order bits contain
the majority of the visually
significant data
Useful for analyzing the
relative importance of
each bit of the image
27
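Extracting a single bit plane is a shift and a mask. A minimal sketch (function name ours), assuming 8-bit pixels as in the slide:

```python
def bit_plane(image, plane):
    """Extract bit `plane` (0 = least significant, 7 = most significant)
    from each 8-bit pixel, giving a binary image."""
    return [[(r >> plane) & 1 for r in row] for row in image]

# plane 7 separates pixels >= 128 from pixels < 128
msb = bit_plane([[128, 127, 255, 0]], 7)
lsb = bit_plane([[128, 127, 255, 0]], 0)
```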
Example
An 8-bit fractal image
28
8 bit planes
Bit-plane 7  Bit-plane 6
Bit-plane 5  Bit-plane 4  Bit-plane 3
Bit-plane 2  Bit-plane 1  Bit-plane 0
29
Histogram Processing
Histogram of a digital image with gray levels in
the range [0,L-1] is a discrete function
h(rk) = nk
Where
rk : the kth gray level
nk : the number of pixels in the image having gray
level rk
h(rk) : histogram of a digital image with gray levels rk
30
Normalized Histogram
Divide each histogram value at gray level rk by the
total number of pixels in the image, n:
p(rk) = nk / n
for k = 0, 1, ..., L-1
p(rk) gives an estimate of the probability of
occurrence of gray level rk
The sum of all components of a normalized
histogram is equal to 1
31
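The definitions h(rk) = nk and p(rk) = nk/n translate directly to code. A minimal sketch on a nested-list image (function names ours):

```python
def histogram(image, L=256):
    """h(rk) = nk: count of pixels at each gray level."""
    h = [0] * L
    for row in image:
        for r in row:
            h[r] += 1
    return h

def normalized_histogram(image, L=256):
    """p(rk) = nk / n: estimated probability of each gray level."""
    h = histogram(image, L)
    n = sum(h)
    return [nk / n for nk in h]

img = [[0, 1], [1, 1]]
h = histogram(img, L=4)            # [1, 3, 0, 0]
p = normalized_histogram(img, L=4) # sums to 1
```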
Histogram Processing
Basis for numerous spatial domain
processing techniques
Used effectively for image enhancement
Information inherent in histograms also
is useful in image compression and
segmentation
32
Example
(Figure: histograms h(rk) or p(rk) plotted against rk.)
Dark image
Components of
histogram are
concentrated on
the low side of
the gray scale.
Bright image
Components of
histogram are
concentrated on
the high side of
the gray scale.
33
Example
Low-contrast image
histogram is
narrow and
centered toward
the middle of the
gray scale
High-contrast image
histogram covers a
broad range of the
gray scale, and the
distribution of pixels
is not too far from
uniform, with very
few vertical lines
being much higher than the rest
34
Histogram Equalization
As a low-contrast image's histogram is
narrow and centered toward the middle of the
gray scale, if we distribute the histogram over a
wider range the quality of the image will be
improved.
We can do this by adjusting the probability
density function of the original histogram of
the image so that the probability is spread
equally
35
Histogram transformation
s = T(r)
where 0 <= r <= 1
T(r) satisfies:
(a) T(r) is single-valued and
monotonically
increasing in the
interval 0 <= r <= 1
(b) 0 <= T(r) <= 1 for
0 <= r <= 1
(Figure: a transformation curve T(r) mapping
rk to sk = T(rk).)
36
2 Conditions of T(r)
Single-valued (one-to-one relationship)
guarantees that the inverse transformation will
exist
Monotonicity condition preserves the increasing
order from black to white in the output image
0 <= T(r) <= 1 for 0 <= r <= 1 guarantees that the
output gray levels will be in the same range as
the input levels.
The inverse transformation from s back to r is
r = T⁻¹(s) ; 0 <= s <= 1
37
Probability Density Function
The gray levels in an image may be
viewed as random variables in the
interval [0,1]
PDF is one of the fundamental
descriptors of a random variable
38
Applied to Image
Let
pr(r) denote the PDF of random variable r
ps (s) denote the PDF of random variable s
If pr(r) and T(r) are known and T⁻¹(s) satisfies
condition (a), then ps(s) can be obtained using
the formula:
ps(s) = pr(r) |dr/ds|
39
Applied to Image
The PDF of the transformed variable s
is determined by
the gray-level PDF of the input image
and by
the chosen transformation function
40
Transformation function
A transformation function is:
s = T(r) = ∫₀ʳ pr(w) dw
where w is a dummy variable of
integration
Note: T(r) depends on pr(r)
41
Finding ps(s) from given T(r)
ds/dr = dT(r)/dr
      = d/dr ∫₀ʳ pr(w) dw
      = pr(r)
Substituting into ps(s) = pr(r) |dr/ds| yields
ps(s) = pr(r) · 1/pr(r) = 1   where 0 <= s <= 1
42
ps(s)
As ps(s) is a probability density function, it must
be zero outside the interval [0,1] in this
case, because its integral over all values
of s must equal 1.
ps(s) is always uniform, independent of
the form of pr(r)
43
s = T(r) = ∫₀ʳ pr(w) dw
yields
a random variable s
characterized by
a uniform probability
density function:
ps(s) = 1 ,  0 <= s <= 1
44
Discrete
transformation function
The probability of occurrence of gray
level rk in an image is approximated by
pr(rk) = nk / n    where k = 0, 1, ..., L-1
The discrete version of the transformation:
sk = T(rk) = Σ(j=0..k) pr(rj)
           = Σ(j=0..k) nj / n    where k = 0, 1, ..., L-1
45
Histogram Equalization
Thus, an output image is obtained by
mapping each pixel with level rk in the
input image into a corresponding pixel
with level sk in the output image
46
Example
before
after
Histogram
equalization
47
Example
before
after
Histogram
equalization
The quality is
not improved
much
because the
original
image
already has a
broad gray-level distribution
48
Histogram Processing
49
Histogram Processing
Transformation functions used to equalized
the histogram
50
Example
(Figure: a 4x4 image with gray scale [0,9] and its
histogram -- no. of pixels, up to 6, plotted against
gray levels 0..9.)
51
Equalization table (n = 16 pixels, gray scale [0,9]):

Gray level (j)   nj   Σ nj   s = Σ nj/n   s x 9   rounded
0                6    6      6/16         3.3     3
1                5    11     11/16        6.1     6
2                4    15     15/16        8.4     8
3                1    16     16/16        9       9
4..9             0    16     16/16        9       9
52
Example
(Figure: the output image, gray scale = [0,9], and its
histogram after equalization -- no. of pixels, up to 6,
against gray levels 0..9.)
Histogram equalization
53
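The 4x4 worked example above can be checked in code. This is a minimal sketch of discrete histogram equalization (function name ours), run on an image with the same gray-level counts (six 0s, five 1s, four 2s, one 3) on the [0,9] scale:

```python
def equalize(image, L=256):
    """Discrete histogram equalization:
    sk = round((L-1) * sum_{j<=k} nj/n) for each input level rk."""
    h = [0] * L
    for row in image:
        for r in row:
            h[r] += 1
    n = sum(h)
    mapping, total = [], 0
    for nk in h:
        total += nk
        mapping.append(round((L - 1) * total / n))
    return [[mapping[r] for r in row] for row in image]

img = [[0, 0, 0, 0],
       [0, 0, 1, 1],
       [1, 1, 1, 2],
       [2, 2, 2, 3]]
out = equalize(img, L=10)  # levels 0,1,2,3 map to 3,6,8,9 as in the table
```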
Histogram Matching
(Specification)
Histogram equalization has a disadvantage:
it can generate only one type
of output image.
With histogram specification, we can
specify the shape of the histogram that
we wish the output image to have.
It doesn't have to be a uniform histogram
54
Consider the continuous domain
Let pr(r) denote continuous probability density
function of gray-level of input image, r
Let pz(z) denote desired (specified) continuous
probability density function of gray-level of
output image, z
Let s be a random variable with the property
s = T(r) = ∫₀ʳ pr(w) dw      (histogram equalization)
where w is a dummy variable of integration
55
Next, we define a random variable z with the property
G(z) = ∫₀ᶻ pz(t) dt = s      (histogram equalization)
where t is a dummy variable of integration
thus
s = T(r) = G(z)
Therefore, z must satisfy the condition
z = G⁻¹(s) = G⁻¹[T(r)]
Assume G⁻¹ exists and satisfies conditions (a) and (b)
We can then map an input gray level r to an output gray level z
56
Histogram Processing
Histogram Matching (Specification)
57
Procedure Conclusion
1. Obtain the transformation function T(r) by
calculating the histogram equalization of the
input image
s = T(r) = ∫₀ʳ pr(w) dw
2. Obtain the transformation function G(z) by
calculating the histogram equalization of the
desired density function
G(z) = ∫₀ᶻ pz(t) dt = s
58
Procedure Conclusion
3. Obtain the inverse transformation
function G⁻¹ :
z = G⁻¹(s) = G⁻¹[T(r)]
4. Obtain the output image by applying the
processed gray levels from the inverse
transformation function to all the
pixels in the input image
59
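The four steps above can be sketched for the discrete case. This is a minimal illustration (function names ours): T is the input image's cumulative histogram, G the target's, and G⁻¹ is inverted by taking the smallest z with G(z) >= s:

```python
def match_histogram(image, target_hist, L=256):
    """z = G^{-1}[T(r)]: map input levels through the input CDF T and
    the inverse of the desired histogram's CDF G."""
    # step 1: T(r), the CDF of the input image
    h = [0] * L
    for row in image:
        for r in row:
            h[r] += 1
    n = sum(h)
    T, acc = [], 0
    for nk in h:
        acc += nk
        T.append(acc / n)
    # step 2: G(z), the CDF of the desired histogram
    tn = sum(target_hist)
    G, acc = [], 0
    for nk in target_hist:
        acc += nk
        G.append(acc / tn)
    # step 3: invert G (smallest z with G(z) >= s)
    def G_inv(s):
        for z, g in enumerate(G):
            if g >= s - 1e-12:
                return z
        return L - 1
    # step 4: apply the composite mapping to every pixel
    mapping = [G_inv(s) for s in T]
    return [[mapping[r] for r in row] for row in image]

# push a 2-level image toward a histogram concentrated at the high levels
out = match_histogram([[0, 0], [1, 1]], target_hist=[0, 0, 2, 2], L=4)
```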
Example
Assume an image has a gray-level probability density
function pr(r) as shown:
pr(r) = -2r + 2   ; 0 <= r <= 1
        0         ; elsewhere
(Figure: pr(r) falls linearly from 2 at r = 0 to 0 at
r = 1, so that ∫ pr(w) dw = 1.)
60
Example
We would like to apply histogram specification with
the desired probability density function pz(z) as shown:
pz(z) = 2z   ; 0 <= z <= 1
        0    ; elsewhere
(Figure: pz(z) rises linearly from 0 at z = 0 to 2 at
z = 1, so that ∫ pz(w) dw = 1.)
61
Step 1:
Obtain the transformation function T(r)
s = T(r) = ∫₀ʳ pr(w) dw
         = ∫₀ʳ (-2w + 2) dw
         = [-w² + 2w]₀ʳ
         = -r² + 2r
(a one-to-one mapping function)
62
Step 2:
Obtain the transformation function G(z)
G(z) = ∫₀ᶻ 2w dw = [w²]₀ᶻ = z²
63
Step 3:
Obtain the inverse transformation function G⁻¹
G(z) = T(r)
z² = -r² + 2r
z = √(2r - r²)
We can guarantee that 0 <= z <= 1 when 0 <= r <= 1
64
Discrete formulation
sk = T(rk) = Σ(j=0..k) pr(rj) = Σ(j=0..k) nj/n
                                   k = 0, 1, 2, ..., L-1
G(zk) = Σ(i=0..k) pz(zi) = sk      k = 0, 1, 2, ..., L-1
zk = G⁻¹[T(rk)] = G⁻¹(sk)          k = 0, 1, 2, ..., L-1
65
Histogram Matching Example
Consider an 8-level
image with the shown
histogram
Match it to the
image with the
histogram
66
Histogram Matching Example
1. Equalize the histogram of the input image
using transform s =T(r)
67
Histogram Matching Example
2. Equalize the desired histogram v =
G(z).
68
Histogram Matching Example
3. Set v = s to obtain the
composite transform
69
Example
Image of Mars moon:
the image is dominated by large, dark
areas, resulting in a histogram
characterized by a large
concentration of pixels in
the dark end of the gray scale
70
Image Equalization
(Figure: transformation function for histogram
equalization; result image after histogram
equalization; histogram of the result image.)
71
Solve the problem
Since the problem with the
transformation function of the
histogram equalization was
caused by a large concentration
of pixels in the original image
with levels near 0
a reasonable approach is to
modify the histogram of that
image so that it does not have
this property
Histogram
Equalization
Histogram
Specification
72
Histogram Specification
(1) the transformation
function G(z) obtained
from
G(zk) = Σ(i=0..k) pz(zi) = sk
        k = 0, 1, 2, ..., L-1
(2) the inverse
transformation G⁻¹(s)
73
Result image and its histogram
Original image; the image after
histogram matching is applied;
the output image's histogram.
Notice that the output
histogram's low end has
shifted right toward the
lighter region of the gray
scale, as desired.
74
Result
75
Note
Histogram specification is a trial-and-error
process.
There are no rules for specifying
histograms, and one must resort to
analysis on a case-by-case basis for any
given enhancement task.
76
Note
Histogram processing methods are global
processing, in the sense that pixels are
modified by a transformation function
based on the gray-level content of an
entire image.
Sometimes, we may need to enhance
details over small areas in an image, which
is called a local enhancement.
77
Local Enhancement
(a)
(b)
(c)
define a square or rectangular neighborhood and move the center
of this area from pixel to pixel.
at each location, the histogram of the points in the neighborhood
is computed and either histogram equalization or histogram
specification transformation function is obtained.
78
Explain the result in c)
Basically, the original image consists of many
small squares inside the larger dark ones.
However, the small squares were too close in
gray level to the larger ones, and their sizes
were too small to influence global histogram
equalization significantly.
So, when we use the local enhancement
technique, it reveals the small areas.
Note also the finer noise texture that results
from the local processing using relatively small
neighborhoods.
79
Enhancement using
Arithmetic/Logic Operations
Arithmetic/logic operations are performed on a
pixel-by-pixel basis between two or more
images,
except the NOT operation, which is performed
on a single image
80
Logic Operations
When logic operations are performed on gray-level
images, the pixel values are processed as
binary numbers:
light represents a binary 1, and dark
represents a binary 0
NOT operation = negative transformation
81
Example of AND Operation
original image AND image
mask
result of AND
operation
82
Example of OR Operation
original image
OR image
mask
result of OR
operation
83
Image Subtraction
g(x,y) = f(x,y) - h(x,y)
enhances the differences between
images
84
Image Subtraction
(panels: a top-left, b top-right, c bottom-left, d bottom-right)
a). original fractal image
b). result of setting the four
lower-order bit planes to zero
(refer to bit-plane slicing:
the higher planes contribute
the significant detail,
the lower planes contribute more
to fine detail)
image b). is nearly identical
visually to image a), with a very
slight drop in overall contrast
due to less variability of the
gray-level values in the image.
c). difference between a). and b).
(nearly black)
d). histogram equalization of c).
(performs a contrast-stretching
transformation)
85
Mask mode radiography
h(x,y) is the mask, an X-ray
image of a region of a patient's
body, captured by an intensified
TV camera (instead of traditional
X-ray film) located opposite an
X-ray source
f(x,y) is an X-ray image taken
after injection of a contrast
medium (iodine) into the patient's
bloodstream
the displayed image is f(x,y)
with the mask subtracted out.
Note:
the background is dark because it
doesn't change much between the two images;
the difference area is bright because it
changes a lot
images are captured at TV rates,
so the doctor can see how the
medium propagates through the
various arteries in the area being
observed (the effect of
subtraction) in a movie-showing
mode.
86
Note
We may have to adjust the gray scale of the
subtracted image to be [0, 255] (if 8 bits are used):
first, find the minimum gray value of the
subtracted image
second, find the maximum gray value of the
subtracted image
set the minimum value to zero and the
maximum to 255,
while the rest are adjusted according to the
interval [0, 255], by multiplying each value by
255/max
87
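The subtraction-plus-rescaling procedure above can be sketched as follows (a minimal illustration on nested lists; function name ours). It shifts the minimum to 0 and then scales by 255/max, as described:

```python
def subtract_and_rescale(f, h, L=256):
    """g = f - h, then shift the minimum to 0 and scale the maximum to L-1."""
    g = [[fv - hv for fv, hv in zip(frow, hrow)] for frow, hrow in zip(f, h)]
    flat = [v for row in g for v in row]
    gmin, gmax = min(flat), max(flat)
    scale = (L - 1) / (gmax - gmin) if gmax > gmin else 0
    return [[round((v - gmin) * scale) for v in row] for row in g]

# a small difference image is stretched to the full [0, 255] range
out = subtract_and_rescale([[10, 60]], [[5, 10]])
```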
Image Averaging
consider a noisy image g(x,y) formed by
the addition of noise η(x,y) to an original
image f(x,y):
g(x,y) = f(x,y) + η(x,y)
88
Image Averaging
if the noise has zero mean and is
uncorrelated, then let
ḡ(x,y) = image formed by averaging
K different noisy images:
ḡ(x,y) = (1/K) Σ(i=1..K) gᵢ(x,y)
89
Image Averaging
then it can be shown that
σ²ḡ(x,y) = (1/K) σ²η(x,y)
where σ²ḡ(x,y) and σ²η(x,y) are the variances of ḡ and η
as K increases, the variability (noise) of
the pixel at each location (x,y) decreases.
90
Image Averaging
thus
E{ḡ(x,y)} = f(x,y)
E{ḡ(x,y)} = expected value of ḡ
(output after averaging)
= original image f(x,y)
91
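The averaging result can be demonstrated numerically. A minimal sketch (names ours): generate K noisy copies of a small "true" image with zero-mean Gaussian noise, average them, and observe that the average approaches f(x,y):

```python
import random

def average_images(images):
    """Pixel-wise mean of K noisy images of the same size."""
    K = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[i][j] for img in images) / K for j in range(cols)]
            for i in range(rows)]

random.seed(0)
f = [[100, 120], [140, 160]]                      # "true" image
noisy = [[[v + random.gauss(0, 20) for v in row] for row in f]
         for _ in range(64)]                      # K = 64 noisy copies
avg = average_images(noisy)                       # close to f; std err ~ 20/8
```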
Example
(panels: a, b top; c, d middle; e, f bottom)
a) original image
b) image corrupted by
additive Gaussian noise
with zero mean and a
standard deviation of 64
gray levels.
c) - f) results of
averaging K = 8, 16, 64
and 128 noisy images
92
Example
93
Spatial Filtering
use a filter (also called a
mask/kernel/template or window)
the values in a filter subimage are
referred to as coefficients, rather than
pixels.
our focus will be on masks of odd sizes,
e.g. 3x3, 5x5, ...
94
Spatial Filtering
95
Spatial Filtering Process
simply move the filter mask from point to
point in an image.
at each point (x,y), the response of the
filter at that point is calculated using a
predefined relationship:
R = w₁z₁ + w₂z₂ + ... + w_mn z_mn
  = Σ(i=1..mn) wᵢzᵢ
96
Linear Filtering
Linear filtering of an image f of size
MxN with a filter mask of size mxn is given by
the expression
g(x,y) = Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t)
where a = (m-1)/2 and
b = (n-1)/2
To generate a complete filtered image this equation must
be applied for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1
97
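The double sum above translates directly into nested loops. A minimal sketch (function name ours); as a simplification we leave border pixels unfiltered, whereas the book discusses padding options for applying the equation at every (x,y):

```python
def linear_filter(f, w):
    """g(x,y) = sum over (s,t) of w(s,t) * f(x+s, y+t).
    Border pixels are copied through unfiltered (a simplification)."""
    m, n = len(w), len(w[0])
    a, b = m // 2, n // 2
    M, N = len(f), len(f[0])
    g = [row[:] for row in f]
    for x in range(a, M - a):
        for y in range(b, N - b):
            g[x][y] = sum(w[s + a][t + b] * f[x + s][y + t]
                          for s in range(-a, a + 1)
                          for t in range(-b, b + 1))
    return g

# a 3x3 box (averaging) mask: the center response is the neighborhood mean
box = [[1 / 9] * 3 for _ in range(3)]
g = linear_filter([[1, 2, 3], [4, 5, 6], [7, 8, 9]], box)
```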
Smoothing Spatial Filters
used for blurring and for noise reduction
blurring is used in preprocessing steps,
such as
removal of small details from an image prior
to object extraction
bridging of small gaps in lines or curves
noise reduction can be accomplished by
blurring with a linear filter and also by a
nonlinear filter
98
Smoothing Linear Filters
output is simply the average of the pixels
contained in the neighborhood of the filter
mask.
called averaging filters or lowpass filters.
99
Smoothing Linear Filters
replacing the value of every pixel in an image
by the average of the gray levels in the
neighborhood will reduce the sharp
transitions in gray levels.
sharp transitions
random noise in the image
edges of objects in the image
thus, smoothing can reduce noises (desirable)
and blur edges (undesirable)
100
3x3 Smoothing Linear Filters
box filter
weighted average
the center is the most important and other
pixels are inversely weighted as a function of
their distance from the center of the mask
101
Weighted average filter
the basic strategy behind weighting the
center point the highest and then
reducing the value of the coefficients as
a function of increasing distance from
the origin is simply an attempt to
reduce blurring in the smoothing
process.
102
General form : smoothing mask
filter of size mxn (m and n odd)
g(x,y) = Σ(s=-a..a) Σ(t=-b..b) w(s,t) f(x+s, y+t)
         / Σ(s=-a..a) Σ(t=-b..b) w(s,t)
i.e. normalized by the summation of all coefficients of the mask
103
Example
(panels: a, b top; c, d middle; e, f bottom)
a). original image, 500x500 pixels
b). - f). results of smoothing
with square averaging filter
masks of size n = 3, 5, 9, 15 and
35, respectively.
Note:
big mask is used to eliminate small
objects from an image.
the size of the mask establishes
the relative size of the objects
that will be blended with the
background.
104
Example
original image; result after smoothing
with a 15x15 averaging mask; result of thresholding
we can see that after smoothing and
thresholding, what remains are the largest and
brightest objects in the image.
105
Order-Statistics Filters
(Nonlinear Filters)
the response is based on ordering
(ranking) the pixels contained in the
image area encompassed by the filter
examples
median filter : R = median{zk | k = 1, 2, ..., n x n}
max filter : R = max{zk | k = 1, 2, ..., n x n}
min filter : R = min{zk | k = 1, 2, ..., n x n}
note: n x n is the size of the mask
106
Median Filters
replace the value of a pixel by the median of
the gray levels in the neighborhood of that
pixel (the original value of the pixel is included
in the computation of the median)
quite popular because for certain types of
random noise (impulse, or salt-and-pepper,
noise) they provide excellent noise-reduction
capabilities, with considerably less blurring than
linear smoothing filters of similar size.
107
Median Filters
forces the points with distinct gray levels to
be more like their neighbors.
isolated clusters of pixels that are light or
dark with respect to their neighbors, and
whose area is less than n2/2 (one-half the
filter area), are eliminated by an n x n median
filter.
eliminated = forced to have the value equal the
median intensity of the neighbors.
larger clusters are affected considerably less
108
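The median filter's effect on impulse noise can be sketched in a few lines (function name ours; as before, border pixels are left unchanged for simplicity):

```python
def median_filter(f, size=3):
    """Replace each interior pixel by the median of its size x size
    neighborhood (the pixel itself is included in the window)."""
    a = size // 2
    M, N = len(f), len(f[0])
    g = [row[:] for row in f]
    for x in range(a, M - a):
        for y in range(a, N - a):
            window = sorted(f[x + s][y + t]
                            for s in range(-a, a + 1)
                            for t in range(-a, a + 1))
            g[x][y] = window[len(window) // 2]
    return g

# an isolated salt pixel (255) is forced to the median of its neighbors
g = median_filter([[10, 10, 10], [10, 255, 10], [10, 10, 10]])
```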
Example : Median Filters
109
Sharpening Spatial Filters
to highlight fine detail in an image
or to enhance detail that has been
blurred, either in error or as a natural
effect of a particular method of image
acquisition.
110
Blurring vs. Sharpening
as we know, blurring can be done in the
spatial domain by pixel averaging in a
neighborhood;
since averaging is analogous to integration,
we can expect that sharpening
must be accomplished by spatial
differentiation.
111
Derivative operator
the strength of the response of a
derivative operator is proportional to the
degree of discontinuity of the image at
the point at which the operator is
applied.
thus, image differentiation
enhances edges and other discontinuities
(noise)
112
First-order derivative
a basic definition of the first-order
derivative of a one-dimensional function
f(x) is the difference
∂f/∂x = f(x+1) - f(x)
113
Second-order derivative
similarly, we define the second-order
derivative of a one-dimensional function
f(x) as the difference
∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)
114
Derivative operator
115
First and Second-order
derivatives of f(x,y)
when we consider an image function of
two variables, f(x,y), we
deal with partial derivatives along
the two spatial axes.
Gradient operator:
∇f = [ ∂f/∂x , ∂f/∂y ]
Laplacian operator
(a linear operator):
∇²f = ∂²f/∂x² + ∂²f/∂y²
116
Discrete Form of Laplacian
from
∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)
yields
∇²f = [ f(x+1,y) + f(x-1,y)
      + f(x,y+1) + f(x,y-1) - 4f(x,y) ]
117
Result Laplacian mask
118
Laplacian mask implemented an
extension of diagonal neighbors
119
Other implementations of
Laplacian masks
give the same result, but we have to keep the
mask's sign convention in mind when combining
(adding / subtracting) a Laplacian-filtered
image with another image.
120
Effect of Laplacian Operator
as it is a derivative operator,
it highlights gray-level discontinuities in an
image
tends to produce images that have
grayish edge lines and other discontinuities,
all superimposed on a dark,
featureless background.
121
Correct the effect of
featureless background
easily, by adding the original and Laplacian
images.
be careful which Laplacian filter is used:
g(x,y) = f(x,y) - ∇²f(x,y)   if the center coefficient
                             of the Laplacian mask is
                             negative
g(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient
                             of the Laplacian mask is
                             positive
122
Example
a). image of the North
Pole of the moon
b). Laplacian-filtered
image (mask with center
coefficient -8)
c). Laplacian image scaled
for display purposes
d). image enhanced by
addition with the original
image
123
Mask of Laplacian + addition
to simplify the computation, we can create
a mask which does both operations, the
Laplacian filter and addition of the original
image:
g(x,y) = f(x,y) - [ f(x+1,y) + f(x-1,y)
       + f(x,y+1) + f(x,y-1) - 4f(x,y) ]
       = 5f(x,y) - [ f(x+1,y) + f(x-1,y)
       + f(x,y+1) + f(x,y-1) ]
mask:
 0  -1   0
-1   5  -1
 0  -1   0
125
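The composite mask g(x,y) = 5f(x,y) minus the four axial neighbors can be sketched directly (function name ours; border pixels left unchanged for simplicity):

```python
def sharpen(f):
    """Laplacian sharpening with the composite mask [0 -1 0; -1 5 -1; 0 -1 0]:
    g(x,y) = 5 f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]."""
    M, N = len(f), len(f[0])
    g = [row[:] for row in f]
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            g[x][y] = (5 * f[x][y]
                       - f[x + 1][y] - f[x - 1][y]
                       - f[x][y + 1] - f[x][y - 1])
    return g

flat  = sharpen([[7] * 3 for _ in range(3)])          # constant area unchanged
spike = sharpen([[0, 0, 0], [0, 10, 0], [0, 0, 0]])   # discontinuity amplified
```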
Example
126
Note
(Figure: mask arithmetic relating the composite
sharpening mask to the difference between the
original image and its local average.)
127
Unsharp masking
fs(x,y) = f(x,y) - f̄(x,y)
sharpened image = original image - blurred image
subtracting a blurred version of an image from
the image itself produces a sharpened output image.
128
High-boost filtering
generalized form of unsharp masking:
fhb(x,y) = A f(x,y) - f̄(x,y) ,  A >= 1
129
High-boost filtering
if we use the Laplacian filter to create the
sharpened image fs(x,y) with addition of the
original image:
fs(x,y) = f(x,y) - ∇²f(x,y)   (center coefficient negative)
fs(x,y) = f(x,y) + ∇²f(x,y)   (center coefficient positive)
130
High-boost filtering
yields
fhb(x,y) = A f(x,y) - ∇²f(x,y)   if the center coefficient
                                 of the Laplacian mask is
                                 negative
fhb(x,y) = A f(x,y) + ∇²f(x,y)   if the center coefficient
                                 of the Laplacian mask is
                                 positive
131
High-boost Masks
A >= 1
if A = 1, it becomes standard Laplacian
sharpening
132
Example
133
Gradient Operator
first derivatives are implemented using
the magnitude of the gradient:
∇f = [ Gx , Gy ] = [ ∂f/∂x , ∂f/∂y ]
magnitude:
∇f = mag(∇f) = [ Gx² + Gy² ]^(1/2)
             = [ (∂f/∂x)² + (∂f/∂y)² ]^(1/2)
commonly approximated as
∇f ≈ |Gx| + |Gy|
(the magnitude becomes nonlinear)
134
Gradient Mask
z1 z2 z3
z4 z5 z6
z7 z8 z9
simplest approximation, 2x2:
Gx = (z8 - z5)  and  Gy = (z6 - z5)
∇f = [ Gx² + Gy² ]^(1/2) = [ (z8-z5)² + (z6-z5)² ]^(1/2)
∇f ≈ |z8 - z5| + |z6 - z5|
135
Gradient Mask
z1 z2 z3
z4 z5 z6
z7 z8 z9
Roberts cross-gradient operators, 2x2:
Gx = (z9 - z5)  and  Gy = (z8 - z6)
∇f = [ Gx² + Gy² ]^(1/2) = [ (z9-z5)² + (z8-z6)² ]^(1/2)
∇f ≈ |z9 - z5| + |z8 - z6|
136
Gradient Mask
z1 z2 z3
z4 z5 z6
z7 z8 z9
Sobel operators, 3x3:
Gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
Gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)
∇f ≈ |Gx| + |Gy|
137
Note
the summation of coefficients in all
masks equals 0, indicating that they
would give a response of 0 in an area of
constant gray level.
138
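The Sobel approximation |Gx| + |Gy| can be sketched as follows (function name ours; z1..z9 follow the slide's neighborhood numbering, and border pixels are set to 0):

```python
def sobel_magnitude(f):
    """Gradient magnitude approximated as |Gx| + |Gy| with 3x3 Sobel masks."""
    M, N = len(f), len(f[0])
    g = [[0] * N for _ in range(M)]
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            # z[0..8] correspond to z1..z9 in the slide's 3x3 numbering
            z = [f[x + s][y + t] for s in (-1, 0, 1) for t in (-1, 0, 1)]
            gx = (z[6] + 2 * z[7] + z[8]) - (z[0] + 2 * z[1] + z[2])
            gy = (z[2] + 2 * z[5] + z[8]) - (z[0] + 2 * z[3] + z[6])
            g[x][y] = abs(gx) + abs(gy)
    return g

edge = sobel_magnitude([[0, 0, 10], [0, 0, 10], [0, 0, 10]])  # vertical edge
flat = sobel_magnitude([[5] * 3 for _ in range(3)])           # constant area
```

Note how the constant area gives a response of 0, matching the slide's remark that the mask coefficients sum to zero.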
Example
139
Example of Combining Spatial
Enhancement Methods
want to sharpen the
original image and bring
out more skeletal
detail.
problems: narrow
dynamic range of gray
level and high noise
content makes the
image difficult to
enhance
140
Example of Combining Spatial
Enhancement Methods
solve :
1. Laplacian to highlight fine detail
2. gradient to enhance prominent
edges
3. gray-level transformation to
increase the dynamic range of gray
levels
141
142
143