

Chapter 3:
Image Enhancement in the
Spatial Domain

Lecturer: Wanasanan Thongsongkrit


Email : [email protected]
Office room : 410

Principal Objective of Enhancement
• Process an image so that the result is more suitable than the original image for a specific application.
• Suitability is defined by each application.
• A method that is quite useful for enhancing one image may not be the best approach for enhancing other images.


2 domains
„ Spatial Domain : (image plane)
„ Techniques are based on direct manipulation of
pixels in an image
„ Frequency Domain :
„ Techniques are based on modifying the Fourier
transform of an image
„ There are some enhancement techniques based
on various combinations of methods from these
two categories.

Good images
• For human vision
  • The visual evaluation of image quality is a highly subjective process.
  • It is hard to standardize the definition of a good image.
• For machine perception
  • The evaluation task is easier.
  • A good image is one that gives the best machine recognition results.
• A certain amount of trial and error is usually required before a particular image enhancement approach is selected.

Spatial Domain
„ Procedures that operate
directly on pixels.
g(x,y) = T[f(x,y)]
where
„ f(x,y) is the input image

„ g(x,y) is the processed


image
„ T is an operator on f
defined over some
neighborhood of (x,y)

Mask/Filter
• The neighborhood of a point (x,y) can be defined by using a square/rectangular (commonly used) or circular subimage area centered at (x,y).
• The center of the subimage is moved from pixel to pixel, starting at the top left corner.


Point Processing
„ Neighborhood = 1x1 pixel
„ g depends on only the value of f at (x,y)
„ T = gray level (or intensity or mapping)
transformation function
s = T(r)
„ Where
„ r = gray level of f(x,y)

„ s = gray level of g(x,y)

Contrast Stretching
„ Produce higher
contrast than the
original by
„ darkening the levels
below m in the original
image
„ Brightening the levels
above m in the original
image


Thresholding
„ Produce a two-level
(binary) image
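To make these two point operations concrete, here is a minimal NumPy sketch of thresholding and of a simple piecewise contrast stretch around a midpoint m. The slope values, the choice of m, and the synthetic test image are illustrative assumptions, not values given in the slides.

import numpy as np

def threshold(img, m, L=256):
    # Two-level (binary) output: dark below m, bright at or above m
    return np.where(img < m, 0, L - 1).astype(np.uint8)

def contrast_stretch(img, m, L=256, slope_dark=0.5, slope_bright=2.0):
    # Darken levels below m and brighten levels above m (piecewise-linear sketch)
    x = img.astype(np.float64)
    out = np.where(x < m,
                   x * slope_dark,                            # compress the dark range
                   m * slope_dark + (x - m) * slope_bright)   # expand the bright range
    return np.clip(out, 0, L - 1).astype(np.uint8)

# Usage on a synthetic low-contrast image
img = np.random.randint(100, 156, size=(64, 64), dtype=np.uint8)
print(img.min(), img.max(), "->", contrast_stretch(img, m=128).min(), contrast_stretch(img, m=128).max())
print(np.unique(threshold(img, m=128)))   # only the two levels 0 and 255 remain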

Mask Processing or Filtering
• The neighborhood is bigger than 1x1 pixel.
• Use a function of the values of f in a predefined neighborhood of (x,y) to determine the value of g at (x,y).
• The values of the mask coefficients determine the nature of the process.
• Used in techniques such as
  • image sharpening
  • image smoothing

3 basic gray-level transformation functions
• Linear functions: negative and identity transformations.
• Logarithmic functions: log and inverse-log transformations.
• Power-law functions: nth power and nth root transformations.
[Plot: the basic transformation curves (negative, log, nth root, identity, nth power, inverse log), output gray level s versus input gray level r.]

Identity function
• Output intensities are identical to input intensities.
• It is included in the graph of the transformation curves only for completeness.

Image Negatives
• For an image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, …
• Negative transformation: s = L - 1 - r
• Reverses the intensity levels of an image.
• Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
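A minimal sketch of the negative transformation s = L - 1 - r for an 8-bit image (L = 256 is assumed):

import numpy as np

def negative(img, L=256):
    # s = L - 1 - r: reverse the intensity levels
    return (L - 1 - img.astype(np.int32)).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negative(img))   # [[255 191] [127 0]]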

Example of a Negative Image
Original mammogram showing a small lesion of a breast (left); the negative image gives a better view for analyzing the image (right).

Log Transformations
      s = c log(1 + r)
• c is a constant and r ≥ 0.
• The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels.
• Used to expand the values of dark pixels in an image while compressing the higher-level values.

Log Transformations
• It compresses the dynamic range of images with large variations in pixel values.
• Example of an image with a large dynamic range: a Fourier spectrum image, which can have an intensity range from 0 to 10^6 or higher.
• Without compression we cannot see a significant degree of detail, as it will be lost in the display.
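A sketch of the log transformation s = c log(1 + r) rescaled to an 8-bit display range; choosing c = (L-1)/log(1 + max(r)) so that the largest input maps to L-1 is a common normalization and an assumption here, not something fixed by the slides.

import numpy as np

def log_transform(img, L=256):
    # s = c log(1 + r), with c chosen so the maximum input maps to L-1
    x = img.astype(np.float64)
    c = (L - 1) / np.log1p(x.max())
    return (c * np.log1p(x)).astype(np.uint8)

# A toy array with a huge dynamic range, like a Fourier spectrum
spectrum = np.array([[0.0, 10.0], [1.0e3, 1.5e6]])
print(log_transform(spectrum))   # dark values are expanded, the bright peak is compressed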


Example of a Logarithm Image
Fourier spectrum with range 0 to 1.5 x 10^6 (left); result after applying the log transformation with c = 1, range 0 to 6.2 (right).

Inverse Logarithm Transformations
• Does the opposite of the log transformation.
• Used to expand the values of bright pixels in an image while compressing the darker-level values.

Power-Law Transformations
      s = c r^γ
• c and γ are positive constants.
• Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels.
• c = γ = 1 gives the identity function.
[Plot: s = c r^γ for various values of γ (c = 1 in all cases).]

Gamma correction
• Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5 (e.g. γ = 2.5 for the monitor shown); the displayed picture therefore becomes darker.
• Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ), e.g. 1/γ = 1/2.5 = 0.4.
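A sketch of gamma correction, s = c r^(1/γ), applied to a normalized image before it is sent to the display; γ = 2.5 follows the slide, while c = 1 and the normalization to [0,1] are assumptions.

import numpy as np

def gamma_correct(img, gamma=2.5, L=256):
    # Pre-correct for a display whose response is roughly r^gamma
    r = img.astype(np.float64) / (L - 1)   # normalize to [0, 1]
    s = r ** (1.0 / gamma)                 # s = c r^(1/gamma) with c = 1
    return (s * (L - 1)).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img))   # mid-tones are brightened to offset the monitor's darkening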

Another example: MRI
(a) A magnetic resonance image of an upper thoracic human spine with a fracture dislocation and spinal cord impingement. The picture is predominantly dark; an expansion of gray levels is desirable, so γ < 1 is needed.
(b) Result after power-law transformation with γ = 0.6, c = 1.
(c) Transformation with γ = 0.4 (best result).
(d) Transformation with γ = 0.3 (below the acceptable level).

Effect of decreasing gamma
• When γ is reduced too much, the image loses contrast to the point where it starts to have a slightly "washed-out" look, especially in the background.

Another example
(a) The image has a washed-out appearance; it needs a compression of gray levels, so γ > 1 is required.
(b) Result after power-law transformation with γ = 3.0 (suitable).
(c) Transformation with γ = 4.0 (suitable).
(d) Transformation with γ = 5.0 (high contrast; the image has areas that are too dark and some detail is lost).

Piecewise-Linear
Transformation Functions
„ Advantage:
„ The form of piecewise functions can be
arbitrarily complex
„ Disadvantage:
„ Their specification requires considerably
more user input

24


Contrast Stretching
„ increase the dynamic range of
the gray levels in the image
„ (b) a low-contrast image : result
from poor illumination, lack of
dynamic range in the imaging
sensor, or even wrong setting of
a lens aperture of image
acquisition
„ (c) result of contrast
stretching: (r1,s1) = (rmin,0) and
(r2,s2) = (rmax,L-1)
„ (d) result of thresholding

25

Gray-level slicing
„ Highlighting a specific
range of gray levels in an
image
„ Display a high value of all
gray levels in the range of
interest and a low value
for all other gray levels
„ (a) transformation highlights
range [A,B] of gray level and
reduces all others to a
constant level
„ (b) transformation highlights
range [A,B] but preserves all
other levels
26


Bit-plane slicing
• Highlights the contribution made to total image appearance by specific bits.
• Suppose each pixel is represented by 8 bits (one byte): bit-plane 7 is the most significant, bit-plane 0 the least significant.
• Higher-order bits contain the majority of the visually significant data.
• Useful for analyzing the relative importance played by each bit of the image.

Example
• The (binary) image for bit-plane 7 of an 8-bit fractal image can be obtained by processing the input image with a thresholding gray-level transformation:
  • map all levels between 0 and 127 to 0
  • map all levels between 128 and 255 to 255
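A minimal sketch of bit-plane slicing for an 8-bit image; extracting plane 7 is equivalent to the thresholding just described.

import numpy as np

def bit_plane(img, plane):
    # Return the given bit plane (0 = least significant, 7 = most significant) as a 0/1 image
    return (img >> plane) & 1

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
plane7 = bit_plane(img, 7)
# Plane 7 is 1 exactly where the pixel value is >= 128 (the thresholding view above)
print(np.array_equal(plane7, (img >= 128).astype(np.uint8)))   # True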


The 8 bit planes
[Figure: the eight bit planes of the fractal image, from bit-plane 7 (most significant) down to bit-plane 0 (least significant).]

Histogram Processing
• The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function
      h(r_k) = n_k
  where
  • r_k : the kth gray level
  • n_k : the number of pixels in the image having gray level r_k
  • h(r_k) : histogram of a digital image with gray levels r_k

Normalized Histogram
• Divide each histogram count at gray level r_k by the total number of pixels in the image, n:
      p(r_k) = n_k / n,   for k = 0, 1, …, L-1
• p(r_k) gives an estimate of the probability of occurrence of gray level r_k.
• The sum of all components of a normalized histogram equals 1.
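A sketch of computing h(r_k) = n_k and p(r_k) = n_k/n with NumPy; np.bincount is one convenient way to count gray levels. The 4x4 test image reuses the worked example that appears later in these slides.

import numpy as np

def histogram(img, L):
    # h[k] = number of pixels with gray level k, for k = 0..L-1
    return np.bincount(img.ravel(), minlength=L)

def normalized_histogram(img, L):
    # p[k] = n_k / n, an estimate of the probability of gray level k
    return histogram(img, L) / img.size

# The 4x4 image with gray scale [0, 9] used in the worked example below
img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
print(histogram(img, 10))                    # [0 0 6 5 4 1 0 0 0 0]
print(normalized_histogram(img, 10).sum())   # 1.0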

Histogram Processing
• The basis for numerous spatial domain processing techniques.
• Used effectively for image enhancement.
• Information inherent in histograms is also useful in image compression and segmentation.

Example
• Dark image: the components of the histogram h(r_k) (or p(r_k)) are concentrated on the low side of the gray scale.
• Bright image: the components of the histogram are concentrated on the high side of the gray scale.

Example
• Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.
• High-contrast image: the histogram covers a broad range of the gray scale and the distribution of pixels is not too far from uniform, with very few bins being much higher than the others.

Histogram Equalization
„ As the low-contrast image’s histogram is
narrow and centered toward the middle of the
gray scale, if we distribute the histogram to a
wider range the quality of the image will be
improved.
„ We can do it by adjusting the probability
density function of the original histogram of
the image so that the probability spread
equally

35

Histogram transformation
      s = T(r),   where 0 ≤ r ≤ 1
T(r) satisfies two conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1

The 2 conditions on T(r)
• Single-valued (one-to-one) guarantees that the inverse transformation will exist.
• The monotonicity condition preserves the increasing order from black to white in the output image, so it won't produce a negative image.
• 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels.
• The inverse transformation from s back to r is
      r = T^(-1)(s),   0 ≤ s ≤ 1

Probability Density Function


„ The gray levels in an image may be
viewed as random variables in the
interval [0,1]
„ PDF is one of the fundamental
descriptors of a random variable

38


Random Variables
„ Random variables often are a source of
confusion when first encountered.
„ This need not be so, as the concept of a
random variable is in principle quite
simple.

39

Random Variables
• A random variable, x, is a real-valued function defined on the events of the sample space, S.
• In words, for each event in S, there is a real number that is the corresponding value of the random variable.
• Viewed yet another way, a random variable maps each event in S onto the real line.
• That is it. A simple, straightforward definition.

Random Variables
• Part of the confusion often found in connection with random variables is the fact that they are functions.
• The notation also is partly responsible for the problem.

Random Variables
„ In other words, although typically the
notation used to denote a random
variable is as we have shown it here, x, or
some other appropriate variable,
„ to be strictly formal, a random variable
should be written as a function x(·)
where the argument is a specific event
being considered.
42


Random Variables
• However, this is seldom done, and, in our experience, trying to be formal by using function notation complicates the issue more than any clarity it adds.
• Thus, we will opt for the less formal notation, with the warning that it must be kept clearly in mind that random variables are functions.

Random Variables
„ Example:
„ Consider the experiment of drawing a single
card from a standard deck of 52 cards.
„ Suppose that we define the following events.
A: a heart; B: a spade; C: a club; and D: a
diamond, so that S = {A, B, C, D}.
„ A random variable is easily defined by
letting x = 1 represent event A, x = 2
represent event B, and so on.

44


Random Variables
„ As a second illustration,
„ consider the experiment of throwing a single die and
observing the value of the up-face.
„ We can define a random variable as the numerical
outcome of the experiment (i.e., 1 through 6), but
there are many other possibilities.
„ For example, a binary random variable could be
defined simply by letting x = 0 represent the event
that the outcome of throw is an even number and
x = 1 otherwise.

45

Random Variables
• Note the important fact in the examples just given that the probabilities of the events have not changed; all a random variable does is map events onto the real line.

Random Variables
„ Thus far we have been concerned with
random variables whose values are
discrete.
„ To handle continuous random variables
we need some additional tools.
„ In the discrete case, the probabilities of
events are numbers between 0 and 1.

47

Random Variables
„ When dealing with continuous quantities
(which are not denumerable) we can no
longer talk about the "probability of an
event" because that probability is zero.
„ This is not as unfamiliar as it may seem.

48


Random Variables
„ For example,
„ given a continuous function we know that the
area of the function between two limits a
and b is the integral from a to b of the
function.
„ However, the area at a point is zero because
the integral from,say, a to a is zero.
„ We are dealing with the same concept in the
case of continuous random variables.

49

Random Variables
„ Thus, instead of talking about the probability
of a specific value, we talk about the
probability that the value of the random
variable lies in a specified range.
„ In particular, we are interested in the
probability that the random variable is less
than or equal to (or, similarly, greater than or
equal to) a specified constant a.
„ We write this as
F(a) = P(x ≤ a)

Random Variables
„ If this function is given for all values of a (i.e.,
− ∞ < a < ∞), then the values of random variable
x have been defined.
„ Function F is called the cumulative probability
distribution function or simply the cumulative
distribution function (cdf).
„ The shortened term distribution function also
is used.

51

Random Variables
„ Observe that the notation we have used makes
no distinction between a random variable and
the values it assumes.
„ If confusion is likely to arise, we can use more
formal notation in which we let capital letters
denote the random variable and lowercase
letters denote its values.
„ For example, the cdf using this notation is
written as
FX(x) = P(X ≤ x)
52


Random Variables
„ When confusion is not likely, the cdf
often is written simply as F(x).
„ This notation will be used in the following
discussion when speaking generally about
the cdf of a random variable.

53

Random Variables
„ Due to the fact that it is a probability,
the cdf has the following properties:
1. F(-∞) = 0
2. F(∞) = 1
3. 0 ≤ F(x) ≤ 1
4. F(x1) ≤ F(x2) if x1 < x2
5. P(x1 < x ≤ x2) = F(x2) – F(x1)
6. F(x+) = F(x),
where x+ = x + ε, with ε being a positive,
infinitesimally small number. 54


Random Variables
• The probability density function (pdf, or simply the density function) of a random variable x is defined as the derivative of the cdf:
      p(x) = dF(x)/dx

Random Variables
• The pdf satisfies the following properties:
  1. p(x) ≥ 0 for all x
  2. the integral of p(x) over all x equals 1
  3. F(x) is the integral of p(w) from -∞ to x
  4. P(x1 < x ≤ x2) is the integral of p(x) from x1 to x2

Random Variables
„ The preceding concepts are applicable to
discrete random variables.
„ In this case, there is a finite no. of events and
we talk about probabilities, rather than
probability density functions.
„ Integrals are replaced by summations and,
sometimes, the random variables are
subscripted.
„ For example, in the case of a discrete variable
with N possible values we would denote the
probabilities by P(xi), i=1, 2,…, N.
57

Random Variables
• If a random variable x is transformed by a monotonic transformation function T(x) to produce a new random variable y, the probability density function of y can be obtained from knowledge of T(x) and the probability density function of x, as follows:
      p_y(y) = p_x(x) |dx/dy|
  where the vertical bars signify the absolute value.

Random Variables
„ A function T(x) is monotonically
increasing if T(x1) < T(x2) for x1 < x2, and
„ A function T(x) is monotonically
decreasing if T(x1) > T(x2) for x1 < x2.
„ The preceding equation is valid if T(x) is
an increasing or decreasing monotonic
function.

59

Applied to Images
• Let
  • p_r(r) denote the PDF of random variable r
  • p_s(s) denote the PDF of random variable s
• If p_r(r) and T(r) are known and T^(-1)(s) satisfies condition (a), then p_s(s) can be obtained using the formula
      p_s(s) = p_r(r) |dr/ds|

Applied to Image

The PDF of the transformed variable s


is determined by
the gray-level PDF of the input image
and by
the chosen transformation function

61

Transformation function
• Take as transformation function the cumulative distribution function (CDF) of random variable r:
      s = T(r) = ∫_0^r p_r(w) dw
  where w is a dummy variable of integration.
• Note: T(r) depends on p_r(r).

Cumulative Distribution Function
• The CDF is an integral of a probability function (which is always positive), i.e., the area under that function.
• Thus, the CDF is always single-valued and monotonically increasing, and therefore satisfies condition (a).
• We can use the CDF as a transformation function.

Finding p_s(s) from a given T(r)

      ds/dr = dT(r)/dr = d/dr [ ∫_0^r p_r(w) dw ] = p_r(r)

Substituting into p_s(s) = p_r(r) |dr/ds| yields

      p_s(s) = p_r(r) |dr/ds| = p_r(r) · 1/p_r(r) = 1,   where 0 ≤ s ≤ 1

p_s(s)
• Since p_s(s) is a probability density function, it must be zero outside the interval [0,1] in this case, because its integral over all values of s must equal 1.
• p_s(s) is therefore called a uniform probability density function.
• p_s(s) is always uniform, independent of the form of p_r(r).

In summary,
      s = T(r) = ∫_0^r p_r(w) dw
yields a random variable s characterized by a uniform probability density function, p_s(s) = 1 for 0 ≤ s ≤ 1.

Discrete transformation function
• The probability of occurrence of gray level r_k in an image is approximated by
      p_r(r_k) = n_k / n,   k = 0, 1, ..., L-1
• The discrete version of the transformation is
      s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / n,   k = 0, 1, ..., L-1

Histogram Equalization
„ Thus, an output image is obtained by mapping
each pixel with level rk in the input image into a
corresponding pixel with level sk in the output
image
„ In discrete space, it cannot be proved in
general that this discrete transformation will
produce the discrete equivalent of a uniform
probability density function, which would be a
uniform histogram

68
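A minimal sketch of discrete histogram equalization, s_k = round((L-1) · Σ_{j≤k} n_j/n). Rounding to the nearest level is the approximation used in the worked example on the following slides, and applying this sketch to that 4x4 image reproduces its mapping 2→3, 3→6, 4→8, 5→9.

import numpy as np

def equalize(img, L):
    # Discrete histogram equalization: s_k = round((L-1) * cumulative sum of n_j / n)
    h = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(h) / img.size
    mapping = np.round((L - 1) * cdf).astype(int)
    return mapping[img]                 # apply s_k = T(r_k) to every pixel

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
print(equalize(img, L=10))
# [[3 6 6 3]
#  [8 3 8 6]
#  [6 3 6 9]
#  [3 8 3 8]]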


Example
• Before and after histogram equalization.

Example
• Before and after histogram equalization.
• The quality is not improved much because the original image already has a broad gray-level scale.

Example
A 4x4 image with gray scale [0,9]:

      2 3 3 2
      4 2 4 3
      3 2 3 5
      2 4 2 4

Its histogram: h(2) = 6, h(3) = 5, h(4) = 4, h(5) = 1, and 0 for all other gray levels.

Gray level j          :  0    1    2      3      4      5      6      7      8      9
No. of pixels n_j     :  0    0    6      5      4      1      0      0      0      0
Running sum of n_j    :  0    0    6      11     15     16     16     16     16     16
s = running sum / n   :  0    0    6/16   11/16  15/16  16/16  16/16  16/16  16/16  16/16
s x 9 (rounded)       :  0    0    3.3≈3  6.1≈6  8.4≈8  9      9      9      9      9

Example
After histogram equalization, the output image (gray scale [0,9]) is

      3 6 6 3
      8 3 8 6
      6 3 6 9
      3 8 3 8

with histogram h(3) = 6, h(6) = 5, h(8) = 4, h(9) = 1.

Note
• It is clearly seen that histogram equalization spreads the gray levels up to the maximum gray level (white), because the cumulative distribution function equals 1 at the top of the range 0 ≤ r ≤ L-1.
• If the cumulative counts of two gray levels are only slightly different, they will be mapped to slightly different or even the same gray levels, since the processed gray level of the output image has to be rounded to an integer.
• Thus the discrete transformation function cannot guarantee a one-to-one mapping.

Histogram Matching
(Specification)

„ Histogram equalization has a disadvantage


which is that it can generate only one type
of output image.
„ With Histogram Specification, we can
specify the shape of the histogram that
we wish the output image to have.
„ It doesn’t have to be a uniform histogram

75

Consider the continuous domain
• Let p_r(r) denote the continuous probability density function of the gray levels of the input image, r.
• Let p_z(z) denote the desired (specified) continuous probability density function of the gray levels of the output image, z.
• Let s be a random variable with the property
      s = T(r) = ∫_0^r p_r(w) dw        (histogram equalization)
  where w is a dummy variable of integration.

• Next, we define a random variable z with the property
      G(z) = ∫_0^z p_z(t) dt = s        (histogram equalization)
  where t is a dummy variable of integration.
• Thus s = T(r) = G(z).
• Therefore, z must satisfy the condition
      z = G^(-1)(s) = G^(-1)[T(r)]
• Assuming G^(-1) exists and satisfies conditions (a) and (b), we can map an input gray level r to an output gray level z.

Procedure Conclusion
1. Obtain the transformation function T(r) by calculating the histogram equalization of the input image:
      s = T(r) = ∫_0^r p_r(w) dw
2. Obtain the transformation function G(z) by calculating the histogram equalization of the desired density function:
      G(z) = ∫_0^z p_z(t) dt = s

Procedure Conclusion
3. Obtain the inverse transformation function G^(-1):
      z = G^(-1)(s) = G^(-1)[T(r)]
4. Obtain the output image by applying the processed gray levels from the inverse transformation function to all the pixels in the input image.
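A sketch of discrete histogram specification following the four steps above: build T from the input histogram, build G from the desired density, then map each level through G^(-1)[T(r)]. Approximating G^(-1) by a nearest-level search is an implementation choice, not something prescribed by the slides.

import numpy as np

def match_histogram(img, desired_p, L):
    # Step 1: T(r_k) from the input image's histogram
    h = np.bincount(img.ravel(), minlength=L)
    T = np.cumsum(h) / img.size
    # Step 2: G(z_k) from the desired density p_z
    G = np.cumsum(desired_p) / np.sum(desired_p)
    # Step 3: z_k = G^(-1)[T(r_k)], taken as the smallest z with G(z) >= T(r_k)
    z = np.clip(np.searchsorted(G, T), 0, L - 1)
    # Step 4: apply the mapping to every pixel
    return z[img]

# Toy usage: push a roughly uniform image toward a bright-biased histogram p_z(z) ~ z
img = np.random.randint(0, 8, size=(32, 32))
desired = np.arange(1, 9, dtype=float)
out = match_histogram(img, desired, L=8)
print(np.bincount(out.ravel(), minlength=8))   # counts now grow toward the bright end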

Example
Assume an image has the gray-level probability density function

      p_r(r) = -2r + 2   for 0 ≤ r ≤ 1
             = 0         elsewhere

(so that ∫_0^1 p_r(w) dw = 1).

Example
We would like to apply histogram specification with the desired probability density function

      p_z(z) = 2z   for 0 ≤ z ≤ 1
             = 0    elsewhere

(again ∫_0^1 p_z(w) dw = 1).

Step 1: Obtain the transformation function T(r)

      s = T(r) = ∫_0^r p_r(w) dw
               = ∫_0^r (-2w + 2) dw
               = -w^2 + 2w evaluated from 0 to r
               = -r^2 + 2r

T(r) is a one-to-one mapping function.

Step 2: Obtain the transformation function G(z)

      G(z) = ∫_0^z 2w dw = w^2 evaluated from 0 to z = z^2

Step 3: Obtain the inverse transformation function G^(-1)

      G(z) = T(r)
      z^2 = -r^2 + 2r
      z = sqrt(2r - r^2)

We can guarantee that 0 ≤ z ≤ 1 when 0 ≤ r ≤ 1.

Discrete formulation

      s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / n,   k = 0, 1, 2, ..., L-1

      G(z_k) = Σ_{i=0..k} p_z(z_i) = s_k,   k = 0, 1, 2, ..., L-1

      z_k = G^(-1)[T(r_k)] = G^(-1)[s_k],   k = 0, 1, 2, ..., L-1

Example
• The image of a Mars moon is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale.

Image Equalization
• Result image after histogram equalization, the transformation function used, and the histogram of the result image.
• Histogram equalization does not make the result look better than the original image. Considering the histogram of the result image, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image. As a consequence, the output image is light and has a washed-out appearance.

Solving the problem
• Since the problem with the histogram-equalization transformation function was caused by a large concentration of pixels in the original image with levels near 0, a reasonable approach is to modify the histogram of that image so that it does not have this property: histogram specification.

Histogram Specification
(1) The transformation function G(z) obtained from
      G(z_k) = Σ_{i=0..k} p_z(z_i) = s_k,   k = 0, 1, 2, ..., L-1
(2) The inverse transformation G^(-1)(s)

Result image and its histogram
• Original image, the result after applying histogram specification, and the output image's histogram.
• Notice that the low end of the output histogram has shifted right toward the lighter region of the gray scale, as desired.

Note
„ Histogram specification is a trial-and-
error process
„ There are no rules for specifying
histograms, and one must resort to
analysis on a case-by-case basis for any
given enhancement task.

91

Note
„ Histogram processing methods are global
processing, in the sense that pixels are
modified by a transformation function
based on the gray-level content of an
entire image.
„ Sometimes, we may need to enhance
details over small areas in an image,
which is called a local enhancement.
92


Local Enhancement
a) Original image (slightly blurred to reduce noise).
b) Global histogram equalization: enhances noise and slightly increases contrast, but the structure is not changed.
c) Local histogram equalization using a 7x7 neighborhood: reveals the small squares inside the larger ones of the original image.

• Define a square or rectangular neighborhood and move the center of this area from pixel to pixel.
• At each location, the histogram of the points in the neighborhood is computed and either a histogram equalization or a histogram specification transformation function is obtained.
• Another approach used to reduce computation is to utilize nonoverlapping regions, but this usually produces an undesirable checkerboard effect.

Explaining the result in c)
• Basically, the original image consists of many small squares inside the larger dark ones.
• However, the small squares were too close in gray level to the larger ones, and their sizes were too small to influence global histogram equalization significantly.
• So, when we use the local enhancement technique, it reveals the small areas.
• Note also that the finer noise texture results from the local processing using relatively small neighborhoods.

Enhancement using
Arithmetic/Logic Operations
„ Arithmetic/Logic operations perform on
pixel by pixel basis between two or more
images
„ except NOT operation which perform
only on a single image

95

Logic Operations
„ Logic operation performs on gray-level
images, the pixel values are processed as
binary numbers
„ light represents a binary 1, and dark
represents a binary 0
„ NOT operation = negative transformation

96


Example of the AND Operation
Original image, AND image mask, and result of the AND operation.

Example of the OR Operation
Original image, OR image mask, and result of the OR operation.

Image Subtraction

g(x,y) = f(x,y) – h(x,y)

„ enhancement of the differences between


images

99

Image Subtraction
a) Original fractal image.
b) Result of setting the four lower-order bit planes to zero (refer to bit-plane slicing): the higher planes contribute significant detail, while the lower planes contribute the fine detail. Image b) is nearly identical visually to image a), with a very slight drop in overall contrast due to less variability of the gray-level values in the image.
c) Difference between a) and b) (nearly black).
d) Histogram equalization of c) (performing a contrast-stretching transformation).

Mask mode radiography
• h(x,y) is the mask, an X-ray image of a region of a patient's body captured by an intensified TV camera (instead of traditional X-ray film) located opposite an X-ray source.
• f(x,y) is an X-ray image taken after injecting a contrast medium (iodine) into the patient's bloodstream.
• Images are captured at TV rates, so the doctor can see how the medium propagates through the various arteries in the area being observed (the effect of subtraction) in a movie-like mode.
• Note: in the difference image, the background is dark because it does not change much between the two images, while the difference area is bright because it changes considerably.

Note
• We may have to adjust the gray scale of the subtracted image to [0, 255] (if 8 bits are used):
  • first, find the minimum gray value of the subtracted image
  • second, find the maximum gray value of the subtracted image
  • set the minimum value to zero and the maximum to 255
  • the remaining values are scaled to the interval [0, 255] by multiplying each value by 255/max
• Subtraction is also used in segmentation of moving pictures to track changes: after subtracting the sequenced images, what is left should be the moving elements in the image, plus noise.

Image Averaging
„ consider a noisy image g(x,y) formed by
the addition of noise η(x,y) to an original
image f(x,y)

g(x,y) = f(x,y) + η(x,y)

103

Image Averaging
• If the noise has zero mean and is uncorrelated, consider the image formed by averaging K different noisy images:
      ḡ(x,y) = (1/K) Σ_{i=1..K} g_i(x,y)

Image Averaging
• Then
      σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
  where σ²_ḡ(x,y) and σ²_η(x,y) are the variances of ḡ and η.
• As K increases, the variability (noise) of the pixel value at each location (x,y) decreases.
Image Averaging
• Thus
      E{ḡ(x,y)} = f(x,y)
  i.e., the expected value of ḡ (the output after averaging) equals the original image f(x,y).
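A sketch of noise reduction by image averaging; the zero-mean Gaussian noise with σ = 64 mirrors the example on the next slide, while the constant synthetic "scene" is just an assumption for illustration.

import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)     # the original image f(x,y)

def noisy(f, sigma=64.0):
    # g_i(x,y) = f(x,y) + eta(x,y), zero-mean Gaussian noise
    return f + rng.normal(0.0, sigma, size=f.shape)

for K in (1, 8, 16, 64, 128):
    g_bar = np.mean([noisy(f) for _ in range(K)], axis=0)
    # the residual noise standard deviation shrinks roughly like sigma / sqrt(K)
    print(K, round(float(np.std(g_bar - f)), 1))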


Image Averaging
„ Note: the images gi(x,y) (noisy images)
must be registered (aligned) in order to
avoid the introduction of blurring and
other artifacts in the output image.

107

Example
a) Original image.
b) Image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels.
c)-f) Results of averaging K = 8, 16, 64 and 128 noisy images.

Spatial Filtering
• Uses a filter (also called a mask, kernel, template, or window).
• The values in a filter subimage are referred to as coefficients, rather than pixels.
• Our focus will be on masks of odd sizes, e.g. 3x3, 5x5, ...

Spatial Filtering Process
• Simply move the filter mask from point to point in the image.
• At each point (x,y), the response of the filter is calculated using a predefined relationship:
      R = w_1 z_1 + w_2 z_2 + ... + w_mn z_mn = Σ_{i=1..mn} w_i z_i

Linear Filtering
• Linear filtering of an image f of size MxN with a filter mask of size mxn is given by the expression
      g(x,y) = Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) f(x+s, y+t)
  where a = (m-1)/2 and b = (n-1)/2.
• To generate a complete filtered image, this equation must be applied for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1.
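A sketch of the linear filtering sum above, written as an explicit double loop over a zero-padded copy of the image (slow but transparent); zero padding at the borders is an assumption. The standard 3x3 box and weighted-average masks described on the following slides are shown as example filters.

import numpy as np

def linear_filter(f, w):
    # g(x,y) = sum_s sum_t w(s,t) f(x+s, y+t), with zero padding at the borders
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g

box = np.ones((3, 3)) / 9.0                                     # 3x3 box (averaging) filter
weighted = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0   # 3x3 weighted average

f = np.zeros((5, 5)); f[2, 2] = 9.0
print(linear_filter(f, box))        # the single bright pixel is spread over a 3x3 area
print(linear_filter(f, weighted))   # same idea, but the center keeps more weight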

Smoothing Spatial Filters


„ used for blurring and for noise reduction
„ blurring is used in preprocessing steps,
such as
„ removal of small details from an image prior
to object extraction
„ bridging of small gaps in lines or curves
„ noise reduction can be accomplished by
blurring with a linear filter and also by a
nonlinear filter
112


Smoothing Linear Filters

„ output is simply the average of the pixels


contained in the neighborhood of the filter
mask.
„ called averaging filters or lowpass filters.

113

Smoothing Linear Filters


„ replacing the value of every pixel in an image
by the average of the gray levels in the
neighborhood will reduce the “sharp”
transitions in gray levels.
„ sharp transitions
„ random noise in the image
„ edges of objects in the image
„ thus, smoothing can reduce noises (desirable)
and blur edges (undesirable)

114


3x3 Smoothing Linear Filters
• Box filter and weighted average filter.
• In the weighted average filter, the center is the most important and other pixels are inversely weighted as a function of their distance from the center of the mask.

Weighted average filter


„ the basic strategy behind weighting the
center point the highest and then
reducing the value of the coefficients as
a function of increasing distance from
the origin is simply an attempt to
reduce blurring in the smoothing
process.

116


General form: smoothing mask
• For a filter of size mxn (m and n odd):
      g(x,y) = [ Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) f(x+s, y+t) ] / [ Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) ]
  i.e., the response is normalized by the sum of all the coefficients of the mask.

Example
a) Original image, 500x500 pixels.
b)-f) Results of smoothing with square averaging filter masks of size n = 3, 5, 9, 15 and 35, respectively.
Note:
• A big mask is used to eliminate small objects from an image.
• The size of the mask establishes the relative size of the objects that will be blended with the background.

Example
Original image, result after smoothing with a 15x15 averaging mask, and result of thresholding.
We can see that after smoothing and thresholding, what remains are the largest and brightest objects in the image.

Order-Statistics Filters (Nonlinear Filters)
• The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter.
• Examples:
  • median filter: R = median{z_k | k = 1, 2, …, n x n}
  • max filter: R = max{z_k | k = 1, 2, …, n x n}
  • min filter: R = min{z_k | k = 1, 2, …, n x n}
• Note: n x n is the size of the mask.

Median Filters
• Replace the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
• Quite popular because for certain types of random noise (impulse noise, i.e., salt-and-pepper noise) they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.

Median Filters
„ forces the points with distinct gray levels to
be more like their neighbors.
„ isolated clusters of pixels that are light or
dark with respect to their neighbors, and
whose area is less than n2/2 (one-half the
filter area), are eliminated by an n x n median
filter.
„ eliminated = forced to have the value equal the
median intensity of the neighbors.
„ larger clusters are affected considerably less
122
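A minimal sketch of an n x n median filter; zero padding at the borders is again an assumed border treatment. It removes isolated impulse ("salt-and-pepper") pixels while blurring less than an averaging filter of the same size.

import numpy as np

def median_filter(f, n=3):
    # Replace each pixel by the median of its n x n neighborhood (zero-padded borders)
    a = n // 2
    fp = np.pad(f, a)
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(fp[x:x + n, y:y + n])
    return g

f = np.full((5, 5), 100, dtype=np.uint8)
f[2, 2] = 255                      # a single "salt" pixel
print(median_filter(f)[2, 2])      # 100: the isolated impulse is eliminated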


Example : Median Filters

123

Sharpening Spatial Filters


„ to highlight fine detail in an image
„ or to enhance detail that has been
blurred, either in error or as a natural
effect of a particular method of image
acquisition.

124


Blurring vs. Sharpening
• As we know, blurring can be done in the spatial domain by pixel averaging in a neighborhood.
• Since averaging is analogous to integration,
• we can guess that sharpening must be accomplished by spatial differentiation.

Derivative operator
• The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied.
• Thus, image differentiation
  • enhances edges and other discontinuities (noise)
  • deemphasizes areas with slowly varying gray-level values.

First-order derivative
• A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference
      ∂f/∂x = f(x+1) - f(x)

Second-order derivative
• Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference
      ∂²f/∂x² = f(x+1) + f(x-1) - 2f(x)

First and second-order derivatives of f(x,y)
• When we consider an image, a function of two variables f(x,y), we deal with partial derivatives along the two spatial axes.

Gradient operator (a vector):
      ∇f = ( ∂f(x,y)/∂x , ∂f(x,y)/∂y )

Laplacian operator (a linear operator):
      ∇²f = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²

Discrete Form of the Laplacian
From
      ∂²f/∂x² = f(x+1,y) + f(x-1,y) - 2f(x,y)
      ∂²f/∂y² = f(x,y+1) + f(x,y-1) - 2f(x,y)
this yields
      ∇²f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)
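A sketch of Laplacian sharpening built directly from the 4-neighbor discrete form above; because this mask has a negative center coefficient, the enhanced image is g = f - ∇²f. Edge-replicating the borders is an assumption.

import numpy as np

def laplacian4(f):
    # 4-neighbor discrete Laplacian: f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
    fp = np.pad(f.astype(np.float64), 1, mode='edge')   # replicate borders (assumption)
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2]
            - 4.0 * fp[1:-1, 1:-1])

def sharpen(f, L=256):
    # g = f - Laplacian(f), since this Laplacian has a negative center coefficient
    g = f.astype(np.float64) - laplacian4(f)
    return np.clip(g, 0, L - 1).astype(np.uint8)

f = np.zeros((5, 5), dtype=np.uint8)
f[:, 2:] = 100                      # a vertical edge
print(sharpen(f))                   # the edge is overshot on both sides, i.e. emphasized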


Resulting Laplacian mask

       0  1  0
       1 -4  1
       0  1  0

Laplacian mask extended to include the diagonal neighbors

       1  1  1
       1 -8  1
       1  1  1

Other implementations of Laplacian masks
• The sign-reversed versions of these masks give the same result, but we have to keep the sign of the center coefficient in mind when combining (adding/subtracting) a Laplacian-filtered image with another image.

Effect of the Laplacian Operator
• Since it is a derivative operator,
  • it highlights gray-level discontinuities in an image
  • it deemphasizes regions with slowly varying gray levels.
• It tends to produce images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.

Correcting the effect of the featureless background
• Easily done by adding the original and the Laplacian image.
• Be careful with the Laplacian filter used:
      g(x,y) = f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
      g(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive

Example
a) Image of the North Pole of the Moon.
b) Laplacian-filtered image using the mask
       1  1  1
       1 -8  1
       1  1  1
c) Laplacian image scaled for display purposes.
d) Image enhanced by addition with the original image.

Mask of Laplacian + addition
• To simplify the computation, we can create a single mask that performs both operations: the Laplacian filter and the addition of the original image.

Mask of Laplacian + addition
      g(x,y) = f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)]
             = 5f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]

which corresponds to the mask

       0 -1  0
      -1  5 -1
       0 -1  0

Example

139

Note
      g(x,y) = f(x,y) - ∇²f(x,y) or f(x,y) + ∇²f(x,y), depending on the sign of the center coefficient, and the composite masks decompose as

       0 -1  0     0  0  0      0 -1  0
      -1  5 -1  =  0  1  0  +  -1  4 -1
       0 -1  0     0  0  0      0 -1  0

      -1 -1 -1     0  0  0     -1 -1 -1
      -1  9 -1  =  0  1  0  +  -1  8 -1
      -1 -1 -1     0  0  0     -1 -1 -1

Unsharp masking
      f_s(x,y) = f(x,y) - f̄(x,y)

sharpened image = original image - blurred image

• Subtracting a blurred version f̄ of an image from the original produces a sharpened output image.

High-boost filtering
      f_hb(x,y) = A f(x,y) - f̄(x,y)
                = (A-1) f(x,y) + f(x,y) - f̄(x,y)
                = (A-1) f(x,y) + f_s(x,y)

• A generalized form of unsharp masking, with A ≥ 1.

High-boost filtering
      f_hb(x,y) = (A-1) f(x,y) + f_s(x,y)
• If we use the Laplacian filter to create the sharpened image f_s(x,y) (with addition of the original image):
      f_s(x,y) = f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
      f_s(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive

High-boost filtering
• This yields
      f_hb(x,y) = A f(x,y) - ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
      f_hb(x,y) = A f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive

High-boost Masks
• The high-boost masks are the Laplacian-plus-addition masks with the center coefficient replaced by A+4 (or A+8 for the version with diagonal neighbors), A ≥ 1.
• If A = 1, high-boost filtering becomes "standard" Laplacian sharpening.

Example

146


Gradient Operator
      ∇f = (Gx, Gy) = ( ∂f/∂x , ∂f/∂y )
• First derivatives are implemented using the magnitude of the gradient:
      |∇f| = mag(∇f) = [Gx² + Gy²]^(1/2) = [ (∂f/∂x)² + (∂f/∂y)² ]^(1/2)
• A commonly used approximation:
      |∇f| ≈ |Gx| + |Gy|
  (the magnitude computation remains nonlinear)

Gradient Masks
Denote the gray levels in a 3x3 neighborhood by

      z1 z2 z3
      z4 z5 z6
      z7 z8 z9

• Simplest approximation, 2x2:
      Gx = (z8 - z5)  and  Gy = (z6 - z5)
      |∇f| = [Gx² + Gy²]^(1/2) = [ (z8 - z5)² + (z6 - z5)² ]^(1/2)
      |∇f| ≈ |z8 - z5| + |z6 - z5|

Gradient Masks
• Roberts cross-gradient operators, 2x2:
      Gx = (z9 - z5)  and  Gy = (z8 - z6)
      |∇f| = [Gx² + Gy²]^(1/2) = [ (z9 - z5)² + (z8 - z6)² ]^(1/2)
      |∇f| ≈ |z9 - z5| + |z8 - z6|

Gradient Masks
• Sobel operators, 3x3:
      Gx = (z7 + 2 z8 + z9) - (z1 + 2 z2 + z3)
      Gy = (z3 + 2 z6 + z9) - (z1 + 2 z4 + z7)
      |∇f| ≈ |Gx| + |Gy|
• The weight value 2 is used to achieve some smoothing by giving more importance to the center point.
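A sketch of the Sobel gradient magnitude using the |Gx| + |Gy| approximation above; the z1..z9 labels follow the 3x3 neighborhood shown on these slides, and edge-replicated borders are an assumption.

import numpy as np

def sobel_magnitude(f):
    # |grad f| ~ |Gx| + |Gy| with the 3x3 Sobel operators
    fp = np.pad(f.astype(np.float64), 1, mode='edge')
    H, W = f.shape
    # z(i, j) is the image shifted by (i, j): z(0, 0) is z5, z(1, 0) is z8, etc.
    z = lambda i, j: fp[1 + i:1 + i + H, 1 + j:1 + j + W]
    gx = (z(1, -1) + 2 * z(1, 0) + z(1, 1)) - (z(-1, -1) + 2 * z(-1, 0) + z(-1, 1))
    gy = (z(-1, 1) + 2 * z(0, 1) + z(1, 1)) - (z(-1, -1) + 2 * z(0, -1) + z(1, -1))
    return np.abs(gx) + np.abs(gy)

f = np.zeros((5, 5)); f[:, 2:] = 100.0     # a vertical edge
print(sobel_magnitude(f)[2])               # strong response on the columns around the edge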


Note
„ the summation of coefficients in all
masks equals 0, indicating that they
would give a response of 0 in an area of
constant gray level.

151

Example

152


Example of Combining Spatial Enhancement Methods
• Goal: sharpen the original image and bring out more skeletal detail.
• Problems: the narrow dynamic range of the gray levels and the high noise content make the image difficult to enhance.

Example of Combining Spatial Enhancement Methods
• Solution:
  1. Laplacian to highlight fine detail
  2. gradient to enhance prominent edges
  3. gray-level transformation to increase the dynamic range of gray levels
