
IT6005- DIP UNIT III -IMAGE RESTORATION AND SEGMENTATION

PART A-TWO MARKS


1. Define image restoration?
 Image restoration is the process of reconstructing an image that is in a degraded or
distorted state. Knowledge of the degradation function is needed for a successful restoration.
 It can also be defined as the process of removing or minimizing the known degradation in the
image.

2. Draw the model of image degradation / restoration process? (Nov 2008 & Nov 2011 &
May 2013 & May 2015)

In the standard model, an input image f(x, y) is operated on by the degradation function H and
corrupted by additive noise η(x, y) to give the degraded image g(x, y) = H[f(x, y)] + η(x, y);
restoration filters then operate on g(x, y) to produce the estimate f̂(x, y).

3. What is mean by homogeneity property in linear operator?


The homogeneity property states that the response to a constant multiple of any input is equal to
the response to that input multiplied by the same constant:
H[k1 f1(x, y)] = k1 H[f1(x, y)] is called the homogeneity property.

4. Define position invariant?


An operator having the input-output relationship g(x, y) = H[f(x, y)] is said to be position
invariant if H[f(x-α, y-β)] = g(x-α, y-β) for any f(x, y) and any α and β.

5. What is meant by circular matrix?


A square matrix in which each row is a circular shift of the preceding row, and the first row is
a circular shift of the last row, is called a circulant (circular) matrix.

6. Write the equation for discrete degradation model?

g(x, y) = Σ (m = 0 to M-1) Σ (n = 0 to N-1) f(m, n) h(x - m, y - n) + η(x, y)

For x = 0, 1, ..., M - 1
    y = 0, 1, ..., N - 1

PREPARED BY P.SRIVIDDHYA-AP/ECE, S.MAHESWARI-AP/ECE 1



7. Define Wiener filter? (OR )What are the functions of wiener filter? (June 2011)
Wiener filtering is a method of restoring images in the presence of blur and noise.

8. What is inverse filtering? (Nov 2011)


Inverse filtering is the process of recovering the input of a system from its output. It is useful
for precorrecting an input signal in anticipation of the degradation caused by the system.
Inverse filters are not physically realizable: they are unstable and sensitive to noise.

9. What are the forms of degradation?


 Sensor noise
 Blur due to camera misfocus
 Relative object-camera motion
 Random atmospheric turbulence
 Thermal noise, etc.

10. List the properties involved in degradation model?


 Linearity
 Additivity
 Homogeneity
 Position / space invariance

11. Difference between image enhancement and image restoration.


Image enhancement:
1. Concerned with extraction of image features.
2. Difficult to represent mathematically.
3. Image dependent.

Image restoration:
1. Concerned with restoration of degradation.
2. Can be quantified mathematically.
3. Depends on class or ensemble properties of the set.

12. Give the noise probability density function?


(i). Gaussian noise.
(ii). Rayleigh noise.
(iii). Erlang or Gamma noise.
(iv). Exponential noise.
(v). Uniform noise.
(vi). Impulse noise.

13. Define pseudo inverse filter? (Nov 2008 & Nov 2011)
 Stabilized or generalized version of the inverse filter.
 For a linear shift-invariant system with frequency response H(u, v), the pseudo-inverse filter is

H⁻(u, v) = 1 / H(u, v),  H(u, v) ≠ 0
H⁻(u, v) = 0,            H(u, v) = 0

14. What are the assumptions in Wiener filter?


 Noise and image are uncorrelated, and one of them has zero mean.
 Gray levels in the estimate are a linear function of the gray levels in the degraded image.

15. State the concept of inverse filtering? (or) What is the principle of inverse filtering?
(May-2014)
The inverse filter divides the transform of the degraded image G(u, v) by the degradation
function H(u, v) to obtain an approximation of the transform of the original image:

F̂(u, v) = G(u, v) / H(u, v)

Limitation: it has no provision for handling noise.

16. Define spatial transformation?


Spatial transformation is defined as the rearrangement of the pixels on the image plane.

17. Define gray level interpolation? (June 2010)


It deals with the assignment of gray levels to pixels in the spatially transformed image.

18. Difference between Wiener and inverse filtering? (June 2011)


Wiener filtering:
 Works even when the noise level is increased.
 Has no zero or small value problem.
 The result obtained is closer to the original image.

Inverse filtering:
 Works only with a small amount of noise.
 Has the zero or small value problem.
 The result obtained is less close to the original image.

19. What is constrained restoration with Lagrange multiplier?

In constrained restoration, a quantity λ (the Lagrange multiplier) must be adjusted so that the
constraint ||g - Hf̂||² = ||η||² is satisfied.

20. What are the limitations of inverse filtering? (or) Mention the drawbacks of inverse
filtering. (June 2011, Nov-2013)

1. Inverse filtering is highly sensitive to noise.


2. It has the zero or small value problem.
If the degradation function H(u, v) has a zero or small value, then the ratio N(u, v) / H(u, v)
dominates the value of the restored image.
This implies poor performance of the system and results in a bad approximation of the original
function. This is known as the zero or small value problem.


21. Give the transfer function of Wiener filtering? (June 2009)

Hw(u, v) = H*(u, v) / [ |H(u, v)|² + Sη(u, v) / Sf(u, v) ]

Where, |H(u, v)|² = H*(u, v) H(u, v)
H*(u, v) – conjugate of H(u, v)
Sf(u, v) – power spectrum of original image.
Sη(u, v) – power spectrum of noise.

25. Define rubber sheet transformation. (or) Geometric transformation:(May 2013 &
2015)
Geometric transformations generally modify the spatial relationships between pixels in an
image. They are also called "rubber sheet transformations" because they may be viewed as the
process of printing the image on a sheet of rubber and then stretching the sheet.

26. What is meant by bilinear interpolation? (April 2011)


• Bilinear interpolation is used when we need to know values at random position on a regular
2D grid. Note that this grid can as well be an image or a texture map.
• Interpolation Techniques:
– 1D linear interpolation (elementary algebra)
– 2D -2 sequential 1D (divide-and-conquer)
– Directional (adaptive) interpolation.
• Interpolation Applications:
– Digital zooming (resolution enhancement)
– Image imprinting (error concealment)
– Geometric transformations
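The "2D as two sequential 1D" scheme listed above can be sketched as a tiny helper; this is an illustration only, and the function name and unit-cell corner convention are assumptions:

```python
def bilerp(f00, f10, f01, f11, dx, dy):
    """Bilinear interpolation inside a unit cell: f00..f11 are the
    four corner values (x index first, then y), and (dx, dy) is the
    fractional position in [0, 1]. Two 1D linear interpolations
    along x, then one along y -- the divide-and-conquer scheme."""
    top = f00 * (1 - dx) + f10 * dx      # interpolate along x at y = 0
    bot = f01 * (1 - dx) + f11 * dx      # interpolate along x at y = 1
    return top * (1 - dy) + bot * dy     # interpolate along y
```

For example, at the cell center (dx = dy = 0.5) the result is just the average of the four corners, and at a corner (dx, dy ∈ {0, 1}) it returns that corner value exactly.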

27. What are the basic transformations that can be applied on the images?
– Applying some basic transformation to a uniformly distorted image can correct for a range
of perspective distortions by transforming the measurements from the ideal coordinates to those
actually used. (For example, this is useful in satellite imaging where geometrically correct
ground maps are desired.)
– Translation
– Scaling
– Rotation
– Concatenation


28. State the principle of image restoration? (Nov-2012)


The restoration technique is used to reconstruct or recover an image that has already been
degraded by some degradation phenomenon. The restoration process models the degradation and
applies the inverse process to the degraded image to recover the original image.

29. Write the equation for continuous degradation model?

g(x, y) = ∫∫ f(α, β) h(x, α, y, β) dα dβ + η(x, y)

30. What is fredholm integral of first kind?


The Fredholm integral (superposition integral) of the first kind is used to calculate the response
to any input f(α, β) when the response of the degradation function H to an impulse is known. It is
given by

g(x, y) = ∫∫ f(α, β) h(x, α, y, β) dα dβ

This integral is an important fundamental result in linear system theory.

31. What is Lagrange multiplier? Where it is used? (nov 2014)


The Lagrange multiplier is a strategy for finding the local maxima and minima of a function
subject to equality constraints. In image processing it is mainly used in constrained image
restoration, where the restored image must satisfy a constraint derived from the noise.

32. Why blur is to be removed from images? (nov 2014)


Blur is caused by an improperly focused lens, relative motion between camera and scene, and
atmospheric turbulence. It introduces a bandwidth reduction and makes image analysis complex.
To prevent these problems, the blur is removed from images.

33. What is erosion?


Erosion shrinks or thins objects in a binary image. In fact, we can view erosion as a
morphological filtering operation in which image details smaller than the structuring element
are filtered (removed) from the image. Erosion performs the function of a "line filter."

34. What is dilation?


 Unlike erosion, which is a shrinking or thinning operation, dilation "grows" or "thickens"
objects in a binary image.
 The specific manner and extent of this thickening is controlled by the shape of the
structuring element used.


PART-B
1. Explain in detail about noise distributions or noise models? (Apr2010)

Noise distribution:
Noise in a digital image arises during image acquisition or transmission. During acquisition,
the image sensor is affected by environmental factors; during transmission, images are corrupted
by interference in the channel.

Spatial property of noise:
Noise is assumed to be independent of the spatial coordinates. In some applications this
assumption is invalid, and spatially dependent noise must be dealt with.

PDF of noise:
(i) Gaussian noise:
Gaussian noise models are mathematically tractable in both the spatial and frequency domains.
This convenience is such that they are often used even in situations where they are marginally
applicable at best.

The PDF of Gaussian noise is

p(z) = (1 / (√(2π) σ)) e^(-(z - μ)² / (2σ²))

Where,
μ – mean of z
σ – standard deviation
σ² – variance
z – gray level

[Figure: bell-shaped curve p(z) with peak 1/(√(2π) σ) at z = μ.]

Approximately 70% of the values of z will be in the range [(μ - σ), (μ + σ)].

(ii) Rayleigh noise:

The PDF of Rayleigh noise is

p(z) = (2/b)(z - a) e^(-(z - a)²/b),  z ≥ a
p(z) = 0,                             z < a

[Figure: skewed curve p(z) with peak value 0.607 √(2/b) at z = a + √(b/2).]

(iii) Erlang (Gamma) noise:

The PDF of Erlang (Gamma) noise is given by

p(z) = (a^b z^(b-1) / (b - 1)!) e^(-az),  z ≥ 0
p(z) = 0,                                 z < 0

[Figure: skewed curve p(z) with peak at z = (b - 1)/a.]

(iv) Exponential noise:

The PDF of exponential noise is given by

p(z) = a e^(-az),  z ≥ 0
p(z) = 0,          z < 0

[Figure: decaying exponential p(z) with value a at z = 0.]
(v) Uniform noise:

The PDF of uniform noise is given by

p(z) = 1/(b - a),  a ≤ z ≤ b
p(z) = 0,          otherwise

[Figure: rectangular PDF of height 1/(b - a) between z = a and z = b.]
(vi). Impulse noise distribution or salt-and-pepper noise:

The PDF of (bipolar) impulse noise is

p(z) = Pa for z = a, p(z) = Pb for z = b, and p(z) = 0 otherwise.

 If b > a, intensity b appears as a light (white) dot in the image and intensity a as a dark
(black) dot.

 If either Pa or Pb is zero, the impulse noise is called "unipolar noise".

 If neither Pa nor Pb is zero, the noise is called bipolar noise. It resembles salt and pepper
granules distributed over the image, so bipolar impulse noise is known as "salt-and-pepper
noise".

 It is also called spike noise. Negative impulses appear as black (pepper) dots and positive
impulses as white (salt) dots.
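The noise models above can be sampled with numpy's random generators; the sketch below is an illustration only, and all parameter values in it are assumptions, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

gaussian = rng.normal(loc=0.0, scale=10.0, size=shape)    # mean mu, std sigma
rayleigh = rng.rayleigh(scale=10.0, size=shape)
erlang   = rng.gamma(shape=2.0, scale=5.0, size=shape)    # Erlang = integer-shape gamma
expo     = rng.exponential(scale=10.0, size=shape)
uniform  = rng.uniform(low=0.0, high=20.0, size=shape)

# Bipolar impulse (salt-and-pepper) noise on a mid-gray image:
# pepper (z = a = 0) with probability Pa, salt (z = b = 255) with Pb.
img = np.full(shape, 128, dtype=np.uint8)
pa, pb = 0.05, 0.05
r = rng.random(shape)
noisy = img.copy()
noisy[r < pa] = 0              # dark (pepper) dots
noisy[r > 1 - pb] = 255        # light (salt) dots
```

With neither pa nor pb zero the result is bipolar (salt-and-pepper); setting one of them to zero gives the unipolar case described above.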

2. Explain the various types of mean filters in detail.

(i) Arithmetic Mean filters

The arithmetic mean filter computes the average value of the corrupted image g(x, y) in
the area defined by Sxy. The value of the restored image f̂ at point (x, y) is simply the arithmetic
mean computed using the pixels in the region defined by Sxy:

f̂(x, y) = (1/mn) Σ (s,t)∈Sxy g(s, t)


(ii) Geometric mean filtering:

 In the geometric mean filter, each restored pixel is given by the product of the pixels in the
subimage window, raised to the power 1/mn:

f̂(x, y) = [ Π (s,t)∈Sxy g(s, t) ]^(1/mn)

 Image analysis includes measurement of the shape, size, texture and color of the objects
present in the image; the input is an image, and the output is numerical and graphical
information based on the characteristics of the image data.
 Based upon the filter results, the analysis of the image is performed to identify which pixels
need to be improved. This is done based upon statistical measurements.

(iii) Harmonic mean filter: (Nov 2014)

It can be used to reduce noise such as salt noise and Gaussian noise (it fails for pepper
noise). The restored image using the harmonic mean filter is given by

f̂(x, y) = mn / Σ (s,t)∈Sxy [ 1 / g(s, t) ]

(iv) Contraharmonic mean filter:

It is used to reduce salt-and-pepper noise. The restored image is given by

f̂(x, y) = Σ g(s, t)^(Q+1) / Σ g(s, t)^Q

Where Q – order of the filter.

For positive values of Q it eliminates pepper noise, and for negative values of Q it eliminates
salt noise. It cannot do both simultaneously.
If Q = 0, the contraharmonic mean filter reduces to the arithmetic mean filter.
If Q = -1, it reduces to the harmonic mean filter. Thus it is helpful in restoring images.
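The mean filters above can be sketched with one function; the illustrative implementation below (function name, edge padding, and the small eps guard are assumptions) uses the contraharmonic form, which contains the arithmetic (Q = 0) and harmonic (Q = -1) filters as special cases:

```python
import numpy as np

def contraharmonic(img, ksize=3, Q=1.5):
    """Contraharmonic mean filter: sum(g^(Q+1)) / sum(g^Q) over each
    ksize x ksize neighborhood. Q > 0 removes pepper noise, Q < 0
    removes salt noise; Q = 0 is the arithmetic mean and Q = -1 the
    harmonic mean."""
    img = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    eps = 1e-12                              # guard against 0 ** Q for negative Q
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + ksize, j:j + ksize] + eps
            out[i, j] = np.sum(w ** (Q + 1)) / np.sum(w ** Q)
    return out

# A single pepper pixel on a flat patch is removed with Q > 0
flat = np.full((5, 5), 100.0)
flat[2, 2] = 0.0
restored = contraharmonic(flat, ksize=3, Q=1.5)
```

On the flat test patch, the pepper pixel is pulled back to the surrounding level of 100, as the Q > 0 case predicts.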

3. Explain various order-statistic filters:

Order-statistic filters are spatial filters whose response is based on ordering (ranking) the
values of the pixels contained in the image area encompassed by the filter.


(i) Median filtering:

The best-known order-statistic filter is the median filter, which, as its name implies, replaces
the value of a pixel by the median of the intensity levels in the neighborhood of that pixel:

f̂(x, y) = median (s,t)∈Sxy { g(s, t) }

 The value of the pixel at ( x , y ) is included in the computation of the median.


 Median filters are quite popular because, for certain types of random noise, they provide
excellent noise-reduction capabilities, with considerably less blurring than linear smoothing
filters of similar size.
 Median filters are particularly effective in the presence of both bipolar and unipolar
impulse noise.
 The median filter yields excellent results for images corrupted by this type of noise.

(ii) Max and min filters:

Using the 100th percentile results in the so-called max filter, given by

f̂(x, y) = max (s,t)∈Sxy { g(s, t) }

This filter is useful for finding the brightest points in an image. Also, because pepper noise
has very low values, it is reduced by this filter as a result of the max selection process.

The 0th percentile filter is the min filter:

f̂(x, y) = min (s,t)∈Sxy { g(s, t) }

It is useful for finding the darkest points in an image and reduces salt noise.

(iii) Midpoint filter

The midpoint filter simply computes the midpoint between the maximum and minimum values in the
area encompassed by the filter:

f̂(x, y) = (1/2) [ max (s,t)∈Sxy { g(s, t) } + min (s,t)∈Sxy { g(s, t) } ]

This filter combines order statistics and averaging. It works best for randomly distributed noise,
like Gaussian or uniform noise.


(iv) Alpha-trimmed mean filter


 Suppose that we delete the d/2 lowest and the d/2 highest intensity values of g(s, t) in
the neighborhood Sxy. Let gr(s, t) represent the remaining mn - d pixels.
 A filter formed by averaging these remaining pixels is called an alpha-trimmed mean
filter:

f̂(x, y) = (1 / (mn - d)) Σ (s,t)∈Sxy gr(s, t)

 The value of d can range from 0 to mn - 1. When d = 0, the alpha-trimmed filter
reduces to the arithmetic mean filter discussed in the previous section. If we choose d = mn - 1,
the filter becomes a median filter.
 For other values of d, the alpha-trimmed filter is useful in situations involving multiple
types of noise, such as a combination of salt-and-pepper and Gaussian noise.
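The order-statistic filters above can be sketched as one windowed function; this is an illustration only, and the function name, mode strings, and edge handling are assumptions:

```python
import numpy as np

def order_statistic(img, ksize=3, mode="median", d=2):
    """Order-statistic filters over each ksize x ksize window:
    'median', 'max', 'min', 'midpoint', or 'alpha' (alpha-trimmed
    mean: drop the d/2 lowest and d/2 highest of the mn sorted
    values, then average the remaining mn - d)."""
    pad = ksize // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = np.sort(padded[i:i + ksize, j:j + ksize].ravel())
            if mode == "median":
                out[i, j] = w[len(w) // 2]
            elif mode == "max":
                out[i, j] = w[-1]
            elif mode == "min":
                out[i, j] = w[0]
            elif mode == "midpoint":
                out[i, j] = 0.5 * (w[0] + w[-1])
            else:                                  # alpha-trimmed mean
                out[i, j] = w[d // 2: len(w) - d // 2].mean()
    return out

# A lone salt impulse on a flat image vanishes under the median filter
img = np.full((5, 5), 50.0)
img[2, 2] = 255.0
clean = order_statistic(img, mode="median")
```

The same impulse pulls the max filter up to 255 and the midpoint up to halfway, which illustrates why the median is preferred for impulse noise.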

4. Explain Adaptive filters in detail? Or Adaptive, local noise reduction filter

(i) Adaptive, local noise reduction filter

 The simplest statistical measures of a random variable are its mean and variance.
 These are reasonable parameters on which to base an adaptive filter because they are
quantities closely related to the appearance of an image.
 The mean gives a measure of average intensity in the region over which it is computed,
and the variance gives a measure of contrast in that region.
 Our filter is to operate on a local region, Sxy.

The response of the filter at any point (x, y) on which the region is centered is to be based on
four quantities:
1. g(x, y), the value of the noisy image at (x, y);
2. ση², the variance of the noise corrupting f(x, y) to form g(x, y);
3. mL, the local mean of the pixels in Sxy; and
4. σL², the local variance of the pixels in Sxy.

We want the behavior of the filter to be as follows:

1. If ση² is zero, the filter should return simply the value of g(x, y). This is the
trivial, zero-noise case in which g(x, y) is equal to f(x, y).
2. If the local variance is high relative to ση², the filter should return a value
close to g(x, y). A high local variance typically is associated with edges, and
these should be preserved.


3. If the two variances are equal, we want the filter to return the arithmetic
mean value of the pixels in Sxy. This condition occurs when the local area has the same
properties as the overall image, and local noise is to be reduced simply by averaging.

An adaptive expression for obtaining f̂(x, y) based on these assumptions may be written as

f̂(x, y) = g(x, y) - (ση² / σL²) [ g(x, y) - mL ]     (5.3-12)

 The only quantity that needs to be known or estimated is the variance of the overall noise,
ση².

 The other parameters are computed from the pixels in Sxy at each location (x, y) on
which the filter window is centered.

 A tacit assumption in Eq. (5.3-12) is that ση² ≤ σL².


 The noise in our model is additive and position independent, so this is a reasonable
assumption to make because Sxy is a subset of g(x, y).
 However, we seldom have exact knowledge of ση². Therefore, it is possible for this
condition to be violated in practice.
 For that reason, a test should be built into an implementation of Eq. (5.3-12) so that the
ratio ση²/σL² is set to 1 if ση² > σL². This makes the filter nonlinear.
 However, it prevents nonsensical results (i.e., negative intensity levels, depending on the
value of mL) due to a potential lack of knowledge about the variance of the image noise.
Another approach is to allow the negative values to occur, and then rescale the intensity values
at the end. The result then would be a loss of dynamic range in the image.
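The adaptive expression and the ratio-clipping test can be sketched as follows; this is an illustration, and the function name, window size, and padding mode are assumptions:

```python
import numpy as np

def adaptive_local(img, noise_var, ksize=7):
    """Adaptive, local noise-reduction filter:
    f_hat = g - (noise_var / local_var) * (g - local_mean),
    with the variance ratio clipped to 1 whenever local_var <=
    noise_var -- the nonlinear test described in the text."""
    img = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + ksize, j:j + ksize]
            m_L, var_L = w.mean(), w.var()
            ratio = 1.0 if var_L <= noise_var else noise_var / var_L
            out[i, j] = img[i, j] - ratio * (img[i, j] - m_L)
    return out
```

With noise_var = 0 the filter returns the image unchanged (the trivial, zero-noise case), and on a flat region it returns the local mean, matching the three behaviors listed above.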

(ii) Adaptive median filter

 The median filter performs well if the spatial density of the impulse noise is not large
(as a rule of thumb, Pa and Pb less than 0.2).
 An additional benefit of the adaptive median filter is that it seeks to preserve detail
while smoothing non-impulse noise, something that the "traditional" median filter does not
do.
 As in all the filters discussed in the preceding sections, the adaptive median filter
also works in a rectangular window area Sxy.
 Unlike those filters, however, the adaptive median filter changes (increases) the
size of Sxy during filter operation, depending on certain conditions listed in this section.
 Keep in mind that the output of the filter is a single value used to replace the value
of the pixel at (x, y), the point on which the window Sxy is centered at a given time.


Consider the following notation:

zmin = minimum intensity value in Sxy
zmax = maximum intensity value in Sxy
zmed = median of intensity values in Sxy
zxy  = intensity value at coordinates (x, y)
Smax = maximum allowed size of Sxy

The adaptive median-filtering algorithm works in two stages, denoted stage A and stage B, as
follows:

Stage A: A1 = zmed - zmin
         A2 = zmed - zmax
         If A1 > 0 AND A2 < 0, go to stage B
         Else increase the window size
         If window size ≤ Smax, repeat stage A
         Else output zmed

Stage B: B1 = zxy - zmin
         B2 = zxy - zmax
         If B1 > 0 AND B2 < 0, output zxy
         Else output zmed

 The key to understanding the mechanics of this algorithm is to keep in mind that it has
three main purposes: to remove salt-and-pepper (impulse) noise, to provide smoothing of other
noise that may not be impulsive, and to reduce distortion, such as excessive thinning or
thickening of object boundaries.
 The values zmin and zmax are considered statistically by the algorithm to be "impulse-like"
noise components, even if these are not the lowest and highest possible pixel values in the
image.
 With these observations in mind, we see that the purpose of stage A is to determine if the
median filter output, zmed, is an impulse (black or white) or not.
 If the condition zmin < zmed < zmax holds, then zmed cannot be an impulse for the reason
mentioned in the previous paragraph.
 In this case, we go to stage B and test to see whether the point in the center of the window, zxy,
is itself an impulse (recall that zxy is the point being processed).
 If the condition B1 > 0 AND B2 < 0 is true, then zmin < zxy < zmax, and zxy cannot be an impulse
for the same reason that zmed was not.


 In this case, the algorithm outputs the unchanged pixel value, zxy. By not changing these
"intermediate-level" points, distortion is reduced in the image.
 If the condition B1 > 0 AND B2 < 0 is false, then either zxy = zmin or zxy = zmax. In
either case, the value of the pixel is an extreme value and the algorithm outputs the median
value zmed, which we know from stage A is not a noise impulse.
 The last step is what the standard median filter does. The problem is that the standard
median filter replaces every point in the image by the median of the corresponding
neighborhood. This causes unnecessary loss of detail.
 Continuing with the explanation, suppose that stage A does find an impulse (i.e., it fails
the test that would cause it to branch to stage B).
 The algorithm then increases the size of the window and repeats stage A. This looping
continues until the algorithm either finds a median value that is not an impulse (and branches to
stage B), or the maximum window size is reached.
 If the maximum window size is reached, the algorithm returns the value of zmed. Note
that there is no guarantee that this value is not an impulse.
 The smaller the noise probabilities Pa and/or Pb are, or the larger Smax is allowed to be, the
less likely it is that a premature exit condition will occur.
 This is plausible. As the density of the impulses increases, it stands to reason that we
would need a larger window to "clean up" the noise spikes.
 Every time the algorithm outputs a value, the window Sxy is moved to the next location
in the image. The algorithm is then reinitialized and applied to the pixels in the new location.
 As indicated in Problem 3.18, the median value can be updated iteratively using only the
new pixels, thus reducing computational load.
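Stages A and B above can be sketched directly as code; this is an illustrative implementation, and the function name, edge padding, and window-growth loop are assumptions:

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter (stages A and B from the text).
    The window grows around each pixel until the window median is
    not an impulse, or the size limit s_max is reached."""
    img = img.astype(np.float64)
    pad = s_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            k = 3
            while True:
                r = k // 2
                w = padded[i + pad - r:i + pad + r + 1,
                           j + pad - r:j + pad + r + 1]
                zmin, zmax, zmed = w.min(), w.max(), np.median(w)
                if zmin < zmed < zmax:          # stage A: zmed is not an impulse
                    zxy = img[i, j]
                    # stage B: keep the pixel if it is not an impulse itself
                    out[i, j] = zxy if zmin < zxy < zmax else zmed
                    break
                k += 2                           # grow the window
                if k > s_max:                    # premature exit: output zmed
                    out[i, j] = zmed
                    break
    return out
```

On a flat image with one impulse the spike is removed, while on a smooth gradient the interior pixels pass stage B unchanged, illustrating the detail-preserving behavior described above.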

5. Explain Periodic Noise Reduction by Frequency Domain Filtering (or) various types of
filter for periodic noise reduction.

 The basic idea is that periodic noise appears as concentrated bursts of energy in the
Fourier transform, at locations corresponding to the frequencies of the periodic interference.
 The approach is to use a selective filter to isolate the noise.
 The three types of selective filters bandreject, bandpass, and notch are used for basic
periodic noise reduction.

5.1 Band reject Filters

 One of the principal applications of bandreject filtering is for noise removal in


applications where the general location of the noise component(s) in the frequency domain is
approximately known.
 A good example is an image corrupted by additive periodic noise that can be
approximated as two-dimensional sinusoidal functions.


 It is not difficult to show that the Fourier transform of a sine consists of two impulses that
are mirror images of each other about the origin of the transform.
 The impulses are both imaginary (the real part of the Fourier transform of a sine is zero)
and are complex conjugates of each other.

 Figure 5.16(a), which is the same as Fig. 5.5(a), shows an image heavily corrupted
by sinusoidal noise of various frequencies.
 The noise components are easily seen as symmetric pairs of bright dots in the Fourier
spectrum shown in Fig. 5.16(b).
 In this example, the components lie on an approximate circle about the origin of the
transform, so a circularly symmetric bandreject filter is a good choice.
 Figure 5.16(c) shows a Butterworth bandreject filter of order 4, with the appropriate
radius and width to enclose completely the noise impulses.
 Since it is desirable in general to remove as little as possible from the transform, sharp,
narrow filters are common in bandreject filtering.
 The result of filtering Fig. 5.16(a) with this filter is shown in Fig. 5.16(d). The
improvement is quite evident.
 Even small details and textures were restored effectively by this simple filtering
approach.
 It is worth noting also that it would not be possible to get equivalent results by a direct
spatial domain filtering approach using small convolution masks.
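A circularly symmetric Butterworth bandreject transfer function of the kind used above can be sketched as follows; this is an illustration, and the centered-grid construction and the zero-denominator guard are implementation assumptions:

```python
import numpy as np

def butterworth_bandreject(shape, d0, w, n=4):
    """Butterworth bandreject transfer function of order n,
    H(u, v) = 1 / (1 + [D * w / (D^2 - d0^2)]^(2n)),
    on a centered (fftshift-ed) frequency grid; d0 is the radius
    of the rejected ring and w its width."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)                      # U varies down rows, V across cols
    D = np.sqrt(U ** 2 + V ** 2)                  # distance from the frequency origin
    denom = D ** 2 - d0 ** 2
    denom = np.where(denom == 0.0, 1e-12, denom)  # force H -> 0 exactly on the ring
    return 1.0 / (1.0 + (D * w / denom) ** (2 * n))

# Typical use: multiply H with the centered spectrum of the image,
# e.g. np.fft.fftshift(np.fft.fft2(img)) * H, then invert the transform.
H = butterworth_bandreject((64, 64), d0=16, w=4)
```

The filter is 1 at the origin (low frequencies pass) and drops to essentially 0 on the ring D = d0, which is where the sinusoidal noise impulses sit.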


5.2 Bandpass Filters

A bandpass filter performs the opposite operation of a bandreject filter.

As shown in Section 4.10.1, the transfer function HBP(u, v) of a bandpass filter is
obtained from a corresponding bandreject filter with transfer function HBR(u, v) by using the
equation

HBP(u, v) = 1 - HBR(u, v)

5.3 Notch filters

 A notch filter rejects (or passes) frequencies in predefined neighborhoods about a center
frequency.
 Equations for notch filtering are detailed in Section 4.10.2. Figure 5.18 shows 3-D plots
of ideal, Butterworth, and Gaussian notch (reject) filters.
 Due to the symmetry of the Fourier transform, notch filters must appear in symmetric
pairs about the origin in order to obtain meaningful results.
 The one exception to this rule is if the notch filter is located at the origin, in which
case it appears by itself.
 Although we show only one pair for illustrative purposes, the number of pairs of
notch filters that can be implemented is arbitrary.
 The shape of the notch areas also can be arbitrary (e.g., rectangular). We can obtain
notch filters that pass, rather than suppress, the frequencies contained in the notch areas.
 Since these filters perform exactly the opposite function as the notch reject filters,
their transfer functions are given by

HNP(u, v) = 1 - HNR(u, v)

 where HNP(u, v) is the transfer function of the notch pass filter corresponding to the
notch reject filter with transfer function HNR(u, v).


5.4 Optimum Notch Filtering

 Alternative filtering methods that reduce the effect of these degradations are quite
useful in many applications.
 The method discussed here is optimum, in the sense that it minimizes local variances of
the restored estimate f̂(x, y).
 The procedure consists of first isolating the principal contributions of the interference
pattern and then subtracting a variable, weighted portion of the pattern from the corrupted
image.
 Although we develop the procedure in the context of a specific application, the basic
approach is quite general and can be applied to other restoration tasks in which multiple
periodic interference is a problem.
 The first step is to extract the principal frequency components of the interference pattern.
 As before, this can be done by placing a notch pass filter, HNP(u, v), at the location
of each spike.
 If the filter is constructed to pass only components associated with the interference
pattern, then the Fourier transform of the interference noise pattern is given by the expression

N(u, v) = HNP(u, v) G(u, v)

 where, as usual, G(u, v) denotes the Fourier transform of the corrupted image.
Formation of HNP(u, v) requires considerable judgment about what is or is not an
interference spike.
 For this reason, the notch pass filter generally is constructed interactively by
observing the spectrum of G(u, v) on a display.
 After a particular filter has been selected, the corresponding pattern in the spatial
domain is obtained from the expression

η(x, y) = F⁻¹{ HNP(u, v) G(u, v) }

 Because the corrupted image is assumed to be formed by the addition of the
uncorrupted image f(x, y) and the interference, if η(x, y) were known completely,
subtracting the pattern from g(x, y) to obtain f(x, y) would be a simple matter.
 The problem, of course, is that this filtering procedure usually yields only an
approximation of the true pattern.


6. Explain the image restoration using inverse filtering. What are its limitations?
(or)Describe inverse filtering for removal of blur caused by any motion and describe how
it restore the image. (Nov 2010, April 2010, May-2014& May 2015)

Inverse filtering:-
Inverse filtering is the process of recovering the input of a system from its output. It is useful
for precorrecting an input signal in anticipation of the degradation caused by the system, such
as correcting a nonlinearity of the display.
The inverse filter divides the transform of the degraded image by the degradation function.
For unconstrained restoration we know that g = Hf + η, so in the frequency domain

G(u, v) = H(u, v) F(u, v) + N(u, v)

The restored image is given by

F̂(u, v) = G(u, v) / H(u, v)

To find the original image, substitute for G(u, v):

F̂(u, v) = F(u, v) + N(u, v) / H(u, v)

Drawbacks:
Inverse filtering is highly sensitive to noise.

Zero or Small value problem:-


If the degradation function H(u, v) has a zero or small value, then the ratio
N(u, v)/H(u, v) dominates the value of the restored image.
This implies poor performance of the system and results in a bad approximation of the original
image function. This is known as the zero or small value problem.


The above drawbacks can be overcome by limiting the filter frequencies to values around the
origin. This decreases the probability of encountering zero values of the degradation function.

7. Explain the function of Wiener filter for image restoration in presence of additive noise?
(or) Explain the principle of least square filter and state its limitation. (Nov 2012, June
2011, May 2013, May-2014) (or) minimum mean square error filtering?(Nov 2014 & May
2015)

Wiener filtering or LMS filter – Least Mean square filtering:


For the restoration of an image, this method considers the degradation function as well as
statistical properties of noise.

Objective:
The objective is to approximate the original image in such a way that the mean square error
between the original and the approximated image is minimized.

LMS Value:

Where,
f – Original image.
𝑓̂- Restored image.
Assumptions:
(i). The image of noise are uncorrelated (no relation).
(ii). Either image or the noise has zero mean.
(iii). Approximated gray level for a linear function of degraded gray level.
The approximated image is given by

F̂(u, v) = [ H*(u, v) Sf(u, v) / ( Sf(u, v) |H(u, v)|² + Sŋ(u, v) ) ] G(u, v)

Where,
H* (u, v) – conjugate of H (u, v)

Sf (u, v) – power spectrum of the original image.
Sŋ (u, v) – power density spectrum of the noise.
H (u, v) – degradation function (linear operator).

Multiplying and dividing by H(u, v), the Wiener filtering equation is

F̂(u, v) = [ 1/H(u, v) · |H(u, v)|² / ( |H(u, v)|² + Sŋ(u, v)/Sf(u, v) ) ] G(u, v)
Case 1:

If noise = 0, then Sŋ(u, v) = 0 and the Wiener filter reduces to the inverse filter:

F̂(u, v) = G(u, v) / H(u, v)

The signal-to-noise ratio can be approximated using frequency domain quantities as

SNR = Σu Σv |F(u, v)|² / Σu Σv |N(u, v)|²

The mean square error can also be approximated in terms of a summation involving the
original and restored images:

MSE = (1/MN) Σx Σy [ f(x, y) − 𝑓̂(x, y) ]²

Considering the restored image to be "signal" and the difference between this image and the
original to be noise, the signal-to-noise ratio in the spatial domain is

SNR = Σx Σy 𝑓̂(x, y)² / Σx Σy [ f(x, y) − 𝑓̂(x, y) ]²


Case 2:

If the noise power spectrum is unknown, the ratio Sŋ(u, v)/Sf(u, v) is replaced by a constant
K, chosen interactively:

F̂(u, v) = [ 1/H(u, v) · |H(u, v)|² / ( |H(u, v)|² + K ) ] G(u, v)

Advantages of Wiener filtering over inverse filtering:

(i). Wiener filtering has no zero or small value problem.
(ii). The results obtained with Wiener filtering are closer to the original image than those of
inverse filtering.

Disadvantages: It requires the power spectra of the undegraded image and of the noise to be
known, which makes the implementation more difficult.
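Case 2 above (noise statistics unknown, ratio replaced by a constant K) can be sketched as follows. The function name, the value of K and the toy spectra are illustrative assumptions; note that 1/H · |H|²/(|H|² + K) simplifies to H*/(|H|² + K), which needs no division by H at all.

```python
# Minimal sketch of the Wiener filter for Case 2 (noise statistics unknown),
# where Sn/Sf is replaced by a constant K. Names and K are illustrative.

def wiener_filter(G, H, K=0.01):
    """F_hat(u) = [H*(u) / (|H(u)|^2 + K)] * G(u), algebraically equal to
    (1/H) * |H|^2 / (|H|^2 + K) wherever H(u) != 0."""
    F_hat = []
    for g, h in zip(G, H):
        h = complex(h)
        F_hat.append(h.conjugate() / (abs(h) ** 2 + K) * g)
    return F_hat

# Unlike the inverse filter, a zero in H causes no division by zero:
G = [4.0, 1.0, 0.5]
H = [1.0, 0.5, 0.0]
print(wiener_filter(G, H))   # third value is simply 0, not infinity
```

This illustrates the first advantage listed above: the zero/small-value problem disappears because the denominator is always at least K.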

8. What is edge detection? Describe in detail about the types of edge detection
operation?(or) Explain how edges of different orientations can be detected in an
image?[NOV 2011& NOV 2012] &[Nov 2014]

Edge detection is used for determining discontinuities in gray level.

Edge:
An edge is a boundary between two regions with relatively distinct gray-level properties. It is
a set of connected pixels that lie on the boundary between two regions.

1 .Ideal digital edge model:


An ideal image edge is a set of connected pixels (in the vertical direction here), each located
at an orthogonal step transition in gray level.
In practice, edges are blurred due to optics, sampling and other image acquisition
imperfections, so they appear as ramps rather than steps.

 The slope of the ramp is inversely proportional to the degree of blurring in the edge.
 The thickness of the edge is determined by the length of the ramp; the length is determined
by the slope, which in turn is determined by the degree of blurring. Blurred edges tend to be
thick; sharp edges tend to be thin.
Use of Operator:
1. First derivative operator-Gradient.
2. Second derivative operator-Laplacian

Edge detection in the first derivative image:

The first derivative of the gray-level profile is positive at the leading edge of a transition,
negative at the trailing edge, and zero in areas of constant gray level.
The magnitude of the first derivative is used to identify the presence of an edge in the image.

Edge detection in the second derivative image:

The second derivative is positive for the part of the transition on the dark side of the edge,
zero for points exactly on the edge, and negative for the part on the light side.
The sign of the second derivative is used to find whether an edge pixel lies on the lighter
side or the darker side.

Gradient operators:
The gradient of an image f(x, y) at location (x, y) is defined as the vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

The magnitude of the gradient vector ∇f is given by

|∇f| = [Gx² + Gy²]^(1/2) ≈ |Gx| + |Gy|

∇f is commonly known as the gradient.


The direction of the gradient vector is given by

α(x, y) = tan⁻¹(Gy / Gx)
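A small numeric check of the magnitude and direction formulas (the sample values of Gx and Gy are made up; `atan2` is used so the quadrant of the direction is handled correctly):

```python
# Illustrative evaluation of gradient magnitude and direction.
import math

Gx, Gy = 3.0, 4.0
magnitude = math.hypot(Gx, Gy)                # sqrt(Gx^2 + Gy^2) = 5.0
approx    = abs(Gx) + abs(Gy)                 # cheaper |Gx| + |Gy| = 7.0
alpha     = math.degrees(math.atan2(Gy, Gx))  # direction ~ 53.13 degrees
print(magnitude, approx, round(alpha, 2))
```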

There are different types of gradient operators:-

(i). Prewitt gradient operator:

Generally a 3×3 subimage is taken as below:

P1 P2 P3
P4 P5 P6
P7 P8 P9

Prewitt mask for horizontal edge detection:

-1 -1 -1
 0  0  0
 1  1  1

Prewitt mask for vertical edge detection:

-1 0 1
-1 0 1
-1 0 1

Horizontal edge detection:
Gx = (P7 + P8 + P9) – (P1 + P2 + P3)
Vertical edge detection:
Gy = (P3 + P6 + P9) – (P1 + P4 + P7)
where P1 … P9 are the pixel values in the subimage.

(ii). Sobel gradient operator:

P1 P2 P3
P4 P5 P6
P7 P8 P9
(3×3 subimage)

Sobel mask for horizontal edge detection:

-1 -2 -1
 0  0  0
 1  2  1

Sobel mask for vertical edge detection:

-1 0 1
-2 0 2
-1 0 1

Horizontal edge detection:
Gx = (P7 + 2P8 + P9) – (P1 + 2P2 + P3)
Vertical edge detection:
Gy = (P3 + 2P6 + P9) – (P1 + 2P4 + P7)
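The Sobel expressions can be sketched on a single 3×3 neighbourhood given as a flat list. The function name and example values are illustrative, not from the text:

```python
# Apply the Sobel masks to one 3x3 neighbourhood [P1..P9] (row by row).

def sobel_at(p):
    P1, P2, P3, P4, P5, P6, P7, P8, P9 = p
    Gx = (P7 + 2*P8 + P9) - (P1 + 2*P2 + P3)   # horizontal edge response
    Gy = (P3 + 2*P6 + P9) - (P1 + 2*P4 + P7)   # vertical edge response
    mag = abs(Gx) + abs(Gy)                    # |grad| ~ |Gx| + |Gy|
    return Gx, Gy, mag

# A horizontal step edge (dark rows above, bright row below):
print(sobel_at([0, 0, 0,
                0, 0, 0,
                1, 1, 1]))   # (4, 0, 4): strong horizontal response
```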

(iii). Roberts gradient operator:

Gx mask:
-1 0
 0 1

Gy mask:
 0 -1
 1  0

Gx = P9 – P5
Gy = P8 – P6


Second derivative operator:

(iv). Laplacian operator:

The second derivative is
1. Positive at the transition associated with the dark side of the edge.
2. Negative at the transition associated with the bright/light side of the edge.

Uses:
1. It is used to distinguish the dark and bright sides of an edge.

The Laplacian of a 2-D function f(x, y) is the second order derivative given by

∇²f = ∂²f/∂x² + ∂²f/∂y²

The Laplacian response is sensitive to noise.


P1 P2 P3

P4 P5 P6

P7 P8 P9
(3×3 subimage)

Laplacian mask:

 0 -1  0
-1  4 -1
 0 -1  0

For a 3×3 neighbourhood, the Laplacian with this mask is

∇²f = 4P5 – (P2 + P4 + P6 + P8)

The coefficient associated with the center pixel is positive and those of its 4-neighbours are
negative; the sum of the coefficients is zero.

Drawbacks:
1. It is highly sensitive to noise.
2. Its magnitude produces double edges, which makes segmentation difficult.
3. It is unable to detect the direction of an edge.

Laplacian can be used for two purposes:


1. To find whether the pixel is on the dark / light side of an edge.
2. To find edge location using zero crossing property.
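With the mask shown above, the Laplacian at the centre pixel reduces to 4·P5 − (P2 + P4 + P6 + P8). A minimal sketch (function name and sample neighbourhoods are illustrative); the response is zero in constant regions and changes sign across an edge, which is the zero-crossing property used for edge location:

```python
# Apply the 3x3 Laplacian mask [0 -1 0; -1 4 -1; 0 -1 0] to one
# neighbourhood [P1..P9] given row by row.

def laplacian_at(p):
    P1, P2, P3, P4, P5, P6, P7, P8, P9 = p
    return 4*P5 - (P2 + P4 + P6 + P8)

flat = [5, 5, 5, 5, 5, 5, 5, 5, 5]     # constant gray level
edge = [0, 0, 9, 0, 0, 9, 0, 0, 9]     # vertical step edge (centre on dark side)
print(laplacian_at(flat), laplacian_at(edge))   # 0 in flat areas, nonzero at edges
```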


9. Explain in detail about the process of edge linking and boundary detection. (Or) How
do you link edge pixels through global processing? (May/June-2013) (Or) How do you link
edge pixels through the Hough transform? [May 2015]

Linking the edges of an image is done by edge linking and boundary detection.

The following techniques are used for edge linking and boundary detection.
1). Local processing.
2). Global processing using Hough transform.
3). Global processing using graph theoretic approach.

1) Local processing:-
One method for linking edges is to analyze the characteristics of pixels in a small
neighborhood about every point in an image.

Two properties used in this analysis are:

(i). the strength of the response of the gradient operator used to produce the edge pixel.
(ii). the direction of the gradient vector.

2) Global processing via Hough transform:-

Edge linking via Hough transform:

In this method, the global relationship between pixels is considered: points are linked by
first determining whether they lie on a curve (or line) of a specified shape.

Finding straight-line points:
Let there be n points in an image; the subsets of points that lie on straight lines are to be
found.

Method 1:
First find all lines determined by every pair of points, then find all subsets of points that are
close to a particular line.

Method 2:
 This is an alternative approach to method 1, referred to as the Hough transform.
 Consider a point (xi, yi); the general equation of a straight line is yi = a·xi + b.
 Infinitely many lines pass through (xi, yi) for different values of a and b.
 A second point (xj, yj) also has a line in parameter space; this line intersects the line
associated with (xi, yi).


There are ‘n’ points in an image, and we have to find the subsets of points that lie on
straight lines.
One possible solution is to first find all lines determined by every pair of points and then
find all subsets of points that are close to particular lines.

 This involves finding n(n − 1)/2 ≈ n² lines and then performing n · n(n − 1)/2 ≈ n³
comparisons of every point to all lines, which is computationally prohibitive.
 Consider a point (xi, yi) and the general equation of a straight line in slope–intercept form,
yi = a·xi + b.
 Infinitely many lines pass through (xi, yi), but all satisfy yi = a·xi + b for varying a and b.

 However, writing this equation as b = −a·xi + yi and considering the ab-plane (parameter
space) yields the equation of a single line for the fixed pair (xi, yi). Further, a second point
(xj, yj) also has a line in parameter space associated with it, and this line intersects the line
associated with (xi, yi) at (a′, b′)
[where a′ is the slope and b′ the intercept of the line containing both (xi, yi) and (xj, yj) in
the xy-plane].

The computational attractiveness of the Hough transform arises from subdividing the
parameter space, between expected limits (amin, amax) and (bmin, bmax), into accumulator
cells.


Accumulator cells:
An important property of the Hough transform is that the parameter space can be subdivided
into cells, referred to as accumulator cells.

Procedure:
1) Compute the gradient of the image and threshold it.
2) Specify subdivisions (accumulator cells) in the ab-plane.
3) Examine the counts of the accumulator cells for high pixel concentrations.
4) Examine the relationship (e.g. continuity) between the pixels in a chosen cell.

Applications:
1) In addition to straight lines, the Hough transform can be applied to other parametric
curves (e.g. circles).
2) The generalized Hough transform can be used to detect curves without any simple
analytical representation.
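The accumulator-cell idea can be sketched with a toy Hough transform in slope–intercept form: every point votes for each (a, b) cell consistent with y = a·x + b, and a cell with a high count corresponds to a line containing many points. The point set and the discrete slope grid below are illustrative assumptions:

```python
# Toy Hough transform in (slope, intercept) parameter space.
from collections import Counter

def hough_lines(points, slopes):
    acc = Counter()
    for x, y in points:
        for a in slopes:
            b = y - a * x          # the line b = -a*x + y in ab-space
            acc[(a, b)] += 1       # increment the accumulator cell
    return acc

pts = [(0, 1), (1, 3), (2, 5), (0, 0)]      # first three lie on y = 2x + 1
acc = hough_lines(pts, slopes=[-1, 0, 1, 2])
print(acc.most_common(1))                   # [((2, 1), 3)]
```

The cell (a, b) = (2, 1) collects three votes, recovering the line y = 2x + 1 that the three collinear points lie on.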

10. Explain in detail about the region growing process. [APRIL 2010, May-2013] (or)
Explain region based segmentation and region growing with an example. [May 2015]

Region: Similar pixels are grouped together to form regions.

The process of dividing an image into smaller regions is based on some predefined rules.

Basic rules for segmentation:-


Let R be the entire image region; R is subdivided into n sub-regions R1, R2, …, Rn such
that:

(1). ⋃ᵢ₌₁ⁿ Ri = R; the segmentation must be complete, i.e., every pixel must belong to some
region.
(2). Ri is a connected region for i = 1, 2, …, n; all the pixels in a region must be connected.
(3). Ri ∩ Rk = Ф for all i and k, where i ≠ k; the regions must be disjoint, so no pixel may be
present in two different regions.
(4). P(Ri) = TRUE for i = 1, 2, …, n; e.g., all pixels in region Ri have the same intensity level.
(5). P(Ri ∪ Rk) = FALSE for i ≠ k; Ri and Rk are different, so the pixels in two adjacent
regions do not satisfy the same predicate.

Steps:
1. Find seed points.
2. Find the similarity pixels.
3. Find the stopping point.


 Region growing is a process of grouping pixels or sub-regions into larger regions based on
pre-defined similarity criteria.
 This approach starts with a set of starting points known as “seed points”. The neighbouring
pixels having properties similar to a seed are added to that seed, and thus new regions are
grown.

Similarity criteria:
It is based on intensity value, texture, color, size, and shape of the region being grown.

1. Selection of seed points:

When no prior information is available, compute at every pixel the same set of properties
that will be used to assign pixels to regions. If the result of these computations forms
clusters (groups), then the pixels whose properties lie near the centroid of a cluster can be
selected as seed points (starting points).

2. Selection of similarity criteria:

For color images the problem is more difficult without prior information.
For monochrome images, descriptors (such as gray level and texture) together with
connectivity properties should be considered for the similarity criteria.

3. Formulation of a stopping rule:

Region growing should stop when there are no more pixels that satisfy the similarity
criteria for the particular region.

Region growing by pixel aggregation:-

A set of seed points is used as the starting point. Regions grow by appending to each seed
the neighbouring pixels that have similar properties.
 The algorithm starts with the selection of a seed pixel and a desired predicate P(Ri) for
comparison with neighbouring pixels.
 The eight neighbouring pixels of the seed pixel are assigned to region R1 if they meet the
predicate requirement.
 This process of comparing the eight neighbouring pixels continues until no further pixels
are added to the region; region R1 is then completely defined.
 A new seed pixel is located within another area of the image and the same procedure is
repeated until all pixels defining R2 are determined.
 The same procedure is repeated until all regions have been defined.
 A seed pixel is located by scanning the image for the pixel with the maximum or minimum
gray level.

For eg:
Let us consider 5x5 image pixels for the threshold value of 3 (Th = 3)


1 0 5 6 7
1 0 5 5 5
0 1 6 7 6
1 2 5 6 7
1 1 7 6 5

 The values inside the boxes are gray levels. The two seeds are 2 and 7.
 The location of 2 is (4, 2) and the location of 7 is (3, 4).
 These are the starting points, from which the regions are grown.
Here Th = 3: the absolute difference between the gray level of a pixel and the gray level of
the seed must be less than the threshold value for the pixel to join the region.

R1 R1 R2 R2 R2
R1 R1 R2 R2 R2
R1 R1 R2 R2 R2
R1 R1 R2 R2 R2
R1 R1 R2 R2 R2

Region 1 Region 2
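The pixel-aggregation steps can be sketched as a small flood-fill style routine. This sketch uses 4-connectivity for simplicity (the text describes 8-neighbours); the image, seed and threshold below are illustrative:

```python
# Region growing from one seed: add connected neighbours whose gray level
# differs from the seed's by less than the threshold.

def region_grow(img, seed, th):
    rows, cols = len(img), len(img[0])
    sr, sc = seed
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region:
            continue
        if abs(img[r][c] - img[sr][sc]) < th:     # similarity criterion
            region.add((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    stack.append((nr, nc))
    return region

img = [[1, 0, 5],
       [1, 0, 5],
       [1, 2, 7]]
print(sorted(region_grow(img, seed=(0, 0), th=3)))   # dark cells only
```

Starting from the dark seed (gray level 1), the region stops growing at the bright column (values 5 and 7), exactly as in the worked 5×5 example above.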

11. Explain the principle of region splitting and merging in detail. [APRIL 2010, NOV
2011, May-2013, NOV 2012]
 Region splitting and merging is a segmentation process in which an image is initially
subdivided into quadrants, and the regions are then merged or split further to satisfy the
basic conditions of segmentation.
 In this technique, the image is divided into various sub-images of disjoint regions, and the
connected regions are then merged together.
 Let R be the entire region of the image.
 The predicate P(Ri) is used to check the condition: if P(R) = FALSE, the image is divided
into quadrants R1, R2, R3, R4; if P of a quadrant is FALSE, that quadrant is further divided
into sub-quadrants, and so on.

R11 R12 | R2
R13 R14 |
--------+----
  R3    | R4

Region R1 is divided into quadrants R11, R12, R13 and R14.


Quad tree:
A quad tree is a tree in which each node has exactly four descendants (followers).
This is shown in the quadrant representation below.

Example for the split & merge algorithm:

Initially the image is divided into 4 regions R1, R2, R3 & R4.

 The regions R1, R2, R3, R4 are divided into 16 sub-regions R11 … R44 (in the figure, the
entries marked ××× denote object pixels).
 Within region R2, sub-region R23 can be further subdivided into R231, R232, R233 and
R234.


Quad tree representation is shown below:-

Let R, the root, represent the entire original image.

 At the next level, the original image is divided into 4 non-overlapping regions; the level
after that is the subdivision of these 4 regions into 4 smaller regions each.
 The final level shows the subdivision of R23 into 4 regions.
 The process of splitting an image into small regions and then merging connected regions
together is known as region segmentation by splitting and merging.

Advantages:
It uses the same quad tree for splitting and merging.
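The splitting half of the algorithm can be sketched with a recursive quadtree routine. The uniformity predicate P(R) used here (max − min ≤ max_range) and the 4×4 sample image are illustrative assumptions, and the subsequent merging step is omitted:

```python
# Quadtree splitting: a square block is split into four quadrants whenever
# the predicate P(R) is false. Leaves are returned as (row, col, size).

def split(img, r0, c0, size, max_range=0):
    vals = [img[r][c] for r in range(r0, r0 + size)
                      for c in range(c0, c0 + size)]
    if size == 1 or max(vals) - min(vals) <= max_range:   # P(R) is true
        return [(r0, c0, size)]                           # leaf region
    half = size // 2
    leaves = []
    for dr in (0, half):                                  # four quadrants
        for dc in (0, half):
            leaves += split(img, r0 + dr, c0 + dc, half, max_range)
    return leaves

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 9, 2],
       [3, 3, 2, 2]]
print(split(img, 0, 0, 4))   # 3 uniform quadrants + 4 single pixels = 7 leaves
```

Only the bottom-right quadrant is non-uniform (it contains the outlier 9), so it alone is split down to single pixels; the other three quadrants remain whole.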

12. Explain in detail about segmentation by morphological watersheds. (or) Write the
morphological concepts applicable to image processing. [NOV 2011] & [Nov 2014]

Image segmentation is based on three principal concepts:-

1). Detection of discontinuities
2). Thresholding
3). Region processing

Morphological watershed image segmentation embodies many of the concepts of the above
three approaches and often produces more stable segmentations, including continuous
segmentation boundaries.


It provides a simple framework for incorporating knowledge based constraints.

Basic concept:-
The image is visualized in 3 dimensions:
1). Two spatial dimensions.
2). Gray level.
Any gray-tone image may be considered a topographic surface.

The topographic interpretation considers 3 types of points:

1). Points belonging to a regional minimum.
2). Points forming catchment basins.
3). Points forming divide lines or watershed lines.

The main aim of the segmentation algorithm based on this concept is to find watershed lines.

Catchment basins:-
Points at which a drop of water, if placed at the location of any of those points, would fall to
a single minimum form a catchment basin.

Watershed lines:-
 Points at which a drop of water would be equally likely to fall to more than one minimum
form the watershed lines.
 A dam is built to prevent the rising water from distinct catchment basins from merging.
 Eventually only the tops of the dams are visible above the water line.
 These dam boundaries correspond to the divide lines of the watersheds.
 They are the connected boundaries extracted by the watershed segmentation algorithm.
 To prevent water from spilling over, the dams are built higher than the maximum water
level; the value of this height is determined by the highest possible gray-level value in the
input image.

13. Explain Erosion and Dilation in morphological processing.

These operations are fundamental to morphological processing.

(i) Erosion:

With A and B as sets in Z2, the erosion of A by B, denoted A Ɵ B, is defined as

A Ɵ B = { z | (B)z ⊆ A }          (9.2-1)

 In words, this equation indicates that the erosion of A by B is the set of all points z such
that B, translated by z , is contained in A. In the following discussion, set B is assumed to be a
structuring element.

 The statement that B has to be contained in A is equivalent to B not sharing any common
elements with the background, so we can express erosion in the following equivalent form:

A Ɵ B = { z | (B)z ∩ Aᶜ = Ø }          (9.2-2)

 where Aᶜ is the complement of A and Ø is the empty set.

 Figure 9.4 shows an example of erosion.

 The elements of A and B are shown shaded and the background is white.
 The solid boundary in Fig. 9.4(c) is the limit beyond which further displacements of the
origin of B would cause the structuring element to cease being completely contained in A.
 Thus, the locus of points (locations of the origin of B ) within (and including) this
boundary, constitutes the erosion of A by B.
 We show the erosion shaded in Fig. 9.4(c).
 Keep in mind that erosion is simply the set of values of z that satisfy Eq. (9.2-1) or
(9.2-2).
 The boundary of set A is shown dashed in Figs. 9.4(c) and (e) only as a reference; it is not
part of the erosion operation.
 Figure 9.4(d) shows an elongated structuring element, and Fig. 9.4(e) shows the erosion
of A by this element. Note that the original set was eroded to a line.


 Equations (9.2-1) and (9.2-2) are not the only definitions of erosion (see Problems 9.9
and 9.10 for two additional, equivalent definitions.)
 However, these equations have the distinct advantage over other formulations in that they
are more intuitive when the structuring element B is viewed as a spatial mask (see Section
3.4.1).

Thus erosion shrinks or thins objects in a binary image. In fact, we can view erosion as a
morphological filtering operation in which image details smaller than the structuring element
are filtered (removed) from the image. In Fig. 9.5, erosion performed the function of a "line
filter."

(ii) Dilation

With A and B as sets in Z2, the dilation of A by B, denoted A ⊕ B, is defined as

A ⊕ B = { z | (B̂)z ∩ A ≠ Ø }          (9.2-3)
 This equation is based on reflecting B about its origin, and shifting this reflection by z
(see Fig. 9.1).
 The dilation of A by B is then the set of all displacements z such that B̂ and A overlap by
at least one element. Based on this interpretation, Eq. (9.2-3) can be written equivalently as

A ⊕ B = { z | [ (B̂)z ∩ A ] ⊆ A }          (9.2-4)
 Equations (9.2-3) and (9.2-4) are not the only definitions of dilation currently in use (see
Problems 9.11 and 9.12 for two different, yet equivalent, definitions).
 However, the preceding definitions have a distinct advantage over other formulations in
that they are more intuitive when the structuring element B is viewed as a convolution mask.
 The basic process of flipping (rotating) B about its origin and then successively
displacing it so that it slides over set (image) A is analogous to spatial convolution, as
introduced in Section 3.4.2.
 Keep in mind, however, that dilation is based on set operations and therefore is a
nonlinear operation, whereas convolution is a linear operation.
 Unlike erosion, which is a shrinking or thinning operation, dilation "grows" or "thickens"
objects in a binary image.
 The specific manner and extent of this thickening is controlled by the shape of the
structuring element used.
 Figure 9.6(a) shows the same set used in Fig. 9.4, and Fig. 9.6(b) shows a structuring
element (in this case B̂ = B because the SE is symmetric about its origin).


 The dashed line in Fig. 9.6(c) shows the original set for reference, and the solid line
shows the limit beyond which any further displacements of the origin of B by z would cause the
intersection of B and A to be empty.
 Therefore, all points on and inside this boundary constitute the dilation of A by B.
 Figure 9.6(d) shows a structuring element designed to achieve more dilation vertically
than horizontally, and Fig. 9.6(e) shows the dilation achieved with this element.

14. A blur filter h(m,n) is given by

h(m, n) = [ 0.1   0.1   0.1   0
            0.1   0.1   0.1   0.1
            0.05  0.1   0.1   0.05
            0     0.05  0.05  0   ]

Find the deblur filter using inverse filtering. (Nov-2013)

Solution:
Find the Fourier transform: H(k, l) = (4×4 DFT kernel) × h(m, n) × (DFT kernel)ᵀ


H(k, l) =

[ 1  1  1  1 ]   [ 0.1   0.1   0.1   0    ]   [ 1  1  1  1 ]
[ 1 −j −1  j ] × [ 0.1   0.1   0.1   0.1  ] × [ 1 −j −1  j ]
[ 1 −1  1 −1 ]   [ 0.05  0.1   0.1   0.05 ]   [ 1 −1  1 −1 ]
[ 1  j −1 −j ]   [ 0     0.05  0.05  0    ]   [ 1  j −1 −j ]

H(k, l) =

[  1            −0.2 − 0.2j   0   −0.2 + 0.2j ]
[ −0.1 − 0.3j   −0.1j         0   −0.1        ]
[  0            −0.1 − 0.1j   0   −0.1 + 0.1j ]
[ −0.1 + 0.3j   −0.1          0    0.1j       ]

The deblur filter given by inverse filtering is G(k, l) = 1 / H(k, l):

[  1          −2.5 + 2.5j   ∞   −2.5 − 2.5j ]
[ −1 + 3j      10j          ∞   −10         ]
[  ∞          −5 + 5j       ∞   −5 − 5j     ]
[ −1 − 3j     −10           ∞   −10j        ]

(The ∞ entries correspond to zeros of H(k, l), illustrating the zero-value problem of inverse
filtering.)

15. Compare restoration with image enhancement. (Nov 2014) (8 marks)

S.NO | IMAGE ENHANCEMENT | IMAGE RESTORATION
1. | It is a subjective process. | It is an objective process based on sound mathematical principles.
2. | It involves only cosmetic changes in brightness and contrast. | It requires modeling of the degradation.
3. | It is often a trial-and-error process; the enhancement procedure is heuristic. | The restoration algorithm is well defined.
4. | The procedure is very simple. | The procedure is complex.
5. | It increases the quality of an image. | It is related to image enhancement.
6. | Does not need prior information about the degradation. | Requires prior knowledge about the degradation.
7. | Computational cost is low. | Computational cost is high.
8. | Identifying and analysing degraded pixels is easy. | Identifying and analysing degraded pixels is difficult.
9. | Losses are minimal. | Losses are high compared to enhancement.
