Restoration Filters
The distortion correction equations yield noninteger values for x' and y'. Because the
distorted image g is digital, its pixel values are defined only at integer coordinates. Thus, using
noninteger values for x' and y' causes a mapping into locations of g for which no gray levels are
defined. It then becomes necessary to infer what the gray-level values at those locations should be,
based only on the pixel values at integer coordinate locations. The technique used to accomplish
this is called gray-level interpolation.
The simplest scheme for gray-level interpolation is based on a nearest neighbor approach.
This method, also called zero-order interpolation, is illustrated in Fig. 6.1. This figure shows:
(A) the mapping of integer (x, y) coordinates into fractional coordinates (x', y') by means of the spatial transformation (distortion correction) equations;
(B) the selection of the closest integer coordinate neighbor to (x', y'); and
(C) the assignment of the gray level of this nearest neighbor to the pixel located at (x, y).
Although nearest neighbor interpolation is simple to implement, this method often has the
drawback of producing undesirable artifacts, such as distortion of straight edges in images of high
resolution. Smoother results can be obtained by using more sophisticated techniques, such as
cubic convolution interpolation, which fits a surface of the sin(z)/z type through a much larger
number of neighbors (say, 16) in order to obtain a smooth estimate of the gray level at any
desired point. Typical areas in which smoother approximations generally are required include 3-D
graphics and medical imaging. The price paid for smoother approximations is additional
computational burden. For general-purpose image processing a bilinear interpolation approach
that uses the gray levels of the four nearest neighbors usually is adequate. This approach is
straightforward. Because the gray level of each of the four integral nearest neighbors of a
nonintegral pair of coordinates (x', y') is known, the gray-level value at these coordinates, denoted
v(x', y'), can be interpolated from the values of its neighbors by using the relationship
v(x', y') = ax' + by' + cx'y' + d
where the four coefficients are determined from the four equations in four unknowns that can be
written using the four known neighbors of (x', y').
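As a concrete illustration, the following is a minimal Python/NumPy sketch of nearest-neighbor and bilinear gray-level interpolation at a fractional coordinate (x', y'); the function names are illustrative, and a single-channel image stored as a 2-D array is assumed.

```python
import numpy as np

def nearest_neighbor(g, xp, yp):
    """Zero-order interpolation: take the gray level of the closest integer neighbor."""
    return g[int(round(yp)), int(round(xp))]

def bilinear(g, xp, yp):
    """Bilinear interpolation v(x', y') = a*x' + b*y' + c*x'*y' + d,
    fitted to the four integer neighbors of (x', y')."""
    c0, r0 = int(np.floor(xp)), int(np.floor(yp))
    dx, dy = xp - c0, yp - r0
    v00 = g[r0, c0]          # top-left neighbor
    v01 = g[r0, c0 + 1]      # top-right neighbor
    v10 = g[r0 + 1, c0]      # bottom-left neighbor
    v11 = g[r0 + 1, c0 + 1]  # bottom-right neighbor
    return (v00 * (1 - dx) * (1 - dy) + v01 * dx * (1 - dy)
            + v10 * (1 - dx) * dy + v11 * dx * dy)

# Example: interpolate a small test image at a fractional location
g = np.arange(16, dtype=float).reshape(4, 4)
print(nearest_neighbor(g, 1.3, 2.6), bilinear(g, 1.3, 2.6))
```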
Wiener filtering (minimum mean square error filtering).
The objective of Wiener filtering is to find an estimate f̂ of the uncorrupted image f such that the
mean square error between them is minimized. This error measure is given by
e² = E{(f − f̂)²}
where E{•} is the expected value of the argument. It is assumed that the noise and the image are
uncorrelated; that one or the other has zero mean; and that the gray levels in the estimate are a
linear function of the levels in the degraded image. Based on these conditions, the minimum of
the error function is given in the frequency domain by the expression
F̂(u, v) = [ (1/H(u, v)) · (|H(u, v)|² / (|H(u, v)|² + Sη(u, v)/Sf(u, v))) ] G(u, v)
where we used the fact that the product of a complex quantity with its conjugate is equal to the
magnitude of the complex quantity squared. This result is known as the Wiener filter, after N.
Wiener [1942], who first proposed the concept in the year shown. The filter, which consists of the
terms inside the brackets, also is commonly referred to as the minimum mean square error filter or
the least square error filter. The Wiener filter does not have the same problem as the inverse filter
with zeros in the degradation function, unless both H(u, v) and Sη(u, v) are zero for the same
value(s) of u and v.
The terms in the above equation are as follows:
H(u, v) = the degradation function
H*(u, v) = the complex conjugate of H(u, v)
|H(u, v)|² = H*(u, v) H(u, v)
Sη(u, v) = |N(u, v)|² = the power spectrum of the noise
Sf(u, v) = |F(u, v)|² = the power spectrum of the undegraded image
As before, H(u, v) is the transform of the degradation function and G(u, v) is the
transform of the degraded image. The restored image in the spatial domain is given by the inverse
Fourier transform of the frequency-domain estimate F̂(u, v). Note that if the noise is zero, then the
noise power spectrum vanishes and the Wiener filter reduces to the inverse filter.
When we are dealing with spectrally white noise, the spectrum |N(u, v)|² is a constant,
which simplifies things considerably. However, the power spectrum of the undegraded image
seldom is known. An approach used frequently when these quantities are not known or cannot be
estimated is to approximate the expression as
F̂(u, v) = [ (1/H(u, v)) · (|H(u, v)|² / (|H(u, v)|² + K)) ] G(u, v)
where K is a specified constant.
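The constant-K form is easy to prototype. Below is a minimal Python/NumPy sketch; the function name wiener_filter, the assumption that the degradation kernel h is known (with its origin at the top-left sample), and the default value of K are all illustrative.

```python
import numpy as np

def wiener_filter(g, h, K=0.01):
    """Parametric Wiener filter: Fhat = [conj(H) / (|H|^2 + K)] * G.
    g: degraded image (2-D array), h: degradation kernel (origin at top-left),
    K: constant approximating the noise-to-signal power ratio."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    # conj(H) / (|H|^2 + K) equals (1/H) * |H|^2 / (|H|^2 + K) wherever H != 0
    Fhat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    return np.real(np.fft.ifft2(Fhat))
```

In practice K is chosen interactively: it is increased until noise amplification is suppressed, at the cost of some sharpness.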
Restoration filters used when the image degradation is due to noise only.
If the degradation present in an image is due to noise only, the following filters can be used:
1. Mean filters
2. Order-statistic filters
3. Adaptive filters
Mean filters.
There are four types of mean filters:
(i) Arithmetic mean filter
(ii) Geometric mean filter
(iii) Harmonic mean filter
(iv) Contraharmonic mean filter
(i) Arithmetic mean filter
This is the simplest of the mean filters. Let Sxy represent the set of coordinates in a
rectangular subimage window of size m x n, centered at point (x, y). The arithmetic mean filtering
process computes the average value of the corrupted image g(x, y) in the area defined by Sxy. The
value of the restored image f̂ at any point (x, y) is simply the arithmetic mean computed using the
pixels in the region defined by Sxy. In other words,
f̂(x, y) = (1/mn) Σ(s,t)∈Sxy g(s, t)
This operation can be implemented using a convolution mask in which all coefficients have
value 1/mn.
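A minimal sketch of this convolution-mask implementation in Python (using scipy.ndimage.uniform_filter, which averages over an m x n window; the function name is illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean_filter(g, m=3, n=3):
    """Average g over an m x n window; equivalent to convolving with a mask of 1/(m*n)."""
    return uniform_filter(g.astype(float), size=(m, n))
```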
(ii) Geometric mean filter
Here, each restored pixel is given by the product of the pixels in the subimage window, raised to
the power 1/mn:
f̂(x, y) = [ Π(s,t)∈Sxy g(s, t) ]^(1/mn)
A geometric mean filter achieves smoothing comparable to the arithmetic mean
filter, but it tends to lose less image detail in the process.
(iii) Harmonic mean filter
The harmonic mean filtering operation is given by the expression
f̂(x, y) = mn / Σ(s,t)∈Sxy [1 / g(s, t)]
The harmonic mean filter works well for salt noise, but fails for pepper noise. It does well also
with other types of noise like Gaussian noise.
(iv) Contraharmonic mean filter
The contraharmonic mean filtering operation yields a restored image based on the expression
f̂(x, y) = Σ(s,t)∈Sxy g(s, t)^(Q+1) / Σ(s,t)∈Sxy g(s, t)^Q
where Q is called the order of the filter. This filter is well suited for reducing or virtually
eliminating the effects of salt-and-pepper noise. For positive values of Q, the filter eliminates
pepper noise. For negative values of Q it eliminates salt noise. It cannot do both simultaneously.
Note that the contraharmonic filter reduces to the arithmetic mean filter if Q = 0, and to the
harmonic mean filter if Q = -1.
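A Python/NumPy sketch of the contraharmonic mean (illustrative function name; local sums are formed with scipy.ndimage.uniform_filter, and the small eps guard is an added assumption to avoid division by zero). Setting Q = 0 gives the arithmetic mean and Q = -1 the harmonic mean.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean_filter(g, m=3, n=3, Q=1.5):
    """f_hat = sum(g^(Q+1)) / sum(g^Q) over an m x n neighborhood.
    Positive Q removes pepper noise, negative Q removes salt noise."""
    g = g.astype(float)
    eps = 1e-12  # guard for zero-valued pixels when Q is negative
    num = uniform_filter((g + eps) ** (Q + 1), size=(m, n))
    den = uniform_filter((g + eps) ** Q, size=(m, n))
    return num / den
```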
Order-Statistic Filters.
There are four types of order-statistic filters:
(i) Median filter
(ii) Max and min filters
(iii) Midpoint filter
(iv) Alpha-trimmed mean filter
(i) Median filter
The best-known order-statistic filter is the median filter, which, as its name implies,
replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:
f̂(x, y) = median(s,t)∈Sxy {g(s, t)}
The original value of the pixel is included in the computation of the median. Median filters are
quite popular because, for certain types of random noise, they provide excellent noise-reduction
capabilities, with considerably less blurring than linear smoothing filters of similar size. Median
filters are particularly effective in the presence of both bipolar and unipolar impulse noise.
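A minimal Python sketch of median filtering (the helper name is illustrative; scipy.ndimage.median_filter performs the neighborhood ranking directly):

```python
from scipy.ndimage import median_filter

def median_denoise(g, m=3, n=3):
    """Replace each pixel by the median of the m x n neighborhood centered on it."""
    return median_filter(g, size=(m, n))
```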
(ii) Max and min filters
Although the median filter is by far the order-statistic filter most used in image
processing, it is by no means the only one. The median represents the 50th percentile of a ranked
set of numbers, but the reader will recall from basic statistics that ranking lends itself to many
other possibilities. For example, using the 100th percentile results in the so-called max filter,
given by
f̂(x, y) = max(s,t)∈Sxy {g(s, t)}
This filter is useful for finding the brightest points in an image. Also, because pepper noise has
very low values, it is reduced by this filter as a result of the max selection process in the
subimage area Sxy.
The 0th percentile filter is the min filter:
f̂(x, y) = min(s,t)∈Sxy {g(s, t)}
This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a
result of the min operation.
(iii) Midpoint filter
The midpoint filter simply computes the midpoint between the maximum and minimum
values in the area encompassed by the filter:
f̂(x, y) = (1/2) [ max(s,t)∈Sxy {g(s, t)} + min(s,t)∈Sxy {g(s, t)} ]
Note that this filter combines order statistics and averaging. This filter works best for randomly
distributed noise, like Gaussian or uniform noise.
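A Python sketch of the max, min, and midpoint filters (illustrative names, built on scipy.ndimage.maximum_filter and minimum_filter):

```python
from scipy.ndimage import maximum_filter, minimum_filter

def max_filter(g, m=3, n=3):
    """100th percentile: finds bright points and reduces pepper noise."""
    return maximum_filter(g, size=(m, n))

def min_filter(g, m=3, n=3):
    """0th percentile: finds dark points and reduces salt noise."""
    return minimum_filter(g, size=(m, n))

def midpoint_filter(g, m=3, n=3):
    """Average of the local max and min; good for Gaussian or uniform noise."""
    return 0.5 * (maximum_filter(g, size=(m, n)) + minimum_filter(g, size=(m, n)))
```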
(iv) Alpha-trimmed mean filter
It is a filter formed by deleting the d/2 lowest and the d/2 highest gray-level values of g(s, t)
in the neighborhood Sxy. Let gr(s, t) represent the remaining mn - d pixels. A filter formed by
averaging these remaining pixels is called an alpha-trimmed mean filter:
f̂(x, y) = (1/(mn - d)) Σ(s,t)∈Sxy gr(s, t)
where the value of d can range from 0 to mn - 1. When d = 0, the alpha-trimmed filter reduces to
the arithmetic mean filter. If d = (mn - 1)/2, the filter becomes a median filter. For other values of
d, the alpha-trimmed filter is useful in situations involving multiple types of noise, such as a
combination of salt-and-pepper and Gaussian noise.
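A Python/NumPy sketch of the alpha-trimmed mean (illustrative helper name; it sorts the mn neighborhood values at each pixel and averages the middle mn - d of them):

```python
import numpy as np

def alpha_trimmed_mean_filter(g, m=3, n=3, d=2):
    """Delete the d/2 lowest and d/2 highest values in each m x n window, average the rest.
    d = 0 gives the arithmetic mean; larger d approaches the median filter."""
    g = g.astype(float)
    rows, cols = g.shape
    pad_r, pad_c = m // 2, n // 2
    padded = np.pad(g, ((pad_r, pad_r), (pad_c, pad_c)), mode='reflect')
    out = np.zeros_like(g)
    k = d // 2
    for i in range(rows):
        for j in range(cols):
            window = np.sort(padded[i:i + m, j:j + n].ravel())
            trimmed = window[k:window.size - k] if k > 0 else window
            out[i, j] = trimmed.mean()
    return out
```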
Adaptive Filters.
Adaptive filters are filters whose behavior changes based on statistical characteristics of
the image inside the filter region defined by the m X n rectangular window Sxy.
The simplest statistical measures of a random variable are its mean and variance. These
are reasonable parameters on which to base an adaptive filter because they are quantities closely
related to the appearance of an image. The mean gives a measure of average gray level in the
region over which the mean is computed, and the variance gives a measure of average contrast in
that region.
Adaptive, local noise reduction filter.
This filter operates on a local region, Sxy. The response of the filter at any point (x, y)
on which the region is centered is to be based on four quantities: (a) g(x, y), the value of the noisy
image at (x, y); (b) σ²η, the variance of the noise corrupting f(x, y) to form g(x, y); (c) mL, the
local mean of the pixels in Sxy; and (d) σ²L, the local variance of the pixels in Sxy.
The behavior of the filter is to be as follows:
1. If σ²η is zero, the filter should return simply the value of g(x, y). This is the trivial, zero-noise
case in which g(x, y) is equal to f(x, y).
2. If the local variance is high relative to σ²η, the filter should return a value close to g(x, y). A
high local variance typically is associated with edges, and these should be preserved.
3. If the two variances are equal, we want the filter to return the arithmetic mean value of the
pixels in Sxy. This condition occurs when the local area has the same properties as the overall
image, and local noise is to be reduced simply by averaging.
The adaptive, local noise reduction filter is given by
f̂(x, y) = g(x, y) − (σ²η / σ²L) [ g(x, y) − mL ]
The only quantity that needs to be known or estimated is the variance of the overall noise, σ²η. The
other parameters are computed from the pixels in Sxy at each location (x, y) on which the filter
window is centered.
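A Python/NumPy sketch of this filter (illustrative names; the local mean and variance are computed with scipy.ndimage.uniform_filter, and the noise variance sigma_eta2 is assumed known or estimated):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_noise_filter(g, sigma_eta2, m=7, n=7):
    """f_hat = g - (sigma_eta2 / sigma_L2) * (g - local_mean),
    with the ratio clipped to 1 so the filter never overcorrects."""
    g = g.astype(float)
    local_mean = uniform_filter(g, size=(m, n))
    local_var = uniform_filter(g ** 2, size=(m, n)) - local_mean ** 2
    ratio = np.minimum(sigma_eta2 / np.maximum(local_var, 1e-12), 1.0)
    return g - ratio * (g - local_mean)
```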
Adaptive median filter.
The adaptive median filtering algorithm works in two levels, denoted level A and level B.
Let zmin, zmed, and zmax denote the minimum, median, and maximum gray level in Sxy, let zxy be
the gray level at (x, y), and let Smax be the maximum allowed size of Sxy. The algorithm is as
follows:
Level A: A1 = zmed - zmin
         A2 = zmed - zmax
         If A1 > 0 AND A2 < 0, go to level B
         Else increase the window size
         If window size <= Smax, repeat level A
         Else output zmed
Level B: B1 = zxy - zmin
         B2 = zxy - zmax
         If B1 > 0 AND B2 < 0, output zxy
         Else output zmed
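A Python/NumPy sketch of the two-level algorithm (illustrative names; windows grow from 3 x 3 up to an assumed maximum size s_max):

```python
import numpy as np

def adaptive_median_filter(g, s_max=7):
    """Two-level adaptive median filter: level A grows the window until zmed is not
    an impulse (or Smax is exceeded); level B decides between zxy and zmed."""
    g = g.astype(float)
    rows, cols = g.shape
    pad = s_max // 2
    padded = np.pad(g, pad, mode='reflect')
    out = g.copy()
    for i in range(rows):
        for j in range(cols):
            s = 3
            ci, cj = i + pad, j + pad
            while True:
                half = s // 2
                window = padded[ci - half:ci + half + 1, cj - half:cj + half + 1]
                zmin, zmed, zmax = window.min(), np.median(window), window.max()
                zxy = padded[ci, cj]
                if zmin < zmed < zmax:                              # level A passed
                    out[i, j] = zxy if zmin < zxy < zmax else zmed  # level B
                    break
                s += 2                                              # grow the window
                if s > s_max:
                    out[i, j] = zmed
                    break
    return out
```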
Illumination and reflectance components.
An image f(x, y) can be characterized by two components: the amount of source illumination
incident on the scene and the amount of illumination reflected by the objects in the scene.
Appropriately, these are called the illumination and reflectance components and are denoted by
i(x, y) and r(x, y), respectively. The two functions combine as a product to form f(x, y):
f(x, y) = i(x, y) r(x, y)
where
0 < i(x, y) < ∞
and
0 < r(x, y) < 1                                              ... (4)
Equation (4) indicates that reflectance is bounded by 0 (total absorption) and 1 (total
reflectance). The nature of i(x, y) is determined by the illumination source, and r(x, y) is
determined by the characteristics of the imaged objects. It is noted that these expressions also are
applicable to images formed via transmission of the illumination through a medium, such as a
chest X-ray.
Inverse filtering.
The simplest approach to restoration is direct inverse filtering, where an estimate F̂(u, v) of the
transform of the original image is computed simply by dividing the transform of the degraded
image, G(u, v), by the degradation function:
F̂(u, v) = G(u, v) / H(u, v)
Substituting the degradation model G(u, v) = F(u, v)H(u, v) + N(u, v) gives
F̂(u, v) = F(u, v) + N(u, v) / H(u, v)
This tells us that even if the degradation function is known, the undegraded image [the inverse
Fourier transform of F̂(u, v)] cannot be recovered exactly because N(u, v) is a random function
whose Fourier transform is not known.
If the degradation has zero or very small values, then the ratio N(u, v)/H(u, v) could easily
dominate the estimate F̂(u, v).
One approach to get around the zero or small-value problem is to limit the filter
frequencies to values near the origin. H(0, 0) is equal to the average value of h(x, y), and this
is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to
frequencies near the origin, the probability of encountering zero values is reduced.
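A Python/NumPy sketch of direct inverse filtering with a hard radial cutoff around the frequency origin (illustrative names; the kernel h is assumed known with its origin at the top-left sample, and D0 is the cutoff radius):

```python
import numpy as np

def inverse_filter(g, h, D0=40.0):
    """Fhat = G / H applied only within radius D0 of the frequency origin;
    outside the cutoff the degraded spectrum is kept unchanged."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    rows, cols = g.shape
    u = np.fft.fftfreq(rows)[:, None] * rows   # frequency index along rows
    v = np.fft.fftfreq(cols)[None, :] * cols   # frequency index along columns
    near_origin = np.sqrt(u ** 2 + v ** 2) <= D0
    H_safe = np.where(np.abs(H) > 1e-8, H, 1.0)  # guard against division by zero
    Fhat = np.where(near_origin, G / H_safe, G)
    return np.real(np.fft.ifft2(Fhat))
```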
The following are among the most common PDFs found in image processing applications.
Gaussian noise
Because of its mathematical tractability in both the spatial and frequency domains,
Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability
is so convenient that it often results in Gaussian models being used in situations in which they are
marginally applicable at best.
The PDF of a Gaussian random variable, z, is given by
p(z) = (1 / (√(2π) σ)) e^(−(z − µ)² / 2σ²)                    ... (1)
where z represents gray level, µ is the mean (average) value of z, and σ is its standard
deviation. The standard deviation squared, σ², is called the variance of z. A plot of this function is
shown in Fig. 5.10. When z is described by Eq. (1), approximately 70% of its values will be in the
range [(µ - σ), (µ + σ)], and about 95% will be in the range [(µ - 2σ), (µ + 2σ)].
Rayleigh noise
The mean and variance of the Rayleigh density are given by
µ = a + √(πb/4)
σ² = b(4 − π)/4
Figure 5.10 shows a plot of the Rayleigh density. Note the displacement from the origin and the
fact that the basic shape of this density is skewed to the right. The Rayleigh density can be quite
useful for approximating skewed histograms.
Erlang (gamma) noise
The mean and variance of the Erlang density are given by
µ = b/a
σ² = b/a²
Exponential noise
µ=1/a
σ2 = 1 / a2
This PDF is a special case of the Erlang PDF, with b = 1.
Uniform noise
µ = (a + b)/2
σ² = (b − a)²/12
Impulse (salt-and-pepper) noise
If b > a, gray-level b will appear as a light dot in the image. Conversely, level a will appear
like a dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither probability
is zero, and especially if they are approximately equal, impulse noise values will resemble salt-
and-pepper granules randomly distributed over the image. For this reason, bipolar impulse noise
also is called salt-and-pepper noise. Shot and spike noise also are terms used to refer to this type
of noise.
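As a sketch of how these noise models are applied in practice, the following Python/NumPy fragment (illustrative names; the image is assumed scaled to [0, 1]) adds Gaussian noise and bipolar (salt-and-pepper) impulse noise:

```python
import numpy as np

def add_gaussian_noise(f, mu=0.0, sigma=0.05):
    """Add Gaussian noise with mean mu and standard deviation sigma."""
    return np.clip(f + np.random.normal(mu, sigma, f.shape), 0.0, 1.0)

def add_salt_and_pepper(f, pa=0.05, pb=0.05):
    """Bipolar impulse noise: pepper (value 0) with probability pa, salt (value 1) with pb."""
    g = f.copy()
    r = np.random.rand(*f.shape)
    g[r < pa] = 0.0                      # pepper (dark dots)
    g[(r >= pa) & (r < pa + pb)] = 1.0   # salt (light dots)
    return g
```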
Differences between the image enhancement and image restoration.
(i) Image enhancement techniques are heuristic procedures designed to manipulate an image in
order to take advantage of the psychophysical aspects of the human visual system, whereas image
restoration techniques are basically reconstruction techniques by which a degraded image is
reconstructed by using some prior knowledge of the degradation phenomenon.
(ii) Image enhancement can be implemented by spatial- and frequency-domain techniques, whereas
image restoration can be implemented by frequency-domain and algebraic techniques.
(iii) The computational complexity of image enhancement is relatively low compared to that of
image restoration, since algebraic methods require the manipulation of a large number of
simultaneous equations. However, under some conditions the computational complexity can be
reduced to the same level as that required by traditional frequency-domain techniques.
(iv) Image enhancement techniques are problem oriented, whereas image restoration techniques
are general and are oriented towards modeling the degradation and applying the reverse process in
order to reconstruct the original image.
(v) Masks are used in spatial domain methods for image enhancement, whereas masks are not
used for image restoration techniques.
(vi) Contrast stretching is considered an image enhancement technique because it is based on the
pleasing aspects to the viewer, whereas removal of image blur by applying a deblurring function
is considered an image restoration technique.
With Pij as the point spread function, the pixels in the observed (degraded) image are expressed as
di = Σj Pij fj
Here, di is the observed value at pixel i, fj is the unknown undegraded value at pixel j, and Pij is
the fraction of light from source pixel j that arrives at observed pixel i.
The L-R algorithm cannot be used in applications in which the PSF (Pij) is dependent on one or
more unknown variables.
The L-R algorithm is based on a maximum-likelihood formulation in which Poisson statistics are
used to model the image. Maximizing the likelihood of this model yields an equation that is
satisfied when the following iteration converges:
f̂j(k+1) = f̂j(k) Σi [ di / Σl Pil f̂l(k) ] Pij
Here, f̂j(k) is the estimate of the undegraded image at pixel j after k iterations, and the bracketed
quantity compares the observed image with the current estimate blurred by the PSF.
The factor f̂ that appears in the denominator on the right-hand side makes the equation nonlinear.
Since the algorithm is a nonlinear restoration method, it is stopped when a satisfactory result is
obtained.
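A Python/NumPy sketch of the L-R iteration for a spatially invariant PSF (illustrative names; the sums over Pij become convolutions, done here in the frequency domain, and the PSF is assumed aligned with the array origin):

```python
import numpy as np

def richardson_lucy(g, psf, num_iter=10):
    """Richardson-Lucy deconvolution: multiply the current estimate by the
    flipped-PSF correlation of the ratio g / (psf convolved with the estimate)."""
    g = g.astype(float)
    H = np.fft.fft2(psf, s=g.shape)
    f_hat = np.full_like(g, g.mean())   # start from a flat estimate
    eps = 1e-12                         # guard against division by zero
    for _ in range(num_iter):
        blurred = np.real(np.fft.ifft2(H * np.fft.fft2(f_hat)))
        ratio = g / np.maximum(blurred, eps)
        # correlation with the PSF equals multiplication by conj(H) in frequency space
        correction = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(ratio)))
        f_hat = f_hat * correction
    return f_hat
```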
The basic syntax of function deconvlucy, with which the L-R algorithm is implemented, is given
below.
fr = deconvlucy(g, PSF, NUMIT, DAMPAR, WEIGHT)
where
g = the degraded image
fr = the restored image
PSF = the point spread function of the degradation
NUMIT = the number of iterations
DAMPAR
The DAMPAR parameter is a scalar that specifies the threshold deviation of the resultant image
from the degraded image, g. Iterations are suppressed for pixels that deviate from their original
value within the DAMPAR threshold, which reduces noise generation while preserving essential
image information.
WEIGHT
The WEIGHT parameter assigns a weight to each pixel. It is an array of the same size as the
degraded image, g. In applications where a pixel leads to improper imaging, it can be excluded by
assigning it a weight of 0. Pixels may also be given weights according to the flat-field correction
required for the image array. Weights are used in applications such as blurring with a specified
PSF; they can be used to exclude pixels at the border of the image, which are blurred differently
by the PSF. If the array size of the PSF is n x n, then the width of the border of zeros used in
WEIGHT is ceil(n / 2).