
CV

1.Explain Fourier transforms with suitable expression


Fourier analysis can be used to analyze the frequency characteristics of various filters. In
this section, we explain both how Fourier analysis lets us determine these characteristics (or,
equivalently, the frequency content of an image) and how using the Fast Fourier Transform
(FFT) lets us perform large-kernel convolutions in time that is independent of the kernel's
size. More comprehensive introductions to Fourier transforms are provided by Bracewell
(1986); Glassner (1995); Oppenheim and Schafer (1996); Oppenheim, Schafer, and Buck
(1999). How can we analyze what a given filter does to high, medium, and low frequencies?
The answer is simply to pass a sinusoid of known frequency through the filter and to observe
by how much it is attenuated.
Let

H(k) = (1/N) Σ_{x=0}^{N-1} h(x) e^{-j2πkx/N},    (3.54)
where N is the length of the signal or region of analysis. These formulas apply both to filters,
such as h(x), and to signals or images, such as s(x) or g(x). The discrete form of the Fourier
transform (3.54) is known as the Discrete Fourier Transform (DFT). Note that while (3.54)
can be evaluated for any value of k, it only makes sense for values in the range
k ∈ [−N/2, N/2].
At face value, the DFT takes O(N²) operations (multiply-adds) to evaluate. Fortunately, there
exists a faster algorithm called the Fast Fourier Transform (FFT), which requires only
O(N log₂ N) operations (Bracewell 1986; Oppenheim, Schafer, and Buck 1999).
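As a concrete illustration (a minimal sketch using NumPy, which the text itself does not reference; the filter, signal, and length N below are made-up examples), the frequency response of a filter and an FFT-based convolution can be computed as follows:

```python
import numpy as np

N = 256                                   # length of the signal / region of analysis
h = np.array([0.25, 0.5, 0.25])           # example smoothing filter h(x)
s = np.random.rand(N)                     # example signal s(x)

# Frequency response of the filter: zero-pad h to length N and take its DFT.
# |H(k)| tells us by how much a sinusoid of frequency k is attenuated.
H = np.fft.fft(h, n=N)
attenuation = np.abs(H)

# Convolution via the FFT: multiply spectra and invert, O(N log N) work
# instead of O(N^2). (This is a circular convolution; pad both arrays if a
# linear convolution is required.)
g = np.real(np.fft.ifft(np.fft.fft(s) * H))
```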
2.Explain Wiener Filtering with suitable expression.
4.Interpret how general transformations, such as image rotations or general
warps, are performed.
In contrast to the point processes we saw in Section 3.1, where the function applied to an
image transforms the range of the image,
g(x) = h(f(x)),
geometric transformations such as rotations and warps instead transform the domain of the
image, g(x) = f(h(x)): each output pixel is computed by resampling the input at a
transformed coordinate.
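A minimal sketch of such a domain transform (using OpenCV, which the text does not name; the synthetic image, the 30° angle, and the interpolation choice are assumptions for illustration):

```python
import cv2
import numpy as np

# Synthetic test image: a white square on a black background.
f = np.zeros((256, 256), dtype=np.uint8)
f[100:156, 100:156] = 255

# g(x) = f(h(x)): each output pixel is obtained by sampling f at the
# inverse-mapped coordinate, here with bilinear interpolation.
M = cv2.getRotationMatrix2D(center=(128, 128), angle=30, scale=1.0)  # 2x3 affine matrix
g = cv2.warpAffine(f, M, dsize=(256, 256), flags=cv2.INTER_LINEAR)
```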
5.Explain Image Degradation/Restoration Process in Detail.
Image restoration is an area that also deals with improving the appearance of an image.
However, unlike enhancement, which is subjective, image restoration is objective, in the
sense that restoration techniques tend to be based on mathematical or probabilistic models
of image degradation. Enhancement, on the other hand, is based on human subjective
preferences regarding what constitutes a “good” enhancement result.
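The excerpt does not restate the degradation model itself; as a reminder of the usual formulation (an assumption here, not quoted from the text), the degraded image is commonly modeled as

g(x,y) = h(x,y) ⋆ f(x,y) + η(x,y), or, in the frequency domain, G(u,v) = H(u,v) F(u,v) + N(u,v),

where h is the degradation function, η is additive noise, and restoration seeks an estimate of the original image f(x,y) given g(x,y) and some knowledge of h and η.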

6.Interpret how to restore a noisy image with respect to Spatial Filtering.

Sometimes it is possible to estimate N(u,v) from the spectrum of G(u,v).

In this case N(u,v) can be subtracted from G(u,v) to obtain an estimate of the original image,
but this type of knowledge is the exception rather than the rule. Spatial filtering is the
method of choice for estimating f(x,y) [i.e., denoising image g(x,y)] in situations when only
additive random noise is present.
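A minimal sketch of such spatial filtering (using SciPy, which the text does not name; the noisy image below is a placeholder array): two of the simplest estimators of f(x,y) are the arithmetic mean and the median of a small neighborhood.

```python
import numpy as np
from scipy import ndimage

g = np.random.rand(256, 256)                       # placeholder noisy image g(x,y)

f_mean = ndimage.uniform_filter(g, size=3)         # 3x3 arithmetic mean filter
f_median = ndimage.median_filter(g, size=3)        # 3x3 median filter (effective against impulse noise)
```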
7.Interpret noise models with suitable probability density functions.
The statistical behavior of the intensity values in the noise component of the model may be
considered random variables, characterized by a probability density function (PDF), as noted
briefly earlier. The noise component of the model is an image, η(x,y), of the same size as the
input image. We create a noise image for simulation purposes by generating an array whose
intensity values are random numbers with a specified probability density function. This
approach holds for all the PDFs to be discussed shortly, with the exception of salt-and-pepper
noise, which is applied differently. The following are among the most common noise PDFs
found in image processing applications.
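A minimal sketch of generating such noise images (using NumPy; the particular means, scales, and probabilities are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 256, 256                                            # same size as the input image

gaussian = rng.normal(loc=0.0, scale=20.0, size=(M, N))    # Gaussian PDF
rayleigh = rng.rayleigh(scale=20.0, size=(M, N))           # Rayleigh PDF
uniform = rng.uniform(low=-20.0, high=20.0, size=(M, N))   # uniform PDF

# Salt-and-pepper noise is applied differently: a fraction of pixels is
# forced to the minimum or maximum intensity instead of being added in.
p_pepper, p_salt = 0.01, 0.01
img = np.full((M, N), 128, dtype=np.uint8)                 # constant test image
mask = rng.uniform(size=(M, N))
img[mask < p_pepper] = 0                                   # pepper
img[mask > 1.0 - p_salt] = 255                             # salt
```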
8.Explain Periodic Noise Reduction Using Frequency Domain Filtering
• Periodic noise can be analyzed and filtered quite effectively using frequency domain
techniques.
• The basic idea is that periodic noise appears as concentrated bursts of energy in the
Fourier transform, at locations corresponding to the frequencies of the periodic
interference.
• The approach is to use a selective filter to isolate the noise.
• These selective filters are applied exactly as they were introduced earlier; their use for
image restoration is no different.
• In restoration of images corrupted by periodic interference, the tool of choice is a notch
filter (a frequency-domain sketch follows below).
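A minimal sketch of a notch-reject filter (using NumPy; the image, the spike location, and the notch radius are assumed for illustration): the centered spectrum is multiplied by a filter that zeroes small discs around each interference spike and its symmetric counterpart.

```python
import numpy as np

g = np.random.rand(256, 256)                    # placeholder image corrupted by periodic noise
G = np.fft.fftshift(np.fft.fft2(g))             # centered Fourier transform

def notch_reject(shape, centers, radius):
    """Zero a disc of the given radius around each (u, v) spike and its
    symmetric counterpart; pass everything else."""
    H = np.ones(shape)
    V, U = np.mgrid[0:shape[0], 0:shape[1]]
    for (u0, v0) in centers:
        for (uc, vc) in [(u0, v0), (shape[0] - u0, shape[1] - v0)]:
            H[(V - uc) ** 2 + (U - vc) ** 2 <= radius ** 2] = 0.0
    return H

H = notch_reject(G.shape, centers=[(108, 128)], radius=5)   # assumed spike location
f_hat = np.real(np.fft.ifft2(np.fft.ifftshift(G * H)))      # restored image estimate
```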
9.Explain Laplacian kernel Line Detection with suitable matrix.
• The next level of complexity is line detection.
• For line detection, we can expect second derivatives to result in a stronger filter response,
and to produce thinner lines than first derivatives.
• Thus, we can use the Laplacian kernel for line detection also, keeping in mind that the
double-line effect of the second derivative must be handled properly.

• Consider the kernels in Fig. 10.6. Suppose that an image with a constant background and
containing various lines (oriented at 0°, ±45°, and 90°) is filtered with the first kernel.
• The maximum responses would occur at image locations in which a horizontal line passes
through the middle row of the kernel.
• This is easily verified by sketching a simple array of 1’s with a line of a different intensity
(say, 5s) running horizontally through the array.
• A similar experiment would reveal that the second kernel in Fig. 10.6 responds best to lines
oriented at +45°; the third kernel to vertical lines; and the fourth kernel to lines in the −45°
direction.
• The preferred direction of each kernel is weighted with a larger coefficient (i.e., 2) than
other possible directions.
• The coefficients in each kernel sum to zero, indicating a zero response in areas of constant
intensity (the four kernels are sketched below).
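A sketch of the four 3 × 3 line-detection kernels described above (their standard form, assumed here since Fig. 10.6 is not reproduced), together with the horizontal-line experiment from the text:

```python
import numpy as np
from scipy import ndimage

kernels = {
    "horizontal": np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]]),
    "+45":        np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]]),
    "vertical":   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]]),
    "-45":        np.array([[-1, -1,  2],
                            [-1,  2, -1],
                            [ 2, -1, -1]]),
}

# An array of 1's with a horizontal line of 5's: the horizontal kernel
# produces the strongest response along the middle row of the line.
img = np.ones((64, 64), dtype=float)
img[32, :] = 5.0
responses = {name: ndimage.convolve(img, k.astype(float)) for name, k in kernels.items()}
```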
10.Explain image segmentation application with respect to thresholding.
Because of its intuitive properties, simplicity of implementation, and computational speed,
image thresholding enjoys a central position in applications of image segmentation.
FOUNDATION
Previously, regions were identified by first finding edge segments and then attempting to link
the segments into boundaries. Here, we discuss techniques for partitioning images directly
into regions based on intensity values and/or properties of these values.
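A minimal sketch of global thresholding (the image is a placeholder, and Otsu's method for picking the threshold is an assumed choice, not part of the excerpt):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold T that maximizes the between-class variance
    of the image histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                                  # normalized histogram
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()                  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0             # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = (np.random.rand(128, 128) * 255).astype(np.uint8)    # placeholder grayscale image
T = otsu_threshold(img)
segmented = (img > T).astype(np.uint8) * 255               # object/background segmentation
```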
11.Explain the following
• Marr-Hildreth Edge Detector
• One of the earliest successful attempts at incorporating more sophisticated analysis into
the edge-finding process is attributed to Marr and Hildreth [1980].
• Edge-detection methods in use at the time were based on small operators, such as the
Sobel kernels discussed earlier.
• Marr and Hildreth argued
(1) that intensity changes are not independent of image scale, implying that their
detection requires using operators of different sizes; and
(2) that a sudden intensity change will give rise to a peak or trough in the first derivative
or, equivalently, to a zero crossing in the second derivative.
• These ideas suggest that an operator used for edge detection should have two salient
features.
• First and foremost, it should be a differential operator capable of computing a digital
approximation of the first or second derivative at every point in the image.
• Second, it should be capable of being “tuned” to act at any desired scale, so that large
operators can be used to detect blurry edges and small operators to detect sharply
focused fine detail.

The Laplacian of the Gaussian G(x,y) = e^{-(x²+y²)/(2σ²)} is

∇²G(x,y) = [(x² + y² − 2σ²)/σ⁴] e^{-(x²+y²)/(2σ²)}.

This expression is called the Laplacian of a Gaussian (LoG).


The Marr-Hildreth edge-detection algorithm may be summarized as follows (a sketch of the
three steps follows the list):
1. Filter the input image with an n × n Gaussian lowpass kernel obtained by sampling the
Gaussian function G(x,y).
2. Compute the Laplacian of the image resulting from Step 1 using, for example, a 3 × 3
Laplacian kernel.
3. Find the zero crossings of the image from Step 2.
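A minimal sketch of the three steps (using SciPy; the image, σ, and the particular 3 × 3 Laplacian kernel are assumed choices):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(128, 128)                             # placeholder input image
sigma = 2.0

# Step 1: Gaussian lowpass filtering (kernel size derived from sigma internally).
smoothed = ndimage.gaussian_filter(img, sigma=sigma)

# Step 2: Laplacian of the smoothed image with a common 3x3 kernel.
laplacian_kernel = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=float)
log_response = ndimage.convolve(smoothed, laplacian_kernel)

# Step 3: zero crossings -- mark pixels where the response changes sign
# relative to the neighbor below or to the right (a contrast threshold
# can be added to suppress weak crossings).
zero_cross = np.zeros_like(log_response, dtype=bool)
zero_cross[:-1, :] |= np.signbit(log_response[:-1, :]) != np.signbit(log_response[1:, :])
zero_cross[:, :-1] |= np.signbit(log_response[:, :-1]) != np.signbit(log_response[:, 1:])
```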

• Canny Edge Detector


• Global Processing Using the Hough Transform
• The method discussed in the previous section is applicable in situations in which
knowledge about pixels belonging to individual objects is available.
• Often, we have to work in unstructured environments in which all we have is an edge
map and no knowledge about where objects of interest might be.
• In such situations, all pixels are candidates for linking, and thus have to be accepted or
eliminated based on predefined global properties.
• Here, we develop an approach based on whether sets of pixels lie on curves of a
specified shape. Once detected, these curves form the edges or region boundaries of
interest (a sketch of the Hough transform for lines follows below).
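A minimal sketch of the Hough transform for straight lines (the edge map is a synthetic placeholder): every edge pixel votes for all (ρ, θ) line parameters consistent with it, and peaks in the accumulator identify lines.

```python
import numpy as np

edges = np.zeros((100, 100), dtype=bool)
edges[np.arange(100), np.arange(100)] = True               # placeholder edge map: one diagonal line

thetas = np.deg2rad(np.arange(-90, 90))                    # candidate line-normal angles
diag = int(np.ceil(np.hypot(*edges.shape)))
rhos = np.arange(-diag, diag + 1)                          # signed distances from the origin
accumulator = np.zeros((len(rhos), len(thetas)), dtype=int)

ys, xs = np.nonzero(edges)
for x, y in zip(xs, ys):
    for t_idx, theta in enumerate(thetas):
        rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
        accumulator[rho + diag, t_idx] += 1                # cast a vote for this (rho, theta)

# The strongest peak gives the parameters of the most prominent line.
peak = np.unravel_index(np.argmax(accumulator), accumulator.shape)
rho_peak, theta_peak = rhos[peak[0]], np.rad2deg(thetas[peak[1]])
```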
