
Image Processing Summary

Edge detection can be performed using Sobel operators, which apply a filter to detect edges along the x- and y-directions in an image. The Sobel operators calculate the gradient of the image intensity function, approximating the derivatives using differences; this helps find regions of high spatial frequency that correspond to edges. Discrete histogram equalization does not generally yield a flat histogram: it maps each pixel intensity in the input image to a level in the output image so as to spread the histogram across a wider intensity range, but the distribution of pixels among levels may not be equal. Regular thresholding uses a single threshold for the entire image, while adaptive thresholding divides the image into parts and calculates individual thresholds for each part, making it more suitable when intensity varies within the image.

Uploaded by Ahmed Hamdy

1. Pixel is the term most widely used to denote the elements of a digital image.

2. An ideal edge is a step function in some direction.


3. For symmetric filters, there is no difference between correlation and convolution.
4. The second-derivative operator used in image sharpening is called the Laplacian.
5. Exposure, is the amount of light per unit area reaching a photographic film or
electronic image sensor.
6. The primary objective of sharpening of an image is Highlight fine details in the image
7. Second derivative of I(x) has a zero crossing at edge.
8. The histogram shows the distribution of grey levels in an image.
9. The value of a digital image at spatial coordinates (x,y) is proportional to Brightness.
10. The Laplacian of Gaussian (or Mexican hat) filter uses the Gaussian for noise removal
and the Laplacian for edge detection
11. Single value thresholding only works for bimodal histograms.
12. In morphological processing, a Hit occurs when any on pixel in the structuring element covers an on pixel in the image.
13. In morphological processing, there are two basic morphological operations, which are erosion and dilation.
14. Erosion shrinks objects while Dilation enlarges objects.
15. The Closing of image f by structuring element s, is simply a dilation followed by an
erosion.
16. The first and foremost step in image processing is Image acquisition.
17. The type of mean filters that works well for salt noise but fails for pepper noise is
Harmonic Mean.
18. The total amount of energy that flows from the light source (measured in watts) is called
Radiance.
19. The point of equal energy has equal amounts of each colour and is the CIE standard
for pure white.
20. In the Alpha-Trimmed Mean Filter, given a set of 16 points, trimming by 25% from each end would compute the mean of the remaining 8 points.
21. In Contraharmonic Mean, positive values of Q eliminate pepper noise.
22. Point, lines and edges are the three basic types of discontinuities.
23. Masks are designed with suitable coefficients and are applied at each point in an
image.
24. Using gray-level transformation, the basic function Logarithmic deals with which of the
following transformation?
a. Log and inverse-log transformations
b. Negative and identity transformations
c. nth and nth root transformations
d. All of the mentioned
25. By default, Matlab stores most data in arrays of class .................
a. uint8
b. uint16
c. double
d. logical
26. Which of the following is not a valid response when we apply a second derivative?
a. Zero response at onset of gray level step
b. Nonzero response at onset of gray level step
c. Zero response at flat segments
d. Nonzero response along the ramps
27. What is the output of a smoothing, linear spatial filter?
a. Median of pixels
b. Maximum of pixels
c. Minimum of pixels
d. Average of pixels
28. What is the thickness of the edges produced by first order derivatives when compared
to that of second order derivatives?
a. Finer
b. Equal
c. Thicker
d. Independent
29. A structuring element runs over the image's
a. rows
b. columns
c. every element
d. edges
30. ................... is a set of connected pixels that lie on the boundary between two regions.
a. Edge
b. Line
c. Boundary
d. Blob
31. Smallest value of gamma will produce
a. contrast
b. darker image
c. brighter image
d. black and white image
32. In spatial domain, which of the following operation is done on the pixels in sharpening
the image?
a. Integration
b. Average
c. Median
d. Differentiation
33. Which one is not the process of image processing
a. high level
b. low level
c. last level
d. mid level
34. Smoothing spatial filters are useful for ........................
a. image enhancement
b. image restoration
c. highlight gross details
d. highlight fine details
35. Which is the first fundamental step in image processing?
a. filtration
b. image acquisition
c. image enhancement
d. image restoration
36. Which of the following depicts the main functionality of the Bit-plane slicing?
a. Highlighting a specific range of gray levels in an image
b. Highlighting the contribution made to the total image appearance by specific bits
c. Highlighting the contribution made to the total image appearance by specific bytes
d. Highlighting the contribution made to the total image appearance by specific pixels
37. In ................... image, we notice that the components of the histogram are concentrated
on the low side on the intensity scale.
a. bright
b. colourful
c. all of the mentioned
d. dark
38. Histogram is the technique processed in
a. intensity domain
b. undefined domain
c. frequency domain
d. spatial domain
39. Which of the following transformations expands the value of dark pixels while the
higher-level values are being compressed?
a. Log transformations
b. Inverse-log transformations
c. Negative transformations
d. None of the mentioned
40. The method in which images are input, and attributes are output is called
a. low-level processes
b. edge-level processes
c. high-level processes
d. mid-level processes
41. The first derivative of I(x) has a ..........................at the edge.
a. none of them
b. valley
c. zero crossing
d. peak
42. What is the sum of all components of a normalized histogram?
a. 1
b. -1
c. 0
d. None of the mentioned
43. Which of the following shows three basic types of functions used frequently for image
enhancement?
a. Linear, logarithmic, and inverse law
b. Power law, logarithmic, and inverse law
c. Linear, exponential, and inverse law
d. Linear, logarithmic, and power law
44. Which of the following arithmetic operators is primarily used as a masking operator in
enhancement?
a. Addition
b. Subtraction
c. Multiplication
d. Division
45. Which of the following is/are more commercially successful image enhancement
methods in mask mode radiography, an area under medical imaging?
a. Addition
b. Subtraction
c. Multiplication
d. Division
46. A filter is applied to an image whose response is independent of the direction of
discontinuities in the image. The filter is/are .................
a. Median filter
b. Isotropic filters
c. Box filters
d. All of the mentioned
47. The Laplacian is which of the following operators?
a. Nonlinear operator
b. Linear operator
c. Order-Statistic operator
d. None of the mentioned
48. Applying Laplacian has which of the following result(s)?
a. Produces an image having greyish edge lines
b. Produces an image having featureless background
c. All of the mentioned
d. None of the mentioned

50. ............................. bring out detail that is obscured, or simply to highlight certain features of interest in an image.
a. Image Restoration
b. Image Enhancement
c. Segmentation
d. Object Recognition

42. Intensity levels in an 8-bit image are
a. 255
b. 256
c. 244
d. 245
43. Full color images have at least
a. 2 components
b. 4 components
c. 3 components
d. 255 components
44. Hue and saturation, both together produce
a. brightness
b. transitivity
c. chromaticity
d. reflectivity

46. Negative of the image having intensity values [0, L - 1] is expressed by
a. s = L - 1
b. s = 1 - r
c. s = L - 1 - r
d. s = L - r
47. Smallest value of gamma will produce
a. contrast
b. darker image
c. brighter image
d. black and white image

48. Smallest possible neighborhood in an image must be of size
a. 3x3
b. 2x2
c. 1x1
d. 4x4
49. For edge detection, we use
a. first derivative
b. second derivative
c. third derivative
d. Both A and B

50. The type of noise in which pixel values are multiplied by random noise is ..................
a. speckle noise
b. periodic noise
c. gaussian noise
d. none of them

51. The type of mean filter that achieves similar smoothing to the arithmetic mean but tends to lose less image detail is ..................
a. geometric mean
b. contraharmonic mean
c. harmonic mean
d. none of them

52. Edges play an important role in our perception of images as well as in the analysis of images. Describe one method which can detect lines/edges along the x- and y-directions in an image. Write down its mathematical operations on an input image.
Solution:
Edge detection along the x- and y-directions can be done using the Sobel operators:

Gx = [-1 0 1; -2 0 2; -1 0 1]    Gy = [-1 -2 -1; 0 0 0; 1 2 1]

Each mask is applied to the image to approximate the partial derivatives df/dx and df/dy, and the gradient magnitude sqrt(Gx^2 + Gy^2) (often approximated by |Gx| + |Gy|) is large at edges.
Explain why the discrete histogram equalization technique does not, in general, yield a flat histogram.
Solution:
A flat histogram means that the number of pixels is distributed equally among the intensity levels. Discrete histogram equalization maps each pixel with intensity rk in the input image to a corresponding pixel with level sk in the output image, so that the intensity levels of the equalized image span a wider range of the intensity scale. Because the mapping is discrete, several input levels may be merged into the same output level, so the resulting distribution of pixels among the levels is generally not equal.
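The effect can be verified numerically. This sketch (an illustration; the tiny 8-level image is an assumption) equalizes it with s_k = (L-1)*CDF(r_k) and shows the output histogram spreads out without becoming flat:

```python
# Discrete histogram equalization on a tiny image with L = 8 levels.
L = 8
pixels = [0, 0, 0, 0, 1, 1, 2, 7]                 # 8 pixels, levels 0..7

hist = [pixels.count(v) for v in range(L)]        # input histogram
cdf = [sum(hist[:k + 1]) for k in range(L)]       # cumulative counts
n = len(pixels)
mapping = [round((L - 1) * c / n) for c in cdf]   # s_k = (L-1) * CDF(r_k)

equalized = [mapping[p] for p in pixels]
out_hist = [equalized.count(v) for v in range(L)]
print(out_hist)   # spread toward the high end, but counts stay unequal
```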

Explain the differences between regular and adaptive thresholding. Give examples of when each type should be used.
Solution:
In regular (global) thresholding you find one threshold value (or values) for the entire image. In adaptive thresholding the image is divided into parts, usually squares, and threshold levels are found for each separate part. Global thresholding is useful when the image is similar in most parts. Adaptive thresholding is very useful when the image changes in intensity, e.g., because of a light source from the right side; then the threshold values should be quite different on the left and right sides of the image.
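A minimal numeric sketch of the difference (the 1-D intensity profile and the two-part split are assumptions chosen to mimic uneven lighting):

```python
# 1-D profile of alternating background/object pixels, brighter on the right.
profile = [10, 60, 12, 58, 110, 160, 112, 158]

# Global thresholding: one threshold for the whole signal.
T_global = sum(profile) / len(profile)
global_mask = [int(v > T_global) for v in profile]

# Adaptive thresholding: split into parts and threshold each with its mean.
adaptive_mask = []
for part in (profile[:4], profile[4:]):
    T = sum(part) / len(part)
    adaptive_mask += [int(v > T) for v in part]

print(global_mask)    # dim half is all 0: objects lost under the global T
print(adaptive_mask)  # alternating 0/1: objects recovered in both halves
```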

The two images shown below are different, but their histograms are identical. Both images have size 80 × 80, with black (0) and white (1) pixels.

The histograms of two images are illustrated below. Sketch a transformation function for each image that will make the image have better contrast.

Give a 3x3 mask for performing unsharp masking in a single pass through an image.
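The original answer to this question does not survive in the text. One standard single-pass kernel (an assumption, derived from unsharp masking as g = 2f - blur(f) with a 3x3 box blur) can be computed as follows:

```python
# Unsharp masking in one pass: g = f + (f - blur(f)) = 2*f - blur(f).
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]      # delta kernel (f itself)
box_blur = [[1 / 9] * 3 for _ in range(3)]        # 3x3 averaging kernel

unsharp = [[2 * identity[j][i] - box_blur[j][i] for i in range(3)]
           for j in range(3)]
print(unsharp)   # centre 2 - 1/9 = 17/9, every other entry -1/9
```

Equivalently, the mask is (1/9)[[-1, -1, -1], [-1, 17, -1], [-1, -1, -1]].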

In a given application an averaging mask is applied to input images to reduce noise, and then a Laplacian mask is applied to enhance small details. Would the result be the same if the order of these operations were reversed?
Solution:
The result would be the same if the order of these operations were reversed, since both the averaging and the Laplacian are linear operations. The Laplacian is a linear operator because derivatives of any order are linear operations, and the Laplacian is formed from second derivatives.
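The claim can be checked directly in 1-D (the averaging and second-derivative kernels and the test signal are assumptions for illustration):

```python
def conv(signal, kernel):
    """'Same'-size 1-D filtering with zero padding (the kernels below are
    symmetric, so correlation and convolution coincide)."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

avg = [1 / 3, 1 / 3, 1 / 3]        # 1-D averaging mask
lap = [1.0, -2.0, 1.0]             # 1-D second-derivative mask
signal = [0.0, 0.0, 1.0, 3.0, 2.0, 0.0, 0.0]

a = conv(conv(signal, avg), lap)   # smooth first, then take the Laplacian
b = conv(conv(signal, lap), avg)   # Laplacian first, then smooth
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))   # True: order is irrelevant
```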

What linear transformation will change an image f(x,y) with gray levels ranging from 20 through 30 to an image g(x,y) with gray levels ranging from 20 through 50?
Solution:
fmin = 20, fmax = 30
gmin = 20, gmax = 50
For the linear mapping n = a*m + b:
20a + b = 20 (1)
30a + b = 50 (2)
Solving equations (1) and (2), we get a = 3, b = -40.
Transformation function: n = 3m - 40
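A quick check of this solution in code (the variable names are illustrative):

```python
# Fit n = a*m + b so that gray levels [20, 30] map onto [20, 50].
fmin, fmax = 20, 30
gmin, gmax = 20, 50

a = (gmax - gmin) / (fmax - fmin)   # slope: 30 / 10 = 3
b = gmin - a * fmin                 # intercept: 20 - 3*20 = -40

def T(m):
    return a * m + b

print(a, b)          # 3.0 -40.0
print(T(20), T(30))  # 20.0 50.0
```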

Is the threshold obtained with the basic global thresholding algorithm independent of the starting point? If your answer is yes, prove it. If your answer is no, give an example.
Solution:
The value of the threshold at convergence is independent of the initial value, provided the initial threshold is chosen between the minimum and maximum intensity of the image. The final threshold depends on the initial value chosen for T if that value does not satisfy this condition.
For example, consider an image histogram for which we select the initial threshold T(1) = 0. Then, at the next iterative step, m2(2) = 0, m1(2) = M, and T(2) = M/2. Because m2(2) = 0, it follows that m2(3) = 0, m1(3) = M, and T(3) = T(2) = M/2. Any following iterations will yield the same result, so the algorithm converges with the wrong threshold value. If we had started with Imin < T(1) < Imax, the algorithm would have converged properly.
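A sketch of the algorithm itself (the pixel list and the treatment of a degenerate start, where one class is empty, are assumptions):

```python
def basic_global_threshold(pixels, T):
    """Iterate T = (m1 + m2) / 2 until the threshold stops changing."""
    while True:
        g1 = [p for p in pixels if p > T]    # class above the threshold
        g2 = [p for p in pixels if p <= T]   # class at or below it
        if not g1 or not g2:                 # degenerate start: give up
            return T
        T_new = (sum(g1) / len(g1) + sum(g2) / len(g2)) / 2
        if abs(T_new - T) < 1e-9:
            return T_new
        T = T_new

# Bimodal data: background near 10, objects near 100.
pixels = [8, 10, 12, 98, 100, 102]
print(basic_global_threshold(pixels, 20))   # 55.0
print(basic_global_threshold(pixels, 90))   # 55.0: same result, other start
```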
What is the difference between "Image enhancement" and "Image restoration"?
Solution:
Image Enhancement: Bring out detail that is obscured, or simply to highlight certain
features of interest in an image.
Image Restoration: is the operation of taking a corrupt/noisy image and estimating the
clean, original image. Corruption may come in many forms such as motion blur, noise and
camera mis-focus. Image restoration is performed by reversing the process that blurred the
image and such is performed by imaging a point source and use the point source image to
restore the image information lost to the blurring process.
Image restoration is different from image enhancement in that the latter is designed to emphasize features of the image that make the image more pleasing to the observer, but not necessarily to produce realistic data from a scientific point of view. Image enhancement techniques (like contrast stretching or de-blurring by a nearest-neighbour procedure) provided by imaging packages use no a priori model of the process that created the image.

What are the types of noise models?
a. Gaussian (the most common model)
b. Rayleigh
c. Erlang
d. Exponential
e. Uniform
f. Impulse (salt-and-pepper noise)
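As a small illustration of the impulse model (the noise probability, image values, and seed are assumptions), salt-and-pepper noise can be added like this:

```python
import random

def add_salt_pepper(img, prob, seed=0):
    """Replace each pixel with 0 (pepper) or 255 (salt) with probability prob."""
    rng = random.Random(seed)                # seeded for reproducibility
    out = []
    for row in img:
        new_row = []
        for p in row:
            r = rng.random()
            if r < prob / 2:
                new_row.append(0)            # pepper
            elif r < prob:
                new_row.append(255)          # salt
            else:
                new_row.append(p)            # pixel left untouched
        out.append(new_row)
    return out

img = [[128] * 4 for _ in range(4)]
noisy = add_salt_pepper(img, 0.25)
print(noisy)   # every value is 0, 128, or 255
```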
Give a single intensity transformation function for spreading the intensities of an image so the lowest intensity is 0 and the highest is L - 1.
Solution:
Let f denote the original image. First subtract the minimum value of f, denoted fmin, from f to yield a function whose minimum value is 0:
g1 = f - fmin
Next divide g1 by its maximum value to yield a function in the range [0, 1] and multiply the result by L - 1 to yield a function with values in the range [0, L - 1]:
g = (L - 1) g1 / max(g1)

Keep in mind that fmin is a scalar and f is an image.
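The two steps can be combined into one sketch (integer-valued output and L = 256 are assumptions):

```python
L = 256

def stretch(img):
    """Map the lowest intensity to 0 and the highest to L - 1."""
    fmin = min(min(row) for row in img)
    g1 = [[p - fmin for p in row] for row in img]          # minimum now 0
    gmax = max(max(row) for row in g1)
    return [[(L - 1) * p // gmax for p in row] for row in g1]

img = [[50, 100], [150, 200]]
print(stretch(img))   # [[0, 85], [170, 255]]
```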

What are the primary stages that comprise the image processing pipeline?
Solution:
The fundamental stages are: image acquisition, image enhancement, image restoration, color image processing, wavelets and multiresolution processing, compression, morphological processing, segmentation, representation and description, and object recognition.

Name and explain the categories of image enhancement (domains of image enhancement).
Solution:
There are two broad categories of image enhancement techniques:
– Spatial domain techniques: direct manipulation of image pixels.
– Frequency domain techniques: manipulation of the Fourier transform or wavelet transform of an image.

Explain the differences between point processing and neighborhood operations.
Solution:
The simplest spatial domain operations occur when the neighborhood is simply the pixel itself. In this case T is referred to as a grey level transformation function or a point processing operation. Point processing operations take the form
s = T(r)
where s refers to the processed image pixel value and r refers to the original image pixel value.
Examples: negative images and thresholding.
Neighborhood operations operate on a larger neighborhood of pixels than point operations; neighborhoods are mostly a rectangle around a central pixel.
Examples: min, max and median filters.
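A small sketch contrasting the two (the 3x3 test image is an assumption): the negative is a point operation needing only the pixel itself, while the median needs the whole neighborhood:

```python
L = 256

def negative(img):
    """Point processing: s = T(r) = L - 1 - r, one pixel at a time."""
    return [[L - 1 - r for r in row] for row in img]

def median3x3(img, y, x):
    """Neighborhood operation: median of the 3x3 window around (y, x)."""
    window = [img[y + j][x + i] for j in (-1, 0, 1) for i in (-1, 0, 1)]
    return sorted(window)[4]                # middle of 9 sorted values

img = [[10, 10, 10],
       [10, 255, 10],                       # a single salt-noise pixel
       [10, 10, 10]]
print(negative(img)[1][1])                  # 255 -> 0
print(median3x3(img, 1, 1))                 # noise pixel replaced by 10
```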
When objects are represented, the representation should be invariant to three things. Describe these three things.
Solution (from ChatGPT; this one does not appear to be in our material):
In image processing, the representation of objects is ideally designed to be invariant to three
main factors:

1. Translation Invariance:
Meaning: The representation should remain the same regardless of the object's
position or location in the image.
Example: If an object is in the center or at the corner of an image, its representation
should be consistent.
2. Rotation Invariance:
Meaning: The representation should be unaffected by the rotation of the object within
the image.
Example: If an object is upright or rotated, the representation remains the same.
3. Scale Invariance:
Meaning: The representation should not change with the size or scale of the object in
the image.
Example: Whether an object is large or small within the image, its representation
remains invariant to scale changes.
These invariances are crucial for creating robust and reliable object recognition
systems in image processing. They ensure that the model can recognize and
understand objects regardless of their position, orientation, or size in the images.
