DIVP
DIVP
B.E.(E&TC)-2020-2021
Unit No.1
1. DIP stands for:
a) Digital image processing
b) Digital information processing
c) Digital induction process
d) None of these
2. What is image?
a) Picture
b) Matrix of pixel
c) Collection of pixel
d) All of these
4. The field of digital image processing refers to processing digital images by means of:
a) Digital computer
b) Super computer
c) mini-computer
d) None of these
5. What is pixel?
a) A pixel is an element of a digital image
b) A pixel is an element of an analog image
c) a & b
d) none of these
8. Among the following image processing techniques, which is fast, precise and flexible?
a) optical
b) digital
c) electronic
d) photographic
10. Which is the image processing technique used to improve the quality of image for human
viewing?
a) compression
b) enhancement
c) restoration
d) analysis
11. Which type of enhancement operations are used to modify pixel values according to the
value of the pixel’s neighbors?
a) point operations
b) local operations
c) global operations
d) mask operations
12. In which type of progressive coding technique, gray color is encoded first and then
other colors are encoded?
a) quality progressive
b) resolution progressive
c) component progressive
d) region progressive
13. Which image processing technique is used to eliminate electronic noise by mathematical
process?
a) Frame averaging
b) Image understanding
c) Image compression
d) none
17. Which is a fundamental task in image processing used to match two or more pictures?
a) registration
b) segmentation
c) computer vision
d) image differencing
18. Which technique is used when images of the same scene are acquired from different
viewpoints?
a) multiview analysis
b) multitemporal analysis
c) multisensory analysis
d) image differencing
19. Which sensor is used for obtaining the video source in a 3D face recognition system?
a) optical
b) electronic
c) 3d sensor
d) 2d sensor
21. Which technique turns the unique lines, patterns, and spots apparent in a person's skin into a
mathematical space?
a) registration
b) segmentation
c) skin texture analysis
d) image differencing
28. ___ is the most reliable and accurate biometric identification technique.
a) Computer vision
b) Iris recognition
c) Medical imaging
d) Remote sensing
-Dr.C.G.Patil
Course Coordinator
Unit-2
1. DFT stands for:
a. Discrete Fourier transform
b. digital function transform
c. digital frequency transform
d. none
4. Restoration is:
a. attempts to reconstruct or recover an image that has been degraded by using a priori
knowledge of the degradation phenomenon.
b. attempts to reconstruct or recover an image that has been graded by using a priori knowledge
of the gradation phenomenon.
c. a & b
d. None of above
5. Restoration technique:
a. it is oriented toward modeling the degradation and applying the inverse process in order to
recover the original image.
b. it is oriented toward modeling the gradation and applying the inverse process in order to recover
the original image.
c. it is oriented toward modeling the degradation and applying the process in order to recover the
original image.
d. none of above
11. A ............. achieves smoothing comparable to the arithmetic mean filter, but it tends to lose
less image detail in the process.
a. Arithmetic mean filter
b. geometric mean filter
c. spatial filter
d. none of above
12. A geometric mean filter achieves smoothing comparable to the arithmetic mean filter, but it
tends to ..................... image detail in the process.
a. lossy
b. corrupted
c. lose less
d. none of above
13. The harmonic mean filter works well for .............. but fails for pepper noise.
a. salt and pepper noise
b. salt noise
c. pepper noise
d. none of above
14. The harmonic mean filter works well for salt noise, but fails for ...................
a. salt and pepper noise
b. salt noise
c. pepper noise
d. none of above
16. Contra harmonic mean filter is well suited for reducing or virtually eliminating the effects of
................................
a. salt and pepper noise
b. Gaussian noise
c. pepper noise
d. none of above
17. For ......................value of Q, the Contra harmonic mean filter eliminates pepper noise.
a. positive
b. negative
c. equal
d. none of above
18. For ...................... value of Q, the Contra harmonic mean filter eliminates salt noise.
a. positive
b. negative
c. equal
d. none of above
19. The Contra harmonic mean filter reduces to the arithmetic mean filter if ...... , and to the
harmonic mean filter if Q=-1.
a. Q=0
b. Q=1
c. Q=-1
d. none of above
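Note: a minimal Python sketch of the contraharmonic mean filter referred to in Q16-19 (the function name, window size and Q value are illustrative, not part of the syllabus); Q = 0 reduces it to the arithmetic mean and Q = -1 to the harmonic mean.

import numpy as np

def contraharmonic_mean(img, ksize=3, Q=1.5):
    # f^(x, y) = sum(g^(Q+1)) / sum(g^Q) over each ksize x ksize window S_xy.
    # Q > 0 reduces pepper noise, Q < 0 reduces salt noise.
    img = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + ksize, j:j + ksize]
            num = np.sum(w ** (Q + 1))
            den = np.sum(w ** Q)
            out[i, j] = num / den if den != 0 else 0.0
    return out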
20. The arithmetic and geometric mean filters are well suited for random noise like Gaussian or
uniform noise.
a. random noise
b. uniform noise
c. Gaussian noise
d. all of above
21. The ................... are well suited for random noise like Gaussian or uniform noise.
a. arithmetic and geometric mean filters
b. arithmetic mean filters
c. geometric mean filters
d. all of these
22. The Contra harmonic mean filter is well suited for ..................., but it has the disadvantage
that it must be known whether the noise is dark or light in order to select the proper sign for Q.
a. random noise
b. uniform noise
c. impulse noise
d. all of above
23. The best known order statistics filter is the median filter, which replaces the value of a pixel
by the median of the gray levels in the neighborhood Sxy of that pixel:
f̂(x, y) = median{g(s, t)}, (s, t) ∈ Sxy
a. max filter
b. min filter
c. median filter
d. none of above
a. max filter
b. min filter
c. median filter
d. none of above
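Note: the median (order-statistics) filter from Q23 can be illustrated in one SciPy call; the image below is a random placeholder, not exam material.

import numpy as np
from scipy.ndimage import median_filter

noisy = np.random.randint(0, 256, (64, 64)).astype(np.uint8)   # placeholder image
smoothed = median_filter(noisy, size=3)   # each pixel replaced by the median of its 3x3 neighbourhood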
31.
d. none of above
32.
a. 2-D DFT
b. 1-D DFT
c. 2-D FFT
d. none of above
d. none of above
a. Discrete function with twice the number of nonzero, its Fourier spectrum
b. Discrete function with the number of nonzero, its Fourier spectrum
c. Discrete function with fourth the number of nonzero, its Fourier spectrum
d. none of above
d. none of above
d. none of above
37.
d. none of above
38.
d. none of above
39. ............... can be thought of as one low-pass filtered image minus another low-pass filtered
image.
d. none of above
d. none of above
d. none of above
d. none of above
d. none of above
44. Given: observation y(m,n) and blurring function h(m,n); Design: g(m,n), such that the distortion between x(m,n) and its estimate x̂(m,n) is minimized
a. Non-blind deblurring/deconvolution
b. Blind deblurring/deconvolution
c. Non-blind blurring/convolution
d. none of above
45. Given: observation y(m,n); Design: g(m,n), such that the distortion between x(m,n) and its estimate x̂(m,n)
is minimized
a. Non-blind deblurring/deconvolution
b. Blind deblurring/deconvolution
c. Non-blind blurring/convolution
d. none of above
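Note: as a hedged sketch of the non-blind set-up in Q44 (blurring function h(m,n) known), a Wiener-style restoration filter in the frequency domain could be written as below; the constant K and the function name are illustrative assumptions, not the prescribed method.

import numpy as np

def wiener_deconvolve(y, h, K=0.01):
    # y: observed blurred image, h: known blurring function (PSF), K: assumed noise-to-signal ratio
    H = np.fft.fft2(h, s=y.shape)
    Y = np.fft.fft2(y)
    G = np.conj(H) / (np.abs(H) ** 2 + K)      # restoration filter g in the frequency domain
    x_hat = np.real(np.fft.ifft2(G * Y))       # estimate of the original image x(m, n)
    return x_hat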
Answers-Key Unit-2:
Unit-3
1. What is color?
d. all of above
d. all of above
a. any color can be obtained by mixing of three secondary colors with a right proportion.
b. any color can be obtained by mixing of three primary colors with a right proportion.
c. a & b
d. none of above
c. a & b
d. none of above
5. Additive rule
c. a & b
d. none of above
d. none of above
c. a & b
d. none of above
8. Subtractive rule
d. none of above
d. none of above
10. Primary colors for reflecting sources
a. secondary colors
c. a & b
d. none of above
c. a & b
d. none of above
12. If a surface coated with ........ is illuminated with white light, no red light is reflected from the
surface.
a. cyan
b. yellow
c. magenta
d. none of above
13. ......... subtracts red light from white light, which contains amounts of red, green and blue light.
a. cyan
b. yellow
c. magenta
d. none of above
d. none of above
a. CMYK
b. RGBK
c. RGYK
d. none of above
16.
d. none of above
d. none of above
c. a & b
d. none of above
a. To the relative purity or the amount of white light mixed with a hue. The pure spectrum colors
are semi-fully saturated.
b. To the relative purity or the amount of white light mixed with a hue. The pure spectrum colors
are partially saturated.
c. To the relative purity or the amount of white light mixed with a hue. The pure spectrum colors
are fully saturated.
d. none of above
a. less saturated
b. more saturated
c. better saturated
d. none of above
a. The brightness
b. The contrast
c. a & b
d. none of above
22. Total amount of energy that flows from the light source, measured in watts (W)
a. Radiance
b. Luminance
c. a & b
d. none of above
a. Radiance
b. Luminance
c. a & b
d. none of above
a. lumens
b. km
c. mm
d. none of above
d. none of above
26. Subjective descriptor that is hard to measure, similar to the achromatic notion of intensity
a. Radiance
b. Brightness
c. a & b
d. none of above
27. Principal sensing categories in eyes
d. none of above
d. none of above
a. additive primaries
b. subtractive primaries
c. a & b
d. none of above
30. The figure below shows:
a. additive primaries
b. subtractive primaries
c. a & b
d. none of above
a. Color TV
b. Picture
c. image
d. none of above
32. Suitable for hardware or applications
a. RGB model
b. CYM model
c. CYMK model
d. all of above
d. none of above
34. The number of bits used to represent each pixel in RGB space.
a. Pixel depth
b. no. of pixel
c. pixel size
d. none of above
d. none of above
a. used to compact image components that are useful in the representation and description of
region shape.
b. used to extract image components that are useful in the representation and description of
region shape.
c. used to extract image components that are useful in the compression of region shape.
d. none of above
37. The element of the set is the coordinates (x, y) of a pixel belonging to the object (Z²)
b. gray-scaled image
c. a & b
d. none of above
38. The element of the set is the coordinates (x, y) of a pixel belonging to the object together with
its gray level (Z³)
b. gray-scaled image
c. a & b
d. none of above
39. Erosion
a. Erosion of a set A by structuring element B: all z in A such that B is in A when origin of B=z
b. A ⊖ B = {z | (B)z ⊆ A}
d. all of above
40. Dilation
a. Dilation of a set A by structuring element B: all z in A such that B hits A when origin of B=z
b. A ⊕ B = {z | (B̂)z ∩ A ≠ ∅}
41. Erosion
b. A ⊖ B = {z | (B)z ⊆ A}
d. all of above
42. Dilation
b. A ⊕ B = {z | (B̂)z ∩ A ≠ ∅}
d. all of above
43. Wanted: Remove structures / fill holes without affecting the remaining parts
b. Solution: dilation
c. Solution: erosion
d. none of above
44. Opening:
b. A ∘ B = (A ⊖ B) ⊕ B
c. Eliminates protrusions
d. all of above
45. Closing:
d. all of above
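Note: to make the erosion, dilation, opening and closing definitions in Q39-45 concrete, here is a short SciPy sketch on a binary image (the array contents and structuring element are illustrative):

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, binary_opening, binary_closing

A = np.zeros((10, 10), dtype=bool)
A[2:8, 2:8] = True                          # a simple square object
B = np.ones((3, 3), dtype=bool)             # structuring element

eroded  = binary_erosion(A, structure=B)    # A eroded by B: z such that B placed at z fits inside A
dilated = binary_dilation(A, structure=B)   # A dilated by B: z such that B placed at z hits A
opened  = binary_opening(A, structure=B)    # (A eroded) then dilated: removes protrusions / small structures
closed  = binary_closing(A, structure=B)    # (A dilated) then eroded: fills small holes and gaps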
Answers-Key Unit-3:
Unit-4
c. a & b
d. none of above
2. Spatial Transformations
a. Rigid
b. Affine
c. Projective
d. all of above
3. Rigid Transformation
a. Rotation(R)
b. Translation (t)
c. Similarity (scale)
d. all of above
4. Affine Transformation
a. Rotation
b. Translation
c. Scale
d. all of above
a. Projective Transformation
b. affine Transformation
c. rigid transformation
d. none of above
6. Methods of Registration
a. Correlation
b. Fourier
c. Point Mapping
d. all of above
a. Correlation
b. Convolution
c. circular convolution
d. none of above
8. Given two images T and I, the 2D normalized ............... function measures the similarity for each
translation of an image patch:
C(u, v) = Σ T(x, y) I(x − u, y − v) / [ Σ I²(x − u, y − v) ]^(1/2)
a. Correlation
b. Convolution
c. circular convolution
d. none of above
9. Correlation Theorem
a. Fourier transform of the correlation of two images is the product of the Fourier transform of
one image and the Fourier transform of the other.
b. Fourier transform of the correlation of two images is the product of the Fourier transform of
one image and the inverse of the Fourier transform of the other.
c. Fourier transform of the correlation of two images is the product of the Fourier transform of
one image and the complex conjugate of the Fourier transform of the other.
d. none of above
10. Fourier Transform Based Methods
a. Phase-Correlation
c. Power cepstrum
d. all of above
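Note: a minimal sketch of the phase-correlation idea from Q9-10 (Fourier-based registration, valid only for a pure translation); the helper name and the epsilon constant are illustrative.

import numpy as np

def phase_correlation(a, b):
    # Estimate the (row, col) translation of image b relative to image a.
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                     # keep only the phase of the cross-power spectrum
    corr = np.real(np.fft.ifft2(R))
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    return tuple(int(s - n) if s > n // 2 else int(s) for s, n in zip(shift, a.shape))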
11. All Fourier-based methods are very efficient, but only work in cases of rigid transformation.
a. Projective Transformation
b. affine Transformation
c. rigid transformation
d. none of above
a. Control Points
c. Global Polynomial
d. all of above
a. Clustering
b. determine the optimal spatial transformation between images by an evaluation of all possible
pairs of feature matches.
c. a & b
d. none of above
a. Feature Space
b. Similarity Metrics
c. Search Strategy
d. all of above
c. a & b
d. none of above
c. a & b
d. none of above
17. Goal: Suppress non-common information & capture the common scene details
d. none of above
18. Apply the Laplacian high pass filter to the original images
d. none of above
a. Segmentation
b. fragment
c. addition
d. none of above
21. ............ should stop when the objects of interest in an application have been isolated.
a. Segmentation
b. fragment
c. addition
d. none of above
22. Segmentation algorithms generally are based on one of two basic properties of intensity values:
b. Similarity: to partition an image into regions that are similar according to a set of predefined
criteria.
c. a & b
d. none of above
a. points
b. lines
c. edges
d. all of above
a. a point has been detected at the location on which the mask is centered if |R| ≥ T
b. T is a nonnegative threshold
c. R is the sum of products of the coefficients with the gray levels contained in the region
encompassed by the mask.
d. all of above
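Note: a brief sketch of the point-detection rule in Q24 (flag a point where |R| >= T after applying a Laplacian-type mask); the threshold T is user-chosen and the function name is illustrative.

import numpy as np
from scipy.ndimage import convolve

def detect_points(img, T):
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    R = convolve(img.astype(float), mask, mode='reflect')   # sum of products of coefficients and gray levels
    return np.abs(R) >= T                                    # point detected wherever |R| >= T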
25. This is a:
d. none of above
26. This is a:
d. none of above
27. This is a:
a. Point Detection mask
d. none of above
28. This is a:
d. none of above
29. This is a:
d. none of above
30. ............. will give the maximum response when a line passes through the middle row of the mask against a
constant background.
a. Horizon mask
b. Horizontal mask
c. vertical mask
d. none of above
31. If we are interested in detecting all lines in an image in the direction defined by a given mask, we
simply run the mask through the image and threshold the absolute value of the result.
a. Line Detection
b. edge detection
c. point detection
d. none of above
32. Blurred edges tend to be ....... and sharp edges tend to be......:
a. thick, thin
b. thick, thick
c. thin, thin
d. none of above
33. An imaginary straight line joining the extreme positive and negative values of the second derivative
would cross zero near the midpoint of the edge.
a. two-crossing property
b. one-crossing property
c. Zero-crossing property
d. none of above
34. ............... should be a serious consideration prior to the use of derivatives in applications where noise is
likely to be present.
a. Image smoothing
b. image compression
c. image enhancement
d. none of above
35. This is a:
d. none of above
36. This is a:
d. none of above
37. This is a:
a. Prewitt edge detection gradient mask
d. none of above
a. two-crossing
b. one-crossing
c. zero-crossing
d. none of above
39. The threshold used for each pixel depends on the location of the pixel in terms of the subimages,
this type of thresholding is...........
a. adaptive
b. static
c. modern
d. none of above
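Note: an illustrative sketch of the adaptive thresholding in Q39, where each pixel is compared against a local statistic rather than one global value; the block size and offset are assumed parameters.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, block=15, offset=5):
    local_mean = uniform_filter(img.astype(float), size=block)   # mean of the block around each pixel
    return img > (local_mean - offset)                           # per-pixel threshold depends on location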
40 ................. contributes significantly to algorithms for feature detection, segmentation, and motion
analysis.
a. point detection
b. line detection
c. Edge detection
d. none of above
41. An ........ is a place where there is a rapid change in the brightness (or other property) of an image.
a. edge
b. point
c. line
d. none of above
d. none of above
c. a & b
d. none of above
b. Advantage: can readily incorporate high level knowledge of the image composition through region
threshold
c. a & b
d. none of above
d. none of above
Answers-Key Unit-4:
Unit-5
a. To represent and describe information embedded in an image in other forms that are less
suitable than the image itself.
b. To represent and describe information embedded in an image in other forms that are more
suitable than the image itself.
c. a & b
d. none of above
a. Easier to understand
d. all of above
3. What kind of information we can use for Image Representation and Description
a. Boundary, shape
b. Region
c. Texture
d. all of above
a. The boundary is a good representation of an object shape and also requires little memory.
b. The boundary is a poor representation of an object shape and also requires little memory.
c. The boundary is a good representation of an object shape and also requires much memory.
d. none of above
b. Chain codes
c. binary codes
d. none of above
a. Solution: treat a chain code as a rectangular sequence and redefine the starting point so that the
resulting sequence of numbers forms an integer of minimum magnitude.
b. Solution: treat a chain code as a circular sequence and redefine the starting point so that the resulting
sequence of numbers forms an integer of minimum magnitude.
c. Solution: treat a chain code as a circular sequence and redefine the ending point so that the resulting
sequence of numbers forms an integer of maximum magnitude.
d. none of above
d. none of above
d. none of above
a. circular invariant
b. rotational variant
c. rotational invariant
d. none of above
d. none of above
d. all of above
a. pixels “1” having at least one of its 8 neighbor pixels valued “0”
b. pixels “2” having at least one of its 8 neighbor pixels valued “0”
c. pixels “1” having at least one of its 8 neighbor pixels valued “1”
d. none of above
d. none of above
a. difference: 3 3 3 3
b. shape no.: 3 3 3 3
c. a & b
d. none of above
a. difference: 3 0 3 3 0 3
b. shape no.: 0 3 3 0 3 3
c. a & b
d. none of above
17. view a coordinate (x,y) as a complex number (x = real part and y = imaginary part) then
apply the Fourier transform to a sequence of boundary points.
a. Fourier descriptor
b. Laplace descriptor
c. Regional descriptor
d. none of above
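Note: a brief sketch of the Fourier descriptor idea in Q17: treat each boundary point (x, y) as the complex number x + jy and take the DFT of the sequence; the truncation argument is an illustrative assumption.

import numpy as np

def fourier_descriptors(boundary_xy, keep=None):
    pts = np.asarray(boundary_xy, dtype=float)   # N x 2 array of boundary coordinates
    s = pts[:, 0] + 1j * pts[:, 1]               # x = real part, y = imaginary part
    a = np.fft.fft(s)                            # Fourier descriptors of the boundary
    return a if keep is None else a[:keep]       # keep only the first coefficients for a compact description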
a. Fourier descriptor
b. Laplace descriptor
c. Regional Descriptors
d. none of above
a. C = 1/pi
b. C = 1/4pi
c. C = 1/8pi
d. none of above
a. Topological Descriptors
b. Laplace descriptor
c. Regional Descriptors
d. none of above
a. Euler number
b. Euler formula
c. Euler value
d. none of above
a. Euler number
b. Euler formula
c. Euler value
d. none of above
d. none of above
d. none of above
25. below figure shows:
a. smoothness
b. skewness
c. flatness
d. none of above
a. smoothness
b. skewness
c. flatness
d. none of above
a. smoothness
b. skewness
c. flatness
d. none of above
30. Fourier Approach for ............... Concept: convert 2D spectrum into 1D graphs
a. Texture Descriptor
b. regional Descriptor
c. topological Descriptor
d. none of above
31. Principal Components for ............. Purpose: to reduce dimensionality of a vector image while
maintaining information as much as possible.
a. Description
b. registration
c. observation
d. none of above
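Note: a compact sketch of the principal-components idea in Q31 (reduce the dimensionality of a set of vectors while keeping as much variance as possible); the function and variable names are placeholders.

import numpy as np

def pca_reduce(X, n_components):
    Xc = X - X.mean(axis=0)                      # mean-centre the feature vectors (rows of X)
    cov = np.cov(Xc, rowvar=False)               # covariance matrix of the features
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]                   # project onto the top principal components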
c. a & b
d. none of above
a. The matching of physical object is not good: It can be improved by morphology and geometric
mathematic
c. a & b
d. none of above
b. Pattern recognition
c. Pattern checker
d. none of above
a. Pattern classes
b. Pattern recognition
c. Pattern checker
d. none of above
a. Vectors
b. Strings
c. Trees
d. all of above
a. Tree descriptions
b. regional descriptions
c. space descriptions
d. none of above
a. semi-theoretic approaches
b. Final-theoretic approaches
c. Decision-theoretic approaches
d. none of above
39. The distance between two shapes a and b defined as: (Degree of similarity k)
a. D(a, b)
b. D(a, b)
2k
c. D(a, b)
d. none of above
a. R
max( a , b )
b. Represent the number of matches between the two strings, where a match occurs in the kth
position if ak = bk.
d. all of above
Answers-Key Unit-5:
MODEL QUESTION PAPER
Elective-II
A random
B vertex
C contour
D sampling
Ans.: D
Q2. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
A Sampling
B Interpolation
C Filters
Ans.: B
A line pairs
B pixels
C dots
Ans.: D
Q4. The most familiar single sensor used for Image Acquisition is
A Microdensitometer
B Photodiode
C CMOS
Ans.: B
Q5. The difference in intensity between the highest and the lowest intensity
levels in an image is ___________
A Noise
B Saturation
C Contrast
D Brightness
Ans.: C
Q6. The spatial coordinates of a digital image (x,y) are proportional to:
A Position
B Brightness
C Contrast
D Noise
Ans.: B
Q7. Among the following image processing techniques which is fast, precise
and flexible.
A Optical
B Digital
C Electronic
D Photographic
Ans.: B
A Height of image
B Width of image
C Amplitude of image
D Resolution of image
Ans.: C
Ans.: A
A Dynamic range
B Band range
C Peak range
D Resolution range
Ans.: A
A Saturation
B Hue
C Brightness
D Intensity
Ans.: B
A Interpretation
B Recognition
C Acquisition
D Segmentation
Ans.: A
Ans.: B
A Quantization
B Sampling
C Contrast
D Dynamic range
Ans.: B
Ans.: A
Ans.: A
Q17. For pixels p(x, y), q(s, t), the city-block distance between p and q is
defined as:
B D(p, q) = |x – s| + |y – t|
Ans.: B
Q18. The domain that refers to image plane itself and the domain that refers
to Fourier transform of an image is/are :
Q19. Using gray-level transformation, the basic function Logarithmic deals with
which of the following transformation?
Ans.: A
A s = L – 1 – r
Ans.: A
Q21. Which of the following transformations expands the value of dark pixels
while the higher-level values are being compressed?
A Log transformations
B Inverse-log transformations
C Negative transformations
D None of the mentioned
Ans.: A
Ans.: D
A 1
B -1
C 0
D None of the mentioned
Ans.: A
A Blurring
B Noise reduction
C All of the mentioned
D None of the mentioned
Ans.: C
Ans.: D
Q26. A spatial averaging filter having all the coefficients equal is termed
_________
A A box filter
B A weighted average filter
C A standard average filter
D A median filter
Ans.: A
Q27. An image contains noise having appearance as black and white dots
superimposed on the image. Which of the following noise(s) has the same
appearance?
A Salt-and-pepper noise
B Gaussian noise
C All of the mentioned
D None of the mentioned
Ans.: C
Q28. Which filter(s) used to find the brightest point in the image?
A Median filter
B Max filter
C Mean filter
D All of the mentioned
Ans.: B
Q29. In linear spatial filtering, what is the pixel of the image under mask
corresponding to the mask coefficient w (1, -1), assuming a 3*3 mask?
A f (x, -y)
B f (x + 1, y)
C f (x, y – 1)
D f (x + 1, y – 1)
Ans.: D
Ans.: C
Q31. Which of the following statement(s) is true for the given fact that
“Applying High pass filters has an effect on the background of the output
image”?
A The average background intensity increases to near white
B The average background intensity reduces to near black
C The average background intensity changes to a value average
D All of the mentioned
Ans.: B
A UV Rays
B Gamma Rays
C Microwaves
D Radio Waves
Ans.: B
A lumens
B watts
C armstrong
D hertz
Ans.: B
Q34. Which of the following is used for chest and dental scans?
A Hard X-Rays
B Soft X-Rays
C Radio waves
D Infrared Rays
Ans.: B
A c = wavelength / frequency
B frequency = wavelength / c
C wavelength = c * frequency
D c = wavelength * frequency
Ans.: C
Ans.: B
Ans.: A
A image enhancement
B image decompression
C image contrast
D image equalization
Ans.: B
A pixels
B matrix
C intensity
D coordinates
Ans.: C
A pixels
B matrix
C frames
D intensity
Ans.: D
Q41. Logic operations between two or more images are performed on pixel-
by-pixel basis, except for one that is performed on a single image. Which one
A AND
B OR
C NOT
D None of the mentioned
Ans.: C
Q42. An RGB color image with how many bits per pixel is referred to as a full-color image?
Ans.: B
A JPEG
B GIF
C BMP
D PNG
Ans.: B,D
Q44. Makes the file smaller by deleting parts of the file permanently (forever)
A Lossy Compression
B Lossless Compression
Ans.: A
Ans.: A
Ans.: A
A Gaussian
B laplacian
C ideal
D butterworth
Ans.: B
Q48. For a typical Fourier spectrum with values ranging from 0 to 10^6,
which of the following transformations is better to apply?
A nonzero
B zero
C positive
D negative
Ans.: B
A intensity transition
B shape transition
C color transition
D sign transition
Ans.: D
A discontinuity
B similarity
C continuity
D recognition
Ans.: A
A audio
B sound
C sunlight
D ultraviolet
Ans.: B
A ultrasonic
B radar
C visible and infrared
D Infrared
Ans.: B
Ans.: C
Q54. The digitization process i.e. the digital image has M rows and N columns,
requires decisions about values for M, N, and for the number, L, of gray levels
allowed for each pixel. The value M and N have to be:
Ans.: A
Q55. After digitization, a digital image has M rows and N columns (both positive
integers) and L gray levels, where L is an integer power of 2 (L = 2^k).
Then, the number b, of bits required to store a
digitized image is:
A b=M*N*k
B b=M*N*L
C b=M*L*k
D b=L*N*k
Ans.: A
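Note: a tiny worked example of the b = M x N x k storage formula from Q55; the image size and number of gray levels are illustrative.

M, N, L = 1024, 1024, 256     # illustrative: 1024 x 1024 image with 256 gray levels
k = L.bit_length() - 1        # L = 2**k, so k = 8
b = M * N * k                 # bits required to store the digitized image
print(b)                      # 8388608 bits = 1,048,576 bytes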
A bright
B dark
C colourful
D All of the Mentioned
Ans.: B
A Image enhancement
B Blurring
C Contrast adjustment
D None of the Mentioned
Ans.: A
A Intensive
B Local
C Global
D Random
Ans.: A
A Intensive
B Local
C Global
D Random
Ans.: B
Ans.: C
Model Question Paper
Subject :Digital Image Processing
Branch: E&TC
Class: BE
Semester:VIII
A)128
B)255
C)256
D)512
Ans:C
A) Slicing
B) Color Slicing
C) Enhancing
D) Cutting
Ans:B
3) A type of image is called a VHRR image. What is the definition of a VHRR image?
Ans:C
Ans:D
B)255
C)256
D)1
Ans:A
6) The Image sharpening in frequency domain can be achieved by which of the following method(s)?
Ans:B
7) The function of filters in Image sharpening in frequency domain is to perform reverse operation of
which of the following Lowpass filter?
D) None
Ans:C
8) The edges and other abrupt changes in gray-level of an image are associated with_________
C) Edges with high frequency and other abrupt changes in gray-level with low frequency components
D) Edges with low frequency and other abrupt changes in gray-level with high frequency components
Ans:A
A) |Gx|+|Gy|
B) |Gx|-|Gy|
C) |Gx|/|Gy|
D) |Gx|x|Gy|
Ans:A
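Note: a short sketch of the |Gx| + |Gy| gradient-magnitude approximation from Q9, using Sobel derivatives; the input image is a placeholder.

import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude(img):
    gx = sobel(img.astype(float), axis=1)   # horizontal derivative Gx
    gy = sobel(img.astype(float), axis=0)   # vertical derivative Gy
    return np.abs(gx) + np.abs(gy)          # cheaper approximation than sqrt(Gx**2 + Gy**2)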
10) Which of the following statement(s) is true for the given fact that “Applying Highpass filters has an
effect on the background of the output image”?
C) The average background intensity changes to a value average of black and white
Ans:B
A) Red Noise
B) White Noise
C) Black Noise
D) Normal Noise
Ans:D
A) Frequency
B) Time
C) Spatial
D) Plane
Ans:A
A) MRI
B) surgery
C) CT scan
D) Injections
Ans:A
14) Which one is not a level of image processing?
A) high level
B) low level
C) last level
D) Mid level
Ans:C
15) Filters that replaces pixel value with medians of intensity levels is
C) Median Filter
Ans:C
A) Position
B) Brightness
C) Contrast
D) Saturation
Ans:B
17)Among the following image processing techniques which is fast, precise and flexible
A) Optical
B) Digital
C) Electronic
D) Photographic
Ans:B
A) Height of image
B) Width of image
C) Amplitude of image
D) Resolution of image
Ans:C
19)The range of values spanned by the gray scale is called
A) Dynamic range
B) Band range
C) Peak range
D) Resolution range
Ans:A
A) Saturation
B) Hue
C) Brightness
D) Intensity
Ans:B
A) law enforcement
B) lithography
C) medicine
D) voice calling
Ans:D
A) Interpretation
B) Recognition
C) Acquisition
D) Segmentation
Ans:A
A) 256 X 256
B) 512 X 512
C) 1920 X 1080
D) 1080 X 1080
Ans:B
24) The number of gray values is an integer power of
A) 4
B)2
C)8
D)1
Ans:B
A) Image restoration
B) Image enhancement
C) Image acquisition
D) Segmentation
Ans:C
26)In which step of processing, the images are subdivided successively into smaller regions?
A) Image enhancement
B) Image acquisition
C) Segmentation
D) wavelet
Ans:D
A) Wavelets
C) Segmentation
D) Morphological processing
Ans:D
28) What is the step that is performed before color image processing in image processing?
B) Image enhancement
C) Image acquisition
D) Image restoration
Ans:D
29) How many steps are involved in image processing?
A) 10
B) 11
C) 9
D) 12
Ans:A
Ans:B
31) Which of the following steps deals with tools for extracting image components that are useful in
the representation and description of shape?
B) Segmentation
C) Compression
D) Morphological processing
Ans:D
32)In which step of the processing, assigning a label (e.g., “vehicle”) to an object based on its
descriptors is done?
A) Object recognition
B) Morphological processing
D) Segmentation
Ans:A
A) Deals with extracting attributes that result in some quantitative information of interest
B) Deals with techniques for reducing the storage required saving an image, or the bandwidth
required transmitting it
C) Deals with property in which images are subdivided successively into smaller regions
D) Deals with partitioning an image into its constituent parts or objects
Ans:D
Ans:B
35) A structured light illumination technique was used for ..................
A) lens deformation
B) inverse filtering
C) lens enhancement
D)lens error
Ans:A
A) edges
B) slices
C) boundaries
D) illumination
Ans:B
37) Major use of gamma rays imaging includes
A) Radars
B) astronomical observations
C) industry
D) lithography
Ans:B
A) Image addition
B) Image Multiplication
C) Image division
D) None
Ans:B
39) What is the sum of the coefficients of the mask defined using HPF?
A) 1
B) -1
C) 0
Ans:C
A ) thumb prints
B) paper currency
C)mp3
Ans:C
A) spatial coordinates
B) frequency coordinates
C)time coordinates
D) real coordinates
Ans:A
42) Lithography uses
A) ultraviolet
B) x-rays
C)gamma
D) visible rays
Ans:A
A)color enhancement
B) Frequency enhancement
C)Spatial enhancement
D)Detection
Ans:D
A) microscopy
B) medical
C) industry
D) radar
Ans:B
A) 1048576
B) 1148576
C) 1248576
D) 1348576
Ans:A
46)The lens is made up of concentric layers of
A) strong cells
B) inner cells
C) fibrous cells
D) outer cells
Ans:C
A) audio
B) AM
C) FM
D) Both b and c
Ans:D
A) 2 levels
B) 4levels
C) 8 levels
D) 16 levels
Ans:C
A) values
B) numbers
C) frequencies
D) intensities
Ans:D
50) In an M×N image, M is the number of
A) intensity levels
B) colors
C) rows
D) columns
Ans:C
51) Each element of the matrix is called
A) dots
B) coordinate
C) pixels
D) value
Ans:C
C) digitized image
D) analog signal
Ans:C
A) radiance
B) variance
C) sampling
D) quantization
Ans:C
A) pel
B) dot
C) resolution
D) digits
Ans:A
B) three
C) four
D) five
Ans:B
A) speed of light
B) light constant
C) Planck's constant
D) acceleration constant
Ans:C
Ans:B
A) b = NxK
B) b = MxN
C) b = MxNxK
D) b = MxK
Ans: C
Ans:A
B) illuminance
C) sampling
D) quantization
Ans:D
Model Question Paper
Subject: Digital Image Processing Branch:E&TC
Class:BE Semester:VIII
(B) Segmentation
Ans: a
(A) Compression
(B) Quantization
(C) Sampling
(D) Segmentation
Ans: c
3. _____ is the total amount of energy that flows from light source.
(A) Radiance
(B) Darkness
(C) Brightness
(D) Luminance
Ans: a
Ans: d
Ans: d
6.The transition between continuous values of the image function and its digital equivalent is
called ______________
a) Quantisation
b) Sampling
c) Rasterisation
d) None of the Mentioned
Ans:a
7. Images quantised with insufficient brightness levels will lead to the occurrence of
____________
a) Pixillation
b) Blurring
c) False Contours
Ans:c
8.The smallest discernible change in intensity level is called ____________
a) Intensity Resolution
b) Contour
c) Saturation
d) Contrast
Ans:a
9. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a) Sampling
b) Interpolation
c) Filters
Ans:b
10. The type of Interpolation where for each new location the intensity of the immediate pixel
is assigned is ___________
a) bicubic interpolation
b) cubic interpolation
c) bilinear interpolation
d) nearest neighbour interpolation
Ans:d
11. The type of Interpolation where the intensity of the FOUR neighbouring pixels is used to
obtain the intensity at a new location is called __________
a) Cubic interpolation
b) nearest neighbour interpolation
c) bilinear interpolation
d) bicubic interpolation
Ans: c
12. Dynamic range of imaging system is a ratio where the upper limit is determined by
a)Saturation
b) Noise
c) Brightness
d) Contrast
Ans:a
d) Contrast
Ans:c
14. Quantitatively, spatial resolution cannot be represented in which of the following ways
a) line pairs
b) pixels
c) dots
d) none of the Mentioned
Ans:d
Ans:b
16. The process of using known data to estimate values at unknown locations is called
a) Acquisition
b) Interpolation
c) Pixelation
d) None of the Mentioned
Ans:b
Ans:c
18. In which step of processing, the images are subdivided successively into smaller regions?
a) Image enhancement
b) Image acquisition
c) Segmentation
d) Wavelets
Ans:c
Ans:a
Ans:b
21.The principal factor to determine the spatial resolution of an image is _______
a) Quantization
b) Sampling
c) Contrast
d) Dynamic range
Ans:b
22. What causes the effect of an imperceptible set of very fine ridge-like structures in areas of
smooth gray levels?
a) Caused by the use of an insufficient number of gray levels in smooth areas of a digital
image
b) Caused by the use of huge number of gray levels in smooth areas of a digital image
c) All of the mentioned
d) None of the mentioned
Ans:a
23. What is the name of the effect caused by the use of an insufficient number of gray levels
in smooth areas of a digital image?
a) Dynamic range
b) Ridging
c) Graininess
d) False contouring
Ans:d
24. Using rough rule of thumb, and assuming powers of 2 for convenience, what image size
are about the smallest images that can be expected to be reasonably free of objectionable
sampling checkerboards and false contouring?
a) 512*512pixels and 16 gray levels
b) 256*256pixels and 64 gray levels
c) 64*64pixels and 16 gray levels
d) 32*32pixels and 32 gray levels
Ans:b
25. What does a shift up and right in the curves of an isopreference curve simply mean? Verify
in terms of N (number of pixels) and k (L = 2^k, where L is the number of gray levels) values.
a) Smaller values for N and k, implies a better picture quality
b) Larger values for N and k, implies low picture quality
c) Larger values for N and k, implies better picture quality
d) Smaller values for N and k, implies low picture quality
Ans:c
26. How do the curves behave with respect to the detail in the image in an isopreference curve?
a) Curves tend to become more vertical as the detail in the image decreases
b) Curves tend to become less vertical as the detail in the image increases
c) Curves tend to become less vertical as the detail in the image decreases
d) Curves tend to become more vertical as the detail in the image increases
Ans:d
27. For an image with a large amount of detail, if the value of N (number of pixels) is fixed
then what is the gray level dependency in the perceived quality of this type of image?
a) Totally independent of the number of gray levels used
b) Nearly independent of the number of gray levels used
c) Highly dependent of the number of gray levels used
d) None of the mentioned
Ans:b
Ans:a
29. For a band-limited function, which Theorem says that “if the function is sampled at a rate
equal to or greater than twice its highest frequency, the original function can be recovered
from its samples”?
a) Band-limitation theorem
b) Aliasing frequency theorem
c) Shannon sampling theorem
d) None of the mentioned
Ans:c
30. What is the name of the phenomenon that corrupts the sampled image, and how does it
happen?
a) Shannon sampling, if the band-limited functions are undersampled
b) Shannon sampling, if the band-limited functions are oversampled
c) Aliasing, if the band-limited functions are undersampled
d) Aliasing, if the band-limited functions are oversampled
Ans:c
Ans:a
33. If h(rk) = nk, where rk is the kth gray level and nk the total number of pixels with gray level rk, is a histogram
in the gray level range [0, L – 1], then how can we normalize the histogram?
a) If each value of histogram is added by total number of pixels in image, say n, p(rk)=nk+n
b) If each value of histogram is subtracted by total number of pixels in image, say n, p(rk)=nk-
n
c) If each value of histogram is multiplied by total number of pixels in image, say n,
p(rk)=nk * n
d) If each value of histogram is divided by total number of pixels in image, say n, p(rk)=nk / n
Ans:d
Ans:a
35. A low contrast image will have what kind of histogram when the histogram, h(rk) = nk,
rk the kth gray level and nk total pixels with gray level rk, is plotted nk versus rk?
a) The histogram that are concentrated on the dark side of gray scale
b) The histogram whose component are biased toward high side of gray scale
c) The histogram that is narrow and centered toward the middle of gray scale
d) The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform
Ans:c
36. A bright image will have what kind of histogram, when the histogram, h(rk) = nk, rk the
kth gray level and nk total pixels with gray level rk, is plotted nk versus rk?
a) The histogram that are concentrated on the dark side of gray scale
b) The histogram whose component are biased toward high side of gray scale
c) The histogram that is narrow and centered toward the middle of gray scale
d) The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform
Ans:b
37. A high contrast image and a dark image will have what kind of histogram respectively,
when the histogram, h(rk) = nk, rk the kth gray level and nk total pixels with gray level rk, is
plotted nk versus rk?
The histogram that are concentrated on the dark side of gray scale.
The histogram whose component are biased toward high side of gray scale.
The histogram that is narrow and centered toward the middle of gray scale.
The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform.
a) I) And II) respectively
b) III) And II) respectively
c) II) And IV) respectively
d) IV) And I) respectively
Ans:d
38. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if the T(r) is single valued in interval 0 ≤ r ≤ 1, what does it signifies?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:a
39. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if the T(r) is monotonically increasing in interval 0 ≤ r ≤ 1, what does it signifies?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:b
40. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if the T(r) is satisfying 0 ≤ T(r) ≤ 1 in interval 0 ≤ r ≤ 1, what does it signifies?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:c
41. What is the full form for PDF, a fundamental descriptor of random variables i.e. gray
values in an image?
a) Pixel distribution function
b) Portable document format
c) Pel deriving function
d) Probability density function
Ans:d
Ans:c
43. For the transformation T(r) = ∫₀ʳ pr(w) dw, where r is the gray value of the input image, pr(r) is the PDF of
random variable r, and w is a dummy variable. If the PDF is always positive and the
function under the integral gives the area under the function, the transformation is said to be
__________
a) Single valued
b) Monotonically increasing
c) All of the mentioned
d) None of the mentioned
Ans:c
44. The transformation T(rk) = Σ(j=0 to k) nj / n, k = 0, 1, 2, …, L-1, where L is the maximum possible gray value
and rk is the kth gray level, is called _______
a) Histogram linearization
b) Histogram equalization
c) All of the mentioned
d) None of the mentioned
Ans:c
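Note: a minimal sketch of the histogram-equalization transformation T(rk) = Σ nj/n from Q44, applied as a lookup table to an 8-bit image (assumed uint8 input).

import numpy as np

def histogram_equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)        # n_k for each gray level r_k
    cdf = np.cumsum(hist) / img.size                     # T(r_k) = sum over j<=k of n_j / n
    lut = np.round((L - 1) * cdf).astype(np.uint8)       # scale back to [0, L-1]
    return lut[img]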
45. If the histogram of same images, with different contrast, are different, then what is the
relation between the histogram equalized images?
a) They look visually very different from one another
b) They look visually very similar to one another
c) They look visually different from one another just like the input images
d) None of the mentioned
Ans:b
46.In 4-neighbours of a pixel p, how far are each of the neighbours located from p?
a) one pixel apart
b) four pixels apart
c) alternating pixels
d) none of the Mentioned
Ans:a
47. If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path
between them consisting of pixels entirely in S.
a) continuous
b) ambiguous
c) connected
d) none of the Mentioned
Ans:c
Ans:b
49. Two regions are said to be ___________ if their union forms a connected set.
a) Adjacent
b) Disjoint
c) Closed
d) None of the Mentioned
Ans:a
50. If an image contains K disjoint regions, what does the union of all the regions represent?
a) Background
b) Foreground
c) Outer Border
d) Inner Border
Ans:b
51. For a region R, the set of points that are adjacent to the complement of R is called as
________
a) Boundary
b) Border
c) Contour
d) All of the Mentioned
Ans:d
52. The distance measure for which the pixels p and q with a distance less than or equal to some
value of radius r form a disk centred at (x,y) is called:
a) Euclidean distance
b) City-Block distance
c) Chessboard distance
d) None of the Mentioned
Ans:a
53. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a diamond centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
Ans:c
54. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a square centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
Ans:b
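Note: to make Q52-54 concrete, a small sketch computing the three distance measures between two pixels; the helper name is illustrative.

import numpy as np

def pixel_distances(p, q):
    d = np.abs(np.subtract(p, q))            # |x - s|, |y - t|
    return {
        "euclidean": float(np.hypot(*d)),    # disk of radius r around (x, y)
        "city_block": int(d.sum()),          # D4 distance: diamond around (x, y)
        "chessboard": int(d.max()),          # D8 distance: square around (x, y)
    }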
Ans:a
Ans:c
58. What is the process of moving a filter mask over the image and computing the sum of
products at each location called as?
a) Convolution
b) Correlation
c) Linear spatial filtering
d) Non linear spatial filtering
Ans:b
59. The standard deviation controls ___________ of the bell (2-D Gaussian function of bell
shape).
a) Size
b) Curve
c) Tightness
d) None of the Mentioned
Ans:c
Ans:a
Image Sampling and Quantization
1. A continuous image is digitised at .............. points.
a) random
b) vertex
c) contour
d) sampling
Answer: d
Explanation: The sampling points are ordered in the plane and their relation is called a Grid.
2. The transition between continuous values of the image function and its digital equivalent is
called
a) Quantisation
b) Sampling
c) Rasterisation
d) None of the Mentioned
Answer: a
Explanation: The transition between continuous values of the image function and its digital
equivalent is called Quantisation.
3. Images quantised with insufficient brightness levels will lead to the occurrence of
a) Pixillation
b) Blurring
c) False Contours
d) None of the Mentioned
Answer: c
Explanation: This effect arises when the number of brightness levels is lower than what the human
eye can distinguish.
Answer: a
Explanation: Number of bits used to quantise intensity of an image is called intensity resolution.
5. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a) Sampling
b) Interpolation
c) Filters
d) None of the Mentioned
Answer: b
Explanation: Interpolation is the basic tool used for zooming, shrinking, rotating, etc.
6. The type of Interpolation where for each new location the intensity of the immediate pixel is
assigned is
a) bicubic interpolation
b) cubic interpolation
c) bilinear interpolation
d) nearest neighbour interpolation
Answer: d
Explanation: It is called Nearest Neighbour Interpolation since for each new location the
intensity of the nearest neighbouring pixel is assigned.
7. The type of Interpolation where the intensity of the FOUR neighbouring pixels is used to
obtain the intensity at a new location is called
a) cubic interpolation
b) nearest neighbour interpolation
c) bilinear interpolation
d) bicubic interpolation
Answer: c
Explanation: Bilinear interpolation is where the FOUR neighbouring pixels are used to estimate
the intensity at a new location.
8. Dynamic range of imaging system is a ratio where the upper limit is determined by
a) Saturation
b) Noise
c) Brightness
d) Contrast
Answer: a
Explanation: Saturation is taken as the Numerator.
Answer: c
Explanation: Noise is taken as the Denominator.
10. Quantitatively, spatial resolution cannot be represented in which of the following ways
a) line pairs
b) pixels
c) dots
d) none of the Mentioned
Answer: d
Explanation: All the options can be used to represent spatial resolution.
11. The most familiar single sensor used for Image Acquisition is
a) Microdensitometer
b) Photodiode
c) CMOS
d) None of the Mentioned
Answer: b
Explanation: Photodiode is the most commonly used single sensor made up of silicon materials.
Answer: b
Explanation: Sensor strips are very common next to single sensor and use in-line arrangement.
14. The section of the real plane spanned by the coordinates of an image is called the
a) Spatial Domain
b) Coordinate Axes
c) Plane of Symmetry
d) None of the Mentioned
Answer: a
Explanation: The section of the real plane spanned by the coordinates of an image is called the
Spatial Domain, with the x and y coordinates referred to as Spatial coordinates.
15. The difference in intensity between the highest and the lowest intensity levels in an image is
a) Noise
b) Saturation
c) Contrast
d) Brightness
Answer: c
Explanation: Contrast is the measure of the difference in intensity between the highest and the
lowest intensity levels in an image.
16. ............ is the effect caused by the use of an insufficient number of intensity levels in
smooth areas of a digital image.
a) Gaussian smooth
b) Contouring
c) False Contouring
d) Interpolation
Answer: c
Explanation: It is called so because the ridges resemble the contours of a map.
17. The process of using known data to estimate values at unknown locations is called
a) Acquisition
b) Interpolation
c) Pixelation
d) None of the Mentioned
Answer: b
Explanation: Interpolation is the process used to estimate unknown locations. It is applied in all
image resampling methods.
Answer: c
Explanation: Because Pixelation deals with enlargement of pixels.
19. The procedure done on a digital image to alter the values of its individual pixels is
a) Neighbourhood Operations
b) Image Registration
c) Geometric Spatial Transformation
d) Single Pixel Operation
Answer: d
Explanation: It is expressed as a transformation function T, of the form s=T(z) , where z is the
intensity.
20. In Geometric Spatial Transformation, ............ are points whose locations are known precisely in input
and reference images.
a) Tie points
b) Réseau points
c) Known points
d) Key-points
Answer: a
Explanation: Tie points, also called Control points are points whose locations are known
precisely in input and reference images.
22. In the Visible spectrum, the colour .......... has the maximum wavelength.
a) Violet
b) Blue
c) Red
d) Yellow
Answer: c
Explanation: Red is towards the right in the electromagnetic spectrum sorted in the increasing
order of wavelength.
Answer: d
Explanation: It is usually written as wavelength = c / frequency.
Answer: a
Explanation: Electromagnetic waves are visualised as sinusoidal wave.
Answer: b
Explanation: Radiance is the total amount of energy that flows from the light source and is
measured in Watts.
26. Which of the following is used for chest and dental scans?
a) Hard X-Rays
b) Soft X-Rays
c) Radio waves
d) Infrared Rays
Answer: b
Explanation: Soft X-Rays (low energy) are used for dental and chest scans.
Answer: d
Explanation: Brightness is a subjective descriptor of light perception that is impossible to measure.
Answer: a
Explanation: Each bundle of massless energy is called a Photon.
Answer: b
Explanation: Achromatic light is also called monochromatic light (light void of color).
31. How is array operation carried out involving one or more images?
a) array by array
b) pixel by pixel
c) column by column
d) row by row
Answer: b
Explanation: Any array operation is carried out on a pixel by pixel basis.
32. The property indicating that the output of a linear operation due to the sum of two inputs is
same as performing the operation on the inputs individually and then summing the results is
called
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
Answer: a
Explanation: This property is called additivity .
33. The property indicating that the output of a linear operation applied to a constant times an input is the
same as the output of the operation due to the original input multiplied by that constant is called
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
Answer: c
Explanation: This property is called homogeneity .
Answer: a
Explanation: Mask mode radiography is an important medical imaging area based on Image
Subtraction.
Answer: b
Explanation: A common use of image multiplication is Masking, also called ROI operation.
Answer: c
Explanation: A is called the subset of B.
38. Consider two regions A and B composed of foreground pixels. The ............ of these two
sets is the set of elements belonging to set A or set B or both.
a) OR
b) AND
c) NOT
d) XOR
Answer: a
Explanation: This is called an OR operation.
39. Imaging systems having physical artefacts embedded in the imaging sensors produce a set of
points called
a) Tie Points
b) Control Points
c) Reseau Marks
d) None of the Mentioned
Answer: c
Explanation: These points are called “known” points or “Reseau marks”.
40. Image processing approaches operating directly on pixels of input image work directly in
a) Transform domain
b) Spatial domain
c) Inverse transformation
d) None of the Mentioned
Answer: b
Explanation: Operations directly on pixels of input image work directly in Spatial Domain.
41. Noise reduction is obtained by blurring the image using smoothing filter.
a) True
b) False
Answer: a
Explanation: Noise reduction is obtained by blurring the image using smoothing filter. Blurring
is used in pre-processing steps, such as removal of small details from an image prior to object
extraction and, bridging of small gaps in lines or curves.
Answer: d
Explanation: The output or response of a smoothing, linear spatial filter is simply the average of
the pixels contained in the neighbourhood of the filter mask.
Answer: b
Explanation: Since the smoothing spatial filter performs the average of the pixels, it is also called
as averaging filter.
44. Which of the following in an image can be removed by using smoothing filter?
a) Smooth transitions of gray levels
b) Smooth transitions of brightness levels
c) Sharp transitions of gray levels
d) Sharp transitions of brightness levels
Answer: c
Explanation: Smoothing filter replaces the value of every pixel in an image by the average value
of the gray levels. So, this helps in removing the sharp transitions in the gray levels between the
pixels. This is done because, random noise typically consists of sharp transitions in gray levels.
Answer: a
Explanation: Edges, which almost always are desirable features of an image, also are
characterized by sharp transitions in gray level. So, averaging filters have an undesirable side
effect that they blur these edges.
Answer: b
Explanation: One of the application of smoothing spatial filters is that, they help in smoothing
the false contours that result from using an insufficient number of gray levels.
47. The mask shown in the figure below belongs to which type of filter?
Answer: d
Explanation: This is a smoothing spatial filter. This mask yields a so called weighted average,
which means that different pixels are multiplied with different coefficient values. This helps in
giving much importance to the some pixels at the expense of others.
48. The mask shown in the figure below belongs to which type of filter?
Answer: c
Explanation: The mask shown in the figure represents a 3×3 smoothing filter. Use of this filter
yields the standard average of the pixels under the mask.
Answer: a
Explanation: A spatial averaging filter or spatial smoothening filter in which all the coefficients
are equal is also called as box filter.
50. If the size of the averaging filter used to smooth the original image to produce the first image is 9, then
what would be the size of the averaging filter used in smoothing the same original picture to
produce the second image?
a) 3
b) 5
c) 9
d) 15
Answer: d
Explanation: We know that as the size of the averaging filter used to smooth the original image
increases, so does the blurring of the image. Since the second image is more
blurred than the first image, the window size should be more than 9.
51. Which of the following comes under the application of image blurring?
a) Object detection
b) Gross representation
c) Object motion
d) Image segmentation
Answer: b
Explanation: An important application of spatial averaging is to blur an image for the purpose of
getting a gross representation of interested objects, such that the intensity of the small objects
blends with the background and large objects become easy to detect.
Answer: a
Explanation: Order-statistic filters are nonlinear smoothing spatial filters whose response is based
on ordering or ranking the pixels contained in the image area encompassed by the filter, and
then replacing the value of the central pixel with the value determined by the ranking result.
53. Median filter belongs to which category of filters?
a) Linear spatial filter
b) Frequency domain filter
c) Order-statistic filter
d) Sharpening filter
Answer: c
Explanation: The median filter belongs to order-statistic filters, which, as the name implies,
replaces the value of the pixel by the median of the gray levels that are present in the
neighbourhood of the pixels.
Answer: a
Explanation: Median filters are used to remove impulse noises, also called as salt-and-pepper
noise because of its appearance as white and black dots in the image.
55. What is the maximum area of the cluster that can be eliminated by using an n×n median
filter?
a) n²
b) n²/2
c) 2·n²
d) n
Answer: b
Explanation: Isolated clusters of pixels that are light or dark with respect to their neighbours, and
whose area is less than n²/2, i.e., half the area of the filter, can be eliminated by using an n×n
median filter.
56. Which of the following expression is used to denote spatial domain process?
a) g(x,y)=T[f(x,y)]
b) f(x+y)=T[g(x+y)]
c) g(xy)=T[f(xy)]
d) g(x-y)=T[f(x-y)]
Answer: a
Explanation: Spatial domain processes will be denoted by the expression g(x,y)=T[f(x,y)], where
f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f, defined over
some neighborhood of (x, y). In addition, T can operate on a set of input images, such as
performing the pixel-by-pixel sum of K images for noise reduction.
57. Which of the following shows three basic types of functions used frequently for image
enhancement?
a) Linear, logarithmic and inverse law
b) Power law, logarithmic and inverse law
c) Linear, logarithmic and power law
d) Linear, exponential and inverse law
Answer: c
Explanation: In introduction to gray-level transformations, which shows three basic types of
functions used frequently for image enhancement: linear (negative and identity transformations),
logarithmic (log and inverse-log transformations), and power-law (nth power and nth root
transformations).The identity function is the trivial case in which output intensities are identical
to input intensities. It is included in the graph only for completeness.
58. Which expression is obtained by performing the negative transformation on the negative of
an image with gray levels in the range[0,L-1] ?
a) s=L+1-r
b) s=L+1+r
c) s=L-1-r
d) s=L-1+r
Answer: c
Explanation: The negative of an image with gray levels in the range[0,L-1] is obtained by using
the negative transformation, which is given by the expression: s=L-1-r.
Answer: b
Explanation: The general form of the log transformation: s=clog10(1+r), where c is a constant,
and it is assumed that r ≥ 0.
Answer: a
Explanation: Power-law transformations have the basic form: s = c·r^γ, where c and γ are positive
constants. Sometimes s = c·r^γ is written as s = c·(r+ε)^γ to account for an offset (that is, a measurable
output when the input is zero).
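Note: a short sketch of the power-law (gamma) transformation s = c·r^γ described above, applied to a normalised 8-bit image; the values of c and gamma are illustrative.

import numpy as np

def power_law_transform(img, c=1.0, gamma=0.5):
    r = img.astype(float) / 255.0                       # normalise gray levels to [0, 1]
    s = c * np.power(r, gamma)                          # s = c * r**gamma
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)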
61. What is the name of process used to correct the power-law response phenomena?
a) Beta correction
b) Alpha correction
c) Gamma correction
d) Pie correction
Answer: c
Explanation: A variety of devices used for image capture, printing, and display respond
according to a power law. By convention, the exponent in the power-law equation is referred to
as gamma .The process used to correct these power-law response phenomena is called gamma
correction.
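A short sketch of gamma correction on a normalized image follows; the constant c = 1 and the gamma value 0.5 are illustrative assumptions, not values taken from the question bank.

```python
import numpy as np

def gamma_correct(img_uint8, gamma, c=1.0):
    """Apply the power-law transformation s = c * r**gamma on a normalized image."""
    r = img_uint8 / 255.0                  # map gray levels to [0, 1]
    s = c * np.power(r, gamma)             # power-law response
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
print(gamma_correct(img, gamma=0.5))       # gamma < 1 brightens the dark values
```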
62. Which of the following transformation function requires much information to be specified at
the time of input?
a) Log transformation
b) Power transformation
c) Piece-wise transformation
d) Linear transformation
View Answer
Answer: c
Explanation: The practical implementation of some important transformations can be formulated
only as piecewise functions. The principal disadvantage of piecewise functions is that their
specification requires considerably more user input.
63. In contrast stretching, if r1=s1 and r2=s2 then which of the following is true?
a) The transformation is not a linear function that produces no changes in gray levels
b) The transformation is a linear function that produces no changes in gray levels
c) The transformation is a linear function that produces changes in gray levels
d) The transformation is not a linear function that produces changes in gray levels
View Answer
Answer: b
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the transformation
function. If r1=s1 and r2=s2 then the transformation is a linear function that produces no changes
in gray levels.
64. In contrast stretching, if r1=r2, s1=0 and s2=L-1 then which of the following is true?
a) The transformation becomes a thresholding function that creates an octal image
b) The transformation becomes a override function that creates an octal image
c) The transformation becomes a thresholding function that creates a binary image
d) The transformation becomes a thresholding function that do not create an octal image
View Answer
Answer: c
Explanation: If r1=r2, s1=0 and s2=L-1,the transformation becomes a thresholding function that
creates a binary image.
65. In contrast stretching, if r1≤r2 and s1≤s2 then which of the following is true?
a) The transformation function is double valued and exponentially increasing
b) The transformation function is double valued and monotonically increasing
c) The transformation function is single valued and exponentially increasing
d) The transformation function is single valued and monotonically increasing
View Answer
Answer: d
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the transformation
function. If r1≤r2 and s1≤s2 then the function is single valued and monotonically increasing.
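The piecewise behaviour controlled by (r1,s1) and (r2,s2) can be sketched with a simple piecewise-linear mapping; the control points used below are illustrative assumptions.

```python
import numpy as np

def contrast_stretch(img_uint8, r1, s1, r2, s2):
    """Piecewise-linear contrast stretching through (0,0), (r1,s1), (r2,s2), (255,255)."""
    xp = [0, r1, r2, 255]      # input gray levels (assumed strictly increasing)
    fp = [0, s1, s2, 255]      # corresponding output gray levels
    return np.interp(img_uint8, xp, fp).astype(np.uint8)

img = np.array([[30, 100, 150, 220]], dtype=np.uint8)
print(contrast_stretch(img, r1=70, s1=20, r2=180, s2=235))   # mid-range contrast expanded
```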
66. In which type of slicing, highlighting a specific range of gray levels in an image often is
desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing
View Answer
Answer: a
Explanation: Highlighting a specific range of gray levels in an image often is desired in gray-
level slicing. Applications include enhancing features such as masses of water in satellite
imagery and enhancing flaws in X-ray images.
67. Which of the following depicts the main functionality of the Bit-plane slicing?
a) Highlighting a specific range of gray levels in an image
b) Highlighting the contribution made to total image appearance by specific bits
c) Highlighting the contribution made to total image appearance by specific byte
d) Highlighting the contribution made to total image appearance by specific pixels
View Answer
Answer: b
Explanation: Instead of highlighting gray-level ranges, highlighting the contribution made to
total image appearance by specific bits might be desired. Suppose , each pixel in an image is
represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-
plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit
bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image
and plane 7 contains all the high-order bits.
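A small sketch that extracts the eight 1-bit planes of an 8-bit image with bitwise shifts; the toy pixel values are assumptions for the example.

```python
import numpy as np

img = np.array([[13, 200], [255, 0]], dtype=np.uint8)    # toy 8-bit image

# Bit-plane k holds the k-th bit of every pixel: plane 0 = LSB, plane 7 = MSB.
planes = [((img >> k) & 1) for k in range(8)]

print(planes[0])   # least significant bits of each pixel
print(planes[7])   # most significant bits of each pixel
```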
Answer: b
Explanation: The sharpening of image helps in highlighting the fine details that are present in the
image or to enhance the details that are blurred due to some reason like adding noise.
Answer: a
Explanation: The applications of image sharpening are present in various fields like electronic
printing, autonomous guidance in military systems, medical imaging and industrial inspection.
70. In spatial domain, which of the following operation is done on the pixels in sharpening the
image?
a) Integration
b) Average
c) Median
d) Differentiation
View Answer
Answer: d
Explanation: We know that, in blurring the image, we perform the average of pixels which can
be considered as integration. As sharpening is the opposite process of blurring, logically we can
tell that we perform differentiation on the pixels to sharpen the image.
71. Image differentiation enhances the edges, discontinuities and deemphasizes the pixels with
slow varying gray levels.
a) True
b) False
View Answer
Answer: a
Explanation: Fundamentally, the strength of the response of the derivative operator is
proportional to the degree of discontinuity in the image. So, we can state that image
differentiation enhances the edges, discontinuities and deemphasizes the pixels with slow
varying gray levels.
72. In which of the following cases, we wouldn't worry about the behaviour of sharpening filter?
a) Flat segments
b) Step discontinuities
c) Ramp discontinuities
d) Slow varying gray values
View Answer
Answer: d
Explanation: We are interested in the behaviour of derivatives used in sharpening in the constant
gray level areas i.e., flat segments, and at the onset and end of discontinuities, i.e., step and ramp
discontinuities.
73. Which of the following is the valid response when we apply a first derivative?
a) Non-zero at flat segments
b) Zero at the onset of gray level step
c) Zero in flat segments
d) Zero along ramps
View Answer
Answer: c
Explanation: The derivations of digital functions are defined in terms of differences. The
definition we use for first derivative should be zero in flat segments, nonzero at the onset of a
gray level step or ramp and nonzero along the ramps.
74. Which of the following is not a valid response when we apply a second derivative?
a) Zero response at onset of gray level step
b) Nonzero response at onset of gray level step
c) Zero response at flat segments
d) Nonzero response along the ramps
View Answer
Answer: b
Explanation: The derivations of digital functions are defined in terms of differences. The
definition we use for second derivative should be zero in flat segments, zero at the onset of a
gray level step or ramp and nonzero along the ramps.
75. If f(x,y) is an image function of two variables, then the first order derivative of a one
dimensional function, f(x) is:
a) f(x+1)-f(x)
b) f(x)-f(x+1)
c) f(x-1)-f(x+1)
d) f(x)+f(x-1)
View Answer
Answer: a
Explanation: The first order derivative of a single dimensional function f(x) is the difference
between f(x) and f(x+1).
That is, ∂f/∂x=f(x+1)-f(x).
Answer: a
Explanation: A point which has a very high or very low gray level value compared to its
neighbours is called an isolated point or noise point. The noise point is of one
pixel size.
77. What is the thickness of the edges produced by first order derivatives when compared to that
of second order derivatives?
a) Finer
b) Equal
c) Thicker
d) Independent
View Answer
Answer: c
Explanation: We know that, the first order derivative is nonzero along the entire ramp while the
second order is zero along the ramp. So, we can conclude that the first order derivatives produce
thicker edges and the second order derivatives produce much finer edges.
78. First order derivative can enhance the fine detail in the image compared to that of second
order derivative.
a) True
b) False
View Answer
Answer: b
Explanation: The response at and around the noise point is much stronger for the second order
derivative than for the first order derivative. So, we can state that the second order derivative is
better to enhance the fine details in the image including noise when compared to that of first
order derivative.
79. Which of the following derivatives produce a double response at step changes in gray level?
a) First order derivative
b) Third order derivative
c) Second order derivative
d) First and second order derivatives
View Answer
Answer: c
Explanation: Second order derivatives produce a double line response for the step changes in the
gray level. We also note of second-order derivatives that, for similar changes in gray-level values
in an image, their response is stronger to a line than to a step, and to a point than to a line.
Answer: d
Explanation: Highlighting the fine detail in an image or Enhancing detail that has been blurred
because of some error or some natural effect of some method of image acquisition, is the
principal objective of sharpening spatial filters.
Answer: b
Explanation: Smoothing is analogous to integration and so, sharpening to spatial differentiation.
82. Which of the following fact(s) is/are true about sharpening spatial filters using digital
differentiation?
a) Sharpening spatial filter response is proportional to the discontinuity of the image at the point
where the derivative operation is applied
b) Sharpening spatial filters enhances edges and discontinuities like noise
c) Sharpening spatial filters deemphasizes areas that have slowly varying gray-level values
d) All of the mentioned
View Answer
Answer: d
Explanation: Derivative operator's response is proportional to the discontinuity of the image
at the point where the derivative operation is applied.
Image differentiation enhances edges and discontinuities like noise and deemphasizes areas that
have slowly varying gray-level values.
Since sharpening spatial filters are analogous to differentiation, all the above mentioned
facts are true for sharpening spatial filters.
83. Which of the facts(s) is/are true for the first order derivative of a digital function?
a) Must be nonzero in the areas of constant grey values
b) Must be zero at the onset of a gray-level step or ramp discontinuities
c) Must be nonzero along the gray-level ramps
d) None of the mentioned
View Answer
Answer: c
Explanation: The first order derivative of a digital function is defined as:
Must be zero in the areas of constant grey values.
Must be nonzero at the onset of a gray-level step or ramp discontinuities.
Must be nonzero along the gray-level ramps.
84. Which of the facts(s) is/are true for the second order derivative of a digital function?
a) Must be zero in the flat areas
b) Must be nonzero at the onset and end of a gray-level step or ramp discontinuities
c) Must be zero along the ramps of constant slope
d) All of the mentioned
View Answer
Answer: d
Explanation: The second order derivative of a digital function is defined as:
Must be zero in the flat areas i.e. areas of constant grey values.
Must be nonzero at the onset and end of a gray-level step or ramp discontinuities.
Must be zero along the gray-level ramps of constant slope.
85. The derivative of digital function is defined in terms of difference. Then, which of the
following defines the first order derivative ∂f/∂x= of a one-dimensional function
f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt along two
spatial axes
d) None of the mentioned
View Answer
Answer: a
Explanation: The definition of a first order derivative of a one dimensional image f(x) is:
∂f/∂x= f(x+1)-f(x), where the partial derivative is used to keep notation same even for f(x, y)
when partial derivative will be dealt along two spatial axes.
86. The derivative of digital function is defined in terms of difference. Then, which of the
following defines the second order derivative ∂2 f/∂x2 = of a one-dimensional
function f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt along two
spatial axes
d) None of the mentioned
View Answer
Answer: b
Explanation: The definition of a second order derivative of a one dimensional image f(x) is:
(∂2 f)/∂x2 =f(x+1)+ f(x-1)-2f(x), where the partial derivative is used to keep notation same even
for f(x, y) when partial derivative will be dealt along two spatial axes.
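The two difference formulas above can be checked on a small 1-D gray-level profile containing a ramp and a step; the profile values are made up for the example.

```python
import numpy as np

# 1-D gray-level profile: flat segment, ramp, flat segment, step.
f = np.array([5, 5, 4, 3, 2, 1, 1, 1, 6, 6], dtype=float)

first  = f[1:] - f[:-1]                  # df/dx    = f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]    # d2f/dx2  = f(x+1) + f(x-1) - 2 f(x)

print(first)    # nonzero along the whole ramp and at the step
print(second)   # nonzero only at the ramp onset/end and at the step
```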
87. On the basis of the edges produced, what kind of relation can be obtained between the first order
derivative and the second order derivative of an image whose edge shows a transition like a
ramp of constant slope?
a) First order derivative produces thick edge while second order produces a very fine edge
b) Second order derivative produces thick edge while first order produces a very fine edge
c) Both first and second order produces thick edge
d) Both first and second order produces a very fine edge
View Answer
Answer: a
Explanation: The first order derivative remains nonzero along the entire ramp of constant slope,
while the second order derivative remains nonzero only at the onset and end of such ramps.
If an edge in an image shows a transition like a ramp of constant slope, the first order and second
order derivatives produce a thick and a finer edge respectively.
88. What kind of relation can be obtained between first order derivative and second order
derivative of an image on the response obtained by encountering an isolated noise point in the
image?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both enhances the same and so the response is same for both first and second order derivative
d) None of the mentioned
View Answer
Answer: b
Explanation: This is because a second order derivative is more aggressive toward enhancing
sharp changes than a first order.
89. What kind of relation can be obtained between the response of first order derivative and
second order derivative of an image having a transition into gray-level step from zero?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both first and second order derivative has the same response
d) None of the mentioned
View Answer
Answer: c
Explanation: This is because a first order derivative has stronger response to a gray-level step
than a second order, but, the response becomes same if transition into gray-level step is from
zero.
90. If in an image there exist similar change in gray-level values in the image, which of the
following shows a stronger response using second order derivative operator for sharpening?
a) A line
b) A step
c) A point
d) None of the mentioned
View Answer
Answer: c
Explanation: second order derivative shows a stronger response to a line than a step and to a
point than a line, if there is similar changes in gray-level values in an image.
91. To convert a continuous sensed data into Digital form, which of the following is required?
a) Sampling
b) Quantization
c) Both Sampling and Quantization
d) Neither Sampling nor Quantization
View Answer
Answer: c
Explanation: The output of most sensors is a continuous waveform, and the amplitude and
spatial behavior of such a waveform are related to the physical phenomenon being sensed.
92. To convert a continuous image f(x, y) to digital form, we have to sample the function in
a) Coordinates
b) Amplitude
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: An image may be continuous in the x- and y-coordinates or in amplitude, or in
both.
93. For a continuous image f(x, y), how could be Sampling defined?
a) Digitizing the coordinate values
b) Digitizing the amplitude values
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: Sampling is the method of digitizing the coordinate values of the image.
Answer: b
Explanation: Sampling is the method of digitizing the amplitude values of the image.
Answer: a
Explanation: Digital function requires both sampling and quantization of the one-dimensional
image function.
96. How is sampling done when an image is generated by a single sensing element
combined with mechanical motion?
a) The number of sensors in the strip defines the sampling limitations in one direction and
Mechanical motion in the other direction.
b) The number of sensors in the sensing array establishes the limits of sampling in both
directions.
c) The number of mechanical increments when the sensor is activated to collect data.
d) None of the mentioned.
View Answer
Answer: c
Explanation: When an image is generated by a single sensing element along with mechanical
motion, the output data is quantized by dividing the gray-level scale into many discrete levels.
However, sampling is done by selecting the number of individual mechanical increments
recorded at which we activate the sensor to collect data.
97. How does sampling gets accomplished with a sensing strip being used for image acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image direction
and Mechanical motion in the other direction
b) The number of sensors in the sensing array establishes the limits of sampling in both
directions
c) The number of mechanical increments when the sensor is activated to collect data
d) None of the mentioned
View Answer
Answer: a
Explanation: When a sensing strip is used the number of sensors in the strip defines the sampling
limitations in one direction and mechanical motion in the other direction.
98. How is sampling accomplished when a sensing array is used for image acquisition?
a) The number of sensors in the strip establishes the sampling limitations in one image direction
and Mechanical motion in the other direction
b) The number of sensors in the sensing array defines the limits of sampling in both directions
c) The number of mechanical increments at which we activate the sensor to collect data
d) None of the mentioned
View Answer
Answer: b
Explanation: When we use sensing array for image acquisition, there is no motion and so, only
the number of sensors in the array defines the limits of sampling in both directions and the output
of the sensor is quantized by dividing the gray-level scale into many discrete levels.
Answer: c
Explanation: The quality of a digital image is determined mostly by the number of samples and
discrete gray levels used in sampling and quantization.
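A rough sketch of the two operations on a 1-D function, assuming uniform sampling and uniform quantization to L levels; the choices of sample count, L, and the test function are illustrative.

```python
import numpy as np

def sample_and_quantize(f, num_samples, L):
    """Sample a 1-D function on a uniform grid, then quantize its amplitudes to L levels."""
    x = np.linspace(0.0, 1.0, num_samples)           # sampling: digitize the coordinates
    samples = f(x)
    levels = np.round(samples * (L - 1)) / (L - 1)   # quantization: digitize the amplitudes
    return x, levels

x, g = sample_and_quantize(lambda t: 0.5 * (1 + np.sin(2 * np.pi * t)),
                           num_samples=8, L=4)
print(np.round(g, 3))    # only 4 distinct amplitude values remain
```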
100. Assume that an image f(x, y) is sampled so that the result has M rows and N columns. If the
values of the coordinates at the origin are (x, y) = (0, 0), then the notation (0, 1) is used to signify
:
a) Second sample along first row
b) First sample along second row
c) First sample along first row
d) Second sample along second row
View Answer
Answer: a
Explanation: The values of the coordinates at the origin are (x, y) = (0, 0). Then, the next
coordinate values (second sample) along the first row of the image are represented as (x, y) = (0,
1).
101. The resulting image of sampling and quantization is considered a matrix of real numbers.
By what name(s) is the element of this matrix array called?
a) Image element or Picture element
b) Pixel or Pel
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: Sampling and Quantization of an image f(x, y) forms a matrix of real numbers and
each element of this matrix array is commonly known as Image element or Picture element or
Pixel or Pel.
102. Let Z be the set of real integers and R the set of real numbers. The sampling process may be
viewed as partitioning the x-y plane into a grid, with the central coordinates of each grid being
from the Cartesian product Z2, that is a set of all ordered pairs (zi, zj), with zi and zj being
integers from Z. Then, f(x, y) is said a digital image if:
a) (x, y) are integers from Z2 and f is a function that assigns a gray-level value (from Z) to each
distinct pair of coordinates (x, y)
b) (x, y) are integers from R2 and f is a function that assigns a gray-level value (from R) to each
distinct pair of coordinates (x, y)
c) (x, y) are integers from R2 and f is a function that assigns a gray-level value (from Z) to each
distinct pair of coordinates (x, y)
d) (x, y) are integers from Z2 and f is a function that assigns a gray-level value (from R) to each
distinct pair of coordinates (x, y)
View Answer
Answer: d
Explanation: In the given condition, f(x, y) is a digital image if (x, y) are integers from Z2 and f a
function that assigns a gray-level value (that is, a real number from the set R) to each distinct
coordinate pair (x, y).
103. Let Z be the set of real integers and R the set of real numbers. The sampling process may be
viewed as partitioning the x-y plane into a grid, with the central coordinates of each grid being
from the Cartesian product Z2, that is a set of all ordered pairs (zi, zj), with zi and zj being
integers from Z. Then, f(x, y) is a digital image if (x, y) are integers from Z2 and f is a function
that assigns a gray-level value (that is, a real number from the set R) to each distinct coordinate
pair (x, y). What happens to the digital image if the gray levels also are integers?
a) The Digital image then becomes a 2-D function whose coordinates and amplitude values are
integers
b) The Digital image then becomes a 1-D function whose coordinates and amplitude values are
integers
c) The gray level can never be integer
d) None of the mentioned
View Answer
Answer: a
Explanation: In Quantization Process if the gray levels also are integers the Digital image then
becomes a 2-D function whose coordinates and amplitude values are integers.
104. The digitization process i.e. the digital image has M rows and N columns, requires decisions
about values for M, N, and for the number, L, of gray levels allowed for each pixel. The values of M
and N have to be:
a) M and N have to be positive integer
b) M and N have to be negative integer
c) M have to be negative and N have to be positive integer
d) M have to be positive and N have to be negative integer
View Answer
Answer: a
Explanation: The digitization process i.e. the digital image has M rows and N columns, requires
decisions about values for M, N, and for the number, L, of max gray level. There are no
requirements on M and N, other than that M and N have to be positive integers.
105. The digitization process i.e. the digital image has M rows and N columns, requires decisions
about values for M, N, and for the number, L, of max gray levels. There are no requirements on
M and N, other than that M and N have to be positive integer. However, the number of gray
levels typically is
a) An integer power of 2, i.e. L = 2^k
b) A real power of 2, i.e. L = 2^k
c) Two times the integer value, i.e. L = 2k
d) None of the mentioned
View Answer
Answer: a
Explanation: Due to processing, storage, and considering the sampling hardware, the number of
gray levels typically is an integer power of 2, i.e. L = 2^k.
106. The digitization process i.e. the digital image has M rows and N columns, requires decisions
about values for M, N, and for the number, L, of max gray levels, an integer power of 2 i.e. L =
2^k, allowed for each pixel. If we assume that the discrete levels are equally spaced and that they
are integers, then they lie in the interval ______, and sometimes the range of values spanned
by the gray scale is called the ______ of an image.
a) [0, L – 1] and static range respectively
b) [0, L / 2] and dynamic range respectively
c) [0, L / 2] and static range respectively
d) [0, L – 1] and dynamic range respectively
View Answer
Answer: d
Explanation: In digitization process M rows and N columns have to be positive and for the
number, L, of discrete gray levels typically an integer power of 2 for each pixel. If we assume
that the discrete levels are equally spaced and that they are integers then they lie in the interval
[0, L-1] and Sometimes the range of values spanned by the gray scale is called the dynamic
range of an image.
107. After the digitization process, a digital image has M rows and N columns (both positive
integers) and L = 2^k max gray levels for each pixel. Then, the
number b, of bits required to store a digitized image is:
a) b=M*N*k
b) b=M*N*L
c) b=M*L*k
d) b=L*N*k
View Answer
Answer: a
Explanation: For a digital image with M rows, N columns, and L = 2^k max gray levels per pixel,
the number, b, of bits required to store the digitized image is: b = M*N*k.
108. An image whose gray-levels span a significant portion of the gray scale has ______
dynamic range, while an image with a dull, washed-out gray look has ______ dynamic range.
a) Low and High respectively
b) High and Low respectively
c) Both have High dynamic range, irrespective of gray levels span significance on gray scale
d) Both have Low dynamic range, irrespective of gray levels span significance on gray scale
View Answer
Answer: b
Explanation: An image whose gray-levels span a large portion of the gray scale has a high
dynamic range, while one with a dull, washed-out gray look has a low dynamic range.
109. Validate the statement “When in an Image an appreciable number of pixels exhibit high
dynamic range, the image will have high contrast.”
a) True
b) False
View Answer
Answer: a
Explanation: In an Image if an appreciable number of pixels exhibit high dynamic range
property, the image will have high contrast.
110. In digital image of M rows and N columns and L discrete gray levels, calculate the bits
required to store a digitized image for M=N=32 and L=16.
a) 16384
b) 4096
c) 8192
d) 512
View Answer
Answer: b
Explanation: For a digital image with M rows, N columns, and L = 2^k max gray levels per pixel,
the number, b, of bits required to store the digitized image is:
b = M*N*k.
For L=16, k=4.
i.e. b=4096.
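The same arithmetic as a tiny helper, assuming L is an integer power of two so that k = log2(L):

```python
import math

def storage_bits(M, N, L):
    """Bits needed to store an M x N image with L = 2**k gray levels: b = M * N * k."""
    k = int(math.log2(L))           # bits per pixel
    return M * N * k

print(storage_bits(32, 32, 16))     # -> 4096
```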
Answer: b
Explanation: Gamma Rays come first in the electromagnetic spectrum sorted in the decreasing
order of frequency.
112. In the visible spectrum, the colour ______ has the maximum wavelength.
a) Violet
b) Blue
c) Red
d) Yellow
View Answer
Answer: c
Explanation: Red is towards the right in the electromagnetic spectrum sorted in the increasing
order of wavelength.
Answer: d
Explanation: It is usually written as wavelength = c / frequency.
Answer: a
Explanation: Electromagnetic waves are visualised as sinusoidal wave.
Answer: b
Explanation: Radiance is the total amount of energy that flows from the light source and is
measured in Watts.
116. Which of the following is used for chest and dental scans?
a) Hard X-Rays
b) Soft X-Rays
c) Radio waves
d) Infrared Rays
View Answer
Answer: b
Explanation: Soft X-Rays (low energy) are used for dental and chest scans.
117. Which of the following is impractical to measure?
a) Frequency
b) Radiance
c) Luminance
d) Brightness
View Answer
Answer: d
Explanation: Brightness is a subjective descriptor of light perception that is practically impossible to measure.
Answer: a
Explanation: Each bundle of massless energy is called a Photon.
Answer: b
Explanation: Achromatic light is also called monochromatic light (light void of color).
Answer: b
Explanation: Brightness embodies the achromatic notion of intensity and is a key factor in
describing color sensation.
Mathematical Tools in Digital Image
Processing
121. How is array operation carried out involving one or more images?
a) array by array
b) pixel by pixel
c) column by column
d) row by row
View Answer
Answer: b
Explanation: Any array operation is carried out on a pixel by pixel basis.
122. The property indicating that the output of a linear operation due to the sum of two inputs is
same as performing the operation on the inputs individually and then summing the results is
called
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
View Answer
Answer: a
Explanation: This property is called additivity .
123. The property indicating that the output of a linear operation to a constant times as input is
the same as the output of operation due to original input multiplied by that constant is called
a) additivity
b) heterogeneity
c) homogeneity
d) None of the Mentioned
View Answer
Answer: c
Explanation: This property is called homogeneity .
Answer: c
Explanation: A frequent application of image subtraction is in the enhancement of differences
between images .
Answer: a
Explanation: Mask mode radiography is an important medical imaging area based on Image
Subtraction.
Answer: b
Explanation: A common use of image multiplication is Masking, also called ROI operation.
Answer: c
Explanation: A is called the subset of B.
128. Consider two regions A and B composed of foreground pixels. The ______ of these two
sets is the set of elements belonging to set A or set B or both.
a) OR
b) AND
c) NOT
d) XOR
View Answer
Answer: a
Explanation: This is called an OR operation.
129. Imaging systems having physical artefacts embedded in the imaging sensors produce a set
of points called
a) Tie Points
b) Control Points
c) Reseau Marks
d) None of the Mentioned
View Answer
Answer: c
Explanation: These points are called “known” points or “Reseau marks”.
130. Image processing approaches operating directly on pixels of input image work directly in
a) Transform domain
b) Spatial domain
c) Inverse transformation
d) None of the Mentioned
View Answer
Answer: b
Explanation: Operations directly on pixels of input image work directly in Spatial Domain.
131. Noise reduction is obtained by blurring the image using smoothing filter.
a) True
b) False
View Answer
Answer: a
Explanation: Noise reduction is obtained by blurring the image using smoothing filter. Blurring
is used in pre-processing steps, such as removal of small details from an image prior to object
extraction and, bridging of small gaps in lines or curves.
Answer: d
Explanation: The output or response of a smoothing, linear spatial filter is simply the average of
the pixels contained in the neighbourhood of the filter mask.
Answer: b
Explanation: Since the smoothing spatial filter performs the average of the pixels, it is also called
as averaging filter.
134. Which of the following in an image can be removed by using smoothing filter?
a) Smooth transitions of gray levels
b) Smooth transitions of brightness levels
c) Sharp transitions of gray levels
d) Sharp transitions of brightness levels
View Answer
Answer: c
Explanation: Smoothing filter replaces the value of every pixel in an image by the average value
of the gray levels. So, this helps in removing the sharp transitions in the gray levels between the
pixels. This is done because, random noise typically consists of sharp transitions in gray levels.
Answer: a
Explanation: Edges, which almost always are desirable features of an image, also are
characterized by sharp transitions in gray level. So, averaging filters have an undesirable side
effect that they blur these edges.
136. Smoothing spatial filters don't smooth the false contours.
a) True
b) False
View Answer
Answer: b
Explanation: One of the applications of smoothing spatial filters is that they help in smoothing
the false contours that result from using an insufficient number of gray levels.
137. The mask shown in the figure below belongs to which type of filter?
Answer: d
Explanation: This is a smoothing spatial filter. This mask yields a so called weighted average,
which means that different pixels are multiplied with different coefficient values. This helps in
giving more importance to some pixels at the expense of others.
138. The mask shown in the figure below belongs to which type of filter?
Answer: c
Explanation: The mask shown in the figure represents a 3×3 smoothing filter. Use of this filter
yields the standard average of the pixels under the mask.
139. Box filter is a type of smoothing filter.
a) True
b) False
View Answer
Answer: a
Explanation: A spatial averaging filter or spatial smoothening filter in which all the coefficients
are equal is also called as box filter.
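A minimal sketch of a box (equal-coefficient averaging) filter; the 3×3 size, the edge padding and the toy image are assumptions made for illustration.

```python
import numpy as np

def box_filter(img, size=3):
    """Smooth an image with an equal-coefficient (box) averaging mask."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()   # standard average
    return out

img = np.array([[0, 0, 0], [0, 9, 0], [0, 0, 0]], dtype=float)
print(box_filter(img))   # the bright centre pixel is spread over its neighbourhood
```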
140. If the size of the averaging filter used to smooth the original image into the first image is 9, then
what would be the size of the averaging filter used to smooth the same original image into the
second, more blurred, image?
a) 3
b) 5
c) 9
d) 15
View Answer
Answer: d
Explanation: As the size of the averaging filter used to smooth the original image increases, so does
the blurring of the image. Since the second image is more blurred
than the first image, the window size should be more than 9.
141. Which of the following comes under the application of image blurring?
a) Object detection
b) Gross representation
c) Object motion
d) Image segmentation
View Answer
Answer: b
Explanation: An important application of spatial averaging is to blur an image for the purpose of
getting a gross representation of interested objects, such that the intensity of the small objects
blends with the background and large objects become easy to detect.
Answer: a
Explanation: Order-statistic filters are nonlinear smoothing spatial filters whose response is based
on ordering or ranking the pixels contained in the image area encompassed by the filter, and
then replacing the value of the central pixel with the value determined by the ranking result.
Answer: c
Explanation: The median filter belongs to order-statistic filters, which, as the name implies,
replaces the value of the pixel by the median of the gray levels that are present in the
neighbourhood of the pixels.
Answer: a
Explanation: Median filters are used to remove impulse noises, also called as salt-and-pepper
noise because of its appearance as white and black dots in the image.
145. What is the maximum area of the cluster that can be eliminated by using an n×n median
filter?
a) n²
b) n²/2
c) 2n²
d) n
View Answer
Answer: b
Explanation: Isolated clusters of pixels that are light or dark with respect to their neighbours, and
whose area is less than n²/2, i.e., half the area of the filter, can be eliminated by using an n×n
median filter.
Digital Image Processing Questions And
Answers – Basic Intensity Transformation
Functions
146. Which of the following expression is used to denote spatial domain process?
a) g(x,y)=T[f(x,y)]
b) f(x+y)=T[g(x+y)]
c) g(xy)=T[f(xy)]
d) g(x-y)=T[f(x-y)]
View Answer
Answer: a
Explanation: Spatial domain processes will be denoted by the expression g(x,y)=T[f(x,y)], where
f(x,y) is the input image, g(x,y) is the processed image, and T is an operator on f, defined over
some neighborhood of (x, y). In addition, T can operate on a set of input images, such as
performing the pixel-by-pixel sum of K images for noise reduction.
147. Which of the following shows three basic types of functions used frequently for image
enhancement?
a) Linear, logarithmic and inverse law
b) Power law, logarithmic and inverse law
c) Linear, logarithmic and power law
d) Linear, exponential and inverse law
View Answer
Answer: c
Explanation: The three basic types of functions used frequently for image enhancement are: linear
(negative and identity transformations), logarithmic (log and inverse-log transformations), and
power-law (nth power and nth root transformations). The identity function is the trivial case in
which output intensities are identical to input intensities; it is included in the graph only for
completeness.
148. Which expression gives the negative transformation of an image with gray levels in the
range [0, L-1]?
a) s=L+1-r
b) s=L+1+r
c) s=L-1-r
d) s=L-1+r
View Answer
Answer: c
Explanation: The negative of an image with gray levels in the range[0,L-1] is obtained by using
the negative transformation, which is given by the expression: s=L-1-r.
Answer: b
Explanation: The general form of the log transformation: s=clog10(1+r), where c is a constant,
and it is assumed that r ≥ 0.
Answer: a
Explanation: Power-law transformations have the basic form s = c·r^γ, where c and γ are positive
constants. Sometimes s = c·r^γ is written as s = c·(r+ε)^γ to account for an offset (that is, a measurable
output when the input is zero).
151. What is the name of process used to correct the power-law response phenomena?
a) Beta correction
b) Alpha correction
c) Gamma correction
d) Pie correction
View Answer
Answer: c
Explanation: A variety of devices used for image capture, printing, and display respond
according to a power law. By convention, the exponent in the power-law equation is referred to
as gamma .The process used to correct these power-law response phenomena is called gamma
correction.
152. Which of the following transformation function requires much information to be specified
at the time of input?
a) Log transformation
b) Power transformation
c) Piece-wise transformation
d) Linear transformation
View Answer
Answer: c
Explanation: The practical implementation of some important transformations can be formulated
only as piecewise functions. The principal disadvantage of piecewise functions is that their
specification requires considerably more user input.
153. In contrast stretching, if r1=s1 and r2=s2 then which of the following is true?
a) The transformation is not a linear function that produces no changes in gray levels
b) The transformation is a linear function that produces no changes in gray levels
c) The transformation is a linear function that produces changes in gray levels
d) The transformation is not a linear function that produces changes in gray levels
View Answer
Answer: b
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the transformation
function. If r1=s1 and r2=s2 then the transformation is a linear function that produces no changes
in gray levels.
154. In contrast stretching, if r1=r2, s1=0 and s2=L-1 then which of the following is true?
a) The transformation becomes a thresholding function that creates an octal image
b) The transformation becomes a override function that creates an octal image
c) The transformation becomes a thresholding function that creates a binary image
d) The transformation becomes a thresholding function that do not create an octal image
View Answer
Answer: c
Explanation: If r1=r2, s1=0 and s2=L-1,the transformation becomes a thresholding function that
creates a binary image.
155. In contrast stretching, if r1≤r2 and s1≤s2 then which of the following is true?
a) The transformation function is double valued and exponentially increasing
b) The transformation function is double valued and monotonically increasing
c) The transformation function is single valued and exponentially increasing
d) The transformation function is single valued and monotonically increasing
View Answer
Answer: d
Explanation: The locations of points (r1,s1) and (r2,s2) control the shape of the transformation
function. If r1≤r2 and s1≤s2 then the function is single valued and monotonically increasing.
156. In which type of slicing, highlighting a specific range of gray levels in an image often is
desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing
View Answer
Answer: a
Explanation: Highlighting a specific range of gray levels in an image often is desired in gray-
level slicing. Applications include enhancing features such as masses of water in satellite
imagery and enhancing flaws in X-ray images.
157. Which of the following depicts the main functionality of the Bit-plane slicing?
a) Highlighting a specific range of gray levels in an image
b) Highlighting the contribution made to total image appearance by specific bits
c) Highlighting the contribution made to total image appearance by specific byte
d) Highlighting the contribution made to total image appearance by specific pixels
View Answer
Answer: b
Explanation: Instead of highlighting gray-level ranges, highlighting the contribution made to
total image appearance by specific bits might be desired. Suppose , each pixel in an image is
represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-
plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit
bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image
and plane 7 contains all the high-order bits.
Answer: a
Explanation: The applications of image sharpening are present in various fields like electronic
printing, autonomous guidance in military systems, medical imaging and industrial inspection.
160. In spatial domain, which of the following operation is done on the pixels in sharpening the
image?
a) Integration
b) Average
c) Median
d) Differentiation
View Answer
Answer: d
Explanation: We know that, in blurring the image, we perform the average of pixels which can
be considered as integration. As sharpening is the opposite process of blurring, logically we can
tell that we perform differentiation on the pixels to sharpen the image.
161. Image differentiation enhances the edges, discontinuities and deemphasizes the pixels with
slow varying gray levels.
a) True
b) False
View Answer
Answer: a
Explanation: Fundamentally, the strength of the response of the derivative operator is
proportional to the degree of discontinuity in the image. So, we can state that image
differentiation enhances the edges, discontinuities and deemphasizes the pixels with slow
varying gray levels.
162. In which of the following cases, we wouldn't worry about the behaviour of
sharpening filter?
a) Flat segments
b) Step discontinuities
c) Ramp discontinuities
d) Slow varying gray values
View Answer
Answer: d
Explanation: We are interested in the behaviour of derivatives used in sharpening in the constant
gray level areas i.e., flat segments, and at the onset and end of discontinuities, i.e., step and ramp
discontinuities.
163. Which of the following is the valid response when we apply a first derivative?
a) Non-zero at flat segments
b) Zero at the onset of gray level step
c) Zero in flat segments
d) Zero along ramps
View Answer
Answer: c
Explanation: The derivations of digital functions are defined in terms of differences. The
definition we use for first derivative should be zero in flat segments, nonzero at the onset of a
gray level step or ramp and nonzero along the ramps.
164. Which of the following is not a valid response when we apply a second derivative?
a) Zero response at onset of gray level step
b) Nonzero response at onset of gray level step
c) Zero response at flat segments
d) Nonzero response along the ramps
View Answer
Answer: b
Explanation: The derivations of digital functions are defined in terms of differences. The
definition we use for second derivative should be zero in flat segments, zero at the onset of a
gray level step or ramp and nonzero along the ramps.
165. If f(x,y) is an image function of two variables, then the first order derivative of a one
dimensional function, f(x) is:
a) f(x+1)-f(x)
b) f(x)-f(x+1)
c) f(x-1)-f(x+1)
d) f(x)+f(x-1)
View Answer
Answer: a
Explanation: The first order derivative of a single dimensional function f(x) is the difference
between f(x) and f(x+1).
That is, ∂f/∂x=f(x+1)-f(x).
167. What is the thickness of the edges produced by first order derivatives when compared to
that of second order derivatives?
a) Finer
b) Equal
c) Thicker
d) Independent
View Answer
Answer: c
Explanation: We know that, the first order derivative is nonzero along the entire ramp while the
second order is zero along the ramp. So, we can conclude that the first order derivatives produce
thicker edges and the second order derivatives produce much finer edges.
168. First order derivative can enhance the fine detail in the image compared to that of second
order derivative.
a) True
b) False
View Answer
Answer: b
Explanation: The response at and around the noise point is much stronger for the second order
derivative than for the first order derivative. So, we can state that the second order derivative is
better to enhance the fine details in the image including noise when compared to that of first
order derivative.
169. Which of the following derivatives produce a double response at step changes in gray level?
a) First order derivative
b) Third order derivative
c) Second order derivative
d) First and second order derivatives
View Answer
Answer: c
Explanation: Second order derivatives produce a double line response for the step changes in the
gray level. We also note of second-order derivatives that, for similar changes in gray-level values
in an image, their response is stronger to a line than to a step, and to a point than to a line.
Digital Image Processing Questions and
Answers – Sharpening Spatial Filters-2
Answer: d
Explanation: Highlighting the fine detail in an image or Enhancing detail that has been blurred
because of some error or some natural effect of some method of image acquisition, is the
principal objective of sharpening spatial filters.
Answer: b
Explanation: Smoothing is analogous to integration and so, sharpening to spatial differentiation.
170. Which of the following fact(s) is/are true about sharpening spatial filters using digital
differentiation?
a) Sharpening spatial filter response is proportional to the discontinuity of the image at the point
where the derivative operation is applied
b) Sharpening spatial filters enhances edges and discontinuities like noise
c) Sharpening spatial filters deemphasizes areas that have slowly varying gray-level values
d) All of the mentioned
View Answer
Answer: d
Explanation: Derivative operator's response is proportional to the discontinuity of the image at
the point where the derivative operation is applied.
Image differentiation enhances edges and discontinuities like noise and deemphasizes areas that
have slowly varying gray-level values.
Since sharpening spatial filters are analogous to differentiation, all the above mentioned
facts are true for sharpening spatial filters.
171. Which of the facts(s) is/are true for the first order derivative of a digital function?
a) Must be nonzero in the areas of constant grey values
b) Must be zero at the onset of a gray-level step or ramp discontinuities
c) Must be nonzero along the gray-level ramps
d) None of the mentioned
View Answer
Answer: c
Explanation: The first order derivative of a digital function is defined as:
Must be zero in the areas of constant grey values.
Must be nonzero at the onset of a gray-level step or ramp discontinuities.
Must be nonzero along the gray-level ramps.
172. Which of the facts(s) is/are true for the second order derivative of a digital function?
a) Must be zero in the flat areas
b) Must be nonzero at the onset and end of a gray-level step or ramp discontinuities
c) Must be zero along the ramps of constant slope
d) All of the mentioned
View Answer
Answer: d
Explanation: The second order derivative of a digital function is defined as:
Must be zero in the flat areas i.e. areas of constant grey values.
Must be nonzero at the onset and end of a gray-level step or ramp discontinuities.
Must be zero along the gray-level ramps of constant slope.
173. The derivative of digital function is defined in terms of difference. Then, which of the
following defines the first order derivative ∂f/∂x= of a one-dimensional function
f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt along two
spatial axes
d) None of the mentioned
View Answer
Answer: a
Explanation: The definition of a first order derivative of a one dimensional image f(x) is:
∂f/∂x= f(x+1)-f(x), where the partial derivative is used to keep notation same even for f(x, y)
when partial derivative will be dealt along two spatial axes.
174. The derivative of digital function is defined in terms of difference. Then, which of the
following defines the second order derivative ∂2 f/∂x2 = of a one-dimensional
function f(x)?
a) f(x+1)-f(x)
b) f(x+1)+ f(x-1)-2f(x)
c) All of the mentioned depending upon the time when partial derivative will be dealt along two
spatial axes
d) None of the mentioned
View Answer
Answer: b
Explanation: The definition of a second order derivative of a one dimensional image f(x) is:
(∂2 f)/∂x2 =f(x+1)+ f(x-1)-2f(x), where the partial derivative is used to keep notation same even
for f(x, y) when partial derivative will be dealt along two spatial axes.
175. On the basis of the edges produced, what kind of relation can be obtained between the first order
derivative and the second order derivative of an image whose edge shows a transition like a
ramp of constant slope?
a) First order derivative produces thick edge while second order produces a very fine edge
b) Second order derivative produces thick edge while first order produces a very fine edge
c) Both first and second order produces thick edge
d) Both first and second order produces a very fine edge
View Answer
Answer: a
Explanation: The first order derivative remains nonzero along the entire ramp of constant slope,
while the second order derivative remains nonzero only at the onset and end of such ramps.
If an edge in an image shows a transition like a ramp of constant slope, the first order and second
order derivatives produce a thick and a finer edge respectively.
176. What kind of relation can be obtained between first order derivative and second order
derivative of an image on the response obtained by encountering an isolated noise point in the
image?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both enhances the same and so the response is same for both first and second order derivative
d) None of the mentioned
View Answer
Answer: b
Explanation: This is because a second order derivative is more aggressive toward enhancing
sharp changes than a first order.
178. What kind of relation can be obtained between the response of first order derivative and
second order derivative of an image having a transition into gray-level step from zero?
a) First order derivative has a stronger response than a second order
b) Second order derivative has a stronger response than a first order
c) Both first and second order derivative has the same response
d) None of the mentioned
View Answer
Answer: c
Explanation: This is because a first order derivative has stronger response to a gray-level step
than a second order, but, the response becomes same if transition into gray-level step is from
zero.
179. If in an image there exist similar change in gray-level values in the image, which of the
following shows a stronger response using second order derivative operator for sharpening?
a) A line
b) A step
c) A point
d) None of the mentioned
View Answer
Answer: c
Explanation: second order derivative shows a stronger response to a line than a step and to a
point than a line, if there is similar changes in gray-level values in an image.
Answer: c
Explanation: The principal objective of sharpening is to highlight transitions in intensity.
181. How can Sharpening be achieved?
a) Pixel averaging
b) Slicing
c) Correlation
d) None of the mentioned
View Answer
Answer: d
Explanation: Sharpening is achieved using Spatial Differentiation.
Answer: a
Explanation: Image Differentiation enhances Edges and other discontinuities.
Answer: c
Explanation: Image Differentiation de-emphasizes areas with slowly varying intensities.
Answer: d
Explanation: All the three conditions must be satisfied.
186. The ability that rotating the image and applying the filter gives the same result, as applying
the filter to the image first, and then rotating it, is called
a) Isotropic filtering
b) Laplacian
c) Rotation Invariant
d) None of the mentioned
View Answer
Answer: c
Explanation: It is called Rotation Invariant, although the process used is Isotropic filtering.
187. For a function f(x,y), the gradient of 'f' at coordinates (x,y) is defined as a
a) 3-D row vector
b) 3-D column vector
c) 2-D row vector
d) 2-D column vector
View Answer
Answer: d
Explanation: The gradient is a 2-D column vector.
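Treating the gradient as the 2-D column vector [∂f/∂x, ∂f/∂y]^T, the sketch below approximates it with simple forward differences and takes its magnitude; the difference scheme and the toy image are illustrative choices, not the only possible ones.

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate |grad f| from forward differences along the two spatial axes."""
    f = img.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[1:, :] - f[:-1, :]     # df/dx (row direction)
    gy[:, :-1] = f[:, 1:] - f[:, :-1]     # df/dy (column direction)
    return np.hypot(gx, gy)               # magnitude of the 2-D gradient vector

img = np.array([[0, 0, 0, 0],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 0, 0]])
print(gradient_magnitude(img))            # large values along the object edges
```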
Answer: a
Explanation: Gradient is used in Industrial inspection, to aid humans, in detection of defects.
Answer: d
Explanation: In Unsharp Masking, all of the above occurs in the order: Blurring, Subtracting the
blurred image and then Adding the mask.
Digital Image Processing Questions and
Answers – Combining Spatial Enhancements
Methods
Answer: d
Explanation: All the mentioned options make it difficult to enhance an image.
Answer: b
Explanation: Laplacian is a second-order derivative operator.
192. Response of the gradient to noise and fine detail is ______ the Laplacian's.
a) equal to
b) lower than
c) greater than
d) has no relation with
View Answer
Answer: b
Explanation: Response of the gradient to noise and fine detail is lower than the Laplacian's and
can further be lowered by smoothing.
Answer: d
Explanation: It can be solved by Histogram Specification but it is better handled by Power-law
Transformation.
Answer: c
Explanation: The smallest possible value of a gradient image is 0.
Answer: c
Explanation: Histogram Equalization fails to work on dark intensity distributions.
Answer: c
Explanation: Nuclear Whole Body Scan is used to detect diseases such as bone infection and
tumors.
197. How do you bring out more of the skeletal detail from a Nuclear Whole Body Bone Scan?
a) Sharpening
b) Enhancing
c) Transformation
d) None of the mentioned
View Answer
Answer: a
Explanation: Sharpening is used to bring out more of the skeletal detail.
Answer: a
Explanation: Using a mask, formed from the smoothed version of the gradient image, can be
used for median filtering.
Answer: c
Explanation: Increasing the dynamic range of the sharpened image is the final step in
enhancement.
Answer: a
Explanation: Filtering is the process of accepting or rejecting certain frequency components.
201. A filter that passes low frequencies is
a) Band pass filter
b) High pass filter
c) Low pass filter
d) None of the Mentioned
View Answer
Answer: c
Explanation: Low pass filter passes low frequencies.
202. What is the process of moving a filter mask over the image and computing the sum of
products at each location called as?
a) Convolution
b) Correlation
c) Linear spatial filtering
d) Non linear spatial filtering
View Answer
Answer: b
Explanation: The process is called as Correlation.
203. The standard deviation controls ______ of the bell (2-D Gaussian function of bell
shape).
a) Size
b) Curve
c) Tightness
d) None of the Mentioned
View Answer
Answer: c
Explanation: The standard deviation controls “tightness” of the bell.
Answer: a
Explanation: To generate an M X N linear spatial filter MN mask coefficients must be specified.
Answer: b
Explanation: Convolution is the same as Correlation except that the image must be rotated by
180 degrees initially.
Answer: d
Explanation: Convolution and Correlation are functions of displacement.
207. The function that contains a single 1 with the rest being 0s is called
a) Identity function
b) Inverse function
c) Discrete unit impulse
d) None of the Mentioned
View Answer
Answer: c
Explanation: It is called Discrete unit impulse.
Answer: a
Explanation: Correlation is applied in finding matches.
Answer: d
Explanation: Gaussian function has two variables and is an exponential continuous function.
Digital Image Processing Questions And
Answers – Histogram Processing – 2
210. The histogram of a digital image with gray levels in the range [0, L-1] is represented by a
discrete function:
a) h(r_k)=n_k
b) h(r_k )=n/n_k
c) p(r_k )=n_k
d) h(r_k )=n_k/n
View Answer
Answer: a
Explanation: The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image
having gray level r_k.
Answer: b
Explanation: It is common practice to normalize a histogram by dividing each of its values by the
total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by
p(r_k) = n_k/n, for k = 0, 1, 2, …, L-1. Loosely speaking, p(r_k) gives an estimate of the probability of
occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is
equal to 1.
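A short sketch that builds h(r_k) = n_k and the normalized histogram p(r_k) = n_k/n with np.bincount; the toy image and the assumed number of gray levels are made up for the example.

```python
import numpy as np

img = np.array([[0, 0, 1, 2],
                [2, 2, 3, 3]], dtype=np.uint8)      # toy image, L assumed to be 4 here

h = np.bincount(img.ravel(), minlength=4)           # h(r_k) = n_k
p = h / img.size                                     # p(r_k) = n_k / n
print(h)          # [2 1 3 2]
print(p.sum())    # 1.0, as expected for a normalized histogram
```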
212. Which of the following conditions does the T(r) must satisfy?
a) T(r) is double-valued and monotonically decreasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
b) T(r) is double-valued and monotonically increasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
c) T(r) is single-valued and monotonically decreasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
d) T(r) is single-valued and monotonically increasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1
View Answer
Answer: d
Explanation: For any r satisfying the aforementioned conditions, we focus attention on
transformations of the form
s=T(r) For 0≤r≤1
That produces a level s for every pixel value r in the original image.
For reasons that will become obvious shortly, we assume that the transformation function T(r)
satisfies the following conditions:
T(r) is single-valued and monotonically increasing in the interval 0≤r≤1; and
0≤T(r)≤1 for 0≤r≤1.
Answer: c
Explanation: The inverse transformation from s back to r is denoted by:
r = T^(-1)(s) for 0 ≤ s ≤ 1.
214. The probability density function p_s (s) of the transformed variable s can be obtained by
using which of the following formula?
a) p_s (s)=p_r (r)|dr/ds|
b) p_s (s)=p_r (r)|ds/dr|
c) p_r (r)=p_s (s)|dr/ds|
d) p_s (s)=p_r (r)|dr/dr|
View Answer
Answer: a
Explanation: The probability density function p_s (s) of the transformed variable s can be
obtained using a basic formula: p_s (s)=p_r (r)|dr/ds|
Thus, the probability density function of the transformed variable, s, is determined by the gray-
level PDF of the input image and by the chosen transformation function.
Answer: b
Explanation: A plot of p_r(r_k) versus r_k is called a histogram. The transformation (mapping)
s_k = Σ_{j=0}^{k} n_j/n, k = 0,1,2,…,L-1 is called histogram equalization or histogram
linearization.
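A minimal sketch of this mapping (assuming an 8-bit image and scaling the result to [0, L-1] for display; the function name histogram_equalize is illustrative, not from the source):

import numpy as np

def histogram_equalize(image, L=256):
    """Map each gray level r_k to s_k = sum_{j<=k} n_j / n, then scale to [0, L-1]."""
    n_k = np.bincount(image.ravel(), minlength=L)
    cdf = np.cumsum(n_k) / image.size          # s_k in [0, 1]
    mapping = np.round((L - 1) * cdf).astype(np.uint8)
    return mapping[image]                      # apply the mapping pixel-wise

img = np.random.randint(0, 64, size=(32, 32), dtype=np.uint8)  # dark toy image
print(histogram_equalize(img).max())  # values are spread toward the full range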
217. What is the method that is used to generate a processed image that have a specified
histogram?
a) Histogram linearization
b) Histogram equalization
c) Histogram matching
d) Histogram processing
View Answer
Answer: c
Explanation: In particular, it is useful sometimes to be able to specify the shape of the histogram
that we wish the processed image to have. The method used to generate a processed image that
has a specified histogram is called histogram matching or histogram specification.
218. Histograms are the basis for numerous spatial domain processing techniques.
a) True
b) False
View Answer
Answer: a
Explanation: Histograms are the basis for numerous spatial domain processing techniques.
Histogram manipulation can be used effectively for image enhancement.
219. In a dark image, the components of histogram are concentrated on which side of the grey
scale?
a) High
b) Medium
c) Low
d) Evenly distributed
View Answer
Answer: c
Explanation: We know that in the dark image, the components of histogram are concentrated
mostly on the low i.e., dark side of the grey scale. Similarly, the components of histogram of the
bright image are biased towards the high side of the grey scale.
220. What is the basis for numerous spatial domain processing techniques?
a) Transformations
b) Scaling
c) Histogram
d) None of the Mentioned
View Answer
Answer: c
Explanation: Histogram is the basis for numerous spatial domain processing techniques.
221. In a ______ image we notice that the components of histogram are concentrated on the low
side on intensity scale.
a) bright
b) dark
c) colourful
d) All of the Mentioned
View Answer
Answer: b
Explanation: Only in dark images, we notice that the components of histogram are concentrated
on the low side on intensity scale.
Answer: c
Explanation: Histogram Linearisation is also known as Histogram Equalisation.
Answer: b
Explanation: Histogram Specification is also known as Histogram Matching.
Answer: a
Explanation: It is mainly used for Enhancement of usually dark images.
Answer: c
Explanation: Utilising non-overlapping regions usually produces “Blocky” effect.
Answer: c
Explanation: SEM stands for Scanning Electron Microscope.
227. The type of Histogram Processing in which pixels are modified based on the intensity
distribution of the image is called .
a) Intensive
b) Local
c) Global
d) Random
View Answer
Answer: c
Explanation: It is called Global Histogram Processing.
228. Which type of Histogram Processing is suited for minute detailed enhancements?
a) Intensive
b) Local
c) Global
d) Random
View Answer
Answer: b
Explanation: Local Histogram Processing is used.
Answer: d
Explanation: PDF stands for Probability Density Function.
230. The output of a smoothing, linear spatial filtering is a ______ of the pixels contained
in the neighbourhood of the filter mask.
a) Sum
b) Product
c) Average
d) Dot Product
View Answer
Answer: c
Explanation: Smoothing is simply the average of the pixels contained in the neighbourhood.
Answer: a
Explanation: Averaging filters is also known as Low pass filters.
Answer: c
Explanation: Blue edges is the undesirable side effect of Averaging filters.
233. A spatial averaging filter in which all coefficients are equal is called _ .
a) Square filter
b) Neighbourhood
c) Box filter
d) Zero filter
View Answer
Answer: c
Explanation: It is called a Box filter.
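A small illustrative sketch of a box filter, assuming replicate padding at the borders and a single bright pixel as the test image (both assumptions for demonstration): all coefficients are equal to 1/n^2, so each output value is simply the neighborhood average.

import numpy as np

def box_filter(image, n=3):
    """Smoothing linear spatial filter whose coefficients are all equal (1/n^2)."""
    pad = n // 2
    padded = np.pad(image.astype(float), pad, mode='edge')  # replicate borders
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()      # average of the neighborhood
    return out

img = np.zeros((7, 7)); img[3, 3] = 255.0   # single bright pixel
print(box_filter(img)[3, 3])                # 255/9 ≈ 28.3, i.e. the spike is blurred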
234. Which term is used to indicate that pixels are multiplied by different coefficients?
a) Weighted average
b) Squared average
c) Spatial average
d) None of the Mentioned
View Answer
Answer: a
Explanation: It is called weighted average since more importance(weight) is given to some
pixels.
235. The non linear spacial filters whose response is based on ordering of the pixels contained is
called .
a) Box filter
b) Square filter
c) Gaussian filter
d) Order-statistic filter
View Answer
Answer: d
Explanation: It is called Order-statistic filter.
Answer: c
Explanation: It is called salt-and-pepper noise because of its appearance as white and black dots
superimposed on an image.
Answer: c
Explanation: Median filter is the best known Order-statistic filter.
Answer: b
Explanation: It refers to forcing to median intensity of neighbours.
239. Which of the following is best suited for salt-and-pepper noise elimination?
a) Average filter
b) Box filter
c) Max filter
d) Median filter
View Answer
Answer: d
Explanation: Median filter is better suited than average filter for salt-and-pepper noise
elimination.
Answer: c
Explanation: Smoothing filter is used for blurring and noise reduction.
Answer: d
Explanation: The average of pixels in the neighborhood of filter mask is simply the output of the
smoothing linear spatial filter.
242. Which of the following filter(s) results in a value as average of pixels in the neighborhood
of filter mask.
a) Smoothing linear spatial filter
b) Averaging filter
c) Lowpass filter
d) All of the mentioned
View Answer
Answer: d
Explanation: The output as an average of pixels in the neighborhood of filter mask is simply the
output of the smoothing linear spatial filter also known as averaging filter and lowpass filter.
Answer: b
Explanation: Random noise has sharp transitions in gray levels, and smoothing filters perform noise
reduction by attenuating those transitions.
Answer: d
Explanation: Averaging filter or smoothing linear spatial filter is used: for noise reduction by
reducing the sharp transitions in gray level, for smoothing false contours that arises because of
use of insufficient number of gray values and for reduction of irrelevant data i.e. the pixels
regions that are small in comparison of filter mask.
245. A spatial averaging filter having all the coefficients equal is termed
a) A box filter
b) A weighted average filter
c) A standard average filter
d) A median filter
View Answer
Answer: a
Explanation: An averaging filter is termed as box filter if all the coefficients of spatial averaging
filter are equal.
246. What does using a mask having central coefficient maximum and then the coefficients
reducing as a function of increasing distance from origin results?
a) It results in increasing blurring in smoothing process
b) It results to reduce blurring in smoothing process
c) Nothing with blurring occurs as mask coefficient relation has no effect on smoothing process
d) None of the mentioned
View Answer
Answer: a
Explanation: Use of a mask having central coefficient maximum and then the coefficients
reducing as a function of increasing distance from origin is a strategy to reduce blurring in
smoothing process.
247. What is the relation between blurring effect with change in filter size?
a) Blurring increases with decrease of the size of filter size
b) Blurring decrease with decrease of the size of filter size
c) Blurring decrease with increase of the size of filter size
d) Blurring increases with increase of the size of filter size
View Answer
Answer: d
Explanation: Using a 3*3 filter, small squares and other small objects show significant blurring
relative to objects of larger size.
The blurring gets more pronounced as the filter size increases to 5, 9 and so on.
248. Which of the following filter(s) has the response in which the central pixel value is replaced
by value defined by ranking the pixel in the image encompassed by filter?
a) Order-Statistic filters
b) Non-linear spatial filters
c) Median filter
d) All of the mentioned
View Answer
Answer: d
Explanation: An Order-Statistic filters also called non-linear spatial filters, response is based on
ranking the pixel in the image encompassed by filter that replaces the central pixel value. A
Median filter is an example of such filters.
249. Is it true or false that “the original pixel value is included while computing the median using
gray-levels in the neighborhood of the original pixel in median filter case”?
a) True
b) False
View Answer
Answer: a
Explanation: In a median filter, the pixel value is replaced by the median of the gray levels in the
neighborhood of that pixel, and the original pixel value is included while computing the
median.
250. Two filters of similar size are used for smoothing an image having impulse noise. One is a
median filter while the other is a linear spatial filter. What would be the blurring effect of each?
a) Median filter effects in considerably less blurring than the linear spatial filters
b) Median filter effects in considerably more blurring than the linear spatial filters
c) Both have the same blurring effect
d) All of the mentioned
View Answer
Answer: a
Explanation: For impulse noise, median filter is much effective for noise reduction and causes
considerably less blurring than the linear spatial filters.
251. An image contains noise having appearance as black and white dots superimposed on the
image. Which of the following noise(s) has the same appearance?
a) Salt-and-pepper noise
b) Gaussian noise
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: An impulse noise has an appearance as black and white dots superimposed on the
image. This is also known as Salt-and-pepper noise.
252. While performing the median filtering, suppose a 3*3 neighborhood has value (10, 20, 20,
20, 15, 20, 20, 25, 100), then what is the median value to be given to the pixel under filter?
a) 15
b) 20
c) 100
d) 25
View Answer
Answer: b
Explanation: The values are first sorted and so turns out to (10, 15, 20, 20, 20, 20, 20, 25, and
100). For a 3*3 neighborhood the 5th largest value is the median, and so is 20.
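The same worked example can be checked with a short NumPy snippet (illustrative only): sorting the nine neighborhood values and taking the middle one gives 20.

import numpy as np

neighborhood = np.array([10, 20, 20, 20, 15, 20, 20, 25, 100])
ranked = np.sort(neighborhood)             # [ 10  15  20  20  20  20  20  25 100]
print(ranked, np.median(neighborhood))     # the 5th ranked value is the median, i.e. 20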
253. Which of the following are forced to the median intensity of the neighbors by n*n median
filter?
a) Isolated cluster of pixels that are light or dark in comparison to their neighbors
b) Isolated cluster of pixels whose area is less than one-half the filter area
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: An isolated cluster's pixel value does not come out as the median value and, since such
pixels are either light or dark compared to their neighbors, they are forced to the median intensity of
neighbors that is not even close to their original value, and so are sometimes termed "eliminated".
If the area of such isolated pixels is < n^2/2, again the pixel value won't be a median
value and so is eliminated.
Larger cluster pixel values are more likely to be the median value, so they are considerably less often
forced to the median intensity.
254. Which filter(s) used to find the brightest point in the image?
a) Median filter
b) Max filter
c) Mean filter
d) All of the mentioned
View Answer
Answer: b
Explanation: A max filter gives the brightest point in an image and so is used.
255. The median filter also represents which of the following ranked set of numbers?
a) 100th percentile
b) 0th percentile
c) 50th percentile
d) None of the mentioned
View Answer
Answer: c
Explanation: The median filter forces the pixel to the median intensity, i.e. the value in the middle of
the ranked list of neighborhood values, and so it represents a 50th percentile ranked set of numbers.
256. Which of the following filter represents a 0th percentile set of numbers?
a) Max filter
b) Mean filter
c) Median filter
d) None of the mentioned
View Answer
Answer: d
Explanation: A min filter since provides the minimum value in the image, so represents a 0th
percentile set of numbers.
257. In neighborhood operations working is being done with the value of image pixel in the
neighborhood and the corresponding value of a subimage that has same dimension as
neighborhood. The subimage is referred as
a) Filter
b) Mask
c) Template
d) All of the mentioned
View Answer
Answer: d
Explanation: Neighborhood operations work with the values of the image pixels in the neighborhood
and the corresponding values of a subimage that has the same dimensions as the neighborhood. The
subimage is called a filter, mask, template, kernel or window.
258. The response for linear spatial filtering is given by the relationship
a) Sum of filter coefficient's product and corresponding image pixel under filter mask
b) Difference of filter coefficient's product and corresponding image pixel under filter mask
c) Product of filter coefficient's product and corresponding image pixel under filter mask
d) None of the mentioned
View Answer
Answer: a
Explanation: In spatial filtering the mask is moved from point to point and at each point the
response is calculated using a predefined relationship. The relationship in linear spatial filtering
is given by: the sum of the filter coefficient's product and corresponding image pixel in the area
under the filter mask.
259. In linear spatial filtering, what is the pixel of the image under mask corresponding to the
mask coefficient w (1, -1), assuming a 3*3 mask?
a) f (x, -y)
b) f (x + 1, y)
c) f (x, y – 1)
d) f (x + 1, y – 1)
View Answer
Answer: d
Explanation: The pixel corresponding to mask coefficient (a 3*3 mask) w (0, 0) is f (x, y), and so
for w (1, -1) is f (x + 1, y – 1).
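An illustrative sketch of this linear filtering response for a 3*3 mask, assuming the mask is indexed from -1 to 1 in both directions so that the coefficient w(1, -1) multiplies f(x + 1, y - 1); the function name and toy arrays are assumptions made only for this demonstration.

import numpy as np

def linear_filter_response(f, w, x, y):
    """Response at (x, y): sum over s, t of w(s, t) * f(x + s, y + t) for an n x n mask."""
    a = w.shape[0] // 2                      # a = 1 for a 3x3 mask
    total = 0.0
    for s in range(-a, a + 1):
        for t in range(-a, a + 1):
            total += w[s + a, t + a] * f[x + s, y + t]   # w(1, -1) pairs with f(x+1, y-1)
    return total

f = np.arange(25, dtype=float).reshape(5, 5)
w = np.full((3, 3), 1 / 9.0)                 # simple averaging mask
print(linear_filter_response(f, w, 2, 2))    # average of the 3x3 neighborhood of (2, 2)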
Answer: c
Explanation: Computation of variance as well as median comes under nonlinear operation.
261. Which of the following is/are used as basic function in nonlinear filter for noise reduction?
a) Computation of variance
b) Computation of median
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: Computation of median gray-level value in the neighborhood is the basic function
of nonlinear filter for noise reduction.
262. In neighborhood operation for spatial filtering if a square mask of size n*n is used it is
restricted that the center of mask must be at a distance ≥ (n – 1)/2 pixels from border of image,
what happens to the resultant image?
a) The resultant image will be of same size as original image
b) The resultant image will be a little larger size than original image
c) The resultant image will be a little smaller size than original image
d) None of the mentioned
View Answer
Answer: c
Explanation: If the center of mask must be at a distance ≥ (n – 1)/2 pixels from border of image,
the border pixels won't get processed under mask and so the resultant image would be of smaller
size.
263. Which of the following method is/are used for padding the image?
a) Adding rows and column of 0 or other constant gray level
b) Simply replicating the rows or columns
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: In neighborhood operation for spatial filtering using square mask, padding of
original image is done to obtain filtered image of same size as of original image done, by adding
rows and column of 0 or other constant gray level or by replicating the rows or columns of the
original image.
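Both padding methods mentioned in the explanation can be sketched with NumPy's pad routine (a minimal illustration, assuming a 3*3 toy image): constant padding adds rows and columns of 0, while edge padding replicates the border rows and columns.

import numpy as np

img = np.arange(9, dtype=float).reshape(3, 3)

zero_padded = np.pad(img, 1, mode='constant', constant_values=0)  # add rows/columns of 0
replicated  = np.pad(img, 1, mode='edge')                         # replicate border rows/columns
print(zero_padded.shape, replicated.shape)  # both (5, 5)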
264. In neighborhood operation for spatial filtering using square mask of n*n, which of the
following approach is/are used to obtain a perfectly filtered result irrespective of the size?
a) By padding the image
b) By filtering all the pixels only with the mask section that is fully contained in the image
c) By ensuring that center of mask must be at a distance ≥ (n – 1)/2 pixels from border of image
d) None of the mentioned
View Answer
Answer: c
Explanation: By ensuring that center of mask must be at a distance ≥ (n – 1)/2 pixels from border
of image, the resultant image would be of smaller size but all the pixels would be the result of the
filter processing and so is a fully filtered result.
The other approaches do not give a fully filtered result: padding affects the values near the edges, an
effect that becomes more prevalent as the mask size increases, while filtering only with the mask
section fully contained in the image leaves a band of pixels near the border processed with a partial
filter mask.
265. Which of the following fact(s) is/are true for the relationship between low frequency
component of Fourier transform and the rate of change of gray levels?
a) Moving away from the origin of transform the low frequency corresponds to smooth gray
level variation
b) Moving away from the origin of transform the low frequencies corresponds to abrupt change
in gray level
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: The low frequencies of the transform, near its origin, correspond to the slowly varying
components of an image. Moving further away from the origin, the higher frequencies correspond to
faster gray level changes.
266. Which of the following fact(s) is/are true for the relationship between high frequency
component of Fourier transform and the rate of change of gray levels?
a) Moving away from the origin of transform the high frequency corresponds to smooth gray
level variation
b) Moving away from the origin of transform the higher frequencies corresponds to abrupt
change in gray level
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: The low frequencies of the transform, near its origin, correspond to the slowly varying
components of an image. Moving further away from the origin, the higher frequencies correspond to
faster gray level changes.
267. What is the name of the filter that multiplies two functions F(u, v) and H(u, v), where F has
complex components too since is Fourier transformed function of f(x, y), in an order that each
component of H multiplies both real and complex part of corresponding component in F?
a) Unsharp mask filter
b) High-boost filter
c) Zero-phase-shift-filter
d) None of the mentioned
View Answer
Answer: c
Explanation: Zero-phase-shift-filter multiplies two functions F(u, v) and H(u, v), where F has
complex components too since is Fourier transformed function of f(x, y), in an order that each
component of H multiplies both real and complex part of corresponding component in F.
268. To set the average value of an image zero, which of the following term would be set 0 in the
frequency domain and the inverse transformation is done, where F(u, v) is Fourier transformed
function of f(x, y)?
a) F(0, 0)
b) F(0, 1)
c) F(1, 0)
d) None of the mentioned
View Answer
Answer: a
Explanation: For an image f(x, y), the Fourier transform at origin of an image, F(0, 0), is equal to
the average value of the image.
269. What is the name of the filter that is used to turn the average value of a processed image
zero?
a) Unsharp mask filter
b) Notch filter
c) Zero-phase-shift-filter
d) None of the mentioned
View Answer
Answer: b
Explanation: Notch filter sets F (0, 0), to zero, hence setting up the average value of image zero.
The filter is named so, because it is a constant function with a notch at origin and so is able to set
F (0, 0) to zero leaving out other values.
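A minimal NumPy sketch of this idea (illustrative, assuming a small random test image): zeroing F(0, 0) of the 2-D DFT and inverse transforming yields an image whose average value is essentially zero.

import numpy as np

img = np.random.rand(8, 8) + 5.0        # image with a clearly non-zero average
F = np.fft.fft2(img)
F[0, 0] = 0.0                           # notch at the origin removes the DC (average) term
out = np.real(np.fft.ifft2(F))
print(round(out.mean(), 10))            # ≈ 0: the average value of the result is zero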
270. Which of the following filter(s) attenuates high frequency while passing low frequencies of
an image?
a) Unsharp mask filter
b) Lowpass filter
c) Zero-phase-shift filter
d) All of the mentioned
View Answer
Answer: b
Explanation: A lowpass filter attenuates high frequency while passing low frequencies.
271. Which of the following filter(s) attenuates low frequency while passing high frequencies of
an image?
a) Unsharp mask filter
b) Highpass filter
c) Zero-phase-shift filter
d) All of the mentioned
View Answer
Answer: b
Explanation: A highpass filter attenuates low frequency while passing high frequencies.
272. Which of the following filter have a less sharp detail than the original image because of
attenuation of high frequencies?
a) Highpass filter
b) Lowpass filter
c) Zero-phase-shift filter
d) None of the mentioned
View Answer
Answer: b
Explanation: A lowpass filter attenuates high frequencies, so the image has less sharp detail.
273. The feature(s) of a highpass filtered image is/are
a) Have less gray-level variation in smooth areas
b) Emphasized transitional gray-level details
c) An overall sharper image
d) All of the mentioned
View Answer
Answer: d
Explanation: A highpass filter attenuates low frequency so have less gray-level variation in
smooth areas, and allows high frequencies so have emphasized transitional gray-level details,
resulting in a sharper image.
274. A spatial domain filter of the corresponding filter in frequency domain can be obtained by
applying which of the following operation(s) on filter in frequency domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned
View Answer
Answer: b
Explanation: Filters in spatial domain and frequency domain has a Fourier transform pair
relation. A spatial domain filter of the corresponding filter in frequency domain can be obtained
by applying inverse Fourier transform on frequency domain filter.
275. A frequency domain filter of the corresponding filter in spatial domain can be obtained by
applying which of the following operation(s) on filter in spatial domain?
a) Fourier transform
b) Inverse Fourier transform
c) None of the mentioned
d) All of the mentioned
View Answer
Answer: a
Explanation: Filters in the spatial domain and frequency domain form a Fourier transform pair. A
frequency domain filter corresponding to a given spatial domain filter can be obtained by applying
the Fourier transform to the spatial domain filter.
276. Which of the following filtering is done in frequency domain in correspondence to lowpass
filtering in spatial domain?
a) Gaussian filtering
b) Unsharp mask filtering
c) High-boost filtering
d) None of the mentioned
View Answer
Answer: a
Explanation: A plot of Gaussian filter in frequency domain can be recognized similar to lowpass
filter in spatial domain.
277. Using the feature of reciprocal relationship of filter in spatial domain and corresponding
filter in frequency domain, which of the following fact is true?
a) The narrower the frequency domain filter results in increased blurring
b) The wider the frequency domain filter results in increased blurring
c) The narrower the frequency domain filter results in decreased blurring
d) None of the mentioned
View Answer
Answer: a
Explanation: The reciprocal relationship means that the narrower the frequency domain filter becomes,
the more frequency components it attenuates, and so the more blurring it produces.
Answer: a
Explanation: Since, edges and sharp transitions contribute significantly to high-frequency
contents in the gray level of an image. So, smoothing is done by attenuating a range of high-
frequency components.
Answer: d
Explanation: Lowpass filters are considered of three types: Ideal, Butterworth, and Gaussian.
280. Which of the following lowpass filters is/are covers the range of very sharp filter function?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned
View Answer
Answer: a
Explanation: Ideal lowpass filter covers the range of very sharp filter functioning of lowpass
filters.
281. Which of the following lowpass filters is/are covers the range of very smooth filter
function?
a) Ideal lowpass filters
b) Butterworth lowpass filter
c) Gaussian lowpass filter
d) All of the mentioned
View Answer
Answer: c
Explanation: Gaussian lowpass filter covers the range of very smooth filter functioning of
lowpass filters.
282. Butterworth lowpass filter has a parameter, filter order, determining its functionality as very
sharp or very smooth filter function or an intermediate filter function. If the parameter value is
very high, the filter approaches to which of the following filter(s)?
a) Ideal lowpass filter
b) Gaussian lowpass filter
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal lowpass
filter, while for lower order value it has a smoother form behaving like Gaussian lowpass filter.
283. Butterworth lowpass filter has a parameter, filter order, determining its functionality as very
sharp or very smooth filter function or an intermediate filter function. If the parameter value is of
lower order, the filter approaches to which of the following filter(s)?
a) Ideal lowpass filter
b) Gaussian lowpass filter
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: For high value of filter order Butterworth lowpass filter behaves as Ideal lowpass
filter, while for lower order value it has a smoother form behaving like Gaussian lowpass filter.
284. In a filter, all the frequencies inside a circle of radius D0 are not attenuated while all
frequencies outside circle are completely attenuated. The D0 is the specified nonnegative
distance from origin of the Fourier transform. Which of the following filter(s) characterizes the
same?
a) Ideal filter
b) Butterworth filter
c) Gaussian filter
d) All of the mentioned
View Answer
Answer: a
Explanation: In ideal filter all the frequencies inside a circle of radius D0 are not attenuated
while all frequencies outside the circle are completely attenuated.
285. In an ideal lowpass filter case, what is the relation between the filter radius and the blurring
effect caused because of the filter?
a) Filter size is directly proportional to blurring caused because of filter
b) Filter size is inversely proportional to blurring caused because of filter
c) There is no relation between filter size and blurring caused because of it
d) None of the mentioned
View Answer
Answer: b
Explanation: Increase in filter size, removes less power from the image and so less severe
blurring occurs.
Answer: c
Explanation: The lowpass filter has two different characteristics: one is a dominant component at the
origin and the other is a set of concentric, circular components about the center component.
287. What is the relation for the components of ideal lowpass filter and the image enhancement?
a) The concentric component is primarily responsible for blurring
b) The center component is primarily for the ringing characteristic of ideal filter
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: d
Explanation: The center component of ideal lowpass filter is primarily responsible for blurring
while, concentric component is primarily for the ringing characteristic of ideal filter.
288. Using the feature of reciprocal relationship of filter in spatial domain and corresponding
filter in frequency domain along with convolution, which of the following fact is true?
a) The narrower the frequency domain filter more severe is the ringing
b) The wider the frequency domain filter more severe is the ringing
c) The narrower the frequency domain filter less severe is the ringing
d) None of the mentioned
View Answer
Answer: a
Explanation: The reciprocal relationship means that the narrower the frequency domain filter becomes,
the more frequency components it attenuates, so blurring increases and the ringing becomes more
severe.
289. Which of the following defines the expression for BLPF H(u, v) of order n, where D(u, v) is
the distance from point (u, v), D0 is the distance defining cutoff frequency?
a)
b)
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: BLPF is the Butterworth lowpass filter and is defined as:
H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n)).
289. Which of the following defines the expression for ILPF H(u, v) of order n, where D(u, v) is
the distance from point (u, v), D0 is the distance defining cutoff frequency?
a)
b)
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: ILPF is the Ideal lowpass filter and is defined as:
H(u, v) = 1 if D(u, v) ≤ D0, and H(u, v) = 0 if D(u, v) > D0.
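For illustration, the three lowpass transfer functions discussed in this group of questions can be sketched as below, assuming a centred (fftshift-style) frequency rectangle; the helper names distance_grid, ilpf, blpf and glpf are chosen here only for demonstration.

import numpy as np

def distance_grid(M, N):
    """D(u, v): distance of each frequency point from the centre of the rectangle."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    V, U = np.meshgrid(v, u)
    return np.sqrt(U**2 + V**2)

def ilpf(D, D0):            # ideal: 1 inside the cutoff circle, 0 outside
    return (D <= D0).astype(float)

def blpf(D, D0, n=2):       # Butterworth: 1 / (1 + (D/D0)^(2n))
    return 1.0 / (1.0 + (D / D0) ** (2 * n))

def glpf(D, D0):            # Gaussian: exp(-D^2 / (2 D0^2))
    return np.exp(-(D**2) / (2.0 * D0**2))

D = distance_grid(64, 64)
print(ilpf(D, 15).sum(), blpf(D, 15)[32, 32], glpf(D, 15)[32, 32])  # centre value is 1.0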
290. State the statement true or false: "BLPF has sharp discontinuity and ILPF doesn't, and
so ILPF establishes a clear cutoff b/w passed and filtered frequencies".
a) True
b) False
View Answer
Answer: b
Explanation: ILPF has sharp discontinuity and BLPF doesn't, so it is the ILPF that establishes a clear
cutoff b/w passed and filtered frequencies.
Answer: a
Explanation: A Butterworth filter of order 1 has no ringing and ringing exist for order 2 although
is imperceptible. A Butterworth filter of higher order shows significant factor of ringing.
Answer: b
Explanation: In frequency domain terminology unsharp masking is defined as “obtaining a
highpass filtered image by subtracting from the given image a lowpass filtered version of itself”.
293. Which of the following is/ are a generalized form of unsharp masking?
a) Lowpass filtering
b) High-boost filtering
c) Emphasis filtering
d) All of the mentioned
View Answer
Answer: b
Explanation: Unsharp masking is defined as “obtaining a highpass filtered image by subtracting
from the given image a lowpass filtered version of itself” while high-boost filtering generalizes it
by multiplying the input image by a constant, say A≥1.
294. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the input
image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y). Which of the
following fact validates if A=1?
a) High-boost filtering reduces to regular Highpass filtering
b) High-boost filtering reduces to regular Lowpass filtering
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A=1, High-boost filtering reduces to regular Highpass
filtering.
295. High boost filtered image is expressed as: fhb = A f(x, y) – flp(x, y), where f(x, y) the input
image, A is a constant and flp(x, y) is the lowpass filtered version of f(x, y). Which of the
following fact(s) validates if A increases past 1?
a) The contribution of the image itself becomes more dominant
b) The contribution of the highpass filtered version of image becomes less dominant
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: High boost filtered image is modified as: fhb = (A-1) f(x, y) +f(x, y) – flp(x, y)
i.e. fhb = (A-1) f(x, y) + fhp(x, y). So, when A>1, the contribution of the image itself becomes
more dominant over the highpass filtered version of image.
296. If, Fhp(u, v)=F(u, v) – Flp(u, v) and Flp(u, v) = Hlp(u, v)F(u, v), where F(u, v) is the image
in frequency domain with Fhp(u, v) its highpass filtered version, Flp(u, v) its lowpass filtered
component and Hlp(u, v) the transfer function of a lowpass filter. Then, unsharp masking can be
implemented directly in frequency domain by using a filter. Which of the following is the
required filter?
a) Hhp(u, v) = Hlp(u, v)
b) Hhp(u, v) = 1 + Hlp(u, v)
c) Hhp(u, v) = – Hlp(u, v)
d) Hhp(u, v) = 1 – Hlp(u, v)
View Answer
Answer: d
Explanation: Unsharp masking can be implemented directly in frequency domain by using a
composite filter: Hhp(u, v) = 1 – Hlp(u, v).
297. Unsharp masking can be implemented directly in frequency domain by using a filter: Hhp(u,
v) = 1 – Hlp(u, v), where Hlp(u, v) the transfer function of a lowpass filter. What kind of filter is
Hhp(u, v)?
a) Composite filter
b) M-derived filter
c) Constant k filter
d) None of the mentioned
View Answer
Answer: a
Explanation: Unsharp masking can be implemented directly in frequency domain by using a
composite filter: Hhp(u, v) = 1 – Hlp(u, v).
298. If unsharp masking can be implemented directly in frequency domain by using a composite
filter: Hhp(u, v) = 1 – Hlp(u, v), where Hlp(u, v) the transfer function of a lowpass filter. Then,
the composite filter for High-boost filtering is
a) Hhb(u, v) = 1 – Hhp(u, v)
b) Hhb(u, v) = 1 + Hhp(u, v)
c) Hhb(u, v) = (A-1) – Hhp(u, v), A is a constant
d) Hhb(u, v) = (A-1) + Hhp(u, v), A is a constant
View Answer
Answer: d
Explanation: For given composite filter of unsharp masking Hhp(u, v) = 1 – Hlp(u, v), the
composite filter for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v).
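A rough sketch of frequency-domain high-boost filtering under the composite filter Hhb(u, v) = (A - 1) + Hhp(u, v) with Hhp = 1 - Hlp. The lowpass transfer function here is a Gaussian with illustrative constants, and the function name is an assumption, so this only shows the wiring of the method, not a tuned filter.

import numpy as np

def high_boost_frequency(img, Hlp, A=1.5):
    """fhb = (A - 1) f + fhp, implemented as Hhb(u, v) = (A - 1) + (1 - Hlp(u, v))."""
    F = np.fft.fftshift(np.fft.fft2(img))     # centred spectrum to match Hlp's layout
    Hhb = (A - 1.0) + (1.0 - Hlp)             # composite high-boost transfer function
    out = np.fft.ifft2(np.fft.ifftshift(Hhb * F))
    return np.real(out)

img = np.random.rand(64, 64)
D = np.sqrt(((np.arange(64) - 32)[:, None])**2 + ((np.arange(64) - 32)[None, :])**2)
Hlp = np.exp(-(D**2) / (2 * 15.0**2))         # Gaussian lowpass placeholder, D0 = 15
print(high_boost_frequency(img, Hlp).shape)   # (64, 64)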
299. The frequency domain Laplacian is closer to which of the following mask?
a) Mask that excludes the diagonal neighbors
b) Mask that excludes neighbors in 4-adjacancy
c) Mask that excludes neighbors in 8-adjacancy
d) None of the mentioned
View Answer
Answer: a
Explanation: The frequency domain Laplacian is closer to mask that excludes the diagonal
neighbors.
Answer: c
Explanation: To accentuate the contribution to enhancement made by high-frequency
components, we have to multiply the highpass filter by a constant and add an offset to the
highpass filter to prevent eliminating zero frequency term by filter.
Answer: c
Explanation: High frequency emphasis is the method that accentuate the contribution to
enhancement made by high-frequency component. In this we multiply the highpass filter by a
constant and add an offset to the highpass filter to prevent eliminating zero frequency term by
filter.
302. Which of the following a transfer function of High frequency emphasis {Hhfe(u, v)} for
Hhp(u, v) being the highpass filtered version of image?
a) Hhfe(u, v) = 1 – Hhp(u, v)
b) Hhfe(u, v) = a – Hhp(u, v), a≥0
c) Hhfe(u, v) = 1 – b Hhp(u, v), a≥0 and b>a
d) Hhfe(u, v) = a + b Hhp(u, v), a≥0 and b>a
View Answer
Answer: d
Explanation: The transfer function of High frequency emphasis is given as:Hhfe(u, v) = a + b
Hhp(u, v), a≥0 and b>a.
303. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b Hhp(u, v),
for Hhp(u, v) being the highpass filtered version of image,a≥0 and b>a. for certain values of a
and b it reduces to High-boost filtering. Which of the following is the required value?
a) a = (A-1) and b = 0,A is some constant
b) a = 0 and b = (A-1),A is some constant
c) a = 1 and b = 1
d) a = (A-1) and b =1,A is some constant
View Answer
Answer: d
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v) and the transfer function for High-boost filtering is Hhb(u, v) = (A-1) + Hhp(u, v), A
being some constant. So, for a = (A-1) and b =1, Hhfe(u, v) = Hhb(u, v).
304. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b Hhp(u, v),
for Hhp(u, v) being the highpass filtered version of image,a≥0 and b>a. What happens when b
increases past 1?
a) The high frequency are emphasized
b) The low frequency are emphasized
c) All frequency are emphasized
d) None of the mentioned
View Answer
Answer: a
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of the image, a≥0 and b>a. When b
increases past 1, the high frequencies are emphasized.
305. The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b Hhp(u, v),
for Hhp(u, v) being the highpass filtered version of image,a≥0 and b>a. When b increases past
1 the filtering process is specifically termed as
a) Unsharp masking
b) High-boost filtering
c) Emphasized filtering
d) None of the mentioned
View Answer
Answer: c
Explanation: The transfer function of High frequency emphasis is given as: Hhfe(u, v) = a + b
Hhp(u, v), for Hhp(u, v) being the highpass filtered version of the image, a≥0 and b>a. When b
increases past 1, the high frequencies are emphasized and so the filtering process is better known
as Emphasized filtering.
306. Validate the statement “Because of High frequency emphasis the gray-level tonality due to
low frequency components is not lost”.
a) True
b) False
View Answer
Answer: a
Explanation: Because of High frequency emphasis the gray-level tonality due to low frequency
components is not lost.
Answer: d
Explanation: An image is expressed as the multiplication of illumination and reflectance
component.
308. If an image is expressed as the multiplication of illumination and reflectance component i.e.
f(x, y)= i(x, y) * r(x, y), then Validate the statement “We can directly use the equation f(x, y)=
i(x, y) * r(x, y) to operate separately on the frequency component of illumination and
reflectance” .
a) True
b) False
View Answer
Answer: b
Explanation: For an image expressed as the multiplication of illumination and reflectance
components, i.e. f(x, y) = i(x, y) * r(x, y), the equation can't be used directly to operate separately
on the frequency components of illumination and reflectance because the Fourier transform of the
product of two functions is not separable.
309. In Homomorphic filtering which of the following operations is used to convert input image
to discrete Fourier transformed function?
a) Logarithmic operation
b) Exponential operation
c) Negative transformation
d) None of the mentioned
View Answer
Answer: a
Explanation: For an image expressed as the multiplication of illumination and reflectance
components, i.e. f(x, y) = i(x, y) * r(x, y), the equation can't be used directly to operate separately
on the frequency components of illumination and reflectance because the Fourier transform of the
product of two functions is not separable. So, the logarithmic operation is used: with
z(x, y) = ln f(x, y), we get ℑ{z(x, y)} = ℑ{ln i(x, y)} + ℑ{ln r(x, y)}.
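A minimal sketch of the homomorphic pipeline implied by this explanation (logarithm, Fourier transform, filter, inverse transform, exponential). The transfer function H below uses a commonly seen Gaussian-based shape with illustrative constants, and log1p/expm1 are used simply to avoid ln(0), so this shows only the structure of the method under those assumptions.

import numpy as np

def homomorphic_filter(img, H):
    """Take ln(f) so illumination and reflectance become additive, filter, then exp back."""
    z = np.log1p(img.astype(float))                   # ln(f); log1p avoids ln(0)
    Z = np.fft.fftshift(np.fft.fft2(z))
    S = np.fft.ifft2(np.fft.ifftshift(H * Z))         # attenuate low freq, boost high freq
    return np.expm1(np.real(S))                       # invert the logarithm

img = np.random.rand(64, 64) * 255
D = np.sqrt(((np.arange(64) - 32)[:, None])**2 + ((np.arange(64) - 32)[None, :])**2)
H = 0.5 + (2.0 - 0.5) * (1.0 - np.exp(-(D**2) / (2 * 30.0**2)))  # gammaL=0.5, gammaH=2.0 (illustrative)
print(homomorphic_filter(img, H).shape)                          # (64, 64)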
310. A class of system that achieves the separation of illumination and reflectance component of
an image is termed as
a) Base class system
b) Homomorphic system
c) Base separation system
d) All of the mentioned
View Answer
Answer: b
Explanation: Homomorphic system is a class of system that achieves the separation of
illumination and reflectance component of an image.
311. Which of the following image component is characterized by a slow spatial variation?
a) Illumination component
b) Reflectance component
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation.
312. Which of the following image component varies abruptly particularly at the junction of
dissimilar objects?
a) Illumination component
b) Reflectance component
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: The reflectance component of an image varies abruptly particularly at the junction
of dissimilar objects.
313. The reflectance component of an image varies abruptly, particularly at the junction of
dissimilar objects. This characteristic leads us to associate reflectance with
a) The low frequency of Fourier transform of logarithm of the image
b) The high frequency of Fourier transform of logarithm of the image
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: The reflectance component of an image varies abruptly, so, is associated with the
high frequency of Fourier transform of logarithm of the image.
314. The illumination component of an image is characterized by a slow spatial variation. This
characteristic leads us to associate illumination with
a) The low frequency of Fourier transform of logarithm of the image
b) The high frequency of Fourier transform of logarithm of the image
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: The illumination component of an image is characterized by a slow spatial
variation, so, is associated with the low frequency of Fourier transform of logarithm of the
image.
315. If the contribution made by illumination component of image is decreased and the
contribution of reflectance component is amplified, what will be the net result?
a) Dynamic range compression
b) Contrast enhancement
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: The illumination component of an image is characterized by a slow spatial variation
and the reflectance component of an image varies abruptly particularly at the junction of
dissimilar objects, so, if the contribution made by illumination component of image is decreased
and the contribution of reflectance component is amplified then there is simultaneous dynamic
range compression and contrast stretching.
Digital Image Processing Questions and
Answers – Intensity Transformation
Functions
This set of Digital Image Processing Multiple Choice Questions & Answers (MCQs) focuses on
“Intensity Transformation Functions”.
316. How is negative of an image obtained with intensity levels [0,L-1] with “r” and “s” being
pixel values?
a) s = L – 1 + r
b) s = L – 1 – r
c) s = L + 1 + r
d) s = L + 1 + r
View Answer
Answer: b
Explanation: The negative is obtained using s = L – 1 – r.
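A one-line illustration of this formula, assuming an 8-bit image so L = 256:

import numpy as np

L = 256
img = np.random.randint(0, L, size=(4, 4), dtype=np.uint16)
negative = (L - 1) - img           # s = L - 1 - r reverses the intensity levels
print(img[0, 0], negative[0, 0])   # e.g. a pixel of 10 maps to 245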
Answer: a
Explanation: s = c.log(1 + r) is the log transformation.
318. Power-law transformations have the basic form of _______ where c and ∆ are
constants.
a) s = c + r^∆
b) s = c – r^∆
c) s = c * r^∆
d) s = c / r^∆
View Answer
Answer: c
Explanation: s = c * r^∆ is called the Power-law transformation.
319. For what value of the output must the Power-law transformation account for offset?
a) No offset needed
b) All values
c) One
d) Zero
View Answer
Answer: d
Explanation: When the output is Zero, an offset is necessary.
Answer: a
Explanation: The exponent in Power-law is called gamma and the process used to correct the
response of Power-law transformation is called Gamma Correction.
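An illustrative sketch of power-law (gamma) correction on normalized intensities, assuming an 8-bit image; the name gamma_correct and the choice of gamma = 0.4 are only for demonstration (gamma < 1 brightens dark regions).

import numpy as np

def gamma_correct(img, gamma, c=1.0, L=256):
    """s = c * r^gamma on normalized intensities; gamma < 1 brightens dark regions."""
    r = img.astype(float) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(gamma_correct(img, 0.4))   # dark pixel values are pushed toward brighter levels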
321. Which process expands the range of intensity levels in an image so that it spans the full
intensity range of the display?
a) Shading correction
b) Contrast stretching
c) Gamma correction
d) None of the Mentioned
View Answer
Answer: b
Explanation: Contrast stretching is the process used to expand the range of intensity levels in an image.
Answer: c
Explanation: Highlighting a specific range of intensities of an image is called Intensity Slicing.
323. Highlighting the contribution made to total image by specific bits instead of highlighting
intensity-level changes is called
a) Intensity Highlighting
b) Byte-Slicing
c) Bit-plane slicing
d) None of the Mentioned
View Answer
Answer: c
Explanation: It is called Bit-plane slicing.
324. Which of the following involves reversing the intensity levels of an image?
a) Log Transformations
b) Piecewise Linear Transformations
c) Image Negatives
d) None of the Mentioned
View Answer
Answer: c
Explanation: Image negatives use reversing intensity levels.
Answer: d
Explanation: Piecewise Linear Transformation function involves all the mentioned functions.
326. What is the set generated using infinite-value membership functions, called?
a) Crisp set
b) Boolean set
c) Fuzzy set
d) All of the mentioned
View Answer
Answer: c
Explanation: It is called fuzzy set.
327. Which is the set, whose membership only can be true or false, in bi-values Boolean logic?
a) Boolean set
b) Crisp set
c) Null set
d) None of the mentioned
View Answer
Answer: b
Explanation: The so called Crisp set is the one in which membership only can be true or false, in
bi-values Boolean logic.
328. If Z is a set of elements with a generic element z, i.e. Z = {z}, then this set is called
a) Universe set
b) Universe of discourse
c) Derived set
d) None of the mentioned
View Answer
Answer: b
Explanation: It is called the universe of discourse.
Answer: c
Explanation: A fuzzy set is characterized by a membership function.
Answer: a
Explanation: It is called an Empty set.
331. Which of the following is a type of Membership function?
a) Triangular
b) Trapezoidal
c) Sigma
d) All of the mentioned
View Answer
Answer: d
Explanation: All of them are types of Membership functions.
Answer: d
Explanation: All of the mentioned above are types of Membership functions.
333. Using IF-THEN rule to create the output of fuzzy system is called .
a) Inference
b) Implication
c) Both the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: It is called Inference or Implication.
Answer: a
Explanation: Maturity is the independent variable of fuzzy output.
336. Which gray-level transformation increases the dynamic range of gray-levels in the image?
a) Power-law transformations
b) Negative transformations
c) Contrast stretching
d) None of the mentioned
View Answer
Answer: c
Explanation: Increasing the dynamic range of gray-levels in the image is the basic idea behind
contrast stretching.
337. When is the contrast stretching transformation a linear function, for r and s as gray-value of
image before and after processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) None of the mentioned
View Answer
Answer: a
Explanation: If r1 = s1 and r2 = s2 the contrast stretching transformation is a linear function.
338. When is the contrast stretching transformation a thresholding function, for r and s as gray-
value of image before and after processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) None of the mentioned
View Answer
Answer: b
Explanation: If r1 = r2, s1 = 0 and s2 = L – 1, the contrast stretching transformation is a
thresholding function.
339. What condition prevents the intensity artifacts to be created while processing with contrast
stretching, if r and s are gray-values of image before and after processing respectively?
a) r1 = s1 and r2 = s2
b) r1 = r2, s1 = 0 and s2 = L – 1, L is the max gray value allowed
c) r1 = 1 and r2 = 0
d) r1 ≤ r2 and s1 ≤ s2
View Answer
Answer: d
Explanation: While processing through contrast stretching, if r1 ≤ r2 and s1 ≤ s2 is maintained, the
function remains single valued and so monotonically increasing. This helps in the prevention of
creation of intensity artifacts.
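A rough sketch of piecewise-linear contrast stretching through the control points (r1, s1) and (r2, s2), keeping r1 ≤ r2 and s1 ≤ s2 so the mapping stays single-valued and monotonically increasing; the helper name and the toy low-contrast image are assumptions made only for this illustration.

import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear stretch through (r1, s1) and (r2, s2); r1<=r2 and s1<=s2 keep it single-valued."""
    r = img.astype(float)
    out = np.piecewise(
        r,
        [r < r1, (r >= r1) & (r <= r2), r > r2],
        [lambda v: s1 / max(r1, 1) * v,                              # segment below r1
         lambda v: s1 + (s2 - s1) / max(r2 - r1, 1) * (v - r1),      # segment between r1 and r2
         lambda v: s2 + (L - 1 - s2) / max(L - 1 - r2, 1) * (v - r2)])  # segment above r2
    return np.clip(out, 0, L - 1).astype(np.uint8)

img = np.random.randint(60, 190, size=(8, 8), dtype=np.uint8)   # low-contrast toy image
stretched = contrast_stretch(img, r1=60, s1=0, r2=190, s2=255)
print(img.min(), img.max(), stretched.min(), stretched.max())   # output spans a wider range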
340. A contrast stretching result been obtained by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L –
1), where, r and s are gray-values of image before and after processing respectively, L is the max
gray value allowed and rmax and rmin are maximum and minimum gray-values in image
respectively. What should we term the transformation function if r1 = r2 = m, some mean gray-
value.
a) Linear function
b) Thresholding function
c) Intermediate function
d) None of the mentioned
View Answer
Answer: b
Explanation: From (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L – 1), we have s1 = 0 and s2 = L – 1 and
if r1 = r2 = m is set then the result becomes r1 = r2, s1 = 0 and s2 = L – 1, i.e. a thresholding
function.
Answer: d
Explanation: Gray-level slicing is done by two approaches: one approach is to give all gray levels of a
specific range a high value and a low value to all other gray levels.
The second approach is to brighten the pixel gray-values of interest and preserve the background.
I.e. in both, a specific range of gray levels is highlighted.
342. What is/are the approach(s) of the gray-level slicing?
a) To give all gray level of a specific range high value and a low value to all other gray levels
b) To brighten the pixels gray-value of interest and preserve the background
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: There are basically two approach of gray-level slicing:
One approach is to give all gray level of a specific range high value and a low value to all other
gray levels.
Second approach is to brighten the pixels gray-value of interest and preserve the background.
343. Which of the following transform produces a binary image after processing?
a) Contrast stretching
b) Gray-level slicing
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: The approach of gray-level slicing “to give all gray level of a specific range high
value and a low value to all other gray levels” produces a binary image.
One of the transformation in Contrast stretching darkens the value of r (input image gray-level)
below m (some predefined gray-value) and brightens the value of r above m, giving a binary
image as result.
344. Specific bit contribution in the image highlighting is the basic idea of
a) Contrast stretching
b) Bit –plane slicing
c) Thresholding
d) Gray-level slicing
View Answer
Answer: b
Explanation: Bit-plane slicing highlights the contribution of specific bits made to total image,
instead of highlighting a specific gray-level range.
345. In bit-plane slicing if an image is represented by 8 bits and is composed of eight 1-bit plane,
with plane 0 showing least significant bit and plane 7 showing most significant bit. Then, which
plane(s) contain the majority of visually significant data.
a) Plane 4, 5, 6, 7
b) Plane 0, 1, 2, 3
c) Plane 0
d) Plane 2, 3, 4, 5
View Answer
Answer: a
Explanation: In bit-plane slicing, for the given data, the higher-order bits (mostly the top four)
contain most of the visually significant data.
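A small illustration of bit-plane slicing for an 8-bit image (toy random data, for demonstration only): extracting the eight 1-bit planes and reconstructing from planes 4 to 7 alone differs from the original by at most 15 gray levels, which is why the top planes carry most of the visually significant data.

import numpy as np

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
planes = [(img >> k) & 1 for k in range(8)]                    # plane 0 = LSB ... plane 7 = MSB
top_four = sum(planes[k].astype(np.uint16) << k for k in range(4, 8))
print(np.abs(img.astype(int) - top_four.astype(int)).max())    # <= 15: only the low 4 bits are lost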
346. If the Gaussian filter is expressed as H(u, v) = e^(-D^2(u, v) / (2 D0^2)), where D(u, v) is the
distance from point (u, v) and D0 is the distance defining the cutoff frequency, then for what value of
D(u, v) is the filter down to 0.607 of its maximum value?
a) D(u, v) = D0
b) D(u, v) = D02
c) D(u, v) = D03
d) D(u, v) = 0
View Answer
Answer: a
Explanation: For the given Gaussian filter of 2-D image, the value D(u, v) = D0 gives the filter a
down to 0.607 of its maximum value.
347. State the statement as true or false. “The GLPF did produce as much smoothing as the
BLPF of order 2 for the same value of cutoff frequency”.
a) True
b) False
View Answer
Answer: b
Explanation: For the same value of cutoff frequency, the GLPF did not produce as much
smoothing as the BLPF of order 2, because the profile of GLPF is not as tight as BLPF of order
2.
349. The lowpass filtering process can be applied in which of the following area(s)?
a) The field of machine perception, with application of character recognition
b) In field of printing and publishing industry
c) In field of processing satellite and aerial images
d) All of the mentioned
View Answer
Answer: d
Explanation: In case of broken characters recognition system, LPF is used. LPF is used as
preprocessing system in printing and publishing industry, and in case of remote sensed images
LPF is used to blur out as much detail as possible leaving the large feature recognizable.
350. The edges and other abrupt changes in gray-level of an image are associated
with
a) High frequency components
b) Low frequency components
c) Edges with high frequency and other abrupt changes in gray-level with low frequency
components
d) Edges with low frequency and other abrupt changes in gray-level with high frequency
components
View Answer
Answer: a
Explanation: High frequency components are related with the edges and other abrupt changes in
gray-level of an image.
351. A type of Image is called as VHRR image. What is the definition of VHRR image?
a) Very High Range Resolution image
b) Very High Resolution Range image
c) Very High Resolution Radiometer image
d) Very High Range Radiometer Image
View Answer
Answer: c
Explanation: A VHRR image is a Very High Resolution Radiometer Image.
352. The Image sharpening in frequency domain can be achieved by which of the following
method(s)?
a) Attenuating the high frequency components
b) Attenuating the low-frequency components
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: The Image sharpening in frequency domain is achieved by attenuating the low-
frequency components without disturbing the high-frequency components.
353. The function of filters in Image sharpening in frequency domain is to perform reverse
operation of which of the following Lowpass filter?
a) Gaussian Lowpass filter
b) Butterworth Lowpass filter
c) Ideal Lowpass filter
d) None of the Mentioned
View Answer
Answer: c
Explanation: The function of filters in Image sharpening in frequency domain is to perform
precisely reverse operation of Ideal Lowpass filter.
The transfer function of Highpass filter is obtained by relation: Hhp(u, v) = 1 – Hlp(u, v), where
Hlp(u, v) is transfer function of corresponding lowpass filter.
354. If D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v) is the
distance from point(u, v). Then what value does an Ideal Highpass filter will give if D(u, v) ≤ D0
and if D(u, v) > D0?
a) 0 and 1 respectively
b) 1 and 0 respectively
c) 1 in both case
d) 0 in both case
View Answer
Answer: a
Explanation: Unlike Ideal lowpass filter, an Ideal highpass filter attenuates the low-frequency
components and so gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.
355. What is the relation of the frequencies to a circle of radius D0, where D0 is the cutoff
distance measured from origin of frequency rectangle, for an Ideal Highpass filter?
a) IHPF sets all frequencies inside circle to zero
b) IHPF allows all frequencies, without attenuating, outside the circle
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: c
Explanation: An Ideal high pass filter gives 0 for D(u, v) ≤ D0 and 1 for D(u, v) >D0.
356. Which of the following is the transfer function of the Butterworth Highpass Filter, of order
n, D0 is the cutoff distance measured from origin of frequency rectangle and D(u, v) is the
distance from point(u, v)?
a)
b)
c)
d) none of the mentioned
View Answer
Answer: a
Explanation: The transfer function of the Butterworth highpass filter of order n, where D0 is the cutoff
distance measured from the origin of the frequency rectangle and D(u, v) is the distance from point (u, v),
is given by: H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n)).
357. Which of the following is the transfer function of the Ideal Highpass Filter? Given D0 is the
cutoff distance measured from origin of frequency rectangle and D(u, v) is the distance from
point(u, v).
a)
b)
c)
d) none of the mentioned
View Answer
Answer: b
Explanation: The transfer function of the Ideal highpass filter, where D0 is the cutoff distance
measured from the origin of the frequency rectangle and D(u, v) is the distance from point (u, v), is given
by: H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0.
358. Which of the following is the transfer function of the Gaussian Highpass Filter? Given D0 is
the cutoff distance measured from origin of frequency rectangle and D(u, v) is the distance from
point(u, v).
a)
b)
c)
d) none of the mentioned
View Answer
Answer: c
Explanation: The transfer function of the Gaussian highpass filter, where D0 is the cutoff distance
measured from the origin of the frequency rectangle and D(u, v) is the distance from point (u, v), is given
by: H(u, v) = 1 - e^(-D^2(u, v) / (2 D0^2)).
359. For a given image having smaller objects, which of the following filter(s), having D0 as the
cutoff distance measured from origin of frequency rectangle, would you prefer for a comparably
smoother result?
a) IHPF with D0 15
b) BHPF with D0 15 and order 2
c) GHPF with D0 15 and order 2
d) All of the mentioned
View Answer
Answer: c
Explanation: For the same format as for BHPF, GHPF gives a result comparably smoother than
BHPF. However, BHPF performance for filtering smaller object is comparable with IHPF.
360. Which of the following statement(s) is true for the given fact that “Applying Highpass
filters has an effect on the background of the output image”?
a) The average background intensity increases to near white
b) The average background intensity reduces to near black
c) The average background intensity changes to a value average of black and white
d) All of the mentioned
View Answer
Answer: b
Explanation: The Highpass filter eliminates the zero frequency components of the Fourier
transformed image HPFs are applied on. So, the average background intensity reduces to near
black.
Answer: c
Explanation: Rods are long slender receptors while cones are shorter and thicker receptors.
362. How is image formation in the eye different from that in a photographic camera
a) No difference
b) Variable focal length
c) Varying distance between lens and imaging plane
d) Fixed focal length
View Answer
Answer: b
Explanation: Fibers in ciliary body vary shape of the lens thereby varying its focal length.
363. Range of light intensity levels to which the human eye can adapt (in Log of Intensity-mL)
a) 10^-6 to 10^-4
b) 10^4 to 10^6
c) 10^-6 to 10^4
d) 10^-5 to 10^5
View Answer
Answer: c
Explanation: The range of light intensity to which the human eye can adapt is enormous,
on the order of 10^10, from 10^-6 to 10^4.
Answer: a
Explanation: It is the intensity as perceived by the human eye.
Answer: a
Explanation: The human eye achieves a wide dynamic range by changing the eye's overall sensitivity, and
this is called brightness adaptation.
Answer: d
Explanation: Retina is the innermost membrane of the human eye.
Answer: d
Explanation: Iris is responsible for controlling the amount of light that enters the human eye.
Answer: b
Explanation: Rods produce an overall picture of the field of view.
Answer: c
Explanation: Except the blind spot, receptors are radially distributed.
371. In 4-neighbours of a pixel p, how far are each of the neighbours located from p?
a) one pixel apart
b) four pixels apart
c) alternating pixels
d) none of the Mentioned
View Answer
Answer: a
Explanation: Each pixel is a unit distance apart from the pixel p.
372. If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path
between them consisting of pixels entirely in S.
a) continuous
b) ambiguous
c) connected
d) none of the Mentioned
View Answer
Answer: c
Explanation: Pixels p and q are said to be connected if there exists a path between them
consisting of pixels entirely in S.
373. If R is a subset of pixels, we call R a ________ of the image if R is a connected set.
a) Disjoint
b) Region
c) Closed
d) Adjacent
View Answer
Answer: b
Explanation: R is called a Region of the image.
374. Two regions are said to be ___________ if their union forms a connected set.
a) Adjacent
b) Disjoint
c) Closed
d) None of the Mentioned
View Answer
Answer: a
Explanation: The regions are said to be Adjacent to each other.
375. If an image contains K disjoint regions, what does the union of all the regions represent?
a) Background
b) Foreground
c) Outer Border
d) Inner Border
View Answer
Answer: b
Explanation: The union of all regions is called Foreground and its complement is called the
Background.
376. For a region R, the set of points that are adjacent to the complement of R is called as ________
a) Boundary
b) Border
c) Contour
d) All of the Mentioned
View Answer
Answer: d
Explanation: The words boundary, border and contour mean the same set.
377. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r centred at (x,y) is called :
a) Euclidean distance
b) City-Block distance
c) Chessboard distance
d) None of the Mentioned
View Answer
Answer: a
Explanation: Euclidean distance is measured using a radius from a defined centre.
378. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a diamond centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
View Answer
Answer: c
Explanation: Formation of a diamond is measured as City-Block distance.
379. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a square centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
View Answer
Answer: b
Explanation: Distance measured by forming a square around the centre is called Chessboard
distance.
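A small Python sketch (illustrative only, with arbitrary pixel coordinates) of the three distance measures in questions 377-379; note how the city-block and chessboard distances correspond to the diamond and square neighbourhoods described above:

    import math

    def euclidean(p, q):
        # D_e(p, q) = sqrt((x - s)^2 + (y - t)^2); points with D_e <= r lie inside a disk
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def city_block(p, q):
        # D_4(p, q) = |x - s| + |y - t|; points with D_4 <= r form a diamond
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def chessboard(p, q):
        # D_8(p, q) = max(|x - s|, |y - t|); points with D_8 <= r form a square
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (2, 3), (5, 7)
    print(euclidean(p, q), city_block(p, q), chessboard(p, q))   # 5.0 7 4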
Answer: d
Explanation: All the mentioned adjacency types are valid.
381. How many categories is color image processing basically divided into?
a) 4
b) 2
c) 3
d) 5
View Answer
Answer: b
Explanation: Color image processing is divided into two major areas: full-color and pseudo-color
processing.
Answer: a
Explanation: Color image processing is divided into two major areas: full-color and pseudo-color
processing. In the first category, the images are acquired with a full-color sensor like color TV or
color scanner. In the second category, there is a problem of assigning a color to a particular
monochrome intensity or range of intensities.
383. What are the basic quantities that are used to describe the quality of a chromatic light
source?
a) Radiance, brightness and wavelength
b) Brightness and luminance
c) Radiance, brightness and luminance
d) Luminance and radiance
View Answer
Answer: c
Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness.
384. What is the quantity that is used to measure the total amount of energy flowing from the
light source?
a) Brightness
b) Intensity
c) Luminance
d) Radiance
View Answer
Answer: d
Explanation: Three quantities are used to describe the quality of a chromatic light source:
radiance, luminance and brightness. Radiance is used to measure the total amount of energy that
flows from the light source and is generally measured in watts (W).
385. What are the characteristics that are used to distinguish one color from the other?
a) Brightness, Hue and Saturation
b) Hue, Brightness and Intensity
c) Saturation, Hue
d) Brightness, Saturation and Intensity
View Answer
Answer: a
Explanation: The characteristics generally used to distinguish one color from another are
brightness, hue and saturation. Brightness embodies the chromatic notion of intensity. Hue is an
attribute associated with dominant wavelength in a mixture of light waves. Saturation refers to
the relative purity or the amount of white light mixed with a hue.
386. What are the characteristics that are taken together in chromaticity?
a) Saturation and Brightness
b) Hue and Saturation
c) Hue and Brightness
d) Saturation, Hue and Brightness
View Answer
Answer: b
Explanation: Hue and saturation taken together are called chromaticity; therefore, a color
may be characterized by its brightness and chromaticity.
387. Which of the following represent the correct equations for trichromatic coefficients?
a) x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z)
b) x=(Y+Z)/(X+Y+Z), y=(X+Z)/(X+Y+Z), z=(X+Y)/(X+Y+Z)
c) x=X/(X-Y+Z), y=Y/(X-Y+Z), z=Z/(X-Y+Z)
d) x=(-X)/(X+Y+Z), y=(-Y)/(X+Y+Z), z=(-Z)/(X+Y+Z)
View Answer
Answer: a
Explanation: Tri-stimulus values are the amounts of red, green and blue needed to form any
particular color, and they are denoted as X, Y and Z respectively. A color is then specified by its
trichromatic coefficients x, y and z: x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z).
Answer: d
Explanation: The amounts of red, green and blue needed to form any particular color are called
the tri-stimulus values and are denoted by X, Y and Z respectively. A color is then specified by
its trichromatic coefficients, whose equations are formed from tri-stimulus values.
389. What is the value obtained by the sum of the three trichromatic coefficients?
a) 0
b)-1
c) 1
d) Null
View Answer
Answer: c
Explanation: From the equations x=X/(X+Y+Z), y=Y/(X+Y+Z), z=Z/(X+Y+Z) it follows
that the sum of the coefficients is x+y+z=1.
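A quick numeric check of questions 387-389, using assumed tri-stimulus values purely for illustration:

    # Assumed tri-stimulus values, for demonstration only
    X, Y, Z = 30.0, 50.0, 20.0
    x = X / (X + Y + Z)   # 0.3
    y = Y / (X + Y + Z)   # 0.5
    z = Z / (X + Y + Z)   # 0.2
    print(x + y + z)      # 1.0, as stated in question 389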
390. What is the name of the area of the triangle in the C.I.E chromaticity diagram that shows a typical
range of colors produced by RGB monitors?
a) Color gamut
b) Tricolor
c) Color game
d) Chromatic colors
View Answer
Answer: a
Explanation: The triangle in the C.I.E chromaticity diagram shows a typical range of colors, called the
color gamut, produced by RGB monitors. The irregular region inside the triangle is representative
of the color gamut of today's high-quality color printing devices.
Answer: c
Explanation: A color model is also called a color space or color system. Its purpose is to
facilitate the specification of colors in some standard, generally accepted way.
Answer: a
Explanation: Images represented in the RGB color model consist of three component images,
one for each primary color. When fed into an RGB monitor, these three images combine on the
phosphor screen to produce a composite color image. The number of bits used to represent each
pixel in RGB space is called the pixel depth.
393. How many bit RGB color image is represented by full-color image?
a) 32-bit RGB color image
b) 24-bit RGB color image
c) 16-bit RGB color image
d) 8-bit RGB color image
View Answer
Answer: b
Explanation: The term full-color image is used often to denote a 24-bit RGB color image. The
total number of colors in a 24-bit RGB color image is (2^8)^3 = 16,777,216.
394. What is the equation used to obtain S component of each RGB pixel in RGB color format?
a) S=1+3/(R+G+B) [min(R,G,B)].
b) S=1+3/(R+G+B) [max(R,G,B)].
c) S=1-3/(R+G+B) [max(R,G,B)].
d) S=1-3/(R+G+B) [min(R,G,B)].
View Answer
Answer: d
Explanation: If an image is given in RGB format then the saturation component is obtained by
the equation: S = 1 - [3/(R+G+B)] min(R,G,B).
395. What is the equation used to obtain I(Intensity) component of each RGB pixel in RGB color
format?
a) I=1/2(R+G+B)
b) I=1/3(R+G+B)
c) I=1/3(R-G-B)
d) I=1/3(R-G+B)
View Answer
Answer: b
Explanation: If an image is given in RGB format then the intensity (I) component is obtained by
the equation, I=1/3 (R+G+B).
396. What is the equation used for obtaining R value in terms of HSI components?
a) R=I[1-(S cosH)/cos(60°-H) ].
b) R=I[1+(S cosH)/cos(120°-H)].
c) R=I[1+(S cosH)/cos(60°-H) ].
d) R=I[1+(S cosH)/cos(30°-H) ].
View Answer
Answer: c
Explanation: Given values of HSI in the interval [0, 1], the R value in the RGB components is
given by the equation: R = I[1 + (S cosH)/cos(60° - H)].
397. What is the equation used for calculating B value in terms of HSI components?
a) B=I(1+S)
b) B=S(1-I)
c) B=S(1+I)
d) B=I(1-S)
View Answer
Answer: d
Explanation: Given values of HSI in the interval [0, 1], the B value in the RGB components is
given by the equation: B=I(1-S).
398. What is the equation used for calculating G value in terms of HSI components?
a) G=3I-(R+B)
b) G=3I+(R+B)
c) G=3I-(R-B)
d) G=2I-(R+B)
View Answer
Answer: a
Explanation: Given values of HSI in the interval [0, 1], the G value in the RGB components is
given by the equation: G = 3I - (R + B).
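The RGB/HSI equations quoted in questions 394-398 can be collected into a short sketch. This is an illustrative fragment only; the HSI-to-RGB part is written for the RG sector (0° ≤ H < 120°), which is the case the quoted R equation covers:

    import math

    def rgb_to_s_i(R, G, B):
        # S = 1 - [3 / (R + G + B)] * min(R, G, B),  I = (R + G + B) / 3
        S = 1.0 - 3.0 * min(R, G, B) / (R + G + B)
        I = (R + G + B) / 3.0
        return S, I

    def hsi_to_rgb_rg_sector(H_deg, S, I):
        # Valid for the RG sector only: 0 <= H < 120 degrees
        H = math.radians(H_deg)
        B = I * (1.0 - S)
        R = I * (1.0 + S * math.cos(H) / math.cos(math.radians(60.0) - H))
        G = 3.0 * I - (R + B)
        return R, G, B

    print(rgb_to_s_i(0.6, 0.3, 0.1))          # saturation and intensity for an assumed RGB triple
    print(hsi_to_rgb_rg_sector(40.0, 0.7, 0.33))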
399. Which of the following color models are used for color printing?
a) RGB
b) CMY
c) CMYK
d) CMY and CMYK
View Answer
Answer: d
Explanation: The hardware oriented models which are prominently used in the color printing
process are CMY (cyan, magenta and yellow) and CMYK (cyan, magenta, yellow and black).
400. What does the total number of pixels in the region defines?
a) Perimeter
b) Area
c) Intensity
d) Brightness
View Answer
Answer: b
Explanation: The area of a region is defined by the total number of pixels in the region. The
perimeter is given by the number of pixels along the length of the boundary of the region.
Answer: c
Explanation: The compactness of a region is defined as (perimeter)^2/area. Thus, the compactness
of a region is a dimensionless quantity.
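A tiny worked example of the area, perimeter and compactness descriptors discussed above, using a hypothetical square region:

    # Hypothetical region: a filled 10 x 10 block of pixels
    area = 10 * 10                       # total number of pixels in the region
    perimeter = 4 * 10 - 4               # pixels along the boundary of the block (36)
    compactness = perimeter**2 / area    # (perimeter)^2 / area, a dimensionless quantity
    print(area, perimeter, compactness)  # 100 36 12.96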
Answer: a
Explanation: With the exception of errors introduced by the rotation of the digital image, we can
state that compactness of a region is insensitive to the orientation of the image.
404. Which of the following measures are not used to describe a region?
a) Mean and median of grey values
b) Minimum and maximum of grey values
c) Number of pixels alone
d) Number of pixels above and below mean
View Answer
Answer: c
Explanation: Some of the measures which are used to describe a region are mean and median of
grey values, minimum and maximum of grey values and number of pixels above and below
mean. The area of the region, i.e., the total number of pixels in the region cannot alone describe
the region.
Answer: b
Explanation: One of the regional descriptors is the normalized area. It can be quite useful for
extracting information from an image. In satellite images of the earth, the data can be refined by
normalizing it with respect to the land mass per region.
406. What is the study of properties of a figure that are unaffected by any deformation?
a) Topology
b) Geography
c) Statistics
d) Deformation
View Answer
Answer: a
Explanation: We can define topology as the study of properties of a figure that are unaffected by
any deformation, as long as there is no joining or tearing of the figure. We use topological
properties in the region description.
407. On which of the following operation of an image, the topology of the region changes?
a) Stretching
b) Rotation
c) Folding
d) Change in distance measure
View Answer
Answer: c
Explanation: If a topological descriptor is defined by the number of holes in an image, then the
number of holes will not vary if the image is stretched or rotated. The number of holes in the
region will change only if the image is torn or folded.
Answer: a
Explanation: We know that, as stretching affects distance, topological properties do not depend
on the notion of distance or any properties implicitly based on the concept of distance measures.
a) 0
b) 1
c) 2
d) -1
View Answer
Answer: d
Explanation: The image shown in the question has two holes and one connected component. So,
the Euler number E is given as 1-2=-1.
410. What is the Euler number of a region with polygonal network containing V,Q and F as the
number of vertices, edges and faces respectively?
a) V+Q+F
b) V-Q+F
c) V+Q-F
d) V-Q-F
View Answer
Answer: b
Explanation: It is very important to classify the polygonal network. Let V,Q and F denote the
number of vertices, edges and faces respectively. Then,
V-Q+F=C-H
Where C,H represents the number of connected components and number of holes in the region
respectively. So, the Euler number E is given by V-Q+F.
411. What is the Euler number of the region shown in the figure below?
a) 1
b) -2
c) -1
d) 2
View Answer
Answer: b
Explanation: The polygonal network given in the figure has 7 vertices, 11 edges and 2 faces.
Thus the Euler number is given by the formula,
E=V-Q+F=7-11+2=-2.
412. The texture of the region provides measure of which of the following properties?
a) Smoothness alone
b) Coarseness alone
c) Regularity alone
d) Smoothness, coarseness and regularity
View Answer
Answer: d
Explanation: One of the important approaches to region description is texture content. It helps to
provide a measure of some of the important properties of an image, such as smoothness, coarseness
and regularity of the region.
Answer: a
Explanation: Structural techniques deal with the arrangement of image primitives, such as the
description of the texture based on the regularly spaced parallel lines.
Answer: b
Explanation: Spectral techniques are based on properties of the Fourier spectrum and are used
primarily to detect global periodicity in an image by identifying high-energy, narrow peaks in the
spectrum.
Answer: a
Explanation: The length of a boundary is one of the simplest boundary descriptors. The length of
the boundary is approximately given by the number of pixels along that boundary.
416. Which of the following of a boundary is defined as the line perpendicular to the major axis?
a) Equilateral axis
b) Equidistant axis
c) Minor axis
d) Median axis
View Answer
Answer: c
Explanation: The minor axis of a boundary is defined as the line perpendicular to the major axis
and of such length that a box passing through the outer four points of intersection of the
boundary with the two axes completely encloses the boundary.
417. Which of the following is the useful descriptor of a boundary, whose value is given by the
ratio of length of the major axis to the minor axis?
a) Radius
b) Perimeter
c) Area
d) Eccentricity
View Answer
Answer: d
Explanation: Eccentricity, the ratio of the length of the major axis to that of the minor axis, is one of the
important parameters used to describe a boundary.
Answer: b
Explanation: Curvature of a boundary is defined as the rate of change of slope. In general, as the
boundaries tend to be locally ragged, it is difficult to obtain reliable measures of curvature at a
point on a digital boundary.
419. If the boundary is traversed in the clockwise direction, a vertex point p is said to be a
part of the convex segment if the rate of change of slope at p is:
a) Negative
b) Zero
c) Non negative
d) Cannot be determined
View Answer
Answer: c
Explanation: If the boundary is traversed in the clockwise direction and the rate of change of
slope at the vertex point is non negative, then that point is said to be in the convex segment.
420. A point p is said to be a corner point if the change of slope is less than 10°.
a) True
b) False
View Answer
Answer: b
Explanation: In general, a point p is said to be on a straight line segment if the change
of slope is less than 10°, and is said to be a corner point if the change exceeds 90°.
421. Based on the 4-directional code, the first difference of smallest magnitude is called as:
a) Shape number
b) Chain number
c) Difference
d) Difference number
View Answer
Answer: a
Explanation: We know that, the first difference of a chain coded boundary depends on the
starting point. Based on such 4 directional boundary, the first difference of smallest magnitude is
called as the shape number of the boundary.
Answer: b
Explanation: The order of a shape number is the number of digits in its representation. The
order is even for a closed boundary, and its value limits the number of possible different
shapes.
423. What is the order of the shape number of a rectangular boundary with the dimensions of
3×3?
a) 3
b) 6
c) 9
d) 12
View Answer
Answer: d
Explanation: The order of the shape number equals the perimeter of the boundary. Since the
given rectangle has dimensions 3×3, its perimeter is 2(3+3) = 12.
424. The chain code for the following shape is given as:
a) 000030032232221211
b) 003010203310321032
c) 022332103210201330
d) 012302301023100321
View Answer
Answer: a
Explanation: The effective boundary for the given figure is given as
425. What is the shape number for the boundary given in the previous figure?
a) 003231023101230123
b) 012301220331023010
c) 133021030012330120
d) 000310330130031303
View Answer
Answer: d
Explanation: The chain code for the boundary is given as 000030032232221211.
We know that, shape number is the first difference of a chain coded boundary. Thus the shape
number of the above given boundary will be 000310330130031303.
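The first-difference/shape-number computation in questions 421-425 is easy to reproduce. A minimal Python sketch (treating the 4-directional chain code as circular, as is usual for a closed boundary) gives the same shape number quoted above:

    def first_difference(code):
        # Counterclockwise direction changes between consecutive elements,
        # with the chain code treated as circular (the last element wraps to the first)
        return [(code[i] - code[i - 1]) % 4 for i in range(len(code))]

    def shape_number(code):
        # The circular rotation of the first difference with the smallest magnitude
        diff = first_difference(code)
        return min(diff[i:] + diff[:i] for i in range(len(diff)))

    chain = [int(c) for c in "000030032232221211"]    # chain code from question 424
    print("".join(map(str, shape_number(chain))))     # 000310330130031303, as in question 425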
426. Statistical moments are used to describe the shape of boundary segments quantitatively.
a) True
b) False
View Answer
Answer: a
Explanation: Statistical moments like mean, variance and higher order moments can
quantitatively describe the shape of boundary segments.
427. Which of the following techniques of boundary descriptions have the physical interpretation
of boundary shape?
a) Fourier transform
b) Statistical moments
c) Laplace transform
d) Curvature
View Answer
Answer: b
Explanation: Statistical moments have an advantage over the other techniques in that they help in
the physical interpretation of the shape of the boundary.
Answer: b
Explanation: The statistical-moment technique of describing the shape of a boundary is insensitive
to rotation of the shape. If desired, size normalization can be achieved by scaling the range of
values of g and r.
430. What causes the effect, imperceptible set of very fine ridge like structures in areas of
smooth gray levels?
a) Caused by the use of an insufficient number of gray levels in smooth areas of a digital image
b) Caused by the use of huge number of gray levels in smooth areas of a digital image
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: The set of very fine ridge-like structures in areas of smooth gray levels is generally
quite visible in images displayed using 16 or fewer uniformly spaced gray levels.
431. What is the name of the effect caused by the use of an insufficient number of gray levels in
smooth areas of a digital image?
a) Dynamic range
b) Ridging
c) Graininess
d) False contouring
View Answer
Answer: d
Explanation: The effect, caused due to insufficient number of gray levels in smooth areas of a
digital image, is called false contouring, so called because the ridges resemble topographic
contours in a map.
432. Using rough rule of thumb, and assuming powers of 2 for convenience, what image size are
about the smallest images that can be expected to be reasonably free of objectionable sampling
checkerboards and false contouring?
a) 512*512pixels and 16 gray levels
b) 256*256pixels and 64 gray levels
c) 64*64pixels and 16 gray levels
d) 32*32pixels and 32 gray levels
View Answer
Answer: b
Explanation: A 128*128-pixel image shows a pronounced checkerboard pattern, while a
256*256-pixel image requires a minimum of 64 gray levels to remove false contouring.
The effect is also quite visible in images displayed using 16 or fewer uniformly spaced gray levels.
433. What does a shift up and right in the isopreference curves simply mean? Verify in
terms of the N (number of pixels) and k (L = 2^k, where L is the number of gray levels) values.
a) Smaller values for N and k, implies a better picture quality
b) Larger values for N and k, implies low picture quality
c) Larger values for N and k, implies better picture quality
d) Smaller values for N and k, implies low picture quality
View Answer
Answer: c
Explanation: Points lying on an isopreference curve correspond to images of equal subjective
quality. It was found that the isopreference curves tended to shift right and upward with the
details of the image. So, a shift up and right in the curves simply means larger values for N and
k, implying better picture quality.
434. How do the isopreference curves behave with respect to the detail in the image?
a) Curves tend to become more vertical as the detail in the image decreases
b) Curves tend to become less vertical as the detail in the image increases
c) Curves tend to become less vertical as the detail in the image decreases
d) Curves tend to become more vertical as the detail in the image increases
View Answer
Answer: d
Explanation: The isopreference curves tend to become more vertical as the detail in the
image increases.
The right-side graph shows the same: the curve for the crowd image is nearly vertical.
435. For an image with a large amount of detail, if the value of N (number of pixels) is fixed then
what is the gray level dependency in the perceived quality of this type of image?
a) Totally independent of the number of gray levels used
b) Nearly independent of the number of gray levels used
c) Highly dependent of the number of gray levels used
d) None of the mentioned
View Answer
Answer: b
Explanation: For an image with a large amount of detail, only a few gray levels may be needed, so
the perceived quality is nearly independent of the number of gray levels used.
Answer: a
Explanation: Functions whose area under the curve is finite can be represented in terms of sines
and cosines of various frequencies. The highest-frequency sine/cosine component determines the
highest “frequency content” of the function. If this highest frequency is finite and the function is
of unlimited duration, the function is called a band-limited function.
437. For a band-limited function, which Theorem says that “if the function is sampled at a rate
equal to or greater than twice its highest frequency, the original function can be recovered from
its samples”?
a) Band-limitation theorem
b) Aliasing frequency theorem
c) Shannon sampling theorem
d) None of the mentioned
View Answer
Answer: c
Explanation: For a band-limited function, Shannon sampling theorem says that “if the function is
sampled at a rate equal to or greater than twice its highest frequency, the original function can be
recovered from its samples”.
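A brief numeric illustration of the sampling condition (the frequency values are assumed, for demonstration only):

    # Shannon sampling condition: f_s >= 2 * f_max for a band-limited function
    f_max = 100.0                 # assumed highest frequency content, in Hz
    nyquist_rate = 2 * f_max
    for f_s in (150.0, 200.0, 400.0):
        print(f_s, "recoverable" if f_s >= nyquist_rate else "undersampled: aliasing expected")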
438. What is the name of the phenomenon that corrupts the sampled image, and how does it
happen?
a) Shannon sampling, if the band-limited functions are undersampled
b) Shannon sampling, if the band-limited functions are oversampled
c) Aliasing, if the band-limited functions are undersampled
d) Aliasing, if the band-limited functions are oversampled
View Answer
Answer: c
Explanation: If a band-limited function is undersampled, then a phenomenon called aliasing
corrupts the sampled image.
Answer: a
Explanation: Aliasing corrupts the sampled image by introducing additional frequency
components to the sampled function. So, the principal approach for reducing the aliasing effects
on an image is to reduce its high-frequency components by blurring the image prior to sampling.
441. In terms of Sampling and Quantization, Zooming and Shrinking may be viewed as ________
Answer: b
Explanation: Oversampling increases the number of sample in the image, i.e. like Zooming.
Undersampling decreases the number of samples in the image, i.e. like Shrinking.
442. The two steps, one being the creation of new pixel locations and the other the assignment of gray
levels to those new locations, are involved in
a) Shrinking
b) Zooming
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: b
Explanation: Suppose that we have an image of size 500*500 pixels and we want to enlarge it 1.5
times to 750*750 pixels.
Creation of new pixel locations: One of the easiest ways to visualize zooming is laying an imaginary
750*750 grid over the original image; the spacing in the grid would be less than one pixel
because we are fitting it over a smaller image.
Assignment of gray levels to the new locations: In order to perform gray-level assignment for any
point in the overlay, we assign to the new pixel in the grid the gray level of its closest pixel in the
original image.
When the above steps are done for all points in the overlay grid, we expand the grid to the originally
specified size to obtain the zoomed image.
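A minimal NumPy sketch of the nearest-neighbour zooming described above (the array sizes are illustrative and the code is a sketch, not a reproduction of any particular textbook routine):

    import numpy as np

    def zoom_nearest(img, factor):
        # Lay an enlarged grid over the image and give each new pixel the gray
        # level of its closest pixel in the original image
        M, N = img.shape
        new_M, new_N = int(M * factor), int(N * factor)
        rows = np.clip(np.round(np.arange(new_M) / factor).astype(int), 0, M - 1)
        cols = np.clip(np.round(np.arange(new_N) / factor).astype(int), 0, N - 1)
        return img[rows][:, cols]

    small = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(zoom_nearest(small, 1.5).shape)   # (6, 6)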
443. While zooming, in order to perform gray-level assignment for any point in the overlay, we
assign to the new pixel in the grid the gray level of its closest pixel in the original image. What is
this method of gray-level assignment called?
a) Neighbor Duplication
b) Duplication
c) Nearest neighbor Interpolation
d) None of the mentioned
View Answer
Answer: c
Explanation: Because we look for the closest pixel in the original image and assign its gray level
to the new pixel in the grid.
444. A special case of nearest neighbor interpolation that just duplicates the pixels the required
number of times to achieve the desired size is known as
a) Bilinear Interpolation
b) Contouring
c) Ridging
d) Pixel Replication
View Answer
Answer: d
Explanation: Pixel replication is a special case of nearest neighbor interpolation and is
applicable when we want to increase the size of an image an integer number of times.
For example, doubling the size of an image is achieved by duplicating each column, which
doubles the image size in the horizontal direction. Then, we duplicate each row of the enlarged
image to double the size in the vertical direction. Similarly, the image can be enlarged by any
integer number of times (triple, quadruple, and so on).
Answer: d
Explanation: At greater magnification nearest neighbor Interpolation has the undesirable feature
that it produces a checkerboard effect.
Answer: c
Explanation: Bilinear interpolation uses the four nearest neighbors of the new pixel. Let (x‟ ,
y‟ ) is the coordinates of a point in the zoomed image and the gray level assigned to the point
is v(x, y‟ ).
For bilinear interpolation, the assigned gray level is given by
v(x‟ , y‟ ) = ax‟ + by‟ + cx‟ y‟ + d
Here, a, b, c and d are determined from the four equations in four unknowns that can be written
using the four nearest neighbors of point (x‟ , y‟ ).
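The four-equation system for bilinear interpolation can be solved directly. A short sketch with assumed neighbour gray levels (the coordinates and values are hypothetical):

    import numpy as np

    def bilinear_value(x, y, neighbors):
        # neighbors: {(xi, yi): gray level} for the four nearest integer neighbours.
        # Solve v(x, y) = a*x + b*y + c*x*y + d from the four known points.
        pts = list(neighbors.items())
        A = np.array([[xi, yi, xi * yi, 1.0] for (xi, yi), _ in pts])
        v = np.array([val for _, val in pts], dtype=float)
        a, b, c, d = np.linalg.solve(A, v)
        return a * x + b * y + c * x * y + d

    nbrs = {(10, 20): 52.0, (11, 20): 60.0, (10, 21): 56.0, (11, 21): 70.0}
    print(bilinear_value(10.4, 20.7, nbrs))   # about 59.68 for these assumed values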
447. Row-column deletion method of Image Shrinking is an equivalent process to which method
of Zooming?
a) Bilinear Interpolation
b) Contouring
c) Pixel Replication
d) There is no such equivalent process
View Answer
Answer: c
Explanation: Row-column deletion method is used to shrink an image by one-half, one-fourth
and so on.
In case of one-half we delete every other row and column.
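Row-column deletion amounts to plain array slicing; a brief illustration:

    import numpy as np

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    half = img[::2, ::2]                 # delete every other row and column: one-half size
    quarter = img[::4, ::4]              # keep every fourth row and column: one-fourth size
    print(half.shape, quarter.shape)     # (4, 4) (2, 2)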
Answer: a
Explanation: Image shrinking uses the same grid analogy as nearest neighbor interpolation,
except that we now expand the grid to fit over the original image, perform gray-level nearest
neighbor or bilinear interpolation (which can introduce aliasing), and then shrink the grid back to
its originally specified size.
Answer: a
Explanation: For the case of 32*32 to 1024*1024, considerable detail is lost with nearest
neighbor interpolation, but the result of bilinear interpolation remains reasonably good.
MODEL QUESTION PAPER
Elective-II
A random
B vertex
C contour
D sampling
Ans.: D
Q2. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
A Sampling
B Interpolation
C Filters
Ans.: B
A line pairs
B pixels
C dots
Ans.: D
Q4. The most familiar single sensor used for Image Acquisition is
A Microdensitometer
B Photodiode
C CMOS
Ans.: B
Q5. The difference in intensity between the highest and the lowest intensity
levels in an image is ___________
A Noise
B Saturation
C Contrast
D Brightness
Ans.: C
Q6. The spatial coordinates of a digital image (x,y) are proportional to:
A Position
B Brightness
C Contrast
D Noise
Ans.: B
Q7. Among the following image processing techniques which is fast, precise
and flexible.
A Optical
B Digital
C Electronic
D Photographic
Ans.: B
A Height of image
B Width of image
C Amplitude of image
D Resolution of image
Ans.: C
Ans.: A
A Dynamic range
B Band range
C Peak range
D Resolution range
Ans.: A
A Saturation
B Hue
C Brightness
D Intensity
Ans.: B
A Interpretation
B Recognition
C Acquisition
D Segmentation
Ans.: A
Ans.: B
A Quantization
B Sampling
C Contrast
D Dynamic range
Ans.: B
Ans.: A
Ans.: A
Q17. For pixels p(x, y), q(s, t), the city-block distance between p and q is
defined as:
B D(p, q) = |x – s| + |y – t|
Ans.: B
Q18. The domain that refers to image plane itself and the domain that refers
to Fourier transform of an image is/are :
Q19. Using gray-level transformation, the basic function Logarithmic deals with
which of the following transformation?
Ans.: A
A s = L – 1 – r
Ans.: A
Q21. Which of the following transformations expands the value of dark pixels
while the higher-level values are being compressed?
A Log transformations
B Inverse-log transformations
C Negative transformations
D None of the mentioned
Ans.: A
Ans.: D
A 1
B -1
C 0
D None of the mentioned
Ans.: A
A Blurring
B Noise reduction
C All of the mentioned
D None of the mentioned
Ans.: C
Ans.: D
Q26. A spatial averaging filter having all the coefficients equal is termed
_________
A A box filter
B A weighted average filter
C A standard average filter
D A median filter
Ans.: A
Q27. An image contains noise having appearance as black and white dots
superimposed on the image. Which of the following noise(s) has the same
appearance?
A Salt-and-pepper noise
B Gaussian noise
C All of the mentioned
D None of the mentioned
Ans.: C
Q28. Which filter(s) used to find the brightest point in the image?
A Median filter
B Max filter
C Mean filter
D All of the mentioned
Ans.: B
Q29. In linear spatial filtering, what is the pixel of the image under mask
corresponding to the mask coefficient w (1, -1), assuming a 3*3 mask?
A f (x, -y)
B f (x + 1, y)
C f (x, y – 1)
D f (x + 1, y – 1)
Ans.: D
Ans.: C
Q31. Which of the following statement(s) is true for the given fact that
“Applying High pass filters has an effect on the background of the output
image”?
A The average background intensity increases to near white of black and white
B The average background intensity reduces to near black
C The average background intensity changes to a value average
D All of the mentioned
Ans.: B
A UV Rays
B Gamma Rays
C Microwaves
D Radio Waves
Ans.: B
A lumens
B watts
C armstrong
D hertz
Ans.: B
Q34. Which of the following is used for chest and dental scans?
A Hard X-Rays
B Soft X-Rays
C Radio waves
D Infrared Rays
Ans.: B
A c = wavelength / frequency
B frequency = wavelength / c
C wavelength = c * frequency
D c = wavelength * frequency
Ans.: C
Ans.: B
Ans.: A
A image enhancement
B image decompression
C image contrast
D image equalization
Ans.: B
A pixels
B matrix
C intensity
D coordinates
Ans.: C
A pixels
B matrix
C frames
D intensity
Ans.: D
Q41. Logic operations between two or more images are performed on pixel-
by-pixel basis, except for one that is performed on a single image. Which one?
A AND
B OR
C NOT
D None of the mentioned
Ans.: C
Q42. How many bit RGB color image is represented by full-color image?
Ans.: B
A JPEG
B GIF
C BMP
D PNG
Ans.: B,D
Q44. Makes the file smaller by deleting parts of the file permanently (forever)
A Lossy Compression
B Lossless Compression
Ans.: A
Ans.: A
Ans.: A
A Gaussian
B laplacian
C ideal
D butterworth
Ans.: B
Q48. For a typical Fourier spectrum with values ranging from 0 to 10^6,
which of the following transformations is better to apply?
A nonzero
B zero
C positive
D negative
Ans.: B
A intensity transition
B shape transition
C color transition
D sign transition
Ans.: D
A discontinuity
B similarity
C continuity
D recognition
Ans.: A
A audio
B sound
C sunlight
D ultraviolet
Ans.: B
A ultrasonic
B radar
C visible and infrared
D Infrared
Ans.: B
Ans.: C
Q54. The digitization process i.e. the digital image has M rows and N columns,
requires decisions about values for M, N, and for the number, L, of gray levels
allowed for each pixel. The values of M and N have to be:
Ans.: A
Q55. After the digitization process, a digital image has M rows and N columns,
which must be positive, and the number of gray levels L is an integer
power of 2 (L = 2^k) for each pixel. Then, the number of bits, b, required to store the
digitized image is:
A b=M*N*k
B b=M*N*L
C b=M*L*k
D b=L*N*k
Ans.: A
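A quick worked example of b = M*N*k (the image size and bit depth are assumed for illustration):

    # Bits needed to store an M x N image with L = 2^k gray levels
    M, N, k = 512, 512, 8                  # e.g. a 512 x 512 image with 256 gray levels
    b = M * N * k
    print(b, "bits =", b // 8, "bytes")    # 2097152 bits = 262144 bytes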
A bright
B dark
C colourful
D All of the Mentioned
Ans.: B
A Image enhancement
B Blurring
C Contrast adjustment
D None of the Mentioned
Ans.: A
A Intensive
B Local
C Global
D Random
Ans.: A
A Intensive
B Local
C Global
D Random
Ans.: B
Ans.: C
Model Question Paper
Subject :Digital Image Processing
Branch: E&TC
Class: BE
Semester:VIII
A)128
B)255
C)256
D)512
Ans:C
A) Slicing
B) Color Slicing
C) Enhancing
D) Cutting
Ans:B
3) A type of Image is called as VHRR image. What is the definition of VHRR image?
Ans:C
Ans:D
B)255
C)256
D)1
Ans:A
6) The Image sharpening in frequency domain can be achieved by which of the following method(s)?
Ans:B
7) The function of filters in Image sharpening in frequency domain is to perform reverse operation of
which of the following Lowpass filter?
D) None
Ans:C
8) The edges and other abrupt changes in gray-level of an image are associated with_________
C) Edges with high frequency and other abrupt changes in gray-level with low frequency components
D) Edges with low frequency and other abrupt changes in gray-level with high frequency components
Ans:A
A) |Gx|+|Gy|
B) |Gx|-|Gy|
C) |Gx|/|Gy|
D) |Gx|x|Gy|
Ans:A
10) Which of the following statement(s) is true for the given fact that “Applying Highpass filters has an
effect on the background of the output image”?
C) The average background intensity changes to a value average of black and white
Ans:B
A) Red Noise
B) White Noise
C) Black Noise
D) Normal Noise
Ans:D
A) Frequency
B) Time
C) Spatial
D) Plane
Ans:A
A) MRI
B) surgery
C) CT scan
D) Injections
Ans:A
14) Which one is not a level of image processing?
A) high level
B) low level
C) last level
D) Mid level
Ans:C
15) The filter that replaces a pixel value with the median of the intensity levels in its neighbourhood is
C) Median Filter
Ans:C
A) Position
B) Brightness
C) Contrast
D) Saturation
Ans:B
17)Among the following image processing techniques which is fast, precise and flexible
A) Optical
B) Digital
C) Electronic
D) Photographic
Ans:B
A) Height of image
B) Width of image
C) Amplitude of image
D) Resolution of image
Ans:C
19)The range of values spanned by the gray scale is called
A) Dynamic range
B) Band range
C) Peak range
D) Resolution range
Ans:A
A) Saturation
B) Hue
C) Brightness
D) Intensity
Ans:B
A) law enforcement
B) lithography
C) medicine
D) voice calling
Ans:D
A) Interpretation
B) Recognition
C) Acquisition
D) Segmentation
Ans:A
A) 256 X 256
B) 512 X 512
C) 1920 X 1080
D) 1080 X 1080
Ans:B
24)The number of gray values are integer powers of
A) 4
B)2
C)8
D)1
Ans:B
A) Image restoration
B) Image enhancement
C) Image acquisition
D) Segmentation
Ans:C
26)In which step of processing, the images are subdivided successively into smaller regions?
A) Image enhancement
B) Image acquisition
C) Segmentation
D) wavelet
Ans:D
A) Wavelets
C) Segmentation
D) Morphological processing
Ans:D
28) What is the step that is performed before color image processing in image processing?
B) Image enhancement
C) Image acquisition
D) Image restoration
Ans:D
29)How many number of steps are involved in image processing?
A) 10
B) 11
C) 9
D) 12
Ans:A
Ans:B
31)Which of the following step deals with tools for extracting image components those are useful in
the representation and description of shape?
B) Segmentation
C) Compression
D) Morphological processing
Ans:D
32)In which step of the processing, assigning a label (e.g., “vehicle”) to an object based on its
descriptors is done?
A) Object recognition
B) Morphological processing
D) Segmentation
Ans:A
A) Deals with extracting attributes that result in some quantitative information of interest
B) Deals with techniques for reducing the storage required saving an image, or the bandwidth
required transmitting it
C) Deals with property in which images are subdivided successively into smaller regions
D) Deals with partitioning an image into its constituent parts or objects
Ans:D
Ans:B
35) A structured light illumination technique was used for lens deformation
A) lens deformation
B) inverse filtering
C) lens enhancement
D)lens error
Ans:A
A) edges
B) slices
C) boundaries
D) illumination
Ans:B
37) Major use of gamma rays imaging includes
A) Radars
B) astronomical observations
C) industry
D) lithography
Ans:B
A) Image addition
B) Image Multiplication
C) Image division
D) None
Ans:B
39) What is the sum of the coefficients of the mask defined using HPF?
A) 1
B) -1
C) 0
Ans:C
A ) thumb prints
B) paper currency
C)mp3
Ans:C
A) spatial coordinates
B) frequency coordinates
C)time coordinates
D) real coordinates
Ans:A
42) Lithography uses
A) ultraviolet
B) x-rays
C)gamma
D) visible rays
Ans:A
A)color enhancement
B) Frequency enhancement
C)Spatial enhancement
D)Detection
Ans:D
A) microscopy
B) medical
C) industry
D) radar
Ans:B
A) 1048576
B) 1148576
C) 1248576
D) 1348576
Ans:A
46)The lens is made up of concentric layers of
A) strong cells
B) inner cells
C) fibrous cells
D) outer cells
Ans:C
A) audio
B) AM
C) FM
D) Both b and c
Ans:D
A) 2 levels
B) 4levels
C) 8 levels
D) 16 levels
Ans:C
A) values
B) numbers
C) frequencies
D) intensities
Ans:D
50) In an M x N image, M is the number of
A) intensity levels
B) colors
C) rows
D) columns
Ans:C
51) Each element of the matrix is called
A) dots
B) coordinate
C) pixels
D) value
Ans:C
C) digitized image
D) analog signal
Ans:C
A) radiance
B) variance
C) sampling
D) quantization
Ans:C
A) pel
B) dot
C) resolution
D) digits
Ans:A
B) three
C) four
D) five
Ans:B
A) speed of light
B) light constant
C) Planck's constant
D) acceleration constant
Ans:C
Ans:B
A) b = NxK
B) b = MxN
C) b = MxNxK
D) b = MxK
Ans: C
Ans:A
B) illuminance
C) sampling
D) quantization
Ans:D
Model Question Paper
Subject: Digital Image Processing Branch:E&TC
Class:BE Semester:VIII
(B) Segmentation
Ans: a
(A) Compression
(B) Quantization
(C) Sampling
(D) Segmentation
Ans: c
3. _____ is the total amount of energy that flows from light source.
(A) Radiance
(B) Darkness
(C) Brightness
(D) Luminance
Ans: a
Ans: d
Ans: d
6.The transition between continuous values of the image function and its digital equivalent is
called ______________
a) Quantisation
b) Sampling
c) Rasterisation
d) None of the Mentioned
Ans:a
7. Images quantised with insufficient brightness levels will lead to the occurrence of
____________
a) Pixillation
b) Blurring
c) False Contours
Ans:c
8.The smallest discernible change in intensity level is called ____________
a) Intensity Resolution
b) Contour
c) Saturation
d) Contrast
Ans:a
9. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a) Sampling
b) Interpolation
c) Filters
Ans:b
10. The type of Interpolation where for each new location the intensity of the immediate pixel
is assigned is ___________
a) bicubic interpolation
b) cubic interpolation
c) bilinear interpolation
d) nearest neighbour interpolation
Ans:d
11. The type of interpolation where the intensities of the FOUR nearest neighbouring pixels are used to
obtain the intensity at a new location is called __________
a) Cubic interpolation
b) nearest neighbour interpolation
c) bilinear interpolation
d) bicubic interpolation
Ans: c
12. The dynamic range of an imaging system is a ratio whose upper limit is determined by
a)Saturation
b) Noise
c) Brightness
d) Contrast
Ans:a
d) Contrast
Ans:c
14. Quantitatively, spatial resolution cannot be represented in which of the following ways
a) line pairs
b) pixels
c) dots
d) none of the Mentioned
Ans:d
Ans:b
16. The process of using known data to estimate values at unknown locations is called
a) Acquisition
b) Interpolation
c) Pixelation
d) None of the Mentioned
Ans:b
Ans:c
18. In which step of processing, the images are subdivided successively into smaller regions?
a) Image enhancement
b) Image acquisition
c) Segmentation
d) Wavelets
Ans:c
Ans:a
Ans:b
21.The principal factor to determine the spatial resolution of an image is _______
a) Quantization
b) Sampling
c) Contrast
d) Dynamic range
Ans:b
22. What causes the effect, imperceptible set of very fine ridge like structures in areas of
smooth gray levels?
a) Caused by the use of an insufficient number of gray levels in smooth areas of a digital
image
b) Caused by the use of huge number of gray levels in smooth areas of a digital image
c) All of the mentioned
d) None of the mentioned
Ans:a
23. What is the name of the effect caused by the use of an insufficient number of gray levels
in smooth areas of a digital image?
a) Dynamic range
b) Ridging
c) Graininess
d) False contouring
Ans:d
24. Using rough rule of thumb, and assuming powers of 2 for convenience, what image size
are about the smallest images that can be expected to be reasonably free of objectionable
sampling checkerboards and false contouring?
a) 512*512pixels and 16 gray levels
b) 256*256pixels and 64 gray levels
c) 64*64pixels and 16 gray levels
d) 32*32pixels and 32 gray levels
Ans:b
25. What does a shift up and right in the isopreference curves simply mean? Verify
in terms of the N (number of pixels) and k (L = 2^k, where L is the number of gray levels) values.
a) Smaller values for N and k, implies a better picture quality
b) Larger values for N and k, implies low picture quality
c) Larger values for N and k, implies better picture quality
d) Smaller values for N and k, implies low picture quality
Ans:c
26. How do the isopreference curves behave with respect to the detail in the image?
a) Curves tend to become more vertical as the detail in the image decreases
b) Curves tend to become less vertical as the detail in the image increases
c) Curves tend to become less vertical as the detail in the image decreases
d) Curves tend to become more vertical as the detail in the image increases
Ans:d
27. For an image with a large amount of detail, if the value of N (number of pixels) is fixed
then what is the gray level dependency in the perceived quality of this type of image?
a) Totally independent of the number of gray levels used
b) Nearly independent of the number of gray levels used
c) Highly dependent of the number of gray levels used
d) None of the mentioned
Ans:b
Ans:a
29. For a band-limited function, which Theorem says that “if the function is sampled at a rate
equal to or greater than twice its highest frequency, the original function can be recovered
from its samples”?
a) Band-limitation theorem
b) Aliasing frequency theorem
c) Shannon sampling theorem
d) None of the mentioned
Ans:c
30. What is the name of the phenomenon that corrupts the sampled image, and how does it
happen?
a) Shannon sampling, if the band-limited functions are undersampled
b) Shannon sampling, if the band-limited functions are oversampled
c) Aliasing, if the band-limited functions are undersampled
d) Aliasing, if the band-limited functions are oversampled
Ans:c
Ans:a
33. If h(rk) = nk, where rk is the kth gray level and nk is the number of pixels with gray level rk, is a
histogram in the gray-level range [0, L – 1], then how can we normalize the histogram?
a) If each value of histogram is added by total number of pixels in image, say n, p(rk)=nk+n
b) If each value of histogram is subtracted by total number of pixels in image, say n, p(rk)=nk-
n
c) If each value of histogram is multiplied by total number of pixels in image, say n,
p(rk)=nk * n
d) If each value of histogram is divided by total number of pixels in image, say n, p(rk)=nk / n
Ans:d
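A minimal NumPy sketch of the normalization p(rk) = nk/n described in Q33 (the random test image is for illustration only):

    import numpy as np

    def normalized_histogram(img, L=256):
        # p(r_k) = n_k / n: divide each histogram count by the total number of pixels
        n_k = np.bincount(img.ravel(), minlength=L)
        return n_k / img.size

    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    p = normalized_histogram(img)
    print(p.sum())   # 1.0 (up to floating-point rounding)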
Ans:a
35. A low contrast image will have what kind of histogram when the histogram, h(rk) = nk,
rk the kth gray level and nk the total pixels with gray level rk, is plotted as nk versus rk?
a) The histogram that are concentrated on the dark side of gray scale
b) The histogram whose component are biased toward high side of gray scale
c) The histogram that is narrow and centered toward the middle of gray scale
d) The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform
Ans:c
36. A bright image will have what kind of histogram, when the histogram, h(rk) = nk, rk the
kth gray level and nk the total pixels with gray level rk, is plotted as nk versus rk?
a) The histogram that are concentrated on the dark side of gray scale
b) The histogram whose component are biased toward high side of gray scale
c) The histogram that is narrow and centered toward the middle of gray scale
d) The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform
Ans:b
37. A high contrast image and a dark image will have what kind of histogram respectively,
when the histogram, h(rk) = nk, rk the kth gray level and nk the total pixels with gray level rk, is
plotted as nk versus rk?
I) The histogram that is concentrated on the dark side of the gray scale.
II) The histogram whose components are biased toward the high side of the gray scale.
III) The histogram that is narrow and centered toward the middle of the gray scale.
IV) The histogram that covers a wide range of the gray scale and in which the distribution of pixels is
approximately uniform.
a) I) And II) respectively
b) III) And II) respectively
c) II) And IV) respectively
d) IV) And I) respectively
Ans:d
38. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if T(r) is single valued in the interval 0 ≤ r ≤ 1, what does it signify?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:a
39. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if T(r) is monotonically increasing in the interval 0 ≤ r ≤ 1, what does it signify?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:b
40. The transformation s = T(r) producing a gray level s for each pixel value r of input image.
Then, if T(r) satisfies 0 ≤ T(r) ≤ 1 in the interval 0 ≤ r ≤ 1, what does it signify?
a) It guarantees the existence of inverse transformation
b) It is needed to restrict producing of some inverted gray levels in output
c) It guarantees that the output gray level and the input gray level will be in same range
d) All of the mentioned
Ans:c
41. What is the full form for PDF, a fundamental descriptor of random variables i.e. gray
values in an image?
a) Pixel distribution function
b) Portable document format
c) Pel deriving function
d) Probability density function
Ans:d
Ans:c
43. For the transformation T(r) = ∫_0^r pr(w) dw, where r is the gray value of the input image, pr(r) is the
PDF of the random variable r and w is a dummy variable: since the PDF is always positive and the
integral gives the area under the function, the transformation is said to be
__________
a) Single valued
b) Monotonically increasing
c) All of the mentioned
d) None of the mentioned
Ans:c
44. The transformation T(rk) = ∑_(j=0)^k nj/n, k = 0, 1, 2, …, L-1, where L is the maximum possible gray
value and rk is the kth gray level, is called _______
a) Histogram linearization
b) Histogram equalization
c) All of the mentioned
d) None of the mentioned
Ans:c
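The cumulative-sum transformation in Q44 is exactly what histogram equalization computes. A short illustrative sketch (the low-contrast test image is assumed):

    import numpy as np

    def histogram_equalize(img, L=256):
        # s_k = T(r_k) = sum_{j=0..k} n_j / n, scaled back to the range [0, L-1]
        n_k = np.bincount(img.ravel(), minlength=L)
        cdf = np.cumsum(n_k) / img.size
        s = np.round((L - 1) * cdf).astype(np.uint8)
        return s[img]

    img = np.random.randint(40, 90, size=(64, 64), dtype=np.uint8)   # low-contrast input
    out = histogram_equalize(img)
    print(img.min(), img.max(), out.min(), out.max())   # the output spans (nearly) the full range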
45. If the histogram of same images, with different contrast, are different, then what is the
relation between the histogram equalized images?
a) They look visually very different from one another
b) They look visually very similar to one another
c) They look visually different from one another just like the input images
d) None of the mentioned
Ans:b
46.In 4-neighbours of a pixel p, how far are each of the neighbours located from p?
a) one pixel apart
b) four pixels apart
c) alternating pixels
d) none of the Mentioned
Ans:a
47. If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path
between them consisting of pixels entirely in S.
a) continuous
b) ambiguous
c) connected
d) none of the Mentioned
Ans:c
Ans:b
49. Two regions are said to be ___________ if their union forms a connected set.
a) Adjacent
b) Disjoint
c) Closed
d) None of the Mentioned
Ans:a
50. If an image contains K disjoint regions, what does the union of all the regions represent?
a) Background
b) Foreground
c) Outer Border
d) Inner Border
Ans:b
51. For a region R, the set of points that are adjacent to the complement of R is called as
________
a) Boundary
b) Border
c) Contour
d) All of the Mentioned
Ans:d
52. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r centred at (x,y) is called :
a) Euclidean distance
b) City-Block distance
c) Chessboard distance
d) None of the Mentioned
Ans:a
53. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a diamond centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
Ans:c
54. The distance between pixels p and q, the pixels have a distance less than or equal to some
value of radius r, form a square centred at (x,y) is called :
a) Euclidean distance
b) Chessboard distance
c) City-Block distance
d) None of the Mentioned
Ans:b
Ans:a
Ans:c
58. What is the process of moving a filter mask over the image and computing the sum of
products at each location called as?
a) Convolution
b) Correlation
c) Linear spatial filtering
d) Non linear spatial filtering
Ans:b
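The "sum of products" operation in Q58 can be written out directly. A small, unoptimized sketch of correlation with a 3x3 averaging (box) mask, using zero padding as an assumed border-handling choice:

    import numpy as np

    def correlate2d(img, mask):
        # Move the mask over the image and compute the sum of products at each location
        m, n = mask.shape
        pr, pc = m // 2, n // 2
        padded = np.pad(img, ((pr, pr), (pc, pc)), mode="constant")
        out = np.zeros(img.shape, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(padded[i:i + m, j:j + n] * mask)
        return out

    img = np.arange(25, dtype=float).reshape(5, 5)
    box = np.ones((3, 3)) / 9.0           # 3x3 averaging (box filter) mask
    print(correlate2d(img, box))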
59. The standard deviation controls ___________ of the bell (2-D Gaussian function of bell
shape).
a) Size
b) Curve
c) Tightness
d) None of the Mentioned
Ans:c
Ans:a
MCQ based test on unit 2 of DIVP
Median of pixels
Maximum of pixels
Minimum of pixels
Average of pixels
True
False
7. If the size of the averaging filter used to smooth the original image to first 1 point
image is 9, then what would be the size of the averaging filter used in
smoothing the same original picture to second in second image? *
15
8. The mask shown in the figure below belongs to which type of filter? * 1 point
Median filter
g(x,y)=T[f(x,y)]
f(x+y)=T[g(x+y)]
g(xy)=T[f(xy)]
g(x-y)=T[f(x-y)]
10. Which of the following shows three basic types of functions used 1 point
s=L+1-r
s=L+1+r
s=L-1-r
s=L-1+r
s=clog10(1/r)
s=clog10(1+r)
s=clog10(1*r)
s=clog10(1-r)
13. What is the name of process used to correct the power-law response 1 point
phenomena? *
Beta correction
Alpha correction
Gamma correction
Pie correctionon
14. Which of the following transformation function requires much information 1 point
Log transformation
Power transformation
Piece-wise transformation
Linear transformation
15. In contrast stretching, if r1=s1 and r2=s2 then which of the following is 1 point
true? *
The transformation is not a linear function that produces no changes in gray levels
The transformation is not a linear function that produces changes in gray levels
16. In contrast stretching, if r1=r2, s1=0 and s2=L-1 then which of the following 1 point
is true? *
17. In which type of slicing, highlighting a specific range of gray levels in an 1 point
image often is desired? *
Gray-level slicing
Bit-plane slicing
Contrast stretching
Byte-level slicing
18. Which of the following depicts the main functionality of the Bit-plane 1 point
slicing? *
19. Which of the following is the primary objective of sharpening of an image? 1 point
20. In spatial domain, which of the following operation is done on the pixels in 1 point
Integration
Average
Median
Differentiation
21. Which of the following is the valid response when we apply a first 1 point
derivative? *
22. Which of the following is not a valid response when we apply a second 1 point
derivative? *
23. Which of the following derivatives produce a double response at step 1 point
24. The lowpass filtering process can be applied in which of the following 1 point
area(s)? *
25. The edges and other abrupt changes in gray-level of an image are 1 point
associated with_________ *
Edges with high frequency and other abrupt changes in gray-level with low
frequency components
Edges with low frequency and other abrupt changes in gray-level with high
frequency components
26. If D0 is the cutoff distance measured from origin of frequency rectangle 1 point
and D(u, v) is the distance from point (u, v), then what value will an Ideal
Highpass filter give if D(u, v) ≤ D0 and if D(u, v) > D0? *
0 and 1 respectively
1 and 0 respectively
1 in both case
0 in both case
27. A bright image will have what kind of histogram, when the histogram, h(rk) 1 point
= nk, rk the kth gray level and nk total pixels with gray level rk, is plotted nk
versus rk? *
The histogram that are concentrated on the dark side of gray scale
The histogram whose component are biased toward high side of gray scale
The histogram that is narrow and centered toward the middle of gray scale
The histogram that covers wide range of gray scale and the distribution of pixel is
approximately uniform
28. The transformation s = T(r) producing a gray level s for each pixel value r of 1 point
input image. Then, if T(r) is single valued in the interval 0 ≤ r ≤ 1, what does
it signify? *
It guarantees that the output gray level and the input gray level will be in same
range
29. The transformation s = T(r) producing a gray level s for each pixel value r of 1 point
input image. Then, if T(r) is monotonically increasing in the interval 0 ≤ r ≤
1, what does it signify? *
It guarantees that the output gray level and the input gray level will be in same
range
30. For the transformation T(r) = [∫0r pr(w) dw], r is gray value of input image, 1 point
pr(r) is PDF of random variable r and w is a dummy variable. If, the PDF are
always positive and that the function under integral gives the area under
the function, the transformation is said to be __________ *
Single valued
Monotonically increasing
31. Which of the following mask(s) is/are used to sharpen images by 1 point
subtracting a blurred version of original image from the original image
itself? *
Unsharp mask
High-boost filter
32. Which of the following gives an expression for high boost filtered image 1 point
33. The domain that refers to image plane itself and the domain that refers to 1 point
Contouring
Contrast stretching
Mask processing
35. * 2 points
36. * 2 points
24
25
37. * 2 points
38. * 4 points
It is a subjective process
41. Which of the following is the most popular filter used for restoration? * (1 point)
Laplacian filter
Median filter
Wiener filter
Averaging filter
Linear filter
Averaging filter
43. The key approach in homomorphic filtering is to separate the illumination and reflectance components of an image. (1 point)
True
False
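As background for this question: the standard homomorphic-filtering model (textbook form, not taken from the quiz itself) writes an image as the product of illumination and reflectance and separates them with a logarithm before frequency-domain filtering,

f(x,y) = i(x,y) \, r(x,y), \qquad \ln f(x,y) = \ln i(x,y) + \ln r(x,y),

so a filter applied to the Fourier transform of \ln f can attenuate the slowly varying illumination while boosting the reflectance detail.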
True
False
True
False
True
False
MCQ based test on unit 3 of DIVP
Encoder
Decoder
Frames
Both A and B
Data
Meaningful data
Raw data
Both A and B
Pixels
Matrix
Coordinates
Intensity
word
byte
codeword
nibble
pixels
matrix
intensity
frame
pixels
matrix
intensity
frame
12. If the pixels are reconstructed without error, the mapping is said to be __________ * (1 point)
irreversible
reversible
frames
facsimile
sampling
entropy
quantization
normalization
Bandwidth
Storage
Money
both A and B
nibble
byte
code
word
coding redundancy
spatial redundancy
irrelevant information
temporal redundancy
56 kbps
72 kbps
24kbps
64 kbps
complex ratio
compression ratio
constant
condition
spatial
temporal
coding
facsimile
21. Redundancy of the data can be found using the formula __________ * (1 point)
1-1/C
1+1/C
1/C
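A worked note on this formula (standard definition from image-compression texts): if C = n_1 / n_2 is the compression ratio (bits before compression over bits after), the relative data redundancy is

R = 1 - \frac{1}{C},

so, for example, C = 10 gives R = 0.9, i.e. 90% of the original representation is redundant.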
Image enhancement
Image compression
Image watermarking
Image restoration
encoding
decoding
framing
both A and B
Both A and B
Both A and B
26. The eye does not respond with equal sensitivity to all visual information. * (1 point)
True
False
type of compression
Both A and B
Mapper,Quantizer,Symbol decoder
Mapper,Quantizer,Symbol encoder
Mapper,Quantizer
Mapper,Symbol encoder
True
False
31. The mapper is used to transform the input data into a format designed to reduce interpixel redundancy in an input image. * (1 point)
True
False
32. The mapper uses run-length coding, and it is an irreversible process. * (1 point)
True
False
33. The channel encoder and channel decoder are designed to reduce the impact of channel noise. * (1 point)
True
False
Arithmetic code
Huffman code
Hamming code
Arithmetic code
Huffman code
Hamming code
Arithmetic code
Huffman code
Hamming code
Speech
Medical reports
images
Both A AND C
Both A and B
Medical reports
computer files
speech
Both A and C
Both A and B
39. Blocking artifacts are less pronounced in the DCT than in the DFT. * 1 point
True
False
40. DFT provides better energy compaction than DCT. * (1 point)
True
False
41. The JPEG compression starts by breaking the image into _____________ * (1 point)
8x8 pixels
16x16 pixels
24x24 pixels
32x32 pixels
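A minimal sketch of this first JPEG step — tiling the image into 8x8 blocks and taking the 2-D DCT of each block — assuming SciPy is available; level shifting, quantization, and entropy coding are deliberately omitted:

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        """2-D type-II DCT, applied separably along rows and columns."""
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def jpeg_blocks(img):
        """Yield the 8x8 DCT coefficient blocks of a grayscale image whose
        dimensions are multiples of 8 (real JPEG pads the image otherwise)."""
        h, w = img.shape
        for y in range(0, h, 8):
            for x in range(0, w, 8):
                yield dct2(img[y:y + 8, x:x + 8].astype(float))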
True
False
quantization
1 d sequence
entropy table
DCT
DFT
DWT
blocking
ringing
Consumer applications
Medical applications
Remote sensing
48. Wavelet-based compression does not have blocking artifacts. * (1 point)
True
False
49. Just like JPEG, the wavelet algorithm divides an image into blocks. * (1 point)
True
False
50. To display the Fourier spectrum, which of the following image processing techniques is used? (1 point)
Log transformation
Histogram processing
Other:
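A minimal sketch of the log-transformation option, which is the usual way to compress the very large dynamic range of a Fourier spectrum for display (assumes NumPy; the display call itself is left out):

    import numpy as np

    def log_spectrum(img):
        """Return a displayable spectrum using s = c * log(1 + |F(u,v)|)."""
        F = np.fft.fftshift(np.fft.fft2(img))    # centre the zero-frequency term
        mag = np.abs(F)
        c = 255.0 / np.log(1.0 + mag.max())      # scale the result into [0, 255]
        return c * np.log(1.0 + mag)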
51. Which image processing technique is used to improve the following image? * (1 point)
Homomorphic filter
Median filter
Averaging filter
LoG filter
52. Which image processing technique is used to improve the following image? * (1 point)
DoG filter
Laplacian filter
Sobel filter
Wiener filter
53. Which image processing technique is used to get the right-sided processed image? * (1 point)
Coloring image
Pseudo coloring
Green filter
A AND B
A OR B
A -- B
A+B
A AND B
A OR B
A -- B
A+B
MCQ based test on Unit 1 of DIVP
5. The spatial coordinates of a digital image (x,y) are proportional to: * 1 point
Position
Brightness
Contrast
Noise
Dynamic range
Band range
Peak range
Resolution range
Saturation
Hue
Brightness
Intensity
8. Which gives a measure of the degree to which a pure colour is diluted by white light? * (1 point)
Saturation
Hue
Brightness
Intensity
9. In which step of processing are the images subdivided successively into smaller regions? * (1 point)
Image enhancement
Image acquisition
Segmentation
Wavelets
10. What role does the segmentation play in image processing? * (1 point)
Deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it
Deals with the property in which images are subdivided successively into smaller regions
11. To convert a continuous image f(x, y) to digital form, we have to sample the function in __________ (1 point)
Coordinates
Amplitude
12. For a continuous image f(x, y), how could sampling be defined? * (1 point)
14. How many bit RGB color image is represented by full-color image? * 1 point
15. Which of the following color models are used for color printing? * 1 point
RGB
CMY
CMYK
16. What are the names of categories of color image processing? * 1 point
17. What are the characteristics that are taken together in chromaticity? * 1 point
18. Which of the following is/are more commercially successful image 1 point
Addition
Subtraction
Multiplication
Division
19. Which of the following operations are used for masking? * 1 point
AND, OR
AND, NOT
NOT, OR
20. While implementing logic operation on gray-scale images, the processing 1 point
21. Logic operations between two or more images are performed on a pixel-by-pixel basis, except for one that is performed on a single image. Which one is that? * (1 point)
AND
OR
NOT
22. Two images having one pixel gray value 01010100 and 00000101 at the same location are operated against the AND operator. What would be the resultant pixel gray value at that location in the enhanced image? * (1 point)
10100100
11111011
00000100
01010101
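The AND operation here is bitwise per pixel; a quick interactive check (illustrative only) confirms the third option:

    >>> format(0b01010100 & 0b00000101, '08b')
    '00000100'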
to detect colour
Other:
24. The digitization process, i.e. the digital image has M rows and N columns, requires decisions about values for M, N, and for the number, L, of max gray levels allowed for each pixel, which is an integer power of 2, i.e. L = 2^k. If we assume that the discrete levels are equally spaced and that they are integers, then they are in the interval __________, and sometimes the range of values spanned by the gray scale is called the ________ of an image. * (1 point)
25. The dependence of the perceived intensity of an object on its surrounding background is called ________ * (1 point)
Simultaneous contrast
Weber effect
MTF
Simultaneous contrast
Weber effect
MTF
Other:
BMP
GIF
JPEG
TIFF
Other:
BMP
GIF
JPEG
TIFF
Other:
BMP
GIF
JPEG
TIFF
Other:
31. This colour model is ideal for developing image processing algorithms. * 1 point
RGB
HSI
CMY
YIQ
Other:
RGB
HSI
CMY
YIQ
Other:
RGB
HSI
CMY
YIQ
Other:
34. * 2 points
Other:
35. * 2 points
3,5
5,3
6,4
4,3
Other:
Temporal resolution
Spatial resolution
Other:
Brightness resolution
Spatial resolution
Both A and B
Other:
38. The effect caused by the use of an insufficient number of gray levels in smooth areas of a digital image is called (1 point)
Ridges
False contouring
Spatial resolution
Other:
39. What is the equation used for calculating B value in terms of HSI components? * (1 point)
B=I(1+S)
B=S(1-I)
B=S(1+I)
B=I(1-S)
It is the amount of red, green and yellow needed to form any particular color
It is the amount of red, green and indigo needed to form any particular color
It is the amount of red, yellow and blue needed to form any particular color
It is the amount of red, green and blue needed to form any particular color
41. An image of size 1024X1024 in which each pixel is of 8 bits requires _____ storage space * (1 point)
1 MB
10 MB
5 MB
100 MB
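Worked arithmetic for this question, added as a check: b = 1024 × 1024 pixels × 8 bits = 8,388,608 bits = 1,048,576 bytes = 1 MB.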
Online storage
Archival storage
Online storage
Archival storage
Ease of processing
45. ____ are used for image compression and for pyramidal representation in 1 point
Fourier coeficients
Wavelets
DCT coefficients
JPEG images
Rods
Cones
47. Which of the following steps deals with tools for extracting image components that are useful in the representation and description of shape? (1 point)
Segmentation
Compression
Morphological processing
48. In which step of the processing is assigning a label (e.g., “vehicle”) to an object based on its descriptors done? (1 point)
Object recognition
Morphological processing
Segmentation
51. After the digitization process a digital image with M rows and N columns, both positive, has L max gray levels, i.e. an integer power of 2 (L = 2^k), for each pixel. Then, the number b of bits required to store the digitized image is: * (1 point)
b=M*N*k
b=M*N*L
b=M*L*k
b=L*N*k
1.
a.
industry
b.
astronomy
c.
radar
d.
medical diagnoses
Answer: (c).
radar
2.
a.
b.
c.
d.
3.
a.
mineral mapping
b.
soil moisture
c.
water penetration
d.
vegetation discrimination
Answer: (a).
mineral mapping
4.
a.
law enforcement
b.
radar
c.
nuclear medicine
d.
fluorescence microscopy
Answer: (d).
fluorescence microscopy
5.
a.
b.
c.
d.
Answer: (d).
6.
a.
audio
b.
sound
c.
sunlight
d.
ultraviolet
Answer: (b).
sound
7.
a.
industry
b.
astronomical observations
c.
angiography
d.
lithography
Answer: (c).
angiography
8.
a.
electrical spectrum
b.
magnetic spectrum
c.
electro spectrum
d.
Answer: (d).
9.
a.
gamma rays
b.
x-rays
c.
ultraviolet rays
d.
visible rays
Answer: (a).
gamma rays
10.
a.
ultrasonic
b.
radar
c.
visible and infrared
d.
gamma
Answer: (b).
radar
11.
a.
b.
degraded images
c.
d.
brighter images
Answer: (b).
degraded images
12.
a.
medicines
b.
radar
c.
lens enhancement
d.
medical diagnoses
Answer: (b).
radar
13.
a.
slide projector
b.
side projector
c.
dual projector
d.
imaging projector
Answer: (a).
slide projector
14.
a.
industry
b.
radar
c.
medicine
d.
lithography
Answer: (b).
radar
15.
a.
x-rays
b.
microwaves
c.
gamma
d.
radio waves
Answer: (d).
radio waves
16.
a.
MRI
b.
surgery
c.
CT scan
d.
injections
Answer: (a).
MRI
17.
a.
chlorine
b.
fluorine
c.
fluoresce
d.
copper
Answer: (c).
fluoresce
18.
high level
b.
low level
c.
last level
d.
mid level
Answer: (c).
last level
19.
a.
law enforcement
b.
lithography
c.
medicine
d.
voice calling
Answer: (d).
voice calling
20.
A structured light illumination technique was used for
a.
lens deformation
b.
inverse filtering
c.
lens enhancement
d.
lens error
Answer: (a).
lens deformation
21.
a.
edges
b.
slices
c.
boundaries
d.
illumination
Answer: (b).
slices
22.
a.
gamma rays
b.
visible spectrum
c.
x-rays
d.
uv rays
Answer: (d).
uv rays
23.
a.
radar
b.
astronomical observations
c.
industry
d.
lithography
Answer: (b).
astronomical observations
24.
a.
image addition
b.
image multiplication
c.
image division
d.
Answer: (b).
image multiplication
25.
a.
10.4-12.5
b.
10.4-13.5
c.
11.4-12.5
d.
10.3-12.5
Answer: (a).
10.4-12.5
26.
a.
thumb prints
b.
paper currency
c.
mp3
d.
Answer: (c).
mp3
27.
a.
spatial coordinates
b.
frequency coordinates
c.
time coordinates
d.
real coordinates
Answer: (a).
spatial coordinates
28.
a.
400-700nm
b.
600-900nm
c.
400-700pm
d.
600-900pm
Answer: (a).
400-700nm
29.
a.
b.
c.
four gray levels
d.
Answer: (d).
30.
Lithography uses
a.
ultraviolet
b.
x-rays
c.
gamma
d.
visible rays
Answer: (a).
ultraviolet
31.
a.
soil moisture
b.
mineral mapping
c.
water penetration
d.
vegetation discrimination
Answer: (d).
vegetation discrimination
32.
a.
mechatronic
b.
acoustic
c.
ultrasonic
d.
electronic
Answer: (a).
mecatronic
33.
a.
blue
b.
violet
c.
green
d.
red
Answer: (b).
violet
34.
a.
visible
b.
gamma
c.
x-rays
d.
ultraviolet
Answer: (c).
x-rays
35.
a.
soil moisture
b.
mineral mapping
c.
water penetration
d.
vegetation discrimination
Answer: (c).
water penetration
36.
a.
spectral imaging
b.
c.
central imaging
d.
bio imaging
Answer: (b).
37.
color enhancement
b.
frequency enhancement
c.
spatial enhancement
d.
detection
Answer: (d).
detection
38.
a.
x-rays
b.
gamma
c.
microwaves
d.
radio waves
Answer: (a).
x-rays
39.
Wavelength of near infrared ranges from
a.
0.76-1.90
b.
0.76-0.90
c.
0.36-0.90
d.
0.76-0.10
Answer: (b).
0.76-0.90
40.
a.
0.52-0.70
b.
0.52-0.62
c.
0.53-0.60
d.
0.52-0.60
Answer: (d).
0.52-0.60
41.
a.
b.
c.
visual inspection
d.
automated inspection
Answer: (a).
42.
a.
lithography
b.
astronomy
c.
industrial inspection
d.
medicine inspection
Answer: (c).
industrial inspection
43.
a.
medicines
b.
chemistry
c.
neurobiology
d.
chemicals
Answer: (c).
neurobiology
44.
a.
voice over IP
b.
c.
audio processing
d.
video processing
Answer: (b).
45.
a.
gamma rays
b.
x-rays
c.
d.
ultraviolet
Answer: (c).
46.
a.
filtration
b.
image acquisition
c.
image enhancement
d.
image restoration
Answer: (b).
image acquisition
47.
a.
red
b.
blue
c.
green
d.
yellow
Answer: (a).
red
48.
a.
detection
b.
correction
c.
inspection
d.
enhancement
Answer: (a).
detection
49.
a.
gamma rays
b.
x-rays
c.
radio waves
d.
ultraviolet
Answer: (c).
radio waves
50.
a.
Position
b.
Brightness
c.
Contrast
d.
Noise
Answer: (b).
Brightness
51.
Among the following image processing techniques which is fast, precise and flexible.
a.
Optical
b.
Digital
c.
Electronic
d.
Photographic
Answer: (b).
Digital
52.
a.
Height of image
b.
Width of image
c.
Amplitude of image
d.
Resolution of image
Answer: (c).
Amplitude of image
53.
a.
Dynamic range
b.
Band range
c.
Peak range
d.
Resolution range
Answer: (a).
Dynamic range
54.
a.
Saturation
b.
Hue
c.
Brightness
d.
Intensity
Answer: (b).
Hue
55.
Which gives a measure of the degree to which a pure colour is diluted by white light?
a.
Saturation
b.
Hue
c.
Intensity
d.
Brightness
Answer: (a).
Saturation
56.
Interpretation
b.
Recognition
c.
Acquisition
d.
Segmentation
Answer: (a).
Interpretation
57.
a.
256 X 256
b.
512 X 512
c.
1920 X 1080
d.
1080 X 1080
Answer: (b).
512 X 512
58.
The number of grey values are integer powers of:
a.
b.
c.
d.
Answer: (b).
59.
a.
Image restoration
b.
Image enhancement
c.
Image acquisition
d.
Segmentation
Answer: (c).
Image acquisition
60.
In which step of processing, the images are subdivided successively into smaller regions?
a.
Image enhancement
b.
Image acquisition
c.
Segmentation
d.
Wavelets
Answer: (d).
Wavelets
61.
a.
Wavelets
b.
Segmentation
c.
d.
Morphological processing
Morphological processing
62.
What is the step that is performed before color image processing in image processing?
a.
b.
Image enhancement
c.
Image restoration
d.
Image acquisition
Answer: (c).
Image restoration
63.
a.
10
b.
c.
11
d.
12
Answer: (a).
10
64.
a.
b.
c.
d.
Answer: (b).
65.
Which of the following step deals with tools for extracting image components those are useful in the
representation and description of shape?
a.
Segmentation
b.
c.
Compression
d.
Morphological processing
Answer: (d).
Morphological processing
66.
In which step of the processing, assigning a label (e.g., “vehicle”) to an object based on its descriptors is
done?
a.
Object recognition
b.
Morphological processing
c.
Segmentation
d.
Answer: (a).
Object recognition
67.
a.
Deals with extracting attributes that result in some quantitative information of interest
b.
Deals with techniques for reducing the storage required saving an image, or the bandwidth required
transmitting it
c.
Deals with partitioning an image into its constituent parts or objects
d.
Deals with property in which images are subdivided successively into smaller regions
Answer: (c).
68.
a.
b.
c.
d.
Answer: (b).
a.
microscopy
b.
medical
c.
industry
d.
radar
Answer: (b).
medical
2.
a.
1048576
b.
1148576
c.
1248576
d.
1348576
Answer: (a).
1048576
3.
a.
strong cells
b.
inner cells
c.
fibrous cells
d.
outer cells
Answer: (c).
fibrous cells
4.
a.
audio
b.
AM
c.
FM
d.
Both b and c
Both b and c
5.
L = 2^3 would have
a.
2 levels
b.
4 levels
c.
6 levels
d.
8 levels
Answer: (d).
8 levels
6.
a.
eye lid
b.
lashes
c.
anterior
d.
exterior
Answer: (c).
anterior
7.
a.
values
b.
numbers
c.
frequencies
d.
intensities
Answer: (d).
intensities
8.
In MxN, M is no of
a.
intensity levels
b.
colors
c.
rows
d.
columns
Answer: (c).
rows
9.
a.
dots
b.
coordinate
c.
pixels
d.
value
Answer: (c).
pixels
10.
a.
b.
voltage signal
c.
digitized image
d.
analog signal
Answer: (c).
digitized image
11.
a.
radiance
b.
illuminance
c.
sampling
d.
quantization
Answer: (c).
sampling
12.
a.
pixel
b.
dot
c.
coordinate
d.
digits
Answer: (a).
pixel
13.
a.
two
b.
three
c.
four
d.
five
Answer: (b).
three
14.
a.
speed of light
b.
light constant
c.
Planck's constant
d.
acceleration constant
Answer: (c).
Planck's constant
15.
a.
1.8mm
b.
1.5mm
c.
1.6mm
d.
1.7mm
Answer: (b).
1.5mm
16.
b.
c.
d.
Answer: (d).
17.
a.
b.
c.
d.
Answer: (b).
18.
Hard x-rays are used in
a.
medicines
b.
lithoscopy
c.
industry
d.
radar
Answer: (c).
industry
19.
a.
b = NxK
b.
b = MxN
c.
b = MxNxK
d.
b = MxK
Answer: (c).
b = MxNxK
20.
a.
b.
c.
d.
Answer: (a).
21.
a.
radiance
b.
illuminance
c.
sampling
d.
quantization
Answer: (d).
quantization
22.
a.
interchange
b.
interpolation
c.
extrapolation
d.
estimation
Answer: (b).
interpolation
23.
a.
eye lid
b.
cornea
c.
retina
d.
sclera
Answer: (d).
sclera
24.
a.
2.998x10^8
b.
3.998x10^8
c.
4.998x10^8
d.
5.998x10^8
Answer: (a).
2.998x10^8
25.
a.
2 levels
b.
3 levels
c.
4 levels
d.
5 levels
Answer: (a).
2 levels
26.
a.
wavelength
b.
frequency
c.
energy
d.
power
Answer: (b).
frequency
27.
a.
cornea
b.
cells
c.
retina
d.
choroid
Answer: (b).
cells
28.
In MxN, N is no of
a.
intensity levels
b.
colors
c.
rows
d.
columns
Answer: (d).
columns
29.
a.
radiance
b.
refraction
c.
illumination
d.
brightness
Answer: (b).
refraction
30.
a.
wide domain
b.
spatial domain
c.
frequency domain
d.
algebraic domain
Answer: (c).
frequency domain
31.
a.
1 and 2
b.
0 and 1
c.
0 and 2
d.
0 and -1
Answer: (b).
0 and 1
32.
a.
voltage waveform
b.
current waveform
c.
audio
d.
discrete signals
Answer: (a).
voltage waveform
33.
Mechanical digitizers are referred to
a.
densitometer
b.
micrometer
c.
microdensity
d.
microdensitometer
Answer: (d).
microdensitometer
34.
a.
photopic
b.
photogenic
c.
photograph
d.
protoplasm
Answer: (a).
photopic
35.
a.
reflection
b.
sampling
c.
quantization
d.
Both b and c
Answer: (d).
Both b and c
36.
A matrix is denoted by
a.
M.N
b.
MxN
c.
M+N
d.
MN
Answer: (b).
MxN
37.
a.
illumination
b.
brightness
c.
brightness adaption
d.
illumination adaption
Answer: (c).
brightness adaption
38.
a.
v(x,y) = ax+by+cxy+d
b.
v(x,y) = ax+by+cxy
c.
v(x,y) = ax+by+d
d.
v(x,y) = by+cxy+d
v(x,y) = ax+by+cxy+d
39.
a.
single sensor
b.
line sensor
c.
matrix sensor
d.
array sensor
Answer: (c).
matrix sensor
40.
a.
x-ray film
b.
rays
c.
images
d.
reel
Answer: (a).
x-ray film
41.
a.
9.55x10^-34
b.
8.55x10^-34
c.
7.55x10^-34
d.
6.55x10^-34
Answer: (d).
6.55x10^-34
42.
a.
probabilistic formulations
b.
additional formulations
c.
probabilistic addition
d.
probabilistic subtraction
Answer: (a).
probabilistic formulations
43.
a.
focal length
b.
width
c.
length
d.
focal width
Answer: (a).
focal length
44.
a.
b.
c.
illumination and radiance
d.
Answer: (d).
45.
a.
eye lid
b.
cornea
c.
retina
d.
sclera
Answer: (c).
retina
46.
a.
wavelength
b.
frequency
c.
energy
d.
power
Answer: (a).
wavelength
47.
Matrix is made up of
a.
rows
b.
column
c.
values
d.
Both a and b
Answer: (d).
Both a and b
48.
a.
wide domain
b.
spatial domain
c.
frequency domain
d.
algebraic domain
Answer: (b).
spatial domain
49.
To convert a continuous sensed data into Digital form, which of the following is required?
a.
Sampling
b.
Quantization
c.
d.
Answer: (c).
50.
To convert a continuous image f(x, y) to digital form, we have to sample the function in __________
a.
Coordinates
b.
Amplitude
c.
d.
Answer: (c).
51.
a.
b.
c.
d.
Answer: (a).
52.
For a continuous image f(x, y), Quantization is defined as
a.
b.
c.
d.
Answer: (b).
53.
“For a given image in one-dimension given by function f(x, y), to sample the function we take equally
spaced samples, superimposed on the function, along a horizontal line. However, the sample values still
span (vertically) a continuous range of gray-level values. So, to convert the given function into a digital
function, the gray-level values must be divided into various discrete levels.”
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
54.
How is sampling been done when an image is generated by a single sensing element combined with
mechanical motion?
a.
The number of sensors in the strip defines the sampling limitations in one direction and Mechanical
motion in the other direction.
b.
The number of sensors in the sensing array establishes the limits of sampling in both directions.
c.
The number of mechanical increments when the sensor is activated to collect data.
d.
Answer: (c).
The number of mechanical increments when the sensor is activated to collect data.
55.
How does sampling gets accomplished with a sensing strip being used for image acquisition?
a.
The number of sensors in the strip establishes the sampling limitations in one image direction and
Mechanical motion in the other direction
b.
The number of sensors in the sensing array establishes the limits of sampling in both directions
c.
The number of mechanical increments when the sensor is activated to collect data
d.
Answer: (a).
The number of sensors in the strip establishes the sampling limitations in one image direction and
Mechanical motion in the other direction
56.
How is sampling accomplished when a sensing array is used for image acquisition?
a.
The number of sensors in the strip establishes the sampling limitations in one image direction and
Mechanical motion in the other direction
b.
The number of sensors in the sensing array defines the limits of sampling in both directions
c.
The number of mechanical increments at which we activate the sensor to collect data
d.
Answer: (b).
The number of sensors in the sensing array defines the limits of sampling in both directions
57.
a.
b.
d.
Answer: (c).
58.
Assume that an image f(x, y) is sampled so that the result has M rows and N columns. If the values of the
coordinates at the origin are (x, y) = (0, 0), then the notation (0, 1) is used to signify :
a.
b.
c.
d.
Answer: (a).
59.
The resulting image of sampling and quantization is considered a matrix of real numbers. By what
name(s) the element of this matrix array is called __________
a.
Pixel or Pel
c.
d.
Answer: (c).
60.
Let Z be the set of real integers and R the set of real numbers. The sampling process may be viewed as
partitioning the x-y plane into a grid, with the central coordinates of each grid being from the Cartesian
product Z2, that is a set of all ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is
said a digital image if:
a.
(x, y) are integers from Z2 and f is a function that assigns a gray-level value (from Z) to each distinct pair
of coordinates (x, y)
b.
(x, y) are integers from R2 and f is a function that assigns a gray-level value (from R) to each distinct pair
of coordinates (x, y)
c.
(x, y) are integers from R2 and f is a function that assigns a gray-level value (from Z) to each distinct pair
of coordinates (x, y)
d.
(x, y) are integers from Z2 and f is a function that assigns a gray-level value (from R) to each distinct pair
of coordinates (x, y)
Answer: (d).
(x, y) are integers from Z2 and f is a function that assigns a gray-level value (from R) to each distinct pair
of coordinates (x, y)
61.
Let Z be the set of real integers and R the set of real numbers. The sampling process may be viewed as
partitioning the x-y plane into a grid, with the central coordinates of each grid being from the Cartesian
product Z2, that is a set of all ordered pairs (zi, zj), with zi and zj being integers from Z. Then, f(x, y) is a
digital image if (x, y) are integers from Z2 and f is a function that assigns a gray-level value (that is, a real
number from the set R) to each distinct coordinate pair (x, y). What happens to the digital image if the
gray levels also are integers?
a.
The Digital image then becomes a 2-D function whose coordinates and amplitude values are integers
b.
The Digital image then becomes a 1-D function whose coordinates and amplitude values are integers
c.
d.
Answer: (a).
The Digital image then becomes a 2-D function whose coordinates and amplitude values are integers
62.
The digitization process i.e. the digital image has M rows and N columns, requires decisions about values
for M, N, and for the number, L, of gray levels allowed for each pixel. The value M and N have to be:
a.
b.
M and N have to be negative integer
c.
d.
Answer: (a).
63.
The digitization process i.e. the digital image has M rows and N columns, requires decisions about values
for M, N, and for the number, L, of max gray levels. There are no requirements on M and N, other than
that M and N have to be positive integer. However, the number of gray levels typically is
a.
b.
c.
d.
Answer: (a).
64.
The digitization process i.e. the digital image has M rows and N columns, requires decisions about values
for M, N, and for the number, L, of max gray levels is an integer power of 2 i.e. L = 2k, allowed for each
pixel. If we assume that the discrete levels are equally spaced and that they are integers then they are in
the interval __________ and Sometimes the range of values spanned by the gray scale is called the
________ of an image.
a.
b.
c.
d.
Answer: (d).
65.
After digitization process a digital image with M rows and N columns have to be positive and for the
number, L, max gray levels i.e. an integer power of 2 for each pixel. Then, the number b, of bits required
to store a digitized image is:
a.
b=M*N*k
b.
b=M*N*L
c.
b=M*L*k
d.
b=L*N*k
Answer: (a).
b=M*N*k
66.
An image whose gray-levels span a significant portion of gray scale have __________ dynamic range
while an image with dull, washed out gray look have __________ dynamic range.
a.
b.
c.
Both have High dynamic range, irrespective of gray levels span significance on gray scale
d.
Both have Low dynamic range, irrespective of gray levels span significance on gray scale
Answer: (b).
67.
Validate the statement “When in an Image an appreciable number of pixels exhibit high dynamic range,
the image will have high contrast.”
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
68.
In digital image of M rows and N columns and L discrete gray levels, calculate the bits required to store a
digitized image for M=N=32 and L=16.
a.
16384
b.
4096
c.
8192
d.
512
Answer: (b).
4096
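Worked form of this calculation, using b = M × N × k from question 65: L = 16 = 2^k gives k = 4, so b = 32 × 32 × 4 = 4096 bits.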
69.
a.
random
b.
vertex
c.
contour
d.
sampling
Answer: (d).
sampling
70.
The transition between continuous values of the image function and its digital equivalent is called
______________
a.
Quantisation
b.
Sampling
c.
Rasterisation
d.
Answer: (a).
Quantisation
71.
Images quantised with insufficient brightness levels will lead to the occurrence of ____________
a.
Pixillation
b.
Blurring
c.
False Contours
d.
Answer: (c).
False Contours
72.
a.
Intensity Resolution
b.
Contour
c.
Saturation
d.
Contrast
Answer: (a).
Intensity Resolution
73.
What is the tool used in tasks such as zooming, shrinking, rotating, etc.?
a.
Sampling
b.
Interpolation
c.
Filters
d.
Answer: (b).
Interpolation
74.
The type of Interpolation where for each new location the intensity of the immediate pixel is assigned is
___________
a.
bicubic interpolation
b.
cubic interpolation
c.
bilinear interpolation
d.
Answer: (d).
75.
The type of Interpolation where the intensity of the FOUR neighbouring pixels is used to obtain intensity
a new location is called ___________
a.
cubic interpolation
b.
c.
bilinear interpolation
d.
bicubic interpolation
Answer: (b).
76.
Dynamic range of imaging system is a ratio where the upper limit is determined by
a.
Saturation
b.
Noise
c.
Brightness
d.
Contrast
Answer: (a).
Saturation
77.
For Dynamic range ratio the lower limit is determined by
a.
Saturation
b.
Brightness
c.
Noise
d.
Contrast
Answer: (c).
Noise
78.
a.
line pairs
b.
pixels
c.
dots
d.
Answer: (d).
a.
Microdensitometer
b.
Photodiode
c.
CMOS
d.
Answer: (b).
Photodiode
80.
a.
A photodiode
b.
Sensor strips
c.
Sensor arrays
d.
CMOS
Answer: (b).
Sensor strips
81.
a.
b.
c.
d.
Answer: (d).
82.
The section of the real plane spanned by the coordinates of an image is called the _____________
a.
Spacial Domain
b.
Coordinate Axes
c.
Plane of Symmetry
d.
Answer: (a).
Spacial Domain
83.
The difference is intensity between the highest and the lowest intensity levels in an image is
___________
a.
Noise
b.
Saturation
c.
Contrast
d.
Brightness
Answer: (c).
Contrast
84.
_____________ is the effect caused by the use of an insufficient number of intensity levels in smooth
areas of a digital image.
a.
Gaussian smooth
b.
Contouring
c.
False Contouring
d.
Interpolation
Answer: (c).
False Contouring
85.
The process of using known data to estimate values at unknown locations is called
a.
Acquisition
b.
Interpolation
c.
Pixelation
d.
Answer: (b).
Interpolation
86.
a.
Shading Correction
b.
Masking
c.
Pixelation
d.
Answer: (c).
Pixelation
87.
The procedure done on a digital image to alter the values of its individual pixels is
a.
Neighbourhood Operations
b.
Image Registration
c.
d.
Answer: (d).
88.
In Geometric Spacial Transformation, points whose locations are known precisely in input and reference
images.
a.
Tie points
b.
Reseau points
c.
Known points
d.
Key-points
Answer: (a).
Tie points
89.
a.
UV Rays
b.
Gamma Rays
c.
Microwaves
d.
Radio Waves
Answer: (b).
Gamma Rays
90.
a.
array by array
b.
pixel by pixel
c.
column by column
d.
row by row
Answer: (b).
pixel by pixel
91.
The property indicating that the output of a linear operation due to the sum of two inputs is same as
performing the operation on the inputs individually and then summing the results is called ___________
a.
additivity
b.
heterogeneity
c.
homogeneity
d.
Answer: (a).
additivity
92.
The property indicating that the output of a linear operation to a constant times as input is the same as
the output of operation due to original input multiplied by that constant is called _________
a.
additivity
b.
heterogeneity
c.
homogeneity
d.
Answer: (c).
homogeneity
93.
a.
Additivity
b.
Homogeneity
c.
Subtraction
d.
Answer: (c).
Subtraction
94.
A commercial use of Image Subtraction is ___________
a.
b.
MRI scan
c.
CT scan
d.
Answer: (a).
95.
a.
Shading correction
b.
Masking
c.
Dilation
d.
Answer: (b).
Masking
96.
If every element of a set A is also an element of a set B, then A is said to be a _________ of set B.
a.
Disjoint set
b.
Union
c.
Subset
d.
Complement set
Answer: (c).
Subset
97.
Consider two regions A and B composed of foreground pixels. The ________ of these two sets is the set
of elements belonging to set A or set B or both.
a.
OR
b.
AND
c.
NOT
d.
XOR
Answer: (a).
OR
98.
Imaging systems having physical artefacts embedded in the imaging sensors produce a set of points
called __________
a.
Tie Points
b.
Control Points
c.
Reseau Marks
d.
Answer: (c).
Reseau Marks
99.
Image processing approaches operating directly on pixels of input image work directly in ____________
a.
Transform domain
b.
Spatial domain
c.
Inverse transformation
d.
Spatial domain
1.
a.
red noise
b.
black noise
c.
white noise
d.
normal noise
Answer: (d).
normal noise
2.
a.
frequency domain
b.
time domain
c.
spatial domain
d.
plane
Answer: (a).
frequency domain
3.
a.
additivity
b.
homogeneity
c.
multiplication
d.
Both a and b
Answer: (d).
Both a and b
4.
a.
b.
probability density function
c.
d.
Answer: (b).
5.
Filter that replaces the pixel value with the medians of intensity levels is
a.
b.
c.
median filter
d.
Answer: (c).
median filter
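A minimal sketch of the median filter, assuming SciPy's ndimage module (the 3x3 neighbourhood is an arbitrary illustrative choice); it is the classic remedy for salt-and-pepper (impulse) noise:

    from scipy import ndimage

    def median_denoise(img, size=3):
        """Replace each pixel with the median of its size x size neighbourhood."""
        return ndimage.median_filter(img, size=size)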
6.
a.
notch filter
b.
bandpass filter
c.
wiener filter
d.
inverse filter
Answer: (d).
inverse filter
7.
a.
different
b.
homogenous
c.
correlated
d.
uncorrelated
Answer: (d).
uncorrelated
8.
EBCT scanners stands for
a.
b.
c.
d.
Answer: (d).
9.
a.
b.
c.
d.
Answer: (b).
10.
a.
lowpass filter
b.
bandpass filter
c.
highpass filter
d.
max filter
Answer: (b).
bandpass filter
11.
a.
2ways
b.
3ways
c.
4ways
d.
5ways
Answer: (b).
3ways
12.
a.
degraded image
b.
original image
c.
pixels
d.
coordinates
Answer: (b).
original image
13.
a.
notch filter
b.
bandpass filter
c.
wiener filter
d.
max filter
Answer: (c).
wiener filter
14.
a.
degraded image
b.
original image
c.
restored image
d.
plane
Answer: (c).
restored image
15.
a.
b.
c.
d.
Answer: (d).
16.
a.
single projection
b.
double projection
c.
triple projection
d.
octa projection
Answer: (a).
single projection
17.
a.
sharpening
b.
blurring
c.
restoration
d.
acquisition
Answer: (b).
blurring
18.
In geometric mean filters when alpha is equal to 0 then it works as
a.
notch filter
b.
bandpass filter
c.
d.
inverse filter
Answer: (c).
19.
a.
newton
b.
Raphson
c.
wiener
d.
newton-Raphson
Answer: (d).
newton-Raphson
20.
a.
additive noise
b.
destruction
c.
pixels
d.
coordinates
Answer: (a).
additive noise
21.
a.
image nonconvolution
b.
image inconvolution
c.
image deconvolution
d.
image byconvolution
Answer: (c).
image deconvolution
22.
Impulse is simulated by
a.
black dot
b.
gray dot
c.
bright dot
d.
sharp dot
Answer: (c).
bright dot
23.
a.
inverse filtering
b.
spike filtering
c.
black filtering
d.
ranking
Answer: (a).
inverse filtering
24.
a.
variance
b.
noise
c.
restoration
d.
power
Answer: (a).
variance
25.
CT stands for
a.
computerized tomography
b.
computed tomography
c.
computerized terminology
d.
computed terminology
Answer: (b).
computed tomography
26.
b.
c.
d.
Answer: (a).
27.
Approach that incorporates both degradation function and statistical noise in restoration is called
a.
inverse filtering
b.
spike filtering
c.
wiener filtering
d.
ranking
Answer: (c).
wiener filtering
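For reference, the Wiener (minimum mean-square-error) filter is usually written in the approximate parametric form (textbook notation, not from the quiz), where H is the degradation function, G the degraded image spectrum, and K approximates the noise-to-signal power ratio:

\hat{F}(u,v) = \left[ \frac{1}{H(u,v)} \cdot \frac{|H(u,v)|^2}{|H(u,v)|^2 + K} \right] G(u,v).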
28.
Bandreject filters are used where the noise components are usually
a.
rejected
b.
unknown
c.
known
d.
taken
Answer: (c).
known
29.
a.
b.
gamma noise
c.
black noise
d.
exponential noise
Answer: (a).
additive random noise
30.
a.
additive noise
b.
c.
pixels
d.
ranking
Answer: (d).
ranking
31.
a.
transmission
b.
degradation
c.
restoration
d.
acquisition
Answer: (a).
transmission
32.
a.
b.
most square error filter
c.
d.
error filter
Answer: (c).
33.
a.
lowpass filter
b.
bandpass filter
c.
highpass filter
d.
max filter
Answer: (c).
highpass filter
34.
a.
Rayleigh noise
b.
gamma noise
c.
black noise
d.
exponential noise
Answer: (c).
black noise
35.
Filter that replaces the pixel value with the minimum values of intensity levels is
a.
max filter
b.
c.
median filter
d.
min filter
Answer: (d).
min filter
36.
FFT stands for
a.
b.
c.
d.
Answer: (a).
37.
a.
b.
bandpass filters
c.
wiener filters
d.
error filters
Answer: (a).
a.
notch filter
b.
bandpass filter
c.
highpass filter
d.
max filter
Answer: (a).
notch filter
39.
a.
frequency domain
b.
time domain
c.
spatial domain
d.
plane
spatial domain
40.
Filter that computes midpoint between min and max value is called
a.
max filter
b.
midpoint filter
c.
median filter
d.
min filter
Answer: (b).
midpoint filter
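A minimal sketch of the midpoint filter, again assuming SciPy's ndimage module; it averages the local maximum and minimum and works best for Gaussian or uniform noise:

    from scipy import ndimage

    def midpoint_filter(img, size=3):
        """0.5 * (local max + local min) over a size x size neighbourhood."""
        f = img.astype(float)
        return 0.5 * (ndimage.maximum_filter(f, size=size) +
                      ndimage.minimum_filter(f, size=size))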
41.
a.
lowpass filter
b.
bandpass filter
c.
highpass filter
d.
max filter
Answer: (a).
lowpass filter
42.
a.
destruction
b.
degradation
c.
restoration
d.
acquisition
Answer: (d).
acquisition
43.
a.
electrical interference
b.
gamma interference
c.
beta interference
d.
mechanical interference
Answer: (a).
electrical interference
44.
sharpening
b.
spike noise
c.
restoration
d.
superposition
Answer: (d).
superposition
45.
a.
red noise
b.
black noise
c.
white noise
d.
green noise
Answer: (c).
white noise
46.
a.
Rayleigh noise
b.
spike noise
c.
black noise
d.
exponential noise
Answer: (b).
spike noise
47.
a.
Rayleigh noise
b.
degradation
c.
restoration
d.
optimum restoration
Answer: (d).
optimum restoration
48.
a.
Rayleigh noise
b.
gamma noise
c.
black noise
d.
impulse
Answer: (d).
impulse
49.
a.
ones
b.
zeros
c.
pixels
d.
coordinates
Answer: (b).
zeros
50.
Filter that replaces the pixel value with the maximum values of intensity levels is
a.
max filter
b.
c.
median filter
d.
min filter
Answer: (a).
max filter
51.
a.
Rods
b.
Cones
c.
d.
Answer: (c).
52.
How is image formation in the eye different from that in a photographic camera
a.
No difference
b.
c.
d.
Answer: (b).
53.
Range of light intensity levels to which the human eye can adapt (in Log of Intensity-mL)
a.
10^-6 to 10^-4
b.
10^4 to 10^6
c.
10^-6 to 10^4
d.
10^-5 to 10^5
Answer: (c).
10^-6 to 10^4
54.
Related to intensity
b.
Related to brightness
c.
d.
Answer: (a).
Related to intensity
55.
a.
b.
c.
d.
Answer: (a).
a.
Blind Spot
b.
Sclera
c.
Choroid
d.
Retina
Answer: (d).
Retina
57.
a.
Source of nutrition
b.
Detect color
c.
d.
Answer: (d).
Control amount of light
58.
a.
Cones
b.
Rods
c.
Retina
d.
Answer: (b).
Rods
59.
a.
1:20
b.
1:2
c.
1:1
d.
1:5
Answer: (a).
1:20
60.
a.
Lens
b.
Ciliary body
c.
Blind spot
d.
Fovea
Answer: (c).
Blind spot
61.
In 4-neighbours of a pixel p, how far are each of the neighbours located from p?
a.
b.
c.
alternating pixels
d.
Answer: (a).
62.
If S is a subset of pixels, pixels p and q are said to be ____________ if there exists a path between them
consisting of pixels entirely in S.
a.
continuous
b.
ambiguous
c.
connected
d.
Answer: (c).
connected
63.
a.
Disjoint
b.
Region
c.
Closed
d.
Adjacent
Answer: (b).
Region
64.
Two regions are said to be ___________ if their union forms a connected set.
a.
Adjacent
b.
Disjoint
c.
Closed
d.
Answer: (a).
Adjacent
65.
If an image contains K disjoint regions, what does the union of all the regions represent?
a.
Background
b.
Foreground
c.
Outer Border
d.
Inner Border
Answer: (b).
Foreground
66.
For a region R, the set of points that are adjacent to the complement of R is called as ________
a.
Boundary
b.
Border
c.
Contour
d.
Answer: (d).
67.
The distance between pixels p and q, the pixels have a distance less than or equal to some value of
radius r centred at (x,y) is called :
a.
Euclidean distance
b.
City-Block distance
c.
Chessboard distance
d.
Answer: (a).
Euclidean distance
68.
The distance between pixels p and q, the pixels have a distance less than or equal to some value of
radius r, form a diamond centred at (x,y) is called :
a.
Euclidean distance
b.
Chessboard distance
c.
City-Block distance
d.
Answer: (c).
City-Block distance
69.
The distance between pixels p and q, the pixels have a distance less than or equal to some value of
radius r, form a square centred at (x,y) is called :
a.
Euclidean distance
b.
Chessboard distance
c.
City-Block distance
d.
None of the Mentioned
Answer: (b).
Chessboard distance
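A minimal sketch of the three distance measures in questions 67-69, written as plain functions on pixel coordinates p = (x1, y1) and q = (x2, y2):

    def euclidean(p, q):    # points within radius r form a disc centred at (x, y)
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def city_block(p, q):   # D4 distance: points within r form a diamond
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def chessboard(p, q):   # D8 distance: points within r form a square
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))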
70.
a.
4-Adjacency
b.
8-Adjacency
c.
m-Adjacency
d.
Answer: (d).
1.
a.
128
b.
255
c.
256
d.
512
Answer: (c).
256
2.
a.
slicing
b.
color slicing
c.
cutting
d.
color enhancement
Answer: (b).
color slicing
3.
a.
safe colors
b.
colors space
c.
web colors
d.
Answer: (d).
4.
b.
c.
d.
Answer: (a).
5.
a.
b.
c.
255
d.
256
Answer: (a).
6.
CRT technology stands for
a.
b.
c.
d.
Answer: (b).
7.
a.
2 components
b.
3 components
c.
4 components
d.
5 components
Answer: (b).
3 components
8.
a.
n=2
b.
n=3
c.
n=4
d.
n=5
Answer: (b).
n=3
9.
a.
brightness
b.
transitivity
c.
chromaticity
d.
reflectivity
Answer: (c).
chromaticity
10.
a.
H = H-90
b.
H = H-100
c.
H = H-120
d.
H = H-180
Answer: (c).
H = H-120
11.
a.
b.
contouring
c.
erosion
d.
Answer: (a).
12.
a.
boundary
b.
edges
c.
white region
d.
black region
Answer: (d).
black region
13.
a.
[f(x) = 0]
b.
[f(y) = 0]
c.
[f(x,y) = 0]
d.
[f(x,y) = 1]
Answer: (c).
[f(x,y) = 0]
14.
a.
spatial variables
b.
frequency variables
c.
intensity variables
d.
Both a and b
Answer: (a).
spatial variables
15.
a.
yellow
b.
red
c.
magenta
d.
cyan
Answer: (d).
cyan
16.
a.
b.
contouring
c.
erosion
d.
Answer: (d).
17.
a.
(0,1,1)
b.
(0,1,0)
c.
(0,0,1)
d.
(1,1,1)
Answer: (d).
(1,1,1)
18.
a.
b.
c.
d.
Answer: (a).
19.
a.
oriented
b.
descriptor
c.
matter
d.
defined
Answer: (b).
descriptor
20.
a.
b.
color space
c.
chromaticity
d.
Answer: (a).
21.
a.
brightness
b.
reflectance
c.
luminance
d.
radiance
Answer: (d).
radiance
22.
a.
b.
c.
d.
Answer: (c).
23.
b.
Cartesian system
c.
chromaticity
d.
colorimetric
Answer: (d).
colorimetric
24.
Radiance is measured in
a.
joule
b.
watt
c.
lumens
d.
meter
Answer: (b).
watt
25.
Color model is also called
a.
color system
b.
color space
c.
color area
d.
Both a and b
Answer: (d).
Both a and b
26.
a.
edges
b.
boundaries
c.
complements
d.
saturation
Answer: (c).
complements
27.
a.
physiopsychological
b.
psychological
c.
physiological
d.
physiopsychology
Answer: (a).
physiopsychological
28.
a.
4 colors
b.
6 colors
c.
7 colors
d.
8 colors
Answer: (b).
6 colors
29.
a.
true colors
b.
false colors
c.
primary colors
d.
secondary colors
Answer: (b).
false colors
30.
a.
RCB
b.
CMYK
c.
RGB
d.
HSI
RCB
31.
a.
pixels
b.
coordinates
c.
pixel depth
d.
intensity levels
Answer: (d).
intensity levels
32.
a.
CMYK
b.
BGR
c.
RGB
d.
CMR
Answer: (c).
RGB
33.
a.
[0,1]
b.
[1,2]
c.
[1,0]
d.
[-1,1]
Answer: (a).
[0,1]
34.
Luminance is measured in
a.
joule
b.
watt
c.
lumens
d.
meter
Answer: (c).
lumens
35.
The amount of energy perceived by the human through the light source is called
a.
brightness
b.
reflectance
c.
luminance
d.
radiance
Answer: (c).
luminance
36.
a.
white color
b.
magenta color
c.
yellow color
d.
cyan color
Answer: (a).
white color
37.
a.
high intensities
b.
low intensities
c.
middle intensities
d.
zero intensities
Answer: (b).
low intensities
38.
a.
20 bit image
b.
24 bit image
c.
28 bit image
d.
32 bit image
Answer: (b).
24 bit image
39.
a.
b.
c.
d.
Answer: (d).
40.
a.
refracted
b.
transmitted
c.
reflected
d.
absorbed
Answer: (c).
reflected
41.
a.
hue
b.
saturation
c.
descriptor
d.
Both a and b
Answer: (d).
Both a and b
42.
a.
image processing
b.
c.
d.
Answer: (b).
43.
a.
b.
c.
d.
Answer: (b).
44.
pixels
b.
coordinates
c.
pixel depth
d.
color depth
Answer: (c).
pixel depth
45.
a.
n=2
b.
n=3
c.
n=4
d.
n=5
Answer: (c).
n=4
46.
RGB color system is based upon
a.
Cartesian plane
b.
Cartesian system
c.
d.
Answer: (d).
47.
a.
300-600 nm
b.
400-700 nm
c.
500-800 nm
d.
600-900 nm
Answer: (b).
400-700 nm
48.
a.
density slicing
b.
image slicing
c.
color slicing
d.
region slicing
Answer: (a).
density slicing
49.
Color pixel is
a.
scalar
b.
coordinate
c.
vector
d.
Both a and b
Answer: (c).
vector
50.
a.
CMYK
b.
RCB
c.
RGB
d.
CMR
Answer: (a).
CMYK
51.
How many categories does the color image processing is basically divided into?
a.
b.
c.
d.
5
Answer: (b).
52.
a.
b.
c.
d.
Answer: (a).
53.
What are the basic quantities that are used to describe the quality of a chromatic light source?
a.
b.
c.
Answer: (c).
54.
What is the quantity that is used to measure the total amount of energy flowing from the light source?
a.
Brightness
b.
Intensity
c.
Luminence
d.
Radiance
Answer: (d).
Radiance
55.
What are the characteristics that are used to distinguish one color from the other?
a.
b.
c.
Saturation, Hue
d.
Answer: (a).
56.
a.
b.
c.
d.
Answer: (b).
57.
Which of the following represent the correct equations for trichromatic coefficients?
a.
b.
d.
Answer: (a).
58.
a.
It is the amount of red, green and yellow needed to form any particular color
b.
It is the amount of red, green and indigo needed to form any particular color
c.
It is the amount of red, yellow and blue needed to form any particular color
d.
It is the amount of red, green and blue needed to form any particular color
Answer: (d).
It is the amount of red, green and blue needed to form any particular color
59.
What is the value obtained by the sum of the three trichromatic coefficients?
a.
b.
1
c.
d.
Null
Answer: (c).
60.
What is the name of area of the triangle in C.I E chromatic diagram that shows a typical range of colors
produced by RGB monitors?
a.
Color gamut
b.
Tricolor
c.
Color game
d.
Chromatic colors
Answer: (a).
Color gamut
61.
Color space
b.
Color gap
c.
d.
Color system
Answer: (c).
62.
a.
b.
c.
d.
Answer: (a).
63.
How many bit RGB color image is represented by full-color image?
a.
b.
c.
d.
Answer: (b).
64.
What is the equation used to obtain S component of each RGB pixel in RGB color format?
a.
S=1+3/(R+G+B) [min(R,G,B)].
b.
S=1+3/(R+G+B) [max(R,G,B)].
c.
S=1-3/(R+G+B) [max(R,G,B)].
d.
S=1-3/(R+G+B) [min(R,G,B)].
Answer: (d).
S=1-3/(R+G+B) [min(R,G,B)].
65.
What is the equation used to obtain I(Intensity) component of each RGB pixel in RGB color format?
a.
I=1/2(R+G+B)
b.
I=1/3(R+G+B)
c.
I=1/3(R-G-B)
d.
I=1/3(R-G+B)
Answer: (b).
I=1/3(R+G+B)
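A minimal sketch of the S and I equations from questions 64-65 for a single RGB pixel with components scaled to [0, 1]; the small epsilon guarding against division by zero is an implementation detail, not part of the quiz formulas:

    def rgb_to_si(r, g, b, eps=1e-12):
        """Saturation and intensity of one RGB pixel in the HSI model."""
        i = (r + g + b) / 3.0                              # I = 1/3 (R + G + B)
        s = 1.0 - 3.0 / (r + g + b + eps) * min(r, g, b)   # S = 1 - 3/(R+G+B) * min(R,G,B)
        return s, i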
66.
What is the equation used for obtaining R value in terms of HSI components?
a.
R=I[1-(S cosH)/cos(60°-H) ].
b.
R=I[1+(S cosH)/cos(120°-H)].
c.
R=I[1+(S cosH)/cos(60°-H) ].
d.
R=I[1+(S cosH)/cos(30°-H) ].
Answer: (c).
R=I[1+(S cosH)/cos(60°-H) ].
67.
What is the equation used for calculating B value in terms of HSI components?
a.
B=I(1+S)
b.
B=S(1-I)
c.
B=S(1+I)
d.
B=I(1-S)
Answer: (d).
B=I(1-S)
68.
What is the equation used for calculating G value in terms of HSI components?
a.
G=3I-(R+B)
b.
G=3I+(R+B)
c.
G=3I-(R-B)
d.
G=2I-(R+B)
G=3I-(R+B)
69.
Which of the following color models are used for color printing?
a.
RGB
b.
CMY
c.
CMYK
d.
Answer: (d).
Image Segmentation
1.
a.
differences
b.
multiplication
c.
addition
d.
division
2.
a.
Gaussian
b.
laplacian
c.
ideal
d.
butterworth
3.
a.
|Gx|+|Gy|
b.
|Gx|-|Gy|
c.
|Gx|/|Gy|
d.
|Gx|x|Gy|
4.
a.
b.
[2 -1 -1; -1 2 -1; -1 -1 2]
c.
d.
5.
a.
discontinuity
b.
similarity
c.
extraction
d.
recognition
6.
a.
Gx
b.
Gy
c.
Gt
d.
Gs
7.
To avoid negative values, taking absolute values in the Laplacian image doubles the
a.
thickness of lines
b.
thinness of lines
c.
thickness of edges
d.
thinness of edges
a.
b.
c.
positive
d.
negative
9.
a.
b.
[2 -1 -1; -1 2 -1; -1 -1 2]
c.
d.
10.
Second derivative approximation says that values along the ramp must be
a.
nonzero
b.
zero
c.
positive
d.
negative
a.
1,2,3,4
b.
1,2,3…10
c.
1,2,3…50
d.
1,2,3…n
Answer: (d).
1,2,3…n
12.
a.
point detection
b.
line detection
c.
area detection
d.
edge detection
edge detection
13.
a.
sharp image
b.
blur image
c.
gradient image
d.
binary image
Answer: (c).
gradient image
14.
a.
b.
c.
positive
d.
negative
Answer: (c).
positive
15.
a.
image smoothing
b.
image contouring
c.
image enhancement
d.
image recognition
Answer: (a).
image smoothing
16.
a.
b.
30
c.
45
d.
90
Answer: (c).
45
17.
a.
ramp edges
b.
step edges
c.
sharp edges
d.
Both a and b
Answer: (d).
Both a and b
18.
a.
b.
30
c.
45
d.
90
Answer: (a).
19.
a.
0.1
b.
0.2
c.
0.3
d.
0.4
Answer: (a).
0.1
20.
a.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (a).
first derivative
21.
a.
1 pixel
b.
2 pixels
c.
3 pixels
d.
4 pixels
Answer: (a).
1 pixel
22.
a.
horizontal lines
b.
vertical lines
c.
Diagonal lines
d.
edges
Diagonal lines
23.
a.
pixels
b.
constant intensities
c.
point pixels
d.
edges
Answer: (d).
edges
24.
a.
ramp
b.
step
c.
onset
d.
edges
Answer: (c).
onset
25.
Method in which images are input and attributes are output is called
a.
b.
c.
d.
Answer: (c).
26.
a.
spatial filtering
b.
frequency filtering
c.
d.
Answer: (a).
spatial filtering
27.
a.
ramp edges
b.
step edge
c.
roof edges
d.
thinness of edges
Answer: (c).
roof edges
28.
a.
adjacent pixels
b.
near pixels
c.
edge pixels
d.
line pixels
Answer: (a).
adjacent pixels
29.
Averaging is analogous to
a.
differentiation
b.
derivation
c.
addition
d.
integration
Answer: (d).
integration
30.
a.
sharp intensities
b.
constant intensities
c.
low intensities
d.
high intensities
Answer: (b).
constant intensities
31.
problem
b.
objects
c.
image
d.
partition
Answer: (a).
problem
32.
a.
area
b.
line
c.
point
d.
edge
Answer: (a).
area
33.
a.
low frequencies
b.
smooth changes
c.
abrupt changes
d.
contrast
Answer: (c).
abrupt changes
34.
a.
b.
single effect
c.
d.
35.
a.
Gaussian
b.
laplacian
c.
ideal
d.
butterworth
Answer: (b).
laplacian
36.
a.
ramp
b.
step
c.
constant intensity
d.
edge
Answer: (a).
ramp
37.
a.
connected set
b.
boundaries
c.
region
d.
image
Answer: (a).
connected set
38.
a.
b.
128
c.
255
d.
256
Answer: (d).
256
39.
a.
1970
b.
1971
c.
1972
d.
1973
Answer: (a).
1970
40.
a.
pixels
b.
points
c.
cross gradient
d.
intensity
Answer: (d).
intensity
41.
A line is viewed as
a.
area
b.
edge segment
c.
point
d.
edge
Answer: (b).
edge segment
42.
a.
sharpening
b.
blurring
c.
smoothing
d.
contrast
Answer: (c).
smoothing
43.
a.
differentiation
b.
derivation
c.
addition
d.
integration
Answer: (b).
derivation
44.
a.
thin
b.
thick
c.
sharp
d.
blur
Answer: (a).
thin
45.
Example of similarity approach in image segmentation is
a.
b.
c.
d.
Both a and b
Answer: (c).
46.
a.
b.
c.
d.
Answer: (b).
high value coefficients
47.
Points other than exceeding the threshold in output image are marked as
a.
b.
c.
11
d.
Answer: (a).
48.
a.
b.
c.
positive
d.
negative
Answer: (d).
negative
49.
a.
area pixels
b.
line pixels
c.
point pixels
d.
edge pixels
Answer: (d).
edge pixels
50.
a.
edge point
b.
noise point
c.
ramp
d.
step
Answer: (b).
noise point
51.
a.
b.
c.
11
d.
x
Answer: (b).
52.
a.
b.
c.
d.
Both a and b
Answer: (d).
Both a and b
53.
First derivative approximation says that values of intensities at the onset must be
a.
nonzero
b.
zero
c.
positive
d.
negative
Answer: (a).
nonzero
54.
a.
morphology
b.
set theory
c.
extraction
d.
recognition
Answer: (a).
morphology
55.
a.
orthogonal
b.
isolated
c.
edge map
d.
edge normal
Answer: (c).
edge map
56.
a.
b.
30
c.
45
d.
90
Answer: (d).
90
57.
If R is the entire region of the image then union of all segmented parts should be equal to
a.
R
b.
R'
c.
Ri
d.
Rn
Answer: (a).
58.
a.
sum to zero
b.
subtraction to zero
c.
division to zero
d.
multiplication to zero
Answer: (a).
sum to zero
59.
Lines in an image can be oriented at angle
a.
b.
90
c.
30
d.
Both a and b
Answer: (d).
Both a and b
60.
a.
contraction
b.
expansion
c.
scaling
d.
enhancement
Answer: (c).
scaling
61.
a.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (b).
second derivative
62.
a.
b.
c.
good boundary deletion
d.
Answer: (a).
63.
If the standard deviation of the pixels is positive, then sub image is labeled as
a.
black
b.
green
c.
white
d.
red
Answer: (c).
white
64.
a.
large image
b.
gray scale image
c.
color image
d.
binary image
Answer: (d).
binary image
65.
a.
1980
b.
1981
c.
1982
d.
1983
Answer: (a).
1980
66.
Segmentation is a process of
a.
low level processes
b.
c.
d.
Answer: (c).
67.
a.
Gaussian
b.
c.
gradient image
d.
Mexican hat
Answer: (d).
Mexican hat
68.
Segmentation algorithms depend on intensity values'
a.
discontinuity
b.
similarity
c.
continuity
d.
Both a and b
Answer: (d).
Both a and b
69.
a.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (a).
first derivative
70.
a.
1 pixel
b.
2 pixels
c.
3 pixels
d.
4 pixels
Answer: (a).
1 pixel
71.
a.
[0 1]
b.
[0 2]
c.
[0 255]
d.
[0 256]
Answer: (a).
[0 1]
72.
a.
area
b.
line
c.
point
d.
edge
Answer: (d).
edge
73.
a.
processes
b.
images
c.
divisions
d.
sensors
Answer: (d).
sensors
74.
a.
connected set
b.
boundaries
c.
region
d.
concerned area
Answer: (c).
region
75.
a.
Gx
b.
Gy
c.
Gt
d.
Gs
Answer: (b).
Gy
76.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (b).
second derivative
77.
a.
sharpening
b.
segmentation
c.
edge finding
d.
recognition
Answer: (c).
edge finding
78.
a.
paused
b.
cleared
c.
continued
d.
stopped
Answer: (d).
stopped
79.
a.
low frequencies
b.
smooth changes
c.
abrupt changes
d.
contrast
Answer: (b).
smooth changes
80.
a.
sharpening
b.
set theory
c.
smoothing
d.
thresholding
Answer: (d).
thresholding
81.
a.
ramp edges
b.
step edges
c.
roof edges
d.
Both a and b
Answer: (c).
roof edges
82.
a.
laplacian of Gaussian
b.
length of Gaussian
c.
d.
Answer: (a).
laplacian of Gaussian
83.
a.
logical operations
b.
arithmetic operation
c.
vector operations
d.
array operations
Answer: (d).
array operations
84.
a.
distance
b.
length
c.
strength
d.
edge
Answer: (b).
length
85.
a.
b.
c.
positive
d.
negative
Answer: (b).
86.
Gradient image is formed by the component
a.
Gx
b.
Gy
c.
Gt
d.
Both a and b
Answer: (d).
Both a and b
87.
a.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (b).
second derivative
88.
Local averaging
a.
smooths image
b.
sharps image
c.
darkens image
d.
blurs image
Answer: (a).
smooths image
89.
a.
blur
b.
noisy
c.
clear
d.
Both a and b
Answer: (d).
Both a and b
90.
a.
ramp edges
b.
step edge
c.
roof edges
d.
thinness of edges
Answer: (c).
roof edges
91.
a.
intensity transition
b.
shape transition
c.
color transition
d.
sign transition
Answer: (d).
sign transition
92.
a.
1D mask
b.
2D mask
c.
3D mask
d.
4D mask
Answer: (b).
2D mask
93.
a.
discontinuity
b.
similarity
c.
continuity
d.
recognition
Answer: (b).
similarity
94.
Algorithm stating that boundaries of the image are different from background is
a.
discontinuity
b.
similarity
c.
extraction
d.
recognition
Answer: (a).
discontinuity
95.
a.
discontinuity
b.
similarity
c.
continuity
d.
zero crossing
Answer: (d).
zero crossing
96.
a.
2 points
b.
3 points
c.
4 points
d.
5 points
Answer: (b).
3 points
97.
a.
joint
b.
disjoint
c.
connected
d.
overlapped
Answer: (b).
disjoint
98.
a.
1x1
b.
2x2
c.
3x3
d.
5x5
Answer: (d).
5x5
99.
a.
orthogonal
b.
isolated
c.
isomorphic
d.
isotropic
Answer: (a).
orthogonal
100.
a.
nonzero
b.
zero
c.
positive
d.
negative
Answer: (a).
nonzero
101.
a.
same
b.
disjoint
c.
different
d.
overlapped
Answer: (c).
different
102.
If all lines oriented along the direction defined by the mask are to be found, then we use
a.
thick edges
b.
thin edges
c.
thresholding
d.
enhancement
Answer: (c).
thresholding
103.
a.
TRUE
b.
FALSE
c.
d.
Answer: (a).
TRUE
104.
a.
b.
30
c.
45
d.
90
Answer: (c).
45
105.
a.
abrupt changes
b.
smooths changes
c.
thickness of edges
d.
thinness of edges
Answer: (a).
abrupt changes
106.
a.
Division
b.
segmentation
c.
extraction
d.
recognition
Answer: (b).
segmentation
107.
a.
b.
c.
positive
d.
negative
Answer: (a).
108.
a.
0
b.
c.
positive
d.
negative
Answer: (a).
109.
a.
sharpening
b.
constant intensities
c.
smoothing
d.
contrast
Answer: (c).
smoothing
110.
a.
b.
c.
positive
d.
negative
Answer: (b).
0
111.
a.
ideal model
b.
step edge
c.
real model
d.
smoothing model
Answer: (b).
step edge
112.
a.
ramp
b.
step
c.
constant intensity
d.
edge
Answer: (c).
constant intensity
113.
a.
12
b.
128
c.
144
d.
256
Answer: (c).
144
114.
a.
b.
1
c.
positive
d.
negative
Answer: (a).
115.
a.
discontinuity
b.
constant intensities
c.
continuity
d.
zero crossing
Answer: (d).
zero crossing
116.
a.
infrared imaging
b.
x-ray imaging
c.
microwave imaging
d.
UV imaging
Answer: (a).
infrared imaging
117.
a.
noise
b.
thin lines
c.
edges
d.
Both a and b
Answer: (d).
Both a and b
118.
Whether edge pixels lie on the darker or brighter side of an edge can be determined by the
a.
b.
c.
d.
Both a and b
Answer: (b).
119.
a.
thick edges
b.
thin edges
c.
fine edges
d.
rough edges
Answer: (a).
thick edges
120.
a.
1st point
b.
2nd point
c.
3rd point
d.
4th point
Answer: (c).
3rd point
121.
a.
absolute values
b.
positive values
c.
negative values
d.
Both a and b
Answer: (b).
positive values
122.
a.
isolated
b.
tuned
c.
isomorphic
d.
isotropic
Answer: (b).
tuned
123.
a.
area
b.
points
c.
isolated point
d.
edge
Answer: (c).
isolated point
124.
a.
1st point
b.
2nd point
c.
3rd point
d.
4th point
Answer: (b).
2nd point
125.
a.
connected set
b.
empty set
c.
union
d.
complement
Answer: (b).
empty set
126.
a.
pixels
b.
directions
c.
intensities
d.
edges
Answer: (b).
directions
127.
a.
b.
[2 -1 -1; -1 2 -1; -1 -1 2]
c.
d.
Answer: (c).
128.
a.
sobel gradient
b.
Robert cross gradient
c.
cross gradient
d.
Answer: (b).
129.
a.
b.
[2 -1 -1; -1 2 -1; -1 -1 2]
c.
d.
Answer: (d).
130.
a.
b.
edge segment
c.
edge pixels
d.
edge normal
Answer: (d).
edge normal
131.
a.
spatial filters
b.
frequency filters
c.
low pass
d.
high pass
Answer: (a).
spatial filters
132.
a.
ramp
b.
step
c.
roof
d.
edges
Answer: (a).
ramp
133.
a.
discontinuity
b.
constant intensities
c.
continuity
d.
gradient
Answer: (d).
gradient
134.
a.
2 types
b.
3 types
c.
4 types
d.
5 types
Answer: (b).
3 types
135.
a.
discontinuity
b.
segmentation
c.
continuity
d.
edge detection
Answer: (d).
edge detection
136.
Laplacian detector is
a.
coupled
b.
isolated
c.
isomorphic
d.
isotropic
Answer: (d).
isotropic
137.
a.
directly proportional
b.
inversely proportional
c.
indirectly proportional
d.
exponentially proportional
Answer: (b).
inversely proportional
138.
a.
first derivative
b.
second derivative
c.
third derivative
d.
Both a and b
Answer: (b).
second derivative
139.
a.
quality
b.
size
c.
accuracy
d.
pixels
Answer: (c).
accuracy
140.
a.
thick edges
b.
thin edges
c.
fine edges
d.
rough edges
141.
a.
2 neighbors
b.
4 neighbors
c.
8 neighbors
d.
16 neighbors
Answer: (c).
8 neighbors
142.
a.
edges
b.
thick lines
c.
region
d.
points
Answer: (c).
region
143.
If all the regions are labeled with same intensity then it produces
a.
segmented effect
b.
regional effect
c.
unblocky effect
d.
blocky effect
Answer: (d).
blocky effect
144.
a.
pixels
b.
noise
c.
thickness
d.
thinness
Answer: (b).
noise
145.
a.
trivial
b.
non trivial
c.
illuminated
d.
low resolution
Answer: (b).
non trivial
146.
a.
nonzero
b.
zero
c.
positive
d.
negative
Answer: (a).
nonzero
147.
a.
pixels
b.
edges
c.
intensities
d.
Both a and b
Answer: (c).
intensities
148.
a.
differentiation
b.
derivation
c.
partial derivation
d.
integration
Answer: (c).
partial derivation
149.
a.
region growing
b.
region splitting
c.
extraction
d.
Both a and b
Answer: (d).
Both a and b
150.
a.
b.
c.
d.
-1
Answer: (d).
-1
151.
What is the Euler number of the region shown in the figure below?
a.
b.
-2
c.
-1
d.
Answer: (b).
-2
152.
a.
000030032232221211
b.
003010203310321032
c.
022332103210201330
d.
012302301023100321
Answer: (a).
000030032232221211
153.
a.
Perimeter
b.
Area
c.
Intensity
d.
Brightness
Answer: (b).
Area
154.
a.
Meter
b.
Meter2
c.
No units
d.
Meter-1
Answer: (c).
No units
155.
a.
Rectangle
b.
Square
c.
Irregular
d.
Disk
Answer: (d).
Disk
156.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
157.
a.
b.
c.
d.
Answer: (c).
158.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (b).
False
159.
What is the study of properties of a figure that are unaffected by any deformation?
a.
Topology
b.
Geography
c.
Statistics
d.
Deformation
Answer: (a).
Topology
160.
For which of the following operations on an image does the topology of the region change?
a.
Stretching
b.
Rotation
c.
Folding
d.
Answer: (c).
Folding
161.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
162.
What is the Euler number of a region with polygonal network containing V,Q and F as the number of
vertices, edges and faces respectively?
a.
V+Q+F
b.
V-Q+F
c.
V+Q-F
d.
V-Q-F
Answer: (b).
V-Q+F
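For concreteness, a minimal Python sketch of the Euler number formula above (the triangle example is an assumed illustration, not from the question bank):

    # Euler number of a polygonal network: E = V - Q + F
    # (V = vertices, Q = edges, F = faces).
    def euler_number(V, Q, F):
        return V - Q + F

    # Example: a single filled triangle has V = 3, Q = 3, F = 1, so E = 1.
    print(euler_number(3, 3, 1))   # -> 1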
163.
The texture of the region provides measure of which of the following properties?
a.
Smoothness alone
b.
Coarseness alone
c.
Regularity alone
d.
Answer: (d).
164.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
165.
a.
Structural
b.
Spectral
c.
Statistical
d.
Topological
Answer: (b).
Spectral
166.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
167.
Which of the following of a boundary is defined as the line perpendicular to the major axis?
a.
Equilateral axis
b.
Equidistant axis
c.
Minor axis
d.
Median axis
Answer: (c).
Minor axis
168.
Which of the following is the useful descriptor of a boundary, whose value is given by the ratio of length
of the major axis to the minor axis?
a.
Radius
b.
Perimeter
c.
Area
d.
Eccentricity
Answer: (d).
Eccentricity
169.
a.
b.
c.
Slope
d.
Answer: (b).
170.
If the boundary is traversed in the clockwise direction, a vertex point ‘p’ is said to be a part of the convex
segment if the rate of change of slope at ‘p’ is:
a.
Negative
b.
Zero
c.
Non negative
d.
Cannot be determined
Answer: (c).
Non negative
171.
A point ‘p’ is said to be corner point, if the change of slope is less than 10 degrees.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (b).
False
172.
Based on the 4-directional code, the first difference of smallest magnitude is called as:
a.
Shape number
b.
Chain number
c.
Difference
d.
Difference number
Answer: (a).
Shape number
173.
a.
Odd
b.
Even
c.
d.
Answer: (b).
Even
174.
What is the order of the shape number of a rectangular boundary with the dimensions of 3×3?
a.
b.
c.
9
d.
12
Answer: (d).
12
175.
Statistical moments are used to describe the shape of boundary segments quantitatively.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
176.
Which of the following techniques of boundary descriptions have the physical interpretation of
boundary shape?
a.
Fourier transform
b.
Statistical moments
c.
Laplace transform
d.
Curvature
Answer: (b).
Statistical moments
177.
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (b).
False
1.
a.
Quantization
b.
Sampling
c.
Contrast
d.
Dynamic range
Answer: (b).
Sampling
2.
What causes the effect of an imperceptible set of very fine ridge-like structures in areas of smooth gray levels?
a.
Caused by the use of an insufficient number of gray levels in smooth areas of a digital image
b.
Caused by the use of huge number of gray levels in smooth areas of a digital image
c.
d.
Answer: (a).
Caused by the use of an insufficient number of gray levels in smooth areas of a digital image
3.
What is the name of the effect caused by the use of an insufficient number of gray levels in smooth
areas of a digital image?
a.
Dynamic range
b.
Ridging
c.
Graininess
d.
False contouring
Answer: (d).
False contouring
4.
Using a rough rule of thumb, and assuming powers of 2 for convenience, what image size is about the smallest that can be expected to be reasonably free of objectionable sampling checkerboards and false contouring?
a.
b.
c.
d.
Answer: (b).
5.
What does a shift up and right in the isopreference curves simply mean? Verify in terms of N (number of pixels) and k (L = 2^k, where L is the number of gray levels) values.
a.
b.
c.
d.
Answer: (c).
6.
How does the curves behave to the detail in the image in isopreference curve?
a.
Curves tend to become more vertical as the detail in the image decreases
b.
Curves tend to become less vertical as the detail in the image increases
c.
Curves tend to become less vertical as the detail in the image decreases
d.
Curves tend to become more vertical as the detail in the image increases
Answer: (d).
Curves tend to become more vertical as the detail in the image increases
7.
For an image with a large amount of detail, if the value of N (number of pixels) is fixed then what is the
gray level dependency in the perceived quality of this type of image?
a.
b.
c.
d.
Answer: (b).
8.
a.
b.
c.
d.
Answer: (a).
9.
For a band-limited function, which theorem says that “if the function is sampled at a rate equal to or greater than twice its highest frequency, the original function can be recovered from its samples”?
a.
Band-limitation theorem
b.
c.
d.
Answer: (c).
10.
What is the name of the phenomenon that corrupts the sampled image, and how does it happen?
a.
b.
c.
d.
Answer: (c).
Aliasing, if the band-limited functions are undersampled
11.
a.
b.
c.
d.
Answer: (a).
12.
a.
b.
c.
d.
Answer: (a).
13.
In terms of Sampling and Quantization, Zooming and Shrinking may be viewed as ___________
a.
b.
c.
d.
Answer: (b).
14.
The two steps, creation of new pixel locations and assignment of gray levels to those new locations, are involved in ____________
a.
Shrinking
b.
Zooming
c.
d.
None of the mentioned
Answer: (b).
Zooming
15.
While zooming, in order to perform gray-level assignment for any point in the overlay, we assign to the new pixel in the grid the gray level of its closest pixel in the original image. What is this method of gray-level assignment called?
a.
Neighbor Duplication
b.
Duplication
c.
Nearest neighbor interpolation
d.
Answer: (c).
Nearest neighbor interpolation
16.
A special case of nearest neighbor Interpolation that just duplicates the pixels the number of times to
achieve the desired size, is known as ___________
a.
Bilinear Interpolation
b.
Contouring
c.
Ridging
d.
Pixel Replication
Answer: (d).
Pixel Replication
17.
a.
Aliasing effect
b.
c.
Ridging effect
d.
Checkerboard effect
Answer: (d).
Checkerboard effect
18.
a.
Assign gray level to the new pixel using its right neighbor
b.
Assign gray level to the new pixel using its left neighbor
c.
Assign gray level to the new pixel using its four nearest neighbors
d.
Assign gray level to the new pixel using its eight nearest neighbors
Answer: (c).
Assign gray level to the new pixel using its four nearest neighbors
19.
Row-column deletion method of Image Shrinking is an equivalent process to which method of Zooming?
a.
Bilinear Interpolation
b.
Contouring
c.
Pixel Replication
d.
Answer: (c).
Pixel Replication
20.
a.
Aliasing effect
b.
False contouring effect
c.
Ridging effect
d.
Checkerboard effect
Answer: (a).
Aliasing effect
21.
“For general-purpose zooming and shrinking of a digital image, bilinear interpolation is generally the method of choice over nearest-neighbor interpolation.”
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
22.
A pixel p at coordinates (x, y) has 4 neighbors whose coordinates are (x+1, y), (x-1, y), (x, y+1) and (x, y-1). This set of pixels is called the:
a.
4-neighbors of p
b.
Diagonal neighbors
c.
8-neighbors
d.
Answer: (a).
4-neighbors of p
23.
A pixel p at coordinates (x, y) has 4 diagonal neighbors whose coordinates are (x+1, y+1), (x+1, y-1), (x-1, y+1) and (x-1, y-1). This set of pixels is called the:
a.
4-neighbors of p
b.
Diagonal neighbors
c.
8-neighbors
d.
Answer: (b).
Diagonal neighbors
24.
The coordinates of the 8-neighbors of a pixel p at coordinates (x, y) are given by:
a.
(x+1, y), (x-1, y), (x, y+1), (x, y-1), (x+2, y), (x-2, y), (x, y+2), (x, y-2)
b.
(x+1, y), (x-1, y), (x, y+1), (x, y-1), (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
c.
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1), (x+2, y+2), (x+2, y-2), (x-2, y+2), (x-2, y-2)
d.
(x+2, y), (x-2, y), (x, y+2), (x, y-2), (x+2, y+2), (x+2, y-2), (x-2, y+2), (x-2, y-2)
Answer: (b).
(x+1, y), (x-1, y), (x, y+1), (x, y-1), (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
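As a quick reference for questions 22-24, a small Python sketch listing the 4-, diagonal- and 8-neighbors of a pixel (the helper names n4/nd/n8 are assumptions, not standard library functions):

    def n4(x, y):
        # 4-neighbors: horizontal and vertical neighbors of (x, y)
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    def nd(x, y):
        # diagonal neighbors of (x, y)
        return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    def n8(x, y):
        # 8-neighbors: union of the 4-neighbors and the diagonal neighbors
        return n4(x, y) + nd(x, y)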
25.
Two pixels p and q having gray values from V, the set of gray-level values used to define adjacency, are
m-adjacent if:
a.
q is in N4(p)
b.
q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V
c.
Any of the mentioned
d.
Answer: (c).
26.
a.
If for any pixel p in S, the set of pixels that are connected to it in S is only one
b.
c.
If S is a region
d.
Answer: (d).
27.
a.
If R is a region, and the set of pixels in R have one or more neighbors that are not in R
b.
If R is an entire image, then the set of pixels in the first and last rows and columns of R
c.
d.
Answer: (c).
28.
The domain that refers to the image plane itself and the domain that refers to the Fourier transform of an image are, respectively:
a.
b.
c.
d.
Answer: (c).
29.
What is the technique for a gray-level transformation function called, if the transformation would be to
produce an image of higher contrast than the original by darkening the levels below some gray-level m
and brightening the levels above m in the original image.
a.
Contouring
b.
Contrast stretching
c.
Mask processing
d.
Point processing
Answer: (b).
Contrast stretching
30.
a.
Contouring
b.
Contrast stretching
c.
Mask processing
d.
Answer: (c).
Mask processing
31.
Using gray-level transformation, the basic function linearity deals with which of the following
transformation?
a.
b.
c.
d.
Answer: (b).
32.
Using gray-level transformation, the basic function Logarithmic deals with which of the following
transformation?
a.
b.
c.
d.
Answer: (a).
Log and inverse-log transformations
33.
Using gray-level transformation, the basic function power-law deals with which of the following
transformation?
a.
b.
c.
d.
Answer: (b).
34.
If r be the gray-level of image before processing and s after processing then which expression defines
the negative transformation, for the gray-level in the range [0, L-1]?
a.
s=L–1–r
b.
c.
d.
Answer: (a).
s=L–1–r
35.
If r be the gray-level of image before processing and s after processing then which expression helps to
obtain the negative of an image for the gray-level in the range [0, L-1]?
a.
s=L–1–r
b.
c.
d.
Answer: (c).
36.
If r be the gray-level of image before processing and s after processing then which expression defines
the power-law transformation, for the gray-level in the range [0, L-1]?
a.
s=L–1–r
b.
c.
Answer: (b).
37.
Which of the following transformations is particularly well suited for enhancing an image with white and
gray detail embedded in dark regions of the image, especially when there is more black area in the
image.
a.
Log transformations
b.
Power-law transformations
c.
Negative transformations
d.
Answer: (c).
Negative transformations
38.
Which of the following transformations expands the value of dark pixels while the higher-level values
are being compressed?
a.
Log transformations
b.
Inverse-log transformations
c.
Negative transformations
d.
Answer: (a).
Log transformations
39.
Although power-law transformations are considered more versatile than log transformations for compressing gray levels in an image, how are log transformations advantageous over power-law transformations?
a.
b.
c.
d.
Answer: (a).
40.
For a typical Fourier spectrum with values ranging from 0 to 10^6, which of the following transformations is better to apply?
a.
Log transformations
b.
Power-law transformations
c.
Negative transformations
d.
Answer: (a).
Log transformations
41.
The power-law transformation is given as: s = crᵞ, c and ᵞ are positive constants, and r is the gray-level of
image before processing and s after processing. Then, for what value of c and ᵞ does power-law
transformation becomes identity transformation?
a.
c = 1 and ᵞ < 1
b.
c = 1 and ᵞ > 1
c.
c = -1 and ᵞ = 0
d.
c=ᵞ=1
Answer: (d).
c=ᵞ=1
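To tie the preceding questions on negative, log and power-law transformations together, a hedged NumPy sketch over an 8-bit gray-level range (the choice of c for the log transform and gamma = 0.4 are illustrative assumptions):

    import numpy as np

    L = 256                                    # 8-bit image: gray levels 0 .. L-1
    r = np.arange(L, dtype=np.float64)         # stand-in for the input gray levels

    s_negative = (L - 1) - r                               # s = L - 1 - r
    s_log = (L - 1) / np.log(L) * np.log(1 + r)            # s = c*log(1 + r), c scaled into [0, L-1]
    gamma = 0.4
    s_power = (L - 1) * (r / (L - 1)) ** gamma             # s = c*r^gamma; c = gamma = 1 is the identity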
42.
Which of the following transformations is used in cathode ray tube (CRT) devices?
a.
Log transformations
b.
Power-law transformations
c.
Negative transformations
d.
Answer: (b).
Power-law transformations
43.
a.
b.
c.
d.
Answer: (d).
None of the mentioned
44.
The power-law transformation is given as: s = crᵞ, c and ᵞ are positive constants, and r is the gray-level of
image before processing and s after processing. What happens if we increase the gamma value from 0.3
to 0.7?
a.
b.
c.
d.
Answer: (c).
45.
If h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels with gray level r_k, is a histogram over the gray-level range [0, L - 1], then how can we normalize the histogram?
a.
If each value of histogram is added by total number of pixels in image, say n, p(rk)=nk+n
b.
If each value of histogram is subtracted by total number of pixels in image, say n, p(rk)=nk-n
c.
If each value of histogram is multiplied by total number of pixels in image, say n, p(rk)=nk * n
d.
If each value of histogram is divided by total number of pixels in image, say n, p(rk)=nk / n
Answer: (d).
If each value of histogram is divided by total number of pixels in image, say n, p(rk)=nk / n
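A minimal NumPy sketch of that normalization (the random placeholder image is an assumption; any uint8 array works):

    import numpy as np

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
    n_k = np.bincount(img.ravel(), minlength=256)               # counts per gray level, h(r_k) = n_k
    p_rk = n_k / img.size                                       # p(r_k) = n_k / n
    assert np.isclose(p_rk.sum(), 1.0)                          # probabilities sum to 1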
46.
a.
b.
-1
c.
d.
Answer: (a).
47.
A low contrast image will have what kind of histogram when, the histogram, h(rk) = nk, rk the kth gray
level and nk total pixels with gray level rk, is plotted nk versus rk?
a.
The histogram that are concentrated on the dark side of gray scale
b.
The histogram whose component are biased toward high side of gray scale
c.
The histogram that is narrow and centered toward the middle of gray scale
d.
The histogram that covers wide range of gray scale and the distribution of pixel is approximately uniform
Answer: (c).
The histogram that is narrow and centered toward the middle of gray scale
48.
A bright image will have what kind of histogram, when the histogram, h(rk) = nk, rk the kth gray level
and nk total pixels with gray level rk, is plotted nk versus rk?
a.
The histogram that are concentrated on the dark side of gray scale
b.
The histogram whose component are biased toward high side of gray scale
c.
The histogram that is narrow and centered toward the middle of gray scale
d.
The histogram that covers wide range of gray scale and the distribution of pixel is approximately uniform
Answer: (b).
The histogram whose component are biased toward high side of gray scale
49.
The transformation s = T(r) producing a gray level s for each pixel value r of input image. Then, if the T(r)
is single valued in interval 0 ≤ r ≤ 1, what does it signifies?
a.
b.
c.
It guarantees that the output gray level and the input gray level will be in same range
d.
Answer: (a).
50.
The transformation s = T(r) producing a gray level s for each pixel value r of input image. Then, if the T(r)
is monotonically increasing in interval 0 ≤ r ≤ 1, what does it signifies?
a.
b.
c.
It guarantees that the output gray level and the input gray level will be in same range
d.
Answer: (b).
51.
The transformation s = T(r) producing a gray level s for each pixel value r of input image. Then, if the T(r)
is satisfying 0 ≤ T(r) ≤ 1 in interval 0 ≤ r ≤ 1, what does it signifies?
a.
c.
It guarantees that the output gray level and the input gray level will be in same range
d.
Answer: (c).
It guarantees that the output gray level and the input gray level will be in same range
52.
What is the full form for PDF, a fundamental descriptor of random variables i.e. gray values in an image?
a.
b.
c.
d.
Answer: (d).
53.
a.
Cumulative density function
b.
c.
d.
Answer: (c).
54.
For the transformation T(r) = ∫_0^r p_r(w) dw, where r is the gray value of the input image, p_r(r) is the PDF of the random variable r, and w is a dummy variable of integration: if the PDF is always positive and the function under the integral gives the area under the function, the transformation is said to be __________
a.
Single valued
b.
Monotonically increasing
c.
d.
Answer: (c).
55.
The transformation T(r_k) = ∑_{j=0}^{k} n_j / n, k = 0, 1, 2, …, L-1, where L is the number of possible gray levels and r_k is the kth gray level, is called _______
a.
Histogram linearization
b.
Histogram equalization
c.
d.
Answer: (c).
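A short NumPy sketch of histogram equalization built from that cumulative sum (scaling by L-1 to map back to gray levels is an implementation assumption):

    import numpy as np

    def equalize(img, L=256):
        # T(r_k) = sum_{j=0..k} n_j / n, then scale to [0, L-1] and use as a lookup table
        n_k = np.bincount(img.ravel(), minlength=L)
        cdf = np.cumsum(n_k) / img.size
        lut = np.round((L - 1) * cdf).astype(np.uint8)
        return lut[img]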
56.
If the histogram of same images, with different contrast, are different, then what is the relation
between the histogram equalized images?
a.
b.
c.
They look visually different from one another just like the input images
d.
Answer: (b).
They look visually very similar to one another
57.
The technique of Enhancement that has a specified Histogram processed image as result, is called?
a.
Histogram Linearization
b.
Histogram Equalization
c.
Histogram Matching
d.
Answer: (c).
Histogram Matching
58.
In Histogram Matching r and z are gray level of input and output image and p stands for PDF, then, what
does pz(z) stands for?
a.
b.
c.
d.
59.
Inverse transformation plays an important role in which of the following Histogram processing
Techniques?
a.
Histogram Linearization
b.
Histogram Equalization
c.
Histogram Matching
d.
Answer: (c).
Histogram Matching
60.
In Histogram Matching or Specification, z = G^-1[T(r)], where r and z are the gray levels of the input and output images and T and G are transformations. To ensure that G^-1 is single-valued and monotonic, which of the following is/are required?
a.
b.
c.
d.
None of the mentioned
Answer: (a).
61.
a.
Histogram Linearization
b.
Histogram Specification
c.
Histogram Matching
d.
Answer: (d).
62.
What happens to the output image when global Histogram equalization method is applied on smooth
and noisy area of an image?
a.
b.
d.
Answer: (a).
63.
a.
b.
c.
d.
Answer: (b).
64.
a.
b.
1
c.
-1
d.
Answer: (a).
65.
For a local enhancement using mean and variance, what happens if the lowest value of contrast is not
restricted as per the willingness of acceptance of value?
a.
b.
Enhancement will occur for areas with standard deviation value > 1
c.
d.
Enhancement will occur for areas with standard deviation value > 0 and < 1
Answer: (c).
66.
Logic operations between two or more images are performed on pixel-by-pixel basis, except for one that
is performed on a single image. Which one is that?
a.
AND
b.
OR
c.
NOT
d.
Answer: (c).
NOT
67.
a.
AND
b.
OR
c.
NOT
d.
Answer: (d).
68.
While implementing logic operation on gray-scale images, the processing of pixel values is done as
__________
a.
b.
c.
d.
Answer: (c).
69.
What is the equivalent for a black, 8-bit pixel to be processed under logic operation on gray scale image?
a.
A string: 00000000
b.
A string: 11111111
c.
A string: 10000000
d.
A string: 01111111
Answer: (a).
A string: 00000000
70.
Which of the following operation(s) is/are equivalent to negative transformation?
a.
AND
b.
OR
c.
NOT
d.
Answer: (c).
NOT
71.
a.
AND, OR
b.
AND, NOT
c.
NOT, OR
d.
NOT, OR
72.
Two images having one pixel gray value 01010100 and 00000101 at the same location, are operated
against AND operator. What would be the resultant pixel gray value at that location in the enhanced
image?
a.
10100100
b.
11111011
c.
00000100
d.
01010101
Answer: (c).
00000100
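The result can be checked directly in Python (a two-line verification of the worked example above):

    a, b = 0b01010100, 0b00000101        # the two 8-bit gray values
    print(format(a & b, '08b'))          # -> 00000100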
73.
Which of the following arithmetic operator is primarily used as a masking operator in enhancement?
a.
Addition
b.
Subtraction
c.
Multiplication
d.
Division
Answer: (c).
Multiplication
74.
Which of the following is/are more commercially successful image enhancement method in mask mode
radiography, an area under medical imaging?
a.
Addition
b.
Subtraction
c.
Multiplication
d.
Division
Answer: (b).
Subtraction
75.
The subtraction operation results in areas that appear as dark shades of gray. Why?
a.
Because the difference in such areas is little, that yields low value
b.
Because the difference in such areas is high, that yields low value
c.
Because the difference in such areas is high, that yields high value
d.
Answer: (a).
Because the difference in such areas is little, that yields low value
76.
If the images are displayed using 8-bits, then, what is the range of the value of an image if the image is a
result of subtraction operation?
a.
0 to 255
b.
0 to 511
c.
-255 to 0
d.
-255 to 255
Answer: (d).
-255 to 255
77.
The subtracted image needs to be scaled, if 8-bit channel is used to display the subtracted images. So,
the method of adding 255 to each pixel and then dividing by 2, has certain limits. What is/are those
limits?
a.
b.
d.
Answer: (c).
78.
Which of the following is/are the fundamental factors that need tight control for difference based
inspection work?
a.
Proper registration
b.
c.
Noise levels should be low enough so that the variation due to noise won’t affect the difference value
much
d.
Answer: (d).
79.
a.
Their covariance is 0
b.
Their covariance is 1
c.
Their covariance is -1
d.
Answer: (a).
Their covariance is 0
80.
In Image Averaging enhancement method assumptions are made for a noisy image g(x, y). What is/are
those?
a.
b.
c.
d.
Answer: (d).
81.
The standard deviation ‘σ’ at any point in image averaging: σḡ(x, y) = 1/√K σɳ(x, y), where ḡ(x, y) is the
average image formed by averaging K different noisy images and ɳ(x, y) is the noise added to an original
image f(x, y). What is the relation between K and the variability of the pixel values at each location (x, y)?
a.
b.
c.
d.
Answer: (a).
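A small NumPy sketch illustrating the 1/sqrt(K) relation with synthetic Gaussian noise (the image size, noise level and K = 16 are assumptions for the demonstration):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.full((128, 128), 100.0)                      # noiseless image f(x, y)
    K = 16
    noisy = [f + rng.normal(0, 10, f.shape) for _ in range(K)]
    g_bar = np.mean(noisy, axis=0)                      # averaged image
    print(np.std(noisy[0] - f), np.std(g_bar - f))      # roughly 10 versus 10/sqrt(16) = 2.5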
82.
a.
Isotropic filters
b.
Box filters
c.
Median filter
d.
Answer: (a).
Isotropic filters
83.
In isotropic filtering, which of the following is/are the simplest isotropic derivative operator?
a.
Laplacian
b.
Gradient
c.
d.
Answer: (a).
Laplacian
84.
a.
Nonlinear operator
b.
Order-Statistic operator
c.
Linear operator
d.
Linear operator
85.
The Laplacian ∇^2 f=[f(x + 1, y) + f(x – 1, y) + f(x, y + 1) + f(x, y – 1) – 4f(x, y)], gives an isotropic result for
rotations in increment by what degree?
a.
90 degree
b.
0 degree
c.
45 degree
d.
Answer: (a).
90 degree
86.
The Laplacian incorporated with diagonal directions, i.e. ∇^2 f=[f(x + 1, y) + f(x – 1, y) + f(x, y + 1) + f(x, y
– 1) – 8f(x, y)], gives an isotropic result for rotations in increment by what degree?
a.
90 degree
b.
0 degree
c.
45 degree
d.
None of the mentioned
Answer: (c).
45 degree
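For reference, a hedged SciPy/NumPy sketch of the two Laplacian masks discussed in the two questions above (the random test image and the subtraction-based sharpening are illustrative assumptions):

    import numpy as np
    from scipy.ndimage import convolve

    lap4 = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=float)     # without diagonal neighbors
    lap8 = np.array([[1,  1, 1],
                     [1, -8, 1],
                     [1,  1, 1]], dtype=float)     # with diagonal neighbors
    img = np.random.rand(64, 64)
    sharpened = img - convolve(img, lap8)          # subtract because the center coefficient is negative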
87.
a.
b.
c.
d.
Answer: (c).
88.
Applying the Laplacian produces an image with a featureless background; the background is recovered, while maintaining the sharpening effect of the Laplacian, by either adding the result to or subtracting it from the original image, depending upon the Laplacian definition used. Which of the following is true based on the above statement?
a.
b.
c.
If definition used has a negative center coefficient, then addition is done
d.
Answer: (a).
89.
A mask of size 3*3 is formed using Laplacian including diagonal neighbors that has central coefficient as
9. Then, what would be the central coefficient of same mask if it is made without diagonal neighbors?
a.
5
b.
-5
c.
d.
-8
Answer: (a).
90.
Which of the following mask(s) is/are used to sharpen images by subtracting a blurred version of original
image from the original image itself?
a.
Unsharp mask
b.
High-boost filter
c.
Both a and b
d.
Answer: (c).
Both a and b
91.
Which of the following gives an expression for high boost filtered image fhb, if f represents an image, f
blurred version of f, fs unsharp mask filtered image and A ≥ 1?
a.
b.
c.
d.
Answer: (d).
92.
If we use a Laplacian to obtain sharp image for unsharp mask filtered image fs(x, y) of f(x, y) as input
image, and if the center coefficient of the Laplacian mask is negative then, which of the following
expression gives the high boost filtered image fhb, if ∇^2 f represent Laplacian?
a.
b.
c.
d.
Answer: (a).
93.
“For very large value of A, a high boost filtered image is approximately equal to the original image”.
State whether the statement is true or false?
a.
True
b.
False
c.
May be
d.
Can't Say
Answer: (a).
True
94.
a.
Unsharp masking
b.
Box filter
c.
Median filter
d.
Answer: (a).
Unsharp masking
95.
A First derivative in image processing is implemented using which of the following given operator(s)?
a.
b.
The Laplacian
c.
d.
96.
What is the sum of the coefficient of the mask defined using gradient?
a.
b.
-1
c.
d.
Answer: (c).
97.
a.
b.
c.
d.
Answer: (c).
98.
Gradient have some important features. Which of the following is/are some of them?
a.
b.
c.
d.
Answer: (c).
99.
An image has significant edge details. Which of the following fact(s) is/are true for the gradient image
and the Laplacian image of the same?
a.
b.
c.
Both the gradient image and the Laplacian image has equal values
d.
None of the mentioned
Answer: (b).
100.
a.
b.
c.
d.
Answer: (c).
101.
Assuming that the origin of F(u, v), the Fourier transform of an input image f(x, y), has been centered by performing the operation f(x, y)(-1)^(x+y) prior to taking the transform of the image: if F and f are of the same size, what is the given operation supposed to do?
a.
b.
c.
Shifts the center transform
d.
Answer: (c).
102.
Assuming that the origin of F(u, v), the Fourier transform of an input image f(x, y), has been centered by performing the operation f(x, y)(-1)^(x+y) prior to taking the transform of the image: if F and f are of the same size M*N, where does the point (u, v) = (0, 0) shift to?
a.
(M -1, N -1)
b.
(M/2, N/2)
c.
(M+1, N+1)
d.
(0, 0)
Answer: (b).
(M/2, N/2)
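A small NumPy check of that centering property (the 8x8 random array is an assumption; the identity holds for even-sized images):

    import numpy as np

    f = np.random.rand(8, 8)
    M, N = f.shape
    x, y = np.indices((M, N))
    F_centered = np.fft.fft2(f * (-1.0) ** (x + y))    # multiply by (-1)^(x+y), then transform
    F_shifted = np.fft.fftshift(np.fft.fft2(f))        # equivalent post-transform shift
    print(np.allclose(F_centered, F_shifted))          # -> True: (0, 0) moves to (M/2, N/2)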
103.
Assuming that the origin of F(u, v), the Fourier transform of an input image f(x, y), has been centered by performing the operation f(x, y)(-1)^(x+y) prior to taking the transform of the image: if F and f are of the same size M*N, which of the following is an expression for H(u, v), the filter used for implementing the Laplacian in the frequency domain?
a.
H(u, v)= -(u^2+ v^2)
b.
c.
d.
Answer: (c).
104.
Computing the Fourier transform of the Laplacian result in spatial domain is equivalent to multiplying
the F(u, v), Fourier transformed function of f(x, y) an input image, and H(u, v), the filter used for
implementing Laplacian in frequency domain. This dual relationship is expressed as _________
a.
b.
Laplacian
c.
Gradient
d.
Answer: (a).
105.
An enhanced image can be obtained as: g(x,y)=f(x,y)-∇^2 f(x,y), where Laplacian is being subtracted from
f(x, y) the input image. What does this conclude?
a.
b.
c.
d.
Answer: (c).
106.
a.
b.
c.
d.
Answer: (c).
107.
Which of the following facts is true for masks that include diagonal neighbors compared with masks that do not?
a.
Mask that excludes diagonal neighbors has more sharpness than the masks that doesn’t
b.
Mask that includes diagonal neighbors has more sharpness than the masks that doesn’t
c.
d.
Answer: (b).
Mask that includes diagonal neighbors has more sharpness than the masks that doesn’t
1.
a.
up sampling
b.
filtering
c.
d.
prototype
Answer: (d).
prototype
2.
a.
low resolution
b.
high resolution
c.
intensity
d.
blurred portion
Answer: (b).
high resolution
3.
a.
approximation
b.
vertical detail
c.
horizontal detail
d.
diagonal detail
Answer: (c).
horizontal detail
4.
a.
b.
c.
d.
Answer: (b).
5.
a.
scaling coefficient
b.
detail coefficient
c.
span coefficient
d.
Both a and b
Answer: (d).
Both a and b
6.
a.
b.
c.
d.
Answer: (a).
7.
a.
b.
c.
Digital signal processed
d.
Answer: (a).
8.
a.
heights
b.
sharpness
c.
intensity
d.
weights
Answer: (d).
weights
9.
a.
modulating equation
b.
FIR filter
c.
dilation equation
d.
span equation
Answer: (c).
dilation equation
10.
a.
lower scales
b.
higher scales
c.
mid scales
d.
intense scales
Answer: (b).
higher scales
11.
a.
2 steps
b.
3 steps
c.
4 steps
d.
5 steps
Answer: (b).
3 steps
12.
a.
h5(n) = (-1)^n h1(n)
b.
h5(n) = h1(n)
c.
h5(n) = (-1)^n
d.
h(n) = (-1)^n h1(n)
Answer: (a).
h5(n) = (-1)^n h1(n)
13.
co efficient
b.
multipliers
c.
subtractors
d.
filter coefficients
Answer: (d).
filter coefficients
14.
a.
N-1 x N-1
b.
N+1 x N-1
c.
N-1 x N
d.
NxN
Answer: (d).
NxN
15.
MRA stands for
a.
Multiresolution analysis
b.
Multiresolution assembly
c.
Multiresemble analysis
d.
Multiresemble assembly
Answer: (a).
Multiresolution analysis
16.
a.
up sampling
b.
filtering
c.
down sampling
d.
blurring
Answer: (c).
down sampling
17.
Images are
a.
1D arrays
b.
2D arrays
c.
3D arrays
d.
4D arrays
Answer: (b).
2D arrays
18.
a.
1 FIR filter
b.
2 FIR filters
c.
3 FIR filters
d.
4 FIR filters
Answer: (b).
2 FIR filters
19.
a.
b.
c.
d.
Answer: (d).
20.
a.
histogram
b.
pyramids
c.
mean pyramids
d.
equalized histogram
Answer: (c).
mean pyramids
21.
a.
b.
c.
d.
Answer: (b).
22.
a.
tiles
b.
blocks
c.
squares
d.
circles
Answer: (a).
tiles
23.
a.
b.
c.
d.
Answer: (b).
24.
a.
low coding
b.
high coding
c.
intense coding
d.
subband coding
Answer: (d).
subband coding
25.
a.
sharp details
b.
finer details
c.
blur details
d.
edge details
Answer: (b).
finer details
26.
a.
low resolution
b.
high resolution
c.
intense
d.
blurred
Answer: (a).
low resolution
27.
a.
histogram
b.
pyramids
c.
mean pyramids
d.
haar function
Answer: (d).
haar function
28.
a.
b.
complex conjugate operation
c.
d.
Answer: (b).
29.
a.
low resolution
b.
high resolution
c.
intensity
d.
blurred portion
Answer: (a).
low resolution
30.
a.
histogram
b.
image pyramid
c.
local histogram
d.
equalized histogram
Answer: (b).
image pyramid
31.
a.
increases
b.
remain same
c.
decreases
d.
blurred
Answer: (c).
decreases
32.
pentagonal
b.
square
c.
orthogonal
d.
oval
Answer: (c).
orthogonal
33.
a.
approximation
b.
vertical detail
c.
horizontal detail
d.
diagonal detail
Answer: (d).
diagonal detail
34.
CWT stands for
a.
b.
c.
d.
Answer: (c).
35.
a.
data processing
b.
information processing
c.
erosion
d.
dilation
Answer: (b).
information processing
36.
a.
pentagonal
b.
square
c.
orthogonal
d.
oval
Answer: (c).
orthogonal
37.
a.
low resolution
b.
high resolution
c.
intense
d.
blurred
Answer: (b).
high resolution
38.
a.
histogram
b.
pyramids
c.
local histogram
d.
equalized histogram
Answer: (c).
local histogram
39.
a.
T = HFH^T
b.
T = HFH
c.
T = HFT
d.
T = HT
Answer: (a).
T = HFH^T
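A minimal NumPy sketch of the separable transform T = HFH^T using the 2x2 normalized Haar matrix (the sample block F is an assumed example):

    import numpy as np

    H = (1 / np.sqrt(2)) * np.array([[1,  1],
                                     [1, -1]])     # 2x2 Haar transformation matrix
    F = np.array([[10.0, 12.0],
                  [14.0, 16.0]])                   # a 2x2 image block
    T = H @ F @ H.T                                # forward transform T = HFH^T
    F_back = H.T @ T @ H                           # H is orthogonal, so this recovers F
    print(np.allclose(F, F_back))                  # -> True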
40.
a.
arbitrary precision
b.
filtering
c.
down sampling
d.
prototype
Answer: (a).
arbitrary precision
41.
a.
unit delay
b.
multiplier
c.
subtractor
d.
adder
Answer: (c).
subtractor
42.
a.
scaling function
b.
shaping function
c.
down sampling
d.
blurring
Answer: (a).
scaling function
43.
a.
open span
b.
fully span
c.
closed span
d.
span
Answer: (c).
closed span
44.
No filtering produces
a.
Gaussian pyramids
b.
pyramids
c.
mean pyramids
d.
subsampling pyramids
Answer: (d).
subsampling pyramids
45.
a.
modulation
b.
multiplier
c.
cross modulation
d.
subband coding
Answer: (c).
cross modulation
46.
a.
j levels
b.
j-1 levels
c.
j+1 levels
d.
n levels
Answer: (a).
j levels
47.
a.
approximation
b.
vertical detail
c.
horizontal detail
d.
diagonal detail
Answer: (b).
vertical detail
48.
a.
approximation
b.
vertical detail
c.
horizontal detail
d.
diagonal detail
Answer: (a).
approximation
49.
a.
segment image
b.
reconstruct image
c.
blur image
d.
sharpened image
Answer: (b).
reconstruct image
50.
a.
Gaussian pyramids
b.
pyramids
c.
mean pyramids
d.
equalized histogram
Answer: (a).
Gaussian pyramids
1.) The type of interpolation where the intensities of the 4 neighboring pixels are used to obtain the intensity at a new location is called ___________
a) cubic interpolation
b) nearest-neighbor interpolation
c) bilinear interpolation
d) bicubic interpolation
Answer is C)
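A minimal Python sketch of bilinear interpolation from the 4 surrounding pixels (the clamping at the image border and the row-major img[y, x] indexing are implementation assumptions):

    import numpy as np

    def bilinear(img, x, y):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        a, b = x - x0, y - y0                      # fractional offsets
        return ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x1]
                + (1 - a) * b * img[y1, x0] + a * b * img[y1, x1])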
a) audio
b) sound
c) sunlight
d) ultraviolet
Answer is b)
a) edges
b) slices
c) boundaries
d) illumination
Answer is b)
b) astronomical observations
c) industry
d) lithography
Answer is b)
5.) How many different frames are required for analyzing a 3D image?
a) 5
b) 4
c) 6
d) 7
Answer is a)
6.) If f(x,y) is an image function of two variables, then the first-order derivative of the one-dimensional function f(x) is:
a) f(x+1)-f(x)
b) f(x)-f(x+1)
c) f(x-1)-f(x+1)
d) f(x)+f(x-1)
Answer is a)
b) acoustic
c) ultrasonic
d) electronic
Answer is a)
a) visible
b) gamma
c) x-rays
d) ultraviolet
Answer is c)
Answer: The Camera coordinate frame is used to relate the objects wrt the camera.
b) The largest division of the CCD array is also known as the pixel.
c) both a) and b)
d) None of the above
Answer is a)
b) Gaussian Transform
d) Power-law Transformation
Answer is c)
a) 256 X 256
b) 512 X 512
c) 1920 X 1080
d) 1080 X 1080
Answer is b)
a) Y=f/z
b) Y= z/f
d) Y = -F/Z
Answer is d)
14.) What is the formula for calculating the total number of combinations of bits per pixel?
a) (2)^bp
b)(2)^xyz
d)(2)^bpp
Answer is d)
15.) Which method is used to generate a processed image that has a specified histogram?
a) linearization of a histogram
b) equalization of a histogram
c) Histogram matching (specification)
d) Histogram processing
Answer is c)
Answer is b)
17.) The white color in the image processing can be calculated as…
Answer is a)
18.)In 4-neighbors of a pixel p, how far are each of the neighbors located from p?
c) alternating pixels
Answer is a)
b) Number of columns
d) all of these
Answer is d)
20). While zooming, in order to perform a gray-level assignment for any point in the overlay, we assign to the new pixel in the grid the grey level of its closest pixel in the original image. What’s this method of grey-level assignment called?
a) Neighbor Duplication
b) Duplication
Answer is d)
21.)The transformation s = T(r) produces a gray level s for each pixel value r of the input image. Then, if
the T(r) is monotonically increasing in interval 0 ≤ r ≤ 1, what does it signify?
b) It is needed to restrict the production of some inverted gray levels in the output
c) It ensures that the gray output level and the input gray level will be in the same range
Answer is b)
22.) The transformation formula T(r_k) = ∑_{j=0}^{k} n_j / n, k = 0, 1, 2, …, L-1, where L is the maximum gray value possible and r_k is the kth gray level, is called _______
a) Histogram linearization
b) Histogram equalization
Answer is c)
Answer is b)
a) 0.52-0.70
b) 0.52-0.62
c) 0.53-0.60
d) 0.52-0.60
Answer is d)
a) gamma rays
b) x-rays
d) ultraviolet
Answer is c)
Spatial coordinates
Two-dimensional function
Image elements
Plane coordinates
Answer is C)
2.) Identify the secondary colors of light.
Cyan, magenta
Magenta, Yellow
Answer is C)
3.) For a continuous image f(x, y), how is sampling defined?
Answer is A)
4.) What are the basic quantities that are used to describe the quality of a chromatic light source?
Answer is C)
Answer is A)
Quantization
Sampling
Zooming
Shrinking
Answer is B)
7.) D_e, the Euclidean distance between the pixels p and q with coordinates (x, y) and (s, t), is
│x-s│ + │y-t│
Max{│x-s│, │y-t│}
Min{│x-s│, │y-t│}
{│x-s│² + │y-t│²}^(1/2)
Answer is D)
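A short Python sketch of the three common distance measures for pixels p and q (the (0,0)-(3,4) example is an assumed illustration):

    def d4(p, q):                                   # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):                                   # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    def de(p, q):                                   # Euclidean distance
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    print(d4((0, 0), (3, 4)), d8((0, 0), (3, 4)), de((0, 0), (3, 4)))   # -> 7 4 5.0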
both a) and b)
Answer is C)
9.) Digital image processing works with ………… that performs operations on a digital image.
a analog system
a digital system
both a) and b)
Answer is B)
Human Voice
Digital system
both a) and b)
Answer is A)
Johann Zahn
Muslim scientist
Ibn al-Haitham
Answer is A)
1816
1814
1820
1864
Answer is B)
bath a) and b)
Null
Answer is A)
Answer is A)
15.) Images quantized with insufficient brightness levels lead to the occurrence of ____________
Pixillation
Blurring
False Contours
Answer is C)
16.) The way the human visual system structures its low-level representations is known as
Perceptual organization.
Pragmatic modeling
Pragmatic matching
Visualization.
Answer is A)
Euler
Prewitt
Marr
Gaussian
Answer is C)
Answer is A)
35mm
40mm
70mm
Answer is A)
a) (255,0,1)
b) (255,0,0)
c) (255,1,0)
d) none of these
Answer is B)
Answer: The Leica and argus are the two analog cameras that were developed in 1925 and in 1939,
respectively.
a) 1981
b) 1982
c) 1864
Answer is A)
a) Medical field
Answer is A)
Answer is D)
processing?
A. radar
B. medicines
C. lens enhancement
D. medical diagnoses
A. high level
B. last level
C. low level
D. mid level
A. acoustic
B. mecatronic
C. ultrasonic
D. electronic
Q4. Which of the following colors has the largest frequency in the visible spectrum?
A. violet
B. blue
C. green
D. red
A. color enhancement
B. spatial enhancement
C. detection
D. frequency enhancement
A. chemistry
B. chemicals
C. neurobiology
D. medicines
Q7. Which of the following is the first fundamental step in image processing?
A. filtration
B. image restoration
C. image enhancement
D. image acquisition
A. Intensity
B. Hue
C. Brightness
D. Intensity
Q9. What is the full form of JPEG?
A. UV Rays
B. Radio Waves
C. Gamma Rays
D. Microwaves
Q13. Which of the following filter is used to find the brightest point in the image?
A. Max filter
B. Mean filter
C. Median filter
A. Membership
B. Maturity
C. Generic Element
C. Source of nutrition
Q16. The color image processing is basically divided into ..... categories.
A. 3
B. 4
C. 2
D. 5
Q17. Which of the following equation is used for calculating B value in terms of HSI
components?
A. B=I(1+S)
B. B=I(1-S)
C. B=S(1-I)
D. B=S(1+I)
Q18. Which of the following color models are used for color printing?
A. CMY
B. CMYK
C. RGB
D. Both A & B
A. Contrast
B. Quantization
C. Sampling
D. Dynamic range
sampling
quantization
framing
Both A and B
Answer
image enhancement
image decompression
image contrast
image equalization
Answer
pixels
matrix
frames
coordinates
Answer
encoder
decoder
frames
Both A and B
Answer
data
meaningful data
raw data
Both A and B
Answer
pixels
matrix
intensity
coordinates
Answer
code word
word
byte
nibble
Answer
pixels
matrix
frames
intensity
Answer
reversible
irreversible
temporal
facsimile
Answer
always occur
no probability
normalization
MCQ: Digitizing the image intensity amplitude is called
sampling
quantization
framing
Both A and B
Answer
image enhancement
image decompression
image contrast
image equalization
Answer
pixels
matrix
frames
coordinates
Answer
MCQ: Image compression comprised of
encoder
decoder
frames
Both A and B
Answer
data
meaningful data
raw data
Both A and B
Answer
byte
Both A and B
Answer
matrix
frames
shape
Answer
fast
female
feminine
facsimile
Answer
markov
zero source
Both A and B
Answer
sampling
quantization
entropy
normalization
Answer
good
fair
bad
excellent
Answer
no of levels
length
no of intensity levels
low quality
Answer
storage
bandwidth
money
Both A and B
Answer
storage
word
code
nibble
rows
column
level
intensity
Answer
rows
column
level
intensity
Answer
low definition
high definition
enhanced
low quality
Answer
low definition
high definition
intensity
coordinates
Answer
MCQ: Histogram equalization refers to image
sampling
quantization
framing
normalization
Answer
low
high
visible
invisible
Answer
good
fair
bad
excellent
Answer
MCQ: DVD stands for
Answer
sampling
quantization
framing
Both A and B
Answer
zero-memory source
nonzero-memory source
zero source
memory source
Answer
MCQ: If the pixels can not be reconstructed without error mapping is said to be
reversible
irreversible
temporal
facsimile
Answer
image enhancement
image compression
image decompression
image equalization
Answer
coding redundancy
spatial redundancy
temporal redundancy
both b and c
Answer
spatial redundancy
temporal redundancy
irrelevant info
Answer
56kbps
64kbps
72kbps
24kbps
Answer
markov
fidelity criteria
noiseless theorem
Answer
pixels
matrix
frames
noise
Answer
coding redundancy
spatial redundancy
temporal redundancy
irrelevant info
Answer
complex ratio
compression ratio
constant
condition
Answer
redundant data
meaningful data
raw data
Both A and B
Answer
10
20
25
30
Answer
image enhancement
image compression
image contrast
image equalization
Answer
coding
spatial
temporal
facsimile
Answer
1-(1/c)
1+(1/c)
1-(-1/c)
(1/c)
Answer
mapping
image compression
image watermarking
image equalization
Answer
data
superfluous data
information
meaningful data
Answer
image enhancement
image compression
image watermarking
image equalization
Answer
coding theorem
noiseless theorem
Answer
encoding
decoding
framing
Both A and B
Answer
MCQ: Encoder is used for
image enhancement
image compression
image decompression
image equalization
Answer
purification
industry
radar
MRI
Answer
MCQ: In the expression s = T(r), with r in the range 0 ≤ r ≤ L-1, T(r) is the
Answer
MCQ: High pass filters promotes
all components
Answer
alpha correction
gamma correction
beta correction
pixel correction
Answer
nth power
nth log
inverse log
identity
Answer
frequency domain
algebraic domain
Both A and B
Answer
MCQ: Process that expands the range of intensity levels in image is called
linear stretching
contrast stretching
color stretching
elastic stretching
Answer
intensity domain
frequency domain
spatial domain
undefined domain
Answer
Answer
x-rays
alpha
beta
gamma
Answer D
bandpass filter
Answer
transformation vector
transformation theorem
transformation function
Answer
CRT devices
audio devices
radio
turbines
Answer
MCQ: In bit plane slicing the most of the information of an image is contained by
all planes
Answer
binary image
high quality image
enhanced image
Answer
spatial masks
kernels
templates
Answer
100
Answer
dual valued
single valued
multi valued
running sum
Answer
histogram enhancement
histogram normalization
histogram equalization
histogram matching
Answer
s = clog(r)
s = clog(1+r)
s = clog(2+r)
s = log(1+r)
Answer
Answer
all components
Answer
alpha transformation
beta transformation
gamma transformation
intensity transformation
Answer
6 planes
7 planes
8 planes
9 planes
Answer
correlation
convolution
histogram equalization
Both A and B
Answer
nth power
log
inverse log
identity
Answer
lithography
microwave
radar
detecting blockages
Answer
MCQ: In the formula s = clog(1+r), r ranges
r >= 0
r >= 1
0 >= r
1 >= r
Answer
aortic angiogram
radar
contrast stretching
MRI
Answer
1x1
2x2
3x3
4x4
MCQ: The values of pixels in image processing are related by the expression
s = Tr
s = (r)
r = T(s)
T = sr
Answer
s = L-1
s = 1-r
s = L-1-r
s = L-r
Answer
decreasing
increasing
positive
negative
Answer
Answer
right bottom
left bottom
MCQ: Process of manipulating the digital image to make results more suitable is called
manipulation
improvement
enhancement
degradation
Answer
spatial filter
Answer
linear stretching
contrast stretching
color stretching
elastic stretching
Answer
nth power
nth log
inverse log
identity
Answer
pixels slicing
color slicing
Answer
MCQ: The principle tools used in image processing for a broad spectrum of applications
intensity filtering
spatial filtering
Answer
transformed image
output image
input image
digitized image
Answer
blurring
noise reduction
contrast
Both A and B
Answer
MCQ: In the expression s = Tr, T
Answer
whole image
slices of image
center of image
edges of image
g(x,y) = [ƒ(x,y)]
g(x,y) = T[ƒ(x,y)]
g(x,y) = T[ƒ(x)]
g(x,y) = T[ƒ(y)]
Answer
MCQ: Digital image with intensity levels in the range [0,L-1] is called
k-map
histogram
truth table
graph
Answer
contrast
darker image
brighter image
Answer
spatial transformation
intensity transformation
coordinates transformation
domain transformation
Answer
intensity levels
dots
bits
even
odd
prime
aliasing
Answer
imaginary plane
real plane
complex plane
polar plane
Answer
undefined
infinity
1
Answer
Fourier series
Fourier transform
digital image
Answer
enhancement
sharpening
blurring
resizing
Answer
aperiodic
periodic
linear
non linear
Answer
quantization
convolution
jaggies
blurring
Answer
pixel replication
bicubic interpolation
bilinear interpolation
Answer
aperiodic impulse
periodic impulse
impulse train
summation
Answer
attenuated
accentuated
reduced
removed
conjugate symmetry
hermition
antihermition
symmetry
Answer
MCQ: The band limited function can be recovered from its samples if the acquired samples are at rate
twice the highest frequency, this theorem is called
sampling theorem
sampling theorem
sampling theorem
sampling theorem
Answer
MCQ: The product of two even or two odd functions is
even
odd
prime
aliasing
Answer
undefined
infinity
Answer
Fourier series
Fourier transform
antisymmetric
periodic
aperiodic
Answer
MCQ: A continuous band-limited function can be recovered with no error if the sampling intervals are less than 1/(2u_max); this is the statement of
2D sampling series
3D sampling theorem
1D sampling theorem
2D sampling theorem
Answer
one variable
two variables
three variables
four variables
Answer
MCQ: Any function whose Fourier transform is zero for frequencies outside the finite interval is called
Answer
frequency domain
spatial domain
Fourier domain
time domain
periodic convolution
aperiodic convolution
correlation
circular convolution
Answer
spatial variables
Answer
60dpi
65dpi
70dpi
75dpi
Answer
√-1
-1
Answer
MCQ: Fourier transform of two continuous functions, that are inverse of each other is called
Fourier series
Fourier transform
MCQ: Forward and inverse Fourier transforms exist for the samples having values
integers
infinite
finite
discrete
Answer
MCQ: The greater, the values of continuous variables, the spectrum of Fourier transform will be
contracted
expanded
discrete
continuous
Answer
phase
dc component
ac component
vector
Answer
MCQ: ƒ(0,0) is sometimes called
ac component
dc component
jaggy
coordinate
Answer
rotating property
shifting property
additive property
additive inverse
pixel replication
bicubic interpolation
bilinear interpolation
under sampling
Answer
sharpening
blurring
resizing
Answer
MCQ: The sampled frequency less than the nyquist rate is called
under sampling
over sampling
critical sampling
nyquist sampling
Answer
sine(x/y)
arcsine(x/y)
tan(x/y)
arctan(x/y)
Answer
sine
cosine
tangent
Both A and B
360
270
90
180
Answer
smoothing
sharpening
degradation
Both A and B
Answer
symmetric
antisymmetric
periodic
aperiodic
Answer
under sampling
over sampling
critical sampling
nyquist sampling
Answer
smoothing
sharpening
summation
aliasing
Answer
correlation
convolution
Fourier transform
−1x+y
−1
√−1
Answer
quantization
sampling
Fourier transform
Answer
c = jI
c = R+jI
c=R
c = R+I
Answer
MCQ: Shrinking of image is viewed as
under sampling
over sampling
critical sampling
nyquist sampling
Answer
MCQ: The sampled frequency greater than the nyquist rate is called
under sampling
over sampling
critical sampling
nyquist sampling
Answer
ringing effect
image sharpening
blurring
Answer
MCQ: High pass filters are used for image
contrast
sharpening
blurring
resizing
Answer
aliasing
temporal aliasing
frequency aliasing
spatial aliasing
Answer
joseph Fourier
john Fourier
sean Fourier
jay Fourier
Answer
MCQ: (A.B).B is equal to
A .B
A+B
AoB
AxB
Answer
A .B
A+B
A-B
AxB
Answer
1D vector
2D vector
3D vector
4D vector
Answer
MCQ: Reflection is applied on image's
x coordinate
y coordinate
z coordinate
Both A and B
Answer
sharps
shrinks
smooths
deletes
Answer
MCQ: SE having size d/4 when dilated with image of size d, thickens the image by size
d/2
d/3
d/4
d/8
Answer
3pixels
2pixels
1pixel
Answer
rows
columns
edges
every element
Answer
MCQ: SE having size d/4 when eroded with image of size d, shrinks the image by size
d/2
d/3
d/4
d/8
Answer
symmetric
asymmetric
translated
Answer
opening
closing
blurring
translation
Answer
pixels
frames
structuring elements
coordinates
Answer
pixels
lines
contour
boundary
Answer
erosion
dilation
set theory
Both A and B
Answer
top left
top right
center
bottom left
Answer
pixels
lines
subimage
noise
Answer
thinner
shrinked
thickened
sharpened
Answer
neighbors
duals
centers
corners
Answer
erosion
dilation
opening
closing
Answer
opening
closing
blurring
translation
Answer
padding
logic diagram
set theory
map
Answer
removal
detection
compression
decompression
Answer
set theory
logic diagram
graph
map
Answer
one equation
both equations
any equation
Both A and B
Answer
AoB
A+B
A-B
AxB
Answer
MCQ: Subimages used to probe the image is called
pixels
frames
structuring elements
coordinates
Answer
narrow breaks
lines
dots
noise
Answer
reflection
compression
decompression
translation
Answer
MCQ: A o B is the subset of
−A
−B
Answer
bridging gaps
compression
decompression
translation
Answer
A .B
A+B
AoB
AxB
Answer
shrinked
blurred
sharpened
Answer
MCQ: Best removal of lines from image will be produced by the SE of size
1x1
2x2
3x3
5x5
Answer
pixels
frames
objects
intensity
Answer
frames
objects
coordinates
Answer
expansion
compression
decompression
translation
Answer
reflected
translated
compressed
filtered
Answer
A
B
Answer
reflection
compression
filtering
decompression
Answer
square array
circular array
triangular array
rectangular array
Answer
removing lines
producing lines
blurring image
sharpening image
Answer
{c|c = b+z}
{c|c = b-z}
{c|c = bxz}
{c|c = b}
Answer
correlation mask
convolution mask
Answer
separation
combination
togetherness
Both A and B
Answer
software engineering
structuring elements
structure eliminate
software engineer
Answer
neighbors
duals
centers
corners
Answer
y,z
z,x
x,y
x,y,z
Answer
separation
compression
decompression
filling holes
Answer
square
symmetric
asymmetric
translated
Answer
{w|w = -b}
{w|w = b}
{w = -b}
{w|w = -(-b)}
Answer