Updated DIP QB
10. Define resolution. CO1 K1
Resolution is the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
11. What is meant by pixel? CO1 K1
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels.
12. List the steps involved in DIP. CO1 K1
1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation
13. Summarize the elements of DIP system. CO1 K2
1. Image Acquisition
2. Storage
3. Processing
4. Display
14. List the categories of digital storage. CO1 K1
1. Short term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access
15. What are the types of light receptors? CO1 K1
The two types of light receptors are
1. Cones and
2. Rods
16. Define subjective brightness and brightness adaptation? CO1 K1
Subjective brightness means intensity as perceived by the human visual system. Brightness adaptation means that the human visual system cannot operate over its full range, from the scotopic threshold to the glare limit, simultaneously; it accomplishes this large variation by changing its overall sensitivity.
17. What is meant by mach band effect? CO1 K1
Although the intensity of a set of stripes is constant within each stripe, the visual system tends to undershoot or overshoot near the boundaries between stripes of different intensities, so scalloped bands of apparent brightness are perceived near the boundaries. These perceived bands are called Mach bands, and the phenomenon is the Mach band effect.
21. What is meant by shrinking of digital images? CO1 K1
Shrinking may be viewed as undersampling. To shrink an image by one half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it.
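A minimal NumPy sketch of shrinking by one half, assuming a 2-D grayscale array as input; the 3x3 box average used for the slight pre-blur is an illustrative choice, not prescribed by the text:

    import numpy as np

    def shrink_by_half(img):
        """Blur slightly with a 3x3 box average, then keep every other row/column."""
        H, W = img.shape
        padded = np.pad(img.astype(float), 1, mode='edge')
        blurred = sum(padded[i:i+H, j:j+W]
                      for i in range(3) for j in range(3)) / 9.0
        return blurred[::2, ::2]        # undersample: delete every other row and column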
PART-B
1 Summarize the steps involved in digital image processing. 16 CO1 K2
4 Identify the technique used for digitizing the coordinate values and amplitude in image processing and summarize in detail. 16 CO1 K3
7 Explain in detail about the basic relationships between pixels and provide necessary examples. 16 CO1 K2
*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO1 Apply the mathematical transform necessary for image processing
UNIT II IMAGE ENHANCEMENT
1. Label the objective of image enhancement technique. CO2 K1
The objective of an enhancement technique is to process an image so that the result is more suitable than the original image for a particular application.
2. Summarize the 2 categories of image enhancement. CO2 K2
i) Spatial domain methods refer to the image plane itself; approaches in this category are based on direct manipulation of the pixels in an image.
ii) Frequency domain methods are based on modifying the Fourier transform of the image.
3. What is contrast stretching? CO2 K1
Contrast stretching produces an image of higher contrast than the original by darkening the levels below a chosen level m and brightening the levels above m in the original image.
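A sketch of a piecewise-linear contrast stretch; the breakpoints r1, s1, r2, s2 and the 8-bit range are assumed example values, not taken from the text:

    import numpy as np

    def contrast_stretch(img, r1=70, s1=30, r2=180, s2=220, L=256):
        """Darken levels below r1 and brighten levels above r2 (piecewise linear)."""
        r = img.astype(float)
        out = np.piecewise(r,
            [r < r1, (r >= r1) & (r <= r2), r > r2],
            [lambda r: s1 * r / r1,
             lambda r: s1 + (s2 - s1) * (r - r1) / (r2 - r1),
             lambda r: s2 + (L - 1 - s2) * (r - r2) / (L - 1 - r2)])
        return out.clip(0, L - 1).astype(np.uint8)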
4. Infer grey level slicing. CO2 K2
Highlighting a specific range of grey levels in an image is often desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images.
5. Define image subtraction. CO2 K1
The difference between two images f(x,y) and h(x,y), expressed as g(x,y) = f(x,y) - h(x,y), is obtained by computing the difference between all pairs of corresponding pixels from f and h.
6. What is the purpose of image averaging? CO2 K1
An important application of image averaging is in the field of astronomy, where imaging with very low light levels is routine, causing sensor noise frequently to render single images virtually useless for analysis.
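A small sketch, assuming a stack of K registered noisy frames of the same scene is available as a NumPy array:

    import numpy as np

    def average_images(noisy_stack):
        """noisy_stack: array of shape (K, H, W). For uncorrelated zero-mean noise,
        averaging K frames reduces the noise variance by a factor of K."""
        return noisy_stack.mean(axis=0)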
7. Recall masking. CO2 K1
A mask is a small 2-D array in which the values of the mask coefficients determine the nature of the process. The enhancement technique based on this type of approach is referred to as mask processing.
8. Define histogram. CO2 K1
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k.
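A minimal sketch of computing h(r_k) = n_k for an image; the 8-bit range L = 256 is an assumption for illustration:

    import numpy as np

    def gray_histogram(img, L=256):
        """Return n_k, the number of pixels at each gray level r_k = 0..L-1."""
        h = np.zeros(L, dtype=np.int64)
        for r_k in img.ravel():
            h[r_k] += 1
        return h        # equivalent to np.bincount(img.ravel(), minlength=L)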
9. What do you mean by Point processing? CO2 K1
Image enhancement in which the new value at any point depends only on the gray level at that point is often referred to as point processing.
10. Explain spatial filtering. CO2 K2
Spatial filtering is the process of moving a filter mask from point to point in an image. For a linear spatial filter, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
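A sketch of linear spatial filtering with a small mask; the zero padding at the borders and the 3x3 box mask are assumed choices for illustration:

    import numpy as np

    def linear_filter(img, mask):
        """Response at each point = sum of products of the mask coefficients and
        the image pixels under the mask (odd-sized mask, zero-padded borders)."""
        m, n = mask.shape
        pad_y, pad_x = m // 2, n // 2
        padded = np.pad(img.astype(float), ((pad_y, pad_y), (pad_x, pad_x)))
        out = np.zeros_like(img, dtype=float)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                out[y, x] = np.sum(mask * padded[y:y+m, x:x+n])
        return out

    box_mask = np.ones((3, 3)) / 9.0    # simple smoothing (averaging) mask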
11. What is a Median filter? CO2 K1
The median filter replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel.
12. What is maximum filter and minimum filter? CO2 K1
The 100th percentile filter is the maximum filter, used for finding the brightest points in an image. The 0th percentile filter is the minimum filter, used for finding the darkest points in an image.
13. Summarize the applications of sharpening filters. CO2 K2
1. Electronic printing and medical imaging to industrial applications.
2. Autonomous target detection in smart weapons.
Name the different types of derivative filters.
1. Prewitt operators
2. Roberts cross-gradient operators
3. Sobel operators
14. Define image subtraction. CO2 K1
The difference between two images f(x,y) and h(x,y), expressed as g(x,y) = f(x,y) - h(x,y), is obtained by computing the difference between all pairs of corresponding pixels from f and h.
15. What is meant by laplacian filter?
The laplacian for a function f(x,y) of 2 variables is defined as, 2 2 2 2 2 f = _ f / _ x CO2 K1
+_f/_y
16. Illustrate the steps involved in frequency domain filtering. CO2 K2
1. Multiply the input image by (-1)^(x+y) to center the transform.
2. Compute F(u,v), the DFT of the image from (1).
3. Multiply F(u,v) by a filter function H(u,v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (-1)^(x+y).
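The six steps above can be sketched directly with NumPy's FFT; the Gaussian low-pass H(u,v) and the cutoff D0 used here are assumed example choices, not part of the definition:

    import numpy as np

    def frequency_domain_filter(img, D0=30.0):
        M, N = img.shape
        x, y = np.meshgrid(np.arange(N), np.arange(M))
        centered = img * (-1.0) ** (x + y)             # 1. center the transform
        F = np.fft.fft2(centered)                      # 2. DFT of the centered image
        D2 = (x - N / 2) ** 2 + (y - M / 2) ** 2
        H = np.exp(-D2 / (2 * D0 ** 2))                # 3. example filter H(u,v)
        G = F * H                                      #    multiply F(u,v) by H(u,v)
        g = np.fft.ifft2(G)                            # 4. inverse DFT
        g = np.real(g)                                 # 5. take the real part
        return g * (-1.0) ** (x + y)                   # 6. undo the centering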
PART-B
1 Explain the types of gray level transformation used for image enhancement. 16 CO2 K2
2 Illustrate the image smoothing filter with its model in the spatial domain 16 CO2 K2
3 What are image sharpening filters? Explain the various types of it. 16 CO2 K2
*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO2 Compute the enhancement techniques using spatial and frequency filters.
Erlang noise
Exponential noise
Uniform noise
15. Rephrase the relation for Gamma noise. CO3 K2
Gamma (Erlang) noise: the PDF is
p(z) = a^b z^(b-1) e^(-az) / (b-1)!   for z >= 0
p(z) = 0                              for z < 0
Mean μ = b/a, variance σ² = b/a².
16. Relate the Exponential noise with image. CO3 K2
Exponential noise: the PDF is
p(z) = a e^(-az)   for z >= 0
p(z) = 0           for z < 0
Mean μ = 1/a, variance σ² = 1/a².
17. What is inverse filtering? CO3 K1
The simplest approach to restoration is direct inverse filtering, where an estimate F^(u,v) of the transform of the original image is computed simply by dividing the transform of the degraded image G(u,v) by the degradation function H(u,v):
F^(u,v) = G(u,v) / H(u,v)
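A minimal sketch of this division in NumPy; the small epsilon guard against near-zero H(u,v) is an added assumption, not part of the definition above:

    import numpy as np

    def inverse_filter(G, H, eps=1e-6):
        """G: DFT of the degraded image, H: degradation function (same shape).
        Returns the estimate F_hat(u,v) = G(u,v) / H(u,v)."""
        H_safe = np.where(np.abs(H) < eps, eps, H)   # avoid dividing by ~0
        return G / H_safe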
18. What is pseudo inverse filter? CO3 K1
It is the stabilized version of the inverse filter. For a linear shift-invariant system with frequency response H(u,v), the pseudo-inverse filter is defined as
H⁻(u,v) = 1/H(u,v)   for H(u,v) ≠ 0
H⁻(u,v) = 0          for H(u,v) = 0
19. What is meant by least mean square filter? CO3 K1
The limitation of the inverse and pseudo-inverse filters is that they are very sensitive to noise. Wiener (least mean square) filtering is a method of restoring images in the presence of blur as well as noise.
20. What is meant by blind image restoration? CO3 K1
Information about the degradation must be extracted from the observed image either explicitly or implicitly. This task is called blind image restoration.
21. What is meant by Direct measurement? CO3 K1
In direct measurement, the blur impulse response and noise levels are first estimated from an observed image, and these parameters are then used in the restoration.
PART-B
1 Explain the algebra approach in image restoration. 16 CO3 K2
2 Experiment with the wiener filter in image restoration with suitable example. 16 CO3 K3
6 What are the two approaches for blind image restoration? Explain in detail. 16 CO3 K2
*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO3 Apply the restoration technique in the presence of noise and degradation
The first derivative at any point in an image is obtained by using the magnitude
of the gradient at that point. Similarly the second derivatives are obtained by
using the laplacian.
5. Infer about linking edge points. CO4 K2
The approach for linking edge points is to analyze the characteristics of pixels
in a small neighborhood (3x3 or 5x5) about every point (x,y) in an image that
has undergone edge detection. All points that are similar are linked, forming a
boundary of pixels that share some common properties.
6. Illustrate the two properties used for establishing similarity of edge pixels? CO4 K2
(1) The strength of the response of the gradient operator used to produce the
edge pixel.
(2) The direction of the gradient.
7. What is edge? CO4 K1
An edge is a set of connected pixels that lie on the boundary between two regions. Edges are more closely modeled as having a ramp-like profile. The slope of the ramp is inversely proportional to the degree of blurring in the edge.
8. Interpret the object point and background point. CO4 K2
To extract the objects from the background, we select a threshold T that separates these modes. Then any point (x,y) for which f(x,y) > T is called an object point; otherwise the point is called a background point.
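A minimal sketch of labelling object and background points with a global threshold; the threshold value T is an assumed input here:

    import numpy as np

    def threshold_image(f, T):
        """Label each point: 1 (object) where f(x,y) > T, 0 (background) otherwise."""
        return (f > T).astype(np.uint8)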
9. What is global, Local and dynamic or adaptive threshold? CO4 K1
When the threshold T depends only on f(x,y), the threshold is called global. If T depends on both f(x,y) and a local property p(x,y), it is called local. If, in addition, T depends on the spatial coordinates x and y, the threshold is called dynamic or adaptive, where f(x,y) is the original image.
10. Define region growing? CO4 K1
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from there grow regions by appending to each seed those neighboring pixels that have properties similar to the seed.
11. List the steps involved in splitting and merging? CO4 K1
Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE. Merge any adjacent regions Rj and Rk for which P(Rj U Rk) = TRUE. Stop when no further merging or splitting is possible.
12. What is meant by markers? CO4 K1
An approach used to control over-segmentation is based on markers. A marker is a connected component belonging to an image. We have internal markers, associated with objects of interest, and external markers, associated with the background.
13. Summarize the 2 principal steps involved in marker selection? CO4 K2
The two steps are
1. Preprocessing
2. Definition of a set of criteria that markers must satisfy.
14. Define chain codes. CO4 K1
Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. Typically this representation is based on 4- or 8-connectivity of the segments. The direction of each segment is coded by using a numbering scheme.
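An illustrative sketch of 8-directional chain coding for an ordered list of boundary pixels; the numbering convention (0 = east, codes increasing counter-clockwise) is an assumption for the example:

    # Map (dx, dy) between consecutive boundary pixels to an 8-direction code.
    DIRECTIONS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
                  (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

    def chain_code(boundary):
        """boundary: ordered list of (x, y) pixels of an 8-connected boundary."""
        codes = []
        for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
            codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
        return codes

    # Example: a short 8-connected boundary fragment
    print(chain_code([(0, 0), (1, 0), (2, -1), (2, -2)]))   # [0, 1, 2]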
15. What are the demerits of chain code? CO4 K1
* The resulting chain code tends to be quite long.
* Any small disturbance along the boundary due to noise causes changes in the code that may not be related to the shape of the boundary.
16. Recall thinning or skeletonizing algorithm. CO4 K1
An important approach to representing the structural shape of a plane region is to reduce it to a graph. This reduction may be accomplished by obtaining the skeleton of the region via a thinning (skeletonizing) algorithm. It plays a central role in a broad range of problems in image processing, ranging from automated inspection of printed circuit boards to counting asbestos fibres in air filters.
17. List the various image representation approaches CO4 K1
Chain codes
Polygonal approximation
Boundary segments
18. Summarize polygonal approximation method. CO4 K2
Polygonal approximation is an image representation approach in which a
digital boundary can be approximated with arbitrary accuracy by a polygon.
For a closed curve the approximation is exact when the number of segments in
polygon is equal to the number of points in the boundary so that each pair of
adjacent points defines a segment in the polygon.
19. List the various polygonal approximation methods CO4 K1
Minimum perimeter polygons
Merging techniques
Splitting techniques
20. Recall few boundary descriptors CO4 K1
Simple descriptors
Shape numbers
Fourier descriptors
21. Define length of a boundary. CO4 K1
The length of a boundary is the number of pixels along the boundary. For example, for a chain-coded curve with unit spacing in both directions, the number of vertical and horizontal components plus √2 times the number of diagonal components gives its exact length.
PART-B
1 Experiment with any image segmentation method in detail. 16 CO4 K3
3 Make use of the thresholding and the various methods of thresholding in detail with suitable example. 16 CO4 K3
4 Compare the threshold region based techniques and analyze any one region based image segmentation technique. 16 CO4 K3
5 Explain the various representation approaches with suitable example. 16 CO4 K2
8 Explain the two techniques of region representation with suitable example. 16 CO4 K2
9 Explain the segmentation techniques that are based on finding the regions in detail. 16 CO4 K2
*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO4 Apply the concept of various segmentation techniques and representation
Lossy compression will result in a certain loss of accuracy in exchange for a
substantial increase in compression. Lossy compression is more effective when
used to compress graphic images and digitised voice where losses outside visual
or aural perception can be tolerated.
4. Recall the need for Compression. CO5 K1
In terms of storage, the capacity of a storage device can be effectively increased with methods that compress a body of data on its way to a storage device and decompress it when it is retrieved.
In terms of communications, the bandwidth of a digital communication link can
be effectively increased by compressing data at the sending end and
decompressing data at the receiving end.
At any given time, the ability of the Internet to transfer data is fixed. Thus, if
data can effectively be compressed wherever possible, significant
improvements of data throughput can be achieved. Many files can be combined
into one compressed document making sending easier.
5. Define coding redundancy. CO5 K1
If the gray level of an image is coded in a way that uses more code words than
necessary to represent each gray level, then the resulting image is said to
contain coding redundancy.
6. Define interpixel redundancy. CO5 K1
The value of any given pixel can be predicted from the values of its neighbors, so the information carried by an individual pixel is relatively small and the visual contribution of a single pixel to an image is redundant. Interpixel redundancy is also called spatial redundancy or geometric redundancy.
7. Outline run length coding. CO5 K2
Run-length encoding, or RLE, is a technique used to reduce the size of a
repeating string of characters. This repeating string is called a run; typically
RLE encodes a run of symbols into two bytes, a count and a symbol. RLE can
compress any type of data regardless of its information content, but the content
of data to be compressed affects the compression ratio. Compression is
normally measured with the compression ratio.
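A minimal sketch of RLE as described, encoding a run of symbols into (count, symbol) pairs; byte-level details such as runs longer than 255 are ignored here for simplicity:

    def rle_encode(data):
        """Encode a sequence of symbols as (count, symbol) pairs."""
        runs = []
        for symbol in data:
            if runs and runs[-1][1] == symbol:
                runs[-1][0] += 1
            else:
                runs.append([1, symbol])
        return [(count, symbol) for count, symbol in runs]

    def rle_decode(runs):
        return [symbol for count, symbol in runs for _ in range(count)]

    print(rle_encode("AAAABBBCCD"))   # [(4, 'A'), (3, 'B'), (2, 'C'), (1, 'D')]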
3) Symbol encoder- This reduces the coding redundancy. This is the final stage of the encoding process.
12. Summarize the types of decoder. CO5 K2
The source decoder has two components:
a) Symbol decoder- This performs the inverse operation of the symbol encoder.
b) Inverse mapper- This performs the inverse operation of the mapper.
The channel decoder is omitted if the system is error free.
13. Illustrate the operations performed by error free compression. CO5 K2
1) Devising an alternative representation of the image in which its interpixel redundancies are reduced.
2) Coding the representation to eliminate coding redundancy.
14. Infer the variable Length Coding. CO5 K2
Variable Length Coding is the simplest approach to error free compression. It
reduces only the coding redundancy. It assigns the shortest possible codeword
to the most probable gray levels.
15. Define Huffman coding. CO5 K1
Huffman coding is a popular technique for removing coding redundancy.
When coding the symbols of an information source individually, the Huffman code yields the smallest possible number of code symbols per source symbol.
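A compact sketch of building a Huffman code from symbol probabilities using the standard-library heapq module; the symbol probabilities in the example are assumed values for illustration:

    import heapq

    def huffman_code(probabilities):
        """probabilities: dict symbol -> probability. Returns dict symbol -> codeword."""
        heap = [[p, i, {s: ""}] for i, (s, p) in enumerate(probabilities.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            p1, _, codes1 = heapq.heappop(heap)      # two least probable nodes
            p2, _, codes2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in codes1.items()}
            merged.update({s: "1" + c for s, c in codes2.items()})
            heapq.heappush(heap, [p1 + p2, counter, merged])
            counter += 1
        return heap[0][2]

    print(huffman_code({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}))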
16. Define Block code CO5 K1
Each source symbol is mapped into fixed sequence of code symbols or code
words. So it is called as block code.
17. Recall instantaneous code. CO5 K1
A code word that is not a prefix of any other code word is called instantaneous
or prefix codeword.
18. Label uniquely decodable code CO5 K1
A code word that is not a combination of any other codeword is said to be
uniquely decodable code.
19. Define arithmetic coding. CO5 K1
In arithmetic coding, a one-to-one correspondence between source symbols and code words does not exist; instead, a single arithmetic code word is assigned to an entire sequence of source symbols. A code word defines an interval of real numbers between 0 and 1.
20. What is bit plane Decomposition? CO5 K1
An effective technique for reducing an image’s interpixel redundancies is to
process the image’s bit plane individually. This technique is based on the
concept of decomposing multilevel images into a series of binary images and
compressing each binary image via one of several well-known binary
compression methods.
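A minimal sketch of decomposing an 8-bit image into its eight binary bit planes and recombining them; a NumPy uint8 array input is assumed:

    import numpy as np

    def bit_planes(img):
        """Return a list of 8 binary images; plane k holds bit k of each pixel
        (plane 7 is the most significant bit)."""
        return [(img >> k) & 1 for k in range(8)]

    def reconstruct(planes):
        """Recombine the bit planes into the original 8-bit image."""
        return sum(plane.astype(np.uint8) << k for k, plane in enumerate(planes))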
21. Explain how the effectiveness of quantization can be improved. CO5 K2
By introducing an enlarged quantization interval around zero, called a dead zone, or by adapting the size of the quantization intervals from scale to scale. In either case, the selected quantization intervals must be transmitted to the decoder with the encoded image bit stream.
PART-B
1 Utilize the data redundancy and explain three basic data redundancy. 16 CO5 K3
4 Explain about Error free Compression. 16 CO5 K2
*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO5 Compute various compression and recognition methods
Prepared by: K.Sheikdavood                    Approved by: HOD/ECE