Updated Dip QB

This document contains a question bank for the course "Digital Image Processing" taught in the Department of Electronics and Communication Engineering. It includes questions related to key concepts in digital image processing such as digital image fundamentals, image enhancement techniques, and image transforms. The questions are categorized into units related to digital image fundamentals, image enhancement, and image transforms. Formative assessment questions involving practical applications and analysis of concepts are also included.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

18ECE011T DIGITAL IMAGE PROCESSING


QUESTION BANK
Year/Semester & Branch: III/ V1&ECE

UNIT I DIGITAL IMAGE FUNDAMENTALS


1. Define Image? CO1 K1
An image may be defined as a two-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the amplitude or value of f at any point (x, y) is called the intensity, gray scale, or brightness of the image at that point.
2. What is Dynamic Range? CO1 K1
The range of values spanned by the gray scale is called the dynamic range of an image. An image will have high contrast if the dynamic range is high, and a dull, washed-out gray look if the dynamic range is low.
3. Define Brightness? CO1 K1
Brightness of an object is its perceived luminance. Two objects with identical luminance but different surroundings can have different brightness.
4. Define Tapered Quantization? CO1 K2
If gray levels in a certain range occur frequently while others occur rarely, the quantization levels are finely spaced in this range and coarsely spaced outside it. This method is sometimes called tapered quantization.
5. Infer Gray level. CO1 K2
Gray level refers to a scalar measure of intensity that ranges from black to grays
and finally to white.
6. Summarize color model. CO1 K2
A color model is a specification of a 3D coordinate system and a subspace within that system where each color is represented by a single point.
7. List the hardware oriented color models? CO1 K1
1. RGB model
2. CMY model
3. YIQ model
4. HSI model
8. Outline Hue and Saturation. CO1 K2
Hue is a color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
9. Illustrate Chromatic Adaptation. CO1 K2
The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The colors of the flag will appear to shift in hue toward cyan, the complement of red.

10. Define Resolution. CO1 K1
Resolution is defined as the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
11. What is meant by pixel? CO1 K1
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as pixels, image elements, picture elements, or pels.
12. List the steps involved in DIP. CO1 K1
1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation
13. Summarize the elements of DIP system. CO1 K2
1. Image Acquisition
2. Storage
3. Processing
4. Display
14. List the categories of digital storage. CO1 K1
1. Short term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access
15. What are the types of light receptors? CO1 K1
The two types of light receptors are
1. Cones and
2. Rods
16. Define subjective brightness and brightness adaptation? CO1 K1
Subjective brightness means intensity as perceived by the human visual system. Brightness adaptation means the human visual system cannot operate over the entire range from scotopic vision to the glare limit simultaneously; it accomplishes this large variation by changing its overall sensitivity.
17. What is meant by mach band effect? CO1 K1
The Mach band effect refers to the fact that the visual system tends to undershoot or overshoot around the boundary between regions of different intensity. Although the intensity of each stripe is constant, the perceived brightness is scalloped near the boundaries; these perceived bands are called Mach bands.

18. What is simultaneous contrast? CO1 K1


The perceived brightness of a region does not depend only on its intensity but also on its background. All the centre squares have exactly the same intensity; however, they appear to the eye progressively darker as the background becomes lighter.
19. What is meant by illumination and reflectance? CO1 K1
Illumination is the amount of source light incident on the scene. It is
represented as i(x, y).Reflectance is the amount of light reflected by the
object in the scene. It is represented by r(x, y).
20. Summarize the Zooming of digital images. CO1 K2
Zooming may be viewed as over sampling. It involves the creation of new pixel
locations and the assignment of gray levels to those new locations.

21. What is meant by shrinking of digital images? CO1 K1
Shrinking may be viewed as under-sampling. To shrink an image by one half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it.
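The zooming and shrinking operations of questions 20 and 21 can be sketched in Python (an illustrative nearest-neighbour sketch on nested lists; the function names are our own, not from the syllabus):

```python
def zoom_nearest(image, factor):
    # over-sampling: each new pixel location is assigned the gray level
    # of the nearest original pixel (pixel replication)
    zoomed = []
    for row in image:
        wide = [pixel for pixel in row for _ in range(factor)]
        zoomed.extend(list(wide) for _ in range(factor))
    return zoomed

def shrink_half(image):
    # under-sampling: keep every other row and every other column
    return [row[::2] for row in image[::2]]
```

Shrinking a zoomed image by the same factor recovers the original, which is a quick sanity check on both functions.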
PART-B
1 Summarize the steps involved in digital image processing. 16 CO1 K2

2 Experiment with brightness adaptation and discrimination, and image formation in the eye. 16 CO1 K3

3 Summarize the elements of visual perception with necessary pictorial representation. 16 CO1 K2

4 Identify the technique used for digitizing the coordinate value and 16 CO1 K3
amplitude in image processing and summarize in details.

5 Describe the functions of elements of digital image processing system with a diagram. 16 CO1 K2

6 Explain Hadamard transformation in detail with suitable equations. 16 CO1 K2

7 Explain in detail about the basic relationships between pixels and 16 CO1 K2
provide necessary examples.

8 Explain the properties of 2D Fourier Transform. 16 CO1 K2

9 Explain in detail the Walsh transforms 16 CO1 K2

10. Explain Hadamard transformation in detail. 16 CO1 K2

*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO1 Apply the mathematical transform necessary for image processing

UNIT II IMAGE ENHANCEMENT
1. Label the objective of image enhancement technique.
The objective of enhancement technique is to process an image so that the CO2 K1
result is more suitable than the original image for a particular application.
2. Summarize the 2 categories of image enhancement. CO2 K2
i) Spatial domain methods: the image plane itself; approaches in this category are based on direct manipulation of the pixels of the image.
ii) Frequency domain methods: based on modifying the Fourier transform of the image.
3. What is contrast stretching? CO2 K1
Contrast stretching produces an image of higher contrast than the original by darkening the levels below m and brightening the levels above m in the image.
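A minimal sketch of one common contrast-stretching transformation, s = 255 / (1 + (m/r)^E); the slope parameter E is an assumption for illustration, not from the question bank:

```python
def contrast_stretch(r, m=128, E=4):
    # s = 255 / (1 + (m/r)^E): output is darkened for r < m
    # and brightened for r > m, with slope controlled by E
    r = max(r, 1)  # guard against division by zero at black pixels
    return round(255.0 / (1.0 + (m / r) ** E))
```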
4. Infer grey level slicing. CO2 K2
Highlighting a specific range of grey levels in an image is often desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images.
5. Define image subtraction. CO2 K1
The difference between 2 images f(x,y) and h(x,y), expressed as g(x,y) = f(x,y) - h(x,y), is obtained by computing the difference between all pairs of corresponding pixels from f and h.
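The definition above maps directly to code; a minimal sketch with images as nested lists:

```python
def image_subtract(f, h):
    # g(x, y) = f(x, y) - h(x, y), computed pixel by pixel
    return [[fp - hp for fp, hp in zip(frow, hrow)]
            for frow, hrow in zip(f, h)]
```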
6. What is the purpose of image averaging?
An important application of image averaging is in the field of astronomy, where imaging with very low light levels is routine, causing sensor noise frequently to render single images virtually useless for analysis. Averaging K noisy images of the same scene reduces the noise variance by a factor of K.
7. Recall masking.
A mask is a small 2-D array in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach are referred to as mask processing.
8. Define histogram.
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the k-th gray level and n_k is the number of pixels in the image having gray level r_k.
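The definition h(r_k) = n_k can be sketched as:

```python
def histogram(image, L=256):
    # h(r_k) = n_k: number of pixels at each gray level r_k in [0, L-1]
    h = [0] * L
    for row in image:
        for pixel in row:
            h[pixel] += 1
    return h
```

The counts always sum to the total number of pixels in the image.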
9. What do you mean by Point processing?
Enhancement in which the new value of any pixel depends only on the gray level at that point is often referred to as point processing.
10. Explain spatial filtering.
Spatial filtering is the process of moving the filter mask from point to point in an
image. For linear spatial filter, the response is given by a sum of products of CO2 K2
the filter coefficients, and the corresponding image pixels in the area spanned by
the filter mask.
11. What is a Median filter?
The median filter replaces the value of a pixel by the median of the gray levels in CO2 K1
the neighborhood of that pixel
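A sketch of the median operation on a single neighborhood, passed in as a flat list; note how an impulse value such as 200 is rejected rather than averaged in:

```python
def median_of_neighborhood(values):
    # replace the centre pixel with the median of the gray levels
    # in its neighborhood (e.g. the 9 values of a 3x3 window)
    ordered = sorted(values)
    return ordered[len(ordered) // 2]
```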
12. What is maximum filter and minimum filter?
The 100th percentile filter is the maximum filter, used in finding the brightest points in an image. The 0th percentile filter is the minimum filter, used for finding the darkest points in an image.
13. Summarize the applications of sharpening filters? CO2 K2
1. Electronic printing and medical imaging to industrial applications.
2. Autonomous target detection in smart weapons.
Name the different types of derivative filters:
1. Prewitt operators
2. Roberts cross-gradient operators
3. Sobel operators
14. Define image subtraction. CO2 K1
The difference between 2 images f(x,y) and h(x,y), expressed as g(x,y) = f(x,y) - h(x,y), is obtained by computing the difference between all pairs of corresponding pixels from f and h.
15. What is meant by Laplacian filter? CO2 K1
The Laplacian of a function f(x,y) of 2 variables is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²
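The Laplacian above is usually approximated with finite differences; a sketch at a single interior pixel:

```python
def laplacian_at(image, r, c):
    # discrete Laplacian: f(r+1,c) + f(r-1,c) + f(r,c+1) + f(r,c-1) - 4*f(r,c),
    # the standard finite-difference approximation of d2f/dx2 + d2f/dy2;
    # (r, c) must be an interior pixel
    return (image[r + 1][c] + image[r - 1][c]
            + image[r][c + 1] + image[r][c - 1]
            - 4 * image[r][c])
```

On a flat region the result is zero; the response is large only where intensity changes, which is why the Laplacian is used for sharpening.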
16. Illustrate the steps involved in frequency domain filtering. CO2 K2
1. Multiply the input image by (-1)^(x+y) to center the transform.
2. Compute F(u,v), the DFT of the image from (1).
3. Multiply F(u,v) by a filter function H(u,v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (-1)^(x+y).
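The six steps can be sketched with NumPy's FFT routines (a sketch assuming the filter H is supplied as an array of the same size as the image):

```python
import numpy as np

def frequency_filter(image, H):
    # steps 1-6: centre the spectrum, filter, invert, re-centre
    rows, cols = image.shape
    y, x = np.indices((rows, cols))
    centring = (-1.0) ** (x + y)
    F = np.fft.fft2(image * centring)   # steps 1-2
    G = F * H                           # step 3
    g = np.real(np.fft.ifft2(G))        # steps 4-5
    return g * centring                 # step 6
```

With an all-pass filter H = 1 the output reproduces the input, which is a useful check of the bookkeeping.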

PART-B
1 Explain the types of gray level transformation used for image enhancement. 16 CO2 K2

2 Illustrate the image smoothing filter with its model in the spatial domain 16 CO2 K2

3 What are image sharpening filters? Explain the various types of it. 16 CO2 K2

4 Explain spatial filtering in image enhancement. 16 CO2 K2

5 Explain image enhancement in the frequency domain. 16 CO2 K2

6 Explain Homomorphic filtering in detail. 16 CO2 K2

7 What is histogram? Explain histogram equalization. 16 CO2 K2

*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO2 Compute the Enhancement Techniques Using Spatial And Frequency Filters.

UNIT III IMAGE RESTORATION


1. What is meant by Image Restoration? CO3 K1
Restoration attempts to reconstruct or recover an image that has been degraded, by using a priori knowledge of the degradation phenomenon.
2. List the two properties of a Linear Operator. CO3 K1
Additivity
Homogeneity
3. Interpret additivity property in Linear Operator. CO3 K2
H[f1(x,y) + f2(x,y)] = H[f1(x,y)] + H[f2(x,y)]
The additivity property says that if H is a linear operator, the response to a sum of two inputs is equal to the sum of the two responses.
4. How a degradation process is modeled? CO3 K1
A system operator H, together with an additive noise term η(x,y), operates on an input image f(x,y) to produce a degraded image g(x,y): g(x,y) = H[f(x,y)] + η(x,y).
5. Explain homogenity property in Linear Operator? CO3 K2
H[k1 f1(x,y)] = k1 H[f1(x,y)]
The homogeneity property says that the response to a constant multiple of any input is equal to the response to that input multiplied by the same constant.
6. What is fredholm integral of first kind? CO3 K1
g(x,y) = ∬ f(α,β) h(x,α,y,β) dα dβ
which is called the superposition (or convolution, or Fredholm) integral of the first kind. It states that if the response of H to an impulse is known, the response to any input f(α,β) can be calculated by means of this integral.
7. What is the concept of the algebraic approach? CO3 K1
The concept of the algebraic approach is to estimate the original image in a way that minimizes a predefined criterion of performance.
8. List the two methods of algebraic approach. CO3 K1
Unconstrained restoration approach
Constrained restoration approach
9. Define Gray-level interpolation. CO3 K1
Gray-level interpolation deals with the assignment of gray levels to pixels in
the spatially transformed image
10. What is meant by Noise probability density function? CO3 K1
The spatial noise descriptor is the statistical behavior of gray level values in
the noise component of the model.
11. Interpret why the restoration is called unconstrained restoration. CO3 K2
In the absence of any knowledge about the noise n, a meaningful criterion function is to seek an f^ such that H f^ approximates g in a least-squares sense, by assuming the noise term is as small as possible, where H = system operator, f^ = estimated input image, g = degraded image.
12. Infer the most frequent method to overcome the difficulty to formulate the CO3 K2
spatial relocation of pixels?
Tiepoints are the most frequent method; these are subsets of pixels whose locations in the input (distorted) and output (corrected) images are known precisely.
13. What are the three methods of estimating the degradation function? CO3 K1
1. Observation
2. Experimentation
3. Mathematical modeling.
14. What are the types of noise models? CO3 K1
Gaussian noise
Rayleigh noise
Erlang noise
Exponential noise
Uniform noise
15. Rephrase the relation for Gamma noise. CO3 K2
Gamma (Erlang) noise: the PDF is
p(z) = a^b z^(b-1) e^(-az) / (b-1)!  for z ≥ 0
p(z) = 0                             for z < 0
mean μ = b/a, variance σ² = b/a²
16. Relate the Exponential noise with image. CO3 K2
Exponential noise: the PDF is
p(z) = a e^(-az)  for z ≥ 0
p(z) = 0          for z < 0
mean μ = 1/a, variance σ² = 1/a²
17. What is inverse filtering? CO3 K1
The simplest approach to restoration is direct inverse filtering: an estimate F^(u,v) of the transform of the original image is obtained simply by dividing the transform of the degraded image, G(u,v), by the degradation function:
F^(u,v) = G(u,v) / H(u,v)
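A sketch of direct inverse filtering with NumPy, with near-zero values of H(u,v) cut off so the division stays bounded (the cutoff eps is an assumed parameter; the cutoff itself is the pseudo-inverse idea of question 18):

```python
import numpy as np

def inverse_filter(G, H, eps=1e-3):
    # F^(u,v) = G(u,v) / H(u,v); frequencies where |H| is near zero
    # are forced to 0 instead, so noise is not amplified without bound
    H_safe = np.where(np.abs(H) < eps, np.inf, H)
    return G / H_safe
```

In the noise-free case G = F·H, the estimate recovers F exactly wherever H is nonzero.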
18. What is a pseudo-inverse filter? CO3 K1
It is the stabilized version of the inverse filter. For a linear shift-invariant system with frequency response H(u,v), the pseudo-inverse filter is defined as
H⁻(u,v) = 1/H(u,v)  for H(u,v) ≠ 0
H⁻(u,v) = 0         for H(u,v) = 0
19. What is meant by least mean square filter? CO3 K1
The limitation of the inverse and pseudo-inverse filters is that they are very sensitive to noise. Wiener filtering is a least-mean-square method of restoring images in the presence of blur as well as noise.
20. What is meant by blind image restoration? CO3 K1
Information about the degradation must be extracted from the observed image, either explicitly or implicitly. This task is called blind image restoration.
21. What is meant by Direct measurement? CO3 K1
In direct measurement, the blur impulse response and noise levels are first estimated from an observed image; these parameters are then utilized in the restoration.

PART-B
1 Explain the algebra approach in image restoration. 16 CO3 K2

2 Experiment with the wiener filter in image restoration with suitable 16 CO3 K3
example.

3 Summarize the Inverse filtering in detail. 16 CO3 K2

4 Explain singular value decomposition and specify its properties. 16 CO3 K2

5 Explain image degradation model /restoration process in detail 16 CO3 K2

6 What are the two approaches for blind image restoration? Explain in detail. 16 CO3 K2

*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO3 Apply the restoration technique in the presence of noise and degradation

UNIT IV IMAGE SEGMENTATION AND REPRESENTATION


1. What is segmentation? CO4 K1
Segmentation subdivides an image into its constituent regions or objects. The level to which the subdivision is carried depends on the problem being solved; that is, segmentation should stop when the objects of interest in an application have been isolated.
2. Write the applications of segmentation. CO4 K1
* Detection of isolated points.
* Detection of lines and edges in an image.
3. What are the three types of discontinuity in digital image? CO4 K1
Points, lines and edges
4. How are the derivatives obtained in edge detection during formulation? CO4 K2
The first derivative at any point in an image is obtained by using the magnitude of the gradient at that point. Similarly, the second derivatives are obtained by using the Laplacian.
5. Infer about linking edge points. CO4 K2
The approach for linking edge points is to analyze the characteristics of pixels
in a small neighborhood (3x3 or 5x5) about every point (x,y)in an image that
has undergone edge detection. All points that are similar are linked, forming a
boundary of pixels that share some common properties.
6. Illustrate the two properties used for establishing similarity of edge pixels? CO4 K2
(1) The strength of the response of the gradient operator used to produce the
edge pixel.
(2) The direction of the gradient.
7. What is edge? CO4 K1
An edge is a set of connected pixels that lie on the boundary between two regions. In practice, edges are more closely modeled as having a ramp-like profile; the slope of the ramp is inversely proportional to the degree of blurring in the edge.
8. Interpret the object point and background point. CO4 K2
To extract the objects from the background, we select a threshold T that separates the object and background modes. Any point (x,y) for which f(x,y) > T is called an object point; otherwise, the point is called a background point.
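The object/background rule can be sketched as:

```python
def threshold(image, T):
    # f(x, y) > T -> object point (1); otherwise background point (0)
    return [[1 if pixel > T else 0 for pixel in row] for row in image]
```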
9. What is global, Local and dynamic or adaptive threshold? CO4 K1
When the threshold T depends only on f(x,y), the threshold is called global. If T depends on both f(x,y) and a local property p(x,y), it is called local. If, in addition, T depends on the spatial coordinates x and y, the threshold is called dynamic or adaptive, where f(x,y) is the original image.
10. Define region growing? CO4 K1
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from there grow regions by appending to each seed those neighboring pixels that have properties similar to the seed.
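A minimal sketch of region growing from a single seed, using 4-connectivity and an assumed gray-level tolerance tol as the similarity criterion:

```python
from collections import deque

def region_grow(image, seed, tol=10):
    # grow a region from a seed pixel, appending 4-neighbours whose
    # gray level is within `tol` of the seed's gray level
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```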
11. List the steps involved in splitting and merging? CO4 K1
Split into 4 disjoint quadrants any region Ri for which P(Ri) = FALSE. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE. Stop when no further merging or splitting is possible.
12. What is meant by markers? CO4 K1
An approach used to control over-segmentation is based on markers. A marker is a connected component belonging to an image. We have internal markers, associated with objects of interest, and external markers, associated with the background.
13. Summarize the 2 principal steps involved in marker selection? CO4 K2
The two steps are
1. Preprocessing
2. Definition of a set of criteria that markers must satisfy.
14. Define chain codes. CO4 K1
Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. Typically this representation is based on 4- or 8-connectivity of the segments. The direction of each segment is coded by using a numbering scheme.
15. What are the demerits of chain code? CO4 K1
* The resulting chain code tends to be quite long.
* Any small disturbance along the boundary due to noise causes changes in the code that may not be related to the shape of the boundary.
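For illustration, an 8-directional chain code of a boundary given as a list of (row, col) pixels; the numbering (0 = East, counter-clockwise through 7 = South-East) is one common convention:

```python
# 8-direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE,
# keyed by the (row, col) step between consecutive boundary pixels
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    # code each step between consecutive boundary pixels
    return [DIRECTIONS[(r2 - r1, c2 - c1)]
            for (r1, c1), (r2, c2) in zip(boundary, boundary[1:])]
```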
16. Recall thinning or skeletonizing algorithm. CO4 K1
An important approach to representing the structural shape of a plane region is to reduce it to a graph. This reduction may be accomplished by obtaining the skeleton of the region via a thinning (skeletonizing) algorithm. Skeletonizing plays a central role in a broad range of problems in image processing, ranging from automated inspection of printed circuit boards to counting of asbestos fibres in air filters.
17. List the various image representation approaches CO4 K1
Chain codes
Polygonal approximation
Boundary segments
18. Summarize polygonal approximation method. CO4 K2
Polygonal approximation is an image representation approach in which a
digital boundary can be approximated with arbitrary accuracy by a polygon.
For a closed curve the approximation is exact when the number of segments in
polygon is equal to the number of points in the boundary so that each pair of
adjacent points defines a segment in the polygon.
19. List the various polygonal approximation methods CO4 K1
Minimum perimeter polygons
Merging techniques
Splitting techniques
20. Recall few boundary descriptors CO4 K1
Simple descriptors
Shape numbers
Fourier descriptors
21. Define length of a boundary. CO4 K1
The length of a boundary is the number of pixels along the boundary. For example, for a chain-coded curve with unit spacing in both directions, the number of vertical and horizontal components plus √2 times the number of diagonal components gives its exact length.

PART-B
1 Experiment with any image segmentation method in detail. 16 CO4 K3

2 Explain Edge Detection methods in detail 16 CO4 K2

3 Make use of thresholding and explain the various methods of thresholding in detail with suitable example. 16 CO4 K3
4 Compare the threshold and region based techniques and analyze any one region based image segmentation technique. 16 CO4 K3
5 Explain the various representation approaches with suitable example. 16 CO4 K2

6 Explain in detail about the Boundary descriptors. 16 CO4 K2

7 Explain in detail about regional descriptors with suitable representation. 16 CO4 K2

8 Explain the two techniques of region representation with suitable example. 16 CO4 K2

9 Explain in detail the segmentation techniques that are based on finding the regions. 16 CO4 K2

*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO4 Apply the concept of various segmentation techniques and representation

UNIT V IMAGE COMPRESSION AND RECOGNITION


1. What is image compression? CO5 K1
Image compression refers to the process of reducing the amount of data required to represent a given quantity of information in a digital image. The basis of the reduction process is the removal of redundant data.
2. Define data Compression. CO5 K1
Data compression requires the identification and extraction of source
redundancy.
In other words, data compression seeks to reduce the number of bits used to
store or transmit information.
3. List two main types of Data compression? CO5 K1
Lossless compression can recover the exact original data after compression. It is used mainly for compressing database records, spreadsheets or word processing files, where exact replication of the original is essential.
Lossy compression will result in a certain loss of accuracy in exchange for a substantial increase in compression. Lossy compression is more effective when used to compress graphic images and digitised voice, where losses outside visual or aural perception can be tolerated.
4. Recall the need for Compression. CO5 K1
In terms of storage, the capacity of a storage device can be effectively increased
with methods that compress a body of data on its way to a storage device and
decompresses it when it is retrieved.
In terms of communications, the bandwidth of a digital communication link can
be effectively increased by compressing data at the sending end and
decompressing data at the receiving end.
At any given time, the ability of the Internet to transfer data is fixed. Thus, if
data can effectively be compressed wherever possible, significant
improvements of data throughput can be achieved. Many files can be combined
into one compressed document making sending easier.
5. Define coding redundancy. CO5 K1
If the gray levels of an image are coded in a way that uses more code words than necessary to represent each gray level, the resulting image is said to contain coding redundancy.
6. Define interpixel redundancy. CO5 K1
The value of any given pixel can be predicted from the values of its neighbors, so the information carried by an individual pixel is relatively small; the visual contribution of a single pixel to an image is therefore redundant. This is otherwise called spatial or geometric redundancy.
7. Outline run length coding. CO5 K2
Run-length encoding (RLE) is a technique used to reduce the size of a repeating string of characters. This repeating string is called a run; typically RLE encodes a run of symbols into two bytes, a count and a symbol. RLE can compress any type of data regardless of its information content, but the content of the data to be compressed affects the compression ratio. Compression is normally measured with the compression ratio.
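A sketch of RLE, encoding each run as a (count, symbol) pair:

```python
def rle_encode(data):
    # collapse each run of identical symbols into a (count, symbol) pair
    runs = []
    for symbol in data:
        if runs and runs[-1][1] == symbol:
            runs[-1] = (runs[-1][0] + 1, symbol)
        else:
            runs.append((1, symbol))
    return runs
```

Note that on data with few repeats (e.g. "ABAB") the encoded form is longer than the input, which is why the content of the data determines the achieved compression ratio.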

8. Define compression ratio. CO5 K1
Compression ratio = original size / compressed size
9. Define psycho visual redundancy? CO5 K1
In normal visual processing certain information has less importance than other
information. So this information is said to be psycho visual redundant.
10. Define encoder. CO5 K1
An encoder compresses the data for storage or transmission. It has two components:
A) Source encoder: responsible for removing coding, interpixel and psychovisual redundancy.
B) Channel encoder: adds controlled redundancy to protect against channel noise.
11. Define source encoder. CO5 K1
The source encoder performs three operations:
1) Mapper: transforms the input data into a (usually non-visual) format designed to reduce interpixel redundancy.
2) Quantizer: reduces the psychovisual redundancy of the input image. This step is omitted if the system is to be error free.
3) Symbol encoder: reduces the coding redundancy. This is the final stage of the encoding process.
12. Summarize the types of decoder. CO5 K2
The source decoder has two components:
a) Symbol decoder: performs the inverse operation of the symbol encoder.
b) Inverse mapper: performs the inverse operation of the mapper.
The channel decoder is omitted if the system is error free.
13. Illustrate the operations performed by error free compression. CO5 K2
1) Devising an alternative representation of the image in which its interpixel redundancies are reduced.
2) Coding the representation to eliminate coding redundancy.
14. Infer the variable Length Coding. CO5 K2
Variable Length Coding is the simplest approach to error free compression. It
reduces only the coding redundancy. It assigns the shortest possible codeword
to the most probable gray levels.
15. Define Huffman coding. CO5 K1
Huffman coding is a popular technique for removing coding redundancy. When coding the symbols of an information source individually, the Huffman code yields the smallest possible number of code symbols per source symbol.
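A sketch of Huffman code construction using a binary heap; the symbol frequencies in the test are illustrative:

```python
import heapq

def huffman_codes(freqs):
    # build a Huffman tree bottom-up and return the prefix code for
    # each symbol; more probable symbols receive shorter code words
    heap = [[weight, [symbol, ""]] for symbol, weight in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two least probable nodes
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])
```

The resulting code is a prefix (instantaneous) code: no code word is a prefix of another, so an encoded stream can be decoded symbol by symbol.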
16. Define Block code CO5 K1
Each source symbol is mapped into a fixed sequence of code symbols or code words, so it is called a block code.
17. Recall instantaneous code. CO5 K1
A code word that is not a prefix of any other code word is called instantaneous
or prefix codeword.
18. Label uniquely decodable code CO5 K1
A code whose code words are not combinations of any other code words is said to be uniquely decodable.
19. Define arithmetic coding. CO5 K1
In arithmetic coding, a one-to-one correspondence between source symbols and code words does not exist; instead, a single arithmetic code word is assigned to an entire sequence of source symbols. A code word defines an interval of numbers between 0 and 1.
20. What is bit plane Decomposition? CO5 K1
An effective technique for reducing an image's interpixel redundancies is to process the image's bit planes individually. This technique is based on the concept of decomposing a multilevel image into a series of binary images and compressing each binary image via one of several well-known binary compression methods.
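Bit-plane decomposition can be sketched as:

```python
def bit_planes(image, bits=8):
    # plane k is the binary image formed from bit k of every pixel
    return [[[(pixel >> k) & 1 for pixel in row] for row in image]
            for k in range(bits)]
```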
21. Explain how the effectiveness of quantization can be improved. CO5 K2
By introducing an enlarged quantization interval around zero, called a dead zone, or by adapting the size of the quantization intervals from scale to scale. In either case, the selected quantization intervals must be transmitted to the decoder with the encoded image bit stream.
PART-B
1 Utilize the concept of data redundancy and explain the three basic data redundancies. 16 CO5 K3

2 Explain any four variable length coding compression schemes. 16 CO5 K2

3 Explain about Image compression model. 16 CO5 K2

4 Explain about Error free Compression. 16 CO5 K2

5 Explain about Lossy compression with example. 16 CO5 K2

6 Explain the schematics of image compression standard JPEG. 16 CO5 K2

7 Summarize how compression is achieved in transform coding and explain about DCT. 16 CO5 K2

8 Explain arithmetic coding with suitable example. 16 CO5 K2

9 Infer about MPEG standard and compare with JPEG. 16 CO5 K2

10. Explain about Image compression standards. 16 CO5 K2

*****
Knowledge Level K1: Remember, K2: Understand, K3: Apply, K4: Analyze, K5: Evaluate, K6: Create
CO5 Compute various compression and recognition methods

Prepared by Approved by
(K.Sheikdavood) (HOD/ECE)

