Data Compression MCQ

The document provides multiple-choice questions on data compression techniques. It covers key concepts such as lossy vs. lossless compression, compression ratio, Huffman coding, and dictionary techniques. Some key points:
- Lossy compression techniques allow some loss of data quality or information to achieve higher compression ratios, and are generally used for media such as images, video, and audio. Lossless techniques allow exact reconstruction but give lower compression.
- Huffman coding is an entropy coding technique that represents frequently occurring symbols with shorter codewords. It constructs a prefix code that is optimal for a given probability distribution of symbols.
- Dictionary techniques for data compaction build a dictionary of frequently occurring patterns in the data and replace repetitions with references to dictionary entries.


UNIT-1

1. Data compression means to ______ the file size.


(A) Increase
(B) Decrease
(C) Can't say
(D) None of the above

Answer

Correct option is B

2. Data compression and encryption both work on binary code.


(A) False
(B) True
Answer
Correct option is B

3. What is compression?
(A) To compress something by pressing it very hard
(B) To minimize the time taken for a file to be downloaded
(C) To reduce the size of data to save space
(D) To convert one file to another
Answer
Correct option is C

4. Data compression usually works by _______.


(A) Deleting random bits of data
(B) Finding repeating patterns
Answer
Correct option is B
5. Why is data compressed?
(A) To optimise the data
(B) To reduce secondary storage space
(C) To reduce packet congestion on networks
(D) Both (B) and (C)
Answer
Correct option is D

6. Which is a type of data compression?


(A) Resolution
(B) Zipping
(C) Inputting
(D) Caching
Answer
Correct option is B
7. Data compression involves
(A) Compression only
(B) Reconstruction only
(C) Both compression and reconstruction
(D) None of the above
Answer
Correct option is C

8. Based on the requirements of reconstruction, data compression schemes can be divided into ____ broad classes.
(A) 3
(B) 4
(C) 2
(D) 5
Answer
Correct option is C

9. _______ compression is a method that eliminates data which is not noticeable, and _______ compression does not eliminate the data which is not noticeable.
(A) Lossless, lossy
(B) Lossy, lossless
(C) None of these
Answer
Correct option is B

10. ______ compression is generally used for applications that cannot tolerate any
difference between the original and reconstructed data.
(A) Lossy
(B) Lossless
(C) Both
(D) None of these
Answer
Correct option is B

11. What is compression ratio?


(A) The ratio of the number of bits required to represent the data before compression
to the number of bits required to represent the data after compression.
(B) The ratio of the number of bits required to represent the data after compression to
the number of bits required to represent the data before compression.
(C) The ratio of the number of bits required to represent the data after reconstruction
to the number of bits required to represent the data before compression.
(D) The ratio of the number of bits required to represent the data before reconstruction
to the number of bits required to represent the data after reconstruction.
Answer
Correct option is A
12. Suppose storing an image made up of a square array of 256×256 pixels requires
65,536 bytes. The image is compressed and the compressed version requires 16,384
bytes. Then compression ratio is _______.
(A) 1:4
(B) 4:1
(C) 1:2
(D) 2:1
Answer
Correct option is B
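
A quick check of the arithmetic in question 12 (a minimal Python sketch; the byte counts come straight from the question):

```python
# Compression ratio = bits (or bytes) before compression / bits after compression.
original_bytes = 256 * 256        # 65,536 bytes for the 256x256 image
compressed_bytes = 16_384         # size after compression

ratio = original_bytes / compressed_bytes
print(f"Compression ratio = {ratio:.0f}:1")   # -> 4:1, matching option (B)
```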

13. Lossy techniques are generally used for the compression of data that originate as
analog signals, such as
(A) Speech
(B) Video
(C) Both
(D) None of these
Answer
Correct option is C

14. If fidelity or quality of a reconstruction is _____, then the difference between the
reconstruction and the original is ______.
(A) High, small
(B) Small, small
(C) High, high
(D) None of the above
Answer
Correct option is A

15. The development of data compression algorithms for a variety of data can be
divided into ____ phases.
(A) 2
(B) 3
(C) 4
(D) 5

Answer
Correct option is A

16. Which of the following is true of lossy and lossless compression techniques?
(A) Lossless compression is only used in situations where lossy compression
techniques can't be used
(B) Lossy compression is best suited for situations where some loss of detail is
tolerable, especially if it will not be detectable by a human
(C) Both lossy and lossless compression techniques will result in some information
being lost from the original file
(D) Neither lossy nor lossless compression can actually reduce the number of bits
needed to represent a file
Answer
Correct option is B

17. Which of the following would not be suitable for Lossy Compression?
(A) Speech
(B) Video
(C) Text
(D) Image
Answer
Correct option is C

18. Which of the following is not in a compressed format?


(A) MP3
(B) Bitmap
(C) MPEG
(D) JPEG
Answer
Correct option is B

19. Information theory was given by


(A) Claude von Regan
(B) Claude Elwood Shannon
(C) Claude Monet
(D) Claude Debussy
Answer
Correct option is B

20. The unit of information depends on the base of the log. If we use log base 2, the
unit is ____; if we use log base e, the unit is ____; and if we use log base 10, the unit
is _____.
(A) Hartleys, nats, bits
(B) Hartleys, bits, nats
(C) Bits, nats, hartleys
(D) Bits, hartleys, nats
Answer
Correct option is C
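
The sketch below illustrates questions 20 and 24: self-information is computed as -log_b(p), and the base b of the logarithm fixes the unit. The probability value is only an example.

```python
import math

# Self-information I(p) = -log_b(p); base 2 -> bits, base e -> nats, base 10 -> hartleys.
# For any probability 0 < p <= 1 the value is non-negative.
p = 0.25  # example probability, chosen for illustration

bits     = -math.log2(p)
nats     = -math.log(p)
hartleys = -math.log10(p)

print(bits, nats, hartleys)   # 2.0 bits, ~1.386 nats, ~0.602 hartleys
```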

21. According to Claude Elwood Shannon's second theorem, it is not feasible to transmit information over the channel with ______ error probability at rates above the channel capacity, no matter which coding technique is used.
(A) Large
(B) May be large or small
(C) Unpredictable
(D) Small
Answer
Correct option is D

22. What is/are the essential condition(s) for a good error control coding technique?
(A) Better error correcting capability
(B) Maximum transfer of information in bits/sec
(C) Faster coding & decoding methods
(D) All of the above
Answer
Correct option is D

23. A prefix code is also called


(A) Block code
(B) Convolutional code
(C) Parity code
(D) Instantaneous code
Answer
Correct option is D

24. Self information should be _____.


(A) Negative
(B) Positive
(C) Both
(D) None of these
Answer
Correct option is B

25. A code in which no codeword is a prefix of another codeword is called


(A) Prefix code
(B) Parity code
(C) Convolutional code
(D) Block code
Answer
Correct option is A
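
As a small illustration of questions 23 and 25, the sketch below checks whether a set of binary codewords is prefix-free, i.e. no codeword is a prefix of another. The example codeword sets are hypothetical.

```python
def is_prefix_free(codewords):
    # A code is a prefix (instantaneous) code if no codeword starts another codeword.
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_free(["0", "10", "110", "111"]))   # True  -> a valid prefix code
print(is_prefix_free(["0", "01", "11"]))           # False -> "0" is a prefix of "01"
```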

26. The set of binary sequences is called a _____, and the individual members of the
set are called _______.
(A) Codewords, code
(B) Code, codewords
(C) None of these
Answer
Correct option is B

27. Full form of ASCII.


(A) American Standard Code for Information Intercaste
(B) American Standard Codewords for Information Interchange
(C) American Standard Code for Information Interchange
(D) American System Code for Information Interchange
Answer
Correct option is C

28. A composite source model is a combination or composition of several sources. How many of these sources are active at any given time?
(A) All
(B) Only one
(C) Only first three
(D) None of these
Answer
Correct option is B

29. For models used in lossless compression, we use a specific type of Markov
process called a
(A) Continuous-time Markov chain
(B) Discrete time Markov chain
(C) Constant time Markov chain
(D) None of the above
Answer
Correct option is B

30. Markov model is often used when developing coding algorithms for
(A) Speech
(B) Image
(C) Both
(D) None of these
Answer
Correct option is C
UNIT-2

1. Huffman codes are ______ codes and are optimum for a given model (set of
probabilities).
(A) Parity
(B) Prefix
(C) Convolutional code
(D) Block code

Answer
Correct option is B
2. The Huffman procedure is based on observations regarding optimum prefix codes,
which is/are
(A) In an optimum code, symbols that occur more frequently (have a higher
probability of occurrence) will have shorter codewords than symbols that occur less
frequently.
(B) In an optimum code, the two symbols that occur least frequently will have the same length.
(C) Both (A) and (B)
(D) None of these
Answer
Correct option is C

3. The best algorithmic approach for constructing Huffman codes is


(A) Brute force algorithm
(B) Divide and conquer algorithm
(C) Greedy algorithm
(D) Exhaustive search
Answer
Correct option is C
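
The sketch below ties questions 1 to 3 together: it builds a Huffman prefix code greedily with Python's heapq, repeatedly merging the two least probable subtrees. The symbol probabilities are illustrative only.

```python
import heapq

def huffman_code(probabilities):
    # Each heap entry: (probability, tie-breaker, {symbol: codeword-so-far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)   # the two least probable subtrees
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'} (bit labels may differ)
```

Note how the more probable symbols end up with the shorter codewords, and the two least probable symbols get codewords of the same length.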

4. How many printable characters does the ASCII character set consist of?
(A) 128
(B) 100
(C) 95
(D) 90
Answer
Correct option is C

5. The difference between the entropy and the average length of the Huffman code is
called
(A) Rate
(B) Redundancy
(C) Power
(D) None of these
Answer
Correct option is B
6. Unit of redundancy is
(A) bits/second
(B) symbol/bits
(C) bits/symbol
(D) none of these
Answer
Correct option is C

7. The redundancy is zero when


(A) The probabilities are positive powers of two
(B) The probabilities are negative powers of two
(C) Both
(D) None of the above
Answer
Correct option is B
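
A small numeric illustration of questions 5 to 7: when the probabilities are negative powers of two, the Huffman codeword lengths match -log2(p) exactly and the redundancy (average length minus entropy) is zero. The distribution below is an example, not taken from the text.

```python
import math

probs   = [0.5, 0.25, 0.125, 0.125]           # illustrative dyadic distribution
lengths = [1, 2, 3, 3]                         # matching Huffman codeword lengths

entropy    = -sum(p * math.log2(p) for p in probs)       # 1.75 bits/symbol
avg_length = sum(p * l for p, l in zip(probs, lengths))  # 1.75 bits/symbol
print(avg_length - entropy)                              # 0.0 -> zero redundancy
```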

8. Which bit is reserved as a parity bit in an ASCII set?


(A) Sixth
(B) Seventh
(C) Eighth
(D) Ninth
Answer
Correct option is C

9. How many bits are needed for standard encoding if the size of the character set is X?
(A) X+1
(B) log(X)
(C) X²
(D) 2X
Answer
Correct option is B

10. In Huffman coding, data in a tree always occur in


(A) Leaves
(B) Roots
(C) Left sub trees
(D) None of these
Answer
Correct option is A

11. An optimal code is always represented by a full tree.


(A) True
(B) False
Answer
Correct option is A
12. Running time of the Huffman encoding algorithm is
(A) O(Nlog(C))
(B) O(Clog(C))
(C) O(C)
(D) O(log(C))
Answer
Correct option is B

13. What is the running time of the Huffman algorithm if the priority queue is implemented using linked lists?
(A) O(log(C))
(B) O(Clog(C))
(C) O(C²)
(D) O(C)
Answer
Correct option is C

14. The unary code for a positive integer n is simply n ___ followed by a ___.
(A) zero, ones
(B) ones, zero
(C) None of these
Answer
Correct option is B

15. The unary code for 4 is ______.


(A) 11100
(B) 11110
(C) 00001
(D) 00011
Answer
Correct option is B
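
A one-line sketch of the unary code convention used in questions 14 and 15 (n ones followed by a zero):

```python
def unary_encode(n):
    # Unary code for a positive integer n: n ones followed by a single zero.
    return "1" * n + "0"

print(unary_encode(4))   # 11110
```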

16. In the Tunstall code, all codewords are of _____ length. However, each codeword
represents a _________ number of letters.
(A) different, equal
(B) equal, different
(C) none of these
Answer
Correct option is B

17. Tunstall coding is a form of entropy coding used for


(A) Lossless data compression
(B) Lossy data compression
(C) Both
(D) None of these
Answer
Correct option is A

18. The main advantage of a Tunstall code is that


(A) Errors in codewords do not propagate
(B) Errors in codewords propagate
(C) The disparity between frequencies
(D) None of these
Answer
Correct option is A
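
To make questions 16 to 18 concrete, the sketch below encodes a string with a small Tunstall codebook: every codeword is 2 bits long, but each codeword stands for a variable number of source letters. The alphabet, codebook, and input string are hypothetical.

```python
# A valid Tunstall codebook for the two-letter alphabet {A, B}: fixed-length
# (2-bit) codewords for variable-length source strings.
codebook = {"AAA": "00", "AAB": "01", "AB": "10", "B": "11"}

def tunstall_encode(text):
    out = []
    while text:
        # Greedily match the longest codebook entry at the front of the input.
        entry = next(e for e in sorted(codebook, key=len, reverse=True)
                     if text.startswith(e))
        out.append(codebook[entry])
        text = text[len(entry):]
    return "".join(out)

print(tunstall_encode("AABABAAA"))   # 01 10 00 -> "011000"
```

Because every codeword has the same length, a bit error corrupts only the block it occurs in and does not propagate to later codewords.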

19. Applications of Huffman Coding


(A) Text compression
(B) Audio compression
(C) Lossless image compression
(D) All of the above
Answer
Correct option is D
UNIT-3

1. In dictionary techniques for data compaction, which approach to building the dictionary is used when there is prior knowledge of the probabilities of the frequently occurring patterns?
(A) Adaptive dictionary
(B) Static dictionary
(C) Both
(D) None of the above

Answer
Correct option is B

2. If the probability of encountering a pattern from the dictionary is p, then the average number of bits per pattern R is given by
(A) R = 21 - 12p
(B) R = 9 - p
(C) R = 21 - p
(D) R = 12 - p
Answer
Correct option is A
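
One hedged reconstruction of where R = 21 - 12p comes from (the bit counts below are assumed, not stated in the question): if a pattern found in the dictionary costs 9 bits and a pattern not in the dictionary costs 21 bits, the average is R = 9p + 21(1 - p) = 21 - 12p.

```python
def avg_bits_per_pattern(p, in_dict_bits=9, miss_bits=21):
    # Weighted average of the two cases: pattern in dictionary vs. not.
    return in_dict_bits * p + miss_bits * (1 - p)

print(avg_bits_per_pattern(0.5))   # 15.0 bits per pattern
print(21 - 12 * 0.5)               # 15.0, matching option (A)
```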

3. Static dictionary –
(A) permanent
(B) sometimes allowing the addition of strings but no deletions
(C) allowing for additions and deletions of strings as new input symbols are being
read
(D) Both (A) and (B)
(E) Both (A) and (C)
Answer
Correct option is D

4. Adaptive dictionary –
(A) holding strings previously found in the input stream
(B) sometimes allowing the addition of strings but no deletions
(C) allowing for additions and deletions of strings as new input symbols are being
read
(D) Both (A) and (B)
(E) Both (A) and (C)
Answer
Correct option is E

5. LZ77 and LZ78 are the two __________ algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978, respectively.
(A) Lossy data compression
(B) Lossless data compression
(C) Both
(D) None of the above
Answer
Correct option is B

6. Deflate = ________
(A) LZ78 + Huffman
(B) LZ77 + Huffman
(C) LZW + Huffman
(D) None of these
Answer
Correct option is B
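
Since Deflate = LZ77 + Huffman (question 6), Python's zlib module, which wraps a Deflate implementation, can be used to see the effect on repetitive input; the sample string is arbitrary.

```python
import zlib

data = b"abcabcabcabcabcabcabcabcabcabc"   # highly repetitive example input
compressed = zlib.compress(data)           # Deflate: LZ77-style matching + Huffman coding

print(len(data), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == data  # lossless: exact reconstruction
```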

7. Full form of GIF.


(A) Graphics Interchange Form
(B) Graphics Inter Format
(C) Graphics Interchange Format
(D) Graphics Interact Format
Answer
Correct option is C

8. LZ78 has _____ compression but very _____ decompression compared to LZ77.
(A) fast, slow
(B) slow, fast
(C) None of these
Answer
Correct option is B

9. Compression packages which use an LZ77-based algorithm followed by a variable-length coder:
(A) PKZip
(B) Zip
(C) PNG
(D) All of the above
Answer
Correct option is D

10. Application of LZW


(A) GIF
(B) Zip
(C) PNG
(D) All of the above

Answer
Correct option is A
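
A compact sketch of the LZW encoder behind GIF (question 10). It emits dictionary indices rather than a packed bitstream, which keeps the example short; the input string is illustrative.

```python
def lzw_encode(data: bytes):
    # Start with a dictionary of all single bytes, then grow it as patterns repeat.
    dictionary = {bytes([i]): i for i in range(256)}
    current, output = b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                        # keep extending the match
        else:
            output.append(dictionary[current])         # emit index of longest match
            dictionary[candidate] = len(dictionary)    # add the new pattern
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

print(lzw_encode(b"ABABABA"))   # [65, 66, 256, 258]
```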

11. Which algorithm is used for solving temporal probabilistic reasoning?


(A) Depth-first search
(B) Hidden Markov model
(C) Hidden Markov model
(D) Breadth-first search
Answer
Correct option is C

12. Where is the Hidden Markov Model used?


(A) Understanding of real world
(B) Speech recognition
(C) Both
(D) None of the above
Answer
Correct option is B

13. A coding scheme that takes advantage of long runs of identical symbols is called
(A) Move-to-front coding
(B) Binary coding
(C) Huffman coding
(D) Move-to-back coding
Answer
Correct option is A
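
A short move-to-front sketch for question 13: recently used symbols migrate to the front of the table, so runs of identical symbols turn into runs of zeros. The alphabet and input are examples.

```python
def move_to_front_encode(data, alphabet):
    table = list(alphabet)
    output = []
    for symbol in data:
        index = table.index(symbol)
        output.append(index)
        table.insert(0, table.pop(index))   # move the symbol to the front
    return output

print(move_to_front_encode("aaabbbb", "abc"))   # [0, 0, 0, 1, 0, 0, 0]
```
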
UNIT-4

1. Which of the following characterizes a quantizer


(A) Quantization results in a non-reversible loss of information
(B) A quantizer always produces uncorrelated output samples
(C) The output of a quantizer has the same entropy rate as the input
(D) None of the above

Answer
Correct option is A

2. What is the signal-to-noise ratio (SNR)?


(A) The ratio of the average squared value of the source output and the squared error
of the source output
(B) The ratio of the average squared value of the source output and the mean squared
error of the source output
(C) The ratio of the average squared value of the source output and the absolute
difference measure of the source output
(D) None of the above
Answer
Correct option is B
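
A numeric illustration of the SNR definition in question 2, using made-up source and reconstruction values:

```python
import math

# SNR = (average squared value of the source output) / (mean squared error).
source        = [1.0, -2.0, 3.0, -4.0]
reconstructed = [1.1, -1.9, 2.9, -4.1]

signal_power = sum(x * x for x in source) / len(source)
mse          = sum((x - y) ** 2 for x, y in zip(source, reconstructed)) / len(source)

snr    = signal_power / mse
snr_db = 10 * math.log10(snr)
print(f"SNR = {snr:.1f} ({snr_db:.1f} dB)")
```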

3. Which property does the output signal of a scalar quantizer have?


(A) The output is a discrete signal with a finite symbol alphabet
(B) The output is a discrete signal with a countable symbol alphabet (but not
necessarily a finite symbol alphabet)
(C) The output signal may be discrete or continuous
(D) None of the above
Answer
Correct option is B

4. What is a Lloyd quantizer?


(A) For a given source, the Lloyd quantizer is the best possible scalar quantizer in the rate-distortion sense. That means there does not exist any other scalar quantizer that yields a smaller distortion at the same rate.
(B) The output of a Lloyd quantizer is a discrete signal with a uniform pmf
(C) Both (A) and (B)
(D) A Lloyd quantizer is the scalar quantizer that yields the minimum distortion for a
given source and a given number of quantization intervals.
Answer
Correct option is D

5. Which of the following statements is correct when comparing scalar quantization and vector quantization?
(A) Vector quantization improves the performance only for sources with memory. For
iid sources, the best scalar quantizer has the same efficiency as the best vector
quantizer
(B) Vector quantization does not improve the rate-distortion performance relative to
scalar quantization, but it has a lower complexity
(C) By vector quantization we can always improve the rate-distortion performance
relative to the best scalar quantizer
(D) All of the above
Answer
Correct option is C

6. If {xn} is the source output and {yn} is the reconstructed sequence, then the squared error measure is given by
(A) d(x, y) = (y - x)²
(B) d(x, y) = (x - y)²
(C) d(x, y) = (y + x)²
(D) d(x, y) = (x - y)⁴
Answer
Correct option is B

7. If {xn} is the source output and {yn} is the reconstructed sequence, then the absolute difference measure is given by
(A) d(x, y) = |y - x|
(B) d(x, y) = |x - y|
(C) d(x, y) = |y + x|
(D) d(x, y) = |x - y|²
Answer
Correct option is B

8. The process of representing a _______ (possibly infinite) set of values with a much _______ set is called quantization.
(A) Large, smaller
(B) Smaller, large
(C) None of these
Answer
Correct option is A

9. The set of inputs and outputs of a quantizer can be


(A) Only scalars
(B) Only vectors
(C) Scalars or vectors
(D) None of these
Answer
Correct option is C

10. Which of the following is/are correct for a uniform quantizer?


(A) The simplest type of quantizer is the uniform quantizer
(B) All intervals are the same size in the uniform quantizer, except possibly for the
two outer intervals
(C) The decision boundaries are spaced evenly
(D) All of the above
Answer
Correct option is D

11. If zero is assigned as a decision level, then what type of quantizer is it?
(A) A midtread quantizer
(B) A midrise quantizer
(C) A midtreat quantizer
(D) None of the above
Answer
Correct option is B

12. If zero is assigned as a quantization level, then what type of quantizer is it?
(A) A midtread quantizer
(B) A midrise quantizer
(C) A midtreat quantizer
(D) None of the above
Answer
Correct option is A
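
The sketch below contrasts the two uniform quantizers from questions 10 to 12 for a step size delta, following the usual textbook convention (a hedged illustration, not taken from the text): the midrise quantizer puts a decision boundary at zero, the midtread quantizer puts a reconstruction level at zero.

```python
import math

def midrise(x, delta):
    # Zero is a decision boundary; outputs are +/- delta/2, +/- 3*delta/2, ...
    return delta * (math.floor(x / delta) + 0.5)

def midtread(x, delta):
    # Zero is a reconstruction level; outputs are 0, +/- delta, +/- 2*delta, ...
    return delta * round(x / delta)

print(midrise(0.1, 1.0), midtread(0.1, 1.0))    # 0.5 0.0
print(midrise(-0.1, 1.0), midtread(-0.1, 1.0))  # -0.5 0.0
```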

13. The main approaches to adapting the quantizer parameters are:


(A) An off-line or forward adaptive approach
(B) An on-line or backward adaptive approach
(C) Both
(D) None of the above
Answer
Correct option is C

14. A uniform quantizer is also called a


(A) Low rise quantizer
(B) High rise quantizer
(C) Mid rise quantizer
(D) None of the above
Answer
Correct option is C

15. A non-uniform quantizer ______ distortion.


(A) Decrease
(B) Increase
(C) Doesn't change
(D) None of the above
Answer
Correct option is A

16. The spectral density of white noise is ______.


(A) Poisson
(B) Exponential
(C) Uniform
(D) Gaussian
Answer
Correct option is C
UNIT-5

1. Which of the following is a characteristic of a vector quantizer?


(A) Multiple quantization indexes are represented by one codeword
(B) Each input symbol is represented by a fixed-length codeword
(C) Multiple input symbols are represented by one quantization index
(D) All of the above

Answer
Correct option is C

2. Why is vector quantization rarely used in practical applications?


(A) The coding efficiency is the same as for scalar quantization
(B) The computational complexity, in particular for the encoding, is much higher than
in scalar quantization and a large codebook needs to be stored
(C) It requires block Huffman coding of quantization indexes, which is very complex
(D) All of the above
Answer
Correct option is B
3. Let N represent the dimension of a vector quantizer. What statement about the
performance of the best vector quantizer with dimension N is correct?
(A) For N approaching infinity, the quantizer performance asymptotically approaches
the rate-distortion function (theoretical limit)
(B) By doubling the dimension N, the bit rate for the same distortion is halved
(C) The vector quantizer performance is independent of N
(D) All of the above
Answer
Correct option is A

4. Which of the following is/are correct regarding the advantages of vector quantization over scalar quantization?
(A) Vector Quantization can lower the average distortion with the number of
reconstruction levels held constant
(B) Vector Quantization can reduce the number of reconstruction levels when
distortion is held constant
(C) Vector Quantization is also more effective than Scalar Quantization when the source output values are not correlated
(D) All of the above
Answer
Correct option is D

5. Vector quantization is used for


(A) Lossy data compression
(B) Lossy data correction
(C) Pattern recognition
(D) All of the above
Answer
Correct option is D

6. The Linde–Buzo–Gray algorithm is a ______ quantization algorithm to derive a good codebook.
(A) Scalar
(B) Vector
(C) Both
(D) None of the above
Answer
Correct option is B
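
A minimal vector-quantization sketch tying together questions 1 and 6: each input vector maps to the index of its nearest codebook vector, so several samples are represented by one quantization index. The codebook is hand-picked for illustration; in practice it would be trained, for example with the Linde–Buzo–Gray (generalized Lloyd) algorithm.

```python
# Illustrative 2-D codebook; a trained codebook would replace these vectors.
codebook = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def vq_encode(vector):
    # Return the index of the closest codebook entry (the quantization index).
    return min(range(len(codebook)),
               key=lambda i: squared_distance(vector, codebook[i]))

print(vq_encode((0.9, 1.2)))   # 1
print(vq_encode((3.5, 4.2)))   # 2
```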

7. Vector quantization is used in


(A) Video coding
(B) Audio coding
(C) Speech coding
(D) All of the above
Answer
Correct option is C

8. Which processes (techniques) are used in video coding?


(A) Partition of frames into macro blocks
(B) Form of Vector Quantization
(C) Both (A) & (B)
(D) None of these
Answer
Correct option is C
