
TANTA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATICS

EXAMINATION FOR (LEVEL 3)
COURSE TITLE: INFORMATION THEORY AND DATA COMPRESSION    COURSE CODE: IT351
DATE: / / 2024    TOTAL ASSESSMENT MARKS:    TIME ALLOWED: - HOUR

Choose the correct answer

1. The Huffman procedure is based on observations regarding optimum prefix codes,
which is/are:
(A) In an optimum code, symbols that occur more frequently (have a higher probability of
occurrence) will have shorter codewords than symbols that occur less frequently.
(B) In an optimum code, the two symbols that occur least frequently will have codewords of
the same length.
(C) Both (A) and (B)
(D) None of these.

2. Information is
(A) Data (B) Meaningful data (C) Raw data (D) Both A and B
3. In Huffman coding, the data (symbols) in the tree always occur at
(A) Leaves (B) Roots (C) Left subtrees (D) None of these

4. Applications of Huffman coding
(A) Text compression (B) Audio compression
(C) Lossless image compression (D) All of the above.

5. An alphabet consists of the letters A, B, C and D. The probabilities of occurrence are
P(A) = 0.4, P(B) = 0.1, P(C) = 0.2 and P(D) = 0.3. The Huffman code is
(A) A = 0, B = 111, C = 110, D = 10
(B) A = 0, B = 100, C = 101, D = 11
(C) A = 0, B = 111, C = 11, D = 101
(D) A = 0, B = 11, C = 10, D = 111
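Constructions like the one in question 5 can be checked mechanically. The following is a minimal Python sketch of the standard greedy merge of the two least probable symbols; the exact 0/1 assignments depend on tie-breaking and may differ from a given answer choice, but the codeword lengths are what matter.

```python
import heapq

def huffman_code(probs):
    # Heap entries: (probability, tie-breaker, {symbol: codeword-so-far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)   # least probable subtree
        p2, _, codes2 = heapq.heappop(heap)   # second least probable subtree
        # Merging two subtrees prepends one bit to every codeword in each.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

print(huffman_code({"A": 0.4, "B": 0.1, "C": 0.2, "D": 0.3}))
# Typical result: codeword lengths 1, 3, 3, 2 for A, B, C, D respectively.
```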
6. The basic idea behind Huffman coding is to
(A) compress data by using fewer bits to encode less frequently occurring characters
(B) compress data by using fewer bits to encode more frequently occurring characters
(C) compress data by using more bits to encode more frequently occurring characters
(D) expand data by using fewer bits to encode more frequently occurring characters

7. Huffman coding is an encoding algorithm used for
(A) lossless data compression (B) broadband systems
(C) files greater than 1 Mbit (D) lossy data compression

8. A Huffman encoder takes a set of characters with fixed length and produces a set of
characters of
(A) random length (B) fixed length (C) variable length (D) constant length
9. A Huffman code: A = 1, B = 000, C = 001, D = 01, with P(A) = 0.4, P(B) = 0.1, P(C) = 0.2,
P(D) = 0.3. The average number of bits per letter is
(A) 8.0 bits (B) 2.1 bits (C) 2.0 bits (D) 1.9 bits
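For questions of this form, the average length is the probability-weighted sum of the codeword lengths. A minimal Python check of this kind of calculation:

```python
# Average codeword length L = sum over symbols of p(s) * len(codeword(s)).
code = {"A": "1", "B": "000", "C": "001", "D": "01"}
prob = {"A": 0.4, "B": 0.1, "C": 0.2, "D": 0.3}
avg = sum(prob[s] * len(code[s]) for s in code)
print(avg)  # 0.4*1 + 0.1*3 + 0.2*3 + 0.3*2 = 1.9 bits per letter
```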

10. Which type of method is used to compress data made up of a combination
of symbols?
(A) Run-length encoding (B) Huffman encoding
(C) Lempel-Ziv encoding (D) JPEG encoding

11. Data compression and encryption both work on binary code.
(A) False (B) True

12. Data compression usually works by _________.
(A) Deleting random bits of data (B) Finding repeating patterns

13. Based on the requirements of reconstruction, data compression schemes can be
divided into _________ broad classes.
(A) 3 (B) 4 (C) 2 (D) 5

14. _________ compression is generally used for applications that cannot tolerate any
difference between the original and reconstructed data.
(A) Lossy (B) Lossless (C) Both A and B (D) None of these

15. Information theory was given by
(A) Claude von Regan (B) Claude Elwood Shannon (C) Claude Monet (D) Claude Debussy

16. The unit of information depends on the base of the log. If we use log base 2, the unit
is _________; if we use log base e, the unit is _________; and if we use log base 10, the
unit is _________.
(A) Hartleys, nats, bits (B) Hartleys, bits, nats
(C) Bits, nats, hartleys (D) Bits, hartleys, nats
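As an illustration of question 16, the self-information of an event with probability p is -log(p), and only the base of the logarithm changes between the units. A small Python sketch (the probability 0.25 is an arbitrary example value):

```python
import math

p = 0.25  # arbitrary example probability
bits = -math.log2(p)       # log base 2  -> bits
nats = -math.log(p)        # log base e  -> nats
hartleys = -math.log10(p)  # log base 10 -> hartleys
print(bits, nats, hartleys)  # 2.0, ~1.386, ~0.602
```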

17. A code in which no codeword is a prefix of another codeword is called a
(A) Prefix code (B) Parity code (C) Convolutional code (D) Block code

18. The set of binary sequences is called a _________, and the individual members of the set
are called _________.
(A) Codewords, code (B) Code, codewords (C) None of the above

19. Which of the following are lossless methods?
(A) Run-length (B) Huffman (C) Predictive (D) All of the above

20. Huffman codes are _________ codes and are optimum for a given model (set of probabilities).
(A) Parity (B) Prefix (C) Convolutional code (D) Block code
21. LZ78 has _________ compression but very _________ decompression compared with LZ77.
(A) fast, slow (B) slow, fast (C) None of these

22. Application of LZW
(A) GIF (B) ZIP (C) PNG (D) All of the above

23. Without losing quality, JPEG-2000 can achieve a compression ratio of
(A) 2:1 (B) 200:1 (C) 2000:1 (D) 20:1

24. The expansion of LZ coding is
(A) Lossy (B) Lossless (C) Lempel-Ziv-Welch (D) Lempel-Ziv

25. The LZ77 algorithm works on _________ data, whereas the LZ78 algorithm attempts to work on
_________ data.
(A) future, past (B) past, future (C) present, future (D) past, present

Question Two: Put true or false

1. Algorithms used in lossy compression include transform coding. ( )

2. Lossless compression is also termed irreversible compression. ( )

3. To perform a DCT transformation on an image, first we have to fetch the image file
information, then we apply the DCT on that block of data. ( )

4. In LZW, the dictionary of phrases was defined by a fixed-length code of previously seen
text. ( )

5. In LZW, with the initialization of the table being blanks, compression is achieved by using
codes 259 through 4095 to represent sequences of bytes. ( )

6. Using the LZW algorithm to compress the sequence BABAABAAA, the code representing BA
is 258. ( )
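For LZW questions like the one above, it helps to trace the dictionary as it grows. Below is a minimal Python sketch; the initial dictionary and the starting code number are assumptions for illustration (here the single letters A and B are pre-loaded at codes 256 and 257), and the course may number its dictionary differently.

```python
def lzw_encode_trace(data, dictionary):
    """LZW-encode `data`, printing each new dictionary entry as it is added."""
    next_code = max(dictionary.values()) + 1
    current = ""
    output = []
    for ch in data:
        if current + ch in dictionary:
            current += ch            # extend the current phrase
        else:
            output.append(dictionary[current])
            dictionary[current + ch] = next_code
            print(f"added {current + ch!r} -> {next_code}")
            next_code += 1
            current = ch
    output.append(dictionary[current])
    return output

# Assumed initial dictionary: only the two source letters, at codes 256 and 257.
print(lzw_encode_trace("BABAABAAA", {"A": 256, "B": 257}))
```

With this particular initialization the first new entry is the phrase BA; under a different numbering convention its code would shift accordingly.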

7. The adaptor uses information extra to the data to adapt the model (more or less)
continuously to the data. ( )

8. A prefix code is a code system, typically a variable-length code, with the "prefix
property": no valid code word in the system is a prefix (start) of any other valid code
word in the set. ( )

9. Uncompressed audio and video files require less memory than compressed files ( )

10.Lossless compression permanently deletes the data. ( )
