Digital Communications Lab (CE-343L)
EXPERIMENT NO. 06
DATE OF EXPERIMENT: ___________________
INSTRUCTOR: ADNAN ZAFAR
SUBMITTED BY: ___________________
_______________________________________________________________________________________
Entropy:
The entropy of a discrete memoryless source is defined as

H(X) = - Σ p(x) log p(x)

where the sum runs over the source alphabet X and p(x) is the probability of the letter x. The base
of the logarithm is chosen to be 2, which results in the entropy being expressed in bits. For a
binary alphabet with probabilities p and 1-p, the entropy is denoted by Hb(p) and is given by

Hb(p) = -p log p - (1-p) log(1-p)
The entropy of a source provides an essential bound on the number of bits required to represent a
source for full recovery. In other words, the average number of bits per source output required to
encode a source for error-free recovery can be made as close to H(X) as we desire, but cannot be
less than H(X).
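As a quick illustration, the MATLAB sketch below evaluates the two entropy expressions above. The
probability values are assumed examples, not part of the lab problem.

% Minimal sketch: evaluating the entropy expressions above.
% The probability values are illustrative, not the lab problem.
Hb = @(p) -p.*log2(p) - (1-p).*log2(1-p);   % binary entropy function Hb(p)
Hb(0.5)                                      % maximum value: 1 bit

px = [0.5 0.25 0.25];                        % an example source distribution
H  = -sum(px .* log2(px))                    % H(X) = 1.5 bits per source output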
Noiseless Coding:
Noiseless coding is the general term for all schemes that reduce the number of bits required for
the representation of a source output for perfect recovery. The noiseless coding theorem, due
to Shannon, states that for perfect reconstruction of a source it is possible to use a code with a rate
as close to the entropy of the source as we desire, but it is not possible to use a code with a rate
less than the source entropy. In other words, for any ε > 0, we can have a code with rate less than
H(X) + ε, but we cannot have a code with rate less than H(X), regardless of the complexity of the
encoder and the decoder. There exist various algorithms for noiseless source coding; Huffman
coding and Lempel-Ziv coding are two examples.
Huffman Coding:
In Huffman coding we assign longer code words to the less probable source outputs and shorter
code words to the more probable ones. To do this we start by merging the two least probable
source outputs to generate a new merged output whose probability is the sum of the
corresponding probabilities. This process is repeated until only one merged output is left. In this
way we generate a tree. Starting from the tree and assigning 0's and 1's to the two branches
emerging from the same node, we generate the code. It can be shown that in this way we
generate a code with minimum average length among the class of prefix-free codes.
It should be noted that the Huffman coding algorithm does not result in a unique code due
to the arbitrary way of assigning 0's and 1's to different branches.
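As an illustration of the merging procedure described above, the following MATLAB sketch builds
the codewords for an assumed four-letter source by repeatedly combining the two least probable
groups; the probabilities and variable names are illustrative only.

% Sketch of the Huffman merge procedure for an assumed example source.
p = [0.4 0.3 0.2 0.1];            % example probabilities (not the lab problem)
n = numel(p);
codes  = repmat({''}, 1, n);      % codeword being built for each source letter
groups = num2cell(1:n);           % each group starts as a single letter
probs  = p;                       % probability of each group

while numel(probs) > 1
    [~, idx] = sort(probs);       % locate the two least probable groups
    a = idx(1); b = idx(2);
    % prepend 0 to the codewords in one group and 1 to those in the other
    for s = groups{a}, codes{s} = ['0' codes{s}]; end
    for s = groups{b}, codes{s} = ['1' codes{s}]; end
    % merge the two groups into one with the summed probability
    groups{a} = [groups{a} groups{b}];
    probs(a)  = probs(a) + probs(b);
    groups(b) = []; probs(b) = [];
end

disp(codes)                        % codeword lengths here come out as 1, 2, 3, 3

Because the 0/1 assignment at each merge is arbitrary, another implementation may produce a
different but equally optimal code with the same codeword lengths.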
_______________________________________________________________________________________
Problem:
A discrete memoryless information source with alphabet X = {x1, x2, x3, ..., x9} and the
corresponding probabilities P = {0.20, 0.15, 0.13, 0.12, 0.10, 0.09, 0.08, 0.07, 0.06} is to be
coded using Huffman coding.
1. Determine the entropy of the source
2. Find a Huffman Code for the source and determine the efficiency of the Huffman code
Hint: Efficiency = Entropy / Average Codeword length
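One possible MATLAB outline for the two tasks is sketched below. It assumes the Communications
Toolbox function huffmandict is available; the variable names are illustrative.

% Sketch of the lab tasks (assumes the Communications Toolbox for huffmandict).
P = [0.20 0.15 0.13 0.12 0.10 0.09 0.08 0.07 0.06];   % given source probabilities
H = -sum(P .* log2(P));                               % 1. entropy in bits per source output

[dict, avglen] = huffmandict(1:9, P);                 % 2. Huffman code and average codeword length
efficiency = H / avglen;                              %    Efficiency = Entropy / Average length
fprintf('H = %.4f bits, L = %.4f bits, efficiency = %.4f\n', H, avglen, efficiency)

Note that huffmandict may assign 0's and 1's differently from a hand construction; the codeword
lengths and the average length are what should match, and the code tree required in the report still
has to be drawn from the merge steps.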
LAB Report
Please include the following in your lab report.
1. Both the above tasks should be performed using MATLAB.
2. You need to provide the Huffman code tree along with the lab report.