Communication System CH#2
(ECEg4271)
By: H/MARYAM G.
For example, a binary source emitting two equiprobable symbols has entropy

H(X) = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1\ \text{bit/symbol}
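More generally, the entropy of any discrete memoryless source can be evaluated numerically. A minimal Python sketch (the function name and the second example distribution are illustrative, not from the slides):

```python
import math

def entropy(probs):
    """Entropy H(X), in bits per symbol, of a discrete memoryless source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit/symbol: the equiprobable binary source above
print(entropy([0.9, 0.1]))   # ≈ 0.469 bits/symbol: a highly skewed binary source
```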
▶ The amount of uncertainty remaining about the channel input after observing the channel output is called the conditional entropy, H(X|Y). The conditional entropy of the channel output given the channel input is defined analogously as:

H(Y|X) = -\sum_{j=1}^{n}\sum_{i=1}^{m} p(x_i, y_j)\,\log_2 p(y_j \mid x_i) \qquad (7)
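As a numerical illustration of Eq. (7), the sketch below computes H(Y|X) from a joint probability table. The joint pmf used here is an assumed example (a binary symmetric channel with crossover probability 0.1 and equiprobable inputs), not taken from the slides:

```python
import math

def conditional_entropy(joint):
    """H(Y|X) = -sum_{i,j} p(x_i, y_j) log2 p(y_j | x_i), with joint[i][j] = p(x_i, y_j)."""
    h = 0.0
    for row in joint:              # one row per input symbol x_i
        p_x = sum(row)             # marginal p(x_i)
        for p_xy in row:
            if p_xy > 0:
                h -= p_xy * math.log2(p_xy / p_x)   # p(y_j | x_i) = p(x_i, y_j) / p(x_i)
    return h

# Assumed joint pmf: equiprobable binary input through a BSC with crossover 0.1.
joint = [[0.45, 0.05],
         [0.05, 0.45]]
print(conditional_entropy(joint))  # ≈ 0.469 bits/symbol
```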
▶ Let Lmin denote the minimum possible value of L. We then define the coding efficiency of the source encoder as:

\eta = \frac{L_{\min}}{L} \qquad (17)
▶ With L ≥ Lmin , we clearly have η ≤ 1. The source encoder is said to be
efficient when η approaches unity.
Source Coding
▶ But how is the minimum value Lmin determined? The answer to this fun-
damental question is embodied in Shannon’s source-coding theorem.
▶ According to the source-coding theorem, the entropy H(X) represents a
fundamental limit on the average number of bits per source symbol neces-
sary to represent a discrete memoryless source in that it can be made as
small as, but no smaller than, the entropy H(X).
▶ Thus with Lmin = H(X), we may rewrite the efficiency of a source encoder
in terms of the entropy H(X) as:
\eta = \frac{H(X)}{L} \qquad (18)
▶ From the discussion above, it is seen that the most desirable of the four codes is code 3, which is uniquely decodable, instantaneous, and has the smallest average codeword length. It is an example of a Huffman code.
▶ Note that in Shannon–Fano encoding, ambiguity may arise in the choice of approximately equiprobable sets.
Huffman Encoding Algorithm
▶ An optimal (shortest expected length) prefix code for a given distribution
can be constructed by a simple algorithm discovered by Huffman.
▶ The basic idea behind Huffman coding is to assign to each symbol of
an alphabet a sequence of bits roughly equal in length to the amount of
information conveyed by the symbol in question.
▶ The end result is a source code whose average code word length approaches
the fundamental limit set by the entropy of a discrete memoryless source.
▶ The essence of the algorithm used to synthesize the Huffman code is to
replace the prescribed set of source statistics of a discrete memoryless
source with a simpler one.
▶ This reduction process is continued in a step-by-step manner until we are
left with a final set of only two source statistics (symbols), for which (0,
1) is an optimal code.
▶ Starting from this trivial code, we then work backward and thereby con-
struct the Huffman code for the given source.
▶ Generally, the Huffman encoding algorithm proceeds as follows:
① The source symbols are listed in order of decreasing probability. The two source
symbols of lowest probability are assigned 0 and 1. This part of the step is referred
to as the splitting stage.
② These two source symbols are then combined into a new source symbol with proba-
bility equal to the sum of the two original probabilities. (The list of source symbols,
and, therefore, source statistics, is thereby reduced in size by one.) The probability
of the new symbol is placed in the list in accordance with its value.
③ The procedure is repeated until we are left with a final list of source statistics (sym-
bols) of only two for which the symbols 0 and 1 are assigned.
▶ The codeword for each (original) source symbol is found by working backward and tracing the sequence of 0s and 1s assigned to that symbol as well as to its successors. A minimal code sketch of this procedure is given below.
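As an illustration of the algorithm above, here is a minimal Python sketch (not from the slides; function and variable names are illustrative). It uses a priority queue in place of an explicitly re-sorted list, but it performs the same pairwise combining of the two least probable symbols and the same backward assignment of 0s and 1s:

```python
import heapq

def huffman_code(probabilities):
    """Return a dict mapping each symbol to its binary Huffman codeword."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Splitting stage: take the two least probable entries; prepend 0 and 1.
        p0, _, group0 = heapq.heappop(heap)
        p1, _, group1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in group0.items()}
        merged.update({s: "1" + c for s, c in group1.items()})
        # Combine into a new "symbol" whose probability is the sum of the two.
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

# The five-symbol source of the example that follows:
probs = {"s0": 2/5, "s1": 1/5, "s2": 1/5, "s3": 1/10, "s4": 1/10}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code, round(avg_len, 2))  # average codeword length 2.2 bits/symbol
```

The particular codewords depend on how ties between equal probabilities are broken, but the average codeword length of the resulting Huffman code is the same.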
▶ Example: The five symbols of the alphabet of a DM source (s0, s1, s2, s3, s4) and their probabilities (2/5, 1/5, 1/5, 1/10, 1/10) are shown in Fig. 3 below.
The average codeword length is therefore:

E[L] = 2 \times \tfrac{2}{5} + 2 \times \tfrac{1}{5} + 2 \times \tfrac{1}{5} + 3 \times \tfrac{1}{10} + 3 \times \tfrac{1}{10} = 2.2
The entropy of the specified DM source is calculated as follows:

H(X) = 0.4 \log_2\frac{1}{0.4} + 2 \times 0.2 \log_2\frac{1}{0.2} + 2 \times 0.1 \log_2\frac{1}{0.1} = 2.12193\ \text{bits/symbol}
The efficiency of the source encoder therefore becomes:

\eta = \frac{H(X)}{L} = \frac{2.12193}{2.2} \approx 96.45\%
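These numbers are easy to check numerically. A short Python sketch (the codeword lengths 2, 2, 2, 3, 3 are the ones used in the average-length computation above):

```python
import math

probs = [2/5, 1/5, 1/5, 1/10, 1/10]   # symbol probabilities for s0..s4
lengths = [2, 2, 2, 3, 3]             # Huffman codeword lengths from the example

avg_len = sum(p * l for p, l in zip(probs, lengths))   # 2.2 bits/symbol
entropy = -sum(p * math.log2(p) for p in probs)        # ≈ 2.12193 bits/symbol
efficiency = entropy / avg_len                         # ≈ 0.9645
print(round(avg_len, 2), round(entropy, 5), f"{efficiency:.2%}")
```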
▶ Following through the Huffman algorithm, we reach the end of the compu-
tation in four steps, resulting in the Huffman tree shown in Table 3. The
codewords of the Huffman code for the source are tabulated in Table 4.
Table 3: Example of the Huffman encoding algorithm.
2. Convolutional codes
A) Maximum likelihood decoding
B) Viterbi decoding