
ELEC3028 Digital Transmission – Overview & Information Theory S Chen

Example 1

1. A source emits symbols Xi, 1 ≤ i ≤ 6, in the BCD format with probabilities P (Xi)
as given in Table 1, at a rate Rs = 9.6 kbaud (baud=symbol/second).
State (i) the information rate and (ii) the data rate of the source.

Table 1:

Xi   P(Xi)   BCD word
A    0.30    000
B    0.10    001
C    0.02    010
D    0.15    011
E    0.40    100
F    0.03    101

2. Apply Shannon-Fano coding to the source signal characterised in Table 1. Are there any
disadvantages in the resulting code words?

3. What is the original symbol sequence of the Shannon-Fano coded signal
110011110000110101100?

4. What is the data rate of the signal after Shannon-Fano coding? What compression factor
has been achieved?

5. Derive the coding efficiency of both the uncoded BCD signal as well as the
Shannon-Fano coded signal.

6. Repeat parts 2 to 5 but this time with Huffman coding.


Example 1 - Solution
1. (i) Entropy of source:
H = − Σ_{i=1}^{6} P(Xi) · log2 P(Xi)
  = −0.30 · log2 0.30 − 0.10 · log2 0.10 − 0.02 · log2 0.02 − 0.15 · log2 0.15 − 0.40 · log2 0.40 − 0.03 · log2 0.03
  = 0.52109 + 0.33219 + 0.11288 + 0.41054 + 0.52877 + 0.15177
  = 2.05724 bits/symbol

Information rate: R = H · Rs = 2.05724 [bits/symbol] · 9600 [symbols/s] = 19750 [bits/s]


(ii) Data rate = 3 [bits/symbol] · 9600 [symbols/s] = 28800 [bits/s]
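
As a numerical cross-check (not part of the original working), a short Python sketch reproduces these figures from Table 1; the variable names are illustrative:

```python
import math

# Symbol probabilities from Table 1
P = {'A': 0.30, 'B': 0.10, 'C': 0.02, 'D': 0.15, 'E': 0.40, 'F': 0.03}
Rs = 9600  # symbol rate in symbols/s (9.6 kbaud)

H = -sum(p * math.log2(p) for p in P.values())  # source entropy [bits/symbol]

info_rate = H * Rs   # information rate R = H * Rs [bits/s]
data_rate = 3 * Rs   # uncoded BCD uses 3 bits per symbol

print(f"H = {H:.5f} bits/symbol")              # 2.05724
print(f"R = {info_rate:.0f} bits/s")           # 19750 (rounded)
print(f"BCD data rate = {data_rate} bits/s")   # 28800
```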
2. Shannon-Fano coding:

X    P(X)    I (bits)   steps         code
E    0.4     1.32       0             0
A    0.3     1.74       1 0           10
D    0.15    2.74       1 1 0         110
B    0.1     3.32       1 1 1 0       1110
F    0.03    5.06       1 1 1 1 0     11110
C    0.02    5.64       1 1 1 1 1     11111
Disadvantage: the rare code words have the maximum possible length of q − 1 = 6 − 1 = 5, so a
buffer of 5 bits is required.
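
For reference, the code words above can be reproduced with a small Python sketch of the Shannon-Fano procedure: sort the symbols by decreasing probability, split them into two groups of as nearly equal total probability as possible, prepend 0 to one group and 1 to the other, and recurse. This is an illustrative implementation of that split rule, not the only possible one:

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) sorted by decreasing probability.
    Returns {symbol: code word}."""
    if len(symbols) == 1:
        return {symbols[0][0]: ''}
    total = sum(p for _, p in symbols)
    # pick the split point that makes the two group probabilities most equal
    best_k, best_diff, running = 1, float('inf'), 0.0
    for k in range(1, len(symbols)):
        running += symbols[k - 1][1]
        diff = abs(running - (total - running))
        if diff < best_diff:
            best_k, best_diff = k, diff
    codes = {s: '0' + c for s, c in shannon_fano(symbols[:best_k]).items()}
    codes.update({s: '1' + c for s, c in shannon_fano(symbols[best_k:]).items()})
    return codes

table = [('E', 0.40), ('A', 0.30), ('D', 0.15), ('B', 0.10), ('F', 0.03), ('C', 0.02)]
print(shannon_fano(table))   # E=0, A=10, D=110, B=1110, F=11110, C=11111
```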

3. Shannon-Fano encoded sequence: 110 | 0 | 11110 | 0 | 0 | 0 | 110 | 10 | 110 | 0 = DEFEEEDADE
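
Because the code is prefix-free, the sequence can be decoded greedily, one bit at a time; a minimal sketch using the code table from part 2:

```python
CODE = {'E': '0', 'A': '10', 'D': '110', 'B': '1110', 'F': '11110', 'C': '11111'}
DECODE = {v: k for k, v in CODE.items()}   # prefix-free, so greedy matching is unambiguous

def decode(bits):
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in DECODE:          # a complete code word has been read
            out.append(DECODE[buf])
            buf = ''
    return ''.join(out)

print(decode('110011110000110101100'))   # DEFEEEDADE
```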
4. Average code word length:

d = 0.4 · 1 + 0.3 · 2 + 0.15 · 3 + 0.1 · 4 + (0.03 + 0.02) · 5 = 2.1 [bits/symbol]

Data rate:
d · Rs = 2.1 · 9600 = 20160 [bits/s]
Compression factor:

3 [bits] / d [bits] = 3 / 2.1 = 1.4286
5. Coding efficiency before Shannon-Fano:

CE = information rate / data rate = 19750 / 28800 = 68.58%

Coding efficiency after Shannon-Fano:

CE = information rate / data rate = 19750 / 20160 = 97.97%

Hence Shannon-Fano coding brought the coding efficiency close to 100%.
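
The figures in parts 4 and 5 follow directly from the code word lengths; a brief Python check (H is taken from part 1, the lengths from the Shannon-Fano table):

```python
P = {'E': 0.40, 'A': 0.30, 'D': 0.15, 'B': 0.10, 'F': 0.03, 'C': 0.02}
LEN = {'E': 1, 'A': 2, 'D': 3, 'B': 4, 'F': 5, 'C': 5}   # Shannon-Fano code word lengths
Rs, H = 9600, 2.05724

d = sum(P[s] * LEN[s] for s in P)   # average code word length: 2.1 bits/symbol
print(d * Rs)                        # coded data rate: ~20160 bits/s
print(3 / d)                         # compression factor: ~1.4286
print(H * Rs / (3 * Rs))             # CE before coding, ≈ 0.686
print(H * Rs / (d * Rs))             # CE after coding,  ≈ 0.980
```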


6. Huffman coding:


X    P(X)    code
E    0.40    1
A    0.30    00
D    0.15    010
B    0.10    0110
F    0.03    01110
C    0.02    01111

Merging steps (at each step the two least probable entries are combined and labelled 0 and 1):

step 1: F (0.03, label 0) + C (0.02, label 1)       → FC    (0.05)
step 2: B (0.10, label 0) + FC (0.05, label 1)      → BFC   (0.15)
step 3: D (0.15, label 0) + BFC (0.15, label 1)     → DBFC  (0.30)
step 4: A (0.30, label 0) + DBFC (0.30, label 1)    → ADBFC (0.60)
step 5: ADBFC (0.60, label 0) + E (0.40, label 1)   → 1.00

Each code word is read off from the branch labels, starting at the final merge and working back to the symbol.
Same disadvantage as Shannon-Fano: the rare code words have the maximum possible length of
q − 1 = 6 − 1 = 5, so a buffer of 5 bits is required.
Decoding the sequence from part 3 with the Huffman code:
1 | 1 | 00 | 1 | 1 | 1 | 1 | 00 | 00 | 1 | 1 | 010 | 1 | 1 | 00 = EEAEEEEAAEEDEEA

The same data rate and compression factor are achieved as with Shannon-Fano coding.

The coding efficiency of the Huffman coding is identical to that of Shannon-Fano coding.
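
For completeness, a compact Huffman sketch using Python's heapq. Tie-breaking may yield different code words from the hand construction above, but the code word lengths, and hence the average length d, come out the same:

```python
import heapq
from itertools import count

P = {'E': 0.40, 'A': 0.30, 'D': 0.15, 'B': 0.10, 'F': 0.03, 'C': 0.02}

def huffman(probs):
    """Return {symbol: code word}, built by repeatedly merging the two least
    probable subtrees (standard Huffman procedure)."""
    tiebreak = count()   # unique counter so the heap never has to compare dicts
    heap = [(p, next(tiebreak), {s: ''}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable entries
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c0.items()}
        merged.update({s: '1' + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

codes = huffman(P)
print(codes)                                  # lengths 1, 2, 3, 4, 5, 5
print(sum(P[s] * len(codes[s]) for s in P))   # average length d = 2.1 bits/symbol
```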


Example 2

1. Considering the binary symmetric channel (BSC) shown in the figure:

[Figure: BSC transition diagram. Input X0 with P(X0) = p and input X1 with P(X1) = 1 − p; outputs Y0 (probability P(Y0)) and Y1 (probability P(Y1)). Each input is received correctly with probability 1 − pe and crosses over to the other output with probability pe.]
From the definition of mutual information,
I(X, Y) = Σ_i Σ_j P(Xi, Yj) · log2 [ P(Xi|Yj) / P(Xi) ]   [bits/symbol]
derive both
(i) a formula relating I(X, Y ), the source entropy H(X), and the average information lost per
symbol H(X|Y ), and
(ii) a formula relating I(X, Y ), the destination entropy H(Y ), and the error entropy H(Y |X).
2. State and justify the relation (>,<,=,≤, or ≥) between H(X|Y ) and H(Y |X).
3. Considering the BSC in Figure 1, we now have p = 1/4 and a channel error probability pe = 1/10.
Calculate all probabilities P (Xi , Yj ) and P (Xi |Yj ), and derive the numerical value for the mutual
information I(X, Y ).


Example 2 - Solution

1. (i) Relating to source entropy and average information lost:


I(X, Y) = Σ_i Σ_j P(Xi, Yj) · log2 [ P(Xi|Yj) / P(Xi) ]
        = Σ_i Σ_j P(Xi, Yj) · log2 [ 1 / P(Xi) ] − Σ_i Σ_j P(Xi, Yj) · log2 [ 1 / P(Xi|Yj) ]
        = Σ_i ( Σ_j P(Xi, Yj) ) · log2 [ 1 / P(Xi) ] − Σ_j P(Yj) · ( Σ_i P(Xi|Yj) · log2 [ 1 / P(Xi|Yj) ] )
        = Σ_i P(Xi) · log2 [ 1 / P(Xi) ] − Σ_j P(Yj) · H(X|Yj)
        = H(X) − H(X|Y)

(ii) Bayes rule:

P(Xi|Yj) / P(Xi) = P(Xi, Yj) / (P(Xi) · P(Yj)) = P(Yj|Xi) / P(Yj)


Hence, relating to destination entropy and error entropy:


I(X, Y) = Σ_i Σ_j P(Xi, Yj) · log2 [ P(Yj|Xi) / P(Yj) ]
        = Σ_i Σ_j P(Yj, Xi) · log2 [ 1 / P(Yj) ] − Σ_i Σ_j P(Yj, Xi) · log2 [ 1 / P(Yj|Xi) ]
        = H(Y) − H(Y|X)

2. Unless pe = 0 or pe = 0.5, or the source symbols X are equiprobable, the symbols Y at the destination are more nearly equiprobable ("balanced") than X, hence H(Y) ≥ H(X), with equality in the excluded cases. Since I(X, Y) = H(X) − H(X|Y) = H(Y) − H(Y|X), it follows that H(Y|X) ≥ H(X|Y).
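
This can also be checked numerically: for a BSC the output probability P(Y0) = p(1 − pe) + (1 − p)pe is always at least as close to 1/2 as p is, so H(Y) ≥ H(X). A small sketch sweeping a few values (H2 below is the binary entropy function):

```python
import math

def H2(q):
    """Binary entropy in bits, with H2(0) = H2(1) = 0."""
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

for p in (0.1, 0.25, 0.5, 0.9):
    for pe in (0.0, 0.1, 0.3, 0.5):
        py0 = p * (1 - pe) + (1 - p) * pe   # P(Y0)
        assert H2(py0) >= H2(p) - 1e-12     # H(Y) >= H(X)
        # and since I(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X),
        # it follows that H(Y|X) >= H(X|Y)
```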
3. Joint probabilities:

P(X0, Y0) = P(X0) · P(Y0|X0) = 1/4 · 9/10 = 0.225
P(X0, Y1) = P(X0) · P(Y1|X0) = 1/4 · 1/10 = 0.025
P(X1, Y0) = P(X1) · P(Y0|X1) = 3/4 · 1/10 = 0.075
P(X1, Y1) = P(X1) · P(Y1|X1) = 3/4 · 9/10 = 0.675

Destination total probabilities:

P(Y0) = P(X0) · P(Y0|X0) + P(X1) · P(Y0|X1) = 1/4 · 9/10 + 3/4 · 1/10 = 0.3


P(Y1) = P(X0) · P(Y1|X0) + P(X1) · P(Y1|X1) = 1/4 · 1/10 + 3/4 · 9/10 = 0.7

Conditional probabilities:

P(X0|Y0) = P(X0, Y0) / P(Y0) = 0.225 / 0.3 = 0.75
P(X0|Y1) = P(X0, Y1) / P(Y1) = 0.025 / 0.7 = 0.035714
P(X1|Y0) = P(X1, Y0) / P(Y0) = 0.075 / 0.3 = 0.25
P(X1|Y1) = P(X1, Y1) / P(Y1) = 0.675 / 0.7 = 0.964286

Mutual information:

I(X, Y) = P(X0, Y0) · log2 [ P(Y0|X0) / P(Y0) ] + P(X0, Y1) · log2 [ P(Y1|X0) / P(Y1) ]
        + P(X1, Y0) · log2 [ P(Y0|X1) / P(Y0) ] + P(X1, Y1) · log2 [ P(Y1|X1) / P(Y1) ]
        = 0.3566165 − 0.0701838 − 0.1188721 + 0.2447348 = 0.4122954 [bits/symbol]
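
The same numbers, and the two identities derived in part 1, can be reproduced with a short Python sketch (the list indexing below is an assumption of the sketch, not the slides' notation):

```python
import math

p, pe = 0.25, 0.10                            # P(X0) = 1/4, error probability 1/10
PX = [p, 1 - p]
PYgX = [[1 - pe, pe], [pe, 1 - pe]]           # PYgX[i][j] = P(Yj | Xi)

Pxy = [[PX[i] * PYgX[i][j] for j in range(2)] for i in range(2)]   # joint P(Xi, Yj)
PY = [Pxy[0][j] + Pxy[1][j] for j in range(2)]                     # 0.3, 0.7
PXgY = [[Pxy[i][j] / PY[j] for j in range(2)] for i in range(2)]   # P(Xi | Yj)

I = sum(Pxy[i][j] * math.log2(PXgY[i][j] / PX[i])
        for i in range(2) for j in range(2))                       # ≈ 0.4122954

# cross-check against part 1: I(X,Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)
HX   = -sum(q * math.log2(q) for q in PX)
HY   = -sum(q * math.log2(q) for q in PY)
HXgY = -sum(Pxy[i][j] * math.log2(PXgY[i][j]) for i in range(2) for j in range(2))
HYgX = -sum(Pxy[i][j] * math.log2(PYgX[i][j]) for i in range(2) for j in range(2))
print(I, HX - HXgY, HY - HYgX)                # all three ≈ 0.4122954 bits/symbol
```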


Example 3

A digital communication system uses a 4-ary signalling scheme. Assume that the 4 symbols −3, −1, 1, 3 are chosen with probabilities 1/8, 1/4, 1/2, 1/8, respectively. The channel is an ideal channel with AWGN, the transmission rate is 2 Mbaud (2 × 10^6 symbols/s), and the channel signal to noise ratio is known to be 15.

1. Determine the source information rate.

2. If you are able to employ some capacity-approaching error-correction coding technique and would like to achieve error-free transmission, what is the minimum channel bandwidth required?


Example 3 - Solution

1. Source entropy:

H = 2 · (1/8) · log2 8 + (1/4) · log2 4 + (1/2) · log2 2 = 7/4 [bits/symbol]

Source information rate:

R = H · Rs = (7/4) · 2 × 10^6 = 3.5 [Mbits/s]

2. To be able to achieve error-free transmission:

R ≤ C = B · log2(1 + SP/NP)   →   3.5 × 10^6 ≤ B · log2(1 + 15)

Thus
B ≥ 0.875 [MHz]
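
A quick Python check of both parts (the symbol probabilities, symbol rate, and SNR are taken from the problem statement):

```python
import math

probs = [1/8, 1/4, 1/2, 1/8]   # symbols -3, -1, 1, 3
Rs = 2e6                        # 2 Mbaud
snr = 15                        # channel signal-to-noise ratio (linear)

H = -sum(p * math.log2(p) for p in probs)   # 1.75 = 7/4 bits/symbol
R = H * Rs                                  # 3.5e6 bits/s
B_min = R / math.log2(1 + snr)              # from R <= C = B*log2(1 + SNR)
print(H, R, B_min)                          # 1.75, 3500000.0, 875000.0
```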


Example 4

A predictive source encoder generates a bit stream, and it is known that the probability
of a bit taking the value 0 is P (0) = p = 0.95. The bit stream is then encoded by a
run length encoder (RLC) with a codeword length of n = 5 bits.

1. Determine the compression ratio of the RLC.

2. Find the encoder input patterns that produce the following encoder output codewords:

11111 11110 11101 11100 11011 · · · 00001 00000

3. What is the encoder input sequence of the RLC coded signal 110110000011110?


Example 4 - Solution
1. Codeword length after RLC is n = 5 bits. The average codeword length d before RLC (the average number of source bits represented by one codeword), with N = 2^n − 1 = 31, is

d = Σ_{l=0}^{N−1} (l + 1) · p^l · (1 − p) + N · p^N = (1 − p^N) / (1 − p)

Compression ratio:

d / n = (1 − p^N) / (n · (1 − p)) = (1 − 0.95^31) / (5 × 0.05) = 3.1844
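
A short Python check confirms that the closed form equals the defining sum and gives the quoted compression ratio:

```python
p, n = 0.95, 5
N = 2**n - 1   # longest run one codeword can represent

# average number of source bits replaced by one n-bit codeword
d_sum = sum((l + 1) * p**l * (1 - p) for l in range(N)) + N * p**N
d_closed = (1 - p**N) / (1 - p)
assert abs(d_sum - d_closed) < 1e-9

print(d_closed / n)   # compression ratio ≈ 3.1844
```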
2. RLC table (a run of l zeros terminated by a 1 is mapped to the 5-bit codeword below; a full block of 31 zeros is mapped to 11111):

00...0 (31 zeros)      → 11111
00...0 (30 zeros) 1    → 11110
00...0 (29 zeros) 1    → 11101
00...0 (28 zeros) 1    → 11100
00...0 (27 zeros) 1    → 11011
...
0 1                    → 00001
1                      → 00000
3. 11011 | 00000 | 11110 → the encoder input sequence:

00...0 (27 zeros) 1 | 1 | 00...0 (30 zeros) 1
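
A minimal RLC decoder sketch (the function name and structure are illustrative, not a standard API) reproduces the encoder input sequence of part 3:

```python
def rlc_decode(bits, n=5):
    """Each n-bit codeword with value v < 2**n - 1 stands for v zeros followed by
    a 1; the all-ones codeword stands for 2**n - 1 zeros with no terminating 1
    (the run continues in the next codeword)."""
    N = 2**n - 1
    out = []
    for k in range(0, len(bits), n):
        v = int(bits[k:k + n], 2)
        out.append('0' * v if v == N else '0' * v + '1')
    return ''.join(out)

decoded = rlc_decode('110110000011110')
print(decoded == '0' * 27 + '1' + '1' + '0' * 30 + '1')   # True
```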
