
Tutorial: Information Theory and Error Coding

1) Consider the Binary Symmetric Channel model:

How would we calculate the probability of having k bits in error out of N transmitted down the channel?

Use this formula to determine:

a) P(0 bits in error if N=100)
b) P(2 bits in error if N=100)

for both p=0.01 and p=0.1.

2) a) What does Odd and Even Parity mean (w.r.t. error detection)? Use a diagram to help your explanation.
b) Assume Even Parity. Do the following received codewords look like they have no errors?
i) 00000000, ii) 10000011
c) Assume Odd Parity. Do the following received codewords look like they have no errors?
i) 00000000, ii) 10000011

3) What is the Hamming Distance of the code below?

Decimal value   Data Block   Parity bit   Code word
0               0000         0            00000
1               0001         1            00011
2               0010         1            00101
3               0011         0            00110
4               0100         1            01001
5               0101         0            01010
6               0110         0            01100
7               0111         1            01111
8               1000         1            10001
9               1001         0            10010
10              1010         0            10100
11              1011         1            10111
12              1100         0            11000
13              1101         1            11011
14              1110         1            11101
15              1111         0            11110

What does this mean for error detection?

4) Using a diagram to help you, explain how a Parity check bit added to 7 data bits can produce a situation in
which we have errors that are undetected.
5) Compare the Shannon-Hartley formula in its original form with the simple approximation we derived.
Assume an analogue bandwidth of 1kHz, and an SNR of 10dB. What do you conclude?
6) A telephone network has an effective bandwidth of about 4000Hz, and transmitted signals have an average
SNR around 20dB. Evaluate the theoretical maximum channel capacity of the assumed telephone network
using the Shannon-Hartley formula.
7) Assume the channel bandwidth, B, is very large (B=1MHz). The signal is affected by AWGN, with variance =
10 volts². The signal is a random signal for which logic 0 is represented by –A, and logic 1 represented by +A.
Assuming the signal is (worst case) a squarewave of amplitude A, find the upper limit of the capacity, C.
8) A repetition code uses 2 replications (i.e. each transmitted bit is replicated twice). What is the probability
that 4 bits transmitted in this way will be free from errors after decoding? Assume BER=0.01.
ANSWERS
1) Consider the Binary Symmetric Channel model:

How would we calculate the probability of having k bits in error out of N transmitted down the channel?

Use this formula to determine:

a) P(0 bits in error if N=100)
b) P(2 bits in error if N=100)

for both p=0.01 and p=0.1.

The general formula is:

$$P(k \text{ bits in error out of } N) = \binom{N}{k} p^k (1-p)^{N-k} = \frac{N!}{(N-k)!\,k!}\, p^k (1-p)^{N-k}$$

a) p = 0.01:

$$P(0 \text{ bits in error out of } 100) = \frac{100!}{(100-0)!\,0!}\,(0.01)^0 (1-0.01)^{100-0} = 0.3660$$

p = 0.1:

$$P(0 \text{ bits in error out of } 100) = \frac{100!}{(100-0)!\,0!}\,(0.1)^0 (1-0.1)^{100-0} = 0.000026$$

b) p = 0.01:

$$P(2 \text{ bits in error out of } 100) = \frac{100!}{(100-2)!\,2!}\,(0.01)^2 (1-0.01)^{100-2} = 0.1848$$

p = 0.1:

$$P(2 \text{ bits in error out of } 100) = \frac{100!}{(100-2)!\,2!}\,(0.1)^2 (1-0.1)^{100-2} = 0.0016$$
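These values are easy to verify numerically. A minimal Python sketch (assuming Python 3.8+ for math.comb):

from math import comb

def p_k_errors(k: int, n: int, p: float) -> float:
    """Probability of exactly k bit errors out of n bits over a BSC with crossover probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for p in (0.01, 0.1):
    for k in (0, 2):
        print(f"p={p}: P({k} errors in 100) = {p_k_errors(k, 100, p):.6f}")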
2) a) A parity bit is appended to the data so that the number of 1's in the whole codeword (including the
parity bit) is EVEN (for Even Parity) or ODD (for Odd Parity).

Even parity: the number of 1's in the word, including the parity bit, is even (0, 2, 4, 6, ...).
Odd parity: the number of 1's in the word, including the parity bit, is odd (1, 3, 5, ...).

b) Assuming Even Parity, do the following received codewords look like they have no errors?
i) 00000000
Yes: the number of 1's (zero) is even.
ii) 10000011
No: the number of 1's (three) is odd.

c) Assuming Odd Parity, do the following received codewords look like they have no errors?
i) 00000000
No: the number of 1's (zero) is even, which violates the odd-parity rule.
ii) 10000011
Yes: the number of 1's (three) is odd.
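A small Python sketch of the parity rule used in parts b) and c) (a minimal illustration, not from the original tutorial):

def looks_error_free(codeword: str, parity: str = "even") -> bool:
    """True if the received codeword satisfies the given parity rule."""
    ones = codeword.count("1")
    return (ones % 2 == 0) if parity == "even" else (ones % 2 == 1)

for cw in ("00000000", "10000011"):
    print(cw, "even parity OK:", looks_error_free(cw, "even"),
              "| odd parity OK:", looks_error_free(cw, "odd"))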

3) The Hamming Distance of the code is 2 (the minimum distance between any pair of codewords). This means that all single-bit errors can be detected, but some double-bit errors pass undetected, and no errors can be corrected.
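This can be confirmed by computing the minimum pairwise distance over the 16 codewords from the table; a short Python check:

from itertools import combinations

codewords = ["00000", "00011", "00101", "00110", "01001", "01010", "01100", "01111",
             "10001", "10010", "10100", "10111", "11000", "11011", "11101", "11110"]

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which a and b differ."""
    return sum(x != y for x, y in zip(a, b))

print(min(hamming(a, b) for a, b in combinations(codewords, 2)))  # prints 2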

4) Using a diagram to help you, explain how a Parity check bit added to 7 data bits can produce a situation in
which we have errors that are undetected. Example shown:

0 1 0 1 0 1 1   becomes
0 0 0 0 0 1 1

The received word looks OK but isn't: two bits have been flipped in transit, and any even number of bit errors leaves the parity count unchanged, so a single parity bit cannot detect them.
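A two-line check of this effect (a sketch; the 8-bit words below append an even-parity bit to the tutorial's 7 data bits):

def even_parity_ok(word: str) -> bool:
    """True if the word contains an even number of 1's."""
    return word.count("1") % 2 == 0

sent     = "01010110"  # 7 data bits 0101011 plus even-parity bit 0
received = "00000110"  # two bits flipped in transit
print(even_parity_ok(sent), even_parity_ok(received))  # True True: the errors go undetected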

5) Compare the Shannon-Hartley formula in its original form with the simple approximation we derived.
Assume an analogue bandwidth of 1kHz, and an SNR of 10dB. What do you conclude?

The approximation gives:

$$C \approx B \cdot \frac{(S/N)_{dB}}{3} = \frac{1000 \times 10}{3} = 3333\tfrac{1}{3}\ \text{bps}$$

The original formula gives:

$$SNR_{dB} = 10 \log_{10}(SNR) \;\Rightarrow\; SNR = 10^{SNR_{dB}/10} = 10^{10/10} = 10$$

$$C = B \log_2(1 + SNR) = 1000 \log_2(1 + 10) \approx 3459\ \text{bps}$$

The two results are close even at this modest SNR.
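The comparison is easy to reproduce; a minimal sketch of both forms:

from math import log2

def capacity_exact(b_hz: float, snr_db: float) -> float:
    """Shannon-Hartley in its original form."""
    return b_hz * log2(1 + 10 ** (snr_db / 10))

def capacity_approx(b_hz: float, snr_db: float) -> float:
    """The simple approximation C ~ B * SNR_dB / 3."""
    return b_hz * snr_db / 3

print(capacity_exact(1000, 10))   # ~3459.4 bps
print(capacity_approx(1000, 10))  # ~3333.3 bps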

6) A telephone network has an effective bandwidth of about 4000Hz, and transmitted signals have an average
SNR around 20dB. Evaluate the theoretical maximum channel capacity of the assumed telephone network.

An SNR of 20dB corresponds to a ratio of 100, so:

$$C = B \log_2(1 + S/N) = 4000 \log_2(1 + 100) \approx 26632.8\ \text{bps}$$
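The same calculation in Python:

from math import log2

snr = 10 ** (20 / 10)          # 20 dB as a linear ratio: 100
print(4000 * log2(1 + snr))    # ~26632.8 bps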
7) Assume the channel bandwidth, B, is very large (B=1MHz). The signal is affected by AWGN, with variance =
10 volts². The signal is a random signal for which logic 0 is represented by –A, and logic 1 represented by +A.
Assuming the signal is (worst case) a squarewave of amplitude A, find the upper limit of the capacity, C.

$$SNR = \frac{Power_{signal}}{Power_{noise}} = \frac{(signal_{rms})^2}{Power_{noise}}$$

For a squarewave of amplitude A, signal_rms = A, so:

$$SNR = \frac{A^2}{10}$$

$$C = B \log_2\!\left(1 + \frac{A^2}{10}\right) = 1{,}000{,}000 \cdot \log_2\!\left(1 + \frac{A^2}{10}\right)\ \text{bps} = \log_2\!\left(1 + \frac{A^2}{10}\right)\ \text{Mbps}$$

$$C = \log_2\!\left(\frac{10 + A^2}{10}\right)\ \text{Mbps} = \left(\log_2(10 + A^2) - \log_2 10\right)\ \text{Mbps} \approx \left(\log_2(10 + A^2) - 3.3219\right)\ \text{Mbps}$$
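A sketch that evaluates this upper limit for a few illustrative amplitudes (the values of A below are assumptions for demonstration only):

from math import log2

def capacity_mbps(a_volts: float, noise_var: float = 10.0, b_hz: float = 1e6) -> float:
    """Upper-limit capacity for a squarewave of amplitude A in AWGN with the given variance."""
    return b_hz * log2(1 + a_volts**2 / noise_var) / 1e6

for a in (1.0, 5.0, 10.0):    # illustrative amplitudes in volts
    print(f"A = {a:4.1f} V -> C = {capacity_mbps(a):.3f} Mbps")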

8) A repetition code uses 2 replications (i.e. each transmitted bit is replicated twice, so each bit is sent
three times in total). What is the probability that 4 bits transmitted in this way will be free from errors
after decoding? Assume BER = 0.01.

With majority decoding over the three copies, a bit is decoded correctly when 0 or 1 of its copies are in error:

$$P(\text{bit decoded correctly}) = (1 - BER)^3 + \binom{3}{1} BER (1 - BER)^2 = 0.99^3 + 3(0.01)(0.99)^2 = 0.9703 + 0.0294 = 0.9997$$

Therefore, for all 4 bits to be decoded without error:

$$P(\text{all 4 bits decoded correctly}) = 0.9997^4 = 0.9988$$
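And the full calculation as a short Python check:

from math import comb

BER = 0.01
# Each bit is sent 3 times; majority decoding fails only if 2 or 3 copies are flipped,
# so a bit is correct when 0 or 1 of its 3 copies are in error.
p_bit_ok = (1 - BER)**3 + comb(3, 1) * BER * (1 - BER)**2
print(f"P(bit correct)   = {p_bit_ok:.4f}")     # 0.9997
print(f"P(all 4 correct) = {p_bit_ok**4:.4f}")  # 0.9988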
