
TE361: Information Theory

Channel Coding

“If you can’t explain it simply, you don’t understand it well enough”
- Albert Einstein
Channel Coding
Introduction
• In practice
– Most media are not perfect – noisy channels:
• Wireless link
– satellite
• Modem line
• Hard disk

• Can we
– recover the original message (without errors) after passing
through a noisy channel?
• How
– can we communicate in reliable fashion over noisy channels?
Introduction
• The communication model
– we are using consists of a source that generates discrete symbols/
digital information
• This information is sent to a destination through a CHANNEL
• The channel
– can be associated with noise so we have two cases
1. Noiseless case
– The channel in this case transmits symbols without causing any
errors
2. Noisy case
– The channel in this case introduces noise that causes errors in the
received symbols at the destination
» Noise can
• be defined as any unwanted signal or effect in addition to
the desired signal
» To reduce the errors due to noise we would add systematic
redundancy to the information to be sent
• This is done through CHANNEL CODING
Introduction
• Assumptions
1. That the information generated at the source is ‘source-coded’
or compressed into a string of binary digits
2. Discrete Memoryless Channel (DMC)
• Each symbol is sent over the channel independently of the previous
symbols sent
• the behavior of the channel and the effect of the noise at time t will
not depend on the behavior of the channel or the effect of the noise
at any previous time
Introduction
• We defined
– information rate R and learnt that arbitrarily reliable
communication is possible for any R < C
• In this lecture
– we will study noisy channel models and compute the channel
capacity for such channels
– We will design and analyze block codes
Introduction
• Shannon put forth a stochastic model of the channel
– Input alphabet A
– Output alphabet B
– Probability transition matrix P(b/a)
• that expresses the probability of observing the output symbol b given that the
symbol a was sent
• describes the transmission behavior of the channel

• The input and output symbols


– Arise from two random variables and the channel models the joint
relationship between the variables
• Focus
– is on measuring the information carrying capacity of DMC in the
presence of noise
Discrete Memoryless Channel
• The information carrying capacity
– is defined as the maximum mutual information

C  max I ( A ; B )
p( x)

• The mutual information


– I(A; B) of a channel with input A and output B
• measures the amount of information a channel is able to convey
about the source
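As an illustrative aside (not part of the original slides), I(A; B) can be computed numerically from an input distribution and a channel transition matrix. The helper below is a minimal Python sketch; the function and variable names are my own.

```python
import numpy as np

def mutual_information(p_a, P_b_given_a):
    """I(A; B) in bits, where P_b_given_a[i, j] = P(b_j / a_i)."""
    p_a = np.asarray(p_a, dtype=float)
    P = np.asarray(P_b_given_a, dtype=float)
    joint = p_a[:, None] * P          # P(a_i, b_j)
    p_b = joint.sum(axis=0)           # P(b_j)
    outer = p_a[:, None] * p_b        # P(a_i) P(b_j)
    mask = joint > 0                  # skip zero-probability pairs
    return float(np.sum(joint[mask] * np.log2(joint[mask] / outer[mask])))

# Noiseless binary channel with a uniform source: I(A; B) = H(A) = 1 bit
print(mutual_information([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]]))
```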
Noiseless Channel
• If the channel is noiseless
– then the mutual information is equal to the information content of
the source A
– That is
I(A; B) = H(A)

– For a noiseless binary channel, C = 1 bit


Noisy Channel
• In the presence
– of noise there is an uncertainty H(A/B), that reduces the mutual
information to

I(A; B) = H(A) - H(A/B)

– Here
• H(A) is source entropy
• H(A/B) is the average information lost per symbol
Discrete Memoryless Channel

• We need
– a mathematical model of data/information transmission over
unreliable communication channels
• Popular DMC models include
– The Binary Symmetric Channel (BSC)
– The Binary Erasure Channel (BEC)
Binary Symmetric Channel (BSC)
• This is a channel
– when you input a bit, 0 or 1, with
• probability (1 - p) it passes through the channel intact
• probability p it gets flipped to the other value

• That is
– the probability of error or bit error rate (BER) is p

P  y  1 / x  0  P  y  0 / y  1  p

– the channel models bit inversion (a transmitted bit may be flipped)


– the parameter p fully defines the behavior of the channel
Binary Symmetric Channel (BSC)

 P ( y  0 / x  0) P ( y  1 / x  0 
P 
 P ( y  0 / x  1) P ( y  1 / x  1) 

1  p p 
P 
 p 1  p 
• The mutual information is
I(X; Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)

C  1  H ( p)
– where H(p) is the binary entropy function
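A quick numerical check of C = 1 - H(p) (a minimal Python sketch; the helper names are my own, not from the slides):

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def bsc_capacity(p):
    """Capacity of a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0  -> noiseless channel
print(bsc_capacity(0.11))  # ~0.5 -> about half a bit per channel use
print(bsc_capacity(0.5))   # 0.0  -> output independent of input
```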
Binary Erasure Channel (BEC)
• Another effect
– that noise may have is to prevent the receiver from deciding
whether the symbol was a 0 or 1 (loss of signal)
• In this case
– the output alphabet includes an additional symbol “e”
• called the erasure symbol that denotes a bit that was not able to be
detected
• For a
– binary input {0, 1} the output alphabet consists of three symbols
{0, e, 1}
• This information channel
– is called a Binary Erasure Channel
– models bit loss (erasure)
– BEC does not model/capture the effect of bit inversion
Binary Erasure Channel (BEC)

• The probability of error or erasure probability is α


• That is
P  y  e / x  0  P  y  e / x  1  

 P( y  0 / x  0) P( y  e / x  0) P( y  1 / x  0)
P 
 P ( y  0 / x  1) P ( y  e / x  1) P ( y  1 / x  1) 

1    0 
P
0  1   
Binary Erasure Channel (BEC)

• The BEC capacity is


I(X; Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)

C 1  

• BEC
– Important model for wireless, mobile and satellite communication
channels
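To sanity-check C = 1 - α, the mutual_information() sketch given earlier can be applied to the BEC transition matrix with a uniform input (which, by symmetry, is the capacity-achieving input for this channel):

```python
# Assumes mutual_information() from the earlier sketch is in scope
alpha = 0.25
P_bec = [[1 - alpha, alpha, 0.0],    # rows: x = 0, 1; columns: y = 0, e, 1
         [0.0, alpha, 1 - alpha]]
print(mutual_information([0.5, 0.5], P_bec))  # -> 0.75 = 1 - alpha
```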
Discrete Memoryless Channel
• To fully specify
– the behavior of an information channel it is necessary to specify
• the characteristics of the input
• as well as the channel matrix
• We will assume
– that the input characteristics are described by a probability
distribution over the input alphabet with
• p(xi) denoting the probability of symbol xi being input to the channel
• If the channel
– is fully specified then the output can be calculated by

P  y j    p  y j / x i  p x i 
r

i 0
Discrete Memoryless Channel
• Note well
– p y / x  is forward probabilities
j i
– p x / y  is backward probabilities
i j

• The backward probabilities


– can be calculated by using Bayes’ rule

P xi , y j  P y j / xi P xi 
p xi / y j   
P y j  P y j 
Problem
• Consider
– the binary information channel fully specified by

P x  0   2 / 3 3 / 4 1 / 4 
and P  
P x  1  1 / 3 1 / 8 7 / 8

– Represented as a channel transition diagram (not reproduced here)

– Calculate the output probabilities


Solution
• The output probabilities are calculated using

P  y j    p  y j / x i  p x i 
r

i 0

• The probability that the output is 0 is

P  y  0  P  y  0 / x  0 P x  0  P  y  0 / x  1 P x  1
 3  2   1  1  13
        
 4  3   8  3  24

• The probability that the output is 1 is

P(y=1) = 1 - P(y=0) = 11/24
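These two output probabilities can be checked in a few lines of Python (illustrative only):

```python
p_x = [2/3, 1/3]
P = [[3/4, 1/4],
     [1/8, 7/8]]
p_y0 = P[0][0] * p_x[0] + P[1][0] * p_x[1]
p_y1 = P[0][1] * p_x[0] + P[1][1] * p_x[1]
print(p_y0, 13/24)  # both print 0.5416666...
print(p_y1, 11/24)  # both print 0.4583333...
```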
Solution
• The backward probabilities are:
P ( y  0 / x  0) P ( x  0)
P x  0 / y  0  
P ( y  0)
(3 / 4)( 2 / 3) 12
 
13 / 24 13
12 1
P x  1 / y  0   1  P x  0 / y  0   1  
13 13
P ( y  1 / x  1) P ( x  1)
P x  1 / y  1 
P ( y  1)
(7 / 8)(1 / 3) 7
 
11 / 24 11

7 4
P x  0 / y  1  1  P x  1 / y  1  1  
11 11
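Reusing the backward_probabilities() sketch from earlier, the four backward probabilities can be verified numerically:

```python
B = backward_probabilities([2/3, 1/3], [[3/4, 1/4], [1/8, 7/8]])
print(B[:, 0])  # [0.9230..., 0.0769...] = [12/13, 1/13]
print(B[:, 1])  # [0.3636..., 0.6363...] = [ 4/11, 7/11]
```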
Solution
• The average uncertainty
– we have about the input after the channel output y_j is observed is
given by

H X / y j   P( x / y j ) log 2
1
xX p( x / y j )

– Let's say we observe an output of y = 0


• then

H(X/y=0) = P(x=0/y=0) log2 [1/p(x=0/y=0)] + P(x=1/y=0) log2 [1/p(x=1/y=0)]
         = (12/13) log2(13/12) + (1/13) log2(13) ≈ 0.391
Solution
– Let's say we observe an output of y = 1
• Then
H(X/y=1) = P(x=0/y=1) log2 [1/p(x=0/y=1)] + P(x=1/y=1) log2 [1/p(x=1/y=1)]
         = (4/11) log2(11/4) + (7/11) log2(11/7) ≈ 0.946

• The conditional entropy


– is given by
H X / Y    P( y ) H ( X / y )
j j

 P( y  0) H ( X / y  0)  P( y  1) H ( X / y  1)
 13   11 
  0.387    0.946 
 24   24 
 0.643

– Thus
• the average information lost per symbol through the channel is about 0.645 bits
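A short numerical check of H(X/y=0), H(X/y=1) and H(X/Y) (a sketch; the entropy() helper is my own, not from the slides):

```python
import numpy as np

def entropy(dist):
    """Entropy in bits of a probability vector."""
    d = np.asarray(dist, dtype=float)
    d = d[d > 0]
    return float(-np.sum(d * np.log2(d)))

h0 = entropy([12/13, 1/13])                 # ~0.391 bits
h1 = entropy([4/11, 7/11])                  # ~0.946 bits
h_x_given_y = (13/24) * h0 + (11/24) * h1   # ~0.645 bits
print(h0, h1, h_x_given_y)
```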
Solution
• The source entropy is
H(X) = Σ_x p(x) log2 [1/p(x)]
     = p(x=0) log2 [1/p(x=0)] + p(x=1) log2 [1/p(x=1)]
     = (2/3) log2(3/2) + (1/3) log2(3) ≈ 0.918
• The capacity
– of the channel in the presence of noise (strictly, the mutual information achieved with this particular input distribution) is

C = H(X) - H(X/Y)
  = 0.918 - 0.645 ≈ 0.273 bits / channel use
