Introduction to Information Theory: Channel Capacity and Models
A.J. Han Vinck
University of Essen
October 2002
Content
Introduction
Source coding
Channel coding
Multi-user models
Constrained sequences
Applications to cryptography
This lecture
Some models
Channel capacity
Converse
Channel model: input X, output Y, transition probabilities P(y|x)

memoryless:
- output at time i depends only on input at time i
- input and output alphabets are finite

channel capacity: C = max over P(x) of I(X;Y), where
I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)   (Shannon 1948)

notes:
- capacity = max of I(X;Y) over the input probabilities P(x),
  because the transition probabilities are fixed by the channel
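A minimal numerical sketch (my own illustration, not from the slides): evaluating I(X;Y) for a discrete memoryless channel from an input distribution and the fixed transition matrix.

    import numpy as np

    def mutual_information(p_x, P_y_given_x):
        # I(X;Y) in bits; p_x is the input distribution,
        # P_y_given_x[i, j] = P(y_j | x_i) is the fixed transition matrix
        p_x = np.asarray(p_x, float)
        P = np.asarray(P_y_given_x, float)
        p_xy = p_x[:, None] * P               # joint P(x,y)
        p_y = p_xy.sum(axis=0)                # output marginal P(y)
        mask = p_xy > 0
        return float(np.sum(p_xy[mask] *
                            np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask])))

    # BSC with p = 0.1 and uniform input: 1 - h(0.1) = 0.531 bit
    p = 0.1
    print(mutual_information([0.5, 0.5], [[1 - p, p], [p, 1 - p]]))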
channel model: binary symmetric channel (BSC)

[Figure: an error source produces E = 1 with probability p; the output is
Y = X xor E. Equivalently: input 0 or 1 is received correctly with
probability 1-p and inverted with probability p.]
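A quick simulation sketch of this model (my own illustration): generate Y = X xor E and check that the empirical crossover rate is close to p.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 0.1, 100_000                        # crossover probability, # bits
    x = rng.integers(0, 2, size=n)             # input bits X
    e = (rng.random(n) < p).astype(int)        # error source: E = 1 w.p. p
    y = x ^ e                                  # output Y = X xor E
    print("empirical crossover rate:", (x != y).mean())   # close to p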
channel model: burst error channel

[Figure: a two-state error source that switches between a 'good' and a 'bad'
state with transition probabilities such as P_bg and P_bb; errors are much
more frequent in the bad state, so they occur in bursts.]
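A small simulation of such a two-state error source (often called a Gilbert-Elliott model); the transition and per-state error probabilities below are illustrative values of my choosing, not from the slides.

    import numpy as np

    rng = np.random.default_rng(1)
    P_gb, P_bg = 0.01, 0.10                  # good->bad, bad->good transitions
    p_err = {"good": 0.001, "bad": 0.3}      # error probability per state

    state, errors = "good", []
    for _ in range(100_000):
        errors.append(rng.random() < p_err[state])
        if rng.random() < (P_gb if state == "good" else P_bg):
            state = "bad" if state == "good" else "good"
    print("average error rate:", np.mean(errors))   # errors cluster in bursts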
Interleaving: for bursty channels

[Block diagram: message -> encoder -> interleaver -> (bursty) channel ->
interleaver^-1 -> decoder -> message. After de-interleaving, the channel
bursts look to the decoder like random errors.]

Note: interleaving brings encoding and decoding delay

Homework: compare the block and convolutional interleaving w.r.t. delay
Interleaving: block

Channel models are difficult to derive:
- burst definition?
- random and burst errors?
for practical reasons: convert bursts into random errors

[Figure: block interleaver. The code words are written in row-wise;
the array is transmitted column-wise.]
De-Interleaving: block

[Figure: the received symbols are written in column-wise and read out
row-wise; a channel burst is thereby spread over the rows, so that e.g.
each row (code word) contains only 1 error.]
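A sketch of this block (de-)interleaver pair (my own illustration); the array size and burst position are chosen for the example.

    import numpy as np

    def block_interleave(bits, rows, cols):
        # write in row-wise, transmit column-wise
        return np.asarray(bits).reshape(rows, cols).T.reshape(-1)

    def block_deinterleave(bits, rows, cols):
        # write in column-wise, read out row-wise (inverse permutation)
        return np.asarray(bits).reshape(cols, rows).T.reshape(-1)

    data = np.arange(12)                 # 3 code words of length 4
    tx = block_interleave(data, rows=3, cols=4)
    rx = tx.copy()
    rx[4:7] += 100                       # a burst hitting 3 consecutive symbols
    # after de-interleaving, each code word (row) contains only 1 error:
    print(block_deinterleave(rx, rows=3, cols=4).reshape(3, 4))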
Interleaving: convolutional

[Figure: convolutional interleaver. Input sequence 0 is passed on directly,
input sequence 1 through a delay of b elements, sequence 2 through a delay
of 2b elements, and so on.]
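A minimal sketch of such a delay-line interleaver (my own illustration; the branch count and the delay unit b are free parameters):

    from collections import deque

    def conv_interleave(symbols, branches, b):
        # branch i is a FIFO pre-filled with i*b zeros, so its symbols
        # re-emerge i*b visits later; inputs are commutated round-robin
        fifos = [deque([0] * (i * b)) for i in range(branches)]
        out = []
        for j, s in enumerate(symbols):
            f = fifos[j % branches]
            f.append(s)
            out.append(f.popleft())
        return out

    print(conv_interleave(list(range(1, 13)), branches=3, b=1))
    # neighbouring input symbols come out spread apart in the output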
channel model: Middleton's class A impulsive noise

[Figure: the I and Q components are each disturbed by AWGN; I and Q have the
same variance. With probability Q(k) the channel is in state k, giving a
transition probability p(k).]

noise standard deviation in state k:
sigma(k) := ( k * sigma_I^2 / A + sigma_G^2 )^(1/2)

probability of state k (Poisson):
Q(k) := e^(-A) * A^k / k!
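A small sketch evaluating these two formulas; with A = 1 the computed Q(k) values come out close to the Q(k) column of the table on the next slide (which appears to use slightly different rounding).

    import math

    def Q(k, A):
        # Poisson state probability: Q(k) = e^{-A} A^k / k!
        return math.exp(-A) * A**k / math.factorial(k)

    def sigma(k, A, s2_I, s2_G):
        # noise std in state k: sqrt(k * sigma_I^2 / A + sigma_G^2)
        return math.sqrt(k * s2_I / A + s2_G)

    A = 1.0
    print([round(Q(k, A), 2) for k in range(5)])   # 0.37, 0.37, 0.18, 0.06, 0.02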
Example of parameters

k    Q(k)    p(k) (= transition probability)
0    0.36    0.00
1    0.37    0.16
2    0.19    0.24
3    0.06    0.28
4    0.02    0.31

Average p = 0.124; Capacity (BSC) = 0.457
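Recomputing the last line from the (rounded) table entries, average p = sum over k of Q(k) p(k), and treating the result as the crossover probability of a BSC; the small deviations from the quoted 0.124 and 0.457 are consistent with rounding in the table.

    import math

    def h(p):                                   # binary entropy in bits
        return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

    Qk = [0.36, 0.37, 0.19, 0.06, 0.02]
    pk = [0.00, 0.16, 0.24, 0.28, 0.31]
    p_avg = sum(q * p for q, p in zip(Qk, pk))
    print("average p:", round(p_avg, 3))        # ~0.128 from the rounded table
    print("1 - h(p):", round(1 - h(p_avg), 3))  # ~0.449, cf. 0.457 on the slide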
Example of parameters

Middleton's class A: E = 1; ? = 1; sigma_I^2 / sigma_G^2 = 10^-3

[Figure: Q(k) and transition probability p(k) for A = 0.1, A = 1 and A = 10.]
Example of parameters

Middleton's class A: E = 0.01; ? = 1; sigma_I^2 / sigma_G^2 = 10^-3

[Figure: Q(k) and transition probability p(k) for A = 0.1, A = 1 and A = 10.]
channel capacity: the binary symmetric channel

[Figure: BSC with crossover probability p; P(X=0) = P0. As a physical
example the inputs may be 0 = light on and 1 = light off.]

since Y is binary:
H(Y|X) = P(X=0) h(p) + P(X=1) h(p) = h(p)
so I(X;Y) = H(Y) - h(p) <= 1 - h(p), with equality for P0 = 1/2, giving
C_BSC = 1 - h(p)

channel capacity: the binary erasure channel

[Figure: inputs 0 and 1 with P(X=0) = P0; each input is received correctly
with probability 1-e and erased with probability e.]

H(X) = h(P0)
H(X|Y) = e h(P0)
so I(X;Y) = H(X) - H(X|Y) = (1-e) h(P0), maximized at P0 = 1/2

Thus C_erasure = 1 - e
(check!, draw and compare with the BSC and the Z-channel)
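For the suggested comparison, a sketch that evaluates the three capacities side by side; the Z-channel expression is the standard closed form, which is not derived on these slides.

    import math

    def h(p):
        return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

    def C_bsc(p):     return 1 - h(p)
    def C_erasure(e): return 1 - e
    def C_z(p):
        # standard closed form for the Z-channel (only 1 -> 0 errors, w.p. p)
        return 0.0 if p == 1.0 else math.log2(1 + (1 - p) * p**(p / (1 - p)))

    for q in (0.1, 0.25, 0.5):
        print(q, round(C_bsc(q), 3), round(C_z(q), 3), round(C_erasure(q), 3))
    # for these q: C_bsc <= C_z <= C_erasure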
channel capacity: the general discrete memoryless channel

[Figure: inputs x1, x2, ..., xn; outputs y1, y2, ..., ym; transition
probabilities P_j|i = P_Y|X(yj | xi).]

In general: C = max over the input probabilities of I(X;Y)

clue:
I(X;Y) is concave (convex-cap) in the input probabilities,
i.e. finding the maximum is simple: a local maximum is the global one
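Because of this concavity the maximum can be computed iteratively. A minimal sketch of the Blahut-Arimoto algorithm (a standard method, not part of the original slides); it assumes every output stays reachable under the running input distribution.

    import numpy as np

    def capacity_blahut_arimoto(P, iters=500):
        # capacity (bits) of a DMC with P[x, y] = P(y|x)
        P = np.asarray(P, float)
        p = np.full(P.shape[0], 1.0 / P.shape[0])     # start from uniform input
        for _ in range(iters):
            q = p[:, None] * P                        # posterior q(x|y), unnormalized
            q /= q.sum(axis=0, keepdims=True)
            logw = (P * np.log(q + 1e-300)).sum(axis=1)
            w = np.exp(logw - logw.max())             # p(x) prop. exp(sum_y P ln q)
            p = w / w.sum()
        p_y = p @ P
        mask = (p[:, None] * P) > 0
        return float(np.sum((p[:, None] * P)[mask] * np.log2((P / p_y)[mask])))

    p = 0.1
    print(capacity_blahut_arimoto([[1 - p, p], [p, 1 - p]]))   # = 0.531 = 1 - h(0.1)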
Channel capacity

Definition: the rate R of a code is the ratio k/n, where
k is the number of message bits carried by a code word of length n

System design:

[Block diagram: the encoder maps each of the 2^k messages to a code word of
length n from the code book; the code word is sent over the channel; the
decoder uses the same code book to turn the received word into a message
estimate.]

There are 2^k code words of length n
Channel capacity: sketch of proof for the BSC

Code: 2^k binary code words of length n, drawn at random with p(0) = P(1) = 1/2
Channel errors: P(0 -> 1) = P(1 -> 0) = p,
i.e. the number of likely error sequences is about 2^(nh(p))
Decoder: search around the received sequence for a code word
with about np differences

Channel capacity: decoding error probability

P(>= 1 other code word at distance about np) <= (2^k - 1) * 2^(nh(p)) / 2^n -> 0
for R = k/n < 1 - h(p) and n -> infinity

[Figure: Pe versus the rate k/n: Pe can be made arbitrarily small for all
rates k/n below C.]
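A quick numeric check of this union bound (my own illustration): the base-2 exponent of (2^k - 1) * 2^(nh(p)) / 2^n is roughly k + nh(p) - n, which goes to minus infinity for R < 1 - h(p) and becomes useless (positive) above.

    import math

    def h(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    p = 0.1                                    # C = 1 - h(0.1) = 0.531
    for n in (100, 1_000, 10_000):
        for R in (0.40, 0.52, 0.60):
            k = int(R * n)
            # log2 of the union bound (2^k - 1) * 2^{n h(p)} / 2^n
            print(n, R, round(k + n * h(p) - n, 1))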
Converse:

[Figure: for every transmission i = 1, ..., n: Xi -> channel -> Yi, so for a
memoryless channel I(X^n;Y^n) <= sum over i = 1..n of I(Xi;Yi) <= nC.]

source -> encoder -> X^n -> channel -> Y^n -> decoder

converse: R := k/n

k = H(M) = I(M;Y^n) + H(M|Y^n)
        <= I(X^n;Y^n) + 1 + k Pe        (Fano: H(M|Y^n) <= 1 + k Pe)
        <= nC + 1 + k Pe

Hence: Pe >= 1 - C/R - 1/k,
i.e. for rates R > C the error probability stays bounded away from 0
Appendix:

Assume: a binary sequence with P(0) = 1 - P(1) = 1 - p;
t is the # of 1s in the sequence
Then, for n -> infinity and every epsilon > 0 (weak law of large numbers):

Probability ( |t/n - p| > epsilon ) -> 0

i.e. we expect with high probability about pn 1s

Appendix:

Consequence:
1. with high probability the sequence contains t = pn 1s;
2. the number of such sequences is (n choose pn), and
   log2 (n choose pn) = log2 2^(nh(p)) = nh(p);
3. (1/n) log2 (n choose pn) -> h(p) as n -> infinity;
4. hence there are about 2^(nh(p)) typical (error) sequences,
   as used in the coding proof.
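A numeric check of statement 3 (my own illustration), using lgamma to evaluate log2 of the binomial coefficient for growing n:

    import math

    def h(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    p = 0.3
    for n in (100, 1_000, 10_000):
        t = round(p * n)
        log2_binom = (math.lgamma(n + 1) - math.lgamma(t + 1)
                      - math.lgamma(n - t + 1)) / math.log(2)
        print(n, round(log2_binom / n, 4))     # approaches h(0.3) = 0.8813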