
COMPSCI 650 Applied Information Theory Mar 24, 2016

Lecture 15
Instructor: Arya Mazumdar Scribe: Names Redacted

1 Review
1.1 Channel Capacity
In the communication system (Figure 1), we define the channel capacity:

C = max_{p(x)} I(X; Y)    (1)

Figure 1: Communication System

1.2 Binary Symmetric Channel

Figure 2: Binary Symmetric Channel

The information capacity of a binary symmetric channel with parameter p is:

C = 1 − h(p) bits (2)

Figure 3: Communication system for the Binary Symmetric Channel
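As a quick numerical check (a minimal sketch added here, not part of the original notes; the function names are ours), formula (2) can be evaluated directly:

import math

def binary_entropy(p):
    # h(p) in bits, with the convention h(0) = h(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of the BSC with crossover probability p, eq. (2)
    return 1 - binary_entropy(p)

for p in (0.0, 0.1, 0.25, 0.5):
    print(p, bsc_capacity(p))
# p = 0 gives 1 bit per channel use; p = 0.5 gives 0, since the output is then independent of the input.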

Figure 4: Binary Erasure Channel

2 Binary Erasure Channel (BEC)


In this channel, a fraction p of bits are erased (Figure 4).

2.1 Capacity of Binary Erasure Channel

C = max_{p(x)} I(X; Y)                              (3)
  = max_{p(x)} [H(Y) − H(Y|X)]                      (4)
  = max_{p(x)} [H(Y) − Σ_x p(x) H(Y|X = x)]         (5)
  = max_{p(x)} H(Y) − h(p)                          (6)

A first guess for max_{p(x)} H(Y) is log 3, since Y takes three values, but this cannot be achieved by any choice of input distribution p(x).
Assume p(x = 1) = π and p(x = 0) = 1 − π.
Let E be the event {Y = ?}, so H(E) = h(p), H(Y|E = 1) = 0, and H(Y|E = 0) ≤ 1. Thus

H(Y) = H(Y, E) = H(E) + H(Y|E)                                   (7)
     = h(p) + P(E = 0) H(Y|E = 0) + P(E = 1) H(Y|E = 1)          (8)
     = h(p) + (1 − p) H(Y|E = 0)                                 (9)
     ≤ h(p) + (1 − p)                                            (10)

Hence

C = max_{p(x)} H(Y) − h(p)                                                        (11)
  = h(p) + (1 − p) − h(p)    (the bound (10) is attained; see the example in 2.2)  (12)
  = 1 − p                                                                         (13)
2.2 Example
If p(x = 1) = p(x = 0) = 1/2, then what is H(Y)?

p(Y = 0) = (1/2)(1 − p)
p(Y = 1) = (1/2)(1 − p)
p(Y = ?) = (1/2)p + (1/2)p = p

H(Y) = −(1/2)(1 − p) log[(1/2)(1 − p)] − p log p − (1/2)(1 − p) log[(1/2)(1 − p)]
     = −p log p − (1 − p) log(1 − p) − (1 − p) log(1/2)
     = h(p) + (1 − p)    (this is max_{p(x)} H(Y) for the BEC)

I(X; Y) = H(Y) − h(p) = 1 − p

Expression (13) for the capacity of the BEC has an intuitive meaning: since a proportion p of the bits is lost in the channel, we can recover at most a proportion (1 − p) of the bits, so the capacity is at most 1 − p.
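The same conclusion can be checked numerically. The sketch below (not from the original notes; the names and the grid of input distributions are chosen arbitrarily here) sweeps π = p(x = 1) and evaluates I(X; Y) = H(Y) − h(p) for the BEC; the maximum is 1 − p, attained at π = 1/2:

import math

def entropy(dist):
    # Shannon entropy in bits of a probability vector
    return -sum(q * math.log2(q) for q in dist if q > 0)

def bec_mutual_information(pi, p):
    # I(X; Y) for a BEC with erasure probability p and input P(X = 1) = pi
    p_y = [(1 - pi) * (1 - p), pi * (1 - p), p]   # P(Y = 0), P(Y = 1), P(Y = ?)
    return entropy(p_y) - entropy([p, 1 - p])     # H(Y) - H(Y|X) = H(Y) - h(p)

p = 0.3
values = [(bec_mutual_information(k / 100, p), k / 100) for k in range(1, 100)]
print(max(values))   # approximately (0.7, 0.5): capacity 1 - p, achieved by the uniform input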

3 Binary Deletion Channel


In this channel, each bit is deleted with probability p. Unlike an erasure, a deletion leaves no marker: the receiver only sees the surviving bits in order and does not learn which positions were deleted. The capacity of the deletion channel is still an open problem.

4 Gaussian Channel
4.1 Definition
The Gaussian channel is the most important continuous-alphabet channel. It has input X, output Y, and noise Z, where Z is drawn i.i.d. from a Gaussian distribution with variance σ²:

Y = X + Z,   Z ∼ N(0, σ²)    (14)

The constraint on the input (x_1, x_2, . . . , x_n) is the power constraint

(1/n) Σ_{i=1}^{n} x_i² ≤ p,   i.e. E[X²] ≤ p    (15)

4.2 Capacity of Gaussian Channel


Define

C = max_{f(x): E[X²] ≤ p} I(X; Y)    (16)
Here

I(X; Y) = h(Y) − h(Y|X)                              (17)
        = h(Y) − h(X + Z|X)                          (18)
        = h(Y) − h(Z|X)                              (19)
        = h(Y) − h(Z)       (Z is independent of X)  (20)
        = h(Y) − (1/2) log 2πeσ²                     (21)
To find the maximum value of I(X; Y), we need to find the maximum value of h(Y). In Lecture 12 we proved the theorem that a Gaussian random variable has the largest differential entropy among all random variables with fixed variance σ². That is,

σ² ≥ (1/(2πe)) e^{2h(X)}    (22)
And

E[Y²] = E[(X + Z)²] = E[X²] + E[Z²] + 2E[X]·E[Z]   (X and Z are independent)   (23)
      = E[X²] + E[Z²]                               (since E[Z] = 0)            (24)

Since E[X²] ≤ p and E[Z²] = σ²,

E[Y²] ≤ p + σ²    (25)

From inequality (22), we get

h(Y) ≤ (1/2) log 2πe(p + σ²)    (26)

Hence

I(X; Y) ≤ (1/2) log 2πe(p + σ²) − (1/2) log 2πeσ² = (1/2) log(1 + p/σ²)    (27)

Figure 5: Gaussian Channel

Thus, in the Gaussian channel (Figure 5), the capacity is

C = max_{f(x): E[X²] ≤ p} I(X; Y) = (1/2) log(1 + p/σ²)    (28)

The upper bound (27) is attained by choosing X ∼ N(0, p), which makes Y Gaussian with variance p + σ².

Define p/σ² as the signal-to-noise ratio (SNR). Hence

C = (1/2) log(1 + SNR)    (29)
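As a small illustration (an added snippet, not part of the lecture; the dB values are chosen arbitrarily), evaluating (29) for a few SNR values shows the logarithmic growth of capacity with SNR:

import math

def awgn_capacity(snr):
    # Capacity of the Gaussian channel in bits per channel use, eq. (29)
    return 0.5 * math.log2(1 + snr)

for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    print(snr_db, "dB:", round(awgn_capacity(snr), 3), "bits per channel use")
# At high SNR, every extra 10 dB adds about 0.5 * log2(10), roughly 1.66 bits per channel use.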
In general, the capacity of an arbitrary discrete memoryless channel can be computed with numerical methods such as the Arimoto-Blahut algorithm.
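A minimal sketch of that iteration is given below (added here for illustration; the code and its variable names are ours, and the two channel matrices at the end are just the BSC and BEC with p = 0.1, used as sanity checks):

import numpy as np

def arimoto_blahut(W, iterations=200):
    # Capacity of a discrete memoryless channel with transition matrix W[x, y] = P(Y = y | X = x).
    # Returns the capacity in bits and the capacity-achieving input distribution.
    n_inputs = W.shape[0]
    r = np.full(n_inputs, 1.0 / n_inputs)                 # start from the uniform input distribution
    for _ in range(iterations):
        joint = r[:, None] * W                            # p(x, y)
        q = joint / joint.sum(axis=0, keepdims=True)      # posterior q(x | y)
        log_r = np.sum(W * np.log(q + 1e-300), axis=1)    # r(x) proportional to exp(sum_y W(y|x) log q(x|y))
        r = np.exp(log_r - log_r.max())
        r /= r.sum()
    p_y = r @ W                                           # output distribution under the final input
    ratio = np.where(W > 0, W / p_y, 1.0)
    capacity = np.sum(r[:, None] * W * np.log2(ratio))    # I(X; Y) in bits
    return capacity, r

bsc = np.array([[0.9, 0.1], [0.1, 0.9]])                  # BSC with crossover probability 0.1
bec = np.array([[0.9, 0.0, 0.1], [0.0, 0.9, 0.1]])        # BEC with erasure probability 0.1 (columns: Y = 0, 1, ?)
print(arimoto_blahut(bsc)[0])                             # ~ 1 - h(0.1) = 0.531
print(arimoto_blahut(bec)[0])                             # ~ 1 - p = 0.9

For these two channels the iteration reproduces the closed-form capacities 1 − h(p) and 1 − p derived above.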

5 Random codes achieve capacity


5.1 Recall
Index set W = {1, 2, . . . , M} of messages:
1 → X_1(1) · · · X_n(1)
2 → X_1(2) · · · X_n(2)
· · ·
M → X_1(M) · · · X_n(M)

We want R = (log M)/n = 1 − h(p) with p_e^(n) → 0.
n
Decoding: find all codewords that are within n(p + ε) bit flips of y. If exactly one codeword is found, output it; otherwise declare failure. Suppose message 1 is sent, and define the events
A: X(1) is more than n(p + ε) bit flips away from y,
B: ∃ j ≠ 1 such that X(j) is within n(p + ε) bit flips of y.
The probability of error is

p_e^(n) = Pr(A ∪ B) ≤ P(A) + P(B)

From the Chernoff bound, the number of bits flipped by the channel satisfies

P(Σ_{i=1}^{n} Z_i ≥ n(p + ε)) ≤ 2^{−n D(p+ε || p)} → 0    (30)

where Z_i is the indicator that the channel flips bit i. Thus P(A) → 0 and p_e^(n) ≈ P(B). Also

P(B) ≤ (M − 1) · Pr(X(j) is within n(p + ε) bit flips of y)
Since each X(j), j ≠ 1, is independent of y and uniform, its distance from y is Binomial(n, 1/2), so

p_e^(n) ≈ P(B) ≤ (M − 1) · 2^{−n D(p+ε || 1/2)} < M · 2^{−n D(p+ε || 1/2)}    (31)

Say M = 2^{n(1 − h(p+2ε))}. Hence

p_e^(n) < 2^{n(1 − h(p+2ε))} · 2^{−n D(p+ε || 1/2)}                      (32)
        = 2^{−n[h(p+2ε) − h(p+ε)]}      (using D(q || 1/2) = 1 − h(q))   (33)
        = 2^{−nε′} → 0                                                   (34)

where ε′ := h(p + 2ε) − h(p + ε) > 0. And

R = n(1 − h(p + 2ε))/n = 1 − h(p + 2ε)    (35)

If ε → 0, then R → 1 − h(p).
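The final bound can also be evaluated numerically. The sketch below (illustrative only; the values of p, ε, and n are chosen arbitrarily here) computes the rate R = 1 − h(p + 2ε) and the bound 2^{−n[h(p+2ε) − h(p+ε)]} from (33) for growing block lengths:

import math

def h(q):
    # binary entropy in bits
    return 0.0 if q in (0, 1) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

p, eps = 0.1, 0.01
rate = 1 - h(p + 2 * eps)                   # R = 1 - h(p + 2*eps), eq. (35)
exponent = h(p + 2 * eps) - h(p + eps)      # epsilon' in (34)
for n in (100, 1000, 10000):
    bound = 2 ** (-n * exponent)            # upper bound on p_e^(n) from (33)
    print(n, round(rate, 4), bound)
# The bound on the error probability vanishes as n grows while R stays fixed at 1 - h(p + 2*eps);
# letting eps -> 0 then pushes the rate up to the capacity 1 - h(p).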
