Lecture 15
Instructor: Arya Mazumdar Scribe: Names Redacted
1 Review
1.1 Channel Capacity
In the communication system (Figure 1), we define the channel capacity as the maximum mutual information between input and output over all input distributions:

C = max_{p(x)} I(X; Y)
Figure 4: Binary Erasure Channel
The first guess for max_{p(x)} H(Y) is log 3, but this cannot be achieved by any choice of input distribution p(x).

Assume p(x = 1) = π, p(x = 0) = 1 − π. Let E be the event {Y = ?}, so H(E) = h(p), H(Y | E = 1) = 0, and H(Y | E = 0) ≤ 1. Since E is a function of Y,

H(Y) = H(Y, E) = H(E) + H(Y | E) ≤ h(p) + (1 − p)

Hence, using H(Y | X) = h(p) for the BEC (the bound is achieved by the uniform input, as the example below shows),

C = max_{p(x)} I(X; Y) = max_{p(x)} H(Y) − h(p) = 1 − p    (13)
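The BEC capacity 1 − p can be checked numerically by sweeping the input distribution. A minimal sketch (the erasure probability p = 0.3 and all function names are our own choices, not from the lecture):

```python
import numpy as np

def bec_mutual_information(pi, p):
    """I(X;Y) in bits for a BEC with erasure probability p and P(X=1) = pi.

    Output distribution: P(Y=0) = (1-pi)(1-p), P(Y=1) = pi(1-p), P(Y=?) = p.
    I(X;Y) = H(Y) - H(Y|X), where H(Y|X) = h(p) for every input.
    """
    def entropy(q):
        q = q[q > 0]                      # skip zero-probability outcomes
        return -np.sum(q * np.log2(q))

    py = np.array([(1 - pi) * (1 - p), pi * (1 - p), p])
    return entropy(py) - entropy(np.array([p, 1 - p]))

p = 0.3
pis = np.linspace(0.0, 1.0, 1001)
rates = [bec_mutual_information(pi, p) for pi in pis]
best = pis[int(np.argmax(rates))]
print(best, max(rates))  # maximum at pi = 0.5, value 1 - p = 0.7
```

The sweep confirms that the mutual information peaks at the uniform input and equals 1 − p there.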
2.2 Example
If p(x = 1) = p(x = 0) = 1/2, then H(Y) = ? The output probabilities are

p(Y = 0) = (1/2)(1 − p)

p(Y = 1) = (1/2)(1 − p)

p(Y = ?) = (1/2)p + (1/2)p = p

so

H(Y) = −(1/2)(1 − p) log[(1/2)(1 − p)] − p log p − (1/2)(1 − p) log[(1/2)(1 − p)]

= −p log p − (1 − p) log(1 − p) − (1 − p) log(1/2)

= h(p) + (1 − p)    (this is max_{p(x)} H(Y) for the BEC)
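The identity H(Y) = h(p) + (1 − p) for the uniform input can be verified directly from the output distribution. A short check (the value p = 0.25 is an arbitrary choice of ours; any erasure probability in (0, 1) works):

```python
import math

def h(q):
    """Binary entropy in bits."""
    return 0.0 if q in (0.0, 1.0) else -q * math.log2(q) - (1 - q) * math.log2(1 - q)

p = 0.25  # erasure probability (arbitrary example value)
# Output probabilities for the uniform input, as derived above:
py = [0.5 * (1 - p), 0.5 * (1 - p), p]  # P(Y=0), P(Y=1), P(Y=?)
H_Y = -sum(q * math.log2(q) for q in py)
print(H_Y)  # equals h(0.25) + 0.75 ≈ 1.5613
```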
Actually, the expression (13) for the capacity of the BEC has an intuitive meaning: since a proportion p of the bits are lost in the channel, we can recover (at most) a proportion 1 − p of the bits. Hence the capacity is at most 1 − p.
4 Gaussian Channel
4.1 Definition
The Gaussian channel is the most important continuous-alphabet channel. It has output Y, input X, and noise Z, where the noise Z is drawn i.i.d. from a Gaussian distribution with variance σ²:

Y = X + Z,   Z ∼ N(0, σ²)    (14)
Here, since X and Z are independent and the input satisfies the power constraint E[X²] ≤ p,

E[Y²] = E[X²] + σ² ≤ p + σ²    (25)
Define p/σ² to be the signal-to-noise ratio (SNR).
Hence

C = (1/2) log(1 + SNR)    (29)
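Taking the logarithm in (29) to base 2 gives the capacity in bits per channel use. A one-line sketch (function name is ours):

```python
import math

def awgn_capacity_bits(snr):
    """Capacity of the Gaussian channel in bits per channel use, SNR = p / sigma^2."""
    return 0.5 * math.log2(1 + snr)

print(awgn_capacity_bits(1))   # 0.5 bit per use (SNR of 0 dB)
print(awgn_capacity_bits(15))  # 2.0 bits per use
```

Doubling the reliable rate requires roughly squaring 1 + SNR, which is why capacity grows only logarithmically with transmit power.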
In general, the capacity of an arbitrary discrete memoryless channel can be computed with numerical algorithms such as the Arimoto-Blahut algorithm.
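The Arimoto-Blahut iteration alternates between updating the posterior q(x|y) induced by the current input distribution and re-weighting the input distribution toward inputs with high divergence. A minimal NumPy sketch of this alternating maximization (the function name, tolerances, and the BEC test matrix are our own choices):

```python
import numpy as np

def blahut_arimoto(W, tol=1e-10, max_iter=10000):
    """Capacity (bits) of a DMC with transition matrix W[x, y] = P(y|x)."""
    n_in = W.shape[0]
    r = np.full(n_in, 1.0 / n_in)           # start from the uniform input
    for _ in range(max_iter):
        q = r[:, None] * W                  # joint p(x, y)
        q /= q.sum(axis=0, keepdims=True)   # posterior q(x|y)
        # r_new(x) proportional to exp( sum_y W(y|x) log q(x|y) )
        logq = np.where(W > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r_new = np.exp(np.sum(W * logq, axis=1))
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    # I(X;Y) at the optimizing input, converted to bits
    py = r @ W
    terms = np.where(W > 0, W * np.log2(np.where(W > 0, W, 1.0) / py[None, :]), 0.0)
    return float(np.sum(r[:, None] * terms))

# BEC with erasure probability 0.3: capacity should be 1 - 0.3 = 0.7 bit
W_bec = np.array([[0.7, 0.0, 0.3],
                  [0.0, 0.7, 0.3]])
print(blahut_arimoto(W_bec))  # 0.7
```

For the BEC (and any symmetric channel) the uniform input is already optimal, so the iteration converges immediately; for asymmetric channels the input distribution moves away from uniform over the iterations.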