Channel Coding
• Channel Capacity
Channel Capacity, C, is defined as
"the maximum mutual information I(X; Y) in any single use of
the channel, where the maximization is over all possible input
probability distributions {p(xj)} on X."
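To make the maximization in this definition concrete, here is a small
numerical sketch (not from the notes; the crossover probability p = 0.1
and the helper mutual_information are illustrative assumptions). It sweeps
the input distribution of a binary symmetric channel and locates the
input probability that maximizes I(X; Y):

import numpy as np

def mutual_information(q, p):
    """I(X; Y) for a BSC with P(x0) = q and crossover probability p."""
    px = np.array([q, 1 - q])
    channel = np.array([[1 - p, p],
                        [p, 1 - p]])        # rows: input x, columns: output y
    joint = px[:, None] * channel           # joint distribution P(x, y)
    py = joint.sum(axis=0)                  # output marginal P(y)
    mask = joint > 0                        # avoid log(0)
    return np.sum(joint[mask] * np.log2(joint[mask] / (px[:, None] * py[None, :])[mask]))

p = 0.1                                     # assumed crossover probability
qs = np.linspace(0.001, 0.999, 999)
infos = [mutual_information(q, p) for q in qs]
print("maximizing P(x0) ~", qs[int(np.argmax(infos))])     # ~0.5
print("capacity C ~", max(infos), "bits per channel use")  # ~1 - H(0.1) = 0.531

The maximum occurs at the uniform input distribution, which is exactly the
case worked out in the example below.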
Example:
For the binary symmetric channel discussed previously, I(X; Y)
is maximum when p(x0) = p(x1) = 1/2. So, we have

       C = 1 + p log2(p) + (1 − p) log2(1 − p)                  (5)

Since we know H(p) = −p log2(p) − (1 − p) log2(1 − p),

       ⇒ C = 1 − H(p)

where p is the crossover (transition) probability of the channel.
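As a quick check (not part of the notes; the p values are assumed), Eq. (5)
can be evaluated directly:

import numpy as np

def bsc_capacity(p):
    """C = 1 - H(p) for a binary symmetric channel, in bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0                      # deterministic channel: full capacity
    return 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"p = {p:<4}  C = {bsc_capacity(p):.4f} bits per channel use")
# p = 0.5 gives C = 0: the output is then statistically independent of the input.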
3. If

       H(S)/Ts ≤ C/Tc                                           (6)

   there exists a coding scheme for which the source output
   can be transmitted over the channel and be reconstructed
   with an arbitrarily small probability of error. The
   parameter C/Tc is called the critical rate.
4. Conversely, if

       H(S)/Ts > C/Tc                                           (7)

   it is not possible to transmit information over the channel
   and reconstruct it with an arbitrarily small probability of
   error.
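For a feel of how conditions (6) and (7) are applied, here is a hedged
numeric illustration (all values are assumptions, not from the notes):

H_S = 1.0       # source entropy, bits per symbol (assumed)
Ts  = 1e-3      # one source symbol every 1 ms (assumed)
C   = 0.531     # channel capacity, bits per use (e.g. a BSC with p = 0.1)
Tc  = 0.4e-3    # one channel use every 0.4 ms (assumed)

source_rate   = H_S / Ts    # bits per second produced by the source
critical_rate = C / Tc      # bits per second the channel can carry reliably

print(f"H(S)/Ts = {source_rate:.0f} bit/s,  C/Tc = {critical_rate:.1f} bit/s")
print("reliable transmission possible" if source_rate <= critical_rate
      else "reliable transmission NOT possible")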
Example:
For a source emitting equally likely binary symbols (so that
H(S) = 1 bit per symbol) once every Ts seconds, condition (6)
becomes

       1/Ts ≤ C/Tc                                              (8)

But the ratio Tc/Ts equals the code rate, r, of the channel
encoder. Hence, for a binary symmetric channel, if r ≤ C, then
there exists a code capable of achieving an arbitrarily low
probability of error.
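The condition r ≤ C is easy to check numerically. A small sketch (the
crossover probability and the candidate (k, n) block codes are assumed for
illustration):

import numpy as np

def bsc_capacity(p):
    """C = 1 - H(p) in bits per channel use."""
    return 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p) if 0 < p < 1 else 1.0

p = 0.1
C = bsc_capacity(p)                       # ~0.531 bits per channel use
for k, n in [(1, 3), (1, 2), (2, 3)]:     # hypothetical (k, n) block codes
    r = k / n                             # code rate
    verdict = "r <= C: arbitrarily low error achievable" if r <= C else "r > C: not achievable"
    print(f"rate {k}/{n} = {r:.3f} vs C = {C:.3f} -> {verdict}")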
Information Capacity Theorem:
The Information Capacity Theorem is stated as
"The information capacity of a continuous channel of bandwidth B
hertz, perturbed by additive white Gaussian noise of power spectral
density N0/2 and limited in bandwidth to B, is given by

       C = B log2(1 + P/(N0 B))  bits per second                (9)

where P is the average transmitted signal power."

To prove this, the transmitted signal, band-limited to B hertz and
observed over T seconds, is represented by

       K = 2BT                                                  (10)

samples (one sample every 1/2B seconds), and each received sample is
modelled as

       Yk = Xk + Nk,   k = 1, 2, . . . , K                      (11)
The noise sample Nk is Gaussian with zero mean and variance
given by

       σ² = N0 B                                                (12)

and the transmitted signal power is constrained to

       E[Xk²] = P                                               (13)
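The discrete-time model of Eqs. (11)-(13) can be simulated directly. The
sketch below (bandwidth, time, power and noise density are all assumed
values) draws K = 2BT Gaussian input samples of power P and adds noise of
variance N0*B:

import numpy as np

rng = np.random.default_rng(0)
B, T = 4_000.0, 1.0             # bandwidth in Hz, observation time in s (assumed)
P, N0 = 1.0, 1e-4               # signal power and noise power spectral density (assumed)

K = int(2 * B * T)              # number of samples, Eq. (10)
sigma2 = N0 * B                 # noise variance, Eq. (12)

X = rng.normal(0.0, np.sqrt(P), K)          # Gaussian input with E[Xk^2] = P, Eq. (13)
N = rng.normal(0.0, np.sqrt(sigma2), K)     # white Gaussian noise samples
Y = X + N                                   # received samples, Eq. (11)

print(f"K = {K} samples, empirical E[X^2] = {np.mean(X**2):.3f} (target P = {P})")
print(f"empirical noise variance = {np.var(N):.6f} (target sigma^2 = {sigma2})")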
The mutual information I(Xk ; Yk) can be expressed as

       I(Xk ; Yk) = h(Yk) − h(Yk | Xk) = h(Yk) − h(Nk)

since the noise Nk is independent of the input Xk.

1. The capacity is attained when Xk, and hence Yk, is Gaussian;
   the variance of Yk then equals P + σ². Hence, the differential
   entropy of Yk is

       h(Yk) = (1/2) log2[2πe(P + σ²)]                          (17)
2. The variance of the noise sample Nk equals σ². Hence, the
   differential entropy of Nk is given by

       h(Nk) = (1/2) log2(2πeσ²)                                (18)
3. Now, substituting the above two equations into
   I(Xk ; Yk) = h(Yk) − h(Nk) yields

       C = (1/2) log2(1 + P/σ²)  bits per transmission          (19)
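As a sanity check of the algebra (P and σ² below are assumed values),
Eq. (19) should equal the difference of the differential entropies in
Eqs. (17) and (18):

import numpy as np

P, sigma2 = 1.0, 0.25                                    # assumed values

h_Y = 0.5 * np.log2(2 * np.pi * np.e * (P + sigma2))     # Eq. (17)
h_N = 0.5 * np.log2(2 * np.pi * np.e * sigma2)           # Eq. (18)
C_per_use = 0.5 * np.log2(1 + P / sigma2)                # Eq. (19)

print(f"h(Yk) - h(Nk)           = {h_Y - h_N:.4f} bits")
print(f"0.5*log2(1 + P/sigma^2) = {C_per_use:.4f} bits")  # the two agree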
With the channel used K times for the transmission of K samples
of the process X(t) in T seconds, we find that the information
capacity per unit time is K/T times the result given above for C.
Since K/T = 2B and σ² = N0 B, this gives

       C = B log2(1 + P/(N0 B))  bits per second                (20)
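A worked numeric example of Eq. (20) (the bandwidth and signal-to-noise
ratio are assumed, not from the notes):

import numpy as np

B = 3_000.0                     # channel bandwidth in Hz (assumed)
snr_db = 30.0                   # P / (N0*B) in dB (assumed)
snr = 10 ** (snr_db / 10)       # linear signal-to-noise ratio P / (N0*B)

C = B * np.log2(1 + snr)        # Eq. (20)
print(f"C = {C:.0f} bit/s")     # roughly 29,900 bit/s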