Introduction to Communications
Lecture 32: Channel Coding
This lecture:
1. Mutual Information.
2. Channel Capacity.
3. The Hamming Code.
Ref: CCR pp. 713–719, 549–550 (& 560–567), A Mathematical
Theory of Communication.
▶ Notation: instead of the cumbersome P(X = x) and P(Y = y) we simply write P(x) and P(y).
▶ Similarly, we write P(x, y) for the joint probability P(X = x, Y = y).
▶ We write P(x | y) for the conditional probability P(X = x | Y = y) of X given Y.
▶ Conversely, P(y | x) stands for P(Y = y | X = x).
▶ We model the channel as a discrete memoryless channel (DMC).
▶ It is discrete in the sense that the input and output alphabets are discrete (finite) sets, and memoryless in the sense that each output symbol depends only on the current input symbol.
[Figure: a DMC with channel inputs x1, x2 and channel outputs y1, y2, y3; each arrow from xi to yj is labelled with its transition probability P(yj | xi).]
▶ Example: the binary symmetric channel (BSC) flips each bit with crossover probability α:

P(Y = 1 | X = 0) = P(Y = 0 | X = 1) = α.
[Figure: the BSC. Channel input 0 maps to output 0, and input 1 to output 1, with probability 1 − α; each input maps to the opposite output with probability α.]
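To make the channel model concrete, here is a minimal Python sketch (my own illustration, not lecture code; the function name bsc is an assumption) that passes a bit array through a BSC and checks the empirical flip rate:

    import numpy as np

    def bsc(bits, alpha, rng):
        """Pass a 0/1 array through a binary symmetric channel:
        each bit is flipped independently with probability alpha."""
        flips = rng.random(bits.shape) < alpha
        return bits ^ flips.astype(bits.dtype)

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=100_000)
    y = bsc(x, alpha=0.1, rng=rng)
    print("empirical flip rate:", np.mean(x != y))   # close to alpha = 0.1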
It turns out that the capacity is

C = 1 − Hb(α) = 1 + α log2 α + (1 − α) log2(1 − α)   (bits per channel use),

where Hb is the binary entropy function.
[Plot (capacity.m): capacity C of the BSC, in bits per channel use, against the crossover probability α ∈ [0, 1]; C = 1 at α = 0 and α = 1, and C = 0 at α = 1/2.]
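A few lines of Python reproduce the curve (a sketch of what a script like capacity.m presumably computes; the original MATLAB source is not shown in these notes):

    import numpy as np

    def bsc_capacity(alpha):
        """C = 1 - Hb(alpha), in bits per channel use."""
        a = np.asarray(alpha, dtype=float)
        # Binary entropy Hb, with the convention 0*log2(0) = 0.
        with np.errstate(divide="ignore", invalid="ignore"):
            hb = -a * np.log2(a) - (1 - a) * np.log2(1 - a)
        return 1 - np.nan_to_num(hb)    # nan_to_num handles alpha = 0 and 1

    print(bsc_capacity([0.0, 0.1, 0.5, 1.0]))   # [1.0, ~0.531, 0.0, 1.0]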
▶ Choose a block length n and take k = nC message bits per block (round up).
▶ To each of the 2^k bit sequences, randomly assign a sequence of n channel input symbols (using a suitable probability distribution).
▶ This constitutes the codebook.
▶ The error probability can be made arbitrarily small as the block length n tends to infinity.
▶ For finite n, the BER will be non-zero.
▶ We need to wait for a block of n symbols to be received before a decoding decision can be made.
▶ In a systematic code, the first k message bits are copied directly into the codeword; the parity bits make up the other n − k.
▶ A simple example: the length-n repetition code, decoded by majority vote, corrects up to (n − 1)/2 errors (n odd), but its rate 1/n tends to zero: in the limit we never actually transmit information! (See the simulation sketch below.)
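A short simulation (my own illustration, under the BSC model above) makes the trade-off visible: majority-vote decoding drives the BER down as n grows, but the rate 1/n shrinks with it:

    import numpy as np

    def repetition_ber(n, alpha, trials=100_000, seed=0):
        """Send one bit n times over a BSC(alpha), decode by majority
        vote (n odd), and return the empirical bit-error rate."""
        rng = np.random.default_rng(seed)
        flips = rng.random((trials, n)) < alpha       # channel flips
        return np.mean(flips.sum(axis=1) > n // 2)    # majority vote fails

    for n in (1, 3, 5, 9):
        print(f"n={n}: rate={1/n:.3f}, BER~{repetition_ber(n, 0.1):.5f}")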
Simple Parity

▶ A single parity bit appended to a block detects any odd number of bit errors, but cannot correct them.

The (7,4) Hamming Code

▶ The Hamming code appends three parity bits x5, x6, x7 to the message bits x1, x2, x3, x4:

x5 ≡ x2 + x3 + x4 (mod 2)
x6 ≡ x1 + x3 + x4 (mod 2)
x7 ≡ x1 + x2 + x4 (mod 2)

▶ At the receiver, three syndrome bits are computed from the received bits y1, ..., y7:

s1 ≡ y2 + y3 + y4 + y5 (mod 2)
s2 ≡ y1 + y3 + y4 + y6 (mod 2)
s3 ≡ y1 + y2 + y4 + y7 (mod 2)
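These equations translate directly into code. Below is a minimal Python sketch (function names are my own) that encodes four message bits with the parity equations above and uses the syndrome to correct a single bit error:

    def hamming74_encode(m):
        """m = [x1, x2, x3, x4] -> codeword [x1, ..., x7] (mod-2 arithmetic)."""
        x1, x2, x3, x4 = m
        x5 = (x2 + x3 + x4) % 2
        x6 = (x1 + x3 + x4) % 2
        x7 = (x1 + x2 + x4) % 2
        return [x1, x2, x3, x4, x5, x6, x7]

    # Each single-bit error yields a distinct nonzero syndrome (s1, s2, s3);
    # this table maps the syndrome to the (1-based) position of the flipped bit.
    SYNDROME_TO_POS = {(0, 1, 1): 1, (1, 0, 1): 2, (1, 1, 0): 3, (1, 1, 1): 4,
                       (1, 0, 0): 5, (0, 1, 0): 6, (0, 0, 1): 7}

    def hamming74_decode(y):
        """Correct at most one bit error in y = [y1, ..., y7]; return [x1, ..., x4]."""
        y1, y2, y3, y4, y5, y6, y7 = y
        s = ((y2 + y3 + y4 + y5) % 2,
             (y1 + y3 + y4 + y6) % 2,
             (y1 + y2 + y4 + y7) % 2)
        y = list(y)
        if s in SYNDROME_TO_POS:             # s == (0, 0, 0) means no error detected
            y[SYNDROME_TO_POS[s] - 1] ^= 1   # flip the offending bit back
        return y[:4]

    x = hamming74_encode([1, 0, 1, 1])
    x[2] ^= 1                                # channel flips x3
    print(hamming74_decode(x))               # -> [1, 0, 1, 1]

Because every single-bit error produces a different nonzero syndrome, the decoder can pinpoint and flip the offending bit, correcting any single error in the 7-bit block.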
For a bandlimited AWGN channel, the Shannon–Hartley theorem gives the capacity

C = B log2(1 + SNR)   (bits per second),

where B is the bandwidth in Hz and SNR is the linear signal-to-noise ratio.
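As a quick worked example (my own numbers, chosen to be representative of a telephone-grade line): a channel with B = 3000 Hz and an SNR of 30 dB gives

    import math

    B = 3000.0                      # bandwidth in Hz
    snr_db = 30.0                   # signal-to-noise ratio in dB
    snr = 10 ** (snr_db / 10)       # dB -> linear ratio (= 1000)
    C = B * math.log2(1 + snr)      # Shannon-Hartley capacity
    print(f"C = {C:.0f} bits/s")    # about 29,900 bits/s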