[Figure: block diagram of the coded channel — source → encoder → decoder → sink, with information word i, transmitted codeword c, received vector r, and error vector f.]
A simple example of binary block codes is the single parity check code for which n=k+1.
The last co-ordinate of the codeword satisfies Equation 2.
$$c_{n-1} = \sum_{j=0}^{n-2} i_j \pmod{2}, \quad \text{equivalently} \quad \sum_{j=0}^{n-1} c_j \equiv 0 \pmod{2} \qquad (2)$$
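As a concrete illustration, here is a minimal Python sketch of this code (the function names spc_encode and spc_check are our own, chosen purely for illustration):

```python
# Minimal sketch of the single parity check code (n = k + 1): the
# appended bit is the mod-2 sum of the k information bits, so every
# valid codeword has even overall parity, as in Equation 2.

def spc_encode(info_bits):
    """Append a single parity bit to a list of 0/1 information bits."""
    return info_bits + [sum(info_bits) % 2]

def spc_check(codeword):
    """A codeword is valid iff the mod-2 sum of all n bits is zero."""
    return sum(codeword) % 2 == 0

print(spc_encode([1, 0, 1]))    # [1, 0, 1, 0] -- even parity
print(spc_check([1, 0, 1, 0]))  # True: valid codeword
print(spc_check([1, 1, 1, 0]))  # False: a single error is detected
```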
The sum of two codewords $\mathbf{a}$ and $\mathbf{c}$, $\mathbf{a} + \mathbf{c}$, is obtained by adding $a_i + c_i$, $i = 0, \ldots, n-1$, with the mod-2 operation applied to the sum of every co-ordinate.
The Hamming weight of a vector $\mathbf{c}$ is defined as the number of non-zero coordinates (Equation 3), so $0 \le \mathrm{wt}(\mathbf{c}) \le n$, where $n$ is the length of the vector $\mathbf{c}$.
$$\mathrm{wt}(\mathbf{c}) = \sum_{j=0}^{n-1} \mathrm{wt}(c_j), \quad \text{where} \quad \mathrm{wt}(c_j) = \begin{cases} 0, & c_j = 0 \\ 1, & c_j \neq 0 \end{cases} \qquad (3)$$
The Hamming distance between two vectors $\mathbf{a}$ and $\mathbf{c}$, $\mathrm{dist}(\mathbf{a}, \mathbf{c})$, is the number of coordinates in which $\mathbf{a}$ and $\mathbf{c}$ differ (Equations 4 and 5).
$$\mathrm{dist}(\mathbf{a}, \mathbf{c}) = \sum_{j=0}^{n-1} \mathrm{wt}(a_j + c_j), \quad \text{where} \quad \mathrm{wt}(a_j + c_j) = \begin{cases} 0, & a_j = c_j \\ 1, & a_j \neq c_j \end{cases} \qquad (4)$$
$$\mathrm{dist}(\mathbf{a}, \mathbf{c}) = \mathrm{wt}(\mathbf{a} + \mathbf{c}) \qquad (5)$$
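Equations 3 to 5 translate directly into code. The Python sketch below (with illustrative helper names) computes the mod-2 vector sum, the Hamming weight, and the Hamming distance:

```python
# Sketch of Equations 3-5 for binary vectors given as lists of 0/1 bits.

def vec_add(a, c):
    """Coordinate-wise mod-2 sum of two equal-length vectors."""
    return [(aj + cj) % 2 for aj, cj in zip(a, c)]

def wt(c):
    """Hamming weight: the number of non-zero coordinates (Equation 3)."""
    return sum(1 for cj in c if cj != 0)

def dist(a, c):
    """Hamming distance via Equation 5: dist(a, c) = wt(a + c)."""
    return wt(vec_add(a, c))

a = [1, 0, 1, 1, 0]
c = [1, 1, 0, 1, 0]
print(wt(a))       # 3
print(dist(a, c))  # 2 -- a and c differ in two coordinates
```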
We can now apply some rigour to our conceptual example. Clearly, we were comparing the received codeword with the possible transmitted codewords. We then selected the codeword which, at least in our estimation, had the minimum distance to the received codeword.
Using Equation 5, which applies to linear codes, the minimum distance is easier to calculate through the Hamming weight than directly.
The error correction and detection capability of any coding scheme is determined by the minimum distance $d$ of the code, that is, the minimum distance between any two codewords of the code (Equation 6). For a linear code, the minimum weight of the non-zero codewords also gives the minimum distance (Equation 7).
$$d = \min_{\substack{\mathbf{a}, \mathbf{c} \in C \\ \mathbf{a} \neq \mathbf{c}}} \mathrm{dist}(\mathbf{a}, \mathbf{c}) \qquad (6)$$

$$d = \min_{\substack{\mathbf{a}, \mathbf{c} \in C \\ \mathbf{a} \neq \mathbf{c}}} \mathrm{wt}(\mathbf{a} + \mathbf{c}) \qquad (7)$$
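A short sketch of Equations 6 and 7, using the (3, 2) single parity check code from earlier as the example; for this linear code the two computations agree:

```python
# Compare Equation 6 (pairwise distances) with Equation 7 (weights of
# the non-zero codewords) for the (n = 3, k = 2) single parity check code.
from itertools import combinations, product

def dist(a, c):
    """Hamming distance: number of coordinates where a and c differ."""
    return sum(aj != cj for aj, cj in zip(a, c))

# All four codewords: two information bits plus their parity bit.
code = [(i0, i1, (i0 + i1) % 2) for i0, i1 in product([0, 1], repeat=2)]

d_min = min(dist(a, c) for a, c in combinations(code, 2))  # Equation 6
w_min = min(sum(c) for c in code if any(c))                # Equation 7

print(d_min, w_min)  # 2 2 -- minimum distance equals minimum weight
```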
The general concept can be stated as follows:
A code has the ability to correct a received vector r=c+f if the distance between r and any
other valid codeword a satisfies the condition:
$$\mathrm{dist}(\mathbf{c}, \mathbf{c} + \mathbf{f}) < \mathrm{dist}(\mathbf{a}, \mathbf{c} + \mathbf{f}) \qquad (8)$$

$$\mathrm{wt}(\mathbf{f}) < \mathrm{wt}(\mathbf{a} + \mathbf{c} + \mathbf{f}) \qquad (9)$$
This condition is guaranteed to hold, for every valid codeword $\mathbf{a} \neq \mathbf{c}$, whenever

$$\mathrm{wt}(\mathbf{f}) \le \left\lfloor \frac{d-1}{2} \right\rfloor \qquad (10)$$
In Equations 8 to 10, $\mathbf{f}$ is the error vector. The inherent assumption here is that fewer errors are more probable, so we map the received vector to the nearest codeword.
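The sketch below illustrates this nearest-codeword rule with a (3, 1) repetition code (not discussed above, but chosen because its minimum distance d = 3 is obvious); it corrects up to t = ⌊(d − 1)/2⌋ = 1 error:

```python
# Nearest-codeword decoding per Equations 8-10, using the (3, 1)
# repetition code, for which d = 3 and hence t = (d - 1) // 2 = 1.

def dist(a, c):
    return sum(aj != cj for aj, cj in zip(a, c))

code = [(0, 0, 0), (1, 1, 1)]   # the two codewords of the repetition code

r = (1, 0, 1)                   # (1, 1, 1) corrupted by a single error
decoded = min(code, key=lambda c: dist(r, c))
print(decoded)                  # (1, 1, 1) -- the error is corrected
```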
5.3 The Hamming Bound
Note that the inequality in these equations defines a conceptual space surrounding a valid codeword point. All words within this space (which are, by implication, not valid codewords) can be unambiguously mapped onto the valid codeword. We can extend this easily to a three-dimensional concept and a definition of the Hamming bound.
Any binary code defined by (n, k, d) obeys the inequality in Equation 11:
$$2^k \left[ 1 + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{e} \right] \le 2^n, \quad \text{where} \quad e = \left\lfloor \frac{d-1}{2} \right\rfloor \qquad (11)$$
Each valid codeword is thus surrounded by a correction sphere. The minimum diameter of the correction sphere corresponds to the minimum distance between codewords for any given code.
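The bound is easy to verify numerically. The sketch below checks Equation 11 for a given (n, k, d); the (7, 4, 3) Hamming code, a standard example not developed above, meets the bound with equality and is therefore called a perfect code:

```python
# Check the Hamming bound of Equation 11: 2^k * sum_{i=0..e} C(n, i) <= 2^n,
# with e = (d - 1) // 2 the number of correctable errors.
from math import comb

def satisfies_hamming_bound(n, k, d):
    e = (d - 1) // 2
    return 2**k * sum(comb(n, i) for i in range(e + 1)) <= 2**n

print(satisfies_hamming_bound(7, 4, 3))  # True: 16 * (1 + 7) = 128 = 2^7
print(satisfies_hamming_bound(7, 5, 3))  # False: no such code can exist
```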
5.4 Syndromes
A syndrome, in English, means a concurrence, especially of symptoms, characteristic of a particular problem or condition.
The idea behind channel coding is that we set up a mathematical mechanism for detecting
symptoms, or the syndrome, of a corrupted codeword. We will give a simple example.
Linear block codes obey Equation 12:
$$\mathbf{H} \cdot \mathbf{c}^T = \mathbf{0} \quad \text{or} \quad \mathbf{c} \cdot \mathbf{H}^T = \mathbf{0} \quad \text{only for valid codewords} \qquad (12)$$
$\mathbf{H}$ in this case is the parity check matrix generated specifically for the code used.
The syndrome is obtained by applying Equation 12 to the received codeword. A non-zero result indicates that an error has occurred. Further operations indicate the most probable error that would give the detected syndrome, and the error is then corrected. For those who have been to the doctor, this is a very familiar process.
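The sketch below works through this process for the (7, 4) Hamming code, a standard example not developed above. Its parity check matrix H has as columns the binary representations of the numbers 1 to 7, so a non-zero syndrome directly reads out the position of a single-bit error:

```python
# Syndrome decoding per Equation 12 for the (7, 4) Hamming code. The
# j-th column of H is the binary representation of j, so the syndrome
# of a single-bit error names the corrupted position.

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(r):
    """Compute s = H . r^T mod 2; s = 0 only for valid codewords."""
    return [sum(hj * rj for hj, rj in zip(row, r)) % 2 for row in H]

r = [1, 0, 1, 1, 0, 1, 0]         # a valid codeword...
r[4] ^= 1                         # ...corrupted at position 5
s = syndrome(r)
pos = s[0] + 2 * s[1] + 4 * s[2]  # read the syndrome as a binary number
print(s, pos)                     # [1, 0, 1] 5 -- the error is located
r[pos - 1] ^= 1                   # flip the bit back
print(syndrome(r))                # [0, 0, 0] -- a valid codeword again
```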
5.5 Decoding and Error Probability
In decoding, a decision guide has to be given to the decoding algorithm by the system
designer. The algorithm will depend on the nature of the expected errors, the required
performance, and other factors. The following are some simple illustrative examples:
Maximum likelihood decoding: This method selects, as the transmitted codeword, the codeword c that has the largest probability P(r|c). Where two codewords share this property, a random decision is made, which introduces a probability of error or false decoding.
Symbolwise maximum a posteriori decoding: Each element of the codeword is independently decoded. It should be noted that when the resulting codeword is assembled, it may not be a valid codeword. In that case, decoding failure occurs.
Bounded minimum distance decoding: A requirement here is that r must lie within the
correction sphere. We can have correct decoding, false decoding, or a decoding failure.
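As a sketch of the first of these methods: over a binary symmetric channel with crossover probability p < 0.5 (an assumption made here), P(r|c) depends only on the Hamming distance between r and c, so maximum likelihood decoding reduces to choosing the nearest codeword:

```python
# Brute-force maximum likelihood decoding over a binary symmetric
# channel: P(r|c) = p^d * (1 - p)^(n - d), where d = dist(r, c).

def dist(a, c):
    return sum(aj != cj for aj, cj in zip(a, c))

def ml_decode(r, code, p=0.1):
    def likelihood(c):
        d = dist(r, c)
        return p**d * (1 - p)**(len(r) - d)
    # Note: max() resolves ties by order; the text above calls for a
    # random decision when two codewords are equally likely.
    return max(code, key=likelihood)

code = [(0, 0, 0), (1, 1, 1)]      # (3, 1) repetition code
print(ml_decode((0, 1, 0), code))  # (0, 0, 0)
```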
5.6 Code Generation
The modern communication channel contains a lot of computing power, and all the
processes of coding and decoding are handled using algorithms programmed in hard or
soft form into the channel. There will be a code generating algorithm, which can be a
matrix or a polynomial, depending on the selected method.
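As a sketch of the matrix method, the generator matrix G below (one conventional choice, with parity bits in positions 1, 2 and 4) produces the same (7, 4) Hamming code used in the syndrome sketch, so every generated codeword satisfies H · c^T = 0:

```python
# Codeword generation by matrix multiplication over GF(2): c = i . G.

G = [[1, 1, 1, 0, 0, 0, 0],
     [1, 0, 0, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(i):
    """Multiply a 4-bit information word i by G, mod 2."""
    return [sum(ij * gj for ij, gj in zip(i, col)) % 2 for col in zip(*G)]

print(encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 1] -- a valid codeword
```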
5.7 Other Types of Channel Codes
It should be noted that we have presented only the most basic examples here as an aid to
understanding the concepts. There are several sources that dwell at length on some of the
modern and sophisticated coding techniques. All these make very interesting
mathematical reading.
6. CONCLUSION
The intimate linkage between mathematics and communications has been demonstrated,
using coding theory, specifically channel coding, as a vehicle for this demonstration. It is
the hope of the author that this will re-awaken awareness of this important linkage,
creating a basis for joint research and training programmes among the Electrical
Engineering, Mathematics, and Physics disciplines within Uganda, and particularly within
Makerere University.