
15-853: Algorithms in the Real World

Error Correcting Codes I

– Overview
– Hamming Codes
– Linear Codes
Mathematicians are like Frenchmen:
whatever you say to them they translate
into their own language and forthwith it is
something entirely different.

- Goethe

General Model

  message (m) → coder → codeword (c) → noisy channel → codeword′ (c′) → decoder → message or error

Errors introduced by the noisy channel:
• changed fields in the codeword (e.g. a flipped bit)
• missing fields in the codeword (e.g. a lost byte), called erasures

How the decoder deals with errors:
• error detection vs.
• error correction
Applications

• Storage: CDs, DVDs, “hard drives”, …
• Wireless: cell phones, wireless links
• Satellite and space: TV, Mars rover, …
• Digital television: DVD, MPEG2 layover
• High-speed modems: ADSL, DSL, …

Reed-Solomon codes are by far the most used in practice, including in pretty much all the examples mentioned above. The algorithms for decoding them are quite sophisticated.
Block Codes

Each message and codeword is of fixed size.

  message (m) → coder → codeword (c) → noisy channel → codeword′ (c′) → decoder → message or error

  Σ = codeword alphabet
  k = |m|,  n = |c|,  q = |Σ|
  C ⊆ Σ^n (the set of codewords)
  Δ(x,y) = number of positions i such that x_i ≠ y_i
  d = min{Δ(x,y) : x,y ∈ C, x ≠ y}
  s = max{Δ(c,c′)} that the code can correct

A code is described as: (n,k,d)_q
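To make these definitions concrete, here is a minimal Python sketch (my illustration, not from the slides) that computes the Hamming distance Δ and the minimum distance d of a code given as a list of equal-length codeword strings:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Delta(x, y): number of positions where the two words differ."""
    assert len(x) == len(y)
    return sum(xi != yi for xi, yi in zip(x, y))

def min_distance(code):
    """d = min over all pairs of distinct codewords of Delta(x, y)."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))
```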
Hierarchy of Codes

These are all block codes, nested from most to least general:
• linear: C forms a linear subspace of Σ^n of dimension k
• cyclic: C is linear, and c0 c1 c2 … c(n−1) being a codeword implies
  c1 c2 … c(n−1) c0 is a codeword
• BCH (Bose-Chaudhuri-Hocquenghem): a subclass of cyclic codes
• Hamming and Reed-Solomon codes are special cases of BCH codes
Binary Codes

Today we will mostly be considering Σ = {0,1}, and will sometimes use (n,k,d) as shorthand for (n,k,d)_2.

In the binary case, Δ(x,y) is often called the Hamming distance.
Hypercube Interpretation

Consider codewords as vertices on a hypercube.

  [Figure: the 3-dimensional cube with vertices 000 … 111; the codewords are marked.]

  d = 2 = minimum distance
  n = 3 = dimensionality
  2^n = 8 = number of nodes

The distance between nodes on the hypercube is the Hamming distance Δ. The minimum distance is d.
001 is equidistant from 000, 011 and 101.
For s-bit error detection: d ≥ s + 1
For s-bit error correction: d ≥ 2s + 1
Error Detection with Parity Bit

A (k+1,k,2)_2 systematic code.

Encoding:
  m1 m2 … mk → m1 m2 … mk p(k+1)
  where p(k+1) = m1 ⊕ m2 ⊕ … ⊕ mk

d = 2 since the parity is always even (it takes two bit changes to go from one codeword to another).
Detects a one-bit error, since such an error gives odd parity.
Cannot be used to correct a 1-bit error, since any odd-parity word is at equal distance Δ from k+1 valid codewords.
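As an illustration (mine, not from the slides), here is a minimal sketch of this scheme in Python, assuming messages are given as lists of bits:

```python
def parity_encode(m):
    """Append an even-parity bit: the XOR of all message bits."""
    p = 0
    for bit in m:
        p ^= bit
    return m + [p]

def parity_check(c):
    """Return True if the received word has even parity (no error detected)."""
    p = 0
    for bit in c:
        p ^= bit
    return p == 0
```

For example, parity_encode([1, 0, 1]) gives [1, 0, 1, 0]; flipping any single bit makes parity_check return False, but the check cannot tell which bit was flipped.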
Error Correcting One Bit Messages

How many bits do we need to correct a one-bit error on a one-bit message?

  [Figures: the 2-cube with vertices 00 … 11 and the 3-cube with vertices 000 … 111, codewords marked.]

  2 bits: 0 → 00,  1 → 11    (n=2, k=1, d=2)
  3 bits: 0 → 000, 1 → 111   (n=3, k=1, d=3)

With 2 bits, a word with one flipped bit is equidistant from both codewords; with 3 bits, every word is within distance 1 of exactly one codeword.

In general we need d ≥ 3 to correct one error. Why?
Example of a (6,3,3)_2 systematic code

Definition: A systematic code is one in which the message appears in the codeword.

  message   codeword
  000       000000
  001       001011
  010       010101
  011       011110
  100       100110
  101       101101
  110       110011
  111       111000
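Using the min_distance sketch from the Block Codes section, we can check mechanically (an aside of mine, not from the slides) that this table really is systematic and has d = 3:

```python
messages = ["000", "001", "010", "011", "100", "101", "110", "111"]
code     = ["000000", "001011", "010101", "011110",
            "100110", "101101", "110011", "111000"]

# Systematic: each codeword begins with its message.
assert all(c.startswith(m) for m, c in zip(messages, code))
# Minimum distance over all pairs is 3, as claimed.
assert min_distance(code) == 3
```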
Error Correcting Multibit Messages

We will first discuss Hamming codes, which detect and correct 1-bit errors.

Codes are of the form (2^r − 1, 2^r − 1 − r, 3) for any r > 1,
e.g. (3,1,3), (7,4,3), (15,11,3), (31,26,3), …
which correspond to 2, 3, 4, 5, … “parity bits” (i.e. n − k).

The high-level idea is to “localize” the error. Any specific ideas?
Hamming Codes: Encoding

Localizing the error to the top or bottom half (1xxx or 0xxx):
  m15 m14 m13 m12 m11 m10 m9 [p8] m7 m6 m5 p4 m3 p2 p1 p0
  p8 = m15 ⊕ m14 ⊕ m13 ⊕ m12 ⊕ m11 ⊕ m10 ⊕ m9
Localizing the error to x1xx or x0xx:
  m15 m14 m13 m12 m11 m10 m9 p8 m7 m6 m5 [p4] m3 p2 p1 p0
  p4 = m15 ⊕ m14 ⊕ m13 ⊕ m12 ⊕ m7 ⊕ m6 ⊕ m5
Localizing the error to xx1x or xx0x:
  m15 m14 m13 m12 m11 m10 m9 p8 m7 m6 m5 p4 m3 [p2] p1 p0
  p2 = m15 ⊕ m14 ⊕ m11 ⊕ m10 ⊕ m7 ⊕ m6 ⊕ m3
Localizing the error to xxx1 or xxx0:
  m15 m14 m13 m12 m11 m10 m9 p8 m7 m6 m5 p4 m3 p2 [p1] p0
  p1 = m15 ⊕ m13 ⊕ m11 ⊕ m9 ⊕ m7 ⊕ m5 ⊕ m3

(The parity bits sit at the power-of-two positions; each pⱼ is the XOR of the message bits whose position index has bit j set. Brackets mark the parity bit introduced at each step.)
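A minimal Python sketch of this encoder (my illustration; the function name and representation are mine, not from the slides). It places the 11 message bits at the non-power-of-two positions and fills each parity position p with the XOR of the data bits whose position index has that bit set; p0 is left out, matching the next slide:

```python
def hamming_encode(msg):
    """Encode 11 message bits into a (15,11,3) Hamming codeword.

    msg: list of 11 bits, mapped to positions 15,14,...,9,7,6,5,3.
    Returns a dict position -> bit for positions 15..1.
    """
    assert len(msg) == 11
    data_positions = [15, 14, 13, 12, 11, 10, 9, 7, 6, 5, 3]
    word = dict(zip(data_positions, msg))
    for p in (8, 4, 2, 1):
        # Parity bit p covers every position whose index has bit p set.
        word[p] = 0
        for pos in data_positions:
            if pos & p:
                word[p] ^= word[pos]
    return word
```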
Hamming Codes: Decoding

  m15 m14 m13 m12 m11 m10 m9 p8 m7 m6 m5 p4 m3 p2 p1 p0

We don’t need p0, so we have a (15,11,?) code.

After transmission, we generate
  b8 = p8 ⊕ m15 ⊕ m14 ⊕ m13 ⊕ m12 ⊕ m11 ⊕ m10 ⊕ m9
  b4 = p4 ⊕ m15 ⊕ m14 ⊕ m13 ⊕ m12 ⊕ m7 ⊕ m6 ⊕ m5
  b2 = p2 ⊕ m15 ⊕ m14 ⊕ m11 ⊕ m10 ⊕ m7 ⊕ m6 ⊕ m3
  b1 = p1 ⊕ m15 ⊕ m13 ⊕ m11 ⊕ m9 ⊕ m7 ⊕ m5 ⊕ m3

With no errors, these will all be zero.
With one error, b8 b4 b2 b1 gives us the error location:
e.g. 0100 would tell us that p4 is wrong, and 1100 would tell us that m12 is wrong.
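Continuing the sketch above (again my illustration): the decoder recomputes each parity over everything it covers, including the parity bit itself, and the resulting bits b8 b4 b2 b1 spell out the error position in binary:

```python
def hamming_correct(word):
    """Locate and fix a single-bit error in a (15,11,3) Hamming codeword.

    word: dict position -> bit for positions 15..1, as built by hamming_encode.
    Returns the error location (0 if no error was detected).
    """
    loc = 0
    for p in (8, 4, 2, 1):
        b = 0
        for pos in range(1, 16):
            if pos & p:          # parity p covers this position (p itself included)
                b ^= word[pos]
        if b:                    # parity check p failed
            loc |= p
    if loc:                      # loc == 0 means no error detected
        word[loc] ^= 1           # flip the erroneous bit back
    return loc
```

For example, encoding a message, flipping word[11], and calling hamming_correct returns 11 and restores the codeword.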
Hamming Codes

Can be generalized to any power of 2:
– n = 2^r − 1 (15 in the example)
– n − k = r (4 in the example)
– d = 3 (discussed later)
– Can correct one error, but can’t tell the difference between one and two!
– Gives a (2^r − 1, 2^r − 1 − r, 3) code

Extended Hamming code:
– Add back the parity bit p0 at the end
– Gives a (2^r, 2^r − 1 − r, 4) code
– Can correct one error and detect two (not so obvious)
Lower bound on parity bits

How many nodes in the hypercube do we need so that d = 3?
Each of the 2^k codewords eliminates its n neighbors plus itself, i.e. n + 1 nodes:

  2^n ≥ (n + 1) · 2^k
  n ≥ k + log2(n + 1)
  n ≥ k + ⌈log2(n + 1)⌉

In the previous Hamming code: 15 ≥ 11 + ⌈log2(15 + 1)⌉ = 15.
Hamming codes are called perfect codes since they match the lower bound exactly.
Lower bound on parity bits

What about fixing 2 errors (i.e. d = 5)?
Each of the 2^k codewords eliminates itself, its neighbors, and its neighbors’ neighbors, giving 1 + C(n,1) + C(n,2) nodes:

  2^n ≥ (1 + n + n(n−1)/2) · 2^k
  n − k ≥ log2(1 + n + n(n−1)/2)
  n − k ≥ 2·log2(n) − 1

Generally, to correct s errors:

  n − k ≥ log2(1 + C(n,1) + C(n,2) + … + C(n,s))
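This sphere-packing bound is easy to check mechanically. Here is a small sketch (my addition, not from the slides) that tests whether given parameters (n, k) can possibly correct s errors:

```python
from math import comb

def hamming_bound_ok(n, k, s):
    """Sphere-packing bound: 2^n >= ball * 2^k, where the ball of radius s
    around a codeword contains sum_{i=0..s} C(n,i) words."""
    ball = sum(comb(n, i) for i in range(s + 1))
    return 2**n >= ball * 2**k

# The (15,11,3) Hamming code meets the bound with equality for s = 1:
assert hamming_bound_ok(15, 11, 1)      # 2^15 == 16 * 2^11
assert not hamming_bound_ok(15, 12, 1)  # one more message bit is impossible
```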
Lower Bounds: a side note

The lower bounds assume random placement of bit errors.
In practice, errors are likely to be less than random, e.g. evenly spaced or clustered:

  x    x    x    x    x    x
            x x x x x x

Can we do better if we assume regular errors?

We will come back to this later when we talk about Reed-Solomon codes. In fact, this is the main reason why Reed-Solomon codes are used much more than Hamming codes.
Linear Codes

If Σ is a field, then Σ^n is a vector space.

Definition: C is a linear code if it is a linear subspace of Σ^n of dimension k.

This means that there is a set of k basis vectors v_i ∈ Σ^n (1 ≤ i ≤ k) that span the subspace, i.e. every codeword can be written as:

  c = a1·v1 + … + ak·vk,   a_i ∈ Σ

The sum of two codewords is a codeword.
Linear Codes

Basis vectors for the (7,4,3)_2 Hamming code:

       m7 m6 m5 p4 m3 p2 p1
  v1 =  1  0  0  1  0  1  1
  v2 =  0  1  0  1  0  1  0
  v3 =  0  0  1  1  0  0  1
  v4 =  0  0  0  0  1  1  1

How can we see that d = 3?
Generator and Parity Check Matrices

Generator matrix:
  A k × n matrix G such that C = {xG | x ∈ Σ^k}.
  Made by stacking the basis vectors.
Parity check matrix:
  An (n − k) × n matrix H such that C = {y ∈ Σ^n | Hyᵀ = 0}.
  Codewords are the nullspace of H.
These always exist for linear codes.
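As a sketch (my addition, assuming NumPy), encoding and parity checking over GF(2) are just matrix products reduced mod 2:

```python
import numpy as np

def encode(x, G):
    """Codeword c = xG over GF(2); x is a length-k 0/1 vector."""
    return (np.array(x) @ G) % 2

def is_codeword(y, H):
    """y is a codeword iff its syndrome H y^T is the zero vector."""
    return not np.any((H @ np.array(y)) % 2)
```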
Advantages of Linear Codes

• Encoding is efficient (vector-matrix multiply)
• Error detection is efficient (vector-matrix multiply)
• The syndrome (Hyᵀ) carries the error information
• Gives a q^(n−k)-sized table for decoding
  (useful if n − k is small)
Example and “Standard Form”

For the Hamming (7,4,3) code:

  G = [ 1 0 0 1 0 1 1 ]
      [ 0 1 0 1 0 1 0 ]
      [ 0 0 1 1 0 0 1 ]
      [ 0 0 0 0 1 1 1 ]

By swapping columns 4 and 5 it takes the form [I_k | A].
A code with a generator matrix in this form is systematic, and G is in “standard form”:

  G = [ 1 0 0 0 1 1 1 ]
      [ 0 1 0 0 1 1 0 ]
      [ 0 0 1 0 1 0 1 ]
      [ 0 0 0 1 0 1 1 ]
Relationship of G and H

If G is in standard form [I_k | A], then H = [Aᵀ | I_{n−k}].

Example for the (7,4,3) Hamming code (the right block A of G is transposed into H):

  G = [ 1 0 0 0 1 1 1 ]      H = [ 1 1 1 0 1 0 0 ]
      [ 0 1 0 0 1 1 0 ]          [ 1 1 0 1 0 1 0 ]
      [ 0 0 1 0 1 0 1 ]          [ 1 0 1 1 0 0 1 ]
      [ 0 0 0 1 0 1 1 ]
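Continuing the NumPy sketch above, we can build H from A and confirm that every row of G (hence every codeword) has zero syndrome:

```python
G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
A = G[:, 4:]                                 # right block of [I_k | A]
H = np.hstack([A.T, np.eye(3, dtype=int)])   # H = [A^T | I_{n-k}]
assert not np.any((H @ G.T) % 2)             # H G^T = 0 over GF(2)
```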
Proof that H is a Parity Check Matrix

Suppose that x is a message. Then
  H(xG)ᵀ = H(Gᵀxᵀ) = (HGᵀ)xᵀ = (AᵀI_k + I_{n−k}Aᵀ)xᵀ = (Aᵀ + Aᵀ)xᵀ = 0
(since we are working over GF(2)).

Now suppose that Hyᵀ = 0. Then Aᵀ_{i,*} · yᵀ[1..k] + yᵀ_{k+i} = 0 for 1 ≤ i ≤ n − k
(where Aᵀ_{i,*} is row i of Aᵀ and yᵀ[1..k] are the first k elements of yᵀ).
Thus y[1..k] · A_{*,i} = y_{k+i}, where A_{*,i} is column i of A and y[1..k] are the first k elements of y, so y[k+1..n] = y[1..k]·A.

Consider x = y[1..k]. Then xG = [y[1..k] | y[1..k]·A] = y.
Hence if Hyᵀ = 0, y is the codeword for x = y[1..k].
The d of linear codes

Theorem: A linear code has distance d if every set of d − 1 columns of H is linearly independent, but there is some set of d columns that is linearly dependent (i.e., sums to 0).

Proof: If some set of d − 1 or fewer columns is linearly dependent (sums to 0), then for any codeword y there is another codeword y′, obtained by inverting the bits in the positions corresponding to those columns, and both have the same syndrome (0); so two codewords lie at distance less than d.

Conversely, if every set of d − 1 columns is linearly independent, then changing any d − 1 or fewer bits in a codeword y must change the syndrome (since the corresponding columns cannot sum to 0), so no two codewords are within distance d − 1 of each other.
Dual Codes

For every code with
  G = [I_k | A] and H = [Aᵀ | I_{n−k}]
we have a dual code with
  G = [I_{n−k} | Aᵀ] and H = [A | I_k]

The duals of the Hamming codes are the binary simplex codes: (2^r − 1, r, 2^(r−1)).
The duals of the extended Hamming codes are the first-order Reed-Muller codes.
Note that these codes are highly redundant and can fix many errors.
NASA Mariner

Deep space probes from 1969-1977; Mariner 10 shown.

Used the (32,6,16) Reed-Muller code (r = 5):
  Rate = 6/32 = 0.1875 (only about 1 out of 5 bits is useful)
  Can fix up to 7 bit errors per 32-bit word
How to find the error locations

Hyᵀ is called the syndrome (no error if it is 0).
In general we can find the error locations by creating a table that maps each syndrome to a set of error locations.

Theorem: assuming s ≤ (d − 1)/2, every syndrome value corresponds to a unique set of at most s error locations.
Proof: exercise.

The table has q^(n−k) entries, each of size at most n (i.e., keep a bit vector of locations).
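A minimal sketch of building such a table for single-bit errors (my addition, continuing the NumPy examples above): the syndrome of a unit error vector e_i is H e_iᵀ, which is just column i of H, so for a Hamming code every nonzero syndrome points at a unique position.

```python
def syndrome_table(H):
    """Map each single-bit-error syndrome to its error location.

    The syndrome of an error in position i is column i of H,
    so the table gets one entry per distinct column.
    """
    n = H.shape[1]
    table = {}
    for i in range(n):
        e = np.zeros(n, dtype=int)
        e[i] = 1
        table[tuple((H @ e) % 2)] = i
    return table

# To decode a received word y: compute s = tuple((H @ y) % 2);
# if any(s), flip bit table[s] of y.
```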
