Linear Codes
Enes Pasalic
University of Primorska
Koper, 2013
Contents
1 Preface
3 Coding theory
Chapter 1
Preface
This book has been written as lecture notes for students who need a grasp
of the basic principles of linear codes.
The scope and level of the lecture notes are considered suitable for under-
graduate students of Mathematical Sciences at the Faculty of Mathematics,
Natural Sciences and Information Technologies at the University of Primorska.
It is not possible to cover every aspect of linear codes in detail here, but I
hope to provide the reader with an insight into the essence of linear codes.
Enes Pasalic
[email protected]
Chapter 2
Shannon Theory and Coding
• Decoding problem
• Hamming distance
• Error correction
• Shannon
Mariners Course description Decoding problem Hamming distance Error correction Shannon
– Probably not!
You would not be able to listen to CDs, retrieve correct data from
your hard disk, or have quality communication over the telephone, etc.
Coding efficiency
Mariner story
• Back in 1969, the Mariner probes (and later the Voyagers) were
supposed to send pictures from Mars to Earth
• The problem was thermal noise when sending pixels with a
64-level grey scale
• This means that the total energy per bit is reduced - this
causes an increased probability of (bit) error!
ISBN
ISBN - example
0 ≡ 1 · 0 + 2 · 7 + 3 · 9 + 4 · 2 + 5 · 3 + 6 · x6 + 7 · 5 + 8 · 1 + 9 · 9 + 10 · 10 (mod 11)
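The check equation can be solved for the missing digit x6 with a few lines of Python (a small sketch; the digit vector below is taken from the example above, with 10 standing for the symbol X):

```python
# ISBN-10 check: the weighted sum of the digits must be 0 mod 11.
# Position 6 (weight 6) is the unknown digit x6.
digits = [0, 7, 9, 2, 3, None, 5, 1, 9, 10]
partial = sum(w * d for w, d in zip(range(1, 11), digits) if d is not None)
x6 = next(x for x in range(11) if (partial + 6 * x) % 11 == 0)
print(x6)  # 7
```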
Course topics
Block code
c0 = (00000) c1 = (10110)
c2 = (01011) c3 = (11101)
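The minimum distance of this small code can be checked by brute force (a quick sketch):

```python
from itertools import combinations

# The four codewords of the block code above.
C = ["00000", "10110", "01011", "11101"]

def hamming(x, y):
    # number of positions in which x and y differ
    return sum(a != b for a, b in zip(x, y))

d = min(hamming(x, y) for x, y in combinations(C, 2))
print(d)  # 3
```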
r =n−k
Coding Alphabet
• A is a q-ary alphabet
• typically q = 2, q = p > 2, or q = p^m; sometimes simply
• A = {a, b, c, d}
Transmission scheme
[Diagram: DATA → Encryption → Coding (00 → 00000, 01 → 01011, 10 → 10110, 11 → 11101) → Transmission channel (noisy) → Decoding → Decryption. E.g. the received word 00001 is decoded back to the codeword 00000.]
Decoding problem
1. error correction
2. error detection (retransmission request)
3. Hybrid approach: both correction and detection
Pb(r, c) = (1 − p)^{n−d} (p/(q − 1))^d.
Codeword spheres
[Figure: spheres of radius 1 around the codewords 00000, 10110, 01011, 11101; any two codewords are at distance ≥ 3. Since d = 2e + 1 = 3, the code corrects e = 1 error.]
Example
• The even case and the odd case differ when both correction
and detection are performed!
Decoding example
Consider the code C (example 6) with codewords:
Decoding example II
Decode as c3 .
r = (10100).
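Minimum-distance decoding of r = (10100) against the four codewords can be sketched as:

```python
C = ["00000", "10110", "01011", "11101"]
r = "10100"

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

# Decode to the codeword closest to r in Hamming distance.
decoded = min(C, key=lambda c: hamming(c, r))
print(decoded)  # 10110 (at distance 1)
```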
Decoding complexity
Pe = Σ_{0 ≤ k < N/2} C(N, k) (1 − p)^k p^{N−k} < (0.07)^N,
thus Pe → 0 for N → ∞.
P*(M, n, p) := min_C P_C
Shannon’s theorem
Theorem If the rate R = (log_2 M)/n is in the range 0 < R < 1 − H(p)
and M_n := 2^⌊Rn⌋, then
P*(M_n, n, p) → 0 if n → ∞
CBSC = 1 − H(p).
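The BSC capacity is easy to evaluate (a small sketch; H is the binary entropy function):

```python
from math import log2

# Capacity of the binary symmetric channel: C_BSC = 1 - H(p).
def capacity(p):
    H = -p * log2(p) - (1 - p) * log2(1 - p)
    return 1 - H

print(capacity(0.5))  # 0.0 (useless channel)
print(capacity(0.1))  # about 0.531
```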
OPTIONAL FOR
INTERESTED STUDENTS
b := (np(1 − p)/(1/2))^{1/2}
Then,
P(w > np + b) ≤ 1/2   (Chebyshev’s inequality)
– Since p < 1/2, ρ := ⌊np + b⌋ < n/2 for large n
(ρ/n) log(ρ/n) = p log p + O(n^{−1/2})
(1 − ρ/n) log(1 − ρ/n) = q log q + O(n^{−1/2})   (n → ∞)
• Finally we need two functions. If u, v, y ∈ {0, 1}^n, x_i ∈ C, then
f(u, v) = 0 if d(u, v) > ρ,  1 if d(u, v) ≤ ρ
g_i(y) = 1 − f(y, x_i) + Σ_{j≠i} f(y, x_j).
• Express P_i using g_i,
P_i = Σ_{y∈{0,1}^n} P(y|x_i) g_i(y)   (x_i is fixed)
    = Σ_{y∈{0,1}^n} P(y|x_i){1 − f(y, x_i)} + Σ_y Σ_{j≠i} P(y|x_i) f(y, x_j),
where the first term equals Pb(y ∉ B_ρ(x_i)).
P_C ≤ 1/2 + M^{−1} Σ_y Σ_{i=1}^{M} Σ_{j≠i} P(y|x_i) f(y, x_j)
• Now we use the fact that P*(M, n, p) ≤ E(P_C), where E(P_C)
is the expected value over all possible codes C. Hence,
P*(M, n, p) ≤ 1/2 + M^{−1} Σ_y Σ_{i=1}^{M} Σ_{j≠i} E(P(y|x_i)) E(f(y, x_j))
            = 1/2 + M^{−1} Σ_y Σ_{i=1}^{M} Σ_{j≠i} E(P(y|x_i)) · |B_ρ|/2^n
            = 1/2 + (M − 1) 2^{−n} |B_ρ|.
CHAPTER 2. SHANNON THEORY AND CODING
Chapter 3
Coding theory
• Vector spaces
• Linear codes
• Generator matrix
• Parity check
Decoding Shannon Vector spaces Linear codes Generator matrix Parity check
Example
c1 = (110000) c2 = (001100)
Decoding example
Consider the code C (example 6) with codewords:
Decoding example II
• Let r=(00011). Then we compute,
Decode as c3 .
Decoding complexity
• Not always the best ones but allows for efficient coding and
decoding.
q^k messages ↔ q^k n-tuples in V_n(F)
Vector spaces-basics
• What is a k-dim. vector subspace S ⊂ V_n(F)?
• Simply, a subspace is determined by k linearly independent
vectors in V_n(F)
c = a_1 c_1 + a_2 c_2,   (a_1, a_2) ∈ F^2;   F = GF(2^2)
(q^n − 1)(q^n − q)(q^n − q^2) · · · (q^n − q^{k−1}) = ∏_{i=0}^{k−1} (q^n − q^i).
Counting subspaces
• Each k-dimensional subspace contains
(q^k − 1)(q^k − q)(q^k − q^2) · · · (q^k − q^{k−1}) = ∏_{i=0}^{k−1} (q^k − q^i)
ordered bases.
Counting subspaces II
∏_{i=0}^{k−1} (q^k − q^i) = ∏_{i=0}^{1} (2^2 − 2^i) = 3 · 2 = 6;
∏_{i=0}^{k−1} (q^n − q^i) / ∏_{i=0}^{k−1} (q^k − q^i) = 960/6 = 160.
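The counting argument above can be packaged as a small function (the Gaussian binomial coefficient): divide the number of ordered k-element independent lists in V_n by the number of ordered bases of a fixed k-dimensional space. The check value 35 below is for 2-dimensional subspaces of V_4(GF(2)), chosen here just for illustration:

```python
# Number of k-dimensional subspaces of V_n(F_q).
def num_subspaces(n, k, q):
    num, den = 1, 1
    for i in range(k):
        num *= q**n - q**i   # independent lists in V_n
        den *= q**k - q**i   # ordered bases of one k-dim space
    return num // den

print(num_subspaces(4, 2, 2))  # 35
```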
Basis of a code
v1 = (1100)   v2 = (0110).
(00) → (0000)
(10) → (1100)
(01) → (0110)
(11) → (1010)
• Many choices for the subspace (linear code) for fixed n, k. E.g.
B = {(10000), (01000)} ⇒ dC = 1,
B = {(10110), (01011)} ⇒ dC = 3,
B = {(10111), (11110)} ⇒ dC = 2,
w(v) = #{i : v_i ≠ 0, 1 ≤ i ≤ n}
d = w (C )
d(x, y) = w (x − y)
d = min{w (z) : z ∈ C , z 6= 0}
• If m = (m1 m2 m3 ) ∈ M then
c = mG = m1 v1 + m2 v2 + m3 v3
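Encoding c = mG over GF(2) can be sketched as follows. The generator matrix here is an assumed example in standard form [I_3 | A], chosen to match the parity equations x1 = m1 + m3, x2 = m1 + m2, x3 = m2 + m3 that appear later in this chapter:

```python
# c = mG over GF(2); G = [I_3 | A] is an assumed example generator.
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
m = [1, 0, 1]
# dot product of m with every column of G, reduced mod 2
c = [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]
print(c)  # [1, 0, 1, 0, 1, 1]
```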
G = [Ik A]
Ik identity k × k; A − k × (n − k) matrix
– Can we always find G in standard form for a given C? NO,
but we can find an equivalent code!
Equivalent codes
Orthogonal spaces
– Define the inner product of x, y ∈ V_n(F),
x · y = Σ_{i=1}^{n} x_i y_i
x = (101) ⇒ x · x = 1 + 0 + 1 = 0
Orthogonal vectors if x · y = 0.
Definition Let C be an (n, k) code over F . The orthogonal
complement of C ( dual code of C ) is
C ⊥ = {x ∈ Vn (F ) : x · y = 0 for all y ∈ C }
Dual code
Then,
H = [−A^T I_3] = [1 0 0 1 0 0; 1 1 1 0 1 0; 1 1 0 0 0 1]
Check that GH^T = 0, and the linear independence of the rows of H!
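The check GH^T = 0 can be done mechanically over GF(2). Here G = [I_3 | A] is reconstructed from the H above (the original G is on an earlier slide, so this reconstruction is an assumption; over GF(2), −A^T = A^T):

```python
H = [[1, 0, 0, 1, 0, 0],
     [1, 1, 1, 0, 1, 0],
     [1, 1, 0, 0, 0, 1]]
# A^T is the left 3x3 block of H, so A = (A^T)^T.
A = [[H[j][i] for j in range(3)] for i in range(3)]
G = [[int(i == j) for j in range(3)] + A[i] for i in range(3)]
# GH^T: every generator row must be orthogonal to every parity-check row.
GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
print(GHt)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```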
c ∈ C ⇔ Hc^T = 0.
c = (m_1 m_2 . . . m_k x_1 x_2 . . . x_{n−k})
Hc^T = 0
1 + 1 + x1 = 0 → x1 = 0
1 + x2 = 0 → x2 = 1 ⇒ c = (101011)
1 + x3 = 0 → x3 = 1
m1 + m3 = x1
m1 + m2 = x2
m2 + m3 = x3
– Construct c s.t.
c_{i_j} = λ_{i_j} for 1 ≤ j ≤ t, and c_i = 0 otherwise.
Chapter 4
Decoding of Linear Codes and MacWilliams Identity
• Group theory
• Standard array
• Weight distribution
• MacWilliams identity
Reminder Hamming Group theory Standard array Weight distribution MacWilliams identity
Hamming codes
Perfect codes
Perfect codes II
q^k = q^{n−r}   (number of codewords)
r = c + e,   where Hc^T = 0
1. Compute HrT
Decoding - example
Let again
H = [1 0 0 1 0 1 1; 0 1 0 1 1 0 1; 0 0 1 0 1 1 1]
Correct r ← r + (1000000).
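Syndrome decoding with this H: the syndrome of a single-bit error equals the corresponding column of H. The received word below is a made-up example (the all-zero codeword with an error in position 1), matching the correction r ← r + (1000000):

```python
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 1]]
r = [1, 0, 0, 0, 0, 0, 0]          # assumed received word
s = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
if any(s):
    cols = [[H[j][i] for j in range(3)] for i in range(7)]
    r[cols.index(s)] ^= 1          # flip the bit whose column matches s
print(r)  # [0, 0, 0, 0, 0, 0, 0]
```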
2. ∀a, b, c ∈ G : a ◦ (b ◦ c) = (a ◦ b) ◦ c Associativity
Example of Groups
∀a ∈ Z, a + 0 = a; a + (−a) = 0
Structure of Groups
2^0 = 1;  2^1 = 2;  2^2 = 4;  2^3 = 3 (mod 5)
4^0 = 1;  4^1 = 4;  4^2 = 1 (mod 5)
a ◦ H = {a ◦ h | h ∈ H}
Example Let G = {(00), (10), (01), (11)} be a group with the group
operation vector addition mod 2.
Thus G = H ∪ (H + (01)).
coset leaders (first column):
00000 10101 01110 11011   (codewords)
00001 10100 01111 11010
00010 10111 01100 11001
00100 10001 01010 11111
01000 11101 00110 10011
10000 00101 11110 01011
11000 01101 10110 00011
10010 00111 11100 01001
w(x − y) ≤ w(x) + w(y) ≤ ⌊(d − 1)/2⌋ + ⌊(d − 1)/2⌋ ≤ d − 1.
This contradicts the fact w(C) = d, unless x = y.
Syndrome decoding
Not needed !
Step-by-step decoding
Step-by-step decoding for linear codes II
1. Set i = 1.
2. Compute Hr^T and the weight w of the corresponding coset leader.
3. If w = 0, stop with r as the transmitted codeword.
4. If H(r + e_i)^T has a smaller associated coset-leader weight than Hr^T, set r = r + e_i.
5. Set i = i + 1 and go to 2.
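The steps above can be sketched for the (5,2) code C = {00000, 10101, 01110, 11011}. The parity-check matrix is derived here as H = [A^T | I_3] from G = [1 0 1 0 1; 0 1 1 1 0] (an assumption about the form used on the slides), and the coset-leader weight of each syndrome is brute-forced:

```python
from itertools import product

H = [[1, 1, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]

def syndrome(v):
    return tuple(sum(h * x for h, x in zip(row, v)) % 2 for row in H)

# weight of the coset leader for each syndrome, by brute force
leader_w = {}
for e in product([0, 1], repeat=5):
    s = syndrome(e)
    if s not in leader_w or sum(e) < leader_w[s]:
        leader_w[s] = sum(e)

def step_by_step_decode(r):
    r = list(r)
    for i in range(5):
        if leader_w[syndrome(r)] == 0:
            break                      # r is a codeword: stop
        flipped = r[:]
        flipped[i] ^= 1
        if leader_w[syndrome(flipped)] < leader_w[syndrome(r)]:
            r = flipped                # flipping bit i helped: keep it
    return r

print(step_by_step_decode([1, 0, 1, 0, 0]))  # [1, 0, 1, 0, 1]
```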
A_i = #{c ∈ C : w(c) = i}
[Figure: the binary symmetric channel (bits 0, 1; crossover probability p, correct transmission 1 − p) and the q-ary symmetric channel (symbols 1, . . . , q; each symbol received correctly with probability 1 − p).]
[Figure: spheres of radius 1 around the codewords 00000, 10110, 01011, 11101; d = 2e + 1 = 3, so e = 1. Legend: black and blue points are correctly decoded, red points incorrectly decoded. N(1,2,2) = 0 for c = (00001); N(1,2,1) = 4: {(00011), (00101), (01001), (10001)}.]
Weight enumerators
Small codes - list the codewords and find weight distribution. E.g.
G = [1 1 0 0; 1 1 1 1]
Then C = {0000, 1100, 0011, 1111}, thus A_0 = 1, A_2 = 2, A_4 = 1.
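The same tally can be automated by enumerating all messages (a quick sketch):

```python
from itertools import product

# Weight distribution of the code generated by G.
G = [[1, 1, 0, 0],
     [1, 1, 1, 1]]
A = [0] * 5
for m in product([0, 1], repeat=2):
    c = [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]
    A[sum(c)] += 1
print(A)  # [1, 0, 2, 0, 1], i.e. A0 = 1, A2 = 2, A4 = 1
```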
Weight enumerators II
Σ_{u∈C} P(u) = Σ_{i=0}^{n} A_i x^{n−i} y^i = W_C(x, y)
Therefore,
Σ_{u∈C} g_n(u) = |C| Σ_{v∈C⊥} P(v).
Proved by induction on n!
MacWilliams identity
W_{C⊥}(x, y) = (1/2^k) W_C(x + y, x − y).
Proof Let the weight distribution of C be (A_0, A_1, . . . , A_n). Then,
Σ_{u∈C⊥} P(u) = (1/|C|) Σ_{u∈C} g_n(u)   (Lemma 3.11)
             = (1/|C|) Σ_{u∈C} (x + y)^{n−w(u)} (x − y)^{w(u)}   (Lemma 3.12)
             = (1/|C|) Σ_{i=0}^{n} A_i (x + y)^{n−i} (x − y)^i = (1/|C|) W_C(x + y, x − y)
W_C(x + y, x − y) = (x + y)^6 + 4(x + y)^3 (x − y)^3 + 3(x + y)^2 (x − y)^4
= . . . = 8x^6 + 32x^3 y^3 + 24x^2 y^4
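The identity can also be sanity-checked numerically. The generator below is an assumed (6,3) code with weight distribution A0 = 1, A3 = 4, A4 = 3 (it matches the parity equations x1 = m1 + m3, x2 = m1 + m2, x3 = m2 + m3 from Chapter 3); both sides of the MacWilliams identity are evaluated at a sample point (x, y) = (3, 2):

```python
from itertools import product

G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

def span(M):
    # all GF(2) linear combinations of the rows of M
    return {tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*M))
            for m in product([0, 1], repeat=len(M))}

C = span(G)
C_perp = {v for v in product([0, 1], repeat=6)
          if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

def W(code, x, y):
    # weight enumerator evaluated at (x, y)
    return sum(x ** (6 - sum(c)) * y ** sum(c) for c in code)

x, y = 3, 2
lhs = W(C_perp, x, y)
rhs = W(C, x + y, x - y) // 2 ** 3
print(lhs, rhs)  # 2025 2025
```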
Conclusions
• Many nice algebraic properties for linear codes (not always the
case for nonlinear codes)
CHAPTER 4. DECODING OF LINEAR CODES AND MACWILLIAMS IDENTITY
Chapter 5
Coding Theory - Constructing New Codes
• Some bounds
• Other construction methods
• Elias codes
Constructing new codes Basic methods for constructions Some bounds Other construction methods Elias codes
(n = 2^r − 1, k = 2^r − 1 − r, d = 3),   r ≥ 3
G = [1 0 1 0 1; 0 1 1 1 0];
coset leaders (first column) with syndromes:
00000 10101 01110 11011 | 000   (codewords)
00001 10100 01111 11010 | 001
00010 10111 01100 11001 | 010
00100 10001 01010 11111 | 100
01000 11101 00110 10011 | 011
10000 00101 11110 01011 | 101
11000 01101 10110 00011 | 110
10010 00111 11100 01001 | 111
MacWilliams identity-reminder
Theorem
If C is a binary (n, k) code with dual C⊥ then,
W_{C⊥}(x, y) = (1/2^k) W_C(x + y, x − y),   where
W_C(x, y) = Σ_{i=0}^{n} A_i x^{n−i} y^i.
Introduction
• These codes are only defined for some specific lengths, have
certain minimum distance and dimension.
Extending codes
Definition
If C is a code of length n over F_q, the extended code C̄ is defined by,
C̄ := {(c_1, . . . , c_n, c_{n+1}) | (c_1, . . . , c_n) ∈ C, Σ_{i=1}^{n+1} c_i = 0}
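A quick sketch of extension, shown on the small (5,2,3) code from Chapter 2; the overall parity bit makes every extended codeword even-weight, which raises the odd minimum distance 3 to 4:

```python
# Extend each codeword with an overall parity bit (even-weight extension).
C = ["00000", "10110", "01011", "11101"]
C_ext = [c + str(sum(map(int, c)) % 2) for c in C]
d_ext = min(sum(a != b for a, b in zip(x, y))
            for x in C_ext for y in C_ext if x != y)
print(C_ext)  # ['000000', '101101', '010111', '111010']
print(d_ext)  # 4
```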
• Finally note that for odd weight the parity (extended) bit is 1
- all together we get an (8,4,4) code.
We would have,
δ = (d + i)/(n + i) → 1,   i → ∞.
Clearly not possible for arbitrary k and n.
Some intuition
Example
Let us consider the existence of a (9,4,5) code
G = [ 1 1 . . . 1 1 | 0 0 . . . 0 0 ]
    [     G_1       |      G_2      ]
d^(a) = min{d, n − d′}
where d′ is the largest weight of any codeword in C.
8 words:       0000000, 1101000, 0110100, . . . , 1010001
8 complements: 1111111, 0010111, 1001011, . . . , 0101110
Definition
Expurgation: throwing away some of the codewords of a code.
CAUTION: It can turn a linear code into a nonlinear one. E.g.
throwing away 5 out of 16 codewords of a (7,4,3) code results in a
nonlinear code.
Facts
If C is a binary (n, k, d) code containing words of both odd and
even weight then (exercise)
Example
From a (3,2,2) code by puncturing we get a (2,2,1) code,
000 → 00
011 → 01
101 → 10
110 → 11
(3, 2, 2) code → (2, 2, 1) code
• From the original code we have thrown out all the codewords
that start with one, i.e. c1 = 1.
• Shortening can be seen as expurgating followed by puncturing.
Example
A rather complicated construction from the 1960s gave a [10, 38, 4]
code - a good code.
Until 1978 it was believed this was the best possible code for
n = 10, d = 4.
Strange language
Example
• Using “strange” language over binary alphabet: 30 letters and
10 decimal digits
Example
We can ask a question: Is there a binary (5,3,3) linear code ?
Turns out that we cannot do better, though the upper and lower
bounds say that 4 ≤ M ≤ 6.
Motivation
The Singleton bound shows that this is indeed the best possible,
over any alphabet.
Singleton bound
Theorem
(Singleton) If C is an (n, k, d) code then d ≤ n − k + 1.
Proof.
Project the codewords onto the first k − 1 coordinates:
• 2^k codewords ⇒ ∃ c_1 ≠ c_2 having the same first k − 1 values
• ⇒ d ≤ d(c_1, c_2) ≤ n − (k − 1) = n − k + 1
Example
For instance, we cannot have (7,4,5) code but can we construct
(7,4,4) code ?
Generalization
Facts
MDS codes and perfect codes are incomparable:
• there exist perfect codes that are not MDS and
• there exist MDS codes that are not perfect.
Hamming bound
Theorem
(Hamming bound) If q, n, e ∈ N, d = 2e + 1, then an (n, M, d) code over a q-ary alphabet satisfies M ≤ q^n / Σ_{i=0}^{e} C(n, i)(q − 1)^i.
|C| ≤ 2^7 /(1 + 7) = 16
Example
• Another example is the upper bound on M for a [5, M, 3] code
• Applying the Hamming bound we get,
|C| = M ≤ 2^5 / 6 ≈ 5.33, i.e. M ≤ 5.
Note that the Singleton bound (generalized) gives M ≤ 2^{n−d+1} = 8.
[Figure: asymptotic rate R versus relative distance δ for the Hamming, Singleton, and Gilbert bounds.]
(u,u + v) construction
In general, let C_i be a binary [n, M_i, d_i] code (i = 1, 2) and define,
C := {(u, u + v) | u ∈ C_1, v ∈ C_2}
Example
Take 2 codes given by
G_1 = [1 0 1 1; 0 1 0 1]   G_2 = [1 1 0 0; 0 0 1 1]
(1011 1100)
(1011 0011)
...
Theorem
Then C is a [2n, M_1 M_2, d] code, where d := min{2d_1, d_2}
Proof.
Consider (u_1, u_1 + v_1) and (u_2, u_2 + v_2).
1. If v_1 = v_2 and u_1 ≠ u_2 then d ≥ 2d_1
2. If v_1 ≠ v_2 the distance is (triangle ineq.)
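The construction and the parameter claim can be checked by brute force on the two example codes (C1 from G1, C2 from G2; both have minimum distance 2, so the result should be an [8, 16, min(2·2, 2)] = [8, 16, 2] code):

```python
from itertools import product

def span(M):
    # all GF(2) linear combinations of the rows of M
    return {tuple(sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*M))
            for m in product([0, 1], repeat=len(M))}

C1 = span([[1, 0, 1, 1], [0, 1, 0, 1]])   # d1 = 2
C2 = span([[1, 1, 0, 0], [0, 0, 1, 1]])   # d2 = 2
# (u, u+v) construction
C = {u + tuple((a + b) % 2 for a, b in zip(u, v)) for u in C1 for v in C2}
d = min(sum(c) for c in C if any(c))
print(len(C), d)  # 16 2
```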
An abstract justification.
Example
Take for C2 an [8, 20, 3] obtained by puncturing a [9, 20, 4] code
Applications include:
Notation:
Example
One basis for the vector space of 2 × 2 matrices is:
[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]
Definition
The direct product A ⊗ B is the code consisting of all
n_A × n_B arrays whose columns belong to A and whose rows belong to B.
G_A = [1 0 1; 0 1 1],   A = {000, 101, 011, 110};
G_B = [1 1 0 0; 0 0 1 1],   B = {0000, 1100, 0011, 1111}
Facts
The product code is “clearly” linear :
wt(A) ≥ d_A d_B.
51 / 58
Constructing new codes Basic methods for constructions Some bounds Other construction methods Elias codes
Example
Let g_1A = (101)^T and g_1B = (1100). Then
g_1A g_1B = [1; 0; 1] · [1 1 0 0] = [1 1 0 0; 0 0 0 0; 1 1 0 0]
giA and gjB are the rows of the generator matrices GA and GB .
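The outer product can be reproduced directly; note that each column of the result lies in A and each row lies in B:

```python
g1A = [1, 0, 1]
g1B = [1, 1, 0, 0]
# Outer product g1A^T * g1B: entry (i, j) = g1A[i] * g1B[j].
M = [[a * b for b in g1B] for a in g1A]
for row in M:
    print(row)
# [1, 1, 0, 0]
# [0, 0, 0, 0]
# [1, 1, 0, 0]
```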
Iterative approach
To summarize :
Properties of recursion
Facts
From the definition of the recursion we have:
n_{i+1} = n_i · 2^{m+i}
k_{i+1} = k_i · (2^{m+i} − m − i − 1)
i = 1:  n_1 = 2^m
i = 2:  n_2 = 2^m · 2^{m+1} = 2^{2m+1}
i = 3:  n_3 = 2^{2m+1} · 2^{m+2} = 2^{3m+1+2}
...
n_i = 2^{mi + (1+2+...+(i−1))} = 2^{mi + i(i−1)/2};
k_i = n_i ∏_{j=0}^{i−1} (1 − (m + j + 1)/2^{m+j})
Conclusions
Chapter 6
• Upper bounds
• Reed-Muller codes
Shannon’s theorem revisited Lower bounds Upper bounds Reed-Muller codes
The last one is not obvious, but empirically good codes are
obtained by reducing q
Optimal codes
Definition
Optimal codes II
Useful notation:
– V_q(n, r) := |B_r(c)| = Σ_{i=0}^{r} C(n, i)(q − 1)^i - the number of words in a sphere of radius r
[Figure: asymptotic rate R versus relative distance δ for the Hamming, Singleton, and Gilbert bounds.]
Theorem
(GV bound) For n, d ∈ N, d ≤ n, we have A_q(n, d) ≥ q^n / V_q(n, d − 1).
Proof.
• Let the [n, M, d] code C be maximal, i.e. there is no word in
A^n with distance ≥ d to all words in C
• Length n → ∞ but R_i does not tend to 0!
GV as a construction method
Theorem
(GV bound LC) If n, d, k ∈ N satisfy V_2(n − 1, d − 2) < 2^{n−k}, then an (n, k, d) binary code exists.
Proof.
• Let Ck−1 be an (n, k − 1, d) code. Since,
GV bound - example II
Algorithm:
xG ∉ B(0, d − 1)
3. By the union bound (Pb(∪_{i=1}^{n} A_i) ≤ Σ_{i=1}^{n} Pb(A_i)), the probability
that there is x such that xG ∈ B(0, d − 1) is at most
Is GV bound tight ?
Example
Hamming codes specified by n = 2^r − 1, k = 2^r − 1 − r, d = 3
Need to compute
V_2(n, d − 1) = Σ_{i=0}^{2} C(2^r − 1, i)
[Figure: asymptotic rate α versus relative distance δ ∈ [0, 1/2], now adding the Plotkin bound to the Hamming, Singleton, and Gilbert bounds.]
Is the LP bound tight?
Example
Back to the same example, we had:
• A(13, 5) ≤ 512 – Singleton bound
[Figure: asymptotic rate α versus δ ∈ [0, 1/2], comparing the LP and Elias bounds with the Hamming, Plotkin, Singleton, and Gilbert bounds.]
Reed-Muller codes
• Let us define,

          ( v_1 )
B_r = [H_r | 0] = ( v_2 )
          (  ⋮  )
          ( v_r )

• The size of B_r is r × 2^r.
Reed-Muller code
Definition
The first order Reed-Muller code, denoted R(1, r), is the subspace
generated by the vectors 1, v_1, v_2, . . . , v_r.
Theorem
R(1, r) is a (2^r, r + 1, 2^{r−1}) code.
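The theorem can be verified exhaustively for small r. The sketch below (helper names are mine) builds the generator rows 1, v_1, …, v_r and brute-forces the minimum distance for r = 3:

```python
from itertools import product

def rm1_generator(r):
    # rows: the all-ones vector, then v_i(u) = i-th bit of u over all u in F_2^r
    cols = list(product([0, 1], repeat=r))           # all 2^r binary r-tuples
    rows = [[1] * 2**r]                              # the all-ones row
    rows += [[c[i] for c in cols] for i in range(r)] # v_1, ..., v_r
    return rows

def min_distance(G):
    # minimum weight over all nonzero codewords mG (mod 2)
    best = len(G[0])
    for m in product([0, 1], repeat=len(G)):
        if any(m):
            cw = [sum(mi * gi for mi, gi in zip(m, col)) % 2 for col in zip(*G)]
            best = min(best, sum(cw))
    return best

G = rm1_generator(3)
# R(1,3) should be a (2^3, 3+1, 2^2) = (8, 4, 4) code
print(len(G[0]), len(G), min_distance(G))  # 8 4 4
```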
Proof.
We need to prove the claims about the dimension and the minimum distance.
Example

[H_3 | 0] =
  1 0 0 1 0 1 1 0
  0 1 0 1 1 0 1 0     (the columns run through all vectors of GF(2)^3)
  0 0 1 0 1 1 1 0
No, . . .
             Reed-Muller    Hamming
dimension    r + 1          2^r − r − 1
length       2^r            2^r − 1
d            2^{r−1}        3
Chapter 7
• Reed-Muller codes
• Hadamard transform
Direct product of RM Decoding RM Hadamard transform
Reed-Muller code (reminder)
Definition
The first order Reed-Muller code, denoted R(1, r), is the subspace
generated by the vectors 1, v_1, v_2, . . . , v_r.
Theorem
R(1, r) is a (2^r, r + 1, 2^{r−1}) code.
Example
Want to construct a (16,9) linear code !
Then using two such codes (the same code twice) in a direct product
we get a linear code with parameters
(n_1 n_2, k_1 k_2, d_1 d_2) = (16, 9, 4)
Construction example
Example
A (4, 3, 2) RM code C is easily constructed using

    ( 1 1 1 1 )
G = ( 1 0 1 0 )
    ( 0 1 1 0 )

           ( 1 1 1 1 )
mG = (011) ( 1 0 1 0 ) = (1100)
           ( 0 1 1 0 )
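The encoding mG above can be reproduced with a few lines over GF(2) (the `encode` helper is mine, not from the notes):

```python
# generator of the (4,3,2) RM code from the slide
G = [[1, 1, 1, 1],
     [1, 0, 1, 0],
     [0, 1, 1, 0]]

def encode(m, G):
    # mG over GF(2): for each column, sum the selected row entries mod 2
    return [sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G)]

print(encode([0, 1, 1], G))  # [1, 1, 0, 0], i.e. the codeword (1100)
```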
Construction example II
Example
What are the codewords of the big (16, 9, 4) code V = C ⊗ C? For
instance c_1 = (0110) and c_2 = (0101) give

                              ( 0 0 0 0 )
c_1^T c_2 = (0110)^T (0101) = ( 0 1 0 1 )
                              ( 0 1 0 1 )
                              ( 0 0 0 0 )
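The outer product c_1^T c_2 is a one-liner to check (a minimal sketch, not from the notes):

```python
c1 = [0, 1, 1, 0]
c2 = [0, 1, 0, 1]
# a codeword of the direct-product code is the 4x4 array c1^T c2
M = [[a * b for b in c2] for a in c1]
print(M)  # rows are either the zero word or c2
```

Note that every row of M is a codeword of the row code and every column a codeword of the column code, which is exactly the direct-product property.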
Erasure channel
[Figure: binary erasure channel — 0 and 1 are received correctly with probability 1 − p and mapped to the erasure symbol E with probability p.]
Decoding erasures
Example
Given is the received word with 3 erasures for a (16, 9, 4) code,

    ( 0 E 0 0 )
r = ( 0 1 0 1 )
    ( 0 1 0 1 )
    ( 0 E 0 E )
Decoding strategy:
• Correct erasures in each column using the erasure correction
for a (4,3,2) RM code.
Example
Correcting erasures in the columns gives

    ( 0 E 0 0 )
r = ( 0 1 0 1 )
    ( 0 1 0 1 )
    ( 0 E 0 0 )
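The column step can be sketched in code. The (4,3,2) code generated above consists of even-weight words, so a single erasure in a column is fixed by setting it to the XOR of the known bits; the helper names below are mine, and columns with two erasures (like column 2 here) are left untouched:

```python
def fix_column(col):
    # correct a single erasure 'E' in an even-weight codeword of length 4
    if col.count('E') != 1:
        return col  # no erasure, or too many to fix column-wise
    parity = sum(b for b in col if b != 'E') % 2
    return [parity if b == 'E' else b for b in col]

r = [[0, 'E', 0, 0],
     [0, 1, 0, 1],
     [0, 1, 0, 1],
     [0, 'E', 0, 'E']]
cols = [fix_column(list(c)) for c in zip(*r)]
corrected = [list(row) for row in zip(*cols)]
print(corrected)  # only the single erasure in column 4 is repaired
```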
• Proper ordering
• Hadamard matrices
• Hadamard transform
Hadamard matrix
Definition
A Hadamard matrix H_n is an n × n matrix with entries +1
and −1 whose rows are pairwise orthogonal as real vectors.
Example
The matrix
( 1  1 )
( 1 −1 )
is a Hadamard matrix.
• Combinatorial theory
Hadamard conjecture
Facts
Hadamard conjectured that such a matrix of size 4k × 4k can
be constructed for any k !
If H is a Hadamard matrix, then so is
( H  H )
( H −H )
H_n H_n^T = nI_n.
Example

H_2 H_2^T = ( 1  1 ) ( 1  1 ) = ( 2 0 ) = 2I_2
            ( 1 −1 ) ( 1 −1 )   ( 0 2 )
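The doubling construction and the identity H_n H_n^T = nI_n can be checked together in a short sketch (the `sylvester` helper is mine):

```python
def sylvester(r):
    # iterate the doubling [H H; H -H] starting from [1]
    H = [[1]]
    for _ in range(r):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H4 = sylvester(2)
n = len(H4)
# H H^T: pairwise dot products of the rows
HHt = [[sum(a * b for a, b in zip(ri, rj)) for rj in H4] for ri in H4]
print(HHt)  # n on the diagonal, 0 elsewhere
```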
Sylvester construction
12 = 0011 (bits written least significant first) = 0 · 2^0 + 0 · 2^1 + 1 · 2^2 + 1 · 2^3
P_1 = [0, 1]
if P_i = [b_1, . . . , b_{2^i}] then P_{i+1} = [b_1 0, . . . , b_{2^i} 0, b_1 1, . . . , b_{2^i} 1]
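The recursive ordering is easy to generate; the sketch below (helper name mine) also checks that each string, read least-significant-bit first, is the binary expansion of its position:

```python
def P(i):
    # P_1 = ['0','1']; P_{i+1} appends '0' to every string, then '1'
    seq = ['0', '1']
    for _ in range(i - 1):
        seq = [b + '0' for b in seq] + [b + '1' for b in seq]
    return seq

P3 = P(3)
print(P3)  # ['000', '100', '010', '110', '001', '101', '011', '111']
```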
Example
• Binary triples would be ordered as 000, 100, 010, 110, 001, 101, 011, 111
Example
Let r = 2. Then,

      ( 1  1  1  1 )
H_4 = ( 1 −1  1 −1 )
      ( 1  1 −1 −1 )
      ( 1 −1 −1  1 )
r(110) = 1 picks up the 4-th component of r   (here the argument is u = 110)

Then define
u ∈ F_2^r,  r ∈ F_2^{2^r}  →  r(u)  →  R(u) = (−1)^{r(u)}
Alternatively, the mapping r → R is defined as
0 ↦ 1,  1 ↦ −1
Hadamard transform
Definition
The Hadamard transform of the 2^r-tuple R is the 2^r-tuple R̂ where
for any u ∈ F_2^r,

R̂(u) = \sum_{v ∈ F_2^r} (−1)^{u·v} R(v).
R̂(110) = \sum_{v ∈ F_2^3} (−1)^{(110)·v + r(v)}
        = (−1)^{(110)·(100)+r(100)} + (−1)^{(110)·(010)+r(010)}
        + (−1)^{(110)·(001)+r(001)} + (−1)^{(110)·(110)+r(110)} + · · ·
        = (−1)^{1+1} + (−1)^{1+1} + (−1)^{0+0} + (−1)^{0+1} + · · · = 6
Example
For r = (11011100) we have computed
R = (−1, −1, 1, −1, −1, −1, 1, 1). Then,

          ( 1  1  1  1  1  1  1  1 )
          ( 1 −1  1 −1  1 −1  1 −1 )
          ( 1  1 −1 −1  1  1 −1 −1 )
R H_8 = R ( 1 −1 −1  1  1 −1 −1  1 ) = (−2, 2, −6, −2, −2, 2, 2, −2)
          ( 1  1  1  1 −1 −1 −1 −1 )
          ( 1 −1  1 −1 −1  1 −1  1 )
          ( 1  1 −1 −1 −1 −1  1  1 )
          ( 1 −1 −1  1 −1  1  1 −1 )
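The product R H_8 above can be reproduced directly from the definition of the transform. The sketch below (helper name mine) indexes vectors by integers and uses the popcount of u & v for u·v, which agrees with the matrix product under a consistent bit ordering:

```python
def hadamard_transform(R):
    # \hat{R}(u) = sum_v (-1)^{u.v} R(v), indices read as bit-vectors
    n = len(R)
    return [sum((-1) ** bin(u & v).count('1') * R[v] for v in range(n))
            for u in range(n)]

R = [-1, -1, 1, -1, -1, -1, 1, 1]
print(hadamard_transform(R))  # [-2, 2, -6, -2, -2, 2, 2, -2]
```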
B_3 = [H_3 | 0] =
  1 0 0 1 0 1 1 0
  0 1 0 1 1 0 1 0     (the columns run through all vectors of GF(2)^3)
  0 0 1 0 1 1 1 0
Theorem
(Main theorem) R̂(u) is the number of 0's minus the number of
1's in the binary vector

t = r + \sum_{i=1}^{r} u_i v_i
Suppose r ∈ F_2^{2^r} is a received vector. Our goal is to decode r to
the codeword closest to r.
Facts
• For any binary r-tuple u = (u_1, . . . , u_r), uB_r = \sum_{i=1}^{r} u_i v_i.
Connection to encoding
Decoding RM codes
3. If R̂(u) > 0, then decode r as \sum_{i=1}^{r} u_i v_i
4. If R̂(u) < 0, then decode r as 1 + \sum_{i=1}^{r} u_i v_i
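The decoding rules can be sketched end to end: pick the u with the largest |R̂(u)| and then apply rules 3 and 4. The helper names below are mine; the test flips one bit of a codeword of R(1, 3), which has minimum distance 4 and therefore corrects any single error:

```python
from itertools import product

def decode_rm1(recv, r):
    # majority decoding of R(1, r) via the Hadamard transform
    V = list(product([0, 1], repeat=r))
    R = [(-1) ** b for b in recv]
    Rhat = [sum((-1) ** sum(ui * vi for ui, vi in zip(u, v)) * R[j]
                for j, v in enumerate(V)) for u in V]
    i = max(range(len(V)), key=lambda j: abs(Rhat[j]))
    u = V[i]
    flip = 1 if Rhat[i] < 0 else 0   # rule 4 adds the all-ones vector
    return [(flip + sum(ui * vi for ui, vi in zip(u, v))) % 2 for v in V]

# a codeword of R(1,3) with one flipped bit is corrected
c = [(v[1] + v[2]) % 2 for v in product([0, 1], repeat=3)]
noisy = c[:]
noisy[0] ^= 1
print(decode_rm1(noisy, 3) == c)  # True
```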
Easy to compute R = (−1)^r = (1, −1, −1, −1, 1, −1, −1, 1). Also,
Chapter 8
• Self-Dual codes
Fast Hadamard transform RM codes and Boolean functions Self-Dual codes
Decoding complexity
R̂ = RH
Example
Mariner was using the RM(1,5) code to correct up to 7 errors.
Received vectors of length 2^5 = 32 are multiplied with H(2^5),
requiring ca. 2^{2r+1} = 2^{11} operations (back in the 70's).
Decoding complexity II
Definition
For A = [a_ij] and B = [b_ij] of orders m and n respectively, define the
Kronecker product as the mn × mn matrix
A × B = [a_ij B]
H(2^r) = H_2 × H_2 × · · · × H_2   (r times)
Example

H_2 = ( 1  1 )      H_4 = ( 1  1  1  1 )
      ( 1 −1 )            ( 1 −1  1 −1 )
                          ( 1  1 −1 −1 )
                          ( 1 −1 −1  1 )
(A × B)(C × D) = AC × BD
Theorem
For a positive integer r,

H(2^r) = M_{2^r}^{(1)} M_{2^r}^{(2)} · · · M_{2^r}^{(r)},

where M_{2^r}^{(i)} = I_{2^{r−i}} × H_2 × I_{2^{i−1}}.
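The factorization can be checked mechanically for small r. The sketch below (all helper names mine) builds the M matrices from the Kronecker product and confirms that their ordinary matrix product recovers H_4:

```python
def kron(A, B):
    # Kronecker product [a_ij B]
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def I(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

H2 = [[1, 1], [1, -1]]

def M(r, i):
    # M_{2^r}^{(i)} = I_{2^{r-i}} x H_2 x I_{2^{i-1}}
    return kron(kron(I(2 ** (r - i)), H2), I(2 ** (i - 1)))

H4 = matmul(M(2, 1), M(2, 2))
print(H4)  # equals H_4 from the Sylvester construction
```

Each factor M^{(i)} has only two nonzero entries per row, which is what makes the fast transform cheap: r sparse multiplications instead of one dense one.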
Decomposition - example
where,

M_4^{(1)} = I_2 × H_2 × I_1 = ( 1 0 ) × ( 1  1 ) × [1] = I_2 × H_2 =
                              ( 0 1 )   ( 1 −1 )
  ( 1  1  0  0 )
  ( 1 −1  0  0 )
  ( 0  0  1  1 )
  ( 0  0  1 −1 )
Decomposition - example II

M_4^{(2)} = H_2 × I_2 = ( 1  1 ) × ( 1 0 ) =
                        ( 1 −1 )   ( 0 1 )
  ( 1  0  1  0 )
  ( 0  1  0  1 )
  ( 1  0 −1  0 )
  ( 0  1  0 −1 )

Finally we confirm below that M_4^{(1)} M_4^{(2)} = H_4 :

  ( 1  1  0  0 )   ( 1  0  1  0 )   ( 1  1  1  1 )
  ( 1 −1  0  0 )   ( 0  1  0  1 )   ( 1 −1  1 −1 )
  ( 0  0  1  1 ) · ( 1  0 −1  0 ) = ( 1  1 −1 −1 )
  ( 0  0  1 −1 )   ( 0  1  0 −1 )   ( 1 −1 −1  1 )
Computing M matrices

M_8^{(1)} = I_4 × H_2 × I_1 = I_4 × H_2 = diag(H_2, H_2, H_2, H_2),
a block-diagonal matrix with four copies of H_2 on the diagonal.

M_8^{(3)} = H_2 × I_4 = ( I_4   I_4 )
                        ( I_4  −I_4 )
x3 x2 x1 f (x)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 0
1 1 1 1
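The truth table above determines a unique algebraic normal form. The sketch below (helper name mine) computes the ANF coefficients a_c with the binary Möbius (butterfly) transform, indexing rows by the integer 4·x3 + 2·x2 + x1 as in the table:

```python
def anf(tt):
    # binary Moebius transform: truth table -> ANF coefficients a_c
    a = tt[:]
    n = len(tt).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for x in range(len(tt)):
            if x & bit:
                a[x] ^= a[x ^ bit]
    return a

f = [0, 0, 0, 1, 1, 1, 0, 1]   # the truth table above
a = anf(f)
# nonzero a_c at indices 3 (x1x2), 4 (x3), 6 (x2x3): f = x1x2 + x3 + x2x3
print(a)  # [0, 0, 0, 1, 1, 0, 1, 0]
```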
Boolean functions - definitions
• There are 2^n different terms x_1^{c_1} x_2^{c_2} · · · x_n^{c_n} for different c's. As
each a_c is binary, this gives 2^{2^n} different functions in n variables.
Example
For n = 3 there are 2^8 = 256 distinct functions specified by a_c,
1, x_1, . . . , x_r (linear terms), x_1x_2, . . . , x_{r−1}x_r (quadratic terms), . . . ,
x_1 · · · x_t, . . . , x_{r−t+1} · · · x_r (degree-t terms)
x3 x2 x1 1 x1 x2 x1 x3 x2 x3
0 0 0 1 0 0 0
0 0 1 1 0 0 0
0 1 0 1 0 0 0
0 1 1 1 1 0 0
1 0 0 1 0 0 0
1 0 1 1 0 1 0
1 1 0 1 0 0 1
1 1 1 1 1 1 1
E.g.,
f(x_1, x_2, x_3) = x_1 + x_2 + x_1x_3 + x_1x_2 + x_1x_2x_3
                 = (x_1 + x_2 + x_1x_2) + x_3 (x_1 + x_1x_2)
with g(x_1, x_2) = x_1 + x_2 + x_1x_2 and h(x_1, x_2) = x_1 + x_1x_2.
Self-orthogonal codes
Definition
A linear code C is self-orthogonal if C ⊂ C ⊥ .
Each codeword is orthogonal to every other codeword !
Example
The matrix
G = ( 1 0 0 1 0 )
    ( 1 0 1 1 1 )
generates a self-orthogonal code C. One can check that c_i · c_j = 0 for all codewords.
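The check is a two-by-two matrix of inner products mod 2 (a minimal sketch, not from the notes); since every codeword is a sum of rows, it suffices to verify the rows of G against each other:

```python
G = [[1, 0, 0, 1, 0],
     [1, 0, 1, 1, 1]]
# pairwise inner products of the rows of G over GF(2)
GGt = [[sum(a * b for a, b in zip(ri, rj)) % 2 for rj in G] for ri in G]
print(GGt)  # [[0, 0], [0, 0]]
```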
Self-orthogonal codes
Theorem
(Lemma 4.5) A linear code C is self-orthogonal iff GG^T = 0.
Proof.
(sketch) Assume C ⊂ C^⊥. Let r_i be a row of G. Then,
r_i ∈ C and C ⊂ C^⊥ ⇒ r_i ∈ C^⊥
Self-dual codes
Definition
A linear code C is self-dual if C = C ⊥