E2 205 Module 7
Navin Kashyap
Indian Institute of Science
Definitions and Notation
I CGRS is an [n, k, d = n − k + 1] GRS code over Fq , with
parity-check matrix
v1 v2 ··· vn
v1 α1 v2 α2 ··· vn αn
2 2 2
HGRS = v1 (α1 )
v2 (α2 ) ··· vn (αn )
.. .. ..
. . .
d−2 d−2 d−2
v1 (α1 ) v2 (α2 ) ··· vn (αn )
where the αj s are non-zero and distinct, and the vj s are non-zero.
I Set τ := b d−1
2 c. CGRS is capable of correcting all error patterns of
weight ≤ τ .
I Let c = (c1 , c2 , . . . , cn ) ∈ CGRS be the transmitted codeword, and
let y = (y1 , y2 , . . . , yn ) ∈ Fnq be the received vector.
I The error vector is given by e = (e1 , e2 , . . . , en ) = y − c.
I Let J = {j : ej 6= 0} be the set of error locations.
We will assume throughout that
|J| = t ≤ τ
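As a concrete illustration of the definitions above, here is a small Python sketch (mine, not from the notes) that builds H_GRS for an assumed toy [6, 2, 5] GRS code over the prime field F_13; the locators α_j = 1, . . . , 6 and multipliers v_j = 1 are arbitrary illustrative choices.

    # Toy parameters (illustrative choices, not from the notes): F_13, n = 6, k = 2.
    p = 13                       # prime, so arithmetic mod p gives the field F_13
    alpha = [1, 2, 3, 4, 5, 6]   # distinct non-zero code locators alpha_j
    v = [1] * len(alpha)         # non-zero column multipliers v_j
    n, k = len(alpha), 2
    d = n - k + 1                # minimum distance d = n - k + 1 = 5
    tau = (d - 1) // 2           # tau = floor((d-1)/2) = 2 correctable errors

    # H_GRS has d-1 rows; its (l, j) entry is v_j * alpha_j^l for 0 <= l <= d-2.
    H = [[v[j] * pow(alpha[j], l, p) % p for j in range(n)] for l in range(d - 1)]
    for row in H:
        print(row)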
Syndrome
▶ The syndrome of the received vector y is s = H_GRS y^T = H_GRS e^T = [s_0, s_1, . . . , s_{d−2}]^T, which depends only on the error vector e.
▶ Its entries are
    s_ℓ = Σ_{j∈J} e_j v_j (α_j)^ℓ,   0 ≤ ℓ ≤ d − 2.
Error Locator Polynomial
▶ The error-locator polynomial is
    σ(x) := ∏_{j∈J} (1 − α_j x).
  It has degree t, satisfies σ(0) = 1, and its roots are precisely the (α_j)^{−1}, j ∈ J. Thus, knowing σ(x) is equivalent to knowing the error locations J.
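Continuing the same toy F_13 example (an assumption of this sketch, not of the notes), the following Python snippet computes the syndrome of a received word, builds σ(x), and checks that its roots are exactly the (α_j)^{−1} for j ∈ J; the zero codeword is used as the transmitted word, so y = e.

    p = 13
    alpha = [1, 2, 3, 4, 5, 6]
    v = [1] * 6
    d = 5

    c = [0] * 6                   # transmitted codeword (the zero word is always a codeword)
    e = [0, 5, 0, 0, 7, 0]        # illustrative error vector: J = {1, 4} (0-indexed), t = 2
    y = [(ci + ei) % p for ci, ei in zip(c, e)]

    # Syndrome s_l = sum_j y_j v_j alpha_j^l = sum_{j in J} e_j v_j alpha_j^l.
    s = [sum(y[j] * v[j] * pow(alpha[j], l, p) for j in range(6)) % p for l in range(d - 1)]
    print("syndrome:", s)

    # Error-locator polynomial sigma(x) = prod_{j in J} (1 - alpha_j x),
    # kept as a coefficient list [sigma_0, sigma_1, ...].
    J = [j for j in range(6) if e[j] != 0]
    sigma = [1]
    for j in J:
        prod = [0] * (len(sigma) + 1)
        for i, coef in enumerate(sigma):
            prod[i] = (prod[i] + coef) % p                      # contribution of 1
            prod[i + 1] = (prod[i + 1] - alpha[j] * coef) % p   # contribution of -alpha_j x
        sigma = prod
    print("sigma coefficients:", sigma)

    # Each (alpha_j)^{-1}, j in J, is a root of sigma(x).
    for j in J:
        x = pow(alpha[j], p - 2, p)                             # inverse of alpha_j in F_p
        assert sum(co * pow(x, i, p) for i, co in enumerate(sigma)) % p == 0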
▶ Knowns: J, s = [s_0, s_1, . . . , s_{d−2}]^T, (v_j)_{j∈J} and (α_j)_{j∈J}.
▶ Unknowns: (e_j)_{j∈J}; there are |J| = t ≤ ⌊(d − 1)/2⌋ of these. (A small worked sketch follows below.)
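To make the count of knowns and unknowns concrete, here is a hedged Python sketch of the implicit point that, once J is known, the values (e_j)_{j∈J} are pinned down by the linear syndrome equations. It reuses the toy F_13 example; solve_mod_p is an ad-hoc Gauss-Jordan helper, not a library routine.

    p = 13
    alpha = [1, 2, 3, 4, 5, 6]
    v = [1] * 6
    d = 5
    e_true = [0, 5, 0, 0, 7, 0]         # ground truth, used only to generate the syndrome
    J = [1, 4]                          # assumed known error locations
    t = len(J)

    s = [sum(e_true[j] * v[j] * pow(alpha[j], l, p) for j in range(6)) % p
         for l in range(d - 1)]

    def solve_mod_p(A, b, p):
        """Solve A x = b over F_p by Gauss-Jordan elimination (A assumed invertible)."""
        m = len(A)
        M = [A[i][:] + [b[i]] for i in range(m)]
        for col in range(m):
            piv = next(r for r in range(col, m) if M[r][col] % p != 0)
            M[col], M[piv] = M[piv], M[col]
            inv = pow(M[col][col], p - 2, p)
            M[col] = [x * inv % p for x in M[col]]
            for r in range(m):
                if r != col and M[r][col] % p != 0:
                    f = M[r][col]
                    M[r] = [(M[r][k] - f * M[col][k]) % p for k in range(m + 1)]
        return [M[r][m] for r in range(m)]

    # The first t of the equations s_l = sum_{j in J} e_j v_j alpha_j^l already
    # determine the t unknowns (e_j)_{j in J}.
    A = [[v[j] * pow(alpha[j], l, p) % p for j in J] for l in range(t)]
    print(solve_mod_p(A, s[:t], p))     # -> [5, 7], the error values at positions 1 and 4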
So, the big question is: How to retrieve σ(x) from the syndrome s?
Recovering σ(x) from the Syndrome s
We give a two-step method for recovering σ(x) from s:
1. determine the number of errors t = |J|
2. find the degree-t polynomial σ(x)
▶ For 1 ≤ λ ≤ τ, let M_λ be the λ × λ matrix whose (i, j)-th entry is s_{i+j}, 0 ≤ i, j ≤ λ − 1. Then det M_λ = 0 whenever λ > t, while det M_λ ≠ 0 if λ = t. Hence t can be determined as the largest λ ≤ τ for which M_λ is non-singular.
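A quick numerical illustration of this determinant criterion, on the same toy F_13 code but with a single error so that both cases show up (det M_2 = 0 since 2 > t, det M_1 ≠ 0); det_mod_p is an ad-hoc helper.

    p = 13
    alpha = [1, 2, 3, 4, 5, 6]
    v = [1] * 6
    d = 5
    tau = (d - 1) // 2
    e = [0, 5, 0, 0, 0, 0]              # a single error this time, so t = 1 < tau = 2
    s = [sum(e[j] * v[j] * pow(alpha[j], l, p) for j in range(6)) % p for l in range(d - 1)]

    def det_mod_p(M, p):
        """Determinant over F_p via Gaussian elimination."""
        M = [row[:] for row in M]
        m, det = len(M), 1
        for col in range(m):
            piv = next((r for r in range(col, m) if M[r][col] % p != 0), None)
            if piv is None:
                return 0
            if piv != col:
                M[col], M[piv] = M[piv], M[col]
                det = -det
            det = det * M[col][col] % p
            inv = pow(M[col][col], p - 2, p)
            for r in range(col + 1, m):
                f = M[r][col] * inv % p
                M[r] = [(M[r][k] - f * M[col][k]) % p for k in range(m)]
        return det % p

    # M_lambda has (i, j) entry s_{i+j}; t is the largest lambda <= tau with det != 0.
    for lam in range(tau, 0, -1):
        M = [[s[i + j] for j in range(lam)] for i in range(lam)]
        print("lambda =", lam, " det M_lambda =", det_mod_p(M, p))
    # Expected: det M_2 = 0 (since 2 > t) and det M_1 != 0, so t = 1.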
Determining σ(x)
▶ Write σ(x) = 1 + σ_1 x + σ_2 x^2 + · · · + σ_t x^t. For each j ∈ J,
    0 = σ((α_j)^{−1}) = 1 + σ_1 (α_j)^{−1} + σ_2 (α_j)^{−2} + · · · + σ_t (α_j)^{−t}.
▶ For 0 ≤ ρ ≤ t − 1, multiply the above by e_j v_j (α_j)^{t+ρ} to get
    0 = e_j v_j (α_j)^{t+ρ} + σ_1 e_j v_j (α_j)^{t+ρ−1} + · · · + σ_t e_j v_j (α_j)^ρ,
  for each j ∈ J.
▶ Summing the above over j ∈ J gives
    0 = s_{t+ρ} + σ_1 s_{t+ρ−1} + · · · + σ_t s_ρ,   0 ≤ ρ ≤ t − 1
  (recalling that s_ℓ = Σ_{j∈J} e_j v_j (α_j)^ℓ).
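The relation just derived is easy to check numerically; the sketch below does so for the running F_13 example (errors of values 5 and 7 at the locators 2 and 5, so t = 2).

    p, d, t = 13, 5, 2
    alpha_J = [2, 5]                    # alpha_j for j in J (illustrative values)
    e_J = [5, 7]                        # e_j for j in J
    v_J = [1, 1]                        # v_j for j in J

    # s_l = sum_{j in J} e_j v_j alpha_j^l, 0 <= l <= d-2
    s = [sum(ej * vj * pow(aj, l, p) for ej, vj, aj in zip(e_J, v_J, alpha_J)) % p
         for l in range(d - 1)]

    # sigma(x) = (1 - 2x)(1 - 5x) = 1 - 7x + 10x^2; coefficients [1, sigma_1, sigma_2]
    sigma = [1, -7 % p, 10]

    # Check: s_{t+rho} + sigma_1 s_{t+rho-1} + ... + sigma_t s_rho = 0 for 0 <= rho <= t-1.
    for rho in range(t):
        total = sum(sigma[i] * s[t + rho - i] for i in range(t + 1)) % p
        print("rho =", rho, "->", total)    # both should print 0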
The Peterson-Gorenstein-Zierler (PGZ) Decoder
1. Compute s = H_GRS y^T = [s_0, s_1, . . . , s_{d−2}]^T.
   If s = 0, declare "no errors" and stop.
2. Determine the number of errors as the largest λ ≤ τ for which M_λ is non-singular; call it t.
3. Solve
       M_t · [σ_t, σ_{t−1}, . . . , σ_1]^T = − [s_t, s_{t+1}, . . . , s_{2t−1}]^T
   for σ_1, . . . , σ_t.
   Set σ(x) = 1 + σ_1 x + σ_2 x^2 + · · · + σ_t x^t.
4. Find the roots of σ(x); these are the (α_j)^{−1}, j ∈ J, and hence determine the error locations J.
5. Solve the linear equations s_ℓ = Σ_{j∈J} e_j v_j (α_j)^ℓ (e.g., for ℓ = 0, 1, . . . , t − 1) for the error values (e_j)_{j∈J}; set e_j = 0 for j ∉ J.
6. Decode to c = y − e.
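The six steps above translate directly into code. The following Python sketch runs the PGZ decoder on the toy [6, 2, 5] GRS code over F_13 used earlier (zero codeword, two errors); det_mod_p and solve_mod_p are ad-hoc linear-algebra helpers over F_p, not anything prescribed by the notes.

    p = 13
    alpha = [1, 2, 3, 4, 5, 6]
    v = [1] * 6
    n, d = 6, 5
    tau = (d - 1) // 2
    y = [0, 5, 0, 0, 7, 0]     # received word = zero codeword + two errors

    def det_mod_p(M):
        M = [row[:] for row in M]; m, det = len(M), 1
        for col in range(m):
            piv = next((r for r in range(col, m) if M[r][col] % p), None)
            if piv is None:
                return 0
            if piv != col:
                M[col], M[piv] = M[piv], M[col]; det = -det
            det = det * M[col][col] % p
            inv = pow(M[col][col], p - 2, p)
            for r in range(col + 1, m):
                f = M[r][col] * inv % p
                M[r] = [(M[r][k] - f * M[col][k]) % p for k in range(m)]
        return det % p

    def solve_mod_p(A, b):
        m = len(A); M = [A[i][:] + [b[i]] for i in range(m)]
        for col in range(m):
            piv = next(r for r in range(col, m) if M[r][col] % p)
            M[col], M[piv] = M[piv], M[col]
            inv = pow(M[col][col], p - 2, p)
            M[col] = [x * inv % p for x in M[col]]
            for r in range(m):
                if r != col and M[r][col] % p:
                    f = M[r][col]
                    M[r] = [(M[r][k] - f * M[col][k]) % p for k in range(m + 1)]
        return [row[m] for row in M]

    # Step 1: syndrome s_l = sum_j y_j v_j alpha_j^l.
    s = [sum(y[j] * v[j] * pow(alpha[j], l, p) for j in range(n)) % p for l in range(d - 1)]
    assert any(s), "zero syndrome: no errors"

    # Step 2: t = largest lambda <= tau with M_lambda non-singular (entries s_{i+j}).
    t = next(lam for lam in range(tau, 0, -1)
             if det_mod_p([[s[i + j] for j in range(lam)] for i in range(lam)]) != 0)

    # Step 3: solve M_t [sigma_t, ..., sigma_1]^T = -[s_t, ..., s_{2t-1}]^T.
    Mt = [[s[i + j] for j in range(t)] for i in range(t)]
    rhs = [-s[t + i] % p for i in range(t)]
    sol = solve_mod_p(Mt, rhs)                    # [sigma_t, sigma_{t-1}, ..., sigma_1]
    sigma = [1] + sol[::-1]                       # [1, sigma_1, ..., sigma_t]

    # Step 4: error locations J = { j : sigma(alpha_j^{-1}) = 0 }.
    def ev(poly, x):
        return sum(co * pow(x, i, p) for i, co in enumerate(poly)) % p
    J = [j for j in range(n) if ev(sigma, pow(alpha[j], p - 2, p)) == 0]

    # Step 5: error values from the first t syndrome equations restricted to J.
    A = [[v[j] * pow(alpha[j], l, p) % p for j in J] for l in range(t)]
    vals = solve_mod_p(A, s[:t])
    e = [0] * n
    for j, ej in zip(J, vals):
        e[j] = ej

    # Step 6: decode.
    c_hat = [(y[j] - e[j]) % p for j in range(n)]
    print("e =", e)            # [0, 5, 0, 0, 7, 0]
    print("c =", c_hat)        # the all-zero codeword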
Remarks on the PGZ Decoder
▶ For what follows, define the syndrome polynomial
    S(x) := Σ_{ℓ=0}^{d−2} s_ℓ x^ℓ = Σ_{ℓ=0}^{d−2} Σ_{j∈J} e_j v_j (α_j)^ℓ x^ℓ = Σ_{j∈J} e_j v_j Σ_{ℓ=0}^{d−2} (α_j x)^ℓ.
The Error-Evaluator Polynomial
Note that
    Σ_{ℓ=0}^{d−2} (1 − α_j x)(α_j x)^ℓ = 1 − (α_j x)^{d−1} ≡ 1 (mod x^{d−1}).
Hence,
    σ(x)S(x) = ( ∏_{m∈J} (1 − α_m x) ) ( Σ_{j∈J} e_j v_j Σ_{ℓ=0}^{d−2} (α_j x)^ℓ )
             = Σ_{j∈J} e_j v_j ( ∏_{m∈J} (1 − α_m x) ) Σ_{ℓ=0}^{d−2} (α_j x)^ℓ
             ≡ Σ_{j∈J} e_j v_j ∏_{m∈J\{j}} (1 − α_m x)   (mod x^{d−1}),
and the polynomial in the last line is the error-evaluator polynomial ω(x).
In summary,
    σ(x)S(x) ≡ ω(x) (mod x^{d−1})        (K1)
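The congruence (K1) can be verified numerically. The sketch below does this for the running F_13 example, with polynomials represented as coefficient lists (lowest degree first); all helper names are ad hoc.

    p, d = 13, 5
    alpha_J, e_J, v_J = [2, 5], [5, 7], [1, 1]    # errors at locators 2 and 5 (toy example)

    def poly_mul(a, b):
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % p
        return out

    def poly_add(a, b):
        m = max(len(a), len(b))
        return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % p
                for i in range(m)]

    # Syndrome polynomial S(x) = sum_l s_l x^l.
    S = [sum(ej * vj * pow(aj, l, p) for ej, vj, aj in zip(e_J, v_J, alpha_J)) % p
         for l in range(d - 1)]

    # sigma(x) = prod_{j in J} (1 - alpha_j x).
    sigma = [1]
    for a in alpha_J:
        sigma = poly_mul(sigma, [1, -a % p])

    # omega(x) = sum_{j in J} e_j v_j prod_{m in J \ {j}} (1 - alpha_m x).
    omega = [0]
    for idx, (ej, vj) in enumerate(zip(e_J, v_J)):
        term = [ej * vj % p]
        for m_idx, am in enumerate(alpha_J):
            if m_idx != idx:
                term = poly_mul(term, [1, -am % p])
        omega = poly_add(omega, term)

    # Check (K1): sigma(x) S(x) and omega(x) agree modulo x^{d-1}.
    prod = poly_mul(sigma, S)
    lhs = [(prod[l] if l < len(prod) else 0) for l in range(d - 1)]
    rhs = [(omega[l] if l < len(omega) else 0) for l in range(d - 1)]
    assert lhs == rhs
    print("omega(x) coefficients:", omega, " sigma(x)S(x) mod x^4:", lhs)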
The Error-Evaluator Polynomial
    ω(x) = Σ_{j∈J} e_j v_j ∏_{m∈J\{j}} (1 − α_m x)
Some observations:
▶ Each term in the sum has degree at most t − 1, so
    deg ω(x) ≤ t − 1 < t = deg σ(x) ≤ τ.        (K2)
▶ For any h ∈ J,
    ω((α_h)^{−1}) = e_h v_h ∏_{m∈J\{h}} (1 − α_m (α_h)^{−1}) ≠ 0.
  So, no root of σ(x) is a root of ω(x), meaning that σ(x) and ω(x) have no common factors:
    gcd(σ(x), ω(x)) = 1        (K3)
The Key Equation
The equations
    σ(x)S(x) ≡ ω(x) (mod x^{d−1}),        (K1)
    deg ω(x) < deg σ(x) ≤ τ,              (K2)
    gcd(σ(x), ω(x)) = 1,                  (K3)
taken together, are referred to as the key equation for GRS decoding.
For a given S(x), the solution (σ(x), ω(x)) to the key equation is unique up to scaling by a constant.
Uniqueness of Solution to Key Equation
We assume that the non-zero syndrome polynomial S(x) arises from an error vector e of weight ≤ τ, so that the corresponding (σ(x), ω(x)) pair satisfies the key equation (K1)–(K3).

Lemma KEY-1: Let β(x), γ(x) ∈ F[x] be polynomials satisfying
    (1) β(x)S(x) ≡ γ(x) (mod x^{d−1}), and
    (2) deg β(x) ≤ (d − 1)/2 and deg γ(x) < (d − 1)/2.
Then β(x) = µ(x)σ(x) and γ(x) = µ(x)ω(x) for some polynomial µ(x) ∈ F[x].
▶ If, in addition to (1) and (2) above, we also require that gcd(β(x), γ(x)) = 1, then (β(x), γ(x)) must be equal to (σ(x), ω(x)), up to scaling by a constant µ ∈ F.
Proof of Lemma KEY-1
▶ Multiply (1) by σ(x): σ(x)β(x)S(x) ≡ σ(x)γ(x) (mod x^{d−1}).
▶ Apply (K1):
    β(x)ω(x) ≡ σ(x)γ(x) (mod x^{d−1})        (3)
▶ By (2) and (K2), both sides of (3) have degree less than d − 1, so the congruence is in fact an equality: β(x)ω(x) = σ(x)γ(x).
▶ Since gcd(σ(x), ω(x)) = 1 by (K3), σ(x) divides β(x); write β(x) = µ(x)σ(x) for some µ(x) ∈ F[x]. Then γ(x) = µ(x)ω(x).
▶ If, moreover, gcd(β(x), γ(x)) = 1, then µ(x) divides 1 and hence is a non-zero constant, as claimed.
The Key to Solving the Key Equation
▶ Recall that σ(x) = 1 + σ_1 x + σ_2 x^2 + · · · + σ_t x^t, so that σ(0) = 1.
▶ The key equation can be solved by applying the extended Euclidean algorithm (EEA), given below, to the polynomials a(x) = x^{d−1} and b(x) = S(x).
Input: a(x), b(x) non-zero polynomials with deg a(x) ≥ deg b(x)
Initialize: r_{−1}(x) ← a(x) ;   r_0(x) ← b(x) ;
            s_{−1}(x) ← 1 ;      s_0(x) ← 0 ;
            t_{−1}(x) ← 0 ;      t_0(x) ← 1 ;
Loop:
    i ← 1 ;
    while (r_{i−1}(x) ≠ 0) do
        r_i ← r_{i−2} mod r_{i−1} ;
        q_i ← r_{i−2} div r_{i−1} ;    /* this is the quotient */
        s_i ← s_{i−2} − q_i s_{i−1} ;
        t_i ← t_{i−2} − q_i t_{i−1} ;
        i ← i + 1 ;
The Extended Euclidean Algorithm
▶ Lemma EUCLID: The polynomials generated by the EEA satisfy, for all i ≥ 0,
    (a) s_i(x) a(x) + t_i(x) b(x) = r_i(x);
    (b) deg t_i(x) = deg a(x) − deg r_{i−1}(x)  (for i ≥ 1);
    (c) s_{i−1}(x) t_i(x) − s_i(x) t_{i−1}(x) = (−1)^i.
▶ Note that from (c), we have gcd(s_i(x), t_i(x)) = 1 for all i.
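Below is a runnable Python version of the EEA pseudocode, for polynomials over F_13 represented as coefficient lists (lowest degree first), followed by spot-checks of properties (a)-(c); list index i corresponds to subscript i − 1 above, since the lists start at r_{−1}, s_{−1}, t_{−1}. The example inputs are a(x) = x^4 and the syndrome polynomial of the running example.

    p = 13

    def trim(f):                      # drop trailing zero coefficients
        f = [x % p for x in f]
        while f and f[-1] == 0:
            f.pop()
        return f

    def deg(f):
        return len(trim(f)) - 1       # degree of the zero polynomial is -1 here

    def add(f, g, sign=1):
        m = max(len(f), len(g))
        return [((f[i] if i < len(f) else 0) + sign * (g[i] if i < len(g) else 0)) % p
                for i in range(m)]

    def mul(f, g):
        out = [0] * max(len(f) + len(g) - 1, 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % p
        return out

    def poly_divmod(f, g):            # quotient and remainder of f by g (g != 0)
        f, g = trim(f), trim(g)
        q = [0] * max(deg(f) - deg(g) + 1, 1)
        inv = pow(g[-1], p - 2, p)
        while deg(f) >= deg(g):
            shift = deg(f) - deg(g)
            coef = f[-1] * inv % p
            q[shift] = coef
            f = trim(add(f, mul([0] * shift + [coef], g), sign=-1))
        return q, f

    def eea(a, b):
        r, s, t = [a, b], [[1], [0]], [[0], [1]]      # list index 0 <-> subscript -1
        while deg(r[-1]) >= 0:                        # while the last remainder is non-zero
            q, rem = poly_divmod(r[-2], r[-1])
            r.append(rem)
            s.append(add(s[-2], mul(q, s[-1]), sign=-1))
            t.append(add(t[-2], mul(q, t[-1]), sign=-1))
        return r, s, t

    # Example: a(x) = x^{d-1} = x^4 and b(x) = S(x) from the running example.
    a, S = [0, 0, 0, 0, 1], [12, 6, 0, 5]
    r, s, t = eea(a, S)
    for i in range(1, len(r)):
        # (a): s_i(x) a(x) + t_i(x) b(x) = r_i(x)
        assert trim(add(add(mul(s[i], a), mul(t[i], S)), r[i], sign=-1)) == []
        if i >= 2:
            # (b): deg t_i(x) = deg a(x) - deg r_{i-1}(x)
            assert deg(t[i]) == deg(a) - deg(r[i - 1])
        # (c): s_{i-1} t_i - s_i t_{i-1} is a non-zero constant, so gcd(s_i, t_i) = 1
        assert deg(add(mul(s[i - 1], t[i]), mul(s[i], t[i - 1]), sign=-1)) == 0
    print("all EEA identities verified;", len(r) - 2, "iterations")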
Solving the Key Equation
Theorem KEY: Run the EEA on the polynomials a(x) = x^{d−1} and b(x) = S(x), and stop at the first index i = i* for which deg r_{i*}(x) < (d − 1)/2. Then
    t_{i*}(x) = µ σ(x)   and   r_{i*}(x) = µ ω(x)
for some non-zero constant µ ∈ F. In particular, since σ(0) = 1, we may take σ(x) = t_{i*}(x)/t_{i*}(0) and ω(x) = r_{i*}(x)/t_{i*}(0).
Proof of Theorem KEY
▶ From Lemma EUCLID (a), we have s_{i*}(x) x^{d−1} + t_{i*}(x)S(x) = r_{i*}(x), and hence
    t_{i*}(x)S(x) ≡ r_{i*}(x) (mod x^{d−1}).
▶ By Lemma EUCLID (b), and since deg r_{i*−1}(x) ≥ (d − 1)/2 by the choice of i*,
    deg t_{i*}(x) = d − 1 − deg r_{i*−1}(x) ≤ (d − 1)/2,
  while deg r_{i*}(x) < (d − 1)/2.
▶ Therefore, by Lemma KEY-1,
    t_{i*}(x) = µ(x)σ(x)   and   r_{i*}(x) = µ(x)ω(x)
  for some polynomial µ(x) ∈ F[x].
▶ Plugging these into Lemma EUCLID (a), we have
    s_{i*}(x) x^{d−1} + µ(x)σ(x)S(x) = µ(x)ω(x).
▶ On the other hand, (K1) gives σ(x)S(x) = ω(x) + u(x) x^{d−1} for some u(x) ∈ F[x]. Substituting this above yields s_{i*}(x) = −µ(x)u(x), so µ(x) divides s_{i*}(x). Since µ(x) also divides t_{i*}(x), and gcd(s_{i*}(x), t_{i*}(x)) = 1, µ(x) must be a non-zero constant. □
Definition: The formal derivative of f(x) = Σ_{m=0}^{d} a_m x^m ∈ F[x] is the polynomial f′(x) = Σ_{m=1}^{d} m a_m x^{m−1}.
▶ Differentiating σ(x) = ∏_{j∈J} (1 − α_j x) and evaluating at (α_h)^{−1} for h ∈ J, only the term with j = h survives:
    σ′((α_h)^{−1}) = −α_h ∏_{m∈J\{h}} (1 − α_m (α_h)^{−1}),
  while ω((α_h)^{−1}) = e_h v_h ∏_{m∈J\{h}} (1 − α_m (α_h)^{−1}).
▶ Therefore,
    e_h = − (α_h / v_h) · ω((α_h)^{−1}) / σ′((α_h)^{−1})   for all h ∈ J.
This formula is attributed to G.D. Forney.
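As a sanity check of Forney's formula, the sketch below plugs in the σ(x) and ω(x) obtained earlier for the running F_13 example (σ(x) = 1 + 6x + 10x^2, ω(x) = 12, errors of values 5 and 7 at the locators 2 and 5) and recovers those error values.

    p = 13
    sigma = [1, 6, 10]        # sigma(x) from the running example
    omega = [12]              # omega(x) from the running example
    alpha_J = [2, 5]          # locators of the two error positions
    v_J = [1, 1]

    def ev(poly, x):          # evaluate a coefficient list at x, mod p
        return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

    # Formal derivative: coefficient of x^{m-1} in f'(x) is m * a_m.
    sigma_prime = [(m * am) % p for m, am in enumerate(sigma)][1:]

    for ah, vh in zip(alpha_J, v_J):
        x = pow(ah, p - 2, p)                       # alpha_h^{-1}
        eh = (-ah * pow(vh, p - 2, p) * ev(omega, x)
              * pow(ev(sigma_prime, x), p - 2, p)) % p
        print("error value at locator", ah, "is", eh)   # prints 5 and 7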
Summary: GRS Decoding via the Extended Euclidean Algorithm
Given: An [n, k, d = n − k + 1] GRS code over F_q specified by H_GRS with code locators α_1, . . . , α_n and column multipliers v_1, . . . , v_n.
Input: a received word y ∈ (F_q)^n.
1. Compute s = H_GRS y^T = [s_0, s_1, . . . , s_{d−2}]^T.
   If s = 0, output c = y, and stop.
2. Form the syndrome polynomial S(x) = Σ_{ℓ=0}^{d−2} s_ℓ x^ℓ, and run the EEA on a(x) = x^{d−1} and b(x) = S(x), stopping at the first index i* with deg r_{i*}(x) < (d − 1)/2. Set σ(x) = t_{i*}(x)/t_{i*}(0) and ω(x) = r_{i*}(x)/t_{i*}(0).
3. Determine the error locations J = { j : σ((α_j)^{−1}) = 0 }.
4. Determine the error values: e_j = − (α_j / v_j) · ω((α_j)^{−1}) / σ′((α_j)^{−1}) if j ∈ J, and e_j = 0 otherwise.
Output c = y − e.
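Putting the pieces together, here is an end-to-end Python sketch of the summary above, again for the toy [6, 2, 5] GRS code over F_13 with a hand-picked two-error pattern; the polynomial helpers are minimal, ad-hoc implementations rather than any standard library API.

    p = 13
    alpha = [1, 2, 3, 4, 5, 6]
    v = [1] * 6
    n, d = 6, 5
    y = [0, 5, 0, 0, 7, 0]       # received word: zero codeword plus errors 5 and 7

    def trim(f):
        f = [c % p for c in f]
        while f and f[-1] == 0:
            f.pop()
        return f

    def deg(f): return len(trim(f)) - 1

    def add(f, g, sign=1):
        m = max(len(f), len(g))
        return [((f[i] if i < len(f) else 0) + sign * (g[i] if i < len(g) else 0)) % p
                for i in range(m)]

    def mul(f, g):
        out = [0] * max(len(f) + len(g) - 1, 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] = (out[i + j] + a * b) % p
        return out

    def poly_divmod(f, g):
        f, g = trim(f), trim(g)
        q = [0] * max(deg(f) - deg(g) + 1, 1)
        inv = pow(g[-1], p - 2, p)
        while deg(f) >= deg(g):
            shift, coef = deg(f) - deg(g), f[-1] * inv % p
            q[shift] = coef
            f = trim(add(f, mul([0] * shift + [coef], g), sign=-1))
        return q, f

    def ev(f, x): return sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

    # Step 1: syndrome.
    s = [sum(y[j] * v[j] * pow(alpha[j], l, p) for j in range(n)) % p for l in range(d - 1)]
    if not any(s):
        print("no errors; c =", y)
    else:
        # Step 2: EEA on a(x) = x^{d-1}, b(x) = S(x); stop when deg r_i < (d-1)/2.
        a, S = [0] * (d - 1) + [1], s[:]
        r_prev, r_cur = a, S
        t_prev, t_cur = [0], [1]
        while 2 * deg(r_cur) >= d - 1:                 # i.e. deg r_i >= (d-1)/2
            q, rem = poly_divmod(r_prev, r_cur)
            t_prev, t_cur = t_cur, add(t_prev, mul(q, t_cur), sign=-1)
            r_prev, r_cur = r_cur, rem
        mu_inv = pow(t_cur[0], p - 2, p)               # t_{i*}(0) = mu, since sigma(0) = 1
        sigma = [c * mu_inv % p for c in t_cur]
        omega = [c * mu_inv % p for c in r_cur]

        # Step 3: error locations J = { j : sigma(alpha_j^{-1}) = 0 }.
        J = [j for j in range(n) if ev(sigma, pow(alpha[j], p - 2, p)) == 0]

        # Step 4: error values by Forney's formula.
        sigma_prime = [m * c % p for m, c in enumerate(sigma)][1:]
        e = [0] * n
        for j in J:
            x = pow(alpha[j], p - 2, p)
            e[j] = (-alpha[j] * pow(v[j], p - 2, p) * ev(omega, x)
                    * pow(ev(sigma_prime, x), p - 2, p)) % p

        # Output.
        c_hat = [(y[j] - e[j]) % p for j in range(n)]
        print("e =", e, " c =", c_hat)                 # expect the planted errors and c = 0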
Computational Complexity
▶ Step 1 requires O(dn) field operations.
▶ Step 2 requires O(|J| d) field operations:
  ▶ Each iteration of the EEA deals with polynomials of degree < d, hence requires O(d) field operations.
  ▶ To estimate the number of iterations, note that
      ▶ deg t_i(x) is strictly increasing in i, since deg t_i(x) = d − 1 − deg r_{i−1}(x) by EUCLID (b), and deg r_{i−1}(x) is strictly decreasing in i;
      ▶ deg t_0(x) = 0 and deg t_{i*}(x) = deg σ(x) = |J|.
    Hence, the number of iterations needed to go from deg t_0(x) = 0 to deg t_{i*}(x) = |J| is at most |J|.