
E2 205: Error-Control Coding

Chapter 7: Decoding of GRS Codes

Navin Kashyap
Indian Institute of Science

Definitions and Notation

▶ C_GRS is an [n, k, d = n − k + 1] GRS code over F_q, with parity-check matrix

$$H_{\mathrm{GRS}} = \begin{bmatrix}
v_1 & v_2 & \cdots & v_n \\
v_1\alpha_1 & v_2\alpha_2 & \cdots & v_n\alpha_n \\
v_1(\alpha_1)^2 & v_2(\alpha_2)^2 & \cdots & v_n(\alpha_n)^2 \\
\vdots & \vdots & & \vdots \\
v_1(\alpha_1)^{d-2} & v_2(\alpha_2)^{d-2} & \cdots & v_n(\alpha_n)^{d-2}
\end{bmatrix}$$

where the α_j s are non-zero and distinct, and the v_j s are non-zero.
▶ Set τ := ⌊(d−1)/2⌋. C_GRS is capable of correcting all error patterns of weight ≤ τ.
▶ Let c = (c_1, c_2, ..., c_n) ∈ C_GRS be the transmitted codeword, and let y = (y_1, y_2, ..., y_n) ∈ (F_q)^n be the received vector.
▶ The error vector is given by e = (e_1, e_2, ..., e_n) = y − c.
▶ Let J = {j : e_j ≠ 0} be the set of error locations.

We will assume throughout that |J| = t ≤ τ.
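
None of the code in this chapter comes from the lecture; the sketches below form one hypothetical running example. Here is a minimal setup over the prime field F_7 (for a prime p, arithmetic in F_p is plain integer arithmetic mod p); the locators and multipliers are illustrative choices, not values fixed by the notes.

```python
# Running-example sketch (assumed parameters): a [6, 2, 5] GRS code over F_7.
p = 7                               # prime, so F_p arithmetic is just mod p
n, k = 6, 2
d = n - k + 1                       # GRS codes are MDS: d = n - k + 1 = 5
tau = (d - 1) // 2                  # corrects up to tau = floor((d-1)/2) errors

alphas = [1, 2, 3, 4, 5, 6]         # non-zero, distinct code locators
vs     = [1, 1, 1, 1, 1, 1]         # non-zero column multipliers (all 1 here)

# Row l of H_GRS is (v_1 (alpha_1)^l, ..., v_n (alpha_n)^l), l = 0, ..., d-2.
H = [[vs[j] * pow(alphas[j], l, p) % p for j in range(n)]
     for l in range(d - 1)]
```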

Syndrome

▶ The first step in any decoding algorithm is the computation of the syndrome of y (wrt H_GRS):

s = [s_0, s_1, s_2, ..., s_{d−2}]^T = H_GRS y^T.

▶ Since H_GRS y^T = H_GRS e^T, we have that

$$s_\ell = \sum_{j \in J} e_j v_j (\alpha_j)^\ell, \qquad \ell = 0, 1, 2, \ldots, d-2.$$

▶ Decoding requires the retrieval of e, or equivalently, the set, J, of error locations, and the corresponding error values e_j, j ∈ J, from the syndrome s.
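
Continuing the sketch, the syndrome is a single matrix–vector product over F_p. The received word below is a hypothetical example: one can check that (2, 6, 5, 6, 2, 0) satisfies H_GRS c^T = 0 for the example code above, and y differs from it in two positions.

```python
def syndrome(H, y, p):
    """Return s = H_GRS y^T over F_p as the list [s_0, ..., s_{d-2}]."""
    return [sum(hj * yj for hj, yj in zip(row, y)) % p for row in H]

# Codeword (2, 6, 5, 6, 2, 0) corrupted in positions 1 and 4 (e_1 = 1, e_4 = 2):
y = [3, 6, 5, 1, 2, 0]
s = syndrome(H, y, p)               # here s == [3, 2, 5, 3]; all-zero iff no error detected
```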

Error Locator Polynomial

Definition: The error locator polynomial corresponding to the set, J, of error locations is

$$\sigma(x) = \prod_{j \in J} (1 - \alpha_j x).$$

▶ σ(x) is a polynomial in F_q[x].
▶ It has degree |J| = t.
▶ The error locations are determined by the roots (in F_q) of σ(x), which are precisely (α_j)^{−1}, j ∈ J.

The main goal of a decoding algorithm is to recover (somehow) the error locator polynomial σ(x) from the syndrome s.
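
For testing a decoder against a known error pattern, σ(x) can be expanded directly from J. A sketch, with polynomials stored as coefficient lists, lowest degree first (the convention in all the sketches below):

```python
def error_locator(J, alphas, p):
    """Expand sigma(x) = prod_{j in J} (1 - alpha_j x) over F_p."""
    sigma = [1]                                 # the empty product is 1
    for j in J:
        # multiply sigma(x) by (1 - alpha_j x)
        shifted = [0] + [(-alphas[j]) * c % p for c in sigma]   # -alpha_j x sigma(x)
        sigma = [(a + b) % p for a, b in zip(sigma + [0], shifted)]
    return sigma                                # [1, sigma_1, ..., sigma_t]
```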

Recovering the Error Values

Once the set, J, of error locations has been successfully recovered, recovering the error values e_j, j ∈ J, is straightforward:
▶ We have the d − 1 equations

$$s_\ell = \sum_{j \in J} e_j v_j (\alpha_j)^\ell, \qquad \ell = 0, 1, 2, \ldots, d-2.$$

▶ Knowns: J, s = [s_0, s_1, ..., s_{d−2}]^T, (v_j)_{j∈J} and (α_j)_{j∈J}.
▶ Unknowns: (e_j)_{j∈J}; there are |J| = t ≤ ⌊(d−1)/2⌋ of these.
▶ Thus, we have d − 1 linear equations in t < d − 1 unknowns. Can solve for the unknowns using, say, Gaussian elimination (sketched below).

So, the big question is: How to retrieve σ(x) from the syndrome s?
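
The Gaussian-elimination subroutine referred to above is worth sketching once, since the coefficients of σ(x) are later found the same way. A hypothetical solver over a prime field F_p; pow(x, -1, p) is Python's built-in modular inverse (Python 3.8+).

```python
def solve_mod_p(A, b, p):
    """Solve A x = b over F_p; A is m x n with m >= n, full column rank,
    and a consistent right-hand side (as in the decoding systems here)."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    r = 0
    for c in range(n):
        piv = next(i for i in range(r, m) if M[i][c] % p)   # non-zero pivot
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)                     # modular inverse
        M[r] = [x * inv % p for x in M[r]]
        for i in range(m):
            if i != r and M[i][c] % p:
                M[i] = [(x - M[i][c] * z) % p for x, z in zip(M[i], M[r])]
        r += 1
    return [M[i][n] for i in range(n)]                # x_0, ..., x_{n-1}
```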

Recovering σ(x) from the Syndrome s

We give a two-step method for recovering σ(x) from s:
1. determine the number of errors t = |J|
2. find the degree-t polynomial σ(x)

For the first step, we use the following theorem:

Theorem GRS-DEC-1. Suppose that there is an error vector of weight t ≤ τ that produces the syndrome s = [s_0, s_1, ..., s_{d−2}]^T. For each integer λ ∈ [t, τ], define the λ × λ matrix

$$M_\lambda := \begin{bmatrix}
s_0 & s_1 & s_2 & \cdots & s_{\lambda-1} \\
s_1 & s_2 & s_3 & \cdots & s_\lambda \\
s_2 & s_3 & s_4 & \cdots & s_{\lambda+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
s_{\lambda-1} & s_\lambda & s_{\lambda+1} & \cdots & s_{2\lambda-2}
\end{bmatrix}$$

Then, det(M_λ) = 0 if λ > t, and det(M_λ) ≠ 0 if λ = t.

Determining t

In other words, if s arises from an error vector of weight t ≤ τ, then t is the largest integer λ ≤ τ such that det(M_λ) ≠ 0.

Thus, we have an algorithm to determine t:
1. Set λ = τ.
2. Compute det(M_λ).
3. If det(M_λ) = 0, set λ ← λ − 1, and return to Step 2. If det(M_λ) ≠ 0, output t = λ.

Remark: If s ≠ 0, but for each integer λ ≤ τ we get det(M_λ) = 0, then s cannot arise from an error vector of weight ≤ τ. This means that more than τ errors must have occurred.
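
A hypothetical rendering of this loop, with the determinant over F_p computed by Gaussian elimination (conventions as in the earlier sketches):

```python
def det_mod_p(A, p):
    """Determinant over F_p by row reduction to triangular form."""
    A = [row[:] for row in A]
    n, det = len(A), 1
    for c in range(n):
        piv = next((i for i in range(c, n) if A[i][c] % p), None)
        if piv is None:
            return 0                              # singular matrix
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det                            # row swap flips the sign
        det = det * A[c][c] % p
        inv = pow(A[c][c], -1, p)
        for i in range(c + 1, n):
            f = A[i][c] * inv % p
            A[i] = [(x - f * z) % p for x, z in zip(A[i], A[c])]
    return det % p

def num_errors(s, tau, p):
    """Largest lambda <= tau with det(M_lambda) != 0, else None (> tau errors)."""
    for lam in range(tau, 0, -1):
        M = [s[i:i + lam] for i in range(lam)]    # the Hankel matrix M_lambda
        if det_mod_p(M, p):
            return lam
    return None

# On the running example: num_errors([3, 2, 5, 3], 2, 7) == 2.
```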

Proof of Theorem GRS-DEC-1

Suppose that s = H_GRS e^T for some e of weight t ≤ τ.
▶ Permuting the columns of H_GRS if necessary, we may assume that

$$e = (\underbrace{e_1, e_2, \ldots, e_t}_{\text{non-zero}}, \underbrace{e_{t+1}, \ldots, e_n}_{\text{zero}})$$

▶ So, for 0 ≤ ℓ ≤ d − 2,

$$s_\ell = \sum_{j=1}^{t} e_j v_j (\alpha_j)^\ell.$$

▶ For λ ∈ [t, τ], define

$$B := \begin{bmatrix}
1 & 1 & \cdots & 1 \\
\alpha_1 & \alpha_2 & \cdots & \alpha_\lambda \\
(\alpha_1)^2 & (\alpha_2)^2 & \cdots & (\alpha_\lambda)^2 \\
\vdots & \vdots & \ddots & \vdots \\
(\alpha_1)^{\lambda-1} & (\alpha_2)^{\lambda-1} & \cdots & (\alpha_\lambda)^{\lambda-1}
\end{bmatrix}$$

(Note that this is a Vandermonde matrix.)

Proof of Theorem GRS-DEC-1 (cont'd)

▶ Also define

$$D := \mathrm{diag}(e_1 v_1, e_2 v_2, \ldots, e_\lambda v_\lambda) = \begin{bmatrix}
e_1 v_1 & 0 & 0 & \cdots & 0 \\
0 & e_2 v_2 & 0 & \cdots & 0 \\
0 & 0 & e_3 v_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 \\
0 & 0 & 0 & \cdots & e_\lambda v_\lambda
\end{bmatrix}$$

▶ Verify by direct matrix multiplication that M_λ = B D B^T.
▶ Hence, det M_λ = (det B)^2 (det D).
▶ Now, det B ≠ 0 by the Vandermonde determinant formula, and

$$\det D = \prod_{j=1}^{\lambda} e_j v_j \;\begin{cases} = 0 & \text{if } \lambda > t \\ \neq 0 & \text{if } \lambda = t \end{cases}$$

Determining σ(x)

Having found a means of determining t = |J|, we turn our attention to finding the error locator polynomial

$$\sigma(x) := \prod_{j \in J} (1 - \alpha_j x) = 1 + \sigma_1 x + \sigma_2 x^2 + \cdots + \sigma_t x^t$$

Theorem GRS-DEC-2. Suppose that there is an error vector of weight t ≤ τ that produces the syndrome s = [s_0, s_1, ..., s_{d−2}]^T. Then,

$$M_t \cdot \begin{bmatrix} \sigma_t \\ \sigma_{t-1} \\ \vdots \\ \sigma_1 \end{bmatrix} = \begin{bmatrix} -s_t \\ -s_{t+1} \\ \vdots \\ -s_{2t-1} \end{bmatrix},$$

where M_t is as defined in Theorem GRS-DEC-1.

Determining σ(x)

▶ Thus, to determine σ(x) = 1 + σ_1 x + σ_2 x^2 + ··· + σ_t x^t, we set up the system of linear equations

$$M_t \cdot \begin{bmatrix} \sigma_t \\ \sigma_{t-1} \\ \vdots \\ \sigma_1 \end{bmatrix} = \begin{bmatrix} -s_t \\ -s_{t+1} \\ \vdots \\ -s_{2t-1} \end{bmatrix},$$

and solve for the unknowns σ_1, ..., σ_t.
▶ Note that M_t is invertible by Theorem GRS-DEC-1, so the system of equations has a unique solution.
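
With t in hand, recovering σ(x) is one call to the solve_mod_p sketch from earlier:

```python
def find_sigma(s, t, p):
    """Solve M_t [sigma_t, ..., sigma_1]^T = [-s_t, ..., -s_{2t-1}]^T over F_p."""
    M = [s[i:i + t] for i in range(t)]          # the Hankel matrix M_t
    rhs = [(-s[t + i]) % p for i in range(t)]   # -s_t, ..., -s_{2t-1}
    x = solve_mod_p(M, rhs, p)                  # x = [sigma_t, ..., sigma_1]
    return [1] + x[::-1]                        # sigma(x) = 1 + sigma_1 x + ...

# Running example: find_sigma([3, 2, 5, 3], 2, 7) == [1, 2, 4],
# i.e. sigma(x) = 1 + 2x + 4x^2 over F_7.
```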

Proof of Theorem GRS-DEC-2

$$\sigma(x) := \prod_{j \in J} (1 - \alpha_j x) = 1 + \sigma_1 x + \sigma_2 x^2 + \cdots + \sigma_t x^t$$

▶ For each j ∈ J, α_j^{−1} is a root of σ(x):

$$0 = \sigma(\alpha_j^{-1}) = 1 + \sigma_1 \alpha_j^{-1} + \sigma_2 \alpha_j^{-2} + \cdots + \sigma_t \alpha_j^{-t}$$

▶ For 0 ≤ ρ ≤ t − 1, multiply the above by e_j v_j (α_j)^{t+ρ} to get

$$0 = e_j v_j (\alpha_j)^{t+\rho} + \sigma_1 e_j v_j (\alpha_j)^{t+\rho-1} + \cdots + \sigma_t e_j v_j (\alpha_j)^{\rho},$$

for each j ∈ J.
▶ Now, sum over j ∈ J to get: for 0 ≤ ρ ≤ t − 1,

$$0 = s_{t+\rho} + \sigma_1 s_{t+\rho-1} + \cdots + \sigma_t s_\rho$$

(recalling that s_ℓ = Σ_{j∈J} e_j v_j (α_j)^ℓ).

Proof of Theorem GRS-DEC-2 (cont'd)

▶ These t linear equations (in the unknowns σ_1, ..., σ_t) can be written in matrix form as

$$\underbrace{\begin{bmatrix}
s_0 & s_1 & s_2 & \cdots & s_{t-1} \\
s_1 & s_2 & s_3 & \cdots & s_t \\
s_2 & s_3 & s_4 & \cdots & s_{t+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
s_{t-1} & s_t & s_{t+1} & \cdots & s_{2t-2}
\end{bmatrix}}_{M_t} \cdot \begin{bmatrix} \sigma_t \\ \sigma_{t-1} \\ \vdots \\ \sigma_1 \end{bmatrix} = \begin{bmatrix} -s_t \\ -s_{t+1} \\ \vdots \\ -s_{2t-1} \end{bmatrix}.$$

The Peterson-Gorenstein-Zierler (PGZ) Decoder

Given: An [n, k, d = n − k + 1] GRS code over F_q specified by H_GRS, with code locators α_1, ..., α_n and column multipliers v_1, ..., v_n.
Input: a received word y ∈ (F_q)^n.

1. Compute s = H_GRS y^T = [s_0, s_1, ..., s_{d−2}]^T. If s = 0, declare "no errors" and stop.
2. Find the largest integer 0 < λ ≤ τ = ⌊(d−1)/2⌋ such that det M_λ ≠ 0. Set t to be this λ. If no such λ is found, declare "> τ errors" and stop.
3. Solve

$$M_t \cdot \begin{bmatrix} \sigma_t \\ \sigma_{t-1} \\ \vdots \\ \sigma_1 \end{bmatrix} = \begin{bmatrix} -s_t \\ -s_{t+1} \\ \vdots \\ -s_{2t-1} \end{bmatrix}$$

for σ_1, ..., σ_t. Set σ(x) = 1 + σ_1 x + σ_2 x^2 + ··· + σ_t x^t.

The Peterson-Gorenstein-Zierler (PGZ) Decoder (cont'd)

4. Determine the roots of σ(x) in F_q (say, by exhaustive search). If fewer than t distinct non-zero roots are found, declare "> τ errors" and stop. If t distinct non-zero roots are found, then determine the error location set

J = {j : α_j^{−1} is a root of σ(x)}.

5. Solve the system of linear equations

$$\sum_{j \in J} e_j v_j (\alpha_j)^\ell = s_\ell, \qquad \ell = 0, 1, 2, \ldots, d-2$$

to obtain the error values e_j, j ∈ J.
6. Decode to c = y − e.
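
A sketch of Steps 4 and 5, reusing solve_mod_p from earlier. Since every root of σ(x) must be some α_j^{−1}, the exhaustive search only needs to scan the n code locators.

```python
def poly_eval(f, x, p):
    """Evaluate a coefficient list (lowest degree first) at x, by Horner's rule."""
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % p
    return acc

def find_error_locations(sigma, alphas, p):
    """PGZ Step 4: j is in J iff sigma(alpha_j^{-1}) = 0; None means > tau errors."""
    t = len(sigma) - 1
    J = [j for j, a in enumerate(alphas)
         if poly_eval(sigma, pow(a, -1, p), p) == 0]
    return J if len(J) == t else None

def find_error_values(s, J, alphas, vs, p):
    """PGZ Step 5: solve sum_j e_j v_j alpha_j^l = s_l for the e_j, j in J."""
    A = [[vs[j] * pow(alphas[j], l, p) % p for j in J] for l in range(len(s))]
    return dict(zip(J, solve_mod_p(A, s, p)))         # {j: e_j}

# Running example (0-indexed positions): find_error_locations([1, 2, 4], alphas, 7)
# == [0, 3], and find_error_values([3, 2, 5, 3], [0, 3], alphas, vs, 7) == {0: 1, 3: 2}.
```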

Remarks on the PGZ Decoder

▶ If the syndrome s computed in Step 1 arises from an error vector e of weight t ≤ τ, then the PGZ decoder is guaranteed to find this (necessarily unique) e.
▶ Otherwise, the decoder will fail in one of several ways.
▶ The same decoder can be used to decode an alternant code C_alt = C_GRS ∩ (F_0)^n, where F_0 is a subfield of the field F_q over which C_GRS is defined. If y ∈ (F_0)^n is now a received word, and we need to decode it to c ∈ C_alt, then we run the same PGZ decoder, with an extra check: all error values e_j obtained in Step 5 must lie in F_0, so that c = y − e belongs to C_alt.

Computational Complexity of PGZ Decoder

▶ The PGZ decoding algorithm needs to solve certain systems of linear equations: to determine the coefficients of σ(x), and to obtain the error values e_j, j ∈ J.
▶ If Gaussian elimination is used to solve these systems of linear equations, the resulting computational complexity is O(d^3). This is not good, as d is typically linear in the blocklength n.

Towards Faster Decoding of GRS Codes

Again, we assume that the syndrome s = [s_0, s_1, ..., s_{d−2}]^T arises from an error vector e of weight ≤ τ = ⌊(d−1)/2⌋.

Define the syndrome polynomial

$$S(x) := \sum_{\ell=0}^{d-2} s_\ell \, x^\ell = \sum_{\ell=0}^{d-2} \Biggl( \sum_{j \in J} e_j v_j (\alpha_j)^\ell \Biggr) x^\ell = \sum_{j \in J} e_j v_j \sum_{\ell=0}^{d-2} (\alpha_j x)^\ell,$$

where, as before, J = {j : e_j ≠ 0} is the set of error locations.

The Error-Evaluator Polynomial

Note that

$$(1 - \alpha_j x) \sum_{\ell=0}^{d-2} (\alpha_j x)^\ell = 1 - (\alpha_j x)^{d-1} \equiv 1 \pmod{x^{d-1}}$$

Hence,

$$\begin{aligned}
\sigma(x) S(x) &= \Biggl( \prod_{m \in J} (1 - \alpha_m x) \Biggr) \Biggl( \sum_{j \in J} e_j v_j \sum_{\ell=0}^{d-2} (\alpha_j x)^\ell \Biggr) \\
&= \sum_{j \in J} e_j v_j \Biggl( \prod_{m \in J} (1 - \alpha_m x) \Biggr) \sum_{\ell=0}^{d-2} (\alpha_j x)^\ell \\
&\equiv \sum_{j \in J} e_j v_j \prod_{m \in J \setminus \{j\}} (1 - \alpha_m x) \pmod{x^{d-1}}
\end{aligned}$$

The sum in the last line is the error-evaluator polynomial ω(x). In summary,

σ(x)S(x) ≡ ω(x) (mod x^{d−1})    (K1)

The Error-Evaluator Polynomial (cont'd)

$$\omega(x) = \sum_{j \in J} e_j v_j \prod_{m \in J \setminus \{j\}} (1 - \alpha_m x)$$

Some observations:
▶ deg ω(x) ≤ |J| − 1 < deg σ(x), so that

deg ω(x) < deg σ(x) ≤ τ    (K2)

▶ For any h ∈ J,

$$\omega(\alpha_h^{-1}) = e_h v_h \prod_{m \in J \setminus \{h\}} (1 - \alpha_m \alpha_h^{-1}) \neq 0.$$

So, no root of σ(x) is a root of ω(x), meaning that σ(x) and ω(x) have no common factors:

gcd(σ(x), ω(x)) = 1    (K3)

The Key Equation

The equations

σ(x)S(x) ≡ ω(x) (mod x^{d−1})    (K1)
deg ω(x) < deg σ(x) ≤ τ    (K2)
gcd(σ(x), ω(x)) = 1    (K3)

together form the key equation of GRS decoding.

For a given S(x), the solution (σ(x), ω(x)) to the key equation is unique up to scaling by a constant.

Uniqueness of Solution to Key Equation

We assume that the non-zero syndrome polynomial S(x) arises from an error vector e of weight ≤ τ, so that the corresponding pair (σ(x), ω(x)) satisfies the key equation (K1)–(K3).

Lemma KEY-1. Suppose that there exist polynomials β(x) and γ(x) such that

β(x)S(x) ≡ γ(x) (mod x^{d−1})    (1)

with deg γ(x) < (d − 1)/2 and deg β(x) ≤ (d − 1)/2.    (2)

Then, there exists some polynomial µ(x) such that β(x) = µ(x)σ(x) and γ(x) = µ(x)ω(x).

▶ If, in addition to (1) and (2) above, we also require that gcd(β(x), γ(x)) = 1, then (β(x), γ(x)) must be equal to (σ(x), ω(x)), up to scaling by a constant µ ∈ F.

Proof of Lemma KEY-1

▶ Multiply (1) by σ(x): σ(x)β(x)S(x) ≡ σ(x)γ(x) (mod x^{d−1}). Apply (K1):

β(x)ω(x) ≡ σ(x)γ(x) (mod x^{d−1})    (3)

▶ From (2) and (K2), we also have

deg(β(x)ω(x)) < d − 1 and deg(σ(x)γ(x)) < d − 1,

so that the congruence mod x^{d−1} in (3) above is in fact an equality:

β(x)ω(x) = σ(x)γ(x).    (4)

▶ In particular, this implies that σ(x) | β(x)ω(x). However, since gcd(σ(x), ω(x)) = 1 by (K3), we must have σ(x) | β(x), i.e., β(x) = µ(x)σ(x) for some µ(x).
▶ Plugging this back into (4) yields

µ(x)σ(x)ω(x) = σ(x)γ(x) ⟹ σ(x)(γ(x) − µ(x)ω(x)) = 0 ⟹ γ(x) = µ(x)ω(x)

The Key to Solving the Key Equation

▶ Lemma KEY-1 also implies the following: if (β(x), γ(x)) is a solution to (1) and (2) in which β(x) has the least degree among all solutions, then it must be the case that

β(x) = µ σ(x) and γ(x) = µ ω(x)

for some constant µ ∈ F.
▶ Furthermore, the constant µ ∈ F is uniquely determined by the additional requirement that the constant coefficient of σ(x) is 1:

σ(x) = 1 + σ_1 x + σ_2 x^2 + ··· + σ_t x^t.

We will solve the key equation (K1)–(K3) using the extended Euclidean algorithm.

Recall: The Euclidean Algorithm

Input: a(x), b(x) non-zero polynomials with deg a(x) ≥ deg b(x)
Initialize: r_{−1}(x) ← a(x); r_0(x) ← b(x)
Loop:
  i ← 1;
  while (r_{i−1}(x) ≠ 0) do
    r_i ← r_{i−2} mod r_{i−1};
    i ← i + 1;

If ν is the largest index i such that r_i(x) ≠ 0, then r_ν(x) = gcd(a(x), b(x)).

The extended Euclidean algorithm also keeps track of polynomials s_i(x) and t_i(x) such that

s_i(x)a(x) + t_i(x)b(x) = r_i(x) for i = −1, 0, 1, ..., ν + 1.

The Extended Euclidean Algorithm

Input: a(x), b(x) non-zero polynomials with deg a(x) ≥ deg b(x)
Initialize: r_{−1}(x) ← a(x); r_0(x) ← b(x);
            s_{−1}(x) ← 1; s_0(x) ← 0;
            t_{−1}(x) ← 0; t_0(x) ← 1;
Loop:
  i ← 1;
  while (r_{i−1}(x) ≠ 0) do
    r_i ← r_{i−2} mod r_{i−1};
    q_i ← r_{i−2} div r_{i−1};   /* this is the quotient */
    s_i ← s_{i−2} − q_i s_{i−1};
    t_i ← t_{i−2} − q_i t_{i−1};
    i ← i + 1;
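
A sketch of this algorithm for polynomials over a prime field F_p, tracking only the (r_i, t_i) pairs, which are all that the decoder below needs. Polynomials are coefficient lists, lowest degree first, with no leading zero coefficients.

```python
def poly_divmod(a, b, p):
    """Return (a div b, a mod b) over F_p; b must have a non-zero leading coeff."""
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    db, lead_inv = len(b) - 1, pow(b[-1], -1, p)
    for i in range(len(a) - 1, db - 1, -1):       # cancel top coefficients
        f = a[i] * lead_inv % p
        q[i - db] = f
        for j in range(db + 1):
            a[i - db + j] = (a[i - db + j] - f * b[j]) % p
    while len(a) > 1 and a[-1] == 0:
        a.pop()                                   # strip leading zeros of remainder
    return q, a

def poly_sub_mul(u, q, v, p):
    """Return u(x) - q(x) v(x) over F_p."""
    prod = [0] * (len(q) + len(v) - 1)
    for i, qi in enumerate(q):
        for j, vj in enumerate(v):
            prod[i + j] = (prod[i + j] + qi * vj) % p
    out = [0] * max(len(u), len(prod))
    for i, c in enumerate(u):
        out[i] = c
    for i, c in enumerate(prod):
        out[i] = (out[i] - c) % p
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

def extended_euclid(a, b, p):
    """Yield the pairs (r_i, t_i) for i = 0, 1, ..., ending with the zero remainder."""
    r_prev, r = a, b                              # r_{-1} = a, r_0 = b
    t_prev, t = [0], [1]                          # t_{-1} = 0, t_0 = 1
    seq = [(r, t)]
    while any(r):                                 # while r_i(x) != 0
        q, rem = poly_divmod(r_prev, r, p)        # q_i and r_i = r_{i-2} mod r_{i-1}
        r_prev, r = r, rem
        t_prev, t = t, poly_sub_mul(t_prev, q, t, p)   # t_i = t_{i-2} - q_i t_{i-1}
        seq.append((r, t))
    return seq
```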

The Extended Euclidean Algorithm (cont'd)

The following result can be easily proved by induction on i:

Lemma EUCLID: For i = 0, 1, ..., ν + 1, we have
(a) s_i(x)a(x) + t_i(x)b(x) = r_i(x)
(b) deg t_i(x) + deg r_{i−1}(x) = deg a(x)
(c) s_{i−1}(x)t_i(x) − s_i(x)t_{i−1}(x) = (−1)^i

Note that from (c), we have gcd(s_i(x), t_i(x)) = 1 for all i.

Solving the Key Equation

Theorem KEY: Let S(x) be the given syndrome polynomial. The key equation (K1)–(K3) can be solved for (σ(x), ω(x)) by means of the following procedure:
▶ Run the extended Euclidean algorithm on a(x) = x^{d−1} and b(x) = S(x).
▶ Stop at the index i* such that

deg r_{i*}(x) < (d − 1)/2 ≤ deg r_{i*−1}(x).

Then, t_{i*}(x) = µ σ(x) and r_{i*}(x) = µ ω(x) for some µ ∈ F.

Remark: Such an i* exists, and is unique, since deg r_i(x) strictly decreases in i.
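
Theorem KEY translates directly into code on top of the extended_euclid sketch: feed in a(x) = x^{d−1} and S(x), stop at the first remainder of degree < (d−1)/2, and scale so that the recovered σ(x) has constant coefficient 1, which fixes µ.

```python
def poly_deg(f):
    """Degree of a coefficient list; the zero polynomial gets degree -1."""
    d = len(f) - 1
    while d >= 0 and f[d] == 0:
        d -= 1
    return d

def solve_key_equation(s, d, p):
    """Recover (sigma, omega) from the syndrome list s = [s_0, ..., s_{d-2}]."""
    S = s[:]
    while len(S) > 1 and S[-1] == 0:
        S.pop()                                  # strip leading zeros of S(x)
    a = [0] * (d - 1) + [1]                      # a(x) = x^{d-1}
    for r, t in extended_euclid(a, S, p):        # pairs (r_i, t_i), i = 0, 1, ...
        if poly_deg(r) < (d - 1) / 2:            # the stopping index i*
            mu_inv = pow(t[0], -1, p)            # t_{i*}(0) = mu, since sigma(0) = 1
            return ([c * mu_inv % p for c in t],     # sigma(x)
                    [c * mu_inv % p for c in r])     # omega(x)

# Running example: solve_key_equation([3, 2, 5, 3], 5, 7) == ([1, 2, 4], [3, 1]),
# i.e. sigma(x) = 1 + 2x + 4x^2 and omega(x) = 3 + x over F_7.
```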

Proof of Theorem KEY

▶ From Lemma EUCLID (a), we have

t_{i*}(x)S(x) ≡ r_{i*}(x) (mod x^{d−1}).

▶ By choice of i*, we also have deg r_{i*}(x) < (d − 1)/2, and, by EUCLID (b),

deg t_{i*}(x) = d − 1 − deg r_{i*−1}(x) ≤ (d − 1)/2.

▶ Therefore, by Lemma KEY-1,

t_{i*}(x) = µ(x)σ(x) and r_{i*}(x) = µ(x)ω(x)

for some polynomial µ(x) ∈ F[x].
▶ Plugging these into Lemma EUCLID (a), we have

s_{i*}(x)x^{d−1} + µ(x)σ(x)S(x) = µ(x)ω(x)

▶ On the other hand, (K1) gives us

v(x)x^{d−1} + σ(x)S(x) = ω(x) for some v(x) ∈ F[x].

Proof of Theorem KEY (cont'd)

Multiplying the identity v(x)x^{d−1} + σ(x)S(x) = ω(x) through by µ(x), and placing it alongside the first equation from the previous slide, we have

s_{i*}(x)x^{d−1} + µ(x)σ(x)S(x) = µ(x)ω(x)
µ(x)v(x)x^{d−1} + µ(x)σ(x)S(x) = µ(x)ω(x)

▶ Comparing the two equations, we get that s_{i*}(x) = µ(x)v(x), so that µ(x) | s_{i*}(x).
▶ Since t_{i*}(x) = µ(x)σ(x), we also have µ(x) | t_{i*}(x).
▶ However, by the remark following Lemma EUCLID, gcd(s_{i*}(x), t_{i*}(x)) = 1, and hence µ(x) must be some constant µ ∈ F.

Computing e_j, j ∈ J

▶ From σ(x), we determine the error location set J as usual by finding the roots of σ(x).
▶ For the error values e_j, j ∈ J, we use the error-evaluator polynomial ω(x).

Definition: The formal derivative of $f(x) = \sum_{m=0}^{d} a_m x^m \in \mathbb{F}[x]$ is the polynomial $f'(x) = \sum_{m=1}^{d} m\, a_m x^{m-1}$.

▶ Formal derivatives obey the familiar product rule: for f(x), g(x) ∈ F[x],

[f(x)g(x)]' = f'(x)g(x) + f(x)g'(x).
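
In coefficient-list form, the formal derivative is a one-line sketch:

```python
def formal_derivative(f, p):
    """f'(x) over F_p: the term a_m x^m maps to m a_m x^{m-1}."""
    return [m * c % p for m, c in enumerate(f)][1:] or [0]
```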



Computing e_j, j ∈ J (cont'd)

▶ Applying the product rule recursively to σ(x) = Π_{j∈J}(1 − α_j x), we find

$$\sigma'(x) = \sum_{j \in J} (-\alpha_j) \prod_{m \in J \setminus \{j\}} (1 - \alpha_m x).$$

▶ Then, for any h ∈ J,

$$\sigma'(\alpha_h^{-1}) = (-\alpha_h) \prod_{m \in J \setminus \{h\}} (1 - \alpha_m \alpha_h^{-1}) \neq 0.$$

Also, as observed during the derivation of (K3),

$$\omega(\alpha_h^{-1}) = e_h v_h \prod_{m \in J \setminus \{h\}} (1 - \alpha_m \alpha_h^{-1}).$$

▶ Therefore,

$$e_h = -\frac{\alpha_h}{v_h} \cdot \frac{\omega(\alpha_h^{-1})}{\sigma'(\alpha_h^{-1})} \qquad \text{for all } h \in J.$$

This formula is attributed to G. D. Forney.
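
A sketch of Forney's formula, built on poly_eval and formal_derivative from the earlier sketches:

```python
def forney(sigma, omega, J, alphas, vs, p):
    """e_h = -(alpha_h / v_h) * omega(alpha_h^{-1}) / sigma'(alpha_h^{-1})."""
    dsigma = formal_derivative(sigma, p)
    errors = {}
    for h in J:
        x = pow(alphas[h], -1, p)                     # alpha_h^{-1}
        num = alphas[h] * poly_eval(omega, x, p) % p
        den = vs[h] * poly_eval(dsigma, x, p) % p     # non-zero for h in J
        errors[h] = (-num * pow(den, -1, p)) % p
    return errors                                     # {h: e_h}

# Running example: forney([1, 2, 4], [3, 1], [0, 3], alphas, vs, 7) == {0: 1, 3: 2}.
```

Compared with the PGZ decoder's Gaussian elimination in Step 5, this replaces a linear solve with t polynomial evaluations.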

Summary: GRS Decoding via the Extended Euclidean Algorithm

Given: An [n, k, d = n − k + 1] GRS code over F_q specified by H_GRS, with code locators α_1, ..., α_n and column multipliers v_1, ..., v_n.
Input: a received word y ∈ (F_q)^n.

1. Compute s = H_GRS y^T = [s_0, s_1, ..., s_{d−2}]^T. If s = 0, output c = y, and stop.
2. Use the extended Euclidean algorithm, as outlined in Theorem KEY, to recover (σ(x), ω(x)).
3. Determine J by (exhaustively) identifying all the roots of σ(x).
4. Compute the error vector e = (e_1, ..., e_n) as follows:

$$e_h = \begin{cases} -\dfrac{\alpha_h}{v_h} \cdot \dfrac{\omega(\alpha_h^{-1})}{\sigma'(\alpha_h^{-1})} & \text{if } h \in J, \\[1ex] 0 & \text{otherwise.} \end{cases}$$

Output c = y − e.
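
Gluing the sketches together decodes the hypothetical running example end to end (all helpers are from the earlier sketches; a failure in Step 3, i.e. more than τ errors, is reported here by raising an exception):

```python
y = [3, 6, 5, 1, 2, 0]                   # the two-error word from the earlier sketch
s = syndrome(H, y, p)                    # Step 1
if not any(s):
    c = y                                # zero syndrome: no errors detected
else:
    sigma, omega = solve_key_equation(s, d, p)     # Step 2 (Theorem KEY)
    J = find_error_locations(sigma, alphas, p)     # Step 3
    if J is None:
        raise ValueError("more than tau errors")   # decoding failure
    e = forney(sigma, omega, J, alphas, vs, p)     # Step 4 (Forney's formula)
    c = [(yj - e.get(h, 0)) % p for h, yj in enumerate(y)]
# Here c == [2, 6, 5, 6, 2, 0]: both symbol errors are corrected.
```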

Computational Complexity

▶ Step 1 requires O(dn) field operations.
▶ Step 2 requires O(|J|d) field operations:
  – Each iteration of the EEA deals with polynomials of degree < d, and hence requires O(d) field operations.
  – To estimate the number of iterations, note that deg t_i(x) is strictly increasing in i, since deg t_i(x) = d − 1 − deg r_{i−1}(x) by EUCLID (b), and deg r_{i−1}(x) is strictly decreasing in i. Moreover, deg t_0(x) = 0 and deg t_{i*}(x) = deg σ(x) = |J|. Hence, the number of iterations needed to go from deg t_0(x) = 0 to deg t_{i*}(x) = |J| is at most |J|.
▶ Steps 3 & 4 require O(|J|n) field operations.

Overall, the computational complexity is O(dn).
