Low-Latency Ordered Statistics Decoding of BCH Codes
◼ Prof. Li Chen
◼ School of Electronics and Information Technology, Sun Yat-sen University
Outline
◼ Background
◼ BCH Codes
◼ Low-Latency OSD
Background
◼ Future scenarios featuring URLLC
Background
◼ Capacity-approaching codes
Background
◼ Near-ML decoding performance: rate 𝑅 = 1/2 & length 𝑛 = 128 [1]
[Figure: FER comparison of short codes, with a 0.1 dB gap annotated]
[1] M. Shirvanimoghaddam et al., “Short block-length codes for ultrareliable low latency communications,” IEEE Commun. Mag., vol. 57, no. 2, pp. 130–137, Feb. 2019.
BCH Codes
◼ Encoding of BCH codes
Primitive element of 𝔽_{2^𝑚}: 𝜎
(𝑛, 𝑘) binary primitive BCH code 𝒞_BCH[𝑛, 𝑘]: codeword length 𝑛 = 2^𝑚 − 1, designed distance 𝑑 = 2𝑡 + 1
Encoding process: 𝒄 = 𝒇 ⋅ 𝐆
Parity-check matrix:
𝐇 = [ 𝜎^0 𝜎^1 ⋯ 𝜎^{𝑛−1} ; (𝜎^2)^0 (𝜎^2)^1 ⋯ (𝜎^2)^{𝑛−1} ; ⋮ ⋮ ⋱ ⋮ ; (𝜎^{2𝑡})^0 (𝜎^{2𝑡})^1 ⋯ (𝜎^{2𝑡})^{𝑛−1} ] ∈ 𝔽_{2^𝑚}^{2𝑡×𝑛},
which is the parity-check matrix of the (𝑛, 𝑛 − 2𝑡) RS code.
Parity-check equation: 𝑐(𝜎) = 𝑐(𝜎^2) = ⋯ = 𝑐(𝜎^{2𝑡}) = 0 ⇔ 𝒄 ⋅ 𝐇^T = 𝟎
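The parity-check equation above can be checked numerically on the smallest case. The sketch below (an illustration, not the lecture's notation) builds GF(8) from the primitive polynomial x³ + x + 1 and verifies c(𝜎) = c(𝜎²) = 0 for a codeword of the (7, 4) BCH code, here the generator polynomial g(x) = 1 + x + x³ itself:

```python
# GF(8) from x^3 + x + 1; sigma = 2 represents x (an assumed representation)
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 8:
        x ^= 0b1011            # reduce modulo x^3 + x + 1

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def c_eval(c, a):
    """Evaluate c(a) = sum_j c_j a^j over GF(8) for a binary vector c."""
    s, p = 0, 1
    for cj in c:
        if cj:
            s ^= p
        p = mul(p, a)
    return s

c = (1, 1, 0, 1, 0, 0, 0)      # g(x) = 1 + x + x^3 as a length-7 codeword
syndromes = [c_eval(c, EXP[l]) for l in (1, 2)]   # evaluate at sigma, sigma^2
print(syndromes)               # [0, 0] -> c . H^T = 0, so c is a codeword
```

With t = 1 the matrix 𝐇 has 2t = 2 rows over GF(8), so only the two evaluations above are needed.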
BCH Codes
◼ Decoding of BCH codes
Berlekamp-Massey (BM) and Euclidean algorithms: efficient, algebraic, and hardware friendly, but with limited (hard-decision) performance
Guruswami-Sudan (GS) algorithm: list decoding beyond half the minimum distance
Soft-Decision Decoding
[Figure: reliability (LLR) spectrum from unreliable to reliable information; decoding targets the unreliable side, re-encoding the reliable side]
Hybrid soft decoding: use algebraic Chase decoding to identify test error patterns (TEPs)
Ordered Statistics Decoding
◼ Overview of OSD
Pipeline: LLRs → GE → re-encoding with TEPs → codeword candidates → the optimal codeword
• GE: Gaussian elimination
• LLRs: log-likelihood ratios
• MRIPs: most reliable independent positions
• TEPs: test error patterns
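The pipeline above can be sketched end to end on a toy code. This is a minimal illustration, not the lecture's implementation: reliability sorting, GE to find the MRIPs, TEP re-encoding, and selection by a soft metric (sum of |LLR| over positions disagreeing with the hard decision). The (7, 4) generator matrix and test values are assumptions for the demo.

```python
from itertools import combinations

def gf2_rref(mat, ncols):
    """Row-reduce a binary matrix over GF(2); return (rref, pivot columns)."""
    mat = [row[:] for row in mat]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                mat[i] = [a ^ b for a, b in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
        if r == len(mat):
            break
    return mat, pivots

def osd(llrs, G, tau):
    n, k = len(llrs), len(G)
    order = sorted(range(n), key=lambda j: -abs(llrs[j]))      # most reliable first
    Gp = [[row[j] for j in order] for row in G]
    R, pivots = gf2_rref(Gp, n)                                # pivots = MRIPs
    hard = [1 if llrs[order[j]] < 0 else 0 for j in range(n)]  # permuted hard decisions
    f0 = [hard[p] for p in pivots]
    best, best_metric = None, float("inf")
    for w in range(tau + 1):
        for flips in combinations(range(k), w):                # TEPs of weight <= tau
            f = f0[:]
            for i in flips:
                f[i] ^= 1
            cand = [0] * n                                     # re-encode: sum of rows of R
            for i, fi in enumerate(f):
                if fi:
                    cand = [a ^ b for a, b in zip(cand, R[i])]
            metric = sum(abs(llrs[order[j]])
                         for j in range(n) if cand[j] != hard[j])
            if metric < best_metric:
                best, best_metric = cand, metric
    out = [0] * n                                              # undo the permutation
    for j in range(n):
        out[order[j]] = best[j]
    return out

# systematic generator of the (7,4) BCH (Hamming) code, g(x) = 1 + x + x^3
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]
c = [1, 0, 1, 1, 1, 0, 0]                    # codeword for message (1,0,1,1)
llrs = [(1 - 2 * b) * 4.0 for b in c]        # noiseless BPSK LLRs
llrs[4] = 0.5                                # one low-reliability hard-decision error
print(osd(llrs, G, 1) == c)                  # True
```

Because the erroneous position is the least reliable, it falls outside the MRIPs and even order 0 re-encoding already corrects it here.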
Ordered Statistics Decoding
◼ Construct the systematic generator matrix
Sort the channel outputs based on reliability and determine the most reliable independent positions (MRIPs)
𝐆 → 𝐆′ = Λ(𝐆) → 𝐆_BCH
Ordered Statistics Decoding
◼ Generate BCH candidate codewords
Re-encoding process
Initial message: 𝒇^(0) = (𝑟_{𝑗_0}, 𝑟_{𝑗_1}, …, 𝑟_{𝑗_{𝑘−1}}) ∈ 𝔽_2^𝑘
Test error pattern: 𝒆^(𝜔) = (𝑒^(𝜔)_{𝑗_0}, 𝑒^(𝜔)_{𝑗_1}, …, 𝑒^(𝜔)_{𝑗_{𝑘−1}}) ∈ 𝔽_2^𝑘, 𝜔 = 0, 1, …, 𝑁_TEPs − 1
BCH codeword candidate: 𝒄̂^(𝜔) = (𝑐̂^(𝜔)_0, 𝑐̂^(𝜔)_1, …, 𝑐̂^(𝜔)_{𝑛−1}) = Λ^{−1}(𝒇^(𝜔) ⋅ 𝐆_BCH) ∈ 𝔽_2^𝑛, where 𝒇^(𝜔) = 𝒇^(0) + 𝒆^(𝜔) (repetitive processing, parallel)
Number of TEPs (= number of BCH codeword candidates): OSD with order 𝜏 uses all TEPs with 𝑑_H(𝒆^(𝜔), 𝟎) ≤ 𝜏, so
𝑁_TEPs = C(𝑘, 0) + C(𝑘, 1) + ⋯ + C(𝑘, 𝜏)
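The TEP count is a simple binomial sum. A small sketch, cross-checked by direct enumeration (the parameter k = 45 is taken from the (63, 45) code used later):

```python
from itertools import combinations
from math import comb

def n_teps(k, tau):
    """Number of TEPs of Hamming weight <= tau over k MRIPs."""
    return sum(comb(k, i) for i in range(tau + 1))

# cross-check against direct enumeration for a small k
k_small, tau = 6, 2
enumerated = sum(1 for w in range(tau + 1)
                 for _ in combinations(range(k_small), w))
assert enumerated == n_teps(k_small, tau)

print(n_teps(45, 1), n_teps(45, 2))   # 46 1036
```

The count, and hence the re-encoding workload, grows rapidly with the order 𝜏.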
Ordered Statistics Decoding
◼ Challenges of OSD
GE is inherently sequential and dominates the decoding latency
Alternative solution? Utilize the algebra of the code
Low-Latency OSD
◼ BCH codes and RS codes
Subfield subcode
Given two linear codes 𝒞 ⊂ 𝔽_2^𝑛 and 𝒞′ ⊂ 𝔽_{2^𝑚}^𝑛, if 𝒞 = 𝒞′ ∩ 𝔽_2^𝑛, 𝒞 is called the subfield subcode of 𝒞′ over 𝔽_2.
Lemma 1
An (𝑛, 𝑘) 𝑡-error-correcting BCH code defined over 𝔽_2 is a subfield subcode of an (𝑛, 𝑘′) 𝑡-error-correcting RS code defined over 𝔽_{2^𝑚}, where 𝑘′ = 𝑛 − 2𝑡, i.e., 𝒞_BCH[𝑛, 𝑘, 2𝑡 + 1] = 𝒞_RS[𝑛, 𝑘′, 2𝑡 + 1] ∩ 𝔽_2^𝑛
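Lemma 1 can be verified exhaustively on the smallest case: the binary length-7 vectors satisfying the (7, 5) RS parity checks c(𝜎) = c(𝜎²) = 0 over GF(8) are exactly the 16 codewords of the (7, 4) BCH code generated by g(x) = 1 + x + x³. The field construction below (x³ + x + 1, 𝜎 = 2) is an assumed toy representation.

```python
from itertools import product

# GF(8) from x^3 + x + 1
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 8:
        x ^= 0b1011

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def c_eval(c, a):
    """Evaluate c(a) over GF(8) for a binary vector c."""
    s, p = 0, 1
    for cj in c:
        if cj:
            s ^= p
        p = mul(p, a)
    return s

# binary vectors inside the RS code: the subfield subcode
subcode = {c for c in product((0, 1), repeat=7)
           if c_eval(c, EXP[1]) == 0 and c_eval(c, EXP[2]) == 0}

# BCH codewords: all GF(2) multiples f(x) * g(x) with deg f < 4
g = (1, 1, 0, 1)
bch = set()
for f in product((0, 1), repeat=4):
    c = [0] * 7
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                c[i + j] ^= gj
    bch.add(tuple(c))

print(subcode == bch, len(bch))   # True 16
```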
Low-Latency OSD
◼ Basic idea of low-latency OSD (LLOSD)
OSD: GE (sequential) → test messages → candidates
LLOSD: replace GE by constructing 𝐆_RS through Lagrange interpolation, so that test messages and candidates can be processed in parallel
Low-Latency OSD
◼ Generation of 𝐆_RS
Encoding of an (𝑛, 𝑘′) RS code
Message 𝒖 = (𝑢_0, 𝑢_1, …, 𝑢_{𝑘′−1}) ∈ 𝔽_{2^𝑚}^{𝑘′}, in polynomial form 𝑢(𝑥) = 𝑢_0 + 𝑢_1𝑥 + ⋯ + 𝑢_{𝑘′−1}𝑥^{𝑘′−1} ∈ 𝔽_{2^𝑚}[𝑥]
Codeword 𝒗 = (𝑣_0, 𝑣_1, …, 𝑣_{𝑛−1}) = (𝑢(𝛼_0), 𝑢(𝛼_1), …, 𝑢(𝛼_{𝑛−1})), where 𝛼_0, 𝛼_1, …, 𝛼_{𝑛−1} ∈ 𝔽_{2^𝑚}\{0} are the code locators
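Evaluation-style RS encoding can be sketched directly: any message polynomial of degree < k′ = 5 over GF(8), evaluated at the locators 𝛼_j = 𝜎^j, gives zero syndromes at 𝜎 and 𝜎². The field setup and message values here are illustrative assumptions.

```python
# GF(8) from x^3 + x + 1
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 8:
        x ^= 0b1011

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def poly_eval(u, a):
    """Horner evaluation of u(x) at a over GF(8)."""
    s = 0
    for coef in reversed(u):
        s = mul(s, a) ^ coef
    return s

def syndrome(v, a):
    """sum_j v_j * a^j over GF(8)."""
    s, p = 0, 1
    for vj in v:
        s ^= mul(vj, p)
        p = mul(p, a)
    return s

u = (3, 0, 6, 1, 5)                              # arbitrary message in GF(8)^5
v = [poly_eval(u, EXP[j]) for j in range(7)]     # v_j = u(alpha_j), alpha_j = sigma^j
print([syndrome(v, EXP[l]) for l in (1, 2)])     # [0, 0]: v is an RS codeword
```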
Low-Latency OSD
◼ Generation of 𝐆_RS
Determine the MRIPs
Permuted received word: 𝒓′ = Λ(𝒓) = (𝑟_{𝑗_0}, 𝑟_{𝑗_1}, …, 𝑟_{𝑗_{𝑛−1}}), with |𝐿_{𝑗_0}| ≥ |𝐿_{𝑗_1}| ≥ ⋯ ≥ |𝐿_{𝑗_{𝑛−1}}|
Θ = {𝑗_0, 𝑗_1, …, 𝑗_{𝑘′−1}} (MRPs), Θ^c = {𝑗_{𝑘′}, 𝑗_{𝑘′+1}, …, 𝑗_{𝑛−1}}
Low-Latency OSD
◼ Generation of 𝐆_RS
Message 𝒖 = (𝑟_{𝑗_0}, 𝑟_{𝑗_1}, …, 𝑟_{𝑗_{𝑘′−1}}) ∈ 𝔽_{2^𝑚}^{𝑘′}, defined by Θ = {𝑗_0, 𝑗_1, …, 𝑗_{𝑘′−1}}
Lagrange interpolation: ℋ_𝒖(𝑥) = ∑_{𝑗∈Θ} 𝑟_𝑗 𝑇_𝑗(𝑥)
Low-Latency OSD
◼ Example 3
Given a (7, 4) BCH code and an LLR sequence 𝑳 = (−2.447, 5.115, −4.771, −1.349, −7.096, 0.443, −3.485)
MRPs: Θ = {4, 1, 2, 6, 0}
Lagrange interpolation polynomials:
𝑇_4(𝑥) = (𝑥−𝛼_1)(𝑥−𝛼_2)(𝑥−𝛼_6)(𝑥−𝛼_0) / ((𝛼_4−𝛼_1)(𝛼_4−𝛼_2)(𝛼_4−𝛼_6)(𝛼_4−𝛼_0))
𝑇_1(𝑥) = (𝑥−𝛼_4)(𝑥−𝛼_2)(𝑥−𝛼_6)(𝑥−𝛼_0) / ((𝛼_1−𝛼_4)(𝛼_1−𝛼_2)(𝛼_1−𝛼_6)(𝛼_1−𝛼_0))
𝑇_2(𝑥) = (𝑥−𝛼_4)(𝑥−𝛼_1)(𝑥−𝛼_6)(𝑥−𝛼_0) / ((𝛼_2−𝛼_4)(𝛼_2−𝛼_1)(𝛼_2−𝛼_6)(𝛼_2−𝛼_0))
𝑇_6(𝑥) = (𝑥−𝛼_4)(𝑥−𝛼_1)(𝑥−𝛼_2)(𝑥−𝛼_0) / ((𝛼_6−𝛼_4)(𝛼_6−𝛼_1)(𝛼_6−𝛼_2)(𝛼_6−𝛼_0))
𝑇_0(𝑥) = (𝑥−𝛼_4)(𝑥−𝛼_1)(𝑥−𝛼_2)(𝑥−𝛼_6) / ((𝛼_0−𝛼_4)(𝛼_0−𝛼_1)(𝛼_0−𝛼_2)(𝛼_0−𝛼_6))
Low-Latency OSD
◼ Generation of 𝐆_RS
Since the code locators satisfy ∏_{𝑗=0}^{𝑛−1} 𝛼_𝑗 = 1, the entries ℋ_{𝒖_𝑖}(𝛼_𝑗) admit two equivalent forms:
ℋ_{𝒖_𝑖}(𝛼_𝑗) = ∏_{𝑗′∈Θ}(𝛼_𝑗 − 𝛼_{𝑗′}) / ((𝛼_𝑗 − 𝛼_𝑖) ∏_{𝑗′∈Θ, 𝑗′≠𝑖}(𝛼_𝑖 − 𝛼_{𝑗′})) = (𝛼_𝑖 ∏_{𝑗′∈Θ^c, 𝑗′≠𝑗}(𝛼_𝑖 − 𝛼_{𝑗′})) / (𝛼_𝑗 ∏_{𝑗′∈Θ^c, 𝑗′≠𝑗}(𝛼_𝑗 − 𝛼_{𝑗′})), where 𝑗 ∈ Θ^c
The first form involves |Θ| = 𝑘′ factors, the second only |Θ^c| = 𝑛 − 𝑘′ factors
Example (locators 𝛼_0, 𝛼_1, …, 𝛼_6):
ℋ_{𝒖_4}(𝛼_3) = ((𝛼_3−𝛼_1)(𝛼_3−𝛼_2)(𝛼_3−𝛼_6)(𝛼_3−𝛼_0)) / ((𝛼_4−𝛼_1)(𝛼_4−𝛼_2)(𝛼_4−𝛼_6)(𝛼_4−𝛼_0)) = 5
and
ℋ_{𝒖_4}(𝛼_3) = 𝛼_4(𝛼_4 − 𝛼_5) / (𝛼_3(𝛼_3 − 𝛼_5)) = 5
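Both expressions can be checked numerically for the example entry (i = 4, j = 3, Θ = {4, 1, 2, 6, 0}). Under the assumed toy representation of GF(8) (x³ + x + 1, 𝛼_j = 𝜎^j with 𝜎 = 2), both forms evaluate to 5, matching the slide:

```python
# GF(8) from x^3 + x + 1
EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 8:
        x ^= 0b1011

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def div(a, b):
    return mul(a, EXP[7 - LOG[b]])     # b != 0

Theta, Theta_c = [4, 1, 2, 6, 0], [3, 5]
alpha = lambda j: EXP[j]               # alpha_j = sigma^j
i, j = 4, 3

num = den = 1                          # first form: product over Theta \ {i}
for jp in Theta:
    if jp != i:
        num = mul(num, alpha(j) ^ alpha(jp))
        den = mul(den, alpha(i) ^ alpha(jp))
form1 = div(num, den)

num2, den2 = alpha(i), alpha(j)        # second form: product over Theta_c \ {j}
for jp in Theta_c:
    if jp != j:
        num2 = mul(num2, alpha(i) ^ alpha(jp))
        den2 = mul(den2, alpha(j) ^ alpha(jp))
form2 = div(num2, den2)

print(form1, form2)                    # 5 5
```

The second form touches only n − k′ = 2 factors instead of k′ = 5, which is the saving for high-rate codes.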
Low-Latency OSD
◼ Generation of BCH codeword candidates
The 𝜔-th test message: 𝒖^(𝜔) = 𝒖 + 𝒆′^(𝜔)
The 𝜔-th systematic RS codeword: 𝒗̂^(𝜔) = (𝑣̂^(𝜔)_0, 𝑣̂^(𝜔)_1, …, 𝑣̂^(𝜔)_{𝑛−1}) ∈ 𝔽_{2^𝑚}^𝑛
𝒗̂^(𝜔) = 𝒖^(𝜔) ⋅ 𝐆_RS = 𝒗̂^(0) + 𝒆′^(𝜔) ⋅ 𝐆_RS
Low-Latency OSD
◼ Generation of BCH codeword candidates
Theorem 2 (Identify invalid TEPs)
If 𝑣̂^(0)_𝑗 + ∑_{𝑖∈Θ, 𝑒′^(𝜔)_𝑖 ≠ 0} ℋ_{𝒖_𝑖}(𝛼_𝑗) ∈ {0, 1}, ∀𝑗 ∈ Θ^c, then 𝒗̂^(𝜔) is a BCH codeword
Only the positions in Θ^c need to be checked
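The Theorem 2 check can be exercised exhaustively on the toy (7, 4)/(7, 5) pair, using the MRPs of Example 3 and the assumed GF(8) representation (x³ + x + 1, 𝛼_j = 𝜎^j). Of the 32 binary messages on Θ, exactly 16 (the codewords of the punctured BCH code) produce binary parity symbols on Θ^c, and each completed vector passes the RS parity checks:

```python
from itertools import product

EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 8:
        x ^= 0b1011

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def div(a, b):
    return mul(a, EXP[7 - LOG[b]])

Theta, Theta_c = [4, 1, 2, 6, 0], [3, 5]
alpha = lambda j: EXP[j]

def T(i, j):
    """Lagrange coefficient H_{u_i}(alpha_j)."""
    num = den = 1
    for jp in Theta:
        if jp != i:
            num = mul(num, alpha(j) ^ alpha(jp))
            den = mul(den, alpha(i) ^ alpha(jp))
    return div(num, den)

def syndrome(v, a):
    s, p = 0, 1
    for vj in v:
        s ^= mul(vj, p)
        p = mul(p, a)
    return s

valid = 0
for u in product((0, 1), repeat=5):
    parity = []
    for j in Theta_c:                       # interpolated parity symbols
        s = 0
        for pos, i in enumerate(Theta):
            if u[pos]:
                s ^= T(i, j)
        parity.append(s)
    if all(p in (0, 1) for p in parity):    # Theorem 2 check on Theta_c only
        valid += 1
        v = [0] * 7
        for pos, i in enumerate(Theta):
            v[i] = u[pos]
        for idx, j in enumerate(Theta_c):
            v[j] = parity[idx]
        assert syndrome(v, EXP[1]) == 0 and syndrome(v, EXP[2]) == 0
print(valid)   # 16: exactly the BCH codewords survive
```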
Low-Latency OSD
◼ Overview of LLOSD
Pipeline: LLRs → Lagrange polynomials (parallel) → re-encoding with TEPs (parallel) → codeword candidates (parallel) → the optimal codeword
Compared with OSD, the sequential GE step is eliminated
Low-Latency OSD
◼ Complexity comparison of LLOSD and OSD
[Table: complexity comparison]
𝑁_{𝑗′} is the number of TEPs that yield binary symbols after the 𝑗′-th judgement as in Theorem 2.
Low-Latency OSD
◼ Performance of the (63, 45) BCH code, AWGN, BPSK
[Figure: FER curves; complexity and latency comparison]
[2] T. Kaneko et al., “An efficient maximum-likelihood-decoding algorithm for linear block codes with algebraic decoder,” IEEE Trans. Inf. Theory, vol. 40, no. 2, pp. 320–327, 1994.
Low-Latency OSD
◼ Re-encoding complexity distribution: LLOSD (3) of the (63, 45) BCH code with ML criterion, SNR = 5.0 dB
[Figure: re-encoding complexity distribution]
Low-Latency OSD
◼ Segmented variation of LLOSD
[Figure: analysis of the TEP positions over Θ (MRPs) and Θ^c]
Low-Latency OSD
◼ Segmented variation of LLOSD
For OSD
If 𝜏 ≥ min{⌈𝑑/4 − 1⌉, 𝑘}, the list error probability of OSD satisfies 𝑃_list(𝜏) ≪ 𝑃_e,ML, so OSD can produce near-ML decoding performance; we only need a decoding order of 𝜏 for the 𝑘 MRIPs 𝑗_0, 𝑗_1, …, 𝑗_{𝑘−1}
For LLOSD
TEPs span all of Θ: 𝒆′^(𝜔) = (𝑒′^(𝜔)_{𝑗_0}, …, 𝑒′^(𝜔)_{𝑗_{𝑘−1}}, 𝑒′^(𝜔)_{𝑗_𝑘}, …, 𝑒′^(𝜔)_{𝑗_{𝑘′−1}})
𝜏_1 = min{⌈𝑑/4 − 1⌉, 𝑘} is a sufficient order for the first 𝑘 positions, with an extra order 𝜏_2 for the remaining 𝑘′ − 𝑘 positions
Number of TEPs 𝑁_TEPs: ∑_{𝑖=0}^{𝜏} C(𝑘′, 𝑖) is replaced by ∑_{𝑖_1=0}^{𝜏_1} C(𝑘, 𝑖_1) ⋅ ∑_{𝑖_2=0}^{𝜏_2} C(𝑘′ − 𝑘, 𝑖_2)
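The two counts above can be compared numerically for the (63, 45) code (t = 3, so k′ = 57). The segmented parameters 𝜏₁ = 1, 𝜏₂ = 3 are read off the "Seg. LLOSD (1|45, 3)" configuration in the performance comparison and are an assumption here:

```python
from math import comb

def n_llosd(k_prime, tau):
    """Plain LLOSD: TEPs of weight <= tau over all k' positions."""
    return sum(comb(k_prime, i) for i in range(tau + 1))

def n_segmented(k, k_prime, tau1, tau2):
    """Segmented LLOSD: order tau1 on the k MRIPs, tau2 on the other k'-k."""
    return (sum(comb(k, i1) for i1 in range(tau1 + 1))
            * sum(comb(k_prime - k, i2) for i2 in range(tau2 + 1)))

# (63,45) BCH code: t = 3, k' = n - 2t = 57
print(n_llosd(57, 3))               # 30914 TEPs for plain LLOSD (3)
print(n_segmented(45, 57, 1, 3))    # 13754 TEPs for Seg. LLOSD (1|45, 3)
```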
Low-Latency OSD
◼ Performance of the (63, 45) BCH code, AWGN, BPSK
[Figure: FER curves for OSD (1), LLOSD (1), LLOSD (2), LLOSD (3), and Seg. LLOSD (1|45, 3); complexity and latency comparison]
Low-Latency OSD
◼ Concatenated perspective
Over 𝔽_2, Λ(𝐇_BCH) contains the binary blocks 𝐏^1, …, 𝐏^{𝑚−1} on the Θ columns (with 𝐎 blocks on Θ^c); stacking them gives
𝐏^∗ = [𝐏^1; ⋮; 𝐏^{𝑚−1}]: parity-check matrix of the punctured code 𝒞^{Θ^c}_BCH[𝑘′, 𝑘]
If 𝒖^(𝜔) ⋅ 𝐏^{∗T} = 𝟎, let 𝒄̂^(𝜔) = Λ^{−1}(𝒖^(𝜔) ⋅ [𝐈_{𝑘′} | 𝐏^0])
⟹ 𝒄̂^(𝜔) ⋅ 𝐇^T_BCH = 𝟎 ⟺ 𝒄̂^(𝜔) ∈ 𝒞_BCH
Two-stage view: 𝒖^(𝜔) ∈ 𝔽_2^{𝑘′} → parity-checker of the punctured BCH code 𝒞^{Θ^c}_BCH[𝑘′, 𝑘] (𝒖^(𝜔) ⋅ 𝐏^{∗T} = 𝟎) → 𝒖^(𝜔) ∈ 𝒞^{Θ^c}_BCH → systematic encoder of the RS code 𝒞_RS[𝑛, 𝑘′] with information set Θ → 𝒄̂^(𝜔) = 𝒖^(𝜔) ⋅ 𝐆_RS ∈ 𝒞_BCH
Low-Latency OSD
◼ Concatenated perspective
Equivalently, with a binary systematic encoder: 𝒖^(𝜔) ∈ 𝔽_2^{𝑘′} → parity-checker of the punctured BCH code 𝒞^{Θ^c}_BCH[𝑘′, 𝑘] (𝒖^(𝜔) ⋅ 𝐏^{∗T} = 𝟎) → 𝒖^(𝜔) ∈ 𝒞^{Θ^c}_BCH → systematic encoder of the binary code 𝒞^∗[𝑛, 𝑘′] with information set Θ → 𝒄̂^(𝜔) = 𝒖^(𝜔) ⋅ Λ^{−1}([𝐈_{𝑘′} | 𝐏^0])
Example 6
Expanding 𝐇_RS over 𝔽_{2^𝑚} ≅ 𝔽_2^𝑚 and permuting rows yields 𝐇_BCH [matrices omitted]
(1) 𝒖^(𝜔) = (1, 0, 1, 1, 0): 𝒖^(𝜔) ⋅ 𝐏^{∗T} = (1, 1, 0, 1) ≠ 𝟎, so the TEP is invalid
(2) 𝒖^(𝜔) = (1, 0, 1, 0, 0): 𝒖^(𝜔) ⋅ 𝐏^{∗T} = 𝟎, with 𝑐̂^(𝜔)_5 = 𝒖^(𝜔) ⋅ (1, 1, 1, 0, 0)^T = 0 and 𝑐̂^(𝜔)_6 = 𝒖^(𝜔) ⋅ (0, 1, 1, 1, 0)^T = 1 ⟹ 𝒄̂^(𝜔) = (1, 0, 1, 1, 0, 0, 1)
Low-Latency OSD
◼ Concatenated perspective
For LLOSD (𝜏):
𝑁_TEPs: number of TEPs, 𝑁_TEPs = ∑_{𝑖=0}^{𝜏} C(𝑘′, 𝑖)
𝑁_BCH: number of BCH codeword candidates
Theorem 3
If the channel condition is sufficiently good (SNR ⟶ ∞), then 𝑁_BCH = ∑_{𝑖=0}^{𝜏} 𝐴_𝑖, where 𝐴_𝑖 is the number of weight-𝑖 codewords in 𝒞^{Θ^c}_BCH[𝑘′, 𝑘]
• The set of punctured positions Θ^c varies for each decoding event, and {𝐴_𝑖} varies accordingly
In general, ∑_{𝑖=0}^{𝜏} 𝐴_𝑖 ≪ ∑_{𝑖=0}^{𝜏} C(𝑘′, 𝑖)
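Theorem 3 can be illustrated on the toy (7, 4) code: puncture the two positions Θ^c = {3, 5} (an assumed puncturing set), compute the weight distribution {A_i} of the resulting [5, 4] code, and compare ∑ A_i with the binomial TEP count for a given order:

```python
from itertools import product
from math import comb

# (7,4) BCH codewords: all GF(2) multiples of g(x) = 1 + x + x^3
g = (1, 1, 0, 1)
bch = set()
for f in product((0, 1), repeat=4):
    c = [0] * 7
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                c[i + j] ^= gj
    bch.add(tuple(c))

theta_c = (3, 5)                       # punctured positions (illustrative)
punctured = [tuple(b for j, b in enumerate(c) if j not in theta_c)
             for c in bch]
A = [sum(1 for c in punctured if sum(c) == w) for w in range(6)]

tau = 1
n_bch = sum(A[: tau + 1])              # candidates expected at high SNR
n_teps = sum(comb(5, i) for i in range(tau + 1))
print(A, n_bch, n_teps)                # [1, 1, 6, 6, 1, 1] 2 6
```

Even in this tiny case only 2 of the 6 order-1 TEPs yield BCH codewords; the gap widens sharply for the long codes in the lecture.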
Low-Latency OSD
◼ Concatenated perspective
[Figure: the number of BCH codeword candidates 𝑁_BCH in decoding 𝒞_BCH[63, 45] (𝑁_TEPs = 30914) and 𝒞_BCH[31, 21] (𝑁_TEPs = 379), against the theoretical line]
Theoretical line: randomly puncture 𝑛 − 𝑘′ positions of 𝒞_BCH[𝑛, 𝑘], obtaining 𝒞^{Θ^c}_BCH[𝑘′, 𝑘] and its weight distribution {𝐴_𝑖}
Low-Latency OSD
◼ The average number of BCH codeword candidates in LLOSD (3) of the (63, 45) BCH code
Remarks
• The probability of finding a BCH codeword decreases as the TEP weight increases
• As the SNR increases, the likelihood of generating a BCH codeword using a weight-0 TEP increases, while the likelihood decreases for TEPs of larger weight
Hybrid Soft Decoding
◼ LLOSD vs. Chase-BM / GS
It remains challenging to decode longer BCH codes with LLOSD alone; when its TEPs fail to yield a valid codeword, Chase-GS decoding takes over
[Figure: comparison of LLOSD and Chase-BM/GS, with advantages annotated]
Hybrid Soft Decoding
◼ Integrating LLOSD and Chase-GS decoding
LLOSD: LLRs → TEPs → re-encoding with 𝐆_RS → if the ML criterion is met, output the optimal codeword
Otherwise, the TEPs are passed to Chase-GS decoding: test-vector formulation → re-encoding transform → interpolation (seeded by the Lagrange polynomials, with skipping) → root-finding
Hybrid Soft Decoding
◼ Key steps of HSD
Test-vector formulation
Ψ = {𝑗_{𝑛−𝜂}, 𝑗_{𝑛−𝜂+1}, …, 𝑗_{𝑛−1}}: the 𝜂 least reliable positions (LRPs), at the tail of the reliability order 𝑗_0, 𝑗_1, …, 𝑗_{𝑛−1} (Θ (MRPs), then Θ^c)
The 2^𝜂 test-vectors can be formulated as
Λ(𝒓_𝜔) = (𝑟_{𝑗_0}, 𝑟_{𝑗_1}, …, 𝑟_{𝑗_{𝑛−𝜂−1}}, 𝑟^(𝜔)_{𝑗_{𝑛−𝜂}}, …, 𝑟^(𝜔)_{𝑗_{𝑛−1}}), 0 ≤ 𝜔 ≤ 2^𝜂 − 1,
where the last 𝜂 entries run over the binary patterns 00…000, 00…001, …, 11…111
𝑑_H(𝒗̂, 𝒓_𝜔): Hamming distance between 𝒗̂ and test-vector 𝒓_𝜔
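The formulation step can be sketched directly: flip the hard decisions on the 𝜂 LRPs in all 2^𝜂 patterns. The LLR values are reused from Example 3 purely for illustration:

```python
from itertools import product

llrs = [-2.447, 5.115, -4.771, -1.349, -7.096, 0.443, -3.485]
eta = 2
hard = [1 if L < 0 else 0 for L in llrs]
lrps = sorted(range(len(llrs)), key=lambda j: abs(llrs[j]))[:eta]  # least reliable

test_vectors = []
for pattern in product((0, 1), repeat=eta):      # 2^eta flip patterns
    r = hard[:]
    for pos, flip in zip(lrps, pattern):
        r[pos] ^= flip
    test_vectors.append(tuple(r))

print(lrps, len(test_vectors))    # [5, 3] 4
```

Every test vector agrees with the hard decision outside the LRPs, so the Chase stage explores exactly the 2^𝜂 most likely error hypotheses on the unreliable tail.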
Hybrid Soft Decoding
◼ Key steps of HSD
Re-encoding transform
Initial RS codeword (from LLOSD): 𝒗̂^(0) = 𝒖 ⋅ 𝐆_RS ∈ 𝔽_{2^𝑚}^𝑛
Transformed test vector: 𝒛_𝜔 = 𝒓_𝜔 − 𝒗̂^(0), so that Λ(𝒛_𝜔) = (0, 0, …, 0, 𝑧^(𝜔)_{𝑗_{𝑘′}}, …, 𝑧^(𝜔)_{𝑗_{𝑛−1}})
Hybrid Soft Decoding
◼ Key steps of HSD
Interpolation: module basis construction (based on the Lagrange polynomials)
Seed polynomials:
𝒢(𝑥) = ∏_{𝑗∈Θ^c}(𝑥 − 𝛼_𝑗)  &  𝑅_𝜔(𝑥) = ∑_{𝑗∈Θ^c} 𝑧^(𝜔)_𝑗 𝑇′_𝑗(𝑥), where 𝑇′_𝑗(𝑥) = ∏_{𝑗′∈Θ^c, 𝑗′≠𝑗}(𝑥 − 𝛼_{𝑗′}) / ∏_{𝑗′=0, 𝑗′≠𝑗}^{𝑛−1}(𝛼_𝑗 − 𝛼_{𝑗′})
Codeword candidate of LLOSD: 𝒗̂^(𝜔) = 𝒆′^(𝜔) ⋅ 𝐆_RS + 𝒗̂^(0)
Codeword candidate of Chase-GS: 𝒗̂_𝜔 = (𝑢̂_𝜔(𝛼_0), 𝑢̂_𝜔(𝛼_1), …, 𝑢̂_𝜔(𝛼_{𝑛−1})) + 𝒗̂^(0) = (𝑢̂_𝜔(𝛼_{𝑗_0}), 𝑢̂_𝜔(𝛼_{𝑗_1}), …, 𝑢̂_𝜔(𝛼_{𝑗_{𝑘′−1}})) ⋅ 𝐆_RS + 𝒗̂^(0) = 𝒆̂_𝜔 ⋅ 𝐆_RS + 𝒗̂^(0)
Hybrid Soft Decoding
◼ Key steps of HSD
Partial root-finding (based on 𝐆_RS)
The estimated TEP 𝒆̂_𝜔 = (𝑢̂_𝜔(𝛼_{𝑗_0}), 𝑢̂_𝜔(𝛼_{𝑗_1}), …, 𝑢̂_𝜔(𝛼_{𝑗_{𝑘′−1}})) can be determined as
𝑢̂_𝜔(𝛼_𝑗) = 1 if 𝑄^∗_𝜔(𝛼_𝑗) = 0, and 0 otherwise, ∀𝑗 ∈ Θ
The estimated codeword can then be generated as 𝒗̂_𝜔 = 𝒆̂_𝜔 ⋅ 𝐆_RS (provided by Chase-GS) + 𝒗̂^(0) (provided by LLOSD)
Hybrid Soft Decoding
◼ Performance of the (63, 39) BCH code
[Figure: FER curves]
Hybrid Soft Decoding
◼ Performance of the (255, 223) BCH code
[Figure: FER curves; LLOSD (2) complexity: 1.27 × 10^4 BOPs, OSD (1) complexity: 2.71 × 10^5 BOPs]