Low-Latency Ordered Statistics Decoding of BCH Codes

The document discusses low-latency ordered statistics decoding (OSD) for BCH codes, emphasizing its application in ultra-reliable low-latency communications (URLLC). It covers the encoding and decoding processes of BCH codes, the advantages of OSD, and introduces a low-latency variant that utilizes RS codes to improve performance. The document also addresses challenges in OSD and proposes solutions to enhance decoding efficiency.


Low-Latency Ordered Statistics Decoding of BCH Codes (BCH码的低时延分阶统计译码)

◼ Prof. Li Chen (陈立)

◼ School of Electronics and Information Technology, Sun Yat-sen University (中山大学)
Outline

◼ Background

◼ BCH Codes

◼ Ordered Statistics Decoding

◼ Low-Latency OSD

◼ Hybrid Soft Decoding

Background
◼ Future scenarios characterized by URLLC

Unmanned driving, Wise Information Technology of Med (WITMED), factory automation, and extended reality (XR)

⟹ Short-to-medium length codes are needed
Background
◼ Capacity-approaching codes

Turbo codes, LDPC codes, Polar codes

$n$ is large → randomness & iterative decoding, channel polarization

◼ Good short-to-medium length codes

BCH codes (OSD), TBCC (Circular Viterbi), Polar codes (SCL), PAC codes (Fano)

$n$ is short-to-medium → near-ML decoding, list decoding
Background
◼ Near-ML decoding performance: rate $R = 1/2$ & length $n = 128$ [1]

[Figure: decoding performance comparison of rate-1/2, length-128 codes, with a 0.1 dB gap annotated.]

[1] M. Shirvanimoghaddam et al., "Short block-length codes for ultra-reliable low latency communications," IEEE Commun. Mag., vol. 57, no. 2, pp. 130–137, Feb. 2019.
BCH Codes
◼ Encoding of BCH codes

Primitive element of $\mathbb{F}_{2^m}$: $\sigma$

An $(n, k)$ binary primitive BCH code $\mathcal{C}_{\mathrm{BCH}}[n, k]$ has codeword length $n = 2^m - 1$ and designed distance $d = 2t + 1$.

Generator polynomial: $g(x) = g_0 + g_1 x + \cdots + g_{n-k} x^{n-k} \in \mathbb{F}_2[x]$ is the nonzero polynomial of minimal degree ($\deg g(x) = n - k$) such that
$$g(\sigma) = g(\sigma^2) = \cdots = g(\sigma^{2t}) = 0.$$

Message $\boldsymbol{f} = (f_0, f_1, \ldots, f_{k-1}) \in \mathbb{F}_2^k$, in polynomial form $f(x) = f_0 + f_1 x + \cdots + f_{k-1} x^{k-1} \in \mathbb{F}_2[x]$.

Codeword $\boldsymbol{c} = (c_0, c_1, \ldots, c_{n-1}) \in \mathbb{F}_2^n$, in polynomial form $c(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1} = f(x)\,g(x) \in \mathbb{F}_2[x]$.

Parity-check condition: $c(\sigma) = c(\sigma^2) = \cdots = c(\sigma^{2t}) = 0$
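As a minimal illustration (not from the slides), the following Python sketch encodes the (7, 4) BCH code, for which $t = 1$ and $g(x) = 1 + x + x^3$ is the minimal polynomial of $\sigma$ over $\mathbb{F}_2$:

```python
# Sketch: encoding c(x) = f(x) g(x) for the (7, 4) binary BCH code.
# Polynomials over F_2 are bit lists; index = exponent.

def poly_mul_gf2(a, b):
    """Multiply two polynomials with coefficients in F_2."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                res[i + j] ^= bj
    return res

g = [1, 1, 0, 1]            # g(x) = 1 + x + x^3
f = [1, 0, 1, 1]            # message f(x) = 1 + x^2 + x^3
c = poly_mul_gf2(f, g)      # coefficients c_0 ... c_6 of the codeword
print(c)                    # [1, 1, 1, 1, 1, 1, 1]
```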


BCH Codes
◼ Encoding of BCH codes

Generator matrix:
$$\mathbf{G} = \begin{pmatrix} g_0 & g_1 & \cdots & g_{n-k} & 0 & \cdots & 0 \\ 0 & g_0 & \cdots & \cdots & g_{n-k} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & & \ddots & 0 \\ 0 & \cdots & 0 & g_0 & g_1 & \cdots & g_{n-k} \end{pmatrix} \in \mathbb{F}_2^{k \times n}$$

Encoding process: $\boldsymbol{c} = \boldsymbol{f} \cdot \mathbf{G}$

Parity-check matrix:
$$\mathbf{H} = \begin{pmatrix} (\sigma)^0 & (\sigma)^1 & \cdots & (\sigma)^{n-1} \\ (\sigma^2)^0 & (\sigma^2)^1 & \cdots & (\sigma^2)^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ (\sigma^{2t})^0 & (\sigma^{2t})^1 & \cdots & (\sigma^{2t})^{n-1} \end{pmatrix} \in \mathbb{F}_{2^m}^{2t \times n}$$

This is also the parity-check matrix of the $(n, n - 2t)$ RS code.

Parity-check equation: $c(\sigma) = c(\sigma^2) = \cdots = c(\sigma^{2t}) = 0 \;\Leftrightarrow\; \boldsymbol{c} \cdot \mathbf{H}^{\mathrm{T}} = \boldsymbol{0}$
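A sketch of the same encoding in matrix form, building the $k \times n$ generator matrix from the shifts of $g(x)$ (continuing the previous snippet's (7, 4) example):

```python
# Sketch: row i of G is g(x) shifted by i positions; encoding is c = f . G.
n, k = 7, 4
g = [1, 1, 0, 1]                       # g(x) = 1 + x + x^3
G = [[0] * n for _ in range(k)]
for i in range(k):
    for j, gj in enumerate(g):
        G[i][i + j] = gj

f = [1, 0, 1, 1]
c = [0] * n
for i in range(k):
    if f[i]:
        c = [cj ^ gij for cj, gij in zip(c, G[i])]   # add row i over F_2
print(c)                               # same codeword as poly_mul_gf2(f, g)
```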
BCH Codes
◼ Decoding of BCH codes

Hard-decision decoding (utilizing the algebraic structure):
• Berlekamp-Massey (BM)
• Euclidean
• Guruswami-Sudan (GS)

⟹ Efficient and hardware friendly, but limited performance

Soft-decision decoding (utilizing soft information):
• Ordered statistics decoding (OSD)
• Algebraic Chase decoding (Chase-BM, Chase-GS)

⟹ High performance (near ML), but high complexity & latency
Soft-Decision Decoding

Sorting the received symbols by reliability (LLR magnitude) splits them into two groups, each exploited differently:
• Unreliable information → decoding, exploiting the algebraic structure via an equation system (syndrome-based or interpolation-based)
• Reliable information → re-encoding, exploiting an information set via a systematic generator matrix

Low-latency OSD: use $\mathbf{G}_{\mathrm{RS}}$ to replace $\mathbf{G}_{\mathrm{BCH}}$

Hybrid soft decoding: use algebraic Chase decoding to identify test error patterns (TEPs)
Ordered Statistics Decoding
◼ Overview of OSD

LLRs → GE → systematic generator matrix $\mathbf{G}_{\mathrm{BCH}}$ & MRIPs → test messages (via TEPs) → encoding → codeword candidates → optimal codeword

• GE: Gaussian elimination
• LLRs: log-likelihood ratios
• MRIPs: most reliable independent positions
• TEPs: test error patterns
Ordered Statistics Decoding
◼ Construct the systematic generator matrix

Channel outputs:
• Received LLR sequence: $\boldsymbol{L} = (L_0, L_1, \ldots, L_{n-1}) \in \mathbb{R}^n$
• Hard-decision received word: $\boldsymbol{r} = (r_0, r_1, \ldots, r_{n-1}) \in \mathbb{F}_2^n$

Sort based on reliability and determine the most reliable independent positions (MRIPs):
$$\boldsymbol{r}' = \Lambda(\boldsymbol{r}) = (r_{j_0}, r_{j_1}, \ldots, r_{j_{n-1}}), \qquad |L_{j_0}| \ge |L_{j_1}| \ge \cdots \ge |L_{j_{n-1}}|$$

Construct the systematic generator matrix: $\mathbf{G} \to \mathbf{G}' = \Lambda(\mathbf{G}) \to$ (Gaussian elimination) $\to \mathbf{G}_{\mathrm{BCH}}$
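A minimal sketch of this preprocessing step (assumed helper names, not the slides' code): permute the columns of $\mathbf{G}$ by decreasing $|L_j|$, then Gaussian-eliminate over $\mathbb{F}_2$; the pivot columns are the MRIPs and end up carrying an identity submatrix.

```python
import numpy as np

def osd_preprocess(G, llr):
    """Reliability-sort the columns of G, then reduce over F_2."""
    G = np.asarray(G, dtype=np.uint8)
    k, n = G.shape
    order = np.argsort(-np.abs(np.asarray(llr, dtype=float)))  # Lambda
    Gp = G[:, order].copy()                                    # G' = Lambda(G)
    row, mrips = 0, []
    for col in range(n):
        if row == k:
            break
        piv = np.nonzero(Gp[row:, col])[0]
        if piv.size == 0:
            continue                       # column depends on earlier MRIPs
        r = row + piv[0]
        Gp[[row, r]] = Gp[[r, row]]        # bring the pivot row up
        for rr in range(k):
            if rr != row and Gp[rr, col]:
                Gp[rr] ^= Gp[row]          # clear the rest of the column
        mrips.append(int(order[col]))      # MRIP in original coordinates
        row += 1
    return Gp, mrips
```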
Ordered Statistics Decoding
◼ Generate BCH candidate codewords

Re-encoding process:
• Initial message: $\boldsymbol{f}^{(0)} = (r_{j_0}, r_{j_1}, \ldots, r_{j_{k-1}}) \in \mathbb{F}_2^k$
• Test error pattern: $\boldsymbol{e}^{(\omega)} = (e_{j_0}^{(\omega)}, e_{j_1}^{(\omega)}, \ldots, e_{j_{k-1}}^{(\omega)}) \in \mathbb{F}_2^k$, for $\omega = 0, 1, \ldots, N_{\mathrm{TEPs}} - 1$
• Test message: $\boldsymbol{f}^{(\omega)} = (f_{j_0}^{(\omega)}, f_{j_1}^{(\omega)}, \ldots, f_{j_{k-1}}^{(\omega)}) = \boldsymbol{f}^{(0)} + \boldsymbol{e}^{(\omega)} \in \mathbb{F}_2^k$
• BCH codeword candidate: $\hat{\boldsymbol{c}}^{(\omega)} = (\hat{c}_0^{(\omega)}, \hat{c}_1^{(\omega)}, \ldots, \hat{c}_{n-1}^{(\omega)}) = \Lambda^{-1}(\boldsymbol{f}^{(\omega)} \cdot \mathbf{G}_{\mathrm{BCH}}) \in \mathbb{F}_2^n$ — repetitive processing (parallel)

Number of TEPs (= number of BCH codeword candidates) for OSD with order $\tau$, where each TEP satisfies $d_{\mathrm{H}}(\boldsymbol{e}^{(\omega)}, \boldsymbol{0}) \le \tau$:
$$N_{\mathrm{TEPs}} = \binom{k}{0} + \binom{k}{1} + \cdots + \binom{k}{\tau}$$
e.g., $\binom{k}{1}$ is the number of TEPs $\boldsymbol{e}^{(\omega)}$ with $d_{\mathrm{H}}(\boldsymbol{e}^{(\omega)}, \boldsymbol{0}) = 1$.
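A sketch of this re-encoding loop (assumed helper; Gs is a systematic generator matrix as a NumPy array, e.g. from the preprocessing sketch above):

```python
from itertools import combinations
import numpy as np

def osd_candidates(Gs, f0, tau):
    """Enumerate all TEPs of Hamming weight <= tau and re-encode each
    test message f = f0 + e with the systematic generator matrix Gs."""
    k = len(f0)
    f0 = np.asarray(f0, dtype=np.uint8)
    for w in range(tau + 1):
        for pos in combinations(range(k), w):
            e = np.zeros(k, dtype=np.uint8)
            e[list(pos)] = 1                 # TEP of weight w
            f = (f0 + e) % 2                 # test message
            yield (f @ Gs) % 2               # codeword candidate
```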
Ordered Statistics Decoding
◼ Challenges of OSD

Exponential complexity — the $\tau$-OSD complexity is $O(k^\tau)$:
• Reduction of TEPs: skipping rule; stopping rule
• Trade-off with performance: multiple bases; validation rule
• Trade-off with storage: box-and-match algorithm

Decoding latency bottleneck — GE:
• Offline-computed systematic generator matrices (only an auxiliary method)
• Alternative solution? Utilize the algebra of the code
Low-Latency OSD
◼ BCH codes and RS codes

Subfield subcode: Given two linear codes $\mathcal{C} \subset \mathbb{F}_2^n$ and $\mathcal{C}' \subset \mathbb{F}_{2^m}^n$, if $\mathcal{C} = \mathcal{C}' \cap \mathbb{F}_2^n$, then $\mathcal{C}$ is called the subfield subcode of $\mathcal{C}'$ over $\mathbb{F}_2$.

Lemma 1: An $(n, k)$ $t$-error-correcting BCH code defined over $\mathbb{F}_2$ is a subfield subcode of an $(n, k')$ $t$-error-correcting RS code defined over $\mathbb{F}_{2^m}$, i.e., $\mathcal{C}_{\mathrm{BCH}}[n, k, 2t+1] = \mathcal{C}_{\mathrm{RS}}[n, k', 2t+1] \cap \mathbb{F}_2^n$.

• BCH code: $d = 2t + 1 < n - k + 1$
• RS code: $d = 2t + 1 = n - k' + 1$ (MDS)

⟹ Dimension: $k < k'$, so $\mathcal{C}_{\mathrm{BCH}} \subset \mathcal{C}_{\mathrm{RS}}$

Example 1: The (7, 4) BCH code is the binary subcode of the (7, 5) RS code.
Low-Latency OSD
◼ Basic idea of low-latency OSD (LLOSD)

OSD: test message → $\mathbf{G}_{\mathrm{BCH}}$, obtained by GE (sequential) → candidates

LLOSD: test message → $\mathbf{G}_{\mathrm{RS}}$, obtained by Lagrange interpolation (parallel) → candidates
Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

Encoding of an $(n, k')$ RS code:
• Message $\boldsymbol{u} = (u_0, u_1, \ldots, u_{k'-1}) \in \mathbb{F}_{2^m}^{k'}$, in polynomial form $u(x) = u_0 + u_1 x + \cdots + u_{k'-1} x^{k'-1} \in \mathbb{F}_{2^m}[x]$
• Codeword $\boldsymbol{v} = (v_0, v_1, \ldots, v_{n-1}) = (u(\alpha_0), u(\alpha_1), \ldots, u(\alpha_{n-1})) \in \mathbb{F}_{2^m}^n$, where $\alpha_0, \alpha_1, \ldots, \alpha_{n-1} \in \mathbb{F}_{2^m} \setminus \{0\}$ are the code locators

◼ Example 2: For a (7, 5) RS code over $\mathbb{F}_8$
• Message $\boldsymbol{u} = (2, 3, 5, 0, 1)$, in polynomial form $u(x) = 2 + 3x + 5x^2 + x^4$
• Codeword $\boldsymbol{v} = (u(1), u(2), u(4), u(3), u(6), u(7), u(5)) = (5, 0, 4, 7, 6, 1, 3)$
• $\mathbb{F}_8$ is an extension field of $\mathbb{F}_2$, defined by $p(x) = x^3 + x + 1$; in $\mathbb{F}_8$, $\{0, \sigma^0, \sigma^1, \sigma^2, \sigma^3, \sigma^4, \sigma^5, \sigma^6\} = \{0, 1, 2, 4, 3, 6, 7, 5\}$
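Example 2 can be reproduced with a few lines of Python (a sketch, representing $\mathbb{F}_8$ elements as 3-bit integers under $p(x) = x^3 + x + 1$):

```python
def gf8_mul(a, b):
    """Carry-less multiplication modulo x^3 + x + 1 (0b1011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def rs_encode(u, locators):
    """Evaluate u(x) = sum_i u_i x^i at every code locator."""
    cw = []
    for alpha in locators:
        s, p = 0, 1
        for ui in u:
            s ^= gf8_mul(ui, p)     # add u_i * alpha^i
            p = gf8_mul(p, alpha)
        cw.append(s)
    return cw

u = [2, 3, 5, 0, 1]                 # u(x) = 2 + 3x + 5x^2 + x^4
locs = [1, 2, 4, 3, 6, 7, 5]        # sigma^0, ..., sigma^6
print(rs_encode(u, locs))           # [5, 0, 4, 7, 6, 1, 3]
```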
Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

Determine the MRIPs:
• Permuted received word: $\boldsymbol{r}' = \Lambda(\boldsymbol{r}) = (r_{j_0}, r_{j_1}, \ldots, r_{j_{n-1}})$, with $|L_{j_0}| \ge |L_{j_1}| \ge \cdots \ge |L_{j_{n-1}}|$
• Most reliable positions (MRPs): $\Theta = \{j_0, j_1, \ldots, j_{k'-1}\}$; the remaining positions form $\Theta^{\mathrm{c}} = \{j_{k'}, j_{k'+1}, \ldots, j_{n-1}\}$

MDS property ⟹ any $k'$ positions are linearly independent ⟹ for RS codes, the MRPs are the MRIPs.
Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

Message $\boldsymbol{u} = (r_{j_0}, r_{j_1}, \ldots, r_{j_{k'-1}}) \in \mathbb{F}_{2^m}^{k'}$, defined on $\Theta = \{j_0, j_1, \ldots, j_{k'-1}\}$.

Construct the Lagrange interpolation polynomials
$$T_j(x) = \prod_{j' \in \Theta,\, j' \ne j} \frac{x - \alpha_{j'}}{\alpha_j - \alpha_{j'}},$$
where $T_j(\alpha_j) = 1$, and $T_j(\alpha_{j'}) = 0$ if $j' \in \Theta$ and $j' \ne j$.

Form the systematic message polynomial of $\boldsymbol{u}$:
$$\mathcal{H}_{\boldsymbol{u}}(x) = \sum_{j \in \Theta} r_j\, T_j(x)$$

$\mathcal{H}_{\boldsymbol{u}}(\alpha_j)$ generates the systematic message symbols $r_j$ if $j \in \Theta$, and the parity-check symbols if $j \in \Theta^{\mathrm{c}}$.
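A sketch of evaluating a Lagrange basis polynomial at a point (building on gf8_mul above; gf8_inv and lagrange_eval are assumed helper names):

```python
def gf8_inv(a):
    """Brute-force multiplicative inverse in GF(8)."""
    return next(b for b in range(1, 8) if gf8_mul(a, b) == 1)

def lagrange_eval(i, beta, theta, locs):
    """T_i(beta) = prod over j' in Theta, j' != i, of
    (beta - alpha_j') / (alpha_i - alpha_j');
    in characteristic 2, subtraction is XOR."""
    num, den = 1, 1
    for jp in theta:
        if jp != i:
            num = gf8_mul(num, beta ^ locs[jp])
            den = gf8_mul(den, locs[i] ^ locs[jp])
    return gf8_mul(num, gf8_inv(den))
```

$\mathcal{H}_{\boldsymbol{u}}(\beta)$ is then the XOR of gf8_mul(r_j, lagrange_eval(j, beta, theta, locs)) over $j \in \Theta$.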
Low-Latency OSD
◼ Example 3

Given a (7, 4) BCH code and the LLR sequence $\boldsymbol{L} = (-2.447, 5.115, -4.771, -1.349, -7.096, 0.443, -3.485)$:

• Mother code: (7, 5) RS code
• Hard-decision received word: $\boldsymbol{r} = (1, 0, 1, 1, 1, 0, 1)$
• MRPs: $\Theta = \{4, 1, 2, 6, 0\}$
• Lagrange interpolation polynomials:
$$T_4(x) = \frac{x - \alpha_1}{\alpha_4 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_4 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_4 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_4 - \alpha_0}, \qquad T_1(x) = \frac{x - \alpha_4}{\alpha_1 - \alpha_4} \cdot \frac{x - \alpha_2}{\alpha_1 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_1 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_1 - \alpha_0}$$
$$T_2(x) = \frac{x - \alpha_4}{\alpha_2 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_2 - \alpha_1} \cdot \frac{x - \alpha_6}{\alpha_2 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_2 - \alpha_0}, \qquad T_6(x) = \frac{x - \alpha_4}{\alpha_6 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_6 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_6 - \alpha_2} \cdot \frac{x - \alpha_0}{\alpha_6 - \alpha_0}$$
$$T_0(x) = \frac{x - \alpha_4}{\alpha_0 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_0 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_0 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_0 - \alpha_6}$$
Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

Encode the unit messages $\boldsymbol{u}_{j_0} = (1, 0, 0, \ldots, 0)$, $\boldsymbol{u}_{j_1} = (0, 1, 0, \ldots, 0)$, ..., $\boldsymbol{u}_{j_{k'-1}} = (0, 0, 0, \ldots, 1)$; their codewords form the rows of $\mathbf{G}_{\mathrm{RS}}$:

$$\Lambda(\mathbf{G}_{\mathrm{RS}}) = \begin{pmatrix} 1 & 0 & \cdots & 0 & \mathcal{H}_{\boldsymbol{u}_{j_0}}(\alpha_{j_{k'}}) & \cdots & \mathcal{H}_{\boldsymbol{u}_{j_0}}(\alpha_{j_{n-1}}) \\ 0 & 1 & \cdots & 0 & \mathcal{H}_{\boldsymbol{u}_{j_1}}(\alpha_{j_{k'}}) & \cdots & \mathcal{H}_{\boldsymbol{u}_{j_1}}(\alpha_{j_{n-1}}) \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & \mathcal{H}_{\boldsymbol{u}_{j_{k'-1}}}(\alpha_{j_{k'}}) & \cdots & \mathcal{H}_{\boldsymbol{u}_{j_{k'-1}}}(\alpha_{j_{n-1}}) \end{pmatrix}$$

The $k' \times k'$ identity submatrix occupies the columns indexed by $\Theta$ (the MRPs, with locators $\alpha_{j_0}, \ldots, \alpha_{j_{k'-1}}$); the remaining columns correspond to $\Theta^{\mathrm{c}}$ (locators $\alpha_{j_{k'}}, \ldots, \alpha_{j_{n-1}}$).
Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

The row-$i$ column-$j$ entry of $\mathbf{G}_{\mathrm{RS}}$, for $j \in \Theta^{\mathrm{c}}$ (with $|\Theta| = k'$ and $|\Theta^{\mathrm{c}}| = n - k'$):
$$\mathcal{H}_{\boldsymbol{u}_i}(\alpha_j) = \frac{\prod_{j' \in \Theta} (\alpha_j - \alpha_{j'})}{(\alpha_j - \alpha_i) \prod_{j' \in \Theta,\, j' \ne i} (\alpha_i - \alpha_{j'})}$$
or equivalently, using $\prod_{j=0}^{n-1} \alpha_j = 1$,
$$\mathcal{H}_{\boldsymbol{u}_i}(\alpha_j) = \frac{\alpha_i \prod_{j' \in \Theta^{\mathrm{c}},\, j' \ne j} (\alpha_i - \alpha_{j'})}{\alpha_j \prod_{j' \in \Theta^{\mathrm{c}},\, j' \ne j} (\alpha_j - \alpha_{j'})}$$

This reflects the relationship between the systematic generator matrix and the systematic parity-check matrix.

Complexity of the generation of $\mathbf{G}_{\mathrm{RS}}$: $2n \cdot \min\{n - k', k'\}$


Low-Latency OSD
◼ Generation of $\mathbf{G}_{\mathrm{RS}}$

Example 4 (continuing from Example 3, with MRPs $\Theta = \{4, 1, 2, 6, 0\}$):

$$\boldsymbol{u}_4 = (1, 0, 0, 0, 0):\; \mathcal{H}_{\boldsymbol{u}_4}(x) = \frac{x - \alpha_1}{\alpha_4 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_4 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_4 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_4 - \alpha_0}$$
$$\boldsymbol{u}_1 = (0, 1, 0, 0, 0):\; \mathcal{H}_{\boldsymbol{u}_1}(x) = \frac{x - \alpha_4}{\alpha_1 - \alpha_4} \cdot \frac{x - \alpha_2}{\alpha_1 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_1 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_1 - \alpha_0}$$
$$\boldsymbol{u}_2 = (0, 0, 1, 0, 0):\; \mathcal{H}_{\boldsymbol{u}_2}(x) = \frac{x - \alpha_4}{\alpha_2 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_2 - \alpha_1} \cdot \frac{x - \alpha_6}{\alpha_2 - \alpha_6} \cdot \frac{x - \alpha_0}{\alpha_2 - \alpha_0}$$
$$\boldsymbol{u}_6 = (0, 0, 0, 1, 0):\; \mathcal{H}_{\boldsymbol{u}_6}(x) = \frac{x - \alpha_4}{\alpha_6 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_6 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_6 - \alpha_2} \cdot \frac{x - \alpha_0}{\alpha_6 - \alpha_0}$$
$$\boldsymbol{u}_0 = (0, 0, 0, 0, 1):\; \mathcal{H}_{\boldsymbol{u}_0}(x) = \frac{x - \alpha_4}{\alpha_0 - \alpha_4} \cdot \frac{x - \alpha_1}{\alpha_0 - \alpha_1} \cdot \frac{x - \alpha_2}{\alpha_0 - \alpha_2} \cdot \frac{x - \alpha_6}{\alpha_0 - \alpha_6}$$

With the locators $(\alpha_0, \alpha_1, \ldots, \alpha_6) = (1, 2, 4, 3, 6, 7, 5)$:
$$\mathbf{G}_{\mathrm{RS}} = \begin{pmatrix} 0 & 0 & 0 & 5 & 1 & 3 & 0 \\ 0 & 1 & 0 & 4 & 0 & 2 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 4 & 0 & 3 & 1 \\ 1 & 0 & 0 & 5 & 0 & 2 & 0 \end{pmatrix}$$

Both entry formulas agree, e.g.
$$\mathcal{H}_{\boldsymbol{u}_4}(\alpha_3) = \frac{\alpha_3 - \alpha_1}{\alpha_4 - \alpha_1} \cdot \frac{\alpha_3 - \alpha_2}{\alpha_4 - \alpha_2} \cdot \frac{\alpha_3 - \alpha_6}{\alpha_4 - \alpha_6} \cdot \frac{\alpha_3 - \alpha_0}{\alpha_4 - \alpha_0} = 5 \quad \text{and} \quad \mathcal{H}_{\boldsymbol{u}_4}(\alpha_3) = \frac{\alpha_4 (\alpha_4 - \alpha_5)}{\alpha_3 (\alpha_3 - \alpha_5)} = 5$$
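A sketch that reproduces this $\mathbf{G}_{\mathrm{RS}}$ row by row, reusing the gf8_mul/lagrange_eval helpers from the earlier snippets:

```python
theta = [4, 1, 2, 6, 0]             # MRPs from Example 3
locs = [1, 2, 4, 3, 6, 7, 5]        # code locators alpha_0, ..., alpha_6

G_RS = []
for i in theta:                     # one row per unit message u_i
    row = [0] * 7
    for j in range(7):
        if j == i:
            row[j] = 1              # identity on the MRPs
        elif j not in theta:
            row[j] = lagrange_eval(i, locs[j], theta, locs)  # parity symbol
    G_RS.append(row)
for row in G_RS:
    print(row)                      # matches the matrix of Example 4
```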
Low-Latency OSD
◼ Generation of BCH codeword candidates

Generate the initial codeword:
• Initial message: $\boldsymbol{u} = (r_{j_0}, r_{j_1}, \ldots, r_{j_{k'-1}}) \in \mathbb{F}_2^{k'}$
• Initial RS codeword: $\hat{\boldsymbol{v}}^{(0)} = (\hat{v}_0^{(0)}, \hat{v}_1^{(0)}, \ldots, \hat{v}_{n-1}^{(0)}) = \boldsymbol{u} \cdot \mathbf{G}_{\mathrm{RS}}$, with $\hat{v}_j^{(0)} = r_j$ for all $j \in \Theta$

Generate any $(n, k')$ systematic RS codeword:
• The $\omega$-th test error pattern: $\boldsymbol{e}'^{(\omega)} = (e_{j_0}'^{(\omega)}, e_{j_1}'^{(\omega)}, \ldots, e_{j_{k'-1}}'^{(\omega)}) \in \mathbb{F}_2^{k'}$
• The $\omega$-th test message: $\boldsymbol{u}^{(\omega)} = \boldsymbol{u} + \boldsymbol{e}'^{(\omega)}$
• The $\omega$-th systematic RS codeword: $\hat{\boldsymbol{v}}^{(\omega)} = (\hat{v}_0^{(\omega)}, \hat{v}_1^{(\omega)}, \ldots, \hat{v}_{n-1}^{(\omega)}) = \boldsymbol{u}^{(\omega)} \cdot \mathbf{G}_{\mathrm{RS}} = \hat{\boldsymbol{v}}^{(0)} + \boldsymbol{e}'^{(\omega)} \cdot \mathbf{G}_{\mathrm{RS}} \in \mathbb{F}_{2^m}^n$
Low-Latency OSD
◼ Generation of BCH codeword candidates

Theorem 2 (identify invalid TEPs): If $\hat{v}_j^{(0)} + \sum_{i \in \Theta,\, e_i'^{(\omega)} \ne 0} \mathcal{H}_{\boldsymbol{u}_i}(\alpha_j) \in \{0, 1\}$ for all $j \in \Theta^{\mathrm{c}}$, then $\hat{\boldsymbol{v}}^{(\omega)}$ is a BCH codeword.

Example 5 (continuing from Example 4):
• Initial message: $\boldsymbol{u} = (1, 0, 1, 1, 1)$
• Initial RS codeword: $\hat{\boldsymbol{v}}^{(0)} = \boldsymbol{u} \cdot \mathbf{G}_{\mathrm{RS}} = (1, 0, 1, 5, 1, 3, 1)$
• Test error pattern 1: $\boldsymbol{e}'^{(1)} = (0, 0, 0, 0, 1)$; systematic RS codeword 1: $\hat{\boldsymbol{v}}^{(1)} = \hat{\boldsymbol{v}}^{(0)} + \boldsymbol{e}'^{(1)} \cdot \mathbf{G}_{\mathrm{RS}} = (0, 0, 1, 0, 1, 1, 1)$ → a BCH codeword candidate
• Test error pattern 3: $\boldsymbol{e}'^{(3)} = (0, 0, 1, 0, 0)$; systematic RS codeword 3: $\hat{\boldsymbol{v}}^{(3)} = \hat{\boldsymbol{v}}^{(0)} + \boldsymbol{e}'^{(3)} \cdot \mathbf{G}_{\mathrm{RS}} = (1, 0, 0, \times, 1, \times, 1)$ → invalid (non-binary symbols on $\Theta^{\mathrm{c}}$)
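A sketch of this re-encoding with the Theorem 2 check, using the Example 4/5 data (G_RS from the previous snippet; TEP components are indexed like the rows of G_RS, i.e., in the order of $\Theta$):

```python
v0 = [1, 0, 1, 5, 1, 3, 1]             # initial RS codeword of Example 5

def reencode(v0, tep, G_RS):
    """Add e' . G_RS to the initial codeword; keep the candidate only if
    every symbol stays binary (Theorem 2), else report it as invalid."""
    v = list(v0)
    for i, e in enumerate(tep):
        if e:
            v = [a ^ b for a, b in zip(v, G_RS[i])]
    return v if all(s in (0, 1) for s in v) else None

print(reencode(v0, (0, 0, 0, 0, 1), G_RS))   # [0, 0, 1, 0, 1, 1, 1] -> valid
print(reencode(v0, (0, 0, 1, 0, 0), G_RS))   # None -> invalid TEP
```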
Low-Latency OSD
◼ Overview of LLOSD

OSD: LLRs → Gaussian elimination (sequential) → codeword candidates (parallel) → the optimal codeword

LLOSD: LLRs → Lagrange polynomials (parallel) → codeword candidates (parallel) → the optimal codeword
Low-Latency OSD
◼ Complexity comparison of LLOSD and OSD

OSD ($\tau$):
• GE: $n \cdot (\min\{n-k, k\})^2$
• Compute $\hat{\boldsymbol{c}}^{(0)}$: $k \cdot (n-k)$
• Compute $\hat{\boldsymbol{c}}^{(\omega)}$: $(n-k) \cdot \sum_{\lambda=1}^{\tau} \lambda \binom{k}{\lambda}$
• Find $\hat{\boldsymbol{c}}_{\mathrm{opt}}$: $n \cdot \sum_{\lambda=0}^{\tau} \binom{k}{\lambda}$

LLOSD ($\tau$):
• Compute $\mathbf{G}_{\mathrm{RS}}$: $2n \cdot \min\{n-k', k'\}$
• Compute $\hat{\boldsymbol{v}}^{(0)}$: $k' \cdot (n-k')$
• Compute $\hat{\boldsymbol{c}}^{(\omega)}$: $\sum_{\lambda=1}^{\tau} \binom{k'}{\lambda} + \sum_{j'=1}^{n-k'} N_{j'}$
• Find $\hat{\boldsymbol{v}}_{\mathrm{opt}}$: $n \cdot N_{n-k'}$

Key differences:
• Operation type: binary → finite field
• Compute $\mathbf{G}_{\mathrm{BCH}}$ / $\mathbf{G}_{\mathrm{RS}}$: $O(n^3) \to O(n^2)$, sequential → parallel
• Candidate list: $\sum_{\lambda=0}^{\tau} \binom{k}{\lambda} \to N_{n-k'} = N_{\mathrm{BCH}}$

$N_{j'}$ is the number of TEPs that yield binary symbols after the $j'$-th judgement as in Theorem 2.
Low-Latency OSD
◼ Performance of the (63, 45) BCH code, AWGN, BPSK

[Figure: FER vs. SNR (1–6 dB) for OSD (1), LLOSD (1), LLOSD (2), LLOSD (3), and ML, with a complexity and latency comparison.]

Simulation environment: Intel Core i7-10710U CPU. Stopping rule: ML criterion [2].

[2] T. Kaneko et al., "An efficient maximum-likelihood-decoding algorithm for linear block codes with algebraic decoder," IEEE Trans. Inf. Theory, vol. 40, no. 2, pp. 320–327, 1994.
Low-Latency OSD
◼ Re-encoding complexity distribution: LLOSD (3) of the (63, 45) BCH code with ML criterion, SNR = 5.0 dB

[Figure: complexity histogram; colors distinguish the phase at which the re-encoding terminates, where phase ($i$) is the re-encoding phase for weight-$i$ TEPs.]

Reasons for the multiple peaks — for the re-encoding phase of TEPs with a larger weight:
• the complexity is greater;
• the ML criterion is more difficult to satisfy;
• the probability of finding a BCH codeword is lower.
Low-Latency OSD
◼ Segmented variation of LLOSD

Analysis:
• The error probability of the LLOSD is upper bounded by
$$P_{\mathrm{e,LLOSD}}(\tau) \le P_{\mathrm{e,ML}} + P_{\mathrm{list}}(\tau)$$
• $P_{\mathrm{e,ML}}$: ML decoding error probability, which depends on the weight distribution of the code
• $P_{\mathrm{list}}(\tau)$: list decoding error probability, $= \Pr\{\text{the channel introduces more than } \tau \text{ errors in the MRPs } \Theta\}$
Low-Latency OSD
◼ Segmented variation of LLOSD

For OSD: if $\tau \ge \min\{d/4 - 1, k\}$, the list error probability of OSD satisfies $P_{\mathrm{list}}(\tau) \ll P_{\mathrm{e,ML}}$, so OSD can produce near-ML decoding performance. Hence we only need the decoding order $\tau$ for the MRIPs (the first $k$ sorted positions).

For LLOSD: the TEPs $\boldsymbol{e}'^{(\omega)} = (e_{j_0}'^{(\omega)}, \ldots, e_{j_{k-1}}'^{(\omega)}, e_{j_k}'^{(\omega)}, \ldots, e_{j_{k'-1}}'^{(\omega)})$ span all $k'$ MRPs; order $\tau_1 = \min\{d/4 - 1, k\}$ is sufficient for the first $k$ positions, with an extra order $\tau_2$ for the remaining $k' - k$ positions.

Number of TEPs:
$$N_{\mathrm{TEPs}}:\; \sum_{i=0}^{\tau} \binom{k'}{i} \;\longrightarrow\; \sum_{i_1=0}^{\tau_1} \binom{k}{i_1} \cdot \sum_{i_2=0}^{\tau_2} \binom{k'-k}{i_2}$$
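A quick counting sketch of the two enumerations (for the (63, 45) BCH code, the mother RS code has $k' = n - 2t = 57$; the $(\tau_1, \tau_2)$ values below are illustrative):

```python
from math import comb

def n_teps_plain(kp, tau):
    """TEP count of plain LLOSD: order tau over all k' MRPs."""
    return sum(comb(kp, i) for i in range(tau + 1))

def n_teps_segmented(k, kp, tau1, tau2):
    """TEP count of the segmented variation: order tau1 on the first k
    positions times order tau2 on the remaining k' - k positions."""
    return (sum(comb(k, i1) for i1 in range(tau1 + 1))
            * sum(comb(kp - k, i2) for i2 in range(tau2 + 1)))

print(n_teps_plain(57, 3))              # 30914 (plain order-3 enumeration)
print(n_teps_segmented(45, 57, 1, 3))   # segmented (tau1, tau2) = (1, 3)
```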
Low-Latency OSD
◼ Performance of the (63, 45) BCH code, AWGN, BPSK

[Figure: FER vs. SNR (1–6 dB) for OSD (1), LLOSD (1), LLOSD (2), LLOSD (3), Seg. LLOSD (1 | 45, 3), and ML, with a complexity and latency comparison.]

Simulation environment: Intel Core i7-10710U CPU. Stopping rule: ML criterion.
Low-Latency OSD
◼ Concatenated perspective

For $\mathcal{C}_{\mathrm{RS}}[n, k']$: systematic parity-check matrix $\Lambda(\mathbf{H}_{\mathrm{RS}}) = (\mathbf{P} \mid \mathbf{I}_{n-k'})$ and systematic generator matrix $\Lambda(\mathbf{G}_{\mathrm{RS}}) = (\mathbf{I}_{k'} \mid \mathbf{P}^{\mathrm{T}})$.

Applying (1) the isomorphism $\mathbb{F}_{2^m} \cong \mathbb{F}_2^m$ and (2) a row permutation:
$$\Lambda(\mathbf{H}_{\mathrm{BCH}}) = \begin{pmatrix} \mathbf{P}^0 & \mathbf{I}_{n-k'} \\ \mathbf{P}^1 & \mathbf{O} \\ \vdots & \vdots \\ \mathbf{P}^{m-1} & \mathbf{O} \end{pmatrix} \quad \text{(over } \mathbb{F}_2\text{)}$$

• $(\mathbf{P}^0 \mid \mathbf{I}_{n-k'})$: systematic parity-check matrix of an $(n, k')$ binary code $\mathcal{C}^*[n, k']$
• $\mathbf{P}^* = \begin{pmatrix} \mathbf{P}^1 \\ \vdots \\ \mathbf{P}^{m-1} \end{pmatrix}$: parity-check matrix of $\mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}[k', k]$, i.e., $\mathcal{C}_{\mathrm{BCH}}[n, k]$ punctured on the positions in $\Theta^{\mathrm{c}}$

If $\boldsymbol{u}^{(\omega)} \cdot \mathbf{P}^{*\mathrm{T}} = \mathbf{0}$, let $\hat{\boldsymbol{c}}^{(\omega)} = \Lambda^{-1}(\boldsymbol{u}^{(\omega)} \cdot (\mathbf{I}_{k'} \mid (\mathbf{P}^0)^{\mathrm{T}}))$; then $\hat{\boldsymbol{c}}^{(\omega)} \cdot \mathbf{H}_{\mathrm{BCH}}^{\mathrm{T}} = \mathbf{0} \Leftrightarrow \hat{\boldsymbol{c}}^{(\omega)} \in \mathcal{C}_{\mathrm{BCH}}$.

Concatenated flow: $\boldsymbol{u}^{(\omega)} \in \mathbb{F}_2^{k'}$ → parity-checker of the punctured BCH code $\mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}[k', k]$ (i.e., $\boldsymbol{u}^{(\omega)} \cdot \mathbf{P}^{*\mathrm{T}} = \mathbf{0}$) → $\boldsymbol{u}^{(\omega)} \in \mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}$ → systematic encoder of the RS code $\mathcal{C}_{\mathrm{RS}}[n, k']$ with information set $\Theta$ → $\hat{\boldsymbol{c}}^{(\omega)} = \boldsymbol{u}^{(\omega)} \cdot \mathbf{G}_{\mathrm{RS}} \in \mathcal{C}_{\mathrm{BCH}}$
Low-Latency OSD
◼ Concatenated perspective

Equivalently, with the binary code $\mathcal{C}^*$: $\boldsymbol{u}^{(\omega)} \in \mathbb{F}_2^{k'}$ → parity-checker of the punctured BCH code $\mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}[k', k]$ ($\boldsymbol{u}^{(\omega)} \cdot \mathbf{P}^{*\mathrm{T}} = \mathbf{0}$) → $\boldsymbol{u}^{(\omega)} \in \mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}$ → systematic encoder of the binary code $\mathcal{C}^*[n, k']$ with information set $\Theta$ → $\hat{\boldsymbol{c}}^{(\omega)} = \Lambda^{-1}(\boldsymbol{u}^{(\omega)} \cdot (\mathbf{I}_{k'} \mid (\mathbf{P}^0)^{\mathrm{T}}))$

Example 6: For the (7, 4) BCH code with the mother (7, 5) RS code,
$$\mathbf{H}_{\mathrm{RS}} = \begin{pmatrix} \sigma^3 & \sigma^4 & 1 & \sigma & \sigma^2 & 1 & 0 \\ \sigma & 1 & \sigma^5 & \sigma^5 & \sigma^4 & 0 & 1 \end{pmatrix}$$
Expanding via $\mathbb{F}_{2^m} \cong \mathbb{F}_2^m$ and permuting rows yields the binary $\Lambda(\mathbf{H}_{\mathrm{BCH}})$, from which $(\mathbf{P}^0 \mid \mathbf{I}_2)$ and $\mathbf{P}^*$ are read off.

(1) $\boldsymbol{u}^{(\omega)} = (1, 0, 1, 1, 0)$: $\boldsymbol{u}^{(\omega)} \cdot \mathbf{P}^{*\mathrm{T}} = (1, 1, 0, 1) \ne \mathbf{0}$ → rejected

(2) $\boldsymbol{u}^{(\omega)} = (1, 0, 1, 0, 0)$: $\boldsymbol{u}^{(\omega)} \cdot \mathbf{P}^{*\mathrm{T}} = \mathbf{0}$, and
$$\hat{c}_5^{(\omega)} = \boldsymbol{u}^{(\omega)} \cdot (1, 1, 1, 0, 0)^{\mathrm{T}} = 0, \qquad \hat{c}_6^{(\omega)} = \boldsymbol{u}^{(\omega)} \cdot (0, 1, 1, 1, 0)^{\mathrm{T}} = 1 \;\Longrightarrow\; \hat{\boldsymbol{c}}^{(\omega)} = (1, 0, 1, 1, 0, 0, 1)$$
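A generic sketch of this candidate filter (hypothetical array shapes, not the slides' code): only test messages that pass the punctured-BCH parity check are systematically encoded.

```python
import numpy as np

def concatenated_candidates(test_messages, P_star, G_sys):
    """Yield codeword candidates: u passes the parity check u . P*^T = 0
    (u lies in the punctured BCH code), then is systematically encoded."""
    for u in test_messages:
        u = np.asarray(u, dtype=np.uint8)
        if not ((u @ P_star.T) % 2).any():   # u in C_BCH punctured on Theta^c
            yield (u @ G_sys) % 2            # a BCH codeword candidate
```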
Low-Latency OSD
◼ Concatenated perspective

For LLOSD ($\tau$):
• $N_{\mathrm{TEPs}}$: number of TEPs, $N_{\mathrm{TEPs}} = \sum_{i=0}^{\tau} \binom{k'}{i}$
• $N_{\mathrm{BCH}}$: number of BCH codeword candidates

Theorem 3: If the channel condition is sufficiently good (SNR → ∞), then
$$N_{\mathrm{BCH}} = \sum_{i=0}^{\tau} A_i,$$
where $A_i$ is the number of weight-$i$ codewords in $\mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}[k', k]$.

• The set of punctured positions $\Theta^{\mathrm{c}}$ varies for each decoding event, and $\{A_i\}$ varies accordingly.
• In general, $\sum_{i=0}^{\tau} A_i \ll \sum_{i=0}^{\tau} \binom{k'}{i}$.
Low-Latency OSD
◼ Concatenated perspective

[Figure: the number of BCH codeword candidates $N_{\mathrm{BCH}}$ in decoding $\mathcal{C}_{\mathrm{BCH}}[63, 45]$ ($N_{\mathrm{TEPs}} = 30914$) and $\mathcal{C}_{\mathrm{BCH}}[31, 21]$ ($N_{\mathrm{TEPs}} = 379$). Theoretical line: randomly puncture $n - k'$ positions of $\mathcal{C}_{\mathrm{BCH}}[n, k]$, obtaining $\mathcal{C}_{\mathrm{BCH}}^{\Theta^{\mathrm{c}}}[k', k]$ and its weight distribution $\{A_i\}$.]
Low-Latency OSD
◼ The average number of BCH codeword candidates in LLOSD (3) of the (63, 45) BCH code

[Figure omitted.]

Remarks:
• The probability of finding a BCH codeword decreases as the TEP weight increases.
• As the SNR increases, the likelihood of generating a BCH codeword using a weight-0 TEP increases, while the likelihood of doing so using a larger-weight TEP decreases.
Hybrid Soft Decoding
◼ LLOSD vs. Chase-BM/GS

It remains challenging to decode longer BCH codes with LLOSD.

LLOSD: GE-free and low latency; high decoding performance; but higher decoding complexity; operates on the MRPs via re-encoding.

Chase-BM/GS: efficient and hardware friendly; good for long codes; but difficult to approach ML; operates on the LRPs via test-vector decoding.

LRPs: least reliable positions


Hybrid Soft Decoding
◼ Integrating LLOSD and Chase-GS decoding

Flow: LLRs → TEPs → re-encoding with $\mathbf{G}_{\mathrm{RS}}$ (LLOSD) → ML criterion met? Yes → output the optimal codeword; No → Chase-GS decoding.

Advantages:
◼ The LLOSD output list can be utilized
◼ Computation sharing
◼ Low-complexity root-finding
Hybrid Soft Decoding
◼ Integrating LLOSD and Chase-GS decoding

The Chase-GS stage proceeds as: test-vector formulation → skipping (based on the LLOSD candidate list) → re-encoding transform (based on $\hat{\boldsymbol{v}}^{(0)}$) → interpolation (module basis construction from the Lagrange polynomials, followed by basis reduction) → root-finding (based on $\mathbf{G}_{\mathrm{RS}}$)

The module basis construction reuses $\hat{\boldsymbol{v}}^{(0)}$, $\mathbf{G}_{\mathrm{RS}}$, and the Lagrange polynomials already computed by the LLOSD.
Hybrid Soft Decoding
◼ Key steps of HSD

Test-vector formulation:
• $\Psi = \{j_{n-\eta}, j_{n-\eta+1}, \ldots, j_{n-1}\}$: the $\eta$ least reliable positions (LRPs)
• The $2^\eta$ test-vectors can be formulated as
$$\Lambda(\boldsymbol{r}_\omega) = (r_{j_0}, r_{j_1}, \ldots, r_{j_{n-\eta-1}}, r_{j_{n-\eta}}^{(\omega)}, \ldots, r_{j_{n-1}}^{(\omega)}), \qquad 0 \le \omega \le 2^\eta - 1,$$
where the bits on the LRPs run through all patterns $00\ldots000, 00\ldots001, 00\ldots010, \ldots, 11\ldots111$.

Skipping (based on the candidate list):
• If $d_{\mathrm{H}}(\hat{\boldsymbol{v}}, \boldsymbol{r}_\omega) \le t$ for some candidate $\hat{\boldsymbol{v}}$, then $\boldsymbol{r}_\omega$ lies in the Hamming sphere of $\hat{\boldsymbol{v}}$ and would be decoded as $\hat{\boldsymbol{v}}$, so $\boldsymbol{r}_\omega$ can be skipped.
• $d_{\mathrm{H}}(\hat{\boldsymbol{v}}, \boldsymbol{r}_\omega)$: Hamming distance between $\hat{\boldsymbol{v}}$ and the test-vector $\boldsymbol{r}_\omega$.
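A sketch of the test-vector formulation (assumed helper name; r_sorted is the hard-decision word already permuted by reliability, and eta >= 1):

```python
from itertools import product

def test_vectors(r_sorted, eta):
    """Yield the 2^eta Chase test-vectors: the eta least reliable (last)
    bits run through all binary patterns, the rest stay fixed."""
    head = list(r_sorted[:-eta])
    for tail in product((0, 1), repeat=eta):
        yield head + list(tail)
```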
Hybrid Soft Decoding
◼ Key steps of HSD

Re-encoding transform (based on the initial RS codeword):
• Initial RS codeword (from LLOSD): $\hat{\boldsymbol{v}}^{(0)} = \boldsymbol{u} \cdot \mathbf{G}_{\mathrm{RS}} \in \mathbb{F}_{2^m}^n$
• Transformed test-vector: $\boldsymbol{z}_\omega = \boldsymbol{r}_\omega - \hat{\boldsymbol{v}}^{(0)}$, so that $\Lambda(\boldsymbol{z}_\omega) = (0, 0, \ldots, 0, z_{j_{k'}}^{(\omega)}, \ldots, z_{j_{n-1}}^{(\omega)})$

Finding a Gröbner basis of the interpolation module
$$\mathcal{M}_\omega = \{Q \in \mathbb{F}_{2^m}[x, y] \mid \deg_y Q \le 1;\; Q(\alpha_j, z_j^{(\omega)}/V(\alpha_j)) = 0,\; \forall j \in \Theta^{\mathrm{c}}\}$$

Interpolation — module basis construction (based on the Lagrange polynomials):
• Interpolation points:
$$(\alpha_{j_0}, 0), (\alpha_{j_1}, 0), \ldots, (\alpha_{j_{k'-1}}, 0), \left(\alpha_{j_{k'}}, \frac{z_{j_{k'}}^{(\omega)}}{V(\alpha_{j_{k'}})}\right), \ldots, \left(\alpha_{j_{n-1}}, \frac{z_{j_{n-1}}^{(\omega)}}{V(\alpha_{j_{n-1}})}\right)$$
• The points over $\Theta$ are interpolated by $V(x) = \prod_{j \in \Theta} (x - \alpha_j)$ (Lagrange interpolation)
Hybrid Soft Decoding
◼ Key steps of HSD

Interpolation — module basis construction (based on the Lagrange polynomials):

Seed polynomials:
$$\mathcal{G}(x) = \prod_{j \in \Theta^{\mathrm{c}}} (x - \alpha_j) \qquad \text{and} \qquad R_\omega(x) = \sum_{j \in \Theta^{\mathrm{c}}} z_j^{(\omega)}\, T_j'(x), \quad T_j'(x) = \frac{\prod_{j' \in \Theta^{\mathrm{c}},\, j' \ne j} (x - \alpha_{j'})}{\prod_{j'=0,\, j' \ne j}^{n-1} (\alpha_j - \alpha_{j'})}$$
(both already computed in generating $\mathbf{G}_{\mathrm{RS}}$).

For $\boldsymbol{r}_\omega$ ($\boldsymbol{z}_\omega$), the module $\mathcal{M}_\omega$ can be generated by
$$P_{\omega,0}(x, y) = \mathcal{G}(x), \qquad P_{\omega,1}(x, y) = y - R_\omega(x).$$

Basis reduction yields the Gröbner basis of $\mathcal{M}_\omega$ and the interpolation polynomial
$$Q_\omega^*(x, y) = V(x)\, Q_\omega^{*(0)}(x) + Q_\omega^{*(1)}(x)\, y.$$
Hybrid Soft Decoding
◼ Key steps of HSD

Root-finding:
• Estimated message polynomial: $\hat{u}_\omega(x) = \dfrac{V(x)\, Q_\omega^{*(0)}(x)}{Q_\omega^{*(1)}(x)}$

Lemma 4: The evaluation values of the polynomial $\hat{u}_\omega(x)$ over the MRPs form a TEP of the LLOSD, i.e.,
$$\hat{\boldsymbol{e}}_\omega = (\hat{u}_\omega(\alpha_{j_0}), \hat{u}_\omega(\alpha_{j_1}), \ldots, \hat{u}_\omega(\alpha_{j_{k'-1}}))$$

• Codeword candidate of LLOSD: $\hat{\boldsymbol{v}}^{(\omega)} = \boldsymbol{e}'^{(\omega)} \cdot \mathbf{G}_{\mathrm{RS}} + \hat{\boldsymbol{v}}^{(0)}$
• Codeword candidate of Chase-GS:
$$\hat{\boldsymbol{v}}_\omega = (\hat{u}_\omega(\alpha_0), \hat{u}_\omega(\alpha_1), \ldots, \hat{u}_\omega(\alpha_{n-1})) + \hat{\boldsymbol{v}}^{(0)} = (\hat{u}_\omega(\alpha_{j_0}), \hat{u}_\omega(\alpha_{j_1}), \ldots, \hat{u}_\omega(\alpha_{j_{k'-1}})) \cdot \mathbf{G}_{\mathrm{RS}} + \hat{\boldsymbol{v}}^{(0)} = \hat{\boldsymbol{e}}_\omega \cdot \mathbf{G}_{\mathrm{RS}} + \hat{\boldsymbol{v}}^{(0)}$$
Hybrid Soft Decoding
◼ Key steps of HSD

Partial root-finding (based on $\mathbf{G}_{\mathrm{RS}}$):

The estimated TEP $\hat{\boldsymbol{e}}_\omega = (\hat{u}_\omega(\alpha_{j_0}), \hat{u}_\omega(\alpha_{j_1}), \ldots, \hat{u}_\omega(\alpha_{j_{k'-1}}))$ can be determined as
$$\hat{u}_\omega(\alpha_j) = \begin{cases} 1, & \text{if } Q_\omega^{*(1)}(\alpha_j) = 0 \\ 0, & \text{otherwise} \end{cases} \qquad \forall j \in \Theta$$

The estimated codeword $\hat{\boldsymbol{v}}_\omega$ can then be generated as
$$\hat{\boldsymbol{v}}_\omega = (\hat{u}_\omega(\alpha_{j_0}), \hat{u}_\omega(\alpha_{j_1}), \ldots, \hat{u}_\omega(\alpha_{j_{k'-1}})) \cdot \mathbf{G}_{\mathrm{RS}} + \hat{\boldsymbol{v}}^{(0)} = \hat{\boldsymbol{e}}_\omega \cdot \mathbf{G}_{\mathrm{RS}} + \hat{\boldsymbol{v}}^{(0)},$$
where $Q_\omega^{*(1)}$ is provided by the Chase-GS interpolation and $\hat{\boldsymbol{v}}^{(0)}$ by the LLOSD.
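A sketch of the partial root-finding rule (reusing the GF(8)-style gf8_mul helper from the earlier snippets):

```python
def gf8_poly_eval(coeffs, x):
    """Horner evaluation of a polynomial over GF(8)."""
    r = 0
    for c in reversed(coeffs):
        r = gf8_mul(r, x) ^ c
    return r

def partial_root_find(q1, theta, locs):
    """TEP bit on each MRP j: 1 iff Q*^(1)(alpha_j) = 0, else 0."""
    return [1 if gf8_poly_eval(q1, locs[j]) == 0 else 0 for j in theta]
```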
Hybrid Soft Decoding
◼ Performance of the (63, 39) BCH code

[Figure: FER vs. SNR (1–6 dB) for OSD (1), OSD (2), LLOSD (2), LLOSD (3), HSD (1, 4), HSD (1, 6), and HSD (1, 8). Complexity at SNR = 5 dB, in binary operations (BOPs): OSD (1): $4.08 \times 10^4$; LLOSD (3): $4.45 \times 10^3$; HSD (1, 6): $2.71 \times 10^3$.]
Hybrid Soft Decoding
◼ Performance of the (255, 223) BCH code

[Figure: FER vs. SNR (3–6.5 dB) for OSD (1), PLCC (6), PLCC (8), LLOSD (2), HSD (1, 4), HSD (1, 6), and HSD (1, 8). Complexity at SNR = 5.5 dB, in BOPs: OSD (1): $2.71 \times 10^5$; PLCC (6): $4.67 \times 10^4$; LLOSD (2): $1.27 \times 10^4$; HSD (1, 6): $1.56 \times 10^4$.]

PLCC decoding is the progressive variant of Chase-GS.
