Lecture 03 LDPC
Li-Wei Liu
May 2019
1 Introduction
1.1 Linear Block Code
• A generator matrix G for the (n, k) binary linear block code C is a k × n
binary matrix whose rows span C, i.e.
C = {x · G | x ∈ {0, 1}^k}
• A parity check matrix H for the (n, k) binary linear block code C is an
m × n binary matrix whose rows span the space dual to C, or whose rows
describe the parity checks that each codeword must satisfy:
C = {c | c ∈ {0, 1}^n, c · H^T = 0}
and m ≥ n − k
• Generator and Parity Check matrices are not unique
• Each code bit is involved in wc parity constraints, and each parity
constraint involves wr bits.
• Density γ is defined by γ = wc/m = wr/n (low density means wc ≪ m and
wr ≪ n)
• No two rows of H have more than one 1 in common. (RC constraint)
• wc n = wr m = number of ones in H.
• m ≥ n − k means R = k/n ≥ 1 − m/n = 1 − wc/wr (the design rate), so a
high-rate code needs wc ≪ wr. (A small numeric check of these relations
follows this list.)
• We sometimes refer to such a code as a (wc, wr)-regular LDPC code
• dmin ≥ wc + 1
VEGA’s comment:
A codeword of weight d corresponds to d columns of H summing to the zero
vector. By the RC constraint, any two columns share a 1 in at most one row,
so cancelling the wc ones of any single column requires at least wc other
columns. Hence at least wc + 1 columns are involved, and dmin ≥ wc + 1.
Insight:
For a fixed wc, the larger m is, the smaller the density wc/m. At the same
time, choosing wc large guarantees a large dmin, since dmin ≥ wc + 1.
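To make the counting relations above concrete, here is a minimal sketch (my own illustrative construction, not from the notes): the vertex-edge incidence matrix of the complete graph K6 happens to be a (wc = 2, wr = 5)-regular binary matrix satisfying the RC constraint, so the identities wc·n = wr·m, the density wc/m = wr/n, and the rate bound can be checked numerically.

```python
from itertools import combinations
import numpy as np

# Illustrative construction (mine, not from the notes): the vertex-edge
# incidence matrix of the complete graph K6 is (wc = 2, wr = 5)-regular and
# no two rows share more than one '1' (the RC constraint).
m = 6
edges = list(combinations(range(m), 2))       # 15 edges -> n = 15 columns
n = len(edges)
H = np.zeros((m, n), dtype=int)
for col, (a, b) in enumerate(edges):
    H[a, col] = H[b, col] = 1

wc = int(H.sum(axis=0)[0])                    # column weight
wr = int(H.sum(axis=1)[0])                    # row weight
assert (H.sum(axis=0) == wc).all() and (H.sum(axis=1) == wr).all()
assert wc * n == wr * m                       # both sides count the ones in H
for r1, r2 in combinations(range(m), 2):      # RC constraint
    assert int(np.dot(H[r1], H[r2])) <= 1

print("density wc/m = wr/n :", wc / m, wr / n)
print("rate bound 1 - wc/wr:", 1 - wc / wr)
```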
2 Encode
• Systematic Encode
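The notes only name systematic encoding, so the following is a hedged sketch of one standard way to do it: bring H into the form [A | I_m] by Gaussian elimination over GF(2) (assuming, for simplicity, that no column permutation is needed), so that G = [I_k | A^T] and the codeword is c = [u | p] with parity bits p = u·A^T. The toy matrix and function names below are mine.

```python
import numpy as np

def gf2_systematic_form(H):
    """Row-reduce H over GF(2), aiming for the form [A | I_m].

    Assumption for this sketch: the last m columns of H are invertible,
    so no column permutation is required.
    """
    H = H.copy() % 2
    m, n = H.shape
    k = n - m
    for i in range(m):
        col = k + i
        pivot = next(r for r in range(i, m) if H[r, col] == 1)
        H[[i, pivot]] = H[[pivot, i]]          # move a pivot row into place
        for r in range(m):                      # clear the column elsewhere
            if r != i and H[r, col] == 1:
                H[r] ^= H[i]
    return H

def encode_systematic(H_sys, u):
    """Systematic encoding: c = [u | p] with p = u . A^T over GF(2)."""
    m, n = H_sys.shape
    k = n - m
    A = H_sys[:, :k]
    p = (A @ u) % 2                             # parity bits
    return np.concatenate([u, p])

# Hypothetical toy parity-check matrix (not from the notes), m = 3, n = 6
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

H_sys = gf2_systematic_form(H)
u = np.array([1, 0, 1])
c = encode_systematic(H_sys, u)
assert ((H_sys @ c) % 2 == 0).all()             # every parity check is satisfied
print("codeword:", c)
```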
3 Decoder
Hard-Decision LDPC Decoder
Hard-decision decoding performs poorly in low-SNR scenarios.
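The notes do not spell the hard-decision decoder out; a common choice is bit flipping, sketched below under my own naming: take hard decisions on the channel output, and while the syndrome is nonzero, flip the bit(s) involved in the largest number of unsatisfied checks.

```python
import numpy as np

def bit_flip_decode(H, y_hard, max_iter=50):
    """Hard-decision bit-flipping decoding (one common hard-decision variant).

    H      : (m, n) binary parity-check matrix
    y_hard : length-n hard decisions (0/1) taken from the channel output
    """
    c = y_hard.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():                 # every parity check satisfied
            return c, True
        unsat = H.T @ syndrome                 # failed checks touching each bit
        c = c ^ (unsat == unsat.max()).astype(int)   # flip the worst bit(s)
    return c, False

# Tiny demo with a hypothetical parity-check matrix (not from the notes):
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)
c_tx = np.array([1, 0, 1, 1, 1, 0])           # a valid codeword: H @ c_tx = 0 mod 2
r = c_tx.copy()
r[2] ^= 1                                     # one bit flipped by the channel
print(bit_flip_decode(H, r))                  # recovers c_tx
```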
Elements of Iterative Processing (Turbo Principle)
1. Concatenation (Serial or Parallel)
2. Interleaver
3. SISO Processing
4. Extrinsic Information Exchange
2. Every variable node takes information only from its neighboring check nodes
3. A variable node’s neighbors are statistically independent
4. A check node’s neighbors are statistically independent
Message Passing
• Each bit node (VN) sends a message (extrinsic information) to each
check node it is connected to
• Each check node (CN) sends a message (extrinsic information) to each bit
node it is connected to
• q_{i,j}(x) denotes the message sent from variable node i to check node j,
and r_{j,i}(x) the message sent from check node j back to variable node i
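One way to organize these messages in code is to precompute, for every check node j, the set R_j of variable nodes it touches, and for every variable node i, the set C_i of check nodes it touches. A minimal sketch (names and the toy matrix are mine):

```python
import numpy as np

def neighbor_sets(H):
    """R[j]: variable nodes in check j; C[i]: check nodes touching bit i."""
    m, n = H.shape
    R = [np.flatnonzero(H[j, :]) for j in range(m)]
    C = [np.flatnonzero(H[:, i]) for i in range(n)]
    return R, C

# The messages can then live in dense arrays, e.g.
#   q[i, j] = message from variable node i to check node j
#   r[j, i] = message from check node j to variable node i
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=int)      # hypothetical toy matrix
R, C = neighbor_sets(H)
print(R)    # [array([0, 1, 3]), array([1, 2, 3])]
print(C)    # [array([0]), array([0, 1]), array([1]), array([0, 1])]
```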
3.1 Decoding Algorithm
1. Initialization: q_{i,j}(x) is based on the observed Y_i = y_i, assuming the X_i’s are a
priori equally likely to be +1 or −1.
Summary: q_{i,j}(x) = 1 / (1 + e^{−2x y_i / σ²}) for x ∈ {+1, −1}
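A quick numeric sanity check of the initialization formula, assuming (as the notes do implicitly) BPSK over an AWGN channel with noise variance σ²; the values of y and σ² below are arbitrary illustrative choices.

```python
import numpy as np

def q_init(y, sigma2, x):
    """Initial VN message q_{i,j}(x) = P(X_i = x | Y_i = y) for x in {+1, -1}."""
    return 1.0 / (1.0 + np.exp(-2.0 * x * y / sigma2))

y, sigma2 = 0.8, 0.5                        # arbitrary illustrative values
p_plus, p_minus = q_init(y, sigma2, +1), q_init(y, sigma2, -1)
print(p_plus, p_minus, p_plus + p_minus)    # the two probabilities sum to 1
```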
Proof of Lemma:
Given A = [A_1, A_2, A_3, ..., A_L], where P(A_i = 1) = p_i,
P(A has even parity) = 1/2 + (1/2) Π_{i=1}^{L} (1 − 2p_i)
• For d = 1: P_0 = 1 − p_1; P_1 = p_1
• For d = 2:
P_0 = (1 − p_1)(1 − p_2) + p_1 p_2 = 1/2 + (1/2)(1 − 2p_1)(1 − 2p_2)
P_1 = 1 − P_0 = 1/2 − (1/2)(1 − 2p_1)(1 − 2p_2)
• For d = 3:
P_0 = P_0^{d=2} (1 − p_3) + P_1^{d=2} p_3
= (1/2 + (1/2)(1 − 2p_1)(1 − 2p_2))(1 − p_3) + (1/2 − (1/2)(1 − 2p_1)(1 − 2p_2)) p_3
= (1/2)(1 − 2p_1)(1 − 2p_2)(1 − p_3 − p_3) + (1/2)(1 − p_3) + (1/2) p_3
= 1/2 + (1/2)(1 − 2p_1)(1 − 2p_2)(1 − 2p_3)
P_1 = 1 − P_0 = 1/2 − (1/2)(1 − 2p_1)(1 − 2p_2)(1 − 2p_3)
By mathematical induction, the result extends to any L. QED
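The lemma can also be checked by brute force, enumerating all outcomes for a few arbitrary p_i; a small sketch:

```python
from itertools import product
import numpy as np

p = np.array([0.1, 0.35, 0.6, 0.8])          # arbitrary P(A_i = 1)

# Exact P(even parity) by enumerating all 2^L outcomes
p_even = 0.0
for bits in product([0, 1], repeat=len(p)):
    prob = np.prod([pi if b else 1.0 - pi for pi, b in zip(p, bits)])
    if sum(bits) % 2 == 0:
        p_even += prob

# Closed form from the lemma
lemma = 0.5 + 0.5 * np.prod(1.0 - 2.0 * p)
print(p_even, lemma)                          # the two numbers agree
```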
3.2 Sum-Product Algorithm
1. Initialization: compute the log-APP based on the observed value of Y_i (by Bayes’
rule, the equal priors cancel):
L(q_{i,j}) = log [ P(X_i = +1 | Y_i = y_i) / P(X_i = −1 | Y_i = y_i) ]
= log [ (1 / (1 + e^{−2y_i/σ²})) / (1 / (1 + e^{2y_i/σ²})) ]
= 2y_i / σ²
2. Check Node Update. From the identity
tanh( (1/2) log(x/y) ) = 1 − 2y, where x + y = 1,
with x = r_{j,i}(+1), y = r_{j,i}(−1) (r_{j,i}(+1) + r_{j,i}(−1) = 1), and with
x = q_{i',j}(+1), y = q_{i',j}(−1) (q_{i',j}(+1) + q_{i',j}(−1) = 1), and using the
even-parity lemma above,
tanh( (1/2) L(r_{j,i}) ) = Π_{i'∈R_j\i} (1 − 2 q_{i',j}(−1))     (1)
= Π_{i'∈R_j\i} tanh( (1/2) L(q_{i',j}) )     (2)
(A code sketch of steps 1–4 appears after step 4 below.)
3. Variable Node Update: L(q_{i,j}) = L(X_i) + Σ_{j'∈C_i\j} L(r_{j',i})
Each update must include the channel information L(X_i).
4. Compute the Log-Likelihood: L(Q_i) = L(X_i) + Σ_{j∈C_i} L(r_{j,i})
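Putting steps 1–4 together, here is a minimal sketch of one sum-product iteration’s building blocks, assuming BPSK over AWGN as above; the function names, the toy LLRs, and the tiny graph in the demo are all mine.

```python
import numpy as np

def channel_llr(y, sigma2):
    """Step 1 (initialization): L(X_i) = 2*y_i / sigma^2 for BPSK over AWGN."""
    return 2.0 * np.asarray(y, dtype=float) / sigma2

def check_node_update(L_q_col, R_j):
    """Step 2 (tanh rule): L(r_{j,i}) from all other VNs on check j.

    L_q_col : dict i' -> L(q_{i',j}) for every i' in R_j
    """
    L_r = {}
    for i in R_j:
        prod = 1.0
        for i2 in R_j:
            if i2 != i:                       # extrinsic: exclude the target VN
                prod *= np.tanh(0.5 * L_q_col[i2])
        L_r[i] = 2.0 * np.arctanh(prod)
    return L_r

def variable_node_update(L_ch_i, L_r_row, C_i):
    """Steps 3-4: extrinsic LLRs L(q_{i,j}) and total LLR L(Q_i) for bit i.

    L_r_row : dict j -> L(r_{j,i}) for every j in C_i
    """
    L_Q = L_ch_i + sum(L_r_row[j] for j in C_i)
    L_q = {j: L_Q - L_r_row[j] for j in C_i}  # leave out check j's own message
    return L_q, L_Q

# Tiny demo with made-up numbers: check node 0 connects bits {0, 3, 5}; in the
# first iteration L(q_{i',j}) = L(X_{i'}). Bit 0 also sits on check 2, whose
# message below is a made-up placeholder.
L_ch = channel_llr([0.9, -0.2, 1.4, -1.1, 0.3, 0.7], sigma2=0.5)
L_r0 = check_node_update({0: L_ch[0], 3: L_ch[3], 5: L_ch[5]}, [0, 3, 5])
L_q0, L_Q0 = variable_node_update(L_ch[0], {0: L_r0[0], 2: 0.8}, [0, 2])
print(L_q0, L_Q0)
```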
L(r_{j,i}) = Π_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1}( Π_{i'∈R_j\i} tanh( (1/2) |L(q_{i',j})| ) )
= Π_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} log^{-1} log Π_{i'∈R_j\i} tanh( (1/2) |L(q_{i',j})| )
= Π_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} log^{-1} Σ_{i'∈R_j\i} log( tanh( (1/2) |L(q_{i',j})| ) )
Define φ(x) = −log(tanh(x/2)) = log( (e^x + 1) / (e^x − 1) ), with φ^{-1}(x) = φ(x). Then
= Π_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1}( e^{ −Σ_{i'∈R_j\i} φ(|L(q_{i',j})|) } )
= Π_{i'∈R_j\i} sign(L(q_{i',j})) × φ^{-1}( Σ_{i'∈R_j\i} φ(|L(q_{i',j})|) )
The sum is dominated by its largest term, which comes from the smallest |L(q_{i',j})|
(φ is decreasing); applying φ^{-1} = φ then approximately recovers that minimum, giving
the min-sum approximation (compared numerically with the exact rule in the sketch at
the end of this section):
≈ Π_{i'∈R_j\i} sign(L(q_{i',j})) × min_{i'∈R_j\i} |L(q_{i',j})|
3. Variable Node Update (as in the sum-product algorithm): L(q_{i,j}) = L(X_i) + Σ_{j'∈C_i\j} L(r_{j',i})
4. Compute the Log-Likelihood: L(Q_i) = L(X_i) + Σ_{j∈C_i} L(r_{j,i})
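Finally, a small numeric comparison of the min-sum check-node update against the exact tanh rule (made-up LLRs, function names mine); the approximation keeps the signs and never produces smaller magnitudes than the exact rule.

```python
import numpy as np

def cn_update_exact(L_in):
    """Exact tanh-rule CN update: output LLR for each edge of one check node."""
    L_in = np.asarray(L_in, dtype=float)
    out = np.empty_like(L_in)
    for k in range(len(L_in)):
        others = np.delete(L_in, k)
        out[k] = 2.0 * np.arctanh(np.prod(np.tanh(0.5 * others)))
    return out

def cn_update_minsum(L_in):
    """Min-sum CN update: product of signs times the smallest magnitude."""
    L_in = np.asarray(L_in, dtype=float)
    out = np.empty_like(L_in)
    for k in range(len(L_in)):
        others = np.delete(L_in, k)
        out[k] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

L_in = [1.2, -0.7, 2.5, 0.9]                 # made-up incoming LLRs on one check
print(cn_update_exact(L_in))
print(cn_update_minsum(L_in))                # same signs, larger magnitudes
```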