
Introduction to LDPC

Li-Wei Liu
May 2019

1 Introduction
1.1 Linear Block Code
• A generator matrix G for the (n, k) binary linear block code C is a k × n binary matrix whose rows span C, i.e.

  C = { x · G | x ∈ {0, 1}^k }

• A parity check matrix H for the (n, k) binary linear block code C is an m × n binary matrix whose rows span the space dual to C, or whose rows describe the parity checks that each codeword must satisfy:

  C = { c ∈ {0, 1}^n | c · H^T = 0 }

  with m ≥ n − k
• Generator and Parity Check matrices are not unique

1.2 LDPC Code


A regular LDPC code is one for which the m × n parity check matrix of interest has wc ones in every column (column weight) and wr ones in every row (row weight). The number of 1's in common between any two columns is no greater than 1.

• Each code bit is involved with wc parity constraints, and each parity
constraint involves wr bits.
• Density γ is defined by γ = wc/m = wr/n (low density means wc << m and wr << n)
• No two rows of H have more than one 1 in common. (RC constraint)

• wc n = wr m = number of ones in H.
• m ≥ n − k means R = k/n ≥ 1 − m/n = 1 − wc/wr (with equality when H has full rank), so any positive design rate requires wc < wr; in practice wc << wr

• We sometimes refer to such a code as a (wc, wr)-regular LDPC code
• dmin ≥ wc + 1
  VEGA's comment:
  A nonzero codeword corresponds to a set of columns of H that sums to the zero vector. Any single column contains wc ones, and because any two columns have at most one 1 in common, each additional column can cancel at most one of them; so at least wc further columns are needed, i.e. at least wc + 1 columns in total, which gives dmin ≥ wc + 1.

Insight:
The larger m is, the lower wc/m is, i.e. the lower the density. At the same time, choosing wc large enough makes dmin ≥ wc + 1 large as well.
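As an illustration of these definitions, here is a minimal sketch that checks (wc, wr)-regularity, density, and the RC constraint of a parity check matrix. The matrix H below is a hypothetical toy example (it is small, so not actually "low density"):

```python
import numpy as np

# Hypothetical toy parity check matrix, for illustration only.
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

m, n = H.shape
col_weights = H.sum(axis=0)            # wc for each column
row_weights = H.sum(axis=1)            # wr for each row

wc, wr = col_weights[0], row_weights[0]
is_regular = np.all(col_weights == wc) and np.all(row_weights == wr)
density = wc / m                        # equals wr / n for a regular code

# RC constraint: any two distinct columns overlap in at most one position.
overlaps = H.T @ H                      # entry (i, j) = ones in common between columns i and j
np.fill_diagonal(overlaps, 0)
rc_ok = overlaps.max() <= 1

print(f"regular: {is_regular}, wc={wc}, wr={wr}, density={density:.3f}, RC ok: {rc_ok}")
```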

2 Encode
• Systematic encoding

• Encoding via the parity check matrix (with a special structure)


Generally, T is easy to invert (it is a lower triangular matrix).
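A minimal sketch of this idea, assuming H has been arranged as H = [A | T] with T an m × m lower triangular block with unit diagonal (the exact structured form from the lecture figure is not reproduced here). The parity bits are obtained by forward substitution rather than an explicit matrix inversion:

```python
import numpy as np

def encode_lower_triangular(A, T, u):
    """Encode message u for H = [A | T] over GF(2), T lower triangular with unit diagonal.
    The codeword c = [u | p] must satisfy A u + T p = 0 (mod 2), so the parity bits p
    follow by forward substitution -- this is why a lower triangular T is easy to "invert"."""
    m = T.shape[0]
    s = (A @ u) % 2                       # contribution of the message bits to each check
    p = np.zeros(m, dtype=int)
    for r in range(m):                    # solve row r for p[r] (over GF(2), minus equals plus)
        p[r] = (s[r] + T[r, :r] @ p[:r]) % 2
    return np.concatenate([u, p])

# Hypothetical toy matrices, for illustration only (not an actual LDPC structure).
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
T = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
u = np.array([1, 0, 1])
c = encode_lower_triangular(A, T, u)
assert not ((np.hstack([A, T]) @ c) % 2).any()   # every parity check is satisfied
```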

3 Decoder
Hard-decision LDPC decoding
Hard-decision decoding performs poorly in low-SNR scenarios.

3.0.1 Tanner Graph


Parameters:

• n: number of Variable Nodes (VN)
• m: number of Check Nodes (CN)
• k: cycle length
  A cycle length must be an even number and, by the RC constraint, larger than 4.
  The minimum length of any cycle in the graph is called the girth.
• wr: row degree
• wc: column degree
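The message-passing updates described later only need, for every check node j, the set R_j of neighboring variable nodes, and for every variable node i, the set C_i of neighboring check nodes. A minimal sketch (reusing the hypothetical H from above) that builds these neighbor sets from the parity check matrix:

```python
import numpy as np

def tanner_neighbors(H):
    """Return (R, C): R[j] = variable nodes connected to check node j,
    C[i] = check nodes connected to variable node i."""
    m, n = H.shape
    R = [np.flatnonzero(H[j, :]) for j in range(m)]   # VN neighbors of each CN
    C = [np.flatnonzero(H[:, i]) for i in range(n)]   # CN neighbors of each VN
    return R, C

# Example with the hypothetical toy H used earlier.
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
R, C = tanner_neighbors(H)
print("R_0 =", R[0], " C_0 =", C[0])
```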

3.0.2 Q & A or Insight


1. Is there a systematic way to find the girth?
   There is no simple closed form; in practice finding the girth requires a search over cycles, and constructing codes with a prescribed large girth is a hard combinatorial problem.
2. The girth determines the number of iterations (girth/2) for which the passed messages remain independent (i.e. the statistical-independence assumption still holds).
3. A large girth is not by itself good for decoding; it only keeps the statistically independent message passing valid for more iterations.
4. We could even make the nodes involved in short cycles sparser.

Elements of Iterative Processing (Turbo Principle)
1. Concatenation (Serial or Parallel)
2. Interleaver

3. SISO Processing
4. Extrinsic Information Exchange

Properties 1 and 2 come from the Tanner graph.

Assumptions for LDPC Decoding

1. All variables are statistically independent (this comes from a large code length).
2. Each variable node only takes information from its neighboring check nodes.
3. A variable node's neighbors are statistically independent.
4. A check node's neighbors are statistically independent.

Message Passing
• Each bit node (VN) sends a message (extrinsic information) to each check node it is connected to.
• Each check node sends a message (extrinsic information) to each bit node it is connected to.
• q_{i,j}(x)
  – message passed from VN_i to CN_j
  – probability that VN_i = x, given the channel value and the extrinsic information from the neighboring check nodes CN_{j'} with j' ≠ j.
• r_{j,i}(x)
  – message passed from CN_j to VN_i
  – probability that the parity check at CN_j is satisfied (even parity), given VN_i = x and the extrinsic information from the other neighboring variable nodes VN_{i'} with i' ≠ i.
  – r_{j,i}(−1) + r_{j,i}(+1) = 1

3.1 Decoding Algorithm
1. Initialization: q_{i,j}(x) is based on the observed Y_i = y_i, assuming the X_i's are a priori equally likely to be +1 or −1.

   q_{i,j}(+1) = P(X_i = +1 | Y_i = y_i)
              = f_{Y_i}(y_i | X_i = +1) P(X_i = +1) / Σ_x f_{Y_i}(y_i | X_i = x) P(X_i = x)
              = (1/2) (1/√(2πσ²)) e^{−(y_i−1)²/(2σ²)} / [ (1/2) (1/√(2πσ²)) { e^{−(y_i−1)²/(2σ²)} + e^{−(y_i+1)²/(2σ²)} } ]
              = 1 / (1 + e^{−4y_i/(2σ²)})
              = 1 / (1 + e^{−2y_i/σ²})

   q_{i,j}(−1) = P(X_i = −1 | Y_i = y_i)
              = 1 / (1 + e^{2y_i/σ²})

   Summary: q_{i,j}(x) = 1 / (1 + e^{−2x y_i/σ²}) for x ∈ {+1, −1}
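A minimal numerical sketch of this initialization for BPSK over an AWGN channel (the noise level σ and the received values below are made-up illustrative numbers):

```python
import numpy as np

def init_messages(y, sigma):
    """q_{i,j}(x) = 1 / (1 + exp(-2*x*y_i / sigma^2)); the message from VN_i is
    initially the same toward every neighboring check node."""
    q_plus = 1.0 / (1.0 + np.exp(-2.0 * y / sigma**2))   # q_{i,j}(+1)
    q_minus = 1.0 - q_plus                                 # q_{i,j}(-1)
    return q_plus, q_minus

y = np.array([0.8, -0.3, 1.2, -1.1])    # hypothetical channel outputs
q_plus, q_minus = init_messages(y, sigma=0.7)
print(q_plus, q_minus)
```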

Proof of Lemma:
Given independent bits A = [A_1, A_2, A_3, ..., A_L], where P(A_i = 1) = p_i,

   P(A has even parity) = 1/2 + 1/2 ∏_{i=1}^{L} (1 − 2p_i)

Let P_0 and P_1 denote the probabilities of even and odd parity among the first d bits.

• For d = 1: P_0 = 1 − p_1; P_1 = p_1

• For d = 2:
  P_0 = (1 − p_1)(1 − p_2) + p_1 p_2 = 1/2 + 1/2 (1 − 2p_1)(1 − 2p_2)
  P_1 = 1 − P_0 = 1/2 − 1/2 (1 − 2p_1)(1 − 2p_2)

• For d = 3:
  P_0 = P_0^{d=2} (1 − p_3) + P_1^{d=2} p_3
      = (1/2 + 1/2 (1 − 2p_1)(1 − 2p_2))(1 − p_3) + (1/2 − 1/2 (1 − 2p_1)(1 − 2p_2)) p_3
      = 1/2 (1 − 2p_1)(1 − 2p_2)(1 − p_3 − p_3) + 1/2 (1 − p_3) + 1/2 p_3
      = 1/2 + 1/2 (1 − 2p_1)(1 − 2p_2)(1 − 2p_3)
  P_1 = 1 − P_0 = 1/2 − 1/2 (1 − 2p_1)(1 − 2p_2)(1 − 2p_3)

The general case follows by mathematical induction. QED
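A small sketch that checks this lemma numerically against brute-force enumeration (the probabilities in p are arbitrary illustrative values):

```python
import itertools
import numpy as np

def even_parity_prob_formula(p):
    """Lemma: P(even parity) = 1/2 + 1/2 * prod(1 - 2 p_i)."""
    return 0.5 + 0.5 * np.prod(1.0 - 2.0 * np.asarray(p))

def even_parity_prob_bruteforce(p):
    """Sum P(a) over all bit patterns a with even weight."""
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(p)):
        if sum(bits) % 2 == 0:
            total += np.prod([pi if b else 1 - pi for pi, b in zip(p, bits)])
    return total

p = [0.1, 0.35, 0.6, 0.25]               # arbitrary bit-one probabilities
assert np.isclose(even_parity_prob_formula(p), even_parity_prob_bruteforce(p))
```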

3.2 Sum-Product Algorithm
1. Initialization: compute the log-APP ratio based on the observed value of Y_i.

   L(q_{i,j}) = log [ P(X_i = +1 | Y_i = y_i) / P(X_i = −1 | Y_i = y_i) ]     (Bayes' rule; the equal priors cancel)
             = log [ (1 / (1 + e^{−2y_i/σ²})) / (1 / (1 + e^{2y_i/σ²})) ]
             = 2y_i / σ²

2. Check node update:

   r_{j,i}(−1) = 1/2 − 1/2 ∏_{i'∈R_j\i} (1 − 2 q_{i',j}(−1))

   From the identity
      tanh( (1/2) log(x/y) ) = 1 − 2y,   where x + y = 1,
   applied with x = r_{j,i}(+1), y = r_{j,i}(−1) (since r_{j,i}(+1) + r_{j,i}(−1) = 1) and with
   x = q_{i',j}(+1), y = q_{i',j}(−1) (since q_{i',j}(+1) + q_{i',j}(−1) = 1), we get

      tanh( (1/2) L(r_{j,i}) ) = ∏_{i'∈R_j\i} (1 − 2 q_{i',j}(−1))                (1)
                               = ∏_{i'∈R_j\i} tanh( (1/2) L(q_{i',j}) )            (2)

3. Variable node update: L(q_{i,j}) = L(X_i) + Σ_{j'∈C_i\j} L(r_{j',i})
   Each update must include the channel information L(X_i).

4. Compute the log-likelihood ratio: L(Q_i) = L(X_i) + Σ_{j∈C_i} L(r_{j,i})
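Putting the four steps together, here is a minimal LLR-domain sum-product sketch. The matrix H, the noise level σ, and the received vector y are hypothetical toy values, and the only stopping rule is a fixed iteration count plus a syndrome check:

```python
import numpy as np

def sum_product_decode(H, y, sigma, n_iter=20):
    """LLR-domain sum-product decoding for BPSK (bit 0 -> +1, bit 1 -> -1) over AWGN."""
    m, n = H.shape
    Lch = 2.0 * y / sigma**2                        # step 1: channel LLRs L(X_i)
    q = H * Lch                                     # L(q_{i,j}), stored where H[j, i] = 1
    for _ in range(n_iter):
        # Step 2: check node update, tanh(L(r_{j,i})/2) = prod over i' != i of tanh(L(q_{i',j})/2).
        t = np.tanh(q / 2.0)
        t[H == 0] = 1.0                             # neutral element for the row product
        t = np.where(np.abs(t) < 1e-12, 1e-12, t)   # guard against division by zero
        ratio = np.prod(t, axis=1, keepdims=True) / t
        r = 2.0 * np.arctanh(np.clip(ratio, -1 + 1e-9, 1 - 1e-9)) * H
        # Steps 3-4: variable node update and total LLR.
        LQ = Lch + r.sum(axis=0)                    # L(Q_i)
        q = (LQ - r) * H                            # exclude the message from the target CN
        c_hat = (LQ < 0).astype(int)                # negative LLR -> decide bit 1
        if not ((H @ c_hat) % 2).any():             # all parity checks satisfied: stop early
            break
    return c_hat, LQ

# Hypothetical usage with the toy H from earlier and a noisy all-zero codeword.
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
y = np.array([0.9, 1.1, -0.2, 1.0, 0.8, 1.2])       # all +1 sent, one unreliable sample
c_hat, LQ = sum_product_decode(H, y, sigma=0.8)
print(c_hat)
```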

3.3 Min-Sum Algorithm

1. Initialization: compute the log-APP ratio based on the observed value of Y_i:

   L(q_{i,j}) = 2y_i / σ²

2. Check node update. Starting from the sum-product relation

   tanh( (1/2) L(r_{j,i}) ) = ∏_{i'∈R_j\i} tanh( (1/2) L(q_{i',j}) )

   and separating the sign and the magnitude of each incoming message:

   L(r_{j,i}) = ∏_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} ∏_{i'∈R_j\i} tanh( (1/2) |L(q_{i',j})| )
             = ∏_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} log^{-1} log ∏_{i'∈R_j\i} tanh( (1/2) |L(q_{i',j})| )
             = ∏_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} log^{-1} Σ_{i'∈R_j\i} log tanh( (1/2) |L(q_{i',j})| )

   Define
      φ(x) = − log tanh(x/2) = log( (e^x + 1) / (e^x − 1) ),   which satisfies φ^{-1}(x) = φ(x).
   Then

   L(r_{j,i}) = ∏_{i'∈R_j\i} sign(L(q_{i',j})) × 2 tanh^{-1} log^{-1} ( − Σ_{i'∈R_j\i} φ( |L(q_{i',j})| ) )
             = ∏_{i'∈R_j\i} sign(L(q_{i',j})) × φ^{-1}( Σ_{i'∈R_j\i} φ( |L(q_{i',j})| ) )

   Since φ is a decreasing function, the largest term in the sum comes from the smallest magnitude |L(q_{i',j})|, and φ^{-1} of the sum is dominated by that term, so we may approximate

   L(r_{j,i}) ≈ ∏_{i'∈R_j\i} sign(L(q_{i',j})) × min_{i'∈R_j\i} |L(q_{i',j})|

   (This is in the same spirit as the max-log approximation log(e^x + e^y) = max(x, y) + log(1 + e^{−|x−y|}) ≈ max(x, y).)

3. Variable node update: L(q_{i,j}) = L(X_i) + Σ_{j'∈C_i\j} L(r_{j',i})

4. Compute the log-likelihood ratio: L(Q_i) = L(X_i) + Σ_{j∈C_i} L(r_{j,i})
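A minimal sketch of the min-sum check node update in isolation, operating on the incoming LLRs of one check node (the numeric values are hypothetical):

```python
import numpy as np

def min_sum_check_update(Lq):
    """Given the incoming LLRs L(q_{i',j}) at one check node, return the outgoing
    L(r_{j,i}) toward each neighbor i: product of the other signs times the
    minimum of the other magnitudes."""
    Lq = np.asarray(Lq, dtype=float)
    signs = np.sign(Lq)
    mags = np.abs(Lq)
    total_sign = np.prod(signs)
    out = np.empty_like(Lq)
    for i in range(len(Lq)):
        others = np.delete(mags, i)
        out[i] = (total_sign * signs[i]) * others.min()   # sign product and min, both excluding i
    return out

print(min_sum_check_update([1.4, -0.6, 2.3, 0.9]))        # hypothetical incoming messages
```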
