
ECE 519 Project Part C

Submitted By:
Arif Mahmood
A20248293

Contents

1. Introduction
2. LDPC Decoding
3. Results
4. Conclusion
5. Appendix

1. Introduction
As in every coding scheme, LDPC coding has two stages: an encode step and a decode step.
The LDPC encode step is a simple matrix multiplication common to all sparse graph codes. The
decode step is where both the difficulty and the strength of LDPC lie: it uses a decode
matrix in an iterative decoding process operating on the probabilities of individual symbols.
This report discusses a binary implementation of LDPC codes, covering the decoding function
in detail.

2. LDPC Decoding
The decoding step can be broken into three main parts: computing initial probabilities, the
syndrome check, and iterative decoding. Before anything else can be done, the
received codeword, s, must be converted into symbol probabilities. Next, a syndrome
check is performed on the probabilities to determine whether s is a valid codeword. If s is a
valid codeword, the message is extracted from it and the decoding step is finished. If
we do not have a valid codeword, it is sent through the iterative decoder, which updates
each symbol probability using knowledge of the other probabilities. After each iteration
the syndrome check is repeated to see if a valid codeword has been decoded. Usually a
maximum number of iterations is set; should it be reached, the codeword is not decoded
and the extracted message may contain errors.

Initial Probabilities
The received codeword, s, is no longer a bit stream; it is a string of floating point
numbers. For instance, although a 1 may have been sent through the channel, the
noise may have turned it into 1.2 or 0.2 depending on the noise strength. Using these
values we can derive the probability that each bit was a one or a zero. With the BPSK
mapping used in the appendix code (bit 0 sent as +1, bit 1 sent as -1) and noise
variance NV, the probability that bit k is a one is computed in q1; the probability
that it is a zero is set in q0:

q1 = 1 / (1 + exp(2*sk/NV))
q0 = 1 - q1

Once all the q1 and q0 are computed, the syndrome check and iterative decoding may
begin.
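As a minimal sketch of this computation (it mirrors the prior-probability lines of the
appendix script; the noise variance NV is assumed known at the receiver):

% Initial symbol probabilities for an AWGN channel, assuming the
% BPSK mapping 0 -> +1, 1 -> -1 used in the appendix script.
s  = [-0.4 -0.2 -1.1 0.6 -0.5 0.5 -0.4 -1.2]';  % received values
NV = 0.5;                                       % noise variance
q1 = 1 ./ (1 + exp(2*s./NV));   % P(bit = 1 | received value)
q0 = 1 - q1;                    % P(bit = 0 | received value)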

Syndrome Check
The syndrome check takes the computed probabilities and decides whether each bit is a
one or a zero based on the higher probability. It then uses a decode matrix, H, derived
from the encode matrix, G, to decide whether the codeword is valid. The codeword is
simply multiplied by the transpose of H; if the result is an all-zero vector, the
codeword is valid.
H is of the form:

H = [ PT | I ]

where PT is the transpose of the parity check portion of G, and I is the identity
matrix. For the previous example, H is:

H =
1 1 1 0 0 0 0 0
0 0 0 1 1 1 0 0
1 0 0 1 0 0 1 0
0 1 0 0 1 0 0 1

Assuming our codeword y, hard-decided from the probabilities, comes through the channel
with no errors, we can put it to the test:

y * HT = [ 0 0 0 0 ]

In this case we exit the decoding process and take the first three bits of y, since
we know that they are the original message word. Suppose now that

y * HT = [ 0 0 0 0 ] does not hold.

In this case we know that the received codeword is not a valid codeword, so we must enter
the iterative decoding process to find a better one.
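A short sketch of this check, reusing the hard-decision rule and the mod-2 product from
the appendix script (q0 and q1 are the probabilities computed above):

% Hard-decide each bit from its probabilities, then check the
% syndrome; an all-zero result means y is a valid codeword.
H = [1 1 1 0 0 0 0 0; 0 0 0 1 1 1 0 0; 1 0 0 1 0 0 1 0; 0 1 0 0 1 0 0 1];
y = double(q1 > q0)';           % hard decision, row vector
syndrome = mod(y * H', 2);
isValid  = all(syndrome == 0);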

Iterative Decoding
The iterative decoding process takes the probabilities from the initial-probabilities part
and the H matrix from the syndrome check, and updates the probabilities. From the H
matrix a message-passing (Tanner) graph is formed, with a square node for each check
(row of H) and a circle node for each symbol (column of H).

Looking back at the H matrix, the first row contains ones in the first, second, and third
places; therefore, there are connections between the first square and the first, second,
and third circles. The remaining connections are made in the same manner. The idea in
decoding is to iterate back and forth between the q probabilities and the r
probabilities to get the best possible result. The qs are initially set to the initial
probabilities. The rs are then computed from the values of q using the following
formulas.

rj,i(0) = 1/2 ( 1 + Π(k ≠ i) [ qk,j(0) - qk,j(1) ] )

rj,i(1) = 1/2 ( 1 - Π(k ≠ i) [ qk,j(0) - qk,j(1) ] )

In these formulas, qk,j(1) is the probability that circle k is a one, and qk,j(0) is the
probability that circle k is a zero; the two add to one. The product Π runs over all
circles k connected to square j other than the destination circle i, so the differences
of the qs are multiplied together to form r(0) and r(1). Once the rs have been
calculated, it is possible to update the qs, which are calculated using the following
equations:

qi,j(0) = Ki,j · pi(0) · Π(j' ≠ j) rj',i(0)

qi,j(1) = Ki,j · pi(1) · Π(j' ≠ j) rj',i(1)

Here pi(0) and pi(1) are the initial channel probabilities of circle i, the product runs
over all squares j' connected to circle i other than square j, and Ki,j is a normalizing
constant chosen so that qi,j(0) + qi,j(1) = 1 (this normalization is the division
performed in the appendix code).
From these four equations it is possible to iterate back and forth to get better and
better values of q. Between iterations, it is customary to perform a syndrome check on the
values of q to see if a valid codeword has been determined. This lets the algorithm do as
much work as needed to retrieve the real codeword without wasting time on extra
iterations; a condensed sketch of one such round follows.
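The sketch below condenses one round of message passing plus the syndrome check from the
appendix script (it is not a separate implementation). It assumes H and the priors P0/P1
from above, with [M, N] = size(H); qij0/qij1 and rji0/rji1 are M-by-N matrices holding
the per-edge probabilities.

% One decoding round, condensed from the appendix script.
for i = 1:M                              % check-node (r) update
    cols = find(H(i, :));
    for k = cols
        d = prod(qij0(i, setdiff(cols, k)) - qij1(i, setdiff(cols, k)));
        rji0(i, k) = (1 + d)/2;
        rji1(i, k) = (1 - d)/2;
    end
end
for j = 1:N                              % variable-node (q) update
    rows = find(H(:, j))';
    for k = rows
        others = setdiff(rows, k);
        K0 = P0(j)*prod(rji0(others, j));
        K1 = P1(j)*prod(rji1(others, j));
        qij0(k, j) = K0/(K0 + K1);       % normalized so the pair sums to 1
        qij1(k, j) = K1/(K0 + K1);
    end
    Q0 = P0(j)*prod(rji0(rows, j));      % full posterior for bit j
    Q1 = P1(j)*prod(rji1(rows, j));
    v(j) = double(Q1 > Q0);              % tentative hard decision
end
done = all(mod(v*H', 2) == 0);           % stop once the syndrome is zero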
Once a valid codeword is determined, the message word is extracted by taking the
first string of bits, of length equal to the message length. The decoding algorithm is
then complete.

3. Results

Values of qi,j

Iteration 1:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.0548     0.9452
q0,2    0.3055     0.6945
q1,0    0.7581     0.2419
q1,3    0.6777     0.3223
q2,0    0.0121     0.9879
q3,1    0.9660     0.0340
q3,2    0.7455     0.2545
q4,1    0.2289     0.7711
q4,3    0.3773     0.6227
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 2:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.0931     0.9069
q0,2    0.0625     0.9375
q1,0    0.4237     0.5763
q1,3    0.8648     0.1352
q2,0    0.0121     0.9879
q3,1    0.9492     0.0508
q3,2    0.8208     0.1792
q4,1    0.0612     0.9388
q4,3    0.4436     0.5564
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 3:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.0752     0.9248
q0,2    0.2142     0.7858
q1,0    0.3596     0.6404
q1,3    0.7966     0.2034
q2,0    0.0121     0.9879
q3,1    0.9765     0.0235
q3,2    0.6866     0.3134
q4,1    0.0218     0.9782
q4,3    0.4193     0.5807
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 4:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.1085     0.8915
q0,2    0.2616     0.7384
q1,0    0.3823     0.6177
q1,3    0.8278     0.1722
q2,0    0.0121     0.9879
q3,1    0.9608     0.0392
q3,2    0.6339     0.3661
q4,1    0.0344     0.9656
q4,3    0.4600     0.5400
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 5:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.1235     0.8765
q0,2    0.2437     0.7563
q1,0    0.3447     0.6553
q1,3    0.7706     0.2294
q2,0    0.0121     0.9879
q3,1    0.9550     0.0450
q3,2    0.6522     0.3478
q4,1    0.0284     0.9716
q4,3    0.4359     0.5641
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 6:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.1182     0.8818
q0,2    0.2740     0.7260
q1,0    0.3667     0.6333
q1,3    0.7461     0.2539
q2,0    0.0121     0.9879
q3,1    0.9573     0.0427
q3,2    0.6437     0.3563
q4,1    0.0397     0.9603
q4,3    0.4275     0.5725
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Iteration 7:
qi,j    qi,j(0)    qi,j(1)
q0,0    0.1206     0.8794
q0,2    0.2559     0.7441
q1,0    0.3746     0.6254
q1,3    0.7547     0.2453
q2,0    0.0121     0.9879
q3,1    0.9534     0.0466
q3,2    0.6594     0.3406
q4,1    0.0449     0.9551
q4,3    0.4307     0.5693
q5,1    0.8808     0.1192
q6,2    0.1680     0.8320
q7,3    0.0082     0.9918

Values of rj,i
(Under the BPSK mapping 0 -> +1, 1 -> -1, the rj,i(+1) column corresponds to rj,i(0) and
the rj,i(-1) column to rj,i(1).)

Iteration 1:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.6854     0.3146
r0,1    0.8240     0.1760
r0,2    0.6262     0.3738
r1,3    0.2100     0.7900
r1,4    0.8175     0.1825
r1,5    0.1825     0.8175
r2,0    0.2232     0.7768
r2,3    0.7205     0.2795
r2,6    0.2232     0.7768
r3,1    0.8746     0.1254
r3,4    0.6869     0.3131
r3,7    0.6447     0.3553

Iteration 2:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.2482     0.7518
r0,1    0.9344     0.0656
r0,2    0.2702     0.7298
r1,3    0.2935     0.7065
r1,4    0.8549     0.1451
r1,5    0.2473     0.7527
r2,0    0.3369     0.6631
r2,3    0.6292     0.3708
r2,6    0.4045     0.5955
r3,1    0.6206     0.3794
r3,4    0.3252     0.6748
r3,7    0.4564     0.5436

Iteration 3:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.5745     0.4255
r0,1    0.8971     0.1029
r0,2    0.5621     0.4379
r1,3    0.1658     0.8342
r1,4    0.8421     0.1579
r1,5    0.1058     0.8942
r2,0    0.2870     0.7130
r2,3    0.7905     0.2095
r2,6    0.2193     0.7807
r3,1    0.5554     0.4446
r3,4    0.1411     0.8589
r3,7    0.4589     0.5411

Iteration 4:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.6370     0.3630
r0,1    0.9145     0.0855
r0,2    0.6193     0.3807
r1,3    0.1358     0.8642
r1,4    0.8629     0.1371
r1,5    0.0442     0.9558
r2,0    0.3761     0.6239
r2,3    0.6898     0.3102
r2,6    0.3933     0.6067
r3,1    0.5794     0.4206
r3,4    0.2082     0.7918
r3,7    0.4521     0.5479

Iteration 5:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.6148     0.3852
r0,1    0.8820     0.1180
r0,2    0.5921     0.4079
r1,3    0.1454     0.8546
r1,4    0.8509     0.1491
r1,5    0.0709     0.9291
r2,0    0.4111     0.5889
r2,3    0.6583     0.3417
r2,6    0.4361     0.5639
r3,1    0.5393     0.4607
r3,4    0.1775     0.8225
r3,7    0.4738     0.5262

Iteration 6:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.6515     0.3485
r0,1    0.8674     0.1326
r0,2    0.6169     0.3831
r1,3    0.1408     0.8592
r1,4    0.8465     0.1535
r1,5    0.0708     0.9292
r2,0    0.3989     0.6011
r2,3    0.6702     0.3298
r2,6    0.4220     0.5780
r3,1    0.5631     0.4369
r3,4    0.2338     0.7662
r3,7    0.4653     0.5347

Iteration 7:
rj,i    rj,i(+1)   rj,i(-1)
r0,0    0.6300     0.3700
r0,1    0.8726     0.1274
r0,2    0.6018     0.3982
r1,3    0.1494     0.8506
r1,4    0.8482     0.1518
r1,5    0.0790     0.9210
r2,0    0.4046     0.5954
r2,3    0.6501     0.3499
r2,6    0.4351     0.5649
r3,1    0.5714     0.4286
r3,4    0.2579     0.7421
r3,7    0.4643     0.5357

Values of Qj and Cj

Iteration 1:
Qj: 0.768564  0.869353  0.964745  0.507639  0.742638  0.947887  0.719896  0.985288
Cj: 1         1         1         1         1         1         1         1

Iteration 2:
Qj: 0.175094  0.183594  0.966820  0.233021  0.463686  0.972205  0.830456  0.991889
Cj: 0         0         1         0         0         1         1         1

Iteration 3:
Qj: 0.723217  0.460890  0.966690  0.630752  0.809100  0.949785  0.680155  0.990949
Cj: 1         0         1         1         1         1         1         1

Iteration 4:
Qj: 0.530679  0.268341  0.973079  0.147662  0.409536  0.981059  0.839161  0.992170
Cj: 1         0         1         0         0         1         1         1

Iteration 5:
Qj: 0.756417  0.579861  0.967244  0.342521  0.757300  0.955780  0.810181  0.989574
Cj: 1         1         1         0         1         1         1         1

Iteration 6:
Qj: 0.454376  0.208854  0.973617  0.213642  0.624265  0.968649  0.841218  0.992391
Cj: 0         0         1         0         1         1         1         1

Iteration 7:
Qj: 0.739903  0.338104  0.969217  0.408618  0.786916  0.956694  0.775395  0.992333
Cj: 1         0         1         0         1         1         1         1

4. Conclusion

In conclusion, my project is a success: I completed all of the programming in Matlab and
was able to produce the desired results. Throughout the project, the novelty of the LDPC
algorithm was the main obstacle. Instead of having readily available libraries of code, I
had to write my own; instead of working with a long-established algorithm, much of the
theory behind sparse graph codes is still being developed. I understood the challenges
the project presented when our teacher posted it, and I believe that in the future other
groups will be able to use this project as a basis for further exploration.

5. Appendix

Matlab Code used for the project


clear all;
clc;

% Probability-domain sum product algorithm LDPC decoder


% C         : Received signal vector (column vector)
% H         : LDPC parity check matrix
% NV        : Noise variance
% iteration : Number of iterations
% v         : Decoded vector (0/1)

C=[-0.4 -0.2 -1.1 0.6 -0.5 0.5 -0.4 -1.2]';


H=[1 1 1 0 0 0 0 0 ;0 0 0 1 1 1 0 0;1 0 0 1 0 0 1 0;0 1 0 0 1 0 0 1];
[M N] = size(H);

NV = 0.5;
iteration = 7;

% Prior probabilities
P1 = ones(size(C))./(1 + exp(2*C./(NV)));
P0 = 1 - P1;

% Initialization
K0 = zeros(M, N);
K1 = zeros(M, N);
rji0 = zeros(M, N);
rji1 = zeros(M, N);
qij0 = H.*repmat(P0', M, 1);
qij1 = H.*repmat(P1', M, 1);

% Iteration
for n = 1:iteration

fprintf('\n Iteration : %d\n', n);

% ----- Horizontal step -----
for i = 1:M

% Find non-zeros in the row
c1 = find(H(i, :));

for k = 1:length(c1)

% Product of (qij0 - qij1) over the row, excluding column c1(k)


drji = 1;
for l = 1:length(c1)
if l~= k
drji = drji*(qij0(i, c1(l)) - qij1(i, c1(l)));
end
end % for l

% No terminating semicolons: MATLAB echoes rji0/rji1 after each
% update, which is the source of the rj,i tables in the Results
rji0(i, c1(k)) = (1 + drji)/2

rji1(i, c1(k)) = (1 - drji)/2

end % for k

end % for i

% ----- Vertical step -----
for j = 1:N

% Find non-zeros in the column
r1 = find(H(:, j));

for k = 1:length(r1)

% Products of rji over the column, excluding row r1(k)


prodOfrij0 = 1;
prodOfrij1 = 1;
for l = 1:length(r1)
if l~= k
prodOfrij0 = prodOfrij0*rji0(r1(l), j);
prodOfrij1 = prodOfrij1*rji1(r1(l), j);
end
end % for l

% Update constants
K0(r1(k), j) = P0(j)*prodOfrij0;
K1(r1(k), j) = P1(j)*prodOfrij1;

% Update qij0 and qij1 (echoed without semicolons: the qi,j tables)
qij0(r1(k), j) = K0(r1(k), j)./(K0(r1(k), j) + K1(r1(k), j))
qij1(r1(k), j) = K1(r1(k), j)./(K0(r1(k), j) + K1(r1(k), j))

end % for k

% Posterior for bit j using all checks connected to it
Ki0 = P0(j)*prod(rji0(r1, j));
Ki1 = P1(j)*prod(rji1(r1, j));

% Get Qj
Qi0 = Ki0/(Ki0 + Ki1);
Qi1 = Ki1/(Ki0 + Ki1);

Q(j) = Qi1

% Decode Qj
if Qi1 > Qi0
v(j) = 1;
else
v(j) = 0;
end

end % for j

fprintf('Q :');
for k= 1:j
fprintf(' %f \t',Q(k));
end % for k
fprintf('\n');
fprintf('v :');
for i= 1:j
fprintf(' %d \t\t\t',v(i));
end % for i

end % for n
vHtranspose = mod(v*(H'), 2)   % final syndrome check on the decoded vector

