A Practical Guide to Error-Control
Coding Using MATLAB®
DISCLAIMER OF WARRANTY
The technical descriptions, procedures, and computer programs in this book
have been developed with the greatest of care and they have been useful to the
author in a broad range of applications; however, they are provided as is,
without warranty of any kind. Artech House, Inc. and the authors and editors
of the book titled A Practical Guide to Error-Control Coding Using MATLAB®
make no warranties, expressed or implied, that the equations, programs, and
procedures in this book or its associated software are free of error, or are
consistent with any particular standard of merchantability, or will meet your
requirements for any particular application. They should not be relied upon
for solving a problem whose incorrect solution could result in injury to a
person or loss of property. Any use of the programs or procedures in such
a manner is at the user’s own risk. The editors, author, and publisher dis-
claim all liability for direct, incidental, or consequent damages resulting from
use of the programs or procedures in this book or the associated software.
A Practical Guide to Error-Control
Coding Using MATLAB®
Yuan Jiang
artechhouse.com
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
All rights reserved. Printed and bound in the United States of America. No part of
this book may be reproduced or utilized in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information storage and
retrieval system, without permission in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service
marks have been appropriately capitalized. Artech House cannot attest to the ac-
curacy of this information. Use of a term in this book should not be regarded as
affecting the validity of any trademark or service mark.
10 9 8 7 6 5 4 3 2 1
Preface
It is left to the readers to determine whether the book has served its
purpose. The author welcomes feedback of any kind
(ecc.book.comments@hotmail.com).
Finally, the author would like to express his gratitude to editors Mark
Walsh, Lindsey Gendall, and Rebecca Allendorf at Artech House. Without
their appreciation and help, publication of this book would have been a
lot harder. The author is also indebted to the book reviewer, who remains
anonymous to the author, for his valuable comments and suggestions, which
enlightened the author a great deal.
1  Error Control in Digital Communications and Storage
Example 1.1
We send a message bit of 1 to the receiver. Due to a channel error, when
the bit passes through the channel and arrives at the receiver it becomes
a 0. Unfortunately, there is no indication whatsoever whether the received
bit is correct or not.
Now, instead of sending the raw message bit, we send a codeword c formed
by repeating the message bit three times. The codeword corresponding to a
message bit of 0 is c0 = (000), and the codeword for a message bit of 1 is c1 =
(111). The redundancy here is the two duplicates of the message bit.
[Figure 1.1: A typical digital communication system. At the transmitter,
the information source feeds an encoder followed by a modulator; the
signal then passes through the channel; at the receiver, a demodulator
followed by a decoder delivers the data to the information destination.]
Suppose that the received word is r = (011), which has an error in its
first position. We immediately know that r is in error, because all three bits
are supposed to be identical but they are not.
Notice that r differs from c0 in two bits but differs from c1 in only one
bit. It is therefore more likely that r would be received if c1 had been
sent than if c0 had been sent. So we can quite confidently conclude that
the transmitted codeword is c1 = (111) and the original message is 1. The
two redundant bits have helped us decode correctly.
This trivial repetition code provides both error detection and error
correction capability.
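The behavior of this repetition code is easy to reproduce in MATLAB. The
following lines are a minimal sketch (not taken from the book's companion
software) that encodes one message bit by repetition and decodes the
received word by majority vote:

msg = 1;                     % message bit to send
c   = repmat(msg, 1, 3);     % codeword: the bit repeated three times
e   = [1 0 0];               % error pattern: the channel flips the first bit
r   = mod(c + e, 2);         % received word, r = (011)
% majority-vote decoding: decide 1 if two or more received bits are 1
msg_hat = sum(r) >= 2;       % decoded message bit (1 here, as expected)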
Figure 1.2 illustrates a typical bit error rate (BER) versus signal-to-noise
ratio (SNR) curve for coded and uncoded systems.
The use of error correction, however, is not free. The redundancy acts
as overhead and it “costs” transmission resources (e.g., channel bandwidth
or transmission power). Therefore, we want the redundancy to be as small
as possible. To give the redundancy a quantitative measure, the coding rate
R is defined as the ratio of the message length to the codeword length. For
example, if a coding scheme generates a codeword of length n from a message
of length k, the coding rate is:
R = k/n  (1.1)
[Figure 1.2: Typical BER versus SNR curves for an uncoded and a coded
system. The BER axis runs from 10^-1 down to 10^-5 and the SNR axis from
3 to 9 dB; beyond a crossover point the coded curve falls below the
uncoded curve.]
As more redundancy is added, the error correction capability is
strengthened, but the coding rate drops. A good code should maximize the
error correction performance while keeping the coding rate close to 1.
On one hand, coding corrects channel errors and brings down the error
probability; on the other hand, the reduced power per bit causes the error
probability to go higher. So we will be better off only if the coding
increases the performance enough to make up for the signal power reduction
caused by the redundancy and produces a net gain. Let us reexamine Figure
1.2. We observe that the BER performance of the coded system is actually
worse than that of the uncoded system in the low SNR range (≤ 3.5 dB in
the figure). This is because the coding in that SNR range is not able to
offer enough performance improvement to cover the signal power loss due to
the redundancy.
In conclusion, codes must be designed to offer a net performance gain.
Extending BPSK to the nonbinary case, let us say that the symbol consists
of two bits. Then the symbol has four possible combinations: (00), (01),
(11), and (10). Assigning to the carrier four corresponding phase shifts
π/4, 3π/4, 5π/4, and 7π/4, we form so-called quadrature phase-shift keying
(QPSK). QPSK maps the symbols as (00) → 1 + j, (01) → -1 + j, (11) → -1 -
j, and (10) → 1 - j. The signal space constellations of BPSK and QPSK are
depicted in Figure 1.5.
[Figure 1.5: Signal space constellations of BPSK (bit 0 at +1 and bit 1
at -1 on the real axis) and QPSK (symbols (00), (01), (11), and (10) at
1 + j, -1 + j, -1 - j, and 1 - j in the I-Q plane).]
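The two mappings can be written compactly in MATLAB. The snippet below is
a sketch that assumes the bit-to-symbol assignment given above (with this
assignment the first bit of each pair sets the Q component and the second
bit the I component):

% BPSK: bit 0 -> +1, bit 1 -> -1
bits_bpsk = [0 1 1 0];
s_bpsk = 1 - 2*bits_bpsk;                  % [+1 -1 -1 +1]

% QPSK: (00) -> 1+j, (01) -> -1+j, (11) -> -1-j, (10) -> 1-j
pairs  = [0 0; 0 1; 1 1; 1 0];             % one bit pair per row
s_qpsk = (1 - 2*pairs(:,2)) + 1j*(1 - 2*pairs(:,1));
% s_qpsk = [1+1j; -1+1j; -1-1j; 1-1j]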
px = Q(√(2Eb/N0))  (1.2)
where Q(x) ≜ (1/√(2π)) ∫_x^∞ e^(−y²/2) dy is called the Q-function and
Eb/N0 is the bit SNR (Eb denotes the bit energy, and N0 denotes the AWGN
power spectral density). Like an AWGN channel, the BSC is also memoryless.
[Figure: The binary symmetric channel. A transmitted 0 or 1 is received
correctly with probability 1 − px and flipped with crossover probability
px.]
The DVD also includes a simple script qfunc to compute the Q-function. To
calculate the crossover probability at Eb/N0 = 0 dB, we type in the
following command:
>> eb_n0 = 0; % dB
>> eb_n0 = 10^(eb_n0/10); % convert to linear scale
>> px = qfunc(sqrt(2*eb_n0)) % crossover prob.
px =
0.0786
c̃ = c0, if P(r | c0) ≥ P(r | c1)
c̃ = c1, if P(r | c0) < P(r | c1)      (1.3)
where c̃ denotes the decoded word and P(r | c0) [or P(r | c1)] is the
probability that the word r is received given that the codeword c0 (or c1)
is transmitted. (For the sake of simplicity, we assume two codewords in
total. The principle remains the same for cases with more codewords.)
Maximum a posteriori (MAP) decoding instead selects the codeword that
maximizes P(c0 | r) [or P(c1 | r)], the probability that c0 (or c1) is
transmitted given that the vector r is received.
In the previous example, we implied that the codewords c0 and c1 are
equally likely to occur. If, say, c0 has an 80% chance of being sent, and
c1 has the remaining 20%, then we need to use MAP to achieve optimal
decoding. When all message symbols are equally probable, ML and MAP
decoding techniques are equivalent.
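For a BSC the likelihood P(r | c) depends only on the number of positions
in which r and c differ, so the ML rule (1.3) can be evaluated directly.
The following sketch (an illustration of the rule, not code from the book)
decodes the received word of Example 1.1:

c0 = [0 0 0];  c1 = [1 1 1];  % the two codewords
r  = [0 1 1];                 % received word
px = 0.0786;                  % crossover probability
n  = length(r);
% P(r|c) = px^d * (1-px)^(n-d), where d is the Hamming distance d_H(r, c)
d0 = sum(r ~= c0);  P_r_c0 = px^d0 * (1-px)^(n-d0);
d1 = sum(r ~= c1);  P_r_c1 = px^d1 * (1-px)^(n-d1);
% ML decision (1.3): pick the codeword with the larger likelihood
if P_r_c0 >= P_r_c1
    c_hat = c0;
else
    c_hat = c1;               % chosen here, since d1 = 1 < d0 = 2
end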
The minimum distance dmin reflects the error correction capability of a
code. To explain this, let us assume that C is a code with a total of
eight codewords c0, c1, ..., c7, which are graphically represented as
eight points in Figure 1.7. Without loss of generality, we also assume
dH(c1, c3) = dmin. Now we draw a circle around each codeword point with
the same radius and no overlap with the others. Evidently the maximum such
radius is t = ⌊(dmin − 1)/2⌋, where ⌊x⌋ denotes the greatest integer no
greater than x. These circles are called the Hamming spheres (or the
decoding spheres) of their corresponding codewords.
[Figure 1.7: Codewords drawn as points, each surrounded by a
nonoverlapping Hamming sphere of radius t.]
Now suppose that we send c3 to the receiver. If no channel errors exist, the
received word r will coincide with the codeword point c3 in the figure.
Otherwise the channel errors will move r away from where c3 is. If in this
case
r falls within the Hamming sphere of c3, r will still be correctly decoded to
c3, simply because it is closer to c3 than to any other codeword point (the ML
decoding criterion). If r falls out of the sphere, then it will be mistakenly
decoded to some other codeword. From this we actually can draw a general
conclusion:
Correct decoding is guaranteed if and only if the received word falls within
the Hamming sphere of the true codeword.
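For a small code the minimum distance, and hence the Hamming sphere radius
t, can be found by exhaustive comparison of all codeword pairs. A minimal
sketch, assuming the codewords are stored one per row of a matrix C:

C = [0 0 0; 1 1 1];           % codewords, one per row
M = size(C, 1);
dmin = inf;
for i = 1:M-1
    for j = i+1:M
        dmin = min(dmin, sum(C(i,:) ~= C(j,:)));  % Hamming distance
    end
end
t = floor((dmin - 1)/2);      % guaranteed error correction capability
% for the repetition code: dmin = 3, t = 1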
Example 1.2
The repetition code in the previous example contains only two codewords,
c0 = (000) and c1 = (111). Therefore, the Hamming distance between the
two codewords is the minimum Hamming distance, which is computed to
be:
dmin = dH(c0, c1) = 1 + 1 + 1 = 3
Based on (1.8), we see that the code is able to correct, at most, one
random error. Example 1.1 confirms that it does correct one error. It is
also easy
to verify that the code cannot correct two or more errors. For instance, if c1
is transmitted and r = (001) (containing two errors) is received, r will be
incorrectly decoded to c0.
P(E1 ∪ E2 ∪ ··· ∪ En) ≤ P(E1) + P(E2) + ··· + P(En)  (1.9)
where the equality holds when the subevents are mutually exclusive.
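As a small illustration of (1.9), the repetition code of Example 1.1 is
decoded incorrectly only if at least two of its three bits are flipped,
that is, if at least one of the three events "bits i and j are both in
error" occurs. The sketch below (a worked example added here, not taken
from the book) compares the exact failure probability with the union
bound:

px = 0.0786;                    % crossover probability
P_pair  = px^2;                 % P(both bits of a given pair are in error)
P_bound = 3*P_pair;             % union bound over the three pairs, ~0.0185
P_exact = 3*px^2*(1-px) + px^3; % exactly two errors or all three, ~0.0176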
Alternatively, an error control system may also be evaluated by its
coding gain. Coding gain measures the difference in the SNR levels between
the coded system and the uncoded system at a specified error rate. Going
back to Figure 1.2, the difference in Eb/N0 between the intersections of
the two BER curves with the horizontal line at 10^-4 is the coding gain of
the code at the error rate of 10^-4. While the coding gain must be
evaluated for each individual code of interest, the asymptotic coding
gain, an approximation to the coding gain when SNR >> 1, offers a simple
and quick measure of the coding performance. It has been shown that, for
hard-decision decoding, a code with rate R and minimum distance dmin has
an asymptotic coding gain of [1, 2]:
K = 10 · log10(R · dmin / 2)  (1.10)
For soft-decision decoding the asymptotic coding gain is 10 · log10(R ·
dmin), which is 3 dB better.
The coding gain at a given error rate can be computed from simulated BER
data with the cgain script; for example, at an error rate of 10^-4:
>> cgain(eb_n0,ber,10^(-4))
ans =
1.7112
The result is in dB.
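Equation (1.10) is easy to evaluate. As a quick sketch (not output from
the book's software), the rate-1/3 repetition code with dmin = 3 gives:

R = 1/3;  dmin = 3;
K_hard = 10*log10(R*dmin/2)   % -3.01 dB: no asymptotic gain, hard decision
K_soft = 10*log10(R*dmin)     %  0 dB with soft-decision decoding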
The channel capacity C, defined as the maximum number of bits per unit
time that can be transmitted free of error over a channel, is given by the
Shannon formula:
C = B · log2(1 + S/N)  (1.12)
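Equation (1.12) can be evaluated directly in MATLAB. The sketch below uses
an assumed bandwidth of 1 MHz and an assumed SNR of 10 dB purely as an
illustration:

B      = 1e6;                 % channel bandwidth in Hz (assumed)
snr_db = 10;                  % signal-to-noise ratio in dB (assumed)
snr    = 10^(snr_db/10);      % convert to linear scale
C      = B * log2(1 + snr)    % capacity, about 3.46 Mbps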