ERROR CODING
FOR ENGINEERS
THE KLUWER INTERNATIONAL SERIES
IN ENGINEERING AND COMPUTER SCIENCE
ERROR CODING
FOR ENGINEERS
A. Houghton
Synectic Systems, Ltd., United Kingdom
Houghton, A.
Error coding for engineers / A. Houghton.
p.cm. - (The Kluwer international series in engineering and computer science; SECS 641)
Includes bibliographical references and index.
ISBN 978-1-4613-5589-2 ISBN 978-1-4615-1509-8 (eBook)
DOI 10.1007/978-1-4615-1509-8
1. Signal processing. 2. Error-correcting codes (Information theory). I. Title. II. Series.
Preface ix
1. Introduction 1
1.1 Messages Need Space 1
1.2 The Hamming Bound 4
1.3 The Gilbert Bound 6
1.4 Where Do Errors Come From? 7
1.5 A Brief History of Error Coding 12
1.6 Summary 13
2. A Little Maths 15
2.1 Polynomials and Finite Fields 16
2.2 Manipulating Field Elements 19
2.3 Summary 24
3. Error Detection 25
3.1 The Horizontal and Vertical Parity Check 25
3.2 The Cyclic Redundancy Check 27
3.3 Longhand Calculation of the CRC 28
3.4 Performance 31
3.5 Hardware Implementation 32
3.6 Table-Based Calculation of the CRC 35
3.7 Discussion 39
6. Reed-Muller Codes 67
6.1 Constructing a Generator Matrix For RM Codes 67
6.2 Encoding with the Hadamard Matrix 74
6.3 Discussion 77
7. Reed-Solomon Codes 79
7.1 Introduction to the Time Domain 79
7.2 Calculation of the Check Symbols for One Error 80
7.3 Correcting Two Symbols 83
7.4 Error Correction in the Frequency Domain 88
7.5 Recursive Extension 91
7.6 The Berlekamp-Massey Algorithm 97
7.7 The Forney Algorithm 100
7.8 Mixed-Domain Error Coding 101
7.9 Higher Dimensional Data Structures 107
7.10 Discussion 118
10. Hardware 147
10.1 Reducing Elements 147
10.2 Multiplication 149
10.3 Division 153
10.4 Logs and Exponentials 158
10.5 Reciprocal 160
10.6 Discussion 164
Index 245
Preface
Error coding is the art of encoding messages so that errors can be detected
and, if appropriate, corrected after transmission or storage. A full
appreciation of error coding requires a good background in whole number
maths, symbols and abstract ideas. However, the mechanics of error coding
are often quite easy to grasp with the assistance of a few good examples.
This book is about the mechanics. In other words, if you're interested in
implementing error coding schemes but have no idea what a finite field is,
then this book may help you. The material covered starts with simple coding
schemes which are often rather inefficient and, as certain concepts are
established, progresses to more sophisticated techniques.
Error coding is a bit like the suspension in a car. Mostly it's unseen, but the
thing would be virtually useless without it. In truth, error coding underpins
nearly all modern digital systems and without it, they simply couldn't work.
Probably like some car suspensions, the elegance of error coding schemes is
often amazing. While it's a bit early to talk about things like efficiency, two
schemes of similar efficiency might yield vastly different performance and,
in this way, error coding is rather 'holistic'; it can't be treated properly
outside of the context for which it is required. Well, more of this later. For
now, if you're interested in whole number maths (which is actually really
good fun), have a problem which requires error coding, are an undergraduate
studying DSP or communications, but you don't have a formal background in
maths, then this could be the book for you. Principally, you will require a
working knowledge of binary; the rest can be picked up along the way.
Chapter 1
INTRODUCTION
This chapter deals with an overview of error coding, not especially in terms
of what's been achieved over the years or what the current state of the
subject is, but in pragmatic terms of what's underneath it that makes it work.
The maths which surrounds error coding serves two key purposes: one, it
shows us what can theoretically be achieved by error coding and what we
might reasonably have to do to get there and, two, it provides mechanisms
for implementing practical coding schemes. Both of these aspects are,
without doubt, daunting for anyone whose principal background is not maths;
yet, for the engineer who needs to implement error coding, many of the
practical aspects of error coding are tractable.
1.1 Messages Need Space

Space is at the heart of all error coding so it seems like a good place to
start. If I transmit an 8-bit number to you by some means, outside of any
context, you can have absolutely no idea whether or not what you receive is
what I sent. At best, if you were very clever and knew something about the
quality of the channel which I used to send the byte (an 8-bit number) to
you, you could work out how likely it is that the number is correct. In fact,
quite a lot of low-speed communication works on the basis that the channel
is good enough that most of the time there are no errors. The reason that you
cannot be certain that what you have received is what I sent is simply that
there is no space between possible messages. An 8-bit number can have any
value between and including 0 and 255. I could have sent any one of these
256 numbers so whatever number you receive might have been what I sent.
You've probably heard of the parity bit in RS232 communications. If not, it
works like this. When you transmit a byte, you count up the number of 1s in
it (its weight) and add a ninth bit to it such that the total number of 1s is even
(or odd; it doesn't matter which, so long as both ends of the channel agree).
So we're now transmitting 8 bits of data or information using 9 bits of
bandwidth. The recipient receives a 9-bit number which, of course, can have
512 values (or 2^n where n is the number of bits). However, the recipient also
knows that only 256 of the 512 values will ever be transmitted because there
are only eight data bits. If the recipient gets a message which is one of the
256 messages that will never be transmitted, then they know that an error
has occurred. In this particular case, the message would have suffered a
parity error.
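As a concrete (and purely illustrative) sketch of this scheme, the following Python fragment builds and checks an even-parity word; the example byte and the flipped bit positions are arbitrary choices, not values from the text.

```python
def weight(value):
    """Number of set bits (the weight) of a value."""
    return bin(value).count("1")

def add_even_parity(byte):
    """Append a ninth bit so that the 9-bit word has an even number of 1s."""
    return (weight(byte) & 1) << 8 | byte

def parity_ok(word9):
    """A received 9-bit word is valid only if its weight is even."""
    return weight(word9) % 2 == 0

tx = add_even_parity(0b10110010)      # 8 data bits become 9 transmitted bits
assert parity_ok(tx)                  # error free: the check passes
assert not parity_ok(tx ^ 0b1000)     # any single bit flip is detected
assert parity_ok(tx ^ 0b1100)         # ...but a double flip goes unnoticed
```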
In essence, what the parity bit does is to introduce some space between
messages. If a single bit changes in the original 8-bit message, it will look
like another 8-bit message. If a single bit changes in the 9-bit, parity-encoded
message, it will look like an invalid or non-existent message. To change the
parity-encoded message into another valid message will require at least 2
bits to be flipped. Hopefully this is really obvious and you're wondering why
I mentioned it. However, space, in one form or another, is how all error
coding works. The key to error coding is finding efficient ways of creating
this space in messages and, in the event of errors, working out how to correct
them.
Space between messages is measured in terms of Hamming distance (d).
The Hamming distance between two messages is a count of the minimum
number of bits that must be changed to convert one message into the other.
Error codes have a special measure called d_min. d_min is the minimum number
of bits that must be changed to make one valid message look like another
valid message. This, ultimately, determines the ability of the code to detect
or correct errors. Every bit-error that occurs increases the Hamming distance
between the original and corrupted messages by one. If t errors are to be
detected then d_min must be greater than t, that is, d_min ≥ t + 1.
Figure 1.1 Valid data points within the total message space: 2^n message points (circles), of which 2^k are valid messages (filled circles).
1.2 The Hamming Bound
The use of n and k, above, leads to the idea of (n, k) or (n, k, t) codes,
another way in which error codes are often defined. n is the total number of
message bits, so there are 2^n possible messages for the code, while k defines
how many of those bits are data or information bits. So out of the 2^n
messages, 2^k will be valid. The job of the error code is to spread the 2^k valid
messages evenly across the 2^n message space, maximizing d_min. A question
you might reasonably ask is, is there a relationship between t, n and k?
Another way of putting this might be to wonder how many bits of
redundancy (n - k) must be added to a message to create a t-error correcting
code. There is a pragmatic and a mathematical way of considering this
question. It's useful to start at the mathematical end since, if you can acquire
a broad understanding of what's involved in error coding right at the start, it
all makes more sense later on. Here goes then:
n, k and t are a little intertwined and it's easiest to choose three values and
see if they can be made to satisfy certain constraints. For an example,
consider the possibility of a (7, 4, 1) code. Could such a thing exist? There
are 16 valid messages (2^4) in a pool of 128 possible messages (2^7) and we
want to correct 1-bit errors. Each valid message must be surrounded by a sea
of invalid messages for which d ≤ t. For correction to be possible, no other
valid messages are allowed to share these unless, for them, d > t. So in this
example, each of the 16 messages must have seven surrounding messages of
d = 1 that are unique to them to permit correction. Each valid message
requires 1 (itself) + 7 (d = 1 surrounding) messages. We need, therefore, 8
messages per valid message and there are 16 valid messages giving 8 × 16 =
128 messages in total. In this case there are exactly 128 available so, in
principle at least, this code could exist. This limit is called the Hamming
bound and codes (of which there are a few) which satisfy exactly this bound
are called perfect codes. For most codes 2^n is somewhat greater than the
Hamming bound.
In general terms, an n-bit message will have

    n! / (d! (n - d)!)                                   (1.3)

messages that are d bits distance from it. This is simply the number of
different ways that you can change d bits in n. So, for a t-error correcting
code, each valid message requires

    Σ (d = 0 to t) [ n! / (d! (n - d)!) ]                (1.4)

messages out of the total pool. If there are k data bits then a total of

    2^k × Σ (d = 0 to t) [ n! / (d! (n - d)!) ]

messages must fit into the 2^n available. The Hamming bound is found by
dividing both sides by 2^k, which simplifies to

    Σ (d = 0 to t) [ n! / (d! (n - d)!) ] ≤ 2^(n - k)    (1.5)
Most error codes sit somewhere between these bounds which means that
often, even though the capacity to correct errors may have been exceeded,
the code can still detect unrecoverable errors. In this instance the received
message will be more than t bits from any valid code, so no correction will
be attempted.
So what about the pragmatic approach? During decoding, the extra bits
added to a message for error coding give rise to a syndrome, typically of the
same size as the redundancy. The syndrome is simply a number which is
usually 0 if no errors have occurred. If t errors are to be corrected in a
message of n bits then the syndrome must be able to identify uniquely the t
bit-error locations. So, in n bits, the 2^(n - k) values of the syndrome must
distinguish all the patterns of t or fewer errors, giving

    Σ (d = 0 to t) [ n! / (d! (n - d)!) ] ≤ 2^(n - k)
which is, of course, the Hamming bound. Another way of viewing this is to
consider the special case of messages where n = 2^i - 1 (i is some integer). At
least i bits will be needed to specify the location of one bit in the message.
For example, if the message is 127 bits then a 7-bit number is required to
point to any bit location in the message (plus the all-zero/no error result). So
for t errors, at least t × i check bits will be needed. We can compare this
result with the (15, 11, 1) code. To find one error in 15 bits needs 1 × 4 = 4
check bits, which there are. If you try this approach with the Golay code, you
find that 15 check bits should be needed as opposed to the 11 that there
actually are. See if you can work out why this is and repeat the calculation
for the (90, 78, 2) code.
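As a quick way of repeating these calculations (this sketch is mine, not the book's), the Hamming bound of (1.5) and the cruder t × i estimate can be checked in a few lines of Python for the codes just mentioned:

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """True if sum_{d=0..t} C(n, d) <= 2^(n - k), i.e. an (n, k, t) code could exist."""
    return sum(comb(n, d) for d in range(t + 1)) <= 2 ** (n - k)

for n, k, t in [(7, 4, 1), (15, 11, 1), (23, 12, 3), (90, 78, 2)]:
    patterns = sum(comb(n, d) for d in range(t + 1))  # codeword itself + surrounding errors
    t_by_i = t * n.bit_length()                       # crude 't x i' check-bit estimate
    print(f"({n},{k},{t}): {patterns} patterns per codeword, 2^{n-k} = {2**(n-k)} available; "
          f"t x i suggests {t_by_i} check bits, the code actually has {n - k}")
```

Running it shows, for instance, that the Golay code needs 15 check bits by the crude estimate but only 11 in fact, while still exactly meeting the Hamming bound.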
A few more basic measures can be derived from (n, k, t) codes.
Redundancy is the name given to the extra bits (r) added to a message to
create space in it. The amount of redundancy added gives rise to a measure
called the coding rate R, which is the number of data bits k divided by the
total number of bits k + r (= n). In percent, R gives the code efficiency.
1.4 Where Do Errors Come From?

Errors arise from a number of sources and the principal source which is
anticipated should, at least in part, determine the code that is used. There are
two main types of error which are caused by very different physical
processes. First there are random errors. Any communications medium
(apart perhaps from superconductors which, as yet, are not ubiquitous), is
subject to noise processes. These may be internal thermal and partition noise
sources as electrons jostle around each other, or they may be the cumulative
[Figure 1.2: Gaussian probability density curves of the received signal, centred on the ±1 volt markers.]
The curves about the ±1 volt markers are the Gaussian function (also
called the normal function) which has the form in (1.6), below.
    p(x) = (1 / (σ√(2π))) e^(-(x - x̄)² / (2σ²))        (1.6)

σ is the standard deviation of the function, while x̄ is the mean. The curve
describes, in terms of probability, what the received signal will be. The total
area under the curve is 1, so the received signal must have some value. In
this example, we could calculate how likely a 0 is to be
misinterpreted as a 1 by measuring the area under the solid curve that falls to
the right of the 0 volt decision threshold. Suppose it was 0.001, or 1/1000. This
means that one in one thousand 0s will be read as a 1. The same would be
true for 1s being misread as 0s. So, all in all, about one in a thousand bits
received will be in error, giving a bit error rate (or BER) of 0.001. The BER
can be reduced by increasing the distance between the signals. Because of
the shape of the PDF curve, doubling the 0/1 voltages to ±2 volts would give
a dramatic error reduction.
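Purely as an illustration of this point (the noise level below is an assumed value, not one given in the text), the tail area can be evaluated with the complementary error function; doubling the signalling voltage collapses the error probability:

```python
from math import erfc, sqrt

def tail_probability(signal_volts, sigma):
    """Area of the Gaussian PDF that falls on the wrong side of the 0 V threshold."""
    return 0.5 * erfc(signal_volts / (sigma * sqrt(2)))

sigma = 0.32   # assumed noise standard deviation, picked so +/-1 V gives roughly 0.001
for volts in (1.0, 2.0):
    print(f"+/-{volts:.0f} V signalling: BER ~ {tail_probability(volts, sigma):.1e}")
```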
Channels are often characterised graphically by showing BER on a
logarithmic axis against a quantity called Eb/N0. This is a normalized
measure of the amount of energy used to signal each bit. You can think of it
like the separation of the 0s and 1s in Figure 1.2, while the noise processes
which impinge on the signal control the width (or σ) of the
Gaussian PDF. The greater Eb/N0, the smaller the BER. Eb/N0 can be
thought of in a very similar way to SNR, or signal to noise ratio. Figure 1.3
shows an example of bit error rate against signal strength.
[Figure 1.3: log10(BER), from 0 down to -6, plotted against Eb/N0 (0 to 7 dB); the low-Eb/N0 region is marked "Coding loss" and the high-Eb/N0 region "Coding gain".]
The solid line represents an uncoded signal whereas the dotted line is the
same channel, but augmented by error coding. This gives rise to an
enormously important idea called coding gain. At very low powers (and
consequently high error rates), the addition of error coding bits to the
message burdens it: in Figure 1.3 the coded channel needs a higher Eb/N0
simply to match the performance of the uncoded signal at an Eb/N0 of 0. The
reason is that to get the same number of data bits through the channel when
coded, more total bits must be sent and so more errors occur. Because the error
rates are high, the coding is inadequate to offset them (t is exceeded), so
we're actually worse off. However, at signal powers of 2, the coded and
uncoded channels have similar performance while, above 2, the coded
channel has a better performance. The early part of the graph represents
coding loss, while the latter, coding gain. So why is this important?
If a particular system requires a BER of no greater than one error in 10^5,
then an uncoded channel in this example will need an Eb/N0 of 5. However,
the coded channel can provide this performance with an Eb/N0 of only 4, so
the code is said to give a coding gain of 1 dB (Eb/N0). Put this in terms of a
TV relay satellite and it starts to make good economics. Error coding can
reduce the required transmitter power or increase its range dramatically,
which has a major impact on the weight and lifetime of the satellite.
Approximately, when error coding is added to a signal, the two distributions
in Figure 1.2 get closer together since, for the same transmitter power, more
bits are being sent. Normally, this would increase the BER. However, the
effect of adding error coding reduces the widths (σ) of the distributions such
that the BER actually decreases.
Before leaving random errors, there is another aspect to them which has
been usefully exploited in some modern coding schemes. When a random
error occurs, the nature of the Gaussian PDF means that the signal is
increasingly likely to be near to the decision boundary (0 volts in this
example). Rather than make a straightforward 1/0 decision, each bit can be
described by its probability of being a 1 or a 0 (a soft decision). Figure 1.4
shows the typical soft output.
1.5 A Brief History of Error Coding

Modern error coding, while based on maths that was discovered (in some
cases) centuries ago, began in the late forties with most of the major
innovations occurring over the fifties, sixties and seventies. In 1948 Claude
Shannon proved that, if information was transmitted at less than the channel
capacity, then it was possible to use the excess bandwidth to transmit error
correcting codes to increase arbitrarily the integrity of the message. The race
was now on to find out how to do it.
During the early fifties Hamming codes appeared, representing the first
practical error correction codes for single-bit errors. Golay codes also
appeared at this time allowing the correction of up to three bits. By the mid-
fifties Reed-Muller codes had appeared and, by the late fifties, Reed-
Solomon codes were born. These last codes were symbol, rather than bit,
based and underpin the majority of modern block coding. With block codes,
encoding and decoding operate on fixed sized blocks of data. Of necessity,
the compilation of data into blocks introduces constraints on the message
size, and latency into the transmission process, precluding their use in
certain applications.
It is important to remember that, while the codes and their algebraic
solutions now existed, digital processing was not what it is today. Since
Reed-Solomon codes had been shown to be optimal, the search was shifted
somewhat towards faster and simpler ways of implementing the coding and
decoding processes to facilitate practical systems. To this end, during the
mid-sixties, techniques like the Forney and Berlekamp-Massey algorithms
appeared. These reduced the processing burdens of decoding to fractions of
what had been previously required, heralding modern coding practice as it is
today.
Almost in parallel with this sequence of events and discoveries was the
introduction in the mid-fifties, by Elias, of convolutional codes as a means of
introducing redundancy into a message. These codes differed from the others
in that data did not need to be formatted into specific quantities prior to
encoding or decoding. Throughout the sixties various methods were created
for decoding convolutional codes and this continued until 1967, when Viterbi
produced his famous algorithm for decoding convolutional codes which was
subsequently shown to be a maximum-likelihood decoding algorithm (i.e.,
the best you could get).
The eighties contributed in two key ways to error coding, one of which
was the application of convolutional coders to modulation, creating today's
reliable, high-speed modems. Here the error coding and modulation
1.6 Summary
Try evaluating (1.3) with n = 7 and d = 3:

    7! / (3! 4!) = 35.
You've actually calculated the number of different ways that you can
change three bits in a total of seven. Take any 7-bit message and there are 35
other messages that are 3 bits distance from it. At this point, you might even
be able to construct an error code using the Gilbert bound coupled with a
random code selection and deletion process.
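The 'random selection and deletion' idea can be sketched in a few lines of Python (an illustration of the principle rather than a recipe from the book): shuffle all 2^7 messages and keep each one only if it is at least 3 bits from everything already kept.

```python
import random

def distance(a, b):
    """Hamming distance between two equal-length binary words."""
    return bin(a ^ b).count("1")

def random_code(n=7, d_min=3, seed=2):
    random.seed(seed)
    candidates = list(range(2 ** n))
    random.shuffle(candidates)
    code = []
    for word in candidates:
        if all(distance(word, kept) >= d_min for kept in code):
            code.append(word)          # keep it; otherwise it is deleted
    return code

code = random_code()
print(f"kept {len(code)} codewords out of 2^7 = 128 with d_min >= 3")
```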
Error types and sources have been considered, leading into processes
which can augment error coding like soft-decision decoding and
interleaving. When all these ideas are brought together, some spectacular
results emerge. Messages that come back from deep space probes can be
buried deeply in noise and yet, with error coding, the data are still
recoverable. Digital TV places tight constraints on acceptable BER and yet,
with error coding, it's possible without enormous transmitter power. CDs and
especially DAT (Digital Audio Tape) players simply wouldn't work without
error coding. Systems like modems combine error coding with signal
modulation to provide communications almost at the limit of what is
theoretically possible. So what we need now are some practical schemes for
implementing error coding and decoding.
Chapter 2
A LITTLE MATHS
for example. Since we will only be dealing with binary, x is simply 2. The
above example could be written out term by term but, where there is an even
number of similar terms, the result is zero (i.e. 1 + 1 = 0) so the x^5 term goes,
since x^5 + x^5 = (1 + 1)x^5 = 0. It may be that the finite field we're using
doesn't contain elements as high as x^9, in which case the answer is modified
further, using the generator polynomial to 'wrap' the higher order terms back
into the field. We'll see how this operates shortly.
A finite field is constructed using a generator polynomial (GP) which is
primitive. This means that it has no factors (or is irreducible). Consider the
following:

    x^2 + x + 1 = 0,
    x^3 + x + 1 = 0,   x^3 + x^2 + 1 = 0,
    x^5 + x^4 + x^2 + x + 1 = 0 ...
The set of numbers which the GP describes contains elements one bit less in
length than the GP itself. Technically speaking, the degree of the elements
(their highest power of x) is one less than that of the GP. So the field that
x^4 + x + 1 forms, for example, contains elements which include bits up to x^3
but no higher. You may already be familiar with finite fields but in a
different guise. Pseudo random binary sequences are sometimes generated in
the same way as finite fields. PRBSs are sequences of numbers which repeat
in the long term, but over the short term, appear random.
Finite fields are often referred to in the form GF(2^n). The G stands for
Galois, who originated much of the field theory, while the F stands for field.
The 2 means that the field is described over binary numbers while n is the
degree of the GP. Such a field has, therefore, 2^n elements (or numbers) in it.
Taking a small field, GF(2^3), we'll use the polynomial x^3 + x + 1 = 0. To
calculate the field we start off with an element called α which is the
primitive root. It has this title because all elements of the field (except 0) are
described uniquely by a power of α. In this case α is 2, 010 or x. For any
finite field GF(2^n), α^(2^n - 1) = α^0 = 1. What's special about these fields is that
the elements obey the normal laws of maths insofar as they can be added,
multiplied or divided to yield another member of the field. Table 2.1 shows
how the field is constructed.
There is no term x^3 but, from the primitive polynomial x^3 + x + 1 = 0, we can see that
x^3 = x + 1, remembering also that 1 = -1. Substituting 'folds' x^3 back into the bottom 3
bits.
Compare this with the non-primitive polynomial x^3 + x^2 + x + 1 = 0:

    α^1 = x = 010 (2)
    α^2 = x·x = x^2 = 100 (4)
    α^3 = x·x·x = x^3; this time, x^3 = x^2 + x + 1, so α^3 = 111 (7)
    α^4 = x·α^3 = x^3 + x^2 + x = (x^2 + x + 1) + x^2 + x = 1 = 001 (1)

and the cycle repeats. This time, we have not generated all non-zero elements with three
bits. In fact, there are three cycles, corresponding to the three factors of x^3 + x^2 + x + 1 = 0.
Exactly which one is generated depends on the starting conditions. The above cycle
contains 2, 4, 7, 1. Starting with a value not represented here, say 3, then

    α^? = x + 1 = 011 (3)
    α·α^? = x·(x + 1) = x^2 + x = 110 (6)
    α·α^(?+1) = x·(x^2 + x) = x^3 + x^2 = x + 1 = 011 (3)

generating a smaller cycle containing 3 and 6. Starting with the one remaining unused
number, 5, then

    α^? = x^2 + 1 = 101 (5)
    α·α^? = x·(x^2 + 1) = x^3 + x = x^2 + 1 = 101 (5)
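The two behaviours can be reproduced with a short Python sketch (mine, not the book's): repeatedly multiplying by x is just a left shift, and any x^3 term that appears is folded back using the chosen polynomial. With x^3 + x + 1 (binary 1011) every non-zero element is visited, while x^3 + x^2 + x + 1 (binary 1111) splits into the short cycles above.

```python
def cycle(start, poly_bits, degree=3):
    """Successive multiplications by x, folding x^degree back in with the polynomial."""
    seen, value = [], start
    while value not in seen:
        seen.append(value)
        value <<= 1                      # multiply by x
        if value & (1 << degree):        # an x^3 term has appeared...
            value ^= poly_bits           # ...so substitute it using the polynomial
    return seen

print(cycle(0b001, 0b1011))   # x^3 + x + 1:        [1, 2, 4, 3, 6, 7, 5] - all 7 elements
print(cycle(0b001, 0b1111))   # x^3 + x^2 + x + 1:  [1, 2, 4, 7] - then it repeats
print(cycle(0b011, 0b1111))   # [3, 6]
print(cycle(0b101, 0b1111))   # [5]
```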
A multiplication can also generate terms beyond the field; x^4, for example, is
folded back as x^4 = x·x^3 = x(x + 1) = x^2 + x. Division needs a little more care.
Consider dividing α by α^5, that is, 010 by 111.
To evaluate this we need to examine long division over finite fields. This
is not altogether dissimilar to normal long division and is best attempted bit-
wise. Rearranging the problem gives

    010 / 111
Step 1 involves aligning the most significant set bit of the denominator
with the most significant set bit in the numerator (assuming both are non-
zero). So the sum above becomes

                     1    (result)
        | 0  1  0 |
        |    1  1 |  1

and a 1 is placed in the result, coinciding with the least significant bit
position of the denominator. A remainder is calculated by exclusive-ORing
the denominator with the numerator. When the remainder is zero, the
division is complete. The two vertical lines represent the valid range of the
result which, over this field, includes x^2, x^1 and x^0. Now you can see that the
first calculated bit of the result falls in the position x^-1, out of range. To
prevent this from happening, we can use the fact that 1 = x + x^3, from the
generator polynomial.
In Table 2.1, you may recall that the GP was used to change x^3 into x + 1, or
α^3. For numerical purposes, we could think of x^3 as α^3 and x^4 as α^4 and so
on for any x^i. For example, we saw, in the previous multiplication, how x^4
was brought back into the field using the substitution x(x + 1). This can also
be done for negative i. In the division above, there is a result and remainder
term in the x^-1 column. In just the same way that x^4 can be thought of as α^4
even though it is not strictly in the field, so x^-1 can be thought of as α^-1, or α^6
= x^2 + 1. Rearranging the GP, we have

    x^-1 (x^3 + x + 1) = x^2 + 1 + x^-1 = 0

so

    x^-1 = x^2 + 1.

The GP can, therefore, be used to move bits both to the right and the left in
order to bring them back into the field. Rearranging, the calculation above
becomes
[The full longhand working, iterations I1 to I8: at each step the denominator 111 is exclusive-ORed in and, whenever the next result bit would fall to the right of x^0, the left-most 1 of the current numerator is first re-expressed using the GP; the partial result bits accumulate in the x^2, x^1 and x^0 columns.]
To summarise, when dividing would result in a bit in the result to the right
of x^0, the left-most 1 in the current numerator is replaced by substitution
using the GP. As a result of this iterative solution, the result (the top three
rows) looks a bit confusing. However, the bits are summed vertically
modulo-2 to give 011 or α^3, the correct result.
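In practice this kind of division is rarely done longhand; once log and antilog tables for the field have been built, division is just a subtraction of logs modulo 2^n - 1. The sketch below (an illustration, not the book's circuit) confirms the result just obtained: 010 ÷ 111, i.e. α ÷ α^5, is α^3 or 011.

```python
# Log and antilog (exponent) tables for GF(2^3) generated by x^3 + x + 1.
POLY, N = 0b1011, 7                     # 2^3 - 1 = 7 non-zero elements
exp_table, log_table = [], {}
value = 1
for i in range(N):
    exp_table.append(value)             # exp_table[i] is alpha^i as a bit pattern
    log_table[value] = i                # log_table[alpha^i] is i
    value <<= 1
    if value & 0b1000:
        value ^= POLY                   # fold x^3 back into the field

def gf_mul(a, b):
    return 0 if 0 in (a, b) else exp_table[(log_table[a] + log_table[b]) % N]

def gf_div(a, b):
    return exp_table[(log_table[a] - log_table[b]) % N]   # b must be non-zero

assert gf_div(0b010, 0b111) == 0b011    # alpha / alpha^5 = alpha^3, as above
assert gf_mul(0b011, 0b111) == 0b010    # multiplying back recovers alpha
```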
Figure 2.1 An anti-logging circuit.
2.3 Summary
Obviously this will seem a little out of context at the moment since there
have been no examples yet to hang these ideas on; the engineering mind is
often frustrated by solely abstract ideas. However, you should have some
feel now for how finite fields can be constructed and how their members can
be manipulated. In particular, you will have seen that, apart from 0, elements
can be viewed either as logs (an expression of the power of α), or as binary bit
patterns. It turns out that both representations are essential: for example,
you can't readily add elements if they're expressed as logs, while multiplication
and division are much more readily achieved with logs.
Some problems, such as finding the roots of a polynomial, exist in both
domains at once and a solution to this will be examined later. Unfortunately,
movement between these forms is generally either fast and inelegant, via
look-up-tables, or slower but more compact, via algorithms. GF(2^2),
however, has logs which are offset from bit patterns by 1, i.e. α^i = i + 1,
making translation simple. Unfortunately, it's also rather a small field for
practical use.
Chapter 3
ERROR DETECTION
Techniques for error detection and error correction are often, although not
always, related. All work on the idea of creating space between valid coded
messages. With error correction, however, more space is generally required
as is some mechanism for finding the nearest valid message to a corrupted
one. Error detection provides a gentle introduction into practical coding and
two techniques are examined. Both techniques can be applied to error
correction and provide the ideal starting point.
3.1 The Horizontal and Vertical Parity Check

Almost without a doubt, you will have heard of or used parity as a means
of checking for errors. The parity bit is an extra bit added to a data word to
fix its weight (the number of set bits) in a known state of either even or odd.
It was shown in the introduction how this creates a minimum Hamming
distance (d_min) of two bits between valid messages by doubling the total
message space. This technique has been employed in many low-speed serial
communications links for decades. Some examples of parity are shown in
Table 3.1 below. Column E contains the even parity bits while column O
shows the odd parity bits.
Data              E  O
0 0 1 0 1 1 0 1 0 1
1 1 1 0 1 1 1 1 1 0
1 1 0 0 1 0 0 0 1 0
1 1 1 0 0 0 1 0 0 1
0 0 0 0 0 0 1 1 0 1
Because d_min is 2, no even bit errors (multiples of two bits) within a single
character will be detected. The first error moves the message into the invalid
space in between valid messages, while the second error moves the message
back into the valid message space. This is unfortunate since burst errors,
typically created by spikes on power supplies, often comprise a few correct
bits in between two error bits. The last row in Table 3.1 contains a five-bit
burst error, where the third and seventh bits are in error. At the receiver
these errors will pass undetected. This effect can be partly offset by
including a vertical parity check (often called a checksum) with a block of
characters. This is not always practical since communication systems that
use parity are often asynchronous, making compilation of characters into a
block impossible as there may be arbitrary gaps between characters.
Choosing even parity, Table 3.2 shows the previous data with the addition of
such a vertical (even) parity check.
Data E
0 0 1 0 1 1 0 1 0
1 1 1 0 1 1 1 1 1
1 1 0 0 1 0 0 0 1
1 1 1 0 0 0 1 0 0
0 0 !! 0 0 0 1 1 0
1 1 !! 0 1 0 !! 1 ~Received parity
1 1 1 0 1 0 ! 1 ~Expected parity
In this case a difference between the received and expected vertical parity
alerts the receiver to the presence of errors not detected by the horizontal
parity checks. Even so, strategically positioned multiples of four error bits
can still go unnoticed by both horizontal and vertical checks. Some systems
calculate the vertical check by adding the value of each character and discarding
the higher order bits of the sum. This technique can help to offset systematic
errors where, for example, the bit patterns in certain characters may
predispose them to corruption.
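A small Python sketch (not taken from the book) shows the idea for the data of Table 3.2: each character carries its own even-parity bit, and a column-wise exclusive-OR over the block forms the vertical check word.

```python
def even_parity(word):
    return bin(word).count("1") & 1

block = [0b00101101, 0b11101111, 0b11001000, 0b11100010, 0b00000011]

horizontal = [even_parity(ch) for ch in block]   # one parity bit per character
vertical = 0
for ch in block:
    vertical ^= ch                               # column-by-column (vertical) parity

for ch, p in zip(block, horizontal):
    print(f"{ch:08b}  {p}")
print(f"vertical check: {vertical:08b}")         # -> 11101011
```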
If you've studied communications or digital signal processing, you may
well be aware of schemes like eight to fourteen modulation (EFM) which
seek to impose desirable frequency characteristics onto the allowed
(modulated) codes, making them suited to the channel through which they
must pass. With raw data, where long (or short) runs of ones or zeros are free
to occur, this is not possible. Interactions between the channel and the codes
can, therefore, give rise to repeatable errors which may well cancel each other
if only simple vertical parity is used.
    CRC = Σ (i = n to m-1) d_i α^i        (3.1)

remainder will be zero. It's a bit like taking a number, for example 14,
dividing it by another number, say 5, and transmitting 10 (14 minus the
remainder). At the receiver the received message should be divisible by 5.
Once the CRC is appended to the message, replacing d_(n-1) to d_0, (3.2) is true:

    Σ (i = 0 to m-1) d_i α^i = 0        (3.2)
    1 0 1 1 0 1 1 0 0 0 0 0

From (3.1), this becomes

    x^11 + 0 + x^9 + x^8 + 0 + x^6 + x^5 + 0 + 0 + 0 + 0 + 0

and then, using the GP, move the bits rightwards into positions x^3 to x^0. This
is most easily accomplished using a long division, as follows in Table 3.1.
Using x^4 + x^3 + 1, generate the field and try adding the elements α^11, α^9, α^8,
α^6 and α^5 to verify the result. The steps involved in calculating the CRC are
as follows:
as folIows:
1. append as many zeros to the data as there will be CRC bits (one fewer than
GP)
2. line up the MSB of the GP (underlined) with the first 1 (italic) in the data
andXOR ($)
3. line up the MSB of the GP with the next 1 (italic), bring down data bits
and zeros (.J,) as needed
4. XOR and go back to step 3 until there are no bits greater than x 3•
5. the remainder under the appended zeros (bold) is the CRC.
1 1 0 1 0 0 1 0
1 0 1 1 0 1 1 0 0 0 0 0
1 1 0 0 1 ↓
⊕ 1 1 1 1 1
1 1 0 0 1 ↓ ↓
⊕ 1 1 0 1 0
1 1 0 0 1 ↓ ↓ ↓
⊕ 1 1 0 0 0
1 1 0 0 1 ↓
⊕ R 0 0 1 0
The transmitted message is 1011 0110 0010, comprising eight data and four
check bits. The quotient is discarded, serving only as a record of where the
exclusive-ORing took place. At the receiver an almost identical process takes place.
The only difference is that the CRC is used in the division rather than
appending four zeros. This is shown in Table 3.2.
1 1 0 1 0 0 1 0
1 0 1 1 0 1 1 0 0 0 1 0
1 1 0 0 1 ↓
⊕ 1 1 1 1 1
1 1 0 0 1 ↓ ↓
⊕ 1 1 0 1 0
1 1 0 0 1 ↓ ↓ ↓
⊕ 1 1 0 0 1
1 1 0 0 1 ↓
⊕ R 0 0 0 0
In the absence of errors the remainder is zero but, if a detectable error has
occurred, then it will be non-zero. This result can be verified by adding the
elements α^11 + α^9 + α^8 + α^6 + α^5 + α (from 3.2), based on the 1s in the
message. This result is shown in Table 3.3.
30 3.3 Longhand Calculation of the CRC
α^11   1 1 0 1
α^9    0 1 0 1
α^8    1 1 1 0
α^6    1 1 1 1
α^5    1 0 1 1
α^1    0 0 1 0
Σ      0 0 0 0
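The longhand procedure translates almost directly into a few lines of Python (my sketch, not code from the book): append four zeros, then XOR the GP in under every leading 1. It reproduces the CRC 0010 for the data 1011 0110 with the GP 11001, and a zero remainder when the full transmitted message is checked.

```python
def crc_remainder(bits, gp=(1, 1, 0, 0, 1), append_zeros=True):
    """GF(2) long division of the message by the GP; returns the remainder bits."""
    reg = list(bits) + [0] * (len(gp) - 1 if append_zeros else 0)
    for i in range(len(reg) - len(gp) + 1):
        if reg[i]:                            # align the GP's MSB with each leading 1
            for j, g in enumerate(gp):
                reg[i + j] ^= g               # ...and exclusive-OR it in
    return reg[-(len(gp) - 1):]

data = [1, 0, 1, 1, 0, 1, 1, 0]
print(crc_remainder(data))                                     # [0, 0, 1, 0] - the CRC
print(crc_remainder(data + [0, 0, 1, 0], append_zeros=False))  # [0, 0, 0, 0] - no errors
```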
Table 3.4 gives a reworking of the remainder with two added bit-errors
(bold).
Q 1 1 1 0 0 0 0 0
1 0 0 1 1 1 1 0 0 0 1 0
1 1 0 0 1 ↓
⊕ 1 0 1 0 1
1 1 0 0 1 ↓
⊕ 1 1 0 0 1
1 1 0 0 1 ↓ ↓ ↓ ↓ ↓
⊕ R 0 0 0 1 0
1 1 0 0 1 0
1 0 1 0 0 0 0 0 0 0
1 1 0 0 1 ↓
⊕ 1 1 0 1 0
1 1 0 0 1 ↓ ↓ ↓
⊕ 1 1 0 0 0
1 1 0 0 1 ↓
⊕ R 0 0 1 0
There are, however, error patterns which will not be detected. These can be
generated by adding the GP to the message in any arbitrary position. Table
3.6 gives an example of this. The errors are shown in bold, coinciding with
bits x^2(x^4 + x^3 + 1).
1 1 0 1 0 0 1 0
1 0 1 1 0 0 0 0 0 1 1 0
1 1 0 0 1 ↓
1 1 1 1 0
1 1 0 0 1 ↓ ↓
1 1 1 0 0
1 1 0 0 1 ↓ ↓
1 0 1 0 1
1 1 0 0 1 ↓
1 1 0 0 1
1 1 0 0 1 ↓
0 0 0 0
Adding errors equal to the GP effectively creates another valid data + CRC
pattern. Using the field described by x^4 + x^3 + 1, try adding any three
elements with the relationship α^(4+i) + α^(3+i) + α^i. The GP changes a message
from one valid code to another. Pragmatically, in these examples I have used
eight-bit data, which means that there are 256 valid data bit patterns. Since
the CRC is only four bits, it can only have 16 different values. The 256 data
values must, therefore, map onto only 16 possible CRCs which in turn means
that 16 data patterns will share the same CRC. The corollary to this is that 1 in
16 errors will go undetected. This may seem a disappointing result until you
consider that in reality the CRC will usually be 16 bits long, missing only 1
in 65,536 of all errors. Also, many burst errors will be smaller in length than
the GP, which guarantees detection (can you think why?).
3.4 Performance
with 1 bit in 9 for simple parity, or 0.78% of the channel instead of 11.1%.
As far as detection ability goes, we can guarantee that all burst errors smaller
in length than the GP will be detected if the GP is primitive, and 2^n - 1 out of
2^n errors in total, where there are n bits in the CRC. For a 16-bit CRC this
works out to 99.9985% of errors. This calculation comes from the rash
assumption that a random error gives rise to a random result in the
remainder. If the remainder is n bits then 1 in 2^n random results will be zero
and so not detected.
If the message size (data plus CRC) is less than 2^n bits and the CRC is n
bits, then a single-bit error at x^i can be corrected, since such an error produces
a unique remainder α^i. For messages equal to or greater than 2^n bits, remainders
may be shared. In its simplest form, adding a CRC creates a d_min of 3 in
messages up to 2^n - 1 bits. This might be increased, however, if the message
size is considerably less than this.
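As an illustration of this (a sketch under the assumption of the same GP, x^4 + x^3 + 1, and the 12-bit message used earlier), a single-bit error can be located by asking which bit position, on its own, would produce the observed remainder:

```python
def remainder(bits, gp=(1, 1, 0, 0, 1)):
    """Remainder of a received message (data + CRC) divided by the GP, over GF(2)."""
    reg = list(bits)
    for i in range(len(reg) - len(gp) + 1):
        if reg[i]:
            for j, g in enumerate(gp):
                reg[i + j] ^= g
    return reg[-(len(gp) - 1):]

received = [1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0]   # a valid message (data + CRC)
received[2] ^= 1                                   # inject a single bit-error
syndrome = remainder(received)
if any(syndrome):
    for i in range(len(received)):                 # which single-bit pattern gives
        error = [0] * len(received)                # this remainder? (unique while the
        error[i] = 1                               # message is shorter than 2^4 bits)
        if remainder(error) == syndrome:
            received[i] ^= 1                       # flip the offending bit back
            break
print(remainder(received))                         # -> [0, 0, 0, 0] once corrected
```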
3.5 Hardware Implementation

[CRC generator circuit: four clocked shift-register stages (A to D) in series, with exclusive-OR feedback taps 1 0 0 1 taken from the GP, a calculate/read-out switch and a serial data input.]
Initially, with the switch in calculate mode, the registers are reset to zero
and the data are clocked serially into the circuit. When the final data bit has
been clocked in (no zeros added), the CRC is actually in the registers. To
clock it out without corrupting it, the switch is set to read out, which disables
the feedback path. Three further clocks serially output the CRC.

To verify the operation of the circuit we can use the data from the previous
example, both before and after corruption by errors. Table 3.7 shows the
initial calculation of the CRC. After all the data have been clocked in, the
CRC resides in the four registers.
0 0 0 0 1 1
1 0 0 1 0 1
1 1 0 1 1 0
0 1 1 0 1 1
1 0 1 0 0 0
0 1 0 1 1 0
0 0 1 0 1 1
0 0 0 0 0
The last bit of data is clocked in at this point and the AND gate switch is
OPENED (=0) to prevent the feedback from destroying the CRC now in
ABCD. The CRC clocks serially out of the registers, MSB first.
0 1 0 0    0 (0)    0 (0)
0 0 1 0    0 (0)    0 (0)
0 0 0 1    0 (1)    1 (0)
0 0 0 0    0 (0)    0 (0)
Table 3.7 also shows, in brackets, the situation at the receiver where the
CRC is appended to the data stream. In the absence of errors this results in
four zeros. Table 3.8 repeats the example in Table 3.4, where two errors have
been added. Again, the hardware result agrees with the longhand calculations
given previously. The circuit mirrors very closely the form of the longhand
calculations given above. The principal difference is that, by hand, we are
free to skip past leading zeros in the numerator. In the circuit, 0s appear in
the feedback path where they have no effect other than to shift the register
contents left, unchanged. When a one appears in the feedback, the register
contents are both shifted and modified by the form of the GP.
0 0 0 0 1
1 0 0 1 0 1
1 1 0 1 0 1
1 1 1 1 1 0
0 1 1 1 1 0
0 0 1 1 1 0
0 0 0 1 1 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 1 1
0 0 0 0 0 0
for 12 bits.
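Before moving on, the shift-register circuit can be modelled behaviourally in a few lines (a sketch of mine, assuming the GP x^4 + x^3 + 1 used in the examples above): on each clock the feedback is the incoming data bit XORed with the register MSB, and a 1 in the feedback folds the GP taps back into the register.

```python
def lfsr_crc(bits, taps=0b1001, width=4):
    """Serial CRC register: shift left once per bit; when the feedback (data XOR
    the bit falling out of the top) is 1, XOR in the GP taps (x^3 and x^0)."""
    reg = 0
    for b in bits:
        feedback = b ^ (reg >> (width - 1))
        reg = (reg << 1) & ((1 << width) - 1)
        if feedback:
            reg ^= taps                      # x^4 re-enters the register as x^3 + 1
    return reg

data = [1, 0, 1, 1, 0, 1, 1, 0]
print(f"{lfsr_crc(data):04b}")               # -> 0010, agreeing with Table 3.7
assert lfsr_crc(data + [0, 0, 1, 0]) == 0    # receiver: appending the CRC gives zero
```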
These generator polynomials are not primitive, each having the factor
(x + 1), for example
α^1   0010
α^2   0100
α^3   1000
α^4   1101
α^5   0111
α^6   1110
α^7   0001
To add a byte to the CRC, the CRC register is shifted left by eight bits,
shifting the new data byte into the lower eight bits of the CRC register. The
eight bits which are shifted out from the top of the CRC register access the
look-up-table and are converted back into the field. The 16-bit output from
the look-up-table is XORed with the CRC register to form the new CRC.
This operation need not be 8-bit oriented. For a micro-controller solution, a
4-bit approach forms a good speed/resource compromise. In this case, a
16×16-bit (32-byte) look-up-table is required and the addition of a byte to the
CRC requires two 4-bit phases. The CRC register is shifted left by four bits,
adding the high nibble of the new byte into the lower four bits of the CRC
register. The four bits which fall out of the top of the CRC register access the
look-up-table and the look-up-table output is XORed with the CRC register.
The operation is repeated, this time bringing in the lower four (remaining)
bits of the new byte.
A3 A2 A1 A0                       Q3 Q2 Q1 Q0
(x^7) (x^6) (x^5) (x^4)

0 0 0 0    0                      0 0 0 0
0 0 0 1    α^4                    0 0 1 1
0 0 1 0    α^5                    0 1 1 0
0 0 1 1    α^5 + α^4              0 1 0 1
0 1 0 0    α^6                    1 1 0 0
0 1 0 1    α^6 + α^4              1 1 1 1
0 1 1 0    α^6 + α^5              1 0 1 0
0 1 1 1    α^6 + α^5 + α^4        1 0 0 1
1 0 0 0    α^7                    1 0 1 1
1 0 0 1    α^7 + α^4              1 0 0 0
1 0 1 0    α^7 + α^5              1 1 0 1
1 0 1 1    α^7 + α^5 + α^4        1 1 1 0
1 1 0 0    α^7 + α^6              0 1 1 1
1 1 0 1    α^7 + α^6 + α^4        0 1 0 0
1 1 1 0    α^7 + α^6 + α^5        0 0 0 1
1 1 1 1    α^7 + α^6 + α^5 + α^4  0 0 1 0
Consider the message 10001100111. First, this must be split up into nibbles,
and four zeros appended, giving 0100 0110 0111 0000, or 4, 6, 7, 0. The CRC
register is cleared to 0 and 4 is presented to the input.
The XOR output will be 4 since the look-up-table (LUT) will output 0. Now,
6 is presented to the circuit input and 4 is presented to the LUT.
The LUT output will be 0C, so the XOR will output 0A to the CRC register.
Now, 7 is presented to the circuit input, giving
0A on the XOR output again (the LUT now outputs 0D for the input 0A, and
0D ⊕ 7 = 0A). Finally, the appended 0 is input to the circuit; the LUT output
is again 0D which, XORed with the input 0, leaves the CRC register holding
the final result, 0D or 1101. We can compare this with a longhand calculation
of the CRC, as follows, which agrees.
10000111011
1 0 0 0 1 1 0 0 1 1 1 0 0 0 0
1 0 0 1 1 ↓ ↓ ↓
1 0 1 0 0
1 0 0 1 1 ↓ ↓
1 1 1 1 1
1 0 0 1 1 ↓
1 1 0 0 1
1 0 0 1 1 ↓
1 0 1 0 0
1 0 0 1 1 ↓ ↓
1 1 1 0 0
1 0 0 1 1 ↓
1 1 1 1 0
1 0 0 1 1
1 1 0 1
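The whole nibble-at-a-time scheme fits comfortably into a short Python sketch (mine, not the book's code; it assumes the GP x^4 + x + 1 behind the look-up-table above). Building the 16-entry table and running the nibbles 4, 6, 7, 0 through it reproduces the result 0D.

```python
GP = 0b10011                                   # x^4 + x + 1

def build_lut():
    """lut[v] = remainder of v * x^4 divided by the GP, for every 4-bit value v."""
    lut = []
    for v in range(16):
        r = v << 4                             # the nibble shifted out sits at x^7..x^4
        for bit in range(7, 3, -1):            # reduce each bit from x^7 down to x^4
            if r & (1 << bit):
                r ^= GP << (bit - 4)
        lut.append(r)
    return lut

LUT = build_lut()

def crc4(nibbles):
    """Nibble-wide CRC: the 4 bits leaving the register index the table, and the
    table output is XORed with the incoming nibble to form the new register value."""
    crc = 0
    for n in nibbles:
        crc = LUT[crc] ^ n
    return crc

print(f"{crc4([0x4, 0x6, 0x7, 0x0]):04b}")     # -> 1101 (0D), as calculated above
```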
3.7 Discussion
Where data are to be organized into blocks you will see that the CRC
provides a greatly enhanced error detection capability over simple parity
checking and at a greatly reduced cost in terms of bandwidth. A 16-bit CRC
appended to a typical message of, say, 2 Kbits uses about 0.8% of the
channel as opposed to around 11 % for parity. For infrequent and intermittent
communication, however, parity will be more applicable because bandwidth
is not usually an issue and compilation of data words into blocks is not
possible.
You should be able to calculate a CRC on a block of data given a suitable
generator polynomial, and derive a hardware generator from the polynomial
bit pattern. Also, for software solutions, you should be able to construct a
suitably sized look-up-table to speed up the CRC calculation.

The nature of the CRC calculation means that certain errors will not be
detected, since the corrupted message is still exactly divisible by the GP.
When this happens, the message has moved through some multiple of d_min
bits into a new valid message.
Chapter 4
and vertical parity check except that the data are now arranged in higher
dimensional formats.
Each of the check bits is formed as a modulo-2 sum over a subset of the data bits.
These in turn give rise to the codewords in Table 4.1 below. A careful
examination of the codewords will show that at least three bits must be
changed to get from one codeword to another codeword. The code, therefore,
satisfies the requirement that the minimum Hamming distance must be 2t + 1
between codes, and t is one for single-bit error correction. A useful
property of the codewords is that they are linear. This means that they can be
combined (by exclusive-ORing) to produce other valid codewords. By storing only
the codewords for one, two, four and eight, all others can be constructed. For
example, taking the code for one (0001 111) and adding to it the code for
eight (1000 101) gives the code for nine (1001 010). Using Table 4.1 you can
try other combinations to verify this.
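The linearity and the distance property can be checked directly (a small sketch of mine, using only the three codewords quoted in the text):

```python
def distance(a, b):
    """Hamming distance between two 7-bit codewords."""
    return bin(a ^ b).count("1")

one, eight = 0b0001111, 0b1000101        # codewords for 1 and 8, as given above
nine = one ^ eight                       # linearity: the XOR is another codeword
print(f"{nine:07b}")                     # -> 1001010, the codeword for 9

for a, b in [(one, eight), (one, nine), (eight, nine)]:
    assert distance(a, b) >= 3           # at least 3 bits apart, so t = 1 correction
```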