Module 3 Information Redundancy

This document provides an introduction to information redundancy and coding theory. It discusses how adding redundant information to data can help tolerate faults through techniques like error detecting and correcting codes. Memory reliability is an important application area as memories take up a large portion of digital systems. The document outlines different types of memory faults and the need for non-testing based solutions. It provides background on linear algebra concepts used in coding theory and defines terms like codes, codewords, encoding, decoding, and error detection/correction. Parity codes are presented as a basic example of an error detecting code.


DEPENDABLE COMPUTER SYSTEMS AND

NETWORKS

Module 3 – Information Redundancy

Sy-Yen Kuo
郭斯彥

NTUEE 1 KUO
Introduction - Information Redundancy
 Add information to data to tolerate faults
‒ Error detecting codes
‒ Error correcting codes
 Applications
‒ communication
‒ memory
 Coding: Representation of information
‒ Sequence of code words or symbols
 Redundancy of Information
‒ Shannon’s theorem of 1948

NTUEE 2 KUO
Richard Hamming
“Why do so few scientists make
significant contributions and so
many are forgotten in the long
run?”

“If you don’t work on important problems, it’s not likely that you will do important work.”

Richard W. Hamming, “You and Your Research”, March 7, 1986
NTUEE 3 KUO
Motivation and Background

 Memories are an integral part of digital systems (computers)
 The majority of chip and/or board area is taken up by memories
 Hence, reliability improvement methods must pay attention to memories (RAMs, ROMs, etc.)

NTUEE 4 KUO
Motivation and Background (contd.)

 Several Types of faults prevalent in memories


 During manufacturing
‒ Stuck-at
‒ Timing faults
‒ Coupling and pattern sensitive faults
 During operation
‒ Cell failures due to life, stress – same as stuck-at
‒ Alpha particle hits – cell content changes
» Sensitive to system location. Higher hits at altitudes and in flight
‒ Need non-testing based solutions
‒ Random failures – bit/nibble/byte/card failures

NTUEE 5 KUO
Motivation and Background (contd.)
 Theoretical Foundation
‒ Linear and modern algebra
» Concept of groups, fields, and vector spaces
» We will focus on binary codes but will have to include
polynomial algebra
 Theory – Informal definitions and results
‒ Vector: A collection of bits represented as a string
‒ Information bits - collection of k-bits
‒ Code word: encoded information bit string
» k information bits encoded to n bits. Encoded information
word is a code word.
‒ Check bits: r (= n-k) extra bits used to encode
information bits

NTUEE 6 KUO
Motivation and Background (contd.)
 A field Z2 is the set {0, 1} together with two operations of addition and multiplication (modulo 2) satisfying a given set of properties
 A vector space Vn over a field Z2 is a subset of {0,1}^n, with two operations of addition and multiplication (modulo 2) satisfying a given set of properties
 A subspace is a subset of a vector space which is itself a vector space
 A set of vectors {v0, …, vk−1} is linearly independent if a0v0 + a1v1 + … + ak−1vk−1 = 0 implies a0 = a1 = … = ak−1 = 0

NTUEE 7 KUO
Code
 A code of length n is a set of n-tuples satisfying some well-defined set of rules
 binary code uses only 0 and 1 symbols
‒ binary coded decimal (BCD) code
» uses 4 bits for each decimal digit

NTUEE 8 KUO
Codeword
 Codeword is an element of the code
satisfying the rules of the code
 Word is an n-tuple not satisfying the rules of
the code
 Codewords should be a subset of all 2^n possible binary tuples to make error detection/correction possible
‒ BCD: 0110 valid; 1110 invalid
‒ any binary code: 2015 invalid
 The number of codewords in a code C is
called the size of C
NTUEE 9 KUO
Encoding/decoding

Two scenarios if errors affect a codeword:


‒ correct codeword → another codeword
‒ correct codeword → word

NTUEE 10 KUO
Error detection
 We can define a code so that errors
introduced in a codeword force it to lie
outside the range of codewords
‒ basic principle of error detection

NTUEE 11 KUO
Error correction
 We can define a code so that it is possible
to determine the correct codeword from
the erroneous codeword
‒ basic principle of error correction

NTUEE 12 KUO
Error-detecting Codes
 A fault is a physical malfunction
 An error is an incorrect output caused by a fault
 The output of a circuit may be encoded so that the output takes a subset of possible values during normal (fault-free) operation
 Formally, a code is a subset S of a universe U of vectors
 A noncode word is a vector in set U-S
 If X is a code word and X’ is a different vector produced
by a fault, then X’ is a detectable error if X’ ∈ U-S and
undetectable error if X’ ∈ S

NTUEE 13 KUO
Error-detecting Codes (Cont.)
Example
Assume a code word has 8 bits, so U = 2^8 vectors
X1 → X3: undetectable error
X2 → X4: detectable error

[Figure: universe U containing the code S; code words X1, X2, X3 and noncode word X4; one failure maps X1 → X3 (undetectable error), another maps X2 → X4 (detectable error).]
NTUEE 14 KUO
Fault Detection through Encoding
 At logic level, codes provide means of masking
or detection of errors
 Example
X1 is a code word <10010011>; due to multiple bit errors it becomes X3 = <10011100>, also a code word ⇒ not detectable
X2 is a code word that becomes the noncode word X4 ⇒ detectable

[Figure: U = 2^8 vectors; S = the set of even-parity code words, containing X1, X2 and X3; X4 lies outside S.]
NTUEE 15 KUO
Basic Code Operations

 Consider n-bit vectors, a space of 2^n vectors
 A subset of 2^k vectors are codewords
 The subset is called an (n, k) code, where the fraction k/n is the information rate of the code
 Addition operation on vectors is bitwise XOR
X + Y =< x1 ⊕ y1 , x2 ⊕ y2 , ... xn ⊕ yn >

 Multiplication operation is bitwise AND


cX =< cx1 , cx2 , ..., cxn >

NTUEE 16 KUO
Parity Codes - Example
Odd and even parity codes for BCD data (parity bit appended as the last bit):

Decimal digit   BCD    BCD odd parity   BCD even parity
0               0000   00001            00000
1               0001   00010            00011
2               0010   00100            00101
3               0011   00111            00110
4               0100   01000            01001
5               0101   01011            01010
6               0110   01101            01100
7               0111   01110            01111
8               1000   10000            10001
9               1001   10011            10010

[Figure: organization of a memory that uses single-bit parity. A parity generator produces the parity bit as data is written to memory; on a read, parity checking of the data and stored parity bit produces an error signal. The parity bit is generated when data is written to memory and checked when data is read.]
NTUEE 17 KUO
XOR Tree for Parity Generation

[Figure: XOR tree for parity generation. Data bits d0-d3 feed a tree of XOR gates that produces the generated parity bit; a final XOR with the stored parity bit P produces the error signal.]

NTUEE 18 KUO
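The XOR tree above reduces to a chained XOR; a minimal sketch (Python, illustrative function names), assuming even parity over four data bits:

```python
# Minimal sketch of the XOR-tree parity generator/checker
# (even parity over four data bits; names are illustrative).
from functools import reduce

def parity_bit(bits):
    """Even-parity bit: XOR of all data bits."""
    return reduce(lambda a, b: a ^ b, bits, 0)

def error_signal(bits_with_parity):
    """1 if the overall parity is odd, i.e. an error is detected."""
    return reduce(lambda a, b: a ^ b, bits_with_parity, 0)

data = [1, 0, 1, 1]                            # d0..d3
p = parity_bit(data)                           # p = 1: total number of 1s becomes even
assert error_signal(data + [p]) == 0           # no error
assert error_signal([0, 0, 1, 1] + [p]) == 1   # single-bit error detected
```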
Block Codes
 Block code: each code word contains the same number
of bits.
 Code words are represented by n-tuples or n-vectors
‒ Information is only k bits
‒ Redundancy normalization (n-k)/n or r/n
‒ It is called (n, k) code
 Binary Code
 ‒ By far the most important
 ‒ Lends itself to mathematical treatment
 Encoding: converting source codes into block codes
 Decoding: Inverse operation of encoding
 Error detection (EDC) and Error correction (ECC) codes

NTUEE 19 KUO
Functional Classes of Codes
 Single error correcting codes
‒ any one bit can be detected and corrected
 Burst error correcting codes
‒ Any set of consecutive b bits can be corrected
 Independent error correcting codes
‒ Up to t errors can be detected and corrected
 Multiple character correcting codes
‒ n-characters, t of them are wrong, can be recovered
 Coding complexity goes up with number of errors
 Sometimes partial correction is sufficient

NTUEE 20 KUO
Information Redundancy - Coding
 A data word with k bits is encoded into a codeword with
n bits – n > k
‒ Not all 2n combinations are valid codewords
‒ To extract original data - n bits must be decoded
‒ If the n bits do not constitute a valid codeword, an error is
detected
‒ For certain encoding schemes, some types of errors can also be
corrected
 Key parameters of code:
‒ number of erroneous bits that can be detected
‒ number of erroneous bits that can be corrected
 Overhead of code:
‒ additional bits required
‒ time to encode and decode
NTUEE 21 KUO
Error detecting/correcting code

 Characterized by the number of bits that can be detected and/or corrected
‒ double-bit detecting code can detect two
single-bit errors
‒ single-bit correcting code can correct one
single-bit error
 Hamming distance gives a measure of
error detecting/correcting capabilities of a
code

NTUEE 22 KUO
Hamming distance
 Hamming distance is the number of bit
positions in which two n-tuples differ

NTUEE 23 KUO
Hamming Distance

 The Hamming distance between two codewords - the


number of bit positions in which the two words differ

 Two words in this figure are connected by an edge if


their Hamming distance is 1

NTUEE 24 KUO
Hamming Distance - Examples

 101 and 011 differ in two bit positions - a Hamming distance of 2
 ‒ Need to traverse two edges to get from 101 to 011
 101 and 100 differ by one bit position - a
single error in the least significant bit in either of
these two codewords will go undetected
 A Hamming distance of two between two
codewords implies that a single bit error will not
change one of the codewords into the other

NTUEE 25 KUO
Hamming Distance
 Theory – Informal definitions and results
‒ Hamming weight of a vector v: Number of 1’s in v
‒ Hamming distance (HD) between a pair of vectors v1
and v2: number of places two vectors differ from each
other.
HD(v1, v2) = HW(v1⊕v2)
‒ Code: Collection of code words.
‒ Block code: each code word contains same number of
bits.
‒ Minimum Hamming distance of a code: Minimum of all
HDs between all pairs of code words in a code.
‒ Error detection: Erroneous word (a code word with one
or more bit errors) is not a code word

NTUEE 26 KUO
Distance Property of Codes
 Weight of an n-tuple: number of 1s in the tuple
 ‒ W(1011) = 3
 ‒ Computing weight: W(X ⊕ Y) = W(X) + W(Y) − 2W(X · Y)
 Hamming Distance
 ‒ Number of places two words differ
 ‒ For X, Y: d(X, Y) = W(X ⊕ Y)
 ‒ d(X,Y) = 0 for X = Y, and d(X,Y) ≤ d(X,Z) + d(Z,Y)
 Number of words at distance d from a word = C(n, d)
 Number of words at distance up to d: W = Σ_{i=0}^{d} C(n, i)
 Minimum distance d of a code = minimum over all pairs of code words

NTUEE 27 KUO
Distance of a Code
 The Distance of a code -
the minimum Hamming distance
between any two valid codewords
 Example - Code with four codewords -
{001,010,100,111}
‒ has a distance of 2
‒ can detect any single bit error
 Example - Code with two codewords - {000,111}
‒ has a distance of 3
‒ can detect any single or double bit error
‒ if double bit errors are not likely to happen - code
can correct any single bit error
NTUEE 28 KUO
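The distance claims on this slide are easy to check mechanically; a small sketch (Python, illustrative names) that computes the distance of a code and the resulting detection/correction bounds:

```python
# Sketch: code distance and the detect/correct bounds from the slides.
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def code_distance(code):
    """Minimum Hamming distance over all pairs of codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

assert code_distance({"001", "010", "100", "111"}) == 2   # detects single errors
d = code_distance({"000", "111"})
assert d == 3
assert d - 1 == 2            # detects up to d-1 = 2 errors
assert (d - 1) // 2 == 1     # corrects up to floor((d-1)/2) = 1 error
```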
Distance Properties of Parity Codes
 Definition: The minimum distance of a code S is the minimum of the Hamming distances between all pairs of code words
 e.g., a fragment of a distance-5 (double-error-correcting) code:

[Figure: code words 0000000 and 0110111 at Hamming distance 5. From 0000000, the single error 0000100 and the double error 0100100 are correctable back to 0000000; the triple error 0110100 lies closer to 0110111 and is improperly corrected; 0110110 is a correctable single error of 0110111.]
NTUEE 29 KUO
Distance Property of Codes
 Basic results 1: A code is capable of t error detection if
and only if min HD of the code is at least t+1.
‒ Example: Use of parity – we know that we can detect single error.
What is the minimum HD for such a code?
 Basic results 2: A code is capable of correcting t errors if
and only if min HD of the code is at least 2t+1.
 Combine the two results:
1. Thus a code with minimum Hamming distance d between its
codewords can detect at most d-1 errors or can correct ⌊(d-1)/2⌋
errors
2. Or to correct all patterns of Hamming distance c and detect up
to d additional errors
» code distance must be ≥ 2c + d + 1
» E.g., code with distance 4 can correct 1 single-bit error and detect 1
additional error or detect at most 3 errors

NTUEE 30 KUO
Coding vs. Redundancy

 Many redundancy techniques can be


considered as coding schemes
 The code {000,111} can be used to encode a
single data bit
‒ 0 can be encoded as 000 and 1 as 111
‒ This code is identical to TMR
 The code {00,11} can also be used to encode
a single data bit
‒ 0 can be encoded as 00 and 1 as 11
‒ This code is identical to a duplex

NTUEE 31 KUO
Separability of a Code
A code is separable if it has separate fields for
the data and the code bits
‒ codeword = data + check bits
‒ e.g. parity: 11011 = 1101 + 1
 Decoding is easy by disregarding the code bits
 The code bits can be processed separately to
verify the correctness of the data
A non-separable code has the data and code
bits integrated together - extracting the data
from the encoded word requires some
processing
NTUEE 32 KUO
Parity Codes
 The simplest separable codes are the parity codes
 A parity-coded word includes d data bits and an extra bit which holds the parity
 In an even (odd) parity code, the extra bit is set so that the total number of 1's in the (d+1)-bit word (including the parity bit) is even (odd)
 The overhead fraction of this parity code is 1/d
 Use: bus, memory, transmission, …

NTUEE 33 KUO
Organization of memory with single-bit
parity code

Extra HW required (parity generator, checker, extra memory)

NTUEE 34 KUO
Properties of Parity Codes

A parity code has a distance of 2 - will


detect all single-bit errors
 If one bit flips from 0 to 1 (or vice versa)
- the overall parity will not be the same
- error can be detected
 Simple parity cannot correct any bit errors

NTUEE 35 KUO
Encoding and Decoding Circuitry for
Parity Codes

The encoder: a modulo-2 adder generating a 0 if the number of 1's is even; the output is the parity signal

[Figure: encoder and decoder circuitry for parity codes.]
NTUEE 36 KUO
Parity Codes - Decoder
 The decoder generates the parity from the
received data bits and compares it with the
received parity bit
 If they match, the output of the exclusive-or gate
is a 0 - indicating no error has been detected
 If they do not match - the output is 1, indicating
an error
 Double-bit errors cannot be detected by a parity check
 All three-bit errors will be detected

NTUEE 37 KUO
Even or Odd Parity?
 The decision depends on which type of all-bits
error is more probable
 For even parity - the parity bit for the all 0's
data word will be 0 and an all-0’s failure will
go undetected - it is a valid codeword
 Selecting the odd parity code will allow the
detection of the all-0's failure
 If all-1's failure is more likely - the odd parity
code must be selected if the total number of
bits (d+1) is even, and the even parity if d+1 is
odd

NTUEE 38 KUO
Codes for RAMs
 The purpose is to provide additional error detection/correction capability

NTUEE 39 KUO
Parity Bit Per Byte

 A separate parity bit is assigned to every byte (or any other group of bits)
 The overhead increases from 1/d to m/d (m is
the number of bytes or other equal-sized
groups)
 Up to m errors will be detected if they occur in
different bytes.
 If both all-0's and all-1's failures may happen
‒ select odd parity for one byte and even parity for
another byte

NTUEE 40 KUO
Byte-Interlaced Parity Code
 Example: d=64, data bits - a63,a62,…,a0
 Eight parity bits:
‒ First - parity bit of a63,a55,a47,a39,a31,a23,a15,a7 -
the most significant bits in the eight bytes
‒ Remaining seven parity bits - assigned so that the
corresponding groups of bits are interlaced
 Scheme is beneficial when shorting of adjacent
bits is a common failure mode (example - a bus)
 If parity type (odd or even) is alternated between
groups - unidirectional errors (all-0's or all-1's)
will also be detected
NTUEE 41 KUO
Parity Codes for Memory - Comparison

NTUEE 42 KUO
Overlapping Error-Correcting Parity Codes
 Simplest scheme - data is organized
in a 2-dimensional array

 Bits at the end of a row - parity over that row
 Bits at the bottom of a column - parity over that column
 A single-bit error anywhere will cause a row and a column
to be erroneous
 This identifies a unique erroneous bit
 This is an example of overlapping parity - each bit is
covered by more than one parity bit
NTUEE 43 KUO
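The row-and-column argument above can be sketched directly; a minimal illustration (Python, illustrative names) that locates a single flipped bit from the failing row and column parities:

```python
# Sketch: locating a single-bit error with 2-D (row/column) overlapping parity.
def parity(bits):
    return sum(bits) % 2

def locate_single_error(arr, row_parity, col_parity):
    """Return (row, col) of a single flipped bit, or None if all parities hold."""
    bad_rows = [i for i, row in enumerate(arr) if parity(row) != row_parity[i]]
    bad_cols = [j for j in range(len(arr[0]))
                if parity([row[j] for row in arr]) != col_parity[j]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None

data = [[1, 0, 1],
        [0, 1, 1]]
rp = [parity(r) for r in data]                         # stored row parities
cp = [parity([r[j] for r in data]) for j in range(3)]  # stored column parities
data[1][2] ^= 1                                        # inject a single-bit error
assert locate_single_error(data, rp, cp) == (1, 2)     # erroneous bit identified
```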
Overlapping parity code (Hamming code)
 Overlapping parity for 4-bits of data - each data bit is assigned
to multiple parity groups
 Hamming codes can detect up to two-bit errors or correct one-
bit errors without detection of uncorrected errors. By contrast,
the simple parity code cannot correct errors, and can detect
only an odd number of bits in error.

NTUEE 44 KUO
Error Correction with Overlapped Parity

[Figure: error-correction circuit with overlapped parity. The received data bits (3, 2, 1, 0) and parity bits (P2, P1, P0) feed three parity generators that recompute the parity bits; comparing recomputed and received parities yields the syndrome, a 3-to-8 decoder maps the syndrome to correction signals C0-C3, CP0-CP2 and a no-error signal E, and each correction signal XORs the corresponding bit to produce the corrected bits.]
NTUEE 45 KUO
Overlapping parity code (Hamming code)
 Purpose - identify every single erroneous bit
 k data bits and c parity bits - total of k+c bits
 Assuming single-bit errors - k+c error states + one no-
error state - total of k+c+1 states
 We need k+c+1 distinct parity "signatures" (bit configurations) to distinguish among the states
 c parity checks generate the parity signatures
 Hence, c is the smallest integer that satisfies 2^c ≥ k + c + 1
 Question - how are the parity
bits assigned?

NTUEE 46 KUO
Single Error Correction and Double Error
Detection Hamming Code (SEC-DED)
 Consider a data word consisting of four information bits
 Three parity bits are needed to provide single error correction
 Adding an extra parity bit, the Hamming code can be used to correct single bit errors and to detect double errors

Bit positions 1-8 hold p1 p2 d0 p3 d1 d2 d3 p4. Coverage of each syndrome bit (1 = position checked):

Position:   1  2  3  4  5  6  7  8
Contents:   p1 p2 d0 p3 d1 d2 d3 p4
c1 checks:  1  0  1  0  1  0  1  0
c2 checks:  0  1  1  0  0  1  1  0
c3 checks:  0  0  0  1  1  1  1  0
c4 checks:  1  1  1  1  1  1  1  1

Check bits computation:
p1 = XOR of bits in positions (3, 5, 7)
p2 = XOR (3, 6, 7)
p3 = XOR (5, 6, 7)
p4 = even parity over the first 7 bits of the code word

Syndromes computation:
c1 = XOR (1, 3, 5, 7)
c2 = XOR (2, 3, 6, 7)
c3 = XOR (4, 5, 6, 7)
c4 = parity over all 8 bits of the code word

Syndrome interpretation (c1 c2 c3 c4):
0 0 0 0 - no errors
x1 x2 x3 1 - single error in the position whose binary value is (x3 x2 x1)
y1 y2 y3 0, nonzero - double error
0 0 0 1 - error in bit p4
NTUEE 47 KUO
Single Error Correction and Double Error
Detection Hamming Code (SEC-DED) Example
Bit positions 1-8 hold p1 p2 d0 p3 d1 d2 d3 p4
Initial data: d0 d1 d2 d3 = 0 1 1 0, giving the code word 1 1 0 0 1 1 0 0

Failure scenarios (received words):
1: 1 1 0 0 1 1 0 0 (no failure)
2: 1 1 1 0 1 1 0 0 (bit 3 flipped)
3: 1 1 0 0 1 0 0 0 (bit 6 flipped)
4: 0 1 0 0 0 1 0 0 (bits 1 and 5 flipped)
5: 1 1 0 0 1 1 0 1 (bit 8 flipped)

Corresponding syndromes (c1 c2 c3 c4):
1: 0 0 0 0 - no errors
2: 1 1 0 1 - single error in position 3
3: 0 1 1 1 - single error in position 6
4: 0 0 1 0 - double error
5: 0 0 0 1 - error in bit p4
NTUEE 48 KUO
Linear code: Definition

 An (n, k) linear code over the field Z2 is a k-dimensional subspace of Vn
– spanned by k linearly independent vectors
– any codeword c can be written as a linear combination of the k basis vectors (v0, …, vk−1) as follows:
c = d0v0 + d1v1 + … + dk−1vk−1
– (d0, d1, …, dk−1) is the data to be encoded

NTUEE 49 KUO
Example: (4,2) linear code
 Data to be encoded are 2-bit words
 [00], [01], [10], [11]
 Suppose we select for a basis the vectors
 v0 = [1000], v1 = [0110]
 To find the codeword c = [c0c1c2c3]
corresponding to the data d = [d0d1], we
compute the linear combination of the basic
vectors as
 c = d0 v 0 + d 1 v 1
 For example, data d = [11] is encoded to
c = 1 · [1000] + 1 · [0110] = [1110]
NTUEE 50 KUO
Example (cont.)
 d = [00] is encoded to
‒ c = 0 · [1000] + 0 · [0110] = [0000]
 d = [01] is encoded to
‒ c = 0 · [1000] + 1 · [0110] = [0110]
 d = [10] is encoded to
‒ c = 1 · [1000] + 0 · [0110] = [1000]

NTUEE 51 KUO
Generator matrix
 The rows of the generator matrix are the basis
vectors v0,…,vk-1
 For example, generator matrix for the previous
example is
G = | 1 0 0 0 |
    | 0 1 1 0 |

 Codeword c is obtained by multiplying d by G:
c = d · G

NTUEE 52 KUO
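Encoding with a generator matrix is just a GF(2) vector-matrix product; a minimal sketch (Python) using the (4,2) code from this example:

```python
# Sketch: c = d · G over GF(2) -- XOR together the rows of G selected by d.
def encode(d, G):
    c = [0] * len(G[0])
    for bit, row in zip(d, G):
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return c

G = [[1, 0, 0, 0],   # v0 = [1000]  (the (4,2) code above)
     [0, 1, 1, 0]]   # v1 = [0110]
assert encode([1, 1], G) == [1, 1, 1, 0]   # matches the slide: d = [11] -> [1110]
assert encode([0, 1], G) == [0, 1, 1, 0]
assert encode([0, 0], G) == [0, 0, 0, 0]
```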
Example: (6,3) linear code

 Construct the code spanned by the basis vectors


[100011], [010110] and [001101]
 The generator matrix for this code is

G = | 1 0 0 0 1 1 |
    | 0 1 0 1 1 0 |
    | 0 0 1 1 0 1 |

 For example, data d = [011] is encoded to


c=0·[100011]+1·[010110]+1·[001101]=[011011]

NTUEE 53 KUO
Parity check matrix

 To check for errors in an (n, k) linear code, we use an (n−k)×n parity check matrix H of the code
 The parity check matrix is related to the
generator matrix by the equation
H · GT = 0
where GT denotes the transpose of G
 This implies that, for any code word c, the
product of the parity check matrix and the
encoded message should be zero
H · cT = 0
NTUEE 54 KUO
Constructing parity check matrix
 If a generator matrix is of the form G = [Ik A], then
the parity check matrix is of the form
H = [AT In-k]
Example: (6,3) linear code

• If G is of the form
G = | 1 0 0 0 1 1 |
    | 0 1 0 1 1 0 |
    | 0 0 1 1 0 1 |
• Then H is of the form
H = | 0 1 1 1 0 0 |
    | 1 1 0 0 1 0 |
    | 1 0 1 0 0 1 |

NTUEE 55 KUO
Assigning Parity Bits - Example
 k=4 data bits
 c=3 - minimum number of parity bits
 k+c+1=8 – number of states the word can be in
 A possible assignment of parity values to states
 In (a3 a2 a1 a0 p2 p1 p0), bit 0, 1 and 2 are parity bits,
the rest are data bits

NTUEE 56 KUO
(7,4) Hamming Single Error
Correcting (SEC) Code

 p0 check fails when bit 3 (a0) is in error


‒ Also bit 4 and bit 6
‒ A parity bit covers all bits whose error it indicates
 p0 covers positions 0,3,4,6 - p0 = a0⊕a1⊕a3
 p1 covers positions 1,3,5,6 - p1 = a0⊕a2⊕a3
 p2 covers positions 2,4,5,6 - p2 = a1⊕a2⊕a3

NTUEE 57 KUO
Definition - Syndrome p 0 = a 0⊕ a 1⊕ a 3
p 1 = a 0⊕ a 2⊕ a 3
p 2 = a 1⊕ a 2⊕ a 3
 Example: a3a2a1a0 = 1100 and p2p1p0 = 001
 Suppose 1100001 becomes 1000001
 Recalculate p2p1p0 = 111
 Difference (bit-wise XOR) is 110
 This difference is called syndrome - indicates the
bit in error
 It is clear that a2 is in error and the correct data is
a3a2a1a0=1100
NTUEE 58 KUO
Calculating the Syndrome - (7,4)
Hamming Code
 The syndrome can be calculated directly in one step
from the bits a3 a2 a1 a0 p2 p1 p0
 This is best represented by the following matrix
operation where all the additions are mod 2

p 0 = a 0⊕ a 1⊕ a 3
p 1 = a 0⊕ a 2⊕ a 3
Parity check
matrix p 2 = a 1⊕ a 2⊕ a 3

NTUEE 59 KUO
Syndrome

 Encoded data is checked for errors by multiplying it


by the parity check matrix
s = H · cT
 The resulting (n-k)-bit vector is called syndrome
– If s = 0, no error has occurred
– If s matches one of the columns of H, a single-bit
error has occurred. The bit position corresponds to
the position of the matching column of H
– Otherwise, a multiple-bit error has occurred

NTUEE 60 KUO
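Syndrome checking is the matching matrix product; a small sketch (Python) using the (6,3) code and its parity check matrix H = [A^T I3] from the earlier slides:

```python
# Sketch: s = H · c^T over GF(2); s = 0 means c is a valid codeword,
# and a single-bit error reproduces the corresponding column of H.
def syndrome(H, c):
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

H = [[0, 1, 1, 1, 0, 0],        # parity check matrix of the (6,3) code
     [1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
c = [0, 1, 1, 0, 1, 1]          # codeword for d = [011]
assert syndrome(H, c) == [0, 0, 0]   # valid codeword
c[1] ^= 1                            # single-bit error in position 2
assert syndrome(H, c) == [1, 1, 0]   # equals column 2 of H
```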
Constructing linear codes
 To be able to correct e errors, a code should
have a distance of at least 2e+1
 It is proved that a code has distance of at
least d if and only if every subset of d-1
columns of its parity check matrix H are
linearly independent.
 So, to have a code distance two, we must
ensure that every column of the parity check
matrix is linearly independent
– To have a code with code distance 2 (single- error
detecting), every column of H should be linearly
independent, i.e. H shouldn’t have a zero column

NTUEE 61 KUO
Example: (4,2) linear code

 Parity check matrix for (4,2) linear code we have


constructed before is

H = | 0 1 1 0 |
    | 0 0 0 1 |

 The first column is zero, therefore the columns of


H are linearly dependent and code distance is 1
 Let us construct a code with distance 2

NTUEE 62 KUO
Example: (4,2) linear code, Cd=2

 Replace the 1st column of H by a column containing all 1's

 Now G can be constructed as

NTUEE 63 KUO
Example: (4,2) linear code, Cd=2

 The resulting code generated by G is

NTUEE 64 KUO
Hamming codes
 Hamming codes are a family of linear codes
 An (n, k) linear code is a Hamming code if its parity check matrix contains 2^(n−k) − 1 columns representing all possible nonzero binary vectors of length n − k.
 Example: (7,4) Hamming code

NTUEE 65 KUO
Parity check matrix
 If the columns of H are permuted, the resulting
code remains a Hamming code
 Example: different (7,4) Hamming code

H = | 0 0 0 1 1 1 1 |
    | 0 1 1 0 0 1 1 |
    | 1 0 1 0 1 0 1 |

 Such H is called lexicographic parity check


matrix
– the corresponding code does not have a generator
matrix in standard form G = [I3 A]

NTUEE 66 KUO
Decoding

 If the parity check matrix H is lexicographic, a simple


procedure for syndrome decoding exists
 To check a codeword c for errors, calculate
s = H · cT
– If s = 0, no error has occurred
– If s ≠ 0, then it matches one of the columns of H, say i,
 c is decoded assuming that a single-bit error has
occurred in the ith bit of c

NTUEE 67 KUO
Example: (7,4) Hamming code
 Construct Hamming code corresponding to parity
check matrix

 The corresponding G is

NTUEE 68 KUO
Example (cont.)
 Suppose the data to be encoded is d = [1110]
 We multiply d by G to get c = [1110001]
 Suppose an error has occurred in the last bit of c
‒ c is transformed to [1110000]
 By multiplying [1110000] by H, we get s = [001]
‒ s matches the last column of H
‒ the error has occurred in the last bit of the codeword
 We correct [1110000] to [1110001] and decode it
to d = [1110] by taking the first 4 bits of data

NTUEE 69 KUO
Extended Hamming code
 The code distance of a Hamming code is
3, so it can correct single-bit error
 Often extended Hamming code is used,
which can correct single-bit error and
detect double-bit errors
–obtained by adding a parity check bit to
every codeword of a Hamming code
– if c = [c1c2…cn] is a codeword of a Hamming
code, c’ = [c0c1c2…cn] is the corresponding
extended codeword, where c0 is the parity bit

NTUEE 70 KUO
Improving Detection

 The previous code can correct a single bit error but cannot detect a double error
 Example - 1100001 becomes 1010001 - a2 and a1 are erroneous - the syndrome is 011
 ‒ Indicates erroneously that bit a0 should be corrected
 One way of improving error detection capabilities -
adding an extra check bit which is the parity bit of all
the other data and parity bits
 This is an (8,4) single error correcting/double error
detecting (SEC/DED) Hamming code
NTUEE 71 KUO
Syndrome Generation for (8,4) Hamming Code

 p3 - parity bit of all data and check bits - a single bit error will change the overall parity and yield s3=1
 The last three bits of the syndrome will
indicate the bit in error to be corrected as
before as long as s3=1
 If s3=0 and any other syndrome bit is
nonzero - a double or greater error is
detected
NTUEE 72 KUO
Example

 Single error - 11001001 becomes 10001001
 The syndrome is 1110 - indicating that a2 is erroneous
 Two errors - 11001001 becomes 10101001
 Syndrome is 0011 indicating an uncorrectable
error
NTUEE 73 KUO
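The SEC-DED behavior in these examples can be reproduced with a short sketch (Python), using the bit layout p1 p2 d0 p3 d1 d2 d3 p4 and the check equations from the earlier slides:

```python
# Sketch: (8,4) SEC-DED Hamming code, layout [p1 p2 d0 p3 d1 d2 d3 p4].
def encode(d0, d1, d2, d3):
    p1 = d0 ^ d1 ^ d3            # XOR of positions 3, 5, 7
    p2 = d0 ^ d2 ^ d3            # XOR of positions 3, 6, 7
    p3 = d1 ^ d2 ^ d3            # XOR of positions 5, 6, 7
    w = [p1, p2, d0, p3, d1, d2, d3]
    return w + [sum(w) % 2]      # p4 = even parity over the first 7 bits

def classify(w):
    """Return a diagnosis from the syndrome (c1, c2, c3, c4)."""
    c1 = w[0] ^ w[2] ^ w[4] ^ w[6]   # positions 1, 3, 5, 7
    c2 = w[1] ^ w[2] ^ w[5] ^ w[6]   # positions 2, 3, 6, 7
    c3 = w[3] ^ w[4] ^ w[5] ^ w[6]   # positions 4, 5, 6, 7
    c4 = sum(w) % 2                  # parity over all 8 bits
    if (c1, c2, c3, c4) == (0, 0, 0, 0):
        return "no error"
    if c4 == 1:
        pos = c1 + 2 * c2 + 4 * c3   # position of the flipped bit
        return "error in p4" if pos == 0 else f"single error in position {pos}"
    return "double error"

w = encode(0, 1, 1, 0)               # the example data word d0 d1 d2 d3 = 0110
assert w == [1, 1, 0, 0, 1, 1, 0, 0]
w[2] ^= 1                            # flip bit 3
assert classify(w) == "single error in position 3"
w[2] ^= 1; w[0] ^= 1; w[4] ^= 1      # two-bit error (bits 1 and 5)
assert classify(w) == "double error"
```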
Checksum

 Primarily used to detect errors in data transmission on communication networks
 Basic idea - add up the block of data being transmitted and transmit this sum as well
 The receiver adds up the data it received and compares it with the checksum it received
 If the two do not match - an error is indicated

NTUEE 74 KUO
Versions of Checksums
 Data words - d bits long
 Single-precision version - checksum is a modulo-2^d addition
 Double-precision version - modulo-2^2d addition
 In general - a single-precision checksum catches fewer errors than double-precision
 ‒ only keeps the rightmost d bits of the sum
 Residue checksum takes into account the carry out of the d-th bit as an end-around carry
 ‒ somewhat more reliable
 The Honeywell checksum concatenates words into pairs for the checksum calculation (done modulo 2^2d) - guards against errors in the same position

NTUEE 75 KUO
Checksum Codes - Basic Concepts
 The checksum is appended to a block of data when the block is transferred

[Figure: words d1…dn are transferred and received as r1…rn; a checksum computed on the original data is compared with the checksum computed on the received data and the received version of the checksum. di = original word of data, ri = received word of data.]
NTUEE 76 KUO
Single Precision Checksums

A single-precision checksum is formed by adding the data words and ignoring any overflow:

0111 + 0001 + 0110 + 1000 = 1 0110 → the carry is ignored, giving checksum 0110

The single-precision checksum is unable to detect certain types of errors. Example: the original data words 0111, 0001, 0110, 0000 (checksum 1110) are transmitted over a line whose most significant bit is faulty (always "1"). The received words are 1111, 1001, 1110, 1000, whose checksum is 1110, and the received checksum is also 1110. The received checksum and the checksum of the received data are equal, so no error is detected.

NTUEE 77 KUO
Double Precision Checksums
Compute a 2n-bit checksum for a block of n-bit words. Overflow is still a concern, but it is now overflow out of 2n bits.

Same example: the original data words 0111, 0001, 0110, 0000 give the 8-bit checksum 0000 1110. With the most significant line faulty (always "1"), the received words 1111, 1001, 1110, 1000 give a checksum of 0010 1110, while the received checksum is 1000 1110. The received checksum and the checksum of the received data are not equal, so the error is detected.

NTUEE 78 KUO
Honeywell Checksums
Concatenate consecutive words to form double words, creating k/2 words of 2n bits; the checksum is formed over the newly structured data.

Same example: the pairs (word 2, word 1) = 0001 0111 and (word 4, word 3) = 0000 0110 sum to the checksum 0001 1101. With the faulty (always "1") line, the received pairs 1001 1111 and 1000 1110 sum to 0010 1101 (the checksum of the received data), while the received checksum is 1001 1101. The two differ, so the error is detected.
NTUEE 79 KUO
Residue Checksums
The same concept as the single-precision checksum except that the carry bit is not ignored; it is added to the checksum in an end-around fashion.

Same example: the original words 0111, 0001, 0110, 0000 sum to 1110 with no carries, so the checksum is 1110. The received words 1111, 1001, 1110, 1000 generate three carries during the addition; adding each carry back in (end-around carry) gives 0001 as the checksum of the received data, while the received checksum is 1110. The two differ, so the error is detected.
NTUEE 80 KUO
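The four checksum variants can be compared on the slides' 4-bit example data; a sketch (Python), assuming 4-bit words and the stuck-at-1 most-significant line from the figures:

```python
# Sketch: single-precision, double-precision, Honeywell, and residue
# checksums over 4-bit words, reproducing the slides' faulty-line example.
MASK = 0b1111                        # d = 4-bit words

def single_precision(words):         # modulo-2^4 sum; carries ignored
    return sum(words) & MASK

def double_precision(words):         # modulo-2^8 sum; checksum keeps 2d bits
    return sum(words) & 0xFF

def honeywell(words):                # concatenate word pairs, sum modulo 2^8
    pairs = [(words[i + 1] << 4) | words[i] for i in range(0, len(words), 2)]
    return sum(pairs) & 0xFF

def residue(words):                  # modulo-2^4 sum with end-around carry
    s = 0
    for w in words:
        s += w
        while s > MASK:
            s = (s & MASK) + (s >> 4)   # feed the carry back in
    return s

sent = [0b0111, 0b0001, 0b0110, 0b0000]
recv = [w | 0b1000 for w in sent]    # most significant line stuck at 1
assert single_precision(sent) == single_precision(recv)   # error missed
assert double_precision(sent) != double_precision(recv)   # detected
assert honeywell(sent) != honeywell(recv)                 # detected
assert residue(sent) != residue(recv)                     # detected
```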
Comparing the Versions

NTUEE 81 KUO
Comparison - Example

 ‒ In the single-precision checksum, the transmitted checksum and the computed checksum match
 ‒ In the Honeywell checksum, the computed checksum differs from the received checksum and the error is detected
 All checksum schemes allow error detection
 Do not allow error location
 Entire block of data must be retransmitted if an
error is detected
NTUEE 82 KUO
Unordered codes

 Designed to detect unidirectional errors
 An error is unidirectional if all affected bits change either 0 → 1 or 1 → 0, but not both
 Example:
– correct codeword: 010101
– the same codeword with unidirectional 1 → 0 errors: e.g., 000101, 010100, 000100
NTUEE 83 KUO
Unidirectional error detection
 Theorem: A code C detects all unidirectional
errors if and only if every pair of codewords
in C is unordered
 Two binary n-tuples x and y are ordered if either xi ≤ yi for all i ∈ {1, 2, ..., n}, or xi ≥ yi for all i

 Examples of ordered codewords:


0110 < 0111 < 1111
0110 > 0100 > 0000

NTUEE 84 KUO
Unidirectional Error Detection
• A unidirectional error always changes a word x to a word y which is
  either smaller or greater than x
• A unidirectional error cannot change x to a word which is not ordered
  with x
• Therefore, if every pair of codewords is unordered, a unidirectional
  error will never transform a codeword into another codeword
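The ordered/unordered test above can be sketched in a few lines (bit strings
assumed to be plain '0'/'1' strings):

```python
def is_ordered(x: str, y: str) -> bool:
    """x and y are ordered iff xi <= yi for all i, or xi >= yi for all i."""
    assert len(x) == len(y)
    return all(a <= b for a, b in zip(x, y)) or all(a >= b for a, b in zip(x, y))

# 0110 < 0111: ordered, so a unidirectional error can map one to the other
print(is_ordered("0110", "0111"))   # True
# 0110 and 1001 are unordered: no unidirectional error connects them
print(is_ordered("0110", "1001"))   # False
```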
Unidirectional Error Model
• Given a bit string X = (X1 X2 X3 ... Xn), Xi ∈ {0, 1}
• Under error, either some 0 → 1 or some 1 → 0 changes occur, but not
  both at the same time
• E.g., if you know that s-a-0 (stuck-at-0) faults are the only faults
  that can occur on the transmission lines, then all errors are 1 → 0
  and hence unidirectional
Berger Codes
• Used in control units
• Systematic (separable) code
• For an n-component vector, add a check vector equal to the count of
  zeros in the vector
• If the information is X, c(X) = number of zeros in X
  Example: X = <10010001>, c(X) = 5 = 101, so the codeword is
  <10010001:101>
• Detects multiple unidirectional errors
Berger Code Error Detection
• If unidirectional errors occur in the information part:
  1 → 0: number of zeros in information > check value
  0 → 1: number of zeros in information < check value
• If unidirectional errors occur in the check part:
  0 → 1: number of zeros in information < check value
  1 → 0: number of zeros in information > check value
• If unidirectional errors occur in both the information and check parts:
  1 → 0: number of zeros in information increases, check value decreases
  0 → 1: number of zeros in information decreases, check value increases
  Either way the two cannot match, so the error is detected
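A minimal Berger-code sketch follows. The slide writes the check symbol with
its minimal width (101); here a fixed width of ⌈log2(d+1)⌉ bits is assumed,
as in the overhead discussion, so the same 8-bit example gets check 0101.

```python
import math

def berger_encode(info: str) -> str:
    """Append the zero count of the info bits, using ceil(log2(d+1)) check bits."""
    k = math.ceil(math.log2(len(info) + 1))
    return info + format(info.count("0"), f"0{k}b")

def berger_valid(info: str, check: str) -> bool:
    """A word is valid iff the check equals the number of zeros in the info."""
    return int(check, 2) == info.count("0")

cw = berger_encode("10010001")        # 5 zeros -> check 0101
print(cw)                             # 100100010101

# A unidirectional 1 -> 0 error increases the zero count: detected
assert not berger_valid("00010001", cw[8:])
# A bidirectional error (one 1->0 plus one 0->1) keeps the count: it escapes,
# which is why Berger codes only claim unidirectional error detection
assert berger_valid("00110001", cw[8:])
```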
M-of-N Codes
• Unidirectional error codes
  ‒ one or more 1s turn to 0s and no 0s turn to 1s (or vice versa)
• Exactly M bits out of N are 1: C(N, M) codewords
  ‒ A single bit error leaves (M+1) or (M−1) 1s, so it is detected
  ‒ This is a non-separable code
• To get a separable code:
  ‒ Add M extra bits to the M-bit data word, for a total of M 1s
  ‒ This is an M-of-2M separable unidirectional error code
• Example: the 2-of-5 code for decimal digits (each of the 10 digits is
  assigned one of the C(5, 2) = 10 codewords)
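A short sketch enumerating an M-of-N code and checking the single-bit-error
property described above:

```python
from itertools import combinations

def m_of_n(m: int, n: int):
    """All n-bit codewords containing exactly m ones."""
    return ["".join("1" if i in ones else "0" for i in range(n))
            for ones in combinations(range(n), m)]

code = m_of_n(2, 5)
print(len(code))          # C(5,2) = 10 codewords -> one per decimal digit

# A single-bit error leaves M+1 or M-1 ones, which is never a codeword
word = code[0]
flipped = ("1" if word[0] == "0" else "0") + word[1:]
assert flipped not in code
```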
Overhead of Berger Code
• d data bits → at most d 0s → up to ⌈log2(d + 1)⌉ check bits needed to
  describe the count
• Overhead = ⌈log2(d + 1)⌉ / d
• r - number of check bits
• If d = 2^k − 1 for some integer k:
  ‒ r = k
  ‒ maximum-length Berger code
• Smallest number of check bits out of all separable codes (for
  unidirectional error detection)
Cyclic Codes
• Cyclic codes are a special class of linear codes
• Used in applications where burst errors can occur
  ‒ a group of adjacent bits is affected
  ‒ digital communication, storage devices (disks, CDs)
• Important classes of cyclic codes:
  ‒ Cyclic redundancy check (CRC)
    » used in modems and network protocols
  ‒ Reed-Solomon code
    » used in CD and DVD players
Cyclic Codes
• Cyclic codes are parity check codes with the additional property that a
  cyclic shift of a codeword is also a codeword
• If (C_{n−1}, C_{n−2}, ..., C1, C0) is a codeword, then
  (C_{n−2}, C_{n−3}, ..., C0, C_{n−1}) is also a codeword
• Parity check codes require complex encoding and decoding circuits built
  from arrays of XOR gates, AND gates, etc.
• Cyclic codes require far less hardware, in the form of linear feedback
  shift registers (LFSRs)
• Cyclic codes are used in sequential storage devices, e.g. tapes, disks,
  and data links
• An (n,k) cyclic code can detect all single bit errors, all multiple
  adjacent bit errors affecting fewer than (n−k) bits, and burst transient
  errors (typical for communication applications)
Cyclic Codes and Polynomials
• Cyclic codes depend on the representation of data by polynomials
• If (C_{n−1}, C_{n−2}, ..., C1, C0) is a codeword, its polynomial
  representation is
  C(x) = C_{n−1} x^(n−1) + C_{n−2} x^(n−2) + ... + C1 x + C0
• A cyclic code is characterized by its generator polynomial g(x)
• g(x) is a polynomial of degree (n−k) for an (n,k) code, with unity
  coefficient in the (n−k) term
• Example: for the (7,4) code, g(x) = x^3 + x + 1
Cyclic and Polynomial Algebra
• Definition: A cyclic code is a parity check code where every cyclic
  shift of a codeword is a codeword; e.g. if (C_{n−1}, C_{n−2}, ..., C0)
  is a codeword, then (C_{n−2}, C_{n−3}, ..., C0, C_{n−1}) is also a
  codeword
• Cyclic codes are conveniently described as polynomials:
  C_{n−1} x^(n−1) + C_{n−2} x^(n−2) + ... + C1 x + C0
• Use the key concept of polynomial division (the Euclidean division
  algorithm): f(X) = q(X)p(X) + r(X), with degree(r(X)) < degree(p(X)),
  where q is the quotient and r the remainder
Cyclic and Polynomial Algebra (Cont.)
• The most important modulo base polynomial for cyclic codes is X^n − 1,
  since X^n mod (X^n − 1) = 1
• Now we have
  XC(X) = C_{n−1} X^n + C_{n−2} X^(n−1) + ... + C1 X^2 + C0 X
  XC(X) mod (X^n − 1) = C_{n−2} X^(n−1) + ... + C0 X + C_{n−1}
  i.e. multiplication by X modulo X^n − 1 is exactly a cyclic shift
Basic Operations on Polynomials
• One polynomial can be multiplied or divided by another following
  modulo-2 arithmetic: coefficients are 1 or 0, and addition and
  subtraction are the same (bitwise XOR)
• Multiplication:
  (x^4 + x^3 + x^2 + 1)(x^3 + x)
      = (x^7 + x^6 + x^5 + x^3) + (x^5 + x^4 + x^3 + x)
      = x^7 + x^6 + x^4 + x
• Division of x^5 + x^4 by x^4 + x^3 + x^2 + 1:
      quotient  = x
      x · (x^4 + x^3 + x^2 + 1) = x^5 + x^4 + x^3 + x
      remainder = (x^5 + x^4) − (x^5 + x^4 + x^3 + x) = x^3 + x
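These modulo-2 polynomial operations can be sketched with integers as bit
vectors of coefficients (bit i holds the x^i coefficient); the two assertions
replay the worked examples above.

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication of polynomials over GF(2)."""
    p = 0
    while b:
        if b & 1:
            p ^= a     # addition mod 2 is XOR
        a <<= 1
        b >>= 1
    return p

def gf2_divmod(a: int, b: int):
    """Polynomial division over GF(2): returns (quotient, remainder)."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift   # subtract the shifted divisor (XOR)
    return q, a

# (x^4 + x^3 + x^2 + 1)(x^3 + x) = x^7 + x^6 + x^4 + x
assert gf2_mul(0b11101, 0b1010) == 0b11010010
# (x^5 + x^4) / (x^4 + x^3 + x^2 + 1): quotient x, remainder x^3 + x
assert gf2_divmod(0b110000, 0b11101) == (0b10, 0b1010)
```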
Properties of Generator Polynomials
• g(x) is the unique lowest-degree nonzero code polynomial, with degree
  n−k
• Every code polynomial is some multiple of g(x)
• g(x) is a factor of x^n − 1, i.e., it divides it with zero remainder
• Conversely, if a polynomial of degree n−k divides x^n − 1, then it
  generates an (n,k) cyclic code
Cyclic Code - Example
• Consider the generator polynomial g(x) = x^3 + x + 1 for the (7,4) code
• One can verify that g(x) divides x^7 − 1
• Given the data word (1111), generate the codeword:
  d(x) = x^3 + x^2 + x + 1
• Then c(x) = g(x)d(x) = (x^3 + x^2 + x + 1)(x^3 + x + 1)
            = x^6 + x^5 + x^3 + 1
• Hence the codeword is (1101001)
Properties of Cyclic Codes: (n,k) Codes (Cont.)

  as polynomials                        as 7-tuples
  g(X)     = X^3 + X + 1                0001011
  Xg(X)    = X^4 + X^2 + X              0010110
  X^2 g(X) = X^5 + X^3 + X^2            0101100
  X^3 g(X) = X^6 + X^4 + X^3            1011000

• If a polynomial g(X) of degree r = n−k divides X^n − 1, then g(X)
  generates a cyclic code
Encoding/Decoding
• Encoding: multiply the data polynomial by the generator polynomial:
  c(x) = d(x)·g(x)
• Decoding: divide the received polynomial c(x) by the generator
  polynomial g(x):
  d(x) = c(x)/g(x)
• If the remainder is zero, no error has occurred
Decoding in Presence of Error
• Suppose an error has occurred; then
  creceived(x) = c(x) + e(x), where e(x) is the error polynomial
  dreceived(x) = (c(x) + e(x))/g(x)
• Unless e(x) is a multiple of g(x), the received codeword will not be
  evenly divisible by g(x)
  ‒ We detect errors by checking whether creceived(x) is evenly divisible
    by g(x)
  ‒ If yes, we assume that there is no error and dreceived(x) = d(x)
  ‒ If there is a remainder, we assume that there is an error
• Undetectable errors: if e(x) is a multiple of g(x), the remainder of
  e(x)/g(x) is 0 and the error will not be detected
Example of Error Detection
• d(x) = (1101) = x^3 + x^2 + 1
• g(x) = x^3 + x + 1
• c(x) = d(x)·g(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
• Let e(x) = x^3 + 1; then creceived(x) = x^6 + x^5 + x^4 + x^2 + x
• creceived(x)/g(x) = (x^3 + x^2) + x/(x^3 + x + 1)
• The remainder x is not 0, so the error is detected
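The whole detect-by-remainder flow of this example can be sketched in Python,
with polynomials held as integer bit vectors and small GF(2) helpers:

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials (ints, bit i = coefficient of x^i)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

g = 0b1011            # g(x) = x^3 + x + 1
d = 0b1101            # d(x) = x^3 + x^2 + 1
c = gf2_mul(d, g)     # c(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
assert gf2_mod(c, g) == 0             # valid codeword: remainder 0

e = 0b1001            # e(x) = x^3 + 1
received = c ^ e
assert gf2_mod(received, g) == 0b10   # remainder x != 0 -> error detected

# an error pattern that IS a multiple of g(x) goes undetected
assert gf2_mod(c ^ gf2_mul(g, 0b10), g) == 0
```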
Separable Cyclic Codes
• Cyclic codes are often non-separable, although separable cyclic codes
  exist
• Separability allows use of the data before encoding is completed
• Data word d(X) = d_{k−1} X^(k−1) + d_{k−2} X^(k−2) + ... + d0
• Append (n−k) zeroes to the data to obtain
  D(X) = d_{k−1} X^(n−1) + d_{k−2} X^(n−2) + ... + d0 X^(n−k)
• Divide by G(X): D(X) = Q(X)G(X) + R(X), with degree of R(X) < n−k
• The codeword C(X) = D(X) − R(X) has G(X) as a factor
• Divide C(X) by G(X); if the remainder is non-zero ⇒ error
• In C(X): the first k bits are data, the last n−k are check bits
• Example: (5,4) code with G(X) = X + 1; for data 0110 we get
  D(X) = X^3 + X^2 = (X + 1)X^2 + 0
  ‒ Codeword 01100
  ‒ The same codewords are generated as in the non-separable case, but
    with a different data-to-codeword correspondence
Example of Separable Cyclic Code
• Generator polynomial g(x) = x^4 + x^3 + x^2 + 1 of a (7,3) code
• Data is 3 bits; n−k = 4 check bits
HW for Encoding/Decoding of Cyclic Codes
• Encoding and decoding are done using linear feedback shift registers
  (LFSRs)
• The LFSR implements polynomial division by the generator polynomial
  g(x)
• A linear feedback shift register consists of:
  ‒ register cells s0, s1, ..., s_{r−1}, where r = n−k is the degree of
    g(x)
  ‒ XOR gates between the cells
  ‒ feedback connections to the XORs, with weights
  ‒ a clock
Circuit to Generate Cyclic Code

The multiplication of the data polynomial
  d(x) = d0 ⊕ d1 x ⊕ d2 x^2 ⊕ ... ⊕ d_{k−1} x^(k−1)
by the generator polynomial
  g(x) = g0 ⊕ g1 x ⊕ g2 x^2 ⊕ ... ⊕ gr x^r, where r = n − k,
can be implemented using an LFSR. In the circuit diagram, the square boxes
are register cells, each capable of storing one bit of information. The
circles labeled "⊕" are modulo-2 adders. The triangles represent the weights
corresponding to the coefficients gi of the generator polynomial g(x), for
i ∈ {0, 1, ..., r}. Each coefficient gi is either 0, meaning "no connection",
or 1, meaning "connection".
As an example, consider an LFSR that implements multiplication by the
generator polynomial g(x) = 1 + x^2 + x^3. Let si ∈ {0, 1}, i ∈ {0, 1, 2},
represent the value of register cell i at the current time step. The vector
of values of the register cells is called the state of the LFSR. Let
si+ ∈ {0, 1} represent the value of register cell i at the next time step.
From the circuit we can then derive the next-state equations for each
register cell.
The encoding table (one row per clock step) illustrates the encoding process
for the data d = [d0 d1 d2 d3] = [0101]. The resulting codeword is
c = [c0 c1 c2 c3 c4 c5 c6] = [0100111]. It is easy to verify this by
multiplying d(x) by g(x):
  d(x) · g(x) = (x ⊕ x^3)(1 ⊕ x^2 ⊕ x^3) = x ⊕ x^4 ⊕ x^5 ⊕ x^6 = c(x).
Decoding of Cyclic Codes
• Determine if the codeword (c_{n−1}, c_{n−2}, ..., c1, c0) is valid; its
  code polynomial is
  c(x) = c_{n−1} x^(n−1) + c_{n−2} x^(n−2) + ... + c1 x + c0
• If c(x) is a valid code polynomial, it should be a multiple of the
  generator polynomial g(x)
• c(x) = d(x)·g(x) + s(x), where the syndrome polynomial s(x) should be
  zero
• Hence, divide c(x) by g(x) and check whether the remainder equals 0
LFSR
• The weights gi are the coefficients of the generator polynomial
  g(x) = g0 + g1 x + ... + gr x^r
  ‒ gi = 0 means "no connection"
  ‒ gi = 1 means "connection"
  ‒ gr is always connected
Example: Decoding Circuit
• LFSR for g(x) = 1 + x + x^3, with next-state equations:
  s0+ = s2 + c(x)
  s1+ = s0 + s2
  s2+ = s1
  d(x) = s2
Example: Decoding, No Error
Suppose the word to decode is [1010001], i.e. c(x) = 1 + x^2 + x^6. The most
significant bit is fed first. After all seven bits have been shifted in, the
register state (the syndrome) is [000], so no error is detected.
Example: Decoding, With Error
Suppose an error has occurred in the 4th bit, i.e. we received [1011001]
instead of [1010001].

  time   c(x)   s0   s1   s2   d(x)
  t0            0    0    0
  t1     1      1    0    0    0
  t2     0      0    1    0    0
  t3     0      0    0    1    0
  t4     1      0    1    0    1
  t5     1      1    0    1    0
  t6     0      1    0    0    1
  t7     1      1    1    0    0

The final syndrome [110] matches the 4th column of the parity check matrix
H: a (7,4) cyclic code with the generator polynomial g(x) = 1 ⊕ x ⊕ x^3 has
a parity check matrix whose i-th column is the syndrome produced by a
single-bit error in position i.
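The decoding LFSR for g(x) = 1 + x + x^3 can be simulated directly from the
next-state equations (a sketch; bits are fed most-significant coefficient
first, as in the slides):

```python
def lfsr_decode(bits):
    """Divide the incoming bit stream by g(x) = 1 + x + x^3.
    Returns the quotient bits and the final state (the syndrome)."""
    s0 = s1 = s2 = 0
    quotient = []
    for c in bits:
        q = s2                            # bit shifted out this cycle
        quotient.append(q)
        s0, s1, s2 = c ^ q, s0 ^ q, s1    # next-state equations
    return quotient, [s0, s1, s2]

# error-free word [1010001], c(x) = 1 + x^2 + x^6, fed MSB first
_, syndrome = lfsr_decode([1, 0, 0, 0, 1, 0, 1])
print(syndrome)    # [0, 0, 0] -> no error

# word with an error in the 4th bit: [1011001]
_, syndrome = lfsr_decode([1, 0, 0, 1, 1, 0, 1])
print(syndrome)    # [1, 1, 0] -> matches the 4th column of H
```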
Encoding for Separable Cyclic Codes
• Division can be used for encoding of a separable (n,k) cyclic code:
  ‒ shift the data by n−k positions, i.e. multiply d(x) by x^(n−k)
  ‒ use the LFSR to divide d(x)·x^(n−k) by g(x); the remainder r(x) is
    contained in the register
  ‒ append the check bits r(x) to the data by adding r(x) to d(x)·x^(n−k):
    c(x) = d(x)·x^(n−k) + r(x)
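These three steps can be sketched in software, doing the division directly
instead of with an LFSR (the (7,4) code with g(x) = x^3 + x + 1 is assumed
for the example):

```python
def gf2_mod(a: int, b: int) -> int:
    """Remainder of polynomial division over GF(2) (ints as bit vectors)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def systematic_encode(d: int, g: int, n: int, k: int) -> int:
    """Shift the data by n-k, then append the remainder of division by g(x)."""
    shifted = d << (n - k)
    return shifted | gf2_mod(shifted, g)

def check(c: int, g: int) -> bool:
    """A received word is accepted iff it is evenly divisible by g(x)."""
    return gf2_mod(c, g) == 0

c = systematic_encode(0b1101, 0b1011, n=7, k=4)
print(format(c, "07b"))        # 1101001: first 4 bits data, last 3 check bits
assert check(c, 0b1011)
assert not check(c ^ 0b0010000, 0b1011)   # single-bit error caught
```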
CRC Codes
• Cyclic Redundancy Check (CRC) codes are separable codes with specific
  generator polynomials, chosen to provide high error detection
  capability for data transmission and storage
• A CRC detects all burst errors of length less than or equal to
  deg(g(x)); it also detects many errors that are longer than deg(g(x))
• For example, apart from detecting all burst errors of length 16 or
  less, CRC-16 and CRC-CCITT are also capable of detecting 99.997% of
  burst errors of length 17 and 99.9985% of burst errors of length 18
• Common generator polynomials are:
  CRC-16:    1 + x^2 + x^15 + x^16
  CRC-CCITT: 1 + x^5 + x^12 + x^16
  CRC-32:    1 + x + x^2 + x^4 + x^7 + x^8 + x^10 + x^11 + x^12 + x^16
             + x^22 + x^23 + x^26 + x^32
CRC Codes (Cont.)
• CRC-16 and CRC-CCITT are widely used in modems and network protocols in
  the USA and Europe, respectively, and give adequate protection for most
  applications
• The number of non-zero terms in their polynomials is small (just four),
  so the LFSR required to implement encoding and decoding is simple
• Applications that need extra protection, e.g. Ethernet, use CRC-32
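A bit-serial sketch of a CRC computation with the CRC-16 polynomial
1 + x^2 + x^15 + x^16 (0x8005 with the implicit x^16 term). Real protocols
differ in conventions such as initial value, bit reflection, and final XOR;
this sketch assumes a zero initial value and no reflection.

```python
def crc16(data: bytes, poly: int = 0x8005) -> int:
    """MSB-first CRC over the generator 1 + x^2 + x^15 + x^16."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                     # top bit set: subtract g(x)
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

msg = b"dependable systems"
checksum = crc16(msg)

# any burst error of length <= 16 changes the remainder -> detected
corrupted = bytes([msg[0] ^ 0b00111100]) + msg[1:]
assert crc16(corrupted) != checksum
```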
Reed-Solomon Codes
• Reed-Solomon (RS) codes are a class of separable cyclic codes used to
  correct errors in a wide range of applications, including:
  ‒ storage devices (tapes, compact disks, DVDs, bar-codes)
  ‒ wireless communication (cellular telephones, microwave links)
  ‒ satellite communication, digital television, high-speed modems
    (ADSL, xDSL)
Reed-Solomon Codes
• Popular ECC for CDs, DVDs, wireless communications, etc.
• k data symbols, each of which is m bits
• r parity symbols, each of which is also m bits
• Can correct up to r/2 symbols that contain errors
• Denoted by RS(n,k)
• Common example: RS(255, 223) with m = 8
  ‒ n = 255 → 255 codeword bytes
  ‒ k = 223 → 223 dataword bytes
  ‒ r = 32 → can correct errors in ≤ 16 bytes
  ‒ each of these 16 bytes can have multiple bit errors
Reed-Solomon Codes
• There exist many flavors of RS codes, each tailored to a specific
  purpose
  ‒ Cross-Interleaved Reed-Solomon Coding (CIRC), used in CDs, can
    correct an error burst of up to 4000 bits!
  ‒ 4000 bits is roughly equivalent to 2.5 mm on the CD surface
• RS codes are best for a bursty error model
  ‒ Just as good at handling 1 error in a symbol as m errors in a symbol
• Codewords are created by multiplying datawords with a generator
  polynomial (like CRC)
Reed-Solomon Codes
• The encoding for a Reed-Solomon code is done using the usual procedure:
  ‒ the codeword is computed by shifting the data left n−k positions,
    dividing it by the generator polynomial, and then adding the obtained
    remainder to the shifted data
• A key difference is that groups of m bits, rather than individual bits,
  are used as the symbols of the code
  ‒ usually m = 8, i.e. a byte
  ‒ the theory behind this is the Galois field GF(2^m) of degree m over
    {0, 1}
Decoding
• Decoding of Reed-Solomon codes is performed using an algorithm designed
  by Berlekamp
  ‒ the popularity of RS codes is to a large extent due to the efficiency
    of this algorithm
• This algorithm was used by Voyager II for transmitting pictures of
  outer space back to Earth
• It is also the basis for decoding CDs in players
Summary of Cyclic Codes
• Any end-around shift of a codeword produces another codeword
• The code is characterized by its generator polynomial g(x), with degree
  (n−k), where n = bits in the codeword and k = bits in the data word
• Detects all single errors and all multiple adjacent errors affecting
  (n−k) bits or less
Arithmetic Codes
• Codes that are preserved under a set of arithmetic operations
• Enable detection of errors occurring during the execution of arithmetic
  operations
• Error detection could be attained by duplicating the arithmetic
  processor - too costly
• An error code is preserved under an arithmetic operation ∗ if for any
  two operands X and Y and the corresponding encoded entities X' and Y'
  there is an operation ⊗ satisfying X' ⊗ Y' = (X ∗ Y)'
  ‒ The result of ⊗ applied to the encoded X' and Y' is the same as
    encoding the outcome of the original operation ∗ applied to the
    original operands X and Y
Error Detection
• Arithmetic codes should be able to detect all single-bit errors
• A single-bit error in an operand or an intermediate result may cause a
  multiple-bit error in the final result
• Example: when adding two binary numbers, if stage i of the adder is
  faulty, all the remaining n−i higher-order digits may be erroneous
Arithmetic Codes
• Useful to check arithmetic operations
• Parity codes are not preserved under addition and subtraction
• Arithmetic codes can be separable (check symbols disjoint from data
  symbols) or non-separable (combined check and data)
• Several arithmetic codes:
  ‒ AN codes, residue codes, bi-residue codes
• Arithmetic codes have been used in the STAR fault-tolerant computer for
  space applications
Non-Separable Arithmetic Codes
• Simplest: AN codes, formed by multiplying the operands by a constant A
• X' = A·X, and the operations ∗ and ⊗ are identical for add/subtract,
  since AX + AY = A(X + Y)
• Addition of codewords is performed modulo A·M, matching addition of the
  original operands modulo M
• Check an operation by dividing the result by A
  ‒ If the residue is 0, no error; else, error
  ‒ All error magnitudes that are multiples of A will not be detected

[Figure: AX and AY enter an adder producing their modulo-A·M sum, which is
then checked by a mod-A residue unit.]
Example of 3N Code
• Example: A = 3
• Each operand is multiplied by 3 (obtained as 2X + X)
• The result of the operation is checked to see whether it is an integer
  multiple of 3
• Resulting 3N codewords for 4-bit information words:

  Information   3N codeword     Information   3N codeword
  0000          000000          1000          011000
  0001          000011          1001          011011
  0010          000110          1010          011110
  0011          001001          1011          100001
  0100          001100          1100          100100
  0101          001111          1101          100111
  0110          010010          1110          101010
  0111          010101          1111          101101

• Normal operation:
    A = 0 1 0 0 1 0 (3N code of 6)
  + B = 0 0 0 0 1 1 (3N code of 1)
    S = 0 1 0 1 0 1 (3N code of 7)
• If adder output S1 is stuck at "1":
    A = 0 1 0 0 1 0 (3N code of 6)
  + B = 0 0 0 0 1 1 (3N code of 1)
    S = 0 1 0 1 1 1 (not a valid 3N code)
• This illustrates the error detection capability of the 3N arithmetic
  code: the presence of the fault results in the sum being an invalid 3N
  codeword
AN Codes
• A should not be a power of the radix (2 for binary)
• An odd A is best: it will detect every single-bit fault, since such an
  error has a magnitude of 2^i
• A = 3 is the least expensive AN code that enables detection of all
  single-bit errors
• Example: the number 0110₂ = 6₁₀
• Its representation in the AN code with A = 3 is 010010₂ = 18₁₀
• A fault in bit position 2^3 may give the erroneous result 011010₂ = 26₁₀
• The error is easily detectable: 26 is not a multiple of 3
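The 3N scheme from the slides above can be sketched directly (numbers as
Python ints; the stuck-at bit position follows the S1 example):

```python
def encode_3n(x: int) -> int:
    """3N code: multiply by 3, implemented as a shift plus an add (2X + X)."""
    return (x << 1) + x

def valid_3n(word: int) -> bool:
    """A word is a valid 3N codeword iff it is an integer multiple of 3."""
    return word % 3 == 0

a, b = encode_3n(6), encode_3n(1)     # 010010 and 000011
s = a + b
print(format(s, "06b"), valid_3n(s))  # 010101 True  (3N code of 7)

faulty = s | 0b000010                 # adder output S1 stuck at "1"
print(format(faulty, "06b"), valid_3n(faulty))  # 010111 False -> detected
```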
Adder Protected by 3N Code

[Figure of the protected adder not reproduced here.]

Addition Using 3N Code - Fault-Free
Addition Using 3N Code - With Faults

[Worked-addition figures not reproduced here.]
Selecting the Value of A
• For binary codes, the constant A shouldn't be a power of two
  ‒ otherwise multiplication by A is simply a left shift of the original
    data
  ‒ an error in a single bit then yields a word still evenly divisible by
    A (a valid codeword), so it will not be detected
• The 3N code is easy to encode using an (n+1)-bit adder: create 2N by a
  shift and add N to it
Residue Codes
• Separable code: (X, X mod A)
• Created by appending the residue of a number to that number

[Figure: X and Y enter an adder producing X+Y, while their residues
(X mod A) and (Y mod A) enter a modulo-A adder; a residue generator
computes (X+Y) mod A from the sum, and an equality checker compares it
with the modulo-A sum of the residues, signaling an error on mismatch.]
Separable Arithmetic Codes
• Simplest: the residue code and the inverse residue code
• We attach a separate check symbol C(X) to every operand X
  ‒ For the residue code, C(X) = X mod A = |X|A
    » A is called the check modulus
  ‒ For the inverse residue code, C(X) = A − (X mod A)
• For both codes, C(X) ⊗ C(Y) = C(X ∗ Y)
  ‒ ⊗ equals ∗ - either addition or multiplication
• |X+Y|A = ||X|A + |Y|A|A
• |X·Y|A = ||X|A · |Y|A|A
• Division: X − S = Q·D
  ‒ X is the dividend, D is the divisor, Q is the quotient, S is the
    remainder
• The check is: ||X|A − |S|A|A = ||Q|A · |D|A|A
Examples
• A=3, X=7, Y=5
  ‒ the residues are |X|3 = 1 and |Y|3 = 2
  ‒ |7+5|3 = 0 = ||7|3 + |5|3|3 = |1+2|3 = 0
  ‒ |7·5|3 = 2 = ||7|3 · |5|3|3 = |1·2|3 = 2
• A=3, X=7 and D=5 → Q=1 and S=2
  ‒ the residue check is: ||7|3 − |2|3|3 = ||5|3 · |1|3|3 = 2
• Subtraction is done by adding the complement with respect to the
  modulus: |1−2|3 = |1 + |3−2|3|3 = |1+1|3 = 2
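The residue checks above can be sketched as follows (Python; the check
modulus A = 3 is assumed, as in the examples):

```python
A = 3  # check modulus

def encode(x: int):
    """Separable residue code: the operand plus its residue mod A."""
    return x, x % A

def check_add(x, cx, y, cy):
    """|X+Y|_A must equal ||X|_A + |Y|_A|_A."""
    return (x + y) % A == (cx + cy) % A

def check_mul(x, cx, y, cy):
    """|X*Y|_A must equal ||X|_A * |Y|_A|_A."""
    return (x * y) % A == (cx * cy) % A

x, cx = encode(7)      # residue 1
y, cy = encode(5)      # residue 2
assert check_add(x, cx, y, cy)     # |12|_3 = 0 = |1+2|_3
assert check_mul(x, cx, y, cy)     # |35|_3 = 2 = |1*2|_3

# a fault corrupting the sum is caught by the residue check
assert not check_add(x + 1, cx, y, cy)
```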
Residue mod A vs. AN Code
• Same undetectable errors
  ‒ Example: A=3 - only errors that modify the result by a multiple of 3
    will not be detected
  ‒ single-bit errors are always detectable
• Same checking algorithm
  ‒ Compute the residue modulo A of the result
• Same increase in word length: ⌈log2 A⌉ bits
• Most important difference: separability
  ‒ The unit producing C(X) in the residue code is a separate unit
  ‒ There is a single combined unit for the AN code
Summary - Information Redundancy
• Add information to data to tolerate faults
  ‒ Error detecting codes
  ‒ Error correcting codes
• Applications
  ‒ communication
  ‒ memory