Unit 4

Information theory and Coding - Linear Block codes and Parity check Matrices

UNIT-4

Block Codes - Digital communication channel, Introduction to block
codes, Single-parity check codes, Product codes, Repetition codes,
Hamming codes, Minimum distance of block codes, Soft-decision
decoding, Automatic-repeat-request schemes
Linear codes - Definition of linear codes, Generator matrices, Standard
array, Parity-check matrices.
Block Codes
Channel Coding
Why?
To increase the resistance of digital communication systems to
channel noise via error control coding

How?
By mapping the incoming data sequence into a channel input
sequence, and inverse-mapping the channel output sequence into an
output data sequence, in such a way that the overall effect of channel
noise on the system is minimised.

Redundancy is introduced in the channel encoder so that the
original source sequence can be reconstructed as accurately as possible.

EEE377 Lecture Notes 3


Error Control Coding
Error control for data integrity may be exercised by means of forward
error correction (FEC).

The discrete source generates information in the form of binary


symbols.
The channel encoder accepts message bits and adds redundancy to
produce encoded data at a higher bit rate.
The channel decoder uses the redundancy to decide which message
bits were actually transmitted.

What is the implication?



The implication of Error Control Coding
Addition of redundancy implies the need for increased transmission
bandwidth
It also adds complexity in the decoding operation
Therefore, there is a design trade-off in the use of error-control coding
to achieve acceptable error performance considering bandwidth and
system complexity.

Types of Error Control Coding


• Block codes
• Convolutional codes



Block Codes
Usually in the form of (n,k) block code where n is the number of bits of
the codeword and k is the number of bits for the binary message

To generate an (n,k) block code, the channel encoder accepts
information in successive k-bit blocks.
For each block it adds (n-k) redundant bits to produce an encoded block
of n bits called a code word.
The (n-k) redundant bits are algebraically related to the k message
bits.
The channel encoder produces bits at a rate called the channel data
rate, R0:

    R0 = (n/k) Rs

where Rs is the bit rate of the information source and k/n is the code
rate.
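As a quick numerical check of the relation above, a short Python sketch (the 4 kbit/s source rate is an assumed example value, not a figure from the notes):

```python
# Channel data rate R0 = (n/k) * Rs for an (n,k) block code.
n, k = 7, 4
Rs = 4000                 # assumed information source bit rate (bit/s)
R0 = (n / k) * Rs         # channel data rate
code_rate = k / n         # code rate = k/n
print(R0, code_rate)      # 7000.0 bit/s, 4/7
```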



Forward Error-Correction (FEC)
The channel encoder accepts information in successive k-bit blocks and for each
block it adds (n-k) redundant bits to produce an encoded block of n-bits called a
code-word.
The channel decoder uses the redundancy to decide which message bits were
actually transmitted.
In this case, whether or not decoding of the received code word is
successful, the receiver performs no further processing.
In other words, if an error is detected in a transmitted code word, the
receiver does not request retransmission of the corrupted code word.

Automatic-Repeat Request (ARQ) scheme


Upon detection of error, the receiver requests a repeat transmission of the
corrupted code word
There are 3 types of ARQ scheme
• Stop-and-Wait
• Continuous ARQ with pullback
• Continuous ARQ with selective repeat
Types of ARQ scheme
Stop-and-wait
• A block of message bits is encoded into a code word and transmitted
• The transmitter stops and waits for feedback from the receiver: either
an acknowledgement of correct receipt of the code word or a
retransmission request due to an error in decoding.
• If a retransmission is requested, the transmitter resends the code word
before moving on to the next block of message bits

What is the implication of this?

Idle time during stop-and-wait is wasted and will reduce the data
throughput

Any idea to overcome this?


Types of ARQ scheme
Continuous ARQ with pullback (or go-back-N)
•Allows the receiver to send a feedback signal while the transmitter is
sending another code word
•The transmitter continues to send a succession of code words until it
receives a retransmission request.
•It then stops and pulls back to the particular code word that was not
correctly decoded and retransmits the complete sequence of code words
starting with the corrupted one.

What is the implication of this?

Code words that are successfully decoded are also retransmitted.


This is a waste of resources
Any idea to overcome this?
Continuous ARQ with selective repeat
•Retransmits the code word that was incorrectly decoded only.
•Thus, eliminates the need for retransmitting the successfully decoded code
words.

[Figure: ARQ schemes (a) stop-and-wait (b) go-back-N (c) selective repeat]


Error Control
Hamming Code

A type of (n, k) linear block code with the following parameters:

• Block length, n = 2^m - 1
• Number of message bits, k = 2^m - m - 1
• Number of parity bits: n - k = m
• m >= 3
Hamming Code – Example

A (7,4) Hamming code with the following parameters:

n = 7; k = 4; m = n - k = 3

The k-by-(n-k) (4-by-3) coefficient matrix, P:

    1 1 0
P = 0 1 1
    1 1 1
    1 0 1

The generator matrix, G = [P | I_k]:

    1 1 0 1 0 0 0
G = 0 1 1 0 1 0 0
    1 1 1 0 0 1 0
    1 0 1 0 0 0 1
Hamming Code – Example

The parity vector, b, is generated by b = m.P (mod 2)

For a given block of message bits m = (m1 m2 m3 m4), we can work out
the parity vector b, and hence the code word c = m.G, for the (7,4)
Hamming Code.

Exercise: Try to work out the codewords for the (7,4) Hamming Code.
Codewords for (7,4) Hamming
Code
Message Word Parity bits Code words
0000 000 0000000
0001 101 1010001
0010 111 1110010
0011 010 0100011
0100 011 0110100
0101 110 1100101
0110 100 1000110
0111 001 0010111
1000 110 1101000
1001 011 0111001
1010 001 0011010
1011 100 1001011
1100 101 1011100
1101 000 0001101
1110 010 0101110
1111 111 1111111
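The table above can be reproduced by computing c = m.G (mod 2) with the G given earlier; a minimal Python sketch:

```python
# Generate all 16 codewords of the (7,4) Hamming code from G = [P | I4].
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]

def encode(m):
    # parity bits b = m.P (mod 2); systematic codeword c = (b, m)
    b = [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return b + list(m)

for w in range(16):
    m = [(w >> (3 - i)) & 1 for i in range(4)]   # message bits m1 m2 m3 m4
    c = encode(m)
    print("".join(map(str, m)), "".join(map(str, c)))
```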
Code parameters
• The Hamming distance
• The Hamming distance between a pair of code vectors, c1 and c2 that have the same number of elements is defined
as the number of locations in which their respective elements differ
• The Hamming weight
• The Hamming weight of a code vector c is defined as the number of nonzero elements in that code vector
• Equivalent to the distance between a code vector and an all-zero code vector
• The minimum distance
• The minimum distance of a linear block code is defined as the smallest Hamming distance between any pair of code
vectors in the code.
• Equivalent to the smallest Hamming weight of the difference between any pair of code vectors
• Equivalent to the smallest Hamming weight of the nonzero code vectors in the code
• Code rate
• The ratio between the number of original message bits and the number of bits of the codeword
• For (n,k) code , code rate = k/n.
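The definitions above can be sketched directly in Python; for a linear code the minimum distance equals the smallest nonzero Hamming weight, so it can be found by enumerating the codewords:

```python
def hamming_distance(c1, c2):
    # number of positions in which the two code vectors differ
    return sum(a != b for a, b in zip(c1, c2))

def hamming_weight(c):
    # number of nonzero elements (= distance to the all-zero vector)
    return sum(c)

# Minimum distance of the (7,4) Hamming code = smallest nonzero weight.
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
codewords = []
for w in range(16):
    m = [(w >> (3 - i)) & 1 for i in range(4)]
    b = [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    codewords.append(b + m)
d_min = min(hamming_weight(c) for c in codewords if any(c))
print(d_min)   # 3
```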
Codewords for (7,4) Hamming
Code
Message Word Parity bits Code words Hamming weight
0000 000 0000000
0001 101 1010001
0010 111 1110010
0011 010 0100011 Min dist=?
0100 011 0110100
0101 110 1100101
0110 100 1000110
0111 001 0010111
1000 110 1101000
1001 011 0111001
1010 001 0011010
1011 100 1001011
1100 101 1011100
1101 000 0001101
1110 010 0101110
1111 111 1111111
Code parameters
The minimum distance of a code determines the error detecting and correcting
capability of the code
Error detection is always possible when the number of transmission errors in a
codeword is less than the minimum distance, so that the erroneous word cannot
be mistaken for another valid code vector
Various degrees of error control capability
• Detect up to l errors per word , dmin >= l + 1
• Correct up to t errors per word, dmin >= 2t + 1
• Correct up to t errors and detect l > t errors per word,
dmin >= t + l + 1
Code rate is a measure of the code efficiency
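Rearranged for a given d_min, the capability bounds above give the detectable and correctable error counts; a small sketch using d_min = 3 (the (7,4) Hamming code):

```python
# Error-control capability implied by the minimum distance
d_min = 3
l_detect = d_min - 1            # detect up to l errors:   d_min >= l + 1
t_correct = (d_min - 1) // 2    # correct up to t errors:  d_min >= 2t + 1
print(l_detect, t_correct)      # 2 1
```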
Hard and Soft Decision
Decoding
• A challenging task in error correction is decoding the codewords that
have been received via noisy channels. Before data is transmitted, the
sender adds redundant bits or parity bits to the message forming
codewords. The codewords are then transmitted via computer
networks. The receiver checks the incoming codewords and performs
the decoding or error correction process to retrieve the original data.
• If there are no errors, i.e. the codewords find an exact matching, then it
is easy to decode the data by eliminating the parity bits. However, if a
match is not found, then more complex decoding mechanisms are
adopted.
• The two categories of decoding techniques are −
Hard Decision Decoding
• Hard decision decoding takes a stream of bits or a block of bits from
the threshold stage of the receiver and decodes each bit by considering it
as definitely 1 or 0. It samples the received pulses and compares their
voltages to threshold values. If a voltage is greater than the threshold
value, it is decoded as 1 and otherwise decoded as 0. The decoding is
done irrespective of how close the voltage is to the threshold.
Soft Decision Decoding
• Soft decision decoding is a class of algorithms that takes a stream of
bits or a block of bits and decodes them by considering a range of
possible values that it may take. It considers the reliability of each
received pulse to form better estimates of input data.
• Soft-decision decoders are often used in Viterbi decoders that are
used for decoding convolutional codes.
Example
• Let us consider that the valid set of codewords that are sent by the source encoder is −
• 001
• 010
• 101
• 111
• Suppose that the bit ‘0’ is encoded as 0.2V and ‘1’ bit is encoded as 4.2V. If the sender
wants to send the codeword 001, it sends pulses of voltages 0.2V, 0.2V, 4.2V.
• Suppose that this message is corrupted by noise during transmission and the
voltages received at the destination are 0.4V, 2.6V, 4.2V.
• The following two cases show decoding by hard decision method and soft decision
methods.
Hard Decision Decoding
• Let us consider that the threshold voltage chosen by the hard decision
decoder is 2.2V. Any voltage received above 2.2V is considered as 1
bit and any voltage received below 2.2V is considered 0 bit.
• So, the hard decision decoder considers the incoming voltages 0.4V,
2.6V, 4.2V as 011.
• Since 011 is not a valid codeword, the hard decision decoder
compares the Hamming distances of this data with the set of valid
codewords and finds the minimum Hamming distance.
In this case, there are 3 codewords with the minimum
Hamming distance of 1. The decoder chooses any of them
randomly with a probability of 1/3.
Thus it can be seen that hard decision decoding has scope for
ambiguity.
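The hard-decision steps above can be sketched as follows, with the threshold and voltages taken from the example:

```python
codebook = ["001", "010", "101", "111"]
received = [0.4, 2.6, 4.2]      # received voltages
threshold = 2.2

# Threshold each sample to a definite bit.
bits = "".join("1" if v > threshold else "0" for v in received)

# Compare against the codebook by Hamming distance.
dist = {c: sum(a != b for a, b in zip(bits, c)) for c in codebook}
best = min(dist.values())
candidates = [c for c, d in dist.items() if d == best]
print(bits, candidates)   # '011'; three codewords tie at distance 1
```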
Soft Decision Decoding
• There are various soft decision decoding algorithms. Let us consider
that in this situation, the soft decision decoder calculates the
Euclidean distances between the received voltages and the voltages
of the codewords received. It then finds the minimum Euclidean
distance and selects the corresponding codeword.
It can be seen that the minimum Euclidean distance
corresponds to the codeword 001, which has been actually sent
by the source.
Thus soft-decision decoding technique provides better error
correction capability than hard decision decoding.
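A sketch of the soft-decision calculation, using the same voltage mapping as the example:

```python
import math

codebook = ["001", "010", "101", "111"]
received = [0.4, 2.6, 4.2]
volts = {"0": 0.2, "1": 4.2}    # signalling levels from the example

def euclidean(cw):
    # distance between received voltages and the codeword's nominal voltages
    return math.sqrt(sum((volts[b] - v) ** 2 for b, v in zip(cw, received)))

best = min(codebook, key=euclidean)
print(best)   # '001' - the codeword actually sent
```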
Product Codes
DIFFERENT TYPES OF CODES
Repetition code
In coding theory, the repetition code is one of the most basic
error-correcting codes. In order to transmit a message over a noisy channel that
may corrupt the transmission in a few places, the idea of the repetition code is to
just repeat the message several times. The hope is that the channel corrupts only
a minority of these repetitions. This way the receiver will notice that a
transmission error occurred since the received data stream is not the repetition
of a single message, and moreover, the receiver can recover the original message
by looking at the received message in the data stream that occurs most often.
Because of its poor error-correcting performance and the low ratio between
information symbols and actually transmitted symbols, other
error-correction codes are preferred in most cases. The chief attraction of the
repetition code is its ease of implementation.
Code parameters
In the case of a binary repetition code, there exist two code words - all ones and all
zeros - which have a length of n. Therefore, the minimum Hamming distance of the code equals its length
n. This gives the repetition code an error-correcting capacity of floor((n-1)/2) (i.e. it will correct up to floor((n-1)/2) errors in
any code word). If the length of a binary repetition code is odd, then it is a perfect code. The binary repetition
code of length 3 is equivalent to the (3,1) Hamming code.

Example
Consider a binary repetition code of length 3. The user wants to transmit the information bits 101. The encoding maps each
bit to either the all-ones or the all-zeros code word, so we get 111 000 111, which is transmitted.
Say three errors corrupt the transmitted bits and the received sequence is 111 010 100. Decoding is usually done by a simple
majority decision for each code word. That leads us to 100 as the decoded information bits, because fewer than two errors
occurred in the first and second code words, so the majority of the bits are correct. But in the third code word two bits are corrupted,
which results in an erroneous information bit, since two errors lie above the error-correcting capacity.
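The encoding and majority-vote decoding described above, as a short sketch:

```python
def encode(bits, n=3):
    # repeat each information bit n times
    return "".join(b * n for b in bits)

def decode(stream, n=3):
    # majority decision on each n-bit block
    return "".join(
        "1" if stream[i:i + n].count("1") > n // 2 else "0"
        for i in range(0, len(stream), n)
    )

print(encode("101"))         # 111000111
print(decode("111010100"))   # 100 - third bit wrong (two errors in that block)
```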
Applications
• Despite their poor performance as stand-alone codes, use in Turbo code-like iteratively
decoded concatenated coding schemes, such as repeat-accumulate (RA) and accumulate-
repeat-accumulate (ARA) codes, allows for surprisingly good error correction
performance.
• Repetition codes are one of the few known codes whose code rate can be automatically
adjusted to varying channel capacity, by sending more or less parity information as
required to overcome the channel noise, and it is the only such code known for non-
erasure channels. Practical adaptive codes for erasure channels have been invented only
recently, and are known as fountain codes.
• Some UARTs, such as the ones used in the FlexRay protocol, use a majority filter to
ignore brief noise spikes. This spike-rejection filter can be seen as a kind of repetition
decoder.
Perfect Codes
Cyclic Codes

A subclass of linear codes having a cyclic structure.

A code vector can be expressed in the form

c = ( c_{n-1} c_{n-2} ... c_1 c_0 )

A new code vector in the code can be produced by cyclically shifting
another code vector.
For example, a cyclic shift of all n bits one position to the left gives

c' = ( c_{n-2} c_{n-3} ... c_1 c_0 c_{n-1} )

A second shift produces another code vector, c''

c'' = ( c_{n-3} c_{n-4} ... c_1 c_0 c_{n-1} c_{n-2} )

Cyclic Codes

The cyclic property can be treated mathematically by associating a
code vector c with the code polynomial c(X):

c(X) = c_0 + c_1 X + c_2 X^2 + ... + c_{n-1} X^{n-1}

The power of X denotes the position of the codeword bit.
The coefficients are either 1s or 0s.

An (n,k) cyclic code is defined by a generator polynomial, g(X):

g(X) = X^{n-k} + g_{n-k-1} X^{n-k-1} + ... + g_1 X + 1

The coefficients g_i are such that g(X) is a factor of X^n + 1.

Cyclic Codes – Encoding Procedure

To encode an (n,k) cyclic code:

1. Multiply the message polynomial m(X) by X^{n-k}

2. Divide X^{n-k} m(X) by the generator polynomial g(X) to obtain the
remainder polynomial b(X)

3. Add b(X) to X^{n-k} m(X) to obtain the code polynomial

Non-Systematic Cyclic Codes - Example

The (7,4) Hamming Code

For message sequence 1001

The message polynomial, m(X) = 1 + X^3

1. Multiplying by X^{n-k} (X^3) gives X^3 + X^6

2. Divide by the generator polynomial g(X), which is a factor of
X^n + 1. The (7,4) Hamming code is defined by generator polynomials
g(X) that are factors of X^7 + 1.

With n = 7, we can factorize X^7 + 1 into three irreducible polynomials:

X^7 + 1 = (1 + X)(1 + X^2 + X^3)(1 + X + X^3)

Cyclic Codes - Example

For example, choosing the generator polynomial 1 + X + X^3 and
performing the division, we get the remainder b(X) = X^2 + X

3. Add b(X) to obtain the code polynomial, c(X):

c(X) = X + X^2 + X^3 + X^6

So the codeword for message sequence 1001 is 0111001


Cyclic Codes – Exercise

Find the codeword for the (7,4) cyclic Hamming code using the
generator polynomial 1 + X + X^3 for the message sequence 0011
Cyclic Codes – Implementation

The cyclic code is implemented by a shift-register encoder with
(n-k) stages.

[Figure: shift-register encoder with stages r_0 ... r_{n-k-1}]

Encoding starts with the feedback switch closed, the output switch in the
message bit position, and the register initialised to the all-zero state.
The k message bits are shifted into the register and delivered to the
transmitter.
After k shift cycles, the register contains the b check bits.
The feedback switch is now opened and the output switch is moved to the
check bits to deliver them to the transmitter.
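A minimal simulation of such an (n-k)-stage encoder for g(X) = 1 + X + X^3 follows. The register update below is one common division-circuit convention, assumed here rather than read off the notes' figure:

```python
def shift_register_parity(message_bits):
    # message_bits are the coefficients m0..m(k-1) of m(X);
    # the encoder shifts in the highest-degree coefficient first.
    # Registers r0, r1, r2 implement division by g(X) = 1 + X + X^3.
    r0 = r1 = r2 = 0
    for bit in reversed(message_bits):
        fb = bit ^ r2            # feedback from the last stage
        r2 = r1
        r1 = r0 ^ fb             # tap for the X term of g(X)
        r0 = fb                  # tap for the constant term of g(X)
    return [r0, r1, r2]          # check bits b0, b1, b2

print(shift_register_parity([0, 0, 1, 1]))   # message 0011 -> check bits 010
print(shift_register_parity([1, 0, 0, 1]))   # message 1001 -> check bits 011
```

Both results agree with the (7,4) codeword table given earlier.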
Cyclic Codes – Implementation example

The shift-register encoder for the (7,4) Hamming Code has (7-4=3) stages

When the input message is 0011, after 4 shift cycles the redundancy bits are delivered
Cyclic Codes – Implementation Exercise

The shift-register encoder for the (7,4) Hamming Code has (7-4=3) stages

When the input message is 1001, after 4 shift cycles the redundancy bits are delivered

Shift | Input bit | Register before | Register after   (shown as r2 r1 r0)
  1   |     1     |      000        |      011
  2   |     0     |      011        |      110
  3   |     0     |      110        |      111
  4   |     1     |      111        |      110

The check bits are 011
Cyclic Codes – Implementation Exercise

The shift-register encoder for the (7,4) Hamming Code has (7-4=3) stages

When the input message is 1100?
