Information Theory, Coding and Cryptography Unit-5 by Arun Pratap Singh

This document covers advanced coding techniques, including Reed-Solomon codes, space-time codes, concatenated codes, turbo codes, and low-density parity-check (LDPC) codes. It describes the basic principles and structures of space-time block codes, concatenated codes built from inner and outer codes, turbo codes (which approach channel capacity), and iterative decoding of turbo and LDPC codes on graphs. Examples of turbo-code encoding and decoding are also presented.


UNIT : V

INFORMATION THEORY, CODING & CRYPTOGRAPHY (MCSE 202)


PREPARED BY ARUN PRATAP SINGH 5/26/14 MTECH 2nd SEMESTER





ADVANCED CODING TECHNIQUES : REED-SOLOMON CODES

SPACE TIME CODES :
A space-time code (STC) is a method employed to improve the reliability of
data transmission in wireless communication systems using multiple transmit antennas. STCs
rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at
least some of them may survive the physical path between transmission and reception in a good
enough state to allow reliable decoding.
Space-time codes may be split into two main types:

Space-time trellis codes (STTCs) distribute a trellis code over multiple antennas and multiple time-slots, providing both coding gain and diversity gain.
Space-time block codes (STBCs) act on a block of data at once (similarly to block codes) and provide diversity gain, but no coding gain.
STC may be further subdivided according to whether the receiver knows the channel impairments.
In coherent STC, the receiver knows the channel impairments through training or some other form
of estimation. These codes have been studied more widely, and division algebras over number
fields have now become the standard tool for constructing such codes.
In noncoherent STC, the receiver does not know the channel impairments but knows the statistics of the channel. In differential space-time codes, neither the channel nor the statistics of the channel are available.
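For concreteness, the sketch below encodes symbol pairs with the Alamouti scheme, the classic two-antenna STBC; the scheme is not named in the text above, and the function name is illustrative.

    import numpy as np

    def alamouti_encode(symbols):
        """Alamouti space-time block code for two transmit antennas.
        Each symbol pair (s1, s2) is sent over two symbol periods:
            t1: antenna 1 -> s1,         antenna 2 -> s2
            t2: antenna 1 -> -conj(s2),  antenna 2 -> conj(s1)
        This gives full transmit diversity at rate 1 but, as noted above
        for STBCs, no coding gain."""
        pairs = np.asarray(symbols, dtype=complex).reshape(-1, 2)
        return [np.array([[s1, s2],
                          [-np.conj(s2), np.conj(s1)]]) for s1, s2 in pairs]

    # One 2x2 block per symbol pair (rows = time slots, columns = antennas).
    print(alamouti_encode([1 + 1j, 1 - 1j])[0])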







CONCATENATED CODES :
Concatenated codes are error-correcting codes that are constructed from two or more simpler
codes in order to achieve good performance with reasonable complexity. Originally introduced by
Forney in 1965 to address a theoretical issue, they became widely used in space communications
in the 1970s. Turbo codes and other modern capacity-approaching codes may be regarded as elaborations of this approach.


The inner code is a short block code like that envisioned by Shannon, with rate r close to C, block length n, and therefore 2^(nr) codewords. The inner decoder decodes optimally, so its complexity increases exponentially with n; for large enough n it achieves a moderately low decoding error probability.
The outer code is an algebraic Reed-Solomon (RS) code (Reed and Solomon, 1960) of length 2^(nr) over the finite field with 2^(nr) elements, each element corresponding to an inner codeword. (The overall block length of the concatenated code is therefore N = n * 2^(nr), which is exponential in n, so the complexity of the inner decoder is only linear in N.) The outer decoder uses an algebraic error-correction algorithm whose complexity is only polynomial in the RS code length 2^(nr); it can drive the ultimate probability of decoding error as low as desired.
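The layering can be sketched in a few lines of Python. This toy is not Forney's construction itself: a (3,1) repetition code stands in for the short inner block code, and a single XOR parity symbol stands in for the far more powerful Reed-Solomon outer code; only the inner/outer structure is the point.

    def inner_encode(symbol_bits):
        # Toy inner code: (3,1) repetition of each bit of one outer symbol.
        return [b for b in symbol_bits for _ in range(3)]

    def inner_decode(channel_bits):
        # Majority vote per original bit plays the optimal inner decoder.
        return [int(sum(channel_bits[i:i + 3]) > 1)
                for i in range(0, len(channel_bits), 3)]

    def outer_encode(symbols):
        # Toy outer code: append one XOR parity symbol. A real concatenated
        # scheme would use a Reed-Solomon code over GF(2^m) here instead.
        parity = [0] * len(symbols[0])
        for s in symbols:
            parity = [p ^ b for p, b in zip(parity, s)]
        return symbols + [parity]

    data = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]   # three 4-bit symbols
    tx = [inner_encode(s) for s in outer_encode(data)]  # outer code, then inner
    rx = [inner_decode(s) for s in tx]                  # noiseless channel here
    assert rx[:len(data)] == data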






Figure 3: Simple repeat-accumulate code with iterative decoding.
All capacity-approaching codes are now regarded as "codes on graphs," in which a (possibly large) number of simple codes are interconnected according to some graph topology. Any such code may be regarded as a (possibly elaborate) concatenated code.


TURBO CODING AND LDPC CODES :

In information theory, turbo codes (originally in French Turbocodes) are a class of high-
performance forward error correction (FEC) codes developed in 1993, which were the first
practical codes to closely approach the channel capacity, a theoretical maximum for the code
rate at which reliable communication is still possible given a specific noise level. Turbo codes are
finding use in 3G mobile communications and (deep space) satellite communications as well as
other applications where designers seek to achieve reliable information transfer over bandwidth-
or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes
are nowadays competing with LDPC codes, which provide similar performance.
The name "turbo code" arose from the feedback loop used during normal turbo code decoding,
which was analogized to the exhaust feedback used for engine turbocharging.Hagenauer has
argued the term turbo code is a misnomer since there is no feedback involved in the encoding
process.

An example encoder
There are many different instances of turbo codes, using different component encoders, input/output
ratios, interleavers, and puncturing patterns. This example encoder implementation describes a
classic turbo encoder, and demonstrates the general design of parallel turbo codes.
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit block of
payload data. The second sub-block is n/2 parity bits for the payload data, computed using a recursive
systematic convolutional code (RSC code). The third sub-block is n/2 parity bits for a
known permutation of the payload data, again computed using an RSC convolutional code. Thus, two
redundant but different sub-blocks of parity bits are sent with the payload. The complete block
has m + n bits of data with a code rate of m/(m + n). The permutation of the payload data is carried out
by a device called an interleaver.
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as depicted in the figure, which are connected to each other using a concatenation scheme called parallel concatenation.
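A minimal sketch of this parallel concatenation is given below. It assumes the classic RSC component code with octal generators (7, 5) and a random interleaver; the actual component code, interleaver design, and any puncturing are implementation choices not fixed by the description above.

    import random

    def rsc_parity(bits):
        """Parity stream of a rate-1/2 RSC encoder with octal generators
        (feedback 7, feedforward 5); the systematic stream is the input."""
        s1 = s2 = 0
        parity = []
        for d in bits:
            a = d ^ s1 ^ s2        # recursive (feedback) bit, taps 1 1 1
            parity.append(a ^ s2)  # parity output, feedforward taps 1 0 1
            s1, s2 = a, s1         # shift-register update
        return parity

    def turbo_encode(payload, interleaver):
        """Systematic bits x_k, parity y1_k from C1 on the natural order,
        parity y2_k from C2 on the interleaved order."""
        x = list(payload)
        return x, rsc_parity(x), rsc_parity([x[i] for i in interleaver])

    m = 16
    rng = random.Random(1)
    pi = rng.sample(range(m), m)                 # illustrative interleaver
    x, y1, y2 = turbo_encode([rng.randint(0, 1) for _ in range(m)], pi)
    # Without puncturing this sends m + 2m bits, i.e. rate 1/3; puncturing
    # half of each parity stream gives the n/2 + n/2 layout described above.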


In the figure, M is a memory register. The delay line and interleaver force the input bits dk to appear in different sequences. At the first iteration, the input sequence dk appears at both outputs of the encoder, xk and y1k or y2k, due to the encoder's systematic nature. If the encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively equal to

R1 = n1 / (2*n1 + n2)
R2 = n2 / (n1 + 2*n2)


The decoder
The decoder is built in a similar way to the above encoder: two elementary decoders, DEC1 and DEC2, are interconnected to each other, but in a serial way, not in parallel. DEC1 operates at the lower rate R1; it is thus intended for the C1 encoder, and DEC2 for C2 correspondingly. DEC1 yields a soft decision, which causes a delay L1; the same delay is caused by the delay line in the encoder. DEC2's operation causes a delay L2.


An interleaver installed between the two decoders is used here to scatter the error bursts coming from the DEC1 output. The DI block is a demultiplexing and insertion module: it works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both the y1k and y2k inputs with padding bits (zeros).
Consider a memoryless AWGN channel, and assume that at the k-th step the decoder receives a pair of random variables

xk = (2*dk - 1) + ak
yk = (2*Yk - 1) + bk

where ak and bk are independent noise components having the same variance sigma^2, and Yk is the k-th bit from the encoder's parity output.
Redundant information is demultiplexed and sent through DI to DEC1 (when yk = y1k) and to DEC2 (when yk = y2k).
DEC1 yields a soft decision, i.e.

Lambda(dk) = log( p(dk = 1) / p(dk = 0) )

and delivers it to DEC2. Lambda(dk) is called the logarithm of the likelihood ratio (LLR). p(dk = i), for i = 0, 1, is the a posteriori probability (APP) of the data bit dk, which shows the probability of interpreting a received bit dk as i. Taking the LLR into account, DEC2 yields a hard decision, i.e., a decoded bit.
It is known that the Viterbi algorithm is unable to calculate the APP, so it cannot be used in DEC1; instead, a modified BCJR algorithm is used. For DEC2, the Viterbi algorithm is an appropriate one.

However, the depicted structure is not an optimal one, because DEC1 uses only a proper fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line in the figure).

Soft decision approach-
The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1 and is also called a soft bit. The integer could be drawn from the range [-127, 127], where:
-127 means "certainly 0"
-100 means "very likely 0"
0 means "it could be either 0 or 1"
100 means "very likely 1"
127 means "certainly 1"
and so on for the values in between.
This introduces a probabilistic aspect to the data-stream from the front end, but it conveys more
information about each bit than just 0 or 1.
For example, for each bit, the front end of a traditional wireless receiver has to decide whether an internal analog voltage is above or below a given threshold voltage level. For a turbo-code decoder, the front end would instead provide an integer measure of how far the internal voltage is from the given threshold.
To decode the (m + n)-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the n/2-bit parity sub-blocks. Both decoders use the sub-block of m likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
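A sketch of such a front end for BPSK over an AWGN channel is shown below; the scaling 2y/sigma^2 is the standard log-likelihood ratio for that channel, while the quantization scale is an illustrative choice.

    def soft_bits(samples, noise_var=0.5, scale=32):
        """Map received analog samples (BPSK: -1 nominal for bit 0, +1 for
        bit 1) to clipped integer soft bits in [-127, 127], positive values
        favoring 1 as in the table above."""
        out = []
        for y in samples:
            llr = 2.0 * y / noise_var                    # AWGN LLR for the bit
            out.append(max(-127, min(127, round(scale * llr))))
        return out

    print(soft_bits([1.02, -0.87, 0.05, -2.30]))
    # -> [127, -111, 6, -127]: certainly 1, very likely 0,
    #    could be either, certainly 0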

LDPC CODES :
In information theory, a low-density parity-check (LDPC) code is a linear error-correcting code, a method of transmitting a message over a noisy transmission channel, and is constructed using a sparse bipartite graph. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close (or even arbitrarily close on the BEC) to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. The noise threshold defines an upper bound on the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear in their block length.

LDPC codes are finding increasing use in applications requiring reliable and highly efficient information transfer over bandwidth- or return-channel-constrained links in the presence of corrupting noise. Implementation of LDPC codes has lagged behind that of other codes, notably turbo codes; the fundamental patent for turbo codes expired on August 29, 2013.

Function-
LDPC codes are defined by a sparse parity-check matrix. This sparse matrix is often randomly generated, subject to the sparsity constraints; LDPC code construction is discussed later. These codes were first designed by Robert Gallager in 1962.
Below is a graph fragment of an example LDPC code using Forney's factor graph notation. In this graph, n variable nodes in the top of the graph are connected to (n - k) constraint nodes in the bottom of the graph. This is a popular way of graphically representing an (n, k) LDPC code. The bits of a valid message, when placed on the T's at the top of the graph, satisfy the graphical constraints. Specifically, all lines connecting to a variable node (box with an '=' sign) have the same value, and all values connecting to a factor node (box with a '+' sign) must sum, modulo two, to zero (in other words, they must sum to an even number).

Ignoring any lines going out of the picture, there are eight possible six-bit strings corresponding
to valid codewords: (i.e., 000000, 011001, 110010, 101011, 111100, 100101, 001110, 010111).
This LDPC code fragment represents a three-bit message encoded as six bits. Redundancy is
used, here, to increase the chance of recovering from channel errors. This is a (6, 3) linear code,
with n = 6 and k = 3.
Once again ignoring lines going out of the picture, the parity-check matrix representing this graph fragment is

H = [ 1 1 1 1 0 0 ]
    [ 0 0 1 1 0 1 ]
    [ 1 0 0 1 1 0 ]

In this matrix, each row represents one of the three parity-check constraints, while each column
represents one of the six bits in the received codeword.
In this example, the eight codewords can be obtained by putting the parity-check matrix H into the form [ -P^T | I_(n-k) ] through basic row operations in GF(2):

H = [ 1 1 1 1 0 0 ]    ->    [ 1 1 1 | 1 0 0 ]
    [ 0 0 1 1 0 1 ]          [ 0 1 1 | 0 1 0 ]
    [ 1 0 0 1 1 0 ]          [ 1 1 0 | 0 0 1 ]

From this, the generator matrix G can be obtained as [ I_k | P ] (noting that in the special case of a binary code, -P^T = P^T), or specifically:

G = [ 1 0 0 1 0 1 ]
    [ 0 1 0 1 1 1 ]
    [ 0 0 1 1 1 0 ]

Finally, by multiplying all eight possible 3-bit strings by G, all eight valid codewords are obtained. For example, the codeword for the bit-string '101' is obtained by:

( 1 0 1 ) * G = ( 1 0 1 0 1 1 )
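The algebra above is easy to check mechanically. The snippet below, using the H and G reconstructed above, encodes all eight 3-bit messages and verifies that every resulting codeword satisfies all three parity checks.

    import numpy as np

    H = np.array([[1, 1, 1, 1, 0, 0],       # the three parity checks
                  [0, 0, 1, 1, 0, 1],
                  [1, 0, 0, 1, 1, 0]])
    G = np.array([[1, 0, 0, 1, 0, 1],       # systematic generator [I | P]
                  [0, 1, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1, 0]])

    for m in range(8):
        msg = np.array([(m >> 2) & 1, (m >> 1) & 1, m & 1])
        cw = msg @ G % 2                     # encode over GF(2)
        assert not (H @ cw % 2).any()        # H c^T = 0 for every codeword
        print(msg, '->', cw)                 # prints the eight codewords above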



Example encoder
Figure 1 illustrates the functional components of most LDPC encoders.


LDPC Encoder
During the encoding of a frame, the input data bits (D) are repeated and distributed to a set of constituent encoders. The constituent encoders are typically accumulators, and each accumulator is used to generate a parity symbol. A single copy of the original data (S_0..K-1) is transmitted with the parity bits (P) to make up the code symbols. The S bits from each constituent encoder are discarded.
In some cases a parity bit is encoded by a second constituent code (serial concatenation), but more typically the constituent encoding for the LDPC is done in parallel.
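A minimal sketch of this repeat, distribute, and accumulate structure (cf. the repeat-accumulate code of Figure 3) is given below; the repetition factor and the random permutation are illustrative choices, not any particular standardized code.

    import random

    def repeat_accumulate_encode(data, q=3, seed=0):
        """Repeat each data bit q times, permute ('distribute'), then run
        the stream through a 2-state accumulator to form the parity bits;
        the systematic data is sent alongside, as described above."""
        repeated = [b for b in data for _ in range(q)]            # repeat
        idx = random.Random(seed).sample(range(len(repeated)), len(repeated))
        acc, parity = 0, []
        for i in idx:                                             # distribute
            acc ^= repeated[i]                                    # accumulate
            parity.append(acc)
        return data + parity

    print(repeat_accumulate_encode([1, 0, 1, 1]))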
In an example using the DVB-S2 rate 2/3 code, the encoded block size is 64800 symbols (N = 64800) with 43200 data bits (K = 43200) and 21600 parity bits (M = 21600). Each constituent code (check node) encodes 16 data bits, except for the first parity bit, which encodes 8 data bits. The first 4680 data bits are repeated 13 times (used in 13 parity codes), while the remaining data bits are used in 3 parity codes (an irregular LDPC code).

For comparison, classic turbo codes typically use two constituent codes configured in parallel, each of which encodes the entire input block (K) of data bits. These constituent encoders are recursive systematic convolutional (RSC) codes of moderate depth (8 or 16 states) that are separated by a code interleaver which interleaves one copy of the frame.
The LDPC code, in contrast, uses many low-depth constituent codes (accumulators) in parallel, each of which encodes only a small portion of the input frame. The many constituent codes can be viewed as many low-depth (2-state) 'convolutional codes' that are connected via the repeat and distribute operations. The repeat and distribute operations perform the function of the interleaver in the turbo code.
The ability to more precisely manage the connections of the various constituent codes and the level of redundancy for each input bit gives more flexibility in the design of LDPC codes, which can lead to better performance than turbo codes in some instances. Turbo codes still seem to perform better than LDPC codes at low code rates, or at least the design of well-performing low-rate codes is easier for turbo codes.
As a practical matter, the hardware that forms the accumulators is reused during the encoding process. That is, once a first set of parity bits is generated and stored, the same accumulator hardware is used to generate the next set of parity bits.

Decoding
As with other codes, maximum-likelihood decoding of an LDPC code on the binary symmetric channel is an NP-complete problem. Performing optimal decoding for codes of any useful size is therefore not practical.
However, sub-optimal techniques based on iterative belief propagation decoding give excellent results and can be practically implemented. The sub-optimal decoding techniques view each parity check that makes up the LDPC code as an independent single-parity-check (SPC) code. Each SPC code is decoded separately using soft-in soft-out (SISO) techniques such as SOVA, BCJR, MAP, and other derivatives thereof. The soft-decision information from each SISO decoding is cross-checked and updated with other redundant SPC decodings of the same information bit. Each SPC code is then decoded again using the updated soft-decision information. This process is iterated until a valid codeword is achieved or decoding is exhausted. This type of decoding is often referred to as sum-product decoding.
The decoding of the SPC codes is often referred to as the "check node" processing, and the cross-
checking of the variables is often referred to as the "variable-node" processing.
In a practical LDPC decoder implementation, sets of SPC codes are decoded in parallel to increase
throughput.
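As an illustration of one such SISO step, the function below implements the exact single-parity-check ("tanh rule") update used in sum-product decoding: for each bit on a check, it returns the extrinsic LLR implied by the other bits on that check. The function name is illustrative, and LLRs here use the common convention L = log(P(bit = 0)/P(bit = 1)).

    import math

    def spc_extrinsic(llrs):
        """Sum-product check-node update for one single-parity-check code:
        extrinsic LLR of each bit from the LLRs of all the other bits."""
        out = []
        for i in range(len(llrs)):
            prod = 1.0
            for j, l in enumerate(llrs):
                if j != i:
                    prod *= math.tanh(l / 2.0)
            prod = max(-0.999999, min(0.999999, prod))  # guard atanh(+/-1)
            out.append(2.0 * math.atanh(prod))
        return out

    # Three bits on one check; each extrinsic value carries the parity
    # implication of the other two bits on the check.
    print([round(v, 2) for v in spc_extrinsic([2.0, -1.5, 3.0])])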

In contrast, belief propagation on the binary erasure channel is particularly simple: it consists of iterative constraint satisfaction.
For example, consider that the valid codeword, 101011, from the example above, is transmitted across
a binary erasure channel and received with the first and fourth bit erased to yield ?01?11. Since
the transmitted message must have satisfied the code constraints, the message can be represented
by writing the received message on the top of the factor graph.
In this example, the first bit cannot yet be recovered, because all of the constraints connected to it
have more than one unknown bit. In order to proceed with decoding the message, constraints
connecting to only one of the erased bits must be identified. In this example, only the second constraint
suffices. Examining the second constraint, the fourth bit must have been zero, since only a zero in that
position would satisfy the constraint.
This procedure is then iterated. The new value for the fourth bit can now be used in conjunction with
the first constraint to recover the first bit as seen below. This means that the first bit must be a one to
satisfy the leftmost constraint.

Thus, the message can be decoded iteratively. For other channel models, the messages passed
between the variable nodes and check nodes are real numbers, which express probabilities and
likelihoods of belief.
This result can be validated by multiplying the corrected codeword r by the parity-check matrix H:

z = H r^T

Because the outcome z (the syndrome) of this operation is the 3 x 1 zero vector, the resulting codeword r is successfully validated.
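The whole erasure example can be reproduced in a few lines. The sketch below runs the peeling procedure just described on the received word ?01?11 (erasures written as None), using the H reconstructed earlier, then validates the result with the same syndrome check.

    H = [[1, 1, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 1],
         [1, 0, 0, 1, 1, 0]]

    def peel_decode(received, H):
        """BEC peeling decoder: repeatedly find a check with exactly one
        erased bit (None) and solve it so the check's parity is even."""
        r = list(received)
        progress = True
        while progress and None in r:
            progress = False
            for row in H:
                unknown = [j for j, h in enumerate(row) if h and r[j] is None]
                if len(unknown) == 1:
                    j = unknown[0]
                    r[j] = sum(r[k] for k, h in enumerate(row)
                               if h and k != j) % 2
                    progress = True
        return r

    r = peel_decode([None, 0, 1, None, 1, 1], H)        # received ?01?11
    z = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
    assert z == [0, 0, 0]                               # syndrome H r^T = 0
    print(r)                                            # [1, 0, 1, 0, 1, 1]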

While illustrative, this erasure example does not show the use of soft-decision decoding or soft-
decision message passing, which is used in virtually all commercial LDPC decoders.

NESTED CODES :




BLOCK CODES :


CONVOLUTIONAL CHANNEL CODING :





DISTANCE PROPERTIES AND TRANSFER FUNCTION REPRESENTATION :


DECODING CONVOLUTIONAL CODES :
Several algorithms exist for decoding convolutional codes. For relatively small values of k,
the Viterbi algorithm is universally used as it provides maximum likelihood performance and is
highly parallelizable. Viterbi decoders are thus easy to implement in VLSI hardware and in
software on CPUs with SIMD instruction sets.
Longer constraint length codes are more practically decoded with any of several sequential
decoding algorithms, of which the Fano algorithm is the best known. Unlike Viterbi decoding,
sequential decoding is not maximum likelihood but its complexity increases only slightly with
constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used
in the Pioneer program of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-
decoded codes, usually concatenated with large Reed-Solomon error correction codes that
steepen the overall bit-error-rate curve and produce extremely low residual undetected error
rates.
Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the soft-output Viterbi algorithm (SOVA). Maximum a posteriori (MAP) soft decisions for each bit can be obtained by use of the BCJR algorithm.
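For illustration, below is a minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code with the common octal generators (7, 5); the code choice, the unterminated trellis, and the function names are simplifications for the sketch.

    def conv_encode(bits):
        """Rate-1/2, K=3 convolutional encoder, octal generators (7, 5)."""
        s1 = s2 = 0
        out = []
        for d in bits:
            out += [d ^ s1 ^ s2, d ^ s2]   # generator taps 111 and 101
            s1, s2 = d, s1
        return out

    def viterbi_decode(rx, n_bits):
        """Hard-decision Viterbi (Hamming-metric MLSE) for the code above.
        States are the two shift-register bits; with no trellis termination,
        the survivor at the best final state is returned."""
        INF = float('inf')
        metric = {s: (0 if s == 0 else INF) for s in range(4)}
        paths = {s: [] for s in range(4)}
        for t in range(n_bits):
            r0, r1 = rx[2 * t], rx[2 * t + 1]
            new_metric = {s: INF for s in range(4)}
            new_paths = {}
            for s in range(4):
                if metric[s] == INF:
                    continue
                s1, s2 = (s >> 1) & 1, s & 1
                for d in (0, 1):
                    m = metric[s] + ((d ^ s1 ^ s2) != r0) + ((d ^ s2) != r1)
                    ns = (d << 1) | s1
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [d]
            metric, paths = new_metric, new_paths
        return paths[min(metric, key=metric.get)]

    msg = [1, 0, 1, 1, 0, 0]
    rx = conv_encode(msg)
    rx[3] ^= 1                                  # flip one channel bit
    assert viterbi_decode(rx, len(msg)) == msg  # the error is corrected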

VITERBI ALGORITHM FOR MLSE :


