Coding Theory

Communication System

[Block diagram: transmitter chain, source encoder (voice, image, data) → CRC encoder → channel encoder → interleaver → modulator; channel with impairments (noise, fading); receiver chain, demodulator → deinterleaver → channel decoder → CRC check → source decoder.]

Error control coding
• Limits in communication systems
  – Bandwidth limit
  – Power limit
  – Channel impairments
    • Attenuation, distortion, interference, noise, and fading
• Error control techniques are used in digital communication systems to achieve reliable transmission under these limits.

Power limit vs. Bandwidth limit

[Figure: trade-off between power-limited and bandwidth-limited operating regions]

Error control coding
• Advantages of error control coding
  – In principle:
    • Every channel has a capacity C.
    • If you transmit information at a rate R < C, then error-free transmission is possible (see the sketch after this list).
  – In practice:
    • Reduce the error rates
    • Reduce the transmitted power requirements
    • Increase the operational range of a communication system
• Classification of error control techniques
  – Forward error correction (FEC)
  – Error detection: cyclic redundancy check (CRC)
  – Automatic repeat request (ARQ)

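As a concrete illustration of the statement that every channel has a capacity C, here is a minimal Python sketch (not from the slides) evaluating two standard capacity formulas: C = 1 − H(p) for the binary symmetric channel and C = B·log2(1 + SNR) for the band-limited AWGN channel. The numeric inputs are assumed examples.

import math

def capacity_bsc(p):
    # C = 1 - H(p) bits per channel use, with H the binary entropy function.
    if p == 0 or p == 1:
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

def capacity_awgn(bandwidth_hz, snr_linear):
    # C = B log2(1 + SNR) bits per second for a band-limited AWGN channel.
    return bandwidth_hz * math.log2(1 + snr_linear)

print(capacity_bsc(0.1))            # ~0.531 bits per channel use
print(capacity_awgn(3000, 1000))    # ~29901 bits per second

Shannon's theorem says any rate R below these values can be achieved with arbitrarily small error probability by a sufficiently good (and long) code.
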
History
• Shannon (1948)
  – R: transmission rate for data
  – C: channel capacity
  – If R < C, it is possible to transfer information at error rates that can be reduced to any desired level.

History
• Hamming codes (1950)
  – Single error correcting
• Convolutional codes (Elias, 1956)
• BCH codes (1960), RS codes (1960)
  – Multiple error correcting
• Goppa codes (1970)
  – Generalization of BCH codes
• Algebraic geometric codes (1982)
  – Generalization of RS codes
  – Constructed over algebraic curves
• Turbo codes (1993)
• LDPC codes

Channel
• Memoryless channel
  – The probability of error is independent from one symbol to the next.
• Symmetric channel
  – P(i | j) = P(j | i) for all symbol values i and j
    Ex) binary symmetric channel (BSC)
• Additive white Gaussian noise (AWGN) channel
• Burst error channel
• Compound (or diffuse) channel
  – The errors are a mixture of bursts and random errors.
• Many codes work best if errors are random.
  – An interleaver and deinterleaver are added to disperse bursts (a sketch follows).

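A minimal Python sketch of the interleaving idea, assuming a simple rows × cols block interleaver (the slides do not specify an interleaver type): symbols are written in by rows and read out by columns, so a channel burst is dispersed across several codewords after deinterleaving.

def interleave(symbols, rows, cols):
    # Write row by row, read column by column.
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Inverse permutation: write column by column, read row by row.
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    it = iter(symbols)
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = next(it)
    return out

data = list(range(12))                     # stand-in for 12 code symbols
sent = interleave(data, rows=3, cols=4)
assert deinterleave(sent, rows=3, cols=4) == data
# A burst corrupting 3 consecutive transmitted symbols lands in 3 different
# rows (codewords) after deinterleaving, turning the burst into isolated
# single errors that a random error correcting code can handle.
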
Channel
• Random error channels
  – Deep-space channels
  – Satellite channels
  → Use random error correcting codes
• Burst error channels: channels with memory
  – Radio channels
    • Signal fading due to multipath transmission
  – Wire and cable transmission
    • Impulse switching noise, crosstalk
  – Magnetic recording
    • Tape dropouts due to surface defects and dust particles
  → Use burst error correcting codes

Encoding
• Block codes
  – The message stream is divided into k-symbol blocks, and each k-symbol message block is mapped to an n-symbol codeword.
  – Redundancy: n – k
  – Code rate: k / n
• Encoding of an [n, k] block code
  – Message m = (m1, m2, … , mk) → codeword c = (m1, m2, … , mk, p1, p2, … , pn–k)
  – Add n – k redundant parity check symbols (p1, p2, … , pn–k); see the sketch below.

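A minimal Python sketch of systematic block encoding in this form. The parity matrix P used here is an assumption inferred from the codeword table of the [6, 3] example on a later slide; it is not stated in the slides.

# Parity matrix inferred from the [6, 3] example code (assumption).
P = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]

def encode(m):
    # Systematic encoding: keep the k message bits, append n - k parity
    # bits computed as p_j = sum_i m_i * P[i][j] (mod 2).
    parity = [sum(m[i] * P[i][j] for i in range(len(m))) % 2
              for j in range(len(P[0]))]
    return m + parity

print(encode([1, 0, 0]))   # [1, 0, 0, 1, 0, 1], i.e. codeword 100101
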
Decoding
• Decoding an [n, k] block code
  – Decide what the transmitted information was.
  – Minimum distance decoding is optimum on a memoryless channel.
  – Received data r = (r1, r2, … , rn) → decoded message m̂ = (m̂1, m̂2, … , m̂k)
  – Correct errors and remove the n – k redundant symbols.
  – Error vector e = (e1, e2, … , en) = (r1, r2, … , rn) – (c1, c2, … , cn)

Decoding
• Decoding plane

[Figure: decision regions in the space of received vectors; a received vector r is decoded to the nearest of the codewords c1, … , c6.]

Decoding
Ex) Encoding and decoding procedure of a [6, 3] code
1. Generate the information (100) in the source.
2. Transmit the codeword (100101) corresponding to (100).
3. The vector (101101) is received.
4. Choose the codeword (100101) nearest to (101101).
5. Extract the information (100) from the codeword (100101).

Information   Codeword   Distance from (101101)
000           000000     4
100           100101     1
010           010011     5
110           110110     4
001           001111     2
101           101010     3
011           011100     3
111           111001     2
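A minimal Python sketch of this nearest-codeword procedure, self-contained by taking the codebook directly from the table above:

# Codebook of the [6, 3] example code: message -> codeword.
codebook = {
    (0, 0, 0): (0, 0, 0, 0, 0, 0), (1, 0, 0): (1, 0, 0, 1, 0, 1),
    (0, 1, 0): (0, 1, 0, 0, 1, 1), (1, 1, 0): (1, 1, 0, 1, 1, 0),
    (0, 0, 1): (0, 0, 1, 1, 1, 1), (1, 0, 1): (1, 0, 1, 0, 1, 0),
    (0, 1, 1): (0, 1, 1, 1, 0, 0), (1, 1, 1): (1, 1, 1, 0, 0, 1),
}

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def decode(r):
    # Minimum distance decoding: return the message whose codeword is
    # closest (in Hamming distance) to the received vector r.
    return min(codebook, key=lambda m: hamming_distance(codebook[m], r))

print(decode((1, 0, 1, 1, 0, 1)))   # (1, 0, 0): nearest codeword is 100101
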
Parameters of block codes
• Hamming distance dH(u, v)
  – The number of positions at which the symbols of two vectors differ
    Ex) u = (1 0 1 0 0 0), v = (1 1 1 0 1 0) → dH(u, v) = 2
• Hamming weight wH(u)
  – The number of nonzero elements in a vector
    Ex) wH(u) = 2, wH(v) = 4
• Relation between Hamming distance and Hamming weight
  – Binary code: dH(u, v) = wH(u + v), where ‘+’ means exclusive OR (bit by bit)
  – Nonbinary code: dH(u, v) = wH(u – v)

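A minimal Python sketch of these definitions, using the example vectors u and v above:

def hamming_distance(u, v):
    # Number of positions where u and v differ.
    return sum(a != b for a, b in zip(u, v))

def hamming_weight(u):
    # Number of nonzero elements of u.
    return sum(x != 0 for x in u)

u = (1, 0, 1, 0, 0, 0)
v = (1, 1, 1, 0, 1, 0)
print(hamming_distance(u, v))                  # 2
print(hamming_weight(u), hamming_weight(v))    # 2 4
# Binary relation d_H(u, v) = w_H(u + v), with '+' as bitwise XOR:
print(hamming_weight(tuple(a ^ b for a, b in zip(u, v))))   # 2
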
Parameters of block codes
• Minimum distance d
  – d = min dH(ci, cj) over all ci ≠ cj in C
    • Any two codewords differ in at least d places.
    • An [n, k] code with minimum distance d → [n, k, d] code
• Error detection and correction capability
  – Let s = number of errors to be detected and t = number of errors to be corrected (s ≥ t).
  – Then we need d ≥ s + t + 1.
• Error correction capability
  – Any block code correcting t or fewer errors satisfies d ≥ 2t + 1.
  – Thus, t = ⌊(d – 1) / 2⌋.

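A minimal Python sketch that finds d by exhaustive pairwise comparison and derives t, using the codewords of the [6, 3] example code from the decoding slide (for this code d = 3, so t = 1):

from itertools import combinations

# Codewords of the [6, 3] example code.
codewords = ["000000", "100101", "010011", "110110",
             "001111", "101010", "011100", "111001"]

d = min(sum(a != b for a, b in zip(u, v))
        for u, v in combinations(codewords, 2))
t = (d - 1) // 2    # guaranteed error correction capability
print(d, t)         # 3 1 -> a single error correcting (SEC) code
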
Parameters of block codes
Ex) d = 3, 4 → t = 1 : single error correcting (SEC) codes
    d = 5, 6 → t = 2 : double error correcting (DEC) codes
    d = 7, 8 → t = 3 : triple error correcting (TEC) codes
• Coding sphere

[Figure: spheres of radius t around codewords ci and cj; since codewords are at distance at least d ≥ 2t + 1, the spheres do not overlap.]

Code performance and coding gain
• Criteria for performance of a coded system
  – BER: bit error rate of the information after decoding, Pb
  – SNR: signal-to-noise ratio, Eb / N0
    • Eb = signal energy per bit
    • N0 = one-sided noise power spectral density in the channel
  – Coding gain (for a given BER)
    • G = (Eb / N0)without FEC – (Eb / N0)with FEC [dB]
    • At a given BER Pb, we can save G [dB] of transmission power over the uncoded system.

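A minimal Python sketch of reading off a coding gain, assuming uncoded BPSK over AWGN with the standard relation Pb = Q(√(2 Eb/N0)); the coded operating point of 7.0 dB is an assumed illustrative number, not data for any particular code.

import math

def q_function(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_uncoded_bpsk(ebno_db):
    # Uncoded BPSK over AWGN: Pb = Q(sqrt(2 Eb/N0)).
    return q_function(math.sqrt(2 * 10 ** (ebno_db / 10)))

target_ber = 1e-5
ebno_db = 0.0
while ber_uncoded_bpsk(ebno_db) > target_ber:   # scan for required Eb/N0
    ebno_db += 0.01
ebno_coded_db = 7.0   # assumed: some code reaches the target BER at 7.0 dB
print(f"uncoded: {ebno_db:.2f} dB, coded (assumed): {ebno_coded_db:.2f} dB, "
      f"G = {ebno_db - ebno_coded_db:.2f} dB")

At BER 10^-5 the uncoded requirement comes out near 9.6 dB, so the assumed coded system would show a gain of roughly 2.6 dB.
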
Minimum distance decoding
• Maximum-likelihood decoding (MLD)
  – m̂ : estimated message after decoding
  – ĉ : estimated codeword in the decoder
  – m̂ ≠ m ⇔ ĉ ≠ c
• Assume that c was transmitted.
  – A decoding error occurs if ĉ ≠ c.
  – Conditional error probability of the decoder, given r:
    P(E | r) = P(ĉ ≠ c | r)
  – Error probability of the decoder:
    P(E) = Σr P(E | r) P(r), where P(r) is independent of the decoding rule.

Minimum distance decoding
• Optimum decoding rule: minimize the error probability P(E)
  – This is obtained by minimizing P(E | r) for each r, which is equivalent to maximizing P(ĉ = c | r).
• Optimum decoding rules:
  – argmaxc P(c | r) : maximum a posteriori probability (MAP) decoding
  – argmaxc P(r | c) : maximum likelihood (ML) decoding
• Bayes’ rule:
    P(c | r) = P(r | c) P(c) / P(r)
  – If the codewords c are equiprobable, MAP = ML.

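To connect ML decoding with the minimum distance rule of the earlier slides, here is a short standard derivation for the BSC (not taken from the slides). Each of the n transmitted symbols is flipped independently with crossover probability p, so

$$P(\mathbf{r} \mid \mathbf{c}) \;=\; p^{\,d_H(\mathbf{r},\mathbf{c})}\,(1-p)^{\,n-d_H(\mathbf{r},\mathbf{c})} \;=\; (1-p)^n \left(\frac{p}{1-p}\right)^{d_H(\mathbf{r},\mathbf{c})}.$$

For p < 1/2 we have p/(1 − p) < 1, so maximizing P(r | c) over the codewords is the same as minimizing dH(r, c): on a BSC, ML decoding is exactly minimum distance decoding.
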
Problems
• Basic problems in coding
  – Find good codes
  – Find their decoding algorithms
  – Implement the decoding algorithms
• Cost of forward error correction (FEC) schemes
  – If we use an [n, k] code, n symbols must be transmitted for every k message symbols.
    • The channel bandwidth increases by a factor of n / k, or the message transmission rate decreases by a factor of k / n.

Classification
• Classification of FEC
  – Block codes
    • Hamming, BCH, RS, Golay, Goppa, algebraic geometric codes (AGC)
  – Tree codes
    • Convolutional codes
  – Linear codes
    • Hamming, BCH, RS, Golay, Goppa, AGC, etc.
  – Nonlinear codes
    • Nordstrom-Robinson, Kerdock, Preparata, etc.
  – Systematic codes vs. nonsystematic codes
