2 Introduction To Digital Communications - Source and Channel Coding
SOURCE CODING & CHANNEL CODING

"The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point."
- Claude Shannon

1. Source coding
– Deals with digital data compression
– Goal: to represent the digital data with as few bits as possible
– Encryption may be incorporated or done separately

2. Channel coding
– Deals with protecting the digital data from errors in the channel
– Goal: to deliver error-free digital data, or to correct the data if received in error
– Error detection may be accompanied by error correction or not
2016: The Shannon Centennial

Claude Elwood Shannon, father of the information age, was born 100 years ago, on
April 30th, 1916. He defined the entropy of information, coined the term “bit”, and
laid the foundations of the communication networks we have today.
https://www.youtube.com/watch?v=z7bVw7lMtUg
https://www.youtube.com/watch?v=T58NGMrUp0M

Do you know what this says? All vowels are removed:
scncndtchnlgycntr
Do you know what this says? All vowels are included

All vowels & spacing are included
Do you know what this says? All vowels are removed

So what is the limit to how much compression can take place?
• Since many things happen to the signal on its journey & the channel has an effect on the transmitted signal

• A branch of applied mathematics & electrical engineering involving the quantification of information
Entropy

How to quantify entropy?
Obtain the entropy of the roman alphabet assuming that each symbol is equally likely

• If some source symbols are known to be more probable than others, then we may exploit this feature in the generation of a source code
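As a quick sketch (not from the slides), the entropy of K equally likely symbols reduces to H = log₂ K, which answers the exercise for the 26-letter roman alphabet:

```python
import math

def entropy(probs):
    """Shannon entropy in bits/symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 26 equally likely letters: H = log2(26)
H = entropy([1 / 26] * 26)
print(f"{H:.3f} bits/symbol")  # 4.700 bits/symbol
```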
2 functional requirements for an efficient source encoder for digital communications

Code efficiency: measures the deficiency in entropy

    η = H(A) / L̄

where H(A) = −Σ_k Pr(a_k) log₂ Pr(a_k) is the source entropy, L̄ is the average codeword length, and the maximum possible entropy of a K-symbol source is H_max = log₂(K)
• A more efficient encoding method to use than fixed-length encoding when the source symbols are not equally probable: entropy coding
– The problem is to devise a method for selecting & assigning the code words to source letters
• Done by assigning short code words to frequent source symbols & long code words to rare source symbols

• For a set of symbols represented by binary code words with lengths of n_k binary digits, the average codeword length is

    L̄ = Σ_k Pr(a_k) n_k
Huffman coding algorithm
• Optimum in the sense that the average number of binary digits required to represent the source symbols is a minimum
• Requires that the received sequence be
– Uniquely &
– Instantaneously decodable

1. Sort the probabilities of the source symbols in descending order
2. Assign a 0 & a 1 to the two source symbols with the lowest probability
3. Add the probabilities of the 2 source symbols in step 2
– The two source symbols are regarded as being combined into a new source symbol
4. Place the new source symbol in the list in accordance with its value
5. Repeat steps 1 to 4 until a final list of 2 source statistics remains, to which a 0 & a 1 are assigned
6. Obtain the code for each original source symbol by working back & tracing the sequence of 0s & 1s assigned to that symbol as well as its successors
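The steps above can be sketched in Python (an illustration, not the slides' own code; the tie-breaking order is an assumption, so the exact codewords may differ from a hand-drawn tree even though the codeword lengths are optimal):

```python
import heapq

def huffman_code(probs):
    """Huffman coding: repeatedly combine the two least probable entries
    (steps 1-5), prepending the 0/1 assigned at each combination so the
    trace-back of step 6 is accumulated automatically."""
    # heap entries: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, group0 = heapq.heappop(heap)  # two lowest probabilities:
        p1, _, group1 = heapq.heappop(heap)  # assign a 0 & a 1 (step 2)
        merged = {s: "0" + c for s, c in group0.items()}
        merged.update({s: "1" + c for s, c in group1.items()})
        tie += 1
        heapq.heappush(heap, (p0 + p1, tie, merged))  # steps 3-4
    return heap[0][2]

probs = {"a5": 0.35, "a1": 0.25, "a3": 0.20, "a2": 0.10, "a4": 0.05, "a6": 0.05}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(c) for s, c in code.items())
print(f"{avg_len:.2f} bits/symbol")  # 2.30 bits/symbol
```

The source probabilities are the ones used in the worked example later in the deck; the resulting codeword lengths (2, 2, 2, 3, 4, 4) give the same average length of 2.3 bits/symbol.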
Find the
1. Entropy
2. Source code using Huffman encoding
3. Average codeword length
4. Code efficiency

H(A) = 2.259 bits/symbol
How to find the source code using Huffman encoding

Source code using Huffman encoding

Symbol  Stage I  Stage II  Stage III  Stage IV  Stage V   Code
a5      0.35     0.35      0.35       0.40      0.60      00
a1      0.25     0.25      0.25       0.35      0.40      01
a3      0.20     0.20      0.20       0.25                10
a2      0.10     0.10      0.20                           110
a4      0.05     0.10                                     1110
a6      0.05                                              1111
Code efficiency:

    η = H(A) / L̄ = 2.259 / 2.3 = 98.2%
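The drill answers can be checked numerically; a minimal sketch using the codeword assignments from the worked table:

```python
import math

probs = {"a5": 0.35, "a1": 0.25, "a3": 0.20, "a2": 0.10, "a4": 0.05, "a6": 0.05}
code  = {"a5": "00", "a1": "01", "a3": "10", "a2": "110", "a4": "1110", "a6": "1111"}

H = -sum(p * math.log2(p) for p in probs.values())   # 1. entropy
L = sum(probs[s] * len(c) for s, c in code.items())  # 3. average codeword length
eta = H / L                                          # 4. code efficiency

print(f"H = {H:.3f} bits/symbol")  # H = 2.259 bits/symbol
print(f"L = {L:.2f} bits/symbol")  # L = 2.30 bits/symbol
print(f"eta = {eta:.1%}")          # eta = 98.2%
```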
Channel coding
• Due to non-ideal channel transmission characteristics that are associated with any communications system, it is inevitable that errors will occur
• Its aim is to find efficient codes that can correct or at least detect many errors
• Codes are divided into two general categories & could overlap
– Error detection
– Error correction

Error types
• Single-bit error
• Multiple-bit error / burst
Single-bit error
• Least likely type of error in serial data transmission because the noise must have a very short duration, which is very rare
• Can happen in parallel transmission

Example:
• If data is sent at 1 Mbps, then each bit lasts only 1/1,000,000 sec. or 1 µs
• For a single-bit error to occur, the noise must have a duration of only 1 µs, which is very rare

Multiple-bit error
• 2 or more bits in the data unit have changed from 1 to 0 or vice versa
• The length of the burst is measured from the first corrupted bit to the last corrupted bit
Burst error
• Does not necessarily mean that the errors occur in consecutive bits
– Some bits in between may not have been corrupted
• Most likely to happen in serial transmission since the duration of noise is normally longer than the duration of a bit

Number of bits affected by burst error
• Depends on the data rate & noise duration

Example 1
Data rate = 1 kbps
Noise duration = 0.01 s
Can affect 10 bits

Example 2
Data rate = 1 Mbps
Noise duration = 0.01 s
Can affect 10,000 bits
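Both examples follow from bits affected = data rate × noise duration; a quick sketch:

```python
def bits_affected(data_rate_bps, noise_duration_s):
    """Bit slots spanned by a noise burst: data rate x noise duration."""
    return round(data_rate_bps * noise_duration_s)

print(bits_affected(1_000, 0.01))      # Example 1: 10
print(bits_affected(1_000_000, 0.01))  # Example 2: 10000
```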
• Error-detection codes neither correct errors nor identify which bits are in error – they indicate only when an error has occurred
Redundancy checking

Vertical redundancy checking
• A single parity bit is added to force the total number of logic 1s (including the parity bit) to be either odd (odd parity) or even (even parity)
Longitudinal redundancy checking
• LRC bits are computed in the transmitter & then appended to the end of the message as a redundant character
• At the receiver, the LRC is recomputed from the data, & the recomputed LRC is compared to the LRC appended to the message
• If the two LRC characters are the same, most likely no transmission errors have occurred

Drill problem
Determine the VRCs & LRC for the following ASCII-encoded message:
THE CAT
Use odd parity for the VRCs & even parity for the LRC.
• The LRC is 00101111 binary (2F hex), which is the character “/” in ASCII
• Therefore, after the LRC character is appended to the message, it would read
“THE CAT/”
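The drill can be checked in code (a sketch under the stated conventions: odd-parity VRC per 7-bit ASCII character, and an even-parity LRC, which is the column-wise XOR of the character codes):

```python
def vrc_odd(ch):
    """Odd-parity VRC bit: chosen so the 7 data bits plus the
    parity bit contain an odd number of 1s."""
    ones = bin(ord(ch)).count("1")
    return 0 if ones % 2 == 1 else 1

def lrc_even(message):
    """Even-parity LRC: each LRC bit makes its column's 1-count even,
    i.e. the column-wise XOR of the 7-bit ASCII codes."""
    out = 0
    for ch in message:
        out ^= ord(ch)
    return out

msg = "THE CAT"
print([vrc_odd(c) for c in msg])                   # one VRC bit per character
lrc = lrc_even(msg)
print(f"LRC = {lrc:08b} = {chr(lrc)!r}")           # LRC = 00101111 = '/'
```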
Longitudinal redundancy checking
• LRC detects between 95% & 98% of all transmission errors

Checksum
• The receiver replicates the combining operation & determines its own checksum
Checksum (sender)
1. The data is divided into k sections, each of n bits
2. All sections are added together using one's complement to get the sum
3. The sum is complemented & becomes the checksum
4. The checksum is sent with the data

Checksum (receiver)
1. The received data is divided into k sections of n bits
2. All sections, including the checksum, are added together using one's complement to get the sum
3. The sum is complemented
4. If the result is zero, the data are accepted; otherwise, they are rejected
Checksum

One's complement addition
• If there is a carry out (of 1) from the MSB, then the result will be off by 1, so the carry is wrapped around & added back to the LSB (end-around carry)
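A sketch of the full round trip with 8-bit sections (the sample data words are made up for illustration):

```python
def ones_complement_add(a, b, bits=8):
    """One's complement addition: a carry out of the MSB wraps
    around & is added back to the LSB (end-around carry)."""
    mask = (1 << bits) - 1
    s = a + b
    while s >> bits:                 # fold any carry back into the LSB
        s = (s & mask) + (s >> bits)
    return s

def checksum(sections, bits=8):
    """Sender: one's complement sum of all sections, then complemented."""
    total = 0
    for sec in sections:
        total = ones_complement_add(total, sec, bits)
    return (~total) & ((1 << bits) - 1)

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]  # illustrative sections
cs = checksum(data)

# Receiver: add everything including the checksum, then complement;
# a result of zero means the data are accepted.
total = 0
for sec in data + [cs]:
    total = ones_complement_add(total, sec)
print((~total) & 0xFF)  # 0 -> accepted
```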
• CRC is considered a systematic code & probably the most reliable redundancy
checking technique for error detection
CRC is a type of cyclic block code

Math expression of CRC

    G(x)/P(x) = Q(x) + R(x)/P(x)
Modulo-2 addition
• 0 ⊕ 0 = 0
• 0 ⊕ 1 = 1
• 1 ⊕ 0 = 1
• 1 ⊕ 1 = 0
Find the quotient & remainder
• Quotient is 1011
• Remainder is 0010
• Divisor in this example corresponds to a modulo-2 polynomial: x^3 + x^2 + 1
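Modulo-2 long division can be sketched in a few lines (illustrative code, not from the slides). Dividing the message 1111101 from the encoding example by the divisor 1101 (x^3 + x^2 + 1) reproduces the quotient 1011 and the 3-bit remainder 010 (shown with a leading zero as 0010 above):

```python
def mod2_divide(dividend, divisor):
    """Long division over GF(2): subtraction is XOR, no borrows."""
    dividend = list(map(int, dividend))
    divisor = list(map(int, divisor))
    n = len(divisor)
    quotient = []
    for i in range(len(dividend) - n + 1):
        if dividend[i] == 1:
            quotient.append(1)
            for j in range(n):          # subtract (XOR) the aligned divisor
                dividend[i + j] ^= divisor[j]
        else:
            quotient.append(0)
    remainder = dividend[-(n - 1):]     # last n-1 bits are the remainder
    return "".join(map(str, quotient)), "".join(map(str, remainder))

q, r = mod2_divide("1111101", "1101")  # divisor 1101 <-> x^3 + x^2 + 1
print(q, r)  # 1011 010
```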
CRC encoding example
• Message: 1111101

CRC decoding example without corrupted bits
– Damaged message
• Recognized at the destination but contains one or more transmission errors
Retransmission
• Occurs when a receive station requests the transmit station to resend a message (or a portion of a message) when the message is received in error

Example:
– Automatic Repeat Request
– Automatic Retransmission Request (ARQ)
– ARQ variants

Forward error correction
• The only error-correction scheme that actually detects & corrects transmission errors when they are received without requiring retransmission
• Redundant bits are added to the message before transmission
– When an error is detected, the redundant bits are used to determine which bit is in error
• Suited for data communications systems when acknowledgements are impractical or impossible
• The purpose of FEC is to eliminate the time wasted for retransmissions
Hamming code
• Developed by mathematician Richard W. Hamming at Bell Telephone Laboratories
• Corrects 1-bit errors

• An ASCII character requires 4 Hamming bits
– ASCII = 7 bits
Hamming bits in fixed position
• The LSB is on the leftmost & the MSB is on the rightmost bit

Calculating the Hamming code (one method)
1. Mark all bit positions that are powers of two as parity bits (positions 1, 2, 4, 8, 16, 32, ...)
2. Mark all other bit positions as positions for the character/message to be encoded
3. Each parity bit calculates the parity for some of the bits in the code word
– The position of the parity bit determines the sequence of bits that it alternately checks & skips
Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1,3,5,7,9,11,13,15,...)
Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2,3,6,7,10,11,14,15,...)
Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc. (4,5,6,7,12,13,14,15,20,21,22,23,...)
Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15,24-31,40-47,...)
Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31,48-63,80-95,...)
Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63,96-127,160-191,...)
etc.
4. Set parity bit to 0 if the total number of ones in the positions it checks is odd, 1 otherwise
– Note that the parity can be set to either odd or even, but must stick to only one type throughout
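The check/skip lists above are simply the positions whose binary expansion contains the parity bit's own position, which a short sketch makes explicit:

```python
def positions_checked(parity_pos, n_bits=15):
    """Positions (1-based) covered by the parity bit at parity_pos:
    check p bits, skip p bits, ... <=> position AND parity_pos != 0."""
    return [i for i in range(1, n_bits + 1) if i & parity_pos]

print(positions_checked(1))  # [1, 3, 5, 7, 9, 11, 13, 15]
print(positions_checked(2))  # [2, 3, 6, 7, 10, 11, 14, 15]
print(positions_checked(4))  # [4, 5, 6, 7, 12, 13, 14, 15]
print(positions_checked(8))  # [8, 9, 10, 11, 12, 13, 14, 15]
```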
Calculating a Hamming code (another method)

Hamming code example
Original message: 1 0 1 1
Hamming bits: bits in positions 1, 2, & 4 of the message to be sent
2^n check bits: positions 2^0, 2^1, 2^2

Calculate check bits by looking at bit positions in the message to be sent with 1's in them:
3 = 2^1 + 2^0 = 011
5 = 2^2 + 2^0 = 101
6 = 2^2 + 2^1 = 110
7 = 2^2 + 2^1 + 2^0 = 111

Set the Hamming bit to a specific parity, e.g. odd parity
Original message = 1 0 1 1

Bits:     1 0 1 1 0 0 1
Position: 1 2 3 4 5 6 7
How to check for a single-bit error in the sent message using the Hamming code?

Received message bits: 1 0 1 1 0 0 1
Position:              1 2 3 4 5 6 7

2^n check bits: positions 2^0, 2^1, 2^2
Check bits:
3 = 2^1 + 2^0 = 011
5 = 2^2 + 2^0 = 101
6 = 2^2 + 2^1 = 110
7 = 2^2 + 2^1 + 2^0 = 111

Starting with the 2^0 position:
• Look at positions with 1's in them
• Count the number of 1's in both the corresponding message bits & the 2^0 check bit & compute the parity
• If even parity, there is an error in one of the four bits that were checked

Repeat with the 2^1 position:
• Look at positions with 1's in them
• Count the number of 1's in both the corresponding message bits & the 2^1 check bit & compute the parity
• If even parity, there is an error in one of the four bits that were checked
How to check for a single-bit error in the How to find the error location using
sent message using the Hamming code? Hamming code?
Error in bit 4, 5, 6 or 7
年 83 年 84
How to find the error location using How to find the error location using
Hamming code? Hamming code?
1 0 1 1 0 0 1 1 0 1 1 0 0 1
Position 1 2 3 4 5 6 7 Position 1 2 3 4 5 6 7
No error in bits 1, 3, 5, 7
Error must be in bit 6
Error in bit 2, 3, 6 or 7 because bits 3, 5, 7
Error in bit 4, 5, 6 or 7 are correct, & all the
remaining information
agrees on bit 6
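The whole worked example can be reproduced in code. This is a sketch assuming the odd-parity convention used above: under it, the message 1 0 1 1 encodes to 1 0 1 1 0 1 1, and flipping bit 6 gives the received word 1 0 1 1 0 0 1, whose failed 2^1 and 2^2 checks add up to point at bit 6:

```python
def hamming7_encode(data_bits, odd=True):
    """Hamming (7,4): data in positions 3, 5, 6, 7; parity bits in
    1, 2, 4. Each parity bit makes its covered group odd (or even)."""
    word = [0] * 8                         # index 1..7; index 0 unused
    for pos, bit in zip((3, 5, 6, 7), data_bits):
        word[pos] = bit
    for p in (1, 2, 4):
        ones = sum(word[i] for i in range(1, 8) if i & p)
        word[p] = 1 if (ones % 2 == 0) == odd else 0
    return word[1:]

def hamming7_locate_error(word_bits, odd=True):
    """Sum the positions of the failed parity checks; the total is the
    position of a single-bit error (0 means no error found)."""
    word = [0] + list(word_bits)
    pos = 0
    for p in (1, 2, 4):
        ones = sum(word[i] for i in range(1, 8) if i & p)
        good = (ones % 2 == 1) if odd else (ones % 2 == 0)
        if not good:
            pos += p
    return pos

sent = hamming7_encode([1, 0, 1, 1])    # message 1011
received = sent.copy()
received[5] ^= 1                        # corrupt bit 6 (list index 5)
print(sent)                             # [1, 0, 1, 1, 0, 1, 1]
print(hamming7_locate_error(received))  # 6
```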