
Introduction to digital communications

Source coding & channel coding

Digital communication system block diagram

Major encodings after the ADC & before bandpass modulation

1. Source coding
   – Deals with digital data compression
   – Goal: to represent the digital data with as few bits as possible
   – Encryption may be incorporated or done separately

2. Channel coding
   – Deals with protecting the digital data from errors in the channel
   – Goal: to have error-free digital data, or to correct errors when they occur
   – Its error detection may or may not include error correction

Source coding & channel coding

"The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point."
– Claude Shannon
2016: The Shannon Centennial

Claude Elwood Shannon, father of the information age, was born 100 years ago, on April 30th, 1916. He defined the entropy of information, coined the term "bit", and laid the foundations of the communication networks we have today.

https://www.youtube.com/watch?v=z7bVw7lMtUg
https://www.youtube.com/watch?v=T58NGMrUp0M

Do you know what this says? All vowels are removed

scncndtchnlgycntr

Do you know what this says? All vowels are included

scienceandtechnologycenter

All vowels & spacing are included

science and technology center
Do you know what this says? All vowels are removed

snndthnlgyntr

So what is the limit to how much compression can take place?

• Answered by information theory
• This is a source coding problem
• In digital communication systems, it is desired to send as much information as possible over the channel & have it arrive & be useful

What is the maximum capacity of the channel?

• Answered by information theory
• This is a channel coding problem
• Many things happen to the signal on its journey, & the channel has an effect on the transmitted signal

Information theory

• Founded by Claude Shannon (b. 1916, d. 2001)
  – His goal was to find fundamental limits on compressing, reliably storing, & communicating data
• Fundamental paper: A Mathematical Theory of Communication
• A branch of applied mathematics & electrical engineering involving the quantification of information
Entropy

• A key measure of information theory
• Usually expressed as the average number of bits needed for storage or communication
• Quantifies the uncertainty involved when encountering a random variable

Example: a fair coin flip (2 equally likely outcomes) will have less entropy than a roll of a die (6 equally likely outcomes)

• Shannon entropy quantifies the expected value of the information contained in a message, usually in units such as bits

How to quantify entropy?

H = -\sum_n \Pr(x_n) \log_b \Pr(x_n)

Pr(x_n): probability of the nth symbol of information among a finite alphabet

• Common values of b (& the corresponding unit of H)
  – 2 (bits/symbol)
  – e (nats/symbol)
  – 10 (dits/symbol)
• The logarithm captures the additivity characteristic of independent uncertainties
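A minimal Python sketch of this formula (not from the slides; the function name entropy is our own):

import math

def entropy(probs, base=2):
    """Shannon entropy H = -sum_n Pr(x_n) * log_b Pr(x_n).

    base=2 gives bits/symbol, math.e nats/symbol, 10 dits/symbol.
    Zero-probability symbols contribute nothing (0 log 0 is taken as 0).
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit/symbol
print(entropy([1/6] * 6))    # fair die: ~2.585 bits/symbol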

Drill problem

Obtain the entropy of the roman alphabet assuming that each symbol is equally likely

Source coding

• For the source encoder to be efficient, we require knowledge of the statistics of the source
• If some source symbols are known to be more probable than others, then we may exploit this feature in the generation of a source code
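A worked check for the drill problem above (assuming the 26-letter roman alphabet): with N equally likely symbols, every \Pr(x_n) = 1/N, so the sum collapses to

H = -\sum_{n=1}^{26} \frac{1}{26} \log_2 \frac{1}{26} = \log_2 26 \approx 4.70 \text{ bits/symbol}

which entropy([1/26] * 26) from the sketch above also returns.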
2 functional requirements for an efficient source encoder for digital communications

1. The code words produced by the encoder are in binary form
2. The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence

Code efficiency: measures the deficiency in entropy

\eta = \frac{-\sum_n \Pr(x_n) \log_b \Pr(x_n)}{\log_b N_{\max}}

where N_{\max} is the size of the source alphabet, so the denominator is the maximum possible entropy

• A source alphabet with a non-uniform distribution will have less entropy than if those symbols had a uniform distribution
• Quantifies the effective use of a communications channel
• Aka normalized entropy (the entropy is divided by the maximum entropy)
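A short Python sketch of this normalized entropy, reusing the entropy() function defined earlier (the function name is our own):

def normalized_entropy(probs, base=2):
    """Entropy divided by the maximum entropy log_b(N) of an N-symbol alphabet."""
    return entropy(probs, base) / math.log(len(probs), base)

print(normalized_entropy([0.5, 0.5]))   # uniform distribution: 1.0
print(normalized_entropy([0.9, 0.1]))   # skewed distribution: ~0.469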

Average codeword length

\bar{L} = \sum_n \Pr(x_n)\, l_n

• For a set of symbols represented by binary code words, l_n is the length of the nth code word in binary digits
• Unit: bits/symbol

Variable-length source coding

• A more efficient encoding method than fixed-length encoding when the source symbols are not equally probable (entropy coding)
  – The problem is to devise a method for selecting & assigning the code words to source letters
• Done by assigning short code words to frequent source symbols & long code words to rare source symbols

Example:
Morse code: letters of the alphabet & numerals are encoded into streams of dots "." & dashes "-"

Huffman coding: a variable-length encoding algorithm based on the source letter probabilities
Huffman coding algorithm

• Optimum in the sense that the average number of binary digits required to represent the source symbols is a minimum
• Requires the received sequence to be
  – Uniquely &
  – Instantaneously decodable

1. Sort the probabilities of the source symbols in descending order
2. Assign 0 & 1 to the 2 source symbols with the lowest probability
3. Add the probabilities of the 2 source symbols in step 2
   – The two source symbols are regarded as being combined into a new source symbol
4. Place the new source symbol in the list in accordance with its value
5. Repeat steps 1 to 4 until the final list has only 2 source statistics, to which a 0 & a 1 are assigned
6. Obtain the code for each original source symbol by working back & tracing the sequence of 0s & 1s assigned to that symbol as well as its successors (a sketch of these steps follows below)
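A compact Python sketch of steps 1-6, assuming the source statistics arrive as a dict of probabilities (heapq keeps the list ordered for us; the exact 0/1 choices may differ from a hand-worked table, but the codeword lengths are optimal either way):

import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability}.

    Repeatedly merges the two least probable entries, prefixing 0 to one
    branch & 1 to the other, until a single combined symbol remains.
    """
    tiebreak = count()  # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # lowest probability -> prefix 0
        p1, _, c1 = heapq.heappop(heap)  # next lowest        -> prefix 1
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]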

Drill problem

Consider an unfair die

A = {a1, a2, a3, a4, a5, a6}
Pr(A) = {0.25, 0.1, 0.2, 0.05, 0.35, 0.05}

Find the
1. Entropy
2. Source code using Huffman encoding
3. Average codeword length
4. Code efficiency

Entropy

H(A) = (0.25)log2(1/0.25) + (0.10)log2(1/0.10) + (0.20)log2(1/0.20) + (0.05)log2(1/0.05) + (0.35)log2(1/0.35) + (0.05)log2(1/0.05)

H(A) = 2.259 bits/symbol
年 23 年 24
How to find the source code using Huffman encoding

Reduction stages (the probabilities are re-sorted after each merge; at every stage a 0 & a 1 are assigned to the two lowest entries):

Symbol  Pr      Stage 1   Stage 2   Stage 3   Stage 4
a5      0.35    0.35      0.35      0.40      0.60 → 0
a1      0.25    0.25      0.25      0.35      0.40 → 1
a3      0.20    0.20      0.20      0.25
a2      0.10    0.10      0.20
a4      0.05    0.10
a6      0.05

Source code using Huffman encoding

a1  01
a2  110
a3  10
a4  1110
a5  00
a6  1111
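Running the huffman_code() sketch above on this drill problem (tie-breaking may swap some 0s & 1s relative to the table, but the codeword lengths, & hence the average length, match):

probs = {"a1": 0.25, "a2": 0.10, "a3": 0.20, "a4": 0.05, "a5": 0.35, "a6": 0.05}
code = huffman_code(probs)
for sym in sorted(code):
    print(sym, code[sym])   # lengths: a1 2, a2 3, a3 2, a4 4, a5 2, a6 4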

Average codeword length

\bar{L} = (0.35)(2) + (0.25)(2) + (0.20)(2) + (0.10)(3) + (0.05)(4) + (0.05)(4)

\bar{L} = 2.30 bits

Code efficiency

\eta = \frac{H(A)}{\bar{L}}

Note: H(A) = 2.259 bits/symbol

\eta = \frac{2.259}{2.30} = 98.2\%
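Finishing the drill in Python with the pieces defined earlier (entropy() & the code dict from the previous sketch):

avg_len = sum(probs[s] * len(code[s]) for s in probs)
eff = entropy(probs.values()) / avg_len
print(avg_len, eff)   # 2.30 bits/symbol, ~0.982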
Channel coding

• Its aim is to find efficient codes that can correct, or at least detect, many errors
• The code mainly depends on the probability of errors happening during transmission
• Due to the non-ideal channel transmission characteristics associated with any communications system, it is inevitable that errors will occur
• Codes are divided into two general categories, which can overlap
  – Error detection
  – Error correction

Error types

• Single-bit error
• Multiple-bit error / burst

Single-bit error

• Least likely type of error in serial data transmission, because the noise must have a very short duration, which is very rare
• Can happen in parallel transmission

Example:
• If data is sent at 1 Mbps, then each bit lasts only 1/1,000,000 s, or 1 µs
• For a single-bit error to occur, the noise must have a duration of only 1 µs, which is very rare

Burst error

• 2 or more bits in the data unit have changed from 1 to 0 or vice versa
• The length of the burst is measured from the first corrupted bit to the last corrupted bit
Burst error

• Does not necessarily mean that the errors occur in consecutive bits
  – Some bits in between may not have been corrupted
• Most likely to happen in serial transmission, since the duration of the noise is normally longer than the duration of a bit

Number of bits affected by a burst error

• Depends on the data rate & the noise duration

Example 1
Data rate = 1 kbps, noise duration = 0.01 s
Can affect 10 bits (0.01 × 1,000)

Example 2
Data rate = 1 Mbps, noise duration = 0.01 s
Can affect 10,000 bits (0.01 × 1,000,000)

Error detection

• The process of adding additional bits (redundancy checking) in order to determine when errors have occurred in the transmitted data
• Neither corrects errors nor identifies which bits are in error; it indicates only when an error has occurred

Four types of redundancy checks that are used in data communications

• Vertical redundancy checking (VRC)
• Longitudinal redundancy checking (LRC)
• Checksum
• Cyclic redundancy checking (CRC)
Redundancy checking

• Duplicating each data unit for the purpose of detecting errors
• Adding bits for the sole purpose of detecting errors

Vertical redundancy checking

• Aka character parity or simply parity
• A single parity bit is added to force the total number of logic 1s (including the parity bit) to be either
  – An odd number (odd parity)
  – An even number (even parity)

Example:
Let P be the parity bit
The ASCII code for the letter C is 43₁₆ = P1000011₂
For odd parity, the P bit is made a logic 0
For even parity, the P bit is made a logic 1
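A small Python sketch of the parity rule, with bits handled as strings (the function name is our own):

def parity_bit(bits, odd=True):
    """Return the VRC bit that makes the total number of 1s odd (or even)."""
    ones = bits.count("1")
    if odd:
        return "0" if ones % 2 == 1 else "1"
    return "0" if ones % 2 == 0 else "1"

c = format(0x43, "07b")          # ASCII 'C' -> '1000011' (three 1s)
print(parity_bit(c, odd=True))   # '0', as in the example above
print(parity_bit(c, odd=False))  # '1'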

Vertical redundancy checking

• Primary advantage: simplicity
• Disadvantage: cannot detect an error if an even number of bits are received in error
  – The parity remains the same for an even number of bit errors

Longitudinal redundancy checking

• Aka message parity
• Uses parity to determine if a transmission error has occurred within a message block
• The result of XORing the "character codes" that make up the message
  – VRC, by contrast, is the result of XORing the bits within a single character
• With LRC, even parity is generally used
  – With VRC, odd parity is generally used
Longitudinal redundancy checking

• LRC bits are computed in the transmitter & then appended to the end of the message as a redundant character
• At the receiver, the LRC is recomputed from the data, & the recomputed LRC is compared to the LRC appended to the message
• If the two LRC characters are the same, most likely no transmission errors have occurred

Drill problem

Determine the VRCs & LRC for the following ASCII-encoded message:
THE CAT
Use odd parity for the VRCs & even parity for the LRC.

Solution

• The LRC is 00101111 binary (2F hex), which is the character "/" in ASCII
• Therefore, after the LRC character is appended to the message, it would read "THE CAT/"

Longitudinal redundancy checking

• Block or frame of data
  – The group of characters that comprise a message (i.e., THE CAT)
• Block check sequence (BCS) or frame check sequence (FCS)
  – The bit sequence for the LRC
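A quick Python check of the drill solution, computing the LRC as the XOR of the character codes & the odd-parity VRC of each character via the parity_bit() sketch above:

from functools import reduce

msg = "THE CAT"
lrc = reduce(lambda acc, ch: acc ^ ord(ch), msg, 0)
print(f"{lrc:08b}", hex(lrc), chr(lrc))   # 00101111 0x2f /

for ch in msg:
    bits = format(ord(ch), "07b")
    print(ch, parity_bit(bits) + bits)    # odd-parity bit prepended to each character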
Longitudinal redundancy checking

• All messages (regardless of their length) have the same number of error-detection characters
  – Makes LRC a better choice for systems that typically send long messages
• LRC detects between 95% & 98% of all transmission errors
• LRC will not detect transmission errors when an even number of characters has an error in the same bit position

Checksum

• Characters within a message are combined together to produce an error-checking character (the checksum)
  – The checksum can be as simple as the arithmetic sum of the numerical values of all the characters in the message
• The receiver replicates the combining operation & determines its own checksum
• The checksum at the receiver is compared to the checksum appended to the message
  – If they are the same, it is assumed that no transmission errors have occurred

Checksum at the sender

1. The unit is divided into k sections, each of n bits
2. All sections are added together using one's complement arithmetic to get the sum
3. The sum is complemented & becomes the checksum
4. The checksum is sent with the data

Checksum at the receiver

1. The unit is divided into k sections, each of n bits
2. All sections are added together using one's complement arithmetic to get the sum
3. The sum is complemented
4. If the result is zero, the data are accepted; otherwise, they are rejected
Checksum One’s complement addition

00111 (7) 111110 (-1)


+ 00101 (5) + 000010 (2)
----------- ------------
01100 (12) 1 000000 (0) wrong!
+ 1
----------
000001 (1) right!

If there is a carry out (of 1) from the MSB, then the result will be off by 1

So add 1 again to get the correct result (end around carry)

年 49 年 50
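A Python sketch of one's complement addition with end-around carry, & the sender-side checksum built from it (the function names are our own):

def ones_complement_sum(words, n=8):
    """Add n-bit words, folding any carry out of the MSB back in (end-around carry)."""
    mask = (1 << n) - 1
    total = 0
    for w in words:
        total += w
        while total >> n:                       # carry out of the MSB?
            total = (total & mask) + (total >> n)
    return total

def checksum(words, n=8):
    """Sender: the complement of the one's complement sum."""
    return ones_complement_sum(words, n) ^ ((1 << n) - 1)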

Drill problem

Suppose the following block of 24 bits is to be sent using a checksum of 8 bits:

10101001 00111001 00101011

Generate the checksum at the transmitter & show that at the receiver no error has occurred.

Cyclic redundancy checking

• CRC is considered a systematic code & probably the most reliable redundancy-checking technique for error detection
• 99.999% of all transmission errors are detected
• The most common CRC code is CRC-16
  – 16 bits are used for the block check sequence
• Its BCS is separate from the message but transported within the same transmission
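A quick check of the checksum drill with the sketch above (the expected values follow from the sender/receiver rules on the previous slides):

data = [0b10101001, 0b00111001, 0b00101011]
cs = checksum(data)
print(f"{cs:08b}")                 # 11110001 (transmitted checksum)
rx = ones_complement_sum(data + [cs])
print(f"{rx:08b}", rx ^ 0xFF)      # 11111111, complement is 0 -> accepted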
CRC is a type of cyclic block code

• Cyclic block codes are often written as (n, k) cyclic codes
  – n: bit length of the transmission (total number of bits to transmit)
  – k: bit length of the message
• The length of the block check code in bits is n − k

Math expression of CRC

\frac{G(x)}{P(x)} = Q(x) + R(x)

where
G(x) = message polynomial
P(x) = generator polynomial
Q(x) = quotient
R(x) = remainder

Standard CRC generator polynomials

(table of standard generator polynomials, not reproduced here)

Modulo-2 arithmetic addition/subtraction rules

• 0 ± 0 = 0
• 0 ± 1 = 1
• 1 ± 0 = 1
• 1 ± 1 = 0
Find the quotient & remainder

(modulo-2 long division, worked step by step over several slides)

• The quotient is 1011
• The remainder is 0010
• The divisor in this example corresponds to the modulo-2 polynomial x³ + x² + 1 (binary 1101)
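A Python sketch of modulo-2 long division on bit strings (the names are our own; the dividend is not shown on the surviving slides, but Q·P ⊕ R works out to 1111101, & feeding that in reproduces the stated quotient & remainder):

def mod2_divide(dividend, divisor):
    """Modulo-2 (XOR) long division; returns (quotient, remainder) as bit strings."""
    d = int(divisor, 2)
    top = 1 << (len(divisor) - 1)     # MSB of the divisor-wide window
    rem = 0
    q = []
    for bit in dividend:
        rem = (rem << 1) | int(bit)   # shift the next dividend bit in
        if rem & top:                 # leading 1 -> subtract (XOR) the divisor
            rem ^= d
            q.append("1")
        else:
            q.append("0")
    return "".join(q).lstrip("0") or "0", format(rem, f"0{len(divisor) - 1}b")

print(mod2_divide("1111101", "1101"))   # ('1011', '010'), i.e. remainder 0010 above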
CRC encoding example

• Message: 1111101
• Generator: 1101
• The information string is shifted left by one position less than the number of positions in the divisor
• The remainder is found through modulo-2 division & added to the information string: 1111101000 + 111 = 1111101111

CRC decoding example without corrupted bits

• If no bits are lost or corrupted, dividing the received information string by the agreed-upon pattern will give a remainder of zero
• Real applications use longer polynomials to cover larger information strings
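Tying the two slides together with the mod2_divide() sketch (crc_encode is our own name):

def crc_encode(msg, gen):
    """Append the len(gen)-1 remainder bits to the message (systematic CRC)."""
    _, rem = mod2_divide(msg + "0" * (len(gen) - 1), gen)
    return msg + rem

sent = crc_encode("1111101", "1101")
print(sent)                          # 1111101111, as in the encoding example
print(mod2_divide(sent, "1101")[1])  # 000 -> zero remainder, no error detected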

CRC example showing modulo-2 long division

(the long-division figure is not reproduced here)

Error correction

• Error-correcting codes carry sufficient extraneous information with each message to enable the receiver to determine when an error has occurred & which bit is in error

• Two types of error messages
  – Lost message
    • Never arrives at the destination
    • Arrives but is damaged to the extent that it is unrecognizable
  – Damaged message
    • Recognized at the destination but contains one or more transmission errors

• Two primary methods used for error correction
  – Retransmission
  – Forward error correction (FEC)
Retransmission

• Occurs when a receive station requests the transmit station to resend a message (or a portion of a message) when the message is received in error

Example:
– Automatic Repeat Request / Automatic Retransmission Request (ARQ)
– ARQ variants

Forward error correction

• The only error-correction scheme that actually detects & corrects transmission errors when they are received, without requiring retransmission
• Redundant bits are added to the message before transmission
  – When an error is detected, the redundant bits are used to determine which bit is in error
• Suited for data communications systems where acknowledgements are impractical or impossible
• The purpose of FEC is to eliminate the time wasted on retransmissions

Example: Hamming code, low-density parity-check code, BCH code

Hamming code

• Developed by mathematician Richard W. Hamming at Bell Telephone Laboratories
• Corrects single-bit errors
• Number of Hamming bits m:

2^m \ge n + m + 1

where n is the number of bits in each data character

Hamming code for an ASCII character

• An ASCII character requires 4 Hamming bits
  – n = 7 for ASCII, & m = 4 is the smallest m satisfying 2^m \ge 7 + m + 1
• The 4 Hamming bits can be placed
  – At the end of the character bits
  – At the beginning of the character bits
  – Interspersed throughout the character bits
  – In fixed positions (at powers of 2)
• Requires transmitting 11 bits per ASCII character
  – A 57% increase in the message length
Hamming bits in fixed positions

• Hamming bits occupy the bit positions that are powers of 2 (1, 2, 4, 8, 16, etc.)
• In this numbering the LSB is the leftmost bit & the MSB is the rightmost bit

Calculating the Hamming code (one method)

1. Mark all bit positions that are powers of 2 as parity bits
   – Positions 1, 2, 4, 8, 16, 32, 64, 128, 256, etc.
2. All other bit positions are for the character/message to be encoded
   – Positions 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, etc.

Calculating the Hamming code

3. Each parity bit calculates the parity for some of the bits in the code word
   – The position of the parity bit determines the sequence of bits that it alternately checks & skips

   Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc. (1, 3, 5, 7, 9, 11, 13, 15, ...)
   Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc. (2, 3, 6, 7, 10, 11, 14, 15, ...)
   Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc. (4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...)
   Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc. (8-15, 24-31, 40-47, ...)
   Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc. (16-31, 48-63, 80-95, ...)
   Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc. (32-63, 96-127, 160-191, ...)
   etc.

4. Set the parity bit to 0 if the total number of ones in the positions it checks is odd, & to 1 otherwise
   – Note that the parity can be set to either odd or even, but you must stick to one type throughout (see the sketch after this list)
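A Python sketch of this procedure (assumptions: odd parity, as in the example that follows; position 1 is the leftmost bit; the function name is our own):

def hamming_encode(data_bits, odd=True):
    """Intersperse parity bits at the power-of-2 positions of the code word."""
    n = len(data_bits)
    m = 0
    while 2 ** m < n + m + 1:         # number of Hamming bits: 2^m >= n + m + 1
        m += 1
    code = {}
    data = iter(data_bits)
    for pos in range(1, n + m + 1):
        if pos & (pos - 1):           # not a power of 2 -> message bit
            code[pos] = int(next(data))
    for p in range(m):                # parity bit 2^p covers every position
        mask = 1 << p                 # whose binary representation has bit p set
        ones = sum(bit for pos, bit in code.items() if pos & mask)
        code[mask] = 1 if (ones % 2 == 0) == odd else 0
    return "".join(str(code[pos]) for pos in sorted(code))

print(hamming_encode("1011"))         # 1011011, matching the example below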
Calculating a Hamming code (another method)

• Place the message bits in their non-power-of-2 Hamming positions
• Build a table listing the binary representation of each of the message bit positions
• Calculate the check bits
• Obtain the Hamming bits

Hamming code example

Original message: 1 0 1 1
Hamming bits: the bits in positions 1, 2, & 4 of the message to be sent

Message to be sent: _ _ 1 _ 0 1 1
Position:           1 2 3 4 5 6 7
Check bits at positions 2⁰, 2¹, 2² (1, 2, 4)

Hamming code example (continued)

Binary representation of each message-bit position:

3 = 2¹ + 2⁰      = 0 1 1
5 = 2² + 2⁰      = 1 0 1
6 = 2² + 2¹      = 1 1 0
7 = 2² + 2¹ + 2⁰ = 1 1 1

Starting with the 2⁰ position: look at the bit positions in the message to be sent whose representation has a 1 in the 2⁰ column (3, 5, 7), count the 1s in those bits, & set the Hamming bit to give e.g. odd parity.
Message so far: 1 _ 1 _ 0 1 1

Repeat with the 2¹ position (covering positions 3, 6, 7).
Message so far: 1 0 1 _ 0 1 1

Repeat with the 2² position (covering positions 5, 6, 7).
Message to be sent: 1 0 1 1 0 1 1

Hamming code example (summary)

Original message = 1 0 1 1
Message to be sent = 1 0 1 1 0 1 1

How to check for a single-bit error in the sent message using the Hamming code?

Received message bits: 1 0 1 1 0 0 1
Position:              1 2 3 4 5 6 7
Check bits at positions 2⁰, 2¹, 2² (1, 2, 4)

Binary representation of each message-bit position:
3 = 2¹ + 2⁰      = 0 1 1
5 = 2² + 2⁰      = 1 0 1
6 = 2² + 2¹      = 1 1 0
7 = 2² + 2¹ + 2⁰ = 1 1 1
How to check for a single-bit error (continued)

Starting with the 2⁰ position: look at the positions with a 1 in the 2⁰ column, count the number of 1s in both the corresponding message bits & the 2⁰ check bit, & compute the parity. If the parity is even (odd parity was used when encoding), there is an error in one of the four bits that were checked.

Received bits at positions 1, 3, 5, 7: 1, 1, 0, 1 → three 1s, odd parity holds
No error in bits 1, 3, 5, 7

Repeat with the 2¹ position (check bit 2 & positions 3, 6, 7):
Received bits at positions 2, 3, 6, 7: 0, 1, 0, 1 → two 1s, even parity
Error in bit 2, 3, 6 or 7

How to check for a single-bit error (continued)

Repeat with the 2² position (check bit 4 & positions 5, 6, 7):
Received bits at positions 4, 5, 6, 7: 1, 0, 0, 1 → two 1s, even parity
Error in bit 4, 5, 6 or 7

How to find the error location using the Hamming code?

Received message bits: 1 0 1 1 0 0 1
Position:              1 2 3 4 5 6 7
How to find the error location using the Hamming code?

• No error in bits 1, 3, 5, 7
• Error in bit 2, 3, 6 or 7
• Error in bit 4, 5, 6 or 7

The error must be in bit 6, because bits 3, 5, & 7 are known to be correct & all the remaining information agrees on bit 6. The erroneous bit is changed back to 1, recovering 1 0 1 1 0 1 1.
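A Python sketch of this check (same odd-parity convention as the encoder sketch; hamming_check is our own name; the failing parity groups sum to the error position, which is the same intersection argument as above):

def hamming_check(received, odd=True):
    """Recompute each parity group; return the 1-based error position (0 = none)."""
    bits = [int(b) for b in received]      # position = index + 1
    error_pos = 0
    p = 1
    while p <= len(bits):
        ones = sum(bits[i] for i in range(len(bits)) if (i + 1) & p)
        if (ones % 2 == 1) != odd:         # parity of this group is violated
            error_pos += p
        p <<= 1
    return error_pos

received = list("1011001")                 # sent 1011011, bit 6 corrupted
pos = hamming_check(received)
print(pos)                                 # 6
if pos:
    received[pos - 1] = "1" if received[pos - 1] == "0" else "0"
print("".join(received))                   # 1011011, the corrected message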
