Digital Transmission Fundamentals: 3.1 Digital Representation of Information 3.2 Why Digital Communications?
Digital Transmission Fundamentals
3.1 Digital Representation of Information
3.2 Why Digital Communications?
3.3 Digital Representation of Analog Signals
3.4 Characterization of Communication Channels
3.5 Fundamental Limits in Digital Transmission
3.6 Line Coding
3.7 Modems and Digital Modulation
3.8 Properties of Media and Digital Transmission Systems
3.9 Error Detection and Correction
Digital Networks
Digital transmission enables networks to support many services: TV, e-mail, telephone.
Questions of Interest
How long will it take to transmit a message?
How many bits are in the message (text, image)?
How fast does the network/system transfer information?
Can a network/system handle a voice (video) call?
How many bits/second does voice/video require? At what quality?
How long will it take to transmit a message without errors?
How are errors introduced?
How are errors detected and corrected?
What transmission speed is possible over radio, copper cables, fiber, infrared, …?
Chapter 3: Digital Transmission Fundamentals
3.1 Digital Representation of Information
Bits, numbers, information
Bit: number with value 0 or 1
n bits: digital representation for 0, 1, …, 2^n – 1
Byte or Octet, n = 8
Computer word, n = 16, 32, or 64
n bits allows enumeration of 2^n possibilities
n-bit field in a header
n-bit representation of a voice sample
Message consisting of n bits
The number of bits required to represent a message
is a measure of its information content
More bits → More content
Block vs. Stream Information

Block: information that occurs in a single block
  Text message, data file, JPEG image, MPEG file
  Size = bits/block or bytes/block
  1 Kbyte = 2^10 bytes
  1 Mbyte = 2^20 bytes
  1 Gbyte = 2^30 bytes

Stream: information that is produced & transmitted continuously
  Real-time voice, streaming video
  Bit rate = bits/second
  1 Kbps = 10^3 bps
  1 Mbps = 10^6 bps
  1 Gbps = 10^9 bps
Transmission Delay
L number of bits in message
R bps speed of digital transmission system
L/R time to transmit the information
d distance in meters
c speed of light (3 × 10^8 m/s in vacuum)
tprop time for signal to propagate across medium
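The quantities above combine into the total time to deliver a message: t = tprop + L/R. A minimal sketch; the propagation speed v ≈ 2 × 10^8 m/s (about 2/3 of c in copper or fiber) is an assumed typical value, not from the source:

```python
# Total delivery time = propagation delay + transmission (serialization) delay.
def total_delay(L_bits, R_bps, d_meters, v=2e8):
    """v = assumed signal speed in the medium (~2/3 of c)."""
    t_prop = d_meters / v      # time for the signal to cross the medium
    t_tx = L_bits / R_bps      # time to push all L bits onto the link
    return t_prop + t_tx

# 1 Mbyte file over a 1 Mbps link spanning 1000 km:
# ~8.39 s of serialization dominates the ~5 ms of propagation.
print(total_delay(L_bits=2**20 * 8, R_bps=1e6, d_meters=1e6))
```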
The speech signal level varies with time
Digitization of Analog Signal
Sample analog signal in time and amplitude
Find closest approximation
[Figure: original signal, sample values, and the closest approximation at 3 bits/sample, using the 8 quantization levels ±∆/2, ±3∆/2, ±5∆/2, ±7∆/2]
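The 3-bit quantization described above can be sketched as a midrise uniform quantizer. This is an illustrative sketch; the function name and the clipping behavior at the extreme levels are assumptions, not from the source:

```python
# Midrise uniform quantizer: 3 bits/sample gives 8 output levels at
# ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2.  Samples outside the range are clipped.
def quantize(x, delta, bits=3):
    levels = 2 ** bits
    k = int(x // delta)                            # index of the quantization cell
    k = max(-(levels // 2), min(levels // 2 - 1, k))  # clip to available cells
    return k * delta + delta / 2                   # closest approximation (cell midpoint)

delta = 1.0
print(quantize(0.3, delta))    # 0.5   (  Δ/2 )
print(quantize(-2.2, delta))   # -2.5  ( -5Δ/2 )
print(quantize(9.9, delta))    # 3.5   (  7Δ/2, clipped )
```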
Broadcast TV: 720 × 480 pixels at 30 frames/sec = 10.4 × 10^6 pixels/sec
HDTV: 1920 × 1080 pixels at 30 frames/sec ≈ 62 × 10^6 pixels/sec
Communication channel
Transmitter
Converts information into signal suitable for transmission
Injects energy into communications medium or channel
Telephone converts voice into electric current
Modem converts bits into tones
Receiver
Receives energy from medium
Converts received signal into form suitable for delivery to user
Telephone converts current into voice
Modem converts tones into bits
Transmission Impairments
[Figure: the transmitter sends a signal through the communication channel; the received signal is recovered without errors when the SNR is high, but bit errors occur when the SNR is low]
[Figure: general error-detection system — the transmitter calculates n – k check bits from the k information bits and sends all n bits over the channel; the receiver recalculates the check bits and compares them with the received check bits; the information is accepted only if the check bits match]
How good is the single parity check code?
Redundancy: the single parity check code adds 1 redundant bit per k information bits: overhead = 1/(k + 1)
Coverage: all error patterns with an odd number of errors can be detected
An error pattern is a binary (k + 1)-tuple with 1s where errors occur and 0s elsewhere
Of the 2^(k+1) binary (k + 1)-tuples, half have odd weight, so 50% of error patterns can be detected
Is it possible to detect more errors if we add more check bits?
Yes, with the right codes
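A minimal sketch of the single parity check code described above (even parity; `encode` and `check` are illustrative names):

```python
# Single (even) parity check: append one bit so the total number of 1s
# is even.  Any odd number of bit errors flips the parity and is detected;
# an even number of errors restores it and slips through.
def encode(bits):
    return bits + [sum(bits) % 2]

def check(bits):
    return sum(bits) % 2 == 0   # True -> accepted

cw = encode([1, 0, 1, 1])       # -> [1, 0, 1, 1, 1]
assert check(cw)
cw[2] ^= 1                      # single error: detected
assert not check(cw)
cw[0] ^= 1                      # second error: parity matches again (missed)
assert check(cw)
```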
What if bit errors are random?
Many transmission channels introduce bit errors at random, independently of each other, and with probability p
Some error patterns are more probable than others:
P[10000000] = p(1 – p)^7 = (1 – p)^8 · [p/(1 – p)]
P[11000000] = p^2 (1 – p)^6 = (1 – p)^8 · [p/(1 – p)]^2
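For p < 1/2 the factor p/(1 – p) is less than 1, so patterns with fewer errors are more probable. A quick numeric check (p = 10^-3 is an assumed illustrative value):

```python
# Probability of an 8-bit error pattern with w errors: p^w (1-p)^(8-w).
p = 1e-3
P1 = p * (1 - p)**7        # one error, e.g. pattern 10000000
P2 = p**2 * (1 - p)**6     # two errors, e.g. pattern 11000000
print(P1 / P2)             # = (1-p)/p = 999: a single error is ~1000x more likely
```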
1s-complement checksum example (4-bit words, arithmetic mod 2^4 – 1 = 15):
b0 = 1100 = 12, b1 = 1010 = 10
b0 + b1 = 12 + 10 = 22 = 7 mod 15
In binary: 1100 + 1010 = 10110 = 10000 + 0110 → 0001 + 0110 = 0111 = 7
Check word: b2 = –7 = 8 mod 15
Take the 1s complement of 0111: b2 = 1000
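The worked example above can be reproduced in code. `ones_complement_sum` is an illustrative helper, assuming 4-bit words and end-around carry:

```python
# 1s-complement sum: add words and wrap any carry out of the top bit
# back into the least significant bit (arithmetic mod 2^bits - 1).
def ones_complement_sum(words, bits=4):
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        if total > mask:                       # end-around carry
            total = (total & mask) + (total >> bits)
    return total

b0, b1 = 0b1100, 0b1010                        # 12 and 10
s = ones_complement_sum([b0, b1])              # 22 mod 15 = 7 (0111)
b2 = s ^ 0b1111                                # 1s complement -> 8 (1000)
print(bin(b2))                                 # 0b1000
# Receiver check: the sum over all three words is all 1s when error-free
assert ones_complement_sum([b0, b1, b2]) == 0b1111
```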
Polynomial Codes
Polynomials instead of vectors for codewords
Polynomial arithmetic instead of check sums
Addition:
(x^7 + x^6 + 1) + (x^6 + x^5) = x^7 + (1 + 1)x^6 + x^5 + 1 = x^7 + x^5 + 1, since 1 + 1 = 0 mod 2
Multiplication:
(x + 1)(x^2 + x + 1) = x(x^2 + x + 1) + 1(x^2 + x + 1) = x^3 + x^2 + x + (x^2 + x + 1) = x^3 + 1
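GF(2) polynomial arithmetic maps neatly onto bit operations. A sketch, where bit i of an integer holds the coefficient of x^i (the function names are illustrative):

```python
# Polynomials over GF(2) as bitmasks: addition is XOR (1 + 1 = 0),
# multiplication is carry-less (shift-and-XOR).
def poly_add(a, b):
    return a ^ b

def poly_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a        # add a * x^i for each set bit of b
        a <<= 1
        b >>= 1
    return p

# (x^7 + x^6 + 1) + (x^6 + x^5) = x^7 + x^5 + 1
assert poly_add(0b11000001, 0b01100000) == 0b10100001
# (x + 1)(x^2 + x + 1) = x^3 + 1
assert poly_mul(0b11, 0b111) == 0b1001
```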
Binary Polynomial Division

Division with decimal numbers:

      34        quotient
35 ) 1222       dividend
     105
     ----
      172
      140
      ----
       32       remainder

dividend = quotient × divisor + remainder: 1222 = 34 × 35 + 32

Polynomial division:

              x^3 + x^2 + x        quotient q(x)
x^3 + x + 1 ) x^6 + x^5            dividend
              x^6 + x^4 + x^3
              ---------------
              x^5 + x^4 + x^3
              x^5 + x^3 + x^2
              ---------------
              x^4 + x^2
              x^4 + x^2 + x
              -------------
              x                    remainder r(x)

Note: the degree of r(x) is less than the degree of the divisor
Polynomial Coding
Code has a binary generating polynomial of degree n – k:
g(x) = x^(n–k) + g_(n–k–1) x^(n–k–1) + … + g_2 x^2 + g_1 x + 1
k information bits define a polynomial of degree k – 1:
i(x) = i_(k–1) x^(k–1) + i_(k–2) x^(k–2) + … + i_2 x^2 + i_1 x + i_0
Divide x^(n–k) i(x) by g(x) to find the remainder polynomial r(x) of degree at most n – k – 1:
x^(n–k) i(x) = q(x)g(x) + r(x)
Define the codeword polynomial of degree n – 1:
b(x) = x^(n–k) i(x) + r(x)
The n-bit codeword consists of the k information bits followed by the n – k check bits
Polynomial example: k = 4, n – k = 3
Generator polynomial: g(x) = x^3 + x + 1   (1011)
Information: (1,1,0,0), i(x) = x^3 + x^2
Encoding: x^3 i(x) = x^6 + x^5   (1100000)

              x^3 + x^2 + x          quotient
x^3 + x + 1 ) x^6 + x^5
              x^6 + x^4 + x^3
              ---------------
              x^5 + x^4 + x^3
              x^5 + x^3 + x^2
              ---------------
              x^4 + x^2
              x^4 + x^2 + x
              -------------
              x                      remainder

In binary:

       1110       quotient
1011 ) 1100000
       1011
       ----
        1110
        1011
        ----
         1010
         1011
         ----
          010     remainder

Transmitted codeword:
b(x) = x^6 + x^5 + x
b = (1,1,0,0,0,1,0)
Exercise 1
Generator polynomial: g(x) = x^3 + x^2 + 1   (1101)
Information: (1,0,1,0,1,1,0), i(x) = x^6 + x^4 + x^2 + x
Q1: Find the remainder (also called the Frame Check Sequence, FCS) and the transmitted codeword
Encoding:
x^3 i(x) = x^3 (x^6 + x^4 + x^2 + x) = x^9 + x^7 + x^5 + x^4
Solution

       1101101      quotient
1101 ) 1010110000   dividend = x^3 i(x)
       1101
       ----
        1111
        1101
        ----
         0101
         0000
         ----
         1010
         1101
         ----
          1110
          1101
          ----
           0110
           0000
           ----
           1100
           1101
           ----
            001     remainder (FCS)

Transmitted codeword:
b(x) = x^9 + x^7 + x^5 + x^4 + 1
b = (1,0,1,0,1,1,0,0,0,1)
The Pattern in Polynomial Coding
CRC-8: g(x) = x^8 + x^2 + x + 1   (used in ATM)
CRC-16:
Hamming (7,4) code: the three check bits are computed from the four information bits by
b5 = b1 + b3 + b4
b6 = b1 + b2 + b4
b7 =      b2 + b3 + b4
Information   Codeword        Weight
b1 b2 b3 b4   b1 … b7         w(b)
0 0 0 0       0 0 0 0 0 0 0   0
0 0 0 1       0 0 0 1 1 1 1   4
0 0 1 0       0 0 1 0 1 0 1   3
0 0 1 1       0 0 1 1 0 1 0   3
0 1 0 0       0 1 0 0 0 1 1   3
0 1 0 1       0 1 0 1 1 0 0   3
0 1 1 0       0 1 1 0 1 1 0   4
0 1 1 1       0 1 1 1 0 0 1   4
1 0 0 0       1 0 0 0 1 1 0   3
1 0 0 1       1 0 0 1 0 0 1   3
1 0 1 0       1 0 1 0 0 1 1   4
1 0 1 1       1 0 1 1 1 0 0   4
1 1 0 0       1 1 0 0 1 0 1   4
1 1 0 1       1 1 0 1 0 1 0   4
1 1 1 0       1 1 1 0 0 0 0   3
1 1 1 1       1 1 1 1 1 1 1   7
Parity Check Equations
Rearrange the parity check equations:
0 = b5 + b5 = b1 + b3 + b4 + b5
0 = b6 + b6 = b1 + b2 + b4 + b6
0 = b7 + b7 =      b2 + b3 + b4 + b7
In matrix form: all codewords b = (b1, b2, …, b7) must satisfy H b^t = 0, where

    H = | 1 0 1 1 1 0 0 |
        | 1 1 0 1 0 1 0 |
        | 0 1 1 1 0 0 1 |

Note: each nonzero 3-tuple appears exactly once as a column of the check matrix H
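The check-matrix condition H b^t = 0 can be verified numerically. A sketch using the (7,4) check matrix H from above (`syndrome` is an illustrative name):

```python
# Syndrome computation for the (7,4) Hamming code: H b^t = 0 (mod 2)
# for every valid codeword b = (b1,...,b7).
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(b):
    return [sum(h * x for h, x in zip(row, b)) % 2 for row in H]

# Codeword for information 0001 (from the table): check bits 1 1 1
assert syndrome([0, 0, 0, 1, 1, 1, 1]) == [0, 0, 0]
# Flip one bit: the syndrome equals the corresponding column of H
assert syndrome([0, 0, 1, 1, 1, 1, 1]) == [1, 0, 1]
```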
Error Detection with Hamming Code
The receiver computes the syndrome s = He, where e is the error pattern:

Single error, e.g. e = (0,0,1,0,0,0,0)^t:
s = He = (1,0,1)^t, the third column of H, so the error is detected

Double error, e.g. e = (0,0,1,0,0,0,1)^t:
s = He = (1,0,1)^t + (0,0,1)^t = (1,0,0)^t ≠ 0, so the error is detected
(any two columns of H are distinct, so every double error gives s ≠ 0)

Triple error, e.g. e = (1,1,1,0,0,0,0)^t:
s = He = (1,1,0)^t + (0,1,1)^t + (1,0,1)^t = (0,0,0)^t, so this error is NOT detected
Hamming Distance (weight)
The Hamming distance is the number of positions in two strings of equal length at which the corresponding elements differ (i.e., the number of substitutions required to change one string into the other)
For example:
Hamming distance between 1011101 and 1001001 is 2.
Hamming distance between 2143896 and 2233796 is 3.
Hamming distance between "toned" and "roses" is 3.
The Hamming weight of a string is its Hamming distance from the zero string of the same length, i.e., the number of elements in the string that are not zero
For a binary string this is just the number of 1s; for instance, the Hamming weight of 11101 is 4
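A minimal sketch of these definitions, reproducing the examples above:

```python
# Hamming distance: number of positions at which two equal-length
# sequences differ.  Works on any sequence type (strings, lists, ...).
def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1011101", "1001001"))  # 2
print(hamming_distance("2143896", "2233796"))  # 3
print(hamming_distance("toned", "roses"))      # 3
# Hamming weight = distance from the zero string of the same length
print(hamming_distance("11101", "00000"))      # 4
```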
General Hamming Codes
For m > 2, the Hamming code is obtained through the check matrix H:
Each nonzero m-tuple appears exactly once as a column of H
The resulting code corrects all single errors
For each value of m, there is a polynomial code with g(x) of degree m that is equivalent to a Hamming code and corrects all single errors
For m = 3, g(x) = x^3 + x + 1
Error-correction using Hamming Codes
The transmitter sends codeword b; the channel adds error pattern e; the receiver gets R = b + e
The receiver computes the syndrome s = HR = Hb + He = He
If s ≠ 0, the receiver assumes the most likely error pattern, a single bit error, and flips the bit whose column of H matches the syndrome
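Syndrome decoding for single errors can be sketched as follows: the syndrome of a single-bit error equals one column of H, and the index of that column is the position to flip (`correct` is an illustrative name):

```python
# Single-error correction for the (7,4) Hamming code via syndrome decoding.
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def correct(r):
    s = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
    if any(s):
        cols = list(zip(*H))           # columns of H as 3-tuples
        pos = cols.index(tuple(s))     # matching column identifies the error
        r[pos] ^= 1                    # flip the (assumed single) errored bit
    return r

r = [0, 0, 0, 1, 1, 1, 1]   # valid codeword...
r[4] ^= 1                   # ...with bit b5 flipped in transit
assert correct(r) == [0, 0, 0, 1, 1, 1, 1]
```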