
Chapter 3

Digital Transmission
Fundamentals
3.1 Digital Representation of Information
3.2 Why Digital Communications?
3.3 Digital Representation of Analog Signals
3.4 Characterization of Communication Channels
3.5 Fundamental Limits in Digital Transmission
3.6 Line Coding
3.7 Modems and Digital Modulation
3.8 Properties of Media and Digital Transmission Systems
3.9 Error Detection and Correction
Digital Networks
• Digital transmission enables networks to support many services,
  e.g., telephone, e-mail, and TV
Questions of Interest
• How long will it take to transmit a message?
  • How many bits are in the message (text, image)?
  • How fast does the network/system transfer information?
• Can a network/system handle a voice (video) call?
  • How many bits/second does voice/video require? At what quality?
• How long will it take to transmit a message without errors?
  • How are errors introduced?
  • How are errors detected and corrected?
• What transmission speed is possible over radio, copper cables,
  fiber, infrared, …?
Chapter 3
Digital Transmission
Fundamentals
3.1 Digital Representation of
Information
Bits, numbers, information
• Bit: number with value 0 or 1
  • n bits: digital representation for 0, 1, …, 2^n – 1
  • Byte or Octet, n = 8
  • Computer word, n = 16, 32, or 64
• n bits allows enumeration of 2^n possibilities
  • n-bit field in a header
  • n-bit representation of a voice sample
  • Message consisting of n bits
• The number of bits required to represent a message is a measure
  of its information content
  • More bits → More content
Block vs. Stream Information
Block
• Information that occurs in a single block
  • Text message
  • Data file
  • JPEG image
  • MPEG file
• Size = bits/block or bytes/block
  • 1 Kbyte = 2^10 bytes
  • 1 Mbyte = 2^20 bytes
  • 1 Gbyte = 2^30 bytes
Stream
• Information that is produced & transmitted continuously
  • Real-time voice
  • Streaming video
• Bit rate = bits/second
  • 1 Kbps = 10^3 bps
  • 1 Mbps = 10^6 bps
  • 1 Gbps = 10^9 bps
Transmission Delay
• L      number of bits in message
• R      bps, speed of digital transmission system
• L/R    time to transmit the information
• d      distance in meters
• c      speed of light (3×10^8 m/s in vacuum)
• tprop  time for signal to propagate across medium

Delay = tprop + L/R = d/c + L/R seconds

What can be done to reduce the delay?
• Use data compression to reduce L
• Use higher-speed modem to increase R
• Place server closer to reduce d
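The delay formula drops straight into code. A minimal sketch in Python; the message size, link speed, and distance below are made-up values for illustration:

```python
def total_delay(L_bits, R_bps, d_m, c=3e8):
    """Delay = tprop + L/R = d/c + L/R seconds."""
    t_prop = d_m / c           # time for the signal to cross the medium
    t_tx = L_bits / R_bps      # time to push all L bits onto the link
    return t_prop + t_tx

# Example: 1-Mbyte message, 10 Mbps link, 3000 km path
print(total_delay(L_bits=8 * 2**20, R_bps=10e6, d_m=3e6))  # ~0.849 s
```

In this example the transmission time L/R (~0.84 s) dominates the propagation time d/c (0.01 s), which is why compression and faster modems are listed first.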
Compression
• Information is usually not represented efficiently
• Data compression algorithms
  • Represent the information using fewer bits
  • Noiseless (lossless): original information recovered exactly
    • e.g., zip, compress, GIF, fax
  • Noisy (lossy): recover information approximately
    • JPEG
    • Tradeoff: # bits vs. quality
• Compression Ratio = #bits (original file) / #bits (compressed file)
Color Image
[Figure: an H × W color image decomposes into red, green, and blue
H × W component images]

Total bits = 3 × H × W pixels × B bits/pixel = 3HWB bits

Example: 8 × 10 inch picture at 400 × 400 pixels per in^2
  400 × 400 × 8 × 10 = 12.8 million pixels
  8 bits/pixel/color
  12.8 megapixels × 3 bytes/pixel = 38.4 megabytes
Examples of Block Information

Type         Method          Format                Original      Compressed (Ratio)
Text         Zip, compress   ASCII                 Kbytes–Mbytes (2–6)
Fax          CCITT Group 3   A4 page, 200×100      256 Kbytes    5–54 Kbytes (5–50)
                             pixels/in^2
Color Image  JPEG            8×10 in^2 photo,      38.4 Mbytes   1–8 Mbytes (5–30)
                             400^2 pixels/in^2
Stream Information
• A real-time voice signal must be digitized & transmitted as it
  is produced
• Analog signal level varies continuously in time

[Figure: the speech signal level varies with time]
Digitization of Analog Signal
• Sample analog signal in time and amplitude
• Find closest approximation

[Figure: original signal sampled at regular intervals; each sample
value is mapped to the closest of 8 approximation levels
(3 bits/sample) spaced Δ apart: ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2]

Rs = Bit rate = # bits/sample × # samples/second
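To make the quantization step concrete, here is a small sketch of a 3-bit midrise quantizer with the 8 levels pictured above; the step size Δ (here `step`) is an arbitrary choice for illustration:

```python
def quantize(x, bits=3, step=1.0):
    """Map x to the closest of 2**bits levels spaced `step` apart:
    +/-step/2, +/-3*step/2, ..., i.e., the midpoints of the intervals."""
    levels = 2 ** bits
    k = int(x // step)                            # interval index
    k = max(-(levels // 2), min(levels // 2 - 1, k))  # clamp to range
    return (k + 0.5) * step                       # closest level

samples = [0.2, 1.7, -2.9, 4.5]
print([quantize(s) for s in samples])   # [0.5, 1.5, -2.5, 3.5]
```

The last sample (4.5) falls outside the quantizer's range and is clipped to the top level, which is exactly the overload behavior the figure implies.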


Bit Rate of Digitized Signal
• Bandwidth Ws Hertz: how fast the signal changes
  • Higher bandwidth → more frequent samples
  • Minimum sampling rate = 2 × Ws
• Representation accuracy: range of approximation error
  • Higher accuracy
    → smaller spacing between approximation values
    → more bits per sample
Example: Voice & Audio
Telephone voice
• Ws = 4 kHz → 8000 samples/sec
• 8 bits/sample
• Rs = 8 × 8000 = 64 kbps
• Cellular phones use more powerful compression algorithms:
  8–12 kbps
CD Audio
• Ws = 22 kHz → 44000 samples/sec
• 16 bits/sample
• Rs = 16 × 44000 = 704 kbps per audio channel
• MP3 uses more powerful compression algorithms:
  50 kbps per audio channel
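Both examples follow the same pattern: Rs = bits/sample × samples/second, with the Nyquist minimum of 2 × Ws samples/second from the previous slide. A quick sketch; the stereo channel count for CD audio is an added assumption, not part of the slide:

```python
def bit_rate(bandwidth_hz, bits_per_sample, channels=1):
    """Rs = bits/sample x samples/second (minimum 2 x Ws samples/s)."""
    samples_per_sec = 2 * bandwidth_hz
    return bits_per_sample * samples_per_sec * channels

print(bit_rate(4_000, 8))        # telephone voice: 64000 bps
print(bit_rate(22_000, 16, 2))   # stereo CD audio: 1408000 bps (2 x 704 kbps)
```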
Video Signal
• Sequence of picture frames
  • Each picture digitized & compressed
• Frame repetition rate
  • 10–30–60 frames/second depending on quality
• Frame resolution
  • Small frames for videoconferencing
  • Standard frames for conventional broadcast TV
  • HDTV frames

Rate = M bits/pixel × (W × H) pixels/frame × F frames/second
Video Frames
176
QCIF videoconferencing 144 at 30 frames/sec =
(144 lines and 176 pixels per line ) 760,000 pixels/sec

720
Broadcast TV at 30 frames/sec =
480
10.4 x 106 pixels/sec

1920
HDTV at 30 frames/sec =

1080 67 x 106 pixels/sec
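These pixel rates are simple products of resolution and frame rate, and multiplying by a pixel depth gives the uncompressed bit rate. A sketch; the 24 bits/pixel depth is an assumption added for illustration, and the active-pixel count it gives for HDTV comes out a bit below the slide's rounded figure:

```python
def video_rates(w, h, fps, bits_per_pixel=24):
    """Return (pixels/sec, bits/sec) for an uncompressed video stream."""
    pixels_per_sec = w * h * fps
    return pixels_per_sec, pixels_per_sec * bits_per_pixel

for name, (w, h) in [("QCIF", (176, 144)), ("Broadcast TV", (720, 480)),
                     ("HDTV", (1920, 1080))]:
    pix, bps = video_rates(w, h, fps=30)
    print(f"{name}: {pix:.3g} pixels/s, {bps / 1e6:.0f} Mbps uncompressed")
```

The broadcast TV result (about 249 Mbps) matches the "Original" column of the table on the next slide.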


Digital Video Signals

Type         Method  Format               Original   Compressed
Video        H.261   176×144 or 352×288   2–36 Mbps  64–1544 kbps
Conference           pix @10–30 fr/sec
Full Motion  MPEG2   720×480 pix          249 Mbps   2–6 Mbps
                     @30 fr/sec
HDTV         MPEG2   1920×1080 pix        1.6 Gbps   19–38 Mbps
                     @30 fr/sec
Transmission of Stream Information
• Constant bit-rate
  • Signals such as digitized telephone voice produce a steady
    stream: e.g., 64 kbps
  • Network must support steady transfer of signal, e.g.,
    64 kbps circuit
• Variable bit-rate
  • Signals such as digitized video produce a stream that varies
    in bit rate, e.g., according to motion and detail in a scene
  • Network must support variable transfer rate of signal, e.g.,
    packet switching or rate-smoothing with constant bit-rate
    circuit
Stream Service Quality Issues
Network Transmission Impairments
• Delay: Is information delivered in timely fashion?
• Jitter: Is information delivered in sufficiently smooth fashion?
• Loss: Is information delivered without loss? If loss occurs, is
  delivered signal quality acceptable?

Applications & application layer protocols are developed to deal
with these impairments.
Chapter 3
Digital Transmission
Fundamentals
3.2 Why Digital
Communications?
A Transmission System

[Diagram: Transmitter → Communication channel → Receiver]

Transmitter
• Converts information into signal suitable for transmission
• Injects energy into communications medium or channel
  • Telephone converts voice into electric current
  • Modem converts bits into tones
Receiver
• Receives energy from medium
• Converts received signal into form suitable for delivery to user
  • Telephone converts current into voice
  • Modem converts tones into bits
Transmission Impairments

[Diagram: the transmitted signal passes through the communication
channel and arrives at the receiver as a degraded received signal]

Communication Channel          Transmission Impairments
• Pair of copper wires         • Signal attenuation
• Coaxial cable                • Signal distortion
• Radio                        • Spurious noise
• Light in optical fiber       • Interference from other signals
• Light in air
• Infrared
Analog Long-Distance Communications

[Diagram: Source → Repeater → … → Repeater → Destination, one
repeater per transmission segment]

• Each repeater attempts to restore the analog signal to its
  original form
• Restoration is imperfect
  • Distortion is not completely eliminated
  • Noise & interference are only partially removed
• Signal quality decreases with # of repeaters
• Communication is distance-limited
• Still used in analog cable TV systems
• Analogy: copy a song using a cassette recorder
Analog vs. Digital Transmission

Analog transmission: all details must be reproduced accurately
[Figure: sent waveform vs. received waveform degraded by
attenuation and distortion]

Digital transmission: only discrete levels need to be reproduced
[Figure: sent pulse vs. received pulse; despite attenuation and
distortion, a simple receiver need only decide whether the
original pulse was positive or negative]
Digital Long-Distance Communications

[Diagram: Source → Regenerator → … → Regenerator → Destination]

• Regenerator recovers original data sequence and retransmits on
  next segment
• Can design it so error probability is very small
• Then each regeneration is like the first time!
• Analogy: copy an MP3 file
• Communication is possible over very long distances
• Digital systems vs. analog systems
  • Less power, longer distances, lower system cost
  • Monitoring, multiplexing, coding, encryption, protocols…
Digital Binary Signal

[Figure: bit sequence 1 0 1 1 0 1 sent as rectangular pulses of
amplitude +A or –A, one pulse per T-second interval]

Bit rate = 1 bit / T seconds

For a given communications medium:
• How do we increase transmission speed?
• How do we achieve reliable communications?
• Are there limits to speed and reliability?
Pulse Transmission Rate
• Objective: Maximize pulse rate through a channel, that is,
  make T as small as possible

[Figure: a narrow input pulse emerges from the channel as a
spread-out pulse with ringing]

• Question: How frequently can these pulses be transmitted
  without interfering with each other?
• Answer: 2 × Wc pulses/second,
  where Wc is the bandwidth of the channel
Bandwidth of a Channel

X(t) = a cos(2πft) → Channel → Y(t) = A(f) a cos(2πft)

• If input is a sinusoid of frequency f, then
  • output is a sinusoid of same frequency f
  • output is attenuated by an amount A(f) that depends on f
  • A(f) ≈ 1: input signal passes readily
  • A(f) ≈ 0: input signal is blocked
• Bandwidth Wc is the range of frequencies passed by the channel

[Figure: ideal low-pass channel, A(f) = 1 for 0 ≤ f ≤ Wc and
A(f) = 0 for f > Wc]
Multilevel Pulse Transmission
• Assume channel of bandwidth Wc, and transmit 2Wc pulses/sec
  (without interference)
• If pulse amplitudes are either –A or +A, then each pulse
  conveys 1 bit, so
  Bit Rate = 1 bit/pulse × 2Wc pulses/sec = 2Wc bps
• If amplitudes are from {–A, –A/3, +A/3, +A}, then each pulse
  conveys 2 bits, and bit rate is 2 × 2Wc bps
• By going to M = 2^m amplitude levels, we achieve
  Bit Rate = m bits/pulse × 2Wc pulses/sec = 2mWc bps

In the absence of noise, the bit rate can be increased without
limit by increasing m.
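A one-liner captures the noise-free rate formula; the rate grows only logarithmically in the number of levels M. The 3 kHz bandwidth below is a hypothetical value, roughly a telephone channel:

```python
from math import log2

def multilevel_bit_rate(Wc_hz, M):
    """R = m bits/pulse x 2*Wc pulses/sec, with m = log2(M)."""
    return 2 * Wc_hz * log2(M)

for M in (2, 4, 8, 16):
    print(M, multilevel_bit_rate(3_000, M))  # 6000, 12000, 18000, 24000 bps
```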
Noise & Reliable Communications
• All physical systems have noise
  • Electrons always vibrate at non-zero temperature
  • Motion of electrons induces noise
• Presence of noise limits accuracy of measurement of received
  signal amplitude
• Errors occur if signal separation is comparable to noise level
• Bit Error Rate (BER) increases with decreasing signal-to-noise
  ratio
• Noise places a limit on how many amplitude levels can be used
  in pulse transmission
Signal-to-Noise Ratio

[Figure: with high SNR, signal + noise still resembles the signal
and no errors occur; with low SNR, noise overwhelms the signal and
errors occur]

SNR = Average signal power / Average noise power

SNR (dB) = 10 log10 SNR
Shannon Channel Capacity

C = Wc log2 (1 + SNR) bps

• Arbitrarily reliable communication is possible if the
  transmission rate R < C
• If R > C, then arbitrarily reliable communication is not
  possible
• “Arbitrarily reliable” means the BER can be made arbitrarily
  small through sufficiently complex coding
• C can be used as a measure of how close a system design is to
  the best achievable performance
• Bandwidth Wc & SNR determine C
Example
• Find the Shannon channel capacity for a telephone channel with
  Wc = 3400 Hz and SNR = 10000

C = 3400 log2 (1 + 10000)
  = 3400 log10 (10001) / log10 2 ≈ 45200 bps

Note that SNR = 10000 corresponds to
SNR (dB) = 10 log10 (10000) = 40 dB
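The same computation in Python; math.log2 does the base conversion the slide does by hand:

```python
from math import log2, log10

Wc, snr = 3400, 10_000
C = Wc * log2(1 + snr)     # Shannon capacity
print(round(C))            # 45179, i.e., about 45.2 kbps
print(10 * log10(snr))     # 40.0 dB
```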
Bit Rates of Digital Transmission Systems

System          Bit Rate                Observations
Telephone       33.6–56 kbps            4 kHz telephone channel
  twisted pair
Ethernet        10 Mbps, 100 Mbps,      100 meters of unshielded
  twisted pair    1 Gbps                twisted copper wire pair
Cable modem     500 kbps–4 Mbps         Shared CATV return channel
ADSL            64–640 kbps in,         Coexists with analog
                1.536–6.144 Mbps out    telephone signal
2.4 GHz radio   2–11 Mbps               IEEE 802.11 wireless LAN
28 GHz radio    1.5–45 Mbps             5 km multipoint radio
Optical fiber   2.5–10 Gbps             1 wavelength
Optical fiber   >1600 Gbps              Many wavelengths
Examples of Channels

Channel                 Bandwidth         Bit Rates
Telephone voice         3 kHz             33 kbps
  channel
Copper pair             1 MHz             1–6 Mbps
Coaxial cable           500 MHz           30 Mbps/channel
                        (6 MHz channels)
5 GHz radio             300 MHz           54 Mbps/channel
  (IEEE 802.11)         (11 channels)
Optical fiber           Many TeraHertz    40 Gbps/wavelength
Chapter 3
Digital Transmission
Fundamentals
3.9 Error Detection and
Correction
Error Control
• Digital transmission systems introduce errors
• Applications require certain reliability level
  • Data applications require error-free transfer
  • Voice & video applications tolerate some errors
• Error control used when transmission system does not meet
  application requirement
• Error control ensures a data stream is transmitted to a certain
  level of accuracy despite errors
• Two basic approaches:
  • Error detection & retransmission (ARQ)
  • Forward error correction (FEC)
Key Idea
• All transmitted data blocks (“codewords”) satisfy a pattern
• If received block doesn’t satisfy pattern, it is in error
• Redundancy: only a subset of all possible blocks can be
  codewords
• Blindspot: when channel transforms a codeword into another
  codeword

[Diagram: user information → encoder → channel (all inputs to the
channel satisfy the pattern or condition) → pattern checking →
deliver user information or set error alarm]
Single Parity Check
• Append an overall parity check to k information bits

Info bits:  b1, b2, b3, …, bk
Check bit:  bk+1 = b1 + b2 + b3 + … + bk modulo 2
Codeword:   (b1, b2, b3, …, bk, bk+1)

• All codewords have even # of 1s
• Receiver checks to see if # of 1s is even
• All error patterns that change an odd # of bits are detectable
• All error patterns that change an even # of bits are
  undetectable
• Parity bit used in ASCII code
Example of Single Parity Code
• Information (7 bits): (0, 1, 0, 1, 1, 0, 0)
• Parity bit: b8 = 0 + 1 + 0 + 1 + 1 + 0 + 0 = 1
• Codeword (8 bits): (0, 1, 0, 1, 1, 0, 0, 1)

• If single error in bit 3: (0, 1, 1, 1, 1, 0, 0, 1)
  • # of 1s = 5, odd
  • Error detected
• If errors in bits 3 and 5: (0, 1, 1, 1, 0, 0, 0, 1)
  • # of 1s = 4, even
  • Error not detected
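A few lines of Python reproduce this example end to end; a sketch, not a production codec:

```python
def encode(bits):
    """Append the mod-2 sum of the information bits as a parity bit."""
    return bits + [sum(bits) % 2]

def check(word):
    """True if the received word has an even number of 1s (no error seen)."""
    return sum(word) % 2 == 0

cw = encode([0, 1, 0, 1, 1, 0, 0])   # -> [0, 1, 0, 1, 1, 0, 0, 1]
cw[2] ^= 1                           # single error in bit 3
print(check(cw))                     # False: error detected
cw[4] ^= 1                           # second error (bits 3 and 5)
print(check(cw))                     # True: double error goes undetected
```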
Check bits & Error Detection

[Diagram: sender calculates n – k check bits from k information
bits and transmits both; the receiver recalculates the check bits
from the received information bits and compares them with the
received check bits; the information is accepted if they match]
How good is the single parity check code?
• Redundancy: single parity check code adds 1 redundant bit per
  k information bits: overhead = 1/(k + 1)
• Coverage: all error patterns with odd # of errors can be
  detected
  • An error pattern is a binary (k + 1)-tuple with 1s where
    errors occur and 0s elsewhere
  • Of the 2^(k+1) binary (k + 1)-tuples, half have odd weight,
    so 50% of error patterns can be detected
• Is it possible to detect more errors if we add more check bits?
  • Yes, with the right codes
What if bit errors are random?
• Many transmission channels introduce bit errors at random,
  independently of each other, and with probability p
• Some error patterns are more probable than others:

  P[10000000] = p(1 – p)^7 = (1 – p)^8 [p/(1 – p)]      and
  P[11000000] = p^2 (1 – p)^6 = (1 – p)^8 [p/(1 – p)]^2

• In any worthwhile channel p < 0.5, and so p/(1 – p) < 1
• It follows that patterns with 1 error are more likely than
  patterns with 2 errors, and so forth
• What is the probability that an undetectable error pattern
  occurs?
Single parity check code with random bit errors
• Undetectable error pattern if even # of bit errors (writing
  C(n,k) for the binomial coefficient "n choose k"):

  P[error detection failure] = P[undetectable error pattern]
    = P[error patterns with even number of 1s]
    = C(n,2) p^2 (1 – p)^(n–2) + C(n,4) p^4 (1 – p)^(n–4) + …

• Example: evaluate the above for n = 32, p = 10^-3

  P[undetectable error]
    = C(32,2) (10^-3)^2 (1 – 10^-3)^30 + C(32,4) (10^-3)^4 (1 – 10^-3)^28
    ≈ 496 × 10^-6 + 35960 × 10^-12 ≈ 4.96 × 10^-4

• For this example, an undetectable error pattern occurs in
  roughly 1 in 2000 transmitted blocks
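The even-weight sum is easy to evaluate exactly with math.comb; the exact value comes out slightly below the slide's estimate, which drops the (1 – p) factors:

```python
from math import comb

def p_undetected(n, p):
    """P[undetectable error]: sum over even numbers of bit errors."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(2, n + 1, 2))

print(p_undetected(32, 1e-3))   # ~4.8e-4, vs. the slide's 4.96e-4 estimate
```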
What is a good code?
• Many channels have preference for error patterns that have
  fewer # of errors
• These error patterns map transmitted codeword to nearby n-tuple
• If codewords are close to each other then detection failures
  will occur
• Good codes should maximize separation between codewords

[Figure: two scatterings of codewords (x) among noncodewords (o);
codewords crowded together have poor distance properties,
codewords spread far apart have good distance properties]
Two-Dimensional Parity Check
• More parity bits to improve coverage
• Arrange information as columns
• Add single parity bit to each column
• Add a final “parity” column
• Used in early error control systems

  1 0 0 1 0 0
  0 1 0 0 0 1     Last column consists of
  1 0 0 1 0 0     check bits for each row
  1 1 0 1 1 0
  1 0 0 1 1 1     Bottom row consists of
                  check bits for each column
Error-detecting capability

[Figure: four copies of the parity array with 1, 2, 3, and 4
errors injected; arrows indicate the failed check bits. A single
error fails one row check and one column check; two errors in the
same row fail two column checks; three errors still fail at least
one check; but four errors arranged in a rectangle flip no checks
at all and are undetectable.]

• 1, 2, or 3 errors can always be detected; not all patterns of
  4 or more errors can be detected
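A sketch of the scheme in Python. A single error fails exactly one row check and one column check, which is what the arrows in the figure mark (and is also enough to correct that error):

```python
def encode_2d(rows):
    """Append a parity bit to each row, then a parity row for the columns."""
    out = [r + [sum(r) % 2] for r in rows]        # row parity column
    out.append([sum(c) % 2 for c in zip(*out)])   # column parity row
    return out

def failed_checks(block):
    """Return (rows, columns) whose parity check fails."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2]
    return bad_rows, bad_cols

block = encode_2d([[1, 0, 0, 1, 0], [0, 1, 0, 0, 0], [1, 0, 0, 1, 0]])
block[1][2] ^= 1                 # inject a single error
print(failed_checks(block))      # ([1], [2]): row 1, column 2 pinpoint it
```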


Other Error Detection Codes
• Many applications require very low error rate
• Need codes that detect the vast majority of errors
• Single parity check codes do not detect enough errors
• Two-dimensional codes require too many check bits
• The following error detecting codes are used in practice:
  • Internet checksums
  • CRC polynomial codes
Internet Checksum
• Several Internet protocols (e.g., IP, TCP, UDP) use check bits
  to detect errors in the IP header (or in the header and data
  for TCP/UDP)
• A checksum is calculated for header contents and included in a
  special field
• The checksum is recalculated at every router, so the algorithm
  was selected for ease of implementation in software
• Let the header consist of L 16-bit words,
  b0, b1, b2, ..., bL-1
• The algorithm appends a 16-bit checksum bL
Checksum Calculation
The checksum bL is calculated as follows:
• Treating each 16-bit word as an integer, find
  x = b0 + b1 + b2 + ... + bL-1 modulo 2^16 – 1
• The checksum is then given by:
  bL = –x modulo 2^16 – 1
• Thus, the headers must satisfy the following pattern:
  0 = b0 + b1 + b2 + ... + bL-1 + bL modulo 2^16 – 1
• The checksum calculation is carried out in software using
  one’s complement arithmetic
Internet Checksum Example

Use modulo arithmetic:
• Assume 4-bit words, so use mod 2^4 – 1 = 15 arithmetic
• b0 = 1100 = 12, b1 = 1010 = 10
• b0 + b1 = 12 + 10 = 22 = 7 mod 15
• b2 = –7 = 8 mod 15, therefore b2 = 1000

Use binary arithmetic:
• Note 16 = 1 mod 15, so 10000 = 0001 mod 15
  (the leading bit wraps around)
• b0 + b1 = 1100 + 1010 = 10110
          = 10000 + 0110 = 0001 + 0110 = 0111 = 7
• Take 1s complement: b2 = –0111 = 1000
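The mod 2^16 – 1 description translates directly into Python. This is a simplified sketch of the slide's arithmetic, not a drop-in replacement for a real RFC 1071 implementation, and the header words below are arbitrary illustrative values:

```python
def checksum(words):
    """Checksum that makes the word total 0 mod 2^16 - 1."""
    x = sum(words) % 0xFFFF    # mod 2^16 - 1 addition
    return (-x) % 0xFFFF       # bL = -x mod 2^16 - 1

hdr = [0x4500, 0x0073, 0x0000, 0x4000]    # hypothetical 16-bit header words
c = checksum(hdr)
print((sum(hdr) + c) % 0xFFFF == 0)       # True: the pattern is satisfied
```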
Polynomial Codes
• Polynomials instead of vectors for codewords
• Polynomial arithmetic instead of checksums
• Implemented using shift-register circuits
• Also called cyclic redundancy check (CRC) codes
• Most data communications standards use polynomial codes for
  error detection
• Polynomial codes are the basis for powerful error-correction
  methods
Binary Polynomial Arithmetic
• Binary vectors map to polynomials:
  (i_(k-1), i_(k-2), …, i2, i1, i0)
    ↔ i_(k-1) x^(k-1) + i_(k-2) x^(k-2) + … + i2 x^2 + i1 x + i0

Addition:
  (x^7 + x^6 + 1) + (x^6 + x^5) = x^7 + x^6 + x^6 + x^5 + 1
                                = x^7 + (1+1)x^6 + x^5 + 1
                                = x^7 + x^5 + 1   since 1+1 = 0 mod 2

Multiplication:
  (x + 1)(x^2 + x + 1) = x(x^2 + x + 1) + 1(x^2 + x + 1)
                       = x^3 + x^2 + x + (x^2 + x + 1)
                       = x^3 + 1
Binary Polynomial Division
• Division with decimal numbers: dividing 1222 by 35 gives
  quotient 34 and remainder 32, and
  dividend = quotient × divisor + remainder: 1222 = 34 × 35 + 32
• Polynomial division works the same way: dividing x^6 + x^5 by
  x^3 + x + 1 gives

  quotient  q(x) = x^3 + x^2 + x
  remainder r(x) = x

  so that x^6 + x^5 = (x^3 + x^2 + x)(x^3 + x + 1) + x

• Note: degree of r(x) is less than degree of divisor
Polynomial Coding
• Code has binary generating polynomial of degree n–k
  g(x) = x^(n-k) + g_(n-k-1) x^(n-k-1) + … + g2 x^2 + g1 x + 1
• k information bits define polynomial of degree k – 1
  i(x) = i_(k-1) x^(k-1) + i_(k-2) x^(k-2) + … + i2 x^2 + i1 x + i0
• Find remainder polynomial r(x) of at most degree n – k – 1 by
  dividing x^(n-k) i(x) by g(x):
  x^(n-k) i(x) = q(x) g(x) + r(x)
• Define the codeword polynomial of degree n – 1:
  b(x) = x^(n-k) i(x) + r(x)
  (n bits: k information bits followed by n – k check bits)
Polynomial example: k = 4, n–k = 3
Generator polynomial: g(x) = x^3 + x + 1
Information: (1,1,0,0)   i(x) = x^3 + x^2
Encoding: x^3 i(x) = x^6 + x^5

Dividing x^6 + x^5 (1100000) by x^3 + x + 1 (1011):

         1110     quotient
       ---------
  1011 ) 1100000
         1011
         ----
          1110
          1011
          ----
           1010
           1011
           ----
            0010  remainder r(x) = x

Transmitted codeword:
b(x) = x^6 + x^5 + x
b = (1,1,0,0,0,1,0)
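The long division above is just a repeated shift-and-XOR, which is short to code. A sketch that reproduces this example, with bits as Python lists, most significant first:

```python
def crc_remainder(data_bits, gen_bits):
    """Mod-2 long division; returns the last n-k bits as the remainder."""
    data = data_bits[:]                  # work on a copy
    n_k = len(gen_bits) - 1              # degree of g(x) = # check bits
    for i in range(len(data) - n_k):
        if data[i]:                      # leading bit is 1: subtract g(x)
            for j, g in enumerate(gen_bits):
                data[i + j] ^= g
    return data[-n_k:]

info = [1, 1, 0, 0]                      # i(x) = x^3 + x^2
r = crc_remainder(info + [0, 0, 0], [1, 0, 1, 1])   # g(x) = x^3 + x + 1
print(r)                 # [0, 1, 0] -> r(x) = x
print(info + r)          # codeword (1, 1, 0, 0, 0, 1, 0)
```

Running the same function on the codeword itself returns an all-zero remainder, which is exactly the receiver's check described below.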
Exercise 1
Generator polynomial: g(x) = x^3 + x^2 + 1
Information: (1,0,1,0,1,1,0)   i(x) = x^6 + x^4 + x^2 + x
Q1: Find the remainder (also called Frame Check Sequence, FCS)
and transmitted codeword.

Encoding:
x^3 i(x) = x^3 (x^6 + x^4 + x^2 + x) = x^9 + x^7 + x^5 + x^4

Solution
Dividing 1010110000 by 1101 (mod-2 long division):

         1101101     quotient
       ------------
  1101 ) 1010110000
         1101
         ----
          1111
          1101
          ----
           0101
           0000
           ----
            1010
            1101
            ----
             1110
             1101
             ----
              0110
              0000
              ----
               1100
               1101
               ----
                001   remainder r(x) = 1

Transmitted codeword:
b(x) = x^9 + x^7 + x^5 + x^4 + 1
b = (1,0,1,0,1,1,0,0,0,1)
The Pattern in Polynomial Coding
• All codewords satisfy the following pattern:
  b(x) = x^(n-k) i(x) + r(x) = q(x)g(x) + r(x) + r(x) = q(x)g(x)
  (since r(x) + r(x) = 0 in mod-2 arithmetic)
• All codewords are a multiple of g(x)
• Receiver should divide the received n-tuple by g(x) and check
  if the remainder is zero
• If the remainder is nonzero, then the received n-tuple is not
  a codeword
Exercise 1 cont’d
Q2: How does the receiver check whether the message was
transmitted without any errors? Show your work.

Answer: The received message b is divided by g(x); if the
remainder is zero then b is error-free, otherwise it contains
errors.
Shift-Register Implementation
1. Accept information bits ik-1,ik-2,…,i2,i1,i0
2. Append n – k zeros to information bits
3. Feed sequence to shift-register circuit that
performs polynomial division
4. After n shifts, the shift register contains the
remainder
Division Circuit
Encoder for g(x) = x^3 + x + 1 (g0 = 1, g1 = 1, g3 = 1)

[Diagram: 3-stage shift register (Reg 0, Reg 1, Reg 2) with
feedback XORs at the g0 and g1 taps; the input sequence
0,0,0,i0,i1,i2,i3 enters the circuit]

Clock   Input     Reg 0   Reg 1   Reg 2
0       -         0       0       0
1       1 = i3    1       0       0
2       1 = i2    1       1       0
3       0 = i1    0       1       1
4       0 = i0    1       1       1
5       0         1       0       1
6       0         1       0       0
7       0         0       1       0

Check bits: r0 = 0, r1 = 1, r2 = 0, so r(x) = x
Undetectable error patterns

[Diagram: transmitter sends b(x); the channel adds the error
polynomial e(x); the receiver gets R(x) = b(x) + e(x)]

• e(x) has 1s in error locations & 0s elsewhere
• Receiver divides the received polynomial R(x) by g(x)
• Blindspot: if e(x) is a multiple of g(x), that is, e(x) is a
  nonzero codeword, then
  R(x) = b(x) + e(x) = q(x)g(x) + q’(x)g(x)
  and the remainder is zero even though errors occurred
• The set of undetectable error polynomials is the set of
  nonzero code polynomials
• Choose the generator polynomial so that selected error patterns
  can be detected
Designing good polynomial codes
• Select generator polynomial so that likely error patterns are
  not multiples of g(x)
• Detecting single errors
  • e(x) = x^i for error in location i + 1
  • If g(x) has more than 1 term, it cannot divide x^i
• Detecting double errors
  • e(x) = x^i + x^j = x^i (x^(j-i) + 1) where j > i
  • If g(x) has more than 1 term, it cannot divide x^i
  • If g(x) is a primitive polynomial, it cannot divide x^m + 1
    for all m < 2^(n-k) – 1 (need to keep codeword length less
    than 2^(n-k) – 1)
  • Primitive polynomials can be found by consulting coding
    theory books
Designing good polynomial codes
• Detecting odd numbers of errors
  • Suppose all codeword polynomials have an even # of 1s; then
    all odd numbers of errors can be detected
  • As well, b(x) evaluated at x = 1 is zero because b(x) has an
    even number of 1s
  • This implies x + 1 must be a factor of all b(x)
  • Pick g(x) = (x + 1) p(x) where p(x) is primitive
• Visit http://mathworld.wolfram.com/PrimitivePolynomial.html for
  more info on primitive polynomials
Standard Generator Polynomials
CRC = cyclic redundancy check

• CRC-8 (ATM):
  x^8 + x^2 + x + 1
• CRC-16 (Bisync):
  x^16 + x^15 + x^2 + 1 = (x + 1)(x^15 + x + 1)
• CCITT-16 (HDLC, XMODEM, V.41):
  x^16 + x^12 + x^5 + 1
• CCITT-32 (IEEE 802, DoD, V.42):
  x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Hamming Codes
• Class of error-correcting codes
• Can detect single and double-bit errors
• Can correct single-bit errors
• For each m > 2, there is a Hamming code of length n = 2^m – 1
  with n – k = m parity check bits

  m    n = 2^m – 1    k = n – m    Redundancy m/n
  3    7              4            3/7
  4    15             11           4/15
  5    31             26           5/31
  6    63             57           6/63
m = 3 Hamming Code
• Information bits are b1, b2, b3, b4
• Equations for parity checks b5, b6, b7:

  b5 = b1      + b3 + b4
  b6 = b1 + b2      + b4
  b7 =      b2 + b3 + b4

• There are 2^4 = 16 codewords
• (0,0,0,0,0,0,0) is a codeword
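The three parity equations translate directly into an encoder sketch:

```python
def hamming_encode(b1, b2, b3, b4):
    """(7,4) Hamming encoder using the parity equations above (mod-2 sums)."""
    b5 = (b1 + b3 + b4) % 2
    b6 = (b1 + b2 + b4) % 2
    b7 = (b2 + b3 + b4) % 2
    return (b1, b2, b3, b4, b5, b6, b7)

print(hamming_encode(0, 0, 0, 0))   # (0, 0, 0, 0, 0, 0, 0) is a codeword
print(hamming_encode(1, 0, 1, 1))   # (1, 0, 1, 1, 1, 0, 0), cf. table below
```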
Hamming (7,4) code
• Hamming code really refers to a specific (7,4) code Hamming
  introduced in 1950
• Hamming code adds 3 additional check bits to every 4 data bits
  of the message, for a total of 7
• Hamming’s (7,4) code can correct any single-bit error and
  detect all two-bit errors
• Since the medium would have to be uselessly noisy for 2 out of
  7 bits (about 30%) to be lost, Hamming’s (7,4) is effectively
  lossless
Hamming (7,4) code
Information Codeword Weight
b1 b2 b3 b4 b1 b2 b3 b4 b5 b6 b7 w(b)

0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 1 1 1 1 4
0 0 1 0 0 0 1 0 1 0 1 3
0 0 1 1 0 0 1 1 0 1 0 3
0 1 0 0 0 1 0 0 0 1 1 3
0 1 0 1 0 1 0 1 1 0 0 3
0 1 1 0 0 1 1 0 1 1 0 4
0 1 1 1 0 1 1 1 0 0 1 4
1 0 0 0 1 0 0 0 1 1 0 3
1 0 0 1 1 0 0 1 0 0 1 3
1 0 1 0 1 0 1 0 0 1 1 4
1 0 1 1 1 0 1 1 1 0 0 4
1 1 0 0 1 1 0 0 1 0 1 4
1 1 0 1 1 1 0 1 0 1 0 4
1 1 1 0 1 1 1 0 0 0 0 3
1 1 1 1 1 1 1 1 1 1 1 7
Parity Check Equations
• Rearrange parity check equations:

  0 = b5 + b5 = b1      + b3 + b4 + b5
  0 = b6 + b6 = b1 + b2      + b4 + b6
  0 = b7 + b7 =      b2 + b3 + b4 + b7

• In matrix form, all codewords must satisfy H b^t = 0, where

      | 1 0 1 1 1 0 0 |
  H = | 1 1 0 1 0 1 0 |
      | 0 1 1 1 0 0 1 |

• Note: each nonzero 3-tuple appears once as a column in the
  check matrix H
Error Detection with Hamming Code
• Single error, e.g., e = (0,0,1,0,0,0,0):
  s = H e^t = (1,0,1)^t, the 3rd column of H
  → s ≠ 0: single error detected
• Double error, e.g., errors in bits 2 and 5:
  s = (0,1,1)^t + (1,0,0)^t = (1,1,1)^t
  → s ≠ 0: double error detected (the sum of any two distinct
  columns of H is nonzero)
• Triple error, e.g., errors in bits 3, 4, and 6:
  s = (1,0,1)^t + (1,1,1)^t + (0,1,0)^t = (0,0,0)^t
  → s = 0: triple error not detected
Hamming Distance (weight)
• The Hamming distance is the # of positions in two strings of
  equal length for which the corresponding elements are different
  (i.e., the # of substitutions required to change one into the
  other)
• For example:
  • Hamming distance between 1011101 and 1001001 is 2
  • Hamming distance between 2143896 and 2233796 is 3
  • Hamming distance between "toned" and "roses" is 3
• The Hamming weight of a string is its Hamming distance from the
  zero string of the same length
  • It is the number of elements in the string which are not zero
  • For a binary string this is just the number of 1s, so for
    instance the Hamming weight of 11101 is 4
General Hamming Codes
• For m > 2, the Hamming code is obtained through the check
  matrix H:
  • Each nonzero m-tuple appears once as a column of H
  • The resulting code corrects all single errors
• For each value of m, there is a polynomial code with g(x) of
  degree m that is equivalent to a Hamming code and corrects all
  single errors
  • For m = 3, g(x) = x^3 + x + 1
Error-correction using Hamming Codes

[Diagram: transmitter sends codeword b; the channel adds error
pattern e; the receiver gets R = b + e]

• The receiver first calculates the syndrome s:
  s = H R^t = H (b + e)^t = H b^t + H e^t = H e^t
• If s = 0, then the receiver accepts R as the transmitted
  codeword
• If s is nonzero, then an error is detected
  • Hamming decoder assumes a single error has occurred
  • Each single-bit error pattern has a unique syndrome
  • The receiver matches the syndrome to a single-bit error
    pattern and corrects the appropriate bit
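A decoder sketch that implements exactly this procedure: compute the syndrome, and if it is nonzero, flip the bit whose column of H matches it (this assumes a single error, as the slide says; multi-bit errors may be miscorrected):

```python
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def decode(R):
    """Syndrome decoding for the (7,4) code above."""
    s = tuple(sum(h * r for h, r in zip(row, R)) % 2 for row in H)
    if any(s):
        # find the column of H equal to the syndrome and flip that bit
        cols = [tuple(row[j] for row in H) for j in range(7)]
        R = list(R)
        R[cols.index(s)] ^= 1
    return tuple(R)

sent = (1, 0, 1, 1, 1, 0, 0)
recv = (1, 0, 0, 1, 1, 0, 0)      # single error in bit 3
print(decode(recv) == sent)       # True: the error is corrected
```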
Performance of Hamming Error-Correcting Code
• Assume bit errors occur independently of each other and with
  probability p
• The syndrome s = H R^t = H e^t splits the received words into
  four cases (to leading order in p):

  No errors in transmission (s = 0):   (1 – p)^7
  Undetectable errors (s = 0):         7p^3
  Correctable errors (s ≠ 0):          7p(1 – 3p)
  Uncorrectable errors (s ≠ 0):        21p^2
History of Hamming Code
• Read http://en.wikipedia.org/wiki/Hamming_code