222 - EC8501, EC6501 Digital Communication - Notes 1
EC6501
DIGITAL COMMUNICATION
OBJECTIVES:
• To know the principles of sampling & quantization
• To study the various waveform coding schemes
• To learn the various baseband transmission schemes
• To understand the various bandpass signaling schemes
• To know the fundamentals of channel coding
SYLLABUS
TOTAL: 45 PERIODS
OUTCOMES
Upon completion of the course, students will be able to
• Design PCM systems
• Design and implement baseband transmission schemes
• Design and implement bandpass signaling schemes
• Analyze the spectral characteristics of bandpass signaling schemes and their noise performance
• Design error control coding schemes
EC6501
DIGITAL COMMUNICATION
UNIT - 1
INTRODUCTION
UNIT I
SAMPLING & QUANTIZATION (9)
Low pass sampling
Aliasing
Signal Reconstruction
Quantization
Uniform & non-uniform quantization
Quantization Noise
Logarithmic Companding of speech signal
PCM
TDM
Fig.: Block diagram of a digital communication system.
Transmitter: analog/digital input signal → low-pass filter → sampler → quantizer → source encoder → channel encoder → multiplexer → line encoder → pulse-shaping filters → carrier modulator → to channel.
Receiver: from channel → receiver filter → demodulator/detector (with carrier reference) → de-multiplexer → channel decoder → digital-to-analog converter → signal at the user end.
Key Questions
Nyquist Theorem
For lossless digitization, the sampling rate should be at least twice the maximum frequency of the signal to be sampled.
In mathematical terms:
  fs ≥ 2·fm
Limited Sampling
• But what if one cannot sample fast enough?
• Band-limit the signal first: an anti-aliasing LP filter reduces the maximum signal frequency to half of the sampling frequency (the Nyquist rate); otherwise higher frequencies fold back into the band (the aliasing effect).
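To make the aliasing effect concrete, here is a minimal numerical sketch (the tone and sampling frequencies below are illustrative, not from the notes):

```python
import numpy as np

fs_bad = 4_000.0   # sampling rate that violates fs >= 2*fm for a 3 kHz tone

def sampled_tone(f_signal, f_sample, n_samples=8):
    """Sample cos(2*pi*f_signal*t) at rate f_sample."""
    n = np.arange(n_samples)
    return np.cos(2 * np.pi * f_signal * n / f_sample)

# A 3 kHz tone sampled at 4 kHz produces exactly the same samples as a
# 1 kHz tone: the alias frequency is |f_signal - f_sample| = 1 kHz.
alias = np.allclose(sampled_tone(3_000, fs_bad), sampled_tone(1_000, fs_bad))
print("3 kHz aliases onto 1 kHz at fs = 4 kHz:", alias)   # True
```

This fold-back is precisely what the anti-aliasing LP filter prevents.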
Linear Quantization
• Applicable when the signal is in a finite range (fmin, fmax)
• The entire data range is divided into L equal intervals of length Q (known as the quantization interval or quantization step-size)
• Q = (fmax − fmin)/L
• Interval i is mapped to the middle value of this interval
• We store/send only the index of the quantized value
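A minimal sketch of the linear quantizer just described (the function name and test values are illustrative):

```python
import numpy as np

def uniform_quantize(x, f_min, f_max, L):
    """Map each sample to the index of its interval and to that
    interval's midpoint, using L equal intervals of width Q."""
    Q = (f_max - f_min) / L                      # quantization step-size
    idx = np.clip(((x - f_min) // Q).astype(int), 0, L - 1)
    midpoints = f_min + (idx + 0.5) * Q          # middle value of interval i
    return idx, midpoints

x = np.array([-0.9, -0.2, 0.05, 0.7])
idx, xq = uniform_quantize(x, f_min=-1.0, f_max=1.0, L=8)
print(idx)   # indices that would be stored/sent
print(xq)    # reconstructed (quantized) values
```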
Quantization Noise
Fig.: staircase input-output characteristic of a uniform quantizer (output sample X_Q versus input sample X over the range −8 … 8); the difference between the staircase and the identity line is the quantization error.
Non-Linear Quantization
• The quantizing intervals are not of equal size
• Small quantizing intervals are allocated to small
signal values (samples) and large quantization
intervals to large samples so that the signal-to-
quantization distortion ratio is nearly independent of
the signal level
• S/N ratios for weak signals are much better but are
slightly less for the stronger signals
• “Companding” is used to quantize signals
Function representation
Companding
• Formed from the words compressing and
expanding.
• A PCM compression technique where analogue
signal values are rounded on a non-linear scale.
• The data is compressed before being sent and then expanded at the receiving end using the same non-linear scale.
• Companding reduces the noise and crosstalk
levels at the receiver.
• μ-law:
  |v| = log(1 + μ|m|) / log(1 + μ)   (5.23)
  d|m|/d|v| = [log(1 + μ)/μ] · (1 + μ|m|)   (5.24)
• μ-law is neither strictly linear nor strictly logarithmic
• A-law:
  |v| = A|m| / (1 + log A),              0 ≤ |m| ≤ 1/A
      = (1 + log(A|m|)) / (1 + log A),   1/A ≤ |m| ≤ 1    (5.25)
  d|m|/d|v| = (1 + log A)/A,       0 ≤ |m| ≤ 1/A
            = (1 + log A)·|m|,     1/A ≤ |m| ≤ 1          (5.26)   (Fig. 5.11)
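A small sketch of Eq. (5.23) and its inverse, assuming the input is normalized to [−1, 1] and μ = 255 (the common telephony value); since (5.23) is a ratio of logarithms, the log base cancels:

```python
import numpy as np

MU = 255.0  # standard North American mu-law parameter

def mu_compress(m):
    """Eq. (5.23): |v| = log(1 + mu*|m|) / log(1 + mu), sign preserved.
    Input m is assumed normalized to [-1, 1]."""
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def mu_expand(v):
    """Inverse of mu_compress (the expander)."""
    return np.sign(v) * np.expm1(np.abs(v) * np.log1p(MU)) / MU

m = np.array([-0.5, -0.01, 0.001, 0.2, 1.0])
v = mu_compress(m)
print(np.allclose(mu_expand(v), m))   # True: expander undoes compressor
```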
Fig. 11: companding of a speech/song signal. Top panels: the original signal x[n] and the companded signal y[n] = C(x[n]) (samples 0 … 10000). Bottom panels: a close view of a segment of x[n] and the corresponding segment of y[n] (samples 2200 … 3000).
Comparison – Uniform vs. Non-Uniform
• Why do we need such a classification? Speech signals do not require high quantization resolution at high amplitudes: most samples are weak (roughly 50% of usage) and only a few are strong (about 15%), so a uniform quantizer is wasteful.
• A good idea is to use a non-uniform quantizer: it can provide fine quantization levels for weak signals and coarse levels for strong signals.
• The goal is to increase the SQNR, which is proportional to the number of levels: use more levels for the low amplitudes and fewer levels for the high ones.
• In practice: compressor function → uniform quantization → expander function (the inverse compressor, C⁻¹).
3. Encoding
1. To translate the discrete set of sample values to a more appropriate form of signal (Fig. 11)
2. A binary code
• The maximum advantage over the effects of noise in a transmission medium is obtained by using a binary code, because a binary symbol withstands a relatively high level of noise.
• The binary code is easy to generate and regenerate (Table 2).
2. Reconstruction
1. Recover the message signal: pass the expander output through a low-pass reconstruction filter.
Categories of multiplexing
• Frequency
• Time
Time division – Advantages:
– Only one carrier in the medium at any given time
– High throughput even for many users
– Common TX component design, only one power amplifier
– Flexible allocation of resources (multiple time slots)
Time Division Multiplexing
• Disadvantages
– Synchronization
– Requires each terminal to support a much higher data rate than the user information rate, so there are possible problems with intersymbol interference.
• Application: GSM
GSM handsets transmit data at a rate of 270 kbit/s in
a 200 kHz channel using GMSK modulation.
Each frequency channel is assigned 8 users, each
having a basic data rate of around 13 kbit/s
Time Division Multiplexing
At the Transmitter
Simultaneous transmission of several signals on a time-sharing basis.
Each signal occupies its own distinct time slot, using all frequencies, for
the duration of the transmission.
Slots may be assigned permanently or on demand.
At the Receiver
The decommutator (sampler) has to be synchronized with the incoming waveform: frame synchronization.
Low-pass filter.
ISI: poor channel filtering.
Feedthrough of one channel's signal into another channel: crosstalk.
TDM-PAM: Transmitter
TDM-PAM : Receiver
Figs.: samples of signal 1, g1(t), and samples of signal 2, g2(t), taken at t = 0, Ts, 2Ts, …; the commutator interleaves channels 1, 2, 3, 4 into successive time slots on the common time axis.
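A toy sketch of the commutator/decommutator idea for two channels (the sample values are made up):

```python
import numpy as np

# Hypothetical PAM samples of two signals, taken at t = 0, Ts, 2Ts, ...
g1 = np.array([0.1, 0.4, 0.9])   # samples of signal 1
g2 = np.array([0.7, 0.2, 0.5])   # samples of signal 2

# Commutator: interleave the samples into alternating time slots
tdm = np.empty(g1.size + g2.size)
tdm[0::2], tdm[1::2] = g1, g2    # slot 1 <- g1, slot 2 <- g2

# Decommutator (must be frame-synchronized): recover each channel
r1, r2 = tdm[0::2], tdm[1::2]
print(tdm)                                                 # interleaved frame
print(np.array_equal(r1, g1) and np.array_equal(r2, g2))   # True
```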
Problem
Two low-pass signals of equal bandwidth
are sampled and time division
multiplexed using PAM. The TDM signal is
passed through a Low-pass filter & then
transmitted over a channel with a
bandwidth of 10 kHz.
Continued….
Problem (continued…)
Problem: Solution
End of Unit-1
Unit – II
Waveform Coding
Syllabus
Prediction Filtering
• Linear prediction is a mathematical operation
where future values of a discrete-time signal
are estimated as a linear function of previous
samples.
Principle of DPCM
Delta Modulation
• Delta modulation (DM or Δ-modulation) is an analog-to-digital and digital-to-analog signal conversion technique used for transmission of voice information where quality is not of primary importance.
Features
• the analog signal is approximated with a series of
segments
• each segment of the approximated signal is compared to
the original analog wave to determine the increase or
decrease in relative amplitude
• the decision process for establishing the state of
successive bits is determined by this comparison
• only the change of information is sent, that is, only an
increase or decrease of the signal amplitude from the
previous sample is sent whereas a no-change condition
causes the modulated signal to remain at the same 0 or 1
state of the previous sample.
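A minimal delta-modulator sketch following the features above (the step size and test signal are arbitrary):

```python
import numpy as np

def delta_modulate(x, step):
    """1-bit DM: send 1 or 0 depending on whether the input sample is
    above or below the running staircase approximation."""
    approx = 0.0
    bits, staircase = [], []
    for sample in x:
        bit = 1 if sample > approx else 0        # comparison decision
        approx += step if bit else -step         # staircase moves up or down
        bits.append(bit)
        staircase.append(approx)
    return np.array(bits), np.array(staircase)

t = np.linspace(0, 1, 50)
x = np.sin(2 * np.pi * 2 * t)                    # analog-ish test signal
bits, approx = delta_modulate(x, step=0.15)
print(bits[:10])    # only the up/down changes are transmitted
```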
VCA
• 4. i = 1, k1 = r(1)
• 5. For i − p ≤ j ≤ p:
  q_i(j) = q_{i−1}(j) + k_i · q_{i−1}(i − j)
  k_i = q_{i−1}(i)/q_i(0)
  a_j(i) = q_{i−1}(i − j)
  E(i) = E(i−1)·(1 − k_i²)
• 6. If i < p, go back to step 5
• 7. Stop
• If we only calculate k_i, then only the first two expressions in step 5 are enough. This is suitable for fixed-point calculation (r ≤ 1) or hardware implementation.
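For reference, a common Levinson-Durbin formulation in Python (the recursion above uses q/a/E notation; this sketch uses the usual reflection-coefficient form, and the autocorrelation values below are illustrative):

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the Toeplitz normal equations for a p-th order linear
    predictor from autocorrelations r[0..p]. Returns the polynomial
    coefficients a, reflection coefficients k, and final error E."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    E = r[0]
    k = np.zeros(p)
    for i in range(1, p + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])   # r[i] + sum a[j]*r[i-j]
        ki = -acc / E
        k[i - 1] = ki
        a[1:i] = a[1:i] + ki * a[i - 1:0:-1]         # order-update step
        a[i] = ki
        E *= (1.0 - ki * ki)                          # E(i) = E(i-1)(1 - ki^2)
    return a, k, E

r = np.array([1.0, 0.9, 0.75, 0.6])   # hypothetical autocorrelation values
a, k, E = levinson_durbin(r, p=3)
print(a, k, E)
```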
Thank you
Unit 3
Baseband Transmission
Syllabus
Properties of Line codes- Power Spectral Density of Unipolar /
Polar RZ & NRZ – Bipolar NRZ - Manchester- ISI – Nyquist
criterion for distortionless transmission – Pulse shaping –
Correlative coding - M-ary schemes – Eye pattern - Equalization
Baseband Transmission
• The digital signal used in baseband
transmission occupies the entire bandwidth
of the network media to transmit a single data
signal.
• Baseband communication is bidirectional,
allowing computers to both send and receive
data using a single cable.
Baseband Modulation
• An information-bearing signal must conform to the limits of its channel
• Generally modulation is a two-step process
  – baseband: shaping the spectrum of input bits to fit in a limited spectrum
  – passband: modulating the baseband signal to the system RF carrier
• The most common baseband modulation is Pulse Amplitude Modulation (PAM)
  – data amplitude-modulates a sequence of time translates of a basic pulse
  – PAM is a linear form of modulation: easy to equalize, BW is the pulse BW
  – typically baseband data will modulate in-phase [cos] and quadrature [sine] data streams to the carrier passband
• Special cases of modulated PAM include
  – phase shift keying (PSK)
  – quadrature amplitude modulation (QAM)
Line Codes
• In telecommunication, a line code (also called digital baseband
modulation or digital baseband transmission method) is a code
chosen for use within a communications system for baseband
transmission purposes.
Unipolar coding
• Unipolar encoding is a line code. A positive voltage represents a
binary 1, and zero volts indicates a binary 0. It is the simplest line
code, directly encoding the bitstream, and is analogous to on-off
keying in modulation.
• This is ideal if one symbol is sent much more often than the other
and power considerations are necessary, and also makes the signal
self-clocking.
• It is called NRZ because the signal does not return to zero at the
middle of the bit.
• Compared with its polar counterpart, unipolar NRZ is costly in power terms: the normalized power (power required to send 1 bit per unit line resistance) is double that for polar NRZ.
Return-to-zero
• Return-to-zero (RZ) describes a line code used in
telecommunications signals in which the signal
drops (returns) to zero between each pulse.
• This takes place even if a number of consecutive
0s or 1s occur in the signal.
• The signal is self-clocking. This means that a
separate clock does not need to be sent
alongside the signal, but suffers from using twice
the bandwidth to achieve the same data-rate as
compared to non-return-to-zero format.
Polar RZ
BiPolar Signalling
Manchester Encoding
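A compact sketch generating waveforms for several of the line codes above (amplitudes normalized to ±1; note that the Manchester polarity convention varies between references):

```python
import numpy as np

def line_code(bits, scheme, spb=8):
    """Return the sampled waveform for a given line code.
    spb = samples per bit."""
    out = []
    for b in bits:
        if scheme == "unipolar_nrz":
            out += [1.0 if b else 0.0] * spb
        elif scheme == "polar_nrz":
            out += [1.0 if b else -1.0] * spb
        elif scheme == "polar_rz":                 # returns to zero mid-bit
            half = [1.0 if b else -1.0] * (spb // 2)
            out += half + [0.0] * (spb - spb // 2)
        elif scheme == "manchester":               # transition at every mid-bit
            a = 1.0 if b else -1.0
            out += [a] * (spb // 2) + [-a] * (spb - spb // 2)
    return np.array(out)

bits = [1, 0, 1, 1, 0]
for s in ["unipolar_nrz", "polar_nrz", "polar_rz", "manchester"]:
    print(s, line_code(bits, s, spb=4))
```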
Disadvantages:
(a) an ideal LPF is not physically realizable.
(b) Note that
  P_R(f) = Re[ Σ_n (−1)^n P(f − n/T) ] = T cos(πfT/2)
  P_I(f) = Im[ Σ_n (−1)^n P(f − n/T) ] = 0
with
  P(ω) = T·(ωT/2)/sin(ωT/2),  |ω| ≤ π/T
       = 0,                   |ω| > π/T
so that
  p(t) = (1/2π) ∫ from −π/T to π/T of T·(ωT/2)/sin(ωT/2) · e^{jωt} dω
and the pulse satisfies
  A_n = ∫ from (2n−1)T/2 to (2n+1)T/2 of p(t) dt = 1 for n = 0, and 0 for n ≠ 0.
Eye Diagram
• Eye diagram is a means of evaluating the quality of a received
“digital waveform”
– By quality is meant the ability to correctly recover symbols and timing
– The received signal could be examined at the input to a digital receiver
or at some stage within the receiver before the decision stage
• Eye diagrams reveal the impact of ISI and noise
• Two major issues are 1) sample value variation, and 2) jitter
and sensitivity of sampling instant
• Eye diagram reveals issues of both
• Eye diagram can also give an estimate of achievable BER
• Check eye diagrams at the end of class for participation
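A sketch of how an eye diagram is formed: generate a 2-PAM signal with raised-cosine pulses, then slice it into two-symbol-wide traces for overlaying (the roll-off, noise level, and rates below are illustrative):

```python
import numpy as np

sps = 16                                  # samples per symbol
t = np.arange(-4, 4, 1 / sps)             # pulse support, in symbol periods
beta = 0.5                                # raised-cosine roll-off (assumed)
rc = np.sinc(t) * np.cos(np.pi * beta * t) / (1 - (2 * beta * t) ** 2 + 1e-12)

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=200)
upsampled = np.zeros(symbols.size * sps)
upsampled[::sps] = symbols                         # impulse train of symbols
signal = np.convolve(upsampled, rc, mode="same")   # pulse shaping
signal += 0.05 * rng.standard_normal(signal.size)  # additive channel noise

traces = signal[: (signal.size // (2 * sps)) * 2 * sps].reshape(-1, 2 * sps)
# e.g. with matplotlib: plt.plot(traces.T, color="b", alpha=0.2)
print(traces.shape)   # each row is one 2-symbol-long trace of the eye
```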
Figs.: eye diagrams for pulses satisfying the 1st Nyquist criterion and the 2nd Nyquist criterion.
Thank you
UNIT IV
Geometric Representation of
Signals
• Objective: To represent any set of M energy
signals {si(t)} as linear combinations of N
orthogonal basis functions, where N ≤ M
• Real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T
• Orthogonal basis functions:
  s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),  0 ≤ t ≤ T,  i = 1, 2, …, M   (5.5)
  where the s_ij are coefficients and each s_i(t) is an energy signal
• Coefficients:
  s_ij = ∫ from 0 to T of s_i(t) φ_j(t) dt,  i = 1, 2, …, M;  j = 1, 2, …, N   (5.6)
• Real-valued basis functions (orthonormal):
  ∫ from 0 to T of φ_i(t) φ_j(t) dt = δ_ij = 1 if i = j, 0 if i ≠ j   (5.7)
(a) Synthesizer for generating the signal si(t). (b) Analyzer for
generating the set of signal vectors si.
So,
• Each signal in the set s_i(t) is completely determined by the vector of its coefficients:
  s_i = [s_i1, s_i2, …, s_iN]^T,  i = 1, 2, …, M   (5.8)
Finally,
• The signal vector si concept can be extended to 2D, 3D
etc. N-dimensional Euclidian space
• Provides mathematical basis for the geometric
representation of energy signals that is used in noise
analysis
• Allows definition of
– Length of vectors (absolute value)
– Angles between vectors
– Squared value (inner product of si with itself)
  ‖s_i‖² = s_i^T s_i = Σ_{j=1}^{N} s_ij²   (^T: matrix transposition)
Also,
What is the relation between the vector representation of a signal and its energy value?
• By definition of the energy of a signal:
  E_i = ∫ from 0 to T of s_i²(t) dt   (5.10)
• After substituting s_i(t) = Σ_j s_ij φ_j(t):
  E_i = ∫ from 0 to T of [ Σ_{j=1}^{N} s_ij φ_j(t) ] · [ Σ_{k=1}^{N} s_ik φ_k(t) ] dt
• φ_j(t) is orthonormal, so finally we have:
  E_i = Σ_{j=1}^{N} s_ij² = ‖s_i‖²   (5.12)
The energy of a signal is equal to the squared length of its vector.
Euclidean Distance
• The Euclidean distance between two points represented by vectors (signal vectors) is ‖s_i − s_k‖, and the squared value is given by:
  ‖s_i − s_k‖² = Σ_{j=1}^{N} (s_ij − s_kj)²   (5.14)
              = ∫ from 0 to T of (s_i(t) − s_k(t))² dt
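A numerical check of Eqs. (5.6), (5.12), and (5.14) on a hypothetical two-dimensional example (names and coefficients are illustrative):

```python
import numpy as np

T, N = 1.0, 1000
t, dt = np.linspace(0, T, N, endpoint=False), T / N

phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)   # orthonormal basis pair
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

s1 = 3 * phi1 + 1 * phi2          # signal vectors (3, 1) and (1, -2)
s2 = 1 * phi1 - 2 * phi2

def coeffs(s):                    # (5.6): s_ij = integral of s(t)*phi_j(t)
    return np.array([np.sum(s * phi1) * dt, np.sum(s * phi2) * dt])

v1, v2 = coeffs(s1), coeffs(s2)
E1 = np.sum(s1**2) * dt           # (5.10): signal energy
print(np.allclose(E1, v1 @ v1))   # (5.12): energy = squared vector length
d2 = np.sum((s1 - s2)**2) * dt    # (5.14): squared Euclidean distance
print(np.allclose(d2, np.sum((v1 - v2)**2)))
```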
Fig.: BFSK modulator (switch between cos ω1·t and cos ω2·t)

  s_BFSK(t) = √(2E_b/T_b) · cos(2πf_c·t + θ(t))
            = √(2E_b/T_b) · cos(2πf_c·t + 2πk_FSK ∫ from −∞ to t of m(λ) dλ)
  where θ(t) = 2πk_FSK ∫ from −∞ to t of m(λ) dλ
FSK Example
Fig.: data sequence 1 1 0 1 and the corresponding FSK signal; a VCO keyed by the data bits (a0 ↔ 0, a1 ↔ 1) and mixed with cos ω_c·t produces the modulated composite signal.
B_T = 2(Δf + R_b)
Are the two tones orthogonal over a bit interval, i.e. is
  ∫ from 0 to T_b of v_H(t)·v_L(t) dt = 0  while  ∫ from 0 to T_b of v_H(t)·v_H(t) dt ≠ 0 ?
An FSK signal for 0 ≤ t ≤ T_b:
  v_H(t) = √(2E_b/T_b) cos(2π(f_c + Δf)t)  and  v_L(t) = √(2E_b/T_b) cos(2π(f_c − Δf)t)
then
  v_H(t)·v_L(t) = (2E_b/T_b) cos(2π(f_c + Δf)t) cos(2π(f_c − Δf)t)
               = (E_b/T_b) [cos(2π(2f_c)t) + cos(2π(2Δf)t)]
and
  ∫ from 0 to T_b of v_H(t)·v_L(t) dt = (E_b/T_b) ∫ from 0 to T_b of [cos(4πf_c·t) + cos(4πΔf·t)] dt
    = (E_b/T_b) [ sin(4πf_c·t)/(4πf_c) + sin(4πΔf·t)/(4πΔf) ] evaluated from 0 to T_b
    = (E_b/T_b) [ sin(4πf_c·T_b)/(4πf_c) + sin(4πΔf·T_b)/(4πΔf) ]
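A numerical check of the integral above: with the tone separation Δf chosen so that 4·Δf·T_b and 4·f_c·T_b are integers, both sine terms vanish and the tones are orthogonal (the specific f_c and T_b values are illustrative, and the amplitudes are normalized to 1):

```python
import numpy as np

Tb = 1e-3                    # bit duration (assumed)
fc = 10_000.0                # carrier frequency (assumed)
df = 1 / (2 * Tb)            # tone separation: 4*df*Tb = 2, an integer

t = np.linspace(0, Tb, 100_000, endpoint=False)
dt = Tb / t.size
vH = np.cos(2 * np.pi * (fc + df) * t)
vL = np.cos(2 * np.pi * (fc - df) * t)

print(np.sum(vH * vL) * dt)   # ~0: the tones are orthogonal
print(np.sum(vH * vH) * dt)   # ~Tb/2: nonzero self-product
```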
QPSK
• Quadrature Phase Shift Keying (QPSK)
can be interpreted as two independent
BPSK systems (one on the I-channel
and one on Q-channel), and thus the
same performance but twice the
bandwidth (spectrum) efficiency.
Figs.: QPSK constellations in the I-Q plane, with carrier phases {0, π/2, π, 3π/2} or {π/4, 3π/4, 5π/4, 7π/4}.
Types of QPSK
Figs.: three constellation diagrams in the I-Q plane.
QPSK
• The striking result is that the bit error probability of QPSK is identical to
BPSK, but twice as much data can be sent in the same bandwidth. Thus,
when compared to BPSK, QPSK provides twice the spectral efficiency with
exactly the same energy efficiency.
• Similar to BPSK, QPSK can also be differentially encoded to allow non-coherent detection.
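A sketch of the "two independent BPSK streams" view of QPSK (Gray mapping and the π/4-offset constellation are assumed):

```python
import numpy as np

def qpsk_map(bits):
    """Map bit pairs to unit-energy QPSK symbols: even-indexed bits
    drive the I (cos) channel and odd-indexed bits the Q (sin) channel,
    i.e. two independent BPSK streams."""
    b = np.asarray(bits).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]                 # bit 0 -> +1, bit 1 -> -1
    q = 1 - 2 * b[:, 1]
    return (i + 1j * q) / np.sqrt(2)    # phases pi/4, 3pi/4, 5pi/4, 7pi/4

print(qpsk_map([0, 0, 0, 1, 1, 1, 0, 1]))
```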
QAM Transmitter
QAM Receiver
Carrier Synchronization
• Synchronization is one of the most critical
functions of a communication system with
coherent receiver. To some extent, it is the
basis of a synchronous communication
system.
• Carrier synchronization
• Symbol/Bit synchronization
• Frame synchronization
Fig.: a π/2 phase shift applied to the recovered carrier cos(ω_c·t) yields −a·sin(ω_c·t) for the quadrature branch.
DPSK
• DPSK is a kind of phase shift keying which
avoids the need for a coherent reference
signal at the receiver.
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
Thank you
Unit - 5
ENTROPY
• The entropy of a source is the average information produced per individual message or symbol in a particular interval.
• Then the number of messages is given as
Cont…
• Thus the total amount of information due to L
Cont…
PROPERTIES OF ENTROPY
• The entropy of a discrete memoryless source
• Property 1
1. Entropy is zero if the event is sure or it is impossible
Cont..
RATE OF INFORMATION
SOURCE CODING
• An important problem in communication system is the
efficient representation of data generated by a source, which
can be achieved by source encoding (or) source coding
process
• The device which performs source encoding is called source
encoder
STATEMENT
• Shannon's first theorem is stated as: "Given a discrete memoryless source of entropy H, the average codeword length L for any distortionless source encoding is bounded as L ≥ H."
• According to the source coding theorem, the entropy H represents the fundamental limit on the average number of bits per source symbol necessary to represent a discrete memoryless source.
• L can be made as small as, but not smaller than, the entropy H. Thus, with Lmin = H, the coding efficiency η is represented as
  η = Lmin/L = H/L
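A small sketch computing H and the efficiency η = H/L for a hypothetical source:

```python
import numpy as np

def entropy(probs):
    """H = -sum p_i * log2(p_i), in bits per symbol."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                       # 0*log(0) is taken as 0
    return -np.sum(p * np.log2(p))

p = [0.5, 0.25, 0.125, 0.125]          # hypothetical 4-symbol source
H = entropy(p)                         # 1.75 bits/symbol
L = 2.0                                # e.g. a fixed-length 2-bit code
print(H, "bits/symbol; efficiency eta = H/L =", H / L)
```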
Cont…
• Approach
The code rate is given as
STATEMENT
Two parts
Signal power
Noise power
MUTUAL INFORMATION
CHANNEL CAPACITY
CHANNEL CAPACITY
• What is channel capacity?
• Redundancy
Error Control
Coding
Introduction
• Error Control Coding (ECC)
– Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or
correction at the receiver
– Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
– The positions of the error control coding and
decoding are shown in the transmission model
Transmission Model
Fig.: Digital source → source coding → error control coding → line coding → modulator (transmit filter, etc.) → X(ω) → channel H_c(ω) → + noise N(ω) → demodulator (receive filter, etc.) → Y(ω) → line decoding → error control decoding → source decoding → digital sink. (Transmitter chain above, receiver chain below.)
Block Codes
• We will consider only binary data
• Data is grouped into blocks of length k bits
(dataword)
• Each dataword is coded into blocks of length n
bits (codeword), where in general n>k
• This is known as an (n,k) block code
Block Codes
• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)
Fig.: dataword (k bits) → channel coder → codeword (n bits) → channel → codeword + possible errors → channel decoder → dataword (k bits) + error flags.
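A sketch of (7,4) encoding with a systematic generator matrix G = [I | P] over GF(2). This particular P is one valid Hamming-code choice; the exact G behind the slide's d = (1101) → c = (1101001) example is not shown, so the parity bits here may differ:

```python
import numpy as np

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])                      # one valid parity submatrix
G = np.hstack([np.eye(4, dtype=int), P])       # systematic G = [I | P]

def encode(d):
    """c = dG over GF(2): 4 data bits followed by 3 parity bits."""
    return (np.array(d) @ G) % 2

d = [1, 1, 0, 1]
print(encode(d))                               # 7-bit codeword
```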
Fig.: X is a valid codeword, O is an invalid codeword (minimum distance d_min).
• That is, the maximum number of correctable errors is given by
  t = ⌊(d_min − 1)/2⌋
• Thus,
  c = Σ_{i=1}^{k} d_i · a_i
  for example, c3 = c1 + c2, and c = 0 when all a_i = 0
Linear Block Codes – example 2
Systematic Codes
• I is k*k identity matrix. Ensures dataword
appears as beginning of codeword
• P is k*R matrix.
CYCLIC CODES
Definition
• An (n,k) linear code C is cyclic if every cyclic
shift of a codeword in C is also a codeword in
C.
If c0 c1 c2 …. cn-2 cn-1 is a codeword, then
cn-1 c0 c1 …. cn-3 cn-2
cn-2 cn-1 c0 …. cn-4 cn-3
: : : : :
c1 c2 c3 …. cn-1 c0 are all codewords.
C = {000000, 010101, 101010, 111111} is a cyclic code.
Example 3
• The (7,4) Hamming code discussed before is
cyclic:
Notice that c = Σ_{j=0}^{k−1} m_j g^(j), where m_j = 0 if j < 0 or j > k − 1.
Code Polynomial
• Let c = c0 c1 c2 …. cn-1. The code polynomial
of c: c(X) = c0 + c1X+ c2 X2 + …. + cn-1 Xn-1
where the power of X corresponds to the bit position,
and
the coefficients are 0’s and 1’s.
• Example:
  1010001 ↔ 1 + X^2 + X^6
  0101110 ↔ X + X^3 + X^4 + X^5
• Each codeword is represented by a polynomial of degree less than or equal to n − 1: deg[c(X)] ≤ n − 1.
Example:
  m(X) = m0 + m1·X + m2·X^2,  g(X) = g0 + g1·X
Addition:
  m(X) + g(X) = (m0 + g0) + (m1 + g1)X + (m2 + 0)X^2
Multiplication:
  m(X)·g(X) = m0·g0 + (m0·g1 + m1·g0)X + (m1·g1 + m2·g0)X^2 + m2·g1·X^3
• For the (7,4) code given in the Table, the nonzero code
polynomial of minimum degree is g(X) = 1 + X + X3
Generator Polynomial
• Since the code is cyclic: X·g(X), X^2·g(X), …, X^(n−r−1)·g(X) are code polynomials in C. (Note that deg[X^(n−r−1)·g(X)] = n − 1, where r = n − k.)
Constructing g(X)
• The generator polynomial g(X) of an (n,k) cyclic
code is a factor of Xn+1.
X^k·g(X) is a polynomial of degree n, so dividing by (X^n + 1) gives quotient 1 and a remainder r(X): X^k·g(X) = (X^n + 1) + r(X).
But r(X) = Rem[X^k·g(X)/(X^n + 1)] = g^(k)(X) = a code polynomial = a(X)·g(X).
Therefore, X^n + 1 = X^k·g(X) + a(X)·g(X) = {X^k + a(X)}·g(X). Q.E.D.
(1) To construct a cyclic code of length n, find the
factors of the polynomial Xn+1.
(2) The factor (or product of factors) of degree n-k
serves as the generator polynomial of an (n,k)
cyclic code. Clearly, a cyclic code of length n does
not exist for every k.
Figs.: shift-register circuits for multiplying and dividing polynomials by g(X), built from delay stages, GF(2) adders (+), and tap coefficients g0, g1, g2, …, g_{r−1}, g_r.
Encoder Circuit
Fig.: encoder circuit: an r-stage shift register with feedback taps g1, g2, …, g_{r−1}, GF(2) adders, and a gate.
Input: 1 1 0 1
Register contents: 000 (initial), 110 (1st shift), 101 (2nd shift), 100 (3rd shift), 100 (4th shift)
Codeword: 1 0 0 1 0 1 1
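The shift-register encoding above can be checked by polynomial division. Below is a sketch of systematic cyclic encoding for g(X) = 1 + X + X^3; with the input 1 1 0 1 read highest-order bit first (i.e. m(X) = 1 + X^2 + X^3), it reproduces the codeword 1 0 0 1 0 1 1 from the table:

```python
def polydiv_gf2(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit lists are
    lowest-degree-first, e.g. 1 + X + X^3 -> [1, 1, 0, 1]."""
    r = list(dividend)
    dg = len(divisor) - 1
    for i in range(len(r) - 1, dg - 1, -1):
        if r[i]:                               # cancel the leading term
            for j, gbit in enumerate(divisor):
                r[i - dg + j] ^= gbit
    return r[:dg]

def cyclic_encode(msg, g, n):
    """Systematic encoding: parity = Rem[X^(n-k) m(X) / g(X)],
    codeword = parity bits followed by the message bits."""
    k = len(msg)
    shifted = [0] * (n - k) + list(msg)        # X^(n-k) * m(X)
    return polydiv_gf2(shifted, g) + list(msg)

g = [1, 1, 0, 1]                               # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]                               # m(X) = 1 + X^2 + X^3
print(cyclic_encode(m, g, n=7))                # -> [1, 0, 0, 1, 0, 1, 1]
```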
Parity-Check Polynomial
• Xn +1 = g(X)h(X)
• deg[g(x)] = n-k, deg[h(x)] = k
• g(x)h(X) mod (Xn +1) = 0.
• h(X) is called the parity-check polynomial. It plays the role of the H matrix for linear codes.
• h(X) is the generator polynomial of an (n,n-k)
cyclic code, which is the dual of the (n,k) code
generated by g(X).
• STEPS:
(1) Syndrome computation
(2) Associating the syndrome to the error pattern
(3) Error correction
Syndrome Computation
Fig.: syndrome register (gate + GF(2) adders) with input r = 0010110

Shift | Input | Register contents
  0   |   –   | 000 (initial state)
  1   |   0   | 000
  2   |   1   | 100
  3   |   1   | 110
  4   |   0   | 011
  5   |   1   | 011
  6   |   0   | 111
  7   |   0   | 101 (syndrome s)

• What is g(X)?
• Find the syndrome using long division.
• Find the syndrome using the shortcut for the remainder.
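The long-division shortcut can be checked with the same GF(2) division helper as in the encoding sketch. Assuming the (7,4) code with g(X) = 1 + X + X^3 from the worked example, and reading r = 0010110 as r(X) = X^2 + X^4 + X^5, the division reproduces the table's syndrome 101:

```python
def polydiv_gf2(dividend, divisor):          # remainder, bits lowest-degree first
    r = list(dividend)
    dg = len(divisor) - 1
    for i in range(len(r) - 1, dg - 1, -1):
        if r[i]:
            for j, gbit in enumerate(divisor):
                r[i - dg + j] ^= gbit
    return r[:dg]

r_bits = [0, 0, 1, 0, 1, 1, 0]     # r = 0010110 as coefficients of X^0..X^6
g = [1, 1, 0, 1]                    # g(X) = 1 + X + X^3 (assumed)
print(polydiv_gf2(r_bits, g))       # -> [1, 0, 1]: the syndrome 101
```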
Association of Syndrome to Error
Pattern
• Look-up table implemented via a combinational logic circuit
(CLC). The complexity of the CLC tends to grow exponentially
with the code length and the number of errors to correct.
• Cyclic property helps in simplifying the decoding circuit.
• The circuit is designed to correct the error in a certain location
only, say the last location. The received word is shifted cyclically
to trap the error, if it exists, in the last location and then correct
it. The CLC is simplified since it is only required to yield a single
output e telling whether the syndrome, calculated after every
cyclic shift of r(X), corresponds to an error at the highest-order
position.
• The received digits are thus decoded one at a time.
Meggit Decoder
Shift r(X) into the buffer B and the syndrome
register R simultaneously. Once r(X) is completely
shifted in B, R will contain s(X), the syndrome of
r(X).
1. Based on the contents of R, the detection circuit
yields the output e (0 or 1).
2. During the next clock cycle:
(a) Add e to the rightmost bit of B while shifting
the contents of B. (The rightmost bit of B may be
read out). Call the modified content of B r1(1)(X).
Fig.: decoder circuit (syndrome register with gate and GF(2) adders), fed with r = 0010110.
Worked Example
Consider the (7,4) Hamming code generated by 1+X+X3.
• Theorem:
If g(X) has l roots (out of its n − k roots) that are consecutive powers of α, then the code it generates has a minimum distance d = l + 1.
• To design a cyclic code with a guaranteed minimum distance of d, form g(X) to have d − 1 consecutive roots. The parameter d is called the designed minimum distance of the code.
• Since roots occur in conjugates, the actual number of consecutive roots, say l, may be greater than d − 1. d_min = l + 1 is called the actual minimum distance of the code.
Design Example
X^15 + 1 has the roots 1 = α^0, α^1, …, α^14.

Conjugate group            | Corresponding polynomial
(α^0)                      | 1 + X
(α, α^2, α^4, α^8)         | 1 + X + X^4
(α^3, α^6, α^9, α^12)      | 1 + X + X^2 + X^3 + X^4
(α^5, α^10)                | 1 + X + X^2
(α^7, α^14, α^13, α^11)    | 1 + X^3 + X^4
BCH Codes
• Definition of the codes:
• For any positive integers m (m>2) and t0 (t0 <
n/2), there is a BCH binary code of length n =
2m - 1 which corrects all combinations of t0 or
fewer errors and has no more than mt0 parity-
check bits.
Block length: n = 2^m − 1
Number of parity-check bits: n − k ≤ m·t0
Minimum distance: d_min ≥ 2·t0 + 1

n    | k    | b | g(X) (octal)
7    | 3    | 2 | 35 (try to find d_min!)
15   | 10   | 2 | 65
15   | 9    | 3 | 171
31   | 25   | 2 | 161
63   | 56   | 2 | 355
63   | 55   | 3 | 711
511  | 499  | 4 | 10451
1023 | 1010 | 4 | 22365
Basic Definitions
Generator Polynomial
• A convolutional code may be defined by a set
of n generating polynomials for each input bit.
• For the circuit under consideration:
g1(D) = 1 + D + D2
g2(D) = 1 + D2
• The set {gi(D)} defines the code completely. The length of the shift register is equal to the degree of the highest-degree generator polynomial.
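A bit-level sketch of this encoder, with the taps of g1(D) and g2(D) written as binary masks (bit 0 of the mask is the current input bit):

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder with g1(D) = 1 + D + D^2 and
    g2(D) = 1 + D^2; the state holds the last K input bits."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift register update
        out.append(bin(state & g1).count("1") % 2)    # parity of g1 taps
        out.append(bin(state & g2).count("1") % 2)    # parity of g2 taps
    return out

print(conv_encode([1, 0, 1, 1, 0, 0]))
# -> [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```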
Decoding
Fig.: Viterbi decoding on the trellis. Add the weight of the path at each state: compute the two possible paths arriving at each state and select the one with the smaller cumulative Hamming weight; this is called the survival path. The decoded sequence in this example is m = [10 1110].
Sequence 2:
code sequence: .. 00 11 10 11 00 ..
state sequence: a0 b c a1
Labeled: (D^2·L·N)(D·L)(D^2·L) = D^5·L^3·N
Properties: w = 5, d_inf = 1, diverges from the all-zero path by 3 branches.
Sequence 3:
code sequence: .. 00 11 01 01 00 10 11 00 ..
state sequence: a0 b d c b c a1
Labeled: (D^2·L·N)(D·L·N)(D·L)(D·L)(L·N)(D^2·L) = D^7·L^6·N^3
Properties: w = 7, d_inf = 3, diverges from the all-zero path by 6 branches.
Transfer Function
• Input-output relations:
  a0 = 1
  b = D^2·L·N·a0 + L·N·c
  c = D·L·b + D·L·N·d
  d = D·L·N·b + D·L·N·d
  a1 = D^2·L·c
• The transfer function T(D, L, N) = a1/a0
  T(D, L, N) = D^5·L^3·N / [1 − D·L·N·(1 + L)]
  Pr(y | c) = p^(d_H(y, c)) · (1 − p)^(m − d_H(y, c))
where p is the probability of bit error of the BSC from modulation and m is the sequence length. Since p < 1/2,
  max over c ∈ C of Pr(y | c)  ⇔  min over c ∈ C of d_H(y, c)
Choose the code sequence through the trellis which has the
smallest Hamming distance to the received sequence!
What path through the trellis does the Viterbi Algorithm choose?
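A hard-decision Viterbi sketch for the rate-1/2 code from the encoder example (the received sequence below is that encoder's output for input 101100 with one bit flipped):

```python
def viterbi_decode(rx, g1=0b111, g2=0b101, K=3):
    """Hard-decision Viterbi decoding for g1(D) = 1 + D + D^2 and
    g2(D) = 1 + D^2: keep, per trellis state, the survivor path with
    the smallest cumulative Hamming distance to the received sequence."""
    n_states = 1 << (K - 1)
    INF = 10**9
    metric = [0] + [INF] * (n_states - 1)        # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        r0, r1 = rx[i], rx[i + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):                 # s = previous K-1 input bits
            if metric[s] == INF:
                continue
            for b in (0, 1):                      # hypothesized input bit
                full = ((s << 1) | b) & ((1 << K) - 1)
                o0 = bin(full & g1).count("1") % 2
                o1 = bin(full & g2).count("1") % 2
                ns = full & (n_states - 1)        # next trellis state
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < new_metric[ns]:            # survivor selection
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

tx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]   # encoded 101100 (see encoder sketch)
tx[3] ^= 1                                   # inject a single channel error
print(viterbi_decode(tx))                    # -> [1, 0, 1, 1, 0, 0]
```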
SUMMARY
• Learnt about the concepts of entropy and the source coding techniques
• Statement and the theorems of Shannon
• Concepts about mutual information and channel capacity
• Understand error control coding techniques and the concepts of linear block codes, cyclic codes, convolutional codes & the Viterbi decoding algorithm
Thank you