Digital Communication Systems: Unit-1 Source Coding Systems


VEMU INSTITUTE OF TECHNOLOGY

(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)


P.Kothakota, Puthalapattu (M), Chittoor Dist – 517 112 AP, India

Department of Electronics and Communication Engineering

Digital Communication Systems


Unit-1
Source Coding Systems

By
B SAROJA
Associate Professor
Dept. of ECE
Contents
Introduction, sampling process.
Quantization, quantization noise, conditions for optimality of
quantizer, encoding.
Pulse-Code Modulation (PCM), Line codes, Differential encoding,
Regeneration, Decoding & Filtering.
Noise considerations in PCM systems.
Time-Division Multiplexing (TDM), Synchronization.
Delta modulation (DM).
Differential PCM (DPCM), Processing gain
Adaptive DPCM (ADPCM)
Comparison of the above systems.
Introduction

➢ Source: analog or digital

➢ Transmitter: transducer, amplifier, modulator, oscillator, power amp., antenna

➢ Channel: e.g. cable, optical fibre, free space

➢ Receiver: antenna, amplifier, demodulator, oscillator, power amplifier, transducer

➢ Recipient: e.g. person, (loud) speaker, computer

3
➢ Types of information:
Voice, data, video, music, email etc.

➢ Types of communication systems:
Public Switched Telephone Network (voice, fax, modem)
Satellite systems
Radio, TV broadcasting
Cellular phones
Computer networks (LANs, WANs, WLANs)

4
Information Representation
➢ A communication system converts information into electrical/
electromagnetic/optical signals appropriate for the
transmission medium.
➢ Analog systems convert the analog message into signals that can
propagate through the channel.
➢ Digital systems convert bits (digits, symbols) into signals.

• Computers naturally generate information as characters/bits.
• Most information can be converted into bits.
• Analog signals are converted to bits by sampling and quantizing (A/D
conversion).

5
WHY DIGITAL?
➢ Digital techniques need to distinguish only between discrete
symbols, allowing regeneration rather than amplification

➢ Good processing techniques are available for digital
signals, such as:
• Data compression (or source coding)
• Error correction (or channel coding)
• Equalization
• Security

➢ Easy to mix signals and data using digital techniques

6
Information Source and Sinks
Information Source and Input Transducer:
▪ The source of information can be analog or digital,
▪ Analog: audio or video signal,
▪ Digital: like teletype signal.
▪ In digital communication the signal produced by this source is
converted into a digital signal consisting of 1′s and 0′s.

Output Transducer:
▪ The output transducer converts the received signal back into the
desired format (analog or digital).

9
Channel

▪ The communication channel is the physical medium that is used


for transmitting signals from transmitter to receiver
• Wireless channels: Wireless Systems
• Wired Channels: Telephony
▪ Channels are classified on the basis of their properties and
characteristics, e.g. the AWGN channel.

10
Source Encoder And Decoder
Source Encoder
▪ In digital communication we convert the signal from source
into digital signal.
▪ Source Encoding or Data Compression: the process of
efficiently converting the output of either an analog or digital
source into a sequence of binary digits is known as source
encoding.
Source Decoder
▪ At the end, if an analog signal is desired then source decoder
tries to decode the sequence from the knowledge of the
encoding algorithm.

11
Channel Encoder And Decoder
Channel Encoder:
▪ The information sequence is passed through the channel
encoder. The purpose of the channel encoder is to introduce, in a
controlled manner, some redundancy in the binary information
sequence that can be used at the receiver to overcome the effects
of noise and interference encountered in the transmission of the
signal through the channel.
Channel Decoder:
▪ Channel decoder attempts to reconstruct the original
information sequence from the knowledge of the code used by
the channel encoder and the redundancy contained in the
received data
12
Digital Modulator And Demodulator
Digital Modulator:
▪ The binary sequence is passed to the digital modulator, which
in turn converts the sequence into electric signals so that
we can transmit them on the channel. The digital modulator
maps the binary sequences into signal waveforms.
Digital Demodulator:
▪ The digital demodulator processes the channel corrupted
transmitted waveform and reduces the waveform to the
sequence of numbers that represents estimates of the
transmitted data symbols.

13
Why Digital Communications?
➢Easy to regenerate the distorted signal
➢Regenerative repeaters along the transmission path can detect a
digital signal and retransmit a new, clean (noise free) signal
➢These repeaters prevent accumulation of noise along the path
This is not possible with analog communication systems
➢Two-state signal representation
The input to a digital system is in the form of a sequence of
bits (binary or M-ary)
➢Immunity to distortion and interference
➢Digital communication is rugged in the sense that it is more
immune to channel noise and distortion

14
➢ Hardware is more flexible
➢ Digital hardware implementation is flexible and permits the
use of microprocessors, mini-processors, digital switching
and VLSI
Shorter design and production cycle
➢ Low cost
The use of LSI and VLSI in the design of components and
systems has resulted in lower cost
➢ Easier and more efficient to multiplex several digital
signals
➢ Digital multiplexing techniques – Time & Code Division
Multiple Access - are easier to implement than analog
techniques such as Frequency Division Multiple Access
15
➢ Can combine different signal types – data, voice, text, etc.
➢ Data communication in computers is digital in nature
whereas voice communication between people is analog in
nature
➢ Using digital techniques, it is possible to combine both
formats for transmission through a common medium
➢ Encryption and privacy techniques are easier to
implement
➢ Better overall performance
➢ Digital communication is inherently more efficient than
analog in realizing the exchange of SNR for bandwidth.
➢ Digital signals can be coded to yield extremely low rates
and high fidelity as well as privacy.

16
Disadvantages:
➢ Requires reliable “synchronization”

➢ Requires A/D conversions at high rate

➢ Requires larger bandwidth

➢ Nongraceful degradation

➢ Performance Criteria

➢ Probability of error or Bit Error Rate

17
Sampling Process

Sampling is converting a
continuous time signal into a
discrete time signal.
There are three types of sampling:
➢ Impulse (ideal) sampling
➢ Natural sampling
➢ Sample-and-hold operation

18
Quantization
Quantization is a non-linear transformation which maps
elements from a continuous set to a finite set.
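A minimal sketch (Python) of the sampling and uniform quantization steps just described; the sine-wave test input, the 8-bit mid-rise quantizer and the ±1 overload level are assumptions for illustration. It compares the measured quantization-noise power with the familiar Δ²/12 approximation.

```python
import numpy as np

fs = 8000            # sampling rate (Hz), assumed for illustration
f0 = 100             # test-tone frequency (Hz)
n_bits = 8           # bits per sample
t = np.arange(0, 0.1, 1 / fs)          # sampled time instants
x = np.sin(2 * np.pi * f0 * t)         # band-limited analog signal, sampled

x_max = 1.0                             # quantizer overload level (assumed)
levels = 2 ** n_bits
delta = 2 * x_max / levels              # step size of the uniform quantizer
# mid-rise uniform quantizer: map each sample to the centre of its step
xq = delta * (np.floor(x / delta) + 0.5)

noise = x - xq
print("measured noise power :", np.mean(noise ** 2))
print("delta^2 / 12 estimate:", delta ** 2 / 12)
```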

19
Quantization Noise

20
Uniform & Non-Uniform Quantization

Non-uniform quantization is used to reduce the quantization error
and increase the dynamic range when the input signal is not
uniformly distributed over its allowed range of values.

[Figure: uniform quantization vs. non-uniform quantization characteristics]
21
Encoding

24
Pulse Code Modulation
➢ Pulse Code Modulation (PCM) is a special form of A/D
conversion.

➢ It consists of sampling, quantizing, and encoding steps.

1. Used for a long time in telephone systems
2. Errors can be corrected during long-haul transmission
3. Can use time-division multiplexing
4. Inexpensive

25
PCM Transmitter

26
PCM Transmission Path

27
PCM Receiver

Reconstructed waveform
28
Bandwidth of PCM
Assume w(t) is band-limited to B hertz.
Minimum sampling rate = 2B samples / second
A/D output = n bits per sample (quantization levels M = 2^n)
Assume a simple PCM without redundancy.
Minimum channel bandwidth = bit rate / 2

➢ Bandwidth of PCM signals:
B_PCM ≥ nB (with sinc functions as orthogonal basis)
B_PCM ≥ 2nB (with rectangular pulses as orthogonal basis)
➢ For any reasonable quantization level M, PCM requires
much higher bandwidth than the original w(t).

29
Advantages of PCM
Relatively inexpensive.
Easily multiplexed.
Easily regenerated.
Better noise performance than analog system.
Signals may be stored and time-scaled efficiently.
Efficient codes are readily available.
Disadvantage
Requires wider bandwidth than analog signals

30
Line Codes

31
Categories of Line Codes
▪ Polar - Send pulse or negative of pulse
▪ Unipolar - Send pulse or a 0
▪ Bipolar - Represent 1 by alternating signed pulses
Generalized Pulse Shapes
▪ NRZ -Pulse lasts entire bit period
▪Polar NRZ
▪Bipolar NRZ
▪ RZ - Return to Zero - pulse lasts just half of bit period
▪Polar RZ
▪Bipolar RZ
▪ Manchester Line Code
▪Send a two-level pulse for either 1 (high→ low) or 0 (low→ high)
▪Includes rising and falling edge in each pulse
▪No DC component (a sketch of these line codes follows below)
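A minimal sketch (Python) of how a few of these line codes map a bit sequence onto sample values; the unit amplitude and the samples-per-bit value are assumptions for illustration.

```python
import numpy as np

def line_code(bits, scheme="polar_nrz", sps=8):
    """Map a bit list to waveform samples; sps = samples per bit."""
    half = sps // 2
    out = []
    for b in bits:
        if scheme == "unipolar_nrz":
            out += [1.0 if b else 0.0] * sps
        elif scheme == "polar_nrz":
            out += [1.0 if b else -1.0] * sps
        elif scheme == "polar_rz":          # pulse only in the first half of the bit
            level = 1.0 if b else -1.0
            out += [level] * half + [0.0] * (sps - half)
        elif scheme == "manchester":        # 1: high->low, 0: low->high
            first = 1.0 if b else -1.0
            out += [first] * half + [-first] * (sps - half)
        else:
            raise ValueError("unknown scheme")
    return np.array(out)

bits = [1, 0, 1, 1, 0]
for s in ["unipolar_nrz", "polar_nrz", "polar_rz", "manchester"]:
    print(s, line_code(bits, s, sps=4))
```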
32
Differential Encoding

33
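As a rough illustration of differential encoding (Python; the initial reference bit of 0 is an assumption), this sketch applies the usual rule e_k = d_k XOR e_(k-1) and the matching decoder, which needs only the current and previous encoded bits.

```python
def diff_encode(bits, ref=0):
    """Differential encoding: e_k = d_k XOR e_(k-1)."""
    out, prev = [], ref
    for d in bits:
        prev = d ^ prev
        out.append(prev)
    return out

def diff_decode(enc, ref=0):
    """Decoder compares each encoded bit with the previous one."""
    out, prev = [], ref
    for e in enc:
        out.append(e ^ prev)
        prev = e
    return out

data = [1, 0, 1, 1, 0, 0, 1]
enc = diff_encode(data)
assert diff_decode(enc) == data
print("data   :", data)
print("encoded:", enc)
```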
Noise Considerations In PCM

36
Time Division Multiplexing(TDM)

37
Synchronization

38
Delta Modulation

Types of noise
➢ Quantization noise: the step size Δ takes the place of the
smallest quantization level.
➢ Δ too small: slope-overload noise
➢ Δ too large: granular (quantization) noise
(a small encoder/decoder sketch follows below)
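A minimal delta-modulation sketch in Python; the step size and the test signal are assumptions for illustration. The encoder sends one bit per sample giving the sign of the error between the input and the staircase approximation, and the decoder integrates those bits.

```python
import numpy as np

def dm_encode(x, delta):
    """Delta modulation: 1 if the input is above the staircase, else 0."""
    approx, bits = 0.0, []
    for sample in x:
        bit = 1 if sample > approx else 0
        bits.append(bit)
        approx += delta if bit else -delta   # staircase tracks the input
    return bits

def dm_decode(bits, delta):
    approx, out = 0.0, []
    for bit in bits:
        approx += delta if bit else -delta
        out.append(approx)
    return np.array(out)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)
bits = dm_encode(x, delta=0.1)
xhat = dm_decode(bits, delta=0.1)
print("mean squared error:", np.mean((x - xhat) ** 2))
```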

39
Delta Modulator Transmitter & Receiver

40
ADM

41
ADM Transmitter & Receiver

42
DPCM
Often voice and video signals do not
change much from one sample to the next.
- Such signals have their energy concentrated
at lower frequencies.
- Sampling faster than necessary
generates redundant information.
We can save bandwidth by not sending all
samples (a small sketch follows below):
* Send true samples occasionally.
* In between, send only the change
from the previous value.
* Change values can be sent using
fewer bits than true samples.
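As a rough sketch of this idea (not the transmitter/receiver block diagrams on the following slides), the Python snippet below implements DPCM with a first-order predictor (the previous reconstructed sample) and a uniform quantizer on the prediction error; the step size and predictor are assumptions for illustration.

```python
import numpy as np

def dpcm_encode(x, step):
    """Quantize the difference between each sample and the predicted value."""
    pred, codes = 0.0, []
    for sample in x:
        err = sample - pred                       # prediction error
        q = step * np.round(err / step)           # quantized difference
        codes.append(q)
        pred = pred + q                           # track the decoder-side reconstruction
    return codes

def dpcm_decode(codes):
    pred, out = 0.0, []
    for q in codes:
        pred += q
        out.append(pred)
    return np.array(out)

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t)
codes = dpcm_encode(x, step=0.05)
xhat = dpcm_decode(codes)
print("reconstruction MSE:", np.mean((x - xhat) ** 2))
```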

43
DPCM Transmitter

44
DPCM Receiver

45
Processing Gain
In a spread-spectrum system, the process gain (or
"processing gain") is the ratio of the spread (or RF)
bandwidth to the unspread (or baseband) bandwidth.
It is usually expressed in decibels (dB).

For example, if a 1 kHz signal is spread to 100 kHz, the
process gain expressed as a numerical ratio would
be 100,000 / 1,000 = 100, or in decibels, 10 log10(100) =
20 dB.

46
Adaptive DPCM

47
Comparisons

48
VEMU INSTITUTE OF TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)
P.Kothakota, Puthalapattu (M), Chittoor Dist – 517 112 AP, India

Department of Electronics and Communication Engineering

Digital Communication Systems


Unit-2
Baseband Pulse Transmission

By
B SAROJA
Associate Professor
Dept. of ECE
Contents

Introduction
Matched filter, Properties of Matched filter, Matched filter for
rectangular pulse, Error rate due to noise
Inter-symbol Interference (ISI)
Nyquist's criterion for distortionless baseband binary transmission
Ideal Nyquist channel
Raised cosine filter & its spectrum
Correlative coding – Duobinary & Modified duobinary signaling
schemes, Partial response signaling
Baseband M-ary PAM transmission
Eye diagrams
3
Matched Filter
It passes all the signal frequency components while
suppressing any frequency components where there is
only noise, and allows the maximum amount of signal
power to pass.
The purpose of the matched filter is to maximize the signal-
to-noise ratio at the sampling point of a bit stream and to
minimize the probability of undetected errors received
from a signal.
To achieve the maximum SNR, we want to allow through all
the signal frequency components.

4
Matched Filter:

 Consider the received signal as a vector r, and the transmitted signal vector as s
 Matched filter “projects” the r onto signal space spanned by s (“matches” it)

Filtered signal can now be safely sampled by the receiver at the correct sampling instants,
resulting in a correct interpretation of the binary message

The matched filter is the filter that maximizes the signal-to-noise ratio; it can be shown that it
also minimizes the BER: it is a simple projection operation

5
Example Of Matched Filter (Real Signals)

[Figure: y(t) = s_i(t) ∗ h_opt(t). For a rectangular pulse s_i(t) of amplitude A and
duration T, the matched-filter output y(t) is a triangular pulse that peaks at A²T at
the sampling instant t = T; the second example shows the same result for a bipolar
(±A) pulse of the same duration.]
6
Properties of the Matched Filter
1. The Fourier transform of a matched filter output with the matched signal as
input is, except for a time delay factor, proportional to the ESD of the input
signal.

Z(f) = |S(f)|² exp(−j2πfT)

2. The output signal of a matched filter is proportional to a shifted version of the
autocorrelation function of the input signal to which the filter is matched.

z(t) = R_s(t − T)  ⇒  z(T) = R_s(0) = E_s

3. The output SNR of a matched filter depends only on the ratio of the signal
energy to the PSD of the white noise at the filter input.

max (S/N)_T = E_s / (N_0 / 2)

7
Matched Filter: Frequency domain View

Simple Bandpass Filter:


excludes noise, but misses some signal power

8
Matched Filter: Frequency Domain View (Contd)
Multi-Bandpass Filter: includes more signal power, but adds more noise also!

Matched Filter: includes more signal power, weighted according to size


=> maximal noise rejection!

9
Matched Filter For Rectangular Pulse
The matched filter for a causal rectangular pulse has an impulse
response that is itself a causal rectangular pulse.
Convolving the input with a rectangular pulse of duration T sec and
sampling the result at T sec is equivalent to an integrate-and-dump circuit:
First, integrate for T sec.
Second, sample at the symbol period T sec ("sample and dump").
Third, reset the integration for the next time period.
(A small sketch of this receiver front end follows below.)
10
11
Inter-symbol Interference (ISI)
ISI in the detection process is due to the filtering effects of
the system.
Overall equivalent system transfer function:

H(f) = H_t(f) H_c(f) H_r(f)

▪ creates echoes and hence time dispersion
▪ causes ISI at sampling time
ISI effect on the k-th sample:

z_k = s_k + n_k + Σ_{i≠k} α_i s_i

12
Inter-symbol Interference (ISI): MODEL
Baseband system model:

[Figure: the data sequence {x_k} passes through the Tx filter h_t(t), the channel
h_c(t) with additive noise n(t), and the Rx filter h_r(t); the output z(t) is sampled
at t = kT and applied to the detector, which produces the estimates {x̂_k}. The
equivalent system model lumps these into a single filter h(t) with output z_k and
filtered noise n̂(t), where]

H(f) = H_t(f) H_c(f) H_r(f)
13
Equiv System: Ideal Nyquist Pulse
(FILTER)
Ideal Nyquist filter: H(f) is an ideal brick-wall lowpass response of amplitude T
over |f| ≤ 1/2T, i.e. bandwidth W = 1/2T.
Ideal Nyquist pulse: h(t) = sinc(t/T), with zero crossings at all nonzero integer
multiples of T.
18
Nyquist Pulses (FILTERS)
Nyquist pulses (filters):
▪ Pulses (filters) which result in no ISI at the sampling time.
Nyquist filter:
▪ Its transfer function in frequency domain is obtained by
convolving a rectangular function with any real even-
symmetric frequency function
Nyquist pulse:
▪ Its shape can be represented by a sinc(t/T) function multiplied
by another time function.
Example of Nyquist filters: Raised-Cosine filter

19
Raised Cosine Filter & Its Spectrum

[Figure: |H(f)| = |H_RC(f)| and h(t) = h_RC(t) for roll-off factors r = 0, 0.5 and 1;
a larger r widens the spectrum but makes the time-domain tails decay faster.]

Baseband bandwidth: W_SSB = (R_s / 2)(1 + r)     Passband bandwidth: W_DSB = (1 + r) R_s

20
RAISED COSINE FILTER & ITS SPECTRUM
Raised-Cosine Filter
▪ A Nyquist pulse (No ISI at the sampling time)

H(f) = 1                                                  for |f| < 2W_0 − W
H(f) = cos²[ (π/4) · (|f| + W − 2W_0) / (W − W_0) ]       for 2W_0 − W ≤ |f| < W
H(f) = 0                                                  for |f| ≥ W

h(t) = 2W_0 sinc(2W_0 t) · cos[2π(W − W_0)t] / (1 − [4(W − W_0)t]²)

Excess bandwidth: W − W_0      Roll-off factor: r = (W − W_0) / W_0,   0 ≤ r ≤ 1
21
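A small numerical check of the raised-cosine pulse above (Python; W_0 = 1/2T, the chosen roll-off, and the unit-amplitude normalization are assumptions for illustration). It evaluates h(t) and verifies the zero-ISI property, i.e. h(nT) ≈ 0 for all nonzero integers n.

```python
import numpy as np

def raised_cosine(t, T, r):
    """Normalized raised-cosine pulse: sinc(t/T)*cos(pi*r*t/T)/(1-(2*r*t/T)^2).

    Equivalent to the slide's h(t) with W0 = 1/(2T), scaled so that h(0) = 1.
    (The removable singularity at t = +/- T/(2r) is not handled here.)
    """
    t = np.asarray(t, dtype=float)
    return np.sinc(t / T) * np.cos(np.pi * r * t / T) / (1.0 - (2.0 * r * t / T) ** 2)

T, r = 1.0, 0.35          # symbol period and roll-off factor (assumed for the demo)
n = np.arange(-5, 6)
print("h(nT):", np.round(raised_cosine(n * T, T, r), 6))   # ~1 at n = 0, ~0 elsewhere
```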
Correlative Coding – DUO BINARY
SIGNALING
Impulse Response of Duobinary Encoder
Encoding Process
1) a_n = binary input bit; a_n ∈ {0,1}.
2) b_n = NRZ polar output of the level converter in the
precoder, given by
   b_n = −d if a_n = 0,  b_n = +d if a_n = 1.
3) y_n can be represented as y_n = b_n + b_{n−1}.

The duobinary encoding correlates the present sample a_n and the
previous input sample a_{n−1}.
Decoding Process
 The receiver consists of a duobinary decoder and a
postcoder.
 b̂_n = y_n − b̂_{n−1}
 This equation indicates that the decoding process is prone
to error propagation, as the estimate of the present sample
relies on the estimate of the previous sample.
 This error propagation is avoided by using a precoder
before the duobinary encoder at the transmitter and a
postcoder after the duobinary decoder.
 The precoder ties the present sample to the previous sample,
and the postcoder does the reverse process (see the sketch below).
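A minimal end-to-end duobinary sketch in Python (the level d = 1 and the all-zero initial reference are assumptions for illustration): modulo-2 precoding, duobinary encoding y_n = b_n + b_{n−1}, and the memoryless decision rule at the receiver that avoids error propagation.

```python
def duobinary_tx(bits, p0=0):
    """Precoder p_n = a_n XOR p_(n-1); polar levels b_n = +/-1; y_n = b_n + b_(n-1)."""
    p_prev, b_prev = p0, 2 * p0 - 1
    y = []
    for a in bits:
        p = a ^ p_prev
        b = 2 * p - 1
        y.append(b + b_prev)          # three-level duobinary sample in {-2, 0, +2}
        p_prev, b_prev = p, b
    return y

def duobinary_rx(y):
    """Memoryless decision: y_n == 0 -> 1, |y_n| == 2 -> 0 (no error propagation)."""
    return [1 if v == 0 else 0 for v in y]

bits = [0, 1, 1, 0, 1, 0, 0, 1]
y = duobinary_tx(bits)
assert duobinary_rx(y) == bits
print("duobinary samples:", y)
```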
Correlative Coding – Modified Duobinary
Signaling
Modified Duobinary Signaling is an extension of duobinary
signaling.

Modified Duobinary signaling has the advantage of zero PSD at


low frequencies which is suitable for channels with poor DC
response.

It correlates two symbols that are 2T time instants apart,


whereas in duobinary signaling, symbols that are 1T apart
are correlated.

The general condition to achieve zero ISI is given by

p(nT) = 1 for n = 0, and p(nT) = 0 for n ≠ 0.

In the case of modified duobinary signaling, the above
equation is modified as

p(nT) = 1 for n = 0, 2, and p(nT) = 0 otherwise,

which states that the ISI is limited to two alternate
samples.

Here a controlled or “deterministic” amount of ISI is


introduced and hence its effect can be removed upon
signal detection at the receiver.
Impulse Response Of A Modified Duobinary
Encoder
Partial Response Signalling

31
32
Eye Pattern
Eye pattern: a display on an oscilloscope which sweeps the system
response to a baseband signal at the rate 1/T (T = symbol
duration).

On the amplitude scale the eye pattern shows the distortion due to
ISI and the noise margin; on the time scale it shows the sensitivity
to timing error and the timing jitter.
33
Example Of Eye Pattern:
BINARY-PAM, SRRC PULSE

Perfect channel (no noise and no ISI)

34
Eye Diagram For 4-PAM

35
VEMU INSTITUTE OF TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)
P.Kothakota, Puthalapattu (M), Chittoor Dist – 517 112 AP, India

Department of Electronics and Communication Engineering

Digital Communication Systems


Unit-3
Signal Space Analysis

By
B SAROJA
Associate Professor
Dept. of ECE
Contents
Introduction
Geometric representation of signals
Gram-Schmidt orthogonalization procedure
Conversion of the Continuous AWGN channel into a vector channel
Coherent detection of signals in noise
Correlation receiver
Equivalence of correlation and Matched filter receivers
Probability of error
Signal constellation diagram
Introduction:signal Space
What is a signal space?
 Vector representations of signals in an N-dimensional orthogonal
space
Why do we need a signal space?
 It is a means to convert signals to vectors and vice versa.
 It is a means to calculate signals energy and Euclidean distances
between signals.
Why are we interested in Euclidean distances between signals?
 For detection purposes: The received signal is transformed into a
received vector.
 The signal which has the minimum distance to the received vector
is estimated as the transmitted signal.
Transmitter takes the symbol (data) m_i (digital message
source output) and encodes it into a distinct signal s_i(t).
The signal s_i(t) occupies the whole slot T allotted to symbol
m_i.
s_i(t) is a real-valued energy signal (signal with finite energy):

E_i = ∫₀^T s_i²(t) dt,   i = 1, 2, ..., M     (5.2)

5
Geometric Representation of Signals
Objective: To represent any set of M energy signals
{s_i(t)} as linear combinations of N orthogonal
basis functions, where N ≤ M.
Real-valued energy signals s_1(t), s_2(t), ..., s_M(t), each of
duration T sec:

s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t),   0 ≤ t ≤ T,  i = 1, 2, ..., M     (5.5)

where the s_ij are the coefficients and the φ_j(t) are the orthogonal
basis functions.
6
Coefficients:

s_ij = ∫₀^T s_i(t) φ_j(t) dt,   i = 1, 2, ..., M;  j = 1, 2, ..., N     (5.6)

Real-valued basis functions:

∫₀^T φ_i(t) φ_j(t) dt = δ_ij = { 1 if i = j;  0 if i ≠ j }     (5.7)

7
A) SYNTHESIZER FOR GENERATING THE SIGNAL SI(T).
B) ANALYZER FOR GENERATING THE SET OF SIGNAL
VECTORS SI.

8
Each signal in the set s_i(t) is completely determined by the
vector of its coefficients:

s_i = [s_i1, s_i2, ..., s_iN]^T,   i = 1, 2, ..., M     (5.8)

9
The signal vector s_i concept can be extended to 2D, 3D, etc. N-
dimensional Euclidean space.
Provides a mathematical basis for the geometric representation
of energy signals that is used in noise analysis.
Allows definition of
 Length of vectors (absolute value)
 Angles between vectors
 Squared value (inner product of s_i with itself, T denoting matrix
transposition):

||s_i||² = s_i^T s_i = Σ_{j=1}^{N} s_ij²,   i = 1, 2, ..., M     (5.9)

10
ILLUSTRATING THE
GEOMETRIC
REPRESENTATION OF
SIGNALS FOR THE CASE
WHEN N = 2 AND M = 3.
(TWO-DIMENSIONAL
SPACE, THREE SIGNALS)

11
What is the relation between the vector representation of a
signal and its energy value?

Start with the definition of the average energy in a signal (5.10):

E_i = ∫₀^T s_i²(t) dt     (5.10)

where s_i(t) is as in (5.5):   s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t)     (5.5)

After substitution:

E_i = ∫₀^T ( Σ_{j=1}^{N} s_ij φ_j(t) ) ( Σ_{k=1}^{N} s_ik φ_k(t) ) dt

After regrouping:

E_i = Σ_{j=1}^{N} Σ_{k=1}^{N} s_ij s_ik ∫₀^T φ_j(t) φ_k(t) dt     (5.11)

The φ_j(t) are orthonormal, so finally we have:

E_i = Σ_{j=1}^{N} s_ij² = ||s_i||²     (5.12)

The energy of a signal
is equal to the squared
length of its vector
13
Formulas for Two Signals
Assume we have a pair of signals s_i(t) and s_k(t), each
represented by its vector. Then:

∫₀^T s_i(t) s_k(t) dt = s_i^T s_k     (5.13)

The inner product of the signals over [0, T] is equal to the inner
product of their vector representations; the inner product is
invariant to the selection of basis functions.
14
Euclidean Distance
The Euclidean distance between two points represented by
vectors (signal vectors) is equal to ||s_i − s_k||, and the squared
value is given by:

||s_i − s_k||² = Σ_{j=1}^{N} (s_ij − s_kj)² = ∫₀^T ( s_i(t) − s_k(t) )² dt     (5.14)

15
ANGLE BETWEEN TWO SIGNALS
The cosine of the angle θ_ik between two signal vectors s_i and
s_k is equal to the inner product of these two vectors,
divided by the product of their norms:

cos θ_ik = s_i^T s_k / ( ||s_i|| ||s_k|| )     (5.15)

So the two signal vectors are orthogonal if their inner
product s_i^T s_k is zero (cos θ_ik = 0)

16
Schwartz Inequality
Defined as:

( ∫ s_1(t) s_2(t) dt )² ≤ ( ∫ s_1²(t) dt ) · ( ∫ s_2²(t) dt )     (5.16)

accept without proof…

17
Gram-schmidt Orthogonalization Procedure
Assume a set of M energy signals denoted by s_1(t), s_2(t), ..., s_M(t).

1. Define the first basis function starting with s_1 as
   φ_1(t) = s_1(t) / √E_1     (5.19)
   (where E_1 is the energy of the signal, based on 5.12).

2. Then express s_1(t) using the basis function and an energy-
   related coefficient s_11 as
   s_1(t) = √E_1 φ_1(t) = s_11 φ_1(t)     (5.20)

3. Later, using s_2, define the coefficient s_21 as
   s_21 = ∫₀^T s_2(t) φ_1(t) dt     (5.21)

18
4. If we introduce the intermediate function g_2 as
   g_2(t) = s_2(t) − s_21 φ_1(t)     (5.22)
   which is orthogonal to φ_1(t),

5. we can define the second basis function φ_2(t) as
   φ_2(t) = g_2(t) / √( ∫₀^T g_2²(t) dt )     (5.23)

6. which, after substitution of g_2(t) using s_1(t) and s_2(t), becomes
   φ_2(t) = ( s_2(t) − s_21 φ_1(t) ) / √( E_2 − s_21² )     (5.24)

Note that ∫₀^T φ_2²(t) dt = 1 (look at 5.23) and that φ_1(t) and φ_2(t) are
orthogonal, that is ∫₀^T φ_1(t) φ_2(t) dt = 0.

19
AND SO ON FOR N DIMENSIONAL
SPACE…,
In general a basis function can be defined using the following
formula:

g_i(t) = s_i(t) − Σ_{j=1}^{i−1} s_ij φ_j(t)     (5.25)

• where the coefficients can be defined using:

s_ij = ∫₀^T s_i(t) φ_j(t) dt,   j = 1, 2, ..., i−1     (5.26)

Special Case:
For the special case of i = 1, g_i(t) reduces to s_i(t).

General case:

• Given a function g_i(t), we can define a set of basis functions,
which form an orthogonal set, as:

φ_i(t) = g_i(t) / √( ∫₀^T g_i²(t) dt ),   i = 1, 2, ..., N     (5.27)

21
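A small numerical illustration of the Gram-Schmidt procedure above (Python), operating on densely sampled waveforms so that the integrals become sums scaled by dt; the three test signals are assumptions for the demo.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Return orthonormal basis functions for a list of sampled signals."""
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi in basis:
            g -= np.sum(s * phi) * dt * phi        # subtract projections, as in (5.25)
        energy = np.sum(g ** 2) * dt
        if energy > 1e-12:                          # skip linearly dependent signals
            basis.append(g / np.sqrt(energy))       # normalize, as in (5.27)
    return basis

T, N = 1.0, 1000
t = np.linspace(0, T, N, endpoint=False)
dt = T / N
s1 = np.ones_like(t)                    # rectangular pulse
s2 = np.where(t < T / 2, 1.0, -1.0)     # split-phase pulse
s3 = s1 + 0.5 * s2                      # a linear combination of s1 and s2
basis = gram_schmidt([s1, s2, s3], dt)
print("number of basis functions:", len(basis))                      # expect 2 (N <= M)
print("inner product phi1.phi2 :", np.sum(basis[0] * basis[1]) * dt)  # ~0
```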
22
Additive White Gaussian Noise (AWGN)
 Thermal noise is described by a zero-mean Gaussian random process,
n(t) that ADDS on to the signal => “additive”
 Its PSD is flat, hence, it is called white noise.
 Autocorrelation is a spike at 0: uncorrelated at any non-zero lag

[Figure: the power spectral density is flat ("white"), in W/Hz; the autocorrelation
function is a spike at zero lag (uncorrelated at any non-zero lag); the probability
density function is Gaussian.]
23
24
Coherent Detection of Signals in Noise:

"likelihoods"

Assuming both symbols equally likely: u_A is chosen if the received point is
closer (in Euclidean distance) to u_A than to the other symbol.

Log-Likelihood => A simple distance criterion!


25
Effect of Noise In Signal Space

The cloud falls off exponentially (gaussian).


Vector viewpoint can be used in signal space, with a random noise vector w

26
Correlator Receiver

The matched filter output at the sampling time can be
realized as the correlator output.
 Matched filtering, i.e. convolution with s_i*(T − τ), simplifies to
integration with s_i*(τ), i.e. correlation or inner product:

z(T) = h_opt(T) ∗ r(T) = ∫₀^T r(τ) s_i*(τ) dτ = < r(t), s(t) >

Recall: the correlation operation is the projection of the received
signal onto the signal space!

Key idea: Reject the noise (N) outside this space as irrelevant:
=> maximize S/N (a small sketch follows below)
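A minimal correlation-receiver sketch in Python; the two waveforms, symbol period and noise level are assumptions for illustration. The receiver correlates the received waveform with each candidate signal over one symbol and picks the larger output, which is equivalent to the matched-filter decision.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # samples per symbol (assumed)
t = np.arange(N) / N
s1 = np.sin(2 * np.pi * 4 * t)           # waveform for bit 1
s0 = -s1                                 # antipodal waveform for bit 0

bits = rng.integers(0, 2, 500)
errors = 0
for b in bits:
    r = (s1 if b else s0) + 3.0 * rng.standard_normal(N)   # AWGN channel
    # correlator outputs: inner products <r, s1> and <r, s0>
    z1, z0 = np.dot(r, s1), np.dot(r, s0)
    decision = 1 if z1 > z0 else 0
    errors += (decision != b)
print("symbol errors:", errors, "of", bits.size)
```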
A Correlation Receiver

[Figure: the received waveform V(t) = s_1(t) + n(t) or s_2(t) + n(t) is multiplied
by s_1(t) and by s_2(t); each product is integrated over 0 to T_b, the integrator
outputs are sampled every T_b seconds, their difference is formed, and a threshold
device (A/D) decides which symbol was sent.]
28
29
Probability Of Error

30
Signal Constellation Diagram

A constellation diagram is a representation of a signal modulated


by a digital modulation scheme such as quadrature amplitude
modulation or phase-shift keying.

It displays the signal as a two-dimensional xy-plane


scatter diagram in the complex plane at symbol sampling
instants.

31
32
VEMU INSTITUTE OF TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)
P.Kothakota, Puthalapattu (M), Chittoor Dist – 517 112 AP, India

Department of Electronics and Communication Engineering

Digital Communication Systems


Unit-4
Passband Transmission Model

By
B SAROJA
Associate Professor
Dept. of ECE
Contents

Introduction, Pass band transmission model


Coherent phase-shift keying – BPSK, QPSK
Binary Frequency shift keying (BFSK)
Error probabilities of BPSK, QPSK, BFSK
Generation and detection of Coherent PSK, QPSK, & BFSK
Power spectra of above mentioned modulated signals
M-ary PSK, M-ary QAM
Non-coherent orthogonal modulation schemes -DPSK, BFSK,
Generation and detection of non-coherent BFSK, DPSK
Comparison of power bandwidth requirements for all the above
schemes.
Introduction:
Baseband Vs Bandpass
Bandpass model of detection process is equivalent to baseband
model because:
 The received bandpass waveform is first transformed to a
baseband waveform.

Equivalence theorem:
 Performing bandpass linear signal processing followed by
heterodyning the signal to the baseband, …
 … yields the same results as …
 … heterodyning the bandpass signal to the baseband, followed by
baseband linear signal processing.

3
PASSBAND TRANSMISSION MODEL

4
Types Of Digital Modulation
 Amplitude Shift Keying (ASK)
 The most basic (binary) form of ASK involves the process of switching
the carrier either on or off, in correspondence to a sequence of digital
pulses that constitute the information signal. One binary digit is
represented by the presence of a carrier, the other binary digit is
represented by the absence of a carrier. Frequency remains fixed
 Frequency Shift Keying (FSK)
 The most basic (binary) form of FSK involves the process of varying the
frequency of a carrier wave by choosing one of two frequencies (binary
FSK) in correspondence to a sequence of digital pulses that constitute
the information signal. Two binary digits are represented by two
frequencies around the carrier frequency. Amplitude remains fixed
 Phase Shift Keying (PSK)
 Another form of digital modulation technique which we will not discuss

5
BINARY PHASE SHIFT KEYING (PSK)

Baseband
Data
1 0 0 1
BPSK
modulated
signal
s1 s0 s0 s1
where s0 =-Acos(ct) and s1 =Acos(ct)
Major drawback – rapid amplitude change between symbols due to phase
discontinuity, which requires infinite bandwidth. Binary Phase Shift Keying
(BPSK) demonstrates better performance than ASK and BFSK
BPSK can be expanded to a M-ary scheme, employing multiple phases and
amplitudes as different states
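A short BPSK simulation sketch in Python; the carrier frequency, sampling rate and Eb/N0 are assumptions for illustration. The baseband bits are mapped to ±A cos(ω_c t), a coherent correlator recovers them, and the measured BER can be compared against the standard Q(√(2E_b/N_0)) result.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
fc, fs, Tb = 4.0, 64.0, 1.0            # carrier freq, sample rate, bit period (assumed)
t = np.arange(0, Tb, 1 / fs)
carrier = np.cos(2 * np.pi * fc * t)
Eb = np.sum(carrier ** 2) / fs          # energy per bit of the reference waveform

EbN0_dB = 6.0
N0 = Eb / 10 ** (EbN0_dB / 10)
sigma = np.sqrt(N0 * fs / 2)            # per-sample noise std for this discrete model

bits = rng.integers(0, 2, 20000)
errors = 0
for b in bits:
    tx = (1 if b else -1) * carrier                  # s1 = +A cos, s0 = -A cos
    rx = tx + sigma * rng.standard_normal(t.size)
    z = np.dot(rx, carrier)                           # coherent correlator
    errors += ((z > 0) != bool(b))
ber = errors / bits.size
print("simulated BER:", ber)
print("theory Q(sqrt(2Eb/N0)):", 0.5 * erfc(sqrt(10 ** (EbN0_dB / 10))))
```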
6
BPSK TRANSMITTER

7
BINARY TO BIPOLAR
CONVERSION

8
COHERENT BPSK RECEIVER

9
BPSK WAVEFORM

10
11
QPSK
 Quadrature Phase Shift Keying (QPSK) can be interpreted
as two independent BPSK systems (one on the I-channel
and one on the Q-channel), and thus gives the same
performance but twice the bandwidth (spectrum) efficiency.
 QPSK has twice the bandwidth efficiency of BPSK, since
2 bits are transmitted in a single modulation symbol.

12
Symbol and corresponding phase shifts in QPSK

13
QPSK

QPSK → Quadrature Phase Shift Keying

 Four different phase states in one symbol period


 Two bits of information in each symbol
Phase: 0 π/2 π 3π/2 → possible phase values
Symbol: 00 01 11 10
Note that we choose binary representations so an error between two adjacent
points in the constellation only results in a single bit error

 For example, decoding a phase to be π instead of π/2 will result in a "11"


when it should have been "01", only one bit in error.

14
15
 Now we have two basis functions
 Es = 2 Eb since 2 bits are transmitted per symbol
 I = in-phase component from sI(t).
 Q = quadrature component that is sQ(t).

16
QPSK Transmitter

AN OFFSET QPSK TRANSMITTER

17
QPSK Waveforms

18
QPSK Receiver

19
Types of QPSK

Q Q Q

I I I

Conventional QPSK          Offset QPSK          π/4 QPSK

 Conventional QPSK has transitions through zero (i.e. 180° phase transitions). Highly linear
amplifiers are required.
 In Offset QPSK, the phase transitions are limited to 90°; the transitions on the I and Q
channels are staggered.
 In π/4 QPSK the set of constellation points is toggled each symbol, so transitions through
zero cannot occur. This scheme produces the lowest envelope variations.
 All QPSK schemes require linear power amplifiers.

20
Frequency Shift Keying (FSK)

Baseband
Data
1 0 0 1
BFSK
modulated
signal
f1 f0 f0 f1
where f0 =Acos(c-)t and f1 =Acos(c+)t
Example: The ITU-T V.21 modem standard uses FSK
FSK can be expanded to a M-ary scheme, employing multiple frequencies
as different states

21
Generation & Detection of FSK

22
Amplitude Shift Keying (ASK)

Baseband
Data
1 0 0 1 0
ASK
modulated
signal
Acos(t) Acos(t)

Pulse shaping can be employed to remove spectral spreading


ASK demonstrates poor performance, as it is heavily affected by noise,
fading, and interference

23
Error Probabilities of PSK, QPSK, BFSK

24
Power Spectral Density (PSD)
 In practice, pulse shaping should be considered for a precise bandwidth
measurement and considered in the spectral efficiency calculations.
 Power spectral density (PSD) describes the distribution of signal power in the
frequency domain. If the baseband equivalent of the transmitted signal sequence
is given as

g(t) = Σ_k a_k p(t − kT_s)      a_k: baseband modulation symbol
                                T_s: signal interval      p(t): pulse shape

then the PSD of g(t) is given as

Φ_g(f) = (1/T_s) |P(f)|² Φ_a(f)      where P(f) = F{p(t)}

Φ_a(f) = Σ_n R_a(n) e^(−j2πf nT_s)      R_a(n) = (1/2) E[a_k* a_(k+n)]
M-ARY Phase Shift Keying (MPSK)

In M-ary PSK, the carrier phase takes on one of the M
possible values, namely θ_i = 2π(i − 1) / M,
where i = 1, 2, 3, ..., M.
The modulated waveform can be expressed as

where Es is energy per symbol = (log2 M) Eb


Ts is symbol period = (log2 M) Tb.

26
The above equation in the Quadrature form is

By choosing orthogonal basis signals

defined over the interval 0  t  Ts

27
M-ary signal set can be expressed as

 Since there are only two basis signals, the constellation


of M-ary PSK is two dimensional.

 The M-ary message points are equally spaced on a
circle of radius √E_s, centered at the origin.

 The constellation diagram of an 8-ary PSK signal set is


shown in fig.

28
M-ARY PSK Transmitter

29
Coherent M-ARY PSK Receiver

30
M-ARY Quadrature Amplitude
Modulation (QAM)
It’s a Hybrid modulation
As we allow the amplitude to also vary with the phase, a
new modulation scheme called quadrature amplitude
modulation (QAM) is obtained.
The constellation diagram of 16-ary QAM consists of a
square lattice of signal points.
Combines amplitude and phase modulation
One symbol is used to represent n bits.
BER increases with n.

31
The general form of an M-ary QAM signal can be
defined as

where
Emin is the energy of the signal with the lowest
amplitude and
ai and bi are a pair of independent integers chosen
according to the location of the particular signal point.

 In M-ary QAM energy per symbol and also distance


between possible symbol states is not a constant.

32
M-PSK AND M-QAM

M-PSK (Circular Constellations)          M-QAM (Square Constellations)

[Figure: 4-PSK and 16-PSK constellation points lie on circles in the (a_n, b_n)
plane; 4-PSK and 16-QAM points form a square lattice.]

Tradeoffs
– Higher-order modulations (M large) are more spectrally
efficient but less power efficient (i.e. BER higher).
– M-QAM is more spectrally efficient than M-PSK but
also more sensitive to system nonlinearities.

33
QAM Constellation Diagram

34
Differential Phase Shift Keying (DPSK)

• DPSK is a non-coherent form of phase shift keying which
avoids the need for a coherent reference signal at the
receiver.
Advantage:
• Non-coherent receivers are easy and cheap to build,
hence widely used in wireless communications.
• DPSK eliminates the need for a coherent reference signal
at the receiver by combining two basic operations at the
transmitter: differential encoding of the input binary wave,
followed by phase-shift keying.

35
DPSK Waveforms

36
Transmitter/Generator of DPSK Signal

37
Non-coherent Detection

38
Non-coherent DPSK Receiver

DPSK Receiver

Dpsk receiver using


correlator 39
Non-coherent BFSK Receiver

40
Comparisons Between Modulation Techniques

41
VEMU INSTITUTE OF TECHNOLOGY
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)
P.Kothakota, Puthalapattu (M), Chittoor Dist – 517 112 AP, India

Department of Electronics and Communication Engineering

Digital Communication Systems


Unit-5
Channel Coding

By
B SAROJA
Associate Professor
Dept. of ECE
Contents
Error Detection & Correction
Repetition & Parity Check Codes, Interleaving
Code Vectors and Hamming Distance
Forward Error Correction (FEC) Systems
Automatic Retransmission Query (ARQ) Systems
Linear Block Codes –Matrix Representation of Block Codes
Convolutional Codes – Convolutional Encoding, Decoding
Methods.
Introduction:
Types of Errors

3
Single-bit errors are the least likely type of error in serial data transmission, because
the noise must have a very short duration, which is very rare. However, this kind of
error can happen in parallel transmission.
Example:
 If data is sent at 1Mbps then each bit lasts only 1/1,000,000 sec. or 1 μs.
 For a single-bit error to occur, the noise must have a duration of only 1 μs, which is
very rare.

4
Burst Error

5
6
The term burst error means that two or more bits in the data unit have changed from
1 to 0 or from 0 to 1.

A burst error does not necessarily mean that the errors occur in consecutive
bits; the length of the burst is measured from the first corrupted bit to the last
corrupted bit. Some bits in between may not have been corrupted.

7
 Burst error is most likely to happen in serial transmission, since the duration of noise is
normally longer than the duration of a bit.
 The number of bits affected depends on the data rate and the duration of the noise.
Example:

If data is sent at a rate of 1 kbps, then a noise of 1/100
sec can affect 10 bits (1/100 × 1000).

If the same data is sent at a rate of 1 Mbps, then a noise of
1/100 sec can affect 10,000 bits (1/100 × 10^6).

8
Error Detection

Error detection means to decide whether the received data is correct or not without having a
copy of the original message.

Error detection uses the concept of redundancy, which means adding extra bits for detecting
errors at the destination.

9
Error Correction
It can be handled in two ways:
1) receiver can have the sender retransmit the entire data unit.
2) The receiver can use an error-correcting code, which automatically corrects
certain errors.

10
Single-bit Error Correction
To correct an error, the receiver reverses the
value of the altered bit. To do so, it must know
which bit is in error.
Number of redundancy bits needed (a small helper follows below):
Let data bits = m
Redundancy bits = r
Total message sent = m + r
The value of r must satisfy the following relation:
2^r ≥ m + r + 1
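A tiny helper (Python, purely illustrative) that finds the smallest r satisfying 2^r ≥ m + r + 1 for a given number of data bits m.

```python
def redundancy_bits(m):
    """Smallest r with 2**r >= m + r + 1 (single-bit error correction)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 11, 26):
    print(f"m = {m:2d} data bits -> r = {redundancy_bits(m)} redundancy bits")
```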

11
Error Correction

12
Repetition
Retransmission is a very simple concept. Whenever one
party sends something to the other party, it retains a
copy of the data it sent until the recipient has
acknowledged that it received it. In a variety of
circumstances the sender automatically retransmits
the data using the retained copy.

13
Parity Check Codes
information bits transmitted = k
bits actually transmitted = n = k+1
Code Rate R = k/n = k/(k+1)

Error detecting capability = 1


Error correcting capability = 0

14
Parity Codes – Example 1
Even parity
(i) d=(10110) so,
c=(101101)
(ii) d=(11011) so,
c=(110110)

15
Parity Codes – Example 2
Coding table for the (4,3) even parity code

Dataword    Codeword
0 0 0       0 0 0 0
0 0 1       0 0 1 1
0 1 0       0 1 0 1
0 1 1       0 1 1 0
1 0 0       1 0 0 1
1 0 1       1 0 1 0
1 1 0       1 1 0 0
1 1 1       1 1 1 1

16
Parity Codes
To decode
 Calculate sum of received bits in block (mod 2)
 If sum is 0 (1) for even (odd) parity then the dataword is the first k bits of the received
codeword
 Otherwise error

Code can detect single errors

But it cannot correct the error, since the error could be in any
bit.
For example, if the received codeword is (100000), the
transmitted codeword could have been (000000) or
(110000), with the error being in the first or second
place respectively.
Note the error could also lie in other positions, including the
parity bit. (A small encode/check sketch follows below.)
17
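A minimal even-parity encode/check sketch in Python, matching the single-parity-bit code described above (the dataword length is arbitrary).

```python
def parity_encode(data_bits):
    """Append one even-parity bit so the codeword has an even number of 1s."""
    return data_bits + [sum(data_bits) % 2]

def parity_check(codeword):
    """Return True if the received block passes the even-parity check."""
    return sum(codeword) % 2 == 0

c = parity_encode([1, 0, 1, 1, 0])
print("codeword:", c, "ok:", parity_check(c))
c[2] ^= 1                                    # a single-bit error is detected...
print("1 error :", c, "ok:", parity_check(c))
c[3] ^= 1                                    # ...but a second error hides it
print("2 errors:", c, "ok:", parity_check(c))
```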
Interleaving
Interleaving is a process or methodology to make a system more efficient, fast and
reliable by arranging data in a non contiguous manner. There are many uses for
interleaving at the system level, including:

Storage: As hard disks and other storage devices are used to store user and system
data, there is always a need to arrange the stored data in an appropriate way.

Error Correction: Errors in data communication and memory can be corrected


through interleaving.

18
Code Vectors
In practice, we have a message (consisting of words,
numbers, or symbols) that we wish to transmit. We
begin by encoding each “word” of the message as a
binary vector.
A binary code is a set of binary vectors (of the same
length) called code vectors.
The process of converting a message into code vectors is
called encoding, and the reverse process is called
decoding.

19
Hamming Distance
Hamming distance is a metric for comparing two binary data strings.
While comparing two binary strings of equal length, Hamming distance
is the number of bit positions in which the two bits are different.
The Hamming distance between two strings, a and b is denoted as d(a,b).
It is used for error detection or error correction when data is transmitted
over computer networks. It is also used in coding theory for comparing
equal-length data words.
Example :
Suppose there are two strings 1101 1001 and 1001 1101.
11011001 ⊕ 10011101 = 01000100. Since, this contains two 1s, the
Hamming distance, d(11011001, 10011101) = 2.
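The same example, computed with a short Python helper (illustrative only).

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("11011001", "10011101"))   # -> 2
```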

20
FEC System

21
Forward error correction (FEC) or channel coding is a
technique used for controlling errors in data
transmission over unreliable or noisy communication
channels.
The central idea is the sender encodes the message in
a redundant way by using an error-correcting code (ECC).
FEC gives the receiver the ability to correct errors without
needing a reverse channel to request retransmission of
data, but at the cost of a fixed, higher forward channel
bandwidth.
FEC is therefore applied in situations where retransmissions
are costly or impossible, such as one-way communication
links and when transmitting to multiple receivers
in multicast.
22
Automatic Repeat Request (ARQ)
Automatic Repeat reQuest (ARQ), also known as Automatic Repeat
Query, is an error-control method for data transmission that
uses acknowledgements and timeouts to achieve reliable data
transmission over an unreliable service.
If the sender does not receive an acknowledgment before the
timeout, it usually re-transmits the frame/packet until the sender
receives an acknowledgment or exceeds a predefined number of
re-transmissions.
The types of ARQ protocols include
Stop-and-wait ARQ
Go-Back-N ARQ
Selective Repeat ARQ
All three protocols usually use some form of sliding window
protocol to tell the transmitter which (if any)
packets need to be retransmitted.
23
ARQ System

24
Block Codes
Data is grouped into blocks of length k bits
(dataword)
Each dataword is coded into blocks of length n bits
(codeword), where in general n>k
This is known as an (n,k) block code
A vector notation is used for the datawords and
codewords,
 Dataword d = (d1 d2….dk)
 Codeword c = (c1 c2……..cn)

The redundancy introduced by the code is


quantified by the code rate,
 Code rate = k/n
 i.e., the higher the redundancy, the lower the code rate

25
Block Code - Example
Data word length k = 4
Codeword length n = 7
This is a (7,4) block code with code rate = 4/7
For example, d = (1101), c = (1101001)

26
Linear Block Codes: Matrix
Representation
Number of parity bits: n − k (= 1 for the single parity check code)
Message m = {m1 m2 … mk}
Transmitted Codeword c = {c1 c2 … cn}
A generator matrix G (k × n):

c = mG

27
Linear Block Codes

Linearity c1  m1G,
c1  c2  (m1  m2 )G c2  m2G

Example : 4/7 Hamming Code


 k = 4, n = 7
 4 message bits at (3,5,6,7)
 3 parity bits at (1,2,4)
 Error correcting capability =1
 Error detecting capability = 2

28
Linear Block Codes
If there are k data bits, all that is required is to hold k linearly
independent code words, i.e., a set of k code words none of
which can be produced by linear combinations of 2 or more
code words in the set.

The easiest way to find k linearly independent code words is to


choose those which have ‘1’ in just one of the first k
positions and ‘0’ in the other k-1 of the first k positions.

29
Linear Block Codes
For example for a (7,4) code, only four codewords are
required, e.g.,
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

• So, to obtain the codeword for dataword 1011, the first, third
and fourth codewords in the list are added together, giving
1011010
• This process will now be described in more detail

30
An (n,k) block code has code vectors
d=(d1 d2….dk) and
c=(c1 c2……..cn)

The block coding process can be written as c=dG


where G is the Generator Matrix

 a11 a12 ... a1n   a1 


a   
 21 a22 ... a2n  a 2 
G 
 . . ... .   . 
   
 ak1 ak 2 ... akn  a k 
31
Thus,

c = Σ_{i=1}^{k} d_i a_i

• ai must be linearly independent, i.e., Since codewords


are given by summations of the ai vectors, then to avoid
2 datawords having the same codeword the ai vectors
must be linearly independent.
• The sum (mod 2) of any 2 codewords is also a codeword,
i.e., since for datawords d1 and d2 we have

d3 = d1 ⊕ d2
32
So,

c3 = Σ_{i=1}^{k} d_3i a_i = Σ_{i=1}^{k} (d_1i ⊕ d_2i) a_i = Σ_{i=1}^{k} d_1i a_i ⊕ Σ_{i=1}^{k} d_2i a_i

c3 = c1 ⊕ c2

0 is always a codeword, i.e.,
since all zeros is a dataword, then

c = Σ_{i=1}^{k} 0 · a_i = 0

(A small encoding sketch using the (7,4) generator above follows below.)
33
Decoding Linear Codes
One possibility is a ROM look-up table
In this case received codeword is used as an address
Example – Even single parity check code;
Address Data
000000 0
000001 1
000010 1
000011 0
……… .
Data output is the error flag, i.e., 0 – codeword ok, 1 – error detected
If no error, dataword is first k bits of codeword
For an error-correcting code the ROM can also store the corrected
datawords.
34
Convolutional Codes
Block codes require a buffer
Example
k=1
n=2
Rate R = ½

35
Convolutional Codes: Decoding
The encoder consists of shift registers forming a finite state machine
Decoding is also simple – the Viterbi decoder, which works by tracking these states
First used by NASA in the Voyager space programme
Extensively used for coding speech data in mobile phones
(a small encoder sketch follows below)
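A minimal rate-1/2 convolutional encoder sketch in Python, with constraint length 3 and the common generators 111 and 101 (these particular generators are an assumption, not taken from the slides). Each input bit produces two output bits, matching the k = 1, n = 2, R = 1/2 example above.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder, constraint length 3, zero initial state."""
    state = [0, 0]                      # shift register contents
    out = []
    for b in bits:
        window = [b] + state            # current bit plus the two previous bits
        out.append(sum(x & y for x, y in zip(g1, window)) % 2)
        out.append(sum(x & y for x, y in zip(g2, window)) % 2)
        state = [b, state[0]]           # shift the register
    return out

msg = [1, 0, 1, 1, 0]
print(conv_encode(msg))                 # 2 coded bits per message bit
```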

36
