Subject Name: Digital Communication
Subject Code: EC-502
Semester: 5th
Unit 1
Syllabus: Random variables Cumulative distribution function, Probability density function, Mean, Variance
and standard deviations of random variable, Gaussian distribution, Error function, Correlation and
autocorrelation, Central-limit theorem, Error probability, Power Spectral density of digital data.
If in any finite interval X(λ) assumes only a finite number of distinct values, the random variable is discrete; for example, the outcome of tossing a die.
If in any finite interval X(λ) can assume a continuum of values, the random variable is continuous; for example, the shift in the point of impact of a bullet due to wind.
A random variable is a mapping from the sample space Ω to a set of real numbers. What does this mean? Let's take the usual evergreen example of flipping a coin.
In a coin-flipping experiment the outcome is not known prior to the experiment; we cannot predict it with certainty. But we know all the possible outcomes: Head or Tail. Assign real numbers to these events, say "0" to Head and "1" to Tail, and associate a variable X that can take these two values. This variable X is called a random variable, since it can randomly take the value 0 or 1 before the actual experiment is performed.
Obviously, we do not want to wait until the coin-flipping experiment is done, because the outcome would then lose its significance; instead we associate a probability with each possible event. In the coin-flipping experiment all outcomes are equally probable, so the probability of getting Head, as well as that of getting Tail, is 0.5.
This can be written as,
𝑃(𝑋 = 0) = 0.5 𝑎𝑛𝑑 𝑃(𝑋 = 1) = 0.5
The cumulative distribution function (CDF), or simply distribution function, for a discrete random variable is defined as
$$F_X(x) = P(X \le x) = \sum_{u \le x} f(u), \qquad -\infty < x < \infty$$
If X can take on the values x1, x2, x3, ..., xn, then the distribution function is given by
$$F(x) = \begin{cases} 0, & -\infty < x < x_1 \\ f(x_1), & x_1 \le x < x_2 \\ f(x_1) + f(x_2), & x_2 \le x < x_3 \\ \quad\vdots & \\ f(x_1) + f(x_2) + \cdots + f(x_n), & x_n \le x < \infty \end{cases}$$
In general, $F_X(x) = P(X \le x)$.
If we plot the CDF for our coin-flipping experiment, it is a staircase: 0 for x < 0, jumping to 0.5 at x = 0 and to 1 at x = 1.
The example provided above is of discrete nature, as the values taken by the random variable are discrete
and therefore the random variable is called Discrete Random Variable.
If the values taken by the random variables are of continuous nature, then the random variable is called
Continuous Random Variable and the corresponding cumulative distribution function will be smoother
without discontinuities.
1.4 Probability Density function (PDF) and Probability Mass Function (PMF):
It is more common to deal with the Probability Density Function (PDF) or Probability Mass Function (PMF) than with the CDF.
The PDF is obtained by taking the first derivative of the CDF:
$$f_X(x) = \frac{dF_X(x)}{dx}$$
For a discrete random variable that takes on discrete values, it is common to define the Probability Mass Function:
𝑓𝑋 (𝑥) = 𝑃(𝑋 = 𝑥)
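As a quick illustration (a minimal Python sketch; the simulation itself is not part of the notes), the PMF and the staircase CDF of the coin-flipping random variable can be estimated empirically:

    import random

    # Simulate the coin-flipping experiment: X = 0 for Head, X = 1 for Tail.
    N = 100_000
    samples = [random.randint(0, 1) for _ in range(N)]

    # Empirical PMF: relative frequency of each outcome.
    pmf = {x: samples.count(x) / N for x in (0, 1)}

    # Empirical CDF: F(x) = P(X <= x), evaluated at a few points.
    def cdf(x):
        return sum(1 for s in samples if s <= x) / N

    print(pmf)                      # both values close to 0.5
    print(cdf(-1), cdf(0), cdf(1))  # ~0.0, ~0.5, 1.0: the staircase jumps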
The previous example was simple. Consider next a discrete random variable with PMF f(x) = x/10 for x = 1, 2, 3, 4. The problem becomes slightly more involved if we are asked to find the probability of getting a value less than or equal to 3. The straightforward approach is to add the probabilities of getting the values x = 1, 2, 3, which comes out to be 1/10 + 2/10 + 3/10 = 6/10. This is just the CDF evaluated at x = 3; for a continuous random variable the corresponding quantity is the integral of the probability density function up to that point.
Based on how the PDF graph looks, PDFs fall into different categories such as the binomial, uniform, Gaussian, chi-square, Rayleigh and Rician distributions. Of these, you will encounter the Gaussian distribution, or Gaussian random variable, in digital communication very often. Every PDF has the following properties:
a) $f_X(x) \ge 0$ for all $x$
This results from the fact that F(x) increases monotonically as x increases: more outcomes are included in the probability of occurrence represented by F(x).
b) $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
c) $F(x) = \int_{-\infty}^{x} f_X(u)\,du$
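These three properties can be checked numerically for the Gaussian PDF mentioned above (a sketch; the standard Gaussian with mean 0 and variance 1 is an assumed example):

    import numpy as np

    # Standard Gaussian PDF f(x); mean 0, variance 1 chosen for illustration.
    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

    x = np.linspace(-10, 10, 200_001)

    # Property (a): f(x) >= 0 everywhere on the grid.
    assert np.all(f(x) >= 0)

    # Property (b): total area under the PDF is 1.
    print(np.trapz(f(x), x))        # ~1.0

    # Property (c): F(x) is the running integral of f; check F(0) = 0.5.
    F = np.cumsum(f(x)) * (x[1] - x[0])
    print(F[len(x) // 2])           # ~0.5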
1.5 Mean:
The mean of a random variable is defined as the weighted average of all possible values the random
variable can take. Probability of each outcome is used to weight each value when calculating the mean.
Mean is also called expectation (E[X])
For a continuous random variable X with probability density function f_X(x),
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx$$
For a discrete random variable X, the mean is calculated as the weighted average of all possible values $x_i$, each weighted by its individual probability $p_i$:
$$E[X] = \sum_i x_i p_i$$
1.6 Variance:
Variance measures the spread of a distribution. For a continuous random variable X, the variance is defined
as
$$\operatorname{var}[X] = \int_{-\infty}^{\infty} \left(x - E[X]\right)^2 f_X(x)\,dx$$
For a constant c the following properties hold for the expectation:
$$E[cX] = cE[X], \qquad E[X + c] = E[X] + c, \qquad E[c] = c$$
For a constant c the following properties hold for the variance:
$$\operatorname{var}[cX] = c^2\operatorname{var}[X], \qquad \operatorname{var}[X + c] = \operatorname{var}[X], \qquad \operatorname{var}[c] = 0$$
The PDF and CDF define a random variable completely. For example, if two random variables X and Y have the same PDF, then they will have the same CDF, and therefore their mean and variance will be the same.
On the other hand, the mean and variance describe a random variable only partially: if two random variables X and Y have the same mean and variance, they may or may not have the same PDF or CDF.
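A small worked sketch (the fair die is an assumed example) computing a mean and variance directly from a PMF, as per the definitions above:

    # Fair die: values 1..6, each with probability 1/6.
    values = range(1, 7)
    p = 1 / 6

    mean = sum(x * p for x in values)               # E[X] = sum of x_i * p_i
    var = sum((x - mean) ** 2 * p for x in values)  # var[X] = E[(X - E[X])^2]

    print(mean, var)  # 3.5, ~2.9167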
Example 1.7.1.1
A fair die is tossed 5 times. A toss is called a success if face 1 or 6 appears. Find (a) the probability of two
successes, (b) the mean and the standard deviation for the number of successes.
(a)
$$n = 5, \quad p = \frac{2}{6} = \frac{1}{3}, \quad q = 1 - p = 1 - \frac{1}{3} = \frac{2}{3}$$
$$P(X = 2) = \binom{5}{2}\left[\frac{1}{3}\right]^2\left[\frac{2}{3}\right]^{5-2} = \frac{80}{243}, \qquad \text{where } \binom{n}{x} = \frac{n!}{x!\,(n-x)!}$$
(b)
$$\text{Mean} = np = 5 \times \frac{1}{3} = 1.667$$
$$\text{Standard deviation} = \sqrt{npq} = \sqrt{5 \times \frac{1}{3} \times \frac{2}{3}} = 1.054$$
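The same results can be verified numerically (a sketch using only the Python standard library):

    from math import comb, sqrt

    n, p = 5, 1 / 3          # five tosses; success = face 1 or 6
    q = 1 - p

    # (a) Probability of exactly two successes.
    P2 = comb(n, 2) * p**2 * q**(n - 2)
    print(P2, 80 / 243)       # both ~0.3292

    # (b) Mean and standard deviation of the binomial random variable.
    print(n * p, sqrt(n * p * q))   # 1.667, 1.054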
Poisson Distribution
A discrete random variable X has a Poisson distribution if
$$P(X = x) = \frac{\lambda^x e^{-\lambda}}{x!}, \qquad x = 0, 1, 2, \ldots$$
where λ is a positive constant. The properties of the Poisson distribution are
$$\text{Mean} = \lambda, \qquad \text{Variance} = \lambda$$
Error Function
The error function of z is defined as
$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-u^2}\,du \qquad \ldots(1.7.3.5)$$
The error function has values between 0 and 1:
$$\operatorname{erf}(0) = 0 \quad \text{and} \quad \operatorname{erf}(\infty) = 1$$
The complementary error function of z is defined as
$$\operatorname{erfc}(z) = 1 - \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} e^{-u^2}\,du \qquad \ldots(1.7.3.6)$$
Equivalently,
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt$$
In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable X that is normally distributed with mean 0 and variance 1/2, erf(x) describes the probability of X falling in the range [−x, x].
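This interpretation can be checked numerically (a sketch; math.erf and statistics.NormalDist from the Python standard library are used):

    from math import erf, sqrt
    from statistics import NormalDist

    # For X ~ N(0, 1/2), P(-x <= X <= x) should equal erf(x).
    # With the normal CDF Phi: P(|X| <= x) = 2*Phi(x/sigma) - 1, sigma = sqrt(1/2).
    sigma = sqrt(0.5)
    for x in (0.5, 1.0, 2.0):
        prob = 2 * NormalDist(0, sigma).cdf(x) - 1
        print(x, erf(x), prob)   # the last two columns agree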
In digital communication, channel noise is often modeled as normally distributed. Modeling the noise in a channel as normally distributed, when the noise is the sum of a sufficiently large number of independent components, is justified by the central limit theorem.
1.10 Correlation
The correlation (more precisely, cross correlation) between two waveforms is a measure of the similarity between one waveform and a time-delayed version of the other. It expresses how much one waveform is related to the time-delayed version of the other waveform.
The expression for correlation is very close to that for convolution. Consider two general complex functions f1(t) and f2(t), which may or may not be periodic and are not restricted to a finite interval. The cross correlation, or simply correlation, R_{1,2}(τ) between the two functions is defined as follows:
$$R_{1,2}(\tau) = \lim_{T \to \infty} \int_{-T/2}^{T/2} f_1(t)\, f_2^*(t + \tau)\,dt \qquad \ldots(1.10.1a)$$
(The conjugate symbol * is dropped if the functions are real.)
This represents a shift of the function f2(t) by an amount −τ (i.e., towards the left). A similar effect can be obtained by shifting f1(t) by an amount +τ (i.e., towards the right). Therefore correlation may also be defined as
$$R_{1,2}(\tau) = \lim_{T \to \infty} \int_{-T/2}^{T/2} f_1(t - \tau)\, f_2^*(t)\,dt \qquad \ldots(1.10.1b)$$
Let us define the correlation for two cases, (i) Energy (non periodic) signal and (ii) Power (Periodic) Signals.
In the definition of correlation the limits of integration may be taken as infinite for energy signals:
$$R_{1,2}(\tau) = \int_{-\infty}^{\infty} f_1(t)\, f_2^*(t + \tau)\,dt = \int_{-\infty}^{\infty} f_1(t - \tau)\, f_2^*(t)\,dt \qquad \ldots(1.10.2a)$$
For power signals of period T0 the definition in the above equation may not converge. Therefore the average correlation over a period T0 is defined as
$$R_{1,2}(\tau) = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f_1(t)\, f_2^*(t + \tau)\,dt = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f_1(t - \tau)\, f_2^*(t)\,dt \qquad \ldots(1.10.2b)$$
The correlation definition represents the overlapping area between the two functions.
Autocorrelation
The correlation of a function with itself is called autocorrelation. For a power signal f(t),
$$R_{11}(\tau) = R(\tau) = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} f(t)\, f^*(t + \tau)\,dt \quad (+\text{ve shift})$$
$$= \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} f(t - \tau)\, f^*(t)\,dt \quad (-\text{ve shift})$$
Auto correlation function of energy signal at origin i.e. at 𝜏 = 0 is equal to total energy of that signal, which
is given as:
$$R(0) = \int_{-\infty}^{\infty} |x(t)|^2\,dt$$
The autocorrelation function of a power signal at the origin gives the average power of that signal:
$$R(0) = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} |x(t)|^2\,dt$$
The autocorrelation function is maximum at τ = 0, i.e. $|R(\tau)| \le R(0)\ \forall \tau$.
The autocorrelation function and the energy spectral density are Fourier transform pairs, i.e.
$$\text{F.T.}[R(\tau)] = \Psi(\omega), \qquad \Psi(\omega) = \int_{-\infty}^{\infty} R(\tau)\, e^{-j\omega\tau}\,d\tau$$
The autocorrelation of an energy signal can also be written as the convolution of the signal with its time-reversed version:
$$R(\tau) = x(t) * x(-t)$$
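A numerical sketch of this convolution form, using a rectangular pulse like the one treated next (the amplitude and width are assumed values); it also confirms R(0) = energy and |R(τ)| ≤ R(0):

    import numpy as np

    dt = 0.001
    t = np.arange(-1, 1, dt)
    x = np.where(np.abs(t) < 0.25, 1.0, 0.0)   # rectangular pulse, A = 1, T = 0.5

    # R(tau) as the convolution of x(t) with x(-t).
    R = np.convolve(x, x[::-1]) * dt

    # R at tau = 0 (the centre of the result) equals the signal energy.
    print(R[len(R) // 2], np.sum(x**2) * dt)   # both ~0.5

    # |R(tau)| <= R(0) for all tau.
    assert np.all(np.abs(R) <= R[len(R) // 2] + 1e-12)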
For example, for the rectangular pulse
$$s(t) = \begin{cases} A, & |t| < T/2 \\ 0, & |t| > T/2 \end{cases}$$
the spectrum is
$$S(f) = AT\,\frac{\sin(\pi T f)}{\pi T f}$$
Figure 1.02 Power Spectral Density
$$S_x(\omega) = \frac{1}{T}\left(R_0 + 2\sum_{n=1}^{\infty} R_n \cos n\omega T\right)$$
Since the pulse filter has the spectrum F(ω) ↔ f(t), we have
$$S_y(\omega) = |F(\omega)|^2 S_x(\omega) = \frac{|F(\omega)|^2}{T}\sum_{n=-\infty}^{\infty} R_n e^{-jn\omega T_b} = \frac{|F(\omega)|^2}{T}\left(R_0 + 2\sum_{n=1}^{\infty} R_n \cos n\omega T\right)$$
Hence, we get the equation for Power Spectral Density. Using this, we can find the PSD of various line
codes.
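As an illustration of the PSD of digital data (a sketch with assumed bit rate and sample rate; an averaged periodogram stands in for the analytical formula above), a random polar NRZ waveform shows the expected sinc²-shaped spectrum with nulls at multiples of 1/T_b:

    import numpy as np

    rng = np.random.default_rng(0)
    fs, Tb = 100.0, 0.1                 # sample rate and bit duration (assumed)
    spb = int(fs * Tb)                  # samples per bit

    # Random polar NRZ data: rectangular pulses of amplitude +/-1 V.
    bits = rng.choice([-1.0, 1.0], size=5000)
    x = np.repeat(bits, spb)

    # Averaged periodogram as a numerical PSD estimate.
    seg = 1000                          # an integral number of bits per segment
    segs = x[: len(x) // seg * seg].reshape(-1, seg)
    psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / (seg * fs)
    f = np.fft.rfftfreq(seg, 1 / fs)

    # The estimate follows Tb*(sin(pi f Tb)/(pi f Tb))^2, nulls at n/Tb = 10 Hz, ...
    null = np.argmin(np.abs(f - 10.0))
    print(psd[null] / psd[1])           # ~0: a spectral null at 10 Hz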
Unit 2
Digital conversion of Analog Signals: Sampling theorem, sampling of band pass signals, Pulse Amplitude
Modulation (PAM), types of sampling (natural, flat-top), equalization, signal reconstruction and
reconstruction filters, aliasing and anti-aliasing filter, Pulse Width Modulation (PWM), Pulse Position
Modulation (PPM)
Digital transmission of Analog Signals: Quantization, quantization error, Pulse Code Modulation (PCM),
companding, scrambling, TDM-PCM, Differential PCM, Delta modulation, Adaptive Delta modulation,
vocoder.
Sampling is defined as "the process of measuring the instantaneous values of a continuous-time signal in a discrete form." In the process of sampling, an analog signal is converted into a corresponding sequence of samples that are uniformly spaced in time.
This discretization of an analog signal is called sampling. The following figure indicates a continuous-time signal x(t) and a sampled signal xs(t). When x(t) is multiplied by a periodic impulse train, the sampled signal xs(t) is obtained.
The sampling frequency is the reciprocal of the sampling period and is simply called the sampling rate. The sampling rate denotes the number of samples taken per second.
For an analog signal to be reconstructed from the digitized signal, the sampling rate must be chosen carefully. The rate of sampling should be such that the data in the message signal is neither lost nor overlapped. Hence a rate was fixed for this, called the Nyquist rate.
$$p(t) = \sum_{n=-\infty}^{\infty} F_n\, e^{j2\pi n f_0 t}$$
where
$$F_n = \frac{1}{T}\int_{-T/2}^{T/2} p(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}P(n\omega_s)$$
Substituting the value of Fn in equation 2.1.2.2,
$$p(t) = \frac{1}{T}\sum_{n=-\infty}^{\infty} P(n\omega_s)\, e^{jn\omega_0 t}$$
The sampled signal is y(t) = x(t)·p(t), so that
$$y(t) = \frac{1}{T}\sum_{n=-\infty}^{\infty} P(n\omega_s)\, x(t)\, e^{jn\omega_0 t}$$
To get the spectrum of the sampled signal, take the Fourier transform on both sides:
$$\text{F.T.}[y(t)] = \text{F.T.}\left[\frac{1}{T}\sum_{n=-\infty}^{\infty} P(n\omega_s)\, x(t)\, e^{jn\omega_0 t}\right] = \frac{1}{T}\sum_{n=-\infty}^{\infty} P(n\omega_s)\,\text{F.T.}\left[x(t)\, e^{jn\omega_0 t}\right]$$
Here, you can observe that the sampled signal takes the period of impulse. The process of sampling can be
understood as under.
The sampled signal is given by
$$y(t) = x(t)\cdot\delta_{T_s}(t) = \sum_n x(nT_s)\,\delta(t - nT_s) \qquad \ldots(2.1.3.1)$$
The impulse train 𝛿𝑇𝑠 (𝑡) is a periodic signal of period 𝑇𝑠 , hence it can be expressed as a Fourier series
$$\delta_{T_s}(t) = \frac{1}{T_s}\left[1 + 2\cos\omega_s t + 2\cos 2\omega_s t + 2\cos 3\omega_s t + \cdots\right]$$
where $\omega_s = \dfrac{2\pi}{T_s} = 2\pi f_s \qquad \ldots(2.1.3.2)$
Therefore, as long as the sampling frequency f_s is greater than 2B, Y(ω) will consist of non-overlapping repetitions of X(ω), and x(t) can be recovered from y(t) by passing y(t) through an ideal low pass filter with cutoff frequency B Hz.
Nyquist Rate
The minimum sampling rate f_s = 2B required to recover x(t) from its samples y(t) is called the Nyquist rate, and the corresponding sampling interval T_s = 1/(2B) is called the Nyquist interval for x(t).
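A small numerical illustration of sampling above and below the Nyquist rate (the frequencies are assumed example values; the sign flip in the alias is a property of the sine):

    import numpy as np

    f0 = 3.0                        # signal frequency, Hz (assumed)
    t_ok = np.arange(0, 1, 1 / 10)  # fs = 10 Hz > 2*f0 : no aliasing
    t_bad = np.arange(0, 1, 1 / 4)  # fs = 4 Hz  < 2*f0 : aliasing

    x_ok = np.sin(2 * np.pi * f0 * t_ok)
    x_bad = np.sin(2 * np.pi * f0 * t_bad)

    # The undersampled sequence is indistinguishable from a 1 Hz sinusoid:
    # 3 Hz folds to |3 - 4| = 1 Hz, and for a sine the fold flips the sign.
    alias = -np.sin(2 * np.pi * 1.0 * t_bad)
    print(np.allclose(x_bad, alias, atol=1e-9))   # True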
In the case of band pass signals, the spectrum of the band pass signal X[ω] = 0 for frequencies outside the range f1 ≤ f ≤ f2. The frequency f1 is always greater than zero, and there is no aliasing effect when fs > 2f2. But sampling at that rate has two disadvantages:
The sampling rate is large in proportion to f2. This has practical limitations.
The sampled signal spectrum has spectral gaps.
To overcome this, the band pass theorem states that the input signal x(t) can be converted into its samples and recovered back without distortion even when the sampling frequency fs < 2f2.
Also,
$$f_s = \frac{1}{T} = \frac{2f_2}{m}$$
where m is the largest integer not exceeding f2/B and B is the bandwidth of the signal. If f2 = KB, then for band pass signals of bandwidth 2f_m and minimum sampling rate f_s = 2B = 4f_m, the spectrum of the sampled signal is given by
$$Y[\omega] = \frac{1}{T}\sum_{n=-\infty}^{\infty} X[\omega - 2nB]$$
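A worked instance of this formula (the band edges are assumed example values, not from the notes):

    from math import floor

    f1, f2 = 20e3, 24e3        # band edges in Hz (assumed example)
    B = f2 - f1                # bandwidth = 4 kHz

    m = floor(f2 / B)          # largest integer not exceeding f2/B
    fs = 2 * f2 / m            # band pass sampling rate

    print(m, fs)               # m = 6, fs = 8 kHz, far below 2*f2 = 48 kHz

Note that fs = 8 kHz here equals 2B, the theoretical minimum for this band placement.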
Aliasing
Aliasing can be referred to as “the phenomenon of a high-frequency component in the spectrum of a
signal, taking on the identity of a low-frequency component in the spectrum of its sampled version.”
The corrective measures taken to reduce the effect of Aliasing are −
In the transmitter section of PCM, a low pass anti-aliasing filter is employed, before the sampler, to
eliminate the high frequency components, which are unwanted.
The signal which is sampled after filtering is sampled at a rate slightly higher than the Nyquist rate.
This choice of having the sampling rate higher than Nyquist rate, also helps in the easier design of the
reconstruction filter at the receiver.
In pulse modulation there are different types of modulation for analog and digital information, as shown below:
PAM: Pulse Amplitude Modulation (analog pulse modulation)
PWM/PDM: Pulse Width (Duration) Modulation (analog pulse modulation)
PPM: Pulse Position Modulation (analog pulse modulation)
PCM: Pulse Code Modulation (digital pulse modulation)
Figure: Types of Modulation – Tree Diagram
The PCM system block diagram is shown in fig 3.2. The essential operations in the transmitter of a PCM
system are Sampling, Quantizing and Coding. The Quantizing and
encoding operations are usually performed by the same circuit, normally referred to as
analog to digital converter. The essential operations in the receiver are regeneration, decoding and
Page 7 of 24
demodulation of the quantized samples. Regenerative repeaters are used to reconstruct the transmitted
sequence of coded pulses in order to combat the accumulated effects of signal distortion and noise.
PAM can be used to generate the other pulse modulation signals and can carry the message or information at the same time.
2.7 Quantization
In the process of quantization we create a new signal 𝑚𝑞 (𝑡), which is an approximation to 𝑚(𝑡). The
quantized signal 𝑚𝑞 (𝑡), has the great merit that it is separable from the additive noise.
The operation of quantization is represented in figure 2.7.1. Here we have a signal m(t) whose amplitude varies in the range from VL to VH, as shown in the figure.
We have divided the total range into M equal intervals, each of size S, called the step size and given by
$$S = \Delta = \frac{V_H - V_L}{M}$$
In our example M = 8. At the centre of each of these steps we locate the quantization levels m0, m1, m2, ..., m7. The quantized signal m_q(t) is generated in the following manner:
Whenever the signal m(t) is in the range Δ0, the signal m_q(t) maintains the constant level m0; whenever m(t) is in the range Δ1, m_q(t) maintains the constant level m1; and so on. Hence the signal m_q(t) will be found at all times at one of the levels m0, m1, m2, ..., m7. The transition in m_q(t) from m0 to m1 is made abruptly when m(t) passes the transition level L01, which is midway between m0 and m1, and so on.
By quantizing the signal, the effect of noise can be reduced significantly. The difference between m(t) and m_q(t) can be regarded as noise and is called quantization noise:
$$\text{quantization noise} = m(t) - m_q(t)$$
The quantized signal and the original signal also differ from one another in a random manner. This difference, or error due to the quantization process, is called quantization error and is given by
$$e = m(t) - m_k$$
when m(t) happens to be close to the quantization level m_k, so that the quantizer output is m_k.
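A minimal uniform-quantizer sketch (the signal and parameters are assumed for illustration) showing that the quantization error stays within half a step:

    import numpy as np

    VL, VH, M = -1.0, 1.0, 8           # signal range and number of levels
    S = (VH - VL) / M                  # step size (Delta)

    t = np.linspace(0, 1, 1000)
    m = np.sin(2 * np.pi * t)          # example message signal

    # Snap each sample to the centre of the interval it falls in.
    idx = np.clip(np.floor((m - VL) / S), 0, M - 1)
    mq = VL + (idx + 0.5) * S

    e = m - mq                          # quantization error
    print(np.max(np.abs(e)) <= S / 2)   # True: error bounded by half a step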
The process of transforming sampled amplitude values of a message signal into a discrete amplitude value
is referred to as Quantization. The quantization Process has a two-fold effect:
1. the peak-to-peak range of the input sample values is subdivided into a finite set of decision levels or
decision thresholds that are aligned with the risers of the staircase, and
2. The output is assigned a discrete value selected from a finite set of representation levels that are aligned
with the treads of the staircase.
A quantizer is memory less in that the quantizer output is determined only by the value of a corresponding
input sample, independently of earlier analog samples applied to the input.
Types of Quantizers:
1. Uniform Quantizer
2. Non- Uniform Quantizer
In Uniform type, the quantization levels are uniformly spaced, whereas in non-uniform type the spacing
between the levels will be unequal and mostly the relation is logarithmic. Types of Uniform Quantizers:
(based on I/P - O/P Characteristics)
1. Mid-Rise type Quantizer
2. Mid-Tread type Quantizer
In the staircase-like graph, the origin lies in the middle of the tread portion in the Mid-Tread type, whereas it lies in the middle of the riser portion in the Mid-Rise type. Mid-Tread type: odd number of quantization levels. Mid-Rise type: even number of quantization levels.
Figure 2.7.3 IO Characteristics of Mid-Tread type Quantizer
Quantization noise is produced in the transmitter end of a PCM system by rounding off sample values of an
analog base-band signal to the nearest permissible representation levels of the quantizer. As such
quantization noise differs from channel noise in that it is signal dependent.
Let Δ be the step size of the quantizer and L the total number of quantization levels. The quantization levels are 0, ±Δ, ±2Δ, ±3Δ, ...
The quantization error Q is a random variable with sample values bounded by −Δ/2 < q < Δ/2. If Δ is small, the quantization error can be assumed to be a uniformly distributed random variable.
where f_Q(q) is the probability density function of the quantization error. If the signal does not overload the quantizer, the mean of the quantization error is zero and its variance is σ_Q². Therefore
$$\sigma_Q^2 = E\{Q^2\} = \int_{-\infty}^{\infty} q^2 f_Q(q)\,dq \qquad \ldots(2.7.1.4)$$
$$\sigma_Q^2 = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2\,dq = \frac{\Delta^2}{12} \qquad \ldots(2.7.1.5)$$
Thus the variance of the quantization noise produced by a uniform quantizer grows as the square of the step size. Equation (2.7.1.5) gives an expression for the quantization noise in a PCM system.
Let 𝜎𝑥 2 = Variance of the base band signal x(t) at the input of the quantizer.
When the base band signal is reconstructed at the receiver output, we obtain original signal plus
Quantization noise. Therefore output signal to Quantization noise ration (SNR) is given by
$$(SNR)_Q = \frac{\text{Signal Power}}{\text{Noise Power}} = \frac{\sigma_x^2}{\sigma_Q^2} = \frac{\sigma_x^2}{\Delta^2/12} \qquad \ldots(2.7.1.6)$$
The smaller the step size Δ, the larger the SNR.
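The Δ²/12 result is easy to confirm by simulation (a sketch; the step size is an assumed value):

    import numpy as np

    rng = np.random.default_rng(1)
    delta = 0.25

    # Quantization error modelled as uniform on (-delta/2, delta/2).
    q = rng.uniform(-delta / 2, delta / 2, size=1_000_000)

    print(np.var(q), delta**2 / 12)   # both ~0.0052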
2. The system operates with an average signal power above the error threshold so
that the effect of channel noise is made negligible and performance is there by
limited essentially by Quantization noise alone.
3. The Quantization is fine enough (say n>6) to prevent signal correlated patterns in
the Quantization error waveform
4. The Quantizer is aligned with input for a loading factor of 4
Note: 1. Error uniformly distributed
2. Average signal power
3. n > 6
4. Loading factor = 4
From (2.7.1.13): $10\log_{10}(SNR)_O = 6n - 7.2$
$$10\log_{10}(SNR)_Q = 6\left(\frac{B}{W}\right) - 7.2 \qquad \ldots(2.7.1.14)$$
Overload Noise:- If the input signal level exceeds the designed range of the quantizer, the recovered analog waveform at the output of the PCM system will have flat tops near the peak values. This produces overload noise.
Granular Noise:- If the input level is reduced to a relatively small value with respect to the design level (quantization level), the error values are not the same from sample to sample, and the noise has a harsh sound resembling gravel being poured into a barrel. This is granular noise. This noise can be randomized (its power decreased) by increasing the number of quantization levels, i.e. increasing the PCM bit rate.
Hunting Noise:- This occurs when the input analog waveform is nearly constant. For these conditions, the
sample values at the Quantizer output can oscillate between two adjacent quantization levels, causing an
undesired sinusoidal type tone of frequency (0.5fs) at the output of the PCM system. This noise can be
reduced by designing the quantizer so that there is no vertical step at constant value of the inputs.
equivalent before transmission. Then the digits of the binary representation of the code are transmitted as
pulses. This system of transmission is called binary Pulse Code Modulation. The whole process can be
understood by the following diagram.
(A) Transmitter
(c) Receiver (block diagram: received input → quantizer → decoder → holding circuit → LPF → output message)
Figure (c) shows the receiver. The first block is again a quantizer, but this quantizer is different from the transmitter quantizer, as it only has to decide whether a pulse is present or absent. Thus there are only two quantization levels. The output of the quantizer goes to the decoder, which is a D/A converter that performs the inverse operation of the encoder. The decoder output is a sequence of quantized pulses. The original signal is reconstructed by the holding circuit and the LPF.
Pulse code modulation receivers are cost effective when compared to other modulation receivers.
Developing pulse code modulation is a bit complicated, and checking the transmission quality is also difficult and takes more time.
A large bandwidth is required for pulse code modulation compared to the bandwidth used by normal analog signals to transmit a message.
The channel bandwidth must be greater for digital encoding.
PCM systems are complicated compared to analog modulation methods and other systems.
Decoding also needs special equipment, which is likewise complex.
The word Companding is a combination of Compressing and Expanding, which means that it does both.
This is a non-linear technique used in PCM which compresses the data at the transmitter and expands the
same data at the receiver. The effects of noise and crosstalk are reduced by using this technique.
There are two types of companding techniques:
A-law Companding Technique
Uniform quantization is achieved at A = 1, where the characteristic curve is linear and no compression is done.
A-law has mid-rise at the origin; hence it contains a non-zero value.
A-law companding is used in PCM telephone systems.
µ-law Companding Technique
Uniform quantization is achieved at µ = 0; µ-law has mid-tread at the origin and is used in North American and Japanese PCM telephone systems.
quantize 𝑚(𝑘), and corresponding fewer bits will be needed to encode the signal. The basic principle of
DPCM is shown in figure 2.10.1.
(a) Transmitter
(b) Receiver
Figure 2.10.1 Differential PCM
The receiver consists of an accumulator, which adds up the received quantized differences Δ_Q(k), and a filter, which smooths out the quantization noise. The output of the accumulator is the signal approximation m̂(k), which becomes m̂(t) at the filter output.
At the transmitter we need to know whether m̂(t) is larger or smaller than m(t) and by how much. We may then determine whether the next difference Δ_Q(k) needs to be positive or negative, and of what amplitude, in order to bring m̂(t) as close as possible to m(t). For this reason we have a duplicate accumulator at the transmitter.
At each sampling time the transmitter difference amplifier compares m(t) and m̂(t), and the sample-and-hold circuit holds the result of that comparison, Δ(t), for the duration of the interval between sampling times. The quantizer generates the signal S0(t) = Δ_Q(k), both for transmission to the receiver and to provide the input to the duplicate of the receiver accumulator in the transmitter.
The basic limitation of the DPCM scheme is that the transmitted differences are quantized and are of
limited values.
Need for a predictor:
There is a correlation between successive samples of the signal m(t). To take advantage of this correlation, a predictor is included. It needs to incorporate the facility for storing past differences and carrying out some algorithm to predict the next required increment.
Delta Modulation is a DPCM scheme in which the difference signal Δ(t) is encoded into just a single bit. The single bit, providing for just two possibilities, is used to increase or decrease the estimate m̂(t) [i.e., m_q(t)]. The Linear Delta Modulator is shown in figure 2.11.1.
The baseband signal m(t) and its quantized approximation m_q(t) are applied as inputs to a comparator. The comparator has one fixed output V(H) when m(t) > m_q(t) and another fixed output V(L) when m(t) < m_q(t). Ideally the transition between V(H) and V(L) is arbitrarily abrupt as m(t) − m_q(t) passes through zero.
Figure 2.11.2 The response of the delta modulator to a baseband signal m(t)
It should be noted that when m_q(t) has caught up with m(t), then even though m(t) remains constant, m_q(t) hunts, swinging up and down about m(t).
Slope Overload
The excessive disparity between m(t) and m_q(t) is described as slope overload error, and occurs whenever m(t) has a slope larger than the slope S/T_s that can be sustained by the waveform m_q(t). The slope overload shown in figure 2.11.4 develops when the step size S is too small. To overcome the overload we have to increase the sampling rate above the rate initially selected to satisfy the Nyquist criterion. For a sinusoidal message of amplitude A and frequency f, the step size S and sampling rate f_s must satisfy
$$S f_s \ge 2\pi f A$$
Features of DM
Following are some of the features of delta modulation.
An over-sampled input is taken to make full use of the signal correlation.
The quantization design is simple.
The sampling rate is much higher than the Nyquist rate.
The quality is moderate.
The design of the modulator and the demodulator is simple.
The stair-case approximation of output waveform.
The step-size is very small, i.e., Δ (delta).
The bit rate can be decided by the user.
This involves simpler implementation.
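A minimal linear delta modulator sketch along the lines described above (the message, step size and clock rate are assumed values):

    import numpy as np

    fs, S = 1000.0, 0.05                # sampling rate and step size (assumed)
    t = np.arange(0, 1, 1 / fs)
    m = np.sin(2 * np.pi * 2 * t)       # baseband message

    mq = np.zeros_like(m)               # staircase approximation
    bits = np.zeros(len(m), dtype=int)  # one transmitted bit per sample
    approx = 0.0
    for k in range(len(m)):
        bits[k] = 1 if m[k] > approx else 0   # comparator decision
        approx += S if bits[k] else -S        # step up or down by S
        mq[k] = approx

    print(np.max(np.abs(m - mq)))       # small, since S*fs exceeds the max slope

Here S·fs = 50 comfortably exceeds the maximum message slope 2π·2 ≈ 12.6, so no slope overload occurs.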
In delta modulation we came across the problem of determining the step size, which influences the quality of the output wave.
A larger step size is needed in the steep slope of the modulating signal, and a smaller step size is needed where the message has a small slope; with a fixed step the minute details get missed. So it would be better if we could control the adjustment of the step size according to our requirement in order to obtain the sampling in a desired fashion. This is the concept of Adaptive Delta Modulation.
Following is the block diagram of Adaptive delta modulator.
In figure 2.12.1 the output S0(t) is called e(k); it represents the error, i.e. the discrepancy between m(t) and m_q(t), and it is either V(H) or V(L).
The features of ADM are shown in figure 2.12.2. As long as the condition m(t) > m_q(t) persists, the jumps in m_q(t) become larger; hence m_q(t) catches up with m(t) sooner than in the case of linear DM, as shown by m′_q(t).
On the other hand, in response to a large slope in m(t), m_q(t) develops large jumps, and a large number of clock cycles is required for these jumps to settle down. Therefore the ADM system reduces slope overload but increases the quantization error. Also, when m(t) is constant, m_q(t) oscillates about m(t), but the oscillation frequency is half of the clock frequency.
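A sketch of the step-size adaptation idea (the doubling/halving rule and its limits are assumptions for illustration, not the exact algorithm of figure 2.12.1):

    import numpy as np

    fs, S0 = 1000.0, 0.01               # clock rate and basic step (assumed)
    t = np.arange(0, 1, 1 / fs)
    m = np.sin(2 * np.pi * 2 * t)

    approx, step, prev = 0.0, S0, 0
    mq = np.zeros_like(m)
    for k in range(len(m)):
        e = 1 if m[k] > approx else -1          # comparator output e(k)
        # Grow the step while successive decisions agree (steep slope);
        # shrink it when they alternate (hunting about a constant m(t)).
        step = min(step * 2, 0.5) if e == prev else max(step / 2, S0)
        approx += e * step
        mq[k] = approx
        prev = e

    print(np.max(np.abs(m - mq)))       # the staircase tracks the message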
A vocoder i.e. voice encoder is an analysis/synthesis system, used to reproduce human speech. In the
encoder, the input is passed through a multiband filter, each band is passed through an envelope follower,
and the control signals from the envelope followers are communicated to the decoder. The decoder
applies these control signals to corresponding filters in the (re)synthesizer.
It was originally developed as a speech coder for telecommunications applications in the 1930s, the idea
being to code speech for transmission. Its primary use in this fashion is for secure radio communication,
where voice has to be encrypted and then transmitted. The advantage of this method of "encryption" is
that no 'signal' is sent, but rather envelopes of the band pass filters. The receiving unit needs to be set up
in the same channel configuration.
The vocoder thus analyzes the speech information and recreates it at the receiver. The Voder (Voice Operating Demonstrator) generates synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal; it basically consists of the "second half" of the vocoder, but with manual filter controls, and needs a highly trained operator.
The human voice consists of sounds generated by the opening and closing of the glottis by the vocal cords,
which produces a periodic waveform with many harmonics. This basic sound is then filtered by the nose
and throat (a complicated resonant piping system) to produce differences in harmonic content (formants)
in a controlled way, creating the wide variety of sounds used in speech. There is another set of sounds,
known as the unvoiced and plosive sounds, which are created or modified by the mouth in different
fashions.
The vocoder examines speech by measuring how its spectral characteristics change over time. This results
in a series of numbers representing these modified frequencies at any particular time as the user speaks.
In simple terms, the signal is split into a number of frequency bands (the larger this number, the more
accurate the analysis) and the level of signal present at each frequency band gives the instantaneous
representation of the spectral energy content. Thus, the vocoder dramatically reduces the amount of
information needed to store speech, from a complete recording to a series of numbers. To recreate
speech, the vocoder simply reverses the process, processing a broadband noise source by passing it
through a stage that filters the frequency content based on the originally recorded series of numbers.
Information about the instantaneous frequency (as distinct from spectral characteristic) of the original
voice signal is discarded; it wasn't important to preserve this for the purposes of the vocoder's original use
as an encryption aid, and it is this "dehumanizing" quality of the vocoding process that has made it useful
in creating special voice effects in popular music and audio entertainment.
Since the vocoder process sends only the parameters of the vocal model over the communication link,
instead of a point by point recreation of the waveform, it allows a significant reduction in the bandwidth
required to transmit speech.
Unit 3
Digital Transmission Techniques: Phase shift Keying (PSK)- Binary PSK, differential PSK, differentially
encoded PSK, Quadrature PSK, M-ary PSK. Frequency Shift Keying (FSK)- Binary FSK (orthogonal and non-
orthogonal), M-ary FSK. Comparison of BPSK and BFSK, Quadrature Amplitude Shift Keying (QASK),
Minimum Shift Keying (MSK)
M-ary Encoding
M-ary Encoding techniques are the methods where more than two bits are made to transmit
simultaneously on a single signal. This helps in the reduction of bandwidth.
The types of M-ary techniques are M-ary ASK, M-ary FSK & M-ary PSK.
Amplitude Shift Keying (ASK) is a type of Amplitude Modulation which represents the binary data in the
form of variations in the amplitude of a signal.
Any modulated signal has a high frequency carrier. When a binary signal is ASK modulated, the output is zero for a Low input, while it is the carrier itself for a High input.
The figure 3.1.1 represents ASK modulated waveform along with its input.
ASK Demodulator
There are two types of ASK Demodulation techniques. They are −
Asynchronous ASK Demodulation/detection
Synchronous ASK Demodulation/detection
When the clock frequency at the transmitter matches the clock frequency at the receiver, the method is known as synchronous, as the frequencies get synchronized; otherwise it is known as asynchronous.
The modulated ASK signal is given to the half-wave rectifier, which delivers a positive half output. The low
pass filter suppresses the higher frequencies and gives an envelope detected output from which the
comparator delivers a digital output.
Frequency Shift Keying (FSK) is the digital modulation technique in which the frequency of the carrier signal varies according to the changes in the digital signal. FSK is a scheme of frequency modulation. The output of an FSK modulated wave is high in frequency for a binary High input and low in frequency for a binary Low input. The frequencies representing binary 1 and 0 are called the mark and space frequencies. The following image is the diagrammatic representation of an FSK modulated waveform along with its input.
FSK Modulator
The FSK modulator block diagram comprises of two oscillators with a clock and the input binary sequence.
Following is its block diagram.
Figure: FSK transmitter. Two oscillators, producing frequencies F1 and F2, are switched by the binary message to form the FSK modulated wave.
FSK Demodulator
There are different methods for demodulating a FSK wave. The main methods of FSK detection are
asynchronous detector and synchronous detector. The synchronous detector is a coherent one, while
asynchronous detector is a non-coherent one.
Phase Shift Keying (PSK) is the digital modulation technique in which the phase of the carrier signal is
changed by varying the sine and cosine inputs at a particular time. PSK technique is widely used for
wireless LANs, bio-metric, contactless operations, along with RFID and Bluetooth communications.
In binary phase-shift keying (BPSK) the transmitted signal is a sinusoid of fixed amplitude. It has one fixed
phase when the data is at one level and when the data is at the other level the phase is different by 180°. If
the sinusoid is of amplitude A it has a power
$$P_s = \frac{1}{2}A^2, \qquad \text{therefore} \qquad A = \sqrt{2P_s}$$
In BPSK the data b(t) is a stream of binary digits with voltage levels which, as a matter of convenience, we take to be +1 V and −1 V. When b(t) = 1 V we say it is at logic level 1, and when b(t) = −1 V we say it is at logic level 0.
Hence V_BPSK(t) can be written as
$$V_{BPSK}(t) = b(t)\sqrt{2P_s}\cos(\omega_o t) \qquad \ldots(3.4.1.3)$$
In practice, a BPSK signal is generated by applying the waveform cos(𝜔𝑜 𝑡), as a carrier, to a balanced
modulator and applying the baseband signal b(t) as the modulating waveform. In this sense BPSK can be
thought of as an AM signal.
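A short sketch generating the BPSK waveform of Eq. (3.4.1.3) by a balanced-modulator-style multiplication (carrier frequency, bit rate and the bit pattern are assumed values):

    import numpy as np

    Ps, fo, fb = 1.0, 10.0, 1.0          # power, carrier and bit rate (assumed)
    fs = 1000.0                          # simulation sampling rate
    t = np.arange(0, 4, 1 / fs)          # four bit intervals

    data = np.array([1, -1, -1, 1])      # b(t) levels for each bit
    b = np.repeat(data, int(fs / fb))    # NRZ waveform at +/-1 V

    # V_BPSK(t) = b(t) * sqrt(2 Ps) * cos(wo t): the balanced modulator output.
    v = b * np.sqrt(2 * Ps) * np.cos(2 * np.pi * fo * t)
    print(v.shape)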
BPSK Modulator:
The block diagram of Binary Phase Shift Keying consists of a balanced modulator, which has the carrier sine wave as one input and the binary sequence as the other input.
Following is the diagrammatic representation.
Figure 3.4.1 BPSK Modulator: a carrier-wave generator and the binary data sequence feed the balanced modulator, which produces the PSK wave.
The modulation of BPSK is done using a balanced modulator, which multiplies the two signals applied at its input. For a binary zero input the phase is 0°, and for a high input there is a phase reversal of 180°.
Following is the diagrammatic representation of BPSK Modulated output wave along with its given input.
Reception of BPSK:
The received signal has the form
$$V_{BPSK}(t) = b(t)\sqrt{2P_s}\cos(\omega_o t + \theta) = b(t)\sqrt{2P_s}\cos\omega_o(t + \theta/\omega_o) \qquad \ldots(3.4.1.4)$$
Here 𝜃 is a nominally fixed phase shift corresponding to the time delay θ/ωo which depends on the length
of the path from transmitter to receiver and the phase shift produced by the amplifiers in the" front-end"
of the receiver preceding the demodulator. The original data b(t) is recovered in the demodulator. The
demodulation technique usually employed is called synchronous demodulation and requires that there be
available at the demodulator the waveform cos(ωo t + θ). A scheme for generating the carrier at the
demodulator and for recovering the baseband signal is shown in Fig. 3.4.1.1.
The received signal is squared to generate the signal
$$\cos^2(\omega_o t + \theta) = \frac{1}{2} + \frac{1}{2}\cos 2(\omega_o t + \theta) \qquad \ldots(3.4.1.5)$$
The DC component is removed by the band pass filter, whose pass band is centered around 2f_o; we then have a signal whose waveform is that of cos 2(ω_o t + θ). A frequency divider (composed of a flip-flop and a narrow-band filter tuned to f_o) is used to regenerate the waveform cos(ω_o t + θ). Only the waveforms of the signals at the outputs of the squarer, filter and divider are relevant, not their amplitudes.
Accordingly in Fig. 3.4.1.1 we have arbitrarily taken amplitudes to be unity. In practice, the amplitudes will
be determined by features of these devices which are of no present concern. In any event, the carrier
having been recovered, it is multiplied with the received signal to generate
$$b(t)\sqrt{2P_s}\cos^2(\omega_o t + \theta) = b(t)\sqrt{2P_s}\left[\frac{1}{2} + \frac{1}{2}\cos 2(\omega_o t + \theta)\right] \qquad \ldots(3.4.1.6)$$
which is then applied to an integrator as shown.
We have included in the system a bit synchronizer. This device is able to recognize precisely the moment which corresponds to the end of the time interval allocated to one bit and the beginning of the next. At that moment it closes switch S0 very briefly to discharge (dump) the integrator capacitor, leaves the switch open during the entire course of the ensuing bit interval, and closes it again very briefly at the end of the next bit time, and so on. (This circuit is called an "integrate-and-dump" circuit.) The output signal of interest to us is the integrator output at the end of a bit interval, immediately before the closing of switch S0. This output signal is made available by a sampling switch, which samples the output voltage just prior to dumping the capacitor.
Let us assume for simplicity that the bit interval T_b is equal to the duration of an integral number n of cycles of the carrier of frequency f_o, that is, n·2π = ω_o T_b. In this case the output voltage v0(kT_b) at the end of a bit interval extending from time (k−1)T_b to kT_b is, using Eq. (3.4.1.6),
$$v_0(kT_b) = b(kT_b)\sqrt{2P_s}\int_{(k-1)T_b}^{kT_b} \frac{1}{2}\,dt + b(kT_b)\sqrt{2P_s}\int_{(k-1)T_b}^{kT_b} \frac{1}{2}\cos 2(\omega_o t + \theta)\,dt \qquad \ldots(3.4.1.7a)$$
$$v_0(kT_b) = b(kT_b)\sqrt{\frac{P_s}{2}}\,T_b \qquad \ldots(3.4.1.7b)$$
since the integral of a sinusoid over a whole number of cycles has the value zero. Thus we see that our
system reproduces at the demodulator output the transmitted bit stream b(t). The operation of the bit
synchronizer allows us to sense each bit independently of every other bit. The brief closing of both
switches, after each bit has been determined, wipes clean all influence of a preceding bit and allows the
receiver to deal exclusively with the present bit.
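A numerical sketch of the synchronous demodulator with an integrate-and-dump stage (assumed parameters; the carrier here is taken as already recovered with the correct phase):

    import numpy as np

    Ps, fo, fb, fs = 1.0, 10.0, 1.0, 1000.0
    Tb = 1 / fb
    spb = int(fs * Tb)                         # samples per bit
    t = np.arange(0, 4, 1 / fs)

    data = np.array([1, -1, -1, 1])
    b = np.repeat(data, spb)
    theta = 0.3                                 # channel phase shift
    rx = b * np.sqrt(2 * Ps) * np.cos(2 * np.pi * fo * t + theta)

    # Multiply by the recovered carrier cos(wo t + theta), then
    # integrate over each bit interval and dump.
    carrier = np.cos(2 * np.pi * fo * t + theta)
    v0 = (rx * carrier).reshape(-1, spb).sum(axis=1) / fs

    print(np.sign(v0))                          # recovers [ 1 -1 -1  1]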
In BPSK, to regenerate the carrier we start by squaring 𝑏(𝑡)√2𝑃𝑠 . cos(𝜔𝑜 𝑡). Accordingly, if the received
signal were instead −𝑏(𝑡)√2𝑃𝑠 . cos(𝜔𝑜 𝑡) the recovered carrier would remain as before. Therefore we
shall not be able to determine whether the received baseband signal is the transmitted signal b(t) or its negative −b(t).
Differential phase-shift keying (DPSK) and differential encoded PSK (DEPSK) are modifications of BPSK
which have the merit that they eliminate the ambiguity about whether the demodulated data is or is not
inverted. In addition DPSK avoids the need to provide the synchronous carrier required at the demodulator
for detecting a BPSK signal.
A means for generating a DPSK signal is shown in Fig. 3.4.2.1. The data stream to be transmitted, d(t), is applied to one input of an exclusive-OR logic gate. To the other gate input is applied the output of the exclusive-OR gate, b(t), delayed by the time T_b allocated to one bit. This second input is then b(t − T_b). In Fig. 3.4.2.2 we have drawn logic waveforms to illustrate the response b(t) to an input d(t). The upper level of the waveforms corresponds to logic 1, the lower level to logic 0. The truth table for the exclusive-OR gate is given in Fig. 3.4.2.1, and with this table we can easily verify that the waveforms for d(t), b(t − T_b) and b(t) are consistent with one another. We observe that, as required, b(t − T_b) is indeed b(t) delayed by one bit time, and that in any bit interval the bit b(t) is given by b(t) = d(t) ⊕ b(t − T_b).
Because of the feedback involved in the system of Fig. 3.4.2.1, there is a difficulty in determining the logic levels in the interval in which we start to draw the waveforms (interval 1 in Fig. 3.4.2.2). We cannot determine b(t) in this first interval of our waveform unless we know b(0); but we cannot determine b(0) unless we know both d(0) and b(−1), and so on. Thus, to justify any set of logic levels in an initial bit interval we need to know the logic levels in the preceding interval; but such a determination requires information about the interval two bit times earlier, and so on. In the waveforms of Fig. 3.4.2.2 we have circumvented the problem by arbitrarily assuming that in the first interval b(0) = 0. It is shown below that in the demodulator the data will be correctly determined regardless of our assumption concerning b(0).
Figure 3.4.2.2 Logic waveforms to illustrate the response b(t) to an input d(t).
We now observe that the response of b(t) to d(t) is that b(t) changes level at the beginning of each interval in which d(t) = 1, and b(t) does not change level when d(t) = 0. Thus during interval 3, d(3) = 1, and correspondingly b(3) changes at the beginning of that interval. During intervals 6 and 7, d(6) = d(7) = 1, and there are changes in b(t) at the beginnings of both intervals. During bits 10, 11, 12 and 13, d(t) = 1, and there are changes in b(t) at the beginning of each of these intervals. This behavior is to be anticipated from the truth table of the exclusive-OR gate, for we note that when d(t) = 0, b(t) = b(t − T_b), so that, whatever the initial value of b(t − T_b), it reproduces itself. On the other hand, when d(t) = 1, then b(t) is the complement of b(t − T_b); thus in each successive bit interval b(t) changes from its value in the previous interval. Note that in some intervals where d(t) = 0 we have b(t) = 0 and in other intervals where d(t) = 0 we have b(t) = 1. Similarly, when d(t) = 1, sometimes b(t) = 1 and sometimes b(t) = 0. Thus there is no correspondence between the levels of d(t) and b(t); the only invariant feature of the system is that a change (sometimes up and sometimes down) in b(t) occurs whenever d(t) = 1, and that no change in b(t) occurs whenever d(t) = 0.
Finally, we note that the waveforms of Fig. 3.4.2.2 are drawn on the assumption that, in interval 1, b(0) = 0. As is easily verified, if not intuitively apparent, had we assumed b(0) = 1, the invariant feature by which we have characterized the system would continue to apply. Since b(0) must be either 0 or 1, there being no other possibilities, our result is valid quite generally. If, however, we had started with b(0) = 1, the levels of b(t) would simply have been inverted.
As is seen in Fig. 3.4.2.1, b(t) is applied to a balanced modulator to which is also applied the carrier √(2P_s)·cos(ω_o t). The modulator output, which is the transmitted signal, is
$$v_{DPSK}(t) = b(t)\sqrt{2P_s}\cos(\omega_o t)$$
Thus altogether, when d(t) = 0 the phase of the carrier does not change at the beginning of the bit interval, while when d(t) = 1 there is a phase change of magnitude π.
Reception:
A method of recovering the data bit stream from the DPSK signal is shown in Fig. 3.4.2.3. Here the received signal and the received signal delayed by the bit time T_b are applied to a multiplier. The multiplier output is
$$b(t)\sqrt{2P_s}\cos(\omega_o t + \theta)\cdot b(t - T_b)\sqrt{2P_s}\cos[\omega_o(t - T_b) + \theta]$$
$$= P_s\, b(t)\, b(t - T_b)\left\{\cos\omega_o T_b + \cos[2\omega_o t - \omega_o T_b + 2\theta]\right\}$$
and is applied to an integrator. The carrier frequency is selected so that ω_o T_b = 2πn, with n an integer, making cos ω_o T_b = 1.
Further, with this selection, the bit duration encompasses an integral number of clock cycles and the
integral of the double-frequency term is exactly zero.
The transmitted data bit d(t) can readily be determined from the product b(t)·b(t − T_b). If d(t) = 0, there was no phase change and b(t) = b(t − T_b), both being +1 V or both −1 V; in this case b(t)·b(t − T_b) = +1. If, however, d(t) = 1, there was a phase change, and either b(t) = 1 V with b(t − T_b) = −1 V or vice versa; in either case b(t)·b(t − T_b) = −1.
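The differential encoding and product detection can be sketched directly at the bit level (assumed data pattern; b(0) = 0 as in Fig. 3.4.2.2):

    import numpy as np

    d = np.array([0, 1, 1, 0, 1, 0, 0, 1])     # data to transmit

    # Differential encoding: b(k) = d(k) XOR b(k-1), with b(0) = 0 assumed.
    b = np.zeros(len(d) + 1, dtype=int)
    for k in range(len(d)):
        b[k + 1] = d[k] ^ b[k]

    # Map to +/-1 levels; the receiver multiplies each bit by the previous one.
    s = 1 - 2 * b                               # 0 -> +1, 1 -> -1
    prod = s[1:] * s[:-1]                       # b(t) * b(t - Tb)

    d_hat = (prod == -1).astype(int)            # -1 means a phase change: d = 1
    print(np.array_equal(d_hat, d))             # True, regardless of b(0)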
OR
In Differential Phase Shift Keying (DPSK) the phase of the modulated signal is shifted relative to the previous signal element. No reference signal is considered here; the signal phase follows the high or low state of the previous element. The DPSK technique doesn't need a reference oscillator.
The following figure represents the model waveform of DPSK.
DPSK Modulator
DPSK is a technique of BPSK, in which there is no reference phase signal. Here, the transmitted signal itself
can be used as a reference signal. Following is the diagram of DPSK Modulator.
DPSK Demodulator
In the DPSK demodulator, the phase of the received bit is compared with the phase of the previous bit. Following is the block diagram of the DPSK demodulator.
The transmitter of the DEPSK system is identical to the transmitter of the DPSK system shown in Fig. 3.4.2.1. The signal b(t) is recovered in exactly the manner shown in Fig. 3.4.2.1 for a BPSK system. The recovered signal is then applied directly to one input of an exclusive-OR logic gate, and to the other input is applied b(t − T_b) (see Fig. 3.4.3.1). The gate output will be at one or the other of its levels depending on whether b(t) = b(t − T_b) or b(t) is the complement of b(t − T_b). In the first case b(t) did not change level, and therefore the transmitted bit is d(t) = 0; in the second case d(t) = 1.
We have seen that in DPSK there is a tendency for bit errors to occur in pairs, but single bit errors are possible. In DEPSK errors always occur in pairs. The reason for the difference is that in DPSK we do not make a hard decision in each bit interval about the phase of the received signal; we simply allow the received signal in one interval to compare itself with the signal in an adjoining interval, and, as we have seen, a single error is not precluded. In DEPSK a firm, definite hard decision is made in each interval about the value of b(t). If we make a mistake, then errors must result from comparisons with both the preceding and the succeeding bit. This result is illustrated in Fig. 3.4.3.2, which shows the error-free signals b(k), b(k − 1) and d(k) = b(k) ⊕ b(k − 1). We have assumed that b′(k) has a single error; then b′(k − 1) must also have a single error, and the reconstructed waveform d′(k) now has two errors.
This is the phase shift keying technique, in which the sine wave carrier takes four phase reversals such as
0°, 90°, 180°, and 270°.
If these kinds of techniques are further extended, PSK can be done by eight or sixteen values also,
depending upon the requirement.
QPSK Modulator
The mechanism by which a bit stream b(t) generates a QPSK signal for transmission is shown in Fig. 3.4.4.1
and relevant waveforms are shown in Fig. 3.4.4.2. In these waveforms we have arbitrarily assumed that in
every case the active edge of the clock waveforms is the downward edge. The toggle flip-flop is driven by a
clock waveform whose period is the bit time Tb. The toggle flip-flop generates an odd clock waveform and
an even waveform. These clocks have periods 2Tb. The active edge of one of the clocks and the active edge
of the other are separated by the bit time Tb. The bit stream 𝑏(𝑡) is applied as the data input to both type-
D flip-flops, one driven by the odd and one driven by the even clock waveform. The flip-flops register
alternate bits in the stream b(t) and hold each such registered bit for two bit intervals, that is for a time Tb.
In Fig. 3.4.4.2 we have numbered the bits in b(t). Note that the odd bit stream b_o(t) (which is the output of the flip-flop driven by the odd clock) registers bit 1 and holds that bit for time 2T_b, then registers bit 3 for time 2T_b, then bit 5 for 2T_b, etc. The even bit stream b_e(t) holds, for times 2T_b each, the alternate bits numbered 2, 4, 6, etc.
The bit streams b_o(t) and b_e(t) (which, as usual, we take to be ±1 volt) are superimposed on the carriers √P_s·sin(ω_o t) and √P_s·cos(ω_o t) by the use of two multipliers (i.e., balanced modulators) as shown, to generate two signals s_o(t) and s_e(t). These signals are then added to generate the transmitted output signal v_m(t), which is
$$v_m(t) = \sqrt{P_s}\,b_o(t)\sin(\omega_o t) + \sqrt{P_s}\,b_e(t)\cos(\omega_o t)$$
As may be verified, the total normalized power of v_m(t) is P_s.
The QPSK modulator uses a bit splitter, two multipliers with a local oscillator, a 2-bit serial-to-parallel converter, and a summer circuit. Following is the block diagram for the same.
At the modulator's input, the message signal's even bits (i.e., the 2nd bit, 4th bit, 6th bit, etc.) and odd bits (i.e., the 1st bit, 3rd bit, 5th bit, etc.) are separated by the bit splitter and multiplied with the same carrier to generate odd BPSK (called PSK_I) and even BPSK (called PSK_Q). The PSK_Q signal is phase shifted by 90° before being modulated.
The QPSK waveform for two-bit input is as follows, which shows the modulated result for different instances of binary inputs.
QPSK Demodulator
The QPSK Demodulator uses two product demodulator circuits with local oscillator, two band pass filters,
two integrator circuits, and a 2-bit parallel to serial converter. Following is the diagram for the same.
The two product detectors at the input of demodulator simultaneously demodulate the two BPSK signals.
The pair of bits are recovered here from the original data. These signals after processing, are passed to the
parallel to serial converter.
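A compact numerical sketch of the QPSK modulator and coherent demodulator described above (carrier, rates and the bit pattern are assumed values):

    import numpy as np

    Ps, fo, fs, Tb = 1.0, 10.0, 1000.0, 0.1
    sps = int(fs * 2 * Tb)                      # samples per symbol (2 bits)

    bits = np.array([1, -1, -1, 1, 1, 1, -1, -1])   # +/-1 V bit stream
    bo, be = bits[0::2], bits[1::2]                  # odd / even split
    t = np.arange(len(bo) * sps) / fs

    so = np.repeat(bo, sps) * np.sqrt(Ps) * np.sin(2 * np.pi * fo * t)
    se = np.repeat(be, sps) * np.sqrt(Ps) * np.cos(2 * np.pi * fo * t)
    vm = so + se                                 # transmitted QPSK signal

    # Coherent demodulation: multiply by each carrier, integrate per symbol.
    i_out = (vm * np.sin(2 * np.pi * fo * t)).reshape(-1, sps).sum(axis=1)
    q_out = (vm * np.cos(2 * np.pi * fo * t)).reshape(-1, sps).sum(axis=1)
    print(np.sign(i_out), np.sign(q_out))        # recover bo and be

The cross terms vanish because each symbol spans an integral number of carrier cycles, so the two quadrature carriers are orthogonal over the integration interval.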
3.5 M-ary Equation
If a digital signal is given under four conditions, such as voltage levels, frequencies, phases or amplitudes, then M = 4. The number of bits necessary to produce a given number of conditions is expressed mathematically as N = log2 M, where N is the number of bits necessary and M is the number of conditions, levels, or combinations possible with N bits.
The above equation can be rearranged as
$$2^N = M$$
For example, with two bits, 2² = 4 conditions are possible.
M-ary ASK
This is called M-ary Amplitude Shift Keying (M-ASK) or M-ary Pulse Amplitude Modulation (PAM).
The amplitude of the carrier signal, takes on M different levels.
M-ary FSK
This is called as M-ary Frequency Shift Keying (M-ary FSK).
The frequency of the carrier signal, takes on M different levels.
$$S_i(t) = \sqrt{\frac{2E_s}{T_s}}\cos\left(\frac{\pi}{T_s}(n_c + i)\,t\right), \qquad 0 \le t \le T_s, \quad i = 1, 2, \ldots, M$$
M-ary PSK
This is called as M-ary Phase Shift Keying (M-ary PSK).
The phase of the carrier signal, takes on M different levels.
Representation of M-ary PSK:
$$S_i(t) = \sqrt{\frac{2E}{T}}\cos\left(\omega_0 t + \phi_i(t)\right), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M$$
$$\phi_i(t) = \frac{2\pi i}{M}, \qquad i = 1, 2, \ldots, M$$
Some prominent features of M-ary PSK are −
The envelope is constant with more phase possibilities.
This method was used during the early days of space communication.
Better performance than ASK and FSK.
Minimal phase estimation error at the receiver.
The bandwidth efficiency of M-ary PSK decreases and the power efficiency increases with the
increase in M.
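A tiny sketch of the M-ary PSK constellation (M = 8 assumed), confirming the constant envelope and the N = log2 M relation:

    import numpy as np

    M = 8                                     # 8-PSK as an example
    i = np.arange(1, M + 1)
    phases = 2 * np.pi * i / M                # phi_i = 2*pi*i/M

    # Constant-envelope constellation points on the unit circle.
    points = np.exp(1j * phases)
    print(np.round(np.abs(points), 6))        # all 1.0: the envelope is constant
    print(np.log2(M), "bits per symbol")      # N = log2(M) = 3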
So far, we have discussed different modulation techniques. The output of all these techniques is a binary
sequence, represented as 1s and 0s. This binary or digital information has many types and forms, which are
discussed further.
2. In BPSK the information of the message is stored in the phase variations of the carrier wave, whereas in the BFSK scheme the information is available as frequency variations of the carrier wave. Noise can affect the frequency of the carrier wave but cannot affect its phase; therefore the BPSK scheme is again the better option.
3. The noise also occupies some frequency band, and it may damage the signal flatly or frequency-selectively. But in PSK there is very little chance of the phase of the signal changing. Hence, for a noisy channel, PSK is better than FSK.
We have seen that when a data stream whose bit duration is T_b is to be transmitted by BPSK, the channel bandwidth must be nominally 2f_b, where f_b = 1/T_b.
Quadrature phase-shift keying allows bits to be transmitted using half the bandwidth. In the QPSK system we use the type-D flip-flop as a one-bit storage device.
D Flip Flop
In BPSK the bit duration is T_b, and the generated signal has a nominal bandwidth of 2 × 1/T_b. In the waveforms of b_o(t) and b_e(t) the bit durations are each 2T_b; hence both b_o(t) and b_e(t) generate signals whose nominal bandwidth is half of the bandwidth in BPSK.
Phasor Diagram:
When b_o = 1 the signal s_o(t) = √P_s·sin(ω_o t), and s_o(t) = −√P_s·sin(ω_o t) when b_o = −1. Correspondingly, for b_e(t) = ±1, s_e(t) = ±√P_s·cos(ω_o t). These four signals have been represented as phasors in Fig. 3.7.4.4; they are in mutual phase quadrature. Also drawn are the phasors representing the four possible output signals v_m(t) = s_o(t) + s_e(t). These four possible output signals have equal amplitude √(2P_s) and are in phase quadrature; they have been identified by their corresponding values of b_o and b_e. At the end of each bit interval (i.e., after each time T_b) either b_o or b_e can change, but both cannot change at the same time. Consequently, the QPSK system shown in Fig. 3.7.4.3 is called offset or staggered QPSK, abbreviated OQPSK. After each time T_b the transmitted signal, if it changes, changes phase by 90° rather than by 180° as in BPSK.
Non-offset QPSK
Suppose that in Fig. 3.7.4.3 we introduce an additional flip-flop before either the odd or even flip-flop. Let
this added flip-flop be driven by the clock which runs at the rate fb. Then one or the other bit streams, odd
or even, will be delayed by one bit interval. As a result, we shall find that two bits which occur in time
sequence (i.e., serially) in the input bit stream b(t) will appear at the same time (i.e., in parallel) at the
outputs of the odd and even flip-flops. In this case be(t) and bo(t) can change at the same time, after each
time 2Tb, and there can be a phase change of 180° in the output signal. There is no difference, in principle,
between a staggered and non-staggered system.
In practice, there is often a significant difference between QPSK and OQPSK. At each transition time (T_b for OQPSK and 2T_b for QPSK), one bit for OQPSK and perhaps two bits for QPSK change from 1 V to −1 V or from −1 V to 1 V. Now the bits b_e(t) and b_o(t) cannot change instantaneously and, in changing, must pass through zero and dwell in that neighborhood at least briefly. Hence there will be brief variations in the amplitude of the transmitted waveform. These variations will be more pronounced in QPSK than in OQPSK, since in the first case both b_e(t) and b_o(t) may be zero simultaneously, so that the signal amplitude may actually be reduced to zero temporarily.
Of course, as usual, a bit synchronizer is required to establish the beginnings and ends of the bit intervals of each bit stream so that the times of integration can be established. The bit synchronizer is needed as well to operate the sampling switch. At the end of each integration time for each individual integrator, and just before the accumulation is dumped, the integrator output is sampled. Samples are taken alternately from one and the other integrator output at the end of each bit time T_b, and these samples are held in the latch for the bit time T_b. Each individual integrator output is sampled at intervals of 2T_b. The latch output is the recovered bit stream b(t).
The voltages marked on Fig. 3.7.4.5 are intended to represent the waveforms of the signals only and not
their amplitudes. Thus the actual value of the sample voltages at the integrator outputs depends on the
amplitude of the local carrier, the gain, if any, in the modulators and the gain in the integrators. We have
however indicated that the sample values depend on the normalized power P s of the received signal and
on the duration Ts of the symbol.
The waveforms of Eq. (3.8.1) are represented by the dots in Fig. 3.8.1 in a signal space in which the coordinate axes are the orthonormal waveforms $u_1(t) = \sqrt{2/T_s}\cos(\omega_o t)$ and $u_2(t) = \sqrt{2/T_s}\sin(\omega_o t)$. The distance of each dot from the origin is $\sqrt{E_s} = \sqrt{P_s T_s}$.
From Eq. (3.5.1) we have
$$v_m(t) = \left(\sqrt{2P_s}\cos\phi_m\right)\cos(\omega_o t) - \left(\sqrt{2P_s}\sin\phi_m\right)\sin(\omega_o t) \qquad \ldots(3.8.3)$$
Defining pe and po by
waveform, whose phase has a one-to-one correspondence to the assembled N-bit symbol. The phase can
change once per symbol time.
The QASK generator for a 4-bit symbol is shown. The 4-bit symbol b_{k+3} b_{k+2} b_{k+1} b_k is stored in a 4-bit register made up of four flip-flops. A new symbol is presented once per interval T_s = 4T_b, and the contents of the register are correspondingly updated at each active edge of the clock, which also has period T_s. Two bits are presented to one D/A converter and two to the second converter. The converter output A_e(t) modulates the balanced modulator whose input carrier is the even function √P_s·cos(ω_o t), and A_o(t) modulates the modulator whose input carrier is the odd function carrier. The transmitted signal is then
$$v_{QASK}(t) = A_e(t)\sqrt{P_s}\cos(\omega_o t) + A_o(t)\sqrt{P_s}\sin(\omega_o t) \qquad \ldots(3.9.6)$$
Bandwidth of QASK
The Bandwidth of the QASK signal is
B=2fb/N
which is the same as in the case of M-ary PSK. With N = 4, corresponding to 16 possible distinguishable signals, we have BQASK(16) = fb/2, which is one-fourth of the bandwidth required for binary PSK.
QASK Receiver
vQASK⁴(t)/Ps = [(Ae⁴(t) + Ao⁴(t) − 6Ae²(t)Ao²(t))/8] cos(4ωo t) + [Ae(t)Ao(t)(Ae²(t) − Ao²(t))/2] sin(4ωo t) …3.9.8
The average value of the coefficient of cos 4ωo t is not zero, whereas the average value of the coefficient of sin 4ωo t is zero. Thus a narrow filter centered at 4fo will recover the signal at 4fo.
After recovering the carriers, two balanced modulators together with two integrators recover the signals Ae(t) and Ao(t) as shown in the figure. The integrators have an integration time equal to the symbol time Ts. Finally the original input bits are recovered by using an A/D converter.
Thus when d(t) changes from +1 to -1 PH changes from 1 to 0 and PL from 0 to 1. At any time either PH or PL
is 1 but not both so that the generated signal is either at angular frequency ωH or at ωL.
Spectrum of BFSK
In terms of the variables PH and PL the BFSK signal is given by
vBFSK(t) = √(2Ps) PH(t) cos(ωH t + θH) + √(2Ps) PL(t) cos(ωL t + θL) …3.10.4
where we have assumed that each of the two signals is of independent, random, uniformly distributed phase. Each of the terms in Eq. (3.10.4) looks like the signal √(2Ps) b(t) cos ωo t which we
encountered in BPSK and for which we have already deduced the spectrum, but there is an important
difference. In the BPSK case, b(t) is bipolar, i.e., it alternates between + 1 and – 1 while in the present case
PH and PL are unipolar, alternating between + 1 and 0. We may, however, rewrite PH and PL as the sums of a
constant and a bipolar variable, that is
PH(t) = 1/2 + (1/2)P′H(t) …3.10.5a
PL(t) = 1/2 + (1/2)P′L(t) …3.10.5b
In the above equations P′H(t) and P′L(t) are bipolar, alternating between +1 and −1, and are complementary, i.e. when P′H(t) = +1, P′L(t) = −1 and vice versa. Then from Eq. (3.10.4),
vBFSK(t) = √(Ps/2) cos(ωH t + θH) + √(Ps/2) cos(ωL t + θL) + √(Ps/2) P′H(t) cos(ωH t + θH) + √(Ps/2) P′L(t) cos(ωL t + θL) …3.10.6
The first two terms in Eq. (3.10.6) produce a power spectral density which consists of two impulses, one at
fH and one at fL. The last two terms produce the spectrum of two binary PSK signals one centered about fH
and one about fL. The individual power spectral density patterns of the last two terms are shown for the case fH − fL = 2fb. For this separation between fH and fL we observe that the overlapping between the two parts of the spectra is not large and we may expect to be able to distinguish the levels of the binary waveform d(t). In any event, with this separation the bandwidth of BFSK is
BWBFSK = 4fb …3.10.7
which is twice the bandwidth of BPSK.
enough to encompass a main lobe in the spectrum of Fig. 3.10.2. Hence one filter will pass nearly all the energy in the transmission at fH; the other will perform similarly for the transmission at fL. The filter outputs
are applied to envelope detectors and finally the envelope detector outputs are compared by a
comparator. A comparator is a circuit that accepts two input signals. It generates a binary output which is
at one level or the other depending on which input is larger. Thus at the comparator output the data d(t)
will be reproduced.
When noise is present, the output of the comparator may vary due to the system's response to the signal and noise. Thus, practical systems use a bit synchronizer and an integrator and sample the comparator output only once at the end of each time interval Tb.
interval TS. And, if the symbol is a single bit, TS = Tb. The coefficients C1 and C2 are constants. The normalized energies associated with C1u1(t) and with C2u2(t) are respectively C1² and C2², and the total signal energy is C1² + C2². In M-ary PSK and QASK the orthogonality of the vectors u1 and u2 results from their
phase quadrature. In the present case of BFSK it is appropriate that the orthogonality should result from a
special selection of the frequencies of the unit vectors. Accordingly, with m and n integers, let us establish
unit vectors
u1(t) = √(2/Tb) cos(2πmfb t) …3.10.8
and u2(t) = √(2/Tb) cos(2πnfb t) …3.10.9
where fb = 1/Tb. The vectors u1 and u2 are the mth and nth harmonics of the (fundamental) frequency fb. As we know from the principles of Fourier analysis, different harmonics (m ≠ n) are orthogonal over the interval of the fundamental period Tb = 1/fb.
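The orthogonality of distinct harmonics is easy to verify numerically. A minimal sketch (Python with NumPy; the bit rate and the harmonic indices m and n are arbitrary illustrative choices):

import numpy as np

fb = 1000.0; Tb = 1/fb                   # illustrative bit rate
m, n = 5, 3                              # any two distinct integers
t = np.linspace(0, Tb, 100000, endpoint=False)

u1 = np.sqrt(2/Tb)*np.cos(2*np.pi*m*fb*t)
u2 = np.sqrt(2/Tb)*np.cos(2*np.pi*n*fb*t)

print((u1*u2).mean()*Tb)                 # ~0: distinct harmonics are orthogonal
print((u1*u1).mean()*Tb)                 # ~1: each unit vector has unit energy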
If now the frequencies fH and fL in a BFSK system are selected to be (assuming m > n)
𝑓𝐻 = 𝑚𝑓𝑏 …3. 10.10a
and 𝑓𝐿 = 𝑛𝑓𝑏 …3. 10.10b
Then corresponding signal vectors are
𝑆𝐻 (𝑡) = √𝐸𝑏 𝑢1 (𝑡) …3. 10.11a
and 𝑆𝐿 (𝑡) = √𝐸𝑏 𝑢2 (𝑡) …3. 10.11b
The distance between the signal end points is then d = √(2Eb).
∫[0, Tb] SL²(t) dt = 2Ps ∫[0, Tb] cos²(ωL t) dt = S1² + S2² …3.10.16a
Therefore
S1² + S2² = Eb[1 + sin(2ωL Tb)/(2ωL Tb)] …3.10.16b
The distance d between SH and SL given in Eq. (3.10.13) can now be determined by substituting Eqs. (3.10.14), (3.10.15a), and (3.10.16b) into it. The result is:
d² ≅ 2Eb[1 − sin((ωH − ωL)Tb)/((ωH − ωL)Tb)] …3.10.19
When SH(t) and SL(t) are orthogonal, (ωH − ωL)Tb = 2π(m − n)fb Tb = 2π(m − n), and the above equation gives d = √(2Eb).
Note that if (ωH − ωL)Tb = 3π/2, the distance d increases and becomes
dopt = [2Eb(1 + 2/3π)]^(1/2) = √(2.4Eb) …3.10.20
so that d² is increased by 20%.
Using the trigonometric identity for the cosine of the sum of two angles and recalling that cos θ = cos(−θ) while sin θ = −sin(−θ), we are led to the alternate equivalent expression
𝑣𝐵𝐹𝑆𝐾 (𝑡) = √2𝑃𝑠 cos Ω𝑡 cos 𝜔𝑜 𝑡 − √2𝑃𝑠 𝑑(𝑡) sin Ω𝑡 sin 𝜔𝑜 𝑡 …3.11.1
Note that the second term in above equation looks like the signal encountered in BPSK i.e., a carrier sinω0t
multiplied by a data bit d(t) which changes the carrier phase. In the present case however, the carrier is not
of fixed amplitude but rather the amplitude is shaped by the factor sin Ωt. We note further the presence of
a quadrature reference term cos Ωt cos ω0t which contains no information. Since this quadrature term
carries energy, the energy in the information bearing term is thereby diminished. Hence we may expect
that BFSK will not be as effective as BPSK in the presence of noise. For orthogonal BFSK, each term has the
same energy, hence the information bearing term contains only one-half of the total transmitted energy.
An M-ary FSK communications system is shown in Fig. 3.12.1. It is an obvious extension of a binary FSK
system. At the transmitter an N-bit symbol is presented each TS, to an N-bit D/A converter. The converter
output is applied to a frequency modulator, i.e., a piece of hardware which generates a carrier waveform
whose frequency is determined by the modulating waveform. The transmitted signal, for the duration of
the symbol interval, is of frequency f0 or f1 ...or fM-1 with M = 2N. At the receiver, the incoming signal is
applied to M paralleled bandpass filters each followed by an envelope detector. The bandpass filters have
center frequencies f0, f1, ... ,fM-1. The envelope detectors apply their outputs to a device which determines
which of the detector indications is the largest and transmits that envelope output to an N-bit A/D converter.
The probability of error is minimized by selecting frequencies f0, f1, ..., fM−1 so that the M signals are mutually orthogonal. One commonly employed arrangement simply provides that the carrier frequencies be successive even harmonics of the symbol frequency fS = 1/TS. Thus the lowest frequency, say f0, is f0 = kfS, while f1 = (k + 2)fS, f2 = (k + 4)fS, etc., so that adjacent frequencies are separated by 2fS. In this case, the spectral density patterns of the individual possible transmitted signals overlap in the manner shown in Fig. 3.12.2, which is an extension to M-ary FSK of the pattern of Fig. 3.10.2, which applies to binary FSK. We observe that to pass M-ary FSK the required spectral range is
𝐵 = 2𝑀𝑓𝑆 …3.12.1
Since fS= fb/N and M=2N, we have
B = 2^(N+1) fb/N …3.12.2
Figure 3.12.2 Power Spectral Density of an M-ARY FSK (Four Frequencies are shown)
Geometrical Representation of an M-ARY FSK
In Fig 3.7.4, we provided a signal space representation for the case of orthogonal binary FSK.
Figure 3.12.3 Geometrical representation of orthogonal M-ary FSK (M = 3) when the frequencies are
selected to generate orthogonal signals.
The case of M-ary orthogonal FSK signals is clearly an extension of this figure. We simply conceive of a coordinate system with M mutually orthogonal coordinate axes. The square of the length of the signal vector is the normalized signal energy. Note that, as in Fig. 3.7.4, the distance between signal points is
d = √(2Es) = √(2NEb) …3.12.3
Note that this value of d is greater than the values of d calculated for M-ary PSK with the exception of the cases M = 2 and M = 4. It is also greater than d in the case of 16-QASK.
In MSK, the carriers are multiplied by the "smoother" waveforms shown in Fig. 3.13.1(e) and (f). As we may expect, the side lobes generated by these smoother waveforms will be smaller than those associated with the rectangular waveforms and hence easier to suppress, as is required to avoid interchannel interference.
In Eq. (3.13.1) MSK appears as a modified form of OQPSK, which we can call "shaped QPSK". We can, however, rewrite the equation to make it apparent that MSK is an FSK system. Applying the trigonometric identities for the products of sinusoids we find that Eq. (3.13.1) can be written:
∫[0, Tb] sin(ωH t) sin(ωL t) dt = 0 …3.13.4
The equation 3.13.4 will be satisfied provided that it is arranged, with m and n integers, that
2𝜋(𝑓𝐻 − 𝑓𝐿 )𝑇𝑏 = 𝑛𝜋 …3.13.5a
and 2𝜋(𝑓𝐻 + 𝑓𝐿 )𝑇𝑏 = 𝑚𝜋 …3.13.5b
Also
fH = f0 + fb/4 …3.13.6a
and fL = f0 − fb/4 …3.13.6b
From equations 3.13.5 and 3.13.6,
fb Tb = fb (1/fb) = 1 = n …3.13.7a
and f0 = m fb/4 …3.13.7b
Equation (3.13.7a) shows that since n = 1, fH and fL are as close together as possible for orthogonality to prevail. It is for this reason that the present system is called "minimum shift keying." Equation (3.13.7b) shows that the carrier frequency f0 is an integral multiple of fb/4. Thus
fH = (m + 1) fb/4 …3.13.8a
and fL = (m − 1) fb/4 …3.13.8b
Signal Space Representation of MSK
The signal space representation of MSK is shown in Fig. 3.13.2. The orthonormal unit vectors of the coordinate system are given by uH(t) = √(2/Ts) sin(ωH t) and uL(t) = √(2/Ts) sin(ωL t). The end points of the four possible signal vectors are indicated by dots. The smallest distance between signal points is
𝑑 = √2𝐸𝑠 = √4𝐸𝑏 …3.13.9
just as for the case of QPSK.
We recall that QPSK generates two BPSK signals which are orthogonal to one another by virtue of the fact
that the respective carriers are in phase quadrature. Such phase quadrature can also be characterized as
time quadrature since, at a carrier frequency f0 a phase shift of π/2 is accomplished by a time shift in
amount 1/4f0, that is sin 2πf0(t+1/4f0) = sin (2πf0t + π/2) = cos (2πf0t). It may be noted that in MSK we have
again two BPSK signals. Here, however, the respective carriers are orthogonal to one another by virtue of
the fact that they are in frequency quadrature.
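The minimum-spacing condition can be checked numerically. A short sketch (Python with NumPy; fb and the integer m are arbitrary illustrative choices) confirming Eq. (3.13.4) for fH − fL = fb/2 with f0 = m·fb/4:

import numpy as np

fb = 1000.0; Tb = 1/fb
m = 9                                    # f0 = m*fb/4, per Eq. (3.13.7b)
f0 = m*fb/4
fH, fL = f0 + fb/4, f0 - fb/4            # Eq. (3.13.6): fH - fL = fb/2 (n = 1)

t = np.linspace(0, Tb, 200000, endpoint=False)
sH, sL = np.sin(2*np.pi*fH*t), np.sin(2*np.pi*fL*t)
print((sH*sL).mean()*Tb)                 # ~0: Eq. (3.13.4) is satisfied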
The MSK receiver is shown in Fig. 3.13.3b. Detection is performed synchronously, i.e., by determining the correlation of the received signal with the waveform x(t) = cos Ωt sin ω0t to determine the bit bo(t), and with y(t) = sin Ωt cos ω0t to determine the bit be(t). The integration is performed over the symbol interval. The integrators integrate over staggered, overlapping intervals of symbol time Ts = 2Tb.
At the end of each integration time the integrator output is stored and then the integrator output is
dumped. The switch at the output swings back and forth at the bit rate so that finally the output waveform
is the original data bit stream dk(t).
At the receiver we need to reconstruct the waveforms x(t) and y(t). A method for locally regenerating x(t) and y(t) is shown in Fig. 3.13.4. From Eq. (3.13.3) we see that MSK consists of transmitting one of two possible BPSK signals, the first at frequency ω0 − Ω and the second at frequency ω0 + Ω. Thus, as in BPSK detection, we first square and filter the incoming signal. The output of the squarer has spectral components at the frequency 2ωH = 2(ω0 + Ω) and at 2ωL = 2(ω0 − Ω). These are separated out by bandpass filters. Division by 2 yields the waveforms (1/2)sin ωH t and (1/2)sin ωL t from which, as indicated, x(t) and y(t) are regenerated by addition and subtraction respectively. Further, the multiplier and low-pass filter shown regenerate a waveform at the symbol rate fs = fb/2 which can be used to operate the sampling switches in Fig. 3.13.3b.
Unit 4
Syllabus:
Other Digital Techniques: Pulse shaping to reduce inter channel and inter symbol interference- Duo binary
encoding, Nyquist criterion and partial response signaling, Quadrature Partial Response (QPR) encoder
decoder, Regenerative Repeater- eye pattern, equalizers
Optimum Reception of Digital Signals: Baseband signal receiver, probability of error, maximum likelihood
detector, Bayes theorem, optimum receiver for both baseband and pass band receiver- matched filter and
correlator, probability of error calculation for BPSK and BFSK.
This equation shows that the ith bit transmitted is correctly reproduced. However, the presence of ISI
introduces bit errors and distortions in the output.
While designing the transmitter or a receiver, it is important that you minimize the effects of ISI, so as to
receive the output with the least possible error rate.
Correlative Coding
So far, we have discussed that ISI is an unwanted phenomenon that degrades the signal. But if the same ISI is used in a controlled manner, it is possible to achieve a bit rate of 2W bits per second in a channel of bandwidth W Hertz. Such a scheme is called correlative coding or partial response signaling.
Correlative-level coding (partial response signaling) means adding ISI to the transmitted signal in a controlled manner. Since the ISI introduced into the transmitted signal is known, its effect can be interpreted at the receiver. It is a practical method of achieving the theoretical maximum signaling rate of 2W symbols per second in a bandwidth of W Hertz.
If fM is the frequency of the maximum-frequency spectral component of the baseband waveform then, in AM, the bandwidth is B = 2fM. In frequency modulation, if the modulating waveform were a sinusoid of frequency fM and the frequency deviation were ∆f, then the bandwidth would be B = 2∆f + 2fM.
Altogether, it is apparent that bandwidth decreases with decreasing fM regardless of the modulation
technique employed. We consider now a mode of encoding a binary bit stream, called duobinary encoding
which effects a reduction of the maximum frequency in comparison to the maximum frequency of the un-
encoded data. Thus, if a carrier is amplitude or frequency modulated by a duobinary encoded waveform,
the bandwidth of the modulated waveform will be smaller than if the un-encoded data were used to AM or
FM modulate the carrier.
There are a number of methods available for duobinary encoding and decoding. One popular scheme is
shown in Fig. 4.2.1. The signal d(k) is the data bit stream with bit duration Tb. It makes excursions between
logic 1 and logic 0, and, as had been our custom, we take the corresponding voltage levels to be + 1V and -
1V. The signal b(k), at the output of the differential encoder also makes excursions between + 1V and -1V.
The waveform vD(k) is therefore
vD(k) = b(k) + b(k − 1) …4.2.2
The decoder, shown in Fig. 4.2.1, consists of a device that provides at its output the magnitude (absolute
value) of its input cascaded with a logical inverter. For the inverter we take it that logic 1 is + 1V or greater
and logic 0 is 0V. We can now verify that the decoded data 𝑑̂ (k) is indeed the input data d(k). For this
purpose we prepare the following truth table:
Truth Table for Duobinary Signaling

Adder input 1 (I1)   Adder input 2 (I2)   Adder output   Magnitude (inverter input)   Inverter output d̂(k)
Voltage  Logic       Voltage  Logic       vD(k)          Voltage  Logic               Logic
-1V      0           -1V      0           -2V            2V       1                   0
-1V      0           +1V      1            0V            0V       0                   1
+1V      1           -1V      0            0V            0V       0                   1
+1V      1           +1V      1           +2V            2V       1                   0
From the table we see that the inverter output is I1 ⊕ I2. The differential encoder (called a precoder in the present application) output is
I1 = b(k) = d(k) ⊕ b(k − 1) …4.2.3
The input I2 = b(k − 1), so that the inverter output d̂(k) is
d̂(k) = I1 ⊕ I2 = d(k) ⊕ b(k − 1) ⊕ b(k − 1) = d(k) …4.2.4
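The whole encode/decode chain is short enough to simulate directly. A minimal sketch (Python with NumPy; the initial precoder state and the sample data bits are arbitrary assumptions) of Eqs. (4.2.2) to (4.2.4), with logic 1 = +1V and logic 0 = -1V as above:

import numpy as np

def duobinary_encode(d):
    # precoder: b(k) = d(k) XOR b(k-1); for +/-1 levels, XOR is the negated product
    b = [1]                          # arbitrary initial precoder state
    for dk in d:
        b.append(-dk*b[-1])
    b = np.array(b)
    return b[1:] + b[:-1]            # Eq. (4.2.2): vD(k) = b(k) + b(k-1)

def duobinary_decode(vD):
    # magnitude device followed by a logic inverter (|vD| = 2V -> 0, 0V -> 1)
    return np.where(np.abs(vD) > 1, -1, 1)

d = np.array([1, -1, -1, 1, -1, 1, 1, -1])
print(np.array_equal(duobinary_decode(duobinary_encode(d)), d))   # True: d̂(k) = d(k)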
The more rapidly d(k) switches back and forth between logic levels the higher will be the frequencies of the
spectral components generated. When d(k) switches at each time Tb, the switching speed is at a maximum.
The waveform d(k), under such circumstances, has the appearance of a square wave of period 2Tb and
frequency 1/2 Tb as shown in Fig. 4.2.2a.
If d(k) is the input to the duobinary encoder of Fig. 4.2.1 then, as can be verified, b(k) appears as in Fig.
4.2.2b and the waveform, vD(k) which is to be transmitted appears as in Fig. 4.2.2c. Observe that the period
of vD(k) is 4Tb with corresponding frequency 1/4Tb. Thus the frequency of vD(k) is half the frequency of the original unencoded waveform d(k). The waveform d(k) may be viewed as a sinusoid of frequency 1/2Tb and the waveform vD(k) as a sinusoid of frequency 1/4Tb. If we were free to select either d(k) or vD(k) as a
modulating waveform for a carrier, and if we were interested in conserving bandwidth, we would choose
vD(k). If amplitude modulation were involved, the bandwidth of the modulated waveform would be
2(1/4Tb) = fb /2 using vD(k) since the modulating frequency is fM = 1/4Tb and would be 2(1/2Tb) = fb using
d(k). With frequency modulation, if the peak-to-peak carrier frequency deviation were 2∆f then the modulated carrier would have a bandwidth 2(∆f) + 2(1/2Tb) with d(k) as the modulating signal, as in BFSK, and 2(∆f) + 2(1/4Tb) with vD(k) as the modulating signal.
In partial-response signaling, we shall transmit a signal during each bit interval that has contributions from
two successive bits of an original baseband waveform. But this superposition need not prevent us from disentangling the individual original baseband waveform bits. A complete (baseband) partial-response
signaling communications system is shown in Fig. 4.3.2.
Figure 4.3.2 Duo Binary Encoder and Decoder Using Cosine filter
It is seen to be just an adaptation of duobinary encoding and decoding. The cosine filter employs a delay and an advance of the impulse by amount Tb/2, the total time between delayed and advanced impulses being Tb. Since, in the real world, a time advance is not possible, we have employed only a delay by amount Tb. The brickwall filter at the receiver input serves to remove any out-of-band noise added to the signal during transmission. It can be shown that the output data d̂(k) = d(k).
The bandwidth required to transmit the signal is twice the bandwidth of the baseband duobinary signal
which is fb/2. Hence the bandwidth BDSB of an amplitude modulated duobinary signal is
𝐵𝐷𝑆𝐵 = 2(𝑓𝑏 /2) = 𝑓𝑏 …4.4.2
If the duobinary signal is to amplitude modulate two carriers in quadrature, the circuit shown in Fig. 4.4.1 is
used and the resulting encoder is called a "quadrature partial response" (QPR) encoder.
Figure 4.4.1 shows that the data d(t) at the bit rate fb is first separated into an even and an odd bit stream, de(t) and do(t), each operating at the bit rate fb/2. Both de(t) and do(t) are then separately duobinary encoded into signals VTe(t) and VTo(t).
Each duobinary encoder is similar to the encoder shown in Fig. 4.3.2a except that each delay is now 2Tb,
rather than Tb, the data rate of the input is fb/2 rather than fb and the bandwidth of the brick wall filter is
now (1/2)(fb/2)= fb/4 rather than fb/2. Thus the bandwidth required to pass VTe(t) and VTo(t) is fb/4. Each
duobinary signal is then modulated using the quadrature carrier signals cos ωot and sin ωot.
Hence the total bandwidth required to pass a QPR signal is BQPR = 2(fb/4) = fb/2, since the two quadrature components occupy the same frequency band.
It should be noted that if QPSK, rather than QPR, were used to encode the data d(t), the bandwidth
required would be BQPSK =fb. However, if 16 QAM or 16 PSK were used to encode the data the required
bandwidth would be B16QAM = B16PSK =fb/2. Thus the spectrum required to pass a QPR signal is similar to
that required to pass 16 QAM or 16 PSK. However, the QPR signal displays no (or in practice very small)
side lobes which makes QPR the encoding system of choice when spectrum width is the major problem.
The drawback in using QPR is that the transmitted signal envelope is not a constant but varies with time.
QPR Decoder
A QPR decoder is shown in Fig. 4.4.2. As in 16-QAM and 16-PSK, to decode the input signal VQ(t) is first raised to the fourth power, filtered and then frequency divided by 4. The result yields the two quadrature carriers cos ωot and sin ωot. Using the two quadrature carriers we demodulate VQ(t) and obtain the two baseband duobinary signals VTe(t) and VTo(t). Duobinary decoding then takes place, each duobinary decoder being similar to the decoder shown in Fig. 4.3.2b except that they operate at fb/2 rather than at fb. The reconstructed data do(t) and de(t) are then combined to yield the data d(t).
4.6 Equalization
For reliable communication to be established, we need a quality output. The transmission losses of the channel and other factors affecting the quality of the signal have to be treated. The most frequently occurring impairment, as we have discussed, is ISI.
To make the signal free from ISI, and to ensure a maximum signal to noise ratio, we need to implement a
method called Equalization. The following figure shows an equalizer in the receiver portion of the
communication system.
Figure: An equalizer in the receiver portion of a communication system (digital source → pulse shaping → analog channel with added noise and interference → received analog signal, sampled at TS → linear digital equalizer → decision device).
Regenerative Repeater
For any communication system to be reliable, it should transmit and receive the signals effectively, without
any loss. A PCM wave, after transmitting through a channel, gets distorted due to the noise introduced by
the channel.
The regenerative pulse compared with the original and received pulse, will be as shown in the following
figure.
Figure: Block diagram of a regenerative repeater, consisting of an equalizer, a timing circuit and a decision device.
The channel produces amplitude and phase distortions to the signals. This is due to the transmission
characteristics of the channel. The Equalizer circuit compensates these losses by shaping the received
pulses.
Timing Circuit
To obtain a quality output, the sampling of the pulses should be done where the signal to noise ratio (SNR)
is maximum. To achieve this perfect sampling, a periodic pulse train has to be derived from the received
pulses, which is done by the timing circuit.
Hence, the timing circuit allots the timing interval for sampling at high SNR, through the received pulses.
Decision Device
The timing circuit determines the sampling times. The decision device is enabled at these sampling times.
The decision device decides its output based on whether the amplitude of the quantized pulse plus the noise exceeds a pre-determined value or not.
The signal s(t) with added white Gaussian noise n(t) of power spectral density η/2 is presented to an integrator. At time t = 0+ we require that capacitor C be uncharged. Such a discharged condition may be ensured by a brief closing of switch SW1 at time t = 0−, thus relieving C of any charge it may have acquired.
vo(T) = (1/τ)∫[0, T] [s(t) + n(t)] dt = (1/τ)∫[0, T] s(t) dt + (1/τ)∫[0, T] n(t) dt …4.7.1
The sample voltage due to the signal is
so(T) = (1/τ)∫[0, T] V dt = VT/τ …4.7.2
The sample voltage due to the noise is
no(T) = (1/τ)∫[0, T] n(t) dt …4.7.3
This noise-sampling voltage no(T) is a Gaussian random variable in contrast with n(t) which is a Gaussian
random process.
The variance of no(T) is given by
σo² = average of no²(T) = ηT/2τ² …4.7.4
It has a Gaussian probability density.
The output of the integrator, before the sampling switch, is vo(t) = so(t) + no(t). As shown in Fig. 4.7.3a, the signal output so(t) is a ramp, in each bit interval, of duration T. At the end of the interval the ramp attains the voltage so(T) which is +VT/τ or −VT/τ, depending on whether the bit is a 1 or a 0. At the end of each interval the switch SW1 in Fig. 4.7.2 closes momentarily to discharge the capacitor so that so(t) drops to
zero. The noise n0(t) shown in Fig. 4.7.3b, also starts each interval with no(0) = 0 and has the random value
n0(t) at the end of each interval. The sampling switch SW2 closes briefly just before the closing of SW1 and
hence reads the voltage
𝑣𝑜 (𝑇) = 𝑠𝑜 (𝑇) + 𝑛𝑜 (𝑇) …4.7.5
We would naturally like the output signal voltage to be as large as possible in comparison with the noise
voltage. Hence a figure of merit of interest is the signal-to-noise ratio
[so(T)]²/σo² = 2V²T/η …4.7.6
Figure 4.7.3 (a) The Signal Output and (b) the Noise Output of the integrator
This result is calculated from Eqs. (4.7.2) and (4.7.4). Note that the signal-to-noise ratio increases with increasing bit duration T and that it depends on V²T, which is the normalized energy of the bit signal. Therefore, a bit represented by a narrow, high-amplitude signal and one represented by a wide, low-amplitude signal are equally effective, provided V²T is kept constant. It is instructive to note that the integrator filters the
signal and the noise such that the signal voltage increases linearly with time, while the standard deviation
(rms value) of the noise increases more slowly, as √𝑇. Thus, the integrator enhances the signal relative to
the noise, and this enhancement increases with time as shown in Eq. (4.7.6).
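A quick Monte Carlo sketch (Python with NumPy; V, η, τ, the time step and the trial count are illustrative assumptions) reproducing the SNR of Eq. (4.7.6) and its linear growth with T:

import numpy as np

rng = np.random.default_rng(0)
V, eta, tau, dt = 1.0, 2.0, 1.0, 0.01   # noise PSD is eta/2; values arbitrary

for T in (0.5, 1.0, 2.0):               # doubling T should double the SNR
    n = int(T/dt)
    noise = rng.normal(0.0, np.sqrt(eta/(2*dt)), size=(20000, n))
    n0 = noise.sum(axis=1)*dt/tau       # integrator output due to noise alone
    s0 = V*T/tau                        # Eq. (4.7.2): ramp value at t = T
    print(T, s0**2/n0.var(), 2*V*V*T/eta)   # simulated SNR vs Eq. (4.7.6)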
Defining x ≡ no(T)/(√2 σo) and using Eq. (4.7.4), Eq. (4.8.2) may be written as
Pe = (1/2)(2/√π) ∫[V√(T/η), ∞] e^(−x²) dx = (1/2) erfc(V√(T/η)) = (1/2) erfc[(V²T/η)^(1/2)] = (1/2) erfc[(Es/η)^(1/2)] …4.8.3
in which Es = V²T is the signal energy of a bit.
Figure 4.8.1 The Gaussian Probability Density of the noise sample n0(T)
If the signal voltage were held instead at +V during some bit interval, then it is clear from the symmetry of the situation that the probability of error would again be given by Pe in Eq. (4.8.3). Hence Eq. (4.8.3) gives Pe quite generally.
The probability of error Pe as given in Eq. (4.8.3), is plotted in Fig. 4.8.2. Note that Pe decreases rapidly as
Es/η increases. The maximum value of Pe is 1/2. Thus, even if the signal is entirely lost in the noise so that
any determination of the receiver is a sheer guess, the receiver cannot be wrong more than half the time
on the average.
4.9 The Optimum Receiver
In the receiver system of Fig. 4.7.2, the signal was passed through a filter (i.e. the integrator), so that at the
sampling time the signal voltage might be emphasized in comparison with the noise voltage. We are
naturally led to ask whether the integrator is the optimum filter for the purpose of minimizing the
probability of error. We shall find that for the received signal contemplated in the system of Fig. 4.7.2 the
integrator is indeed the optimum filter.
We assume that the received signal is a binary waveform. One binary digit (bit) is represented by a signal
waveform S1(t) which persists for time T, while the other bit is represented by the waveform S2(t) which
also lasts for an interval T. For example, in the case of transmission at baseband, as shown in Fig. 4.7.2,
S1(t) = + V, while S2(t) = - V; for other modulation systems, different waveforms are transmitted. For
example, for PSK signalling, S1(t) = A cos ω0t and S2(t) = - A cos ω0t; while for FSK, S1(t) = A cos (ω0+Ω)t and
S2(t) = A cos (ω0- Ω)t.
Pe = (1/2)(2/√π) ∫[(so1(T) − so2(T))/(2√2 σo), ∞] e^(−x²) dx …4.9.4a
Note that for the case S01(T) = VT/τ and S02(T) = - VT/τ, and, using Eq. (4.7.4), Eq. (4.9.4b) reduces to Eq.
(4.8.3) as expected.
The complementary error function is a monotonically decreasing function of its argument. (See Fig. 4.8.2.)
Hence, as is to be anticipated, Pe decreases as the difference S01(T) - S02(T) becomes larger and as the rms
noise voltage σ0 becomes smaller. The optimum filter, then, is the filter which maximizes the ratio
po²(T)/σo² = [∫[−∞, ∞] X(f)Y(f) df]² / ∫[−∞, ∞] |X(f)|² df ≤ ∫[−∞, ∞] |Y(f)|² df …4.9.17
Using Eq. (4.9.16),
po²(T)/σo² ≤ ∫[−∞, ∞] |Y(f)|² df = ∫[−∞, ∞] [P(f)]²/Gn(f) df …4.9.18
The ratio po²(T)/σo² will attain its maximum value when the equal sign in Eq. (4.9.18) may be employed, as is the case when X(f) = K Y*(f). We then find from Eqs. (4.9.15) and (4.9.16) that the optimum filter which yields such a maximum ratio po²(T)/σo² has a transfer function
H(f) = K [P*(f)/Gn(f)] e^(−j2πfT) …4.9.19
Correspondingly, the maximum ratio is, from Eq. (4.9.18),
[po²(T)/σo²]max = ∫[−∞, ∞] [P(f)]²/Gn(f) df …4.9.20
A physically realizable filter will have an impulse response which is real, i.e., not complex. Therefore h(t) =
h*(t). Replacing the right-hand member of Eq. (4.10.2b) by its complex conjugate, an operation which
leaves the equation unaltered, we have
h(t) = (2K/η) ∫[−∞, ∞] P(f) e^(j2πf(t−T)) df …4.10.3a
= (2K/η) p(T − t) …4.10.3b
Finally since p(t)=s1(t) – s2(t), we have
h(t) = (2K/η)[s1(T − t) − s2(T − t)] …4.10.4
As shown in Fig. 4.10.1a, s1(t) is a triangular waveform of duration T, while s2(t) (Fig. 4.10.1b) is of identical form except of reversed polarity. Then p(t) is as shown in Fig. 4.10.1c, and p(−t) appears in Fig. 4.10.1d. The waveform p(−t) is the waveform p(t) rotated around the axis t = 0. Finally, the waveform p(T − t) called for as the impulse response of the filter in Eq. (4.10.3b) is this rotated waveform p(−t) translated in the positive t direction by amount T. This last translation ensures that h(t) = 0 for t < 0, as is required for a causal filter.
In general, the impulsive response of the matched filter consists of p(t) rotated about t = 0 and then
delayed long enough (i.e., a time T) to make the filter realizable. We may note in passing, that any
additional delay that a filter might introduce would in no way interfere with the performance of the filter,
for both signal and noise would be delayed by the same amount, and at the sampling the ratio of signal to
noise would remain unaltered.
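The rotate-and-delay recipe is easy to express in discrete time. A minimal sketch (Python with NumPy; the pulse shape here is an illustrative ramp rather than the triangle of Fig. 4.10.1, and the constant 2K/η is dropped since it only scales the output):

import numpy as np

T, N = 1.0, 400
dt = T/N
t = np.arange(N)*dt

s1 = t/T                        # an illustrative ramp pulse of duration T
s2 = -s1                        # reversed-polarity version of the same pulse
p = s1 - s2                     # p(t) = s1(t) - s2(t)
h = p[::-1]                     # h(t) ∝ p(T - t): rotate about t = 0, delay by T

y = np.convolve(s1, h)*dt       # filter response to s1 alone
print(np.argmax(y)*dt)          # ≈ T: the output peaks at the sampling instant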
Figure 4.10.1 The signals (a) s1(t), (b) s2(t), (c) p(t)=s1(t)- s2(t), (d) p(t) rotated about the axis t=0, (e) The
waveform of (d) translated to right by amount T.
4.11 Correlator
Coherent Detection: Correlation
Coherent detection is an alternative type of receiving system, which is identical in performance with the
matched filter receiver. Again, as shown in Fig. 4.11.1, the input is a binary data waveform S1(t) or S2(t)
corrupted by noise n(t). The bit length is T. The received signal plus noise vi(t) is multiplied by a locally
generated waveform S1(t) - S2(t). The output of the multiplier is passed through an integrator whose output
is sampled at t = T. As before, immediately after each sampling, at the beginning of each new bit interval,
all energy-storing elements in the integrator are discharged. This type of receiver is called a correlator, since we are correlating the received signal and noise with the waveform S1(t) − S2(t).
The output signal and noise of the correlator shown in Fig. 4.11.1 are
so(T) = (1/τ)∫[0, T] Si(t)[S1(t) − S2(t)] dt …4.11.1
no(T) = (1/τ)∫[0, T] n(t)[S1(t) − S2(t)] dt …4.11.2
where Si(t) is either S1(t) or S2(t), and where τ is the constant of the integrator (i.e., the integrator output is
l/τ times the integral of its input). We now compare these outputs with the matched filter outputs.
If h(t) is the impulsive response of the matched filter, then the output of the matched filter vo(t) can be
found using the convolution integral. We have
vo(t) = ∫[−∞, ∞] vi(λ) h(t − λ) dλ = ∫[0, T] vi(λ) h(t − λ) dλ …4.11.3
The limits on the integral have been changed to 0 and T since we are interested in the filter response to a
bit which extends only over that interval. Using Eq. (4.10.4) which gives h(t) for the matched filter, we have
h(t) = (2K/η)[s1(T − t) − s2(T − t)] …4.11.4
so that h(t − λ) = (2K/η)[s1(T − t + λ) − s2(T − t + λ)] …4.11.5
Substituting Eq. (4.11.5) in Eq. (4.11.3),
vo(t) = (2K/η) ∫[0, T] vi(λ)[s1(T − t + λ) − s2(T − t + λ)] dλ …4.11.6
Since vi(λ) = si(λ) + n(λ) and vo(t) = so(t) + no(t), setting t = T yields
so(T) = (2K/η) ∫[0, T] si(λ)[s1(λ) − s2(λ)] dλ …4.11.7
where si(λ) is equal to s1(λ) or s2(λ). Similarly,
no(T) = (2K/η) ∫[0, T] n(λ)[s1(λ) − s2(λ)] dλ …4.11.8
Comparing these with Eqs. (4.11.1) and (4.11.2), we see that so(T) and no(T) are identical in the two cases, apart from a constant gain factor which affects signal and noise alike. Hence the performances of the two systems are identical. The matched filter and the correlator are not simply two distinct, independent techniques which happen to yield the same result. In fact they are two techniques of synthesizing the same optimum filter h(t).
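This equivalence can be demonstrated numerically. A short sketch (Python with NumPy; the baseband levels, the noise level, and the gain constants 1/τ = 2K/η = 1 are illustrative assumptions) showing that the correlator output and the matched-filter output sampled at t = T coincide:

import numpy as np

rng = np.random.default_rng(1)
N = 500; T = 1.0; dt = T/N
s1 = np.ones(N); s2 = -s1               # baseband levels +V and -V
vi = s1 + rng.normal(0, 1, N)           # received s1(t) plus noise

corr = np.sum(vi*(s1 - s2))*dt          # correlator output at t = T
h = (s1 - s2)[::-1]                     # matched filter, Eq. (4.11.4)
mf = np.convolve(vi, h)[N - 1]*dt       # matched-filter output sampled at t = T
print(np.isclose(corr, mf))             # True: Eqs. (4.11.1) and (4.11.7) agree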
Figure 4.12.1 (a) BPSK representation in signal space showing r1 and r2 (b) Correlator receiver for BPSK showing that r = r1 + no or r2 + no
The error probability, i.e., the probability that the signal is mistakenly judged to be S1 is the probability that
𝑛0 > √𝑃𝑠 𝑇𝑏 . Thus the error probability Pe, is
Pe = ∫[√(Ps Tb), ∞] (1/√(2πσo²)) e^(−no²/2σo²) dno = (1/√(πη)) ∫[√(Ps Tb), ∞] e^(−no²/η) dno …4.12.2
The signal energy is Eb = PsTb and the distance between the end points of the signal vectors in Fig. 4.12.1 is d = 2√(Ps Tb). Accordingly we find that
Pe = (1/2) erfc√(Eb/η) = (1/2) erfc√(d²/4η) …4.12.4
The error probability is thus seen to fall off monotonically with an increase in distance between signals.
(ii) BFSK
The case of synchronous detection of orthogonal binary FSK is represented in Fig. 4.12.2. The signal space
is shown in (a). The unit vectors are
𝑢1 (𝑡) = √2/𝑇𝑏 cos 𝜔1 𝑡 …4.12.5a
and 𝑢2 (𝑡) = √2/𝑇𝑏 cos 𝜔2 𝑡 …4.12.5b
Figure 4.12.2 (a) Signal Space representation of BFSK (b) Correlator Receiver for BFSK
Orthogonality over the interval Tb has been ensured by the selection of ω1 and ω2. The transmitted signals s1 and s2 are of power Ps, and are
𝑠1 (𝑡) = √2𝑃𝑠 cos 𝜔1 𝑡 = √𝑃𝑠 𝑇𝑏 √2/𝑇𝑏 cos 𝜔1 𝑡 = √𝑃𝑠 𝑇𝑏 𝑢1 (𝑡) …4.12.6a
and 𝑠2 (𝑡) = √2𝑃𝑠 cos 𝜔2 𝑡 = √𝑃𝑠 𝑇𝑏 √2/𝑇𝑏 cos 𝜔2 𝑡 = √𝑃𝑠 𝑇𝑏 𝑢2 (𝑡) …4.12.6b
Detection is accomplished in the manner shown in Fig. 4.12.2 (b). The outputs are r1 and r2. In the absence
of noise when s1(t) is received, r2 = 0 and r1 = √𝑃𝑠 𝑇𝑏 . For S2(t), r1 = 0 and r2 =√𝑃𝑠 𝑇𝑏 . Hence the vectors
representing r1 and r2 are of length √𝑃𝑠 𝑇𝑏 as shown in Fig. 4.12.2(a).
Since the signal is two dimensional the relevant noise in the present case is
𝑛(𝑡) = 𝑛1 𝑢1 (𝑡) + 𝑛2 𝑢2 (𝑡) …4.12.7
in which n1 and n2 are Gaussian random variables each of variance σ1² = σ2² = η/2. Now let us suppose that
S2(t) is transmitted and that the observed voltages at the output of the processor are r'1 and r'2 as shown in Fig. 4.12.2a. We find that r'2 ≠ r2 because of the noise n2 and r'1 ≠ 0 because of the noise n1. We have drawn the locus of points equidistant from r1 and r2 and suppose that the received voltage r is closer to r1 than to r2. Then we shall have made an error in estimating which signal was transmitted. It is readily apparent that such an error will occur whenever n1 > r2 − n2, i.e., whenever n1 + n2 > √(Ps Tb). Since n1 and n2 are uncorrelated, the
random variable n0 = n1 + n2 has a variance σ02= σ12+ σ22=η and its probability density function is
f(no) = (1/√(2πη)) e^(−no²/2η) …4.12.8
Again we have Eb = PsTb and in the present case the distance between r1 and r2 is d = √2·√(Ps Tb). Accordingly, proceeding as in Eq. (4.12.2) we find that
Pe = (1/2) erfc√(Eb/2η) …4.12.10a
= (1/2) erfc√(d²/4η) …4.12.10b
Comparing Eqs. (4.12.10b) and (4.12.4) we see that when expressed in terms of the distance d, the error
probabilities are the same for BPSK and BFSK.
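Evaluating the two expressions side by side makes the comparison concrete. A minimal sketch (Python standard library only; the Eb values are arbitrary):

from math import erfc, sqrt

eta = 1.0                                  # noise power spectral density is eta/2
for Eb in (1.0, 2.0, 4.0, 8.0):            # arbitrary bit energies
    pe_bpsk = 0.5*erfc(sqrt(Eb/eta))       # Eq. (4.12.4)
    pe_bfsk = 0.5*erfc(sqrt(Eb/(2*eta)))   # Eq. (4.12.10a)
    print(Eb, pe_bpsk, pe_bfsk)

For equal error probability, BFSK evidently requires twice the bit energy (3 dB more) of BPSK, even though the two expressions coincide when written in terms of the distance d.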
----------X----------
Unit 5
Syllabus:
Information Theory Source Coding: Introduction to information theory, uncertainty and information,
average mutual information and entropy, source coding theorem, Huffman coding, Shannon-Fano-Elias
coding, Channel Coding: Introduction, channel models, channel capacity, channel coding, information
capacity theorem, Shannon limit.
Encoding is the process of converting the data or a given sequence of characters, symbols, alphabets etc.,
into a specified format, for the secured transmission of data.
Decoding is the reverse process of encoding which is to extract the information from the converted
format.
Data Encoding
Encoding is the process of using various patterns of voltage or current levels to represent 1s and 0s of the
digital signals on the transmission link.
The common types of line encoding are Unipolar, Polar, Bipolar, and Manchester.
Encoding Techniques
The data encoding technique is divided into the following types, depending upon the type of data
conversion.
Analog data to Analog signals − The modulation techniques such as Amplitude Modulation,
Frequency Modulation and Phase Modulation of analog signals, fall under this category.
Analog data to Digital signals − This process can be termed as digitization, which is done by Pulse
Code Modulation (PCM). Hence, it is nothing but digital modulation. As we have already discussed,
sampling and quantization are the important factors in this. Delta Modulation gives a better output
than PCM.
Digital data to Analog signals − The modulation techniques such as Amplitude Shift Keying (ASK),
Frequency Shift Keying (FSK), Phase Shift Keying (PSK), etc., fall under this category. These will be
discussed in subsequent chapters.
Digital data to Digital signals − These are discussed in this section. There are several ways to map digital data to digital signals, such as the unipolar, polar, bipolar, and Manchester line codes mentioned above.
Information is the source of a communication system, whether it is analog or digital. Information theory is
a mathematical approach to the study of coding of information along with the quantification, storage, and
communication of information.
Figure 5.01 A communication system: source → communication channel → destination.
1. The information associated with any event depends upon the probability with which it exists.
2. Higher the probability of occurring, lower the information associated with it and vice versa.
3. If the probability of occurring is 1, the information associated with that event is zero since we are
certain that a particular bit actually exists at the input of the system.
The unit of I(xi) is bit if b=2, Hartley or decit if b=10 and nat (natural unit) if b=e.
𝐼(𝑥𝑖 ) = − log 2 𝑃(𝑥𝑖 ) bits
𝐼(𝑥𝑖 ) = − log10 𝑃(𝑥𝑖 ) hartley or decit
𝐼(𝑥𝑖 ) = − log e 𝑃(𝑥𝑖 ) nats
It is standard to use b=2.
The unit bit (b) is a measure of information content.
Conversion of Units
log₂ a = ln a / ln 2 = log₁₀ a / log₁₀ 2
For example, log₂ 6 = log₁₀ 6 / log₁₀ 2 = (1/log₁₀ 2) log₁₀ 6 = 2.585
H(X ∣ y = y₀), …, H(X ∣ y = y_(K−1))
is a random variable which takes these values with probabilities p(y₀), …, p(y_(K−1)) respectively. The mean value of H(X ∣ y = yk) over the output alphabet Y is
H(X ∣ Y) = Σ (k=0 to K−1) H(X ∣ y = yk) p(yk)
= Σ (k=0 to K−1) Σ (j=0 to J−1) p(xj ∣ yk) p(yk) log₂[1/p(xj ∣ yk)]
= Σ (k=0 to K−1) Σ (j=0 to J−1) p(xj, yk) log₂[1/p(xj ∣ yk)]
Now, considering both the uncertainty conditions (before and after applying the inputs), we come to know that the difference, i.e.
H(x) − H(x ∣ y)
must represent the uncertainty about the channel input that is resolved by observing the channel output. This is called the mutual information of the channel. Denoting the mutual information as I(x;y), we can write the whole thing in an equation, as follows:
I(x;y) = H(x) − H(x ∣ y)
Properties of mutual information:
1. Mutual information is symmetric: I(x;y) = I(y;x)
2. Mutual information is non-negative: I(x;y) ≥ 0
The joint entropy is
H(x, y) = Σ (j=0 to J−1) Σ (k=0 to K−1) p(xj, yk) log₂[1/p(xj, yk)]
The conditional entropy H(X∣Y) is a measure of the average uncertainty about the channel input after the channel output has been observed:
H(X∣Y) = Σ (j=1 to m) Σ (i=1 to n) p(xi, yj) log₂[1/p(xi ∣ yj)]
The conditional entropy H(Y∣X) is the average uncertainty of the channel output given that X was transmitted:
H(Y∣X) = Σ (j=1 to m) Σ (i=1 to n) p(xi, yj) log₂[1/p(yj ∣ xi)]
The joint entropy H(X,Y) is the average uncertainty of the communication channel as a whole:
H(X,Y) = Σ (j=1 to m) Σ (i=1 to n) p(xi, yj) log₂[1/p(xi, yj)]
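These definitions can be exercised on a small joint distribution. A sketch (Python with NumPy; the 2×2 joint probabilities are invented illustrative values) using the identities H(X∣Y) = H(X,Y) − H(Y) and I(X;Y) = H(X) − H(X∣Y):

import numpy as np

pxy = np.array([[0.40, 0.10],      # invented joint probabilities p(xj, yk)
                [0.05, 0.45]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

def H(p):                          # entropy of a distribution, in bits
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p*np.log2(p))

HX, HY, HXY = H(px), H(py), H(pxy)
print(HXY - HY)                    # H(X|Y) = H(X,Y) - H(Y)
print(HX + HY - HXY)               # I(X;Y) = H(X) - H(X|Y), non-negative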
5.06 Efficiency
The transmission efficiency or channel efficiency is defined as
η = actual transinformation / maximum transinformation
i.e.
η = I(X;Y) / max I(X;Y) = I(X;Y)/Cs
Case I: If I(X;Y) = max I(X;Y), then η = 100% and the channel is fully utilized.
This source is discrete as it is not considered for a continuous time interval, but at discrete time intervals. This source is memoryless as it is fresh at each instant of time, without considering the previous values. The code produced by a discrete memoryless source (DMS) has to be efficiently represented, which is an important problem in communications. For this to happen, there are code words, which represent these source codes.
An objective of the source coding is to minimize the average bit rate required for representation of the
source by reducing the redundancy (i.e. increasing the efficiency) of the information source.
Code Length
Let X be a DMS with finite entropy H(X) and an alphabet {x1, x2, …, xm} with the corresponding probabilities
of occurrence P(xi) {i=1,2,…,m}.
Let the binary code word assigned to symbol xi have ni length, measured in bits. Then the average code
word length L per source symbol is given by:
L = Σ (i=1 to m) P(xi) ni
L represents the average number of bits per source symbol. The code efficiency η can be defined as
η = Lmin / L
Where Lmin is the minimum possible value of L.
Code redundancy 𝜸
𝛾 =1−𝜂
Example 5.01 A source delivers six digits with the following probabilities:
i A B C D E F
P(i) 1/2 1/4 1/8 1/16 1/32 1/32
Find (1) H(X) (2) H’(X) = r.H(X) (3) H’(X)max=C (4) Efficiency
Solution:
(1) H(X) = Σ P(i) log₂[1/P(i)] = (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + (1/32)(5) + (1/32)(5) = 1.938 bits/symbol
(2) H′(X) = r·H(X) = 1.938r bits/sec, where r is the symbol rate.
(3) H′(X)|max = C = r·log₂ 6 = 2.58r bits/sec
(4) η = H′(X)/H′(X)|max × 100% = (1.938/2.58) × 100% = 75.11%
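The arithmetic of this example is easily checked (Python standard library):

from math import log2

p = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]
H = sum(pi*log2(1/pi) for pi in p)
print(H)                 # 1.9375 ≈ 1.938 bits/symbol
print(100*H/log2(6))     # ≈ 75%: the efficiency computed above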
Example 5.02 Apply the Shannon Fano coding procedure for the following message ensemble:
X x1 x2 x3 x4 x5 x6 x7 x8
P 1/4 1/8 1/16 1/16 1/16 1/4 1/16 1/8
Take M=2. Find the Code Efficiency.
Solution:
As per the procedure explained in section 5.10, the code can be obtained as under:

Message   Prob.   Code   Code Length
x1        1/4     00     2
x6        1/4     01     2
x2        1/8     100    3
x8        1/8     101    3
x3        1/16    1100   4
x4        1/16    1101   4
x5        1/16    1110   4
x7        1/16    1111   4

The entropy is
H(X) = Σ P(xi) log₂[1/P(xi)] = 2(1/4)(2) + 2(1/8)(3) + 4(1/16)(4) = 2.75 bits/symbol
The average code word length is
L = Σ (i=1 to 8) P(xi)·ni = (1/4)(2) + (1/4)(2) + (1/8)(3) + (1/8)(3) + 4 × (1/16)(4) = 2.75 bits/symbol
Then the efficiency is
η = H(X)/L × 100% = (2.75/2.75) × 100% = 100% (Answer)
Example 5.03 A discrete memoryless source emits seven messages:
X  x1   x2   x3    x4    x5    x6    x7
P  0.4  0.2  0.12  0.08  0.08  0.08  0.04
Apply the Shannon-Fano coding procedure and calculate the efficiency of the code. Take M = 2.
Solution:
Message   Prob.   Step 1   Step 2   Step 3   Step 4   Code   Code Length
X1        0.4     0                                   0      1
X2        0.2     1        0        0                 100    3
X3        0.12    1        0        1                 101    3
X4        0.08    1        1        0        0        1100   4
X5        0.08    1        1        0        1        1101   4
X6        0.08    1        1        1        0        1110   4
X7        0.04    1        1        1        1        1111   4
Page 8 of 13
The entropy is
H(X) = Σ (i=1 to m) P(xi) log₂[1/P(xi)] = 2.42 bits/message
The average code word length is
L = Σ (i=1 to m) P(xi)·ni = 0.4 × 1 + 0.2 × 3 + 0.12 × 3 + 0.08 × 4 + 0.08 × 4 + 0.08 × 4 + 0.04 × 4 = 2.48 bits/message
Then the efficiency is
η = H(X)/L × 100% = (2.42/2.48) × 100% = 97.58% (Answer)
Example 5.04 For the same source:
X  x1   x2   x3    x4    x5    x6    x7
P  0.4  0.2  0.12  0.08  0.08  0.08  0.04
Apply the Huffman coding procedure and calculate the efficiency of the code.
Solution:
Applying the Huffman coding procedure and determining the codes as under:
Page 9 of 13
Message   Probability   Code   Code Length
X1        0.4           1      1
X2        0.2           000    3
X3        0.12          010    3
X4        0.08          0010   4
X5        0.08          0011   4
X6        0.08          0110   4
X7        0.04          0111   4

(The successive reductions combine the two least probable messages at each step: 0.08 + 0.04 → 0.12, 0.08 + 0.08 → 0.16, 0.12 + 0.12 → 0.24, 0.2 + 0.16 → 0.36, 0.36 + 0.24 → 0.6.)
The entropy is H(X) = Σ P(xi) log₂[1/P(xi)] = 2.42 bits/message, and the average code word length is
L = Σ (i=1 to m) P(xi)·ni = 0.4 × 1 + 0.2 × 3 + 0.12 × 3 + 0.08 × 4 + 0.08 × 4 + 0.08 × 4 + 0.04 × 4 = 2.48 bits/message
Then the efficiency is
η = H(X)/L × 100% = (2.42/2.48) × 100% = 97.58% (Answer)
Thus according to the theorem, if R ≤ C, then error-free transmission is possible even in the presence of noise. The negative statement of the theorem says that if the information rate R exceeds C (R > C), the error probability approaches unity as M increases.
The capacity of a band-limited channel with additive white Gaussian noise is
C = B log₂(1 + S/N) bits/sec
Here C is the maximum capacity of the channel in bits/second, otherwise called Shannon's capacity limit for the given channel, B is the bandwidth of the channel in Hertz, S is the signal power in Watts and N is the noise power, also in Watts. The ratio S/N is called the Signal-to-Noise Ratio (SNR). It can be ascertained that the maximum rate at which we can transmit information without any error is limited by the bandwidth, the signal level, and the noise level. It tells how many bits can be transmitted per second without errors over a channel of bandwidth B Hz, when the signal power is limited to S Watts and is exposed to additive white Gaussian noise.
Example 5.05 Calculate the channel capacity of a channel with BW 3kHz & S/N ratio given as 103,
assuming that the noise is white Gaussian noise.
Solution:
We know that
C = B log₂(1 + S/N)
= 3000 log₂(1 + 10³)
≈ 30,000 bits/sec
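The same computation in a line of Python (standard library):

from math import log2

B, snr = 3000.0, 1000.0
print(B*log2(1 + snr))      # 29901.68..., i.e. ≈ 30,000 bits/sec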
Continuing, the mutual information becomes
I(X;Y) = H(Y) − A Σ (j=1 to m) P(xj)
where A = H(Y ∣ xj) is independent of j and hence taken outside the summation sign. Also
Σ (j=1 to m) P(xj) = 1
therefore I(X;Y) = H(Y) − A. Hence the channel capacity is
C = max I(X;Y)
= max[H(Y) − A]
= max[H(Y)] − A
C = log n − A
where n is the total number of receiver symbols, and max[H(Y)] = log n.
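For a binary symmetric channel with crossover probability p, for example, n = 2 and A = H(Y ∣ xj) is the entropy of the two-point distribution {p, 1 − p}, so C = 1 − H(p). A minimal sketch (Python standard library):

from math import log2

def Hb(p):                      # binary entropy function, in bits
    return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1 - p)*log2(1 - p)

for p in (0.0, 0.1, 0.5):       # crossover probabilities of the BSC
    print(p, 1 - Hb(p))         # C = log2(n) - A = 1 - H(p) bits per use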
X x1 x2 x3 x4 x5 x6 x7 x8 x9 x10
P 0.1 0.13 0.01 0.04 0.08 0.29 0.06 0.22 0.05 0.02