
Program: B.Tech
Subject Name: Digital Communication
Subject Code: EC-502
Semester: 5th

Department of Electronics and Communication Engineering

Sub. Code: EC-502    Sub. Name: Digital Communication

Unit 4
Syllabus:
Other Digital Techniques: Pulse shaping to reduce inter-channel and inter-symbol interference, duobinary
encoding, Nyquist criterion and partial response signaling, Quadrature Partial Response (QPR) encoder and
decoder, regenerative repeater, eye pattern, equalizers.
Optimum Reception of Digital Signals: Baseband signal receiver, probability of error, maximum likelihood
detector, Bayes' theorem, optimum receiver for both baseband and passband reception (matched filter and
correlator), probability of error calculation for BPSK and BFSK.

4.1 Inter Symbol Interference


This is a form of distortion of a signal in which one or more symbols interfere with subsequent symbols,
causing errors or a degraded output.
Causes of ISI
The main causes of ISI are:
• Multi-path propagation
• Non-linear frequency response of the channel
ISI is unwanted and should be eliminated to obtain a clean output. Its causes should also be addressed in
order to minimize its effect.
To express the ISI present in the receiver output in mathematical form, consider the receiving filter output
y(t), which is sampled at times t_i = iT_b (with i taking on integer values), yielding

\[
y(t_i) = \mu \sum_{k=-\infty}^{\infty} a_k \, p(iT_b - kT_b)
       = \mu a_i + \mu \sum_{\substack{k=-\infty \\ k \neq i}}^{\infty} a_k \, p(iT_b - kT_b)
\]
In the above equation, the first term \( \mu a_i \) is produced by the i-th transmitted bit.
The second term represents the residual effect of all the other transmitted bits on the decoding of the i-th
bit. This residual effect is called Inter Symbol Interference.
In the absence of ISI, the output will be

\[ y(t_i) = \mu a_i \]

This equation shows that the i-th transmitted bit is correctly reproduced. However, the presence of ISI
introduces bit errors and distortions in the output.
When designing the transmitter and receiver, it is therefore important to minimize the effects of ISI, so
that the output is recovered with the lowest possible error rate.
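As a quick illustration (a minimal Python sketch with an assumed sampled channel pulse, not part of the
original notes), the sampled output y(t_i) can be computed for a random bit stream and split into the
desired term μa_i and the residual ISI term:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0
a = rng.choice([-1.0, 1.0], size=50)     # transmitted levels a_k
# Assumed pulse samples p(m*Tb): p(0) = 1, with small tails at
# +/- 1 and +/- 2 bit periods representing residual ISI.
p = {-2: 0.05, -1: 0.2, 0: 1.0, 1: 0.2, 2: 0.05}

def y_sample(i):
    """y(t_i) = mu * sum_k a_k p(iTb - kTb), split into desired and ISI parts."""
    total = sum(mu * a[k] * p[i - k] for k in range(len(a)) if (i - k) in p)
    desired = mu * a[i]                  # first term of the equation above
    return desired, total - desired      # (desired term, residual ISI)

desired, isi = y_sample(10)
print(f"desired mu*a_i = {desired:+.2f}, residual ISI = {isi:+.2f}")
```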

Correlative Coding
So far, we’ve discussed that ISI is an unwanted phenomenon and degrades the signal. But the same ISI if
used in a controlled manner is possible to achieve a bit rate of 2W bits per second in a channel of
bandwidth W Hertz. Such a scheme is called as Correlative Coding or Partial response signaling schemes.

Correlative Level Coding:

Correlative-level coding (partial-response signaling) means adding ISI to the transmitted signal in a
controlled manner. Since the ISI introduced into the transmitted signal is known, its effect can be removed
at the receiver. It is a practical method of achieving the theoretical maximum signaling rate of 2W symbols
per second in a bandwidth of W Hertz, using realizable and perturbation-tolerant filters.


Since the amount of ISI is known, the receiver can be designed to remove its effect from the signal. The
basic idea of correlative coding is best illustrated by the example of duobinary signaling.

4.2 Duo-binary Signaling

If fM is the frequency of the highest-frequency spectral component of the baseband waveform, then, in
AM, the bandwidth is B = 2fM. In frequency modulation, if the modulating waveform were a sinusoid of
frequency fM, and if the frequency deviation were ∆f, then the bandwidth would be

𝐵 = 2∆𝑓 + 2𝑓𝑀 …4.2.1

Altogether, it is apparent that bandwidth decreases with decreasing fM regardless of the modulation
technique employed. We consider now a mode of encoding a binary bit stream, called duobinary encoding
which effects a reduction of the maximum frequency in comparison to the maximum frequency of the un-
encoded data. Thus, if a carrier is amplitude or frequency modulated by a duobinary encoded waveform,
the bandwidth of the modulated waveform will be smaller than if the un-encoded data were used to AM or
FM modulate the carrier.
There are a number of methods available for duobinary encoding and decoding. One popular scheme is
shown in Fig. 4.2.1. The signal d(k) is the data bit stream with bit duration Tb. It makes excursions between
logic 1 and logic 0, and, as had been our custom, we take the corresponding voltage levels to be + 1V and -
1V. The signal b(k), at the output of the differential encoder also makes excursions between + 1V and -1V.
The waveform vD(k) is therefore

\[ v_D(k) = b(k) + b(k-1) \qquad \text{…4.2.2} \]

Figure 4.2.1 The Duobinary Encoder Decoder System


which can take on the values vD(k) = +2V, 0V and - 2V. The value of vD(k) in any interval k depends on both
b(k) and b(k - 1). Hence there is a correlation between the values of vD(k) in any two successive intervals.
For this reason the coding of Fig. 4.2.1 is referred to as correlative coding.
The correlation can be made apparent in another way. When the transition is made from one interval to
the next, it is not possible for vD(k) to change from +2V to - 2V or vice versa. In short, in any interval, vD(k)
cannot assume any of the possible levels independently of its level in the previous interval. Finally,
we note that the term duobinary is appropriate since in each bit interval, the generated voltage vD(k)
results from the combination of two bits.


The decoder, shown in Fig. 4.2.1, consists of a device that provides at its output the magnitude (absolute
value) of its input cascaded with a logical inverter. For the inverter we take it that logic 1 is + 1V or greater
and logic 0 is 0V. We can now verify that the decoded data 𝑑̂ (k) is indeed the input data d(k). For this
purpose we prepare the following truth table:
Truth Table for Duobinary Signaling

Adder input I1    Adder input I2    Adder output vD(k)    Magnitude |vD(k)|    Inverter output d̂(k)
-1 V (logic 0)    -1 V (logic 0)         -2 V                2 V (logic 1)              0
-1 V (logic 0)    +1 V (logic 1)          0 V                0 V (logic 0)              1
+1 V (logic 1)    -1 V (logic 0)          0 V                0 V (logic 0)              1
+1 V (logic 1)    +1 V (logic 1)         +2 V                2 V (logic 1)              0

From the table we see that the inverter output is \( I_1 \oplus I_2 \). The differential encoder (called a
precoder in the present application) output is:

\[ I_1 = b(k) = d(k) \oplus b(k-1) \qquad \text{…4.2.3} \]

The input \( I_2 = b(k-1) \), so that the inverter output \( \hat{d}(k) \) is:

\[ \hat{d}(k) = I_1 \oplus I_2 = d(k) \oplus b(k-1) \oplus b(k-1) = d(k) \qquad \text{…4.2.4} \]
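The encode/decode chain is compact enough to verify in a few lines of Python (an illustrative sketch; the
initial precoder state b(-1) = 0 is an assumption):

```python
import numpy as np

def duobinary_encode(d):
    """Precode b(k) = d(k) XOR b(k-1) (Eq. 4.2.3), map logic 0/1 to -1 V/+1 V,
    and form vD(k) = b(k) + b(k-1) (Eq. 4.2.2)."""
    b_prev = 0                       # assumed initial precoder state
    v_prev = -1.0
    vD = []
    for bit in d:
        b = bit ^ b_prev             # differential encoding
        v = 1.0 if b else -1.0
        vD.append(v + v_prev)        # takes the values +2, 0, or -2 V
        b_prev, v_prev = b, v
    return np.array(vD)

def duobinary_decode(vD):
    """Magnitude followed by logical inversion, as in the truth table:
    |vD| = 2 V -> logic 1 -> output 0;  |vD| = 0 V -> logic 0 -> output 1."""
    return np.array([0 if abs(v) >= 1.0 else 1 for v in vD])

d = np.random.randint(0, 2, 20)
assert np.array_equal(duobinary_decode(duobinary_encode(d)), d)   # Eq. 4.2.4
```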

Waveforms of Duobinary Signaling

The more rapidly d(k) switches back and forth between logic levels, the higher are the frequencies of the
spectral components generated. When d(k) switches at each time Tb, the switching speed is at a maximum.
The waveform d(k), under such circumstances, has the appearance of a square wave of period 2Tb and
frequency 1/(2Tb), as shown in Fig. 4.2.2a.

Figure 4.2.2 Waveforms of d(k), b(k) and vD(k)

If d(k) is the input to the duobinary encoder of Fig. 4.2.1 then, as can be verified, b(k) appears as in Fig.
4.2.2b and the waveform vD(k), which is to be transmitted, appears as in Fig. 4.2.2c. Observe that the
period of vD(k) is 4Tb, with corresponding frequency 1/(4Tb). Thus the frequency of vD(k) is half that of the
original unencoded waveform d(k). We may approximate d(k) as a sinusoid of frequency 1/(2Tb) and
vD(k) as a sinusoid of frequency 1/(4Tb). If we were free to select either d(k) or vD(k) as the modulating
waveform for a carrier, and if we were interested in conserving bandwidth, we would choose vD(k). If
amplitude modulation were involved, the bandwidth of the modulated waveform would be 2(1/(4Tb)) =
fb/2 using vD(k), since the modulating frequency is fM = 1/(4Tb), and would be 2(1/(2Tb)) = fb using d(k).
With frequency modulation, if the peak-to-peak carrier frequency deviation were 2∆f, the modulated
carrier would have a bandwidth 2∆f + 2(1/(2Tb)) with d(k) as the modulating signal, as in BFSK, and
2∆f + 2(1/(4Tb)) with vD(k) as the modulating signal.

4.3 Partial Response Signaling


Suppose that, corresponding to each bit of duration Tb of a data stream, we generate a positive impulse of
strength +1 whenever the bit is at logic 1 and a negative impulse of strength -1 whenever the bit is at logic
0. Suppose, further, that these impulses are applied to the input of the cosine filter. In Fig. 4.3.1 we have
drawn the filter responses individually to five successive positive impulses. For simplicity, we have in each
case drawn only the central lobe, and we have indicated by dots all the places where the individual
response waveforms pass through zero. Where there is no dot, the waveform has a finite value. The peaks
of the responses are separated by times Tb and the widths of the central lobes are 3Tb.
The total response is, of course, simply the sum of the individual responses.
We can make the following observations from Fig. 4.3.1:
1. If we sample the total response at a time when an individual response is at its peak, the sample will
have contributions from all the individual responses.
2. There is no possible time at which a sample of the total response is due only to a single individual
response.
3. Importantly, if we sample the total response midway between times when the individual responses are
at peak value, i.e., at t = (2k-1)Tb/2, then the sample value will have contributions in equal amount from
only the two individual responses that straddle the sampling time. These sampling times are indicated in
Fig. 4.3.1 by the light vertical lines. One such sampling time, yielding contributions from individual
responses 2 and 3, is explicitly marked. It can be calculated that at the sampling time the contribution from
each of the straddling individual responses will be a voltage Ifb. Note that in sampling at the indicated
times, we sample when the individual responses are not at peak value. For this reason, the present signal
processing is referred to as Partial Response Signaling.
Figure 4.3.1 Filter responses to Five Different Impulses

In partial-response signaling, we shall transmit a signal during each bit interval that has contributions from
two successive bits of an original baseband waveform. But this superposition need not prevent us from
disentangling the individual original baseband waveform bits. A complete (baseband) partial-response
signaling communications system is shown in Fig. 4.3.2.

Figure 4.3.2 Duo Binary Encoder and Decoder Using Cosine filter

It is seen to be just an adaptation of duobinary encoding and decoding. The ideal cosine filter employs both
a delay and an advance of the impulse by amount Tb/2, the total time between the delayed and advanced
impulses being Tb. Since, in the real world, a time advance is not possible, we employ only a delay of
amount Tb. The brick-wall filter at the receiver input serves to remove any out-of-band noise added to the
signal during transmission. It can be shown that the output data d̂(k) = d(k).

4.4 Quadrature Partial Response (QPR) Encoder and Decoder


Amplitude Modulation of Partial Response Signal
The baseband partial response (duobinary) signal may be used to amplitude or frequency modulate a
carrier. If amplitude modulation is employed, either double sideband suppressed carrier DSB/SC or
quadrature amplitude modulation QAM can be employed.
For the case of DSB/SC, the duobinary signal vT(t), shown in Fig. 4.3.2a, is multiplied by the carrier
\( \sqrt{2}\cos\omega_0 t \). The resulting signal is

\[ v_{DSB}(t) = \sqrt{2}\, v_T(t) \cos\omega_0 t \qquad \text{…4.4.1} \]

The bandwidth required to transmit the signal is twice the bandwidth of the baseband duobinary signal
which is fb/2. Hence the bandwidth BDSB of an amplitude modulated duobinary signal is
𝐵𝐷𝑆𝐵 = 2(𝑓𝑏 /2) = 𝑓𝑏 …4.4.2

If the duobinary signal is to amplitude modulate two carriers in quadrature, the circuit shown in Fig. 4.4.1 is
used and the resulting encoder is called a "quadrature partial response" (QPR) encoder.
Figure 4.4.1 shows that the data d(t) at the bit rate fb is first separated into an even and an odd bit stream,
de(t) and do(t), each operating at the bit rate fb/2. Both de(t) and do(t) are then separately duobinary
encoded into signals VTe(t) and VTo(t).
Each duobinary encoder is similar to the encoder shown in Fig. 4.3.2a except that each delay is now 2Tb,
rather than Tb, the data rate of the input is fb/2 rather than fb and the bandwidth of the brick wall filter is
now (1/2)(fb/2)= fb/4 rather than fb/2. Thus the bandwidth required to pass VTe(t) and VTo(t) is fb/4. Each
duobinary signal is then modulated using the quadrature carrier signals cos ωot and sin ωot.

Figure 4.4.1 QPR Encoder

The bandwidth of each of the quadrature amplitude modulated signals is


𝐵𝑄𝑃𝑅 = 2(𝑓𝑏 /4) = 𝑓𝑏 /2 …4.4.3

Hence the total bandwidth required to pass a QPR signal is also BQPR, since the two quadrature components
occupy the same frequency band.
It should be noted that if QPSK, rather than QPR, were used to encode the data d(t), the required
bandwidth would be BQPSK = fb. However, if 16-QAM or 16-PSK were used, the required bandwidth would
be B16QAM = B16PSK = fb/2. Thus the spectrum required to pass a QPR signal is similar to that required to
pass 16-QAM or 16-PSK. However, the QPR signal displays no (or in practice very small) side lobes, which
makes QPR the encoding system of choice when spectral width is the major concern. The drawback of QPR
is that the transmitted signal envelope is not constant but varies with time.
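A quick numeric check of these bandwidth relations (an illustrative sketch; the bit rate fb = 1 Mb/s is an
assumed value):

```python
fb = 1.0e6   # assumed bit rate of 1 Mb/s, for illustration only

bandwidths = {
    "DSB duobinary":  2 * (fb / 2),   # Eq. 4.4.2: fb
    "QPSK":           fb,             # stated in the text
    "QPR":            2 * (fb / 4),   # Eq. 4.4.3: fb/2 (quadrature parts share the band)
    "16-QAM/16-PSK":  fb / 2,         # stated in the text
}
for name, B in bandwidths.items():
    print(f"{name:>14}: {B / 1e3:.0f} kHz")
```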

QPR Decoder

Figure 4.4.2 QPR Decoder

A QPR decoder is shown in Fig. 4.4.2. As in 16-QAM and 16-PSK, to recover the carriers the input signal
VQ(t) is first raised to the fourth power, filtered, and then frequency-divided by 4. The result yields the two
quadrature
carriers cos ωot and sin ωot. Using these two quadrature carriers we demodulate VQ(t) and obtain the two
baseband duobinary signals VTe(t) and VTo(t). Duobinary decoding then takes place, each duobinary decoder
being similar to the decoder shown in Fig. 4.3.2b except that it operates at fb/2 rather than at fb. The
reconstructed data do(t) and de(t) are then combined to yield the data d(t).

4.5 Eye Pattern


An effective way to study the effects of ISI is the eye pattern, so named because, for binary waves, it
resembles a human eye. The interior region of the eye pattern is called the eye opening. The following
figure shows the image of an eye pattern.

Figure 4.6 Image of eye pattern


Jitter is the short-term variation of the signal transition instants from their ideal positions, and it may lead
to data errors.
As the effect of ISI increases, traces from the upper portion to the lower portion of the eye opening
increase, and the eye closes completely if the ISI is very high.
An eye pattern provides the following information about a particular system:
• Actual eye patterns are used to estimate the bit error rate and the signal-to-noise ratio.
• The width of the eye opening defines the time interval over which the received wave can be sampled
without error from ISI.
• The instant of time at which the eye opening is widest is the preferred sampling time.
• The rate of closure of the eye as the sampling time varies determines how sensitive the system is to
timing error.
• The height of the eye opening, at a specified sampling time, defines the noise margin of the system.
Hence, the interpretation of the eye pattern is an important consideration.
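An eye pattern is easy to generate in simulation (an illustrative sketch; the channel filter and noise level
are assumptions chosen only to produce visible, controlled ISI):

```python
import numpy as np
import matplotlib.pyplot as plt

sps = 16                                    # samples per bit (assumed)
bits = np.random.choice([-1.0, 1.0], 400)   # random binary levels
x = np.repeat(bits, sps)

# Simple moving-average channel to introduce ISI, plus additive noise
h = np.ones(sps) / sps
y = np.convolve(x, h, mode="same") + 0.05 * np.random.randn(x.size)

# Overlay traces two bit intervals long to form the eye
span = 2 * sps
for k in range(0, y.size - span, sps):
    plt.plot(np.arange(span) / sps, y[k:k + span], "b", alpha=0.08)
plt.xlabel("time (bit intervals)")
plt.ylabel("amplitude")
plt.title("Eye pattern")
plt.show()
```

Narrowing the filter or raising the noise level closes the eye, reproducing the behaviour described above.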

4.6 Equalization
For reliable communication, we need a quality output. The transmission losses of the channel and the
other factors that degrade the signal have to be corrected. The most common of these impairments, as we
have discussed, is ISI.
To make the signal free from ISI, and to ensure a maximum signal-to-noise ratio, we implement a method
called equalization. The following figure shows an equalizer in the receiver portion of a communication
system.
[Block diagram: digital source → pulse shaping → analog channel (noise and interference added) →
received analog signal, sampled at intervals Ts → linear digital equalizer → decision device.]

Figure 4.7 Equalization


The noise and interference denoted in the figure are introduced during transmission. The regenerative
repeater contains an equalizer circuit, which compensates for the transmission losses by shaping the
received pulses. The equalizer is practical to implement.

Error Probability and Figure-of-merit


The rate at which data can be communicated is called the data rate. The rate at which errors occur in the
bits while transmitting data is called the Bit Error Rate (BER).
The probability of bit error is the error probability. Increasing the Signal-to-Noise Ratio (SNR) decreases
the BER, and hence the error probability also decreases.
In an analog receiver, the figure of merit of the detection process is the ratio of the output SNR to the
input SNR. A larger figure of merit is an advantage.

Regenerative Repeater
For any communication system to be reliable, it should transmit and receive signals effectively, without
any loss. A PCM wave, after transmission through a channel, gets distorted due to the noise introduced by
the channel.
The regenerated pulse, compared with the original and the received pulse, is shown in the following
figure.

[Panels: original pulse | resulting (distorted) pulse | restored pulse]

Figure 4.8 Regenerative Repeater


For a better reproduction of the signal, a circuit called a regenerative repeater is employed in the path
before the receiver. This helps restore the signal from the losses it has incurred. The block diagram is
shown below.
[Block diagram: distorted PCM wave → amplifier and equalizer → decision-making device → regenerated
PCM wave, with a timing circuit driving the decision-making device.]

Figure 4.9 Block Diagram of Regenerative Repeater


This consists of an equalizer along with an amplifier, a timing circuit, and a decision-making device. The
working of each component is detailed as follows.

Equalizer
The channel produces amplitude and phase distortions to the signals. This is due to the transmission
characteristics of the channel. The Equalizer circuit compensates these losses by shaping the received
pulses.
Timing Circuit

To obtain a quality output, the sampling of the pulses should be done where the signal to noise ratio (SNR)
is maximum. To achieve this perfect sampling, a periodic pulse train has to be derived from the received
pulses, which is done by the timing circuit.
Hence, the timing circuit allots the timing interval for sampling at high SNR, through the received pulses.
Decision Device
The timing circuit determines the sampling times. The decision device is enabled at these sampling times.
The decision device decides its output based on whether the amplitude of the quantized pulse and the
noise, exceeds a pre-determined value or not.

4.7 Baseband Signal Receiver:


Consider that a binary-encoded signal consists of a time sequence of voltage levels +V or -V. If there is a
guard interval between the bits, the signal forms a sequence of positive and negative pulses. In either case
there is no particular interest in preserving the waveform of the signal after reception. We are interested
only in knowing within each bit interval whether the transmitted voltage was + V or - V. With noise
present, the received signal and noise together will yield sample values generally different from ± V. In this
case, what deduction shall we make from the sample value concerning the transmitted bit?
Suppose that the noise is Gaussian and therefore the noise voltage has a probability density which is
entirely symmetrical with respect to zero volts. Then the probability that the noise has increased the
sample value is the same as the probability that the noise has decreased the sample value. It then seems
entirely reasonable that we can do no better than to assume that if the sample value is positive the
transmitted level was + V, and if the sample value is negative the transmitted level was - V. It is, of course,
possible that at the sampling time the noise voltage may be of magnitude larger than V and of a polarity
opposite to the polarity assigned to the transmitted bit. In this case an error will be made as indicated in
Fig. 4.7.1. Here the transmitted bit is represented by the voltage + V which is sustained over an interval T
from t1 to t2. Noise has been superimposed on the level + V so that the voltage v represents the received
signal and noise. If now the sampling should happen to take place at a time t = t1 + ∆t, an error will have
been made.
We can reduce the probability of error by processing the received signal plus noise in such a manner that
we are then able to find a sample time where the sample voltage due to the signal is emphasized relative
to the sample voltage due to the noise. Such a processor (receiver) is shown in Fig. 4.7.2. The signal input
during a bit interval is indicated. As a matter of convenience we have set t = 0 at the beginning of the
interval. The waveform of the signal s(t) before t = 0 and after t = T has not been indicated since, as will
appear, the operation of the receiver during each bit interval is independent of the waveform during past
and future bit intervals.
Figure 4.7.1 Illustration that noise may cause an error
in determination of transmitted voltage level

The signal s(t), with added white Gaussian noise n(t) of power spectral density η/2, is presented to an
integrator. At time t = 0+ we require that capacitor C be uncharged. Such a discharged condition may be
ensured by a brief closing of switch SW1 at time t = 0-, thus relieving C of any charge it may have acquired
during the previous interval. The sample is taken at the output of the integrator by closing the sampling
switch SW2. This sample is taken at the end of the bit interval, at t = T. The signal processing indicated in
Fig. 4.7.2 is described by the phrase integrate and dump, the term dump referring to the abrupt discharge
of the capacitor after each sampling.

Figure 4.7.2 A Receiver for Binary Coded Signal

Peak Signal to RMS Noise Output Voltage Ratio


The integrator yields an output which is the integral of its input multiplied by 1/RC. Using τ = RC, we have

\[ v_o(T) = \frac{1}{\tau}\int_0^T [s(t) + n(t)]\,dt
         = \frac{1}{\tau}\int_0^T s(t)\,dt + \frac{1}{\tau}\int_0^T n(t)\,dt \qquad \text{…4.7.1} \]

The sample voltage due to the signal is

\[ s_o(T) = \frac{1}{\tau}\int_0^T V\,dt = \frac{VT}{\tau} \qquad \text{…4.7.2} \]

The sample voltage due to the noise is

\[ n_o(T) = \frac{1}{\tau}\int_0^T n(t)\,dt \qquad \text{…4.7.3} \]

This noise-sampling voltage n_o(T) is a Gaussian random variable, in contrast with n(t), which is a Gaussian
random process. The variance of n_o(T) is given by

\[ \sigma_o^2 = \overline{n_o^2(T)} = \frac{\eta T}{2\tau^2} \qquad \text{…4.7.4} \]

and it has a Gaussian probability density.

The output of the integrator, before the sampling switch, is v_o(t) = s_o(t) + n_o(t). As shown in Fig. 4.7.3a,
the signal output s_o(t) is a ramp, in each bit interval, of duration T. At the end of the interval the ramp
attains the voltage s_o(T), which is +VT/τ or -VT/τ, depending on whether the bit is a 1 or a 0. At the end of
each interval the switch SW1 in Fig. 4.7.2 closes momentarily to discharge the capacitor, so that s_o(t)
drops to zero. The noise n_o(t), shown in Fig. 4.7.3b, also starts each interval with n_o(0) = 0 and has the
random value n_o(T) at the end of each interval. The sampling switch SW2 closes briefly just before the
closing of SW1 and hence reads the voltage

\[ v_o(T) = s_o(T) + n_o(T) \qquad \text{…4.7.5} \]

We would naturally like the output signal voltage to be as large as possible in comparison with the noise
voltage. Hence a figure of merit of interest is the signal-to-noise ratio
\[ \frac{[s_o(T)]^2}{\overline{[n_o(T)]^2}} = \frac{2V^2T}{\eta} \qquad \text{…4.7.6} \]

Figure 4.7.3 (a) The Signal Output and (b) the Noise Output of the integrator

This result is calculated from Eqs. (4.7.2) and (4.7.4). Note that the signal-to-noise ratio increases with
increasing bit duration T and that it depends on V²T, which is the normalized energy of the bit signal.
Therefore, a bit represented by a narrow, high-amplitude signal and one represented by a wide, low-
amplitude signal are equally effective, provided V²T is kept constant. It is instructive to note that the
integrator filters the signal and the noise such that the signal voltage increases linearly with time, while the
standard deviation (rms value) of the noise increases more slowly, as √T. Thus, the integrator enhances the
signal relative to the noise, and this enhancement increases with time, as shown in Eq. (4.7.6).
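The following Monte Carlo sketch (illustrative only; the values of V, T, η and the step size are assumptions)
simulates the integrate-and-dump receiver and compares the measured SNR at the sampling instant with
the value 2V²T/η predicted by Eq. (4.7.6):

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_at_sample(V=1.0, T=1.0, eta=0.1, dt=1e-3, trials=2000):
    """Integrate s(t) + n(t) over one bit of duration T and return the measured
    ratio [s_o(T)]^2 / var(n_o(T)); the theory predicts 2*V**2*T/eta."""
    n_steps = int(T / dt)
    tau = 1.0                                # integrator constant; cancels in the ratio
    # White noise of two-sided PSD eta/2, sampled at dt, has variance (eta/2)/dt
    noise = rng.normal(0.0, np.sqrt(eta / (2 * dt)), size=(trials, n_steps))
    n_o = noise.sum(axis=1) * dt / tau       # (1/tau) * integral of n(t) dt
    s_o = V * T / tau                        # (1/tau) * integral of V dt (Eq. 4.7.2)
    return s_o**2 / n_o.var()

for T in (0.5, 1.0, 2.0):
    print(f"T = {T}: measured {snr_at_sample(T=T):7.1f}, theory {2 * T / 0.1:7.1f}")
```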

4.8 Probability of Error:


Since the function of a receiver of a data transmission is to distinguish the bit 1 from the bit 0 in the
presence of noise, a most important characteristic is the probability that an error will be made in such a
determination. We now calculate this error probability Pe for the integrate-and-dump receiver of Fig. 4.7.2.
We have seen that the probability density of the noise sample n_o(T) is Gaussian and hence appears as in
Fig. 4.8.1. The density is therefore given by
\[ f[n_o(T)] = \frac{e^{-n_o^2(T)/2\sigma_o^2}}{\sqrt{2\pi\sigma_o^2}} \qquad \text{…4.8.1} \]
where the variance σ_o² is given by Eq. (4.7.4). Suppose, then, that during some bit interval the input-
signal voltage is held at, say, -V. Then, at the sample time, the signal sample voltage is s_o(T) = -VT/τ, while
the noise sample is n_o(T). If n_o(T) is positive and larger in magnitude than VT/τ, the total sample voltage
v_o(T) = s_o(T) + n_o(T) will be positive. Such a positive sample voltage will result in an error, since, as
noted earlier, we have instructed the receiver to interpret a positive sample voltage to mean that the
signal voltage was +V during the bit interval. The probability of such a misinterpretation, that is, the
probability that n_o(T) > VT/τ, is given by the area of the shaded region in Fig. 4.8.1. The probability of
error is, using Eq. (4.8.1),
\[ P_e = \int_{VT/\tau}^{\infty} f[n_o(T)]\,dn_o(T)
       = \int_{VT/\tau}^{\infty} \frac{e^{-n_o^2(T)/2\sigma_o^2}}{\sqrt{2\pi\sigma_o^2}}\,dn_o(T) \qquad \text{…4.8.2} \]

Defining \( x \equiv n_o(T)/\sqrt{2}\sigma_o \) and using Eq. (4.7.4), Eq. (4.8.2) may be written as

\[ P_e = \frac{1}{2}\,\frac{2}{\sqrt{\pi}} \int_{V\sqrt{T/\eta}}^{\infty} e^{-x^2}\,dx
       = \frac{1}{2}\,\mathrm{erfc}\left(V\sqrt{\frac{T}{\eta}}\right)
       = \frac{1}{2}\,\mathrm{erfc}\left(\frac{V^2T}{\eta}\right)^{1/2}
       = \frac{1}{2}\,\mathrm{erfc}\left(\frac{E_s}{\eta}\right)^{1/2} \qquad \text{…4.8.3} \]

in which E_s = V²T is the signal energy of a bit.

Figure 4.8.1 The Gaussian Probability Density of the noise sample n0(T)
If the signal voltage were held instead at +V during some bit interval, then it is clear from the symmetry of
the situation that the probability of error would again be given by Eq. (4.8.3). Hence Eq. (4.8.3) gives Pe
quite generally.

Figure 4.8.2 Variation of Pe versus Es/η

The probability of error Pe as given in Eq. (4.8.3), is plotted in Fig. 4.8.2. Note that Pe decreases rapidly as
Es/η increases. The maximum value of Pe is 1/2. Thus, even if the signal is entirely lost in the noise so that
any determination of the receiver is a sheer guess, the receiver cannot be wrong more than half the time
on the average.
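Equation (4.8.3) is easy to evaluate numerically (an illustrative sketch; the Es/η values are assumed):

```python
import numpy as np
from scipy.special import erfc

def pe_integrate_and_dump(Es_over_eta):
    """Eq. (4.8.3): Pe = 0.5 * erfc(sqrt(Es/eta))."""
    return 0.5 * erfc(np.sqrt(Es_over_eta))

print(pe_integrate_and_dump(0.0))           # 0.5: pure guessing when the signal is lost
for r_db in (0, 4, 8, 12):                  # assumed Es/eta values in dB
    r = 10 ** (r_db / 10)
    print(f"Es/eta = {r_db:>2} dB -> Pe = {pe_integrate_and_dump(r):.3e}")
```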
4.9 The Optimum Receiver
In the receiver system of Fig. 4.7.2, the signal was passed through a filter (i.e. the integrator), so that at the
sampling time the signal voltage might be emphasized in comparison with the noise voltage. We are
naturally led to ask whether the integrator is the optimum filter for the purpose of minimizing the
probability of error. We shall find that for the received signal contemplated in the system of Fig. 4.7.2 the
integrator is indeed the optimum filter.
We assume that the received signal is a binary waveform. One binary digit (bit) is represented by a signal
waveform S1(t) which persists for time T, while the other bit is represented by the waveform S2(t) which
also lasts for an interval T. For example, in the case of transmission at baseband, as shown in Fig. 4.7.2,
S1(t) = + V, while S2(t) = - V; for other modulation systems, different waveforms are transmitted. For
example, for PSK signalling, S1(t) = A cos ω0t and S2(t) = - A cos ω0t; while for FSK, S1(t) = A cos (ω0+Ω)t and
S2(t) = A cos (ω0- Ω)t.

Figure 4.9.1 A Receiver for binary coded signaling


As shown in Fig. 4.9.1, the input, which is S1(t) or S2(t), is corrupted by the addition of noise n(t). The noise
is Gaussian and has a spectral density G(f). [In most cases of interest the noise is white, so that G(f) = η/2.
However, we shall assume the more general possibility, since it introduces no complication to do so.] The
signal and noise are filtered and then sampled at the end of each bit interval. The output sample is either
vo(T) = so1(T) + no(T) or vo(T) = so2(T) + no(T). We assume that immediately after each sample, every
energy-storing element in the filter has been discharged.
We note that in the absence of noise the output sample would be vo(T) = so1(T) or so2(T). When noise is
present we have shown that, to minimize the probability of error, one should assume that S1(t) has been
transmitted if vo(T) is closer to so1(T) than to so2(T). Similarly, we assume S2(t) has been transmitted if
vo(T) is closer to so2(T). The decision boundary is therefore midway between so1(T) and so2(T). For
example, in the baseband system of Fig. 4.7.2, where so1(T) = VT/τ and so2(T) = -VT/τ, the decision
boundary is vo(T) = 0. In general, we shall take the decision boundary to be

\[ v_o(T) = \frac{s_{o1}(T) + s_{o2}(T)}{2} \qquad \text{…4.9.1} \]
The probability of error for this general case may be deduced as an extension of the considerations used in
the baseband case. Suppose that so1(T) > so2(T) and that S2(t) was transmitted. If, at the sampling time, the
noise no(T) is positive and larger in magnitude than the voltage difference (1/2)[so1(T) + so2(T)] - so2(T), an
error will have been made. That is, an error will result if

\[ n_o(T) \ge \frac{s_{o1}(T) - s_{o2}(T)}{2} \qquad \text{…4.9.2} \]
Hence the probability of error is

\[ P_e = \int_{[s_{o1}(T)-s_{o2}(T)]/2}^{\infty} \frac{e^{-n_o^2(T)/2\sigma_o^2}}{\sqrt{2\pi\sigma_o^2}}\,dn_o(T) \qquad \text{…4.9.3} \]
If we make the substitution \( x \equiv n_o(T)/\sqrt{2}\sigma_o \), the above equation becomes

\[ P_e = \frac{1}{2}\,\frac{2}{\sqrt{\pi}} \int_{[s_{o1}(T)-s_{o2}(T)]/2\sqrt{2}\sigma_o}^{\infty} e^{-x^2}\,dx \qquad \text{…4.9.4a} \]

\[ P_e = \frac{1}{2}\,\mathrm{erfc}\left[\frac{s_{o1}(T) - s_{o2}(T)}{2\sqrt{2}\,\sigma_o}\right] \qquad \text{…4.9.4b} \]

Note that for the case so1(T) = VT/τ and so2(T) = -VT/τ, using Eq. (4.7.4), Eq. (4.9.4b) reduces to Eq. (4.8.3),
as expected.
The complementary error function is a monotonically decreasing function of its argument (see Fig. 4.8.2).
Hence, as is to be anticipated, Pe decreases as the difference so1(T) - so2(T) becomes larger and as the rms
noise voltage σo becomes smaller. The optimum filter, then, is the filter which maximizes the ratio

\[ \gamma = \frac{s_{o1}(T) - s_{o2}(T)}{\sigma_o} \qquad \text{…4.9.5} \]
We now calculate the transfer function H(f) of this optimum filter.

Optimum Filter Transfer Function H(f)


The fundamental requirement we make of a binary-encoded data receiver is that it distinguish the voltages
S1(t) + n(t) and S2(t) + n(t). We have seen that the ability of the receiver to do so depends on how large a
particular receiver can make γ. It is important to note that γ is proportional not to S1(t) nor to S2(t)
individually, but rather to the difference between them. For example, in the baseband system we
represented the signals by voltage levels +V and -V. But clearly, if our only interest were in distinguishing
levels, we would do just as well to use +2 volts and 0 volt, or +8 volts and +6 volts, etc. (The +V and -V
levels, however, have the advantage of requiring the least average power to be transmitted.) Hence, while
S1(t) or S2(t) is the received signal, the signal which is to be compared with the noise, i.e., the signal which
is relevant in all our error-probability calculations, is the difference signal

\[ p(t) \equiv s_1(t) - s_2(t) \qquad \text{…4.9.6} \]


Thus, for the purpose of calculating the minimum error probability, we shall assume that the input signal
to the optimum filter is p(t). The corresponding output signal of the filter is then

\[ p_o(T) \equiv s_{o1}(T) - s_{o2}(T) \qquad \text{…4.9.7} \]

Let P(f) and P_o(f) be the Fourier transforms of p(t) and p_o(t), respectively. If H(f) is the transfer function
of the filter,

\[ P_o(f) = H(f)P(f) \qquad \text{…4.9.8} \]

and

\[ p_o(T) = \int_{-\infty}^{\infty} P_o(f)\,e^{j2\pi fT}\,df = \int_{-\infty}^{\infty} H(f)P(f)\,e^{j2\pi fT}\,df \qquad \text{…4.9.9} \]
The input noise to the optimum filter is n(t). The output noise is n_o(t), which has a power spectral density
G_no(f) related to the power spectral density of the input noise G_n(f) by

\[ G_{no}(f) = |H(f)|^2 G_n(f) \qquad \text{…4.9.10} \]

Using Parseval's theorem, we find that the normalized output noise power, i.e., the noise variance σ_o², is

\[ \sigma_o^2 = \int_{-\infty}^{\infty} G_{no}(f)\,df = \int_{-\infty}^{\infty} |H(f)|^2 G_n(f)\,df \qquad \text{…4.9.11} \]
From Eqs. (4.9.9) and (4.9.11),

\[ \gamma^2 = \frac{p_o^2(T)}{\sigma_o^2}
           = \frac{\left[\int_{-\infty}^{\infty} H(f)P(f)\,e^{j2\pi fT}\,df\right]^2}
                  {\int_{-\infty}^{\infty} |H(f)|^2 G_n(f)\,df} \qquad \text{…4.9.12} \]
Equation (4.9.12) is unaltered by the inclusion or deletion of the absolute-value sign in the numerator,
since the quantity within the magnitude sign, p_o(T), is a positive real number. The sign has been included,
however, in order to allow further development of the equation through the use of the Schwarz inequality.
The Schwarz inequality states that, given arbitrary complex functions X(f) and Y(f) of a common variable f,

\[ \left|\int_{-\infty}^{\infty} X(f)Y(f)\,df\right|^2 \le \int_{-\infty}^{\infty} |X(f)|^2\,df \int_{-\infty}^{\infty} |Y(f)|^2\,df \qquad \text{…4.9.13} \]

The equal sign applies when

\[ X(f) = K\,Y^*(f) \qquad \text{…4.9.14} \]
where K is an arbitrary constant and Y*(f) is the complex conjugate of Y(f).
We now apply the Schwarz inequality to Eq. (4.9.12) by making the identifications

\[ X(f) \equiv \sqrt{G_n(f)}\, H(f) \qquad \text{…4.9.15} \]

and

\[ Y(f) \equiv \frac{P(f)\,e^{j2\pi fT}}{\sqrt{G_n(f)}} \qquad \text{…4.9.16} \]

Using Eqs. (4.9.15) and (4.9.16) and the Schwarz inequality (4.9.13), we may write Eq. (4.9.12) as

\[ \frac{p_o^2(T)}{\sigma_o^2}
   = \frac{\left[\int_{-\infty}^{\infty} X(f)Y(f)\,df\right]^2}{\int_{-\infty}^{\infty} |X(f)|^2\,df}
   \le \int_{-\infty}^{\infty} |Y(f)|^2\,df \qquad \text{…4.9.17} \]

Using Eq. (4.9.16),

\[ \frac{p_o^2(T)}{\sigma_o^2} \le \int_{-\infty}^{\infty} |Y(f)|^2\,df
   = \int_{-\infty}^{\infty} \frac{|P(f)|^2}{G_n(f)}\,df \qquad \text{…4.9.18} \]
The ratio p_o²(T)/σ_o² will attain its maximum value when the equal sign in Eq. (4.9.18) may be employed,
as is the case when X(f) = KY*(f). We then find from Eqs. (4.9.15) and (4.9.16) that the optimum filter which
yields such a maximum ratio has the transfer function

\[ H(f) = K\,\frac{P^*(f)}{G_n(f)}\,e^{-j2\pi fT} \qquad \text{…4.9.19} \]
Correspondingly, the maximum ratio is, from Eq. (4.9.18),

\[ \left[\frac{p_o^2(T)}{\sigma_o^2}\right]_{max} = \int_{-\infty}^{\infty} \frac{|P(f)|^2}{G_n(f)}\,df \qquad \text{…4.9.20} \]

4.10 White Noise: The Matched Filter


An optimum filter which yields the maximum ratio p_o²(T)/σ_o² is called a matched filter when the input
noise is white. In this case G_n(f) = η/2, and Eq. (4.9.19) becomes

\[ H(f) = K\,\frac{P^*(f)}{\eta/2}\,e^{-j2\pi fT} \qquad \text{…4.10.1} \]
The impulse response of this filter, i.e., the response of the filter to a unit-strength impulse applied at
t = 0, is

\[ h(t) = \mathcal{F}^{-1}[H(f)] = \frac{2K}{\eta}\int_{-\infty}^{\infty} P^*(f)\,e^{-j2\pi fT}e^{j2\pi ft}\,df \qquad \text{…4.10.2a} \]

\[ \phantom{h(t)} = \frac{2K}{\eta}\int_{-\infty}^{\infty} P^*(f)\,e^{j2\pi f(t-T)}\,df \qquad \text{…4.10.2b} \]

A physically realizable filter will have an impulse response which is real, i.e., not complex. Therefore
h(t) = h*(t). Replacing the right-hand member of Eq. (4.10.2b) by its complex conjugate, an operation
which leaves the equation unaltered, we have

\[ h(t) = \frac{2K}{\eta}\int_{-\infty}^{\infty} P(f)\,e^{j2\pi f(T-t)}\,df \qquad \text{…4.10.3a} \]

\[ \phantom{h(t)} = \frac{2K}{\eta}\,p(T-t) \qquad \text{…4.10.3b} \]
Finally, since p(t) = s1(t) - s2(t), we have

\[ h(t) = \frac{2K}{\eta}\,[s_1(T-t) - s_2(T-t)] \qquad \text{…4.10.4} \]
As shown in Fig. 4.10.1a, s1(t) is a triangular waveform of duration T, while s2(t) (Fig. 4.10.1b) is of identical
form except of reversed polarity. Then p(t) is as shown in Fig. 4.10.1c, and p(-t) appears in Fig. 4.10.1d. The
waveform p(-t) is the waveform p(t) rotated around the axis t = 0. Finally, the waveform p(T - t), called for
as the impulse response of the filter in Eq. (4.10.3b), is this rotated waveform p(-t) translated in the
positive t direction by amount T. This last translation ensures that h(t) = 0 for t < 0, as is required for a
causal filter.
In general, the impulse response of the matched filter consists of p(t) rotated about t = 0 and then delayed
long enough (i.e., a time T) to make the filter realizable. We may note in passing that any additional delay
a filter might introduce would in no way interfere with the performance of the filter, for both signal and
noise would be delayed by the same amount, and at the sampling time the ratio of signal to noise would
remain unaltered.

Figure 4.10.1 The signals (a) s1(t), (b) s2(t), (c) p(t)=s1(t)- s2(t), (d) p(t) rotated about the axis t=0, (e) The
waveform of (d) translated to right by amount T.
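A discrete-time sketch of Eq. (4.10.4) (illustrative; the triangular p(t) of Fig. 4.10.1 and the sample count
are assumptions) shows that h(t) = p(T - t) is simply p(t) time-reversed, and that the filter output sampled
at t = T equals the energy of p(t):

```python
import numpy as np

N = 64                                   # samples per bit interval T (assumed)
t = np.arange(N) / N
p = np.where(t < 0.5, t, 1.0 - t)        # assumed triangular p(t) = s1(t) - s2(t)
h = p[::-1]                              # matched filter: h(t) = p(T - t)

# Convolving p with h and sampling at t = T gives sum(p * p), i.e. the pulse energy
y = np.convolve(p, h)
assert np.isclose(y[N - 1], np.sum(p * p))
print(f"matched-filter sample p_o(T) is proportional to sum(p**2) = {y[N - 1]:.3f}")
```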

4.11 Correlator
Coherent Detection: Correlation
Coherent detection is an alternative type of receiving system, identical in performance to the matched
filter receiver. Again, as shown in Fig. 4.11.1, the input is a binary data waveform S1(t) or S2(t) corrupted by
noise n(t). The bit length is T. The received signal plus noise vi(t) is multiplied by a locally generated
waveform S1(t) - S2(t). The output of the multiplier is passed through an integrator whose output is
sampled at t = T. As before, immediately after each sampling, at the beginning of each new bit interval, all
energy-storing elements in the integrator are discharged. This type of receiver is called a correlator, since
we are correlating the received signal and noise with the waveform S1(t) - S2(t).

Figure 4.11.1 A Coherent System of Signal Reception

The output signal and noise of the correlator shown in Fig. 4.11.1 are

\[ s_o(T) = \frac{1}{\tau}\int_0^T s_i(t)\,[s_1(t) - s_2(t)]\,dt \qquad \text{…4.11.1} \]

\[ n_o(T) = \frac{1}{\tau}\int_0^T n(t)\,[s_1(t) - s_2(t)]\,dt \qquad \text{…4.11.2} \]
where s_i(t) is either s1(t) or s2(t), and where τ is the constant of the integrator (i.e., the integrator output
is 1/τ times the integral of its input). We now compare these outputs with the matched filter outputs.
If h(t) is the impulse response of the matched filter, then the output of the matched filter vo(t) can be
found using the convolution integral. We have

\[ v_o(t) = \int_{-\infty}^{\infty} v_i(\lambda)\,h(t-\lambda)\,d\lambda
          = \int_0^T v_i(\lambda)\,h(t-\lambda)\,d\lambda \qquad \text{…4.11.3} \]
The limits on the integral have been changed to 0 and T since we are interested in the filter response to a
bit which extends only over that interval. Using Eq. (4.10.4), which gives h(t) for the matched filter, we
have

\[ h(t) = \frac{2K}{\eta}\,[s_1(T-t) - s_2(T-t)] \qquad \text{…4.11.4} \]

so that

\[ h(t-\lambda) = \frac{2K}{\eta}\,[s_1(T-t+\lambda) - s_2(T-t+\lambda)] \qquad \text{…4.11.5} \]
Substituting Eq. (4.11.5) in Eq. (4.11.3),

\[ v_o(t) = \frac{2K}{\eta}\int_0^T v_i(\lambda)\,[s_1(T-t+\lambda) - s_2(T-t+\lambda)]\,d\lambda \qquad \text{…4.11.6} \]

Since v_i(λ) = s_i(λ) + n(λ) and v_o(t) = s_o(t) + n_o(t), setting t = T yields

\[ s_o(T) = \frac{2K}{\eta}\int_0^T s_i(\lambda)\,[s_1(\lambda) - s_2(\lambda)]\,d\lambda \qquad \text{…4.11.7} \]

where s_i(λ) is equal to s1(λ) or s2(λ). Similarly,

\[ n_o(T) = \frac{2K}{\eta}\int_0^T n(\lambda)\,[s_1(\lambda) - s_2(\lambda)]\,d\lambda \qquad \text{…4.11.8} \]
Comparing these results with Eqs. (4.11.1) and (4.11.2), we see that s_o(T) and n_o(T) are, apart from the
constant gain factor, identical in the two systems. Hence the performances of the two systems are
identical. The matched filter and the correlator are not simply two distinct, independent techniques which
happen to yield the same result; in fact, they are two techniques of synthesizing the same optimum filter
h(t).
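This equivalence can be checked numerically (a minimal sketch; the signal shapes and noise level are
assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # samples per bit (assumed discretization)
s1 = np.ones(N)                          # e.g., baseband levels +V ...
s2 = -np.ones(N)                         # ... and -V
vi = s1 + 0.5 * rng.standard_normal(N)   # received bit plus noise

# Correlator: multiply by s1 - s2 and integrate over the bit interval
corr_out = np.sum(vi * (s1 - s2))

# Matched filter: h(t) = s1(T-t) - s2(T-t); convolve and sample at t = T
h = (s1 - s2)[::-1]
mf_out = np.convolve(vi, h)[N - 1]

assert np.isclose(corr_out, mf_out)      # identical samples at t = T
```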

4.12 Probability of error calculation for BPSK and BFSK


(i) BPSK
The synchronous detector for BPSK is shown in Fig. 4.12.1(b). Since the BPSK signal is one-dimensional, the
only relevant noise in the present case is

\[ n(t) = n_0 u(t) = n_0\sqrt{2/T_b}\,\cos\omega_0 t \qquad \text{…4.12.1} \]

where n_0 is a Gaussian random variable of variance σ_o² = η/2. Now let us suppose that s2 was
transmitted.

Figure 4.12.1 (a) BPSK representation in signal space showing r1 and r2. (b) Correlator receiver for BPSK,
showing that r = r1 + n0 or r2 + n0

The error probability, i.e., the probability that the signal is mistakenly judged to be s1, is the probability
that \( n_0 > \sqrt{P_s T_b} \). Thus the error probability Pe is

\[ P_e = \int_{\sqrt{P_s T_b}}^{\infty} \frac{1}{\sqrt{2\pi\sigma_o^2}}\,e^{-n_0^2/2\sigma_o^2}\,dn_0
       = \int_{\sqrt{P_s T_b}}^{\infty} \frac{1}{\sqrt{\pi\eta}}\,e^{-n_0^2/\eta}\,dn_0 \qquad \text{…4.12.2} \]

Letting \( y^2 = n_0^2/2\sigma_o^2 = n_0^2/\eta \), we obtain

\[ P_e = \frac{1}{\sqrt{\pi}}\int_{\sqrt{P_s T_b/\eta}}^{\infty} e^{-y^2}\,dy
       = \frac{1}{2}\,\frac{2}{\sqrt{\pi}}\int_{\sqrt{P_s T_b/\eta}}^{\infty} e^{-y^2}\,dy
       = \frac{1}{2}\,\mathrm{erfc}\sqrt{P_s T_b/\eta} \qquad \text{…4.12.3} \]

The signal energy is E_b = P_s T_b, and the distance between the end points of the signal vectors in
Fig. 4.12.1 is d = 2√(P_s T_b). Accordingly, we find that

\[ P_e = \frac{1}{2}\,\mathrm{erfc}\sqrt{E_b/\eta} = \frac{1}{2}\,\mathrm{erfc}\sqrt{d^2/4\eta} \qquad \text{…4.12.4} \]
The error probability is thus seen to fall off monotonically with an increase in distance between signals.

(ii) BFSK
The case of synchronous detection of orthogonal binary FSK is represented in Fig. 4.12.2. The signal space
is shown in (a). The unit vectors are

\[ u_1(t) = \sqrt{2/T_b}\,\cos\omega_1 t \qquad \text{…4.12.5a} \]

\[ u_2(t) = \sqrt{2/T_b}\,\cos\omega_2 t \qquad \text{…4.12.5b} \]

Figure 4.12.2 (a) Signal Space representation of BFSK (b) Correlator Receiver for BFSK

Orthogonality over the interval Tb has been ensured by the selection of ω1 and ω2. The transmitted signals
s1 and s2 are of power Ps:

\[ s_1(t) = \sqrt{2P_s}\cos\omega_1 t = \sqrt{P_s T_b}\,\sqrt{2/T_b}\cos\omega_1 t = \sqrt{P_s T_b}\;u_1(t) \qquad \text{…4.12.6a} \]

\[ s_2(t) = \sqrt{2P_s}\cos\omega_2 t = \sqrt{P_s T_b}\,\sqrt{2/T_b}\cos\omega_2 t = \sqrt{P_s T_b}\;u_2(t) \qquad \text{…4.12.6b} \]

Detection is accomplished in the manner shown in Fig. 4.12.2 (b). The outputs are r1 and r2. In the absence
of noise when s1(t) is received, r2 = 0 and r1 = √𝑃𝑠 𝑇𝑏 . For S2(t), r1 = 0 and r2 =√𝑃𝑠 𝑇𝑏 . Hence the vectors
representing r1 and r2 are of length √𝑃𝑠 𝑇𝑏 as shown in Fig. 4.12.2(a).
Since the signal is two-dimensional, the relevant noise in the present case is

\[ n(t) = n_1 u_1(t) + n_2 u_2(t) \qquad \text{…4.12.7} \]

in which n1 and n2 are Gaussian random variables, each of variance σ1² = σ2² = η/2. Now let us suppose
that s2(t) is transmitted and that the observed voltages at the output of the processor are r'1 and r'2, as
shown in Fig. 4.12.2a. We find that r'2 ≠ r2 because of the noise n2, and r'1 ≠ 0 because of the noise n1. We
have drawn the locus of points equidistant from r1 and r2; suppose that the received voltage r is closer to
r1 than to r2. Then we shall have made an error in estimating which signal was transmitted. It is readily
apparent that such an error will occur whenever n1 > r2 - n2, i.e., whenever n1 + n2 > √(Ps Tb). Since n1 and
n2 are uncorrelated, the random variable n0 = n1 + n2 has a variance σ0² = σ1² + σ2² = η, and its probability
density function is

\[ f(n_0) = \frac{1}{\sqrt{2\pi\eta}}\,e^{-n_0^2/2\eta} \qquad \text{…4.12.8} \]

The probability of error is

\[ P_e = \frac{1}{\sqrt{2\pi\eta}}\int_{\sqrt{P_s T_b}}^{\infty} e^{-n_0^2/2\eta}\,dn_0 \qquad \text{…4.12.9} \]

Again we have E_b = P_s T_b, and in the present case the distance between r1 and r2 is d = √2·√(P_s T_b),
so that d² = 2P_s T_b. Accordingly, proceeding as in Eq. (4.12.3), we find that

\[ P_e = \frac{1}{2}\,\mathrm{erfc}\sqrt{E_b/2\eta} \qquad \text{…4.12.10a} \]

\[ \phantom{P_e} = \frac{1}{2}\,\mathrm{erfc}\sqrt{d^2/4\eta} \qquad \text{…4.12.10b} \]
Comparing Eqs. (4.12.10b) and (4.12.4) we see that when expressed in terms of the distance d, the error
probabilities are the same for BPSK and BFSK.
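The two formulas are compared numerically below (an illustrative sketch; the Eb/η values are assumed):

```python
import numpy as np
from scipy.special import erfc

def pe_bpsk(Eb_over_eta):
    """Eq. (4.12.4): Pe = 0.5 * erfc(sqrt(Eb/eta))."""
    return 0.5 * erfc(np.sqrt(Eb_over_eta))

def pe_bfsk(Eb_over_eta):
    """Eq. (4.12.10a): Pe = 0.5 * erfc(sqrt(Eb/(2*eta)))."""
    return 0.5 * erfc(np.sqrt(Eb_over_eta / 2))

for r_db in (4, 8, 12):                  # assumed Eb/eta values in dB
    r = 10 ** (r_db / 10)
    print(f"Eb/eta = {r_db} dB: BPSK Pe = {pe_bpsk(r):.2e}, BFSK Pe = {pe_bfsk(r):.2e}")

# For equal Eb/eta, BFSK needs 3 dB more energy than BPSK to reach the same Pe;
# expressed in terms of the distance d, the two probabilities coincide.
```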

----------X----------
