that is, a_n is sent by transmitting the waveform s_{a_n}(t − nT). We will discuss the above representation
in detail in Chapter 4.
For such waveforms, the composite signal x(t), given by (2), satisfies
The details of different signal sets will be discussed in Chapter 4. In the following, we will give
three different signal sets, which are illustrated in Figure 1.
For the signal set in Figure 1-(a), there is only one nonzero waveform, a rectangular pulse of
duration T . Let
s(t) = A for 0 ≤ t < T, and s(t) = 0 otherwise    (4)
E&CE 411, Spring 2005, Handout 3, G. Gong 2
where A is a positive number representing the amplitude of the signal. The signal set in Figure
1-(a) is defined by s0(t) = 0 and s1(t) = s(t).
This signal set is one form of on-off signalling. The data bit 1 is represented by the presence
of the pulse (on), and the data bit 0 is represented by its absence (off). The signal set is also
an example of an orthogonal signal set. We say that signals s0 (t) and s1 (t) are orthogonal on the
interval [0, T ] if
∫_0^T s0(t)s1(t) dt = 0.    (5)
The second signal set is given by s0(t) = −s(t) and s1(t) = s(t),
as shown in Figure 1-(b). In this example, a positive pulse represents a 1 and a negative pulse
represents a 0. If the pulse shapes are identical and the pulses have opposite polarity, as in this
example, the resulting signal set is referred to as an antipodal signal set. In other words, a signal set
{s0 (t), s1 (t)} is an antipodal signal set and the two signals are said to be antipodal if s1 (t) = −s0 (t)
for all t.
The third example is given by
s0(t) = A for 0 ≤ t < T/2, and s0(t) = 0 otherwise,    (6)
s1(t) = A for T/2 ≤ t < T, and s1(t) = 0 otherwise.    (7)
By observation, these two signals satisfy (5); thus they are orthogonal, and signalling with such a
pair is referred to as orthogonal signalling.
Note that all signal sets shown in Figure 1 are time-limited signals that start at t = 0 and have
duration T , which is referred to as being time-limited to the interval [0, T ]. Also notice that each
of the signals has finite energy.
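These relations can be checked numerically. The following sketch (an illustration, not part of the handout; the amplitude A and interval T are arbitrary choices) approximates the inner product (5) on a discrete time grid for the three signal sets of Figure 1:

```python
import numpy as np

A, T = 1.0, 1.0                  # arbitrary amplitude and bit interval
t = np.linspace(0, T, 10_000, endpoint=False)
dt = t[1] - t[0]

s = np.where(t < T, A, 0.0)      # rectangular pulse s(t) of (4)

# (a) on-off: s0 = 0, s1 = s
on_off = (np.zeros_like(t), s)
# (b) antipodal: s0 = -s, s1 = s
antipodal = (-s, s)
# (c) orthogonal: s0 nonzero on [0, T/2), s1 nonzero on [T/2, T)
orthogonal = (np.where(t < T / 2, A, 0.0), np.where(t >= T / 2, A, 0.0))

def inner(u, v):
    """Discrete approximation of the inner product (5) over [0, T]."""
    return np.sum(u * v) * dt

# Expected: 0 (orthogonal), -A^2 T (antipodal), 0 (orthogonal)
for name, (s0, s1) in [("on-off", on_off), ("antipodal", antipodal),
                       ("orthogonal", orthogonal)]:
    print(name, inner(s0, s1))
```

The on-off and orthogonal pairs satisfy (5), while the antipodal pair attains the most negative inner product possible for this pulse energy.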
2 Detection
The receiver in a digital communication system cannot observe the transmitted signal. Instead,
it observes a signal that is only statistically related to the transmitted signal. Based on the ob-
servation, the receiver makes inferences about the binary sequence that was sent. At present, we
consider binary decisions only. Such a receiver is often referred to as a hard-decision receiver. Other
types of decisions are possible, such as allowing the receiver to declare that it does not know the
value of a digit, but no such option is considered in the performance analysis given in this note.
[Figure 1: Three binary signal sets, each time-limited to [0, T]. (a) On-off: s0(t) = 0 and s1(t) is a
rectangular pulse of amplitude A. (b) Antipodal: s0(t) = −s(t) and s1(t) = s(t), pulses of amplitude
−A and A. (c) Orthogonal: s0(t) is a pulse of amplitude A on [0, T/2) and s1(t) a pulse of amplitude
A on [T/2, T).]
[Figure 2: Receiver structure for detecting binary signals in AWGN. The modulator maps the binary
sequence to s(t); the channel adds AWGN n(t) with S_N(f) = N0/2; the received signal
r(t) = si(t) + n(t) passes through a matched filter h(t); the filter output Y(t) is sampled at time T0
to form the decision variable Y(T0) = sio(T0) + no(T0), which the decision device maps back to a
binary sequence.]
For binary decisions, there are only two possible outcomes: The receiver’s decision is correct or
it is wrong. If the receiver makes the wrong decision, we say it has made an error. Our goal is to
design the receiver in a way that minimizes the probability that it makes an error.
A receiver consists of one or more linear time-invariant (LTI) filters followed by one or more samplers
and a decision device. A sampler and a decision device are used in all receivers. Here the channel
noise is additive white Gaussian noise (AWGN). The structure of a receiver for detecting binary
signals in an AWGN channel is shown in Figure 2.
Suppose the transmitted signals are
s0(t) = φ0(t)
s1(t) = φ1(t)
where {φ0(t), φ1(t)} is a binary signal set of the type described in Section 1: finite-energy,
time-limited signals of duration T. The general model of the receiver structure for detecting binary
signals in AWGN is shown in Figure 2. The received signal r(t) is the sum of the AWGN process
n(t) and the signal si(t), where i = 0 if the binary digit 0 is sent and i = 1 if the binary digit 1 is
sent. The filter shown in Figure 2 is an LTI filter with impulse response h(t). The output of this
filter, denoted by Y(t), is sampled at time T0. The output Y(T0) of the sampler is then
compared with a threshold α in order to make a decision between the two alternatives 0 and 1. A
detailed description follows.
First, we consider the channel noise process. Here we consider a memoryless channel, i.e., the
signal is corrupted only by the AWGN process n(t), which does not depend on the transmitted
signal. This channel is called an additive white Gaussian noise (AWGN) channel. The essential
features of the AWGN channel model are as follows. The output of the channel is equal to the
sum of the input signal and the channel noise. The channel noise n(t) is a white Gaussian process
with zero mean and power spectral density N0/2, and it is independent of the input to the channel.
Although we speak of “channel” noise, it must be remembered that much of the noise originates in
the receiver itself. This noise is the thermal noise due to the random motion of the conduction
electrons in the resistive components of the receiver. The channel model includes the RF portion (if
it applies) of the receiver, or receiver front end as it is sometimes called; thus, the thermal noise is
viewed as part of the channel. Our concern in the design of the communication system is with the
demodulation of the received signal, and this demodulation takes place after the signal is converted to a lower
frequency. Consequently, the received signal in this model is not the RF signal. In this chapter,
the baseband signal (or a PCM signal) is considered to be the received signal.
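As a side note on simulating this channel model: white noise with PSD N0/2 has infinite power, so in a discrete-time simulation it is represented by independent Gaussian samples whose variance grows as the sampling interval shrinks. A minimal sketch (illustrative only; the values of N0 and dt are arbitrary choices, not from the handout):

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 0.1     # two-sided noise PSD is S_N(f) = N0/2
dt = 1e-3    # simulation sampling interval

# Sampled white Gaussian noise with PSD N0/2 has per-sample variance (N0/2)/dt,
# so that dt * sum(n[k] * h[k]) approximates the convolution integral (11)
# with the correct output statistics.
n = rng.normal(0.0, np.sqrt((N0 / 2) / dt), size=100_000)

print(n.var())   # close to (N0/2)/dt = 50.0
```

Halving dt doubles the per-sample variance, which is the discrete-time signature of a flat (white) spectrum.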
The received signal r(t) is a random process and is the input to the filter, i.e., r(t) = si(t) + n(t):
it has a signal component si(t) and a noise component n(t). Since the filter is linear, its
output Y(t) can be written as the sum of a signal component and a noise component. The signal
component of Y(t) is just the convolution of the signal si(t) with the impulse response h(t). Let
sio(t) and no(t) represent the signal component and noise component, respectively, of the output of
the filter. Then
Y (t) = sio (t) + no (t) (9)
where the index i denotes the binary digit that is sent, and
sio(t) = si(t) ∗ h(t) = ∫_{−∞}^{∞} si(t − τ)h(τ) dτ    (10)
and
no(t) = n(t) ∗ h(t) = ∫_{−∞}^{∞} n(t − τ)h(τ) dτ.    (11)
The noise component n(t) of the input gives rise to a noise component no(t) at the output of
the filter. The properties of the noise process no(t) can be obtained by applying the methods of
Chapter 2, as will be given later.
The filter is followed by a sampler. The sampler is basically a switch that closes briefly at time T0.
The value of T0 is arbitrary for now; we will determine it later by considering the optimal sampling
time. The output of the sampler is the random variable Y(T0), which is the input to the threshold
device (see Figure 1 in Section 2 of Chapter 3 in the slides).
In the threshold device, Y (T0 ) is compared with a threshold level α. The output ŝ of the
threshold device is determined by whether Y (T0 ) > α or Y (T0 ) ≤ α. The decision that the binary
digit 0 was transmitted corresponds to ŝ = 0 as the output of the threshold device, and the decision
that 1 was sent corresponds to ŝ = 1. For a fixed value of α, there are only two nontrivial decision
rules possible with this type of threshold device. One is to decide 0 was sent if Y(T0) ≤ α
and 1 was sent if Y(T0) > α. The other is just the opposite: decide 1 was sent if Y(T0) ≤ α and 0
was sent if Y(T0) > α.
The choice as to which of these two rules should be used depends on the relative values of s0o(T0)
and s1o(T0). Since the signal sio(t) is the convolution of the signal si(t) with the impulse response
h(t), the value of sio (T0 ) depends on si (t), h(t), and the sampling time T0 . If the following inequality
is true:
s1o (T0 ) > s0o (T0 )
then the decision rule that should be used is to decide 1 was sent (i.e., ŝ = 1) if Y (T0 ) > α and 0
was sent (i.e., ŝ = 0) if Y (T0 ) ≤ α.
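The two decision rules can be written down directly. A small sketch (illustrative only; the function name and arguments are not from the handout):

```python
def decide(y, alpha, s1o_larger=True):
    """Threshold decision on the sample Y(T0).

    s1o_larger=True corresponds to s1o(T0) > s0o(T0): decide 1 (s_hat = 1)
    when Y(T0) > alpha and 0 when Y(T0) <= alpha.
    s1o_larger=False gives the opposite rule.
    """
    if s1o_larger:
        return 1 if y > alpha else 0
    return 0 if y > alpha else 1

print(decide(0.9, 0.5))   # sample above threshold: decide 1
print(decide(0.5, 0.5))   # Y(T0) <= alpha: decide 0
```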
This concludes the description of the components of the communication system shown in Figure
2. The next step is to analyze its performance, with the goal of choosing the design that minimizes
the probability of error.
Next, suppose that 1 is sent. The decision variable is then the random variable
Y(T0) = Y1(T0) = s1o(T0) + no(T0).
Thus the difference of the means is a function of the impulse response of the filter and the difference
of the two signals s0 (t) and s1 (t). The performance of the communication system is optimized by
use of signals that are as different as possible and a filter with an impulse response that accentuates
this difference as much as possible. The selection of the signals and the filter to optimize
performance is discussed later.
Example 1
Assume that the signal set is the on-off signal set of Figure 1-(a), i.e., s0(t) = 0 and s1(t) = s(t),
where s(t) = A > 0 for 0 ≤ t < T and s(t) = 0 otherwise. The impulse response h(t) is given by
h(t) = s(T − t) (this is called a matched filter). If 0 is sent, the input to the threshold device is the
random variable
Y(T0) = Y0(T0) = s0o(T0) + no(T0)
which is Gaussian with mean s0o(T0) and variance σ² = Rno(0) = (1/2)N0A²T. If 1 is sent, the input
to the threshold device is the random variable
Y(T0) = Y1(T0) = s1o(T0) + no(T0)
which is Gaussian with mean s1o(T0) and variance σ². With the sampling time T0 = T, the difference
in the means is s1o(T) − s0o(T) = A²T.
Thus, for the AWGN channel, σ² can be evaluated by integrating either the square of the magnitude
of the transfer function or the square of the impulse response; the latter integral equals the energy
of the impulse response.
The receiver is wrong when 1 is sent if and only if the input to the threshold device is less than
or equal to the threshold α. Under the condition that 1 is sent, the input Y(T0) to the threshold
device is the random variable Y1(T0). Let X = Y(T0) and let fX(x|i) be the pdf of X given that
digit i is sent, i = 0, 1. We denote a Gaussian random variable X with mean µ and variance σ² by
N(µ, σ²). From the previous discussion, fX(x|i) is the pdf of a Gaussian random variable with mean
sio(T0) and variance σ², i.e., N(sio(T0), σ²), for i = 0, 1.
Thus
P(e|1) = P{Y(T0) ≤ α | 1}
       = P{Y1(T0) ≤ α}
       = Φ((α − s1o(T0))/σ)
       = Q((s1o(T0) − α)/σ).
By taking advantage of the fact that (−1)^i is equal to 1 if i = 0 and −1 if i = 1, we can summarize
the key result as follows. If the transmitted signals are s0(t) and s1(t), and if µi(T0) = sio(T0)
is the mean of the signal component of the output of the filter h(t), sampled at time
T0, 0 < T0 ≤ T, then the probability of error given that signal si(t) is transmitted is
P(e|i) = Q((−1)^i (α − µi(T0))/σ)
where α is the threshold, T0 is the sampling time, and σ is the standard deviation of the output of
the filter.
Note that ∫_{−∞}^{∞} h(t)² dt = A²T, the energy of the signal (denoted E), since h(t) = s(T − t).
Thus the variance is σ² = (1/2)N0A²T = (1/2)N0E, and the error probability expressions become
P(e|1) = Q((A²T − α)/√((1/2)N0E))
P(e|0) = Q(α/√((1/2)N0E)).
The typical selection for the threshold is in the range 0 < α < A²T; usually one takes
α = A²T/2. For this value of α, note that A²T − α = A²T/2 = E/2, so we have
P(e|1) = Q(√(E/(2N0)))
P(e|0) = Q((E/2)/√((1/2)N0E)) = Q(√(E/(2N0))).
If the two signals are transmitted equally likely, i.e., P{1 is sent} = P{0 is sent} = 1/2, then we
have
P(e) = (1/2)Q(√(E/(2N0))) + (1/2)Q(√(E/(2N0))) = Q(√(E/(2N0))) = (1/2) erfc((1/2)√(E/N0)).
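These expressions can be checked by Monte Carlo simulation of the decision variable itself, which is Gaussian with the means implied by the expressions above (s1o(T) = A²T = E and s0o(T) = 0), variance σ² = (1/2)N0E, and threshold α = E/2. A sketch (the numerical values of A, T, and N0 are arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

A, T, N0 = 1.0, 1.0, 0.25           # arbitrary illustrative values
E = A * A * T                       # signal energy
sigma = math.sqrt(0.5 * N0 * E)     # sigma^2 = (1/2) N0 E
alpha = E / 2                       # threshold alpha = A^2 T / 2

def Q(x):
    """Q-function: Q(x) = (1/2) erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
means = np.where(bits == 1, E, 0.0)            # s_io(T): E if 1 sent, 0 if 0 sent
Y = means + rng.normal(0.0, sigma, n_bits)     # decision variable Y(T)
errors = ((Y > alpha).astype(int) != bits)

p_sim = errors.mean()
p_theory = Q(math.sqrt(E / (2 * N0)))          # Q(sqrt(E / (2 N0)))
print(p_sim, p_theory)
```

The empirical error rate agrees with Q(√(E/(2N0))) to within Monte Carlo fluctuation.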
Question: (a) For the case of the sampling time T0 = T/2, derive the average error probability.
(b) For h(t) = pT(t), where pT(t) = 1 for 0 ≤ t ≤ T and pT(t) = 0 otherwise, and the sampling time
T0 = T, derive the average error probability.
When the threshold is chosen midway between the two means, α = [µ0(T0) + µ1(T0)]/2, it is
referred to as the minimax threshold in the literature. In this case, the error probability
is given by
Pm(e) = P(e|1) = P(e|0) = Q([µ1(T0) − µ0(T0)]/(2σ)).    (16)
and for the AWGN channel, recall that the variance σ² can be written as
σ² = (N0/2) ∫_{−∞}^{∞} h²(t) dt.    (19)
Let us define
g(t) = s1(t) − s0(t)
whose Fourier transform is G(f). Then maximizing SNRo is equivalent to maximizing
|(g ∗ h)(T0)|². Let V*(f) = H(f)e^{j2πfT0}. The Schwarz inequality (see Appendix G) is applied to
the maximization of the signal-to-noise ratio. This gives, in frequency-domain notation,
(SNR)o² = |∫_{−∞}^{∞} G(f)V*(f) df|² / (2N0 ||h||²)
        ≤ [∫_{−∞}^{∞} |G(f)|² df] [∫_{−∞}^{∞} |V*(f)|² df] / (2N0 ||h||²)
with equality if and only if V*(f) = λG*(f), i.e., h(t) = c g(T0 − t), for some choice of λ. It follows
that for a proper choice of c (the gain constant is unimportant; we typically let c = 1/2, or λ = 1),
(SNR)o,max = ||g||/√(2N0) = ||g(T0 − t)/2||/√(N0/2).
The filter given by (22) is called the matched filter for the AWGN channel. For the remainder of this
section, we assume that
gT0(t) = g(T0 − t)/2 = (1/2)[s1(T0 − t) − s0(T0 − t)].
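The Schwarz bound can be verified numerically: writing v(τ) = h(T0 − τ), the output SNR is ⟨g, v⟩/(2σ) with σ² = (N0/2)||v||², and no choice of filter exceeds ||g||/√(2N0). A sketch (illustrative only; it uses the orthogonal pair of Figure 1-(c) with A = 1 and an arbitrary N0):

```python
import numpy as np

rng = np.random.default_rng(2)

T, N0 = 1.0, 0.5
n = 1000
dt = T / n
t = np.arange(n) * dt

# g(t) = s1(t) - s0(t) for the orthogonal pair of Figure 1-(c), A = 1
s0 = np.where(t < T / 2, 1.0, 0.0)
s1 = np.where(t >= T / 2, 1.0, 0.0)
g = s1 - s0

def snr(v):
    """(SNR)_o for a filter h with h(T0 - tau) = v(tau):
    signal term <g, v>, noise sigma^2 = (N0/2) ||v||^2."""
    num = np.sum(g * v) * dt
    sigma = np.sqrt((N0 / 2) * np.sum(v * v) * dt)
    return num / (2 * sigma)

bound = np.sqrt(np.sum(g * g) * dt) / np.sqrt(2 * N0)   # ||g|| / sqrt(2 N0)

print(snr(g / 2), bound)   # matched filter v = g/2 attains the bound
```

Randomly chosen filters give strictly smaller SNR, and scaling v by any gain constant leaves snr(v) unchanged, which is the sense in which c is unimportant.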
The signal-to-noise ratio for the matched filter receiver can be expressed in terms of more
fundamental parameters of the signal set {s0(t), s1(t)}. We start with
(SNR)o = ||gT0||/√(N0/2)
and then use the fact that
||gT0||² = ∫_{−∞}^{∞} [gT0(t)]² dt = (1/4) ∫_{−∞}^{∞} [s1(T0 − t) − s0(T0 − t)]² dt
         = (1/4) ∫_{−∞}^{∞} [s1²(u) + s0²(u)] du − (1/2) ∫_{−∞}^{∞} s1(u)s0(u) du
         = (1/4)(E0 + E1) − (1/2)ρ
where ρ is the integral in the second term (the inner product of the signals s0 and s1). Letting
E = (E0 + E1)/2 and r = ρ/E, we can write ||gT0||² as
||gT0||² = (E/2)(1 − r).
The signal-to-noise ratio for a receiver with the matched filter is therefore given by
(SNR)o = √(E(1 − r)/N0).    (23)
The parameter E is called the average energy for the signal set, and r is the correlation coefficient
(or normalized correlation) for the two signals s0 and s1 .
The first observation to be made about (23) is that the signal-to-noise ratio does not depend
on the sampling time T0 if the matched filter is used. The intuitive reason for this is that the
matched filter, as defined by gT0(t) = g(T0 − t)/2, automatically compensates for any change in
the sampling time. It should also be noted that, because the noise is WSS, the variance of the
output noise is independent of the sampling time. (This is true for any LTI filter.)
Another observation about (23) is that the signal-to-noise ratio, and hence the error probabilities,
depend on only two signal parameters: the average energy of the two signals s0 and s1, and the
inner product of s0 and s1. The detailed structure of the signals is unimportant.
Example 3. Antipodal signals have equal energy, so E = E0 = E1 . We drop the subscripts and
denote the energy by E. The inner product for antipodal signals is
ρ = ∫_{−∞}^{∞} s1(t)s0(t) dt = −∫_{−∞}^{∞} [s1(t)]² dt = −E1.
The correlation coefficient for antipodal signals is therefore r = −1. The resulting signal-to-noise
ratio is
(SNR)o = √(2E/N0),
and the error probabilities for the minimax threshold are given by
P(e|1) = P(e|0) = Q(√(2E/N0)).
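As a closing illustration, (23) can be evaluated numerically for the three signal sets of Figure 1 (a sketch; the amplitude, interval, and N0 are arbitrary choices). At the same amplitude A, antipodal signalling attains twice the output SNR of the on-off and orthogonal sets:

```python
import numpy as np

A, T, N0 = 1.0, 1.0, 0.5    # arbitrary illustrative values
t = np.linspace(0, T, 10_000, endpoint=False)
dt = t[1] - t[0]
s = np.full_like(t, A)       # rectangular pulse of (4)

sets = {
    "on-off":     (np.zeros_like(t), s),
    "antipodal":  (-s, s),
    "orthogonal": (np.where(t < T / 2, A, 0.0), np.where(t >= T / 2, A, 0.0)),
}

def snr_matched(s0, s1):
    """(SNR)_o = sqrt(E (1 - r) / N0), eq. (23), from discrete approximations."""
    E0 = np.sum(s0 ** 2) * dt
    E1 = np.sum(s1 ** 2) * dt
    E = (E0 + E1) / 2                # average energy of the signal set
    r = (np.sum(s0 * s1) * dt) / E   # correlation coefficient r = rho / E
    return np.sqrt(E * (1 - r) / N0)

for name, (s0, s1) in sets.items():
    print(name, snr_matched(s0, s1))
```

Only the average energy and the correlation coefficient enter, consistent with the observation above that the detailed pulse shapes are unimportant.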