
E&CE 411, Spring 2005, Handout 3, G. Gong

Binary Signal Detection in AWGN

1 Examples of Signal Sets for Binary Data Transmission


In an M-ary data transmission system there is a collection {si | 0 ≤ i < M} of M signals, which are
also referred to as waveforms. Information is conveyed to the receiver by transmitting signals from
this collection. For binary data transmission, the collection has only two signals.
For a binary baseband communication system, each of the two signals has its energy concentrated
at low frequencies (below some frequency W). In order to send a sequence of binary digits,
a corresponding sequence of waveforms is transmitted to the receiver. If the binary digits are
generated at the rate of one digit every T units of time, the waveforms must be transmitted at a rate
of 1/T. If the waveforms are time-limited and of duration T, the transmitted signal consists of a
sequence of nonoverlapping waveforms in consecutive time intervals of duration T.
The collection of waveforms that is available to the transmitter is known as the signal set. A
binary signal set consists of two waveforms s0(t) and s1(t). If the waveforms have duration T, then
for both i = 0 and i = 1,

si(t) = 0 for t < 0 and t > T. (1)
Let a0, a1, . . . , aN−1 be a sequence of binary digits (i.e., for each n, an = 0 or an = 1). In order to send
this sequence of binary digits to a receiver, a sequence of waveforms is transmitted. The composite
signal x(t) formed by the sequence of waveforms can be written as

x(t) = Σ_{n=0}^{N−1} san(t − nT); (2)

that is, an is sent by transmitting the waveform san (t−nT ). We will discuss the above representation
in detail in Chapter 4.
For such waveforms, the composite signal x(t), given by (2), satisfies

x(t) = san (t − nT ), nT ≤ t < (n + 1)T. (3)

For example, if the binary sequence 010 is to be sent, then

x(t) = s0 (t), 0≤t<T


x(t) = s1 (t), T ≤ t < 2T
x(t) = s0 (t), 2T ≤ t < 3T.
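The construction in (2)–(3) can be sketched in code. This is only an illustration: the signal set below is an arbitrary pair of time-limited pulses (the half-interval pulses used later for orthogonal signalling), and the values of A and T are illustrative choices, not quantities fixed by the text.

```python
# A sketch of the composite signal x(t) of (2)-(3) for the sequence 010.
# The signal set and the values of A and T are illustrative choices.
A, T = 1.0, 1.0
bits = [0, 1, 0]

def s0(t):
    # illustrative pulse: amplitude A on [0, T/2)
    return A if 0.0 <= t < T / 2 else 0.0

def s1(t):
    # illustrative pulse: amplitude A on [T/2, T)
    return A if T / 2 <= t < T else 0.0

def x(t):
    """x(t) = s_{a_n}(t - nT) for nT <= t < (n+1)T, per (3)."""
    n = int(t // T)
    if not (0 <= n < len(bits)):
        return 0.0
    local = t - n * T
    return s1(local) if bits[n] == 1 else s0(local)
```

On each interval [nT, (n+1)T) the composite signal is just the waveform selected by the bit an, shifted to that interval.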

The details of different signal sets will be discussed in Chapter 4. In the following, we will give
three different signal sets, which are illustrated in Figure 1.
For the signal set in Figure 1-(a), there is only one nonzero waveform, a rectangular pulse of
duration T. Let

s(t) = A for 0 ≤ t < T, and s(t) = 0 otherwise, (4)

where A is a positive number representing the amplitude of the signal. The signal set shown in
Figure 1-(a) is defined by

s0(t) = 0, for all t
s1(t) = s(t).

This signal set is one form of on-off signalling. The data bit 1 is represented by the presence
of the pulse (on), and the data bit 0 is represented by its absence (off). The signal set is also
an example of an orthogonal signal set. We say that signals s0(t) and s1(t) are orthogonal on the
interval [0, T] if

∫_0^T s0(t)s1(t) dt = 0. (5)

The second example is obtained by letting

s0 (t) = −s(t)
s1 (t) = s(t)

as shown in Figure 1-(b). In this example, a positive pulse represents a 1 and a negative pulse
represents a 0. If the pulse shapes are identical and the pulses have opposite polarity, as in this
example, the resulting signal set is referred to as an antipodal signal set. In other words, a signal set
{s0(t), s1(t)} is an antipodal signal set, and the two signals are said to be antipodal, if s1(t) = −s0(t)
for all t.
The third example is given by
s0(t) = A for 0 ≤ t < T/2, and s0(t) = 0 otherwise; (6)
s1(t) = A for T/2 ≤ t < T, and s1(t) = 0 otherwise. (7)

By inspection, these two signals satisfy (5). Thus they are orthogonal, and this choice is
referred to as orthogonal signalling.
Note that all signal sets shown in Figure 1 consist of time-limited signals that start at t = 0 and
have duration T; such signals are said to be time-limited to the interval [0, T]. Also notice that
each of the signals has finite energy.
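The orthogonality condition (5) can be checked numerically for the three signal sets of Figure 1. The sketch below uses a plain Riemann sum; A, T, and the number of grid points N are illustrative choices.

```python
# Numerical check of the orthogonality condition (5) for the three signal
# sets of Figure 1, using a plain Riemann sum over [0, T].
A, T, N = 2.0, 1.0, 10000
dt = T / N

def s(t):                          # rectangular pulse of (4)
    return A if 0.0 <= t < T else 0.0

# (a) on-off: s0 = 0, s1 = s -> inner product is trivially zero
inner_onoff = sum(0.0 * s(k * dt) for k in range(N)) * dt
# (b) antipodal: s0 = -s, s1 = s -> inner product is -A^2 T (not orthogonal)
inner_antipodal = sum(-s(k * dt) * s(k * dt) for k in range(N)) * dt
# (c) half-interval pulses of (6)-(7): the supports do not overlap
s0c = lambda t: A if 0.0 <= t < T / 2 else 0.0
s1c = lambda t: A if T / 2 <= t < T else 0.0
inner_orth = sum(s0c(k * dt) * s1c(k * dt) for k in range(N)) * dt
```

As expected, sets (a) and (c) are orthogonal, while the antipodal pair has inner product −A²T.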

2 Detection
The receiver in a digital communication system cannot observe the transmitted signal. Instead,
it observes a signal that is only statistically related to the transmitted signal. Based on the ob-
servation, the receiver makes inferences about the binary sequence that was sent. At present, we
consider binary decisions only. Such a receiver is often referred to as a hard-decision receiver. Other
types of decisions are possible, such as allowing the receiver to declare that it does not know the
value of a digit, but such an option is not allowed in the performance analysis given in this note.

Figure 1: Three examples of baseband signal sets: (a) on-off signalling, s0(t) = 0 and s1(t) a
rectangular pulse of amplitude A on [0, T); (b) antipodal signalling, s0(t) = −A and s1(t) = A on
[0, T); (c) orthogonal signalling, s0(t) = A on [0, T/2) and s1(t) = A on [T/2, T).



Figure 2: Receiver structure for detecting binary signals in AWGN. The binary sequence drives a
modulator producing s(t); the channel adds AWGN n(t) with SN(f) = N0/2; the received signal
r(t) = si(t) + n(t) passes through the detector (a matched filter h(t)); the filter output Y(t) is
sampled at T0 to give the decision variable Y(T0) = sio(T0) + no(T0), and the decision device
outputs the detected binary sequence.

For binary decisions, there are only two possible outcomes: The receiver’s decision is correct or
it is wrong. If the receiver makes the wrong decision, we say it has made an error. Our goal is to
design the receiver in a way that minimizes the probability that it makes an error.
A receiver consists of one or more linear time-invariant (LTI) filters followed by a sampler (or
samplers) and a decision device. A sampler and a decision device are used in all receivers. Here the
channel noise is additive white Gaussian noise (AWGN). The structure of a receiver for detecting
binary signals in an AWGN channel is shown in Figure 2.

2.1 The General Model


The signals considered are given by

s0 (t) = φ0 (t)
s1 (t) = φ1 (t)

where {φ0(t), φ1(t)} is a binary signal set of the type described in Section 1: finite-energy,
time-limited signals of duration T. The general model of the receiver structure for detecting binary
signals in AWGN is shown in Figure 2. The received signal r(t) is the sum of the AWGN process
n(t) and the signal si(t), where i = 0 if the binary digit 0 is sent and i = 1 if the binary digit 1 is
sent. The filter shown in Figure 2 is an LTI filter with impulse response h(t). The output of this
filter, denoted by Y(t), is sampled at time T0. The output Y(T0) of the sampler is then
compared with a threshold α in order to make a decision between the two alternatives 0 and 1. A
detailed description follows.
First, we consider the channel noise process. The channel is memoryless; the signal is corrupted
only by the AWGN process n(t), which does not depend on the transmitted signal. This channel
is called an additive white Gaussian noise (AWGN) channel. The essential features of the AWGN
channel model are as follows. The output of the channel is equal to the sum of the input signal and
the channel noise. The channel noise is a zero-mean white Gaussian process with power spectral
density N0/2, and it is independent of the input to the channel.
Although we speak of "channel" noise, it must be remembered that much of the noise originates in
the receiver itself. This noise is the thermal noise due to the random motion of the conduction electrons
in the resistive components of the receiver. The channel model includes the RF portion (if it applies) of
the receiver, sometimes called the receiver front end; thus, the thermal noise is viewed as part
of the channel. Our concern in the design of the communication system is with the demodulation
of the received signal, and this demodulation takes place after the signal is converted to a lower
frequency. Consequently, the received signal in this model is not the RF signal. In this chapter,
the baseband signal (or a PCM signal) is considered to be the received signal.
The received signal is r(t), which is a random process. This is the input signal of the filter, i.e.,

r(t) = si (t) ∗ δ(t) + n(t) = si (t) + n(t). (8)

This process has a signal component si(t) and a noise component n(t). Since the filter is linear, its
output Y(t) can be written as the sum of a signal component and a noise component. The signal
component of Y(t) is just the convolution of the signal si(t) with the impulse response h(t). Let
sio(t) and no(t) represent the signal component and the noise component, respectively, of the output
of the filter. Then

Y(t) = sio(t) + no(t) (9)

where the index i denotes the binary digit that is sent, and

sio(t) = si(t) ∗ h(t) = ∫_{−∞}^{∞} si(t − τ)h(τ) dτ (10)

and

no(t) = n(t) ∗ h(t) = ∫_{−∞}^{∞} n(t − τ)h(τ) dτ. (11)

The noise component n(t) of the input gives rise to a noise component no(t) at the output of
the filter. The properties of the noise process no(t) can be obtained by applying the methods of
Chapter 2, as will be shown later.
The filter is followed by a sampler. The sampler is basically a switch that closes briefly at time T0.
The value of T0 is arbitrary for now; we will determine it later by considering the optimal sampling
time. The output of the sampler is the random variable Y(T0), which is the input to the threshold
device (see Figure 1 in Section 2 of Chapter 3 in the slides).
In the threshold device, Y(T0) is compared with a threshold level α. The output ŝ of the
threshold device is determined by whether Y(T0) > α or Y(T0) ≤ α. The decision that the binary
digit 0 was transmitted corresponds to the output ŝ = 0, and the decision that 1 was sent corresponds
to ŝ = 1. For a fixed value of α, only two nontrivial decision rules are possible with this type of
threshold device. One is to decide 0 was sent if Y(T0) ≤ α and 1 was sent if Y(T0) > α. The other
is just the opposite: decide 1 was sent if Y(T0) ≤ α and 0 was sent if Y(T0) > α.
The choice as to which of these two rules should be used depends on the relative values of s0o(T0)
and s1o(T0). Since the signal sio(t) is the convolution of the signal si(t) with the impulse response
h(t), the value of sio(T0) depends on si(t), h(t), and the sampling time T0. If the following inequality
is true:
is true:
s1o (T0 ) > s0o (T0 )
then the decision rule that should be used is to decide 1 was sent (i.e., ŝ = 1) if Y (T0 ) > α and 0
was sent (i.e., ŝ = 0) if Y (T0 ) ≤ α.
This concludes the description of the components of the communication system shown in Figure
2. The next step is to analyze its performance and to optimize it in the sense of minimizing the
probability of error.

2.2 The Filter Output


The output of the filter is the random process Y(t), which is given by (9). For the AWGN channel,
n(t) is a white Gaussian noise process; thus Rn(τ) = (N0/2)δ(τ) (see Chapter 2). Applying the
results of Chapter 2, we have

Sno(f) = |H(f)|² Sn(f)

where H(f) is the transfer function of the LTI filter h(t), Sn(f) is the spectral density of the
channel noise process, and Sno(f) is the spectral density of the output process no(t). If needed, the
autocorrelation function of no(t) can be computed from its spectral density by taking the inverse
Fourier transform, but in many problems this is not required. Since n(t) is white, we obtain

Sno(f) = (N0/2)|H(f)|²

where N0/2 is the spectral density of the noise process n(t).
Note that no(t) is still Gaussian. In order to determine the error probabilities, we do not
need a complete characterization of the random process Y(t). All that matters is the value of this
process at the sampling time T0, so what is actually required is a characterization of the random
variable Y(T0). The decision made by the receiver is based solely on the value of Y(T0), since this
is the only input to the threshold device.

2.3 The Input to the Threshold Device


The input to the threshold device is the random variable Y(T0), which is the sum of the deterministic
quantity sio(T0) and the random variable no(T0). Because the receiver's decision is based entirely
on the random variable Y(T0), this random variable is called the decision variable. The decision
variable contains all of the information from the receiver input Y(t) that is actually used in
making the decision.
Clearly, for any system of interest, no (T0 ) does not depend on which signal is sent. This follows
from the definition of the additive Gaussian noise channel: The process n(t) is independent of the
channel input for such a channel. Consequently, the process no (t) is independent of the transmitted
signal. It follows that the random variable no (T0 ) does not depend on i.
Suppose that 0 is sent. The decision variable is then given by

Y(T0) = s0o(T0) + no(T0).

For convenience, we define a random process

Y0(t) = s0o(t) + no(t). (12)

When 0 is sent, Y(t) is equal to Y0(t) for all t. The random variable Y0(T0) is Gaussian because it
is the sum of a deterministic value s0o(T0) and a Gaussian random variable no(T0). The mean of
Y0(T0), denoted by µ0(T0), is given by

µ0(T0) = E[Y0(T0)] = s0o(T0) + E[no(T0)].

Since E[no(T0)] = E[n(T0)]H(0) and the mean of n(t) is zero, the mean of the output noise
process is zero, i.e., E[no(T0)] = 0. Therefore,

µ0(T0) = s0o(T0) = (s0 ∗ h)(T0) = ∫_{−∞}^{∞} s0(T0 − τ)h(τ) dτ. (13)

The variance of Y0(T0) is given by

Var(Y0(T0)) = E[(Y0(T0) − µ0(T0))²] = E[(Y0(T0) − s0o(T0))²] = E[no(T0)²] = Rno(0).

Next, suppose that 1 is sent. The decision variable is then the random variable

Y(T0) = Y1(T0)

where Y1(t) is the random process defined by

Y1(t) = s1o(t) + no(t). (14)

The random variable Y1(T0) is Gaussian with mean µ1(T0), given by

µ1(T0) = E[Y1(T0)] = s1o(T0) + E[no(T0)] = s1o(T0) = ∫_{−∞}^{∞} s1(T0 − τ)h(τ) dτ.

Similarly, the variance of Y1(T0) is

Var(Y1(T0)) = E[(Y1(T0) − µ1(T0))²] = E[(Y1(T0) − s1o(T0))²] = E[no(T0)²] = Rno(0).

Thus we obtain

Var(Y0(T0)) = Var(Y1(T0)) = Rno(0).

If we set σ² = Rno(0), then Var(Yi(T0)) = σ² for both i = 0 and i = 1. In other words, the two
decision variables have the same variance, which is independent of which signal is sent.
One conclusion that can be drawn from this is that the only difference between the two decision
variables Y0(T0) and Y1(T0) is their mean. Both are Gaussian random variables; hence, they
are completely characterized by their means and variances, and they have the same variance
σ². Thus, the decision device at the output of the sampler must discriminate between two random
variables that differ only in their mean values. If 0 is sent, the mean is µ0(T0) = s0o(T0), but if 1 is
sent, the mean is µ1(T0) = s1o(T0). Intuitively, we expect that the ability of the decision device to
discriminate between these two cases should depend on the difference of the means, µ1(T0) − µ0(T0).
Notice that

µ1(T0) − µ0(T0) = s1o(T0) − s0o(T0)
= ∫_{−∞}^{∞} [s1(T0 − τ) − s0(T0 − τ)]h(τ) dτ, or equivalently,
= ∫_{−∞}^{∞} [s1(τ) − s0(τ)]h(T0 − τ) dτ.

Thus the difference of the means is a function of the impulse response of the filter and of the difference
between the two signals s0(t) and s1(t). The performance of the communication system is optimized by
using signals that are as different as possible and a filter whose impulse response accentuates this
difference as much as possible. The selection of the signals and the filter to optimize
performance is discussed later.

Example 1

Assume that the signal set is antipodal, i.e., s0(t) = −s(t) and s1(t) = s(t),
where s(t) = A > 0 for 0 ≤ t < T and s(t) = 0 otherwise. The impulse response h(t) is given by
h(t) = s(T − t) (this is called a matched filter). If 0 is sent, the input to the threshold device is the
random variable

Y(T0) = Y0(T0) = s0o(T0) + no(T0)

which is Gaussian with mean s0o(T0) and variance σ² = Rno(0) = ½N0A²T. If 1 is sent, the input to
the threshold device is the random variable

Y (T0 ) = Y1 (T0 ) = s1o (T0 ) + no (T0 )

which is Gaussian with mean s1o (T0 ) and variance σ 2 . The difference in the means is

µ1 (T0 ) − µ0 (T0 ) = s1o (T0 ) − s0o (T0 )


= s1o (T0 ) − [−s1o (T0 )] = 2s1o (T0 )

Since h(t) = s(T − t), we have

µ1(T0) − µ0(T0) = 2s1o(T0) = 2(s ∗ h)(T0) =
  2A²T0,        0 ≤ T0 ≤ T
  2A²(2T − T0), T < T0 ≤ 2T
  0,            otherwise.

The difference is maximized if the sampling time T0 is equal to T .
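This convolution is easy to verify numerically. The sketch below discretizes s(t) and the matched filter h(t) = s(T − t) on a grid and checks that 2·s1o(T0) peaks at T0 = T with value 2A²T; the values of A, T, and the grid size N are illustrative.

```python
# Numerical sketch of Example 1: discrete convolution of s(t) with the
# matched filter h(t) = s(T - t), locating the peak of 2*s1o(T0).
A, T, N = 1.0, 1.0, 400
dt = T / N
s = [A] * N                       # s(t) = A on [0, T)
h = list(reversed(s))             # h(t) = s(T - t); equal to s here

def conv_at(i):
    """(s * h)(i*dt) approximated by a discrete convolution sum."""
    total = 0.0
    for k in range(N):
        j = i - k
        if 0 <= j < N:
            total += s[k] * h[j] * dt
    return total

diff = [2.0 * conv_at(i) for i in range(2 * N)]   # 2*s1o(T0), 0 <= T0 < 2T
peak_idx = max(range(len(diff)), key=lambda i: diff[i])
```

The peak lands at T0 ≈ T (grid index N − 1), matching the conclusion above that sampling at T0 = T maximizes the difference of the means.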


For certain filters, the transfer function H(f) is easier to work with than the impulse response.
In general, we have

σ² = (N0/2) ∫_{−∞}^{∞} |H(f)|² df.

By Parseval's theorem, we have

σ² = (N0/2) ∫_{−∞}^{∞} h(t)² dt.

Thus, for the AWGN channel, σ² can be evaluated by integrating either the squared magnitude
of the transfer function or the square of the impulse response; the latter integral is the energy of the
impulse response.
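For instance, σ² can be computed directly from the energy of the impulse response. In this sketch h(t) is the filter matched to the rectangular pulse, h(t) = s(T − t), so the energy is A²T; the values of A, T, and N0 are illustrative.

```python
# Evaluating sigma^2 = (N0/2) * integral of h(t)^2 (the energy of the
# impulse response) for the filter matched to the rectangular pulse.
A, T, N0 = 2.0, 1.0, 0.5
N = 1000
dt = T / N
h = [A] * N                                # |h(t)| = A on (0, T]
energy = sum(v * v for v in h) * dt        # integral of h(t)^2 = A^2 T
sigma2 = 0.5 * N0 * energy                 # sigma^2 = (1/2) N0 A^2 T
```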

2.4 The Error Probabilities


Having characterized the decision variable Y (T0 ), we can now give analytical expressions for the
error probabilities P (e|i), i = 0, 1, where P (e|0) denotes the probability that the decision made
by the receiver is wrong when 0 is sent (i.e., when s0 (t) is transmitted), and P (e|1) denotes the
probability that the decision made by the receiver is wrong when 1 is sent.

The receiver is wrong when 1 is sent if and only if the input to the threshold device is less than
or equal to the threshold α. Under the condition that 1 is sent, the input Y(T0) to the threshold
device is the random variable Y1(T0). Let X = Y(T0) and let fX(x|i) be the pdf of X given that digit i
is sent, i = 0, 1. We denote a Gaussian X with mean µ and variance σ² by N(µ, σ²). From
the previous discussion, fX(x|i) is the pdf of a Gaussian random variable with mean sio(T0) and
variance σ², i.e., the pdf of N(sio(T0), σ²), for i = 0, 1. Thus

P(e|1) = P{Y(T0) ≤ α | 1}
= P{Y1(T0) ≤ α}
= Φ((α − s1o(T0))/σ)
= Q((s1o(T0) − α)/σ).

Similarly, the probability of error when 0 is sent is given by

P (e|0) = P {Y (T0 ) > α | 0}


= P {Y0 (T0 ) > α}
= 1 − P {Y0 (T0 ) ≤ α}
= 1 − Φ((α − s0o (T0 ))/σ)
= Q((α − s0o (T0 ))/σ).

By taking advantage of the fact that (−1)^i is equal to 1 if i = 0 and to −1 if i = 1, we can summarize
the key result as follows. If the transmitted signals are s0(t) and s1(t), and if µi(T0) = sio(T0)
is the mean of the signal component of the output of the filter h(t), sampled at time
T0, 0 < T0 ≤ T, then the probability of error given that signal si(t) is transmitted is

P(e|i) = Q[(−1)^i (α − sio(T0))/σ] (15)

where α is the threshold, T0 is the sampling time, and σ is the standard deviation of the output of
the filter.
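These conditional error probabilities are easy to evaluate with the standard identity Q(x) = ½ erfc(x/√2). The sketch below encodes the two rules P(e|1) = Q((s1o(T0) − α)/σ) and P(e|0) = Q((α − s0o(T0))/σ); the means, threshold, and σ are illustrative numbers, not values from the text.

```python
import math

# Conditional error probabilities for the threshold rule
# "decide 1 iff Y(T0) > alpha", with Q(x) = 0.5*erfc(x/sqrt(2)).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def prob_error(i, mean_i, alpha, sigma):
    """P(e|1) = Q((s1o - alpha)/sigma); P(e|0) = Q((alpha - s0o)/sigma)."""
    if i == 1:
        return Q((mean_i - alpha) / sigma)
    return Q((alpha - mean_i) / sigma)

# Symmetric illustration: means -1 and +1, threshold midway at 0
p0 = prob_error(0, -1.0, 0.0, 1.0)
p1 = prob_error(1, +1.0, 0.0, 1.0)
```

With a symmetric setup the two conditional error probabilities coincide, as expected.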

Example 2. On-Off Signals


Consider a binary communication system with the signals s1(t) = s(t) and s0(t) = 0, where s(t) =
A > 0 for 0 ≤ t < T and s(t) = 0 otherwise. The receiver filter has impulse response h(t) = s(T − t),
i.e., the filter is the matched filter, and the sampling time is T0 = T. The output signal s1o(t) is
given by

s1o(t) =
  A²t,        0 ≤ t ≤ T
  A²(2T − t), T < t ≤ 2T
  0,          otherwise

and the output signal s0o(t) is identically zero. It follows that µ1(T) = s1o(T) = A²T and µ0(T) = 0.
For an AWGN channel with psd N0/2 and threshold α, the error probabilities obtained by (15) are

P(e|1) = Q((A²T − α)/σ)
P(e|0) = Q(α/σ).

Note that ∫_{−∞}^{∞} h(t)² dt = A²T, which is the energy of the signal (denoted E) since h(t) = s(T − t).
Thus the variance is σ² = ½N0A²T = ½N0E, and the error probability expressions become

P(e|1) = Q((A²T − α)/√((1/2)N0E))
P(e|0) = Q(α/√((1/2)N0E)).

The threshold is typically chosen in the range 0 < α < A²T; the usual choice is α = A²T/2.
With this value of α, note that A²T − α = A²T/2 = E/2, so we have

P(e|1) = Q(√(E/(2N0)))
P(e|0) = Q((E/2)/√((1/2)N0E)) = Q(√(E/(2N0))).

The average probability of error is defined as

P(e) = P{1 is sent}P(e|1) + P{0 is sent}P(e|0).

If the two signals are equally likely to be transmitted, i.e., P{1 is sent} = P{0 is sent} = 1/2, then

P(e) = ½Q(√(E/(2N0))) + ½Q(√(E/(2N0))) = Q(√(E/(2N0))) = ½ erfc(½√(E/N0)).
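The on-off result can be reproduced numerically. The sketch below uses illustrative parameter values; the point is only that the two conditional error probabilities are equal at α = A²T/2 and that the average matches the closed form Q(√(E/(2N0))).

```python
import math

# Reproducing the on-off result: with matched filtering, E = A^2 T,
# sigma^2 = (1/2) N0 E, and threshold alpha = A^2 T / 2, both conditional
# error probabilities equal Q(sqrt(E/(2 N0))).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

A, T, N0 = 1.0, 1.0, 0.25          # illustrative values
E = A * A * T
sigma = math.sqrt(0.5 * N0 * E)
alpha = A * A * T / 2.0

p_e1 = Q((A * A * T - alpha) / sigma)
p_e0 = Q(alpha / sigma)
p_avg = 0.5 * p_e1 + 0.5 * p_e0
closed_form = Q(math.sqrt(E / (2.0 * N0)))
```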

Question: (a) For the sampling time T0 = T/2, derive the average error probability. (b) For
h(t) = pT(t), where pT(t) = 1 for 0 ≤ t ≤ T and pT(t) = 0 otherwise, and the sampling time
T0 = T, derive the average error probability.

3 Optimization of the Threshold


Note that Q(x) is a decreasing function of x. From (15), if all of the parameters of the signals
and the noise are fixed, the error probabilities P (e|1) and P (e|0) are functions of the threshold α.
From the individual expressions for P (e|1) and P (e|0), it is clear that one of them is an increasing
function of α and the other is a decreasing function of α. For an AWGN channel, the optimal
threshold, in the sense of minimizing the error probability, is given by

α = [µ0(T0) + µ1(T0)]/2

which is referred to as the minimax threshold in the literature. With this threshold the two
conditional error probabilities are equal, and

Pm(e) = P(e|1) = P(e|0) = Q([µ1(T0) − µ0(T0)]/(2σ)). (16)
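A quick numerical check that the minimax threshold equalizes the two conditional error probabilities; the values of µ0, µ1, and σ below are arbitrary illustrative choices.

```python
import math

# With alpha = (mu0 + mu1)/2, the two conditional error probabilities
# are equal and match the form in (16).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

mu0, mu1, sigma = -0.3, 1.1, 0.8   # illustrative values
alpha = 0.5 * (mu0 + mu1)

p_e0 = Q((alpha - mu0) / sigma)               # error when 0 sent
p_e1 = Q((mu1 - alpha) / sigma)               # error when 1 sent
p_minimax = Q((mu1 - mu0) / (2.0 * sigma))    # the form in (16)
```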

4 The Matched Filter for the AWGN Channel


Because Q(x) is a decreasing function of x, P(e|i) is minimized by maximizing the quantity [µ1(T0) −
µ0(T0)]/(2σ). Because µ1(T0) − µ0(T0) depends on the signals, but not on the noise, and σ depends
on the noise spectral density, but not on the signals, the quantity of interest can be thought of as
a signal-to-noise ratio. It is convenient to define

(SNR)o = [µ1(T0) − µ0(T0)]/(2σ), (17)

so that (16) becomes

Pm(e) = Q((SNR)o).

Keep in mind that (SNR)o depends on the filter impulse response h(t), the sampling time T0, and
the signal set {s0(t), s1(t)}. Note that

µi (T0 ) = (si ∗ h)(T0 ), i = 0, 1 (18)

and for the AWGN channel, recall that the variance σ² can be written as

σ² = (N0/2) ∫_{−∞}^{∞} h²(t) dt (19)

where N0/2 is the psd of the noise.


These results are valid for any LTI filter. The goal of this section is to find the optimum filter;
that is, we wish to find the filter that gives the smallest value of the error probability Pm(e). Here
we use the threshold α = [µ1(T0) + µ0(T0)]/2, which is referred to as the minimax threshold in the
literature. The signal-to-noise ratio is then a function of the filter impulse response; substituting
(18) into the SNR expression, we have

(SNR)o = [(s1 ∗ h)(T0) − (s0 ∗ h)(T0)] / (√(2N0) ||h||) (20)

where

||h|| = {∫_{−∞}^{∞} h²(t) dt}^{1/2} = {∫_{−∞}^{∞} |H(f)|² df}^{1/2}.

Let us define

g(t) = s1(t) − s0(t),

whose Fourier transform is G(f). Maximizing (SNR)o is then equivalent to maximizing
|(g ∗ h)(T0)|²/||h||². Let V*(f) = H(f)e^{j2πfT0}, so that (g ∗ h)(T0) = ∫_{−∞}^{∞} G(f)V*(f) df.
Applying the Schwarz inequality (see Appendix G) to the maximization of the signal-to-noise ratio
gives, in the frequency domain,

(SNR)o² = |∫_{−∞}^{∞} G(f)V*(f) df|² / (2N0||h||²)
≤ [∫_{−∞}^{∞} |G(f)|² df · ∫_{−∞}^{∞} |V*(f)|² df] / (2N0||h||²)

where equality holds if and only if G(f) = k′V(f), or equivalently,

H(f) = kG*(f)e^{−j2πfT0} (21)

where k is an arbitrary constant.


We conclude that the optimum filter has impulse response

h(t) = λg(T0 − t) = λ[s1(T0 − t) − s0(T0 − t)] (22)

for some choice of λ > 0. The gain constant is unimportant, so we typically take λ = 1.
Substituting (22) into (20) gives

(SNR)o,max = ||g||/√(2N0) = ||g(T0 − t)/2||/√(N0/2).

The filter given by (22) is called the matched filter for the AWGN channel. For the remainder of
this section, we write

gT0(t) = g(T0 − t)/2 = ½[s1(T0 − t) − s0(T0 − t)].
The signal-to-noise ratio for the matched filter receiver can be expressed in terms of more fundamental
parameters of the signal set {s0(t), s1(t)}. We start with

(SNR)o = ||gT0||/√(N0/2)

and then use the fact that

||gT0||² = ∫_{−∞}^{∞} [gT0(t)]² dt = ¼ ∫_{−∞}^{∞} [s1(T0 − t) − s0(T0 − t)]² dt
= ¼ ∫_{−∞}^{∞} [s1²(u) + s0²(u)] du − ½ ∫_{−∞}^{∞} s1(u)s0(u) du
= ¼(E0 + E1) − ½ρ

where ρ is the integral in the second term (the inner product of the signals s0 and s1). Letting
E = (E0 + E1)/2 and r = ρ/E, we can write ||gT0||² as

||gT0||² = E(1 − r)/2.

The signal-to-noise ratio for a receiver with the matched filter is therefore given by

(SNR)o = {E(1 − r)/N0}^{1/2}. (23)

The parameter E is called the average energy for the signal set, and r is the correlation coefficient
(or normalized correlation) for the two signals s0 and s1 .
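Equation (23) makes it easy to compare the three signal sets of Section 1 using their closed-form energies and correlations (r = 0 for the on-off and orthogonal sets, r = −1 for the antipodal set); the values of A, T, and N0 below are illustrative.

```python
import math

# Illustration of (23), (SNR)_o = sqrt(E (1 - r) / N0), for the three
# signal sets of Section 1; E is average energy, r the correlation.
A, T, N0 = 1.0, 1.0, 0.1
Ep = A * A * T                     # energy of the rectangular pulse s(t)

def snr(E_avg, r):
    return math.sqrt(E_avg * (1.0 - r) / N0)

snr_onoff = snr(Ep / 2.0, 0.0)        # E0 = 0, E1 = Ep, rho = 0
snr_antipodal = snr(Ep, -1.0)         # E0 = E1 = Ep, r = -1
snr_orthogonal = snr(Ep / 2.0, 0.0)   # E0 = E1 = Ep/2, rho = 0
```

With the same underlying pulse, the antipodal set attains the largest (SNR)o of the three.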
The first observation to be made about (23) is that the signal-to-noise ratio does not depend
on the sampling time T0 if the matched filter is used. The intuitive reason for this is that the
matched filter, as defined by

h(t) = λ[s1(T0 − t) − s0(T0 − t)],

automatically compensates for any changes in the sampling time. It should also be noted that,
because the noise is WSS, the variance of the output noise is independent of the sampling time.
(This is true for any LTI filter.)
Another observation about (23) is that the signal-to-noise ratio, and hence the error probabilities,
depend on only two parameters of the signal set: the average energy of the two signals s0 and s1,
and their inner product. The detailed structure of the signals is unimportant.

Example 3. Antipodal signals have equal energy, so E = E0 = E1. We drop the subscripts and
denote the energy by E. The inner product for antipodal signals is

ρ = ∫_{−∞}^{∞} s1(t)s0(t) dt = −∫_{−∞}^{∞} [s1(t)]² dt = −E.

The correlation coefficient for antipodal signals is therefore r = −1. The resulting signal-to-noise
ratio is

(SNR)o = √(2E/N0),

and the error probabilities for the minimax threshold are given by

P(e|1) = P(e|0) = Q(√(2E/N0)).
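As a sanity check, the antipodal error probability can be estimated by Monte Carlo simulation of the decision variable: for a unit-gain matched filter (λ = 1) it is Gaussian with mean ±E and variance (N0/2)E, and the minimax threshold is 0. The values of E, N0, and the trial count are illustrative.

```python
import math
import random

# Monte Carlo check of P(e) = Q(sqrt(2E/N0)) for antipodal signalling.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

random.seed(1)
E, N0, trials = 1.0, 1.0, 200_000
sigma = math.sqrt(0.5 * N0 * E)        # std dev of the decision variable

errors = 0
for _ in range(trials):
    bit = random.getrandbits(1)
    mean = E if bit == 1 else -E       # s_io(T0) = +-E for antipodal
    y = mean + random.gauss(0.0, sigma)  # decision variable Y(T0)
    decided = 1 if y > 0.0 else 0      # minimax threshold alpha = 0
    errors += (decided != bit)

p_hat = errors / trials
p_theory = Q(math.sqrt(2.0 * E / N0))
```

The empirical error rate p_hat agrees with Q(√(2E/N0)) to within the simulation's statistical resolution.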
