2. OPTIMUM RECEIVER PRINCIPLES


2.1 Maximum A Posteriori Receiver
Consider the generic block diagram of end-to-end communication over the ubiquitous additive
white Gaussian noise (AWGN) channel.

• Source: {m_i} with a priori probabilities {P(m_i)}


• Transmitter: A particular message symbol is represented by a signal waveform allowable in
the signal space permitted for a given modulation technique.
m = m_i ⇔ s(t) = s_i(t) (2.1)
• Channel: r(t) = s(t) + n_w(t) (2.2)
Problem 1: Design an optimum receiver which produces an estimate m̂ of the source output m, given the transmitted signal s(t), such that the probability of error P(ε) ≡ Prob(m̂ ≠ m) is MINIMUM.
Problem 2: Given that the {P(m_i)} are UNKNOWN, which is the real-life situation in many emerging communication systems, design a similar optimum receiver. (An inherently more difficult task!)

VECTOR CHANNEL: Consider the case when a sequence of source outputs is bundled into vector form and transmitted, as in QAM and other M-ary signaling schemes. In some cases the signal may be in vector form to start with, as with the LPC coefficients in a CELP speech coder.

• Source information is mapped into signal vectors {s_i; i = 0, 1, ..., M − 1}, where each vector is composed of N components: s_i = [s_i1, s_i2, ..., s_iN].
• Received information is also in vector form:
r = s_i + n = [s_i1 + n_1, s_i2 + n_2, ..., s_iN + n_N] (2.3)
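As a quick illustration, the vector channel of (2.3) is easy to simulate. The sketch below is a minimal pure-Python example; the signal vector and noise standard deviation are assumed values, not from the notes. Each component is perturbed by an independent zero-mean Gaussian sample:

```python
import random

def awgn_channel(s, sigma, rng=random.Random(0)):
    """Return r = s + n, with n_j i.i.d. zero-mean Gaussian of std dev sigma, per (2.3)."""
    return [s_j + rng.gauss(0.0, sigma) for s_j in s]

s_i = [1.0, 2.0]                  # an assumed 2-component signal vector
r = awgn_channel(s_i, sigma=0.1)  # received vector, perturbed component by component
```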
Given that the received vector is a point r = ρ in the N-dimensional space with coordinates:

These notes are © Huseyin Abut, February 2004


ρ = [ρ_1, ρ_2, ..., ρ_N], then the optimum receiver must determine the transmitted signal vector s_i for the message m_i having the maximum a posteriori probability, from its knowledge of the conditional density P_{r|s}, the signal set {s_i}, and the source distribution {P(m_i)}.
In other words:
m̂ = m_k if Prob(m_k | r = ρ) > Prob(m_i | r = ρ) for all i ≠ k (2.4)
which is a nearly impossible challenge to meet directly in many real-life situations.
Do we have an equivalent task?
Consider the probability of a correct decision for a given incoming vector:
Prob(C | r = ρ) = Prob(m_k | r = ρ) (2.5)
and the overall probability of correct decision is simply the ensemble average of the correct decisions:
Prob(C) = ∫_{−∞}^{∞} Prob(C | r = ρ) · P_r(ρ) dρ (2.6)

Since P_r(ρ) ≥ 0, we do not need to include it in the maximization process; i.e., only the term Prob(C | r = ρ) must be maximized for each ρ. Let us use the Bayes rule on (2.5):
Prob(m_i | r = ρ) = P(m_i) · P_r(ρ | m_i) / P_r(ρ) (2.7)
but the statement m = m_i is equivalent to s = s_i, which implies:
P_r(ρ | m_i) = P_r(ρ | s = s_i) (2.8)
Furthermore, the denominator term is independent of the index i, and hence of the maximization, so we have the revised principle for our optimum receiver:
m̂ = m_k if P(m_i) · P_r(ρ | s = s_i) is maximum at i = k (2.9)

When the P(m_i) are not known, the receiver can only maximize the last factor of (2.9). We then have a restricted version of the general optimum receiver, called the MAXIMUM-LIKELIHOOD (ML) receiver.

ML Receiver Principle:
m̂ = m_k when P_r(ρ | s = s_i) is MAXIMUM at i = k. (2.10)
Decision Regions are needed to perform the mapping properly for each signal vector.

Example 2.1: Given three input vectors in a 2-D signal space with the following signal set
assignment: m_0 ⇒ s_0 = [1, 2]; m_1 ⇒ s_1 = [2, 1]; and m_2 ⇒ s_2 = [1, −2].


Let us also assume that the input message probabilities P(m_0), P(m_1), P(m_2) are given. For this assignment, our receiver will compute:
P(m_i) · P_r(ρ | s = s_i) for i = 0, 1, 2.

The optimum receiver will then choose the index of the message with the largest product above.
For every point ρ in the (φ_1, φ_2) plane an assignment can be made if the plane is partitioned into disjoint regions {I_i} for i = 0, 1, 2, which are called decision regions, very similar to the codeword selection process in Vector Quantization (VQ). Then we have the receiver as a simple geometric map:
r ∈ I_k ⇒ m̂ = m_k, and an error is made (m̂ ≠ m_k) iff r ∉ I_k when m_k was sent. (2.11)
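The rule (2.9) applied to Example 2.1 can be sketched in a few lines. The priors and the noise variance below are assumptions made purely for illustration; the receiver evaluates P(m_i)·P_n(ρ − s_i) for each candidate and returns the index of the largest product:

```python
import math

signals = {0: [1.0, 2.0], 1: [2.0, 1.0], 2: [1.0, -2.0]}  # Example 2.1 signal set
priors  = {0: 0.5, 1: 0.25, 2: 0.25}                      # assumed P(m_i)
sigma2  = 0.5                                             # assumed noise variance

def likelihood(rho, s):
    """P_n(rho - s) for i.i.d. zero-mean Gaussian noise components."""
    d2 = sum((r - sj) ** 2 for r, sj in zip(rho, s))
    return (2 * math.pi * sigma2) ** (-len(rho) / 2) * math.exp(-d2 / (2 * sigma2))

def decide(rho):
    """Return the index k maximizing P(m_i) * P_n(rho - s_i), per (2.9)."""
    return max(signals, key=lambda i: priors[i] * likelihood(rho, signals[i]))
```

For instance, a received point near s_0, say decide([1.1, 1.9]), falls in the decision region I_0 and returns 0.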

2.2 ML Receiver for AWGN Channel


Given that the signal in the channel is corrupted by a zero-mean AWGN with variance σ² per component:
r = s + n = [s_1 + n_1, s_2 + n_2, ..., s_N + n_N] (2.12)
Now:
r = ρ when s = s i iff n = ρ − s i (2.13)
And then
P_r(ρ | s = s_i) = P_n(ρ − s_i | s = s_i) for i = 0, 1, ..., M − 1 (2.14)
Since the signal s and the channel noise n are statistically independent, P_{n|s} = P_n. This simplifies (2.14) to:
P_n(ρ − s_i | s = s_i) = P_n(ρ − s_i) (2.15)

In this case, the general decision function becomes P(m_i) · P_n(ρ − s_i). Since the noise components are assumed independent and zero-mean Gaussian, we can write the noise distribution as:
P_n(u) = (2πσ²)^{−N/2} · exp{ −(1/(2σ²)) Σ_{j=1}^{N} u_j² } (2.16)

Let us use the following dot-product notation:


‖u‖² = u • u = Σ_{j=1}^{N} u_j²

Our distribution is written as:


P_n(u) = (2πσ²)^{−N/2} · exp{ −‖u‖² / (2σ²) } (2.17)
Then for this probability system we have the ML principle as:
m̂ ⇒ m_i whenever P(m_i) · exp{ −‖ρ − s_i‖² / (2σ²) } is maximum. (2.18)

Equivalently, the task is to MINIMIZE:


‖ρ − s_i‖² − 2σ² · log_e P(m_i) (2.19)


The first term is the squared Euclidean distance between the received vector and a candidate signal vector. If all the messages are equally likely, the log-prior term is the same for every index, the optimum decision rule no longer depends on it, and we have the MINIMUM MEAN-SQUARE (MMS) DISTANCE Receiver. That is, we assign to the incoming vector the message index of its closest signal vector, which is also known as the Nearest Neighbor Rule in VQ and other clustering techniques.
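A minimal sketch of this nearest-neighbor rule, reusing the signal set of Example 2.1 and assuming equal priors so that the log-prior term of (2.19) drops out:

```python
signals = [[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]]   # Example 2.1 signal set

def dist2(u, v):
    """Squared Euclidean distance, the first term of (2.19)."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def nearest_neighbor(rho):
    """Minimum-distance rule: argmin_i ||rho - s_i||^2."""
    return min(range(len(signals)), key=lambda i: dist2(rho, signals[i]))
```

For example, nearest_neighbor([1.8, 1.2]) returns 1, since the received point lies closest to s_1 = [2, 1].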

2.3 Correlation and Matched-Filter Receivers


If we revisit the communication system block diagram for vector signals as shown below, it becomes necessary to synthesize waveform signals to be transmitted over real-life channels, such as twisted-pair or coaxial cable, microwave, or fiber-optic links.

• It is necessary to synthesize the signal set {s_i(t)} at the transmitter. This can be achieved with "building-block waveforms."
• Synthesis of signal sets and recovery of signal vectors:
1. A set of N filters is used to generate N signal components with strengths {s_ij}.
2. The filter outputs are summed to yield the signal waveform s_i(t) to be transmitted for a particular message m_i, for each of the M different messages.


s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t) for i = 0, 1, ..., M − 1 (2.20)
1. Let us choose the building-block waveforms from an orthonormal set such that:
∫_{−∞}^{∞} φ_j(t) φ_l(t) dt = 1 if j = l, and 0 if j ≠ l, for all 1 ≤ j, l ≤ N (2.21)
2. This will yield a probability of error independent of the actual wave-shapes.
3. We can exactly recover the signal vectors, and hence the messages, in the absence of channel noise if we push the synthesized waveforms of (2.20) into a simple integrating filter structure as shown above:
∫ s_i(t) φ_l(t) dt = ∫ [ Σ_{j=1}^{N} s_ij φ_j(t) ] φ_l(t) dt = Σ_{j=1}^{N} s_ij δ_jl = s_il (2.22)
If we perform a similar integration in every branch, we obtain s_i = [s_i1, s_i2, ..., s_iN].
4. Examples of orthonormal signal sets:
• Orthonormal time-shifted pulses: φ_j(t) = g(t − jτ) for j = 1, 2, ..., N, with the unit-energy gate
g(t) = sqrt(1/τ) for −τ ≤ t < 0, and 0 otherwise
• Orthonormal Fourier pulses
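Steps (2.20)–(2.22) can be checked numerically. The sketch below discretizes [0, T) and uses unit-energy time-shifted rectangular pulses as an assumed orthonormal set; the particular N, T, grid size, and components s_ij are illustrative choices. Synthesizing a waveform from the components and then integrating against each φ_l recovers s_il:

```python
# Discretize [0, T) into L samples; phi_j occupies the j-th of N equal slots.
N, L, T = 3, 300, 1.0
dt = T / L
slot = L // N

def phi(j):
    """Samples of the j-th orthonormal pulse: amplitude sqrt(N/T) over its slot."""
    amp = (N / T) ** 0.5
    return [amp if j * slot <= k < (j + 1) * slot else 0.0 for k in range(L)]

basis = [phi(j) for j in range(N)]
s_i = [0.5, -1.0, 2.0]   # assumed signal components s_ij

# Synthesis (2.20): s_i(t) = sum_j s_ij * phi_j(t)
wave = [sum(s_i[j] * basis[j][k] for j in range(N)) for k in range(L)]

# Recovery (2.22): s_il = integral of s_i(t) * phi_l(t) dt
recovered = [sum(wave[k] * basis[l][k] for k in range(L)) * dt for l in range(N)]
```

Here `recovered` matches `s_i` up to floating-point error, confirming (2.22) in the noiseless case.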

The optimum receiver of the system performs:
Set m̂ = m_k if ‖r − s_i‖² − 2σ_n² log P(m_i) is MINIMUM at i = k. (2.23)
Square operations can be eliminated in (2.23) by observing:
‖r − s_i‖² = ‖r‖² − 2(r • s_i) + ‖s_i‖² (2.24)


where the dot product is given by:
r • s_i ≡ Σ_{j=1}^{N} r_j s_ij (2.25)
Observations :
• Note 1: The first term in (2.24) is independent of the index i, so it plays no role in the optimization.

• Note 2: The last terms in (2.23) and (2.24) depend only on side information supplied by the designer; they can be combined into a constant parameter set and burned into the ROM of the system:
c_i ≡ (1/2)[2σ_n² log P(m_i) − ‖s_i‖²] = σ_n² log P(m_i) − (1/2)‖s_i‖² (2.26)
• The optimum receiver of (2.23) is now equivalent to:
Set m̂ = m_k if (r • s_i + c_i) is MAXIMUM at i = k. (2.27)
which is simply the structure of a CORRELATION RECEIVER.
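A minimal sketch of this correlation receiver, with an assumed signal set (reusing Example 2.1), assumed equal priors, and an assumed noise variance. The constants c_i of (2.26) are computed once up front, so each decision costs only one dot product per candidate:

```python
import math

signals = [[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]]   # assumed signal set (Example 2.1)
priors  = [1 / 3, 1 / 3, 1 / 3]                   # assumed equal priors
sigma2  = 0.5                                     # assumed noise variance

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Bias constants of (2.26), computed once and stored ("burned into ROM"):
c = [sigma2 * math.log(p) - 0.5 * dot(s, s) for p, s in zip(priors, signals)]

def correlation_receiver(r):
    """Pick the index k maximizing r . s_i + c_i, per (2.27)."""
    return max(range(len(signals)), key=lambda i: dot(r, signals[i]) + c[i])
```

With equal priors this reproduces the minimum-distance decisions of Section 2.2, as expected.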

Note: When the source vocabulary size M is not very large, this implementation is not costly, and most of the operations to the right of the "Integrators" can be done by table look-ups. However, when M is very large, the dot products are usually handled by DSP-based devices. The use of multipliers can be avoided if we replace the structure to the left of the "Weighting Matrix" as follows:
1. Let us consider a filter with an impulse response
h j (t ) = ϕ j (T − t ) . (2.28)


2. If the input to this filter is r(t), then its response is simply:
u_j(t) = ∫_{−∞}^{∞} r(α) h_j(t − α) dα = ∫_{−∞}^{∞} r(α) φ_j(T − t + α) dα (2.29)
3. When we sample the output at t=T we have
u j (T ) ≡ r j (2.30)
4. Finally, the task is to push it through the weighting matrix and the rest of the receiver above.

This receiver is called a "MATCHED-FILTER" receiver, since it is constructed using time-reversed and shifted versions of the signal building-block functions: φ_j(T − t).

Example 2.2: Consider the case of a gated sinusoidal tone signal with a gate period of T seconds, as shown below. The convolution operation in the matched filter above will result in a sinusoid with a triangular envelope at the same frequency, and thus it will peak at the sampling instant T.
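This example can be reproduced numerically. In the sketch below the sampling grid and tone frequency are assumed values; it builds a gated sinusoid g, its matched filter h(t) = g(T − t), and convolves them. The output has a triangular envelope and peaks exactly at the sample corresponding to t = T:

```python
import math

L = 200                 # samples over the gate period T
f = 5                   # assumed tone frequency: 5 full cycles per gate
g = [math.sin(2 * math.pi * f * k / L) for k in range(L)]   # gated sinusoid
h = g[::-1]             # matched filter impulse response, h(t) = g(T - t)

# Discrete convolution u = g * h; output index L - 1 corresponds to t = T.
u = [sum(g[j] * h[n - j] for j in range(max(0, n - L + 1), min(n, L - 1) + 1))
     for n in range(2 * L - 1)]

peak = max(range(len(u)), key=lambda n: abs(u[n]))
# The peak lands at index L - 1 (i.e., t = T), where u equals the
# signal energy sum(g[k] ** 2); away from t = T the envelope tapers off.
```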
