Chapter 2
VECTOR CHANNEL: Consider the case when a sequence of source outputs is bundled into a vector and transmitted, as in QAM and other m-ary signaling schemes. In some cases, the signal itself may be in vector form to start with, as with the LPC coefficients in a CELP speech coder.
• Source information is mapped into source vectors: {s_i ; i = 0, 1, ..., M − 1}, where each vector is composed of N components: s_i = [s_i1, s_i2, ..., s_iN].
• Received information is also mapped into vectors:
    r_i = s_i + n_i = [s_i1 + n_i1, s_i2 + n_i2, ..., s_iN + n_iN]        (2.3)
Given that the received vector is a point r = ρ in the N-dimensional space, with coordinates ρ = [ρ_1, ρ_2, ..., ρ_N], the optimum receiver must determine the transmitted signal vector s_i for the message m_i having maximum a posteriori probability, from its knowledge of the conditional distribution P_{r|s}, the signal set {s_i}, and the source distribution {Prob(m_i)}.
In other words:
    m̂ = m_k  if  Prob(m_k | r = ρ) > Prob(m_i | r = ρ)  for all i ≠ k        (2.4)
which is a nearly impossible challenge to meet in many real-life situations.
Do we have an equivalent task?
Consider the correct decision for a given incoming vector:
    Prob(C | r = ρ) = Prob(m_k | r = ρ)        (2.5)
and the overall probability of a correct decision is simply the ensemble average of the correct decisions:
    Prob(C) = ∫_{−∞}^{∞} Prob(C | r = ρ) · P_r(ρ) dρ        (2.6)
Since P_r(ρ) ≥ 0, we do not need to include it in the maximization process, i.e. only the term Prob(C | r = ρ) must be maximized. Let us use the Bayes rule on (2.5):
    Prob(m_i | r = ρ) = P(m_i) · Prob_r(ρ | m_i) / P_r(ρ)        (2.7)
but the statement m = m_i is equivalent to s = s_i, which implies:
    Prob_r(ρ | m_i) = Prob_r(ρ | s = s_i)        (2.8)
Furthermore, the denominator term is independent of the index i and hence does not affect the maximization, so we have the revised principle for our optimum receiver:
    m̂ = m_k  if  P(m_i) · Prob_r(ρ | s = s_i)  is maximum when i = k        (2.9)
When the P(m_i) are not known, the receiver can only maximize the last factor of (2.9). Then we have a restricted version of the general optimum receiver called the MAXIMUM-LIKELIHOOD (ML) receiver.
ML Receiver Principle:
    m̂ ⇒ m_k  when  Prob_r(ρ | s = s_i)  is maximum for i = k.        (2.10)
Decision Regions are needed to perform the mapping properly for each signal vector.
Example 2.1: Given three input vectors in a 2-D vector space with the following signal-set assignment: m_0 ⇒ s_0 = [1, 2]; m_1 ⇒ s_1 = [2, 1]; and m_2 ⇒ s_2 = [1, −2].
Let us also assume that the input message probabilities P(m_0), P(m_1), P(m_2) are given. For this assignment, our receiver will compute:
    P(m_i) · Prob_r(ρ | s = s_i)  for i = 0, 1, 2,
and will choose the index of the message with the largest product above.
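The product computation above can be sketched in a few lines. The noise model (i.i.d. zero-mean Gaussian), the prior values, and the noise variance below are illustrative assumptions, not part of the example as stated:

```python
import numpy as np

# Signal set from Example 2.1
signals = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]])

# Hypothetical prior probabilities and noise variance (assumptions)
priors = np.array([0.5, 0.3, 0.2])
sigma2 = 1.0

def gaussian_likelihood(rho, s, sigma2):
    """p(rho | s) for i.i.d. zero-mean Gaussian noise components."""
    n = len(rho)
    diff = rho - s
    return np.exp(-diff @ diff / (2 * sigma2)) / (2 * np.pi * sigma2) ** (n / 2)

def map_decide(rho):
    """Return the index k maximizing P(m_i) * p(rho | s_i)."""
    products = [p * gaussian_likelihood(rho, s, sigma2)
                for p, s in zip(priors, signals)]
    return int(np.argmax(products))

rho = np.array([1.2, 1.8])  # a received point near s_0
print(map_decide(rho))      # -> 0
```

A point near s_1, say [2.1, 0.9], is decided as message index 1 by the same rule.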
For every point ρ in the (φ_1, φ_2) plane an assignment can be made if the plane is partitioned into disjoint regions {I_i} for i = 0, 1, 2, which are called decision regions, very similar to the codeword selection process in Vector Quantization (VQ). Then we have the receiver as a simple geometric map:
    r ∈ I_k ⇒ m̂ = m_k,  and an error is made (m̂ ≠ m_k) iff r ∉ I_k when m_k is transmitted        (2.11)
In this case, the general decision function becomes P(m_i) · P_n(ρ − s_i). If the noise components are assumed independent, zero-mean Gaussian with variance σ², we can write the noise distribution:
    P_n(u) = (2πσ²)^(−N/2) exp{ −(1/(2σ²)) Σ_{j=1}^{N} u_j² }        (2.16)
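Taking the logarithm of the decision function P(m_i) · P_n(ρ − s_i) with the Gaussian density of (2.16) converts the maximization into one involving the squared Euclidean distance ‖ρ − s_i‖², since the logarithm is monotone. A quick numerical check of this equivalence (signal set, priors, and variance are illustrative values):

```python
import numpy as np

signals = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]])  # illustrative
priors = np.array([0.5, 0.3, 0.2])                         # illustrative
sigma2 = 0.8                                               # illustrative

def product_score(rho):
    """P(m_i) * P_n(rho - s_i) with the Gaussian density of (2.16), N = 2."""
    d2 = np.sum((signals - rho) ** 2, axis=1)
    return priors * np.exp(-d2 / (2 * sigma2)) / (2 * np.pi * sigma2)

def log_score(rho):
    """sigma^2 * log P(m_i) - ||rho - s_i||^2 / 2: same argmax, since it is
    sigma^2 times the log of product_score, up to an i-independent constant."""
    d2 = np.sum((signals - rho) ** 2, axis=1)
    return sigma2 * np.log(priors) - d2 / 2

rho = np.array([1.4, 0.2])
print(np.argmax(product_score(rho)) == np.argmax(log_score(rho)))  # -> True
```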
The first term is the Euclidean distance between the received vector and a candidate signal vector. If all the messages are equally likely, then the optimum decision rule does not depend on the prior term at all and we have the MINIMUM MEAN-SQUARE (MMS) DISTANCE receiver. That is, we assign the message index of the closest neighbor of the incoming vector, which is also known as the Nearest Neighbor Rule in VQ and other clustering techniques.
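The nearest-neighbor rule amounts to one argmin over squared distances. A minimal sketch, reusing the signal set of Example 2.1 purely for illustration:

```python
import numpy as np

# Signal set from Example 2.1 (reused for illustration)
signals = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]])

def min_distance_decide(rho):
    """Nearest-neighbor rule: pick the signal vector closest to rho
    in Euclidean distance (equal priors assumed)."""
    d2 = np.sum((signals - rho) ** 2, axis=1)  # squared distances
    return int(np.argmin(d2))

print(min_distance_decide(np.array([1.9, 1.2])))  # -> 1 (closest to s_1)
```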
• It is necessary to synthesize the signal set {s_i(t)} at the transmitter. This can be achieved with "building-block" waveforms.
• Synthesis of signal sets and recovery of signal vectors:
1. A set of N filters is used to generate the N signal components with strengths {s_ij}.
2. The filter outputs are summed to yield the signal waveform s_i(t) to be transmitted for a particular message m_i, for each of the M different messages:
    s_i(t) = Σ_{j=1}^{N} s_ij φ_j(t)  for i = 0, 1, ..., M − 1        (2.20)
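A minimal discrete-time sketch of the synthesis in (2.20), assuming a hypothetical basis of N time-shifted, unit-energy rectangular pulses (all parameter values are illustrative):

```python
import numpy as np

# Assumed parameters: N basis pulses of width tau, sampled at rate fs
N, tau, fs = 3, 1.0, 100
n_samp = int(tau * fs)
t = np.arange(N * n_samp) / fs

def phi(j):
    """j-th basis pulse, j = 0..N-1: rectangular, unit energy."""
    p = np.zeros_like(t)
    p[j * n_samp:(j + 1) * n_samp] = 1.0 / np.sqrt(tau)
    return p

def synthesize(s):
    """Implement (2.20): s_i(t) = sum_j s_ij * phi_j(t)."""
    return sum(s[j] * phi(j) for j in range(N))

s0 = np.array([1.0, -2.0, 0.5])   # example signal vector (assumption)
waveform = synthesize(s0)

# With an orthonormal basis, waveform energy equals the vector energy.
print(np.isclose(np.sum(waveform ** 2) / fs, np.sum(s0 ** 2)))  # -> True
```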
1. Let us choose the building-block waveforms from an orthonormal set such that:
    ∫_{−∞}^{∞} φ_j(t) φ_l(t) dt = { 1 if j = l; 0 if j ≠ l }  for all 1 ≤ j, l ≤ N        (2.21)
2. This will yield a probability of error independent of the actual wave-shapes.
3. We can exactly recover the signal vectors, and hence the messages, in the absence of channel noise if we push these synthesized waveforms of (2.20) into a simple integrating filter structure as shown above:
    ∫ s_i(t) φ_l(t) dt = ∫ [ Σ_{j=1}^{N} s_ij φ_j(t) ] φ_l(t) dt = Σ_{j=1}^{N} s_ij δ_jl = s_il        (2.22)
If we perform a similar integration for all the branches, we obtain s_i = [s_i1, s_i2, ..., s_iN].
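The recovery in (2.22) can be sketched numerically: correlate the synthesized waveform with each basis pulse and check that the original components come back. The time-shifted rectangular pulse basis and all parameters here are illustrative assumptions:

```python
import numpy as np

# Hypothetical basis: N time-shifted, unit-energy rectangular pulses
N, tau, fs = 3, 1.0, 100
n_samp = int(tau * fs)

basis = np.zeros((N, N * n_samp))
for j in range(N):
    basis[j, j * n_samp:(j + 1) * n_samp] = 1.0 / np.sqrt(tau)

s_true = np.array([1.0, -2.0, 0.5])  # transmitted vector (example values)
waveform = s_true @ basis            # synthesis, as in (2.20)

# Branch integrals of (2.22): s_il = integral of s_i(t) * phi_l(t) dt
s_hat = waveform @ basis.T / fs
print(np.allclose(s_hat, s_true))  # -> True
```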
4. Examples of orthonormal signal sets:
• Orthonormal time-shifted pulses: φ_j(t) = g(t − jτ) for j = 1, 2, ..., N
• Orthonormal Fourier transform pulses:
    φ_j(t) = { 1/τ for −T ≤ t < 0; 0 otherwise }
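The orthonormality condition (2.21) is easy to verify numerically for a basis of time-shifted pulses: the Gram matrix of the sampled pulses should be (approximately) the identity. Pulse shape and parameters below are illustrative:

```python
import numpy as np

# N time-shifted rectangular pulses of width tau, unit energy each
N, tau, fs = 3, 1.0, 100
n_samp = int(tau * fs)
t = np.arange(N * n_samp) / fs

basis = np.zeros((N, len(t)))
for j in range(N):
    basis[j, j * n_samp:(j + 1) * n_samp] = 1.0 / np.sqrt(tau)

# Gram matrix: entry (j, l) approximates the integral of phi_j * phi_l.
gram = basis @ basis.T / fs
print(np.allclose(gram, np.eye(N)))  # -> True, i.e. (2.21) holds
```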
Observations:
• Note 1: The first term in (2.24) is independent of the index, so there is no need to consider it in the optimization.
• Note 2: The last terms in (2.23) and (2.24) depend only on source side information supplied by the designer, so they can be combined into a constant parameter set and burned into the ROM of the system:
    c_i ≡ (1/2)[σ_n² log P(m_i) − ‖s_i‖²]        (2.26)
• The optimum receiver of (2.23) is now equivalent to:
    Set m̂ = m_k  if  (r • s_i + c_i)  is maximum for i = k        (2.27)
which is simply the structure of a CORRELATION RECEIVER.
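The correlation receiver of (2.27) needs only dot products plus the precomputed constants of (2.26). A minimal sketch, reusing the signal set of Example 2.1 with hypothetical priors and noise variance:

```python
import numpy as np

# Signal set from Example 2.1; priors and variance are assumptions
signals = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -2.0]])
priors = np.array([0.5, 0.3, 0.2])
sigma2 = 1.0

# Precompute the "ROM" constants c_i of (2.26):
# c_i = 0.5 * (sigma_n^2 * log P(m_i) - ||s_i||^2)
c = 0.5 * (sigma2 * np.log(priors) - np.sum(signals ** 2, axis=1))

def correlate_decide(r):
    """Correlation receiver per (2.27): argmax_i (r . s_i + c_i)."""
    return int(np.argmax(signals @ r + c))

print(correlate_decide(np.array([1.2, 1.8])))  # -> 0
```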
Note: When the source vocabulary size M is not very large, this implementation is not costly, and most of the operations to the right of the "Integrators" can be done by table look-ups. However, when M is very large, the dot-products are usually handled by "DSP"-based devices. The use of multipliers can be avoided if we replace the structure to the left of the "Weighting Matrix" as follows:
1. Let us consider a filter with an impulse response:
    h_j(t) = φ_j(T − t)        (2.28)
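The point of (2.28) is that convolving the input with h_j(t) = φ_j(T − t) and sampling the output at t = T reproduces the correlator integral, so no multipliers are needed. A discrete-time sketch with an illustrative unit-energy tone pulse:

```python
import numpy as np

# Illustrative pulse: unit-energy gated sine on [0, T)
fs, T = 1000, 1.0
t = np.arange(int(T * fs)) / fs
phi = np.sqrt(2 / T) * np.sin(2 * np.pi * 5 * t)   # 5 cycles over T
s = 0.7 * phi                                      # input with component 0.7

h = phi[::-1]                # matched filter: h(t) = phi(T - t)
y = np.convolve(s, h) / fs   # filter output
corr = np.sum(s * phi) / fs  # direct correlation integral

# Sampling the matched-filter output at t = T gives the correlation.
print(np.isclose(y[len(t) - 1], corr))  # -> True
```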
Example 2.2: Consider the case of a gated sinusoidal tone signal with a gate period of T seconds, as shown below. The convolution operation in the matched filter above will result in a triangular-enveloped sinusoid at the same frequency, and thus it will peak at the sampling instant T.
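This peaking behavior can be checked directly: the matched-filter output of a gated tone is its autocorrelation, which attains its maximum magnitude at zero lag, i.e. at the sampling instant t = T. Gate period and tone frequency below are illustrative:

```python
import numpy as np

# Gated sinusoid on [0, T) and its matched filter
fs, T, f0 = 2000, 1.0, 10.0
n = int(T * fs)
t = np.arange(n) / fs
s = np.sin(2 * np.pi * f0 * t)   # gated tone, zero outside [0, T)

h = s[::-1]                      # matched filter h(t) = s(T - t)
y = np.convolve(s, h) / fs       # triangular-enveloped sinusoid on [0, 2T)

# The magnitude peak falls at (or immediately next to) t = T.
peak_index = int(np.argmax(np.abs(y)))
print(abs(peak_index - (n - 1)) <= 1)  # -> True
```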