
Equalization, Diversity &

Channel Coding
Part 1

Assignments
Homework:
HW5: pp. 168-176, Chapter 7, Prob: 1, 2, 3
Hand in at the beginning of class, 27 March
HW5: pp. 168-176, Chapter 7, Prob: 6, 7, 9, 11
Hand in at the beginning of class, 15 April
Read:
Rappaport, Chapter 7, Sections 1-8
Finish by 27 March
Rappaport, Chapter 7, Sections 9-18
Finish by 15 April
Project 3:
TBA

Outline
Fading channel impairments: intersymbol interference
Equalization Methods
Linear
Nonlinear
Adaptive
Diversity Techniques
Rake Receiver
Channel Coding

Fading, Bandwidth, Matched Filtering

Review
4

Bandwidth of signal
Different definitions of bandwidth:
a) Half-power bandwidth
b) Noise-equivalent bandwidth
c) Null-to-null bandwidth
d) Fractional power containment bandwidth
e) Bounded power spectral density
f) Absolute bandwidth

[Figure: power spectral density of a digitally modulated signal, annotated with bandwidth definitions (a)-(e); the bounded-PSD definition (e) is shown at the 50 dB level.]

What is a Hostile Channel?

Symbol distortion
Multipath interference
Co-channel interference
Additive noise
Doppler spread
Media dispersion: signals at different frequencies travel at different speeds
Intersymbol Interference (ISI)
A product of the distortions above
A symbol often spreads out in time
and spills into the next symbol period

Two Methods to Mitigate ISI


First method: Nyquist criterion; design bandlimited transmission pulses to minimize the effect of ISI, i.e., a shaping filter (e.g., the sinc pulse, the raised cosine, and others).
Second method: Equalization; filter the received signal to cancel the ISI introduced by the channel impulse response.

Windowing
Window function
A mathematical function that is zero-valued outside of some chosen interval.
When a function or waveform/data sequence is multiplied by a window function, the product is zero-valued outside the interval: the "view" through the window is the overlap.
Typical window functions are non-negative, smooth, "bell-shaped" curves; the rectangle, triangle, and other functions are also classified as windows.

Applications
Window functions are used in spectral analysis, filter design, and beamforming.
In the most general definition, a window function is not required to be identically zero outside an interval; it suffices that the product of the window and the function being windowed is square integrable, i.e., that the function goes sufficiently rapidly toward zero outside the interval.

Rectangular window
[Figure: a rectangular window applied to a sinusoid, and the resulting spectral leakage in the frequency domain.]
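The leakage effect is easy to reproduce numerically. The sketch below is my own illustration (the sample rate, tone frequency, window length, and the Hann comparison are arbitrary choices, not from the lecture); it windows an off-bin sinusoid and compares how much energy spreads away from the spectral peak.

```python
import numpy as np

fs, N = 1000.0, 256                    # sample rate (Hz) and window length (illustrative)
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 52.3 * t)       # tone that does not fall on an FFT bin -> leakage

windows = {"rectangular": np.ones(N), "hann": np.hanning(N)}

for name, w in windows.items():
    psd = np.abs(np.fft.rfft(x * w)) ** 2
    k = psd.argmax()
    # rough leakage measure: fraction of energy outside +/- 3 bins around the peak
    leak = 1.0 - psd[max(k - 3, 0):k + 4].sum() / psd.sum()
    print(f"{name:12s} window: energy leaked outside the main lobe = {leak:.4f}")
```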

Timing jitter
Random jitter
Translation of random voltage noise into timing fluctuations
Phase noise of the transmitter and receiver
Deterministic jitter: distinct circuit origins
Bandlimited channel
Frequency dispersion
Signal reflection
Duty-cycle distortion
Power-supply noise
Particular data symbol patterns

cf. Buckwalter, Analysis and Equalization of Data-Dependent Jitter, 2006

Band-limited Channels
Signal Design for Band-limited Channels
(a) Intersymbol Interference
Time-continuous model: the symbols b_n drive the transmit filter g_T(t), pass through the channel c(t), pick up additive noise z(t), and are filtered by the receive filter g_R(t) to produce y(t), which is then sampled to give y[n]:

y(t) = Σ_n b_n x(t − nT) + v(t)

where

x(t) = g_T(t) * c(t) * g_R(t),   v(t) = z(t) * g_R(t)

and the output is sampled at t = kT + t₀.

Band-limited Channels
Time-discrete

y[k] = y(kT + t₀) = Σ_n b_n x[k − n] + v[k]

We wish to extract the bit information for k = n; explicitly,

y[k] = x[0] b_k + Σ_{n≠k} b_n x[k − n] + v[k]

For convenience let x[0] = 1:

y[k] = b_k + Σ_{n≠k} b_n x[k − n] + v[k]

The summation on the RHS is a measure of the ISI.
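A short numerical sketch of this model (my own illustration; the 3-tap discrete response x and the noise level are arbitrary choices) generates y[k] = Σ_n b_n x[k − n] + v[k] and shows how the ISI terms degrade symbol-by-symbol decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, 0.4, -0.2])                 # illustrative overall response with x[0] = 1
b = rng.choice([-1.0, 1.0], size=2000)         # BPSK symbols b_k
v = 0.1 * rng.standard_normal(b.size + x.size - 1)

y = np.convolve(b, x) + v                      # y[k] = sum_n b_n x[k-n] + v[k]
y = y[: b.size]                                # align to the symbol instants

ser_with_isi = np.mean(np.sign(y) != b)
ser_no_isi = np.mean(np.sign(b + v[: b.size]) != b)   # desired term plus noise only

print(f"symbol error rate with ISI:    {ser_with_isi:.3f}")
print(f"symbol error rate without ISI: {ser_no_isi:.3f}")
```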

Band-limited Channels
(b) Nyquist Condition
It is possible to have no ISI even if x(t) and c(t) are bandlimited.
Consider the decision statistic expression as the sampled output

y[k] = Σ_n b_n x[k − n] + v[k]

There is no ISI if the Nyquist condition is satisfied:

x[n] = c for n = 0
     = 0 for n ≠ 0

where c is some constant that is set equal to 1.

In this form the Nyquist condition is not too helpful for the design of ISI-free pulses; the frequency domain is much more illustrative.

Band-limited Channels
Use the Poisson summation formula* to express the samples x(nT) as a train of pulses,

x_δ(t) = Σ_k x(kT) δ(t − kT)

The Fourier transform* is the folded spectrum

X_δ(f) = (1/T) Σ_n X(f − n/T)

The Nyquist condition x(nT) = δ[n] is equivalent to X_δ(f) = 1, i.e., the folded spectrum of x(t) has to be flat for no ISI.

Band-limited Channels
Folded spectrum has non-overlapping copies of X(f).
For a very high symbol rate such that 1/T > 2W, the folded spectrum

Σ_n X(f − n/T)

has gaps between the copies of X(f). The Nyquist condition cannot be satisfied; ISI is inevitable.

Band-limited Channels
Folded spectrum has copies of X(f) just touching their neighbors.
For a slower symbol rate, exactly at the critical rate 1/T = 2W, the folded spectrum Σ_n X(f − n/T) is flat if and only if

X(f) = T for |f| ≤ W
     = 0 for |f| > W

The corresponding time-domain function is the cardinal sine pulse, x(t) = sinc(t/T).
This pulse is not physically realizable; hence at or above the critical rate 1/T ≥ 2W, ISI is unavoidable in practice.

Band-limited Channels
Folded spectrum has copies of X(f) overlapping with their neighbors.
For an even slower symbol rate such that 1/T < 2W, the folded spectrum Σ_n X(f − n/T) can be flat with many different choices of X(f).
We can therefore design an ISI-free pulse shape which gives a flat folded spectrum.
When the symbol rate is below the Nyquist rate, a widely used ISI-free spectrum is the raised-cosine spectrum.

Band-limited Channels
(c) Eye Diagrams
[Figure: eye diagram of a binary pulse train; amplitude from −1.5 to 1.5 plotted over one symbol interval of normalized time from −1 to 1.]
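The sketch below is my own illustration of how an eye diagram is built (the pulse choice, 8 samples per symbol, and the trace length are arbitrary): the pulse-shaped waveform is sliced into two-symbol segments, and plotting all rows of `traces` on top of each other gives the eye.

```python
import numpy as np

sps = 8                                     # samples per symbol (illustrative)
rng = np.random.default_rng(1)
b = rng.choice([-1.0, 1.0], size=400)       # BPSK symbols

t = np.arange(-8 * sps, 8 * sps + 1) / sps
pulse = np.sinc(t)                          # truncated cardinal sine pulse x(t) = sinc(t/T), T = 1

up = np.zeros(b.size * sps)
up[::sps] = b                               # impulse train of symbols
wave = np.convolve(up, pulse, mode="same")  # pulse-shaped waveform

# slice the waveform into two-symbol traces -> rows of the eye diagram
span = 2 * sps
n_traces = wave.size // span - 1
traces = wave[: n_traces * span].reshape(n_traces, span)

print("worst-case magnitude at the sampling instant:", np.abs(traces[:, sps]).min())
# To display the eye: import matplotlib.pyplot as plt; plt.plot(traces.T, "b"); plt.show()
```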

Example: Raised Cosine


Time-domain function (α is the roll-off factor):

x(t) = [sin(πt/T) / (πt/T)] · cos(παt/T) / [1 − (2αt/T)²]

Frequency response:

X(f) = T,                                    0 ≤ |f| ≤ (1 − α)/(2T)
     = (T/2){1 − sin[π(2|f|T − 1)/(2α)]},    (1 − α)/(2T) < |f| ≤ (1 + α)/(2T)
     = 0,                                    otherwise

The frequency response of the square-root raised cosine filter is

P(f) = √X(f)

Example: Raised Cosine


Plots of the raised cosine frequency-domain and time-domain functions.

[Figure: raised cosine spectrum X(f) and pulse x(t).]

cf. Institute of Communication Engineering, National Sun Yat-sen University
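As a quick check of the raised-cosine formulas above (my own sketch; T = 1 and α = 0.35 are arbitrary values), the code samples x(t) at the symbol instants and confirms the Nyquist condition x(nT) = δ[n], i.e., zero ISI.

```python
import numpy as np

T, alpha = 1.0, 0.35                       # symbol period and roll-off (illustrative)

def rc_pulse(t):
    """Raised-cosine pulse x(t) = sinc(t/T) * cos(pi*alpha*t/T) / (1 - (2*alpha*t/T)^2)."""
    den = 1.0 - (2.0 * alpha * t / T) ** 2
    small = np.abs(den) < 1e-10            # handle the removable singularity by its limit
    den = np.where(small, 1.0, den)
    val = np.sinc(t / T) * np.cos(np.pi * alpha * t / T) / den
    return np.where(small, np.pi / 4 * np.sinc(1.0 / (2 * alpha)), val)

n = np.arange(-10, 11)
samples = rc_pulse(n * T)                  # x(nT): should be 1 at n = 0 and 0 elsewhere
print("x(0) =", samples[n == 0][0])
print("max |x(nT)| for n != 0:", np.abs(samples[n != 0]).max())
```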

Example: Sinc Function


Time-domain function:

x_sinc(t) = sin(πt) / (πt)

Frequency response:

X(f) = ∫ [sin(πt)/(πt)] e^{−j2πft} dt

X(f) = 1 for |f| ≤ 1/2
     = 0 otherwise

Matched Filtering

Matched filtering
Optimum Demodulator
The matched filter for a single symbol b₀ transmitted with pulse x(t) is the filter matched to x(t), i.e., x(−t) for a real pulse; sample its output at t = 0 to obtain the decision statistic.

A reasonable strategy for a sequence of symbols: sample the matched-filter output at t = mT to obtain the decision statistic for symbol b_m.

Matched filtering
Optimum Demodulator
The decision statistic is the matched-filter output at t = mT,

y[m] = Σ_n b[n] ⟨x(t − nT), x(t − mT)⟩ + v[m]

Stated differently,

y[m] = b[m] ‖x‖² + Σ_{n≠m} b[n] ⟨x(t − nT), x(t − mT)⟩ + v[m]

where the second term on the RHS is the intersymbol interference (ISI).

Matched filtering
Optimum Demodulator
Suppose x(t) were timelimited,

x(t) = 0 for |t| ≥ T

Therefore

⟨x(t − nT), x(t − mT)⟩ = 0 for n ≠ m

and this demodulation strategy can be interpreted as matched filtering.
From here on we denote the overall (transmit filter, channel, receive filter) impulse response by p(t), with samples p[n] = p(nT).

Matched filtering
Shortcomings
A timelimited waveform is never bandlimited.
Since the channel is bandlimited, neither p(t) nor its matched-filtered version p(t) * p(−t) is timelimited.
Hence ISI will in general be present.
One way to observe and quantitatively measure the effect of ISI is with eye diagrams.

Example 1
In a wireless environment with one direct path and one multipath, the received signal is given by

y[n] = b₁ x[n] + b₂ x[n − l] + v[n]

If we assume that the channel is slowly varying compared to the path time delay, this can be written as

y[n] = b[n] x[n] + v[n]

where the channel response is

x[n] = x₁[n] + x₂[n − l]

Could a filter be designed so that the recovered signal contains only the information in the direct path?

Equalization
30

Digital Communication System

What is equalization?
The multipath channel causes frequency selectivity and ISI.
Equalization can reduce the ISI and noise effects for better demodulation.

[Figure: channel response F(f), equalizer response H_eq(f), and the resulting equalized channel response, each plotted versus frequency.]

Two Major Methods

Characteristics:
Compensation for intersymbol interference
Adaptive in time, for time-varying wireless channels
Two phases
Learning: known symbols are transmitted back and forth between the base station and the mobile unit
In use: transmitted data are used to continually update the equalizer filter
Objective
Produce the inverse of the channel frequency response, such that

F(f) H_eq(f) ≈ constant

Reduction of the bit error rate (BER)

Zero Forcing Equalizer


34

Equalization
Equalization compensates for, or mitigates, inter-symbol interference (ISI) created by multipath in time-dispersive channels (frequency-selective fading channels).
The equalizer must be adaptive, since the channels are time varying.

Basic idea (frequency-domain view):

Z(f) = P(f) B(f) H(f) E(f)

where P(f) is the pulse (shaping) filter spectrum, B(f) the transmitted symbol spectrum, H(f) the channel frequency response (including the transmit and receive filters), E(f) the equalizer frequency response, and Z(f) the equalized output spectrum.

[Figure: spectra B(f), H(f), E(f) and the equalized output Z(f) plotted versus frequency from 0 to f_s = 1/T.]

In terms of the raised-cosine target response,

X_RC(f) = G_T(f) C(f) G_R(f) G_E(f)

Equalization
Design from a frequency-domain viewpoint.
Choose the transmit filter to have a square-root raised-cosine response with delay t₀,

H_T(f) = √(H_rc(f)) e^{−j2πft₀} for |f| ≤ W
       = 0 for |f| > W

[Figure: raised cosine pulse and raised cosine spectrum.]

Zero forcing equalizer
Design the receive filter matched to the transmit filter, H_R(f) = H_T*(f), so that

H_T(f) H_R(f) = |H_T(f)|² = H_rc(f)

Therefore the equalizer must compensate for the channel distortion.
Design an inverse channel filter that completely eliminates the ISI caused by the channel: the Zero Forcing (ZF) equalizer.

Example: Zero forcing equalizer

A two-path channel with impulse response

h_c[n] = h₀ δ[n] + h₁ δ[n − 1]

The transfer function is

H_c(z) = h₀ + h₁ z⁻¹

The equalizer we seek is given by the inverse,

H_eq(z) = 1 / H_c(z) = 1 / (h₀ + h₁ z⁻¹)
Zero forcing equalizer

DSP is generally adopted for automatic equalizers; it offers a convenient representation of the time-sampled signal.
Convolution of the time-continuous signals becomes, for the time-discrete (sampled) signals,

r[n] = h₀ s[n] + h₁ s[n − 1],   n = 0, 1, 2, …

Aside: Convolution

r(t) = s(t) * h_c(t)
r(nΔt) = s(nΔt) * h_c(nΔt)
r[n] = s[n] * h[n] = Σ_{k=0}^{1} h[k] s[n − k] = h₀ s[n] + h₁ s[n − 1]

Zero forcing equalizer

z-transform of the channel impulse response:

H_c(z) = h₀ + h₁ z⁻¹

The transfer function of the inverse channel filter is

H_eq(z) = 1 / H_c(z) = 1 / (h₀ + h₁ z⁻¹)

This can be realized by a circuit known as the linear transversal filter.

Zero forcing equalizer

The exact ZF equalizer is of infinite length, but it is usually implemented by a truncated (finite-length) approximation.
For the two-path channel above, expanding

1 / (h₀ + h₁ z⁻¹) = (1/h₀) [1 − (h₁/h₀) z⁻¹ + (h₁/h₀)² z⁻² − ⋯]

a 2-tap version of the ZF equalizer has coefficients c₀ = 1/h₀ and c₁ = −h₁/h₀².
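A small numerical sketch (my own illustration; the tap values h₀ = 1.0, h₁ = 0.5 are arbitrary) compares truncated ZF equalizers of increasing length by looking at the residual ISI left after equalization.

```python
import numpy as np

h0, h1 = 1.0, 0.5                          # two-path channel taps (illustrative values)
channel = np.array([h0, h1])

for n_taps in (2, 4, 8):
    k = np.arange(n_taps)
    eq = (1.0 / h0) * (-h1 / h0) ** k      # truncated series expansion of 1/(h0 + h1 z^-1)
    combined = np.convolve(channel, eq)    # overall channel * equalizer response
    residual_isi = np.abs(np.delete(combined, 0)).sum()
    print(f"{n_taps}-tap ZF: combined response {np.round(combined, 4)}, residual ISI = {residual_isi:.4f}")
```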

Normal equation solution


49

Normal equation solution

For a known channel impulse response, the tap gains of the zero forcing equalizer can be found by direct solution of a linear matrix equation.
From the channel samples y_k (with y_k = 0 outside the channel support) we form the (N + p₁ + 1) × (N + p₁ + 1) Toeplitz matrix Y, whose (i, j) entry is y_{p₁ + j − i},

Y = [ y_{p₁}    y_{p₁+1}  ⋯ ]
    [ y_{p₁−1}  y_{p₁}    ⋯ ]
    [   ⋮                 ⋱ ]

and the vector of desired ZF equalizer outputs

d = ( d_{p₁}, …, d_p, …, d_{N+p₁} )ᵀ

(see the 3-tap example on the following slides).

Normal equation solution

Then the vector of optimal tap gains, w_op, satisfies

w_opᵀ Y = dᵀ

To solve for w_op, first take the transpose of both sides,

(w_opᵀ Y)ᵀ = (dᵀ)ᵀ

Applying the rule for the transpose of a product of matrices,

Yᵀ w_op = d

Assume Yᵀ is square, nonsingular and invertible. Pre-multiplying by (Yᵀ)⁻¹,

w_op = (Yᵀ)⁻¹ d

Example
Suppose that a system has the channel impulse response vector

y = ( 0.90, 0.15, 0.20, −0.10, −0.05 )

where L = 4 is the causal finite channel length, i.e., y_i = 0 for i < 0 and i > 4.

The initial distortion before equalization, with y₀ the maximum-magnitude component, is

D = (1/|y₀|) Σ_{n≠0} |y_n| = (0.15 + 0.20 + 0.10 + 0.05) / 0.90 = 0.50 / 0.90 = 0.5556

and therefore, since D < 1, the minimum distortion is achievable with a ZF equalizer.

Example
We wish to design a 3-tap equalizer; y₀ is the largest component, p₁ = 0 and p₂ = 1.
Thus the delay is chosen as p = p₁ + p₂ = 1.
The desired response is d = e₁, the unit vector with a one at the delay position, so that

d = ( 0, 1, 0 )ᵀ

Example
Construct the matrix

Y = [ 0.90  0.15  0.20 ]
    [ 0     0.90  0.15 ]
    [ 0     0     0.90 ]

and obtain the optimal tap solution w_op = (Yᵀ)⁻¹ d, with

(Yᵀ)⁻¹ = [  1.1111   0        0      ]
         [ −0.1852   1.1111   0      ]
         [ −0.2160  −0.1852   1.1111 ]

so that

w_op = (Yᵀ)⁻¹ ( 0, 1, 0 )ᵀ = ( 0, 1.1111, −0.1852 )ᵀ

Example
The overall response of the channel and equalizer is

d̃ = Y_fullᵀ w_op

where Y_full is the full convolution matrix of the channel,

Y_fullᵀ = [  0.90   0      0     ]
          [  0.15   0.90   0     ]
          [  0.20   0.15   0.90  ]
          [ −0.10   0.20   0.15  ]
          [ −0.05  −0.10   0.20  ]
          [  0     −0.05  −0.10  ]
          [  0      0     −0.05  ]

giving

d̃ = ( 0, 1.0, 0, 0.194, −0.148, −0.037, 0.009 )ᵀ

The distortion after equalization is

D_min = Σ_{n≠1} |d̃_n| / |d̃₁| = 0.194 + 0.148 + 0.037 + 0.009 = 0.3889
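The whole example can be checked with a few lines of numpy. This sketch is my own reproduction (same channel vector, 3 taps, delay 1): it rebuilds Y, solves Yᵀ w = d, forms the full convolution, and reports the residual distortion.

```python
import numpy as np

y = np.array([0.90, 0.15, 0.20, -0.10, -0.05])   # channel impulse response (as in the example)
N = 3                                            # number of equalizer taps
delay = 1                                        # chosen delay p = p1 + p2

# Toeplitz matrix Y with (i, j) entry y[j - i], zero outside the channel support
Y = np.array([[y[j - i] if 0 <= j - i < y.size else 0.0 for j in range(N)] for i in range(N)])

d = np.zeros(N)
d[delay] = 1.0                                   # desired ZF output vector

w_op = np.linalg.solve(Y.T, d)                   # solve Y^T w = d
overall = np.convolve(y, w_op)                   # combined channel + equalizer response

D_min = np.abs(np.delete(overall, delay)).sum() / abs(overall[delay])
print("w_op    =", np.round(w_op, 4))
print("overall =", np.round(overall, 4))
print("D_min   =", round(float(D_min), 4))
```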

Adaptive Solution
Based on the Wiener-Hopf Equation

What is an adaptive equalizer?

A time-varying filter.
The most often used basic structure is the transversal filter:
N delays
N + 1 taps
N + 1 time-varying complex equalizer weights
An adaptive algorithm is used to update the equalizer weights on a
sample-by-sample basis, or
block-by-block basis.
It uses an error signal e_k, derived by comparing the output of the equalizer, d̂_k, with some other reference signal, d_k, to iteratively update the equalizer weights and thereby minimize a cost or objective function.

Adaptive Solution
Consider a wireless communication system consisting of a transmitter, a mobile unit, and an N-path multipath channel.
The channel impulse response is unknown to the receiver.
A known real-valued finite training sequence a is used to train the equalizer.
The signal sequence received at time n at the mobile unit is described by

y_n = Σ_{l=0}^{N−1} p_l a_{n−l}

where p_l is a measure of the loss in the l-th path.

Adaptive Solution
A transversal filter is used as an equalizer, whose taps w_j are used to model the inverse impulse response of the channel.
The equalizer taps can be obtained using a steepest-descent recursive algorithm,

w_j^(n+1) = w_j^(n) + Δ e_n a_{n−j−d₁}        (1)

where Δ is an adaptation step size chosen to trade off convergence rate against steady-state bit-error-rate performance.
The error between the (delayed) training sequence and the equalizer output is

e_n = a_{n−d} − d̂_n,   where   d̂_n = Σ_{i=0}^{N−1} w_i^(n) y_{n−i}        (2)

and w_j^(n) is the set of equalizer tap gains at sample time n.

Adaptive Solution
Fact: the adaptation rule in (1) attempts to find the equalizer taps w_j^(n) such that the cross correlations

E[ e_n a_{n−j−d₁} ],   j = 0, …, N−1

are forced to zero.
Sounds familiar? Yes. Remember least-squares estimation? The estimation error is forced normal (perpendicular) to the column space of the measurements; in this case, the training-sequence space.

Adaptive Solution
To see this, substitute

y_{n−i} = Σ_l p_l a_{n−i−l}

in the RHS of (2):

E[ e_n a_{n−j−d₁} ] = E[ a_{n−d} a_{n−j−d₁} ] − Σ_{i=0}^{N−1} Σ_l w_i p_l E[ a_{n−i−l} a_{n−j−d₁} ]        (3)

Note: the message symbols a_i may be considered independent random variables, hence

E[ a_i a_j ] = σ_a² δ_{i−j}

where δ_{i−j} = 1 for i = j, 0 for i ≠ j, and σ_a² = E[ a_k² ].

Adaptive Solution
Substitute d = d₁ + d₂ in the first term on the RHS of (3):

E[ a_{n−d} a_{n−j−d₁} ] = E[ a_{n−d₁−d₂} a_{n−d₁−j} ] = E[ a_k² ] δ_{d₂−j} = σ_a² δ_{d₂−j}

Using a similar argument for the summation term,

E[ a_{n−i−l} a_{n−j−d₁} ] = E[ a_k² ] δ_{l+i−j−d₁} = σ_a² δ_{l+i−j−d₁}

Using these two results in (3), and writing y_m = Σ_i w_i p_{m−i} for the combined channel-equalizer response,

E[ e_n a_{n−j−d₁} ] = σ_a² ( δ_{d₂−j} − y_{j+d₁} ),   for j = 0, 1, …, N−1        (4)

Adaptive Solution
Fact: the conditions E[ e_n a_{n−j−d₁} ] = 0 are satisfied when y_d = 1 and y_i = 0 for d − d₂ ≤ i < d and d < i ≤ d + d₂, which is the zero forcing solution.
Note: the ensemble average is over the noise and the data symbol alphabet.

After training the equalizer, a decision-feedback mechanism is typically employed, where the sequence of symbol decisions â is used to update the tap coefficients. This mode is called the data mode and allows the equalizer to track variations in the channel vector y.

Adaptive Solution
In the data mode,

w_j^(n+1) = w_j^(n) + Δ e_n â_{n−j−d₁},   j = 0, …, N−1

where the error term e_n in (2) becomes

e_n = â_{n−d} − Σ_{i=0}^{N−1} w_i y_{n−i}

Again, â_{n−d} is the detector decision on the equalizer output, delayed by d samples.

Minimum Mean Square Error


(MMSE) Algorithm
65

MMSE Estimator
Minimizes the mean square error (MSE) of the fitted values of the dependent variables.
Provides a measure of the estimation quality.
It is the estimator in a Bayesian setting with a quadratic cost function.
Seeks to estimate a parameter that is itself a random variable.
The Bayesian approach allows better posterior estimates as more observations become available.
Bayesian estimation handles sequences of observations that are not necessarily independent.
Useful when the minimum variance unbiased estimator (MVUE) does not exist or cannot be found.

Minimum Mean Square Error

The zero-forcing equalizer removes ISI, but it may not give the best error performance for a communication system because it does not take the noise in the system into account.
A different equalizer that takes noise into account is based on the minimum mean square error (MMSE) criterion.
It uses the mean square error (MSE) as the design criterion.
Without prior knowledge of the information symbols a_k, each symbol is modeled as a random variable.
A linear equalizer w_n(z) is chosen to minimize the MSE between the original information symbols a_k and the output of the equalizer d̂_k:

MSE = E[ e_k² ] = E[ (a_k − d̂_k)² ]

Minimum Mean Square Error

For the equalizer, use an FIR filter of order N.

[Figure: tapped delay line (transversal) filter; the input y_k passes through a chain of z⁻¹ delays, the delayed samples are weighted by w_{n,0}, w_{n,1}, …, w_{n,N} and summed to give the output d̂_k.]

Note that a delay of N symbols is incurred at the output of the FIR filter.

Mean Square Error

Define the input and output signal and filter weight vectors as

y_k = ( y_k, …, y_{k−N} )ᵀ
d_k = ( d_k, …, d_{k−N} )ᵀ
w_n = ( w_{n,0}, …, w_{n,N} )ᵀ

where the equalizer output is

d̂_k = Σ_{m=0}^{N} w_{n,m} y_{k−m} = w_nᵀ y_k = y_kᵀ w_n

We need to extract a desired signal d_k. To this end we construct the error between the output of the equalizer filter d̂_k and d_k,

e_k = d_k − d̂_k

Mean Square Error

When the desired equalizer output is known, i.e., d_k = b_k, the error signal is given by

e_k = b_k − d̂_k = b_k − w_kᵀ y_k = b_k − y_kᵀ w_k

We compute the squared error e_k² at time instant k,

e_k² = b_k² + w_kᵀ y_k y_kᵀ w_k − 2 b_k y_kᵀ w_k

The expected value of e_k², which is tantamount to the time average, is

E[ e_k² ] = E[ b_k² ] + w_kᵀ E[ y_k y_kᵀ ] w_k − 2 E[ b_k y_kᵀ ] w_k

where the filter weights w_k are considered converged to an optimum (fixed) value.

Second Order Statistics

It would be trivial to simplify the above equation if b_k and y_k were independent. This is not true in general, since b_k and y_k must be correlated if we are to successfully extract the desired symbol information.
Instead, we use the cross-correlation vector r between the desired output b_k and the input signal y_k, defined by

r = E[ b_k y_k ] = E[ ( b_k y_k, b_k y_{k−1}, …, b_k y_{k−N} ) ]ᵀ

and the (N + 1) × (N + 1) input autocorrelation matrix R,

R = E[ y_k y_kᵀ ] = [ E[y_k²]         E[y_k y_{k−1}]      ⋯  E[y_k y_{k−N}]     ]
                    [ E[y_{k−1} y_k]  E[y_{k−1}²]         ⋯  E[y_{k−1} y_{k−N}] ]
                    [      ⋮                                      ⋮             ]
                    [ E[y_{k−N} y_k]  E[y_{k−N} y_{k−1}]  ⋯  E[y_{k−N}²]        ]

Minimum Mean Square Error

If b_k and y_k are stationary, then the elements of R and r are time-invariant second-order statistics. Rewriting the expression for the expected value of the mean square error in terms of these statistics,

ξ = E[ b_k² ] + w_kᵀ R w_k − 2 rᵀ w_k

By minimizing the above equation with respect to the weight vector w_k, it becomes possible to adaptively tune the equalizer to provide a flat spectral response (minimal ISI) in the received signal.
We take the gradient of ξ with respect to w_k to determine the minimum mean square error.

Minimum Mean Square Error

The gradient of ξ is defined as

∇_w ξ = ( ∂ξ/∂w₀, ∂ξ/∂w₁, …, ∂ξ/∂w_N )ᵀ

Gradient of ξ:

∇_w ξ = ∇_w ( E[b_k²] + w_kᵀ R w_k − 2 rᵀ w_k )
      = ∇_w ( w_kᵀ R w_k ) − 2 ∇_w ( rᵀ w_k )
      = 2 R w − 2 r

Minimum Mean Square Error

Setting the gradient equal to zero results in the Wiener-Hopf equation,

R w = r

Pre-multiplying both sides by R⁻¹, we obtain the equation for the equalizer weights,

w = R⁻¹ r
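A short numpy sketch of this result follows; it is my own illustration (the 3-tap channel, noise level, equalizer length, and decision delay are arbitrary, not from the lecture). It estimates R and r from a training block and solves R w = r.

```python
import numpy as np

rng = np.random.default_rng(2)
channel = np.array([1.0, 0.5, -0.3])          # illustrative channel
N = 8                                         # equalizer order (N + 1 taps)
delay = 4                                     # decision delay for the reference symbol

b = rng.choice([-1.0, 1.0], size=20000)       # training symbols b_k
noise = 0.05 * rng.standard_normal(b.size + channel.size - 1)
y = np.convolve(b, channel) + noise           # received signal

# regression vectors y_k = (y_k, ..., y_{k-N})^T
Ymat = np.array([y[k - np.arange(N + 1)] for k in range(N, b.size)])
ref = b[N - delay: b.size - delay]            # desired output b_{k-delay}

R = Ymat.T @ Ymat / Ymat.shape[0]             # sample estimate of E[y_k y_k^T]
r = Ymat.T @ ref / Ymat.shape[0]              # sample estimate of E[b_{k-delay} y_k]

w = np.linalg.solve(R, r)                     # Wiener-Hopf solution R w = r
mse = np.mean((ref - Ymat @ w) ** 2)
print("MMSE equalizer taps:", np.round(w, 3))
print("residual MSE:", round(float(mse), 4))
```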

Iteration of the Gradient

The linear MMSE equalizer can also be found iteratively.
The MSE is quadratic; thus the gradient of the MSE with respect to w_n gives the direction in which to change w_n for the largest increase of the MSE.
In our notation, the gradient is 2 R w_k − 2 r.
To decrease the MSE, we update w_k in the direction opposite to the gradient.

Steepest Descent Algorithm

This is the steepest descent algorithm.
At the n-th step, the weight vector is updated as

w_k^(n) = w_k^(n−1) + Δ ( r − R w_k^(n−1) )        (all quantities are column vectors)

where Δ is a small positive constant that controls the rate of convergence to the optimal solution.

Least Mean Square Algorithm

In many applications R and r are unknown in advance.
The transmitter can transmit a training sequence known a priori by the receiver.
With the training sequence, the receiver can estimate R and r.
Alternatively, with a training sequence, we can replace R and r at each step of the steepest descent algorithm by the rough (instantaneous) estimates y_k y_kᵀ and d_k y_k, respectively.
The algorithm then becomes

w_k^(n) = w_k^(n−1) + Δ e_k y_k

This is the stochastic steepest descent algorithm, called the Least Mean Square (LMS) algorithm.
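A minimal LMS sketch follows (my own illustration; the channel, step size Δ = 0.01, 11 taps, and decision delay are arbitrary choices). It adapts the weights sample by sample during training and prints the error power before and after convergence.

```python
import numpy as np

rng = np.random.default_rng(3)
channel = np.array([1.0, 0.4, -0.2])
n_taps, delay, step = 11, 5, 0.01             # equalizer length, decision delay, step size (Delta)

b = rng.choice([-1.0, 1.0], size=5000)        # known training symbols
y = np.convolve(b, channel)[: b.size] + 0.05 * rng.standard_normal(b.size)

w = np.zeros(n_taps)                          # equalizer weights
errors = []
for k in range(n_taps, b.size):
    y_k = y[k - np.arange(n_taps)]            # input vector (y_k, ..., y_{k-N})
    d_hat = w @ y_k                           # equalizer output
    e_k = b[k - delay] - d_hat                # error against the delayed training symbol
    w = w + step * e_k * y_k                  # LMS update: w <- w + Delta * e_k * y_k
    errors.append(e_k ** 2)

print("MSE over first 200 updates:", round(float(np.mean(errors[:200])), 4))
print("MSE over last 200 updates: ", round(float(np.mean(errors[-200:])), 4))
```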

Least Mean Square Algorithm

The LMS method is a gradient-based approach using statistical (ensemble-average) estimates.
Convergence of the LMS algorithm is very slow.
There are other methods, not based on statistical averages, which converge faster.
One such method is the Least Squares (LS) algorithm.

Least Squares Algorithm


79

Least Squares Algorithm

Rapid convergence: the error measurement is expressed as a time average over the actual received signal.
The problem is that of approximating an overdetermined system, given by the matrix equation

P_n w_n = q_n

where P_n is an M × N matrix (M > N) of measured input data, w_n is an N × 1 column vector of unknown coefficients, and q_n is an M × 1 column vector of observed output measurements.

Least Squares Algorithm

Consider the error vector expression

e_i = q_i − P_i w_i

and the least squares error

J = e_iᵀ e_i = ( q_i − P_i w_i )ᵀ ( q_i − P_i w_i )

The gradient of J with respect to w is

∇_w J = ∇_w ( q_iᵀ q_i − q_iᵀ P_i w_i − w_iᵀ P_iᵀ q_i + w_iᵀ P_iᵀ P_i w_i )
      = −2 P_iᵀ q_i + 2 P_iᵀ P_i w_i = 0

Using the notation from the previous section,

P_iᵀ P_i w_i = P_iᵀ q_i

Least Squares Algorithm

When the process is overdetermined, P_i has more rows than columns and is therefore not invertible.
However, it may be of full column rank, so that P_iᵀ P_i is invertible.
Hence pre-multiply both sides by the inverse of P_iᵀ P_i and solve for the unknown tap coefficients,

ŵ_n = ( P_iᵀ P_i )⁻¹ P_iᵀ q_n

This means that the RHS of

P_i ŵ_n = P_i ( P_iᵀ P_i )⁻¹ P_iᵀ q_n

is a linear combination of the column vectors of P_i, i.e., it lies in the column space of P_i.
This algorithm is impractical for wireless systems.

Recursive Least Squares Algorithm


83

Recursive Least Squares

The above algorithm is impractical for wireless systems.
A Recursive Least Squares (RLS) version, based on the time average of an error function,

J_n = Σ_{i=1}^{n} λ^{n−i} e(i, n) e*(i, n)

is more attractive; λ is a weighting factor close to, but less than, one.

Recursive Least Squares

Summary of the recursive least squares (RLS) algorithm:

equalizer filter output signal:   d̂_n = w_{n−1}ᵀ p_n
adaptive error equation:          e_n = x_n − d̂_n
recursive gain coefficient:       k_n = R_{n−1}⁻¹ p_n / ( λ + p_nᵀ R_{n−1}⁻¹ p_n )
recursive update of the inverse correlation matrix:
                                  R_n⁻¹ = (1/λ) [ R_{n−1}⁻¹ − k_n p_nᵀ R_{n−1}⁻¹ ]
weight vector update equation:    w_n = w_{n−1} + k_n e_n
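The sketch below is my own illustration of these five recursions (the channel, λ = 0.99, and the common R₀⁻¹ = δI initialization with δ = 100 are arbitrary choices, not from the lecture).

```python
import numpy as np

rng = np.random.default_rng(4)
channel = np.array([1.0, 0.4, -0.2])
n_taps, delay, lam = 11, 5, 0.99              # taps, decision delay, forgetting factor lambda

b = rng.choice([-1.0, 1.0], size=2000)        # training symbols
y = np.convolve(b, channel)[: b.size] + 0.05 * rng.standard_normal(b.size)

w = np.zeros(n_taps)
R_inv = 100.0 * np.eye(n_taps)                # initialization R_0^{-1} = delta * I

errors = []
for n in range(n_taps, b.size):
    p_n = y[n - np.arange(n_taps)]            # input (regression) vector
    d_hat = w @ p_n                           # filter output with the previous weights
    e_n = b[n - delay] - d_hat                # adaptive error
    k_n = R_inv @ p_n / (lam + p_n @ R_inv @ p_n)        # gain vector
    R_inv = (R_inv - np.outer(k_n, p_n @ R_inv)) / lam   # inverse-correlation update
    w = w + k_n * e_n                                    # weight update
    errors.append(e_n ** 2)

print("MSE over first 100 samples:", round(float(np.mean(errors[:100])), 4))
print("MSE over last 100 samples: ", round(float(np.mean(errors[-100:])), 4))
```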

Classification of equalizers

86

Comparison of adaptive algorithms

87

Appendices
88

Matched Filters

Matched Filters
A matched filter is the linear filter h that maximizes the output signal-to-noise ratio,

y[n] = Σ_k h[n − k] x[k]

Derivations:
Geometric argument: maximize the signal-to-noise ratio.
Correlation argument: maximize an inner (vector) product.
Definition of the problem: we seek a filter h[n] that maximizes the output signal-to-noise ratio for the observed signal x[n] = s[n] + w[n], where s[n] is the desired signal and w[n] an additive noise.

Matched Filters
Hermitian noise covariance matrix:

R_w = E[ w wᴴ ]

Inner product of the filter and the observed signal:

y = hᴴ x = hᴴ s + hᴴ w = y_s + y_w

We want to maximize the signal-to-noise ratio

SNR = |y_s|² / E[ |y_w|² ] = |hᴴ s|² / E[ |hᴴ w|² ]

by choosing h[n].

Matched Filters
Expand the denominator of the objective function,

E[ |hᴴ w|² ] = E[ (hᴴ w)(hᴴ w)ᴴ ] = hᴴ E[ w wᴴ ] h = hᴴ R_w h

Substituting into the objective expression,

SNR = |hᴴ s|² / ( hᴴ R_w h )

Using the Hermitian square root of R_w, write hᴴ s = (R_w^{1/2} h)ᴴ (R_w^{−1/2} s), so that

SNR = | (R_w^{1/2} h)ᴴ (R_w^{−1/2} s) |² / ( (R_w^{1/2} h)ᴴ (R_w^{1/2} h) )

Matched Filters
Rewriting,

SNR = |hᴴ s|² / ( hᴴ R_w h ) = | (R_w^{1/2} h)ᴴ (R_w^{−1/2} s) |² / ( (R_w^{1/2} h)ᴴ (R_w^{1/2} h) )

Using the Cauchy-Schwarz inequality |aᴴ b|² ≤ (aᴴ a)(bᴴ b),

SNR ≤ ( (R_w^{1/2} h)ᴴ (R_w^{1/2} h) ) ( (R_w^{−1/2} s)ᴴ (R_w^{−1/2} s) ) / ( (R_w^{1/2} h)ᴴ (R_w^{1/2} h) )
    = sᴴ R_w⁻¹ s

cf. the appendix at the end for an example of the Cauchy-Schwarz inequality.

Matched Filters
We achieve the (tight) upper bound if we choose the substitution

R_w^{1/2} h = R_w^{−1/2} s

Thus

SNR = | (R_w^{−1/2} s)ᴴ (R_w^{−1/2} s) |² / ( (R_w^{−1/2} s)ᴴ (R_w^{−1/2} s) ) = sᴴ R_w⁻¹ s

Hence the optimum matched filter is

h = α R_w⁻¹ s

up to a proportionality constant α.

Matched Filters
Substituting h = α R_w⁻¹ s into y_s = hᴴ s gives the signal component of the output,

y_s = α sᴴ R_w⁻¹ s

Constrain the noise power at the output to unity, i.e.,

E[ |y_w|² ] = hᴴ R_w h = 1

normalizing the expected filter output noise power to unity.
From the expression for the SNR,

SNR = |y_s|² / E[ |y_w|² ] = sᴴ R_w⁻¹ s

so that

|y_s|² = sᴴ R_w⁻¹ s

Matched Filters
Substituting y_s = α sᴴ R_w⁻¹ s into the LHS of the above expression,

( α sᴴ R_w⁻¹ s )² = sᴴ R_w⁻¹ s

and solving for the proportionality constant,

α = 1 / √( sᴴ R_w⁻¹ s )

Hence the expression for the normalized filter is

h = R_w⁻¹ s / √( sᴴ R_w⁻¹ s )

Matched Filters
[Block diagram: the observation x[k] = s[k] + w[k] is passed through the matched filter h[n] = R_w⁻¹ s / √( sᴴ R_w⁻¹ s ), producing the output y[n] = Σ_k h[n − k] ( s[k] + w[k] ).]
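A small numerical check of the result h ∝ R_w⁻¹ s follows (my own sketch; the signal vector and the correlated-noise covariance are arbitrary). It compares the output SNR of the matched filter against a plain correlator h = s.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
s = rng.standard_normal(n)                     # known (real) signal vector

# an arbitrary valid noise covariance: R_w = A A^T plus small diagonal loading
A = rng.standard_normal((n, n))
R_w = A @ A.T + 0.1 * np.eye(n)

def output_snr(h):
    """SNR = |h^T s|^2 / (h^T R_w h) for a real filter h."""
    return (h @ s) ** 2 / (h @ R_w @ h)

h_mf = np.linalg.solve(R_w, s)                 # matched filter h = R_w^{-1} s (scale does not affect SNR)
h_corr = s                                     # plain correlator, ignores the noise correlation

print("SNR of matched filter  :", round(float(output_snr(h_mf)), 3))
print("SNR of correlator h = s:", round(float(output_snr(h_corr)), 3))
print("theoretical maximum s^T R_w^{-1} s:", round(float(s @ np.linalg.solve(R_w, s)), 3))
```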

Discrete-time Fourier Transform


98

Poisson Summation Formula

Continuous-time Fourier transform:

X(f) = ∫ x(t) e^{−i2πft} dt        (1)

Poisson summation formula, for appropriate functions x:

Σ_n x(n) = Σ_k X(k),   where X = F{x}        (2)

With the substitution x(t) = g(tT) and the scaling property of the Fourier transform,

F{ g(tT) }(f) = (1/T) G(f/T),   T > 0        (3)

Poisson Summation Formula

Combining (2) and (3), expression (2) becomes

Σ_n g(nT) = (1/T) Σ_k G(k/T)        (4)

With another definition, s_t(x) = g(x + t), and the transform property

F{ s_t }(f) = S(f) e^{i2πft}        (5)

expression (4) becomes

Σ_n g(t + nT) = (1/T) Σ_k S(k/T) e^{i2πtk/T}        (6)

Poisson Summation Formula

With the definition s_t(x) = s(x + t) and the transform property

F{ s_t }(f) = S(f) e^{i2πft}        (7)

the periodic summation in (6) becomes

Σ_n s(t + nT) = (1/T) Σ_k S(k/T) e^{i2πtk/T}        (8)

Similarly, for the periodic summation of a function's Fourier transform,

Σ_k S(f − k/T) = T Σ_n s(nT) e^{−i2πfnT} = T · F{ Σ_n s(nT) δ(t − nT) }(f)        (9)
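A quick numerical sanity check of (4) follows (my own sketch; the Gaussian pulse and T = 0.7 are arbitrary choices). It compares the two sides of Σ_n g(nT) = (1/T) Σ_k G(k/T) for a function whose Fourier transform is known in closed form.

```python
import numpy as np

a, T = 1.3, 0.7                                  # arbitrary Gaussian width and sampling period

def g(t):
    return np.exp(-a * t ** 2)                   # g(t) = exp(-a t^2)

def G(f):
    # Fourier transform of g with the e^{-i 2 pi f t} convention
    return np.sqrt(np.pi / a) * np.exp(-(np.pi * f) ** 2 / a)

n = np.arange(-200, 201)                         # both sums converge very fast
lhs = g(n * T).sum()                             # sum_n g(nT)
rhs = G(n / T).sum() / T                         # (1/T) sum_k G(k/T)

print("sum_n g(nT)        =", lhs)
print("(1/T) sum_k G(k/T) =", rhs)
print("difference         =", abs(lhs - rhs))
```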

Methodology
Transversal filter implementation:

[Block diagram: delayed inputs y_k[0], y_k[1], y_k[2], …, y_k[n] are multiplied by the weights w_k[0], w_k[1], w_k[2], …, w_k[n] and summed to form the output d̂[n]; an adaptive algorithm compares d̂[n] with the reference d[n] and updates the set of weights w_k[n].]

Example 7.2
Four matrix algebra rules useful in the study of adaptive equalizers.

1. The gradient of a similarity transformation
a. Similarity transformation: the scalar resulting from the product of a row vector, a square matrix, and a column vector,

xᵀ A x = Σ_{i=0}^{N} Σ_{j=0}^{N} x_i A_{ij} x_j

where x is an (N + 1) column vector and A is an (N + 1) × (N + 1) symmetric matrix, i.e., A = Aᵀ.

b. Gradient: an (N + 1) column vector of partial derivatives,

∇ = ( ∂/∂x₀, ∂/∂x₁, …, ∂/∂x_N )ᵀ

Example 7.2
c. The gradient of a scalar is a column vector, thus

∇_x ( xᵀ A x ) = ∇_x Σ_{i=0}^{N} Σ_{j=0}^{N} x_i A_{ij} x_j

where the k-th element of the column vector is given by

∂/∂x_k Σ_{i=0}^{N} Σ_{j=0}^{N} x_i A_{ij} x_j = Σ_{j=0}^{N} A_{kj} x_j + Σ_{i=0}^{N} x_i A_{ik} = 2 Σ_{i=0}^{N} A_{ki} x_i

(using A = Aᵀ), resulting in the (N + 1) column vector

∇_x ( xᵀ A x ) = 2 A x

Example 7.2
2. For any square nonsingular matrix, A A⁻¹ = A⁻¹ A = I.
3. For any matrix product, ( A B )ᵀ = Bᵀ Aᵀ.
4. For any symmetric nonsingular matrix, Aᵀ = A and ( A⁻¹ )ᵀ = A⁻¹.
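Rule 1, ∇_x (xᵀ A x) = 2 A x for symmetric A, is easy to verify numerically. The sketch below is my own check with a random 4 × 4 symmetric matrix, comparing a finite-difference gradient with 2 A x.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 4
M = rng.standard_normal((N, N))
A = (M + M.T) / 2                      # make A symmetric, A = A^T
x = rng.standard_normal(N)

f = lambda v: v @ A @ v                # similarity transformation x^T A x

eps = 1e-6
numeric = np.array([(f(x + eps * np.eye(N)[k]) - f(x - eps * np.eye(N)[k])) / (2 * eps)
                    for k in range(N)])
analytic = 2 * A @ x                   # rule 1: gradient of x^T A x is 2 A x

print("finite-difference gradient:", np.round(numeric, 6))
print("2 A x                     :", np.round(analytic, 6))
```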
