
FIR equalizers

Outline
• Overview of FIR Equalizers
• Zero-forcing equalizers
• Linear estimation
• Orthogonality Principle
• Biased and unbiased linear estimates
• Estimation of multiple random variables
• MMSE equalizers
• MIMO Frequency-Nonselective Channels
• Example
• Channel-Shortening equalizers
Overview of FIR Equalizers
Overview of FIR Equalizers (1/1)
• In wideband communication systems, the transmission channels are usually frequency-selective and thus introduce some degree of intersymbol interference (ISI) in addition to noise.

• At the receiver, some signal processing is often carried out to alleviate the effect of these distortions. Such processing is generally known as channel equalization.
System Model (1/3)
• The goal of channel equalization is to produce an output signal that is "close" to the transmitted signal x(n) in some sense.

• Figure: transmission system with an LTI equalizer a(n).

System Model (2/3)
• In many applications, the channel can be modeled as an FIR LTI filter by making the order $L_c$ sufficiently large:

$$C(z) = \sum_{n=0}^{L_c} c(n)\, z^{-n}$$

• The channel noise q(n) is a zero-mean wide-sense stationary (WSS) random process.
System Model (3/3)
• The equalizer A(z) has transfer function:

$$A(z) = \sum_{n=0}^{L_a} a(n)\, z^{-n}$$

• Letting x(n) be the transmitted signal, the equalizer output is given by

$$\hat{x}(n) = (a * c * x)(n) + (a * q)(n)$$

where "*" denotes convolution.
Zero-forcing equalizers
Zero-forcing equalizers (1/9)
• The transfer function from the transmitted signal x(n) to the equalizer output is given by:

$$T(z) = A(z)\, C(z)$$

• The equalizer A(z) is zero-forcing if $T(z) = z^{-n_0}$. The output of a zero-forcing equalizer can be expressed as:

$$\hat{x}(n) = x(n - n_0) + q_0(n)$$

where $q_0(n) = (a * q)(n)$ is the equalizer output noise. The integer $n_0$ is the system delay.
Zero-forcing equalizers (2/9)
• For an FIR channel C(z) and an FIR equalizer A(z), their product A(z)C(z) is also FIR. The transfer function T(z) can be a delay if and only if both C(z) and A(z) are delays. When the channel is frequency-selective, C(z) has more than one tap.

• Therefore, there does not exist an FIR zero-forcing equalizer for frequency-selective channels. An alternative solution is to find A(z) so that A(z)C(z) is "close" to a delay $z^{-n_0}$. To do this, let us define

$$d(n) = (a * c)(n) - \delta(n - n_0)$$
Zero-forcing equalizers (3/9)
• When d(n) = 0, the equalizer A(z) is zero-forcing. However, this is not possible unless the channel c(n) has only one nonzero tap. So we design the equalizer a(n) to minimize $\sum_n |d(n)|^2$.
• This problem can be formulated using matrix notation as follows. Let the vectors $\mathbf{t}$ and $\mathbf{a}$ be given by:

$$\mathbf{t} = \begin{bmatrix} t(0) \\ t(1) \\ \vdots \\ t(L) \end{bmatrix} \quad \text{and} \quad \mathbf{a} = \begin{bmatrix} a(0) \\ a(1) \\ \vdots \\ a(L_a) \end{bmatrix}$$

where $L_a$ is the order of A(z) and $L = L_a + L_c$ is the order of T(z).
Zero-forcing equalizers (4/9)
• Then they are related by:

$$\mathbf{t} = \mathbf{C}_{low}\, \mathbf{a}$$

• where $\mathbf{C}_{low}$ is the $(L+1) \times (L_a+1)$ lower triangular Toeplitz matrix whose first column is $[\,c(0),\; c(1),\; \ldots,\; c(L_c),\; 0,\; \ldots,\; 0\,]^T$ and whose first row is $[\,c(0),\; 0,\; \ldots,\; 0\,]$.
Zero-forcing equalizers (5/9)
• The quantity $\varepsilon_d(n_0)$ below is a measure of the closeness of the transfer function T(z) to the delay $z^{-n_0}$.

• The problem of finding the optimal a(n) becomes a least-squares problem of finding a(n) to minimize:

$$\varepsilon_d(n_0) = \sum_{n=0}^{L} |d(n)|^2 = \left\|\mathbf{t} - \mathbf{1}_{n_0}\right\|^2 = \left\|\mathbf{C}_{low}\,\mathbf{a} - \mathbf{1}_{n_0}\right\|^2$$

where $\mathbf{1}_{n_0}$ denotes the unit vector whose $n_0$-th entry is one and all other entries are zero.
Zero-forcing equalizers (6/9)
• From linear algebra, we know that the closest vector is the orthogonal projection of $\mathbf{1}_{n_0}$ onto the column space of $\mathbf{C}_{low}$.

• The least-squares solution is given by:

$$\mathbf{a}_{ls} = \left(\mathbf{C}_{low}^{\dagger}\,\mathbf{C}_{low}\right)^{-1} \mathbf{C}_{low}^{\dagger}\, \mathbf{1}_{n_0}$$
Zero-forcing equalizers (7/9)
• Substituting this into the expression for $\varepsilon_d(n_0)$, we get:

$$\varepsilon_{d,ls}(n_0) = \left\|\mathbf{B}_{ls}\,\mathbf{1}_{n_0} - \mathbf{1}_{n_0}\right\|^2$$

where:

$$\mathbf{B}_{ls} = \mathbf{C}_{low}\left(\mathbf{C}_{low}^{\dagger}\,\mathbf{C}_{low}\right)^{-1} \mathbf{C}_{low}^{\dagger}$$
Zero-forcing equalizers (8/9)
• Expanding the right-hand side gives:

$$\varepsilon_{d,ls}(n_0) = \mathbf{1}_{n_0}^{\dagger}\mathbf{B}_{ls}^{\dagger}\mathbf{B}_{ls}\mathbf{1}_{n_0} - \mathbf{1}_{n_0}^{\dagger}\mathbf{B}_{ls}\mathbf{1}_{n_0} - \mathbf{1}_{n_0}^{\dagger}\mathbf{B}_{ls}^{\dagger}\mathbf{1}_{n_0} + 1$$

• Using the facts that $\mathbf{B}_{ls}^{\dagger} = \mathbf{B}_{ls}$ and $\mathbf{B}_{ls}^2 = \mathbf{B}_{ls}$ (it is an orthogonal projection matrix), we simplify to:

$$\varepsilon_{d,ls}(n_0) = 1 - \left[\mathbf{B}_{ls}\right]_{n_0,\, n_0}$$
Zero-forcing equalizers (9/9)
• The minimum is achieved when $n_0$ is chosen such that the $n_0$-th diagonal entry of $\mathbf{B}_{ls}$ is the largest.

• This equalizer is called a delay-optimized least-squares equalizer.

• The design procedure is summarized as follows (a code sketch is given after this list):
 Compute the matrix $\mathbf{B}_{ls}$.
 Find $n_0$ so that the $n_0$-th diagonal entry of $\mathbf{B}_{ls}$ is the largest.
 Compute $\mathbf{a}_{ls} = (\mathbf{C}_{low}^{\dagger}\mathbf{C}_{low})^{-1}\mathbf{C}_{low}^{\dagger}\,\mathbf{1}_{n_0}$.
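A minimal NumPy/SciPy sketch of this procedure follows. The channel taps and equalizer order are placeholder assumptions; only the steps (build $\mathbf{C}_{low}$, form $\mathbf{B}_{ls}$, pick $n_0$, solve for $\mathbf{a}_{ls}$) come from the text:

```python
import numpy as np
from scipy.linalg import toeplitz

def ls_equalizer(c, La):
    """Delay-optimized least-squares equalizer for an FIR channel with taps c."""
    Lc = len(c) - 1
    L = Lc + La                                  # order of T(z) = A(z)C(z)
    C_low = toeplitz(np.r_[c, np.zeros(L - Lc)], # (L+1) x (La+1) lower triangular
                     np.zeros(La + 1))           # Toeplitz convolution matrix
    B_ls = C_low @ np.linalg.pinv(C_low)         # projector onto the column space
    n0 = int(np.argmax(np.diag(B_ls).real))      # delay with the largest diagonal entry
    one_n0 = np.zeros(L + 1); one_n0[n0] = 1.0   # unit vector 1_{n0}
    a_ls = np.linalg.pinv(C_low) @ one_n0        # least-squares taps a_ls
    eps = 1.0 - B_ls[n0, n0].real                # residual error eps_{d,ls}(n0)
    return a_ls, n0, eps

a_ls, n0, eps = ls_equalizer(np.array([1.0, 0.6, 0.2]), La=8)  # assumed values
```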
Effect of channel noise
Effect of channel noise (1/3)
• Suppose there is channel noise q(n). The output error of the equalized system is then:

$$e(n) = \underbrace{(a * c * x)(n) - x(n - n_0)}_{e_{sig}(n)} + \underbrace{(a * q)(n)}_{e_q(n)}$$

• The output error consists of two terms:
 Signal-dependent term $e_{sig}(n)$
 Noise-dependent term $e_q(n)$
Effect of channel noise (2/3)
• Channel:
• AWGN q(n) with variance
• SNR = 10/3 (5.23 dB)
• Equalizer: delay-optimized least-squares equalizer
• Error variances:

$$\sigma_{e_q}^2 = E\left[\left|e_q(n)\right|^2\right]$$
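The two error variances can be estimated by simulation. The sketch below uses an assumed channel, equalizer, delay, and noise variance, since the slide's exact values were not recovered; it only illustrates how $\sigma_{e_{sig}}^2$ and $\sigma_{e_q}^2$ would be measured:

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([1.0, 0.95])        # assumed channel taps (not from the slide)
a = np.array([0.7, -0.35, 0.18]) # assumed equalizer taps a(n)
n0 = 0                           # assumed system delay
sigma_q2 = 0.3                   # assumed noise variance

N = 200_000
x = rng.choice([-1.0, 1.0], size=N)                        # unit-power symbols
q = np.sqrt(sigma_q2) * rng.standard_normal(N + len(c) - 1)

t = np.convolve(a, c)                       # overall response t(n) = (a*c)(n)
e_sig = np.convolve(t, x)[n0:n0 + N] - x    # (a*c*x)(n) - x(n - n0)
e_q = np.convolve(a, q)[n0:n0 + N]          # (a*q)(n)

print("sigma2_esig =", e_sig.var())
print("sigma2_eq   =", e_q.var(), "(theory:", sigma_q2 * np.sum(a**2), ")")
```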
Effect of channel noise (3/3)
Linear estimation
Linear estimation (1/1)
• In digital communications, we often estimate a random variable x from noisy observations $y_0, y_1, \ldots, y_{K-1}$.

• The estimate is a linear combination of the observed samples:

$$\hat{x} = a_0 y_0 + a_1 y_1 + \cdots + a_{K-1} y_{K-1}$$

• This is known as a linear estimator. Define the estimation error e as:

$$e = \hat{x} - x$$
Orthogonality Principle
Orthogonality Principle (1/3)
• One commonly used criterion is the mean squared error (MSE) $E[|e|^2]$.

• The optimal estimate that minimizes the MSE is known as the minimum mean squared error (MMSE) solution.

• A powerful tool for finding the MMSE solution is the orthogonality principle, outlined in Theorem 3.1:

$$E\left[e\, y_i^*\right] = 0, \quad \text{for } i = 0, 1, \ldots, K-1$$

• This means the estimation error e is orthogonal to every observed sample $y_i$.
Orthogonality Principle (2/3)
• The estimate $\hat{x}$ is a linear combination of $y_0, y_1, \ldots, y_{K-1}$ and thus belongs to the subspace spanned by these observations.
Orthogonality Principle (3/3)
• The MMSE estimate $\hat{x}_{\perp}$ is the orthogonal projection of x onto this subspace. It follows that the MMSE estimate and its estimation error are orthogonal:

$$E\left[\hat{x}_{\perp}\, e_{\perp}^*\right] = 0$$

• From the Pythagorean theorem:

$$E\left[|x|^2\right] = E\left[\left|e_{\perp}\right|^2\right] + E\left[\left|\hat{x}_{\perp}\right|^2\right]$$
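Both properties are easy to check numerically. The toy observation model below (two noisy, real-valued looks at x) is an assumption used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500_000
x = rng.standard_normal(N)                          # random variable to estimate
Y = np.vstack([x + rng.standard_normal(N),          # y0: assumed noisy observation
               0.5 * x + rng.standard_normal(N)])   # y1: assumed noisy observation

Ry = Y @ Y.T / N               # sample autocorrelation E[y y^T] (real case)
rxy = Y @ x / N                # sample cross-correlation E[x y]
a = np.linalg.solve(Ry, rxy)   # normal equations from the orthogonality principle

x_hat = a @ Y                  # MMSE estimate (up to sampling error)
e = x_hat - x
print(np.mean(e * Y, axis=1))  # both entries near 0: error orthogonal to observations
print(np.mean(x**2), np.mean(e**2) + np.mean(x_hat**2))  # Pythagorean identity
```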
Biased and unbiased linear estimates
Biased and unbiased linear estimates (1/3)
• A linear estimate $\hat{x}$ of x is said to be an unbiased estimate if:

$$E\left[\hat{x} \mid x\right] = x$$

• If this condition is not met, the estimate is considered biased. The estimate can often be written as:

$$\hat{x} = \alpha x + \tau$$

• Here, τ is a random variable related to channel noise and interfering symbols. We usually assume:
 τ has zero mean: $E[\tau] = 0$
 τ is statistically independent of x
Biased and unbiased linear estimates (2/3)
• Using the assumptions about τ, we can write:

$$E\left[\hat{x} \mid x\right] = \alpha x$$

• If the estimate is biased, the bias can be removed by dividing by α:

$$\hat{x}_{unb} = \frac{\hat{x}}{\alpha} = x + \frac{\tau}{\alpha}$$

• We define the SNRs of the biased and unbiased estimates:
 Biased SNR: $\beta_{biased} = E[|x|^2]\, /\, E[|\hat{x} - x|^2]$
 Unbiased SNR: $\beta = E[|x|^2]\, /\, E[|\hat{x}_{unb} - x|^2]$
Biased and unbiased linear estimates (3/3)
• The MMSE estimate can be expressed as:

$$\hat{x}_{\perp} = \alpha x + \tau$$

• The relationship between the biased SNR $\beta_{biased}$ and the unbiased SNR β is given by:

$$\beta_{biased} = \beta + 1$$

• Moreover, the constant α is real, satisfying 0 < α ≤ 1, and the biased and unbiased SNRs can be respectively expressed as:

$$\beta_{biased} = \frac{1}{1 - \alpha}, \qquad \beta = \frac{\alpha}{1 - \alpha}$$
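For the scalar MMSE estimate $\hat{x}_{\perp} = \alpha y$ with $y = x + q$ and $\alpha = \sigma_x^2/(\sigma_x^2 + \sigma_q^2)$, these relations can be verified numerically; the powers below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1_000_000
sx2, sq2 = 1.0, 0.5                      # assumed signal and noise powers
x = np.sqrt(sx2) * rng.standard_normal(N)
y = x + np.sqrt(sq2) * rng.standard_normal(N)

alpha = sx2 / (sx2 + sq2)                # real, with 0 < alpha <= 1
x_hat = alpha * y                        # scalar MMSE estimate: alpha*x + tau

beta_biased = np.mean(x**2) / np.mean((x_hat - x)**2)   # about 1/(1 - alpha)
beta = np.mean(x**2) / np.mean((x_hat / alpha - x)**2)  # about alpha/(1 - alpha)
print(beta_biased, beta + 1)             # the two agree: beta_biased = beta + 1
```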
Estimation of multiple random variables
Estimation of multiple random variables (1/3)
• The orthogonality principle can be applied to estimate M random variables $x_0, \ldots, x_{M-1}$ from K observed samples $y_0, \ldots, y_{K-1}$. A linear estimate of $x_i$ is given by:

$$\hat{x}_i = \sum_{j=0}^{K-1} a_{ij}\, y_j$$

• Defining vectors:

$$\mathbf{x} = [x_0, x_1, \ldots, x_{M-1}]^T, \qquad \mathbf{y} = [y_0, y_1, \ldots, y_{K-1}]^T$$
Estimation of multiple random variables (2/3)
• The problem becomes estimating x from y:

$$\hat{\mathbf{x}} = \mathbf{A}\,\mathbf{y}$$

where A is the M × K matrix of coefficients $a_{ij}$.

• Define the error vector:

$$\mathbf{e} = \hat{\mathbf{x}} - \mathbf{x}$$

• Define the k-th error component:

$$e_k = \hat{x}_k - x_k$$
Estimation of multiple random variables (3/3)
• We aim to find A such that $E[\|\mathbf{e}\|^2]$ is minimized, which is equivalent to minimizing $E[|e_k|^2]$ for each k.

• By the orthogonality principle, A is optimal in the MMSE sense if and only if:

$$E\left[e_i\, y_j^*\right] = 0 \quad \text{for } 0 \le i \le M-1,\; 0 \le j \le K-1$$

• The MMSE estimate is such that:

$$E\left[\mathbf{e}_{\perp}\,\mathbf{y}^{\dagger}\right] = \mathbf{0} \quad \text{and} \quad E\left[\mathbf{e}_{\perp}\,\hat{\mathbf{x}}_{\perp}^{\dagger}\right] = \mathbf{0}$$
MMSE equalizers
MMSE equalizers (1/10)
• An equalizer is called an MMSE equalizer if it minimizes the quantity:

$$E\left[|e(n)|^2\right] = E\left[\left|\hat{x}(n) - x(n - n_0)\right|^2\right]$$

• This minimizes the error between the transmitted signal and the equalized received signal.
MMSE equalizers (2/10)
• For an $L_c$-th order channel with noise q(n), the received signal is:

$$y(n) = \sum_{k=0}^{L_c} c(k)\, x(n-k) + q(n)$$

• The transmitted signal x(n) is a zero-mean WSS random process, and q(n) is a zero-mean WSS random process uncorrelated with x(n).
MMSE equalizers (3/10)
• The output of an $L_a$-th order equalizer is:

$$\hat{x}(n) = \sum_{k=0}^{L_a} a(k)\, y(n-k)$$

• Define the output error:

$$e(n) = \hat{x}(n) - x(n - n_0)$$
MMSE equalizers (4/10)
• To minimize the MSE, use the orthogonality principle, which states that the error should be orthogonal to the received signal y(n):

$$E\left[e(n)\, y^*(n-k)\right] = 0, \quad \text{for } 0 \le k \le L_a$$

• We have $(L_a + 1)$ equations, so we can solve for the $(L_a + 1)$ unknowns $a(0), \ldots, a(L_a)$.
MMSE equalizers (5/10)
• A matrix approach to solving this problem is given below. Define the following vectors:

$$\mathbf{a} = \begin{bmatrix} a(0) \\ a(1) \\ \vdots \\ a(L_a) \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y(n) \\ y(n-1) \\ \vdots \\ y(n - L_a) \end{bmatrix}, \quad \mathbf{q} = \begin{bmatrix} q(n) \\ q(n-1) \\ \vdots \\ q(n - L_a) \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n - L) \end{bmatrix}$$

where $L = L_a + L_c$. Then the error is

$$e(n) = \mathbf{y}^T\mathbf{a} - x(n - n_0)$$
MMSE equalizers (6/10)
• The orthogonality condition becomes:

$$E\left[\mathbf{y}^*\left(\mathbf{y}^T\mathbf{a} - x(n - n_0)\right)\right] = \mathbf{0}$$

• Autocorrelation matrices are defined as $\mathbf{R}_y = E[\mathbf{y}\mathbf{y}^{\dagger}]$, $\mathbf{R}_x = E[\mathbf{x}\mathbf{x}^{\dagger}]$, and $\mathbf{R}_q = E[\mathbf{q}\mathbf{q}^{\dagger}]$, each representing the correlation of a signal vector with itself.

• Cross-correlation vector:

$$\mathbf{r}_{xy}(n_0) = E\left[x(n - n_0)\, \mathbf{y}^*\right]$$
MMSE equalizers (7/10)
• Then the MMSE equalizer is given by:

$$\mathbf{a}_{\perp} = \left[\mathbf{R}_y^*\right]^{-1} \mathbf{r}_{xy}(n_0)$$

• Express y in terms of the channel taps:

$$\mathbf{y} = \mathbf{C}_{low}^T\, \mathbf{x} + \mathbf{q}$$

• Autocorrelation matrix of the received signal:

$$\mathbf{R}_y = \mathbf{C}_{low}^T\, \mathbf{R}_x\, \mathbf{C}_{low}^* + \mathbf{R}_q$$
MMSE equalizers (8/10)
• Cross-correlation vector between x(n) and y(n):

$$\mathbf{r}_{xy}(n_0) = \mathbf{C}_{low}^{\dagger}\, \mathbf{R}_x^*\, \mathbf{1}_{n_0}$$

• Substituting into the expression for $\mathbf{a}_{\perp}$ gives the MMSE equalizer:

$$\mathbf{a}_{\perp} = \left[\mathbf{C}_{low}^{\dagger}\mathbf{R}_x^*\mathbf{C}_{low} + \mathbf{R}_q^*\right]^{-1} \mathbf{C}_{low}^{\dagger}\, \mathbf{R}_x^*\, \mathbf{1}_{n_0}$$
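For the common special case of white x(n) and white q(n), so that $\mathbf{R}_x = \varepsilon_x\mathbf{I}$ and $\mathbf{R}_q = \sigma_q^2\mathbf{I}$, the closed form above reduces to a single linear solve. A sketch, with assumed taps, order, powers, and delay:

```python
import numpy as np
from scipy.linalg import toeplitz

def mmse_equalizer(c, La, n0, eps_x=1.0, sigma_q2=0.1):
    """MMSE FIR equalizer assuming white x(n) and q(n): R_x = eps_x*I, R_q = sigma_q2*I."""
    Lc = len(c) - 1
    L = Lc + La
    C_low = toeplitz(np.r_[c, np.zeros(L - Lc)], np.zeros(La + 1))
    # a_perp = [C† R_x* C + R_q*]^{-1} C† R_x* 1_{n0}
    R = eps_x * (C_low.conj().T @ C_low) + sigma_q2 * np.eye(La + 1)
    rhs = eps_x * C_low.conj().T[:, n0]   # C† R_x* 1_{n0} = eps_x * (n0-th column of C†)
    return np.linalg.solve(R, rhs)

a_perp = mmse_equalizer(np.array([1.0, 0.95]), La=10, n0=5)  # assumed parameter values
```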
MMSE equalizers (9/10)
• The output signal of the MMSE equalizer is:

$$\hat{x}_{\perp}(n) = \mathbf{y}^T\mathbf{a}_{\perp} = \mathbf{x}^T\mathbf{C}_{low}\,\mathbf{a}_{\perp} + \mathbf{q}^T\mathbf{a}_{\perp}$$

• The minimized MSE is:

$$\varepsilon_{\perp}(n_0) = E\left[\left|e_{\perp}(n)\right|^2\right]$$
MMSE equalizers (10/10)
• Define the matrix:

$$\mathbf{B} = \mathbf{C}_{low}^*\,\mathbf{R}_y^{-1}\,\mathbf{C}_{low}^T$$

• Minimized mean squared error:

$$\varepsilon_{\perp}(n_0) = \left[\mathbf{R}_x - \mathbf{R}_x\,\mathbf{B}\,\mathbf{R}_x\right]_{n_0,\, n_0}$$
MIMO Frequency-Nonselective Channels
MIMO Frequency-Nonselective Channels (1/3)
• Transmission scheme: the transmitted signal x is an M × 1 vector, and the received signal y is a K × 1 vector:

$$\mathbf{y} = \mathbf{C}\,\mathbf{x} + \mathbf{q}$$

• The equalizer is an M × K matrix A.

• The equalizer output:

$$\hat{\mathbf{x}} = \mathbf{A}\,\mathbf{y}$$
MIMO Frequency-Nonselective Channels (2/3)
• Output error vector:

$$\mathbf{e} = \hat{\mathbf{x}} - \mathbf{x}$$

• By the orthogonality principle, the MMSE equalizer satisfies:

$$E\left[\mathbf{e}\,\mathbf{y}^{\dagger}\right] = \mathbf{0}$$

• Substituting the expressions above, one can show that the MMSE equalizer is given by

$$\mathbf{A}_{\perp} = \mathbf{R}_{xy}\,\mathbf{R}_y^{-1}, \quad \text{where } \mathbf{R}_{xy} = E\left[\mathbf{x}\,\mathbf{y}^{\dagger}\right] \text{ and } \mathbf{R}_y = E\left[\mathbf{y}\,\mathbf{y}^{\dagger}\right]$$
MIMO Frequency-Nonselective Channels (3/3)
• Expressing the correlation matrices in terms of C (with x and q uncorrelated), $\mathbf{R}_{xy} = \mathbf{R}_x\mathbf{C}^{\dagger}$ and $\mathbf{R}_y = \mathbf{C}\mathbf{R}_x\mathbf{C}^{\dagger} + \mathbf{R}_q$, so:

$$\mathbf{A}_{\perp} = \mathbf{R}_x\,\mathbf{C}^{\dagger}\left(\mathbf{C}\,\mathbf{R}_x\,\mathbf{C}^{\dagger} + \mathbf{R}_q\right)^{-1}$$

• It can be verified that the autocorrelation matrix of the corresponding output error vector $\mathbf{e}_{\perp}$ is given by

$$\mathbf{R}_{e_{\perp}} = E\left[\mathbf{e}_{\perp}\,\mathbf{e}_{\perp}^{\dagger}\right] = \left(\mathbf{R}_x^{-1} + \mathbf{C}^{\dagger}\,\mathbf{R}_q^{-1}\,\mathbf{C}\right)^{-1}$$

• The minimized mean squared error for the k-th input signal $x_k$:

$$E\left[\left|e_{k,\perp}\right|^2\right] = \left[\mathbf{R}_{e_{\perp}}\right]_{kk}$$
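These two closed forms translate directly into a few matrix products. In the sketch below, the dimensions, channel matrix, and covariances are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
M, K = 2, 4                              # assumed dimensions
C = rng.standard_normal((K, M))          # assumed K x M channel matrix
Rx = np.eye(M)                           # assumed unit-power, uncorrelated inputs
Rq = 0.05 * np.eye(K)                    # assumed noise autocorrelation

# MMSE equalizer A_perp = R_x C† (C R_x C† + R_q)^{-1}
A_perp = Rx @ C.conj().T @ np.linalg.inv(C @ Rx @ C.conj().T + Rq)

# Error autocorrelation R_e = (R_x^{-1} + C† R_q^{-1} C)^{-1};
# its k-th diagonal entry is the minimized MSE for the k-th input signal
Re = np.linalg.inv(np.linalg.inv(Rx) + C.conj().T @ np.linalg.inv(Rq) @ C)
print(np.diag(Re))
```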

Example
Example (1/5)
• Channel model and assumptions:

$$C(z) = 1 + 0.95\, z^{-1}$$

• Assume x(n) and q(n) are zero-mean, white, and independent.

• Signal power $\varepsilon_x$ and noise power $\sigma_q^2$ are fixed.
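A sketch of the computation behind the curves in this example: for the channel above, sweep the equalizer order and evaluate the minimized MSE $\varepsilon_{\perp}(n_0)$ over all delays. The powers below are assumptions (chosen so that SNR = 10/3, the value quoted earlier):

```python
import numpy as np
from scipy.linalg import toeplitz

c = np.array([1.0, 0.95])     # the example channel C(z) = 1 + 0.95 z^{-1}
eps_x, sigma_q2 = 1.0, 0.3    # assumed powers, giving SNR = 10/3 as quoted earlier

for La in (4, 8, 16, 32):     # sweep the equalizer order
    L = La + 1                # L = La + Lc with Lc = 1
    C_low = toeplitz(np.r_[c, np.zeros(L - 1)], np.zeros(La + 1))
    R = eps_x * C_low.T @ C_low + sigma_q2 * np.eye(La + 1)
    # minimized MSE per delay: eps_perp(n0) = eps_x - eps_x^2 r^T R^{-1} r,
    # where r is the n0-th column of C_low^T (white, real-valued signals assumed)
    best = min(eps_x - eps_x**2 * C_low.T[:, n0] @ np.linalg.solve(R, C_low.T[:, n0])
               for n0 in range(L + 1))
    print("La =", La, " min MSE =", best)
```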
Example (2/5)
• Figure: variances of $e(n)$, $e_{sig}(n)$, and $e_q(n)$ versus the equalizer order $L_a$.

• Comparison with the least-squares equalizer.

• No noise amplification for the MMSE equalizer.
Example (3/5)
• Figure: unbiased SNRs of IIR zero-forcing, least-squares, and MMSE equalizers.
Example (4/5)
• The MMSE equalizer effectively avoids noise amplification.
Example (5/5)
Channel-Shortening equalizers
Channel-Shortening equalizers (1/10)
• Example 3.8 demonstrates that the performance of equalizers depends strongly on the location of the channel zeros. When the channel has a zero on the unit circle, the zero-forcing equalizer is unstable, and the performance of the MMSE equalizer becomes unsatisfactory.

• To alleviate this problem, we use transmission schemes that insert redundant samples (guard intervals) into the transmitted signal. The number of redundant samples required is usually equal to the channel order $L_c$.
Channel-Shortening equalizers (2/10)
• For digital subscriber loops (DSL), having as many redundant samples as the channel order can be impractical. Hence, we use an equalizer at the receiver to shorten the channel. This is called a Time Domain Equalizer (TEQ).

• The received signal is:

$$y(n) = (c * x)(n) + q(n)$$
Channel-Shortening equalizers (3/10)
• The received signal is:

$$y(n) = (c * x)(n) + q(n)$$

where q(n) is the channel noise, uncorrelated with x(n). Let the TEQ a(n) be an $L_a$-th order FIR filter. Its output is

$$\hat{x}(n) = (a * c * x)(n) + (a * q)(n)$$
Channel-Shortening equalizers (4/10)
• The effective impulse response is given by:

$$t(n) = (a * c)(n) = \sum_{k=0}^{L_a} a(k)\, c(n-k)$$

• The goal is to design a(n) so that most of the energy of t(n) lies in a prescribed window of length ν + 1.
Channel-Shortening equalizers (5/10)
• Rewrite the equalizer output as:

$$\hat{x}(n) = \underbrace{\sum_{k=n_0}^{n_0+\nu} t(k)\, x(n-k)}_{\hat{x}_d(n)} + \underbrace{\sum_{k \notin [n_0,\, n_0+\nu]} t(k)\, x(n-k)}_{\hat{x}_u(n)} + \underbrace{(a * q)(n)}_{q_o(n)}$$
Channel-Shortening equalizers (6/10)
• The three quantities $\hat{x}_d(n)$, $\hat{x}_u(n)$, and $q_o(n)$ are, respectively, the desired signal due to the impulse response within the window, the undesired signal (ISI term) due to the impulse response outside the window, and the noise term. The ISI power is:

$$P_{isi} = \varepsilon_x \sum_{k \notin [n_0,\, n_0+\nu]} |t(k)|^2$$

where $\varepsilon_x$ is the signal power.
Channel-Shortening equalizers (7/10)
• Recall the relation $\mathbf{t} = \mathbf{C}_{low}\,\mathbf{a}$, where $\mathbf{C}_{low}$ is the lower triangular Toeplitz matrix defined earlier.

• We define an $(L+1) \times (L+1)$ diagonal matrix $\mathbf{D}_{n_0}$ whose diagonal entries are:

$$\left[\mathbf{D}_{n_0}\right]_{ii} = \begin{cases} 1, & n_0 \le i \le n_0 + \nu; \\ 0, & \text{otherwise.} \end{cases}$$
Channel-Shortening equalizers (8/10)
• Then the coefficients that lie inside and outside the window are, respectively, given by:

$$\mathbf{D}_{n_0}\,\mathbf{t} = \mathbf{D}_{n_0}\,\mathbf{C}_{low}\,\mathbf{a}$$

$$\left(\mathbf{I} - \mathbf{D}_{n_0}\right)\mathbf{t} = \left(\mathbf{I} - \mathbf{D}_{n_0}\right)\mathbf{C}_{low}\,\mathbf{a}$$
Channel-Shortening equalizers (9/10)
• Using $\mathbf{t} = \mathbf{C}_{low}\,\mathbf{a}$, the desired-signal, ISI, and noise powers can be written in terms of the TEQ coefficients a(n):

$$P_d = \varepsilon_x\, \mathbf{a}^{\dagger}\mathbf{C}_{low}^{\dagger}\,\mathbf{D}_{n_0}\,\mathbf{C}_{low}\,\mathbf{a}$$

$$P_{isi} = \varepsilon_x\, \mathbf{a}^{\dagger}\mathbf{C}_{low}^{\dagger}\left(\mathbf{I} - \mathbf{D}_{n_0}\right)\mathbf{C}_{low}\,\mathbf{a}$$

$$P_q = \mathbf{a}^{\dagger}\,\mathbf{R}_q\,\mathbf{a}$$
Channel-Shortening equalizers (10/10)
• Maximization of the signal-to-interference ratio (SIR):

$$SIR = \frac{P_d}{P_{isi}}$$

• Maximization of the signal-to-interference-plus-noise ratio (SINR):

$$SINR = \frac{P_d}{P_{isi} + P_q}$$

• Since $P_d$, $P_{isi}$, and $P_q$ are all quadratic forms in a, both ratios are generalized Rayleigh quotients, maximized by a generalized eigenvector (a sketch follows).
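A sketch of SINR-maximizing TEQ design via the generalized eigenproblem, assuming white noise ($\mathbf{R}_q = \sigma_q^2\mathbf{I}$) and placeholder channel/window values:

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

def teq_max_sinr(c, La, n0, nu, eps_x=1.0, sigma_q2=0.01):
    """TEQ maximizing SINR = P_d / (P_isi + P_q) over the window [n0, n0+nu]."""
    Lc = len(c) - 1
    L = Lc + La
    C_low = toeplitz(np.r_[c, np.zeros(L - Lc)], np.zeros(La + 1))
    d = np.zeros(L + 1)
    d[n0:n0 + nu + 1] = 1.0
    D = np.diag(d)                                  # window selector D_{n0}
    B = eps_x * C_low.T @ D @ C_low                 # a† B a = P_d
    A = (eps_x * C_low.T @ (np.eye(L + 1) - D) @ C_low
         + sigma_q2 * np.eye(La + 1))               # a† A a = P_isi + P_q (white noise)
    w, V = eigh(B, A)              # generalized eigenproblem B v = w A v
    return V[:, -1]                # eigenvector of the largest eigenvalue maximizes SINR

a_teq = teq_max_sinr(np.array([1.0, 0.8, 0.5, 0.3]), La=8, n0=2, nu=2)  # assumed values
```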
END
