
Basic Problem

Problem: Suppose that we have a set of M finite-energy signals S = {s_1(t), s_2(t), ..., s_M(t)}, where each signal has a duration of T seconds.

Every T seconds one of the waveforms from the set S is selected for transmission over an AWGN channel. The transmitted waveform is

    x(t) = \sum_n s_n(t - nT)

The received noise-corrupted waveform is

    r(t) = \sum_n s_n(t - nT) + n(t)

By observing r(t) we wish to determine the time sequence of waveforms {s_n(t)} that was transmitted. That is, in each T-second interval, we must determine which s_i(t) \in S was transmitted.
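A short NumPy sketch can illustrate this setup by stringing together randomly chosen waveforms and adding white Gaussian noise. The symbol duration, sample rate, two-pulse signal set, and noise level below are assumptions chosen only for the illustration.

```python
import numpy as np

# Assumed parameters for the sketch
T = 1.0                   # symbol duration in seconds
fs = 100                  # samples per second
t = np.arange(0, T, 1 / fs)

# A toy signal set S = {s_1(t), s_2(t)}: two rectangular pulses of opposite sign
S = [np.ones_like(t), -np.ones_like(t)]

num_symbols = 5
rng = np.random.default_rng(0)
indices = rng.integers(0, len(S), size=num_symbols)   # which s_i(t) is sent in each interval

# x(t) = sum_n s_n(t - nT): concatenate one waveform per T-second interval
x = np.concatenate([S[i] for i in indices])

# r(t) = x(t) + n(t): add white Gaussian noise (AWGN channel)
noise_std = 0.5           # assumed noise level
r = x + rng.normal(0.0, noise_std, size=x.shape)

print("transmitted indices:", indices)
print("first few received samples:", r[:5])
```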
Orthogonal Expansions

Consider a real-valued signal s(t) with finite energy E_s,

    E_s = \int_{-\infty}^{\infty} s^2(t)\,dt

Suppose there exists a set of orthonormal functions {f_n(t)}, n = 1, ..., N. By orthonormal we mean

    \int_{-\infty}^{\infty} f_n(t) f_k(t)\,dt = \delta_{kn}, \qquad
    \delta_{kn} = \begin{cases} 1, & k = n \\ 0, & k \neq n \end{cases}

We now approximate s(t) as the weighted linear sum

    \hat{s}(t) = \sum_{k=1}^{N} s_k f_k(t)

and wish to determine the s_k, k = 1, ..., N, that minimize the squared error

    \epsilon = \int_{-\infty}^{\infty} \left( s(t) - \hat{s}(t) \right)^2 dt
             = \int_{-\infty}^{\infty} \left( s(t) - \sum_{k=1}^{N} s_k f_k(t) \right)^2 dt
Orthogonal Expansions

To minimize the mean square error, we take the partial derivative with respect to each of the s_k and set it equal to zero; i.e., for the nth term we solve

    \frac{\partial \epsilon}{\partial s_n}
      = -2 \int_{-\infty}^{\infty} \left( s(t) - \sum_{k=1}^{N} s_k f_k(t) \right) f_n(t)\,dt = 0.

Using the orthonormal property of the basis functions, s_n = \int_{-\infty}^{\infty} s(t) f_n(t)\,dt and

    \epsilon = \int_{-\infty}^{\infty} \left( s(t) - \sum_{k=1}^{N} s_k f_k(t) \right)^2 dt
      = \int_{-\infty}^{\infty} s^2(t)\,dt
        - 2 \int_{-\infty}^{\infty} s(t) \sum_{k=1}^{N} s_k f_k(t)\,dt
        + \int_{-\infty}^{\infty} \sum_{k=1}^{N} s_k f_k(t) \sum_{\ell=1}^{N} s_\ell f_\ell(t)\,dt
      = \int_{-\infty}^{\infty} s^2(t)\,dt
        - 2 \sum_{k=1}^{N} s_k \int_{-\infty}^{\infty} s(t) f_k(t)\,dt
        + \sum_{k=1}^{N} \sum_{\ell=1}^{N} s_k s_\ell \int_{-\infty}^{\infty} f_k(t) f_\ell(t)\,dt
      = E_s - \sum_{k=1}^{N} s_k^2

For a complete set of basis functions, \epsilon = 0.
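A minimal numerical check of this result, assuming a small orthonormal basis and a test signal chosen only for the sketch: the projection coefficients s_k = \int s(t) f_k(t)\,dt are recovered by Riemann-sum integration, and the residual error agrees with E_s - \sum_k s_k^2.

```python
import numpy as np

# Assumed basis: first two normalized Fourier-style functions on [0, 1]
fs = 10_000
t = np.linspace(0.0, 1.0, fs, endpoint=False)
dt = t[1] - t[0]
f = [np.sqrt(2) * np.sin(2 * np.pi * t),
     np.sqrt(2) * np.cos(2 * np.pi * t)]

# Test signal with a small component outside the span of the basis
s = 3.0 * f[0] - 1.5 * f[1] + 0.2 * np.sin(6 * np.pi * t)

# Projection coefficients s_k = ∫ s(t) f_k(t) dt (Riemann sum)
coeffs = np.array([np.sum(s * fk) * dt for fk in f])

Es = np.sum(s ** 2) * dt                      # signal energy E_s
eps = Es - np.sum(coeffs ** 2)                # ε = E_s - Σ s_k²

s_hat = sum(c * fk for c, fk in zip(coeffs, f))
eps_direct = np.sum((s - s_hat) ** 2) * dt    # direct ∫ (s - ŝ)² dt

print("coefficients:", coeffs)                # ≈ [3.0, -1.5]
print("eps from E_s - sum s_k^2:", eps)
print("eps computed directly   :", eps_direct)  # the two agree
```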
Gram-Schmidt Orthonormalization

Suppose that we have a set of finite-energy real signals {s_i(t)}, i = 1, ..., M. We wish to obtain a complete set of orthonormal basis functions for the signal set. This can be done in two steps.

Step 1: Determine if the set of waveforms is linearly independent. If they are linearly dependent, then there exists a set of coefficients a_1, a_2, ..., a_M, not all zero, such that

    a_1 s_1(t) + a_2 s_2(t) + \cdots + a_M s_M(t) = 0.

Suppose, without loss of generality, that a_M \neq 0. If a_M = 0, then the signal set can be permuted so that a_M \neq 0. Then

    s_M(t) = -\left( \frac{a_1}{a_M} s_1(t) + \frac{a_2}{a_M} s_2(t) + \cdots + \frac{a_{M-1}}{a_M} s_{M-1}(t) \right).

Next consider the reduced signal set {s_i(t)}_{i=1}^{M-1}. If this set of waveforms is linearly dependent, then there exists another set of coefficients {b_i}_{i=1}^{M-1}, not all zero, such that

    b_1 s_1(t) + b_2 s_2(t) + \cdots + b_{M-1} s_{M-1}(t) = 0.
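For sampled waveforms, linear dependence can be tested numerically by stacking the samples into a matrix and comparing its rank with M. The sketch below uses assumed rectangular pulses, with s_4 = s_1 + s_3 deliberately dependent.

```python
import numpy as np

# Numerical linear-independence test for sampled waveforms (sketch)
fs = 300
T = 1.0
t = np.linspace(0.0, T, fs, endpoint=False)

s1 = np.where(t < T / 3, 1.0, 0.0)
s2 = np.where(t < 2 * T / 3, 1.0, 0.0)
s3 = np.where(t >= T / 3, 1.0, 0.0)
s4 = s1 + s3                                 # linearly dependent on s1 and s3

waveforms = np.vstack([s1, s2, s3, s4])      # M x (number of samples)
M = waveforms.shape[0]
rank = np.linalg.matrix_rank(waveforms)

print(f"M = {M}, rank = {rank}")             # rank = 3 < M = 4
print("linearly independent" if rank == M else "linearly dependent")
```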
Gram-Schmidt Orthonormalization

We continue until a set {s_i(t)}_{i=1}^{N} of linearly independent waveforms is obtained. Note that N \le M, with equality if and only if the original set of waveforms {s_i(t)}_{i=1}^{M} is linearly independent.

If N < M, then the set of linearly independent waveforms {s_i(t)}_{i=1}^{N} is not unique, but any one will do.

Step 2: From the set {s_i(t)}_{i=1}^{N}, construct the set of N orthonormal basis functions {f_i(t)}_{i=1}^{N} as follows. First, let

    f_1(t) = \frac{s_1(t)}{\sqrt{E_1}}

where E_1 is the energy in the waveform s_1(t), given by

    E_1 = \int_{0}^{T} s_1^2(t)\,dt

Then

    s_1(t) = \sqrt{E_1}\, f_1(t) = s_{11} f_1(t)

where s_{11} = \sqrt{E_1}.
Gram-Schmidt Orthonormalization

Next, using the waveform s_2(t), we obtain

    s_{21} = \int_{0}^{T} s_2(t) f_1(t)\,dt

along with the intermediate function

    g_2(t) = s_2(t) - s_{21} f_1(t)

Note that g_2(t) is orthogonal to f_1(t).

The second basis function is

    f_2(t) = \frac{g_2(t)}{\sqrt{\int_{0}^{T} g_2^2(t)\,dt}}
           = \frac{s_2(t) - s_{21} f_1(t)}{\sqrt{E_2 - s_{21}^2}}
Gram-Schmidt Orthonormalization

Continuing in the above fashion, we define the ith intermediate function

    g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij} f_j(t)

where

    s_{ij} = \int_{0}^{T} s_i(t) f_j(t)\,dt

The set of functions

    f_i(t) = \frac{g_i(t)}{\sqrt{\int_{0}^{T} g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N

is the required set of complete orthonormal basis functions.
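A minimal sketch of this procedure for sampled waveforms, with integrals approximated by Riemann sums; the tolerance used to drop linearly dependent waveforms is an assumption.

```python
import numpy as np

def gram_schmidt(waveforms, dt, tol=1e-9):
    """Construct sampled orthonormal basis functions from sampled waveforms.

    waveforms : list of 1-D arrays, the sampled s_i(t)
    dt        : sample spacing, used to approximate integrals by Riemann sums
    Returns the list of sampled basis functions f_i(t).
    """
    basis = []
    for s in waveforms:
        # g_i(t) = s_i(t) - sum_j s_ij f_j(t), with s_ij = ∫ s_i(t) f_j(t) dt
        g = s.astype(float).copy()
        for f in basis:
            s_ij = np.sum(s * f) * dt
            g -= s_ij * f
        energy = np.sum(g ** 2) * dt
        if energy > tol:                        # dependent waveforms add no new basis function
            basis.append(g / np.sqrt(energy))   # f_i(t) = g_i(t) / sqrt(∫ g_i²(t) dt)
    return basis
```

With a fine enough sampling grid, the returned functions satisfy \int f_n(t) f_k(t)\,dt \approx \delta_{kn}, and waveforms that are linear combinations of earlier ones are skipped, so the list has length N \le M.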
Gram-Schmidt Orthonormalization

We can now write the signals as weighted linear combinations of the basis functions, i.e.,

    s_1(t) = s_{11} f_1(t)
    s_2(t) = s_{21} f_1(t) + s_{22} f_2(t)
    s_3(t) = s_{31} f_1(t) + s_{32} f_2(t) + s_{33} f_3(t)
        \vdots
    s_N(t) = s_{N1} f_1(t) + \cdots + s_{NN} f_N(t)

For the remaining signals s_i(t), i = N + 1, ..., M, we have

    s_i(t) = \sum_{k=1}^{N} s_{ik} f_k(t)

where

    s_{ik} = \int_{0}^{T} s_i(t) f_k(t)\,dt
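A small sketch of this projection step: given sampled basis functions, the coefficient matrix with entries s_{ik} = \int s_i(t) f_k(t)\,dt is formed by Riemann-sum integration. The two-signal example and the half-interval basis are assumptions made only for the illustration.

```python
import numpy as np

def coefficient_matrix(signals, basis, dt):
    """Return S with S[i, k] = ∫ s_i(t) f_k(t) dt (Riemann-sum approximation)."""
    S = np.empty((len(signals), len(basis)))
    for i, s in enumerate(signals):
        for k, f in enumerate(basis):
            S[i, k] = np.sum(s * f) * dt
    return S

# Assumed orthonormal basis: two half-interval indicators of unit energy on [0, 1)
fs = 1000
t = np.linspace(0.0, 1.0, fs, endpoint=False)
dt = t[1] - t[0]
f1 = np.where(t < 0.5, np.sqrt(2.0), 0.0)
f2 = np.where(t >= 0.5, np.sqrt(2.0), 0.0)

signals = [np.ones_like(t), np.where(t < 0.5, 1.0, -1.0)]
print(coefficient_matrix(signals, [f1, f2], dt))
# ≈ [[ 1/√2,  1/√2],
#    [ 1/√2, -1/√2]]
```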
Signal Vectors

It follows that the signal set s_i(t), i = 1, ..., M, can be expressed in terms of a set of signal vectors \mathbf{s}_i, i = 1, ..., M, in an N-dimensional signal space, i.e.,

    s_1(t) \leftrightarrow \mathbf{s}_1 = (s_{11}, s_{12}, \ldots, s_{1N})
    s_2(t) \leftrightarrow \mathbf{s}_2 = (s_{21}, s_{22}, \ldots, s_{2N})
        \vdots
    s_M(t) \leftrightarrow \mathbf{s}_M = (s_{M1}, s_{M2}, \ldots, s_{MN})
Example

[Figure: the four waveforms s_1(t), s_2(t), s_3(t), s_4(t), each of unit amplitude, plotted on [0, T] with time-axis marks at T/3 and 2T/3. From the computations that follow, s_1(t) is nonzero on [0, T/3], s_2(t) on [0, 2T/3], s_3(t) on [T/3, T], and s_4(t) on [0, T].]
Example

Step 1: This signal set is not linearly independent because

    s_4(t) = s_1(t) + s_3(t)

Therefore, we will use s_1(t), s_2(t), and s_3(t) to obtain the complete orthonormal set of basis functions.

Step 2:

a)

    E_1 = \int_{0}^{T} s_1^2(t)\,dt = T/3

    f_1(t) = \frac{s_1(t)}{\sqrt{E_1}} =
    \begin{cases} \sqrt{3/T}, & 0 \le t \le T/3 \\ 0, & \text{else} \end{cases}
Example

b)

    s_{21} = \int_{0}^{T} s_2(t) f_1(t)\,dt = \int_{0}^{T/3} \sqrt{3/T}\,dt = \sqrt{T/3}

    E_2 = \int_{0}^{T} s_2^2(t)\,dt = 2T/3

    f_2(t) = \frac{s_2(t) - s_{21} f_1(t)}{\sqrt{E_2 - s_{21}^2}} =
    \begin{cases} \sqrt{3/T}, & T/3 \le t \le 2T/3 \\ 0, & \text{else} \end{cases}
Example

c)

    s_{31} = \int_{0}^{T} s_3(t) f_1(t)\,dt = 0

    s_{32} = \int_{0}^{T} s_3(t) f_2(t)\,dt = \int_{T/3}^{2T/3} \sqrt{3/T}\,dt = \sqrt{T/3}

    g_3(t) = s_3(t) - s_{31} f_1(t) - s_{32} f_2(t) =
    \begin{cases} 1, & 2T/3 \le t \le T \\ 0, & \text{else} \end{cases}

    f_3(t) = \frac{g_3(t)}{\sqrt{\int_{0}^{T} g_3^2(t)\,dt}} =
    \begin{cases} \sqrt{3/T}, & 2T/3 \le t \le T \\ 0, & \text{else} \end{cases}
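The worked example can be checked numerically by projecting the four sampled pulses onto the three basis functions found above. Setting T = 3 (an arbitrary assumed value, so that \sqrt{T/3} = 1) makes the expected coefficients easy to read off.

```python
import numpy as np

# Numerical check of the worked example (sketch); Riemann-sum integration
T = 3.0
fs = 3000
t = np.linspace(0.0, T, fs, endpoint=False)
dt = t[1] - t[0]

s1 = np.where(t < T / 3, 1.0, 0.0)
s2 = np.where(t < 2 * T / 3, 1.0, 0.0)
s3 = np.where(t >= T / 3, 1.0, 0.0)
s4 = s1 + s3

# Basis functions found above: sqrt(3/T) on successive thirds of [0, T]
f1 = np.where(t < T / 3, np.sqrt(3 / T), 0.0)
f2 = np.where((t >= T / 3) & (t < 2 * T / 3), np.sqrt(3 / T), 0.0)
f3 = np.where(t >= 2 * T / 3, np.sqrt(3 / T), 0.0)

def project(s, basis):
    return np.array([np.sum(s * f) * dt for f in basis])

for name, s in zip(("s1", "s2", "s3", "s4"), (s1, s2, s3, s4)):
    print(name, "->", np.round(project(s, (f1, f2, f3)), 3))
# Expected (with T = 3, so sqrt(T/3) = 1):
# s1 -> [1, 0, 0], s2 -> [1, 1, 0], s3 -> [0, 1, 1], s4 -> [1, 1, 1]
```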
Example

[Figure: the three basis functions f_1(t), f_2(t), f_3(t), each of amplitude \sqrt{3/T}; f_1(t) is nonzero on [0, T/3], f_2(t) on [T/3, 2T/3], and f_3(t) on [2T/3, T].]
Example

    s_1(t) \leftrightarrow \mathbf{s}_1 = (\sqrt{T/3},\, 0,\, 0)
    s_2(t) \leftrightarrow \mathbf{s}_2 = (\sqrt{T/3},\, \sqrt{T/3},\, 0)
    s_3(t) \leftrightarrow \mathbf{s}_3 = (0,\, \sqrt{T/3},\, \sqrt{T/3})
    s_4(t) \leftrightarrow \mathbf{s}_4 = (\sqrt{T/3},\, \sqrt{T/3},\, \sqrt{T/3})
[Figure: the four signal vectors s_1, s_2, s_3, s_4 plotted in the three-dimensional signal space with axes f_1(t), f_2(t), f_3(t); each coordinate step has length \sqrt{T/3}.]
Properties of Signal Vectors

Signal Energy:

    E = \int_{0}^{T} s^2(t)\,dt
      = \int_{0}^{T} \sum_{k=1}^{N} s_k f_k(t) \sum_{\ell=1}^{N} s_\ell f_\ell(t)\,dt
      = \sum_{k=1}^{N} \sum_{\ell=1}^{N} s_k s_\ell \int_{0}^{T} f_k(t) f_\ell(t)\,dt
      = \sum_{k=1}^{N} s_k^2
      = \|\mathbf{s}\|^2

The energy in s(t) is just the squared length of its signal vector \mathbf{s}.
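A quick numerical confirmation under assumed choices (two orthonormal sinusoids on [0, 1] as the basis, arbitrary coordinates): the waveform energy \int s^2(t)\,dt matches the squared vector length.

```python
import numpy as np

fs = 10_000
t = np.linspace(0.0, 1.0, fs, endpoint=False)
dt = t[1] - t[0]

# Assumed orthonormal basis on [0, 1]
basis = [np.sqrt(2) * np.sin(2 * np.pi * t),
         np.sqrt(2) * np.cos(2 * np.pi * t)]

s_vec = np.array([1.5, -2.0])                     # coordinates in the basis
s = s_vec[0] * basis[0] + s_vec[1] * basis[1]     # the waveform s(t)

E_waveform = np.sum(s ** 2) * dt                  # ∫ s²(t) dt
E_vector = np.sum(s_vec ** 2)                     # ||s||²

print(E_waveform, E_vector)                       # both ≈ 6.25
```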
Properties of Signal Vectors

Signal Correlation: The correlation or similarity between two signals s_j(t) and s_k(t) is

    \rho_{jk} = \frac{1}{\sqrt{E_j E_k}} \int_{0}^{T} s_j(t) s_k(t)\,dt
      = \frac{1}{\sqrt{E_j E_k}} \int_{0}^{T} \sum_{n=1}^{N} s_{jn} f_n(t) \sum_{m=1}^{N} s_{km} f_m(t)\,dt
      = \frac{1}{\sqrt{E_j E_k}} \sum_{n=1}^{N} \sum_{m=1}^{N} s_{jn} s_{km} \int_{0}^{T} f_n(t) f_m(t)\,dt
      = \frac{1}{\sqrt{E_j E_k}} \sum_{n=1}^{N} s_{jn} s_{kn}
      = \frac{\mathbf{s}_j \cdot \mathbf{s}_k}{\|\mathbf{s}_j\| \|\mathbf{s}_k\|}

Note that

    \rho_{jk} = \begin{cases} 0, & \text{if } s_j(t) \text{ and } s_k(t) \text{ are orthogonal} \\ 1, & \text{if } s_j(t) = s_k(t) \end{cases}
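A short sketch comparing the two forms of the correlation: computed directly from the waveforms, and from the coordinate vectors. The signals, the basis, and the 45-degree phase offset are assumptions made only for the illustration.

```python
import numpy as np

fs = 10_000
t = np.linspace(0.0, 1.0, fs, endpoint=False)
dt = t[1] - t[0]

sj = np.sin(2 * np.pi * t)
sk = np.sin(2 * np.pi * t + np.pi / 4)

# Waveform domain: ρ = (1/sqrt(Ej Ek)) ∫ sj(t) sk(t) dt
Ej = np.sum(sj ** 2) * dt
Ek = np.sum(sk ** 2) * dt
rho_waveform = np.sum(sj * sk) * dt / np.sqrt(Ej * Ek)

# Vector domain: coordinates w.r.t. the orthonormal basis {√2 sin 2πt, √2 cos 2πt}
basis = [np.sqrt(2) * np.sin(2 * np.pi * t),
         np.sqrt(2) * np.cos(2 * np.pi * t)]
vj = np.array([np.sum(sj * f) * dt for f in basis])
vk = np.array([np.sum(sk * f) * dt for f in basis])
rho_vector = vj @ vk / (np.linalg.norm(vj) * np.linalg.norm(vk))

print(rho_waveform, rho_vector)     # both ≈ cos(π/4) ≈ 0.707
```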
Properties of Signal Vectors

Euclidean Distance: The Euclidean distance between two signals s_j(t) and s_k(t) is

    d_{jk} = \left[ \int_{0}^{T} \left( s_j(t) - s_k(t) \right)^2 dt \right]^{1/2}
      = \left[ \int_{0}^{T} \left( \sum_{n=1}^{N} s_{jn} f_n(t) - \sum_{m=1}^{N} s_{km} f_m(t) \right)^2 dt \right]^{1/2}
      = \left[ \sum_{n=1}^{N} \left( s_{jn} - s_{kn} \right)^2 \right]^{1/2}
      = \left[ \|\mathbf{s}_j - \mathbf{s}_k\|^2 \right]^{1/2}
      = \|\mathbf{s}_j - \mathbf{s}_k\|
Example

Consider the earlier example, where

    \mathbf{s}_1 = (\sqrt{T/3},\, 0,\, 0)
    \mathbf{s}_2 = (\sqrt{T/3},\, \sqrt{T/3},\, 0)
    \mathbf{s}_3 = (0,\, \sqrt{T/3},\, \sqrt{T/3})

We have E_1 = \|\mathbf{s}_1\|^2 = T/3, E_2 = \|\mathbf{s}_2\|^2 = 2T/3, and E_3 = \|\mathbf{s}_3\|^2 = 2T/3.

The correlation between s_2(t) and s_3(t) is

    \rho_{23} = \frac{\mathbf{s}_2 \cdot \mathbf{s}_3}{\|\mathbf{s}_2\| \|\mathbf{s}_3\|} = \frac{T/3}{2T/3} = 0.5

The Euclidean distance between s_1(t) and s_3(t) is

    d_{13} = \|\mathbf{s}_1 - \mathbf{s}_3\| = \sqrt{T/3 + T/3 + T/3} = \sqrt{T}
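These two values can be checked directly from the signal vectors. T = 3 is an arbitrary assumed value, so \sqrt{T/3} = 1 and \sqrt{T} \approx 1.732.

```python
import numpy as np

T = 3.0
r = np.sqrt(T / 3)

s1 = np.array([r, 0.0, 0.0])
s2 = np.array([r, r, 0.0])
s3 = np.array([0.0, r, r])

rho_23 = s2 @ s3 / (np.linalg.norm(s2) * np.linalg.norm(s3))
d_13 = np.linalg.norm(s1 - s3)

print(rho_23)                  # 0.5
print(d_13, np.sqrt(T))        # both ≈ 1.732
```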