Chapter 1

Signals and Signal Spaces
The goal of this chapter is to give a brief overview of methods for characterizing signals and for describing their properties. We will start with a discussion of signal spaces such as Hilbert spaces, normed and metric spaces. Then, the energy density and correlation function of deterministic signals will be discussed. The remainder of this chapter is dedicated to random signals, which are encountered in almost all areas of signal processing. Here, basic concepts such as stationarity, autocorrelation, and power spectral density will be discussed.
is finite, we call it an energy signal. If the energy is infinite, but the mean power

P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 \, dt

is finite, we call it a power signal.

A signal x(t) belongs to the space L^p(a, b) if the integral

\int_a^b |x(t)|^p \, dt,

to be evaluated in the Lebesgue sense, is finite. If the interval limits a and b are expanded to infinity, we also write L^p(-\infty, \infty) or L^p(\mathbb{R}). According to this classification, energy signals defined on the real axis are elements of the space L^2(\mathbb{R}).
1.1.2 Normed Spaces
When considering normed signal spaces, we understand signals as vectors that are elements of a linear vector space X. The norm of a vector x can somehow be understood as the length of x. The notation for the norm is \|x\|.

Norms must satisfy the following three axioms, where \alpha is an arbitrary real or complex-valued scalar, and 0 is the null vector:

\|x\| \ge 0, \quad \text{with} \quad \|x\| = 0 \iff x = 0,
\|\alpha x\| = |\alpha| \, \|x\|,
\|x + y\| \le \|x\| + \|y\|.    (1.6)
Thus, the signal energy according to (1.1) can also be expressed in the form

E_x = \|x\|_2^2 = \int_{-\infty}^{\infty} |x(t)|^2 \, dt, \qquad x \in L^2(\mathbb{R}).    (1.8)
Norms for Discrete-Time Signals. The spaces \ell^p(n_1, n_2) are the discrete-time equivalent to the spaces L^p(a, b). They are normed as follows:

\|x\|_p = \left[ \sum_{n=n_1}^{n_2} |x(n)|^p \right]^{1/p},    (1.9)

where the limits n_1 = -\infty and/or n_2 = \infty are permitted.
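As a quick illustration (not part of the original text), the \ell^p norms of a finite-length discrete-time signal can be evaluated directly from (1.9); the helper function below is hypothetical.

```python
import numpy as np

def lp_norm(x, p):
    """l^p norm of a finite-length discrete-time signal (illustrative helper)."""
    x = np.asarray(x, dtype=complex)
    if p == np.inf:
        return np.max(np.abs(x))  # limiting case p -> infinity: the supremum norm
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = [3.0, -4.0, 0.0]
print(lp_norm(x, 1))       # 7.0
print(lp_norm(x, 2))       # 5.0  (||x||_2^2 = 25 is the signal energy)
print(lp_norm(x, np.inf))  # 4.0
```

Note that \|x\|_2^2 is exactly the energy of the discrete-time signal, mirroring the continuous-time relation (1.8).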
1.1.3 Metric Spaces
A function that assigns a real number to two elements x and y of a non-empty set X is called a metric on X if it satisfies the following axioms:

d(x, y) \ge 0, \quad \text{with} \quad d(x, y) = 0 \iff x = y,
d(x, y) = d(y, x),
d(x, z) \le d(x, y) + d(y, z).    (1.14)

Thus, d(a, c) \le d(a, b) + d(b, c), which means that also (1.14) is satisfied. □
Nevertheless, we also find metrics which are not associated with a norm. An example is the Hamming distance

d(x, y) = \sum_{k=1}^{n} \left[ (x_k + y_k) \bmod 2 \right],

which, for binary sequences, counts the number of positions in which x and y differ.
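A small sketch (with a hypothetical helper name) of the Hamming distance for binary sequences; it also spot-checks two of the metric axioms numerically.

```python
def hamming_distance(x, y):
    """Number of positions where the binary sequences x and y differ."""
    assert len(x) == len(y)
    return sum((xk + yk) % 2 for xk, yk in zip(x, y))

a = [0, 1, 1, 0, 1]
b = [1, 1, 0, 0, 1]
print(hamming_distance(a, b))  # 2

# spot checks of the metric axioms
assert hamming_distance(a, a) == 0                       # d(x, x) = 0
assert hamming_distance(a, b) == hamming_distance(b, a)  # symmetry
```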
An inner product of two continuous-time signals can be defined as

\langle x, y \rangle = \int_{-\infty}^{\infty} x(t) \, y^*(t) \, dt,    (1.21)
and, for discrete-time signals, as

\langle x, y \rangle = \sum_{n=-\infty}^{\infty} x(n) \, y^*(n).    (1.22)
We will prove this in the following along with the Schwarz inequality, which states

|\langle x, y \rangle| \le \|x\| \, \|y\|.    (1.29)

Equality in (1.29) is given only if x and y are linearly dependent, that is, if one vector is a multiple of the other.
Proof of the Schwarz inequality. The validity of the equality sign in the Schwarz inequality (1.29) for linearly dependent vectors can easily be proved by substituting y = \alpha x. For the general case, observe that for any scalar \alpha
0 \le \langle x + \alpha y, \, x + \alpha y \rangle
  = \langle x, \, x + \alpha y \rangle + \langle \alpha y, \, x + \alpha y \rangle    (1.30)
  = \langle x, x \rangle + \alpha^* \langle x, y \rangle + \alpha \langle y, x \rangle + \alpha \alpha^* \langle y, y \rangle.
Choosing \alpha = -\langle x, y \rangle / \langle y, y \rangle (for y \ne 0), we get

0 \le \langle x, x \rangle - \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle}, \quad \text{that is,} \quad |\langle x, y \rangle|^2 \le \langle x, x \rangle \, \langle y, y \rangle.    (1.32)
Comparing (1.32) with (1.28) and (1.29) confirms the Schwarz inequality. □
Equation (1.28) shows that the inner products given in (1.21) and (1.22)
lead to the norms (1.7) and (1.10).
Finally, let us remark that a linear space with an inner product which is
complete with respect to the induced metric is called a Hilbert space.
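A quick numerical check of the Schwarz inequality (1.29) (an illustration, not from the text), using the inner product with conjugation on the second argument as in (1.21)/(1.22):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
y = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# inner product <x, y> = sum_n x(n) y*(n), conjugation on the second argument
inner = np.sum(x * np.conj(y))
assert abs(inner) <= np.linalg.norm(x) * np.linalg.norm(y)  # Schwarz inequality

# equality holds for linearly dependent vectors, e.g. z = alpha * x
alpha = 2.0 - 1.5j
z = alpha * x
lhs = abs(np.sum(x * np.conj(z)))
rhs = np.linalg.norm(x) * np.linalg.norm(z)
assert np.isclose(lhs, rhs)
```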
1.2 Energy Density and Correlation
1.2.1 Continuous-Time Signals
Let us reconsider (1.1):

E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2 \, d\omega,    (1.34)

where X(\omega) is the Fourier transform of x(t). The integrand on the right-hand side is called the energy density spectrum:

S_{xx}^{E}(\omega) = |X(\omega)|^2.    (1.35)
The energy density spectrum S_{xx}^{E}(\omega) can also be regarded as the Fourier transform of the so-called autocorrelation function

r_{xx}^{E}(\tau) = \int_{-\infty}^{\infty} x^*(t) \, x(t + \tau) \, dt = x^*(-\tau) * x(\tau).    (1.36)

We have

S_{xx}^{E}(\omega) = \int_{-\infty}^{\infty} r_{xx}^{E}(\tau) \, e^{-j\omega\tau} \, d\tau.    (1.37)
The autocorrelation function can be understood as a measure of similarity between a signal and its time-shifted version x_\tau(t) = x(t + \tau):

d(x, x_\tau)^2 = \|x - x_\tau\|^2
  = \langle x, x \rangle - \langle x, x_\tau \rangle - \langle x_\tau, x \rangle + \langle x_\tau, x_\tau \rangle    (1.38)
  = 2\|x\|^2 - 2\,\Re\{\langle x_\tau, x \rangle\}
  = 2\|x\|^2 - 2\,\Re\{r_{xx}^{E}(\tau)\},

since a time shift does not change the norm.
The cross-correlation function of two signals x(t) and y(t) is defined as

r_{xy}^{E}(\tau) = \int_{-\infty}^{\infty} y(t + \tau) \, x^*(t) \, dt,

and its Fourier transform, the cross energy density spectrum, is

S_{xy}^{E}(\omega) = \int_{-\infty}^{\infty} r_{xy}^{E}(\tau) \, e^{-j\omega\tau} \, d\tau.    (1.40)
1.2.2 Discrete-Time Signals

For a discrete-time signal x(n), Parseval's relation states

E_x = \sum_{n=-\infty}^{\infty} |x(n)|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| X(e^{j\omega}) \right|^2 \, d\omega.    (1.43)

The term |X(e^{j\omega})|^2 in (1.43) is called the energy density spectrum of the discrete-time signal. We use the notation

S_{xx}^{E}(e^{j\omega}) = \left| X(e^{j\omega}) \right|^2.    (1.44)

The autocorrelation sequence is defined as

r_{xx}^{E}(m) = \sum_{n=-\infty}^{\infty} x^*(n) \, x(n + m).    (1.45)
3See Section 4.2 for more detail on the discrete-time Fourier transform.
We have

S_{xx}^{E}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xx}^{E}(m) \, e^{-j\omega m},    (1.46)

and, conversely,

r_{xx}^{E}(m) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}^{E}(e^{j\omega}) \, e^{j\omega m} \, d\omega.
Note that the energy density may also be viewed as the product X(z) \, X^*(1/z^*), evaluated on the unit circle (z = e^{j\omega}), where X(z) is the z-transform of x(n).
The definition of the cross-correlation sequence is

r_{xy}^{E}(m) = \sum_{n=-\infty}^{\infty} y(n + m) \, x^*(n).    (1.47)
For the corresponding cross energy density spectrum the following holds:

S_{xy}^{E}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xy}^{E}(m) \, e^{-j\omega m},    (1.48)

that is,

S_{xy}^{E}(e^{j\omega}) = Y(e^{j\omega}) \, X^*(e^{j\omega}).    (1.49)
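The discrete-time relations (1.43), (1.45), and (1.46) are easy to verify numerically. The sketch below (an illustration, not from the text) uses the FFT as a sampled DTFT; the 2N-point grid guarantees that the circular correlation implied by the DFT coincides with the linear autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.standard_normal(N)

# autocorrelation sequence (1.45): r(m) = sum_n x*(n) x(n+m), lags -(N-1)..N-1
r = np.correlate(x, x, mode="full")
assert np.isclose(r[N - 1], np.sum(x * x))  # r(0) equals the signal energy

# Parseval (1.43), sampled: sum |x(n)|^2 = (1/N) sum_k |X_k|^2 for the N-point DFT
X = np.fft.fft(x)
assert np.isclose(np.sum(x**2), np.mean(np.abs(X) ** 2))

# DTFT of r, sampled on a 2N-point grid, equals the energy density |X(e^jw)|^2
S = np.abs(np.fft.fft(x, 2 * N)) ** 2
c = np.zeros(2 * N)
for m in range(-(N - 1), N):
    c[m % (2 * N)] = r[m + N - 1]  # place lag m at DFT index m mod 2N
assert np.allclose(np.fft.fft(c).real, S)
assert np.allclose(np.fft.fft(c).imag, 0.0, atol=1e-8)
```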
1.3 Random Signals
Random signals are encountered in all areas of signal processing. For example, they appear as disturbances in the transmission of signals. Even the transmitted and consequently also the received signals in telecommunications are of random nature, because only random signals carry information. In pattern recognition, the patterns that are to be distinguished are modeled as random processes. In speech, audio, and image coding, the signals to be compressed are modeled as such.
First of all, one distinguishes between random variables and random processes. A random variable is obtained by assigning a real or complex number to each feature m_i from a feature set M. The features (or events) occur randomly. Note that the features themselves may also be non-numeric.

If one assigns a function x_i(t) to each feature m_i, then the totality of all possible functions is called a stochastic process. The features occur randomly whereas the assignment m_i \to x_i(t) is deterministic. A function x_i(t) is called a realization of the stochastic process x(t). See Figure 1.1 for an illustration.
Figure 1.1. Random variables (a) and random processes (b).
Joint Probability Density. The joint probability density p_{x_1 x_2}(\xi_1, \xi_2) of two random variables x_1 and x_2 is given by

p_{x_1 x_2}(\xi_1, \xi_2) = \frac{\partial^2 F_{x_1 x_2}(\xi_1, \xi_2)}{\partial \xi_1 \, \partial \xi_2},

where F_{x_1 x_2}(\xi_1, \xi_2) is the joint distribution function.
m_x = E\{x\} = \int_{-\infty}^{\infty} \xi \, p_x(\xi) \, d\xi.

The characteristic function of a random variable x is defined as

\Phi_x(s) = E\{e^{jsx}\} = \int_{-\infty}^{\infty} p_x(\xi) \, e^{js\xi} \, d\xi,    (1.63)

which means that, apart from the sign of the argument, it is the Fourier transform of the pdf. According to the moment theorem of the Fourier transform (see Section 2.2), the moments of the random variable can also be computed from the characteristic function as

E\{x^n\} = j^{-n} \left. \frac{d^n \Phi_x(s)}{ds^n} \right|_{s=0}.    (1.64)
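As a numerical sketch of the moment theorem (an illustration, not from the text): for x uniformly distributed on [-1, 1], the characteristic function is known in closed form as \sin(s)/s, and a central difference of it at s = 0 recovers the second moment E\{x^2\} = 1/3.

```python
import numpy as np

# characteristic function of x ~ uniform(-1, 1): Phi(s) = sin(s)/s
def phi(s):
    return np.sinc(s / np.pi)  # np.sinc(u) = sin(pi*u)/(pi*u)

# second moment via the moment theorem: E{x^2} = j^{-2} Phi''(0) = -Phi''(0)
h = 1e-3
phi_dd = (phi(h) - 2.0 * phi(0.0) + phi(-h)) / h**2  # central difference
Ex2 = -phi_dd
print(Ex2)  # close to 1/3
assert abs(Ex2 - 1.0 / 3.0) < 1e-5  # exact value: integral of x^2 / 2 over [-1, 1]
```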
Conversely, the autocorrelation function is obtained from the power spectral density as

r_{xx}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) \, e^{j\omega\tau} \, d\omega.    (1.75)

It can further be shown that

S_{xx}(\omega) = \lim_{T \to \infty} \frac{1}{T} \, E\left\{ |X_T(\omega)|^2 \right\}    (1.76)

with

X_T(\omega) \; \leftrightarrow \; x(t) \, \mathrm{rect}\!\left(\frac{t}{T}\right)

and

\mathrm{rect}(t) = \begin{cases} 1, & |t| \le 0.5, \\ 0, & \text{otherwise}, \end{cases}

is identical to the power spectral density given in (1.74).
Taking (1.75) for \tau = 0, we obtain

S_x^2 = r_{xx}(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) \, d\omega.    (1.77)
The Fourier transform of r_{xy}(\tau) is the cross power spectral density, denoted as S_{xy}(\omega). Thus, we have the correspondence

r_{xy}(\tau) \;\leftrightarrow\; S_{xy}(\omega).    (1.79)
For discrete-time random processes, the autocorrelation sequence and the power spectral density form a corresponding transform pair:

r_{xx}(m) = E\{x^*(n) \, x(n+m)\},    (1.82)

S_{xx}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xx}(m) \, e^{-j\omega m},    (1.83)

r_{xx}(m) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}(e^{j\omega}) \, e^{j\omega m} \, d\omega.    (1.84)
The correlation properties of finite blocks of samples are conveniently described by correlation matrices:

R_{xx} = E\{\mathbf{x} \, \mathbf{x}^H\}, \qquad R_{xy} = E\{\mathbf{y} \, \mathbf{x}^H\},    (1.89)

where

\mathbf{x} = [x(n), \; x(n+1), \; \ldots, \; x(n + N_x - 1)]^T,
\mathbf{y} = [y(n), \; y(n+1), \; \ldots, \; y(n + N_y - 1)]^T.    (1.90)
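In practice, the expectation in (1.89) is replaced by a sample average over observed signal vectors. A minimal sketch (not from the text), assuming a zero-mean white process so that the true R_xx is the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
N_x, num_samples = 4, 100_000

# vectors x = [x(n), x(n+1), ..., x(n+N_x-1)]^T drawn from unit-variance white
# noise, so the true correlation matrix E{x x^H} is the identity
data = rng.standard_normal((num_samples, N_x))

# sample estimate of R_xx = E{x x^H}: average of outer products, vectorized
R_xx = (data.T @ data.conj()) / num_samples
print(np.round(R_xx, 2))  # approximately the 4x4 identity matrix
assert np.allclose(R_xx, np.eye(N_x), atol=0.05)
```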
If the coefficients

a_i = \int_a^b \varphi_i(t) \, x(t) \, dt    (1.95)

of an expansion of a Gaussian process x(t) with respect to arbitrary orthonormal basis functions \varphi_i(t) satisfy

E\{a_i^2\} = \sigma^2 \quad \forall \, i,    (1.96)

we call x(t) a Gaussian white noise process.
When a wide-sense stationary process x(t) is the input to a linear time-invariant system with impulse response h(t), the cross-correlation between input and output y(t) = x(t) * h(t) is

r_{xy}(\tau) = \int_{-\infty}^{\infty} E\{ x^*(t) \, x(t + \tau - \lambda) \} \, h(\lambda) \, d\lambda = r_{xx}(\tau) * h(\tau).    (1.101)
The cross power spectral density is obtained by taking the Fourier trans-
form of (1.101):
S_{xy}(\omega) = S_{xx}(\omega) \, H(\omega).    (1.102)
For the autocorrelation function of the output we obtain

r_{yy}(\tau) = \int\!\!\int E\{ x^*(t - \alpha) \, x(t + \tau - \beta) \} \, h^*(\alpha) \, h(\beta) \, d\alpha \, d\beta.    (1.103)
S_{xy}(e^{j\omega}) = S_{xx}(e^{j\omega}) \, H(e^{j\omega}).    (1.107)
For the autocorrelation sequence and the power spectral density at the output we get

r_{yy}(m) = r_{xx}(m) * h(m) * h^*(-m), \qquad S_{yy}(e^{j\omega}) = S_{xx}(e^{j\omega}) \, \left| H(e^{j\omega}) \right|^2.    (1.108)
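These input-output relations can be checked by simulation. A minimal sketch (not from the text), assuming unit-variance white noise and a short hypothetical FIR filter h(n): for white input, the output autocorrelation reduces to the deterministic autocorrelation of the impulse response.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 400_000
x = rng.standard_normal(n_samples)  # unit-variance white noise, flat spectrum
h = np.array([1.0, 0.5, 0.25])      # hypothetical FIR filter h(n)
y = np.convolve(x, h, mode="full")[:n_samples]

# for white input, r_yy(m) = h(m) * h*(-m): the autocorrelation of h itself
r_theory = np.correlate(h, h, mode="full")  # lags -(L-1)..L-1

# estimate r_yy(m) = E{y*(n) y(n+m)} for the same lags by time averaging
L = len(h)
lags = range(-(L - 1), L)
r_est = np.array([np.mean(y[: n_samples - abs(m)] * y[abs(m):]) for m in lags])

print(np.round(r_est, 2))  # close to r_theory = [0.25, 0.625, 1.3125, 0.625, 0.25]
assert np.allclose(r_est, r_theory, atol=0.03)
```

The symmetry r_yy(-m) = r_yy(m) of the real stationary output is exploited by indexing with abs(m) in the estimator.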