
Signal Analysis: Wavelets, Filter Banks, Time-Frequency Transforms and
Applications. Alfred Mertins

Copyright © 1999 John Wiley & Sons Ltd
Print ISBN 0-471-98626-7, Electronic ISBN 0-470-84183-4

Chapter 1

Signals and Signal Spaces

The goal of this chapter is to give a brief overview of methods for characterizing
signals and for describing their properties. We will start with a discussion of
signal spaces such as Hilbert spaces, normed spaces, and metric spaces. Then,
the energy density and correlation function of deterministic signals will be
discussed. The remainder of this chapter is dedicated to random signals, which
are encountered in almost all areas of signal processing. Here, basic concepts
such as stationarity, autocorrelation, and power spectral density will be
discussed.

1.1 Signal Spaces


1.1.1 Energy and Power Signals

Let us consider a deterministic continuous-time signal x(t), which may be real
or complex-valued. If the energy of the signal, defined by

    E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt,                                    (1.1)

is finite, we call it an energy signal. If the energy is infinite, but the mean
power

    P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 \, dt           (1.2)

is finite, we call x(t) a power signal. Most signals encountered in technical
applications belong to these two classes.

A second important classification of signals is their assignment to the signal
spaces L_p(a, b), where a and b are the interval limits within which the signal
is considered. By L_p(a, b) with 1 \le p < \infty we understand that class of signals
x for which the integral

    \int_a^b |x(t)|^p \, dt,

to be evaluated in the Lebesgue sense, is finite. If the interval limits a and b
are expanded to infinity, we also write L_p(-\infty, \infty) or L_p(\mathbb{R}). According to this
classification, energy signals defined on the real axis are elements of the space
L_2(\mathbb{R}).

1.1.2 Normed Spaces

When considering normed signal spaces, we understand signals as vectors that
are elements of a linear vector space X. The norm of a vector x can be
understood as the length of x. The norm is denoted by ||x||.
Norms must satisfy the following three axioms, where \alpha is an arbitrary
real or complex-valued scalar, and 0 is the null vector:

    (i)   ||x|| \ge 0,  ||x|| = 0  if and only if  x = 0,                            (1.3)
    (ii)  ||x + y|| \le ||x|| + ||y||,                                               (1.4)
    (iii) ||\alpha x|| = |\alpha| \, ||x||.                                          (1.5)

Norms for Continuous-Time Signals. The most common norms for
continuous-time signals are the L_p norms:

    ||x||_{L_p} = \left[ \int_a^b |x(t)|^p \, dt \right]^{1/p},  1 \le p < \infty.   (1.6)

For p \to \infty, the norm (1.6) becomes

    ||x||_{L_\infty} = \operatorname*{ess\,sup}_{a \le t \le b} |x(t)|.

For p = 2 we obtain the well-known Euclidean norm:

    ||x||_{L_2} = \left[ \int_a^b |x(t)|^2 \, dt \right]^{1/2}.                      (1.7)

Thus, the signal energy according to (1.1) can also be expressed in the form

    E_x = ||x||_{L_2}^2 = \int_{-\infty}^{\infty} |x(t)|^2 \, dt,   x \in L_2(\mathbb{R}).   (1.8)

Norms for Discrete-Time Signals. The spaces \ell_p(n_1, n_2) are the discrete-
time equivalent to the spaces L_p(a, b). They are normed as follows:

    ||x||_{\ell_p} = \left[ \sum_{n=n_1}^{n_2} |x(n)|^p \right]^{1/p},  1 \le p < \infty.    (1.9)

For p \to \infty, (1.9) becomes

    ||x||_{\ell_\infty} = \sup_{n_1 \le n \le n_2} |x(n)|.

For p = 2 we obtain

    ||x||_{\ell_2} = \left[ \sum_{n=n_1}^{n_2} |x(n)|^2 \right]^{1/2}.                       (1.10)

Thus, the energy of a discrete-time signal x(n), n \in \mathbb{Z}, can be expressed as

    E_x = ||x||_{\ell_2}^2 = \sum_{n=-\infty}^{\infty} |x(n)|^2.                             (1.11)
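As a quick numerical illustration of the norms (1.9), (1.10) and the energy (1.11),
the following sketch (not part of the original text; signal values and p are
arbitrary examples) computes ℓ_p norms of a short sequence with NumPy:

    import numpy as np

    x = np.array([1.0, -2.0, 0.5, 3.0])          # example discrete-time signal x(n)

    def lp_norm(x, p):
        """l_p norm of a finite-length sequence, cf. (1.9) and (1.10)."""
        if np.isinf(p):
            return np.max(np.abs(x))             # l_infinity norm: sup |x(n)|
        return np.sum(np.abs(x) ** p) ** (1.0 / p)

    energy = lp_norm(x, 2) ** 2                  # signal energy E_x = ||x||^2, cf. (1.11)
    print(lp_norm(x, 1), lp_norm(x, 2), lp_norm(x, np.inf), energy)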

1.1.3 Metric Spaces

A function that assigns a real number to two elements x and y of a non-empty
set X is called a metric on X if it satisfies the following axioms:

    (i)   d(x, y) \ge 0,  d(x, y) = 0  if and only if  x = y,                        (1.12)
    (ii)  d(x, y) = d(y, x),                                                         (1.13)
    (iii) d(x, z) \le d(x, y) + d(y, z).                                             (1.14)

The metric d(x, y) can be understood as the distance between x and y.

A normed space is also a metric space. Here, the metric induced by the
norm is the norm of the difference vector:

    d(x, y) = ||x - y||.                                                             (1.15)

Proof (norm \to metric). For d(x, y) = ||x - y|| the validity of (1.12) imme-
diately follows from (1.3). With \alpha = -1, (1.5) leads to ||x - y|| = ||y - x||,
and (1.13) is also satisfied. For two vectors x = a - b and y = b - c the
following holds according to (1.4):

    ||a - c|| = ||x + y|| \le ||x|| + ||y|| = ||a - b|| + ||b - c||.

Thus, d(a, c) \le d(a, b) + d(b, c), which means that also (1.14) is satisfied.  \square

An example is the Euclidean metric induced by the Euclidean norm:

    d(x, y) = \left[ \int_a^b |x(t) - y(t)|^2 \, dt \right]^{1/2},   x, y \in L_2(a, b).    (1.16)

Accordingly, the following distance between discrete-time signals can be
stated:

    d(x, y) = \left[ \sum_{n=n_1}^{n_2} |x(n) - y(n)|^2 \right]^{1/2},   x, y \in \ell_2(n_1, n_2).    (1.17)

Nevertheless, we also find metrics which are not associated with a norm.
An example is the Hamming distance

    d(x, y) = \sum_{k=1}^{n} \left[ (x_k + y_k) \bmod 2 \right],

which states the number of positions where two binary code words x =
[x_1, x_2, \ldots, x_n] and y = [y_1, y_2, \ldots, y_n] with x_i, y_i \in \{0, 1\} differ (the space of
the code words is not a linear vector space).
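As a small sketch (my own illustration, not from the text), the Hamming distance
can be computed directly from its definition:

    import numpy as np

    def hamming_distance(x, y):
        """Number of positions in which two binary code words differ."""
        x, y = np.asarray(x), np.asarray(y)
        return int(np.sum((x + y) % 2))          # (x_k + y_k) mod 2 is 1 exactly where x_k != y_k

    print(hamming_distance([0, 1, 1, 0, 1], [1, 1, 0, 0, 1]))   # -> 2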

Note. The normed spaces L_p and \ell_p are so-called Banach spaces, which
means that they are normed linear spaces which are complete with regard to
their metric d(x, y) = ||x - y||. A space is complete if any Cauchy sequence of
the elements of the space converges within the space. That is, ||x_n - x_m|| \to 0
as n, m \to \infty, while the limit of x_n for n \to \infty lies in the space.

1.1.4 Inner Product Spaces

The signal spaces most frequently considered are the spaces L_2(a, b) and
\ell_2(n_1, n_2); for these spaces inner products can be stated. An inner product
assigns a complex number to two signals x(t) and y(t), or x(n) and y(n),
respectively. The notation is \langle x, y \rangle. An inner product must satisfy the
following axioms:

    (i)   \langle x, y \rangle = \langle y, x \rangle^*,                                                         (1.18)
    (ii)  \langle \alpha x + \beta y, z \rangle = \alpha \langle x, z \rangle + \beta \langle y, z \rangle,      (1.19)
    (iii) \langle x, x \rangle \ge 0,  \langle x, x \rangle = 0  if and only if  x = 0.                          (1.20)

Here, \alpha and \beta are scalars with \alpha, \beta \in \mathbb{C}, and 0 is the null vector.
Examples of inner products are

    \langle x, y \rangle = \int_a^b x(t) \, y^*(t) \, dt                                                          (1.21)

and

    \langle x, y \rangle = \sum_{n=n_1}^{n_2} x(n) \, y^*(n).                                    (1.22)

The inner product (1.22) may also be written as

    \langle x, y \rangle = y^H x,                                                                (1.23)

where the vectors are understood as column vectors:^1

    x = [x(n_1), x(n_1+1), \ldots, x(n_2)]^T,   y = [y(n_1), y(n_1+1), \ldots, y(n_2)]^T.        (1.24)

More general definitions of inner products include weighting functions or
weighting matrices. An inner product of two continuous-time signals x(t) and
y(t) including weighting can be defined as

    \langle x, y \rangle = \int_a^b g(t) \, x(t) \, y^*(t) \, dt,                                (1.25)

where g(t) is a real weighting function with g(t) > 0, a \le t \le b.

The general definition of inner products of discrete-time signals is

    \langle x, y \rangle = y^H G x,                                                              (1.26)

where G is a real-valued, Hermitian, positive definite weighting matrix. This
means that G^H = G^T = G, and all eigenvalues \lambda_i of G must be larger than
zero. As can easily be verified, the inner products (1.25) and (1.26) meet
conditions (1.18) - (1.20).
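To make the weighted inner product (1.26) concrete, here is a small NumPy
sketch; the weighting matrix G and the vectors are arbitrary examples of mine,
and the last line checks axiom (1.18):

    import numpy as np

    def inner(x, y, G):
        """Weighted inner product <x, y> = y^H G x, cf. (1.26)."""
        return np.conj(y) @ (G @ x)

    G = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                   # real, symmetric, positive definite (eigenvalues > 0)
    x = np.array([1.0 + 1.0j, 2.0])
    y = np.array([0.5, -1.0j])

    print(inner(x, y, G))
    print(np.allclose(inner(x, y, G), np.conj(inner(y, x, G))))   # axiom (1.18): <x, y> = <y, x>*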
The mathematical rules for inner products basically correspond to those
for ordinary products of scalars. However, the order in which the vectors occur
must be observed: (1.18) shows that changing the order leads to a conjugation
of the result.

As equation (1.19) indicates, a scalar prefactor of the left argument may
directly precede the inner product: \langle \alpha x, y \rangle = \alpha \langle x, y \rangle. If we want a prefactor
of the right argument to precede the inner product, it must be conjugated,
since (1.18) and (1.19) lead to

    \langle x, \alpha y \rangle = \langle \alpha y, x \rangle^* = \left[ \alpha \langle y, x \rangle \right]^* = \alpha^* \langle x, y \rangle.    (1.27)

Due to (1.18), an inner product \langle x, x \rangle is always real:

    \langle x, x \rangle = \Re\{\langle x, x \rangle\}.

^1 The superscript T denotes transposition. The elements of x and y may be real or
complex-valued. A superscript H, as in (1.23), means transposition and complex conjugation.
A vector x^H is also referred to as the Hermitian of x. If a vector is to be conjugated
but not transposed, we write x^*, such that x^H = [x^*]^T.

By defining an inner product we obtain a norm and also a metric. The
norm induced by the inner product is

    ||x|| = \langle x, x \rangle^{1/2}.                                                          (1.28)

We will prove this in the following along with the Schwarz inequality, which
states

    |\langle x, y \rangle| \le ||x|| \, ||y||.                                                   (1.29)

Equality in (1.29) is given only if x and y are linearly dependent, that is, if
one vector is a multiple of the other.

Proof (inner product \to norm). From (1.20) it follows immediately that
(1.3) is satisfied. For the norm of \alpha x, we conclude from (1.18) and (1.19)

    ||\alpha x|| = \langle \alpha x, \alpha x \rangle^{1/2} = \left[ |\alpha|^2 \langle x, x \rangle \right]^{1/2} = |\alpha| \, \langle x, x \rangle^{1/2} = |\alpha| \, ||x||.

Thus, (1.5) is also proved.

Now the expression ||x + y||^2 will be considered. We have

    ||x + y||^2 = \langle x + y, x + y \rangle
                = ||x||^2 + \langle x, y \rangle + \langle y, x \rangle + ||y||^2
                \le ||x||^2 + 2 \, |\langle x, y \rangle| + ||y||^2.

Assuming the Schwarz inequality is correct, we conclude

    ||x + y||^2 \le ||x||^2 + 2 \, ||x|| \, ||y|| + ||y||^2 = (||x|| + ||y||)^2.

This shows that also (1.4) holds.  \square

Proof of the Schwarz inequality. The validity of the equality sign in the
Schwarz inequality (1.29) for linearly dependent vectors can easily be proved
by substituting x = \alpha y or y = \alpha x, \alpha \in \mathbb{C}, into (1.29) and rearranging the
expression obtained, observing (1.28). For example, for x = \alpha y we have

    |\langle \alpha y, y \rangle| = |\alpha| \, \langle y, y \rangle = |\alpha| \, ||y|| \, ||y|| = ||\alpha y|| \, ||y||.

In order to prove the Schwarz inequality for linearly independent vectors,
some vector z = x + \alpha y will be considered. On the basis of (1.18) - (1.20) we
have

    0 \le \langle z, z \rangle
      = \langle x + \alpha y, \, x + \alpha y \rangle                                                            (1.30)
      = \langle x, x + \alpha y \rangle + \langle \alpha y, x + \alpha y \rangle
      = \langle x, x \rangle + \alpha^* \langle x, y \rangle + \alpha \langle y, x \rangle + \alpha \alpha^* \langle y, y \rangle.

This also holds for the special \alpha (assumption: y \ne 0)

    \alpha = - \frac{\langle x, y \rangle}{\langle y, y \rangle},                                                (1.31)

and we get

    0 \le \langle x, x \rangle - \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle}
        - \frac{\langle x, y \rangle \langle y, x \rangle}{\langle y, y \rangle}
        + \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle}.

The second and the fourth term cancel, and we obtain

    |\langle x, y \rangle|^2 \le \langle x, x \rangle \, \langle y, y \rangle.                                   (1.32)

Comparing (1.32) with (1.28) and (1.29) confirms the Schwarz inequality.  \square

Equation (1.28) shows that the inner products given in (1.21) and (1.22)
lead to the norms (1.7) and (1.10).
Finally, let us remark that a linear space with an inner product which is
complete with respect to the induced metric is called a Hilbert space.
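As a purely numerical illustration (not from the book), the Schwarz inequality
(1.29) and the induced norm (1.28) can be checked for the inner product
(1.22)/(1.23) with random vectors:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    y = rng.standard_normal(8) + 1j * rng.standard_normal(8)

    inner = np.vdot(y, x)                              # <x, y> = y^H x, cf. (1.22)/(1.23)
    norm = lambda v: np.sqrt(np.vdot(v, v).real)       # ||v|| = <v, v>^(1/2), cf. (1.28)

    print(abs(inner) <= norm(x) * norm(y))             # Schwarz inequality (1.29)
    print(np.isclose(abs(np.vdot(x, 2 * x)), norm(2 * x) * norm(x)))   # equality for linearly dependent vectors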

1.2 Energy Density and Correlation

1.2.1 Continuous-Time Signals

Let us reconsider (1.1):

    E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt.                                                (1.33)

According to Parseval's theorem, we may also write

    E_x = \frac{1}{2\pi} \int_{-\infty}^{\infty} |X(\omega)|^2 \, d\omega,                       (1.34)

where X(\omega) is the Fourier transform of x(t).^2  The quantity |x(t)|^2 in (1.33)
represents the distribution of signal energy with respect to time t; accordingly,
|X(\omega)|^2 in (1.34) can be viewed as the distribution of energy with respect to
frequency \omega. Therefore |X(\omega)|^2 is called the energy density spectrum of x(t).
We use the following notation:

    S_{xx}^E(\omega) = |X(\omega)|^2.                                                            (1.35)

The energy density spectrum S_{xx}^E(\omega) can also be regarded as the Fourier
transform of the so-called autocorrelation function

    r_{xx}^E(\tau) = \int_{-\infty}^{\infty} x^*(t) \, x(t + \tau) \, dt = x^*(-\tau) * x(\tau).    (1.36)

We have

    S_{xx}^E(\omega) = \int_{-\infty}^{\infty} r_{xx}^E(\tau) \, e^{-j\omega\tau} \, d\tau.      (1.37)

The correspondence is denoted as S_{xx}^E(\omega) \leftrightarrow r_{xx}^E(\tau).


The autocorrelation function is a measure indicating the similarity between
an energy signal x(t) and its time-shifted variant x_\tau(t) = x(t + \tau). This can
be seen from

    d^2(x, x_\tau) = ||x - x_\tau||^2
                   = \langle x, x \rangle - \langle x, x_\tau \rangle - \langle x_\tau, x \rangle + \langle x_\tau, x_\tau \rangle    (1.38)
                   = 2 \, ||x||^2 - 2 \, \Re\{\langle x_\tau, x \rangle\}
                   = 2 \, ||x||^2 - 2 \, \Re\{r_{xx}^E(\tau)\}.

With increasing correlation the distance decreases.


^2 In this section, we freely use the properties of the Fourier transform. For more detail
on the Fourier transform and Parseval's theorem, see Section 2.2.

Similarly, the cross correlation function

    r_{xy}^E(\tau) = \int_{-\infty}^{\infty} y(t + \tau) \, x^*(t) \, dt                         (1.39)

and the corresponding cross energy density spectrum

    S_{xy}^E(\omega) = \int_{-\infty}^{\infty} r_{xy}^E(\tau) \, e^{-j\omega\tau} \, d\tau,      (1.40)

    S_{xy}^E(\omega) = X^*(\omega) \, Y(\omega)                                                  (1.41)

are introduced, where r_{xy}^E(\tau) may be viewed as a measure of the similarity
between the two signals x(t) and y_\tau(t) = y(t + \tau).
1.2.2 Discrete-Time Signals

All previous considerations are applicable to discrete-time signals x(n) as well.
The signals x(n) may be real or complex-valued. As in the continuous-time
case, we start the discussion with the energy of the signal:

    E_x = \sum_{n=-\infty}^{\infty} |x(n)|^2.                                                    (1.42)

According to Parseval's relation for the discrete-time Fourier transform, we
may alternatively compute E_x from X(e^{j\omega}):^3

    E_x = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(e^{j\omega})|^2 \, d\omega.                        (1.43)

The term |X(e^{j\omega})|^2 in (1.43) is called the energy density spectrum of the
discrete-time signal. We use the notation

    S_{xx}^E(e^{j\omega}) = |X(e^{j\omega})|^2.                                                  (1.44)

The energy density spectrum S_{xx}^E(e^{j\omega}) is the discrete-time Fourier transform
of the autocorrelation sequence

    r_{xx}^E(m) = \sum_{n=-\infty}^{\infty} x^*(n) \, x(n + m).                                  (1.45)

^3 See Section 4.2 for more detail on the discrete-time Fourier transform.

We have

    S_{xx}^E(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xx}^E(m) \, e^{-j\omega m}              (1.46)

    r_{xx}^E(m) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}^E(e^{j\omega}) \, e^{j\omega m} \, d\omega.

Note that the energy density may also be viewed as the product X(z) \, X^*(1/z^*),
evaluated on the unit circle (z = e^{j\omega}), where X(z) is the z-transform of x(n).
The definition of the cross correlation sequence is

    r_{xy}^E(m) = \sum_{n=-\infty}^{\infty} y(n + m) \, x^*(n).                                  (1.47)

For the corresponding cross energy density spectrum the following holds:

    S_{xy}^E(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xy}^E(m) \, e^{-j\omega m},             (1.48)

that is,

    S_{xy}^E(e^{j\omega}) = X^*(e^{j\omega}) \, Y(e^{j\omega}).                                  (1.49)
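The following NumPy sketch (an illustration of mine, not from the book) computes
the autocorrelation sequence (1.45) of a short finite-length signal and verifies
numerically that its transform equals |X(e^{jω})|², sampled on an FFT grid with
sufficient zero padding:

    import numpy as np

    x = np.array([1.0, 2.0, -1.0, 0.5])              # finite-length signal, zero outside

    # autocorrelation sequence r_xx^E(m), lags m = -(N-1) ... (N-1), cf. (1.45)
    r = np.correlate(x, x, mode="full")

    Nfft = 16                                         # zero padding, Nfft >= 2*len(x) - 1
    S_from_X = np.abs(np.fft.fft(x, Nfft)) ** 2       # energy density spectrum |X|^2, cf. (1.44)
    r_padded = np.pad(r, (0, Nfft - len(r)))
    S_from_r = np.fft.fft(np.roll(r_padded, -(len(x) - 1)))   # DTFT of r with lag 0 moved to index 0

    print(np.allclose(S_from_X, S_from_r.real, atol=1e-9))    # relation (1.46) holds
    print(r[len(x) - 1], np.sum(np.abs(x) ** 2))              # r(0) equals the energy E_x, cf. (1.42)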

1.3 Random Signals

Random signals are encountered in all areas of signal processing. For example,
they appear as disturbances in the transmission of signals. Even the trans-
mitted and consequently also the received signals in telecommunications are
of random nature, because only random signals carry information. In pattern
recognition, the patterns that are to be distinguished are modeled as random
processes. In speech, audio, and image coding, the signals to be compressed
are modeled as such.

First of all, one distinguishes between random variables and random
processes. A random variable is obtained by assigning a real or complex
number to each feature m_i from a feature set M. The features (or events)
occur randomly. Note that the features themselves may also be non-numeric.

If one assigns a function {}_i x(t) to each feature m_i, then the totality of all
possible functions is called a stochastic process. The features occur randomly,
whereas the assignment m_i \to {}_i x(t) is deterministic. A function {}_i x(t) is called
the realization of the stochastic process x(t). See Figure 1.1 for an illustration.
Figure 1.1. Random variables (a) and random processes (b).

1.3.1 Properties of Random Variables

The properties of a real random variable x are thoroughly characterized by
its cumulative distribution function F_x(\alpha) and also by its probability density
function (pdf) p_x(\xi). The distribution states the probability P with which
the value of the random variable x is smaller than or equal to a given value
\alpha:

    F_x(\alpha) = P(x \le \alpha).                                                               (1.50)

Here, the axioms of probability hold, which state that

    \lim_{\alpha \to -\infty} F_x(\alpha) = 0,   \lim_{\alpha \to \infty} F_x(\alpha) = 1,
    F_x(\alpha_1) \le F_x(\alpha_2)  \text{ for }  \alpha_1 \le \alpha_2.                        (1.51)

Given the distribution, we obtain the pdf by differentiation:

    p_x(\xi) = \frac{dF_x(\xi)}{d\xi}.                                                           (1.52)

Since the distribution is a non-decreasing function, we have

    p_x(\xi) \ge 0.                                                                              (1.53)

Joint Probability Density. The joint probability density p_{x_1, x_2}(\xi_1, \xi_2) of
two random variables x_1 and x_2 is given by

    p_{x_1, x_2}(\xi_1, \xi_2) = p_{x_1}(\xi_1) \, p_{x_2 | x_1}(\xi_2 | \xi_1),                 (1.54)

where p_{x_2 | x_1}(\xi_2 | \xi_1) is a conditional probability density (density of x_2 provided
x_1 has taken on the value \xi_1). If the variables x_1 and x_2 are statistically
independent of one another, (1.54) reduces to

    p_{x_1, x_2}(\xi_1, \xi_2) = p_{x_1}(\xi_1) \, p_{x_2}(\xi_2).                               (1.55)

The pdf of a complex random variable is defined as the joint density of its
real and imaginary parts.

Moments. The properties of a random variable are often described by its
moments

    m_x^{(n)} = E\{|x|^n\}.                                                                      (1.57)

Herein, E\{\cdot\} denotes the expected value (statistical average). An expected
value E\{g(x)\}, where g(x) is an arbitrary function of the random variable x,
can be calculated from the density as

    E\{g(x)\} = \int_{-\infty}^{\infty} g(\xi) \, p_x(\xi) \, d\xi.                              (1.58)

For g(x) = x we obtain the mean value (first moment):

    m_x = E\{x\} = \int_{-\infty}^{\infty} \xi \, p_x(\xi) \, d\xi.                              (1.59)

For g(x) = |x|^2 we obtain the average power (second moment):

    S_x^2 = E\{|x|^2\} = \int_{-\infty}^{\infty} |\xi|^2 \, p_x(\xi) \, d\xi.                    (1.60)

The variance (second central moment) is calculated with g(x) = |x - m_x|^2 as

    \sigma_x^2 = E\{|x - m_x|^2\} = \int_{-\infty}^{\infty} |\xi - m_x|^2 \, p_x(\xi) \, d\xi.   (1.61)

The following holds:

    \sigma_x^2 = S_x^2 - m_x^2.                                                                  (1.62)
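As a simple sketch (illustration only; the distribution is an arbitrary choice of
mine), the moments can be estimated from samples and relation (1.62) checked
numerically:

    import numpy as np

    rng = np.random.default_rng(1)
    x = 2.0 + 1.5 * rng.standard_normal(100_000)      # example samples of a random variable

    m_x   = np.mean(x)                                # mean, estimate of (1.59)
    S_x2  = np.mean(np.abs(x) ** 2)                   # average power, estimate of (1.60)
    var_x = np.mean(np.abs(x - m_x) ** 2)             # variance, estimate of (1.61)

    print(np.isclose(var_x, S_x2 - m_x ** 2))         # relation (1.62)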

Characteristic Function. The characteristic function of a random variable
x is defined as

    \Phi_x(\nu) = E\{e^{j\nu x}\} = \int_{-\infty}^{\infty} p_x(\xi) \, e^{j\nu\xi} \, d\xi,     (1.63)

which means that, apart from the sign of the argument, it is the Fourier
transform of the pdf. According to the moment theorem of the Fourier
transform (see Section 2.2), the moments of the random variable can also
be computed from the characteristic function as

    m_x^{(n)} = \frac{1}{j^n} \left. \frac{d^n \Phi_x(\nu)}{d\nu^n} \right|_{\nu = 0}.           (1.64)

1.3.2 Random Processes

The starting point for the following considerations is a stochastic process x(t),
from which the random variables x_{t_1}, x_{t_2}, \ldots, x_{t_n} with x_{t_k} = x(t_k) are taken at
times t_1 < t_2 < \ldots < t_n, n \in \mathbb{Z}. The properties of these random variables are
characterized by their joint pdf p_{x_{t_1}, x_{t_2}, \ldots, x_{t_n}}(\alpha_1, \alpha_2, \ldots, \alpha_n). Then a second
set of random variables is taken from the process x(t), applying a time shift \tau:
x_{t_1+\tau}, x_{t_2+\tau}, \ldots, x_{t_n+\tau} with x_{t_k+\tau} = x(t_k + \tau). If the joint densities of both
sets are equal for all time shifts \tau and all n, that is, if we have

    p_{x_{t_1}, \ldots, x_{t_n}}(\alpha_1, \ldots, \alpha_n) = p_{x_{t_1+\tau}, \ldots, x_{t_n+\tau}}(\alpha_1, \ldots, \alpha_n),    (1.65)

then we speak of a strictly stationary process; otherwise we call the process
non-stationary.

Autocorrelation and Autocovariance Functions of Non-Stationary
Processes. The autocorrelation function of a general random process is
defined as a second-order moment:

    r_{xx}(t_1, t_2) = E\{x_1 \, x_2\},                                                          (1.66)

where x_1 = x(t_1) and x_2 = x^*(t_2).


Basically, the autocorrelation function indicates how similar the process is
at times t_1 and t_2, since for the expected Euclidean distance we have

    E\{|x(t_1) - x(t_2)|^2\} = r_{xx}(t_1, t_1) + r_{xx}(t_2, t_2) - 2 \, \Re\{r_{xx}(t_1, t_2)\}.

The autocovariance function of a random process is defined as

    c_{xx}(t_1, t_2) = E\{[x(t_1) - m_{t_1}] \, [x(t_2) - m_{t_2}]^*\},                          (1.67)

where m_{t_k} denotes the expected value at time t_k; i.e.

    m_{t_k} = E\{x(t_k)\}.                                                                       (1.68)

Wide-Sense Stationary Processes. There are processes whose mean value
is constant and whose autocorrelation function is a function of t_1 - t_2. Such
processes are referred to as "wide-sense stationary", even if they are non-
stationary according to the above definition.

Cyclo-Stationary Process. If a process is non-stationary according to the
definition stated above, but if the properties repeat periodically, then we speak
of a cyclo-stationary process.

Autocorrelation and Autocovariance Functions of Wide-Sense Sta-
tionary Processes. In the following we assume wide-sense stationarity, so
that the first and second moments are independent of the respective time.
Because of the stationarity we must assume that the process realizations
are not absolutely integrable, and that their Fourier transforms do not exist.
Since in the field of telecommunications one also encounters complex-valued
processes when describing real bandpass processes in the complex baseband,
we shall continue by looking at complex-valued processes. For wide-sense
stationary processes the autocorrelation function (acf) depends only on the
time shift between the respective times; it is given by

    r_{xx}(\tau) = E\{x^*(t) \, x(t + \tau)\}.                                                   (1.69)

For x_1 = x(t + \tau) and x_2 = x^*(t), the expected value E\{x_1 x_2\} can be written as

    r_{xx}(\tau) = E\{x_1 \, x_2\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \xi_1 \, \xi_2 \; p_{x_1 x_2}(\xi_1, \xi_2) \, d\xi_1 \, d\xi_2.    (1.70)

The maximum of the autocorrelation function is located at \tau = 0, where its
value equals the mean square value:

    r_{xx}(0) = E\{|x(t)|^2\} \ge |r_{xx}(\tau)| \quad \text{for all } \tau.                     (1.71)

Furthermore we have r_{xx}(-\tau) = r_{xx}^*(\tau).

When subtracting the mean

    m_x = E\{x(t)\}                                                                              (1.72)

prior to computing the autocorrelation function, we get the autocovariance
function

    c_{xx}(\tau) = E\{[x^*(t) - m_x^*] \, [x(t + \tau) - m_x]\}.                                 (1.73)

Power Spectral Density. The power spectral density, or power density
spectrum, describes the distribution of power with respect to frequency. It
is defined as the Fourier transform of the autocorrelation function:

    S_{xx}(\omega) = \int_{-\infty}^{\infty} r_{xx}(\tau) \, e^{-j\omega\tau} \, d\tau           (1.74)

    r_{xx}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) \, e^{j\omega\tau} \, d\omega.    (1.75)

This definition is based on the Wiener-Khintchine theorem, which states that
the physically meaningful power spectral density given by

    S_{xx}(\omega) = \lim_{T \to \infty} \frac{1}{T} \, E\left\{ |X_T(\omega)|^2 \right\},       (1.76)

with

    X_T(\omega) \leftrightarrow x(t) \, \mathrm{rect}\!\left(\frac{t}{T}\right)

and

    \mathrm{rect}(t) = \begin{cases} 1, & |t| \le 0.5 \\ 0, & \text{otherwise,} \end{cases}

is identical to the power spectral density given in (1.74).

Taking (1.75) for \tau = 0, we obtain

    S_x^2 = r_{xx}(0) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_{xx}(\omega) \, d\omega.        (1.77)

Cross Correlation and Cross Power Spectral Density. The cross
correlation between two wide-sense stationary random processes x(t) and y(t)
is defined as

    r_{xy}(\tau) = E\{x^*(t) \, y(t + \tau)\}.                                                   (1.78)

The Fourier transform of r_{xy}(\tau) is the cross power spectral density, denoted
as S_{xy}(\omega). Thus, we have the correspondence

    r_{xy}(\tau) \leftrightarrow S_{xy}(\omega).                                                 (1.79)

Discrete-Time Signals. The following definitions for discrete-time signals
basically correspond to those for continuous-time signals; the correlation and
covariance functions, however, become correlation and covariance sequences.
For the autocorrelation sequence we have

    r_{xx}(m) = E\{x^*(n) \, x(n + m)\}.                                                         (1.80)

The autocovariance sequence is defined as

    c_{xx}(m) = E\{[x^*(n) - m_x^*] \, [x(n + m) - m_x]\}.                                       (1.82)

The discrete-time Fourier transform of the autocorrelation sequence is the
power spectral density (Wiener-Khintchine theorem). We have

    S_{xx}(e^{j\omega}) = \sum_{m=-\infty}^{\infty} r_{xx}(m) \, e^{-j\omega m}                  (1.83)

    r_{xx}(m) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}(e^{j\omega}) \, e^{j\omega m} \, d\omega.    (1.84)

The definition of the cross correlation sequence is

    r_{xy}(m) = E\{x^*(n) \, y(n + m)\}.

A cross covariance sequence can be defined as

    c_{xy}(m) = E\{[x^*(n) - m_x^*] \, [y(n + m) - m_y]\}.

Correlation Matrices. Auto- and cross correlation matrices are frequently
required. We use the following definitions:

    R_{xx} = E\{x \, x^H\},
    R_{xy} = E\{y \, x^H\},                                                                      (1.89)

where

    x = [x(n), x(n+1), \ldots, x(n + N_x - 1)]^T,
    y = [y(n), y(n+1), \ldots, y(n + N_y - 1)]^T.                                                (1.90)

The terms x x^H and y x^H are dyadic products.

For the sake of completeness it shall be noted that the autocorrelation
matrix R_{xx} of a stationary process x(n) has the following Toeplitz structure:

    R_{xx} = \begin{bmatrix}
        r_{xx}(0)       & r_{xx}^*(1)     & \cdots & r_{xx}^*(N_x - 1) \\
        r_{xx}(1)       & r_{xx}(0)       & \cdots & r_{xx}^*(N_x - 2) \\
        \vdots          & \vdots          & \ddots & \vdots            \\
        r_{xx}(N_x - 1) & r_{xx}(N_x - 2) & \cdots & r_{xx}(0)
    \end{bmatrix}.                                                                               (1.91)

Here, the property

    r_{xx}(-m) = r_{xx}^*(m),                                                                    (1.92)

which is concluded from (1.80) by taking stationarity into consideration, has
been used.

If two processes x(n) and y(n) are pairwise stationary, we have

    r_{xy}(-m) = r_{yx}^*(m),                                                                    (1.93)

and the cross correlation matrix R_{xy} = E\{y \, x^H\} has the following structure:

    R_{xy} = \begin{bmatrix}
        r_{xy}(0)       & r_{xy}(-1)      & \cdots & r_{xy}(-(N_x - 1)) \\
        r_{xy}(1)       & r_{xy}(0)       & \cdots & r_{xy}(-(N_x - 2)) \\
        \vdots          & \vdots          & \ddots & \vdots             \\
        r_{xy}(N_y - 1) & r_{xy}(N_y - 2) & \cdots & r_{xy}(N_y - N_x)
    \end{bmatrix}.                                                                               (1.94)

Auto- and cross-covariance matrices can be defined in an analogous way by
replacing the entries r_{xy}(m) with c_{xy}(m).
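A small sketch (mine, using SciPy's toeplitz helper) that assembles the
autocorrelation matrix (1.91) from a time-averaged estimate of the autocorrelation
sequence of one realization:

    import numpy as np
    from scipy.linalg import toeplitz

    rng = np.random.default_rng(2)
    x = rng.standard_normal(10_000)                   # one realization of a (real) stationary process

    N = 4                                             # matrix dimension N_x
    # biased time-average estimate of r_xx(m) for m = 0 ... N-1
    r = np.array([np.mean(x[:len(x) - m] * x[m:]) for m in range(N)])

    # Toeplitz structure (1.91): first column r(0..N-1), first row r*(0..N-1)
    R = toeplitz(r, np.conj(r))
    print(R)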

Ergodic Processes. Usually, the autocorrelation function is calculated
according to (1.70) by taking the ensemble average. An exception to this
rule is the ergodic process, where the ensemble average can be replaced by a
temporal average. For the autocorrelation function of an ergodic continuous-
time process we have

    r_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} {}_i x^*(t) \; {}_i x(t + \tau) \, dt,    (1.95)

where {}_i x(t) is an arbitrary realization of the stochastic process. Accordingly,
we get

    r_{xx}(m) = \lim_{N \to \infty} \frac{1}{2N + 1} \sum_{n=-N}^{N} {}_i x^*(n) \; {}_i x(n + m)              (1.96)

for discrete-time signals.
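The temporal average (1.96) suggests a straightforward estimator. The sketch
below (an illustration under my own choice of model, not from the book) estimates
a few autocorrelation lags of a single realization of a first-order autoregressive
process and compares them with the known values:

    import numpy as np

    rng = np.random.default_rng(3)
    a, n_samples = 0.8, 100_000
    w = rng.standard_normal(n_samples)                # unit-variance white input

    # one realization of the AR(1) process x(n) = a*x(n-1) + w(n)
    x = np.zeros(n_samples)
    for n in range(1, n_samples):
        x[n] = a * x[n - 1] + w[n]

    def acf_estimate(x, m):
        """Time-average estimate of r_xx(m), cf. (1.96)."""
        return np.mean(x[:len(x) - m] * x[m:])

    for m in range(4):
        theory = a**m / (1 - a**2)                    # acf of a stationary AR(1) with unit-variance input
        print(m, round(acf_estimate(x, m), 3), round(theory, 3))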

Continuous-Time White Noise Process. A wide-sense stationary
continuous-time noise process x(t) is said to be white if its power spectral
density is a constant:

    S_{xx}(\omega) = \sigma^2.                                                                   (1.97)

The autocorrelation function of the process is a Dirac impulse with weight
\sigma^2:

    r_{xx}(\tau) = \sigma^2 \, \delta(\tau).                                                     (1.98)

Since the power of such a process is infinite, it is not realizable. However,
the white noise process is a convenient model process which is often used for
describing properties of real-world systems.

Continuous-Time Gaussian White Noise Process. We consider a real-
valued wide-sense stationary stochastic process x(t) and try to represent it
on the interval [-a, a] via a series expansion^4 with an arbitrary real-valued
orthonormal basis \varphi_i(t) for L_2(-a, a). The basis satisfies

    \int_{-a}^{a} \varphi_i(t) \, \varphi_j(t) \, dt = \delta_{ij}.

If the coefficients of the series expansion given by

    \alpha_i = \int_{-a}^{a} \varphi_i(t) \, x(t) \, dt

are Gaussian random variables with

    E\{\alpha_i^2\} = \sigma^2 \quad \forall \, i,

we call x(t) a Gaussian white noise process.

^4 Series expansions are discussed in detail in Chapter 3.

Bandlimited White Noise Process. A bandlimited white noise process is
a white noise process whose power spectral density is constant within a certain
frequency band and zero outside this band. See Figure 1.2 for an illustration.

Figure 1.2. Bandlimited white noise process (power spectral density constant
for |\omega| \le \omega_{max}, zero otherwise).

Discrete-Time White Noise Process. A discrete-time white noise process
has the power spectral density

    S_{xx}(e^{j\omega}) = \sigma^2                                                               (1.99)

and the autocorrelation sequence

    r_{xx}(m) = \sigma^2 \, \delta_{m0}.                                                         (1.100)
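A short sketch (illustration only) generating discrete-time white noise and checking
that its estimated autocorrelation sequence is close to σ²δ_{m0}, cf. (1.100):

    import numpy as np

    rng = np.random.default_rng(4)
    sigma = 1.5
    x = sigma * rng.standard_normal(100_000)          # white (here Gaussian) noise, variance sigma^2

    r = [np.mean(x[:len(x) - m] * x[m:]) for m in range(4)]   # time-average acf estimate
    print(np.round(r, 3))                             # approximately [sigma^2, 0, 0, 0]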

1.3.3 Transmission of Stochastic Processes through Linear Systems

Continuous-Time Processes. We assume a linear time-invariant system
with the impulse response h(t), which is excited by a stationary process x(t).
The cross correlation function between the input process x(t) and the output
process y(t) is given by

    r_{xy}(\tau) = E\{x^*(t) \, y(t + \tau)\}
                 = \int_{-\infty}^{\infty} E\{x^*(t) \, x(t + \tau - \lambda)\} \, h(\lambda) \, d\lambda       (1.101)
                 = r_{xx}(\tau) * h(\tau).

The cross power spectral density is obtained by taking the Fourier trans-
form of (1.101):

    S_{xy}(\omega) = S_{xx}(\omega) \, H(\omega).                                                (1.102)

Calculating the autocorrelation function of the output signal is done as
follows:

    r_{yy}(\tau) = E\{y^*(t) \, y(t + \tau)\}
                 = \int\!\!\int E\{x^*(t - \alpha) \, x(t + \tau - \beta)\} \, h^*(\alpha) \, h(\beta) \, d\alpha \, d\beta    (1.103)
                 = \int r_{xx}(\tau - \lambda) \int h^*(\alpha) \, h(\alpha + \lambda) \, d\alpha \, d\lambda.

Thus, we obtain the following relationship:

    r_{yy}(\tau) = r_{xx}(\tau) * h^*(-\tau) * h(\tau).                                          (1.104)

Taking the Fourier transform of (1.104), we obtain the power spectral
density of the output signal:

    S_{yy}(\omega) = S_{xx}(\omega) \, |H(\omega)|^2.                                            (1.105)

We observe that the phase of H(\omega) has no influence on S_{yy}(\omega). Consequently,
only the magnitude frequency response of H(\omega) can be determined from
S_{xx}(\omega) and S_{yy}(\omega).

Discrete-Time Processes. The results for continuous-time signals and
systems can be directly applied to the discrete-time case, where a system
with impulse response h(n) is excited by a process x(n), yielding the output
process y(n). The cross correlation sequence between input and output is

    r_{xy}(m) = r_{xx}(m) * h(m).                                                                (1.106)

The cross power spectral density becomes

    S_{xy}(e^{j\omega}) = S_{xx}(e^{j\omega}) \, H(e^{j\omega}).                                 (1.107)

For the autocorrelation sequence and the power spectral density at the output
we get

    r_{yy}(m) = r_{xx}(m) * h^*(-m) * h(m),                                                      (1.108)

    S_{yy}(e^{j\omega}) = S_{xx}(e^{j\omega}) \, |H(e^{j\omega})|^2.                             (1.109)

As before, the phase of H(e^{j\omega}) has no influence on S_{yy}(e^{j\omega}).
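Relation (1.109) can be illustrated numerically. The sketch below (my own choice
of FIR filter and estimator, not from the book) filters blocks of white noise and
compares the averaged periodogram of the output with σ²|H(e^{jω})|²:

    import numpy as np

    rng = np.random.default_rng(5)
    sigma2 = 1.0
    h = np.array([0.5, 1.0, 0.5])                     # example FIR impulse response h(n)

    nfft, n_blocks = 256, 2000
    S_est = np.zeros(nfft)
    for _ in range(n_blocks):
        x = np.sqrt(sigma2) * rng.standard_normal(nfft)
        y = np.convolve(x, h, mode="same")            # filter one block of white noise
        S_est += np.abs(np.fft.fft(y)) ** 2 / nfft
    S_est /= n_blocks                                 # averaged periodogram of the output

    S_theory = sigma2 * np.abs(np.fft.fft(h, nfft)) ** 2   # S_yy = sigma^2 |H|^2, cf. (1.109)
    print(float(np.max(np.abs(S_est - S_theory))))    # small, up to estimation error and block-edge effects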


Here we cease the discussion of the transmission of stochastic processes
through linear systems, but we will return to this topic in Section 5 of
Chapter 2, where we will study the representation of stationary bandpass
processes by means of their complex envelope.
