Fourier Transforms
2013
1. Basic Introduction
In its simplest form, a communication system consists of three blocks: transmitter, communication channel and receiver, as shown in Fig. 1.1.

Fig. 1.1 A basic communication system: the input x(t) passes through the transmitter and the communication channel, and the receiver delivers the output xr(t).
As seen from Fig. 1.1, it is possible to have a one way or a two way communication system, depending on whether the system incorporates only a transmitter, only a receiver, or both simultaneously. For instance, the radio that we listen to or the TV that we watch (including the transmitters on the other side) represents a one way communication system, whereas telephone networks are two way communication systems. Examples of communication channels are the atmosphere, cables, etc. Depending on the application, the transmitter (and correspondingly the receiver) may perform functions such as:
a) Bandlimiting
b) Digitizing
c) Encoding
d) Encryption
e) Multiplexing
f) Modulation
g) RF amplification
h) Channel adaptation
i) Multi access arrangement
For a system (including a communication system) we model the input and output as depicted in Fig. 1.2, with

t : time variable
f : frequency variable
ω : angular frequency variable

Fig. 1.2 The block diagram illustrating the modelling of a communication system: an input X(f) (or X(ω)) is applied to a block with responses h(t), H(f), H(ω), producing the output Y(f) (or Y(ω)).
x(t) : input signal in time domain (input time signal, input time waveform)
X(f) : input signal in frequency domain (frequency spectrum of input signal)
X(ω) : input signal in angular frequency domain, ω = 2πf
h(t) : response of communication system in time domain (time response of a system)
H(f) : response of communication system in frequency domain, i.e. the transfer function (magnitude and phase responses)
H(ω) : response of communication system in angular frequency domain
y(t) : output signal in time domain (output time signal, output time waveform)
Y(f) : output signal in frequency domain (frequency spectrum of output signal)
Y(ω) : output signal in angular frequency domain    (1.1)
In this course, we will rarely use the angular frequency variable and its associated functions. For the
group of functions shown in Fig. 1.2 and (1.1), the following relation holds

Y(f) = X(f) H(f)    (1.2)

which means that the output (in the frequency domain) can be obtained from the multiplication of the input signal (in the frequency domain) and the response of the system (in the frequency domain). Note that the corresponding relation to (1.2) in the time domain is the convolution integral, i.e.,

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ    (1.3)
Looking at (1.2), we see that if the input signal is given as x(t), or the response of the system is given as h(t), then to benefit from (1.2) we need the Fourier transform operation, which, for a given x(t), can be expressed as

X(f) = ∫_{−∞}^{∞} x(t) exp(−j2πft) dt    (1.5)

The following observations can be made about (1.5):
a) The integral over time is performed from minus infinity to plus infinity, so as to include all behaviour of x(t) along the time axis.

b) The exponential function exp(−j2πft) contains the time variable t as well as the frequency variable f, in this manner enabling the transformation from the time domain to the frequency domain (and the reverse).

c) The absolute value of the exponential is unity, i.e., |exp(−j2πft)| = 1. This way, the Fourier transform (and its inverse) applies no scaling during the transformation. Thus we expect, for instance, the energy or the power of a signal to be the same whether we take the time or the frequency domain expression of the signal.
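The transform definition in (1.5) can be checked numerically by approximating the integral with a Riemann sum. Below is a sketch in Python (an assumption on my part, since the notes' own examples use Matlab), applied to the test signal x(t) = exp(−πt²), whose transform is known to be exp(−πf²):

```python
import numpy as np

# Riemann-sum approximation of X(f) = ∫ x(t) exp(-j2πft) dt, as in (1.5).
# Test signal: x(t) = exp(-π t²), whose exact transform is exp(-π f²).
dt = 0.001
t = np.arange(-10.0, 10.0, dt)
x = np.exp(-np.pi * t**2)

def fourier(f):
    """Numerical Fourier transform of the sampled x(t) at frequency f."""
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

for f in (0.0, 0.5, 1.0):
    print(f, abs(fourier(f)), np.exp(-np.pi * f**2))
```

The printed numerical and exact values agree to many decimal places, and item (c) above is visible: the transform introduces no scaling of its own.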
The inverse transform is

x(t) = ∫_{−∞}^{∞} X(f) exp(j2πft) df    (1.7)

Or, in short notation: if x(t) and X(f) form a Fourier transform pair, then it is customary to denote them in the following symbolic form

x(t) ⇄ X(f)  (F in the forward direction, F⁻¹ in the reverse)    (1.9)

Fig. 1.3 A time waveform and its frequency spectrum, which extends over the negative as well as the positive side of the frequency axis.

The appearance in Fig. 1.3 implies that we use the positive as well as the negative side of the frequency axis.
From (1.2) and (1.3), it is clear that in order to find the output of a system, we need to move between the time and frequency domains. In doing so, we also need to introduce concepts and topics such as the delta (impulse) function, the unit step function, convolution, and periodic and aperiodic functions.
2. Delta (Impulse) Function

A time delta function theoretically has an existence only at the value that makes its time argument zero: its duration goes to zero and its amplitude goes to infinity, while the area underneath remains fixed at unity. Mathematically expressed, this means

∫_{−∞}^{∞} δ(t) dt = 1    (2.1)

∫_{−∞}^{∞} δ(t) exp(−j2πft) dt = 1    (2.2)

∫_{−∞}^{∞} 1 · exp(j2πft) df = δ(t)    (2.3)

δ(f) = ∫_{−∞}^{∞} x(t) exp(−j2πft) dt ,  x(t) = 1    (2.4)
The results of (2.2), (2.3) and (2.4) are graphically displayed in Fig. 2.1.
Fig. 2.1 The time delta function δ(t) and its flat spectrum X(f) = 1 (upper plots), and the DC signal x(t) = 1 and its spectrum X(f) = δ(f) (lower plots).
As can be seen from (2.2) and the upper right plot in Fig. 2.1, the time delta function is able to generate all frequencies of the frequency domain; hence a time delta function can be used to assess the impulse response of a system. On the other hand, the lower plots of Fig. 2.1 refer to the Fourier transform of a DC signal in the time domain and its inverse. As seen here, the Fourier transform of a DC signal is δ(f), i.e. a delta function at f = 0; this result is in perfect agreement with physical reality.

It is possible to envisage time delayed or time advanced versions of the delta function, as stated below. By replacing exp(−j2πft) in (2.2) with a general function x(t) and applying the rule of evaluation explained underneath, we can write

∫_{−∞}^{∞} δ(t − t0) x(t) dt = x(t0)    (2.6)

x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ    (2.7)
Note that (2.7) forms the basis of finding the time impulse response of a system.
Fig. 2.2 Time delayed, time advanced delta functions and delta function with zero time delay.
Finally in this section, we state the Fourier transforms of the delta functions with time delay or with time advance

F[δ(t − t0)] = exp(−j2πft0) ,  F[δ(t + t0)] = exp(+j2πft0)    (2.8), (2.9)

Hence a time delay or time advance is reflected into the frequency domain as a phase term of unit magnitude, since |exp(∓j2πft0)| = 1.

3. Unit Step Function

The unit step function is defined as

u(t) = 1 for t > 0 ;  0 for t < 0    (3.1)

Fig. 3.1 The unit step function u(t).

Physically, the unit step function can be visualized as a DC power source being switched on at time t = 0.
Other forms of unit step function can be envisaged. These are illustrated in Fig. 3.2.
Fig. 3.2 Other forms of the unit step function: u(−t), u(t − T), Au(t) and −u(t).
The individual unit step function plots of Fig. 3.2 can be expressed mathematically as

u(−t) = 1 for t < 0 ;  0 for t > 0 ,    u(t − T) = 1 for t > T ;  0 for t < T

Au(t) = A for t > 0 ;  0 for t < 0 ,    −u(t) = −1 for t > 0 ;  0 for t < 0    (3.2)

These can be interpreted physically as

u(−t) : a DC power supply which has been on since t = −∞ and is switched off at t = 0
u(t − T) : a DC power supply that is switched on at t = T
Au(t) : a DC power supply that gives an output of amplitude A
−u(t) : a DC power supply that gives a negative output    (3.3)
It is possible to retrieve a rectangular function (pulse or window function) from the difference of two
unit step functions as shown in Fig. 3.3.
Fig. 3.3 Retrieving a rectangular function from two unit step functions.
x(t) = A [u(t) − u(t − T)]

or

x(t) = A for 0 < t < T ;  0 elsewhere    (3.4)

From the second line of (3.4), we understand that the unit step function also serves to confine time functions to certain intervals. In this sense, there is no need to specify the time interval of existence for the x(t) of the first line of (3.4).
Exercise 3.1 : a) From Fig. 3.1, we see that the unit step function u(t) consists of three distinct parts. The first part is the zero (constant) level prior to t = 0. The second part is at t = 0, where a rapid transition is made from zero to unity amplitude. The final part is where the function remains constant at unity for t > 0. Considering this property, write the derivative of u(t), and plot the derivative of the rectangular function given in (3.4).
Now we attempt to evaluate the Fourier transform of the unit step function using (1.5)

U(f) = ∫_{−∞}^{∞} u(t) exp(−j2πft) dt = ∫_{0}^{∞} 1 · exp(−j2πft) dt

     = [exp(−j2πft) / (−j2πf)]_{0}^{∞} = lim_{t→∞} exp(−j2πft) / (−j2πf) (?) + 1 / (j2πf)    (3.5)

The ambiguity in the term on the second line of (3.5), marked with a ?, is resolved by rigorous analysis to get

U(f) = F[u(t)] = δ(f)/2 + 1/(j2πf)    (3.6)
Fig. 3.4 The graph of the Fourier transform of unit step function.
The result in (3.6) and the plot in Fig. 3.4 can be interpreted in terms of Fig. 3.1 as follows:

a) Considering the time range −∞ < t < ∞, we see from Fig. 3.1 that u(t) spends half of its time, that is −∞ < t < 0, at zero amplitude, and the other half, that is 0 < t < ∞, at unity amplitude. The amplitude coefficient of δ(f) (DC) being 0.5 is therefore reasonable.

b) The term 1/(j2πf) can be associated with the rapid transition (rise) at t = 0.
Fig. 3.5 The temporal parts of the unit step function giving rise to its Fourier transform.
Note that the Fourier transform of a time delayed unit step function can be worked out benefiting from (2.9) and from (3.6), thus

F[u(t − T)] = exp(−j2πfT) [δ(f)/2 + 1/(j2πf)] = δ(f)/2 + exp(−j2πfT) / (j2πf)    (3.7)
Finally in this section, we evaluate the Fourier transform of the rectangular function given in (3.4). We do this again using (1.5), so

X(f) = ∫_{0}^{T} A exp(−j2πft) dt = A [exp(−j2πft) / (−j2πf)]_{0}^{T} = [A / (j2πf)] [1 − exp(−j2πfT)]

     = AT exp(−jπfT) sinc(fT) ,  where sinc(a) = sin(πa) / (πa)    (3.8)

The sinc form of X(f) found in (3.8) is often encountered in communications. As seen, it is the result of a time limited rectangular function or pulse. The absolute value of X(f) found in (3.8) is plotted in Fig. 3.6.

Exercise 3.2 : Plot the phase (the argument) part of X(f) found in (3.8) against f.
Exercise 3.2 : Plot the phase (the argument) part of X f found in (3.8) against f .
Fig. 3.6 A rectangular pulse of amplitude A and duration T (left) and the absolute value |X(f)| of its sinc-shaped spectrum, with nulls at f = ±1/T, ±2/T, … (right).
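The closed form of (3.8) can be verified numerically. The sketch below (Python is an assumption here, as the notes otherwise use Matlab; A and T are arbitrary test values) compares a Riemann-sum transform of the pulse with the formula, noting that np.sinc uses the same sin(πa)/(πa) convention as (3.8):

```python
import numpy as np

A, T = 1.0, 2.0
dt = 0.0005
t = np.arange(0.0, T, dt)   # the pulse exists on 0 < t < T

def X(f):
    # Riemann-sum Fourier transform of the rectangular pulse
    return np.sum(A * np.exp(-2j * np.pi * f * t)) * dt

def closed_form(f):
    # X(f) = AT exp(-jπfT) sinc(fT) from (3.8); np.sinc(a) = sin(πa)/(πa)
    return A * T * np.exp(-1j * np.pi * f * T) * np.sinc(f * T)

for f in (0.0, 0.25, 0.5, 1.3):
    print(f, abs(X(f) - closed_form(f)))   # small discretization error
```

At f = 0 the spectrum peaks at AT, and the nulls fall at multiples of 1/T, as Fig. 3.6 indicates.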
4. Convolution
In the context of the communication system displayed in Fig. 1.2, convolution (in the time domain) allows us to work with and write the output directly in terms of time domain functions. This way, the requirement to use Fourier transforms is relaxed. The relevant integral and symbolic representations are

y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ ,  y(t) = x(t) * h(t)    (4.1)

Note that τ in (4.1) is a dummy time variable used to retrieve the output in the required variable t. As will be demonstrated later, convolution is a sliding process of one of the two functions inside the convolution integrand against t.
This parallels the frequency domain relation

Y(f) = X(f) H(f)  ⇄  y(t) = x(t) * h(t)    (4.2)

which means that multiplication in the frequency domain corresponds to convolution in the time domain. Note that convolution is not restricted to the time axis; for instance, a convolution integral as a frequency domain expression would be

Y(f) = ∫_{−∞}^{∞} X(f1) H(f − f1) df1    (4.3)

Inserting a delta function input into the convolution integral gives

y(t) = ∫_{−∞}^{∞} δ(τ) h(t − τ) dτ = ∫_{−∞}^{∞} δ(t − τ) h(τ) dτ = h(t)    (4.4)

The implication of (4.4) is that, if a system is given a delta function as input, then at the output we get the impulse response of that system. This is reasonable since, as noted from (2.2) and Fig. 2.1, the time delta function is able to excite all frequencies.
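The sifting behaviour of (4.4) is easy to see in a discrete sketch (Python here is an assumption; the impulse response e^{−t} is an arbitrary example): a one-sample approximation of δ(t), with unit area, convolved with any sampled h(t) returns h(t):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)              # an arbitrary impulse response h(t) = e^{-t} u(t)

# Discrete stand-in for δ(t): one sample of height 1/dt, so its area is 1
x = np.zeros_like(t)
x[0] = 1.0 / dt

y = np.convolve(x, h)[:len(t)] * dt   # discretized convolution integral
print(np.max(np.abs(y - h)))          # output reproduces h(t)
```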
Now we wish to talk about the time dilemma associated with the entry of a signal into a system. Fig. 4.1 depicts such a situation, where the time signal is oriented exactly as we plot it in our graphs.
Fig. 4.1 Illustration of the orientation of signal entry into a system, as plotted in graphs.
Looking at Fig. 4.1, it is easy to identify that something is wrong and strange. With the chosen direction of the time axis of x(t), we have the following order among the time quantities of Fig. 4.1

t0 < t1 < t2 < t3 < t4 < t5    (4.5)

(4.5) means that along the time axis t0 is the earliest, t1 is the next and so on, while t5 is the latest. In other words, t1 is, for instance, the future with respect to t0. We know that, in order to be in line with physical reality, the x(t) of Fig. 4.1 should enter the system with the earliest part of the signal first and the latest part last. The way to prevent and correct the wrong entry of Fig. 4.1 is displayed in Fig. 4.2.
Fig. 4.2 The illustration of signal entry into a system in the correct way.
As seen from Fig. 4.2, the correct entry is established by inverting the signal along the time axis, i.e. setting the input to x(−t) instead of x(t). This explains the significance of the minus sign in front of τ in the arguments x(t − τ) and h(t − τ). Note that, in the mathematical sense, inverting the input or inverting the system is equivalent.
Example 4.1 : Now we do a long example to help better comprehension of the above topics. To the simple RC network drawn in Fig. 4.3, we apply a rectangular pulse of amplitude A and duration T; we are asked to find the output via two methods.

Fig. 4.3 An RC low pass network: the input x(t) (Vin(f)) drives the series resistor R, and the output y(t) (Vout(f)) is taken across the capacitor C; the network (system) has responses h(t), H(f).

A. Solution using Fourier transforms : Defining a loop current i in Fig. 4.3, the transfer function becomes

H(f) = Vout(f) / Vin(f) = (i / j2πfC) / (iR + i / j2πfC) = 1 / (1 + j2πfRC)    (4.6)
From (4.6), the magnitude and phase responses of H(f), which are |H(f)| and arg[H(f)], can be found as follows

|H(f)| = 1 / [1 + (2πfRC)²]^0.5 = 1 / [1 + (f/fc)²]^0.5 ,  fc = 1 / (2πRC)

arg[H(f)] = φ(f) = −tan⁻¹(f / fc)    (4.7)
Fig. 4.4 The plots of magnitude and phase response of the network given in Fig. 4.3.
It is clear from (4.7) and Fig. 4.4 that the network of Fig. 4.3 represents a low pass filter.
In order to express the output of Fig. 4.3 in the manner of (1.2), i.e., Y(f) = X(f)H(f), we take X(f) from the middle line of (3.8), thus

Y(f) = X(f) H(f) = [A / (j2πf)] [1 − exp(−j2πfT)] · [1 / (1 + j2πfRC)]    (4.8)

Taking into account (3.6) and (3.7), and after a partial fraction expansion (4.9), we can take the inverse Fourier transform of (4.8) to get

y(t) = y1(t) + y2(t) = A [1 − exp(−t/RC)] u(t) − A [1 − exp(−(t − T)/RC)] u(t − T)    (4.10)

Exercise 4.1 : Prove the steps from the last line of (4.9) to the second line of (4.10).
Before plotting y(t), the output of the network given in Fig. 4.3, it is possible to make a number of observations regarding the result shown in (4.10):

a) The output consists of two parts: the first part, y1(t), contains u(t) and thus starts at t = 0, while the second part, y2(t), contains u(t − T) and thus starts at t = T.

b) y1(t) is applicable in the time interval t > 0, while y2(t) is applicable in the time interval t > T.

c) The output is zero (or nonexistent) prior to t = 0, which is reasonable, since the input x(t) starts at t = 0.
Fig. 4.5 The output of the RC low pass filter of Fig. 4.3 against a rectangular pulse input at three different ratios of T/RC; for T/RC >> 1 the output approaches A, while for T/RC = 1 it reaches 0.6321A at t = T.
Physical interpretations related to Fig. 4.5 can be made by viewing the output along the time axis.
Note that due to the use of unit step functions in (4.10), no time intervals are separately defined, since they are embedded into the unit step functions. It is possible to remove the unit step functions and rewrite (4.10) as follows

y(t) = 0 for t < 0 ;  y(t) = A [1 − exp(−t/RC)] for 0 < t < T ;  y(t) = A exp(−(t − T)/RC) [1 − exp(−T/RC)] for t > T    (4.11)
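A quick numerical cross-check of (4.11) can be made in Python (a sketch; the values of A, T and RC are arbitrary assumptions): convolving the rectangular input with the sampled impulse response h(t) = exp(−t/RC)/RC reproduces the piecewise closed form:

```python
import numpy as np

A, T, RC = 1.0, 1.0, 0.5
dt = 1e-4
t = np.arange(0.0, 4.0, dt)

x = np.where(t < T, A, 0.0)           # rectangular input on 0 < t < T
h = np.exp(-t / RC) / RC              # RC low pass impulse response
y = np.convolve(x, h)[:len(t)] * dt   # numerical convolution integral

# piecewise closed form from (4.11)
y_exact = np.where(t < T,
                   A * (1.0 - np.exp(-t / RC)),
                   A * np.exp(-(t - T) / RC) * (1.0 - np.exp(-T / RC)))
print(np.max(np.abs(y - y_exact)))    # small discretization error
```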
B. Solution Using (time) convolution integrals : We use (1.3) and (4.2), to write the output of the
network in Fig. 4.3, directly as
y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ    (4.12)

h(t) = F⁻¹[H(f)] = F⁻¹[ 1 / (1 + j2πfRC) ] = [u(t)/RC] exp(−t/RC)    (4.13)

y(t) = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ

     = (A/RC) ∫_{−∞}^{∞} [u(t − τ) − u(t − τ − T)] u(τ) exp(−τ/RC) dτ

     = (A/RC) ∫_{0}^{∞} [u(t − τ) − u(t − τ − T)] exp(−τ/RC) dτ    (4.14)
It is best to solve the convolution integral in (4.14) graphically. Initially, Fig. 4.6 shows how x(t − τ) is obtained from x(τ): the pulse is inverted along the τ axis and shifted by t, so that it occupies the interval t − T ≤ τ ≤ t.

Fig. 4.6 Obtaining x(t − τ) from x(τ). The variable t makes x(t − τ) slide along the negative and positive sides of the τ axis; the case marked corresponds to t being negative.

Fig. 4.7 Sliding x(t − τ) against h(τ) = (1/RC) exp(−τ/RC) u(τ): as t increases, the region of overlap passes from no overlap (t < 0) through partial overlap (0 < t < T) to full overlap of the pulse width (t > T).
From Fig. 4.7, the convolution integral can be solved in three steps (or rather two steps)

t < 0 :  y(t) = ∫ x(t − τ) h(τ) dτ = 0 (no overlap)    (4.15)

0 < t < T :  y(t) = ∫_{0}^{t} x(t − τ) h(τ) dτ = (A/RC) ∫_{0}^{t} exp(−τ/RC) dτ = A [1 − exp(−t/RC)]    (4.16)

t > T :  y(t) = ∫_{t−T}^{t} x(t − τ) h(τ) dτ = (A/RC) ∫_{t−T}^{t} exp(−τ/RC) dτ = A [1 − exp(−T/RC)] exp(−(t − T)/RC)    (4.17)

which agrees with (4.11).
Fig. 4.8 An RC high pass network: the rectangular pulse input x(t) (Vin(f)) is applied to the series capacitor C, and the output y(t) (Vout(f)) is taken across the resistor R; the network (system) has responses h(t), H(f).

Solution : Similar to (4.6), by defining a loop current in the high pass filter of Fig. 4.8, we write
H(f) = Vout(f) / Vin(f) = iR / (iR + i / j2πfC) = j2πfRC / (1 + j2πfRC)    (4.19)

(4.19) means that the H(f) of the high pass filter can be written in terms of the H(f) of the low pass filter as

H(f) = j2πfRC · [1 / (1 + j2πfRC)]    (4.20)

On the other hand, we know from the properties of Fourier transforms that

dx(t)/dt = F⁻¹[ j2πf X(f) ]  or  F[ dx(t)/dt ] = j2πf X(f)    (4.21)

Applying (4.21) to (4.20) and using (4.13), we get the impulse response of the RC high pass filter as

h(t) = F⁻¹[ j2πfRC / (1 + j2πfRC) ] = RC (d/dt) { [u(t)/RC] exp(−t/RC) }

     = δ(t) − [u(t)/RC] exp(−t/RC)    (4.22)
Bearing in mind that the Fourier transform expression of input (rectangular pulse) remains the same,
then the same can be applied to the lower line of (4.10) to obtain
Fig. 4.9 The output of the RC high pass filter of Fig. 4.8 against a rectangular pulse input of amplitude A and duration T, at three different ratios of T/RC; for T/RC = 1, the output decays to 0.3679A just before t = T.
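Since the low pass and high pass transfer functions of this RC pair sum to one, 1/(1 + j2πfRC) + j2πfRC/(1 + j2πfRC) = 1, the high pass output is simply the input minus the low pass output. A Python sketch (the parameter values are assumptions):

```python
import numpy as np

A, T, RC = 1.0, 1.0, 0.5
dt = 1e-4
t = np.arange(0.0, 4.0, dt)
x = np.where(t < T, A, 0.0)           # rectangular input

h_lp = np.exp(-t / RC) / RC           # low pass impulse response (4.13)
y_lp = np.convolve(x, h_lp)[:len(t)] * dt
y_hp = x - y_lp                       # high pass output, since H_hp = 1 - H_lp

print(y_hp[0])     # jumps to ~A at t = 0, as in Fig. 4.9
print(y_hp[-1])    # decays back toward zero for large t
```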
Exercise 4.2 : By taking the rectangular input and high pass filter of Fig. 4.8, find the output via (time)
convolution integrals.
5. Fourier Series

The input signal used in Exercises 4.1 and 4.2 is time limited (time discontinuous), and its frequency spectrum is continuous, as illustrated on the right hand side of Fig. 3.6. The use of (1.5) to arrive at the Fourier transform of time continuous (time unlimited) signals causes a problem, since the integral in (1.5) does not converge for such signals (5.1). We alleviate this problem (for time continuous periodic signals) by the use of Fourier series. For every periodic signal, we assume that a Fourier series expansion exists such that

x(t) = Σ_{n=−∞}^{∞} cn exp(j2πnf0t)    (5.2)

with coefficients

cn = (1/T0) ∫_{−T0/2}^{T0/2} x(t) exp(−j2πnf0t) dt    (5.3)

In (5.2) and (5.3), f0 is the (fundamental) frequency of x(t), and T0 = 1/f0 is the associated period. Note that once a Fourier series representation of a time continuous signal is available, a term by term Fourier transform of the exponentials in (5.2) can be implemented using (2.4) such that

X(f) = F[x(t)] = Σ_{n=−∞}^{∞} cn δ(f − nf0)    (5.4)

Example 5.1 : Take the continuous cosine waveform x(t) = cos(2πf0t) (5.5). Find the Fourier transform of this x(t) via the use of the Fourier series formulations supplied in (5.2) and (5.3).
Solution : By writing the cosine waveform of (5.5) in terms of exponentials, we get

x(t) = cos(2πf0t) = (1/2) [exp(j2πf0t) + exp(−j2πf0t)]    (5.6)

Inserting (5.6) into (5.3), the coefficients become

cn = (1/2T0) ∫_{−T0/2}^{T0/2} [exp(j2πf0t) + exp(−j2πf0t)] exp(−j2πnf0t) dt    (5.7)

which survives only for n = ±1; for instance

c1 = (1/2T0) ∫_{−T0/2}^{T0/2} dt = (1/2T0) [t]_{−T0/2}^{T0/2} = 1/2    (5.8)
By using the result delivered in (5.8), we can write the Fourier series representation of the cosine function given in (5.5) as

x(t) = cos(2πf0t) = Σ_n cn exp(j2πnf0t) = (1/2) exp(j2πf0t) + (1/2) exp(−j2πf0t) ,  c1 = c−1 = 1/2    (5.9)

Note that this result was already known when writing (5.6).
Similarly, the sine waveform has the representation xs(t) = sin(2πf0t) = (1/2j) [exp(j2πf0t) − exp(−j2πf0t)] (5.10). From (5.9) and (5.10), we can find the Fourier transforms of the cosine and sine waveforms using (5.4), thus

Xc(f) = F[xc(t)] = F[cos(2πf0t)] = (1/2) [δ(f − f0) + δ(f + f0)]

Xs(f) = F[xs(t)] = F[sin(2πf0t)] = (j/2) [δ(f + f0) − δ(f − f0)]    (5.11)
The time waveforms and the frequency transforms of cosine and sine signals are drawn in Fig. 5.1.
Fig. 5.1 Time waveforms and frequency spectra of continuous cosine and sine signals: Xc(f) consists of deltas of weight 0.5 at f = ±f0, while Xs(f) consists of a delta of weight 0.5j at f = −f0 and one of weight −0.5j at f = +f0.
It is important to realize that a time limited (thus nonperiodic) sinusoidal signal is quite different from
the time continuous signals discussed above. For clarification, a time limited (thus nonperiodic) cosine
signal and a time unlimited (continuous) signal are plotted together in Fig. 5.2.
Fig. 5.2 Plots of a time limited (discontinuous) cosine signal of duration T0 and a time unlimited (continuous) cosine signal.
The signals of Fig. 5.2 are mathematically defined by placing a time range directly or by means of another function, as shown below

x1(t) = cos(2πf0t) [u(t) − u(t − T0)] ,  x2(t) = cos(2πf0t) for −∞ < t < ∞    (5.12)

As seen from (5.12), when the time range is defined by unit step functions, there is no need to write it additionally. It is also customary to assume that the time function is unlimited in time if no time range is placed across it. We also use dots to indicate that the function continues from minus infinity to plus infinity, as marked in Fig. 5.2.
Exercise 5.1 : Find and plot the Fourier transform of x1 t given in (5.12), also plotted on the first line
of Fig. 5.2.
Hint : You can use the symbolic tool box facility of Matlab for this. For instance to find the integral of
cos 2 f 0t from t 0 up to t t1 , type the followings on the command window
>> syms f0 t t1
>> int(cos(2*pi*f0*t),t,0,t1)
ans =
sin(2*pi*f0*t1)/(2*pi*f0)

(ECE 373 - HTE, Haziran 2013, Sayfa 23)
Now we turn to a more complicated case of periodic rectangular function whose time waveform is
shown in Fig. 5.3.
Fig. 5.3 A periodic rectangular function of amplitude A, pulse width T and period T0.

x(t) = A for −T/2 + nT0 ≤ t ≤ T/2 + nT0 ,  n = …, −2, −1, 0, 1, 2, … ;  0 elsewhere    (5.13)
The Fourier coefficients of the rectangular function given in Fig. 5.3 and (5.13) can be evaluated using
(5.3), hence
cn = (1/T0) ∫_{−T0/2}^{T0/2} x(t) exp(−j2πnf0t) dt = (A/T0) ∫_{−T/2}^{T/2} exp(−j2πnf0t) dt

   = (A/T0) [exp(−j2πnf0t) / (−j2πnf0)]_{−T/2}^{T/2} = A [exp(jπnf0T) − exp(−jπnf0T)] / (j2πn)    (5.14)

By using the sinc function definition given in (3.8), (5.14) can be converted into

cn = A sin(πnT/T0) / (πn) = (AT/T0) sin(πnT/T0) / (πnT/T0) = (AT/T0) sinc(nT/T0)    (5.15)
From (5.15) and (5.2), the Fourier series of the periodic rectangular function can be written as

x(t) = (AT/T0) Σ_{n=−∞}^{∞} sinc(nT/T0) exp(j2πnf0t)    (5.16)

and, via (5.4), its Fourier transform is

X(f) = F[x(t)] = (AT/T0) Σ_{n=−∞}^{∞} sinc(nT/T0) δ(f − nf0)    (5.17)
The absolute value of frequency spectrum defined by (5.17) is plotted in Fig. 5.4, together with its
associated time signal.
Fig. 5.4 The time periodic rectangular function and its corresponding frequency spectrum |X(f)|: delta functions at f = 0, ±f0, ±2f0, …, weighted by a sinc envelope of peak AT/T0 with nulls at multiples of 1/T.
Comparing Fig. 5.4 to Fig. 3.6, where (on the right hand side) the spectrum of the time limited (single) rectangular pulse is displayed, we see that in both cases the frequency spectrum is sinc shaped. In the case of the single rectangular function this spectrum is continuous, but when the time function is continuous and periodic, the spectrum is discontinuous, or rather discrete, consisting of delta functions as depicted in Fig. 5.4. This is not surprising, since the time and frequency axes act in reverse.
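The line spectrum of Fig. 5.4 can be reproduced numerically by evaluating the coefficient integral (5.3) for one period of the rectangle and comparing against the (AT/T0) sinc(nT/T0) form of (5.15). A Python sketch (A, T and T0 are assumed values):

```python
import numpy as np

A, T, T0 = 1.0, 0.5, 2.0
dt = 1e-4
t = np.arange(-T0 / 2, T0 / 2, dt)
x = np.where(np.abs(t) < T / 2, A, 0.0)   # one period of the waveform

def c(n):
    # c_n = (1/T0) ∫ x(t) exp(-j2πn t/T0) dt over one period, as in (5.3)
    return np.sum(x * np.exp(-2j * np.pi * n * t / T0)) * dt / T0

for n in range(4):
    print(n, c(n).real, A * T / T0 * np.sinc(n * T / T0))   # matches (5.15)
```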
6. Properties of Fourier Transforms

a) If

x(t) ⇄ X(f) ,  y(t) ⇄ Y(f)    (6.1)

then x(t) and X(f), and y(t) and Y(f), form Fourier transform pairs, and the time shifting property holds

F[x(t − t0)] = X(f) exp(−j2πft0)    (6.2)

b) Convolution property is

F[x(t) * y(t)] = X(f) Y(f)    (6.3)

c) Modulation property is

F[x(t) cos(2πf0t)] = (1/2) [X(f − f0) + X(f + f0)]    (6.4)

d) Rayleigh's property states that the energy of the signal is the same whether we use the time or the frequency axis representation, thus

∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df    (6.5)

The autocorrelation function of x(t) and the spectral density are related by

F[Rx(τ)] = |X(f)|²    (6.6)

Rx(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt    (6.7)

Finally, the differentiation property is

F[dx(t)/dt] = j2πf X(f)    (6.8)
7. Classification of Signals
A signal is said to be deterministic if its value is known at all times; for instance, the periodic x(t) shown on the right hand side of Fig. 7.1 is deterministic. Random signals, such as the noise and message signals on the left hand side of Fig. 7.1, cannot be specified in advance in this way.

Fig. 7.1 Random signals (a noise waveform and a message signal, left) and a deterministic periodic signal of amplitude A and period T0 (right).
If

x(t) = x(t + T0) for −∞ < t < ∞    (7.2)

then x(t) is said to be periodic; otherwise it is nonperiodic. For instance, the x(t) on the right hand side of Fig. 7.1 is periodic, whereas the two signals on the left hand side of the same figure are nonperiodic. Another example of a nonperiodic signal is the (single) rectangular pulse shown on the left hand side of Fig. 3.3.
An analogue signal x(t) is a continuous function of time, thus defined at all times, whereas a discrete signal exists only at discrete times. An analogue signal and a discrete signal in the form of time delta functions (samples of the analogue signal taken every Ts) are displayed in Fig. 7.2. As seen from this figure, x(nTs) is undefined when t ≠ nTs.

Fig. 7.2 An analogue signal and its sampled (discrete) version with sampling interval Ts.
Initially, we state the definition of power. If x(t) is a voltage waveform, the instantaneous power dissipated across a resistor R can be written as

Px(t) = x²(t) / R    (7.3)

On the other hand, if x(t) is a current waveform, then the definition in (7.3) converts into

Px(t) = x²(t) R    (7.4)

It is clear from (7.3) and (7.4) that if we set R = 1 Ω, then the powers based on the voltage and the current waveforms become identical. Such power is called normalized power. With such a definition, for x(t) referring to a voltage or current signal, the following integral indicates the energy dissipated during a time interval −T/2 ≤ t ≤ T/2

Ex = ∫_{−T/2}^{T/2} x²(t) dt    (7.5)

and the associated average power is

Px = Ex / T = (1/T) ∫_{−T/2}^{T/2} x²(t) dt    (7.6)
At the detection stage, the performance of a communication system is governed by the received signal energy: received signals with higher energy give rise to fewer errors. A signal is classified as an energy signal if

Ex = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt ,  0 < Ex < ∞ (Ex is finite)

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt ,  Px → 0    (7.7)

(7.7) means that it is possible to quantify an energy signal only by its amount of energy, since its power goes to zero.
For a power signal, on the other hand,

Ex = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt → ∞ (Ex is infinite)

Px = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt ,  0 < Px < ∞ (Px is finite)    (7.8)

(7.8) means that for a power signal the power is finite while the energy is infinite; thus energy is a meaningless quantity for a power signal. In the case of a periodic signal with period T0, the power expression on the second line of (7.8) becomes

Px = (1/T0) ∫_{−T0/2}^{T0/2} x²(t) dt ,  0 < Px < ∞ (Px is finite)    (7.9)
0
Fig. 7.3 Example signals: x1(t), a time limited waveform of amplitudes ±A1 on (−t1, t1), and x2(t), a continuous waveform of amplitudes ±A2.

Fig. 7.4 Example signals: x1(t), a single rectangular pulse of width T centred on t = 0, and x2(t), a periodic rectangular waveform of amplitude A2, width T and period T0.

Fig. 7.5 Example signals: x1(t), a time limited waveform existing between t = −t1 and t = t2, and x2(t), a time unlimited waveform.
Exercise 7.1 : For the x1(t) and x2(t) signals given in Figs. 7.3, 7.4 and 7.5, identify each of them within the classifications introduced above (deterministic or random, periodic or nonperiodic, energy or power signal).
8. Spectral Density

As seen in the above developments, for instance in (2.8), (3.6) and (3.8), X(f) is usually complex. This is because X(f) has two components, magnitude and phase. Sometimes we are not interested in the phase and seek a real quantity to observe how the spectral components are distributed over the frequency axis; thus we define the spectral density function given by

Sx(f) = X(f) X*(f) = |X(f)|²    (8.1)

For energy signals, using the Fourier transform property of (6.5), we can write

Ex = ∫_{−∞}^{∞} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df = ∫_{−∞}^{∞} Sx(f) df    (8.2)
Example 8.1 : Find the (energy) spectral density of x1 t given in Fig. 7.4.
x(t) = A for 0 < t < T ;  0 elsewhere    (8.3)

X(f) = [A / (j2πf)] [1 − exp(−j2πfT)] = AT exp(−jπfT) sinc(fT)    (8.4)
From Figs. 3.3 and 7.4, it is easy to see that x1 t and x t are related as
x1(t) = x(t + T/2)    (8.5)
Now using the time shifting property of the Fourier transform, i.e. (6.2), we get

X1(f) = F[x1(t)] = F[x(t)] exp(jπfT) = AT sinc(fT) = AT sin(πfT) / (πfT)    (8.7)

It is interesting to note that (8.7) does not contain a phase term, since x1(t) is centred with respect to t = 0.

The result in (8.7) could have equally been achieved by the direct Fourier transform of x1(t), as shown below

X1(f) = ∫_{−∞}^{∞} x1(t) exp(−j2πft) dt = A ∫_{−T/2}^{T/2} exp(−j2πft) dt

      = A [exp(−j2πft) / (−j2πf)]_{−T/2}^{T/2} = AT sinc(fT)    (8.8)

and the (energy) spectral density becomes

Sx1(f) = X1(f) X1*(f) = |X1(f)|² = A²T² sinc²(fT)    (8.9)
Note that it is easy to plot Sx1(f) in Matlab with the following command lines

>> A = 1; T = 1;
>> f = -4:0.01:4;
>> Sx1f = (A*T*sinc(f*T)).^2;
>> plot(f,Sx1f)
It is important to realize that in the spectrum of S x1 f , the sidelobes (tails) will decay more rapidly
than those shown in Fig. 3.6 (why ?).
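Rayleigh's relation (8.2) can be verified numerically for the pulse: the time-domain energy A²T should equal the integral of the spectral density A²T² sinc²(fT). A Python sketch (the frequency-axis truncation at ±200 is an assumption that costs only a tiny tail error):

```python
import numpy as np

A, T = 1.0, 1.0
E_time = A**2 * T                     # ∫ x²(t) dt for the pulse of (8.3)

df = 1e-3
f = np.arange(-200.0, 200.0, df)      # truncated frequency axis
Sx = (A * T * np.sinc(f * T))**2      # Sx(f) = A²T² sinc²(fT) from (8.9)
E_freq = np.sum(Sx) * df

print(E_time, E_freq)                 # the two energies agree closely
```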
Now we turn to the spectral density of power signals, in particular periodic power signals. We know
from the developments in section 5, in particular from (5.16) and (5.17) that the Fourier transform of
periodic (time) signals is in the form of
X(f) = F[x(t)] = F[ Σ_n cn exp(j2πnf0t) ] = Σ_n cn δ(f − nf0)    (8.10)

where the bracketed term is the Fourier series representation of x(t).
Then, by using a slightly modified version of the power definition given in (7.9),

Px = (1/T0) ∫_{−T0/2}^{T0/2} x²(t) dt = (1/T0) ∫_{−T0/2}^{T0/2} [ Σ_{n1} c_{n1} exp(j2πn1f0t) ] [ Σ_{n2} c_{n2}* exp(−j2πn2f0t) ] dt

   = (1/T0) Σ_{n1} Σ_{n2} c_{n1} c_{n2}* ∫_{−T0/2}^{T0/2} exp(j2π(n1 − n2) f0 t) dt

   = (1/T0) Σ_{n1} Σ_{n2} c_{n1} c_{n2}* ∫_{−T0/2}^{T0/2} exp(j2π(n1 − n2) t / T0) dt = Σ_{n} |cn|²    (8.11)

where the last step follows because the integral equals T0 for n1 = n2 and zero otherwise.
The spectral density for a power signal can be deduced from the expression on the far right hand side of (8.10), thus

Sx(f) = X(f) X*(f) = |X(f)|² = Σ_{n1} Σ_{n2} c_{n1} c_{n2}* δ(f − n1f0) δ(f − n2f0)

      = Σ_n |cn|² δ(f − nf0)    (8.12)

The result on the second line of (8.12) is based on the fact that in the multiplication of the two delta functions, only the coincident ones survive; the others become zero. This means

δ(f − n1f0) δ(f − n2f0) = δ(f − nf0) if n1 = n2 = n ;  0 if n1 ≠ n2    (8.13)
Eventually, it is possible to express the power of a power signal in terms of the spectral density as follows

Px = (1/T0) ∫_{−T0/2}^{T0/2} x²(t) dt = ∫_{−∞}^{∞} |X(f)|² df = ∫_{−∞}^{∞} Sx(f) df    (8.14)
Example 8.2 : Find the power of x(t) = A cos(2πf0t) both from the time and the frequency integrals of (8.14).

Px = (1/T0) ∫_{−T0/2}^{T0/2} x²(t) dt = (1/T0) ∫_{−T0/2}^{T0/2} A² cos²(2πf0t) dt = (A²/2T0) ∫_{−T0/2}^{T0/2} [1 + cos(4πf0t)] dt

   = (A²/2T0) T0 = A²/2 = V²rms of x(t)    (8.15)
From (5.9) and (5.11), we know that there are only two Fourier series coefficients of the cosine function, c1 = c−1 = A/2, thus (8.12) turns into

Sx(f) = |X(f)|² = Σ_n |cn|² δ(f − nf0) = |c1|² δ(f − f0) + |c−1|² δ(f + f0)

      = (A²/4) δ(f − f0) + (A²/4) δ(f + f0)    (8.16)
9. Autocorrelation
Autocorrelation measures the similarity of a signal with its time shifted copy. It was defined earlier for an energy signal in (6.7), repeated here as

Rx(τ) = ∫_{−∞}^{∞} x(t) x*(t − τ) dt = ∫_{−∞}^{∞} x(t + τ) x*(t) dt    (9.1)

(9.1) means that we have to shift x(t) over the whole interval of overlap by an amount τ and perform the integration over t.
By taking x(t) to be the (single) rectangular function of Fig. 3.3, whose mathematical expression is given in (3.4), we illustrate graphically in Fig. 9.1 the sliding of x*(t − τ) = x(t − τ) against x(t).

Fig. 9.1 Graphical illustration of the computation of the autocorrelation function of a single rectangular pulse: the regions of overlap for τ < 0 and τ > 0, and the resulting triangular Rx(τ) with peak A²T at τ = 0, vanishing at τ = ±T.
when τ ≥ 0 :  Rx(τ) = ∫_{τ}^{T} A² dt = A² (T − τ)

when τ ≤ 0 :  Rx(τ) = ∫_{0}^{T+τ} A² dt = A² (T + τ)

so that

Rx(τ) = A² (T − |τ|) ,  0 ≤ |τ| ≤ T    (9.2)
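The triangular Rx(τ) of (9.2) drops straight out of a discrete correlation. A Python sketch (the values of A and T are assumptions):

```python
import numpy as np

A, T = 1.0, 2.0
dt = 1e-3
t = np.arange(0.0, T, dt)
x = A * np.ones_like(t)                     # the rectangular pulse of (3.4)

R = np.correlate(x, x, mode='full') * dt    # Rx(τ) for τ in (-T, T)
tau = (np.arange(len(R)) - (len(t) - 1)) * dt

R_exact = A**2 * (T - np.abs(tau))          # the triangle of (9.2)
print(np.max(np.abs(R - R_exact)))          # essentially zero
```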
When x(t) is a power signal, in particular a periodic power signal with period T0, then the autocorrelation function is defined as the time average

Rx(τ) = (1/T0) ∫_{−T0/2}^{T0/2} x(t) x*(t − τ) dt
A sample of calculations for a periodic signal is found in ECE 373MT-19112012_Solutions and in ECE
373 MT_21112011_Solutions_CC. Here we show the solution of ECE 373MT-19112012_Solutions.
c) Rx(τ) ⇄ Sx(f) : the autocorrelation function and the spectral density form a Fourier transform pair.

d) Rx(0) = Ex if x(t) is an energy signal, and Rx(0) = Px if x(t) is a power signal.
10. Statistical Description of Random Signals

As mentioned in section 7, random signals can best be described by statistical quantities. As regards defining how a variable x is distributed over the statistical domain, we have three functions, written below together with their relations to each other

p(x) = dF(x)/dx : probability density function (pdf)

F(x) = ∫_{−∞}^{x} p(x1) dx1 : cumulative distribution function

P(x ≤ x1) : probability of x being less than or equal to x1    (10.1)
To illustrate the implications of (10.1), we take the case shown in Fig. 10.1: a triangular pdf of peak 1/x0 extending from x = −x0 to x = x0, with F(0) = 0.5.
Fig. 10.1 Typical example of probability density function, p x , probability of x assuming a certain
value, P x x1 and cumulative distribution function, F x .
p(x) = (1/x0²)(x + x0) for −x0 ≤ x ≤ 0 ;  p(x) = (1/x0²)(x0 − x) for 0 ≤ x ≤ x0 ,  x0 : positive quantity

F(x) = (x + x0)² / (2x0²) for −x0 ≤ x ≤ 0 ;  F(x) = 1 − (x0 − x)² / (2x0²) for 0 ≤ x ≤ x0    (10.2)
Another example drawn from practice is the representation of the throwing of a die with p(x) and F(x). This case is illustrated in Fig. 10.2.
Fig. 10.2 Probability density function, p x , probability of x assuming equal and less than a certain
value, P x x1 and cumulative distribution function, F x for the case of throwing a dice.
As seen from Fig. 10.2, it is also possible to define the probability up to a certain value. The definition of such a probability becomes

P(x ≤ x1) = F(x1) = ∫_{−∞}^{x1} p(x) dx
In our case, the most important probability density function (pdf) is the Gaussian pdf, which we shall study in the next section.
From Figs. 10.1 and 10.2 and the expressions (10.1) and (10.2), it is possible to make the following generalizations about p(x) and F(x): p(x) is nonnegative and its total area is unity, while F(x) increases monotonically from 0 to 1. The moments of the variable x are defined as

E[xⁿ] = ∫_{−∞}^{∞} xⁿ p(x) dx    (10.5)

In (10.5), the first and second moments are used often. The first moment is named the mean (average), also denoted by mx

mx = E[x] = ∫_{−∞}^{∞} x p(x) dx    (10.6)

while the second moment is

E[x²] = ∫_{−∞}^{∞} x² p(x) dx    (10.7)
It is also possible to define central moments in terms of the difference between x and the mean. Of particular importance is the second central moment, commonly called the variance, hence

Variance : σx² = E[(x − mx)²] = ∫_{−∞}^{∞} (x − mx)² p(x) dx    (10.8)

The square root of the variance, σx, is called the standard deviation. Both the variance and the standard deviation indicate how the variable values are concentrated around the mean.
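For a discrete pdf, the integrals (10.6) and (10.8) reduce to weighted sums over the delta locations. A short Python sketch using the fair die of Fig. 10.2, whose pdf is p(x) = (1/6) Σ δ(x − k), k = 1…6:

```python
# Moments of the fair-die pdf of Fig. 10.2: six deltas of weight 1/6
values = [1, 2, 3, 4, 5, 6]
weights = [1 / 6] * 6

mean = sum(v * w for v, w in zip(values, weights))             # (10.6)
var = sum((v - mean)**2 * w for v, w in zip(values, weights))  # (10.8)
print(mean, var)   # 3.5 and 35/12 ≈ 2.917
```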
Solution : From the given x data, we construct the pdf p(x) and set its integral to unity, so that we can determine the respective amplitudes. This is illustrated in Fig. 10.3.

Fig. 10.3 The pdf p(x) as a set of weighted delta functions: A δ(x + 4), A δ(x + 2), 2A δ(x + 1), A δ(x) and A δ(x − 3), with further components at the marked x values.

Setting the total area of p(x) to unity gives

10A = 1 ,  A = 0.1    (10.9)

from which the mean mx follows (10.10), and the variance becomes

σx² = ∫_{−∞}^{∞} (x − mx)² p(x) dx = 3.88    (10.11)
Exercise 10.1 : Assume that the outcome of a random event is represented by the uniform probability distribution p(x) plotted in Fig. 10.4. Find the cumulative distribution F(x), the mean mx and the variance σx² for the pdf given in Fig. 10.4.

Fig. 10.4 A uniform probability density function.
As defined in (10.6), the average (mean) is obtained by multiplying the pdf with all possible values of the variable and integrating the result. There is another possibility, however, as explained below. Suppose we have a random signal, in particular a noise signal. To characterize the statistical properties of this random event, we may envisage two ways. One is to take one noise source and examine its (statistical) properties along the time axis. The other is to have many noise sources and examine the statistical properties by taking samples from all of them simultaneously (at a fixed time). The first method corresponds to averaging along the time axis, the other to averaging along the event (ensemble) axis. This situation is illustrated in Fig. 10.5.
Fig. 10.5 Time averaging versus ensemble averaging: a single noise source nᵢ(t) is sampled along the time axis at t = t₁, t₂, …, while the ensemble of sources n₁(t), n₂(t), …, n_N(t) is sampled across sources at a fixed instant t = t_k.
If we want to retrieve the statistical properties by examining a single source along the time axis, then we operate on the samples

n(t₁), n(t₂), …, n(t_N) (10.12)

On the other hand, if we want to retrieve the statistical properties by examining several sources at a fixed time, then we operate on the samples

n₁(t_k), n₂(t_k), …, n_N(t_k) (10.13)
For instance, if we arrive at the mean (average) by using the samples in (10.12), then this is called time
averaging, whereas if we obtain the mean by using the samples in (10.13), then it is called ensemble
averaging. In either case, the greater the number of samples, the nearer we get to the theoretical limit
or the more accurate our result becomes.
The time average of any noise signal n(t) is achieved in the following manner

m_{n,N} = (1/N) Σ_{k=1}^{N} n(t_k),  m_n = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} n(t) dt (10.14)
Provided that the statistical properties of a random event (signal) do not change (drift) with time, it is not important in which time slice or at which fixed time instant we take the samples, so long as the number of samples is sufficiently high that the random event is represented with all its characteristics.
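For such a stationary source, the equivalence of the two averages in (10.14) can be illustrated with a quick simulation (a Python sketch; the Gaussian source, its parameters and the sample count are assumed for illustration):

```python
import random

random.seed(0)

# Illustrative parameters: a zero-mean Gaussian noise source with sigma = 2.
N = 200_000
sigma = 2.0

# Time average: one source, N samples along the time axis, as in (10.14).
time_avg = sum(random.gauss(0.0, sigma) for _ in range(N)) / N

# Ensemble average: N independent sources, one sample each at a fixed time t_k.
ensemble_avg = sum(random.gauss(0.0, sigma) for _ in range(N)) / N

print(time_avg, ensemble_avg)  # both approach the true mean, 0
```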
Example 10.2 : Suppose that a gambling house orders 6000 dice and wants them all to be fair. Describe how such a test can be conducted.
Solution : If all dice are fair, then the probability density function of this event should be exactly as shown in the upper figure of Fig. 10.2. This means that if all the dice are tested, then we should get the following results
As described above, we can achieve the result in (10.15) in two ways. The first is to take only one die (assuming that this selected die is a perfect representative of the remaining dice) and throw it many times (sampling along the time axis, to get the time average). The second is to throw all the dice, or some of them, simultaneously (to get the ensemble average).
The result in (10.16) means that, in order to approach the result of the upper figure in Fig. 10.2 within a confidence interval of 98 %, the minimum number of times we have to throw a single die is 60; alternatively, we have to throw a minimum of 60 dice simultaneously to approximate the result of the upper figure in Fig. 10.2.
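The time-averaging version of this test can be sketched as a simulation (a Python illustration; the throw count and tolerance are assumptions chosen for the demo, not the 60-throw / 98 % figures above):

```python
import random
from collections import Counter

random.seed(1)

# Throw one die many times (time averaging) and compare the empirical
# relative frequency of each face with the ideal value 1/6.
throws = 60_000
counts = Counter(random.randint(1, 6) for _ in range(throws))

for face in range(1, 7):
    freq = counts[face] / throws
    assert abs(freq - 1 / 6) < 0.01  # every face close to 1/6 for a fair die
    print(face, freq)
```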
It is clear from (10.6) and (10.14) that it is possible to arrive at the mean either by ensemble or by time averaging. We illustrate this with the following example, based on question 1 of ECE 373_MT Exam_16.11.2009.
Example 10.3 : For the time continuous (power) signal given in Fig. 10.6, find the mean and the power
and make appropriate comments on the results.
Solution : From Fig. 10.6, it is clear that the signal x(t) is a DC shifted AC signal. The mathematical expression for this signal is

x(t) = 0.25A + 0.5A cos(2π f₀ t)

where 0.25A is the DC shift.
A result which shows that time averaging gives the DC value of the signal. On the other hand, from (8.15), we get the rms value of the AC component of x(t) as
V_{x,rms} = 0.5A / √2 (10.18)
Based on the definition of normalized power in (7.3), we calculate the DC, AC and total power of x(t) as
P_x = (1/T₀) ∫_{−T₀/2}^{T₀/2} x²(t) dt = (1/T₀) ∫_{−T₀/2}^{T₀/2} [0.25A + 0.5A cos(2π f₀ t)]² dt = (0.25A)² + (0.5A)²/2 = 0.1875 A² (10.20)

The same statistics can also be obtained by ensemble averaging over the pdf of x(t), which for this DC shifted sinusoid is

p(x) = 1 / [π √(0.25A² − (x − 0.25A)²)],  −0.25A ≤ x ≤ 0.75A (10.21)

For instance, the mean ∫ x p(x) dx can be evaluated symbolically in Matlab:
>> syms A x
>> F = x/(pi*sqrt(0.25*A^2 - (x - 0.25*A)^2));   % x times the pdf of (10.21)
>> int(F,x,-0.25*A,0.75*A)
ans = A/4
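The symbolic result can be cross-checked numerically (a Python sketch; A = 1 is an assumed, illustrative value, and the midpoint rule is used to stay clear of the integrable singularities at the interval edges):

```python
import math

# Numeric cross-check of the pdf in (10.21) with A = 1 (assumed for illustration):
# p(x) = 1 / (pi * sqrt(0.25*A^2 - (x - 0.25*A)^2)),  -0.25A <= x <= 0.75A
A = 1.0

def p(x):
    return 1.0 / (math.pi * math.sqrt(0.25 * A**2 - (x - 0.25 * A)**2))

n_steps = 500_000
lo, hi = -0.25 * A, 0.75 * A
h = (hi - lo) / n_steps

area = 0.0   # should integrate to ~1 (normalization)
mean = 0.0   # should integrate to ~A/4, matching the symbolic Matlab result
for k in range(n_steps):
    x = lo + (k + 0.5) * h   # midpoint of the k-th subinterval
    area += p(x) * h
    mean += x * p(x) * h

print(area, mean)
```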
Exercise 10.2 : In a manner similar to Example 10.3, find the AC power in x(t) using (10.8), i.e.,
P_{x,AC} = σ_x² = E[(x − m_x)²] = ∫_{−∞}^{∞} (x − m_x)² p(x) dx (10.23)
Note that you may have to perform the integration in Matlab (note also that numeric integration delivers the correct result for (10.23)).
Noise and message signals are random signals. The message signal (in discrete form) is, however, also deterministic in the sense that within a given symbol interval it can only assume one of a finite number of waveform shapes. When the noise time signal is sampled along the time axis and the measured amplitudes are mapped into a probability density function, we obtain a Gaussian pdf as shown in Fig. 11.1.
Fig. 11.1 A noise time waveform, n(t), sampled along the time axis (left) and the pdf constructed from the sample amplitudes, approximating the Gaussian pdf (right).
If the noise time signal n(t) and the samples in Fig. 11.1 were extended to infinity, then we would get the Gaussian pdf marked in green on the right hand side of Fig. 11.1. Otherwise, with a limited number of samples (more than those shown on the left hand side of Fig. 11.1), we can only get the approximate Gaussian pdf plotted as histogram bars on the right hand side of Fig. 11.1. We must bear in mind that for such a pdf, the normalization should be applied as shown in (10.9).
The thermal noise created by the agitation of atoms has the following (normalized) Gaussian pdf
p(n) = [1 / (σ_n √(2π))] exp(−n² / (2σ_n²)) (11.1)
where σ_n² denotes the variance, corresponding to the amount of power in the noise. Thermal noise has the following properties:
a) The time and the ensemble averages are the same. This also means that whether we analyse a single noise source or many different noise sources, we get the same result. Hence a single noise source is capable of exhibiting the properties of all noise sources (in the world), if it is observed for a sufficient length of time.
b) Its pdf is Gaussian as stated in (11.1).
c) It has zero mean.
From these properties we name this type of noise as additive white Gaussian noise (AWGN). From
(11.1), the pdf of noise with the marking of the parameters is plotted in Fig. 11.2.
Fig. 11.2 The Gaussian noise pdf of (11.1): the peak value 1/(σ_n √(2π)) occurs at n = 0 (the mean, m_n = 0), and the value exp(−0.5)/(σ_n √(2π)) occurs at n = ±σ_n; a small σ_n gives a narrow pdf, a large σ_n a broad one.
Note that, as stated above and illustrated in Fig. 11.2, the mean of noise is always zero, but the variance may change. A small variance causes the width of the Gaussian exponential to contract, while a large variance broadens it. In the noise time waveform of Fig. 11.1, a small variance means small amplitudes are seen along the time axis, and a large variance means large amplitudes are seen along the time axis.
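The values marked in Fig. 11.2 follow directly from (11.1), as a quick check shows (a Python sketch; σ_n = 1.5 is an assumed, illustrative value):

```python
import math

# The zero-mean Gaussian pdf of (11.1).
def gauss_pdf(n, sigma):
    return math.exp(-n**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

sigma = 1.5                          # illustrative value; variance sigma^2 = 2.25

peak = gauss_pdf(0.0, sigma)         # maximum 1/(sigma*sqrt(2*pi)), at the zero mean
at_sigma = gauss_pdf(sigma, sigma)   # value at n = +/- sigma
# At n = +/- sigma the pdf has dropped to exp(-0.5) times its peak, as marked in Fig. 11.2.
assert abs(at_sigma - peak * math.exp(-0.5)) < 1e-12

# Normalization: numerically integrating the pdf over (effectively) all n gives ~1.
h, span = 1e-3, 10 * sigma
area = sum(gauss_pdf(-span + (k + 0.5) * h, sigma) for k in range(int(2 * span / h))) * h
print(peak, area)
```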
The additive nature of noise has the implication that noise is mixed with the transmitted (message) signal at the receiver as

r = s + n (11.2)

where r is the received signal, s the transmitted signal and n the noise signal.
The interpretation of (11.2) is that the noise pdf retains the Gaussian profile and the variance, but its mean shifts from zero to the value s (the square root of the energy in the message signal). Thus the pdf of (11.1) becomes

p(r) = [1 / (σ_n √(2π))] exp(−(r − s)² / (2σ_n²)) (11.3)
Exercise 11.1 : Plot (11.3) and make the necessary markings on this plot, as done in Fig. 11.2.
Another important parameter of noise is its autocorrelation. Noise is a power signal, hence from (9.3) we have

R_n(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} n(t) n*(t − τ) dt (11.4)

Noise is completely random; it never repeats itself along the time axis, so it is impossible to find a time shifted copy of the noise as τ sweeps the range from minus infinity to plus infinity. This way, the result of (11.4) becomes
R_n(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} n(t) n*(t − τ) dt = (N₀/2) δ(τ) = σ_n² δ(τ) (11.5)

N₀/2 is known as the two sided noise spectral density and is related to the variance of noise as shown in (11.5). From the autocorrelation property c) listed at the end of Section 9, we know that the autocorrelation function and the spectral density form a Fourier transform pair. Thus
F{R_n(τ)} = S_n(f) = N₀/2 = σ_n² (11.6)
According to (11.6), the noise spectral density is exactly equivalent to noise variance. The results of
(11.5) and (11.6) are displayed in Fig. 11.3.
Fig. 11.3 The autocorrelation function of noise, R_n(τ): an impulse of weight N₀/2 at τ = 0 (left); the spectral density of noise, S_n(f): a flat level of N₀/2 for all f (right).
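The vanishing correlation at nonzero lags can be checked numerically (a Python sketch with illustrative parameters; for sampled noise the sample autocorrelation replaces the limit in (11.5), giving ~σ_n² at lag 0 and ~0 elsewhere):

```python
import random

random.seed(3)

# Illustrative parameters: N samples of zero-mean Gaussian noise with sigma_n = 1.
N, sigma_n = 100_000, 1.0
n = [random.gauss(0.0, sigma_n) for _ in range(N)]

def autocorr(lag):
    """Time-average estimate of the autocorrelation at the given lag."""
    return sum(n[k] * n[k - lag] for k in range(lag, N)) / (N - lag)

print(autocorr(0))   # ~ sigma_n^2 = 1: the noise power
print(autocorr(25))  # ~ 0: noise does not correlate with a shifted copy of itself
```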
Example 11.1 : To demonstrate the above stated properties of Gaussian noise, we utilize the Matlab file GaussianNoise.m, also available on the course webpage. There we see the variations in the noise time waveform, the noise pdf and the noise spectral density against the number of noise samples and the noise variance.