Lecture Notes - EENG 32151 Communication Systems
Grey Book
Complex Fourier series: evaluation of complex coefficients for periodic functions; inversion relationship; the idea of spectra. Fourier transform: derivation of transform from Fourier series; inverse transform; convolution integral; impulse response functions; proof and use of duality; convolution and Parseval's theorems. Introduction to sampling and reconstruction, including the sampling theorem and aliasing. Introduction to random processes.
Lecture Content
1. From Signals to Complex Fourier Series
2. From Complex Fourier Series to the Fourier Transform
3. Convolution. The impulse response and transfer functions
4. Sampling, Aliasing
5. Power & Energy Spectra, Autocorrelation, and Spectral Densities
6. Random Processes and Signals
The content will flow from one lecture slot to another. If there is any time left over, we will go through examples.
References
These notes are not meant to be comprehensive. Fourier Analysis is a topic where a good book with decent diagrams and examples can make a difference, so do read promiscuously. An updated reading list, with comments, is kept on the course website.
www.robots.ox.ac.uk/~dwm/Courses/2TF
Just the notes and the tute sheets get put on weblearn.
1.1 Linear systems

By this stage, you will have realized how important linear systems and their analysis are to engineers. In a linear system, one described by a linear differential equation of some order, the response y(t) occurs at the same frequency as the input x(t); and if the input's amplitude is changed by some factor, the output's amplitude changes by that same factor. When faced with a non-linear system, engineers often linearize the system by considering incremental inputs and outputs that occur around a fixed operating point. The use of small signal equivalents in transistor circuits is an obvious example.

You will also have met the frequency-domain description of a linear system, in which output and input phasors are related by the transfer function:
$$Y(\omega) = H(\omega)X(\omega),$$
but so far you have thought of $X(\omega)$ and $Y(\omega)$ as phasor representations of harmonic input and output at a single frequency. At a single frequency, the temporal input and output are related to the frequency representations by
$$x(t) = \mathrm{Re}\{X(\omega)e^{i\omega t}\}, \qquad y(t) = \mathrm{Re}\{Y(\omega)e^{i\omega t}\}.$$
As a concrete example, consider as input the voltage $x(t) = V_0\cos\omega t$ applied to an inductor $L$ in series with a resistor $R$, and as output the voltage $y(t)$ across $R$.
The input phasor, transfer function and output phasor are, respectively,
$$X(\omega) = V_0, \qquad H(\omega) = \frac{R}{R + i\omega L}, \qquad Y(\omega) = H(\omega)X(\omega) = \frac{V_0 R}{R + i\omega L}.$$
Notice that although we can write $Y(\omega) = H(\omega)X(\omega)$, there seems to be no equivalently crisp operation involving $y(t)$ and $x(t)$. Linear systems obey the principle of linear superposition. It says that if the input is a linear combination of signals, the output is the same linear combination of the individual outputs:
$$x(t) = \alpha_1 x_1(t) + \alpha_2 x_2(t) + \cdots \;\Rightarrow\; y(t) = \alpha_1 y_1(t) + \alpha_2 y_2(t) + \cdots.$$
If $x_1(t)$ etc. are harmonic signals at frequencies $\omega_1$ etc., the output must be
$$y(t) = \alpha_1\,\mathrm{Re}\{H(\omega_1)X(\omega_1)e^{i\omega_1 t}\} + \alpha_2\,\mathrm{Re}\{H(\omega_2)X(\omega_2)e^{i\omega_2 t}\} + \cdots.$$
A rather more elegant way of thinking about this is to write the discrete frequency spectrum of $x(t)$ as $X(\omega) = \alpha_1 X(\omega_1) + \alpha_2 X(\omega_2) + \cdots$; then
$$Y(\omega) = \alpha_1 Y(\omega_1) + \alpha_2 Y(\omega_2) + \cdots = \alpha_1 H(\omega_1)X(\omega_1) + \alpha_2 H(\omega_2)X(\omega_2) + \cdots.$$
This is remarkable, but would be utterly arcane were it not for an amazing property of (most) periodic signals, viz: a periodic signal of angular frequency $\omega_0$ can¹ be represented as the sum of a set of harmonic signals at frequencies $\omega_0$, $2\omega_0$, $3\omega_0$, and so on. These sums of harmonic waves are Fourier Series. For example, the Fourier series of a unit square wave with a zero at $t = 0$ and period $T = 2\pi/\omega_0$ is
$$f(t) = \frac{4}{\pi}\left[\sin\omega_0 t + \frac{1}{3}\sin 3\omega_0 t + \frac{1}{5}\sin 5\omega_0 t + \cdots\right] = \frac{4}{\pi}\sum_{n\,\mathrm{odd}}\frac{1}{n}\sin n\omega_0 t.$$
Now, armed with a system's transfer function $H(\omega)$, the principle of linear superposition, and this and similar Fourier Series, you can work out the output of the system corresponding to the square wave, or any other periodic input. This is illustrated in Figure 1.1.
¹ Provided it satisfies the Dirichlet conditions, which we meet later.
[Figure 1.1 (schematic): each harmonic X(ω₀), X(2ω₀), X(3ω₀), ... is scaled by H(ω₀), H(2ω₀), H(3ω₀), ... to give Y(ω₀), Y(2ω₀), Y(3ω₀), ...]
Figure 1.1: The Fourier Series, the principle of linear superposition, and the transfer function, allow one to compute the output for any periodic input.
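As a concrete numerical sketch of Figure 1.1 (a minimal illustration: the first-order low-pass H(ω) = 1/(1 + iωRC) and the component values are assumptions, not taken from these notes), each Fourier component of the square wave is scaled by H at its own harmonic frequency and the scaled harmonics are superposed:

```python
import numpy as np

# Minimal sketch (assumed system): unit square wave driven through a
# first-order low-pass H(w) = 1/(1 + j*w*R*C). Each harmonic X(n*w0) is
# multiplied by H(n*w0), and the outputs are superposed (Figure 1.1).
R, C = 1e3, 1e-6                     # assumed values: 1 kOhm, 1 uF
w0 = 2 * np.pi * 50                  # assumed fundamental: 50 Hz
t = np.linspace(0, 0.04, 2000)       # two periods

y = np.zeros_like(t)
for n in range(1, 200, 2):           # the square wave has odd harmonics only
    Xn = 4 / (np.pi * n)             # Fourier series amplitude at n*w0
    Hn = 1 / (1 + 1j * n * w0 * R * C)
    y += np.real(Xn * Hn * np.exp(1j * n * w0 * t))
```

The same loop works for any periodic input: only the list of coefficients Xn changes.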
1.2 The gap in our knowledge
Unfortunately, not all inputs are periodic. Can Fourier analysis help us? Well, if the signal x(t) is of finite duration we might make it periodic by "pretending" it repeats. However, this does not help with general signals of infinite duration. Another way of perceiving the gap in our knowledge is to realize that well-behaved periodic functions give rise only to discrete frequency spectra. For example, the unit square wave (Figure 1.2(a)) has the frequency spectrum shown in Figure 1.2(b), where the components are in the ratio 1 : 1/3 : 1/5 : 1/7 : ⋯. However, instinct tells us that there must be signals that have a continuous frequency spectrum, as sketched in Figure 1.2(c). Nothing we know yet about Fourier analysis would allow the analysis of a continuous spectrum.
Figure 1.2: A periodic function (a) gives a discrete frequency spectrum (b). (c) A continuous frequency spectrum cannot be derived from a periodic function.
1.3 Fourier Transforms
We shall see that Fourier transforms provide a method of transforming infinite-duration signals, both non-periodic and periodic, from the time domain into the continuous frequency domain. In fact, they provide an entire language with which to work and think in the frequency domain. The language involves a good deal of new vocabulary and several new mathematical techniques, like convolution, correlation, modulation, sampling, spectral density, δ-functions and so on. Most of these techniques depend at the lowest level on integration. The integrals can look daunting, but it is important to rise up onto the next level, so that you can say "a signal's power spectral density is the Fourier transform of its autocorrelation". Success will come if you practice the mathematics, but also practice fixing the concepts in your head by using simple physical examples.
1.4 Joseph Fourier
If at any point you become really cross, just remember that Fourier seems a really jolly chap. He had an interesting life not only in mathematics but also in politics in France during Napoleon's time. Read his biography (and those of many other mathematicians) at the website maintained by the Department of Maths and Computer Science at St Andrews University. Follow the links from www-history.mcs.st-and.ac.uk/~history/
1.5 Signals
You will have noticed that the emphasis of our introductory discussion drifted from systems towards signals. Indeed, this course could be titled "An introduction to analogue signal processing". Let us start by defining various signal types. A signal might be a function of one or of several variables. Space and time are very common variables, but here we will tend to stick to one variable, and choose time t more often than not. Again more often than not, we will think about electrical signals, but do remember that the variation in temperature during the day is just as much a signal as the voltage output of a thermocouple sensing it. An analogue signal is one whose amplitude covers a continuous range. It may be bounded (e.g. 0 to 5 V), but it can just as easily take the value 2.0572941675975 V as 2.0572941675974 V, and indeed anything in between! A continuous-time analogue signal is one that has a value for a continuous sweep of values of its parameter, t. Some examples are shown in Figure 1.4. Notice that the
value can be zero, but there it is defined as zero; and notice too that a continuous-time signal does not have to be a continuous function.
[Figure 1.4: Three examples of continuous-time analogue signals f(t).]
A discrete-time signal is one that has values at discrete times only. Typically these will be at regular intervals, arising from regular sampling. Examples are shown in Figure 1.5. Note that rather than f(t), this type of signal is labelled f(nT), where n is an integer and T is the sampling interval. A digital signal is one that is by definition sampled, but whose amplitude can only take one of a discrete set of values represented by some binary coding scheme. Suppose we used 4 bits. This could represent {0, 1, ..., 15} V, or {-0.2, -0.1, 0.0, ..., 1.3} V, or {0, 1, 4, 9, ..., 225} V; but, using the first example, we cannot properly represent any value between 0 and 1 V. In this course we are not concerned with digital signals, and consider only analogue continuous-time and discrete-time signals.
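A small sketch of the 4-bit point (the sampled sinusoid and the rounding rule are assumptions for illustration, using the first example set {0, 1, ..., 15} V):

```python
import numpy as np

# Sketch: a sampled analogue value is forced onto one of 16 levels,
# so values between adjacent levels cannot be represented exactly.
T = 0.01                                              # assumed sampling interval
n = np.arange(100)
f_nT = 7.5 + 7.5 * np.sin(2 * np.pi * 5 * n * T)      # discrete-time analogue values
digital = np.clip(np.round(f_nT), 0, 15).astype(int)  # 4-bit codes {0,...,15}
```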
Figure 1.5: A continuous-time signal f(t) sampled at intervals of T to generate a discrete-time signal f(nT).
A causal signal is one that is non-zero only for t > 0; i.e., f(t) = 0 for all t < 0. A casual signal is one that has been mis-spelled. See above. A deterministic signal is one that can be described by a function, mapping, or some other recipe or algorithm. If you know t, you can work out f(t). We shall be interested in deterministic signals for much of the course. A random signal is determined by some underlying random process. Although its statistical properties might be known (e.g. you might know its mean and variance), you cannot evaluate its value at time t. You might be able to say something about the likelihood of its taking some value at time t, but no more. We will come to think about random processes towards the end of the course.
1.6 Orthogonal basis functions - Revision
Before revising the Fourier series, we think about orthogonality and functions, taking a scenic meander via vectors. Consider three vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, which are of different lengths but lie at right angles to each other. They form a set of orthogonal basis vectors in 3D. It should be evident that any 3D vector $\mathbf{f}$ can be described by a unique linear combination of these vectors,
$$\mathbf{f} = A_1\mathbf{v}_1 + A_2\mathbf{v}_2 + A_3\mathbf{v}_3.$$
How would you find these unique coefficients for a particular $\mathbf{f}$? To find $A_1$, take the scalar or inner product of both sides with $\mathbf{v}_1$ and then exploit orthogonality, which tells you that $\mathbf{v}_2\cdot\mathbf{v}_1 = 0$ and $\mathbf{v}_3\cdot\mathbf{v}_1 = 0$, so
that
$$A_1 = \frac{\mathbf{f}\cdot\mathbf{v}_1}{\mathbf{v}_1\cdot\mathbf{v}_1}.$$
(The denominator is important: $\mathbf{v}_1$ and so on were not unit vectors.) In the early 19th century it was realized that it was possible to treat functions $f$ in the same way and to make them up from a set of orthogonal basis functions. Pretend there is a set of orthogonal "v-functions", so that, similar to the vector case,
$$f = A_1 v_1 + A_2 v_2 + A_3 v_3 + \cdots$$
Taking the inner product of both sides with $v_1$,
$$\langle f, v_1\rangle = A_1\langle v_1, v_1\rangle + A_2\langle v_2, v_1\rangle + \cdots.$$
But $\langle v_2, v_1\rangle = 0$ and so on, so that
$$A_1 = \frac{\langle f, v_1\rangle}{\langle v_1, v_1\rangle} \qquad\text{and, in general,}\qquad A_n = \frac{\langle f, v_n\rangle}{\langle v_n, v_n\rangle}.$$
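A minimal numerical sketch of this recipe (the basis vectors and coefficients are arbitrary choices for illustration):

```python
import numpy as np

# Orthogonal but non-unit basis: coefficients recovered by projection,
# A_n = (f . v_n) / (v_n . v_n).
v1, v2, v3 = np.array([2., 0, 0]), np.array([0, 3., 0]), np.array([0, 0, 5.])
f = 4 * v1 - 2 * v2 + 7 * v3          # build f with known coefficients
A1 = np.dot(f, v1) / np.dot(v1, v1)   # recovers 4.0
A2 = np.dot(f, v2) / np.dot(v2, v2)   # recovers -2.0
A3 = np.dot(f, v3) / np.dot(v3, v3)   # recovers 7.0
```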
There is a considerable number of famous sets of orthogonal basis functions, many derived by French mathematicians, but it was Fourier who noticed that the cosines and sines make up such a set, one capable of representing periodic functions.
1.7 Fourier Series
Fourier found that a periodic function $f(t)$, with period $T$, can be written as a sum of cosine and sine functions of the fundamental frequency $\omega = 2\pi/T$ and its harmonics $2\omega$, $3\omega$, etc.:
$$f(t) = \frac{1}{2}A_0 + \sum_{n=1}^{\infty}A_n\cos(n\omega t) + \sum_{n=1}^{\infty}B_n\sin(n\omega t).$$
The expressions for An and Bn are found from the orthogonality conditions as
$$A_n = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\cos(n\omega t)\,dt, \qquad B_n = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\sin(n\omega t)\,dt.$$
To derive the stated expressions for the coefficients, we must first define the inner product for these functions. It is defined by the integral over a period, divided by the period. Then, we should demonstrate that the basis functions, $c_n = \cos n\omega t$ and $s_n = \sin n\omega t$, are orthogonal. In other words, we must show that the inner products $\langle c_n, c_m\rangle$ and $\langle s_n, s_m\rangle$ are zero for $m \neq n$, that $\langle c_n, s_m\rangle = 0$, and that only $\langle c_n, c_n\rangle$ and $\langle s_n, s_n\rangle$ are finite. One finds
$$\frac{1}{T}\int_{-T/2}^{+T/2}\cos m\omega t\,\cos n\omega t\,dt = \begin{cases}0 & m\neq n\\ 1/2 & m=n\neq 0\\ 1 & m=n=0\end{cases}$$
$$\frac{1}{T}\int_{-T/2}^{+T/2}\sin m\omega t\,\sin n\omega t\,dt = \begin{cases}0 & m\neq n\\ 1/2 & m=n\neq 0\\ 0 & m=n=0\end{cases}$$
$$\frac{1}{T}\int_{-T/2}^{+T/2}\cos m\omega t\,\sin n\omega t\,dt = 0.$$
The only complication is that we have both $c_n$ and $s_n$ in the basis set, so we need two coefficients with the subscript $n$. Calling these $A_n$ and $B_n$ we find
$$A_n = \frac{\langle f, c_n\rangle}{\langle c_n, c_n\rangle} = \frac{\frac{1}{T}\int_{-T/2}^{+T/2} f(t)\cos n\omega t\,dt}{\frac{1}{T}\int_{-T/2}^{+T/2}\cos n\omega t\,\cos n\omega t\,dt} = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\cos n\omega t\,dt$$
and
$$B_n = \frac{\langle f, s_n\rangle}{\langle s_n, s_n\rangle} = \frac{\frac{1}{T}\int_{-T/2}^{+T/2} f(t)\sin n\omega t\,dt}{\frac{1}{T}\int_{-T/2}^{+T/2}\sin n\omega t\,\sin n\omega t\,dt} = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\sin n\omega t\,dt.$$
If you felt there was sleight of hand in the above, you may prefer to write down the series, and then, to find the $B_m$ for example, multiply it by $\sin(m\omega t)$ and average over a period:
$$\frac{1}{T}\int_{-T/2}^{+T/2} f(t)\sin(m\omega t)\,dt = \frac{1}{T}\int_{-T/2}^{+T/2}\left[\frac{A_0}{2} + \sum_{n=1}^{\infty}A_n\cos(n\omega t) + \sum_{n=1}^{\infty}B_n\sin(n\omega t)\right]\sin(m\omega t)\,dt$$
$$= \sum_{n=1}^{\infty}\frac{A_n}{T}\int_{-T/2}^{+T/2}\cos(n\omega t)\sin(m\omega t)\,dt + \sum_{n=1}^{\infty}\frac{B_n}{T}\int_{-T/2}^{+T/2}\sin(n\omega t)\sin(m\omega t)\,dt,$$
the $A_0$ term having integrated to zero. Orthogonality then removes all but the $n = m$ sine term, leaving
$$\frac{1}{T}\int_{-T/2}^{+T/2} f(t)\sin(m\omega t)\,dt = \frac{B_m}{2}.$$
Similarly, to obtain $A_m$ we would multiply by $\cos(m\omega t)$ and average over one period.
1.8 Examples
Eg 1: Square Wave

Consider the unit square wave of period T: f(t) = +1 for 0 < t < T/2 and f(t) = -1 for -T/2 < t < 0. Then
$$A_m = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\cos(m\omega t)\,dt = \frac{2}{T}\left[-\int_{-T/2}^{0}\cos(m\omega t)\,dt + \int_{0}^{+T/2}\cos(m\omega t)\,dt\right] = 0$$
and
$$B_m = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\sin(m\omega t)\,dt = \frac{4}{T}\int_{0}^{+T/2}\sin(m\omega t)\,dt = \frac{2}{m\pi}\left[-\cos(m\pi) + 1\right] = \frac{2}{m\pi}\left[(-1)^{m+1} + 1\right],$$
so $B_m = 4/(m\pi)$ for m odd, and zero for m even.
Eg 2: Triangular wave

Take f(t) = |t| for -π ≤ t < π, repeated periodically. The period is 2π, hence ω = 1. Then
$$A_m = \frac{2}{2\pi}\int_{-\pi}^{+\pi} f(t)\cos(m\omega t)\,dt = \frac{1}{\pi}\left[\int_{-\pi}^{0}(-t)\cos(mt)\,dt + \int_{0}^{+\pi} t\cos(mt)\,dt\right] = \frac{2}{\pi m^2}\left[(-1)^m - 1\right]$$
(integrating by parts via $\int t\sin(mt)\,dt$), so that $A_m = -4/(\pi m^2)$ for m odd and zero for m even, with $A_0 = \pi$. Because f(t) is even, every $B_m = 0$.
Figure 1.6: The FS of a square wave built up over 1, 3, 5 terms; then 11 and 101 terms; then 1001 terms. The slight "chip" at the discontinuity is a result of the Gibbs phenomenon, discussed later.
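The build-up in Figure 1.6 is straightforward to reproduce numerically; a minimal sketch, taking T = 1 for convenience and summing the B_m of Eg 1:

```python
import numpy as np

# Partial sum of the square-wave series: f(t) ~ sum over odd m of
# 4/(m*pi) * sin(m*w*t), with T = 1 so w = 2*pi.
def square_partial_sum(t, N):
    s = np.zeros_like(t)
    for m in range(1, N + 1, 2):      # odd harmonics only
        s += 4 / (np.pi * m) * np.sin(2 * np.pi * m * t)
    return s

t = np.linspace(-0.5, 0.5, 2001)
partials = {N: square_partial_sum(t, N) for N in (1, 3, 5, 11, 101, 1001)}
```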
Eg 3: Train of tophats
Take a train of tophats of unit height and width a, centred on t = 0 and repeating with period T. Then
$$A_0 = \frac{2}{T}\int_{-T/2}^{+T/2} f(t)\,dt = \frac{2a}{T}$$
and
$$A_n = \frac{2}{T}\int_{-a/2}^{+a/2}\cos n\omega t\,dt = \frac{4}{T}\int_{0}^{a/2}\cos n\omega t\,dt = \frac{4}{T}\,\frac{1}{n\omega}\sin(n\omega a/2) = \frac{2}{\pi n}\sin(n\omega a/2),$$
while every $B_n = 0$ because the signal is even. So
$$f(t) = \frac{a}{T} + \sum_{n=1}^{\infty}\frac{2}{\pi n}\sin(n\omega a/2)\cos n\omega t.$$
Of interest later are the values of the $A_n$. Suppose we set the (on/off) ratio to be $\delta = a/T$; then
$$A_n = 2\delta\,\frac{\sin(n\pi\delta)}{(n\pi\delta)}.$$
If the on/off ratio is δ = 1/π, then the A_n are taken from the sin(x)/(x) curve as shown in Figure 1.8(b). If we reduce δ, here by 1/2, then the A_n values are sampled from the curve more closely, as in Figure 1.8(c).
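A quick numerical sketch of this picture (the two values of δ are chosen only to show the closer sampling):

```python
import numpy as np

# Tophat-train coefficients A_n are samples of the sin(x)/x curve taken
# at x = n*pi*delta; halving delta samples the curve twice as finely.
def A_n(n, delta):
    x = n * np.pi * delta
    return 2 * delta * np.sin(x) / x

n = np.arange(1, 20)
coarse = A_n(n, 1 / np.pi)        # samples at x = 1, 2, 3, ...
fine = A_n(n, 1 / (2 * np.pi))    # samples at x = 0.5, 1.0, 1.5, ...
```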
[Figure 1.7: The triangular wave of Eg 2 (period 2π), with its Fourier series summed to 5 and to 11 terms.]
1.9 The Dirichlet conditions

There is a set of conditions, known as the Dirichlet conditions, that determine whether or not a function can be expressed as a Fourier Series. A function must
1. be periodic, or be of finite extent so that it can be made periodic by extension;
2. have only a finite number of discontinuities within a period;
3. have discontinuities that are of finite size;
4. have only a finite number of maxima and minima within a period.
So, we can find the FS of the function f(t) = eᵗ, -1 ≤ t < 1, in Figure 1.9(a), but not of f(t) = 1/t for -1 ≤ t < 1 in Figure 1.9(b).
[Figure 1.9: (a) the periodic extension of f(t) = eᵗ, -1 ≤ t < 1; (b) the periodic extension of f(t) = 1/t, -1 ≤ t < 1, which has an infinite discontinuity.]

Figure 1.8: (a) The sin(x)/(x) curve. (b), (c) The tophat-train coefficients A_n are samples of this curve, taken more closely as δ is reduced.
1.10 Fourier Series at Discontinuities
Provided the Dirichlet conditions are satisfied, at a point of discontinuity in the original function a Fourier series converges to
$$FS(t) = \frac{1}{2}\left(f(t_-) + f(t_+)\right),$$
where $f(t_-)$ is the value of the signal $f(t)$ just below the discontinuity, and $f(t_+)$ that just above. We have already seen this in the square wave example. We have also seen that we require a large number of terms in the series to faithfully reproduce the function at a discontinuity.
1.11 Symmetry properties
The task of deriving series coefficients is made a little easier by exploiting symmetries in the signal $f(t)$ and the basis functions $\sin(n\omega t)$ and $\cos(n\omega t)$. The sine basis functions all have odd 1/2-wave symmetry; i.e., $\sin(-n\omega t) = -\sin(n\omega t)$ (Figure 1.10). If the signal $f(t)$ is even, then all the integrals
$$\int_{-T/2}^{+T/2} f(t)\sin(n\omega t)\,dt$$
vanish, so that
an even signal contains only cosine terms; and, similarly, an odd signal contains only sine terms.
One also notes that, for an even signal and an odd signal $f(t)$ respectively,
$$\int_{-T/2}^{+T/2} f(t)\cos(n\omega t)\,dt = 2\int_{0}^{T/2} f(t)\cos(n\omega t)\,dt, \qquad \int_{-T/2}^{+T/2} f(t)\sin(n\omega t)\,dt = 2\int_{0}^{T/2} f(t)\sin(n\omega t)\,dt.$$
Figure 1.10: Basis function sin(ωt) has odd 1/2-wave, even 1/4-wave symmetry. sin(2ωt) has odd 1/2-wave, odd 1/4-wave symmetry.
Further use can be made of symmetries about the 1/4-wave points. Any sin(nωt) with n even has odd symmetry about these points. Thus if a signal f(t) has even symmetry about the 1/4-wave points, any n-even sine terms will vanish. However
if the signal's symmetry is odd about the 1/4-wave points, the n-odd sine terms vanish. Similar arguments can be made for the cosine series. The square wave and triangular waves (Figure 1.11) are examples, but do note that the signal f(t) does not have to have 1/2-wave symmetry to exploit 1/4-wave symmetry. One could consider higher symmetries, but they get increasingly difficult to recognize.
Figure 1.11: The square wave (period T) and the triangular wave (period 2π) of Egs 1 and 2, which exhibit the symmetries discussed above.
1.12 Completing Functions
Suppose you need to derive the Fourier series of a function that is defined only over a finite interval and is not periodic; for example, f(t) = t for 0 ≤ t ≤ 1. First, you MUST make the function periodic, but exactly how is a matter of choice. Figure 1.12 shows three possible ways of completing the example function. Which is best? Because the cosine series has no discontinuities, it will require fewer terms to make an overall decent approximation. However, the kink at t = 0 will not be accurate. If a good approximation with few terms is required close to t = 0, it is probably best to use the sine series completion.
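A minimal numerical comparison (the period-2 completions, their standard coefficients, and the term count N = 10 are assumptions for illustration): the cosine completion errs only slightly near the kink, while the sine completion errs badly near the jump at t = 1.

```python
import numpy as np

# Compare completions of f(t) = t on [0, 1], both with period 2.
# Even (cosine) completion -> triangle wave; odd (sine) completion -> sawtooth.
t = np.linspace(0, 1, 1001)
N = 10
f_cos = 0.5 * np.ones_like(t)             # A0/2 for the triangle wave
f_sin = np.zeros_like(t)
for n in range(1, N + 1):
    f_sin += 2 * (-1) ** (n + 1) / (n * np.pi) * np.sin(n * np.pi * t)
    if n % 2 == 1:                        # triangle wave keeps odd terms only
        f_cos += -4 / (n * np.pi) ** 2 * np.cos(n * np.pi * t)

print(np.max(np.abs(f_cos - t)))          # small everywhere
print(np.max(np.abs(f_sin - t)))          # large near the discontinuity at t = 1
```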
1.13 The Gibbs Phenomenon
Something rather curious occurs in the FS at a discontinuity. Residual oscillations lead to an overshoot, whose size is a characteristic of the underlying function. (It can be large! For a square wave of amplitude A, the overshoot is around 0.18A.) As more and more terms are added to the series, the oscillations get squeezed into a shorter and shorter region around the discontinuity, but their characteristic amplitude remains constant! This effect is known as the Gibbs phenomenon.
Figure 1.12: The function as defined on [0, 1]; completed as a sine series; completed as a cosine series.
Figure 1.13: The Gibbs phenomenon. (b) shows the discontinuity of (a) at a finer time scale, but with more terms added.
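A minimal numerical check of the claim for a unit square wave (period 1 assumed): the overshoot above the +1 level stays near 0.18 as terms are added; it just squeezes toward the discontinuity.

```python
import numpy as np

# Measure the Gibbs overshoot just above the jump at t = 0.
t = np.linspace(1e-5, 0.02, 40001)
for N in (101, 1001):
    s = np.zeros_like(t)
    for m in range(1, N + 1, 2):
        s += 4 / (np.pi * m) * np.sin(2 * np.pi * m * t)
    print(N, s.max() - 1.0)     # ~0.179 in both cases
```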
1.14 Mean square values of Fourier Series: Parseval's Theorem
The instantaneous power in a signal is proportional to its modulus squared. For a periodic signal we can derive the average signal power by integrating over a period:
$$\text{Ave pwr} = \frac{1}{T}\int_{-T/2}^{+T/2}|f(t)|^2\,dt = \frac{1}{T}\int_{-T/2}^{+T/2}\left|\frac{A_0}{2} + \sum_{n=1}^{\infty}A_n\cos n\omega t + \sum_{n=1}^{\infty}B_n\sin n\omega t\right|^2 dt.$$
At first this looks nightmarish, because the squaring introduces an infinite number of cross terms, on the bottom line of the next expression:
$$\frac{1}{T}\int_{-T/2}^{+T/2}\left\{\frac{A_0^2}{4} + A_1^2\cos^2\omega t + A_2^2\cos^2 2\omega t + \cdots + B_1^2\sin^2\omega t + B_2^2\sin^2 2\omega t + \cdots\right.$$
$$\left. +\; A_0A_1\cos\omega t + \cdots + A_0B_1\sin\omega t + \cdots + 2A_1B_1\cos\omega t\sin\omega t + 2A_1B_2\cos\omega t\sin 2\omega t + \cdots\right\}dt.$$
However, when you integrate over a period, orthogonality gets rid of all the cross terms on the bottom line! So, we are left with the simple result that the mean square is
$$\text{Ave pwr} = \frac{1}{T}\int_{-T/2}^{+T/2}|f(t)|^2\,dt = \frac{1}{4}A_0^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left(A_n^2 + B_n^2\right).$$
A good way to remember this is as
$$\text{Mean Square} = (\text{d.c. amplitude})^2 + \sum\frac{1}{2}(\text{a.c. amplitude})^2,$$
which is true whether the signal is pure d.c. or single-frequency a.c. The root mean square value of a periodic signal is therefore
$$f_{\text{RMS}} = \left[\frac{1}{4}A_0^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left(A_n^2 + B_n^2\right)\right]^{1/2}.$$
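A quick numerical check on the unit square wave of Eg 1, for which A₀ = Aₙ = 0 and Bₙ = 4/(nπ) for odd n (the sample-average approximation to the integral is an implementation choice):

```python
import numpy as np

# Mean square from the time domain vs (1/4)A0^2 + (1/2)*sum(An^2 + Bn^2).
t = np.linspace(0, 1, 200000, endpoint=False)
f = np.sign(np.sin(2 * np.pi * t))
ms_time = np.mean(f ** 2)                                     # ~1
ms_series = 0.5 * sum((4 / (np.pi * n)) ** 2 for n in range(1, 4001, 2))
print(ms_time, ms_series)                                     # both close to 1
```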
1.15 An example

[Q] Suppose the full-wave rectified voltage in Figure 1.14(a) is applied to the circuit of Figure 1.14(b): a 5 kΩ resistor in parallel with a 1 µF capacitor. Find the current i(t) drawn from the source.

Figure 1.14: (a) A full-wave rectified voltage v(t), with period 10 ms. (b) The circuit: a 5 kΩ resistor in parallel with a 1 µF capacitor.
[A] From the diagram, $T = 10^{-2}$ s and $\omega_0 = 2\pi/T = 200\pi$ rad s⁻¹. By grinding, or from HLT, we find the signal $v(t)$, with $t$ in seconds, is
$$v(t) = \frac{24}{\pi}\left[1 + \frac{2}{3}\cos(\omega_0 t) - \frac{2}{15}\cos(2\omega_0 t) + \cdots\right]\ \text{V}.$$
The current drawn from the source at a single frequency $\omega$ is $I(\omega) = Y(\omega)V(\omega)$, where $Y$ is the admittance. But we have voltage components at $\omega = 0$, $\omega = \omega_0$, $\omega = 2\omega_0$ and so on. We must work out the admittances at all these frequencies.
$$Y(\omega) = \frac{1}{R} + j\omega C = (2\times10^{-4}) + j\omega(1\times10^{-6}) = 10^{-4}\left(2 + j\,\frac{\omega}{100}\right),$$
so that $Y(0) = 10^{-4}(2 + j0)$, $Y(\omega_0) = 10^{-4}(2 + j2\pi)$, $Y(2\omega_0) = 10^{-4}(2 + j4\pi)$, and in general $Y(n\omega_0) = 10^{-4}(2 + j2\pi n)$.
To avoid mixing functions of time with phasors, it makes sense to rewrite v(t) as the real part of
$$v(t) = \mathrm{Re}\left\{\frac{24}{\pi}\left[1 + \frac{2}{3}e^{j\omega_0 t} - \frac{2}{15}e^{j2\omega_0 t} + \cdots\right]\right\}.$$
Hence
$$i(t) = \mathrm{Re}\left\{\frac{24\times10^{-4}}{\pi}\left[2 + \frac{2}{3}(2 + j2\pi)e^{j200\pi t} - \frac{2}{15}(2 + j4\pi)e^{j400\pi t} + \cdots\right]\right\}$$
$$= \frac{48\times10^{-4}}{\pi}\left[1 + \frac{2}{3}(1 + \pi^2)^{1/2}\cos(200\pi t + \phi_1) - \frac{2}{15}(1 + 4\pi^2)^{1/2}\cos(400\pi t + \phi_2) + \cdots\right],$$
where $\phi_n = \arctan(n\pi)$.
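As a cross-check, the first few terms of i(t) can be rebuilt numerically from phasors (a sketch using the values of this example; only the d.c. term and first two harmonics are kept):

```python
import numpy as np

# i(t) = Re{ sum over n of V_n * Y(n*w0) * exp(j*n*w0*t) }.
R, C, w0 = 5e3, 1e-6, 200 * np.pi
t = np.linspace(0, 0.02, 2000)                  # two periods
V = {0: 24/np.pi, 1: (24/np.pi) * 2/3, 2: -(24/np.pi) * 2/15}
i_t = np.zeros_like(t)
for n, Vn in V.items():
    Y = 1 / R + 1j * n * w0 * C                 # admittance at n*w0
    i_t += np.real(Vn * Y * np.exp(1j * n * w0 * t))
```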
1.16 The Complex Fourier Series
You will have noticed while working out i(t) that there was a rather unsatisfactory moment when we had to rewrite the Fourier series using an exponential representation. It raises the following question.
Would it be possible to use an exponential, or complex, form of the Fourier Series from the outset?
It turns out, perhaps not surprisingly, that the set $e^{in\omega t}$ provides a set of orthogonal basis functions, and that a periodic function that satisfies the Dirichlet conditions and has period $T = 2\pi/\omega$ can be written as
$$f(t) = \sum_{n=-\infty}^{\infty} C_n e^{in\omega t}.$$
The inner product is defined as integration over a period with the complex conjugate, divided by the period, and the orthogonality conditions are simpler than before:
$$\frac{1}{T}\int_{-T/2}^{+T/2} e^{in\omega t}\,e^{-im\omega t}\,dt = \begin{cases}0 & m \neq n\\ 1 & m = n.\end{cases}$$
We determine the form of $C_n$ using the orthogonality relationship in either the short hand or the long hand way. The short hand way says
$$C_m = \frac{\langle f(t),\,e^{im\omega t}\rangle}{\langle e^{im\omega t},\,e^{im\omega t}\rangle} = \frac{1}{T}\int_{-T/2}^{+T/2} f(t)\,e^{-im\omega t}\,dt.$$
The long hand way substitutes the series $f(t) = \sum_{n=-\infty}^{\infty} C_n e^{in\omega t}$, multiplies both sides by $e^{-im\omega t}$, and averages over a period:
$$\frac{1}{T}\int_{-T/2}^{+T/2} f(t)\,e^{-im\omega t}\,dt = \sum_{n=-\infty}^{\infty}\frac{C_n}{T}\int_{-T/2}^{+T/2} e^{in\omega t}\,e^{-im\omega t}\,dt = C_m,$$
so that, again,
$$C_m = \frac{1}{T}\int_{-T/2}^{+T/2} f(t)\,e^{-im\omega t}\,dt.$$
There is a third way of deriving the complex Fourier series. It is a bit feeble because it simply uses the relationship $e^{-im\omega t} = \cos m\omega t - i\sin m\omega t$. The relationship between the C coefficients and those for the non-complex Fourier series is
$$C_m = \begin{cases}(A_m - iB_m)/2 & \text{for } m > 0\\ A_0/2 & \text{for } m = 0\\ (A_{|m|} + iB_{|m|})/2 & \text{for } m < 0.\end{cases}$$
Also note that $C_{-m} = C_m^{*}$, where ${}^{*}$ denotes the complex conjugate.
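A minimal numerical check of these relations for the square wave of Eg 1 (the average over one sampled period stands in for the defining integral):

```python
import numpy as np

# C_m = (1/T) * integral of f(t) e^{-i m w t} dt, compared with (A_m - i B_m)/2.
T = 2 * np.pi
w = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 400000, endpoint=False)
f = np.sign(np.sin(w * t))                         # odd unit square wave
for m in (1, 2, 3):
    Cm = np.mean(f * np.exp(-1j * m * w * t))      # (1/T) * integral
    Bm = 2 / (m * np.pi) * ((-1) ** (m + 1) + 1)   # Eg 1: 4/(m*pi) for odd m
    print(m, Cm, -1j * Bm / 2)                     # A_m = 0 for this odd signal
```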
1.17 Parseval and the Complex Fourier Series
This is asked as a question on the first tutorial sheet, and I want you to think about it, rather than copy it down. Notes on this will appear later in the course to check your answers!
1.18 Summary
We have defined certain terms used to describe signals. We have reviewed how periodic signals can be represented as Fourier Series (linear sums of pure harmonic signals) and revised their properties. We have introduced the complex Fourier Series, which is often a more convenient representation to use when having to deal with phase shifts. Interesting and useful as the complex Fourier Series is, there is nothing in it that addresses the gap in our knowledge, which is how to cope with non-periodic signals of infinite duration; that is, those which have continuous spectra in the frequency domain. This is where we move next.