
SIGNAL PROCESSING & FILTER DESIGN

B3 Option – 8 lectures
Michaelmas Term 2003

Stephen Roberts
Recommended texts

Analogue filters

Paul Horowitz, Winfield Hill. The Art of Electronics. 2nd Ed. Cambridge University Press. Especially useful for getting a feel for the issues in analogue design.

G.B. Clayton. Linear Integrated Circuit Applications. Macmillan, 1975.

Digital signal processing

Lynn, P.A. An Introduction to the Analysis and Processing of Signals. A useful introductory text.

Oppenheim, Willsky & Nawab. Signals and Systems, 2nd Ed. Prentice Hall.

Oppenheim & Schafer. Digital Signal Processing. Prentice Hall. A good book covering basic concepts and advanced topics.

Matlab manuals (for signal processing & system identification toolboxes)

Lecture 1 - Introduction

1.1 Introduction
Signal processing is the treatment of signals (information-bearing waveforms or
data) so as to extract the wanted information, removing unwanted signals and
noise. The two applications mentioned below are familiar to me, but I could have
equally well chosen applications as diverse as the analysis of seismic waveforms
or the recording of moving rotor blade pressures and heat transfer rates.

1.1.1 Examples of Signal Processing Applications


Processing of speech signals
Up to now, human–machine communication has been almost entirely by means
of keyboards and screens, but there are substantial disadvantages in this approach
for many applications. The human speech perception and production processes
are so complex that very complicated signal processing algorithms are required
even to solve simple problems such as the recognition of single words spoken
by one speaker. The list below includes some of the main applications of signal
processing to speech:

Speech storage and transmission (in order to minimise bandwidth, a set of parameters describing the speech production process is sent rather than the speech waveform itself).

Speech enhancement (for example, improving the intelligibility of speech in a very noisy environment).

Speech synthesis (e.g. voice output from a computer).

Speech recognition (e.g. voice input into a computer).

Speaker verification and identification.
Processing of Biomedical Signals


The body generates a multitude of bio-electric signals. Well known (and well
analysed) signals are, for example, the electrical activity of the brain (the elec-
troencephalogram or EEG) and the electrical activity of the heart (the electrocar-
diogram or ECG). Both of these signals are small (the EEG is of order µV and
the ECG of order mV) and are often heavily corrupted by artifacts from the sub-
ject’s breathing and body movements as well as electrical artifacts such as 50Hz
noise. Signal processing is a valuable asset in clinical and research medicine for
detection and analysis as well as for the removal of artifacts. Figure 1.1 shows an
example of a section of EEG.

Figure 1.1: A typical section of EEG signal (scale bars: 0.1 mV, 1 s). The large positive spikes are artifacts caused by eye movements.

1.1.2 Analogue vs. Digital Signal Processing
Most of the signal processing techniques mentioned above could be used to pro-
cess the original analogue (continuous-time) signals or their digital version (the
signals are sampled in order to convert them to sequences of numbers). For
example, the earliest type of “voice coder” developed was the channel vocoder
which consists mainly of a bank of band-pass filters. More recent versions of this
vocoder use digital (rather than analogue) filtering, although this does not result
directly in any improvement in performance; however, one advantage of the digi-
tal implementation is that it can be “re-configured” as another type of vocoder (for
example, the linear prediction vocoder); it is also much easier to encrypt digital
rather than analogue data, for applications where communication must remain se-
cure. The trend is, therefore, towards digital signal processing systems; even the
well-established radio receiver has come under threat. The other great advantage
of digital signal processing lies in the ease with which non-linear processing may
be performed. Almost all recent developments in modern signal processing are in
the digital domain. This lecture course, however, concentrates on the basics.
It is, however, important not to neglect analogue signal processing and some
of the reasons for this should become clear during the course. On a more practical
point, you will have noted that filters are the main topic of this course, and a thor-
ough grounding in the design of analogue filters is a pre-requisite to understanding
much of the underlying theory of digital filtering.

1.1.3 Summary/Revision of basic definitions


1.1.4 Linear Systems
A linear system may be defined as one which obeys the Principle of Superposition. If $x_1(t)$ and $x_2(t)$ are inputs to a linear system which give rise to outputs $y_1(t)$ and $y_2(t)$ respectively, then the combined input $a x_1(t) + b x_2(t)$ will give rise to an output $a y_1(t) + b y_2(t)$, where $a$ and $b$ are arbitrary constants.

Notes
If we represent an input signal by its support in a frequency domain (i.e. the set of frequencies present in the input), then no new frequency support will be required to model the output, i.e.

$$\operatorname{supp}\{Y\} \subseteq \operatorname{supp}\{X\}$$

Linear systems can be broken down into simpler sub-systems which can be re-arranged in any order, i.e.

$$x(t) \rightarrow h_1 \rightarrow h_2 \rightarrow y(t) \;\equiv\; x(t) \rightarrow h_2 \rightarrow h_1 \rightarrow y(t)$$
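These two properties are easy to check numerically. The following Python/NumPy sketch (illustrative only, not part of the original notes) verifies superposition and the order-independence of two cascaded FIR sub-systems using discrete convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(50), rng.standard_normal(50)  # two arbitrary inputs
h1 = np.array([0.5, 0.3, 0.2])                             # impulse responses of two
h2 = np.array([1.0, -0.5])                                 # simple FIR sub-systems
a, b = 2.0, -3.0                                           # arbitrary constants

# Superposition: T{a*x1 + b*x2} == a*T{x1} + b*T{x2}
lhs = np.convolve(a * x1 + b * x2, h1)
rhs = a * np.convolve(x1, h1) + b * np.convolve(x2, h1)
print(np.allclose(lhs, rhs))          # True

# Cascade order does not matter: (x * h1) * h2 == (x * h2) * h1
y12 = np.convolve(np.convolve(x1, h1), h2)
y21 = np.convolve(np.convolve(x1, h2), h1)
print(np.allclose(y12, y21))          # True
```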

1.1.5 Time Invariance


A time-invariant system is one whose properties do not vary with time (i.e. the
input signals are treated the same way regardless of their time of arrival); for
example, with discrete systems, if an input sequence $x[n]$ produces an output sequence $y[n]$, then the input sequence $x[n-n_0]$ will produce the output sequence $y[n-n_0]$ for all $n_0$.
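As a small illustration (a Python/NumPy sketch, not from the original notes), delaying the input to an FIR filter simply delays its output by the same number of samples:

```python
import numpy as np

x = np.sin(0.2 * np.arange(40))          # some input sequence
h = np.array([0.25, 0.25, 0.25, 0.25])   # impulse response (4-point average)
n0 = 5                                   # delay in samples

y = np.convolve(x, h)
y_delayed = np.convolve(np.concatenate([np.zeros(n0), x]), h)

# The delayed input produces the same output, shifted by n0 samples.
print(np.allclose(y_delayed[n0:], y))    # True
print(np.allclose(y_delayed[:n0], 0.0))  # True
```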

1.1.6 Linear Time-Invariant (LTI) Systems


Most of the lecture course will focus on the design and analysis of systems which
are both linear and time-invariant. The basic linear time-invariant operation is
“filtering” in its widest sense.

1.1.7 Causality
In a causal (or realisable) system, the present output signal depends only upon
present and previous values of the input. (Although all practical engineering sys-
tems are necessarily causal, there are several important systems which are non-
causal (non-realisable), e.g. the ideal digital differentiator.)
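For example (an illustrative Python/NumPy sketch, not part of the notes), a causal moving average uses only present and past samples, whereas a centred moving average also looks one sample into the future and is therefore non-causal; it could only be applied off-line:

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(100)

# Causal 3-point average: y[n] = (x[n] + x[n-1] + x[n-2]) / 3
y_causal = np.convolve(x, np.ones(3) / 3)[:len(x)]

# Non-causal (centred) 3-point average: y[n] = (x[n-1] + x[n] + x[n+1]) / 3
y_centred = np.convolve(x, np.ones(3) / 3)[1:len(x) + 1]

# The centred version at sample n depends on x[n+1], so it cannot be
# computed in real time; the causal version can.
```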

1.1.8 Stability
A stable system (over a finite interval $T$) is one which produces a bounded output in response to a bounded input (over $T$).

1.2 Linear Processes


Some of the common signal processing functions are amplification (or attenuation), mixing (the addition of two or more signal waveforms), un-mixing¹ and filtering. Each of these can be represented by a linear time-invariant “block” with
an input-output characteristic which can be defined by:

The impulse response $h(t)$ in the time domain.

The transfer function in a frequency domain. We will see that the choice of frequency basis may be subtly different from time to time.

As we will see, there is (for the systems we examine in this course) an invertible mapping between the time- and frequency-domain representations.

¹ This linear unmixing turns out to be one of the most interesting current topics in signal processing.

1.3 Time-Domain Analysis – convolution


Convolution allows the evaluation of the output signal from an LTI system, given its impulse response and input signal.
The input signal can be considered as being composed of a succession of impulse functions, each of which generates a weighted version of the impulse response at the output, as shown in Figure 1.2.

[Four-panel figure; the lower panels are labelled “components” and “total”.]
Figure 1.2: Convolution as a summation over shifted impulse responses.

The output at time $t$, $y(t)$, is obtained simply by adding the effect of each separate impulse function – this gives rise to the convolution integral:

$$y(t) = \sum_{\tau} \left\{ x(t-\tau)\,\delta\tau \right\} h(\tau) \;\rightarrow\; \int_{0}^{\infty} x(t-\tau)\, h(\tau)\, d\tau$$

$\tau$ is a dummy variable which represents time measured “back into the past” from the instant $t$ at which the output $y(t)$ is to be calculated.

1.3.1 Notes
Convolution is commutative. Thus $y(t)$ is also given by:

$$y(t) = \int_{-\infty}^{t} x(\tau)\, h(t-\tau)\, d\tau$$

For discrete systems convolution is a summation operation:

$$y[n] = \sum_{m=-\infty}^{\infty} x[m]\, h[n-m] = \sum_{m=-\infty}^{\infty} x[n-m]\, h[m]$$
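For finite-length sequences this is exactly what numpy.convolve computes; a quick sketch (illustrative only, not from the notes) confirms that the two orderings give the same result:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])   # input sequence x[n]
h = np.array([0.5, 0.25, 0.125])           # impulse response h[n]

y1 = np.convolve(x, h)   # sum_m x[m] h[n-m]
y2 = np.convolve(h, x)   # sum_m x[n-m] h[m]

print(y1)                   # [ 0.5  1.25  0.625 -0.25  1.25  0.625  0.375]
print(np.allclose(y1, y2))  # True: convolution is commutative
```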

Relationship between convolution and correlation. The general form of the convolution integral

$$y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t-\tau)\, d\tau$$

is very similar² to that of the cross-correlation function relating two variables $x(t)$ and $y(t)$:

$$r_{xy}(\tau) = \int_{-\infty}^{\infty} x(t)\, y(t+\tau)\, dt$$

Convolution is hence an integral over lags at a fixed time whereas correlation is the integral over time for a fixed lag.

² Note that the lower limit of the integral can be $-\infty$ or 0. Why?
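The close relationship shows up directly in discrete-time code: for real sequences, correlating $x$ with $y$ is the same as convolving $x$ with a time-reversed copy of $y$. A small Python/NumPy sketch, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(64)
y = rng.standard_normal(64)

r_xy = np.correlate(x, y, mode='full')          # cross-correlation over all lags
c_xy = np.convolve(x, y[::-1], mode='full')     # convolution with y reversed in time

print(np.allclose(r_xy, c_xy))   # True: correlation = convolution with a flipped signal
```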

Step response. The step function is the time integral of an impulse. As integration (and differentiation) are linear operations, the order of application in an LTI system does not matter:

$$\delta(t) \;\rightarrow\; \int dt \;\rightarrow\; h(t) \;\rightarrow\; \text{step response}$$

$$\delta(t) \;\rightarrow\; h(t) \;\rightarrow\; \int dt \;\rightarrow\; \text{step response}$$
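In discrete time the integrator becomes a running sum, so the same statement can be checked numerically (a Python/NumPy sketch, not from the notes): filtering a unit step gives the same result as accumulating the filtered impulse.

```python
import numpy as np

h = np.array([0.4, 0.3, 0.2, 0.1])   # impulse response of some LTI system
N = 20

impulse = np.zeros(N); impulse[0] = 1.0
step = np.cumsum(impulse)            # unit step = running sum of the impulse

s1 = np.convolve(step, h)[:N]                 # integrate (sum) first, then filter
s2 = np.cumsum(np.convolve(impulse, h))[:N]   # filter first, then integrate (sum)

print(np.allclose(s1, s2))           # True: order of the linear operations does not matter
```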

1.4 Frequency-Domain Analysis


LTI systems, by definition, may be represented (in the continuous case) by linear differential equations (in the discrete case by linear difference equations). Consider the application of the linear differential operator, $D$, to the function $f(t) = e^{st}$:

$$D f(t) = s\, f(t)$$

An equation of this form means that $f$ is an eigenfunction of $D$. Just like the eigen-analysis you know from matrix theory, this means that $f$ and any linear operation on $f$ may be represented using a set of functions of exponential form, and that these functions may be chosen to be orthogonal. This naturally gives rise to the use of the Laplace and Fourier representations.
The Laplace transform:

$$X(s) \;\rightarrow\; \text{Transfer function } H(s) \;\rightarrow\; Y(s)$$

where

$$X(s) = \int_{0}^{\infty} x(t)\, e^{-st}\, dt \qquad \text{(Laplace transform of } x(t)\text{)}$$

$$Y(s) = H(s)\, X(s)$$

where $H(s)$ can be expressed as a pole-zero representation of the form:

$$H(s) = K\, \frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}$$

(NB: The inverse transformation, i.e. obtaining $y(t)$ from $Y(s)$, is not a straightforward mathematical operation.)
The Fourier transform:

$$X(j\omega) \;\rightarrow\; \text{Frequency response } H(j\omega) \;\rightarrow\; Y(j\omega)$$

where

$$X(j\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \qquad \text{(Fourier transform of } x(t)\text{)}$$

and

$$Y(j\omega) = H(j\omega)\, X(j\omega)$$

The output time function can be obtained by taking the inverse Fourier transform:

$$y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} Y(j\omega)\, e^{j\omega t}\, d\omega$$
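For instance (an illustrative sketch with assumed component values, not part of the notes), the frequency response of a first-order RC low-pass filter, $H(j\omega) = 1/(1 + j\omega RC)$, can be evaluated directly in Python/NumPy to give gain and phase at any frequency:

```python
import numpy as np

R, C = 1.0e3, 1.0e-6                     # assumed component values: 1 kΩ, 1 µF
w = 2 * np.pi * np.logspace(0, 4, 200)   # angular frequencies, 1 Hz to 10 kHz

H = 1.0 / (1.0 + 1j * w * R * C)         # frequency response H(jw)

gain_db = 20 * np.log10(np.abs(H))       # |H(jw)| in dB
phase = np.angle(H)                      # arg{H(jw)} in radians

# At the cut-off frequency w = 1/RC the gain is -3 dB and the phase is -45 degrees.
```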
1.4.1 Relationship between time & frequency domains
Theorem
If $h(t)$ is the impulse response of an LTI system, then $H(j\omega)$, the Fourier transform of $h(t)$, is the frequency response of the system.

Proof
Consider an input $x(t) = A\cos(\omega t)$ to an LTI system. Let $h(t)$ be the impulse response, with a Fourier transform $H(j\omega)$.

Using convolution, the output $y(t)$ is given by:

$$y(t) = \int_{0}^{\infty} A\cos\big(\omega(t-\tau)\big)\, h(\tau)\, d\tau$$

$$= \frac{A}{2}\int_{0}^{\infty} e^{j\omega(t-\tau)}\, h(\tau)\, d\tau + \frac{A}{2}\int_{0}^{\infty} e^{-j\omega(t-\tau)}\, h(\tau)\, d\tau$$

$$= \frac{A}{2}\, e^{j\omega t}\int_{-\infty}^{\infty} h(\tau)\, e^{-j\omega\tau}\, d\tau + \frac{A}{2}\, e^{-j\omega t}\int_{-\infty}^{\infty} h(\tau)\, e^{j\omega\tau}\, d\tau$$

(the lower limit of integration can be changed from $0$ to $-\infty$ since $h(\tau) = 0$ for $\tau < 0$)

$$= \frac{A}{2}\left[ e^{j\omega t}\, H(j\omega) + e^{-j\omega t}\, H(-j\omega) \right]$$

Let $H(j\omega) = B\, e^{j\phi}$, i.e. $B = |H(j\omega)|$ and $\phi = \arg\{H(j\omega)\}$.

Then

$$y(t) = \frac{AB}{2}\left[ e^{j(\omega t + \phi)} + e^{-j(\omega t + \phi)} \right] = AB\cos(\omega t + \phi)$$

i.e. an input sinusoid has its amplitude scaled by $|H(j\omega)|$ and its phase changed by $\arg\{H(j\omega)\}$, where $H(j\omega)$ is the Fourier transform of the impulse response $h(t)$.
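The discrete-time analogue of this result is easy to check numerically. In the sketch below (illustrative Python/NumPy, not from the notes), a sinusoid is passed through an FIR filter and, once the start-up transient has died away, the output matches $B\cos(\Omega n + \phi)$, where $B$ and $\phi$ come from the filter's frequency response at that frequency:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])            # impulse response of a discrete LTI system
Omega = 0.3                              # normalised angular frequency (rad/sample)
n = np.arange(400)
x = np.cos(Omega * n)                    # input sinusoid, unit amplitude

# Frequency response at Omega: H = sum_m h[m] exp(-j*Omega*m)
H = np.sum(h * np.exp(-1j * Omega * np.arange(len(h))))
B, phi = np.abs(H), np.angle(H)

y = np.convolve(x, h)[:len(n)]           # filter the input
y_pred = B * np.cos(Omega * n + phi)     # predicted steady-state output

# Ignore the first few samples (the start-up transient) and compare.
print(np.allclose(y[len(h):], y_pred[len(h):]))   # True
```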

Theorem
Convolution in the time domain is equivalent to multiplication in the frequency domain, i.e.

$$y(t) = h(t) * x(t) \;\longleftrightarrow\; Y(j\omega) = H(j\omega)\, X(j\omega)$$

and

$$y(t) = h(t) * x(t) \;\longleftrightarrow\; Y(s) = H(s)\, X(s)$$

Proof
Consider the general (Laplace) transform of a shifted function:

$$\mathcal{L}\{x(t-\tau)\} = \int_{0}^{\infty} x(t-\tau)\, e^{-st}\, dt = e^{-s\tau}\, \mathcal{L}\{x(t)\}$$

Now consider the Laplace transform of the convolution integral:

$$\mathcal{L}\{x(t) * h(t)\} = \int_{0}^{\infty}\!\!\int_{0}^{\infty} x(t-\tau)\, h(\tau)\, d\tau\; e^{-st}\, dt = \int_{0}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau \;\; \mathcal{L}\{x(t)\} = \mathcal{L}\{h(t)\}\, \mathcal{L}\{x(t)\}$$

By allowing $s \rightarrow j\omega$ we prove the result for the Fourier transform as well.
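This theorem underlies FFT-based filtering. The sketch below (illustrative Python/NumPy, not from the notes) checks the discrete version: the DFT of a linear convolution equals the product of the DFTs, provided the transforms are taken at the full output length:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(128)
h = rng.standard_normal(16)

N = len(x) + len(h) - 1                          # length of the linear convolution

Y_time = np.fft.fft(np.convolve(x, h), N)        # transform of the convolution
Y_freq = np.fft.fft(x, N) * np.fft.fft(h, N)     # product of the transforms

print(np.allclose(Y_time, Y_freq))   # True: convolution <-> multiplication
```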
