Lec6aa 2021

The document discusses parameter estimation methods for obtaining physical models of dynamic systems from experimental data. It provides an overview of topics related to system identification and parameter estimation, including linear time-invariant systems, experiment design, identification of closed-loop systems, and software tools for modeling and analysis.

• How is this toolbox being used?
Often with a GUI.

Introduction part 2: Parameter Estimation

• What is parameter estimation (in this lecture)?
A set of methods to obtain a physical model of a dynamic system from experimental data.

• What is the goal of parameter estimation?
The obtained parameters should represent certain physical properties of the system being examined.

• How is parameter estimation being carried out?
Click-and-play?? The parameters of the model are fitted to obtain the "best" representation of the experimental data, e.g. with MATLAB's optimization toolbox.

• And what is the use of these lectures?
An overview of the applied methods, the background of the algorithms, the pitfalls and suggestions for use.

Physical models vs. Mathematical models

Microscopic: "First principles": conservation laws, physical properties, parameter estimation. PDE's: ∞-dimensional. Non-linear: no general-purpose technique.
Macroscopic: Experimental: planned experiments, look for relations in data, system identification. ODE's: finite dimension. LTI: "standard" techniques.

LTI: Linear Time Invariant system, e.g. described by means of a transfer function

  Y(s)/U(s) = G(s)   or   y(t)/u(t) = G(s)

Where possible, non-linearities are removed from the data by preprocessing prior to system identification.

Approach for (1) system identification and (2) parameter estimation:
• Input/output selection
• Experiment design
• Collection of data
• Choice of the model structure (set of possible solutions)
• Estimation of the parameters
• Validation of the obtained model (preferably with separate "validation" data).

1. The models used for system identification are "mathematical": the underlying physical behaviour is unknown or incomplete. To the extent possible, physical insight is used, e.g. to select the model structure (representation, order).
2. The parameters in the models for parameter estimation have, to a more or lesser extent, a physical meaning.
Overview topics: 1. System identification (first part of the lectures):

• Introduction, discrete systems, basic signal theory
• Open-loop LTI SISO systems, time domain, frequency domain
• Non-parametric identification: correlations and spectral analysis
• Subspace identification
• Identification with "Prediction Error"-methods: prediction, model structure, approximate models, order selection, validation
• Experiment design

Overview topics: 2. Parameter estimation (first and second part of the lectures):

• Non-linear model equations
• Linearity in the parameters
• Identifiability of parameters
• Error propagation
• Experiment design

Overview topics: 3. Miscellaneous (second part of the lectures):

• MIMO-systems
• Identification in the frequency domain
• Identification of closed loop systems
• Non-linear optimisation
• Cases.

Software:
MATLAB (any recent version) + identification toolbox + optimization toolbox: e.g. available from NCS.

Course material:
Lecture notes / slides in PDF-format from BlackBoard site.
On-line MATLAB documentation of the toolboxes (selection).

Examination (5 EC):
Exercises parallel to the lectures, usually to be handed in next week.
One exercise includes a lab experiment (during the "break" in March / April).
The answers will be graded and may be discussed during an oral exam.
Grades will be available on-line*.

* Grades: 100% of maximum: 10.0; approximately 65% of maximum: 5.5; in between: linear interpolation.
Systems and signals

  u → [ System T ] → y

A description of the system T should specify how the output signal(s) y depend on the input signal(s) u. The signals depend on time: continuous or at discrete time instants.

Several system descriptions are available:
• Continuous versus discrete time
• Time domain versus frequency domain

Examples for LTI systems:
• Frequency Response Function (FRF) or Bode plot
• Impulse response or step response
• State space model
• Transfer function

The Bode plot of a system G(iω) is the amplitude and phase plot of the complex function G depending on the real (angular) frequency ω. It specifies for each frequency ω the input-output relation for harmonic signals (after the transient behaviour vanished):

• The amplification equals the absolute value |G(iω)|.
• The phase shift is found from the phase angle ∠G(iω).

The output y(t) for an arbitrary input signal u(t) can be found by considering all frequencies in the input signal: Fourier transform.

Procedure (in principle): Transform the input into the frequency domain, multiply with the Bode plot or FRF, transform the result back to the time domain.
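This transform-multiply-transform-back procedure can be sketched numerically (a minimal NumPy illustration, not part of the lecture material; a pure delay of k0 samples, with FRF G(ω) = exp(−iω·k0·Ts), is used as a hypothetical system so that the result is easy to check):

```python
import numpy as np

# Hypothetical example system: a pure delay of k0 samples, G(w) = exp(-1j*w*k0*Ts)
N, Ts, k0 = 64, 0.01, 3
rng = np.random.default_rng(0)
u = rng.standard_normal(N)                   # arbitrary input signal

omega = 2 * np.pi * np.fft.fftfreq(N, d=Ts)  # DFT frequencies in rad/s
G = np.exp(-1j * omega * k0 * Ts)            # FRF evaluated at the DFT frequencies

# Transform the input, multiply with the FRF, transform back:
y = np.fft.ifft(G * np.fft.fft(u)).real

# For this (periodic) input the output is the input delayed by k0 samples.
```

Because the DFT treats the signal as periodic, the delay here is circular; for measured data this corresponds to the steady-state (transient-free) response.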

The impulse response g(t) of an LTI system can also be used to compute the output y(t) for an arbitrary input signal u(t): the input signal can be considered as a sequence of impulses with some (time dependent) amplitude u(t). The outputs due to all impulses are added:

Continuous time signals and systems: convolution integral

  y(t) = (g ∗ u)(t) = ∫₀^∞ g(τ) u(t − τ) dτ

Discrete time signals and systems: summation

  y(k) = (g ∗ u)(k) = Σ_{l=0}^{∞} g(l) u(k − l)   (k = 0, 1, 2, ...)

Discrete time dynamic systems

Signals are sampled at discrete time instants tk with sample time Ts:

  u_{k−1} = u(t_{k−1}) = u(tk − Ts)
  u_k     = u(tk)
  u_{k+1} = u(t_{k+1}) = u(tk + Ts)
  u_{k+2} = u(t_{k+2}) = u(tk + 2Ts)

Transfer function: y = G(z)u + H(z)e (in z-domain)

Continuous time vs. discrete time:

  differential operator:           shift operator:
  s u ⇒ du/dt                      (1 − q) u_k ⇒ u_k − u_{k+1}
  s⁻¹ u ⇒ ∫ u dt                   (1 − q⁻¹) u_k ⇒ u_k − u_{k−1}

State space equations:

  continuous time:                       discrete time:
  ẋ(t) = A x(t) + B u(t) + K v(t)        x(t_{k+1}) = A x(tk) + B u(tk) + K v(tk)
  y(t) = C x(t) + D u(t) + v(t)          y(tk) = C x(tk) + D u(tk) + v(tk)
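The discrete convolution sum can be written out directly (a small NumPy sketch, not from the notes; the finite impulse response g and the input u are made up for illustration):

```python
import numpy as np

g = np.array([1.0, 0.5, 0.25])              # hypothetical (finite) impulse response g(l)
u = np.array([1.0, 2.0, 0.0, -1.0, 3.0])    # hypothetical input u(k), zero for k < 0

# y(k) = sum_l g(l) u(k - l), evaluated term by term
y = np.zeros(len(u))
for k in range(len(u)):
    for l in range(len(g)):
        if k - l >= 0:
            y[k] += g[l] * u[k - l]

# The same result follows from numpy's convolution, truncated to len(u) samples:
y_np = np.convolve(g, u)[:len(u)]
```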
Discrete time dynamic systems: stability

Continuous time vs. discrete time:

  Laplace transform                          z transform
  Transfer function:
  G(s) = (n₁ s + n₀)/(d₂ s² + d₁ s + d₀)     G(z) = (b₁ z⁻¹ + b₂ z⁻²)/(1 + a₁ z⁻¹ + a₂ z⁻²)

Relation between s and z: z = e^{s·Ts} (Franklin (8.19))

  Stability:
  Poles in LHP: Re(s) < 0                    Poles inside the unit circle: |z| < 1
  Undamped poles:
  Imaginary axis: s = iω                     Unit circle: z = e^{iωTs}

Discrete systems and phase lag

Zero order hold (ZOH) discretisation introduces a phase lag. The phase lag depends on frequency ω and sample time Ts: −ω·Ts/2 (in rad).

[Figure: ZOH-sampled signal with Ts = 0.1 s, and the resulting phase lag (deg) versus ω (rad/s)]

So the phase lag is −90° at the Nyquist frequency ωN = ωs/2 = π/Ts.
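The −ω·Ts/2 phase lag can be checked numerically from the frequency response of the zero order hold, H(ω) = (1 − e^{−iωTs})/(iωTs) (a NumPy sketch for illustration, not part of the notes):

```python
import numpy as np

Ts = 0.1                                  # sample time in s (as in the figure)
w = np.linspace(0.1, np.pi / Ts, 200)     # frequencies up to the Nyquist frequency

# Frequency response of the zero order hold
H = (1 - np.exp(-1j * w * Ts)) / (1j * w * Ts)

phase = np.angle(H)                       # equals -w*Ts/2 exactly
# At the Nyquist frequency w = pi/Ts the phase lag is -pi/2, i.e. -90 degrees.
```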

Signal characterisation

MATLAB's identification toolbox ident works with time domain data. Even then, the frequency domain will appear to be very important. Furthermore, identification can also be applied in the frequency domain.

• Frequency content: Fourier transform
• Energy
• Power

Deterministic or stochastic signals?

Fourier transforms

Continuous-time deterministic signals u(t): Fourier integral:

  U(ω) = ∫_{−∞}^{∞} u(t) e^{−iωt} dt        u(t) = (1/2π) ∫_{−∞}^{∞} U(ω) e^{iωt} dω

For a finite number (N) of discrete-time samples u_d(tk): Fourier summation:

  U_N(ω_l) = Σ_{k=0}^{N−1} u_d(tk) e^{−iω_l tk}        u_d(tk) = (1/N) Σ_{l=0}^{N−1} U_N(ω_l) e^{iω_l tk}

U_N(ω_l) with ω_l = (l/N) ωs = (l/N)(2π/Ts), l = 0, ..., N−1 is the discrete Fourier transform (DFT) of the signal u_d(tk) with tk = k·Ts, k = 0, ..., N−1.

For N equal to a power of 2, the Fast Fourier Transform (FFT) algorithm can be applied.
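The Fourier summation above matches the DFT convention of common FFT routines; a small NumPy check (illustration only, with made-up data):

```python
import numpy as np

N, Ts = 8, 0.5
rng = np.random.default_rng(1)
u = rng.standard_normal(N)              # samples u_d(t_k), t_k = k*Ts

k = np.arange(N)
l = np.arange(N)
wl_tk = 2 * np.pi * np.outer(l, k) / N  # omega_l * t_k = (l/N)*(2*pi/Ts)*k*Ts

# Fourier summation U_N(omega_l) = sum_k u_d(t_k) exp(-i*omega_l*t_k)
U = np.exp(-1j * wl_tk) @ u

# Identical to the FFT; the inverse summation (with factor 1/N) recovers u_d:
U_fft = np.fft.fft(u)
u_back = (np.exp(1j * wl_tk.T) @ U) / N
```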
Example: 32768 data samples of the piezo mechanism (left). DFT (right) in 16384 frequencies computed with MATLAB's fft command. Horizontal axis in steps of 1/(total measurement time) = 0.925 Hz.

[Figure: measured signal y versus t (s), and DFT magnitude versus f (Hz)]

Energy and power (continuous time signals)

Energy spectrum: Ψu(ω) = |U(ω)|²

  Energy: Eu = ∫_{−∞}^{∞} u(t)² dt = (1/2π) ∫_{−∞}^{∞} Ψu(ω) dω

Power spectrum: Φu(ω) = lim_{T→∞} (1/T) |U_T(ω)|²

  Power: Pu = lim_{T→∞} (1/T) ∫₀^T u(t)² dt = (1/2π) ∫_{−∞}^{∞} Φu(ω) dω

[ U_T(ω) is the Fourier transform of a continuous time signal with a finite duration ]

Signal types

• Deterministic with finite energy: Eu = ∫ Ψu(ω) dω bounded; Ψu(ω) = |U(ω)|² limited.
• Deterministic with finite power: Pu = ∫ Φu(ω) dω finite; Φu(ω) = lim_{T→∞} (1/T)|U_T(ω)|² unbounded.
• Stochastic with finite power: Pu = ∫ Φu(ω) dω finite; Φu(ω) = lim_{T→∞} (1/T)|U_T(ω)|² bounded.

[Figure: example signals of the three types versus t (s)]

Energy and power (discrete time signals)

Energy spectrum: Ψu(ω) = |U(ω)|² (from the DFT)

  Energy: Eu = Σ_{k=−∞}^{∞} u_d(k)² = (1/ωs) ∫_{ωs} Ψu(ω) dω

Power spectrum: Φu(ω) = lim_{N→∞} (1/N) |U_N(ω)|² ("periodogram")

  Power: Pu = lim_{N→∞} (1/N) Σ_{k=0}^{N−1} u_d(k)² = (1/ωs) ∫_{ωs} Φu(ω) dω

Ronald Aarts PeSi/2/12


Convolutions

Continuous time: y(t) = (g ∗ u)(t) = ∫₀^∞ g(τ) u(t − τ) dτ
After Fourier transform: Y(ω) = G(ω) · U(ω)

Discrete time: y(k) = (g ∗ u)(k) = Σ_{l=0}^{∞} g(l) u(k − l)   (k = 0, 1, 2, ...)
After Fourier transform: Y(ω) = G(ω) · U(ω)

Example: u is the input and g(k), k = 0, 1, 2, ... is the impulse response of the system, that is the response for an input signal that equals 1 for t = 0 and equals 0 elsewhere. Then with the expressions above y(k) is the output of the system.

Stochastic signals

A realisation of a signal x(t) is not only a function of time t, but depends also on the ensemble behaviour.

An important property is the expectation: E f(x(t))

Examples: Mean E x(t); Power E (x(t) − E x(t))²

Cross-covariance: Rxy(τ) = E [x(t) − E x(t)][y(t − τ) − E y(t − τ)]
Autocovariance: Rx(τ) = E [x(t) − E x(t)][x(t − τ) − E x(t − τ)]

White noise: e(t) is not correlated with signals e(t − τ) for any τ ≠ 0.
Consequence: Re(τ) = 0 for τ ≠ 0.
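The white-noise property can be illustrated numerically (NumPy sketch, not part of the notes): the sample autocovariance of a white noise realisation is close to the variance at lag 0 and close to zero for every other lag.

```python
import numpy as np

N = 100000
rng = np.random.default_rng(2)
e = rng.standard_normal(N)          # white noise realisation with variance 1

def sample_autocov(x, tau):
    # R_x(tau) = E[x(t) - Ex][x(t - tau) - Ex], estimated from one realisation
    x = x - x.mean()
    if tau == 0:
        return np.mean(x * x)
    return np.mean(x[tau:] * x[:len(x) - tau])

R0 = sample_autocov(e, 0)           # close to the variance, 1
R5 = sample_autocov(e, 5)           # close to 0
```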

Power density or Power Spectral Density:

  Φx(ω) = ∫_{−∞}^{∞} Rx(τ) e^{−iωτ} dτ        Φxd(ω) = Σ_{k=−∞}^{∞} Rxd(k) e^{−iωkT}

With (inverse) Fourier transform:

  Rx(τ) = (1/2π) ∫_{−∞}^{∞} Φx(ω) e^{iωτ} dω        Rxd(k) = (T/2π) ∫_{ωs} Φxd(ω) e^{iωkT} dω

Power:

  E (x(t) − E x(t))² = Rx(0) = (1/2π) ∫_{−∞}^{∞} Φx(ω) dω
  E (xd(t) − E xd(t))² = Rxd(0) = (T/2π) ∫_{ωs} Φxd(ω) dω

Systems and models

A system is defined by a number of external variables (signals) and the relations that exist between those variables (causal behaviour).

[Diagram: input u entering system G; disturbance v added to the output of G, giving y]

Signals:
• measurable input signal(s) u
• measurable output signal(s) y
• unmeasurable disturbances v (noise, non-linearities, ...)


Estimators

Suppose we would like to determine a vector θ with n real coefficients of which the unknown (true) values equal θ0. An estimator θ̂N has been computed from N measurements.

This estimator is
• unbiased if E θ̂N = θ0.
• consistent, if for N → ∞ the distribution of the estimator θ̂N resembles a δ-function, or in other words the certainty of the estimator improves for increasing N.

The estimator is consistent if it is unbiased and the asymptotic covariance

  lim_{N→∞} cov θ̂N = lim_{N→∞} E (θ̂N − E θ̂N)(θ̂N − E θ̂N)ᵀ = 0

Non-parametric (system) identification

• t-domain: Impulse or step response (IDENT: cra*)
• f-domain: Bode plot (IDENT: spa & etfe)

Non-parametric methods:
• give models with "many" numbers, so we don't obtain models with a "small" number of parameters;
• the results are not "simple" mathematical relations;
• the results are often used to check the "simple" mathematical relations that are found with (subsequent) parametric identification;
• non-parametric identification is often the first step.

*The IDENT commands impulse & step use a different approach that is related to the parametric identification to be discussed later.
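These notions can be illustrated with the simplest estimator, the sample mean (a NumPy sketch with made-up Gaussian data, not part of the notes): its ensemble average stays at θ0 (unbiased), and its variance shrinks roughly as 1/N (consistent).

```python
import numpy as np

rng = np.random.default_rng(3)
theta0 = 2.0                                # unknown "true" value

def estimate(N, n_runs=2000):
    # sample mean from N measurements, repeated over an ensemble of n_runs realisations
    data = theta0 + rng.standard_normal((n_runs, N))
    theta_hat = data.mean(axis=1)
    return theta_hat.mean(), theta_hat.var()

mean_10, var_10 = estimate(10)
mean_1000, var_1000 = estimate(1000)
# mean_* stay near theta0 (unbiased); var_1000 is ~100x smaller than var_10 (consistent)
```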

Correlation analysis

Ident manual: Tutorial pages 3-9,10,15; Function reference cra (4-42,43).

  y(t) = G0(z) u(t) + v(t)

[Diagram: u entering G0; disturbance v added to the output, giving y]

Using the impulse response g0(k), k = 0, 1, 2, ... of system G0(z):

  y(t) = Σ_{k=0}^{∞} g0(k) u(t − k) + v(t)   (t = 0, 1, 2, ...)

So the transfer function can be written as: G0(z) = Σ_{k=0}^{∞} g0(k) z^{−k}

⇒ Impulse response of infinite length.
⇒ Assumption that the "real" system is linear and v is a disturbance (noise, not related to input u).

The Finite Impulse Response (FIR) ĝ(k), k = 0, 1, 2, ..., M, is a model estimator for system G0(z) for sufficiently high order M:

  y(t) ≈ Σ_{k=0}^{M} ĝ(k) u(t − k)   (t = 0, 1, 2, ...)

Note: In an analysis the lower limit of the summation can be taken less than 0 (e.g. −m) to verify the (non-)existence of a non-causal relation between u(t) and y(t).

How do we compute the estimator ĝ(k)?

• u(t) and v(t) are uncorrelated (e.g. no feedback from y to u!!!).
• Multiply the expression y(t) = Σ_k g0(k) u(t − k) + v(t) with u(t − τ) and compute the expectation.
• This leads to the Wiener-Hopf equation:

  Ryu(τ) = Σ_{k=0}^{∞} g0(k) Ru(τ − k)

If u(t) is a white noise signal, then Ru(τ) = σu² δ(τ), so

  g0(τ) = Ryu(τ)/σu²   and   ĝ(τ) = R̂yu(τ)/σu²

How do we compute the estimator for the cross covariance R̂yu(τ) from N measurements?

Sample covariance function:

  R̂yu(τ) = (1/N) Σ_{t=τ}^{N} y(t) u(t − τ)

is asymptotically unbiased (so for N → ∞).

And what if u(t) is not a white noise signal?

Option 1: Estimate the autocovariance, e.g. with

  R̂u(τ) = (1/N) Σ_{t=τ}^{N} u(t) u(t − τ)

and solve the linear set of M equations for ĝ(k):

  R̂yu(τ) = Σ_{k=1}^{M} ĝ(k) R̂u(τ − k)

Not so favorable ....
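For a white noise input, the estimate ĝ(τ) = R̂yu(τ)/σu² can be demonstrated directly (NumPy sketch, not from the notes; the short "true" impulse response g0 and the disturbance level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200000
sigma_u = 1.0
u = sigma_u * rng.standard_normal(N)        # white noise input
g0 = np.array([0.0, 1.0, 0.6, 0.3, 0.1])    # hypothetical "true" impulse response
v = 0.1 * rng.standard_normal(N)            # disturbance, uncorrelated with u

y = np.convolve(g0, u)[:N] + v              # y(t) = sum_k g0(k) u(t-k) + v(t)

# ghat(tau) = Ryu_hat(tau) / sigma_u^2, with Ryu_hat(tau) = (1/N) sum_t y(t) u(t-tau)
ghat = np.array([np.dot(y[tau:], u[:N - tau]) / N
                 for tau in range(len(g0))]) / sigma_u**2
```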


Wiener-Hopf: Ryu(τ) = Σ_{k=0}^{∞} g0(k) Ru(τ − k)

And what if u(t) is not a white noise signal (continued)?

Option 2: Filter input and output with a pre-whitening filter L(z): Suppose we know a filter L(z), such that uF(k) = L(z)u(k) is a white noise signal. Apply this filter to the output as well (yF(k) = L(z)y(k)), then

  yF(t) = G0(z) uF(t) + L(z)v(t)

so the impulse response ĝ(k) can also be estimated from uF(k) and yF(k).

OK, but how do we find such a pre-whitening filter L(z)?

Try to use a linear model of order n (default in ident n = 10)

  L(z) = 1 + a1 z⁻¹ + a2 z⁻² + ... + an z⁻ⁿ

and look for a "best fit" of the n parameters such that e.g. (1/N) Σ_{k=1}^{N} uF²(k) is minimised.

⇒ Exercise 3.
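Such a least-squares fit of the pre-whitening filter coefficients can be sketched as follows (NumPy illustration, not ident's implementation; a hypothetical first-order coloured input u(k) = 0.9 u(k−1) + e(k) is used so the fitted filter is easy to check):

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 50000, 1                       # data length, filter order (ident default is 10)

# Hypothetical coloured input: u(k) = 0.9 u(k-1) + e(k), e white noise
e = rng.standard_normal(N)
u = np.zeros(N)
for k in range(1, N):
    u[k] = 0.9 * u[k - 1] + e[k]

# Fit L(z) = 1 + a1 z^-1 + ... + an z^-n by minimising (1/N) sum_k uF(k)^2,
# i.e. least squares on u(k) = -a1 u(k-1) - ... - an u(k-n) + uF(k)
Phi = np.column_stack([u[n - i - 1:N - i - 1] for i in range(n)])  # u(k-1)...u(k-n)
a = -np.linalg.lstsq(Phi, u[n:], rcond=None)[0]

uF = u[n:] + Phi @ a                  # whitened input uF(k) = L(z) u(k)
# For this example a[0] is close to -0.9, and uF is (approximately) white.
```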
An example (1): The piezo mechanism.

  load piezo;
  Ts = 33e-6;
  u = piezo(:,2);
  y = piezo(:,1);
  piezo = iddata(y,u,Ts);
  piezod = detrend(piezo);
  plot(piezod);

Warning: All equations starting from y(t) = G0(z) u(t) + v(t) do not account for offsets due to non-zero means in input and/or output. So detrend!

Impulse response of the piezo mechanism:

  [ir,R,cl] = cra(piezod,200,10,2);

[Figure: input and output signals; covariance of filtered y; covariance of prewhitened u; correlation from u to y (prewhitened); impulse response estimate]

Upper right: u is indeed whitened.
Lower right: The impulse response is causal.
The horizontal axes count the time samples, so the values should be scaled with T = 33 µs.

Second example: Known system

Ident manual page 2-15:

  y_k − 1.5 y_{k−1} + 0.7 y_{k−2} = u_{k−1} + 0.5 u_{k−2} + e_k − e_{k−1} + 0.2 e_{k−2}.

So G0(q) = (q⁻¹ + 0.5 q⁻²)/(1 − 1.5 q⁻¹ + 0.7 q⁻²)  or  G0(z) = (z + 0.5)/(z² − 1.5 z + 0.7)

and H0(q) = (1 − q⁻¹ + 0.2 q⁻²)/(1 − 1.5 q⁻¹ + 0.7 q⁻²)  or  H0(z) = (z² − z + 0.2)/(z² − 1.5 z + 0.7)

Simulation: N = 4096, T = 1 s, fs = 1 Hz
u(t) binary signal in frequency band 0..0.3 fs
e(t) "white" noise (random signal) with variance 1

[Figure: impulse response of the known system: G0 (exact), cra with n = 10, and cra with n = 1 (not enough pre-whitening)]
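The exact impulse response of G0 (the "G0 (exact)" curve in the figure) follows directly from the difference equation with a unit impulse as input (a short Python sketch, illustration only):

```python
# Impulse response of G0: y(k) = 1.5 y(k-1) - 0.7 y(k-2) + u(k-1) + 0.5 u(k-2),
# with u a unit impulse at k = 0 and no noise term.
n = 40
u = [1.0] + [0.0] * (n - 1)        # unit impulse
y = [0.0] * n                      # impulse response g0(k)
for k in range(n):
    y_1 = y[k - 1] if k >= 1 else 0.0
    y_2 = y[k - 2] if k >= 2 else 0.0
    u_1 = u[k - 1] if k >= 1 else 0.0
    u_2 = u[k - 2] if k >= 2 else 0.0
    y[k] = 1.5 * y_1 - 0.7 * y_2 + u_1 + 0.5 * u_2
# First samples: y[0..4] = 0.0, 1.0, 2.0, 2.3, 2.05
```

The poles of G0 lie at |z| = sqrt(0.7) < 1, so the response is an (oscillatory) decaying sequence, consistent with the figure.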


Spectral analysis

Ident manual: Tutorial pages 3-10,15,16; Function reference etfe (4-53,54), spa (4-193–195).

  y(t) = G0(z) u(t) + v(t)

[Diagram: u entering G0; disturbance v added to the output, giving y]

Fourier transform (without v): Y(ω) = G0(e^{iωT}) U(ω), so

  G0(e^{iωT}) = Y(ω)/U(ω).

Estimator for G0(e^{iωT}) using N measurements: ĜN(e^{iωT}) = YN(ω)/UN(ω).

Effect of v: ĜN(e^{iωT}) = G0(e^{iωT}) + VN(ω)/UN(ω).

The estimator ĜN
(a) is unbiased.
(b) has an asymptotic variance Φv(ω) / ((1/N)|UN(ω)|²) unequal 0!
(c) is asymptotically uncorrelated for different frequencies ω.

Difficulty: For N → ∞ there is more data, but there are also estimators at more (= N/2) frequencies, all with a finite variance.

Solutions:
1. Define a fixed period N0 and consider an increasing number of measurements N = rN0 with r → ∞. Carry out the spectral analysis for each period and compute the average to obtain a "good" estimator in N0/2 frequencies.
2. Smoothen the spectrum in the f-domain.
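The raw estimator ĜN = YN/UN can be computed directly with the FFT (NumPy sketch, illustration only; a hypothetical first-order system is simulated with a periodic input so that, without noise, the estimate is exact at every DFT frequency of the steady-state period):

```python
import numpy as np

rng = np.random.default_rng(6)
N0 = 64                                 # one period
u1 = rng.standard_normal(N0)
u = np.tile(u1, 4)                      # periodic input, 4 periods

# Hypothetical system y(k) = 0.5 y(k-1) + u(k-1), i.e. G0(z) = z^-1 / (1 - 0.5 z^-1)
y = np.zeros(len(u))
for k in range(1, len(u)):
    y[k] = 0.5 * y[k - 1] + u[k - 1]

# ETFE from the last period (the transient has died out):
U = np.fft.fft(u[-N0:])
Y = np.fft.fft(y[-N0:])
G_hat = Y / U

# Exact frequency response at the DFT frequencies, for comparison:
z = np.exp(2j * np.pi * np.arange(N0) / N0)
G0 = z**-1 / (1 - 0.5 * z**-1)
```

With measurement noise v the same ratio acquires the error term VN/UN, which is why smoothing or averaging over periods is needed in practice.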

An example: 32768 data samples of the output signal (position) of the piezo mechanism.

Split the DFT in parts:
[Figure: DFT with 16384 frequencies, and averaged over parts to 2048 and 256 frequencies]

Filter the DFT with a Hamming* filter:
[Figure: original DFT and with Hamming filters of width 8, 64 and 512]

* Signal Processing Toolbox: Filters neighbouring frequencies with cosine function.
With IDENT (1): periodogram ("data spectra") of input and output signals

[Figure: periodograms of output # 1 and input # 1 versus frequency (Hz)]

The output signal shows a low-frequency slope of −2, which is caused by the existence of a pure integrator 1/s in the system (slope −1 in a Bode plot, and squared in the periodogram). The input signal is "reasonably" white.

With IDENT (2): identification

ETFE (Empirical Transfer Function Estimate, manual page 4-53):
Estimate the transfer function G with Fourier transforms ĜN(e^{iωT}) = YN(ω)/UN(ω) in 128 (default) frequencies.
Smoothing is applied and depends on a parameter M, which equals the width of the Hamming window of a filter that is applied to input and output (small M means more smoothing).

SPA (SPectral Analysis, manual page 4-193):
Estimate the transfer function G with Fourier transforms of the covariances ĜN(e^{iωT}) = Φ̂yu(ω)/Φ̂u(ω) in 128 (default) frequencies.
Smoothing is also applied and again depends on a parameter M, that sets the width of the Hamming window of the applied filter.

ETFE of the piezo example

[Figure: frequency response (amplitude and phase versus frequency in Hz) for window parameter M = 15, 30, 60 (*), 90 and 120]

What is the "real" width of the peak near 2 kHz?

Choice between ETFE and SPA?

• Not always straightforward to predict which method will perform best. Why not try both?
• ETFE is preferred for systems with clear peaks in the spectrum.
• SPA estimates also the noise spectrum v(t) = y(t) − G0(z)u(t) according to

  Φ̂v(ω) = Φ̂y(ω) − |Φ̂yu(ω)|² / Φ̂u(ω)

• Measure of signal to noise ratio with the coherence spectrum

  κ̂yu(ω) = sqrt( |Φ̂yu(ω)|² / (Φ̂y(ω) Φ̂u(ω)) )


SPA of the piezo example

[Figure: frequency response (amplitude and phase versus frequency in Hz) for window parameter M = 15, 30, 60, 90 (*) and 120]

Second example: Spectral analysis of the known system

Ident manual page 2-15:

  y_k − 1.5 y_{k−1} + 0.7 y_{k−2} = u_{k−1} + 0.5 u_{k−2} + e_k − e_{k−1} + 0.2 e_{k−2}.

So G0(q) = (q⁻¹ + 0.5 q⁻²)/(1 − 1.5 q⁻¹ + 0.7 q⁻²)  or  G0(z) = (z + 0.5)/(z² − 1.5 z + 0.7)

and H0(q) = (1 − q⁻¹ + 0.2 q⁻²)/(1 − 1.5 q⁻¹ + 0.7 q⁻²)  or  H0(z) = (z² − z + 0.2)/(z² − 1.5 z + 0.7)

Simulation: N = 4096, T = 1 s, fs = 1 Hz
u(t) binary signal in frequency band 0..0.3 fs
e(t) "white" noise (random signal) with variance 1

Smoothen the ETFE:

Spectra:
• Periodic input, r periods: [Figure: magnitude and phase versus f (Hz) for r = 1, r = 4 and r = 16]
• Filtering, Hamming window of width w: [Figure: magnitude and phase versus f (Hz) for w = 1, w = 2 and w = 4]

[Figure: frequency response (amplitude and phase) of G0, the ETFE with M = 30 and SPA with M = 30]


Going from "many" to "just a few" parameters: a first step

Idea: Try to recognise "features" in the data.

• Immediately in u(k) and y(k) ???
• In the spectral models: Are there "features", e.g. peaks as are expected in the Bode plots / FRF (eigenfrequency, ...) of a system with a complex pole pair?
• In the impulse response (measured or identified):
  (1) Recognise "features" (settling time, overshoot, ...).
  (2) Realisation algorithms → to be discussed next.

Intermezzo: Linear regression and Least squares estimate

Regression:

• Prediction of variable y on the basis of information provided by other measured variables ϕ1, ..., ϕd.
• Collect ϕ = [ϕ1, ..., ϕd]ᵀ.
• Problem: find a function of the regressors g(ϕ) that minimises the difference y − g(ϕ) in some sense. So ŷ = g(ϕ) should be a good prediction of y.
• Example in a stochastic framework: minimise E[y − g(ϕ)]².

Linear regression:

• Regression function g(ϕ) is parameterised. It depends on a set of parameters θ = [θ1, ..., θd]ᵀ.
• Special case: regression function g(ϕ) is linear in the parameters θ. Note that this does not imply any linearity with respect to the variables from ϕ.
• Special case: g(ϕ) = θ1ϕ1 + θ2ϕ2 + ... + θdϕd. So g(ϕ) = ϕᵀθ.

Linear regression: Examples

• Linear fit y = a x + b.
  Then g(ϕ) = ϕᵀθ with input vector ϕ = [x, 1]ᵀ and parameter vector θ = [a, b]ᵀ. So: g(ϕ) = [x 1][a; b].
  [Figure: data points and linear fit]

• Quadratic function y = c2 x² + c1 x + c0.
  Then g(ϕ) = ϕᵀθ with input vector ϕ = [x², x, 1]ᵀ and parameter vector θ = [c2, c1, c0]ᵀ. So: g(ϕ) = [x² x 1][c2; c1; c0].
  [Figure: data points and quadratic fit]
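The quadratic example leads to a standard least squares problem, min over θ of Σ (y − ϕᵀθ)², which can be solved directly (a NumPy sketch with made-up data, illustration only):

```python
import numpy as np

# Hypothetical data generated from y = 2 x^2 - 3 x + 1 with a little noise
rng = np.random.default_rng(7)
x = np.linspace(0, 10, 50)
y = 2 * x**2 - 3 * x + 1 + 0.01 * rng.standard_normal(x.size)

# Regressor matrix: each row is phi^T = [x^2, x, 1]
Phi = np.column_stack([x**2, x, np.ones_like(x)])

# Least squares estimate of theta = [c2, c1, c0]^T
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Since g(ϕ) is linear in θ, no iterative optimisation is needed; the fit is a single linear-algebra step.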
