
Subspace state space system identification for industrial processes

Wouter Favoreel*, Bart De Moor, Peter Van Overschee


Katholieke Universiteit Leuven, Department of Electrical Engineering-ESAT/SISTA, Kardinaal Mercierlaan 94, 3001 Leuven, Belgium
Abstract

We give a general overview of the state-of-the-art in subspace system identification methods. We have restricted ourselves to the most important ideas and developments since the methods appeared in the late eighties. First, the basics of linear subspace identification are summarized. Different algorithms one finds in the literature (such as N4SID, IV-4SID, MOESP, CVA) are discussed and put into a unifying framework. Further, a comparison between subspace identification and prediction error methods is made on the basis of computational complexity and precision of the methods by applying them on 10 industrial data sets. © 2000 IFAC. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Subspace methods; System identification; State-space models; Multivariable systems; Linear algebra
1. Introduction

The beginning of the 1990s witnessed the birth of a new type of linear system identification algorithms, called subspace methods. Subspace methods originate in a happy ménage à trois between system theory, geometry and numerical linear algebra. Previous papers and books emphasizing different aspects of subspace system identification and signal processing, and in which one can find large sets of references to the literature, are [1-4]. We should also mention some special issues of the IFAC journal Automatica (``Special Issue on Statistical Signal Processing and Control'', Jan. 1994; ``Special Issue on System Identification'', Dec. 1995) and of Signal Processing (``Special Issue on Subspace Methods for System Identification'', July 1996), which contain contributions on subspace identification, as well as the Proceedings of the 11th IFAC Symposium on System Identification (Kitakyushu, Japan, July 1997).
Linear subspace identification methods are concerned with systems and models of the form¹

x_{k+1} = A x_k + B u_k + w_k,   (1)

y_k = C x_k + D u_k + v_k,   (2)

with

E\left\{ \begin{bmatrix} w_p \\ v_p \end{bmatrix} \begin{bmatrix} w_q^T & v_q^T \end{bmatrix} \right\} = \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix} \delta_{pq} \geq 0.   (3)
The vectors u_k ∈ R^{m×1} and y_k ∈ R^{l×1} are the measurements at time instant k of, respectively, the m inputs and l outputs of the process. The vector x_k is the state vector of the process at discrete time instant k; v_k ∈ R^{l×1} and w_k ∈ R^{n×1} are unobserved vector signals: v_k is called the measurement noise and w_k is called the process noise. It is assumed that they are zero mean, stationary white noise vector sequences and uncorrelated with the inputs u_k. A ∈ R^{n×n} is the system matrix, B ∈ R^{n×m} is the input matrix, C ∈ R^{l×n} is the output matrix, while D ∈ R^{l×m} is the direct feed-through matrix. The matrices Q ∈ R^{n×n}, S ∈ R^{n×l} and R ∈ R^{l×l} are the covariance matrices of the noise sequences w_k and v_k.
In subspace identification it is typically assumed that the number of available data points goes to infinity, and that the data is ergodic. We are now ready to state the main problem treated:

Given a large number of measurements of the input u_k and the output y_k generated by the unknown system (1)-(3), determine the order n of the unknown system, the system matrices A, B, C, D up to within a similarity transformation, and an estimate of the matrices Q, S, R.
In this paper we briefly recapitulate the main concepts and algorithms of linear subspace system identification (Section 2). Different methods of the literature
0959-1524/00/$ - see front matter © 2000 IFAC. Published by Elsevier Science Ltd. All rights reserved.
PII: S0959-1524(99)00030-X
Journal of Process Control 10 (2000) 149-155
www.elsevier.com/locate/jprocont
* Corresponding author. Tel.: +32-16-321-907; fax: +32-16-321-970; http://www.esat.kuleuven.ac.be/sista
E-mail address: [email protected] (W. Favoreel).
E-mail addresses: [email protected] (B. De Moor); [email protected] (P. Van Overschee).
¹ E denotes the expected value operator and δ_{pq} the Kronecker delta.
are presented and put into a unifying framework. We comment on the comparison between prediction error methods (PEM) and subspace identification methods. It should be emphasized that these two identification approaches are by no means competing. Instead, they are ``... a most useful complement to traditional maximum-likelihood based methods'', as emphasized in [5].
2. An overview of the theory

In this section we describe the general concepts in subspace identification. Further, the two basic steps of which all subspace methods consist are described. Finally, the different algorithms existing in the literature are analyzed in a unifying framework.

Subspace identification algorithms always consist of two steps. The first step makes a projection of certain subspaces generated from the data, to find an estimate of the extended observability matrix and/or an estimate of the states of the unknown system. The second step then retrieves the system matrices from either this extended observability matrix or the estimated states. We will come back to these two steps in Section 2.2, where we describe different subspace identification methods and fit them into a unifying framework.
2.1. The subspace structure of linear systems

The following input-output matrix equation [6] played a very important role in the development of subspace identification:

Y_f = Γ_i X_i + H_i^d U_f + H_i^s M_f + N_f.   (4)
The different terms in this equation are:

• The extended observability matrix Γ_i:

Γ_i ≜ \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{i-1} \end{bmatrix}   (5)
• The deterministic lower block triangular Toeplitz matrix H_i^d:

H_i^d ≜ \begin{bmatrix} D & 0 & 0 & \cdots & 0 \\ CB & D & 0 & \cdots & 0 \\ CAB & CB & D & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ CA^{i-2}B & CA^{i-3}B & CA^{i-4}B & \cdots & D \end{bmatrix}   (6)
• The stochastic lower block triangular Toeplitz matrix H_i^s:

H_i^s ≜ \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ C & 0 & 0 & \cdots & 0 \\ CA & C & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ CA^{i-2} & CA^{i-3} & CA^{i-4} & \cdots & 0 \end{bmatrix}   (7)
• The input and output block Hankel matrices are defined as:

U_{0|i-1} ≜ \begin{bmatrix} u_0 & u_1 & \cdots & u_{j-1} \\ u_1 & u_2 & \cdots & u_j \\ \vdots & \vdots & & \vdots \\ u_{i-1} & u_i & \cdots & u_{i+j-2} \end{bmatrix},   (8)

Y_{0|i-1} ≜ \begin{bmatrix} y_0 & y_1 & \cdots & y_{j-1} \\ y_1 & y_2 & \cdots & y_j \\ \vdots & \vdots & & \vdots \\ y_{i-1} & y_i & \cdots & y_{i+j-2} \end{bmatrix},   (9)
where we assume for stochastic reasons that j → ∞ throughout the paper. For convenience and shorthand notation, we call:

U_p ≜ U_{0|i-1},   U_f ≜ U_{i|2i-1},
Y_p ≜ Y_{0|i-1},   Y_f ≜ Y_{i|2i-1},

where the subscripts p and f denote, respectively, the past and the future. The matrix containing the inputs U_p and outputs Y_p will be called W_p:

W_p ≜ \begin{bmatrix} Y_p \\ U_p \end{bmatrix}.
The block Hankel matrices formed with the process noise w_k and the measurement noise v_k are defined, respectively, as M_{0|i-1} and N_{0|i-1} in the same way. Once again, we define for shorthand notation:

M_p ≜ M_{0|i-1},   M_f ≜ M_{i|2i-1},
N_p ≜ N_{0|i-1},   N_f ≜ N_{i|2i-1}.
We finally denote the state sequence X_i as:

X_i ≜ \begin{bmatrix} x_i & x_{i+1} & x_{i+2} & \cdots & x_{i+j-1} \end{bmatrix}.   (10)
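As a reading aid (this code does not appear in the paper, and all function names are ours), the structure matrices (5)-(6) and the block Hankel construction (8)-(9) can be sketched in a few lines of numpy. In the noise-free case, equation (4) then reduces to Y_{0|i-1} = Γ_i X_0 + H_i^d U_{0|i-1}, which the sketch below satisfies exactly on simulated data:

```python
import numpy as np

def extended_observability(A, C, i):
    # Gamma_i = [C; CA; ...; CA^(i-1)], equation (5)
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(i)])

def deterministic_toeplitz(A, B, C, D, i):
    # H_i^d of equation (6): lower block triangular Toeplitz built from
    # the impulse response (Markov) parameters D, CB, CAB, ...
    l, m = D.shape
    H = np.zeros((l * i, m * i))
    markov = [D] + [C @ np.linalg.matrix_power(A, k) @ B for k in range(i - 1)]
    for r in range(i):
        for c in range(r + 1):
            H[r * l:(r + 1) * l, c * m:(c + 1) * m] = markov[r - c]
    return H

def block_hankel(signal, i, j):
    # Block Hankel matrix as in (8)-(9); signal has shape (N, dim), N >= i + j - 1.
    # Block row r holds signal[r], signal[r+1], ..., signal[r+j-1] as columns.
    return np.vstack([signal[r:r + j].T for r in range(i)])
```

For a trajectory of a noise-free system, each column of `block_hankel(y, i, j)` stacks i consecutive outputs, so the matrix equals Γ_i [x_0 ... x_{j-1}] plus H_i^d times the input Hankel matrix, column by column.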
In what follows, we will use the matrices A ∈ R^{p×j} and B ∈ R^{q×j}.

Definition (Orthogonal projections). The orthogonal projection of the row space of A into the row space of B is denoted by A/B and defined as²:

² Here † denotes the Moore-Penrose pseudo-inverse. This generalized inverse could be replaced by less ``restricted'' generalized inverses, but this will not be pursued here.
A/B = A B^† B.

A/B^⊥ is the projection of the row space of A into B^⊥, the orthogonal complement of the row space of B, for which we have A/B^⊥ = A − A/B.
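As an illustration (not from the paper), the projection A/B = A B^† B is one line of numpy, and the orthogonal-complement part A/B^⊥ = A − A/B follows:

```python
import numpy as np

def row_project(Arows, Brows):
    # A/B = A B^dagger B: orthogonal projection of the row space of A
    # onto the row space of B (dagger = Moore-Penrose pseudo-inverse)
    return Arows @ np.linalg.pinv(Brows) @ Brows
```

For example, with A = [1 1 0] and B = [1 0 0], the projection keeps the first coordinate and A − A/B keeps the rest.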
2.2. The two basic steps in subspace identification

In this section we will explore the two main steps that all subspace algorithms consist of (see Fig. 1). The first step always performs a weighted projection of the row space of the previously defined data Hankel matrices. From this projection, the extended observability matrix Γ_i and/or an estimate X̂_i of the state sequence X_i can be retrieved. In the second step, the system matrices A, B, C, D and Q, S, R are determined. As shown in Fig. 1, a clear distinction can be made between the algorithms that use the extended observability matrix Γ_i to obtain the state space matrices, and those using the estimated state sequence X̂_i.
2.2.1. First step: finding the state sequence and/or the extended observability matrix

In this section, we show how an orthogonal projection with data block Hankel matrices forms one of the key elements in subspace system identification algorithms. All subspace methods start from the previously presented matrix input-output equation (4). It states that the block Hankel matrix containing the future outputs Y_f is related in a linear way to the future input block Hankel matrix U_f and the future state sequence X_i. The basic idea of subspace identification now is to recover the Γ_i X_i term of this equation. This is a particularly interesting term, since knowledge of either Γ_i or X_i leads to the system parameters (see Section 2.2.2). Moreover, Γ_i X_i is a rank deficient term (of rank n, i.e. the system order!), which means that once Γ_i X_i is known, Γ_i, X_i and the order n can simply be found from an SVD.
How can an estimate of Γ_i X_i be extracted from the above equation? For this we need the previously defined notion of orthogonal projection. By projecting the row space of Y_f into the orthogonal complement U_f^⊥ of the row space of U_f, we find:

Y_f / U_f^⊥ = Γ_i X_i / U_f^⊥ + H_i^s M_f / U_f^⊥ + N_f / U_f^⊥.
Since it is assumed that the noise is uncorrelated with the inputs, we have that:

M_f / U_f^⊥ = M_f,   N_f / U_f^⊥ = N_f.

Therefore:

Y_f / U_f^⊥ = Γ_i X_i / U_f^⊥ + H_i^s M_f + N_f.
The following step consists in weighting this projection to the left and to the right with some matrices W_1 and W_2:

W_1 \cdot (Y_f / U_f^⊥) \cdot W_2 = \underbrace{W_1 Γ_i}_{1.} \cdot \underbrace{(X_i / U_f^⊥) W_2}_{2.} + \underbrace{W_1 (H_i^s M_f + N_f) W_2}_{3.}
Of course, the inputs U_f and the weighting matrices W_1 and W_2 cannot be chosen arbitrarily; they should satisfy the following three conditions:

1. rank(W_1 Γ_i) = rank(Γ_i)   (11)

2. rank((X_i / U_f^⊥) W_2) = rank(X_i)   (12)

3. W_1 (H_i^s M_f + N_f) W_2 = 0   (13)
The first two conditions guarantee that the rank-n property of Γ_i X_i is preserved after projection onto U_f^⊥ and weighting by W_1 and W_2. The third condition expresses that W_2 should be uncorrelated with the noise sequences w_k and v_k. If these three conditions are satisfied, we have that:

O_i ≜ W_1 \cdot (Y_f / U_f^⊥) \cdot W_2   (14)
    = W_1 Γ_i (X_i / U_f^⊥) W_2
Fig. 1. Subspace algorithms always consist of two main steps. In Section 2.2.1 it is explained how, in the first step, the extended observability matrix Γ_i and an estimate X̂_i of the state sequence X_i are determined. In the second step the system matrices A, B, C, D and the noise covariance matrices Q, R, S are calculated using one of the algorithms described in Section 2.2.2.
with SVD:

O_i = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} S_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}.

The following important properties can now be stated:

rank(O_i) = n,
W_1 Γ_i = U_1 S_1^{1/2},
(X_i / U_f^⊥) W_2 = S_1^{1/2} V_1^T.
Obviously, the singular value decomposition of the matrix W_1 \cdot (Y_f / U_f^⊥) \cdot W_2 delivers the order n of the system. Moreover, from the left singular vectors corresponding to the non-zero singular values, the extended observability matrix Γ_i can be found (up to a similarity transformation), whereas the right singular vectors contain information about the states X_i. For an appropriate choice of the weighting matrix W_2, the matrix

X̂_i ≜ (X_i / U_f^⊥) W_2   (15)

can indeed be considered as an estimate of the state sequence X_i. It was shown in [4] that, for a particular choice of W_2, X̂_i is a Kalman filter estimate of X_i. One might wonder about the effect of choosing the weights W_1 and W_2 in (14). Without going into details here, it suffices to say that, by choosing appropriate weighting matrices W_1 and W_2, all subspace algorithms for LTI systems can be interpreted in the above framework, including N4SID [7], MOESP [8], CVA [9], basic-4SID and IV-4SID [10]. It should be noted that for the basic-4SID algorithm, condition (13) is not satisfied, which implies that in general this method is not consistent. We present in Fig. 2 the acronyms of these algorithms and the weights to be plugged into (14). We refer to [4,11,12] for proofs and details.
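To make the first step concrete, here is a small self-contained numpy sketch (system matrices, dimensions and seed are all hypothetical, and the data are simulated without noise, so the consistency caveats above do not arise). With identity weights W_1 = W_2 = I, the projection Y_f / U_f^⊥ is computed explicitly and its numerical rank reveals the order n:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical 2nd-order SISO system, simulated noise-free for illustration.
A = np.array([[0.8, 0.2], [0.0, 0.6]])
B = np.array([[1.0], [0.7]])
C = np.array([[1.0, 0.5]])
D = np.array([[0.1]])
i, j = 5, 400
N = 2 * i + j - 1

u = rng.standard_normal((N, 1))
y = np.zeros((N, 1))
x = np.zeros(2)
for k in range(N):
    y[k] = C @ x + D @ u[k]
    x = A @ x + (B @ u[k]).ravel()

def hankel(s, first, blocks):
    # block Hankel matrix with `blocks` block rows and j columns, from sample `first`
    return np.vstack([s[first + r:first + r + j].T for r in range(blocks)])

U_f, Y_f = hankel(u, i, i), hankel(y, i, i)   # future input/output Hankel matrices

# Project the row space of Y_f onto the orthogonal complement of the row space of U_f.
Pi_perp = np.eye(j) - np.linalg.pinv(U_f) @ U_f
O = Y_f @ Pi_perp          # noise-free: equals Gamma_i * (X_i / U_f-perp)

sv = np.linalg.svd(O, compute_uv=False)
n_est = int(np.sum(sv > 1e-8 * sv[0]))   # numerical rank = estimated order
```

The projection kills the deterministic term H_i^d U_f, so the remaining matrix has rank n = 2 and only two singular values stand out, exactly as the text predicts.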
2.2.2. Second step: finding the state space model

What have we done so far? We have found how an estimate X̂_i of the state sequence X_i and the extended observability matrix Γ_i can be retrieved from the weighted projection (14) of the future outputs Y_f into the orthogonal complement of the future inputs U_f. At this point, we can clearly distinguish two classes of subspace algorithms. The first class uses the state estimates X̂_i (the right singular vectors) to find the state space model. Algorithms that follow this approach are N4SID [7] and CVA [9]. The second class of algorithms uses the extended observability matrix Γ_i (i.e. the left singular vectors) to find the model parameters. Examples in the literature are MOESP [8] and IV-4SID, basic-4SID [10,13,14].
2.2.2.1. Algorithms using an estimate X̂_i of the state sequence. Without going into the details [4,7], we mention that if the weights W_1 and W_2 correspond to those of the N4SID algorithm (see Fig. 2), the estimated state sequence X̂_i can be interpreted as the solution of a bank of Kalman filters, working in parallel on each of the columns of the matrix W_p. Besides X̂_i, we also need the state sequence X̂_{i+1}. This sequence can be obtained from a projection and new weights W_1, W_2 in (14), based on W_{0|i}, Y_{i+1|2i-1} and U_{i+1|2i-1} (see Section 2.1 for notations). This leads to the sequence O_{i+1} and the Kalman filter states X̂_{i+1}:

O_{i+1} = W_1 \cdot (Y_{i+1|2i-1} / U_{i+1|2i-1}^⊥) \cdot W_2
        = W_1 Γ_{i-1} X̂_{i+1} W_2.
System model: The state space matrices A, B, C and D can now be found by solving a simple set of overdetermined equations in a least squares sense [4,7]:

\begin{bmatrix} X̂_{i+1} \\ Y_{i|i} \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} X̂_i \\ U_{i|i} \end{bmatrix} + \begin{bmatrix} ρ_w \\ ρ_v \end{bmatrix},   (16)

with obvious definitions for ρ_w and ρ_v as residual matrices. This reduces to

\min_{A,B,C,D} \left\| \begin{bmatrix} X̂_{i+1} \\ Y_{i|i} \end{bmatrix} - \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} X̂_i \\ U_{i|i} \end{bmatrix} \right\|_F^2.
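The least-squares step (16) can be sketched in numpy as follows. For illustration we use exact noise-free states in place of the Kalman filter estimates X̂_i and X̂_{i+1}, so the recovery is exact; all names and numerical values are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.7, 0.3], [0.0, 0.5]])
B = np.array([[1.0], [0.2]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.4]])

j = 200
u = rng.standard_normal((1, j))
X = np.zeros((2, j + 1))    # exact states, standing in for the Kalman estimates
Y = np.zeros((1, j))
for k in range(j):
    Y[:, k] = C @ X[:, k] + D @ u[:, k]
    X[:, k + 1] = A @ X[:, k] + B @ u[:, k]

# Equation (16): [X_{i+1}; Y] = [A B; C D] [X_i; U] + residuals,
# solved in a least-squares sense via the pseudo-inverse.
lhs = np.vstack([X[:, 1:], Y])       # (n + l) x j
rhs = np.vstack([X[:, :j], u])       # (n + m) x j
Theta = lhs @ np.linalg.pinv(rhs)    # estimate of [[A, B], [C, D]]
A_est, B_est = Theta[:2, :2], Theta[:2, 2:]
C_est, D_est = Theta[2:, :2], Theta[2:, 2:]
```

With noisy Kalman filter state estimates instead of exact states, the same least-squares solve yields the (slightly biased) estimates discussed in the text.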
Noise model: The noise covariances Q, S and R can be estimated from the residuals ρ_w and ρ_v as:

\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}_i = \frac{1}{j} \begin{bmatrix} ρ_w \\ ρ_v \end{bmatrix} \begin{bmatrix} ρ_w^T & ρ_v^T \end{bmatrix} \geq 0,

where the index i denotes a bias induced for finite i, which disappears as i → ∞ (see further). As is obvious by construction, this matrix is guaranteed to be positive semi-definite. This is an important feature, since only positive definite covariances can lead to a physically realizable noise model.
There is an important observation to be made here: corresponding columns of X̂_i and of X̂_{i+1} are Kalman filter state estimates of X_i and X_{i+1}. It would lead us beyond the scope of the present paper to give all the mathematical details, but it should be mentioned that, although X̂_i and X̂_{i+1} are both Kalman filter estimates of the states of the system, the state sequences needed to initialize these Kalman filters are different. As a consequence, the set of equations (16) is not theoretically consistent, which means that the estimates of the system matrices A, B, C, D are slightly biased. One should refer to [4] for a more thorough explanation of this topic, and for more involved algorithms that provide consistent estimates of A, B, C, D and estimates of Q, R, S that are consistent as i → ∞. These algorithms tackle the origin of the bias (i.e. the difference in initial state for the Kalman filter sequences X̂_i and X̂_{i+1}) to find an unbiased version of the algorithm presented in this paper. The algorithms in [4] have moreover been optimized with respect to numerical robustness, bias and noise sensitivity. Matlab code is also provided for these algorithms. Since the aim of the present paper is only to give an overview of the existing methods, we have restricted ourselves here to a simple, but slightly biased, version of the more sophisticated N4SID algorithms.
2.2.2.2. Algorithms using the extended observability matrix Γ_i. Contrary to the previous class of algorithms, here the system matrices are determined in two separate steps: first, A and C are determined from Γ_i, while in a second step B and D are computed.

Determination of A and C: The matrices A and C can be determined from the extended observability matrix in different ways. All the methods make use of the shift invariance property of the matrix Γ_i, which implies that [15] (following Matlab notation):

A = Γ_i(1:l(i-1), :)^† \cdot Γ_i(l+1:li, :),   C = Γ_i(1:l, :).
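A small numpy check of the shift invariance step (with a hypothetical observable pair A, C of our own choosing): build Γ_i from (5), then recover A and C:

```python
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.7]])
C = np.array([[1.0, 1.0]])
l, n, i = 1, 2, 5

# Gamma_i = [C; CA; ...; CA^(i-1)]
Gamma = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(i)])

# Shift invariance: Gamma(1:l(i-1),:) * A = Gamma(l+1:l*i,:)
C_est = Gamma[:l, :]
A_est = np.linalg.pinv(Gamma[:l * (i - 1), :]) @ Gamma[l:, :]
```

Because the pair (A, C) is observable, the top l(i−1) rows of Γ_i have full column rank and the pseudo-inverse recovers A exactly.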
Determination of B and D: After the determination of A and C, the system matrices B and D have to be computed. Here we will only sketch one possible way to do so. From the input-output equation (4), we find that:

\underbrace{Γ_i^⊥ \cdot Y_f U_f^†}_{∈ R^{(li-n)×mi}} = \underbrace{Γ_i^⊥}_{∈ R^{(li-n)×li}} \underbrace{H_i^d}_{∈ R^{li×mi}}

where Γ_i^⊥ ∈ R^{(li-n)×li} is a full row rank matrix satisfying Γ_i^⊥ \cdot Γ_i = 0. Here, once again, the noise is cancelled out due to the assumption that the input u_k is uncorrelated with the noise. Observe that, with known matrices A, C, Γ_i^⊥, U_f and Y_f, this equation is linear in B and D.
Different extensions of the present algorithms to other classes of systems, such as bilinear systems, continuous-time systems, periodic systems, systems operating in closed loop, etc., can be found in [16].
3. Subspace identification vs. prediction error methods for some industrial data sets

In this section it is our purpose to make a direct comparison between prediction error methods (PEM) [17] and the subspace identification algorithms discussed above. First we will analyze some general differences between these two approaches. Further, we will apply both methods to the same data sets obtained from real-life applications.

Besides some conceptual novelties, such as the re-emphasizing of the state in the field of system identification (see Section 2), subspace methods are characterized by several advantages with respect to PEMs. One of them concerns the so-called parameterization problem, which is particularly non-trivial for systems with multiple outputs (see references in [17]). In subspace methods, on the contrary, the model is parameterized by the full state space model, and the model order is decided upon in the identification procedure. Further, there is no basic complication for subspace algorithms in going from SISO to MIMO systems. Also, a nonzero initial state poses no additional problems in terms of parameterization, which is not the case with input-output based parameterizations, typically used in PEMs. Finally, stable systems are treated in exactly the same way as unstable ones. Another main advantage is that subspace methods, when implemented correctly, have better numerical properties than PEMs. For instance, subspace identification methods do not involve nonlinear optimization techniques, which means they are fast (since non-iterative) and accurate (since no problems with local minima occur). The price to be paid is that they are suboptimal. In order to demonstrate this trade-off, we have compared two methods on 10 practical examples. The 10 industrial examples are mechanical, from the process industry, and thermal (see [4], pp. 189-196, and the references therein for more details). It should be noted that all the data sets that are discussed here can be downloaded freely from the internet site DAISY³.
N4SID: ``Robustified'' version of the N4SID algorithm presented above. For more details we refer to [4].

PEM: The prediction error algorithm described in [18], which uses a full parameterization of the state space model combined with regularization. The Matlab implementation of the algorithm was obtained from McKelvey [19]. As an initial starting value of the model we took the result of the above-mentioned N4SID subspace identification algorithm.
What is of interest here is the trade-off, for each of the above methods, between two quantities:

• The computational requirements (measured in the number of floating point operations as indicated by Matlab).
• The prediction error on the validation data set, defined as:
\epsilon = 100 \cdot \frac{1}{l} \sum_{c=1}^{l} \sqrt{ \frac{ \sum_{k=1}^{s} \left( (y_k)_c - (y_k^p)_c \right)^2 }{ \sum_{k=1}^{s} (y_k)_c^2 } },

where y_k^p is the one-step-ahead predicted output.
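The validation error ε above can be sketched as a short numpy function (function name ours, not from the paper): a per-output relative RMS error, averaged over the l output channels and expressed in percent.

```python
import numpy as np

def prediction_error_pct(y, y_pred):
    # y, y_pred: arrays of shape (s, l) -- s samples of the l outputs.
    # epsilon = 100 * (1/l) * sum_c sqrt( sum_k (y_kc - yp_kc)^2 / sum_k y_kc^2 )
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.sum((y - y_pred) ** 2, axis=0)   # per output channel
    den = np.sum(y ** 2, axis=0)
    return 100.0 * np.mean(np.sqrt(num / den))
```

A perfect predictor scores 0; predicting zero everywhere scores 100, which matches the normalization used in Table 1.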
We can say that, for nine out of the 10 practical examples, the error for the subspace methods is only 15% larger than for the prediction error methods. This illustrates the fact that subspace identification methods are suboptimal w.r.t. prediction error methods, which minimize a maximum likelihood criterion. On the other hand, this does not mean that PEM are always better than subspace methods. From the glass oven example, for instance, it can be seen that the error for PEM is much larger than for subspace identification. This means that the PEM algorithm only found a local optimum which, in this case, appears to be much worse than the suboptimal solution of N4SID. Furthermore, from a computational point of view, the subspace methods are about 20 times faster (see Fig. 3).

³ http://www.esat.kuleuven.ac.be/sista/daisy

The conclusion of this comparison is that subspace methods present a valid alternative to the ``classical'' versions of prediction error methods (PEM). They are fast, because no iterative nonlinear optimization methods are involved, and moreover they are sufficiently accurate in practical applications. From a theoretical point of view, prediction error methods are more accurate than subspace methods, as they optimize an objective function. However, if a good initial estimate of the model parameters is not available, the solution one finds might not be the optimal solution (due to local minima in the optimization problem).
4. Conclusions

In this paper we have given a brief overview of linear subspace system identification methods. We made a clear distinction between methods using the states and methods starting from the observability matrix to recover the system parameters. Further, a direct comparison between the N4SID subspace identification method and a PEM identification method was made on the basis of accuracy and computational time. To this end, both methods were applied on a wide variety of real-life
Fig. 2. This table interprets different existing subspace identification algorithms in a unifying framework. All these algorithms first calculate a weighted projection (14), followed by an SVD. The left and right weighting matrices are W_1 and W_2, respectively. The first two algorithms use the state estimates X̂_i (the right singular vectors) to find the system matrices, while the last three algorithms are based on the extended observability matrix Γ_i (the left singular vectors). The weighting matrix appearing in the IV-4SID method contains the instrumental variables.
Fig. 3. The above plot shows the prediction errors of Table 1 (x axis) and the computational time of Table 2 (y axis) for PEM (o) and N4SID (*). Both methods have been applied to the different industrial data sets. The computational complexity has been normalized to the value obtained by N4SID, while the prediction error has been normalized to the value obtained by PEM. One can clearly see that the subspace method is much faster than the prediction error method, although a little less accurate.
Table 2
Computational complexity of the algorithms, i.e. the number of floating point operations (flops) computed by Matlab^a

                    PEM     N4SID
Glass tubes         515     62
Dryer               2.1     4.5
Glass oven          791     81
Wing flutter        46      15
Robot arm           7       12
Evaporator          942     103
Chemical            3640    186
CD player arm       141     38
Ball and beam       14      14.5
Wall temperature    48      40

^a The optimization based algorithm (PEM) is a lot slower for multivariable systems (up to a factor 20 for the chemical process example) than the subspace algorithm (N4SID).
Table 1
Prediction errors for the validation data^a

                    PEM     N4SID
Glass tubes         13.8    14
Dryer               4.4     4.5
Glass oven          56.4    13.2
Wing flutter        1.2     1.4
Robot arm           1.5     2.1
Evaporator          22.4    22.8
Chemical process    60.7    60.2
CD player arm       16.0    16.1
Ball and beam       36.2    36.5
Wall temperature    16.0    16.0

^a If no validation data was available, the identification data was used. PEM computes the most accurate models (for almost all cases). We conclude from these tables that the subspace identification algorithms compute accurate models, and that these models (if needed) provide excellent initial starting values for optimization algorithms.
data sets. The conclusion is that subspace methods and prediction error methods are complementary, in the sense that a good initial model can be quickly obtained with subspace methods, while a further optimization of the parameters (if needed at all) can be done with prediction error methods.
Acknowledgements

W. Favoreel is a Research Assistant supported by the I.W.T. (Institute for Science and Technology, Flanders). B. De Moor is a Research Associate supported by the Fund for Scientific Research (F.W.O. Vlaanderen). P. Van Overschee was a Senior Research Assistant of the Fund for Scientific Research (F.W.O. Vlaanderen) and is now working at ISMC (Intelligent System Modeling and Control). Work supported by the Flemish Government (Administration of Science and Innovation (Concerted Research Action MIPS: Model-based Information Processing Systems, Bilateral International Collaboration: Modeling and Identification of nonlinear systems, IWT-Eureka SINOPSYS: Model-based structural monitoring using in-operation system identification), FWO-Vlaanderen: Analysis and design of matrix algorithms for adaptive signal processing, system identification and control, based on concepts from continuous time system theory and differential geometry, Numerical algorithms for subspace system identification: Extension towards specific applications, FWO-Onderzoeksgemeenschappen: Identification and Control of Complex Systems, Advanced Numerical Methods for Mathematical Modeling); Belgian Federal Government (Interuniversity Attraction Pole IUAP IV/02: Modeling, Identification, Simulation and Control of Complex Systems, Interuniversity Attraction Pole IUAP IV/24: IMechS: Intelligent Mechatronic Systems); European Commission (Human Capital and Mobility: SIMONET: System Identification and Modeling Network, SCIENCE-ERNSI: European Research Network for System Identification).
References

[1] A. Van Der Veen, E.F. Deprettere, A.L. Swindlehurst, Subspace-based signal analysis using singular value decompositions, Proceedings of the IEEE 81 (9) (1993) 1277-1308.
[2] M. Viberg, Subspace-based methods for the identification of linear time-invariant systems, Automatica 31 (12) (1995) 1835-1852.
[3] B. De Moor, P. Van Overschee, Numerical algorithms for subspace state space system identification, in: A. Isidori (Ed.), Trends in Control. A European Perspective, Springer, Italy, 1995, pp. 385-422.
[4] P. Van Overschee, B. De Moor, Subspace Identification for Linear Systems: Theory, Implementation, Applications, Kluwer Academic, Dordrecht, The Netherlands, 1996.
[5] L. Ljung, Developments for the system identification toolbox for Matlab, in: Proc. of the 11th IFAC Symposium on System Identification, SYSID '97, 8-11 July, Kitakyushu, Japan, vol. 2, pp. 969-973, 1997.
[6] B. De Moor, Mathematical concepts and techniques for modeling of static and dynamic systems, PhD thesis, Department of Electrical Engineering, Katholieke Universiteit Leuven, Belgium, 1988.
[7] P. Van Overschee, B. De Moor, N4SID: subspace algorithms for the identification of combined deterministic and stochastic systems, Automatica 30 (1) (1994) 75-93.
[8] M. Verhaegen, P. Dewilde, Subspace identification, part I: the output-error state space model identification class of algorithms, Internat. J. Control 56 (1992) 1187-1210.
[9] W.E. Larimore, Canonical variate analysis in identification, filtering and adaptive control, in: Proc. of the 29th Conference on Decision and Control, CDC 90, HI, pp. 596-604, 1990.
[10] M. Viberg, Subspace methods in system identification, in: Proc. of the 10th IFAC Symposium on System Identification, SYSID '94, 4-6 July, Copenhagen, Denmark, vol. 1, pp. 1-12, 1994.
[11] M. Jansson, B. Wahlberg, On consistency of subspace based system identification methods, in: Proc. of the 13th IFAC World Congress, San Francisco, CA, pp. 181-186, 1996.
[12] P. Van Overschee, B. De Moor, Choice of state space basis in combined deterministic-stochastic subspace identification, Automatica 31 (2) (1995) 1877-1883.
[13] T. Gustafsson, System identification using subspace-based instrumental variable methods, in: Proc. of the 11th IFAC Symposium on System Identification, SYSID '97, 8-11 July, Kitakyushu, Japan, vol. 3, pp. 1119-1124, 1997.
[14] B. Ottersten, M. Viberg, A subspace based instrumental variable method for state space system identification, in: Proc. of the 10th IFAC Symposium on System Identification, SYSID '94, 4-6 July, Copenhagen, Denmark, vol. 2, pp. 139-144, 1994.
[15] S.Y. Kung, A new identification method and model reduction algorithm via singular value decomposition, in: 12th Asilomar Conf. on Circuits, Systems and Comp., pp. 705-714, 1978.
[16] B. De Moor, P. Van Overschee, W. Favoreel, Numerical Algorithms for Subspace State Space System Identification: An Overview, Birkhauser, 1998.
[17] L. Ljung, System Identification: Theory for the User, Prentice Hall, Englewood Cliffs, NJ, 1987.
[18] T. McKelvey, On state-space models in system identification, PhD thesis, Department of Electrical Engineering, Linköping University, Sweden, 1994.
[19] T. McKelvey, SSID: a Matlab toolbox for multivariable state-space model identification, Technical report, Dept. of EE, Linköping University, Linköping, Sweden, 1994.