Linear System Theory and Design (Part5)
and

$\hat y(s) = N(s)\,D^{-1}(s)\,\hat u(s)$   (7.2)

Then we have

$\dot x_2 = x_1,\quad \dot x_3 = x_2,\quad \ldots,\quad \dot x_n = x_{n-1}$   (7.6)

Substituting (7.5) into (7.3), we develop an equation for the state variable $x_1$:

$(s^n + \alpha_1 s^{n-1} + \cdots + \alpha_{n-1} s + \alpha_n)\,\hat v(s) = \hat u(s)$

or

$s\hat x_1(s) = -\alpha_1 \hat x_1(s) - \cdots - \alpha_{n-1}\hat x_{n-1}(s) - \alpha_n \hat x_n(s) + \hat u(s)$
Theorem 7.1

Thus we have

$Av = \lambda_1 v,\quad A^2 v = A(Av) = \lambda_1 Av = \lambda_1^2 v,\quad \ldots,\quad A^{n-1} v = \lambda_1^{n-1} v$

It is easy to compute

$Ov = \begin{bmatrix} c \\ cA \\ \vdots \\ cA^{n-1} \end{bmatrix} v
= \begin{bmatrix} cv \\ \lambda_1 cv \\ \vdots \\ \lambda_1^{n-1} cv \end{bmatrix}
= cv \begin{bmatrix} 1 \\ \lambda_1 \\ \vdots \\ \lambda_1^{n-1} \end{bmatrix}$

If $cv = 0$, then $Ov = 0$ and hence $\mathrm{Rank}(O) < n$.
Suppose $v = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}^T$; then $Av = \lambda_1 v$ can be rewritten as

$-\alpha_1 v_1 - \alpha_2 v_2 - \cdots - \alpha_n v_n = \lambda_1 v_1$
$v_1 = \lambda_1 v_2$
$v_2 = \lambda_1 v_3$
$\vdots$
$v_{n-1} = \lambda_1 v_n$

That is,

$(\lambda_1^n + \alpha_1 \lambda_1^{n-1} + \alpha_2 \lambda_1^{n-2} + \cdots + \alpha_{n-1}\lambda_1 + \alpha_n)\, v_n = 0$

or

$D(\lambda_1)\, v_n = 0$

From $cv = 0$, we have

$(\beta_1 \lambda_1^{n-1} + \beta_2 \lambda_1^{n-2} + \cdots + \beta_{n-1}\lambda_1 + \beta_n)\, v_n = 0$

or $N(\lambda_1)\, v_n = 0$. Since $v \neq 0$ forces $v_n \neq 0$, $\lambda_1$ would be a common root of $D(s)$ and $N(s)$, contradicting their coprimeness.
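The argument above can be checked numerically. The following sketch (an assumed example, not from the text) uses $D(s) = s^2 + 3s + 2 = (s+1)(s+2)$ and $N(s) = s+1$, which share the root $s = -1$, and shows that the controllable-form realization is then unobservable:

```python
import numpy as np

# Assumed example: D(s) = s^2 + 3s + 2, N(s) = s + 1 (common root s = -1).
A = np.array([[-3.0, -2.0],
              [ 1.0,  0.0]])      # companion form of D(s)
c = np.array([[1.0, 1.0]])        # coefficients of N(s) = 1*s + 1

O = np.vstack([c, c @ A])         # observability matrix [c; cA]
print(np.linalg.matrix_rank(O))   # 1 < n = 2, so unobservable

# The eigenvector of A for lambda1 = -1 satisfies cv = 0, as the proof predicts.
w, V = np.linalg.eig(A)
v = V[:, np.argmin(abs(w + 1))]   # eigenvector for the eigenvalue closest to -1
print(abs(c @ v))                 # ~0
```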
Controllable Canonical Form

$\dot x = Ax + bu = \begin{bmatrix}
-\alpha_1 & -\alpha_2 & \cdots & -\alpha_{n-1} & -\alpha_n \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} u$

$y = cx = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_n \end{bmatrix} x$

Observable Canonical Form 1 (the dual of the form above)

$\dot x = Ax + bu = \begin{bmatrix}
-\alpha_1 & 1 & 0 & \cdots & 0 \\
-\alpha_2 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \\
-\alpha_{n-1} & 0 & 0 & \cdots & 1 \\
-\alpha_n & 0 & 0 & \cdots & 0
\end{bmatrix} x + \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{bmatrix} u$

$y = cx = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} x$
Observable Canonical Form 2

$\dot x = Ax + bu = \begin{bmatrix}
0 & 0 & \cdots & 0 & -\alpha_n \\
1 & 0 & \cdots & 0 & -\alpha_{n-1} \\
0 & 1 & \cdots & 0 & -\alpha_{n-2} \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & -\alpha_1
\end{bmatrix} x + \begin{bmatrix} \beta_n \\ \beta_{n-1} \\ \beta_{n-2} \\ \vdots \\ \beta_1 \end{bmatrix} u$

$y = cx = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix} x$
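As a sketch of how such a realization is assembled (assuming NumPy; the test case $\hat g(s) = (3s-4)/(s^2+2s+2)$ is the coprime fraction found later in Example 7.1):

```python
import numpy as np

# Controllable canonical form for g(s) = (3s - 4)/(s^2 + 2s + 2)
# (an assumed test case; alpha_i from D(s), beta_i from N(s)).
a = [2.0, 2.0]          # alpha_1, alpha_2
beta = [3.0, -4.0]      # beta_1, beta_2
n = len(a)

A = np.zeros((n, n))
A[0, :] = -np.asarray(a)        # first row: -alpha_1 ... -alpha_n
A[1:, :-1] = np.eye(n - 1)      # subdiagonal identity: x2' = x1, ...
b = np.zeros((n, 1)); b[0, 0] = 1.0
c = np.asarray(beta).reshape(1, n)

def tf(s):
    """Evaluate c (sI - A)^{-1} b at a point s."""
    return (c @ np.linalg.solve(s * np.eye(n) - A, b))[0, 0]

s0 = 1.0
print(tf(s0), (3*s0 - 4) / (s0**2 + 2*s0 + 2))   # both -0.2
```

The check confirms the realization reproduces the transfer function at a sample point.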
7.2.1 Minimal Realizations

Write $N(s) = \bar N(s) R(s)$ and $D(s) = \bar D(s) R(s)$, where

$\bar N(s) = N_0^* + N_1^* s + \cdots,\qquad \bar D(s) = D_0^* + D_1^* s + \cdots$   (7.30')

Then $S\,m = 0$, where

$m = \begin{bmatrix} -N_0^* & D_0^* & -N_1^* & D_1^* & \cdots \end{bmatrix}^T$
Theorem 7.4
Consider $\hat g(s) = N(s)/D(s)$. We use the coefficients of $D(s)$ and $N(s)$ to form the Sylvester resultant $S$ in (7.28) and search its linearly independent columns from left to right. Then we have

$\deg \hat g(s) = $ number of linearly independent N-columns

So $\mathrm{Rank}(S) = n + \deg \hat g(s)$.
$z_b = \begin{bmatrix} 4 & 2 & -3 & 2 & 0 & 1 \end{bmatrix}^T$

So we have the coprime fraction:

$\hat g(s) = \dfrac{N(s)}{D(s)} = \dfrac{6s^3 + s^2 + 3s - 20}{2s^4 + 7s^3 + 15s^2 + 16s + 10} = \dfrac{3s - 4}{s^2 + 2s + 2}$
7.3.1 QR Decomposition

Consider an n×m matrix M. Then there exists an n×n orthogonal matrix Q such that

$QM = R$

where R is an upper triangular matrix of the same dimensions as M. So we have

$M = \bar Q R$

where $\bar Q := Q^{-1} = Q^T$. The above equation is called the QR decomposition of M.

Conclusion: the linear independence of the columns of M is preserved in the columns of R.

To determine whether a column of R is linearly dependent on the columns to its left, we only need to check whether its diagonal entry is zero or not.
Let us apply QR decomposition to the resultant
in Example 7.1.
%%%%%%%%%%%%%%%%
D=[10 16 15 7 2]; N=[-20 3 1 6 0];
S=[D 0 0 0;N 0 0 0;0 D 0 0;0 N 0 0;0 0 D 0;0 0 N 0;0 0 0 D;0 0 0 N]';
[Q,R]=qr(S);
%%%%%%%%%%%%%%%%
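The same computation can be sketched in NumPy (an assumed translation of the MATLAB snippet above; coefficients are ascending in s). The near-zero diagonal entries of R mark the dependent N-columns, and the null vector of the first six columns gives the zb found earlier:

```python
import numpy as np

D = np.array([10, 16, 15, 7, 2], float)   # D(s) = 2s^4+7s^3+15s^2+16s+10
N = np.array([-20, 3, 1, 6, 0], float)    # N(s) = 6s^3+s^2+3s-20

def col(c, k, m=8):
    out = np.zeros(m)
    out[k:k + len(c)] = c        # coefficients of s^k * c(s)
    return out

# Sylvester resultant: columns are s^k D(s), s^k N(s) for k = 0..3.
S = np.column_stack([col(c, k) for k in range(4) for c in (D, N)])

# QR: near-zero diagonal entries of R flag the dependent (N-)columns.
R = np.linalg.qr(S, mode='r')
print(np.round(np.abs(np.diag(R)), 4))

# Null vector of the first six columns, normalized so the last entry is 1.
z = np.linalg.svd(S[:, :6])[2][-1]
zb = z / z[-1]
print(np.round(zb, 4))           # [4, 2, -3, 2, 0, 1]
```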
7.6 Degree of Transfer Matrix

This section extends the concept of degree for scalar rational functions to rational matrices. Given a proper rational matrix $\hat G(s)$, we assume throughout this section that every entry of $\hat G(s)$ is a coprime fraction.
$\det \begin{bmatrix} \dfrac{s}{s+1} & \dfrac{1}{s+3} \\[4pt] \dfrac{-1}{s+1} & \dfrac{1}{s} \end{bmatrix} = \dfrac{s+4}{(s+1)(s+3)}$
$\det \begin{bmatrix} \dfrac{1}{(s+1)(s+2)} & \dfrac{1}{s+3} \\[4pt] \dfrac{1}{(s+1)(s+2)} & \dfrac{1}{s} \end{bmatrix} = \dfrac{3}{s(s+1)(s+2)(s+3)}$
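The second determinant can be spot-checked numerically at a sample point (a minimal sketch, assuming NumPy):

```python
import numpy as np

# Evaluate det [1/((s+1)(s+2)), 1/(s+3); 1/((s+1)(s+2)), 1/s]
# and compare with 3/(s(s+1)(s+2)(s+3)) at a test point.
s = 1.7
G = np.array([[1/((s+1)*(s+2)), 1/(s+3)],
              [1/((s+1)*(s+2)), 1/s   ]])
lhs = np.linalg.det(G)
rhs = 3 / (s*(s+1)*(s+2)*(s+3))
print(lhs, rhs)   # the two values agree
```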
We have

[the characteristic polynomial of $\hat G(s)$] $= \det D(s)$ = the least common denominator of all minors of $\hat G(s)$

For example, the polynomial matrix

$M(s) = \begin{bmatrix} 2s & s^3 + 2s + 1 & 1 \\ 2 & s^2 & 1 \end{bmatrix}$

has column degrees $\delta_{c1} = 1$, $\delta_{c2} = 3$, $\delta_{c3} = 0$ and row degrees $\delta_{r1} = 3$, $\delta_{r2} = 2$.
Corollary 7.8
Let N(s) and D(s) be q×p and q×q polynomial matrices, and let D(s) be row reduced. Then the rational matrix $D^{-1}(s)N(s)$ is proper (strictly proper) if and only if

$\delta_{ri} N(s) \le \delta_{ri} D(s)$  ($\delta_{ri} N(s) < \delta_{ri} D(s)$)  for $i = 1, 2, \ldots, q$.
Given the left fraction $\hat G(s) = D^{-1}(s)N(s)$, we look for a right fraction $\hat G(s) = \bar N(s)\bar D^{-1}(s)$; that is, $D^{-1}(s)N(s) = \bar N(s)\bar D^{-1}(s)$,

or

$D(s)\bar N(s) - N(s)\bar D(s) = 0$   (7.82)

Writing $D(s) = D_0 + D_1 s + \cdots + D_4 s^4$, $N(s) = N_0 + N_1 s + \cdots + N_4 s^4$, $\bar N(s) = \bar N_0 + \bar N_1 s + \cdots$, and $\bar D(s) = \bar D_0 + \bar D_1 s + \cdots$, where $D_i \in \mathbb{R}^{q\times q}$, $N_i \in \mathbb{R}^{q\times p}$, $\bar D_i \in \mathbb{R}^{p\times p}$ and $\bar N_i \in \mathbb{R}^{q\times p}$ are all constant matrices. By (7.82), we have
$SM := \begin{bmatrix}
D_0 & N_0 & 0 & 0 & 0 & 0 & 0 & 0 \\
D_1 & N_1 & D_0 & N_0 & 0 & 0 & 0 & 0 \\
D_2 & N_2 & D_1 & N_1 & D_0 & N_0 & 0 & 0 \\
D_3 & N_3 & D_2 & N_2 & D_1 & N_1 & D_0 & N_0 \\
D_4 & N_4 & D_3 & N_3 & D_2 & N_2 & D_1 & N_1 \\
0 & 0 & D_4 & N_4 & D_3 & N_3 & D_2 & N_2 \\
0 & 0 & 0 & 0 & D_4 & N_4 & D_3 & N_3 \\
0 & 0 & 0 & 0 & 0 & 0 & D_4 & N_4
\end{bmatrix}
\begin{bmatrix} -\bar N_0 \\ \bar D_0 \\ -\bar N_1 \\ \bar D_1 \\ -\bar N_2 \\ \bar D_2 \\ -\bar N_3 \\ \bar D_3 \end{bmatrix} = 0$   (7.83)
This equation is the coefficient-matrix version of (7.82), and the matrix $S \in \mathbb{R}^{2nq \times n(p+q)}$ will be called a generalized resultant.
Let us search the linearly independent columns of S in order from left to right. We find that:

Every column in a D block column is linearly independent of its left-hand-side (LHS) columns.

For each N block column, we use "Ni column" to denote its ith column.

If the Ni column in some N block is linearly dependent on its LHS columns, then all subsequent Ni columns will be linearly dependent on their LHS columns.

Let $\mu_i$ ($i = 1, 2, \ldots, p$) be the number of linearly independent Ni columns in S. They are called the column indices of $\hat G(s)$.

The first Ni column that becomes linearly dependent on its LHS columns is called the primary dependent Ni column. Clearly, the $(\mu_i + 1)$th Ni column is the primary dependent column.
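The left-to-right search can be sketched generically with a rank test on growing column prefixes (assuming NumPy; the matrix used here is the scalar resultant of Example 7.1, purely as an assumed illustration):

```python
import numpy as np

# Build the 8x8 scalar resultant of Example 7.1 (assumed test matrix).
D = np.array([10, 16, 15, 7, 2], float)
N = np.array([-20, 3, 1, 6, 0], float)
cols = []
for k in range(4):
    for c in (D, N):
        v = np.zeros(8); v[k:k+len(c)] = c
        cols.append(v)
S = np.column_stack(cols)

# A column is independent iff adding it increases the rank of the prefix.
independent = []
for j in range(S.shape[1]):
    r_before = np.linalg.matrix_rank(S[:, :j]) if j else 0
    r_after = np.linalg.matrix_rank(S[:, :j+1])
    independent.append(r_after > r_before)
print(independent)
# All D-columns (positions 0, 2, 4, 6) are independent;
# only the first two N-columns (positions 1, 3) are independent.
```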
Example 7.7 Find a right coprime fraction of the transfer matrix

$\hat G(s) = \begin{bmatrix} \dfrac{4s - 10}{2s + 1} & \dfrac{3}{s+2} \\[6pt] \dfrac{1}{(2s+1)(s+2)} & \dfrac{s+1}{(s+2)^2} \end{bmatrix}$   (7.84)

Its left fraction $\hat G(s) = D^{-1}(s)N(s)$ has

$D(s) = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 0 \\ 0 & 12 \end{bmatrix} s + \begin{bmatrix} 2 & 0 \\ 0 & 9 \end{bmatrix} s^2 + \begin{bmatrix} 0 & 0 \\ 0 & 2 \end{bmatrix} s^3$

$N(s) = \begin{bmatrix} -20 & 3 \\ 2 & 1 \end{bmatrix} + \begin{bmatrix} -2 & 6 \\ 1 & 3 \end{bmatrix} s + \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix} s^2$
The generalized resultant, with block columns ordered D-block, N-block, D-block, N-block, D-block, N-block, is

$S = \begin{bmatrix}
2 & 0 & -20 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
5 & 0 & -2 & 6 & 2 & 0 & -20 & 3 & 0 & 0 & 0 & 0 \\
0 & 12 & 1 & 3 & 0 & 4 & 2 & 1 & 0 & 0 & 0 & 0 \\
2 & 0 & 4 & 0 & 5 & 0 & -2 & 6 & 2 & 0 & -20 & 3 \\
0 & 9 & 0 & 2 & 0 & 12 & 1 & 3 & 0 & 4 & 2 & 1 \\
0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 & 5 & 0 & -2 & 6 \\
0 & 2 & 0 & 0 & 0 & 9 & 0 & 2 & 0 & 12 & 1 & 3 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 9 & 0 & 2 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0
\end{bmatrix}$
Deleting the 8th column (the primary dependent N2 column) and the 12th column of S gives S1 = [S(:,1:7) S(:,9:11)], whose last six rows are

$\begin{bmatrix}
0 & 0 & 0 & 0 & 2 & 0 & 4 & 5 & 0 & -2 \\
0 & 2 & 0 & 0 & 0 & 9 & 0 & 0 & 12 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\
0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 9 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0
\end{bmatrix}$
z1=null(S1); z1b=z1/z1(10); zz1b=[z1b(1:7);0;z1b(8:9);z1b(10);0];

$z_1 = \begin{bmatrix} -0.9386 \\ 0.0469 \\ -0.0939 \\ -0.0000 \\ -0.0939 \\ -0.0000 \\ -0.2347 \\ 0.1877 \\ 0.0000 \\ -0.0939 \end{bmatrix},\qquad
z_{1b} = \begin{bmatrix} 10 \\ -0.5 \\ 1 \\ 0 \\ 1 \\ 0 \\ 2.5 \\ -2 \\ 0 \\ 1 \end{bmatrix},\qquad
zz_{1b} = \begin{bmatrix} 10 \\ -0.5 \\ 1 \\ 0 \\ 1 \\ 0 \\ 2.5 \\ 0 \\ -2 \\ 0 \\ 1 \\ 0 \end{bmatrix}$
M=cat(2,zz1b,zz2b);
N0=-M(1:2,:); D0=M(3:4,:);
N1=-M(5:6,:); D1=M(7:8,:);
N2=-M(9:10,:); D2=M(11:12,:);

$M = \begin{bmatrix}
10 & 7 \\
-0.5 & -1 \\
1 & 1 \\
0 & 2 \\
1 & -4 \\
0 & 0 \\
2.5 & 2 \\
0 & 1 \\
-2 & 0 \\
0 & 0 \\
1 & 0 \\
0 & 0
\end{bmatrix}$
That is, we obtain

$\bar D_0 = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix},\quad \bar D_1 = \begin{bmatrix} 2.5 & 2 \\ 0 & 1 \end{bmatrix},\quad \bar D_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$

$\bar N_0 = \begin{bmatrix} -10 & -7 \\ 0.5 & 1 \end{bmatrix},\quad \bar N_1 = \begin{bmatrix} -1 & 4 \\ 0 & 0 \end{bmatrix},\quad \bar N_2 = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}$
$\bar D(s) = \bar D_0 + \bar D_1 s + \bar D_2 s^2 = \begin{bmatrix} s^2 + 2.5s + 1 & 2s + 1 \\ 0 & s + 2 \end{bmatrix}$

and

$\bar N(s) = \bar N_0 + \bar N_1 s + \bar N_2 s^2 = \begin{bmatrix} 2s^2 - s - 10 & 4s - 7 \\ 0.5 & 1 \end{bmatrix}$
Thus $\hat G(s)$ in (7.84) has the following right coprime fraction:

$\hat G(s) = \begin{bmatrix} (2s-5)(s+2) & 4s - 7 \\ 0.5 & 1 \end{bmatrix} \begin{bmatrix} (s+2)(s+0.5) & 2s+1 \\ 0 & s+2 \end{bmatrix}^{-1}$   (7.86)
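The fraction (7.86) can be spot-checked numerically by evaluating $\hat G(s)$ entrywise and comparing with $\bar N(s)\bar D^{-1}(s)$ at a sample point (a minimal sketch, assuming NumPy):

```python
import numpy as np

s = 1.0
# G(s) of (7.84), entrywise.
G = np.array([[(4*s - 10)/(2*s + 1),          3/(s + 2)],
              [1/((2*s + 1)*(s + 2)), (s + 1)/(s + 2)**2]])
# Nbar(s) and Dbar(s) from the right coprime fraction (7.86).
Nbar = np.array([[2*s**2 - s - 10, 4*s - 7],
                 [0.5,             1.0    ]])
Dbar = np.array([[s**2 + 2.5*s + 1, 2*s + 1],
                 [0.0,              s + 2  ]])
print(np.max(np.abs(G - Nbar @ np.linalg.inv(Dbar))))   # ~0
```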
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [N0,N1,N2,D0,D1,D2]=rightcoprimefraction
% Coefficient matrices of the left fraction G(s)=D(s)^{-1}N(s) in (7.84).
d0=[2 0;0 4]; d1=[5 0;0 12]; d2=[2 0;0 9]; d3=[0 0;0 2];
n0=[-20 3;2 1]; n1=[-2 6;1 3]; n2=[4 0;0 2]; n3=[0 0;0 0];
zero24=zeros(2,4);
D=[d0;d1;d2;d3]; N=[n0;n1;n2;n3];
DN=[D N];
% Generalized resultant S of (7.83) (12x12).
S=cat(2,[DN;zero24;zero24],[zero24;DN;zero24],[zero24;zero24;DN]);
[Q,R]=qr(S);
% Null vector for the primary dependent N2 column (8th column of S).
S2=S(1:12,1:8);
z2=null(S2);
z2b=z2/z2(8); zz2b=[z2b;0;0;0;0];
% Delete the 8th and 12th columns; null vector for the dependent N1 column.
S1=cat(2,S(1:12,1:7),S(1:12,9:11));
z1=null(S1);
z1b=z1/z1(10); zz1b=[z1b(1:7);0;z1b(8:9);z1b(10);0];
M=cat(2,zz1b,zz2b);
N0=-M(1:2,:); D0=M(3:4,:);
N1=-M(5:6,:); D1=M(7:8,:);
N2=-M(9:10,:); D2=M(11:12,:);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In general, if the generalized resultant has $\mu_i$ linearly independent Ni columns, then the $\bar D(s)$ computed by the preceding procedure is column reduced with column degrees $\mu_i$. Thus we have

$\deg \hat G(s) = \sum_i \mu_i = \deg \det \bar D(s)$
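For Example 7.7 this can be checked directly: $\bar D(s)$ is upper triangular, so $\det \bar D(s)$ is the product of its diagonal entries (a small sketch, assuming NumPy):

```python
import numpy as np

# det Dbar(s) = (s^2 + 2.5s + 1)(s + 2); coefficients in descending powers.
detD = np.polymul([1, 2.5, 1], [1, 2])
print(detD)              # s^3 + 4.5 s^2 + 6 s + 2
print(len(detD) - 1)     # degree 3 = mu1 + mu2 = 2 + 1
```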
where $\hat N(s) = N(s)P$, $\hat D(s) = D(s)P$, and

$P = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$
$\begin{bmatrix} -\bar N_0 & \bar D_0 & -\bar N_1 & \bar D_1 & -\bar N_2 & \bar D_2 & -\bar N_3 & \bar D_3 \end{bmatrix} T = 0$   (7.88)

with

$T = \begin{bmatrix}
D_0 & D_1 & D_2 & D_3 & D_4 & 0 & 0 & 0 \\
N_0 & N_1 & N_2 & N_3 & N_4 & 0 & 0 & 0 \\
0 & D_0 & D_1 & D_2 & D_3 & D_4 & 0 & 0 \\
0 & N_0 & N_1 & N_2 & N_3 & N_4 & 0 & 0 \\
0 & 0 & D_0 & D_1 & D_2 & D_3 & D_4 & 0 \\
0 & 0 & N_0 & N_1 & N_2 & N_3 & N_4 & 0 \\
0 & 0 & 0 & D_0 & D_1 & D_2 & D_3 & D_4 \\
0 & 0 & 0 & N_0 & N_1 & N_2 & N_3 & N_4
\end{bmatrix}$   (7.89)
Search the linearly independent rows of T in order from top to bottom:

All D rows are linearly independent.

Let the Ni row denote the ith N row in each N block row.

If an Ni row becomes linearly dependent on its preceding rows, then all Ni rows in subsequent N block rows are linearly dependent on their preceding rows.

The first Ni row that becomes linearly dependent is called the primary dependent Ni row.

Let $\nu_i$ (called the row indices), $i = 1, 2, \ldots, q$, be the number of linearly independent Ni rows.
7.9 Realizations from Matrix Coprime Fractions

For simplicity, we consider a 2×2 strictly proper rational matrix

$\hat G(s) = N(s) D^{-1}(s)$   (7.90)

where N(s) and D(s) are right coprime and D(s) is in column echelon form. We further assume that the column degrees of D(s) are $\mu_1 = 4$ and $\mu_2 = 2$. First we define
$H(s) := \begin{bmatrix} s^{\mu_1} & 0 \\ 0 & s^{\mu_2} \end{bmatrix} = \begin{bmatrix} s^4 & 0 \\ 0 & s^2 \end{bmatrix}$   (7.91)

and

$L(s) := \begin{bmatrix} s^{\mu_1 - 1} & 0 \\ \vdots & \vdots \\ s & 0 \\ 1 & 0 \\ 0 & s^{\mu_2 - 1} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} s^3 & 0 \\ s^2 & 0 \\ s & 0 \\ 1 & 0 \\ 0 & s \\ 0 & 1 \end{bmatrix}$   (7.92)
The procedure for developing a realization of

$\hat y(s) = \hat G(s)\hat u(s) = N(s)D^{-1}(s)\hat u(s)$

starts by writing

$D(s) = D_{hc} H(s) + D_{lc} L(s)$   (7.97)

where H(s) and L(s) are defined in (7.91) and (7.92), respectively, $D_{hc}$ and $D_{lc}$ are constant matrices, and the column-degree coefficient matrix $D_{hc}$ is a unit upper triangular matrix.
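The decomposition (7.97) can be illustrated with the $\bar D(s)$ found in Example 7.7, which has column degrees $\mu_1 = 2$, $\mu_2 = 1$ (an assumed small-size illustration, since this section's $\mu_1 = 4$, $\mu_2 = 2$ case is left symbolic):

```python
import numpy as np

# D(s) = [[s^2+2.5s+1, 2s+1], [0, s+2]] = Dhc H(s) + Dlc L(s),
# with H(s) = diag(s^2, s) and L(s) = [[s,0],[1,0],[0,1]].
Dhc = np.array([[1.0, 2.0],
                [0.0, 1.0]])          # column-degree coefficient matrix
Dlc = np.array([[2.5, 1.0, 1.0],
                [0.0, 0.0, 2.0]])     # coefficients of the lower powers

def Dval(s):
    H = np.array([[s**2, 0.0], [0.0, s]])
    L = np.array([[s, 0.0], [1.0, 0.0], [0.0, 1.0]])
    return Dhc @ H + Dlc @ L

s = 2.0
print(Dval(s))                                             # reconstructed D(s)
print(np.array([[s**2 + 2.5*s + 1, 2*s + 1], [0.0, s + 2]]))  # direct evaluation
```

Note that $D_{hc}$ is unit upper triangular, as the text requires.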
Defining $\hat v(s) := D^{-1}(s)\hat u(s)$, we have $D(s)\hat v(s) = \hat u(s)$, or

$H(s)\hat v(s) = -D_{hc}^{-1} D_{lc}\, L(s)\hat v(s) + D_{hc}^{-1}\hat u(s)$

By (7.95), we have

$H(s)\hat v(s) = -D_{hc}^{-1} D_{lc}\, \hat x(s) + D_{hc}^{-1}\hat u(s)$   (7.98)
Let

$D_{hc}^{-1} D_{lc} =: \begin{bmatrix} \alpha_{111} & \alpha_{112} & \alpha_{113} & \alpha_{114} & \alpha_{121} & \alpha_{122} \\ \alpha_{211} & \alpha_{212} & \alpha_{213} & \alpha_{214} & \alpha_{221} & \alpha_{222} \end{bmatrix}$   (7.99)

and

$D_{hc}^{-1} =: \begin{bmatrix} 1 & b_{12} \\ 0 & 1 \end{bmatrix}$   (7.100)

Substituting (7.99) and (7.100) into (7.98) yields

$\begin{bmatrix} s\hat x_1(s) \\ s\hat x_5(s) \end{bmatrix} = -\begin{bmatrix} \alpha_{111} & \alpha_{112} & \alpha_{113} & \alpha_{114} & \alpha_{121} & \alpha_{122} \\ \alpha_{211} & \alpha_{212} & \alpha_{213} & \alpha_{214} & \alpha_{221} & \alpha_{222} \end{bmatrix}\hat x(s) + \begin{bmatrix} 1 & b_{12} \\ 0 & 1 \end{bmatrix}\hat u(s)$

In the time domain, it becomes

$\begin{bmatrix} \dot x_1 \\ \dot x_5 \end{bmatrix} = -\begin{bmatrix} \alpha_{111} & \alpha_{112} & \alpha_{113} & \alpha_{114} & \alpha_{121} & \alpha_{122} \\ \alpha_{211} & \alpha_{212} & \alpha_{213} & \alpha_{214} & \alpha_{221} & \alpha_{222} \end{bmatrix} x + \begin{bmatrix} 1 & b_{12} \\ 0 & 1 \end{bmatrix} u$   (7.101)