Linear System Theory and Design (Part 5)
Chapter 7

Minimal Realizations and Coprime Fractions


7.1 Introduction
This chapter studies the realization problem further. A transfer matrix Ĝ(s) is said to be realizable if there exists a state-space equation

$$\dot{x}=Ax+Bu,\qquad y=Cx+Du$$

that has Ĝ(s) as its transfer matrix. Realization is an important problem for the following reasons:
• Many design methods and computational algorithms are developed for state equations.
• Once a transfer function is realized into a state equation, the transfer function can be implemented using op-amp circuits.
If a transfer function is realizable, then it has many realizations, not necessarily of the same dimension. An important question is then raised:
• What is the smallest possible dimension?
• Realizations with the smallest possible dimension are called minimal-dimensioned or minimal realizations.
In this chapter, we will show how to obtain minimal realizations. We will point out that a realization of ĝ(s) = N(s)/D(s) is minimal if and only if it is controllable and observable, or if and only if its dimension equals the degree of ĝ(s). The degree of ĝ(s) is defined as the degree of D(s) when the two polynomials D(s) and N(s) are coprime, that is, have no common factors.
7.2 Implications of Coprimeness
Consider the strictly proper rational function

$$\hat g(s)=\frac{N(s)}{D(s)}=\frac{\beta_1 s^{n-1}+\beta_2 s^{n-2}+\cdots+\beta_{n-1}s+\beta_n}{s^n+\alpha_1 s^{n-1}+\cdots+\alpha_{n-1}s+\alpha_n} \qquad (7.1)$$

and

$$\hat y(s)=N(s)D^{-1}(s)\hat u(s) \qquad (7.2)$$

Let us introduce a new variable v(t) defined by v̂(s) = D⁻¹(s)û(s), or

$$D(s)\hat v(s)=\hat u(s) \qquad (7.3)$$
$$\hat y(s)=N(s)\hat v(s) \qquad (7.4)$$

Define the state variables as

$$x(t):=\begin{bmatrix}v^{(n-1)}(t)\\ \vdots\\ \dot v(t)\\ v(t)\end{bmatrix}\quad\text{or}\quad \hat x(s)=\begin{bmatrix}\hat x_1(s)\\ \vdots\\ \hat x_{n-1}(s)\\ \hat x_n(s)\end{bmatrix}=\begin{bmatrix}s^{n-1}\\ \vdots\\ s\\ 1\end{bmatrix}\hat v(s) \qquad (7.5)$$
Then we have

$$\dot x_2=x_1,\quad \dot x_3=x_2,\quad \ldots,\quad \dot x_n=x_{n-1} \qquad (7.6)$$

Substituting (7.5) into (7.3) develops an equation for the state variable x₁:

$$(s^n+\alpha_1 s^{n-1}+\cdots+\alpha_{n-1}s+\alpha_n)\hat v(s)=\hat u(s)$$

or

$$s\hat x_1(s)=-\alpha_1\hat x_1(s)-\cdots-\alpha_{n-1}\hat x_{n-1}(s)-\alpha_n\hat x_n(s)+\hat u(s)$$

In the time domain, this is

$$\dot x_1(t)=-[\alpha_1\ \alpha_2\ \cdots\ \alpha_n]\,x(t)+u(t) \qquad (7.7)$$

Substituting (7.5) into (7.4) yields

$$\hat y(s)=[\beta_1\ \beta_2\ \cdots\ \beta_n]\,\hat x(s)$$

which becomes, in the time domain,

$$y(t)=[\beta_1\ \beta_2\ \cdots\ \beta_n]\,x(t) \qquad (7.8)$$

So a realization of (7.1) can be given in matrix form as

$$\dot x=Ax+bu=\begin{bmatrix}-\alpha_1&-\alpha_2&\cdots&-\alpha_{n-1}&-\alpha_n\\ 1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&\cdots&1&0\end{bmatrix}x+\begin{bmatrix}1\\ 0\\ 0\\ \vdots\\ 0\end{bmatrix}u,\qquad y=cx=[\beta_1\ \beta_2\ \cdots\ \beta_n]\,x \qquad (7.9)$$
It is not difficult to verify that (7.9) is controllable. For this reason, (7.9) is called a controllable canonical form realization of (7.1). A natural question arises: is the realization observable? It depends on whether or not N(s) and D(s) are coprime.
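As a quick numerical illustration, the following minimal sketch (assuming the Control System Toolbox; the coefficient values are hypothetical, not from the text) builds (7.9) for n = 4 and checks the ranks of the controllability and observability matrices:
%%%%%%%%%%%%%%%%
% Controllable canonical form (7.9) for hypothetical
% D(s) = s^4 + a1*s^3 + ... + a4 and N(s) = b1*s^3 + ... + b4.
a = [2 3 4 5];                    % alpha_1 ... alpha_4 (hypothetical)
b = [1 0 2 1];                    % beta_1 ... beta_4 (hypothetical)
n = length(a);
A = [-a; eye(n-1) zeros(n-1,1)];  % first row -alpha, subdiagonal identity
bv = [1; zeros(n-1,1)];
c = b;
rank(ctrb(A,bv))   % always n: the form is controllable by construction
rank(obsv(A,c))    % equals n if and only if N(s), D(s) are coprime
%%%%%%%%%%%%%%%%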
Definition 7.1': A polynomial R(s) is called a common factor or a common divisor of D(s) and N(s) if they can be expressed as

$$D(s)=\bar D(s)R(s),\qquad N(s)=\bar N(s)R(s)$$

Definition 7.2': A polynomial R(s) is called a greatest common divisor (gcd) of D(s) and N(s) if
(1) it is a common divisor of D(s) and N(s), and
(2) it can be divided without remainder by every other common divisor of D(s) and N(s).
Definition 7.3': Two polynomials are said to be coprime if they have no common factor of degree at least 1, that is, if their gcd R(s) is a nonzero constant.

Theorem 7.1
The controllable canonical form (7.9) is observable if and only if D(s) and N(s) are coprime.
Proof (Necessity): Suppose that D(s) and N(s) are not coprime. Then there exists a λ₁ such that

$$N(\lambda_1)=\beta_1\lambda_1^{n-1}+\beta_2\lambda_1^{n-2}+\cdots+\beta_{n-1}\lambda_1+\beta_n=0 \qquad (7.11)$$
$$D(\lambda_1)=\lambda_1^{n}+\alpha_1\lambda_1^{n-1}+\cdots+\alpha_{n-1}\lambda_1+\alpha_n=0 \qquad (7.12)$$

Let us define v := [λ₁ⁿ⁻¹ λ₁ⁿ⁻² ⋯ λ₁ 1]ᵀ ≠ 0. Then (7.11) can be written as

$$N(\lambda_1)=cv=[\beta_1\ \beta_2\ \cdots\ \beta_n]\,v=0$$

Using (7.12), we can also readily verify that

$$Av=\begin{bmatrix}-\alpha_1&-\alpha_2&\cdots&-\alpha_{n-1}&-\alpha_n\\ 1&0&\cdots&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&\cdots&1&0\end{bmatrix}\begin{bmatrix}\lambda_1^{n-1}\\ \lambda_1^{n-2}\\ \vdots\\ 1\end{bmatrix}=\begin{bmatrix}\lambda_1^{n}\\ \lambda_1^{n-1}\\ \vdots\\ \lambda_1\end{bmatrix}=\lambda_1 v$$

Thus we have Av = λ₁v, A²v = A(Av) = λ₁Av = λ₁²v, …, Aⁿ⁻¹v = λ₁ⁿ⁻¹v. It is then easy to compute

$$Ov=\begin{bmatrix}c\\ cA\\ \vdots\\ cA^{n-1}\end{bmatrix}v=\begin{bmatrix}cv\\ \lambda_1 cv\\ \vdots\\ \lambda_1^{n-1}cv\end{bmatrix}=0\quad\text{with }v\neq 0$$

$$\Rightarrow\ \mathrm{Rank}(O)<n$$

but this contradicts the assumption that (7.9) is observable.
(Sufficiency): Suppose that (7.9) is not observable. Then there exists an eigenvalue λ₁ such that

$$\mathrm{Rank}\begin{bmatrix}A-\lambda_1 I\\ c\end{bmatrix}<n$$

That is, there exists a nonzero vector v such that

$$\begin{bmatrix}A-\lambda_1 I\\ c\end{bmatrix}v=0$$

or

$$Av=\lambda_1 v\quad\text{and}\quad cv=0$$

Suppose that v = [v₁ v₂ ⋯ v_n]ᵀ. Then Av = λ₁v can be rewritten as

$$-\alpha_1 v_1-\alpha_2 v_2-\cdots-\alpha_n v_n=\lambda_1 v_1$$
$$v_1=\lambda_1 v_2,\quad v_2=\lambda_1 v_3,\quad \ldots,\quad v_{n-1}=\lambda_1 v_n$$

so that v_k = λ₁ⁿ⁻ᵏv_n for k = 1, 2, …, n−1. Substituting these into the first equation yields

$$(-\alpha_1\lambda_1^{n-1}-\alpha_2\lambda_1^{n-2}-\cdots-\alpha_{n-1}\lambda_1-\alpha_n)v_n=\lambda_1^n v_n$$

That is,

$$(\lambda_1^n+\alpha_1\lambda_1^{n-1}+\alpha_2\lambda_1^{n-2}+\cdots+\alpha_{n-1}\lambda_1+\alpha_n)v_n=0$$

or

$$D(\lambda_1)v_n=0$$

Here v_n ≠ 0; otherwise v₁ = λ₁ⁿ⁻¹v_n = 0, …, v_{n−1} = λ₁v_n = 0, and hence v = 0. So we have D(λ₁) = 0.
From cv = 0, we have

$$(\beta_1\lambda_1^{n-1}+\beta_2\lambda_1^{n-2}+\cdots+\beta_{n-1}\lambda_1+\beta_n)v_n=0$$

or N(λ₁)v_n = 0 ⟹ N(λ₁) = 0. Now D(λ₁) = 0 and N(λ₁) = 0 mean that D(s) and N(s) have the common factor s − λ₁. This contradicts the assumption that D(s) and N(s) are coprime. So if D(s) and N(s) are coprime, (7.9) is observable.
Another controllable canonical form is

$$\dot x=Ax+bu=\begin{bmatrix}0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&&&\ddots&\vdots\\ 0&0&0&\cdots&1\\ -\alpha_n&-\alpha_{n-1}&-\alpha_{n-2}&\cdots&-\alpha_1\end{bmatrix}x+\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix}u,\qquad y=cx=[\beta_n\ \beta_{n-1}\ \cdots\ \beta_1]\,x$$

Two observable canonical forms

Observable Canonical Form 1

$$\dot x=Ax+bu=\begin{bmatrix}-\alpha_1&1&0&\cdots&0\\ -\alpha_2&0&1&\cdots&0\\ \vdots&\vdots&&\ddots&\vdots\\ -\alpha_{n-1}&0&0&\cdots&1\\ -\alpha_n&0&0&\cdots&0\end{bmatrix}x+\begin{bmatrix}\beta_1\\ \beta_2\\ \beta_3\\ \vdots\\ \beta_n\end{bmatrix}u,\qquad y=cx=[1\ 0\ \cdots\ 0]\,x \qquad (7.14)$$

Observable Canonical Form 2

$$\dot x=Ax+bu=\begin{bmatrix}0&0&\cdots&0&-\alpha_n\\ 1&0&\cdots&0&-\alpha_{n-1}\\ 0&1&\cdots&0&-\alpha_{n-2}\\ \vdots&&\ddots&&\vdots\\ 0&0&\cdots&1&-\alpha_1\end{bmatrix}x+\begin{bmatrix}\beta_n\\ \beta_{n-1}\\ \beta_{n-2}\\ \vdots\\ \beta_1\end{bmatrix}u,\qquad y=cx=[0\ \cdots\ 0\ 1]\,x$$
7.2.1 Minimal Realizations

Consider a transfer function given as a proper rational function

$$\hat g(s)=\frac{N(s)}{D(s)}$$

Let R(s) be a greatest common divisor (gcd) of N(s) and D(s). That is, if we write N(s) = N̄(s)R(s) and D(s) = D̄(s)R(s), then the polynomials N̄(s) and D̄(s) are coprime. The rational function ĝ(s) can then be reduced to

$$\hat g(s)=\frac{\bar N(s)}{\bar D(s)}$$

This expression is called a coprime fraction, and D̄(s) is called the characteristic polynomial of ĝ(s). The degree of the characteristic polynomial is defined as the degree of ĝ(s).
Theorem 7.2
A state equation (A, b, c, d) is a minimal realization of a proper rational function ĝ(s) if and only if (A, b) is controllable and (A, c) is observable, or if and only if

dim A = deg ĝ(s).

Theorem 7.3
All minimal realizations of ĝ(s) are equivalent.
Summary
• Theorem 7.2 provides an alternative way of checking controllability and observability.
• Given a rational function, if we first cancel its common factors to reduce it to a coprime fraction, then the state equation realized from it will be controllable and observable.
• If a state equation is controllable and observable, then every eigenvalue of A is a pole of ĝ(s) and vice versa. In this case,
asymptotic stability ⟺ BIBO stability.
Conclusion: Controllable and observable state equations and coprime fractions contain essentially the same information, and either description can be used to carry out analysis and design.
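The following minimal sketch (assuming the Control System Toolbox) illustrates Theorem 7.2 on a fraction that is not coprime: the 2-dimensional realization built from the unreduced fraction is not minimal, and minreal returns a realization whose dimension equals deg ĝ(s) = 1.
%%%%%%%%%%%%%%%%
% g(s) = (s+1)/((s+1)(s+2)) is not a coprime fraction.
g = tf([1 1],conv([1 1],[1 2]));  % build (s+1)/((s+1)(s+2))
sys = ss(g);                      % a 2-dimensional realization
sysm = minreal(sys);              % cancels the common factor (s+1)
size(sysm.A)                      % 1x1: dimension equals deg g = 1
%%%%%%%%%%%%%%%%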

7.3 Computing Coprime Fractions

Consider

$$\hat g(s)=\frac{N(s)}{D(s)}$$

We assume deg N(s) ≤ deg D(s) = n = 4. Let us write

$$\frac{N(s)}{D(s)}=\frac{\bar N(s)}{\bar D(s)}$$

which implies

$$D(s)\bigl(-\bar N(s)\bigr)+N(s)\bar D(s)=0 \qquad (7.26)$$

It is clear that D(s) and N(s) are not coprime if and only if there exist polynomials N̄(s) and D̄(s) with

$$\deg \bar N(s)\le \deg \bar D(s)<n=4$$

to meet (7.26). If we write

$$D(s)=D_0+D_1s+D_2s^2+D_3s^3+D_4s^4,\qquad N(s)=N_0+N_1s+N_2s^2+N_3s^3+N_4s^4$$
$$\bar D(s)=\bar D_0+\bar D_1s+\bar D_2s^2+\bar D_3s^3,\qquad \bar N(s)=\bar N_0+\bar N_1s+\bar N_2s^2+\bar N_3s^3 \qquad (7.27)$$

where D₄ ≠ 0, then substituting (7.27) into (7.26) and comparing the coefficients of sᵏ (k = 0, 1, …, 7), we obtain

$$Sm:=\begin{bmatrix}
D_0&N_0&0&0&0&0&0&0\\
D_1&N_1&D_0&N_0&0&0&0&0\\
D_2&N_2&D_1&N_1&D_0&N_0&0&0\\
D_3&N_3&D_2&N_2&D_1&N_1&D_0&N_0\\
D_4&N_4&D_3&N_3&D_2&N_2&D_1&N_1\\
0&0&D_4&N_4&D_3&N_3&D_2&N_2\\
0&0&0&0&D_4&N_4&D_3&N_3\\
0&0&0&0&0&0&D_4&N_4
\end{bmatrix}
\begin{bmatrix}-\bar N_0\\ \bar D_0\\ -\bar N_1\\ \bar D_1\\ -\bar N_2\\ \bar D_2\\ -\bar N_3\\ \bar D_3\end{bmatrix}=0 \qquad (7.28)$$
The square matrix S is called the Sylvester resultant. D(s) and N(s) are coprime if and only if the Sylvester resultant is nonsingular.
Suppose that S is singular, so that N(s) and D(s) are not coprime. How do we obtain a coprime fraction directly from (7.28)? Equivalently, how do we obtain a nonzero vector solution of (7.28)?
Let us search the linearly independent columns of S in order from left to right. We find that
• every D-column is linearly independent of its left-hand-side (LHS) columns;
• an N-column can be dependent on or independent of its LHS columns;
• if an N-column becomes linearly dependent on its LHS columns, so are all subsequent N-columns.
Let μ be the number of linearly independent N-columns in S. Then the (μ+1)th N-column is the first one to become linearly dependent on its LHS columns and will be called the primary dependent N-column.
Let S₁ denote the submatrix of S that consists of the primary dependent N-column and all of its LHS columns. That is, S₁ consists of μ+1 D-columns (all of them independent) and μ+1 N-columns (only the last one dependent). So we have

$$S_1\in R^{2n\times 2(\mu+1)}\quad\text{and}\quad \mathrm{Rank}(S_1)=2\mu+1 \qquad (7.29)'$$

Thus (7.28) can be rewritten as

$$[S_1\ \ S_2]\begin{bmatrix}m_1\\ m_2\end{bmatrix}=0 \qquad (7.28)'$$

where

$$m_1=[-\bar N_0\ \ \bar D_0\ \ -\bar N_1\ \ \bar D_1\ \cdots\ -\bar N_\mu\ \ \bar D_\mu]^T$$
$$m_2=[-\bar N_{\mu+1}\ \ \bar D_{\mu+1}\ \ -\bar N_{\mu+2}\ \ \bar D_{\mu+2}\ \cdots\ -\bar N_{n-1}\ \ \bar D_{n-1}]^T$$

Because of (7.29)', S₁ has one independent null vector. The null vector with 1 as its last entry is called the monic null vector. If we denote the monic null vector of S₁ by m₁*, then

$$m^*=\begin{bmatrix}m_1^*\\ 0\end{bmatrix},\qquad m_1^*=[-\bar N_0^*\ \ \bar D_0^*\ \ -\bar N_1^*\ \ \bar D_1^*\ \cdots\ -\bar N_\mu^*\ \ \bar D_\mu^*]^T$$

is a nonzero vector solution of (7.28), and the coprime fraction of N(s)/D(s) is obtained as

$$\frac{N(s)}{D(s)}=\frac{\bar N(s)}{\bar D(s)}=\frac{\bar N_0^*+\bar N_1^*s+\cdots+\bar N_\mu^*s^\mu}{\bar D_0^*+\bar D_1^*s+\cdots+\bar D_\mu^*s^\mu} \qquad (7.30)'$$

Theorem 7.4
Consider ĝ(s) = N(s)/D(s). We use the coefficients of D(s) and N(s) to form the Sylvester resultant S in (7.28) and search its linearly independent columns in order from left to right. Then we have

deg ĝ(s) = number of linearly independent N-columns =: μ,

and a coprime fraction ĝ(s) = N̄(s)/D̄(s) is given by (7.30)'.
Example 7.1 Consider

$$\frac{N(s)}{D(s)}=\frac{6s^3+s^2+3s-20}{2s^4+7s^3+15s^2+16s+10} \qquad (7.29)$$

Is it a coprime fraction? If not, find a coprime fraction of it.
We use MATLAB to construct the Sylvester resultant (coefficients are listed in ascending powers of s):
%%%%%%%%%%%%%%%%
n=4; D=[10 16 15 7 2]; N=[-20 3 1 6 0];
S=[D 0 0 0;N 0 0 0;0 D 0 0;0 N 0 0;0 0 D 0;0 0 N 0;0 0 0 D;0 0 0 N]';
%%%%%%%%%%%%%%%%
Since

Rank(S) = (number of D-columns) + (number of linearly independent N-columns) = n + μ,

we have

μ = Rank(S) − n.

For this example,
r=rank(S); mu=r-n;
gives μ = 2.
The resultant is

$$S=\begin{bmatrix}
10&-20&0&0&0&0&0&0\\
16&3&10&-20&0&0&0&0\\
15&1&16&3&10&-20&0&0\\
7&6&15&1&16&3&10&-20\\
2&0&7&6&15&1&16&3\\
0&0&2&0&7&6&15&1\\
0&0&0&0&2&0&7&6\\
0&0&0&0&0&0&2&0
\end{bmatrix}$$

and S₁ consists of its first 2(μ+1) = 6 columns:
S1=S(1:8,1:2*(mu+1)); z=null(S1);
zb=z/z(2*(mu+1));
which yields the monic null vector
zb = [4 2 −3 2 0 1]ᵀ.
So we have the coprime fraction

$$\frac{N(s)}{D(s)}=\frac{6s^3+s^2+3s-20}{2s^4+7s^3+15s^2+16s+10}=\frac{3s-4}{s^2+2s+2}$$
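As a sanity check (a sketch; the test point is an arbitrary choice, not from the text), the original and reduced fractions can be compared numerically at any point that is not a pole:
%%%%%%%%%%%%%%%%
s0 = 1.7;                                   % arbitrary test point
g1 = polyval([6 1 3 -20],s0)/polyval([2 7 15 16 10],s0);
g2 = polyval([3 -4],s0)/polyval([1 2 2],s0);
abs(g1-g2)                                  % should be ~0
%%%%%%%%%%%%%%%%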

7.3.1 QR Decomposition
Consider an n×m matrix M. There exists an n×n orthogonal matrix Q̄ such that

Q̄M = R,

where R is an upper triangular matrix of the same dimensions as M. So we have

M = QR,

where Q := Q̄⁻¹ = Q̄ᵀ. This equation is called the QR decomposition of M.
Conclusion: The linear independence of the columns of M is preserved in the columns of R. To determine whether a column is linearly dependent on its LHS columns in R, we only need to check whether its diagonal entry is zero or not.
Let us apply the QR decomposition to the resultant in Example 7.1.
%%%%%%%%%%%%%%%%
D=[10 16 15 7 2]; N=[-20 3 1 6 0];
S=[D 0 0 0;N 0 0 0;0 D 0 0;0 N 0 0;0 0 D 0;0 0 N 0;0 0 0 D;0 0 0 N]';
[Q,R]=qr(S);
%%%%%%%%%%%%%%%%
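In R, a (near-)zero diagonal entry marks a column that is linearly dependent on its LHS columns. A minimal sketch of this test (the tolerance is a hypothetical choice):
%%%%%%%%%%%%%%%%
tol = 1e-8;                     % hypothetical numerical tolerance
dep = find(abs(diag(R)) < tol)  % indices of dependent columns of S
%%%%%%%%%%%%%%%%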
7.6 Degree of Transfer Matrices
This section extends the concept of degree from scalar rational functions to rational matrices. Given a proper rational matrix Ĝ(s), we assume throughout this section that every entry of Ĝ(s) is a coprime fraction.

Definition 7.1 The characteristic polynomial of a proper rational matrix Ĝ(s) is defined as the least common denominator of all minors of Ĝ(s). The degree of the characteristic polynomial is defined as the McMillan degree or, simply, the degree of Ĝ(s), and is denoted by deg Ĝ(s).
Example 7.4 Consider the rational matrices

$$\hat G_1(s)=\begin{bmatrix}\dfrac{1}{s+1}&\dfrac{1}{s+1}\\[4pt]\dfrac{1}{s+1}&\dfrac{1}{s+1}\end{bmatrix},\qquad \hat G_2(s)=\begin{bmatrix}\dfrac{2}{s+1}&\dfrac{1}{s+1}\\[4pt]\dfrac{1}{s+1}&\dfrac{1}{s+1}\end{bmatrix}$$

The matrix Ĝ₁(s) has 1/(s+1), 1/(s+1), 1/(s+1) and 1/(s+1) as its minors of order 1, and det Ĝ₁(s) = 0 as its minor of order 2. Thus the characteristic polynomial of Ĝ₁(s) is s+1 and deg Ĝ₁(s) = 1.
The matrix Ĝ₂(s) has 2/(s+1), 1/(s+1), 1/(s+1) and 1/(s+1) as its minors of order 1, and det Ĝ₂(s) = 1/(s+1)² as its minor of order 2. Thus the characteristic polynomial of Ĝ₂(s) is (s+1)² and deg Ĝ₂(s) = 2.

Example 7.5 Consider the 2×3 rational matrix

$$\hat G(s)=\begin{bmatrix}\dfrac{s}{s+1}&\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s+3}\\[4pt]\dfrac{-1}{s+1}&\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s}\end{bmatrix}$$

Its minors of order 1 are the six entries of Ĝ(s). The minors of order 2 are

$$\det\begin{bmatrix}\dfrac{s}{s+1}&\dfrac{1}{(s+1)(s+2)}\\[4pt]\dfrac{-1}{s+1}&\dfrac{1}{(s+1)(s+2)}\end{bmatrix}=\frac{s+1}{(s+1)^2(s+2)}=\frac{1}{(s+1)(s+2)}$$

$$\det\begin{bmatrix}\dfrac{s}{s+1}&\dfrac{1}{s+3}\\[4pt]\dfrac{-1}{s+1}&\dfrac{1}{s}\end{bmatrix}=\frac{s+4}{(s+1)(s+3)}$$

$$\det\begin{bmatrix}\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s+3}\\[4pt]\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s}\end{bmatrix}=\frac{3}{s(s+1)(s+2)(s+3)}$$

The least common denominator of all the minors, that is, the characteristic polynomial, is s(s+1)(s+2)(s+3). Thus the degree of Ĝ(s) is 4.
In computing the characteristic polynomial, every minor must be reduced to a coprime fraction.
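For instance, the order-2 minors above can be computed and reduced symbolically (a sketch assuming the Symbolic Math Toolbox; simplify performs the reduction to a coprime fraction):
%%%%%%%%%%%%%%%%
syms s
G = [ s/(s+1)  1/((s+1)*(s+2))  1/(s+3);
     -1/(s+1)  1/((s+1)*(s+2))  1/s    ];
m12 = simplify(det(G(:,[1 2])))  % 1/((s+1)*(s+2))
m13 = simplify(det(G(:,[1 3])))  % (s+4)/((s+1)*(s+3))
m23 = simplify(det(G(:,[2 3])))  % 3/(s*(s+1)*(s+2)*(s+3))
%%%%%%%%%%%%%%%%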
If (A, B, C, D) is a controllable and observable realization of Ĝ(s), then we have the following:
• monic least common denominator of all minors of Ĝ(s) = characteristic polynomial of A;
• monic least common denominator of all entries of Ĝ(s) = minimal polynomial of A.
7.7 Minimal Realizations: Matrix Case
Theorem 7.M2
A state equation (A, B, C, D) is a minimal realization of a proper rational matrix Ĝ(s) if and only if (A, B) is controllable and (A, C) is observable, or if and only if dim A = deg Ĝ(s).
Theorem 7.M3
All minimal realizations of Ĝ(s) are equivalent.
Example 7.6 Consider the transfer matrix

$$\hat G(s)=\begin{bmatrix}\dfrac{4s-10}{2s+1}&\dfrac{3}{s+2}\\[4pt]\dfrac{1}{(2s+1)(s+2)}&\dfrac{s+1}{(s+2)^2}\end{bmatrix} \qquad (7.75)$$

Its characteristic polynomial can be computed as (2s+1)(s+2)². Thus the rational matrix has degree 3, so neither (4.39) nor (4.44) is a minimal realization.
The dimension of a minimal realization is 3, and one can be obtained by calling the MATLAB function minreal. For example, for the realization in (4.39), typing
%%%%%%%%%%%%%%%%
a=[-4.5 0 -6 0 -2 0;0 -4.5 0 -6 0 -2;1 0 0 0 0 0;0 1 0 0 0 0;0 0 1 0 0 0;0 0 0 1 0 0];
b=[1 0;0 1;0 0;0 0;0 0;0 0];
c=[-6 3 -24 7.5 -24 3;0 1 0.5 1.5 1 0.5];
d=[2 0;0 0];
[am,bm,cm,dm]=minreal(a,b,c,d)
%%%%%%%%%%%%%%%%
yields

$$\dot x=\begin{bmatrix}-1.3387&0.2176&-1.6005\\ 2.5335&-1.1572&4.8333\\ -0.0021&-0.0007&-2.0041\end{bmatrix}x+\begin{bmatrix}-0.2666&0.2026\\ 0.2513&-0.6117\\ -0.0002&0.3487\end{bmatrix}u$$

$$y=\begin{bmatrix}32.7210&10.8394&8.6077\\ -0.8143&-0.8622&1.8285\end{bmatrix}x+\begin{bmatrix}2&0\\ 0&0\end{bmatrix}u$$
7.8 Matrix Polynomial Fractions
Every q×p proper rational matrix Ĝ(s) can be expressed as

$$\hat G(s)=N(s)D^{-1}(s) \qquad (7.76)$$

where N(s) and D(s) are q×p and p×p polynomial matrices. For example,

$$\hat G(s)=\begin{bmatrix}\dfrac{s}{s+1}&\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s+3}\\[4pt]\dfrac{-1}{s+1}&\dfrac{1}{(s+1)(s+2)}&\dfrac{1}{s}\end{bmatrix}=\begin{bmatrix}s&1&s\\ -1&1&s+3\end{bmatrix}\begin{bmatrix}s+1&0&0\\ 0&(s+1)(s+2)&0\\ 0&0&s(s+3)\end{bmatrix}^{-1} \qquad (7.77)$$

The three diagonal entries of D(s) in (7.77) are the least common denominators of the three columns of Ĝ(s). The fraction in (7.76) or (7.77) is called a right polynomial fraction or, simply, a right fraction.
Dual to (7.76), the expression

$$\hat G(s)=\bar D^{-1}(s)\bar N(s)$$

where D̄(s) and N̄(s) are q×q and q×p polynomial matrices, is called a left polynomial fraction or, simply, a left fraction.
Let R(s) be any p×p nonsingular polynomial matrix. Then we have

$$\hat G(s)=N(s)D^{-1}(s)=N(s)R(s)R^{-1}(s)D^{-1}(s)=[N(s)R(s)][D(s)R(s)]^{-1}$$

Thus right fractions are not unique.
Definition 7.2' Consider A(s) = B(s)C(s), where A(s), B(s) and C(s) are polynomial matrices of compatible orders. We call C(s) a right divisor of A(s) and A(s) a left multiple of C(s). Similarly, we call B(s) a left divisor of A(s) and A(s) a right multiple of B(s).

Definition 7.2'' Consider two polynomial matrices D(s) and N(s) with the same number of columns p. A p×p square polynomial matrix R(s) is called a common right divisor of D(s) and N(s) if there exist polynomial matrices D̂(s) and N̂(s) such that

$$D(s)=\hat D(s)R(s)\quad\text{and}\quad N(s)=\hat N(s)R(s)$$

Note that D(s) and N(s) can have different numbers of rows.

Definition 7.2 A square polynomial matrix M(s) is called a unimodular matrix if its determinant is nonzero and independent of s.

For example,

$$\begin{bmatrix}2s&s^2+s+1\\ 2&s+1\end{bmatrix}$$

is a unimodular matrix since

$$\det\begin{bmatrix}2s&s^2+s+1\\ 2&s+1\end{bmatrix}=2s(s+1)-2(s^2+s+1)=-2\neq 0$$

Conclusion 1: Products of unimodular matrices are unimodular matrices.
Conclusion 2: The inverse of a unimodular matrix is again a unimodular matrix.
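A one-line symbolic check of the example above (a sketch assuming the Symbolic Math Toolbox):
%%%%%%%%%%%%%%%%
syms s
M = [2*s s^2+s+1; 2 s+1];
det(M)   % returns -2: nonzero and independent of s, so M is unimodular
%%%%%%%%%%%%%%%%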

Definition 7.3 A square polynomial matrix R(s) is a greatest common right divisor (gcrd) of D(s) and N(s) if (i) R(s) is a common right divisor of D(s) and N(s), and (ii) R(s) is a left multiple of every common right divisor of D(s) and N(s). If a gcrd is a unimodular matrix, then D(s) and N(s) are said to be right coprime.

Definition 7.3' A square polynomial matrix R̄(s) is a greatest common left divisor (gcld) of D̄(s) and N̄(s) if (i) R̄(s) is a common left divisor of D̄(s) and N̄(s), and (ii) R̄(s) is a right multiple of every common left divisor of D̄(s) and N̄(s). If a gcld is a unimodular matrix, then D̄(s) and N̄(s) are said to be left coprime.
Definition 7.4 Consider a proper rational matrix Ĝ(s) factored as

$$\hat G(s)=N(s)D^{-1}(s)=\bar D^{-1}(s)\bar N(s)$$

where N(s) and D(s) are right coprime, and N̄(s) and D̄(s) are left coprime. Then the characteristic polynomial of Ĝ(s) is defined as det D(s) or det D̄(s), and the degree of Ĝ(s) is defined as

deg Ĝ(s) = deg det D(s) = deg det D̄(s).

Conclusion: For a coprime fraction Ĝ(s) = N(s)D⁻¹(s), we have

[characteristic polynomial of Ĝ(s)] = det D(s) = least common denominator of all minors of Ĝ(s).
7.8.1 Column and Row Reducedness

To apply Definition 7.4 to determine the degree of Ĝ(s) = N(s)D⁻¹(s), we need to compute the determinant of D(s). This can be avoided if the coprime fraction has an additional property, which we discuss next.
The degree of a polynomial vector is defined as the highest power of s in all entries of the vector. Consider a polynomial matrix M(s). We define

δci M(s) = degree of the ith column of M(s),
δri M(s) = degree of the ith row of M(s),

and call δci the column degree and δri the row degree. For example, the matrix

$$M(s)=\begin{bmatrix}s-1&s^3-2s-5&-1\\ s-1&s^2&0\end{bmatrix}$$

has δc1 = 1, δc2 = 3, δc3 = 0, δr1 = 3, δr2 = 2.

Definition 7.5 A nonsingular polynomial matrix M(s) is column reduced if
deg det M(s) = sum of all its column degrees.
It is row reduced if
deg det M(s) = sum of all its row degrees.
A column reduced matrix may not be row reduced, and vice versa. For example, the matrix

$$M(s)=\begin{bmatrix}3s^2+2s&2s+1\\ s^2+s-3&s\end{bmatrix} \qquad (7.79)$$

has determinant s³ − s² + 5s + 3, whose degree equals the sum of the column degrees (2 + 1 = 3) but not the sum of the row degrees (2 + 2 = 4); hence this M(s) is column reduced but not row reduced.

Let δci M(s) = kci and define Hc(s) = diag(s^{kc1}, s^{kc2}, …). Then the polynomial matrix M(s) can be expressed as

M(s) = Mhc Hc(s) + Mlc(s).   (7.80)

The ith column of Mhc consists of the coefficients of the ith column of M(s) associated with s^{kci}. The polynomial matrix Mlc(s) contains the remaining terms, and its ith column has degree less than kci. For example, the M(s) in (7.79) can be expressed as

$$M(s)=\begin{bmatrix}3&2\\ 1&1\end{bmatrix}\begin{bmatrix}s^2&0\\ 0&s\end{bmatrix}+\begin{bmatrix}2s&1\\ s-3&0\end{bmatrix}$$
Definition 7.5' The constant matrix Mhc is called the column-degree coefficient matrix.
Conclusion: M(s) is column reduced if and only if its column-degree coefficient matrix Mhc is nonsingular.
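For the M(s) in (7.79), this test amounts to checking a 2×2 constant matrix (a minimal sketch; the matrix is entered by hand from (7.80)):
%%%%%%%%%%%%%%%%
Mhc = [3 2; 1 1];  % column-degree coefficient matrix of (7.79)
det(Mhc)           % = 1, nonzero, so M(s) is column reduced
%%%%%%%%%%%%%%%%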
Dual to (7.80), M(s) can also be expressed as

M(s) = Hr(s)Mhr + Mlr(s),

where δri M(s) = kri and Hr(s) = diag(s^{kr1}, s^{kr2}, …). The matrix Mhr is called the row-degree coefficient matrix.
Conclusion: M(s) is row reduced if and only if Mhr is nonsingular.
Consider a proper rational matrix

$$\hat G(s)=N(s)D^{-1}(s)=\bar D^{-1}(s)\bar N(s)$$

where N(s) and D(s) are right coprime, N̄(s) and D̄(s) are left coprime, D(s) is column reduced, and D̄(s) is row reduced. Then we have

deg Ĝ(s) = sum of the column degrees of D(s) = sum of the row degrees of D̄(s).

If Ĝ(s) is strictly proper (proper), then

δci N(s) < δci D(s)   (δci N(s) ≤ δci D(s)).

The converse, however, is not necessarily true.


For example, consider

$$N(s)D^{-1}(s)=\begin{bmatrix}1&s+1\\ 1&1\end{bmatrix}\begin{bmatrix}2s+1&2s^2+s+1\\ 1&1\end{bmatrix}^{-1}$$

Although δci N(s) < δci D(s) for i = 1, 2, N(s)D⁻¹(s) is not strictly proper. The reason is that D(s) is not column reduced.
Theorem 7.8
Let N(s) and D(s) be q×p and p×p polynomial matrices, and let D(s) be column reduced. Then the rational matrix N(s)D⁻¹(s) is proper (strictly proper) if and only if

δci N(s) ≤ δci D(s)   [δci N(s) < δci D(s)]

for i = 1, 2, …, p.

Corollary 7.8
Let N̄(s) and D̄(s) be q×p and q×q polynomial matrices, and let D̄(s) be row reduced. Then the rational matrix D̄⁻¹(s)N̄(s) is proper (strictly proper) if and only if

δri N̄(s) ≤ δri D̄(s)   [δri N̄(s) < δri D̄(s)]

for i = 1, 2, …, q.

7.8.2 Computing Matrix Coprime Fractions

Consider a q×p proper rational matrix Ĝ(s) expressed as

$$\hat G(s)=\bar D^{-1}(s)\bar N(s)=N(s)D^{-1}(s) \qquad (7.81)$$

Clearly (7.81) implies

$$\bar N(s)D(s)=\bar D(s)N(s)$$

or

$$\bar D(s)\bigl(-N(s)\bigr)+\bar N(s)D(s)=0 \qquad (7.82)$$

We now show that, given a left fraction D̄⁻¹(s)N̄(s), not necessarily left coprime, we can obtain a right coprime fraction N(s)D⁻¹(s) by solving the polynomial matrix equation (7.82).
Suppose that

$$\bar D(s)=\bar D_0+\bar D_1s+\bar D_2s^2+\bar D_3s^3+\bar D_4s^4,\qquad \bar N(s)=\bar N_0+\bar N_1s+\bar N_2s^2+\bar N_3s^3+\bar N_4s^4$$
$$D(s)=D_0+D_1s+D_2s^2+D_3s^3,\qquad N(s)=N_0+N_1s+N_2s^2+N_3s^3$$

where D̄ᵢ ∈ R^{q×q}, N̄ᵢ ∈ R^{q×p}, Dᵢ ∈ R^{p×p} and Nᵢ ∈ R^{q×p} are all constant matrices. By (7.82), we have

$$SM:=\begin{bmatrix}
\bar D_0&\bar N_0&0&0&0&0&0&0\\
\bar D_1&\bar N_1&\bar D_0&\bar N_0&0&0&0&0\\
\bar D_2&\bar N_2&\bar D_1&\bar N_1&\bar D_0&\bar N_0&0&0\\
\bar D_3&\bar N_3&\bar D_2&\bar N_2&\bar D_1&\bar N_1&\bar D_0&\bar N_0\\
\bar D_4&\bar N_4&\bar D_3&\bar N_3&\bar D_2&\bar N_2&\bar D_1&\bar N_1\\
0&0&\bar D_4&\bar N_4&\bar D_3&\bar N_3&\bar D_2&\bar N_2\\
0&0&0&0&\bar D_4&\bar N_4&\bar D_3&\bar N_3\\
0&0&0&0&0&0&\bar D_4&\bar N_4
\end{bmatrix}
\begin{bmatrix}-N_0\\ D_0\\ -N_1\\ D_1\\ -N_2\\ D_2\\ -N_3\\ D_3\end{bmatrix}=0 \qquad (7.83)$$

This equation is the matrix version of (7.82), and the matrix S ∈ R^{2nq×n(p+q)} will be called a generalized resultant.
Let us search the linearly independent columns of S in order from left to right. We find that
• every column in a D block column is linearly independent of its left-hand-side (LHS) columns;
• for each N block column, we use Ni-column to denote its ith column;
• if the Ni-column in some N block is linearly dependent on its LHS columns, then all subsequent Ni-columns are linearly dependent on their LHS columns;
• if μi (i = 1, 2, …, p) denotes the number of linearly independent Ni-columns in S, the μi are called the column indices of Ĝ(s);
• the first Ni-column that becomes linearly dependent on its LHS columns is called the primary dependent Ni-column. Clearly, the (μi+1)th Ni-column is the primary dependent Ni-column.
Example 7.7 Find a right coprime fraction of the transfer matrix

$$\hat G(s)=\begin{bmatrix}\dfrac{4s-10}{2s+1}&\dfrac{3}{s+2}\\[4pt]\dfrac{1}{(2s+1)(s+2)}&\dfrac{s+1}{(s+2)^2}\end{bmatrix} \qquad (7.84)$$

First we must find a left fraction, not necessarily left coprime:

$$\hat G(s)=\begin{bmatrix}(2s+1)(s+2)&0\\ 0&(2s+1)(s+2)^2\end{bmatrix}^{-1}\begin{bmatrix}(4s-10)(s+2)&3(2s+1)\\ s+2&(s+1)(2s+1)\end{bmatrix}$$
Thus we have

$$\bar D(s)=\begin{bmatrix}(2s+1)(s+2)&0\\ 0&(2s+1)(s+2)^2\end{bmatrix}=\begin{bmatrix}2s^2+5s+2&0\\ 0&2s^3+9s^2+12s+4\end{bmatrix}$$
$$=\begin{bmatrix}2&0\\ 0&4\end{bmatrix}+\begin{bmatrix}5&0\\ 0&12\end{bmatrix}s+\begin{bmatrix}2&0\\ 0&9\end{bmatrix}s^2+\begin{bmatrix}0&0\\ 0&2\end{bmatrix}s^3$$

and

$$\bar N(s)=\begin{bmatrix}(4s-10)(s+2)&3(2s+1)\\ s+2&(s+1)(2s+1)\end{bmatrix}=\begin{bmatrix}4s^2-2s-20&6s+3\\ s+2&2s^2+3s+1\end{bmatrix}$$
$$=\begin{bmatrix}-20&3\\ 2&1\end{bmatrix}+\begin{bmatrix}-2&6\\ 1&3\end{bmatrix}s+\begin{bmatrix}4&0\\ 0&2\end{bmatrix}s^2+\begin{bmatrix}0&0\\ 0&0\end{bmatrix}s^3$$
So we obtain

$$\bar D_0=\begin{bmatrix}2&0\\ 0&4\end{bmatrix},\quad \bar D_1=\begin{bmatrix}5&0\\ 0&12\end{bmatrix},\quad \bar D_2=\begin{bmatrix}2&0\\ 0&9\end{bmatrix},\quad \bar D_3=\begin{bmatrix}0&0\\ 0&2\end{bmatrix}$$

and

$$\bar N_0=\begin{bmatrix}-20&3\\ 2&1\end{bmatrix},\quad \bar N_1=\begin{bmatrix}-2&6\\ 1&3\end{bmatrix},\quad \bar N_2=\begin{bmatrix}4&0\\ 0&2\end{bmatrix},\quad \bar N_3=\begin{bmatrix}0&0\\ 0&0\end{bmatrix}$$
%%%%%%%%%%%%%%%%%%%%
d0=[2 0;0 4]; d1=[5 0;0 12];
d2=[2 0;0 9]; d3=[0 0;0 2];
n0=[-20 3;2 1]; n1=[-2 6;1 3];
n2=[4 0;0 2]; n3=[0 0;0 0];
zero24=zeros(2,4);
D=[d0;d1;d2;d3]; N=[n0;n1;n2;n3];
DN=[D N];
S=cat(2,[DN;zero24;zero24],[zero24;DN;zero24],[zero24;zero24;DN]);
%%%%%%%%%%%%%%%%%%%%


The D-block and N-block columns alternate; within each N-block, the first column is an N1-column and the second an N2-column:

$$S=\begin{bmatrix}
2&0&-20&3&0&0&0&0&0&0&0&0\\
0&4&2&1&0&0&0&0&0&0&0&0\\
5&0&-2&6&2&0&-20&3&0&0&0&0\\
0&12&1&3&0&4&2&1&0&0&0&0\\
2&0&4&0&5&0&-2&6&2&0&-20&3\\
0&9&0&2&0&12&1&3&0&4&2&1\\
0&0&0&0&2&0&4&0&5&0&-2&6\\
0&2&0&0&0&9&0&2&0&12&1&3\\
0&0&0&0&0&0&0&0&2&0&4&0\\
0&0&0&0&0&2&0&0&0&9&0&2\\
0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&2&0&0
\end{bmatrix}$$


[Q,R]=qr(S);
The QR decomposition reveals the independent columns of S: a column of S is linearly independent of its LHS columns exactly when the corresponding diagonal entry of R is nonzero (d₁, d₂, n₁, n₂ below mark nonzero pivots of D1-, D2-, N1-, N2-columns; * marks a possibly nonzero entry):

$$R=\begin{bmatrix}
d_1&0&*&*&*&*&*&*&*&*&0&*\\
0&d_2&*&*&*&*&*&*&0&*&*&*\\
0&0&n_1&*&*&*&*&*&*&*&*&*\\
0&0&0&n_2&*&*&*&*&*&*&*&*\\
0&0&0&0&d_1&*&*&*&*&*&*&*\\
0&0&0&0&0&d_2&*&*&*&*&*&*\\
0&0&0&0&0&0&n_1&*&*&*&*&*\\
0&0&0&0&0&0&0&0&0&*&*&0\\
0&0&0&0&0&0&0&0&d_1&*&*&0\\
0&0&0&0&0&0&0&0&0&d_2&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0
\end{bmatrix}$$

The zero diagonal entries in columns 8, 11, and 12 show that the second N2-column and the third N1-column are the primary dependent columns; hence μ₁ = 2 and μ₂ = 1.
The primary dependent N2-column is the eighth column of S, so we form the submatrix consisting of it and all of its LHS columns:
S2=S(1:12,1:8);

$$S_2=\begin{bmatrix}
2&0&-20&3&0&0&0&0\\
0&4&2&1&0&0&0&0\\
5&0&-2&6&2&0&-20&3\\
0&12&1&3&0&4&2&1\\
2&0&4&0&5&0&-2&6\\
0&9&0&2&0&12&1&3\\
0&0&0&0&2&0&4&0\\
0&2&0&0&0&9&0&2\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&2&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0
\end{bmatrix}$$
z2=null(S2); z2b=z2/z2(8); zz2b=[z2b;0;0;0;0];

$$z_2=\begin{bmatrix}0.8030\\ -0.1147\\ 0.1147\\ 0.2294\\ -0.4588\\ 0.0000\\ 0.2294\\ 0.1147\end{bmatrix},\qquad z_{2b}=\begin{bmatrix}7\\ -1\\ 1\\ 2\\ -4\\ 0\\ 2\\ 1\end{bmatrix},\qquad zz_{2b}=\begin{bmatrix}7\\ -1\\ 1\\ 2\\ -4\\ 0\\ 2\\ 1\\ 0\\ 0\\ 0\\ 0\end{bmatrix}$$
Next, the primary dependent N1-column is the eleventh column of S. Deleting the dependent N2-column (column 8) and keeping columns 1-7 and 9-11, we form
S1=cat(2,S(1:12,1:7),S(1:12,9:11));

$$S_1=\begin{bmatrix}
2&0&-20&3&0&0&0&0&0&0\\
0&4&2&1&0&0&0&0&0&0\\
5&0&-2&6&2&0&-20&0&0&0\\
0&12&1&3&0&4&2&0&0&0\\
2&0&4&0&5&0&-2&2&0&-20\\
0&9&0&2&0&12&1&0&4&2\\
0&0&0&0&2&0&4&5&0&-2\\
0&2&0&0&0&9&0&0&12&1\\
0&0&0&0&0&0&0&2&0&4\\
0&0&0&0&0&2&0&0&9&0\\
0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&2&0
\end{bmatrix}\in R^{12\times 10}$$
z1=null(S1); z1b=z1/z1(10); zz1b=[z1b(1:7);0;z1b(8:9);z1b(10);0];

$$z_1=\begin{bmatrix}-0.9386\\ 0.0469\\ -0.0939\\ -0.0000\\ -0.0939\\ -0.0000\\ -0.2347\\ 0.1877\\ 0.0000\\ -0.0939\end{bmatrix},\qquad z_{1b}=\begin{bmatrix}10\\ -0.5\\ 1\\ 0\\ 1\\ 0\\ 2.5\\ -2\\ 0\\ 1\end{bmatrix},\qquad zz_{1b}=\begin{bmatrix}10\\ -0.5\\ 1\\ 0\\ 1\\ 0\\ 2.5\\ 0\\ -2\\ 0\\ 1\\ 0\end{bmatrix}$$
M=cat(2,zz1b,zz2b);

$$M=\begin{bmatrix}10&7\\ -0.5&-1\\ 1&1\\ 0&2\\ 1&-4\\ 0&0\\ 2.5&2\\ 0&1\\ -2&0\\ 0&0\\ 1&0\\ 0&0\end{bmatrix}$$

The two columns of M are the monic null vectors; each column has the block structure [−N₀; D₀; −N₁; D₁; −N₂; D₂], so the coefficient matrices of the right fraction are read off as
N0=-M(1:2,:); D0=M(3:4,:);
N1=-M(5:6,:); D1=M(7:8,:);
N2=-M(9:10,:); D2=M(11:12,:);
That is, we obtain

$$D_0=\begin{bmatrix}1&1\\ 0&2\end{bmatrix},\quad D_1=\begin{bmatrix}2.5&2\\ 0&1\end{bmatrix},\quad D_2=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}$$
$$N_0=\begin{bmatrix}-10&-7\\ 0.5&1\end{bmatrix},\quad N_1=\begin{bmatrix}-1&4\\ 0&0\end{bmatrix},\quad N_2=\begin{bmatrix}2&0\\ 0&0\end{bmatrix}$$

so that

$$D(s)=D_0+D_1s+D_2s^2=\begin{bmatrix}s^2+2.5s+1&2s+1\\ 0&s+2\end{bmatrix}$$

and

$$N(s)=N_0+N_1s+N_2s^2=\begin{bmatrix}2s^2-s-10&4s-7\\ 0.5&1\end{bmatrix}$$

Thus Ĝ(s) in (7.84) has the right coprime fraction

$$\hat G(s)=\begin{bmatrix}(2s-5)(s+2)&4s-7\\ 0.5&1\end{bmatrix}\begin{bmatrix}(s+2)(s+0.5)&2s+1\\ 0&s+2\end{bmatrix}^{-1} \qquad (7.86)$$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [N0,N1,N2,D0,D1,D2]=rightcoprimefraction
d0=[2 0;0 4]; d1=[5 0;0 12];d2=[2 0;0 9]; d3=[0 0;0 2];
n0=[-20 3;2 1]; n1=[-2 6;1 3]; n2=[4 0;0 2]; n3=[0 0;0 0];
zero24=zeros(2,4);
D=[d0;d1;d2;d3]; N=[n0;n1;n2;n3];
DN=[D N];
S=cat(2,[DN;zero24;zero24],[zero24;DN;zero24],[zero24;zero24;DN]);
[Q,R]=qr(S);
S2=S(1:12,1:8);
z2=null(S2);
z2b=z2/z2(8); zz2b=[z2b;0;0;0;0];
S1=cat(2,S(1:12,1:7),S(1:12,9:11));
z1=null(S1);
z1b=z1/z1(10); zz1b=[z1b(1:7);0;z1b(8:9);z1b(10);0];
M=cat(2,zz1b,zz2b);
N0=-M(1:2,:); D0=M(3:4,:);
N1=-M(5:6,:); D1=M(7:8,:);
N2=-M(9:10,:); D2=M(11:12,:);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In general, if the generalized resultant has μi linearly independent Ni-columns, then the D(s) computed by the preceding procedure is column reduced with column degrees μi. Thus we have

deg Ĝ(s) = deg det D(s) = Σ μi = total number of linearly independent N-columns in S.
The order of the column degrees is not crucial. For example, we have

$$\hat G(s)=N(s)D^{-1}(s)=[N(s)P][D(s)P]^{-1}=\hat N(s)\hat D^{-1}(s)$$

where N̂(s) = N(s)P, D̂(s) = D(s)P and

$$P=\begin{bmatrix}0&0&1\\ 1&0&0\\ 0&1&0\end{bmatrix}$$

These equations mean that the order of the columns of N(s) and D(s) can be changed.

Theorem 7.M4
Let Ĝ(s) = D̄⁻¹(s)N̄(s) be a left fraction, not necessarily coprime. We use the coefficient matrices of D̄(s) and N̄(s) to form the generalized resultant S in (7.83) and search its linearly independent columns in order from left to right. Let μi (i = 1, 2, …, p) be the number of linearly independent Ni-columns. Then we have

deg Ĝ(s) = μ₁ + μ₂ + ⋯ + μ_p   (7.87)

and a right coprime fraction N(s)D⁻¹(s) can be obtained by computing p monic null vectors of the p matrices formed from each primary dependent Ni-column and all of its LHS linearly independent columns.
Computing a left coprime fraction from a right fraction N(s)D⁻¹(s)
Similar to (7.83), we form

$$[-\bar N_0\ \ \bar D_0\ \ -\bar N_1\ \ \bar D_1\ \ -\bar N_2\ \ \bar D_2\ \ -\bar N_3\ \ \bar D_3]\,T=0 \qquad (7.88)$$

with

$$T=\begin{bmatrix}
D_0&D_1&D_2&D_3&D_4&0&0&0\\
N_0&N_1&N_2&N_3&N_4&0&0&0\\
0&D_0&D_1&D_2&D_3&D_4&0&0\\
0&N_0&N_1&N_2&N_3&N_4&0&0\\
0&0&D_0&D_1&D_2&D_3&D_4&0\\
0&0&N_0&N_1&N_2&N_3&N_4&0\\
0&0&0&D_0&D_1&D_2&D_3&D_4\\
0&0&0&N_0&N_1&N_2&N_3&N_4
\end{bmatrix} \qquad (7.89)$$

• Search the linearly independent rows of T in order from top to bottom.
• All D-rows are linearly independent.
• Let the Ni-row denote the ith N-row in each N block-row.
• If an Ni-row becomes linearly dependent on its preceding rows, then all Ni-rows in the subsequent N block-rows are linearly dependent on their preceding rows.
• The first Ni-row that becomes linearly dependent on its preceding rows is called the primary dependent Ni-row.
• Let νi (called the row indices), i = 1, 2, …, q, be the number of linearly independent Ni-rows. A MATLAB sketch of forming T is given after this list.
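A minimal sketch of forming T in (7.89), using the right-fraction coefficients found in Example 7.7 padded with zero blocks (an assumption for illustration: D3 = D4 = N3 = N4 = 0; circshift only wraps zero blocks here, so the shifts build the block-Toeplitz rows correctly):
%%%%%%%%%%%%%%%%
D0=[1 1;0 2]; D1=[2.5 2;0 1]; D2=[1 0;0 0]; D3=zeros(2); D4=zeros(2);
N0=[-10 -7;0.5 1]; N1=[-1 4;0 0]; N2=[2 0;0 0]; N3=zeros(2); N4=zeros(2);
Z = zeros(2);
DN = [D0 D1 D2 D3 D4 Z Z Z;       % one D-row/N-row block pair of (7.89)
      N0 N1 N2 N3 N4 Z Z Z];
T = [DN;
     circshift(DN,[0 2]);          % shift right by one 2x2 block
     circshift(DN,[0 4]);
     circshift(DN,[0 6])];
%%%%%%%%%%%%%%%%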
7.9 Realizations from Matrix Coprime Fractions
For simplicity, we consider a 2×2 strictly proper rational matrix

$$\hat G(s)=N(s)D^{-1}(s) \qquad (7.90)$$

where N(s) and D(s) are right coprime and D(s) is in column echelon form. We further assume that the column degrees of D(s) are μ₁ = 4 and μ₂ = 2. First we define

$$H(s):=\begin{bmatrix}s^{\mu_1}&0\\ 0&s^{\mu_2}\end{bmatrix}=\begin{bmatrix}s^4&0\\ 0&s^2\end{bmatrix} \qquad (7.91)$$

and

$$L(s):=\begin{bmatrix}s^{\mu_1-1}&0\\ \vdots&\vdots\\ s&0\\ 1&0\\ 0&s^{\mu_2-1}\\ 0&1\end{bmatrix}=\begin{bmatrix}s^3&0\\ s^2&0\\ s&0\\ 1&0\\ 0&s\\ 0&1\end{bmatrix} \qquad (7.92)$$
The procedure for developing a realization of

$$\hat y(s)=\hat G(s)\hat u(s)=N(s)D^{-1}(s)\hat u(s)$$

follows the scalar case closely. First we introduce a new variable v̂(s) = D⁻¹(s)û(s); then we have

$$D(s)\hat v(s)=\hat u(s) \qquad (7.93)$$
$$\hat y(s)=N(s)\hat v(s) \qquad (7.94)$$

Let us define the state variables as

$$\hat x(s)=L(s)\hat v(s)=\begin{bmatrix}s^3&0\\ s^2&0\\ s&0\\ 1&0\\ 0&s\\ 0&1\end{bmatrix}\begin{bmatrix}\hat v_1(s)\\ \hat v_2(s)\end{bmatrix}=\begin{bmatrix}s^3\hat v_1(s)\\ s^2\hat v_1(s)\\ s\hat v_1(s)\\ \hat v_1(s)\\ s\hat v_2(s)\\ \hat v_2(s)\end{bmatrix}=:\begin{bmatrix}\hat x_1(s)\\ \hat x_2(s)\\ \hat x_3(s)\\ \hat x_4(s)\\ \hat x_5(s)\\ \hat x_6(s)\end{bmatrix} \qquad (7.95)$$
or, in the time domain,

$$x_1(t)=v_1^{(3)}(t),\ x_2(t)=\ddot v_1(t),\ x_3(t)=\dot v_1(t),\ x_4(t)=v_1(t),\ x_5(t)=\dot v_2(t),\ x_6(t)=v_2(t)$$

The state vector has dimension μ₁ + μ₂ = 6. The definitions immediately imply

$$\dot x_2=x_1,\quad \dot x_3=x_2,\quad \dot x_4=x_3,\quad \dot x_6=x_5 \qquad (7.96)$$
Next we use (7.93) to develop equations for ẋ₁ and ẋ₅. First we express D(s) as

$$D(s)=D_{hc}H(s)+D_{lc}L(s) \qquad (7.97)$$

where H(s) and L(s) are defined in (7.91) and (7.92), respectively, Dhc and Dlc are constant matrices, and the column-degree coefficient matrix Dhc is a unit upper triangular matrix.

Substituting (7.97) into (7.93) yields

$$[D_{hc}H(s)+D_{lc}L(s)]\hat v(s)=\hat u(s)$$

or

$$H(s)\hat v(s)=-D_{hc}^{-1}D_{lc}L(s)\hat v(s)+D_{hc}^{-1}\hat u(s)$$

By (7.95), we have

$$H(s)\hat v(s)=-D_{hc}^{-1}D_{lc}\hat x(s)+D_{hc}^{-1}\hat u(s) \qquad (7.98)$$

Let

$$D_{hc}^{-1}D_{lc}=:\begin{bmatrix}\alpha_{111}&\alpha_{112}&\alpha_{113}&\alpha_{114}&\alpha_{121}&\alpha_{122}\\ \alpha_{211}&\alpha_{212}&\alpha_{213}&\alpha_{214}&\alpha_{221}&\alpha_{222}\end{bmatrix} \qquad (7.99)$$

and

$$D_{hc}^{-1}=\begin{bmatrix}1&b_{12}\\ 0&1\end{bmatrix} \qquad (7.100)$$

Substituting (7.99) and (7.100) into (7.98) yields

$$\begin{bmatrix}s\hat x_1(s)\\ s\hat x_5(s)\end{bmatrix}=-\begin{bmatrix}\alpha_{111}&\alpha_{112}&\alpha_{113}&\alpha_{114}&\alpha_{121}&\alpha_{122}\\ \alpha_{211}&\alpha_{212}&\alpha_{213}&\alpha_{214}&\alpha_{221}&\alpha_{222}\end{bmatrix}\hat x(s)+\begin{bmatrix}1&b_{12}\\ 0&1\end{bmatrix}\hat u(s)$$

In the time domain, this becomes

$$\begin{bmatrix}\dot x_1\\ \dot x_5\end{bmatrix}=-\begin{bmatrix}\alpha_{111}&\alpha_{112}&\alpha_{113}&\alpha_{114}&\alpha_{121}&\alpha_{122}\\ \alpha_{211}&\alpha_{212}&\alpha_{213}&\alpha_{214}&\alpha_{221}&\alpha_{222}\end{bmatrix}x+\begin{bmatrix}1&b_{12}\\ 0&1\end{bmatrix}u \qquad (7.101)$$
