Matrix Polynomials
The mathematical prerequisites for reading this monograph are a working knowledge of linear
algebra and matrix theory and a familiarity with analysis including complex variables. In this
introductory chapter we present the notation we will use and review some of the concepts in matrix
theory. In the remainder of the chapter we treat some of the more specialized topics that will
be used in engineering applications: matrix functions, Drazin inverses, and matrix polynomials.
An eigenvalue $\lambda_i$ of $A \in \mathbb{C}^{n\times n}$ is said to be defective if its geometric multiplicity is strictly less
than its algebraic multiplicity. Equivalently, $\lambda_i$ is said to be defective if it does not have a full
set of linearly independent (right or left) eigenvectors.
THE INDEX OF A MATRIX AND THE DRAZIN INVERSE: There are many types of inversion
of square matrices, such as the group inverse, the pseudoinverse and the generalized inverse. We now
introduce a different type of generalized inverse (i.e. the Drazin inverse), which applies only to
square matrices yet is more useful in certain applications. The Drazin inverse can be easily
defined using the Jordan canonical form. However, because this concept is important in our
work we shall provide a self-contained development that will also serve to review some basic
techniques.
If $A \in \mathbb{C}^{n\times n}$ has index $k = \operatorname{ind}(A)$ (the smallest nonnegative integer with $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$), then $A$ admits a core-nilpotent decomposition
$$A = Q\begin{bmatrix} C & 0\\ 0 & N\end{bmatrix}Q^{-1},$$
where $C$ is nonsingular and $N$ is nilpotent of index $k$.
DEFINITION (Drazin Inverse) Inverting the nonsingular core and neglecting the
nilpotent part in the core-nilpotent decomposition produces a natural generalization of
matrix inversion. More precisely, if $A = Q\,\operatorname{diag}(C, N)\,Q^{-1}$ as above, then
$$A^D = Q\begin{bmatrix} C^{-1} & 0\\ 0 & 0\end{bmatrix}Q^{-1}$$
defines the Drazin inverse of $A$. Even though the components in a core-nilpotent
decomposition are not uniquely defined by $A$, it can be proven that $A^D$ is unique and has the
following properties:
$$A^DAA^D = A^D, \qquad AA^D = A^DA, \qquad A^{k+1}A^D = A^k, \quad \text{with } k = \operatorname{ind}(A).$$
FUNCTION OF A MATRIX: [Campbell S.L 1980] Throughout this section we use the
following notation: for $A \in \mathbb{C}^{n\times n}$ let its characteristic polynomial be
$c(\lambda) = (\lambda-\lambda_1)^{m_1}(\lambda-\lambda_2)^{m_2}\cdots(\lambda-\lambda_s)^{m_s}$,
where the eigenvalues $\lambda_1,\dots,\lambda_s$ are distinct and $m_1 + \cdots + m_s = n$. Let
$X_i = \mathcal{N}\big((A-\lambda_iI)^{m_i}\big)$. We know that $X_i$ is an invariant subspace for
$A$ and that $\mathbb{C}^n = X_1 \oplus \cdots \oplus X_s$. We also know that the associated spectral component $Z_i$ is a projection on $X_i$.
Since the $Z_i$ are polynomials in $A$ we have $AZ_i = Z_iA$. Other properties of the $Z_i$ are
$$Z_1 + Z_2 + \cdots + Z_s = I, \qquad Z_iZ_j = 0 \ \ (i \neq j), \qquad Z_i^2 = Z_i,$$
and, for any polynomial $p$,
$$p(A) = \sum_{i=1}^{s}\sum_{j=0}^{m_i-1}\frac{p^{(j)}(\lambda_i)}{j!}\,(A-\lambda_iI)^jZ_i.$$
THEOREM: [Campbell S.L 1980] For any $A \in \mathbb{C}^{n\times n}$ with spectrum $\sigma(A)$, let $\mathcal{F}(A)$ denote
the class of all functions $f: \mathbb{C} \to \mathbb{C}$ which are analytic in some open set containing $\sigma(A)$. For
any scalar function $f \in \mathcal{F}(A)$, the corresponding matrix function $f(A)$ is defined by:
$$f(A) = \sum_{i=1}^{s}\sum_{j=0}^{m_i-1}\frac{f^{(j)}(\lambda_i)}{j!}\,(A-\lambda_iI)^jZ_i.$$
This shows that the Drazin inverse is the matrix function corresponding to the
reciprocal $f(\lambda) = 1/\lambda$, defined on the nonzero eigenvalues:
$$A^D = \sum_{\lambda_i \neq 0}\ \sum_{j=0}^{m_i-1}\frac{(-1)^j}{\lambda_i^{\,j+1}}\,(A-\lambda_iI)^jZ_i.$$
Where $Z_i = \frac{1}{2\pi i}\oint_{\Gamma_i}(zI-A)^{-1}dz$ is a projection ($\Gamma_i$ enclosing $\lambda_i$ only). Equivalently,
$$f(A) = \frac{1}{2\pi i}\oint_{\Gamma}f(z)(zI-A)^{-1}dz,$$
where $\Gamma$ is a contour lying in the disk $|z| \le r$ and enclosing all the eigenvalues of $A$.
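As a small numerical illustration of the Cauchy-integral definition, the following sketch evaluates $f(A)$ by applying the trapezoidal rule to the contour integral on a circle enclosing $\sigma(A)$, and compares the result with SciPy's general-purpose funm; the function name f_of_A_contour and the radius heuristic are our own choices, not from the source.

    import numpy as np
    from scipy.linalg import funm

    def f_of_A_contour(A, f, npts=200):
        # f(A) = (1/(2*pi*i)) \oint f(z) (zI - A)^{-1} dz, trapezoidal
        # rule on a circle of radius r enclosing all eigenvalues of A.
        n = A.shape[0]
        r = 1.5 * max(np.abs(np.linalg.eigvals(A))) + 1.0
        F = np.zeros((n, n), dtype=complex)
        for t in 2 * np.pi * np.arange(npts) / npts:
            z = r * np.exp(1j * t)
            dz = 1j * z * (2 * np.pi / npts)   # dz along the circle
            F += f(z) * np.linalg.inv(z * np.eye(n) - A) * dz
        return F / (2j * np.pi)

    A = np.array([[4.0, 1.0], [0.0, 2.0]])
    print(np.allclose(f_of_A_contour(A, np.exp).real, funm(A, np.exp)))

The trapezoidal rule converges geometrically for periodic integrands, so a few hundred quadrature points already reproduce $e^A$ to machine precision here.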
LEMMA [L.S. Shieh 1983] Let $A$ be defined as above and let $f(\lambda)$, $g(\lambda)$ and $h(\lambda)$ be analytic at each
$\lambda_i \in \sigma(A)$; then
(i) if $f(\lambda) \equiv c$, then $f(A) = cI$;
(ii) if $f(\lambda) = \lambda$, then $f(A) = A$, and $f(A)A = Af(A)$;
(iii) if $f(\lambda) = g(\lambda) + h(\lambda)$, then $f(A) = g(A) + h(A)$;
(iv) if $f(\lambda) = g(\lambda)h(\lambda)$, then $f(A) = g(A)h(A) = h(A)g(A)$;
(v) if $f(\lambda) = g(h(\lambda))$ and $g(\lambda)$ is analytic at $h(\lambda_i)$, $\lambda_i \in \sigma(A)$, then $f(A) = g(h(A))$;
(vi) $f(I \otimes A) = I \otimes f(A)$, where $\otimes$ is the Kronecker product;
(vii) $f(A^T) = f(A)^T$ and $f(PAP^{-1}) = Pf(A)P^{-1}$; in integral form,
$$f(A) = \frac{1}{2\pi i}\oint_{\Gamma}f(z)(zI-A)^{-1}dz = \sum_{i=1}^{s}\frac{1}{2\pi i}\oint_{\Gamma_i}f(z)(zI-A)^{-1}dz.$$
i) $A^DA = AA^D$.
ii) $A^DAA^D = A^D$ and $A^{k+1}A^D = A^k$.
iii) $(A^m)^D = (A^D)^m$ and $(A^T)^D = (A^D)^T$.
iv) $(A^D)^D = A^2A^D$.
v) $AA^D$ is the idempotent matrix onto $R(A^k)$ along $N(A^k)$.
vi) $A^D = 0$ if and only if $A$ is nilpotent.
vii) $R(A^D) = R(A^k)$ and $N(A^D) = N(A^k)$, with $k = \operatorname{ind}(A)$.
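A compact numerical sketch of the Drazin inverse, using the known pseudoinverse identity $A^D = A^k(A^{2k+1})^{+}A^k$ with $k = \operatorname{ind}(A)$; the rank-stabilization loop for the index and the function name drazin are our own choices.

    import numpy as np

    def drazin(A):
        # A^D = A^k (A^(2k+1))^+ A^k, where ^+ is the Moore-Penrose
        # pseudoinverse and k = ind(A), the smallest k with
        # rank(A^(k+1)) == rank(A^k). A sketch for small dense matrices.
        n = A.shape[0]
        k, P = 0, np.eye(n)
        while np.linalg.matrix_rank(P @ A) != np.linalg.matrix_rank(P):
            P, k = P @ A, k + 1
        Ak = np.linalg.matrix_power(A, k)
        return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

    A = np.array([[1.0, 1.0], [0.0, 0.0]])        # singular, ind(A) = 1
    AD = drazin(A)
    print(np.allclose(AD @ A @ AD, AD), np.allclose(A @ AD, AD @ A))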
$$A(\lambda) = A_0\lambda^l + A_1\lambda^{l-1} + \cdots + A_{l-1}\lambda + A_l \qquad (8)$$
where the coefficients $A_i$ are constant matrices in $\mathbb{C}^{m\times m}$. The matrix $A_0$ is named the highest
coefficient or leading matrix coefficient of the matrix polynomial $A(\lambda)$. If $A_0 \neq 0$,
then the number $l$ is called the degree of the matrix polynomial $A(\lambda)$, and it is designated
by $\deg A(\lambda)$; the number $m$ is called the order of the matrix polynomial $A(\lambda)$, and $\lambda$ is a
complex variable.
REMARK If the leading matrix coefficient $A_0$ is nonsingular but not an identity matrix, then
$A(\lambda)$ can be premultiplied by $A_0^{-1}$ to get a monic matrix polynomial. In case $A_0$ is singular and
$\det(A(\lambda_0)) \neq 0$ for some $\lambda_0$ (so that, after a shift of variable, the trailing coefficient is
nonsingular), then $A(\lambda)$ can be reversed, $\tilde A(\lambda) = \lambda^lA(1/\lambda)$, to produce a new matrix polynomial with a
nonsingular leading coefficient.
THEOREM [P. Lancaster 1985] The right and left remainders of a matrix polynomial $A(\lambda)$ upon
division by $(\lambda I - X)$ are $A_R(X)$ and $A_L(X)$, respectively:
$$A(\lambda) = Q(\lambda)(\lambda I - X) + A_R(X), \qquad A_R(X) = A_0X^l + A_1X^{l-1} + \cdots + A_l$$
$$A(\lambda) = (\lambda I - X)\widetilde Q(\lambda) + A_L(X), \qquad A_L(X) = X^lA_0 + X^{l-1}A_1 + \cdots + A_l$$
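The right and left values can be computed by a block Horner scheme; a short sketch (the names right_eval and left_eval are ours), with coeffs = [A_0, A_1, ..., A_l]:

    import numpy as np

    def right_eval(coeffs, X):
        # A_R(X) = A0 X^l + A1 X^(l-1) + ... + Al (Horner, X on the right)
        R = coeffs[0].copy()
        for Ai in coeffs[1:]:
            R = R @ X + Ai
        return R

    def left_eval(coeffs, X):
        # A_L(X) = X^l A0 + X^(l-1) A1 + ... + Al (Horner, X on the left)
        L = coeffs[0].copy()
        for Ai in coeffs[1:]:
            L = X @ L + Ai
        return L

A matrix X is a right (left) solvent of A(lambda) exactly when right_eval (left_eval) returns the zero matrix.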
DEFINITION [K. Hariche 1989] Let $A(\lambda)$ be a monic matrix polynomial with right solvents $R_i$
of multiplicity $n_i$ (i.e. $(\lambda I - R_i)^{n_i}$ is a right divisor of $A(\lambda)$ and $(\lambda I - R_i)^{n_i+1}$
isn't). Assume that $n_1 + n_2 + \cdots + n_s = l$; then the generalized block
Vandermonde matrix $V$ is built of block columns, the group for $R_i$ having $(k,j)$ block entry
$$\binom{k}{j}R_i^{\,k-j}, \qquad k = 0,\dots,l-1,\ \ j = 0,\dots,n_i-1$$
(the derivative structure of the usual block Vandermonde matrix).
THEOREM [K. Hariche 1989] Given a $\lambda$-matrix $A(\lambda)$, a complete set of right solvents
$\{R_1,\dots,R_s\}$ with multiplicities $\{n_1,\dots,n_s\}$ will exist if and only if the generalized
block Vandermonde matrix $V$ is invertible.
The generalized block Jordan matrix $J = \operatorname{diag}(J_1,\dots,J_s)$ can be obtained from the block companion form matrix $C_A$
associated with $A(\lambda)$ through the block Vandermonde similarity
$$C_A = VJV^{-1}.$$
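A sketch of the generalized block Vandermonde construction under the derivative-structure assumption stated above (the function name block_vandermonde is ours); checking its condition number gives a practical test of the theorem's invertibility criterion.

    import numpy as np
    from math import comb

    def block_vandermonde(solvents, mults):
        # Block column group for solvent R_i with multiplicity n_i:
        # block (k, j) = C(k, j) * R_i^(k - j) for k >= j, zero otherwise.
        l = sum(mults)
        m = solvents[0].shape[0]
        cols = []
        for R, n in zip(solvents, mults):
            for j in range(n):
                blocks = [comb(k, j) * np.linalg.matrix_power(R, k - j)
                          if k >= j else np.zeros((m, m)) for k in range(l)]
                cols.append(np.vstack(blocks))
        return np.hstack(cols)

    # a complete set of solvents exists iff V is invertible, e.g.:
    # V = block_vandermonde([R1, R2], [1, 2]); print(np.linalg.cond(V))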
REMARK Infinite eigenvalues can still be covered by defining invariant pairs for the
reversal of the polynomial (8). If a polynomial has both zero and infinite eigenvalues, they have to be
handled by separate invariant pairs, one for the original and one for the reversed polynomial.
An invariant pair $(X, S)$ is said to be minimal if there exists $p \ge 1$ such that
$$V_p(X,S) = \begin{bmatrix} X\\ XS\\ \vdots\\ XS^{p-1}\end{bmatrix} \qquad (14)$$
has full column rank. The smallest such $p$ is called the minimality index of $(X, S)$.
DEFINITION [T. Betcke (2011)] An invariant pair $(X, S)$ for a regular matrix polynomial
$P(\lambda)$ of degree $l$ is called simple if $(X, S)$ is minimal and the algebraic multiplicities of the
eigenvalues of $S$ are identical to the algebraic multiplicities of the corresponding eigenvalues
of $P(\lambda)$.
REMARK Invariant pairs are closely related to the theory of standard pairs presented in the
very well-known book (I. Gohberg, P. Lancaster, and L. Rodman 2009), and, in particular, to
Jordan pairs. If $(X, S)$ is a simple invariant pair and $S$ is in Jordan form, then $(X, S)$ is a
Jordan pair. Polynomial eigenpairs and invariant pairs can also be defined in terms of a
contour integral. Indeed, an equivalent representation for (13) is the following.
PROPOSITION [C. Effenberger (2013)] A pair $(X, S) \in \mathbb{C}^{m\times k}\times\mathbb{C}^{k\times k}$ is an invariant pair if
and only if it satisfies the relation:
$$P(X, S) := \frac{1}{2\pi i}\oint_{\Gamma}P(z)\,X\,(zI - S)^{-1}dz = 0,$$
where $\Gamma$ is a contour with the spectrum of $S$ in its interior. This formulation allows us to
choose the contour to compute invariant pairs with eigenvalues lying in a particular region of the
complex plane.
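For finite-dimensional checks the contour form reduces to the algebraic residual $P(X,S) = \sum_i A_iXS^i$ (ascending powers assumed here, matching the usual algebraic form of (13)); a minimal sketch:

    import numpy as np

    def invariant_pair_residual(coeffs, X, S):
        # P(X, S) = sum_i A_i X S^i for P(z) = sum_i coeffs[i] z^i;
        # (X, S) is an invariant pair when this residual vanishes.
        R = np.zeros_like(X, dtype=float)
        Spow = np.eye(S.shape[0])
        for Ai in coeffs:
            R = R + Ai @ X @ Spow
            Spow = Spow @ S
        return R

In particular a solvent X of P gives the invariant pair (I, X): invariant_pair_residual(coeffs, np.eye(m), X) is then the zero matrix.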
Matrix Solvents: In this section we study the matrix solvent problem as a particular case
of the invariant pair problem, and we apply to solvents some results we have obtained for
invariant pairs.
$$P(I, X) = \frac{1}{2\pi i}\oint_{\Gamma}P(z)(zI - X)^{-1}dz = 0,$$
where $\Gamma$ is a contour with the spectrum of $X$ in its interior; thus $X$ is a solvent of $P(\lambda)$ precisely
when the pair $(I, X)$ is an invariant pair.
PAIRS OF MATRIX POLYNOMIALS: In the last section, a language and formalism have
been developed for the full description of eigenvalues, eigenvectors, and Jordan chains of
matrix polynomials. In this section, pairs of matrices will be introduced which determine the
spectral information about a matrix polynomial. A monic matrix polynomial $A(\lambda)$ is uniquely
determined by its Jordan pairs (or eigenpairs), which are pairs of matrices $(X_i, J_i)$, where $J_i$ is
a Jordan block of size $m_i$ and $X_i$ is formed by the respective Jordan chain (generalized
eigenvectors); this pair satisfies the equation
$$A_0X_iJ_i^{\,l} + A_1X_iJ_i^{\,l-1} + \cdots + A_lX_i = 0,$$
and a solvent is always a suitable combination of these eigenpairs. Observe that when $m_i = 1$ an eigenpair is just an eigenvector
and its respective eigenvalue (latent vector and latent value, or root). It follows that the
eigenvalues and the eigenvectors of a solvent are also those of $A(\lambda)$.
$$X = [X_1\ \ X_2\ \ \cdots\ \ X_s], \qquad T = \operatorname{diag}(J_1, J_2, \dots, J_s) \qquad (16)$$
$$Q_k(X, T) = \begin{bmatrix} X\\ XT\\ \vdots\\ XT^{\,k-1}\end{bmatrix} \qquad (17)$$
The pair $(X, T)$ will be called a standard pair of order $p$ if $\operatorname{rank}Q_k(X,T) = p$ for some $k$. Its main property is
summarized in the following.
LEMMA [Nir Cohen (1983)] The admissible pair $(X, T)$ is standard iff $\operatorname{rank}Q_k(X,T)$ is maximal
for all sufficiently large $k$ (the matrix $Q_k(X,T)$ is sometimes called the controllability matrix of the pair).
LEMMA [Nir Cohen (1983)] The admissible pair $(X, T)$ is a standard pair of order $lm$ for $A(\lambda)$ iff
$(X, T)$ is standard of order $lm$ and the equation
$$Q_l(X,T)\,T = C_A\,Q_l(X,T)$$
holds, where $C_A$ is the companion linearization of $A(\lambda)$:
$$C_A = \begin{bmatrix} 0 & I & & \\ & \ddots & \ddots & \\ & & 0 & I\\ -A_0^{-1}A_l & -A_0^{-1}A_{l-1} & \cdots & -A_0^{-1}A_1 \end{bmatrix},$$
and $\sigma(C_A) = \sigma(A(\lambda))$.
1) $(X, T)$ is a standard pair of order $lm$ for $A(\lambda)$;
2) $(X, T)$ is a finite Jordan pair for $A(\lambda)$;
3) $(X, T)$ is a local Jordan pair for $A(\lambda)$ at $\lambda_0 \in \sigma(A(\lambda))$.
It can be easily verified that a Jordan pair of order $p$ for $A(\lambda)$ is also a standard pair of order
$p$ for $A(\lambda)$.
THEOREM [Nir Cohen (1983)] Let two regular matrix polynomials, $A(\lambda)$ and $B(\lambda)$, have
the same standard pair $(X, T)$. Then there exists a (constant) invertible matrix $M$ such
that $B(\lambda) = MA(\lambda)$.
$$A(\lambda)^{-1} = X(\lambda I - T)^{-1}Y \qquad (19)$$
where
$$Y = Q_l(X,T)^{-1}\begin{bmatrix}0\\ \vdots\\ 0\\ A_0^{-1}\end{bmatrix}, \qquad Q_l(X,T) = \begin{bmatrix}X\\ XT\\ \vdots\\ XT^{\,l-1}\end{bmatrix}.$$
The following two theorems give the explicit solutions of basic linear systems of differential
(and/or difference) equations, using a given standard pair of the characteristic matrix
polynomial $A(\lambda)$. We shall assume throughout that $A(\lambda) = \sum_{i=0}^{l}A_i\lambda^i$ is a given regular
matrix polynomial with a given standard pair $(X, T) = ([X_F\ \ X_\infty],\ \operatorname{diag}(T_F, T_\infty))$. It is
also assumed that $T_\infty$ is a nilpotent matrix, i.e. $T_\infty^{\,\nu} = 0$ for some $\nu$. This is equivalent to
stating that $\sigma(T_\infty) = \{0\}$, where $\nu = \operatorname{ind}(T_\infty)$. This condition can always be achieved by
transforming our given standard pair $(X, T)$ to a Jordan pair, via simple operations. In most
applications, however, $(X, T)$ is a Jordan pair to begin with, so that this condition holds.
THEOREM [I. Gohberg (1981)] The general solution of the differential equation
$$\sum_{i=0}^{l}A_i\,x^{(i)}(t) = f(t)$$
for a given smooth differentiable function $f(t)$ is:
$$x(t) = X_Fe^{tT_F}\Big(c + \int_0^t e^{-sT_F}Y_F\,f(s)\,ds\Big) + X_\infty\sum_{j\ge 0}T_\infty^{\,j}Y_\infty\,f^{(j)}(t) \qquad (22)$$
where $c$ is an arbitrary constant vector and $Y_F$, $Y_\infty$ are determined by the standard pair.
THEOREM [I. Gohberg (1981)] The general solution of the difference equation
$\sum_{i=0}^{l}A_i\,x_{j+i} = f_j$ for a given sequence of vectors $f_j$ is:
$$x_j = X_FT_F^{\,j}\,c + \sum_{i}W_i\,f_{j+i} \qquad (23)$$
where $c$ is an arbitrary constant vector and the matrices $W_i$ are determined by the standard pair.
$$\hat X = XU, \qquad TU = U\hat T \quad \text{for some injective matrix } U \qquad (24)$$
(in this case $(\hat X, \hat T)$ is called a restriction of $(X, T)$).
THEOREM [Nir Cohen (1983)] Let $(X, T)$ and $(\hat X, \hat T)$ be Jordan pairs of orders $p$ and $\hat p$,
respectively, for the regular matrix polynomials $A(\lambda)$ and $B(\lambda)$, where $\hat p \le p$. Then
$B(\lambda)$ is a right divisor of $A(\lambda)$ iff $(\hat X, \hat T)$ is a restriction of
$(X, T)$.
THEOREM [I. Krupnik (1979)] Suppose that $A(\lambda)$ is a monic matrix polynomial. If the
lengths of all its Jordan chains are equal to 1, then $A(\lambda)$ can be decomposed into a product of
linear factors: $A(\lambda) = \prod_{i=1}^{l}(\lambda I - X_i)$. (This is a sufficient condition for factorization.)
COROLLARY [I. Gohberg (1985)] If the monic matrix polynomial $A(\lambda)$ has all its
eigenvalues distinct, then the number of distinct factorizations into products of linear factors
is not less than $\prod_{k=2}^{l}\binom{km}{m}$.
THEOREM [E. Pereira 2003] Let $A(\lambda)$ be a matrix polynomial and let $C_A$ be the associated
block companion matrix. If $C_A$ is diagonalizable and at least one of its eigenvalues has
geometric multiplicity greater than 1, then $A(\lambda)$ has infinitely many solvents.
THEOREM [E. Pereira 2003] Let $A(\lambda)$ be a matrix polynomial, $C_A$ the associated block
companion matrix, and $S$ a solvent of $A(\lambda)$. If one eigenvalue of $S$ has geometric
multiplicity greater in $S$ than in $C_A$, then $A(\lambda)$ has infinitely many solvents.
FINAL RESULT: [E. Pereira 2003] The number of solvents can be summarized as: if the
matrix $C_A$ is diagonalizable and its eigenvalues are distinct, then the minimum number of
solvents of $A(\lambda)$ is $l$, and the maximum number is $\binom{lm}{m}$. If the matrix $C_A$ is diagonalizable
and has at least one multiple eigenvalue, then the number of solvents of $A(\lambda)$ is infinite. If the
matrix $C_A$ is not diagonalizable, then the number of solvents of $A(\lambda)$ can be zero, finite or
infinite. It will be infinite if $A(\lambda)$ has a solvent $S$ which has an eigenvalue with geometric
multiplicity greater in $S$ than in $C_A$.
--------------------------------------------------
Additional results about the existence of complete set of roots
--------------------------------------------------
THEOREM [I. Krupnik (1979)] Suppose that $A(\lambda)$ is a monic matrix polynomial. If all the
multiplicities of zeros of the characteristic polynomial $\det A(\lambda)$ are not greater than 2, then
$A(\lambda)$ can be decomposed into a product of linear factors.
THEOREM [I. Krupnik (1979)] Suppose that $A(\lambda)$ is a monic matrix polynomial. If the
lengths of all its Jordan chains are not greater than 2, then $A(\lambda)$ can be decomposed into a
product of linear factors. The proof is found in the paper (I. Krupnik (1979)).
Let $H$ be a Hilbert space, and $A$ a closed linear operator acting in $H$ (this means that its
domain $D(A)$ and its range $R(A)$ are contained in $H$). Denote by $\mathcal{B}(H)$ the set of all
bounded linear operators acting in $H$. For $A \in \mathcal{B}(H)$ it is always assumed that $D(A) = H$.
If the linear pencil $\lambda I - Z$ is a spectral divisor of the pencil $L(\lambda)$, then $Z$ will be called a
spectral root of the operator equation $L(Z) = 0$ (or a spectral root of the pencil $L(\lambda)$).
LEMMA [A. S. Markus 1988] Suppose that $Z$ is an operator root (right or left) of the pencil
$L(\lambda)$, $\sigma(Z) \subseteq \sigma(L)$, and $\sigma(L)\setminus\sigma(Z)$ is closed. If $\gamma$ is a closed curve separating $\sigma(Z)$ from
$\sigma(L)\setminus\sigma(Z)$, then $Z$ is a spectral root (right or left) of $L(\lambda)$ if and only if the
Riesz-type contour integrals of $(\lambda I - Z)^{-1}$ and $L(\lambda)^{-1}$ along $\gamma$ satisfy the corresponding
product identity (right and left versions, respectively; see A. S. Markus 1988).
COROLLARY [A. S. Markus 1988] If the Vandermonde operator $V$ is invertible, then the
companion operator of $L(\lambda)$ and $\operatorname{diag}(Z_1,\dots,Z_n)$ are similar.
COROLLARY [A. S. Markus 1988] If $V$ is invertible, then $\sigma(L) = \bigcup_{j=1}^{n}\sigma(Z_j)$.
COROLLARY [A. S. Markus 1988] If $V$ is invertible, then the system of eigenvectors and
associated vectors of $L(\lambda)$ is $n$-fold complete in $H$ if and only if the system of root vectors of
each of the operators $Z_j$ ($j = 1,\dots,n$) is complete in $H$.
THEOREM [A. S. Markus 1988] Suppose that $Z_1,\dots,Z_n$ are (right or left) operator roots of
the pencil $L(\lambda)$ with $\sigma(Z_i)\cap\sigma(Z_j) = \emptyset$ for $i \neq j$. If the system of root vectors of each of the operators
$Z_j$ ($j = 1,\dots,n$) is complete in $H$, then the system of eigenvectors and associated vectors
of $L(\lambda)$ is $n$-fold complete in $H$.
In this subsection we investigate the connection between the properties of the operator $V$ and
the mutual arrangement of the spectrum of $L(\lambda)$ and the spectra of its operator roots
$Z_1,\dots,Z_n$.
THEOREM [A. S. Markus 1988] If $\{Z_j\}_{1}^{n}$ is a complete set of operator roots of the pencil
$L(\lambda)$, then the following statements are equivalent:
1) $V$ is left-invertible;
2) $\sigma(Z_i)\cap\sigma(Z_j) = \emptyset$ for $i \neq j$;
3) the spectra $\sigma(Z_1),\dots,\sigma(Z_n)$ form a disjoint family inside $\sigma(L)$.
THEOREM [A. S. Markus 1988] If $\{Z_j\}_{1}^{n}$ is a complete set of operator roots of the pencil
$L(\lambda)$, then the following statements are equivalent:
1) $V$ is right-invertible;
2) the spectra of the roots cover the spectrum of the pencil;
3) $\sigma(L) = \bigcup_{j=1}^{n}\sigma(Z_j)$.
Combining the two results, the following statements are also equivalent:
1) $V$ is invertible;
2) the spectra of the roots cover $\sigma(L)$ and are pairwise disjoint;
3) $\sigma(L) = \bigcup_{j=1}^{n}\sigma(Z_j)$ with $\sigma(Z_i)\cap\sigma(Z_j) = \emptyset$ for $i \neq j$.
THEOREM [A. S. Markus 1988] Suppose that $\{Z_j\}_{1}^{n}$ is a complete set of operator roots
of the pencil $L(\lambda)$ with $\sigma(Z_i)\cap\sigma(Z_j) = \emptyset$ for $i \neq j$. If the system of root vectors of each of the operators
$Z_j$ ($j = 1,\dots,n$) is complete in $H$, then the system of eigenvectors and associated vectors of
$L(\lambda)$ is $n$-fold complete in $H$.
One global method that delivers the complete set of solvents at the same time is the Q.D. algorithm. Also, the block-power method has
been developed by (Tsai J.S.H. et.al 1988) for finding the solvents and spectral factors of a
general nonsingular polynomial matrix, which may be monic and/or comonic. On the other
hand, without prior knowledge of the eigenvalues and eigenvectors of the matrix, the Newton-
Raphson method (Shieh L.S. et.al 1981) has been successfully utilized for finding the
solvents. In this section we are going to present some of the existing algorithms that can
factorize a linear term from a given matrix polynomial.
In vec form, Newton's method for the right solvent reads
$$\operatorname{vec}(X_{k+1}) = \operatorname{vec}(X_k) - \Big(\sum_{i=0}^{l-1}\sum_{j=0}^{l-i-1}\big(X_k^{T}\big)^{l-i-1-j}\otimes\big(A_iX_k^{\,j}\big)\Big)^{-1}\operatorname{vec}\big(A_R(X_k)\big),$$
and, locally, the iteration converges quadratically:
$$\|X_{k+1} - S\| \le c\,\|X_k - S\|^2, \quad \text{with } c > 0 \text{ and } X_0 \text{ sufficiently close to a solvent } S.$$
EXAMPLE: Let us now consider the generalized Newton's algorithm applied to a given test matrix
polynomial $A(\lambda)$; starting from an initial guess close enough to a solvent, the iterates converge quadratically.
EXAMPLE: Let us now consider the generalized Bernoulli's method applied to the same matrix
polynomial; the block ratios of successive Bernoulli iterates converge to the dominant solvent.
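A minimal sketch of the generalized Bernoulli method for a monic matrix polynomial, assuming the usual dominance condition (one solvent whose eigenvalues strictly dominate all the others); the function name and the starting blocks are our own choices.

    import numpy as np

    def bernoulli_dominant_solvent(A, iters=100):
        # Monic A(s) = I s^l + A[0] s^(l-1) + ... + A[l-1]; block
        # recurrence Y_{k+l} = -(A1 Y_{k+l-1} + ... + Al Y_k); then
        # Y_{k+1} Y_k^{-1} -> dominant right solvent.
        l, m = len(A), A[0].shape[0]
        Y = [np.zeros((m, m)) for _ in range(l - 1)] + [np.eye(m)]
        for _ in range(iters):
            Y.append(-sum(A[i] @ Y[-1 - i] for i in range(l)))
            Y.pop(0)
        return Y[-1] @ np.linalg.inv(Y[-2])

Convergence is linear, with ratio governed by the gap between the dominant and subdominant spectra.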
METHOD III: THE RIGHT BLOCK MATRIX Q.D. ALGORITHM The matrix
quotient-difference (Q.D.) algorithm is a generalization of the scalar one (P. Henrici et.al 1958).
The use of the Q.D. algorithm for this purpose was first suggested by Hariche in
reference (K. Hariche et.al 1987). The scalar Q.D. algorithm is just one of the many global
methods that are commonly used for finding the roots of a scalar polynomial. The quotient-
difference scheme for matrix polynomials can be defined just like the scalar one (Dahimene
A et.al 1992) by a set of recurrence equations. The algorithm consists in building a table that
we call the Q.D. tableau. We start the Q.D. algorithm by considering the following
initialization:
$$Q_1^{(0)} = -A_0^{-1}A_1, \qquad Q_i^{(0)} = 0 \ \ (i = 2,\dots,l)$$
$$E_i^{(0)} = A_i^{-1}A_{i+1} \ \ (i = 1,\dots,l-1), \qquad E_0^{(k)} = E_l^{(k)} = 0$$
Those last two equations provide us with the first two rows of the Q.D. tableau (one row of
Q's and one row of E's). Hence, we can solve the rhombus rules for the bottom element
(called the south element by Henrici P. et.al 1958). We obtain the row generation of the Q.D.
algorithm:
$$\begin{cases} Q_i^{(k+1)} = Q_i^{(k)} + E_i^{(k)} - E_{i-1}^{(k+1)}\\[2pt] E_i^{(k+1)} = \big(Q_i^{(k+1)}\big)^{-1}E_i^{(k)}Q_{i+1}^{(k)} \end{cases}$$
$$A(\lambda) = (\lambda I - \bar Q_1)(\lambda I - \bar Q_2)\cdots(\lambda I - \bar Q_l), \qquad \bar Q_i = \lim_{k\to\infty}Q_i^{(k)},$$
where the $(\lambda I - \bar Q_i)$ are the spectral factors of $A(\lambda)$. In addition, note that the Q.D. algorithm
gives all spectral factors simultaneously and in dominance order. We have chosen, in the
above, the row-generation algorithm because it is more stable numerically. For further
information about the row-generation algorithm and the column-generation algorithm we may
orient the reader to ref (A. Dahimene et.al 1992). Writing the above equations in tabular form
yields
[Q.D. tableau: alternating rows of $Q_i^{(k)}$ and $E_i^{(k)}$ blocks; each new row of south elements is generated from the previous two rows by the rhombus rules.]
EXAMPLE: Consider a matrix polynomial of order 2 and degree 3 with given matrix
coefficients $A_1$, $A_2$, $A_3$. We apply now the generalized row-generation Q.D. algorithm to
find the spectral factors.
    import numpy as np

    def block_qd(A, iters):
        # Row-generation block Q.D. for the monic matrix polynomial
        # A(s) = I s^l + A[1] s^(l-1) + ... + A[l] (A[0] = I, m x m blocks).
        # Returns approximations to the spectral factors, dominance order.
        l, m = len(A) - 1, A[0].shape[0]
        Q = [None] + [np.zeros((m, m)) for _ in range(l)]
        E = [np.zeros((m, m)) for _ in range(l + 1)]    # E[0] = E[l] = 0
        Q[1] = -A[1]                                    # first Q row
        for i in range(1, l):                           # first E row
            E[i] = np.linalg.solve(A[i], A[i + 1])
        for _ in range(iters):
            Qn = [None] + [np.zeros((m, m)) for _ in range(l)]
            En = [np.zeros((m, m)) for _ in range(l + 1)]
            for i in range(1, l + 1):                   # rhombus rules,
                Qn[i] = Q[i] + E[i] - En[i - 1]         # south elements
                if i < l:
                    En[i] = np.linalg.solve(Qn[i], E[i] @ Q[i + 1])
            Q, E = Qn, En
        return Q[1:]                                    # ~ spectral factors
THEOREM [B. Bekhiti 2018] Let $A(\lambda)$ be the matrix polynomial of degree $l$ and
order $m$ defined over the complex field, $A(\lambda) \in \mathbb{C}^{m\times m}[\lambda]$, where the $A_i$ are constant
matrix coefficients and $\lambda$ is a complex variable, $A(\lambda) = \sum_{i=0}^{l}A_i\lambda^{l-i}$. Let $A_R(X_k)$
be the right functional evaluation of $A(\lambda)$ by
the sequence of square matrices $\{X_k\}_{k\ge 0}$. The sequence $X_k$ converges iteratively to an
exact solvent of $A(\lambda)$ iff the iteration
$$\begin{cases} B_0 = A_0, \qquad B_i(X_k) = B_{i-1}(X_k)X_k + A_i \quad (i = 1,\dots,l-1)\\[2pt] X_{k+1} = -B_{l-1}(X_k)^{-1}A_l \end{cases}$$
converges, i.e. $\lim_{k\to\infty}X_k = X$ with $A_R(X) = 0$.
Proof: Using the remainder theorem we can obtain that if we divide $A(\lambda)$ on the right by the
linear factor $(\lambda I - X)$ we get $A(\lambda) = Q(\lambda)(\lambda I - X) + A_R(X)$, which means that:
$$A(\lambda) = \big(B_0\lambda^{l-1} + B_1\lambda^{l-2} + \cdots + B_{l-1}\big)(\lambda I - X) + A_R(X)$$
$$B_0 = A_0, \qquad B_i = B_{i-1}X + A_i, \qquad A_R(X) = B_{l-1}X + A_l.$$
Since $X$ is a right operator root of $A(\lambda)$, this means that $A_R(X) = 0$, and from this last
equation we can deduce that $B_{l-1}X = -A_l$, in other words $X = -B_{l-1}^{-1}A_l$. If
we iterate the last obtained equations we arrive at:
$$\begin{cases} B_0 = A_0, \qquad B_i(X_k) = B_{i-1}(X_k)X_k + A_i\\[2pt] X_{k+1} = -B_{l-1}(X_k)^{-1}A_l \end{cases}$$
Based on the iterative Horner's scheme we can repeat the process as many times as needed to get a
solvent of $A(\lambda)$, and the theorem is proved.
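A minimal sketch of this iterative Horner fixed point (the names are ours; A = [A0, A1, ..., Al]):

    import numpy as np

    def horner_solvent(A, X0, iters=200, tol=1e-12):
        # Build B_{l-1}(X_k) by Horner, then X_{k+1} = -B_{l-1}^{-1} Al.
        X = X0.copy()
        for _ in range(iters):
            B = A[0].copy()
            for Ai in A[1:-1]:               # B_i = B_{i-1} X + A_i
                B = B @ X + Ai
            Xn = -np.linalg.solve(B, A[-1])  # X_{k+1} = -B_{l-1}^{-1} A_l
            if np.linalg.norm(Xn - X) <= tol * max(1.0, np.linalg.norm(X)):
                return Xn
            X = Xn
        return X

As the theorem indicates, the convergence is only linear, with rate governed by the spectral-radius condition below.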
Subtracting the fixed-point relation $X = -B_{l-1}(X)^{-1}A_l$ from the iteration
$X_{k+1} = -B_{l-1}(X_k)^{-1}A_l$ shows that the error $X_k - X$ is multiplied at each step by a
factor built from $B_{l-1}$; the iteration is therefore linearly convergent whenever the spectral
radius of this error-propagation factor is less than one.
The following corollary is an immediate consequence of the above theorem.
COROLLARY If the square matrix $B_{l-1}(X_k)$ is invertible for each given value of $k$, then the sequence
of matrices $\{X_k\}$ converges linearly to the operator root $X$ (i.e. $\lim_{k\to\infty}X_k = X$,
$A_R(X) = 0$) under the following condition:
$$\rho\big[\,\cdot\,\big] < 1, \quad \text{a spectral-radius bound on the error-propagation factor of the iteration.}$$
5  While $\|A_R(X_k)\| > \varepsilon$
6      compute $A_R(X_k)$ and the Jacobian $J_k$ of $A_R$ at $X_k$
7      $\operatorname{vec}(X_{k+1}) = \operatorname{vec}(X_k) - J_k^{-1}\operatorname{vec}\big(A_R(X_k)\big)$
8      test the relative change $\|X_{k+1} - X_k\|\,/\,\|X_k\|$
9      $k \leftarrow k + 1$
10
11 End
$$B_{k+1} = B_k + \frac{\big(\Delta F_k - B_k\Delta X_k\big)\,\Delta X_k^{T}}{\Delta X_k^{T}\Delta X_k}, \qquad B_{k+1}\Delta X_k = \Delta F_k \ \ \text{(secant condition)}$$
Comparing to Newton's method, notice that in the latter we have to provide not only a good
approximation of the solution but also a good approximation of the Jacobian evaluated at the
solution. Note also that the evaluation of the Jacobian is avoided in Broyden's
method. Much more computation can be avoided if we update directly the inverse of the
matrix at each step. This can be accomplished by using the Sherman-Morrison-Woodbury
formula as stated in (J. E. Dennis et al.(1983)).
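A generic sketch of Broyden's method with the inverse approximation updated in place by the Sherman-Morrison formula, so no linear system is solved per step; F, x0 and all the names are our own illustration, not from the source.

    import numpy as np

    def broyden_smw(F, x0, iters=100, tol=1e-10):
        # 'Good' Broyden update applied directly to H_k ~ J(x_k)^{-1}:
        # H <- H + (dx - H df)(dx^T H) / (dx^T H df), so that H df = dx.
        x = np.asarray(x0, dtype=float)
        H = np.eye(x.size)
        f = F(x)
        for _ in range(iters):
            dx = -H @ f
            x_new = x + dx
            f_new = F(x_new)
            df = f_new - f
            Hdf = H @ df
            H = H + np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
            x, f = x_new, f_new
            if np.linalg.norm(f) < tol:
                break
        return x

    # e.g. broyden_smw(lambda v: np.array([v[0]**2 - 2.0, v[1] - 1.0]),
    #                  np.array([1.0, 0.0]))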
MATRIX SIGN FUNCTION: The matrix sign function of a square matrix $A$ with
$\operatorname{Re}(\lambda_i(A)) \neq 0$ is defined by $\operatorname{sign}(A) = 2P_+ - I$, where $I$ is an identity
matrix and $P_+ = \frac{1}{2\pi i}\oint_{\gamma}(zI - A)^{-1}dz$, with $\gamma$ a simple closed contour in the right
half-plane of $\mathbb{C}$ which encloses all the right-half-plane eigenvalues of $A$. If $A$ has a Jordan form
$A = T\,\operatorname{diag}(J_-, J_+)\,T^{-1}$ with $\sigma(J_-) \subset \mathbb{C}_-$ and $\sigma(J_+) \subset \mathbb{C}_+$, then:
$$\operatorname{sign}(A) = T\begin{bmatrix} -I_p & 0\\ 0 & I_q \end{bmatrix}T^{-1}, \qquad p = \dim J_-, \ \ q = \dim J_+.$$
NOTE: Two other representations have some advantages. First is the particularly concise
formula $\operatorname{sign}(A) = A(A^2)^{-1/2}$, which generalizes the scalar formula $\operatorname{sign}(z) = z/(z^2)^{1/2}$.
Next, $\operatorname{sign}(A)$ has the integral representation $\operatorname{sign}(A) = \frac{2}{\pi}A\int_0^{\infty}(t^2I + A^2)^{-1}dt$. Some
properties of $\operatorname{sign}(A)$ are collected in the following theorem.
THEOREM [Nicholas J. Higham 2008] (properties of the sign function) Let $A \in \mathbb{C}^{n\times n}$ have
no pure imaginary eigenvalues and let $S = \operatorname{sign}(A)$. Then
(a) $S^2 = I$ ($S$ is involutory);
(b) $S$ is diagonalizable with eigenvalues ±1;
(c) $SA = AS$; and $\operatorname{sign}(A^T) = \operatorname{sign}(A)^T$;
(d) if $A$ is real then $S$ is real;
(e) $\tfrac12(I + S)$ and $\tfrac12(I - S)$ are projectors onto the invariant subspaces
associated with the eigenvalues in the right half-plane and left half-plane, respectively;
(f) $\operatorname{sign}(TAT^{-1}) = T\operatorname{sign}(A)T^{-1}$ for nonsingular $T$;
(g) $\operatorname{sign}\Big(\begin{bmatrix}0 & A\\ B & 0\end{bmatrix}\Big) = \begin{bmatrix}0 & A(BA)^{-1/2}\\ B(AB)^{-1/2} & 0\end{bmatrix}$, when the indicated square roots exist.
METHOD I: The most widely used and best known method for computing the sign function
is the Newton iteration, due to Roberts:
$$X_{k+1} = \tfrac12\big(X_k + X_k^{-1}\big), \qquad X_0 = A.$$
THEOREM [Nicholas J. Higham 2008] (convergence of the Newton sign iteration). Let $A$
have no pure imaginary eigenvalues. Then the Newton iterates $X_k$ converge quadratically to
$S = \operatorname{sign}(A)$, with
$$\|X_{k+1} - S\| \le \tfrac12\,\|X_k^{-1}\|\,\|X_k - S\|^2.$$
METHOD II: The Newton iteration provides one of the rare circumstances in numerical
analysis where the explicit computation of a matrix inverse is required. One way to try to
remove the inverse from the formula is to approximate it by one step of Newton's method for
the matrix inverse, which has the form $Y_{k+1} = Y_k(2I - BY_k)$ for computing $B^{-1}$; this is
known as the Newton-Schulz iteration. Replacing $X_k^{-1}$ by this approximation gives
$$X_{k+1} = \tfrac12\,X_k\big(3I - X_k^2\big).$$
METHOD III: An effective way to enhance the initial speed of convergence is to scale the
iterates: prior to each iteration, $X_k$ is replaced by $\mu_kX_k$, giving the scaled Newton iteration
$$X_{k+1} = \tfrac12\Big(\mu_kX_k + \mu_k^{-1}X_k^{-1}\Big), \qquad X_0 = A.$$
As long as $\mu_k$ is real and positive, the sign of the iterates is preserved. Three main scalings
have been proposed:
determinantal scaling: $\mu_k = |\det(X_k)|^{-1/n}$;
spectral scaling: $\mu_k = \sqrt{\rho(X_k^{-1})/\rho(X_k)}$;
norm scaling: $\mu_k = \sqrt{\|X_k^{-1}\|/\|X_k\|}$.
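A sketch of Methods I-III (determinantal scaling chosen for Method III; the function names are ours):

    import numpy as np

    def sign_newton(A, scaled=True, iters=100, tol=1e-12):
        # Newton iteration X <- (mu X + (mu X)^{-1})/2, optionally with
        # determinantal scaling mu = |det X|^(-1/n).
        n = A.shape[0]
        X = A.astype(complex)
        for _ in range(iters):
            mu = abs(np.linalg.det(X)) ** (-1.0 / n) if scaled else 1.0
            Xn = 0.5 * (mu * X + np.linalg.inv(mu * X))
            if np.linalg.norm(Xn - X, 1) <= tol * np.linalg.norm(Xn, 1):
                return Xn
            X = Xn
        return X

    def sign_newton_schulz(A, iters=100):
        # Inverse-free variant; converges only locally (||I - A^2|| < 1).
        X = A.astype(complex)
        I = np.eye(A.shape[0])
        for _ in range(iters):
            X = 0.5 * X @ (3 * I - X @ X)
        return X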
REMARK: It should be noted that the first iterated definition of $\operatorname{sign}(A)$ will fail when the
matrix has a zero or purely imaginary eigenvalue. The singularity encountered in $\operatorname{sign}(A)$ can be
avoided by a translation of the spectrum or origin shift, i.e., by adding $\varepsilon I$ to $A$. Assume that
$\varepsilon$ is selected such that $\varepsilon < |\operatorname{Re}(\lambda_i)|$ for all eigenvalues $\lambda_i$ having a nonzero real part, and further,
let $(A + \varepsilon I)$ and $(A - \varepsilon I)$ be nonsingular. Let $\operatorname{sign}(A + \varepsilon I)$ and $\operatorname{sign}(A - \varepsilon I)$ denote the signs
of $A + \varepsilon I$ and $A - \varepsilon I$ respectively. The generalized sign of $A$ is then given by:
$\operatorname{sign}_\varepsilon(A) = \tfrac12\{\operatorname{sign}(A + \varepsilon I) + \operatorname{sign}(A - \varepsilon I)\}$, where the direction of the shift is set by the sign of $\varepsilon$. There is no
assurance that a random choice of $\varepsilon$ will not produce a new singularity; thus care should be
taken in selecting $\varepsilon$.
MATRIX SQUARE ROOT: The matrix square root is one of the most commonly occurring
matrix functions, arising most frequently in the context of symmetric positive definite
matrices. The rich variety of methods for computing the matrix square root, with their widely
differing numerical stability properties, are an interesting subject of study in their own right.
We will almost exclusively be concerned with the principal square root, $A^{1/2}$. Notice that for $A$
with no eigenvalues on $\mathbb{R}^-$ (the closed negative real axis), $A^{1/2}$ is the unique square root of $A$ whose spectrum
lies in the open right half-plane. We will denote by $\sqrt{A}$ an arbitrary, possibly nonprincipal
square root. We note the integral representation $A^{1/2} = \frac{2}{\pi}A\int_0^{\infty}(t^2I + A)^{-1}dt$.
LEMMA Suppose that $X_0$ in the Newton iteration commutes with $A$ and that all the iterates are
well-defined. Then, for all $k$, $X_k$ commutes with $A$ and
$$X_{k+1} = \tfrac12\big(X_k + X_k^{-1}A\big).$$
METHOD II: A coupled version of the Newton iteration can be obtained by defining
$Y_k = A^{-1}X_k$. Then $X_{k+1} = \tfrac12(X_k + Y_k^{-1})$ and $Y_{k+1} = \tfrac12(Y_k + X_k^{-1})$, on
using the fact that $X_k$ commutes with $A$. This is the iteration of Denman and Beavers:
$$X_{k+1} = \tfrac12\big(X_k + Y_k^{-1}\big), \qquad X_0 = A,$$
$$Y_{k+1} = \tfrac12\big(Y_k + X_k^{-1}\big), \qquad Y_0 = I,$$
with $X_k \to A^{1/2}$ and $Y_k \to A^{-1/2}$.
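A direct sketch of the Denman-Beavers iteration, checked against SciPy's sqrtm:

    import numpy as np
    from scipy.linalg import sqrtm

    def sqrt_db(A, iters=60, tol=1e-12):
        # X_k -> A^(1/2), Y_k -> A^(-1/2)
        X = A.astype(complex)
        Y = np.eye(A.shape[0], dtype=complex)
        for _ in range(iters):
            Xi, Yi = np.linalg.inv(X), np.linalg.inv(Y)
            X, Y = 0.5 * (X + Yi), 0.5 * (Y + Xi)
            if np.linalg.norm(X @ X - A, 1) <= tol * np.linalg.norm(A, 1):
                break
        return X

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    print(np.allclose(sqrt_db(A), sqrtm(A)))   # compare with SciPy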
MATRIX SECTOR FUNCTION: Matrix sector functions are widely used in engineering
applications such as the separation of matrix eigenvalues, the determination of $A$-invariant
subspaces, the block diagonalisation of a matrix, and the generalized block partial fraction
expansion of a rational matrix. Also it should be noted that the well-known matrix sign
function of $A$ is a special case of the matrix sector function of $A$: the $n$-sector function is
$$\operatorname{sect}_n(A) = A\,(A^n)^{-1/n},$$
and for $n = 2$ it reduces to $\operatorname{sign}(A) = A(A^2)^{-1/2}$.
The matrix disk function was introduced in the same paper by (J. D Roberts 1980) that
introduced the matrix sign function. It can be used to obtain invariant subspaces in an
analogous way as for the matrix sign function.
MATRIX EXPONENTIAL: The matrix exponential is by far the most studied matrix
function. The interest in it stems from its key role in the solution of differential equations.
Depending on the application, the problem may be to compute $e^A$ for a given $A$, to compute
$e^{tA}$ for a fixed $A$ and many $t$, or to apply $e^A$ or $e^{tA}$ to a vector; the precise task affects the
choice of method. The matrix exponential is defined for $A \in \mathbb{C}^{n\times n}$ by $e^A = \sum_{k=0}^{\infty}A^k/k!$;
another representation is $e^A = \lim_{s\to\infty}\big(I + A/s\big)^s$. This formula is the limit of the first-order
Taylor expansion of $e^{A/s}$ raised to the power $s$. In terms of the Cauchy integral definition
we can also define $e^A = \frac{1}{2\pi i}\oint_{\Gamma}e^z(zI - A)^{-1}dz$. The power series algorithm for the
matrix exponential function can be formulated as follows:
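A sketch of the truncated power series combined with scaling and squaring (production libraries use a tuned Pade-based variant instead):

    import numpy as np

    def expm_series(A, terms=20):
        # e^A = (e^(A/2^s))^(2^s); scale so ||A/2^s|| is small, sum the
        # Taylor series, then square s times.
        s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1),
                                           2.0 ** -52)))) + 1)
        B = A / (2.0 ** s)
        E = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, terms + 1):
            term = term @ B / k          # B^k / k!
            E = E + term
        for _ in range(s):               # undo the scaling by squaring
            E = E @ E
        return E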
$$\dot X(t) = AX(t) + X(t)B + C, \qquad X(0) = X_0.$$
If its right-hand side vanishes for some constant matrix, then this matrix will be a solution in
the following sense: $A\bar X + \bar XB + C = 0$, where $\bar X = \lim_{t\to\infty}X(t)$. In this case the differential
equation has a constant solution $\bar X$. If the matrix $I\otimes A + B^{T}\otimes I$ is stable then we
may represent the solution matrix as $\bar X = \int_0^{\infty}e^{At}Ce^{Bt}dt$. The solution can be obtained
using the Kronecker operator: $\operatorname{vec}(\bar X) = -\big(I\otimes A + B^{T}\otimes I\big)^{-1}\operatorname{vec}(C)$. The numerical
solution of the matrix initial-value problem can be summarized in:
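A minimal sketch of the Kronecker (vec) solution of the constant Sylvester equation above, with a residual check:

    import numpy as np

    def sylvester_kron(A, B, C):
        # vec(X) = -(I (x) A + B^T (x) I)^{-1} vec(C), columns stacked
        n, m = A.shape[0], B.shape[0]
        K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
        x = -np.linalg.solve(K, C.flatten(order='F'))
        return x.reshape(n, m, order='F')

    A = np.array([[-2.0, 1.0], [0.0, -3.0]])
    B = np.array([[-1.0, 0.0], [2.0, -4.0]])
    C = np.eye(2)
    X = sylvester_kron(A, B, C)
    print(np.allclose(A @ X + X @ B + C, 0.0))   # AX + XB + C = 0

For large problems one avoids forming the Kronecker matrix and uses a Bartels-Stewart type solver instead (e.g. scipy.linalg.solve_sylvester).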
FACTS ON SOME MATRIX FUNCTIONS: The standard infinite series for matrix functions (inverse,
logarithm, square root, and the like) converge for $A$ under the corresponding bounds on the spectral
radius $\rho(A)$, or for spectra confined to the open right half plane (ORHP).
A matrix function which is the product of a matrix polynomial in a complex variable $\lambda$
and the inverse of another matrix polynomial in the same variable is defined as a
rational matrix function. A few algebraic theorems on rational matrix functions have been
developed in the literature. A strictly proper rational left $\lambda$-matrix of $l$th degree and $m$th order is
described by $G(\lambda) = \big(D_L(\lambda)\big)^{-1}N_L(\lambda)$, where $D_L(\lambda) = \sum_{i=0}^{l}D_{Li}\lambda^{l-i}$ and
$N_L(\lambda) = \sum_{i=1}^{l}N_{Li}\lambda^{l-i}$. An alternative to $G(\lambda)$, a strictly proper rational right $\lambda$-matrix, is
$G(\lambda) = N_R(\lambda)\big(D_R(\lambda)\big)^{-1}$, where $D_R(\lambda) = \sum_{i=0}^{l}D_{Ri}\lambda^{l-i}$ and $N_R(\lambda) = \sum_{i=1}^{l}N_{Ri}\lambda^{l-i}$. The
rational $\lambda$-matrix $G(\lambda)$ can also be expressed as: $G(\lambda) = \operatorname{adj}[D_L(\lambda)]\,N_L(\lambda)\,/\det(D_L(\lambda))$ and/or
$G(\lambda) = N_R(\lambda)\,\operatorname{adj}[D_R(\lambda)]\,/\det(D_R(\lambda))$. If
$G(\lambda)$ is irreducible, then $D_L(\lambda)$ and $N_L(\lambda)$ are left coprime and $D_R(\lambda)$ and $N_R(\lambda)$ are right
coprime (Kailath 1980). The $\operatorname{adj}[D(\lambda)]$ and $\det(D(\lambda))$ can be found using a recursive
algorithm in Buslowicz (1980). For an irreducible $G(\lambda)$, the roots of $\det D_L(\lambda)$ or $\det D_R(\lambda)$ are
referred to as the poles of $G(\lambda)$. Furthermore, if $G(\lambda)$ is irreducible, then
$\det D_L(\lambda) = \det D_R(\lambda)$ (up to a constant factor) and $\operatorname{adj}[D_L(\lambda)]\,N_L(\lambda) = N_R(\lambda)\,\operatorname{adj}[D_R(\lambda)]$.
■ (State space from Left MFD) A state space realization of a proper rational left $\lambda$-matrix is
given by:
$$A = \begin{bmatrix} -D_1 & I & \cdots & 0\\ -D_2 & 0 & \ddots & \vdots\\ \vdots & \vdots & & I\\ -D_k & 0 & \cdots & 0\end{bmatrix}, \qquad B = \begin{bmatrix} N_1\\ N_2\\ \vdots\\ N_k\end{bmatrix}, \qquad C = \begin{bmatrix} I & 0 & \cdots & 0\end{bmatrix}$$
(for the monic $D_L(\lambda) = I\lambda^k + D_1\lambda^{k-1} + \cdots + D_k$ and $N_L(\lambda) = N_1\lambda^{k-1} + \cdots + N_k$; a direct
term $D$ appears when the MFD is proper rather than strictly proper).
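A small sketch that builds this observer-form block companion realization (names ours):

    import numpy as np

    def realize_left_mfd(D, N):
        # D = [D1, ..., Dk], N = [N1, ..., Nk] (m x m blocks), monic D(s);
        # returns (A, B, C) with C (sI - A)^{-1} B = D(s)^{-1} N(s).
        k, m = len(D), D[0].shape[0]
        A = np.zeros((k * m, k * m))
        for i, Di in enumerate(D):
            A[i * m:(i + 1) * m, :m] = -Di           # first block column
        A[:(k - 1) * m, m:] = np.eye((k - 1) * m)    # shifted identities
        B = np.vstack(N)
        C = np.hstack([np.eye(m)] + [np.zeros((m, m))] * (k - 1))
        return A, B, C

    # spot check at a random point s0:
    # A_, B_, C_ = realize_left_mfd(D, N); s0 = 1.7
    # G1 = C_ @ np.linalg.inv(s0 * np.eye(A_.shape[0]) - A_) @ B_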
■ (Left MFD from state space) A minimal realization of the state space system, described by
{A, B, C, D}, can be represented by a proper rational left $\lambda$-matrix as
$G(\lambda) = \big(D_L(\lambda)\big)^{-1}N_L(\lambda)$, if the state space realization fulfils the dimensional requirement that
the state dimension $n$, divided by the number of channels $m$, equals an integral value $k$. The
matrix coefficients of the rational left $\lambda$-matrix, which all have the dimensions $m \times m$, are
then given by:
$$[D_k\ \ D_{k-1}\ \ \cdots\ \ D_1] = -CA^{k}\,O_k^{-1}, \qquad O_k = \begin{bmatrix} C\\ CA\\ \vdots\\ CA^{k-1}\end{bmatrix}$$
$$N_i = \sum_{t=1}^{i}D_{i-t}\,CA^{t-1}B, \qquad D_0 = I, \quad i = 1,\dots,k$$
(for the strictly proper part; a direct term $D$ contributes separately).
For the general case, the Laplace transform of the singular system, given by $E\dot x(t) = Ax(t) + Bu(t)$ and
$y(t) = Cx(t)$, under zero initial conditions, results in the following generalized transfer
function matrix: $G(s) = C(sE - A)^{-1}B$, with a
characteristic equation of the form $\det(sE - A) = 0$. It can be shown that the transfer
function of linear singular systems, in certain circumstances, cannot be found. This problem is
completely determined by the question of solvability of the singular system. It is obvious
that only regular singular systems can have such a description. If a singular system has no
transfer function, i.e. it is irregular, it may still have a general description pairing, Dziurla,
Newcomb (1987), that is a description of the form $R(s)Y(s) = Q(s)U(s)$, where $Y(s)$ and $U(s)$
are the Laplace transforms of the output and input, respectively. Since irregular systems may
have many solutions or no solutions at all, the question arises as to whether we would meet them in
practice.
DEFINITION: The pair $(E, A)$ is said to be regular if $\det(sE - A)$ is not identically zero.
The pair $(E, A)$ is said to be impulse-free if $\deg\det(sE - A) = \operatorname{rank}(E)$. The pair $(E, A)$
is said to be stable if all the roots of $\det(sE - A) = 0$ have negative real parts. The pair
$(E, A)$ is said to be admissible if it is regular, impulse-free and stable.
LEMMA: (L. Dai. et.al 1989) The pair $(E, A)$ is regular if and only if there exist two
nonsingular matrices $M$ and $P$ such that: $MEP = \operatorname{diag}(I_{n_1}, N_f)$; $MAP = \operatorname{diag}(A_1, I_{n_2})$; where
$N_f$ is a nilpotent matrix which satisfies the following properties: $n_1 + n_2 = n$,
$n_1 = \deg\det(sE - A)$, and $N_f^{\,h} = 0$ for the nilpotency index $h$. This system decomposition is called
the (Slow-Fast Decomposition).
LEMMA (L. Dai. et.al 1989) Suppose that the pair ( ) is regular, and two nonsingular
matrices are found such that slow-fast decomposition holds, then we have:
In the next section, an algorithm is derived which can be programmed on a computer for the
computation of $(\lambda E - A)^{-1}$ (Dragutin Lj. Debeljković 2004). Firstly, to find a $\lambda_0$ so that the matrix
$(\lambda_0E - A)$ is invertible, a number which is not a root of the polynomial $\det(\lambda E - A)$ must
be found. The term $(\lambda_0E - A)^{-1}$ can easily be evaluated using a computer, since for
constant $\lambda_0$, $(\lambda_0E - A)$ is a given known constant matrix of appropriate dimension. The following
change of variable is introduced: $\lambda = \lambda_0 + \mu$. The inverse of the matrix $(\lambda E - A)$, where $\det E = 0$,
is given by the following formula:
$$(\lambda E - A)^{-1} = \big\{(\lambda - \lambda_0)\hat E + I\big\}^{-1}(\lambda_0E - A)^{-1}$$
With
1. $\lambda_0$ chosen so that $\det(\lambda_0E - A) \neq 0$;
2. $\hat E = (\lambda_0E - A)^{-1}E$;
3. $\hat A = (\lambda_0E - A)^{-1}A$ (so that $\hat A = \lambda_0\hat E - I$);
4. the Drazin inverse $\hat E^{\,D}$ computed;
5. the nilpotent part $\hat E\big(I - \hat E^{\,D}\hat E\big)$ handled via a finite Neumann-type sum.
As an illustration, taking a simple singular pair $(E, A)$ with $\det E = 0$, relation (??) takes the
form above; applying the algorithm, one evaluates $\hat E$, its Drazin inverse $\hat E^{\,D}$, and finally
$(\lambda E - A)^{-1}$ in closed form as a rational matrix in $\lambda$.
REMARK The linearization is not unique (D.S. Mackey et.al 2006, F. Tisseur et.al 2001).
Most of the linearizations used in practice are of the companion forms $\lambda X + Y$ with
$$X = \begin{bmatrix} A_l & & & \\ & I & & \\ & & \ddots & \\ & & & I\end{bmatrix}, \qquad Y = \begin{bmatrix} A_{l-1} & A_{l-2} & \cdots & A_0\\ -I & 0 & \cdots & 0\\ & \ddots & & \vdots\\ 0 & & -I & 0\end{bmatrix}.$$
THE FINITE AND INFINITE SPECTRAL DATA When the leading coefficient matrix
of $P(\lambda)$ is singular, the degree $r$ of $\det P(\lambda)$ satisfies $r < lm$, and $P(\lambda)$ has $r$ finite eigenvalues, to which we
add $lm - r$ infinite eigenvalues. Infinite eigenvalues correspond to the zero eigenvalues
of the reverse polynomial $\lambda^lP(1/\lambda)$.
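A quick numerical sketch: the finite and infinite eigenvalues of $P(\lambda) = \sum_i A_i\lambda^i$ can be read off a companion pencil; generalized eigensolvers report the infinite eigenvalues as inf when the leading block is singular (the function name polyeig is ours).

    import numpy as np
    from scipy.linalg import eig

    def polyeig(coeffs):
        # Companion pencil s*X - Y for P(s) = sum_i coeffs[i] s^i;
        # a singular leading coefficient yields infinite eigenvalues.
        l, m = len(coeffs) - 1, coeffs[0].shape[0]
        Y = np.zeros((l * m, l * m))
        Y[:-m, m:] = np.eye((l - 1) * m)
        for i in range(l):
            Y[-m:, i * m:(i + 1) * m] = -coeffs[i]
        X = np.eye(l * m)
        X[-m:, -m:] = coeffs[l]
        w, _ = eig(Y, X)
        return w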
LEMMA (Lazaros Moysis et.al 2014) Let $P(\lambda)$ be a general matrix polynomial. Let also $\mu$, $\nu$
be the sums of degrees of the finite and infinite elementary divisors of $P(\lambda)$ respectively; then,
for a regular $P(\lambda)$ of degree $l$ and order $m$, $\mu + \nu = lm$.
The finite Jordan pair (Gohberg et al., 1982, Chapter 7) of the reversal $\lambda^lP(1/\lambda)$ corresponding to the zero
structure at $\lambda = 0$ is defined as the infinity Jordan pair $(C_\infty, J_\infty)$ of $P(\lambda)$; as a result, $J_\infty$ is nilpotent.
Furthermore, the structure of the infinity Jordan pair of $P(\lambda)$ is closely related (Vardulakis,
1991) to its Smith-McMillan form at $\lambda = \infty$: the sizes of the Jordan blocks of $J_\infty$ are determined by
the orders of the poles and zeros of $P(\lambda)$ at infinity.
THEOREM (Gohberg et al., 2009) Let $(C_F, J_F)$ and $(C_\infty, J_\infty)$ be the finite and infinite Jordan
pairs of $P(\lambda)$, with $C_F \in \mathbb{C}^{m\times\mu}$, $J_F \in \mathbb{C}^{\mu\times\mu}$, $C_\infty \in \mathbb{C}^{m\times\nu}$, $J_\infty \in \mathbb{C}^{\nu\times\nu}$, and $\mu + \nu = lm$. These
pairs satisfy the following properties:
1. $\sum_{i=0}^{l}A_iC_FJ_F^{\,i} = 0$ and $\sum_{i=0}^{l}A_iC_\infty J_\infty^{\,l-i} = 0$;
2. the block matrix
$$\begin{bmatrix} C_F & C_\infty J_\infty^{\,l-1}\\ C_FJ_F & C_\infty J_\infty^{\,l-2}\\ \vdots & \vdots\\ C_FJ_F^{\,l-1} & C_\infty \end{bmatrix}$$
is invertible (the decomposable-pair condition).
REMARK
When the leading coefficient matrix is singular, the Jordan pair $(C, J)$ is decomposed into a
finite Jordan pair $(C_F, J_F)$ corresponding to the finite eigenvalues and an infinite Jordan pair
$(C_\infty, J_\infty)$ corresponding to the infinite eigenvalues, where $J_\infty$ is a Jordan matrix formed of
Jordan blocks with eigenvalue $0$.
THEOREM (George Fragulis 1993) Let $P(\lambda)$ be a regular matrix polynomial and let
$(C_F, J_F)$ and $(C_\infty, J_\infty)$ be its finite and infinite Jordan pairs, respectively. Then the
pair $([C_F\ \ C_\infty],\ J_F \oplus J_\infty)$ is a decomposable pair for $P(\lambda)$. In particular, the resolvent
admits the representation
$$P(\lambda)^{-1} = C_F(\lambda I - J_F)^{-1}Y_F + C_\infty(\lambda J_\infty - I)^{-1}Y_\infty,$$
where the matrices $Y_F$ and $Y_\infty$ are determined by the decomposable pair.
The associated core realization $\{E_c, A_c, B_c, C_c\}$ for a right MFD $G(\lambda) = N_R(\lambda)D_R(\lambda)^{-1}$ is built
from the block companion form of $D_R(\lambda)$ together with the coefficient blocks of $N_R(\lambda)$.
In the above core realization, $I_{km}$ denotes the $km$-dimensional identity matrix. If the given
right MFD is column proper, then the core realization reduces to a standard state-space realization
with nonsingular $E_c$. However, when the MFD is column pseudoproper, the remaining blocks are
arbitrarily chosen based on the column degrees of $D_R(\lambda)$ and $N_R(\lambda)$. With the definitions of the
above core realization, we can obtain a generalized realization for the given system in the following way:
THEOREM (Jason S. H. Tsai 1992) The triple $\{E, A, C\}$ is observable at finite modes, i.e.
$\begin{bmatrix}\lambda E - A\\ C\end{bmatrix}$ has full rank for all finite $\lambda$, if and only if the polynomial matrices
$D_R(\lambda)$ and $N_R(\lambda)$ are right coprime. For the convenience of describing the observability at infinite
modes, we decompose $C$ into two submatrices $C_1$ and $C_2$, i.e. $C = [C_1\ \ C_2]$, and
then we also write the result as the following. If the compound matrix formed from $E$ and $C_2$
has full rank, then the realization above is observable at infinite frequencies.
Direct computation confirms that
$$G(\lambda) = C(\lambda E - A)^{-1}B = N_R(\lambda)D_R(\lambda)^{-1},$$
so the core realization indeed realizes the given right MFD.
Note that the quadruple $\{E, A, B, C\}$ is in quasi-controller canonical form; since the given
MFD is right coprime and the rank conditions above hold, the realized system is observable at
finite and infinite modes.
Now, the realization for a given row-pseudoproper (row-proper) rational transfer matrix in the
left MFD is derived in the following. Dually, the core realization $\{E_c, A_c, B_c, C_c\}$ for a left MFD
$G(\lambda) = D_L(\lambda)^{-1}N_L(\lambda)$ is built from the block companion form of $D_L(\lambda)$ together with the
coefficient blocks of $N_L(\lambda)$.
In the above core realization, $I_{km}$ denotes the $km$-dimensional identity matrix. If the given
left MFD is row proper, then the core realization reduces to a standard state-space realization.
However, when the MFD is row pseudoproper, the remaining blocks are arbitrarily chosen based on
the row degrees of $D_L(\lambda)$ and $N_L(\lambda)$. With the definitions of the above core realization, we can
obtain a generalized realization for the given system in the following way:
THEOREM (Jason S. H. Tsai 1992) The triple $\{E, A, B\}$ is controllable at finite modes, i.e.
$[\lambda E - A\ \ B]$ has full rank for all finite $\lambda$, if and only if the polynomial matrices
$D_L(\lambda)$ and $N_L(\lambda)$ are left coprime. For the convenience of describing the controllability at infinite
modes, we decompose $B$ into two submatrices $B_1$ and $B_2$, i.e. $B = \begin{bmatrix}B_1\\ B_2\end{bmatrix}$, and
then we also write the result as the following. If the compound matrix formed from $E$ and $B_2$
has full rank, then the realization above is controllable at infinite frequencies.
Direct computation confirms that
$$G(\lambda) = C(\lambda E - A)^{-1}B = D_L(\lambda)^{-1}N_L(\lambda).$$
Note that the quadruple $\{E, A, B, C\}$ is in the quasi-observer canonical form; since the given
MFD is left coprime and the rank conditions above hold, the realized system is controllable at
finite and infinite modes.