0% found this document useful (0 votes)
61 views39 pages

Matrix Polynomials

This document provides an overview of matrix theory concepts that will be used in engineering applications, including matrix functions, Drazin inverses, and matrix polynomials. Key topics covered include definitions of the index of a matrix, nilpotent matrices, core-nilpotent decompositions, and the Drazin inverse. The Drazin inverse generalizes the inverse of a matrix and can be defined using a core-nilpotent decomposition, with properties including being the unique solution to consistent systems of linear equations.

Uploaded by

edgowhar
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
61 views39 pages

Matrix Polynomials

This document provides an overview of matrix theory concepts that will be used in engineering applications, including matrix functions, Drazin inverses, and matrix polynomials. Key topics covered include definitions of the index of a matrix, nilpotent matrices, core-nilpotent decompositions, and the Drazin inverse. The Drazin inverse generalizes the inverse of a matrix and can be defined using a core-nilpotent decomposition, with properties including being the unique solution to consistent systems of linear equations.

Uploaded by

edgowhar
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 39

See discussions, stats, and author profiles for this publication at: https://www.researchgate.

net/publication/328066012

Lecture Notes on Matrix Theory in Control Engineering

Preprint · October 2018


DOI: 10.13140/RG.2.2.26319.33441

CITATIONS READS

0 1,391

1 author:

Belkacem Bekhiti
Saad Dahlab University
58 PUBLICATIONS 118 CITATIONS

SEE PROFILE

Some of the authors of this publication are also working on these related projects:

Algebraic Theory of Matrix Polynomials and its Applications in Multivariable Aeronautical Systems View project

Multivariable linear control system design using the theory of matrix polynomials View project

All content following this page was uploaded by Belkacem Bekhiti on 04 October 2018.

The user has requested enhancement of the downloaded file.


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

The mathematical prerequisites for reading this monograph are a working knowledge of linear
algebra and matrix theory and a familiarity with analysis including complex variables. In this
introductory we present the notation we will use and review some of the concepts in matrix
theory. In the remainder of the chapter we treat some of the more specialized topics that will
be used in engineering applications: matrix functions, Drazin inverses, matrix polynomials.

MATRIX THEORY CONCEPTS AND NOTATIONS The set of complex numbers is


defined by , the set of real numbers is defined by , the set of matrices over is
denoted by , the set of matrices over is denoted by . Unless stated
otherwise, all matrices will be in . The column vector in the vector space
will be dented If , we use for the conjugate transpose of . For
vectors we employ the usual inner product ( ) the norm of vector
is the Euclidean norm, ‖ ‖ ( ) . For matrices , we use the operator
norm ‖ ‖ *‖ ‖ ‖ ‖ +. A subspace of is called the invariant subspace or
-invariant if for every . If is a subspace of , denotes the
dimension of . If the range (column space) of is denoted by ( ) and the null
space of , * + by ( ). Recall that ( ) ( ) . Let
be subspaces of , the sum of these subspaces is the subspace
* +. If * + for , the subspaces are said to be
independent, the sum is then called a direct sum and we write . Recall that
( ) and if , then there exist
unique such that . A projection is a matrix such that
. It is easily seen that ( ) ( ) . Conversely if there exists a
unique projection such that ( ) and ( ) ; we denote this projection
by , the projection onto along . If is a subspace of , the orthogonal
complement of is * ( ) +. is a subspace and
. is denoted by . If , there exists a unique matrix
which satisfies the equations: and also ( )

( ) . The matrix is the Moore-Penrose (Generalized) inverse of . If is


consistent, then is a solution (in fact the solution of minimal norm) and all solutions are
given by ( ) , where is arbitrary. Also the Moore-Penrose
pseudoinverse inverse can be defined in term of limits: ( )
( ) . We shall often make use of block matrices. In particular, if
, is block diagonal, that is has blocks along the main diagonal and zero
blocks elsewhere, we write ( ) or ( ). The
eigenvalues of are the roots of the polynomial ( ) ( ). The spectrum
of is the set of eigenvalues of and is denoted by ( ). The spectrum radius of is
( ) *| | ( )+ . If is a root of multiplicity of ( ), we say that is an
eigenvalue of of algebraic multiplicity . The geometric multiplicity of is the number of
associated independent eigenvectors ( ) ( ). If ( )
has algebraic multiplicity , then ( ) . Thus, if we denote the
geometric multiplicity of by , then we must have . A matrix is said
to be defective if it has an eigenvalue whose geometric multiplicity is not equal to (i.e., less

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

than) its algebraic multiplicity. Equivalently, is said to be defective if it does not have
linearly independent (right or left) eigenvectors.

If we say that is similar to in case there exists a nonsingular matrix such


that . Similar matrices represent the same linear transformation or operator
on but with respect to different bases. We shall also make use of the fact that every
is similar to a matrix in Jordan canonical form, that is similar to

( ) and [ ] . There may be more than one such block

corresponding to the eigenvalue . The numerical range of is ( )


*( ) ‖ ‖ +, and the numerical radius of is ( ) *| | ( )+ . ( ) is a
compact convex set which contains ( ). In general ( ) may be larger than the convex hull
of ( ). However, it is possible to find an invertible matrix such that ( ) is as close
as desired to the convex hull of ( ). A matrix is positive semi-definite if
( ) for all . If is a positive semi-definite, then it has a unique positive semi-
definite square root which we denote it . That is ( ) .

THE INDEX OF MATRIX AND DRAZIN INVERSE: There are many types of inversion
of square matrices, such as a group invers, pseudo invers and the generalized inverse. We now
introduce a different type of generalized inverse (i.e. Drazin invers), which applies only to
square matrices yet is more useful in certain applications. The Drazin invers can be easily
defined using the Jordan canonical form. However, because this concept is important in our
work we shall provide as self-contained development that will also serve to review some basic
techniques.

DEFINITION If is an matrix of complex numbers, then the of , denoted


by ( ), is the smallest nonnegative integer such that: ( ) ( ). For
nonsingular matrices, ( ) . For singular matrices, ( ) is the smallest positive
integer such that either of the following two statements is true:

( ) ( )
( ) ( )

DEFINITION (Nilpotent Matrices) is said to be nilpotent whenever for some


positive integer . ( ) is the smallest positive integer such that . (Some
authors refer to ( ) as the index of nilpotency.)

DEFINITION (Core-Nilpotent Decomposition) If is an singular matrix of index


such that ( ) , then there exists a nonsingular matrix such that:
0 1 in which is nonsingular, and is nilpotent of index k. This last block-
diagonal matrix is called a core-nilpotent decomposition of

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

NOTE: When is nonsingular, and , so is not present, and we can set


and

DEFINITION (Drazin Inverse) Inverting the nonsingular core and neglecting the
nilpotent part in the core-nilpotent decomposition, produces a natural generalization of
matrix inversion. More precisely, if : 0 1 0 1
defines the Drazin inverse of . Even though the components in a core-nilpotent
decomposition are not uniquely defined by , it can be proven that is unique and has the
following properties:

, when is nonsingular (the nilpotent part is not present).


, where ( ).
is a consistent system of linear equations in which ( ) then
is the unique solution that belongs to ( )
is the projector onto ( ) along ( ), and is the complementary
projector onto ( ) along ( )
4 5 ( ) and ( ) ( )

( ) , with ( ).

THEOREM: [Campbell S.L 1980] If and is an eigenvalue of of multiplicity ,


then is an eigenvalue of of multiplicity . If is an eigenvalue of of
multiplicity , then is an eigenvalue of of multiplicity .

THEOREM: [Campbell S.L 1980] If , then the Drazin inverse is a polynomial


in of degree or less.

FUNCTION OF MATRIX: [Campbell S.L 1980] Throughout this section we use the
following notation: for let its characteristic polynomial is ( ) ( ) (
) ( ) where the eigenvalues are distinct and . Let
( ) and (( ) ). We know that is an invariant subspace for
and . We also know that: ( )( ) is a projection on .
Since and are polynomials in we have . Other properties of and are

THEOREM: [Campbell S.L 1980] Using the notation above: (a) { }


(b) (c) (d)

THEOREM: [Campbell S.L 1980] If ( ) is a polynomial of degree , then

( )
( )
( ) ∑∑ ( ) ( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM: [Campbell S.L 1980] For any with spectrum ( ), let ( ) denote
the class of all functions f : C → C which are analytic in some open set containing ( ). For
any scalar function ( ), the corresponding matrix function f(A) is defined by:

( )
( )
( )
( ) ∑ ∑ ( ) ( )
( )

The analogous result for the Drazin inverse is:


( )
( )
∑ ∑ ( ) ( )
( )

This shows that the Drazin inverse is the matrix function corresponding to the
reciprocal ( ) , defined on nonzero eigenvalues.

THEOREM: [Campbell S.L 1980] For any with spectrum ( ), let be an


analytic functions in some open set containing ( ), then ( ) ( ) if and only if
( )( ) ( )
( ) , for and . In particular, If ( ) is a
( )( ) ( )
polynomial such that ( ) , for and then
( ) ( )

EXAMPLE: If has ( ) * + we have ( ) ( ) The Drazin


inverse can be computed using the function ( ) , therfore when we apply the last
theorem we arive at: ( ) and ( ) with
, this means that . Thus ( ).

REPRENTAION OF MATRIX FUNCTION BY CONTOUR INTEGRAL: In some of


the later works in engeeniring it will be very helpful to use representations of matrix functions
by contour integals. Recall that if is analytic in and on simple closed rectifiable curve or
contour, , then ∫ ( ) . Also if is in interior of , then
( ) ( )(
( )
( ) ∫ ) ∫ ( )
( )
We obtaine similar representations for functions of matrices, if , the matrix function
( ) is called resolvent of . It is analytic for ( ). If the characteristic
polynomial of is ( ) ( ) ( ) ( ) where the eigenvalues are
distinct and . Then the spectral representation of resolvent is:

( )
( ) ∑ ∑ ( ) ( )
( )

Where ( )( ) is a projection.

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM: [Campbell S.L 1980] If and is analytic function for | |


and ( ) , then

( ) ∫ ( )( ) ( )

Where is a contour lying in the disk | | and enclosing all the eigenvalues of .

Consider a matrix with spectra ( ) * + where . If a scalar


function ( ) is analytic at then the matrix function ( ) generated by
( ) ∫ ( )( ) where is a simple closed contour which encloses
.The matrix function described by contour integral has the following properties

LEMMA [L.S. Shieh 1983] Let be defined as above and ( ), ( )and ( )are analytic at
( ), , then
(i) if ( ) , then ( )
(ii) if ( ) , then ( ) and ( ) ( )
(iii) if ( ) ( ) ( ), then ( ) ( ) ( )
(iv) if ( ) ( ) ( ), then ( ) ( ) ( ) ( ) ( )
(v) if ( ) ( ( )) and ( ) is analytic at ( ), and ( ), then ( ) ( ( )).
(vi) ( ) ( ), where is the Kronecker product;
(vii) ( ) ( ) , ( ) * ( )+ ( ) ( )

COROLLARY: [Campbell S.L 1980] If ( ) where are disjoint sets of


eigenvalues and is a contour containing in its interior and in its exterior, then

∫ ( )( ) ( ) ( )

∑ * ( )( ) + ∫( ) ( )

REMARKS: If and then ( ) . If and


( ) , where are sequare matrices, then ( ) . If
, then ( ) . Finally, ∫ ( ) where
encloses all the nonzeros eigenvalues of . Also, the following statements hold:

i) ( ) ( ) .
ii) ( ) ( ) ( ) ( ).
iii) ( ) ( ) ( ) ( ).
iv) .
v) is the idempotent matrix onto ( ) along ( ).
vi) if and only if is nilpotent.
vii) ( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

MATRIX POLYNOMIALS (λ-MATRICES): By a matrix polynomial ( ) , we mean a


matrix of the form ( ) [ ( )], where all elements ( ) are polynomials in , -,
especially also in , -or , -. The set of these matrices will be designated by , -, or
symbolized directly by , - (resp. , -), and their subsets containing the constant
matrices , are denoted by ( or ,respectively). The matrices in
and , - are called real matrices. An alternative formulation of matrix polynomials (Are
also called sometimes -matrices) is:

( ) (8)

Where the coefficients are constant matrices in . The matrix is named the highest
coefficient or leading matrix coefficient of the matrix polynomial ( ). If is true,
then the number is called the degree of the matrix polynomial ( ), and it is designated
by ( ), and the number is called the order of the matrix polynomial ( ) is a
complex variable.

DEFINITION The matrix polynomial ( ) is called: Monic if leading matrix coefficient


is the identity matrix. Comonic if the trailing matrix coefficient is the identity matrix.
Regular if ( ) . Nonsingular if ( ( )) is not identically zero. Unimodular if
( ( )) is a nonzero constant. Co-regular if the trailing matrix coefficient is also non-
singular. Anomalous if leading matrix coefficient satisfies ( ) .

REMARK If the leading matrix coefficient is non singular but not an identity matrix, then
( ) can be multiplied by to get a monic matrix polynomial. In case is singular and,
if ( ( )) for any thus is non-singular, then ( ) can be reversed to make a new
non-singular .

DIVISION OF MATRIX POLYNOMIALS: Suppose that ( ) is a matrix polynomial of


degree with an invertible leading coefficient and that there exist matrix polynomials ( ),
( ), with ( ) or the degree of ( ) less than , such that ( ) ( ) ( ) ( ).
If this is the case we call ( ) a right quotient of ( ) on division by ( ) and ( ) is a right
remainder of ( ) on division by ( ). Similarly, ( ), ( ) are a left quotient and left
remainder of ( ) on division by ( ) if ( ) ( ) ( ) ( ) and ( ) or the
degree of ( ) less than . If the right remainder of ( ) on division by ( ) is zero, then
( ) is said to be a right divisor of ( ) on division by ( ). A similar definition applies for
a left divisor of ( ) on division by ( ). The right quotient, right, remainder, left quotient,
and left remainder are each unique.

THEOREM [P. Lancaster 1985] The right and left remainders of a matrix polynomial ( )
division by are ( ) and ( ), respectively.

( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )

An matrix such that ( ) (respectively, ( ) ) is referred to as a


right (respectively, left) solvent of the matrix polynomial ( )).

COROLLARY The matrix polynomial ( ) is divisible on the right (respectively, left)


by ( ) with zero remainder if and only if is a right (respectively, left) solvent of ( ).

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

DEFINITION [K. Hariche 1987] Given ( ) as defined above, an matrix


[respecively ] is a right [respectively left] solvent of ( ) of multiplicity if ( )
[respectively ( ) divides exactly ( ) on the right [respectively on the left].

( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )

Where ( ), ( )- is right [resp. left] functional evaluation of the matrix


polynomial ( ), and ( ) , ( ) - iff [resp. ] is a right [resp. left]
solvent of ( ).

PROPOSITION [K. Hariche 1987] An matrix [resp. ] is a right [resp. left]


solvent of ( ) with multiplicity if and only if [resp. ] is a right [resp. left] solvent
( )( )
of for , where ( ) ( ) represents the derivative of ( ) with
respect to .

DEFINITION [K. Hariche 1989] Let ( ) be a monic matrix polynomial with right solvents
of order (i.e. ( ) is a right divisor of ( ) and ( )
isn't). Assume that ( ) then the generalized block
Vandermonde matrix

, [ 4( ) 5 ] - . / ( )

THEOREM [K. Hariche 1989] Given a -matrix ( ), a complete set of right solvents
* + with multiplicities * + will exists if and only if the generalized
block Vandermonde matrix is invertible.

EXAMPLE Consider the -matrix

( ) . / . / . / . /

It can readily be verified that ( ) has solvents

. / . /

The generalized block Vandermonde matrix is given by:

( )
( ) , -( ) [ ]
[ ( )]
{ , - [ ]

The generalized block Jordan matrix can be obtained from block companion form matrix
associated with ( ).

( ) ( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

EIGENSTRUCTURE OF MATRIX POLYNOMIALS: A complex number is an


eigenvalue (latent-value) of ( ) if the equation ( ) has a nonzero solution .
The vector is known as eigenvector (primary latent-vector) of ( ) corresponding to
and the vectors are called associated vectors (generalized latent-vectors) of ; if
( )
∑ ( ) The system is said to be a Jordan
chain of ( ) corresponding to the Eigenvalue and leads to solutions of the differential
equation ∑ . / ( ) The set of all eigenvalues of ( ) is called the spectrum of
( ) such that: , ( )- * ( ) +.

REMARK If is a finite latent root of ( ( ) ), then ⁄ is a finite latent root of


( ). If ( ( ) ) has a zero latent root ( is singular), then ( ) is said to have an
unbounded latent root (i.e. infinite). A lambda-matrix ( ) is said to be degenerate (i.e
singular) if ( ) for all . This can only occur if and are singular (John E.
Dennis 1971).

DIVISORS: Consider the matrix polynomial ( ) in (8). A matrix polynomial ( )


∑ ( ) is called right divisor of ( ) if there exists a matrix polynomial
( ) such that:
( ) ( ) ( ) (12)

If in addition , ( )- , ( )- , then ( ) is called spectral divisor of ( ) For


we have ( ) where ( ) ⨂ ( ) and is the companion matrix of ( )
If the linear pencil ( ) is a (spectral) right divisor of ( ) then the matrix is called
(spectral) right root of ( ) and satisfies the equality ( ) . For a spectral divisor ( )
of ( ) it is well known that the Jordan chains of ( ) are also Jordan chains of
( ) corresponding to the same eigenvalues.

Invariant pairs: Let ( ) be an matrix polynomial. A pair ( ) ,


and , is called an invariant pair if it satisfies the relation:
( ) (13)

Where , and is an integer between and

REMARK Infinite eigenvalues can still be covered by defining invariant pairs for the
reversal polynomial (8). If a polynomial has zero and infinite eigenvalues, they have to be
handled by separate invariant pairs, one for the original and one for the reverse polynomial.

DEFINITION [Gohberg, Lancaster, and Rodman (2009)] A pair ( ) , is


called minimal if there is such that:

( ) ( ) [ ] (14)

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

has full rank. The smallest such is called minimality index of ( ). The invariant pairs
( ) is said to be minimal if 0 1

DEFINITION [T. Betcke (2011)] An invariant pair ( ) for a regular matrix polynomial
( ) of degree is called simple if ( ) is minimal and the algebraic multiplicities of the
eigenvalues of are identical to the algebraic multiplicities of the corresponding eigenvalues
of ( )

REMARK Invariant pairs are closely related to the theory of standard pairs presented in the
very well-known book (I. Gohberg, P. Lancaster, and L. Rodman 2009), and, in particular, to
Jordan pairs. If ( ) is a simple invariant pair and is in Jordan form, and then ( ) is a
Jordan pair. Polynomial eigenpairs and invariant pairs can also be defined in terms of a
contour integral. Indeed, an equivalent representation for (13) is the following.
PROPOSITION [C. Effenberger (2013)] A pair ( ) is an invariant pair if
and only if satisfies the relation:
( ) ∮ ( ) ( ) ( )

Where is a contour with the spectrum of in its interior. This formulation allows us to
choose the contour to compute with eigenvalues lying in a particular region of the
complex plane.

Matrix Solvents: In this section we study the matrix solvent problem as a particular case
of the invariant pair problem, and we apply to solvents some results we have obtained for
invariant pairs.

DEFINITION [Malika Yaici (2014)] Let ( ) be an matrix polynomial. A matrix


is called a (right) solvent for ( ) if satisfies the right functional
evaluation: ( ) ∑ The relation between eigenvalues of ( ) and solvents
is highlighted in (P. Lancaster 2002): a corollary of the generalized Bézout theorem states that
if is a solvent of ( ), then: ( ) ( )( ) where ( ) is a matrix polynomial
of degree . Then any finite eigenpair of the solvent is a finite eigenpair of ( ).

PROPOSITION [P. Lancaster (2002)] is a solvent if and only if

( ) ∮ ( )( ) ( )
Where is a contour with the spectrum of in its interior.

THEOREM [Malika Yaici (2014)] Suppose ( ) has distinct eigenvalues* + ;


with , and that the corresponding set of eigenvectors * + ; satisfies the
Haar’s condition (every subset of of them is linearly independent). Then there are at least
. / different solvents of ( ), and exactly this many if , which are
( )
given by:
( ) , - ( )

Where the eigenpairs * + are chosen among the eigenpairs * + of ( ).

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM Let ( ) be a matrix polynomial and consider an invariant pair ( )


of ( ) (sometimes called admissible pairs). If the matrix has size ,
i.e. , and is invertible, then satisfies ( ) i.e. is a matrix solvent of
( ).

Proof: As ( ) is an invariant pair of ( ), we have: ( )


∑ . Since is invertible, we can post-multiply by . Then we get:

( )

Therefore, is a matrix solvent of ( ).

REMARK If invariant pair ( ) is finite, then the matrix is a solvent


to the reversal matrix polynomial ( ( )).

PAIRS OF MATRIX POLYNOMIALS: In the last section, a language and formalism have
been developed for the full description of eigenvalues, eigenvectors, and Jordan chains of
matrix polynomials. In this section, pairs of matrices will be introduced which determine the
spectral information about a matrix polynomial. A monic matrix polynomial ( ) is uniquely
determined by its Jordan pairs (or eigenpairs) which are a pair of matrices( ) , where is
a Jordan block of size and are formed by the respective Jordan chain (generalized
eigenvectors), this pair satisfies the equation ( ) and a solvent is always a suitable
combination of these eigenpairs. Observe that when a eigenpair is just an eigenvector
and respective eigenvalue (latent vector and latent value or root). Then, the fact that
eigenvalues and the eigenvetors of a solvent are also from ( ).

DEFINITION [Nir Cohen (1983)] A pair of matrices ( ) is said to be an admissible pair if


there exists an integer ,( ) and matrices
such that:

, - [ ] ( )

To each admissible pair ( ) we can associate the matrices

( ) [ ] ( )

The pair ( ) will be called a standard pair of order if Its main property is
summarized in the following

LEMMA [Nir Cohen (1983)] The admissible pair ( ) is standard iff rank( ) is maximal
for all (The matrix is sometimes called controllable matrix)

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

DEFINITION [Nir Cohen (1983)] An admissible pair ( ) is called a standard pair of


order for ( ) if :

1) ∑ and ∑ or in the equivalent form


2) , -

Another equivalent definition is given in the following result:

LEMMA [Nir Cohen (1983)] The admissible pair ( ) is standard of order for ( ) iff
( ) is standard of order and the equation 0 1 ( ) ( ) holds, where
( ) is the companion linearization of ( ) [ ∑ ]
And ( ) ( ) ( ).

THEOREM [Nir Cohen (1983)] The matrix polynomial ( ) ( ) ( ) is a


linearization of the regular matrix polynomial ( ) iff there exists a matrix such that ( )
is a standard pair for ( ) , where

Jordan Pairs: Let be an eigenvalue of the regular matrix polynomial ( ) of


multiplicity . Then by the analysis in (I. Gohberg, P. Lancaster, and L. Rodman 1978) we
can construct from the spectral data of ( ) a pair of matrices ( ) with the following
properties: and is a Jordan matrix with as its only eigenvalue.
∑ and 0 ( ) 1 We shall say that any pair of
matrices( ) with these properties is a local Jordan pair of ( ) at . If are all
the eigenvalues of ( ) we obtain in this way local Jordan pairs ( ), .

DEFINITION A pair ( ) of the form , -and is


called a finite Jordan pair for ( ). We define now a Jordan pair of order for ( ) as pair
( ) with the following properties:

1) , -
2) ( ) is a finite Jordan pair for ( )
3) ( ) is a local Jordan pair for ( ( )) ( ) at .

It can be easily verified that a Jordan pair of order for ( ) is also a standard pair of order
for ( ). , - . [ ] / .

THEOREM [Nir Cohen (1983)] Let two regular matrix polynomials, ( ) and ( ) have
the same standard pair ( ) Then there exists a (constant) invertible matrix such
that ( ) ( )

DEFINITION A resolvent form for the regular matrix polynomial ( ) is a representation


( ) ( ) where ( ) is a linear matrix polynomial and are matrices of
appropriate size.

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM [Nir Cohen (1983)] Let , - be a standard pair of order


for the regular matrix polynomial ( ) ∑ . Then ( ) has the representation

( ) *( ) ( )+ ( )
Where
[ ], [ ] 0 1, - [ ∑ ]
The following two theorems give the explicit solutions of basic linear systems of differential
(and/or difference) equations, using a given standard pair of the characteristic
polynomial ( ). We shall assume throughout that ( ) ∑ , is a given regular
matrix polynomial ( ) with a given standard pair ( , - ). It is
also assumed that is a nilpotent matrix, i.e. for some . This is equivalent to
stating that where ( ( )). This condition can always be achieved by
transforming our given standard pair ( ) to a Jordan pair, via simple operations. In most
applications, however, ( ) is a Jordan pair to begin with, so that this condition holds.

THEOREM [I. Gohberg (1981)] The general solution of the differential equation
()
∑ ( ) ( ) For a given smooth differentiable function ( ) is:
( ) ( )(
( ) ∫ ( ) ∑ ) ( )

Where is arbitrary vector and .

THEOREM [I. Gohberg (1981)] The general solution of the difference equation
∑ For a given sequence of vectors :

( ∑ ) ∑ ( )

Where is arbitrary vector and . .


A standard pair ( ) is said to be a restriction of another standard pair ( ̂ ̂ ) if there exist
two left-invertible matrices such that
̂ ̂ ̂ ̂ ( )
Or in more compact form we write:

[̂ ̂ ] , -( ) and ( )( ̂ ̂ ) ( )( ) (24)

If above are left and right invertible, we call ( ) and ( ̂ ̂ ) similar. By


transitivity of restriction, we can transform( ̂ ̂ ) and ( ) by pair similarity without
changing the restriction relation between them.

Factorization of ( ) Factorizations of matrix polynomials play an important role in the


theory of over-damped vibration systems, with a finite number of degrees of freedom

THEOREM [Nir Cohen (1983)] Let ( ) and ( ̂ ̂ ) be Jordan pairs of orders and ,
respectively, for the regular matrix polynomials ( ) and ( ), where . Then
( ) is a right divisor of ( ) such that iff ( ̂ ̂ ) is a restriction of
( ).

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM [I. Krupnik (1979)] Suppose that ( ) is a monic matrix polynomial. If the
lengths of all its Jordan chains are equal to 1, then ( ) can be decomposed into a product of
linear factors: ( ) ∏ ( ). (This is a sufficient condition for factorization)

COROLLARY [I. Gohberg (1985)] If the monic matrix polynomial ( ) has all its
eigenvalues distinct, then the number of distinct factorizations into products of linear factors
is not less than ∏ ( ).

We recall that if the ( ( )) , then is the number of finite eigenvalues, including


multiplicities. Furthermore, if , ( ) has only finite eigenvalues, otherwise if
, ( ) has infinite eigenvalues, including the multiplicities. Thus, the maximum finite
number of solvents in the case of ( ( )) will be . / and it
( )
occurs when ( ) has eigenvectors that are linearly independent by .
LEMMA Let ( ) be a monic matrix polynomial. ( ) Admits a factorization into
a product of linear factors: ( ) ∏ ( ) if and only if there exists at least an
basis in formed by some Jordan chains of ( ), with the spectrum corresponding to each
basis is denoted and ( ( )) ⋃ . (This is a necessary and sufficient condition)
---------
THEOREM [E. Pereira 2003] Let ( ) be a matrix polynomial and let be the associated
block companion matrix. If is diagonalizable, then ( ) has at least solvents.

THEOREM [E. Pereira 2003] Let ( ) be a matrix polynomial and let be the associated
block companion matrix. If is diagonalizable and at least one of its eigenvalues has
geometric multiplicity greater than 1, then ( ) has infinitely many solvents.

THEOREM [E. Pereira 2003] Let ( ) be a matrix polynomial, the associated block
companion matrix, and a solvent of ( ). If one eigenvalue of has the geometric
multiplicity greater in than in , then ( ) has infinitely many solvents.

FINAL RESULT: [E. Pereira 2003] The number of solvents can be summarized as: if a
matrix is diagonalizable and its eigenvalues are distinct, then the minimum number of
solvents of ( ) is , and the maximum number is . / . If a matrix is diagonalizable
and has at least one multiple eigenvalue, then the number of solvents of ( ) is infinite. If a
matrix is not diagonalizable, then the number of solvents of ( ) can be zero, finite or
infinite. It will be infinite if ( ) has a solvent which has an eigenvalue with geometric
multiplicity greater in than in .

--------------------------------------------------
Additional results about the existence of complete set of roots
--------------------------------------------------
THEOREM [I. Krupnik (1979)] Suppose that ( ) is a monic matrix polynomial, if all the
multiplicities of zeros of the characteristic polynomial ( ) are not greater than 2, then
( ) can be decomposed into a product of linear factors.

THEOREM [I. Krupnik (1979)] Suppose that ( ) is a monic matrix polynomial, if the
lengths of all its Jordan chains are not greater than 2, then ( ) can be decomposed into a
product of linear factors. The poof is found in the paper (I. Krupnik (1979))

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

SUMMARY FROM THE BOOK: INTRODUCTION TO THE SPECTRAL THEORY OF


POLYNOMIAL OPERATOR PENCILS

Let be a Hilbert space, and a closed linear operator acting in (this means that its
domain ( ) and its range ( ) are contained in ). Denote by ( ) the set of all
bounded linear operators acting in . For ( ) it is always assumed that ( ) .

If the linear pencil ( ) is a spectral divisor of the pencil ( ), then will be called a
spectral root of the operator equation ( ) (or a spectral root of the pencil ( )).

LEMMA [A. S. Markus 1988] If is an operator root of the pencil ( ), then ( ) ( ),


each eigenvalue of is an eigenvalue of ( ), and the eigenvectors and root vectors of are
eigenvectors and associated vectors of ( ). But if is a spectral root of ( ), then ( )
( ) , and a number ( ) is an eigenvalue of if and only if it is an eigenvalue of
( ), and, moreover, the corresponding eigenvectors and root vectors of coincide with the
eigenvectors and associated vectors of ( ).

LEMMA [A. S. Markus 1988] Suppose that is an operator root (right or left) of the pencil
( ), ( ) ( ), and ( ) ( ) is closed. If is a closed curve separating ( ) from
( ) ( ), then is a spectral root (right or left) of ( ) if and only if

( ∮ ( ) )( ∮ ( ) ) ( )

( ∮ ( ) ) ( ∮ ( ) ) ( )

LEMMA [A. S. Markus 1988] If is an operator root of the pencil ( ), then


( ) ( )( )
Where ( ) ∑ ∑ ( )

LEMMA [A. S. Markus 1988] If * + are operator roots of the pencil ( )


then, ( ). With is the companion form matrix corresponding to the
pencil ( )

COROLLARY [A. S. Markus 1988] If the Vandermonde operator is invertible, then the
operators and ( ) are similar.
COROLLARY [A. S. Markus 1988] If is invertible, then ( ) ⋃ ( ).

COROLLARY [A. S. Markus 1988] If is invertible, then the system of eigenvectors and
associated vectors of ( ) is n-fold complete in if and only if the system of root vectors of
each of the operators ( ) is complete in .

THEOREM [A. S. Markus 1988] Suppose that * + are (right or left) operator roots of
the pencil ( ) and * +. If the system of root vectors of each of the operators
( ) is complete in , then the system of eigenvectors and associated vectors
of ( ) is n-fold complete in .

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

LEMMA [A. S. Markus 1988] Let ( ) . The vector-valued function ( ) ( ) is


a solution of the vector differential equation ∑ for any if and
only if is an operator root of ( ), ( ) is an -valued function. If * + are complete
set of (right or left) operator roots of the pencil ( ) then the general solution of this vector
differential equation is given by: ( ) ∑ ( ) for any .

THEOREM [A. S. Markus 1988] The following statements are equivalent:


1) Any solution of ∑ has the form ( ) ∑ ( ).
2) , and/or * +. (i.e. in other word is an invertible operator)

In this subsection we investigate the connection between the properties of the operator and
the mutual arrangement of the spectrum of ( ) and the spectra of its operator roots
* +

THEOREM [A. S. Markus 1988] If * + is complete set of operator roots of the pencil
( ), then the following statements are equivalent:

1) is left-invertible.
2) * +.
3) ( ) ( ) ( )

THEOREM [A. S. Markus 1988] If * + is complete set of operator roots of the pencil
( ), then the following statements are equivalent:

1) is right-invertible.
2) * +.
3) ( ) ⋃ ( ).

COROLLARY [A. S. Markus 1988] The following statements are equivalent:

1) is invertible.
2) * +
3) ( ) ⋃ ( ) ( ) ( ) ( ).

THEOREM [A. S. Markus 1988] Suppose that * + is a complete set of operator roots
of the pencil ( ) and * +. If the system of root vectors of each of the operators
( ) is complete in , then the system of eigenvectors and associated vectors of
( ) is n-fold complete in .

SPECTRAL FACTORIZATION ALGORITHMS: Numerous methods are available for


computing the solvents and spectral factors of matrix polynomial ( ) simple approaches use
the eigenvalues and eigenvectors of companion form matrix to construct a solvents of -
matrices However, it is often inefficient to explicitly determine the eigenvalues and
eigenvectors of a matrix, which can be ill conditioned and either non-defective or defective.
Moreover, there are numerous numerical methods for computing the block roots of matrix
polynomials without any prior knowledge of the eigenvalues and eigenvectors of the matrix
polynomial. Of such sophisticated algorithms, the most efficient and more stable one that give

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

the complete set of solvent at time is the Q.D. algorithm. Also, the block-power method has
been developed by (Tsai J.S.H. et.al 1988) for finding the solvents and spectral factors of a
general nonsingular polynomial matrix, which may be monic and/or comonic. On the other
hand, without prior knowledge of the eigenvalues and eigenvectors of the matrix, the Newton-
Raphson method (Shieh L.S. et.al 1981) has been successfully utilized for finding the
solvents. In this section we are going to present some of the existing algorithms that can
factorize a linear term from a given matrix polynomial.

METHOD I: GENERALIZED NEWTON’S METHOD:

THEOREM [W. Kratz 1987] Let ( ) (∑ ) , - be an arbitrary monic


-matrix. Suppose that is a simple solvent of ( ). Then, if the starting matrix
is sufficiently near , the algorithm:

( ) (∑( ) ⨂∑ ) (∑ )

Converges quadratically to . More precisely: if ‖ ‖ and for sufficiently


small and , then we have:

()
( )‖ ‖ With and
( )‖ ‖ ‖ ‖

EXAMPLE: Let us now consider the generalized Newton’s algorithm. The considered matrix
polynomial is: ( )

Algorithm I: (Generalized Newton’s Method)

1 Enter the number of iterations


2 Initial guess;
3 For
4 ( ) ( ) ( )
5
6 , ( ) ( )-
7 (( ) ) ( ) ( )
8
9 , ( ) ( )-
10
11 End
12

METHOD II: BERNOULLI’S METHOD: The Bernoulli’s iterative method is a


generalization of the scalar Bernoulli’s method for the computation of zeros of a scalar
polynomial with largest absolute value. Bernoulli’s method is based on the solution of a
difference equation (J. E. Dennis et al.(1978)). A solvent is said to be a dominant solvent if
every eigenvalue of exceeds in modulus every eigenvalue of the quotient ( )( ) .

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM [I. Gohberg et al.(1982)] Let ( ) be a


monic matrix polynomial of degree . Assume that has a dominant solvent (right: or left :
) and the transposed matrix polynomial ( ) also has a dominant solvent. Let * + be
the right solution of the system:

Where: * + is a sequence of matrices to be found, with:


and is determined by the initial conditions: . Then
and exists for large with:

: is zero matrix, :is identity matrix.

EXAMPLE: Let us now consider the generalized Bernoulli’s method. The considered matrix
polynomial is: ( )

Algorithm II: (Generalized Bernoulli’s Method)

( )
( ), -
, -

( )
( )

( ) ( ) ( ( ) ( ))
( ) ( ) ( )
( )

( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

METHOD III: THE RIGHT BLOCK MATRIX Q.D. ALGORITHM The matrix
quotient-difference Q.D. algorithm is a generalization of the scalar one (P. Henrici et.al 1958).
The use of the Q.D. algorithm for such purpose has been suggested firstly by Hariche in
reference (K. Hariche et.al 1987). The scalar Q.D. algorithm is just one of the many global
methods that are commonly used for finding the roots of a scalar polynomial. The Quotient-
Difference scheme for matrix polynomials can be defined just like the scalar one (Dahimene
A et.al 1992) by a set of recurrence equations. The algorithm consists on building a table that
we call the Q.D. tableau. We start the Q.D. algorithm by considering the following
initialization:

( )

( ) ( )

Those last two equations provide us with the first two rows of the Q.D. tableau (one row of
Q’s and one row of E’s). Hence, we can solve the rhombus rules for the bottom element
(called the south element by Henrici P. et.al 1958). We obtain the row generation of the Q.D.
algorithm:

( ) ( ) ( ) ( )
{ ( ) ( ) ( ) ( )
[ ]

Where the ( ) are the spectral factors of ( ). In addition, note that the Q.D. algorithm
gives all spectral factors simultaneously and in dominance order. We have chosen, in the
above, the row generation algorithm because it is more stable numerically. For further
information about the row generation algorithm and the column generation algorithm we may
orient the reader to see ref (A. Dahimene et.al 1992). Writing equations () in tabular form
yield

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

( ) ( ) ( )

Table 1: The extended Q.D. scheme

EXAMPLE: Consider a matrix polynomial of 2nd order and 3rd degree with the following
matrix coefficients: ( ) . We apply now the generalized row
generation Q.D. algorithm to find spectral factors.

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

Algorithm III: (Generalized Quotient-Difference Method)

1
2 Enter the degree and the order
3 Enter the number of iterations
4 Enter the matrix polynomial coefficients
5
6 , - , -
7 For
8 ,- ,-
9 For
10 ( (( ) ( )) ( )
11 [ ]
12 End
13
14 For
15 . ( )/ ( ( )) . ( ( ))/
16 , -
17 End
18 , -
19
20
21 End
22
23 ( ) ( ) ( )

METHOD IV: BLOCK HORNER’S METHOD Horner’s method is a technique to


evaluate polynomials quickly. It needs multiplications and additions to evaluate a
polynomial. It is also a nested algorithmic programming that can factorizes a polynomial into
a product of linear factors; this last scheme (Horner’s method) is based on the Euclidian
synthetic long division. As a division algorithm, Horner’s method is a nesting technique
requiring only multiplications and additions to evaluate an arbitrary -degree
polynomial, which can be surveyed by Horner’s theorem (Burden R.L. et.al 2005). Now let
us generalize this nested algorithm to matrix polynomials; consider the next theorem:

THEOREM [B. Bekhiti 2018] Let the function ( ) be the matrix polynomial of degree
and order defined on the complex field , - where are constant
matrix coefficients and is a complex variable. ( ) ∑ . Let ( )
( ) be the right functional evaluation of ( ) by
the sequence of square matrices * + . The solution ( ) of matrix polynomial
( ) converges iteratively to the exact solution iff:

( ) ( ( ))
{
( ) ( ) ( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

Proof: Using the remainder theorem we can obtain that if we divide ( ) on the right by the
linear factor ( ) we get ( ) ( )( ) ( ) means that:

( ) ( )( ) ( )

If we set ( ) , and we expand this matrix equation we get:

( ) ( ) ( )

Identifying the coefficients of with different powers we get:

( )

Since is a right operator root of ( ), means that ( ) and from this last
equation we can deduce that , in other word ( ) . If
we iterate the last obtained equations we arrive at:

( ) ( ( ))
{
( ) ( ) ( )

Based on the iterative Horner’s sachem we can redo the process too many times to get a
solution ( ), and the theorem is proved.

Algorithm IV: (Block Horner’s Method)

1 Enter the number of iterations


2 For
3 Enter the degree and the order
4 Enter the matrix polynomial coefficients
5 ( ) Initial guess;
6 ( )
7 For
8 ( ) ( ) ( )
9 ( ) ( )
10 End
11 ( ) ( ( ))
12 ( ) ( )
13 End

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

METHOD V: REFORMULATION OF THE BLOCK HORNER’S METHOD: Now we


will introduce a new version of the preceding procedure which is an efficient algorithm for the
convergence study; Hence with some reconfigurations and algebraic manipulation we obtain:

( )
( ) ( ) ( ) ( )

( ) ( ) ( ) ( )
( ) ( ) ( ) ( )

After back substitution we get: ( ) ∑ ( ( )) { ( ( )) } ( ) ,


using the last theorem and substitute ( ) into the equation of ( ) we get :

( ) ( )[ ( ( ))]
The following corollary is an immediate consequence of the above theorem.

COROLLARY [B. Bekhiti 2018] Let the function ( ) , - be a monic matrix


polynomial of degree Assume that ( ) has an operator root , let * + be the
sequence of square matrices and ( ) be the right functional evaluation of ( ),
means that:
( )

If the square matrix * ( )+ is invertible for each given value of , then the sequence
of matrices* + converges linearly to the operator root (i.e.
) under the following condition:

, ( )-

Where is any arbitrary initial guess.

Algorithm V: (Extended Block Horner’s Method)

1 Enter the degree and the order


2 Enter the matrix polynomial coefficients
3 Initial guess;
4 Give some small and ( =Initial start)
5

5 While
6 ( )
7 , ( )-
‖ ‖
8 ‖ ‖
9
10
11 End

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

METHOD VI: BROYDEN’S METHOD Broyden’s method is a generalization of the


secant method to the multivariable case. It is shown in (J. E. Dennis et al.(1983)) that it has
only a super-linear convergence rate. However, it is much less expensive in terms of
computations at each step. Given ( ) ( ( )) , an -dimensional vector valued
function of an –dimensional vector ( ) , an initial approximation and
matrix at any iteration we can compute them: Initial guess; =Initial guess; Enter
the large number . Enter the set of ( ) matrices, and for (
) we have ( ) and
( ) ( ) ( ) ( ( ))
( )

( ( )) ( ( ))
( ) ( )
Comparing to Newton’s method notice that we have to provide in the later not only a good
approximation of the solution but also a good approximation of the Jacobian evaluated at the
solution. We can remark also that the evaluation of the Jacobian is avoided in Broyden’s
method. Much more computation can be avoided if we calculate directly the inverse of the
matrix at each step. This can be accomplished by using the Sherman- Morrison-Woodbury
formula as stated in (J. E. Dennis et al.(1983)).

PROPOSITION Let and let the square matrix be nonsingular. Then:


is nonsingular if and only if: Furthermore:
( )
The use of last result in provides directly the following update for the inverse of the
matrix : ( ) ( )

Algorithm VI: (Extended Broyden’s Method)


1 ( )
2 ( )
3
4 (
) ( ) ( )
5 , ( ) ( )-
6 , ( ) ( )-
7 ( )
8
9 , ( ) ( )-
10 ( ) ( ) ( )
11 , ( ) ( )-
12 , ( ) ( )-
13
14
15 (( ) ( )) ( )
16
17
18

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

CPMPUTING MATRIX FUNCTIONS USING NUMERICAL METHODS: Many


different techniques are available for computing or approximating matrix functions, some of
them very general and others specialized to particular functions. In this section we survey a
variety of techniques applicable to some different functions f. Matrix iterations are an
important tool for certain functions, such as matrix roots, sign function etc.

MATRIX SIGN FUNCTION: The matrix sign function of a square matrix with
( ( )) is defined by ( ) ( ) , where is an identity
matrix and ( ) ∮ ( ) , with is a simple closed contour in right
halfplane of and encloses all the right-half plane eigenvalues of . If has a Jordan form
with ( ) ( ), then:

( ) ( ( ) ( )) . ( )/

Where , , and are the collection of Jordan blocks with


( ( )) and ( ( )) , respectively. is a modal matrix of .

The extended matrix sign function ̂ of including ( ( )) is defined by

( ) ( ( ) ( ) ( )) . ( ) ( )/

Where is the collection of Jordan blocks with ( ( )) ,


null matrix, and .

NOTE: Two other representations have some advantages. First is the particularly concise
formula ( ) ( ) which generalizes the scalar formula ( ) ( ) .
Next, ( ) has the integral representation ( ) ∫ ( ) . Some
properties of ( ) are collected in the following theorem.

THEOREM [Nicholas J. Higham 2008] (properties of the sign function) Let have
no pure imaginary eigenvalues and let ( ). Then

(a) ( is involutory);
(b) is diagonalizable with eigenvalues ±1;
(c) ; and ( ) ( );
(d) If is real then is real;
(e) ( ) and ( ) are projectors onto the invariant subspaces
associated with the eigenvalues in the right half-plane and left half-plane, respectively.
(f) ( ) ( ) ( ) ( )
(g) ( ) √ √

METHOD I: The most widely used and best known method for computing the sign function
is the Newton iteration, due to Roberts:

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

THEOREM [Nicholas J. Higham 2008] (convergence of the Newton sign iteration). Let
have no pure imaginary eigenvalues. Then the Newton iterates in

( )

Converge quadratically to ( ), with

‖ ‖ ‖ ‖‖ ‖

For any consistent norm, moreover, for ,


( ) ( ) ( )( )

METHOD II: The Newton iteration provides one of the rare circumstances in numerical
analysis where the explicit computation of a matrix inverse is required. One way to try to
remove the inverse from the formula is to approximate it by one step of Newton’s method for
the matrix inverse, which has the form ( ) for computing ; this is
known as the Newton–Schulz iteration. Replacing k by ( )

( )

METHOD III: An effective way to enhance the initial speed of convergence is to scale the
iterates, prior to each iteration, is replaced by , giving the scaled Newton iteration

. /

As long as is real and positive, the sign of the iterates is preserved. Three main scalings
have been proposed:

| ( )|
√ ( ) ( )

√‖ ‖ ‖ ‖

REMARK: It should be noted that the first iterated definition of ( ) will fail when the
matrix has a zero or imaginary eigenvalue. The singularity encountered in ( ) can be
avoided by a translation of the spectrum or origin shift-i.e., by adding to . Assume that
is selected such that | ( )| for all eigenvalues having a nonzero real part, and further,
let ( ) and ( ) be nonsingular. Let ( ) and ( ) denote the signs
of and respectively. The generalized sign of is then given by: ( )
* ( ) ( )+; where the direction of the shift is set by the sign of . There is no
assurance that a random choice of will not produce a new singularity; thus care should be
taken in selecting .

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

MATRIX SQUARE ROOT: The matrix square root is one of the most commonly occurring
matrix functions, arising most frequently in the context of symmetric positive definite
matrices. The rich variety of methods for computing the matrix square root, with their widely
differing numerical stability properties are an interesting subject of study in their own right.
We will almost exclusively be concerned with the principal square root, . Notice that for
with no eigenvalues on , is the unique square root of A whose spectrum
lies in the open right half-plane. We will denote by √ an arbitrary, possibly nonprincipal
square root. We note the integral representation √ ∫ ( ) .

METHOD I: Newton’s method for solving can be derived as follows.

LEMMA Suppose that in the Newton iteration commutes with and all the iterates are
well-defined. Then, for all , commutes with and

( )

If the spectrum of lies in the right half-plane then converges quadratically to


and, for any consistent norm, ‖ √ ‖ ‖ ‖‖ √ ‖ .

METHOD II: A coupled version of the Newton iteration can be obtained by defining
. Then ( ) and ( ), on
using the fact that commutes with . This is the iteration of Denman and Beavers

( )

( )

√ √

MATRIX ROOT is a root of if . The root of a matrix, for


, arises less frequently than the square root, but nevertheless is of interest both in theory
and in practice. The matrix root is an interesting object of study because algorithms and
results for the case do not always generalize easily, or in the manner that they might be
expected to. Given the important role Newton’s method plays in iterations for the matrix sign
decomposition and matrix square root, it is natural to apply it to the matrix root. Newton’s
method for the system defines an increment to an iterate by

.( ) /

The most obvious choice, , is unsatisfactory because no simple conditions are


available that guarantee convergence to the principal root for general .

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

MATRIX SECTOR FUNCTION: of matrix sector functions are widely used in engineering
applications such as the separation of matrix eigenvalues, the determination of A-invariant
space, the block diagonalisation of a matrix, and the generalized block partial fraction
expansion of a rational matrix Also It should be noted that the well-known matrix sign
function of is a special class of the matrix sector function of .

The matrix sector function ( ) can be expressed as ( ) ( ) . The


Newton-Raphson algorithm for the matrix -sector function can be formulated as follows:

,( ) -[ ]

MATRIX DISK FUNCTION: Let have Jordan canonical form


with ( ), where the eigenvalues of are inside the unit disk and the eigenvalues
of are outside the unit disk. The matrix disk function is defined by ( )
( ) ; if A has an eigenvalue on the unit circle then ( ) is undefined. An
alternative representation is

( ) ( ,( )( ) -)

The matrix disk function was introduced in the same paper by (J. D Roberts 1980) that
introduced the matrix sign function. It can be used to obtain invariant subspaces in an
analogous way as for the matrix sign function.

MATRIX EXPONENTIAL: The matrix exponential is by far the most studied matrix
function. The interest in it stems from its key role in the solution of differential equations.
Depending on the application, the problem may be to compute for a given , to compute
for a fixed and many , or to apply or to a vector; the precise task affects the
choice of method. The matrix exponential is defined for by ∑ ,
another representation is . / . This formula is the limit of the first order
Taylor expansion of raised to the power . In term of the Cauchy integral definition
we can also define ( )∮ ( ) . The power series algorithm for the
matrix exponential function can be formulated as follows:

Properties of the matrix exponential:

(a) For all ( ) and


( )
(b) For all and for all iff
(c) For all and for all ( )
( )
(d) ∫ ( ) (is called the Laplace transform of )
(e) For all and for all
(f) ( ) ( ) with ( ) ( )

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

LINEAR MATRIX DIFFERENTIAL EQUATIONS: Matrix-valued initial-value problems


also occur frequently, system of differential equations provide a rich source of ( ) problems,
because of the fundamental role that the exponential function plays in linear differential
equations. In the matrix case, we can have coefficient matrices on both the right and left. For
convenience, the following theorem is stated with initial time

THEOREM Let and Then the matrix initial-value problem

̇( ) ( ) ( ) ( )

has the solution ( ) ( ) ∫ . The initial-value problem is known


as a Sylvester differential equation. When is symmetric, ( ) is symmetric and when
then it is known as Lyapunov differential equation.

If its right-hand side vanishes for some constant matrix, then this matrix will be a solution in
the following sense. ; where ( ) . In this case the differential
equation has a constant solution . If the matrix ⨂ ⨂ is stable then we
may represent the solution matrix as ∫ . The solution can be obtained
using the Kroneker operator ( ) ( ⨂ ⨂ ) ( ). The numerical
solution of the matrix initial-value problem can be summarized in:

With and is small enough real number which


should be set to guarantee the convergence and the speed of convergence.

FACTS ON SOME MATRIX FUNCTIONS: The following infinite series converge for
with the given bounds on ( ) and on the open right half plane ORHP:

(a) For all ( ) ∑ ( ) , ( ) ∑ ( )


( ) ( )

(b) For all such that ( ) , ( ) ∑ *, -, - +


(c) For all ( ) ( ), and ( ) ( )
(d) If is nonsingular and ( ) then the matrix logarithm ( ) is given by the
( )
series ( ) ∑ ( ) .
( )

(e) For all ( ) ( ), ( ) ( )


(f) For all ( ) ( ) ( ), ( ) ( )
(g) For all ( ) ( ( )) ( ) ( ( )) ( ).
(h) For all ( ) ( ( )) ( ) ( ( )) ( ).

The Institute Of Aeronautics And Space Studies-Blida University (Algeria)


Dr. BEKHITI BELKACEM Lecture Notes 2018/2019

STATE-SPACE REPRESENTATION OF RATIONAL MATRIX FUNCTIONS:

A matrix function which is the product of a matrix polynomial in a complex variable s and the inverse of another matrix polynomial in the same variable is called a rational matrix function. Several algebraic theorems on rational matrix functions have been developed in the literature. A strictly proper rational left matrix fraction description (left MFD) of l-th degree and m-th order is described by H(s) = (D_L(s))^{-1} N_L(s), where D_L(s) = ∑_{i=0}^{l} D_{Li} s^i and N_L(s) = ∑_{i=0}^{l−1} N_{Li} s^i. An alternative to H(s), a strictly proper rational right MFD, is H(s) = N_R(s)(D_R(s))^{-1}, where D_R(s) = ∑_{i=0}^{l} D_{Ri} s^i and N_R(s) = ∑_{i=0}^{l−1} N_{Ri} s^i. The rational matrix H(s) can also be expressed as H(s) = adj[D_L(s)] N_L(s) / det(D_L(s)) and/or H(s) = N_R(s) adj[D_R(s)] / det(D_R(s)). If H(s) is irreducible, then D_L(s) and N_L(s) are left coprime and D_R(s) and N_R(s) are right coprime (Kailath 1980). The adj[D(s)] and det(D(s)) can be found using a recursive algorithm in Buslowicz (1980). For an irreducible H(s), the roots of det D_L(s) or det D_R(s) are referred to as the poles of H(s). Furthermore, if H(s) is irreducible, then det D_L(s) = det D_R(s) and adj[D_L(s)] N_L(s) = N_R(s) adj[D_R(s)].

■ (State space from Left MFD) A state space realization of a proper rational left MFD is given by:

ẋ(t) = A x(t) + B u(t)        Transition matrix A : dim = n × n,   Input matrix B : dim = n × m
y(t) = C x(t) + D u(t)        Observation matrix C : dim = m × n,  Direct term matrix D : dim = m × m

With D_L(s) normalized so that its leading coefficient is the identity (D_{Ll} = I_m) and n = l·m, one realization is the block observer (companion) form:

A = [ −D_{L,l−1}   I_m   0    …   0
      −D_{L,l−2}   0     I_m  …   0
        ⋮          ⋮     ⋮        ⋮
      −D_{L,1}     0     0    …   I_m
      −D_{L,0}     0     0    …   0  ],      B = [ N_{L,l−1} − D_{L,l−1} D
                                                   N_{L,l−2} − D_{L,l−2} D
                                                     ⋮
                                                   N_{L,0} − D_{L,0} D ],

C = [ I_m  0  …  0 ],      D = lim_{s→∞} H(s).
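A sketch of this construction, assuming the observer-form convention written above and a monic (normalized) D_L(s); the function name and the argument layout are illustrative only:

import numpy as np

def realize_left_mfd(DL, NL, D=None):
    """Observer-form realization of H(s) = DL(s)^{-1} NL(s) + D.

    DL : list [DL_0, ..., DL_{l-1}] of m x m blocks (leading block DL_l = I assumed)
    NL : list [NL_0, ..., NL_{l-1}] of m x m blocks
    """
    l = len(DL)
    m = DL[0].shape[0]
    D = np.zeros((m, m)) if D is None else D
    n = l * m
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    C = np.zeros((m, n))
    C[:, :m] = np.eye(m)
    for i in range(l):
        blk = DL[l - 1 - i]                          # block row i carries -DL_{l-1-i}
        A[i*m:(i+1)*m, :m] = -blk
        if i < l - 1:
            A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = np.eye(m)
        B[i*m:(i+1)*m, :] = NL[l - 1 - i] - blk @ D  # strip the direct term
    return A, B, C, D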

■ (Left MFD from state space) A minimal realization of the state space system, described by {A, B, C, D}, can be represented by a proper rational left MFD H(s) = (D_L(s))^{-1} N_L(s) + D, if the state-space realization fulfils the dimensional requirement that the state dimension n, divided by the number of channels m, equals an integer value k. The matrix coefficients of the rational left MFD, which all have the dimensions m × m, are then given (with the normalization D_{Lk} = I_m) by

[ D_{L0}  D_{L1}  …  D_{L,k−1} ] = −C A^k [O_k(A, C)]^{-1},
N_{Li} = ∑_{j=i+1}^{k} D_{Lj} C A^{j−i−1} B,    i = 0, 1, …, k−1,


with the block observability matrix O_k(A, C) defined as

O_k(A, C) = [ C
              C A
              ⋮
              C A^{k−1} ].

Block-PFE for MFD: Application of block partial fraction expansions (block-PFEs) to large-scale multivariable systems deserves much attention. Systems described by high-degree MFDs can be decomposed into low-degree MFDs for low sensitivity and good reliability, as well as simple design and simulation (Chen 1984, Chap. 9; Kailath 1980, Chap. 6). The block-PFE problem can be phrased as follows. Given a right coprime MFD H(s) = N_R(s)(D_R(s))^{-1} with

D_R(s) = ∑_{i=0}^{l} D_i s^i,    N_R(s) = ∑_{i=0}^{l−1} N_i s^i,

find the block residues R_{ij}, for i = 1, …, σ and j = 1, …, m_i, such that

H(s) = ∑_{i=1}^{σ} ∑_{j=1}^{m_i} R_{ij} (sI_m − S_i)^{-j},

where m_i is the multiplicity of the left solvent S_i of the l-th degree, m-th order denominator matrix polynomial D_R(s) (i.e. ∑_{i=1}^{σ} m_i = l), and σ is the number of distinct solvents, provided that S_i, for i = 1, …, σ, and m_i, for i = 1, …, σ, are given.

The residues R_{ij} are computed from a linear system built from the block Vandermonde-type matrix of the given solvents S_i together with the numerator coefficients N_i (the explicit formulas are omitted in this copy).
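As an illustrative cross-check only (this is not the block algorithm itself), a small MFD can always be expanded entrywise with SymPy and the result compared with a block expansion obtained by other means; the example matrices below are placeholders:

import sympy as sp

s = sp.symbols('s')
# hypothetical 2x2 right MFD H(s) = N(s) * D(s)^{-1}
D = sp.Matrix([[s**2 + 3*s + 2, 0],
               [1,              s**2 + 5*s + 6]])
N = sp.Matrix([[s + 1, 0],
               [0,     1]])

H = N * D.inv()
H_pfe = H.applyfunc(lambda h: sp.apart(sp.cancel(h), s))   # entrywise partial fractions
sp.pprint(H_pfe)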

SINGULAR SYSTEMS AND MATRIX POLYNOMIALS

DESCRIPTOR SYSTEMS: Descriptor linear systems theory is an important part of the general field of control systems theory, and has attracted much attention in the last two decades. In spite of the fact that descriptor linear systems theory has been very rich in content, there have been only a few comprehensive books on this topic, e.g., Campbell (1980), Campbell (1982), and Dai (1989b). Descriptor systems appear in many fields, such as power systems, electrical networks, aerospace engineering, chemical processes, socio-economic systems, network analysis, biological systems, and time-series analysis. Many electrical circuit systems can be described by descriptor linear systems. In these notes, we consider linear differential-algebraic equations with constant coefficients of the form E ẋ(t) = A x(t) + B u(t), with E, A ∈ R^{n×n} and B ∈ R^{n×m}. As a special case, when u(t) ≡ 0 the homogeneous solution is given by the following lemma.


LEMMA: [Campbell S.L. 1980] Consider the differential-algebraic equation E ẋ = A x with E, A ∈ R^{n×n}, and suppose that E and A commute (EA = AE). Then for every vector q ∈ R^n, the function x(t) defined by x(t) = e^{E^D A t} E E^D q is a solution of E ẋ = A x, where E^D denotes the Drazin inverse of E.
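A minimal sketch of this construction in Python/NumPy, using the known representation E^D = E^l (E^{2l+1})^+ E^l (valid for any l greater than or equal to the index of E); this is only one of several ways to compute the Drazin inverse and is meant for small, well-scaled examples:

import numpy as np
from scipy.linalg import expm

def drazin(E, tol=1e-10):
    """Drazin inverse via E^D = E^l (E^{2l+1})^+ E^l with l >= index of E."""
    k = 0
    while np.linalg.matrix_rank(np.linalg.matrix_power(E, k), tol=tol) != \
          np.linalg.matrix_rank(np.linalg.matrix_power(E, k + 1), tol=tol):
        k += 1
    l = max(k, 1)
    El = np.linalg.matrix_power(E, l)
    return El @ np.linalg.pinv(np.linalg.matrix_power(E, 2 * l + 1)) @ El

def dae_homogeneous_solution(E, A, q, t):
    """x(t) = expm(E^D A t) E E^D q, a solution of E x' = A x when E A = A E."""
    ED = drazin(E)
    return expm(ED @ A * t) @ (E @ ED @ q)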

For the general case, the Laplace transform of the system, given by E ẋ(t) = A x(t) + B u(t) and y(t) = C x(t) + D u(t), under zero initial conditions, results in the following generalized transfer function matrix: G(s) = C(sE − A)^{-1}B + D, with characteristic equation of the form det(sE − A) = 0. It can be shown that the transfer function of a linear singular system, in certain circumstances, cannot be found. This problem is completely determined by the question of solvability of the singular system. It is obvious that only regular singular systems can have such a description. If a singular system has no transfer function, i.e. it is irregular, it may still have a general description pairing, Dziurla and Newcomb (1987), that is, a description of the form R(s)Y(s) = Q(s)U(s), where Y(s) and U(s) are the Laplace transforms of the output and input, respectively. Since irregular systems may have many solutions or no solutions at all, the question arises as to whether we would meet them in practice.
DEFINITION: The pair (E, A) is said to be regular if det(sE − A) is not identically zero. The pair (E, A) is said to be impulse-free if deg det(sE − A) = rank(E). The pair (E, A) is said to be stable if all the roots of det(sE − A) = 0 have negative real parts. The pair (E, A) is said to be admissible if it is regular, impulse-free and stable.

LEMMA: (L. Dai et al. 1989) The pair (E, A) is regular if and only if there exist two nonsingular matrices Q, P such that Q E P = diag(I_{n1}, N) and Q A P = diag(A_1, I_{n2}), where N is a nilpotent matrix, n_1 + n_2 = n, n_1 = deg det(sE − A), and the finite spectrum of the pair equals σ(A_1). This system decomposition is called the (slow-fast decomposition).

LEMMA (L. Dai et al. 1989) Suppose that the pair (E, A) is regular, and that two nonsingular matrices Q, P are found such that the slow-fast decomposition holds; then we have:

The pair (E, A) is impulse-free if and only if N = 0.
The pair (E, A) is stable if and only if every eigenvalue of A_1 has negative real part.
The pair (E, A) is admissible if and only if N = 0 and every eigenvalue of A_1 has negative real part.
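A rough numerical check of these conditions (regularity via det(sE − A) at a few random test points, stability via the finite generalized eigenvalues) can be sketched as follows; the tolerances and the sampling strategy are assumptions, not part of the lemma:

import numpy as np
from scipy.linalg import eig, det

def is_regular(E, A, trials=5, tol=1e-10):
    """(E, A) is regular iff det(sE - A) is not identically zero."""
    rng = np.random.default_rng(0)
    return any(abs(det(s * E - A)) > tol for s in rng.normal(size=trials))

def finite_spectrum(E, A, big=1e8):
    """Finite generalized eigenvalues of det(lambda*E - A) = 0."""
    w = eig(A, E, right=False)
    return w[np.isfinite(w) & (np.abs(w) < big)]

def is_stable(E, A):
    return bool(np.all(finite_spectrum(E, A).real < 0))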

In the next section, an algorithm is derived which can be programmed on a computer for the computation of (sE − A)^{-1} (Dragutin Lj. Debeljković 2004). Firstly, find a constant c so that the matrix (cE − A) is invertible; that is, a number c which is not a root of the polynomial det(sE − A) must be found. The term (cE − A)^{-1} can easily be evaluated using a computer, since for constant c, (cE − A) is a known constant matrix of appropriate dimension. The following change of variable is introduced: p = s − c. The inverse of the matrix (sE − A), where det E = 0, is given by the following formula:

(sE − A)^{-1} = { I + (s − c) Ê }^{-1} (cE − A)^{-1},    Ê = (cE − A)^{-1} E.


With c chosen as above, the algorithm forms Ê = (cE − A)^{-1}E and assembles the expansion of {I + (s − c)Ê}^{-1} term by term; the detailed recursive steps are given in Debeljković (2004).
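A direct symbolic sketch of this computation with SymPy, using the closed-form identity above rather than the step-by-step recursion (whose details are in Debeljković 2004); the example pencil is arbitrary:

import sympy as sp

def resolvent_descriptor(E, A, c=1.0):
    """(sE - A)^{-1} = [I + (s - c) Ehat]^{-1} (cE - A)^{-1}, Ehat = (cE - A)^{-1} E."""
    s = sp.symbols('s')
    E, A = sp.Matrix(E), sp.Matrix(A)
    n = E.shape[0]
    cEA_inv = (c * E - A).inv()        # requires c not a root of det(sE - A)
    Ehat = cEA_inv * E
    R = (sp.eye(n) + (s - c) * Ehat).inv() * cEA_inv
    return sp.simplify(R)

# Example with a singular E (det E = 0):
E = [[1, 0], [0, 0]]
A = [[0, 1], [-1, -1]]
sp.pprint(resolvent_descriptor(E, A))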

NUMERICAL EXAMPLE The system under consideration is given by a descriptor pair (E, A) with det E = 0, together with input and output matrices B and C (the numerical entries are not reproduced in this copy). Choosing a constant c which is not a root of det(sE − A), the matrix Ê is easily computed using Ê = (cE − A)^{-1}E. Using the above results, the inverse formula takes the form (sE − A)^{-1} = {I + (s − c)Ê}^{-1}(cE − A)^{-1}. Carrying out these computations, the matrix transfer function is finally obtained as G(s) = C(sE − A)^{-1}B.

LINEARIZATION OF NONMONIC MATRIX POLYNOMIALS: Given a polynomial eigenvalue problem P(λ)x = 0 of degree l, the linearization approach consists in converting P(λ) to a linear pencil L(λ) = λX + Y having the same spectrum as P(λ). This linear eigenvalue problem can then be solved by standard methods, e.g., the QZ algorithm.

DEFINITION [Gohberg, Lancaster, and Rodman (2009)] Let P(λ) be an n × n matrix polynomial of degree l. A pencil L(λ) = λX + Y with X, Y ∈ C^{ln×ln} is called a linearization of P(λ) if there exist unimodular matrix polynomials E(λ), F(λ) such that

E(λ) L(λ) F(λ) = [ P(λ)      0
                   0     I_{(l−1)n} ].


REMARK The linearization is not unique (D.S. Mackey et al. 2006, F. Tisseur et al. 2001). Most of the linearizations used in practice are of the companion form C(λ) = λX + Y with

X = diag(A_l, I_n, …, I_n),      Y = [ A_{l−1}  A_{l−2}  …  A_1   A_0
                                       −I_n      0       …  0     0
                                        0       −I_n     …  0     0
                                        ⋮                          ⋮
                                        0        0       …  −I_n  0 ],

where P(λ) = ∑_{i=0}^{l} A_i λ^i.
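A minimal sketch of the first companion linearization and its solution through the QZ algorithm (scipy.linalg.eig applied to the pencil); the coefficient ordering coeffs = [A_0, …, A_l] is an assumed convention:

import numpy as np
from scipy.linalg import eig

def companion_pencil(coeffs):
    """First companion linearization lam*X + Y of P(lam) = sum_i coeffs[i] lam**i."""
    l = len(coeffs) - 1
    n = coeffs[0].shape[0]
    X = np.eye(l * n)
    X[:n, :n] = coeffs[l]                          # leading coefficient A_l
    Y = np.zeros((l * n, l * n))
    for j in range(l):
        Y[:n, j*n:(j+1)*n] = coeffs[l - 1 - j]     # top block row: A_{l-1}, ..., A_0
    for i in range(1, l):
        Y[i*n:(i+1)*n, (i-1)*n:i*n] = -np.eye(n)   # subdiagonal -I blocks
    return X, Y

def polyeig(coeffs):
    """Eigenvalues of P(lam): solve (-Y) v = lam X v by the QZ algorithm."""
    X, Y = companion_pencil(coeffs)
    return eig(-Y, X, right=False)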

THE FINITE AND INFINITE SPECTRAL DATA When the leading coefficient matrix A_l is singular, the degree r of det P(λ) is smaller than ln and P(λ) has r finite eigenvalues, to which we add ln − r infinite eigenvalues. Infinite eigenvalues correspond to the zero eigenvalues of the reverse polynomial rev P(λ) = λ^l P(1/λ).

LEMMA (Lazaros Moysis et al. 2014) Let P(λ) be a general matrix polynomial, and let μ and ν denote the sums of the degrees of the finite and infinite elementary divisors of P(λ), respectively; then μ + ν = ln.

The finite Jordan pair (Gohberg et al., 1982, Chapter 7) of the reverse polynomial rev P(λ), corresponding to its zero structure at λ = 0, is defined as the infinity Jordan pair (X_∞, J_∞) of P(λ). As a result, the infinity Jordan pair of P(λ) satisfies completeness properties analogous to those of a finite Jordan pair. Furthermore, the structure of the infinity Jordan pair of P(λ) is closely related (Vardulakis, 1991) to its Smith-McMillan form at infinity.

THEOREM (Gohberg et al., 2009) Let (X_F, J_F) and (X_∞, J_∞) be the finite and infinite Jordan pairs of P(λ), with X_F ∈ C^{n×μ}, J_F ∈ C^{μ×μ}, X_∞ ∈ C^{n×ν} and J_∞ ∈ C^{ν×ν}, where μ + ν = ln. These pairs satisfy the following properties:

1. deg det P(λ) = μ with zeros the eigenvalues of J_F, while det(rev P(λ)) has a zero at λ = 0 of multiplicity ν and J_∞ is nilpotent (all its eigenvalues are zero).

2. The ln × ln block matrix

   Q = [ X_F            X_∞ J_∞^{l−1}
         X_F J_F        X_∞ J_∞^{l−2}
         ⋮              ⋮
         X_F J_F^{l−1}  X_∞          ]   is nonsingular.

3. ∑_{i=0}^{l} A_i X_F J_F^i = 0 and ∑_{i=0}^{l} A_{l−i} X_∞ J_∞^i = 0.

In addition, a realization of P^{-1}(λ) is given by:


P^{-1}(λ) = [ X_F  X_∞ ] ( λ diag(I_μ, J_∞) − diag(J_F, I_ν) )^{-1} [ Z_F
                                                                      Z_∞ ]
          = X_F (λI_μ − J_F)^{-1} Z_F + X_∞ (λJ_∞ − I_ν)^{-1} Z_∞,

where the block column [Z_F; Z_∞] is determined from the inverse of the nonsingular matrix Q in property 2; note that (λJ_∞ − I_ν)^{-1} = −∑_{i≥0} λ^i J_∞^i is a finite sum because J_∞ is nilpotent.
REMARKS

1. Since rev P(0) = A_l, P(λ) has no eigenvalue at infinity if and only if A_l is nonsingular. In particular, monic matrix polynomials (when A_l = I) have no eigenvalue at infinity.

2. Clearly, the polynomial rev P(λ) has a zero eigenvalue if and only if A_l is singular and, in this case, we say that P(λ) has an eigenvalue at infinity and the multiplicities of this eigenvalue are defined to be just those of the zero eigenvalue of rev P(λ). Similarly, if A_0 is singular, rev P(λ) is said to have an eigenvalue at infinity.

3. If λ_0 is a finite eigenvalue of P(λ), then det P(λ_0) = 0 and there exist nonzero vectors x ∈ C^n for which P(λ_0)x = 0, called the (right) eigenvectors associated with λ_0. The geometric multiplicity of a finite eigenvalue λ_0 is given by dim ker P(λ_0) = n − rank P(λ_0). The algebraic multiplicity is equal to the multiplicity of λ_0 as a zero of det P(λ). When λ_0 is a multiple eigenvalue (i.e. a multiple zero of det P(λ)), the concept of eigenvector has been generalized to the so-called Jordan chain.

4. When the leading coefficient matrix A_l is singular, the Jordan pair (X, J) is decomposed into a finite Jordan pair (X_F, J_F) corresponding to the finite eigenvalues and an infinite Jordan pair (X_∞, J_∞) corresponding to the infinite eigenvalues, where J_∞ is a Jordan matrix formed of Jordan blocks with eigenvalue 0.

THEOREM (George Fragulis 1993) Let P(λ) be a regular matrix polynomial and let (X_F, J_F) and (X_∞, J_∞) be its finite and infinite Jordan pairs, respectively. Then the pair ([X_F  X_∞], J_F ⊕ J_∞) is a decomposable pair for P(λ).

GENERALIZED EIGENVALUE PROBLEM AND DEGENERATE MATRIX POLYNOMIALS
Consider an ordered pair (A, E) of matrices in C^{n×n}. A nonzero vector x ∈ C^n is called an eigenvector of the pair (A, E) if there exist α, β ∈ C, not both zero, such that β A x = α E x. The scalars α and β are not uniquely determined by x, but their ratio is (except in the singular case A x = E x = 0, which we will mention again below). If β ≠ 0, the equation β A x = α E x is equivalent to A x = λ E x, where λ = α/β. The scalar λ is then called an eigenvalue of the pair (A, E) associated with the eigenvector x. If β A x = α E x holds with β = 0 and α ≠ 0, then E x = 0 and (A, E) is said to have an infinite eigenvalue; the eigenvalue of (A, E) associated with x is ∞. This is the generalized eigenvalue problem. In the case E = I it reduces to the standard eigenvalue problem. The following proposition records some fairly obvious facts.


PROPOSITION Let A, E ∈ C^{n×n}, let λ ∈ C be nonzero, and let x ∈ C^n be nonzero.

1. λ is an eigenvalue of (A, E) with eigenvector x if and only if 1/λ is an eigenvalue of (E, A) with eigenvector x.
2. ∞ is an eigenvalue of (A, E) if and only if 0 is an eigenvalue of (E, A).
3. ∞ is an eigenvalue of (A, E) if and only if E is a singular matrix.

REMARKS: If E is nonsingular, the eigenvalues of (A, E) are exactly the eigenvalues of E^{-1}A and of AE^{-1}. If x is an eigenvector of (A, E) with associated eigenvalue λ, then x is an eigenvector of E^{-1}A with eigenvalue λ, and Ex is an eigenvector of AE^{-1} with eigenvalue λ. The expression (λE − A), with indeterminate λ, is commonly called a matrix pencil. The terms "matrix pair" and "matrix pencil" are used more or less interchangeably. For example, if x is an eigenvector of the pair (A, E), we also say that x is an eigenvector of the matrix pencil λE − A. Clearly λ is an eigenvalue of (A, E) if and only if the matrix (λE − A) is singular, and this is in turn true if and only if det(λE − A) = 0: this is the characteristic equation of the pair (A, E). From the definition of the determinant it follows easily that the function p(λ) = det(λE − A) is a polynomial in λ of degree n or less. We call it the characteristic polynomial of the pair (A, E). It can happen that the characteristic polynomial p(λ) is identically zero. For example, if there is a nonzero x such that A x = E x = 0, then (λE − A)x = 0 for all λ, and det(λE − A) is identically zero; every λ ∈ C is an eigenvalue. The pair (A, E) is called a singular pair if its characteristic polynomial is identically zero. Otherwise it is called a regular pair.
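A small numerical illustration with SciPy (a singular E produces an infinite eigenvalue, which LAPACK reports as inf, or through a zero β when homogeneous eigenvalues are requested); the matrices are arbitrary:

import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])              # singular, so the pair (A, E) has an infinite eigenvalue

w, vr = eig(A, E)                       # generalized eigenvalues: A v = lambda E v
print(w)                                # one finite eigenvalue (2.0) and one inf

ab, vr2 = eig(A, E, homogeneous_eigvals=True)
alpha, beta = ab                        # pairs (alpha_i, beta_i); beta_i == 0 flags lambda = inf
print(alpha, beta)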

GENERALIZED REALIZATIONS WITH A SPECIAL COORDINATE FOR


MATRIX FRACTION DESCRIPTIONS
A generalized and feasible realization algorithm for constructively determining a generalized state-space representation from a so-called column (row)-pseudoproper or a column (row)-proper right (left) matrix fraction description (MFD) is proposed (Jason S. H. Tsai 1992). The realized state-space representation form with a special coordinate is proved to be controllable and observable in the sense of Rosenbrock and Cobb; moreover, the dimension of the system after realization is equal to the determinantal degree of the given right (left) MFD in the normal sense. If the MFD is right (left) coprime, then the algorithm provides a minimal realization in state-space representation (i.e. the realized system is controllable and observable).

DEFINITION: Consider a non-singular polynomial matrix D(s) with dimension m × m, and let k_{ci} be the highest degree of the i-th column of D(s) and k_{ri} be the highest degree of the i-th row of D(s). If deg det D(s) = ∑_{i=1}^{m} k_{ci}, then D(s) is column reduced. If deg det D(s) = ∑_{i=1}^{m} k_{ri}, then D(s) is row reduced.

DEFINITION: Assume the given right MFD H(s) = N(s)D(s)^{-1} is already in column-based form; then it will be called column pseudoproper if k_{ci}(N) ≤ k_{ci}(D), where k_{ci}(N) and k_{ci}(D) are the i-th generalized column degrees of N(s) and D(s), respectively. Similarly, these definitions can be appropriately extended to the row-pseudoproper case.


DEFINITION: We define the following, for i = 1, …, m and column degrees k_{ci}:

S_c(s) = blockdiag{ s^{k_{ci}} },    Ψ_c(s) = blockdiag{ [s^{k_{ci}−1}, …, s, 1]^T },

where the superscript T denotes transpose. Then one has

D(s) = D_h S_c(s) + D_l Ψ_c(s),    N(s) = N_h S_c(s) + N_l Ψ_c(s),

where D_h denotes the highest-column-degree coefficient matrix, D_l denotes the lowest-column-degree coefficient matrix, and N_h, N_l are constant matrices.
Firstly, we define the core realization built from the structure matrices S_c(s), Ψ_c(s) and the coefficient matrices defined above (the explicit block pattern is not reproduced in this copy). In the core realization, I_k denotes the k-dimensional identity matrix. If the given right MFD is column proper, then all the blocks of the core realization are fixed by D_h, D_l, N_h and N_l. However, when the MFD is only column pseudoproper, the remaining free blocks are arbitrarily chosen based on D_h and D_l. With the definitions of the above core realization, we can obtain a generalized (descriptor) realization {E, A, B, C} for the given system.

THEOREM (Jason S. H. Tsai 1992) The triple {E, A, C} is observable at finite modes, i.e. the matrix [(sE − A)^T, C^T]^T is of full column rank for all finite s, if and only if the polynomial matrices D(s) and N(s) are right coprime. For the convenience of describing the observability at infinite modes, we decompose C into two submatrices C_1 and C_2, i.e. C = [C_1  C_2]; if the associated rank condition at infinity is of full rank, then the realization above is observable at infinite frequencies.


EXAMPLE Consider a right MFD of the form H(s) = N(s)D(s)^{-1} (the specific numerical data are not reproduced in this copy). From the given structure it is seen that the generalized column degrees of N(s) do not exceed those of D(s), while deg det D(s) < ∑_i k_{ci}, i.e. D(s) is not column reduced; it is also shown to be column pseudoproper. Based on the above algorithm and the core realization, D(s) and N(s) are decomposed as D(s) = D_h S_c(s) + D_l Ψ_c(s) and N(s) = N_h S_c(s) + N_l Ψ_c(s). Since D_h is not of full rank, the free blocks of the core realization are chosen accordingly, and the realization structures {E, A, B, C} follow. Note that the quadruple {E, A, B, C} is in quasi-controller canonical form; since the given MFD is right coprime and the rank conditions at finite and infinite modes are satisfied, the above realized system is observable at finite and infinite modes.

Now, the realization for a given row-pseudoproper (row-proper) rational transfer matrix in the
left MFD is derived in the following.

DEFINITION: We define the following, for i = 1, …, m and row degrees k_{ri}:

S_r(s) = blockdiag{ s^{k_{ri}} },    Ψ_r(s) = blockdiag{ [s^{k_{ri}−1}, …, s, 1] }.

Then one has

D(s) = S_r(s) D_h + Ψ_r(s) D_l,    N(s) = S_r(s) N_h + Ψ_r(s) N_l,

where D_h denotes the highest-row-degree coefficient matrix, D_l denotes the lowest-row-degree coefficient matrix, and N_h, N_l are constant matrices.


Firstly, we define the core realization built from the structure matrices S_r(s), Ψ_r(s) and the coefficient matrices defined above (the explicit block pattern is not reproduced in this copy). In the core realization, I_k denotes the k-dimensional identity matrix. If the given left MFD is row proper, then all the blocks of the core realization are fixed by D_h, D_l, N_h and N_l. However, when the MFD is only row pseudoproper, the remaining free blocks are arbitrarily chosen based on D_h and D_l. With the definitions of the above core realization, we can obtain a generalized (descriptor) realization {E, A, B, C} for the given system.

THEOREM (Jason S. H. Tsai 1992) The triple {E, A, B} is controllable at finite modes, i.e. the matrix [sE − A, B] is of full row rank for all finite s, if and only if the polynomial matrices D(s) and N(s) are left coprime. For the convenience of describing the controllability at infinite modes, we decompose B into two submatrices B_1 and B_2; if the associated rank condition at infinity is of full rank, then the realization above is controllable at infinite frequencies.

EXAMPLE Consider a left MFD of the form H(s) = D(s)^{-1}N(s) (the specific numerical data are not reproduced in this copy). From the given structure it is seen that the generalized row degrees of N(s) do not exceed those of D(s), while deg det D(s) < ∑_i k_{ri}, i.e. D(s) is not row reduced; it is also shown to be row pseudoproper. Based on the above algorithm and the core realization, we have


the decompositions D(s) = S_r(s)D_h + Ψ_r(s)D_l and N(s) = S_r(s)N_h + Ψ_r(s)N_l. Since D_h is not of full rank, the free blocks of the core realization are chosen accordingly, and the realization structures {E, A, B, C} follow. Note that the quadruple {E, A, B, C} is in the quasi-observer canonical form; since the given MFD is left coprime and the rank conditions at finite and infinite modes are satisfied, the above realized system is controllable at finite and infinite modes.

CONTROLLABILITY AND OBSERVABILITY OF HIGHER ORDER SYSTEMS:

We consider the linear time-invariant dynamical system of the form ∑_{i=0}^{l} A_i x^{(i)}(t) = B u(t), y(t) = C x(t), where u(t) ∈ R^m is the input control, y(t) ∈ R^p is the output measurement, x(t) ∈ R^n is the state function and A_i ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n} are the coefficient matrices. If the leading coefficient matrix A_l is nonsingular then the associated matrix polynomial P(λ) = ∑_{i=0}^{l} A_i λ^i is said to be monic; otherwise we say that P(λ) is a nonmonic matrix polynomial. The tuple (P(λ), B) is called controllable if it is possible to drive the system into any desired state at any given time by a proper selection of the input. Controllability is equivalent to the rank problem rank[P(λ), B] = n for all λ ∈ C, see (Kailath 1980). The autonomous system is observable if each possible output is caused by a unique initial condition. While controllability concerns the states that can be reached by a proper choice of the input, observability is defined in the reverse direction, in terms of the set of initial conditions compatible with a given output. It is not surprising that the rank characterization for observability, rank[P(λ)^T, C^T]^T = n for all λ ∈ C, is analogous to that for controllability. The system is stabilizable if for all initial conditions there exists an input such that the state vector decays asymptotically. When the system is controllable, the system is stabilizable, but the reverse implication is not necessarily true. Stabilizability can be reduced to the condition rank[P(λ), B] = n for all λ in the closed right half plane.
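A sketch of these rank tests in Python/NumPy; since rank[P(λ), B] can only drop where P(λ) is singular, it suffices to test at the finite eigenvalues of P(λ), computed here through a first companion linearization (the function names and tolerances are illustrative):

import numpy as np
from scipy.linalg import eig

def poly_eval(coeffs, lam):
    """P(lam) = sum_i coeffs[i] * lam**i for square coefficient blocks."""
    P = np.zeros(coeffs[0].shape, dtype=complex)
    for i, Ai in enumerate(coeffs):
        P = P + Ai * lam**i
    return P

def finite_poly_eigs(coeffs):
    """Finite eigenvalues of P via the first companion linearization."""
    l, n = len(coeffs) - 1, coeffs[0].shape[0]
    X = np.eye(l * n)
    X[:n, :n] = coeffs[l]
    Y = np.zeros((l * n, l * n))
    for j in range(l):
        Y[:n, j*n:(j+1)*n] = coeffs[l - 1 - j]
    for i in range(1, l):
        Y[i*n:(i+1)*n, (i-1)*n:i*n] = -np.eye(n)
    w = eig(-Y, X, right=False)
    return w[np.isfinite(w)]

def is_controllable(coeffs, B, tol=1e-9):
    """PBH-type test: rank [P(lam), B] = n at every finite eigenvalue lam."""
    n = coeffs[0].shape[0]
    for lam in finite_poly_eigs(coeffs):
        if np.linalg.matrix_rank(np.hstack([poly_eval(coeffs, lam), B]), tol=tol) < n:
            return False
    return True

An analogous test with [P(λ)^T, C^T]^T characterizes observability, and restricting the eigenvalue set to the closed right half plane gives the stabilizability condition mentioned above.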
