
Interpolation algorithm of Leverrier-Faddeev type for polynomial matrices

Marko D. Petković, Predrag S. Stanimirović

University of Niš, Department of Mathematics, Faculty of Science,
Višegradska 33, 18000 Niš, Serbia and Montenegro
E-mail: dexter_of_nis@neobee.net, pecko@pmf.pmf.ni.ac.yu

Abstract

We investigate an interpolation algorithm for computing outer inverses of a given polynomial matrix, based on the Leverrier-Faddeev method. This algorithm is a continuation of the finite algorithm for computing generalized inverses of a given polynomial matrix introduced in [11]. Also, a method for estimating the degrees of the polynomial matrices arising in the Leverrier-Faddeev algorithm is given as an improvement of the interpolation algorithm. Based on a similar idea, we introduce methods for computing the rank and index of a polynomial matrix. All algorithms are implemented in the symbolic programming language MATHEMATICA and tested on several different classes of test examples.

AMS Subj. Class.: 15A09, 68Q40.
Key words: Pseudoinverse, interpolation, MATHEMATICA, Leverrier-Faddeev method, polynomial matrices.

Introduction

Let $\mathbb{R}$ be the set of real numbers, $\mathbb{R}^{m\times n}$ the set of $m\times n$ real matrices, and $\mathbb{R}^{m\times n}_r = \{X \in \mathbb{R}^{m\times n} :\ \operatorname{rank}(X) = r\}$. As usual, $\mathbb{R}[s]$ (resp. $\mathbb{R}(s)$) denotes the set of polynomials (resp. rational functions) with real coefficients in the indeterminate $s$. The $m\times n$ matrices with elements in $\mathbb{R}[s]$ (resp. $\mathbb{R}(s)$) are denoted by $\mathbb{R}[s]^{m\times n}$ (resp. $\mathbb{R}(s)^{m\times n}$).
For any matrix $A \in \mathbb{R}^{m\times n}$ the Moore-Penrose inverse of $A$ is the unique matrix, denoted by $A^\dagger$, satisfying the following Penrose equations in $X$ [1, 14]:

(1) $AXA = A$,  (2) $XAX = X$,  (3) $(AX)^T = AX$,  (4) $(XA)^T = XA$.


If $A$ is a square matrix we also consider the following equations:

(5) $AX = XA$,  ($1^k$) $A^{k+1}X = A^k$.

For a sequence $S$ of elements from the set $\{1, 2, 3, 4, 5\}$, the set of matrices obeying the equations represented in $S$ is denoted by $A\{S\}$. A matrix from $A\{S\}$ is called an $S$-inverse of $A$ and denoted by $A^{(S)}$. The Moore-Penrose inverse $A^\dagger$ of $A$ is the unique $\{1, 2, 3, 4\}$-inverse of $A$. The group inverse, denoted by $A^{\#}$, is the unique $\{1, 2, 5\}$-inverse of $A$, and it exists if and only if $\operatorname{ind}(A) = \min\{k :\ \operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)\} = 1$. A matrix $X = A^D$ is said to be the Drazin inverse of $A$ if ($1^k$) (for some positive integer $k$), (2) and (5) are satisfied. By $A_R^{-1}$ and $A_L^{-1}$ we denote a right and a left inverse of $A$, respectively.

A matrix $X \in \mathbb{C}^{n\times m}$ is called a $2$-inverse of $A$ with the prescribed range $T$ and null space $S$, denoted by $A^{(2)}_{T,S}$, if the following conditions are satisfied:
$$XAX = X,\qquad \mathcal{R}(X) = T,\qquad \mathcal{N}(X) = S,$$
where $\mathcal{R}(X)$ is the range of $X$ and $\mathcal{N}(X)$ is the null space of $X$. It is a well-known fact [1, 9] that if $\dim T = \dim S^{\perp}$, then there exists a unique $A^{(2)}_{T,S}$ if and only if $AT \oplus S = \mathbb{C}^m$.
In the literature, researchers have proposed many representations and methods for computing $A^{(2)}_{T,S}$; see [4, 16, 17, 18, 19].
An algorithm for computing the Moore-Penrose inverse of a constant real matrix $A(s) \equiv A_0 \in \mathbb{R}^{m\times n}$ by means of the Leverrier-Faddeev algorithm (also called the Souriau-Frame algorithm) is introduced in [2]. A generalization of this algorithm for the computation of various classes of generalized inverses is introduced in [11]. This algorithm generates the class of outer inverses of a rational or polynomial matrix. In the same paper, partial cases are isolated in which the class of reflexive g-inverses is derived, as well as the Moore-Penrose inverse and the Drazin inverse.
In [13] Schuster and Hippe generalize known polynomial interpolation methods to polynomial matrices in order to compute the ordinary inverse of (non-singular) polynomial matrices using the formula $A^{-1} = (\det A)^{-1}\operatorname{adj}A$.

In [6] a representation and corresponding algorithm for computing the Moore-Penrose inverse of a nonregular polynomial matrix of arbitrary degree are utilized. The corresponding algorithm for two-variable polynomial matrices is presented in [7]. In [5] an implementation of the algorithm for computing the Moore-Penrose inverse of a singular one-variable rational matrix in the symbolic computational language MAPLE is described.

An effective version of this algorithm is presented in [8]. That algorithm is efficient when the elements of the input matrix are polynomials with only a few nonzero addends. On the other hand, the interpolation algorithm presented in this paper shows better performance than the classical method when the matrices are dense.


In [3, 10] a representation and corresponding algorithm for computing the Drazin inverse of a nonregular polynomial matrix of arbitrary degree are introduced. The corresponding algorithm for two-variable polynomial matrices, together with an implementation in the programming language MATLAB and an effective version of the algorithm, is presented in [15].

In the present paper we describe a universal interpolation method for computing outer inverses, applying the interpolation technique from [13] to the general Leverrier-Faddeev method for rational and polynomial matrices introduced in [11].

In the second section we restate the finite algorithm based on the Leverrier-Faddeev method for one-variable polynomial matrices and present a complexity analysis of this algorithm.

In the third section we present a modification of the finite algorithm, based on interpolation techniques. We use the Leverrier-Faddeev method to compute constant generalized inverses at selected base points, and the Newton interpolation method to generate the interpolating polynomials. A complexity analysis of the new algorithm is also given.

In the fourth section we use similar ideas and results from the previous section to establish methods for computing the rank and index of a polynomial matrix. These numbers are required for computing some classes of generalized inverses.

In the fifth section we improve the previous algorithm using a more efficient estimation of the degrees of the matrices which appear in the Leverrier-Faddeev method.

The implementation of the algorithms in the symbolic programming language MATHEMATICA and the experience with testing the programs are shown in the last section.

Leverrier-Faddeev method for one-variable polynomial matrices

A general finite algorithm of Leverrier-Faddeev type for computing various classes of generalized inverses of polynomial matrices is introduced in [11].
Algorithm 2.1. Input: polynomial matrices $R(s), T(s) \in \mathbb{R}[s]^{n\times m}$ with respect to the unknown $s$ and an integer $e \in \mathbb{N}$.

Step 1. Set the initial values $B_0(s) = I_n$, $a_0(s) = 1$.

Step 2. For $j = 1, \ldots, n$ perform the following:

  Step 2.1. Calculate $A_j(s) = T(s)R(s)^T B_{j-1}(s)$.

  Step 2.2. Calculate $a_j(s) = -\dfrac{\operatorname{Tr}(A_j(s))}{j}$.

  Step 2.3. Calculate $B_j(s) = A_j(s) + a_j(s)I_n$.

Step 3. Let $k$ be the maximal index such that $a_k(s) \ne 0$. Return
$$X_e(s) = \begin{cases} \dfrac{(-1)^e}{a_k(s)^e}\,\bigl(R(s)^T B_{k-1}(s)\bigr)^e, & k > 0,\\[1mm] 0, & k = 0. \end{cases}$$
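For orientation, here is a worked run (our illustration, using the conventions above) on the constant matrix $A = \operatorname{diag}(1,0)$ with $R = T = A$ and $e = 1$, which should produce the Moore-Penrose inverse:
$$A_0 = AA^T = A,\quad A_1 = A_0B_0 = A,\quad a_1 = -\operatorname{Tr}(A_1) = -1,\quad B_1 = A_1 - I_2 = \begin{bmatrix}0&0\\0&-1\end{bmatrix},$$
$$A_2 = A_0B_1 = 0,\quad a_2 = -\tfrac{1}{2}\operatorname{Tr}(A_2) = 0,\quad k = 1,\quad X_1 = \frac{(-1)^1}{a_1}\,A^TB_0 = A = A^\dagger.$$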

The next theorem shows how to use Algorithm 2.1 for computing different types of generalized inverses of a given polynomial matrix $A(s)$. We restate it from [11]:

Theorem 2.1. Let $A(s) \in \mathbb{R}[s]^{n\times m}$ be a polynomial matrix and $A(s) = P(s)Q(s)$ its full-rank factorization. The following statements are valid:

(1) In the case $e = 1$, $R(s) = T(s) = A(s)$ we get $X_1(s) = A^\dagger(s)$.

(2) If $m = n$, $e = 1$, $R(s) = A(s)^l$, $T(s) = A(s)$, $l \ge \operatorname{ind}A(s)$, we obtain $X_1(s) = A^D(s)$.

(3) If $T(s) = A(s)$, $n > m = \operatorname{rank}A(s)$, then for arbitrary $R(s)$ such that $A(s)R(s)^T$ is invertible we have $X_1(s) = A(s)_R^{-1}$.

(4) If $m = n$, $e = 1$, $R(s) = A(s)^k$, $T(s) = I_n$, then $X_1(s)$ exists iff $\operatorname{ind}A(s) = k$, and $X_1(s) = A(s)A(s)^D$.

(5) In the case $m = n$, $e = l+1$, $T(s)R(s)^T = A(s)$, $R(s) = A(s)^l$, $l \ge \operatorname{ind}A(s)$, we obtain $X_e(s) = A(s)^D$.

(6) For $m = n$, $e = 1$, $T(s) = R(s) = A(s)^l$, $l \ge \operatorname{ind}A(s)$, we have $X_1(s) = (A(s)^D)^l$.

(7) $X_1(s) \in A(s)\{2\}$ for $e = 1$, $T(s) = A(s)$, $R(s) = G(s)H(s)$, where $G(s) \in \mathbb{R}[s]^{n\times t}$, $H(s) \in \mathbb{R}[s]^{t\times m}$, $\operatorname{rank}(H(s)A(s)G(s)) = t$.

(8) $X_1(s) \in A(s)\{1, 2\}$ for $e = 1$, $T(s) = A(s)$, $R(s) = G(s)H(s)$, where $G(s) \in \mathbb{R}[s]^{n\times r}$, $H(s) \in \mathbb{R}[s]^{r\times m}$, $\operatorname{rank}(H(s)A(s)G(s)) = r = \operatorname{rank}A(s)$.

(9) $X_1(s) \in A(s)\{1, 2, 3\}$ for $e = 1$, $T(s) = A(s)$, $R(s) = G(s)P(s)^T$, where $G(s) \in \mathbb{R}[s]^{n\times r}$, $\operatorname{rank}(P(s)^T A(s)G(s)) = r = \operatorname{rank}A(s)$.

(10) $X_1(s) \in A(s)\{1, 2, 4\}$ for $e = 1$, $T(s) = A(s)$, $R(s) = Q(s)^T H(s)$, where $H(s) \in \mathbb{R}[s]^{r\times n}$, $\operatorname{rank}(H(s)A(s)Q(s)^T) = r = \operatorname{rank}A(s)$.

(11) If $T(s) = A(s)$, $m > n = \operatorname{rank}A(s)$, then for arbitrary $R(s)$ such that $R(s)^T A(s)$ is invertible we get $X_1(s) = A(s)_L^{-1}$.
We use the simpler notations $k^{R,T}$, $a_i^{R,T}$ and $B_i^{R,T}$, respectively, for the values $k$, $a_i$ and $B_i$, $i = 0, \ldots, n$, when the input of Algorithm 2.1 consists of the matrices $R$ and $T$ (either polynomial or constant). Also we denote $a^{R,T} = a^{R,T}_{k^{R,T}}$ and $B^{R,T} = B^{R,T}_{k^{R,T}-1}$. The following definition and lemma will be used in further considerations:


Definition 2.1. For a given polynomial matrix $M(s) \in \mathbb{R}[s]^{n\times m}$ its maximal degree is defined as the maximal degree of its elements:
$$\deg M(s) = \max\{\operatorname{dg}(M(s))_{ij} \mid 1 \le i \le n,\ 1 \le j \le m\}.$$

Lemma 2.1. Let $R$ and $T$ be real $n\times m$ matrices and denote $A_0 = TR^T$. Then the following hold:

(a) $B^{R,T}_{k^{R,T}+i-1} = (TR^T)^{i-1}\bigl(A_0 B^{R,T} + a^{R,T} I_n\bigr)$ for all $i = 1, \ldots, n - k^{R,T}$;

(b) if $R = R(s)$ and $T = T(s)$, then also $\deg B_i^{R(s),T(s)}(s) \le i\deg A_0(s)$ and $\operatorname{dg}\, a_i^{R(s),T(s)}(s) \le i\deg A_0(s)$ for all $i = 0, \ldots, n$.

For the sake of simplicity, denote $d_0 = \deg(T(s)R(s)^T)$. Then the required time for Step 2.1 is $O(n^3 j d_0^2)$. The times required for Step 2.2 and Step 2.3 are $O(n j d_0)$, which is much less than the time required for Step 2.1. So, the total time required for Algorithm 2.1 is
$$\sum_{j=1}^{n} O(n^3 j d_0^2) = O(n^5 d_0^2).\tag{2.1}$$

In practice, the complexity of Algorithm 2.1 is smaller than (2.1) (not all elements of the matrices $B_j(s)$, $A_j(s)$ and $A_0(s)$ have maximal degree), but it is still large.

It can also be shown that the complexity of the Leverrier-Faddeev algorithm for constant matrices is $O(n^3\cdot n) = O(n^4)$.

Generalized inversion of polynomial matrices by interpolation

It is well known that there exists one and only one polynomial $f(s)$ of degree $q \le n$ which assumes the values $f(s_0), f(s_1), \ldots, f(s_n)$ at distinct base points $s_0, s_1, \ldots, s_n$. This polynomial is called the $q$th degree interpolation polynomial. Three important interpolation methods are [13]:

(i) the direct approach using the Vandermonde matrix,
(ii) Newton interpolation,
(iii) Lagrange interpolation.

In the case of finding generalized inverses of polynomial matrices (and also in many other applications) it is suitable to use the Newton interpolation polynomial [12].
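In MATHEMATICA, Newton interpolation is available through the built-in function InterpolatingPolynomial, on which the implementation in the last section relies. A minimal stand-alone illustration (our own example, not part of the package described later):

(* Recover p(s) = s^2 - 3s + 2 from its values at three distinct base
   points; a polynomial of degree 2 is determined by 3 samples. *)
pts = {{0, 2}, {1, 0}, {3, 2}};               (* pairs {s_i, p(s_i)} *)
p = Expand[InterpolatingPolynomial[pts, s]]   (* -> 2 - 3 s + s^2 *)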
In the following theorem we determine a sufficient number of interpolation points for computing the value $k^{R(s),T(s)}$ and the polynomials $B^{R(s),T(s)}$, $a^{R(s),T(s)}$. We use the notation $\kappa = k^{R(s),T(s)}$ for the value $k$ corresponding to the polynomial matrices $R(s)$ and $T(s)$.

Theorem 3.1. Let $T(s), R(s) \in \mathbb{R}[s]^{n\times m}$ and denote $A_0(s) = T(s)R(s)^T$. Then the following hold:

(a) Let $s_i$, $i = 0, \ldots, n\deg A_0(s)$, be any pairwise different real numbers. Then
$$\kappa = \max\{k^{R(s_i),T(s_i)} \mid i = 0, \ldots, n\deg A_0(s)\}.$$

(b) The polynomials $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ can be computed using the set of values $B^{R(s_i),T(s_i)}$ and $a^{R(s_i),T(s_i)}$, $i = 0, \ldots, \kappa\deg A_0(s)$.
Proof. (a) Let $s_i$, $i = 0, \ldots, n\deg A_0(s)$, be any pairwise different real numbers and $k' = \max\{k^{R(s_i),T(s_i)} \mid i = 0, \ldots, n\deg A_0(s)\}$. We will show that $k' = \kappa$.

Assume that $a_\kappa^{R(s),T(s)}(s_i) = 0$ for all $i = 0, \ldots, n\deg A_0(s)$. In accordance with Algorithm 2.1, the degree of the polynomial $a_\kappa^{R(s),T(s)}(s)$ is limited by $\kappa\deg A_0(s)$. Since $\kappa\deg A_0(s) \le n\deg A_0(s)$, we get $a_\kappa^{R(s),T(s)}(s) = 0$, which is a contradiction with the definition of $\kappa$. Hence
$$(\exists i_0)\ \bigl(a_\kappa^{R(s_{i_0}),T(s_{i_0})} = a_\kappa^{R(s),T(s)}(s_{i_0}) \ne 0\bigr),$$
which implies $k' \ge \kappa$.

On the other hand, by the definition of $\kappa$ we have $a_{\kappa+t}^{R(s),T(s)}(s) = 0$ for all $t = 1, \ldots, n-\kappa$. Since the equality $a_{\kappa+t}^{R(s_i),T(s_i)} = a_{\kappa+t}^{R(s),T(s)}(s_i) = 0$ is satisfied for all $i = 0, \ldots, n\deg A_0(s)$, it can be concluded that $a_{\kappa+t}^{R(s_i),T(s_i)} = 0$. Consequently $k^{R(s_i),T(s_i)} \le \kappa$ holds for all $i = 0, \ldots, n\deg A_0(s)$, and we obtain $k' \le \kappa$. This completes part (a) of the proof.
(b) Denote $\kappa_i = k^{R(s_i),T(s_i)}$, $B_i' = B^{R(s_i),T(s_i)}$ and $a_i' = a^{R(s_i),T(s_i)}$. It can easily be proven that the values $B^{R(s),T(s)}(s_i)$ and $a^{R(s),T(s)}(s_i)$ can be computed using the following relations:
$$B^{R(s),T(s)}(s_i) = B^{R(s),T(s)}_{\kappa-1}(s_i) = \begin{cases} A_0(s_i)^{\kappa-\kappa_i-1}\bigl(A_0(s_i)B_i' + a_i'I_n\bigr), & \kappa > \kappa_i,\\ B_i', & \kappa = \kappa_i,\end{cases}$$
$$a^{R(s),T(s)}(s_i) = \begin{cases} a_i', & \kappa_i = \kappa,\\ 0, & \kappa_i < \kappa.\end{cases}$$

Now we know the values of the polynomials $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ at $\kappa\deg A_0(s)+1$ different points $s_i$. From $\deg B^{R(s),T(s)} \le (\kappa-1)\deg A_0(s)$ and $\operatorname{dg}\, a^{R(s),T(s)} \le \kappa\deg A_0(s)$ it follows that the polynomials $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ can be computed from the set of values $B^{R(s),T(s)}(s_i)$ and $a^{R(s),T(s)}(s_i)$ ($i = 0, \ldots, \kappa\deg A_0(s)$) using interpolation.
The previous theorem gives the main idea for the following interpolation algorithm.
Algorithm 3.1. Input: polynomial matrices $R(s)$ and $T(s)$ of the order $n\times m$.

Step 1. Initial calculations:

  Step 1.1. Compute $A_0(s) = T(s)R(s)^T$, $d_0 = \deg A_0(s)$ and $d = n d_0$.

  Step 1.2. Select distinct base points $s_0, s_1, \ldots, s_d \in \mathbb{R}$.

Step 2. For $i = 0, 1, \ldots, d$ perform the following:

  Step 2.1. Calculate the constant matrices $R_i = R(s_i)$ and $T_i = T(s_i)$.

  Step 2.2. Compute the values $\kappa_i = k^{R_i,T_i}$, $B_i' = B^{R_i,T_i}_{\kappa_i-1}$ and $a_i' = a^{R_i,T_i}_{\kappa_i}$ by applying Algorithm 2.1 to the input matrices $R_i$ and $T_i$.

Step 3. Set $\kappa = k^{R(s),T(s)} = \max\{\kappa_i \mid i = 0, \ldots, d\}$. If $\kappa = 0$ then return $X_e(s) = 0$. Otherwise, for each $i = 0, \ldots, \kappa\deg A_0(s)$ perform the following:

  Step 3.1. Compute $A_i' = A_0(s_i)$ and
  $$B_i = \begin{cases} (A_i')^{\kappa-\kappa_i-1}\bigl(A_i'B_i' + a_i'I_n\bigr), & \kappa > \kappa_i,\\ B_i', & \kappa = \kappa_i.\end{cases}$$

  Step 3.2. If $\kappa > \kappa_i$ then set $a_i = 0$, else set $a_i = a_i'$.

Step 4. Interpolate the polynomial $a^{R(s),T(s)}$ and the matrix polynomial $B^{R(s),T(s)}$ using the pairs $(s_i, a_i)$ and $(s_i, B_i)$, $i = 0, \ldots, \kappa\deg A_0(s)$, as base points. The matrix interpolation is performed by interpolating each element $\bigl(B^{R(s),T(s)}\bigr)_{pq}$ from the values $(B_i)_{pq}$, $i = 0, \ldots, \kappa\deg A_0(s)$.

Step 5. Return the value $X_e(s)$ as in Step 3 of Algorithm 2.1.


In Step 3.1 and Step 3.2 we update only the first $\kappa\deg A_0(s)+1$ matrices $B_i$ and numbers $a_i$, because they are sufficient for Step 4.

Let us now make the complexity analysis of Algorithm 3.1. First, we have a loop of $d+1$ cycles. In every cycle we compute the values $a_i'$, $B_i'$ and $\kappa_i$ using Algorithm 2.1 for the constant matrices $R_i$ and $T_i$. The complexity of Algorithm 2.1 for constant matrices is $O(n^4)$. Therefore, the complexity of the exterior loop is $O(n^4 d) = O(n^5 d_0)$, where $d_0 = \deg A_0(s)$. In Step 3 we calculate the matrices $B_i$ in time $O(n\cdot n^3\log(\kappa-\kappa_i)) = O(n^4\log(n d_0))$, which is less than the complexity of the previous step. Here we assume that matrix powers $A^m$ are calculated using $O(\log m)$ matrix multiplications, by means of the recursive formulae $A^{2l} = (A^l)^2$ and $A^{2l+1} = (A^l)^2A$ (a minimal sketch of this scheme is given below). Finally, the complexity of the last step (interpolation) is $O(n^2 d^2) = O(n^4 d_0^2)$ when the Newton interpolation method is used. So, the complexity of the whole algorithm is $O(n^4 d_0^2 + n^5 d_0)$.
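The binary powering scheme mentioned above can be sketched in MATHEMATICA as follows (illustrative only; the built-in MatrixPower uses an equivalent strategy):

BinMatrixPower[A_, 0] := IdentityMatrix[Length[A]];
BinMatrixPower[A_, m_Integer?Positive] :=
  Module[{half = BinMatrixPower[A, Quotient[m, 2]]},
    (* A^(2l) = (A^l)^2 and A^(2l+1) = (A^l)^2 . A *)
    If[EvenQ[m], half.half, half.half.A]
  ];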
The obtained complexity is better (though not dramatically) than the complexity of Algorithm 2.1 for polynomial matrices. However, as we will show in the last section, in practice Algorithm 3.1 is much better than Algorithm 2.1, especially for dense matrices. Also, both algorithms usually do not attain their maximal complexity, which will also be shown in the last section.


Computing the rank and index of polynomial matrices

For the computation of some generalized inverses by means of Theorem 2.1, we need to compute the rank and index of certain matrices. These matrices are polynomial, and the well-known methods for computing the rank and index require working with rational matrices, so their working times are very large. Our algorithms deal only with constant matrices and reduce the working time drastically.

First we show how to compute the rank of a polynomial matrix $A(s) \in \mathbb{R}[s]^{n\times m}$ by interpolation. For that purpose, let us restate the following well-known lemma in our notation.
Lemma 4.1. The numbers $a_i^{R,T}$ computed by Algorithm 2.1 are the coefficients of the characteristic polynomial of the matrix $A_0 = TR^T$.
The next theorem shows the application of the Leverrier-Faddeev method and the previously considered interpolation method to this problem.

Theorem 4.1. Let $A \in \mathbb{R}^{n\times n}$. Then $\operatorname{rank}A = k^{A,I_n}$.
Proof. For the sake of simplicity, denote $k = k^{A,I_n}$ and $r = \operatorname{rank}A_0$. In our case we have $A_0 = AI_n = A$. From Lemma 4.1, the characteristic polynomial of $A_0 = A$ has the form
$$p_{A_0}(\lambda) = \det(A_0 - \lambda I_n) = \lambda^{n-k}\sum_{i=0}^{k} a_i\lambda^{k-i}.\tag{4.1}$$

The matrix $A_0 = A$ can be written in the form $A_0 = P^{-1}DP$, where $D$ is the Jordan normal form of the matrix $A$ and $P$ is a regular matrix. Let us reorder the eigenvalues $\lambda_1, \ldots, \lambda_n$ so that the first $r$ values are non-zero. Then also
$$p_{A_0}(\lambda) = p_D(\lambda) = \lambda^{n-r}\prod_{i=1}^{r}(\lambda-\lambda_i).\tag{4.2}$$

Comparing (4.1) and (4.2) we conclude that
$$\lambda^{n-k}\sum_{i=0}^{k} a_i\lambda^{k-i} = \lambda^{n-r}\prod_{i=1}^{r}(\lambda-\lambda_i),$$
which implies $k = r$, i.e., $\operatorname{rank}A = \operatorname{rank}A_0 = k = k^{A,I_n}$.
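A quick numerical illustration of Theorem 4.1 in MATHEMATICA (our own check, using the built-in characteristic polynomial, on a diagonalizable singular matrix):

(* The largest k with a_k != 0 in the characteristic polynomial
   equals the rank. *)
A = {{2, 0, 0}, {0, 3, 0}, {0, 0, 0}};
coeffs = Reverse[CoefficientList[CharacteristicPolynomial[A, x], x]];
k = Max[Flatten[Position[coeffs, c_ /; c =!= 0]]] - 1;
{k, MatrixRank[A]}   (* -> {2, 2} *)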


From Theorems 4.1 and 3.1 we can establish a formula for the computation of the rank of a square polynomial matrix $A(s)$.

Corollary 4.1. Let $A(s) \in \mathbb{R}[s]^{n\times n}$ and let $s_0, \ldots, s_{n\deg A(s)}$ be pairwise different real numbers. Then $\operatorname{rank}A(s)$ can be computed using the following formula:
$$\operatorname{rank}A(s) = \max\{\operatorname{rank}A(s_i) \mid i = 0, \ldots, n\deg A(s)\}.\tag{4.3}$$

Proof. Using Theorem 4.1, in the case $R(s) = A(s)$ and $T(s) = I_n$ we have $\kappa = \operatorname{rank}A(s)$ and $\kappa_i = \operatorname{rank}A_i' = \operatorname{rank}A(s_i)$. Now the conclusion follows directly from part (a) of Theorem 3.1.
Let us notice that in formula (4.3) we can use any method for computing the rank of constant matrices. For example, if we use Gaussian elimination, we need $O(n\deg A(s)\cdot n^3) = O(n^4\deg A(s))$ time for the computation of $\operatorname{rank}A(s)$. If we compute $\operatorname{rank}A(s_i)$ using the Leverrier-Faddeev method (and Theorem 4.1), the required time is $O(n^5\deg A(s))$.

Using Theorem 4.1 and Corollary 4.1 we can obtain a small improvement of Algorithm 3.1. Before Step 2 we can precompute $\kappa = \max\{\operatorname{rank}A_i' \mid i = 0, \ldots, d\}$ (where $A_i' = A_0(s_i)$), and after that, in Step 2, we can calculate $a_i' = a^{R(s_i),T(s_i)}$ and $B_i' = B^{R(s_i),T(s_i)}$ only for $i = 0, \ldots, \kappa\deg A_0(s)$. This modification is actually used in the implementation of Algorithm 3.1.
Now we describe an algorithm for computing the index of a given polynomial matrix $A(s)$. For this purpose we again use the Leverrier-Faddeev method and a well-known lemma (formulated in our notation):

Lemma 4.2. Let $A \in \mathbb{R}^{n\times n}$ be a given matrix and denote $t^{R,T} = \min\{r \mid B_r^{R,T} = 0\}$. Then $\operatorname{ind}A = n - t^{A,I_n}$.
Lemma 4.3. For arbitrary $R(s), T(s) \in \mathbb{R}[s]^{n\times m}$ and pairwise different real numbers $s_0, \ldots, s_d$, where $d = nd_0 = n\deg(T(s)R(s)^T)$, there holds
$$t^{R(s),T(s)} = \max\{t^{R(s_i),T(s_i)} \mid i = 0, \ldots, d\}.\tag{4.4}$$

Proof. Denote $\tau_i = t^{R(s_i),T(s_i)}$, $\tau = t^{R(s),T(s)}$ and $\tau' = \max\{\tau_i \mid i = 0, \ldots, d\}$. From the definition we have $\tau' = \tau_{i_0}$ for some $0 \le i_0 \le d$. It can be proven by mathematical induction that $B_i^{R,T} = 0 \Leftrightarrow i \ge t^{R,T}$ for every two constant or polynomial matrices $R$ and $T$. Therefore, if $\tau' > \tau$, then $B_{\tau'-1}^{R(s),T(s)}(s) = 0$ and also
$$B_{\tau'-1}^{R(s),T(s)}(s_{i_0}) = B_{\tau'-1}^{R(s_{i_0}),T(s_{i_0})} = 0,$$
which is a contradiction with the definition of $\tau_{i_0}$.

If we suppose that $\tau' < \tau$, then because $\tau' \ge \tau_i$ we have
$$B_{\tau'}^{R(s),T(s)}(s_i) = B_{\tau'}^{R(s_i),T(s_i)} = 0$$
for all $i = 0, \ldots, d$. Because $\deg B_{\tau'}^{R(s),T(s)}(s) \le \tau'd_0 \le nd_0 = d$, we have $B_{\tau'}^{R(s),T(s)}(s) = 0$, which is a contradiction with the definition of $\tau$. So we conclude that $\tau = \tau'$.
Now we prove the main theorem, which establishes an algorithm for computing $\operatorname{ind}A(s)$ for a given square polynomial matrix $A(s)$.

Theorem 4.2. Let $A(s) \in \mathbb{R}[s]^{n\times n}$ be a given polynomial matrix. Then $\operatorname{ind}A(s)$ can be computed using the following formula:
$$\operatorname{ind}A(s) = \min\{\operatorname{ind}A(s_i) \mid i = 0, \ldots, n\deg A(s)\}.\tag{4.5}$$

Proof. From Lemma 4.2 we have $\operatorname{ind}A(s) = n - t^{A,I_n}$ and $\operatorname{ind}A(s_i) = n - t^{A_i,I_n}$. The conclusion of the theorem now follows immediately from Lemma 4.3.

As in the previous case, we can use any method for computing the index of the constant matrices $A(s_i)$. For example, if we use Algorithm 2.1 (and Lemma 4.2), the total required time for the computation is $O(n^4\cdot nd_0) = O(n^5\deg A(s))$.
Let us now summarize the results of this section and construct two algorithms for computing the rank and index of a given square polynomial matrix.

Algorithm 4.1. (Computing the rank of a square polynomial matrix)
Input: polynomial matrix $A(s) \in \mathbb{R}[s]^{n\times n}$.

Step 1. Compute $d_0 = \deg A(s)$, $d = n d_0$ and select pairwise distinct real numbers $s_0, \ldots, s_d$.

Step 2. For each $i = 0, \ldots, d$ compute $r_i = \operatorname{rank}A(s_i)$ using some method for computing the rank of constant matrices.

Step 3. Return $\operatorname{rank}A(s)$ from formula (4.3).

Algorithm 4.2. (Computing the index of a square polynomial matrix)
Input: polynomial matrix $A(s) \in \mathbb{R}[s]^{n\times n}$.

Step 1. Compute $d_0 = \deg A(s)$, $d = n d_0$ and select pairwise distinct real numbers $s_0, \ldots, s_d$.

Step 2. For each $i = 0, \ldots, d$ compute $\iota_i = \operatorname{ind}A(s_i)$ using some method for computing the index of constant matrices.

Step 3. Return $\operatorname{ind}A(s)$ from formula (4.5).

Estimating the degrees of the polynomials $B_i^{R(s),T(s)}$, $a_i^{R(s),T(s)}$

In Lemma 2.1 we stated the inequality $\deg B_j^{R(s),T(s)} \le j\deg A_0(s)$, and we used this (and related) relations in the complexity analysis. In practice this bound is usually not attained, because some elements of the matrix $A_0$ (and of the other matrices) do not have maximal degree. In this section we try to improve this bound.

Definition 5.1. The degree matrix corresponding to $A(s) \in \mathbb{R}[s]^{n\times m}$ is the matrix defined by $\operatorname{dg}A(s) = [\operatorname{dg}A(s)_{ij}]_{n\times m}$.

The next lemma shows some properties of degree matrices.


Lemma 5.1. Let $A(s), B(s) \in \mathbb{R}(s)^{n\times n}$ and $a(s) \in \mathbb{R}(s)$. The following facts are valid:

(a) $\operatorname{dg}(A(s)B(s))_{ij} \le \max\{\operatorname{dg}A(s)_{ik} + \operatorname{dg}B(s)_{kj} \mid 1 \le k \le n\}$;

(b) $\operatorname{dg}(A(s) + B(s))_{ij} \le \max\{\operatorname{dg}A(s)_{ij}, \operatorname{dg}B(s)_{ij}\}$;

(c) $\operatorname{dg}(a(s)A(s))_{ij} = \operatorname{dg}A(s)_{ij} + \operatorname{dg}(a(s))$.

Proof. (a) From the definition of the matrix product, and using the simple formulae
$$\operatorname{dg}(p(s) + q(s)) \le \max\{\operatorname{dg}(p(s)), \operatorname{dg}(q(s))\},\qquad \operatorname{dg}(p(s)q(s)) = \operatorname{dg}(p(s)) + \operatorname{dg}(q(s)),$$
which hold for all $p(s), q(s) \in \mathbb{R}(s)$, we conclude
$$\operatorname{dg}(A(s)B(s))_{ij} = \operatorname{dg}\bigl((A(s)B(s))_{ij}\bigr) \le \max\{\operatorname{dg}A(s)_{ik} + \operatorname{dg}B(s)_{kj} \mid k = 1, \ldots, n\}.$$
This completes the proof of part (a). The other two parts can be verified similarly.
Using Lemma 5.1, we construct the following algorithm for estimating the upper bounds $D_i^B$ and $D_i^A$ corresponding to $B_i^{R(s),T(s)}$ and $A_i^{R(s),T(s)}$ respectively, as well as the upper bound $d_i$ corresponding to the polynomial $a_i(s)$.
R(s),T (s)

Algorithm 5.1. Estimating the degree matrix $\operatorname{dg}B_t^{R(s),T(s)}(s)$ and the degree of the polynomial $\operatorname{dg}\bigl(a_t^{R(s),T(s)}\bigr)$ for given matrices $R(s)$ and $T(s)$, $0 \le t \le n$.

Step 1. Set $(D_0^B)_{ii} = 0$, $i = 1, \ldots, n$, and $(D_0^B)_{ij} = -\infty$ for all $i = 1, \ldots, n$, $j = 1, \ldots, n$, $i \ne j$. Also denote $Q = \operatorname{dg}A_0(s)$ and set $d_0 = 0$.

Step 2. For $t = 1, \ldots, n$ perform the following:

  Step 2.1. Calculate $(D_t^A)_{ij} = \max\{Q_{ik} + (D_{t-1}^B)_{kj} \mid k = 1, \ldots, n\}$ for $i = 1, \ldots, n$, $j = 1, \ldots, n$.

  Step 2.2. Calculate $d_t = \max\{(D_t^A)_{ii} \mid i = 1, \ldots, n\}$.

  Step 2.3. Calculate $(D_t^B)_{ii} = \max\{(D_t^A)_{ii}, d_t\}$ and $(D_t^B)_{ij} = (D_t^A)_{ij}$ for all $i = 1, \ldots, n$, $j = 1, \ldots, n$, $i \ne j$.

Step 3. Return the set of matrices $\{D_t^B\}_{0\le t\le n}$ and the set of values $\{d_t\}_{0\le t\le n}$.

Consequently, the number of interpolation points required for the reconstruction of the polynomial $(B_t^{R(s),T(s)})_{ij}$ is $(D_t^B)_{ij} + 1$, and for $a_t^{R(s),T(s)}$ it is $d_t + 1$.

Implementation

All algorithms are implemented in the symbolic programming language MATHEMATICA. About the package MATHEMATICA see, for example, [20].

Function RTGeneralInv[R, T, kk] implements a slightly modified version of Algorithm 2.1.


RTGeneralInv[R_, T_, kk_] :=
  Module[{AA, n, m, t, h, a, A1, B, at, Btm1, Btm2, ID},
    AA = T.Transpose[R];                   (* A0 = T R^T *)
    {n, m} = Dimensions[AA];
    ID = IdentityMatrix[n];
    B = IdentityMatrix[n]; t = -1; a = 1;  (* B0 = I, a0 = 1 *)
    Btm1 = B;
    For[h = 1, h <= n, h++,
      A1 = Expand[AA.B];                   (* A_h = A0 B_{h-1} *)
      a = Expand[-1/h*Tr[A1]];             (* a_h = -Tr(A_h)/h *)
      If[a =!= 0, t = h; at = a; Btm2 = B;];   (* remember last nonzero a_h *)
      Btm1 = B;
      B = Expand[A1 + a*ID];               (* B_h = A_h + a_h I *)
      If[h == kk, Return[{t, Expand[a], Expand[Btm1]}];];
    ];
    Return[{t, Expand[at], Expand[Btm2]}];
  ];

For the input matrices $R, T \in \mathbb{R}^{n\times m}$ and a positive integer kk, it returns the list with elements $kk$, $a_{kk}^{R,T}$ and $B_{kk-1}^{R,T}$, respectively, if $1 \le kk \le n$. Otherwise, it returns the list with elements $k^{R,T}$, $a^{R,T}$ and $B^{R,T}$. The function works both for polynomial matrices (it is used in the first and second step of Algorithm 2.1) and for the constant matrices used in Step 2.2 of Algorithm 3.1.
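For example, case (1) of Theorem 2.1 (the Moore-Penrose inverse, $e = 1$) corresponds to the call with R = T = A; a small hypothetical session, with kk = 0 so that the full run is performed:

A = {{1, 0}, {0, 0}};
{k, ak, B} = RTGeneralInv[A, A, 0]    (* -> {1, -1, {{1, 0}, {0, 1}}} *)
X1 = Expand[(-1)/ak*Transpose[A].B]   (* X_1 = A† = {{1, 0}, {0, 0}} *)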
Function DegreeEstimator[R, T, i, var] implements Algorithm 5.1 and gives an upper bound for the degree of the polynomial $a_i^{R(s),T(s)}$ and for the degree matrix of $B_{i-1}^{R(s),T(s)}$.
DegreeEstimator[R_, T_, i_, var_] :=
  Module[{A, h, j, d1, d2, Bd, ad, Ad, A1d, Btm2d, atd, td, IDd},
    A = T.Transpose[R];
    {d1, d2} = Dimensions[A];
    Ad = MatrixDg[A, var];                 (* Q = dg A0(s) *)
    Ad = MultiplyDG[Ad, Transpose[Ad]];
    Bd = MatrixDg[IdentityMatrix[d1], var];    (* D_0^B *)
    IDd = Bd; td = -1; ad = -\[Infinity];
    For[h = 1, h <= i, h++,
      A1d = MultiplyDG[Ad, Bd];            (* Step 2.1: D_h^A *)
      ad = Max[Table[A1d[[j, j]], {j, d1}]];   (* Step 2.2: d_h *)
      td = h; atd = ad; Btm2d = Bd;
      Bd = A1d;                            (* Step 2.3: D_h^B *)
      For[j = 1, j <= d1, j++,
        Bd[[j, j]] = Max[Bd[[j, j]], ad];
      ];
    ];
    Return[{atd, Btm2d}];
  ];


This function uses the following two auxiliary functions: MatrixDg[A, var], which computes the degree matrix of a matrix A, and MultiplyDG[Ad, Bd], which computes an upper bound for the degree matrix of the product of matrices A and B whose degree matrices (or upper bounds thereof) are Ad and Bd. Both functions are based on Lemma 5.1.
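The bodies of these two helpers are not listed here; a possible implementation, consistent with Definition 5.1 and Lemma 5.1(a), could look as follows:

MatrixDg[A_, var_] :=
  Map[Exponent[#, var] &, A, {2}];   (* Exponent[0, var] gives -Infinity *)

MultiplyDG[Ad_, Bd_] :=
  Table[
    Max[Table[Ad[[i, k]] + Bd[[k, j]], {k, Length[Bd]}]],
    {i, Length[Ad]}, {j, Length[First[Bd]]}
  ];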
Functions PolyMatrixRank[A, var] and PolyMatrixIndex[A, var] implement Algorithms 4.1 and 4.2. In the first function we use the built-in MATHEMATICA function MatrixRank[A] for computing the rank of constant matrices. In the second we use the function MatrixIndex[A], based on a modified version of Algorithm 2.1.
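MatrixIndex is likewise not listed; a sketch along the lines of Lemma 4.2 ($\operatorname{ind}A = n - t^{A,I_n}$), assuming an exact (non-numeric) constant input matrix, could be:

MatrixIndex[A_] :=
  Module[{n = Length[A], B, A1, a, t = 0, h},
    B = IdentityMatrix[n];
    For[h = 1, h <= n, h++,
      A1 = A.B;
      a = -Tr[A1]/h;
      B = A1 + a*IdentityMatrix[n];
      (* t^{A,I_n} = min{r : B_r = 0}; B_n = 0 always holds *)
      If[t == 0 && B == ConstantArray[0, {n, n}], t = h];
    ];
    n - t
  ];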
PolyMatrixRank[A_, var_] :=
  Module[{r, r1, n, m, p, h, x},
    {n, m} = Dimensions[A];
    p = 1 + n*Max[MatrixDg[A, var]];   (* n deg A(s) + 1 base points *)
    x = Table[i, {i, 1, p}];           (* base points s_i = i *)
    r = 0;
    For[h = 1, h <= p, h++,
      r1 = MatrixRank[ReplaceAll[A, var -> x[[h]]]];
      If[r1 > r, r = r1];              (* formula (4.3) *)
    ];
    Return[r];
  ];
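A hypothetical check on a 2×2 polynomial matrix of generic rank 1:

A = {{s, s^2}, {s^2, s^3}};
PolyMatrixRank[A, s]   (* -> 1, since the second row is s times the first *)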

Function GeneralInvPoly[R, T, var] implements a small modification of Algorithm 3.1 (based on Theorem 3.1 and Corollary 4.1).
GeneralInvPoly[R_, T_, var_] :=
  Module[{AA, R1, T1, deg, n, m, x, i, h, p, Ta, TB, a, B, t, at, Btm1,
          r, degA, Deg},
    AA = Expand[T.Transpose[R]];            (* A0(s) = T(s) R(s)^T *)
    {n, m} = Dimensions[AA];
    degA = MatrixPolyDegree[AA, var];       (* d0 = deg A0(s) *)
    p = n*degA + 1;
    x = Table[i, {i, 1, p}];                (* base points s_i = i *)
    r = PolyMatrixRank[AA, var];            (* kappa, by Theorem 4.1 *)
    p = r*degA + 1;                         (* only kappa*d0 + 1 points needed *)
    Ta = Table[0, {i, 1, p}]; TB = Table[0, {i, 1, p}];
    For[h = 1, h <= p, h++,
      R1 = ReplaceAll[R, var -> x[[h]]];
      T1 = ReplaceAll[T, var -> x[[h]]];
      {t, a, B} = RTGeneralInv[R1, T1, r];
      Ta[[h]] = {h, a}; TB[[h]] = {h, B};
    ];
    {deg, Deg} = DegreeEstimator[R, T, r, var];   (* degree bounds, Alg. 5.1 *)
    at = SimpleInterpolation[Ta, deg, var];
    Btm1 = AdvMatrixMinInterpolation[TB, Deg, var];
    Return[{Expand[at], Expand[Btm1]}];
  ];

In this function the input consists of the polynomial matrices R(s) and T(s) of dimensions $n\times m$, given with respect to the variable var. It returns the polynomial $a^{R(s),T(s)}$ and the matrix polynomial $B^{R(s),T(s)}$; the value $\kappa = k^{R(s),T(s)}$ is obtained as the rank r of $A_0(s)$ (cf. Theorem 4.1 and the modification described after Corollary 4.1). In this implementation we used $s_i = i$ for the base interpolation points; with this set of interpolation points the function is fastest (we also tried $s_i = -\lfloor n/2\rfloor + i$, $s_i = ni$, etc.).
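A hypothetical call, computing the data for the Moore-Penrose inverse of a small polynomial matrix (case (1) of Theorem 2.1, so R = T = A; the package functions above are assumed to be loaded):

A = {{s, 0}, {0, s^2}};
{ak, B} = GeneralInvPoly[A, A, s];       (* a_k(s) = s^6, B = B_{k-1}(s) *)
X1 = Simplify[(-1)/ak*Transpose[A].B]    (* -> {{1/s, 0}, {0, 1/s^2}} = A†(s) *)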
Inside the function GeneralInvPoly we are using our auxiliary functions
SimpleInterpolation[Ta, deg, var] and
AdvMatrixMinInterpolation[TB, Deg, var]
which provides interpolation of polynomials aR(s),T (s) and B R(s),T (s) respectively, through calculated data. Both functions are using built-in MATHEMATICA
function InterpolatingPolynomial[T, var] based on Newton interpolation
method.

Testing Experience

We tested the implementations of Algorithm 2.1 and of Algorithm 3.1 improved by Algorithm 5.1 on test cases from [21] and on randomly generated test matrices. We also tested Algorithms 4.1 and 4.2 on randomly generated test matrices.

In the next table we present the timings of the functions RTGeneralInv and GeneralInvPoly on test cases from [21]. In this experiment the input was R(s) = T(s) = A(s) and e = 1, so that, according to part (1) of Theorem 2.1, the output is the Moore-Penrose inverse $A^\dagger(s)$. All times are in seconds.
Matrix   Alg 2.1   Alg 3.1
S3       0.049     0.070
S6       0.42      0.67
S10      2.04      5.59
V4       0.08      1.3
V5       0.63      16.2
H3       0.01      0.03
H6       0.04      0.2
H10      0.5       2.01

These matrices are very sparse, so Algorithm 3.1 (GeneralInvPoly) is slower than Algorithm 2.1 (RTGeneralInv).
Example 7.1. Let us consider the test matrix $V_n(a,b)$ defined recursively [21]:
$$V_0(a,b) = \begin{bmatrix} a & b\\ b & a\end{bmatrix},\qquad V_n(a,b) = \begin{bmatrix} V_{n-1}(a,b) & V_{n-1}(a,b)\\ V_{n-1}(a,b) & V_{n-1}(a,b)\end{bmatrix}.$$


Denote $V_n(s) = V_n(s,s)$. For these matrices, $k^{V_n(s),V_n(s)} = 2^n - 1$, the polynomial $a^{V_n(s),V_n(s)}$ is a single monomial in $s$ of degree $2^{n+1}$, the matrix $B^{V_n(s),V_n(s)}$ is a monomial multiple of the identity matrix, and $X_1(s) = V_n(s)^\dagger$ is again a matrix of the form $V_n\bigl(\tfrac{c}{s}\bigr)$ for a suitable constant $c$.

As we can see, only the main diagonal elements of the matrix $B^{V_n(s),V_n(s)}$ are non-zero, and each of them has only one addend (only one coefficient is non-zero). The same holds for all $B_i^{V_n(s),V_n(s)}$, $0 \le i \le 2^n - 1$. This explains the bad result of Algorithm 3.1 on the test matrix V5. A similar situation occurs with the other test matrices from the previous table.
For presenting the test results on random matrices, let us consider the following two definitions.

Definition 7.1. For a given matrix $A(s)$ (polynomial or constant), the first sparse number $sp_1(A)$ is the ratio of the total number of non-zero elements to the total number of elements in $A(s)$. In the case when $A(s) = [a_{ij}(s)]$ has the order $m\times n$,
$$sp_1(A(s)) = \frac{|\{(i,j) \mid a_{ij}(s) \ne 0\}|}{mn}.$$

The first sparse number represents the density of the non-zero elements, and it lies between 0 and 1.
Definition 7.2. For a given polynomial matrix $A(s) \in \mathbb{R}[s]^{m\times n}$, the second sparse number $sp_2(A(s))$ is defined as the following ratio:
$$sp_2(A(s)) = \frac{|\{(i,j,k) \mid \operatorname{Coef}(a_{ij}(s), s^k) \ne 0\}|}{\deg A\cdot m\cdot n},$$
where $\operatorname{Coef}(P(s), s^k)$ denotes the coefficient corresponding to $s^k$ in the polynomial $P(s)$. The second sparse number represents the density of the non-zero coefficients contained in the elements $a_{ij}(s)$, and it also lies between 0 and 1.
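Both numbers are straightforward to compute; a small helper of our own (not part of the package), following the two definitions above and assuming a nonconstant matrix for $sp_2$:

Sp1[A_] :=
  Count[Flatten[A], e_ /; e =!= 0]/(Times @@ Dimensions[A]);

Sp2[A_, var_] :=
  Module[{dg = Max[Exponent[Flatten[A], var]], m, n},
    {m, n} = Dimensions[A];
    Total[Count[CoefficientList[#, var], c_ /; c =!= 0] & /@ Flatten[A]]/
      (dg*m*n)
  ];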
These two numbers measure the sparsity of the matrix A. As will be shown, the sparse numbers influence the timings of Algorithms 2.1 and 3.1. Our function RandomMatrix[n, deg, prob1, prob2, var] generates a random $n\times n$ polynomial matrix with respect to the variable var whose degree $\deg A$ equals deg and whose two sparse numbers $sp_1(A)$ and $sp_2(A)$ are equal to prob1 and prob2. In the next tables we present the average timings (averaged over 10 randomly generated test matrices of the same type) of both algorithms for $n = 5, 6, 7$ and $\deg A = 3, 4, 5$. In all cases we computed the Drazin inverse using part (2) of Theorem 2.1. All matrices had constant index, i.e. $\operatorname{ind}A(s) = 1$.



n   degA   Alg 2.1   Alg 3.1          n   degA   Alg 2.1   Alg 3.1
5   3      1.8       1.01             5   3      0.69      0.61
5   4      2.05      1.6              5   4      1.27      1.03
5   5      2.75      2.18             5   5      2.01      1.5
6   3      3.5       1.9              6   3      0.79      0.70
6   4      5.2       3.29             6   4      2.2       1.8
6   5      7.6       4.75             6   5      2.6       2.0
7   3      7.92      3.7              7   3      1.24      1.09
7   4      11.9      6.8              7   4      5.4       3.2
7   5      15.7      8.7              7   5      6.9       4.8
    sp1(A) = sp2(A) = 1                   sp1(A) = sp2(A) = 0.5
For dense matrices ($sp_1(A) = sp_2(A) = 1$), Algorithm 3.1 is much faster than Algorithm 2.1 in all test cases. For $sp_1(A) = sp_2(A) = 0.5$, Algorithm 2.1 becomes somewhat faster, but it is still slower than Algorithm 3.1. Note that the intermediate matrices in this computation usually have greater sparse numbers (basic matrix operations usually increase the sparse numbers), so the situation is similar to the first case. In our examples, the critical value of the sparse numbers at which both algorithms are almost equally fast is $sp_1(A) = sp_2(A) = 0.35$; for smaller sparse numbers Algorithm 2.1 is faster than Algorithm 3.1. Note also that when the sparse numbers decrease (under the condition $sp = sp_1(A) = sp_2(A)$), the evaluation time of Algorithm 2.1 decreases rapidly, while that of Algorithm 3.1 decreases only slowly: in Algorithm 3.1 the evaluation time of the interpolation depends on the degrees of the polynomials being interpolated, which decrease slowly with $sp_2$. Let us now consider two extreme cases, presented in the next tables.
n    degA   Alg 2.1   Alg 3.1         n    degA   Alg 2.1   Alg 3.1
9    3      1.9       1.0             9    3      0.1       0.25
9    4      3.1       1.9             9    4      0.11      0.21
9    5      4.6       2.6             9    5      0.2       0.44
10   3      3.6       1.6             10   3      0.19      0.48
10   4      5.5       2.9             10   4      0.3       0.59
10   5      7.7       4.0             10   5      0.42      0.81
11   3      5.9       2.7             11   3      0.54      0.72
11   4      8.5       4.1             11   4      0.59      0.78
11   5      14.2      7.05            11   5      0.71      1.77
     sp1(A) = 1, sp2(A) = 0.1              sp1(A) = 0.1, sp2(A) = 1

Note that the time of Algorithm 2.1 reduces rapidly when either $sp_1(A)$ or $sp_2(A)$ is small. In the case of interpolation (Algorithm 3.1), $sp_2(A)$ has almost no influence on the timing, which is not the case with $sp_1(A)$. When $sp_1(A)$ is small, the degree matrix $\operatorname{dg}A(s)$ has a large number of elements equal to $-\infty$. The same holds for the output matrix of Algorithm 5.1 (function DegreeEstimator[R, T, i, var]), although there this number is smaller. This accelerates the matrix interpolation (function AdvMatrixMinInterpolation[TB, Deg, var]).


The following table shows the testing results for Algorithms 4.1 and 4.2, compared with the results obtained by the direct application of the functions MatrixRank and MatrixIndex to polynomial matrices. In this case the working time of all functions depends directly on the values of the rank and index of the matrix, so we present only the average ratios of the running time of the direct function to the running time of the corresponding algorithm.

n    degA   MatrixRank / Alg 4.1   MatrixIndex / Alg 4.2
9    3      1.3                    1.0
9    4      2.1                    1.6
9    5      4.6                    2.3
10   3      3.6                    1.1
10   4      5.8                    2.3
10   5      8.7                    4.2
11   3      9.3                    2.3
11   4      10.5                   4.6
11   5      16.7                   8.95

Conclusion

We presented an interpolation variant of the finite algorithm for computing various classes of generalized inverses introduced in [11]. This algorithm is an extension of the Leverrier-Faddeev method: we apply polynomial interpolation to the finite algorithm, generalizing the principles from [13].

The computation of the generalized inverses of constant matrices in the interpolation algorithm is based on the Leverrier-Faddeev method. A complexity analysis is given both for the algorithm and for its modification. We also applied a similar idea to the computation of the rank and index of polynomial matrices, which are required in some applications of Algorithm 2.1. All algorithms are implemented in the symbolic programming language MATHEMATICA and tested on several classes of test examples. In practice, the interpolation algorithm was faster on dense matrices.

Future research can be based on the application of rational interpolation and the construction of similar algorithms for rational matrices. Note that many known methods for the computation of generalized inverses produce rational matrices as temporary variables even if the input matrix is polynomial.

References

[1] A. Ben-Israel and T.N.E. Greville, Generalized Inverses: Theory and Applications, Second edition, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC 15, Springer-Verlag, New York, 2003.

[2] H.P. Decell, An application of the Cayley-Hamilton theorem to generalized matrix inversion, SIAM Review 7(4) (1965) 526-528.

[3] J. Ji, A finite algorithm for the Drazin inverse of a polynomial matrix, Appl. Math. Comput. 30 (2002) 243-251.

[4] J. Ji, Explicit expressions of the generalized inverses and condensed Cramer rules, Linear Algebra Appl. 404 (2005) 183-192.

[5] J. Jones, N.P. Karampetakis and A.C. Pugh, The computation and application of the generalized inverse via Maple, J. Symbolic Computation 25 (1998) 99-124.

[6] N.P. Karampetakis, Computation of the generalized inverse of a polynomial matrix and applications, Linear Algebra Appl. 252 (1997) 35-60.

[7] N.P. Karampetakis, Generalized inverses of two-variable polynomial matrices and applications, Circuits Systems Signal Processing 16 (1997) 439-453.

[8] N.P. Karampetakis and P. Tzekis, On the computation of the generalized inverse of a polynomial matrix, IMA Journal of Mathematical Control and Information 18 (2001) 83-97.

[9] X. Li and Y. Wei, A note on computing the generalized inverse $A^{(2)}_{T,S}$ of a matrix A, Int. J. Math. Math. Sci. 31 (2002) 497-507.

[10] P.S. Stanimirović and M.B. Tasić, Drazin inverse of one-variable polynomial matrices, Filomat (Niš) 15 (2001) 71-78.

[11] P.S. Stanimirović, A finite algorithm for generalized inverses of polynomial and rational matrices, Appl. Math. Comput. 144 (2003) 199-214.

[12] W.H. Press, S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, Numerical Recipes in C, Cambridge University Press, Cambridge, 1992.

[13] A. Schuster and P. Hippe, Inversion of polynomial matrices by interpolation, IEEE Transactions on Automatic Control 37(3) (1992) 363-365.

[14] G. Wang, Y. Wei and S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, 2004.

[15] F. Bu and Y. Wei, The algorithm for computing the Drazin inverses of two-variable polynomial matrices, Appl. Math. Comput. 147 (2004) 805-836.

[16] Y. Wei, A characterization and representation of the generalized inverse $A^{(2)}_{T,S}$ and its applications, Linear Algebra Appl. 280 (1998) 87-96.

[17] Y. Wei and D.S. Djordjević, On integral representation of the generalized inverse $A^{(2)}_{T,S}$, Appl. Math. Comput. 142 (2003) 189-194.

[18] Y. Wei and H. Wu, The representation and approximation for the generalized inverse $A^{(2)}_{T,S}$, Appl. Math. Comput. 135 (2003) 263-276.

[19] Y. Wei and N. Zhang, A note on the representation and approximation of the outer inverse $A^{(2)}_{T,S}$ of a matrix A, Appl. Math. Comput. 147(3) (2004) 837-841.

[20] S. Wolfram, The Mathematica Book, 4th ed., Wolfram Media/Cambridge University Press, 1999.

[21] G. Zielke, Report on test matrices for generalized inverses, Computing 36 (1986) 105-162.
