Interpolated Advanced Algorithms
Abstract
We investigate an interpolation algorithm for computing outer inverses of a given polynomial matrix, based on the Leverrier-Faddeev method. This algorithm is a continuation of the finite algorithm for computing generalized inverses of a given polynomial matrix, introduced in [11]. In addition, a method for estimating the degrees of the polynomial matrices arising from the Leverrier-Faddeev algorithm is given as an improvement of the interpolation algorithm. Based on a similar idea, we introduce methods for computing the rank and index of a polynomial matrix. All algorithms are implemented in the symbolic programming language MATHEMATICA and tested on several different classes of test examples.
AMS Subj. Class.: 15A09, 68Q40.
Key words: Pseudoinverse, interpolation, MATHEMATICA, Leverrier-Faddeev method, polynomial matrices.
Introduction
Let $\mathbb{R}$ denote the set of real numbers, $\mathbb{R}^{m\times n}$ the set of $m\times n$ real matrices, and $\mathbb{R}^{m\times n}_r = \{X\in\mathbb{R}^{m\times n} : \mathrm{rank}(X)=r\}$. As usual, $\mathbb{R}[s]$ (resp. $\mathbb{R}(s)$) denotes the set of polynomials (resp. rational functions) with real coefficients in the indeterminate $s$. The $m\times n$ matrices with elements in $\mathbb{R}[s]$ (resp. $\mathbb{R}(s)$) are denoted by $\mathbb{R}[s]^{m\times n}$ (resp. $\mathbb{R}(s)^{m\times n}$).
For any matrix $A\in\mathbb{R}^{m\times n}$ the Moore-Penrose inverse of $A$ is the unique matrix, denoted by $A^\dagger$, satisfying the following Penrose equations in $X$ [1, 14]:

$$(1)\ AXA=A,\qquad (2)\ XAX=X,\qquad (3)\ (AX)^T=AX,\qquad (4)\ (XA)^T=XA,$$

together with the additional equations

$$(5)\ AX=XA,\qquad (1^k)\ A^{k+1}X=A^k.$$
For a sequence $S$ of elements from the set $\{1,2,3,4,5\}$, the set of matrices obeying the equations represented in $S$ is denoted by $A\{S\}$. A matrix from $A\{S\}$ is called an $S$-inverse of $A$ and is denoted by $A^{(S)}$. The Moore-Penrose inverse $A^\dagger$ of $A$ is the unique $\{1,2,3,4\}$-inverse of $A$. The group inverse, denoted by $A^{\#}$, is the unique $\{1,2,5\}$-inverse of $A$, and it exists if and only if $\mathrm{ind}(A)=\min\{k : \mathrm{rank}(A^{k+1})=\mathrm{rank}(A^k)\}=1$. A matrix $X=A^D$ is said to be the Drazin inverse of $A$ if $(1^k)$ (for some positive integer $k$), $(2)$ and $(5)$ are satisfied. By $A^{-1}_R$ and $A^{-1}_L$ we denote a right and a left inverse of $A$, respectively.
A matrix $X\in\mathbb{C}^{n\times m}$ is called a 2-inverse of $A$ with the prescribed range $T$ and null space $S$, denoted by $A^{(2)}_{T,S}$, if the following conditions are satisfied:

$$XAX=X,\qquad \mathcal{R}(X)=T,\qquad \mathcal{N}(X)=S,$$

where $\mathcal{R}(X)$ is the range of $X$ and $\mathcal{N}(X)$ is the null space of $X$. It is a well-known fact [1, 9] that if $\dim T=\dim S^{\perp}$, then there exists a unique $A^{(2)}_{T,S}$ if and only if $AT\oplus S=\mathbb{C}^m$.
In the literature many representations and methods for computing $A^{(2)}_{T,S}$ have been proposed; see [4, 16, 17, 18, 19].
An algorithm for computing the Moore-Penrose inverse of a constant real matrix $A(s)\equiv A_0\in\mathbb{R}^{m\times n}$ by means of the Leverrier-Faddeev algorithm (also called the Souriau-Frame algorithm) is introduced in [2]. A generalization of this algorithm for the computation of various classes of generalized inverses is introduced in [11]. This algorithm generates the class of outer inverses of a rational or polynomial matrix. That paper also isolates the partial cases in which the class of reflexive g-inverses is derived, as well as the Moore-Penrose inverse and the Drazin inverse.
In [13] Schuster and Hippe generalize known polynomial interpolation methods to polynomial matrices in order to compute the ordinary inverse of nonsingular polynomial matrices using the formula $A^{-1}=(\det A)^{-1}\,\mathrm{adj}\,A$.
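As an illustration of the interpolation principle behind [13], the determinant of a polynomial matrix can be recovered from its values at sufficiently many sample points, since $\deg\det A(s)\le n\deg A(s)$. The following MATHEMATICA sketch shows this idea; the name detByInterpolation is ours and not part of the package described below.

detByInterpolation[A_, var_] :=
  Module[{n = Length[A], d, pts},
   (* degree bound for the determinant: n * (maximal degree of an entry) *)
   d = n*Max[Map[Exponent[#, var] &, A, {2}]];
   (* sample the determinant at d + 1 points and interpolate *)
   pts = Table[{i, Det[A /. var -> i]}, {i, 0, d}];
   Expand[InterpolatingPolynomial[pts, var]]
  ];

For example, detByInterpolation[{{1, x}, {x, 1}}, x] returns 1 - x^2.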
In [6] a representation and a corresponding algorithm for computing the Moore-Penrose inverse of a nonregular polynomial matrix of arbitrary degree are utilized. A corresponding algorithm for two-variable polynomial matrices is presented in [7]. In [5] an implementation of the algorithm for computing the Moore-Penrose inverse of a singular one-variable rational matrix in the symbolic computational language MAPLE is described.
An effective version of this algorithm is presented in [8]. That algorithm is efficient when the elements of the input matrix are polynomials with only a few nonzero addends. On the other hand, the interpolation algorithm presented in this paper shows better performance than the classical method when the matrices are dense.
In Algorithm 2.1, restated from [11], the sequences $A_j(s)=A_0(s)B_{j-1}(s)$, $a_j(s)=\mathrm{Tr}(A_j(s))/j$ and $B_j(s)$ are generated iteratively from $A_0(s)=T(s)R(s)^T$, and for the largest index $k$ with $a_k(s)\ne 0$ the output is

$$X_e(s)=\begin{cases}\dfrac{(-1)^e}{a_k(s)^e}\,\bigl(R(s)^T B_{k-1}(s)\bigr)^e, & k>0,\\[4pt] 0, & k=0.\end{cases}$$
The next theorem shows how Algorithm 2.1 can be used for computing different types of generalized inverses of a given polynomial matrix $A(s)$. We restate it from [11]:
Theorem 2.1. Let $A(s)\in\mathbb{R}[s]^{n\times m}$ be a polynomial matrix and $A(s)=P(s)Q(s)$ its full-rank factorization. The following statements are valid:

(1) In the case $e=1$, $R(s)=T(s)=A(s)$ we get $X_1(s)=A(s)^\dagger$.

(2) If $m=n$, $e=1$, $R(s)=A(s)^l$, $T(s)=A(s)$, $l\ge \mathrm{ind}\,A(s)$, we obtain $X_1(s)=A(s)^D$.

(3) If $T(s)=A(s)$, $n>m=\mathrm{rank}\,A(s)$, then for arbitrary $R(s)$ such that $A(s)R(s)^T$ is invertible we have $X_1(s)=A(s)^{-1}_R$.

(4) If $m=n$, $e=1$, $R(s)=A(s)^k$, $T(s)=I_n$, then $X_1(s)$ exists iff $\mathrm{ind}\,A(s)=k$, and $X_1(s)=A(s)A(s)^D$.

(5) In the case $m=n$, $e=l+1$, $T(s)R(s)^T=A(s)$, $R(s)=A(s)^l$, $l\ge \mathrm{ind}\,A(s)$, we obtain $X_e(s)=A(s)^D$.

(6) For $m=n$, $e=1$, $T(s)=R(s)=A(s)^l$, $l\ge \mathrm{ind}\,A(s)$, we have $X_1(s)=(A(s)^D)^l$.

(7) $X_1(s)\in A(s)\{2\}$ for $e=1$, $T(s)=A(s)$, $R(s)=G(s)H(s)$, $G(s)\in\mathbb{R}[s]^{n\times t}$, $H(s)\in\mathbb{R}[s]^{t\times m}$, $\mathrm{rank}\,H(s)A(s)G(s)=t$.

(8) $X_1(s)\in A(s)\{1,2\}$ for $e=1$, $T(s)=A(s)$, $R(s)=G(s)H(s)$, $G(s)\in\mathbb{R}[s]^{n\times r}$, $H(s)\in\mathbb{R}[s]^{r\times m}$, $\mathrm{rank}\,H(s)A(s)G(s)=r=\mathrm{rank}\,A(s)$.

(9) $X_1(s)\in A(s)\{1,2,3\}$ for $e=1$, $T(s)=A(s)$, $R(s)=G(s)P(s)^T$, $G(s)\in\mathbb{R}[s]^{n\times r}$, $\mathrm{rank}\,P(s)^T A(s)G(s)=r=\mathrm{rank}\,A(s)$.

(10) $X_1(s)\in A(s)\{1,2,4\}$ for $e=1$, $T(s)=A(s)$, $R(s)=Q(s)^T H(s)$, $H(s)\in\mathbb{R}[s]^{r\times n}$, $\mathrm{rank}\,H(s)A(s)Q(s)^T=r=\mathrm{rank}\,A(s)$.

(11) If $T(s)=A(s)$, $m>n=\mathrm{rank}\,A(s)$, then for arbitrary $R(s)$ such that $R(s)^T A(s)$ is invertible we get $X_1(s)=A(s)^{-1}_L$.
We use the simpler notations $k^{R,T}$, $a_i^{R,T}$ and $B_i^{R,T}$, respectively, for the values $k$, $a_i$ and $B_i$, $i=0,\dots,n$, when the inputs of Algorithm 2.1 are the matrices $R$ and $T$ (either polynomial or constant). We also denote $a^{R,T}=a^{R,T}_{k^{R,T}}$ and $B^{R,T}=B^{R,T}_{k^{R,T}-1}$. The following definition and lemma will be used in further considerations:
Definition 2.1. For a given polynomial matrix $M(s)\in\mathbb{R}[s]^{n\times m}$ its maximal degree is defined as the maximal degree of its elements:

$$\deg M(s)=\max\{\mathrm{dg}(M(s))_{ij}\ |\ 1\le i\le n,\ 1\le j\le m\}.$$
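For example, for $M(s)=\begin{bmatrix} s^3+1 & s\\ 0 & 2\end{bmatrix}$ we have $\deg M(s)=3$, while the degree matrix is $\mathrm{dg}\,M(s)=\begin{bmatrix}3 & 1\\ -\infty & 0\end{bmatrix}$, with the usual convention $\mathrm{dg}\,0=-\infty$.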
Lemma 2.1. Let $R$ and $T$ be real $n\times m$ matrices and denote $A_0=TR^T$. Then

(a) $B^{R,T}_{k^{R,T}+i-1}=(TR^T)^{i-1}\bigl(A_0 B^{R,T}_{k^{R,T}-1}+a^{R,T}I_n\bigr)$ for all $i=1,\dots,n-k^{R,T}$.

The worst-case complexity of Algorithm 2.1 applied to polynomial matrices $R(s)$, $T(s)$ can be bounded by summing, over the iterations $j=1,\dots,n$, the costs of the matrix operations on $A_j(s)$, $B_j(s)$ and $A_0(s)$; we denote this bound by (2.1).
In practice, the complexity of Algorithm 2.1 is smaller than (2.1) (not all elements of the matrices $B_j(s)$, $A_j(s)$ and $A_0(s)$ have the maximal degree), but it is still large. It can also be shown that the complexity of the Leverrier-Faddeev algorithm for constant matrices is $O(n^3\cdot n)=O(n^4)$.
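To make the constant-matrix building block concrete, the following MATHEMATICA sketch implements the classical Leverrier-Faddeev recursion together with Decell's formula [2] for the Moore-Penrose inverse. The names LeverrierFaddeev and MoorePenroseLF are ours, and the sign convention $B_j=A_j-a_jI$ is only one of several used in the literature; this is an illustration, not the exact code of our package.

LeverrierFaddeev[A0_] :=
  Module[{n = Length[A0], B, Aj, a, as = {}, Bs},
   B = IdentityMatrix[n]; Bs = {B};
   Do[
    Aj = A0.B;                      (* A_j = A0 B_{j-1} *)
    a = Tr[Aj]/j;                   (* a_j = Tr(A_j)/j *)
    B = Aj - a*IdentityMatrix[n];   (* B_j = A_j - a_j I *)
    AppendTo[as, a]; AppendTo[Bs, B],
    {j, 1, n}];
   {as, Bs}                         (* trace coefficients and B_0, ..., B_n *)
  ];

(* Moore-Penrose inverse of a nonzero constant matrix A via A0 = A A^T [2] *)
MoorePenroseLF[A_] :=
  Module[{as, Bs, k},
   {as, Bs} = LeverrierFaddeev[A.Transpose[A]];
   k = Length[as] - LengthWhile[Reverse[as], # === 0 &];  (* largest k with a_k != 0 *)
   (1/as[[k]])*Transpose[A].Bs[[k]]                       (* Bs[[k]] holds B_{k-1} *)
  ];

For example, MoorePenroseLF[{{1, 1}, {0, 0}}] evaluates to {{1/2, 0}, {1/2, 0}}.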
It is well known that there exists one and only one polynomial $f(s)$ of degree $q\le n$ which assumes the values $f(s_0),f(s_1),\dots,f(s_n)$ at $n+1$ distinct base points $s_0,s_1,\dots,s_n$. This polynomial is called the $q$th degree interpolation polynomial. Three important interpolation methods are [13]:
(i) the direct approach using the Vandermonde matrix,
(ii) Newton interpolation,
(iii) Lagrange interpolation.
In the case of finding generalized inverses of polynomial matrices (and also
in many other applications) it is suitable to use the Newton interpolation polynomial [12].
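For instance, the built-in MATHEMATICA function InterpolatingPolynomial, which our implementation uses (see the Implementation section), is based on the Newton form; a polynomial of degree $q$ is recovered from any $q+1$ of its values:

p[s_] := s^2 - 3 s + 1;
(* three samples determine the quadratic uniquely *)
Expand[InterpolatingPolynomial[Table[{i, p[i]}, {i, 0, 2}], s]]
(* returns 1 - 3 s + s^2 *)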
In the following theorem we investigate a sufficient number of interpolation points to determine the value $k^{R(s),T(s)}$ and the polynomials $B^{R(s),T(s)}$, $a^{R(s),T(s)}$. We use the notation $\kappa=k^{R(s),T(s)}$ for the value $k$ corresponding to the polynomial matrices $R(s)$ and $T(s)$.
Theorem 3.1. Let $T(s),R(s)\in\mathbb{R}[s]^{n\times m}$, denote $A_0(s)=T(s)R(s)^T$, and let $s_0,\dots,s_{n\deg A_0(s)}$ be pairwise distinct real numbers. Write $\kappa_i=k^{R(s_i),T(s_i)}$. Then the following statements hold:

(a) $\kappa=\max\{\kappa_i\ |\ i=0,\dots,n\deg A_0(s)\}$;

(b) the values $B^{R(s),T(s)}(s_i)$ and $a^{R(s),T(s)}(s_i)$ are expressible through the constant-matrix quantities $B^{R(s_i),T(s_i)}$ and $a^{R(s_i),T(s_i)}$, so that $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ can be recovered by interpolation.

Proof. (a) Assume that $a_\kappa^{R(s),T(s)}(s_i)=0$ for all $i=0,\dots,n\deg A_0(s)$. In accordance with Algorithm 2.1, the degree of the polynomial $a_\kappa^{R(s),T(s)}(s)$ is limited by $\kappa\deg A_0(s)$. Since $\kappa\deg A_0(s)\le n\deg A_0(s)$, we get $a_\kappa^{R(s),T(s)}(s)\equiv 0$, which is a contradiction with the definition of $\kappa$. Hence there holds

$$(\exists\, i_0\le n\deg A_0(s))\quad a_\kappa^{R(s_{i_0}),T(s_{i_0})}=a_\kappa^{R(s),T(s)}(s_{i_0})\ne 0,$$

which implies $\kappa_{i_0}\ge\kappa$, and therefore $\max_i \kappa_i\ge\kappa$.

On the other hand, by the definition of $\kappa$ we have $a_{\kappa+t}^{R(s),T(s)}(s)=0$ for all $t=1,\dots,n-\kappa$. Since the equality $a_{\kappa+t}^{R(s_i),T(s_i)}=a_{\kappa+t}^{R(s),T(s)}(s_i)=0$ is satisfied for all $i=0,\dots,n\deg A_0(s)$, it can be concluded that $a_{\kappa+t}^{R(s_i),T(s_i)}=0$, and consequently $\kappa_i\le\kappa$ holds for all $i=0,\dots,n\deg A_0(s)$. We obtain $\max_i\kappa_i\le\kappa$, which completes part (a) of the proof.
(b) Denote $B_i'=B^{R(s_i),T(s_i)}$ and $a_i'=a^{R(s_i),T(s_i)}$. It can easily be proven that the values $B^{R(s),T(s)}(s_i)$ and $a^{R(s),T(s)}(s_i)$ can be computed using the following relations:

$$B^{R(s),T(s)}(s_i)=\begin{cases}A_0(s_i)^{\kappa-\kappa_i-1}\bigl(A_0(s_i)B_i'+a_i' I_n\bigr), & \kappa>\kappa_i,\\ B_i', & \kappa=\kappa_i,\end{cases}$$

$$a^{R(s),T(s)}(s_i)=\begin{cases}a_i', & \kappa_i=\kappa,\\ 0, & \kappa_i<\kappa.\end{cases}$$

Now we know the values of the polynomials $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ at $\kappa\deg A_0(s)+1$ different points $s_i$. From $\deg B^{R(s),T(s)}\le(\kappa-1)\deg A_0(s)$ and $\deg a^{R(s),T(s)}\le\kappa\deg A_0(s)$ it follows that the polynomials $B^{R(s),T(s)}$ and $a^{R(s),T(s)}$ can be computed from the sets of values $B^{R(s),T(s)}(s_i)$ and $a^{R(s),T(s)}(s_i)$ ($i=0,\dots,\kappa\deg A_0(s)$) using interpolation.
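For example, if $\deg A_0(s)=2$ and $\kappa=3$, then $\deg a^{R(s),T(s)}\le 6$ and $\deg B^{R(s),T(s)}\le 4$, so the seven values at $s_0,\dots,s_6$ determine both polynomials uniquely.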
The previous theorem gives the main idea for the following interpolation algorithm.

Algorithm 3.1. Input: polynomial matrices $R(s)$ and $T(s)$ of the order $n\times m$.

Step 1. Initial calculations:
Step 1.1. Compute $A_0(s)=T(s)R(s)^T$, $d_0=\deg A_0(s)$ and $d=n\,d_0$.
Step 1.2. Select distinct base points $s_0,s_1,\dots,s_d\in\mathbb{R}$.
Step 2. For each $i=0,\dots,d$ apply Algorithm 2.1 to the constant matrices $R(s_i)$ and $T(s_i)$, producing $\kappa_i$, $a_i'=a^{R(s_i),T(s_i)}$ and $B_i'=B^{R(s_i),T(s_i)}$.
Step 3. Compute $\kappa=\max\{\kappa_i\ |\ i=0,\dots,d\}$ and the values

$$B_i=\begin{cases}A_0(s_i)^{\kappa-\kappa_i-1}\bigl(A_0(s_i)B_i'+a_i' I_n\bigr), & \kappa>\kappa_i,\\ B_i', & \kappa=\kappa_i.\end{cases}$$

Step 4. Form the matrix $B^{R(s),T(s)}_{\kappa-1}$ by interpolating each element $\bigl(B^{R(s),T(s)}_{\kappa-1}\bigr)_{pq}$ through the values $(B_i)_{pq}$ for $i=0,\dots,\kappa\deg A_0(s)$, and obtain $a^{R(s),T(s)}$ by interpolating its values analogously.
By Theorem 4.1, for constant input matrices $R$ and $T$ the values $a_i$ generated by Algorithm 2.1 are the coefficients of the polynomial

$$\sum_{i=0}^{k} a_i\lambda^{k-i},\qquad (4.1)$$

which, for $A_0=TR^T$ with nonzero eigenvalues $\lambda_1,\dots,\lambda_r$, coincides with the product

$$\prod_{i=1}^{r}(\lambda-\lambda_i),\qquad (4.2)$$

that is, $\sum_{i=0}^{k} a_i\lambda^{k-i}=\prod_{i=1}^{r}(\lambda-\lambda_i)$; in particular, $k^{R,T}=r$ in the cases considered below. This yields the following corollary:

Corollary 4.1. Let $A(s)\in\mathbb{R}[s]^{n\times n}$ be a given polynomial matrix. Then

$$\mathrm{rank}\,A(s)=\max\{\mathrm{rank}\,A(s_i)\ |\ i=0,\dots,n\deg A(s)\}.\qquad (4.3)$$
Proof. By Theorem 4.1, in the case $R(s)=A(s)$ and $T(s)=I_n$ we have $\kappa=\mathrm{rank}\,A(s)$ and $\kappa_i=\mathrm{rank}\,A_0(s_i)=\mathrm{rank}\,A(s_i)$. Now the conclusion follows directly from part (a) of Theorem 3.1.
Let us notice that in formula (4.3) we can use any method for computing the rank of constant matrices. For example, if we use Gaussian elimination, we need $O(n\deg A(s)\cdot n^3)=O(n^4\deg A(s))$ time for the computation of $\mathrm{rank}\,A(s)$. If we compute $\mathrm{rank}\,A(s_i)$ using the Leverrier-Faddeev method (and Theorem 4.1), the required time is $O(n^5\deg A(s))$.
Using Theorem 4.1 and Corollary 4.1 we can obtain a small improvement of Algorithm 3.1. Before Step 2 we can precompute $\kappa=\max\{\mathrm{rank}\,A_0(s_i)\ |\ i=0,\dots,d\}$, and after that in Step 2 we can calculate $a_i'=a^{R(s_i),T(s_i)}$ and $B_i'=B^{R(s_i),T(s_i)}$ only for $i=0,\dots,\kappa\deg A_0(s)$. This modification is actually used in our implementation of the algorithm.
We now describe an algorithm for computing the index of a given polynomial matrix $A(s)$. For this purpose we again use the Leverrier-Faddeev method and the following well-known lemma (formulated in our notation):

Lemma 4.2. Let $A\in\mathbb{R}^{n\times n}$ be a given matrix and denote $t^{R,T}=\min\{r\ |\ B_r^{R,T}=0\}$. Then $\mathrm{ind}\,A=n-t^{A,I_n}$.
Lemma 4.3. For arbitrary $R(s),T(s)\in\mathbb{R}[s]^{n\times m}$ and pairwise different real numbers $s_0,\dots,s_d$, where $d=n\,d_0=n\deg(T(s)R(s)^T)$, there holds

$$t^{R(s),T(s)}=\max\{t^{R(s_i),T(s_i)}\ |\ i=0,\dots,d\}.\qquad (4.4)$$

Proof. Denote $\tau_i=t^{R(s_i),T(s_i)}$, $\tau=t^{R(s),T(s)}$ and $\tau'=\max\{\tau_i\ |\ i=0,\dots,d\}$. From the definition we have that $\tau'=\tau_{i_0}$ for some $0\le i_0\le d$. It can be proven by mathematical induction that $B_i^{R,T}=0$ for all $i\ge t^{R,T}$, for every two constant or polynomial matrices $R$ and $T$. Therefore, if $\tau'>\tau$, then $B_{\tau'-1}^{R(s),T(s)}=0$ and also

$$B_{\tau'-1}^{R(s_{i_0}),T(s_{i_0})}=B_{\tau'-1}^{R(s),T(s)}(s_{i_0})=0,$$

which contradicts the minimality of $\tau_{i_0}=\tau'$; hence $\tau\ge\tau'$. On the other hand, $B_{\tau'}^{R(s_i),T(s_i)}=B_{\tau'}^{R(s),T(s)}(s_i)=0$ holds for all $i=0,\dots,d$, so the polynomial matrix $B_{\tau'}^{R(s),T(s)}$ vanishes at $d+1$ distinct points; since its maximal degree does not exceed $d$, it is identically zero, and therefore $\tau\le\tau'$. This proves (4.4).
Theorem 4.2. Let $A(s)\in\mathbb{R}[s]^{n\times n}$ be a given polynomial matrix. Then $\mathrm{ind}\,A(s)$ can be computed using the following formula:

$$\mathrm{ind}\,A(s)=\min\{\mathrm{ind}\,A(s_i)\ |\ i=0,\dots,n\deg A(s)\}.\qquad (4.5)$$

Proof. From Lemma 4.2 we have that $\mathrm{ind}\,A(s)=n-t^{A(s),I_n}$ and $\mathrm{ind}\,A(s_i)=n-t^{A(s_i),I_n}$. Now the conclusion of the theorem immediately follows from Lemma 4.3.
As in the previous case, we can use any method for computing the index of the constant matrices $A(s_i)$. For example, if we use Algorithm 2.1 (and Lemma 4.2), the total required time for the computation is $O(n^4\cdot n\,d_0)=O(n^5\deg A(s))$.
Let us now summarize the results of this section and construct two algorithms for computing the rank and the index of a given square polynomial matrix.
Algorithm 4.1. (Computing the rank of a square polynomial matrix)
Input: a polynomial matrix $A(s)\in\mathbb{R}[s]^{n\times n}$.
Step 1. Compute $d_0=\deg A(s)$, $d=n\,d_0$ and select pairwise distinct real numbers $s_0,\dots,s_d$.
Step 2. For each $i=0,\dots,d$ compute $\rho_i=\mathrm{rank}\,A(s_i)$ using some method for computing the rank of constant matrices.
Step 3. Return $\mathrm{rank}\,A(s)$ from formula (4.3).
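For example, for $A(s)=\mathrm{diag}(s,\,1-s)$ we have $\mathrm{rank}\,A(s_i)=1$ at the points $s_i=0$ and $s_i=1$, but $\mathrm{rank}\,A(s_i)=2$ elsewhere; the maximum in (4.3) over $n\deg A(s)+1=3$ distinct points therefore correctly returns $\mathrm{rank}\,A(s)=2$, although individual points may underestimate it.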
Algorithm 4.2. (Computing the index of a square polynomial matrix)
Input: a polynomial matrix $A(s)\in\mathbb{R}[s]^{n\times n}$.
Step 1. Compute $d_0=\deg A(s)$, $d=n\,d_0$ and select pairwise distinct real numbers $s_0,\dots,s_d$.
Step 2. For each $i=0,\dots,d$ compute $\mathrm{ind}\,A(s_i)$ using some method for computing the index of constant matrices.
Step 3. Return $\mathrm{ind}\,A(s)$ from formula (4.5).

In the rest of this section we estimate the degrees of the polynomials $a_i^{R(s),T(s)}$ and of the matrices $B_i^{R(s),T(s)}$.
Lemma 5.1. Let $A(s),B(s)\in\mathbb{R}(s)^{n\times n}$ and $a(s)\in\mathbb{R}(s)$. The following facts are valid:

(a) $\mathrm{dg}(A(s)B(s))_{ij}\le\max\{\mathrm{dg}\,A(s)_{ik}+\mathrm{dg}\,B(s)_{kj}\ |\ 1\le k\le n\}$;
(b) $\mathrm{dg}(A(s)+B(s))_{ij}\le\max\{\mathrm{dg}\,A(s)_{ij},\,\mathrm{dg}\,B(s)_{ij}\}$;
(c) $\mathrm{dg}(a(s)A(s))_{ij}=\mathrm{dg}\,A(s)_{ij}+\mathrm{dg}(a(s))$.

Proof. Part (a) follows from the definition of the matrix product and the simple formulae $\mathrm{dg}(p(s)+q(s))\le\max\{\mathrm{dg}\,p(s),\mathrm{dg}\,q(s)\}$ and $\mathrm{dg}(p(s)q(s))=\mathrm{dg}\,p(s)+\mathrm{dg}\,q(s)$; parts (b) and (c) are direct consequences of the same formulae.
Algorithm 5.1. Estimating the degree matrix $\mathrm{dg}\,B_t^{R(s),T(s)}(s)$ and the degree of the polynomial $a_t^{R(s),T(s)}$ for given matrices $R(s)$ and $T(s)$, $0\le t\le n$.

Step 1. Set $(D_0^B)_{ii}=0$, $i=1,\dots,n$, and $(D_0^B)_{ij}=-\infty$ for all $i=1,\dots,n$, $j=1,\dots,n$, $i\ne j$. Also denote $Q=\mathrm{dg}\,A_0(s)$ and $d_0=0$.
Step 2. For $t=1,\dots,n$ perform the following steps:
Step 2.1. Calculate $(D_t^A)_{ij}=\max\{Q_{ik}+(D_{t-1}^B)_{kj}\ |\ k=1,\dots,n\}$ for $i=1,\dots,n$, $j=1,\dots,n$.
Step 2.2. Calculate $d_t=\max\{(D_t^A)_{ii}\ |\ i=1,\dots,n\}$.
Step 2.3. Set $(D_t^B)_{ij}=(D_t^A)_{ij}$ for $i\ne j$ and $(D_t^B)_{ii}=\max\{(D_t^A)_{ii},\,d_t\}$.

Then, by Lemma 5.1, $d_t$ bounds $\mathrm{dg}\,a_t^{R(s),T(s)}$ and $D_t^B$ bounds the degree matrix of $B_t^{R(s),T(s)}$.
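For example, for $Q=\mathrm{dg}\,A_0(s)=\begin{bmatrix}1 & -\infty\\ -\infty & 2\end{bmatrix}$ the first pass of Step 2 gives $D_1^A=Q$, $d_1=2$ and $D_1^B=\begin{bmatrix}2 & -\infty\\ -\infty & 2\end{bmatrix}$.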
Implementation
All algorithms are implemented in the symbolic programming language MATHEMATICA. For details about MATHEMATICA see, for example, [20].
Function GeneralInv[R, T, kk] implements a slightly modified version of Algorithm 2.1. For the input matrices $R,T\in\mathbb{R}^{n\times m}$ and a positive integer kk it returns the list with elements kk, $a^{R,T}_{kk}$ and $B^{R,T}_{kk-1}$, respectively, if $0\le kk\le n$. Otherwise, it returns the list with elements $k^{R,T}$, $a^{R,T}$ and $B^{R,T}$. The function works both for polynomial matrices (it is used for the first and second step of Algorithm 2.1) and for constant matrices, as used in Step 2 of Algorithm 3.1.
Function DegreeEstimator[R, T, i, var] implements Algorithm 5.1 and gives an upper bound for the degree of the polynomial $a_i^{R(s),T(s)}$ and for the matrix degree of $B_{i-1}^{R(s),T(s)}$.
DegreeEstimator[R_, T_, i_, var_] :=
  Module[{A, h, j, d1, d2, Ad, A1d, Bd, ad, atd, td, Btm2d},
   (* degree matrix Q = dg A0(s) of A0(s) = T(s) R(s)^T *)
   A = T.Transpose[R];
   {d1, d2} = Dimensions[A];
   Ad = MatrixDg[A, var];
   (* D0B: zero degrees on the diagonal, -Infinity elsewhere *)
   Bd = MatrixDg[IdentityMatrix[d1], var];
   td = -1; ad = -\[Infinity];
   For[h = 1, h <= i, h++,
    (* Step 2.1: degree bound for A_t = A0 B_{t-1} *)
    A1d = MultiplyDG[Ad, Bd];
    (* Step 2.2: degree bound for a_t = Tr(A_t)/t *)
    ad = Max[Table[A1d[[j, j]], {j, d1}]];
    td = h; atd = ad; Btm2d = Bd;
    (* Step 2.3: degree bound for B_t; only the diagonal changes *)
    Bd = A1d;
    For[j = 1, j <= d1, j++,
     Bd[[j, j]] = Max[Bd[[j, j]], ad];
    ];
   ];
   (* bounds for dg a_i and for the degree matrix of B_{i-1} *)
   Return[{atd, Btm2d}];
  ];
This function uses the following two auxiliary functions: MatrixDg[A, var], which computes the matrix degree of a matrix A, and MultiplyDG[Ad, Bd], which computes an upper bound for the matrix degree of the product of matrices A and B whose degree matrices (or upper bounds thereof) are Ad and Bd. Both functions are based on Lemma 5.1.
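The bodies of these two helpers are not listed here; a minimal sketch consistent with Lemma 5.1 and with the convention $\mathrm{dg}\,0=-\infty$ could read:

(* degree matrix: Exponent[0, var] is -Infinity, matching dg 0 = -infinity *)
MatrixDg[A_, var_] := Map[Exponent[#, var] &, A, {2}];
(* e.g. MatrixDg[{{x^2, 0}, {1, x}}, x] gives {{2, -Infinity}, {0, 1}} *)

(* Lemma 5.1(a): (C)_{ij} = max_k (Ad_{ik} + Bd_{kj}), a "tropical" matrix product *)
MultiplyDG[Ad_, Bd_] :=
  Table[Max[Ad[[i]] + Bd[[All, j]]],
   {i, Length[Ad]}, {j, Dimensions[Bd][[2]]}];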
Functions PolyMatrixRank[A, var] and PolyMatrixIndex[A, var] implement Algorithms 4.1 and 4.2, respectively. In the first function we use the built-in MATHEMATICA function MatrixRank[A] for computing the rank of constant matrices. In the second we use the function MatrixIndex[A], based on a modified version of Algorithm 2.1.
PolyMatrixRank[A_, var_] :=
  Module[{r, r1, n, m, x, p, h},
   {n, m} = Dimensions[A];
   (* n*deg A(s) + 1 interpolation points suffice, by (4.3) *)
   p = 1 + n*MatrixPolyDegree[A, var];
   x = Table[i, {i, 1, p}];
   r = 0;
   For[h = 1, h <= p, h++,
    (* rank of the constant matrix A(s_h) *)
    r1 = MatrixRank[ReplaceAll[A, var -> x[[h]]]];
    If[r1 > r, r = r1];
   ];
   Return[r];
  ];
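The function MatrixIndex is likewise not listed here. The following sketch of PolyMatrixIndex follows formula (4.5) directly, with a stand-in MatrixIndexViaRank that computes the index of a constant matrix from the definition $\mathrm{ind}(A)=\min\{k : \mathrm{rank}(A^{k+1})=\mathrm{rank}(A^k)\}$ instead of the Algorithm 2.1-based MatrixIndex:

PolyMatrixIndex[A_, var_] :=
  Module[{n, m, p, h, ind, i1},
   {n, m} = Dimensions[A];
   p = 1 + n*MatrixPolyDegree[A, var];  (* n*deg A(s) + 1 points, as in Theorem 4.2 *)
   ind = n;
   For[h = 1, h <= p, h++,
    (* formula (4.5): the index of A(s) is the minimum over the sample points *)
    i1 = MatrixIndexViaRank[ReplaceAll[A, var -> h]];
    If[i1 < ind, ind = i1];
   ];
   Return[ind];
  ];

(* index of a constant matrix from ind(A) = min{k : rank(A^(k+1)) = rank(A^k)} *)
MatrixIndexViaRank[A_] :=
  Module[{k = 0, P = IdentityMatrix[Length[A]], rPrev, rNext},
   rPrev = Length[A];                (* rank(A^0) = n *)
   While[True,
    P = P.A;                         (* P = A^(k+1) *)
    rNext = MatrixRank[P];
    If[rNext == rPrev, Return[k]];
    rPrev = rNext; k++;
   ];
  ];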
Function GeneralInvPoly[R, T, var] implements a small modification of Algorithm 3.1 (defined with Theorem 3.2).
GeneralInvPoly[R_, T_, var_] :=
  Module[{R1, T1, deg, n, m, x, h, p, Ta, TB, a, B, t, at, Btm1, r, degA, Deg, AA},
   (* A0(s) = T(s) R(s)^T *)
   AA = Expand[T.Transpose[R]];
   {n, m} = Dimensions[AA];
   degA = MatrixPolyDegree[AA, var];
   p = n*degA + 1;
   x = Table[i, {i, 1, p}];
   (* improvement based on Corollary 4.1: r*degA + 1 points suffice *)
   r = PolyMatrixRank[AA, var];
   p = r*degA + 1;
   Ta = Table[0, {i, 1, p}]; TB = Table[0, {i, 1, p}];
   (* Step 2: apply Algorithm 2.1 to the constant matrices R(s_h), T(s_h) *)
   For[h = 1, h <= p, h++,
    R1 = ReplaceAll[R, var -> x[[h]]];
    T1 = ReplaceAll[T, var -> x[[h]]];
    {t, a, B} = RTGeneralInv[R1, T1, r];
    Ta[[h]] = {h, a}; TB[[h]] = {h, B};
   ];
   (* degree bounds from Algorithm 5.1 guide the interpolation *)
   {deg, Deg} = DegreeEstimator[R, T, r, var];
   at = SimpleInterpolation[Ta, deg, var];
   Btm1 = AdvMatrixMinInterpolation[TB, Deg, var];
   Return[{Expand[at], Expand[Btm1]}];
  ];
In this function the inputs are polynomial matrices R(s) and T(s) in the variable var, whose first and second dimensions are equal to n and m, respectively. It returns the list with the elements $a^{R(s),T(s)}$ and $B^{R(s),T(s)}$. In this implementation we used $s_i=i$ for the base interpolation points. With this set of interpolation points the function is fastest (we also tried $s_i=-[n/2]+i$, $s_i=n\,i$, etc.).
Inside the function GeneralInvPoly we use our auxiliary functions SimpleInterpolation[Ta, deg, var] and AdvMatrixMinInterpolation[TB, Deg, var], which provide the interpolation of the polynomials $a^{R(s),T(s)}$ and $B^{R(s),T(s)}$, respectively, through the calculated data. Both functions use the built-in MATHEMATICA function InterpolatingPolynomial[T, var], based on the Newton interpolation method.
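These two helpers are also not listed here; minimal sketches consistent with the data layout Ta[[h]] = {h, a} and TB[[h]] = {h, B} and with the degree bounds produced by DegreeEstimator might read as follows (a degree bound of $-\infty$ marks an identically zero element, which is skipped):

SimpleInterpolation[Ta_, deg_, var_] :=
  If[deg === -\[Infinity], 0,
   (* deg + 1 of the sampled pairs determine the polynomial *)
   InterpolatingPolynomial[Take[Ta, Min[deg + 1, Length[Ta]]], var]];

AdvMatrixMinInterpolation[TB_, Deg_, var_] :=
  Module[{n, m, pts},
   {n, m} = Dimensions[TB[[1, 2]]];
   Table[
    If[Deg[[p, q]] === -\[Infinity], 0,
     pts = Table[{TB[[h, 1]], TB[[h, 2, p, q]]},
       {h, Min[Deg[[p, q]] + 1, Length[TB]]}];
     InterpolatingPolynomial[pts, var]],
    {p, n}, {q, m}]
  ];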
Testing Experience
We tested the implementations of Algorithm 2.1 and of Algorithm 3.1 improved by Algorithm 5.1 on test cases from [21] and on some randomly generated test matrices. We also tested Algorithms 4.1 and 4.2 on randomly generated test matrices. In the next table we present the timings of the functions RTGeneralInv and RTGeneralInvPoly on the test cases from [21]. In this example the input of the functions was $R(s)=T(s)=A(s)$, $e=1$, so that, according to part (1) of Theorem 2.1, the output is the Moore-Penrose inverse $A(s)^\dagger$. All times are in seconds.
Matrix   Alg 2.1   Alg 3.1
S3       0.049     0.070
S6       0.42      0.67
S10      2.04      5.59
V4       0.08      1.3
V5       0.63      16.2
H3       0.01      0.03
H6       0.04      0.2
H10      0.5       2.01
The test matrices $V_n(a,b)$ from [21] are defined recursively, starting from

$$V_0(a,b)=\begin{bmatrix} a & b\\ b & a\end{bmatrix},$$

with $V_n(a,b)$ built from blocks $V_{n-1}(a,b)$ of order $2^n$; the polynomial test matrix $V_n(s)$ is obtained by substituting suitable values depending on $s$ for $a$ and $b$.
As we can see, only the main diagonal elements of the matrix $B^{V_n(s),V_n(s)}$ are nonzero, and each of them has only one addend (only the first coefficient is nonzero). The same holds for all $B_i^{V_n(s),V_n(s)}$, $0\le i\le 2^n-1$. This explains the bad result of Algorithm 3.1 on the test matrix V5; similar considerations apply to the other test matrices from the previous table.
For presenting test results on random matrices, let us consider the following
two definitions.
Definition 7.1. For a given matrix $A(s)$ (polynomial or constant), the first sparse number $sp_1(A)$ is the ratio of the total number of nonzero elements to the total number of elements of $A(s)$. In the case when $A(s)=[a_{ij}(s)]$ has the order $m\times n$,

$$sp_1(A(s))=\frac{|\{(i,j)\ :\ a_{ij}(s)\ne 0\}|}{mn}.$$
Definition 7.2. The second sparse number $sp_2(A)$ is defined analogously as the ratio of the total number of nonzero coefficients appearing in the elements of $A(s)$ to the maximal possible number $mn(1+\deg A(s))$.

(The table with timings of Algorithm 2.1 and Algorithm 3.1 on randomly generated matrices of the orders $n=9,10,11$ is omitted.)
Note that the time of Algorithm 2.1 reduces rapidly when either $sp_1(A)$ or $sp_2(A)$ is small. In the case of interpolation (Algorithm 3.1), $sp_2(A)$ has almost no influence on the timing, which is not the case with $sp_1(A)$. When $sp_1(A)$ is small, the degree matrix $\mathrm{dg}\,A(s)$ has a large number of elements equal to $-\infty$. The same holds for the output matrix of Algorithm 5.1 (function DegreeEstimator), although this number is smaller. This accelerates the matrix interpolation (function AdvMatrixMinInterpolation).
The following table shows the testing results of Algorithms 4.1 and 4.2 compared with the results obtained by the direct application of the functions MatrixRank and MatrixIndex to polynomial matrices. In this case the working time of all functions depends directly on the values of the rank and index of the matrix, so we present only the average ratios between the corresponding running times.
n    degA   Alg 4.1 / MatrixRank   Alg 4.2 / MatrixIndex
9    3      1.3                    1.0
9    4      2.1                    1.6
9    5      4.6                    2.3
10   3      3.6                    1.1
10   4      5.8                    2.3
10   5      8.7                    4.2
11   3      9.3                    2.3
11   4      10.5                   4.6
11   5      16.7                   8.95
Conclusion
We presented an interpolation variant of the finite algorithm for computing various classes of generalized inverses, introduced in [11]. That algorithm is an extension of the Leverrier-Faddeev method. We applied polynomial interpolation to the finite algorithm, generalizing the principles from [13]. The computation of generalized inverses of constant matrices in the interpolation algorithm is based on the Leverrier-Faddeev method. A complexity analysis is given for both the algorithm and its modification. We also applied a similar idea to the computation of the rank and index of polynomial matrices, which are required in some applications of Algorithm 2.1. All algorithms are implemented in the symbolic programming language MATHEMATICA and tested on several classes of test examples. In practice, the interpolation algorithm was faster on dense matrices.
Future research can be based on the application of rational interpolation and the construction of similar algorithms for rational matrices. Moreover, many known methods for the computation of generalized inverses produce rational matrices as temporary variables even if the input matrix is polynomial.
References
[1] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Second edition, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, 15, Springer-Verlag, New York, 2003.
[2] H. P. Decell, An application of the Cayley-Hamilton theorem to generalized matrix inversion, SIAM Review 7, No. 4 (1965), 526-528.
[3] J. Ji, A finite algorithm for the Drazin inverse of a polynomial matrix, Appl. Math. Comput. 30 (2002), 243-251.
[4] J. Ji, Explicit expressions of the generalized inverses and condensed Cramer rules, Linear Algebra Appl. 404 (2005), 183-192.
[5] J. Jones, N. P. Karampetakis and A. C. Pugh, The computation and application of the generalized inverse via Maple, J. Symbolic Computation 25 (1998), 99-124.
[6] N. P. Karampetakis, Computation of the generalized inverse of a polynomial matrix and applications, Linear Algebra Appl. 252 (1997), 35-60.
[7] N. P. Karampetakis, Generalized inverses of two-variable polynomial matrices and applications, Circuits Systems Signal Processing 16 (1997), 439-453.
[8] N. P. Karampetakis and P. Tzekis, On the computation of the generalized inverse of a polynomial matrix, IMA Journal of Mathematical Control and Information 18 (2001), 83-97.