Article
Recursive Matrix Calculation Paradigm by the Example of Structured Matrix
Jerzy S. Respondek
Institute of Computer Science, Faculty of Automatic Control, Electronics and Computer Science, Silesian
University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland; [email protected];
Tel.: +48-32-237-2151; Fax: +48-32-237-2733
Received: 1 December 2019; Accepted: 6 January 2020; Published: 13 January 2020
Abstract: In this paper, we derive recursive algorithms for calculating the determinant and inverse of the generalized Vandermonde matrix. The main advantage of the recursive algorithms is that their computational complexity is better than that of calculating the determinant and the inverse by means of classical methods developed for general matrices. The results of this article do not require any symbolic calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized (like Matlab or Mathematica) or general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).
Keywords: numerical recipes; numerical algebra; linear algebra; matrix inverse; generalized
Vandermonde matrix; C++
1. Introduction
In previous studies [1,2], we proposed a classical numerical method for inverting the generalized Vandermonde matrix (GVM). The new contributions in this article are as follows:
• We derive recursive algorithms for calculating the determinant and inverse of the generalized Vandermonde matrix.
• The importance of the recursive algorithms becomes clear when we consider practical implementations of the GVM; they are useful each time we add a new interpolation node or a new root of the differential equation in question.
• The recursive algorithms proposed in this work avoid recalculating the determinant and/or inverse from scratch.
• The main advantage of the recursive algorithms is that the computational complexity of the determinant computation is of the O(n) class.
• The results of this article do not require any symbolic calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized (like Matlab or Mathematica) or general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).
In this article, we neatly combined the results from previous studies [3,4] and extended the
computational examples.
The main results of this article are shown in Algorithms 1 and 2. The paper is organized as
follows: Section 2 justifies the importance of the generalized Vandermonde matrices, Section 3 gives
the recursive algorithms for the generalized Vandermonde matrix determinant, Section 4 gives two
recursive algorithms for calculating the desired inverse, Section 5 presents, with an example, the
application of the proposed algorithms, and Section 6 summarizes the article.
2. The Importance of the Generalized Vandermonde Matrices

Throughout this article, the generalized Vandermonde matrix for the root series c1, . . . , cn and a real exponent k is the matrix with the entries

$$V_G^{(k)}(c_1,\dots,c_n)=\left[c_i^{k+j-1}\right],\quad i,j=1,\dots,n. \qquad (1)$$

These matrices arise in a broad range of both theoretical and practical issues. Below, we survey the issues which require the use of the generalized Vandermonde matrices.
• Linear ordinary differential equations (ODE): the Jordan canonical form matrix of the ODE in the Frobenius form is a generalized Vandermonde matrix ([8] pp. 86–95).
• Control issues: investigating the so-called controllability [9] of higher-order systems leads to the issue of inverting the classic Vandermonde matrix [10] (in the case of distinct zeros of the system characteristic polynomial) and the generalized Vandermonde matrix [11] (for systems with multiple characteristic polynomial zeros). As examples of higher-order models of physical objects, we can mention Timoshenko's elastic beam equation [12] (fourth order) and the Korteweg–de Vries equation of waves on shallow water surfaces [13,14] (third, fifth, and seventh order).
• Interpolation: apart from the ordinary polynomial interpolation with single nodes, we consider the Hermite interpolation, allowing multiple interpolation nodes. This issue leads to a system of linear equations with the generalized Vandermonde matrix ([15] pp. 363–373).
• Information coding: the generalized Vandermonde matrix is used in coding and decoding information in the Hermitian code [16].
• Optimization of the non-homogeneous differential equation [17].
3. Recursive Algorithms for the Generalized Vandermonde Matrix Determinant

(A) Suppose we have the value of the Vandermonde determinant for a given series of roots c1, . . . , cn−1. How can we calculate the determinant after inserting another root into an arbitrary position in the root series, without the need to recalculate the whole determinant? This problem corresponds to a situation which frequently emerges in practice, i.e., adding a new node (polynomial interpolation) or increasing the order of the characteristic equation (linear differential equation solving, optimization, and control problems).
(B) Contrary to the previous scenario, we have the Vandermonde determinant value for a given root
series c1 , . . . , cn . We remove an arbitrary root cq from the series. How can we recursively calculate
the determinant in this case? The examples of real applications from the previous point also apply
here. The proper solution is given in Section 3.1.
(C) We are searching for the determinant value, when, in the given root series c1 , . . . , cn , we change
the value of an arbitrarily chosen root (Section 3.1).
(D) We are searching for the determinant value, for the given root series c1 , . . . , cn ,
calculated recursively.
The theorem below is the main tool to construct the above recursive algorithm.
Information 2020, 11, 42 3 of 13
Theorem 1. The following recursive formula is fulfilled for the generalized Vandermonde matrix:
$$\det V_G^{(k)}(c_1,\dots,c_n)=(-1)^{q+1}c_q^k\cdot\det V_G^{(k)}\!\left(c_1,\dots,c_{q-1},c_{q+1},\dots,c_n\right)\cdot\prod_{i=1,\,i\neq q}^{n}(c_i-c_q),\quad q=1,\dots,n. \qquad (2)$$
Proof. Subtracting, from the last column down to the second one, the preceding column multiplied by c_q, we obtain

$$\det V_G^{(k)}(c_1,\dots,c_n)=\det\begin{bmatrix} c_1^k & c_1^{k+1} & \cdots & c_1^{k+n-1}\\ \vdots & \vdots & & \vdots\\ c_q^k & c_q^{k+1} & \cdots & c_q^{k+n-1}\\ \vdots & \vdots & & \vdots\\ c_n^k & c_n^{k+1} & \cdots & c_n^{k+n-1}\end{bmatrix}=\det\begin{bmatrix} c_1^k & c_1^k(c_1-c_q) & \cdots & c_1^k\left(c_1^{n-1}-c_1^{n-2}c_q\right)\\ \vdots & \vdots & & \vdots\\ c_q^k & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ c_n^k & c_n^k(c_n-c_q) & \cdots & c_n^k\left(c_n^{n-1}-c_n^{n-2}c_q\right)\end{bmatrix}.$$
Next, in compliance with Laplace's expansion formula applied to the q-th row, we directly have
$$\det V_G^{(k)}(c_1,\dots,c_n)=(-1)^{q+1}c_q^k\det\begin{bmatrix} c_1^k(c_1-c_q) & c_1^{k+1}(c_1-c_q) & \cdots & c_1^{k+n-2}(c_1-c_q)\\ \vdots & \vdots & & \vdots\\ c_{q-1}^k(c_{q-1}-c_q) & c_{q-1}^{k+1}(c_{q-1}-c_q) & \cdots & c_{q-1}^{k+n-2}(c_{q-1}-c_q)\\ c_{q+1}^k(c_{q+1}-c_q) & c_{q+1}^{k+1}(c_{q+1}-c_q) & \cdots & c_{q+1}^{k+n-2}(c_{q+1}-c_q)\\ \vdots & \vdots & & \vdots\\ c_n^k(c_n-c_q) & c_n^{k+1}(c_n-c_q) & \cdots & c_n^{k+n-2}(c_n-c_q)\end{bmatrix}$$

$$=(-1)^{q+1}c_q^k\det\begin{bmatrix} c_1^k & c_1^{k+1} & \cdots & c_1^{k+n-2}\\ \vdots & \vdots & & \vdots\\ c_{q-1}^k & c_{q-1}^{k+1} & \cdots & c_{q-1}^{k+n-2}\\ c_{q+1}^k & c_{q+1}^{k+1} & \cdots & c_{q+1}^{k+n-2}\\ \vdots & \vdots & & \vdots\\ c_n^k & c_n^{k+1} & \cdots & c_n^{k+n-2}\end{bmatrix}\cdot\prod_{i=1,\,i\neq q}^{n}(c_i-c_q),$$

which completes the proof. □
Case D: The proper recursive function expressing the determinant value, for the given root series c1, . . . , cn, has the following form:

$$\det V_G^{(k)}(c_1,\dots,c_q)=\begin{cases}(-1)^{q+1}c_q^k\cdot\det V_G^{(k)}(c_1,\dots,c_{q-1})\cdot\displaystyle\prod_{i=1}^{q-1}(c_i-c_q), & \text{for } q>1,\\[1ex] c_1^k, & \text{for } q=1.\end{cases}$$
The computational complexity of the presented Algorithms A–C is of the O(n) class with respect to the number of floating-point operations necessary to perform. This enables us to efficiently solve the incremental Vandermonde problems, avoiding the quadratic complexity typical in the Vandermonde field (e.g., References [14,18]). Algorithm D is of the O(n²) class, being more efficient, by a linear factor, than the ordinary Gauss elimination method.
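The three recursions (cases A, B, and D) can be cross-checked numerically; below is a minimal Python sketch based on Equation (2). The function names are ours, not from the paper, and case A is shown for a root appended at the last position q = n:

```python
import math

def det_gvm_recursive(roots, k):
    """Case D: build det V_G^(k)(c_1..c_n) root by root via Equation (2)."""
    det = 0.0
    for q, c in enumerate(roots, start=1):
        if q == 1:
            det = c ** k
        else:
            prod = math.prod(roots[i] - c for i in range(q - 1))
            det = (-1) ** (q + 1) * (c ** k) * det * prod
    return det

def det_insert_root(old_det, roots, c_new, k):
    """Case A: O(n) update of the determinant after appending c_new."""
    q = len(roots) + 1                       # the new root takes position q = n
    prod = math.prod(c - c_new for c in roots)
    return (-1) ** (q + 1) * (c_new ** k) * old_det * prod

def det_remove_root(det_n, roots, q, k):
    """Case B: O(n) downdate after removing the q-th root (1-based)."""
    c_q = roots[q - 1]
    prod = math.prod(c - c_q for i, c in enumerate(roots, start=1) if i != q)
    return det_n / ((-1) ** (q + 1) * (c_q ** k) * prod)

# Cross-check on the worked example of Section 5: roots 1..7, k = 0.5
d7 = det_gvm_recursive([1, 2, 3, 4, 5, 6, 7], 0.5)
assert math.isclose(d7, 298598400 * math.sqrt(35), rel_tol=1e-12)
d8 = det_insert_root(d7, [1, 2, 3, 4, 5, 6, 7], 8, 0.5)
assert math.isclose(det_remove_root(d8, list(range(1, 9)), 8, 0.5), d7, rel_tol=1e-12)
```

Each update touches every root once, so both `det_insert_root` and `det_remove_root` perform O(n) floating-point operations, as stated above.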
For equidistant roots, c_i = c_1 + (i − 1)h, the product in Equation (2) evaluates in closed form, and the recursive formula becomes

$$\det V_G^{(k)}(c_1,\dots,c_n)=(q-1)!\,(n-q)!\,h^{n-1}c_q^k\,\det V_G^{(k)}\!\left(c_1,\dots,c_{q-1},c_{q+1},\dots,c_n\right),\quad q=1,\dots,n. \qquad (3)$$

For the special matrix V_S^{(k)}, with the roots c_i = i (so that h = 1), Equation (3) becomes

$$\det V_S^{(k)}(1,\dots,n)=(q-1)!\,(n-q)!\,c_q^k\,\det V_S^{(k)}(1,\dots,q-1,q+1,\dots,n),\quad q=1,\dots,n. \qquad (5)$$
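Equation (5) can be verified directly against the closed-form determinant implied by the recursion (the product of the c_i^k factors times the classic Vandermonde determinant). A short, illustrative Python check:

```python
import math

def det_gvm(roots, k):
    """Closed form implied by Equation (2): prod c_i^k * prod_{i<j}(c_j - c_i)."""
    powers = math.prod(c ** k for c in roots)
    vandermonde = math.prod(roots[j] - roots[i]
                            for i in range(len(roots))
                            for j in range(i + 1, len(roots)))
    return powers * vandermonde

n, k = 7, 0.5
for q in range(1, n + 1):            # Equation (5) holds for every q = 1..n
    lhs = det_gvm(list(range(1, n + 1)), k)
    reduced = [m for m in range(1, n + 1) if m != q]
    rhs = math.factorial(q - 1) * math.factorial(n - q) * (q ** k) * det_gvm(reduced, k)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```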
4. Recursive Algorithms for Inverting the Generalized Vandermonde Matrix

The problem of inverting the generalized Vandermonde matrix was investigated previously [5]. Reference [5] features an explicit form of the GVM inverse, which makes use of the so-called elementary symmetric functions σ_{i,j}^{(n)}. The efficient algorithm, of the O(n²) computational complexity class, for calculating the elementary symmetric functions is given in Reference [5]. Now, it is possible to present the explicit form of the inverse GVM given by Reference [5] (p. 647).
$$\left[V_G^{(k)}(c_1,\dots,c_n)\right]^{-1}=\begin{bmatrix} \dfrac{(-1)^{n+1}\sigma_{n,1}^{(n)}}{c_1^k\prod\limits_{i=2}^{n}(c_1-c_i)} & \dfrac{(-1)^{n+1}\sigma_{n,2}^{(n)}}{c_2^k\prod\limits_{i=1,i\neq 2}^{n}(c_2-c_i)} & \cdots & \dfrac{(-1)^{n+1}\sigma_{n,n}^{(n)}}{c_n^k\prod\limits_{i=1}^{n-1}(c_n-c_i)}\\[3ex] \dfrac{(-1)^{n+2}\sigma_{n-1,1}^{(n)}}{c_1^k\prod\limits_{i=2}^{n}(c_1-c_i)} & \dfrac{(-1)^{n+2}\sigma_{n-1,2}^{(n)}}{c_2^k\prod\limits_{i=1,i\neq 2}^{n}(c_2-c_i)} & \cdots & \dfrac{(-1)^{n+2}\sigma_{n-1,n}^{(n)}}{c_n^k\prod\limits_{i=1}^{n-1}(c_n-c_i)}\\[2ex] \vdots & \vdots & \ddots & \vdots\\[1ex] \dfrac{(-1)^{n+n}\sigma_{1,1}^{(n)}}{c_1^k\prod\limits_{i=2}^{n}(c_1-c_i)} & \dfrac{(-1)^{n+n}\sigma_{1,2}^{(n)}}{c_2^k\prod\limits_{i=1,i\neq 2}^{n}(c_2-c_i)} & \cdots & \dfrac{(-1)^{n+n}\sigma_{1,n}^{(n)}}{c_n^k\prod\limits_{i=1}^{n-1}(c_n-c_i)}\end{bmatrix}. \qquad (7)$$
Let us return to the objective of this section, i.e., the construction of an efficient, recursive algorithm for inverting the generalized Vandermonde matrix. This issue can be formalized as follows: we know the GVM inverse for the root series c1, . . . , cn. We want to efficiently calculate the inverse for the root series c1, . . . , cn, cn+1, making use of the known inverse. Let V_G^{(k)}(n + 1) = V_G^{(k)}(c_1, c_2, . . . , c_{n+1}); then, the theorem below enables recursively calculating the desired inverse.
Theorem 2. The following recursive formula is fulfilled for the generalized Vandermonde matrix inverse:

$$\left[V_G^{(k)}(n+1)\right]^{-1}=\begin{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}+\dfrac{1}{d}\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n}\\ \vdots\\ c_n^{k+n}\end{bmatrix}\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1} & -\dfrac{1}{d}\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n}\\ \vdots\\ c_n^{k+n}\end{bmatrix}\\[3ex] -\dfrac{1}{d}\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1} & \dfrac{1}{d}\end{bmatrix}, \qquad (8)$$

$$d=c_{n+1}^{k+n}-\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n}\\ \vdots\\ c_n^{k+n}\end{bmatrix}, \qquad (9)$$

where [V_G^{(k)}(n)]^{-1} denotes the known GVM inverse for the roots c1, . . . , cn.
Proof. To prove the matrix recursive identity in Equation (8), we make use of the block matrix algebra rules. A useful formula, expressing the block matrix inverse by the inverses of the respective sub-matrices, is:

$$A^{-1}=\begin{bmatrix}A_1 & A_2\\ A_3 & A_4\end{bmatrix}^{-1}=\begin{bmatrix}A_1^{-1}+A_1^{-1}A_2B^{-1}A_3A_1^{-1} & -A_1^{-1}A_2B^{-1}\\ -B^{-1}A_3A_1^{-1} & B^{-1}\end{bmatrix},\quad B=A_4-A_3A_1^{-1}A_2. \qquad (10)$$
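Equation (10) can be sanity-checked numerically. The following generic Python sketch uses plain nested lists; `inverse` is a simple Gauss–Jordan helper written here only for the check, and the diagonal shift merely keeps the random test matrix safely invertible:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mneg(A):
    return [[-a for a in row] for row in A]

def inverse(M):
    """Gauss-Jordan elimination with partial pivoting (checking helper)."""
    n = len(M)
    aug = [list(row) + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def block_inverse(A1, A2, A3, A4):
    """Equation (10): inverse of [[A1, A2], [A3, A4]] from A1^-1 and B^-1."""
    A1i = inverse(A1)
    B = madd(A4, mneg(matmul(A3, matmul(A1i, A2))))    # B = A4 - A3 A1^-1 A2
    Bi = inverse(B)
    TL = madd(A1i, matmul(A1i, matmul(A2, matmul(Bi, matmul(A3, A1i)))))
    TR = mneg(matmul(A1i, matmul(A2, Bi)))
    BL = mneg(matmul(Bi, matmul(A3, A1i)))
    return [tl + tr for tl, tr in zip(TL, TR)] + [bl + br for bl, br in zip(BL, Bi)]

random.seed(7)
m = 4                                   # a 4 x 4 matrix split into 2 + 2 blocks
M = [[random.uniform(-1, 1) + (4.0 if i == j else 0.0) for j in range(m)] for i in range(m)]
A1 = [r[:2] for r in M[:2]]; A2 = [r[2:] for r in M[:2]]
A3 = [r[:2] for r in M[2:]]; A4 = [r[2:] for r in M[2:]]
got, want = block_inverse(A1, A2, A3, A4), inverse(M)
assert all(abs(g - w) < 1e-10 for gr, wr in zip(got, want) for g, w in zip(gr, wr))
```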
Thus, if we know the inverse of the sub-matrix A1 and the inverse of B, we can directly obtain the
inverse of the block matrix A by performing a few matrix multiplications. For different c1 , c2 , . . . , cn+1 ,
the GVM matrix is invertible and Equation (10) holds true; thus, the coefficient d given by Equation (9)
(k )
is non-zero. Now, let us take into account the generalized Vandermonde matrix VG (n + 1) for the
n + 1 roots c1 , . . . , cn+1 . It can be treated as a block matrix of the following form:
$$V_G^{(k)}(n+1)=\begin{bmatrix} c_1^k & \cdots & c_1^{k+n-1} & c_1^{k+n}\\ \vdots & & \vdots & \vdots\\ c_n^k & \cdots & c_n^{k+n-1} & c_n^{k+n}\\ c_{n+1}^k & \cdots & c_{n+1}^{k+n-1} & c_{n+1}^{k+n}\end{bmatrix}=\begin{bmatrix} V_G^{(k)}(n) & \begin{bmatrix}c_1^{k+n}\\ \vdots\\ c_n^{k+n}\end{bmatrix}\\[2ex] \begin{bmatrix}c_{n+1}^k & \cdots & c_{n+1}^{k+n-1}\end{bmatrix} & c_{n+1}^{k+n}\end{bmatrix}. \qquad (11)$$
Now, applying the block matrix identities in Equation (10) to the block matrix in Equation (11), we directly obtain the thesis in Equations (8) and (9). □

Despite the explicit form of the block matrix inverse in Equation (8), its efficient algorithmic implementation is not obvious. The order in which we calculate the matrix term

$$\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n} & \cdots & c_n^{k+n}\end{bmatrix}^T\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}$$

has a crucial influence on the final computational complexity of the algorithm. Therefore, let us analyze all three possible orders.
(A) Left-to-right order of multiplications.

One can notice that the product $\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n} & \cdots & c_n^{k+n}\end{bmatrix}^T\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}$ has dimensions equal to n × n. Hence, the multiplication of this last matrix and of the matrix $\left[V_G^{(k)}(n)\right]^{-1}$ is an O(n³) class algorithm. Thus, the computational complexity in the left-to-right order is of the O(n³) class.
(B) Right-to-left order.
A detailed analysis leads also to the O(n3 ) class.
(C) The order of the following form:

$$\left\{\left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n} & \cdots & c_n^{k+n}\end{bmatrix}^T\right\}\left\{\begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}\right\}. \qquad (12)$$
Finally, all we have to do is to multiply these two last vectors, which obviously is an operation of
the O(n2 ) class.
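The effect of the multiplication order can be observed directly by counting scalar multiplications, as in the small Python sketch below. The matrices are dummies (only the dimensions matter), and `matmul_counted` is an illustrative helper of ours:

```python
def matmul_counted(A, B, counter):
    """Naive matrix product that counts scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    counter[0] += p * q * r
    return [[sum(A[i][t] * B[t][j] for t in range(q)) for j in range(r)] for i in range(p)]

n = 32
Vinv = [[float(i == j) for j in range(n)] for i in range(n)]  # stands for [V_G^(k)(n)]^-1
u = [[1.0] for _ in range(n)]          # column [c_1^{k+n} ... c_n^{k+n}]^T
w = [[1.0] * n]                        # row [c_{n+1}^k ... c_{n+1}^{k+n-1}]

cnt = [0]                              # order (A): ((V^-1 u) w) V^-1
matmul_counted(matmul_counted(matmul_counted(Vinv, u, cnt), w, cnt), Vinv, cnt)
assert cnt[0] == n ** 3 + 2 * n ** 2   # cubic in n

cnt = [0]                              # order (C), Equation (12): (V^-1 u) (w V^-1)
matmul_counted(matmul_counted(Vinv, u, cnt), matmul_counted(w, Vinv, cnt), cnt)
assert cnt[0] == 3 * n ** 2            # quadratic in n
```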
Summarizing, the most efficient is the multiplication order (C), giving a quadratic computational complexity. All other orders lead to the worse O(n³) class algorithms. Combining the above results, we can give the algorithm which solves the incremental inverse problem, i.e., calculating the GVM inverse for the root series c1, . . . , cn, cn+1 on the basis of the known inverse for the root series c1, . . . , cn.

4.3. Algorithm 1

Using the incremental Algorithm 1, we can build the final, recursive algorithm for inverting the generalized Vandermonde matrix of the form in Equation (1).

Algorithm 1: Incremental Inverting of the Generalized Vandermonde Matrix
1. Function Incremental_Inverse(n; k; c1, . . . , cn+1; [V_G^{(k)}(n)]^{-1}): [V_G^{(k)}(n+1)]^{-1}
2. Input:
   - n : integer — the number of roots − 1
   - k : real — the Vandermonde matrix general exponent
   - c1, . . . , cn+1 : real — the roots
   - [V_G^{(k)}(n)]^{-1} : real n×n — the GVM inverse for the roots c1, . . . , cn
3. Locals:
   - v1, v2 : real n
   - d : real
4. Calculate the auxiliary vectors v1, v2:

$$v_1 := \left[V_G^{(k)}(n)\right]^{-1}\begin{bmatrix}c_1^{k+n} & \cdots & c_n^{k+n}\end{bmatrix}^T, \qquad (13)$$

$$v_2 := \begin{bmatrix}c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1}\end{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}. \qquad (14)$$

5. Calculate the coefficient d using Equation (9):

$$d := c_{n+1}^{k+n} - v_2\begin{bmatrix}c_1^{k+n} & \cdots & c_n^{k+n}\end{bmatrix}^T. \qquad (15)$$

6. Build the desired matrix inverse [V_G^{(k)}(n+1)]^{-1} as a block matrix:

$$\left[V_G^{(k)}(n+1)\right]^{-1} := \begin{bmatrix}\left[V_G^{(k)}(n)\right]^{-1}+\dfrac{v_1 v_2}{d} & -\dfrac{v_1}{d}\\[1ex] -\dfrac{v_2}{d} & \dfrac{1}{d}\end{bmatrix}. \qquad (16)$$

7. Output:
   - [V_G^{(k)}(n+1)]^{-1} : real (n+1)×(n+1) — the inverse for the roots c1, . . . , cn, cn+1.
8. End.
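Algorithm 1 translates to Python almost line by line. The sketch below follows steps (13)–(16); for the test, we use exact rational arithmetic with k = 0, so every entry is a `Fraction`, and obtain the starting inverse by an ordinary Gauss–Jordan elimination (the helper names are ours):

```python
from fractions import Fraction

def gvm(roots, k):
    """V_G^(k): entry (i, j) equals c_i^(k + j - 1)."""
    n = len(roots)
    return [[c ** (k + j) for j in range(n)] for c in roots]

def inverse(M):
    """Exact Gauss-Jordan inverse (used only to obtain the starting inverse)."""
    n = len(M)
    aug = [list(row) + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def incremental_inverse(n, k, roots, Vinv):
    """Algorithm 1: the (n+1) x (n+1) inverse from the known n x n inverse."""
    c_new = roots[n]
    col = [roots[i] ** (k + n) for i in range(n)]                 # [c_i^{k+n}]
    row = [c_new ** (k + j) for j in range(n)]                    # [c_{n+1}^{k+j}]
    v1 = [sum(Vinv[i][j] * col[j] for j in range(n)) for i in range(n)]   # (13)
    v2 = [sum(row[i] * Vinv[i][j] for i in range(n)) for j in range(n)]   # (14)
    d = c_new ** (k + n) - sum(v2[j] * col[j] for j in range(n))          # (15)
    out = [[Vinv[i][j] + v1[i] * v2[j] / d for j in range(n)] + [-v1[i] / d]
           for i in range(n)]                                             # (16)
    out.append([-v2[j] / d for j in range(n)] + [Fraction(1) / d])
    return out

k = 0   # integer exponent, so Fraction arithmetic stays exact
roots = [Fraction(i) for i in range(1, 9)]
V8inv = incremental_inverse(7, k, roots, inverse(gvm(roots[:7], k)))
V8 = gvm(roots, k)
prod = [[sum(V8[i][t] * V8inv[t][j] for t in range(8)) for j in range(8)] for i in range(8)]
assert all(prod[i][j] == (1 if i == j else 0) for i in range(8) for j in range(8))
```

Only the ordering of the products in steps (13)–(16) keeps the update quadratic; the outer product v1·v2 in step 6 is formed last.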
4.4. Computational Complexity

It is possible to note the following advantages of the computational complexity of inverting the GVM by recursive algorithms in comparison with the classical Equation (7), the complexity of which is O(n³): the classical Equation (7) requires calculating the elementary symmetric functions σ_{i,j}^{(n)} for i, j = 1, . . . , n; to this aim, Algorithm 2.1, p. 644 [5], with quadratic complexity, should be executed n times (Formula 2.5, p. 645).
As we analyzed in the point (C), Section 4.2, the computational complexity of the incremental Algorithm 1, which is constructed on the basis of Equation (12), is of the O(n²) class with respect to the number of floating-point operations which have to be performed. This is possible thanks to the proper multiplication order in Equation (12). This way, we avoid the O(n³) complexity while adding a new root.
Algorithm 2: Recursive Inverting of the Generalized Vandermonde Matrix
1. Function Recursive_Inverse(n; k; c1, . . . , cn): [V_G^{(k)}(n)]^{-1}
2. Input:
   - n : integer — the number of roots
   - k : real — the Vandermonde matrix general exponent
   - c1, . . . , cn : real — the roots
3. Locals:
   - V [][] of real
   - i : integer
4. Calculate the variable V:
   V := [1/c_1^k]
5. For i = 1 To n − 1
      V := Incremental_Inverse(i; k; c1, . . . , ci+1; V)
   Next i
6. Output:
   - V : real n×n — the desired GVM inverse for the roots c1, . . . , cn.
7. End.

The computational complexity of the recursive Algorithm 2 is of the O(n³) class.
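Algorithm 2 thus reduces to a loop over Algorithm 1, starting from the trivial 1 × 1 inverse [1/c_1^k]. A self-contained Python sketch (again with k = 0 and exact `Fraction` arithmetic for the test; `recursive_inverse` is our illustrative name):

```python
from fractions import Fraction

def incremental_inverse(n, k, roots, Vinv):
    """One step of Algorithm 1 (Equations (13)-(16))."""
    c_new = roots[n]
    col = [roots[i] ** (k + n) for i in range(n)]
    row = [c_new ** (k + j) for j in range(n)]
    v1 = [sum(Vinv[i][j] * col[j] for j in range(n)) for i in range(n)]
    v2 = [sum(row[i] * Vinv[i][j] for i in range(n)) for j in range(n)]
    d = c_new ** (k + n) - sum(v2[j] * col[j] for j in range(n))
    out = [[Vinv[i][j] + v1[i] * v2[j] / d for j in range(n)] + [-v1[i] / d]
           for i in range(n)]
    out.append([-v2[j] / d for j in range(n)] + [1 / d])
    return out

def recursive_inverse(n, k, roots):
    """Algorithm 2: grow the inverse from 1 x 1, one root at a time."""
    V = [[1 / (roots[0] ** k)]]
    for i in range(1, n):
        V = incremental_inverse(i, k, roots, V)
    return V

k = 0
roots = [Fraction(i) for i in (1, 3, 4, 7, 9, 11)]      # arbitrary distinct roots
n = len(roots)
Vinv = recursive_inverse(n, k, roots)
V = [[c ** (k + j) for j in range(n)] for c in roots]
prod = [[sum(V[i][t] * Vinv[t][j] for t in range(n)) for j in range(n)] for i in range(n)]
assert all(prod[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))
```

The loop performs n − 1 incremental steps of cost O(i²), which sums to the O(n³) class stated above, yet every intermediate inverse remains available for free.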
Last but not least, Equation (7) requires recalculating the desired inverse each time we add a
new root. The main idea of the recursive algorithms we proposed is to make use of the already
calculated inverse. This is how high efficiency was obtained. On the graphs below, we practically
compare the efficiency of the recursive algorithms with the standard algorithms (in a non-recursive
form, for matrices with arbitrary entries, contrary to the algorithms presented in this article, which are
developed specially for the GVM) embedded in Matlab® . On the left, we can see the execution time of
the standard and recursive algorithms, and, on the right, we can see the relative performance gain
(recursive algorithms vs. the standard ones for inversion and determinant calculation).
Figures 1 and 2 show practical performance tests of Algorithms 1 and 2.
[Chart: algorithm execution time [ms] vs. matrix dimension (1–32); series: standard determinant, recursive determinant, standard inversion, recursive inversion.]

Figure 1. The execution time of the standard and recursive algorithms.

[Chart: relative performance gain vs. matrix dimension (1–32); series: matrix determinant, matrix inversion.]

Figure 2. The relative performance gain of the algorithms.
5. Example

We show a practical application of the algorithms from this article using the same numerical example as in Reference [5] (p. 649), to enable easy comparison of the two opposite algorithms: classical and recursive. Let us consider the generalized Vandermonde matrix V_G^{(k)}(n) and its inverse [V_G^{(k)}(n)]^{-1} with the following parameters:
- general exponent: k = 0.5.
- size: n = 7.
- roots: c_i = i, i = 1, . . . , 7.

The generalized Vandermonde matrix of such parameters has the following form:
$$V_G^{(0.5)}(7)=\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1\\ \sqrt{2} & 2\sqrt{2} & 4\sqrt{2} & 8\sqrt{2} & 16\sqrt{2} & 32\sqrt{2} & 64\sqrt{2}\\ \sqrt{3} & 3\sqrt{3} & 9\sqrt{3} & 27\sqrt{3} & 81\sqrt{3} & 243\sqrt{3} & 729\sqrt{3}\\ 2 & 8 & 32 & 128 & 512 & 2048 & 8192\\ \sqrt{5} & 5\sqrt{5} & 25\sqrt{5} & 125\sqrt{5} & 625\sqrt{5} & 3125\sqrt{5} & 15625\sqrt{5}\\ \sqrt{6} & 6\sqrt{6} & 36\sqrt{6} & 216\sqrt{6} & 1296\sqrt{6} & 7776\sqrt{6} & 46656\sqrt{6}\\ \sqrt{7} & 7\sqrt{7} & 49\sqrt{7} & 343\sqrt{7} & 2401\sqrt{7} & 16807\sqrt{7} & 117649\sqrt{7}\end{bmatrix}. \qquad (17)$$
The determinant of the matrix V_G^{(0.5)}(7) and its inverse have the following forms, respectively:

$$\det V_G^{(0.5)}(7)=298598400\sqrt{35}, \qquad (18)$$

$$\left[V_G^{(0.5)}(7)\right]^{-1}=\begin{bmatrix} 7 & \frac{-21}{\sqrt{2}} & \frac{35}{\sqrt{3}} & \frac{-35}{2} & \frac{21}{\sqrt{5}} & \frac{-7}{\sqrt{6}} & \frac{1}{\sqrt{7}}\\[1ex] \frac{-223}{20} & \frac{879}{20\sqrt{2}} & \frac{-949}{12\sqrt{3}} & 41 & \frac{-201}{4\sqrt{5}} & \frac{1019}{60\sqrt{6}} & \frac{-7\sqrt{7}}{20}\\[1ex] \frac{319}{45} & \frac{-3929}{120\sqrt{2}} & \frac{389}{6\sqrt{3}} & \frac{-2545}{72} & \frac{134}{3\sqrt{5}} & \frac{-1849}{120\sqrt{6}} & \frac{29\sqrt{7}}{90}\\[1ex] \frac{-37}{16} & \frac{71}{6\sqrt{2}} & \frac{-1219}{48\sqrt{3}} & \frac{44}{3} & \frac{-185\sqrt{5}}{48} & \frac{41}{6\sqrt{6}} & \frac{-7\sqrt{7}}{48}\\[1ex] \frac{59}{144} & \frac{-9}{4\sqrt{2}} & \frac{247}{48\sqrt{3}} & \frac{-113}{36} & \frac{69}{16\sqrt{5}} & \frac{-19}{12\sqrt{6}} & \frac{5\sqrt{7}}{144}\\[1ex] \frac{-3}{80} & \frac{13}{60\sqrt{2}} & \frac{-25}{48\sqrt{3}} & \frac{1}{3} & \frac{-23}{48\sqrt{5}} & \frac{11}{60\sqrt{6}} & \frac{-\sqrt{7}}{240}\\[1ex] \frac{1}{720} & \frac{-1}{120\sqrt{2}} & \frac{1}{48\sqrt{3}} & \frac{-1}{72} & \frac{1}{48\sqrt{5}} & \frac{-1}{120\sqrt{6}} & \frac{1}{720\sqrt{7}}\end{bmatrix}. \qquad (19)$$
5.1. Objective

Our objective is to find the determinant and inverse of the generalized Vandermonde matrix V_G^{(0.5)}(8), with the roots c_i = i, i = 1, . . . , 8, in the recursive way.
5.2. The Recursive Determinant

Applying the recursive formula, Equation (2), with q = 8, to the known determinant in Equation (18), we obtain the desired determinant by merely eight scalar multiplications:

$$\det V_G^{(0.5)}(8)=(-1)^{8+1}\cdot 8^{0.5}\cdot\det V_G^{(0.5)}(7)\cdot\prod_{i=1}^{7}(i-8)=2\sqrt{2}\cdot 5040\cdot 298598400\sqrt{35}=3009871872000\sqrt{70}.$$

5.3. The Recursive Inverse

Following Algorithm 1, we first calculate the auxiliary vectors v1, v2 by Equations (13) and (14):

$$v_1=\left[V_G^{(0.5)}(7)\right]^{-1}\begin{bmatrix}1 & 128\sqrt{2} & 2187\sqrt{3} & 32768 & 78125\sqrt{5} & 279936\sqrt{6} & 823543\sqrt{7}\end{bmatrix}^T=\begin{bmatrix}5040 & -13068 & 13132 & -6769 & 1960 & -322 & 28\end{bmatrix}^T, \qquad (20)$$

$$v_2=\begin{bmatrix}8^{0.5} & 8^{1.5} & \cdots & 8^{6.5}\end{bmatrix}\left[V_G^{(0.5)}(7)\right]^{-1}=\begin{bmatrix}2\sqrt{2} & -14 & 14\sqrt{6} & -35\sqrt{2} & 14\sqrt{10} & -14\sqrt{3} & 2\sqrt{14}\end{bmatrix}. \qquad (21)$$

Next, Equation (15) yields the coefficient d:

$$d=8^{7.5}-v_2\begin{bmatrix}1 & 128\sqrt{2} & 2187\sqrt{3} & 32768 & 78125\sqrt{5} & 279936\sqrt{6} & 823543\sqrt{7}\end{bmatrix}^T=4194304\sqrt{2}-4184224\sqrt{2}=10080\sqrt{2}. \qquad (22)$$

Finally, Equation (16) builds the desired inverse as a block matrix:

$$\left[V_G^{(0.5)}(8)\right]^{-1}=\begin{bmatrix}\left[V_G^{(0.5)}(7)\right]^{-1}+\dfrac{v_1v_2}{d} & -\dfrac{v_1}{d}\\[1ex] -\dfrac{v_2}{d} & \dfrac{1}{d}\end{bmatrix}=$$

$$=\begin{bmatrix} 8 & -14\sqrt{2} & \frac{56\sqrt{3}}{3} & -35 & \frac{56\sqrt{5}}{5} & \frac{-14\sqrt{6}}{3} & \frac{8\sqrt{7}}{7} & \frac{-\sqrt{2}}{4}\\[1ex] \frac{-481}{35} & \frac{621\sqrt{2}}{20} & \frac{-2003\sqrt{3}}{45} & \frac{691}{8} & \frac{-141\sqrt{5}}{5} & \frac{2143\sqrt{6}}{180} & \frac{-103\sqrt{7}}{35} & \frac{363\sqrt{2}}{560}\\[1ex] \frac{349}{36} & \frac{-18353\sqrt{2}}{720} & \frac{797\sqrt{3}}{20} & \frac{-1457}{18} & \frac{4891\sqrt{5}}{180} & \frac{-187\sqrt{6}}{16} & \frac{527\sqrt{7}}{180} & \frac{-469\sqrt{2}}{720}\\[1ex] \frac{-329}{90} & \frac{15289\sqrt{2}}{1440} & \frac{-268\sqrt{3}}{15} & \frac{10993}{288} & \frac{-1193\sqrt{5}}{90} & \frac{2803\sqrt{6}}{480} & \frac{-67\sqrt{7}}{45} & \frac{967\sqrt{2}}{2880}\\[1ex] \frac{115}{144} & \frac{-179\sqrt{2}}{72} & \frac{71\sqrt{3}}{16} & \frac{-179}{18} & \frac{2581\sqrt{5}}{720} & \frac{-13\sqrt{6}}{8} & \frac{61\sqrt{7}}{144} & \frac{-7\sqrt{2}}{72}\\[1ex] \frac{-73}{720} & \frac{239\sqrt{2}}{720} & \frac{-149\sqrt{3}}{240} & \frac{209}{144} & \frac{-391\sqrt{5}}{720} & \frac{61\sqrt{6}}{240} & \frac{-49\sqrt{7}}{720} & \frac{23\sqrt{2}}{1440}\\[1ex] \frac{1}{144} & \frac{-17\sqrt{2}}{720} & \frac{11\sqrt{3}}{240} & \frac{-1}{9} & \frac{31\sqrt{5}}{720} & \frac{-\sqrt{6}}{48} & \frac{29\sqrt{7}}{5040} & \frac{-\sqrt{2}}{720}\\[1ex] \frac{-1}{5040} & \frac{\sqrt{2}}{1440} & \frac{-\sqrt{3}}{720} & \frac{1}{288} & \frac{-\sqrt{5}}{720} & \frac{\sqrt{6}}{1440} & \frac{-\sqrt{7}}{5040} & \frac{\sqrt{2}}{20160}\end{bmatrix}. \qquad (23)$$
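The auxiliary quantities of this example can be reproduced numerically. The Python sketch below factors V_G^{(0.5)} = diag(c_i^{0.5})·W with W the classic (integer-entry) Vandermonde matrix, so that only W needs to be inverted, exactly, in rational arithmetic; it then checks v2 from Equation (21) and d = 10080·sqrt(2):

```python
from fractions import Fraction
import math

def inverse(M):
    """Exact Gauss-Jordan inverse over the rationals."""
    n = len(M)
    aug = [list(row) + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

n, k, c_new = 7, 0.5, 8
roots = list(range(1, n + 1))
# V_G^{(0.5)} = diag(sqrt(c_i)) * W  with  W[i][j] = c_i^j (classic Vandermonde),
# hence V^-1 = W^-1 * diag(1/sqrt(c_j)).
W = [[Fraction(c) ** j for j in range(n)] for c in roots]
Winv = inverse(W)
# v2 = [8^0.5 ... 8^6.5] V^-1 = sqrt(8) * (sum_i 8^i * Winv[i][j]) / sqrt(c_j)
s = [sum(Fraction(c_new) ** i * Winv[i][j] for i in range(n)) for j in range(n)]
v2 = [math.sqrt(c_new) * float(s[j]) / math.sqrt(roots[j]) for j in range(n)]
expected = [2 * math.sqrt(2), -14.0, 14 * math.sqrt(6), -35 * math.sqrt(2),
            14 * math.sqrt(10), -14 * math.sqrt(3), 2 * math.sqrt(14)]
assert all(math.isclose(a, b, rel_tol=1e-9) for a, b in zip(v2, expected))
d = c_new ** (k + n) - sum(v2[j] * roots[j] ** (k + n) for j in range(n))   # Equation (22)
assert math.isclose(d, 10080 * math.sqrt(2), rel_tol=1e-9)
```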
One can see that the incrementally received inverse [V_G^{(0.5)}(8)]^{-1} is equivalent to the inverse obtained by the classical algorithms in Reference [5] (p. 649).
5.4. Summary of the Example

In this example, we recursively calculated the determinant and inverse of the V_G^{(0.5)}(8) matrix, making use of the known determinant and the inverse of the V_G^{(0.5)}(7) matrix, respectively. It is worth noting that, to perform this, there were merely eight scalar multiplications necessary for the determinant, and 3·7² + 2·7 scalar multiplications necessary for the inverse. This confirms the high efficiency of the recursive approach.
6.
6. Research
Research and
and Extensions
Extensions
The
The following
followingcan
canbe
beseen
seenas
asthe
thedesired
desiredfuture
futureresearch
researchdirections:
directions:
Construction of the parallel algorithm for the generalized
generalized Vandermonde
Vandermondematrices.
matrices.
vector-oriented hardware
Adaptation of the algorithms to vector-oriented hardware units.
units.
Combination of both.
Application on
Application on Graphics
Graphics Hardware
Hardware Unit
Unit architecture.
architecture.
Application of the results in new branches, like deep learning and artificial intelligence.
The proposed results could also be applied to other related applications which use Vandermonde or similar-type matrices, such as the following [19–21]:
• Total variation problems and optimization methods;
• Power systems networks;
• The numerical problem preconditioning;
• Fractional order differential equations.
7. Summary
In this paper, we derived recursive numerical recipes for calculating the determinant and inverse
of the generalized Vandermonde matrix. The results presented in this article can be performed
automatically using a numerical algorithm in any programming language. The computational
complexity of the presented algorithms is better than the ordinary GVM determinant/inverse methods.
The presented results neatly combine the theory of algorithms, particularly the recursion programming paradigm and computational complexity analysis, with numerical recipes, which we consider the right approach to constructing computational algorithms.
Considering software production, recursion is not a purely academic paradigm; it has been used successfully by programmers for decades.
Funding: This work was supported by Statutory Research funds of Institute of Informatics, Silesian University of
Technology, Gliwice, Poland (BK/204/RAU2/2019).
Acknowledgments: I would like to thank my university colleagues for stimulating discussions and reviewers for
apt remarks which significantly improved the paper.
Conflicts of Interest: The author declares no conflict of interest.
References
1. Respondek, J. On the confluent Vandermonde matrix calculation algorithm. Appl. Math. Lett. 2011, 24,
103–106. [CrossRef]
2. Respondek, J. Numerical recipes for the high efficient inverse of the confluent Vandermonde matrices. Appl.
Math. Comput. 2011, 218, 2044–2054. [CrossRef]
3. Respondek, J. Highly Efficient Recursive Algorithms for the Generalized Vandermonde Matrix. In Proceedings
of the 30th European Simulation and Modelling Conference—ESM’ 2016, Las Palmas de Gran Canaria, Spain,
26–28 October 2016; pp. 15–19.
4. Respondek, J. Recursive Algorithms for the Generalized Vandermonde Matrix Determinants. In Proceedings
of the 33rd Annual European Simulation and Modelling Conference—ESM’ 2019, Palma de Mallorca, Spain,
28–30 October 2019; pp. 53–57.
5. El-Mikkawy, M.E.A. Explicit inverse of a generalized Vandermonde matrix. Appl. Math. Comput. 2003, 146,
643–651. [CrossRef]
6. Hou, S.; Hou, E. Recursive computation of inverses of confluent Vandermonde matrices. Electron. J. Math.
Technol. 2007, 1, 12–26.
7. Hou, S.; Pang, W. Inversion of confluent Vandermonde matrices. Comput. Math. Appl. 2002, 43, 1539–1547.
[CrossRef]
8. Gorecki, H. Optimization of the Dynamical Systems; PWN: Warsaw, Poland, 1993.
9. Klamka, J. Controllability of Dynamical Systems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991.
10. Respondek, J. Approximate controllability of infinite dimensional systems of the n-th order. Int. J. Appl.
Math. Comput. Sci. 2008, 18, 199–212. [CrossRef]
11. Respondek, J. Approximate controllability of the n-th order infinite dimensional systems with controls
delayed by the control devices. Int. J. Syst. Sci. 2008, 39, 765–782. [CrossRef]
12. Timoshenko, S. Vibration Problems in Engineering, 3rd ed.; D. Van Nostrand Company: London, UK, 1955.
13. Bellman, R. Introduction to Matrix Analysis; McGraw-Hill Book Company: New York, NY, USA, 1960.
14. Eisinberg, A.; Fedele, G. On the inversion of the Vandermonde matrix. Appl. Math. Comput. 2006, 174,
1384–1397. [CrossRef]
15. Kincaid, D.R.; Cheney, E.W. Numerical Analysis: Mathematics of Scientific Computing, 3rd ed.; Brooks Cole:
Florence, KY, USA, 2001.
16. Lee, K.; O’Sullivan, M.E. Algebraic soft-decision decoding of Hermitian codes. IEEE Trans. Inf. Theory 2010,
56, 2587–2600. [CrossRef]
17. Gorecki, H. On switching instants in minimum-time control problem. One-dimensional case n-tuple
eigenvalue. Bull. Acad. Pol. Sci. 1968, 16, 23–30.
18. Yan, S.; Yang, A. Explicit Algorithm to the Inverse of Vandermonde Matrix. In Proceedings of the 2009 International Conference on Test and Measurement, Hong Kong, China, 5–6 December 2009; pp. 176–179.
19. Dassios, I.; Fountoulakis, K.; Gondzio, J. A preconditioner for a primal-dual Newton conjugate gradients method for compressed sensing problems. SIAM J. Sci. Comput. 2015, 37, A2783–A2812. [CrossRef]
20. Dassios, I.; Baleanu, D. Optimal solutions for singular linear systems of Caputo fractional differential
equations. Math. Methods Appl. Sci. 2018. [CrossRef]
21. Dassios, I. Analytic Loss Minimization: Theoretical Framework of a Second Order Optimization Method.
Symmetry 2019, 11, 136. [CrossRef]
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).