

Article
Recursive Matrix Calculation Paradigm by the
Example of Structured Matrix
Jerzy S. Respondek
Institute of Computer Science, Faculty of Automatic Control, Electronics and Computer Science, Silesian
University of Technology, ul. Akademicka 16, 44-100 Gliwice, Poland; [email protected];
Tel.: +48-32-237-2151; Fax: +48-32-237-2733

Received: 1 December 2019; Accepted: 6 January 2020; Published: 13 January 2020

Abstract: In this paper, we derive recursive algorithms for calculating the determinant and inverse
of the generalized Vandermonde matrix. The main advantage of the recursive algorithms is that their
computational complexity is better than that of the classical methods for the determinant and the
inverse, developed for general matrices. The results of this article do not require any symbolic
calculations and, therefore, can be performed by a numerical algorithm implemented in a specialized
(like Matlab or Mathematica) or general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).

Keywords: numerical recipes; numerical algebra; linear algebra; matrix inverse; generalized
Vandermonde matrix; C++

1. Introduction
In previous studies [1,2], we proposed a classical numerical method for inverting the generalized
Vandermonde matrix (GVM). The new contributions in this article are as follows:

• We derive recursive algorithms for calculating the determinant and inverse of the generalized
Vandermonde matrix.
• The importance of the recursive algorithms becomes clear when we consider practical
applications of the GVM; they are useful each time we add a new interpolation node
or a new root of the differential equation in question.
• The recursive algorithms, which we propose in this work, allow us to avoid recalculating the
determinant and/or inverse from scratch.
• The main advantage of the recursive algorithms is the fact that the computational complexity of
the presented algorithm is of the O(n) class for the computation of the determinant.
• The results of this article do not require any symbolic calculations and, therefore, can be performed
by a numerical algorithm implemented in a specialized environment (like Matlab or Mathematica) or a
general-purpose programming language (C, C++, Java, Pascal, Fortran, etc.).

In this article, we neatly combined the results from previous studies [3,4] and extended the
computational examples.
The main results of this article are shown in Algorithms 1 and 2. The paper is organized as
follows: Section 2 justifies the importance of the generalized Vandermonde matrices, Section 3 gives
the recursive algorithms for the generalized Vandermonde matrix determinant, Section 4 gives two
recursive algorithms for calculating the desired inverse, Section 5 presents, with an example, the
application of the proposed algorithms, and Section 6 summarizes the article.


2. Practical Importance of the Generalized Vandermonde Matrix


In this article, we consider the generalized Vandermonde matrix (GVM) of the form proposed
by El-Mikkawy [5]. The classical form is considered in References [6,7]. For n ∈ Z+ pairwise
distinct real roots c1, ..., cn and a real constant exponent k, we define the GVM as follows:

$$
V_G^{(k)}(c_1, \ldots, c_n) =
\begin{bmatrix}
c_1^{k} & c_1^{k+1} & \cdots & c_1^{k+n-1} \\
c_2^{k} & c_2^{k+1} & \cdots & c_2^{k+n-1} \\
\vdots & \vdots & \ddots & \vdots \\
c_n^{k} & c_n^{k+1} & \cdots & c_n^{k+n-1}
\end{bmatrix}. \quad (1)
$$

These matrices arise in a broad range of both theoretical and practical issues. Below, we survey
the issues which require the use of the generalized Vandermonde matrices.

• Linear ordinary differential equations (ODEs): the Jordan canonical form matrix of the ODE in the
Frobenius form is a generalized Vandermonde matrix ([8] pp. 86–95).
• Control issues: investigating the so-called controllability [9] of higher-order systems leads to
the issue of inverting the classic Vandermonde matrix [10] (in the case of distinct zeros of the
system characteristic polynomial) and the generalized Vandermonde matrix [11] (for systems
with multiple characteristic polynomial zeros). As examples of higher-order models of
physical objects, we can mention Timoshenko's elastic beam equation [12] (fourth order)
and the Korteweg–de Vries equation of waves on shallow water surfaces [13,14] (third, fifth, and
seventh order).
• Interpolation: apart from ordinary polynomial interpolation with single nodes, we consider
Hermite interpolation, which allows multiple interpolation nodes. This issue leads to a system
of linear equations with a generalized Vandermonde matrix ([15] pp. 363–373).
• Information coding: the generalized Vandermonde matrix is used in coding and decoding
information in the Hermitian code [16].
• Optimization of the non-homogeneous differential equation [17].

3. Algorithms for the Generalized Vandermonde Matrix Determinant


In this section, we propose a library of recursive algorithms for the calculation of the generalized
Vandermonde matrix determinant. These algorithms solve the following set of practically important,
incremental problems:

(A) Suppose we have the value of the Vandermonde determinant for a given series of roots c1 , . . . , cn−1 .
How can we calculate the determinant after inserting another root into an arbitrary position in
the root series, without the need to recalculate the whole determinant? This problem corresponds
to the situation which frequently emerges in practice, i.e., adding a new node (polynomial
interpolation) or increasing the order of the characteristic equation (linear differential equation
solving, optimization, and control problems).
(B) Contrary to the previous scenario, we have the Vandermonde determinant value for a given root
series c1 , . . . , cn . We remove an arbitrary root cq from the series. How can we recursively calculate
the determinant in this case? The examples of real applications from the previous point also apply
here. The proper solution is given in Section 3.1.
(C) We are searching for the determinant value, when, in the given root series c1 , . . . , cn , we change
the value of an arbitrarily chosen root (Section 3.1).
(D) We are searching for the determinant value, for the given root series c1 , . . . , cn ,
calculated recursively.

The theorem below is the main tool for constructing the above recursive algorithms.

3.1. The Recursive Determinant Formula

Theorem 1. The following recursive formula is fulfilled for the generalized Vandermonde matrix:

$$
\det V_G^{(k)}(c_1,\ldots,c_n) = (-1)^{q+1}\, c_q^{k}\cdot
\det V_G^{(k)}\left(c_1,\ldots,c_{q-1},c_{q+1},\ldots,c_n\right)\cdot
\prod_{i=1,\, i\neq q}^{n}\left(c_i-c_q\right), \quad q = 1,\ldots,n. \quad (2)
$$

Proof. Applying the standard determinant linear properties (subtracting $c_q$ times each column from
the column to its right, starting from the last column), we can obtain

$$
\det V_G^{(k)}(c_1,\ldots,c_n) =
\det\begin{bmatrix}
c_1^{k} & c_1^{k+1} & \cdots & c_1^{k+n-1} \\
\vdots & \vdots & \ddots & \vdots \\
c_q^{k} & c_q^{k+1} & \cdots & c_q^{k+n-1} \\
\vdots & \vdots & \ddots & \vdots \\
c_n^{k} & c_n^{k+1} & \cdots & c_n^{k+n-1}
\end{bmatrix}
=
\det\begin{bmatrix}
c_1^{k} & c_1^{k}(c_1-c_q) & \cdots & c_1^{k+n-2}(c_1-c_q) \\
\vdots & \vdots & \ddots & \vdots \\
c_q^{k} & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
c_n^{k} & c_n^{k}(c_n-c_q) & \cdots & c_n^{k+n-2}(c_n-c_q)
\end{bmatrix}.
$$

Next, in compliance with Laplace's expansion formula applied to the q-th row (whose only non-zero
entry, $c_q^{k}$, stands in the first column), we directly have

$$
\det V_G^{(k)}(c_1,\ldots,c_n) = (-1)^{q+1} c_q^{k}
\det\begin{bmatrix}
c_1^{k}(c_1-c_q) & c_1^{k+1}(c_1-c_q) & \cdots & c_1^{k+n-2}(c_1-c_q) \\
\vdots & \vdots & \ddots & \vdots \\
c_{q-1}^{k}(c_{q-1}-c_q) & c_{q-1}^{k+1}(c_{q-1}-c_q) & \cdots & c_{q-1}^{k+n-2}(c_{q-1}-c_q) \\
c_{q+1}^{k}(c_{q+1}-c_q) & c_{q+1}^{k+1}(c_{q+1}-c_q) & \cdots & c_{q+1}^{k+n-2}(c_{q+1}-c_q) \\
\vdots & \vdots & \ddots & \vdots \\
c_n^{k}(c_n-c_q) & c_n^{k+1}(c_n-c_q) & \cdots & c_n^{k+n-2}(c_n-c_q)
\end{bmatrix}
$$

$$
= (-1)^{q+1} c_q^{k}
\det\begin{bmatrix}
c_1^{k} & c_1^{k+1} & \cdots & c_1^{k+n-2} \\
\vdots & \vdots & \ddots & \vdots \\
c_{q-1}^{k} & c_{q-1}^{k+1} & \cdots & c_{q-1}^{k+n-2} \\
c_{q+1}^{k} & c_{q+1}^{k+1} & \cdots & c_{q+1}^{k+n-2} \\
\vdots & \vdots & \ddots & \vdots \\
c_n^{k} & c_n^{k+1} & \cdots & c_n^{k+n-2}
\end{bmatrix}
\cdot \prod_{i=1,\, i\neq q}^{n}\left(c_i-c_q\right),
$$

where in the last step the common factor $(c_i - c_q)$ was extracted from each row of the minor.
This concludes the proof of Equation (2). □


Directly from Theorem 1, we can obtain the algorithms below for the incremental problems A–D.
The detailed implementation of these formulas is straightforward and omitted.
Cases A, B: All we need to do is apply Equation (2).
Case C: Let us assume that, for the given root series $c_1, \ldots, c_n$, the corresponding determinant
value is equal to $\det V_G^{(k)}(c_1, \ldots, c_q, \ldots, c_n)$. Our objective is to find the value of the determinant
$\det V_G^{(k)}(c_1, \ldots, c_q + \Delta c_q, \ldots, c_n)$. Applying Equation (2) twice, we can obtain the following expression
for the searched determinant:

$$
\det V_G^{(k)}\left(c_1,\ldots,c_q+\Delta c_q,\ldots,c_n\right) =
\left(\frac{c_q+\Delta c_q}{c_q}\right)^{k}
\frac{\displaystyle\prod_{i=1,\, i\neq q}^{n}\left(c_i-c_q-\Delta c_q\right)}
     {\displaystyle\prod_{i=1,\, i\neq q}^{n}\left(c_i-c_q\right)}\,
\det V_G^{(k)}\left(c_1,\ldots,c_q,\ldots,c_n\right), \quad q=1,\ldots,n.
$$

Case D: The proper recursive function expressing the determinant value, for the given root series
c1 , . . . , cn , has the following form:

$$
\det V_G^{(k)}\left(c_1,\ldots,c_q\right) =
\begin{cases}
(-1)^{q+1}\, c_q^{k}\cdot \det V_G^{(k)}\left(c_1,\ldots,c_{q-1}\right)\cdot
\displaystyle\prod_{i=1}^{q-1}\left(c_i-c_q\right), & \text{for } q > 1,\\[2mm]
c_1^{k}, & \text{for } q = 1.
\end{cases}
$$
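To make the recursion concrete, the following is a minimal C++ sketch of Case D (each loop iteration is exactly the O(n) incremental step of Case A). It is only an illustration under the assumptions of double-precision arithmetic, pairwise distinct roots, and positive roots when k is non-integer; it is not the author's reference implementation.

```cpp
// Minimal sketch of the Case D recursion: det V_G^(k)(c_1,...,c_q) =
// (-1)^(q+1) * c_q^k * det V_G^(k)(c_1,...,c_{q-1}) * prod_{i=1}^{q-1} (c_i - c_q).
#include <cmath>
#include <vector>

// Returns det V_G^(k)(c[0..n-1]) for pairwise distinct roots c and a real exponent k.
// Assumes c[q] > 0 whenever k is non-integer (std::pow would otherwise return NaN).
double gvm_determinant(const std::vector<double>& c, double k) {
    double det = std::pow(c[0], k);              // base case: det V_G^(k)(c_1) = c_1^k
    for (std::size_t q = 1; q < c.size(); ++q) { // append roots c_2, ..., c_n one by one
        double prod = 1.0;
        for (std::size_t i = 0; i < q; ++i)      // prod_{i=1}^{q-1} (c_i - c_q), O(q) work
            prod *= c[i] - c[q];
        // the 0-based q here is the 1-based index q+1 of the formula, so the sign is (-1)^q
        double sign = (q % 2 == 0) ? 1.0 : -1.0;
        det *= sign * std::pow(c[q], k) * prod;
    }
    return det;                                  // one incremental step is O(n), the whole loop is O(n^2)
}
```

For example, gvm_determinant({1, 2, 3, 4, 5, 6, 7}, 0.5) returns approximately 298598400·√35 ≈ 1.77·10^9, in agreement with Equation (18) of Section 5.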

3.2. Computational Complexity of the Proposed Algorithms


The following facts are worth noting:

• The computational complexity of the presented Algorithms A–C is of the O(n) class with respect
to the number of floating-point operations necessary to perform. This enables us to efficiently
solve the incremental Vandermonde problems, avoiding the quadratic complexity typical in the
Vandermonde field (e.g., References [14,18]).
• Algorithm D is of the O(n^2) class, being more efficient, by a linear factor, than the ordinary Gauss
elimination method.

3.3. Special Cases


In this section, we give special forms of Algorithms A–D tuned for two special cases of the
generalized Vandermonde matrix, i.e., for equidistant roots and for roots equal to consecutive
positive integers.

3.3.1. Generalized Vandermonde Matrix with Equidistant Roots


Let us take into account the GVM with the equidistant roots of the form ci = c1 + (i − 1)h, h ∈ R.
In this special case Formula (2) becomes

$$
\det V_G^{(k)}(c_1,\ldots,c_n) = (q-1)!\,(n-q)!\; h^{n-1}\, c_q^{k}\,
\det V_G^{(k)}\left(c_1,\ldots,c_{q-1},c_{q+1},\ldots,c_n\right), \quad q=1,\ldots,n, \quad (3)
$$

and Algorithms A–D change to the recursive Equation (3).

3.3.2. Generalized Vandermonde Matrix with Positive Integer Roots


Reference [5] considers a special case of the GVM, which can be obtained from Equation (1)
when $c_i = i$, $i = 1, \ldots, n$; this matrix is denoted by $V_S^{(k)}(n)$:

$$
V_S^{(k)}(1,\ldots,n) =
\begin{bmatrix}
1 & 1 & \cdots & 1 \\
2^{k} & 2^{k+1} & \cdots & 2^{k+n-1} \\
\vdots & \vdots & \ddots & \vdots \\
n^{k} & n^{k+1} & \cdots & n^{k+n-1}
\end{bmatrix}. \quad (4)
$$

For the special matrix $V_S^{(k)}$, Equation (3) becomes

$$
\det V_S^{(k)}(1,\ldots,n) = (q-1)!\,(n-q)!\; q^{k}\,
\det V_S^{(k)}(1,\ldots,q-1,q+1,\ldots,n), \quad q=1,\ldots,n. \quad (5)
$$
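As a quick check of Equation (5), take k = 0 and n = 3, i.e., the classical Vandermonde matrix with roots 1, 2, 3, whose determinant equals (2−1)(3−1)(3−2) = 2. Equation (5) with q = 3 gives det V_S^(0)(1,2,3) = 2!·0!·3^0·det V_S^(0)(1,2) = 2·1·1·1 = 2, since det V_S^(0)(1,2) = 1.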

4. Algorithms for the Generalized Vandermonde Matrix Inverse


In this section, we give a recursive algorithm to invert the generalized Vandermonde matrix
of Equation (1). At first, let us refer to the known, non-recursive results within this topic presented
previously [5]. Reference [5] features an explicit form of the GVM inverse, which makes use of the
so-called elementary symmetric functions, defined below.

4.1. Definition of the Elementary Symmetric Functions


If the n parameters $c_1, c_2, \ldots, c_n$ are distinct, then the elementary symmetric functions $\sigma_{i,j}^{(n)}$ in
$c_1, c_2, \ldots, c_{j-1}, c_{j+1}, \ldots, c_n$ are defined for $i, j = 1, \ldots, n$ in El-Mikkawy [5] (p. 644) by

$$
\begin{cases}
\sigma_{1,j}^{(n)} = 1, \\[2mm]
\sigma_{i,j}^{(n)} = \displaystyle\sum_{\substack{r_1=1 \\ r_1\neq j}}^{n}\;
\sum_{\substack{r_2=r_1+1 \\ r_2\neq j}}^{n}\cdots
\sum_{\substack{r_{i-1}=r_{i-2}+1 \\ r_{i-1}\neq j}}^{n}\;
\prod_{m=1}^{i-1} c_{r_m}, & \text{for } i = 2,\ldots,n.
\end{cases} \quad (6)
$$
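Although Reference [5] gives its own quadratic algorithm for Equation (6), the following minimal C++ sketch (an illustration only, not Algorithm 2.1 of [5]) shows one straightforward way to obtain all σ_{i,j} for a fixed j in O(n^2) operations, by incrementally expanding the product of (1 + c_m·x) over m ≠ j, whose coefficient of x^{i−1} equals σ_{i,j}.

```cpp
#include <vector>

// Elementary symmetric functions of Equation (6) for one fixed (0-based) column index j:
// on return, sigma[i] = sigma_{i,j} for i = 1..n, i.e. the sum of all products of (i-1)
// roots taken from c with c[j] excluded. Cost: O(n^2) per call.
std::vector<double> sigma_column(const std::vector<double>& c, std::size_t j) {
    const std::size_t n = c.size();
    std::vector<double> sigma(n + 1, 0.0);
    sigma[1] = 1.0;                              // sigma_{1,j} = 1 by definition
    for (std::size_t m = 0; m < n; ++m) {
        if (m == j) continue;                    // skip the excluded root c_j
        for (std::size_t i = n; i >= 2; --i)     // update higher degrees first
            sigma[i] += c[m] * sigma[i - 1];
    }
    return sigma;
}
```

For instance, for c = {1, 2, 3} and j = 0 the call returns sigma[2] = 2 + 3 = 5 and sigma[3] = 2·3 = 6, i.e. σ_{2,1} and σ_{3,1}.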

The efficient algorithm, of the O(n^2) computational complexity class, for calculating the elementary
symmetric functions in Equation (6) is given in Reference [5]. Now, it is possible to present the explicit
form of the inverse GVM given by Reference [5] (p. 647):

$$
\left(V_G^{(k)}(c_1,\ldots,c_n)\right)^{-1} =
\begin{bmatrix}
\dfrac{(-1)^{n+1}\sigma_{n,1}^{(n)}}{c_1^{k}\prod_{i=2}^{n}(c_1-c_i)} &
\dfrac{(-1)^{n+1}\sigma_{n,2}^{(n)}}{c_2^{k}\prod_{i=1,\,i\neq 2}^{n}(c_2-c_i)} & \cdots &
\dfrac{(-1)^{n+1}\sigma_{n,n}^{(n)}}{c_n^{k}\prod_{i=1}^{n-1}(c_n-c_i)} \\[4mm]
\dfrac{(-1)^{n+2}\sigma_{n-1,1}^{(n)}}{c_1^{k}\prod_{i=2}^{n}(c_1-c_i)} &
\dfrac{(-1)^{n+2}\sigma_{n-1,2}^{(n)}}{c_2^{k}\prod_{i=1,\,i\neq 2}^{n}(c_2-c_i)} & \cdots &
\dfrac{(-1)^{n+2}\sigma_{n-1,n}^{(n)}}{c_n^{k}\prod_{i=1}^{n-1}(c_n-c_i)} \\[2mm]
\vdots & \vdots & \ddots & \vdots \\[2mm]
\dfrac{(-1)^{n+n}\sigma_{1,1}^{(n)}}{c_1^{k}\prod_{i=2}^{n}(c_1-c_i)} &
\dfrac{(-1)^{n+n}\sigma_{1,2}^{(n)}}{c_2^{k}\prod_{i=1,\,i\neq 2}^{n}(c_2-c_i)} & \cdots &
\dfrac{(-1)^{n+n}\sigma_{1,n}^{(n)}}{c_n^{k}\prod_{i=1}^{n-1}(c_n-c_i)}
\end{bmatrix}. \quad (7)
$$
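For comparison with the recursive algorithms derived below, here is a minimal C++ sketch (my own illustration, not code from Reference [5]) that assembles the classical inverse of Equation (7) column by column. It assumes the sigma_column() helper sketched after Equation (6), positive roots when k is non-integer, and dense row-major storage; since each column costs O(n^2), the whole assembly is O(n^3).

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Classical (non-recursive) inverse of the GVM via Equation (7).
Matrix gvm_inverse_classical(const std::vector<double>& c, double k) {
    const std::size_t n = c.size();
    Matrix inv(n, std::vector<double>(n, 0.0));
    for (std::size_t j = 0; j < n; ++j) {
        std::vector<double> sigma = sigma_column(c, j);     // sigma[i] = sigma_{i,j}
        double denom = std::pow(c[j], k);                   // c_j^k * prod_{i != j} (c_j - c_i)
        for (std::size_t i = 0; i < n; ++i)
            if (i != j) denom *= c[j] - c[i];
        for (std::size_t r = 0; r < n; ++r) {               // 1-based row r+1 carries (-1)^{n+r+1} sigma_{n-r,j}
            double sign = ((n + r + 1) % 2 == 0) ? 1.0 : -1.0;
            inv[r][j] = sign * sigma[n - r] / denom;
        }
    }
    return inv;
}
```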

Let us return to the objective of this section, i.e., construction of the efficient, recursive algorithm
for inverting the generalized Vandermonde matrix. This issue can be formalized as follows: we know
the GVM inverse for the root series $c_1, \ldots, c_n$. We want to efficiently calculate the inverse for the root
series $c_1, \ldots, c_n, c_{n+1}$, making use of the known inverse. Let $V_G^{(k)}(n+1) = V_G^{(k)}(c_1, c_2, \ldots, c_{n+1})$; then,
the theorem below enables recursively calculating the desired inverse.

4.2. Theorem of the Recursive Inverse

Theorem 2. The inverse generalized Vandermonde matrix $\left(V_G^{(k)}(n+1)\right)^{-1}$, corresponding to the root series
$c_1, \ldots, c_{n+1}$, can be expressed by the following block matrix:

$$
\left(V_G^{(k)}(n+1)\right)^{-1} =
\begin{bmatrix}
\left(V_G^{(k)}(n)\right)^{-1} + \dfrac{1}{d}\left(V_G^{(k)}(n)\right)^{-1}
\begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix}
\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}
\left(V_G^{(k)}(n)\right)^{-1}
& -\dfrac{1}{d}\left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix} \\[6mm]
-\dfrac{1}{d}\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}
& \dfrac{1}{d}
\end{bmatrix}, \quad (8)
$$

$$
d = c_{n+1}^{k+n} - \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}
\left(V_G^{(k)}(n)\right)^{-1}
\begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix}, \quad (9)
$$

where $\left(V_G^{(k)}(n)\right)^{-1}$ denotes the known GVM inverse for the roots $c_1, \ldots, c_n$.

Proof. To prove the matrix recursive identity in Equation (8), we make use of the block matrix
algebra rules. A useful formula, expressing the block matrix inverse by the inverses of the respective
sub-matrices, is:

$$
A^{-1} = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}^{-1}
= \begin{bmatrix}
A_1^{-1} + A_1^{-1} A_2 B^{-1} A_3 A_1^{-1} & -A_1^{-1} A_2 B^{-1} \\
-B^{-1} A_3 A_1^{-1} & B^{-1}
\end{bmatrix},
\qquad B = A_4 - A_3 A_1^{-1} A_2. \quad (10)
$$
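The matrix B = A_4 − A_3 A_1^{-1} A_2 appearing in Equation (10) is the Schur complement of the block A_1 in A, and Equation (10) can be checked directly by multiplying its right-hand side by A and using this definition of B.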

Thus, if we know the inverse of the sub-matrix $A_1$ and the inverse of B, we can directly obtain the
inverse of the block matrix A by performing a few matrix multiplications. For pairwise distinct $c_1, c_2, \ldots, c_{n+1}$,
the GVM matrix is invertible and Equation (10) holds true; thus, the coefficient d given by Equation (9)
is non-zero. Now, let us take into account the generalized Vandermonde matrix $V_G^{(k)}(n+1)$ for the
n + 1 roots $c_1, \ldots, c_{n+1}$. It can be treated as a block matrix of the following form:

$$
V_G^{(k)}(n+1) =
\begin{bmatrix}
c_1^{k} & \cdots & c_1^{k+n-1} & c_1^{k+n} \\
\vdots & \ddots & \vdots & \vdots \\
c_n^{k} & \cdots & c_n^{k+n-1} & c_n^{k+n} \\
c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} & c_{n+1}^{k+n}
\end{bmatrix}
=
\begin{bmatrix}
V_G^{(k)}(n) & \begin{bmatrix} c_1^{k+n} \\ \vdots \\ c_n^{k+n} \end{bmatrix} \\[4mm]
\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix} & c_{n+1}^{k+n}
\end{bmatrix}. \quad (11)
$$

Now, applying the block matrix identities in Equation (10) to the block matrix in Equation (11),
we directly obtain the thesis in Equations (8) and (9).
Despite the explicit form of the block matrix inverse in Equation (8), its efficient
algorithmic implementation is not obvious. The order in which we calculate the matrix term

$$
\left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}
\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}
$$

has a crucial influence on the final computational complexity of the algorithm. Therefore, let us analyze all three possible orders.
(A) Left-to-right order of multiplications.

One can notice that $\left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}$ has dimensions equal
to n × n. Hence, the multiplication of this last matrix and of the matrix $\left(V_G^{(k)}(n)\right)^{-1}$ is an O(n^3) class
algorithm. Thus, the computational complexity in the left-to-right order is of the O(n^3) class.
(B) Right-to-left order.

A detailed analysis also leads to the O(n^3) class.
(C) The order of the following form:

$$
\left\{\left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}\right\}
\left\{\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}\right\}. \quad (12)
$$

In this case, at first, we perform the following two multiplications:

• The multiplication $\left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}$ requires O(n^2) operations; as the result, we
get an n-element column vector.
• The multiplication $\begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}$ requires O(n^2) operations; as the result,
we get an n-element row vector.

Finally, all we have to do is to multiply these two vectors, which obviously is an operation of
the O(n^2) class.
Summarizing, the most efficient is the multiplication order (C), giving a quadratic computational
complexity. All other orders lead to the worse O(n^3) class algorithms. Combining the above results, we
can give the algorithm which solves the incremental inverse problem, i.e., calculating the GVM inverse
for the root series $c_1, \ldots, c_n, c_{n+1}$ on the basis of the known inverse for the root series $c_1, \ldots, c_n$.

4.3. Algorithm 1

Using the incremental Algorithm 1, we can build the final, recursive algorithm for inverting the
generalized Vandermonde matrix of the form in Equation (1).

Algorithm 1: Incremental Inverting of the Generalized Vandermonde Matrix

1. Function Incremental_Inverse(n; k; $c_1, \ldots, c_{n+1}$; $\left(V_G^{(k)}(n)\right)^{-1}$): $\left(V_G^{(k)}(n+1)\right)^{-1}$
2. Input:
   - n: integer - number of roots − 1
   - k: real - Vandermonde matrix general exponent
   - $c_1, \ldots, c_{n+1}$: real - the roots
   - $\left(V_G^{(k)}(n)\right)^{-1}$: real n×n - the GVM inverse for the roots $c_1, \ldots, c_n$
3. Locals:
   - $v_1$, $v_2$: real n
   - d: real
4. Calculate the auxiliary vectors $v_1$, $v_2$:

$$
v_1 := \left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}, \quad (13)
$$

$$
v_2 := \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}. \quad (14)
$$

5. Calculate the coefficient d using Equation (9):

$$
d := c_{n+1}^{k+n} - v_2\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}. \quad (15)
$$

6. Build the desired matrix inverse $\left(V_G^{(k)}(n+1)\right)^{-1}$ as a block matrix:

$$
\left(V_G^{(k)}(n+1)\right)^{-1} :=
\begin{bmatrix}
\left(V_G^{(k)}(n)\right)^{-1} + \dfrac{v_1 v_2}{d} & -\dfrac{v_1}{d} \\[3mm]
-\dfrac{v_2}{d} & \dfrac{1}{d}
\end{bmatrix}. \quad (16)
$$

7. Output:
   - $\left(V_G^{(k)}(n+1)\right)^{-1}$: real (n+1)×(n+1) - the inverse for the roots $c_1, \ldots, c_n, c_{n+1}$
8. End.
4.4. Computational Complexity

It is possible to note the following advantages of the computational complexity of inverting the
GVM by recursive algorithms in comparison with the classical Equation (7), the complexity of which
is O(n^3) (the classical Equation (7) requires calculating the elementary symmetric functions $\sigma_{i,j}^{(n)}$ for
i, j = 1, ..., n; to this aim, Algorithm 2.1 (p. 644) of Reference [5], with quadratic complexity, should be
executed n times (Formula 2.5, p. 645)):

• As we analyzed in point (C) of Section 4.2, the computational complexity of the incremental
Algorithm 1, which is constructed on the basis of Equation (12), is of the O(n^2) class with respect
to the number of floating-point operations which have to be performed. This is possible thanks to
the proper multiplication order in Equation (12). In this way, we avoid the O(n^3) complexity while
adding a new root.
• The computational complexity of the recursive Algorithm 2 is of the O(n^3) class.

Algorithm 2: Recursive Inverting of the Generalized Vandermonde Matrix

1. Function Inverse(n; k; $c_1, \ldots, c_n$): $\left(V_G^{(k)}(n)\right)^{-1}$
2. Input:
   - n: integer - number of roots
   - k: real - Vandermonde matrix general exponent
   - $c_1, \ldots, c_n$: real - the roots
3. Locals:
   - V[][] of real
   - i: integer
4. Calculate the variable V:
   V = [1 / $c_1^{k}$]
5. For i = 1 To n − 1
      V = Incremental_Inverse(i, k, $c_1, \ldots, c_{i+1}$, V)
   Next i
6. Output:
   - V: real n×n - the inverse for the roots $c_1, \ldots, c_n$.
7. End.
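As an illustration of Algorithms 1 and 2 (and of the multiplication order (12)), the following is a minimal C++ sketch assuming dense row-major std::vector matrices and positive roots when k is non-integer; it is not the author's reference implementation. One incremental step costs O(n^2) operations, so building the full inverse costs O(n^3), in line with the complexity analysis above.

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Algorithm 1: given inv = (V_G^(k)(n))^{-1} for the roots c[0..n-1], return
// (V_G^(k)(n+1))^{-1} for the roots c[0..n], following Equations (13)-(16).
Matrix incremental_inverse(const std::vector<double>& c, double k, const Matrix& inv) {
    const std::size_t n = inv.size();
    std::vector<double> col(n), v1(n, 0.0), v2(n, 0.0);
    for (std::size_t i = 0; i < n; ++i) col[i] = std::pow(c[i], k + n);  // [c_1^{k+n} ... c_n^{k+n}]^T
    for (std::size_t i = 0; i < n; ++i)                                  // (13): v1 = inv * col
        for (std::size_t j = 0; j < n; ++j) v1[i] += inv[i][j] * col[j];
    for (std::size_t j = 0; j < n; ++j)                                  // (14): v2 = row * inv, where
        for (std::size_t i = 0; i < n; ++i)                              // row = [c_{n+1}^k ... c_{n+1}^{k+n-1}]
            v2[j] += std::pow(c[n], k + i) * inv[i][j];
    double d = std::pow(c[n], k + n);                                    // (15): d = c_{n+1}^{k+n} - v2 * col
    for (std::size_t j = 0; j < n; ++j) d -= v2[j] * col[j];
    Matrix out(n + 1, std::vector<double>(n + 1, 0.0));                  // (16): assemble the block matrix
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) out[i][j] = inv[i][j] + v1[i] * v2[j] / d;
    for (std::size_t i = 0; i < n; ++i) { out[i][n] = -v1[i] / d; out[n][i] = -v2[i] / d; }
    out[n][n] = 1.0 / d;
    return out;
}

// Algorithm 2: build (V_G^(k)(n))^{-1} by repeated incremental steps, starting from [1 / c_1^k].
Matrix gvm_inverse(const std::vector<double>& c, double k) {
    Matrix inv{{1.0 / std::pow(c[0], k)}};
    for (std::size_t i = 1; i < c.size(); ++i)   // after the i-th step, inv covers the roots c[0..i]
        inv = incremental_inverse(c, k, inv);
    return inv;
}
```

For instance, gvm_inverse({1, 2, 3, 4, 5, 6, 7, 8}, 0.5) should reproduce, up to rounding errors, the matrix in Equation (23) of Section 5.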

Last but not least, Equation (7) requires recalculating the desired inverse each time we add a
new root. The main idea of the recursive algorithms we proposed is to make use of the already
calculated inverse. This is how high efficiency was obtained. On the graphs below, we practically
compare the efficiency of the recursive algorithms with the standard algorithms (in a non-recursive
form, for matrices with arbitrary entries, contrary to the algorithms presented in this article, which are
developed specially for the GVM) embedded in Matlab® . On the left, we can see the execution time of
the standard and recursive algorithms, and, on the right, we can see the relative performance gain
(recursive algorithms vs. the standard ones for inversion and determinant calculation).
Figures 1 and 2 show practical performance tests of Algorithms 1 and 2.
Figure 1. The execution time of the standard and recursive algorithms (x-axis: matrix dimension; y-axis: algorithm execution time [ms]; series: standard determinant, recursive determinant, standard inversion, recursive inversion).

Figure 2. The relative performance gain of the algorithms (x-axis: matrix dimension; y-axis: performance gain; series: matrix determinant, matrix inversion).

5. Example

We show a practical application of the algorithms from this article using the same numerical
example as in Reference [5] (p. 649), to enable easy comparison of the two opposite algorithms:
classical and recursive. Let us consider the generalized Vandermonde matrix $V_G^{(k)}(n)$ and its
inverse $\left(V_G^{(k)}(n)\right)^{-1}$ with the following parameters:

- general exponent: k = 0.5;
- size: n = 7;
- roots: $c_i = i$, i = 1, ..., 7.

The generalized Vandermonde matrix of such parameters has the following form:

$$
V_G^{(0.5)}(7) =
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\sqrt{2} & 2\sqrt{2} & 4\sqrt{2} & 8\sqrt{2} & 16\sqrt{2} & 32\sqrt{2} & 64\sqrt{2} \\
\sqrt{3} & 3\sqrt{3} & 9\sqrt{3} & 27\sqrt{3} & 81\sqrt{3} & 243\sqrt{3} & 729\sqrt{3} \\
2 & 8 & 32 & 128 & 512 & 2048 & 8192 \\
\sqrt{5} & 5\sqrt{5} & 25\sqrt{5} & 125\sqrt{5} & 625\sqrt{5} & 3125\sqrt{5} & 15625\sqrt{5} \\
\sqrt{6} & 6\sqrt{6} & 36\sqrt{6} & 216\sqrt{6} & 1296\sqrt{6} & 7776\sqrt{6} & 46656\sqrt{6} \\
\sqrt{7} & 7\sqrt{7} & 49\sqrt{7} & 343\sqrt{7} & 2401\sqrt{7} & 16807\sqrt{7} & 117649\sqrt{7}
\end{bmatrix}. \quad (17)
$$

The determinant of the matrix $V_G^{(0.5)}(7)$ and its inverse have the following forms, respectively:

$$
\det V_G^{(0.5)}(7) = 298598400\sqrt{35}, \quad (18)
$$

$$
\left(V_G^{(0.5)}(7)\right)^{-1} =
\begin{bmatrix}
7 & -\frac{21}{\sqrt{2}} & \frac{35}{\sqrt{3}} & -\frac{35}{2} & \frac{21}{\sqrt{5}} & -\frac{7}{\sqrt{6}} & \frac{1}{\sqrt{7}} \\[1mm]
-\frac{223}{20} & \frac{879}{20\sqrt{2}} & -\frac{949}{12\sqrt{3}} & 41 & -\frac{201}{4\sqrt{5}} & \frac{1019}{60\sqrt{6}} & -\frac{7\sqrt{7}}{20} \\[1mm]
\frac{319}{45} & -\frac{3929}{120\sqrt{2}} & \frac{389}{6\sqrt{3}} & -\frac{2545}{72} & \frac{134}{3\sqrt{5}} & -\frac{1849}{120\sqrt{6}} & \frac{29\sqrt{7}}{90} \\[1mm]
-\frac{37}{16} & \frac{71}{6\sqrt{2}} & -\frac{1219}{48\sqrt{3}} & \frac{44}{3} & -\frac{185\sqrt{5}}{48} & \frac{41}{6\sqrt{6}} & -\frac{7\sqrt{7}}{48} \\[1mm]
\frac{59}{144} & -\frac{9}{4\sqrt{2}} & \frac{247}{48\sqrt{3}} & -\frac{113}{36} & \frac{69}{16\sqrt{5}} & -\frac{19}{12\sqrt{6}} & \frac{5\sqrt{7}}{144} \\[1mm]
-\frac{3}{80} & \frac{13}{60\sqrt{2}} & -\frac{25}{48\sqrt{3}} & \frac{1}{3} & -\frac{23}{48\sqrt{5}} & \frac{11}{60\sqrt{6}} & -\frac{\sqrt{7}}{240} \\[1mm]
\frac{1}{720} & -\frac{1}{120\sqrt{2}} & \frac{1}{48\sqrt{3}} & -\frac{1}{72} & \frac{1}{48\sqrt{5}} & -\frac{1}{120\sqrt{6}} & \frac{1}{720\sqrt{7}}
\end{bmatrix}. \quad (19)
$$

5.1. Objective
Our objective is to find the determinant and inverse of the generalized Vandermonde matrix
$V_G^{(0.5)}(8)$, with roots $c_i = i$, $i = 1, \ldots, 8$, in the recursive way.

5.2. Recursive Determinant Calculation

We calculate the determinant value of the matrix $V_G^{(0.5)}(8)$ using Equation (5) because the GVM in
question has consecutive integer roots. In this case, the equality in Equation (5) leads to the following
determinant value:

$$
\det V_G^{(0.5)}(8) = (8-1)!\,(8-8)!\,\sqrt{8}\,\det V_G^{(0.5)}(7) = 7!\,\sqrt{8}\cdot 298598400\sqrt{35} = 3009871872000\sqrt{70}.
$$
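Indeed, 7!·√8 = 5040·2√2 = 10080√2, and 10080√2·298598400√35 = 3009871872000√70, because √2·√35 = √70.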

5.3. Recursive Inverse Finding

The task of calculating the inverse of the matrix $V_G^{(0.5)}(8)$ is performed using Algorithm 1, with
the use of the known inverse $\left(V_G^{(0.5)}(7)\right)^{-1}$ in Equation (19). The auxiliary vectors $v_1$, $v_2$ have forms in
compliance with Equations (13) and (14):

$$
v_1 = \left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}
= \left(V_G^{(k)}(n)\right)^{-1}\begin{bmatrix} 1 & 128\sqrt{2} & 2187\sqrt{3} & 32768 & 78125\sqrt{5} & 279936\sqrt{6} & 823543\sqrt{7} \end{bmatrix}^{T} =
$$
$$
= \begin{bmatrix} 5040 & -13068 & 13132 & -6769 & 1960 & -322 & 28 \end{bmatrix}^{T}, \quad (20)
$$

$$
v_2 = \begin{bmatrix} c_{n+1}^{k} & \cdots & c_{n+1}^{k+n-1} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1}
= \begin{bmatrix} 2\sqrt{2} & 16\sqrt{2} & 128\sqrt{2} & 1024\sqrt{2} & 8192\sqrt{2} & 65536\sqrt{2} & 524288\sqrt{2} \end{bmatrix}\left(V_G^{(k)}(n)\right)^{-1} =
$$
$$
= \begin{bmatrix} 2\sqrt{2} & -14 & 14\sqrt{6} & -35\sqrt{2} & 14\sqrt{10} & -14\sqrt{3} & 2\sqrt{14} \end{bmatrix}. \quad (21)
$$

Next, we calculate the coefficient d as follows:

$$
d = c_{n+1}^{k+n} - v_2\begin{bmatrix} c_1^{k+n} & \cdots & c_n^{k+n} \end{bmatrix}^{T}
= 8^{7.5} - \begin{bmatrix} 2\sqrt{2} & -14 & 14\sqrt{6} & -35\sqrt{2} & 14\sqrt{10} & -14\sqrt{3} & 2\sqrt{14} \end{bmatrix}
\begin{bmatrix} 1 \\ 128\sqrt{2} \\ 2187\sqrt{3} \\ 32768 \\ 78125\sqrt{5} \\ 279936\sqrt{6} \\ 823543\sqrt{7} \end{bmatrix}
= 10080\sqrt{2}. \quad (22)
$$

The last step of Algorithm 1 is building a block matrix in compliance with Equation (16). Combining
the vectors $v_1$ in Equation (20) and $v_2$ in Equation (21), and the coefficient d in Equation (22) with the
known Vandermonde inverse $\left(V_G^{(0.5)}(7)\right)^{-1}$ in Equation (19), we finally obtain

$$
\left(V_G^{(0.5)}(8)\right)^{-1} =
\begin{bmatrix}
\left(V_G^{(0.5)}(7)\right)^{-1} + \dfrac{v_1 v_2}{d} & -\dfrac{v_1}{d} \\[3mm]
-\dfrac{v_2}{d} & \dfrac{1}{d}
\end{bmatrix} =
$$
$$
= \begin{bmatrix}
8 & -14\sqrt{2} & \frac{56\sqrt{3}}{3} & -35 & \frac{56\sqrt{5}}{5} & -\frac{14\sqrt{6}}{3} & \frac{8\sqrt{7}}{7} & -\frac{\sqrt{2}}{4} \\[1mm]
-\frac{481}{35} & \frac{621\sqrt{2}}{20} & -\frac{2003\sqrt{3}}{45} & \frac{691}{8} & -\frac{141\sqrt{5}}{5} & \frac{2143\sqrt{6}}{180} & -\frac{103\sqrt{7}}{35} & \frac{363\sqrt{2}}{560} \\[1mm]
\frac{349}{36} & -\frac{18353\sqrt{2}}{720} & \frac{797\sqrt{3}}{20} & -\frac{1457}{18} & \frac{4891\sqrt{5}}{180} & -\frac{187\sqrt{6}}{16} & \frac{527\sqrt{7}}{180} & -\frac{469\sqrt{2}}{720} \\[1mm]
-\frac{329}{90} & \frac{15289\sqrt{2}}{1440} & -\frac{268\sqrt{3}}{15} & \frac{10993}{288} & -\frac{1193\sqrt{5}}{90} & \frac{2803\sqrt{6}}{480} & -\frac{67\sqrt{7}}{45} & \frac{967\sqrt{2}}{2880} \\[1mm]
\frac{115}{144} & -\frac{179\sqrt{2}}{72} & \frac{71\sqrt{3}}{16} & -\frac{179}{18} & \frac{2581\sqrt{5}}{720} & -\frac{13\sqrt{6}}{8} & \frac{61\sqrt{7}}{144} & -\frac{7\sqrt{2}}{72} \\[1mm]
-\frac{73}{720} & \frac{239\sqrt{2}}{720} & -\frac{149\sqrt{3}}{240} & \frac{209}{144} & -\frac{391\sqrt{5}}{720} & \frac{61\sqrt{6}}{240} & -\frac{49\sqrt{7}}{720} & \frac{23\sqrt{2}}{1440} \\[1mm]
\frac{1}{144} & -\frac{17\sqrt{2}}{720} & \frac{11\sqrt{3}}{240} & -\frac{1}{9} & \frac{31\sqrt{5}}{720} & -\frac{\sqrt{6}}{48} & \frac{29\sqrt{7}}{5040} & -\frac{\sqrt{2}}{720} \\[1mm]
-\frac{1}{5040} & \frac{\sqrt{2}}{1440} & -\frac{\sqrt{3}}{720} & \frac{1}{288} & -\frac{\sqrt{5}}{720} & \frac{\sqrt{6}}{1440} & -\frac{\sqrt{7}}{5040} & \frac{\sqrt{2}}{20160}
\end{bmatrix}. \quad (23)
$$

One can see that the incrementally received inverse $\left(V_G^{(0.5)}(8)\right)^{-1}$ is equivalent to the inverse
obtained by the classical algorithms in Reference [5] (p. 649).
5.4. Summary of the Example

In this example, we recursively calculated the determinant and inverse of the $V_G^{(0.5)}(8)$ matrix,
making use of the known determinant and the inverse of the $V_G^{(0.5)}(7)$ matrix, respectively. It is
worth noting that, to perform this, there were merely eight scalar multiplications necessary for the
determinant, and 3·7² + 2·7 scalar multiplications necessary for the inverse. This confirms the high
efficiency of the recursive approach.

6. Research and Extensions

The following can be seen as the desired future research directions:

• Construction of the parallel algorithm for the generalized Vandermonde matrices.
• Adaptation of the algorithms to vector-oriented hardware units.
• Combination of both.
• Application on Graphics Hardware Unit architecture.
• Application of the results in new branches, like deep learning and artificial intelligence.

The proposed results could also be applied to other related applications which use Vandermonde
or matrices of similar type, such as the following [19–21]:

• Total variation problems and optimization methods;
• Power systems networks;
• The numerical problem preconditioning;
• Fractional order differential equations.

7. Summary
In this paper, we derived recursive numerical recipes for calculating the determinant and inverse
of the generalized Vandermonde matrix. The results presented in this article can be performed
automatically using a numerical algorithm in any programming language. The computational
complexity of the presented algorithms is better than the ordinary GVM determinant/inverse methods.
The presented results neatly combine the theory of algorithms, particularly the recursion
programming paradigm and computational complexity analysis, with numerical recipes, which
we consider the right approach to constructing computational algorithms.
Considering software production, recursion is not merely an academic paradigm; it has been
successfully used by programmers for decades.

Funding: This work was supported by Statutory Research funds of Institute of Informatics, Silesian University of
Technology, Gliwice, Poland (BK/204/RAU2/2019).
Acknowledgments: I would like to thank my university colleagues for stimulating discussions and reviewers for
apt remarks which significantly improved the paper.
Conflicts of Interest: The author declares no conflict of interest.

References
1. Respondek, J. On the confluent Vandermonde matrix calculation algorithm. Appl. Math. Lett. 2011, 24,
103–106. [CrossRef]
2. Respondek, J. Numerical recipes for the high efficient inverse of the confluent Vandermonde matrices. Appl.
Math. Comput. 2011, 218, 2044–2054. [CrossRef]
3. Respondek, J. Highly Efficient Recursive Algorithms for the Generalized Vandermonde Matrix. In Proceedings
of the 30th European Simulation and Modelling Conference—ESM’ 2016, Las Palmas de Gran Canaria, Spain,
26–28 October 2016; pp. 15–19.
4. Respondek, J. Recursive Algorithms for the Generalized Vandermonde Matrix Determinants. In Proceedings
of the 33rd Annual European Simulation and Modelling Conference—ESM’ 2019, Palma de Mallorca, Spain,
28–30 October 2019; pp. 53–57.
5. El-Mikkawy, M.E.A. Explicit inverse of a generalized Vandermonde matrix. Appl. Math. Comput. 2003, 146,
643–651. [CrossRef]
6. Hou, S.; Hou, E. Recursive computation of inverses of confluent Vandermonde matrices. Electron. J. Math.
Technol. 2007, 1, 12–26.
7. Hou, S.; Pang, W. Inversion of confluent Vandermonde matrices. Comput. Math. Appl. 2002, 43, 1539–1547.
[CrossRef]
8. Gorecki, H. Optimization of the Dynamical Systems; PWN: Warsaw, Poland, 1993.
9. Klamka, J. Controllability of Dynamical Systems; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991.
10. Respondek, J. Approximate controllability of infinite dimensional systems of the n-th order. Int. J. Appl.
Math. Comput. Sci. 2008, 18, 199–212. [CrossRef]
11. Respondek, J. Approximate controllability of the n-th order infinite dimensional systems with controls
delayed by the control devices. Int. J. Syst. Sci. 2008, 39, 765–782. [CrossRef]
12. Timoshenko, S. Vibration Problems in Engineering, 3rd ed.; D. Van Nostrand Company: London, UK, 1955.
13. Bellman, R. Introduction to Matrix Analysis; McGraw-Hill Book Company: New York, NY, USA, 1960.
14. Eisinberg, A.; Fedele, G. On the inversion of the Vandermonde matrix. Appl. Math. Comput. 2006, 174,
1384–1397. [CrossRef]

15. Kincaid, D.R.; Cheney, E.W. Numerical Analysis: Mathematics of Scientific Computing, 3rd ed.; Brooks Cole:
Florence, KY, USA, 2001.
16. Lee, K.; O’Sullivan, M.E. Algebraic soft-decision decoding of Hermitian codes. IEEE Trans. Inf. Theory 2010,
56, 2587–2600. [CrossRef]
17. Gorecki, H. On switching instants in minimum-time control problem. One-dimensional case n-tuple
eigenvalue. Bull. Acad. Pol. Sci. 1968, 16, 23–30.
18. Yan, S.; Yang, A. Explicit Algorithm to the Inverse of Vandermonde Matrix. In Proceedings of the 2009
International Conference on Test and Measurement, Hong Kong, China, 5–6 December 2009; pp. 176–179.
19. Dassios, I.; Fountoulakis, K.; Gondzio, J. A preconditioner for a primal-dual Newton conjugate gradients
method for compressed sensing problems. SIAM J. Sci. Comput. 2015, 37, A2783–A2812. [CrossRef]
20. Dassios, I.; Baleanu, D. Optimal solutions for singular linear systems of Caputo fractional differential
equations. Math. Methods Appl. Sci. 2018. [CrossRef]
21. Dassios, I. Analytic Loss Minimization: Theoretical Framework of a Second Order Optimization Method.
Symmetry 2019, 11, 136. [CrossRef]

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
