Zeroing Neural Networks, An Introduction To, A Survey Of, and Predictive Computations For Time-Varying Matrix Problems
This paper is designed to increase knowledge and understanding of time-varying matrix problems and
Zeroing Neural Networks in the numerical analysis community of the west. Zeroing neural networks (ZNN)
were invented 20 years ago in China and almost all of their advances have been made in, and still come
from, their birthplace. ZNN methods have become a backbone for solving discretized sensor driven time-
varying matrix problems in real-time, in theory and in on-chip applications for robots, in control theory and
engineering in general. They have become the method of choice for many time-varying matrix problems
that benefit from or require efficient, accurate and predictive real-time computations. The typical ZNN
algorithm needs seven distinct steps for its set-up. The construction of ZNN algorithms starts with an error
equation and the stipulation that the error function decrease exponentially fast. The error function DE is
then mated with a convergent look-ahead finite difference formula to create a derivative free computer code
that predicts the future state of the system reliably from current and earlier state data. Matlab codes for
ZNN typically consist of one linear equations solve and one short recursion of current and previous state
data per time step. This makes ZNN based algorithms highly competitive with ODE IVP path following
or homotopy methods that are not designed to work with constant sampling gap incoming sensor data but
rather work adaptively. To illustrate the easy adaptability of ZNN and further the understanding of ZNN,
this paper details the seven set-up steps for 11 separate time-varying problems and supplies the codes for
six. Open problems are mentioned as well as detailed references to recent work on each of the treated
problems.
Keywords: time-varying matrix problem, neural network, zeroing neural network, ZNN algorithm, matrix flow,
time-varying numerical algorithm, predictive numerical method
AMS subject classifications: 15A99, 15B99, 65F99
1 Introduction
This is an introduction and overview of a seemingly new area of Numerical Linear Algebra. The paper deals with
time-varying matrices and how to solve standard numerical problems of matrix analysis when the matrices are
parameter dependent and not static. Time-varying or parameter-varying matrices are called matrix flows and we
study Zeroing Neural Networks or Zhang Neural Networks (ZNN). This set of methods forms a special class of
recurrent neural networks that originated some 40 years ago to solve dynamical systems. The ZNN method was
proposed twenty years ago by Yunong Zhang and Jun Wang in 2001, see [29]. Zhang and Wang introduced a
global error function and an error differential equation to achieve exponential error decay for time-varying systems
and neural computing. At the time Yunong Zhang was a Ph.D. candidate at the Chinese University of Hong Kong and
Jun Wang his advisor. In the meantime, Zhang Neural Networks have become a mainstay for predictive
time-varying matrix flow computations in the engineering world, with several hundred papers and a handful of
books on the subject. ZNN methods help with optimizing and controlling robot behavior and autonomous vehicles
et cetera. They are extremely swift and accurate in their numerical matrix flow computations. ZNN methods and
time-varying matrix flow problems are clearly governed by different mathematical principles and are subject to
different quandaries than those of static matrix analysis where Wilkinson’s backward stability and error analysis
are common and beautiful modern principles. In fact ZNN methods cannot solve static matrix problems at all.
Discretized ZNN processes are predictive by design and require look-ahead convergent finite difference schemes
that have never occurred or been used before. Thus time-varying matrix computations are a new, separate and still
uncharted area of Numerical Linear Algebra that is worth studying and learning about.
∗ Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849-5310 ([email protected])
This paper is divided into two parts: Part A explains the ZNN set-up process in detail in the remainder of this section.
Then Part B in Section 2 lists a number of applied problems and models for time-varying matrix phenomena that engineers
are now commonly solving via ZNN methods.
Discretized ZNN methods solve time-varying matrix problems of the general form
F(A(t), B(t), x(t), ..) = g(t, C(t), u(t), ..)   (1)
with a time-varying unknown vector or matrix x(t) and with compatibly sized time-varying matrices A(t), B(t),
C(t), ... and time-varying vectors u(t), .. that are known at discrete equidistant time instances ti for i ≤ k and
k = 1, ... such as from sensor data. Steadily timed sensor data is ideal for discretized ZNN. Our task is to find
x(tk+1 ) accurately and in real-time from earlier x.. values and earlier matrix and vector data. Note that here x(t)
might be a concatenated vector or matrix x(t) of various unknown data such as eigenvectors and their associated
eigenvalues for the time-varying matrix eigenvalue problem. Then the given flow matrices A(t) and others might
have to be enlarged likewise to stay compatible with an expanded eigendata vector x(t) and likewise for any other
vectors or matrices u(t).
Step 1 : From the model equation (1) form the error function
E(t) = F (A(t), B(t), x(t), ..) − g(t, C(t), u(t), ..) (2)
which would identically be zero, i.e., E(t) = 0 for all t if x(t) solves (1).
Step 2 : Take the derivative Ė(t) of the error function E(t) and stipulate its exponential decay:
Demand that
Ė(t) = −η E(t) (3)
for some constant η > 0 in case of Zhang Neural Networks (ZNN).
Or demand that
Ė(t) = −γ F(E(t))
for γ > 0 and a monotonically increasing activation function F in a slightly modified model, called RNN.
The differences between the right-hand sides of the ZNN and RNN variants are insignificant here. Exponential error decay and thus
convergence to the exact solution x(t) of (1) is automatic for both variants. Depending on the problem, different
activation functions F are used in the RNN version such as linear, power sigmoid or hyperbolic sine functions.
These can result in different and better problem suited convergence properties with RNN, see the References.
In this paper we will, however, limit our attention to Zhang Neural Networks (ZNN) exclusively from now on for
simplicity.
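For reference, the linear error DE (3) integrates entrywise to
E(t) = e^{−η(t−t0)} E(t0) ,
so every entry of the error function decays to zero exponentially fast, no matter how large the initial error E(t0) was.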
Step 3 : Solve the exponentially decaying error equation differential equation (3) of Step 2 algebraically for
ẋ(tk ) = . . . if possible. If impossible, revise the start-up model and try again.
Step 4 : Select a look-ahead convergent finite difference formula for the desired truncation error order O(τ^j) that
expresses ẋ(tk) in terms of x(tk+1), x(tk), ..., x(tk+1−j), i.e., in terms of j + 1 known data points from
the table of known convergent look-ahead finite difference formulas in [18] and [19].
Step 5 : Equate the ẋ(tk ) derivative terms in Steps 3 and 4 and thereby dispose of ẋ(tk ) altogether.
Step 6 : Solve the derivative free linear equation obtained in Step 5 for x(tk+1 ) and iterate.
Step 7 : Increase k + 1 to k + 2 and update all data of Step 6; then solve the updated recursion for x(tk+2). Repeat
until tk+1 ≥ tf .
Discretized ZNN methods are highly accurate due to the stipulated exponential error decay of Step 2.
The errors of ZNN methods have two sources: the truncation error of the finite difference formula chosen in Step 4,
whose order depends on the constant sampling gap τ = tk+1 − tk, and the rounding errors of the linear equation
solves in Step 6. Besides, discretized ZNN (or RNN) methods are the only predictive sensor driven methods that we know
of. They are designed to give us the future solution value x(tk+1) accurately immediately after time tk from
current and earlier system data.
Convergent look-ahead finite difference schemes do not exist at all in the literature prior to ZNN. Discrete ZNN
methods reduce time-varying matrix problems of the ’starting kind’ to a recurrence relation and a linear equations
solve with small computational costs per time step. These methods can be easily transferred to on-board chip de-
signs for driving and controlling robots. See [31] for 13 separate time-varying matrix/vector tasks, their Simulink
models and circuit diagrams, as well as 2 chapters on fixed-base and mobile robot applications. Each chapter in
[31] is well referenced with 10 to 30 plus citations from the engineering literature.
Zeroing Neural Networks have been used extensively in engineering and design for 2 decades now but numerical
analysis of ZNN has hardly been started. Time-varying matrix numerical analysis seems to be very different from
static matrix analysis. It seems to depend on and run according to different principles than Wilkinson’s now classic
backward stability and error analysis based static matrix methods. This will become clearer and clearer throughout
this introductory survey paper.
Since the majority of our readers have probably never seen or attempted any method that can predict future events
in real time for time-varying matrix and vector problems we now exemplify one such problem and lead the readers
along the 7 steps of ZNN and RNN methods. We choose the time-varying eigenvalue problem A(t)x(t) = λ(t)x(t)
for hermitean matrix flows A(t) = A(t)∗ ∈ Cn,n .
If An,n = A∗ is a fixed static matrix, one would likely solve the eigenvalue problem via the Francis multi-shift implicit
QR algorithm for n ≤ 11,000 and use Krylov methods for larger sized A. Eigenvalues are continuous functions
of the matrix entries. Thus taking the computed eigenvalues for A(tk) as an approximation for the eigenvalues of
A(tk+1) might seem to suffice if the sampling gap τ = tk+1 − tk is relatively small. But in practice the eigenvalues
of A(tk) share only a few correct leading digits with the eigenvalues of A(tk+1). Hence we clearly need more than
static matrix methods to deal with precision for time-varying matrix flows A(t).
By definition, for square matrix flows A(t) ∈ Cn,n we need to compute a nonsingular matrix flow V (t) ∈ Cn,n
and a diagonal time-varying matrix flow D(t) ∈ Cn,n so that
A(t)V (t) = V (t)D(t) for all t. (1∗ )
This serves as the model for the time-varying eigenvalue problem.
Here are the steps for time-varying matrix eigen-analyses and ZNN.
Step 1 : Create the error function
E(t) = A(t)V (t) − V (t)D(t) (= On,n ideally.) (2∗ )
Step 2 : Stipulate exponential decay of E(t) as a function of time, i.e.,
Ė(t) = −η E(t) (3∗ )
for a decay constant η > 0. The RNN method would use a different decaying function on the right-hand
side.
Taking the derivative of the error function E(t) in (2∗) and using (3∗) yields
Ȧ(t)V(t) + A(t)V̇(t) − V̇(t)D(t) − V(t)Ḋ(t) = −η A(t)V(t) + η V(t)D(t)   (4)
or rearranged with all derivatives of the unknowns V(t) and D(t) gathered on the left-hand side :
A(t)V̇(t) − V̇(t)D(t) − V(t)Ḋ(t) = −η A(t)V(t) + η V(t)D(t) − Ȧ(t)V(t) .   (5)
Unfortunately we do not know how to solve the full system eigen-equation (5) for the eigendata derivative matrices
V̇ (t) and Ḋ(t) as Step 3 asks us to do. This is due to the non-commutativity of matrix products and since V̇ (t)
appears both as a left and right matrix factor in (5). A solution that relies on Kronecker products for symmetric
matrix flows A(t) = A(t)T is available in [33] and we will go that route when dealing with square roots of time-
varying matrices in subpart (VII) and with solving time-varying classic matrix equations in subpart (IX) of Section
2.
Now we have to revise our model and restart the whole process. To overcome the dilemma, we separate the
time-varying matrix global eigenvalue problem for An,n(t) into n separate eigenvalue problems
A(t)xi(t) = λi(t)xi(t) for i = 1, ..., n   (1i)
that can be solved for one eigenvector and one eigenvalue at a time as follows.
Step 1 : Here the error function for the ith eigenpair is e(t) = A(t)xi(t) − λi(t)xi(t), which is zero for all t if xi(t) and λi(t) solve (1i).
Step 2 : Stipulating exponential decay ė(t) = −η e(t) of this error function gives
ė(t) = Ȧ(t)xi(t) + A(t)ẋi(t) − λ̇i(t)xi(t) − λi(t)ẋi(t) = −η A(t)xi(t) + η λi(t)xi(t) = −η e(t),   (4i)
or rearranged with the derivatives of xi(t) and λi(t) gathered on the left
A(t)ẋi(t) − λ̇i(t)xi(t) − λi(t)ẋi(t) = −η A(t)xi(t) + η λi(t)xi(t) − Ȧ(t)xi(t) ,   (5i)
i.e.,
(A(t) − λi(t)In)ẋi(t) − λ̇i(t)xi(t) = (−η (A(t) − λi(t)In) − Ȧ(t))xi(t) .   (6i)
For each i = 1, ..., n equation (6i) is a differential equation in the unknown eigenvector xi(t) ∈ Cn and the
unknown eigenvalue λi(t) ∈ C. We concatenate xi(t) and λi(t) in zi(t) = [xi(t); λi(t)] ∈ Cn+1 (in Matlab colon
notation) and have the following system of DEs for the unknown eigenvector xi(t) and its associated eigenvalue λi(t)
[ A(t) − λi(t)In , −xi(t) ]n,n+1 · [ ẋi(t) ; λ̇i(t) ] = (−η (A(t) − λi(t)In) − Ȧ(t)) xi(t) .   (7i)
Since eigenvectors define invariant 1-dimensional subspaces of matrices, it is advisable to force the computed
eigenvectors xi (t) to have length 1 through the added error function e2 (t) = x∗i (t)xi (t)−1. Stipulating exponential
decay for e2 leads to
ė2 (t) = 2x∗i (t)ẋi (t) = −µ (x∗i (t)xi (t) − 1) = −µ e2 (t)
or
−x∗i (t)ẋi (t) = µ/2 (x∗i (t)xi (t) − 1) (8i)
for a decay constant µ > 0. If we place equation (8i) below the last row of the n by n+1 system matrix of equation
(7i) and extend its right hand side vector by the entry on the right hand side of (8i), we obtain an n+1 by n+1
time-varying system of DEs whose system matrix is hermitean, since A(t) was assumed hermitean at the start. For µ = 2η
we thus have
[ A(t) − λi(t)In , −xi(t) ; −x∗i(t) , 0 ] · [ ẋi(t) ; λ̇i(t) ] = [ (−η (A(t) − λi(t)In) − Ȧ(t))xi(t) ; η (x∗i(t)xi(t) − 1) ] .   (9i)
We set
P(tk) = [ A(tk) − λi(tk)In , −xi(tk) ; −x∗i(tk) , 0 ] ∈ Cn+1,n+1 ,   z(tk) = [ xi(tk) ; λi(tk) ] ∈ Cn+1 ,
and q(tk) = [ (−η (A(tk) − λi(tk)In) − Ȧ(tk))xi(tk) ; η (x∗i(tk)xi(tk) − 1) ] ∈ Cn+1
for discretized time t = tk and we have completed Step 3.
Step 3 : Our model (1i) for the ith eigenvalue equation has been transformed into the matrix/vector differential
equation
P (tk )ż(tk ) = q(tk ) or ż(tk ) = P (tk )\q(tk ) . (10i)
Step 4 : We choose the following convergent look-ahead finite 5-IFD (five Instance Finite Difference) formula of
truncation error order O(τ^3) from [19] for żk :
żk = (8zk+1 + zk − 6zk−1 − 5zk−2 + 2zk−3) / (18τ)  ∈ Cn+1 .   (11i)
Step 5 : Equating the different expressions for 18τ żk in (10i) and (11i) we have
18τ · żk = 8zk+1 +zk −6zk−1 −5zk−2 +2zk−3 = 18τ ·(P \q) (12i)
of error order O(τ 4 ).
Step 6 : Here we solve (12i) for zk+1 to obtain the ZNN recursion
zk+1 = 9/4 τ (P(tk)\q(tk)) − 1/8 zk + 3/4 zk−1 + 5/8 zk−2 − 1/4 zk−3 ∈ Cn+1   (13i)
with truncation error order O(τ^4).
Step 7 : Now iterate to predict the eigendata zk+2 for A(tk+2 ) from earlier eigen and system data for tj with
j ≤ k + 1.
The 5-IFD formula (11i) is of type j_s = 2_2 and when used in ZNN its truncation error order becomes O(τ^4)
as j + 2 = 4. To start a ZNN iteration process with a look-ahead convergent finite difference formula of type
j_s from the list in [19] requires j + s known starting values. For time-varying matrix eigenvalue problems we
generally use Francis QR to generate j + s initial eigendata sets, then iterate via ZNN and throughout the iterative
process we need to keep only the most recent j + s data sets in memory.
MATLAB codes for time-varying matrix eigenvalue computations via ZNN are available at [21].
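To make Steps 3 through 6 concrete, here is a condensed one-step sketch in Matlab, based on (9i) and (13i); it is an illustration only and not the code from [21], and the function and parameter names are assumptions of this sketch. A and Adot are function handles for the flow A(t) and its derivative.

% Sketch of one ZNN step (13i) for one eigenpair of a hermitean flow A(t).
% zk, zkm1, zkm2, zkm3 are the four latest eigendata vectors [x_i; lambda_i] in C^{n+1}.
function zk1 = znn_eig_step(A, Adot, tk, tau, eta, zk, zkm1, zkm2, zkm3)
  n   = numel(zk) - 1;
  x   = zk(1:n);  lam = zk(n+1);
  Ak  = A(tk);    Adk = Adot(tk);
  P   = [Ak - lam*eye(n), -x; -x', 0];                        % P(t_k) of (9i)
  q   = [(-eta*(Ak - lam*eye(n)) - Adk)*x; eta*(x'*x - 1)];   % q(t_k) of (9i)
  zk1 = 9/4*tau*(P\q) - 1/8*zk + 3/4*zkm1 + 5/8*zkm2 - 1/4*zkm3;   % recursion (13i)
end

In a full code this step is repeated for every tk, after j + s = 4 starting values have been computed by a static eigensolver.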
Next we show how to construct general look-ahead finite difference schemes from random entry seed vectors via
Taylor polynomials and elementary linear algebra that may or – most likely – may not be convergent. The con-
structive step is followed by a second optimization procedure to find look-ahead and convergent finite difference
formulas. The second part may or may not succeed in every attempt and this we will explain further below.
Consider a discrete time-varying state vector xj = x(tj ) = x(j ·τ ) for a constant sampling gap τ and j = 0, 1, 2, ...
and write out ` + 1 explicit Taylor expansions for xj+1 , xj−1 , ..., xj−` about xj as follows.
xj+1 = xj + τ ẋj + τ²/2! ẍj + τ³/3! x_j^(3) + ... + τ^m/m! x_j^(m) + O(τ^{m+1})   (14)
xj−1 = xj − τ ẋj + τ²/2! ẍj − τ³/3! x_j^(3) + ... + (−1)^m τ^m/m! x_j^(m) + O(τ^{m+1})   (15)
xj−2 = xj − 2τ ẋj + (2τ)²/2! ẍj − (2τ)³/3! x_j^(3) + ... + (−1)^m (2τ)^m/m! x_j^(m) + O(τ^{m+1})   (16)
xj−3 = xj − 3τ ẋj + (3τ)²/2! ẍj − (3τ)³/3! x_j^(3) + ... + (−1)^m (3τ)^m/m! x_j^(m) + O(τ^{m+1})   (17)
⋮   (18)
xj−ℓ = xj − ℓτ ẋj + (ℓτ)²/2! ẍj − (ℓτ)³/3! x_j^(3) + ... + (−1)^m (ℓτ)^m/m! x_j^(m) + O(τ^{m+1})   (19)
Here x_j^(3), ..., x_j^(m) denote the third through mth time derivatives of x(t) at tj.
Each equation above contains m + 2 terms on the right hand side, namely m derivative terms plus the terms xj
and O(τ^{m+1}). In each right hand side the m − 1 'column' terms that follow the first derivative term τ ẋj contain
products of identical powers of τ and identical higher derivatives of xj. Our aim is to find a linear combination of these ℓ + 1 equations
for which these m − 1 column sums vanish for all possible higher derivatives of the solution x(t). If we are
able to do so, then we can express xj+1 in terms of xj, ẋj, and τ with a truncation error of order O(τ^{m+1}) by
using the found linear combination for a shortened version of the ℓ + 1 equations (14) through (19).
We first isolate the 'rational number' factors of these m − 1 column terms on the right hand sides of the equations
(14) through (19) in the rational matrix
A_{ℓ+1,m−1} = [  1/2!      1/3!      1/4!     ···    1/m!
                 1/2!     −1/3!      1/4!     ···    (−1)^m/m!
                 2²/2!    −2³/3!     2⁴/4!    ···    (−1)^m 2^m/m!
                   ⋮         ⋮         ⋮                 ⋮
                 ℓ²/2!    −ℓ³/3!     ℓ⁴/4!    ···    (−1)^m ℓ^m/m!  ] .   (20)
A has ℓ + 1 rows and m − 1 columns. The m − 1 column terms in equations (14) through (19) have the matrix times
vector product form
A · taudx = A_{ℓ+1,m−1} · [ τ² ẍj ; τ³ x_j^(3) ; τ⁴ x_j^(4) ; ... ; τ^m x_j^(m) ]_{m−1,1} .   (21)
Here the m − 1 dimensional column vector taudx contains the increasing powers of τ multiplied by the respective
higher derivatives of xj that were left out from equations (14) to (19) when forming A.
Note that for a nonzero left kernel row vector y ∈ Rℓ+1 of A_{ℓ+1,m−1} with y · A = O_{1,m−1} we have
y · (A · taudx) = (y · A) · taudx = O_{1,m−1} · taudx = 0 ,
i.e., the linear combination of equations (14) through (19) with the coefficients of y makes all higher derivative
column terms vanish at once. And thus we have achieved our goal. Clearly A_{ℓ+1,m−1} has a nontrivial left kernel when it has more rows than
columns, i.e. when ` + 1 > m − 1. To simplify subscripts, we introduce k = m − 1 and s = ` + 1 − (m − 1) ≥ 1
and assume that A has more rows than columns. With this switch of row and column dimension designations
A`+1,m−1 becomes Ak+s,k for s ≥ 1 and k ≥ 1.
The left kernel of Ak+s,k is the transposed right kernel of ATk,k+s . Each rational matrix Ak+s,k with s ≥ 1 and
1 ≤ k ≤ 6 has rank k by inspection. Therefore the reduced row echelon form of ATk,k+s is (Ik , Rk,s )k,k+s .
For an arbitrary seed vector y ≠ 0 ∈ Rs, the column vector w = [u; y] ∈ Rk+s in Matlab colon notation with
u = −Ry ∈ Rk lies in the right kernel of A^T since
(Ik , Rk,s) · w = (Ik , Rk,s) · [−Ry ; y] = −Ry + Ry = 0 ∈ Rk
and A^T shares its right kernel with its reduced row echelon form (Ik , Rk,s).
Note that the linear combination of the equations (14) through (19) for the coefficients in w creates a predictive
recurrence relation for xj+1 in terms of xj , xj−1 , ..., xj−` and ẋj with truncation error order O(τ k+2 ) since k+2 =
m + 1. This recurrence relation is look-ahead and determines xj+1 from earlier data. This ends the first, the linear
algebra part for constructing suitable finite difference formulas for discretized ZNN methods.
A recursion formula’s characteristic polynomial determines its convergence and thus its suitability for discretized
ZNN methods. More specifically, the convergence of finite difference formulas or recurrence relations like ours
hinges on the lay of the roots of their associated characteristic polynomials. This restriction is well known for
multistep formulas and also applies to processes such as discretized ZNN. Convergence requires that all roots of
the formula’s characteristic polynomial lie inside or on the unit disk in C with no repeated roots allowed on the
unit circle, see [1, Sect. 17.6.3] e.g..
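The linear construction part and the root test can be condensed into a few lines of Matlab. The following sketch uses illustrative variable names and one specific seed; the nonlinear seed search via fminsearch described below is omitted. With this seed it reproduces the 5-IFD formula (11i).

% Build the rational matrix (20) for a type k_s formula, turn a seed y in R^s into
% recurrence coefficients w, and inspect the characteristic polynomial roots.
k = 2;  s = 2;                          % type 2_2 reproduces the 5-IFD formula (11i)
ell = k + s - 1;  m = k + 1;            % ell+1 Taylor expansions, m-1 = k 'column' terms
mult = [1, -(1:ell)]';                  % offsets for x_{j+1}, x_{j-1}, ..., x_{j-ell}
A = (mult.^(2:m)) ./ factorial(2:m);    % the (k+s) by k matrix of (20)
RR = rref(A');  R = RR(:, k+1:end);     % rref(A^T) = [I_k , R]
y = [-5; 2];                            % seed vector in R^s; this seed gives the 5-IFD
w = [-R*y; y];                          % coefficients of x_{j+1}, x_{j-1}, ..., x_{j-ell}
c = mult'*w;                            % coefficient of tau*xdot_j in the recurrence
p = [w(1), -sum(w), w(2:end)'];         % characteristic polynomial coefficients
disp(p)                                 % [8 1 -6 -5 2], i.e., formula (11i) with c = 18
r = roots(p);                           % look-ahead needs w(1) ~= 0; convergence needs all
disp(abs(r'))                           % roots in the closed unit disk, simple on the circle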
Next we explain a second, nonlinear part of our look-ahead and convergent discretization formula algorithm. It
tries to find look-ahead and convergent discretization formulas by minimizing the maximal modulus root of ’look-
ahead’ characteristic polynomials to below 1 + eps for a very small threshold 0 ≈ eps ≥ 0 so that they become
numerically and practically convergent as well.
The seed vector space for ATk,k+s in the first linear algorithm part is Rs . Any seed y therein spawns a look-
ahead finite difference scheme of truncation error order O(τ k+2 ) when its associated characteristic polynomial
coefficients are used for the linear combination of the equations (14) to (19). The set of look-ahead characteristic
polynomials is not a linear space since sums of such polynomials may or – most often – may not represent look-
ahead finite difference schemes. Therefore we are only allowed to vary the seed vector and not the associated
look-ahead polynomials themselves when trying to minimize their maximal magnitude root. We must search
indirectly instead in a neighborhood of the starting seed y for look-ahead characteristic polynomials with minimal
maximal magnitude root after y has been extended using the linear first part to a k + s coefficient vector whose
entries give rise to a look-ahead finite difference formula.
Our indirect minimization process uses Matlab’s multidimensional minimizer function fminsearch.m until we
have either found a seed with an associated characteristic polynomial that is convergent or there is no convergent
such formula from the chosen seed. fminsearch.m uses the Nelder-Mead downhill simplex method that finds
local minima for non-linear functions without using derivatives. Nelder-Mead mimics the method of steepest
descent and it searches for local minima via multiple function evaluations. Our seed selection process starts with
random entry seeds y ∈ Rs with normally distributed entries. The minimization algorithm runs through a double
do loop. An outside loop for a number (100, 500, 2,000, 10,000 or 30,000, or ...) of random starting seeds in Rs
and an inner loop for 3 to 6 randomized restarts from the latest updated fminsearch seed if its latest look-ahead
polynomial find is not convergent. In a few inner loop repeats, we then project the latest unsuccessful seed onto
a point with newly randomized entries nearby and use the new seed for another minimization run before starting
afresh.
In our experiments with trying to find convergent and look-ahead discretization formulas of type k_s we never
succeeded when 1 ≤ s < k. Success always occurred for s = k, and more often so when s = k + 1 or s = k + 2.
It is obvious that for s = 1 and any seed y ∈ R1 there is only one normalized look-ahead discretization formula
k_1 and it is never convergent. For convergence we seemingly need more freedom in our seed space Rs than one
dimension, or fewer than k dimensions, can provide.
Our two part algorithm has computed many convergent look-ahead finite difference schemes of all types k_s with
1 ≤ k ≤ 6 and k ≤ s ≤ k + 3 and with truncation error orders between O(τ^3) and O(τ^8). Convergent look-ahead
finite difference formulas with error orders above O(τ^4) were unavailable before; they had never appeared anywhere
in the literature.
Zeroing Neural Network methods and the quest for high order convergent and look-ahead finite difference formulas
bring up many open numerical analysis problems:
Are there any look-ahead finite difference schemes with s < k in Ak+s,k and minimally more rows than columns?
Why or why not?
For relatively low dimensions the rational numbers matrix Ak+s,k can easily be checked for full rank when 1 ≤
k ≤ 6. Is that true in general? Has the A matrix ever been encountered anywhere else?
Every polynomial that we have constructed from any seed vector y ∈ Rs with s ≥ k by our method has had
precisely one root on the unit circle within 10−15 numerical accuracy. This even holds for non-convergent finite
difference formula polynomials with some roots outside of the unit disk. Do all Taylor expansion matrix Ak+s,k
based polynomials have at least one root on the unit circle in C? Is this root always 1?
And are there some convergent look-ahead finite difference formulas with all of their characteristic polynomial
roots inside the open unit disk?
For any low dimensional type k_s finite difference scheme there are dozens of convergent and look-ahead finite
difference formulas of any one truncation error order. What is the most advantageous such formula to use in ZNN
or RNN? What properties of these multiple suitable formulas improve or hinder the computations for time-varying
matrix processes?
Intuitively we have preferred those finite difference formulas whose characteristic polynomials have relatively
small second largest magnitude roots. Is that correct and a good strategy for discretized ZNN methods?
A list of observations and open problems for ZNN based time-varying matrix eigen methods is included in [20].
The error contributions to ZNN’s output from the two sources of rounding errors in solving P \q and truncation
errors of the finite difference formula were mentioned earlier. One other source of errors relates to the appearance
of derivatives such as Ȧ(tk ) in q(tk ) in formulas (9i) and (10i) above. How should one minimize or equilibrate
their effects for the best overall computed accuracy when using recurrence relations with high truncation error
orders? High degree backward recursion formulas for derivatives are generally not very good.
[Figure: Cuneiform tablet (from Yale) with Babylonian methods for solving a system of two linear equations]
(I) Time-varying Linear Equations and ZNN :
Here we deal with matrix flows A(t)n,n ∈ Cn,n all of whose matrices are invertible on a time interval to ≤ t ≤
tf ⊂ R. Our model equation is An,n (t)x(t) = b(t) ∈ Cn for the unknown solution vector x(t). The error function
is ① e(t) = A(t)x(t) − b(t) and the error differential equation is
② ė(t) = Ȧ(t)x(t) + A(t)ẋ(t) − ḃ(t) = −η (A(t)x(t) − b(t)) = −η e(t) .
Solving ② first for A(t)ẋ(t) and then for ẋ(t) we have the model, see also [29, (4.4)],
③ ẋ(t) = A(t)^{-1} (−Ȧ(t)x(t) + ḃ(t) + η b(t)) − η x(t) .
To simplify matters we use the simple 5-IFD formula (11i) of Section 1 again in discretized mode with Ak =
A(tk ), xk = x(tk ) and bk = b(tk ) to obtain
④ ẋk = (8xk+1 + xk − 6xk−1 − 5xk−2 + 2xk−3) / (18τ) ∈ Cn .
This yields
⑤ 18τ · ẋk = 8xk+1 + xk − 6xk−1 − 5xk−2 + 2xk−3 = 18τ · (A_k^{-1}(−Ȧk xk + ḃk + η bk) − η xk)   (∗)
Solving the inner equation (∗) of ⑤ for xk+1 gives us the predictive convergent look-ahead ZNN formula
⑥ xk+1 = 9/4 τ · (A_k^{-1}(−Ȧk xk + ḃk + η bk) − η xk) − 1/8 xk + 3/4 xk−1 + 5/8 xk−2 − 1/4 xk−3 ∈ Cn .
Since equation ⑥ involves the matrix inverse A_k^{-1} at each time step tk, we propose two different Matlab codes
to solve time-varying linear equations for invertible matrix flows. The code tvLinEquatexinv.m in [26] uses
Matlab's matrix inversion method inv.m explicitly in each time step as required in ⑥ above, while our second
code tvLinEquat.m, also in [26], uses two separate ZNN formulas. The first code, tvLinEquatexinv.m,
finds the time-varying matrix inverses of A(t) externally via Matlab and then solves the linear equations flow A(t)x(t) =
b(t) via ZNN. The second code, tvLinEquat.m, solves the time-varying linear equation with the help of
ZNN that computes the inverse of each A(t) as well, and thus it solves equation ⑥ by using two independent interwoven
ZNN iterations in one code. Both methods run equally fast. The first, with its explicit matrix inversion, is a little
more accurate since Matlab's inv computes small dimensioned matrix inverses near perfectly with 10^{-16} errors
while ZNN based time-varying matrix inversions give us slightly larger errors, losing 1 or 2 accurate leading digits.
This is most noticeable if we use low truncation error look-ahead finite difference formulas for ZNN and relatively
large sampling gaps τ . There are dozens of references when googling ’ZNN for time-varying linear equations’,
see also [28] or [32].
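For concreteness, here is a minimal runnable sketch of the discretized ZNN iteration ⑥ for a small invertible flow. It is not one of the codes from [26]; the flow A(t), b(t) and the parameter values τ and η are illustrative choices, and Matlab's backslash is used in place of an explicit inverse.

% ZNN with the 5-IFD (11i) for A(t) x(t) = b(t), 2 x 2 example flow
A    = @(t) [4+sin(t), cos(t); cos(t), 4+cos(2*t)];     % an invertible 2 x 2 flow
Adot = @(t) [cos(t), -sin(t); -sin(t), -2*sin(2*t)];
b    = @(t) [sin(2*t); cos(t)];    bdot = @(t) [2*cos(2*t); -sin(t)];
tau = 1e-3;  eta = 100;  t = 0:tau:10;                  % sampling gap and decay constant
X = zeros(2, numel(t));
for k = 1:4, X(:,k) = A(t(k))\b(t(k)); end              % j+s = 4 exact start-up values
for k = 4:numel(t)-1                                    % predict x(t_{k+1}) from data at t_k
  xdot = A(t(k)) \ (-Adot(t(k))*X(:,k) + bdot(t(k)) + eta*b(t(k))) - eta*X(:,k);
  X(:,k+1) = 9/4*tau*xdot - 1/8*X(:,k) + 3/4*X(:,k-1) + 5/8*X(:,k-2) - 1/4*X(:,k-3);
end
norm(X(:,end) - A(t(end))\b(t(end)))                    % predictive error at t_f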
(II) Time-varying Matrix Inverses via ZNN :
Assuming that all matrices of a given time-varying matrix flow A(t)n,n ∈ Cn,n are invertible on a given time
interval to ≤ t ≤ tf ⊂ R, we construct a ZNN method that finds the inverse X(t) of each A(t) predictively from
previous data so that A(t)X(t) = In , i.e., with A(t) = X(t)−1 for every t. This gives rise to the error function
① E(t) = A(t) − X(t)^{-1} (= On,n ideally) and the associated error differential equation
② Ė(t) = Ȧ(t) − Ẋ(t)^{-1} = −η (A(t) − X(t)^{-1}) = −η E(t) .
Since X(t)X(t)−1 = In is constant for all t, d(X(t)X(t)−1 )/dt = On,n . The product rule gives us the following
relation
On,n = d(X(t)X(t)^{-1})/dt = Ẋ(t)X(t)^{-1} + X(t)Ẋ(t)^{-1} ,
or Ẋ(t)^{-1} = −X(t)^{-1} Ẋ(t)X(t)^{-1} .
Plugging this into ② establishes
(∗)   Ė(t) = Ȧ(t) − Ẋ(t)^{-1} = Ȧ(t) + X(t)^{-1}Ẋ(t)X(t)^{-1} = −η A(t) + η X(t)^{-1} = −η E(t) .
Multiplying the inner equation (∗) above by X(t) from the left and from the right on both sides and then
solving for Ẋ(t) yields
③ Ẋ(t) = −X(t)((Ȧ(t) + η A(t))X(t) − η In) .
If – for simplicity – we choose the same 5-IFD look-ahead and convergent formula as was chosen on line (11i) for
Step 4 of the ZNN eigendata method in Section 1, then we obtain the analogous equation to (12i) here with (P \q)
replaced by the right hand side of equation ③. Instead of the eigendata iterates zj in (12i) we use the inverse
matrix iterates Xj = X(tj) here for j = k − 3, ..., k + 1 and obtain
⑤ 18τ · Ẋk = 8Xk+1 + Xk − 6Xk−1 − 5Xk−2 + 2Xk−3 = −18τ · Xk((Ȧ(tk) + η A(tk))Xk − η In) .
Solving ⑤ for Xk+1 supplies the complete ZNN recursion formula that finishes Step 6 of the predictive dis-
cretized ZNN algorithm development for time-varying matrix inverses.
⑥ Xk+1 = −9/4 τ Xk((Ȧ(tk) + η A(tk))Xk − η In) − 1/8 Xk + 3/4 Xk−1 + 5/8 Xk−2 − 1/4 Xk−3 ∈ Cn,n
This look-ahead iteration is based on the convergent 5-IFD formula of type k_s = 2_2 with truncation error order
O(τ^4). The formula requires two matrix multiplications, two matrix additions, one backward approximation of
the derivative of A(tk ) and a short recursion with Xj at each time step.
The error function differential equation ③ is akin to the Getz and Marsden dynamic system (without our ZNN
η terms) for time-varying matrix inversions, see [3] and [5]. Simulink circuit diagrams for our error function and
time-varying matrix inversions are available in [31, p. 97].
A Matlab code for the time-varying matrix inversion problem is available in [26] as tvMatrixInverse.m. A
different model is used in [30] and several others are described in [31, chapters 9, 12].
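As an illustration of Steps 3 to 6 for this problem, one predictive step can be sketched as the following Matlab function. It is not tvMatrixInverse.m from [26]; the function name and argument conventions are assumptions of this sketch.

% One ZNN step of formula ⑥ for the predictive inverse X(t_{k+1}) of A(t_{k+1}).
% Ak, Adk are A(t_k) and its derivative; Xk, ..., Xkm3 are the four latest iterates.
function Xk1 = znn_inverse_step(Ak, Adk, tau, eta, Xk, Xkm1, Xkm2, Xkm3)
  n   = size(Ak, 1);
  Xdk = -Xk*((Adk + eta*Ak)*Xk - eta*eye(n));                   % formula ③ above
  Xk1 = 9/4*tau*Xdk - 1/8*Xk + 3/4*Xkm1 + 5/8*Xkm2 - 1/4*Xkm3;  % 5-IFD update ⑥
end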
Remark 1 : (a) This example reminds us that the numerics for time-varying matrix problems may differ greatly
from our static matrix numerical approaches. Time-varying matrix problems are governed by different concepts
and follow different best practices.
For static matrices An,n we are always conscious of and we remind our students never to compute the inverse A−1
in order to solve a linear equation Ax = b because this is a very expensive proposition and is rightly shunned.
But for time-varying matrix flows A(t)n,n it seems impossible to solve time-varying linear systems A(t)x(t) =
b(t) without explicit matrix inversions as has been explained in part (I) above. For time-varying linear equations,
ZNN methods allow us to compute time-varying matrix inverses and solve time-varying linear equations in real
time, accurately and predictively. What is shunned for static matrix problems may work well for the time-varying
variant and vice versa.
(b) For each of the example problems in this section an annotated rudimentary ZNN based Matlab code is stored
in [26]. For high truncation error order look-ahead convergent finite difference formulas such as 4_5 these codes
achieve 12 to 16 correct leading digits predictively for each entry of the desired solution matrix or vector and they
do so uniformly for all parameter values of t.
(III) Pseudoinverses of Time-varying Non-square Matrices with Full Rank and without :
Here we first look at rectangular matrix flows A(t)m,n ∈ Cm,n with m 6= n that have uniform full rank(A(t)) =
min(m, n) for all to ≤ t ≤ tf ⊂ R.
All matrices Am,n with m = n or m 6= n have two kinds of nullspaces or kernels, namely
N (A)r = {x ∈ Cn | Ax = 0 ∈ Cm } and N (A)` = {x ∈ Cm | xA = 0 ∈ Cn }
that are called A's right or left nullspace, respectively. If m > n and Am,n has full rank n, then A's right kernel
is {0} ⊂ Cn while Ax = b ∈ Cm cannot be solved for every b ∈ Cm since the number of columns of Am,n is
less than required for spanning Cm. If m < n and Am,n has full rank m, then A's left kernel is {0} ⊂ Cm and
similarly not all equations xA = b ∈ Cn are solvable with x ∈ Cm . Hence we need to abandon matrix inversion
for rectangular non-square matrices and resort to pseudoinverses instead.
There are two kinds of pseudoinverse of Am,n , depending on whether m > n or m < n. They are always denoted
by A+ and always have size n by m. If m > n and A has full rank n, then A+ = (A∗ A)−1 A∗ ∈ Cn,m is
called the left pseudoinverse because A+ A = In . For m < n the right pseudoinverse of Am,n with full rank m is
A+ = A∗ (AA∗ )−1 ∈ Cn,m with AA+ = Im .
In either case A+ solves a minimization problem, i.e.,
min_{x∈Cn} ||Ax − b||2 ≥ 0 is attained at x = A+b for m > n, and min_{x∈Cm} ||xA − b||2 ≥ 0 is attained at x = bA+ for m < n.
Thus the pseudoinverse of a full rank rectangular matrix Am,n with m 6= n solves the least squares problem of
linear equations that have nontrivial left or right kernels, respectively. It is easy to verify that (A+ )+ = A in either
case, see e.g. [14, section 4.8.5]. Thus the pseudoinverse A+ acts similarly to the matrix inverse A−1 when An,n
is invertible for m = n. Hence its name.
First we want to find the pseudoinverse X(t) of a full rank rectangular matrix flow A(t)m,n with m < n. Since
X(t)+ = A(t) we can try to use the dynamical system of Getz and Marsden [3] again and start from A(t) = X(t)+
as our model equation.
(a) The right pseudoinverse X(t) = A(t)+ for full rank m matrix flows A(t)m,n when m < n for all t :
The exponential decay stipulation for our model's error equation ① E(t) = A(t) − X(t)+ makes
② Ė(t) = Ȧ(t) − Ẋ(t)+ = −η (A(t) − X(t)+) = −η E(t) .
Since X(t)+X(t) = Im for all t, we have Ẋ(t)+X(t) = −X(t)+Ẋ(t), or Ẋ(t)+ = −X(t)+Ẋ(t)X(t)+ after multiplying through by X(t)+ on the
right. Updating equation ② establishes
Ȧ(t) + X(t)+Ẋ(t)X(t)+ = −η A(t) + η X(t)+ .
Multiplying both sides on the left and right by X(t) then yields
X(t)Ȧ(t)X(t) + X(t)X(t)+Ẋ(t) = −η X(t)A(t)X(t) + η X(t) .
But X(t)X(t)+ has size n by n and rank m < n. Therefore it is not invertible. And thus we cannot cancel the
encumbering left factors for Ẋ(t) above and solve the equation for Ẋ(t) as would be needed for Step 3. And a
valid ZNN formula cannot be obtained from our first simple model equation ① above.
This example contains a warning not to give up if one model does not work for a time-varying matrix problem.
So let us try another model equation for the right pseudoinverse X(t) of a full rank matrix Am,n with m < n. We
use the definition of A(t)+ = X(t) = A(t)∗ (A(t)A(t)∗ )−1 in the revised model X(t)A(t)A(t)∗ = A(t)∗ now.
From the error equation ① E = XAA∗ − A∗ we obtain (leaving out all time dependencies of t from now on for
better readability)
② Ė = ẊAA∗ + X(ȦA∗ + AȦ∗) − Ȧ∗ = −η (XAA∗ − A∗) = −η E .
Here the matrix product A(t)A(t)∗ on the left-hand side is of size m by m and has rank m for all t since A(t) does.
Thus we can find an explicit expression for Ẋ(t), namely
③ Ẋ = ( Ȧ∗ − X(ȦA∗ + AȦ∗) − η (XAA∗ − A∗) ) (AA∗)^{-1} .
The steps ④, ⑤ and ⑥ now follow as before. The Matlab ZNN based code for right pseudoinverses is
tvRightPseudInv.m in [26]. Our code finds right pseudoinverses of time-varying full rank matrices A(t)m,n
predictively with an entry accuracy of 14 to 16 leading digits in every position of A+ (t) = X(t) when compared
with the pseudoinverse defining formula. We have used the 4_5 look-ahead convergent finite difference formula
with sampling gap τ = 0.0002.
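A condensed one-step sketch of this algorithm, based on the explicit expression ③ above and the 5-IFD update, reads as follows; it is an illustration only and not tvRightPseudInv.m from [26].

% One ZNN step for the right pseudoinverse X ~ A^+ of a full rank m flow A(t)_{m,n}, m < n.
function Xk1 = znn_right_pinv_step(Ak, Adk, tau, eta, Xk, Xkm1, Xkm2, Xkm3)
  E   = Xk*(Ak*Ak') - Ak';                                      % error function ①
  Xdk = (Adk' - Xk*(Adk*Ak' + Ak*Adk') - eta*E) / (Ak*Ak');     % solve ③ for Xdot
  Xk1 = 9/4*tau*Xdk - 1/8*Xk + 3/4*Xkm1 + 5/8*Xkm2 - 1/4*Xkm3;  % 5-IFD update
end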
Similar numerical results are achieved for left pseudoinverses A(t)+ for time-varying matrix flows A(t)m,n with
m > n.
(b) The left pseudoinverse X(t) = A(t)+ for full rank n matrix flows A(t)m,n when m > n for all t :
The starting model is A+ = Xn,m = (A∗A)^{-1}A∗ and the error equation ① E = (A∗A)X − A∗ then leads to
② Ė = (Ȧ∗A + A∗Ȧ)X + (A∗A)Ẋ − Ȧ∗ = −η ((A∗A)X − A∗) = −η E
and, since A∗A is invertible of size n by n,
③ Ẋ = (A∗A)^{-1} ( Ȧ∗ − (Ȧ∗A + A∗Ȧ)X − η ((A∗A)X − A∗) ) .
Then follow the steps from subpart (a) and develop a Matlab ZNN code with a truncation error order finite differ-
ence formula of your own choice.
(c) The pseudoinverses X(t) = A(t)+ for matrix flows A(t)m,n with variable rank(A(t)) ≤ min(m, n) :
As before with the unknown pseudoinverse X(t)n,m now for a matrix flow A(t) ∈ Cm,n, we use the error equation
① E(t) = A(t) − X(t)+ and the error function DE
② Ė(t) = Ȧ(t) − Ẋ(t)+ = −η (A(t) − X(t)+) = −η E(t) .
For matrix flows A(t) with rank deficiencies the derivative of X+, however, becomes more complicated, see [4, Eq. 4.12]:
Ẋ+ = −X+ẊX+ + X+X+∗Ẋ∗(In − XX+) + (Im − X+X)Ẋ∗X+∗X+ ,
where previously, for full rank matrix flows A(t), only the first term above was needed to express Ẋ+. Plugging
the long expression for Ẋ+ into the inner equation of ② we obtain
③ Ȧ + X+ẊX+ − X+X+∗Ẋ∗(In − XX+) − (Im − X+X)Ẋ∗X+∗X+ = −η A + η X+
which needs to be solved for Ẋ. Unfortunately Ẋ appears once in the second term on the left and twice as Ẋ∗ in
the third and fourth terms of ③ above. Maybe another start-up error function can give better results, but it seems
that the general rank pseudoinverse problem cannot be easily solved via the ZNN process.
The Matlab code for ZNN look-ahead left pseudoinverses of full rank time-varying matrix flows is tvLeftPseudInv.m
in [26]. The right pseudoinverse code for full rank matrix flows is similar. Recent work on pseudo-inverses has
appeared in [13] and [31, chapters 8,9].
(IV) Least Squares, Pseudoinverses and ZNN :
Linear systems of time-varying equations A(t)x(t) = b(t) can be unsolvable or solvable with unique or multiple
solutions and pseudoinverses can help us.
If the matrix flow A(t)m,n admits a left pseudoinverse A(t)+_{n,m} then
A(t)+A(t)x(t) = A(t)+b(t) and x(t) = (A(t)+A(t))^{-1}A(t)+b(t) or x(t) = A(t)+b(t) .
Thus A(t)+ b(t) solves the linear system at each time t and x(t) = A(t)+ b(t) is the solution with minimal euclidean
norm ||x(t)||2 since all other time-varying solutions have the form
x(t) = A(t)+b(t) + (In − A(t)+A(t)) z(t) for arbitrary z(t) ∈ Cn .
Here ||A(t)x(t) − b(t)||2 = 0 precisely when b(t) lies in the span of the columns of A(t) and then the linear system is
solvable. Otherwise min_x ||A(t)x(t) − b(t)||2 > 0.
Right pseudoinverses A(t)+ play the same role for linear systems y(t)A(t) = c(t). In fact
y(t)A(t)A(t)+ = c(t)A(t)+ and y(t) = c(t)A(t)+ since A(t)A(t)+ = Im .
And c(t)A(t)+ solves the left-sided linear system y(t)A(t) = c(t) with minimal euclidean norm.
In this subsection we will only work on time-varying linear equations of the form A(t)m,n x(t) = b(t) ∈ Cm for
m > n with rank(A(t)) = n for all t. Then A(t) has the left pseudoinverse A(t)+ = (A(t)∗A(t))^{-1}A(t)∗. The
associated error function is ① e(t) = A(t)m,n x(t) − b(t). Stipulated exponential error decay gives us the error DE
② ė = Ȧx + Aẋ − ḃ = −η Ax + η b = −η e
where we have again left off the time parameter t for clarity and simplicity. Solving ② for ẋ gives us
③ ẋ = A+ (−Ȧx + ḃ + η b) − η x = (A∗A)^{-1}A∗ (−Ȧx + ḃ + η b) − η x .
In the discretized version of our Matlab codes the subscripts ...k indicate evaluation at time tk, so that bk for
example stands for b(tk) and so forth for Ak, xk, ...; the chosen look-ahead finite difference formula then turns ③
into the predictive recursion for xk+1 exactly as in part (I). The Matlab code for the ZNN look-ahead
method for time-varying linear equations, least squares problems with full rank matrix flows A(t)m,n with m > n
is tvPseudInvLinEquat.m in [26]. We advise readers to develop a similar code for full rank matrix flows
A(t)m,n with m < n independently.
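For the m > n case, one discretized ZNN step based on ② and ③ above can be sketched as the following Matlab function; this is an illustration only and not tvPseudInvLinEquat.m from [26].

% One ZNN step for the full rank least squares problem A(t)_{m,n} x(t) = b(t), m > n.
function xk1 = znn_lsq_step(Ak, Adk, bk, bdk, tau, eta, xk, xkm1, xkm2, xkm3)
  xd  = (Ak'*Ak) \ (Ak'*(-Adk*xk + bdk + eta*bk)) - eta*xk;     % formula ③ via A^+
  xk1 = 9/4*tau*xd - 1/8*xk + 3/4*xkm1 + 5/8*xkm2 - 1/4*xkm3;   % 5-IFD update
end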
The survey article [7] describes ZNN methods for nine time-varying matrix optimization problems such as least
squares and constrained optimizations in subsection (V) below. More can be learned about current and future
ZNN methods for time-varying matrix problems through google searches.
(V) Time-varying Constrained Nonlinear Optimization via Lagrange Multipliers and ZNN :
Here y(t) ∈ Cn+m concatenates the n primal unknowns with m Lagrange multipliers, h(y, t) denotes the gradient of the
associated time-varying Lagrange function whose zero we want to track, J(h(yk)) is its Jacobian with respect to y and
ḣt(yk) its partial derivative with respect to t.
Using the 5-IFD look-ahead finite difference formula again for ẏk with discretized data yk = y(tk), we obtain the
following derivative free equation for the iterates yj with j ≤ k by equating the two expressions for ẏk as follows:
⑤ 18τ · ẏk = 8yk+1 + yk − 6yk−1 − 5yk−2 + 2yk−3 = −18τ · J(h(yk))^{-1} (η h(yk) + ḣt(yk)) .
Solving ⑤ for yk+1 supplies the complete ZNN recursion formula that finishes Step 6 of the predictive discretized
ZNN algorithm development for time-varying constrained non-linear optimizations :
⑥ yk+1 = −9/4 τ · J(h(yk))^{-1} (η h(yk) + ḣt(yk)) − 1/8 yk + 3/4 yk−1 + 5/8 yk−2 − 1/4 yk−3 ∈ Cn+m .
The Lagrange based optimizing algorithm for multivariate functions and constraints is coded for one specific exam-
ple with m = 1 and n = 2 in tvLagrangeOptim2.m, see [26]. For this specific example the optimal solution
is known. The code can be modified for optimization problems with more than n = 2 variables and for more than
m = 1 constraint functions. It is modular and accepts all look-ahead convergent finite difference formulas that
are listed in Polyksrestcoeff3.m in our k_s format. It is important to try and understand the interaction
between feasible η and τ values for ZNN methods here in order to be able to use ZNN well for other time-varying
problems.
An introduction to constrained optimization methods is available at https://www.researchgate.net/
publication/255602767_Methods_for_Constrained_Optimization ; see also [7]. Several op-
timization problems are studied in [7] such as Lagrange optimization for unconstrained time-varying convex non-
linear optimizations called U-TVCNO there and time-varying linear inequality systems called TVLIS. The latter
will also be treated in subpart (VI) just below.
Remark 2 : An important concept in this realm is the product τ · η for any one problem and any fixed ZNN
method k_s. This product, regularly denoted as h = τ · η, seems to be nearly constant for the optimal choice of
parameters over a wide range of sampling gaps τ if k_s stays fixed. Yet the optimal value of the near ’constant’
h varies widely from one look-ahead convergent finite difference formula to another. The reason for this is quite
unknown and worthy of further studies.
Thus far in this practical section, we have worked through five models and a variety of time-varying matrix prob-
lems. We have developed seven respective Matlab codes. These numerical codes implement ZNN look-ahead
convergent difference formula based processes for time-varying matrix and vector problems in seven steps as out-
lined in Section 1. The resulting ZNN computations all require a linear equation solve or a matrix times vector
product and a simple convergent recursion. Some of the codes are very involved such as for example (V) which
relies on Matlab’s symbolic toolbox and its differentiation functionality. Others were straightforward. All of our
seven algorithms are look-ahead and rely only on earlier data to predict future solutions. They do so with high
accuracy and run in fractions of a second over time intervals that are 10 to 100 times longer than their run time.
This makes ZNN methods highly useful for real-time implementations.
We continue with further examples and restrict our explanations of ZNN methods to the essentials from now on.
We also refrain from coding further ZNN based programs; instead we give extended references with the hope that
our readers can implement their own ZNN Matlab codes for their own time-varying matrix/vector problems by
using our detailed examples (I) ... (V) above as guides.
(VI) Time-varying Linear Equations with linear Equation and Inequality Constraints :
We consider two types of linear equation and linear inequality constraints here :
(A)  Am,n(t)xn(t) ≤ bm(t)        and        (AC)  Am,n(t)xn(t) = bm(t) ,  Ck,n(t)xn(t) ≤ dk(t) .
We assume that the matrices and vectors all have real entries and that the given inequality problem has a unique
solution for all to ≤ t ≤ tf ⊂ R. Otherwise with subintervals of [to, tf ] in which a given problem is unsolvable
or has infinitely many solutions, the problem itself would become subject to potential bifurcations and thus well
beyond the scope of this survey paper.
In both situations (A) or (AC) it is customary to introduce a nonnegative 'slack variable' u.²(t), see [15] e.g., in order
to replace the linear system with inequalities by a system of linear equations. The slack variable u typically has
nonnegative entries in the form of squares, i.e., u.²(t) = [u1²(t), ..., uℓ²(t)]^T ∈ Rℓ with ℓ = m or k, depending on
the type of time-varying inequality system. By using the special form of u and setting w(t) = [xn(t); (ui(t))k]
in Matlab notation, the models (A) and (AC) now become
(Au)  Am,n(t)xn(t) + u.²(t)m = bm(t)    and    (ACu)  [ Am,n , Om,k ; Ck,n , diag(ui) ]m+k,n+k · [ xn ; (ui)k ]n+k = [ bm ; dk ] ∈ Rm+k .
The error equation for (Au) is ① E(t) = A(t)x(t) + u.²(t) − b(t). When using the product rule on each component
ui² of u.² the error function DE becomes
② Ė = Ȧx + Aẋ + 2 u .∗ u̇ − ḃ = −η (Ax + u.² − b) = −η E ∈ Rm .
Here the .∗ product uses the Matlab notation for entry-wise vector multiplication and diag(a) denotes the n by n
diagonal matrix with the entries ai of a ∈ Rn on its diagonal. If the unknown entries of x ∈ Rn and u ∈ Rm are
gathered in one column vector [x(t); (ui(t))] ∈ Rn+m we obtain the alternate error function DE for (Au) in block
matrix form as
②a  Ė = Ȧx + [ A , 2 diag(ui) ]m,n+m · [ ẋ ; (u̇i) ]n+m − ḃ = −η (Ax + u.² − b) = −η E ∈ Rm .
Similarly for (ACu), the error function is
①u  Eu(t) = [ A , O ; C , diag(ui) ]m+k,n+k · [ x ; (ui) ]n+k − [ b ; d ] ∈ Rm+k
and its error function DE is
②u  Ėu(t) = [ Ȧ , O ; Ċ , O ] · [ x ; (ui) ] + [ A , O ; C , 2 diag(ui) ] · [ ẋ ; (u̇i) ] − [ ḃ ; ḋ ]
           = −η ( [ A , O ; C , diag(ui) ] · [ x ; (ui) ] − [ b ; d ] ) = −η Eu(t) .
Solving the error function DEs ②a and ②u for the derivative vector [ẋ(t); (u̇i(t))] respectively via the built-in
pseudoinverse function pinv.m of Matlab, see subsection (III) above, we obtain the following expressions for the
derivatives of the unknowns x(t) and ui(t). For model (Au)
③a  [ ẋ ; (u̇i) ] = pinv( [ Am,n , 2 diag(ui) ]m,n+m ) · ( ḃ − Ȧx − η (Ax + u.² − b) ) ∈ Rn+m .
The Matlab function pinv.m in ③a uses the Singular Value Decomposition (SVD). The derivative of the vector
[x(t); (ui(t))] can alternately be expressed in terms of Matlab's least squares function lsqminnorm.m in
③als  [ ẋ ; (u̇i) ] = lsqminnorm( [ Am,n , 2 diag(ui) ]m,n+m , ḃ − Ȧx − η (Ax + u.² − b) ) ∈ Rn+m ,
or, for model (ACu),
③uls  [ ẋ ; (u̇i) ] = lsqminnorm( [ A , O ; C , 2 diag(ui) ] , [ ḃ ; ḋ ] − [ Ȧ , O ; Ċ , O ] · [ x ; (ui) ] − η Eu(t) ) .
Next choose a look-ahead finite difference formula of type j_s for the discretized problem and equate its derivative
[ẋk ; (u̇i(tk))] with the above value in ③a, ③als or ③u, ③uls in order to eliminate the derivatives from now on.
Then solve the resulting derivative free equation for the 1-step ahead unknown [xk+1 ; (ui(tk+1))] at time tk+1.
The Matlab coding of a ZNN based algorithm for time-varying linear systems with equation or inequality con-
straints can now begin after j + s initial values have been set.
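For model (Au), one such ZNN step based on ③a and the 5-IFD update can be sketched as follows; this is an illustration with assumed names, not one of the codes from [26].

% One ZNN step for (Au): w = [x; u] in R^{n+m}, slack u squared entrywise.
function wk1 = znn_ineq_step(Ak, Adk, bk, bdk, tau, eta, wk, wkm1, wkm2, wkm3)
  [m, n] = size(Ak);
  x = wk(1:n);  u = wk(n+1:end);
  wd  = pinv([Ak, 2*diag(u)]) * (bdk - Adk*x - eta*(Ak*x + u.^2 - bk));   % formula ③a
  wk1 = 9/4*tau*wd - 1/8*wk + 3/4*wkm1 + 5/8*wkm2 - 1/4*wkm3;             % 5-IFD update
end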
Recent work on ZNN methods for time-varying matrix inequalities is available in [7, TVLIS], [32] and [28].
(VII) Square Roots of Time-varying Matrices :
Square roots Xn,n ∈ Cn,n exist for all nonsingular static matrices A ∈ Cn,n , generalizing the fact that all complex
numbers have square roots over C. Like square roots of numbers, matrix square roots may be real or complex. For
singular matrices A the existence of square roots depends on A’s Jordan block structure and its nilpotent Jordan
blocks J(0) and some matching dimension conditions thereof, see e.g. [2, p. 466, Thm. 4.5, Cor. 11.3] for details.
Here we assume that our time-varying flow matrices A(t) are nonsingular for all to ≤ t ≤ tf ⊂ R. Our model
is A(t) = X(t) · X(t) for the unknown time-varying square root X(t) of A(t). Then the error equation becomes
① E(t) = A(t) − X(t) · X(t) and the error DE under exponential decay stipulation is
② Ė = Ȧ − ẊX − XẊ = −η A + η XX = −η E
where we have omitted the time variable t for simplicity. Rearranging ② with all unknown Ẋ terms on the
left-hand side gives us
③ ẊX + XẊ = Ȧ + η (A − XX) .
Equation ③ is the model (10.4) in [33, ch. 10] except for a minus sign. In ③ we have a similar situation as was
encountered in Section 1 with equation (5): a matrix derivative such as Ẋ above appears as both a left and a right
matrix product factor in ③. In Section 1 we gave up and solved the time-varying matrix eigenvalue problem one
eigenvector eigenvalue pair at a time. If we use the Kronecker product for matrices and column vectorized matrix
representations in ③ – and in equation (5) of Section 1, too, for the complete time-varying matrix eigendata
problem – this problem can be solved. And we can continue with ZNN by relying on static matrix theory such as
Kronecker products. Their properties help us greatly to construct time-varying ZNN algorithms here and elsewhere.
For two real or complex matrices Am,n and Br,s the Kronecker product is defined as the block matrix
A ⊗ B = [ a1,1B , a1,2B , ... , a1,nB ; a2,1B , ... , a2,nB ; ... ; am,1B , ... , am,nB ] ∈ Cm·r,n·s .
The command kron(A,B) in Matlab creates A ⊗ B for any pair of matrices. Compatibly sized Kronecker
products are added entry by entry just as matrices are. For matrix equations a very useful property of the Kronecker
product is the rule (B T ⊗ A)X(:) = C(:) where C = AXB. When this rule is combined with the column vector
matrix X(:) notation and function of Matlab for X and we rewrite the left hand side of equation ③ ẊX + XẊ =
Ȧ + η A − η XX, it becomes
(X^T(t) ⊗ In + In ⊗ X(t))_{n²,n²} · Ẋ(t)(:)_{n²,1} ∈ C^{n²} .
And miraculously the difficulty of Ẋ appearing on both sides as a factor in the left hand matrix products of ③ is
gone. We cannot tell whether the sum of Kronecker products in front of Ẋ is nonsingular, but if we assume it is,
then
Ẋ(t)(:) = (X^T(t) ⊗ In + In ⊗ X(t))^{-1} · ( Ȧ(t)(:) + η A(t)(:) − η (X^T(t) ⊗ In) · X(t)(:) ) .
Otherwise, if X^T(t) ⊗ In + In ⊗ X(t) is singular, simply replace the matrix inverse by the pseudoinverse
pinv(X^T(t) ⊗ In + In ⊗ X(t)) above and likewise in the next few lines. With
P(t) = (X^T(t) ⊗ In + In ⊗ X(t)) ∈ C^{n²,n²}
when assuming non-singularity, and
q(t) = Ȧ(t)(:) + η A(t)(:) − η (X^T(t) ⊗ In) · X(t)(:) ∈ C^{n²}
we have Ẋ(t)(:)_{n²,1} = P(t)\q(t). This formulation is reminiscent of formula (10i) in Section 1, except that
here the entities followed by (:) represent column vector matrices instead of square matrices and these now have
squared dimensions n² instead of n in (10i). This might lead to execution time problems for real-time applications
if the size of the original system is in the hundreds or beyond, while n = 10 or 20 should pose no problems at all.
For ways to mitigate such size problems, see [10] e.g..
To obtain the derivatives Ẋ(tk) for each discrete time tk = to + (k − 1)τ for use in Step 5 of ZNN, we need to
solve the n² by n² linear system P(tk)\q(tk) and obtain the column vectorized matrix Ẋ(tk)(:)_{n²,1}. Then we reshape
Ẋ(tk)(:)_{n²,1} into square matrix form via Matlab's reshape.m function. Equation ⑤ then equates the matrix
version Ẋ(tk)n,n with the formula for Ẋ(tk)n,n from our chosen finite difference expression and this helps us to
predict X(tk+1)n,n. This has been done many times before when finalizing a ZNN formula. Each time step thus
requires one n² by n² linear equations solve and a short recursion with the earlier square root iterates X(tj) for j ≤ k.
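The Kronecker based step just described can be sketched in Matlab as follows; this is an illustration with assumed names and not the code analyzed in [33].

% One ZNN step for the time-varying matrix square root X(t) of A(t) via the Kronecker form of ③.
function Xk1 = znn_sqrt_step(Ak, Adk, tau, eta, Xk, Xkm1, Xkm2, Xkm3)
  n = size(Ak,1);  In = eye(n);
  P = kron(Xk.', In) + kron(In, Xk);                   % multiplies Xdot(:) in ③
  q = Adk(:) + eta*Ak(:) - eta*kron(Xk.', In)*Xk(:);   % = (Adot + eta*(A - X*X))(:)
  Xdk = reshape(P\q, n, n);                            % back to square matrix form
  Xk1 = 9/4*tau*Xdk - 1/8*Xk + 3/4*Xkm1 + 5/8*Xkm2 - 1/4*Xkm3;   % 5-IFD update
end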
For further analyses, a convergence proof and numerical tests of this ZNN based time-varying matrix square root
algorithm, see [33]. Computing time-varying matrix square roots is the subject of [31, Ch. 8, 10].
(VIII) Applications of 1-parameter Matrix Flows to solve Static Matrix Problems :
Static matrix theory concepts have so far helped us often with time-varying matrix problems. Concepts and results
from the time-varying matrix realm can likewise help with classic, previously unsolvable fixed matrix theory
problems and applications.
Numerically the Francis QR eigenvalue algorithm ’diagonalizes’ every square matrix A over C in a backward
stable manner. It does so for diagonalizable matrices as well as for derogatory matrices, regardless of their Jordan
structure or of repeated eigenvalues. QR works out a ’diagonalizing’ eigenvector matrix similarity for any A.
For unitarily invariant matrix problems such as least squares, the SVD or the field of values problem, classic
static matrix theory does unfortunately not offer any way to find unitary block reductions of fixed entry matrices
A ∈ Cn,n . If such decompositions could be found computationally, unitarily invariant matrix problems could be
decomposed into subproblems for decomposable matrices A.
Here is an elementary time-varying method that can be adapted to unitary block decompositions of fixed time-
invariant matrices A if such is possible. [24] deals with general 1-parameter matrix flows A(t) ∈ Cn,n . If X
diagonalizes one flow matrix A(t1) via similarity X^{-1}...X and X^{-1}A(t2)X is block diagonal for t2 ≠ t1, then
all A(t) are simultaneously block diagonalized by X and the flow A(t) decomposes uniformly. [25] then applies
this matrix flow result to the previously intractable field of value problems for unitarily decomposable matrices A
when using path following methods.
For any fixed entry matrix A ∈ Cn,n the hermitean and skew parts
H = (A + A∗ )/2 = H ∗ and K = (A − A∗ )/(2i) = K ∗ ∈ Cn,n
of A generate a 1-parameter hermitean matrix flow HKA (t) = cos(t)H + sin(t)K = (HKA (t))∗ ∈ Cn,n for all
angles 0 ≤ t ≤ 2π. If a unitary matrix Xn,n diagonalizes HKA(t1) and HKA(t2) is block diagonalized by the same
unitary matrix X for some t2 ≠ t1, then every matrix HKA(t) of the flow HKA is uniformly block diagonalized
by X and subsequently so is A = H + iK itself.
The matrix field of values problem [6] is invariant under unitary similarities. It can be solved by finding the ex-
treme real eigenvalues for each hermitean HK(t) with 0 ≤ t ≤ 2π and then evaluating certain eigenvector A-inner
products to construct the field of values boundary points in the complex plane. One way to find that curve is to
compute full eigenanalyses for each matrix HK(t) via Francis QR. Speedier ways are to use path following meth-
ods such as initial value ODE solvers or ZNN methods. But these methods cannot ensure that they find the extreme
eigenvalues of HKA(t) when eigencurves of A(t) cross. Eigencurve crossings can only occur for decomposable
matrices A, see [11]. Finding eigencurve crossings for decomposing matrices A takes up a large part of [8] and
still fails to adapt IVP ODE path following methods to all types of decomposing A. The elementary method of [24]
solves the field of values problem for decomposable matrices A for the first time without having to compute all
eigendata and sort eigenvalues which is done automatically inside Francis QR. Our method is up to 4 times faster
than the Francis QR based field of values method or the IVP ODE solver method and it succeeds for all manners
of decompositions of A, see [25].
(IX) Time-varying Sylvester and Lyapunov Matrix Equations :
The static Sylvester equation
AX + XB = C
with A ∈ Cn,n, B ∈ Cm,m, C ∈ Cn,m is solvable for X ∈ Cn,m if A and B have no common eigenvalues. With
the error function ① E(t) = A(t)X(t) + X(t)B(t) − C(t) for time-varying matrices we start with the exponential
decay stipulation Ė(t) = −η E(t) for a positive decay constant η. Then we obtain the equations
② Ė = ȦX + AẊ + ẊB + XḂ − Ċ = −η (AX + XB − C)
and
③ AẊ + ẊB = −η (AX + XB − C) − ȦX − XḂ + Ċ ,
where we have gathered all terms that involve Ẋ in ② on the left hand side of ③ and have dropped all references
to time t to simplify the view. Using the properties of Kronecker products as introduced in subsection 2 (VII)
above we rewrite the left hand side of ③ as
(Im ⊗ A(t) + B^T(t) ⊗ In)_{nm,nm} · Ẋ(t)(:)_{nm,1}
and the right hand side, expressed at first also in Kronecker and column vector matrix notation, as
q(t) = −(Im ⊗ Ȧ(t) + Ḃ^T(t) ⊗ In) · X(t)(:) + Ċ(t)(:) − η ((Im ⊗ A(t) + B^T(t) ⊗ In) · X(t)(:) − C(t)(:)) .
Kronecker product notation is only necessary for expressing the two sided appearances of Ẋ(t) on the left hand
side of ③. The right hand side of ③ can be expressed more simply with matrix products instead of Kronecker
matrix products, namely
q(t) = −((Ȧ · X)(:) + (X · Ḃ)(:))nm,1 + Ċ(:)nm,1 − η ((A · X)(:) + (X · B)(:) − C(:))nm,1 ∈ Cn·m .
The expressions such as (A · X)(:) above denote the column vector representation of matrix products such as
A(t) · X(t). Thus we obtain the linear system M(t)Ẋ(t)(:) = q(t) for Ẋ(t)(:) in ③ with M(t) = (Im ⊗ A(t) +
B^T(t) ⊗ In) ∈ Cn·m,n·m when using either form of q(t). And Ẋ(t)(:) ∈ Cn·m can be expressed in various forms,
depending on the solution method and the state of (non)-singularity of M(t), as Ẋ(t)(:) = M(t)\q(t) or
Ẋ(t)(:) = inv(M(t)) · q(t) or Ẋ(t)(:) = pinv(M(t)) · q(t) or Ẋ(t)(:) = lsqminnorm(M(t), q(t)) ,
with the latter two formulations in case M (t) is singular. Which form of q(t) gives faster or more accurate results
for Ẋ(t)(:) can be tested in Matlab by opening the >> profile viewer and running the discretized ZNN
method for all versions. There are four possible combinations for each of the two M (t) singularity cases here that
users can try and thereby learn how to optimize their codes. Once Ẋ(t)(:) has been found in column vector form
it must be reshaped into an n by m matrix Ẋ(t). Next we have to equate our computed derivative matrix Ẋk in
the discretized version at time tk with a specific look-ahead finite difference formula value for Ẋk in step 5.
The resulting derivative free equation finally is solved for the future solution Xk+1 of the time-varying Sylvester
equation in step 6 of our standard procedure list. Iteration then concludes the ZNN algorithm for Sylvester
problems.
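To illustrate this, here is a minimal Matlab sketch of a single discretized ZNN time step for the time-varying Sylvester equation that follows equation (3) and the matrix product form of q(t). The sampled data Ak, Bk, Ck, their derivative estimates Adk, Bdk, Cdk, the current iterate Xk, the sampling gap tau and the decay constant eta are assumed inputs, and the simple Euler type prediction in the last line only stands in for the high order look-ahead convergent difference formulas of [18, 19].

[n,m] = size(Xk);
M  = kron(eye(m),Ak) + kron(Bk.',eye(n));                  % M = I_m kron A + B^T kron I_n
q  = -(Adk*Xk + Xk*Bdk) + Cdk - eta*(Ak*Xk + Xk*Bk - Ck);  % matrix product form of q(t)
Xd = reshape(M\q(:),n,m);                                  % solve M*Xdot(:) = q(:) and reshape
Xkp1 = Xk + tau*Xd;                                        % predict X at time t_(k+1)

In the actual ZNN codes the last line is replaced by a look-ahead convergent recursion in the current and earlier iterates Xk, Xk−1, ... and τ Ẋk, see [19] and the example codes in [26].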
The time-varying Lyapunov equation has the form of an error equation, namely
(1)    E(t) = A(t)X(t)A∗(t) − X(t) + Q(t) ,
where all matrices are complex and square of size n by n. Here we take a shortcut and convert the matrix error
equation (1) immediately to its column vector and Kronecker product equation form
(1 cvK)    E(:) = (Ā ⊗ A)X(:) − X(:) + Q(:) ∈ Cn² ,
where we have used the formula (AXA∗)(:) = (Ā ⊗ A)X(:) and dropped all mention of dependencies on t
for simplicity. Working towards the exponentially decaying differential error equation for E(:), we note that
derivatives of time-varying Kronecker products U(t) ⊗ V(t) follow the product rule
d/dt (U(t) ⊗ V(t)) = U̇(t) ⊗ V(t) + U(t) ⊗ V̇(t) .
Thus the decay stipulation Ė(:) = −η E(:) for (1 cvK) becomes
(2)    ((Ā)˙ ⊗ A + Ā ⊗ Ȧ)X(:) + (Ā ⊗ A)Ẋ(:) − Ẋ(:) + Q̇(:) = −η ((Ā ⊗ A)X(:) − X(:) + Q(:)) ,
where Ā means the conjugate of A and (Ā)˙ its time derivative, i.e., the conjugate of Ȧ. Upon reordering the terms
in (2) we have the following linear system for the unknown column vector matrix Ẋ(:)
(3)    (In² − Ā ⊗ A) Ẋ(:) = ((Ā)˙ ⊗ A + Ā ⊗ Ȧ)X(:) − η (In² − Ā ⊗ A)X(:) + η Q(:) + Q̇(:) .
With M(t) = (In² − Ā ⊗ A) ∈ Cn²,n² and
q(t) = ((Ā(t))˙ ⊗ A(t) + Ā(t) ⊗ Ȧ(t))X(t)(:) − η (In² − Ā(t) ⊗ A(t))X(t)(:) + η Q(t)(:) + Q̇(t)(:) ∈ Cn²
we have to solve the system M(t)Ẋ(t)(:) = q(t) for Ẋ(t)(:) ∈ Cn² as explained earlier for the Sylvester equation.
In Step 4 we equate the matrix-reshaped expressions for Ẋ in (3) and in the chosen look-ahead convergent finite
difference scheme, then solve this derivative free equation for Xk+1 in Steps 5 and 6 for discrete data. Thereby we
obtain the ZNN iteration and we can write the Matlab code.
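A corresponding minimal Matlab sketch of one discretized ZNN step for the Lyapunov error equation, with assumed sampled inputs Ak, Qk, derivative estimates Adk, Qdk, current iterate Xk, sampling gap tau and decay constant eta, and with the same Euler type stand-in for the look-ahead formulas of [18, 19], might read:

n  = size(Ak,1);
M  = eye(n^2) - kron(conj(Ak),Ak);                         % M = I - conj(A) kron A
q  = (kron(conj(Adk),Ak) + kron(conj(Ak),Adk))*Xk(:) ...
     - eta*M*Xk(:) + eta*Qk(:) + Qdk(:);                   % right hand side q(t) of (3)
Xd = reshape(M\q,n,n);                                     % Xdot at time t_k, reshaped n by n
Xkp1 = Xk + tau*Xd;                                        % predict X at time t_(k+1)
% if M is singular, use pinv(M)*q or lsqminnorm(M,q) in place of M\q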
Introducing Kronecker products and column vector matrix notations is a significant shortcut for matrix equations
whose unknown variable X(t) occurs inside time-varying matrix products. This is a new technique. It will speed up
the development of ZNN algorithms for matrix problems in which the error function DE contains more than one
term with Ẋ(t).
The ZNN method for classical time-varying Sylvester equations has recently been studied in [27]. For a new right
and left 2-factor version of Sylvester see [34]. For recent work on ZNN and Lyapunov equations see, e.g., [16].
(X) Time-varying Matrices, ZNN Methods and Computer Science :
The recent development of new and predictive ZNN algorithms for time-varying matrix applications has impli-
cations for our understanding of computer science and of tiered logical equivalences on several levels of our
mathematical realms.
The most stringent realm of math is 'pure mathematics' where theorems are proved and where, for example, a
matrix either has determinant equal to 0 or it does not.
In the next, the mathematical computations realm with its floating point arithmetic, zero is generally computed
inaccurately as not 0, and any computed value with magnitude below a threshold such as a small multiple or a fraction
of the machine constant eps may rightfully be treated as 0. In the computational realm the aim is to approximate
quantities to high precision, including zero, without ever worrying whether zero is exactly 0.
A third, the least stringent realm of mathematics belongs to the engineering world. There one needs to find solutions
that are good enough to approach the "true theoretical solution" of a problem, as known from the 'pure' realm,
asymptotically; possibly only 3 to 5 or 6 correct leading digits are needed for a successful algorithm.
The concept of differing math-logical equivalences in these three tiers of mathematics is exemplified and interpreted
in the recent paper [35] by Yunong Zhang and his research group, which is well worth reading and contemplating.
4 Conclusions
This paper has introduced a recent new branch of numerical analysis for discretized time-varying, mostly matrix
dynamical systems to the west. Time-varying discretized ZNN algorithms are built from standard concepts and well
known relations and facts of algebra, matrix theory and of static numerical matrix analysis. Yet they differ most
deeply from their close cousins, the differential equation initial value solvers. ZNN does not follow the modern
call for backward stable computing methods that find the exact solution of a nearby problem whose distance from
the given problem depends on the given problem’s conditioning. Instead ZNN computes highly accurate future
solutions based on an exponentially decaying error function. In its coded version, ZNN algorithms use just one
linear equations solve and a short recursion for earlier system data, besides some necessary auxiliary functions.
These codes run extremely fast using previous data immediately after time tk and they are done well before time
tk+1 has arrived with an accurate prediction of the unknown data for the future instance tk+1 .
Here we have outlined the inner working of ZNN methods in seven steps. These steps do not compare in any way
with IVP ODE solvers and are hard to fathom at first. Therefore we have outlined many time-varying problems
of matrix theory and for optimizations in explicit detail for ZNN, including working Matlab codes for half of
them. This was done here with ever increasing levels of difficulty and complexity, from simple time-varying linear
equations solving routines to matrix inversion and time-varying pseudoinverses; from Lagrange multipliers for
function optimization to more complicated matrix problems such as the time-varying matrix eigenvalue problem,
time-varying linear equations with inequality constraints, matrix square roots, and time-varying Sylvester and
Lyapunov equations, some of which require Kronecker product matrix representations so that they can be handled
successfully in ZNN’s standard seven steps way.
On the way we have encountered models and error functions that do not yield usable derivative information for the
unknown and we have learnt how to re-define models and error functions accordingly for success. We have pointed
out alternate ways to solve the linear equations part of ZNN differently in Matlab and shown how to optimize the
speed of ZNN Matlab codes. What we have not dealt with is the final engineering application of our discretized
time-varying sensor driven ZNN algorithms as on-chip designs for use in robotics and other machinery. That was
the task of [31] and it is amply explained in Simulink schematics there. Circuit diagrams are also often drawn out
in the quoted Chinese literature.
There are many glaring open questions with ZNN such as, for example, how to choose among dozens and dozens of
otherwise equivalent look-ahead and convergent finite difference formulas of the same type for accuracy or speed
and also for the ability of specific finite difference formulas to handle widely varying sampling gap sizes τ well.
Neither do we know how to distinguish between difference formulas with high optimal h = τ · η values and those
with low optimal h values, nor why there are such variations in h for formulas of equal truncation error order. This is an open
challenge for difference equation experts.
Many other open questions with ZNN are mentioned in my ZNN papers. They need no repeating here.
Thanks
My involvement with Zhang Neural Networks began in 2016, 'right out of the blue sky', thanks to the Zentralblatt
when I was sent the book [31] by Yunong Zhang and Dongsheng Guo for review, see Zbl 1339.65002.
I was impressed and made contact with Yunong, then visited him and his research group at Sun Yat-Sen University
in Guangzhou in the summer of 2017, on a trip to China with several other previously planned mathematical
venues. Thanks to this visit and to our exchanges of ideas and plans there, I became active in this totally new-to-me
area of time-varying matrix numerics. As I began to write and submit papers on ZNN to standard western matrix,
numerical and applied journals, the referees would generally send back reports that listed their understandings of
static matrix numerics for similar static matrix problems but showed no comprehension of the time-varying realm
and gave me no usable feedback at all. In one week alone two of my papers were rejected within 48 hours of
submission by two editors-in-chief of leading western journals and a third paper suffered the same fate by another
very respectable editor-in-chief more recently. I had to understand that there were no reviewers in western numerical
linear algebra circles who knew of, or were able or willing to see, or even to read and contemplate, a new area of
numerics that was developing on their own turf, but outside the west and our knowledge base, almost exclusively
in China.
I am very grateful to the Journal of Difference Equations and Applications, which accepted my look-ahead and
convergent finite difference formula paper [18] quickly and without revision despite there being no usable referee
reports. That acceptance and the results of my work eventually broke the ice in journals of the west that now accept
our ZNN work, even without sense-making referee reports. I thank those courageous editors who read my papers
and accepted them without us having any in-house knowledge base for ZNN in the west. It was timely and
appropriate. Thank you.
Last summer (2019) I took a quick side trip to visit Nick Trefethen and Yuji Nakatsukasa in Oxford to confer with
them about the ideas behind Zhang Neural Networks. I wanted to learn their ideas and insights as we discussed
and dissected the ZNN algorithm that none of us could then clearly understand, and over which we pulled our hair
out for one intense morning. Thank you both for your lovely gift of mind and time!
This eventually led me to ZNN's seven steps, with its first introduction of a DE, the exponential error decay that
makes convergence doubts and proofs obsolete (though referees continue to ask for them needlessly), and then the
getaway from DEs into linear equation solving and look-ahead convergent recursions in a derivative free setting.
All of this is the basis of my survey paper. I am glad and thankful for the help and patience of my family, of the
editors and their referees (despite all their and my missteps), and of Nick and Yuji.
I do hope that time-varying matrix methods and ZNN can become a new part of our global numerical matrix analysis
community soon enough.
We need to build a new knowledge base again, as we have done so many times before, see [17].
References
[1] Gisela Engeln-Müllges and Frank Uhlig, Numerical Algorithms with C, with CD-ROM, Springer, 1996, 596 p. (MR 97i:65001) (Zbl 857.65003)
[2] Jean-Claude Evard and Frank Uhlig, On the matrix equation f(X) = A, Lin. Alg. Appl., 162 (1992), p. 447 - 519.
[3] Neil H. Getz and Jerrold E. Marsden, Dynamical methods for polar decomposition and inversion of matrices, Lin. Alg. Appl., 258 (1997), p. 311 - 343.
[4] Gene H. Golub and Victor Pereyra, The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate, SIAM J. Numer. Anal., 10 (1973), p. 413 - 432.
[5] Dongsheng Guo and Yunong Zhang, Zhang neural network, Getz-Marsden dynamic system, and discrete time algorithms for time-varying matrix inversion with applications to robots' kinematic control, Neurocomputing, 97 (2012), p. 22 - 32.
[6] Charles R. Johnson, Numerical determination of the field of values of a general complex matrix, SIAM J. Numer. Anal., 15 (1978), p. 595 - 602, https://doi.org/10.1137/0715039 .
[7] Jian Li, Yang Shi and Hejun Xuan, Unified model solving nine types of time-varying problems in the frame of zeroing neural network, IEEE Trans. Neur. Netw. Learn. Syst., (2020), in print, 10 p.
[8] Sébastien Loisel and Peter Maxwell, Path-following method to determine the field of values of a matrix at high accuracy, SIAM J. Matrix Anal. Appl., 39 (2018), p. 1726 - 1749, https://doi.org/10.1137/17M1148608 .
[9] Jan R. Magnus and Heinz Neudecker, Matrix differential calculus with applications to simple, Hadamard, and Kronecker products, J. Math. Psych., 29 (1985), p. 474 - 492.
[10] James G. Nagy, http://www.mathcs.emory.edu/~nagy/courses/fall10/515/KroneckerIntro.pdf .
[11] John von Neumann and Eugene Paul Wigner, On the behavior of the eigenvalues of adiabatic processes, Physikalische Zeitschrift, 30 (1929), p. 467 - 470; reprinted in Quantum Chemistry, Classic Scientific Papers, Hinne Hettema (editor), World Scientific (2000), p. 25 - 31.
[12] Binbin Qiu, Yunong Zhang and Zhi Yang, New discrete-time ZNN models for least-squares solution of dynamic linear equation system with time-varying rank-deficient coefficient, IEEE Transactions on Neural Networks and Learning Systems, 29 (2018), p. 5767 - 5776, https://doi.org/10.1109/TNNLS.2018.2805810
[13] Predrag S. Stanimirović, Xue-Zhong Wang and Haifeng Ma, Complex ZNN for computing time-varying weighted pseudo-inverses, Applicable Anal. Discr. Math., 13 (2019), p. 131 - 164.
[14] Josef Stoer and Roland Bulirsch, Introduction to Numerical Analysis, 2nd edition, Springer, 2002, 729 p.
[15] Josef Stoer and Christoph Witzgall, Convexity and Optimization in Finite Dimensions I, Springer, 1970, 298 p.
[16] Min Sun and Jing Liu, A novel noise-tolerant Zhang neural network for time-varying Lyapunov equation, Adv. Diff. Equat., in print (2020), 15 p.
[17] Frank Uhlig, The eight epochs of math as regards past and future matrix computations, Recent Trends in Computational Science and Engineering, S. Celebi (Ed.), InTechOpen (2018), http://dx.doi.org/10.5772/intechopen.73329 , 25 p. [ complete with graphs and references at http://arxiv.org/abs/2008.01900 (2020), 19 p. ]
[18] Frank Uhlig, The construction of high order convergent look-ahead finite difference formulas for Zhang Neural Networks, J. Diff. Equat. Appl., 25 (2019), p. 930 - 941, https://doi.org/10.1080/10236198.2019.1627343 .
[19] Frank Uhlig, List of look-ahead convergent finite difference formulas at http://www.auburn.edu/~uhligfd/m_files/ZNNSurveyExamples under Polyksrestcoeff3.m .
[20] Frank Uhlig and Yunong Zhang, Time-varying matrix eigenanalyses via Zhang Neural Networks and look-ahead finite difference equations, Lin. Alg. Appl., 580 (2019), p. 417 - 435, https://doi.org/10.1016/j.laa.2019.06.028 .
[21] Frank Uhlig, MATLAB codes for time-varying matrix eigenvalue computations via ZNN are available at http://www.auburn.edu/~uhligfd/m_files/T-VMatrixEigenv/
[22] Frank Uhlig, Coalescing eigenvalues and crossing eigencurves of 1-parameter matrix flows, SIAM J. Matrix Anal. Appl., in print (2020), 17 p.
[23] Frank Uhlig, The MATLAB codes for plotting and assessing matrix flow block diagonalizations are available at http://www.auburn.edu/~uhligfd/m_files/MatrixflowDecomp/
[24] Frank Uhlig, On the decomposability of 1-parameter matrix flows, arXiv:2006.01215 (2020), 8 p.
[25] Frank Uhlig, Constructing the field of values of decomposable and general matrices, submitted, 8 p.
[26] Frank Uhlig, MATLAB codes for the examples in Section 2 are available at http://www.auburn.edu/~uhligfd/m_files/ZNNSurveyExamples/
[27] Lin Xiao, Yongsheng Zhang, Jianhua Dai, Jichun Li and Weibing Li, New noise-tolerant ZNN models with predefined-time convergence for time-variant Sylvester equation solving, IEEE Trans. Syst., Man, Cybern., in print (2020), 12 p.
[28] Feng Xu, Zexin Li, Zhuoyun Nie, Hui Shao and Dongsheng Guo, Zeroing Neural Network for solving time-varying linear equation and inequality systems, IEEE Trans. Neur. Netw. Learn. Syst., 30 (2019), p. 2346 - 2357.
[29] Yunong Zhang and Jun Wang, Recurrent neural networks for nonlinear output regulation, Automatica, 37 (2001), p. 1161 - 1173.
[30] Yunong Zhang, Zhan Li and Kene Li, Complex-valued Zhang neural network for online complex-valued time-varying matrix inversion, Appl. Math. Comp., 217 (2011), p. 10066 - 10073.
[31] Yunong Zhang and Dongsheng Guo, Zhang Functions and Various Models, Springer, 2015, 236 p. (Zbl 1339.65002)
[32] Yunong Zhang, Min Yang, Min Yang, Huanchang Huang, Mengling Xiao and Haifeng Hu, New discrete solution model for solving future different-level linear inequality and equality with robot manipulator control, IEEE Trans. Ind. Inform., 15 (2019), p. 1975 - 1984.
[33] Yunong Zhang, Huanchang Huang, Min Yang, Yihong Ling, Jian Li and Binbin Qiu, New zeroing neural dynamics models for diagonalization of symmetric matrix stream, Numer. Alg., (2020), in print, 18 p.
[34] Yunong Zhang, Xiao Liu, Yihong Ling, Min Yang and Huanchang Huang, Continuous and discrete zeroing dynamics models using JMP function array and design formula for solving time-varying Sylvester-transpose matrix inequality, Numer. Alg., (2020), in print, 24 p.
[35] Yunong Zhang, Min Yang, Binbin Qiu, Jian Li and Mingjie Zhu, From mathematical equivalence such as Ma equivalence to generalized Zhang equivalency including gradient equivalency, Theoretical Computer Science, 817 (2020), p. 44 - 54.