Article
Modified Jacobi-Gradient Iterative Method for
Generalized Sylvester Matrix Equation
Nopparut Sasaki and Pattrawut Chansangiam *
Department of Mathematics, Faculty of Science, King Mongkut’s Institute of Technology Ladkrabang,
Bangkok 10520, Thailand; [email protected]
* Correspondence: [email protected]; Tel.: +66-935-266600
Received: 14 October 2020; Accepted: 2 November 2020; Published: 5 November 2020
Abstract: We propose a new iterative method for solving a generalized Sylvester matrix equation
A1 XA2 + A3 XA4 = E with given square matrices A1 , A2 , A3 , A4 and an unknown rectangular matrix
X. The method aims to construct a sequence of approximated solutions that converges to the exact solution regardless of the initial value. We decompose the coefficient matrices into the sum of their diagonal parts and remaining parts. The recursive formula for the iteration is derived from the gradients of quadratic norm-error functions, together with the hierarchical identification principle. We find equivalent conditions on the convergence factor, depending on the eigenvalues of the associated iteration matrix, so that the method is applicable as desired. The convergence rate and error estimation of the
method are governed by the spectral norm of the related iteration matrix. Furthermore, we illustrate
numerical examples of the proposed method to show its capability and efficacy, compared to recent
gradient-based iterative methods.
Keywords: generalized Sylvester matrix equation; iterative method; gradient; Kronecker product;
matrix norm
1. Introduction
In control engineering, certain problems concerning the analysis and design of control systems
can be formulated as the Sylvester matrix equation:
A1 X + XA2 = C (1)
where X ∈ Rm×n is an unknown matrix, and A1 , A2 , C are known matrices of appropriate dimensions.
Here, Rm×n stands for the set of m × n real matrices. Let us denote by (·)ᵀ the transpose of a matrix. When A2 = A1ᵀ, the equation is reduced to the Lyapunov equation, which is often found in continuous-
and discrete-time stability analysis [1,2]. The Sylvester equation is a special case of a generalized
Sylvester matrix equation:

A1 X A2 + A3 X A4 = E. (2)

A desirable property of an iterative method for such equations is the following:

(AS): The sequence of approximated solutions converges to the exact solution, regardless of the initial value.
The first GI algorithm for solving (1) was developed by Ding and Chen [19]. In that paper, a sufficient condition in terms of a convergence factor is determined so that the algorithm satisfies the (AS) property. By introducing a relaxation parameter, Niu et al. [20] suggested a relaxed gradient-based iterative (RGI) algorithm for solving (1). Numerical studies show that, when the relaxation factor is suitably selected, the convergence behavior of Niu's algorithm is better than that of Ding's algorithm. Zhang and Sheng [21] introduced an RGI algorithm for finding the symmetric (skew-symmetric) solution of Equation (1). Xie et al. [22] improved the RGI algorithm to an accelerated gradient-based iterative (AGBI) algorithm, on the basis of the information generated in the previous half-step and a relaxation factor. Ding and Chen [23] also applied the ideas of gradients and least squares to formulate the least-squares iterative (LSI) algorithm. In [24], Fan et al. observed that the matrix multiplications in GI would require substantial time and memory if A1 and A2 were large and dense, so they proposed the following Jacobi-gradient iterative (JGI) method.
Method 1 (Jacobi-Gradient based Iterative (JGI) algorithm [24]). For i = 1, 2, let Di be the diagonal part of Ai. Given any initial matrices X1(0), X2(0), set k = 0 and compute X(0) = (1/2)(X1(0) + X2(0)). For k = 1, 2, . . . , End, do:

X1(k) = X(k − 1) + µ D1 [C − A1 X(k − 1) − X(k − 1) A2],
X2(k) = X(k − 1) + µ [C − A1 X(k − 1) − X(k − 1) A2] D2,
X(k) = (1/2)(X1(k) + X2(k)).
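To fix ideas, the following NumPy sketch implements Method 1; the function name jgi, the maximum iteration count and the stopping tolerance are illustrative choices rather than part of [24].

import numpy as np

def jgi(A1, A2, C, mu, X0=None, max_iter=1000, tol=1e-10):
    # Jacobi-gradient iterative (JGI) sketch for A1 X + X A2 = C (Method 1).
    m, n = C.shape
    D1 = np.diag(np.diag(A1))              # diagonal part of A1
    D2 = np.diag(np.diag(A2))              # diagonal part of A2
    X = np.zeros((m, n)) if X0 is None else X0.copy()
    for _ in range(max_iter):
        R = C - A1 @ X - X @ A2            # residual of the Sylvester equation
        X1 = X + mu * D1 @ R               # half-update weighted by D1 on the left
        X2 = X + mu * R @ D2               # half-update weighted by D2 on the right
        X = 0.5 * (X1 + X2)                # average of the two half-updates
        if np.linalg.norm(R, 'fro') < tol:
            break
    return X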
After that, Tian et al. [25] proposed an accelerated Jacobi-gradient iterative (AJGI) algorithm for solving the Sylvester matrix equation, which relies on two relaxation factors and a half-step update. However, suitable parameter values for the algorithm are difficult to find, since they are characterized by a nonlinear inequality. For the generalized Sylvester Equation (2), the gradient iterative (GI) algorithm [19] and the least-squares iterative (LSI) algorithm [26] were established as follows.
Method 2 (GI algorithm [19]). Given any two initial matrices X1(0), X2(0), set k = 0 and compute X(0) = (1/2)(X1(0) + X2(0)). For k = 1, 2, . . . , End, perform the GI updates with a convergence factor µ satisfying

0 < µ < 2 / (‖A1‖2² ‖A2‖2² + ‖A3‖2² ‖A4‖2²).
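A hedged NumPy sketch of Method 2 is given below. Only the bound on µ survives in the statement above, so the update formulas here follow the standard gradient-based form used in the GI literature and should be read as an assumption, not a verbatim transcription of [19]; the helper names gi_generalized and gi_mu_bound are illustrative.

import numpy as np

def gi_generalized(A1, A2, A3, A4, E, mu, X0=None, max_iter=1000, tol=1e-10):
    # Assumed GI-type updates for A1 X A2 + A3 X A4 = E.
    m, n = A1.shape[0], A2.shape[0]
    X = np.zeros((m, n)) if X0 is None else X0.copy()
    for _ in range(max_iter):
        S = E - A1 @ X @ A2 - A3 @ X @ A4          # residual
        X1 = X + mu * A1.T @ S @ A2.T              # descent step for the A1 (.) A2 part
        X2 = X + mu * A3.T @ S @ A4.T              # descent step for the A3 (.) A4 part
        X = 0.5 * (X1 + X2)
        if np.linalg.norm(S, 'fro') < tol:
            break
    return X

def gi_mu_bound(A1, A2, A3, A4):
    # Upper bound on the convergence factor from Method 2.
    s = lambda M: np.linalg.norm(M, 2)             # spectral norm
    return 2.0 / (s(A1)**2 * s(A2)**2 + s(A3)**2 * s(A4)**2)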
Method 3 (LSI algorithm [26]). Given any two initial matrices X1(0), X2(0), set k = 0 and compute X(0) = (1/2)(X1(0) + X2(0)). For k = 1, 2, . . . , End, do:
In this paper, we shall propose a new iterative method for solving the generalized Sylvester matrix
Equation (2), when A1 , A3 ∈ Rm×m , A2 , A4 ∈ Rn×n and X, E ∈ Rm×n . This algorithm requires only one
initial value X(0) and only one parameter, called a convergence factor. We decompose the coefficient matrices into the sum of their diagonal parts and remaining parts. The recursive formula for the iteration is derived from the gradients of quadratic norm-error functions together with the hierarchical identification principle. Under assumptions on the signs of the real parts of the eigenvalues of an associated matrix, we find necessary and sufficient conditions on the convergence factor for which (AS) holds. The convergence rate and error estimates are governed by the spectral radius of the iteration matrix. In particular, when the iteration matrix is symmetric, we obtain a convergence criterion, error estimates and the optimal convergence factor in terms of spectral norms and a condition number. Moreover, numerical simulations are provided
to illustrate our results for (2) and (1). We compare the efficiency of our algorithm to LSI, GI, RGI,
AGBI and JGI algorithms.
Let us recall some terminology from matrix analysis; see, e.g., [27]. For any square matrix X, denote by σ(X) its spectrum, by ρ(X) its spectral radius, and by tr(X) its trace. Let us denote the largest and the smallest eigenvalues of a matrix by λmax(·) and λmin(·), respectively. Recall that the spectral norm ‖·‖2 and the Frobenius norm ‖·‖F of A ∈ Rm×n are, respectively, defined by

‖A‖2 = √(λmax(AᵀA))  and  ‖A‖F = √(tr(AᵀA)).
In terms of the Kronecker product ⊗ and the column-stacking operator vec(·), Equation (2) is equivalent to the linear system P vec X = vec E, where

P := A2ᵀ ⊗ A1 + A4ᵀ ⊗ A3.
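As a quick numerical illustration (not from the paper) of this vectorization, the identity vec(A1 X A2 + A3 X A4) = P vec X can be checked with column-major (column-stacking) vectorization:

import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A1, A3 = rng.standard_normal((2, m, m))
A2, A4 = rng.standard_normal((2, n, n))
X = rng.standard_normal((m, n))

P = np.kron(A2.T, A1) + np.kron(A4.T, A3)      # P := A2' (x) A1 + A4' (x) A3
vec = lambda M: M.flatten(order='F')           # column-stacking vec operator
lhs = vec(A1 @ X @ A2 + A3 @ X @ A4)
print(np.allclose(lhs, P @ vec(X)))            # True: vec(A1 X A2 + A3 X A4) = P vec X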
For i = 1, 2, 3, 4, write Ai = Di + Fi, where Di is the diagonal part of Ai and Fi collects the remaining entries. Then Equation (2) can be rewritten as

(D1 + F1) X (D2 + F2) + A3 X A4 = E, (4)
A1 X A2 + (D3 + F3) X (D4 + F4) = E. (5)

From (4) and (5), we shall find approximated solutions of the following two subsystems
Consider the quadratic norm-error function L1(X) := ‖D1 X D2 − M‖F². Using the gradient formula

(d/dX) tr(AX) = Aᵀ,

we can deduce the gradient of the error L1 as follows:
∂/∂X L1(X) = ∂/∂X tr[(D1 X D2 − M)ᵀ (D1 X D2 − M)]
= ∂/∂X tr(X D2 D2 Xᵀ D1 D1) − ∂/∂X tr(X D2 Mᵀ D1) − ∂/∂X tr(Xᵀ D1 M D2)
= D1 D1 X D2 D2 + D1 D1 X D2 D2 − D1 M D2 − (D2 Mᵀ D1)ᵀ
= 2 D1 (D1 X D2 − M) D2. (8)
Similarly, we have

∂/∂X L2(X) = 2 A3ᵀ (A3 X A4 − N) A4ᵀ. (9)
Let X1(k) and X2(k) be the estimates or iterative solutions of the system (6) at the k-th iteration. The recursive formulas of X1(k) and X2(k) come from the gradient formulas (8) and (9), as follows:

X1(k) = X(k − 1) + µ D1 (M − D1 X(k − 1) D2) D2
      = X(k − 1) + µ D1 (E − A1 X(k − 1) A2 − A3 X(k − 1) A4) D2,
X2(k) = X(k − 1) + µ D3 (N − D3 X(k − 1) D4) D4
      = X(k − 1) + µ D3 (E − A1 X(k − 1) A2 − A3 X(k − 1) A4) D4.
Based on the hierarchical identification principle, the unknown variable X is replaced by its estimates
at the (k − 1)-th iteration. To avoid duplicated computation, we introduce a matrix
S(k) = E − (A1 X(k) A2 + A3 X(k) A4),

so we have

X(k) = (1/2)(X1(k) + X2(k)) = X(k − 1) + (µ/2)(D1 S(k − 1) D2 + D3 S(k − 1) D4). (10)
Since any diagonal matrix is sparse, the operation count in the computation (10) can be substantially reduced. Let us denote S(k) = [s_ij(k)], X(k) = [x_ij(k)], and Dl = [d_ij^(l)] for each l = 1, 2, 3, 4. Indeed, the multiplication D1 S(k) D2 results in a matrix whose (i, j)-th entry is the product of the i-th diagonal entry of D1, the (i, j)-th entry of S(k), and the j-th diagonal entry of D2, i.e., D1 S(k) D2 = [d_ii^(1) s_ij(k) d_jj^(2)]. Similarly, D3 S(k) D4 = [d_ii^(3) s_ij(k) d_jj^(4)]. Thus, the update (10) can be carried out entrywise as

x_ij(k) = x_ij(k − 1) + (µ/2)(d_ii^(1) d_jj^(2) + d_ii^(3) d_jj^(4)) s_ij(k − 1).
The operation count for each step of the algorithm is 2mn(m + n + 5). When m = n, this count is 4n³ + 10n² ∈ O(n³), so the runtime complexity of each iteration is cubic. The convergence property of the algorithm relies on the convergence factor µ. The appropriate value of this parameter is determined in the next section.
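The following NumPy sketch summarizes one possible implementation of the MJGI update derived above, using the averaged recursion (10) with the µ/2 weighting; the function name mjgi, the iteration cap and the tolerance are illustrative.

import numpy as np

def mjgi(A1, A2, A3, A4, E, mu, X0=None, max_iter=5000, tol=1e-10):
    # Entrywise MJGI update: only diagonal entries of the coefficients weight the residual.
    m, n = E.shape
    d1, d3 = np.diag(A1), np.diag(A3)      # entries d_ii^(1), d_ii^(3)
    d2, d4 = np.diag(A2), np.diag(A4)      # entries d_jj^(2), d_jj^(4)
    W = 0.5 * mu * (np.outer(d1, d2) + np.outer(d3, d4))   # (mu/2)(d_ii^(1) d_jj^(2) + d_ii^(3) d_jj^(4))
    X = np.zeros((m, n)) if X0 is None else X0.copy()
    for _ in range(max_iter):
        S = E - A1 @ X @ A2 - A3 @ X @ A4  # S(k-1) = E - (A1 X A2 + A3 X A4)
        X = X + W * S                      # update (10), carried out entrywise
        if np.linalg.norm(S, 'fro') < tol:
            break
    return X

The dominant cost per step is forming S(k − 1); the diagonal weighting itself is only O(mn), in line with the operation count above.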
(2) If Re(λj) > 0 for all j = 1, . . . , mn, then (AS) holds if and only if

0 < µ < min_{j=1,...,mn} 2 Re(λj) / |λj|².

(3) If Re(λj) < 0 for all j = 1, . . . , mn, then (AS) holds if and only if

max_{j=1,...,mn} 2 Re(λj) / |λj|² < µ < 0.
(4) If H is symmetric, then (AS) holds if and only if λmax(H) and λmin(H) have the same sign, and µ is chosen so that

0 < µ < 2/λmax(H)  if λmin(H) > 0,
2/λmin(H) < µ < 0  if λmax(H) < 0.
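Under these conditions, the admissible range for µ can be computed directly from the eigenvalues of H. The helper below (the name mu_range is hypothetical) assumes, as in the theorem, that all real parts share one sign:

import numpy as np

def mu_range(H):
    # Admissible open interval for the convergence factor, following Theorem 1.
    lam = np.linalg.eigvals(H)
    bounds = 2.0 * lam.real / np.abs(lam) ** 2     # 2 Re(lambda_j) / |lambda_j|^2
    if np.all(lam.real > 0):
        return 0.0, bounds.min()                   # 0 < mu < min_j 2 Re(lambda_j)/|lambda_j|^2
    if np.all(lam.real < 0):
        return bounds.max(), 0.0                   # max_j 2 Re(lambda_j)/|lambda_j|^2 < mu < 0
    raise ValueError('eigenvalues of H must have real parts of one sign')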
We will show that X̃(k) → 0, or equivalently, vec X̃(k) → 0 as k → ∞. A direct computation reveals that

X̃(k) = (1/2)(X1(k) + X2(k)) − X*
      = X̃(k − 1) − (µ/2) D1 (A1 X̃(k − 1) A2 + A3 X̃(k − 1) A4) D2
                  − (µ/2) D3 (A1 X̃(k − 1) A2 + A3 X̃(k − 1) A4) D4.
By taking the vector operator and using properties of the Kronecker product, we have vec X̃(k) = (Imn − µH) vec X̃(k − 1), where H := (1/2) D(P) P and D(P) := D2 ⊗ D1 + D4 ⊗ D3 is the diagonal part of P.
Thus, ρ(Imn − µH) < 1 if and only if |1 − µλ| < 1 for all λ ∈ σ(H). Write λj = aj + i bj, where aj, bj ∈ R. It follows that the condition |1 − µλj| < 1 is equivalent to (1 − µλj)(1 − µλ̄j) < 1, or equivalently, one of the following holds:

(i) µ > 0 and −2aj + µ(aj² + bj²) < 0 for all j = 1, 2, 3, . . . , mn;
(ii) µ < 0 and −2aj + µ(aj² + bj²) > 0 for all j = 1, 2, 3, . . . , mn.

Case 1: aj = Re(λj) > 0 for all j. In this case, ρ(Imn − µH) < 1 if and only if

0 < µ < min_{j=1,...,mn} 2 aj / (aj² + bj²). (13)

Case 2: aj = Re(λj) < 0 for all j. In this case, ρ(Imn − µH) < 1 if and only if

max_{j=1,...,mn} 2 aj / (aj² + bj²) < µ < 0. (14)
Now, suppose that H is a symmetric matrix. Then Imn − µH is also symmetric, and thus all its eigenvalues are real. Hence, by Case 1, if λmin(H) > 0, then (AS) holds if and only if

0 < µ < 2/λmax(H).

By Case 2, if λmax(H) < 0, then (AS) holds if and only if

2/λmin(H) < µ < 0.

If λmin(H) < 0 < λmax(H), then (AS) would require both

2/λmin(H) < µ < 0  and  0 < µ < 2/λmax(H),

which is a contradiction.
Therefore, the condition (16) holds if and only if λmax ( H ) and λmin ( H ) have the same sign and µ
is chosen according to the above condition.
Hence, the spectral norm of Imn − µH describes how fast the approximated solution X(k) converges to the exact solution X*. The smaller this norm is, the faster X(k) approaches X*. In that case, since ‖Imn − µH‖2 < 1, if ‖X(k − 1) − X*‖F ≠ 0 (i.e., X(k − 1) is not the exact solution), then

‖X(k) − X*‖F < ‖X(k − 1) − X*‖F. (19)

Thus, the error at each iteration is smaller than at the previous one.
The above discussion is summarized in the following theorem.
Theorem 2. Suppose that the parameter µ is chosen as in Theorem 1 so that Algorithm 1 satisfies (AS). Then the convergence rate of the algorithm is governed by the spectral radius (16). Moreover, the error estimates for ‖X(k) − X*‖F relative to the previous step and to the initial step are provided by (17) and (18), respectively. In particular, the error at each iteration is smaller than the (nonzero) previous one, as in (19).

From (16), if the eigenvalues of µH are close to 1, then the spectral radius of the iteration matrix is close to 0, and hence the error vec X̃(k), or X̃(k), converges to 0 faster.
Remark 1. The convergence criteria and the convergence rate of Algorithm 1 depend on A1, A2, A3 and A4 but not on E. However, the matrix E can be used in the stopping criterion.
The next proposition determines the iteration number for which the approximated solution X(k) is close to the exact solution X*, so that ‖X(k) − X*‖F < e.
Proposition 1. According to Algorithm 1, for each given error e > 0, we have ‖X(k) − X*‖F < e after k* iterations, for any k* such that

k* > (log e − log ‖X(0) − X*‖F) / log ‖Imn − µH‖2. (20)
This means precisely that for each given e > 0, there is a k* ∈ N such that, for all k ≥ k*,

‖Imn − µH‖2^k ‖X(0) − X*‖F < e.

Taking logarithms, we see that this condition is equivalent to (20). Thus, if we run Algorithm 1 k* times, then we get ‖X(k) − X*‖F < e, as desired.
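For a concrete tolerance, (20) can be evaluated numerically; in the sketch below, contraction stands for ‖Imn − µH‖2 < 1 and initial_error for ‖X(0) − X*‖F, which is assumed known or estimated.

import numpy as np

def min_iterations(eps, initial_error, contraction):
    # Smallest integer k* satisfying the bound (20).
    assert 0.0 < contraction < 1.0 and initial_error > 0.0 and eps > 0.0
    k_star = (np.log(eps) - np.log(initial_error)) / np.log(contraction)
    return int(np.ceil(max(k_star, 0.0)))

print(min_iterations(1e-6, 1e3, 0.9))   # about 197 iterations for these sample values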
Theorem 3. The optimal convergence factor µ for which Algorithm 1 satisfies (AS) is the one that minimizes ‖Imn − µH‖2. If, in addition, H is symmetric, then the optimal convergence factor for which the algorithm satisfies (AS) is determined by

µopt = 2 / (λmin(H) + λmax(H)). (21)

In this case, the spectral radius of the iteration matrix is

ρ(Imn − µH) = (λmax(H) − λmin(H)) / (λmax(H) + λmin(H)) = (κ² − 1) / (κ² + 1), (22)

and the error estimates satisfy

‖X(k) − X*‖F ≤ ((κ² − 1)/(κ² + 1)) ‖X(k − 1) − X*‖F, (23)

‖X(k) − X*‖F ≤ ((κ² − 1)/(κ² + 1))^k ‖X(0) − X*‖F. (24)
Proof. From Theorem 2, it is clear that the fastest convergence is attained at a convergence factor that minimizes ‖Imn − µH‖2. Now, assume that H is symmetric. Then Imn − µH is also symmetric, thus all its eigenvalues are real and

f(µ) := ‖Imn − µH‖2 = max{ |1 − µ λmin(H)|, |1 − µ λmax(H)| }.

First, we consider the case λmin(H) > 0. To obtain the fastest convergence, according to (15), we must minimize f(µ) over the admissible range 0 < µ < 2/λmax(H). Writing a = λmin(H) and b = λmax(H), we obtain that the minimizer is given by µopt = 2/(a + b), so that f(µopt) = (b − a)/(b + a). For the case λmax(H) < 0, we minimize f(µ) over the range 2/λmin(H) < µ < 0. A similar argument yields the same minimizer (21) and the same convergence rate (22). From (17), (18) and (25), we obtain the bounds (23) and (24).
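When H is symmetric, (21) and (22) are straightforward to evaluate; a small numerical check (with illustrative names and a randomly generated positive definite H) is sketched below.

import numpy as np

def optimal_mu(H):
    # Optimal convergence factor (21) and resulting contraction factor (22) for symmetric H.
    lam = np.linalg.eigvalsh(H)                   # real eigenvalues of symmetric H
    lo, hi = lam.min(), lam.max()
    mu_opt = 2.0 / (lo + hi)                      # (21)
    rho = abs(hi - lo) / abs(hi + lo)             # (22)
    return mu_opt, rho

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
H = B @ B.T + 5.0 * np.eye(5)                     # symmetric positive definite test matrix
mu_opt, rho = optimal_mu(H)
print(np.isclose(np.linalg.norm(np.eye(5) - mu_opt * H, 2), rho))   # True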
4. Numerical Simulations
In this section, we report numerical results to illustrate the effectiveness of Algorithm 1. We consider
various sizes of matrix systems, namely, small (2 × 2), medium (10 × 10) and large (100 × 100). For the
generalized Sylvester equation, we compare the performance of Algorithm 1 to the GI and LSI algorithms.
For the Sylvester equation, we compare our algorithm with the GI, RGI, AGBI and JGI algorithms. All iterations have been carried out in the same environment: MATLAB R2017b, Intel(R) Core(TM) i7-7660U CPU @ 2.5 GHz, 8.00 GB RAM, bus speed 2133 MHz. We abbreviate IT and CPU for the number of iterations and the CPU time (in seconds), respectively. At the k-th step of the iteration, we consider the following error:

δ(k) := ‖E − A1 X(k) A2 − A3 X(k) A4‖F.
Choose X(0) = zeros(2). In this case, all eigenvalues of H have positive real parts. The effect of changing the convergence factor µ is illustrated in Figure 1. According to Theorem 1, the criterion for the convergence of X(k) is that µ ∈ (0, 4.1870). Since µ1, µ2, µ3, µ4 satisfy this criterion, the error becomes smaller and tends to zero as k increases, as shown in Figure 1. Among them, µ4 = 4.0870 gives the fastest convergence. For µ5 and µ6, which do not meet the criterion, the error δ(k) does not converge to zero.
Example 2. Suppose that A1 XA2 + A3 XA4 = E, where A1 , A2 , A3 , A4 and E are 10 × 10 matrices where
Here, E is a heptadiagonal matrix, i.e., a band matrix with bandwidth 3. Choose the initial matrix X(0) = zeros(10), where zeros(n) denotes the n-by-n zero matrix. We compare Algorithm 1 with the direct method and with the LSI and GI algorithms. Table 1 shows the errors at the final step of the iteration as well as the computation time after 75 iterations. Figure 2 illustrates that the approximated solutions via LSI diverge, while those via GI and MJGI converge. Table 1 and Figure 2 imply that our algorithm requires significantly less computational time and yields smaller errors than the others.
Example 3. We consider the equation A1 XA2 + A3 XA4 = E in which A1 , A2 , A3 , A4 and E are 100 × 100
matrices determined by
The initial matrix is given by X(0) = zeros(100). We run the LSI, GI and MJGI algorithms using

µ = 0.1,  µ = (‖A1‖2 ‖A2‖2 + ‖A3‖2 ‖A4‖2)^(−1),  µ = 2 (‖A1‖2 ‖A2‖2 + ‖A3‖2 ‖A4‖2)^(−1),

respectively. The results reported in Table 2 and Figure 3 illustrate that the approximated solution generated by LSI diverges, while those from GI and MJGI converge. Both the computational time and the error δ(100) from MJGI are smaller than those from GI.
A1 X + XA4 = E (26)
has a unique solution. This condition is equivalent to the Kronecker sum A4ᵀ ⊕ A1 being invertible, or, equivalently, to every sum of an eigenvalue of A1 and an eigenvalue of A4 being nonzero. To solve (26), Algorithm 2 is proposed, where

T(k) := E − A1 X(k) − X(k) A4.
Algorithm 2: Modified Jacobi-gradient based iterative (MJGI) algorithm for the Sylvester equation
Input: A1, A4, E, X(0);
Choose µ ∈ R, e > 0 and set k = 1;
for k = 1, 2, . . . , End do
    T(k − 1) = E − A1 X(k − 1) − X(k − 1) A4;
    x_ij(k) = x_ij(k − 1) + µ (d_ii^(1) + d_jj^(4)) t_ij(k − 1), where T(k − 1) = [t_ij(k − 1)];
    if ‖T(k − 1)‖F < e then
        break;
    else
        k = k + 1;
    end
end
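A NumPy rendering of Algorithm 2 may be helpful; the function name mjgi_sylvester and the iteration cap are illustrative, and the update uses only the diagonals of A1 and A4, exactly as in the entrywise formula above.

import numpy as np

def mjgi_sylvester(A1, A4, E, mu, X0=None, max_iter=1000, tol=1e-10):
    # Sketch of Algorithm 2: entrywise MJGI update for A1 X + X A4 = E.
    m, n = E.shape
    W = mu * (np.diag(A1)[:, None] + np.diag(A4)[None, :])   # mu (d_ii^(1) + d_jj^(4))
    X = np.zeros((m, n)) if X0 is None else X0.copy()
    for _ in range(max_iter):
        T = E - A1 @ X - X @ A4            # residual T(k-1)
        if np.linalg.norm(T, 'fro') < tol:
            break
        X = X + W * T                      # entrywise update of Algorithm 2
    return X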
Example 4. Consider the equation A1 X + XA4 = E, in which E is the same matrix as in the previous example,
In this case, all eigenvalues of the iteration matrix have positive real parts, so we can apply our algorithm. We compare our algorithm with the GI, RGI, AGBI and JGI algorithms. The results after running 100 iterations are shown in Figure 4 and Table 3. According to the errors and CPU times in Table 3 and Figure 4, our algorithm uses less computational time and produces smaller errors than the others.
parameter for an updating step to make the algorithm converge faster—see [25]. Another possible way
is to apply the idea in this paper to derive an iterative algorithm for nonlinear matrix equations.
Author Contributions: Supervision, P.C.; software, N.S.; writing—original draft preparation, N.S.; writing—review
and editing, P.C. All authors contributed equally and significantly in writing this article. All authors have read
and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Acknowledgments: The first author received financial support through the RA-TA graduate scholarship from the Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Grant No. RA/TA-2562-M-001, during his Master's study.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Shang, Y. Consensus seeking over Markovian switching networks with time-varying delays and uncertain
topologies. Appl. Math. Comput. 2016, 273, 1234–1245. [CrossRef]
2. Shang, Y. Average consensus in multi-agent systems with uncertain topologies and multiple time-varying
delays. Linear Algebra Appl. 2014, 459, 411–429. [CrossRef]
3. Golub, G.H.; Nash, S.; Van Loan, C.F. A Hessenberg-Schur method for the matrix AX + XB = C. IEEE Trans.
Automat. Control. 1979, 24, 909–913. [CrossRef]
4. Ding, F.; Chen, T. Hierarchical least squares identification methods for multivariable systems. IEEE Trans.
Automat. Control 1997, 42, 408–411. [CrossRef]
5. Benner, P.; Quintana-Orti, E.S. Solving stable generalized Lyapunov equations with the matrix sign function.
Numer. Algorithms 1999, 20, 75–100. [CrossRef]
6. Starke, G.; Niethammer, W. SOR for AX − XB = C. Linear Algebra Appl. 1991, 154–156, 355–375. [CrossRef]
7. Jonsson, I.; Kagstrom, B. Recursive blocked algorithms for solving triangular systems—Part I: One-sided
and coupled Sylvester-type matrix equations. ACM Trans. Math. Softw. 2002, 28, 392–415. [CrossRef]
8. Jonsson, I.; Kagstrom, B. Recursive blocked algorithms for solving triangular systems—Part II: Two-sided
and generalized Sylvester and Lyapunov matrix equations. ACM Trans. Math. Softw. 2002, 28, 416–435.
[CrossRef]
9. Kaabi, A.; Kerayechian, A.; Toutounian, F. A new version of successive approximations method for solving
Sylvester matrix equations. Appl. Math. Comput. 2007, 186, 638–648. [CrossRef]
10. Lin, Y.Q. Implicitly restarted global FOM and GMRES for nonsymmetric matrix equations and Sylvester
equations. Appl. Math. Comput. 2005, 167, 1004–1025. [CrossRef]
11. Kressner, D.; Sirkovic, P. Truncated low-rank methods for solving general linear matrix equations.
Numer. Linear Algebra Appl. 2015, 22, 564–583. [CrossRef]
12. Dehghan, M.; Shirilord, A. A generalized modified Hermitian and skew-Hermitian splitting (GMHSS)
method for solving complex Sylvester matrix equation. Appl. Math. Comput. 2019, 348, 632–651. [CrossRef]
13. Dehghan, M.; Shirilord, A. Solving complex Sylvester matrix equation by accelerated double-step scale
splitting (ADSS) method. Eng. Comput. 2019. [CrossRef]
14. Li, S.Y.; Shen, H.L.; Shao, X.H. PHSS iterative method for solving generalized Lyapunov equations. Mathematics
2019, 7, 38. [CrossRef]
15. Shen, H.L.; Li, Y.R.; Shao, X.H. The four-parameter PSS method for solving the Sylvester equation.
Mathematics 2019, 7, 105. [CrossRef]
16. Hajarian, M. Generalized conjugate direction algorithm for solving the general coupled matrix equations
over symmetric matrices. Numer. Algorithms 2016, 73, 591–609. [CrossRef]
17. Hajarian, M. Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose
matrix equations. J. Frankl. Inst. 2016, 353, 1168–1185. [CrossRef]
18. Dehghan, M.; Mohammadi-Arani, R. Generalized product-type methods based on Bi-conjugate gradient
(GPBiCG) for solving shifted linear systems. Comput. Appl. Math. 2017, 36, 1591–1606. [CrossRef]
19. Ding, F.; Chen, T. Gradient based iterative algorithms for solving a class of matrix equations. IEEE Trans.
Automat. Control 2005, 50, 1216–1221. [CrossRef]
20. Niu, Q.; Wang, X.; Lu, L.-Z. A relaxed gradient based algorithm for solving Sylvester equation. Asian J. Control
2011, 13, 461–464. [CrossRef]
21. Zhang, X.D.; Sheng, X.P. The relaxed gradient based iterative algorithm for the symmetric (skew symmetric)
solution of the Sylvester equation AX + XB = C. Math. Probl. Eng. 2017, 2017, 1624969. [CrossRef]
22. Xie, Y.J.; Ma, C.F. The accelerated gradient based iterative algorithm for solving a class of generalized
Sylvester-transpose matrix equation. Appl. Math. Comput. 2012, 218, 5620–5628. [CrossRef]
23. Ding, F.; Chen, T. Iterative least-squares solutions of coupled Sylvester matrix equations. Syst. Control Lett.
2005, 54, 95–107. [CrossRef]
24. Fan, W.; Gu, C.; Tian, Z. Jacobi-gradient iterative algorithms for Sylvester matrix equations. In Proceedings
of the 14th Conference of the International Linear Algebra Society, Shanghai University, Shanghai, China,
16–20 July 2007.
25. Tian, Z.; Tian, M.; Gu, C.; Hao, X. An accelerated Jacobi-gradient based iterative algorithm for solving
Sylvester matrix equations. Filomat 2017, 31, 2381–2390. [CrossRef]
26. Ding, F.; Liu, P.X.; Chen, T. Iterative solutions of the generalized Sylvester matrix equations by using the
hierarchical identification principle. Appl. Math. Comput. 2008, 197, 41–50. [CrossRef]
27. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1991.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional
affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).