

The LA=U decomposition method for solving systems of linear equations

Ababu T. Tiruneh (1), Tesfamariam Y. Debessai (2), Gabriel C. Bwembya (2), Stanley J. Nkambule (1)

(1) University of Eswatini, Department of Environmental Health Science, P.O. Box 369, Mbabane, Eswatini
(2) University of Eswatini, Department of Chemistry, Private Bag 4, Kwaluseni, Eswatini.

Abstract

A method for solving systems of linear equations is presented based on direct decomposition of
the coefficient matrix using the form LAX = LB = B′. Elements of the reducing lower triangular
matrix L can be determined using either row-wise or column-wise operations and are
demonstrated to be sums of permutation products of the Gauss pivot row multipliers. These sums
of permutation products can be constructed using a tree structure that is easy to memorize, or
alternatively computed using matrix products. The method requires storage of the L matrix only,
which is half the storage needed for the elements of the LU decomposition. Equivalence of the
proposed method with both Gauss elimination and LU decomposition is also shown in this paper.

Key words

Systems of linear equations, Gauss elimination, LU decomposition, linear equations, matrix inverse, determinant.

1. Introduction

Systems of linear equations, or equations linearized for iterative solution, arise in many science
and engineering problems (Knott, 2013). Practical applications of systems of linear equations are
many, and include digital signal processing, linear programming, numerical analysis of non-linear
problems and least-squares curve fitting (Matani, 2005). Systems of equations are also historically
reported to have provided a motivation for the development of the digital computer as a less
cumbersome way of solving the equations (Smiley, 2010).

Gaussian elimination is a systematic way of reducing a system of linear equations to a
triangularised matrix through addition of the independent equations (Grcar, 2011). Carl Friedrich
Gauss, the great 19th century mathematician, proposed the elimination method as part of his proof
of a particular theorem (Mon and Kyi, 2014). When a zero appears on the diagonal of the
coefficient matrix at a particular row during the reduction process, a row interchange is made with
a row from below. The Gauss elimination method requires about 2n^3/3 operations for an n by n
system of equations (Kreyszig, 2011; Burden and Faires, 2015).

The LU decomposition was developed by Alan Turing as an alternative way of carrying out
Gaussian elimination through factorization of the coefficient matrix into a product of lower and
upper triangular matrices, namely A = LU (Turing, 1948). The system is solved in two
consecutive steps using the equations LY = B and UX = Y (Computational Sciences, 2013). The
Doolittle method is one variant of the LU factorization in which the diagonal elements of the
lower triangular matrix L are all set equal to 1 (Burden and Faires, 2015). The Doolittle
method requires n^2 operations (Rafique and Ayub, 2015). The Crout method was
developed by the American mathematician Prescott Crout. In the Crout method, the upper
triangular matrix U has its diagonal elements all set to 1 (Wilkinson, 1988). The Crout method
likewise requires n^2 operations (Rafique and Ayub, 2015).
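
For background, the two-step triangular solve can be sketched as follows using SciPy's general-purpose routines; the matrix values are purely illustrative and this snippet is not part of the proposed method.

```python
# Two-step LU solve: factor A = PLU, then LY = P^T B (forward substitution)
# followed by UX = Y (back substitution). Values are illustrative only.
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
b = np.array([7.0, 10.0, 10.0])

P, L, U = lu(A)                               # A = P @ L @ U
y = solve_triangular(L, P.T @ b, lower=True)  # LY = P^T B
x = solve_triangular(U, y, lower=False)       # UX = Y
assert np.allclose(A @ x, b)
```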

The Cholesky factorization works for symmetric positive definite matrices. The coefficient
matrix in the system of equations AX = B is factorized into A = LL^T, where L is a lower
triangular matrix whose transpose L^T is an upper triangular matrix. The solution involves
solving successively LY = B and L^T X = Y (Pascal, 2019). The Cholesky factorization can as such
be taken as a special case of LU decomposition in which the coefficient matrix is a
symmetric, positive definite, non-singular matrix. Gaussian elimination for symmetric positive
definite matrices does not need pivoting and takes half of the work and storage of the LU
decomposition method (Layton and Sussman, 2014; Nguyen, 2010). The Cholesky method
requires 2n^2/3 operations (Rafique and Ayub, 2015).
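
A minimal sketch of the two triangular solves behind a Cholesky-based solution, using SciPy's convenience pair; the matrix is illustrative and assumed positive definite.

```python
# Cholesky solve: A = L L^T, then LY = b and L^T X = Y, done here via
# SciPy's cho_factor/cho_solve. Illustrative values only.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = cho_solve(cho_factor(A), b)
assert np.allclose(A @ x, b)
```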

The QR decomposition transforms the system of equations AX = B into the triangular system
RX = Q^T B, where A = QR. The matrix Q is orthogonal (QQ^T = I) and R is an upper triangular
matrix (GNU, 2019; Čížek and Čížková, 2004).

2. Method development

The method proposed in this paper is based on reducing the coefficient matrix A in the system of
linear equations AX = B using a single lower triangular reducing matrix L. The original
coefficient matrix A is transformed into an upper triangular matrix U that allows solution
through back substitution, as is usual with both the LU decomposition and Gauss elimination
methods. The original system of n by n linear equations is given as:

a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
…
an1 x1 + an2 x2 + … + ann xn = bn

The matrix representation of the equations will be:

AX = B (1)

where A is the coefficient matrix having the elements aij of the original equations and B is the
right-hand-side column vector containing the elements b1, b2, …, bn.

The proposed method establishes a solution that transforms both the coefficient matrix A and the
right-hand-side column vector B as follows:

LAX = UX = LB = B′ (2)

where B′ = LB denotes the transformed right-hand-side vector. In other words, the coefficient
matrix A and the right-hand-side column vector B are transformed through the equations:

LA = U and LB = B′ (3)

The procedure, therefore, essentially centers on determining the lower triangular matrix L that
reduces the coefficient matrix A to an upper triangular matrix U. Taking a 4 by 4 system for
illustration, let this matrix L be given through its elements lij so that:

L  =  | 1    0    0    0 |
      | l21  1    0    0 |        (4)
      | l31  l32  1    0 |
      | l41  l42  l43  1 |

The operation LA = U will reduce the coefficient matrix A into an upper triangular matrix U
given by:

U  =  | u11  u12  u13  u14 |
      | 0    u22  u23  u24 |      (5)
      | 0    0    u33  u34 |
      | 0    0    0    u44 |

However, the proposed method does not need storage of the U matrix, as only the L matrix
needs to be determined and used to reduce both the A matrix and the right-hand-side column
vector B. This is easily seen through the matrix operation involving the reducing matrix L only,
namely,

LAX = LB = B′ (6)

In this method, the lij elements will be written in terms of the Gauss pivot row multipliers mij of
the Gauss elimination and, as will be shown shortly, the lij elements are sums of the
permutation products of the mij multipliers, which can be assembled into a tree-like structure for
easy memorization. The elements lij do not remain constant during the reduction process, as is
normally the case with Gauss elimination or LU decomposition, but change as the reduction of
the A matrix to the U matrix progresses column-wise or row-wise and new Gauss pivot row
multipliers are added to the element lij.

Unlike the Gauss method, which is restricted to column-wise operation, in this method it is also
possible to proceed row-wise. In fact, the row-wise procedure will be followed to derive the lij
elements.

Starting with row 2 of the lower triangular L matrix, the only unknown is l21. In terms of
the Gauss elimination pivot row multipliers mij, the pivot operation to reduce u21 to zero is given
as:

m21 a11 + a21 = 0 so that m21 = -a21/a11

For row 3, l31 is determined through the pivot element a11 so that:

m31 a11 + a31 = 0 so that m31 = -a31/a11

For row 3 again, the remaining element l32 is determined through the pivot element a'22, which is
modified from the original value a22 because of the earlier reduction operation on row 2. Hence
the reduction to u32 = 0 is given by:

m32 (m21 a12 + a22) + (m31 a12 + a32) = 0

Collecting the pivot row multipliers m with respect to the elements aij of the coefficient matrix A,
the above equation becomes:

(m31 + m32 m21) a12 + m32 a22 + a32 = 0

Likewise, the lij elements for row 4 are determined as follows:

For (u41 = 0): m41 a11 + a41 = 0

For (u42 = 0): (m41 + m42 m21) a12 + m42 a22 + a42 = 0

For (u43 = 0): (m41 + m42 m21 + m43 m31 + m43 m32 m21) a13 + (m42 + m43 m32) a23 + m43 a33 + a43 = 0

For a 4 by 4 L matrix, summarizing the lij elements expressed in terms of the Gauss pivot row
multipliers m shown above gives the L matrix of Equation 7:

L  =  | 1                                       0              0    0 |
      | m21                                     1              0    0 |    (7)
      | m31 + m32 m21                           m32            1    0 |
      | m41 + m42 m21 + m43 m31 + m43 m32 m21   m42 + m43 m32  m43  1 |

It is easy to show that the m terms in the L matrix of Equation 7 form permutation products
whereby the numbers of terms correspond to the coefficients of the binomial series expansion.
For any element lij of the L matrix, the number of m-product terms is given by:

Nm = Σ (r = 0 to K) C(K, r) = 2^K (8)

where the power of the binomial expansion, K(i, j), is given by:

K(i, j) = i - j - 1 (9)


For example, for l41:

l41 = m41 + m42 m21 + m43 m31 + m43 m32 m21

This corresponds to the binomial expansion of power K(4, 1) = 2, i.e., {1, 2, 1}.

The permutation products of l41 as shown in the L matrix are:

{ m41 }, { m42 m21, m43 m31 }, { m43 m32 m21 }

For l51 similarly:

l51 = m51 + m52 m21 + m53 m31 + m53 m32 m21 + m54 m41 + m54 m42 m21 + m54 m43 m31 + m54 m43 m32 m21

This corresponds to the binomial expansion of power K(5, 1) = 3, i.e., {1, 3, 3, 1}.

The permutation m-products for l51 of the L matrix are, therefore,

{ m51 }, { m52 m21, m53 m31, m54 m41 }, { m53 m32 m21, m54 m42 m21, m54 m43 m31 }, { m54 m43 m32 m21 }

2.1 Tree-like structure of the m-permutation products

It is easy to enumerate the m-permutation products of lij, as these products can be arranged in a
tree-like structure. Taking the element l51 as an example, the tree structure shown in
Figure 1 is formed:

l51
├── m51
├── m52 ── m21
├── m53 ─┬─ m31
│        └─ m32 ── m21
└── m54 ─┬─ m41
         ├─ m42 ── m21
         └─ m43 ─┬─ m31
                 └─ m32 ── m21

Figure 1: A tree structure showing the permutation products used in forming the L matrix; each root-to-leaf path gives one product, e.g. m54 m43 m32 m21
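
The tree can be traversed mechanically. Below is a small sketch (ours, not the authors' code; the function name permutation_products is invented for illustration) that enumerates the products for any lij by recursive descent, mirroring Figure 1:

```python
# Enumerate the m-permutation products of l_ij: the bare multiplier m_ij plus,
# for every intermediate row k (j < k < i), m_ik times each product of l_kj.
def permutation_products(i, j):
    """Each product is returned as a tuple of (row, col) index pairs."""
    products = [((i, j),)]                  # the direct branch, e.g. m51
    for k in range(j + 1, i):               # descend through intermediate rows
        for tail in permutation_products(k, j):
            products.append(((i, k),) + tail)
    return products

terms = permutation_products(5, 1)          # the l51 tree of Figure 1
assert len(terms) == 2 ** (5 - 1 - 1)       # Nm = 2^K with K = i - j - 1 = 3
for t in terms:
    print(" * ".join(f"m{r}{c}" for r, c in t))
```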

2.2 Formula for calculation of the sum of permutation products

For the element lij of the lower triangular matrix L, with the number of m-products Nm
corresponding to the binomial coefficients of power K(i, j), the binomial coefficients Nm(r) for
r = 0, 1, 2, …, K are given by:

Nm(r) = C(K, r) = K! / (r! (K - r)!) (10)

For example, for l41 with K = 4 - 1 - 1 = 2 and r = {0, 1, 2}:

Nm(r) = {1, 2, 1}

Hence, Nm = 1 + 2 + 1 = 4

Similarly, for l51 with K(5, 1) = 5 - 1 - 1 = 3 and r = {0, 1, 2, 3}:

Nm(r) = {1, 3, 3, 1}, so that Nm = 1 + 3 + 3 + 1 = 8

Once the Nm(r) values corresponding to the binomial coefficients are determined, the sums of
permutation products are calculated as follows. As an example, for l51 with r = {0, 1, 2, 3} and
taking K = 2, which contains 3 terms, the sum of permutation products M51(K = 2) is given by:

M51(2) = m53 m32 m21 + m54 m42 m21 + m54 m43 m31

In general, for any element lij, the sum of m-products of chain length K is obtained by summing,
over all descending index chains i > k1 > k2 > … > kK > j, the product of the multipliers along
each chain:

Mij(K) = Σ (over i > k1 > k2 > … > kK > j)  m(i,k1) m(k1,k2) … m(kK,j) (11)

Finally, the element lij is computed by summing the Mij sums of products over all chain lengths:

lij = Σ (K = 0 to i-j-1)  Mij(K) (12)
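
As a numerical sanity check, the chain sum of Equations 11 and 12 can be evaluated directly and compared against the recurrence derived in the next subsection. The sketch below is ours (names such as l_chain are invented), using random multipliers and 0-based indices:

```python
# Evaluate l_ij as the sum over descending chains (Equations 11 and 12) and
# cross-check against the recurrence l_ij = m_ij + sum_k m_ik l_kj (Equation 13).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = np.tril(rng.normal(size=(n, n)), k=-1)   # multipliers m[i][j], i > j (0-based)

def l_chain(i, j):
    total = m[i, j]                           # K = 0: the direct chain i -> j
    for K in range(1, i - j):                 # chains with K intermediate indices
        for mids in itertools.combinations(range(j + 1, i), K):
            chain = (i,) + tuple(reversed(mids)) + (j,)
            total += np.prod([m[a, b] for a, b in zip(chain, chain[1:])])
    return total

def l_recur(i, j):
    return m[i, j] + sum(m[i, k] * l_recur(k, j) for k in range(j + 1, i))

assert np.isclose(l_chain(4, 0), l_recur(4, 0))   # l51 in the paper's notation
```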

2.3 Matrix solution to the computation of the lij elements of the lower triangular matrix L

The computation of the elements of the lower triangular matrix L can also be easily carried out
using matrix multiplication. For any element lij, the matrix multiplication takes the following
form:

lij = mij + [ mi,j+1   mi,j+2   …   mi,i-1 ] [ lj+1,j   lj+2,j   …   li-1,j ]^T (13)

Equation 13 shows that lij can be determined from the already determined values lkj,
where j + 1 ≤ k ≤ i - 1, together with the Gauss pivot row multipliers mij, mi,j+1, mi,j+2, …, mi,i-1.
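
Written out as a recurrence, Equation 13 reads lij = mij + Σ (k = j+1 to i-1) mik lkj. A minimal sketch (ours; the function name build_L is invented, indices are 0-based) that fills the whole L matrix this way:

```python
# Build the reducing matrix L from the Gauss pivot row multipliers using the
# recurrence of Equation 13, filling rows top to bottom so that every l_kj
# needed for row i has already been finalized.
import numpy as np

def build_L(m):
    """m: n x n array of pivot row multipliers m[i][j] for i > j."""
    n = m.shape[0]
    L = np.eye(n)                        # unit diagonal, as in Equation 4
    for i in range(1, n):
        for j in range(i):
            L[i, j] = m[i, j] + sum(m[i, k] * L[k, j] for k in range(j + 1, i))
    return L
```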

Equation 13 can be summarized in a general matrix product form as follows. Consider the
lower triangular matrix L of Equation 4 again:

L  =  | 1    0    0    0 |
      | l21  1    0    0 |
      | l31  l32  1    0 |
      | l41  l42  l43  1 |

The negatives of the corresponding Gauss pivot row multipliers mrs that are already determined at
this stage are given by the matrix LLU:

LLU  =  | 1     0     0     0 |
        | -m21  1     0     0 |
        | -m31  -m32  1     0 |
        | -m41  -m42  -m43  1 |

The matrix LLU is simply the L matrix of the LU decomposition method. This can be verified as
follows. To avoid confusion, let the traditional LU decomposition method have its L matrix
relabelled LLU, to make it distinct from the L matrix of the proposed direct decomposition
procedure.

From the relationship A = LLU U as well as LA = U, it follows that:

LA = L (LLU U) = U

It follows then that:

L LLU = I

in which I is the identity matrix. Therefore, the L matrix is simply the inverse of the matrix
LLU of the LU decomposition method, i.e., L = LLU^-1.
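
This inverse relationship is easy to check numerically. The sketch below is ours (values illustrative, and no pivoting is assumed to be needed): it records the multipliers during a plain Gauss elimination, forms LLU, and verifies that L = LLU^-1 reduces A to U.

```python
# Check L * L_LU = I and L A = U for a small example: run Gauss elimination
# without pivoting, record the multipliers m_ij = -a_ij / a_jj, and compare.
import numpy as np

A = np.array([[4.0, 2.0, 1.0, 3.0],
              [2.0, 5.0, 3.0, 1.0],
              [1.0, 3.0, 6.0, 2.0],
              [3.0, 1.0, 2.0, 7.0]])
n = A.shape[0]

U = A.copy()
M = np.zeros((n, n))
for j in range(n - 1):
    for i in range(j + 1, n):
        M[i, j] = -U[i, j] / U[j, j]     # Gauss pivot row multiplier
        U[i, :] += M[i, j] * U[j, :]     # eliminate a_ij

L_LU = np.eye(n) - M                     # LU's lower factor holds -m_ij
L = np.linalg.inv(L_LU)                  # the proposed reducing matrix
assert np.allclose(L @ L_LU, np.eye(n))  # L * L_LU = I
assert np.allclose(L @ A, U)             # LA = U
```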

The L matrix elements shown in Equation 13 can therefore be reproduced from the matrix
equation:

| 1    0    0    0 |   | 1     0     0     0 |
| l21  1    0    0 |   | -m21  1     0     0 |
| l31  l32  1    0 |   | -m31  -m32  1     0 |   =   I (14)
| l41  l42  l43  1 |   | -m41  -m42  -m43  1 |

This computation will be illustrated for the 4 by 4 L matrix shown in Equation 4, and later for
the example of the 4 by 4 system of linear equations solved in the section that follows. Starting
with the element l21, the matrix form of Equation 13 gives:

l21 = m21

For element l31:

l31 = m31 + [ m32 ] [ l21 ] = m31 + m32 m21

For element l32:

l32 = m32

For element l41:

l41 = m41 + [ m42  m43 ] [ l21  l31 ]^T = m41 + m42 m21 + m43 (m31 + m32 m21)
    = m41 + m42 m21 + m43 m31 + m43 m32 m21

For element l42:

l42 = m42 + [ m43 ] [ l32 ] = m42 + m43 m32

For element l43:

l43 = m43

This completes the L matrix for the 4 by 4 case shown in Equation 7, i.e.,

L  =  | 1                                       0              0    0 |
      | m21                                     1              0    0 |
      | m31 + m32 m21                           m32            1    0 |
      | m41 + m42 m21 + m43 m31 + m43 m32 m21   m42 + m43 m32  m43  1 |

2.4 Number of operations required

The number of operations Np required is related to the determination of the elements of the L
matrix only. As with the LU decomposition, the order of growth is of power 2, i.e., for an n by n
matrix the number of lij elements to be determined grows in proportion to n^2. This is clearly
seen as the numbers of lij elements across the rows form an arithmetic series, 1, 2, 3, …, n-1,
which sums to:

Np = 1 + 2 + 3 + … + (n-1) = n(n-1)/2 (15)

For example, for a 4 by 4 L matrix:

Np = 4(4-1)/2 = 6

The lij elements of the L matrix shown in Equation 4 confirm the six elements to be determined.
Compared to the LU decomposition, the proposed method requires only half of the stored
elements and associated operations. The reason is that, unlike the LU method, the LAX = LB = B′
method does not require storage of the U elements, i.e., only the L matrix is needed to solve the
system of linear equations.

2.5 Procedure for determining elements of the L matrix

The computation of the lij elements of the lower triangular matrix L can be carried out either
row-wise or column-wise, following more or less the same step-by-step procedure outlined
below.

Step 1: Initially set all the Gauss pivot row multipliers mrs of the element lij to zero values.
During computation of a particular value of mrs, the most recent values of the other pivot row
multipliers will be used. In other words, the values of mrs will be updated once their values
change because of successive row-wise or column-wise computation.

Step 2: Starting with the first column and second row and proceeding either row-wise or
column-wise, calculate the mrs value for which r = i and s = j. For example, for the element l21
the m value to be calculated is m21, and at l53 it is m53 that will be calculated. The matrix
equation for the computation of the m values is that of LA = U, in which for the element lij the
equation takes the form:

Σp lip apj = uij = 0 (16)

since uij is zero in the upper triangular matrix for i > j.

Step 3: Proceed likewise for all the elements mrs, taking into account the fact that all the other m
values are updated once a new value is computed for them as per Step 2.

Step 4: After the computation of all the m values of the Gauss pivot multipliers is completed,
form the L matrix elements lij using the summation rules of the permutation products involving
the m-products, as given by Equations 11 and 12, or using the matrix product given in
Equation 14.

Step 5: Once the L matrix is formed, compute the solution vector X of the system of equations
AX = B using the formula shown in Equation 2, namely,

LAX = LB = B′

In other words, the product LA results in the upper triangular matrix U, which allows the
computation of the elements of the solution vector X using back substitution. A sketch of the
complete procedure is given below.
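
The following is a minimal Python sketch of Steps 1 to 5 (ours, not the authors' code; the name solve_LAU is invented, indices are 0-based, and no zero pivot is assumed to arise, so the row-interchange check described next is omitted):

```python
# Steps 1-5 of the LA = U procedure, processed row-wise (Method 2): for each
# row i, choose m[i][j] so that u_ij = (row i of L) . (column j of A) = 0,
# refreshing row i of L as each new multiplier is found (Equation 13).
import numpy as np

def solve_LAU(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    m = np.zeros((n, n))
    L = np.eye(n)                          # Step 1: all multipliers start at zero
    for i in range(1, n):
        for j in range(i):
            # Step 2: with m[i][j] still zero, the residual of u_ij is
            # L[i] @ A[:, j]; adding m[i][j] contributes m[i][j] * u_jj.
            u_jj = L[j] @ A[:, j]          # current pivot (must be nonzero)
            m[i, j] = -(L[i] @ A[:, j]) / u_jj
            L[i, :i] += m[i, j] * L[j, :i] # Steps 3-4: update row i of L
    U = L @ A                              # upper triangular by construction
    bbar = L @ b                           # LB = B'
    x = np.zeros(n)                        # Step 5: back substitution on UX = B'
    for i in range(n - 1, -1, -1):
        x[i] = (bbar[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# illustrative use with made-up values
A = [[4.0, 2.0, 1.0, 3.0],
     [2.0, 5.0, 3.0, 1.0],
     [1.0, 3.0, 6.0, 2.0],
     [3.0, 1.0, 2.0, 7.0]]
b = [1.0, 2.0, 3.0, 4.0]
x = solve_LAU(A, b)
assert np.allclose(np.array(A) @ x, b)
```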

As in the Gauss method, it is possible to check whether a zero appears on the diagonal of the
U = LA matrix, i.e., to check whether uii = 0 for a given row i during the computation of the lij
elements. In other words, for a given row i, a check can be made on the value of uii using the
formula:

uii = Σp lip api

If the condition uii = 0 becomes true, a row interchange can be made with a row from below in
the equation.

The procedure stated above will be illustrated with the example given below, which is a 4 by 4
system of linear equations. Two methods are given: Method 1 using column-wise operations,
and Method 2 using row-wise operations.

3. Application examples

Example 1:

The 4 by 4 system of linear equations shown below will be used to illustrate the proposed method
of solving systems of linear equations using direct decomposition of the A matrix, i.e., using the
matrix reduction LAX = UX = LB = B′. The system of equations is:

[ ] [ ] [ ]

Forming the lower triangular matrix L in the equation LA = U and using the undetermined Gauss
pivot row multipliers mrs, the matrix equation LA = U becomes:

[ ][ ]

[ ]

Method 1 (Column-wise operation)

Column 1 operations:

Initially all the m values will be set to zero, as outlined in the steps for solving the system of
equations. Starting with column 1 and at row 2, the equation Σp l2p ap1 = u21 = 0 gives;

Since the row 2 operation is completed at this stage, check for the occurrence of a zero on the new
pivot element, u22 ≠ 0;

u22 = Σp l2p ap2 ≠ 0;

For the third-row operation at column 1;

Σp l3p ap1 = u31 = 0; the unknown to be determined is m31

Since m32 is still zero as initially set and not yet determined, the above equation reduces to;

For the fourth row at column 1;

Σp l4p ap1 = u41 = 0; the unknown to be determined is m41

Since all the m terms corresponding to columns 2 and 3 are still zero as initially set (still
undetermined), the above equation reduces to;

Column 2 operations:

Starting with row 3;

u32 = Σp l3p ap2 = 0;

( )

Since the row 3 operation is completed at this stage, check the new pivot element u33, i.e.,

u33 = Σp l3p ap3 ≠ 0;

( )

Since u33 ≠ 0, there is no need for row interchange.

For row 4, column 2 operations;

u42 = Σp l4p ap2 = 0;

Since m43 = 0 (not yet determined), the above equation reduces to;

[ ]

Column 3 operations;

Proceeding to row 4, since the upper rows are already determined;

u43 = Σp l4p ap3 = 0

[ ] [ ]

[ ] [ ]

( )

Since row 4 is completed, check for the occurrence of a zero on the new pivot element, i.e., u44:

u44 = Σp l4p ap4 ≠ 0;

[ ] [ ]

[ ] [ ]

[ ] [ ]

Now that all the m pivot row multipliers are determined, the l elements of the lower triangular
matrix L can be determined as follows:

Column 1 elements:

( )

( ) ( )

Column 2 elements:

Column 3 elements:

Since all the l elements of the lower triangular matrix are determined, the L matrix can now be
written as follows:

[ ]

3.1 Matrix computation of the L matrix for Example 1

The L matrix can be computed using the matrix form given by Equation 14, i.e.,

| 1    0    0    0 |   | 1     0     0     0 |
| l21  1    0    0 |   | -m21  1     0     0 |
| l31  l32  1    0 |   | -m31  -m32  1     0 |   =   I
| l41  l42  l43  1 |   | -m41  -m42  -m43  1 |

For the above example, Equation 14 takes the form:

[ ][ ]

Substituting the computed m values:

[ ]

[ ]

For element l21:

For element l31:

For element l32:

For element l41:

( )

( ) ( )

For element l42:

( )

For element l43:

This completes the L matrix, i.e.,

[ ]

The reduced matrix LA becomes;

[ ] [ ]

[ ]

Similarly, the operation LB = B becomes;

[ ] [ ]

[ ]

Finally, the reduced equation LAX = LB = B′ takes the form:

[ ][ ] [ ]

The elements of the solution vector X can now be determined by back substitution. Starting from
the fourth row, x4 is determined;

From the equation of row 3, x3 is determined;

Similarly, using the row 2 equation, for x2;

Finally, x1 is determined from the equation of row 1;

Therefore the solution vector X is given by:

{ }

This completes the solution using the proposed method. The alternative solution given below is
shown only up to the computation of the Gauss pivot row multipliers m, following which the
computation of the elements of the L and U matrices and the procedure for the determination of
the solution vector X are the same as demonstrated above and need not be repeated.

Method 2 (Row-wise operation)

Row 2 operations:

Initially all the m values will be set to zero, as outlined in the steps for solving the system of
equations. Starting with row 2 and at column 1, the equation Σp l2p ap1 = u21 = 0 gives;

Since the row 2 operation is completed at this stage, check for the occurrence of a zero on the new
pivot element, u22 ≠ 0;

u22 = Σp l2p ap2 ≠ 0;

Row 3 operations

For the third-row operation at column 1;

Σp l3p ap1 = u31 = 0; the unknown to be determined is m31

Since the column 2 multiplier m32 is still zero as initially set and not yet determined at this stage,
the above equation reduces to;

For row 3, column 2;

u32 = Σp l3p ap2 = 0;

( )

Since the row 3 operation is completed at this stage, check the new pivot element u33, i.e.,

u33 = Σp l3p ap3 ≠ 0;

( )

Since u33 ≠ 0, there is no need for row interchange.

Row 4 operations

For the fourth row at column 1;

Σp l4p ap1 = u41 = 0; the unknown to be determined is m41

Since all the m terms corresponding to columns 2 and 3 of row 4 are still zero as initially set
(still undetermined), the above equation reduces to;

For row 4, column 2 operations;

u42 = Σp l4p ap2 = 0;

Since the row 4, column 3 multiplier m43 = 0 (not yet determined), the above equation reduces
to;

[ ]

For row 4, column 3 operations:

u43 = Σp l4p ap3 = 0

[ ] [ ]

[ ] [ ]

( )

Since row 4 is completed, check for the occurrence of a zero on the new pivot element, i.e., u44:

u44 = Σp l4p ap4 ≠ 0;

[ ] [ ]

[ ] [ ]

[ ] [ ]

Now all the m pivot row multipliers are determined, and the determination of the L and U
matrices as well as the computation of the solution vector X proceed in exactly the same
manner as demonstrated in Method 1, the column-wise operations.

The example provided above shows in clear steps the procedure for solving a system of linear
equations using direct decomposition of the form LAX = LB = B′. Once the L matrix is formed,
it can be used to solve any variant of the equation AX = B in which the right-hand-side column
vector B is changed. This is demonstrated in the example given below, and a short sketch of
such reuse follows.
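
Reusing the factorization only requires repeating LB = B′ and the back substitution. A minimal sketch (ours; resolve_with_L is an invented helper, and L is assumed to come from an earlier solve such as the solve_LAU sketch above):

```python
# Re-solve AX = b_new with a precomputed reducing matrix L: only the products
# U = LA (cacheable) and B' = L b_new plus a back substitution are needed.
import numpy as np

def resolve_with_L(L, A, b_new):
    U = L @ A                        # could be cached from the first solve
    bbar = L @ np.asarray(b_new, dtype=float)
    n = len(bbar)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):   # back substitution on U x = B'
        x[i] = (bbar[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```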

Example 2: Let the right-hand-side column vector be changed to the following while the
coefficient matrix A remains the same. The new column vector B is given as:

{ }

In this case, since the L matrix in the equation LA = U has already been worked out, the only
additional operation needed is the computation of LB = B′. Equation 2, namely

LAX = UX = LB = B′

would be used to determine the new solution vector X. Starting with the computation of LB = B′;

[ ] [ ]

[ ]

Finally, the solution vector X is computed from LAX = LB = B′:

[ ][ ] [ ]

The elements of the solution vector X can now be determined by back substitution. Starting from
the fourth row, x4 is determined;

From the equation of row 3, x3 is determined;

Similarly, using the row 2 equation, for x2;

Finally, x1 is determined from the equation of row 1;

Therefore, the solution vector X is given by:

{ }

4. Discussion

The proposed method, developed and demonstrated with examples above, shows that the solution
to a linear system of equations can be obtained through direct decomposition of the A matrix
using the operation LAX = LB = B′. The method provides a clear procedure for the direct
computation of the L matrix, the only matrix needed to transform the original equation AX = B
into the reduced form UX = B′, unlike, for example, the LU method, which requires that both the
L and U matrices be stored to find the solution through AX = LUX = B. The elements lij of the
lower triangular matrix L are shown to be sums of permutation products of the Gauss pivot row
multipliers mrs. The relationship between lij and mrs is clearly established through a formula, and
it is easy to construct this relationship visually using a tree diagram that assists in easy
memorisation of the relationship. In addition (and as an alternative procedure), the relationship so
established between the elements lij of the lower triangular matrix L and the Gauss pivot row
multipliers mrs enables construction of the L matrix directly from the Gauss elimination steps.

A characteristic of the Gauss elimination method is that the reduction to an upper triangular
matrix can only proceed column-wise; it is not possible to proceed row-wise in the Gauss method.
On the other hand, the LU decomposition requires alternating between the L and U elements when
determining the compact LU matrix. By contrast, the proposed LA = U reduction method can
proceed either column-wise or row-wise, giving essentially the same result. This flexibility is
demonstrated in the example shown above, where it is easily seen that the computation of the
Gauss pivot row multipliers remains more or less the same for both the row-wise and column-wise
operations.

The storage requirement during the reduction process is related to the generation of the L matrix.
Unlike the LU method, storage is needed only for the L matrix, since the solution proceeds
directly from the reduction LAX = LB = B′, in which there is no need to store the U matrix. The
number of elements that need to change is of order O(n^2), as shown in Equation 15, and is
typically half the number required for the LU decomposition, because in the LU decomposition
both the L and U elements need to be determined and stored.

5. Conclusion

A direct decomposition of the coefficient matrix of a system of linear equations using a single
lower triangular reducing matrix L has been demonstrated in this paper. The method allows the
solution of the system of linear equations to proceed through storage of a single lower triangular
matrix L only, through which both the coefficient matrix A and the right-hand-side column
vector B are transformed. Elements of the reducing matrix L are shown to be sums of
permutation products of the pivot row multipliers of the Gauss elimination technique. These sums
of permutation products, for any element of the reducing matrix L, can be easily constructed
using a tree diagram that is relatively easy to memorize, besides using the formula developed for
the purpose. The L matrix elements can also alternatively be computed using matrix products. In
determining the elements of the L matrix, either a row-wise or a column-wise procedure can be
followed, giving essentially the same result, which provides added flexibility to the proposed
method. Equivalence of the proposed method with both the Gauss elimination and LU
decomposition techniques has been established. In the case of the equivalence with the Gauss
elimination technique, elements of the L matrix are specified as functions of the Gauss pivot row
multipliers; this also implies that it is possible to construct the reducing L matrix of the proposed
direct decomposition method from the Gauss pivot row multipliers. As has been demonstrated,
the L matrix can be constructed directly from the Gauss pivot row multipliers using the matrix
product L LLU = I. With respect to the LU decomposition, the L matrix of the proposed method
is simply the inverse of the L matrix of the LU decomposition. In terms of storage of computed
values, the proposed method of direct decomposition using the transformation LAX = LB = B′
needs storage of the L matrix elements only, which is half the size of storing all the L and U
elements in the LU decomposition method.

Apart from providing added flexibility and simplicity, the proposed method would be of good
educational value, providing an alternative procedure for solving systems of linear equations.

References

1. The GNU Scientific Library, GSL. 2019. Linear Algebra: QR Decomposition with Column
Pivoting. Accessed from: https://www.gnu.org/software/gsl/doc/html/_sources/intro.rst.txt,
on August 24, 2019.

2. Čížek, P. and Čížková, L. 2004. Numerical linear algebra. Papers / Humboldt-Universität
Berlin, Center for Applied Statistics and Economics (CASE), No. 2004,23, pp. 5-13.

3. Layton, W. and Sussman, M. 2014. Numerical Linear Algebra. University of Pittsburgh,
USA, pp. 28-39.

4. Nguyen, D. 2010. Cholesky and LDLT Decomposition. University of South Florida.
Accessed from: http://nm.mathforcollege.com/#sthash.RljP1TrJ.dpbs

5. Pascal, F. 2019. Solving linear systems. Laboratoire Jacques-Louis Lions, UFR de
Mathématiques, Sorbonne Université, Paris. Accessed from:
https://www.ljll.math.upmc.fr/frey/ftp/linear%20systems.pdf, on August 24, 2019.

6. Knott, G.D. 2013. Gaussian elimination and LU-decomposition. Civilized Software Inc.

7. Smiley, J. 2010. The Man Who Invented the Computer: The Biography of John Atanasoff.
Doubleday, New York.

8. Grcar, J.F. 2011. How ordinary elimination became Gaussian elimination. Historia
Mathematica, 38:163-218.

9. Computational Sciences. 2013. LU Decomposition. Accessed from:
https://computationalsciences.wordpress.com/2013/06/06/lu-decomposition/

10. Kreyszig, E. 2011. Advanced Engineering Mathematics, John Wiley.

11. Turing, A.M. 1948. Rounding-Off Errors in Matrix Processes, The Quarterly Journal of
Mechanics and Applied Mathematics, 1, 287-308.

12. Burden, R.L. and Faires, J.D. 2015. Numerical Analysis. Cengage Learning, 10th Edition,
pp. 285-340.

13. Rafique, M. and Ayub, S. 2015. Some convalescent methods for the solution of systems of
linear equations. Applied and Computational Mathematics, 4(3): 207-213.

14. Wilkinson, J.H. 1988. The Algebraic Eigenvalue Problem, Oxford University Press.

15. Matani, D. 2005. A distributed approach for solving a system of linear equations. The
Journal of American Science, 1(2), 1-8.

16. Mon, Y. and Kyi, L.L.W. 2014. Performance comparison of Gauss elimination and Gauss-
Jordan elimination. International Journal of Computer & Communication Engineering
Research, 2(2), 67-71.
