Three Methods To Solve Linear Equations With Advantages and Disadvantages
Abstract. This paper introduces several prevalent techniques for solving linear systems. First, we present the simplest method, eliminating variables, whose operations leave the solution set of the original system unchanged. We then introduce Gaussian elimination, which works on the augmented matrix derived from a linear system. Finally, we present Cramer's rule, which computes the solution to a linear system from matrix determinants. Each method has its own application scenarios. For instance, eliminating variables places no restrictions on the systems it can handle, whereas Cramer's rule applies only to square matrices. Gaussian elimination relies on the three elementary row operations and also paves the way for computing the rank of a matrix. The choice between these methods depends on the coefficient matrix, in terms of both its dimensions and properties such as singularity.
1. Introduction
How does one find the solutions of simultaneous linear equations? Linear algebra is the branch of mathematics concerned with linear equations, which have been studied for a very long time. The broader picture of linear algebra did not take shape until the 17th century, when matrices, rectangular arrays of numbers, appeared [1]. As linear algebra developed rapidly, the study of the coefficients of linear equations led mathematicians to discover determinants. In the late 17th century, Leibniz contributed to linear algebra by relating his work on determinants to square systems with a unique solution.
Moreover, in 1750, Lagrange applied matrices in his work on characterizing the maxima and minima of multivariate functions. Research did not stop there. Building on Leibniz's results, Cramer's rule became one of the essential tools for using determinants to solve simultaneous linear equations. However, the rule left open a problem: it could not be established for all n×n linear systems. This led to Euler's observation that such systems may fail to have a solution, and he identified the conditions required: a square matrix and a unique solution.
In this context, Gauss made significant progress in the 19th century: he solved linear systems without matrices. Gaussian elimination remains a famous and ingenious method for solving linear systems, built from adding, scaling, and swapping equations so as to systematically remove unknown variables [2].
Since there are many ways to solve linear equations, it is natural to ask about the efficiency, advantages, and disadvantages of each method. Reviewing the research done by other experts, we focus on three main techniques: eliminating variables, Cramer's rule, and Gaussian elimination.
2. Elimination of variables
The first method introduced in this paper is one important approach to solving linear problems: eliminating variables. The foundation of this part is the mutual conversion between matrices and linear systems. A rectangular table of $m \times n$ numbers $a_{ij}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$), arranged in $m$ rows and $n$ columns, is called an $m \times n$ matrix. A matrix is denoted by a capital letter, as in $A = (a_{ij})$ or $A = (a_{ij})_{m \times n}$. A linear system has much in common with a matrix. Consider a linear system with $m$ equations and $n$ unknowns $x_1, \ldots, x_n$:
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = k_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = k_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n = k_3 \\
\quad\vdots \\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n = k_m
\end{cases} \tag{1}
$$
Using matrices, this linear system can be written as $A\mathbf{x} = \mathbf{k}$, where $A$ is the coefficient matrix. The matrix composed of the coefficients and constant terms of the system is called the augmented matrix [3]:
$$
A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \quad
\mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad
\mathbf{k} = \begin{pmatrix} k_1 \\ \vdots \\ k_m \end{pmatrix} \tag{2}
$$
Since the conversion between matrices and linear systems is clear, their operations also correspond exactly. The elementary operations are the heart of the elimination of variables [4]. In this case, the following three operations are considered:
i. Interchange the order of two equations, denoted $E_i \leftrightarrow E_j$
ii. Replace an equation by a nonzero multiple of itself, denoted $(cE_i) \to E_i$
iii. Replace an equation by adding $c$ times another equation to it, denoted $(cE_j + E_i) \to E_i$
All three of these transformations are reversible, so the systems of equations before and after a transformation have the same solutions. Moreover, since the above transformations operate only on the coefficients and constants of the equations, the operations on a linear system can be completely converted into row transformations of the augmented matrix, known as the elementary row operations [5]:
1. Interchanging two rows: $R_i \leftrightarrow R_j$
2. Multiplying a row of the matrix by a nonzero number: $R_i \to kR_i$
3. Adding a multiple of one row to another: $R_i \to R_i + cR_j$
Similarly, replacing rows with columns, operations on the columns of a matrix correspond one-to-one with the row operations and are called elementary column operations. A small sketch of the three row operations in code follows.
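As a quick illustration (our own sketch, not from the paper, assuming NumPy is available), each of the three row operations can be written as a one-line array update:

```python
import numpy as np

# A small matrix to operate on.
A = np.array([[2., 1., 2.],
              [1., 1., 2.],
              [4., 1., 4.]])

# 1. Interchange two rows: R_0 <-> R_1
A[[0, 1]] = A[[1, 0]]

# 2. Multiply a row by a nonzero number: R_2 -> 0.5 * R_2
A[2] = 0.5 * A[2]

# 3. Add a multiple of one row to another: R_1 -> R_1 - 2 * R_0
A[1] = A[1] - 2.0 * A[0]

print(A)
```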
a) Example
Consider the system
$$
\begin{cases}
2x_1 + x_2 + 2x_3 = 4 & (1) \\
x_1 + x_2 + 2x_3 = 1 & (2) \\
4x_1 + x_2 + 4x_3 = 2 & (3)
\end{cases}
$$
First, transfer the simultaneous linear equations into augmented matrix form, denoted by the capital letter $B$:
$$
B = \left(\begin{array}{ccc|c}
2 & 1 & 2 & 4 \\
1 & 1 & 2 & 1 \\
4 & 1 & 4 & 2
\end{array}\right)
$$
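To make the elimination concrete, here is a hedged SymPy sketch (our tooling choice; the paper works by hand) that carries out the variable elimination on the system as reconstructed above:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# The example system, written as expressions equal to zero.
f1 = 2*x1 + x2 + 2*x3 - 4
f2 = x1 + x2 + 2*x3 - 1
f3 = 4*x1 + x2 + 4*x3 - 2

# Eliminate x2 and x3: subtracting (2) from (1) leaves only x1.
g1 = sp.expand(f1 - f2)        # x1 - 3   ->  x1 = 3
# Eliminate x2 between (3) and (2): leaves x1 and x3.
g2 = sp.expand(f3 - f2)        # 3*x1 + 2*x3 - 1

# Substitute x1 = 3 into g2 to get x3, then back into f2 for x2.
x1_val = sp.solve(g1, x1)[0]                                  # 3
x3_val = sp.solve(g2.subs(x1, x1_val), x3)[0]                 # -4
x2_val = sp.solve(f2.subs({x1: x1_val, x3: x3_val}), x2)[0]   # 6

print(x1_val, x2_val, x3_val)   # 3 6 -4
```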
3. Gaussian Elimination
Gaussian elimination, also called row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. To perform row reduction on a matrix, one applies a sequence of elementary row operations until the lower left-hand corner of the matrix is filled with zeros, as far as possible. By repeatedly applying the three elementary row operations, which do not affect the solution set but do simplify the augmented matrix, we finally obtain a system of the form:
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = k_1 \\
\qquad\quad\; a_{22}x_2 + \cdots + a_{2n}x_n = k_2 \\
\qquad\qquad\qquad \ddots \qquad\quad \vdots \\
\qquad\qquad\qquad\quad\;\; a_{nn}x_n = k_n
\end{cases} \tag{3}
$$
where $a_{ij}, k_i \in \mathbb{C}$.
The part to the left of the dividing line is called a row echelon matrix [6], characterized by the property that the column index of the first nonzero entry of each nonzero row is strictly greater than the column index of the first nonzero entry of the row above it. Once the row echelon matrix is obtained, the number of unknowns decreases from top to bottom. Thus we can use the back substitution method: solve the bottom equation first, then repeatedly plug the known values of the variables into the row above, until the top is reached. This yields the solution; a short sketch of back substitution follows.
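A minimal back-substitution sketch (our own illustration; the triangular system below is chosen to share the solution of the earlier example):

```python
import numpy as np

def back_substitute(U, k):
    """Solve U x = k for upper-triangular U with nonzero diagonal."""
    n = len(k)
    x = np.zeros(n)
    # Start from the bottom row, where only x[n-1] is unknown,
    # and plug the known values into each row above.
    for i in range(n - 1, -1, -1):
        x[i] = (k[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2., 1., 2.],
              [0., 1., 2.],
              [0., 0., 2.]])
k = np.array([4., -2., -8.])
print(back_substitute(U, k))   # [ 3.  6. -4.]
```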
By extracting the relevant conditions from the question, we obtain the system:
$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = k_1 & (L_1) \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = k_2 & (L_2) \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n = k_3 & (L_3) \\
\quad\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n = k_n & (L_n)
\end{cases} \tag{4}
$$
where $a_{ij}, k_i \in \mathbb{C}$.
Thus, transferring to the augmented matrix form:
$$
\left(\begin{array}{ccccc|c}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & k_1 \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & k_2 \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & k_3 \\
\vdots & & & & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & k_n
\end{array}\right) \tag{5}
$$
Therefore, following the same steps as in the example, we obtain the row echelon form:
$$
\begin{cases}
b_{11}x_1 + b_{12}x_2 + \cdots + b_{1n}x_n = t_1 \\
\qquad\quad\; b_{22}x_2 + \cdots + b_{2n}x_n = t_2 \\
\qquad\qquad\qquad \ddots \qquad\quad \vdots \\
\qquad\qquad\qquad\quad\;\; b_{nn}x_n = t_n
\end{cases} \tag{6}
$$
where $b_{ij}, t_i \in \mathbb{C}$.
Then use $x_n$ to carry out back substitution on all the rows above, which yields the full solution. A compact sketch of the whole procedure follows.
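Putting the forward reduction and the back substitution together, the whole procedure might look like the following sketch; the partial-pivoting step is our own addition for numerical safety and is not discussed in the paper:

```python
import numpy as np

def gaussian_solve(A, k):
    """Solve A x = k by row reduction to echelon form plus back substitution.

    Assumes A is square and nonsingular.
    """
    A = A.astype(float)
    k = k.astype(float)
    n = len(k)
    for col in range(n):
        # Swap in the row with the largest pivot (operation R_i <-> R_j).
        p = col + np.argmax(np.abs(A[col:, col]))
        A[[col, p]], k[[col, p]] = A[[p, col]], k[[p, col]]
        # Zero everything below the pivot (operation R_j -> R_j - c R_i).
        for row in range(col + 1, n):
            c = A[row, col] / A[col, col]
            A[row, col:] -= c * A[col, col:]
            k[row] -= c * k[col]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (k[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2., 1., 2.], [1., 1., 2.], [4., 1., 4.]])
k = np.array([4., 1., 2.])
print(gaussian_solve(A, k))   # [ 3.  6. -4.]
```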
3.2.2 Rank:
“In linear algebra, the rank of a matrix is the highest order of its nonzero minors.” [6] We write $R(A)$ for the rank of a matrix $A$. During the Gaussian elimination in our research question, the rank of the coefficient matrix equals the rank of its row echelon form. Thanks to the structure of the row echelon form in (6), that rank is $n$; in general, for any matrix brought to row echelon form by Gaussian elimination, the rank equals the number of nonzero rows of the echelon form.
Rank helps us characterize the solution set of a linear system: let the coefficient matrix of the system be $A$, the augmented matrix be $B = (A, b)$, and $n$ be the number of unknowns. Then:
i. if $R(A) = R(B) = n$, the system of equations has a unique solution;
ii. if $R(A) = R(B) < n$, the equations have infinitely many solutions;
iii. if $R(A) < R(B)$, the equations have no solution.
These cases can be checked mechanically, as in the sketch below.
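The sketch below (our own, using numpy.linalg.matrix_rank) classifies a system by comparing the two ranks:

```python
import numpy as np

def classify(A, b):
    """Classify A x = b by comparing ranks of A and the augmented (A|b)."""
    n = A.shape[1]                      # number of unknowns
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA == rB == n:
        return "unique solution"
    if rA == rB:                        # common rank < n
        return "infinitely many solutions"
    return "no solution"                # rA < rB

A = np.array([[1., 2.], [2., 4.]])
print(classify(A, np.array([3., 6.])))   # infinitely many solutions
print(classify(A, np.array([3., 7.])))   # no solution
print(classify(np.eye(2), np.array([3., 6.])))  # unique solution
```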
4. Cramer’s rule
Cramer’s rule provides a method that does not rely on augmented matrices.
(2) If one row or column in a square matrix is proportional to another row or column, det(A) will
always be zero.
In geometric terms, the essence of the determinant is that it gives the factor by which areas change. To make this concrete, consider a triangle with vertices $(1,1)$, $(5,1)$, $(5,4)$, stored as the columns of $\begin{pmatrix} 1 & 5 & 5 \\ 1 & 1 & 4 \end{pmatrix}$, transformed by the matrix $\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}$:
$$
\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 5 & 5 \\ 1 & 1 & 4 \end{pmatrix}
=
\begin{pmatrix} 3 & 15 & 15 \\ 1 & 1 & 4 \end{pmatrix}
$$
The original area is 6 unit² and the transformed area is 18 unit². The new area is 3 times the original, and 3 is exactly the determinant of the transformation matrix.
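This area-scaling claim is easy to verify numerically; the shoelace-style helper below is our own illustration:

```python
import numpy as np

def triangle_area(P):
    """Area of a triangle whose vertices are the columns of a 2x3 matrix."""
    (x1, x2, x3), (y1, y2, y3) = P
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

M = np.array([[3., 0.], [0., 1.]])            # transformation matrix
P = np.array([[1., 5., 5.], [1., 1., 4.]])    # original triangle

print(triangle_area(P))          # 6.0
print(triangle_area(M @ P))      # 18.0
print(np.linalg.det(M))          # 3.0 -> the area scale factor
```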
Capturing this essence, the Leibniz formula gives the standard way of computing determinants:
$$
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \tag{7}
$$
$$
\det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - afh - bdi \tag{8}
$$
With these formulae, we present the whole process step by step. In the examples we mention only the nonzero cases. Using a 2×2 matrix as an example:
$$
\det\begin{pmatrix} 3 & 4 \\ 6 & 7 \end{pmatrix} = \begin{vmatrix} 3 & 4 \\ 6 & 7 \end{vmatrix} = 3 \times 7 - 4 \times 6 = -3
$$
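Transcribing formulas (7) and (8) into code makes them easy to check against a library determinant; the function names det2 and det3 are ours:

```python
import numpy as np

def det2(a, b, c, d):
    """Formula (7): determinant of [[a, b], [c, d]]."""
    return a * d - b * c

def det3(a, b, c, d, e, f, g, h, i):
    """Formula (8): determinant of [[a, b, c], [d, e, f], [g, h, i]]."""
    return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

print(det2(3, 4, 6, 7))                               # -3, as in the text
print(np.linalg.det(np.array([[3., 4.], [6., 7.]])))  # ~ -3.0

A = [[2, 1, 2], [1, 1, 2], [4, 1, 4]]
print(det3(*A[0], *A[1], *A[2]))                      # 2
print(np.linalg.det(np.array(A, dtype=float)))        # ~ 2.0
```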
(1) $x = D_1/D = 5$
(2) $y = D_2/D = 6$
(3) $z = D_3/D = 7$
where $D$ is the determinant of the coefficient matrix and $D_i$ denotes $D$ with its $i$-th column replaced by the column of constant terms.
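Since the worked system behind these values is not reproduced here, the sketch below instead applies Cramer's rule to the running example from section 2 (a stand-in of our own choosing):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (A square with nonzero determinant)."""
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("determinant is zero: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the constant terms
        x[i] = np.linalg.det(Ai) / D
    return x

A = np.array([[2., 1., 2.], [1., 1., 2.], [4., 1., 4.]])
b = np.array([4., 1., 2.])
print(cramer(A, b))   # [ 3.  6. -4.]
```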
5. Comparison of the methods
Gaussian elimination tells the rank of the matrix, while Cramer's rule only tells whether the matrix is invertible. The Gaussian elimination method is also applicable to non-square matrices (i.e. systems with unequal numbers of variables and equations), where Cramer's rule does not work. In addition, if there are multiple solutions (i.e. infinitely many), Gaussian elimination describes all of them, whereas Cramer's rule cannot.
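A singular example makes the contrast concrete: the determinant test alone cannot separate the two degenerate cases, while the rank comparison behind Gaussian elimination can. A minimal sketch (our own):

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])     # singular: det = 0
print(np.linalg.det(A))                # 0.0 -- Cramer's rule stops here

for b in (np.array([3., 6.]), np.array([3., 7.])):
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(np.column_stack([A, b]))
    print("infinite" if rA == rB else "none")   # infinite, then none
```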
6. Conclusion
When faced with a mathematical problem, finding the answer is always the goal, but the route to the answer can vary. The same is true in linear algebra. This paper identified three solution methods for different linear systems: the elimination method, Gaussian elimination, and Cramer's rule.
For each method, we have provided detailed calculations, practical applications, and advantages and disadvantages. In summary, the elimination method has a wide range of applications and no conditions of use, but its operation is cumbersome and it is not suitable for complex, large linear systems. Compared with eliminating variables, Gaussian elimination requires less computation for larger linear systems and makes it convenient to obtain the rank of a given matrix; nevertheless, without a computer, carrying it out by hand is time-consuming and error-prone. Cramer's rule, in turn, is a good choice for inhomogeneous linear systems because each unknown is computed independently. However, for large matrices it becomes complicated, and it rests on the condition that the determinant is nonzero; when the determinant is zero, it cannot tell whether there are infinitely many solutions or none.
Each method has its own characteristics and shortcomings, so choosing the best method for a given linear-algebra problem is the point we want to stress. For different linear systems, it is necessary to apply the various methods flexibly in order to solve the problem successfully. Beyond that, combining these methods with other disciplines and making them widely usable in computer programs is a major direction of future development. It is no longer only about solving problems by hand: computer science will play an increasingly important role in algorithm optimization, and that is what we hope to see.
References
[1] Aydin, S. (2017). On the history of some linear algebra concepts: from Babylon to pre-technology. The Online Journal of Science and Technology, 7(1).
[2] Christensen, J., & Gustafson, G. (2012). A Brief History of Linear Algebra. University of Utah.
[3] Tucker, A. (1993). The growing importance of linear algebra in undergraduate
mathematics. The college mathematics journal, 24(1), 3-9.
[4] Leon, S. J., Bica, I., & Hohn, T. (1998). Linear algebra with applications (Vol. 6). Upper Saddle
River, NJ: Prentice Hall.
[5] Olver, P. J., & Shakiban, C. (2006). Applied linear algebra (Vol. 1). Upper Saddle River, NJ: Prentice Hall.
[6] Szabo, F. (2015). The Linear Algebra Survival Guide. Academic Press.
[7] Lancaster, P., & Tismenetsky, M. (1985). The theory of matrices: with applications. Elsevier.