Case Study Numerical Method
INTRODUCTION
Numerical methods and analysis form a crucial branch of applied mathematics and computer science that focuses on the development and implementation of algorithms to solve mathematical problems. These methods are particularly useful when analytical solutions are either impossible or impractical to obtain. Numerical analysis involves studying and developing techniques to approximate solutions to mathematical problems, while numerical methods refer to the practical application of these techniques to real-world problems.
Numerical methods and analysis find application in various fields, including physics, engineering, finance, computer science, and more. They are instrumental in solving problems that arise in diverse domains where analytical solutions may be elusive or impractical.
In summary, numerical methods and analysis are indispensable tools for scientists, engineers, and researchers dealing with real-world problems that require efficient and accurate computational solutions. The field continues to evolve with advancements in algorithm development, computational power, and interdisciplinary applications.
I. Solving Systems of Linear Algebraic Equations
a. SMALL SCALE
GRAPHICAL METHOD
A graphical solution is obtained for two equations by plotting them on Cartesian coordinates, with one axis corresponding to x1 and the other to x2. Because we are dealing with linear systems, each equation is a straight line. This can be easily illustrated for the general equations
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
Both can be solved for x2, so the equations are in the form of a straight line: x2 = (slope) x1 + intercept. These lines can be graphed on Cartesian coordinates with x2 as the ordinate and x1 as the abscissa. The values of x1 and x2 at the intersection of the lines represent the solution.
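As a sketch of this idea (the function names here are illustrative, not from the text), each equation can be rewritten in slope-intercept form and the crossing point of the two lines computed directly:

```python
def slope_intercept(a1, a2, b):
    """Rewrite a1*x1 + a2*x2 = b as x2 = slope*x1 + intercept."""
    return -a1 / a2, b / a2

def intersection(line1, line2):
    """Solve slope1*x1 + c1 = slope2*x1 + c2 for the crossing point."""
    (m1, c1), (m2, c2) = line1, line2
    x1 = (c2 - c1) / (m1 - m2)
    return x1, m1 * x1 + c1

# The two lines of the worked example that follows:
l1 = slope_intercept(0.5, -1.0, -9.5)    # x2 = 0.5*x1 + 9.5
l2 = slope_intercept(1.02, -2.0, -18.8)  # x2 = 0.51*x1 + 9.4
print(intersection(l1, l2))              # the lines cross at the solution
```

This is just the algebraic equivalent of reading the intersection off the graph.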
Example: Given the equations 0.5 x1 − x2 = −9.5 and 1.02 x1 − 2 x2 = −18.8.

Solving each equation for x2 gives x2 = 0.5 x1 + 9.5 and x2 = 0.51 x1 + 9.4, which are tabulated below:

0.5 x1 − x2 = −9.5      1.02 x1 − 2 x2 = −18.8
 x1     x2               x1     x2
  0     9.5               0     9.4
  1    10                 1     9.91
  2    10.5               2    10.42
  3    11                 3    10.93
  4    11.5               4    11.44
  5    12                 5    11.95
  6    12.5               6    12.46
  7    13                 7    12.97
  8    13.5               8    13.48
  9    14                 9    13.99
 10    14.5              10    14.5
 11    15                11    15.01
 12    15.5              12    15.52
 13    16                13    16.03
 14    16.5              14    16.54

Both equations give x2 = 14.5 at x1 = 10, so the solution is x1 = 10, x2 = 14.5.
[Figure: plot of the lines 0.5x1 − x2 = −9.5 and 1.02x1 − 2x2 = −18.8 on the (x1, x2) plane; the lines intersect at the solution (10, 14.5).]
CRAMER’S RULE
Cramer’s rule is another solution technique that is best suited to small numbers of equations. Before describing this method, we will briefly introduce the concept of the determinant, which is used to implement Cramer’s rule. In addition, the determinant has relevance to the evaluation of the ill-conditioning of a matrix.
Determinants
The determinant can be illustrated for a set of three equations, [A]{X} = {B}. The determinant D of this system is formed from the coefficients of the equations, as in
D = | a11 a12 a13 |
    | a21 a22 a23 |
    | a31 a32 a33 |
Although the determinant D and the coefficient matrix [A] are composed of the same elements, they are completely different mathematical concepts. That is why they are distinguished visually by using brackets to enclose the matrix and straight lines to enclose the determinant. In contrast to a matrix, the determinant is a single number. For example, the value of the second-order determinant
D = | a11 a12 |
    | a21 a22 |
is calculated by
D = a11 a22 − a12 a21
For the third-order case, a single numerical value for the determinant can be computed as
D = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)
Example 1:
0.5 x1 − x2 = −9.5
1.02 x1 − 2 x2 = −18.8

Solution:
a. Find the determinant.
D = | 0.5   −1 | = 0.5(−2) − (−1)(1.02) = −1 + 1.02 = 0.02
    | 1.02  −2 |

b. Form Dx1 and Dx2 by replacing the corresponding column with the constants, then divide each by D:
Dx1 = | −9.5   −1 | = (−9.5)(−2) − (−1)(−18.8) = 19 − 18.8 = 0.2
      | −18.8  −2 |

Dx2 = | 0.5   −9.5  | = 0.5(−18.8) − (−9.5)(1.02) = −9.4 + 9.69 = 0.29
      | 1.02  −18.8 |

x1 = Dx1 / D = 0.2 / 0.02 = 10        x2 = Dx2 / D = 0.29 / 0.02 = 14.5
Example 2:
x1 + x2 − x3 = −3
6 x1 + 2 x2 + 2 x3 = 2
−3 x1 + 4 x2 + x3 = 1

Solution:
a. Find the determinant.
D = |  1  1  −1 |
    |  6  2   2 |
    | −3  4   1 |
D = 1 (2·1 − 2·4) − 1 [6·1 − 2·(−3)] + (−1) [6·4 − 2·(−3)]
D = 1 (2 − 8) − 1 (6 + 6) − (24 + 6)
D = −6 − 12 − 30
D = −48

b. Form the determinants Dx1, Dx2, and Dx3 by replacing the corresponding column with the constants, then divide each by D:

Dx1 = | −3  1  −1 | = −3 (2 − 8) − 1 (2 − 2) + (−1)(8 − 2) = 18 − 0 − 6 = 12
      |  2  2   2 |
      |  1  4   1 |

Dx2 = |  1  −3  −1 | = 1 (2 − 2) − (−3)[6 − (−6)] + (−1)[6 − (−6)] = 0 + 36 − 12 = 24
      |  6   2   2 |
      | −3   1   1 |

Dx3 = |  1  1  −3 | = 1 (2 − 8) − 1 [6 − (−6)] + (−3)[24 − (−6)] = −6 − 12 − 90 = −108
      |  6  2   2 |
      | −3  4   1 |

x1 = Dx1 / D = 12 / (−48) = −1/4
x2 = Dx2 / D = 24 / (−48) = −1/2
x3 = Dx3 / D = −108 / (−48) = 9/4
ELIMINATION OF UNKNOWNS
The basic strategy is to multiply the equations by constants so that one of the unknowns will be eliminated when the two equations are combined. The result is a single equation that can be solved for the remaining unknown. This value can then be substituted into either of the original equations to compute the other variable.
For example, consider the pair of equations
a11 x1 + a12 x2 = b1    (9.6)
a21 x1 + a22 x2 = b2    (9.7)
Eq. (9.6) might be multiplied by a21 and Eq. (9.7) by a11 to give
a11 a21 x1 + a12 a21 x2 = a21 b1    (9.8)
a11 a21 x1 + a11 a22 x2 = a11 b2    (9.9)
Subtracting Eq. (9.8) from Eq. (9.9) will, therefore, eliminate the x1 term from the equations to yield
x2 = (a11 b2 − a21 b1) / (a11 a22 − a12 a21)    (9.10)
Equation (9.10) can then be substituted into Eq. (9.6), which can be solved for
x1 = (a22 b1 − a12 b2) / (a11 a22 − a12 a21)    (9.11)
Notice that Eq. (9.10) and (9.11) follow directly from Cramer’s rule.
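The two-equation elimination described above can be sketched directly (the function name is illustrative, not from the text):

```python
def eliminate2(a11, a12, b1, a21, a22, b2):
    """Solve a11*x1 + a12*x2 = b1 and a21*x1 + a22*x2 = b2 by elimination."""
    denom = a11 * a22 - a12 * a21       # same denominator as Cramer's rule
    x2 = (a11 * b2 - a21 * b1) / denom  # the eliminated-x1 result
    x1 = (b1 - a12 * x2) / a11          # back-substitute into the first equation
    return x1, x2

# Same system as Example 1 of the graphical method:
print(eliminate2(0.5, -1.0, -9.5, 1.02, -2.0, -18.8))
```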
Example 1:
0.5 x1 − x2 = −9.5
1.02 x1 − 2 x2 = −18.8

Solution:
Multiply the first equation by 2:
  1.0 x1 − 2 x2 = −19
Subtract it from the second equation to eliminate x2:
  0.02 x1 = 0.2        x1 = 10
Substitute x1 = 10 into the first equation:
  0.5 (10) − x2 = −9.5        x2 = 14.5
Example 2:
x1 + x2 − x3 = −3
6 x1 + 2 x2 + 2 x3 = 2
−3 x1 + 4 x2 + x3 = 1

Solution:
Eliminate x3. Adding the first and third equations:
    x1 +  x2 − x3 = −3
+ (−3 x1 + 4 x2 + x3 = 1)
  −2 x1 + 5 x2 = −2

Multiplying the first equation by 2 and adding the second:
  2 x1 + 2 x2 − 2 x3 = −6
+ (6 x1 + 2 x2 + 2 x3 = 2)
  8 x1 + 4 x2 = −4

Eliminate x1. Multiply (−2 x1 + 5 x2 = −2) by 4 and add (8 x1 + 4 x2 = −4):
  −8 x1 + 20 x2 = −8
+ (8 x1 + 4 x2 = −4)
  24 x2 = −12        x2 = −1/2

Back-substitute into −2 x1 + 5 x2 = −2:
  −2 x1 + 5 (−1/2) = −2        x1 = −1/4

Back-substitute into x1 + x2 − x3 = −3:
  −1/4 − 1/2 − x3 = −3        x3 = 9/4
b. LARGE SCALE
GAUSSIAN ELIMINATION WITH ROW PIVOTING
The Gauss elimination method is used to solve a system of linear equations. Let’s recall the definition of these systems of equations. A system of equations is a group of linear equations with various unknown factors. Unknown factors exist in multiple equations. Solving a system involves finding values for the unknown factors that satisfy all the equations that make up the system.
In the basic Gauss elimination method, the element aii (that is, aij when i = j) is known as a pivot element. Each row is normalized by dividing the coefficients of that row by its pivot element. That is,
  akj = akj / akk,    j = 1, ..., n.
If akk = 0, the kth row cannot be normalized. Therefore, the procedure fails. One way to overcome this problem is to interchange this row with another row below it which does not have a zero element in that position.
Complete pivoting is an alternative scheme in which the largest element in any of the remaining rows is used as the pivot at each stage. Complete pivoting necessitates a significant amount of overhead and is thus rarely used (though it may yield slightly improved numerical stability).
ALGORITHM:
1. Input n, the aij, and the bi values.
2. Beginning from the first equation:
   i. Check the pivot element.
   ii. If it is the largest among the elements below it, obtain the derived system.
   iii. Otherwise, identify the largest element and make it the pivot element.
   iv. Interchange the original pivot equation with the one containing the largest element, so that the latter becomes the new pivot equation.
   v. Obtain the derived system.
   vi. Continue the process until the system is reduced to triangular form.
3. Compute the xi values by back substitution.
4. Print the results.
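The algorithm above can be sketched as follows (the function name and list-of-lists layout are choices of this note, not from the text); it is applied to the 3x3 system of Example 1 below:

```python
def gauss_pivot(a, b):
    """Gauss elimination with partial (row) pivoting; a and b are modified in place."""
    n = len(b)
    for k in range(n - 1):
        # Step 2: pick the row with the largest pivot candidate and interchange.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            b[k], b[p] = b[p], b[k]
        # Obtain the derived system by eliminating x_k from the rows below.
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # Step 3: back substitution on the triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

print(gauss_pivot([[0, 2, 3], [4, 6, 7], [2, 1, 6]], [8.0, -3.0, 5.0]))
```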
Example 1:
       2 x2 + 3 x3 = 8          4 x1 + 6 x2 + 7 x3 = −3
4 x1 + 6 x2 + 7 x3 = −3    →    2 x1 +   x2 + 6 x3 = 5
2 x1 +   x2 + 6 x3 = 5                 2 x2 + 3 x3 = 8

Solution:
After interchanging rows so that the largest coefficient of x1 is the pivot:
[ 4  6  7 ] [x1]   [−3]        m21 = 2/4 = 1/2
[ 2  1  6 ] [x2] = [ 5]        m31 = 0/4 = 0
[ 0  2  3 ] [x3]   [ 8]

Eliminating x1 (row 2 − m21 · row 1):
[ 4   6   7  ] [x1]   [ −3]
[ 0  −2  2.5 ] [x2] = [6.5]    m32 = 2/(−2) = −1
[ 0   2   3  ] [x3]   [  8]

Eliminating x2 (row 3 − m32 · row 2):
[ 4   6   7  ] [x1]   [ −3 ]
[ 0  −2  2.5 ] [x2] = [ 6.5]
[ 0   0  5.5 ] [x3]   [14.5]

Answer (by back substitution):
x3 = 14.5 / 5.5 = 2.636
x2 = (6.5 − 2.5 x3) / (−2) = 0.045
x1 = (−3 − 6 x2 − 7 x3) / 4 = −5.431
Example 2:
 6 x1 −  2 x2 + 2 x3 +  4 x4 = 16
12 x1 −  8 x2 + 6 x3 + 10 x4 = 26
 3 x1 − 13 x2 + 9 x3 +  3 x4 = −19
−6 x1 +  4 x2 +  x3 − 18 x4 = −34

Solution:
[  6   −2  2    4 ] [x1]   [ 16]                  [ 6   −2  2    4 ] [x1]   [ 16]
[ 12   −8  6   10 ] [x2] = [ 26]   eliminate x1   [ 0   −4  2    2 ] [x2] = [ −6]
[  3  −13  9    3 ] [x3]   [−19]       →          [ 0  −12  8    1 ] [x3]   [−27]
[ −6    4  1  −18 ] [x4]   [−34]                  [ 0    2  3  −14 ] [x4]   [−18]

[ 6  −2  2    4 ] [x1]   [ 16]                    [ 6  −2  2   4 ] [x1]   [16]
[ 0  −4  2    2 ] [x2] = [ −6]     eliminate x3   [ 0  −4  2   2 ] [x2] = [−6]
[ 0   0  2   −5 ] [x3]   [ −9]         →          [ 0   0  2  −5 ] [x3]   [−9]
[ 0   0  4  −13 ] [x4]   [−21]                    [ 0   0  0  −3 ] [x4]   [−3]

Answer (by back substitution):
−3 x4 = −3                          → x4 = 1
 2 x3 − 5(1) = −9                   → x3 = −2
−4 x2 + 2(−2) + 2(1) = −6           → x2 = 1
 6 x1 − 2(1) + 2(−2) + 4(1) = 16    → x1 = 3
Naïve Gaussian Elimination Method
Example 1:
 6 x1 −  2 x2 + 2 x3 +  4 x4 = 16
12 x1 −  8 x2 + 6 x3 + 10 x4 = 26
 3 x1 − 13 x2 + 9 x3 +  3 x4 = −19
−6 x1 +  4 x2 +  x3 − 18 x4 = −34

Elimination of x1:
a21/a11 = 12/6 = 2:
   12 x1 − 8 x2 + 6 x3 + 10 x4 = 26
− (12 x1 − 4 x2 + 4 x3 +  8 x4 = 32)
         − 4 x2 + 2 x3 +  2 x4 = −6

a31/a11 = 3/6 = 0.5:
   3 x1 − 13 x2 + 9 x3 + 3 x4 = −19
− (3 x1 −    x2 +   x3 + 2 x4 = 8)
        − 12 x2 + 8 x3 +   x4 = −27

a41/a11 = −6/6 = −1:
   −6 x1 + 4 x2 +   x3 − 18 x4 = −34
− (−6 x1 + 2 x2 − 2 x3 −  4 x4 = −16)
           2 x2 + 3 x3 − 14 x4 = −18

Elimination of x2:
a32/a22 = −12/(−4) = 3:
   −12 x2 + 8 x3 +   x4 = −27
− (−12 x2 + 6 x3 + 6 x4 = −18)
            2 x3 − 5 x4 = −9

a42/a22 = 2/(−4) = −1/2:
   2 x2 + 3 x3 − 14 x4 = −18
− (2 x2 −   x3 −   x4 = 3)
          4 x3 − 13 x4 = −21

Elimination of x3 (a43/a33 = 4/2 = 2) gives −3 x4 = −3, and back substitution yields x4 = 1, x3 = −2, x2 = 1, x1 = 3, as in the previous example.
Example 2:
 2 x1 + 4 x2 − 2 x3 − 2 x4 = −4
  x1 + 2 x2 + 4 x3 − 3 x4 = 5
−3 x1 − 3 x2 + 8 x3 − 2 x4 = 7
 −x1 +  x2 + 6 x3 − 3 x4 = 7

Elimination of x1:
a21/a11 = 1/2 = 0.5:
   x1 + 2 x2 + 4 x3 − 3 x4 = 5
− (x1 + 2 x2 −   x3 −   x4 = −2)
              5 x3 − 2 x4 = 7

a31/a11 = −3/2 = −1.5:
   −3 x1 − 3 x2 + 8 x3 − 2 x4 = 7
− (−3 x1 − 6 x2 + 3 x3 + 3 x4 = 6)
           3 x2 + 5 x3 − 5 x4 = 1

a41/a11 = −1/2 = −0.5:
   −x1 +   x2 + 6 x3 − 3 x4 = 7
− (−x1 − 2 x2 +   x3 +   x4 = 2)
         3 x2 + 5 x3 − 4 x4 = 5

The coefficient of x2 in the first derived equation is zero, so rearrange:
2 x1 + 4 x2 − 2 x3 − 2 x4 = −4
       3 x2 + 5 x3 − 5 x4 = 1
       3 x2 + 5 x3 − 4 x4 = 5
              5 x3 − 2 x4 = 7

Elimination of x2 (a32/a22 = 3/3 = 1):
   3 x2 + 5 x3 − 4 x4 = 5
− (3 x2 + 5 x3 − 5 x4 = 1)
                   x4 = 4

Answer (by back substitution):
x4 = 4
5 x3 − 2(4) = 7              → x3 = 3
3 x2 + 5(3) − 5(4) = 1       → x2 = 2
2 x1 + 4(2) − 2(3) − 2(4) = −4   → x1 = 1
LU DECOMPOSITION
The primary appeal of LU decomposition is that the time-consuming elimination step can be formulated so that it involves only operations on the matrix of coefficients, [A]. One motive for introducing LU decomposition is that it provides an efficient means to compute the matrix inverse. The inverse has a number of valuable applications in engineering practice. It also provides a means for evaluating system condition.

[ a11 a12 a13 ] [x1]   [b1]
[ a21 a22 a23 ] [x2] = [b2]
[ a31 a32 a33 ] [x3]   [b3]

Here,
    [ a11 a12 a13 ]       [x1]       [b1]
A = [ a21 a22 a23 ],  X = [x2],  B = [b2]
    [ a31 a32 a33 ]       [x3]       [b3]
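A minimal Doolittle-style sketch (an illustration, not the text’s exact procedure; it assumes nonzero pivots and does no row interchanges): factor A = LU by recording the elimination multipliers in L, then solve LY = B by forward substitution and UX = Y by back substitution. It is checked against Example 2 below.

```python
def lu_solve(a, b):
    """Solve a x = b via Doolittle LU factorization without pivoting."""
    n = len(b)
    lo = [[0.0] * n for _ in range(n)]
    up = [row[:] for row in a]
    for k in range(n):
        lo[k][k] = 1.0
        for i in range(k + 1, n):
            m = up[i][k] / up[k][k]        # the elimination multiplier...
            lo[i][k] = m                   # ...is stored in L
            for j in range(k, n):
                up[i][j] -= m * up[k][j]   # ...and applied to U
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(lo[i][j] * y[j] for j in range(i))
    # Back substitution: U x = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(up[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / up[i][i]
    return x

a = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
print(lu_solve(a, [16.0, 26.0, -19.0, -34.0]))  # expect [3, 1, -2, 1]
```

Once L and U are stored, any new right-hand side B can be solved by the two substitution sweeps alone, which is the efficiency argument made above.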
Example 1:
 2 x1 + 4 x2 − 2 x3 − 2 x4 = −4
  x1 + 2 x2 + 4 x3 − 3 x4 = 5
−3 x1 − 3 x2 + 8 x3 − 2 x4 = 7
 −x1 +  x2 + 6 x3 − 3 x4 = 7

Solution:
Forward eliminate the augmented matrix [A | B], interchanging rows when a pivot becomes zero:

[  2  4 −2 −2 | −4 ]    m21 = 0.5
[  1  2  4 −3 |  5 ]    m31 = −1.5
[ −3 −3  8 −2 |  7 ]    m41 = −0.5
[ −1  1  6 −3 |  7 ]

After eliminating x1 and reordering so the pivot positions are nonzero:
[ 2  4 −2 −2 | −4 ]    m32 = 1
[ 0  3  5 −4 |  5 ]    m42 = 0
[ 0  3  5 −5 |  1 ]
[ 0  0  5 −2 |  7 ]

After eliminating x2 and one more row interchange:
[ 2  4 −2 −2 | −4 ]
[ 0  3  5 −4 |  5 ]
[ 0  0  5 −2 |  7 ]
[ 0  0  0 −1 | −4 ]

With the rows of [A] and {B} taken in this final order, the multipliers give L, and LY = B:
[  1    0  0  0 ] [y1]   [−4]
[ −0.5  1  0  0 ] [y2] = [ 7]
[  0.5  0  1  0 ] [y3]   [ 5]
[ −1.5  1  0  1 ] [y4]   [ 7]
so Y = (−4, 5, 7, −4), and UX = Y:
[ 2  4 −2 −2 ] [x1]   [−4]
[ 0  3  5 −4 ] [x2] = [ 5]
[ 0  0  5 −2 ] [x3]   [ 7]
[ 0  0  0 −1 ] [x4]   [−4]

Back substitution:
−x4 = −4                          → x4 = 4
5 x3 − 2(4) = 7                   → x3 = 3
3 x2 + 5(3) − 4(4) = 5            → x2 = 2
2 x1 + 4(2) − 2(3) − 2(4) = −4    → x1 = 1
Example 2:
 6 x1 −  2 x2 + 2 x3 +  4 x4 = 16
12 x1 −  8 x2 + 6 x3 + 10 x4 = 26
 3 x1 − 13 x2 + 9 x3 +  3 x4 = −19
−6 x1 +  4 x2 +  x3 − 18 x4 = −34

Solution:
Forward eliminate, recording the multipliers:

[  6   −2  2    4 | 16 ]    m21 = 2
[ 12   −8  6   10 | 26 ]    m31 = 0.5
[  3  −13  9    3 |−19 ]    m41 = −1
[ −6    4  1  −18 |−34 ]

[ 6   −2  2    4 | 16 ]    m32 = 3
[ 0   −4  2    2 | −6 ]    m42 = −0.5
[ 0  −12  8    1 |−27 ]
[ 0    2  3  −14 |−18 ]

[ 6  −2  2    4 | 16 ]    m43 = 2
[ 0  −4  2    2 | −6 ]
[ 0   0  2   −5 | −9 ]
[ 0   0  4  −13 |−21 ]

[ 6  −2  2   4 | 16 ]
[ 0  −4  2   2 | −6 ]
[ 0   0  2  −5 | −9 ]
[ 0   0  0  −3 | −3 ]

The multipliers form L and the triangular result is U, so A = LU:
[  6   −2  2    4 ]   [  1    0   0  0 ] [ 6  −2  2   4 ]
[ 12   −8  6   10 ] = [  2    1   0  0 ] [ 0  −4  2   2 ]
[  3  −13  9    3 ]   [ 0.5   3   1  0 ] [ 0   0  2  −5 ]
[ −6    4  1  −18 ]   [ −1  −0.5  2  1 ] [ 0   0  0  −3 ]

Forward substitution, LY = B:
[  1    0   0  0 ] [y1]   [ 16]    y1 = 16
[  2    1   0  0 ] [y2] = [ 26]    2(16) + y2 = 26 → y2 = −6
[ 0.5   3   1  0 ] [y3]   [−19]    0.5(16) + 3(−6) + y3 = −19 → y3 = −9
[ −1  −0.5  2  1 ] [y4]   [−34]    −16 − 0.5(−6) + 2(−9) + y4 = −34 → y4 = −3

Back substitution, UX = Y:
[ 6  −2  2   4 ] [x1]   [ 16]    −3 x4 = −3 → x4 = 1
[ 0  −4  2   2 ] [x2] = [ −6]    2 x3 − 5(1) = −9 → x3 = −2
[ 0   0  2  −5 ] [x3]   [ −9]    −4 x2 + 2(−2) + 2(1) = −6 → x2 = 1
[ 0   0  0  −3 ] [x4]   [ −3]    6 x1 − 2(1) + 2(−2) + 4(1) = 16 → x1 = 3
ITERATION METHODS
a. GAUSS-SEIDEL METHOD
Because the Gauss-Seidel method is controlled by the number of iterations, round-off error is not an issue of concern with this method. However, there are certain instances where the Gauss-Seidel technique will not converge on the correct answer.
In numerical analysis, the Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite.
The Gauss-Seidel method is an iterative technique for solving a square system of n linear equations, AX = B.
Example 1:
 6 x1 −  2 x2 + 2 x3 +  4 x4 = 16
12 x1 −  8 x2 + 6 x3 + 10 x4 = 26
 3 x1 − 13 x2 + 9 x3 +  3 x4 = −19
−6 x1 +  4 x2 +  x3 − 18 x4 = −34

Solution: solve each equation for the unknown on its diagonal:
x1 = (1/6)(16 + 2 x2 − 2 x3 − 4 x4) = 8/3 + (1/3) x2 − (1/3) x3 − (2/3) x4
x2 = (1/−8)(26 − 12 x1 − 6 x3 − 10 x4) = −13/4 + (3/2) x1 + (3/4) x3 + (5/4) x4
x3 = (1/9)(−19 − 3 x1 + 13 x2 − 3 x4) = −19/9 − (1/3) x1 + (13/9) x2 − (1/3) x4
x4 = (1/−18)(−34 + 6 x1 − 4 x2 − x3) = 17/9 − (1/3) x1 + (2/9) x2 + (1/18) x3

These formulas are then evaluated in sequence, each time using the most recently computed values, until the iterates stop changing.
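A generic sweep of this kind can be sketched as follows. Note that the sample matrix below is an assumption of this note, chosen to be strictly diagonally dominant so that convergence is guaranteed; the matrix of the example above is not diagonally dominant, so the iteration is not guaranteed to converge on it.

```python
def gauss_seidel(a, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each equation i is solved for x_i using the
    newest available values (successive displacement)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        biggest = 0.0
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / a[i][i]
            biggest = max(biggest, abs(new - x[i]))
            x[i] = new                  # use the updated value immediately
        if biggest < tol:
            break
    return x

# Strictly diagonally dominant sample system with solution [1, 2, -1]:
a = [[10, -1, 2], [-1, 11, -1], [2, -1, 10]]
b = [6.0, 22.0, -10.0]
print(gauss_seidel(a, b))
```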
b. JACOBI METHOD
The Jacobian method, or Jacobi method, is an iterative method for approximating the solution of a system of n linear equations in n variables. In numerical linear algebra, the Jacobi iterative method is an algorithm for determining the solution of a system of linear equations whose matrix is diagonally dominant. The process is iterated until it converges. This algorithm was first called the Jacobi transformation process of matrix diagonalization. The Jacobi method is also known as the simultaneous displacement method.
Assumption 1: The given system of equations has a unique solution.
Assumption 2: The coefficient matrix A has no zeros on its main diagonal; namely, a11, a22, ..., ann are non-zero.
If any of the diagonal entries a11, a22, ..., ann are zero, then we should interchange the rows or columns to obtain a coefficient matrix that has nonzero entries on the main diagonal. Follow the steps given below to get the solution of a given system of equations.
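A sketch of a Jacobi sweep (the sample system is an assumption of this note, chosen to be diagonally dominant): unlike Gauss-Seidel, every x_i in a sweep is computed from the previous iterate only, hence "simultaneous displacement".

```python
def jacobi(a, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: all components are updated from the old iterate."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        new = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
               for i in range(n)]
        if max(abs(new[i] - x[i]) for i in range(n)) < tol:
            return new
        x = new
    return x

# Diagonally dominant sample system with solution [1, 2, -1]:
print(jacobi([[10, -1, 2], [-1, 11, -1], [2, -1, 10]], [6.0, 22.0, -10.0]))
```

Because the whole new iterate is built from the old one, the n component updates are independent, which is why Jacobi parallelizes more naturally than Gauss-Seidel.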
Example:
II. SYSTEMS OF NON-LINEAR EQUATIONS
a. OPEN METHODS
FIXED-POINT ITERATION METHOD
The fixed-point iteration method is an effective method for approximating solutions to algebraic and transcendental equations. It can be particularly handy when dealing with complex equations such as cubic, bi-quadratic, and transcendental ones.
The core concept of the fixed-point iteration method revolves around the repeated use of a fixed point to calculate the solution for a given equation. A fixed point, in this context, is a point within the function g’s domain where g(x) = x. The fixed-point iteration method involves the algebraic conversion of the given function into the form g(x) = x.
Let’s consider an equation f(x) = 0 for which we need to find the solution. This equation can be expressed as x = g(x). We need to choose g(x) such that |g'(x)| < 1 at x = x0, where x0 is the initial guess; this is known as the fixed-point iterative scheme. Subsequently, the iterative method is implemented through successive approximations given by xn = g(xn−1), i.e., x1 = g(x0), x2 = g(x1), and so on.
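The scheme can be sketched as follows. The equation solved here is an assumption of this note, not from the text: f(x) = x − cos(x) = 0 is rewritten as x = cos(x), and |g'(x)| = |sin(x)| < 1 near the root, so the iteration converges.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_n = g(x_{n-1}) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(math.cos, 0.5)
print(root)  # approximately 0.739085, where cos(x) = x
```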
Example:
NEWTON-RAPHSON METHOD
Newton-Raphson is a kind of open method which employs the Taylor series for estimating the position of the root.
For an arbitrary function f(x), the Taylor series around a starting point can be written as follows:
f(x_{i+1}) = f(x_i) + (x_{i+1} − x_i) f'(x_i) + (1/2)(x_{i+1} − x_i)^2 f''(ξ)
where ξ is a point between x_i and x_{i+1}. If we neglect the second-order term, the linear approximation of f(x) around x_i is as follows:
f(x_{i+1}) ≅ f(x_i) + (x_{i+1} − x_i) f'(x_i)
As shown in the figure, the new guess of the root can be found from the following equation:
0 = f(x_i) + (x_{i+1} − x_i) f'(x_i)    or    x_{i+1} = x_i − f(x_i) / f'(x_i)
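The Newton-Raphson update can be sketched as follows (the test function f(x) = x^2 − 2 is an assumption of this note, not from the text):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i) until |f(x)| falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)   # slide down the tangent line to its x-intercept
    return x

root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # approximately 1.41421356 (the square root of 2)
```

Note that the method requires an analytic derivative f'(x) and can fail when f'(x_i) is near zero; the secant method below avoids the explicit derivative.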
Example:
SECANT METHOD
The secant method is very similar to the bisection method, except that instead of dividing each interval at the midpoint, the secant method divides each interval at the secant line connecting the endpoints. Unlike the bracketing methods, however, convergence of the secant method is not guaranteed; it typically converges (faster than bisection) when f(x) is continuous and the starting points are close to a root of f(x) = 0.
Step 1:
Find starting points x0 and x1.
Step 2:
Take the interval [x0, x1] and find the next value
x2 = x0 − f(x0) · (x1 − x0) / (f(x1) − f(x0))
Step 3:
If f(x2) = 0 then x2 is an exact root; else set x0 = x1 and x1 = x2.
Step 4:
Repeat steps 2 and 3 until the desired accuracy is reached.
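The steps above can be sketched as follows (the test function x^3 − x − 2 and starting points are assumptions of this note): the derivative of Newton-Raphson is replaced by the slope of the secant through the last two iterates.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration: approximate the derivative by a finite difference."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            break
        x2 = x0 - f0 * (x1 - x0) / (f1 - f0)    # Step 2 of the text
        if abs(f(x2)) < tol:                     # Step 3: x2 is (nearly) exact
            return x2
        x0, x1 = x1, x2                          # shift the points and repeat
    return x1

root = secant(lambda x: x**3 - x - 2, 1.0, 2.0)
print(root)
```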
Example:
b. BRACKETING METHODS
BISECTION METHOD
The bisection method in mathematics is a root-finding method that repeatedly bisects an interval and then selects a sub-interval in which a root must lie for further processing.
Its strategy is to begin with two values of x, say xL and xU, that bracket a root of f(x) = 0; that is, f(xL) and f(xU) have opposite signs. The method treats the interval between the initial values as a line segment, then successively divides the interval in half and replaces one endpoint with the midpoint so that the root is again bracketed.
Note:
xL = lower limit
xU = upper limit
xR = midpoint estimate of the root
1. Choose xL and xU such that f(xL) · f(xU) < 0.
2. Compute the midpoint xR = (xL + xU) / 2.
3. If f(xL) · f(xR) < 0, the root lies in the lower sub-interval: set xU = xR.
4. Otherwise, the root lies in the upper sub-interval: set xL = xR.
5. Lastly, after repeating steps 2 to 4, if the answer for xR stays the same, then STOP. It means that the repeated value of xR is the root being solved for.
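The bisection loop can be sketched as follows (the test function x^2 − 3 on [1, 2] is an assumption of this note):

```python
def bisection(f, x_lower, x_upper, tol=1e-12, max_iter=200):
    """Repeatedly halve [x_lower, x_upper], keeping the half that brackets the root."""
    if f(x_lower) * f(x_upper) > 0:
        raise ValueError("the initial values do not bracket a root")
    for _ in range(max_iter):
        x_r = (x_lower + x_upper) / 2          # midpoint estimate of the root
        if f(x_lower) * f(x_r) < 0:
            x_upper = x_r                      # root lies in the lower half
        else:
            x_lower = x_r                      # root lies in the upper half
        if x_upper - x_lower < tol:
            break
    return (x_lower + x_upper) / 2

print(bisection(lambda x: x * x - 3, 1.0, 2.0))  # approximately sqrt(3)
```

Each pass halves the bracketing interval, so the error bound shrinks by a factor of 2 per iteration regardless of the shape of f.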
Example:
FALSE-POSITION METHOD
The Regula-Falsi method is a numerical method for estimating the roots of a function f(x). A value of x replaces the midpoint in the bisection method and serves as the new approximation of the root of f(x). The objective is to make convergence faster. Assume that f(x) is continuous.
False Position Method (Regula Falsi Method) Steps (Rules)
Step 1:
Find points x0 and x1 such that x0 < x1 and f(x0) · f(x1) < 0.
Step 2:
Take the interval [x0, x1] and find the next value
x2 = x0 − f(x0) · (x1 − x0) / (f(x1) − f(x0))
Step 3:
If f(x2) = 0 then x2 is an exact root; else if f(x0) · f(x2) < 0 then x1 = x2; else if f(x2) · f(x1) < 0 then x0 = x2.
Step 4:
Repeat steps 2 and 3 until the desired accuracy is reached.
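The steps above can be sketched as follows (the test equation cos(x) − x = 0 on [0, 1] is an assumption of this note); it is bisection except that the new point is where the secant through the bracketing endpoints crosses zero.

```python
import math

def false_position(f, x0, x1, tol=1e-12, max_iter=200):
    """Regula falsi: keep a sign-change bracket, refine with the secant point."""
    if f(x0) * f(x1) >= 0:
        raise ValueError("x0 and x1 must satisfy f(x0)*f(x1) < 0")
    x2 = x0
    for _ in range(max_iter):
        x2 = x0 - f(x0) * (x1 - x0) / (f(x1) - f(x0))   # Step 2
        fx2 = f(x2)
        if abs(fx2) < tol:       # Step 3: x2 is (nearly) an exact root
            return x2
        if f(x0) * fx2 < 0:      # root is bracketed by [x0, x2]
            x1 = x2
        else:                    # root is bracketed by [x2, x1]
            x0 = x2
    return x2

print(false_position(lambda x: math.cos(x) - x, 0.0, 1.0))  # approximately 0.739085
```

Unlike the secant method, the bracket is never abandoned, so the iterates always remain between two points of opposite sign.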
Example:
III. POLYNOMIAL INTERPOLATION AND CURVE FITTING
The least squares method is a statistical procedure to find the best fit for a set of data points. The method works by minimizing the sum of the squared offsets, or residuals, of the points from the plotted curve. Least squares regression is used to predict the behavior of dependent variables.
The least squares method provides the overall rationale for the placement of the line of best fit among the data points being studied. Traders and analysts can use the least squares method to identify trading opportunities and economic or financial trends. One of the main benefits of using this method is that it is easy to apply and understand. That’s because it only uses two variables (one shown along the x-axis and the other along the y-axis) while highlighting the best relationship between them.
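A sketch of the least squares line y = a0 + a1·x via the normal equations (the data set below is hypothetical, chosen to lie exactly on y = 2x + 1 so the fit is easy to check):

```python
def least_squares_line(xs, ys):
    """Fit y = a0 + a1*x by minimizing the sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a0 = (sy - a1 * sx) / n                          # intercept
    return a0, a1

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
print(least_squares_line(xs, ys))  # -> (1.0, 2.0)
```

With noisy data the same formulas return the line that minimizes the summed squared vertical offsets, which is the criterion described above.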
Example:
IV. NUMERICAL INTEGRATION
A. EULER’S METHOD
Example:
B. TRAPEZOIDAL RULE
Example:
C. SIMPSON’S RULE
Example:
D. SOLUTION TO ORDINARY DIFFERENTIAL EQUATIONS
Example:
E. RUNGE-KUTTA METHODS
Example: