Solutions of Simultaneous System of Linear Equations: Assignment 01
Submitted by:
GELIZON, SHELLA MAE A.
DATARIO, PETER SHANDEL P.
SUMOOK, EDRIAN ANTONIO
I. DIRECT SOLUTION
• Inverse Method
• Cramer’s Rule
• Gauss Elimination
• Gauss Jordan Method
II. INDIRECT or ITERATIVE SOLUTIONS
• Jacobi's Method
• Gauss Seidel Method
III. NUMERICAL INTEGRATION
• Trapezoidal Rule
• Simpson’s Rule
• Romberg Integration
IV. CURVE FITTING
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
• Method of Successive Substitution
• Half Interval Method
• Method of False Position
• Bairstow’s Method
VI. NUMERICAL SOLUTIONS of ORDINARY
DIFFERENTIAL EQUATIONS
• Taylor Series Method
• Euler’s Method
• Runge Kutta Method
I. DIRECT SOLUTION
INVERSE METHOD
Solving a system of linear equations using the inverse of a matrix requires
the definition of two new matrices: X is the matrix representing the
variables of the system, and B is the matrix representing the constants.
Using matrix multiplication, we may define a system of equations with
the same number of equations as variables as,
AX = B
EXAMPLE: Solve the following system by the inverse method:
5x + 15y + 56z = 35
-4x - 11y - 41z = -26
-x - 3y - 11z = -7

SOLUTION:
Find the inverse of the coefficient matrix A by row-reducing the augmented matrix [A | I]:

 5  15  56 | 1  0  0
-4 -11 -41 | 0  1  0
-1  -3 -11 | 0  0  1

Multiply row 1 by 1/5.

 1   3  56/5 | 1/5  0  0
-4 -11 -41   |  0   1  0
-1  -3 -11   |  0   0  1

Multiply row 1 by 4 and add to row 2; multiply row 1 by 1 and add to row 3.

 1  3  56/5 | 1/5  0  0
 0  1  19/5 | 4/5  1  0
 0  0   1/5 | 1/5  0  1

Multiply row 2 by -3 and add to row 1; multiply row 3 by 5.

 1  0  -1/5 | -11/5  -3  0
 0  1  19/5 |   4/5   1  0
 0  0   1   |   1     0  5

Multiply row 3 by 1/5 and add to row 1; multiply row 3 by -19/5 and add to row 2.

 1  0  0 | -2  -3    1
 0  1  0 | -3   1  -19
 0  0  1 |  1   0    5

Multiplying both sides of AX = B on the left by A⁻¹:

 -2  -3    1     5  15  56    x     -2  -3    1     35
 -3   1  -19    -4 -11 -41    y  =  -3   1  -19    -26
  1   0    5    -1  -3 -11    z      1   0    5     -7

Thus,

  x      -70 + 78 - 7         1
  y  =  -105 - 26 + 133   =   2
  z       35 + 0 - 35         0

so x = 1, y = 2, z = 0.
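The inverse-method computation above can be checked numerically. Below is a minimal sketch in plain Python (no external libraries): it builds A⁻¹ by the same Gauss-Jordan row reduction on [A | I] and then forms X = A⁻¹B.

```python
def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest pivot.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]              # scale pivot row to 1
        for r in range(n):                            # clear the column elsewhere
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[5, 15, 56], [-4, -11, -41], [-1, -3, -11]]
B = [35, -26, -7]
Ainv = invert(A)
X = [sum(Ainv[i][j] * B[j] for j in range(3)) for i in range(3)]
print([round(x) for x in X])  # → [1, 2, 0]
```

The first row of `Ainv` comes out as (-2, -3, 1), matching the hand computation.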
I. DIRECT SOLUTION
CRAMER'S RULE
For a 2 × 2 system
a1·x + b1·y = c1
a2·x + b2·y = c2
solving by elimination gives

x = (c1·b2 - c2·b1) / (a1·b2 - a2·b1)
y = (a1·c2 - a2·c1) / (a1·b2 - a2·b1)

Notice that the denominator for both x and y is the determinant of the
coefficient matrix.
We can use these formulas to solve for x and y, but Cramer's Rule also
introduces new notation:

D = | a1 b1 |     Dx = | c1 b1 |     Dy = | a1 c1 |
    | a2 b2 |          | c2 b2 |          | a2 c2 |

The key to Cramer's Rule is replacing the variable column of interest with
the constant column and calculating the determinants. We can then express
x and y as a quotient of two determinants: x = Dx/D and y = Dy/D.
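The determinant quotients can be sketched directly in Python. The 2 × 2 system used below is an illustration of my own, not one from the slides.

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's Rule."""
    D  = a1 * b2 - a2 * b1   # determinant of the coefficient matrix
    Dx = c1 * b2 - c2 * b1   # constants replace the x-column
    Dy = a1 * c2 - a2 * c1   # constants replace the y-column
    if D == 0:
        raise ValueError("singular system: no unique solution")
    return Dx / D, Dy / D

# Illustrative system: 2x + y = 5, x - y = 1
x, y = cramer_2x2(2, 1, 5, 1, -1, 1)
print(x, y)  # → 2.0 1.0
```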
Solving a linear system with matrices using Gaussian elimination is a
structured, organized, and quite efficient method.
I. DIRECT SOLUTION
GAUSS ELIMINATION
SOLUTION:
Transcribing the linear system into an augmented matrix:

x + y + z = 3        1  1  1 | 3
x + 2y + 3z = 0      1  2  3 | 0
x + 3y + 2z = 3      1  3  2 | 3
Row reducing (applying the Gaussian elimination method to) the
augmented matrix:

R2 → R2 - R1:
1  1  1 |  3
0  1  2 | -3
1  3  2 |  3

R3 → R3 - R1:
1  1  1 |  3
0  1  2 | -3
0  2  1 |  0

R2 → 2·R2:
1  1  1 |  3
0  2  4 | -6
0  2  1 |  0

R3 → R3 - R2:
1  1  1 |  3        x + y + z = 3
0  2  4 | -6        2y + 4z = -6
0  0 -3 |  6        -3z = 6
Reduced matrix in echelon form, and the resulting linear system of
equations to solve:

1  1  1 |  3        x + y + z = 3
0  2  4 | -6        2y + 4z = -6
0  0 -3 |  6        -3z = 6

Back substitution gives: z = -2
2y + 4z = -6, so 2y + 4(-2) = 2y - 8 = -6, giving 2y = 2 and y = 1
x + y + z = 3, so x + 1 - 2 = 3, giving x = 4
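The elimination-then-back-substitution procedure can be sketched in a few lines of Python; running it on the system above reproduces (x, y, z) = (4, 1, -2).

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination and back substitution."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]  # augmented
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]            # partial pivoting
        for r in range(col + 1, n):                # eliminate below the pivot
            f = M[r][col] / M[col][col]
            M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[1, 1, 1], [1, 2, 3], [1, 3, 2]], [3, 0, 3])
print([round(v, 6) for v in x])  # → [4.0, 1.0, -2.0]
```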
I. DIRECT SOLUTION
GAUSS JORDAN METHOD
EXAMPLE: Solve the system
-x + 2y = -6
3x - 4y = 14
SOLUTION: The first step is to write the augmented matrix of the system.
-1  2 | -6
 3 -4 | 14
We can multiply the first row by -1 to make the leading entry 1.
 1 -2 |  6
 3 -4 | 14
We can now multiply the first row by 3 and subtract it from the second row.
 1            -2             |  6
 3 - (1 x 3)  -4 - (-2 x 3)  | 14 - (6 x 3)
=
 1 -2 |  6
 0  2 | -4
Multiplying the second row by 1/2 makes its leading entry 1.
 1 -2 |  6
 0  1 | -2
The second entry of the first row should be 0. In order to do that, we multiply
the second row by 2 and add it to the first row.
 1 + (0 x 2)  -2 + (1 x 2)  | 6 + (-2 x 2)        1  0 |  2
 0             1            | -2             =    0  1 | -2
x + 0y = 2
0x + y = -2
Thus, the solution of the system of equations is x = 2 and y = -2.
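Gauss-Jordan differs from plain Gaussian elimination in that it clears each pivot column both above and below the pivot, so no back substitution is needed. A minimal sketch, run on the same 2 × 2 system:

```python
def gauss_jordan_solve(A, b):
    """Reduce [A | b] all the way to reduced row-echelon form."""
    n = len(A)
    M = [list(map(float, A[i])) + [float(b[i])] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]       # make the leading entry 1
        for r in range(n):
            if r != col:                        # clear the column above and below
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

x = gauss_jordan_solve([[-1, 2], [3, -4]], [-6, 14])
print([round(v, 6) for v in x])  # → [2.0, -2.0]
```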
II. INDIRECT or ITERATIVE
SOLUTIONS
JACOBI’s METHOD
To begin, solve the 1st equation for x, the 2nd equation for y, and so on, to obtain the rewritten
equations.
Then make an initial guess of the solution (x0, y0, z0, …). Substitute these values into the
right-hand side of the rewritten equations to obtain the first approximation (x1, y1, z1, …).
This accomplishes one iteration.
In the same way, the second approximation is computed by substituting the first
approximation’s x-values into the right hand side of the rewritten equations.
By repeated iterations, we form a sequence of approximations.
EXAMPLE: Solve by Jacobi's method:
20x + y - 2z = 17
3x + 20y - z + 18 = 0
2x - 3y + 20z = 25
SOLUTION:
Rewriting the above equations we get:
x = (1/20)(17 - y + 2z)
y = (1/20)(-18 - 3x + z)
z = (1/20)(25 - 2x + 3y)
First Iteration (initial guess x = y = z = 0):
x1 = (1/20)(17) = 0.85
y1 = (1/20)(-18) = -0.90
z1 = (1/20)(25) = 1.25
Second Iteration:
x2 = (1/20)[17 - (-0.90) + 2(1.25)] = 1.02
y2 = (1/20)[-18 - 3(0.85) + 1.25] = -0.965
z2 = (1/20)[25 - 2(0.85) + 3(-0.90)] = 1.03
Third Iteration:
x3 = (1/20)[17 - (-0.965) + 2(1.03)] = 1.00125
y3 = (1/20)[-18 - 3(1.02) + 1.03] = -1.00150
z3 = (1/20)[25 - 2(1.02) + 3(-0.965)] = 1.00325
The iterates are converging to the solution x = 1, y = -1, z = 1.
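The defining feature of Jacobi's method is that every update in a sweep uses only the previous iterate. In Python that is exactly what a simultaneous tuple assignment does:

```python
def jacobi(x, y, z, iters):
    """Jacobi iteration for 20x + y - 2z = 17, 3x + 20y - z = -18,
       2x - 3y + 20z = 25.  The whole tuple on the right is evaluated
       with the OLD values before any variable is reassigned."""
    for _ in range(iters):
        x, y, z = ((17 - y + 2 * z) / 20,
                   (-18 - 3 * x + z) / 20,
                   (25 - 2 * x + 3 * y) / 20)
    return x, y, z

x, y, z = jacobi(0.0, 0.0, 0.0, 25)
print(round(x, 6), round(y, 6), round(z, 6))  # → 1.0 -1.0 1.0
```

The system is diagonally dominant, so the iteration converges quickly from any starting guess.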
II. INDIRECT or ITERATIVE
SOLUTIONS
GAUSS SEIDEL METHOD
Unlike Jacobi's method, each newly computed value is used immediately in the
remaining equations of the same iteration.
EXAMPLE: Solve by the Gauss-Seidel method:
3x + y = 11
2x + 5y = 16
Rewriting: x = (1/3)(11 - y), y = (1/5)(16 - 2x).
1st Approximation (starting with y = 0)
x1 = (1/3)(11 - 0) = 3.666667
y1 = (1/5)[16 - 2(3.666667)] = (1/5)(8.666667) = 1.733333
2nd Approximation
x2 = (1/3)(11 - 1.733333) = (1/3)(9.266667) = 3.088889
y2 = (1/5)[16 - 2(3.088889)] = (1/5)(9.822222) = 1.964444
3rd Approximation
x3 = (1/3)(11 - 1.964444) = 3.011852
y3 = (1/5)[16 - 2(3.011852)] = 1.995259
4th Approximation
x4 = (1/3)(11 - 1.995259) = (1/3)(9.004741) = 3.001580
y4 = (1/5)[16 - 2(3.001580)] = (1/5)(9.996840) = 1.999368
5th Approximation
x5 = (1/3)(11 - 1.999368) = (1/3)(9.000632) = 3.000211
y5 = (1/5)[16 - 2(3.000211)] = (1/5)(9.999579) = 1.999916
6th Approximation
x6 = (1/3)(11 - 1.999916) = (1/3)(9.000084) = 3.000028
y6 = (1/5)[16 - 2(3.000028)] = (1/5)(9.999944) = 1.999989
Solution by Gauss Seidel Method:
x = 3.000028 ≈ 3
y = 1.999989 ≈ 2
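The same six sweeps can be sketched in Python; note how `y` is updated with the `x` just computed inside the same sweep, which is what distinguishes Gauss-Seidel from Jacobi.

```python
def gauss_seidel(x, y, iters):
    """Gauss-Seidel iteration for 3x + y = 11, 2x + 5y = 16.
       Each new value is used immediately within the sweep."""
    history = []
    for _ in range(iters):
        x = (11 - y) / 3          # uses the latest y
        y = (16 - 2 * x) / 5      # uses the x just computed above
        history.append((x, y))
    return history

steps = gauss_seidel(0.0, 0.0, 6)
for i, (x, y) in enumerate(steps, 1):
    print(f"iter {i}: x = {x:.6f}, y = {y:.6f}")
```

The sixth iterate matches the hand computation: x ≈ 3.000028, y ≈ 1.999989.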
III. NUMERICAL INTEGRATION
TRAPEZOIDAL RULE
The trapezoidal rule is a numerical integration method used to calculate the
approximate value of a definite integral.
The rule is based on approximating the integral of f(x) by the integral of the
linear function that passes through the points (a, f(a)) and (b, f(b)). The
integral of this linear function equals the area of the trapezoid under its
graph. It follows that:

∫[a to b] f(x) dx ≈ (b - a) · [f(a) + f(b)] / 2

The error here is:

E = -[(b - a)³ / 12] · f''(ξ)

where ξ is a number between a and b.
EXAMPLE: Use the trapezoidal rule to estimate ∫[0 to 1] x² dx using four subintervals.
Calculate also the absolute and relative error.
SOLUTION:
The endpoints of the subintervals consist of elements of the set P = {0, 1/4, 1/2, 3/4, 1}
and Δx = (1 - 0)/4 = 1/4.

T4 = (Δx/2)[f(0) + 2f(1/4) + 2f(1/2) + 2f(3/4) + f(1)]
   = (1/8)(0 + 2 · 1/16 + 2 · 1/4 + 2 · 9/16 + 1)
   = 11/32 = 0.34375

The exact value is ∫[0 to 1] x² dx = 1/3 = 0.3333333, so

absolute error = |0.34375 - 0.3333333| = 0.0104167
relative error = 0.0104167 / 0.3333333 = 0.03125 = 3.125%

This agrees with the error formula: with Δx = 0.25 and f''(ξ) = 2,
E = -[(b - a) Δx² / 12] · f''(ξ) = -(1)(0.25)²(2)/12 = -0.0104167.
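A composite trapezoidal rule is a one-liner plus a sum; running it on the example reproduces T4 = 0.34375 and the errors above.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h * total / 2

T4 = trapezoid(lambda x: x * x, 0.0, 1.0, 4)
exact = 1.0 / 3.0
print(T4)                        # → 0.34375
print(abs(T4 - exact))           # absolute error ≈ 0.0104167
print(abs(T4 - exact) / exact)   # relative error ≈ 0.03125
```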
III. NUMERICAL INTEGRATION
SIMPSON’s RULE
EXAMPLE: Approximate ∫[0 to 1] 1/(x + 3) dx using Simpson's Rule with n = 4.
SOLUTION:
With y = 1/(x + 3) and h = (1 - 0)/4 = 0.25, the nodes are x = 0, 0.25, 0.5, 0.75, 1:

f(0)    = 1/3    = 0.3333333
f(0.25) = 1/3.25 = 0.3076923
f(0.5)  = 1/3.5  = 0.2857143
f(0.75) = 1/3.75 = 0.2666667
f(1)    = 1/4    = 0.25

So,
AREA = (h/3)[f(0) + 4f(0.25) + 2f(0.5) + 4f(0.75) + f(1)]
     = (0.25/3)[0.3333333 + 4(0.3076923) + 2(0.2857143) + 4(0.2666667) + 0.25]
AREA = 0.2876831
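The 1-4-2-4-1 weighting generalizes to any even n; a short sketch, checked against the hand computation:

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-index nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h * s / 3

area = simpson(lambda x: 1 / (x + 3), 0.0, 1.0, 4)
print(area)  # close to the slides' value 0.2876831
```

The exact value is ln(4/3) ≈ 0.2876821, so Simpson's rule with only four subintervals is already accurate to about 1e-6 here.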
III. NUMERICAL INTEGRATION
ROMBERG INTEGRATION
It is an approximate computation of an integral using numerical techniques.
Given a set of data points (x0, y0), (x1, y1), …, (xn, yn) of a function
y = f(x), where the function f(x) is not known explicitly, we have to compute
the value of the definite integral

I = ∫[a to b] f(x) dx.

Romberg integration combines trapezoidal estimates obtained with successively
halved segment widths to produce higher-order estimates.
EXAMPLE: Table 1 shows the results obtained for the integral using the multi-
segment Trapezoidal Rule for the time required for 50% consumption of a
reactant in an experiment.
Table 1: Multi-Segment Trapezoidal Rule Values

n    Value     E_t        |ε_t| %     |ε_a| %
1    191190   -1056.17    0.55555     ---
2    190420    -282.12    0.14840     0.40650
3    190260    -127.31    0.067000    0.08140
4    190200     -72.017   0.037900    0.029100
5    190180     -46.216   0.024300    0.013600
6    190170     -32.142   0.016900    0.0074000
7    190160     -23.636   0.012400    0.0045000
8    190150     -18.107   0.0095000   0.0029000
Use Romberg’s rule to find the time required for 50% of consumption. Use the 1, 2,
4, and 8 segment Trapezoidal Rule results as given in the Table I.
SOLUTION:
Romberg extrapolation combines two trapezoidal estimates by

I(better) ≈ I(2n) + [I(2n) - I(n)] / (4^k - 1)

where k is the order of the extrapolation. From Table 1, the needed values from
the original Trapezoidal Rule are
I(1) = 191190, I(2) = 190420, I(4) = 190200, I(8) = 190150.
First-order extrapolation (k = 1):
R(2,2) = 190420 + (190420 - 191190)/3 = 190163.33
R(3,2) = 190200 + (190200 - 190420)/3 = 190126.67
R(4,2) = 190150 + (190150 - 190200)/3 = 190133.33
For the second-order extrapolation values (k = 2),
R(3,3) = 190126.67 + (190126.67 - 190163.33)/15 = 190124.22
R(4,3) = 190133.33 + (190133.33 - 190126.67)/15 = 190133.78
Similarly, the third-order extrapolation (k = 3) gives
R(4,4) = 190133.78 + (190133.78 - 190124.22)/63 = 190133.93
so the time required for 50% consumption is approximately 190134.
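The Romberg table can be built mechanically from the 1-, 2-, 4-, and 8-segment trapezoidal values, using the same (4^k·finer − coarser)/(4^k − 1) combination as above:

```python
def romberg_from_trapezoid(T):
    """Romberg extrapolation from trapezoidal estimates T[k] computed
       with 2**k segments (here taken straight from Table 1)."""
    R = [T[:]]
    for j in range(1, len(T)):
        prev = R[-1]
        R.append([(4**j * prev[i + 1] - prev[i]) / (4**j - 1)
                  for i in range(len(prev) - 1)])
    return R

# 1-, 2-, 4-, and 8-segment trapezoidal values from Table 1
R = romberg_from_trapezoid([191190.0, 190420.0, 190200.0, 190150.0])
print(round(R[-1][0], 2))  # → 190133.93
```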
Suppose that the data points are (x1, y1), (x2, y2), …, (xn, yn), where x is
the independent variable and y is the dependent variable. The fitting curve
f(x) has a deviation (error) from each data point:

d1 = y1 - f(x1), d2 = y2 - f(x2), …, dn = yn - f(xn)

According to the method of least squares, the best fitting curve has the
property that d1² + d2² + … + dn² = Σ di² is a minimum.
IV. CURVE FITTING
LEAST SQUARE METHODS
1.) Least Squares Method for Fitting a Linear Relationship (Linear
Regression)
Consider a set of n values (x1, y1), (x2, y2), …, (xn, yn).
Suppose we have to find a linear relationship of the form y = a + bx among the
above set of x and y values.
EXAMPLE: Fit a straight line y = a + bx to the data
x: 1 2 3 4 5
y: 3 4 5 6 8
SOLUTION:
We shall prepare the following table.

x     y     x²     xy
1     3     1      3
2     4     4      8
3     5     9      15
4     6     16     24
5     8     25     40
---------------------
15    26    55     90
Normal Equations for fitting y = a + bx are:
Σy = na + b Σx
Σxy = a Σx + b Σx²
i.e., solving,
26 = 5a + 15b    (multiplied by 3: 78 = 15a + 45b)
90 = 15a + 55b
Subtracting, 12 = 10b; b = 1.2; a = (26 - 15 · 1.2)/5 = 1.6,
so the fitted line is y = 1.6 + 1.2x.
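Solving the normal equations in closed form gives the usual regression formulas; applied to the table above they reproduce a = 1.6, b = 1.2.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

a, b = fit_line([1, 2, 3, 4, 5], [3, 4, 5, 6, 8])
print(round(a, 6), round(b, 6))  # → 1.6 1.2
```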
EXAMPLE: Given the following data, fit an equation of the form y = a·x^b.
X: 1    2    3    4    5
Y: 0.5  2    4.5  8    12.5
SOLUTION:
y = a·x^b  implies  ln y = ln a + b ln x,
i.e., Y = A + bX, where Y = ln y, A = ln a, X = ln x.
x    y      Y = ln y    X = ln x    X²        XY
1    0.5    -0.6931     0           0         0
2    2       0.6931     0.6931      0.4804    0.4804
3    4.5     1.5041     1.0986      1.2069    1.6524
4    8       2.0794     1.3863      1.9218    2.8827
5    12.5    2.5257     1.6094      2.5902    4.0649
----------------------------------------------------
Σ            6.1092     4.7874      6.1993    9.0804

The normal equations for the line Y = A + bX are:
ΣY = nA + b ΣX:     6.1092 = 5A + 4.7874b
ΣXY = A ΣX + b ΣX²: 9.0804 = 4.7874A + 6.1993b
Solving, b = 2.0 and A = -0.6931, so a = e^A = 0.5, and the fitted curve is
y = 0.5x².
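Since the power fit is just a straight-line fit in log-log coordinates, the linear-regression formulas can be reused directly:

```python
import math

def fit_power(xs, ys):
    """Fit y = a * x**b by a least-squares line in (ln x, ln y)."""
    X = [math.log(x) for x in xs]
    Y = [math.log(y) for y in ys]
    n = len(X)
    sx, sy = sum(X), sum(Y)
    sxx = sum(v * v for v in X)
    sxy = sum(u * v for u, v in zip(X, Y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = (sy - b * sx) / n
    return math.exp(A), b       # a = e**A

a, b = fit_power([1, 2, 3, 4, 5], [0.5, 2, 4.5, 8, 12.5])
print(round(a, 6), round(b, 6))  # → 0.5 2.0
```

The data lie exactly on y = 0.5x², so the fit recovers a = 0.5 and b = 2 up to rounding.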
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
METHOD OF SUCCESSIVE SUBSTITUTION
MAIN IDEA:
Rewrite a nonlinear equation into a form given by
x = f(x)
Starting with an initial guess x0, evaluate x_{i+1} = f(x_i) to yield x1, x2, ….
Continue the iteration until the result no longer changes to within a specified
tolerance ε, i.e., after m iterations |x_m - x_{m-1}| ≤ ε.
EXAMPLE: Find x that solves the following equation
SOLUTION:
Rearrange the equation into the form x = f(x), then iterate x_{i+1} = f(x_i)
from an initial guess until convergence.
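The slides' specific equation is not recoverable here, so the sketch below uses an assumed illustration: x = e^(−x), whose fixed point (≈ 0.567143) is attracting because |f'(x)| < 1 near it.

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{i+1} = g(x_i) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Assumed example (not from the slides): solve x = e**(-x).
root = fixed_point(lambda x: math.exp(-x), 0.5)
print(round(root, 6))  # → 0.567143
```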
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
HALF INTERVAL METHOD
Algorithm:
Let f(x) be a continuous function in the interval [a, b], such that f(a) and f(b) are
of opposite signs, i.e. f(a) · f(b) < 0.
Step 1. Take the initial approximation given by x0 = (a + b)/2; one of three
conditions arises for finding the 1st approximation x1.
I. If f(x0) = 0, we have a root at x0.
II. If f(a) · f(x0) < 0, the root lies between a and x0, so x1 = (a + x0)/2, and we repeat the
procedure by halving the interval again.
III. If f(b) · f(x0) < 0, the root lies between x0 and b, so x1 = (x0 + b)/2, and we repeat the
procedure by halving the interval again.
IV. Continue the process until the root is found to the desired accuracy.
SOLUTION:
The first two decimal places have stabilized; hence 0.8633 is the real root
correct to two decimal places.
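The halving procedure translates directly to code. The slides' equation is elided, so the sketch below uses an assumed illustration, f(x) = x³ + x − 1, which has a sign change on [0, 1] and a root near 0.6823.

```python
def bisect(f, a, b, tol=1e-6):
    """Half-interval (bisection) search on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        mid = (a + b) / 2
        if f(mid) == 0:
            return mid
        if f(a) * f(mid) < 0:
            b = mid          # root lies in [a, mid]
        else:
            a = mid          # root lies in [mid, b]
    return (a + b) / 2

# Assumed example (not from the slides): f(x) = x**3 + x - 1 on [0, 1]
root = bisect(lambda x: x**3 + x - 1, 0.0, 1.0)
print(round(root, 4))  # → 0.6823
```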
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
METHOD of FALSE POSITION
• The method of false position is so named because a false position of the curve is taken as the
initial approximation. Let y = f(x) be represented by the curve AB. The real root of the equation
f(x) = 0 is α, as shown in the adjoining figure. The false position of curve AB is taken as the
chord AB, and the initial approximation x0 is the point of intersection of the chord AB with the
x-axis. Successive approximations x1, x2, … are given by the points of intersection of chords
A'B, A''B, … with the x-axis, until the root is found to the desired accuracy. The equation of
the chord AB in two-point form is given by:
y - f(a) = [(f(b) - f(a)) / (b - a)] (x - a)
• This is another method to find the roots of f(x) = 0. It is also known as the
Regula Falsi method. In this method, we choose two points a and b such that
f(a) and f(b) are of opposite signs; hence a root lies between these points.
The equation of the chord joining the two points (a, f(a)) and (b, f(b)) is
given by:
y - f(a) = [(f(b) - f(a)) / (b - a)] (x - a)    …(5)
We replace the part of the curve between the points [a, f(a)] and [b, f(b)] by
the chord joining these points, and we take the point of intersection of the
chord with the x-axis as an approximation to the root (see Fig. 3). The point
of intersection is obtained by putting y = 0 in (5), as

x = x1 = [a f(b) - b f(a)] / [f(b) - f(a)]

EXAMPLE: Find the real root of x³ + x - 1 = 0 by the method of false position.
SOLUTION:
f(x) = x³ + x - 1. Here f(0) = -1 and f(1) = 1, so f(0) · f(1) < 0.
Also, since f(x) is continuous in [0, 1], at least one root exists in [0, 1].
Initial approximation: a = 0, b = 1.
After successive chord intersections, the first two decimal places stabilize;
hence 0.6796 is the real root correct to two decimal places.
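The chord-intersection formula above is the entire method; iterating it on f(x) = x³ + x − 1 from [0, 1] passes through the value 0.6796 quoted above and settles at the root ≈ 0.6823.

```python
def false_position(f, a, b, iters=40):
    """Regula falsi: intersect the chord through (a, f(a)), (b, f(b))
       with the x-axis and re-bracket the root each step."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(iters):
        x = (a * fb - b * fa) / (fb - fa)   # chord intersection
        fx = f(x)
        if fx == 0:
            break
        if fa * fx < 0:
            b, fb = x, fx    # root in [a, x]
        else:
            a, fa = x, fx    # root in [x, b]
    return x

root = false_position(lambda x: x**3 + x - 1, 0.0, 1.0)
print(round(root, 4))  # → 0.6823
```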
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
BAIRSTOW’s METHOD
Bairstow's method is an iterative method used to find both the real and complex roots of a
polynomial. It is based on synthetic division of the given polynomial by a quadratic factor
and can be used to find all the roots of a polynomial. Given a polynomial, say

f(x) = a_n x^n + a_{n-1} x^(n-1) + … + a_1 x + a_0,

dividing by a trial quadratic x² + px + q leaves a remainder Rx + S, with the quotient
coefficients obtained by synthetic division:
b_n = a_n
b_{n-1} = a_{n-1} - p·b_n
b_i = a_i - p·b_{i+1} - q·b_{i+2},   i = n - 2 to 0

Here the values of R and S depend on p and q, and hence they are taken as
functions of p and q. If p and q are so chosen that R and S vanish (i.e. no
remainder), then (x² + px + q) will be a factor of the given polynomial. The
problem thereby reduces to finding p and q such that
R(p, q) = 0 and S(p, q) = 0    …(2)
Let (p0, q0) be an initial approximation and (p0 + ∆p0, q0 + ∆q0) be
the actual solution of equation (2). Then R(p0 + ∆p0, q0 + ∆q0) = 0 and
S(p0 + ∆p0, q0 + ∆q0) = 0    …(3)
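A compact sketch of the full procedure follows, written in the common x² − r·x − s sign convention (r = −p, s = −q). It synthetically divides twice (the second pass yields the partial derivatives), solves the 2 × 2 Newton system for Δr and Δs, and repeats. The quartic used is an assumed illustration, not the slides' polynomial.

```python
import math

def bairstow_factor(a, r, s, tol=1e-10, max_iter=100):
    """Find a quadratic factor x**2 - r*x - s of the polynomial
       a[0] + a[1]*x + ... + a[n]*x**n by Bairstow's method.
       Returns (r, s, quotient coefficients, lowest degree first)."""
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):     # synthetic division by x^2 - r x - s
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):      # second division gives the derivatives
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # Newton step: c2*dr + c3*ds = -b1,  c1*dr + c2*ds = -b0
        det = c[2] * c[2] - c[3] * c[1]
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    return r, s, b[2:]                     # b[2:] is the deflated quotient

# Assumed example: (x-1)(x-2)(x-3)(x-4) = x^4 - 10x^3 + 35x^2 - 50x + 24,
# starting near the factor x^2 - 3x + 2 (i.e. r = 3, s = -2).
a = [24.0, -50.0, 35.0, -10.0, 1.0]
r, s, q = bairstow_factor(a, 2.8, -1.8)
d = math.sqrt(r * r + 4 * s)               # roots of x^2 - r x - s
roots = sorted([(r - d) / 2, (r + d) / 2])
print(roots)       # ≈ [1.0, 2.0]; deflated quotient q ≈ x^2 - 7x + 12
```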
EXAMPLE: Find all the roots of the polynomial by Bairstow's method, with the
initial values r = 0.5 and s = -0.5.
SOLUTION:
Set iteration = 1. Updating the trial factor,
s = -0.5 + ∆s = -0.203580916
and the approximate percent relative errors are
|ε_a,r| = |∆r / r| × 100 = 69.0983582 %
|ε_a,s| = |∆s / s| × 100 = 145.602585 %
Set iteration = 2 and repeat the procedure until the errors fall below the
stopping criterion.

VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
TAYLOR SERIES METHOD
Consider the initial value problem

dy/dx = f(x, y),  y(x0) = y0

where f is a function of two variables x and y and (x0, y0) is a known point
on the solution curve.
If the existence of all higher order partial derivatives is assumed for y at
x = x0, then by Taylor series the value of y at any neighbouring point x0 + h
can be written as

y(x0 + h) = y(x0) + h y'(x0) + (h²/2!) y''(x0) + (h³/3!) y'''(x0) + …

where ' represents the derivative with respect to x. Since y0 is known at x0,
y' at x0 can be found by computing f(x0, y0). Similarly, higher derivatives of
y at x0 can be obtained by differentiating f.
SOLUTION:
Given
VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
EULER's METHOD
EXAMPLE: Use Euler's method with h = 0.1 to find approximate values for the
solution of the initial value problem
y' + 2y = x³ e^(-2x),  y(0) = 1
at x = 0.1, 0.2, 0.3.
SOLUTION:
We rewrite the equation in the form y' = f(x, y), i.e. y' = -2y + x³ e^(-2x),
and apply y_{i+1} = y_i + h f(x_i, y_i) starting from y(0) = 1.
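The update rule is a single line of code. The right-hand side below, f(x, y) = −2y + x³e^(−2x), matches the IVP as stated above (the slides' own equation is partly elided, so treat it as the assumed problem).

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: y_{i+1} = y_i + h * f(x_i, y_i)."""
    x, y = x0, y0
    vals = []
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
        vals.append((round(x, 10), y))
    return vals

# y' + 2y = x^3 e^(-2x)  =>  f(x, y) = -2y + x^3 e^(-2x), y(0) = 1
f = lambda x, y: -2 * y + x**3 * math.exp(-2 * x)
vals = euler(f, 0.0, 1.0, 0.1, 3)
for x, y in vals:
    print(f"y({x}) ≈ {y:.6f}")
```

The first step gives y(0.1) ≈ 0.8, since f(0, 1) = −2.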
VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
RUNGE KUTTA METHOD
Moreover, it can be shown that a method with local truncation error O(h^(k+1))
has global truncation error O(h^k). In Sections 3.1 and 3.2 we studied
numerical methods where k = 1 and k = 2. We'll skip methods for which k = 3 and
proceed to the Runge-Kutta method, the most widely used method, for which
k = 4. The magnitude of the local truncation error is determined by the fifth
derivative y⁽⁵⁾ of the solution of the initial value problem. Therefore the
local truncation error will be larger where |y⁽⁵⁾| is large, or smaller where
|y⁽⁵⁾| is small. The Runge-Kutta method computes approximate values
y1, y2, …, yn of the solution of y' = f(x, y), y(x0) = y0 at
x0, x0 + h, …, x0 + nh as follows:

k1i = f(xi, yi)
k2i = f(xi + h/2, yi + (h/2) k1i)
k3i = f(xi + h/2, yi + (h/2) k2i)
k4i = f(xi + h, yi + h k3i)

and

y_{i+1} = yi + (h/6)(k1i + 2 k2i + 2 k3i + k4i).
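The four-stage update above can be sketched directly. As an illustration it is run on the same assumed IVP as the Euler example, y' = −2y + x³e^(−2x), y(0) = 1, which has the closed-form solution y = e^(−2x)(1 + x⁴/4) to compare against.

```python
import math

def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: -2 * y + x**3 * math.exp(-2 * x)
y_rk4 = rk4(f, 0.0, 1.0, 0.1, 3)
exact = math.exp(-0.6) * (1 + 0.3**4 / 4)   # y(0.3) from the exact solution
print(y_rk4, exact)
```

With h = 0.1, three RK4 steps agree with the exact value at x = 0.3 to about six decimal places, versus roughly two decimal places for Euler's method above.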