
CE 333 ASSIGNMENT 01

Solutions of Simultaneous System of Linear Equations

Submitted by:
GELIZON, SHELLA MAE A.
DATARIO, PETER SHANDEL P.
SUMOOK, EDRIAN ANTONIO
I. DIRECT SOLUTION
• Inverse Method
• Cramer’s Rule
• Gauss Elimination
• Gauss Jordan Method
II. INDIRECT or ITERATIVE SOLUTIONS
• Jacobi's Method
• Gauss Seidel Method
III. NUMERICAL INTEGRATION
• Trapezoidal Rule
• Simpson’s Rule
• Romberg Integration
IV. CURVE FITTING
• Least Squares Method
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
• Method of Successive Substitution
• Half Interval Method
• Method of False Position
• Bairstow’s Method
VI. NUMERICAL SOLUTIONS of ORDINARY
DIFFERENTIAL EQUATIONS
• Taylor Series Method
• Euler’s Method
• Runge Kutta Method
I. DIRECT SOLUTION
INVERSE METHOD
Solving a system of linear equations using the inverse of a matrix requires
defining two new matrices: X, the matrix representing the variables of the
system, and B, the matrix representing the constants. Using matrix
multiplication, we may write a system of equations with the same number of
equations as variables as

AX = B

where A is the coefficient matrix, X is the variable matrix, and B is the
constant matrix. To solve the system AX = B, we find A⁻¹ and compute
X = A⁻¹B.
I. DIRECT SOLUTION
INVERSE METHOD
EXAMPLE: Solve the following system using the inverse of a matrix.
5x + 15y + 56z = 35
-4x – 11y – 41z = -26
-x – 3y – 11z = -7

SOLUTION:

Write the equation AX = B:

[ 5  15  56 ] [x]   [ 35]
[-4 -11 -41 ] [y] = [-26]
[-1  -3 -11 ] [z]   [ -7]
First, we will find the inverse of A by augmenting with the identity.

[ 5  15  56  | 1 0 0 ]
[-4 -11 -41  | 0 1 0 ]
[-1  -3 -11  | 0 0 1 ]

Multiply row 1 by 1/5.

[ 1   3  56/5 | 1/5 0 0 ]
[-4 -11 -41   |  0  1 0 ]
[-1  -3 -11   |  0  0 1 ]

Multiply row 1 by 4 and add to row 2.

[ 1  3  56/5 | 1/5 0 0 ]
[ 0  1  19/5 | 4/5 1 0 ]
[-1 -3 -11   |  0  0 1 ]

Add row 1 to row 3.

[ 1  3  56/5 | 1/5 0 0 ]
[ 0  1  19/5 | 4/5 1 0 ]
[ 0  0   1/5 | 1/5 0 1 ]

Multiply row 2 by -3 and add to row 1.

[ 1  0  -1/5 | -11/5 -3 0 ]
[ 0  1  19/5 |   4/5  1 0 ]
[ 0  0   1/5 |   1/5  0 1 ]

Multiply row 3 by 5.

[ 1  0  -1/5 | -11/5 -3 0 ]
[ 0  1  19/5 |   4/5  1 0 ]
[ 0  0   1   |   1    0 5 ]

Multiply row 3 by 1/5 and add to row 1.

[ 1  0  0    | -2  -3  1 ]
[ 0  1  19/5 | 4/5  1  0 ]
[ 0  0  1    |  1   0  5 ]

Multiply row 3 by -19/5 and add to row 2.

[ 1  0  0 | -2 -3   1 ]
[ 0  1  0 | -3  1 -19 ]
[ 0  0  1 |  1  0   5 ]
So,

       [ -2 -3   1 ]
A⁻¹ =  [ -3  1 -19 ]
       [  1  0   5 ]

Multiply both sides of the equation AX = B by A⁻¹; we want A⁻¹AX = A⁻¹B:

[ -2 -3   1 ] [ 5  15  56 ] [x]   [ -2 -3   1 ] [ 35]
[ -3  1 -19 ] [-4 -11 -41 ] [y] = [ -3  1 -19 ] [-26]
[  1  0   5 ] [-1  -3 -11 ] [z]   [  1  0   5 ] [ -7]
Thus,

    [ -70 + 78 - 7    ]   [ 1 ]
X = [ -105 - 26 + 133 ] = [ 2 ]
    [  35 + 0 - 35    ]   [ 0 ]

Therefore, the solution is (1, 2, 0).
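As a quick check of the arithmetic, the same computation can be reproduced in a few lines of NumPy. This sketch is an illustration added to the text, not part of the original assignment; note that in practice numpy.linalg.solve is preferred over forming the inverse explicitly.

```python
import numpy as np

A = np.array([[5, 15, 56],
              [-4, -11, -41],
              [-1, -3, -11]], dtype=float)
B = np.array([35, -26, -7], dtype=float)

A_inv = np.linalg.inv(A)   # same matrix found by the row reduction above
X = A_inv @ B              # X = A^(-1) B
print(A_inv)               # -> [[-2 -3 1], [-3 1 -19], [1 0 5]]
print(X)                   # -> [1. 2. 0.]
```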


I. DIRECT SOLUTION
 
CRAMER’S RULE
Cramer’s Rule is a viable and efficient method for finding solutions to
systems with an arbitrary number of unknowns, provided that we have the
same number of equations as unknowns. Cramer’s Rule will give us the
unique solution to a system of equations, if it exists. However, if the system
has no solution or an infinite number of solutions, this will be indicated by
a determinant of zero. To find out if the system is inconsistent or dependent,
another method, such as elimination, will have to be used.

To understand Cramer’s Rule, let’s look closely at how we solve systems of
linear equations using basic row operations. Consider a system of two
equations in two variables:

a1x + b1y = c1   (1)
a2x + b2y = c2   (2)
I. DIRECT SOLUTION
 
CRAMER’S RULE
We eliminate one variable using row operations and solve for the other. Say
that we wish to solve for x. If equation (2) is multiplied by the opposite of
the coefficient of y in equation (1), equation (1) is multiplied by the
coefficient of y in equation (2), and we add the two equations, the variable
y will be eliminated:

(1) × b2:      a1b2x + b1b2y = c1b2
(2) × (-b1):  -a2b1x - b1b2y = -c2b1
Now solve for x:

x(a1b2 - a2b1) = c1b2 - c2b1

x = (c1b2 - c2b1) / (a1b2 - a2b1)

Similarly, to solve for y, we will eliminate x:

(1) × (-a2):  -a1a2x - a2b1y = -a2c1
(2) × a1:      a1a2x + a1b2y = a1c2
Solving for y gives

y(a1b2 - a2b1) = a1c2 - a2c1

y = (a1c2 - a2c1) / (a1b2 - a2b1)

Notice that the denominator for both x and y is the determinant of the
coefficient matrix.
We can use these formulas to solve for x and y, but Cramer’s Rule also
introduces new notation:

D: determinant of the coefficient matrix
Dx: determinant of the numerator in the solution of x
x = Dx / D

Dy: determinant of the numerator in the solution of y

y = Dy / D

The key to Cramer’s Rule is replacing the variable column of interest with
the constant column and calculating the determinants. We can then express
x and y as a quotient of two determinants.
EXAMPLE: Solve the following 2x2 system using Cramer’s Rule.

12x + 3y = 15
2x - 3y = 13

SOLUTION:
Solve for x:

    | 15   3 |
    | 13  -3 |   (15)(-3) - (3)(13)    -84
x = --------- = ------------------- = ---- = 2
    | 12   3 |   (12)(-3) - (3)(2)     -42
    |  2  -3 |

Solve for y:

    | 12  15 |
    |  2  13 |   (12)(13) - (15)(2)    126
y = --------- = ------------------- = ---- = -3
    | 12   3 |         -42             -42
    |  2  -3 |

Therefore, the solution is (2, -3).
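The determinant bookkeeping of Cramer’s Rule translates directly to code. A minimal sketch added for illustration, where each numerator determinant is formed by swapping the constant column into the coefficient matrix:

```python
import numpy as np

A = np.array([[12, 3],
              [2, -3]], dtype=float)
b = np.array([15, 13], dtype=float)

D = np.linalg.det(A)
x = np.linalg.det(np.column_stack((b, A[:, 1]))) / D  # constants replace column 1
y = np.linalg.det(np.column_stack((A[:, 0], b))) / D  # constants replace column 2
print(x, y)   # -> 2.0 -3.0
```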
I. DIRECT SOLUTION
GAUSS ELIMINATION
Gauss elimination, in linear and multilinear algebra, is a process for
finding the solutions of a system of simultaneous linear equations by
systematically eliminating variables: the system is reduced to an equivalent
upper triangular system, which is then solved by back substitution. The
Gaussian elimination rules are the same as the rules for the three
elementary row operations; in other words, you can algebraically operate on
the rows of a matrix in the following three ways (or any combination of them):

• Interchanging two rows
• Multiplying a row by a constant (any constant which is not zero)
• Adding a multiple of one row to another row

And so, solving a linear system with matrices using Gaussian elimination
happens to be a structured, organized and quite efficient method.
I. DIRECT SOLUTION
GAUSS ELIMINATION

EXAMPLE: Solve the following linear system using Gaussian Elimination.

x + y + z = 3
x + 2y + 3z = 0
x + 3y + 2z = 3

SOLUTION:
Transcribing the linear system into an augmented matrix:

x + y + z = 3        [ 1  1  1 | 3 ]
x + 2y + 3z = 0      [ 1  2  3 | 0 ]
x + 3y + 2z = 3      [ 1  3  2 | 3 ]
Row reducing (applying the Gaussian elimination method to) the
augmented matrix:

[ 1  1  1 | 3 ]                 [ 1  1  1 |  3 ]
[ 1  2  3 | 0 ]  R2 → R2 - R1   [ 0  1  2 | -3 ]
[ 1  3  2 | 3 ]                 [ 1  3  2 |  3 ]

[ 1  1  1 |  3 ]  R3 → R3 - R1  [ 1  1  1 |  3 ]
[ 0  1  2 | -3 ]  R2 → 2R2      [ 0  2  4 | -6 ]
[ 1  3  2 |  3 ]                [ 0  2  1 |  0 ]

[ 1  1  1 |  3 ]                [ 1  1  1 |  3 ]      x + y + z = 3
[ 0  2  4 | -6 ]  R3 → R3 - R2  [ 0  2  4 | -6 ]      2y + 4z = -6
[ 0  2  1 |  0 ]                [ 0  0 -3 |  6 ]      -3z = 6
Reduced matrix in echelon form:         Resulting linear system of equations to solve:

[ 1  1  1 |  3 ]                        x + y + z = 3
[ 0  2  4 | -6 ]                        2y + 4z = -6
[ 0  0 -3 |  6 ]                        -3z = 6

From the third equation: z = -2
2y + 4z = -6  →  2y + 4(-2) = 2y - 8 = -6  →  2y = 2  →  y = 1

Applying the values of y and z to the first equation:
x + y + z = 3  →  x + (1) + (-2) = x - 1 = 3  →  x = 4

FINAL ANSWER: x = 4, y = 1, z = -2
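The forward elimination and back substitution above are straightforward to code. A minimal sketch, assuming nonzero pivots as in this example (no row interchanges):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):              # eliminate entries below each pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1, 1, 1], [1, 2, 3], [1, 3, 2]])
b = np.array([3, 0, 3])
print(gauss_eliminate(A, b))   # -> [ 4.  1. -2.]
```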
I. DIRECT SOLUTION
GAUSS JORDAN METHOD
The main goal of Gauss-Jordan Elimination is:
• to represent a system of linear equations in an augmented matrix
form
• then performing the 3 row operations on it until the reduced row
echelon form (RREF) is achieved
• Lastly, we can easily recognize the solutions from the RREF
EXAMPLE: Solve the system shown below using the Gauss Jordan
Elimination method.
-x + 2y =-6
3x – 4y = 14

SOLUTION: The first step is to write the augmented matrix of the system.

[ -1   2 | -6 ]
[  3  -4 | 14 ]

We can multiply the first row by -1 to make the leading entry 1.

[ 1  -2 |  6 ]
[ 3  -4 | 14 ]

We can now multiply the first row by 3 and subtract it from the second row.

[ 1            -2            |  6          ]     [ 1  -2 |  6 ]
[ 3 - (1)(3)   -4 - (-2)(3)  | 14 - (6)(3) ]  =  [ 0   2 | -4 ]

We now have 0 as the first entry of the second row.
To make the second entry of the second row 1, we can multiply the second
row by 1/2.

[ 1  -2 |  6 ]
[ 0   1 | -2 ]

The second entry of the first row should be 0. In order to do that, we multiply
the second row by 2 and add it to the first row.

[ 1 + (0)(2)   -2 + (1)(2)  | 6 + (-2)(2) ]     [ 1  0 |  2 ]
[ 0             1           | -2          ]  =  [ 0  1 | -2 ]

x + 0y = 2
0x + y = -2

Thus, the solution of the system of equations is x = 2 and y = -2.
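The same reduction to reduced row echelon form can be delegated to a computer algebra library; a short sketch using SymPy's rref is given below as an illustration (the slides work the steps by hand):

```python
import sympy as sp

# Augmented matrix of the system -x + 2y = -6, 3x - 4y = 14
M = sp.Matrix([[-1, 2, -6],
               [3, -4, 14]])

rref_matrix, pivots = M.rref()   # reduced row echelon form
print(rref_matrix)               # Matrix([[1, 0, 2], [0, 1, -2]]) -> x = 2, y = -2
```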
II. INDIRECT or ITERATIVE
SOLUTIONS
 
JACOBI’s METHOD
To begin, solve the 1st equation for the 1st unknown, the 2nd equation for the 2nd
unknown, and so on, to obtain the rewritten equations.

Then make an initial guess of the solution. Substitute these values into the right-hand
side of the rewritten equations to obtain the first approximation.
This accomplishes one iteration. In the same way, the second approximation is
computed by substituting the first approximation’s values into the right-hand side of
the rewritten equations. By repeated iterations, we form a sequence of approximations.

EXAMPLE: Solve the following equations by Jacobi’s Method, performing
three iterations only.

20x + y - 2z = 17
3x + 20y - z + 18 = 0
2x - 3y + 20z = 25
SOLUTION:
Rewriting the above equations we get:

x = (17 - y + 2z)/20      … (4)
y = (-18 - 3x + z)/20     … (5)
z = (25 - 2x + 3y)/20     … (6)

First iteration, starting from the initial guess x0 = y0 = z0 = 0 and
substituting in equations (4), (5), and (6):

x1 = 17/20 = 0.85
y1 = -18/20 = -0.90
z1 = 25/20 = 1.25

Second iteration, substituting the first approximation:

x2 = [17 + 0.90 + 2(1.25)]/20 = 1.02
y2 = [-18 - 3(0.85) + 1.25]/20 = -0.965
z2 = [25 - 2(0.85) + 3(-0.90)]/20 = 1.03

Third iteration, substituting the second approximation:

x3 = [17 + 0.965 + 2(1.03)]/20 = 1.00125
y3 = [-18 - 3(1.02) + 1.03]/20 = -1.0015
z3 = [25 - 2(1.02) + 3(-0.965)]/20 = 1.00325

After three iterations x = 1, y = -1, and z = 1
(approx.)
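A compact way to run the same Jacobi iteration is to split A into its diagonal and off-diagonal parts. An illustrative sketch:

```python
import numpy as np

def jacobi(A, b, x0, iterations):
    """Jacobi iteration: every component of the new iterate is computed
    from the previous iterate only."""
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[20, 1, -2],
              [3, 20, -1],
              [2, -3, 20]], dtype=float)
b = np.array([17, -18, 25], dtype=float)
print(jacobi(A, b, np.zeros(3), 3))   # -> [ 1.00125 -1.0015   1.00325]
```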
II. INDIRECT or ITERATIVE
SOLUTIONS
 
GAUSS SEIDEL METHOD
With the Jacobi method, the values of the unknowns obtained in the kth iteration
remain unchanged until the entire (k+1)th iteration has been calculated. With the
Gauss-Seidel method, we use each new value as soon as it is known. For example,
once we have computed x from the first equation, its value is then used in the
second equation to obtain the new y, and so on.

EXAMPLE: Solve the following equations using the Gauss Seidel Method.

2x + 5y = 16
3x + y = 11
SOLUTION:
From the above equations (taking the second for x and the first for y):

x = (11 - y)/3
y = (16 - 2x)/5

1st Approximation (starting from y = 0):
x = (1/3)(11 - 0) = (1/3)(11) = 3.666667
y = (1/5)[16 - 2(3.666667)] = (1/5)(8.666667) = 1.733333

2nd Approximation:
x = (1/3)(11 - 1.733333) = (1/3)(9.266667) = 3.088889
y = (1/5)[16 - 2(3.088889)] = (1/5)(9.822222) = 1.964444

3rd Approximation:
x = (1/3)(11 - 1.964444) = (1/3)(9.035556) = 3.011852
y = (1/5)[16 - 2(3.011852)] = (1/5)(9.976296) = 1.995259

4th Approximation:
x = (1/3)(11 - 1.995259) = (1/3)(9.004741) = 3.001580
y = (1/5)[16 - 2(3.001580)] = (1/5)(9.996840) = 1.999368

5th Approximation:
x = (1/3)(11 - 1.999368) = (1/3)(9.000632) = 3.000211
y = (1/5)[16 - 2(3.000211)] = (1/5)(9.999579) = 1.999916

6th Approximation:
x = (1/3)(11 - 1.999916) = (1/3)(9.000084) = 3.000028
y = (1/5)[16 - 2(3.000028)] = (1/5)(9.999944) = 1.999989

Solution by the Gauss Seidel Method:
x = 3.000028 ≈ 3
y = 1.999989 ≈ 2
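Because each fresh value is used immediately, the Gauss-Seidel loop is even simpler to code. A sketch for this 2x2 example:

```python
def gauss_seidel(iterations=6):
    """Gauss-Seidel for 3x + y = 11 and 2x + 5y = 16:
    each new x is used immediately when updating y."""
    x, y = 0.0, 0.0
    for k in range(1, iterations + 1):
        x = (11 - y) / 3          # from 3x + y = 11
        y = (16 - 2 * x) / 5      # from 2x + 5y = 16, using the fresh x
        print(f"approximation {k}: x = {x:.6f}, y = {y:.6f}")

gauss_seidel()   # converges to x = 3, y = 2
```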
III. NUMERICAL INTEGRATION
TRAPEZOIDAL RULE
The trapezoidal rule in mathematics is a numerical integration method that we use
to calculate an approximate value of a definite integral.

The rule is based on approximating the integral of f(x) by the integral of the
linear function that passes through the points (a, f(a)) and (b, f(b)). The
integral of this linear function is exactly the area of the trapezoid under the
graph of the linear function. It follows that:

∫[a,b] f(x) dx ≈ (b - a) · [f(a) + f(b)]/2

The error here is:

E = -(b - a)³ f''(ξ)/12

where ξ is a number between a and b.
EXAMPLE: Use the trapezoidal rule with four subintervals to estimate
∫[0,1] x² dx. Calculate also the absolute and relative error.

SOLUTION:
The endpoints of the subintervals consist of elements of the set
P = {0, 1/4, 1/2, 3/4, 1} and Δx = (1 - 0)/4 = 1/4.

∫[0,1] x² dx ≈ (Δx/2)[f(0) + 2f(1/4) + 2f(1/2) + 2f(3/4) + f(1)]
            = (1/8)[0 + 2(1/16) + 2(1/4) + 2(9/16) + 1]
            = 11/32 = 0.34375

The absolute error is given by:
|1/3 - 11/32| = 1/96 ≈ 0.0104

The relative error is given by:
(1/96)/(1/3) = 1/32 ≈ 3.1%
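A minimal sketch of the composite trapezoidal rule, applied to the same integral:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

approx = trapezoid(lambda x: x**2, 0.0, 1.0, 4)
exact = 1.0 / 3.0
print(approx)                       # 0.34375   (= 11/32)
print(abs(exact - approx))          # ~0.0104   (= 1/96)
print(abs(exact - approx) / exact)  # ~0.03125  (= 1/32)
```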
III. NUMERICAL INTEGRATION
SIMPSON’s RULE
In the last section, Trapezoidal Rule, we used straight lines to model a curve and
learned that it was an improvement over using rectangles for finding areas under
curves because we had much less "missing" from each segment. We seek an even
better approximation for the area under a curve. In Simpson's Rule, we will use
parabolas to approximate each part of the curve. This proves to be very efficient
since it's generally more accurate than the other numerical methods we've seen.

We divide the area into n equal segments of width Δx = (b - a)/n (n even).
With ordinates y0, y1, …, yn at the segment endpoints, the approximate area is
given by:

AREA ≈ (Δx/3)(y0 + 4y1 + 2y2 + 4y3 + … + 2y(n-2) + 4y(n-1) + yn)
EXAMPLE: Approximate ∫[2,3] dx/(x + 1) using Simpson’s Rule with n = 4.

SOLUTION:
Here y = 1/(x + 1) and Δx = (3 - 2)/4 = 0.25, so the ordinates are taken at
x = 2, 2.25, 2.5, 2.75, 3:

y0 = f(2)    = 1/3    = 0.3333333
y1 = f(2.25) = 1/3.25 = 0.3076923
y2 = f(2.5)  = 1/3.5  = 0.2857143
y3 = f(2.75) = 1/3.75 = 0.2666667
y4 = f(3)    = 1/4    = 0.25

So,
AREA = (0.25/3)[0.3333333 + 4(0.3076923) + 2(0.2857143) + 4(0.2666667) + 0.25]

AREA = 0.2876831
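The same computation with a composite Simpson's rule routine (n must be even):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(simpson(lambda x: 1 / (x + 1), 2.0, 3.0, 4))  # 0.2876831...
print(np.log(4 / 3))                                # exact value 0.2876821...
```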
III. NUMERICAL INTEGRATION
ROMBERG INTEGRATION
 It is an approximate computation of the integral I = ∫[a,b] f(x) dx using
numerical techniques. Given a set of data points (x0, y0), (x1, y1), …, (xn, yn)
of a function y = f(x), where the function f(x) is not known explicitly, we have
to compute the value of the definite integral.

 This method is used to improve the approximate results obtained by the
finite difference method.

 This integration process was first proposed by Werner Romberg in 1955.
 This method can be stopped when two successive values are very close to each
other.

EXAMPLE: Table 1 shows the results obtained for the integral T, the time
required for 50% consumption of a reactant in an experiment, using the
multi-segment Trapezoidal Rule.
Table 1: Multi-Segment Trapezoidal Rule Values

n    Value     Et         |et| %     |ea| %
1    191190    -1056.17   0.55555    ---
2    190420    -282.12    0.14840    0.40650
3    190260    -127.31    0.067000   0.08140
4    190200    -72.017    0.037900   0.029100
5    190180    -46.216    0.024300   0.013600
6    190170    -32.142    0.016900   0.0074000
7    190160    -23.636    0.012400   0.0045000
8    190150    -18.107    0.0095000  0.0029000

Use Romberg’s rule to find the time required for 50% consumption. Use the 1, 2,
4, and 8 segment Trapezoidal Rule results as given in Table 1.
SOLUTION:
From Table 1, the needed values from the original Trapezoidal Rule are

I1 = 191190, I2 = 190420, I4 = 190200, I8 = 190150

where the above four values correspond to using 1, 2, 4 and 8 segment
Trapezoidal Rule, respectively. Richardson extrapolation combines an n-segment
and a 2n-segment trapezoidal value as (4·I2n - In)/3. To get the first order
extrapolation values:

(4(190420) - 191190)/3 = 190163        (from the 1- and 2-segment values)

Similarly,

(4(190200) - 190420)/3 = 190127        (from the 2- and 4-segment values)
(4(190150) - 190200)/3 = 190133        (from the 4- and 8-segment values)

For the second order extrapolation values, the factor 4 is replaced by
4² = 16 and the divisor by 15:

(16(190127) - 190163)/15 = 190125

Similarly,

(16(190133) - 190127)/15 = 190133

For the third order extrapolation value (factor 64, divisor 63):

(64(190133) - 190125)/63 ≈ 190134
Table 2 shows these improved estimates in a tree graph.

Table 2: Improved estimates of the integral value using Romberg Integration

                      1st Order    2nd Order    3rd Order
1-segment  191190
                      190163
2-segment  190420                  190125
                      190127                    190134
4-segment  190200                  190133
                      190133
8-segment  190150
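The extrapolation tree of Table 2 can be generated mechanically. A sketch that builds the Romberg table from the four trapezoidal values (the printed rows agree with Table 2 to within rounding of the intermediate values):

```python
def romberg_table(trap_values):
    """Romberg extrapolation table from trapezoidal estimates
    computed with 1, 2, 4, 8, ... segments."""
    rows = [list(trap_values)]
    k = 1
    while len(rows[-1]) > 1:
        prev = rows[-1]
        factor = 4.0 ** k
        rows.append([(factor * prev[i + 1] - prev[i]) / (factor - 1)
                     for i in range(len(prev) - 1)])
        k += 1
    return rows

for order, row in enumerate(romberg_table([191190, 190420, 190200, 190150])):
    print(order, [round(v) for v in row])
# 0 [191190, 190420, 190200, 190150]
# 1 [190163, 190127, 190133]
# 2 [190124, 190134]
# 3 [190134]
```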
IV. CURVE FITTING
LEAST SQUARE METHODS
The method of least squares assumes that the best fit curve of a given type is the
curve that has the minimal sum of squared deviations (i.e., least square error) from a
given set of data.

Suppose that the data points are (x1, y1), (x2, y2), …, (xn, yn), where x is the
independent variable and y is the dependent variable. The fitting curve f(x) has the
deviation (error) d from each data point:

d1 = y1 - f(x1),  d2 = y2 - f(x2),  …,  dn = yn - f(xn)

According to the method of least squares, the best fitting curve has the property
that

d1² + d2² + … + dn² = Σ di²  is a minimum.
1.) Least Squares Method for Fitting a Linear Relationship (Linear
Regression)

Consider a set of n values (x1, y1), (x2, y2), …, (xn, yn).
Suppose we have to find a linear relationship of the form y = a + bx among the
above set of x and y values.

The difference between observed and estimated values of y is called the residual
and is given by ei = yi - (a + bxi). In the least square method, we find a and b in
such a way that Σ ei² is minimum.

Let T = Σ ei² = Σ [yi - (a + bxi)]²

The condition for T to be minimum is that ∂T/∂a = 0 and ∂T/∂b = 0.
EXAMPLE: Fit a straight line to the following set of data points.
X: 1 2 3 4 5
Y: 3 4 5 6 8

SOLUTION:
We shall prepare the following table.

x     y     x²    xy
1     3     1     3
2     4     4     8
3     5     9     15
4     6     16    24
5     8     25    40
--------------------
15    26    55    90
Normal Equations for fitting y = a + bx are:

Σy = na + bΣx       →  26 = 5a + 15b
Σxy = aΣx + bΣx²    →  90 = 15a + 55b

Multiplying the first equation by 3 gives 78 = 15a + 45b; subtracting it from
the second:

12 = 10b;  b = 1.2;  then 26 = 5a + 15(1.2) gives a = 1.6

Hence, the straight line is y = 1.6 + 1.2x.
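Solving the normal equations in code reproduces the fitted line. A minimal sketch:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([3, 4, 5, 6, 8], dtype=float)

n = len(x)
# Normal equations:  [ n   Σx ] [a]   [ Σy ]
#                    [ Σx  Σx²] [b] = [ Σxy]
A = np.array([[n, x.sum()], [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(A, rhs)
print(a, b)   # -> 1.6 1.2, i.e. y = 1.6 + 1.2x
```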


2.) Least Squares Method for Fitting a Non-Linear Relationship

Non-linear relationships of the form y = axᵇ, y = abˣ and y = aeᵇˣ can be
converted into the linear form Y = A + BX by applying the logarithm on both
sides.

EXAMPLE: Given the following data, fit an equation of the form y = axᵇ.
X: 1    2    3    4    5
Y: 0.5  2    4.5  8    12.5

SOLUTION:
y = axᵇ  →  ln y = ln a + b ln x
i.e., Y = A + BX, where Y = ln y, A = ln a, B = b, X = ln x
x    y     Y = ln y   X = ln x   X²       XY
1    0.5   -0.6931    0          0        0
2    2     0.6931     0.6931     0.4804   0.4804
3    4.5   1.5041     1.0986     1.2069   1.6524
4    8     2.0794     1.3863     1.9218   2.8827
5    12.5  2.5257     1.6094     2.5902   4.0649
--------------------------------------------------
           6.1092     4.7874     6.1993   9.0804

Normal equations are:

ΣY = nA + BΣX      →  6.1092 = 5A + 4.7874B
ΣXY = AΣX + BΣX²   →  9.0804 = 4.7874A + 6.1993B

Solving: A = -0.6931; B = 2.0
Therefore, a = e^A = 0.5; b = 2.0
Curve: y = 0.5x²
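The log-log transformation reduces the power-law fit to the linear case, so a degree-1 polynomial fit in log space recovers a and b. A sketch (np.polyfit returns the slope first):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([0.5, 2, 4.5, 8, 12.5])

# Fit Y = A + B*X with Y = ln y, X = ln x, then undo the transform: y = a * x**b
B, A = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(A)
print(a, B)   # -> 0.5 2.0, i.e. y = 0.5 * x**2
```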
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
IMPORTANT TAKEAWAYS

• Most numerical methods use iterative procedures to find an approximate root
of an equation f(x) = 0. They require an initial guess of the root as a starting
value, and each subsequent iteration leads closer to the actual root.

• Order of convergence: For any iterative numerical method, each successive
iteration gives an approximation that moves progressively closer to the actual
solution. This is known as convergence. A numerical method is said to have
order of convergence ρ if ρ is the largest positive number such that
|εn+1| ≤ k·|εn|^ρ, where εn and εn+1 are the errors in the nth and (n+1)th
iterations and k is a finite positive constant.
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
IMPORTANT TAKEAWAYS

Some useful results:

• If α is a root of the equation f(x) = 0, then f(α) = 0.
• Every equation of nth degree has exactly n roots (real or imaginary).
• Intermediate Value Theorem: If f(x) is a continuous function in a closed
interval [a, b] and f(a) & f(b) have opposite signs, then the equation f(x) = 0
has at least one real root, or an odd number of roots, between a and b.
• If f(x) is a continuous function in the closed interval [a, b] and f(a) & f(b)
are of the same sign, then the equation f(x) = 0 has no root, or an even number
of roots, between a and b.
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
 
METHOD OF SUCCESSIVE SUBSTITUTION
DEFINITION:
A numerical method for solving a nonlinear equation for the unknown.

MAIN IDEA:
Rewrite a nonlinear function into a form given by

x = f(x)

Starting with an initial guess x0, evaluate f(x0) to yield x1, then f(x1) to
yield x2, and so on. Continue the iteration x(k+1) = f(xk) until the result no
longer changes to within a specified tolerance, i.e., stop after m iterations
when |xm - x(m-1)| < ε.
EXAMPLE: Find x that solves the following equation:

SOLUTION:
Rearranging the equation to the form x = f(x), the iteration can then be
implemented in a spreadsheet, as given in Figure 1 (a code sketch of the same
procedure follows below).
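Since the spreadsheet figure is not reproduced here, the sketch below implements the same successive-substitution loop in code; the test equation x = cos x and the tolerance are illustrative assumptions, not data from the original example:

```python
import math

def successive_substitution(f, x0, tol=1e-6, max_iter=100):
    """Iterate x_{k+1} = f(x_k) until successive values agree within tol."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("did not converge")

root, iters = successive_substitution(math.cos, x0=0.5)
print(root, iters)   # ~0.739085 (solves x = cos x)
```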
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
HALF INTERVAL METHOD
The bisection method is used to find an approximate root in an interval by
repeatedly bisecting it into subintervals. It is a very simple and robust method,
but it is also relatively slow. Because of this it is often used to obtain a rough
approximation to a solution which is then used as a starting point for more
rapidly converging methods. This method is based on the intermediate value
theorem for continuous functions.

Algorithm:
 Let f(x) be a continuous function in the interval [a, b], such that f(a) and f(b)
are of opposite signs, i.e. f(a)·f(b) < 0.
 Step 1. Take the initial approximation given by x0 = (a + b)/2; one of three
conditions arises for finding the 1st approximation x1.
I. If f(x0) = 0, we have a root at x0.
II. If f(a)·f(x0) < 0, the root lies between a and x0; ∴ x1 = (a + x0)/2, and we
repeat the procedure by halving the interval again.
III. If f(b)·f(x0) < 0, the root lies between x0 and b; ∴ x1 = (x0 + b)/2, and we
repeat the procedure by halving the interval again.
IV. Continue the process until the root is found to the desired accuracy.

EXAMPLE: Apply the Bisection Method to find a root of the equation

SOLUTION:
Here f(0) = -1 and f(1) = 1 ⇒ f(0)·f(1) < 0.
Also f(x) is continuous in [0, 1]; ∴ at least one root exists in [0, 1].
Initial approximation: a = 0, b = 1
x0 = (0 + 1)/2 = 0.5, f(0.5) = -1.1875, f(0.5)·f(1) < 0

First approximation: a = 0.5, b = 1
x1 = (0.5 + 1)/2 = 0.75, f(0.75) = -0.5898, f(0.75)·f(1) < 0

Second approximation: a = 0.75, b = 1
x2 = (0.75 + 1)/2 = 0.875, f(0.875) = 0.051, f(0.75)·f(0.875) < 0

Third approximation: a = 0.75, b = 0.875
x3 = (0.75 + 0.875)/2 = 0.8125, f(0.8125) = -0.30394, f(0.8125)·f(0.875) < 0

Fourth approximation: a = 0.8125, b = 0.875
x4 = (0.8125 + 0.875)/2 = 0.84375, f(0.84375) = -0.135, f(0.84375)·f(0.875) < 0

Fifth approximation: a = 0.84375, b = 0.875
x5 = (0.84375 + 0.875)/2 = 0.8594, f(0.8594) = -0.0445, f(0.8594)·f(0.875) < 0

Sixth approximation: a = 0.8594, b = 0.875
x6 = (0.8594 + 0.875)/2 = 0.8672, f(0.8672) = 0.0027, f(0.8594)·f(0.8672) < 0

Seventh approximation: a = 0.8594, b = 0.8672
x7 = (0.8594 + 0.8672)/2 = 0.8633

The first 2 decimal places have stabilized; hence 0.8633 is the real root
correct to two decimal places.
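The halving procedure is mechanical and easy to code. In the sketch below, the stopping rule mirrors the slide (stop once two decimal places stabilize); because the function used in this example was not preserved, the demonstration calls it on f(x) = x³ + x - 1 from the next example as a stand-in:

```python
def bisection(f, a, b, decimals=2, max_iter=100):
    """Halve [a, b] (with f(a)*f(b) < 0) until the midpoint stabilizes
    to the requested number of decimal places."""
    assert f(a) * f(b) < 0, "f(a) and f(b) must have opposite signs"
    x_prev = a
    for _ in range(max_iter):
        x = (a + b) / 2
        if round(x, decimals) == round(x_prev, decimals):
            return x
        if f(a) * f(x) < 0:
            b = x          # root lies in [a, x]
        else:
            a = x          # root lies in [x, b]
        x_prev = x
    return x

print(bisection(lambda x: x**3 + x - 1, 0.0, 1.0))   # ~0.6836, i.e. 0.68 to 2 dp
```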
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
METHOD of FALSE POSITION
• In the method of false position, a false position of the curve is taken as the initial
approximation. Let y = f(x) be represented by the curve AB. The real root of the equation
f(x) = 0 is α. The false position of the curve AB is taken as the chord AB, and the initial
approximation x0 is the point of intersection of the chord AB with the x-axis. Successive
approximations x1, x2, … are given by the points of intersection of the chords A′B, A′′B, …
with the x-axis, until the root is found to the desired accuracy. The equation of the chord AB
in two-point form is given by:

y - f(a) = [(f(b) - f(a))/(b - a)]·(x - a)   … (5)

• This is another method to find the roots of f(x) = 0. This method is also
known as the Regula Falsi Method. In this method, we choose two points a and
b such that f(a) and f(b) are of opposite signs. Hence a root lies in between
these points.
We replace the part of the curve between the points [a, f(a)] and [b, f(b)] by
the chord joining these points, and we take the point of intersection of
the chord with the x-axis as an approximation to the root. The point
of intersection is obtained by putting y = 0 in (5), as

x1 = [a·f(b) - b·f(a)] / [f(b) - f(a)]

x1 is the first approximation to the root of f(x) = 0.

EXAMPLE: Apply the Regula Falsi Method to find a root of the equation
x³ + x - 1 = 0 correct to two decimal places.

SOLUTION:
f(x) = x³ + x - 1. Here f(0) = -1 and f(1) = 1, so f(0)·f(1) < 0.
Also f(x) is continuous in [0, 1], so at least one root exists in [0, 1].

Initial approximation: a = 0, b = 1
x1 = [0(1) - 1(-1)]/[1 - (-1)] = 1/2 = 0.5
f(0.5) = -0.375, f(0.5)·f(1) < 0

First approximation: a = 0.5, b = 1
x2 = [0.5(1) - 1(-0.375)]/[1 - (-0.375)] = 0.875/1.375 = 0.636
f(0.636) = -0.107, f(0.636)·f(1) < 0

Second approximation: a = 0.636, b = 1
x3 = [0.636(1) - 1(-0.107)]/[1 - (-0.107)] = 0.743/1.107 = 0.6711
f(0.6711) = -0.0267, f(0.6711)·f(1) < 0

Third approximation: a = 0.6711, b = 1
x4 = [0.6711(1) - 1(-0.0267)]/[1 - (-0.0267)] = 0.6978/1.0267 = 0.6796

The first 2 decimal places have stabilized; hence 0.6796 is the real root
correct to two decimal places.
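A sketch of the regula falsi iteration, run on the same equation:

```python
def false_position(f, a, b, tol=1e-4, max_iter=100):
    """Regula falsi: intersect the chord through (a, f(a)) and (b, f(b))
    with the x-axis and keep the sub-interval that brackets the root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x_old = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)
        if abs(x - x_old) < tol:
            return x
        if fa * f(x) < 0:
            b, fb = x, f(x)
        else:
            a, fa = x, f(x)
        x_old = x
    return x

print(false_position(lambda x: x**3 + x - 1, 0.0, 1.0))   # ~0.6823 (0.68 to 2 dp)
```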
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
 
BAIRSTOW’s METHOD
Bairstow's method is an iterative method used to find both the real and complex roots of a
polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic
function and can be used to find all the roots of a polynomial. Given a polynomial, say

f(x) = an·xⁿ + a(n-1)·x^(n-1) + … + a1·x + a0,

Bairstow's method divides the polynomial by a quadratic function

x² + px + q.

Now the quotient will be a polynomial of degree n - 2,

Q(x) = b(n-2)·x^(n-2) + … + b1·x + b0,

and the remainder is a linear function,

R(x) = Rx + S.
V. ROOTS of ALGEBRAIC and
TRANSCENDENTAL EQUATIONS
 
BAIRSTOW’s METHOD
Since the quotient and the remainder are obtained by standard synthetic division,
the coefficients can be obtained by the following recurrence relation:

b(n-2) = an
b(n-3) = a(n-1) - p·b(n-2)
b(i) = a(i+2) - p·b(i+1) - q·b(i+2),   i = n - 4 to 0
R = a1 - p·b0 - q·b1
S = a0 - q·b0

Here the values of R and S depend on p and q, and hence they are taken as
functions of p and q. If p and q are so chosen that R and S vanish (i.e. no
remainder), then (x² + px + q) will be a factor of the given polynomial. Meaning
thereby, the problem reduces to finding p and q such that

R(p, q) = 0 and S(p, q) = 0   …(2)

Let (p0, q0) be an initial approximation and (p0 + ∆p0, q0 + ∆q0) be
the actual solution of equation (2). Then

R(p0 + ∆p0, q0 + ∆q0) = 0 and S(p0 + ∆p0, q0 + ∆q0) = 0   …(3)
EXAMPLE: Find all the roots of the polynomial x⁴ - 5x³ + 10x² - 10x + 4 by the
Bairstow Method, with the initial values r = 0.5, s = -0.5 (here the trial
quadratic divisor is written as x² - rx - s).

SOLUTION:
Set iteration = 1

Using the recurrence relations (B.5a)-(B.5c) and (B.8a)-(B.8c) we get

b4 = 1, b3 = -4.5, b2 = 7.25, b1 = -4.125, b0 = -1.6875
c4 = 1, c3 = -4, c2 = 4.75, c1 = 0.25

Therefore the simultaneous equations for ∆r and ∆s are:

4.75∆r - 4∆s = 4.125
0.25∆r + 4.75∆s = 1.6875

On solving we get ∆r = 1.1180371, ∆s = 0.296419084

r = 0.5 + 1.1180371 = 1.6180371
s = -0.5 + 0.296419084 = -0.203580916

And the approximate relative errors are

|εa,r| = |∆r / r| × 100 = 69.0983582 %
|εa,s| = |∆s / s| × 100 = 145.602585 %
Set iteration = 2

Repeating the recurrences with the updated r and s, we now have to solve:

1.26659977∆r - 1.76392567∆s = 2.31465483
0.0938522071∆r + 1.26659977∆s = 0.625537872

On solving we get ∆r = 2.27996969, ∆s = 0.324931115

r = 1.6180371 + 2.27996969 = 3.89800692
s = -0.203580916 + 0.324931115 = 0.121350199
|εa,r| = |∆r / r| × 100 = 58.490654 %
|εa,s| = |∆s / s| × 100 = 267.763153 %

Now proceeding in the above manner, in about ten iterations we get r = 3, s = -2
with |εa,r| = 7.95 and |εa,s| = 5.96.

The converged quadratic factor is x² - rx - s = x² - 3x + 2, whose roots are
x = 1, 2. At this point the quotient is the quadratic equation x² - 2x + 2 = 0;
using the quadratic formula (i.e. eqn. B.12), its roots are x = 1 - i, 1 + i.

Hence the roots of x⁴ - 5x³ + 10x² - 10x + 4 = 0 are x = 1 - i, 1 + i, 1, 2.
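A compact implementation of Bairstow's method is sketched below. It follows the synthetic-division recurrences above, written for a trial divisor x² - rx - s to match the worked example; the driver repeatedly deflates the polynomial by each converged quadratic factor:

```python
import cmath

def quad_roots(r, s):
    """Roots of x^2 - r*x - s = 0 (possibly complex)."""
    d = cmath.sqrt(r * r + 4 * s)
    return [(r + d) / 2, (r - d) / 2]

def bairstow(a, r=0.5, s=-0.5, tol=1e-10, max_iter=500):
    """All roots of a[0]*x^n + ... + a[n] (coefficients highest power first)."""
    a = [float(v) for v in a]
    roots = []
    while len(a) - 1 > 2:
        n = len(a) - 1
        for _ in range(max_iter):
            b = [a[0], a[1] + r * a[0]] + [0.0] * (n - 1)
            for i in range(2, n + 1):
                b[i] = a[i] + r * b[i - 1] + s * b[i - 2]
            c = [b[0], b[1] + r * b[0]] + [0.0] * (n - 2)
            for i in range(2, n):
                c[i] = b[i] + r * c[i - 1] + s * c[i - 2]
            det = c[n - 2] ** 2 - c[n - 3] * c[n - 1]
            dr = (-b[n - 1] * c[n - 2] + b[n] * c[n - 3]) / det
            ds = (-b[n] * c[n - 2] + b[n - 1] * c[n - 1]) / det
            r, s = r + dr, s + ds
            if abs(dr) < tol and abs(ds) < tol:
                break
        roots += quad_roots(r, s)
        b = [a[0], a[1] + r * a[0]] + [0.0] * (n - 1)
        for i in range(2, n + 1):
            b[i] = a[i] + r * b[i - 1] + s * b[i - 2]
        a = b[:n - 1]          # deflate by the converged quadratic factor
    if len(a) == 3:            # remaining quadratic factor
        roots += quad_roots(-a[1] / a[0], -a[2] / a[0])
    elif len(a) == 2:          # remaining linear factor
        roots.append(-a[1] / a[0])
    return roots

print(bairstow([1, -5, 10, -10, 4]))   # -> roots near 2, 1, 1+1j, 1-1j
```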
VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
 
TAYLOR SERIES METHOD
Consider the one-dimensional initial value problem

y' = f(x, y),   y(x0) = y0

where f is a function of two variables x and y and (x0, y0) is a known point on the
solution curve. If the existence of all higher order partial derivatives is assumed
for y at x = x0, then by Taylor series the value of y at any neighbouring point
x0 + h can be written as

y(x0 + h) = y(x0) + h·y'(x0) + (h²/2!)·y''(x0) + (h³/3!)·y'''(x0) + …

where ' represents the derivative with respect to x. Since y0 at x0 is known, y'
at x0 can be found by computing f(x0, y0). Similarly, higher derivatives of y at
x0 can be computed by differentiating f along the solution:
y''  = fx + fy·y'
y''' = fxx + 2fxy·y' + fyy·(y')² + fy·y''

and so on. Then the value of y at any neighbouring point x0 + h can be obtained by
summing the above infinite series. However, in any practical computation, the
summation has to be terminated after some finite number of terms. If the series
has been terminated after the pth derivative term, then the approximated formula
is called the Taylor series approximation to y of order p, and the error is of order
p + 1. The same can be repeated to obtain y at other points of x in the interval
[x0, xn] in a marching process.
Error in the approximation: The Taylor series method of order p has a local
truncation error of order O(h^(p+1)); hence p can be chosen as large as necessary
to make the error as small as desired. If the order p is fixed, it is theoretically
possible to determine a priori the size of h so that the final global error will
be as small as desired. Making use of finite differences, the error after a step
from x to x + h can be estimated as

Ep ≈ h^p · [y⁽ᵖ⁾(x + h) - y⁽ᵖ⁾(x)] / (p + 1)!

However, in practice one usually computes two sets of approximations using
step sizes h and h/2 and compares the solutions. For p = 4, E4 = c·h⁴, and with
step size h/2, E4 = c·(h/2)⁴; that is, if the step size is halved, the error is
reduced by a factor of about 16.
EXAMPLE: Solve the initial value problem y' = -2xy², y(0) = 1 for y at x = 1
with step length 0.2 using the Taylor Series Method of order four.

SOLUTION:
Given y' = f(x, y) = -2xy², the fourth order Taylor formula is

y(xi + h) = y(xi) + h·y'(xi) + h²·y''(xi)/2! + h³·y'''(xi)/3! + h⁴·y⁽⁴⁾(xi)/4!
Differentiating y' = -2xy² repeatedly gives

y''  = -2y² - 4xyy'
y''' = -8yy' - 4x(y')² - 4xyy''
y⁽⁴⁾ = -12(y')² - 12yy'' - 12xy'y'' - 4xyy'''

Given at x = 0, y = 1 and with h = 0.2 we have

y'   = -2(0)(1)² = 0.0
y''  = -2(1)² - 4(0)(1)(0) = -2
y''' = -8(1)(0) - 4(0)(0)² - 4(0)(1)(-2) = 0.0
y⁽⁴⁾ = -12(0)² - 12(1)(-2) - 12(0)(0)(-2) - 4(0)(1)(0) = 24

y(0.2) = 1 + 0.2(0) + 0.2²(-2)/2! + 0 + 0.2⁴(24)/4! = 0.9615

Now at x = 0.2 we have y = 0.9615:

y' = -0.3699,  y'' = -1.5648,  y''' = 3.9397 and y⁽⁴⁾ = 11.9953


Each subsequent step starts from the previous y value:

y(0.4) = 0.9615 + 0.2(-0.3699) + 0.2²(-1.5648)/2! + 0.2³(3.9397)/3! + 0.2⁴(11.9953)/4! = 0.8624
y(0.6) = 0.8624 + 0.2(-0.5950) + 0.2²(-0.6665)/2! + 0.2³(4.4579)/3! + 0.2⁴(-5.4051)/4! = 0.7356
y(0.8) = 0.7356 + 0.2(-0.6494) + 0.2²(0.0642)/2! + 0.2³(2.6963)/3! + 0.2⁴(-10.0879)/4! = 0.6100
y(1.0) = 0.6100 + 0.2(-0.5953) + 0.2²(0.4178)/2! + 0.2³(0.9553)/3! + 0.2⁴(-6.7878)/4! = 0.5001

Now at x = 1.0 we have y = 0.5001:

y' = -0.5001,  y'' = 0.5004,  y''' = -0.000525 and y⁽⁴⁾ = -3.0005

Error in the approximation:

E4 = h⁴·(y⁽⁴⁾(1.0) - y⁽⁴⁾(0.8))/5! = 0.2⁴·(-3.0005 - 11.9953)/5! = -1.9994 × 10⁻⁴

Analytical solution: y(x) = 1/(1 + x²); at x = 1, y = 0.5.
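The repeated differentiation that drives the method can be automated with a computer algebra system. A sketch of the order-four Taylor march for this example using SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -2*x*y**2                    # y' = f(x, y) from the worked example

# Total derivatives y'', y''', y'''' obtained by differentiating along the solution
derivs = [f]
for _ in range(3):
    d = derivs[-1]
    derivs.append(sp.diff(d, x) + sp.diff(d, y) * f)

def taylor4_step(xi, yi, h):
    """One step of the order-four Taylor method."""
    total, fact = yi, 1.0
    for k, d in enumerate(derivs, start=1):
        fact *= k
        total += h**k * float(d.subs({x: xi, y: yi})) / fact
    return total

xi, yi, h = 0.0, 1.0, 0.2
for _ in range(5):
    yi = taylor4_step(xi, yi, h)
    xi += h
    print(f"y({xi:.1f}) ~ {yi:.4f}")   # ends with y(1.0) ~ 0.5001; exact 0.5
```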


VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
 
EULER’s METHOD
• To derive Euler’s method, consider the standard derivative approximation
from beginning calculus,

Y'(t) ≈ [Y(t + h) - Y(t)]/h

This is called a forward difference approximation to the derivative. Applying this
to the initial value problem (2.1) at t = tn,

Y'(tn) = f(tn, Y(tn)),

we obtain

[Y(tn+1) - Y(tn)]/h ≈ f(tn, Y(tn)),
Y(tn+1) ≈ Y(tn) + h·f(tn, Y(tn)).
• Euler’s method is defined by taking this to be exact:

y(n+1) = yn + h·f(tn, yn),   0 ≤ n ≤ N - 1.

• For the initial guess, use y0 = Y0 or some close approximation of Y0.
Sometimes Y0 is obtained empirically and thus may be known only
approximately. Formula (2.5) gives a rule for computing y1, y2, …, yN in
succession. This is typical of most numerical methods for solving ordinary
differential equations. Some geometric insight into Euler’s method is given in
Figure 2.1. The line z = p(t) that is tangent to the graph of z = Y(t) at tn has slope

Y'(tn) = f(tn, Y(tn)).
Using this tangent line to approximate the curve near the point (tn, Y(tn)),
the value of the tangent line

p(t) = Y(tn) + f(tn, Y(tn))·(t - tn)

at t = tn+1 is given by the right side of (2.4).

EXAMPLE: Use Euler’s method with h = 0.1 to find approximate values for the
solution of the initial value problem

y' + 2y = x³·e^(-2x),   y(0) = 1

at x = 0.1, 0.2, 0.3.
SOLUTION:
We rewrite the equation as y' = f(x, y), with

f(x, y) = -2y + x³·e^(-2x),   x0 = 0, y0 = 1.

Euler’s method yields

y1 = y0 + h·f(x0, y0) = 1 + 0.1(-2) = 0.8
y2 = y1 + h·f(x1, y1) = 0.8 + 0.1[-2(0.8) + (0.1)³·e^(-0.2)] = 0.640082
y3 = y2 + h·f(x2, y2) = 0.640082 + 0.1[-2(0.640082) + (0.2)³·e^(-0.4)] = 0.512602

so y(0.1) ≈ 0.8, y(0.2) ≈ 0.640082, y(0.3) ≈ 0.512602.
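A sketch of the Euler loop, with the right-hand side rewritten as above (the equation is the one reconstructed for this example):

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
        print(f"y({x:.1f}) ~ {y:.6f}")
    return y

# Right-hand side rewritten from y' + 2y = x^3 * e^(-2x)
f = lambda x, y: -2 * y + x**3 * math.exp(-2 * x)
euler(f, 0.0, 1.0, 0.1, 3)   # y(0.1) ~ 0.8, y(0.2) ~ 0.640082, y(0.3) ~ 0.512602
```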
VI. NUMERICAL SOLUTIONS of
ORDINARY DIFFERENTIAL EQUATIONS
 
RUNGE KUTTA METHOD
In general, if k is any positive integer and f satisfies appropriate assumptions,
there are numerical methods with local truncation error O(h^(k+1)) for solving an
initial value problem

y' = f(x, y),   y(x0) = y0.

Moreover, it can be shown that a method with local truncation error O(h^(k+1)) has
global truncation error O(h^k). In Sections 3.1 and 3.2 we studied numerical
methods where k = 1 and k = 2. We'll skip methods for which k = 3 and proceed to
the Runge-Kutta method, the most widely used method, for which k = 4. The
magnitude of the local truncation error is determined by the fifth derivative y⁽⁵⁾
of the solution of the initial value problem; therefore the local truncation error
will be larger where |y⁽⁵⁾| is large, and smaller where |y⁽⁵⁾| is small. The
Runge-Kutta method computes approximate values y1, y2, …, yn of the solution of
Equation 3.3.1 at x0, x0 + h, …, x0 + nh as follows:
k1 = f(xi, yi)
k2 = f(xi + h/2, yi + (h/2)·k1)
k3 = f(xi + h/2, yi + (h/2)·k2)
k4 = f(xi + h, yi + h·k3)

And

y(i+1) = yi + (h/6)·(k1 + 2k2 + 2k3 + k4).

EXAMPLE: Use the Runge-Kutta Method with h = 0.1 to find approximate values
for the solution of the initial value problem of the preceding example,
y' + 2y = x³·e^(-2x), y(0) = 1, at x = 0.1, 0.2.
SOLUTION:
With f(x, y) = -2y + x³·e^(-2x), x0 = 0, y0 = 1, the first step gives

k1 = f(0, 1) = -2
k2 = f(0.05, 0.9) = -1.799887
k3 = f(0.05, 0.910006) = -1.819898
k4 = f(0.1, 0.818010) = -1.635201

y1 = 1 + (0.1/6)[-2 + 2(-1.799887) + 2(-1.819898) - 1.635201] = 0.818754

Repeating the step from (x1, y1) = (0.1, 0.818754) gives y2 = 0.670592;
hence y(0.1) ≈ 0.818754 and y(0.2) ≈ 0.670592.
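A sketch of the classical RK4 step applied to the same (assumed) initial value problem:

```python
import math

def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
        print(f"y({x:.1f}) ~ {y:.6f}")
    return y

f = lambda x, y: -2 * y + x**3 * math.exp(-2 * x)
rk4(f, 0.0, 1.0, 0.1, 2)   # y(0.1) ~ 0.818754, y(0.2) ~ 0.670592
```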