Applied 1
Petroleum Engineering
References:
Numerical Methods for Engineers and Scientists
By: Joe D. Hoffman
Numerical Methods for Engineers
By: Kendall E. Atkinson
Numerical Methods for Engineers and Scientists
By: Amos Gilat ; Vish Subramaniam
Numerical Methods for Engineers
By: Steven C. Chapra ; Raymond P. Canale
Applied Numerical Analysis
By: Curtis F. Gerald ; Patrick O. Wheatley
Numerical Methods for Engineers
By: Thomas R. Bewley
Contents:
-Solving Nonlinear Equations
-Systems of Nonlinear Equations
-Solving a System of Linear Equations
-Curve fitting
-Numerical differentiation
-Numerical integration
-Initial value problems
-Boundary value problems
-Modeling
Solving Nonlinear Equations
Overview of approaches in solving equations numerically
The process of solving an equation numerically is
different from the procedure used to find an analytical
solution. An analytical solution is obtained by deriving
an expression that has an exact numerical value. A
numerical solution is obtained in a process that starts by
finding an approximate solution and is followed by a
numerical procedure in which a better (more accurate)
solution is determined.
An initial numerical solution of an equation f(x) = 0 can be estimated by plotting
f(x) versus x and looking for the point where the graph crosses the x-axis. It is also
possible to write and execute a computer program that looks for a domain that
contains a solution. Such a program looks for a solution by evaluating f(x) at
different values of x. It starts at one value of x and then changes the value of x in
small increments. A change in the sign of f(x) indicates that there is a root within the
last increment. In most cases, when the equation that is solved is related to an
application in science or engineering, the range of x that includes the solution can be
estimated and used in the initial plot of f(x), or for a numerical search of a small
domain that contains a solution.
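The incremental-search idea described above can be sketched in a few lines of code. The following Python sketch is not part of the original notes; the search range and step size dx are illustrative assumptions. It scans f(x) in small steps and reports each subinterval in which f(x) changes sign:

import math

# Incremental search: scan f(x) from x_start to x_end in steps of dx and
# return every subinterval (a, b) in which f changes sign, i.e. f(a)*f(b) < 0.
def incremental_search(f, x_start, x_end, dx):
    brackets = []
    a = x_start
    while a < x_end:
        b = min(a + dx, x_end)
        if f(a) * f(b) < 0:          # sign change: a root lies between a and b
            brackets.append((a, b))
        a = b
    return brackets

# Example with the equation used later in these notes: f(x) = 8 - 4.5*(x - sin x)
f = lambda x: 8.0 - 4.5 * (x - math.sin(x))
print(incremental_search(f, 0.0, 5.0, 0.1))   # expect one bracket near x ≈ 2.43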
When an equation has more than one root, a numerical solution is
obtained one root at a time. The methods used for solving equations
numerically can be divided into two groups: bracketing methods and
open methods. In bracketing methods, illustrated in Fig. 4, an interval
that includes the solution is identified. By definition, the endpoints of
the interval are the upper bound and lower bound of the solution. Then,
by using a numerical scheme, the size of the interval is successively
reduced until the distance between the endpoints is less than the desired
accuracy of the solution. In open methods, illustrated in Fig. 5, an initial
estimate (one point) for the solution is assumed. The value of this initial
guess for the solution should be close to the actual solution. Then, by
using a numerical scheme, better (more accurate) values for the solution
are calculated. Bracketing methods always converge to the solution.
Open methods are usually more efficient but sometimes might not yield
the solution.
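As an illustration of the bracketing idea (a minimal sketch, not taken from these notes), the bisection method, which is referred to later in this section, can be written as follows; the function f and a bracket [a, b] with f(a)·f(b) < 0 are assumed to be supplied by the user:

# Bisection: a representative bracketing method. The bracket [a, b] is halved
# repeatedly until its size is smaller than the desired accuracy tol.
def bisection(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)            # midpoint of the current bracket
        if f(a) * f(m) < 0:          # the root lies in [a, m]
            b = m
        else:                        # the root lies in [m, b]
            a = m
        if (b - a) < tol:
            break
    return 0.5 * (a + b)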
REGULA FALSI METHOD
The point xNS where the line intersects the x-axis is determined by substituting y = 0 in Eq. (10) and solving the equation for x:
xNS = (a·f(b) - b·f(a)) / (f(b) - f(a))   (11)
The procedure (or algorithm) for finding a solution with the regula falsi
method is almost the same as that for the bisection method.
Algorithm for the regula falsi method
1. Choose the first interval by finding points a and b such that a solution exists between them.
This means that f(a) and f(b) have different signs such that f(a) f(b) < 0. The points can be
determined by looking at a plot of f(x) versus x.
2. Calculate the first estimate of the numerical solution xNS1 by using Eq. (11).
3. Determine whether the actual solution is between a and xNS1 or between xNS1 and b. This is done by checking the sign of the product f(a) · f(xNS1):
If f(a) · f(xNS1) < 0, the solution is between a and xNS1.
Otherwise, the solution is between xNS1 and b.
4. Select the subinterval that contains the solution (a to xNS1, or xNS1 to b) as the new interval [a, b], and go back to step 2.
Steps 2 through 4 are repeated until the solution is determined to the desired accuracy. (A short implementation sketch of these steps follows.)
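The algorithm above can be sketched in Python as follows (an illustrative sketch, not part of the original notes; the tolerance in f(x) is used here as the stopping criterion, and a bracket with f(a)·f(b) < 0 is assumed):

# Regula falsi: steps 1-4 of the algorithm above.
def regula_falsi(f, a, b, eps=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_ns = a
    for _ in range(max_iter):
        # Eq. (11): x-intercept of the line through (a, f(a)) and (b, f(b))
        x_ns = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(x_ns)) < eps:       # tolerance in f(x) reached
            break
        if f(a) * f(x_ns) < 0:       # the solution is between a and x_ns
            b = x_ns
        else:                        # the solution is between x_ns and b
            a = x_ns
    return x_ns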
The iterations are stopped when the estimated error, according to one of the stopping criteria discussed below (the estimated relative error or the tolerance in f(x)), is smaller than a predetermined value.
NEWTON’S METHOD
Starting from an initial estimate x1 of the solution, the next estimate x2 is the point where the tangent line to f(x) at x1 crosses the x-axis:
x2 = x1 - f(x1) / f '(x1)   (13)
Repeating the procedure with each new estimate gives the general formula:
xi+1 = xi - f(xi) / f '(xi)   (14)
Equation (14) is the general iteration formula for Newton's method. It is called an iteration formula because the solution is found by repeated application of Eq. (14) for each successive value of i.
Newton's method can also be derived by using Taylor series. The Taylor series expansion of f(x) about x1 is given by:
f(x) = f(x1) + f '(x1)(x - x1) + (1/2!) f ''(x1)(x - x1)^2 + ...   (15)
If x2 is a solution of the equation f(x) = 0 and x1 is a point near x2, then:
0 = f(x2) = f(x1) + f '(x1)(x2 - x1) + (1/2!) f ''(x1)(x2 - x1)^2 + ...   (16)
Neglecting the higher-order terms (since x2 - x1 is small) and solving for x2 gives:
x2 = x1 - f(x1) / f '(x1)   (17)
The result is the same as Eq. (13). In the next iteration the Taylor series is expanded about the new estimate x2 and an improved estimate x3 is calculated. The general formula is the same as that given in Eq. (14).
Algorithm for Newton's method
1. Choose a point x1 as an initial guess of the solution.
2. For i = 1, 2, ..., until the error is smaller than a specified value, calculate xi+1 by using Eq. (14).
When are the iterations stopped?
Ideally, the iterations should be stopped when an exact solution is obtained. This
means that the value of x is such that f(x) = 0. Generally, this exact solution cannot
be found computationally. In practice therefore, the iterations are stopped when an
estimated error is smaller than some predetermined value. A tolerance in the
solution, as in the bisection method, cannot be calculated since bounds are not
known. Two error estimates that are typically used with Newton's method are:
Estimated relative error: The iterations are stopped when the estimated relative error is smaller than a specified value ε:
|(xi+1 - xi) / xi| < ε   (18)
Tolerance in f(x): The iterations are stopped when the absolute value of f(xi+1) is smaller than a specified tolerance δ:
|f(xi+1)| < δ   (19)
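A minimal Python sketch of Newton's method with the estimated relative error as the stopping criterion is shown below (not part of the original notes; it assumes the derivative dfdx is available analytically and that the iterates are nonzero, as required by Eq. (18)):

# Newton's method: repeated application of Eq. (14), stopped by Eq. (18).
def newton(f, dfdx, x1, eps=1e-6, max_iter=50):
    x = x1
    for _ in range(max_iter):
        x_new = x - f(x) / dfdx(x)            # Eq. (14)
        if abs((x_new - x) / x) < eps:        # estimated relative error, Eq. (18)
            return x_new
        x = x_new
    return x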
Example 2: Solution of equation using Newton's method.
Find the solution of the equation 8 - 4.5(x - sin x) = 0 (the same equation
as in Example 1) by using Newton's method in the following way:
Using a nonprogrammable calculator, calculate the first two iterations
on paper using six significant figures.
SOLUTION
In the present problem, f(x) = 8 - 4.5(x - sin x) and f '(x) = -4.5(1 - cos x).
(a) To start the iterations, f(x) and f '(x) are substituted in Eq. (14):
xi+1 = xi - (8 - 4.5(xi - sin xi)) / (-4.5(1 - cos xi))   (20)
In the first iteration, i = 1 and x1 = 2, and Eq. (20) gives:
x2 = 2 - (8 - 4.5(2 - sin 2)) / (-4.5(1 - cos 2)) = 2.48517   (21)
For the second iteration, i = 2 and x2 = 2.48517, and Eq. (20) gives:
x3 = 2.48517 - (8 - 4.5(2.48517 - sin 2.48517)) / (-4.5(1 - cos 2.48517)) = 2.43099   (22)
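The two hand calculations above can be checked with a short Python script (an illustrative check, not part of the original notes):

import math

# Example 2: f(x) = 8 - 4.5*(x - sin x), starting at x1 = 2.
f = lambda x: 8.0 - 4.5 * (x - math.sin(x))
dfdx = lambda x: -4.5 * (1.0 - math.cos(x))

x1 = 2.0
x2 = x1 - f(x1) / dfdx(x1)      # first iteration, Eq. (21)
x3 = x2 - f(x2) / dfdx(x2)      # second iteration, Eq. (22)
print(x2, x3)                   # approximately 2.48517 and 2.43099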
Notes on Newton's method
• The method, when successful, works well and converges fast. When it
does not converge, it is usually because the starting point is not close
enough to the solution. Convergence problems typically occur when the
value of f '(x) is close to zero in the vicinity of the solution (where f(x) =
0). It is possible to show that Newton's method converges if the function
f(x) and its first and second derivatives f '(x) and f "(x) are all
continuous, if f '(x) is not zero at the solution, and if the starting value x1
is near the actual solution. Illustrations of two cases where Newton's
method does not converge (i.e., diverges) are shown in Fig. 12.
From these results it looks like the iterations converge toward x = 0, which is not a solution. At x = 0 the function is actually not defined (it is a singular point). A value is nevertheless obtained from Eq. (23) because the equation was simplified. This case is illustrated in Fig. 13b.
(c) When the starting point for the iterations is x = 0.4, the next two iterations, using Eq.
(23), are:
In this case, Newton's method converges to the correct solution. This case is illustrated in
Fig. 13c.
This example also shows that if the starting point is close enough to the true solution,
Newton's method converges.
Figure 13: Solution with Newton's method using different starting points.
SECANT METHOD
The secant method is a scheme for finding a numerical solution of an
equation of the form f(x) = 0. The method uses two points in the
neighborhood of the solution to determine a new estimate for the
solution (Fig. 14). The two points (marked as x1 and x2 in the figure) are
used to define a straight line (secant line), and the point where the line
intersects the x-axis (marked as x3 in the figure) is the new estimate for
the solution. As shown, the two points can be on one side of the
solution (Fig. 14a) or the solution can be between the two points (Fig.
14b).
The slope of the secant line through the points (xi-1, f(xi-1)) and (xi, f(xi)) approximates the derivative of f(x) at xi. Writing the equation of this line and setting y = 0 at the point where it crosses the x-axis gives the iteration formula:
xi+1 = xi - f(xi) / [ (f(xi-1) - f(xi)) / (xi-1 - xi) ]   (27)
This equation is almost identical to Eq. (14) of Newton's method. In Eq. (27), the
denominator of the second term on the right-hand side of the equation is an
approximation of the value of the derivative of f(x) at xi. In Eq. (14), the
denominator is actually the derivative f'(x). In the secant method (unlike Newton's
method), it is not necessary to know the analytical form of f'(x).
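A Python sketch of the secant method based on Eq. (27) is shown below (not part of the original notes; the two starting points x1 and x2 and nonzero iterates for the relative-error test are assumed):

# Secant method: Newton's formula with the derivative replaced by the
# finite-difference approximation built from the two most recent points.
def secant(f, x1, x2, eps=1e-6, max_iter=50):
    x3 = x2
    for _ in range(max_iter):
        slope = (f(x1) - f(x2)) / (x1 - x2)   # approximates f'(x) at x2
        x3 = x2 - f(x2) / slope               # Eq. (27)
        if abs((x3 - x2) / x2) < eps:         # estimated relative error
            break
        x1, x2 = x2, x3
    return x3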
FIXED-POINT ITERATION METHOD
Fixed-point iteration is a method for solving an equation of the form
f(x) = 0. The method is carried out by rewriting the equation in the
form:
x = g(x) (28)
Obviously, when x is the solution of f(x) = 0, the left side and the right
side of Eq. (28) are equal. This is illustrated graphically by plotting y
= x and y = g(x), as shown in Fig. 17. The point of intersection of the
two plots, called the fixed point, is the solution. The numerical value of
the solution is determined by an iterative process. It starts by taking a
value of x near the fixed point as the first guess for the solution and
substituting it in g(x).
The value of g(x) that is obtained is the new (second) estimate for the solution. The second value is then substituted back in g(x), which gives the third estimate of the solution. The iteration formula is thus given by:
xi+1 = g(xi)   (29)
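Equation (29) translates directly into code. The following Python sketch (not part of the original notes; it assumes nonzero iterates for the relative-error test) carries out the fixed-point iteration until the estimated relative error is small:

# Fixed-point iteration: repeated application of Eq. (29), x_{i+1} = g(x_i).
def fixed_point(g, x1, eps=1e-6, max_iter=100):
    x = x1
    for _ in range(max_iter):
        x_new = g(x)                          # Eq. (29)
        if abs((x_new - x) / x) < eps:        # estimated relative error
            return x_new
        x = x_new
    return x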
Case b: (33)
Case c: (34)
These results show that the iteration function g(x) from Case b is the one that should be used since, in this case, |g'(1)| < 1 and |g'(2)| < 1.
Substituting g(x) from Case b in Eq. (29) gives:
(35)
The true error (the difference between the true solution and the
estimated solution) cannot be calculated since the true solution in
general is not known. As with Newton's method, the iterations can be
stopped either when the relative error or the tolerance in f(x) is smaller
than some predetermined value (Eqs. (18) or (19)).
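As an illustration of the requirement |g'(x)| < 1 (this rearrangement is chosen here for illustration and is not taken from the notes), the equation of Example 2, 8 - 4.5(x - sin x) = 0, can be rewritten in the form x = g(x) with g(x) = sin x + 8/4.5. Near the solution x ≈ 2.43, |g'(x)| = |cos x| ≈ 0.76 < 1, so the fixed-point iterations converge:

import math

# Fixed-point form of the Example 2 equation: g(x) = sin x + 8/4.5.
g = lambda x: math.sin(x) + 8.0 / 4.5

x = 2.0                              # starting value near the fixed point
for i in range(100):
    x_new = g(x)                     # Eq. (29)
    if abs((x_new - x) / x) < 1e-6:  # estimated relative error
        break
    x = x_new
print(x_new)                         # converges to x ≈ 2.4305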
SYSTEMS OF NONLINEAR EQUATIONS
A system of nonlinear equations consists of two or more nonlinear equations that have to be solved simultaneously. For example, Fig. 22 shows a catenary (hanging cable) curve, given by Eq. (38), and an ellipse, given by Eq. (39). The point of intersection of the two curves is the solution of the following system of nonlinear equations:
(38)
(39)
To derive Newton's method for two equations f1(x, y) = 0 and f2(x, y) = 0, the functions are expanded in Taylor series about a point (x1, y1) that is near the solution (x2, y2), with the partial derivatives evaluated at (x1, y1):
f1(x2, y2) = f1(x1, y1) + (∂f1/∂x)(x2 - x1) + (∂f1/∂y)(y2 - y1) + higher-order terms   (41)
f2(x2, y2) = f2(x1, y1) + (∂f2/∂x)(x2 - x1) + (∂f2/∂y)(y2 - y1) + higher-order terms   (42)
Since x2 and y2 are close to x1 and y1, approximate values for f1(x2, y2) and f2(x2, y2) can be calculated by neglecting the higher-order terms. Also, since f1(x2, y2) = 0 and f2(x2, y2) = 0, Eqs. (41) and (42) can be rewritten as:
(∂f1/∂x)Δx + (∂f1/∂y)Δy = -f1(x1, y1)   (43)
(∂f2/∂x)Δx + (∂f2/∂y)Δy = -f2(x1, y1)   (44)
Newton's Method for Solving a System of Nonlinear Equations
where Δx = x2 - x1 and Δy = y2 - y1. Since all the terms in Eqs. (43) and (44) are known except the unknowns Δx and Δy, these equations are a system of two linear equations. The system can be solved by using Cramer's rule:
Δx = [ -f1(x1, y1)·(∂f2/∂y) + f2(x1, y1)·(∂f1/∂y) ] / J   (45)
Δy = [ -f2(x1, y1)·(∂f1/∂x) + f1(x1, y1)·(∂f2/∂x) ] / J   (46)
where J, the determinant of the Jacobian of the system, is given by:
J = (∂f1/∂x)(∂f2/∂y) - (∂f1/∂y)(∂f2/∂x)   (47)
Once Δx and Δy are known, the second (improved) estimate of the solution is:
x2 = x1 + Δx   (49)
y2 = y1 + Δy   (50)
The procedure is then repeated, with the new estimate taking the place of (x1, y1), until Δx and Δy are smaller than the desired accuracy. In each iteration the linearized equations, Eqs. (43) and (44), can be written compactly in matrix form:
[J]{Δ} = -{f}   (52)
The Jacobian of the system is given by:
J = [ ∂f1/∂x   ∂f1/∂y ]
    [ ∂f2/∂x   ∂f2/∂y ]   (53)
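Putting the pieces together, Newton's method for a system of two nonlinear equations can be sketched in Python as follows (an illustrative sketch, not part of the original notes; the partial derivatives in the Jacobian are approximated here by central differences with step h, an assumption made so that only f1 and f2 need to be supplied):

# Newton's method for f1(x, y) = 0, f2(x, y) = 0: build the Jacobian (Eq. (53)),
# solve the linearized system (Eqs. (43)-(44)) for dx, dy with Cramer's rule
# (Eqs. (45)-(47)), and update the estimate (Eqs. (49)-(50)).
def newton_system_2(f1, f2, x, y, eps=1e-8, max_iter=50, h=1e-6):
    def d_dx(f, x, y):
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)

    def d_dy(f, x, y):
        return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

    for _ in range(max_iter):
        j11, j12 = d_dx(f1, x, y), d_dy(f1, x, y)        # first row of the Jacobian
        j21, j22 = d_dx(f2, x, y), d_dy(f2, x, y)        # second row of the Jacobian
        det = j11 * j22 - j12 * j21                      # Eq. (47)
        dx = (-f1(x, y) * j22 + f2(x, y) * j12) / det    # Eq. (45)
        dy = (-f2(x, y) * j11 + f1(x, y) * j21) / det    # Eq. (46)
        x, y = x + dx, y + dy                            # Eqs. (49) and (50)
        if abs(dx) < eps and abs(dy) < eps:
            break
    return x, y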