
Applied Mathematics in

Petroleum Engineering
References:
Numerical Methods for Engineers and Scientists
By: Joe D. Hoffman
Numerical Methods for Engineers
By: Kendall E. Atkinson
Numerical Methods for Engineers and Scientists
By: Amos Gilat ; Vish Subramaniam
Numerical Methods for Engineers
By: Steven C. Chapra ; Raymond P. Canale
Applied Numerical Analysis
By: Curtis F. Gerald; Patrick O. Wheatley
Numerical Methods for Engineers
By: Thomas R. Bewley
Contents:
-Solving Nonlinear Equations
-Systems of Nonlinear Equations
-Solving a System of Linear Equations
-Curve fitting
-Numerical differentiation
-Numerical integration
-Initial value problems
-Boundary value problems
-Modeling
Solving Nonlinear Equations
Overview of approaches in solving equations numerically
The process of solving an equation numerically is
different from the procedure used to find an analytical
solution. An analytical solution is obtained by deriving
an expression that has an exact numerical value. A
numerical solution is obtained in a process that starts by
finding an approximate solution and is followed by a
numerical procedure in which a better (more accurate)
solution is determined.
Overview of approaches in solving equations numerically
An initial numerical solution of an equation f(x) = 0 can be estimated by plotting
f(x) versus x and looking for the point where the graph crosses the x-axis. It is also
possible to write and execute a computer program that looks for a domain that
contains a solution. Such a program looks for a solution by evaluating f(x) at
different values of x. It starts at one value of x and then changes the value of x in
small increments. A change in the sign of f(x) indicates that there is a root within the
last increment. In most cases, when the equation that is solved is related to an
application in science or engineering, the range of x that includes the solution can be
estimated and used in the initial plot of f(x), or for a numerical search of a small
domain that contains a solution.
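A minimal sketch of such a search program follows (not part of the original slides; the function, search range, and step size used here are illustrative assumptions):

```python
import math

def find_sign_change(f, x_start, x_end, dx):
    """Scan [x_start, x_end] in steps of dx and return the first subinterval
    where f changes sign, i.e. a subinterval that contains a root."""
    x = x_start
    while x + dx <= x_end:
        if f(x) * f(x + dx) < 0:       # sign change: a root lies in [x, x + dx]
            return x, x + dx
        x += dx
    return None                        # no sign change detected in the range

# Illustrative use with the equation solved later in this section:
interval = find_sign_change(lambda x: 8 - 4.5 * (x - math.sin(x)), 0.0, 5.0, 0.5)
print(interval)                        # e.g. (2.0, 2.5)
```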
Overview of approaches in solving equations numerically
When an equation has more than one root, a numerical solution is
obtained one root at a time. The methods used for solving equations
numerically can be divided into two groups: bracketing methods and
open methods. In bracketing methods, illustrated in Fig. 4, an interval
that includes the solution is identified. By definition, the endpoints of
the interval are the upper bound and lower bound of the solution. Then,
by using a numerical scheme, the size of the interval is successively
reduced until the distance between the endpoints is less than the desired
accuracy of the solution. In open methods, illustrated in Fig. 5, an initial
estimate (one point) for the solution is assumed. The value of this initial
guess for the solution should be close to the actual solution. Then, by
using a numerical scheme, better (more accurate) values for the solution
are calculated. Bracketing methods always converge to the solution.
Open methods are usually more efficient but sometimes might not yield
the solution.
Overview of approaches in solving equations numerically

Figure 4: Illustration of a bracketing method. Figure 5: Illustration of an open method.
BISECTION METHOD
The bisection method is a bracketing method for finding a numerical
solution of an equation of the form f(x) = 0 when it is known that within
a given interval [a, b], f(x) is continuous and the equation has a solution.
When this is the case, f(x) will have opposite signs at the endpoints of
the interval. As shown in Fig. 6, if f(x) is continuous and has a solution
between the points x = a and x = b, then either f(a) > 0 and f(b) < 0 or
f(a) < 0 and f(b) > 0. In other words, if there is a solution between x = a
and x = b, then f(a)f(b) < 0.
BISECTION METHOD

Figure 6: Solution of f(x) = 0 between x = a and x = b.


BISECTION METHOD
The process of finding a solution with the bisection method is illustrated
in Fig. 7. It starts by finding points a and b that define an interval where
a solution exists. Such an interval is found either by plotting f(x) and
observing a zero crossing, or by examining the function for sign change.
The midpoint of the interval xNS1 is then taken as the first estimate for
the numerical solution. The true solution is either in the section between
points a and xNS1 or in the section between points xNS1 and b. If the
numerical solution is not accurate enough, a new interval that contains
the true solution is defined. The new interval is the half of the original
interval that contains the true solution, and its midpoint is taken as the
new (second) estimate of the numerical solution. The process continues
until the numerical solution is accurate enough according to a criterion
that is selected. The procedure (or algorithm) for finding a numerical
solution with the bisection method is summarized as follows:
BISECTION METHOD
Algorithm for the bisection method
1. Choose the first interval by finding points a and b such that a solution
exists between them. This means that f(a) and f(b) have different signs
such that f(a)f(b) < 0. The points can be determined by examining the
plot of f(x) versus x.

2. Calculate the first estimate of the numerical solution xNS1 by:

xNS1 = (a + b)/2

BISECTION METHOD
3. Determine whether the true solution is between a and xNS1, or between xNS1 and b. This is done by checking the sign of the product f(a)·f(xNS1):
If f(a)·f(xNS1) < 0, the true solution is between a and xNS1.
If f(a)·f(xNS1) > 0, the true solution is between xNS1 and b.
4. Select the subinterval that contains the true solution (a to xNS1, or xNS1 to b) as the new interval [a, b], and go back to step 2.
Steps 2 through 4 are repeated until a specified tolerance or error
bound is attained.
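A minimal Python sketch of this algorithm (not from the slides; the function name, tolerance, and iteration limit are illustrative choices). Steps 2 through 4 of the algorithm map directly onto the loop body.

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x_ns = (a + b) / 2.0            # step 2: midpoint of the current interval
        if (b - a) / 2.0 < tol:         # the midpoint is within tol of the true root
            return x_ns
        if f(a) * f(x_ns) < 0:          # step 3: the root lies in [a, x_ns]
            b = x_ns
        else:                           # otherwise the root lies in [x_ns, b]
            a = x_ns
    return x_ns                         # step 4 repeats until the tolerance is met
```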
BISECTION METHOD

Figure 7: Bisection method.


BISECTION METHOD
When should the bisection process be stopped?
Ideally, the bisection process should be stopped when the true solution
is obtained. This means that the value of xNs is such that f(xNs) = 0. In
reality, as discussed in Section 1, this true solution generally cannot be
found computationally. In practice, therefore, the process is stopped
when the estimated error, according to one of the measures listed in
Section 2, is smaller than some predetermined value. The choice of
termination criteria may depend on the problem that is actually solved.
BISECTION METHOD
Example 1: Solution of a nonlinear equation using the bisection method.
Determine the solution of the equation 8 - 4.5(x - sin x) = 0 by using
the bisection method. The solution should have a tolerance of less than
0.001 rad. Create a table that displays the values of a, b, xNS , f(xNS), and
the tolerance for each iteration of the bisection process.
BISECTION METHOD
SOLUTION
To find the approximate location of the solution, the function
f(x) = 8 - 4.5(x - sin x) is plotted. The plot (Fig. 8) shows that the
solution is between x = 2 and x = 3. The initial interval is chosen as
a = 2 and b = 3.

Figure 8: A plot of the function f(x) = 8 - 4.5(x - sin x).
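The iteration table itself is not reproduced in this copy. A short script along the following lines (a sketch, assuming the tolerance at each iteration is taken as half the current interval width) generates the values of a, b, xNS, f(xNS), and the tolerance for each iteration:

```python
import math

f = lambda x: 8 - 4.5 * (x - math.sin(x))

a, b = 2.0, 3.0
print(f"{'a':>8} {'b':>8} {'xNS':>10} {'f(xNS)':>12} {'tol':>10}")
while True:
    x_ns = (a + b) / 2.0              # midpoint estimate of the solution
    tol = (b - a) / 2.0               # error bound on the midpoint estimate
    print(f"{a:8.5f} {b:8.5f} {x_ns:10.6f} {f(x_ns):12.6f} {tol:10.6f}")
    if tol < 0.001:                   # stop once the bound is below 0.001 rad
        break
    if f(a) * f(x_ns) < 0:            # the root lies in [a, x_ns]
        b = x_ns
    else:                             # the root lies in [x_ns, b]
        a = x_ns
```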
BISECTION METHOD
Additional notes on the bisection method
• The method always converges to an answer, provided a root was
trapped in the interval [a, b] to begin with.
• The method may fail when the function is tangent to the x-axis and does
not cross it, since then f(x) does not change sign at the root.
• The method converges slowly relative to other methods.
REGULA FALSI METHOD
The regula falsi method (also called false position and linear
interpolation methods) is a bracketing method for finding a numerical
solution of an equation of the form f(x) = 0 when it is known that,
within a given interval [a, b], f(x) is continuous and the equation has a
solution. As illustrated in Fig. 9, the solution starts by finding an initial
interval [a1, b1] that brackets the solution. The values of the function at
the endpoints are f(a1) and f(b1). The endpoints are then connected by a
straight line, and the first estimate of the numerical solution, xNs1, is the
point where the straight line crosses the x-axis. This is in contrast to the
bisection method, where the midpoint of the interval was taken as the
solution. For the second iteration a new interval, [a2, b2] is defined.
REGULA FALSI METHOD

Figure 9: Regula Falsi method.


REGULA FALSI METHOD
The new interval is a subsection of the first interval that contains the
solution. It is either [a1, xNS1] (a1 is assigned to a2, and xNS1 to b2) or
[xNS1, b1] (xNS1 is assigned to a2, and b1 to b2). The endpoints of the second
interval are next connected with a straight line, and the point where this
new line crosses the x-axis is the second estimate of the solution, xNS2.
For the third iteration, a new subinterval [a3, b3] is selected, and the
iterations continue in the same way until the numerical solution is
deemed accurate enough.
For a given interval [a, b], the equation of a straight line that connects
point (b, f(b)) to point (a, f(a)) is given by:
REGULA FALSI METHOD
y - f(b) = [(f(a) - f(b)) / (a - b)]·(x - b)    (10)

The point xNS where the line intersects the x-axis is determined by
substituting y = 0 in Eq. (10), and solving the equation for x:

xNS = [a·f(b) - b·f(a)] / [f(b) - f(a)]    (11)

The procedure (or algorithm) for finding a solution with the regula falsi
method is almost the same as that for the bisection method.
REGULA FALSI METHOD
Algorithm for the regula falsi method
1. Choose the first interval by finding points a and b such that a solution exists between them.
This means that f(a) and f(b) have different signs such that f(a) f(b) < 0. The points can be
determined by looking at a plot of f(x) versus x.
2. Calculate the first estimate of the numerical solution xNs1 by using Eq. (11).
3. Determine whether the actual solution is between a and xNS1 or between xNS1 and b. This is done by checking the sign of the product f(a)·f(xNS1):
If f(a)·f(xNS1) < 0, the solution is between a and xNS1.
If f(a)·f(xNS1) > 0, the solution is between xNS1 and b.
4. Select the subinterval that contains the solution (a to xNS1, or xNS1 to b) as the new interval [a, b], and go back to step 2.
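A minimal Python sketch of this algorithm (the stopping test on |f(xNS)| is one illustrative choice of termination criterion, not the only one):

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b] by the regula falsi (false position) method."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_ns = a
    for _ in range(max_iter):
        # Eq. (11): intersection of the line through (a, f(a)) and (b, f(b)) with the x-axis
        x_ns = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(x_ns)) < tol:        # stop when f is close enough to zero
            break
        if f(a) * f(x_ns) < 0:        # the root lies between a and x_ns
            b = x_ns
        else:                         # the root lies between x_ns and b
            a = x_ns
    return x_ns
```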
REGULA FALSI METHOD

When should the iterations be stopped?

The iterations are stopped when the estimated error, according to one of
the measures listed in the previous section, is smaller than some
predetermined value.
REGULA FALSI METHOD
Additional notes on the regula falsi method
The method always converges to an answer, provided a root is initially
trapped in the interval [a, b] .
Frequently, as in the case shown in Fig. 9, the function in the interval [a,
b] is either concave up or concave down. In this case, one of the
endpoints of the interval stays the same in all the iterations, while the
other endpoint advances toward the root. In other words, the numerical
solution advances toward the root only from one side. The convergence
toward the solution could be faster if the other endpoint would also
"move" toward the root. Several modifications have been introduced to
the regula falsi method that make the subinterval in successive iterations
approach the root from both sides.
Problem:
NEWTON’S METHOD
Newton's method (also called the Newton-Raphson method) is a scheme
for finding a numerical solution of an equation of the form f(x) = 0
where f(x) is continuous and differentiable and the equation is known to
have a solution near a given point. The method is illustrated in Fig. 10.

Figure 10: Newton's method.


NEWTON’S METHOD

The solution process starts by choosing point x1 as the first estimate of


the solution. The second estimate x2 is obtained by taking the tangent
line to f(x) at the point (x1, f(x1)) and finding the intersection point of
the tangent line with the x-axis. The next estimate x3 is the intersection
of the tangent line to f(x) at the point (x2, f(x2)) with the x-axis, and so
on. Mathematically, for the first iteration, the slope, f '(x1), of the
tangent at point (x1, f(x1)) is given by:
NEWTON’S METHOD
f'(x1) = [f(x1) - 0] / (x1 - x2)    (12)

Solving Eq. (12) for x2 gives:

x2 = x1 - f(x1)/f'(x1)    (13)

Equation (13) can be generalized for determining the "next" solution xi+1
from the present solution xi:

xi+1 = xi - f(xi)/f'(xi)    (14)
NEWTON’S METHOD
Equation (14) is the general iteration formula for Newton's method. It is
called an iteration formula because the solution is found by repeated
application of Eq. (14) for each successive value of i.
Newton's method can also be derived by using Taylor series. Taylor
series expansion of f(x) about x1 is given by:

f(x) = f(x1) + (x - x1)·f'(x1) + (1/2!)·(x - x1)²·f''(x1) + ...    (15)
NEWTON’S METHOD
If x2 is a solution of the equation f(x) = 0 and x1 is a point near x2, then:

0 = f(x2) = f(x1) + (x2 - x1)·f'(x1) + (1/2!)·(x2 - x1)²·f''(x1) + ...    (16)

By considering only the first two terms of the series, an approximate


solution can be determined by solving Eq. (16) for x2:

x2 = x1 - f(x1)/f'(x1)    (17)
NEWTON’S METHOD

The result is the same as Eq. (13). In the next iteration the Taylor

expansion is written about point x2, and an approximate solution x3 is

calculated. The general formula is the same as that given in Eq. (14).
NEWTON’S METHOD
Algorithm for Newton's method
1. Choose a point x1 as an initial guess of the solution.
2. For i = 1, 2, ..., until the error is smaller than a specified value, calculate xi+1 by using Eq. (14).
When are the iterations stopped?
Ideally, the iterations should be stopped when an exact solution is obtained. This
means that the value of x is such that f(x) = 0. Generally, this exact solution cannot
be found computationally. In practice, therefore, the iterations are stopped when an
estimated error is smaller than some predetermined value. A tolerance in the
solution, as in the bisection method, cannot be calculated since bounds are not
known. Two error estimates that are typically used with Newton's method are:
Estimated relative error: the iterations are stopped when the estimated relative error
is smaller than a specified value ε:
|(xi+1 - xi)/xi| ≤ ε    (18)
Tolerance in f(x): the iterations are stopped when the absolute value of f(xi+1) is
smaller than a specified tolerance δ:
|f(xi+1)| ≤ δ    (19)
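A minimal Python sketch of Newton's method with the relative-error stopping test of Eq. (18) (the names and default values below are illustrative choices):

```python
def newton(f, df, x1, eps=1e-6, max_iter=50):
    """Newton's method, Eq. (14), stopped on the estimated relative error, Eq. (18)."""
    x = x1
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)              # Eq. (14)
        if abs((x_new - x) / x) <= eps:       # Eq. (18); assumes the iterate is nonzero
            return x_new
        x = x_new
    return x
```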
NEWTON’S METHOD
Example 2: Solution of equation using Newton's method.
Find the solution of the equation 8-4.5(x- sinx) = 0 (the same equation
as in Example 1) by using Newton's method in the following way:
Using a nonprogrammable calculator, calculate the first two iterations
on paper using six significant figures.
SOLUTION
In the present problem, f(x) = 8 - 4.5(x - sin x) and f'(x) = -4.5(1 - cos x).
(a) To start the iterations, f(x) and f'(x) are substituted in Eq. (14):
xi+1 = xi - [8 - 4.5(xi - sin xi)] / [-4.5(1 - cos xi)]    (20)
In the first iteration, i = 1 and x1 = 2, and Eq. (20) gives:
x2 = 2 - [8 - 4.5(2 - sin 2)] / [-4.5(1 - cos 2)] = 2.48517    (21)
For the second iteration, i = 2 and x2 = 2.48517, and Eq. (20) gives:
x3 = 2.48517 - [8 - 4.5(2.48517 - sin 2.48517)] / [-4.5(1 - cos 2.48517)] = 2.43099    (22)
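These two iterations can be checked with a few lines of code (a sketch; the printed values should reproduce Eqs. (21) and (22) to the figures shown):

```python
import math

f  = lambda x: 8 - 4.5 * (x - math.sin(x))
df = lambda x: -4.5 * (1 - math.cos(x))       # f'(x)

x = 2.0                                       # starting point x1 = 2
for i in range(1, 3):
    x = x - f(x) / df(x)                      # Eq. (20), i.e. Eq. (14) for this f
    print(f"x{i + 1} = {x:.6f}")              # expect x2 ≈ 2.48517, x3 ≈ 2.43099
```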
NEWTON’S METHOD
Notes on Newton's method
• The method, when successful, works well and converges fast. When it
does not converge, it is usually because the starting point is not close
enough to the solution. Convergence problems typically occur when the
value of f '(x) is close to zero in the vicinity of the solution (where f(x) =
0). It is possible to show that Newton's method converges if the function
f(x) and its first and second derivatives f '(x) and f "(x) are all
continuous, if f '(x) is not zero at the solution, and if the starting value x1
is near the actual solution. Illustrations of two cases where Newton's
method does not converge (i.e., diverges) are shown in Fig.12.
NEWTON’S METHOD

Figure 12: Cases where Newton's method diverges.


NEWTON’S METHOD
• A function f '(x), which is the derivative of the function f(x), has to be
substituted in the iteration formula, Eq. (14). In many cases, it is simple
to write the derivative, but sometimes it can be difficult to determine.
When an expression for the derivative is not available, it might be
possible to determine the slope numerically or to find a solution by
using the secant method (Next section), which is somewhat
similar to Newton's method but does not require an expression for the
derivative.
Next, Example 3 illustrates the effect that the starting point can
have on a numerical solution with Newton's method.
NEWTON’S METHOD
Example 3: Convergence of Newton's method.
Find the solution of the equation (1/x) - 2 = 0 by using Newton's method. For
the starting point (initial x estimate of the solution) use:
( a) x= 1.4 , (b) x= 1, and (c) x= 0.4
SOLUTION
The equation can easily be solved analytically, and the exact solution is x =
0.5.
For a numerical solution with Newton's method, the function f(x) = (1/x) - 2
and its derivative f'(x) = -(1/x²) are substituted in Eq. (14):

xi+1 = xi - [(1/xi) - 2] / [-(1/xi²)] = 2xi - 2xi²    (23)
NEWTON’S METHOD
(a) When the starting point for the iterations is x1 = 1.4, the next two
iterations, using Eq. (23), are:
x2 = 2(1.4) - 2(1.4)² = -1.12,    x3 = 2(-1.12) - 2(-1.12)² = -4.7488
These results indicate that Newton's method diverges. This case is
illustrated in Fig. 13a.
NEWTON’S METHOD
(b) When the starting point for the iterations is x1 = 1, the next two iterations, using Eq. (23), are:
x2 = 2(1) - 2(1)² = 0,    x3 = 2(0) - 2(0)² = 0
From these results it looks like the solution converges to x = 0, which is not a solution. At x = 0 the function is actually not defined (it is a singular point). A value is obtained from Eq. (23) only because the equation was simplified. This case is illustrated in Fig. 13b.
(c) When the starting point for the iterations is x1 = 0.4, the next two iterations, using Eq. (23), are:
x2 = 2(0.4) - 2(0.4)² = 0.48,    x3 = 2(0.48) - 2(0.48)² = 0.4992
In this case, Newton's method converges to the correct solution. This case is illustrated in
Fig. 13c.
This example also shows that if the starting point is close enough to the true solution,
Newton's method converges.
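These three cases can be reproduced with a short script (a sketch that applies the simplified iteration formula of Eq. (23)):

```python
# Eq. (23) simplified: x_{i+1} = x_i - (1/x_i - 2)/(-1/x_i**2) = 2*x_i - 2*x_i**2
g = lambda x: 2 * x - 2 * x ** 2

for x1 in (1.4, 1.0, 0.4):               # the three starting points of Example 3
    x = x1
    seq = []
    for _ in range(2):                   # the next two iterations
        x = g(x)
        seq.append(round(x, 4))
    print(f"x1 = {x1}: {seq}")
# Expected behaviour: 1.4 diverges, 1.0 collapses to 0 (not a root), 0.4 -> 0.5
```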
NEWTON’S METHOD

Figure 13: Solution with Newton's method using different starting points.
SECANT METHOD
The secant method is a scheme for finding a numerical solution of an
equation of the form f(x) = 0. The method uses two points in the
neighborhood of the solution to determine a new estimate for the
solution (Fig. 14). The two points (marked as x1 and x2 in the figure) are
used to define a straight line (secant line), and the point where the line
intersects the x-axis (marked as x3 in the figure) is the new estimate for
the solution. As shown, the two points can be on one side of the
solution(Fig.14a) or the solution can be between the two points (Fig.
14b).
SECANT METHOD

Figure 14: The secant method.


SECANT METHOD
The slope of the secant line is given by:

[f(x1) - f(x2)] / (x1 - x2) = [f(x2) - 0] / (x2 - x3)    (24)

which can be solved for x3:

x3 = x2 - f(x2)·(x1 - x2) / [f(x1) - f(x2)]    (25)
SECANT METHOD
Once point x3 is determined, it is used together with point x2 to calculate
the next estimate of the solution, x4. Equation (25) can be generalized
to an iteration formula in which a new estimate of the solution xi+1 is
determined from the previous two estimates xi and xi-1:

xi+1 = xi - f(xi)·(xi-1 - xi) / [f(xi-1) - f(xi)]    (26)
Figure 15 illustrates the iteration process with
the secant method.

Figure 15: Secant method.


SECANT METHOD
Relationship to Newton's method
Examination of the secant method shows that when the two points that define the
secant line are close to each other, the method is actually an approximated form of
Newton's method. This can be seen by rewriting Eq. (26) in the form:

xi+1 = xi - f(xi) / {[f(xi-1) - f(xi)] / (xi-1 - xi)}    (27)

This equation is almost identical to Eq. (14) of Newton's method. In Eq. (27), the
denominator of the second term on the right-hand side of the equation is an
approximation of the value of the derivative of f(x) at xi. In Eq. (14), the
denominator is actually the derivative f'(x). In the secant method (unlike Newton's
method), it is not necessary to know the analytical form of f'(x).
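A minimal Python sketch of the secant iteration, Eq. (26) (the relative-error stopping test mirrors the one used for Newton's method and is an illustrative choice):

```python
def secant(f, x1, x2, eps=1e-6, max_iter=50):
    """Secant method, Eq. (26): no derivative of f is required."""
    x3 = x2
    for _ in range(max_iter):
        # Eq. (26): new estimate from the two previous ones
        x3 = x2 - f(x2) * (x1 - x2) / (f(x1) - f(x2))
        if abs((x3 - x2) / x2) <= eps:     # estimated relative error, as in Eq. (18)
            return x3
        x1, x2 = x2, x3                    # keep only the two most recent estimates
    return x3
```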
Problem:
Problem:
FIXED-POINT ITERATION METHOD
Fixed-point iteration is a method for solving an equation of the form
f(x) = 0. The method is carried out by rewriting the equation in the
form:
x = g(x) (28)
Obviously, when x is the solution of f(x) = 0, the left side and the right
side of Eq. (28) are equal. This is illustrated graphically by plotting y
= x and y = g( x), as shown in Fig. 17. The point of intersection of the
two plots, called the fixed point, is the solution. The numerical value of
the solution is determined by an iterative process. It starts by taking a
value of x near the fixed point as the first guess for the solution and
substituting it in g(x).
FIXED-POINT ITERATION METHOD
The value of g(x) that is obtained is the new (second) estimate for the
solution. The second value is then substituted back in g(x), which then
gives the third estimate of the solution. The iteration formula is thus
given by:
xi+1 = g(xi)    (29)

Figure 17: Fixed-point iteration method.
The function g(x) is called the iteration function.
FIXED-POINT ITERATION METHOD
When the method works, the values of x that are obtained are successive
iterations that progressively converge toward the solution. Two such
cases are illustrated graphically in Fig. 18. The solution process starts by
choosing point x1 on the x-axis and drawing a vertical line that
intersects the curve y = g(x) at point g(x1). Since x2 = g(x1), a horizontal
line is drawn from point (x1, g(x1)) toward the line y = x. The
intersection point gives the location of x2 . From x2 a vertical line is
drawn toward the curve y = g(x). The intersection point is now (x2,
g(x2)), and g(x2) is also the value of x3. From point (x2, g(x2)) a
horizontal line is drawn again toward y = x, and the intersection point
gives the location of x3. As the process continues, the intersection points
converge toward the fixed point, that is, the true solution xTS.
FIXED-POINT ITERATION METHOD

Figure 18: Convergence of the fixed-point iteration method.


FIXED-POINT ITERATION METHOD
It is possible, however, that the iterations will not converge toward the
fixed point, but rather diverge away. This is shown in Fig. 19. The
figure shows that even though the starting point is close to the solution,
the subsequent points are moving farther away from the solution.

Figure 19: Divergence of the fixed-point iteration method.


FIXED-POINT ITERATION METHOD
Choosing the appropriate iteration function g(x)
For a given equation f(x) = 0, the iteration function is not unique since it
is possible to change the equation into the form x = g(x) in different
ways. This means that several iteration functions g(x) can be written for
the same equation. A g(x) that should be used in Eq. (29) for the
iteration process is one for which the iterations converge toward the
solution. There might be more than one form that can be used, or it may
be that none of the forms are appropriate so that the fixed-point iteration
method cannot be used to solve the equation. In cases where there are
multiple solutions, one iteration function may yield one root, while a
different function yields other roots. Actually, it is possible to determine
ahead of time whether the iterations converge or diverge for a specific g(x):
the iterations converge in the neighborhood of a solution provided that
|g'(x)| < 1 there.
FIXED-POINT ITERATION METHOD
As an example, consider the equation:
x·e^(0.5x) + 1.2x - 5 = 0    (31)
A plot of the function f(x) = x·e^(0.5x) + 1.2x - 5 (see Fig. 20) shows that the
equation has a solution between x = 1 and x = 2.
Equation (31) can be rewritten in the form x = g(x) in different ways. Three
possibilities are discussed next.

Figure 20: A plot of f(x) = x·e^(0.5x) + 1.2x - 5.
FIXED-POINT ITERATION METHOD
Case a: (32)
FIXED-POINT ITERATION METHOD

Case b: (33)
FIXED-POINT ITERATION METHOD

Case c: (34)
FIXED-POINT ITERATION METHOD
These results show that the iteration function g(x) from Case b is the
one that should be used since, in this case, |g'(1)| < 1 and |g'(2)| < 1.
Substituting g(x) from Case b in Eq. (29) gives:

(35)

Starting with x1 = 1, the first few iterations are:
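The Case b expression is not reproduced in this copy. For illustration, the sketch below assumes the rearrangement g(x) = 5/(e^(0.5x) + 1.2), which satisfies |g'(1)| < 1 and |g'(2)| < 1; the slides' actual Case b formula may differ.

```python
import math

# Assumed iteration function: one rearrangement of x*exp(0.5x) + 1.2x - 5 = 0
# with |g'(x)| < 1 on [1, 2]; the slides' Case b expression may differ.
g = lambda x: 5.0 / (math.exp(0.5 * x) + 1.2)

x = 1.0                                   # starting guess x1 = 1
for i in range(2, 8):
    x = g(x)                              # Eq. (29): x_{i+1} = g(x_i)
    print(f"x{i} = {x:.5f}")              # the iterates approach the root near x ≈ 1.5
```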


FIXED-POINT ITERATION METHOD

When should the iterations be stopped?

The true error (the difference between the true solution and the
estimated solution) cannot be calculated since the true solution in
general is not known. As with Newton's method, the iterations can be
stopped either when the relative error or the tolerance in f(x) is smaller
than some predetermined value (Eqs. (18) or (19)).
Problem
Problem
SYSTEMS OF NONLINEAR EQUATIONS
A system of nonlinear equations consists of two, or more, nonlinear equations that
have to be solved simultaneously. For example, Fig. 22 shows a catenary (hanging
cable) curve given by the equation and an ellipse specified
by the equation . The point of intersection between the two curves is
given by the solution of the following system of nonlinear equations:

(38)

(39)

Analysis of many problems in science and engineering requires solution of systems


of nonlinear equations. In addition, as shown in a later chapter, one of the popular
numerical methods for solving nonlinear ordinary differential equations (the finite
difference method) requires the solution of a system of nonlinear algebraic
equations.
In this section, two methods for solving systems of nonlinear equations
are presented: Newton's method (also called the Newton-Raphson method),
which is suitable for solving small systems, and a second method based on
the Laplace transformation.

Figure 22: A plot of Eq. (38) and Eq. (39).
Newton's Method for Solving a System of Nonlinear Equations

Newton's method for solving a system of nonlinear equations is an


extension of the method used for solving a single equation (Section 5).
The method is first derived in detail for the solution of a system of two
nonlinear equations. Subsequently, a general formulation is presented
for the case of a system of n nonlinear equations.
Newton's Method for Solving a System of
Nonlinear Equations
Solving a system of two nonlinear equations
A system of two equations with two unknowns x and y can be written as:
f1(x, y) = 0
f2(x, y) = 0    (40)
The solution process starts by choosing an estimated solution x1 and y1.
If x2 and y2 are the true (unknown) solutions of the system and are
sufficiently close to x1 and y1, then the values of f1 and f2 at x2 and
y2 can be expressed using a Taylor series expansion of the functions
f1(x, y) and f2(x, y) about (x1, y1):
Newton's Method for Solving a System of
Nonlinear Equations

f1(x2, y2) = f1(x1, y1) + (∂f1/∂x)·(x2 - x1) + (∂f1/∂y)·(y2 - y1) + higher-order terms    (41)

f2(x2, y2) = f2(x1, y1) + (∂f2/∂x)·(x2 - x1) + (∂f2/∂y)·(y2 - y1) + higher-order terms    (42)

where the partial derivatives are evaluated at (x1, y1). Since x2 and y2 are close
to x1 and y1, approximate values for f1(x2, y2) and f2(x2, y2) can be calculated by
neglecting the higher-order terms. Also, since f1(x2, y2) = 0 and f2(x2, y2) = 0,
Eqs. (41) and (42) can be rewritten as:
(∂f1/∂x)·∆x + (∂f1/∂y)·∆y = -f1(x1, y1)    (43)

(∂f2/∂x)·∆x + (∂f2/∂y)·∆y = -f2(x1, y1)    (44)
Newton's Method for Solving a System of
Nonlinear Equations
where ∆x = x2 - x1 and ∆y = y2 - y1. Since all the terms in Eqs. (43) and
(44) are known except the unknowns ∆x and ∆y, these equations are a
system of two linear equations. The system can be solved by using
Cramer's rule:
∆x = [-f1·(∂f2/∂y) + f2·(∂f1/∂y)] / J(f1, f2)    (45)

∆y = [-f2·(∂f1/∂x) + f1·(∂f2/∂x)] / J(f1, f2)    (46)
Newton's Method for Solving a System of
Nonlinear Equations

where
J(f1, f2) = (∂f1/∂x)·(∂f2/∂y) - (∂f1/∂y)·(∂f2/∂x)    (47)

is the Jacobian. Once ∆x and ∆y are known, the values of x2 and y2
are calculated by:
x2 = x1 + ∆x
y2 = y1 + ∆y    (48)
Newton's Method for Solving a System of
Nonlinear Equations
Obviously, the values of x2 and y2 that are obtained are not the true
solution, since the higher-order terms in Eqs. (41) and (42) were
neglected. Nevertheless, these values are expected to be closer to the
true solution than x1 and y1.
The solution process continues by using x2 and y2 as the new estimates
for the solution and using Eqs. (43) and (44) to determine new ∆x and
∆y that give x3 and y3. The iterations continue until two successive
answers differ by an amount smaller than a desired value.
Newton's Method for Solving a System of
Nonlinear Equations
Example 5: Solution of a system of nonlinear equations using Newton's
method.
The equations of the catenary curve and the ellipse, which are shown in
the figure, are given by:

(49)

(50)

Use Newton's method to determine the point of intersection of the


curves that resides in the first quadrant of the coordinate system.
Newton's Method for Solving a System of
Nonlinear Equations
SOLUTION:
Equations (49) and (50) are a system of two nonlinear equations. The
points of intersection are
given by the solution of the system. The solution with Newton's method
is obtained by using Eqs. (43) and (44). In the present problem, the
partial derivatives in the equations are given by:
(51)

(52)
Newton's Method for Solving a System of
Nonlinear Equations
The Jacobian is given by:

(53)

Substituting Eqs. (51)-(53) in Eqs. (45) and (46) gives the solution for ∆x and ∆y.


The order of operations in the program is:


Newton's Method for Solving a System of
Nonlinear Equations
• The solution is initiated by the initial guess x1 = 2.5, y1 = 2.0.
• The iterations start. ∆x and ∆y are determined by substituting xi and yi
in Eqs. (45) and (46).
• xi+1 = xi + ∆x and yi+1 = yi + ∆y are determined.
• If the estimated relative error for both variables is smaller than 0.001,
the iterations stop. Otherwise, the values of xi+1 and yi+1 are assigned to
xi and yi, respectively, and the next iteration starts.
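A sketch of this procedure for a general two-equation system (the catenary and ellipse expressions of Eqs. (49)-(52) are not reproduced in this copy, so f1, f2 and their partial derivatives are passed in as functions supplied by the caller):

```python
def newton_system2(f1, f2, df1dx, df1dy, df2dx, df2dy, x, y, eps=1e-3, max_iter=50):
    """Newton's method for two nonlinear equations, Eqs. (43)-(48)."""
    for _ in range(max_iter):
        # Jacobian, Eq. (47), evaluated at the current estimate (x, y)
        jac = df1dx(x, y) * df2dy(x, y) - df1dy(x, y) * df2dx(x, y)
        # Cramer's rule, Eqs. (45)-(46)
        dx = (-f1(x, y) * df2dy(x, y) + f2(x, y) * df1dy(x, y)) / jac
        dy = (-f2(x, y) * df1dx(x, y) + f1(x, y) * df2dx(x, y)) / jac
        x, y = x + dx, y + dy                        # Eq. (48)
        if abs(dx / x) < eps and abs(dy / y) < eps:  # estimated relative error test
            return x, y
    return x, y
```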
Cramer's rule
For the linear system
a1·x + b1·y = c1
a2·x + b2·y = c2
the determinants are
D = a1·b2 - a2·b1,   Dx = c1·b2 - c2·b1,   Dy = a1·c2 - a2·c1
and the solution is
x = Dx / D,   y = Dy / D
