Numerical Methods

Chapter 2

Root Finding

Solve f(x) = 0 for x, when an explicit analytical solution is impossible.

2.1 Bisection Method

The bisection method is the easiest to implement numerically and almost always works. The main disadvantage is that convergence is slow. If the bisection method results in a computer program that runs too slowly, then other, faster methods may be chosen; otherwise it is a good choice of method.

We want to construct a sequence x_0, x_1, x_2, ... that converges to the root x = r that solves f(x) = 0. We choose x_0 and x_1 such that x_0 < r < x_1. We say that x_0 and x_1 bracket the root. With f(r) = 0, we want f(x_0) and f(x_1) to be of opposite sign, so that f(x_0) f(x_1) < 0. We then assign x_2 to be the midpoint of x_0 and x_1, that is x_2 = (x_0 + x_1)/2, or
\[
x_2 = x_0 + \frac{x_1 - x_0}{2}.
\]
The sign of f(x_2) can then be determined. The value of x_3 is then chosen as either the midpoint of x_0 and x_2 or the midpoint of x_2 and x_1, depending on whether x_0 and x_2 bracket the root, or x_2 and x_1 bracket the root. The root therefore stays bracketed at all times. The algorithm proceeds in this fashion and is typically stopped when the increment to the left side of the bracket (above, given by (x_1 - x_0)/2) is smaller than some required precision.

2.2 Newton's Method

This is the fastest method, but it requires analytical computation of the derivative of f(x). Also, the method may not always converge to the desired root.

We can derive Newton's Method graphically, or by a Taylor series. We again want to construct a sequence x_0, x_1, x_2, ... that converges to the root x = r. Consider the x_{n+1} member of this sequence, and Taylor series expand f(x_{n+1}) about the point x_n. We have
\[
f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n) f'(x_n) + \cdots.
\]
To determine x_{n+1}, we drop the higher-order terms in the Taylor series and assume f(x_{n+1}) = 0. Solving for x_{n+1}, we have
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
\]
Starting Newton's Method requires a guess for x_0, hopefully close to the root x = r.

2.3 Secant Method

The Secant Method is second best to Newton's Method, and is used when a faster convergence than bisection is desired, but it is too difficult or impossible to take an analytical derivative of the function f(x). In place of f'(x_n) we write the slope of the secant line between two points,
\[
f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}.
\]
Starting the Secant Method requires a guess for both x_0 and x_1.

2.3.1 Estimate √2 = 1.41421356 using Newton's Method

The number √2 is a zero of the function f(x) = x^2 - 2. To implement Newton's Method, we use f'(x) = 2x. Therefore, Newton's Method is the iteration
\[
x_{n+1} = x_n - \frac{x_n^2 - 2}{2 x_n}.
\]
We take as our initial guess x_0 = 1. Then
\[
x_1 = 1 - \frac{-1}{2} = \frac{3}{2} = 1.5, \qquad
x_2 = \frac{3}{2} - \frac{\frac{9}{4} - 2}{3} = \frac{17}{12} = 1.416667, \qquad
x_3 = \frac{17}{12} - \frac{\left(\frac{17}{12}\right)^2 - 2}{\frac{17}{6}} = \frac{577}{408} = 1.41422.
\]

2.3.2 Example of fractals using Newton's Method

Consider the complex roots of the equation f(z) = 0, where
\[
f(z) = z^3 - 1.
\]
These roots are the three cubic roots of unity. With
\[
e^{i 2\pi n} = 1, \quad n = 0, 1, 2, \ldots,
\]
the three unique cubic roots of unity are given by
\[
1, \quad e^{i 2\pi/3}, \quad e^{i 4\pi/3}.
\]
With
\[
e^{i\theta} = \cos\theta + i \sin\theta,
\]
and cos(2π/3) = -1/2, sin(2π/3) = √3/2, the three cubic roots of unity are
\[
r_1 = 1, \qquad r_2 = -\frac{1}{2} + \frac{\sqrt{3}}{2}\, i, \qquad r_3 = -\frac{1}{2} - \frac{\sqrt{3}}{2}\, i.
\]
The interesting idea here is to determine which initial values of z_0 in the complex plane converge to which of the three cubic roots of unity. Newton's Method applied to f(z) is
\[
z_{n+1} = z_n - \frac{z_n^3 - 1}{3 z_n^2}.
\]
If the iteration converges to r_1, we color z_0 red; to r_2, blue; to r_3, green. The result will be shown in lecture.
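A sketch of how this coloring experiment might be set up in MATLAB follows. This is not the course code: the grid size, the domain, and the fixed iteration count are arbitrary illustrative choices, and non-converged boundary points are simply assigned to the nearest root. (The elementwise distance computation relies on implicit expansion, available in MATLAB R2016b and later.)

    % Sketch: basins of attraction of Newton's Method for f(z) = z^3 - 1.
    nx = 400;                                % grid resolution (arbitrary)
    x = linspace(-2, 2, nx);
    [X, Y] = meshgrid(x, x);
    Z = X + 1i*Y;                            % initial values z0 in the complex plane
    for n = 1:40                             % Newton iteration z <- z - (z^3 - 1)/(3 z^2)
        Z = Z - (Z.^3 - 1)./(3*Z.^2);
    end
    r = [1, exp(2i*pi/3), exp(4i*pi/3)];     % the three cubic roots of unity
    [~, basin] = min(abs(Z(:) - r), [], 2);  % index of the nearest root for each z0
    image(x, x, reshape(basin, nx, nx));     % color 1 = red (r1), 2 = blue (r2), 3 = green (r3)
    colormap([1 0 0; 0 0 1; 0 1 0]);
    axis xy equal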

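More generally, all three root-finding methods of this chapter can be written in a few lines of MATLAB. The following is a minimal sketch, applied to f(x) = x^2 - 2 from the example above; the bracketing interval, tolerances, and iteration counts are illustrative choices.

    % Sketch: bisection, Newton, and secant iterations for f(x) = x^2 - 2.
    f  = @(x) x.^2 - 2;        % function whose positive root is sqrt(2)
    fp = @(x) 2*x;             % analytical derivative, needed by Newton's Method

    % Bisection Method: x0 and x1 must bracket the root, f(x0)*f(x1) < 0.
    x0 = 1; x1 = 2;
    while (x1 - x0)/2 > 1e-10
        x2 = (x0 + x1)/2;
        if f(x0)*f(x2) < 0, x1 = x2; else, x0 = x2; end
    end
    root_bisection = (x0 + x1)/2;

    % Newton's Method: x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = 1;
    for n = 1:20
        x = x - f(x)/fp(x);
    end
    root_newton = x;

    % Secant Method: replace f'(x_n) by the slope of the secant line.
    xm = 1; x = 2;             % requires guesses for both x0 and x1
    while abs(x - xm) > 1e-12
        xnew = x - f(x)*(x - xm)/(f(x) - f(xm));
        xm = x; x = xnew;
    end
    root_secant = x;

    fprintf('bisection %.10f  Newton %.10f  secant %.10f\n', ...
            root_bisection, root_newton, root_secant)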


However, once the LU decomposition of a matrix A is known, the solution of Ax = b can proceed by a forward and backward substitution. How does a backward substitution, say, scale? For backward substitution, the matrix equation to be solved is of the form
\[
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n-1} & a_{1,n} \\
0 & a_{2,2} & \cdots & a_{2,n-1} & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
0 & 0 & \cdots & 0 & a_{n,n}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{pmatrix}.
\]
The solution for x_i is found after solving for x_j with j > i. The explicit solution for x_i is given by
\[
x_i = \frac{1}{a_{i,i}} \left( b_i - \sum_{j=i+1}^{n} a_{i,j} x_j \right).
\]
The solution for x_i requires n - i + 1 multiplication-additions, and since this must be done for n such i's, we have
\[
\text{op. counts} = \sum_{i=1}^{n} (n - i + 1) = n + (n - 1) + \cdots + 1 = \sum_{i=1}^{n} i = \frac{1}{2} n (n + 1).
\]
The leading-order term is n^2/2, and the scaling of backward substitution is O(n^2). After the LU decomposition of a matrix A is found, only a single forward and backward substitution is required to solve Ax = b, and the scaling of the algorithm to solve this matrix equation is therefore still O(n^2). For large n, one should expect that solving Ax = b by a forward and backward substitution should be substantially faster than a direct solution using Gaussian elimination.

3.5 System of nonlinear equations

A system of nonlinear equations can be solved using a version of Newton's Method. We illustrate this method for a system of two equations and two unknowns. Suppose that we want to solve
\[
f(x, y) = 0, \qquad g(x, y) = 0,
\]
for the unknowns x and y. We want to construct two simultaneous sequences x_0, x_1, x_2, ... and y_0, y_1, y_2, ... that converge to the roots. To construct these sequences, we Taylor series expand f(x_{n+1}, y_{n+1}) and g(x_{n+1}, y_{n+1}) about the point (x_n, y_n). Using the partial derivatives f_x = ∂f/∂x, f_y = ∂f/∂y, etc., the two-dimensional Taylor series expansions, displaying only the linear terms, are given by
\[
f(x_{n+1}, y_{n+1}) = f(x_n, y_n) + (x_{n+1} - x_n) f_x(x_n, y_n) + (y_{n+1} - y_n) f_y(x_n, y_n) + \cdots,
\]
and similarly for g(x_{n+1}, y_{n+1}). Setting f(x_{n+1}, y_{n+1}) = 0 and g(x_{n+1}, y_{n+1}) = 0 and dropping the higher-order terms then leaves a two-by-two linear system, with the Jacobian matrix of partial derivatives as its coefficient matrix, to be solved for the increments x_{n+1} - x_n and y_{n+1} - y_n.




5.1 Polynomial interpolation

The matrix is called the Vandermonde matrix, and can be constructed using the MATLAB function vander.m. The system of linear equations can be solved in MATLAB using the \ operator, and the MATLAB function polyval.m can be used to interpolate using the c coefficients. I will illustrate this in class and place the code on the website.

5.1.2 Lagrange polynomial

The Lagrange polynomial is the most clever construction of the interpolating polynomial P_n(x), and leads directly to an analytical formula. The Lagrange polynomial is the sum of n + 1 terms, and each term is itself a polynomial of degree n. The full polynomial is therefore of degree n. Counting from 0, the ith term of the Lagrange polynomial is constructed by requiring it to be zero at x_j with j ≠ i, and equal to y_i when j = i. The polynomial can be written as
\[
P_n(x) = \frac{(x - x_1)(x - x_2) \cdots (x - x_n)\, y_0}{(x_0 - x_1)(x_0 - x_2) \cdots (x_0 - x_n)}
       + \frac{(x - x_0)(x - x_2) \cdots (x - x_n)\, y_1}{(x_1 - x_0)(x_1 - x_2) \cdots (x_1 - x_n)}
       + \cdots
       + \frac{(x - x_0)(x - x_1) \cdots (x - x_{n-1})\, y_n}{(x_n - x_0)(x_n - x_1) \cdots (x_n - x_{n-1})}.
\]
It can be clearly seen that the first term is equal to zero when x = x_1, x_2, ..., x_n and equal to y_0 when x = x_0; the second term is equal to zero when x = x_0, x_2, ..., x_n and equal to y_1 when x = x_1; and the last term is equal to zero when x = x_0, x_1, ..., x_{n-1} and equal to y_n when x = x_n. The uniqueness of the interpolating polynomial implies that the Lagrange polynomial must be the interpolating polynomial.

5.1.3 Newton polynomial

The Newton polynomial is somewhat more clever than the Vandermonde polynomial because it results in a system of linear equations that is lower triangular, and therefore can be solved by forward substitution. The interpolating polynomial is written in the form
\[
P_n(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1) + \cdots + c_n (x - x_0) \cdots (x - x_{n-1}),
\]
which is clearly a polynomial of degree n. The n + 1 unknown coefficients given by the c's can be found by substituting the points (x_i, y_i) for i = 0, ..., n:
\[
\begin{aligned}
y_0 &= c_0, \\
y_1 &= c_0 + c_1 (x_1 - x_0), \\
y_2 &= c_0 + c_1 (x_2 - x_0) + c_2 (x_2 - x_0)(x_2 - x_1), \\
    &\ \,\vdots \\
y_n &= c_0 + c_1 (x_n - x_0) + c_2 (x_n - x_0)(x_n - x_1) + \cdots + c_n (x_n - x_0) \cdots (x_n - x_{n-1}).
\end{aligned}
\]
This system of linear equations is lower triangular, as can be seen from the matrix form
\[
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
1 & (x_1 - x_0) & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & (x_n - x_0) & (x_n - x_0)(x_n - x_1) & \cdots & (x_n - x_0) \cdots (x_n - x_{n-1})
\end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix}
=
\begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix},
\]
and so theoretically can be solved faster than the Vandermonde system. In practice, however, there is little difference because polynomial interpolation is only useful when the number of points to be interpolated is small.

5.2 Piecewise linear interpolation

Instead of constructing a single global polynomial that goes through all the points, one can construct local polynomials that are then connected together. In the section following this one, we will discuss how this may be done using cubic polynomials. Here, we discuss the simpler case of linear polynomials. This is the default interpolation typically used when plotting data.

Suppose the interpolating function is y = g(x), and as previously, there are n + 1 points to interpolate. We construct the function g(x) out of n local linear polynomials. We write
\[
g(x) = g_i(x), \quad \text{for } x_i \le x \le x_{i+1},
\]
where
\[
g_i(x) = a_i (x - x_i) + b_i
\]
and i = 0, 1, ..., n - 1. We now require y = g_i(x) to pass through the endpoints (x_i, y_i) and (x_{i+1}, y_{i+1}). We have
\[
y_i = b_i, \qquad y_{i+1} = a_i (x_{i+1} - x_i) + b_i.
\]
Therefore, the coefficients of g_i(x) are determined to be
\[
a_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i}, \qquad b_i = y_i.
\]
Although piecewise linear interpolation is widely used, particularly in plotting routines, it suffers from a discontinuity in the derivative at each point. This results in a function which may not look smooth if the points are too widely spaced. We next consider a more challenging algorithm that uses cubic polynomials.

5.3 Cubic spline interpolation

The n + 1 points to be interpolated are again
\[
(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n).
\]
Here, we use n piecewise cubic polynomials for interpolation,
\[
g_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i, \quad i = 0, 1, \ldots, n - 1,
\]
with the global interpolation function written as
\[
g(x) = g_i(x), \quad \text{for } x_i \le x \le x_{i+1}.
\]
To achieve a smooth interpolation we impose that g(x) and its first and second derivatives are continuous. The requirement that g(x) is continuous (and goes through all n + 1 points) results in the two constraints
\[
g_i(x_i) = y_i, \quad i = 0 \text{ to } n - 1, \tag{5.1}
\]
\[
g_i(x_{i+1}) = y_{i+1}, \quad i = 0 \text{ to } n - 1. \tag{5.2}
\]
The requirement that g'(x) is continuous results in
\[
g_i'(x_{i+1}) = g_{i+1}'(x_{i+1}), \quad i = 0 \text{ to } n - 2, \tag{5.3}
\]
and the requirement that g''(x) is continuous results in
\[
g_i''(x_{i+1}) = g_{i+1}''(x_{i+1}), \quad i = 0 \text{ to } n - 2. \tag{5.4}
\]
There are n cubic polynomials g_i(x) and each cubic polynomial has four free coefficients; there are therefore a total of 4n unknown coefficients. The number of constraining equations from (5.1)-(5.4) is 2n + 2(n - 1) = 4n - 2. With 4n - 2 constraints and 4n unknowns, two more conditions are required for a unique solution. These are usually chosen to be extra conditions on the first, g_0(x), and last, g_{n-1}(x), polynomials. We will discuss these extra conditions later.

We now proceed to determine equations for the unknown coefficients of the cubic polynomials. The polynomials and their first two derivatives are given by
\[
g_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i, \tag{5.5}
\]
\[
g_i'(x) = 3 a_i (x - x_i)^2 + 2 b_i (x - x_i) + c_i, \tag{5.6}
\]
\[
g_i''(x) = 6 a_i (x - x_i) + 2 b_i. \tag{5.7}
\]
We will consider the four conditions (5.1)-(5.4) in turn. From (5.1) and (5.5), we have
\[
d_i = y_i, \quad i = 0 \text{ to } n - 1, \tag{5.8}
\]
which directly solves for all of the d-coefficients. To satisfy (5.2), we first define
\[
h_i = x_{i+1} - x_i
\]
and
\[
f_i = y_{i+1} - y_i.
\]
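The determination of the remaining coefficients continues from this point in the full notes. In the meantime, MATLAB's built-in routines already provide both interpolants of Sections 5.2 and 5.3: interp1 defaults to piecewise linear interpolation, and spline constructs a cubic spline, with not-a-knot end conditions supplying the two extra constraints mentioned above. A short usage sketch with invented sample data:

    % Sketch: piecewise linear versus cubic spline interpolation in MATLAB.
    x  = 0:5;                        % n + 1 = 6 sample points (illustrative)
    y  = sin(x);
    xq = 0:0.01:5;                   % fine grid on which to evaluate g(x)
    y_linear = interp1(x, y, xq);    % piecewise linear (the plotting default)
    y_spline = spline(x, y, xq);     % cubic spline (not-a-knot end conditions)
    plot(x, y, 'o', xq, y_linear, '-', xq, y_spline, '--')
    legend('data', 'piecewise linear', 'cubic spline')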

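Returning to the Newton polynomial of Section 5.1.3, the lower-triangular system for the c-coefficients can be solved by forward substitution in a short double loop. A minimal sketch, with invented data points:

    % Sketch: Newton polynomial coefficients by forward substitution.
    x = [0 1 2 4];  y = [1 2 0 3];     % invented data, n + 1 = 4 points
    n = numel(x) - 1;
    c = zeros(1, n + 1);
    c(1) = y(1);                       % first row of the system: y_0 = c_0
    for i = 2:n+1                      % remaining rows, solved in order
        s = c(1);  p = 1;              % s accumulates the already-known terms
        for j = 2:i-1
            p = p * (x(i) - x(j-1));
            s = s + c(j) * p;
        end
        p = p * (x(i) - x(i-1));       % product multiplying the unknown c_i
        c(i) = (y(i) - s) / p;
    end
    % Evaluate P_n(xq) by nested multiplication of the Newton form.
    xq = 3;  P = c(n + 1);
    for i = n:-1:1
        P = c(i) + (xq - x(i)) * P;
    end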


Chapter 6

Integration

We want to construct numerical algorithms that can perform definite integrals of the form
\[
I = \int_a^b f(x)\, dx. \tag{6.1}
\]
Calculating these definite integrals numerically is called numerical integration, numerical quadrature, or more simply quadrature.

6.1 Elementary formulas

We first consider integration from 0 to h, with h small, to serve as the building blocks for integration over larger domains. We here define I_h as the following integral:
\[
I_h = \int_0^h f(x)\, dx. \tag{6.2}
\]
To perform this integral, we consider a Taylor series expansion of f(x) about the value x = h/2:
\[
f(x) = f(h/2) + (x - h/2) f'(h/2) + \frac{(x - h/2)^2}{2} f''(h/2)
     + \frac{(x - h/2)^3}{6} f'''(h/2) + \frac{(x - h/2)^4}{24} f''''(h/2) + \cdots.
\]

6.1.1 Midpoint rule

The midpoint rule makes use of only the first term in the Taylor series expansion. Here, we will determine the error in this approximation. Integrating,
\[
I_h = h f(h/2) + \int_0^h \left( (x - h/2) f'(h/2) + \frac{(x - h/2)^2}{2} f''(h/2)
    + \frac{(x - h/2)^3}{6} f'''(h/2) + \frac{(x - h/2)^4}{24} f''''(h/2) + \cdots \right) dx.
\]
Changing variables by letting y = x - h/2 and dy = dx, and simplifying the integral depending on whether the integrand is even or odd, we have
\[
I_h = h f(h/2) + \int_{-h/2}^{h/2} \left( y f'(h/2) + \frac{y^2}{2} f''(h/2) + \frac{y^3}{6} f'''(h/2) + \frac{y^4}{24} f''''(h/2) + \cdots \right) dy
    = h f(h/2) + \int_0^{h/2} \left( y^2 f''(h/2) + \frac{y^4}{12} f''''(h/2) + \cdots \right) dy.
\]
The integrals that we need here are
\[
\int_0^{h/2} y^2\, dy = \frac{h^3}{24}, \qquad \int_0^{h/2} y^4\, dy = \frac{h^5}{160}.
\]
Therefore,
\[
I_h = h f(h/2) + \frac{h^3}{24} f''(h/2) + \frac{h^5}{1920} f''''(h/2) + \cdots. \tag{6.3}
\]

6.1.2 Trapezoidal rule

From the Taylor series expansion of f(x) about x = h/2, we have
\[
f(0) = f(h/2) - \frac{h}{2} f'(h/2) + \frac{h^2}{8} f''(h/2) - \frac{h^3}{48} f'''(h/2) + \frac{h^4}{384} f''''(h/2) + \cdots,
\]
and
\[
f(h) = f(h/2) + \frac{h}{2} f'(h/2) + \frac{h^2}{8} f''(h/2) + \frac{h^3}{48} f'''(h/2) + \frac{h^4}{384} f''''(h/2) + \cdots.
\]
Adding and multiplying by h/2, we obtain
\[
\frac{h}{2} \left( f(0) + f(h) \right) = h f(h/2) + \frac{h^3}{8} f''(h/2) + \frac{h^5}{384} f''''(h/2) + \cdots.
\]
We now substitute for the first term on the right-hand side using the midpoint rule formula:
\[
\frac{h}{2} \left( f(0) + f(h) \right) = \left( I_h - \frac{h^3}{24} f''(h/2) - \frac{h^5}{1920} f''''(h/2) \right)
+ \frac{h^3}{8} f''(h/2) + \frac{h^5}{384} f''''(h/2) + \cdots,
\]
and solving for I_h, we find
\[
I_h = \frac{h}{2} \left( f(0) + f(h) \right) - \frac{h^3}{12} f''(h/2) - \frac{h^5}{480} f''''(h/2) + \cdots. \tag{6.4}
\]

6.1.3 Simpson's rule

To obtain Simpson's rule, we combine the midpoint and trapezoidal rules to eliminate the error term proportional to h^3. Multiplying (6.3) by two and adding to (6.4), we obtain
\[
3 I_h = h \left( 2 f(h/2) + \frac{1}{2} \left( f(0) + f(h) \right) \right)
      + h^5 \left( \frac{2}{1920} - \frac{1}{480} \right) f''''(h/2) + \cdots,
\]
or
\[
I_h = \frac{h}{6} \left( f(0) + 4 f(h/2) + f(h) \right) - \frac{h^5}{2880} f''''(h/2) + \cdots.
\]
Usually, Simpson's rule is written by considering the three consecutive points 0, h and 2h. Substituting h → 2h, we obtain the standard result
\[
I_{2h} = \frac{h}{3} \left( f(0) + 4 f(h) + f(2h) \right) - \frac{h^5}{90} f''''(h) + \cdots. \tag{6.5}
\]

6.2 Composite rules

We now use our elementary formulas obtained for (6.2) to perform the integral given by (6.1).
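Before developing the composite formulas, the elementary error terms derived above can be checked numerically. A minimal sketch, using f(x) = e^x as an arbitrary smooth test function so that the exact integral is known; halving h should reduce the midpoint and trapezoidal errors by a factor of about 8, and the Simpson error by a factor of about 32:

    % Sketch: error scaling of the elementary midpoint, trapezoidal, and
    % Simpson rules on f(x) = exp(x) over [0, h].
    f = @(x) exp(x);
    for h = [0.1, 0.05, 0.025]
        I_exact = exp(h) - 1;                       % exact integral over [0, h]
        e_mid   = I_exact - h*f(h/2);               % midpoint rule, error O(h^3)
        e_trap  = I_exact - (h/2)*(f(0) + f(h));    % trapezoidal rule, error O(h^3)
        e_simp  = I_exact - (h/6)*(f(0) + 4*f(h/2) + f(h));  % Simpson, O(h^5)
        fprintf('h = %5.3f   %10.3e  %10.3e  %10.3e\n', h, e_mid, e_trap, e_simp)
    end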

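A composite rule chains the elementary formula across n subintervals of [a, b]. As a preview, a sketch of the composite trapezoidal rule built from (6.4), whose global error is O(h^2); the integrand and interval are again illustrative:

    % Sketch: composite trapezoidal rule on n subintervals of [a, b].
    f = @(x) exp(x);  a = 0;  b = 1;   % illustrative integrand and interval
    for n = [10, 20, 40]
        h = (b - a)/n;
        x = a + (0:n)*h;               % n + 1 equally spaced grid points
        I = h*(f(x(1))/2 + sum(f(x(2:end-1))) + f(x(end))/2);
        fprintf('n = %3d   error = %10.3e\n', n, I - (exp(1) - 1))
    end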

7.2.1 Euler method

The Euler method is the most straightforward method to integrate a differential equation. Consider the first-order differential equation
\[
\dot{x} = f(t, x), \tag{7.3}
\]
with the initial condition x(0) = x_0. Define t_n = nΔt and x_n = x(t_n). A Taylor series expansion of x_{n+1} results in
\[
x_{n+1} = x(t_n + \Delta t) = x(t_n) + \Delta t\, \dot{x}(t_n) + O(\Delta t^2)
        = x(t_n) + \Delta t\, f(t_n, x_n) + O(\Delta t^2).
\]
The Euler Method is therefore written as
\[
x_{n+1} = x_n + \Delta t\, f(t_n, x_n).
\]
We say that the Euler method steps forward in time using a time-step Δt, starting from the initial value x_0 = x(0). The local error of the Euler Method is O(Δt^2). The global error, however, incurred when integrating to a time T, is a factor of 1/Δt larger and is given by O(Δt). It is therefore customary to call the Euler Method a first-order method.

7.2.2 Modified Euler method

This method is of a type that is called a predictor-corrector method. It is also the first of what are called Runge-Kutta methods. As before, we want to solve (7.3). The idea is to average the value of ẋ at the beginning and end of the time step. That is, we would like to modify the Euler method and write
\[
x_{n+1} = x_n + \frac{1}{2} \Delta t \left( f(t_n, x_n) + f(t_n + \Delta t, x_{n+1}) \right).
\]
The obvious problem with this formula is that the unknown value x_{n+1} appears on the right-hand side. We can, however, estimate this value, in what is called the predictor step. For the predictor step, we use the Euler method to find
\[
x_{n+1}^p = x_n + \Delta t\, f(t_n, x_n).
\]
The corrector step then becomes
\[
x_{n+1} = x_n + \frac{1}{2} \Delta t \left( f(t_n, x_n) + f(t_n + \Delta t, x_{n+1}^p) \right).
\]
The Modified Euler Method can be rewritten in the following form that we will later identify as a Runge-Kutta method:
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \Delta t, x_n + k_1), \\
x_{n+1} &= x_n + \frac{1}{2} (k_1 + k_2).
\end{aligned} \tag{7.4}
\]

7.2.3 Second-order Runge-Kutta methods

We now derive all second-order Runge-Kutta methods. Higher-order methods can be similarly derived, but require substantially more algebra.

We consider the differential equation given by (7.3). A general second-order Runge-Kutta method may be written in the form
\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \alpha \Delta t, x_n + \beta k_1), \\
x_{n+1} &= x_n + a k_1 + b k_2,
\end{aligned} \tag{7.5}
\]
with α, β, a and b constants that define the particular second-order Runge-Kutta method. These constants are to be constrained by setting the local error of the second-order Runge-Kutta method to be O(Δt^3). Intuitively, we might guess that two of the constraints will be a + b = 1 and α = β.

We compute the Taylor series of x_{n+1} directly, and from the Runge-Kutta method, and require them to be the same to order Δt^2. First, we compute the Taylor series of x_{n+1}. We have
\[
x_{n+1} = x(t_n + \Delta t) = x(t_n) + \Delta t\, \dot{x}(t_n) + \frac{1}{2} (\Delta t)^2 \ddot{x}(t_n) + O(\Delta t^3).
\]
Now,
\[
\dot{x}(t_n) = f(t_n, x_n).
\]
The second derivative is more complicated and requires partial derivatives. We have
\[
\ddot{x}(t_n) = \left. \frac{d}{dt} f(t, x(t)) \right|_{t = t_n}
             = f_t(t_n, x_n) + \dot{x}(t_n) f_x(t_n, x_n)
             = f_t(t_n, x_n) + f(t_n, x_n) f_x(t_n, x_n).
\]
Therefore,
\[
x_{n+1} = x_n + \Delta t\, f(t_n, x_n) + \frac{1}{2} (\Delta t)^2 \left( f_t(t_n, x_n) + f(t_n, x_n) f_x(t_n, x_n) \right). \tag{7.6}
\]
Second, we compute x_{n+1} from the Runge-Kutta method given by (7.5). Substituting in k_1 and k_2, we have
\[
x_{n+1} = x_n + a \Delta t\, f(t_n, x_n) + b \Delta t\, f\!\left( t_n + \alpha \Delta t,\, x_n + \beta \Delta t\, f(t_n, x_n) \right).
\]
We Taylor series expand using
\[
f\!\left( t_n + \alpha \Delta t,\, x_n + \beta \Delta t\, f(t_n, x_n) \right)
= f(t_n, x_n) + \alpha \Delta t\, f_t(t_n, x_n) + \beta \Delta t\, f(t_n, x_n) f_x(t_n, x_n) + O(\Delta t^2).
\]
The Runge-Kutta formula is therefore
\[
x_{n+1} = x_n + (a + b) \Delta t\, f(t_n, x_n)
        + (\Delta t)^2 \left( \alpha b\, f_t(t_n, x_n) + \beta b\, f(t_n, x_n) f_x(t_n, x_n) \right) + O(\Delta t^3). \tag{7.7}
\]



Comparing (7.6) and (7.7), we find
\[
a + b = 1, \qquad \alpha b = \tfrac{1}{2}, \qquad \beta b = \tfrac{1}{2}.
\]

There are three equations for four parameters, and there exists a family of second-
order Runge-Kutta methods.
The Modified Euler Method given by (7.4) corresponds to α = β = 1 and a =
b = 1/2. Another second-order Runge-Kutta method, called the Midpoint Method,
corresponds to α = β = 1/2, a = 0 and b = 1. This method is written as

\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f\!\left( t_n + \tfrac{1}{2} \Delta t,\, x_n + \tfrac{1}{2} k_1 \right), \\
x_{n+1} &= x_n + k_2.
\end{aligned}
\]
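Both second-order methods are easily coded. A minimal sketch, applied to the illustrative test equation ẋ = x with x(0) = 1, for which the exact solution e^t is available for comparison:

    % Sketch: the Modified Euler and Midpoint second-order Runge-Kutta
    % methods applied to dx/dt = x, x(0) = 1, integrated to t = 1.
    f = @(t, x) x;                 % illustrative right-hand side
    dt = 0.1;  N = 10;
    x_me = 1;  x_mp = 1;           % Modified Euler and Midpoint solutions
    for n = 0:N-1
        t = n*dt;
        % Modified Euler Method: alpha = beta = 1, a = b = 1/2
        k1 = dt*f(t, x_me);
        k2 = dt*f(t + dt, x_me + k1);
        x_me = x_me + (k1 + k2)/2;
        % Midpoint Method: alpha = beta = 1/2, a = 0, b = 1
        k1 = dt*f(t, x_mp);
        k2 = dt*f(t + dt/2, x_mp + k1/2);
        x_mp = x_mp + k2;
    end
    fprintf('exact %.6f   modified Euler %.6f   midpoint %.6f\n', ...
            exp(1), x_me, x_mp)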

7.2.4 Higher-order Runge-Kutta methods


The general second-order Runge-Kutta method was given by (7.5). The general
form of the third-order method is given by

\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f(t_n + \alpha \Delta t, x_n + \beta k_1), \\
k_3 &= \Delta t\, f(t_n + \gamma \Delta t, x_n + \delta k_1 + \epsilon k_2), \\
x_{n+1} &= x_n + a k_1 + b k_2 + c k_3.
\end{aligned}
\]

The following constraints on the constants can be guessed: α = β, γ = δ + ε, and a + b + c = 1. The remaining constraints need to be derived.
The fourth-order method has k_1, k_2, k_3 and k_4. The fifth-order method requires up to k_6. The table below gives the order of the method and the minimum number of stages required.

order 2 3 4 5 6 7 8
minimum # stages 2 3 4 6 7 9 11

Because of the jump in the number of stages required between the fourth-order
and fifth-order method, the fourth-order Runge-Kutta method has some appeal.
The general fourth-order method starts with 13 constants, and one then finds 11
constraints. A particularly simple fourth-order method that has been widely used
is given by

\[
\begin{aligned}
k_1 &= \Delta t\, f(t_n, x_n), \\
k_2 &= \Delta t\, f\!\left( t_n + \tfrac{1}{2} \Delta t,\, x_n + \tfrac{1}{2} k_1 \right), \\
k_3 &= \Delta t\, f\!\left( t_n + \tfrac{1}{2} \Delta t,\, x_n + \tfrac{1}{2} k_2 \right), \\
k_4 &= \Delta t\, f(t_n + \Delta t, x_n + k_3), \\
x_{n+1} &= x_n + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right).
\end{aligned}
\]
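A sketch of this classical fourth-order method, applied to the same illustrative test equation ẋ = x with x(0) = 1 used above; the global error here is O(Δt^4):

    % Sketch: the classical fourth-order Runge-Kutta method for dx/dt = f(t,x).
    f = @(t, x) x;                 % illustrative test equation, exact solution e^t
    dt = 0.1;  N = 10;  x = 1;     % integrate x(0) = 1 up to t = 1
    for n = 0:N-1
        t  = n*dt;
        k1 = dt*f(t, x);
        k2 = dt*f(t + dt/2, x + k1/2);
        k3 = dt*f(t + dt/2, x + k2/2);
        k4 = dt*f(t + dt, x + k3);
        x  = x + (k1 + 2*k2 + 2*k3 + k4)/6;
    end
    fprintf('RK4 %.10f   exact %.10f\n', x, exp(1))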

