Euler's Method, Runge-Kutta Methods, Predictor-Corrector Methods
Numerical Methods
Introduction
This chapter attempts to describe a few methods for finding approximate values of the solution of non-linear initial value problems. It is by no means an exhaustive study, but it can be looked upon as an introduction to numerical methods; again, we stress that no attempt is made towards a deeper analysis. A question may arise: why does one need numerical methods for differential equations? Probably because differential equations play an important role in many problems of engineering and science, as they arise in the mathematical modelling of many physical problems. The use of numerical methods has become vital in the absence of explicit solutions. Normally, numerical methods have two major roles: 1. the amenability of the method to easy implementation on a computer; 2. the method allows us to deal with the analysis of error estimates. In this chapter, we do not enter into a detailed error analysis. For the present, we deal with some of the numerical methods to find approximate values of the solution of initial value problems (IVPs) on finite intervals. We also mention that no effort is made to study boundary value problems.

Let us consider an initial value problem

    y' = f(x, y),  y(x_0) = y_0,    (14.1.1)

where x_0 and y_0 are prescribed real numbers and f is a given function of x and y. By Picard's Theorem 7.6.8, we know that, under certain conditions, the initial value problem (14.1.1) has a unique solution y. Our aim is to determine y (approximately) on an interval [x_0, x_0 + a]. To do so, we divide the interval [x_0, x_0 + a] into n equal parts

    x_0 < x_1 < x_2 < ... < x_n = x_0 + a,  where  x_k = x_0 + kh  and  h = a/n.

The number h is called the step size and depends on a and n. For k = 1, 2, ..., n, starting from the known value y_0 = y(x_0), we determine the approximate value y_k of the solution y at x = x_k. This is achieved in n steps. The details are given in the ensuing sections. A method which uses only the value y_k to compute the approximate value y_{k+1} is called a single step method; a method which uses more than one of the previously computed values is called a multi-step method.
In the sequel, we deal with some simple single step methods to find the approximate value of the solution of (14.1.1). In these methods, our stress is on the use of computers for numerical evaluation. In other words, the implementation of the methods by using computers is one of our present aims. Each method will be pictorially represented by what is called a flow chart. A flow chart is a pictorial representation which shows the sequence of each step of the computation. Usually the details of both the input and the termination of the method are indicated in the flow chart. In short, a flow chart is a chart that shows the flow of the computation, including the start and the termination of the method.
Euler's Method
For a small step size h, the derivative y'(x) is close enough to the ratio (y(x + h) - y(x))/h. In Euler's method, such an approximation is attempted. To recall, we consider the problem (14.1.1). Let h = a/n and x_k = x_0 + kh for k = 0, 1, ..., n, so that x_n = x_0 + a. Let y_k be the approximate value of the solution y at x_k, with y_0 = y(x_0). We define

    y_{k+1} = y_k + h f(x_k, y_k),  k = 0, 1, ..., n - 1.    (14.1.2)

The value y_{k+1} obtained from y_k by (14.1.2) is taken as the approximation of y(x_{k+1}).
Remark 14.1.1
1. Euler's method is a one-step method.
2. Euler's method has a few motivations.
   1. The derivative y'(x) at x = x_k can be approximated by the ratio (y(x_k + h) - y(x_k))/h if h is sufficiently small. Using such an approximation in (14.1.1), we get y(x_{k+1}) ≈ y(x_k) + h f(x_k, y(x_k)), which leads to (14.1.2) when y(x_k) is replaced by its approximate value y_k.
   2. Integration of (14.1.1) from x_k to x_{k+1} yields

          y(x_{k+1}) = y(x_k) + ∫_{x_k}^{x_{k+1}} f(s, y(s)) ds.

      The integral on the right hand side is approximated, for a sufficiently small value of h, by h f(x_k, y(x_k)), which again leads to (14.1.2).
   3. Moreover, if y is differentiable a sufficient number of times, we can also arrive at (14.1.2) by considering the Taylor expansion

          y(x_k + h) = y(x_k) + h y'(x_k) + (h^2/2) y''(x_k) + ...

      and neglecting the terms of order h^2 and higher.
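The recursion (14.1.2) is straightforward to implement on a computer. The following is a minimal sketch in Python; the function name euler and its argument names are illustrative and not part of the text.

    def euler(f, x0, y0, h, n):
        # Euler's method: y_{k+1} = y_k + h f(x_k, y_k), k = 0, 1, ..., n-1.
        # f is the right hand side of y' = f(x, y); (x0, y0) is the initial
        # condition; h is the step size; n is the number of steps.
        xs, ys = [x0], [y0]
        for k in range(n):
            ys.append(ys[-1] + h * f(xs[-1], ys[-1]))   # the recursion (14.1.2)
            xs.append(xs[-1] + h)
        return xs, ys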
We illustrate Euler's method with an example. The example is only for illustration: we do not really need numerical computation at each step, as we know the exact value of the solution. The purpose of the example is to get a feeling for the behaviour of the error and its estimate. It will be more transparent to look at the percentage of error; it may throw more light on the propagation of error.
EXAMPLE 14.1.2 Use Euler's algorithm to find an approximate value of y(0.5), where y is the solution of the IVP

    y' = y^2,  y(0) = 1,    (14.1.3)

with step sizes h = 0.1 and h = 0.05. Show that the exact solution of the IVP is y(x) = 1/(1 - x). Calculate the error at each step and tabulate the results.
Solution: Comparing the given IVP with (14.1.1), we note that f(x, y) = y^2, x_0 = 0 and y_0 = 1. Separating the variables gives dy/y^2 = dx, so -1/y = x + c; the condition y(0) = 1 gives c = -1 and hence y(x) = 1/(1 - x). The results for the step size h = 0.05 are tabulated below (the values are rounded off to five places of decimal).
Table: Euler's method with step size h = 0.05.

    x       Initial value   Step size   Approx. value   Exact value   Error
    0.05    1.00000         0.05000     1.05000         1.05263       0.00263
    0.10    1.05000         0.05000     1.10513         1.11111       0.00599
    0.15    1.10513         0.05000     1.16619         1.17647       0.01028
    0.20    1.16619         0.05000     1.23419         1.25000       0.01581
    0.25    1.23419         0.05000     1.31035         1.33333       0.02298
    0.30    1.31035         0.05000     1.39620         1.42857       0.03237
    0.35    1.39620         0.05000     1.49367         1.53846       0.04479
    0.40    1.49367         0.05000     1.60522         1.66667       0.06144
    0.45    1.60522         0.05000     1.73406         1.81818       0.08412
    0.50    1.73406         0.05000     1.88441         2.00000       0.11559
Remark 14.1.3 At each step, the approximation of the derivative induces a certain error. At the next step, the approximation of the derivative together with the approximate value carried over from the previous step can increase the error. This is what we see from the last column of the table. As mentioned earlier, the percentage of the error perhaps gives a better view of the accumulation of the error. With this in mind, we are tempted to have a look at the error estimates, which are the topic of the ensuing section.
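The table above can be generated on a computer with a few lines of Python. The script below is a minimal sketch; it assumes the IVP y' = y^2, y(0) = 1 of Example 14.1.2 and the step size h = 0.05, and prints the approximate value, the exact value and the error at each step.

    # Euler's method for y' = y^2, y(0) = 1; the exact solution is y(x) = 1/(1 - x).
    h, x, y = 0.05, 0.0, 1.0
    print("   x     approx.    exact     error")
    for k in range(10):
        y = y + h * y * y            # Euler step with f(x, y) = y^2
        x = x + h
        exact = 1.0 / (1.0 - x)
        print(f"{x:5.2f}  {y:9.5f} {exact:9.5f} {exact - y:9.5f}")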
Error Estimates

The example above motivates us to look for an analysis of the error, which is the main aim of this section. Note that we are not dealing with the truncation error in the actual calculation. Recall that:
1. y(x_k) is the value at x = x_k of the (exact) solution y of the IVP (14.1.1);
2. y_k is the approximate value of y(x_k) defined by Euler's method, namely

       y_{k+1} = y_k + h f(x_k, y_k),  k = 0, 1, ..., n - 1,  y_0 = y(x_0);    (14.2.5)

3. e_k = y(x_k) - y_k for k = 0, 1, ..., n.
The quantity |e_k| is the absolute deviation of y_k from y(x_k); it is the absolute error committed at the k-th step, and it is also called the discretization error. So, the e_k's are the errors committed at the successive steps. In this section, we examine the nature of e_k; it is desirable that |e_k| is ``small'' when the step size h is small. In this connection, we have the following result.

THEOREM 14.2.1 Consider the IVP
    y' = f(x, y),  y(x_0) = y_0,  x_0 ≤ x ≤ x_0 + a,    (14.2.6)

and let y be its solution. Suppose that f, ∂f/∂x and ∂f/∂y are continuous and that there exist positive constants L and M satisfying

    |∂f/∂y(x, y)| ≤ L  and  |y''(x)| ≤ M    (14.2.7)

for all x in [x_0, x_0 + a]. Let y_k, for k = 0, 1, ..., n, be the approximate values of y obtained by Euler's method (14.2.5) with y_0 = y(x_0). Then the errors e_k satisfy

    |e_{k+1}| ≤ (1 + hL) |e_k| + (M h^2)/2,  k = 0, 1, ..., n - 1.    (14.2.9)

Proof. By Taylor's theorem, there is a point ξ_k lying between x_k and x_{k+1} satisfying

    y(x_{k+1}) = y(x_k) + h f(x_k, y(x_k)) + (h^2/2) y''(ξ_k).    (14.2.8)

Also, by Euler's method, y_{k+1} = y_k + h f(x_k, y_k). Subtracting, we obtain

    e_{k+1} = e_k + h [f(x_k, y(x_k)) - f(x_k, y_k)] + (h^2/2) y''(ξ_k).

Again, by the mean value theorem, f(x_k, y(x_k)) - f(x_k, y_k) = ∂f/∂y(x_k, η_k) e_k for some constant η_k lying between y_k and y(x_k). Therefore, using the given bounds in the statement of the theorem, the error bound above reduces to (14.2.9).
Hence, the proof of the theorem is complete.
Remark 14.2.2 Inequality (14.2.9) implies that the error committed at a single step is in the class O(h^2). Theorem 14.2.1 gives an upper bound for the estimate of the error at the (k+1)-th step in terms of the error at the k-th step; the error committed at one step may thus induce more error at the next step and affect the final value y_n. The error committed at a single step is called a ``local error''; the cumulative error at the final step is called the ``global error''. Theorem 14.2.1 does not throw any light on the estimate of the global error.
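For instance, in Example 14.1.2 we have f(x, y) = y^2 and the solution y(x) = 1/(1 - x) satisfies 1 ≤ y ≤ 2 on [0, 0.5]; so we may take L = max |∂f/∂y| = max |2y| = 4 and M = max |y''| = max |2y^3| = 16. For h = 0.05, the inequality (14.2.9) then reads

    |e_{k+1}| ≤ 1.2 |e_k| + 0.02.

In particular, since e_0 = 0, the error committed in the first step is at most 0.02, while the table records an actual error of 0.00263 there.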
Runge-Kutta Method
The Runge-Kutta method is a more general and improved method as compared to Euler's method. It uses, as we shall see, the Taylor expansion of a ``smooth function'' (thereby, we mean that the derivatives exist and are continuous up to a certain desired order). Before we proceed further, the following questions may arise in our mind, which have not found a place in our discussion so far.
1. How does one choose the starting values, sometimes called starters, that are required for implementing an algorithm?
2. Is it desirable to change the step size (or the length of the interval) during the computation if the error estimate demands a change as a function of x?
For the present, the discussion about Question 2 is not taken up. We try to look more at Question 1 in the ensuing discussion. There are many self-starter methods, like the Euler method, which use the initial condition. But these methods are normally not very efficient, since the error bounds may not be ``good enough''. We have seen in Theorem 14.2.1 that the local error (neglecting the rounding-off error) is O(h^2) in Euler's algorithm. This shows that as the values of h become smaller, the approximations improve. Moreover, an error of order h^2 may not be sufficiently accurate for many problems. So, we look into a few methods where the error is of higher order. They are the Runge-Kutta (in short, R-K) methods. Let us analyze how the algorithm is arrived at before we actually state it. To do so, we consider the IVP
    y' = f(x, y),  y(x_0) = y_0,    (14.3.10)

and assume that f, ∂f/∂x and ∂f/∂y are continuous. We now assume that the approximate value y_{k+1} has the form

    y_{k+1} = y_k + a k_1 + b k_2,  where  k_1 = h f(x_k, y_k)  and  k_2 = h f(x_k + αh, y_k + βk_1),    (14.3.11)

for constants a, b, α and β to be chosen. Expanding k_2 in a Taylor series with respect to h, we get

    y_{k+1} = y_k + (a + b) h f(x_k, y_k) + b h^2 [α f_x(x_k, y_k) + β f(x_k, y_k) f_y(x_k, y_k)] + O(h^3),

where f_x and f_y denote the partial derivatives of f with respect to x and y, respectively. On the other hand, taking y_k = y(x_k), the Taylor expansion of the solution of (14.3.10) gives

    y(x_{k+1}) = y(x_k) + h f(x_k, y_k) + (h^2/2) [f_x(x_k, y_k) + f(x_k, y_k) f_y(x_k, y_k)] + O(h^3).    (14.3.12)

A comparison of (14.3.11) and (14.3.12), in order that the powers of h up to h^2 match (in some sense) in the approximate values of y_{k+1}, leads to the conditions

    a + b = 1,  bα = 1/2,  bβ = 1/2.

So, we choose a, b, α and β so that these conditions are satisfied. Here we note that the simplest solution is a = b = 1/2 and α = β = 1. Evaluation of k_1 and k_2 with this choice requires only evaluations of the function f, and not of its partial derivatives.
A few things are worthwhile to be noted in the above discussion. Firstly, we need the existence of the partial derivatives of f up to order 2 for the R-K method of order 2; for higher order methods, we need f to be more smooth. Secondly, we note that the local truncation error in the R-K method of order 2 is of order h^3. Again, we remind the readers that the round-off error in the case of implementation has not been considered. Also, in the final formulas, the partial derivatives of f do not appear. In short, we are likely to get better accuracy with the Runge-Kutta method of order 2 in comparison with Euler's method. Formally, we state the Runge-Kutta method of order 2: with y_0 = y(x_0), for k = 0, 1, ..., n - 1 compute

    k_1 = h f(x_k, y_k),  k_2 = h f(x_k + h, y_k + k_1),  y_{k+1} = y_k + (k_1 + k_2)/2.
The flow chart associated with the R-K method of order 2 is given below.
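A minimal Python sketch of the R-K method of order 2 stated above; the function name rk2 and its arguments are illustrative.

    def rk2(f, x0, y0, h, n):
        # Runge-Kutta method of order 2:
        #   k1 = h f(x_k, y_k), k2 = h f(x_k + h, y_k + k1),
        #   y_{k+1} = y_k + (k1 + k2)/2.
        xs, ys = [x0], [y0]
        for k in range(n):
            x, y = xs[-1], ys[-1]
            k1 = h * f(x, y)
            k2 = h * f(x + h, y + k1)
            xs.append(x + h)
            ys.append(y + 0.5 * (k1 + k2))
        return xs, ys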
EXAMPLE 14.3.1 Use the Runge-Kutta method of order 2 to find an approximate value of y(0.5), where y is the solution of the IVP

    y' = y^2,  y(0) = 1,

with step sizes h = 0.1 and h = 0.05.
Solution: Comparing the given IVP with (14.3.10), we note that f(x, y) = y^2, x_0 = 0 and y_0 = 1. We now calculate the values of k_1 and k_2 at each step
in order to obtain the approximate values. The results of the computations are shown in the tables below.

Table: Runge-Kutta method of order 2 with step size h = 0.1 (initial value, step size, k_1, k_2, approximate value, exact value and the error at each step).

Table: Runge-Kutta method of order 2 with step size h = 0.05 (only some of the steps are reproduced; x is the point reached and the initial value is the approximation at the previous point).

    x       Initial value   Step size   k_1       k_2       Approx. value   Exact value   Error
    0.05    1.00000         0.05000     0.05000   0.05513   1.05256         1.05263       0.00007
    ...
    0.40    1.53697         0.05000     0.11812   0.13697   1.66451         1.66667       0.00215
    0.45    1.66451         0.05000     0.13853   0.16255   1.81505         1.81818       0.00313
    0.50    1.81505         0.05000     0.16472   0.19598   1.99540         2.00000       0.00460
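The computations for h = 0.05 can be carried out with a short self-contained Python script; the following is a sketch which again assumes the IVP y' = y^2, y(0) = 1, and prints k_1, k_2, the approximate and exact values and the error at each step.

    # R-K method of order 2 for y' = y^2, y(0) = 1; exact solution y(x) = 1/(1 - x).
    h, x, y = 0.05, 0.0, 1.0
    for k in range(10):
        k1 = h * y * y               # k1 = h f(x_k, y_k)
        k2 = h * (y + k1) ** 2       # k2 = h f(x_k + h, y_k + k1); f does not depend on x
        x, y = x + h, y + 0.5 * (k1 + k2)
        exact = 1.0 / (1.0 - x)
        print(f"x = {x:4.2f}  k1 = {k1:7.5f}  k2 = {k2:7.5f}  "
              f"approx = {y:7.5f}  exact = {exact:7.5f}  error = {exact - y:7.5f}")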
The Runge-Kutta method of order 4: for k = 0, 1, ..., n - 1, we define

    k_1 = h f(x_k, y_k),
    k_2 = h f(x_k + h/2, y_k + k_1/2),
    k_3 = h f(x_k + h/2, y_k + k_2/2),
    k_4 = h f(x_k + h, y_k + k_3),

where h is the step size. Also, we set

    y_{k+1} = y_k + (k_1 + 2k_2 + 2k_3 + k_4)/6.

The local truncation error for the method of order 4 is of order h^5, but one is forced to do more computation or, in other words, spend more time to compute k_1, k_2, k_3 and k_4. It all depends on the nature of the function f to estimate the time consumed for the computation. The cost we pay for higher accuracy is more computation. Also, to reduce the local error, we need smaller values of the step size h, which again results in a larger number of computations. Each computation leads to more rounding errors. In other words, a reduction in the discretisation error may lead to an increase in the rounding-off error. THE MORAL IS THAT THE INDISCRIMINATE REDUCTION OF STEP-SIZE NEED NOT MEAN MORE ACCURACY.
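For completeness, here is a minimal Python sketch of one step of the order 4 recursion given above; the name rk4_step is illustrative.

    def rk4_step(f, x, y, h):
        # One step of the Runge-Kutta method of order 4 defined above.
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    # For example, ten steps of size h = 0.05 for y' = y^2, y(0) = 1:
    x, y = 0.0, 1.0
    for k in range(10):
        y = rk4_step(lambda s, t: t * t, x, y, 0.05)
        x = x + 0.05
    print(x, y)     # y is very close to the exact value y(0.5) = 2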
Figure: Flow chart of the Runge-Kutta method.

EXERCISE 14.3.3 Use the Runge-Kutta method of order 4 to find an approximate solution of the IVP
Predictor-Corrector Methods
In the earlier sections, during the course of the discussion on Euler's algorithm, the derivative y'(x_k) was considered and its approximate value taken to be f(x_k, y_k), which led to the explicit formula y_{k+1} = y_k + h f(x_k, y_k). We could also have thought of solving the IVP (numerically) by defining the approximations

    y_{k+1} = y_k + h f(x_{k+1}, y_{k+1}),  k = 0, 1, ..., n - 1.    (14.4.15)

Such a scheme is called an implicit scheme, since the unknown y_{k+1} appears on both sides of (14.4.15). In general, (14.4.15) cannot be solved for y_{k+1} explicitly, so we resort to a (numerical) approximate value for y_{k+1}: for a fixed k and for i = 0, 1, 2, ..., we compute

    y_{k+1}^{(0)} = y_k + h f(x_k, y_k),  y_{k+1}^{(i+1)} = y_k + h f(x_{k+1}, y_{k+1}^{(i)}).    (14.4.16)

Essentially, we are trying to iterate to ``find the value'' of y_{k+1} and designate it as the value at which the iteration settles. We need to stop this iteration at some stage. One method of stopping the iteration is when
the relative change in the iterates is ``small'' (here ``small'' means that the absolute value of the ratio (y_{k+1}^{(i+1)} - y_{k+1}^{(i)}) / y_{k+1}^{(i+1)} is less than a previously assigned small number). In general, (14.4.16) allows us to recursively define y_{k+1} for k = 0, 1, ..., n - 1; once y_{k+1} is accepted, we repeat the process with x_{k+1} in place of x_k and y_{k+1} in place of y_k.
Thus we first predict a value of y_{k+1} and then correct it (by iteration) to obtain y_{k+1}. For this reason such methods are called PREDICTOR-CORRECTOR METHODS (in short, PC methods). Again, we repeat that PC methods need some condition to stop the inner iterations; usually they are:
1. the number of iterations (called the tolerance on the number of iterations),
2. a bound on the relative error (called the tolerance on the relative error).
As far as condition 1 is concerned, it simply says that we do not wish to iterate beyond a prescribed number of iterations, while condition 2 says to keep iterating till the relative error is small, no matter how many iterations are needed; the bound on the relative error is known as the tolerance. On occasions one may use both the conditions 1 and 2 to stop the inner iterations, which may even lead to early termination. With these preliminaries, we state the Predictor-Corrector algorithm.
1. Predict a starting value by Euler's method, y_{k+1}^{(0)} = y_k + h f(x_k, y_k).
2. Correct it iteratively by (14.4.16), that is, y_{k+1}^{(i+1)} = y_k + h f(x_{k+1}, y_{k+1}^{(i)}) for i = 0, 1, 2, ....
3. Stop the iterations whenever the absolute value of the relative error is smaller than a previously prescribed positive real number.
Remark 14.4.1 The tolerance on the number of inner iterations has been incorporated in the algorithm by setting, for some prescribed positive integer N, an upper bound of N on the number of inner iterations.
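The following Python sketch puts the pieces together: Euler's method is used as the predictor, the iteration (14.4.16) as the corrector, and the two stopping conditions of the algorithm are both applied. The names predictor_corrector, eps and N are illustrative and not part of the text.

    def predictor_corrector(f, x0, y0, h, n, eps=1e-8, N=20):
        # Predictor: y* = y_k + h f(x_k, y_k)            (Euler's method)
        # Corrector: y  = y_k + h f(x_{k+1}, y*), iterated as in (14.4.16)
        # until the relative error is below eps or N inner iterations are done.
        xs, ys = [x0], [y0]
        for k in range(n):
            x, y = xs[-1], ys[-1]
            x_next = x + h
            y_new = y + h * f(x, y)                      # predicted value
            for i in range(N):                           # inner (corrector) iterations
                y_old = y_new
                y_new = y + h * f(x_next, y_old)         # the iteration (14.4.16)
                if abs((y_new - y_old) / y_new) < eps:   # tolerance on the relative error
                    break
            xs.append(x_next)
            ys.append(y_new)
        return xs, ys

    # For example, y' = y^2, y(0) = 1 with h = 0.05:
    xs, ys = predictor_corrector(lambda x, y: y * y, 0.0, 1.0, 0.05, 10)
    print(ys[-1])   # an approximation of y(0.5) = 2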