
Numerical solution of ordinary differential equations

Carsten Knudsen
Department of Physics
Technical University of Denmark
1999

1 Introduction
The present document is an additional set of notes for the course 10233, Dynamic Simulation of Complex Systems. The set of notes is a refresher/introduction to the numerical solution of ordinary differential equations as they appear in the context of System Dynamics. The following textbooks contain more advanced material: Dormand [1], Iserles [4], Lambert [5], and Hairer, Nørsett and Wanner [3]. Parker and Chua also discuss numerical solution of ordinary differential equations, and present many algorithms of use in the analysis of dynamical systems [6].

Before getting started on the material, a few words on the notation used. Scalars are typeset like x, whereas vectors are typeset in boldface: x. Differentiation with respect to time is denoted by a dot above the dependent variable, i.e., ẋ. The System Dynamics methodology, as well as other modelling methodologies, in general leads to a model being represented by a set of coupled first order differential equations like in

ẋ = f(x, t)    (1)

where x ∈ ℝⁿ and f : ℝⁿ × ℝ → ℝⁿ. The numerical solution problem becomes an initial value problem the moment we specify suitable initial conditions x(t₀) = x₀, with t₀ ∈ ℝ and x₀ ∈ ℝⁿ. The remainder of this document will discuss simple methods for solving the initial value problem, the so-called Runge-Kutta methods.

2 Simple integration methods


For simplicity we shall consider the scalar version of equation (1), i.e., the equation

ẋ = f(x, t)    (2)

where x ∈ ℝ and f : ℝ × ℝ → ℝ, and with an initial condition x(t₀) = x₀. We assume that the right hand side function f is sufficiently often differentiable for our purposes. Most of the results can easily be generalised to a higher-dimensional setting.

Our aim is to produce a sequence of numbers x₀, x₁, x₂, … associated with times t₀, t₁, t₂, … that approximate the real solution at those precise times, i.e., we hope that xₙ ≈ x(tₙ). Often one lets the chosen times be equidistant, that is, h = tₙ₊₁ − tₙ is a constant. Then we have the simple relation tₙ = t₀ + nh for n = 0, 1, 2, …. As we have assumed that the right hand side function f is nice, we can expand the solution at time tₙ₊₁ = tₙ + h in a Taylor series with Lagrange remainder

x(tₙ + h) = x(tₙ) + h ẋ(tₙ) + (h²/2) ẍ(ξ),   ξ ∈ (tₙ, tₙ + h)    (3)

Again assuming that f is nice, the second order derivative can be bounded on the open interval (tₙ, tₙ + h), and we can write the solution with an order symbol as

x(tₙ + h) = x(tₙ) + h f(x(tₙ), tₙ) + O(h²)    (4)

This naturally gives rise to the approximation formula known as Euler's method or the Forward Euler method:

xₙ₊₁ = xₙ + h f(xₙ, tₙ)    (5)

which has an error of second order (we will return to more detailed error considerations later on). The Forward Euler method approximates the time derivative by

ẋ(tₙ) ≈ (x(tₙ + h) − x(tₙ)) / h    (6)

Another possibility is to approximate the derivative going backwards in time, i.e.,

ẋ(tₙ) ≈ (x(tₙ) − x(tₙ − h)) / h    (7)

which at first sight looks very much like the same thing. However, it does yield a very different rule

xₙ₊₁ = xₙ + h f(xₙ₊₁, tₙ₊₁)    (8)

The main difference between (5) and (8) is that the former directly gives the new approximation xₙ₊₁ based on the old approximation xₙ, whereas for the latter the right hand side also depends on the new approximation xₙ₊₁. In other words, the latter method requires solving an implicit equation to obtain the new approximation, which will almost certainly lead to an iterative method. Quite naturally, the Forward Euler method is called explicit and the Backward Euler method is called implicit.

A natural question is whether the Backward Euler method has smaller error than the Forward Euler method. To investigate this we expand the solution. For simplicity we consider the autonomous case where ẋ = f(x).

xₙ₊₁ = xₙ + h f(xₙ₊₁)    (9)

     = xₙ + h f(xₙ + h f(xₙ₊₁))    (10)

     = xₙ + h [f(xₙ) + h f(xₙ₊₁) f′(xₙ) + O(h²)]    (11)

     = xₙ + h f(xₙ) + h² f(xₙ) f′(xₙ) + O(h³)    (12)

At first it looks like the error in a step is O(h³), but note that the second order term should be (h²/2) f(xₙ) f′(xₙ), whereas it is h² f(xₙ) f′(xₙ), with a missing one-half factor. Hence the error is O(h²), and the error of the method is of the same order as for the Forward Euler method.

Another simple method can be derived as follows. We write the solution using an integral like

x(tₙ₊₁) = x(tₙ) + ∫_{tₙ}^{tₙ₊₁} f(x(t), t) dt    (13)

We can then approximate the integral by the well-known trapezoidal rule from elementary calculus, i.e.,

∫_{tₙ}^{tₙ₊₁} f(x(t), t) dt ≈ (h/2) [f(x(tₙ), tₙ) + f(x(tₙ₊₁), tₙ₊₁)]    (14)

Using this we obtain the so-called Trapezoidal rule

xₙ₊₁ = xₙ + (h/2) [f(xₙ, tₙ) + f(xₙ₊₁, tₙ₊₁)]    (15)

This method is, in fact, better than the Euler methods since it is more accurate, as the following calculation shows.

xₙ₊₁ = xₙ + (h/2) [f(xₙ) + f(xₙ₊₁)]    (16)

     = xₙ + (h/2) [f(xₙ) + f(xₙ + h f(xₙ) + O(h²))]    (17)

     = xₙ + (h/2) [2 f(xₙ) + h f(xₙ) f′(xₙ) + O(h²)]    (18)

     = xₙ + h f(xₙ) + (h²/2) f(xₙ) f′(xₙ) + O(h³)    (19)

From the latter equation we see that the Trapezoidal method is indeed better since the error is of third order.

3 Single step methods


The kind of numerical methods we looked at in the previous section, and which we are going to concentrate on in the following, are the so-called single step methods, where the new approximation is computed from a single old approximation. These methods have the general form

xₙ₊₁ = xₙ + h Φ(xₙ, tₙ; h)    (20)

The error committed in each step is, like above, of the form

x(tₙ₊₁) − xₙ₊₁ = O(h^{p+1})    (21)

This error is called the truncation error because it is due to the fact that our solution method is only matched to a Taylor expansion of the true solution of finite order; hence we are using a truncated expansion. In the case above we say that the method is of order p. Hence Forward and Backward Euler are of first order and the Trapezoidal rule is of second order. The important problem to solve here is to make a sensible choice of the function Φ. In the next section we shall discuss one popular choice, namely the explicit Runge-Kutta methods.

4 Explicit Runge-Kutta methods


The general form of the explicit s-stage Runge-Kutta method is algorithmically given as follows.

k₁ = f(xₙ, tₙ)    (22)

kᵢ = f(xₙ + h Σ_{j=1}^{i−1} aᵢⱼ kⱼ, tₙ + cᵢ h),   i = 2, …, s    (23)

Φ = Σ_{i=1}^{s} bᵢ kᵢ    (24)

xₙ₊₁ = xₙ + h Φ    (25)

In addition it is required that the coefficients satisfy

cᵢ = Σ_{j=1}^{i−1} aᵢⱼ,   i = 2, …, s    (26)

Such a method is called an s-stage method. The definition looks a bit daunting; however, it is clear that it is well-defined. First we compute k₁ by equation (22). Then we compute the linear combination of k's in equation (23) using only already known k's; then we proceed to compute the next k. When all of the k's have been computed we use equation (24), and finally we compute the approximate solution using equation (25). What remains is to determine the coefficients entering the equations. We will discuss the determination of these in a simple case. In the special case of 2-stage methods (s = 2) we obtain

k₁ = f(xₙ, tₙ)    (27)

k₂ = f(xₙ + h a₂₁ k₁, tₙ + c₂ h)    (28)

Φ = b₁ k₁ + b₂ k₂    (29)

xₙ₊₁ = xₙ + h Φ    (30)

and the condition from equation (26) yields

c₂ = a₂₁    (31)

Hence in a more compact form we get

k₁ = f(xₙ, tₙ)    (32)

k₂ = f(xₙ + c₂ h k₁, tₙ + c₂ h)    (33)

Φ = b₁ k₁ + b₂ k₂    (34)

xₙ₊₁ = xₙ + h Φ    (35)

Next we address the question of determining the coefficients. The method is to pick the a's, b's and c's such that the method has the highest possible order. To simplify the calculations we introduce the standard notation for partial derivatives, f_x ≡ ∂f/∂x, f_t ≡ ∂f/∂t, and so on, where it is implicitly assumed that all partial derivatives are evaluated at x = xₙ and t = tₙ. We first Taylor expand the real solution to third order.

x(tₙ + h) = xₙ + h f + (h²/2)(f_t + f f_x) + (h³/6)(f_tt + 2 f f_tx + f² f_xx + f_x (f_t + f f_x)) + O(h⁴)    (36)

Next we expand the two-stage Runge-Kutta method

k₁ = f    (37)

k₂ = f(xₙ + c₂ h k₁, tₙ + c₂ h)    (38)

   = f + c₂ h k₁ f_x + c₂ h f_t + O(h²)    (39)

   = f + c₂ h (f_t + f f_x) + O(h²)    (40)

xₙ₊₁ = xₙ + h (b₁ k₁ + b₂ k₂)    (41)

     = xₙ + h b₁ f + h b₂ [f + c₂ h (f_t + f f_x) + O(h²)]    (42)

     = xₙ + h (b₁ + b₂) f + h² b₂ c₂ (f_t + f f_x) + O(h³)    (43)

Thus we arrive at

xₙ₊₁ = xₙ + h (b₁ + b₂) f + h² b₂ c₂ (f_t + f f_x) + O(h³)    (44)

Comparing equations (36) and (44) it is clear that we must require

b₁ + b₂ = 1    (45)

to match the first order terms, and

b₂ c₂ = 1/2    (46)

to match the second order terms. Thus, in general, the 2-stage explicit Runge-Kutta method has order 2 as long as we satisfy equations (45) and (46). It is in general not possible to obtain higher orders, since there are terms present in (36) not present in (44), except possibly for special choices of f, but this is not of interest here. For instance, we obtain a second order method with the choice b₁ = b₂ = 1/2 and c₂ = 1. Note that there is an infinity of second order methods, parameterised by, say, c₂.

The method of determining the coefficients can be applied to higher-order methods, but it becomes very tedious, and one should probably employ a symbolic mathematics package to deal with the algebra. One fourth-order method is so often used that it is often referred to as the Runge-Kutta method or the classical Runge-Kutta method. The k's should be computed as follows

k₁ = f(xₙ, tₙ)    (47)

k₂ = f(xₙ + (h/2) k₁, tₙ + h/2)    (48)

k₃ = f(xₙ + (h/2) k₂, tₙ + h/2)    (49)

k₄ = f(xₙ + h k₃, tₙ + h)    (50)

 0  |  0    0    0    0
1/2 | 1/2   0    0    0
1/2 |  0   1/2   0    0
 1  |  0    0    1    0
----+--------------------
    | 1/6  1/3  1/3  1/6

Table 1: Butcher table for the classical Runge-Kutta method.

 0   |
 c₂  | a₂₁
 c₃  | a₃₁  a₃₂
 ⋮   |  ⋮        ⋱
 c_s | a_{s,1}  …  a_{s,s−1}
-----+------------------------
     | b₁   b₂   …   b_s

Table 2: Butcher table.

and then the approximate solution can be generated through

xₙ₊₁ = xₙ + (h/6) (k₁ + 2k₂ + 2k₃ + k₄)    (51)

An efficient way in which to represent a Runge-Kutta method is a so-called Butcher table. We have shown the Butcher table for the classical Runge-Kutta method in Table 1. In the first column we place the c's (the time step fractions), in the last row are the b's (the k-weights in Φ), and the matrix contains the a's (the k-weights used in the k-computations). The zeroes on, and above, the diagonal are necessary for the method to be explicit. The general form of the Butcher table for an explicit s-stage Runge-Kutta method is illustrated in Table 2.

5 Implementation
Though the Runge-Kutta methods as described in the previous section are relatively simple, it is, as is often the case, easy to make programming errors in the actual implementation. A straightforward implementation of the fourth order Runge-Kutta method from equation (51), in pseudo-code for a programming language supporting vectors, can be found in Program 5.1. Some of the pitfalls of programming the method disappear when vectors are readily available, as in Program 5.1. For a simpler programming language such as C the problem can be solved as in Program 5.2. The important thing is the order in which calculations are carried out, so note that carefully.

Having implemented an integration procedure, it is very important to test it before employing it to solve serious problems. It is always a good idea to solve a known problem, i.e., one where the analytical solution is known. If the solution improves, that is, the error decreases with decreasing time step, then we might be on the right track. If we can show that the error has the

Program 5.1 Program implementing the classical fourth order Runge-Kutta method.

input x, t, h;
vector k1, k2, k3, k4, z;
k1 := f(x, t);
z := x + 0.5*h*k1;
k2 := f(z, t + 0.5*h);
z := x + 0.5*h*k2;
k3 := f(z, t + 0.5*h);
z := x + h*k3;
k4 := f(z, t + h);
x := x + h/6*(k1 + 2*k2 + 2*k3 + k4);
t := t + h;
output x, t;

Program 5.2 C program implementing the classical fourth order Runge-Kutta method.

#include <math.h>
#include <stdio.h>
#define N 2

void fcn(double *x, double t, double *f)
{
    f[0] = x[1];
    f[1] = -x[0];
}

void step(double *x, double *t, double h)
{
    double k1[N], k2[N], k3[N], k4[N], z[N];
    int i;
    fcn(x, *t, k1);
    for (i = 0; i < N; i++) z[i] = x[i] + 0.5*h*k1[i];
    fcn(z, *t + 0.5*h, k2);
    for (i = 0; i < N; i++) z[i] = x[i] + 0.5*h*k2[i];
    fcn(z, *t + 0.5*h, k3);
    for (i = 0; i < N; i++) z[i] = x[i] + h*k3[i];
    fcn(z, *t + h, k4);
    for (i = 0; i < N; i++) x[i] += h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]);
    *t += h;
}

int main(void)
{
    double x[2], t = 0.0, h = 0.1;
    x[0] = 0.0;
    x[1] = 1.0;
    step(x, &t, h);
    return 0;
}

correct order, i.e., five for the classical Runge-Kutta method, we can probably proceed and use the code. The order test for a pth order method goes as follows. The error E for small time steps is roughly C h^{p+1}, and taking logarithms on both sides we get

log E = log C + (p + 1) log h    (52)

This equation shows that if we plot the error against the time step using logarithmic scales, the slope for small h is the order plus one.

6 Error control
Numerical solution of ordinary differential equations, such as with Runge-Kutta methods, involves several errors, not including the ever present programming errors, which the previous section should have reduced. We refer to two kinds of unavoidable errors, namely truncation errors and rounding errors. Rounding errors are due to the finite precision used by the computer when carrying out operations. However, the finite precision which leads to rounding often only influences the least significant digit, and hence its effect can mostly be ignored, except if we use very small time steps. The truncation errors, which are more serious, arise from the fact that our methods are based on finite orders (as compared with a Taylor expansion of the solution). We have already discussed this partly, and we know that the error of a pth order method is O(h^{p+1}). We shall utilise this fact to improve the numerical solution in the remainder of this section.

The problem facing us is what step size to pick. There is, of course, no one optimal step size for which the truncation error is minimised during a simulation, and it is therefore wise to let the step size change dynamically. The problem is: how? This section attempts to give a first answer to this question. Let us assume that we wish to ensure that the truncation error (hereafter referred to as the error) never exceeds, say, δ. For a pth order method the (absolute value of the) error committed in a single time step can be estimated by

E ≈ C h^{p+1}    (53)

where C is an unknown positive constant. There then exists an optimal time step h_opt that would have led to precisely the maximum error δ, which can be expressed by

δ = C h_opt^{p+1}    (54)

Dividing the last two equations eliminates the unknown constant and we obtain

δ/E = (h_opt/h)^{p+1}    (55)

from which we can isolate the optimal time step

h_opt = h (δ/E)^{1/(p+1)}    (56)

So, all we have to do is to estimate the actual error E. This is not easy in general, but we can give a simple fix. If we assume that it is much better to take two steps of size h/2 rather than one of size h, we can estimate the error by

E ≈ |R_{h/2}(R_{h/2}(xₙ, tₙ), tₙ + h/2) − R_h(xₙ, tₙ)|    (57)

 0   |
 c₂  | a₂₁
 ⋮   |  ⋮       ⋱
 c_s | a_{s,1}  …  a_{s,s−1}
-----+------------------------
     | b₁   b₂   …   b_s
     | b̂₁   b̂₂   …   b̂_s

Table 3: Butcher table for a general Runge-Kutta pair.

where the function R_h(x, t) takes one Runge-Kutta step of size h from the initial conditions x and t. For a fourth order method like the classical Runge-Kutta method, halving the time step should multiply the error in a step by a factor of 2⁻⁵ = 1/32, so presumably it is alright to assume that two steps of half-size are much better than one of full-size. In practice this kind of error control can lead to large fluctuations in step size, and it is often better to introduce a bit of conservatism in the correction, such as in the formula

h_new = α h (δ/E)^{1/(p+1)}    (58)

where 0 < α < 1 is a weighting factor; the smaller α, the more conservative the time step control.

7 Runge-Kutta pairs
There is a better way of computing the error estimate used in the time step control discussed in the last section. The method estimates the error by computing the solution with two different Runge-Kutta methods, rather than computing two different solutions with the same method. At first this sounds like a bad idea, since it will lead to a lot more computational work. However, there exist so-called Runge-Kutta pairs, which are methods of different order that share the same a- and c-coefficients. The two methods in the pair do, however, have different b's (else they would be identical, of course). Since we normally assume that the computational work is primarily located in evaluating the right hand side function f, there is little extra work involved in using a pair rather than a single method. A Runge-Kutta pair is usually represented in an extended Butcher table with the general form shown in Table 3. A concrete example of a five-stage Runge-Kutta pair of orders three and four due to Nørsett can be found in Table 4 (see Enright et al. [2]). The second to last row is a third order method, and the last row is a fourth order method.

The use of a Runge-Kutta pair is as follows. First compute the third and fourth order solutions x̂ₙ₊₁ and xₙ₊₁. The error can then be estimated by E = |xₙ₊₁ − x̂ₙ₊₁|, where we have assumed that the fourth order solution is superior since it is of a higher order. We can then compute the optimal time step using equation (56) (do not forget that the error here is O(h⁴)) and then possibly a new time step employing equation (58). Hence the lower order method is used for the time step control. The solution we use should, however, be the fourth order approximation, since we believe it is much better.

(Coefficients omitted; see Enright et al. [2].)

Table 4: Butcher table for a Runge-Kutta pair of order 3-4.

References
[1] J. R. Dormand, Numerical methods for differential equations, CRC Press, Boca Raton, 1996.
[2] W. H. Enright, K. R. Jackson, S. P. Nørsett, and P. G. Thomsen, Interpolants for Runge-Kutta formulas, ACM Trans. Math. Softw. 12 (1986), 193-218.
[3] E. Hairer, S. P. Nørsett, and G. Wanner, Solving ordinary differential equations I: Nonstiff problems, Springer-Verlag, Berlin, 1993.
[4] A. Iserles, A first course in the numerical analysis of differential equations, Cambridge University Press, Cambridge, 1996.
[5] J. D. Lambert, Computational methods in ordinary differential equations, John Wiley & Sons, Chichester, 1973.
[6] T. S. Parker and L. O. Chua, Practical numerical algorithms for chaotic systems, Springer-Verlag, New York, 1989.

