MULTISTEP METHOD FOR SOLVING ORDINARY DIFFERENTIAL EQUATIONS

By
Ibrahim Zerga

Advisor

January 2011
Haramaya University
As members of the Examination Board of the Final M.Sc. Open Defense, we certify that we have read and evaluated this Graduate Seminar prepared by Ibrahim Zerga, entitled "Multistep Method for Solving Ordinary Differential Equations", and recommend that it be accepted as fulfilling the Graduate Seminar requirements for the Degree of Master of Science in Mathematics.

Final approval and acceptance of the Graduate Seminar is contingent upon the submission of the final copy of the Graduate Seminar to the Council of Graduate Study (CGS) through the Department Graduate Committee (DGC) of the candidate's department.

I hereby certify that I have read this Graduate Seminar prepared under my direction and recommend that it be accepted as fulfilling the Graduate Seminar requirement.
PREFACE

Among the different kinds of numerical methods for solving ordinary differential equations, this seminar report treats one of the most popular and efficient: the linear multistep method, applied to first-order initial value problems.

The report consists of three chapters. The first presents some mathematical preliminaries (definitions and theorems) that will be helpful for the main body of the seminar.

The second chapter discusses the derivation of multistep methods and presents some powerful multistep methods (Adams-Bashforth and Adams-Moulton). Worked examples lead up to the notion of a predictor-corrector method.

The final chapter analyzes multistep methods and defines some basic terms used to control the error introduced by the starting values. Some basic theorems (Dahlquist's) are stated without proof, and an example, supported by a detailed graph, shows that consistency is not a sufficient condition for convergence of the method.
ACKNOWLEDGEMENT

My advisor Dr. Getinet Alemayehu has always been extremely encouraging towards me; he has, in essence, taught me how to do mathematics, and his outstanding contributions to the field of numerical analysis have always been an encouragement to me. I also wish to thank my family and friends for all the wonderful times we have had.
TABLE OF CONTENTS
PREFACE.........................................................................................................III
ACKNOWLEDGEMENT...............................................................................IV
INTRODUCTION.................................................................................................1
CHAPTER ONE ...............................................................................................2
MATHEMATICAL PRELIMINARIES..........................................................2
CHAPTER TWO................................................................................................6
LINEAR MULTISTEP METHOD...................................................................6
2.1. Explicit Multistep Methods..............................................................................7
2.1.1. Adams Bashforth Methods......................................................................................9
2.1.2. NystrÖm Methods................................................................................................9
2.2. Implicit Multistep Methods............................................................................10
2.2.1. Adams-Moulton Method......................................................................................11
2.2.2. Milne-Simpson Method.......................................................................................12
CHAPTER THREE..........................................................................................16
ANALYSIS OF MULTISTEP METHOD......................................................16
3.1. Zero-Stability...........................................................................................16
3.2. Consistency..............................................................................................20
SUMMARY.......................................................................................................24
REFERENCES.................................................................................................25
INTRODUCTION

Differential equations are used to model problems in science and engineering that involve the change of some variable with respect to another. Most of these problems require the solution of an initial value problem, that is, the solution of a differential equation that satisfies a given initial condition.

In most real-life situations, the differential equation that models the problem is too complicated to solve exactly, and one of two approaches is taken to approximate the solution. The first approach is to simplify the differential equation. The other approach uses methods for approximating the solution of the original problem.

Numerical methods will always be concerned with solving a perturbed problem, since any round-off error introduced in the representation perturbs the original problem. Unless the original problem is well-posed, there is little reason to expect that the numerical solution to the perturbed problem will accurately approximate the solution to the original problem.

One-step methods construct an approximate solution y_{n+1} ≈ y(x_{n+1}) using only one previous approximation y_n. They enjoy the virtue that the step size h can be changed at every iteration, if desired, thus providing a mechanism for error control; in our case, however, we consider only a constant step size. The multistep methods discussed in this seminar use less computer time than the Runge-Kutta methods and are frequently used in commercial routines because of their combined accuracy, stability, and computational efficiency.

Several drawbacks of the multistep scheme are evident: it is difficult to adjust the step size, and the values y_0, y_1, …, y_{k−1} must be known before the method can start, i.e. it is not self-starting. The former concern can be addressed in practice through interpolation techniques. To handle the latter, the initial data can be generated using a one-step method with small step size h.
CHAPTER ONE

MATHEMATICAL PRELIMINARIES

This chapter includes some basic definitions and theorems which will be helpful for the next chapters, that is, for the main parts of the seminar.

Definition (1.1) An ordinary differential equation is a relation between a function, its derivatives, and the variable upon which they depend. The most general form of an ordinary differential equation is

F(x, y, y′, …, y^(m)) = 0,

where m represents the highest order of the derivatives, and y and its derivatives are functions of x. The order of the differential equation is the order of its highest derivative, and its degree is the degree of the highest derivative of the highest order after the equation has been rationalized in the derivatives. If the initial conditions

y(x_0) = y_0, y′(x_0) = y_1, …, y^(m−1)(x_0) = y_{m−1}

at a point x = x_0 are given, then the differential equation together with the initial conditions is called an mth-order initial value problem.
Definition (1.2) A function f(t, y) is said to satisfy a Lipschitz condition in the variable y on a set D if a constant L > 0 exists with

|f(t, y_1) − f(t, y_2)| ≤ L|y_1 − y_2|

whenever (t, y_1), (t, y_2) belong to D. The constant L is called the Lipschitz constant for f.
Definition (1.4) The initial value problem

dy/dt = f(t, y), a ≤ t ≤ b, y(a) = α,

is said to be a well-posed problem if:

(i) a unique solution y(t) exists, and
(ii) for any ε > 0 there exists a constant k(ε) > 0 such that, whenever |δ_0| < ε and δ(t) is continuous with |δ(t)| < ε on [a, b], the perturbed initial value problem

dz/dt = f(t, z) + δ(t), a ≤ t ≤ b, z(a) = α + δ_0,

has a unique solution z(t) satisfying |z(t) − y(t)| < k(ε)ε for all t in [a, b].

Theorem. If f is continuous and satisfies a Lipschitz condition in the variable y on the set D, then the initial value problem above is well posed.
Theorem (Weighted mean value theorem for integrals). Suppose f ∈ C[a, b], the Riemann integral of g exists on [a, b], and g(x) does not change sign on [a, b]. Then there exists a number c in (a, b) with

∫_a^b f(x)g(x) dx = f(c) ∫_a^b g(x) dx.

When g(x) ≡ 1, this becomes the usual mean value theorem for integrals, which gives the average value of the function f over the interval [a, b] as

f(c) = (1/(b − a)) ∫_a^b f(x) dx.
Newton backward interpolation formula: let the function y = f(x) take the values y_0, y_1, …, y_n at the equally spaced points x_0, x_1, …, x_n.
Taylor's theorem: if f(x) is continuous and possesses continuous derivatives of order n in an interval that includes x = a, then in that interval

f(x) = f(a) + (x − a)f′(a) + ((x − a)²/2!)f″(a) + … + ((x − a)^(n−1)/(n − 1)!)f^(n−1)(a) + R_n(x),

where the remainder term is

R_n(x) = ((x − a)^n/n!) f^(n)(ξ), a < ξ < x.
Lemma (1.1) Consider the kth-order homogeneous linear recurrence relation

α_k y_{n+k} + … + α_1 y_{n+1} + α_0 y_n = 0, n = 0, 1, …, (*)

with characteristic polynomial

p(z) = α_k z^k + … + α_1 z + α_0.

Let z_r, 1 ≤ r ≤ l, l ≤ k, be the distinct roots of the polynomial p, and let m_r ≥ 1 denote the multiplicity of z_r. Then every solution of (*) has the form

y_n = Σ_{r=1}^{l} p_r(n) z_r^n,

where p_r(n) is a polynomial of degree m_r − 1, 1 ≤ r ≤ l. In particular, if all roots are simple, that is m_r = 1, 1 ≤ r ≤ k, then the p_r, r = 1, …, k, are constants.
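The lemma can be checked numerically. The following is a small sketch (the recurrence and starting values are a hypothetical example, not from the report): for y_{n+2} − y_{n+1} − 2y_n = 0 the characteristic polynomial is z² − z − 2 = (z − 2)(z + 1), with simple roots z_1 = 2 and z_2 = −1, so every solution is y_n = p_1·2ⁿ + p_2·(−1)ⁿ for constants p_1, p_2.

```python
def iterate(y0, y1, n):
    """Iterate y_{k+2} = y_{k+1} + 2*y_k up to index n."""
    y = [y0, y1]
    for _ in range(n - 1):
        y.append(y[-1] + 2 * y[-2])
    return y

# Starting values y0 = 2, y1 = 1 give p1 = p2 = 1 (solve the 2x2 system
# p1 + p2 = 2, 2*p1 - p2 = 1), so y_n = 2**n + (-1)**n exactly.
seq = iterate(2, 1, 10)
closed = [2**n + (-1)**n for n in range(11)]
assert seq == closed
```

Matching the iterated sequence against the closed form confirms the structure of solutions that the lemma predicts.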
CHAPTER TWO

LINEAR MULTISTEP METHOD

While Runge-Kutta methods give an improvement over Euler's method in terms of accuracy, this is achieved by investing additional computational effort; in fact, a Runge-Kutta method requires more evaluations of f(·,·) than would seem necessary. For example, the fourth-order method involves four function evaluations per step. For comparison, consider three consecutive points x_{n−1}, x_n = x_{n−1} + h, x_{n+1} = x_{n−1} + 2h; integrating the differential equation between x_{n−1} and x_{n+1} yields

y(x_{n+1}) = y(x_{n−1}) + ∫_{x_{n−1}}^{x_{n+1}} f(x, y(x)) dx,

and applying Simpson's rule to approximate the integral on the right-hand side then leads to the method

y_{n+1} = y_{n−1} + (h/3)(f_{n−1} + 4f_n + f_{n+1}), (2.1)

requiring only three function evaluations per step. In contrast with the one-step methods considered previously, where only a single value y_n was required to compute the next approximation y_{n+1}, here we need two preceding values, y_n and y_{n−1}, to be able to calculate y_{n+1}, and therefore (2.1) is not a one-step method.
In this chapter we consider a class of methods of the type (2.1) for the numerical solution of the initial value problem (1.1), (1.2), called linear multistep methods.

Given a sequence of equally spaced mesh points x_n with step size h, we consider the general linear k-step method

Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f(x_{n+j}, y_{n+j}), (2.2)

where the coefficients α_j and β_j, j = 0, 1, …, k, are real constants. To avoid degenerate cases, we shall assume that α_k ≠ 0 and that α_0 and β_0 are not both equal to zero. If β_k = 0, then y_{n+k} is obtained explicitly from previous values of y_j and f(x_j, y_j), and the k-step method is then called explicit. On the other hand, if β_k ≠ 0, then y_{n+k} appears not only on the left-hand side but also on the right, within f(x_{n+k}, y_{n+k}); owing to this implicit dependence on y_{n+k}, the method is then called implicit. The method (2.2) is called linear because it involves only linear combinations of the y_{n+j} and f(x_{n+j}, y_{n+j}).

Most Runge-Kutta methods are one-step methods that do not fit the multistep template; Euler's method is an example of a one-step method that also fits the multistep template. Here are a few examples of linear multistep methods.
β_0 = −1/2, β_1 = 3/2, β_2 = 1/2,
2.1. Explicit Multistep Methods

To begin the derivation of explicit multistep methods, first note the solution to the initial value problem y′ = f(x, y), y(x_0) = y_0. If we integrate the differential equation from x_{j−i} to x_{j+1} for i ≥ 0, we get

y(x_{j+1}) − y(x_{j−i}) = ∫_{x_{j−i}}^{x_{j+1}} y′(x) dx = ∫_{x_{j−i}}^{x_{j+1}} f(x, y(x)) dx.

Consequently,

y(x_{j+1}) = y(x_{j−i}) + ∫_{x_{j−i}}^{x_{j+1}} f(x, y(x)) dx. (2.3)
Since the integrand in (2.3) involves the unknown function y(x), we cannot integrate it directly; instead we replace f(x, y(x)) by a polynomial p_{k−1}(x) of degree k − 1 which interpolates f(x, y(x)) at the k points x_j, x_{j−1}, …, x_{j−k+1}. The Newton backward difference polynomial interpolating the data (x_j, f_j), (x_{j−1}, f_{j−1}), …, (x_{j−k+1}, f_{j−k+1}) is given by

p_{k−1}(x) = f_j + ((x − x_j)/h) ∇f_j + ((x − x_j)(x − x_{j−1})/(2! h²)) ∇²f_j + …, (2.4)

with interpolation error involving f^(k)(ε), where ε lies in the interval containing the points x_j, x_{j−1}, …, x_{j−k+1} and x. Substituting x = x_j + sh and writing C(−s, m) for the binomial coefficient,

p_{k−1}(x_j + sh) = f_j + s∇f_j + (s(s + 1)/2!)∇²f_j + …
= Σ_{m=0}^{k−1} (−1)^m C(−s, m) ∇^m f_j + (−1)^k C(−s, k) h^k f^(k)(ε).

Hence

y(x_{j+1}) = y(x_{j−i}) + h ∫_{−i}^{1} [Σ_{m=0}^{k−1} (−1)^m C(−s, m) ∇^m f_j + (−1)^k C(−s, k) h^k f^(k)(ε)] ds

= y(x_{j−i}) + h Σ_{m=0}^{k−1} γ_m(i) ∇^m f_j + T_k(i), (2.5)

where

γ_m(i) = ∫_{−i}^{1} (−1)^m C(−s, m) ds, (2.6)

T_k(i) = h^{k+1} ∫_{−i}^{1} (−1)^k C(−s, k) f^(k)(ε) ds.

Neglecting the error term T_k(i) in (2.5), we obtain the explicit multistep method

y_{j+1} = y_{j−i} + h Σ_{m=0}^{k−1} γ_m(i) ∇^m f_j. (2.7)
Since the error is O(h^{k+1}), the method (2.7) is at least of order k. Now, from (2.6),

γ_0(i) = ∫_{−i}^{1} ds = 1 + i,
γ_1(i) = ∫_{−i}^{1} s ds = (1/2)(1 + i)(1 − i),
γ_2(i) = (1/2) ∫_{−i}^{1} s(s + 1) ds = (1/12)(5 − 3i² + 2i³),
γ_3(i) = (1/6) ∫_{−i}^{1} s(s + 1)(s + 2) ds = (1/24)(3 − i)(3 + i − i² + i³),

and so on. For i = 0 we obtain the Adams-Bashforth methods, written in backward difference form as

y_{j+1} = y_j + h[f_j + (1/2)∇f_j + (5/12)∇²f_j + (3/8)∇³f_j + (251/720)∇⁴f_j + …].
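The coefficients γ_m(i) can be reproduced by exact integration. The following is a sketch (not part of the report) that uses the identity (−1)^m C(−s, m) = s(s + 1)⋯(s + m − 1)/m! together with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def gamma(m, i=0):
    """gamma_m(i) = (1/m!) * integral of s(s+1)...(s+m-1) over [-i, 1], exactly."""
    # Build the polynomial s(s+1)...(s+m-1) as a coefficient list, lowest degree first.
    poly = [Fraction(1)]                       # the empty product is 1
    for r in range(m):                         # multiply by (s + r)
        shifted = [Fraction(0)] + poly         # s * poly
        scaled = [Fraction(r) * c for c in poly] + [Fraction(0)]
        poly = [a + b for a, b in zip(shifted, scaled)]
    # Integrate term by term and evaluate between -i and 1.
    antider = [c / (k + 1) for k, c in enumerate(poly)]
    def eval_at(x):
        return sum(c * Fraction(x) ** (k + 1) for k, c in enumerate(antider))
    return (eval_at(1) - eval_at(-i)) / factorial(m)

# The i = 0 values reproduce the Adams-Bashforth coefficients above.
assert [gamma(m) for m in range(5)] == [
    Fraction(1), Fraction(1, 2), Fraction(5, 12), Fraction(3, 8), Fraction(251, 720)
]
# gamma_1(1) = 0, matching the vanishing first-difference term below.
assert gamma(1, 1) == 0
```

Computing with Fraction rather than floats makes the comparison with the tabulated rational coefficients exact.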
The error term for i = 0 is

T_k(0) = h^{k+1} ∫_{0}^{1} (−1)^k C(−s, k) f^(k)(ε) ds = h^{k+1} ∫_{0}^{1} g(s) f^(k)(ε) ds.

Since g(s) does not change sign in [0, 1], we have, by the mean value theorem of integral calculus,

T_k(0) = h^{k+1} γ_k(0) f^(k)(ε_1),

where ε_1 lies in the interval containing the points x_{j−k+1}, x_{j−k+2}, …, x_j and x.

For i = 1 we obtain the Nyström methods,

y_{j+1} = y_{j−1} + h[2f_j + 0·∇f_j + (1/3)∇²f_j + (1/3)∇³f_j + (29/90)∇⁴f_j + …].
The corresponding error term is

T_k(1) = h^{k+1} ∫_{−1}^{1} g(s) f^(k)(ε) ds.

Since g(s) changes sign in [−1, 1], the mean value theorem cannot be applied; however, a bound for the error can be written as

|T_k(1)| ≤ h^{k+1} M_k ∫_{−1}^{1} |g(s)| ds,

where M_k = max_{−1≤x≤1} |f^(k)(x)|.
2.2. Implicit Multistep Methods

To derive implicit methods, we replace f(x, y) in (2.3) by the polynomial p_k(x) of degree k which interpolates f(x, y) at the k + 1 points x_{j+1}, x_j, …, x_{j−k+1}:

p_k(x) = f_{j+1} + ((x − x_{j+1})/h) ∇f_{j+1} + ((x − x_{j+1})(x − x_j)/(2! h²)) ∇²f_{j+1} + …,

where ε (in the error term) lies in the interval containing the points x_{j+1}, x_j, …, x_{j−k+1} and x. Substituting x = x_j + sh,

p_k(x_j + sh) = Σ_{m=0}^{k} (−1)^m C(1 − s, m) ∇^m f_{j+1} + (−1)^{k+1} C(1 − s, k + 1) h^{k+1} f^(k+1)(ε),

and hence

y(x_{j+1}) = y(x_{j−i}) + h ∫_{−i}^{1} [Σ_{m=0}^{k} (−1)^m C(1 − s, m) ∇^m f_{j+1} + (−1)^{k+1} C(1 − s, k + 1) h^{k+1} f^(k+1)(ε)] ds

= y(x_{j−i}) + h Σ_{m=0}^{k} δ_m(i) ∇^m f_{j+1} + τ_{k+1}(i), (2.8)

where

δ_m(i) = ∫_{−i}^{1} (−1)^m C(1 − s, m) ds, (2.9)

τ_{k+1}(i) = h^{k+2} ∫_{−i}^{1} (−1)^{k+1} C(1 − s, k + 1) f^(k+1)(ε) ds. (2.10)

Neglecting the error term, we obtain the implicit multistep method

y_{j+1} = y_{j−i} + h Σ_{m=0}^{k} δ_m(i) ∇^m f_{j+1}. (2.11)
Since the truncation error in (2.8) is O(h^{k+2}), the method (2.11) is at least of order k + 1. Calculating δ_m(i) for m = 0, 1, …, we obtain

δ_0(i) = ∫_{−i}^{1} ds = 1 + i,
δ_1(i) = −∫_{−i}^{1} (1 − s) ds = −(1/2)(1 + i)²,
δ_2(i) = −(1/2) ∫_{−i}^{1} s(1 − s) ds = −(1/12)(1 + i)²(1 − 2i),
δ_3(i) = −(1/6) ∫_{−i}^{1} s(1 − s)(s + 1) ds = −(1/24)(1 + i)²(1 − i)²,
δ_4(i) = −(1/24) ∫_{−i}^{1} s(1 − s)(s + 1)(s + 2) ds = −(1/720)(1 + i)²(19 − 38i + 27i² − 6i³),

and so on. For i = 0 we obtain the Adams-Moulton methods, written in backward difference form as

y_{j+1} = y_j + h[f_{j+1} − (1/2)∇f_{j+1} − (1/12)∇²f_{j+1} − (1/24)∇³f_{j+1} − (19/720)∇⁴f_{j+1} − …].
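The δ_m(i) can likewise be verified by exact integration. This sketch (not part of the report) uses the identity (−1)^m C(1 − s, m) = (s − 1)s(s + 1)⋯(s + m − 2)/m!:

```python
from fractions import Fraction
from math import factorial

def delta(m, i=0):
    """delta_m(i) = (1/m!) * integral of (s-1)s(s+1)...(s+m-2) over [-i, 1], exactly."""
    poly = [Fraction(1)]                       # coefficient list, lowest degree first
    for r in range(m):                         # multiply by (s + r - 1)
        shifted = [Fraction(0)] + poly         # s * poly
        scaled = [Fraction(r - 1) * c for c in poly] + [Fraction(0)]
        poly = [a + b for a, b in zip(shifted, scaled)]
    antider = [c / (k + 1) for k, c in enumerate(poly)]
    def eval_at(x):
        return sum(c * Fraction(x) ** (k + 1) for k, c in enumerate(antider))
    return (eval_at(1) - eval_at(-i)) / factorial(m)

# i = 0 reproduces the Adams-Moulton coefficients above.
assert [delta(m) for m in range(5)] == [
    Fraction(1), Fraction(-1, 2), Fraction(-1, 12), Fraction(-1, 24), Fraction(-19, 720)
]
```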
The error term for i = 0 is

τ_{k+1}(0) = h^{k+2} ∫_{0}^{1} (−1)^{k+1} C(1 − s, k + 1) f^(k+1)(ε) ds = h^{k+2} ∫_{0}^{1} g(s) f^(k+1)(ε) ds.

Since g(s) does not change sign in [0, 1], we have by the mean value theorem

τ_{k+1}(0) = h^{k+2} δ_{k+1}(0) f^(k+1)(ε).

For i = 1 we obtain the Milne-Simpson methods,

y_{j+1} = y_{j−1} + h[2f_{j+1} − 2∇f_{j+1} + (1/3)∇²f_{j+1} + 0·∇³f_{j+1} − (1/90)∇⁴f_{j+1} − …].
The corresponding error term is

τ_{k+1}(1) = h^{k+2} ∫_{−1}^{1} g(s) f^(k+1)(ε) ds.

Since g(s) changes sign in [−1, 1], the mean value theorem cannot be applied; however, a bound for the error can be written as

|τ_{k+1}(1)| ≤ h^{k+2} M_{k+1} ∫_{−1}^{1} |g(s)| ds,

where M_{k+1} = max_{−1≤x≤1} |f^(k+1)(x)|.
Some additional, especially interesting formulas of type (2.3) are those corresponding to k = 1, i = 1 and to k = 3, i = 3. Formula (2.11), which is comparable in simplicity to Euler's method, has a more favorable discretization error. Similarly, (2.12), which requires knowledge of f(x, y) at only three points, has a discretization error comparable with that of the Adams-Bashforth method. It can be shown that all formulas of type (2.3) with k odd and k = i have the property that the coefficient of the kth difference vanishes, thus yielding a formula of higher order than might be expected. On the other hand, these formulas are subject to greater instability, a concept which will be developed in the next chapter.

To compare the two Adams families and to arrive at the notion of a predictor-corrector method, let us consider the following example.
Example. Consider the initial value problem

y′ = y − x² + 1, 0 ≤ x ≤ 2, y(0) = 0.5,

and the approximations given by the explicit Adams-Bashforth four-step method and the implicit Adams-Moulton three-step method, both using h = 0.2. The Adams-Bashforth method is

y_{i+1} = y_i + (h/24)[55f(x_i, y_i) − 59f(x_{i−1}, y_{i−1}) + 37f(x_{i−2}, y_{i−2}) − 9f(x_{i−3}, y_{i−3})],

for i = 3, 4, …, 9. When simplified using f(x, y) = y − x² + 1, h = 0.2, and x_i = 0.2i, it becomes

y_{i+1} = (1/24)[35y_i − 11.8y_{i−1} + 7.4y_{i−2} − 1.8y_{i−3} − 0.192i² − 0.192i + 4.736].

The Adams-Moulton method similarly simplifies to

y_{i+1} = (1/24)[1.8y_{i+1} + 27.8y_i − y_{i−1} + 0.2y_{i−2} − 0.192i² − 0.192i + 4.736].

The results in the table below were obtained using the exact values from y(x) = (x + 1)² − 0.5eˣ for y_0, y_1, y_2, and y_3 in the explicit Adams-Bashforth case, and for y_0, y_1, and y_2 in the implicit Adams-Moulton case.
x_i    Adams-Bashforth    Adams-Moulton
0.0    0.5000000          0.5000000
0.2    0.8292986          0.8292986
0.4    1.2140877          1.2140877
…
In the above example the implicit Adams-Moulton method gave better results than the explicit Adams-Bashforth method of the same order. Although this is generally the case, the implicit methods have the inherent weakness of first having to convert the method algebraically to an explicit representation for y_{j+1}. This procedure is not always possible, as can be seen by considering an elementary initial value problem. We could use Newton's method or the secant method to approximate y_{i+1}, but this complicates the procedure considerably. In practice, implicit multistep methods are not used as described above; rather, they are used to improve approximations obtained by explicit methods. The combination of an explicit and an implicit technique is called a predictor-corrector method. The explicit method predicts an approximation, and the implicit method corrects this prediction. For example, with starting values y_0, y_1, y_2, y_3, the four-step Adams-Bashforth method gives the prediction
y_4^(0) = y_3 + (h/24)[55f(x_3, y_3) − 59f(x_2, y_2) + 37f(x_1, y_1) − 9f(x_0, y_0)].
This approximation is improved by inserting y_4^(0) into the right-hand side of the three-step implicit Adams-Moulton method and using that method as a corrector. This gives

y_4^(1) = y_3 + (h/24)[9f(x_4, y_4^(0)) + 19f(x_3, y_3) − 5f(x_2, y_2) + f(x_1, y_1)].

The only new function evaluation required in this procedure is f(x_4, y_4^(0)) in the corrector equation; all the other values of f have been calculated for earlier approximations. The value y_4^(1) is then used as the approximation to y(x_4), and the technique of using the Adams-Bashforth method as a predictor and the Adams-Moulton method as a corrector is repeated to find y_5^(0) and y_5^(1), the initial and final approximations to y(x_5). This process is continued until we obtain an approximation to y(x_N) = y(b).
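The predictor-corrector scheme described above can be sketched as follows. This is an illustrative implementation, not code from the report; the function names and the error tolerance in the final check are my own, and the starting values are taken from the exact solution, as in the example.

```python
import math

def f(x, y):
    # Right-hand side of the example problem y' = y - x^2 + 1.
    return y - x * x + 1.0

def exact(x):
    # Exact solution y(x) = (x + 1)^2 - 0.5 e^x of the example problem.
    return (x + 1.0) ** 2 - 0.5 * math.exp(x)

def abm4(f, x0, y_start, h, n):
    """Four-step Adams-Bashforth predictor with three-step Adams-Moulton corrector."""
    xs = [x0 + i * h for i in range(n + 1)]
    ys = list(y_start)                         # starting values y0..y3 assumed known
    for i in range(3, n):
        fi = [f(xs[i - j], ys[i - j]) for j in range(4)]   # f_i, f_{i-1}, f_{i-2}, f_{i-3}
        # Predictor (four-step Adams-Bashforth):
        yp = ys[i] + h / 24.0 * (55*fi[0] - 59*fi[1] + 37*fi[2] - 9*fi[3])
        # Corrector (three-step Adams-Moulton); only one new f evaluation is needed:
        yc = ys[i] + h / 24.0 * (9*f(xs[i + 1], yp) + 19*fi[0] - 5*fi[1] + fi[2])
        ys.append(yc)
    return xs, ys

h, n = 0.2, 10
start = [exact(i * h) for i in range(4)]       # exact starting values y0..y3
xs, ys = abm4(f, 0.0, start, h, n)
err = max(abs(exact(x) - y) for x, y in zip(xs, ys))
assert err < 1e-3                              # fourth-order accuracy at h = 0.2
```

Each step costs two new function evaluations (one for the predictor's f_i, one at the predicted value), in line with the efficiency argument made above.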
CHAPTER THREE

ANALYSIS OF MULTISTEP METHOD

This chapter analyzes the methods discussed in Chapter Two and introduces the concepts of zero-stability, consistency, and convergence. The significance of these properties cannot be overemphasized: the failure of any of the three will render the linear multistep method practically useless.
3.1. Zero-Stability

As is clear from (2.2), we need k starting values y_0, …, y_{k−1} before we can apply a linear k-step method to the initial value problem (1.1), (1.2). Of these, y_0 is given by the initial condition (1.2), but the others, y_1, …, y_{k−1}, have to be computed by other means: say, by using a one-step method (Euler, Runge-Kutta, or Taylor method). At any rate, the starting values will contain numerical errors, and it is important to know how these will affect further approximations y_n, n ≥ k, which are calculated by means of (2.2). Thus, we wish to consider the stability of the numerical method with respect to small perturbations in the starting conditions.
We are interested in the behavior of linear multistep methods as h → 0. In this limit, the right-hand side of the formula for the generic multistep method,

Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f(x_{n+j}, y_{n+j}),

makes a negligible contribution. This motivates our consideration of the trivial model problem y′(x) = 0 with y(0) = 0. Does the linear multistep method recover the exact solution y(x) = 0? When y′(x) = 0, clearly we have f_{n+j} = 0 for all j. The condition α_k ≠ 0 allows us to write

y_k = −(α_0 y_0 + α_1 y_1 + … + α_{k−1} y_{k−1})/α_k.

Hence, if the method is started with exact data y_0 = y_1 = … = y_{k−1} = 0, then y_k = 0, and this pattern will continue: y_{k+1} = 0, y_{k+2} = 0, …. Any linear multistep method with exact starting data produces the exact solution for this special problem, regardless of the step size.
Of course, for more complicated problems it is unusual to have the exact starting values y_1, y_2, …, y_{k−1}; typically, these values are only approximate, obtained from some high-order one-step ODE solver or from an asymptotic expansion of the solution that is accurate in a neighborhood of x_0. To discover how multistep methods behave, we must first understand how these errors in the initial data pollute future iterations of the linear multistep method.
Definition: a linear k-step method (for the ordinary differential equation y′ = f(x, y)) is said to be zero-stable if there exists a constant K such that, for any two sequences (y_n) and (z_n) that have been generated by the same formula but different starting values y_0, y_1, …, y_{k−1} and z_0, z_1, …, z_{k−1}, respectively, we have

|y_n − z_n| ≤ K max{|y_0 − z_0|, |y_1 − z_1|, …, |y_{k−1} − z_{k−1}|} (3.1)

for x_n ≤ X_M, as h tends to 0. More plainly, a method is zero-stable for a particular problem if errors in the starting values are not magnified in an unbounded fashion. Let us first consider a particular example.
The truncation error formulas from the previous chapter can be used to derive a variety of linear multistep methods that satisfy a given order of truncation error. One can use those conditions to verify that the explicit two-step method

y_{k+2} = 2y_k − y_{k+1}

is second-order accurate. As seen above, this method produces the exact solution if given initial data y_0 = y_1 = 0. But what if y_0 = 0 and y_1 = ε for some ε > 0? The method produces the iterates

y_2 = 2y_0 − y_1 = 2·0 − ε = −ε
y_3 = 2y_1 − y_2 = 2ε − (−ε) = 3ε
y_4 = 2y_2 − y_3 = 2(−ε) − 3ε = −5ε
y_5 = 2y_3 − y_4 = 2(3ε) − (−5ε) = 11ε
y_6 = 2y_4 − y_5 = 2(−5ε) − 11ε = −21ε
y_7 = 2y_5 − y_6 = 2(11ε) − (−21ε) = 43ε
y_8 = 2y_6 − y_7 = 2(−21ε) − 43ε = −85ε
In just seven steps, the error has been multiplied 85-fold. The error is roughly doubling at each step, and before long the approximate solution is complete garbage. This is illustrated in the plot on the left below, which shows the evolution of y_k when h = 0.1 and ε = 0.01.
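The iterates above are easy to reproduce. A small sketch (not from the report; taking ε = 1 keeps the iterates as exact integers):

```python
# Iterate y_{k+2} = 2*y_k - y_{k+1} starting from y0 = 0, y1 = eps.
eps = 1
y = [0, eps]
for _ in range(7):
    y.append(2 * y[-2] - y[-1])
# The error roughly doubles, with alternating sign, at every step.
assert y == [0, 1, -1, 3, -5, 11, -21, 43, -85]
```

Note that the step size h never enters the recurrence, which is exactly the point made in the next paragraph.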
There is another quirk. When applied to this particular model problem, the linear multistep method reduces to Σ_{j=0}^{k} α_j y_{n+j} = 0, and thus never incorporates the step size h. Hence the error at some fixed time x_final = hn gets worse as h gets smaller and n grows accordingly! The figure on the right below illustrates this fact, showing y_k over x ∈ [0, 1] for three different values of h. Clearly the smallest h leads to the most rapid error growth.

Though this method has second-order local (truncation) error, it blows up if fed incorrect initial data for y_1. Decreasing h can magnify this effect: for linear multistep methods, consistency (that is, T_n → 0 as h → 0) is not sufficient to ensure convergence, as it is for one-step methods.
Proving zero-stability directly from the above definition would be a chore, so the following theorem gives an easy technique for determining whether a particular linear multistep method is zero-stable or not.
Theorem (root condition): A linear multistep method is zero-stable for an initial value problem of the form (1.1), (1.2), where f satisfies the hypotheses of Picard's theorem, if, and only if, all roots of the first characteristic polynomial of the method lie in the closed unit disk in the complex plane, with any which lie on the unit circle being simple.

Note that, given the linear k-step method (2.2), we define the first and second characteristic polynomials, respectively, as

ρ(z) = Σ_{j=0}^{k} α_j z^j,   σ(z) = Σ_{j=0}^{k} β_j z^j.
Proof (Necessity). Applied to the trivial problem y′ = 0, the method reduces to the linear recurrence

α_k y_{n+k} + … + α_1 y_{n+1} + α_0 y_n = 0. (3.2)

According to Lemma (1.1), every solution of this kth-order linear recurrence relation has the form

y_n = Σ_{r=1}^{l} p_r(n) z_r^n,

where the z_r are the distinct roots of ρ(z) and p_r(n) is a polynomial of degree m_r − 1. If some root has |z_r| > 1, or some repeated root has |z_r| = 1, the corresponding terms grow without bound as n → ∞, so small perturbations of the starting values are magnified in an unbounded fashion and the method cannot be zero-stable.

(Sufficiency.) The proof that the root condition is sufficient for zero-stability is long and technical, and will be omitted here.
Example. The Euler method and the implicit Euler method both have first characteristic polynomial ρ(z) = z − 1, with the simple root z = 1, so both methods are zero-stable. On the other hand, the three-step method

11y_{n+3} + 27y_{n+2} − 27y_{n+1} − 11y_n = 3h[f_{n+3} + 9f_{n+2} + 9f_{n+1} + f_n]

is not zero-stable. Indeed, the corresponding first characteristic polynomial ρ(z) = 11z³ + 27z² − 27z − 11 has roots z_1 = 1, z_2 ≈ −0.32, z_3 ≈ −3.14, so |z_3| > 1.
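The root condition is straightforward to check numerically. The following sketch (my own, assuming NumPy is available) tests both examples above:

```python
import numpy as np

def is_zero_stable(alpha):
    """Check the root condition for rho(z) = sum_j alpha[j] * z**j.

    alpha is the coefficient list (alpha_0, ..., alpha_k). All roots must
    satisfy |z| <= 1, and any roots on the unit circle must be simple.
    """
    roots = np.roots(alpha[::-1])          # np.roots expects highest degree first
    tol = 1e-9
    if any(abs(z) > 1 + tol for z in roots):
        return False
    # Roots on the unit circle must be simple (no near-coincident pair).
    on_circle = [z for z in roots if abs(abs(z) - 1) <= tol]
    for i, z in enumerate(on_circle):
        if any(abs(z - w) <= tol for w in on_circle[i + 1:]):
            return False
    return True

# Euler's method: rho(z) = z - 1, simple root at z = 1.
assert is_zero_stable([-1, 1])
# The three-step method above: rho(z) = 11 z^3 + 27 z^2 - 27 z - 11
# has a root near z = -3.14, outside the unit disk.
assert not is_zero_stable([-11, -27, 27, 11])
```

The tolerance here is a pragmatic choice; deciding whether a computed root lies exactly on the unit circle is inherently delicate in floating point.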
3.2. Consistency

In this section we consider the accuracy of the linear k-step method (2.2). For this purpose, as in the case of one-step methods, we introduce the notion of truncation error. Recall that the truncation error of a one-step method of the form y_{n+1} = y_n + hφ(x_n, y_n; h) was given by

T_n = (y(x_{n+1}) − y(x_n))/h − φ(x_n, y(x_n); h).

Analogously, the truncation error of the linear k-step method (2.2) is defined as

T_n = [Σ_{j=0}^{k} (α_j y(x_{n+j}) − hβ_j f(x_{n+j}, y(x_{n+j})))] / [h Σ_{j=0}^{k} β_j]. (3.3)

Of course, the definition implicitly requires that σ(1) = Σ_{j=0}^{k} β_j ≠ 0; this is the normalization term. If it were absent, multiplying the entire multistep formula by a constant would alter the truncation error but not the iterates y_j. Again, as in the case of one-step methods, the truncation error can be thought of as the residual obtained by inserting the solution of the differential equation into the formula (2.2) and scaling this residual appropriately (in this case dividing through by h Σ_{j=0}^{k} β_j), so that T_n resembles y′ − f(x, y(x)).
Definition: the numerical method (2.2) is said to be consistent with the differential equation (1.1) if the truncation error defined by (3.3) is such that for any ε > 0 there exists h(ε) for which

|T_n| < ε for 0 < h < h(ε)

and for any k + 1 points (x_n, y(x_n)), …, (x_{n+k}, y(x_{n+k})) on any solution curve in D of the initial value problem (1.1), (1.2).
Now, let us suppose that the solution of the differential equation is sufficiently smooth, and let us expand the expressions y(x_{n+j}) and f(x_{n+j}, y(x_{n+j})) into Taylor series about the point x_n. Substituting these expansions into the numerator of (3.3), we obtain

T_n = (1/(hσ(1)))[c_0 y(x_n) + c_1 h y′(x_n) + c_2 h² y″(x_n) + …], (3.4)

where

c_0 = Σ_{j=0}^{k} α_j,
c_1 = Σ_{j=1}^{k} jα_j − Σ_{j=0}^{k} β_j,
c_2 = Σ_{j=1}^{k} (j²/2!)α_j − Σ_{j=0}^{k} jβ_j,
…
c_q = Σ_{j=1}^{k} (j^q/q!)α_j − Σ_{j=1}^{k} (j^{q−1}/(q − 1)!)β_j. (3.5)

For consistency we need that, as h → 0 and n → ∞ with x_n → x ∈ [x_0, X_M], the truncation error T_n tends to 0. This requires that c_0 = 0 and c_1 = 0 in (3.4).
Definition: the numerical method (2.2) is said to have order of accuracy p if p is the largest positive integer such that, for any sufficiently smooth solution curve in D of the initial value problem (1.1), (1.2), there exist constants K and h_0 such that |T_n| ≤ Kh^p for 0 < h ≤ h_0.

Thus, we deduce from (3.4) that the method is of order of accuracy p if, and only if, c_0 = c_1 = … = c_p = 0 and c_{p+1} ≠ 0. In this case,

T_n = (c_{p+1}/σ(1)) h^p y^(p+1)(x_n) + O(h^{p+1}).
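The coefficients c_q of (3.5) are easy to evaluate mechanically. A sketch (my own, not from the report), applied here to the two-step Adams-Bashforth method as an illustration:

```python
from fractions import Fraction
from math import factorial

def c(q, alpha, beta):
    """Coefficient c_q from (3.5) for a k-step method with coefficient lists
    alpha = (alpha_0, ..., alpha_k) and beta = (beta_0, ..., beta_k)."""
    k = len(alpha) - 1
    if q == 0:
        return sum(alpha)
    return (sum(Fraction(j ** q, factorial(q)) * alpha[j] for j in range(1, k + 1))
            - sum(Fraction(j ** (q - 1), factorial(q - 1)) * beta[j] for j in range(k + 1)))

# Two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n).
alpha = [0, -1, 1]
beta = [Fraction(-1, 2), Fraction(3, 2), 0]
assert c(0, alpha, beta) == 0                 # consistency: c_0 = 0
assert c(1, alpha, beta) == 0                 # consistency: c_1 = 0
assert c(2, alpha, beta) == 0                 # order at least 2 ...
assert c(3, alpha, beta) == Fraction(5, 12)   # ... but not order 3
```

So the two-step Adams-Bashforth method has order of accuracy 2, with error constant c_3 = 5/12, consistent with the general result that the k-step Adams-Bashforth method has order k.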
Example. For Euler's method, regarded as the one-step linear multistep method y_{n+1} − y_n = hf(x_n, y_n) (so α_0 = −1, α_1 = 1, β_0 = 1, β_1 = 0), clearly c_0 = α_0 + α_1 = −1 + 1 = 0 and c_1 = 0·α_0 + 1·α_1 − (β_0 + β_1) = 0. When we analyzed this algorithm as a one-step method, we saw it had T_n = O(h); we expect the same result from this multistep analysis. Indeed,

c_2 = (1/2!)(0²α_0 + 1²α_1) − (0·β_0 + 1·β_1) = 1/2 ≠ 0.

Thus, T_n = O(h). For the trapezium rule (β_0 = β_1 = 1/2) we have instead c_2 = 0 and

c_3 = (1/3!)(0³α_0 + 1³α_1) − (1/2!)(0²β_0 + 1²β_1) = 1/6 − 1/4 ≠ 0,

so the trapezium rule is of order of accuracy 2.
Theorem (Dahlquist's equivalence theorem): For a linear k-step method that is consistent with the ordinary differential equation (1.1), where f satisfies a Lipschitz condition, and with consistent starting values, zero-stability is necessary and sufficient for convergence. Moreover, if the solution y has continuous derivatives of order p + 1 and the truncation error of the method is O(h^p), then the global error e_n = y(x_n) − y_n is also O(h^p).

Theorem (Dahlquist's barrier theorem): The order of accuracy of a zero-stable k-step method cannot exceed k + 1 if k is odd, or k + 2 if k is even.
SUMMARY

In general, multistep methods are efficient and require less computer time than the corresponding one-step methods. Among the different kinds of multistep methods, the best known and the most used in practical problems are the explicit and implicit Adams families.

One-step methods, such as those of Runge-Kutta type, do not exhibit any numerical instability for h sufficiently small. Multistep methods may, in some cases, be unstable for all values of h. Moreover, unlike for one-step methods, consistency is not a sufficient condition for convergence; an additional property, called zero-stability, is needed.
REFERENCES

[1]. Endre Süli and David Mayers, 2003. "An Introduction to Numerical Analysis", Cambridge University Press, UK.
[2]. Grewal, B.S., 2002. "Numerical Methods in Engineering and Science: Programs in FORTRAN 77, C, C++", Khanna Publishers, 6th edition.
[3]. Jain, M.K., S.R.K. Iyengar and R.K. Jain, 2007. "Numerical Methods for Scientific and Engineering Computation", New Age International Publishers, 5th edition.
[4]. Richard L. Burden and J. Douglas Faires, 2005. "Numerical Analysis", Brooks/Cole, Pacific Grove, 5th edition.
[5]. Sastry, S.S., 2003. "Introductory Methods of Numerical Analysis", Prentice-Hall of India, New Delhi, 3rd edition.