
Department of Mathematics, PESCE, Mandya Lab Manual-P22MAXX201, Experiment-8

Experiment-8: Solution of ODE of first order and first degree by Runge-Kutta 4th order method and Milne’s Predictor and Corrector method

Objectives:
1. To write a Python program to solve a first order differential equation using the 4th order Runge-Kutta method.
2. To write a Python program to solve a first order differential equation using Milne’s predictor and corrector method.

Remarks:
Runge-Kutta 4th order method (RK4):
Recall that Taylor’s series method for solving ordinary differential equations (ODEs) requires computing higher-order derivatives. To avoid this, we use the Runge-Kutta fourth order method, which provides an approximate value of y at a given point x. Consider the one-dimensional initial value problem:

y′ = f(x, y),   y(x0) = y0,

where f is a function of the two variables x and y, and (x0, y0) is a known point on the solution curve.

The formula for the fourth-order Runge-Kutta method is:

y1 = y0 + (k1 + 2k2 + 2k3 + k4)/6,

where
k1 = h f(x0, y0)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h/2, y0 + k2/2)
k4 = h f(x0 + h, y0 + k3).

One can see that in each step the derivative is evaluated four times: once at the initial point, twice at trial midpoints, and once at a trial endpoint. From these four slope estimates the function value at the end of the step is calculated.
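
As a quick arithmetic check (not part of the manual’s code), the first RK4 step of Example-1 below, where f(x, y) = 1 + y/x, y(1) = 2 and h = 0.2, can be worked out explicitly; the result agrees with the first value printed by the program.

# One explicit RK4 step for f(x, y) = 1 + y/x with x0 = 1, y0 = 2, h = 0.2
# (the first step of Example-1 below), as a hand check of the formulas above.
f = lambda x, y: 1 + y/x
x0, y0, h = 1, 2, 0.2
k1 = h*f(x0, y0)                 # 0.600000
k2 = h*f(x0 + h/2, y0 + k1/2)    # 0.618182
k3 = h*f(x0 + h/2, y0 + k2/2)    # 0.619835
k4 = h*f(x0 + h, y0 + k3)        # 0.636639
y1 = y0 + (k1 + 2*k2 + 2*k3 + k4)/6
print(y1)                        # 2.618779..., matching y(1.2) in Example-1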

Advantages of RK4:

1. Accuracy: the method gives more accurate results than simpler numerical methods because it evaluates the slope at several intermediate points and combines them in a weighted average.
2. Versatility: the method can be applied to a wide range of ODEs, including both first order and higher order differential equations. It is not limited to specific types of equations and can handle various initial conditions.
3. Efficiency: despite its higher accuracy, the method remains computationally efficient. It strikes a balance between accuracy and computational cost, making it a reliable choice for solving ODEs in scientific and engineering applications.
4. Stability: the method has good stability properties, ensuring that the numerical solution remains bounded and does not diverge significantly. This stability is crucial for obtaining reliable results over a wide range of steps.

*****
Example-1: Apply the Runge-Kutta method to find the solution of
dy/dx = 1 + y/x at y(2), taking h = 0.2. Given that y(1) = 2.

Python Code:
from sympy import *
import numpy as np

def RungeKutta(g, x0, h, y0, xn):
    # g is the right-hand side f(x, y) given as a string, e.g. '1+y/x'
    x, y = symbols('x,y')
    f = lambdify([x, y], g)
    print('Result by RK4 method:')
    print('y( %3.3f' % x0, ') = %3.3f ' % y0)
    xt = x0 + h
    Y = [y0]
    while xt <= xn:
        # four slope evaluations per step
        k1 = h*f(x0, y0)
        k2 = h*f(x0 + h/2, y0 + k1/2)
        k3 = h*f(x0 + h/2, y0 + k2/2)
        k4 = h*f(x0 + h, y0 + k3)
        y1 = y0 + (1/6)*(k1 + 2*k2 + 2*k3 + k4)
        Y.append(y1)
        print('y( %3.6f' % xt, ') = %3.6f ' % y1)
        x0 = xt
        y0 = y1
        xt = xt + h
    return np.round(Y, 2)

RungeKutta('1+y/x', 1, 0.2, 2, 2)

Output:
Result by RK4 method:
y( 1.000 ) = 2.000
y( 1.200000 ) = 2.618779
y( 1.400000 ) = 3.271049
y( 1.600000 ) = 3.951990
y( 1.800000 ) = 4.657997
y( 2.000000 ) = 5.386272
array([2. , 2.62, 3.27, 3.95, 4.66, 5.39])
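
As an optional sanity check (not part of the manual’s code), the ODE of Example-1 is linear, so sympy’s dsolve can produce its exact solution, y = x(ln x + 2), which can be compared with the RK4 value at x = 2.

# Optional check of Example-1 against the exact solution of y' = 1 + y/x, y(1) = 2.
from sympy import symbols, Function, Eq, dsolve

x = symbols('x')
y = Function('y')
exact = dsolve(Eq(y(x).diff(x), 1 + y(x)/x), y(x), ics={y(1): 2})
print(exact)                         # y(x) = x*log(x) + 2*x (possibly in factored form)
print(exact.rhs.subs(x, 2).evalf())  # about 5.386294, close to the RK4 value 5.386272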

Exercise:

Milne’s predictor and corrector method (MPCM):

The idea behind predictor-corrector methods is to use a suitable combination of an explicit and an implicit technique to obtain a method with better convergence characteristics. To start a predictor-corrector method, four prior values are required for finding the value of y at a given x. These four values may be given, or they have to be obtained from the initial condition by other known methods. A predictor formula is used to predict the value of y at x, and then a corrector formula is applied to improve this value.

Working Rule:
Consider the initial value problem (IVP) y′ = f(x, y) with a set of known points
y(x0) = y0, y(x1) = y1, y(x2) = y2, y(x3) = y3,
where x0, x1, x2, x3 are equally spaced with spacing h. Write y′i = f(xi, yi) for the derivative values.

To find y4 at the point x4:

Milne’s Predictor formula: y4p = y0 + (4h/3)(2y′1 − y′2 + 2y′3)

Milne’s Corrector formula: y4c = y2 + (h/3)(y′2 + 4y′3 + y′4), where y′4 = f(x4, y4p)

Now, to improve the accuracy, we apply the corrector formula again, recomputing y′4 from the latest corrected value each time.
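
To see the predictor formula in action, here is a quick hand check (not part of the manual’s code) using the data of Example-2 below, where f(x, y) = x² + y/2 and h = 0.1; d1, d2, d3 stand for the derivative values y′1, y′2, y′3.

# Hand check of Milne's predictor with the Example-2 data, f(x, y) = x**2 + y/2.
f = lambda x, y: x**2 + y/2
h = 0.1
d1 = f(1.1, 2.2156)   # y'1 = 2.31780
d2 = f(1.2, 2.4649)   # y'2 = 2.67245
d3 = f(1.3, 2.7514)   # y'3 = 3.06570
y4p = 2 + (4*h/3)*(2*d1 - d2 + 2*d3)
print(y4p)            # about 3.079273, the predicted value reported in Example-2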

Note: Predictor-corrector methods use the information gained from the previous n steps to predict what the state of the system will be at the end of the next step. They use the predicted value to evaluate the derivative at the end of the step. A final evaluation of the derivative is then made using the final state values. This is done to further ensure the stability of ensuing predictions.
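
The examples that follow apply the corrector a fixed number of times (thrice). An alternative, sketched below under the assumption of a user-chosen tolerance, is to repeat the corrector until two successive corrected values agree; the helper name milne_correct is hypothetical and not part of the manual’s programs.

# Sketch: iterate Milne's corrector until successive values agree to a tolerance.
# milne_correct is a hypothetical helper, not part of the lab programs.
def milne_correct(f, x2, y2, x3, y3, x4, y4p, h, tol=1e-8, max_iter=20):
    d2, d3 = f(x2, y2), f(x3, y3)   # y'2 and y'3
    y4 = y4p                        # start from the predicted value
    for _ in range(max_iter):
        y4_new = y2 + (h/3)*(d2 + 4*d3 + f(x4, y4))
        if abs(y4_new - y4) < tol:
            return y4_new
        y4 = y4_new
    return y4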

Example-2: Apply Milne’s predictor and corrector method to solve
dy/dx = x² + y/2 at y(1.4). Given that y(1) = 2, y(1.1) = 2.2156, y(1.2) = 2.4649, y(1.3) = 2.7514.
Use the corrector formula thrice.

Python Code:
# Milne's method to solve a first order DE
# Use the corrector formula thrice
x0 = 1
y0 = 2
y1 = 2.2156
y2 = 2.4649
y3 = 2.7514
h = 0.1
x1 = x0 + h
x2 = x1 + h
x3 = x2 + h
x4 = x3 + h

def f(x, y):
    return x**2 + (y/2)

# derivative values y'_i = f(x_i, y_i)
y10 = f(x0, y0)
y11 = f(x1, y1)
y12 = f(x2, y2)
y13 = f(x3, y3)
# predictor step
y4p = y0 + (4*h/3)*(2*y11 - y12 + 2*y13)
print('Predicted value of y4 is %3.3f' % y4p)
y14 = f(x4, y4p)
# apply the corrector three times, updating y'_4 each time
for i in range(1, 4):
    y4 = y2 + (h/3)*(y14 + 4*y13 + y12)
    print('Corrected value of y4 after \t iteration %d is \t %3.5f' % (i, y4))
    y14 = f(x4, y4)

Output:
Predicted value of y4 is 3.079
Corrected value of y4 after iteration 1 is 3.07940
Corrected value of y4 after iteration 2 is 3.07940
Corrected value of y4 after iteration 3 is 3.07940

Example-3: Apply Milne’s predictor and corrector method to solve
dy/dx = x² + y/2 at y(1.4). Given that y(1) = 2, y(1.1) = 2.2156, y(1.2) = 2.4649, y(1.3) = 2.7514.
Use the corrector formula thrice. (This is the same problem as Example-2, now solved with a reusable Python function.)

Python Code:
from sympy import *

def Milne(g, x0, h, y0, y1, y2, y3):
    # g is the right-hand side f(x, y) given as a string, e.g. 'x**2+y/2'
    x, y = symbols('x,y')
    f = lambdify([x, y], g)
    x1 = x0 + h
    x2 = x1 + h
    x3 = x2 + h
    x4 = x3 + h
    # derivative values y'_i = f(x_i, y_i)
    y10 = f(x0, y0)
    y11 = f(x1, y1)
    y12 = f(x2, y2)
    y13 = f(x3, y3)
    # predictor step
    y4p = y0 + (4*h/3)*(2*y11 - y12 + 2*y13)
    print('Predicted value of y4', y4p)
    y14 = f(x4, y4p)
    # apply the corrector three times, updating y'_4 each time
    for i in range(1, 4):
        y4 = y2 + (h/3)*(y14 + 4*y13 + y12)
        print('Corrected value of y4, iteration %d' % i, y4)
        y14 = f(x4, y4)

Milne('x**2+y/2', 1, 0.1, 2, 2.2156, 2.4649, 2.7514)

Output:
Predicted value of y4 3.0792733333333335
Corrected value of y4, iteration 1 3.0793962222222224
Corrected value of y4, iteration 2 3.079398270370371
Corrected value of y4, iteration 3 3.079398304506173

Example-4: Apply Milne’s predictor and corrector method to solve
dy/dx = x − y², y(0) = 0; obtain y(0.8). Take h = 0.2. Use the Runge-Kutta method to calculate the required initial values.

Python Code:
(After executing RK4 and MPCM codes, type and execute the below python code)

Y = RungeKutta('x-y**2', 0, 0.2, 0, 0.8)
print('y values from Runge-Kutta method:', Y)
Milne('x-y**2', 0, 0.2, Y[0], Y[1], Y[2], Y[3])


Output:
Result by RK4 method:
y( 0.000 ) = 0.000
y( 0.200000 ) = 0.019980
y( 0.400000 ) = 0.079484
y( 0.600000 ) = 0.176204
y( 0.800000 ) = 0.304589
y values from Runge-Kutta method: [0. 0.02 0.08 0.18 0.3 ]
Predicted value of y4 0.3042133333333334
Corrected value of y4, iteration 1 0.3047636165214815
Corrected value of y4, iteration 2 0.3047412758696499
Corrected value of y4, iteration 3 0.3047421836520892
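
Note that RungeKutta returns np.round(Y, 2), so Milne receives starting values rounded to two decimals here, which is partly why the corrected y(0.8) above differs from the RK4 value 0.304589 in the fourth decimal. A minimal sketch (an assumption, not part of the manual’s programs) that feeds Milne full-precision starting values instead:

# Sketch: recompute the RK4 starting values at full precision (no rounding)
# and pass them to Milne; this variant is an assumption, not part of the lab code.
def f(x, y):
    return x - y**2

h = 0.2
ys = [0.0]                    # y(0) = 0
for x0 in [0.0, 0.2, 0.4]:    # RK4 steps to x = 0.2, 0.4, 0.6
    y0 = ys[-1]
    k1 = h*f(x0, y0)
    k2 = h*f(x0 + h/2, y0 + k1/2)
    k3 = h*f(x0 + h/2, y0 + k2/2)
    k4 = h*f(x0 + h, y0 + k3)
    ys.append(y0 + (k1 + 2*k2 + 2*k3 + k4)/6)

Milne('x-y**2', 0, 0.2, ys[0], ys[1], ys[2], ys[3])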

Exercise:

*****
Indian-American mathematician C. R. Rao to be awarded International Prize in Statistics at 102

Calyampudi Radhakrishna Rao, a prominent Indian-American mathematician and statistician, will receive the 2023 International Prize in Statistics, the equivalent of the Nobel Prize in the field, for his monumental work done 75 years ago that revolutionised statistical thinking. Rao's work, more than 75 years ago, continues to exert a profound influence on science. Rao, who is now 102, will receive the prize, which comes with a USD 80,000 award, this July at the biennial International Statistical Institute World Statistics Congress in Ottawa, Ontario, Canada.

In his remarkable 1945 paper published in the Bulletin of the Calcutta Mathematical Society, Rao demonstrated three fundamental results that paved the way for the modern field of statistics and provided statistical tools heavily used in science today.

Rao in his career has received many honours; he was awarded the title of
Padma Bhushan (1968) and Padma Vibhushan (2001) by the Indian Government. The
International Prize in Statistics is awarded every two years by a
collaboration among five leading international statistics organisations. The
prize recognises a major achievement by an individual or team in the
statistics field, particularly an achievement of powerful and original ideas
that have led to practical applications and breakthroughs in other
disciplines. The prize is modelled after the Nobel prizes, Abel Prize, Fields
Medal and Turing Award. The first International Prize in Statistics was
awarded in 2017.

*****
