
Volume 7, Issue 3, March – 2022 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Solving First Order Ordinary Differential Equations using Least Square Method: A Comparative Study
Parth Singh Pawar, Dhananjay R. Mishra, Pankaj Dumka *
Department of Mechanical Engineering, Jaypee University of Engineering and Technology, Guna-473226, India

Abstract:- In this research article an attempt has been made to examine the performance of the Finite difference method (FDM) and the Least square method (LSM) on the solution of first order ordinary differential equations (ODEs). Both FDM and LSM are applied to the test problem and the results thus obtained are compared with the exact solution. It has been observed that for a third degree basis function the results of LSM are very close to the analytical result. On further increasing the degree, the improvement in the result is very meagre, and N = 3 can be considered the optimum solution for LSM. It has also been observed that FDM is very sensitive to the number of grid points and deviates from the exact result by a substantive amount for a lower number of nodes, whereas LSM is independent of the number of nodes.

Keywords:- Least Square Method; Finite Difference Method; Ordinary Differential Equation; Python; Optimization.

Nomenclature:
x        independent variable
y        dependent variable
i        degree of basis function
j        node index
D        differential operator
w_i      weight
R        residue
N        number of weights
E        squared residual
y_exact  exact solution
y~       approximate function
phi_i    basis function
Δx       spacing between nodes
LSM      Least Square Method
FDM      Finite Difference Method
ADM      Adomian Decomposition Method
HPM      Homotopy Perturbation Method

I. INTRODUCTION

At the very core of every physical phenomenon lies some sort of mathematical relation in the form of a differential equation [1]. The behaviour of these equations can be linear or non-linear [2], so several researchers have devised many methods to solve them [3]. Until very recently the solution of differential equations was mostly dominated by numerical computation, but lately analytical methods have gained popularity [4]. For solving differential equations semi-analytically, the Adomian Decomposition Method (ADM) has been adopted by researchers [5], [6]. For non-linear equations, the Homotopy Perturbation Method (HPM) and the Variational Iteration Method (VIM) are good choices [7]. The perturbation method is another option, but due to its drawbacks it is not frequently adopted for the solution of ODEs [8]. The problems associated with linear stability in solving differential equations have been discussed by Hajmohammadi and Nourazar [9].

Weighted residual-based methods are approximation techniques which are also adopted to solve differential equations. Ozisik introduced the Least square method (LSM) and Galerkin techniques, which are based on weighted residuals [10]. The solution of a third order differential equation based on the collocation method has been introduced by Stern and Rasmussen [11].

The Finite difference method (FDM) is a very old method for solving differential equations [12]. It is based on the Taylor series expansion [13]. In this method the domain is divided into nodes and the differential equation is discretized at each node.

In this research article, an LSM based solution of a first order linear differential equation is reported. FDM has also been applied to the problem, and the results of both methods are compared with the exact analytical solution. This will dictate the accuracy of LSM over FDM.

II. PROBLEM STATEMENT

In this research article we will focus on the following one dimensional ordinary differential equation (ODE):

    dy/dx - y = 0                                   (1)

where 0 ≤ x ≤ 1.

As this is a first order ODE, it has only one boundary condition. Let us say that when x = 0 the value of y = 1 (viz. y(0) = 1). The exact solution of Eq. 1 is e^x, which will act as a benchmark for our optimization solution.

III. LEAST SQUARE METHOD (LSM)

The Least square method is one of the methods based on the minimization of weighted residuals. In this method a trial function is introduced into the parent differential equation and the residue is minimized. Let us consider a boundary value problem as follows:

    D(y) - f(x) = 0                                 (2)

where D is the differential operator.
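Before setting up the LSM machinery, the exact solution quoted in Section II can be verified symbolically. A minimal sketch using sympy (not part of the paper's own code):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x)                      # candidate exact solution of Eq. 1

residual = sp.diff(y, x) - y       # left-hand side of Eq. 1, dy/dx - y
print(sp.simplify(residual))       # prints 0: e**x satisfies the ODE
print(y.subs(x, 0))                # prints 1: the boundary condition holds
```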

IJISRT22MAR1022 www.ijisrt.com 857


To start the LSM, it is assumed that the dependent variable y is estimated by an approximation function y~ which is composed of coefficients/weights (w_i) and basis functions (phi_i) [14]. The basis functions are picked from a linearly independent set of functions in the projection space. y~ can be written as:

    y~ = Σ_{i=1}^{N} w_i phi_i                      (3)

where i varies from 1 to N. Now the target is to obtain w_i by the least squares mechanism as follows.

Let y_exact be the exact solution of the differential Eq. 2, i.e., y_exact once substituted into Eq. 2 will result in zero (as shown in Eq. 4):

    D(y_exact) - f(x) = 0                           (4)

Whereas if we substitute y~ into Eq. 2 the result will not be zero, as this is not the exact solution. This non-zero value which Eq. 2 returns when the approximate solution is plugged into it is what we call the residue (R), which can be written as:

    R(x, y~) = D(y~) - f(x) ≠ 0                     (5)

Now the concept of LSM is to make the residue tend to zero by minimizing the error function in the L2 norm, so that the weight coefficients can be evaluated as follows:

    E = ∫_x R²(x, y~) dx                            (6)

The optimum solution is obtained once E is minimized, viz.:

    ∂E/∂w_i = ∂/∂w_i ∫_x R²(x, y~) dx = 0           (7)

As i varies from 1 to N, Eq. 7 results in N linear equations which can be solved for the N unknowns w_1, w_2, …, w_N. Proper choice of phi_i is essential in LSM, so for the problem at hand we will go for a polynomial function. The choice should be such that the approximate solution satisfies the boundary conditions.

 LSM applied to the problem
Applying the algorithm discussed in Section III to Eq. 1 results in the following five steps:

 As the problem is of first order, a polynomial basis function is chosen for the guess solution:

    y~ = Σ_{i=1}^{N} w_i x^i + y_0                  (8)

 As already mentioned, the approximate solution should satisfy the boundary condition, and the boundary condition y(0) = 1 can only be satisfied if y_0 = 1.

 The expression for the residue is developed by plugging y~ from Eq. 8 into Eq. 1:

    R(x, y~) = d/dx (Σ_{i=1}^{N} w_i x^i + y_0) - (Σ_{i=1}^{N} w_i x^i + y_0)   (9)

 The squared error is minimized:

    ∂E/∂w_i = 2 ∫_{x=0}^{x=1} R(x) ∂R(x)/∂w_i dx = 0,   i = 1, …, N             (10)

 The resulting linear equations are solved for different values of N and compared with the exact solution.

In this research article Python has been used to solve the problem symbolically. N is varied from 1 to 5 to check the results for different simulations. The Appendix shows the Python code developed for the evaluation of the different weights and for data plotting. The approximate functions thus obtained for different N are enumerated in Table 1.

Table 1: Approximate solutions for different N.

N   Approximate solution
1   y~ = (3/2)x + 1
2   y~ = (70/83)x² + (72/83)x + 1
3   y~ = (2485/8884)x³ + (945/2221)x² + (2250/2221)x + 1
4   y~ = (126126/1810709)x⁴ + (254240/1810709)x³ + (921942/1810709)x² + (1809000/1810709)x + 1
5   y~ = (4178559/300698723)x⁵ + (10498950/300698723)x⁴ + (51177210/300698723)x³ + (150115840/300698723)x² + (601429185/601397446)x + 1

IV. FINITE DIFFERENCE METHOD (FDM) AND ITS APPLICATION

In the finite difference method the domain is divided into n discrete points. The governing equation is discretised, and the discretised equation is evaluated at each grid point. In FDM the Taylor series expansion is used to evaluate the derivatives. As the problem at hand is a first order ODE, the backward difference is used to discretize the governing equation.

Let f(x) be the function whose derivative is to be evaluated at some jth location; then the Taylor series can be written as:

    f(x - Δx) = f(x) - (df/dx)Δx + (d²f/dx²)(Δx²/2!) - (d³f/dx³)(Δx³/3!) + ⋯    (11)

where Δx is the distance between two grid points. Now the first derivative can be written as:

    df/dx = [f(x) - f(x - Δx)]/Δx + O(Δx)           (12)

The above equation is the backward difference approximation of the first derivative, and it is first order accurate.

 FDM applied to the problem
Applying the above-mentioned method to Eq. 1 we get:

    dy/dx - y = [y(x) - y(x - Δx)]/Δx - y(x) = 0    (13)

In computation, x corresponds to the jth node and (x - Δx) to the node just before it, viz. (j - 1). Eq. 13 then becomes:

    (y_j - y_{j-1})/Δx - y_j = 0                    (14)

On further simplification Eq. 14 becomes:

    y_j = y_{j-1}/(1 - Δx)                          (15)

Eq. 15 is the finite difference discretization of Eq. 1, which is to be solved for each node in the domain.

V. RESULTS AND DISCUSSION

The objective of this study is to evaluate the performance of LSM on the solution of a first order ODE and to compare it with the finite difference (Euler) method. Figure 1(a) shows the variation of the LSM output, the exact solution, and the error as a function of domain length for different values of N. For N = 1 the LSM returns a linear relation between x and y, which is not what the exact solution shows; hence the large deviation and error. For N = 2 (Figure 1(b)) the LSM results are close to the exact solution, though a little error remains. When N is increased to 3 (Figure 1(c)), one can see that the error has reduced greatly in comparison to N = 1 and 2. And for N = 4 and 5 (Figures 1(d) and 1(e)) the error has almost reached zero.
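The first row of Table 1 can be reproduced in a few lines by carrying out the minimization of Eqs. 6 and 7 for a single weight. A minimal sympy sketch (the variable names are ours, not from the paper):

```python
from sympy import symbols, diff, integrate, solve, Rational

x, w1 = symbols('x w1')
y_t = w1*x + 1                     # N = 1 trial solution satisfying y(0) = 1
R = diff(y_t, x) - y_t             # residue of Eq. 1, as in Eq. 5
E = integrate(R**2, (x, 0, 1))     # squared-residual error, Eq. 6
sol = solve(diff(E, w1), w1)       # stationarity condition, Eq. 7
print(sol)                         # [3/2], matching Table 1 for N = 1
```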

Fig 1: Variation of y and error as a function of x for different N (panels (a) to (e) correspond to N = 1 to 5)

One more thing to observe from Figure 1 is that as the degree of the basis function increases, the number of lobes in the error function also increases. The error presented is the exact (signed) error, hence one can see its variation in both the positive and negative directions. The number of lobes in the error plot is representative of the degree of the polynomial: for N = 1 the polynomial is of degree one, so there is only one lobe, while for N = 5 the polynomial is of fifth degree, so the number of lobes is five, and likewise for the other values of N.


Fig 2: Error variation for N varying from 1 to 5 as a function of x

Figure 2 depicts the error for all N in one figure. As N goes from 1 to 3 the error reduces drastically to a very small value. In fact, N = 3 can be considered the optimum solution of the problem, since beyond it an increase in the degree of the approximating function yields only a meagre change in the function output and error. To investigate further, Figure 3 shows the variation for N from 3 to 5. Here one can see that for N = 4 and 5 the error is almost zero, but at the extra cost of computation time. Hence N = 3 is the optimum degree of the approximating function, as beyond it the error reduces at a diminishing rate.
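The error trend seen in Figures 2 and 3 can be cross-checked directly from the polynomials in Table 1. A short numerical sketch (the grid size of 200 points is an arbitrary choice of ours):

```python
import numpy as np

xg = np.linspace(0.0, 1.0, 200)
exact = np.exp(xg)

# Approximate solutions taken from Table 1, N = 1 ... 5
approx = [
    3*xg/2 + 1,
    70*xg**2/83 + 72*xg/83 + 1,
    2485*xg**3/8884 + 945*xg**2/2221 + 2250*xg/2221 + 1,
    126126*xg**4/1810709 + 254240*xg**3/1810709
        + 921942*xg**2/1810709 + 1809000*xg/1810709 + 1,
    4178559*xg**5/300698723 + 10498950*xg**4/300698723
        + 51177210*xg**3/300698723 + 150115840*xg**2/300698723
        + 601429185*xg/601397446 + 1,
]

errs = [float(np.max(np.abs(y - exact))) for y in approx]
for n, e in enumerate(errs, start=1):
    print(n, e)        # the maximum error drops steeply up to N = 3
```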

Fig 3: Error variation for N varying from 3 to 5

Fig 4: Variation of output variable from FDM solution, LSM solution, and exact solutions as a function of x
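The grid sensitivity of the backward-difference scheme visible in Figure 4 can be demonstrated with a short experiment based on Eq. 15 (the node counts below are arbitrary choices of ours):

```python
import numpy as np

def fdm_solve(n):
    # March the backward-difference update of Eq. 15: y_j = y_{j-1}/(1 - dx)
    x = np.linspace(0.0, 1.0, n)
    dx = 1.0/(n - 1)
    y = np.empty(n)
    y[0] = 1.0                        # boundary condition y(0) = 1
    for j in range(1, n):
        y[j] = y[j - 1]/(1.0 - dx)
    return x, y

errors = []
for n in (5, 10, 100):
    _, y = fdm_solve(n)
    errors.append(abs(y[-1] - np.e))  # deviation from the exact e at x = 1
    print(n, errors[-1])              # the deviation shrinks as nodes are added
```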

Figure 4 shows the solutions obtained from LSM and FDM and their comparison with the exact solution. The domain is divided into 10 equal parts, and one can see that there is a substantive variation between the results predicted by FDM and by LSM. The accuracy of FDM depends on how the domain is divided: the more divisions, the better the accuracy. LSM, on the other hand, is independent of the number of nodes and gives a more accurate result even for a smaller number of nodes.

VI. CONCLUSIONS

Based on the LSM and FDM results the following conclusions can be drawn:
 Computation wise, LSM is faster than FDM.
 A polynomial can be used as a basis function for first order ODEs.
 With an increase in the order of the polynomial, the accuracy of the solution improves.
 After N = 3 the solution improves at a diminishing rate; hence at N = 3 one can get the optimum solution of a first order ODE at low computational cost.
 FDM is very sensitive to the number of nodes: the higher the number of nodes, the better the accuracy of FDM, but increasing the number of nodes also results in more computation time and cost.
 LSM is independent of the number of nodes and hence can give better accuracy in comparison to FDM.

REFERENCES

[1] R. W. Easton, Ordinary Differential Equations: An Introduction to Nonlinear Analysis (Herbert Amann), vol. 33, no. 4. Walter de Gruyter, 1991.
[2] S. Wang, H. Wang, and P. Perdikaris, "Learning the solution operator of parametric partial differential equations with physics-informed DeepONets," Sci. Adv., vol. 7, no. 40, 2021.
[3] G. E. Latta and G. M. Murphy, Ordinary Differential Equations and Their Solutions, vol. 68, no. 4. Courier Corporation, 1961.
[4] P. Kunkel and V. Mehrmann, Differential-Algebraic Equations: Analysis and Numerical Solution. European Mathematical Society, 2006.
[5] D. J. Evans and K. R. Raslan, "The Adomian decomposition method for solving delay differential equation," Int. J. Comput. Math., vol. 82, no. 1, pp. 49–54, 2005.
[6] J. Biazar, E. Babolian, and R. Islam, "Solution of the system of ordinary differential equations by Adomian decomposition method," Appl. Math. Comput., vol. 147, no. 3, pp. 713–719, 2004.
[7] J. H. He, "Variational iteration method for autonomous ordinary differential systems," Appl. Math. Comput., vol. 114, no. 2–3, pp. 115–123, 2000.
[8] A. A. Hemeda, "Homotopy perturbation method for solving partial differential equations of fractional order," Int. J. Math. Anal., vol. 6, no. 49–52, pp. 2431–2448, 2012.
[9] M. R. Hajmohammadi and S. S. Nourazar, "On the solution of characteristic value problems arising in linear stability analysis; semi analytical approach," Appl. Math. Comput., vol. 239, pp. 126–132, 2014.
[10] M. N. Ozisik, Boundary Value Problems of Heat Conduction (Dover Phoenix Editions). Courier Corporation, 2002.
[11] R. H. Stern and H. Rasmussen, "Left ventricular ejection: Model solution by collocation, an approximate analytical method," Comput. Biol. Med., vol. 26, no. 3, pp. 255–261, 1996.
[12] M. Necati Özişik, H. R. B. Orlande, M. J. Colaço, and R. M. Cotta, Finite Difference Methods in Heat Transfer: Second Edition. CRC Press, 2017.
[13] Y. Ren, B. Zhang, and H. Qiao, "A simple Taylor-series expansion method for a class of second kind integral equations," J. Comput. Appl. Math., vol. 110, no. 1, pp. 15–24, 1999.
[14] B. Hashemi and Y. Nakatsukasa, "Least-squares spectral methods for ODE eigenvalue problems," 2021.

APPENDIX

A. Python code for the evaluation of weights

from sympy import *

c1, c2, c3, c4, c5, x = symbols('c1 c2 c3 c4 c5 x')
c = [c1, c2, c3, c4, c5]

n = int(input('Enter the value of N up to which you want to solve: '))

# Creating the basis function (guess solution with y(0) = 1)
s = 1
for i in range(1, n + 1):
    s = s + c[i - 1]*x**i
y_g = s

# Residue creation
R = diff(y_g, x) - y_g

# Minimizing the square error
E = integrate(R**2, (x, 0, 1))
EE = []
for i in range(1, n + 1):
    EE.append(diff(E, c[i - 1]))

# Solving the n linear equations for the weights in use
a = solve(EE, c[:n])

print(y_g.subs(a))
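For testing, the procedure of Appendix A can be wrapped in a function (the helper name lsm_weights is ours, not from the paper). The sketch below reproduces the N = 2 row of Table 1:

```python
from sympy import symbols, diff, integrate, solve, Rational

def lsm_weights(n):
    # Least-squares weights for dy/dx - y = 0 with y(0) = 1 on [0, 1],
    # following the symbolic steps of Appendix A
    x = symbols('x')
    w = symbols(f'w1:{n + 1}')                    # w1, ..., wn
    y_g = 1 + sum(w[i]*x**(i + 1) for i in range(n))
    R = diff(y_g, x) - y_g                        # residue, Eq. 9
    E = integrate(R**2, (x, 0, 1))                # squared error, Eq. 6
    return solve([diff(E, wi) for wi in w], w)    # normal equations, Eq. 10

sol = lsm_weights(2)
print(sol)      # {w1: 72/83, w2: 70/83}, matching Table 1 for N = 2
```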

B. Python code for data plotting

 y vs x and error vs x for different cases

from matplotlib.pylab import *

font = {'family': 'Times New Roman', 'size': 16}
matplotlib.rc('font', **font)

x = linspace(0, 1, 40)
y_exact = exp(x)

# Uncomment the case to be plotted
# N=1
# y = 3*x/2 + 1
# N=2
# y = 70*x**2/83 + 72*x/83 + 1
# N=3
# y = 2485*x**3/8884 + 945*x**2/2221 + 2250*x/2221 + 1
# N=4
# y = 126126*x**4/1810709 + 254240*x**3/1810709 + 921942*x**2/1810709 + \
#     1809000*x/1810709 + 1
# N=5
y = 4178559*x**5/300698723 + 10498950*x**4/300698723 + \
    51177210*x**3/300698723 + 150115840*x**2/300698723 + \
    601429185*x/601397446 + 1

# Error evaluation
error = y - y_exact

figure(1, dpi=300)
# plot 1: solution
subplot(2, 1, 1)
plot(x, y_exact, 'ro', label='Exact solution')
plot(x, y, 'b-', label='LSM solution')
xlabel('x')
ylabel('y')
legend()
# plot 2: error
subplot(2, 1, 2)
plot(x, error, 'k-o')
xlabel('x')
ylabel('Error')
tight_layout()
savefig('N=5.jpg')
show()

 Single error plot for all the cases

from matplotlib.pylab import *

font = {'family': 'Times New Roman', 'size': 16}
matplotlib.rc('font', **font)

x = linspace(0, 1, 40)
y_exact = exp(x)

y1 = 3*x/2 + 1
y2 = 70*x**2/83 + 72*x/83 + 1
y3 = 2485*x**3/8884 + 945*x**2/2221 + 2250*x/2221 + 1
y4 = 126126*x**4/1810709 + 254240*x**3/1810709 + 921942*x**2/1810709 + \
     1809000*x/1810709 + 1
y5 = 4178559*x**5/300698723 + 10498950*x**4/300698723 + \
     51177210*x**3/300698723 + 150115840*x**2/300698723 + \
     601429185*x/601397446 + 1

y = [y1, y2, y3, y4, y5]

# Error evaluation (one error curve per N)
error = [yi - y_exact for yi in y]

figure(1, dpi=300)
for i in range(2, len(error)):      # plot the N = 3 to 5 cases
    plot(x, error[i], '-o', label=f'N={i+1}', markersize=3)

xlabel('x')
ylabel('Error')
legend()
savefig('Error for 3 to 5.jpg')
show()

C. FDM code and comparison plot

from pylab import *

font = {'family': 'Times New Roman', 'size': 16}
matplotlib.rc('font', **font)

n = 10
x = linspace(0, 1, n)
dx = 1/(n - 1)
y = zeros(n)
y[0] = 1                        # boundary condition y(0) = 1

for i in range(1, n):
    y[i] = y[i-1]/(1 - dx)      # backward difference update, Eq. 15

figure(1, dpi=300)
plot(x, y, 'r-o', label='FDM')
plot(x, exp(x), 'g-', label='Exact')

y_LSM = 2485*x**3/8884 + 945*x**2/2221 + 2250*x/2221 + 1
plot(x, y_LSM, 'bo', label='LSM')

xlabel('x')
ylabel('y')
legend()
savefig('FDM_LSM.jpg')
show()
