
Numerical Differentiation and Integration

Computational Physics
Department of Physics, University of Dhaka

1. Numerical Differentiation:

Finite Differences
Finite differences are a way of approximating derivatives. The derivative is the limit of (∆f /∆x)
as ∆x → 0. A finite difference approximation is, roughly speaking, (∆f /∆x) evaluated for a
small value of ∆x.
Several variations are possible. Use h in place of ∆x and let

f_{i-1} = f(x - h)
f_i = f(x)
f_{i+1} = f(x + h)
Then we have the following approximations to the derivative f'(x):

1. Forward finite difference: f'(x) = (f_{i+1} - f_i)/h + O(h) ≈ (f_{i+1} - f_i)/h

2. Backward finite difference: f'(x) = (f_i - f_{i-1})/h + O(h) ≈ (f_i - f_{i-1})/h

3. Central finite difference: f'(x) = (f_{i+1} - f_{i-1})/(2h) + O(h^2) ≈ (f_{i+1} - f_{i-1})/(2h)

Example
Consider f(x) = x^3. Its derivative at x = 5 is 3 × 5^2 = 75. Take h = 0.1. Then

Forward difference:  [(5 + 0.1)^3 - 5^3]/(0.1) = 76.51
Backward difference: [5^3 - (5 - 0.1)^3]/(0.1) = 73.51
Central difference:  [(5 + 0.1)^3 - (5 - 0.1)^3]/(2 × 0.1) = 75.01
Central difference is clearly the best estimate.
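These numbers are easy to reproduce in a few lines of C++; a minimal sketch (the function, point and step size are taken from the example above):

#include <iostream>
using namespace std;

double f(double x) { return x*x*x; }   // f'(5) = 75 exactly

int main() {
    double x = 5.0, h = 0.1;
    cout << (f(x+h) - f(x)) / h       << endl;   // forward:  76.51
    cout << (f(x) - f(x-h)) / h       << endl;   // backward: 73.51
    cout << (f(x+h) - f(x-h)) / (2*h) << endl;   // central:  75.01
    return 0;
}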
Errors in the approximation: The forward and the backward difference formulas have first
order i.e. O (h) error while the central difference formula has second order i.e. O (h2 ) error.
These can be seen by expanding the functions at x + nh and x − nh in Taylor series, where n
is a positive integer:

f(x + nh) = f(x) + nh f'(x) + ((nh)^2/2!) f''(x) + ···

f(x - nh) = f(x) - nh f'(x) + ((nh)^2/2!) f''(x) - ···

Then we get:

f_{i+1} - f_{i-1} = f(x + h) - f(x - h)
  = [f(x) + h f'(x) + (h^2/2!) f''(x) + ···] - [f(x) - h f'(x) + (h^2/2!) f''(x) - ···]
  = 2h f'(x) + (2h^3/3!) f'''(x) + ··· = 2h f'(x) + O(h^3)

⇒ f'(x) = (f(x + h) - f(x - h))/(2h) + O(h^2)
Higher order approximations to the first derivative can be obtained by using more Taylor series,
more terms in the Taylor series, and cleverly weighting the various expansions in a sum. For
example, we get,

1. Forward difference approximation with second order error:

   f'(x) = [-f(x + 2h) + 4f(x + h) - 3f(x)]/(2h) + O(h^2)

The formula can be derived as follows:

• All discrete approximations to derivatives are linear combinations of functional values at the nodes:

  f_i^{(p)} = (a_α f_α + a_β f_β + ··· + a_λ f_λ)/h^p + E
• The total number of nodes used must be at least one greater than the order of
differentiation to achieve minimum accuracy O(h).
• To obtain better accuracy, you must increase the number of nodes considered.
• For central difference approximations to even derivatives, a cancellation of truncation error terms leads to one order of accuracy improvement.

We need to derive a formula for the first derivative (f_i') with second order accuracy (O(h^2)):

• First derivative with O(h) accuracy ⇒ the minimum number of nodes is 2


• First derivative with O(h2 ) accuracy ⇒ the minimum number of nodes is 3

Figure 1: One needs 3 nodes for second order accuracy.

• The first forward derivative can therefore be approximated to O(h^2) as:

  (df/dx)|_{x=x_i} - E = (α_1 f_i + α_2 f_{i+1} + α_3 f_{i+2})/h

• The Taylor series expansions about x_i give:

  f_i = f_i
  f_{i+1} = f_i + h f_i' + (h^2/2) f_i'' + (h^3/6) f_i''' + O(h^4)
  f_{i+2} = f_i + 2h f_i' + (4h^2/2) f_i'' + (4h^3/3) f_i''' + O(h^4)
• Substituting into our assumed form of f_i' and re-arranging:

  (α_1 f_i + α_2 f_{i+1} + α_3 f_{i+2})/h = ((α_1 + α_2 + α_3)/h) f_i + (α_2 + 2α_3) f_i'
    + (α_2/2 + 2α_3) h f_i'' + (α_2/6 + 4α_3/3) h^2 f_i''' + O(h^3)
• In order to have our desired accuracy, the coefficient of f_i' must equal unity and the coefficients of f_i and f_i'' must vanish, which gives

  (α_1 + α_2 + α_3)/h = 0
  α_2 + 2α_3 = 1
  (α_2/2 + 2α_3) h = 0

  Solving these simultaneous equations gives α_1 = -3/2, α_2 = 2, and α_3 = -1/2.

• Thus the equation now becomes

  (-(3/2) f_i + 2 f_{i+1} - (1/2) f_{i+2})/h
    = (0) f_i + (2 - 1) f_i' + (0) h f_i'' + ((1/6)·2 - (4/3)·(1/2)) h^2 f_i''' + O(h^3)

  This gives

  f_i' = (-3 f_i + 4 f_{i+1} - f_{i+2})/(2h) + (1/3) h^2 f_i''' + O(h^3)
• So the formula becomes

  f_i' = (-3 f_i + 4 f_{i+1} - f_{i+2})/(2h) + E,  where E = (1/3) h^2 f_i'''

  which can be written as

  f'(x) = [-f(x + 2h) + 4f(x + h) - 3f(x)]/(2h) + O(h^2)
Similarly one can derive the backward and central difference formulas with higher order accuracy:

2. Backward difference approximation with second order error:

   f'(x) = [3f(x) - 4f(x - h) + f(x - 2h)]/(2h) + O(h^2)

3. Centered difference approximation with fourth order error:

   f'(x) = [-f(x + 2h) + 8f(x + h) - 8f(x - h) + f(x - 2h)]/(12h) + O(h^4)
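The gain from the extra node can be checked on the earlier example f(x) = x^3 at x = 5 with h = 0.1; a minimal sketch:

#include <iostream>
using namespace std;

double f(double x) { return x*x*x; }   // exact derivative at x = 5 is 75

int main() {
    double x = 5.0, h = 0.1;
    // first order forward difference, O(h) error
    double d1 = (f(x+h) - f(x)) / h;                       // 76.51 (error ~ 1.51)
    // second order forward difference, O(h^2) error
    double d2 = (-f(x+2*h) + 4*f(x+h) - 3*f(x)) / (2*h);   // 74.98 (error ~ -0.02)
    cout << d1 << " " << d2 << endl;
    // note: the -0.02 matches -E = -(1/3) h^2 f''' = -(1/3)(0.01)(6)
    return 0;
}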

Higher Derivatives:
Higher order derivatives can be approximated in the same way by using Taylor expansions. For
example:

1. Forward difference approximation to f''(x) is:

   f''(x) = [f(x + 2h) - 2f(x + h) + f(x)]/h^2 + O(h)

One can derive this formula as before by considering 3 nodes for O(h) order accuracy of f'':

f_i'' - E = (α_1 f_i + α_2 f_{i+1} + α_3 f_{i+2})/h^2
• Expanding in Taylor series and rearranging we have

  (α_1 f_i + α_2 f_{i+1} + α_3 f_{i+2})/h^2 = ((α_1 + α_2 + α_3)/h^2) f_i + ((α_2 + 2α_3)/h) f_i'
    + (1/2)(α_2 + 4α_3) f_i'' + (h/6)(α_2 + 8α_3) f_i''' + O(h^2)
• In order to compute f'' we must have:

  (α_1 + α_2 + α_3)/h^2 = 0
  (α_2 + 2α_3)/h = 0
  (1/2)(α_2 + 4α_3) = 1

  which give α_1 = 1, α_2 = -2, and α_3 = 1.
• Therefore

  f_i'' = (f_{i+2} - 2f_{i+1} + f_i)/h^2 + E,  where E = -h f_i'''
The higher order accuracy can be obtained subsequently by increasing the number of nodes:

f_i'' → requires 3 nodes for O(h) accuracy.
f_i'' → requires 4 nodes for O(h^2) accuracy.
f_i'' → requires 5 nodes for O(h^3) accuracy.
f_i'' → requires 6 nodes for O(h^4) accuracy.

2. Centered difference approximations are

   f''(x) = [f(x + h) - 2f(x) + f(x - h)]/h^2 + O(h^2)

   f''(x) = [-f(x + 2h) + 16f(x + h) - 30f(x) + 16f(x - h) - f(x - 2h)]/(12h^2) + O(h^4)
A list of finite difference approximations is given below (F = forward, C = centered, B = backward):

1. First order:

   F: f'(x) = (1/h)[f(x + h) - f(x)]
   B: f'(x) = (1/h)[f(x) - f(x - h)]

2. Second order:

   F: f'(x) = (1/2h)[-3f(x) + 4f(x + h) - f(x + 2h)]
      f''(x) = (1/h^2)[f(x) - 2f(x + h) + f(x + 2h)]
   C: f'(x) = (1/2h)[f(x + h) - f(x - h)]
      f''(x) = (1/h^2)[f(x + h) - 2f(x) + f(x - h)]
   B: f'(x) = (1/2h)[3f(x) - 4f(x - h) + f(x - 2h)]
      f''(x) = (1/h^2)[f(x) - 2f(x - h) + f(x - 2h)]

Etc.
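As a quick numerical check of the second derivative entries in this list, here is a minimal sketch (the test function sin x, the point and the step size are assumed choices, not from the notes):

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double x = M_PI/4.0, h = 0.01;
    double exact = -sin(x);   // second derivative of sin x
    // 3-point forward formula for f'' (error O(h))
    double dF = (sin(x) - 2*sin(x+h) + sin(x+2*h)) / (h*h);
    // 3-point centered formula for f'' (error O(h^2))
    double dC = (sin(x+h) - 2*sin(x) + sin(x-h)) / (h*h);
    cout << dF - exact << " " << dC - exact << endl;
    // the forward error is O(h) while the centered error is O(h^2),
    // so the centered result should be far closer to the exact value
    return 0;
}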
Don't let h be too small:
The smaller the h, the better the finite difference estimate. That's certainly true "early on", but at some point you run into trouble. The reason is lack of precision: we only have 16 digits of precision in our machine (i.e. the computer).
To see this limitation, let us consider the function sin x at x = π/10. The values of sin(π/10) and sin(π/10 + 10^{-10}) agree to 10 digits:

sin(π/10) = 0.3090169943749474
sin(π/10 + 10^{-10}) = 0.3090169944700531

So when we subtract one from the other and divide by h = 10^{-10}, we get

[sin(π/10 + 10^{-10}) - sin(π/10)]/10^{-10} = [0.3090169944700531 - 0.3090169943749474]/10^{-10} = 0.951057
Hence we lose 10 digits of precision! So the best we can hope for is 6 digits of accuracy. Hence,
the best h is chosen by a trade-off between accuracy of the estimate and the available precision.
A simple function can be differentiated very easily at one point by using any of the above methods, for example the central difference method as below:

#include <iostream>
using namespace std;

double f(double x) {
    return 3*x*x + 5*x;     // test function; f'(x) = 6x + 5
}

double deff(double x, double h) {
    double dy = f(x+h) - f(x-h);
    return dy / (2*h);      // central difference
}

int main() {
    cout << deff(1, 0.1) << endl;   // exact answer is 11
    return 0;
}

Example-1

The following program computes the sine function at discrete points, stores the results in an
array, then computes numerically the first derivative of the function. The results are compared
to the exact form.
/* numdiff1.cpp */
#include <iostream>
#include <fstream>
#include <cmath>
using namespace std;
const char *FILENAME = "numdiff1.txt"; // name of the output file
#define N_POINTS 300 /* Array size */
int main()
{
    int i;
    float angle[N_POINTS], f[N_POINTS], f1_3[N_POINTS];
    float d_angle = 0.0;
    ofstream fout(FILENAME); // create output object associated with file
    /* --- compute the data --- */
    d_angle = 2.0*M_PI / (float) (N_POINTS-1);
    /* M_PI is defined in the math library to be equal to pi */
    for (i = 0; i < N_POINTS; i++) {
        angle[i] = i*d_angle;
        f[i] = sin(angle[i]);
    }
    /* --- 2-point derivative at beginning and end of the lattice --- */
    f1_3[0] = (f[1] - f[0]) / d_angle;
    f1_3[N_POINTS-1] = (f[N_POINTS-1] - f[N_POINTS-2]) / d_angle;
    /* --- 3-point derivative everywhere else --- */
    for (i = 1; i < N_POINTS-1; i++)
    {
        f1_3[i] = (f[i+1] - f[i-1]) / (2*d_angle);
    }
    /* --- output results and error --- */
    for (i = 0; i < N_POINTS; i++)
    {
        cout << angle[i] << " " << f[i] << " " << f1_3[i] << " "
             << fabs(f1_3[i] - cos(angle[i])) << endl;
        /* --- write to file --- */
        fout << angle[i] << " " << f[i] << " " << f1_3[i] << " "
             << fabs(f1_3[i] - cos(angle[i])) << endl;
    }
    fout.close(); /* close file */
    return 0;
}
Plotting Data and Saving the Plot
To plot these data using gnuplot, use the following command in the terminal window:
gnuplot
This will start the gnuplot program. Within the gnuplot prompt, i.e. right after the "gnuplot>" that you will see by default, write the following commands:
plot 'numdiff1.txt' using ($1):($2) with line
Close the plot that should pop up in a window. Then to save the plot in a "postscript" type of file, use the following commands:
set terminal postscript enhanced color
set output 'yourname_diff1_err.ps'
plot 'numdiff1.txt' u ($1):($2) w l
set terminal X11
quit
The last command will return you back to the shell terminal window by quitting gnuplot. ($1) in the above refers to the values in the first column of the file while ($2) refers to the second column values, etc., and the above plots ($2) vs ($1). Using ($4) vs ($1) saves the postscript file of the error.
Note that the error is relatively small. Note also that there is a systematic error introduced by
the 3-point form which causes oscillations in the error function having the same wavelength as
that of the original (sine) function.
Difference between double and float
Instead of using float type of variables, change the code above to use double type of variables
(replace float by double at all places) and redo the error plot. Depending on machines, you
should get slightly different plots.
Example-2 Round-off errors
One might think that using the higher-order derivative forms would always be better; this is in
general the case, but with an important caveat! These forms may involve large cancellations
in the numerator as h becomes small, thereby producing bad results due to round-off. The
following code illustrates this point. It computes the derivative of sin x at 45 degrees.
/* numdiff2.cpp */
#include <iostream>
#include <fstream>
#include <cmath>
using namespace std;
int main()
{
    float x = M_PI/4.0, f1_3, f1_5, f1_e;
    float d_angle;
    int i;
    d_angle = 5.0;
    cout << "\n\nTable of errors in 3- and 5-point derivatives\n\n";
    cout << " angle 3-point 5-point \n";
    for (i = 1; i < 10; i++)
    {
        d_angle = d_angle / 10.0;
        f1_3 = (sin(x+d_angle) - sin(x-d_angle)) / (2.0*d_angle);
        f1_5 = (sin(x-2*d_angle) - 8*sin(x-d_angle) + 8*sin(x+d_angle) - sin(x+2*d_angle)) / (12.0*d_angle);
        f1_e = cos(x);
        cout << d_angle << " " << f1_3 - f1_e << " " << f1_5 - f1_e << endl;
    }
    return 0;
}
Run the above code to verify the loss of precision. The minimum error is for d_angle = 0.005 for the 3-point formula, and 0.05 for the 5-point formula. The error becomes larger for smaller intervals due to arithmetic errors. Note the use of single precision in the code to make the effect more evident!
Richardson Extrapolation:

We have seen how two different expressions can be combined to eliminate the leading error
term and thus yield a more accurate expression. It is also possible to use a single expression to
achieve the same goal. This general technique is due to L. F. Richardson, a meteorologist who
pioneered numerical weather prediction in the 1920s.
Let’s start with the central difference formula, and imagine that we have obtained the usual
approximation for the derivative,
f'(x) = (f(x + h) - f(x - h))/(2h) - (h^2/6) f'''(x) + ···
Using a different step size we can obtain a second approximation to the derivative. Then using
these two expressions, we can eliminate the leading term of the error. In practice, the second
expression is usually obtained by using an h twice as large as the first, so that
f'(x) = (f(x + 2h) - f(x - 2h))/(4h) - (4h^2/6) f'''(x) + ···
Dividing this expression by 4 and subtracting it from the previous one eliminates the error! Well, actually only the leading term of the error is eliminated, but it still sounds great! Solving for f'(x), we obtain

f'(x) = (f(x - 2h) - 8f(x - h) + 8f(x + h) - f(x + 2h))/(12h) + O(h^4)

a 5-point central difference formula with error given by E = (h^4/30) f^{iv}(x).
Of course, we can do the same thing with other derivatives: using the 3-point expression

f''(x) = [f(x + h) - 2f(x) + f(x - h)]/h^2 + O(h^2)
we can easily derive the 5-point expression:

f''(x) = [-f(x + 2h) + 16f(x + h) - 30f(x) + 16f(x - h) - f(x - 2h)]/(12h^2) + O(h^4)

with error E = (h^4/90) f^{vi}(x) + ···

Now, there are two different ways that Richardson extrapolation can be used. The first is to obtain "new" expressions, as we've just done, and to use these expressions directly. Be forewarned, however, that these expressions can become rather cumbersome. The other is an indirect computational scheme, which we are going to discuss below. This is the same scheme as before, but here we execute it numerically.

• Let D1(h) be the approximation to the derivative obtained from the 3-point central difference formula with step size h, and imagine that both D1(2h) and D1(h) have been calculated.
• Since the error goes as the square of the step size, D1(2h) must contain four times the error contained in D1(h).
• The difference between these two approximations is then three times the error of the
second. But the difference is something we can easily calculate, so in fact we can calculate
the error.

• D2(h) is then obtained by simply subtracting this calculated error from the second approximation,

  D2(h) = D1(h) - [D1(2h) - D1(h)]/(2^2 - 1)

  D2(h) = [4 D1(h) - D1(2h)]/3
• Of course, D2 (h) is not the exact answer, since we’ve only accounted for the leading term
in the error. Since the central difference formulas have error terms involving only even
powers of h, the error in D2(h) must be O(h^4).

• D2(2h) contains 2^4 times as much error as D2(h), and so this error can be removed to yield an even better estimate of the derivative,

  D3(h) = D2(h) - [D2(2h) - D2(h)]/(2^4 - 1)

  D3(h) = [16 D2(h) - D2(2h)]/15
• This process can be continued indefinitely, with each improved estimate given by

  D_{i+1}(h) = D_i(h) - [D_i(2h) - D_i(h)]/(2^{2i} - 1)

  D_{i+1}(h) = [2^{2i} D_i(h) - D_i(2h)]/(2^{2i} - 1)
#include <iostream>
#include <cmath>
#include <iomanip>

using namespace std;

double f(double x)
{
    return sin(x);
}

void Derivative(double x, int n, double h, double D[10][10])
{
    int i, j;
    for (i = 0; i < n; i++)
    {
        // first column: 3-point central difference with the current step size
        D[i][0] = (f(x+h) - f(x-h)) / (2*h);
        // fill the rest of row i by Richardson extrapolation
        for (j = 0; j <= (i-1); j++)
        {
            D[i][j+1] = D[i][j] + (D[i][j] - D[i-1][j]) / (pow(4.0, double(j+1)) - 1);
        }
        h = h/2;   // halve the step size for the next row
    }
}

int main()
{
    double D[10][10];
    int n = 10, digits = 5;
    double h = 1, x = 0;
    Derivative(x, n, h, D);
    cout.setf(ios::fixed);
    cout.setf(ios::showpoint);
    cout << setprecision(digits) << endl;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < i+1; j++)
        {
            cout << setw(digits+2) << D[i][j] << " ";
        }
        cout << endl;
    }
    cout.unsetf(ios::fixed);
    cout.unsetf(ios::showpoint);
    return 0;
}

2. Numerical Integration:

There are two main reasons why you may need to do numerical integration:
1. Analytical integration may be impossible or infeasible, or

2. You may wish to integrate tabulated data rather than known functions.

In this section we outline the main approaches to numerical integration. Which one is preferable
depends on the results required, and in part on the function or data to be integrated.

(a) Constant Rule

Perhaps the simplest form of numerical integration is to assume that the function f (x) is
constant over the interval being integrated. Clearly this is not going to be a very accurate
method of integrating, and indeed leads to an ambiguous result, depending on whether the
constant is selected from the lower or the upper limit of the integral.
Integration of a Taylor series expansion of f (x) shows the error in this approximation to be:
I = ∫_{x_0}^{x_0+∆x} f(x) dx = ∫_{x_0}^{x_0+∆x} [f(x_0) + f'(x_0)(x - x_0) + (f''(x_0)/2)(x - x_0)^2 + ···] dx
  = f(x_0) ∆x + (1/2) f'(x_0)(∆x)^2 + (1/6) f''(x_0)(∆x)^3 + ··· = f(x_0) ∆x + O((∆x)^2)
In the constant rule, the integral I is approximated as

I ≈ f(x_0)(x_0 + ∆x - x_0) = f(x_0) ∆x

if the constant is taken from the lower limit (i.e. x_0). Similar analysis shows that I ≈ f(x_0 + ∆x) ∆x if the constant is taken from the upper limit (i.e. x_0 + ∆x). In both cases the error is O((∆x)^2), with the coefficient being derived from f'(x_0) or f'(x_0 + ∆x). Clearly we can do much better than this, and as a result, this rule is not used in practice.
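Although the rule is not used in practice, its O((∆x)^2) error is easy to exhibit numerically; a minimal sketch (the test function e^x is an assumed choice):

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    // integrate f(x) = e^x from 0 to dx with the constant rule (lower limit);
    // the exact value is e^dx - 1
    for (double dx = 0.1; dx >= 1.0e-4; dx /= 10.0) {
        double approx = exp(0.0) * dx;
        double exact  = exp(dx) - 1.0;
        cout << dx << " " << exact - approx << endl;
        // the error drops by ~100x for each 10x reduction of dx, i.e. O((dx)^2)
    }
    return 0;
}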

(b) Trapezium Rule

Consider the Taylor series expansion of f(x) around x_0, integrated from x_0 to x_0 + ∆x:

I = ∫_{x_0}^{x_0+∆x} [f(x_0) + f'(x_0)(x - x_0) + (f''(x_0)/2)(x - x_0)^2 + ···] dx
  = f(x_0) ∆x + (1/2) f'(x_0)(∆x)^2 + (1/6) f''(x_0)(∆x)^3 + ···
  = (1/2) [f(x_0) + (f(x_0) + f'(x_0) ∆x + (1/2) f''(x_0)(∆x)^2 + ···)] ∆x - (1/12) f''(x_0)(∆x)^3 + ···
  = (1/2) [f(x_0) + f(x_0 + ∆x)] ∆x + O((∆x)^3) ≈ (1/2) [f(x_0) + f(x_0 + ∆x)] ∆x
This approximation, I ≈ (1/2)[f(x_0) + f(x_0 + ∆x)] ∆x, is called the trapezium rule, based on its geometric interpretation of approximating the area under the curve by a trapezium. It is exact for polynomials up to and including degree 1, i.e. f(x) = ax + c.
Compound Trapezium Rule
The error in the Trapezium Rule is proportional to (∆x)^3. Thus if we were to halve ∆x, the error would be decreased by a factor of eight. However, the size of the domain would be halved, thus requiring the Trapezium Rule to be evaluated twice and the contributions summed. The net result is the error decreasing by a factor of four (2 × 1/8 = 1/4) rather than eight. The Trapezium Rule used in this manner is sometimes called the Compound Trapezium Rule, but more often simply the Trapezium Rule.
Suppose we need to integrate from x_0 = a to x_N = b. We shall subdivide this interval into N sub-intervals of size ∆x = (b - a)/N = (x_N - x_0)/N. The Compound Trapezium Rule approximation to the integral is therefore (noting that x_N = x_0 + N∆x):

I = ∫_{x_0}^{x_N} f(x) dx = Σ_{i=0}^{N-1} ∫_{x_0+i∆x}^{x_0+(i+1)∆x} f(x) dx
  ≈ (∆x/2) Σ_{i=0}^{N-1} [f(x_0 + i∆x) + f(x_0 + (i+1)∆x)]
  = (∆x/2) [f(x_0) + 2f(x_0 + ∆x) + 2f(x_0 + 2∆x) + ··· + 2f(x_0 + (N-1)∆x) + f(x_N)]

Note that, while the error in each step is O((∆x)^3) (from the Trapezium Rule), the cumulative error is N times this, or O((∆x)^2) ~ O(N^{-2}), which is bigger.




Example-3 Trapezoidal Rule


Consider the simple integrator that makes use of the Compound Trapezium Rule:
/* Implementation of Trapezoidal Rule of Integration */
#include <iostream>
#include <cstdlib>   // for exit()
using namespace std;
double integtrapzd(double (*func)(double), double a, double b, int n)
{
    double x = 0.0, sum = 0.0, del = 0.0;
    int j = 0;
    if (n == 1)
    { return 0.5*(b-a)*( (*func)(a) + (*func)(b) ); }
    else if (n > 1)
    {
        del = (b-a)/n;
        sum += (*func)(a);
        x = a + del;
        for (j = 1; j < n; j++, x += del) sum += 2.0 * (*func)(x);
        sum += (*func)(b);
        return sum*0.5*del;
    }
    else
    {
        cout << "Number of intervals numint has to be >= 1" << endl;
        exit(1);
    }
}
double myfunc1(double x) { return x*x; }
int main()
{
    double res = 0.0, low = 0.0, up = 1.0;
    int numint = 10;
    res = integtrapzd(myfunc1, low, up, numint);
    cout << "res: " << res << endl;
    return 0;
}
Example-4 Error in Trapezoidal Rule
Consider the following modifications in the code for observing the dependence of the error on
the number of sub-intervals numint:
...
#include <iostream>
#include <cmath>
#include <fstream>
#include <cstdlib>   // for exit()
...
const char *FILENAME = "trapzderror.txt";
...
double integtrapzd(double (*func)(double), double a, double b, double n)
{
    double x = 0.0, sum = 0.0, del = 0.0;
    int j = 0;
    if (n == 1)
    { return 0.5*(b-a)*( (*func)(a) + (*func)(b) ); }
    else if (n > 1)
    {
        del = (b-a)/n;
        sum += (*func)(a);
        x = a + del;
        for (j = 1; j < n; j++, x += del) {
            sum += 2.0 * (*func)(x);
        }
        sum += (*func)(b);
        return sum*0.5*del;
    }
    else
    {
        cout << "Number of intervals numint has to be >= 1" << endl;
        exit(1);
    }
}
double myfunc2(double x) { return x*x*x; }
int main()
{
    double res = 0.0, low = 0.0, up = 1.0;
    double i = 1.0;
    ...
    ofstream fout(FILENAME); // create output object associated with the file
    for (i = 1.0; i < 1000000000.0; i *= 10.0) // up to 10^8
    {
        res = integtrapzd(myfunc2, low, up, i);
        cout << "res: " << res << " error: " << res - 0.25 << endl;
        /* --- write to file --- */
        fout << i << " " << res << " " << res - 0.25 << endl;
    }
    return 0;
}
Plot the error file trapzderror.txt using the gnuplot program. Write gnuplot in the terminal window to start the program prompt:
gnuplot
Within the gnuplot prompt, i.e. right after the "gnuplot>" that you will see by default, write the following commands:
plot "trapzderror.txt" using (log($1)/log(10)):(log($3)/log(10)) with line
Close the plot that should pop up in a window. Then to save the plot in a "postscript" type of file, use the following commands:
set terminal postscript enhanced color
set output "yourname_trapzd_err.ps"
plot "trapzderror.txt" u (log($1)/log(10)):(log($3)/log(10)) w l
set terminal X11
quit
You should see that the error becomes minimum for certain small values of ∆x but increases if ∆x is further decreased.

(c) Simpson's Rule

An alternative approach to decreasing the step size ∆x for the integration, is to increase the
accuracy of the functions (i.e. increase the number of terms of the Taylor expansion) used to
approximate the integrand.
Consider integrating the Taylor series expansion of f (x) around x = x0 , over an interval of
length 2∆x:
I = ∫_{x_0}^{x_0+2∆x} f(x) dx = ∫_{x_0}^{x_0+2∆x} [f(x_0) + f'(x_0)(x - x_0) + (f''(x_0)/2)(x - x_0)^2 + ···] dx

  = f(x_0)(2∆x) + (1/2) f'(x_0)(2∆x)^2 + (1/6) f''(x_0)(2∆x)^3 + (1/24) f'''(x_0)(2∆x)^4 + (1/120) f^{iv}(x_0)(2∆x)^5 + ···

  = 2f(x_0) ∆x + 2f'(x_0)(∆x)^2 + (4/3) f''(x_0)(∆x)^3 + (2/3) f'''(x_0)(∆x)^4 + (4/15) f^{iv}(x_0)(∆x)^5 + ···

  = (∆x/3) { f(x_0) + 4 [f(x_0) + f'(x_0) ∆x + (1/2) f''(x_0)(∆x)^2 + (1/6) f'''(x_0)(∆x)^3 + (1/24) f^{iv}(x_0)(∆x)^4 + ···]
           + [f(x_0) + f'(x_0)(2∆x) + (1/2) f''(x_0)(2∆x)^2 + (1/6) f'''(x_0)(2∆x)^3 + (1/24) f^{iv}(x_0)(2∆x)^4 + ···]
           - (1/30) f^{iv}(x_0)(∆x)^4 + ··· }

  = (∆x/3) [f(x_0) + 4f(x_0 + ∆x) + f(x_0 + 2∆x)] + O((∆x)^5)     (1)

Whereas the error in the Trapezium rule was O((∆x)^3), Simpson's rule is two orders more accurate at O((∆x)^5), giving exact integration of cubics.




Compound Simpson’s Rule


To improve the accuracy when integrating over larger intervals, say the interval x_0 = a to x_N = b, we may again subdivide into N steps. The three-point evaluation for each subinterval requires that there be an even number of subintervals. Hence we must be able to express the number of intervals as N = 2m. The compound Simpson's rule is then (using a = x_0, b = x_N, N = 2m = (b - a)/∆x):

I = ∫_a^b f(x) dx ≈ (∆x/3) Σ_{i=0}^{m-1} [f(a + 2i∆x) + 4f(a + (2i+1)∆x) + f(a + (2i+2)∆x)]
  = (∆x/3) [f(a) + 4f(a + ∆x) + 2f(a + 2∆x) + 4f(a + 3∆x) + ··· + 4f(a + (N-1)∆x) + f(b)]

The corresponding error is N × O((∆x)^5) ~ O((∆x)^4) ~ O(N^{-4}).

#include <iostream>
#include <cmath>
using namespace std;
float f(float x)
{
    return (pow(x,3) + pow(x,2) - (4*x) - 5);
}
double simpson(double a, double b, int n) {
    int i;
    long double d, I = 0, J = 0, A;
    d = (b-a)/n;
    // odd interior points get weight 4
    for (i = 1; i < n; i++)
    {
        if ((i % 2) != 0)
        { I = I + f(a + (i*d)); }
    }
    // even interior points get weight 2
    for (i = 2; i < n-1; i++)
    {
        if ((i % 2) == 0)
        { J = J + f(a + (i*d)); }
    }
    A = (d/3)*(f(a) + (4*I) + (2*J) + f(b));
    return A;
}
int main()
{
    long double a, b;
    int n;
    cout << "Given f(x) = x^3 + x^2 - 4x - 5" << endl;
    cout << "Enter lower limit " << endl;
    cin >> a;
    cout << "Enter upper limit " << endl;
    cin >> b;
    cout << "Enter the number of intervals : " << endl;
    cin >> n;
    cout << "The value of the integral under the entered limits is : " << endl;
    cout << simpson(a, b, n) << endl;
    return 0;
}

Simpson's 3/8 Rule and Boole's Rule

We can improve the accuracy of these procedures by using higher-order polynomials. Using cubic and quartic polynomials yields, respectively, Simpson's 3/8 and Boole's rules:

∫_{x_0}^{x_0+3∆x} f(x) dx = (3∆x/8) [f_0 + 3f_1 + 3f_2 + f_3] + O((∆x)^5)

∫_{x_0}^{x_0+4∆x} f(x) dx = (2∆x/45) [7f_0 + 32f_1 + 12f_2 + 32f_3 + 7f_4] + O((∆x)^7)

We can similarly devise compound forms of these rules.
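As an illustration, a compound Boole's rule could be coded as in the following minimal sketch; the weights follow the formula above, the test integrand and limits are assumed choices, and n is assumed to be a multiple of 4:

#include <iostream>
#include <cmath>
using namespace std;

double f(double x) { return pow(x,3) + pow(x,2) - 4*x - 5; }

// Compound Boole's rule; n is assumed to be a multiple of 4
double boole(double a, double b, int n) {
    double d = (b - a) / n;
    double sum = 7 * (f(a) + f(b));
    for (int i = 1; i < n; i++) {
        if (i % 4 == 0)      sum += 14 * f(a + i*d); // node shared by two panels: 7 + 7
        else if (i % 2 == 0) sum += 12 * f(a + i*d);
        else                 sum += 32 * f(a + i*d);
    }
    return sum * 2.0 * d / 45.0;
}

int main() {
    // exact value on [0,2] is 4 + 8/3 - 8 - 10 = -34/3; Boole's rule is exact for
    // polynomials up to degree 5, so the result should match to rounding error
    cout << boole(0.0, 2.0, 8) << endl;
    return 0;
}

The following program compares the trapezium, Simpson's 1/3 and Simpson's 3/8 rules on the same integrand: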
#include <iostream>
#include <cmath>
using namespace std;
float f(float x)
{
    return (pow(x,3) + pow(x,2) - (4*x) - 5);
}
double trap(double a, double b, int n) {
    double x, sum, del;
    del = (b-a)/n;
    sum = f(a);
    sum += f(b);
    x = a + del;
    for (int j = 1; j < n; j++) {
        sum += 2.0*f(x);   // add the interior point before stepping x
        x += del;
    }
    return sum*0.5*del;
}
double simpson(double a, double b, int n) {
    int i;
    long double d, I = 0, J = 0, A;
    d = (b-a)/n;
    for (i = 1; i < n; i++) {
        if ((i % 2) != 0)
        { I = I + f(a + (i*d)); }
    }
    for (i = 2; i < n-1; i++) {
        if ((i % 2) == 0)
        { J = J + f(a + (i*d)); }
    }
    A = (d/3)*(f(a) + (4*I) + (2*J) + f(b));
    return A;
}
double simpson3(double a, double b, int n)   // n should be a multiple of 3
{
    int i;
    long double d, A3, I = 0, J = 0, K = 0;
    d = (b-a)/n;
    // interior points that are not multiples of 3 get weight 3
    for (i = 1; i < n; i++) {
        if (((i % 2) != 0) && ((i % 3) != 0))
        { I = I + f(a + i*d); }
    }
    for (i = 2; i < n; i++) {
        if (((i % 2) == 0) && ((i % 3) != 0))
        { J = J + f(a + i*d); }
    }
    // interior multiples of 3 get weight 2
    for (i = 3; i < n; i++) {
        if ((i % 3) == 0)
        { K = K + f(a + i*d); }
    }
    A3 = (3*d/8)*(f(a) + 3*I + 3*J + 2*K + f(b));
    return A3;
}

int main()
{
    long double a, b;
    int n;
    cout << "Given f(x) = x^3 + x^2 - 4x - 5" << endl;
    cout << "Enter lower limit " << endl;
    cin >> a;
    cout << "Enter upper limit " << endl;
    cin >> b;
    cout << "Enter the number of intervals : " << endl;
    cin >> n;
    cout << "The value of the integral under the entered limits is : " << endl;
    cout << trap(a, b, n) << endl;
    cout << simpson(a, b, n) << endl;
    cout << simpson3(a, b, n) << endl;
    return 0;
}

Advantages of Simpson's 3/8 rule: There are two advantages of the 3/8 rule. First, the error term is smaller than Simpson's rule.
The second, more important use of the 3/8 rule is for uniformly sampled function integration. Suppose you have a function known at equally spaced points. If the number of points is odd, then the composite Simpson's rule works just fine. If the number of points is even, then you have a problem at the end. One solution is to use the 3/8 rule. For example, if the user passed 6 samples, then you use Simpson's rule for the first three points, and the 3/8 rule for the last 4 (the middle point is common to both). This preserves the order of accuracy without putting an arbitrary constraint on the number of samples. A sketch of this hybrid strategy is given below.
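A minimal sketch of this hybrid scheme for tabulated data (the function name integrate_samples and the sine test data are assumed choices; at least 4 samples are assumed when the interval count is odd):

#include <iostream>
#include <cmath>
using namespace std;

// integrate n+1 equally spaced samples y[0..n] with spacing h;
// n even: pure Simpson's 1/3; n odd: 1/3 on the first n-3 intervals, 3/8 on the last 3
double integrate_samples(const double y[], int n, double h) {
    double sum = 0.0;
    int m = n;
    if (n % 2 != 0) {
        m = n - 3;
        sum += 3.0*h/8.0 * (y[m] + 3*y[m+1] + 3*y[m+2] + y[m+3]);
    }
    for (int i = 0; i < m; i += 2)
        sum += h/3.0 * (y[i] + 4*y[i+1] + y[i+2]);
    return sum;
}

int main() {
    const int N = 6;   // 6 samples -> 5 intervals, so the 3/8 rule handles the last 4 points
    double h = (M_PI/2.0) / (N - 1), y[N];
    for (int i = 0; i < N; i++) y[i] = sin(i*h);
    cout << integrate_samples(y, N - 1, h) << endl;   // exact integral of sin on [0, pi/2] is 1
    return 0;
}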

Figure 2: Comparison of all three methods.

Example-5 A simple integration scheme


/* integration1.cpp */
#include <iostream>
#include <cmath>
using namespace std;
typedef double real; // convenient
const real x0 = 1;
const real v0 = 4;
const real acc = -0.5;
real xanalytic(real t) {
    return x0 + v0*t + 0.5*acc*t*t; }
real acc1(real x, real v, real t) { return acc; }
void take_a_step(real &x, real &v, real &t, real dt) {
    // set the acceleration.
    real a = acc1(x, v, t);
    // take a time step.
    x += v*dt + 0.5*a*dt*dt;
    v += a*dt;
    t += dt;
}
int main() {
    real t = 0, x = x0, v = v0;
    real dt = 0.01;
    real tp, xp, vp;
    cout << t << " " << x << " " << xanalytic(t) << endl;
    while (x >= 0) {
        tp = t;
        xp = x;
        vp = v;
        take_a_step(x, v, t, dt);
        // print the numerical and analytic results.
        cout << t << " " << x << " " << xanalytic(t) << endl;
    }
    cerr << "Final t = " << t << ". Analytic solution = " << (-v0 - sqrt(v0*v0 - 2*acc*x0))/acc << endl;
    return 0;
}

(d) Romberg's Method

Romberg devised an ingenious method to reduce the error in the numerical estimate of an
integral. In general, we can express an integral as a sum of its numerical estimate and error
terms, having dependence on the length of the sub-interval h at some power of it:

I = A(h) + Ch^k + C'h^{k+1} + C''h^{k+2} + ···

For example, in the Compound Trapezoidal Rule, with ∆x = h:

A(h) = T(h) = (h/2) [f(x_0) + 2f(x_0 + h) + 2f(x_0 + 2h) + ··· + f(x_N)]

Ch^k + ··· = O(h^2)  ⇒  k = 2

Similarly, for Compound Simpson's Rule, k = 4, etc.


We can express the same integral by an estimate and error terms when we divide the step size by two:

I = A(h/2) + C(h/2)^k + C'(h/2)^{k+1} + C''(h/2)^{k+2} + ···

Combining the above two expressions for I, using suitable weighting factors, we get

(2^k - 1) I = 2^k A(h/2) - A(h) + 2^k C(h/2)^k - Ch^k + O(h^{k+1}) = 2^k A(h/2) - A(h) + O(h^{k+1})

⇒ I = [2^k A(h/2) - A(h)]/(2^k - 1) + O(h^{k+1}) = B^{(1)}(h) + Dh^{k+1} + D'h^{k+2} + D''h^{k+3} + ··· = B^{(1)}(h) + O(h^{k+1})

Here, B^{(1)}(h) is a better estimate of the integral (with error O(h^{k+1})). We can use the same trick to get even better estimates of the integral, as shown below:

(2^{k+1} - 1) I = 2^{k+1} B^{(1)}(h/2) - B^{(1)}(h) + 2^{k+1} D(h/2)^{k+1} - Dh^{k+1} + O(h^{k+2})

⇒ I = [2^{k+1} B^{(1)}(h/2) - B^{(1)}(h)]/(2^{k+1} - 1) + O(h^{k+2}) = B^{(2)}(h) + O(h^{k+2})
And so on.
As an example, we can form successively better approximations of the integral I, using the Compound Trapezoidal Rule, as follows:

I = T(h) + Ch^k + C'h^{k+2} + C''h^{k+4} + ···,   k = 2

T^{(1)}(h) = (1/3) [2^k T^{(0)}(h/2) - T^{(0)}(h)]        error: O(h^{k+2}) = O(h^4)

T^{(2)}(h) = (1/15) [2^{k+2} T^{(1)}(h/2) - T^{(1)}(h)]   error: O(h^6)

T^{(3)}(h) = (1/63) [2^{k+4} T^{(2)}(h/2) - T^{(2)}(h)]   error: O(h^8)
Example-6 Romberg’s Trick applied to Trapezoidal Rule
#include <iostream>
#include <cstdio>    // for printf
#include <cstdlib>   // for abs(int)
#include <cmath>
using namespace std;
double power(double x, int p) {
    double val = 1.0;
    int j = 0;
    for (j = 1; j <= abs(p); j++) val *= x;
    // abs(p) has to be used to take into account a negative value of p
    if (p > 0) { return val; }
    else if (p < 0) { return (1.0/val); }
    else { return 1.0; }
}
double rombergtrap(double (*func)(double), double a, double b, int n, int m) {
    double TRP[10][10];
    double x = 0.0, sum = 0.0, h = 0.0, factor = 0.0;
    int i = 0, j = 0, k = 0;
    h = (b-a)/n;
    // first row: compound trapezoidal estimates with step sizes h, h/2, h/4, ...
    for (k = 0; k <= m; k++) {
        sum = (*func)(a);
        x = a + h;
        for (j = 1; j < n; j++, x += h) { sum += 2.0 * (*func)(x); }
        sum += (*func)(b);
        TRP[0][k] = sum * 0.5 * h;
        h /= 2.0;
        n *= 2;
    }
    // Romberg's trick: combine pairs of estimates to cancel the leading error term
    for (i = 1; i <= m; i++) {
        for (j = m-i; j >= 0; j--) {
            factor = power(2.0, (2*i));
            TRP[i][j] = factor * TRP[i-1][j+1] - TRP[i-1][j];
            TRP[i][j] /= (factor - 1.0);
        }
    }
    return TRP[m][0];
}
double myfunc1(double x) { return cos(x); }
int main() {
    double resromb = 0.0, low = 0.0, up = 1.0;
    int m = 0, numint = 10;
    low = 0.0; up = M_PI/2.0; // limits of the integration
    for (m = 0; m <= 4; m++) {
        resromb = rombergtrap(myfunc1, low, up, numint, m);
        printf("Order of the trick m=%d, resromb=%16.14lf \n\n", m, resromb);
    }
    return 0;
}
Example-7 The period of a simple pendulum for large angle amplitude (θ_M) is given in terms of the complete elliptic integral of the first kind K(m) as

T = 4 (L/g)^{1/2} K(sin^2(θ_M/2)) = 4 (L/g)^{1/2} ∫_0^{π/2} [1 - sin^2(θ_M/2) sin^2 φ]^{-1/2} dφ

where 0 ≤ θ_M < π. (a) Using Simpson's 1/3 rule, write in C++ a function double pendulumT(double thetaM) that evaluates the period T for a given θ_M as the argument using the above definition. Choose the value of L/g such that, as θ_M → 0, T = 1 s. Call the function pendulumT() from the main() function with different values of θ_M as input. (b) Using the above function, plot T vs θ_M for θ_M ∈ [0, π/2].
// Pendulum time period by using the trapezium rule
#include <iostream>
#include <cmath>
#include <fstream>
using namespace std;
double f(double x, double thetaM) {
    double s = pow((sin(thetaM/2.0)), 2.0);
    double t = s*pow((sin(x)), 2.0);
    double u = 1 - t;
    return 4.0*sqrt(0.0253)*(1.0/sqrt(u)); // (L/g) = 0.0253 so that T -> 1 s as thetaM -> 0
}
double pendulumT(double thetaM) {
    double x = 0.0, sum = 0.0, h = 0.0;
    double a = 0, b = M_PI/2.0;
    int i, n = 100;
    if (n == 1) { return 0.5*(b-a)*(f(a,thetaM) + f(b,thetaM)); }
    else
    {
        h = (b-a)/n;
        sum += f(a,thetaM);
        x = a + h;
        for (i = 1; i < n; i++)
        {
            sum += 2.0*f(x,thetaM);
            x += h;
        }
        sum += f(b,thetaM);
        double T = (0.5*h*sum);
        return T;
    }
}
int main()
{
    ofstream fout("pendulum.dat");
    for (double thetaM = 0.0; thetaM <= (M_PI/2.0); thetaM += M_PI/10000.0)
    {
        cout << thetaM << " " << pendulumT(thetaM) << endl;
        fout << thetaM << " " << pendulumT(thetaM) << endl;
    }
    return 0;
}
// Pendulum time period by using Simpson's 1/3 rule
/* double pendulumTsim(double thetaM)
{
    double a = 0, b = M_PI/2.0;
    int i, n = 100;
    double I = 0.0, J = 0.0;
    double h = (b-a)/n;
    for (i = 1; i < n; i += 2) { // odd interior points, i.e. 1,3,5,...,n-1; the number of intervals must be even for Simpson's 1/3 rule
        I += f(a + i*h, thetaM);
    }
    for (i = 2; i < n; i += 2) { // even interior points, i.e. 2,4,6,...,n-2
        J += f(a + i*h, thetaM);
    }
    double T = (h/3)*(f(a,thetaM) + (4*I) + (2*J) + f(b,thetaM));
    return T;
}
int main() {
    ofstream fout("pendulum.dat");
    for (double thetaM = 0.0; thetaM <= (M_PI/2.0); thetaM += M_PI/10000.0)
    {
        cout << thetaM << " " << pendulumTsim(thetaM) << endl;
        fout << thetaM << " " << pendulumTsim(thetaM) << endl;
    }
    return 0;
} */
3. Exercises

1. Write a code that evaluates the second derivative of sin^2(x) and plot the errors in your
result when you use forward, backward and centered difference approximations. Use both
first order and second order approximation schemes. Plot the errors in gnuplot and save
the error plots as postscript files.

2. Change the code in example 3 to implement the Simpson’s rule.

3. Integrate the function sin x between the limits 0 to π using (a) the Trapezoidal rule, (b) Simpson's rule, (c) Simpson's 3/8 rule and (d) Boole's rule. Compare the errors in calculation with the actual result for all these different rules.

4. Compute the following integrals correct to 7 decimal places:

   I_1 = ∫_{π/4}^{π/2} (cos x · ln sin x)/(1 + sin x) dx   (Ans: -0.02657996319303578798...)

   I_2 = ∫_0^{3π/2} tan x dx   (Ans: 0.96054717892973049468...)

5. Use Romberg's trick applied to Simpson's 1/3 rule.

6. Use Romberg's trick applied to Simpson's 3/8 rule.

7. Write a code to use Romberg's trick using recursive calling of a function. Your definition of the function that implements the trick may look like

double rombergtrapzd(double(*func)(double), double a, double b, int n, int mrom)


{
.......
res = . . . . . . . rombergtrapzd (. . . . . . .); //Recursive calling
. . . . . . ..
return res;
}
