Lecture 02
NUMERICAL INTEGRATION (II)
Trapezoidal rule
Simpson's rule
31
■ Let's practice a classical integration method, the trapezoidal rule, on e.g.

f(x) = x - x^2 + x^3 - x^4 + \frac{\sin(13x)}{13}

\int f(x)\,dx = \frac{x^2}{2} - \frac{x^3}{3} + \frac{x^4}{4} - \frac{x^5}{5} - \frac{\cos(13x)}{169}

[Figure: one trapezoidal panel of width h between x_i and x_{i+1}, with edge values f_i and f_{i+1}; the full integration range has length L.]
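For reference, the per-panel formula that the figure illustrates (the standard trapezoidal rule, the one applied in the code on the next slide) is:

\int_{x_i}^{x_{i+1}} f(x)\,dx \approx \frac{h}{2}\,[\,f_i + f_{i+1}\,], \qquad h = x_{i+1} - x_i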
32
TRAPEZOIDAL RULE: IMPLEMENTATION

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    # analytic antiderivative of f, used to compute the exact reference value
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)

area, x, h = 0., 0., 1E-3        # start with h = 10^-3
f0 = f1 = f(x)
while x < 1.2 - h*0.5:           # loop over panels [x, x+h] up to 1.2
    f0, f1 = f1, f(x+h)          # reuse the previous right edge as the new left edge
    x += h
    area += f0 + f1
area *= h/2.                     # common factor h/2 applied once at the end

Exact:     0.1765358676046381
Numerical: 0.1765352854227494
diff:      0.0000005821818886
34
ERROR ANALYSIS: APPROXIMATION ERROR

■ Consider Taylor expansions for f(x):

f(x+h) \approx f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + ...

Exact integration:

\int_0^h f(x+\eta)\,d\eta \approx h f(x) + \frac{h^2}{2} f'(x) + \frac{h^3}{6} f''(x) + \frac{h^4}{24} f'''(x) + ...

Trapezoidal rule:

\frac{h}{2}\,[f(x) + f(x+h)] \approx h f(x) + \frac{h^2}{2} f'(x) + \frac{h^3}{4} f''(x) + \frac{h^4}{12} f'''(x) + ...

Error per interval: \approx \frac{h^3}{12} f''(x) + ...

Approximation error: \epsilon_{approx} \approx O(h^3) \times \frac{L}{h} \approx O(h^2)
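A minimal sketch (my addition, reusing the slide's test function together with a hypothetical trapezoid() helper) to check the O(h^2) prediction numerically: halving h should shrink the error by roughly a factor of 4.

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

def trapezoid(a, b, h):
    # composite trapezoidal rule with fixed step h (assumes (b-a)/h is a whole number)
    n = round((b - a) / h)
    area = 0.
    for i in range(n):
        area += f(a + i*h) + f(a + (i+1)*h)
    return area * h / 2.

exact = fint(1.2) - fint(0.)
for h in (4E-3, 2E-3, 1E-3, 5E-4):
    print("h = %.0e   error = %.3e" % (h, abs(trapezoid(0., 1.2, h) - exact)))
# each halving of h should reduce the error by roughly a factor of 4, i.e. O(h^2)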
35
ERROR ANALYSIS: TOTAL ERROR

■ If we believe the theory:

\epsilon_{roundoff} \approx O(\sqrt{N}\,\epsilon_m), \qquad N \propto \frac{L}{h} = \text{total no. of operation steps}

■ The total error:

\epsilon_{total} \approx O(\sqrt{N}\,\epsilon_m) + O(h^2) \approx O\!\left(\frac{\epsilon_m}{\sqrt{h}}\right) + O(h^2)

For a double-precision floating point number, \epsilon_m \approx O(10^{-15}) - O(10^{-16}).
The best precision will be of O(10^{-12}), reached when h \approx O(\epsilon_m^{1/2.5}) \approx O(10^{-6}).
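Where the exponent 1/2.5 comes from (a sketch of the balancing argument, not spelled out on the slide): minimize the two competing terms with respect to h,

\frac{d}{dh}\left[\frac{\epsilon_m}{\sqrt{h}} + h^2\right] = -\frac{\epsilon_m}{2h^{3/2}} + 2h = 0
\quad\Rightarrow\quad h_{opt} \sim \epsilon_m^{2/5} = \epsilon_m^{1/2.5} \approx 10^{-6},
\qquad \epsilon_{total}(h_{opt}) \sim h_{opt}^2 \approx 10^{-12}.

The same argument with O(h^4) in place of O(h^2) gives the exponent 1/4.5 quoted later for Simpson's rule.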
36
AN EASY IMPROVEMENT
■ Another classical method: Simpson's rule.
■ Instead of linear interpolation, we can use a 2nd-order (parabola) interpolation through 3 points:

[Figure: two adjacent panels of width h each, spanning x_i, x_{i+1}, x_{i+2}, with function values f_i, f_{i+1}, f_{i+2}; the full integration range has length L.]
37
THE FORMULAE
■ Treat the function as a parabola on the interval [-1,+1]:

f(x) \approx ax^2 + bx + c, \qquad \int_{-1}^{+1} f(x)\,dx = \left[\frac{a}{3}x^3 + \frac{b}{2}x^2 + cx\right]_{-1}^{+1} = \frac{2a}{3} + 2c

f(+1) \approx a + b + c
f(0) \approx c
f(-1) \approx a - b + c

Solve for a, b, c:  \int_{-1}^{+1} f(x)\,dx = \frac{f(-1)}{3} + \frac{4f(0)}{3} + \frac{f(+1)}{3}

Simpson's rule: \int_0^{2h} f(x+\eta)\,d\eta \approx \frac{h}{3} f(x) + \frac{4h}{3} f(x+h) + \frac{h}{3} f(x+2h)

Total integration:

\int f(x)\,dx \approx \frac{h}{3} f_1 + \frac{4h}{3} f_2 + \frac{2h}{3} f_3 + \frac{4h}{3} f_4 + \frac{2h}{3} f_5 + ... + \frac{4h}{3} f_{N-1} + \frac{h}{3} f_N
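A quick sanity check (my own sketch, not part of the slide) that the 1/3, 4/3, 1/3 weights are exact on [-1,+1] for any parabola, and in fact also for cubics, so the first surviving error term involves f^(4):

def simpson_panel(func):
    # 1-4-1 weights on the reference interval [-1, +1]
    return func(-1.)/3. + 4.*func(0.)/3. + func(+1.)/3.

print(simpson_panel(lambda x: x**2))   # 0.666... = 2/3, exact
print(simpson_panel(lambda x: x**3))   # 0.0, exact (odd function)
print(simpson_panel(lambda x: x**4))   # 0.666..., exact value is 2/5 = 0.4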
38
SIMPSON’S RULE: IMPLEMENTATION

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    # analytic antiderivative of f, used to compute the exact reference value
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)

area, x, h = 0., 0., 1E-3
f0 = f1 = f2 = f(x)
while x < 1.2 - h*0.5:                  # loop over double panels [x, x+2h]
    f0, f1, f2 = f2, f(x+h), f(x+h*2.)  # reuse the previous right edge
    x += h*2.
    area += f0 + f1*4. + f2             # 1-4-1 Simpson weights
area *= h/3.

Exact:     0.1765358676046381
Numerical: 0.1765358676063498
diff:      0.0000000000017117

Error per double panel: \approx \frac{h^5}{90} f^{(4)}(x) + ... \qquad \epsilon_{approx} \approx O(h^5) \times \frac{L}{h} \approx O(h^4)
40
SIMPSON’S RULE: ERROR ANALYSIS (II)

■ The total error is given by:

\epsilon_{total} \approx O(\sqrt{N}\,\epsilon_m) + O(h^4) \approx O\!\left(\frac{\epsilon_m}{\sqrt{h}}\right) + O(h^4)

The best precision could be of O(10^{-14}), reached when h \approx O(\epsilon_m^{1/4.5}) \approx O(10^{-4}).
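A small scan (my addition, using a hypothetical composite simpson() helper with the same f and range as on the implementation slide) to illustrate this behaviour: the error should keep dropping roughly like h^4 until round-off takes over near the h ~ 10^-4 quoted above.

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

def simpson(a, b, h):
    # composite Simpson's rule; assumes (b-a)/(2h) is a whole number of double panels
    n = round((b - a) / (2.*h))
    area = 0.
    for i in range(n):
        x = a + 2.*i*h
        area += f(x) + 4.*f(x + h) + f(x + 2.*h)
    return area * h / 3.

exact = fint(1.2) - fint(0.)
for h in (1E-2, 1E-3, 1E-4, 1E-5):
    print("h = %.0e   error = %.3e" % (h, abs(simpson(0., 1.2, h) - exact)))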
41
COMMENTS
■ Maybe you have already realized the general rules:
▫ The approximation error of numerical integration depends heavily on the algorithm (cancellation of higher-order error terms).
▫ The round-off error and the speed of the calculation depend on the number of steps.
▫ The best algorithm uses as few steps/points as possible, with as high an order as possible.
▫ Adaptive stepping can be a solution.
▫ Many integration rules can be generalized as a sum of weights times function values f(x_i), i.e. (see the sketch below)

\int f(x)\,dx \approx \sum_{i=1}^{N} w_i \cdot f(x_i)

The art is to find the best weights w_i with the smallest N!
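As a concrete illustration of the sum-of-weights picture (my own sketch, not from the slides, using a hypothetical quadrature() helper): both rules of this lecture are just different weight vectors applied to the same equally spaced samples.

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def quadrature(func, a, b, n, rule="simpson"):
    # generic "sum of weights times samples" integrator on n+1 equally spaced points
    h = (b - a) / n
    xs = [a + i*h for i in range(n + 1)]
    if rule == "trapezoid":
        w = [h/2.] + [h]*(n - 1) + [h/2.]
    else:  # Simpson: n must be even
        w = [h/3.] + [4.*h/3. if i % 2 else 2.*h/3. for i in range(1, n)] + [h/3.]
    return sum(wi * func(xi) for wi, xi in zip(w, xs))

print(quadrature(f, 0., 1.2, 1200, "trapezoid"))
print(quadrature(f, 0., 1.2, 1200, "simpson"))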
42