Lecture 02

The document discusses numerical integration techniques, focusing on the Rectangle, Trapezoidal, and Simpson's rules. It includes practical implementations in Python, error analysis, and the impact of step size on precision and computation time. The document emphasizes the importance of choosing the right algorithm to minimize approximation and round-off errors while maximizing efficiency.


Numerical Algorithms

7th March 2025


NUMERICAL
INTEGRATION
■ Starting from some super basic integration rules (a minimal rectangle-rule sketch follows this list):
▫ Rectangle rule
▫ Trapezoidal rule
▫ Simpson's rule

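Since the rectangle rule is not worked out later in the lecture, here is a minimal sketch of it (our own illustration, not part of the lecture code, using the same integrand and interval as the later examples): the integral is approximated by summing h·f(x_i) over left endpoints.

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

# Rectangle (left-endpoint) rule on [0, 1.2] with step h.
# Total approximation error is O(h), one order worse than the trapezoidal rule.
area, x, h = 0., 0., 1E-3
while x < 1.2 - h*0.5:
    area += f(x)
    x += h
area *= h
print('Rectangle rule: %.16f' % area)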
NUMERICAL
INTEGRATION (II)
■ Let's practice with a classical integration method, the trapezoidal rule, on e.g.

f(x) = x - x^2 + x^3 - x^4 + \frac{\sin(13x)}{13}

\int f(x)\,dx = \frac{x^2}{2} - \frac{x^3}{3} + \frac{x^4}{4} - \frac{x^5}{5} - \frac{\cos(13x)}{169}
[Figure: one trapezoidal panel of width h on [x_i, x_{i+1}], with function values f_i and f_{i+1}; the full integration range has length L.]
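Written out as a single sum (a standard form, not shown explicitly on the slide), the composite trapezoidal rule that the implementation on the next slide accumulates is:

\int_a^b f(x)\,dx \approx h \left( \frac{f_1}{2} + f_2 + f_3 + ... + f_{N-1} + \frac{f_N}{2} \right)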
TRAPEZOIDAL RULE:
IMPLEMENTATION
import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)
area, x, h = 0., 0., 1E-3          # start with h = 1e-3
f0 = f1 = f(x)
while x < 1.2 - h*0.5:             # integrate from 0 to 1.2
    f0, f1 = f1, f(x+h)            # shift: reuse the previous endpoint value
    x += h
    area += f0 + f1                # accumulate f_i + f_{i+1} per interval
area *= h/2.                       # common factor h/2 of the trapezoidal rule

print('Exact: %.16f, Numerical: %.16f, diff: %.16f' \
    % (fint_exact, area, abs(fint_exact-area)))

Output: Exact: 0.1765358676046381, Numerical: 0.1765352854227494, diff: 0.0000005821818886

l202-example-03.py
HOW ABOUT
A SMALLER STEP SIZE?
■ As expected, the precision cannot be improved indefinitely by simply using a
smaller h: the error shrinks by roughly a factor of 100 per decade in h down to
h ≈ 10^-6, then round-off errors take over and the result gets worse again.
■ It is also very time consuming: a smaller h means more operations and more
computing time.

Exact = 0.1765358676046381

h = 1e-02, Numeric = 0.1764776451750985, diff = 0.0000582224295395


h = 1e-03, Numeric = 0.1765352854227494, diff = 0.0000005821818886
h = 1e-04, Numeric = 0.1765358617829089, diff = 0.0000000058217292
h = 1e-05, Numeric = 0.1765358675475263, diff = 0.0000000000571118
h = 1e-06, Numeric = 0.1765358676034689, diff = 0.0000000000011692
h = 1e-07, Numeric = 0.1765358677680409, diff = 0.0000000001634028
h = 1e-08, Numeric = 0.1765358661586719, diff = 0.0000000014459662

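A minimal sketch of how such a scan could be produced (the trapezoid() helper and the loop over step sizes are ours; the original l202-example-03.py hard-codes a single h):

import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

def trapezoid(a, b, h):
    # Same accumulation loop as l202-example-03.py, wrapped in a function.
    area, x = 0., a
    f0 = f1 = f(x)
    while x < b - h*0.5:
        f0, f1 = f1, f(x+h)
        x += h
        area += f0 + f1
    return area * h/2.

fint_exact = fint(1.2) - fint(0.)
for h in (1E-2, 1E-3, 1E-4, 1E-5, 1E-6):      # 1E-7 and 1E-8 also work but are slow
    num = trapezoid(0., 1.2, h)
    print('h = %.0e, Numeric = %.16f, diff = %.16f' % (h, num, abs(fint_exact-num)))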
ERROR ANALYSIS:
APPROXIMATION ERROR
■ Consider the Taylor expansion of f(x):

f(x+h) \approx f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + ...

Exact integration:

\int_0^h f(x+\eta)\,d\eta \approx h f(x) + \frac{h^2}{2} f'(x) + \frac{h^3}{6} f''(x) + \frac{h^4}{24} f'''(x) + ...

Trapezoidal rule:

\frac{h}{2}\,[f(x) + f(x+h)] \approx h f(x) + \frac{h^2}{2} f'(x) + \frac{h^3}{4} f''(x) + \frac{h^4}{12} f'''(x) + ...

Error per interval: \approx \frac{h^3}{12} f''(x) + ...

Approximation error: \epsilon_{approx} \approx O(h^3) \times \frac{L}{h} \approx O(h^2)
ERROR ANALYSIS:
TOTAL ERROR
■ If we believe the theory:

\epsilon_{roundoff} \approx O(\sqrt{N}\,\epsilon_m),   where N \propto \frac{L}{h} = total number of operation steps.

■ The total error:

\epsilon_{total} \approx O(\sqrt{N}\,\epsilon_m) + O(h^2) \approx O\!\left(\frac{\epsilon_m}{\sqrt{h}}\right) + O(h^2)

For a double-precision floating-point number, \epsilon_m \approx O(10^{-15}) - O(10^{-16}).
The best precision will be of O(10^{-12}), reached when h \approx O(\epsilon_m^{1/2.5}) \approx O(10^{-6})
(see the optimisation sketch below).

Well, this is just an order-of-magnitude guess; in practice it depends strongly
on the algorithm and on your exact coding.
(Also, a smaller h means much more computing time!)

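As a side note not spelled out on the slide, the quoted optimal step sizes follow from minimising the total-error estimate; a short sketch for a method with approximation error O(h^p):

% Minimise e(h) = eps_m / sqrt(h) + h^p
% (p = 2 for the trapezoidal rule, p = 4 for Simpson's rule, introduced on the following slides).
\[
  \frac{d}{dh}\left( \epsilon_m h^{-1/2} + h^{p} \right)
  = -\tfrac{1}{2}\,\epsilon_m h^{-3/2} + p\,h^{p-1} = 0
  \quad\Longrightarrow\quad
  h_{\mathrm{opt}} \sim \epsilon_m^{\,2/(2p+1)} .
\]
% p = 2:  h_opt ~ eps_m^{2/5} = eps_m^{1/2.5} ~ 1e-6,  best error ~ h_opt^2 ~ 1e-12
% p = 4:  h_opt ~ eps_m^{2/9} = eps_m^{1/4.5} ~ 1e-4,  best error ~ h_opt^4 ~ 1e-14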
AN EASY IMPROVEMENT
■ Another classical method: Simpson's Rule.
■ Instead of linear interpolation, we could use a 2nd-order (parabolic)
interpolation through 3 points:
[Figure: one Simpson panel covering two intervals of width h, spanning x_i, x_{i+1}, x_{i+2} with function values f_i, f_{i+1}, f_{i+2}; the full integration range has length L.]
THE FORMULAE
■ Treat the function as a parabola on the interval [-1, +1]:

f(x) \approx a x^2 + b x + c

\int_{-1}^{+1} f(x)\,dx = \left[ \frac{a}{3} x^3 + \frac{b}{2} x^2 + c x \right]_{-1}^{+1} = \frac{2a}{3} + 2c

f(+1) \approx a + b + c
f(0)  \approx c
f(-1) \approx a - b + c

Solve for a, b, c:
\int_{-1}^{+1} f(x)\,dx = \frac{f(-1)}{3} + \frac{4 f(0)}{3} + \frac{f(+1)}{3}

Simpson's rule:
\int_0^{2h} f(x+\eta)\,d\eta \approx \frac{h}{3} f(x) + \frac{4h}{3} f(x+h) + \frac{h}{3} f(x+2h)

Total integration:
\int f(x)\,dx \approx \frac{h}{3} f_1 + \frac{4h}{3} f_2 + \frac{2h}{3} f_3 + \frac{4h}{3} f_4 + \frac{2h}{3} f_5 + ... + \frac{4h}{3} f_{N-1} + \frac{h}{3} f_N
SIMPSON’S RULE:
IMPLEMENTATION
import math

def f(x):
    return x - x**2 + x**3 - x**4 + math.sin(x*13.)/13.

def fint(x):
    return x**2/2. - x**3/3. + x**4/4. - x**5/5. - math.cos(x*13.)/169.

fint_exact = fint(1.2) - fint(0.)
area, x, h = 0., 0., 1E-3
f0 = f1 = f2 = f(x)
while x < 1.2 - h*0.5:
    f0, f1, f2 = f2, f(x+h), f(x+h*2.)   # values at x, x+h, x+2h
    x += h*2.                            # each Simpson panel spans 2h
    area += f0 + f1*4. + f2              # weights 1, 4, 1 per panel
area *= h/3.                             # common factor h/3 of Simpson's rule

print('Exact: %.16f, Numerical: %.16f, diff: %.16f' \
    % (fint_exact, area, abs(fint_exact-area)))

Output: Exact: 0.1765358676046381, Numerical: 0.1765358676063498, diff: 0.0000000000017117

l202-example-04.py
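As an independent cross-check (not part of the lecture code, and assuming SciPy is available), the same sampled values can be fed to SciPy's composite Simpson routine:

import numpy as np
from scipy.integrate import simpson

x = np.linspace(0., 1.2, 1201)                    # odd number of sample points, h = 1e-3
y = x - x**2 + x**3 - x**4 + np.sin(13.*x)/13.    # same integrand, vectorised
print(simpson(y, x=x))                            # should be close to 0.17653586760...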
SIMPSON’S RULE:
ERROR ANALYSIS
■ Could we cancel the O(h^3) and O(h^4) terms?

f(x+h)  \approx f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(x) + ...

f(x+2h) \approx f(x) + 2h f'(x) + 2h^2 f''(x) + \frac{4h^3}{3} f'''(x) + \frac{2h^4}{3} f^{(4)}(x) + ...

\frac{h}{3} f(x) + \frac{4h}{3} f(x+h) + \frac{h}{3} f(x+2h)
  \approx 2h f(x) + 2h^2 f'(x) + \frac{4h^3}{3} f''(x) + \frac{2h^4}{3} f'''(x) + \frac{5h^5}{18} f^{(4)}(x) + ...

\int_0^{2h} f(x+\eta)\,d\eta
  \approx 2h f(x) + 2h^2 f'(x) + \frac{4h^3}{3} f''(x) + \frac{2h^4}{3} f'''(x) + \frac{4h^5}{15} f^{(4)}(x) + ...

Error per interval: \approx \frac{h^5}{90} f^{(4)}(x) + ...

\epsilon_{approx} \approx O(h^5) \times \frac{L}{h} \approx O(h^4)
SIMPSON’S RULE:
ERROR ANALYSIS (II)
■ The total error is given by:

\epsilon_{total} \approx O(\sqrt{N}\,\epsilon_m) + O(h^4) \approx O\!\left(\frac{\epsilon_m}{\sqrt{h}}\right) + O(h^4)

The best precision could be of O(10^{-14}), reached when h \approx O(\epsilon_m^{1/4.5}) \approx O(10^{-4}).

Is it true? Not too bad in principle...

Exact = 0.1765358676046381

h = 1e-02, Numeric = 0.1765358847654857, diff = 0.0000000171608476


h = 1e-03, Numeric = 0.1765358676063498, diff = 0.0000000000017117
h = 1e-04, Numeric = 0.1765358676047102, diff = 0.0000000000000721
h = 1e-05, Numeric = 0.1765358676043926, diff = 0.0000000000002455
h = 1e-06, Numeric = 0.1765358676131805, diff = 0.0000000000085424
h = 1e-07, Numeric = 0.1765358676224454, diff = 0.0000000000178073
h = 1e-08, Numeric = 0.1765358675909871, diff = 0.0000000000136510

COMMENTS
■ Maybe you have already realized the general rule:
▫ The approximation error of numerical integration depends heavily on the
algorithm (cancellation of higher-order error terms).
▫ The round-off error and the speed of the calculation depend on the number
of steps.
▫ The best algorithm uses as few steps/points as possible, with as high an
order as possible.
▫ Adaptive stepping can be a solution.
▫ Many integration rules can be generalized as a sum of weights times
function values f(x_i), i.e. (see the sketch after this slide):

\int f(x)\,dx \approx \sum_{i=1}^{N} w_i \cdot f(x_i)

The art is to find the weights w_i that give the best approximation with the smallest N!

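As an illustration of this weighted-sum view (a sketch of ours, not from the lecture), the trapezoidal and Simpson results above can be reproduced by a plain dot product of a weight vector with sampled function values:

import numpy as np

def f(x):
    return x - x**2 + x**3 - x**4 + np.sin(13.*x)/13.

N, a, b = 1201, 0., 1.2                 # odd N so Simpson panels fit exactly
x = np.linspace(a, b, N)
h = x[1] - x[0]

# Trapezoidal weights: h * [1/2, 1, 1, ..., 1, 1/2]
w_trap = np.full(N, h)
w_trap[0] = w_trap[-1] = h/2.

# Simpson weights: h/3 * [1, 4, 2, 4, ..., 2, 4, 1]
w_simp = np.full(N, 2.*h/3.)
w_simp[1::2] = 4.*h/3.
w_simp[0] = w_simp[-1] = h/3.

print(np.dot(w_trap, f(x)), np.dot(w_simp, f(x)))   # both close to 0.17653586760...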
