Numerical Integration (Quadrature)
Sachin Shanbhag
Dept. Scientific Computing
Numerical Integration

Some definite integrals have no elementary closed form and must be evaluated
numerically or via series, e.g.

\int_0^\infty \frac{dx}{\cosh x} = 2 \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1} = \frac{\pi}{2}

\int_a^b e^{-x^2}\,dx   (the error function, up to a constant factor)
Possible Issues
the integrand is some sort of table of numbers
regularly spaced
irregularly spaced
contaminated with noise (experimental data)
the integrand is computable everywhere in the range of integration,
but there may be
infinite range of integration
local discontinuities
considerations
time to compute the integral
estimate of the error due to
- truncation
- round-off
- noise in tabulated values
"
a
N (1
i= 0
i= 0
intervals
dx
error
0.785398
-0.340759
0.392699
-0.183465
0.196350
-0.094960
16
0.098175
-0.048284
32
0.049087
-0.024343
64
0.024544
-0.012222
128
0.012272
-0.006123
256
0.006136
-0.003065
512
0.003068
-0.001533
10
1024
0.001534
-0.000767
Analytically
! /2
"0
! /2
0
=1
Note that the error decreases by a factor of 2, just like our discretization interval dx.
Question: why is the error = I(exact) - I(calc) negative?
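The table above can be reproduced in a few lines of Python (a sketch added here for illustration; the slides themselves give no code):

```python
import math

def left_rectangle(f, a, b, n):
    """Left-endpoint rectangle rule: sum f(x_i) * dx over n equal intervals."""
    dx = (b - a) / n
    return dx * sum(f(a + i * dx) for i in range(n))

# error = I(exact) - I(calc) for the test integral \int_0^{pi/2} cos x dx = 1
for n in (2, 4, 8, 16):
    err = 1.0 - left_rectangle(math.cos, 0.0, math.pi / 2, n)
    print(f"{n:5d}  {math.pi / 2 / n:10.6f}  {err:11.6f}")
```

Since cos x is decreasing on [0, pi/2], the left-endpoint rectangles overshoot, so I(calc) > I(exact) and the error comes out negative, matching the table.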
Midpoint Rule

Instead of having the top of the rectangle hit the left (or right) edge, we can
have it hit the function at the midpoint of each interval:

\int_a^b f(x)\,dx \approx \sum_{i=0}^{N-1} f\!\left(\frac{x_i + x_{i+1}}{2}\right) \Delta x

intervals      dx          error
    2       0.785398    -0.026172153
    4       0.392699    -0.006454543
    8       0.196350    -0.001608189
   16       0.098175    -0.000401708
   32       0.049087    -0.000100406
   64       0.024544    -0.000025100
  128       0.012272    -0.000006275
  256       0.006136    -0.000001569
  512       0.003068    -0.000000392
 1024       0.001534    -0.000000098

Note that the error now decreases by a factor of 4 each time dx is halved.
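A minimal sketch of the midpoint rule (illustrative code, not from the slides), again on \int_0^{pi/2} cos x dx:

```python
import math

def midpoint_rule(f, a, b, n):
    """Midpoint rule: sample f at the center of each of n equal intervals."""
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

for n in (2, 4, 8):
    err = 1.0 - midpoint_rule(math.cos, 0.0, math.pi / 2, n)
    print(n, err)
```

The errors match the table above and shrink roughly 4x per halving of dx, i.e. the midpoint rule is second order.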
Orientation

Newton-Cotes Methods
Use interpolating polynomials on a regular grid. The trapezoid, Simpson's 1/3,
Simpson's 3/8, and Bode's rules are the special cases in which 1st, 2nd, 3rd,
and 4th order polynomials are used, respectively.

Gauss Quadrature
Like Newton-Cotes, but instead of a regular grid, choose a set of nodes that
lets you get higher order accuracy.
Newton-Cotes Methods

Approximate f(x) by an interpolating polynomial and integrate the polynomial:

\int_a^b f(x)\,dx \approx \int_a^b (a_0 + a_1 x + \dots + a_n x^n)\,dx
  = a_0 (b-a) + a_1 \frac{b^2 - a^2}{2} + \dots + a_n \frac{b^{n+1} - a^{n+1}}{n+1}
Newton-Cotes Methods
Trapezoid Method (first order polynomial is used)

I = \int_a^b f(x)\,dx \approx \int_a^b (a_0 + a_1 x)\,dx

Interpolating f linearly between the endpoints,

I \approx \int_a^b \left[ f(a) + \frac{f(b)-f(a)}{b-a}(x-a) \right] dx
  = (b-a)\,\frac{f(a) + f(b)}{2}

Composite rule over n intervals:

\int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} \frac{1}{2}\,(x_{i+1} - x_i)\,\big(f(x_{i+1}) + f(x_i)\big)
Test integral: \int_0^{\pi/2} \cos x\,dx = 1

intervals      dx           error
    2       0.78539816   0.05194055
    4       0.39269908   0.01288420
    8       0.19634954   0.00321483
   16       0.09817477   0.00080332
   32       0.04908739   0.00020081
   64       0.02454369   0.00005020
  128       0.01227185   0.00001255
  256       0.00613592   0.00000314
  512       0.00306796   0.00000078
 1024       0.00153398   0.00000020

Error bound:

\text{Error} \le \frac{b-a}{12}\, h^2 \max_{x \in [a,b]} |f''(x)|
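The composite trapezoid rule and its error bound can be checked numerically (an illustrative sketch, not part of the slides; for cos x, max |f''| = 1 on [0, pi/2]):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n equal intervals."""
    dx = (b - a) / n
    return dx * (0.5 * (f(a) + f(b)) + sum(f(a + i * dx) for i in range(1, n)))

a, b = 0.0, math.pi / 2
for n in (2, 4, 8):
    err = 1.0 - trapezoid(math.cos, a, b, n)
    h = (b - a) / n
    bound = (b - a) / 12 * h**2 * 1.0   # max |f''(x)| = max |cos x| = 1 here
    print(n, err, bound)
```

In each row the observed error stays below the h^2 bound, and both shrink by roughly 4x per doubling of n.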
Newton-Cotes Methods

Simpson's 1/3 Rule (second order polynomial, h = (b-a)/2):

\int_a^b f(x)\,dx \approx \int_a^b (a_0 + a_1 x + a_2 x^2)\,dx

Simpson's 3/8 Rule (third order polynomial, h = (b-a)/3):

\int_a^b f(x)\,dx \approx \int_a^b (a_0 + a_1 x + a_2 x^2 + a_3 x^3)\,dx
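A sketch of the composite Simpson's 1/3 rule (illustrative code, not from the slides): interior points alternate weights 4 and 2, and the rule is exact for cubics.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd interior points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior points
    return h * s / 3

print(simpson(math.cos, 0.0, math.pi / 2, 4))        # close to 1
print(simpson(lambda x: x ** 3, 0.0, 1.0, 2))        # exact: 1/4
```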
Newton-Cotes Methods
These are called closed formulae because we use function evaluations at the
end-points of the interval. There are open formulae which don't evaluate f(a)
and f(b), but we won't discuss them here.
wikipedia.org
Romberg Integration
The composite trapezoid rule has error

E_t = \frac{(b-a)^3}{12 n^2} \cdot \frac{\sum_{i=1}^{n} f''(\xi_i)}{n}

Think of \frac{\sum_{i=1}^{n} f''(\xi_i)}{n} as an average value of f'', so that

E_t \approx \frac{C}{n^2}
Romberg Integration
How good is this approximation?

Consider

x = \int_8^{30} \left[ 2000 \ln\!\left( \frac{140000}{140000 - 2100 t} \right) - 9.8\, t \right] dt

computed with the n-segment trapezoid rule:

  n    Value     E_t
  1    11868    807
  2    11266    205
  3    11153    91.4
  4    11113    51.5
  5    11094    33.0
  6    11084    22.9
  7    11078    16.8
  8    11074    12.9
Romberg Integration
The true error gets approximately quartered as the number of segments is
doubled. This information is used to get a better approximation of the
integral, and is the basis of Romberg integration (or Richardson
extrapolation):

E_t \approx \frac{C}{n^2}
\quad\Longrightarrow\quad
I_{exact} \approx I_{2n} + \frac{I_{2n} - I_n}{3}
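One Richardson step can be checked directly on the rocket integral above (an illustrative sketch; the trapezoid helper is the same rule as before):

```python
import math

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return dx * (0.5 * (f(a) + f(b)) + sum(f(a + i * dx) for i in range(1, n)))

def v(t):
    # rocket velocity integrand from the slide
    return 2000 * math.log(140000 / (140000 - 2100 * t)) - 9.8 * t

I2 = trapezoid(v, 8, 30, 2)            # ~11266 m
I4 = trapezoid(v, 8, 30, 4)            # ~11113 m
richardson = I4 + (I4 - I2) / 3        # ~11062 m
print(I2, I4, richardson)
```

Two coarse trapezoid estimates combine into a value within about 1 m of the true 11061 m.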
Example
The vertical distance covered by a rocket from t = 8 to t = 30 seconds is given by

x = \int_8^{30} \left[ 2000 \ln\!\left( \frac{140000}{140000 - 2100 t} \right) - 9.8\, t \right] dt

1. Use Richardson's extrapolation formula to find the distance.
2. Find the true error E_t and the absolute relative true error.

  n    Value     E_t     RelErr (%)
  1    11868    807       7.296
  2    11266    205       1.854
  3    11153    91.4      0.8265
  4    11113    51.5      0.4655
  5    11094    33.0      0.2981
  6    11084    22.9      0.2070
  7    11078    16.8      0.1521
  8    11074    12.9      0.1165
Solution
I_2 = 11266 m
I_4 = 11113 m

Using Richardson's extrapolation formula for the trapezoidal rule, choosing n = 2:

I_{true} \approx I_{2n} + \frac{I_{2n} - I_n}{3} = 11113 + \frac{11113 - 11266}{3} = 11062 m   (I_{true,est})

E_t = I_{exact} - I_{true,est} = 11061 - 11062 = -1 m

\epsilon_t = |E_t| / I_{exact} \times 100\% \approx 0.009\%
Solution

x = \int_8^{30} \left[ 2000 \ln\!\left( \frac{140000}{140000 - 2100 t} \right) - 9.8\, t \right] dt

  n   Trapezoidal   e_t for Trap.   Richardson's    e_t for Richardson's
        Rule          Rule (%)      Extrapolation   Extrapolation (%)
  1    11868          7.296            --               --
  2    11266          1.854          11065           -0.03616
  4    11113          0.4655         11062            0.009041
  8    11074          0.1165         11061            0.0000
I_n^{(1)} represents the values obtained from the regular trapezoidal rule
with n intervals. k = 2 represents the values obtained using the true error
estimate as O(h^2). I_n^{(k)} has an error of the order 1/n^{2k}.

              Trapezoid   First Order   Second Order   Third Order
               (k=1)        (k=2)         (k=3)          (k=4)
  1-segment    11868
  2-segment    11266       11065
  4-segment    11113       11062         11062
  8-segment    11074       11061         11061          11061
First-order extrapolation:

I \approx I_{2n} + \frac{I_{2n} - I_n}{3}

Since the trapezoid error has the expansion

E_t = \frac{C_1}{n^2} + \frac{C_2}{n^4} + \frac{C_3}{n^6} + \dots

this first extrapolation is accurate to O(1/n^4). Over here it is identical to
Simpson's rule; in fact, this is how Numerical Recipes (Press et al.)
implements the Simpson's rule.

Successive iterations:

I_{2n}^{(k)} = \frac{4^{k-1}\, I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1} - 1}, \qquad k \ge 2
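The successive iterations above build a triangular tableau. A minimal sketch (illustrative code, not from the slides; the loop index k below starts at 1, so the factor 4^k matches the slide's 4^{k-1} with k >= 2):

```python
import math

def trapezoid(f, a, b, n):
    dx = (b - a) / n
    return dx * (0.5 * (f(a) + f(b)) + sum(f(a + i * dx) for i in range(1, n)))

def romberg(f, a, b, levels):
    """Romberg tableau: row 0 is trapezoid with n = 1, 2, 4, ...;
    each later row extrapolates the previous one."""
    R = [[trapezoid(f, a, b, 2 ** j) for j in range(levels)]]
    for k in range(1, levels):
        prev = R[-1]
        R.append([(4 ** k * prev[j + 1] - prev[j]) / (4 ** k - 1)
                  for j in range(len(prev) - 1)])
    return R

tab = romberg(math.cos, 0.0, math.pi / 2, 5)
print(tab[-1][0])   # best estimate of \int_0^{pi/2} cos x dx = 1
```

With only 16 function evaluations the final entry is accurate to roughly machine-level precision, far beyond the plain trapezoid rule.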
\int_0^{x} \log\!\left(u + \sqrt{u^2 + 1}\right) du
Romberg Integration

Questions:
1.
2.

No! But you have to derive new relationships in lieu of:

I_{2n}^{(k)} = \frac{4^{k-1}\, I_{2n}^{(k-1)} - I_n^{(k-1)}}{4^{k-1} - 1}, \qquad k \ge 2

Note that it may destroy the recursive structure used in the expression above
to minimize function calls.
Gauss Quadrature

Motivation: the composite trapezoid rule can be written as a weighted sum of
function values,

\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i),
\qquad
c_i = \begin{cases} h & i = 1, 2, \dots, n-1 \\ 0.5\,h & i = 0 \text{ and } n \end{cases}

c_i : weights
x_i : nodes
Problem

\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i),
\qquad
c_i = \int_a^b l_i(x)\,dx

where the l_i(x) are the Lagrange interpolation basis polynomials.

Newton-Cotes: for a uniform grid {x_i}, P_n(x) is exact if f(x) is a
polynomial of degree <= n.

Gaussian quadrature: choose the n+1 grid points {x_i} so that the polynomial
P_n(x) is exact if f(x) is a polynomial of degree <= 2n+1.
Two-point example on [-1, 1]:

\int_{-1}^{1} f(x)\,dx = c_0 f(x_0) + c_1 f(x_1)

Brute force: set up equations for all polynomials of degree 0 to 2n+1 = 3 and
solve for the c_i and x_i:

f(x) = 1:    c_0 + c_1             = \int_{-1}^{1} 1\,dx   = 2
f(x) = x:    c_0 x_0 + c_1 x_1     = \int_{-1}^{1} x\,dx   = 0
f(x) = x^2:  c_0 x_0^2 + c_1 x_1^2 = \int_{-1}^{1} x^2\,dx = 2/3
f(x) = x^3:  c_0 x_0^3 + c_1 x_1^3 = \int_{-1}^{1} x^3\,dx = 0

Solution:  c_0 = c_1 = 1;  x_0 = -1/\sqrt{3},  x_1 = 1/\sqrt{3}
wikipedia.org
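The brute-force solution is easy to verify numerically (a small check added here, not part of the slides):

```python
import math

x0, x1 = -1 / math.sqrt(3), 1 / math.sqrt(3)
c0 = c1 = 1.0

def gauss2(f):
    """Two-point Gauss-Legendre rule on [-1, 1]."""
    return c0 * f(x0) + c1 * f(x1)

# exact for all polynomials up to degree 3:
print(gauss2(lambda x: 1.0))      # 2
print(gauss2(lambda x: x))        # 0
print(gauss2(lambda x: x * x))    # 2/3
print(gauss2(lambda x: x ** 3))   # 0
```

Two function evaluations integrate every cubic exactly, whereas two-point Newton-Cotes (the trapezoid rule) is exact only for linear functions.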
Change of interval to [-1, 1]:

\int_a^b f(y)\,dy = \frac{b-a}{2} \int_{-1}^{1} f\!\left( \frac{b-a}{2}\,x + \frac{a+b}{2} \right) dx
  \approx \frac{b-a}{2} \sum_{i=1}^{n} c_i\, f\!\left( \frac{b-a}{2}\,x_i + \frac{a+b}{2} \right)
Example

\int_0^1 e^{-x^2}\,dx = \frac{1}{2} \int_{-1}^{1} e^{-(0.5\,t + 0.5)^2}\,dt
  \approx \frac{1}{2} \left[ e^{-\left(-\frac{0.5}{\sqrt{3}} + 0.5\right)^2} + e^{-\left(\frac{0.5}{\sqrt{3}} + 0.5\right)^2} \right]

using 2 points.
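Carrying out the two-point computation (an illustrative sketch; math.erf supplies the reference value):

```python
import math

x0, x1 = -1 / math.sqrt(3), 1 / math.sqrt(3)

def g(t):
    # integrand after mapping [0, 1] -> [-1, 1]: x = 0.5 t + 0.5
    return math.exp(-(0.5 * t + 0.5) ** 2)

estimate = 0.5 * (g(x0) + g(x1))
exact = 0.5 * math.sqrt(math.pi) * math.erf(1.0)   # \int_0^1 e^{-x^2} dx
print(estimate, exact)
```

Just two function evaluations already agree with the exact value to about three decimal places.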
Advantages/Disadvantages

1. For functions that are smooth or approximately polynomial, Gauss quadrature
   beats Newton-Cotes in accuracy.
2. Example:

   erf(1) = \frac{2}{\sqrt{\pi}} \int_0^1 e^{-x^2}\,dx

3. Unlike Romberg integration, we cannot successively refine
   (Gauss-Kronrod tries to overcome that).

There are other forms of Gauss quadrature (not only Gauss-Legendre),

\int_a^b w(x)\, f(x)\,dx = \sum_{i=1}^{n} c_i f(x_i)

which are useful when:
1. there are discontinuities,
2. the range of integration is not finite,
3. the weight w(x) can help the function look more polynomial,
4. etc.
Generalization

The fundamental theorem of Gaussian quadrature states that the optimal nodes
x_i of the n-point Gaussian quadrature formula are precisely the roots of the
orthogonal polynomial for the same interval and weighting function:

\int_a^b w(x)\, f(x)\,dx = \sum_{i=1}^{n} c_i f(x_i)

(wikipedia)

Gauss-Legendre

All we do is look for the zeros of P_n(x) in [-1, 1]; these are our x_i. The
c_i can be obtained from

c_i = \frac{2}{(1 - x_i^2)\,\big(P_n'(x_i)\big)^2}
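As a sanity check of the weight formula (a small illustration, not from the slides), take n = 2: P_2(x) = (3x^2 - 1)/2, whose zeros are x = +-1/sqrt(3), with P_2'(x) = 3x.

```python
import math

# zeros of P_2(x) = (3 x^2 - 1) / 2, with derivative P_2'(x) = 3 x
nodes = (-1 / math.sqrt(3), 1 / math.sqrt(3))
weights = [2 / ((1 - x ** 2) * (3 * x) ** 2) for x in nodes]
print(weights)   # both weights are ~1
```

The formula reproduces c_0 = c_1 = 1, agreeing with the brute-force derivation earlier.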
Generalization

In practice, we look at the limits and the weighting function w(x) for the
integral we want to evaluate, and decide what quadrature formula might be best.
One-Dimensional Integrals

Methodical approaches: trapezoid rule, Simpson's rule, Gauss quadrature.
Quadrature formula:

I \approx \Delta x \sum_{i=1}^{n} f(x_i) = \frac{b-a}{n} \sum_{i=1}^{n} f(x_i)

Stochastic approach: same quadrature formula, different selection of points
(the x_i are chosen at random):

I \approx \frac{b-a}{n} \sum_{i=1}^{n} f(x_i)

http://www.eng.buffalo.edu/~kofke/ce530/Applets/applets.html

[Figure: plot of successive deviates (X_n, X_{n+1}) of the RANDU pseudo-random
number generator. Not so random!]
http://www.purinchu.net/wp/2009/02/06/the-randu-pseudo-random-number-generator/
Comparison of errors

Methodical approach (1D):   \Delta I \propto \Delta x^2 \propto n^{-2}
Monte Carlo integration:    \Delta I \propto n^{-1/2}

In d dimensions, with \Delta x = L / n^{1/d}:

Methodical approach:        \Delta I \propto n^{-2/d}
Monte Carlo integration:    \Delta I \propto n^{-1/2}, independent of dimension!

MC wins at about d = 4.

[Figure: d = 2 example; 36 points, 36^{1/2} = 6 in each row]
Example: mean-square distance from origin

\langle r^2 \rangle = \frac{\iint_R (x^2 + y^2)\, dx\, dy}{\iint_R dx\, dy}

What are the weights w_i?
High-Dimensional Integrals

Sample integral from statistical physics:

\langle U \rangle = \frac{1}{Z_N} \frac{1}{N!} \int dr^N\, U(r^N)\, e^{-\beta U(r^N)}

To compute such properties by a methodical (grid-based) approach on an IBM
BlueGene/L system (300 Tflop):

Total time: 10^{300} / 10^{15} \sim 10^{285} s = 10^{277} years
Age of the universe: ~10^{17} s
Number of atoms on earth: ~10^{50}
Writing the region R via an indicator function on the square [-0.5, +0.5]^2:

\langle r^2 \rangle = \frac{\int_{-0.5}^{+0.5} dx \int_{-0.5}^{+0.5} dy\, (x^2 + y^2)\, s(x,y)}
                           {\int_{-0.5}^{+0.5} dx \int_{-0.5}^{+0.5} dy\, s(x,y)}

s = \begin{cases} 1 & \text{inside } R \\ 0 & \text{outside } R \end{cases}
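The indicator-function form maps directly onto Monte Carlo sampling. A sketch (illustrative code; the slide leaves R generic, so here R is assumed to be the disk of radius 0.5 centered at the origin, for which the exact answer is a^2/2 = 0.125):

```python
import random

def mean_square_distance(n, seed=0):
    """MC estimate of <r^2> over R, assumed here to be the disk of
    radius 0.5 centered at the origin."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.uniform(-0.5, 0.5)
        y = rng.uniform(-0.5, 0.5)
        s = 1.0 if x * x + y * y <= 0.25 else 0.0   # indicator of R
        num += (x * x + y * y) * s
        den += s
    return num / den

print(mean_square_distance(100000))   # exact value for this disk is 0.125
```

Both the numerator and denominator are estimated from the same uniform sample over the bounding square, exactly as in the formula above.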
Similarly, for the statistical physics integral

\langle U \rangle = \frac{1}{Z_N} \frac{1}{N!} \int dr^N\, U(r^N)\, e^{-\beta U(r^N)}

most configurations contribute almost nothing (e^{-\beta U} \to 0), which
motivates importance sampling.
Importance Sampling
I = \int_0^1 3 x^2\, dx, \qquad f(x) = 3x^2, \quad \pi(x) = 2x

Choose the points x_i according to a distribution \pi(x), so the local
interval widths satisfy \Delta x_i \propto 1/\pi(x_i):

I \approx \sum_{i=1}^{n} f(x_i)\, \Delta x_i,
\qquad
\Delta x_i = \frac{b-a}{n} \frac{1}{\pi(x_i)}

I \approx \frac{b-a}{n} \sum_{i=1}^{n} \frac{f(x_i)}{\pi(x_i)}

The variance of the estimate is

\sigma_I^2 \approx \frac{\left\langle (f/\pi)^2 \right\rangle - \left\langle f/\pi \right\rangle^2}{n}
Example

We want \pi(x) = a x on x = (0, 1), with a and c fixed by the boundary
conditions (normalization). With z a uniform deviate on (0, 1),

z = \int_0^x \pi(x')\, dx' = \frac{a x^2}{2} + c = x^2

so x = z^{1/2}: taking the square root of a uniform deviate gives linearly
distributed values.

In general, for \pi(x) = \frac{2x}{b^2 - a^2} on (a, b):

\pi(x) = \frac{dz}{dx}
\quad\Longrightarrow\quad
\int_a^x \frac{2x'}{b^2 - a^2}\, dx' = \int_0^z dz'
\quad\Longrightarrow\quad
x = \sqrt{a^2 + (b^2 - a^2)\, z}
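The square-root transformation is one line of code (an illustrative sketch; under \pi(x) = 2x the mean is \int_0^1 x \cdot 2x\,dx = 2/3):

```python
import random

def sample_linear(n, seed=0):
    """Draw n samples with density pi(x) = 2x on (0, 1) by taking
    the square root of a uniform deviate."""
    rng = random.Random(seed)
    return [rng.random() ** 0.5 for _ in range(n)]

xs = sample_linear(100000)
print(sum(xs) / len(xs))   # should be close to 2/3
```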
We do not want \pi(x) to be too much smaller or too much larger than f(x):
  - too small leads to significant contributions from a poorly sampled region
  - too large means that too much sampling is done in a region that is not
    (now) contributing much

[Figure: f(x) = 3x^2 compared against candidate weights \pi(x) = 2x and
\pi(x) = 3x^4]
\sigma_I^2 = \frac{1}{n} \left[ \int_0^1 \left( \frac{f(x)}{\pi(x)} \right)^2 \pi(x)\, dx
                              - \left( \int_0^1 f(x)\, dx \right)^2 \right]
For f(x) = 3x^2 (exact I = 1):

  \pi(x)        \sigma_I^2    \Delta I (n = 100)   \Delta I (n = 1000)
  1 (uniform)     4/(5n)          0.09                 0.03
  2x              1/(8n)          0.04                 0.01
  4x^3            1/(8n)          0.04                 0.01
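The table can be checked empirically with a short simulation (an illustrative sketch, not from the slides), comparing uniform sampling with importance sampling under \pi(x) = 2x:

```python
import math
import random

def mc_estimates(n, seed=0):
    """Estimate I = \\int_0^1 3x^2 dx = 1 two ways: uniform sampling,
    and importance sampling with pi(x) = 2x (x drawn via x = sqrt(z))."""
    rng = random.Random(seed)
    uniform = sum(3 * rng.random() ** 2 for _ in range(n)) / n
    rng = random.Random(seed)
    importance = 0.0
    for _ in range(n):
        x = math.sqrt(rng.random())          # x ~ pi(x) = 2x
        importance += 3 * x * x / (2 * x)    # average f(x) / pi(x)
    importance /= n
    return uniform, importance

u, imp = mc_estimates(10000)
print(u, imp)   # both near 1
```

Repeating over many seeds, the spread of the importance-sampled estimate follows 1/(8n) rather than the uniform-sampling 4/(5n), as in the table above.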
http://www.eng.buffalo.edu/~kofke/ce530/Applets/applets.html
Summary

Monte Carlo methods use a stochastic process to answer a non-stochastic question:
  - generate a random sample from an ensemble
  - compute properties as an ensemble average
  - permits more flexibility to design the sampling algorithm

Importance Sampling
  - focuses the selection of points on the region contributing most to the integral
  - selection of the weighting function is important
  - choosing the perfect weighting function is the same as solving the integral
Extra Slides
Lagrange basis polynomials:

l_i(x) = \frac{\pi(x)}{(x - x_i)\, \pi'(x_i)},
\qquad
\pi'(x_i) = \prod_{\substack{j=0 \\ j \ne i}}^{n} (x_i - x_j)

\lim_{x \to x_i} \frac{\pi(x)}{(x - x_i)\, \pi'(x_i)} = 1
Theorem (Gauss)

Let P(x) be a nontrivial polynomial of degree n such that it is orthogonal to
all polynomials of lesser degree:

\int_a^b x^k P(x)\, dx = 0, \qquad 0 \le k \le n-1

Then, with the x_i taken as the zeros of P(x),

\int_a^b f(x)\, dx \approx \sum_{i=0}^{n} c_i f(x_i),
\qquad
c_i = \int_a^b l_i(x)\, dx

Method 2:
In practice, we use Gauss' theorem and well-studied classes of orthogonal
polynomials. Here, Legendre polynomials (hence the name Gauss-Legendre
quadrature):

\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = \frac{2}{2n+1}\, \delta_{nm}

All we do is look for the zeros of P_n(x) in [-1, 1]; these are our x_i.

c_i = \frac{2}{(1 - x_i^2)\,\big(P_n'(x_i)\big)^2}