
8. Numerical integration
Computing the definite integral of a function is a challenging problem: apart from a few exceptions, no closed-form solution exists. From a practical point of view, however, the calculation of definite integrals is a very common problem in engineering and science that needs to be addressed frequently. Many important quantities are defined through definite integrals; the center of mass, the RMS value of a signal, probabilities of continuous random variables, the mass flow of a fluid, or the position computed from the signal measured by an accelerometer are only a few of them. Luckily, from a numerical point of view, the problem can be solved and, as discussed in the present chapter, a large number of very accurate algorithms exist to estimate the numerical values of definite integrals.

8.1. Numerical integration based on polynomial interpolation


Several strategies have been developed to estimate definite integrals numerically. A popular approach, which we will explore in this chapter, involves two main steps. First, the function f(x) to be integrated is approximated by a polynomial P_n(x). Second, the definite integral is approximated by integrating that polynomial:

∫_a^b f(x) dx ≅ ∫_a^b P_n(x) dx

To calculate the polynomial 𝑃𝑛 (𝑥), a polynomial interpolation can be used by choosing 𝑛 + 1 distinct
interpolation points 𝑥𝑜 , 𝑥1 , … , 𝑥𝑛 in the integration interval [𝑎, 𝑏] as explained in section 7.5. Figure 8-1
illustrates the concept of quadrature formulas developed using polynomial interpolation. The figure
demonstrates a case where three interpolation points are selected to define the quadratic interpolating
polynomial 𝑃2 (𝑥). Instead of integrating the function 𝑓(𝑥), the quadratic polynomial 𝑃2 (𝑥) is
integrated.

<Figure 8-1 here>


Figure 8-1 Replacing a function 𝑓(𝑥) (-) to be integrated by an interpolating polynomial 𝑃𝑛 (𝑥) (--) computed based on 𝑛 + 1
distinct interpolation points 𝑥𝑜 , 𝑥1 , … , 𝑥𝑛 in the integration interval [𝑎, 𝑏].

An important characteristic of interest is the error of a quadrature formula. It is defined as the difference between the true value of the definite integral and its estimation using the quadrature formula.

DEFINITION 8-1. ERROR OF A QUADRATURE FORMULA


The error E of a quadrature formula is the difference between the true value of the definite integral and its estimation using the quadrature formula. In the case of quadrature formulas developed based on polynomial interpolation, the error is obtained as follows:

E = | ∫_a^b f(x) dx − ∫_a^b P_n(x) dx | = | ∫_a^b [f(x) − P_n(x)] dx |

where P_n(x) is an interpolation polynomial computed based on n + 1 distinct interpolation points x_0, x_1, …, x_n in the integration interval [a, b].

Figure 8-2 provides a visual representation of the error of a quadrature formula based on a polynomial
interpolation using two interpolating points 𝑥𝑜 = 𝑎 and 𝑥1 = 𝑏. In section 8.3.2, we will derive
approaches to estimate the truncation errors of quadrature formulas.

<Figure 8-2 here>


Figure 8-2 Error of a quadrature formula developed based on polynomial interpolation using two interpolating points 𝑥𝑜 = 𝑎
and 𝑥1 = 𝑏. The function 𝑦 = 𝑓(𝑥) (-) is approximated by the interpolation polynomial 𝑦 = 𝑃1 (𝑥) (--). The error of the
quadrature formula is the surface between 𝑓 and 𝑃1 .

Let us illustrate the introduced concepts with an example.

TRAPEZOIDAL RULE
In this example, we develop the quadrature formula that uses two interpolation points 𝑥𝑜 = 𝑎 and 𝑥1 =
𝑏. Figure 8-2 demonstrates why this formula is called the trapezoidal rule. The definite integral, which is
the surface below the function 𝑓(𝑥), is replaced by the surface of a trapezoid formed by the
interpolation polynomial 𝑃1 (𝑥).

The linear interpolation polynomial 𝑃1 (𝑥) can be written using Lagrange interpolation:

P_1(x) = f(a) (x − b)/(a − b) + f(b) (x − a)/(b − a)

To obtain the quadrature formula, we compute the definite integral of 𝑃1 (𝑥):

∫_a^b P_1(x) dx = f(a) ∫_a^b (x − b)/(a − b) dx + f(b) ∫_a^b (x − a)/(b − a) dx

To obtain the quadrature weights, we have to integrate the two Lagrange polynomials L_0(x) and L_1(x):

∫_a^b (x − b)/(a − b) dx = (b − a)/2

and

∫_a^b (x − a)/(b − a) dx = (b − a)/2

Inserting the results above in equation (8-4) yields the trapezoidal rule:

∫_a^b f(x) dx ≅ ∫_a^b P_1(x) dx = (b − a) [f(a) + f(b)]/2

Let us now apply this quadrature formula to estimate the following definite integral:

I = ∫_1^2 ln(x) dx

From calculus, we know that the exact value is

∫_1^2 ln(x) dx = 2 ln(2) − ln(1) − 1 ≅ 0.3863

Using our trapezoidal rule, we obtain:

∫_1^2 ln(x) dx ≅ (2 − 1) [ln(1) + ln(2)]/2 ≅ 0.3466

In this example, the error is about:

𝐸 ≅ |0.3863 − 0.3466| ≅ 0.04
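The computation above is easy to reproduce in Octave. The following snippet is a small verification of the trapezoidal estimate and its error (an added illustration; the variable names are arbitrary):
>> f = @(x) log(x);
>> a = 1; b = 2;
>> I_trap = (b-a)*(f(a)+f(b))/2     % trapezoidal rule
I_trap = 0.3466
>> E = abs((2*log(2)-1) - I_trap)   % error with respect to the exact value
E = 0.039721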

Quadrature formulas can be characterized according to their degree of precision defined as follows:

DEFINITION 8-2. DEGREE OF PRECISION OF A QUADRATURE FORMULA


The degree of precision of a quadrature formula is the highest degree of a polynomial which will be
integrated exactly with the quadrature formula.

Based on Definition 8-2, it follows that any quadrature formula constructed using an interpolation
polynomial 𝑃𝑛 (𝑥) of degree 𝑛, is at least of degree of precision 𝑛. A quadrature formula could be of
higher degree of precision, but only under certain circumstances that we will discuss later.

8.2. Numerical integration based on Lagrange interpolation


In Example 8-1, we used Lagrange interpolation to determine the interpolating polynomial that approximates the function f(x) to be integrated. To find the corresponding quadrature formula, we had to integrate the individual Lagrange polynomials. This approach offers various advantages. In fact, the Lagrange interpolation polynomial P_n(x) can in general be written as follows (see chapter 7):

P_n(x) = ∑_{i=0}^{n} f(x_i) L_i(x)

with 𝐿𝑖 (𝑥) the 𝑛 + 1 Lagrange polynomials associated to the 𝑛 + 1 interpolation points. Using Lagrange
interpolation is an excellent choice, as the quadrature formula becomes:

∫_a^b f(x) dx ≅ ∫_a^b P_n(x) dx = ∑_{i=0}^{n} ω_i f(x_i)

with the coefficients 𝜔𝑖 calculated as

ω_i = ∫_a^b L_i(x) dx
The coefficients 𝜔𝑖 are often termed the weights of the quadrature formula. The practical value of this
approach lies in the fact that the Lagrange polynomials 𝐿𝑖 (𝑥) do not depend on the function 𝑓(𝑥), but
only on the choice of the interpolation points. In other words, once we have chosen our interpolation
points, we can compute the weights 𝜔𝑖 independently of 𝑓(𝑥). It becomes possible to establish tables
for these weights that can then be used later with Equation (8-13) to estimate the definite integral of a
function 𝑓(𝑥). In Example 8-1 we computed the weights 𝜔𝑜 and 𝜔1 for the trapezoidal rule when
evaluating Equations (8-5) and (8-6).

The weights 𝜔𝑖 of a quadrature formula will always sum to 𝑏 − 𝑎.

SUM OF QUADRATURE WEIGHTS


The sum of the weights of a quadrature formula (8-13) based on Lagrange interpolation is always equal to b − a.

To prove this theorem, consider the function f(x) = 1. The interpolating polynomial of this function is P_n(x) = ∑_{i=0}^{n} L_i(x), which is identically equal to 1. Consequently, we can calculate the exact value of the definite integral of f(x) = 1:

b − a = ∫_a^b 1 dx = ∫_a^b [ ∑_{i=0}^{n} L_i(x) ] dx = ∑_{i=0}^{n} [ ∫_a^b L_i(x) dx ] = ∑_{i=0}^{n} ω_i

which concludes the proof.

In practice, Theorem 8-1 can be used to verify if the computed weights of a quadrature formula seem to
be correct or not. For example, the reader can verify that the weights 𝜔𝑜 and 𝜔1 of the trapezoidal rule
computed in Equations (8-5) and (8-6) indeed sum to 𝑏 − 𝑎.

OPEN TRAPEZOIDAL RULE


In this example, we develop a quadrature formula using the two interpolation points x_0 = a + (1/3)(b − a) and x_1 = a + (2/3)(b − a). This results in a linear interpolating polynomial, as was the case for the trapezoidal rule, but this time the polynomial will be slightly different because we changed the interpolation points. This quadrature formula is often referred to as the open trapezoidal rule.

The linear interpolation polynomial 𝑃1 (𝑥) is obtained using Lagrange interpolation:

P_1(x) = f(x_0) (x − x_1)/(x_0 − x_1) + f(x_1) (x − x_0)/(x_1 − x_0)

The two quadrature weights are calculated as

ω_0 = ∫_a^b (x − x_1)/(x_0 − x_1) dx = (b − a)/2

and

ω_1 = ∫_a^b (x − x_0)/(x_1 − x_0) dx = (b − a)/2

As stated by Theorem 8-1, 𝜔o + 𝜔1 = 𝑏 − 𝑎.

The resulting quadrature formula is:

∫_a^b f(x) dx ≅ ∫_a^b P_1(x) dx = ω_0 f(x_0) + ω_1 f(x_1) = (b − a)/2 [f(x_0) + f(x_1)]
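As a quick check (an added illustration, not part of the original example), the open trapezoidal rule can be applied in Octave to the integral from Example 8-1:
>> f = @(x) log(x);
>> a = 1; b = 2;
>> x0 = a + (b-a)/3;  x1 = a + 2*(b-a)/3;
>> I_open = (b-a)/2*(f(x0) + f(x1))
I_open = 0.3993

The error with respect to the exact value 0.3863 is about 0.013, smaller in this case than the error of about 0.04 obtained with the closed trapezoidal rule in Example 8-1.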

8.3. Newton-Cotes quadrature


So far, we have discussed how to build a quadrature formula using polynomial interpolation to
approximate the function 𝑓(𝑥) of the definite integral. This requires choosing some interpolation points
x_0, x_1, …, x_n in the interval [a, b]. A straightforward choice is to distribute these points uniformly over the interval [a, b]. This is exactly how Newton-Cotes¹ quadrature formulas proceed.

8.3.1. Newton-Cotes formulas


Two families of Newton-Cotes formulas exist, depending on whether or not the end points a and b of the integration interval are used as interpolation points.

DEFINITION 8-3. NEWTON-COTES FORMULAS


Newton-Cotes formulas are quadrature rules based on polynomial interpolation, built using uniformly
distributed interpolation points 𝑥𝑜 , 𝑥1 , … , 𝑥𝑛 , to approximate the definite integral of a function over the
integration interval [𝑎, 𝑏].
• Closed Newton-Cotes formulas use the interpolation points x_i = a + ih, where h = (b − a)/n.
• Open Newton-Cotes formulas use the interpolation points x_i = a + (i + 1)h, where h = (b − a)/(n + 2).

¹ Roger Cotes (1682–1716), English mathematician.

Using the methodology presented in section 8.2, closed and open Newton-Cotes formulas can be
derived for various values of 𝑛.

 n    A      W_0   W_1   W_2   W_3   W_4   Degree of precision
 1    1/2     1     1                       1
 2    1/3     1     4     1                 3
 3    3/8     1     3     3     1           3
 4    2/45    7    32    12    32     7     5

Table 8-1 Weights of some closed Newton-Cotes formulas

Table 8-1 lists four closed Newton-Cotes formulas. Note that n is, in agreement with the notation used in the present chapter, the degree of the interpolating polynomial. Consequently, n + 1 is the number of interpolation points and the number of weights ω_i. The weights ω_i of the quadrature formula, computed using equation (8-14), are given by ω_i = hAW_i, where h = (b − a)/n is the distance between the interpolation points. The numbers W_i in Table 8-1 are sometimes called the Cotes weights or Cotes numbers. Note that for each quadrature formula, the sum of the weights ω_i is equal to b − a, in agreement with Theorem 8-1. One can prove that if n is odd, the closed Newton-Cotes formulas have a degree of precision equal to n, whereas if n is even, they have a higher degree of precision equal to n + 1.

The listed quadrature formulas are the trapezoidal rule (n = 1), Simpson's 1/3 rule² (n = 2), Simpson's 3/8 rule (n = 3) and Boole's rule³ (n = 4). Example 8-1 shows, for the case of the trapezoidal rule, how the quadrature formulas are derived.
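The entries of Table 8-1 are easy to check against Theorem 8-1. As an illustration (added here, not taken from the text), the weights of Boole's rule (n = 4) can be computed in Octave and their sum compared with b − a:
>> a = 1; b = 2;              % any integration interval
>> n = 4; A = 2/45;           % row n = 4 of Table 8-1
>> W = [7 32 12 32 7];        % Cotes numbers
>> h = (b-a)/n;
>> w = h*A*W;                 % quadrature weights w_i = h*A*W_i
>> sum(w)                     % essentially equal to b - a = 1 (up to round-off)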

SIMPSON’S-1/3 RULE
Let us apply Simpson’s-1/3 rule to the definite integral (8-8) already used to illustrate the trapezoidal
rule in Example 8-1. From Table 8-1, we can write down Simpson’s-1/3 rule as:

∫_a^b f(x) dx ≅ (h/3) [ f(a) + 4 f((a + b)/2) + f(b) ]

with h = (b − a)/2.

Applying Simpson’s-1/3 rule to the definite integral (8-8) results in:

∫_1^2 ln(x) dx ≅ [(2 − 1)/6] [ ln(1) + 4 ln(1.5) + ln(2) ] ≅ 0.3858

Comparing the above numerical approximation with the value obtained using calculus via Equation (8-
9), we can evaluate the error of Simpson’s-1/3 rule in this case as follows:

E ≅ |0.3863 − 0.3858| ≅ 0.0005

which is significantly lower than the error of the trapezoidal rule computed in Example 8-1.

² Thomas Simpson (1710–1761), British mathematician and inventor.
³ George Boole (1815–1864), British mathematician, philosopher and logician.
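The calculation above can be reproduced in Octave with a few lines (a small added check, not part of the original example):
>> f = @(x) log(x);
>> a = 1; b = 2; h = (b-a)/2;
>> I_simp = h/3*(f(a) + 4*f((a+b)/2) + f(b))
I_simp = 0.3858
>> E = abs((2*log(2)-1) - I_simp)
E = 4.5976e-04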

The reader may have noted that the numbers 𝑊𝑖 are symmetric in Table 8-1. This is not an accident but
a consequence of the following theorem.

SYMMETRY OF WEIGHTS OF NEWTON-COTES FORMULAS


The weights 𝜔𝑖 of a Newton-Cotes quadrature formula are symmetric, meaning that 𝜔𝑜 = 𝜔𝑛 , 𝜔1 =
𝜔𝑛−1 , etc.

To prove this theorem, we start by noting that the weights ω_i are given by Equation (8-14), which represents the definite integral of the Lagrange polynomial L_i(x). In the case of a Newton-Cotes formula, where the interpolation points are distributed symmetrically about the midpoint (a + b)/2 of the integration interval, the Lagrange polynomials come in mirrored pairs: L_{n−i}(x) = L_i(a + b − x). Consequently, the definite integrals of, for example, L_0(x) and L_n(x) are equal. The same holds for the definite integrals of L_1(x) and L_{n−1}(x), and so on, which concludes the proof.

There are closed Newton-Cotes quadrature formulas as well as open ones. Table 8-2 lists the first five open Newton-Cotes formulas. Here, as for Table 8-1, n is the degree of the interpolating polynomial and consequently n + 1 is the number of interpolation points and weights ω_i. The weights ω_i of the quadrature formula are given by ω_i = hAW_i, where h = (b − a)/(n + 2) is the distance between the interpolation points. Again, for each quadrature formula, the sum of the weights ω_i is equal to b − a, in agreement with Theorem 8-1. Similarly to what we saw in Table 8-1, and for the same reasons, the weights are symmetric. Example 8-2 shows, for the case of the open trapezoidal rule, how to derive the values from Table 8-2.

 n    A      W_0   W_1   W_2   W_3   W_4   Degree of precision
 0    2       1                             1
 1    3/2     1     1                       1
 2    4/3     2    -1     2                 3
 3    5/24   11     1     1    11           3
 4    3/10   11   -14    26   -14    11     5

Table 8-2 Weights of some open Newton-Cotes formulas

We can prove that if 𝑛 is odd, the open Newton-Cotes formulas have a degree of precision equal to 𝑛.
When 𝑛 is even, the open Newton-Cotes formulas have a degree of precision equal to 𝑛 + 1.

Some of the listed quadrature formulas have names, such as the midpoint (or rectangle) rule (n = 0), the open trapezoidal rule (n = 1) and Milne's rule⁴ (n = 2).

MIDPOINT RULE
Derive the open 1-point Newton-Cotes formula

⁴ William Edwin Milne (1890–1971), American mathematician.

For the open 1-point Newton-Cotes formula, the only interpolation point is 𝑥𝑜 = (𝑎 + 𝑏)/2. Using
Equation (8-14), we can compute the weight 𝜔𝑜 :
ω_0 = ∫_a^b L_0(x) dx = ∫_a^b 1 dx = b − a

Equation (8-13) yields the open 1-point Newton-Cotes formula:


∫_a^b f(x) dx ≅ (b − a) · f((a + b)/2)

which is known as the midpoint rule.
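Applied to the integral of Example 8-1, the midpoint rule gives the following estimate in Octave (an added illustration, not part of the original text):
>> f = @(x) log(x);
>> a = 1; b = 2;
>> I_mid = (b-a)*f((a+b)/2)      % midpoint rule
I_mid = 0.4055
>> abs((2*log(2)-1) - I_mid)     % error with respect to the exact value
ans = 0.019171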

In principle, we could use any number of interpolation points. However, Runge’s phenomenon (wiggling
of the interpolation polynomial as described in section 7.5) could negatively affect quadrature rules for
excessively large values of 𝑛. Furthermore, some coefficients 𝜔𝑖 may become negative. It can be shown
that this makes the formulas sensitive to round-off errors. For the aforementioned reasons, we
generally use a limited number of interpolations points. To further increase the accuracy associated with
approximating definite integrals, other methods must be used. One of them are composite methods as
discussed in section 8.3.3.

8.3.2. Truncation errors of Newton-Cotes formulas


In this section we turn our attention to the important question of the errors associated with using quadrature formulas. Definition 8-1 states what the error of a quadrature formula is. However, that error cannot be computed without knowledge of the true value of the definite integral. We therefore derive expressions for error estimates that do not require knowing the true value of the definite integral.

Let us start by giving the general expression of the truncation error 𝐸 of a Newton-Cotes formula using
the expression of the error from polynomial interpolation derived in Theorem 7-1 of chapter 7:

E = ∫_a^b f(x) dx − ∫_a^b P_n(x) dx = ∫_a^b [(x − x_0) … (x − x_n)/(n + 1)!] f^(n+1)(c) dx
Let us draw the reader’s attention to two common mistakes in relation with Equation (8-23).

First, note that the unknown point 𝑐 is some number between 𝑎 and 𝑏. It is important to realize that this
number is in fact a function of 𝑥, as for every 𝑥, the error 𝑓(𝑥) − 𝑃𝑛 (𝑥) will be different and
consequently the number 𝑐 will be different as well. It is wrong to believe that 𝑓 (𝑛+1) (𝑐) is a constant
that could be factorized out of the integral.

Furthermore, we note that generally the function 𝑞𝑛 (𝑥) = (𝑥 − 𝑥𝑜 ) … (𝑥 − 𝑥𝑛 ) is not a purely positive
or purely negative function on the interval [𝑎, 𝑏]. Therefore, the second mean value theorem of definite
integrals can not be used to simplify the expression of the error 𝐸.

In fact, correctly deriving expressions for the truncation error of Newton-Cotes formulas is not a straightforward task. It can, however, be achieved (see for example the book "A first course in numerical analysis" by Ralston and Rabinowitz). In the case of closed Newton-Cotes formulas, it can be shown that there exists a number η in the interval [a, b] such that

E = [f^(n+1)(η)/(n + 1)!] ∫_a^b (x − x_0) … (x − x_n) dx     if n is odd

E = [f^(n+2)(η)/(n + 2)!] ∫_a^b x · (x − x_0) … (x − x_n) dx     if n is even
Equations (8-24) and (8-25) allow deriving expressions for the truncation errors of some Newton-Cotes formulas by computing the definite integral of the function (x − x_0) … (x − x_n) or x · (x − x_0) … (x − x_n), respectively.

Table 8-3 lists the truncation errors for some closed Newton-Cotes formulas together with the truncation errors for the open Newton-Cotes formulas. The definitions of n and h are the same as in Table 8-1 and Table 8-2. The quantity h is the distance between two interpolation points, as stated in Definition 8-3.

 n    Truncation error, closed Newton-Cotes formulas    Truncation error, open Newton-Cotes formulas
 0    N.A.                                               −(1/3) h^3 f^(2)(η)
 1    −(1/12) h^3 f^(2)(η)                               −(3/4) h^3 f^(2)(η)
 2    −(1/90) h^5 f^(4)(η)                               −(28/90) h^5 f^(4)(η)
 3    −(3/80) h^5 f^(4)(η)                               −(95/144) h^5 f^(4)(η)
 4    −(8/945) h^7 f^(6)(η)                              −(41/140) h^7 f^(6)(η)

Table 8-3 Truncation errors of some Newton-Cotes formulas. Refer to Definition 8-3 for the values of h.

From Table 8-3, we can derive the degree of precision (Table 8-1 and Table 8-2) of the various quadrature formulas. Consider for example Simpson's 1/3 rule (the three-point closed Newton-Cotes formula). Its truncation error involves the fourth derivative of f(x). Consequently, applying that quadrature rule to any polynomial of degree three or less results in a truncation error equal to zero. This means, according to Definition 8-2, that the degree of precision of Simpson's 1/3 rule is equal to three, as stated in Table 8-1.
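As a quick illustration of the degree of precision (added here, not part of the original text), Simpson's 1/3 rule applied to the cubic f(x) = x^3 on [0, 1] reproduces, up to round-off, the exact value 1/4:
>> f = @(x) x.^3;                % a polynomial of degree three
>> a = 0; b = 1; h = (b-a)/2;
>> I_simp = h/3*(f(a) + 4*f((a+b)/2) + f(b))
I_simp = 0.2500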

As previously mentioned, the derivation of Equations (8-24) and (8-25) is not straightforward. In the case of the trapezoidal rule, the calculations are simple and the second mean value theorem of definite integrals can be used, as demonstrated in Example 8-5. However, for Simpson's 1/3 rule and higher order methods, the calculations are difficult because the second mean value theorem of definite integrals can no longer be used. For open Newton-Cotes formulas, the difficulties begin with the first open Newton-Cotes rule, the midpoint rule.

TRUNCATION ERROR OF THE TRAPEZOIDAL RULE


Derive an expression for the truncation error of the trapezoidal rule

For the trapezoidal rule, which uses the two interpolation points x_0 = a and x_1 = b, the truncation error, using Equation (8-23), is:

E = ∫_a^b f(x) dx − ∫_a^b P_1(x) dx = ∫_a^b [(x − a)(x − b)/2!] f^(2)[c(x)] dx
As (𝑥 − 𝑎)(𝑥 − 𝑏) ≤ 0 in the integration interval [𝑎, 𝑏], we can use the second mean value theorem of
definite integrals to simplify the expression of the truncation error to

E = [f^(2)(η)/2] ∫_a^b (x − a)(x − b) dx = [(a − b)^3/12] f^(2)(η) = −(h^3/12) f^(2)(η)
with 𝜂 some number in the interval [𝑎, 𝑏]. This expression agrees with the general formula given by
Equation (8-24).

The Newton-Cotes truncation errors of Table 8-3 can be employed to estimate the errors of practical
situations as illustrated in the following example.

ESTIMATING TRUNCATION ERRORS OF NEWTON-COTES FORMULAS


In Example 8-1 and Example 8-3, we applied the trapezoidal and Simpson's 1/3 rules to numerically estimate the definite integral (8-8). Let us now illustrate how the expressions of the truncation errors of Newton-Cotes formulas from Table 8-3 can be used to estimate the truncation errors.

We first have to compute the second and fourth derivatives of f(x), the natural logarithm in our example:

d²/dx² ln(x) = −1/x²

d⁴/dx⁴ ln(x) = −6/x⁴
We do not know the value of the number η in the interval [1, 2] needed to compute the truncation errors, but we can still bound the truncation errors by their maximal values:

max over [1, 2] of | (2 − 1)^3/12 · 1/x² | = 1/12 ≅ 0.08

max over [1, 2] of | ((2 − 1)/2)^5/90 · 6/x⁴ | = 1/480 ≅ 0.002

Comparing these error estimations with the true errors computed in Example 8-1 and Example 8-3, we
observe that our error estimations are indeed conservative estimations and can consequently be used
to give an upper bound to the truncation error.
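The same bounds can also be evaluated numerically in Octave, for instance by maximizing the error expressions over a fine grid (a quick added sketch, not part of the original example):
>> x = linspace(1, 2, 1000);
>> h_trap = 2 - 1;                          % trapezoidal rule: h = b - a
>> bound_trap = max(h_trap^3/12 * 1./x.^2)
bound_trap = 0.083333
>> h_simp = (2 - 1)/2;                      % Simpson's 1/3 rule: h = (b - a)/2
>> bound_simp = max(h_simp^5/90 * 6./x.^4)
bound_simp = 2.0833e-03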

8.3.3. Composite Newton-Cotes formulas


Table 8-3, which lists the truncation errors of Newton-Cotes formulas, shows that the truncation error decreases as the number of interpolation points used to construct the quadrature method increases. This is because the distance h between two interpolation points decreases and its power increases with the number of interpolation points used. However, as discussed in section 8.3, Runge's phenomenon and other effects can lead to numerically unstable methods as the number of interpolation points increases. If higher precision quadrature formulas are required, an alternative approach must be developed. One such approach uses the additive property of definite integrals. For example, if we introduce a point c ∈ [a, b], we can write:

∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx

This leads to the idea of composite Newton-Cotes formulas. The integration interval [𝑎, 𝑏] is subdivided
into 𝑚 equal sized sub-intervals, called panels. In each panel, a Newton-Cotes formula is used to
approximate the definite integral. Depending on the chosen Newton-Cotes formula, each panel is
subdivided into the required interpolation points. In this chapter, all the interpolation points within the
integration interval [𝑎, 𝑏] are numbered consecutively as 𝑥𝑜 , 𝑥1 , … , 𝑥𝑛 where 𝑥𝑜 is the first interpolation
point in the first panel and 𝑥𝑛 is the last interpolation point of the last panel.

<Figure 8-3 here>


Figure 8-3 Two examples of composite Newton-Cotes formulas: a) composite trapezoidal rule and b) composite Simpson's 1/3 rule. For both examples five interpolation points are used. This results in four panels for the composite trapezoidal rule and two panels for the composite Simpson's 1/3 rule.

Figure 8-3 provides two examples (the composite trapezoidal and composite Simpson’s-1/3 rule). The
reader is invited to carefully observe how the interpolation points are numbered on the figure. Different
authors often use different conventions in numbering these points. This results in different composite
Newton-Cotes formulas and is often a source of confusion when learning these methods.

For example, for the composite Simpson’s-1/3 rule (Figure 8-3 b), we used two panels. The first panel is
the interval [𝑥𝑜 = 𝑎, 𝑥2 ] and the second panel is the interval [𝑥2 , 𝑥4 = 𝑏]. Each panel contains three
interpolations points, as needed by the Simpson’s-1/3 rule. In all cases, we keep our definition of ℎ as
being the distance between two interpolation points. The reader should be careful not to confuse ℎ with
the size of a panel. In the case of the composite trapezoidal rule, it happens that ℎ also represents the
size of a panel. However, this isn't universally true; for instance, in the composite Simpson's 1/3 rule, ℎ
is half the size of a panel.

COMPOSITE TRAPEZOIDAL RULE
Let us show how to use the composite trapezoidal rule to estimate numerically the definite integral (8-8)
from Example 8-1. We will use four panels:

∫_1^2 ln(x) dx = ∫_1^1.25 ln(x) dx + ∫_1.25^1.5 ln(x) dx + ∫_1.5^1.75 ln(x) dx + ∫_1.75^2 ln(x) dx

In each panel, we apply the trapezoidal rule. For example, in the second panel:

∫_1.25^1.5 ln(x) dx ≅ (1.5 − 1.25) [ln(1.25) + ln(1.5)]/2 ≅ 0.0786

Summing the results on all panels yields:

∫_1^2 ln(x) dx ≅ 0.3837

Comparing with the exact solution (8-9) obtained from calculus gives us the following error:

𝐸 ≅ |0.3863 − 0.3837| ≅ 0.003

Note that this error is significantly lower than the error from Example 8-1, where we applied the trapezoidal rule.

As an exercise, the reader is invited to repeat the same calculations using Simpson's 1/3 rule, which leads to a numerical estimate of the definite integral equal to 0.3863, identical to the answer from calculus (differences only appear from the fifth significant digit on).

The approach used in Example 8-7 (explicitly computing the numerical estimate of the definite integral in each panel with a given quadrature rule) is well suited for hand calculations. It is, however, not very efficient. For example, at every interior panel boundary the function f(x) is evaluated twice (once for the panel on the left and once for the panel on the right). It is possible to develop compact formulas for composite methods that avoid evaluating the function f multiple times. As an example, let us show this for the composite trapezoidal method.
We start by dividing the integration interval into m panels. Defining h = (b − a)/m, the interpolation points are x_i = a + ih with i = 0, 1, 2, …, m. With these notations, we obtain for the composite trapezoidal method:

∫_a^b f(x) dx ≅ h [ (1/2) f(x_0) + f(x_1) + ⋯ + f(x_{m−1}) + (1/2) f(x_m) ]
which can be written as:

∫_a^b f(x) dx ≅ (h/2) [ f(a) + f(b) + 2 ∑_{i=1}^{m−1} f(x_i) ]

Equation (8-38) is known as the composite trapezoidal rule.

Similarly, we can derive expressions for other composite Newton-Cotes formulas. For example, the composite Simpson's 1/3 rule is

∫_a^b f(x) dx ≅ (h/3) [ f(a) + f(b) + 4 ∑_{i=1}^{m} f(x_{2i−1}) + 2 ∑_{i=1}^{m−1} f(x_{2i}) ]

with h = (b − a)/(2m) and x_i = a + ih.

The composite midpoint method is:

∫_a^b f(x) dx ≅ h ∑_{i=0}^{m−1} f(x_i)

with h = (b − a)/m and x_i = a + h(1/2 + i).

Equations, such as (8-38), (8-39) and (8-40), can be used to code efficiently composite Newton-Cotes
quadrature formulas. The following example gives a possible implementation in Octave.

COMPOSITE NEWTON-COTES FORMULAS IN OCTAVE


Let us demonstrate how to employ the composite trapezoidal rule (8-38) to approximate the definite
integral (8-8) of Example 8-1. We will use 𝑚 = 4 panels.

We start by defining the function that needs integrating and the interpolation points:
>> f = @(x) log(x);
>> a = 1; b = 2;
>> m = 4;
>> x = linspace(a, b, m+1);

The parameter ℎ, the distance between two interpolation points, is given by


>> h = (b-a)/m;

Equation (8-38) becomes:


>> I = h/2*(2*sum(f(x)) - f(a)-f(b))
I = 0.38370

Note that, as the Octave function call sum(f(x)) computes the sum of all f(x_i), including x_0 = a and x_m = b, we need to subtract f(a) and f(b) to reproduce Equation (8-38).

Similarly, we can implement the composite Simpson's 1/3 rule (8-39):

>> f = @(x) log(x);
>> a = 1; b = 2;
>> m = 4;
>> h = (b-a)/(2*m);
>> x1 = linspace(a+h, b-h, m);
>> x2 = linspace(a+2*h, b-2*h, m-1);
>> I = h/3*(f(a)+f(b)+4*sum(f(x1))+2*sum(f(x2)))
I = 0.38629

Note that h, the distance between two interpolation points, needs to be computed differently than in the case of the trapezoidal rule. Furthermore, two sets of interpolation points, stored in the vectors x1 and x2, need to be computed.
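For completeness, the composite midpoint rule (8-40) admits an equally short implementation. The following sketch (added here, not part of the original example) applies it to the same integral with m = 4 panels:
>> f = @(x) log(x);
>> a = 1; b = 2;
>> m = 4;
>> h = (b-a)/m;
>> x = a + h*(0.5 + (0:m-1));   % the m panel midpoints
>> I = h*sum(f(x))
I = 0.38759

The result overshoots the exact value 0.38629 by about 0.0013, roughly half the error of the composite trapezoidal estimate 0.38370 obtained above, consistent with the coefficients listed later in Table 8-4.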

8.3.4. Truncation errors of composite Newton-Cotes formulas


An expression for the truncation errors of composite formulas can be derived by leveraging the
truncation errors of the corresponding Newton-Cotes formulas from Table 8-3. To carry out these
calculations, we need to utilize the generalized intermediate value theorem.

GENERALIZED INTERMEDIATE VALUE THEOREM


Let f be a continuous function on the interval [a, b] and let x_1, x_2, …, x_n be points in [a, b]. Given strictly positive numbers a_1, a_2, …, a_n > 0, there exists a number c in [a, b] such that

(𝑎1 + 𝑎2 + ⋯ + 𝑎𝑛 )𝑓(𝑐) = 𝑎1 𝑓(𝑥1 ) + 𝑎2 𝑓(𝑥2 ) + ⋯ + 𝑎𝑛 𝑓(𝑥𝑛 )

Theorem 8-3 is a generalisation of the intermediate value theorem. If 𝑓(𝑥𝑖 ) is the smallest and 𝑓(𝑥𝑗 ) the
largest value among the function values 𝑓(𝑥1 ), 𝑓(𝑥2 ), … , 𝑓(𝑥𝑛 ), then we have

𝑎1 𝑓(𝑥𝑖 ) + ⋯ + 𝑎𝑛 𝑓(𝑥𝑖 ) ≤ 𝑎1 𝑓(𝑥1 ) + ⋯ + 𝑎𝑛 𝑓(𝑥𝑛 ) ≤ 𝑎1 𝑓(𝑥𝑗 ) + ⋯ + 𝑎𝑛 𝑓(𝑥𝑗 )

from which it follows that:

f(x_i) ≤ [a_1 f(x_1) + ⋯ + a_n f(x_n)] / (a_1 + ⋯ + a_n) ≤ f(x_j)

Using the intermediate value theorem, we can conclude that there exists a number 𝑐 such that:

f(c) = [a_1 f(x_1) + ⋯ + a_n f(x_n)] / (a_1 + ⋯ + a_n)

which concludes the proof of Theorem 8-3.

TRUNCATION ERROR OF THE COMPOSITE TRAPEZOIDAL RULE


In this example, we derive an expression for the truncation error of the composite trapezoidal rule
based on the truncation error of the trapezoidal rule and Theorem 8-3.

In order to apply the composite trapezoidal rule, we divide the integration interval [𝑎, 𝑏] into 𝑚 equal
sized panels. The interpolation points become 𝑥𝑖 = 𝑎 + 𝑖ℎ with ℎ = (𝑏 − 𝑎)/𝑚. In each panel, we
apply the trapezoidal rule, which according to Table 8-3 is:
∫_{x_i}^{x_{i+1}} f(x) dx = (h/2) [ f(x_i) + f(x_{i+1}) ] − (h^3/12) f''(c_i)
with 𝑐𝑖 an unknown number in the interval [𝑥𝑖 , 𝑥𝑖+1 ]. Summing all these approximations over all panels
results in

∫_a^b f(x) dx = (h/2) [ f(a) + f(b) + 2 ∑_{i=1}^{m−1} f(x_i) ] − (h^3/12) ∑_{i=0}^{m−1} f''(c_i)
The error term can be simplified using Theorem 8-3:

(h^3/12) ∑_{i=0}^{m−1} f''(c_i) = m (h^3/12) f''(c) = [(b − a)/12] h² f''(c)
with 𝑐 some point in [𝑎, 𝑏].

Similarly to Example 8-9, one can derive the expressions for the truncation error of other composite
Newton-Cotes formulas. Table 8-4 lists three examples.

 Composite Newton-Cotes formula     Truncation error
 Trapezoidal                        −[(b − a)/12] h² f''(c)
 Simpson's 1/3                      −[(b − a)/180] h⁴ f^(4)(c)
 Midpoint                           −[(b − a)/24] h² f''(c)

Table 8-4 Truncation errors of some composite Newton-Cotes formulas

The composite trapezoidal and midpoint rules are both of second order in h. Consequently, they produce approximations of similar accuracy when using interpolation points separated by the same distance h. The composite Simpson's 1/3 rule is significantly better, as it is of order four. The following example gives an illustration of this fact.

ORDER OF CONVERGENCE OF COMPOSITE QUADRATURE METHODS


In this example, we verify, by evaluating the definite integral

∫_0^π sin(x) dx = 2

with the composite trapezoidal and Simpson's 1/3 rules, that the orders of these composite quadrature formulas are two and four, respectively.

We perform successive applications of the composite trapezoidal (8-38) and Simpson’s-1/3 rule (8-39)
with increasing numbers of panels. Next, we calculate the true error by comparing the numerical
approximations to the true value. This allows constructing a graph of the true error as a function of ℎ.

In Octave, we can achieve this as follows. First, we define, similarly to Example 8-8, a function to compute the numerical approximation of a definite integral. Below is the code for the composite trapezoidal rule:
>> function I = trapezoidal(f, a, b, m)
>> x = linspace(a, b, m+1);
>> h = (b-a)/m;
>> I = h/2*(2*sum(f(x)) - f(a)-f(b));
>> endfunction

Then, we compute the numerical approximations of the definite integral (8-48) for different numbers of panels and store them in the column vector I. For the number of panels m, we start with one panel and then successively double it. At the same time, we compute h, which we store in the column vector h. Recall that h is the distance between two interpolation points x_i and x_{i+1}. If m is the number of panels used, then h can be calculated as h = π/m for the composite trapezoidal rule, and h = π/(2m) for the composite Simpson's 1/3 rule.
>> f = @(x) sin(x);
>> m = 1;
>> I = []; h = [];
>> for i = 1:13
>> I = [I; trapezoidal(f, 0, pi, m)];
>> h = [h; pi/m];
>> m = m*2;
>> endfor

Finally, we compute the true errors, which we store in the column vector err. We can then plot the
results in a log-log plot:
>> err = abs(I-2);
>> loglog(h, err, 'o')

<Figure 8-4 here>


Figure 8-4 True absolute error of the composite trapezoidal and Simpson's 1/3 rules when approximating the definite integral ∫_0^π sin(x) dx.

To find the order of convergence of our calculations, we fit a power law 𝐸 = 𝐾ℎ𝑞 over the errors (see
Example 6-8 of chapter 6 for the details):
>> A = [h.^0 log(h)];
>> a = A\log(err)
a =
-1.7380
2.0114

This confirms that the order of convergence of the error is quadratic (𝑞 ≅ 2.0114).

Similar calculations can be repeated for the composite Simpson's 1/3 method. Figure 8-4 summarizes the results and confirms that the composite trapezoidal method is of order two, whereas the composite Simpson's 1/3 rule is of order four.

Note that to derive the expressions of Table 8-4, the function f must be sufficiently differentiable. If that hypothesis is not satisfied, the truncation error can be of a different order in h than shown in Table 8-4. This is illustrated in Example 8-11.

REDUCED ORDER OF CONVERGENCE OF COMPOSITE METHODS


Let us study the order of composite Newton-Cotes formulas applied to the following definite integral:

∫_0^1 √x dx = 2/3
We start by noting that both the second and fourth derivatives of f(x) = √x diverge at x = 0. Because of this, the truncation errors of Table 8-3 and Table 8-4 are no longer valid.

Successive application of the composite trapezoidal (8-38) and Simpson’s-1/3 rule (8-39) followed by
calculating the true error by comparing the approximations with the true value, allows constructing
Figure 8-5 similarly to Example 8-10.

<Figure 8-5 here>


Figure 8-5 True absolute error of the composite trapezoidal and Simpson's 1/3 rules when approximating the definite integral ∫_0^1 √x dx.

Fitting a power law E = Kh^q over the errors (Figure 8-5) shows that the composite trapezoidal method and the composite Simpson's 1/3 rule are, in this case, of order about 1.5, significantly lower than in the case where the results of Table 8-4 apply.

As demonstrated in Example 8-11, if the function to be integrated is not sufficiently differentiable, the results from Table 8-4 are in general not valid. As the following Example 8-12 will demonstrate, the results from Table 8-4 give the minimal order of convergence (for the case where the function to be integrated is sufficiently differentiable). It can happen that the order of convergence is higher.

INCREASED ORDER OF CONVERGENCE OF COMPOSITE METHODS


Let us study the order of the composite trapezoidal and Simpson-1/3 formulas applied to the definite
integral:

∫_0^1 e^x [1 − cos(2πx)] dx = 4(e − 1)π² / (1 + 4π²)
In the present case, all conditions required for the results of Table 8-4 to be valid are met. Successive
applications of the composite trapezoidal (8-38) and composite Simpson 1/3 rule (8-39) followed by
calculating the true error by comparing the approximations with the true value, allow constructing
Figure 8-6 in a similar way as in Example 8-10.

<Figure 8-6 here>
Figure 8-6 True absolute error of the composite trapezoidal and Simpson's 1/3 rules when approximating the definite integral ∫_0^1 e^x [1 − cos(2πx)] dx.

We note that in this case, both methods, the composite trapezoidal and Simpson-1/3 rule are of order
four. The trapezoidal method is, for this case, as precise as Simpson’s-1/3 rule.

8.3.5. Estimating truncation errors of composite quadrature formulas


Example 8-11 and Example 8-12 showed that the truncation error of a composite method may behave differently from the expressions listed in Table 8-4. This happens, for example, if the function f to be integrated is not sufficiently differentiable. But, as Example 8-12 shows, sometimes a method behaves differently even if the function f is sufficiently differentiable (we will discuss the reason for this in section 8.3.6). Such cases are rather difficult to detect, and it would be desirable to have a method that allows estimating the truncation error in a more practical way, without having to rely on the results of Table 8-4. This can be achieved and is the topic of the present section.

We start by noting that a given definite integral can be evaluated with the same composite quadrature formula but using two different numbers of panels m_1 and m_2, resulting in two different values h_1 and h_2. If the composite method is of order q, then we can write:

∫_a^b f(x) dx = I(h_1) + E(h_1) = I(h_2) + E(h_2)
where 𝐼(ℎ) stands for the numerical approximation computed by the composite method with ℎ being
the distance between two interpolation points and 𝐸(ℎ) the corresponding truncation error. Assuming
ℎ1 and ℎ2 are sufficiently small so the truncation error can be expressed as a power law 𝐸 = 𝐾ℎ𝑞 , then
we can write:

E(h_1)/E(h_2) ≅ (h_1/h_2)^q

Combining Equations (8-51) and (8-52) and solving for E(h_2) results in Richardson's error estimation formula⁵.

RICHARDSON’S TRUNCATION ERROR ESTIMATION


If I(h_1) and I(h_2) are two approximations of a definite integral produced with two step sizes h_1 and h_2 by a composite quadrature formula of order O(h^q), the truncation error E(h_2) of I(h_2) is given by

E(h_2) ≅ | [I(h_2) − I(h_1)] / [(h_1/h_2)^q − 1] |

as long as h_1 and h_2 are such that the truncation errors can reasonably be expressed as E(h) = Kh^q.

⁵ Lewis Fry Richardson (1881–1953), English mathematician, physicist and meteorologist.

Richardson’s error formula is of immense practical value as it allows a simple and precise estimation of
the truncation error of a composite quadrature rule.

ESTIMATING TRUNCATION ERRORS WITH RICHARDSON'S METHOD


Verify the validity of Richardson’s error formula (8-53) by applying it to the numerical approximation of
the definite integral using the trapezoidal composite rule:
∫_0^π sin(x) dx = 2
Using the numerical approximations of the definite integral, we can compute the error estimation using Richardson's error formula (8-53) and compare it with the true error. For example, applying the composite trapezoidal rule for h_1 = π/10 (i.e. m = 10 panels) and h_2 = π/20 (i.e. m = 20 panels), we obtain (the reader can use the codes from Example 8-8 to conduct the calculations):

E(h_2 = π/20) ≅ | [I(h_2) − I(h_1)] / (2² − 1) | ≅ | (1.9959 − 1.9835)/3 | ≅ 0.004

The true error is:

|I(h_2) − 2| ≅ |1.9959 − 2| ≅ 0.004

This is in excellent agreement with the error estimation. Proceeding similarly with the other values of h shows an excellent match between the estimated and the true errors.
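Using the trapezoidal function from the Octave examples above, this check can be scripted as follows (a small added sketch; it assumes that function is already defined):
>> f = @(x) sin(x);
>> I1 = trapezoidal(f, 0, pi, 10);   % h1 = pi/10, I1 is about 1.9835
>> I2 = trapezoidal(f, 0, pi, 20);   % h2 = pi/20, I2 is about 1.9959
>> E2 = abs(I2 - I1)/(2^2 - 1);      % Richardson estimate, about 0.004
>> E2_true = abs(I2 - 2);            % true error, also about 0.004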

A key hypothesis behind Richardson's error formula (8-53) is that the values of h must be small enough for the error to be represented by the power law E(h) = Kh^q. Practically, this means one first has to find the range of values of h for which the hypothesis E(h) = Kh^q is valid. As Example 8-14 illustrates, this task is simple to carry out.

VALID RANGE OF h IN RICHARDSON'S ERROR FORMULA


Let us find the range for ℎ where Richardson’s error formula (8-53) can be used to estimate the
truncation error of the definite integral:

I = (2/√π) ∫_0^3 e^(−x²) dx

when using the composite trapezoidal and Simpson's 1/3 rules. Furthermore, we will provide an approximation of the definite integral I with an absolute error below 10^(−9).

To solve this problem, we start by applying the composite trapezoidal and Simpson-1/3 rules for
different values ℎ to produce approximations of the definite integral 𝐼. Using the same function
trapezoidal we defined in Example 8-10, we can achieve this in Octave:

>> f = @(x) 2/sqrt(pi)*exp(-x.^2);
>> m = 1;
>> I = []; h = [];
>> for i = 1:25
>> I = [I; trapezoidal(f, 0, 3, m)];
>> h = [h; 3/m];
>> m = m*2;
>> endfor

Note how the function f is defined. As this function has to handle vectors, the .^ operator must be used. The for loop calculates the definite integral approximations I(h) and stores them, together with the values of h used to produce them, in the two column vectors I and h.

Next, we apply Richardson's error formula (8-53) to produce estimates of the truncation error as a function of the value of h used. As we double our number of panels in each step, the ratio between h_1 and h_2 is always equal to two. In Octave we can compute the error estimates (stored in the column vector err) using Richardson's error formula (8-53) like this:
>> err = [];
>> for i = 2:length(I)
>> err = [err; abs(I(i)-I(i-1))/(2^2-1)];
>> endfor

The reader is invited to pay attention to the indexes used in the column vector I. The same calculation
can be achieved in a single call taking advantage of the vector notations of Octave (the vector index end
is an Octave shortcut for the last element in the vector):
>> err = abs(I(2:end) - I(1:end-1)) / (2^2-1);

We can now present our results in a log-log plot:


>> loglog(h(2:end), err, 'o')

The reader should pay attention to how the correct values of h are matched with the corresponding error estimates.

<Figure 8-7 here>


Figure 8-7 Truncation error estimations using Richardson’s error formula for the composite trapezoidal and Simpson-1/3
methods.

Repeating the same calculations with the composite Simpson's 1/3 method allows producing Figure 8-7, which shows the estimated truncation error versus h on a log-log scale. Inspecting the graphs, we identify the values of h for which the hypothesis E(h) = Kh^q is valid. These correspond to the values of h for which the error follows a straight line on the log-log scale. For these values of h, Richardson's error formula can be used to estimate the truncation error accurately. Let us explain why Richardson's formula fails for the other values of h.

For too large values ℎ (for the composite trapezoidal rule for values larger than about 0.5 and for the
composite Simpson’s 1/3 rule for values larger than about 0.2), the truncation errors do not yet follow
the power law 𝐸(ℎ) = 𝐾ℎ𝑞 .

On the other hand, for too small values of h (in the case of the composite trapezoidal rule for values smaller than about 1·10^(−3) and for the composite Simpson's 1/3 rule for values smaller than about 2·10^(−5)), the total error is no longer determined only by the truncation errors: round-off errors become significant too. Richardson's formula cannot properly estimate the total error (which is the sum of the truncation and round-off errors), as it only estimates the contribution from the truncation errors.

Now that we know the range of values of h where Richardson's formula is valid, we can select a particular h to compute an approximation of the definite integral I with an absolute error below 10^(−9). According to Figure 8-7, we can, for example, choose h = 10^(−3), which corresponds to m = 3000 panels in the case of the composite trapezoidal rule and to m = 1500 panels for the composite Simpson's rule. For both cases, the estimated errors are below the requested precision of 10^(−9) and within the range where Richardson's error formula is valid. Now, using the composite Simpson's 1/3 rule we find:

I(h = 10^(−3)) ≅ 0.999977909503001

This approximation has an absolute error below 10^(−9). In fact, according to Figure 8-7, the absolute error is much smaller than 10^(−9) and is about 10^(−16), indicating that almost all 16 digits of our approximation are correct. Note that we would not be able to achieve this accurate result with the composite trapezoidal method, which at best gives an approximation with an absolute error of about 10^(−14).

The methodology presented in Example 8-14 not only allows identifying the region where the total error is dominated by the truncation errors (and consequently the total error follows a power law E(h) = Kh^q), but can also determine the order of the composite method used. Often, the order of the method is known. But, as discussed previously, if the function to be integrated is not sufficiently differentiable, the results from Table 8-4 are in general not valid. In such cases, the order of the method needs to be estimated.

This estimation can be obtained by analysing the quantity ∆𝐼(ℎ) = |𝐼(ℎ) − 𝐼(2ℎ)|. From Richardson’s
formula (8-53) and 𝐸(ℎ) = 𝐾ℎ𝑞 it follows:
ΔI(h) = |I(h) − I(2h)| ≅ [2^q − 1] K h^q = K̃ h^q

Equation (8-59) shows that fitting a power law to ΔI(h) allows the estimation of the order q of the quadrature formula. Once the order q is known, the correct error estimate of I(h) can be obtained from Richardson's formula. This approach is illustrated in Example 8-15.

ESTIMATING THE ORDER OF CONVERGENCE OF A COMPOSITE METHOD


Let us estimate the order of the truncation error of the composite Simpson-1/3 rule applied to:

I = ∫_0^2 √(x + sin x) dx

Start by noting that the first derivative of the integrand is:

d/dx √(x + sin x) = (cos x + 1) / (2 √(x + sin x))
The first derivative diverges at x = 0 and consequently we know that the results of Table 8-4 are not valid. This implies that the order of convergence of Simpson's 1/3 rule does not necessarily have to be equal to four.

To estimate the correct order of convergence, we start by applying the composite Simpson's 1/3 rule for different values of h to produce approximations I(h) of the definite integral I. We choose the values h = 1, 0.5, 0.25, … that correspond to m = 1, 2, 4, … panels⁶. In Octave:
>> function I = simp(f, a, b, m)
>> h = (b-a)/(2*m);
>> x1 = linspace(a+h, b-h, m);
>> x2 = linspace(a+2*h, b-2*h, m-1);
>> I = h/3*(f(a)+f(b)+4*sum(f(x1))+2*sum(f(x2)));
>> endfunction
>> f = @(x) sqrt(x+sin(x));
>> m = 1;
>> I = []; h = [];
>> for i = 1:25
>> I = [I; simp(f, 0, 2, m)];
>> h = [h; 2/(2*m)];
>> m = m*2;
>> endfor
Next, we compute ΔI(h) = |I(h) − I(2h)| as a function of h and fit a power law ΔI(h) = K̃ h^q.
>> deltaI = abs(I(1:end-1) - I(2:end));
>> A = [h(1:end-1).^0 log(h(1:end-1))];
>> a = A\log(deltaI)
a =
-2.6055
1.4997

From the fit we estimate the order of the composite Simpson's 1/3 rule to be about 1.5. We further note (Figure 8-8) that in this case the quantity ΔI(h) follows a power law over the full range of values of h. Consequently, the truncation error also follows a power law E(h) = K h^1.5.

<Figure 8-8 here>


Figure 8-8 Truncation error estimations using Richardson's error formula for the composite Simpson's 1/3 rule applied to the definite integral I = ∫_0^2 √(x + sin x) dx.

Knowing the order of convergence, we can use Richardson’s formula (8-53) the usual way. For example,
we can use the two approximations 𝐼(0.02) and 𝐼(0.01) to estimate the truncation error of 𝐼(0.01):

E(0.01) ≅ | [I(0.01) − I(0.02)] / [(0.02/0.01)^1.5 − 1] | ≅ | (2.4902485 − 2.4900386) / (2^1.5 − 1) | ≅ 1·10^(−4)

⁶ Recall that for Simpson's 1/3 rule, h is related to m by h = (b − a)/(2m).

We can conclude that:

I = ∫_0^2 √(x + sin x) dx = 2.4902 ± 0.0001

Note that if we had wrongly used an order of convergence equal to four, the value indicated by Table 8-4, we would have estimated the truncation error to be about 1·10^(−5), which is a factor of ten too small.

8.3.6. Truncation error of the composite trapezoidal method


Before leaving the discussion about Newton-Cotes quadrature methods, let us discuss in more detail the
truncation error of the trapezoidal method. This discussion will be used to develop another type of
quadrature formula discussed in the next section 8.4.

Example 8-12 demonstrated that there must exist cases where the order of the truncation error of the composite trapezoidal method is higher than two. This comes from the very particular form that the truncation error of the composite trapezoidal method takes. We can demonstrate the following theorem.

TRUNCATION ERROR OF COMPOSITE TRAPEZOIDAL METHOD


For a function f that is sufficiently differentiable, one has

∫_a^b f(x) dx = (h/2) [ f(a) + f(b) + 2 ∑_{i=1}^{m−1} f(x_i) ] − (h²/12) [ f′(b) − f′(a) ] + O(h⁴)
with h = (b − a)/m and m an integer.

In fact, the truncation error 𝐸 of the composite trapezoidal rule can be written as

E = −(h²/12) [ f′(b) − f′(a) ] + (h⁴/720) [ f^(3)(b) − f^(3)(a) ] − ⋯

Only even orders of ℎ appear. This result follows from the Euler-Maclaurin formula and the interested
reader can find more details in “A first course in numerical analysis” by Ralston and Rabinowitz.

Theorem 8-5 explains why in Example 8-12 the composite trapezoidal rule is of fourth order. Indeed, we
have in this particular case that:

f′(x) = d/dx { e^x [1 − cos(2πx)] } = e^x [ 2π sin(2πx) − cos(2πx) + 1 ]

and we realize that f′(0) = f′(1) = 0, such that, according to Equation (8-64), the composite trapezoidal method is of order four.

Theorem 8-5 can be used to dramatically increase the precision of the composite trapezoidal method if the derivatives of the function f to be integrated are known. For a function given explicitly, this task is possible. If the function is not available in explicit form (e.g. it is tabulated or the output of another algorithm), the derivatives of f can be estimated using numerical differentiation; proceeding this way results in Gregory's algorithm. The next example illustrates the first approach.

INCREASING THE PRECISION OF THE COMPOSITE TRAPEZOIDAL RULE


Let us use Theorem 8-5 to improve the composite trapezoidal rule to a fourth-order method for the following definite integral:

∫_0^π sin(x) dx = 2
As the first derivative of sin x is cos x, we obtain, based on Equation (8-64) and after simplification, a first corrected trapezoidal rule:

∫_0^π sin(x) dx = T(h) + h²/6 + O(h⁴)
where 𝑇(ℎ) stands for the composite trapezoidal method with step size ℎ calculated according to
Equation (8-38).

We can further correct the composite trapezoidal method using Equation (8-65) to obtain a second corrected trapezoidal rule:

∫_0^π sin(x) dx = T(h) + h²/6 + h⁴/360 + O(h⁶)
Successive applications of the composite trapezoidal (8-38) and corrected trapezoidal rules (8-68) and
(8-69), followed by calculating the true error by comparing the numerical approximations with the true
value, allows constructing the following graph:

<Figure 8-9 here>


Figure 8-9 True errors of three quadrature methods applied to the definite integral I = ∫_0^π sin(x) dx.

As expected, the first and second corrected trapezoidal rules are now of order four and six respectively, as shown by the fits of a power law over the errors.
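A minimal Octave sketch of the first corrected rule (added here; it recomputes the composite trapezoidal rule directly from Equation (8-38)) could look like this:
>> f = @(x) sin(x);
>> m = 16; h = pi/m; x = linspace(0, pi, m+1);
>> T  = h/2*(2*sum(f(x)) - f(0) - f(pi));  % composite trapezoidal rule (8-38)
>> T1 = T + h^2/6;                         % first corrected trapezoidal rule
>> [abs(T - 2), abs(T1 - 2)]               % true errors: roughly 6.4e-03 and 4.1e-06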

Correcting the composite trapezoidal rule with the explicit expression of the truncation error according to Equation (8-65) allows deriving, at relatively low cost, higher order methods. However, we must compute the derivatives of the function to be integrated. The integration method presented in the next section, 8.4, achieves higher order methods at a very similar cost, but without the need to compute the derivatives of the function to be integrated.

8.4. Romberg integration


Richardson's error formula (8-53) provides a precise estimate of the truncation error of a composite Newton-Cotes formula. Considering its precision, it is tempting to use that estimate to correct the numerical approximation of the definite integral with this error term. This idea turns out to be valid, and its application is the basis of Romberg's integration method⁷.

8.4.1. Richardson extrapolation


Let us consider a Newton-Cotes composite quadrature formula to approximate a definite integral. In
general, such a formula will have a truncation error that can be expanded into a power series in ℎ, the
distance between two interpolation points, as

∫_a^b f(x) dx = I(h) + K h^q + K′ h^(q+1) + ⋯
where I(h) stands for the composite quadrature formula applied with step h and K, K′, … are constants that depend on the integration interval and on derivatives of f evaluated at some points between a and b, but not on h. The number q is the order of the method (for example q = 2 for the composite trapezoidal method, or q = 4 for the composite Simpson's 1/3 rule).

We can apply our composite method with twice as many panels, i.e. with ℎ/2, to obtain

∫_a^b f(x) dx = I(h/2) + K (h/2)^q + K′ (h/2)^(q+1) + ⋯
Multiplying Equation (8-71) by 2^q and subtracting Equation (8-70) from it yields

(2^q − 1) ∫_a^b f(x) dx = 2^q · I(h/2) − I(h) + O(h^(q+1))
Note that we eliminated the constant 𝐾 by performing this operation. We can rearrange this expression
as:

∫_a^b f(x) dx = [2^q · I(h/2) − I(h)] / (2^q − 1) + O(h^(q+1))
We obtain a new numerical approximation of our definite integral, whose truncation error is of one order higher. This result is summarised in Richardson's extrapolation theorem:

RICHARDSON EXTRAPOLATION

Let I(h) be a composite Newton-Cotes quadrature method of order O(h^q) in h, the distance between two interpolation points, for approximating a definite integral ∫_a^b f(x) dx. If I(h) and I(h/2) are two approximations obtained using h and h/2, then

R(h) = [2^q · I(h/2) − I(h)] / (2^q − 1)

is an approximation of ∫_a^b f(x) dx of order O(h^(q+1)).

⁷ Werner Romberg (1909–2003), German mathematician and physicist.

In short, Richardson extrapolation allows computing an improved approximation from two approximations computed with two different values of h (obtained by doubling the number of panels). The order of the truncation error of this improved approximation is one higher than that of the original method.
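As a small added illustration (not part of the original text), Richardson extrapolation of two composite trapezoidal estimates of the integral of sin(x) over [0, π] already gives a markedly better value:
>> f = @(x) sin(x);
>> trap = @(m) (pi/m)/2*(2*sum(f(linspace(0, pi, m+1))) - f(0) - f(pi));
>> q = 2;                          % order of the composite trapezoidal rule
>> Ih  = trap(8);                  % step h = pi/8
>> Ih2 = trap(16);                 % step h/2 = pi/16
>> R = (2^q*Ih2 - Ih)/(2^q - 1)    % extrapolated value, close to 2
R = 2.0000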

8.4.2. Romberg quadrature method


Romberg integration consists in creating successive improved approximations based on Richardson extrapolation. If one starts with a Newton-Cotes formula of order q, one can obtain approximations of order q + 1, q + 2, q + 3, ….

It turns out that if one starts the process with the trapezoidal method, then one can even obtain a gain
of two orders in each step. This comes from the very particular form of the truncation error of the
trapezoidal method. As discussed in section 8.3.6, the numerical approximation of a definite integral can
be written as

∫_a^b f(x) dx = T(h) + K h² + K′ h⁴ + K″ h⁶ + ⋯
where 𝑇(ℎ) is the composite trapezoidal rule. Only even powers of ℎ are present in the series expansion
of the truncation error. Recall that this is a property specific to the trapezoidal rule.

Richardson extrapolation applied to the trapezoidal rule gives

R(h) = [2² · T(h/2) − T(h)] / (2² − 1) = [4 · T(h/2) − T(h)] / 3

but the truncation error will now be of order O(h^(2+2)) = O(h⁴), one order more than if we had used another second-order composite quadrature formula. The reader is invited to verify this statement by repeating the steps that led to Equation (8-74), but using Equation (8-75) instead of Equation (8-70). In fact, this new approximation again involves only even powers of h in the truncation error:

∫_a^b f(x) dx = [4 · T(h/2) − T(h)] / 3 + K̃ h⁴ + K̃′ h⁶ + K̃″ h⁸ + ⋯

Applying Richardson extrapolation again gives an improved approximation, of order 4 + 2 = 6 (again involving only even powers of h for the remaining terms).

Consequently, we can produce successive approximations that improve the order of the truncation error by two in each step:

T_1(h) = [4 · T(h/2) − T(h)] / 3,          error of order O(h⁴)

T_2(h) = [16 · T_1(h/2) − T_1(h)] / 15,    error of order O(h⁶)

T_3(h) = [64 · T_2(h/2) − T_2(h)] / 63,    error of order O(h⁸)


or in general

T_(i+1)(h) = [2^(2i+2) · T_i(h/2) − T_i(h)] / (2^(2i+2) − 1) = [4^(i+1) · T_i(h/2) − T_i(h)] / (4^(i+1) − 1),    error of order O(h^(2(i+2)))
Equation (8-78) is the basis of Romberg integration. We organize the calculation in a table as shown
below:

𝑅11
𝑅21 𝑅22
𝑅31 𝑅32 𝑅33
𝑅41 𝑅42 𝑅43 𝑅44

The first column is calculated with the trapezoidal rule:

R_(j,1) = T( (b − a) / 2^(j−1) )

Each line in the table above corresponds to a certain value ℎ. The first line is for ℎ = 𝑏 − 𝑎, the second is
for ℎ = (𝑏 − 𝑎)/2 and so on. The approximations, listed in the first column, are of order 𝑂(ℎ2 ) since
they are computed using the composite trapezoidal rule.

The second column contains approximations of order 𝑂(ℎ4 ) computed using Richardson extrapolation
based on the first column:

R_(j,2) = [4 · R_(j,1) − R_(j−1,1)] / (4 − 1)

Similarly, the third column of order 𝑂(ℎ6 ), is computed using Richardson extrapolation using the results
of the second column as:

R_(j,3) = [4² · R_(j,2) − R_(j−1,2)] / (4² − 1)

The fourth column, of order 𝑂(ℎ8 ), is computed using Richardson extrapolation performed on the third
column as:

R_(j,4) = [4³ · R_(j,3) − R_(j−1,3)] / (4³ − 1)

In general, column k is computed using Richardson extrapolation from the previous column k − 1 as

4𝑘−1 ∙ 𝑅𝑗,𝑘−1 − 𝑅𝑗−1,𝑘−1


𝑅𝑗,𝑘 = 𝑗>1
4𝑘−1 − 1
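The following Octave function is a minimal sketch of this scheme (the name romberg and its interface are not from the text; levels is the number of rows of the table):

function R = romberg(f, a, b, levels)
  % Romberg integration: first column = composite trapezoidal rule,
  % further columns obtained by repeated Richardson extrapolation.
  R = zeros(levels);
  for j = 1:levels
    m = 2^(j-1);                  % number of panels for row j, i.e. h = (b-a)/2^(j-1)
    x = linspace(a, b, m+1);
    h = (b - a)/m;
    R(j,1) = h*(sum(f(x)) - (f(a) + f(b))/2);    % composite trapezoidal rule T(h)
    for k = 2:j
      R(j,k) = (4^(k-1)*R(j,k-1) - R(j-1,k-1)) / (4^(k-1) - 1);
    endfor
  endfor
endfunction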

These 𝑅𝑗,𝑘 are approximations of order 𝑂(ℎ²ᵏ). The following example illustrates how to use Romberg integration.

ROMBERG INTEGRATION
Let us estimate the definite integral:

$$I = \int_2^3 \frac{x^3}{\sqrt{x^4 - 1}}\,dx$$

with the Romberg integration approximation 𝑅44 .

We start by computing the approximations of the definite integral 𝐼 using the composite trapezoidal method with 1, 2, 4 and 8 panels, and then produce a table according to Equation (8-83).

𝑘=1 𝑘=2 𝑘=3 𝑘=4


𝑗=1 2.542141444
𝑗=2 2.537384626 2.53579902
𝑗=3 2.536088707 2.535656733 2.535647248
𝑗=4 2.535756024 2.53564513 2.535644357 2.535644311

The approximation 𝑅44 of 𝐼 is consequently:

𝐼 ≅ 𝑅44 = 2.535644311

To estimate the error of 𝑅44, we can use Richardson's error formula with the approximations 𝑅33 and 𝑅43, which gives an error estimate for 𝑅43 (recall that column 𝑘 = 3 is of order six):

$$E(R_{43}) \cong \left|\frac{R_{33} - R_{43}}{2^6 - 1}\right| \cong 5 \cdot 10^{-8}$$
We know that 𝑅44 is of order eight, so its error is smaller than the error of 𝑅43. Since we do not have two estimates of order eight, we have no choice but to use the error estimate of 𝑅43, even though we know the actual error of 𝑅44 is smaller:

𝐼 ≅ 2.53564431 ± 5 ∙ 10⁻⁸

In fact, comparing with the true value of 𝐼, which is 2√5 − √15/2, we get a true error of 𝑅44 of about 3 ∙ 10⁻⁸.
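Using the romberg sketch given earlier (a hypothetical helper, not part of the text), this example can be reproduced as follows:

>> R = romberg(@(x) x.^3 ./ sqrt(x.^4 - 1), 2, 3, 4);
>> R(4,4)                          % ≈ 2.535644311, the value R44 of the table above
>> 2*sqrt(5) - sqrt(15)/2          % true value of the integral, for comparison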

8.5. Gauss quadrature


In this chapter we discussed numerical integration formulas based on polynomial interpolation. The idea
is to replace the function 𝑓(𝑥) we want to integrate by a polynomial 𝑃𝑛 (𝑥) that interpolates the
function. We introduced Newton-Cotes formulas in which 𝑛 + 1 interpolating points 𝑥𝑜 , 𝑥1 , … , 𝑥𝑛 are
uniformly distributed in the integration interval [𝑎, 𝑏]. One might ask whether, by choosing the interpolation points appropriately, we can obtain quadrature formulas with a higher degree of precision than Newton-Cotes formulas. Figure 8-10 suggests that using different interpolation points to construct a quadrature formula can indeed reduce the error. In Figure 8-10, two interpolation points are used, resulting in the linear polynomial 𝑃1(𝑥). Figure 8-10-a shows the error induced when the two interpolation points correspond to the closed Newton-Cotes formula. When the two interpolation points are moved as shown in Figure 8-10-b, the error can be reduced significantly (to almost zero in the example of the figure), because the quadrature formula overestimates the actual definite integral in some parts and underestimates it in others.

<Figure 8-10 here>


Figure 8-10 Reducing the error of a quadrature formula by placing the interpolation points smartly. a) Error of the closed two-point Newton-Cotes formula. b) By placing the two interpolation points differently, the error of the quadrature formula can be reduced.

Once the locations of the interpolation points 𝑥𝑖 are known, Equation (8-14) can be used to compute the weights of the quadrature method and Equation (8-13) can be used to approximate the definite integral. In this section, a method based on orthogonal polynomials is presented that identifies how to place the interpolation points 𝑥𝑖 so as to reduce the error of the resulting quadrature formula.

8.5.1. Legendre polynomials


In chapter 5 we discussed the concept of orthogonality and how polynomials of degree 𝑛 can be considered as vectors of a real-valued vector space of dimension 𝑛 + 1. Orthogonality is defined using an inner product. In this chapter, we employ the standard inner product between two polynomials 𝑃𝑛(𝑥) and 𝑄𝑛(𝑥), defined as follows:

$$\langle P_n(x) \mid Q_n(x) \rangle = \int_{-1}^{+1} P_n(x)\, Q_n(x)\, dx$$

In chapter 5, we discussed how, using Gram-Schmidt’s algorithm, a set of orthogonal polynomials, the
so-called Legendre polynomials, can be constructed. The following definition gives a practical way to
compute these polynomials.

DEFINITION 8-4. LEGENDRE POLYNOMIALS


The set of Legendre polynomials 𝑝𝑖 (𝑥) is defined by:

$$p_i(x) = \frac{1}{2^i \, i!} \, \frac{d^i}{dx^i}\left[(x^2 - 1)^i\right]$$

with 𝑖 = 0, 1, 2, ….

The Legendre polynomial 𝑝𝑖(𝑥) is of degree 𝑖. Equation (8-89) is known as Rodrigues' formula⁸. Using this equation, one can directly compute these polynomials. Note that the Legendre polynomials defined by Rodrigues' formula (8-89) are normalized such that 𝑝𝑖(𝑥 = 1) = 1. There exist other definitions of Legendre polynomials that follow different conventions (for example, the Legendre polynomials we constructed in chapter 5 using Gram-Schmidt's algorithm differ slightly).

LEGENDRE POLYNOMIALS
Let us determine the first four Legendre polynomials.

From Equation (8-89) follows:

$$p_0(x) = 1$$

$$p_1(x) = \frac{1}{2}\frac{d}{dx}(x^2 - 1) = x$$

$$p_2(x) = \frac{1}{2^2\, 2!}\frac{d^2}{dx^2}\left[(x^2 - 1)^2\right] = \frac{1}{2}(3x^2 - 1)$$

$$p_3(x) = \frac{1}{2^3\, 3!}\frac{d^3}{dx^3}\left[(x^2 - 1)^3\right] = \frac{1}{2}x(5x^2 - 3)$$
The reader is encouraged to compute some more examples.
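As a complement, the following Octave sketch generates the coefficient vectors of these polynomials using Bonnet's recurrence (𝑖 + 1)𝑝𝑖+1(𝑥) = (2𝑖 + 1)𝑥 𝑝𝑖(𝑥) − 𝑖 𝑝𝑖−1(𝑥), a standard recurrence that produces the same polynomials as Rodrigues' formula (it is given here only as a cross-check and is not derived in this text):

% Coefficient vectors (highest power first); P{k} holds p_{k-1}(x)
P = {1, [1 0]};                       % p_0(x) = 1 and p_1(x) = x
for i = 1:3
  xp   = [P{i+1} 0];                  % coefficients of x * p_i(x)
  prev = [zeros(1, numel(xp) - numel(P{i})) P{i}];   % p_{i-1}(x) padded to the same length
  P{i+2} = ((2*i+1)*xp - i*prev) / (i+1);            % Bonnet's recurrence
endfor
P{3}   % [1.5 0 -0.5]   -> (3x^2 - 1)/2
P{4}   % [2.5 0 -1.5 0] -> x(5x^2 - 3)/2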

Legendre polynomials have several interesting properties. The ones we will use in this section are
summarized in the following theorem.

LEGENDRE POLYNOMIALS
The Legendre polynomials, as defined by Equation (8-89), have the following properties:

1. 𝑝𝑖(𝑥) is of degree 𝑖
2. 𝑝𝑖(𝑥) has 𝑖 distinct roots in [−1, +1] for 𝑖 > 0
3. The set {𝑝0(𝑥), 𝑝1(𝑥), … , 𝑝𝑛(𝑥)} forms an orthogonal basis of the real-valued vector space of real polynomials of degree 𝑛 or less, that is

$$\langle p_i(x) \mid p_j(x) \rangle = \int_{-1}^{+1} p_i(x)\, p_j(x)\, dx = \begin{cases} 0, & i \neq j \\ \neq 0, & i = j \end{cases}$$

⁸ Benjamin Olinde Rodrigues (1795 – 1851), French banker, mathematician and social reformer.

The reader is encouraged to verify this theorem for the particular case of 𝑛 = 2 (the first three Legendre
polynomials of Example 8-18).
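A quick numerical check of this case can be done in Octave with polynomial coefficient vectors (a sketch; the variable names are arbitrary):

>> p1 = [1 0];                 % p_1(x) = x
>> p2 = [1.5 0 -0.5];          % p_2(x) = (3x^2 - 1)/2
>> q  = polyint(conv(p1, p2));               % antiderivative of p_1(x)*p_2(x)
>> polyval(q, 1) - polyval(q, -1)            % <p_1|p_2>, evaluates to 0 (orthogonality)
>> roots(p2)                                 % two distinct roots, +-1/sqrt(3), inside [-1, +1]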

8.5.2. Gauss quadrature formulas


Gauss quadrature takes advantage of the orthogonality of the Legendre polynomials to obtain an approximation of a definite integral with a higher degree of precision than a Newton-Cotes formula with the same number of interpolation points.

Let us first build an intuitive understanding of the key idea. The problem at hand is to develop an approximation of the definite integral:

$$I = \int_{-1}^{+1} f(x)\,dx \cong \int_{-1}^{+1} P_n(x)\,dx$$

by using polynomial interpolation to approximate the function 𝑓(𝑥) at 𝑛 + 1 interpolation points. Note that, as a first step, we restrict ourselves to the integration interval [−1, +1].

Contrary to the case of the Newton-Cotes formulas, we do not impose any condition on the choice of the 𝑛 + 1 interpolation points, except that they must all be distinct and lie within the integration interval [−1, +1]. Regardless of the choice of the interpolation points, such a quadrature formula will have a degree of precision of at least 𝑛. Can we choose these interpolation points in a smart way, such that the degree of precision of the quadrature formula is higher than 𝑛?

Let us first note that the definite integral of the interpolating polynomial in Equation (8-91) can be seen as an inner product between the polynomials 𝑃𝑛(𝑥) and 𝑄(𝑥) = 1:

$$\int_{-1}^{+1} P_n(x)\,dx = \langle 1 \mid P_n(x) \rangle$$

Equation (8-92) leads us to the idea behind Gauss quadrature. Using the orthogonal Legendre polynomials, we can for example write:

$$\int_{-1}^{+1} P_n(x)\,dx = \langle 1 \mid P_n(x) \rangle + \langle p_1(x) \mid p_{n+1}(x) \rangle$$

because ⟨𝑝1(𝑥)|𝑝𝑛+1(𝑥)⟩ = 0, as the Legendre polynomials 𝑝1(𝑥) and 𝑝𝑛+1(𝑥) are orthogonal. We can in fact add a more general term ⟨𝑆(𝑥)|𝑝𝑛+1(𝑥)⟩, where

$$S(x) = \sum_{i=0}^{n} a_i\, p_i(x)$$

is a polynomial formed as a linear combination of the first 𝑛 + 1 Legendre polynomials. The polynomial 𝑆(𝑥) is of degree at most 𝑛. The term ⟨𝑆(𝑥)|𝑝𝑛+1(𝑥)⟩ still evaluates to zero, because all polynomials 𝑝𝑖(𝑥), 𝑖 = 0, 1, … , 𝑛 are orthogonal to 𝑝𝑛+1(𝑥). Consequently, we can rewrite Equation (8-92) as:

$$\int_{-1}^{+1} P_n(x)\,dx = \langle 1 \mid P_n(x) \rangle + \langle S(x) \mid p_{n+1}(x) \rangle = \int_{-1}^{+1} \left[P_n(x) + S(x)\,p_{n+1}(x)\right] dx$$

The polynomial 𝑄(𝑥) = 𝑃𝑛(𝑥) + 𝑆(𝑥)𝑝𝑛+1(𝑥) is of degree 2𝑛 + 1. If we choose the 𝑛 + 1 interpolation points that serve to build the interpolating polynomial 𝑃𝑛(𝑥) to be the 𝑛 + 1 distinct roots 𝑥𝑖 of the Legendre polynomial 𝑝𝑛+1(𝑥), then we obtain:

$$Q(x_i) = P_n(x_i) + S(x_i)\,p_{n+1}(x_i) = P_n(x_i) \qquad \text{for all } i = 0, 1, \ldots, n$$

Equation (8-96) shows that the polynomial 𝑄(𝑥), of degree 2𝑛 + 1, is interpolated by 𝑃𝑛(𝑥) at the interpolation points 𝑥𝑖. Furthermore, Equation (8-95) shows that the definite integral of this polynomial 𝑄(𝑥) from −1 to +1 is evaluated without any truncation error by the quadrature formula (8-91) constructed with the interpolation points 𝑥𝑖.

This result can be summarized by the Gauss-Legendre quadrature formula theorem.

GAUSS-LEGENDRE QUADRATURE
The quadrature formula for the definite integral $I = \int_{-1}^{+1} f(x)\,dx$, constructed from the 𝑛 + 1 interpolation points 𝑥𝑖 that are the 𝑛 + 1 distinct roots of the Legendre polynomial 𝑝𝑛+1(𝑥), has a degree of precision of 2𝑛 + 1.

The following example illustrates how Theorem 8-8 can be used to construct Gauss quadrature
formulas.

TWO POINT GAUSS QUADRATURE FORMULA


Let us develop the Gauss quadrature formula based on two interpolation points.

According to Theorem 8-8, the two-point Gauss quadrature formula is obtained by using the roots of the Legendre polynomial 𝑝2(𝑥) as interpolation points.

From Example 8-18, we know 𝑝2(𝑥). Its two roots are 𝑥0 = −√(1/3) and 𝑥1 = +√(1/3). Using them as interpolation points, we obtain the interpolating polynomial 𝑃1(𝑥) as

$$P_1(x) = f(x_0)\,L_0(x) + f(x_1)\,L_1(x)$$

with the two Lagrange polynomials

$$L_0(x) = \frac{x - x_1}{x_0 - x_1} \qquad \text{and} \qquad L_1(x) = \frac{x - x_0}{x_1 - x_0}$$

Using Equation (8-13), the Gauss quadrature formula writes

$$I = \int_{-1}^{+1} f(x)\,dx \cong \omega_0\, f(x_0) + \omega_1\, f(x_1)$$

with the two weights

$$\omega_0 = \int_{-1}^{+1} L_0(x)\,dx = 1 \qquad \text{and} \qquad \omega_1 = \int_{-1}^{+1} L_1(x)\,dx = 1$$

Combining everything results in the two-point Gauss quadrature formula

$$I = \int_{-1}^{+1} f(x)\,dx \cong f\!\left(-\sqrt{\tfrac{1}{3}}\right) + f\!\left(+\sqrt{\tfrac{1}{3}}\right)$$

According to Theorem 8-8, this quadrature formula has a degree of precision of 3 (here 𝑛 = 1), which is the same degree of precision as Simpson's 1/3 rule, but only two function evaluations are needed.
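This can be checked numerically with a short Octave sketch (the monomials below are chosen only to illustrate the degree of precision):

>> G2 = @(f) f(-sqrt(1/3)) + f(sqrt(1/3));
>> G2(@(x) x.^2)   % 2/3, equal to the exact integral of x^2 over [-1, +1]
>> G2(@(x) x.^3)   % 0,   equal to the exact integral of x^3
>> G2(@(x) x.^4)   % 2/9, while the exact value is 2/5 -> the degree of precision is exactly 3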

Similarly to Example 8-19, it is possible to compute the values of the interpolation points 𝑥𝑖 and their corresponding weights 𝜔𝑖 for cases with more than two interpolation points. Table 8-5 lists these parameters for the Gauss quadrature methods with two to five interpolation points.

𝑛 Weights 𝜔𝑖 Gauss interpolation points 𝑥𝑖


2 1.0000000000000000 -0.5773502691896257
1.0000000000000000 +0.5773502691896257
3 0.5555555555555556 -0.7745966692414834
0.8888888888888888 0.0000000000000000
0.5555555555555556 +0.7745966692414834
4 0.3478548451374538 -0.8611363115940526
0.6521451548625461 -0.3399810435848563
0.6521451548625461 +0.3399810435848563
0.3478548451374538 +0.8611363115940526
5 0.2369268850561891 -0.9061798459386640
0.4786286704993665 -0.5384693101056831
0.5688888888888889 0.0000000000000000
0.4786286704993665 +0.5384693101056831
0.2369268850561891 +0.9061798459386640

Table 8-5 Weights and interpolation points of Gauss quadrature formulas

Table 8-5, together with Equation (8-13), allows computing approximations of definite integrals over an
interval [−1, +1]. The following example illustrates this.

GAUSS QUADRATURE FOR A DEFINITE INTEGRAL OVER THE INTERVAL [−1, +1]


Let us use the two-point Gauss quadrature formula to approximate the following definite integral:

$$I = \int_{-1}^{1} e^{-\frac{x^2}{2}}\,dx$$

From Equation (8-13) and Table 8-5 it follows:

$$I = \int_{-1}^{1} e^{-\frac{x^2}{2}}\,dx \cong \omega_0\, f(x_0) + \omega_1\, f(x_1) \cong 1.692963$$

with 𝜔0 = 1, 𝜔1 = 1 and 𝑥0 = −0.5773502691896257, 𝑥1 = +0.5773502691896257.

In case the definite integral is over a general interval [𝑎, 𝑏], a change of variable needs to be done first:

$$\int_a^b f(x)\,dx = \int_{-1}^{+1} f\!\left(\frac{(b-a)\,t + b + a}{2}\right) \frac{b-a}{2}\,dt$$
The following example illustrates this.

GAUSS QUADRATURE FOR A DEFINITE INTEGRAL OVER AN INTERVAL [𝑎, 𝑏]


Let us use the four-point Gauss quadrature formula to approximate the following definite integral:

$$I = \int_1^2 \ln(x)\,dx$$

In a first step, we must perform a change of variable to bring the integration interval to [−1, +1]:

$$\int_1^2 \ln(x)\,dx = \int_{-1}^{+1} \ln\!\left(\frac{t+3}{2}\right) \frac{1}{2}\,dt$$
Next, we use Equation (8-13) and Table 8-5 to approximate our definite integral:

$$\int_1^2 \ln(x)\,dx = \int_{-1}^{+1} \ln\!\left(\frac{t+3}{2}\right) \frac{1}{2}\,dt = \sum_{i=1}^{4} \omega_i\, f(x_i) \cong 0.3862944969$$

with 𝑓(𝑥𝑖) = (1/2) ln((𝑥𝑖 + 3)/2) and the coefficients 𝜔𝑖 and interpolation points 𝑥𝑖 taken from Table 8-5 (the row for 𝑛 = 4).
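The same computation can be carried out in Octave as a short sketch (the nodes and weights are copied from Table 8-5; the variable names are arbitrary):

>> w = [0.3478548451374538 0.6521451548625461 0.6521451548625461 0.3478548451374538];
>> t = [-0.8611363115940526 -0.3399810435848563 +0.3399810435848563 +0.8611363115940526];
>> a = 1; b = 2; f = @(x) log(x);
>> x = ((b - a)*t + b + a)/2;             % map the nodes from [-1, +1] to [a, b]
>> I = (b - a)/2 * sum(w .* f(x))         % ≈ 0.3862945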

8.6. Lab: Computing a numerical approximation of the error function


In statistics, an important integral that often needs to be evaluated is the following one:

$$\mathrm{Erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-x^2}\,dx$$

which is known as the error function⁹. Our aim is to find an approximation of the value Erf(𝑧 = 3) with an absolute error lower than 10⁻¹². We will use the composite Simpson 1/3 rule.

⁹ The word “error” in “error function” is not related in any way to numerical errors.

We start by defining the function we need to integrate:


>> f = @(x) 2*exp(-x.^2)/sqrt(pi);

To compute numerically the integral (8-108) using Simpson's 1/3 composite rule, we use the function simp defined in Example 8-15. The following code snippet approximates integral (8-108) using Simpson's 1/3 composite rule (recall that for Simpson's 1/3 rule, ℎ is related to 𝑚 by ℎ = (𝑏 − 𝑎)/(2𝑚), which in our case is ℎ = 3/(2𝑚)):

>> m = 1;
>> I = []; h = [];
>> for i = 1:25
>> I = [I; simp(f, 0, 3, m)];
>> h = [h; 3/(2*m)];
>> m = m*2;
>> endfor

The errors of the approximations are computed using Richardson's formula (8-53):
>> err = abs(I(2:end) - I(1:end-1)) / (2^4-1);

considering that Simpson’s 1/3 composite rule is of order four. The results are best displayed in a log-log
plot:
>> loglog(h(2:end), err)

<Figure 8-11 here>


Figure 8-11 Estimated truncation errors, using Richardson’s formula, of the numerical approximations of Erf(𝑧 = 3) produced by
Simpson’s composite 1/3 rule and the trapezoidal composite rule.

Inspection of Figure 8-11 allows concluding that Richardson's formula is valid for values of ℎ ranging from about 10⁻⁵ to 0.5. For comparison purposes, Figure 8-11 also plots the estimated errors of the numerical approximations produced by the composite trapezoidal rule. We observe that Richardson's formula is valid for values of ℎ ranging from about 10⁻³ to 10⁻¹.

If an approximation with an error lower than 10⁻¹² is desired, we must choose a value of ℎ between 10⁻² and 10⁻³ if we use Simpson's rule, and between 10⁻⁴ and 10⁻⁵ if we use the trapezoidal rule. But if we target an approximation with an error lower than 10⁻¹⁵, only Simpson's 1/3 rule can yield such an approximation. The trapezoidal rule cannot provide such an approximation because the round-off errors take over before this accuracy can be reached.

Let us compute an approximation of Erf(𝑧 = 3) with an absolute error lower than 10⁻¹² with Simpson's composite rule. We choose ℎ = 10⁻³. This corresponds to:
>> m = 3/(2*1E-3)
m = 1500

and consequently, an approximation of Erf(𝑧 = 3), with an error lower than 10⁻¹², is:
>> format long
>> simp(f, 0, 3, 1500)
ans =
0.999977909503001
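As a sanity check, the result can be compared with Octave's built-in erf function (assuming it is available in the installed Octave version); the two values should agree to about 10⁻¹²:

>> erf(3)          % built-in error function, for comparison
ans = 0.999977909503001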
