In20 S2 MA1024

5 Interpolation and Approximation


5.1 Interpolation
• Interpolation is a type of estimation, a method of constructing (finding) new data points
based on the range of a discrete set of known data points.

• It is the process of deriving a simple function from a set of discrete data points so that
the function passes through all the given data points (i.e. reproduces the data points
exactly) and can be used to estimate data points in-between the given ones.

• Given any function, defined and continuous on a closed and bounded interval, there exists
a polynomial that is as “close” to the given function as desired.

• For n + 1 data points with distinct x-values, there is one and only one polynomial of
degree at most n that passes through all the points.

Figure 5.1

Theorem 5.1 – Weierstrass Approximation Theorem. Suppose that f is defined and continuous on [a, b]. For
each ε > 0, there exists a polynomial P(x) with the property that

|f(x) − P(x)| < ε,

for all x in [a, b].


5.1.1 Newton’s divided-difference interpolating polynomials


• Suppose that Pn(x) is the nth interpolating polynomial that agrees with the function f at
the distinct numbers x0, x1, . . . , xn.

• The polynomial Pn (x) has the form

Pn (x) = a0 + a1 (x − x0 ) + a2 (x − x0 )(x − x1 ) + · · · + an (x − x0 ) . . . (x − xn−1 ), (5.1)

for appropriate constants a0, a1, . . . , an.

• To find the constants a0, a1, . . . , an:

– set x = x0 :
a0 = Pn (x0 ) = f (x0 ) (5.2)
– set x = x1 :

f (x1 ) = Pn (x1 ) = a0 + a1 (x1 − x0 ) = f (x0 ) + a1 (x1 − x0 ) (5.3)


a1 = [f(x1) − f(x0)] / (x1 − x0) (5.4)

– set x = x2 :

f(x2) = Pn(x2) = f(x0) + a1(x2 − x0) + a2(x2 − x0)(x2 − x1) (5.5)

a2 = { [f(x2) − f(x1)]/(x2 − x1) − [f(x1) − f(x0)]/(x1 − x0) } / (x2 − x0) (5.6)

– Likewise, the remaining constants can be identified.

• We now introduce the divided-difference notation.

• The zeroth divided difference of the function f with respect to xi , denoted f [xi ], is
simply the value of f at xi :
f [xi ] = f (xi ) (5.7)

• The first divided difference of f with respect to xi and xi+1 is denoted f [xi , xi+1 ] and
defined as
f[xi, xi+1] = (f[xi+1] − f[xi]) / (xi+1 − xi) (5.8)

• The second divided difference, f [xi , xi+1 , xi+2 ], is defined as

f[xi, xi+1, xi+2] = (f[xi+1, xi+2] − f[xi, xi+1]) / (xi+2 − xi) (5.9)

• The kth divided difference relative to xi, xi+1, xi+2, . . . , xi+k is

f[xi, xi+1, . . . , xi+k] = (f[xi+1, xi+2, . . . , xi+k] − f[xi, xi+1, . . . , xi+k−1]) / (xi+k − xi) (5.10)

• The generation of the divided differences is outlined below.


Figure 5.2

• Note that the coefficients of the polynomial in equation (5.1) are a0 = f[x0], a1 = f[x0, x1],
a2 = f[x0, x1, x2], and in general

ak = f[x0, x1, x2, . . . , xk] (5.11)

• So Pn (x) can be rewritten in a form called Newton’s Divided Difference:

Pn(x) = f[x0] + Σ_{k=1}^{n} f[x0, x1, . . . , xk](x − x0)(x − x1) . . . (x − xk−1) (5.12)
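The recurrences (5.7)–(5.10) and the nested form of (5.12) translate directly into code. Below is a minimal Python sketch; divided_differences and newton_eval are our own illustrative names, not notation from the notes.

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    table = list(ys)          # zeroth divided differences, eq. (5.7)
    coeffs = [table[0]]
    for k in range(1, n):
        # Overwrite the column in place with the kth divided differences, eq. (5.10).
        for i in range(n - k):
            table[i] = (table[i + 1] - table[i]) / (xs[i + k] - xs[i])
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate Pn(x) of eq. (5.12) in nested (Horner-like) form."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result
```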

5.1.2 Lagrange Interpolating Polynomials


• Newton interpolating polynomials can be rearranged in the following manner.
• When n = 1,
P1(x) = a0 + a1(x − x0) = f(x0) + [f(x1) − f(x0)]/(x1 − x0) · (x − x0) (5.13)
      = f(x0) + [f(x1)/(x1 − x0)](x − x0) + [f(x0)/(x0 − x1)](x − x0)
      = [(x − x1)/(x0 − x1)] f(x0) + [(x − x0)/(x1 − x0)] f(x1) (5.14)

Define L1,0(x) = (x − x1)/(x0 − x1) and L1,1(x) = (x − x0)/(x1 − x0). Then

P1(x) = L1,0(x)f(x0) + L1,1(x)f(x1) (5.15)

• Similarly, when n = 2,
P2(x) = [(x − x1)(x − x2)]/[(x0 − x1)(x0 − x2)] · f(x0) + [(x − x0)(x − x2)]/[(x1 − x0)(x1 − x2)] · f(x1) + [(x − x0)(x − x1)]/[(x2 − x0)(x2 − x1)] · f(x2) (5.16)

where the three coefficient factors are denoted L2,0, L2,1 and L2,2, respectively.


So,
P2 (x) = L2,0 (x)f (x0 ) + L2,1 (x)f (x1 ) + L2,2 (x)f (x2 ) (5.17)
• Likewise, we may rearrange the polynomial in the form
Pn(x) = Σ_{k=0}^{n} Ln,k(x) f(xk) (5.18)

where

Ln,k(x) = Π_{i=0, i≠k}^{n} (x − xi)/(xk − xi) (5.19)

• This polynomial is called the nth Lagrange interpolating polynomial.
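Equations (5.18) and (5.19) can be evaluated directly. A minimal Python sketch follows; lagrange_interp is our own illustrative name.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the nth Lagrange interpolating polynomial, eq. (5.18), at x."""
    total = 0.0
    for k in range(len(xs)):
        # Basis polynomial L_{n,k}(x) of eq. (5.19).
        basis = 1.0
        for i in range(len(xs)):
            if i != k:
                basis *= (x - xs[i]) / (xs[k] - xs[i])
        total += ys[k] * basis
    return total
```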

example 5.1 The current in a wire is measured with great precision as a function of time
and the measured values are given in the following table.
Determine i at t = 0.23.

Table 5.1

t 0 0.1250 0.2500 0.3750 0.5000
i 0 6.24 7.75 4.85 0.0000
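One way to attempt this example is with the lagrange_interp sketch above (an assumed helper, not part of the notes); the printed value is the estimate given by the degree-4 interpolant through the five tabulated points.

```python
t_data = [0.0, 0.1250, 0.2500, 0.3750, 0.5000]
i_data = [0.0, 6.24, 7.75, 4.85, 0.0000]
print(lagrange_interp(t_data, i_data, 0.23))  # estimated current at t = 0.23
```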

5.1.3 Errors of Interpolating Polynomials

Theorem 5.2. Suppose x0, x1, . . . , xn are distinct numbers in the interval [a, b] and f ∈
C^(n+1)[a, b]. Then, for each x in [a, b], a number ξ(x) (generally unknown) between x0, x1, . . . , xn,
and hence in (a, b), exists with

f(x) = Pn(x) + [f^(n+1)(ξ(x)) / (n + 1)!] (x − x0)(x − x1) . . . (x − xn),

where Pn (x) is the interpolating polynomial.

• Thus, the error when approximating a function by an interpolating polynomial is

Rn = [f^(n+1)(ξ(x)) / (n + 1)!] (x − x0)(x − x1) . . . (x − xn)
• For this formula to be of use, the function in question must be known and differentiable.
This is not usually the case.
• An alternative is to apply a finite divided difference to approximate the (n + 1)th
derivative.
• Thus, we have
Rn = f [x, xn , xn−1 , . . . , x0 ](x − x0 )(x − x1 ) . . . (x − xn )

• Since this contains the unknown f(x), it cannot be solved for the error.
• However, using an additional data point f (xn+1 ), the error can be estimated as follows.
Rn ≈ f [xn+1 , xn , xn−1 , . . . , x0 ](x − x0 )(x − x1 ) . . . (x − xn )
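A small sketch of this estimate, reusing the divided_differences helper from the earlier sketch (an assumed name). Since divided differences are symmetric in their arguments, f[xn+1, xn, . . . , x0] is simply the top coefficient computed for the augmented point set.

```python
def error_estimate(xs, ys, x_extra, y_extra, x):
    """Estimate Rn(x) ~ f[x_{n+1},...,x_0] * (x - x0)...(x - xn) via one extra point."""
    coeffs = divided_differences(xs + [x_extra], ys + [y_extra])
    prod = 1.0
    for xi in xs:              # product over the original n + 1 nodes only
        prod *= (x - xi)
    return coeffs[-1] * prod   # coeffs[-1] = f[x0, ..., x_{n+1}] by symmetry
```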


5.2 Least Squares Approximation


• Another way to find the “best fit” for the given data set is to derive a curve that minimizes
the discrepancy between the data points and the curve.

• The error, or residual, between the model and the observation is

ei = yi,measured − yi,model (5.20)

• If there are n observations, the total or sum of the residuals is

Σ_{i=1}^{n} ei = Σ_{i=1}^{n} (yi,measured − yi,model) (5.21)

• Finding the model or curve that minimizes the sum of the squares of the errors,

Sr = Σ_{i=1}^{n} ei^2 = Σ_{i=1}^{n} (yi,measured − yi,model)^2, (5.22)

is the least squares approximation.

5.2.1 Linear Regression


• The least squares approach that determines the best approximating straight line is called
linear regression.

• Let the best least squares line for a collection of data {(xi, yi) | i = 1, 2, . . . , n} be

y = a1x + a0

Figure 5.3

• Thus, the sum of the squares of the errors is

Sr = Σ_{i=1}^{n} (yi − (a1xi + a0))^2 (5.23)


• For a minimum to occur, we need both


∂Sr/∂a1 = 2 Σ_{i=1}^{n} (yi − (a1xi + a0))(−xi) = 0 (5.24)

∂Sr/∂a0 = 2 Σ_{i=1}^{n} (yi − (a1xi + a0))(−1) = 0 (5.25)

• Thus, we have
a1 Σ_{i=1}^{n} xi^2 + a0 Σ_{i=1}^{n} xi = Σ_{i=1}^{n} xi yi (5.26)

a1 Σ_{i=1}^{n} xi + a0 n = Σ_{i=1}^{n} yi (5.27)

(using Σ_{i=1}^{n} 1 = n)

• Solve equations (5.26) & (5.27) for a0 , a1 .
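Equations (5.26) and (5.27) form a 2×2 linear system with a closed-form solution. A minimal Python sketch; fit_line is our own illustrative name.

```python
def fit_line(xs, ys):
    """Return (a0, a1) of the least squares line y = a1*x + a0, eqs. (5.26)-(5.27)."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # Cramer's rule on the 2x2 system
    a0 = (sy - a1 * sx) / n
    return a0, a1
```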

example 5.2 In water-resources engineering, the sizing of reservoirs depends on accurate
estimates of water flow in the river that is being impounded. For some
rivers, long-term historical records of such flow data are difficult to obtain.
In contrast, meteorological data on precipitation are often available for many
years past. Therefore, it is often useful to determine a relationship between
flow and precipitation. This relationship can then be used to estimate flows
for years when only precipitation measurements were made. The following
data are available for a river that is to be dammed:

Table 5.2

Precipitation, cm 88.9 108.5 104.1 139.7 127 94 116.8 99.1
Flow, m^3/s 14.6 16.7 15.3 23.2 19.5 16.1 18.1 16.6

Fit a straight line to these data with linear regression and predict the annual
water flow if the precipitation is 120 cm.
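This example can then be attempted with the fit_line sketch above (an assumed helper, not part of the notes):

```python
precip = [88.9, 108.5, 104.1, 139.7, 127.0, 94.0, 116.8, 99.1]
flow = [14.6, 16.7, 15.3, 23.2, 19.5, 16.1, 18.1, 16.6]
a0, a1 = fit_line(precip, flow)
print(a1 * 120 + a0)  # predicted annual flow (m^3/s) at 120 cm of precipitation
```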


5.2.2 Polynomial Regression


• The data do not always fit a straight line well, as shown in the following figure.

Figure 5.4

• Another alternative is to fit polynomials to the data using polynomial regression.

• Consider a set of data, {(xi, yi) | i = 1, 2, . . . , m}, to be fitted with an algebraic polynomial

Pn(x) = an x^n + an−1 x^(n−1) + · · · + a1 x + a0 = Σ_{j=0}^{n} aj x^j (5.28)

of degree n < m − 1.
• As before, we choose the constants a0, a1, . . . , an to minimize the sum of squares of the
errors Sr, where

Sr = Σ_{i=1}^{m} (yi − Pn(xi))^2 (5.29)
   = Σ_{i=1}^{m} yi^2 − 2 Σ_{i=1}^{m} yi Pn(xi) + Σ_{i=1}^{m} (Pn(xi))^2 (5.30)
   = Σ_{i=1}^{m} yi^2 − 2 Σ_{j=0}^{n} aj (Σ_{i=1}^{m} yi xi^j) + Σ_{j=0}^{n} Σ_{k=0}^{n} aj ak (Σ_{i=1}^{m} xi^(j+k)) (5.31)

• For the error to be minimized, it is necessary that ∂Sr/∂aj = 0 for each j = 0, 1, . . . , n.
• Thus,
0 = −2 Σ_{i=1}^{m} yi xi^j + 2 Σ_{k=0}^{n} ak Σ_{i=1}^{m} xi^(j+k) (5.32)

Σ_{k=0}^{n} ak Σ_{i=1}^{m} xi^(j+k) = Σ_{i=1}^{m} yi xi^j (5.33)

for j = 0, 1, . . . , n.
• a0, a1, . . . , an are determined by solving the above system of n + 1 equations.
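The system (5.33) consists of n + 1 linear equations in the n + 1 unknowns a0, . . . , an, so it can be handed to any linear solver. A minimal Python sketch using NumPy; fit_poly is our own illustrative name.

```python
import numpy as np

def fit_poly(xs, ys, n):
    """Return [a0, ..., an] minimizing Sr for a degree-n polynomial, eq. (5.33)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # A[j, k] = sum_i xi^(j+k)  and  b[j] = sum_i yi * xi^j
    A = np.array([[np.sum(xs ** (j + k)) for k in range(n + 1)]
                  for j in range(n + 1)])
    b = np.array([np.sum(ys * xs ** j) for j in range(n + 1)])
    return np.linalg.solve(A, b)
```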


Theorem 5.3. Given a set of data, {(xi, yi) | i = 1, 2, . . . , m}, its least squares regression curve
of degree n < m − 1 is given by

Pn(x) = an x^n + an−1 x^(n−1) + · · · + a1 x + a0 = Σ_{j=0}^{n} aj x^j (5.34)

where the constants a0, a1, . . . , an are given by the equations

Σ_{k=0}^{n} ak Σ_{i=1}^{m} xi^(j+k) = Σ_{i=1}^{m} yi xi^j (5.35)

for j = 0, 1, . . . , n.

example 5.3 The population (p) of a small community on the outskirts of a city grows
rapidly over a 20-year period:

Table 5.3

t 0 5 10 15 20
p 100 200 450 950 2000

As an engineer working for a utility company, you must forecast the population
5 years into the future in order to anticipate the demand for power.
Employ polynomial regression of order 3 to make this prediction.
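This example can be attempted with the fit_poly sketch above (an assumed helper, not part of the notes); t = 25 is 5 years beyond the end of the record.

```python
t = [0, 5, 10, 15, 20]
p = [100, 200, 450, 950, 2000]
a = fit_poly(t, p, 3)                          # cubic least squares fit
print(sum(a[j] * 25 ** j for j in range(4)))   # forecast population at t = 25
```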


