Interpolation
MTH 204
TOPIC
DIFFERENCE BETWEEN INTERPOLATION AND EXTRAPOLATION
The successful completion of any task would be incomplete without mentioning the people who
have made it possible. So it is with gratitude that I acknowledge the help which crowned my
efforts with success.
Life is a process of accumulating and discharging debts, not all of those can be measured. We
cannot hope to discharge them with simple words of thanks but we can certainly acknowledge
them.
I owe my gratitude to Miss ARSHI MERAJ, LSE, for her help in completing my term paper. Last but
not least, I am very much indebted to my family and friends for their warm encouragement
and moral support in conducting this project work.
KUMAR NITIN
INTRODUCTION:
Interpolation, extrapolation and regression:
Interpolation solves the following problem: given the value of some unknown function at a
number of points, what value does that function have at some other point between the given
points? A very simple method is to use linear interpolation, which assumes that the unknown
function is linear between every pair of successive points. This can be generalized to polynomial
interpolation, which is sometimes more accurate but suffers from Runge's phenomenon. Other
interpolation methods use localized functions like splines or wavelets.
Extrapolation is very similar to interpolation, except that now we want to find the value of the
unknown function at a point which is outside the given points.
Regression is also similar, but it takes into account that the data is imprecise. Given some points,
and a measurement of the value of some function at these points (with an error), we want to
determine the unknown function. The least-squares method is one popular way to achieve this.
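As an illustration (the data values below are made up for this sketch), a least-squares straight-line fit can be computed with NumPy:

```python
import numpy as np

# Hypothetical noisy measurements of an unknown, roughly linear function.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1, 5.2])   # values contain measurement error

# Least-squares fit of a straight line y = a*x + b (degree-1 polynomial).
a, b = np.polyfit(x, y, deg=1)
print(f"fitted line: y = {a:.3f} * x + {b:.3f}")
```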
Solving equations:
Another fundamental problem is computing the solution of some given equation. Two cases are
commonly distinguished, depending on whether the equation is linear or not.
Much effort has been put in the development of methods for solving systems of linear equations.
Standard methods are Gauss-Jordan elimination and LU-factorization. Iterative methods such as
the conjugate gradient method are usually preferred for large systems.
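A minimal sketch of solving a small linear system, assuming a hypothetical 3x3 matrix and right-hand side; both a direct solve and an explicit LU factorization (reusable for several right-hand sides) are shown:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical system A x = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Direct solve (uses an LU-type factorization internally).
x_direct = np.linalg.solve(A, b)

# Explicit LU factorization, which can be reused for other right-hand sides.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

print(x_direct, x_lu)   # both solutions should agree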
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of
a function is an argument for which the function yields zero). If the function is differentiable and
the derivative is known, then Newton's method is a popular choice. Linearization is another
technique for solving nonlinear equations.
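A small sketch of Newton's method for a single nonlinear equation; the tolerance, iteration cap and test function are illustrative choices, not prescribed by the text:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for f(x) = 0, given the derivative fprime."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # Newton step: f(x) / f'(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of x**2 - 2, i.e. sqrt(2), starting from x0 = 1.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)   # ~1.41421356
```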
Optimization:
Optimization problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the
objective function and the constraint. For instance, linear programming deals with the case that
both the objective function and the constraints are linear. A famous method in linear
programming is the simplex method.
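As a hedged illustration, a tiny linear program can be solved with SciPy; note that SciPy's default solver is not necessarily the textbook simplex method, and the objective and constraints below are invented for the example:

```python
from scipy.optimize import linprog

# Hypothetical LP: maximize x + 2y  subject to  x + y <= 4,  x - y <= 2,  x, y >= 0.
# linprog minimizes, so we minimize -(x + 2y).
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, -1.0]]
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal point and maximal objective value
```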
Evaluating integrals:
Numerical integration, also known as numerical quadrature, asks for the value of a
definite integral. Popular methods use some Newton-Cotes formula, for instance the midpoint
rule or the trapezoid rule, or Gaussian quadrature. However, if the dimension of the integration
domain becomes large, these methods become prohibitively expensive. In this situation, one may
use a Monte Carlo method or, in modestly large dimensions, the method of sparse grids.
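A minimal sketch of the composite trapezoid rule mentioned above; the integrand and number of subintervals are illustrative:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for the integral of f over [a, b] with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Example: the integral of sin(x) over [0, pi] is exactly 2.
print(trapezoid(np.sin, 0.0, np.pi, 100))   # ~1.9998
```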
Differential equations:
Numerical analysis is also concerned with computing (in an approximate way) the solution
of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-
dimensional subspace. This can be done by a finite element method, a finite difference method,
or (particularly in engineering) a finite volume method. The theoretical justification of these
methods often involves theorems from functional analysis. This reduces the problem to the
solution of an algebraic equation.
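For ordinary differential equations, the simplest discretization is the explicit Euler method; a minimal sketch with an invented test problem follows (this is not presented in the text itself):

```python
def euler(f, y0, t0, t1, n):
    """Explicit Euler method for y' = f(t, y), stepping from t0 to t1 in n steps."""
    t, y = t0, y0
    h = (t1 - t0) / n
    for _ in range(n):
        y = y + h * f(t, y)   # advance the solution by one step of size h
        t = t + h
    return y

# Example ODE: y' = -y, y(0) = 1; the exact value y(1) = exp(-1) ~ 0.3679.
print(euler(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, n=1000))
```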
History:
The field of numerical analysis predates the invention of modern computers by many centuries.
In fact, many great mathematicians of the past were preoccupied by numerical analysis, as is
obvious from the names of important algorithms like Newton's method, Lagrange interpolation
polynomial, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data
such as interpolation points and function coefficients. Using these tables, often calculated out to
16 decimal places or more for some functions, one could look up values to plug into the formulas
given and achieve very good numerical estimates of some functions. The canonical work in the
field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus-page book of a very
large number of commonly used formulas and functions and their values at many points. The
function values are no longer very useful when a computer is available, but the large listing of
formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators
evolved into electronic computers in the 1940s, and it was then found that these computers were
also useful for administrative purposes. But the invention of the computer also influenced the
field of numerical analysis, since now longer and more complicated calculations could be done.
Interpolation:
It should be mentioned that there is another, very different kind of interpolation in mathematics,
namely the "interpolation of operators". The classical results about interpolation of operators are
the Riesz–Thorin theorem and the Marcinkiewicz theorem; there are also many other
subsequent results.
Interpolation provides a means of estimating a function at intermediate points, for example at
x = 2.5 when its values are known only at x = 2 and x = 3. There are many different interpolation
methods, some of which are described below. Some of the concerns to take into account when
choosing an appropriate algorithm are: How accurate is the method? How expensive is it? How
smooth is the interpolant?
Piecewise constant interpolation:
The simplest interpolation method is to locate the nearest data value and assign that same value
to the query point. In one dimension, there are seldom good reasons to choose this over linear
interpolation, which is almost as cheap, but in higher-dimensional multivariate interpolation its
speed and simplicity can make it an attractive choice.
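A minimal sketch of piecewise constant (nearest-neighbour) interpolation; the data and query points are hypothetical:

```python
import numpy as np

def nearest_interp(xq, x, y):
    """Piecewise constant (nearest-neighbour) interpolation at query points xq."""
    x, y = np.asarray(x), np.asarray(y)
    idx = np.abs(xq[:, None] - x[None, :]).argmin(axis=1)   # index of nearest data point
    return y[idx]

# Hypothetical data and query points.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 20.0, 15.0, 5.0])
print(nearest_interp(np.array([1.4, 2.6]), x, y))   # [10. 15.]
```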
Linear interpolation:
Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the
interpolant is not differentiable at the data points x_k.
The following error estimate shows that linear interpolation is not very precise. Denote the
function which we want to interpolate by g, and suppose that x lies between x_a and x_b and
that g is twice continuously differentiable. Then the linear interpolation error satisfies
|f(x) − g(x)| ≤ ((x_b − x_a)^2 / 8) · max |g''(r)| for r in [x_a, x_b],
where f denotes the linear interpolant.
In words, the error is proportional to the square of the distance between the data points. The error
of some other methods, including polynomial interpolation and spline interpolation (described
below), is proportional to higher powers of the distance between the data points.
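As a brief illustration, NumPy's np.interp performs piecewise linear interpolation; the sample points below (taken from g(x) = x^2 purely for the example) show the error at an intermediate point:

```python
import numpy as np

# Hypothetical data points (x_k, y_k) with x strictly increasing.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 4.0, 9.0, 16.0])   # samples of g(x) = x**2

# np.interp interpolates linearly between neighbouring data points.
print(np.interp(2.5, x, y))   # 6.5, while the true value g(2.5) = 6.25
```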
Polynomial interpolation
Polynomial interpolation is a generalization of linear interpolation: instead of a linear function
on each interval, a single polynomial is fitted through all of the data points. For example, a
sixth-degree polynomial can be passed through seven points.
Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going
through all the data points. The interpolation error is proportional to the distance between the
data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely
differentiable. So we see that polynomial interpolation overcomes most of the problems of linear
interpolation.
However, polynomial interpolation also has some disadvantages. Calculating the interpolating
polynomial is computationally expensive (see computational complexity) compared to linear
interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially
at the end points.
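A small sketch of polynomial interpolation through seven points using SciPy's Lagrange routine; the data are samples of sin(x), chosen here only for illustration:

```python
import numpy as np
from scipy.interpolate import lagrange

# Seven hypothetical data points; the interpolant has degree at most 6.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.sin(x)                      # sample values, taken from sin for illustration

p = lagrange(x, y)                 # degree-6 interpolating polynomial
print(p(2.5), np.sin(2.5))         # interpolated value vs true value
```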
Spline interpolation:
Remember that linear interpolation uses a linear function on each of the intervals [x_k, x_{k+1}]. Spline
interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial
pieces such that they fit smoothly together. The resulting function is called a spline.
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation
and the interpolant is smoother. However, the interpolant is easier to evaluate than the high-
degree polynomials used in polynomial interpolation. It also does not suffer from Runge's
phenomenon.
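A minimal cubic-spline sketch on the same hypothetical data as above, using SciPy:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Same hypothetical data points as before.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.sin(x)

cs = CubicSpline(x, y)             # piecewise cubic, twice continuously differentiable
print(cs(2.5), np.sin(2.5))        # spline value vs true value at an intermediate point
```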
A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools
are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only
for fitting an interpolant that passes exactly through the given data points but also for regression,
i.e., for fitting a curve through noisy data. In the geostatistics community Gaussian process
regression is also known as Kriging.
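As a hedged sketch (using scikit-learn, a library choice of this example rather than of the text, with invented noisy data), Gaussian process regression returns both a prediction and an uncertainty:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical noisy samples of an unknown smooth function.
X = np.linspace(0.0, 5.0, 8).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).standard_normal(8)

# RBF kernel; alpha > 0 treats the data as noisy (regression rather than exact interpolation).
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gpr.fit(X, y)

mean, std = gpr.predict(np.array([[2.5]]), return_std=True)
print(mean, std)    # predicted value and its uncertainty at x = 2.5
```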
Other forms of interpolation can be constructed by picking a different class of interpolants. For
instance, rational interpolation is interpolation by rational functions, and trigonometric
interpolation is interpolation by trigonometric polynomials. Another possibility is to
use wavelets.
Sometimes we know not only the value of the function that we want to interpolate at some
points, but also its derivative. This leads to Hermite interpolation problems.
In the domain of digital signal processing, the term interpolation refers to the process of
converting a sampled digital signal (such as a sampled audio signal) to a higher sampling rate
using various digital filtering techniques (e.g., convolution with a frequency-limited impulse
signal). In this application there is a specific requirement that the harmonic content of the
original signal be preserved without creating aliased harmonic content above the original
Nyquist limit (i.e., above fs/2 of the original signal's sample rate).
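A minimal sketch of this kind of interpolation (upsampling) with SciPy's polyphase resampler; the sample rate and test tone are invented for the example:

```python
import numpy as np
from scipy.signal import resample_poly

# Hypothetical signal sampled at 8 kHz, upsampled by a factor of 2 to 16 kHz.
fs = 8000
t = np.arange(0, 0.01, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)          # a 440 Hz tone

# Polyphase resampling applies a low-pass filter as part of the interpolation,
# so no content is created above the original Nyquist limit (fs/2 = 4 kHz).
x_up = resample_poly(x, up=2, down=1)
print(len(x), len(x_up))                    # twice as many samples
```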
EXTRAPOLATION:
Extrapolation methods:
A sound choice of which extrapolation method to apply relies on prior knowledge of the
process that created the existing data points. Crucial questions include, for example, whether the
data can be assumed to be continuous, smooth, possibly periodic, and so on.
Linear extrapolation:
Linear extrapolation means creating a tangent line at the end of the known data and extending it
beyond that limit. Linear extrapolation will only provide good results when used to extend the
graph of an approximately linear function or not too far beyond the known data.
If the two data points nearest the point x* to be extrapolated are (x_{k-1}, y_{k-1}) and (x_k, y_k), linear
extrapolation gives the function
y(x*) = y_{k-1} + ((x* − x_{k-1}) / (x_k − x_{k-1})) · (y_k − y_{k-1})
(which is identical to linear interpolation if x_{k-1} < x* < x_k). It is possible to include more than two
points, averaging the slope of the linear interpolant over the chosen data points by
regression-like techniques. This is similar to linear prediction.
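A direct sketch of the formula above, with made-up data points:

```python
def linear_extrapolate(x_star, x0, y0, x1, y1):
    """Extend the line through (x0, y0) and (x1, y1) to the point x_star."""
    return y0 + (x_star - x0) / (x1 - x0) * (y1 - y0)

# Hypothetical last two data points (3, 9) and (4, 16); extrapolate to x = 5.
print(linear_extrapolate(5.0, 3.0, 9.0, 4.0, 16.0))   # 23.0 (the underlying x**2 would give 25)
```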
Polynomial extrapolation:
A polynomial curve can be created through the entire known data or just near the end. The
resulting curve can then be extended beyond the end of the known data. Polynomial
extrapolation is typically done by means of Lagrange interpolation or using Newton's method
of finite differences to create a Newton series that fits the data. The resulting polynomial may be
used to extrapolate the data.
High-order polynomial extrapolation must be used with due care. For a typical data set and
extrapolation problem, anything above order 1 (linear extrapolation) may well yield unusable
values; an error estimate of the extrapolated value will grow with the degree of the polynomial
extrapolation. This is related to Runge's phenomenon.
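A small sketch of this effect, with invented data: polynomials of increasing degree are fitted to samples on [0, 6] and then evaluated well outside the data at x = 10, where the higher-degree fits drift rapidly:

```python
import numpy as np

# Hypothetical data on [0, 6], fitted with polynomials of increasing degree
# and evaluated far outside the data range.
x = np.linspace(0.0, 6.0, 7)
y = np.sin(x)

for deg in (1, 3, 6):
    coeffs = np.polyfit(x, y, deg)            # least-squares / interpolating fit
    print(deg, np.polyval(coeffs, 10.0))      # extrapolated value grows wilder with degree
```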
Conic extrapolation:
A conic section can be created using five points near the end of the known data. If the conic
section created is an ellipse or circle, it will loop back and rejoin itself. A parabolic or hyperbolic
curve will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation
could be done with a conic sections template (on paper) or with a computer.
Quality of extrapolation:
Typically, the quality of a particular method of extrapolation is limited by the assumptions about
the function made by the method. If the method assumes the data are smooth, then a non-smooth
function will be poorly extrapolated.
Even for proper assumptions about the function, the extrapolation can diverge strongly from the
function. The classic example is truncated power series representations of sin(x) and
related trigonometric functions. For instance, taking only data from near x = 0, we may
estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent
estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis
while sin(x) remains in the interval [−1, 1]; that is, the error increases without bound.
Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over
a larger interval near x = 0, but will produce extrapolations that eventually diverge away from
the x-axis even faster than the linear approximation.
This divergence is a specific property of extrapolation methods and is only circumvented when
the functional forms assumed by the extrapolation method (inadvertently or intentionally due to
additional information) accurately represent the nature of the function being extrapolated. For
particular problems, this additional information may be available, but in the general case, it is
impossible to satisfy all possible function behaviors with a workably small set of potential
behaviors.
Likewise, extrapolation by analytic continuation of an assumed functional form can be thwarted
by features of the function that were not evident from the initial data.