Numerical analysis
From Wikipedia, the free encyclopedia
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the
square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 +
51/60² + 10/60³ = 1.41421296...[1] Image by Bill Casselman.[2]
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to
general symbolic manipulations) for the problems of continuous mathematics (as distinguished
from discrete mathematics).
One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a
sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being
able to compute the sides of a triangle (and hence, being able to compute square roots) is
extremely important, for instance, in carpentry and construction.[3]
Numerical analysis continues this long tradition of practical mathematical calculations. Much
like the Babylonian approximation of √2, modern numerical analysis does not seek exact
answers, because exact answers are often impossible to obtain in practice. Instead, much of
numerical analysis is concerned with obtaining approximate solutions while maintaining
reasonable bounds on errors.
Numerical analysis naturally finds applications in all fields of engineering and the physical
sciences, but in the 21st century, the life sciences and even the arts have adopted elements of
scientific computations. Ordinary differential equations appear in the movement of heavenly
bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical
linear algebra is important for data analysis; stochastic differential equations and Markov chains
are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation
in large printed tables. Since the mid 20th century, computers calculate the required functions
instead. The interpolation algorithms nevertheless may be used as part of the software for solving
differential equations.
Contents
1 General introduction
  1.1 History
  1.2 Direct and iterative methods
    1.2.1 Discretization and numerical integration
  1.3 Discretization
2 The generation and propagation of errors
  2.1 Round-off
  2.2 Truncation and discretization error
  2.3 Numerical stability and well-posed problems
3 Areas of study
  3.1 Computing values of functions
  3.2 Interpolation, extrapolation, and regression
  3.3 Solving equations and systems of equations
  3.4 Solving eigenvalue or singular value problems
  3.5 Optimization
  3.6 Evaluating integrals
  3.7 Differential equations
4 Software
5 See also
6 Notes
7 References
8 External links
The rest of this section outlines several important themes of numerical analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries.
Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of
the past were preoccupied by numerical analysis, as is obvious from the names of important
algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or
Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data
such as interpolation points and function coefficients. Using these tables, often calculated out to
16 decimal places or more for some functions, one could look up values to plug into the formulas
given and achieve very good numerical estimates of some functions. The canonical work in the
field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very
large number of commonly used formulas and functions and their values at many points. The
function values are no longer very useful when a computer is available, but the large listing of
formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators
evolved into electronic computers in the 1940s, and it was then found that these computers were
also useful for administrative purposes. But the invention of the computer also influenced the
field of numerical analysis, since now longer and more complicated calculations could be done.
Direct and iterative methods

Consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x.

Direct method:
3x³ + 4 = 28.
Subtract 4:      3x³ = 24.
Divide by 3:     x³ = 8.
Take cube roots: x = 2.

For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values
are a = 0, b = 3, f(a) = −24, f(b) = 57.

Iterative method:
a       b       mid     f(mid)
0       3       1.5     −13.875
1.5     3       2.25    10.17...
1.5     2.25    1.875   −4.22...
1.875   2.25    2.0625  2.32...

We conclude from this table that the solution is between 1.875 and 2.0625. The algorithm
might return any number in that range with an error less than 0.2.
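The bisection iteration above is simple to carry out on a computer. The following is a minimal Python sketch of it; the function name bisect, the tolerance argument, and the stopping rule are illustrative choices, not from the original text.

```python
def bisect(f, a, b, tol=0.2):
    """Bisection: repeatedly halve [a, b], keeping the half in which
    f changes sign. Assumes f(a) and f(b) have opposite signs."""
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid   # the root lies in [a, mid]
        else:
            a = mid   # the root lies in [mid, b]
    return (a + b) / 2

# The example above: f(x) = 3x^3 - 24 on [0, 3].
print(bisect(lambda x: 3 * x**3 - 24, 0.0, 3.0))  # 1.96875, within 0.2 of 2
```

Running it on f(x) = 3x³ − 24 with a = 0 and b = 3 retraces exactly the four rows of the table above.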
Direct methods compute the solution to a problem in a finite number of steps. These methods
would give the precise answer if they were performed in infinite precision arithmetic. Examples
include Gaussian elimination, the QR factorization method for solving systems of linear
equations, and the simplex method of linear programming. In practice, finite precision is used
and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a number of
steps. Starting from an initial guess, iterative methods form successive approximations that
converge to the exact solution only in the limit. A convergence test is specified in order to decide
when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision
arithmetic these methods would not reach the solution within a finite number of steps (in
general). Examples include Newton's method, the bisection method, and Jacobi iteration. In
computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods
are direct in principle but are usually used as though they were not, e.g. GMRES and the
conjugate gradient method. For these methods the number of steps needed to obtain the exact
solution is so large that an approximation is accepted in the same manner as for an iterative
method.
The generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are several ways in
which error can be introduced in the solution of the problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine
with finite memory (which is what all practical digital computers are).
Once an error is generated, it will generally propagate through the calculation. For instance, we
have already noted that the operation + on a calculator (or a computer) is inexact. It follows that
a calculation of the type a+b+c+d+e is even more inexact.
Truncation and discretization error

Truncation errors are created when a mathematical procedure is approximated. To integrate a
function exactly, one must find the sum of infinitely many trapezoids; numerically, one can
find the sum of only finitely many trapezoids, and hence only approximate the mathematical
procedure. Similarly, to differentiate a function, the differential element must approach
zero, but numerically we can only choose a finite value of the differential element.
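To make this concrete, here is a small sketch (not from the original text) in which the composite trapezoidal rule approximates the integral of x² over [0, 1], whose exact value is 1/3; the gap that remains for any finite number of trapezoids is the truncation error.

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n trapezoids on [a, b]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

exact = 1 / 3  # the integral of x^2 over [0, 1]
for n in (4, 16, 64):
    approx = trapezoid(lambda x: x * x, 0.0, 1.0, n)
    # the truncation error shrinks roughly like 1/n^2
    print(n, approx, abs(approx - exact))
```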
Numerical stability and well-posed problems

Both the original problem and the algorithm used to solve that problem can be well-conditioned
or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or
numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a
well-posed mathematical problem. For instance, computing the square root of 2 (which is
roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with
an initial approximation x1 to √2, for instance x1 = 1.4, and then computing improved guesses
x2, x3, and so on. One such method is the famous Babylonian method, which is given by
xk+1 = xk/2 + 1/xk. Another iteration, which we will call Method X, is given by
xk+1 = (xk² − 2)² + xk.[4] A few iterations of each scheme are tabulated below, with initial
guesses x1 = 1.4 and x1 = 1.42:

Babylonian (x1 = 1.4)   Babylonian (x1 = 1.42)   Method X (x1 = 1.4)     Method X (x1 = 1.42)
x2 = 1.4142857...       x2 = 1.41422535...       x2 = 1.4016             x2 = 1.42026896
x3 = 1.41421356...      x3 = 1.41421356...       x3 = 1.4028614...       x3 = 1.42056...
...                     ...                      ...                     ...
                                                 x1000000 = 1.41421...   x28 = 7280.2284
Observe that the Babylonian method converges fast regardless of the initial guess, whereas
Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42.
Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
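The contrast is easy to reproduce in a few lines of Python; a minimal sketch, where the function names and iteration counts are illustrative choices:

```python
import math

def babylonian(x, steps=6):
    """Numerically stable: xk+1 = xk/2 + 1/xk."""
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

def method_x(x, steps):
    """Numerically unstable: xk+1 = (xk^2 - 2)^2 + xk."""
    for _ in range(steps):
        x = (x * x - 2) ** 2 + x
    return x

print(math.sqrt(2))                        # 1.4142135623730951
print(babylonian(1.4), babylonian(1.42))   # both converge in a few steps
print(method_x(1.4, steps=50))             # still creeping slowly toward sqrt(2)
print(method_x(1.42, steps=28))            # astronomically large: it has diverged
```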
Numerical stability is also affected by the number of significant digits the machine keeps. A
machine that keeps only the first four floating-point digits gives a good example of loss of
significance with these two equivalent functions:

f(x) = x(√(x+1) − √x)   and   g(x) = x/(√(x+1) + √x).

Comparing the two results,

f(500) = 500(√501 − √500) = 500(22.38 − 22.36) = 500(0.02) = 10,
g(500) = 500/(√501 + √500) = 500/(22.38 + 22.36) = 500/44.74 = 11.17...,

we realize that loss of significance, which is also called subtractive cancellation, has a
huge effect on the results, even though both functions are equivalent. To show that they are
equivalent, start with f(x) and multiply the numerator and denominator by √(x+1) + √x, which
yields g(x). The true value of the result is 11.174755..., which is g(500) = 11.1748 after
rounding to 4 decimal digits. Now imagine that a program uses tens of terms like these; the
error grows as the program proceeds unless the suitable formula of the two is used each time
f(x) or g(x) is evaluated, preferring the cancellation-free form whenever the subtraction
would cancel.
The example is taken from Mathews, Numerical Methods Using MATLAB, 3rd ed.
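The same cancellation can be reproduced in ordinary double precision by taking x large enough that √(x+1) and √x agree in most of their digits; the value x = 10¹⁵ below is an illustrative choice, not from the book.

```python
import math

def f(x):
    # subtracts two nearly equal square roots: prone to cancellation
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def g(x):
    # algebraically equivalent, but avoids the subtraction
    return x / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(f(x))  # most digits lost to subtractive cancellation
print(g(x))  # 15811388.30084..., accurate to full double precision
```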
Interpolation, extrapolation, and regression

Interpolation solves the following problem: given the value of some unknown function at a
number of points, what value does that function have at some other point between the given
points?
Extrapolation is very similar to interpolation, except that now we want to find the value of the
unknown function at a point which is outside the given points.
Regression is also similar, but it takes into account that the data is imprecise. Given some points,
and a measurement of the value of some function at these points (with an error), we want to
determine the unknown function. The least-squares method is one popular way to achieve this.
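A short sketch of a least-squares fit, assuming NumPy is available; the noisy data points are invented for the example.

```python
import numpy as np

# Noisy samples of an underlying line y = 2x + 1 (data invented here).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit a degree-1 polynomial by minimizing the sum of squared residuals.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)                     # close to 2 and 1
print(np.polyval([slope, intercept], 2.5))  # predict between the samples
```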
Solving equations and systems of equations

Another fundamental problem is computing the solution of some given equation. Two cases are
commonly distinguished, depending on whether the equation is linear or not. For instance, the
equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.
Much effort has been put in the development of methods for solving systems of linear equations.
Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian
elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian)
positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such
as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient
method are usually preferred for large systems.
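For a small dense system, a direct solve looks like this in Python; a minimal sketch assuming NumPy, whose solve routine uses an LU factorization, with an illustrative 2×2 system.

```python
import numpy as np

# Solve Ax = b directly (numpy.linalg.solve uses an LU factorization).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)
print(x)          # [0.09090909 0.63636364], i.e. (1/11, 7/11)
print(A @ x - b)  # residual is ~0 up to round-off
```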
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of
a function is an argument for which the function yields zero). If the function is differentiable and
the derivative is known, then Newton's method is a popular choice. Linearization is another
technique for solving nonlinear equations.
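A minimal sketch of Newton's method; the tolerance and the iteration cap are illustrative safeguards, not part of the method itself.

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Newton's method: xn+1 = xn - f(xn)/f'(xn)."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. the square root of 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.4142135623...
```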
Optimization
Main article: Optimization (mathematics)
Optimization problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the
objective function and the constraint. For instance, linear programming deals with the case that
both the objective function and the constraints are linear. A famous method in linear
programming is the simplex method.
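A small linear program, sketched with SciPy's linprog; this assumes SciPy is available, and note that its default solver is a modern HiGHS-based method rather than the classical simplex tableau.

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 0.0]]
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal point, here (0, 4)
print(-res.fun)  # optimal value of the original maximization, 8
```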
The method of Lagrange multipliers can be used to reduce optimization problems with
constraints to unconstrained optimization problems.
Evaluating integrals

Numerical integration, in some instances also known as numerical quadrature, asks for the value
of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint
rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer"
strategy, whereby an integral on a relatively large set is broken down into integrals on smaller
sets. In higher dimensions, where these methods become prohibitively expensive in terms of
computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo
integration), or, in modestly large dimensions, the method of sparse grids.
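As an illustration, here is a composite Simpson's rule, one of the Newton–Cotes formulas mentioned above; a sketch, with an illustrative test integral.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The integral of sin over [0, pi] is exactly 2.
print(simpson(math.sin, 0.0, math.pi, 10))  # 2.0000068..., already close
```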
Differential equations

Main articles: Numerical ordinary differential equations and Numerical partial differential
equations
Numerical analysis is also concerned with computing (in an approximate way) the solution of
differential equations, both ordinary differential equations and partial differential equations.
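For ordinary differential equations, the simplest scheme is Euler's method, mentioned in the History section; a minimal sketch, with illustrative step counts.

```python
def euler(f, t0, y0, t_end, n):
    """Euler's method for y' = f(t, y), y(t0) = y0, taking n steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 1: the exact value at t = 1 is e = 2.71828...
for n in (10, 100, 1000):
    print(n, euler(lambda t, y: y, 0.0, 1.0, 1.0, n))
```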
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-
dimensional subspace. This can be done by a finite element method, a finite difference method,
or (particularly in engineering) a finite volume method. The theoretical justification of these
methods often involves theorems from functional analysis. This reduces the problem to the
solution of an algebraic equation.
Software
Main articles: List of numerical analysis software and Comparison of numerical analysis
software
Since the late twentieth century, most algorithms are implemented in a variety of programming
languages. The Netlib repository contains various collections of software routines for numerical
problems, mostly in Fortran and C. Commercial products implementing many different
numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU
Scientific Library.
There are several popular numerical computing applications such as MATLAB, S-PLUS,
LabVIEW, and IDL as well as free and open source alternatives such as FreeMat, Scilab, GNU
Octave (similar to Matlab), IT++ (a C++ library), R (similar to S-PLUS) and certain variants of
Python. Performance varies widely: while vector and matrix operations are usually fast, scalar
loops may vary in speed by more than an order of magnitude.[6][7]
Many computer algebra systems such as Mathematica also benefit from the availability of
arbitrary precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical
analysis.
Error analysis
Error analysis is the study of the kind and quantity of error that occurs, particularly in the
fields of applied mathematics (especially numerical analysis), applied linguistics, and statistics.
In numerical analysis, error analysis comprises both forward error analysis and backward
error analysis. Forward error analysis involves the analysis of a function z′ = f′(a0, a1, ..., an),
which is an approximation (usually a finite polynomial) to a function z = f(a0, a1, ..., an), to
determine the bounds on the error in the approximation; i.e., to find ε such that |z − z′| ≤ ε.
Backward error analysis involves the analysis of the approximation function z′ = f′(a0, a1, ..., an)
to determine the bounds on the parameters ak′ = ak + εk such that the result z′ = f(a0′, a1′, ..., an′).
Numerical analysis is the area of mathematics and computer science that creates, analyzes, and
implements algorithms for solving numerically the problems of continuous mathematics. Such
problems originate generally from real-world applications of algebra, geometry, and calculus,
and they involve variables which vary continuously. These problems occur throughout the
natural sciences, social sciences, medicine, engineering, and business. Beginning in the 1940s,
the growth in power and availability of digital computers has led to an increasing use of realistic
mathematical models in science, medicine, engineering, and business; and numerical analysis of
increasing sophistication has been needed to solve these more accurate and complex
mathematical models of the world. The formal academic area of numerical analysis varies from
highly theoretical mathematical studies to computer science issues involving the effects of
computer hardware and software on the implementation of specific algorithms.
Numerical solution of systems of linear equations. This refers to solving for x in the
equation Ax = b with given matrix A and column vector b. The most important case
has A a square matrix. There are both direct methods of solution (requiring only a finite
number of arithmetic operations) and iterative methods (giving increased accuracy with
each new iteration). This topic also includes the matrix eigenvalue problem for a square
matrix A, solving for λ and x in the equation Ax = λx.
Numerical solution of systems of nonlinear equations. This refers to rootfinding problems
which are usually written as f(x) = 0, with x a vector with n components and f(x) a
vector with m components. The most important case has m = n.
Optimization. This refers to minimizing or maximizing a real-valued function f(x). The
permitted values for x can be either constrained or unconstrained. The
'linear programming problem' is a well-known and important case; f(x) is linear, and
there are linear equality and/or inequality constraints on x.
Approximation theory
Use computable functions to approximate the values of functions that are not easily
computable or use approximations to simplify dealing with such functions. The most popular
types of computable functions are polynomials, rational functions, and piecewise versions
of them, for example spline functions. Trigonometric polynomials are also a very useful choice.
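As a small illustration of approximating a function by an easily computable polynomial, here is a sketch assuming NumPy; the degree and the use of Chebyshev interpolation nodes are illustrative choices.

```python
import numpy as np

# Approximate cos on [-1, 1] by a degree-6 polynomial interpolating at
# Chebyshev nodes, a standard choice that keeps the error spread evenly.
k = np.arange(7)
nodes = np.cos((2 * k + 1) * np.pi / 14)      # 7 Chebyshev nodes
coeffs = np.polyfit(nodes, np.cos(nodes), 6)  # interpolating polynomial

xs = np.linspace(-1, 1, 1001)
err = np.max(np.abs(np.polyval(coeffs, xs) - np.cos(xs)))
print(err)  # on the order of 1e-6 or smaller: a cheap surrogate for cos
```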
Solving differential and integral equations. These equations occur widely as mathematical
models for the physical world, and their numerical solution is important throughout the
sciences and engineering.
The mathematical aspects of numerical analysis make use of the language and results of
linear algebra, real analysis, and functional analysis.
If you cannot solve a problem directly, then replace it with a 'nearby problem' which can
be solved more easily. This is an important perspective which cuts across all types of
mathematical problems. For example, to evaluate a definite integral numerically, begin
by approximating its integrand using polynomial interpolation or a Taylor series, and
then integrate exactly the polynomial approximation.
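A tiny sketch of that idea (not from the original text): the integrand exp(x²) has no elementary antiderivative, but its Taylor polynomial can be integrated exactly, term by term, giving a computable 'nearby problem'.

```python
import math

# exp(x^2) has no elementary antiderivative, but its Taylor series
# exp(x^2) = sum over k of x^(2k)/k! integrates term by term:
# the integral over [0, 1] is sum over k of 1/(k! (2k + 1)).
def nearby_integral(terms):
    return sum(1 / (math.factorial(k) * (2 * k + 1)) for k in range(terms))

for terms in (3, 6, 12):
    print(terms, nearby_integral(terms))  # converges to 1.46265174...
```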
All numerical calculations are carried out using finite precision arithmetic, usually in a
framework of floating-point representation of numbers. What are the effects of using
such finite precision computer arithmetic? How are arithmetic calculations to be carried
out? Using finite precision arithmetic will affect how we compute solutions to all types of
problems, and it forces us to think about the limits on the accuracy with which a problem
can be solved numerically. Even when solving finite systems of linear equations by direct
numerical methods, infinite precision arithmetic is needed in order to find a particular
exact solution.
There is concern with 'stability', a concept referring to the sensitivity of the solution of a
given problem to small changes in the data or the given parameters of the problem. There
are two aspects to this. First, how sensitive is the original problem to small changes in the
data of the problem? Problems that are very sensitive are often referred to as 'ill-
conditioned' or 'ill-posed', depending on the degree of sensitivity. Second, the numerical
method should not introduce additional sensitivity that is not present in the original
mathematical problem being solved. In developing a numerical method to solve a
problem, the method should be no more sensitive to changes in the data than is true of the
original mathematical problem.
There is a fundamental concern with error, its size, and its analytic form. When
approximating a problem, a numerical analyst would want to understand the behaviour of
the error in the computed solution. Understanding the form of the error may allow one to
minimize or estimate it. A 'forward error analysis' looks at the effect of errors made in the
solution process. This is the standard way of understanding the consequences of the
approximation errors that occur in setting up a numerical method of solution, e.g. in
numerical integration and in the numerical solution of differential and integral equations.
A 'backward error analysis' works backward in a numerical algorithm, showing that the
approximating numerical solution is the exact solution to a perturbed version of the
original mathematical problem. In this way the stability of the original problem can be
used to explain possible difficulties in a numerical method. Backward error analysis has
been especially important in understanding the behaviour of numerical methods for
solving linear algebra problems.
In order to develop efficient means of calculating a numerical solution, it is important to
understand the characteristics of the computer being used. For example, the structure of
the computer memory is often very important in devising efficient algorithms for large
linear algebra problems. Also, parallel computer architectures lead to efficient algorithms
only if the algorithm is designed to take advantage of the parallelism.
Numerical analysts are generally interested in measuring the efficiency of algorithms.
What is the cost of a particular algorithm? For example, the use of Gaussian elimination
to solve a linear system containing n equations will require approximately 2n³/3
arithmetic operations. How does this compare with other numerical methods for
solving this problem? This topic is a part of the larger area of 'computational complexity'.
Use information gained in solving a problem to improve the solution procedure for that
problem. Often we do not fully understand the characteristics of a problem, especially
very complicated and large ones, until some solution attempts have been made; the
information they yield can then be fed back into the procedure. Such a solution process is
sometimes referred to as an 'adaptive procedure', and it can also be viewed as a feedback process.
Development of numerical methods
Numerical analysts and applied mathematicians have a variety of tools which they use in
developing numerical methods for solving mathematical problems. An important perspective,
one mentioned earlier, which cuts across all types of mathematical problems is that of replacing
the given problem with a 'nearby problem' which can be solved more easily. There are other
perspectives which vary with the type of mathematical problem being solved.
Linear systems arise in many of the problems of numerical analysis, a reflection of the
approximation of mathematical problems using linearization. This leads to diversity in the
characteristics of linear systems, and for this reason there are numerous approaches to solving
linear systems. As an example, numerical methods for solving partial differential equations often
lead to very large 'sparse' linear systems in which most coefficients are zero. Solving such sparse
systems requires methods that are quite different from those used to solve more moderate sized
'dense' linear systems in which most coefficients are non-zero.
There are 'direct methods' and 'iterative methods' for solving all types of linear systems, and the
method of choice depends on the characteristics of both the linear system and the computer
hardware being used. For example, some sparse systems can be solved by direct methods,
whereas others are better solved using iteration. With iteration methods, the linear system is
sometimes transformed to an equivalent form that is more amenable to being solved by iteration;
this is often called 'pre-conditioning' of the linear system.
The linear programming problem was solved principally by the 'simplex method' until new
approaches were developed in the 1980s, and it remains an important method of solution. The
simplex method is a direct method that uses tools from the numerical solution of linear systems.
With a single equation f(x) = 0, and having an initial estimate x0 of the root, approximate
f(x) by its tangent line at the point (x0, f(x0)). Find the root of this tangent line as an
approximation to the root of the original equation f(x) = 0. This leads to 'Newton's iteration
method',

xn+1 = xn − f(xn)/f′(xn),   n = 0, 1, 2, ...
Other linear and higher degree approximations can be used, and these lead to alternative iteration
methods. An important derivative-free approximation of Newton’s method is the 'secant method'.
In practice, the Jacobian matrix for f is often too complicated to compute directly; instead the
partial derivatives in the Jacobian matrix are approximated using 'finite differences'. This leads to
a 'finite difference Newton method'. As an alternative strategy and in analogy with the
development of the secant method for the single variable problem, there is a similar rootfinding
iteration method for solving nonlinear systems. It is called 'Broyden’s method' and it uses finite
difference approximations of the derivatives in the Jacobian matrix, avoiding the evaluation of
the partial derivatives of f.
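A sketch of the secant method for a single equation, with illustrative starting points and tolerance; Broyden's method generalizes this difference-quotient idea to systems.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's method with f' replaced by the
    difference quotient through the two most recent iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Same root as before: f(x) = x^2 - 2.
print(secant(lambda x: x * x - 2, 1.0, 2.0))  # 1.4142135623...
```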
With such equations, there are usually at least two general steps involved in obtaining a nearby
problem from which a numerical approximation can be computed; this is often referred to as
'discretization' of the original problem. The given equation will have a domain on which the
unknown function is defined, perhaps an interval in one dimension and maybe a rectangle,
ellipse, or other simply connected bounded region in two dimensions. Many numerical methods
begin by introducing a mesh or grid on this domain, and the solution is to be approximated using
this grid. Following this, there are several common approaches.
One approach approximates the equation with a simpler equation defined on the mesh. For
example, a two-point boundary value problem u″(x) = f(x) with given boundary values can be
approximated by replacing the derivatives in the equation with finite difference quotients
at the mesh points, as sketched below.
Another common approach approximates the solution using a family of approximating functions.
These are often chosen as piecewise polynomial functions which are polynomial over the
elements of the mesh chosen earlier. Such methods are sometimes called 'local methods'. When
the approximating functions are defined without reference to a
grid, then the methods are sometimes called 'global methods' or 'spectral methods'. Examples of
such are sets of polynomials or trigonometric functions of some finite degree or less.
With all such approaches to solving a differential or integral equation, the intent is that the
resulting approximate solution un be close to the desired solution u. The business of theoretical
numerical analysis is to analyze such an algorithm and investigate the size of the error u − un.
References
For an historical account of early numerical analysis, see
Herman Goldstine. A History of Numerical Analysis from the 16th through the 19th
Century, Springer-Verlag, New York, 1977.
For a current view of numerical analysis as taught at the undergraduate level, see
Cleve Moler. Numerical Computing with MATLAB, SIAM Pub., Philadelphia, 2004.
For a current view of numerical analysis as taught at the advanced undergraduate or beginning
graduate level, see
Alfio Quarteroni, Riccardo Sacco, and Fausto Saleri. Numerical Mathematics, Springer-
Verlag, New York, 2000.
For a comprehensive treatment of the numerical solution of linear algebra problems, see
Gene Golub and Charles Van Loan. Matrix Computations, 3rd ed., Johns Hopkins
University Press, 1996.
For an introduction to practical numerical analysis for solving ordinary differential equations, see
Lawrence Shampine, Ian Gladwell, Skip Thompson. Solving ODEs with Matlab,
Cambridge University Press, Cambridge, 2003.
Michael Overton. Numerical computing with IEEE floating point arithmetic, SIAM Pub.,
Philadelphia, 2001.
Suely Oliveira and David Stewart. Writing Scientific Software: A Guide to Good Style,
Cambridge University Press, Cambridge, 2006.
Introduction
Numerical analysis involves the study of methods of computing numerical data. In many
problems this implies producing a sequence of approximations; thus the questions involve the
rate of convergence, the accuracy (or even validity) of the answer, and the completeness of the
response. (With many problems it is difficult to decide from a program's termination whether
other solutions exist.) Since many problems across mathematics can be reduced to linear algebra,
this too is studied numerically; here there are significant problems with the amount of time
necessary to process the initial data. Numerical solutions to differential equations require the
determination not of a few numbers but of an entire function; in particular, convergence must be
judged by some global criterion. Other topics include numerical simulation, optimization, and
graphical analysis, and the development of robust working code.
Numerical linear algebra topics: solutions of linear systems AX = B, eigenvalues and
eigenvectors, matrix factorizations. Calculus topics: numerical differentiation and integration,
interpolation, solutions of nonlinear equations f(x) = 0. Statistical topics: polynomial
approximation, curve fitting.
Applications and related fields
For papers involving machine computations and programs in a specific mathematical area, see
Section --04 in that area. This includes computational issues in group theory, number theory,
geometry, statistics, and so on; for each of these fields there are software packages or libraries of
code which are discussed on those index pages. (On the other hand, most results of numerical
integration, say, are in this section rather than Measure and Integration; topics in optimization are
in section 65K rather than Operations Research.)
For calculations of a combinatorial nature and for graph-theoretic questions such as the traveling
salesman problem or scheduling algorithms, see Combinatorics. (These are distinguished by the
discrete nature of the solution sought.) Portions of that material -- particularly investigations into
the complexity of the algorithms -- are also treated in Computer Science.
General issues of computer use, such as system organization and methodology, or artificial
intelligence, are certainly in computer science. Topics in computer algebra or symbolic
calculation are treated separately.
Issues concerning limitations of specific hardware or software are not, strictly speaking, part
of mathematics at all, but they often illustrate the issues addressed in numerical analysis.
Some of these appear in the examples below.
There are also applications to areas typically considered part of pure mathematics; for example,
there is substantial work done on the roots of polynomials and rational functions (see 26C:
Polynomials and rational functions).
This area is undergirded by the areas of analysis. See for example Real analysis or Complex
analysis for general topics of convergence.
1. In the world of math, the practice of numerical analysis is well known for focusing on
algorithms as they are used to solve issues in continuous math. The practice is familiar
territory for engineers and those who work with physical science, but it is beginning to
expand further into liberal arts areas as well. This can be seen in astronomy, stock portfolio
analysis, data analysis, and medicine. Part of the application of numerical analysis involves
the study of errors. Specific errors are sought out and analyzed in order to arrive at
mathematical conclusions.
4. As a type of truncation error, the discretization error measures how far the solution of the
discrete problem departs from the solution of the underlying continuous problem.
Conclusion
Math errors, contrary to what their name suggests, are useful in statistics, computer
programming, advanced mathematics, and much more. Error evaluation provides
significantly useful information, especially when probability is required.