Numerical analysis

From Wikipedia, the free encyclopedia



Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the
square root of 2 is four sexagesimal figures, which is about six decimal figures: 1 + 24/60 +
51/60^2 + 10/60^3 = 1.41421296...[1] Image by Bill Casselman.[2]

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to
general symbolic manipulations) for the problems of continuous mathematics (as distinguished
from discrete mathematics).

One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a
sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit
square. Being able to compute the sides of a triangle (and hence, being able to compute square
roots) is extremely important, for instance, in carpentry and construction.[3]

Numerical analysis continues this long tradition of practical mathematical calculations. Much
like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact
answers, because exact answers are often impossible to obtain in practice. Instead, much of
numerical analysis is concerned with obtaining approximate solutions while maintaining
reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the physical
sciences, but in the 21st century, the life sciences and even the arts have adopted elements of
scientific computations. Ordinary differential equations appear in the movement of heavenly
bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical
linear algebra is important for data analysis; stochastic differential equations and Markov chains
are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation
in large printed tables. Since the mid-20th century, computers calculate the required functions
instead. The interpolation algorithms nevertheless may be used as part of the software for solving
differential equations.

Contents

 1 General introduction
o 1.1 History
o 1.2 Direct and iterative methods
 1.2.1 Discretization and numerical integration
o 1.3 Discretization
 2 The generation and propagation of errors
o 2.1 Round-off
o 2.2 Truncation and discretization error
o 2.3 Numerical stability and well-posed problems
 3 Areas of study
o 3.1 Computing values of functions
o 3.2 Interpolation, extrapolation, and regression
o 3.3 Solving equations and systems of equations
o 3.4 Solving eigenvalue or singular value problems
o 3.5 Optimization
o 3.6 Evaluating integrals
o 3.7 Differential equations
 4 Software
 5 See also
 6 Notes
 7 References
 8 External links

General introduction


The overall goal of the field of numerical analysis is the design and analysis of techniques to
give approximate but accurate solutions to hard problems, the variety of which is suggested by
the following.

 Advanced numerical methods are essential in making numerical weather prediction
feasible.
 Computing the trajectory of a spacecraft requires the accurate numerical solution of a
system of ordinary differential equations.
 Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving partial
differential equations numerically.
 Hedge funds (private investment funds) use tools from all fields of numerical analysis to
calculate the value of stocks and derivatives more precisely than other market
participants.
 Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and
crew assignments and fuel needs. This field is also called operations research.
 Insurance companies use numerical programs for actuarial analysis.

The rest of this section outlines several important themes of numerical analysis.

History

The field of numerical analysis predates the invention of modern computers by many centuries.
Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of
the past were preoccupied by numerical analysis, as is obvious from the names of important
algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or
Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data
such as interpolation points and function coefficients. Using these tables, often calculated out to
16 decimal places or more for some functions, one could look up values to plug into the formulas
given and achieve very good numerical estimates of some functions. The canonical work in the
field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very
large number of commonly used formulas and functions and their values at many points. The
function values are no longer very useful when a computer is available, but the large listing of
formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators
evolved into electronic computers in the 1940s, and it was then found that these computers were
also useful for administrative purposes. But the invention of the computer also influenced the
field of numerical analysis, since now longer and more complicated calculations could be done.

Direct and iterative methods

Direct vs iterative methods

Consider the problem of solving

3x^3 + 4 = 28

for the unknown quantity x.

Direct method:
3x^3 + 4 = 28.
Subtract 4: 3x^3 = 24.
Divide by 3: x^3 = 8.
Take cube roots: x = 2.

For the iterative method, apply the bisection method to f(x) = 3x^3 − 24. The initial
values are a = 0, b = 3, f(a) = −24, f(b) = 57.

Iterative method:
a       b       mid     f(mid)
0       3       1.5     −13.875
1.5     3       2.25    10.17...
1.5     2.25    1.875   −4.22...
1.875   2.25    2.0625  2.32...

We conclude from this table that the solution is between 1.875 and 2.0625. The algorithm
might return any number in that range with an error less than 0.2.
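The bisection iteration in the sidebar can be sketched in a few lines of Python; the helper name `bisect` and the tolerance argument are our own, not from the article:

```python
def bisect(f, a, b, tol=0.2):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid          # root lies in the left half
        else:
            a = mid          # root lies in the right half
    return (a + b) / 2

# the sidebar's equation 3x^3 + 4 = 28, rewritten as f(x) = 3x^3 - 24 = 0
root = bisect(lambda x: 3 * x**3 - 24, 0, 3)
```

Each pass halves the interval, so the error bound shrinks by a factor of two per iteration regardless of the function's shape.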

Discretization and numerical integration

In a two-hour race, we have measured the speed of the car at three instants and recorded
them in the following table.

Time    0:20    1:00    1:40
km/h    140     150     180

A discretization would be to say that the speed of the car was constant from 0:00 to 0:40,
then from 0:40 to 1:20, and finally from 1:20 to 2:00. For instance, the total distance
traveled in the first 40 minutes is approximately (2/3 h × 140 km/h) = 93.3 km. This
would allow us to estimate the total distance traveled as 93.3 km + 100 km + 120 km =
313.3 km, which is an example of numerical integration (see below) using a Riemann
sum, because displacement is the integral of velocity.
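The distance estimate above is a three-term Riemann sum, which in Python is a one-liner:

```python
speeds = [140, 150, 180]          # km/h, measured at 0:20, 1:00, 1:40
dt = 2 / 3                        # each reading assumed constant over 40 min = 2/3 h
distance = sum(v * dt for v in speeds)   # Riemann sum: 93.3 + 100 + 120 = 313.3 km
```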

Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10
and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x)
of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.

Well-conditioned problem: By contrast, the function f(x) = √x is continuous,
and so evaluating it is well-conditioned, at least for x not close to zero.

Direct methods compute the solution to a problem in a finite number of steps. These methods
would give the precise answer if they were performed in infinite precision arithmetic. Examples
include Gaussian elimination, the QR factorization method for solving systems of linear
equations, and the simplex method of linear programming. In practice, finite precision is used
and the result is an approximation of the true solution (assuming stability).

In contrast to direct methods, iterative methods are not expected to terminate in a finite number
of steps. Starting from an initial guess, iterative methods form successive approximations that
converge to the exact solution only in the limit. A convergence test is specified in order to decide
when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision
arithmetic these methods would not reach the solution within a finite number of steps (in
general). Examples include Newton's method, the bisection method, and Jacobi iteration. In
computational matrix algebra, iterative methods are generally needed for large problems.

Iterative methods are more common than direct methods in numerical analysis. Some methods
are direct in principle but are usually used as though they were not, e.g. GMRES and the
conjugate gradient method. For these methods the number of steps needed to obtain the exact
solution is so large that an approximation is accepted in the same manner as for an iterative
method.

Discretization

Furthermore, continuous problems must sometimes be replaced by a discrete problem whose
solution is known to approximate that of the continuous problem; this process is called
discretization. For example, the solution of a differential equation is a function. This function
must be represented by a finite amount of data, for instance by its value at a finite number of
points of its domain, even though this domain is a continuum.
The generation and propagation of errors

The study of errors forms an important part of numerical analysis. There are several ways in
which error can be introduced in the solution of the problem.

Round-off

Round-off errors arise because it is impossible to represent all real numbers exactly on a machine
with finite memory (which is what all practical digital computers are).
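A one-line illustration: in binary floating point, 0.1 and 0.2 have no exact representation, so even a trivial sum carries a round-off error:

```python
total = 0.1 + 0.2
print(total == 0.3)        # False: both operands were rounded before the addition
print(abs(total - 0.3))    # a residual error on the order of 1e-16
```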

Truncation and discretization error

Truncation errors are committed when an iterative method is terminated or a mathematical
procedure is approximated, and the approximate solution differs from the exact solution.
Similarly, discretization induces a discretization error because the solution of the discrete
problem does not coincide with the solution of the continuous problem. For instance, in the
iteration in the sidebar to compute the solution of 3x^3 + 4 = 28, after 10 or so iterations, we
conclude that the root is roughly 1.99 (for example). We therefore have a truncation error of
0.01.

Once an error is generated, it will generally propagate through the calculation. For instance, we
have already noted that the operation + on a calculator (or a computer) is inexact. It follows that
a calculation of the type a+b+c+d+e is even more inexact.

What does it mean when we say that truncation error is created when we approximate a
mathematical procedure? To integrate a function exactly, one must find the sum of
infinitely many trapezoids; numerically, one can sum only finitely many of them, hence the
mathematical procedure is approximated. Similarly, to differentiate a function, the
differential element must approach zero, but numerically we can only choose a finite value
for it.
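The trapezoid description can be made concrete. The sketch below (names are ours) implements a composite trapezoid rule for ∫₀¹ x² dx = 1/3; the truncation error comes precisely from stopping at a finite number of trapezoids, and it shrinks as that number grows:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n trapezoids of width h = (b - a) / n."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2          # endpoints count half
    for i in range(1, n):
        total += f(a + i * h)          # interior points count once
    return total * h

# truncation error decreases roughly like 1/n**2 for smooth integrands
coarse = trapezoid(lambda x: x * x, 0.0, 1.0, 10)
fine = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```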

Numerical stability and well-posed problems

Numerical stability is an important notion in numerical analysis. An algorithm is called
numerically stable if an error, whatever its cause, does not grow to be much larger during the
calculation. This happens if the problem is well-conditioned, meaning that the solution changes
by only a small amount if the problem data are changed by a small amount. To the contrary, if a
problem is ill-conditioned, then any small error in the data will grow to be a large error.

Both the original problem and the algorithm used to solve that problem can be well-conditioned
and/or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or
numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a
well-posed mathematical problem. For instance, computing the square root of 2 (which is
roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with
an initial approximation x1 to √2, for instance x1 = 1.4, and then computing improved guesses x2,
x3, etc. One such method is the famous Babylonian method, which is given by
x_{k+1} = x_k/2 + 1/x_k. Another iteration, which we will call Method X, is given by
x_{k+1} = (x_k^2 − 2)^2 + x_k.[4] We have calculated a few iterations of each scheme in table
form below, with initial guesses x1 = 1.4 and x1 = 1.42.

Babylonian (x1 = 1.4)     Babylonian (x1 = 1.42)      Method X (x1 = 1.4)     Method X (x1 = 1.42)
x2 = 1.4142857...         x2 = 1.41422535...          x2 = 1.4016             x2 = 1.42026896
x3 = 1.414213564...       x3 = 1.41421356242...       x3 = 1.4028614...       x3 = 1.42056...
...                       ...                         ...                     ...
                                                      x1000000 = 1.41421...   x28 = 7280.2284...

Observe that the Babylonian method converges fast regardless of the initial guess, whereas
Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42.
Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
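Both iterations in the table are one-liners, so their qualitative behaviour (fast convergence versus blow-up) is easy to reproduce; the iteration count below is our own choice:

```python
def babylonian(x):
    return x / 2 + 1 / x          # x_{k+1} = x_k/2 + 1/x_k, converges to sqrt(2)

def method_x(x):
    return (x * x - 2) ** 2 + x   # sqrt(2) is a fixed point, but the iteration is unstable

b = m = 1.42
for _ in range(28):
    b = babylonian(b)
    m = method_x(m)
# b has settled at 1.41421356..., while m has blown up past 10**6
```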

Numerical stability is also affected by the number of significant digits the machine keeps:
a machine that retains only the first four significant floating-point digits provides a good
example of loss of significance. Consider the two equivalent functions

f(x) = x(√(x + 1) − √x)   and   g(x) = x / (√(x + 1) + √x).

Comparing the results of f(500) and g(500) on such a machine, we see that loss of
significance, also called subtractive cancellation, has a huge effect on the results, even
though the two functions are equivalent; to show that they are equivalent, start with f(x),
multiply numerator and denominator by √(x + 1) + √x, and arrive at g(x). The true value
of the result is 11.174755..., which is exactly g(500) = 11.1748 after rounding to four
decimal digits, whereas evaluating f(500) on the four-digit machine loses most of the
significant figures because it subtracts two nearly equal square roots. Now imagine that a
program uses tens of terms like these: the error will grow as the program proceeds unless
the suitable form of the two functions is used each time f(x) or g(x) is evaluated.

 The example is taken from Mathews, Numerical Methods Using MATLAB, 3rd ed.
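A sketch of this cancellation effect, simulating a machine that keeps four significant digits by rounding every intermediate result; the `round4` helper and the use of four *significant* (rather than decimal) digits are our own modelling assumptions:

```python
import math

def f(x):   # subtracts two nearly equal square roots: cancellation-prone
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def g(x):   # algebraically equivalent form with no subtraction
    return x / (math.sqrt(x + 1) + math.sqrt(x))

def round4(v):
    return float(f"{v:.4g}")    # keep only four significant digits

def f4(x):  # f with every intermediate rounded to four digits
    return round4(x * round4(round4(math.sqrt(x + 1)) - round4(math.sqrt(x))))

def g4(x):  # g evaluated the same way
    return round4(x / round4(round4(math.sqrt(x + 1)) + round4(math.sqrt(x))))

# true value 11.174755...: g4(500) stays close, f4(500) collapses to 10.0
```

On the simulated machine, √501 and √500 both round to four digits (22.38 and 22.36) before they are subtracted, so their difference retains only one significant figure; g avoids the subtraction entirely.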

Areas of study


The field of numerical analysis is divided into different disciplines according to the problem that
is to be solved.

Computing values of functions

Interpolation: We have observed the temperature to vary from 20 degrees Celsius at 1:00
to 14 degrees at 3:00. A linear interpolation of this data would conclude that it was 17
degrees at 2:00 and 18.5 degrees at 1:30.

Extrapolation: If the gross domestic product of a country has been growing an average of
5% per year and was 100 billion dollars last year, we might extrapolate that it will be
105 billion dollars this year.

Regression: In linear regression, given n points, we compute a line that passes as close as
possible to those n points.

Optimization: Say you sell lemonade at a lemonade stand, and notice that at $1, you can
sell 197 glasses of lemonade per day, and that for each increase of $0.01, you will sell
one less glass per day. If you could charge $1.485, you would maximize your profit, but
due to the constraint of having to charge a whole-cent amount, charging $1.49 per glass
will yield the maximum income of $220.52 per day.

Differential equation: If you set up 100 fans to blow air from one end of the room to the
other and then you drop a feather into the wind, what happens? The feather will follow
the air currents, which may be very complex. One approximation is to measure the speed
at which the air is blowing near the feather every second, and advance the simulated
feather as if it were moving in a straight line at that same speed for one second, before
measuring the wind speed again. This is called the Euler method for solving an ordinary
differential equation.
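The feather procedure is exactly Euler's method. A minimal sketch on a test equation y′ = −y with known solution e^(−t); the test equation and step count are our choices, made so the error can be checked:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Euler's method: advance y' = f(t, y) in straight-line segments of length h."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)      # move at the currently measured slope for one step
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
exact = math.exp(-1.0)        # the error shrinks roughly like 1/steps
```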
One of the simplest problems is the evaluation of a function at a given point. The most
straightforward approach, of just plugging the number into the formula, is sometimes not very
efficient. For polynomials, a better approach is the Horner scheme, since it reduces the
necessary number of multiplications and additions. Generally, it is important to estimate and
control round-off errors arising from the use of floating point arithmetic.
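A sketch of the Horner scheme, which evaluates a degree-n polynomial with only n multiplications by nesting them:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's scheme.
    coeffs run from the highest power down: [3, 0, 0, 4] means 3x^3 + 4."""
    result = 0.0
    for c in coeffs:
        result = result * x + c    # one multiply and one add per coefficient
    return result

value = horner([3, 0, 0, 4], 2)    # the sidebar's 3x^3 + 4 at x = 2, i.e. 28
```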

Interpolation, extrapolation, and regression

Interpolation solves the following problem: given the value of some unknown function at a
number of points, what value does that function have at some other point between the given
points?

Extrapolation is very similar to interpolation, except that now we want to find the value of the
unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data are imprecise. Given some points,
and a measurement of the value of some function at these points (with an error), we want to
determine the unknown function. The least-squares method is one popular way to achieve this.

Solving equations and systems of equations

Another fundamental problem is computing the solution of some given equation. Two cases are
commonly distinguished, depending on whether the equation is linear or not. For instance, the
equation 2x + 5 = 3 is linear while 2x^2 + 5 = 3 is not.

Much effort has been put in the development of methods for solving systems of linear equations.
Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian
elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian)
positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such
as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient
method are usually preferred for large systems.
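A minimal sketch of one such iterative scheme, the Jacobi method, on a small diagonally dominant system (the example system is our own; diagonal dominance guarantees convergence here):

```python
def jacobi(A, b, sweeps=50):
    """Jacobi method: update every unknown from the previous sweep's values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 2x + y = 5, x + 3y = 10 has the exact solution x = 1, y = 3
solution = jacobi([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```

Unlike Gaussian elimination, each sweep only refines the current guess, which is why such methods pay off when the matrix is large and sparse.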

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of
a function is an argument for which the function yields zero). If the function is differentiable and
the derivative is known, then Newton's method is a popular choice. Linearization is another
technique for solving nonlinear equations.
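When the derivative is available, Newton's method converges very quickly near a simple root; a sketch for f(x) = x² − 2 (our example function, whose root is √2):

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:    # stop once the update is negligible
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)   # converges to sqrt(2)
```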

Solving eigenvalue or singular value problems

Several important problems can be phrased in terms of eigenvalue decompositions or singular
value decompositions. For instance, the spectral image compression algorithm[5] is based on the
singular value decomposition. The corresponding tool in statistics is called principal component
analysis.

Optimization
Main article: Optimization (mathematics)

Optimization problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some constraints.

The field of optimization is further split in several subfields, depending on the form of the
objective function and the constraint. For instance, linear programming deals with the case that
both the objective function and the constraints are linear. A famous method in linear
programming is the simplex method.

The method of Lagrange multipliers can be used to reduce optimization problems with
constraints to unconstrained optimization problems.

Evaluating integrals

Main article: Numerical integration

Numerical integration, in some instances also known as numerical quadrature, asks for the value
of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint
rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer"
strategy, whereby an integral on a relatively large set is broken down into integrals on smaller
sets. In higher dimensions, where these methods become prohibitively expensive in terms of
computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo
integration), or, in modestly large dimensions, the method of sparse grids.
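A sketch of one Newton–Cotes rule, composite Simpson's rule, applied to ∫₀^π sin x dx = 2; the test integral is our choice:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n (the number of subintervals) must be even."""
    assert n % 2 == 0
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # weights 4, 2, 4, 2, ..., 4
    return total * h / 3

approx = simpson(math.sin, 0.0, math.pi, 10)   # exact value is 2
```

Even with only ten subintervals the error is already below 10⁻³, reflecting the rule's O(h⁴) accuracy for smooth integrands.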

Differential equations

Main articles: Numerical ordinary differential equations and Numerical partial differential
equations

Numerical analysis is also concerned with computing (in an approximate way) the solution of
differential equations, both ordinary differential equations and partial differential equations.

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-
dimensional subspace. This can be done by a finite element method, a finite difference method,
or (particularly in engineering) a finite volume method. The theoretical justification of these
methods often involves theorems from functional analysis. This reduces the problem to the
solution of an algebraic equation.

Software
Main articles: List of numerical analysis software and Comparison of numerical analysis
software

Since the late twentieth century, most algorithms are implemented in a variety of programming
languages. The Netlib repository contains various collections of software routines for numerical
problems, mostly in Fortran and C. Commercial products implementing many different
numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU
Scientific Library.

There are several popular numerical computing applications such as MATLAB, S-PLUS,
LabVIEW, and IDL as well as free and open source alternatives such as FreeMat, Scilab, GNU
Octave (similar to Matlab), IT++ (a C++ library), R (similar to S-PLUS) and certain variants of
Python. Performance varies widely: while vector and matrix operations are usually fast, scalar
loops may vary in speed by more than an order of magnitude.[6][7]

Many computer algebra systems such as Mathematica also benefit from the availability of
arbitrary precision arithmetic which can provide more accurate results.

Also, any spreadsheet software can be used to solve simple problems relating to numerical
analysis.

Error analysis
From Wikipedia, the free encyclopedia

Error analysis is the study of the kind and quantity of error that occurs, particularly in the fields
of applied mathematics (particularly numerical analysis), applied linguistics, and statistics.

Contents

 1 Error analysis in numerical modeling


 2 Error analysis in language teaching
 3 Error analysis in molecular dynamics simulation
 4 See also
 5 References
 6 External links

Error analysis in numerical modeling


In numerical simulation or modeling of real systems, error analysis is concerned with the
changes in the output of the model as the parameters to the model vary about a mean.
For instance, in a system modeled as a function of two variables z = f(x, y), error analysis deals
with the propagation of the numerical errors in x and y (around mean values x̄ and ȳ) to error in z
(around a mean z̄).[1]

In numerical analysis, error analysis comprises both forward error analysis and backward
error analysis. Forward error analysis involves the analysis of a function z′ = f′(a0, a1, ..., an),
which is an approximation (usually a finite polynomial) to a function z = f(a0, a1, ..., an), to
determine the bounds on the error in the approximation; i.e., to find ε such that 0 ≤ |z − z′| ≤ ε.
Backward error analysis involves the analysis of the approximation function z′ = f′(a0, a1, ..., an)
to determine the bounds on the parameters ai such that the result z′ = z.

Numerical analysis is the area of mathematics and computer science that creates, analyzes, and
implements algorithms for solving numerically the problems of continuous mathematics.  Such
problems originate generally from real-world applications of algebra, geometry, and calculus,
and they involve variables which vary continuously. These problems occur throughout the
natural sciences, social sciences, medicine, engineering, and business. Beginning in the 1940s,
the growth in power and availability of digital computers has led to an increasing use of realistic
mathematical models in science, medicine, engineering, and business; and numerical analysis of
increasing sophistication has been needed to solve these more accurate and complex
mathematical models of the world. The formal academic area of numerical analysis varies from
highly theoretical mathematical studies to computer science issues involving the effects of
computer hardware and software on the implementation of specific algorithms.

Contents

 1 Areas of numerical analysis


o 1.1 Systems of linear and nonlinear equations
o 1.2 Approximation theory
o 1.3 Numerical solution of differential and integral equations
 2 Some common viewpoints and concerns in numerical analysis
 3 Development of numerical methods
o 3.1 Numerical solution of systems of linear equations
o 3.2 Numerical solution of systems of nonlinear equations
o 3.3 Numerical methods for solving differential and integral equations
 4 References
 5 See also

Areas of numerical analysis


A rough categorization of the principal areas of numerical analysis is given below, keeping in
mind that there is often a great deal of overlap between the listed areas. In addition, the
numerical solution of many mathematical problems involves some combination of some of these
areas, possibly all of them. There are also a few problems which do not fit neatly into any of the
following categories.

Systems of linear and nonlinear equations

 Numerical solution of systems of linear equations. This refers to solving for x in the
equation Ax = b with a given matrix A and column vector b. The most important case
has A square. There are both direct methods of solution (requiring only a finite
number of arithmetic operations) and iterative methods (giving increased accuracy with
each new iteration). This topic also includes the matrix eigenvalue problem for a square
matrix A, solving for λ and x in the equation Ax = λx.
 Numerical solution of systems of nonlinear equations. This refers to rootfinding problems,
usually written as f(x) = 0 with x a vector with n components and f a
vector with m components. The most important case has m = n.
 Optimization. This refers to minimizing or maximizing a real-valued function f(x). The
permitted values for x can be either constrained or unconstrained. The
'linear programming problem' is a well-known and important case; f(x) is linear, and
there are linear equality and/or inequality constraints on x.

Approximation theory

Use computable functions to approximate the values of functions that are not easily
computable or use approximations to simplify dealing with such functions. The most popular
types of computable functions are polynomials, rational functions, and piecewise versions
of them, for example spline functions. Trigonometric polynomials are also a very useful choice.

 Best approximations. Here a given function f is approximated within a given finite-
dimensional family of computable functions. The quality of the approximation is
expressed by a functional, usually the maximum absolute value of the approximation
error or an integral involving the error. Least squares approximations and minimax
approximations are the most popular choices.
 Interpolation. A computable function p is to be chosen to agree with a given function f
at a given finite set of points. The study of determining and analyzing such interpolation
functions is still an active area of research, particularly when p is a multivariate
polynomial.
 Fourier series. A function f is decomposed into orthogonal components based on a
given orthogonal basis, and then f is approximated by using only the largest of such
components. The convergence of Fourier series is a classical area of mathematics, and it
is very important in many fields of application. The development of the Fast Fourier
Transform in 1965 spawned rapid progress in digital technology. In the 1990s wavelets
became an important tool in this area.
 Numerical integration and differentiation. Most integrals cannot be evaluated directly in
terms of elementary functions, and instead they must be approximated numerically. Most
functions can be differentiated analytically, but there is still a need for numerical
differentiation, both to approximate the derivative of numerical data and to obtain
approximations for discretizing differential equations.
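The need for numerical differentiation mentioned above is usually met with finite differences; a central-difference sketch (the step size h is a tunable assumption, balancing truncation against round-off):

```python
import math

def central_diff(f, x, h=1e-5):
    """Central difference (f(x+h) - f(x-h)) / (2h): truncation error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

d = central_diff(math.sin, 0.0)   # true derivative is cos(0) = 1
```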

Numerical solution of differential and integral equations

These equations occur widely as mathematical models for the physical world, and their
numerical solution is important throughout the sciences and engineering.

 Ordinary differential equations. This refers to systems of differential equations in which
the unknown solutions are functions of only a single variable. The most important cases
are initial value problems and boundary value problems, and these are the subjects of a
number of textbooks. Of more recent interest are 'differential-algebraic equations', which
are mixed systems of algebraic equations and ordinary differential equations. Also of
recent interest are 'delay differential equations', in which the rate of change of the
solution depends on the state of the system at past times.
 Partial differential equations. This refers to differential equations in which the unknown
solution is a function of more than one variable. These equations occur in almost all areas
of engineering, and many basic models of the physical sciences are given as partial
differential equations. Thus such equations are a very important topic for numerical
analysis. For example, the Navier-Stokes equations are the main theoretical model for the
motion of fluids, and the very large area of 'computational fluid mechanics' is concerned
with numerically solving these and other equations of fluid dynamics.
 Integral equations. These equations involve the integration of an unknown function, and
linear equations probably occur most frequently. Some mathematical models lead directly
to integral equations; for example, the radiosity equation is a model for radiative heat
transfer. Another important source of such problems is the reformulation of partial
differential equations, and such reformulations are often called 'boundary integral
equations'.

Some common viewpoints and concerns in numerical analysis
Most numerical analysts specialize in small sub-areas of the areas listed above, but they share
some common concerns and perspectives. These include the following.

 The mathematical aspects of numerical analysis make use of the language and results of
linear algebra, real analysis, and functional analysis.
 If you cannot solve a problem directly, then replace it with a 'nearby problem' which can
be solved more easily. This is an important perspective which cuts across all types of
mathematical problems. For example, to evaluate a definite integral numerically, begin
by approximating its integrand using polynomial interpolation or a Taylor series, and
then integrate exactly the polynomial approximation.
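This 'nearby problem' strategy can be sketched concretely; the integrand e^x and the quadratic interpolant below are illustrative choices (the construction reproduces Simpson's rule):

```python
import numpy as np

def integrate_via_interpolation(f, a, b):
    """Approximate the integral of f over [a, b] by interpolating f with a
    quadratic at a, (a+b)/2, b, then integrating that polynomial exactly
    (this reproduces Simpson's rule)."""
    xs = np.array([a, (a + b) / 2.0, b])
    coeffs = np.polyfit(xs, f(xs), 2)    # the interpolating quadratic
    antideriv = np.polyint(coeffs)       # its exact antiderivative
    return np.polyval(antideriv, b) - np.polyval(antideriv, a)

approx = integrate_via_interpolation(np.exp, 0.0, 1.0)
exact = np.e - 1.0
print(approx, exact)   # the nearby polynomial problem is integrated exactly
```

The hard problem (integrating e^x numerically) has been replaced by an easy one (integrating a quadratic), and only the replacement step introduces error.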
 All numerical calculations are carried out using finite precision arithmetic, usually in a
framework of floating-point representation of numbers. What are the effects of using
such finite precision computer arithmetic? How are arithmetic calculations to be carried
out? Using finite precision arithmetic will affect how we compute solutions to all types of
problems, and it forces us to think about the limits on the accuracy with which a problem
can be solved numerically. Even when solving finite systems of linear equations by direct numerical methods, infinite precision arithmetic would be needed to obtain an exact solution; finite precision yields only an approximation to it.
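A small illustration of these finite precision effects (standard IEEE double precision behavior):

```python
# Finite precision: decimal 0.1 has no exact binary representation,
# so identities from exact arithmetic can fail.
a = 0.1 + 0.2
print(a == 0.3)        # False
print(abs(a - 0.3))    # about 5.6e-17, one unit in the last place

# Order of operations matters too: a small term added to a large one
# can be lost entirely ("absorption").
print((1e16 + 1.0) - 1e16)   # 0.0, the 1.0 has been absorbed
```

Such effects are harmless in isolation but set a floor on the achievable accuracy of any computed solution.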
 There is concern with 'stability', a concept referring to the sensitivity of the solution of a
given problem to small changes in the data or the given parameters of the problem. There
are two aspects to this. First, how sensitive is the original problem to small changes in the
data of the problem? Problems that are very sensitive are often referred to as 'ill-
conditioned' or 'ill-posed', depending on the degree of sensitivity. Second, the numerical
method should not introduce additional sensitivity that is not present in the original
mathematical problem being solved. In developing a numerical method to solve a
problem, the method should be no more sensitive to changes in the data than is true of the
original mathematical problem.
 There is a fundamental concern with error, its size, and its analytic form. When
approximating a problem, a numerical analyst would want to understand the behaviour of
the error in the computed solution. Understanding the form of the error may allow one to
minimize or estimate it. A 'forward error analysis' looks at the effect of errors made in the
solution process. This is the standard way of understanding the consequences of the
approximation errors that occur in setting up a numerical method of solution, e.g. in
numerical integration and in the numerical solution of differential and integral equations.
A 'backward error analysis' works backward in a numerical algorithm, showing that the
approximating numerical solution is the exact solution to a perturbed version of the
original mathematical problem. In this way the stability of the original problem can be
used to explain possible difficulties in a numerical method. Backward error analysis has
been especially important in understanding the behaviour of numerical methods for
solving linear algebra problems.
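The backward error view can be demonstrated on a single floating-point addition, using exact rational arithmetic (Python's fractions module) just to measure the perturbation; the particular numbers are illustrative:

```python
from fractions import Fraction

# Backward error view of one floating-point addition: the computed sum
# fl(a + b) is the *exact* sum of slightly perturbed data, a + b'.
a, b = 1.0, 1e-16
s = a + b                                 # rounds to 1.0 in double precision

b_perturbed = Fraction(s) - Fraction(a)   # exact: a + b_perturbed == s
perturbation = abs(b_perturbed - Fraction(b))

# The perturbation is below machine epsilon relative to the data size,
# so the computed result solves a problem very near the original one.
rel_backward_error = perturbation / (Fraction(a) + Fraction(b))
print(float(rel_backward_error))          # about 1e-16
```

The computed sum is wrong as an answer to the original problem, but exactly right for data perturbed by less than the uncertainty the data already carries.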
 In order to develop efficient means of calculating a numerical solution, it is important to
understand the characteristics of the computer being used. For example, the structure of
the computer memory is often very important in devising efficient algorithms for large
linear algebra problems. Also, parallel computer architectures lead to efficient algorithms
only if the algorithm is designed to take advantage of the parallelism.
 Numerical analysts are generally interested in measuring the efficiency of algorithms.
What is the cost of a particular algorithm? For example, the use of Gaussian elimination to solve a linear system containing n equations will require approximately 2n^3/3 arithmetic operations. How does this compare with other numerical methods for solving this problem? This topic is a part of the larger area of 'computational complexity'.
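The approximately 2n^3/3 operation count for Gaussian elimination on n equations (a standard result) can be checked by instrumenting the elimination loops, counting operations instead of performing them; this sketch counts only the matrix part of forward elimination:

```python
def gaussian_elimination_ops(n):
    """Count the divisions, multiplications, and subtractions performed
    by forward elimination on an n x n matrix (matrix entries only,
    ignoring the right-hand side and back substitution)."""
    ops = 0
    for k in range(n):                    # eliminate below pivot k
        for i in range(k + 1, n):
            ops += 1                      # one division for the multiplier
            ops += 2 * (n - k - 1)        # one multiply + one subtract per entry
    return ops

n = 60
ops = gaussian_elimination_ops(n)
print(ops, 2 * n**3 / 3, ops / (2 * n**3 / 3))   # ratio near 1 for large n
```

The counted total approaches 2n^3/3 as n grows, matching the quoted complexity.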
 Use information gained in solving a problem to improve the solution procedure for that
problem. Often we do not fully understand the characteristics of a problem, especially very complicated and large ones, and the solution procedure is refined as more is learned during the computation. Such a solution process is referred to as an 'adaptive procedure', and it can also be viewed as a feedback process.
Development of numerical methods
Numerical analysts and applied mathematicians have a variety of tools which they use in
developing numerical methods for solving mathematical problems. An important perspective,
one mentioned earlier, which cuts across all types of mathematical problems is that of replacing
the given problem with a 'nearby problem' which can be solved more easily. There are other
perspectives which vary with the type of mathematical problem being solved.

Numerical solution of systems of linear equations

Linear systems arise in many of the problems of numerical analysis, a reflection of the
approximation of mathematical problems using linearization. This leads to diversity in the
characteristics of linear systems, and for this reason there are numerous approaches to solving
linear systems. As an example, numerical methods for solving partial differential equations often
lead to very large 'sparse' linear systems in which most coefficients are zero. Solving such sparse
systems requires methods that are quite different from those used to solve more moderate sized
'dense' linear systems in which most coefficients are non-zero.

There are 'direct methods' and 'iterative methods' for solving all types of linear systems, and the
method of choice depends on the characteristics of both the linear system and on the computer
hardware being used. For example, some sparse systems can be solved by direct methods,
whereas others are better solved using iteration. With iteration methods, the linear system is
sometimes transformed to an equivalent form that is more amenable to being solved by iteration;
this is often called 'pre-conditioning' of the linear system.
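A minimal sketch of an iterative method, using an illustrative strictly diagonally dominant tridiagonal system (not one taken from the text): Jacobi iteration updates every unknown from the previous iterate using only the diagonal and off-diagonal parts of the matrix.

```python
import numpy as np

# Illustrative sparse system: tridiagonal, strictly diagonally dominant,
# so Jacobi iteration is guaranteed to converge.
n = 100
A = 4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)

D = np.diag(A)             # diagonal of A, as a vector
R = A - np.diag(D)         # off-diagonal part
x = np.zeros(n)
for _ in range(60):
    x = (b - R @ x) / D    # Jacobi update: x_new = D^{-1} (b - R x)

print(np.linalg.norm(A @ x - b))   # residual, driven toward zero
```

For matrices this well behaved, Jacobi converges quickly; pre-conditioning plays the analogous role of reshaping a harder system so that such iterations converge.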

With the matrix eigenvalue problem Ax = λx, it is standard to transform the matrix A to a simpler form, one for which the eigenvalue problem can be solved more easily and/or cheaply. A favorite choice is 'orthogonal transformations' because they are a simple and stable way to convert the given matrix A. Orthogonal transformations are also very useful in transforming other problems in numerical linear algebra. Of particular importance in this regard is the least squares solution of over-determined linear systems.
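The least squares use of orthogonal transformations can be sketched with a QR factorization; the overdetermined system below is an illustrative random example, not one from the text.

```python
import numpy as np

# Least squares solution of an over-determined system A x ≈ b via an
# orthogonal factorization A = QR: minimize ||A x - b|| by solving the
# small triangular system R x = Q^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))     # 20 equations, 3 unknowns
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # consistent data: zero residual

Q, R = np.linalg.qr(A)               # reduced QR: Q is 20x3, R is 3x3
x = np.linalg.solve(R, Q.T @ b)

print(np.linalg.norm(x - x_true))    # near machine precision
```

Because Q is orthogonal it leaves lengths unchanged, which is exactly the stability property the text refers to.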

The linear programming problem was solved principally by the 'simplex method' until new approaches were developed in the 1980s, and the simplex method remains an important method of solution. It is a direct method that uses tools from the numerical solution of linear systems.

Numerical solution of systems of nonlinear equations

With a single equation f(x) = 0, and having an initial estimate x_0 of the root, approximate f(x) by its tangent line at the point (x_0, f(x_0)). Find the root of this tangent line as an approximation to the root of the original equation f(x) = 0. This leads to 'Newton's iteration method',

x_{n+1} = x_n - f(x_n) / f'(x_n),    n = 0, 1, 2, ...

Other linear and higher degree approximations can be used, and these lead to alternative iteration methods. An important derivative-free approximation of Newton's method is the 'secant method'.
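A sketch of Newton's iteration x_{n+1} = x_n - f(x_n)/f'(x_n) and the derivative-free secant variant, applied to the illustrative equation x^2 - 2 = 0:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                  # flat difference quotient: stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Both find the positive root of x^2 - 2 = 0, i.e. sqrt(2).
root_n = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
root_s = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(root_n, root_s)
```

The secant method trades the derivative evaluation for a slightly slower, but still superlinear, rate of convergence.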

For a system of n nonlinear equations f(x) = 0 for a solution vector x in R^n, we approximate f(x) by its linear Taylor approximation about the initial estimate x^(0). This leads to Newton's method for nonlinear systems,

x^(k+1) = x^(k) - F(x^(k))^(-1) f(x^(k)),    k = 0, 1, 2, ...

in which F(x) denotes the Jacobian matrix, of order n, for f(x).

In practice, the Jacobian matrix for f is often too complicated to compute directly; instead the partial derivatives in the Jacobian matrix are approximated using 'finite differences'. This leads to a 'finite difference Newton method'. As an alternative strategy, and in analogy with the development of the secant method for the single variable problem, there is a similar rootfinding iteration method for solving nonlinear systems. It is called 'Broyden's method' and it uses finite difference approximations of the derivatives in the Jacobian matrix, avoiding the evaluation of the partial derivatives of f.
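A sketch of a finite difference Newton method for a system; the two-equation test problem is a hypothetical example, and the step size h in the difference quotients is an illustrative choice:

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Approximate the Jacobian of f at x column by column with the
    forward differences (f(x + h e_j) - f(x)) / h."""
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

def newton_system(f, x0, tol=1e-10, max_iter=50):
    """Finite difference Newton: solve J(x) d = -f(x), then x <- x + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(fd_jacobian(f, x), -f(x))
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Hypothetical test system: the intersection of the unit circle with
# the curve y = x^3, in the first quadrant.
def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0, y - x**3])

sol = newton_system(f, [1.0, 1.0])
print(sol)   # roughly [0.826, 0.564]
```

Broyden's method goes one step further, reusing and updating a single approximate Jacobian across iterations instead of rebuilding it each time.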

Numerical methods for solving differential and integral equations

With such equations, there are usually at least two general steps involved in obtaining a nearby
problem from which a numerical approximation can be computed; this is often referred to as
'discretization' of the original problem. The given equation will have a domain on which the
unknown function is defined, perhaps an interval in one dimension and maybe a rectangle,
ellipse, or other simply connected bounded region in two dimensions. Many numerical methods
begin by introducing a mesh or grid on this domain, and the solution is to be approximated using
this grid. Following this, there are several common approaches.

One approach approximates the equation with a simpler equation defined on the mesh. For
example, consider approximating the boundary value problem

u''(x) = f(x, u(x)),    0 < x < 1,    u(0) = u(1) = 0.

Introduce a set of mesh points x_j = j*h, j = 0, 1, ..., n, with h = 1/n for some given n. Approximate the boundary value problem by

(u_{j+1} - 2*u_j + u_{j-1}) / h^2 = f(x_j, u_j),    j = 1, ..., n-1,    u_0 = u_n = 0.

The second derivative in the original problem has been replaced by a numerical approximation to the second derivative. The new problem is a finite system of nonlinear equations, presumably amenable to solution by known techniques. The solution to this new problem is the set of values u_j, and it is defined only at the mesh points x_j.
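For the linear special case u''(x) = g(x) this discretization reduces to one tridiagonal solve; the right-hand side below is an illustrative choice whose exact solution sin(pi*x) lets the O(h^2) error be observed:

```python
import numpy as np

# Discretize u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, whose exact
# solution is u(x) = sin(pi x), on the mesh x_j = j*h with h = 1/n.
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Replace u''(x_j) by (u_{j+1} - 2 u_j + u_{j-1}) / h^2 at the interior
# mesh points; with the boundary values fixed this is tridiagonal.
main = -2.0 * np.ones(n - 1)
off = np.ones(n - 2)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
rhs = -np.pi**2 * np.sin(np.pi * x[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, rhs)

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)   # O(h^2): about 8e-5 for n = 100
```

Halving h cuts the error by roughly a factor of four, the signature of the second-order difference approximation used here.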

A second approach to discretizing differential and integral equations is as follows. Choose a finite-dimensional family of functions, denoted here by V, with which to approximate the unknown solution function u. Write the given differential or integral equation as L(u) = 0, with L(v) a function for any function v, perhaps over a restricted class of functions. The numerical method consists of selecting a function u* in V such that L(u*) is a small function in some sense. The various ways of doing this lead to 'Galerkin methods', 'collocation methods', and 'least squares methods'.
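A one-term collocation sketch (the toy equation and the basis function are illustrative assumptions, not from the text): take V = span{x(1-x)}, which already satisfies the boundary conditions, and force the residual of u'' = -2 to vanish at a single collocation point.

```python
import numpy as np

# Toy problem: u''(x) = -2, u(0) = u(1) = 0, exact solution u(x) = x(1 - x).
# Trial function u*(x) = c * x(1 - x), so u*'' = -2c identically.
# Collocation at x = 1/2: require u*''(1/2) + 2 = 0, i.e. -2c + 2 = 0.
c = 2.0 / 2.0

u_star = lambda t: c * t * (1.0 - t)
exact = lambda t: t * (1.0 - t)

ts = np.linspace(0.0, 1.0, 11)
print(np.max(np.abs(u_star(ts) - exact(ts))))   # 0.0 here, since u* = u
```

In realistic problems V has many basis functions, the residual is enforced at many collocation points, and the resulting linear or nonlinear system is solved for the coefficients.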

Yet another approach is to reformulate the equation as an optimization problem. Such reformulations are a part of the classical area of mathematics known as the 'calculus of variations', a subject that reflects the importance in physics of minimization principles. The well-known 'finite element method' for solving elliptic partial differential equations is obtained in this way, although it often coincides with a Galerkin method.

The approximating functions in V are often chosen as piecewise polynomial functions which are polynomial over the elements of the mesh chosen earlier. Such methods are sometimes called 'local methods'. When the approximating functions in V are defined without reference to a grid, the methods are sometimes called 'global methods' or 'spectral methods'. Examples of such families are sets of polynomials or trigonometric functions of some finite degree or less.

With all three approaches to solving a differential or integral equation, the intent is that the resulting solution u* be close to the desired solution u. The business of theoretical numerical analysis is to analyze such an algorithm and investigate the size of u - u*.

References
For an historical account of early numerical analysis, see

 Herman Goldstine. A History of Numerical Analysis From the 16th Through the 19th Century, Springer-Verlag, New York, 1977.

For a current view of numerical analysis as taught at the undergraduate level, see

 Cleve Moler. Numerical Computing with MATLAB, SIAM Pub., Philadelphia, 2004.

For a current view of numerical analysis as taught at the advanced undergraduate or beginning
graduate level, see
 Alfio Quarteroni, Riccardo Sacco, and Fausto Saleri. Numerical Mathematics, Springer-
Verlag, New York, 2000.

 Christoph W. Ueberhuber. Numerical Computation: Vol. 1: Methods, Software, and Analysis; Vol. 2: Methods, Software, and Analysis, Springer-Verlag, New York, 1997.

For one perspective on a theoretical framework using functional analysis for studying many
problems in numerical analysis, see

 Kendall Atkinson and Weimin Han. Theoretical Numerical Analysis: A Functional Analysis Framework, 2nd ed., Springer-Verlag, New York, 2005.

As references for numerical linear algebra, see

 Gene Golub and Charles Van Loan. Matrix Computations, 3rd ed., Johns Hopkins
University Press, 1996.

 Nicholas Higham. Accuracy and Stability of Numerical Algorithms, SIAM Pub., Philadelphia, 1996.

For an introduction to practical numerical analysis for solving ordinary differential equations, see

 Lawrence Shampine, Ian Gladwell, Skip Thompson. Solving ODEs with MATLAB, Cambridge University Press, Cambridge, 2003.

For information on computing aspects of numerical analysis, see

 Michael Overton. Numerical computing with IEEE floating point arithmetic, SIAM Pub.,
Philadelphia, 2001.

 Suely Oliveira and David Stewart. Writing Scientific Software: A Guide to Good Style,
Cambridge University Press, Cambridge, 2006.

Introduction
Numerical analysis involves the study of methods of computing numerical data. In many
problems this implies producing a sequence of approximations; thus the questions involve the
rate of convergence, the accuracy (or even validity) of the answer, and the completeness of the
response. (With many problems it is difficult to decide from a program's termination whether
other solutions exist.) Since many problems across mathematics can be reduced to linear algebra,
this too is studied numerically; here there are significant problems with the amount of time
necessary to process the initial data. Numerical solutions to differential equations require the
determination not of a few numbers but of an entire function; in particular, convergence must be
judged by some global criterion. Other topics include numerical simulation, optimization, and
graphical analysis, and the development of robust working code.
Numerical linear algebra topics: solutions of linear systems AX = B, eigenvalues and
eigenvectors, matrix factorizations. Calculus topics: numerical differentiation and integration,
interpolation, solutions of nonlinear equations f(x) = 0. Statistical topics: polynomial
approximation, curve fitting.

Applications and related fields
For papers involving machine computations and programs in a specific mathematical area, see Section --04 in that area. This includes computational issues in group theory, number theory, geometry, statistics, and so on; for each of these fields there are software packages or libraries of code which are discussed on those index pages. (On the other hand, most results on numerical integration, say, are in this section rather than in Measure and Integration; topics in optimization are in section 65K rather than in Operations Research.)

For calculations of a combinatorial nature and for graph-theoretic questions such as the traveling
salesman problem or scheduling algorithms, see Combinatorics. (These are distinguished by the
discrete nature of the solution sought.) Portions of that material -- particularly investigations into the complexity of the algorithms -- are also treated in Computer Science.

General issues of computer use, such as system organization and methodology, or artificial
intelligence, are certainly in computer science. Topics in computer algebra or symbolic
calculation are treated separately.

Issues concerning limitations of specific hardware or software are not, strictly speaking, part of mathematics at all, but they often illustrate some of the issues addressed in numerical analysis. Some of these can be seen in the examples below.

Applications of numerical analysis occur throughout the fields of applied (numerical) mathematics, in particular in the fields of physics (sections 70-86). Many of these areas include subheadings, e.g. for finite element methods (which are primarily treated here, in sections 65L - 65P).

There are also applications to areas typically considered part of pure mathematics; for example, there is substantial work done on the roots of polynomials and rational functions (section 26C).

This area is undergirded by the areas of analysis. See for example Real analysis or Complex
analysis for general topics of convergence.

Types of Errors in Numerical Analysis

By Tom Lutzenberger, eHow Contributor
updated: February 24, 2010


1. [Image: the abacus, an early math calculator]

In the world of math, the practice of numerical analysis is well known for focusing on algorithms as they are used to solve problems in continuous mathematics. The practice is familiar territory for engineers and those who work with physical science, but it is beginning to expand into other fields as well, such as astronomy, stock portfolio analysis, data analysis and medicine. Part of applying numerical analysis involves understanding errors: specific types of error are identified and analyzed in order to judge the reliability of mathematical conclusions.

The Round-Off Error

2. Round-off error arises because it is not possible to represent every real number exactly in a computer, so rounding is introduced to adjust for this. A round-off error represents the numerical gap between what a figure actually is and its closest representable value, depending on how the rounding is applied. For instance, rounding to the nearest whole number means you round up or down to the closest whole figure, so a result of 3.31 rounds to 3. Always rounding up is different: with that approach, a figure of 3.31 rounds to 4. In numerical analysis, the study of round-off error is an attempt to identify the size of these rounding gaps as they arise in algorithms. It is also known as quantization error.
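These rounding behaviors are easy to reproduce (standard double precision floats and Python's built-in rounding; the particular numbers are illustrative):

```python
from decimal import Decimal
import math

print(Decimal(0.1))        # the binary value actually stored, not exactly 0.1

total = sum([0.1] * 10)
print(total == 1.0)        # False: ten round-off errors accumulate

print(round(3.31))         # 3, rounding to the nearest whole number
print(math.ceil(3.31))     # 4, always rounding up instead
```

Each stored value is correct to within half a unit in the last place, but the small gaps can accumulate across many operations.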

The Truncation Error

3. A truncation error occurs when an infinite or exact process is cut short, i.e. when an approximation is substituted for an exact value or limit in numerical analysis. The error is the difference between the approximate value and the actual value. For example, approximate sin(x) by the truncated Taylor series x - x^3/6: for x = 0.5 the approximation gives about 0.47917, while sin(0.5) is about 0.47943, so the truncation error is about 0.00026.
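A concrete, correctly worked instance (an illustrative Taylor series truncation): approximate sin(x) by the first two nonzero terms of its series and measure the discarded remainder.

```python
import math

# Truncation error of the Taylor approximation sin(x) ≈ x - x^3/6.
# The first discarded term, x^5/120, predicts the size of the error.
x = 0.5
approx = x - x**3 / 6.0
trunc_error = abs(math.sin(x) - approx)

print(trunc_error)         # about 2.6e-4
print(x**5 / 120.0)        # about 2.6e-4, the leading discarded term
```

The measured error matches the leading discarded term closely, which is what makes truncation error predictable and controllable.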

The Discretization Error

4. As a type of truncation error, the discretization error measures how much the solution of a discrete math problem differs from the solution of the continuous math problem it approximates.

Numerical Stability Errors

5. If an error introduced at one point in an algorithm stays small and does not grow as the calculation continues, the algorithm is considered numerically stable. This happens when the error causes only a very small variation in the final result. If the opposite occurs, and the error grows as the calculation continues, then the algorithm is considered numerically unstable.
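The contrast between stable and unstable computations can be seen in a classic textbook recurrence (not an example from this article): the same exact recurrence is benign in one direction and explosive in the other.

```python
import math

# The integrals I_n = integral of x^n * e^(x-1) over [0, 1] satisfy the
# exact recurrence I_n = 1 - n * I_{n-1}, with I_0 = 1 - 1/e. Run forward,
# each step multiplies the inherited error by n, so the initial round-off
# grows like n! and the computed values are garbage by n = 20, even though
# the true I_n shrink toward 0.
forward = 1.0 - math.exp(-1.0)
for n in range(1, 21):
    forward = 1.0 - n * forward          # unstable direction

# Run backward, I_{n-1} = (1 - I_n) / n, the error is *divided* at every
# step; even the crude starting guess I_30 = 0 yields an accurate I_20.
backward = 0.0
for n in range(30, 20, -1):
    backward = (1.0 - backward) / n

print(forward)    # wildly wrong
print(backward)   # about 0.0455, correct to many digits
```

Stability is a property of the algorithm, not of the mathematics: the recurrence itself is exact in both directions.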

Conclusion

6. Math errors, despite what their name suggests, are useful in statistics, computer programming, advanced mathematics and much more. Evaluating errors provides significantly useful information, especially when probability and reliability are involved.
