
Numerical Methods and Computation

MTL 107

Harish Kumar ([email protected])


and
Kamana Porwal (Coordinator, [email protected])

Dept. of Mathematics, IIT Delhi


Lecture 1: Introduction to MTL 107
Numerical Methods and Computations

Course Website
https://sites.google.com/view/kporwal/teaching/mtl107
Outline of today’s Lecture

▶ What are Numerical Methods?


▶ Lecture Planning
▶ Tutorials and Exams
▶ References
▶ Start of the Lecture
Science and Technology

Figure 1: Experiments and Theory (before access to computing)

Since 1980 ...

Figure 2: Experiments, Theory, and Computing (after access to computing)




Course Plan (Part 1: Prof. Harish Kumar)

▶ Introduction
▶ Calculus and Linear Algebra Review
▶ Roundoff Errors
▶ Nonlinear Equations in One Variable
▶ Direct Methods for Linear Systems
▶ Iterative Methods for Linear Systems
▶ Iterative methods for systems of nonlinear equations
Course Plan (Part 2: Prof. Kamana Porwal)

▶ Interpolations
▶ Least Square Approximation
▶ Numerical Differentiation
▶ Numerical Integration
▶ Numerical Methods for Ordinary Differential equations: Initial
Value Problems.
Focus of the Course

▶ Mathematical Foundation of the algorithms.


▶ Scope and Limitations of Algorithms.
▶ Efficient implementation of algorithms in Matlab.
▶ Numerical experiments.
▶ No focus on hardware-related issues such as parallelization,
vectorization, etc.
Aim of the Course

▶ Knowledge of basic algorithms in Computational Sciences.


▶ Ability to analyze numerical algorithms and errors.
▶ Ability to perform proper error analysis.
▶ To be able to choose appropriate numerical algorithms for a
given problem.
▶ Efficient implementation of algorithms in Matlab.
Prerequisites

▶ MTL 100: Calculus.


▶ MTL 101: Linear Algebra and Differential Equations.
Revise Them.
Literature

▶ Uri Ascher & Chen Greif: A First Course in Numerical


Methods. SIAM, 2011. See
http://www.siam.org/books/cs07/
▶ Richard L. Burden, J. Douglas Faires: Numerical Analysis (9th
Ed.), 2011.
▶ C. Moler: Numerical Computing with Matlab. SIAM 2004.
See http://www.mathworks.in/moler/.
▶ Numerical Mathematics (2nd Ed.), Alfio Quarteroni, Riccardo
Sacco, Fausto Saleri, Springer, 2007.
▶ Elementary Numerical Analysis: An Algorithmic Approach,
Samuel D. Conte, Carl de Boor.
Organisation

▶ Dr. Harish Kumar and Dr. Kamana Porwal (coordinator)


▶ Email: [email protected], [email protected]
▶ Website:
https://sites.google.com/view/kporwal/teaching/mtl107
▶ All course-related announcements will be made on the course
webpage, so visit it regularly.
Evaluation Plan

1. Minor 30 %
2. Major 70 %
MATLAB

▶ We will use Matlab for implementation of algorithms.


▶ Several Matlab resources will be made available on the course
website.
▶ You can use the online version of Matlab if your internet speed
is decent: https://matlab.mathworks.com/. You should
register using your IIT Delhi id.
▶ It is fairly easy to code in Matlab, so learn it.
▶ It is expected that you will be able to implement the
algorithms in Matlab.
▶ If you want to code in some other language, then you are
welcome. However, we will not be able to provide any support
in this regard.
Calculus Review

▶ Limit
▶ Continuity
▶ Differentiability
▶ Riemann Integrability
▶ ..... and related basic results about the real valued functions.
Calculus Review

Theorem (Rolle’s Theorem)


Assume f ∈ C [a, b] and f is differentiable on (a, b). If
f (a) = f (b), then there exists c ∈ (a, b) s.t. f ′(c) = 0.
Calculus Review

Theorem (Mean Value Theorem)


Assume f ∈ C [a, b] and f is differentiable on (a, b). Then there
exists c ∈ (a, b) s.t.

f ′(c) = (f (b) − f (a)) / (b − a).
Calculus Review

Theorem (Extreme Value Theorem)


If f ∈ C [a, b], then there exist c1, c2 ∈ [a, b] such that
f (c1) ≤ f (x) ≤ f (c2) for all x ∈ [a, b]. Furthermore, if f is
differentiable on (a, b), then c1 and c2 occur either at the
endpoints of [a, b] or at zeros of f ′.
Calculus Review

Theorem (Intermediate Value Theorem)


If f ∈ C [a, b] and C is any value between f (a) and f (b), then
there exists c ∈ (a, b) such that f (c) = C.
Calculus Review

Theorem (Weighted Mean Value Theorem)


Assume that f ∈ C [a, b], g is Riemann integrable on [a, b], and g
does not change sign on [a, b]. Then there exists c ∈ (a, b) such
that

∫_a^b f (x) g (x) dx = f (c) ∫_a^b g (x) dx
Calculus Review

Theorem (Taylor’s Theorem)


Assume that f ∈ C^n [a, b], f^(n+1) exists on [a, b], and x0 ∈ [a, b].
Then for every x ∈ [a, b], there exists a number x1 (x) between x0
and x such that

f (x) = f (x0) + f ′(x0)(x − x0) + · · · + (f^(n)(x0)/n!)(x − x0)^n + Rn,

where Rn = (f^(n+1)(x1)/(n + 1)!)(x − x0)^(n+1).
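As a quick numerical illustration (an addition here, not part of the original slides), the sketch below compares sin(x) near x0 = 0.5 with its degree-3 Taylor polynomial and checks that the observed truncation error stays within the remainder bound |R3| ≤ |x − x0|^4/4!, which holds because every derivative of sin is bounded by 1. The variable names are ours.

% Taylor's theorem illustration for f(x) = sin(x) about x0 = 0.5.
% Degree-3 Taylor polynomial; all derivatives of sin are bounded by 1,
% so |R3| <= |x - x0|^4 / 4!.
x0 = 0.5;
h  = linspace(-0.5, 0.5, 11);                        % offsets x - x0
p3 = sin(x0) + cos(x0)*h - sin(x0)*h.^2/2 - cos(x0)*h.^3/6;
err   = abs(sin(x0 + h) - p3);                       % actual truncation error
bound = abs(h).^4 / factorial(4);                    % remainder bound from Taylor's theorem
disp([h' err' bound'])                               % error never exceeds the bound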
Lecture 2
Introduction to Numerical Methods

Course Website
https://sites.google.com/view/kporwal/teaching/mtl107
Numerical Methods and Errors

▶ The presence of error is inevitable.


▶ The results are only approximations.
▶ The goal is to ensure that the error stays below the tolerance level.
▶ Example: How many iterations does the following program perform? (See the sketch after the listing.)

x = 0; h = 1/10;
while x < 1
    x = x + h;
    % do something depending on x
end
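A small sketch (added here, not from the slides) answers the question empirically: because h = 1/10 is not exactly representable in binary floating point, the accumulated sum after ten steps falls just short of 1, so on standard IEEE double arithmetic the loop typically runs 11 times rather than 10.

% Count the iterations of the loop above and inspect the final x.
x = 0; h = 1/10; count = 0;
while x < 1
    x = x + h;
    count = count + 1;
end
fprintf('iterations = %d, final x = %.17f\n', count, x);
% On IEEE double arithmetic this typically reports 11 iterations.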
Measure of Errors

▶ Absolute error: if v is an approximation of u, then the absolute
error is

∥u − v∥

▶ Relative error:

∥u − v∥ / ∥u∥

▶ Often a combination of both is used.
Example

▶ The Stirling approximation

v = Sn = √(2πn) (n/e)^n

is used to approximate u = n!.
▶ Use Example1_1.m from
http://www.siam.org/books/cs07/programs.zip
▶ Compare absolute and relative errors for n = 1, · · · , 10.
Example: The Stirling Approximation

% Example 1.1 : Stirling approximation
format long;
e = exp(1);
n = 1:20;                          % array
Sn = sqrt(2*pi*n).*((n/e).^n);     % the Stirling approximation
fact_n = factorial(n);

abs_err = abs(Sn - fact_n);        % absolute error
rel_err = abs_err./fact_n;         % relative error

format short g
[n; fact_n; Sn; abs_err; rel_err]' % print out values
Types of errors

▶ Modelling errors: Wrong assumptions, wrong simplifications,


wrong input data.
▶ Approximation error: Wrong discretization, divergent method,
unstable method.
▶ Roundoff errors: Due to the computer implementation of
algorithms, finite memory.
Discretization Error

▶ Let f be a smooth function; then by Taylor's theorem:

f (x0 + h) = f (x0) + h f ′(x0) + (h^2/2) f ′′(ξ),   x0 < ξ < x0 + h.

▶ Rearranging:

f ′(x0) = (f (x0 + h) − f (x0))/h − (h/2) f ′′(ξ)

▶ The discretization error is

| f ′(x0) − (f (x0 + h) − f (x0))/h | = (h/2) |f ′′(ξ)| ≈ (h/2) |f ′′(x0)| = O(h).
Discretization Error

▶ Let f (x) = sin(x) and approximate its derivative at x0 = 1.2
(exact value: cos(1.2) = 0.362357754476674).
▶ Error of the forward difference:
h        Absolute Error
0.1      4.71667 × 10^−2
0.01     4.666196 × 10^−3
0.001    4.660799 × 10^−4
10^−4    4.660256 × 10^−5
10^−7    4.619326 × 10^−8
▶ Note that f ′′ (x0 )/2 ≈ −0.466.
What Error is this?

▶ Error for very small h


h        Absolute Error
10^−8    4.36105 × 10^−10
10^−9    5.594726 × 10^−8
10^−10   1.669696 × 10^−7
10^−11   7.938531 × 10^−6
10^−13   6.851746 × 10^−4
10^−15   8.173146 × 10^−2
10^−16   3.623578 × 10^−1
Results

Figure 3: Error with decreasing h (log-log plot of the absolute error versus h)
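Both tables and Figure 3 can be reproduced with a few lines of Matlab. The sketch below (an illustrative addition, not taken from the slides) computes the forward-difference error for a range of step sizes and shows the two regimes: discretization error ≈ (h/2)|f ′′(x0)| for moderate h, and roundoff error that grows as h shrinks further.

% Forward-difference error for f(x) = sin(x) at x0 = 1.2.
x0 = 1.2;
exact = cos(x0);                        % exact derivative
h = 10.^(-(1:16));                      % step sizes 1e-1 ... 1e-16
approx = (sin(x0 + h) - sin(x0))./h;    % forward difference
err = abs(approx - exact);              % absolute error
loglog(h, err, 'o-'); xlabel('h'); ylabel('Absolute error');
disp([h' err'])                         % compare with the tables above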


Properties of Algorithms

▶ Accuracy: How accurate is the algorithm for nice inputs.


▶ Efficiency: How fast do we get the result; number of floating
point operations (flops), amount of memory needed.
▶ Robustness and Stability: The algorithm should produce
results under reasonable circumstances, and the errors should be
acceptable.
Complexity

▶ Complexity or computational cost is the number of elementary
operations in an algorithm.

Operation        Description                                #mul/div   #add/sub    Complexity
inner product    (x ∈ R^n, y ∈ R^n) → x^T y                 n          n−1         O(n)
tensor product   (x ∈ R^n, y ∈ R^m) → x y^T                 nm         0           O(mn)
matrix product   (A ∈ R^(m×n), B ∈ R^(n×k)) → AB            mnk        mn(k−1)     O(mnk)
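As a rough empirical check (an addition, not from the slides), the sketch below times an inner product against a square matrix product for increasing n. Doubling n should roughly double the inner-product time and increase the matrix-product time by about a factor of eight, for sizes large enough that overhead is negligible.

% Empirical check of complexity: inner product O(n) vs matrix product O(n^3).
for n = [500 1000 2000]
    x = randn(n,1); y = randn(n,1);
    A = randn(n);   B = randn(n);
    t1 = timeit(@() x'*y);          % ~ 2n flops
    t2 = timeit(@() A*B);           % ~ 2n^3 flops
    fprintf('n = %4d: inner product %.2e s, matrix product %.2e s\n', n, t1, t2);
end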
Big-O

▶ f = O(g) if

lim sup_{n→∞} |f (n)| / |g (n)| < ∞

▶ For errors: e = O(h^p) if

lim sup_{h→0} |e(h)| / h^p < ∞
Small-o

▶ f = o(g) if

lim_{n→∞} |f (n)| / |g (n)| = 0

▶ For errors: e = o(h^p) if

lim_{h→0} |e(h)| / h^p = 0
Θ notation
The Θ notation signifies a stronger relation than O: a function
ϕ(h) for small h is Θ(ψ(h)) if ϕ is asymptotically bounded both
above and below by ψ.

More precisely: f = Θ(g) if

0 < lim inf_{n→∞} |f (n)| / |g (n)| ≤ lim sup_{n→∞} |f (n)| / |g (n)| < ∞

Difference: O(h^2) means at least quadratic, whereas Θ(h^2) means
exactly quadratic convergence.
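A quick way to see the difference in practice (this example is an addition, not from the slides): for the forward difference of sin at x0 = 1.2, the error is Θ(h), so the ratio error/h should approach the nonzero constant |f ′′(x0)|/2 = sin(1.2)/2 ≈ 0.466 as h decreases, before roundoff takes over.

% Ratio test: the forward-difference error is Theta(h), not just O(h).
x0 = 1.2; exact = cos(x0);
h = 10.^(-(1:6));
err = abs((sin(x0 + h) - sin(x0))./h - exact);
disp([h' (err./h)'])     % ratios approach |f''(x0)|/2 = sin(1.2)/2 ~ 0.466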
Problem Conditioning

▶ A problem is ill-conditioned if a small perturbation in the


data produces a large difference in the result.
▶ An algorithm is stable if its output does not change much
under small changes in the input.
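A small illustration (added here, not from the slides) of an ill-conditioned problem: solving a 2×2 linear system whose matrix is nearly singular. A tiny perturbation of the right-hand side changes the solution dramatically, regardless of how the system is solved. The particular matrix and data are chosen only for this example.

% Ill-conditioned problem: nearly singular 2x2 linear system.
A  = [1 1; 1 1.0001];          % nearly singular, cond(A) ~ 4e4
b  = [2; 2.0001];              % exact solution is x = [1; 1]
db = [0; 1e-4];                % tiny perturbation of the data
x1 = A\b;
x2 = A\(b + db);
fprintf('cond(A) = %.1e\n', cond(A));
disp([x1 x2])                  % the two solutions differ substantially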
Stable and Unstable Algorithms

▶ Stable Algorithm

An instance of a stable algorithm for computing y = g (x): the
output ȳ is the exact result, ȳ = g (x̄), for a slightly perturbed
input x̄ which is close to the input x. Thus, if the algorithm is
stable and the problem is well-conditioned, then the computed
result ȳ is close to the exact y.

Figure 4: Stable Algorithm
Stable and Unstable Algorithms

▶ Unstable Algorithm

Ill-conditioned problem of computing output values y from input
values x by y = g (x): when x is slightly perturbed to x̄, the result
ȳ = g (x̄) is far from y.

Figure 5: Unstable Algorithm
Example of Unstable Algorithm
Evaluate the integrals

yn = ∫_0^1 x^n/(x + 10) dx,   for n = 0, 1, 2, · · · , 30.

Note that

yn + 10 yn−1 = ∫_0^1 (x^n + 10 x^(n−1))/(x + 10) dx = ∫_0^1 x^(n−1) dx = 1/n

and

y0 = ∫_0^1 1/(x + 10) dx = log(11) − log(10)

▶ Evaluate y0 = log(11) − log(10)


▶ Calculate yn = 1/n − 10 yn−1
See Example1.6.m
Example of Unstable Algorithm

% Example 1.6 : Evaluate integral recursive formula
y(1) = log(11) - log(10);                    % this is y_0
for n = 1:30
    y(n+1) = 1/n - 10*y(n);
end
% For comparison, use numerical quadrature
for n = 1:31
    z(n) = quad(@(x) fun1_6(x, n-1), 0, 1, 1.e-10);
end
format long g
fprintf('recursion result   quadrature result   abs(difference)\n')
for n = 1:31
    fprintf('%e   %e   %e\n', y(n), z(n), abs(y(n)-z(n)))
end
Error in Stable/Unstable Algorithms

▶ In general, if En is the error after n elementary operations, it


should behave like

En ≈ c0 n E0

▶ If the error grows exponentially,

En ≈ c1^n E0,

then the algorithm is unstable.
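For the recursion of Example 1.6 this growth can be observed directly. The sketch below (an addition, not from the slides) perturbs y0 by a small amount and tracks how the perturbation is amplified by a factor of 10 at every step, i.e. like c1^n with c1 = 10.

% Error amplification in the unstable recursion y(n) = 1/n - 10*y(n-1).
y  = log(11) - log(10);        % exact y_0
yp = y + 1e-15;                % slightly perturbed y_0
for n = 1:20
    y  = 1/n - 10*y;
    yp = 1/n - 10*yp;
    fprintf('n = %2d   difference = %.3e\n', n, abs(yp - y));
end
% The difference grows like 10^n: roughly 1e-15 * 10^n.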


Lecture 3 & 4
Rounding Errors

Course Website
https://sites.google.com/view/kporwal/teaching/mtl107
In this Section
▶ Understand how numbers are stored in a computer.
▶ How roundoff errors can accumulate.
▶ Some recipes to avoid them.
Introduction I

One of the important tasks of numerical mathematics is the


determination of the accuracy of results of some computation.
There are three types of errors that limit accuracy:
1. Errors in the mathematical model of the problem to be solved.
Simplified models are easier to solve (shape of objects,
’unimportant’ chemical reactants, linearisation).
2. Discretization or approximation errors depend on the chosen
algorithm or the type of discretization. They may occur even
when computing without rounding errors, e.g., when
▶ an infinite sum is approximated by a finite number of terms
(truncation error),
▶ a function is approximated by, e.g., a piecewise linear function,
▶ a derivative is approximated by finite differences,
▶ only a finite number of iterations is performed.
Introduction II

3. Rounding errors occur if a real number (typically an


intermediate result of some computation) is rounded to the
nearest machine number. The propagation of rounding
errors from one floating point operation to the next is the
most frequent source of numerical instabilities. Since
computer memory is finite, practically no real number can be
represented exactly in a computer.
We discuss floating point numbers as a representation of real
numbers.
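As a first taste (an illustrative addition, not from the slides): even the decimal number 0.1 has no exact binary floating point representation, and the machine epsilon eps gives the spacing of representable doubles around 1.

% Simple demonstration of finite floating point representation.
fprintf('%.20f\n', 0.1);          % stored value of 0.1 is not exactly 0.1
disp(0.1 + 0.2 == 0.3);           % false: both sides carry rounding errors
disp(eps)                         % spacing of doubles around 1 (about 2.2e-16)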
Computation of Pi
Motivating example: quadrature of a circle.
Let's try to compute π, which is the area of a circle with radius
r = 1. We approximate π by the area of an inscribed regular
polygon:

αn := 2π/n,   Fn = cos(αn/2) sin(αn/2),   An = n Fn → π as n → ∞

Figure 6: Area of the Circle

[See Gander, Gander, & Kwok: Scientific Computing. Springer.]
Computation of Pi

▶ Define αn = 2π/n; then the area of the triangle is
Fn = cos(αn/2) sin(αn/2).
▶ The area An covered by rotating this triangle n times is
n cos(αn/2) sin(αn/2).
▶ An → π as n → ∞.
▶ An = (n/2) sin(2π/n) = (n/2) sin(αn)
▶ sin(α2n) = sin(αn/2) = √((1 − cos αn)/2) = √((1 − √(1 − sin^2 αn))/2).
▶ sin(α6) = √3/2
Computation of Pi

s = sqrt(3)/2; A = 3*s; n = 6;       % initialization
z = [A-pi n A s];                    % store the results
while s > 1e-10                      % terminate if s = sin(alpha) small
    s = sqrt((1-sqrt(1-s*s))/2);     % new sin(alpha/2) value
    n = 2*n; A = n/2*s;              % A = new polygonal area
    z = [z; A-pi n A s];
end
for i = 1:length(z)
    fprintf('%10d %20.15f %20.15f %20.15f\n', z(i,2), z(i,3), z(i,1), z(i,4))
end
