
Iterative Methods

-Used for Large Numbers of Equations (N>2000)


-Solve Ax=b Approximately in K Iterations
-Takes K O(N^2) Flops to Solve
-Less Round-Off Error Than Direct Methods
-Round Off Error is Corrected at Each Step
Iterate Until:
Norm < ε (tolerance)
2 Common Norms:

L2 = sqrt( Σ xi^2 ) (Euclidean Norm)
L(infinity) = Maximum Row Sum (Infinity Norm)

Residual Vector < ε

R = A*x(N+1) - b

Convergence is Guaranteed
When lim (N → ∞) Norm(T)^N = 0

The iteration matrix T is convergent when all its eigenvalues have a magnitude less than 1.

Jacobi Iteration
Gauss-Seidel Iteration
-Converges Faster Than Jacobi
-Utilizes the updated values of the solution in the iteration

Successive Over-Relaxation (SOR)


-Utilizes a Tuning Parameter ω

x(N) = x(N-1) + ω*[x̃(N) - x(N-1)], where x̃(N) is the unrelaxed (Gauss-Seidel) update


ω = 1 (Jacobi or Gauss-Seidel)
1 < ω < 2: Solution is Accelerated
0 < ω < 1: Convergence is Forced if Possible
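
A minimal Python sketch of the Gauss-Seidel/SOR sweep described above (ω = 1 gives plain Gauss-Seidel); the test matrix, tolerance, and iteration cap are illustrative, not from the notes:

import numpy as np

def sor(A, b, omega=1.0, tol=1e-8, max_iter=10_000):
    """Gauss-Seidel sweep with over-relaxation; omega=1 is plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel uses already-updated components of x
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x_gs = (b[i] - sigma) / A[i, i]
            # SOR blends the old value with the unrelaxed update
            x[i] = x_old[i] + omega * (x_gs - x_old[i])
        # stop when the infinity norm of the residual R = A x - b is small
        if np.linalg.norm(A @ x - b, np.inf) < tol:
            return x
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant test system
b = np.array([1.0, 2.0])
print(sor(A, b, omega=1.2))              # over-relaxed sweep
print(np.linalg.solve(A, b))             # direct solution for comparison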

Condition Number (K)


-Tells How Nearly Dependent the Equations Are (How Sensitive the Solution is to Small Changes in A and b)

K(A) = Norm(A) * Norm(A^-1)

Where Norm = L(Infinity)
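
For illustration, K(A) with the infinity norm can be computed directly in NumPy; the nearly-dependent 2x2 system below is made up:

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])            # nearly dependent rows -> large K
K = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(K)                                 # roughly 4e4: the system is ill-conditioned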
Solution of Non-Linear Equations

Newton-Raphson w/ Jacobian


X vector: vector of variables (x, y)
F vector: vector of functions [f1(x,y), f2(x,y)]
z vector: vector of changes in the solution [x(N+1) - x(N), y(N+1) - y(N)]
Jacobian Matrix (J): matrix of partials (dfi/dxj)

Solve:
J[X(N)] * z = -F[X(N)]
Update:
X(N+1) = X(N) + z
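
A short Python sketch of this solve/update loop for a 2x2 system; the functions f1 = x^2 + y^2 - 4 and f2 = x*y - 1 and the initial guess are made up for illustration:

import numpy as np

def F(X):
    x, y = X
    return np.array([x**2 + y**2 - 4.0,   # f1(x, y)
                     x * y - 1.0])        # f2(x, y)

def J(X):
    x, y = X
    return np.array([[2*x, 2*y],          # df1/dx, df1/dy
                     [y,   x  ]])         # df2/dx, df2/dy

X = np.array([2.0, 0.5])                  # initial guess
for _ in range(20):
    z = np.linalg.solve(J(X), -F(X))      # solve J[X(N)] z = -F[X(N)]
    X = X + z                             # X(N+1) = X(N) + z
    if np.linalg.norm(z, np.inf) < 1e-12:
        break
print(X, F(X))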

Steepest Descent
-Converges Linearly

x(N+1) = x(N) - α * [2 * Jtrans(x(N)) * F(x(N))]


Where
α = step parameter > 0

Levenberg-Marquardt

x(N+1) = x(N) - [Jtrans(x(N)) * J(x(N)) + λ*I]^-1 * Jtrans(x(N)) * F(x(N))


Where
λ = damping parameter
I = Identity Matrix (= A^-1 * A)
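
A sketch of one common form of the Levenberg-Marquardt step, assuming the update x(N+1) = x(N) - [Jtrans*J + λ*I]^-1 * Jtrans*F is what is intended above; the test functions and the value of λ are illustrative:

import numpy as np

def F(X):
    x, y = X
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(X):
    x, y = X
    return np.array([[2*x, 2*y], [y, x]])

lam = 1e-3                                  # damping parameter
X = np.array([2.0, 0.5])
for _ in range(50):
    Jx, Fx = J(X), F(X)
    # damped normal-equation step: (J^T J + lam*I) dx = -J^T F
    dx = np.linalg.solve(Jx.T @ Jx + lam * np.eye(2), -Jx.T @ Fx)
    X = X + dx
    if np.linalg.norm(dx, np.inf) < 1e-12:
        break
print(X)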
Interpolation and Least Squares
-Goal: Retrieve the Underlying Function that Describes Given Data

Interpolation
-Exact Data
-Passes a Function through Every Data Point
-Can be done Globally and Locally
-Requires Collocation of the Data

Lagrange Interpolation (Global)


LN,K(x)
N = Order
K = Index of the data point at which LN,K(x) equals 1 (it is 0 at every other data point)

First Order (Linear)


L1,0(x) = (x-x1)/(x0-x1)
L1,1(x) = (x-x0)/(x1-x0)

P1(x) = L1,0(x) * F(x0) + L1,1(x) * F(x1)

Second Order (Quadratic)


L2,0(x) = [(x-x1)*(x-x2)] / [(x0-x1)*(x0-x2)]
L2,1(x) = [(x-x0)*(x-x2)] / [(x1-x0)*(x1-x2)]
L2,2(x) = [(x-x0)*(x-x1)] / [(x2-x0)*(x2-x1)]

P2(x) = L2,0(x) * F(x0) + L2,1(x) * F(x1) + L2,2(x) * F(x2)


Generalized

LN,K(x) = Π (j=0, j≠K, to N) [ (x - xj) / (xK - xj) ]

PN(x) = Σ (K=0 to N) LN,K(x) * f(xK)
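
The generalized formula translates directly into Python; the sample data points below are made up:

import numpy as np

def lagrange_eval(x, xk, fk):
    """Evaluate P_N(x) = sum over K of L_{N,K}(x) * f(x_K) from the data (xk, fk)."""
    total = 0.0
    for K in range(len(xk)):
        # L_{N,K}(x): product over j != K of (x - xj)/(xK - xj)
        L = 1.0
        for j in range(len(xk)):
            if j != K:
                L *= (x - xk[j]) / (xk[K] - xk[j])
        total += L * fk[K]
    return total

xk = np.array([0.0, 1.0, 2.0])
fk = np.array([1.0, 3.0, 2.0])             # illustrative data
print(lagrange_eval(1.5, xk, fk))          # interpolated value between x1 and x2
print(lagrange_eval(1.0, xk, fk))          # reproduces f(x1) exactly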

Linear Spline
-An Assembly of Piecewise Linear Interpolations
Cubic Spline
-An Assembly of Piecewise Cubic Functions that Interpolate Data
-2 Requirements
1. Interpolates Data
2. Maintains Continuity of 1st and 2nd Derivatives

Sj(x) = aj + bj(x-xj) + cj(x-xj)^2 + dj(x-xj)^3


Sj'(x) = bj + 2cj(x-xj) + 3dj(x-xj)^2
Sj''(x) = 2cj + 6dj(x-xj)
Note:
Sj(xj) = f(xj) = aj
Sj'(xj) = bj
Sj''(xj) = 2cj

Sj(xj+1) = Sj+1(xj+1)
Sj'(xj+1) = Sj+1'(xj+1)
Sj''(xj+1) = Sj+1''(xj+1)

Boundary Conditions
Free (Natural)
S'' = 0 at Both Ends
Slope Imposed at Ends (Clamped)
S0'(x0) = f'(x0)
Sn'(xn) = f'(xn)
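
Rather than assembling the coefficient equations by hand, a sketch using SciPy's CubicSpline shows both boundary conditions listed above; the data points and imposed end slopes are illustrative:

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 1.0])                  # illustrative data

s_free    = CubicSpline(x, y, bc_type='natural')     # "free": S'' = 0 at the ends
s_clamped = CubicSpline(x, y, bc_type=((1, 0.5),     # slope imposed: S'(x0) = 0.5
                                       (1, -1.0)))   #                S'(xn) = -1.0

print(s_free(1.5), s_clamped(1.5))
print(s_free(x))                                     # interpolates the data exactly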

Least Squares
-Noisy Data
-Fit a function
y(x) = a0 + a1x + a2x^2 + a3x^3 + ...

1. Find the Sum of the Squares of the Errors

SSE(a) = Σ (i=0 to N) [ yi - y(xi) ]^2

2. Minimize SSE with Respect to the aj's:

d SSE(a) / daj = 0

3a. Linear
Solve:
C * a = b
Where
Cij = Σ (k=0 to N) φi(xk) * φj(xk)
bj = Σ (k=0 to N) yk * φj(xk)
(φj = basis functions, e.g. 1, x, x^2, ...)

3b. Non-Linear
Σ (i=0 to N) [ yi - y(xi) ] * d y(xi)/daj = 0

Solve by NR or LevM

R^2 = (So - SSE) / So

So = Σ (i=0 to N) (yi - ym)^2

ym = (1/N) * Σ (i=0 to N) yi
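
A sketch of steps 1-3a for a quadratic fit with the monomial basis φj(x) = x^j, including the R^2 check; the noisy data are synthetic:

import numpy as np

rng = np.random.default_rng(0)
xk = np.linspace(0.0, 5.0, 30)
yk = 1.0 + 2.0 * xk - 0.5 * xk**2 + rng.normal(0.0, 0.2, xk.size)  # noisy data

# basis functions phi_j(x) = x^j for j = 0, 1, 2
Phi = np.vander(xk, 3, increasing=True)        # columns: 1, x, x^2

C = Phi.T @ Phi                                # C_ij = sum_k phi_i(x_k) phi_j(x_k)
b = Phi.T @ yk                                 # b_j  = sum_k y_k   phi_j(x_k)
a = np.linalg.solve(C, b)                      # normal equations C a = b

y_fit = Phi @ a
SSE = np.sum((yk - y_fit) ** 2)
So  = np.sum((yk - yk.mean()) ** 2)
print(a, (So - SSE) / So)                      # coefficients and R^2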
Numerical Calculus & ODEs

Numerical Derivatives
Forward Difference
First Order Accurate
F'(Xo) = [ F(Xo + Δx) - F(Xo) ] / Δx + TE O(Δx)
Second Order Accurate
F'(Xo) = [ -3F(Xo) + 4F(Xo + Δx) - F(Xo + 2Δx) ] / (2Δx) + TE O(Δx^2)

Backward Difference
First Order Accurate
F'(Xo) = [ F(Xo) - F(Xo - Δx) ] / Δx + TE O(Δx)
Second Order Accurate
F'(Xo) = [ 3F(Xo) - 4F(Xo - Δx) + F(Xo - 2Δx) ] / (2Δx) + TE O(Δx^2)

Central Difference
Second Order Accurate
F'(Xo) = [ F(Xo + Δx) - F(Xo - Δx) ] / (2Δx) + TE O(Δx^2)
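
A quick Python check of the forward and central formulas against a known derivative (f = sin, with arbitrary step sizes), showing the first- and second-order error behaviour:

import numpy as np

def forward_1(f, x0, dx):
    # first-order accurate forward difference
    return (f(x0 + dx) - f(x0)) / dx

def forward_2(f, x0, dx):
    # second-order accurate forward difference
    return (-3*f(x0) + 4*f(x0 + dx) - f(x0 + 2*dx)) / (2*dx)

def central_2(f, x0, dx):
    # second-order accurate central difference
    return (f(x0 + dx) - f(x0 - dx)) / (2*dx)

x0, exact = 1.0, np.cos(1.0)
for dx in (0.1, 0.05):
    print(dx,
          abs(forward_1(np.sin, x0, dx) - exact),   # error shrinks roughly like dx
          abs(forward_2(np.sin, x0, dx) - exact),   # error shrinks roughly like dx^2
          abs(central_2(np.sin, x0, dx) - exact))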

Numerical Integration: Quadratures

∫ (a to b) f(x) dx = Σ (i=1 to N) ωi * f(xi) + E

Where
ωi = Weights
xi = Abscissae
E = Error

Closed Newton-Cotes Quadratures


-Require Function Evaluation at Endpoints
2 Point (Trapezoidal Rule)

∫ (a to b) f(x) dx = (h/2) * [ f(x0) + f(x1) ] - (h^3/12) * f''(ξ)

Where
h = x1 - x0
a < ξ < b

3 Point (Simpson's Rule)

∫ (a to b) f(x) dx = (h/3) * [ f(x0) + 4f(x1) + f(x2) ] - (h^5/90) * f^(IV)(ξ)

Where
h = (b - a)/2
a < ξ < b
Open Newton-Cotes Quadratures
-DO NOT Require Function Evaluation at the Endpoints

Midpoint Rule

∫ (a to b) f(x) dx = 2h * f(x0) + (h^3/3) * f''(ξ)

Where
h = (b - a)/2
x0 = (a + b)/2
a < ξ < b

Composite Quadratures
-Apply Basic Quadrature Rules in a Piecewise Manner

Compound Trapezoidal Rule

∫ (a to b) f(x) dx = (h/2) * [ f(a) + 2 Σ (j=1 to N-1) f(xj) + f(b) ] - [ (b - a) * h^2 / 12 ] * f''(ξ)

Where
xj = a + j*h
h = (b - a)/N
N = # of Trapezoids
a < ξ < b

Compound Simpson Rule

∫ (a to b) f(x) dx = (h/3) * [ f(a) + 2 Σ (j=1 to N/2-1) f(x2j) + 4 Σ (j=1 to N/2) f(x2j-1) + f(b) ] - [ (b - a) * h^4 / 180 ] * f^(IV)(ξ)

Where
xj = a + j*h
h = (b - a)/N
N = # of Subintervals (Even)
a < ξ < b
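
Both compound rules as a Python sketch; the test integral ∫ (0 to π) sin x dx = 2 and the choice N = 16 are arbitrary:

import numpy as np

def compound_trapezoid(f, a, b, N):
    h = (b - a) / N
    xj = a + h * np.arange(1, N)                       # interior points x_j = a + j*h
    return (h / 2) * (f(a) + 2 * np.sum(f(xj)) + f(b))

def compound_simpson(f, a, b, N):
    assert N % 2 == 0, "Simpson needs an even number of subintervals"
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    return (h / 3) * (f(a) + 2 * np.sum(f(x[2:-1:2]))   # even interior points
                           + 4 * np.sum(f(x[1::2]))     # odd points
                           + f(b))

print(compound_trapezoid(np.sin, 0.0, np.pi, 16))       # ~1.9936
print(compound_simpson(np.sin, 0.0, np.pi, 16))         # close to 2, accurate to ~6 digits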

Practical Error Estimate


Trapezoidal Rule
EN ≈ (4/3) * |T2N - TN|

Simpson's Rule
EN ≈ (1/15) * |S2N - SN|

Romberg Integration
-For Any Quadrature Where the Error can be Written as:
(Trapezoid, Simpson, Midpoint)


E = Σ (j=1 to ∞) aj * h^(2j)

∫ (a to b) f(x) dx = TN + Σ (j=1 to ∞) aj * h^(2j)

Where
Tn = Nth Trapezoidal Rule
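
A sketch of Romberg extrapolation built on the trapezoidal rule, cancelling the aj*h^(2j) error terms one at a time; the test integral and the number of levels are arbitrary:

import numpy as np

def romberg(f, a, b, levels=5):
    R = np.zeros((levels, levels))
    for i in range(levels):
        N = 2 ** i                                    # 1, 2, 4, ... trapezoids
        h = (b - a) / N
        x = a + h * np.arange(N + 1)
        R[i, 0] = h * (0.5 * f(x[0]) + np.sum(f(x[1:-1])) + 0.5 * f(x[-1]))
        for j in range(1, i + 1):
            # Richardson extrapolation removes the h^(2j) error term
            R[i, j] = R[i, j-1] + (R[i, j-1] - R[i-1, j-1]) / (4**j - 1)
    return R[levels-1, levels-1]

print(romberg(np.sin, 0.0, np.pi))                    # ∫ sin dx over [0, π] = 2, to many digits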

2 pt Gauss Quadratures
-Can EXACTLY Integrate a Polynomial of Degree 2N-1

∫ (-1 to 1) f(x) dx = ω0 * f(x0) + ω1 * f(x1)

Where
ω0 = ω1 = 1
x0 = -√3/3
x1 = +√3/3
N = # of Points

Application to Any Limits of Integration

∫ (a to b) f(x) dx = [(b - a)/2] * Σ (j=1 to N) ωj * f( x(ξj) )

Where
x(ξ) = [(b - a)/2] * ξ + (b + a)/2
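
The 2-point rule with the x(ξ) mapping above, as a Python sketch; the cubic test integrand (which the rule should integrate exactly) is made up:

import numpy as np

def gauss2(f, a, b):
    xi = np.array([-np.sqrt(3)/3, np.sqrt(3)/3])     # abscissae on [-1, 1]
    w = np.array([1.0, 1.0])                         # weights
    x = (b - a)/2 * xi + (b + a)/2                   # map xi -> x in [a, b]
    return (b - a)/2 * np.sum(w * f(x))

print(gauss2(lambda x: x**3 + x, 0.0, 2.0))          # exact: 2^4/4 + 2^2/2 = 6
print(gauss2(np.sin, 0.0, np.pi))                    # approximate: ~1.94 vs exact 2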

Numerical Solution of ODEs

Euler Method

y(i+1) = y(i) + Δt * f(y(i), t(i)) + O(Δt^2)

Modified Euler

y(i+1) = y(i) + (Δt/2) * [ f(y(i), t(i)) + f(yp(i+1), t(i+1)) ]

yp(i+1) = y(i) + Δt * f(y(i), t(i))

Midpoint Rule

y(i+1) = y(i) + Δt * f( yp(i+1/2), t(i) + Δt/2 )

yp(i+1/2) = y(i) + (Δt/2) * f(y(i), t(i))

Runge-Kutta (4th Order)

K1 = Δt * f(y(i), t(i))
K2 = Δt * f(y(i) + K1/2, t(i) + Δt/2)
K3 = Δt * f(y(i) + K2/2, t(i) + Δt/2)
K4 = Δt * f(y(i) + K3, t(i) + Δt)

y(i+1) = y(i) + (1/6) * (K1 + 2*K2 + 2*K3 + K4)
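
The four-stage scheme as a Python sketch applied to the test problem y' = -y, y(0) = 1 (exact solution e^-t); the problem and step size are illustrative:

import numpy as np

def rk4_step(f, y, t, dt):
    K1 = dt * f(y, t)
    K2 = dt * f(y + K1/2, t + dt/2)
    K3 = dt * f(y + K2/2, t + dt/2)
    K4 = dt * f(y + K3, t + dt)
    return y + (K1 + 2*K2 + 2*K3 + K4) / 6

f = lambda y, t: -y                      # test ODE: y' = -y
dt, y, t = 0.1, 1.0, 0.0
while t < 1.0 - 1e-12:
    y = rk4_step(f, y, t, dt)
    t += dt
print(y, np.exp(-1.0))                   # RK4 result vs exact e^(-1)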
