ENS 409

The document is a cheatsheet for numerical methods, covering polynomial evaluation using Horner's method, least squares optimization with the Gauss-Newton algorithm, and various interpolation techniques. It includes foundational theorems, error analysis, root-finding methods, optimization methods, and convergence concepts. Key formulas and methods such as Newton's method, the secant method, and gradient descent are also outlined.


Numerical Methods Exam Cheatsheet

Foundational Concepts

Theorems
• Intermediate Value Theorem: If f is continuous on [a, b] and f(a) · f(b) < 0, then ∃c ∈ (a, b) such that f(c) = 0
• Rolle’s Theorem: If f is continuous on [a, b], differentiable on (a, b), and f(a) = f(b), then ∃c ∈ (a, b) such that f′(c) = 0
• Mean Value Theorem: If f is continuous on [a, b] and differentiable on (a, b), then ∃c ∈ (a, b) such that f′(c) = (f(b) − f(a)) / (b − a)
Taylor Series

f(x) = f(a) + f′(a)(x − a) + (f′′(a)/2!)(x − a)^2 + (f′′′(a)/3!)(x − a)^3 + ··· + (f^(n)(a)/n!)(x − a)^n + R_n(x)

Remainder term: R_n(x) = (f^(n+1)(ξ)/(n+1)!)(x − a)^(n+1) for some ξ between a and x
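As a quick check of the truncated expansion and its remainder bound, here is a minimal Python sketch; the choice of f(x) = e^x, the expansion point a = 0, and the helper name taylor_exp are illustrative, not from the sheet:

```python
import math

def taylor_exp(x, n, a=0.0):
    """Degree-n Taylor polynomial of e^x about a (every derivative of e^x equals e^a)."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k) for k in range(n + 1))

x = 1.0  # illustrative evaluation point
for n in (2, 4, 8):
    approx = taylor_exp(x, n)
    # Remainder bound |R_n(x)| <= e^xi |x - a|^(n+1) / (n+1)! with xi between a and x;
    # here xi <= x, so e^x is a safe upper bound for the unknown e^xi.
    bound = math.exp(x) * abs(x) ** (n + 1) / math.factorial(n + 1)
    print(f"n={n}: p_n(1)={approx:.8f}  error={abs(math.exp(x) - approx):.2e}  bound={bound:.2e}")
```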
Error Analysis
• Absolute Error: |x − x̃|
• Relative Error: |x − x̃| / |x|
• Significant Digits: n where |x − x̃| / |x| < 5 × 10^(−n)
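A small sketch of the three measures, assuming an illustrative true value π and approximation 3.14:

```python
import math

x_true, x_approx = math.pi, 3.14            # illustrative values, not from the sheet

abs_err = abs(x_true - x_approx)            # absolute error |x - x~|
rel_err = abs_err / abs(x_true)             # relative error |x - x~| / |x|

# Largest n with rel_err < 5 * 10^(-n), i.e. the number of significant digits.
n_sig = 0
while n_sig < 15 and rel_err < 5 * 10 ** (-(n_sig + 1)):
    n_sig += 1

print(abs_err, rel_err, n_sig)              # ~1.6e-3, ~5.1e-4, 3 significant digits
```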
Root Finding Methods

Bisection Method
• Algorithm: If f(a) · f(b) < 0, compute c = (a + b)/2; replace a or b with c depending on the sign of f(c)
• Error Bound: |x − c| ≤ (b − a)/2^(n+1) after n iterations
• Convergence: Linear (Order 1)
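A minimal Python sketch of the bisection loop; the test function x^3 − x − 2 and the tolerance are illustrative choices:

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: assumes f is continuous and f(a)*f(b) < 0 (IVT guarantees a root)."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < tol:   # stop once the error bound (b-a)/2^(n+1) is below tol
            return c
        if f(a) * f(c) < 0:                  # root lies in [a, c]
            b = c
        else:                                # root lies in [c, b]
            a = c
    return (a + b) / 2

# Illustrative example: root of x^3 - x - 2 on [1, 2].
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))   # ~1.5213797
```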
Newton’s Method
• Formula: x_{n+1} = x_n − f(x_n)/f′(x_n)
• Error: |x_{n+1} − α| ≈ (|f′′(ξ)| / (2|f′(x_n)|)) · |x_n − α|^2
• Convergence: Quadratic (Order 2)
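A corresponding sketch of the Newton iteration, reusing the same illustrative equation with a hand-coded derivative:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n); quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:              # absolute-difference stopping criterion
            return x_new
        x = x_new
    return x

# Same illustrative equation x^3 - x - 2 = 0, starting from x0 = 1.5.
print(newton(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5))
```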
Secant Method
• Formula: x_{n+1} = x_n − f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))
• Convergence: Superlinear (Order ≈ 1.618)
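A sketch of the secant update; it needs two starting points but no derivative (same illustrative equation):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: replaces f'(x_n) in Newton's formula with a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                          # avoid division by zero on a flat step
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))   # converges superlinearly (order ~1.618)
```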
Fixed Point Iteration
• Formula: x_{n+1} = g(x_n), where g(x) = x has the same solution as f(x) = 0
• Convergence Condition: |g′(x)| < 1 in a neighborhood of the solution
• Error: |x_{n+1} − α| ≤ |g′(ξ)| · |x_n − α|
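A sketch of the fixed-point loop, using one illustrative rearrangement g(x) = (x + 2)^(1/3) of the same equation, chosen so that |g′(x)| < 1 near the root:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration x_{n+1} = g(x_n); converges when |g'(x)| < 1 near the solution."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative rearrangement of x^3 - x - 2 = 0 as x = (x + 2)^(1/3); |g'| ~ 0.14 near the root.
print(fixed_point(lambda x: (x + 2) ** (1 / 3), 1.5))
```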
Horner’s Method (Polynomial Evaluation)

For polynomial p(x) = a_n x^n + a_{n−1} x^(n−1) + ··· + a_1 x + a_0:
• Set b_n = a_n
• For i = n − 1, n − 2, ..., 0: b_i = a_i + b_{i+1} x
• Then p(x) = b_0
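A minimal sketch of the recurrence; the coefficient ordering (highest degree first) and the example polynomial are illustrative:

```python
def horner(coeffs, x):
    """Evaluate p(x) = a_n x^n + ... + a_0; coeffs = [a_n, a_{n-1}, ..., a_0]."""
    b = coeffs[0]                  # b_n = a_n
    for a in coeffs[1:]:           # b_i = a_i + b_{i+1} * x, ending with b_0 = p(x)
        b = a + b * x
    return b

# Illustrative polynomial p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3.
print(horner([2, -6, 2, -1], 3))   # 5
```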
Interpolation Techniques

Lagrange Interpolation

L_n(x) = Σ_{i=0}^{n} f(x_i) ℓ_i(x),  where ℓ_i(x) = Π_{j=0, j≠i}^{n} (x − x_j) / (x_i − x_j)
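A direct, unoptimized sketch of the Lagrange form; the sample nodes are illustrative:

```python
def lagrange_interp(xs, ys, x):
    """Evaluate L_n(x) = sum_i y_i * l_i(x) at a single point x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])   # basis polynomial l_i(x)
        total += ys[i] * li
    return total

# Illustrative data sampled from f(x) = x^2: three nodes reproduce the parabola exactly.
print(lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```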
Newton Interpolation

p_n(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1) + ··· + a_n(x − x_0) ··· (x − x_{n−1})

where a_i = f[x_0, x_1, ..., x_i] (divided differences)

Divided Differences
• First order: f[x_i, x_{i+1}] = (f(x_{i+1}) − f(x_i)) / (x_{i+1} − x_i)
• Higher order: f[x_i, ..., x_{i+k}] = (f[x_{i+1}, ..., x_{i+k}] − f[x_i, ..., x_{i+k−1}]) / (x_{i+k} − x_i)

Forward Newton Differences
∆^0 f_i = f_i
∆^1 f_i = f_{i+1} − f_i
∆^2 f_i = ∆^1 f_{i+1} − ∆^1 f_i = f_{i+2} − 2f_{i+1} + f_i

Backward Newton Differences
∇^0 f_i = f_i
∇^1 f_i = f_i − f_{i−1}
∇^2 f_i = ∇^1 f_i − ∇^1 f_{i−1} = f_i − 2f_{i−1} + f_{i−2}

Interpolation Error

f(x) − p_n(x) = (f^(n+1)(ξ) / (n+1)!) Π_{i=0}^{n} (x − x_i)  for some ξ ∈ [a, b]
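A sketch that builds the divided-difference coefficients for the Newton form above and evaluates it by nested multiplication; the data points are the same illustrative parabola samples:

```python
def divided_differences(xs, ys):
    """Return the coefficients a_i = f[x_0, ..., x_i] of the Newton interpolation form."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):                       # k-th order differences
        for i in range(n - 1, k - 1, -1):       # update in place from the bottom up
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_interp(xs, coef, x):
    """Evaluate p_n(x) = a_0 + a_1(x-x_0) + ... + a_n(x-x_0)...(x-x_{n-1}) by nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]       # same illustrative parabola data as above
print(newton_interp(xs, divided_differences(xs, ys), 1.5))   # 2.25
```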
Optimization Methods

Gradient Descent

x_{k+1} = x_k − α_k ∇f(x_k)

Step size α_k can be constant or adaptive

Newton’s Method for Optimization

x_{k+1} = x_k − [∇^2 f(x_k)]^(−1) ∇f(x_k)

Requires the Hessian matrix ∇^2 f(x_k)
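A bare-bones sketch of the gradient descent update above with a constant step size; the quadratic objective and the step size α = 0.1 are illustrative:

```python
def gradient_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Plain gradient descent with constant step size: x_{k+1} = x_k - alpha * grad(x_k)."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative objective f(x, y) = (x - 3)^2 + 2(y + 1)^2 with gradient (2(x-3), 4(y+1)).
grad = lambda v: (2 * (v[0] - 3), 4 * (v[1] + 1))
print(gradient_descent(grad, [0.0, 0.0]))   # approaches the minimizer (3, -1)
```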
Gauss-Newton Algorithm

For least squares problems: minimize Σ_i [r_i(x)]^2

x_{k+1} = x_k − (J_k^T J_k)^(−1) J_k^T r_k

J_k is the Jacobian of the residuals r

Conjugate Gradient Method

d_k = −∇f(x_k) + β_k d_{k−1}

β_k = (∇f(x_k)^T ∇f(x_k)) / (∇f(x_{k−1})^T ∇f(x_{k−1}))  (Fletcher–Reeves)

Levenberg-Marquardt Method

x_{k+1} = x_k − (J_k^T J_k + λ_k I)^(−1) J_k^T r_k

Damping parameter λ_k adjusted at each iteration

Quasi-Newton Methods (BFGS)

Approximates the Hessian with a rank-two update (two rank-one corrections):

B_{k+1} = B_k + (y_k y_k^T) / (y_k^T s_k) − (B_k s_k s_k^T B_k) / (s_k^T B_k s_k)

s_k = x_{k+1} − x_k
y_k = ∇f(x_{k+1}) − ∇f(x_k)
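Returning to the Gauss-Newton update above, a small sketch applied to an illustrative exponential fit; the model y = a·e^(bt), the synthetic data, and the NumPy usage are assumptions, not prescribed by the sheet:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton step x_{k+1} = x_k - (J^T J)^{-1} J^T r for least-squares problems."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J, J.T @ r)   # solve the normal equations for the step
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative problem: fit y = a * exp(b * t) to exact data generated with a = 2, b = 0.5.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

print(gauss_newton(residual, jacobian, [1.0, 0.0]))   # should recover [2.0, 0.5]
```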
Convergence Concepts

Order of Convergence

Definition: A sequence {x_n} converges to L with order p if

lim_{n→∞} |x_{n+1} − L| / |x_n − L|^p = C  (constant)

• Linear: p = 1, 0 < C < 1
• Quadratic: p = 2
• Cubic: p = 3

Convergence Criteria for Iterative Methods
• Absolute Error: |x_{n+1} − x_n| < ϵ
• Relative Error: |x_{n+1} − x_n| / |x_{n+1}| < ϵ
• Function Evaluation: |f(x_n)| < ϵ

Additional Key Concepts to Remember:
• Ill-conditioned Systems
• Curve Fitting
• Regression
• Muller’s Method
• Hermite Interpolation
• Momentum
• Adaptive Step Size
