Iterative Methods for Solving Linear Problems

This document discusses iterative methods for solving large linear problems. When problems become too large, direct methods such as the SVD become impractical, and popular alternatives use iterative methods to obtain approximate solutions. One such method is Kaczmarz's algorithm, which attacks the problem one row at a time to iteratively refine the solution estimate. Other methods discussed include ART, SIRT, conjugate gradient, and conjugate gradient least squares (CGLS), which turns the general problem into a symmetric positive definite system that can be solved with the conjugate gradient method. These iterative methods are useful for large problems such as image deblurring.


Iterative Methods for Solving Linear Problems

When problems become too large (too many data points, too many model parameters), the SVD and related approaches become impractical.

Very popular alternatives utilize "iterative" methods to obtain approximate solutions.

In many cases for large problems, the G matrix will be sparse (many, many zero elements), so strategies that take advantage of this characteristic are important.
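As a quick illustration of why sparsity matters, here is a small MATLAB sketch; the sizes and density are made up for illustration:

```matlab
% Illustrative only: a large sparse G stored and applied efficiently.
n = 10000;
G = sprandn(n, n, 1e-4);   % random sparse G, roughly 0.01% nonzero
m = randn(n, 1);
d = G*m;                   % matrix-vector product costs O(nnz(G)), not O(n^2)
whos G m                   % sparse storage is proportional to nnz(G)
```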
Simple Example - Kaczmarz's algorithm

A class of techniques that attack the problem one row of G at a time.

In 2D, each row of G can be thought of as defining a line given by Gi,. m = di. In 3D it defines a plane, and above that a "hyperplane."

Kaczmarz's algorithm starts with an initial guess (e.g., m0 = 0), and then steps through each row of G, moving to the point on the Gi,. m = di hyperplane closest to the current estimate of m. Amazingly, if G m = d has a unique solution, this simple approach will converge!
ALGORITHM: starting from m(0), cycle repeatedly through the rows of G, projecting the current estimate onto each row's hyperplane in turn.

FORMULA: m(k+1) = m(k) + ((di - Gi,. m(k)) / (Gi,. Gi,.T)) Gi,.T
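A minimal MATLAB sketch of this update, assuming a dense G and a fixed number of sweeps (the function name and arguments are illustrative, not from the slides):

```matlab
function m = kaczmarz(G, d, nsweeps)
% Kaczmarz's algorithm: sweep through the rows of G, projecting the
% current model onto each row's hyperplane G(i,:)*m = d(i).
[nrows, ncols] = size(G);
m = zeros(ncols, 1);                 % initial guess m0 = 0
for k = 1:nsweeps
    for i = 1:nrows
        gi = G(i, :);                % row i of G (1 x ncols)
        % move to the closest point on the hyperplane gi*m = d(i)
        m = m + ((d(i) - gi*m) / (gi*gi')) * gi';
    end
end
end
```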
Simple Example - Kaczmarz's algorithm

[Figure 6.1: Kaczmarz's algorithm on a system of two equations, y = x - 1 and y = 1. Each update step is perpendicular to the hyperplane of the constraint being enforced.]
MATLAB Examples
QUESTIONS?
ART and SIRT

These two methods, the Algebraic Reconstruction Technique and the Simultaneous Iterative Reconstruction Technique, were developed for tomography applications.

For the simplest version of ART, take Kaczmarz's formula and replace all non-zero entries in G with 1's, assuming the length of the ray path in each cell is equal! What are the numerator and denominator then?

An estimate of the residual and the number of "hit" cells!

ART and SIRT

A slight improvement to ART can be made by scaling the equations to reflect the fact that different ray paths have different lengths (and note the error on page 147: "cell to cell" should be "path to path"!).

ART solutions tend to be noisy because the model is "jumping around" at every update.
ART and SIRT

SIRT uses the same "update formula," but all model updates are computed before applying any of them to the model! The updates are averaged to obtain the model perturbation for that iteration step. As a result, SIRT results tend to be less noisy, due to the averaging effect.
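A MATLAB sketch of SIRT under the same assumptions as the Kaczmarz sketch above; the row updates are accumulated and applied as one averaged perturbation per sweep:

```matlab
function m = sirt(G, d, nsweeps)
% SIRT: same row update as Kaczmarz, but updates are accumulated and
% averaged before the model is changed.
[nrows, ncols] = size(G);
m = zeros(ncols, 1);
for k = 1:nsweeps
    dm = zeros(ncols, 1);
    for i = 1:nrows
        gi = G(i, :);
        dm = dm + ((d(i) - gi*m) / (gi*gi')) * gi';  % proposed update
    end
    m = m + dm / nrows;      % apply the averaged model perturbation
end
end
```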
QUESTIONS?
Conjugate Gradient and CGLS

Goal - in N dimensions, get to the minimum in N steps!

Gradient method - go in the "downhill" direction (-g).

Inefficient in a similar way that Kaczmarz's method is inefficient - it bounces back and forth.

CG - zoom directly to the minimum! How??
Conjugate Gradient

In the figure, the green line is the optimal gradient (steepest descent) steps and the red line is the CG steps. Note that the second CG step can be thought of as a linear combination of the first two gradient steps.

How can we find the right linear combination?
Conjugate Gradient

For the system A x = b, vectors pi and pj are said to be mutually conjugate with respect to A if

piT A pj = 0

If we build the CG steps so that they satisfy this relationship, then it turns out that these steps do what we want!

Note - CG works on symmetric positive definite matrices A. What do we do for a general G?
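For reference, a minimal MATLAB sketch of the standard CG iteration for a symmetric positive definite A (a fixed iteration count stands in for a proper convergence test):

```matlab
function x = cg(A, b, niter)
% Conjugate Gradient for symmetric positive definite A.
x = zeros(size(b));
r = b - A*x;                     % initial residual
p = r;                           % first step is the gradient direction
for k = 1:niter
    Ap = A*p;
    alpha = (r'*r) / (p'*Ap);    % step length along p
    x = x + alpha*p;
    rnew = r - alpha*Ap;
    beta = (rnew'*rnew) / (r'*r);
    p = rnew + beta*p;           % new direction, A-conjugate to the old ones
    r = rnew;
end
end
```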
Conjugate Gradient Least Squares

Basically, we need to turn our general problem

G m = d

into a symmetric positive definite system so that we can solve it with the Conjugate Gradient method.

Two options:

1. Form the normal equations, GT G m = GT d

2. Solve a regularized system, such as the Tikhonov normal equations, (GT G + a2 LT L) m = GT d
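A minimal MATLAB sketch of CGLS applied to the normal equations, using only products with G and GT so that GT G is never formed explicitly (the function name and fixed iteration count are illustrative):

```matlab
function m = cgls(G, d, niter)
% CGLS: CG on the normal equations GT*G*m = GT*d, using only
% matrix-vector products with G and G' (GT*G is never formed).
m = zeros(size(G, 2), 1);
r = d - G*m;                     % data-space residual
s = G'*r;                        % normal-equations residual
p = s;
gamma = s'*s;
for k = 1:niter
    q = G*p;
    alpha = gamma / (q'*q);
    m = m + alpha*p;
    r = r - alpha*q;
    s = G'*r;
    gnew = s'*s;
    p = s + (gnew/gamma)*p;      % next A-conjugate search direction
    gamma = gnew;
end
end
```

Note that the iteration count itself acts as a regularizer here, which is why the deblurring examples below report results with and without explicit regularization.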
QUESTIONS?
Conjugate Gradient Least Squares - Application to image deblurring
CGLS, 30 iterations, no explicit regularization
CGLS, 100 iterations, no explicit regularization
CGLS, 200 iterations, with explicit regularization
QUESTIONS?
