Review: Inverse Problem - Estimating Weights of Ladies from W1 and W2

The document discusses inverse problems and regularization techniques for solving ill-posed problems. It introduces the concepts of well-posed and ill-posed inverse problems, and shows how adding regularization can improve conditioning. Specific techniques covered include Tikhonov regularization, which adds a penalty term to the residual norm to favor certain solutions. The singular value decomposition is also discussed as it relates to regularization, along with the truncated SVD (TSVD) method. The role of additional data points in improving the conditioning of a problem is explored through an example.


review

Inverse problem: estimating the weights of ladies from W1 and W2 [figure]

Inverse problem setting

Unknowns : x = (w1, w2)
Data     : d = (d1, d2)
Model    : d = Ax

[figure: two scale readings d1 and d2]

Well-posed problem

Unknowns : f = (w1, w2)
Data     : d = (d1, d2)
Model    : d = Af

A = 1/2 * [1.4  0.6
           0.6  1.4]

well-posed
The inverse problem of solving
A(f) = d
for f given d is called well-posed
if:
1. a solution exists for any data d in the data
space,
2. the solution is unique in the image space, and
3. the inverse mapping d -> f is continuous.

singular-value decomposition
Suppose A is an m-by-n matrix. Then there exists a
factorization of the form
A=USV*
where U is an m-by-m unitary matrix,
S is an m-by-n matrix with nonnegative numbers on
the diagonal and zeros off the diagonal, and
V* denotes the conjugate transpose of V, an n-by-n
unitary matrix.
Such a factorization is called a singular-value
decomposition of A.

singular-value decomposition

- The matrix V thus contains a set of orthonormal
  "input" or "analysing" basis vector directions for A.
- The matrix U contains a set of orthonormal
  "output" basis vector directions for A.
- The matrix S contains the singular values,
  which can be thought of as scalar "gain
  controls" by which each corresponding input
  is multiplied to give a corresponding output.

example
a =
    1.4000    0.6000
    0.6000    1.4000

>> [U,S,V] = svd(a)


U=
-0.7071 -0.7071
-0.7071 0.7071

S =
    2.0000         0
         0    0.8000

V=
-0.7071 -0.7071
-0.7071 0.7071
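
As a quick check (a minimal sketch; a, U, S, V are the variables from the session above), the factors reproduce a, and each input direction V(:,k) is mapped to the output direction U(:,k) scaled by the gain S(k,k):

>> norm(U*S*V' - a)               % should be near machine precision
>> norm(a*V(:,1) - S(1,1)*U(:,1)) % A maps v1 to sigma1*u1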

Ill-Posed Problems and Ill-Conditioning

The relative error propagation from the
data to the solution is controlled by the
condition number:
if δd is a variation of d and δf the
corresponding variation of f, then

|δf|/|f| <= cond(A) * |δd|/|d|

where cond(A) = σmax(A)/σmin(A) is the
condition number of A.
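
MATLAB's cond returns exactly this ratio of singular values. A minimal sketch comparing the well-posed matrix above with the nearly singular matrix introduced later in these slides (the 1/2 scaling follows those slides):

a_well = [1.4 0.6; 0.6 1.4]/2;          % well-posed example
a_ill  = [0.999 1.001; 1.001 0.999]/2;  % ill-conditioned example
cond(a_well)    % = 1.0/0.4   = 2.5
cond(a_ill)     % = 1.0/0.001 = 1000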

geometry of observation equations and unknowns [figure]

Case of well-posed

review

Unknowns : f = (w1, w2)
Data     : d = (d1, d2)
Model    : d = Af

A = ?

[figure: two scale readings d1 and d2]

geometry of observation equations and unknowns [figure]

Case where the solution is non-unique:

Unknowns : f = (w1, w2)
Data     : d = (d1, d2)
Model    : d = Af

A = 1/2 * [0.999  1.001
           1.001  0.999]

geometry of observation eqn.

Case of ill-conditioning:

>> a = [0.999 1.001; 1.001 0.999]/2;
>> inv(a)
ans =
 -499.5000  500.5000
  500.5000 -499.5000

>> d = [46; 45];
>> xx = inv(a)*d
xx =
 -454.5000
  545.5000

>> d = [45; 45];
>> xx = inv(a)*d
xx =
   45.0000
   45.0000

A change of 1 in a single data value moves the solution by roughly 500 in each component.

>> [U,S,V] = svd(a)

U =
   -0.7071   -0.7071
   -0.7071    0.7071
S =
    1.0000         0
         0    0.0010
V =
   -0.7071    0.7071
   -0.7071   -0.7071
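
Writing the solution through these factors shows where the amplification comes from; a minimal sketch (variable names as in the session above):

a = [0.999 1.001; 1.001 0.999]/2;
[U,S,V] = svd(a);
d = [46; 45];
f = V*(S\(U'*d))    % identical to inv(a)*d
% the component of d along U(:,2) is divided by S(2,2) = 0.0010,
% which is why a 1-unit change in d moves f by ~500 units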

geometry of observation equations and unknowns [figure]

Case of well-posed: the solution space depends on the error level.

Does the number of data help you?

Unknowns : f = (w1, w2)
Data     : d = (d1, d2, d3)
Model    : d = Af

[figure: three scale readings d1, d2, d3]

A = 1/2 * [0.999  1.001
           1.000  1.000
           1.001  0.999]

a =
    1.0010    0.9990
    1.0000    1.0000
    0.9990    1.0010
>> [U,S,V] = svd(a)
U =
   -0.5774    0.7071    0.4082
   -0.5774   -0.0000   -0.8165
   -0.5774   -0.7071    0.4082
S =
    2.4495         0
         0    0.0020
         0         0
V =
   -0.7071    0.7071
   -0.7071   -0.7071

The ratio of singular values is still about 1225, so adding a third measurement of the same kind does not cure the ill-conditioning.

Ill-posed problem

[diagram: A maps the unknowns to the data]

The error propagates from the data to the solution.

Regularization

[diagram]

Add a constraint to focus the solution space.

Tikhonov regularization

The standard approach is known as linear least squares and
seeks to minimize the residual
|Ax-d|^2
However, the matrix A may be ill-conditioned or singular,
yielding a large number of solutions. In order to give
preference to a particular solution with desirable properties,
a regularization term is included in this minimization:
|Ax-d|^2 + |Γx|^2
for some suitably chosen Tikhonov matrix Γ.
In many cases, this matrix is chosen as the identity matrix,
Γ = I, giving preference to solutions with smaller norms.

In other cases, a highpass operator (e.g. a difference
operator or a weighted Fourier operator) may be used to
enforce smoothness if the underlying vector is believed
to be mostly continuous.
This regularization improves the conditioning of the problem,
thus enabling a numerical solution.
An explicit solution, denoted by x*, is given by:

x* = (A^T A + Γ^T Γ)^(-1) A^T d

The effect of regularization may be varied via the scale of Γ,
e.g. Γ = αI with parameter α.
For α = 0 this reduces to the unregularized least-squares solution,
provided that (A^T A)^(-1) exists.
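
A minimal MATLAB sketch of this formula on the ill-conditioned example above; the choice Γ = αI and the value of α are illustrative assumptions, not part of the slides:

a = [0.999 1.001; 1.001 0.999]/2;  % ill-conditioned matrix
d = [46; 45];
alpha = 0.01;                      % regularization parameter (assumed)
G = alpha*eye(2);                  % Tikhonov matrix Gamma = alpha*I
x_tik = (a'*a + G'*G) \ (a'*d)     % x* = (A'A + G'G)^(-1) A'd
x_ls  = a \ d                      % unregularized solution, for comparison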

Understanding Tikhonov regularization in the 2-D solution space

[figure: contours of the data misfit |Ax-d|^2 and the penalty |Ix|^2]

Reference

Tikhonov AN, 1943, On the stability of inverse problems, Dokl. Akad. Nauk SSSR, 39, No. 5, 195-198.
Tikhonov AN, 1963, Solution of incorrectly formulated problems and the regularization method, Soviet Math. Dokl. 4, 1035-1038; English translation of Dokl. Akad. Nauk SSSR 151, 1963, 501-504.
Tikhonov AN and Arsenin VA, 1977, Solution of Ill-posed Problems, Winston & Sons, Washington, ISBN 0-470-99124-0.
Hansen PC, 1998, Rank-Deficient and Discrete Ill-Posed Problems, SIAM.

Let us suppose that the operator A has the singular
value decomposition
A = USV*
The truncated singular value decomposition (TSVD)
method is based on the observation that for the
larger singular values of A,
the components of the reconstruction along the
corresponding singular vectors are well-determined by
the data,
but the other components are not well-determined.
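
A minimal TSVD sketch along these lines; the truncation threshold and the data vector d are illustrative assumptions:

a = [1.001 0.999; 1.000 1.000; 0.999 1.001];  % 3-by-2 example from the slides
d = [46; 46; 46];               % hypothetical data
[U,S,V] = svd(a);
s = diag(S);
k = sum(s > 0.01);              % keep only the well-determined components
f_tsvd = V(:,1:k) * ((U(:,1:k)'*d) ./ s(1:k))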

Does the number of data help you?

Unknowns : f = (w1, w2)
Data     : d = (d1, d2, d3)
Model    : d = Af

[figure: three scale readings d1, d2, d3]

A = [0.999  1.001
     1.000  1.000
     1.001  0.999]

a =
    1.0010    0.9990
    1.0000    1.0000
    0.9990    1.0010
>> [U,S,V] = svd(a)
U =
   -0.5774    0.7071    0.4082
   -0.5774   -0.0000   -0.8165
   -0.5774   -0.7071    0.4082
S =
    1.2247         0
         0    0.0010
         0         0
V =
   -0.7071    0.7071
   -0.7071   -0.7071

With TSVD, the small singular value is truncated to zero:

a =
    1.0010    0.9990
    1.0000    1.0000
    0.9990    1.0010
>> [U,S,V] = svd(a)
U =
   -0.5774    0.7071    0.4082
   -0.5774   -0.0000   -0.8165
   -0.5774   -0.7071    0.4082
S =
    1.2247         0
         0    0.0000
         0         0
V =
   -0.7071    0.7071
   -0.7071   -0.7071

Moore-Penrose Matrix Inverse

The Moore-Penrose generalized matrix inverse is a
unique matrix called the pseudoinverse.
It was independently defined by Moore in 1920 and
Penrose (1955), and is variously known as the
generalized inverse, pseudoinverse, or
Moore-Penrose inverse.

The Moore-Penrose matrix inverse gives the
least-squares and least-norm solution.

[figure: panels illustrating the least-squares solution,
the least-norm solution, and the combined
least-squares least-norm solution]
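
In MATLAB the pseudoinverse is available as pinv, which is computed via the SVD; a small sketch on the overdetermined example (the data vector is an illustrative assumption):

a = [1.001 0.999; 1.000 1.000; 0.999 1.001];
d = [46; 46; 46];     % hypothetical data
f = pinv(a)*d         % least-squares solution; least-norm among minimizers
% pinv(a, tol) accepts an explicit tolerance for truncating small
% singular values, making it act like the TSVD above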

a =
   [1.0010  0.9990
    1.0000  1.0000
    0.9990  1.0010] * 0.5

d = a*x + n =
    46
    46

Home work
Solve the problem above using:
- the normal inverse matrix,
- Tikhonov regularization, and
- TSVD.

Unknowns : f = (w1, w2, ..., wn)
Data     : d = (d1, d2, d3, ..., dm)
Model    : d = Af

[figure: scale readings d1, d2, ..., dm]

Computer account setting

Send an email with the subject "Inverse Problems" to
[email protected], including:
- your name (in the Latin alphabet),
- your student ID number, and
- your school and department.

The Deconvolution Problem

Choosing the regularization parameter
