
Image Analysis Lecture 5


Image analysis—module 2

Priscilla Canizares [email protected] F2.05 at DAMTP

1. Introduction to inverse problems. 07/05

2. Introduction to time-frequency transforms. 09/05

3. Introduction to sparse representation of signals/images & compressed sensing. 10/05

4. Practical 13/05

Introduction to inverse problems

© Dr Priscilla Cañizares
What have we got in store for today

• Introduction to classical inverse problems

- Singular value decomposition, variational formulation and regularisation

- Well-posed vs ill-posed inverse problems

- Introduction to the statistical formulation of inverse problems

- Some examples

Data Driven Science & Engineering


Machine Learning, Dynamical Systems, and Control
Steven L. Brunton & J. Nathan Kutz

Inverse problem: 1D example

Determining the density of an object by measuring its mass and volume.

Model parameters and data are related through a physical model. An inverse problem works out unknown model parameters from measured data, rather than modelling the data from known parameters.

• Forward problem: we know the density ρ and the volume V, and we want to determine the mass M = ρV.

• Inverse problem: determine the density ρ of an object by measuring its mass M and its volume V.

• Two data points: d1 = M and d2 = V

• One unknown model parameter: m = ρ

Data and model are related by d2 ρ = d1. In general, we want to solve, or "invert", equations of the form

f1(d, m) = 0
f2(d, m) = 0
...

for the model parameters.

https://en.wikipedia.org/wiki/Density
Inverse problem: 1D example

• N temperature measurements Ti at different depths zi:

T1 = a + b z1
T2 = a + b z2
...
TN = a + b zN

• Fitting a straight line T = a + bz is a linear model, Gm = d:

[T1]   [1  z1]
[T2]   [1  z2] [a]
[..] = [.. ..] [b]
[TN]   [1  zN]

d = [T1, T2, ..., TN]^T   Measurements
m = [a, b]^T              Unknowns
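As a sketch of this fit (the depths and temperatures below are made-up illustrative values, not from the slides), the system Gm = d can be assembled and solved with NumPy's least-squares routine:

```python
import numpy as np

# Hypothetical data following T = a + b*z with a = 2.0, b = 0.5
z = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
T = 2.0 + 0.5 * z

# Build G: a column of ones (for a) and a column of depths (for b)
G = np.column_stack([np.ones_like(z), z])

# Solve Gm = d in the least-squares sense
m, *_ = np.linalg.lstsq(G, T, rcond=None)
print(m)  # ≈ [2.0, 0.5]
```

Since the data are noise-free here, the fit recovers a and b essentially exactly.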
Ax = b

• A takes the form of an overdetermined system when M > N, or an underdetermined system when M < N.

This is a hard problem to solve.

Ill-posed inverse problems
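A small numerical sketch of the two regimes (the matrix shapes below are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined: more equations than unknowns -- no exact solution in
# general; least squares picks the x minimizing ||Ax - b||_2.
A_over = rng.standard_normal((20, 5))
b_over = rng.standard_normal(20)
x_ls, *_ = np.linalg.lstsq(A_over, b_over, rcond=None)

# Underdetermined: fewer equations than unknowns -- infinitely many
# solutions; the pseudo-inverse picks the minimum-norm one.
A_under = rng.standard_normal((5, 20))
b_under = rng.standard_normal(5)
x_mn = np.linalg.pinv(A_under) @ b_under

print(np.allclose(A_under @ x_mn, b_under))  # exact fit in the underdetermined case
```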
Model, kernel, instrument

Measurement operator: A = MK, where M describes the instrument (the measurement) and the kernel K is, e.g., a Fourier transform.

The properties of the system/object observed determine the type of detection instruments/sensors.

(a) https://www.researchgate.net/publication/231608920_Multimodal_diffuse_imaging_system
(b) https://imagine.gsfc.nasa.gov/science/toolbox/emspectrum_observatories1.html
(c) https://www.space.fm/astronomy/planetarysystems/electromagneticspectrum.html
Other image sources: wikipedia, https://www.ucsf.edu/, https://my.clevelandclinic.org/, https://education.riaus.org.au/
Example: X-ray imaging

The intensity I of an X-ray beam is attenuated as it travels through material with attenuation coefficient c(x, y):

dI/ds = -c(x, y) I,

so the measured intensity along ray i is

I_i = I_0 exp[ -∫_{ray_i} c(x, y) ds ].

With noise, the measurement model is y = Ax + ε. As it stands this is a non-linear problem in c. Expanding to first order, it becomes a linear problem:

(I_0 - I_i)/I_0 ≈ Σ_{j=1}^{M} s_ij c_j,

where s_ij is the length of ray i in pixel j and c_j is the attenuation in pixel j. The inverse problem is then the matrix equation y = Hx + ε:

[Ĩ_1]   [s_11  s_12  ...  s_1M] [c_1]
[Ĩ_2] = [s_21  s_22  ...  s_2M] [c_2]
[ : ]   [  :     :         :  ] [ : ]
[Ĩ_N]   [s_N1  s_N2  ...  s_NM] [c_M]

with Ĩ_i = (I_0 - I_i)/I_0.
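A toy sketch of the linearised tomography problem above (the path-length matrix S here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: M = 9 pixels (a 3x3 grid), N = 12 rays.
# S[i, j] stands in for the length of ray i inside pixel j.
N, M = 12, 9
S = rng.uniform(0.0, 1.0, size=(N, M))
c_true = rng.uniform(0.1, 0.5, size=M)   # attenuation coefficients

# Linearised measurements: (I0 - Ii)/I0 = sum_j s_ij c_j
y = S @ c_true

# Recover c by least squares (overdetermined, N > M; noise-free data)
c_rec, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.allclose(c_rec, c_true))
```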
Concept of inverse problem: curve fitting
Solving linear inverse problems: least squares

In general, for an overdetermined m × n system Ax = b, what Gauss and Legendre discovered is that there are solutions x minimizing the squared error ||Ax - b||_2^2, and that these solutions are given by the square n × n system

A^T A x = A^T b,

called the normal equations. Furthermore, when the columns of A are linearly independent, A^T A is invertible, and so the least-squares solution x is unique.

Example, fitting a straight line f(x) = m x + b:

• Prediction error: e_i = d_i^obs - d_i^pre

• The best-fit line is the one with model parameters that give the smallest overall error

E = e^T e = Σ_{i=1}^{N} e_i^2

• Solution by least-squares fitting: min ||f(x_i) - y_i||_2^2. Usually the 2-norm is used, but there are other options:

L1 norm: ||e||_1 = Σ_{i=1}^{N} |e_i|

L2 norm: ||e||_2 = [ Σ_{i=1}^{N} |e_i|^2 ]^{1/2}
Ln norm: ||e||_n = [ Σ_{i=1}^{N} |e_i|^n ]^{1/n}

L∞ norm: ||e||_∞ = max_i |e_i|

The least-squares solution and the L2 norm

Find the model coefficients that minimise the error in our measurement: min ||Gm - d||_2^2. The error is

E = e^T e = (d - Gm)^T (d - Gm) = Σ_i [ d_i - Σ_j G_ij m_j ][ d_i - Σ_k G_ik m_k ].

Setting the derivatives with respect to the model parameters to zero,

∂E/∂m_q = 0 = 2 Σ_k m_k Σ_i G_iq G_ik - 2 Σ_i G_iq d_i,

or, in matrix form,

G^T G m - G^T d = 0.

Hence the least-squares estimate is

m^pred = (G^T G)^{-1} G^T d.

For the straight-line fit, the normal equations involve

G^T G = [ N       Σ x_i   ]        G^T d = [ Σ y_i     ]
        [ Σ x_i   Σ x_i^2 ]                [ Σ x_i y_i ]
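The closed-form solution m = (G^T G)^{-1} G^T d can be checked against a library least-squares solver; the line coefficients and noise level below are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Straight-line fit d = m1 + m2*z, with assumed coefficients 1.5 and 0.8
z = np.linspace(0.0, 10.0, 50)
d = 1.5 + 0.8 * z + 0.05 * rng.standard_normal(50)

G = np.column_stack([np.ones_like(z), z])

# Normal equations (fine here; QR/lstsq is numerically preferred in general)
m_normal = np.linalg.solve(G.T @ G, G.T @ d)
m_lstsq, *_ = np.linalg.lstsq(G, d, rcond=None)

print(np.allclose(m_normal, m_lstsq))  # both give the least-squares solution
```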
We can't just remove outliers

min ||f(x_i) - y_i||_2^2

[Slides 14-16: least-squares fits min ||f(x_i) - y_i||_2^2 to data containing outliers.]
When the number of solutions that give the same minimum prediction error is greater than one, least squares fails: the "data" matrix G^T G is singular (its determinant vanishes), so [G^T G]^{-1} does not exist; for example, when all measurements are taken at the same depth z_1, [G^T G]^{-1} ∝ 1/(z_1^2 - z_1^2).

Measures of length

Aside from the Euclidean norm, there are other possible measures of distance. For instance, we can equally well quantify length by summing the absolute values of the elements of a vector:

L1 norm: ||e||_1 = Σ_{i=1}^{N} |e_i|

L2 norm: ||e||_2 = [ Σ_{i=1}^{N} |e_i|^2 ]^{1/2}

...

Ln norm: ||e||_n = [ Σ_{i=1}^{N} |e_i|^n ]^{1/n}

L∞ norm: ||e||_∞ = max_i |e_i|

• Successively higher norms give the largest entry of e successively larger weight; the L∞ norm gives non-zero weight only to the largest element: it selects the entry of e with the largest absolute value.

We choose the norm depending on the data. If the data are very accurate, then the fact that a prediction falls far from its observed value is important, and we might choose a higher-order norm. If the data are scattered widely across the trend, then no significance can be placed upon a few large prediction errors, and the low-order norms, which give more equal weight to errors of different sizes, are preferable.

Solving inverse problems using data-driven models

The main idea is to develop a mathematically coherent foundation for combining data-driven models, in particular those based on deep learning, with the domain-specific knowledge contained in physical-analytical models. The focus is on solving ill-posed inverse problems that are at the core of many challenging applications in the natural sciences, medicine and life sciences, as well as in engineering and industry. The goal is to reliably recover a hidden multi-dimensional model parameter from indirect observations. A typical example is when imaging/sensing technologies are used in medicine, engineering, astronomy and geophysics; often, these are ill-posed problems.
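For concreteness, the norms above can be evaluated on a small residual vector (the values are illustrative, with one outlier):

```python
import numpy as np

e = np.array([0.1, -0.2, 0.15, 3.0])  # residuals with one outlier

l1 = np.sum(np.abs(e))                # L1 norm
l2 = np.sqrt(np.sum(np.abs(e) ** 2))  # L2 norm
linf = np.max(np.abs(e))              # L-infinity norm

# The outlier dominates more and more as the order of the norm grows
print(l1, l2, linf)
```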
Measures of distance: L1 vs L2

Gm = d

Model: d_i = m_1 + m_2 z_i, with M = 2 model parameters and N > M measurements.
Singular Value Decomposition

The aim of the SVD is to find patterns in data by providing a hierarchical representation of the data in a new coordinate system. It can be used to:

• obtain low-rank approximations to matrices (reducing the dimension of the space),

• perform pseudo-inverses of non-square matrices to find the solution of under/overdetermined systems of equations,

• de-noise data sets,

• characterise the input and output geometry of a linear map between vector spaces,

• compress an image,

• …
• The SVD factorises A as a product of two orthogonal transformations with a diagonal matrix in between:

A = U Σ V^T   (or AV = UΣ)

• We can think of the transformations as rotations and the diagonal matrix as a scaling, e.g., a dilation. It says that we can represent any transformation by a rotation from "input" coordinates into convenient coordinates, followed by a simple scaling operation, followed by a rotation into "output" coordinates. Furthermore, the diagonal scaling Σ comes out with its elements sorted in decreasing order.

SVD definitions and interpretation
The SVD writes

A = U Σ V*,

where U ∈ C^{n×n} and V ∈ C^{m×m} are unitary matrices with orthonormal columns, and Σ ∈ R^{n×m} is a matrix with real, non-negative entries on the diagonal and zeros off the diagonal. Here * denotes the complex conjugate transpose; for real-valued matrices, this is the same as the regular transpose. A matrix U is unitary if UU* = U*U = I. The condition that U and V are unitary is used extensively throughout this chapter.

When n ≥ m, the matrix Σ has at most m non-zero elements on the diagonal and may be written as

Σ = [ Σ̂ ]
    [ 0 ]

Therefore, it is possible to represent X exactly using the economy SVD:

X = U Σ V* = [Û Û⊥] [ Σ̂ ] V* = Û Σ̂ V*.
                    [ 0 ]

The full and economy SVD are shown in Fig. 1.1. The columns of Û⊥ span a vector space that is complementary and orthogonal to that spanned by Û. In imaging it is common to have A ∈ C^{k×d} with k > d, so that A = U_d Σ_d V_d*. If k > d, the system Au = f is inconsistent when f is not in the range of A; when A has full rank, the projection of f onto that range is U_d U_d* f.

The pieces of the SVD have names following the "singular" theme. The columns of U = (u_1, u_2, ..., u_d) are called left singular vectors of X, the columns of V = (v_1, v_2, ..., v_d) are right singular vectors, and Σ̂ is a diagonal matrix containing the singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_d ≥ 0, ordered from largest to smallest. The rank of X is equal to the number of non-zero singular values.

Method of snapshots. It is often impractical to construct the matrix XX* because of the large size of the state dimension n, let alone solve the eigenvalue problem; if x has a million elements, then XX* has a trillion elements. Sirovich observed that it is possible to bypass this large matrix and compute the first m columns of U using what is now known as the method of snapshots [40].

In Python:

import numpy as np
A = np.random.rand(20, 5)
U, s, Vt = np.linalg.svd(A)                       # full SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # reduced (economy) SVD

In MATLAB:

A = randn(20,5);
[U,S,V] = svd(A);         % full SVD
[U,S,V] = svd(A,'econ');  % economy SVD
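The method of snapshots can be sketched as follows: instead of forming the huge n × n matrix XX*, we eigen-decompose the small m × m matrix X*X and recover the left singular vectors from it (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# n-dimensional states, m << n snapshots
n, m = 2000, 10
X = rng.standard_normal((n, m))

# Eigen-decompose the small m x m matrix X*X instead of the n x n matrix XX*
C = X.T @ X
evals, W = np.linalg.eigh(C)          # ascending eigenvalues
idx = np.argsort(evals)[::-1]         # reorder descending
evals, W = evals[idx], W[:, idx]

sigma = np.sqrt(evals)                # singular values of X
U = X @ W / sigma                     # first m left singular vectors

# Compare with the economy SVD computed directly
U_ref, s_ref, _ = np.linalg.svd(X, full_matrices=False)
print(np.allclose(sigma, s_ref))
```

This works because X = UΣV* implies X*X = VΣ²V*, so the eigenvectors of X*X are the right singular vectors and XW/σ reproduces the left ones.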
Spectral expansion of the operator A

Here, we establish the notation that a truncated SVD basis (and the resulting approximated matrix X̃) will be denoted by X̃ = Ũ Σ̃ Ṽ*. Because Σ̃ is diagonal, the rank-r SVD approximation is given by the sum of r distinct rank-1 matrices:

X̃ = Σ_{k=1}^{r} σ_k u_k v_k* = σ_1 u_1 v_1* + σ_2 u_2 v_2* + ... + σ_r u_r v_r*.

This is the so-called dyadic summation. For a given rank r, there is no better approximation for X, in the ℓ2 sense, than the truncated SVD approximation X̃.

[Figure 1.4: (a) Singular values σ_k. (b) Cumulative energy in the first k modes, shown for truncation ranks r = 5, 20, 100.]

Copyright © 2017 Brunton & Kutz. All Rights Reserved.

Application to denoising: image denoising

In this example, we consider each m × n × p colour image as a single sample of length mnp, where p = 3, and assemble a matrix X ∈ R^{3mn×N}.

Copyright © 2017 Brunton & Kutz. All Rights Reserved.
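A minimal sketch of SVD denoising under these assumptions (a synthetic low-rank matrix plus noise stands in for the image matrix X):

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-rank "signal" plus noise, samples stacked as columns of X
n_pix, n_samples, true_rank = 500, 100, 5
L = rng.standard_normal((n_pix, true_rank)) @ rng.standard_normal((true_rank, n_samples))
X = L + 0.01 * rng.standard_normal((n_pix, n_samples))

# Truncated SVD: keep only the r dominant modes (the dyadic sum above)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 5
X_denoised = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# The rank-r reconstruction is closer to the clean signal than X itself
print(np.linalg.norm(X_denoised - L) < np.linalg.norm(X - L))
```

Truncation discards the noise energy living in the weak modes while keeping the dominant structure.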
Pseudoinverse and rank-deficient least squares: linear inverse problems

As with most of our other linear algebra tools, the SVD provides yet another way to solve linear systems. If A is square and full-rank, it has an inverse, and its SVD has a square Σ with nonzero entries all the way down the diagonal. We can invert Σ easily by taking the reciprocals of these diagonal entries, so the inverse of A (if it exists) can be determined directly from the SVD:

A^{-1} = V S^{-1} U^T,   where   S^{-1} = diag(1/s_1, 1/s_2, ..., 1/s_n).

The logic is that we can find the inverse mapping by undoing each of the three operations (rotate, stretch, rotate) we performed when multiplying by A: first, undo the last rotation by multiplying by U^T; second, un-stretch by multiplying by 1/s_i along each axis; third, un-rotate by multiplying by V.

Then Ax = b is solved by x = V Σ^{-1} U^T b: no problem, it's just three matrix multiplications. This may be the simplest solution method to code; note, though, that back substitution takes less arithmetic than matrix multiplication. The process is nice and stable, and it is very clear exactly when it fails.

Another way to see that this definition of the inverse is correct:

A^{-1} A = (V S^{-1} U^T)(U S V^T) = V S^{-1} (U^T U) S V^T = V (S^{-1} S) V^T = V V^T = I.

We can do a similar analysis of A A^{-1}.

[Figure 1: Schematic illustration of the SVD in terms of three linear transformations: rotate, stretch, rotate.]
[Figure 2: Illustrating the inverse of a matrix in terms of its SVD.]
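A quick check of A^{-1} = V S^{-1} U^T on a random full-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(5)

# Square, full-rank A: invert via its SVD
A = rng.standard_normal((4, 4))
U, s, Vt = np.linalg.svd(A)

# A^-1 = V S^-1 U^T (three matrix multiplications)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T

print(np.allclose(A_inv @ A, np.eye(4)))
```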

Rank-deficient least squares: linear inverse problems

• If A is non-square and/or rank-deficient, it doesn't have an inverse.

• The inverse of A is singular, but we can multiply the three matrices V, S^{-1}, U^T instead, which is a numerically stable operation.

• However:

- small singular values mean bigger values of 1/s_i, amplifying errors;

- zero singular values mean this approach will fail.
s1
6 . ..
clear up7 front if the process will fail: it’s when we divide
6 7
6
Pseudoinverse
6 s
value. 7
7
6 k 7.
S=6
6 0 One 7 of the strengths of the SVD is that it works when th
7
6 7
4 How
. . . would
5 we go about solving a singular system? More s
•Generalize the SVD-basedweinverse:
solve
0 Ax ⇡ b, A square n ⇥ n, rank r < n?
This is no longer a system of equations we can expec
A can then be written similarly to the inverse:
solution,
† since ran(A) ⇢† IR and b† might
n
not be in there. I
The least-squares solution can be written through the pseudo-inverse:

  x = A† b ,   A† = V S† Uᵀ  (with A = U S Vᵀ).

If ‖Ax − b‖ is minimum, so is ‖A(x + y) − b‖ for any y ∈ null(A). Thus a system like this is both overdetermined and underdetermined.

The SVD gives us easy access to the solution space, though. With

  S = diag(s₁, …, s_k, 0, …, 0) ,

we can obtain A† by inverting all singular values that are non-zero and leaving all zero singular values at zero. Doing this is a combination of the procedures we used for over- and underdetermined systems using QR. First we expand the residual we are minimising using the SVD of A and then transform it by Uᵀ (which does not change the norm of the residual):

  ‖Ax − b‖ = ‖U S Vᵀ x − b‖ = ‖S Vᵀ x − Uᵀ b‖ .

If the operator does not have full rank, we use the truncated SVD:

  A† = V_m Σ_m⁻¹ U_m* ,

where V_m = (v₁, v₂, …, v_m), U_m = (u₁, u₂, …, u_m) and Σ_m contains the non-zero singular values. Note that the pseudo-inverse allows us to define a unique solution.

• Although the pseudo-inverse of A exists, it can be unstable: the condition number ‖A‖ ‖A†‖ = σ₁/σ_k may still be large. To study the stability, we write the solution in components,

  x ≈ Σᵢ (uᵢᵀ b / σᵢ) vᵢ ,   or, in the notation f = Au,   u = Σᵢ₌₁ᵏ (⟨uᵢ, f⟩ / σᵢ) vᵢ ,

and we can see that for small singular values the noise components get amplified: the component of f along uᵢ is amplified by 1/σᵢ.
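To see the 1/σᵢ amplification concretely, here is a small numpy sketch (the matrix, noise level, and truncation tolerance are invented for illustration) comparing the naive pseudo-inverse, which inverts every singular value, with a truncated SVD that discards the ill-determined components:

```python
import numpy as np

# A toy ill-conditioned forward operator with rapidly decaying singular values.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((50, 50)))
Q2, _ = np.linalg.qr(rng.standard_normal((40, 40)))
s = np.logspace(0, -12, 40)                 # sigma_1 = 1 ... sigma_40 = 1e-12
A = Q1[:, :40] @ np.diag(s) @ Q2.T

x_true = rng.standard_normal(40)
b = A @ x_true + 1e-3 * rng.standard_normal(50)   # noisy measurement

def tsvd_solve(A, b, tol):
    """Truncated-SVD pseudo-inverse: invert only singular values above tol."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

x_naive = tsvd_solve(A, b, tol=0.0)    # invert everything: noise / sigma_i explodes
x_tsvd = tsvd_solve(A, b, tol=1e-2)    # keep only well-determined components

print(np.linalg.norm(x_naive - x_true))   # enormous: noise amplified by 1/sigma_i
print(np.linalg.norm(x_tsvd - x_true))    # moderate: only truncation error remains
```

The truncated solution trades a (bounded) truncation error for stability, exactly the trade-off the filter-factor view formalises.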
Inverse theory in imaging: notation

Mathematically, an inverse problem consists in recovering the model parameter u ∈ X (X the model-parameter vector space) from measured data f ∈ Y (Y the data vector space):

  Function space:  f = A(u) + n        Discrete:  f = Au + n ,  u ≈ A⁻¹(f)

e.g. parameter estimation.

• Forward models describe the physical phenomena that relate the model parameters to the data in the absence of observational noise n.
• A : X → Y is the forward operator: a known continuous operator in the function-space setting, a matrix A ∈ R^{k×d} in the discrete setting.
• A is a bounded linear operator, typically ill-behaved.
• Linear Time-Invariant (LTI) problems.
• A mathematical model for measuring an image should include the physical processes involved.

The forward problem in imaging is usually well-behaved:

  f = A(u) + n        e.g. A models the imaging process
As we will see, instances of the model function are made in terms of local averages, which are simple generalisations of the discrete case. The inverse problem, then, consists of undoing the convolution. Sometimes, the convolution kernel (filter) k may not be known perfectly (blind deconvolution). Other image-related problems include denoising, where a noisy version of the image is measured, and inpainting, where a significant part of the pixels is missing.

Well-posedness

In this introduction, we will focus on linear continuous inverse problems of the form:

  f = Au + n ,   u† ≈ A⁻¹(f) ,        (10)

where u ∈ Rᵈ is the unknown solution we want to find, f ∈ Rᵏ is given (our measurement), and A ∈ R^{k×d} models the physical phenomena that relate u and f, our measurements. More generally, A : X → Y is a linear forward operator (a mapping) acting between some spaces X and Y; in the continuous setting these are typically Hilbert or Banach spaces, in the discrete setting Euclidean spaces. Realistically, our data will include some observational noise: n ∈ Y combines measurement and modelling errors, with ‖n‖ ≤ δ. If we find a u for which Au = f, we can ask ourselves how big the backward error ‖u − u†‖ is with respect to the forward error ‖n‖ = δ. Broadly speaking, the operator A may be linear (our case) or nonlinear, and its inverse, A⁻¹, is rarely available except in simple cases. Thus, in (classical) inverse problems, u† has to be estimated algorithmically.

An inverse problem is called well-posed if:

1. It has a solution, u (existence),
2. which is unique (uniqueness),
3. and depends continuously on f (stability).

Example: X-ray tomography

Here u : R² → R represents the density of (a slice of) the object, which may in turn be thought of as an image. Applications:

• Medical: a CT scanner is a routine medical diagnosis tool that provides detailed images of a patient's anatomy.
• Materials science: to obtain three-dimensional reconstructions of the inner structure of objects.

The X-ray transform A is the mathematical model for X-ray measurement (for d = 2 it is equivalent to the Radon transform). It maps u to a sinogram f : R × S^{d−1} → R.

Mathematically, a forward (direct) problem in imaging is usually well-posed…

https://www.uni-muenster.de/AMM/num/ipschool2015/moeller0.pdf
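As a hedged sketch of the discrete X-ray/Radon transform just described (rotate-and-sum is only one possible discretisation; scipy is assumed to be available, and the phantom is invented):

```python
import numpy as np
from scipy.ndimage import rotate

# Phantom: a uniform square of density 1 inside an empty 64x64 slice.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Discrete Radon transform: for each angle, rotate the slice and integrate
# (sum) along one axis; each row of the sinogram is one projection.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = np.stack([
    rotate(img, theta, reshape=False, order=1).sum(axis=0)
    for theta in angles
])

print(sinogram.shape)   # (60, 64): 60 projection angles, 64 detector bins
```

Each projection integrates the same total density, which is why tomographic data is highly redundant at low frequencies yet still ill-posed for fine detail.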
An inverse problem in imaging is usually ill-conditioned/ill-posed:

  u ≈ A⁻¹(f − n) ?

A⁻¹ might not exist, or, if it does, it might be unstable.
Solvability and regularization

• A takes the form: A x = b, with A an M×N matrix.

• Overdetermined system, M > N:
  • Find a solution by minimization:  min_x ‖Ax − b‖₂

• Underdetermined system, M < N:
  • More unknowns than constraints; not enough constraints.
  • Usually solved by regularization:  min_x ‖x‖₂  subject to  Ax = b

• Both ideas can be combined into a regularised objective, e.g.

  min_x ‖Ax − b‖₂ + α‖x‖₂ + β‖x‖₁ .

We can solve these minimisation problems using optimisation algorithms (more on this in the next module).
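A minimal numpy illustration of the two regimes (the matrices are arbitrary examples): `np.linalg.lstsq` for the overdetermined least-squares problem, and the pseudo-inverse for the minimum-norm solution of the underdetermined one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined (M > N): no exact solution in general -> least squares,
# min_x ||Ax - b||_2.
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
# The normal equations hold: the residual is orthogonal to the columns of A.
print(np.allclose(A.T @ (A @ x_ls - b), 0.0, atol=1e-10))

# Underdetermined (M < N): infinitely many exact solutions -> pick the
# minimum-norm one, min ||x||_2 subject to Ax = b (pseudo-inverse solution).
B = rng.standard_normal((3, 8))
c = rng.standard_normal(3)
x_mn = np.linalg.pinv(B) @ c
print(np.allclose(B @ x_mn, c))

# Adding any null-space component still fits the data exactly,
# but it only increases the norm.
v = np.linalg.svd(B)[2][-1]      # a null-space direction of B
print(np.allclose(B @ (x_mn + v), c), np.linalg.norm(x_mn + v) > np.linalg.norm(x_mn))
```

The null-space check makes the non-uniqueness explicit: regularization is what selects one solution among the infinitely many consistent ones.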
The corresponding filter factors are

  r_α(s) = s/(s² + α) .

Unlike the TSVD, Tikhonov regularisation has a corresponding variational problem:

  min_u ‖Au − f‖₂² + α‖u‖₂² ,

whose well-posedness can be easily established by writing down the corresponding normal equations:

  u_α = (A*A + αI)⁻¹ A* f = V_m (Σ_m² + αI)⁻¹ Σ_m U_m* f ,

where {(σᵢ, uᵢ, vᵢ)}ᵢ₌₁ᵐ is the singular system of A. Indeed, it is easily verified that A*A + αI has full rank whenever α > 0.

Statistical approach:  f = Au + n

• Allows us to incorporate prior assumptions and formulate inverse problems as variational problems.
• f and u are continuous random variables.
• The prior assumptions are formulated in terms of multivariate probability distributions.

We can use Bayes' theorem to combine the likelihood and prior into the posterior distribution:

  P_post(u|f) = P_data(f|u) P_prior(u) / Z ,

where Z is the normalising constant needed to make P_post integrate to 1.

• Likelihood of measuring f given u, P_data(f|u): determined by the forward model and the statistics of the measurement error; gives information on the likelihood of f assuming a ground truth u.
• Probability of u given f: the posterior distribution gives the probability of any given u being the ground truth, given the measurements f.
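As a quick numerical check (illustrative matrix sizes and α), the normal-equations solution and the SVD filter-factor formula r_α(s) = s/(s² + α) give the same Tikhonov solution:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 20)) @ np.diag(np.logspace(0, -6, 20))  # ill-conditioned
f = rng.standard_normal(30)
alpha = 1e-4

# Tikhonov via the normal equations: u = (A^T A + alpha I)^{-1} A^T f.
u_normal = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ f)

# The same solution via the SVD filter factors r_alpha(s) = s / (s^2 + alpha).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
u_svd = Vt.T @ ((s / (s ** 2 + alpha)) * (U.T @ f))

print(np.allclose(u_normal, u_svd))   # True
```

Note how r_α(s) ≈ 1/s for s ≫ √α but decays to zero for s ≪ √α, damping exactly the components the TSVD would cut off abruptly.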
Consider

  f = Au + ε ,

where ε is Gaussian distributed with zero mean and covariance δ²I, and assume the components uᵢ are normally distributed with zero mean and unit variance. Recall the multivariate Gaussian density,

  P(X = u | μ, Σ) = (2π)^{−d/2} |Σ|^{−1/2} exp( −½ (u − μ)ᵀ Σ⁻¹ (u − μ) ) ,   d the problem dimension.

The posterior then becomes

  P_post(u|f) ∝ exp( −(1/2δ²) ‖Au − f‖₂² − ½ ‖u‖₂² ) ,

which is again Gaussian,

  P_post(u|f) ∝ exp( −½ (u − μ_post)ᵀ Σ_post⁻¹ (u − μ_post) ) ,

where the mean is given by

  μ_post = δ⁻² Σ_post A* f

and the covariance by

  Σ_post = ( δ⁻² A*A + I )⁻¹ .

MAP estimate as a minimisation problem:

  u_MAP = argmax_u P_post(u|f) = argmax_u P_data(f|u) P_prior(u) ,

or, equivalently, min_u [−log P_post(u|f)]. Notice that the MAP estimate is therefore given by

  min_u ‖Au − f‖₂² + δ² ‖u‖₂² ,

which coincides with the solution of the Tikhonov least-squares problem.

Hence, we will find that our imaging problem may have no unique solution, or may not depend on the data in a stable way, e.g. small errors in the measurements get heavily amplified, producing useless solutions. We are in a situation where we need to understand indirect measurements (from, e.g., our sensor or detector).
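A small numpy sketch (toy dimensions, an assumed noise level δ) verifying that the Gaussian posterior mean equals the Tikhonov solution with α = δ²:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 10))
u_true = rng.standard_normal(10)
delta = 0.1                                    # noise standard deviation
f = A @ u_true + delta * rng.standard_normal(15)

# Gaussian posterior for f = Au + eps, eps ~ N(0, delta^2 I), u ~ N(0, I):
Sigma_post = np.linalg.inv(A.T @ A / delta ** 2 + np.eye(10))
mu_post = Sigma_post @ (A.T @ f) / delta ** 2

# Tikhonov least-squares solution with alpha = delta^2:
u_tik = np.linalg.solve(A.T @ A + delta ** 2 * np.eye(10), A.T @ f)

print(np.allclose(mu_post, u_tik))   # True: the MAP estimate is the Tikhonov solution
```

The Bayesian view adds something Tikhonov alone does not: Σ_post quantifies the remaining uncertainty in each component of the reconstruction.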
Examples of ill-posed inverse problems
Deconvolution

Used to improve the contrast and sharpness of images.

• y lacks the high-frequency content of x, and H is the convolution by a blurring kernel (the point spread function):

  y = H ∗ x + ε .        (31)

The reconstruction applies a deconvolution filter to the measured y.
Denoising

• y is a noisy version of x, and H is, e.g., a blurring kernel (for pure denoising, H is the identity):

  y = H ∗ x + ε .

[Figure: noisy image → reconstructed image]
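A minimal numpy sketch (toy image and noise level) of the simplest possible denoiser, a 3×3 moving average; real pipelines would use, e.g., variational or wavelet-based methods instead.

```python
import numpy as np

rng = np.random.default_rng(5)
k = np.arange(16)
# Smooth, periodic test image so that np.roll boundaries are seamless.
x = np.sin(2 * np.pi * k / 16)[:, None] * np.cos(2 * np.pi * k / 16)[None, :]
y = x + 0.2 * rng.standard_normal(x.shape)     # noisy measurement

# Crude denoiser: 3x3 moving average (periodic boundary via np.roll).
den = sum(np.roll(np.roll(y, i, axis=0), j, axis=1)
          for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

print(np.linalg.norm(y - x), np.linalg.norm(den - x))  # the average is closer to x
```

Averaging 9 independent noisy samples cuts the noise standard deviation by a factor of 3, at the cost of a small smoothing bias: the classic bias-variance trade-off behind every regularised reconstruction.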
Inpainting

• H is a pixel-wise multiplication by a binary mask of 1's (pixels kept) or 0's (removed pixels):

  y = H ⊙ x + ε .

• Not enough information to reconstruct the image exactly.

Video starring: Prof. Carola-Bibiane Schönlieb & Prof. Samuli Siltanen

https://scikit-image.org
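A naive diffusion-based inpainting sketch in plain numpy (the smooth test image and mask density are invented; scikit-image provides proper inpainting routines): missing pixels are filled by repeatedly averaging their neighbours while known pixels stay fixed.

```python
import numpy as np

rng = np.random.default_rng(6)
k = np.arange(32)
# Smooth, periodic test image (so the prior "neighbouring pixels are
# similar" is a good model for the missing data).
x = np.sin(2 * np.pi * k / 32)[:, None] * np.cos(2 * np.pi * k / 32)[None, :]

mask = rng.random(x.shape) > 0.25      # True = pixel kept (the binary "H")
y = np.where(mask, x, 0.0)             # measurement: pixel-wise masked image

# Naive diffusion inpainting: repeatedly replace each missing pixel by the
# average of its 4 neighbours (periodic), keeping known pixels fixed.
u = y.copy()
for _ in range(500):
    nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
          np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
    u = np.where(mask, y, nb)

print(np.abs(u - x)[~mask].max())      # error on the filled-in pixels
```

This works only because the prior (smoothness) supplies the information the data lacks; with textured images, diffusion alone fails and richer priors are needed.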
