
SMSTC (2020/21)

INVERSE PROBLEMS
Lecture 4: SVD. Revision of probability and statistics
Anya Kirpichnikova, University of Stirling

www.smstc.ac.uk

Contents
4.1 Solution of the mixed-determined problem
    4.1.1 Natural solution
    4.1.2 SVD: what we want
    4.1.3 SVD: results
    4.1.4 Is that what we wanted?
    4.1.5 Natural generalised inverse
    4.1.6 SVD: Derivation
    4.1.7 SVD: Examples
4.2 Probability Revision
    4.2.1 Random variables, p.d.f., c.d.f., expected value
    4.2.2 Other ways to describe R.V.
    4.2.3 Correlated data
4.3 Functions of R.V.: how are p(d) and p(m) related?
    4.3.1 Example 1D
    4.3.2 Example 1D nonlinear
    4.3.3 General case
    4.3.4 General linear case: important conversion formulae
    4.3.5 Example 2D
    4.3.6 Example: how useful Eq. 4.2 can be
    4.3.7 Example of uncorrelated data with uniform variance
4.4 Gaussian p.d.f.
    4.4.1 Univariate Gaussian
    4.4.2 Multivariate Gaussian
    4.4.3 K(m) = d case

4.1 Solution of the mixed-determined problem


4.1.1 Natural solution
In this case some linear combinations of the model parameters can be determined; we assume they belong to the subspace S_p(m), i.e. m_p ∈ S_p(m). The other linear combinations of the model parameters cannot be determined (no information is provided, they are "unilluminated" by Km = d); they belong to the subspace S_0(m), i.e. m_0 ∈ S_0(m), the null space of K.
Km might not be able to span S(d) (if the problem is overdetermined): no matter what m we choose, at best Km spans some subspace S_p(d) ⊂ S(d); let those data be d_p ∈ S_p(d). Let S_0(d) be the remaining part of S(d), i.e. the part that is not S_p(d). No part of S_0(d) can be satisfied by any m, and we write d_0 ∈ S_0(d). Then

K(m_p + m_0) = d_p + d_0

and
L = m^T m = [m_p + m_0]^T [m_p + m_0] = m_p^T m_p + m_p^T m_0 + m_0^T m_p + m_0^T m_0 = m_p^T m_p + m_0^T m_0

since m_0 is orthogonal to m_p. On the right-hand side above, m_p^T m_p is determined by d, and m_0^T m_0 is determined by a priori information. The total overall error is then

E = [d_p + d_0 − K(m_p + m_0)]^T [d_p + d_0 − K(m_p + m_0)] = [d_p + d_0 − K m_p]^T [d_p + d_0 − K m_p]

since K m_0 = 0. Simplifying further, with d_0 orthogonal to d_p, we have

E = [d_p − K m_p]^T [d_p − K m_p] + d_0^T d_0

where [d_p − K m_p] is determined by m_p, while d_0 cannot be affected by any choice of m.


Solve [d_p − K m_p] = 0 to obtain the natural solution (setting m_0 = 0).

4.1.2 SVD: what we want


Thus we are looking for a decomposition that separates S_p(d), S_0(d), S_p(m), S_0(m) from each other:

• S_p(m) contains m_p, i.e. the linear combinations of the m_k that can be determined from Km = d
• S_0(m) contains m_0, the linear combinations of the m_k that are unilluminated by Km = d; note Km_0 = 0
• S(m) = S_p(m) ∪ S_0(m)
• S_p(d) is the subspace of S(d) that is spanned by Km
• S_0(d) is the subspace of S(d) that is not spanned by Km, with d_0 ∈ S_0(d)

4.1.3 SVD: results


For the derivation, see [4], [3]; consider the N × M matrix

K = U Λ V^T    (4.1)

• U is a square N × N matrix of eigenvectors that span S(d):

U = [u^(1), u^(2), ..., u^(N)]

where the u^(i) are orthogonal and can be chosen orthonormal:

U U^T = U^T U = I_N.

• V is a square M × M matrix of eigenvectors that span S(m):

V = [v^(1), v^(2), ..., v^(M)]

where the v^(i) are orthogonal and can be chosen orthonormal:

V V^T = V^T V = I_M.

• Λ is an N × M diagonal eigenvalue matrix whose diagonal elements are called singular values; they are usually ordered in Λ by decreasing size, so we can partition Λ into a p × p diagonal sub-matrix Λ_p of the p non-zero singular values and several zero matrices:

Λ = [ Λ_p  0 ]
    [  0   0 ]

Now back to Eq. (4.1):

K = U Λ V^T = U_p Λ_p V_p^T

• K = U_p Λ_p V_p^T is the split into p- and 0-parts that we wanted.

• U_p is the first p columns of U; the remaining N − p columns of U (denoted U_0) are cancelled by the zeros in Λ and are not controlled by information in the data kernel K.
• V_p is the first p columns of V; the remaining M − p columns of V (denoted V_0) are cancelled by the zeros in Λ and are not controlled by information in the data kernel K.

Remark: the p-matrices are orthonormal in the sense that

V_p^T V_p = I_p;  U_p^T U_p = I_p;  however, in general  V_p V_p^T ≠ I_M,  U_p U_p^T ≠ I_N.

4.1.4 Is that what we wanted?


Back to our equation d = Km = U_p Λ_p V_p^T m. The vectors in V_p are perpendicular to the vectors in V_0; V_p lies completely in S_p(m) and V_0 lies completely in S_0(m).
The spaces S(m) and S(d) are spanned by V and U respectively. The p-spaces are spanned by the p-parts of the eigenvector matrices, those with non-zero eigenvalues: S_p(m) is spanned by V_p and S_p(d) is spanned by U_p, whereas V_0, U_0 span the null spaces S_0(m), S_0(d).

• The natural solution to the inverse problem has no component in S_0(m), and its prediction error e has no component in S_p(d).

• m^est = V_p Λ_p^{-1} U_p^T d has no component in S_0(m) (show this as an exercise).

• e = d − K m^est has no component in S_p(d) (prove as an exercise). Since V_p^T V_p = U_p^T U_p = Λ_p Λ_p^{-1} = I_p, m^est is the natural solution.

4.1.5 Natural generalised inverse


K^{-g} = V_p Λ_p^{-1} U_p^T

where the right-hand side is usually not computed explicitly.


The natural generalised inverse has the following model resolution:

R = K^{-g} K = {V_p Λ_p^{-1} U_p^T}{U_p Λ_p V_p^T} = V_p V_p^T

which means that the model parameters are perfectly resolved only if V_p spans the complete space S(m), i.e. p = M.
The natural generalised inverse has the following data resolution:

N = K K^{-g} = {U_p Λ_p V_p^T}{V_p Λ_p^{-1} U_p^T} = U_p U_p^T

which means the data are perfectly resolved only if U_p spans S(d), i.e. p = N.
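A minimal Python/numpy sketch of this construction (the helper name natural_inverse, the cut-off tol and the test matrix are illustrative choices, not part of the notes): the SVD is truncated to its p non-zero singular values, and the resolution matrices V_p V_p^T and U_p U_p^T are built from the truncated factors.

import numpy as np

def natural_inverse(K, tol=1e-10):
    """Natural generalised inverse K^{-g} = V_p Lambda_p^{-1} U_p^T via truncated SVD."""
    U, s, Vt = np.linalg.svd(K, full_matrices=True)   # K = U diag(s) V^T
    p = int(np.sum(s > tol * s.max()))                # number of non-zero singular values
    Up, Vp, sp = U[:, :p], Vt[:p, :].T, s[:p]
    K_g = Vp @ np.diag(1.0 / sp) @ Up.T
    R = Vp @ Vp.T        # model resolution matrix, R = K^{-g} K
    N = Up @ Up.T        # data resolution matrix,  N = K K^{-g}
    return K_g, R, N, p

K = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])          # a mixed-determined toy example
K_g, R, N, p = natural_inverse(K)
print(p)                                                   # 1 non-zero singular value
print(np.allclose(R, K_g @ K), np.allclose(N, K @ K_g))    # True True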

4.1.6 SVD: Derivation


We follow [3]

Step 1

Form the (N + M) × (N + M) square matrix S from K and K^T:

S = [ 0    K ]
    [ K^T  0 ]

Matrix S has N + M real eigenvalues λ_i and a complete set of eigenvectors w^(i):

S w^(i) = λ_i w^(i)

Step 2

Partition w into a part u (of length N) and a part v (of length M):

S w^(i) = λ_i w^(i)  →  [ 0    K ] [ u^(i) ]  =  λ_i [ u^(i) ]
                        [ K^T  0 ] [ v^(i) ]        [ v^(i) ]

therefore

K v^(i) = λ_i u^(i)   and   K^T u^(i) = λ_i v^(i)

Step 3

Assume there is a positive eigenvalue λ_i > 0 with eigenvector [u^(i), v^(i)]^T; then −λ_i < 0 is also an eigenvalue, with eigenvector [−u^(i), v^(i)]^T.
If there exist p positive eigenvalues, then there are N + M − 2p zero eigenvalues.
Then

K v^(i) = λ_i u^(i)  ⇒  K^T K v^(i) = K^T [λ_i u^(i)] = λ_i [K^T u^(i)] = λ_i [λ_i v^(i)] = λ_i^2 v^(i)

so v^(i) is an eigenvector of K^T K with eigenvalue λ_i^2;

K^T u^(i) = λ_i v^(i)  ⇒  K K^T u^(i) = K [λ_i v^(i)] = λ_i [K v^(i)] = λ_i [λ_i u^(i)] = λ_i^2 u^(i)

so u^(i) is an eigenvector of K K^T with eigenvalue λ_i^2.


A symmetric matrix can have no more distinct eigenvectors than its dimension, so p ≤ min(N, M). The matrices K^T K and K K^T are square and symmetric, and therefore there exist M vectors v^(i) that form a complete orthogonal set V spanning S(m), and N vectors u^(i) that form a complete orthogonal set U spanning S(d). These include the p eigenvectors of S with non-zero eigenvalues, with the remaining ones chosen from the eigenvectors with zero eigenvalues.

Step 4

K v^(i) = λ_i u^(i)  →  K V = U Λ

Step 5

K = U Λ V^T
is the required SVD of K.
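The following numpy sketch (illustrative numbers only, not part of the derivation) checks Steps 1–3 numerically: the eigenvalues of the symmetric matrix S = [0 K; K^T 0] are ± the singular values of K, padded with N + M − 2p zeros.

import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 2
K = rng.standard_normal((N, M))

S = np.block([[np.zeros((N, N)), K],
              [K.T, np.zeros((M, M))]])
eig_S = np.sort(np.linalg.eigvalsh(S))        # S is symmetric, so eigvalsh applies
sing_K = np.linalg.svd(K, compute_uv=False)   # singular values of K

print(np.round(eig_S, 4))    # -s1, -s2, 0, ..., 0, s2, s1
print(np.round(sing_K, 4))   # s1, s2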

4.1.7 SVD: Examples


 
Consider

K = [ 2  4 ]
    [ 1  3 ]
    [ 0  0 ]
    [ 0  0 ]

which is an N × M = 4 × 2 matrix.

Step 1: Generate U

The eigenvectors of K K^T form the columns of U:

K K^T = [ 20  14  0  0 ]
        [ 14  10  0  0 ]
        [  0   0  0  0 ]
        [  0   0  0  0 ]

so det(K K^T − λ I_4) = 0, i.e. λ^2 (λ^2 − 30λ + 200 − 14^2) = 0.

Solving the characteristic equation, we see that the eigenvalues are

λ_1 = 15 + √221 ≈ 29.866,  λ_2 = 15 − √221 ≈ 0.13393,  λ_3 = 0,  λ_4 = 0.

Then the eigenvectors satisfy the following equations together with normalisation:

For λ_1:  x^2 + y^2 = 1,  z = w = 0,  y = (1/14)(−5 + √221) x  ⇒  x = 0.817416, y = 0.576048, z = 0, w = 0

For λ_2:  x^2 + y^2 = 1,  z = w = 0,  y = (1/14)(−5 − √221) x  ⇒  x = −0.576048, y = 0.817416, z = 0, w = 0

and hence

U = [ 0.8174  −0.5760  0  0 ]
    [ 0.5760   0.8174  0  0 ]
    [ 0        0       1  0 ]
    [ 0        0       0  1 ]

Step 2: Generate V

Similarly to Step 1:

K^T K = [  5  11 ]
        [ 11  25 ]

⇒ λ_1 = 29.8661,  λ_2 = 0.1339

and the matrix made of the corresponding eigenvectors is

V = [ 0.4046  −0.9145 ]
    [ 0.9145   0.4046 ]

Step 3: Generate Λ

Matrix Λ contains the singular values of K, i.e. the square roots of the non-zero eigenvalues of K^T K (equivalently of K K^T):

s_1 = √λ_1 ≈ √29.8661 ≈ 5.4650,  s_2 = √λ_2 ≈ √0.1339 ≈ 0.3659

Thus, the last matrix in the decomposition is

Λ = [ 5.4650  0      ]
    [ 0       0.3659 ]
    [ 0       0      ]
    [ 0       0      ]
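The worked example is easy to verify numerically; here is a sketch in Python/numpy (rather than Matlab, which the exercises suggest), bearing in mind that the columns of U and V are only determined up to sign.

import numpy as np

K = np.array([[2.0, 4.0],
              [1.0, 3.0],
              [0.0, 0.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(K, full_matrices=True)
print(np.round(s, 4))          # [5.465  0.3659], i.e. sqrt(15 +/- sqrt(221))
print(np.round(U[:2, :2], 4))  # 2x2 block of U above, up to column signs
print(np.round(Vt.T, 4))       # V above, up to column signs
print(np.allclose(K, U[:, :2] @ np.diag(s) @ Vt))   # K = U_p Lambda_p V_p^T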

4.2 Probability Revision


4.2.1 Random variables, p.d.f., c.d.f., expected value
We know that d usually contains noise, so we can consider d as a random variable (R.V.); each measurement is then a realisation, and realisations may differ. R.V.s have systematics, i.e. a tendency to take on some values more often than others, and this is described by a probability density function (p.d.f.). The p.d.f. assigns the probability that a particular realisation of the R.V. has a value in the neighbourhood of d (in the continuous case the probability of getting an exact value is zero, since a point has measure zero). The probability that the measurement lies between d and d + dd is p(d) dd, and hence

∫_{−∞}^{∞} p(d) dd = 1 = ∫_{d_min}^{d_max} p(d) dd

as we assume the measurement takes its value from (−∞, ∞) or [d_min, d_max].
The probability of the R.V. d taking a value in the interval (d_1, d_2) is then

P(d_1, d_2) = ∫_{d_1}^{d_2} p(d) dd

The cumulative distribution function (C.D.F.) P(d) is defined as

P(d) = ∫_{−∞}^{d} p(d) dd

Note: P(d) is a probability, while p(d) is not.

4.2.2 Other ways to describe R.V.


The p.d.f. p(d) describes the R.V. d well, but it is usually complicated, so we consider simpler ways to describe d:

• maximum likelihood point d_ML

• mean/expected value ⟨d⟩ = µ(d) = E(d) (the "balancing point" of the distribution):

⟨d⟩ = µ(d) = E(d) = ∫_{−∞}^{∞} d p(d) dd

• width of a distribution (wide distribution means noisy data; narrow means relatively noise-free data)

To measure the width, we multiply the distribution by a function which is zero exactly at the peak (mean) of the distribution and grows quickly away from it, say a parabola (d − ⟨d⟩)^2 = (d − µ(d))^2, and then introduce the variance σ^2:

σ^2 = ∫_{−∞}^{∞} (d − ⟨d⟩)^2 p(d) dd

We can also estimate the mean and variance from N realisations d_i of the data, obtaining the sample mean and sample variance

⟨d⟩^est = (1/N) Σ_{i=1}^{N} d_i,   (σ^2)^est = (1/(N − 1)) Σ_{i=1}^{N} (d_i − ⟨d⟩^est)^2
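A small numpy sketch of these estimators (the data values are hypothetical):

import numpy as np

rng = np.random.default_rng(1)
d = 5.0 + 0.3 * rng.standard_normal(1000)          # noisy realisations of one datum

d_mean = d.sum() / d.size                          # <d>^est = (1/N) sum_i d_i
d_var = ((d - d_mean) ** 2).sum() / (d.size - 1)   # (sigma^2)^est with 1/(N-1)

print(round(d_mean, 3), round(d_var, 4))
print(np.isclose(d_var, d.var(ddof=1)))            # agrees with numpy's unbiased estimator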

4.2.3 Correlated data


We say that R.V.s are independent if there are no patterns in the occurrence of values between pairs of R.V.s. The function p(d) is the joint p.d.f. that the first datum is in the neighbourhood of d_1, the second datum is in the neighbourhood of d_2, etc.; for independent R.V.s the joint distribution is the product

p(d) = p(d_1) p(d_2) ... p(d_N)

and the univariate distribution of d_i is obtained by integrating over the other N − 1 variables d_j, j ≠ i:

p(d_i) = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} p(d) dd_j dd_k ... dd_s,   j, k, ..., s ≠ i

Consider p(d_1, d_2) and test it for correlation; the covariance is calculated as

cov(d_1, d_2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [d_1 − ⟨d_1⟩][d_2 − ⟨d_2⟩] p(d_1, d_2) dd_1 dd_2

where the brackets [d_1 − ⟨d_1⟩] and [d_2 − ⟨d_2⟩] check how much of the data lie on the same/opposite sides of their means.
For a data vector d, the individual mean of d_i is calculated by taking N integrals

⟨d_i⟩ = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} d_i p(d) dd_1 dd_2 ... dd_N = ∫_{−∞}^{∞} d_i p(d_i) dd_i

and the matrix of covariances is

[cov d]_{ij} = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} [d_i − ⟨d_i⟩][d_j − ⟨d_j⟩] p(d) dd_1 ... dd_N = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [d_i − ⟨d_i⟩][d_j − ⟨d_j⟩] p(d_i, d_j) dd_i dd_j
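In practice [cov d] is usually estimated from repeated realisations of the data vector rather than from these integrals; a minimal numpy sketch, with an assumed true covariance purely for illustration:

import numpy as np

rng = np.random.default_rng(2)
true_cov = np.array([[1.0, 0.6],
                     [0.6, 2.0]])
D = rng.multivariate_normal([0.0, 0.0], true_cov, size=20000)   # rows are realisations of d

cov_est = np.cov(D, rowvar=False)    # sample estimate of [cov d]_{ij}
print(np.round(cov_est, 2))          # close to true_cov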

4.3 Functions of R.V.: how are p(d) and p(m) related?


In inverse problems we assume m is related to d; we can regard the estimates of m as R.V.s and hence consider p(m^est) (m^true need not be an R.V.: it may be a fixed, non-random quantity). Which tool converts p(d) into p(m) when m(d) is known?

4.3.1 Example 1D
We consider one datum d and one model m, and they are related

m(d) = 2d.

Suppose p(d) is uniform on (0, 1); the p.d.f. is constant and the area under the curve must total one, so p(d) = 1. The p.d.f. of m is also uniform, but on the interval (0, 2), so since the total area must again be one, p(m) = 1/2. Thus p(m) is not merely p(d(m)); there is also a stretching (shrinking) of the m-axis with respect to the d-axis:

1 = ∫_{d_min}^{d_max} p(d) dd = ∫_{m_min}^{m_max} p[d(m)] |dd/dm| dm = ∫_{m_min}^{m_max} p(m) dm

Here m = 2d ⇒ d = m/2, and the stretching factor is |dd/dm|; the absolute value accommodates the case m_min > m_max, where reversing the limits would change the direction of integration. In our example |dd/dm| = 1/2.

4.3.2 Example 1D nonlinear


Assume p(d) is uniform on (0, 1), so p(d) = 1. Assume m(d) = d^2; then d = √m, dd/dm = 1/(2√m), and

p(m) = 1/(2√m)

Here m is defined on (0, 1), and p(m) has a peak (an integrable singularity) at m = 0.
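A quick Monte Carlo sketch of this example (bin count and sample size are arbitrary choices): drawing d uniformly on (0, 1), squaring it, and histogramming m reproduces p(m) = 1/(2√m) away from the singular bin at m = 0.

import numpy as np

rng = np.random.default_rng(3)
d = rng.uniform(0.0, 1.0, 200000)
m = d ** 2

hist, edges = np.histogram(m, bins=20, range=(0.0, 1.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
predicted = 1.0 / (2.0 * np.sqrt(centres))

print(np.round(hist[1:5], 2))        # empirical density (first bin skipped: singularity at 0)
print(np.round(predicted[1:5], 2))   # 1 / (2 sqrt(m)) at the bin centres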

4.3.3 General case


In general, going from p(d) to p(m) given d(m) means transforming multidimensional integrals.
The volume element transforms in the following way:

dd_1 ... dd_N = J(m) dm_1 ... dm_N,   J(m) = |det(∂d/∂m)|

where J(m) is the Jacobian determinant, i.e. the absolute value of the determinant of the matrix [∂d/∂m]_{ij} = ∂d_i/∂m_j. Then

1 = ∫ ··· ∫ p(d) dd_1 ... dd_N = ∫ ··· ∫ p[d(m)] |det(∂d/∂m)| dm_1 ... dm_N = ∫ ··· ∫ p[d(m)] J(m) dm_1 ... dm_N = ∫ ··· ∫ p(m) dm_1 ... dm_N

so the p.d.f. transforms as

p(m) = p[d(m)] |det(∂d/∂m)| = p[d(m)] J(m)

4.3.4 General linear case: important conversion formulae


If m(d) is linear, so that

m = Md,   J(m) = const = J = |det(M^{-1})| = |det M|^{-1},

then one can prove that, for

m = Md + v,   M = K^{-g},

• the mean transforms as

⟨m⟩ = M⟨d⟩ + v

Proof.

⟨m⟩_i = ∫ m_i p(m) dm_1 ... dm_N = ∫ Σ_j M_ij d_j p[d(m)] |det(∂d/∂m)| |det(∂m/∂d)| dd_1 ... dd_N = ∫ Σ_j M_ij d_j p(d) dd_1 ... dd_N = Σ_j M_ij ⟨d_j⟩

(the constant shift v passes straight through, contributing v_i to ⟨m⟩_i, since ∫ p(m) dm_1 ... dm_N = 1).

• and, even for a non-square M, the covariance transforms as

[cov m] = M [cov d] M^T    (4.2)

The covariance of the data is a measure of the amount of measurement error, so this formula is a rule for error propagation: given [cov d], representing measurement error, it gives a way to compute [cov m], which represents the error in the model parameters.

Proof. Prove as an exercise.
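A simulation sketch of the error-propagation rule (4.2), with a hypothetical non-square M and covariance [cov d]; the sample covariance of m = Md + v should approach M [cov d] M^T.

import numpy as np

rng = np.random.default_rng(4)
cov_d = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
M = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 0.0]])
v = np.array([0.1, -0.2, 0.3])

D = rng.multivariate_normal([0.0, 0.0], cov_d, size=100000)   # realisations of d
m_samples = D @ M.T + v                                       # corresponding m = M d + v

print(np.round(np.cov(m_samples, rowvar=False), 2))   # approx M [cov d] M^T
print(np.round(M @ cov_d @ M.T, 2))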

4.3.5 Example 2D
Consider two p.d.f.s for the R.V.s d_1 and d_2, both uniform on (0, 1), and consider the R.V.s

m_1 = d_1 + d_2,   m_2 = d_1 − d_2

For the area under p(d) to be one, p(d) = 1 (the support is a square with side one), and

M = [ 1   1 ]
    [ 1  −1 ],   |det M| = 2,   J = 1/2   ⇒   p(m) = p(d) J = 1/2.

4.3.6 Example: how useful Eq 4.2 can be


Consider the sample mean m_1 = (1/N) Σ_{i=1}^{N} d_i = (1/N)[1, 1, ..., 1] d; here M = (1/N)[1, 1, ..., 1] and v = 0. Assume the data are uncorrelated with mean ⟨d⟩ and variance σ_d^2. Then, regarding the mean of the model parameter m_1,

⟨m_1⟩ = M⟨d⟩ + v = ⟨d⟩

so p(m_1) has the same mean as d. Regarding the variance,

var(m_1) = M [cov d] M^T = σ_d^2 / N,   and σ_d^2 / N < σ_d^2  ⇒  var(m_1) < var(d)

Rewriting this in terms of the width of p(m_1) (the square root of the variance),

√var(m_1) = σ_d / √N ∝ 1/√N

which means that the accuracy of determining the mean increases as the number of observations increases, though slowly because of the square root.
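A short numpy check of the σ_d^2 / N behaviour (sample sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(5)
sigma_d, N_data, trials = 2.0, 25, 50000

d = sigma_d * rng.standard_normal((trials, N_data))   # uncorrelated data, variance sigma_d^2
m1 = d.mean(axis=1)                                   # sample mean of each trial

print(round(float(m1.var()), 3))     # approx sigma_d^2 / N = 4 / 25 = 0.16
print(sigma_d ** 2 / N_data)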

4.3.7 Example of uncorrelated data with uniform variance


Consider uncorrelated data with uniform variance,

[cov d] = σ_d^2 I,

then

[cov m] = M [cov d] M^T = σ_d^2 M M^T

and the model parameters are correlated, since M M^T is not diagonal in the general case.

4.4 Gaussian p.d.f


4.4.1 Univariate Gaussian
For the univariate case, the Gaussian (normal) distribution with mean ⟨d⟩ and variance σ^2 is

p(d) = (1 / (√(2π) σ)) exp( −(d − ⟨d⟩)^2 / (2σ^2) )

Importance: this is the limiting p.d.f. for sums of R.V.s (central limit theorem), i.e. as long as the noise in the data comes from several sources of comparable size, it will tend to follow a Gaussian p.d.f.

4.4.2 Multivariate Gaussian


The joint p.d.f. of two independent Gaussian variables is the product of the two univariate p.d.f.s. When the data are correlated, say with mean µ(d) and covariance [cov d], the joint p.d.f. should show this degree of correlation:

p(d) = (1 / ((2π)^{N/2} (det[cov d])^{1/2})) exp( −(1/2) [d − µ(d)]^T [cov d]^{-1} [d − µ(d)] )    (4.3)

When m = Md and p(d) is given by Eq. (4.3), then p(m) is also Gaussian, with mean µ(m) = Mµ(d) and covariance [cov m] = M [cov d] M^T: linear functions of Gaussian R.V.s are again Gaussian.
Given a Gaussian p_A(d) with mean µ(d_A) and covariance [cov d_A], and a Gaussian p_B(d) with mean µ(d_B) and covariance [cov d_B], the product

p_C(d) = p_A(d) p_B(d)

is Gaussian with mean

µ(d_C) = ([cov d_A]^{-1} + [cov d_B]^{-1})^{-1} ([cov d_A]^{-1} µ(d_A) + [cov d_B]^{-1} µ(d_B))

and covariance given by

[cov d_C]^{-1} = [cov d_A]^{-1} + [cov d_B]^{-1}
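These two formulae are straightforward to evaluate numerically; a sketch with illustrative two-dimensional numbers:

import numpy as np

mu_A, cov_A = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 1.0]])
mu_B, cov_B = np.array([1.0, 2.0]), np.array([[2.0, 0.0], [0.0, 0.5]])

prec_A, prec_B = np.linalg.inv(cov_A), np.linalg.inv(cov_B)
cov_C = np.linalg.inv(prec_A + prec_B)            # [cov d_C]^{-1} = [cov d_A]^{-1} + [cov d_B]^{-1}
mu_C = cov_C @ (prec_A @ mu_A + prec_B @ mu_B)    # mean of the product p_A p_B

print(np.round(mu_C, 3))
print(np.round(cov_C, 3))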

4.4.3 K(m) = d case


To interpret K(m) = d in a probabilistic sense we need to assume

K(m) = µ(d)

then

p(d) = (1 / ((2π)^{N/2} (det[cov d])^{1/2})) exp( −(1/2) [d − K(m)]^T [cov d]^{-1} [d − K(m)] )

Here K(m) must not itself be a function of random variables; m is the set of unknown quantities that defines the shape of the data. If auxiliary variables in K are random, they should be treated as part of the data.
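A sketch of evaluating this p.d.f. for a given linear K, model m and data covariance (all numbers hypothetical; the function name gaussian_pdf is an illustrative choice):

import numpy as np

def gaussian_pdf(d, K, m, cov_d):
    """p(d) for K m = mu(d) with Gaussian noise of covariance cov_d."""
    r = d - K @ m                              # misfit d - K(m)
    N = d.size
    quad = r @ np.linalg.solve(cov_d, r)       # r^T [cov d]^{-1} r
    norm = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(cov_d))
    return np.exp(-0.5 * quad) / norm

K = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
m = np.array([0.5, 1.0])
cov_d = 0.04 * np.eye(3)
d_obs = K @ m + np.array([0.01, -0.02, 0.03])   # data = prediction + small noise

print(gaussian_pdf(d_obs, K, m, cov_d))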

Exercises
4–1. Show that m^est = V_p Λ_p^{-1} U_p^T d has no component in S_0(m).

4–2. Show that e = d − K m^est has no component in S_p(d).


 
4–3. Find the SVD of

K = [ 1  4 ]
    [ 2  5 ]
    [ 3  6 ]

find the number of non-zero singular values, and find the generalised inverse K^{-g}; check that K^{-g} K = I. Use Matlab to check your calculations.

4–4. Prove that [cov m] = M [cov d] M^T when m = Md.



References
[1] W. Menke, Geophysical Data Analysis: Discrete Inverse Theory, 3rd edition, Elsevier, 2012.
[2] P. C. Hansen, Discrete Inverse Problems: Insights and Algorithms, SIAM, 2010.
[3] C. Lanczos, Linear Differential Operators, Van Nostrand-Reinhold, New Jersey, 1962.
[4] R. Penrose, A generalised inverse for matrices, Proc. Cambridge Phil. Soc., 51 (1955), pp. 406–413.
[5] R. Burden and D. Faires, Numerical Analysis, 8th edition, Thomson Brooks/Cole, 2004. ISBN 9780534392000.
[6] C. Wunsch and J. F. Minster, Methods for box models and ocean circulation tracers: mathematical programming and non-linear inverse theory, J. Geophys. Res., 87 (1982), pp. 5647–5662.
[7] R. A. Wiggins, The general linear inverse problem: Implication of surface waves and free oscillations for Earth structure, Rev. Geophys. Space Phys., 10 (1972), pp. 251–285.
