A Student's Guide to Numerical Methods


Chapter 1
Fitting Functions to Data
1.1 Exact fitting
1.1.1 Introduction

Suppose we have a set of real-number data pairs (𝑥𝑖, 𝑦𝑖), 𝑖 = 1, 2, …, 𝑁. These can be considered to be a set of
points in the xy-plane. They can also be thought of as a set of values 𝑦 of a function of 𝑥; see Fig. 1.1.

Figure 1.1: Example of data to be fitted with a curve.


A frequent challenge is to find some kind of function that represents a "best fit" to the data in some sense. If the
data were fitted perfectly, then clearly the function 𝑓 would have the property
f(x_i) = y_i ,  for all i = 1, …, N .   (1.1)
When the number of pairs is small and they are reasonably spaced out in 𝑥, it may be sensible to do an
exact fit that satisfies this equation.

1.1.2 Representing an exact fitting function linearly

We have an infinite choice of possible fitting functions. Such functions must have a number of adjustable
parameters that can be set so as to make the function fit the data. One example is a polynomial:
f(x) = c_1 + c_2 x + c_3 x^2 + … + c_N x^{N-1}   (1.2)
Here the 𝑐𝑖 are the coefficients that must be adjusted to make the function fit the data. A polynomial whose
coefficients are the adjustable parameters has the very useful property that it depends linearly upon the
coefficients.
Fitting eq. (1.1) with the form of eq. (1.2) requires that 𝑁 simultaneous equations be satisfied. Those
equations can be written as an 𝑁 × 𝑁 matrix equation as follows:


\begin{pmatrix}
1 & x_1 & x_1^2 & \dots & x_1^{N-1} \\
1 & x_2 & x_2^2 & \dots & x_2^{N-1} \\
\vdots & & & & \vdots \\
1 & x_N & x_N^2 & \dots & x_N^{N-1}
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}
\qquad (1.3)

Here we notice that in order for this to be a square matrix system we need the number of coefficients to be equal
to the number of data pairs, 𝑁.
We also see that we could have used any set of 𝑁 functions 𝑓𝑖 as fitting functions, and written the representation:
f(x) = c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + … + c_N f_N(x)   (1.4)
and then we would have obtained the matrix equation
\begin{pmatrix}
f_1(x_1) & f_2(x_1) & f_3(x_1) & \dots & f_N(x_1) \\
f_1(x_2) & f_2(x_2) & f_3(x_2) & \dots & f_N(x_2) \\
\vdots & & & & \vdots \\
f_1(x_N) & f_2(x_N) & f_3(x_N) & \dots & f_N(x_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_N \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}
\qquad (1.5)

This is the most general form of representation of a fitting function that varies linearly with the unknown
coefficients. The matrix we will call 𝑆; it has elements 𝑆ij = 𝑓𝑗(𝑥𝑖).

1.1.3 Solving for the coefficients

When we have a matrix equation of the form 𝑆𝑐 = 𝑦, where 𝑆 is a square matrix, then provided that the matrix is
non-singular, that is, provided its determinant is non-zero, |𝑆| ≠ 0, it possesses an inverse 𝑆⁻¹. Multiplying on
the left by this inverse we get:
S^{-1} S c = c = S^{-1} y .   (1.6)
In other words, we can solve for 𝑐, the unknown coefficients, by inverting the function matrix and multiplying
the values to be fitted, 𝑦, by that inverse.
Once we have the values of 𝑐 we can evaluate the function 𝑓(𝑥) (eq. 1.2) at any 𝑥-value we like.

Figure 1.2: Result of the polynomial fit.

Fig. 1.2 shows the result of fitting a 5th-order polynomial (with 6 terms including the constant) to the six points
of our data. The line goes exactly through every point. But there is a significant problem: the line is
unconvincingly curvy near its ends. It is not a terribly good fit.
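To make the procedure concrete, here is a minimal Octave/Matlab sketch (an illustration added here, not part of the original text; the column vectors x and y holding the data are assumptions) that builds the matrix of eq. (1.3) and solves eq. (1.6) for the coefficients:

% Suppose x and y are N x 1 column vectors holding the data pairs.
N = length(x);
S = zeros(N, N);          % Function matrix of eq. (1.3).
for k = 1:N
  S(:, k) = x.^(k-1);     % Column k holds x_i^(k-1).
end
c = S \ y;                % Solve S*c = y for the polynomial coefficients.
% The fitted polynomial can then be evaluated at any point xf by
% yf = sum(c' .* xf.^(0:N-1));

The built-in polyfit(x, y, N-1) computes essentially the same coefficients, returned in the reverse (highest-power-first) order.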

1.2 Approximate Fitting


If we have lots of data which has scatter in it, arising from uncertainties or noise, then we almost certainly do not
want to fit a curve so that it goes exactly through every point. For example see Fig. 1.3.


Figure 1.3: A cloud of points with uncertainties and noise, to be fitted with a function.
What do we do then? Well, it turns out that we can use almost exactly the same approach, except with different
numbers of points (𝑁) and terms (𝑀) in our linear fit. In other words we use a representation
f(x) = c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + … + c_M f_M(x)   (1.7)
in which usually 𝑀 < 𝑁. We know now that we can't fit the data exactly. The set of equations we would have to
satisfy to do so would be
\begin{pmatrix}
f_1(x_1) & f_2(x_1) & f_3(x_1) & \dots & f_M(x_1) \\
f_1(x_2) & f_2(x_2) & f_3(x_2) & \dots & f_M(x_2) \\
\vdots & & & & \vdots \\
f_1(x_N) & f_2(x_N) & f_3(x_N) & \dots & f_M(x_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_M \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}
\qquad (1.8)

in which the function matrix 𝑆 is now not square but has dimensions 𝑁 × 𝑀. There are not enough coefficients 𝑐𝑗
to be able to satisfy these equations exactly. They are over-specified. Moreover, a non-square matrix doesn't have
an inverse.
But we are not interested in fitting this data exactly. We want to fit some sort of line through the points that
best fits them.

1.2.1 Linear Least Squares

What do we mean by "best fit"? Especially when fitting a function of the linear form eq. (1.7), we usually mean
that we want to minimize the vertical distance between the points and the line. If we had a fitted function 𝑓(𝑥),
then for each data pair (𝑥𝑖, 𝑦𝑖), the square of the vertical distance between the line and the point is (𝑦𝑖 − 𝑓(𝑥𝑖))².
So the sum, over all the points, of the square distance from the line is
\chi^2 = \sum_{i=1}^{N} (y_i - f(x_i))^2 .   (1.9)
We use the square of the distances in part because they are always positive. We don't want to add positive and
negative distances, because a negative distance is just as bad as a positive one and we don't want them to cancel
out. We generally call 𝜒² the "residual", or more simply the "chi-squared". It is an inverse measure of goodness
of fit: the smaller it is, the better. A linear least squares problem is: find the coefficients of our function 𝑓 that
minimize the residual 𝜒².
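In code (a one-line sketch, assuming the fitting matrix S and a candidate coefficient vector c as defined above), the residual of eq. (1.9) is simply:

chi2 = sum((y - S*c).^2);   % Sum of squared vertical distances, eq. (1.9).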

1.2.2 SVD and the Moore-Penrose Pseudo-inverse


We seem to have gone off in a different direction from our original way to solve for the fitting coefficients by
inverting the square matrix 𝑆. How is that related to the finding of the least-squares solution to the over-specified
set of equations (1.8)?
The answer is a piece of matrix magic! It turns out that there is (contrary to what one is taught in an elementary
matrix course) a way to define the inverse of a non-square matrix or of a singular square matrix. It is called the
(Moore-Penrose) pseudo-inverse. And once found it can be used in essentially the same way as we used the
inverse of the non-singular square matrix in the earlier treatment. That is, we solve for the coefficients using
𝑐 = 𝑆⁻¹𝑦, except that 𝑆⁻¹ is now the pseudo-inverse.
The pseudo-inverse is best understood from a consideration of what is called the Singular Value Decomposition
(SVD) of a matrix. This is the embodiment of a theorem in matrix mathematics that states that any 𝑁 × 𝑀 matrix
can always be expressed as the product of three other matrices with very special properties. For our 𝑁 × 𝑀
matrix 𝑆 this expression is:
S = U D V^T ,   (1.10)
where ^T denotes transpose, and

𝑈 is an 𝑁 × 𝑁 orthonormal matrix
𝑉 is an 𝑀 × 𝑀 orthonormal matrix
𝐷 is an 𝑁 × 𝑀 diagonal matrix

Orthonormal means that the dot product of any column (regarded as a vector) with any other column is zero,
and the dot product of a column with itself is unity. The inverse of an orthonormal matrix is its transpose. So
\underbrace{U^T}_{N \times N}\,\underbrace{U}_{N \times N} = \underbrace{I}_{N \times N}
\quad\text{and}\quad
\underbrace{V^T}_{M \times M}\,\underbrace{V}_{M \times M} = \underbrace{I}_{M \times M}   (1.11)
A diagonal matrix has non-zero elements only on the diagonal. But if it is non-square, as it is if 𝑀 < 𝑁, then it is
padded with extra rows of zeros (or extra columns if 𝑁 < 𝑀).
D = \begin{pmatrix}
d_1 & 0 & 0 & \dots & 0 \\
0 & d_2 & 0 & & \\
0 & & \ddots & & \\
& & & \ddots & 0 \\
0 & 0 & & & d_M \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\qquad (1.12)
A sense of what the SVD is can be gained by thinking in terms of the eigenanalysis of the matrix 𝑆ᵀ𝑆, whose
eigenvalues are 𝑑𝑖².
The pseudo-inverse can be considered to be
S^{-1} = V D^{-1} U^T .   (1.13)
Here 𝐷⁻¹ is an 𝑀 × 𝑁 diagonal matrix whose entries are the inverses of those of 𝐷, i.e. 1/𝑑𝑗:

D^{-1} = \begin{pmatrix}
1/d_1 & 0 & 0 & \dots & 0 & 0 \\
0 & 1/d_2 & 0 & & & 0 \\
0 & & \ddots & & & \vdots \\
& & & \ddots & 0 & 0 \\
0 & \dots & 0 & 0 & 1/d_M & 0
\end{pmatrix} .
\qquad (1.14)

It's clear that eq. (1.13) is in some sense an inverse of 𝑆, because formally
S^{-1} S = (V D^{-1} U^T)(U D V^T) = V D^{-1} D V^T = V V^T = I .   (1.15)
If 𝑀 ≤ 𝑁 and none of the 𝑑𝑗 is zero, then all the operations in this matrix multiplication reduction are valid,
because
\underbrace{D^{-1}}_{M \times N}\,\underbrace{D}_{N \times M} = \underbrace{I}_{M \times M} .   (1.16)
But see the enrichment section for detailed discussion of other cases.
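As an illustrative Octave/Matlab sketch (not part of the original text; it assumes an N x M matrix S with M ≤ N and no zero singular values, and a data vector y), the pseudo-inverse of eq. (1.13) can be assembled directly from the SVD and compared with the built-in pinv:

[U, D, V] = svd(S);        % S = U*D*V', with D of size N x M.
d = diag(D);               % The singular values d_1 ... d_M.
Dinv = zeros(size(S'));    % M x N matrix for the inverted diagonal, eq. (1.14).
for k = 1:length(d)
  Dinv(k, k) = 1/d(k);     % Invert each (assumed non-zero) singular value.
end
Sinv = V * Dinv * U';      % Pseudo-inverse, eq. (1.13).
% Sinv should agree with pinv(S) to rounding error, and c = Sinv*y then
% gives the least-squares coefficients.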
The most important thing for our present purposes is that if 𝑀 ≤ 𝑁 then we can find a solution of the over-
specified (rectangular matrix) fitting problem 𝑆𝑐 = 𝑦 as 𝑐 = 𝑆⁻¹𝑦, using the pseudo-inverse. The set of
coefficients 𝑐 we get corresponds to more than one possible set of 𝑦𝑖 -values, but that does not matter.
Also, one can show that the specific solution that is obtained by this matrix product is in fact the least squares
solution for 𝑐; i.e. the solution that minimizes the residual 𝜒². And if there is any freedom in the choice of 𝑐, such
that the residual is at its minimum for a range of different 𝑐, then the solution which minimizes |𝑐|² is the one
found.
The beauty of this fact is that one can implement a simple code, which calls a function pinv to find the pseudo-
inverse, and it will work just fine if the matrix 𝑆 is singular or even rectangular.
As a matter of computational efficiency, one should note that in Octave the backslash operator is equivalent to
multiplying by the pseudo-inverse (i.e. pinv(S)*y = S\y), but calculated far more efficiently. So backslash is
preferable in computationally costly code, because it is roughly 5 times faster. You probably won't notice the
difference for matrix dimensions smaller than a few hundred.
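For example (a brief sketch assuming the over-specified matrix S and data vector y of eq. (1.8)):

c1 = pinv(S)*y;    % Least-squares coefficients via the pseudo-inverse.
c2 = S\y;          % The same solution via backslash, computed more efficiently.
norm(c1 - c2)      % Should be near zero when S has full column rank.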

Figure 1.4: The cloud of points fitted with linear, quadratic, and cubic polynomials.

1.2.3 Smoothing and Regularization

As we illustrate in Fig. 1.4, by choosing the number of degrees of freedom of the fitting function one can adjust
the smoothness of the fit to the data. However, the choice of basis functions then constrains one in a way that has
been pre-specified. It might not in fact be the best way to smooth the data to fit it by (say) a straight line or a
parabola.
A better way to smooth is by "regularization" in which we add some measure of roughness to the residual we are
seeking to minimize. The roughness (which is the inverse of the smoothness) is a measure of how wiggly the fit
line is. It can in principle be pretty much anything that can be written in the form of a matrix times the fit
coefficients. I'll give an example in a moment. Let's assume the roughness measure is homogeneous, in the sense
that we are trying to make it as near zero as possible. Such a target would be 𝑅𝑐 = 0, where 𝑅 is a matrix of
dimension 𝑁𝑅 × 𝑀, where 𝑁𝑅 is the number of distinct roughness constraints. Presumably we can't satisfy this
equation perfectly because a fully smooth function would have no variation, and be unable to fit the data. But we
want to minimize the square of the roughness (𝑅𝑐)ᵀ𝑅𝑐. We can try to fulfil the requirement to fit the data, and to
minimize the roughness, in a least-squares sense by constructing an expanded compound matrix system
combining the original equations and the regularization, thus:
\begin{pmatrix} S \\ \lambda R \end{pmatrix} c = \begin{pmatrix} y \\ 0 \end{pmatrix} .   (1.17)
If we solve this system in a least-squares sense by using the pseudo-inverse of the compound matrix
\begin{pmatrix} S \\ \lambda R \end{pmatrix}, then we will have found the coefficients that "best" make the
roughness zero as well as fitting the data: in the sense that the total residual
\chi^2 = \sum_{i=1}^{N} (y_i - f(x_i))^2 + \lambda^2 \sum_{k=1}^{N_R} \Big( \sum_j R_{kj} c_j \Big)^2   (1.18)

is minimized. The value of 𝜆 controls the weight of the smoothing. If it is large, then we prefer smoother
solutions. If it is small or zero we do negligible smoothing.
As a specific one-dimensional example, we might decide that the roughness we want to minimize is represented
by the second derivative of the function: d²𝑓/d𝑥². Making this quantity on average small has the effect of
minimizing the wiggles in the function, so it is an appropriate roughness measure. We could therefore choose 𝑅
such that it represented that derivative at a set of chosen points 𝑥𝑘, 𝑘 = 1, …, 𝑁𝑅 (not the same as the data points
𝑥𝑖), in which case:
R_{kj} = \left. \frac{d^2 f_j}{dx^2} \right|_{x_k} .   (1.19)
The 𝑥𝑘 might, for example, be equally spaced over the 𝑥-interval of interest, in which case the squared
roughness measure could be considered a discrete approximation to the integral, over the interval, of the quantity
(d²𝑓/d𝑥²)².
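As a hedged sketch of how this regularization might be coded in Octave/Matlab (an illustration, not from the original text: it assumes the sinusoidal basis functions of the worked example below, data vectors x and y on the interval [a, b], and chosen values of M, NR, and lambda):

j = 1:M;
theta = pi*(x - a)/(b - a);
S = sin(theta*j);                           % N x M fitting matrix, S_ij = f_j(x_i).
xk = linspace(a, b, NR)';                   % Points at which roughness is evaluated.
thk = pi*(xk - a)/(b - a);
R = -sin(thk*j) .* (j.^2) * (pi/(b-a))^2;   % R_kj = d^2 f_j / dx^2 at x_k, eq. (1.19).
c = pinv([S; lambda*R]) * [y; zeros(NR,1)]; % Least-squares solution of eq. (1.17).

Increasing lambda weights the smoothness more heavily relative to fitting the data.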

1.3 Tomographic Image Reconstruction


Consider the problem of x-ray tomography. We make many measurements of the integrated density of matter
along chords in a plane section through some object whose interior we wish to reconstruct. These are generally
done by measuring the attenuation of x-rays along each chord, but the mathematical technique is independent of
the physics. We seek a representation of the density of the object in the form
\rho(x, y) = \sum_{j=1}^{M} c_j \rho_j(x, y),   (1.20)
where 𝜌𝑗(𝑥, 𝑦) are basis functions over the plane. They might actually be as simple as pixels over a mesh 𝑥𝑘 and
𝑦𝑙, such that 𝜌𝑗(𝑥, 𝑦) → 𝜌kl(𝑥, 𝑦) = 1 when 𝑥𝑘 < 𝑥 < 𝑥𝑘₊₁ and 𝑦𝑙 < 𝑦 < 𝑦𝑙₊₁, and zero otherwise. However,
the form of basis function that won Alan Cormack the Nobel prize for medicine in his implementation of
"computerized tomography" (the CT scan) was much more cleverly chosen to build the smoothing into the basis
functions. Be careful thinking about multidimensional fitting. For constructing fitting matrices, the list of basis
functions should be considered to be logically arranged from 1 to 𝑀 in a single index 𝑗 so that the coefficients
are a single column vector. But the physical arrangement of the basis functions might more naturally be
expressed using two indices 𝑘, 𝑙 referring to the different spatial dimensions. If so, they must be mapped in
some consistent manner to the single column vector; one possible mapping is sketched below.
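One common convention (an assumption for illustration, not specified in the text) is column-by-column packing of an nx-by-ny pixel grid:

% Map pixel indices (k, l) to the single basis-function index j and back.
j = (l - 1)*nx + k;           % j runs from 1 to M = nx*ny.
k = mod(j - 1, nx) + 1;       % Inverse mapping: recover k (the x index)
l = floor((j - 1)/nx) + 1;    % and l (the y index).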

Figure 1.5: Illustrative layout of tomographic reconstruction of density in a plane using multiple fans of chordal
observations.
Each chord along which measurements are made passes through the basis functions (e.g. the pixels), and for a
particular set of coefficients 𝑐𝑗 we therefore get a chordal measurement value
v_i = \int_{l_i} \rho \, d\ell = \int_{l_i} \sum_{j=1}^{M} c_j \rho_j(x, y) \, d\ell
    = \sum_{j=1}^{M} \Big( \int_{l_i} \rho_j(x, y) \, d\ell \Big) c_j = (S c)_i ,   (1.21)

where the 𝑁 × 𝑀 matrix 𝑆 is formed from the integrals along each of the 𝑁 lines of sight 𝑙𝑖, so that
S_{ij} = \int_{l_i} \rho_j(x, y) \, d\ell. It represents the contribution of basis function 𝑗 to measurement 𝑖. Our fitting problem is
thus rendered into the standard form:
S c = v ,   (1.22)
in which a rather large number 𝑀 of basis functions might be involved. We can solve this by pseudo-inverse:
𝑐 = 𝑆⁻¹𝑣, and if the system is overdetermined, such that the effective number of different chords is larger than
the number of basis functions, it will probably work.
The problem is, however, usually under-determined, in the sense that we don't really have enough independent
chordal measurements to determine the density in each pixel (for example). This is true even if we apparently
have more measurements than pixels, because generally there is a finite noise or uncertainty level in the chordal
measurements that becomes amplified by the inversion process. This is illustrated by a simple test as shown in
Fig. 1.6.

Figure 1.6: Contour plots of the initial test 𝜌-function (left) used to calculate the chordal integrals, and its
reconstruction based upon inversion of the chordal data (right). The number of pixels (100) exceeds the number
of views (49), and the number of singular values used in the pseudo inverse is restricted to 30. Still they do not
agree well, because various artifacts appear. Reducing the number of singular values does not help.
We then almost certainly want to smooth the representation; otherwise all sorts of meaningless artifacts will
appear in our reconstruction that have no physical existence. If we try to do this by forming a pseudo-inverse in
which a smaller number of singular values are retained, and the others put to zero, there is no guarantee that this
will get rid of the roughness. Fig. 1.6 gives an example.
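A truncated pseudo-inverse of this kind might be formed as follows (a hedged Octave/Matlab sketch, not from the original text; S, the measurement vector v, and the number of retained singular values nkeep are assumptions):

[U, D, V] = svd(S);          % Singular values come out in descending order.
d = diag(D);
Dinv = zeros(size(S'));
for k = 1:nkeep              % Keep only the nkeep largest singular values;
  Dinv(k, k) = 1/d(k);       % the remainder are effectively set to zero.
end
c = V * Dinv * U' * v;       % Truncated-SVD pseudo-inverse solution.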
If we instead smooth the reconstruction by regularization, using as our measure of roughness the discrete (2-D)
Laplacian (∇²𝜌) evaluated at each pixel, we get a far better result, as shown in Fig. 1.7. It turns out that this
good result is rather insensitive to the value of 𝜆² over two or three orders of magnitude.

Figure 1.7: Reconstruction using a regularization smoothing based upon ∇²𝜌. The contours are much nearer to
reality.

1.4 Efficiency and Nonlinearity


Using the inverse or pseudo-inverse to solve for the coefficients of a fitting function is intuitive and
straightforward. However, in many cases it is not the most computationally efficient approach. For moderate size
problems, modern computers have more than enough power to overcome the inefficiencies, but in a situation
with multiple dimensions, such as tomography, it is easy for the matrix that needs to be inverted to become
enormous, because that matrix's side length is the total number of pixels or elements in the fit, which may be, for
example, the product of the side lengths nx × ny. The giant matrix that has to be inverted may be very "sparse",
meaning that all but a very few of its elements are zero. It can then become overwhelming in terms of storage
and CPU time to use the direct inversion methods we have discussed here. We'll see other approaches later.
Some fitting problems are nonlinear. For example, suppose one had a photon spectrum of a particular spectral
line to which one wished to fit a Gaussian function of particular center, width, and height. That's a problem that
cannot be expressed as a linear sum of functions. In that case fitting becomes more elaborate, and less reliable.
There are some potted fitting programs out there, but it's usually better if you can avoid them.
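For instance, a minimal sketch of such a nonlinear fit (an illustration, not the author's prescription; it assumes data vectors x and y and uses the standard fminsearch minimizer):

gauss = @(p, x) p(1)*exp(-(x - p(2)).^2/(2*p(3)^2));  % Parameters p = [height, center, width].
chi2  = @(p) sum((y - gauss(p, x)).^2);               % Residual, as in eq. (1.9).
p0 = [max(y), mean(x), std(x)];                       % Rough (assumed) starting guess.
p  = fminsearch(chi2, p0);                            % Nonlinear minimization of chi-squared.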

Worked Example: Fitting sinusoidal functions


Suppose we wish to fit a set of data 𝑥 𝑖 , 𝑦𝑖 spread over the range of independent variable 𝑎 ≤ 𝑥 ≤ 𝑏. And suppose
we know the function is zero at the boundaries of the range, at 𝑥 = 𝑎 and 𝑥 = 𝑏. It makes sense to incorporate
our knowledge of the boundary values into the choice of functions to fit, and choose those functions 𝑓𝑛 to be
zero at 𝑥 = 𝑎 and 𝑥 = 𝑏. There are numerous well known sets of functions that have the property of being zero
at two separated points. The points where standard functions are zero are of course not some arbitrary 𝑎 and 𝑏.
But we can scale the independent variable 𝑥 so that 𝑎 and 𝑏 are mapped to the appropriate points for any choice
of function set.
Suppose the functions that we decide to use for fitting are sinusoids: 𝑓𝑛 = sin(𝑛𝜃), all of which are zero at
𝜃 = 0 and 𝜃 = 𝜋. We can make this set fit our 𝑥 range by using the scaling
𝜃 = 𝜋(𝑥 − 𝑎)/(𝑏 − 𝑎),   (1.23)
so that 𝜃 ranges from 0 to 𝜋 as 𝑥 ranges from 𝑎 to 𝑏. Now we want to find the best fit to our data in the form
𝑓(𝑥) = 𝑐₁ sin(𝜃) + 𝑐₂ sin(2𝜃) + 𝑐₃ sin(3𝜃) + … + 𝑐𝑀 sin(𝑀𝜃) .   (1.24)
We therefore want the least-squares solution for the 𝑐𝑖 of
S c = \begin{pmatrix}
\sin(1\theta_1) & \sin(2\theta_1) & \dots & \sin(M\theta_1) \\
\sin(1\theta_2) & \sin(2\theta_2) & \dots & \sin(M\theta_2) \\
\vdots & & & \vdots \\
\sin(1\theta_N) & \sin(2\theta_N) & \dots & \sin(M\theta_N)
\end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_M \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}
= y
\qquad (1.25)

We find this solution by the following procedure.


1. If necessary, construct column vectors 𝑥 and 𝑦 from the data.
2. Calculate the scaled vector 𝜃 from 𝑥.
3. Construct the matrix 𝑆 whose ijth entry is sin(𝑗𝜃𝑖 )
4. Least-squares-solve 𝑆𝑐 = 𝑦 (e.g. by pseudo-inverse) to find 𝑐.
5. Evaluate the fit at any 𝑥 by substituting the expression for 𝜃, eq. (1.23), into eq. (1.24).
This process may be programmed in a mathematical system like Matlab or Octave, which has built-in matrix
multiplication, very concisely as follows (entries following % are comments).

% Suppose x and y exist as column vectors of length N (Nx1 matrices),
% and that the range limits a, b and the number of terms M are already defined.

j=[1:M]; % Create a 1xM matrix containing numbers 1 to M.
theta=pi*(x-a)/(b-a); % Scale x to obtain the column vector theta.
S=sin(theta*j); % Construct the matrix S using an outer product.
Sinv=pinv(S); % Pseudo invert it.
c=Sinv*y; % Matrix multiply y to find the coefficients c.

The fit can then be evaluated for any 𝑥 value (or array) xfit, in the form effectively of a scalar product of
sin(𝜃𝑗) with 𝑐. The code is likewise astonishingly brief, and will need careful thought (especially noting what
the dimensions of the matrices are) to understand what is actually happening.

yfit=sin(pi*(xfit-a)/(b-a)*j)*c; % Evaluate the yfit at any xfit

An example is shown in Fig. 1.8.

Figure 1.8: The result of the fit of sinusoids up to 𝑀 = 5 to a noisy dataset of size 𝑁 = 20. The points are the
input data. The curve is constructed by using the yfit expression on an xfit array of some convenient length
spanning the 𝑥-range, and then simply plotting yfit versus xfit.

Exercise 1. Data fitting


1. Given a set of 𝑁 values 𝑦𝑖 of a function 𝑦(𝑥) at the positions 𝑥 𝑖 , write a short code to fit a polynomial having
order one less than 𝑁 (so there are 𝑁 coefficients of the polynomial) to the data.
Obtain a set of (𝑁 = ) 6 numbers from
http://silas.psfc.mit.edu/22.15/15numbers.html

(or if that is not accessible use 𝑦𝑖 = [0.892, 1.44, 1.31, 1.66, 1.10, 1.19]). Take the values 𝑦𝑖 to be at the
positions 𝑥𝑖 = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]. Run your code on this data and find the coefficients 𝑐𝑗.
Plot together (on the same plot) the resulting fitted polynomial representing 𝑦(𝑥) (with sufficient resolution to
give a smooth curve) and the original data points, over the domain 0 ≤ 𝑥 ≤ 1.
Submit the following as your solution:

1. Your code in a computer format that is capable of being executed.


2. The numeric values of your coefficients 𝑐𝑗 , 𝑗 = 1, 𝑁.
3. Your plot.
4. Brief commentary ( < 300 words) on what problems you faced and how you solved them.

2. Save your code from part 1. Make a copy of it with a new name and change the new code as needed to fit (in
the linear least squares sense) a polynomial of order possibly lower than 𝑁 - 1 to a set of data 𝑥 𝑖 , 𝑦𝑖 (for which
the points are in no particular order).
Obtain a pair of data sets of length (𝑁 = ) 20 numbers 𝑥 𝑖 , 𝑦𝑖 from the same URL by changing the entry in the
"Number of Numbers" box. (Or if that is inaccessible, generate your own data set from random numbers added
to a line.) Run your code on that data to produce the fitting coefficients 𝑐𝑗 when the number of coefficients of the
polynomial is (𝑀 = ) (a) 1, (b) 2, (c) 3. That is: constant, linear, quadratic.
Plot the fitted curves and the original data points on the same plot(s) for all three cases.
Submit the following as your solution:

1. Your code in a computer format that is capable of being executed.


2. Your coefficients 𝑐𝑗 , 𝑗 = 1, 𝑀, for three cases (a), (b), (c).
3. Your plot(s).
4. Very brief remarks on the extent to which the coefficients are the same for the three cases.
5. Can your code from this part also solve the problem of part 1?
