
EMTH211 — ASSIGNMENT COVER SHEET

Name(s) Matthew McIvor

Student ID(s) 62226649

Tutorial Group(s) 11

Signature(s)

This assignment MUST be your own work or the work of a pair.


STAPLE this page to the front of your assignment.

Due: 10:00 AM, Monday 4 August 2021.


A2.1
Modelling the women's 100 m sprint winning times at every Olympic Games since 1928.
Proposed model: the data are arranged with the year in the first column and the winning time (in seconds) in the second.

Using this data, a first order polynomial fit is found (See Appendix for code):
Second order polynomial fit (See Appendix for code):

Third order polynomial fit (See Appendix for code):


The error is calculated as the mean absolute error: the absolute difference between each actual y value and the fitted y value, summed over all points and divided by the number of points,
$$E = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - p(x_i) \right|$$

Error for linear fit: 0.1907


Error for quadratic fit: 0.1580
Error for cubic fit: 0.1574

The warning MATLAB issues when fitting the cubic is:
“Warning: Polynomial is badly conditioned”
MATLAB does this because the x values (years from 1928 onwards) are large and far from zero, which makes the least-squares problem ill-conditioned at this degree. The way to fix this is to rescale the x values so that their magnitudes no longer differ by large amounts.
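For example (a minimal sketch, assuming the data are stored in column vectors year and time):

xs   = (year - mean(year)) / std(year);   % centre and scale the years
pcub = polyfit(xs, time, 3);              % cubic fit, no conditioning warning
% MATLAB's three-output form [p,S,mu] = polyfit(year,time,3) applies the
% same rescaling automatically.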

Next, using the ‘polyval’ command (see Appendix), these polynomial fits can be used to estimate
the winning 100 m time in 2008.
Estimated time using linear fit: 10.4457
Estimated time using quadratic fit: 10.9162
Estimated time using cubic fit: 10.6662
A2.2
Image compression using the singular value decomposition (freely sourced images): Image 1 and Image 2.

A) Using the SVD, these images can be compressed by keeping the largest singular values and discarding the smaller ones, approximating the image by a few terms of the outer-product expansion $A \approx \sum_i \sigma_i u_i v_i^T$.
Image 1 produces a 270 × 480 matrix; using 40 of the singular values produces this image:

Image 2 produces a 1707 × 2560 matrix, much larger than Image 1 because of its higher resolution, so it makes sense that it requires more singular values to give a good approximation of the original image. Using 60 singular values produced this image (see Appendix for code):
B)
Image 1 is a 270 × 480 (k × n) matrix, so reproducing the original image needs k·n = 270 × 480 = 129600 real numbers. A rank-s approximation needs only s(k + n + 1) real numbers (s singular values, plus s left singular vectors of length k and s right singular vectors of length n), so the 40-term approximation of Image 1 needs 40(270 + 480 + 1) = 30040 real numbers, far fewer than the original.
For Image 2, a 1707 × 2560 image, the original needs 1707 × 2560 = 4369920 real numbers, while the 60-term approximation needs 60(1707 + 2560 + 1) = 256080, again far fewer. It is therefore much more efficient to store an approximation of an image than the image itself.
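As a quick check of these counts (a sketch with the sizes hard-coded from above):

k1 = 270;  n1 = 480;   s1 = 40;
k2 = 1707; n2 = 2560;  s2 = 60;
[k1*n1, s1*(k1+n1+1)]   % 129600 versus 30040
[k2*n2, s2*(k2+n2+1)]   % 4369920 versus 256080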
C)
Some images are easier to approximate because they contain more of the same colour: the truncated SVD captures large-scale repeated structure first, so the more neighbouring pixels share the same colour, the fewer singular values are required for a good approximation. Our eyes are also most sensitive to green, which may be another reason the peacock image requires more singular values: it is predominantly green, so approximation errors in it are easier to see.

A2.3
[W, sigma, V] = svd(A)

$$W = \begin{bmatrix} -0.9330 & 0.3600 & 0 \\ -0.2545 & -0.6597 & -0.7071 \\ -0.2545 & -0.6597 & 0.7071 \end{bmatrix}$$

$$\Sigma = \begin{bmatrix} 4.2758 & 0 & 0 \\ 0 & 2.1873 & 0 \\ 0 & 0 & 0.0000 \end{bmatrix}$$

$$V = \begin{bmatrix} -0.7362 & -0.1377 & -0.6626 \\ -0.4067 & 0.8726 & 0.2705 \\ -0.5409 & -0.4686 & 0.6984 \end{bmatrix}$$

$$A = W\,\Sigma\,V^T = \begin{bmatrix} 2.8285 & 2.3096 & 1.7888 \\ 0.9998 & -0.8166 & 1.2648 \\ 0.9998 & -0.8166 & 1.2648 \end{bmatrix}$$

$$Av_2 = \begin{bmatrix} 0.7876 \\ -1.4429 \\ -1.4429 \end{bmatrix}$$

Note that $Av_2 = \sigma_2 w_2$, so $\|Av_2\| = \sigma_2 = 2.1873$.
b) Eigenvectors and eigenvalues of $A^TA$ from MATLAB (see Appendix A):

$$A^TA = \begin{bmatrix} 9.9998 & 4.8998 & 7.5888 \\ 4.8998 & 6.6676 & 2.0659 \\ 7.5888 & 2.0659 & 6.3992 \end{bmatrix}$$

$$\lambda_1 = 0,\quad v_1 = \begin{bmatrix} 0.6626 \\ -0.2705 \\ -0.6984 \end{bmatrix} \qquad \lambda_2 = 4.7842,\quad v_2 = \begin{bmatrix} 0.1378 \\ -0.8726 \\ 0.4687 \end{bmatrix} \qquad \lambda_3 = 18.2824,\quad v_3 = \begin{bmatrix} 0.7362 \\ 0.4068 \\ 0.5409 \end{bmatrix}$$

The nonzero eigenvalues are the squares of the singular values ($2.1873^2 \approx 4.7842$ and $4.2758^2 \approx 18.2824$), and the eigenvectors are the columns of V up to sign.

C) Matrix A is not invertible: its determinant is zero (A has a zero singular value, and its second and third rows are identical), and for the same reason $\Sigma$ is not invertible.
D) An orthonormal basis for col(A)
Row reducing A with ‘rref(A)’ in MATLAB gives:

$$\operatorname{rref}(A) = \begin{bmatrix} 1 & 0 & 0.9485 \\ 0 & 1 & -0.3876 \\ 0 & 0 & 0 \end{bmatrix}$$

The columns of A that correspond to pivot columns of the reduced form give a basis for the column space of A.

$$\operatorname{col}(A) = \operatorname{span}\left\{\begin{bmatrix} 2.8285 \\ 0.9998 \\ 0.9998 \end{bmatrix}, \begin{bmatrix} 2.3096 \\ -0.8166 \\ -0.8166 \end{bmatrix}\right\}$$

To find an orthonormal basis for the column space we apply the Gram–Schmidt process.

$$u_1 = v_1 = \begin{bmatrix} 2.8285 \\ 0.9998 \\ 0.9998 \end{bmatrix}$$

Normalising this gives:

$$e_1 = \frac{u_1}{\|u_1\|} = \frac{1}{3.16107}\begin{bmatrix} 2.8285 \\ 0.9998 \\ 0.9998 \end{bmatrix} = \begin{bmatrix} 0.8947 \\ 0.3162 \\ 0.3162 \end{bmatrix}$$

$$u_2 = v_2 - \frac{v_2 \cdot u_1}{u_1 \cdot u_1}\,u_1 = \begin{bmatrix} 2.3096 \\ -0.8166 \\ -0.8166 \end{bmatrix} - \frac{4.8998}{9.9998}\begin{bmatrix} 2.8285 \\ 0.9998 \\ 0.9998 \end{bmatrix} = \begin{bmatrix} 0.9231 \\ -1.3057 \\ -1.3057 \end{bmatrix}$$

Normalising this gives:

$$e_2 = \frac{u_2}{\|u_2\|} = \frac{1}{2.06441}\begin{bmatrix} 0.9231 \\ -1.3057 \\ -1.3057 \end{bmatrix} = \begin{bmatrix} 0.4471 \\ -0.6325 \\ -0.6325 \end{bmatrix}$$

Thus, an orthonormal basis for col(A) is

$$\left\{\begin{bmatrix} 0.8947 \\ 0.3162 \\ 0.3162 \end{bmatrix}, \begin{bmatrix} 0.4471 \\ -0.6325 \\ -0.6325 \end{bmatrix}\right\}$$

To confirm that these two vectors are orthonormal, their dot product was taken in MATLAB and it gave an acceptable zero; since each vector is also unit length, they are orthonormal. Because there are only two basis vectors, the column space is two-dimensional: if we continued the Gram–Schmidt process on the third column of A we would get the zero vector, because that column is a linear combination of the other two.
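This check can be reproduced with a short MATLAB sketch (assuming A holds the matrix from part (a)):

v1 = A(:,1);  v2 = A(:,2);
u1 = v1;
u2 = v2 - (dot(v2,u1)/dot(u1,u1))*u1;  % subtract the projection onto u1
e1 = u1/norm(u1);
e2 = u2/norm(u2);
dot(e1, e2)                            % acceptably close to zero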

e)
The rank of the matrix is two, since the orthonormal basis for col(A) contains only two vectors and, equivalently, there are only two nonzero singular values (seen in $\Sigma$).

A2.4
Hermite polynomials:

$$p_1 = H_0(x) + H_1(x) + H_2(x) + H_3(x) = 1 + 2x + (4x^2 - 2) + (8x^3 - 12x) = 8x^3 + 4x^2 - 10x - 1$$

$$p_2 = 5H_0(x) - 7H_1(x) + H_2(x) = 5 - 14x + (4x^2 - 2) = 4x^2 - 14x + 3$$

$$p_3 = 13H_0(x) - 23H_1(x) + H_2(x) - 2H_3(x) = 13 - 46x + (4x^2 - 2) - 2(8x^3 - 12x) = -16x^3 + 4x^2 - 22x + 11$$

All three of these polynomials are linearly independent if the rank of the coefficient matrix formed by p1, p2, and p3 (one row per polynomial, holding the coefficients of $1, x, x^2, x^3$) is equal to three.

$$A = \begin{bmatrix} -1 & -10 & 4 & 8 \\ 3 & -14 & 4 & 0 \\ 11 & -22 & 4 & -16 \end{bmatrix}$$

Row reducing with $R_2 + 3R_1 \to R_2$ and $R_3 + 11R_1 \to R_3$:

$$\begin{bmatrix} -1 & -10 & 4 & 8 \\ 0 & -44 & 16 & 24 \\ 0 & -132 & 48 & 72 \end{bmatrix}$$

Then $R_3 - 3R_2 \to R_3$:

$$\begin{bmatrix} -1 & -10 & 4 & 8 \\ 0 & -44 & 16 & 24 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

The rank of this matrix is two, as there are two non-zero rows; therefore p1, p2, and p3 are linearly dependent, so the polynomials are not linearly independent in P3.
B) Span of the polynomials:
Since the rank of the coefficient matrix is two, only two of the three polynomials are linearly independent, so p1, p2, and p3 span a two-dimensional subspace of the vector space P3; a MATLAB check is sketched below.
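This can be confirmed numerically (a sketch; the coefficient matrix is the one row-reduced above):

A = [-1 -10 4 8; 3 -14 4 0; 11 -22 4 -16];
rank(A)   % returns 2: the three polynomials are linearly dependent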

Appendix
A2.1 code:
Linear approximation:
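A minimal sketch of the fit, assuming the data are stored in column vectors year and time:

p1fit = polyfit(year, time, 1);           % degree-1 least-squares fit
that1 = polyval(p1fit, year);
err1  = mean(abs(time - that1))           % mean absolute error (0.1907 above)
plot(year, time, 'o', year, that1, '-')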

Quadratic approximation:
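The same sketch with the degree raised to 2:

p2fit = polyfit(year, time, 2);
that2 = polyval(p2fit, year);
err2  = mean(abs(time - that2))           % 0.1580 above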

Cubic approximation:
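A sketch using centred-and-scaled years, which avoids the “badly conditioned” warning:

[p3fit, ~, mu3] = polyfit(year, time, 3); % mu3 holds the mean and std of year
that3 = polyval(p3fit, year, [], mu3);
err3  = mean(abs(time - that3))           % 0.1574 above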

2008 times:
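A sketch of the 2008 estimates using polyval with each fit:

polyval(p1fit, 2008)            % linear:    10.4457 above
polyval(p2fit, 2008)            % quadratic: 10.9162 above
polyval(p3fit, 2008, [], mu3)   % cubic:     10.6662 above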

A2.2 code:
Image 1 -
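A minimal sketch of the compression ('image1.jpg' is a placeholder, not the actual file used):

img  = double(rgb2gray(imread('image1.jpg')));  % 270 x 480 greyscale matrix
[U, S, V] = svd(img);
k    = 40;                                      % singular values kept
imgk = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';       % rank-k approximation
imshow(uint8(imgk))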

Image 2 -
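The same sketch for the larger image with k = 60 ('image2.jpg' is again a placeholder):

img2  = double(rgb2gray(imread('image2.jpg'))); % 1707 x 2560
[U2, S2, V2] = svd(img2);
k2    = 60;
img2k = U2(:,1:k2) * S2(1:k2,1:k2) * V2(:,1:k2)';
imshow(uint8(img2k))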

A2.3 code:
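A sketch of the A2.3 computations, entering A from the reconstruction above:

A = [2.8285 2.3096 1.7888; 0.9998 -0.8166 1.2648; 0.9998 -0.8166 1.2648];
[W, sigma, V] = svd(A)        % part (a)
Av2 = A * V(:,2)              % A times the second right singular vector
[vecs, vals] = eig(A' * A)    % part (b)
rref(A)                       % part (d)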
