Assignment 2
Tutorial Group(s) 11
Signature(s)
Using this data, a first-order (linear) polynomial fit is found (see Appendix for code):
A second-order (quadratic) polynomial fit (see Appendix for code):
The warning MATLAB issues when modelling the cubic fit is:
“Warning: Polynomial is badly conditioned”
MATLAB issues this warning because, for the cubic fit, the x values (the years) are large and far from zero, which makes the fitting problem badly conditioned. The fix is to rescale the x values, centring and scaling them so that they are all of a similar, small magnitude.
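As a minimal sketch of this fix (assuming the data is stored in vectors named years and times, which are not reproduced here), the three-output form of polyfit centres and scales the x values automatically, which removes the warning:

[p3, S, mu] = polyfit(years, times, 3);   % mu holds the centring/scaling used
t2008 = polyval(p3, 2008, [], mu);        % evaluate with the same centring/scaling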
Next, using the 'polyval' command (see Appendix for code), these polynomial fits can be used to estimate the 100 m winning time in 2008.
Estimated time using linear fit: 10.4457
Estimated time using quadratic fit: 10.9162
Estimated time using cubic fit: 10.6662
A2.2
Image compression using singular value decomposition (free sourced images):
(Image 1 and Image 2 shown here.)
A) Using the SVD, these images can be compressed by keeping only the largest singular values and discarding the smaller ones, so that the image is approximated by a few terms of an outer-product expansion.
Image 1 produces a 270 x 480 matrix; using 40 of the singular values produces this image:
Image 2 produces a 1707 x 2560 matrix, which is much larger than image 1 because of its higher resolution. It therefore makes sense that it requires more singular values to give a good approximation of the original image; 60 singular values were used, producing this image (see Appendix for code):
B)
Image 1 is stored as a 270 x 480 (k x n) matrix, so storing the original image requires k*n = 129600 real numbers. A rank-S approximation only needs S(k + n + 1) real numbers (each term needs one singular value, one length-k vector and one length-n vector), so the 40-term approximation of image 1 needs 40(270 + 480 + 1) = 30040 numbers, which is much smaller than the original image.
For image 2, which is 1707 x 2560, the original image needs 1707*2560 = 4369920 real numbers, while the 60-term approximation needs 60(1707 + 2560 + 1) = 256080 real numbers, again far fewer than the original. It is therefore much more efficient to store an approximation of an image than the image itself.
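As a quick check of these counts (a sketch using only the sizes quoted above):

k1 = 270;  n1 = 480;  r1 = 40;
k1*n1               % 129600 numbers for the full image 1
r1*(k1 + n1 + 1)    % 30040 numbers for the rank-40 approximation
k2 = 1707; n2 = 2560; r2 = 60;
k2*n2               % 4369920 numbers for the full image 2
r2*(k2 + n2 + 1)    % 256080 numbers for the rank-60 approximation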
C)
Some images are easier to approximate because they contain large regions of the same colour. Our eyes are also most sensitive to green, which may be part of the reason the peacock image (image 2) requires more singular values, since it is predominantly green. The low-rank approximation effectively averages neighbouring pixel values, so the more neighbouring pixels share the same colour, the fewer singular values are required.
A2.3
[W, sigma, V] = svd(A)
W = [ -0.9330   0.3600   0
      -0.2545  -0.6597  -0.7071
      -0.2545  -0.6597   0.7071 ]

sigma = [ 4.2758   0        0
          0        2.1873   0
          0        0        0.0000 ]

V = [ -0.7362  -0.1377  -0.6626
      -0.4067   0.8726   0.2705
      -0.5409  -0.4686   0.6984 ]

Multiplying these factors back together recovers A:

A = W * sigma * V'

A = [ 2.8285   2.3096   1.7888
      0.9998  -0.8166   1.2648
      0.9998  -0.8166   1.2648 ]

Multiplying A by the second column of V gives

Av2 = A * V(:,2) = [0.7876; -1.4429; -1.4429]

which equals sigma(2,2) * W(:,2), as expected from the SVD.
b) Eigenvectors and eigenvalues of AᵀA from MATLAB (see Appendix for code):

AᵀA = [ 9.9998   4.8998   7.5888
        4.8998   6.6676   2.0659
        7.5888   2.0659   6.3992 ]

lambda1 = 0         v1 = [0.6626; -0.2705; -0.6984]
lambda2 = 4.7842    v2 = [0.1378; -0.8726;  0.4687]
lambda3 = 18.2824   v3 = [0.7362;  0.4068;  0.5409]

Note that the non-zero eigenvalues are the squares of the singular values (sqrt(4.7842) = 2.1873 and sqrt(18.2824) = 4.2758), and the eigenvectors are the columns of V up to sign.
c)
The reduced row echelon form of A is

rref(A) = [ 1   0    0.9485
            0   1   -0.3876
            0   0    0      ]

so the column space of A is spanned by the columns of A corresponding to the pivot columns (the first two):

col(A) = span{ [2.8285; 0.9998; 0.9998], [2.3069; -0.8166; -0.8166] }
d)
To find an orthonormal basis for the column space we use the Gram-Schmidt process.

u1 = v1 = [2.8285; 0.9998; 0.9998]

Normalising this gives:

e1 = (1/|u1|) * u1 = (1/3.16107) * [2.8285; 0.9998; 0.9998] = [0.8947; 0.3162; 0.3162]

u2 = v2 - ((v2 . u1)/(u1 . u1)) * u1
   = [2.3069; -0.8166; -0.8166] - ((v2 . u1)/(u1 . u1)) * [2.8285; 0.9998; 0.9998]
   = [0.9231; -1.3057; -1.3057]

Normalising this gives:

e2 = (1/|u2|) * u2 = (1/2.06441) * [0.9231; -1.3057; -1.3057] = [0.4471; -0.6325; -0.6325]

So the orthonormal basis for col(A) is:

{ [0.8947; 0.3162; 0.3162], [0.4471; -0.6325; -0.6325] }
To confirm that these two vectors are orthonormal, their dot product was computed in MATLAB and was zero to within rounding error; they are therefore orthogonal (and, since each has unit length, orthonormal). Since there are only two such vectors, the column space is only two-dimensional. If we continued the Gram-Schmidt process on the third column of A we would obtain the zero vector, because that column is a linear combination of the other two.
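A minimal check using the rounded values above:

e1 = [0.8947; 0.3162; 0.3162];
e2 = [0.4471; -0.6325; -0.6325];
dot(e1, e2)    % approximately zero, confirming orthogonality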
e)
The rank of the matrix is two, since the orthonormal basis for the column space contains only two vectors and there are only two non-zero singular values (seen in sigma).
A2.4
Hermite polynomials:
p1 = H0(x) + H1(x) + H2(x) + H3(x)
   = 1 + 2x + (4x² - 2) + (8x³ - 12x)
   = 8x³ + 4x² - 10x - 1

p2 = 5H0(x) - 7H1(x) + H2(x)
   = 5 - 14x + (4x² - 2)
   = 4x² - 14x + 3

p3 = 13H0(x) - 23H1(x) + H2(x) - 2H3(x)
   = 13 - 46x + (4x² - 2) - 2(8x³ - 12x)
   = -16x³ + 4x² - 22x + 11
All three polynomials are linearly independent if and only if the rank of the coefficient matrix formed from p1, p2 and p3 is equal to three.
Writing each polynomial as a row of coefficients in the basis {1, x, x², x³}:

A = [ -1  -10   4    8
       3  -14   4    0
      11  -22   4  -16 ]

R2 + 3R1 -> R2 and R3 + 11R1 -> R3:

A ~ [ -1  -10   4    8
       0  -44  16   24
       0 -132  48   72 ]

R3 - 3R2 -> R3:

A ~ [ -1  -10   4    8
       0  -44  16   24
       0    0   0    0 ]
The rank of this matrix is two, as there are two non-zero rows; therefore p1, p2 and p3 are linearly dependent, so the polynomials are not linearly independent in P3.
B) Span of the polynomial coefficient matrix A:
Since the rank of the matrix is two, only two of the polynomials are linearly independent. These two linearly independent polynomials therefore span a two-dimensional subspace of P3.
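As a minimal sketch, the rank of the coefficient matrix can also be checked directly in MATLAB (rows are the coefficients of p1, p2 and p3 in the basis {1, x, x^2, x^3}):

C = [-1 -10  4   8;
      3 -14  4   0;
     11 -22  4 -16];
rank(C)    % returns 2, so only two of the polynomials are linearly independent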
Appendix
A2.1 code:
Linear approximation:
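A minimal sketch of the code that was shown here, assuming the Olympic winning-time data is stored in vectors named years and times (these variable names and the data values are assumptions, not reproduced from the original screenshot):

p1 = polyfit(years, times, 1);                  % first-order (linear) fit
plot(years, times, 'o', years, polyval(p1, years), '-')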
Quadratic approximation:
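A similar sketch for the quadratic fit, using the same assumed variables:

p2 = polyfit(years, times, 2);                  % second-order (quadratic) fit
plot(years, times, 'o', years, polyval(p2, years), '-')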
Cubic approximation:
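A similar sketch for the cubic fit, which is the one that triggers the "badly conditioned" warning unless the x values are rescaled as described in A2.1:

p3 = polyfit(years, times, 3);                  % third-order (cubic) fit
plot(years, times, 'o', years, polyval(p3, years), '-')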
2008 times:
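A sketch of the 2008 estimates using the fits above (same assumed variable names):

t_lin  = polyval(p1, 2008)                      % linear estimate
t_quad = polyval(p2, 2008)                      % quadratic estimate
t_cub  = polyval(p3, 2008)                      % cubic estimate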
A2.2 code:
Image 1 -
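A minimal sketch of the compression code for image 1, assuming the file name image1.jpg and a conversion to greyscale so that the SVD is applied to a single 270 x 480 matrix (the actual file name and any colour handling in the original code are not reproduced here):

A = double(rgb2gray(imread('image1.jpg')));     % 270 x 480 greyscale matrix
[U, S, V] = svd(A);
r = 40;                                         % number of singular values kept
Ar = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';         % rank-40 approximation
imshow(uint8(Ar))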
Image 2 -
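The same sketch applies to image 2, with the assumed file name and the number of singular values changed:

A = double(rgb2gray(imread('image2.jpg')));     % 1707 x 2560 greyscale matrix
[U, S, V] = svd(A);
r = 60;
Ar = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';         % rank-60 approximation
imshow(uint8(Ar))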
A2.3 code:
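A sketch of the A2.3 computations; the matrix A is taken from the values quoted in the report (rounded to four decimal places), so outputs may differ slightly in the last digit:

A = [2.8285  2.3096  1.7888;
     0.9998 -0.8166  1.2648;
     0.9998 -0.8166  1.2648];

[W, sigma, V] = svd(A);            % part a): singular value decomposition
A_check = W * sigma * V';          % recovers A
Av2 = A * V(:,2);                  % A times the second right singular vector

[vecs, vals] = eig(A' * A);        % part b): eigenvectors/eigenvalues of A'A

R = rref(A);                       % part c): reduced row echelon form

% part d): Gram-Schmidt on the first two columns of A
u1 = A(:,1);  e1 = u1 / norm(u1);
u2 = A(:,2) - (dot(A(:,2), u1) / dot(u1, u1)) * u1;
e2 = u2 / norm(u2);
dot(e1, e2)                        % approximately zero, so e1 and e2 are orthogonal

rank(A)                            % part e): the rank is 2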