
AMA226

SCHOOL OF MATHEMATICS AND STATISTICS Autumn Semester 2008-2009

NUMERICAL LINEAR ALGEBRA Two hours

Marks will be awarded for your best FOUR answers


1 (i) (a) Write down the conditions that must be satisfied by any matrix
norm, ‖A‖, and define the subordinate matrix norm. (5 marks)

(b) Compute ‖A‖1 and ‖A‖∞ for the matrix

\[
A = \begin{pmatrix}
 1 & -3 &  7 & -8 \\
-4 &  2 &  9 & 14 \\
 6 &  5 & -8 &  0 \\
13 & -6 & 13 & -2
\end{pmatrix}.
\]

(4 marks)
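
A minimal Scilab sketch for part (i)(b), offered for illustration only; it assumes the built-in norm function is acceptable and copies the matrix from the question:

    // Illustrative sketch only: matrix norms of A
    A = [ 1 -3  7 -8
         -4  2  9 14
          6  5 -8  0
         13 -6 13 -2];
    disp(norm(A, 1))       // ||A||_1  : largest absolute column sum
    disp(norm(A, 'inf'))   // ||A||_inf: largest absolute row sum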

(ii) We wish to solve Ax = b where b is known exactly and A is subject to an
uncertainty δA. Thus, in effect, we necessarily solve

(A + δA)(x + δx) = b.

Define the condition number, K(A), and hence, assuming that
‖A⁻¹‖‖δA‖ < 1, prove that the relative error in x satisfies

\[
\frac{\|\delta x\|}{\|x\|} \;\le\; \frac{K(A)}{1 - K(A)\,\dfrac{\|\delta A\|}{\|A\|}}\;\frac{\|\delta A\|}{\|A\|}.
\]

(9 marks)

(iii) We wish to solve Ax = b where

\[
A \approx \begin{pmatrix}
1.0000 & 0.2500 & 0.1111 & 0.0625 \\
0.2500 & 0.1111 & 0.0625 & 0.0400 \\
0.1111 & 0.0625 & 0.0400 & 0.0278 \\
0.0625 & 0.0400 & 0.0278 & 0.0204
\end{pmatrix}, \qquad
b = \begin{pmatrix} 3.14159 \\ 3.14159 \\ 3.14159 \\ 3.14159 \end{pmatrix}.
\]

Given that A is subject to an uncertainty ‖δA‖ ≈ 2 × 10⁻⁵, that b is subject
to an uncertainty ‖δb‖ ≈ 3 × 10⁻⁶, and that, in this case, the relative error
in x satisfies

\[
\frac{\|\delta x\|}{\|x\|} \;\le\; \frac{K(A)}{1 - K(A)\,\dfrac{\|\delta A\|}{\|A\|}}\left(\frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|}\right),
\]

then:

(a) write Scilab code to determine bounds on ‖δx‖1/‖x‖1; (5 marks)

(b) comment on the usefulness of solutions computed for this system,
given that this calculation yields ‖δx‖1/‖x‖1 ≤ 1.42. (2 marks)
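
For part (iii)(a), one possible Scilab sketch, working throughout in the 1-norm and treating the quoted uncertainties as the values of ‖δA‖1 and ‖δb‖1 (an assumption of this sketch):

    // Illustrative sketch only: bound on ||dx||_1 / ||x||_1
    A = [1.0000 0.2500 0.1111 0.0625
         0.2500 0.1111 0.0625 0.0400
         0.1111 0.0625 0.0400 0.0278
         0.0625 0.0400 0.0278 0.0204];
    b  = [3.14159; 3.14159; 3.14159; 3.14159];
    dA = 2e-5;                                 // given ||dA|| ~ 2e-5
    db = 3e-6;                                 // given ||db|| ~ 3e-6
    K  = norm(A, 1)*norm(inv(A), 1);           // condition number in the 1-norm
    bound = K/(1 - K*dA/norm(A, 1)) * (dA/norm(A, 1) + db/norm(b, 1));
    disp(bound)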


2 (i) Given m + 1 data points (xj, fj), j = 0, 1, ..., m, where the xj values are all
distinct, and a set of suitably chosen basis functions, φ0(x), φ1(x), φ2(x),
then derive the normal equations

\[
a_0\sum_{j=0}^{m}\phi_0(x_j)\phi_k(x_j) \;+\; a_1\sum_{j=0}^{m}\phi_1(x_j)\phi_k(x_j) \;+\; a_2\sum_{j=0}^{m}\phi_2(x_j)\phi_k(x_j) \;=\; \sum_{j=0}^{m} f_j\,\phi_k(x_j), \qquad k = 0, 1, 2,
\]

for determining the best least-squares fit to the data of the function

Y(x) = a0 φ0(x) + a1 φ1(x) + a2 φ2(x)

without weights.
(8 marks)

(ii) Hence, by choosing the basis functions φ0(x) ≡ 1, φ1(x) ≡ x and φ2(x) ≡
x², write down the normal equations for the best quadratic fit, and write
Scilab code to compute this fit for the data

xj   1.000   1.500   2.000   2.500   3.000   3.500   4.000   4.500
fj   8.00    11.13   15.00   18.88   22.00   23.63   23.00   19.38

(10 marks)
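
A minimal Scilab sketch for part (ii), forming and solving the 3 × 3 normal equations directly; the variable names are illustrative:

    // Illustrative sketch only: best quadratic fit via the normal equations
    x = [1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5]';
    f = [8.00 11.13 15.00 18.88 22.00 23.63 23.00 19.38]';
    Phi = [ones(x), x, x.^2];        // columns are the basis functions 1, x, x^2
    a = (Phi'*Phi) \ (Phi'*f);       // solve the normal equations for a0, a1, a2
    disp(a)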

(iii) Given that the best quadratic fit of the previous part is given by

P2(x) = −2.25x² + 16.38x − 7.44,

then, by computing the residuals on the sub-range 1.5 ≤ x ≤ 4.0 only,
comment briefly on whether or not you think the degree of the polynomial
should be increased. (7 marks)
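
A short Scilab sketch for part (iii), assuming the sub-range data are taken directly from the table in part (ii):

    // Illustrative sketch only: residuals of P2 on 1.5 <= x <= 4.0
    x  = [1.5 2.0 2.5 3.0 3.5 4.0]';
    f  = [11.13 15.00 18.88 22.00 23.63 23.00]';
    P2 = -2.25*x.^2 + 16.38*x - 7.44;
    r  = f - P2;                     // residuals on the sub-range
    disp([x, r])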


3 (i) (a) Verify, by direct expansion, that the matrix equation

\[
A^{T}A\alpha = A^{T}f \qquad (1)
\]

where

\[
A = \begin{pmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^n
\end{pmatrix}, \qquad
f = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_m \end{pmatrix}, \qquad
\alpha = \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix},
\]

represents the normal equations arising from a least-squares fit of
the n-th order polynomial, Pn(x), to the data (xj, fj), j = 0, ..., m.
(3 marks)

(b) Verify that, if P is an orthogonal matrix, then equations (1) remain
the normal equations when the transformed residuals

r̂ ≡ P r = P(Aα − f)

are minimized. (3 marks)

(ii) For any (m + 1) × 1 vector w satisfying ‖w‖ ≠ 0, an orthogonal matrix P
can be defined according to

\[
P = I - 2\,\frac{w w^{T}}{w^{T} w}.
\]

Suppose that w is defined according to

w = x + sign(x0) ‖x‖2 e0,

for some arbitrary x = (x0, x1, ..., xm)T and where e0 is the first column of
the (m + 1) × (m + 1) identity matrix. Show that x̂, defined according to

x̂ = P x,

satisfies

x̂ = −sign(x0) ‖x‖2 e0.

(7 marks)
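
A quick numerical check of part (ii) in Scilab; the example vector x is an assumption of this sketch, not part of the question:

    // Illustrative sketch only: verify that P*x = -sign(x0)*||x||_2*e0
    x  = [3; 1; 4; 1; 5];                 // arbitrary example vector (assumption)
    n  = size(x, 1);
    e0 = eye(n, 1);                       // first column of the identity matrix
    w  = x + sign(x(1))*norm(x, 2)*e0;
    P  = eye(n, n) - 2*(w*w')/(w'*w);     // Householder reflection
    disp(P*x)                             // compare with -sign(x(1))*norm(x, 2)*e0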


(iii) It is required that y = a0 + a1 x + a2 x² is fitted to a certain set of data in
the least-squares sense. The residuals arising from this fit are given by

\[
r \equiv \begin{pmatrix}
1 & 0.0 & 0.00 \\
1 & 0.2 & 0.04 \\
1 & 0.4 & 0.16 \\
\vdots & \vdots & \vdots \\
1 & 2.0 & 4.00
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}
\;-\;
\begin{pmatrix} 0.4000 \\ 0.2046 \\ 0.0329 \\ \vdots \\ 0.5972 \end{pmatrix},
\]

where (a0, a1, a2) are chosen such that ‖r‖2 is minimised. A series of
orthogonal transformations P ≡ P4 P3 P2 P1, based on the Householder
Reflection matrix, is applied to r to obtain r̂ ≡ P r where

\[
\hat{r} \equiv \begin{pmatrix}
-3.3166 & -3.3166 & -4.6433 \\
 0.0000 &  2.0977 &  4.1952 \\
 0.0000 &  0.0000 &  1.1717 \\
 0.0000 &  0.0000 &  0.0000 \\
 0.0000 &  0.0000 &  0.0000 \\
 \vdots & \vdots & \vdots \\
 0.0000 &  0.0000 &  0.0000
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}
\;-\;
\begin{pmatrix} -0.2517 \\ 0.1432 \\ 0.8349 \\ -0.0662 \\ 0.0000 \\ \vdots \\ 0.0000 \end{pmatrix}.
\]

Determine the values of (a0, a1, a2) which minimise ‖r̂‖2 and state the
value of this minimised ‖r̂‖2. Work correct to four decimal places.
(7 marks)
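
For part (iii), a possible Scilab sketch that extracts (a0, a1, a2) from the upper-triangular block of the transformed system (all values copied from the question):

    // Illustrative sketch only: solve the upper-triangular block for a0, a1, a2
    R = [-3.3166 -3.3166 -4.6433
          0.0000  2.0977  4.1952
          0.0000  0.0000  1.1717];
    c = [-0.2517; 0.1432; 0.8349];
    a = R \ c;                            // back-substitution
    disp(a)
    // the minimised ||r_hat||_2 equals the 2-norm of the entries of the
    // right-hand vector lying below the triangular block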

(iv) We wish to apply a single Householder reflection to the matrix

\[
A = \begin{pmatrix}
1 & 2.2 \\
1 & 2.4 \\
1 & 2.6 \\
1 & 2.8 \\
1 & 3.0
\end{pmatrix}.
\]

Write Scilab code to accomplish this task.
(5 marks)
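
One way part (iv) might be sketched in Scilab, reflecting on the first column of A (the choice of column is an assumption of this sketch):

    // Illustrative sketch only: one Householder reflection applied to A
    A = [1 2.2; 1 2.4; 1 2.6; 1 2.8; 1 3.0];
    [m, n] = size(A);
    x = A(:, 1);                          // reflect on the first column
    w = x + sign(x(1))*norm(x, 2)*eye(m, 1);
    P = eye(m, m) - 2*(w*w')/(w'*w);      // Householder reflection matrix
    disp(P*A)                             // first column reduced to a single nonzero entry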


4 (i) (a) The real symmetric matrix A has eigenvalues λ1 , λ2 , ..., λn satisfying
|λ1 | > |λ2 | ≥ |λ3 | ≥ ... ≥ |λn | > 0

with corresponding linearly independent eigenvectors x1 , x2 , ..., xn


which can be supposed normalized so that the largest element of
each one is unity. The iteration
\[
y_k = A z_{k-1}, \qquad z_k = \frac{y_k}{\mu_k}, \qquad k = 1, 2, \ldots,
\]

where z0 is normalized so that its largest element is unity, and


where µk is chosen so that the largest element of zk is likewise
unity, is called the Power Method. Prove that, for this iteration,
(µk , zk ) → (λ1 , x1 ) as k → ∞. (10 marks)

(b) Supposing that A and z0 are already defined, then write Scilab code
which implements ten iterations of the Power Method to produce
an estimate of (λ1, x1). (5 marks)
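
A possible Scilab sketch for part (i)(b); the matrix A and starting vector z0 shown here are assumptions made only for illustration, since the question takes them as already defined:

    // Illustrative sketch only: ten Power Method iterations
    A = [4 1 0; 1 3 1; 0 1 2];       // example symmetric matrix (assumption)
    z = [1; 1; 1];                   // starting vector, largest element unity (assumption)
    for k = 1:10
        y = A*z;
        [mx, i] = max(abs(y));       // element of largest magnitude
        mu = y(i);                   // scale so the largest element of z is unity
        z = y/mu;
    end
    disp(mu); disp(z)                // estimates of lambda_1 and x_1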

(ii) (a) Suppose now that A is a real symmetric matrix and that an estimate
of (λ2, x2) is also required. The method of Hotelling's Deflation,

\[
B = A - \lambda_1 x_1 x_1^{T},
\]

where the eigenvectors are supposed normalized so that xkT xk = 1,
is useful for this purpose since B has eigenvalues 0, λ2, λ3, ..., λn.
Prove this latter statement. (5 marks)

(b) Write additional Scilab code which takes the results of your Power
Method code (above) and uses Hotelling's Deflation to produce the
matrix B, and then applies ten iterations of the Power Method to
the matrix B to obtain an estimate of (λ2, x2). (5 marks)
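
Continuing the sketch above for part (ii)(b), and reusing its variables A, z and mu (an assumption of this sketch):

    // Illustrative sketch only: Hotelling's Deflation, then ten more Power Method steps
    x1 = z/norm(z, 2);               // renormalise so that x1'*x1 = 1
    B  = A - mu*x1*x1';              // deflated matrix
    z2 = [1; 1; 1];                  // fresh starting vector (assumption)
    for k = 1:10
        y = B*z2;
        [mx, i] = max(abs(y));
        mu2 = y(i);
        z2 = y/mu2;
    end
    disp(mu2); disp(z2)              // estimates of lambda_2 and x_2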


5 (i) The linear system

Ax = b,

where A is an n × n matrix of known coefficients and b is an n × 1 column
vector of known values, can be rearranged, for the solution vector x, in
arbitrarily many ways in the form

x = Hx + d,

which can subsequently be used to define the iteration

\[
x^{(k+1)} = H x^{(k)} + d, \qquad (2)
\]

where H is some n × n matrix and d is a column vector of known values.

(a) Derive a sufficient condition, written in terms of ‖H‖, which will
guarantee that the iteration (2) above will give a sequence of iter-
ates convergent to x, the solution of Ax = b. (5 marks)

(b) Starting from Ax = b, write down the Jacobi iteration, and hence
prove that strict diagonal dominance of the matrix A is sufficient
to guarantee the convergence of the method. (8 marks)

(ii) (a) Use the Jacobi iterative method to obtain two successive approxi-
mations to the solution of the system Ax = b where

\[
A = \begin{pmatrix} 12 & -5 & -2 \\ -5 & 12 & -3 \\ -2 & -3 & 12 \end{pmatrix}
\qquad \text{and} \qquad
b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\]

using x^(0) = (0, 0, 0)^T as the starting vector and working correct to
four significant figures. (5 marks)

(b) Write Scilab code which implements the Jacobi method to generate
an approximate solution to the system Ax = b defined above that
will iterate until ‖Ax − b‖∞ ≤ 10⁻⁵. (7 marks)
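
A minimal Scilab sketch for part (ii)(b), using the splitting A = D + (A − D) with D the diagonal of A; the stopping tolerance is the one quoted in the question:

    // Illustrative sketch only: Jacobi iteration with an infinity-norm stopping test
    A = [12 -5 -2; -5 12 -3; -2 -3 12];
    b = [1; 1; 1];
    D = diag(diag(A));               // diagonal part of A
    x = [0; 0; 0];                   // starting vector x^(0)
    while norm(A*x - b, 'inf') > 1e-5
        x = x + D \ (b - A*x);       // Jacobi update: x = x + D^{-1}(b - A*x)
    end
    disp(x)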

End of Question Paper
