
UNIVERSITY OF BUEA FACULTY OF SCIENCE

DEPARTMENT OF MATHEMATICS
MAT637 Practice Examination

BY
TACHUA MACMILLAN ANJA
SC23P192

QUESTION 1

Given that $A$ is non-singular and $A$ is large. Direct methods of solution
are those methods which, in the absence of round-off errors, yield
the exact solution in a finite number of elementary arithmetic operations.
Examples are:
1. Cramer's Rule. Each unknown is found as a ratio of two deter-
minants. The problem here is that the operation count can be very
large, so some of the objectives outlined for the design of a method will
be violated.
2. Gauss Elimination. Unknowns are systematically eliminated
from the system until an equivalent system which is easier to
solve than the original is obtained.
An iterative method, on the other hand, is one which starts with an initial ap-
proximation and, by applying a suitably chosen algorithm, leads to
successively better approximations. An example is the Jacobi iteration.

Design of the methods
1. Cramer's Rule. Each unknown $x_i$ is obtained as a ratio of
two determinants:
$$x_i = \frac{D_i}{D},$$
where $D_i = \det(A_i)$, $A_i$ being the matrix obtained from $A$ by replacing its $i$-th column with the vector $b$, and $D = \det(A)$.
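The ratio-of-determinants rule can be sketched in Python for a $3\times 3$ system (a minimal illustration, not the exam's prescribed layout; the matrix and right-hand side are borrowed from part (c) of this question):

```python
from copy import deepcopy

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer(A, b):
    """Solve a 3x3 system Ax = b by Cramer's rule: x_i = det(A_i)/det(A),
    where A_i is A with its i-th column replaced by b."""
    D = det3(A)
    x = []
    for i in range(3):
        Ai = deepcopy(A)
        for row in range(3):
            Ai[row][i] = b[row]   # replace column i with b
        x.append(det3(Ai) / D)
    return x

A = [[2.0, 0.0, 1.0], [0.0, 3.0, 1.0], [1.0, 1.0, 2.0]]
b = [9.0, 6.0, 9.0]
x = cramer(A, b)
```

Note the operation count: every unknown costs a full determinant evaluation, which is exactly why the method violates the design objectives for large $n$.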

2. Gauss elimination. Here we have two processes: elimination
and back substitution.
Step 1: Elimination.
For the system $Ax = b$, consider
$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1 \quad (1)$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \quad (2)$$
$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3 \quad (3)$$
and define the multipliers
$$M_{21} = \frac{a_{21}}{a_{11}}, \qquad M_{31} = \frac{a_{31}}{a_{11}}, \qquad a_{11} \neq 0.$$
Now set $(5) = (2) - M_{21}\times(1)$ and $(6) = (3) - M_{31}\times(1)$. We obtain
$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1 \quad (4)$$
$$a_{22}^{(2)}x_2 + a_{23}^{(2)}x_3 = b_2^{(2)} \quad (5)$$
$$a_{32}^{(2)}x_2 + a_{33}^{(2)}x_3 = b_3^{(2)} \quad (6)$$
where, for example,
$$a_{22}^{(2)} = a_{22} - M_{21}a_{12} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}.$$
Now suppose $a_{22}^{(2)} \neq 0$ and set
$$M_{32} = \frac{a_{32}^{(2)}}{a_{22}^{(2)}}.$$
Then setting $(7) = (6) - M_{32}\times(5)$ eliminates $x_2$:
$$a_{33}^{(3)}x_3 = b_3^{(3)} \quad (7)$$
where $a_{33}^{(3)} = a_{33}^{(2)} - M_{32}a_{23}^{(2)}$ and $b_3^{(3)} = b_3^{(2)} - M_{32}b_2^{(2)}$.
Step 2: Back substitution.
From (7),
$$x_3 = \frac{b_3^{(3)}}{a_{33}^{(3)}}. \quad (8)$$
Substituting (8) in (5) gives
$$x_2 = \frac{b_2^{(2)} - a_{23}^{(2)}x_3}{a_{22}^{(2)}} = \frac{a_{33}^{(3)}b_2^{(2)} - a_{23}^{(2)}b_3^{(3)}}{a_{22}^{(2)}a_{33}^{(3)}}, \quad (9)$$
and substituting (8) and (9) in (1) gives
$$x_1 = \frac{b_1 - a_{12}x_2 - a_{13}x_3}{a_{11}}.$$
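The two-stage process above generalizes to $n$ equations; a minimal sketch in pure Python (assuming every pivot is nonzero, and with an illustrative test system of my own choosing):

```python
def gauss_solve(A, b):
    """Forward elimination then back substitution, assuming every pivot
    a_kk^(k) is nonzero (no pivoting)."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):                      # elimination step k
        for i in range(k + 1, n):
            M = A[i][k] / A[k][k]               # multiplier M_ik
            for j in range(k, n):
                A[i][j] -= M * A[k][j]
            b[i] -= M * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]                              # exact solution is (1, 1, 1)
x = gauss_solve(A, b)
```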
3. For the Jacobi iteration scheme, we split the matrix $A$ in the form $A = D -
(L + U)$, where $D$ (the diagonal part) is simple, nonsingular and easy to invert. Set $N =
D$ and $P = L + U$.
Initialize the solution vector with a guess $x^{(0)}$, and generate the sequence
$x^{(1)}, x^{(2)}, \dots, x^{(k)}$ of estimates of $x$ through
$$(D - (L+U))x = b \implies Dx^{(k+1)} = (L+U)x^{(k)} + b.$$
Suppose that at step $k$ the error is $e^{(k)} = x - x^{(k)}$. Since $Dx = (L+U)x + b$
and $Dx^{(k+1)} = (L+U)x^{(k)} + b$, subtracting gives
$$D(x - x^{(k+1)}) = (L+U)(x - x^{(k)}) \implies e^{(k+1)} = D^{-1}(L+U)e^{(k)}.$$
Let $M = D^{-1}(L+U)$. Iterating from step $k$, we have
$e^{(k+p)} = M^p e^{(k)}$, and if $M$ is small (in norm), $M^p$ is smaller still,
showing that the process is indeed converging.
If the coefficient matrix of $Ax = b$ is strictly diagonally dominant, then the Jacobi
iteration converges to the exact solution.
Proof
Since $A$ is diagonally dominant, the diagonal entries satisfy $a_{ii} \neq 0$, $\forall i$. Hence the
sequence
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j=1,\, j\neq i}^{n} a_{ij}x_j^{(k)}\Big)$$
is well defined. Each component of the error satisfies
$$e_i^{(k+1)} = -\sum_{j=1,\, j\neq i}^{n} \frac{a_{ij}}{a_{ii}}\, e_j^{(k)}, \quad i = 1, 2, \dots, n$$
$$\implies \|e^{(k+1)}\| \leq \Big(\sum_{j=1,\, j\neq i}^{n} \Big|\frac{a_{ij}}{a_{ii}}\Big|\Big)\|e^{(k)}\|.$$
Define
$$\lambda = \max_{1\leq i\leq n} \sum_{j=1,\, j\neq i}^{n} \Big|\frac{a_{ij}}{a_{ii}}\Big|.$$
Then
$$\|e^{(k+1)}\| \leq \lambda\|e^{(k)}\| \implies \|e^{(k+1)}\| \leq \lambda^{k+1}\|e^{(0)}\|.$$
By strict diagonal dominance $\lambda < 1$, so taking limits as $k \to \infty$ gives $e^{(k)} \to 0$, the required result.
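The component formula in the proof is directly implementable; a minimal sketch (the strictly diagonally dominant test matrix and right-hand side are illustrative choices, not from the exam):

```python
def jacobi(A, b, tol=1e-10, maxit=500):
    """Jacobi sweep x_i^(k+1) = (b_i - sum_{j != i} a_ij x_j^(k)) / a_ii,
    starting from x^(0) = 0; returns the estimate and iteration count."""
    n = len(A)
    x = [0.0] * n
    for k in range(maxit):
        xn = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
              for i in range(n)]
        if max(abs(xn[i] - x[i]) for i in range(n)) < tol:
            return xn, k + 1
        x = xn
    return x, maxit

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]   # strictly diag. dominant
b = [6.0, 8.0, 9.0]                                        # exact solution (1, 1, 1)
x, its = jacobi(A, b)
```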

b) Fully describe any one method that you would employ to improve on the
solution resulting from any one of the methods that you have described in
(a) above.
Pivoting strategies.
Gauss elimination breaks down at step $k$ if the pivot $a_{kk}^{(k)} = 0$, and loses
accuracy if $|a_{kk}^{(k)}|$ is very small. In these cases we interchange the row containing
$a_{kk}^{(k)}$ with a row below it whose entry in column $k$ is largest in absolute value
(partial pivoting).
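Partial pivoting as described can be sketched as follows (the test matrix is an illustrative choice with a zero leading pivot, on which unpivoted elimination would fail):

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting: at step k, swap row k with
    the row below whose entry in column k is largest in absolute value."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # pivot row index
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            M = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= M * A[k][j]
            b[i] -= M * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                         # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# a_11 = 0, so elimination without pivoting would divide by zero at step 1
A = [[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]]
b = [3.0, 3.0, 6.0]                                        # exact solution (1, 1, 1)
x = gauss_partial_pivot(A, b)
```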

c)
(i) Required to show: the procedure of writing $A$ as $A = LL^T$ will never break down
because of a zero pivot.
Proof
The entries of $L$ can be computed directly by solving the system of equations
$$A = LU \iff \sum_{k=1}^{n} l_{ik}u_{kj} = a_{ij},$$
where $l_{ik}$ and $u_{kj}$ are the entries of the matrices $L$ and $U$ respectively.
If $U = L^T$, then $A = LL^T$ and
$$\sum_{k=1}^{n} l_{ik}(l_{kj})^T = a_{ij} \iff \sum_{k=1}^{n} l_{ik}l_{jk} = a_{ij}.$$
Taking $i = j$ and noting that $l_{ik} = 0$ for $k > i$ (L is lower triangular),
$$\sum_{k=1}^{i} l_{ik}^2 = a_{ii} \iff l_{ii}^2 = a_{ii} - \big(l_{i1}^2 + l_{i2}^2 + \cdots + l_{i,i-1}^2\big).$$
Since $A$ is symmetric positive definite, each leading principal submatrix of $A$ is positive
definite, which forces
$$l_{i1}^2 + l_{i2}^2 + \cdots + l_{i,i-1}^2 < a_{ii},$$
so that $l_{ii}^2 > 0$ and every pivot $l_{ii}$ is a nonzero real number.
Thus $A = LL^T$ will never break down because of a zero pivot.

Now we show that the matrix
$$A = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 3 & 1 \\ 1 & 1 & 2 \end{pmatrix}$$
is positive definite, and solve by $LL^T$ decomposition the problem $Ax = b$,
where $b = (9, 6, 9)^T$.
Solution
$A$ is symmetric, and its leading principal minors are $2 > 0$,
$\det\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} = 6 > 0$ and $\det A = 7 > 0$,
so $A$ is positive definite.
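The $LL^T$ solve for this system can be sketched numerically in pure Python (a minimal textbook-style Cholesky routine, not necessarily the exam's prescribed layout):

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L L^T; the diagonal square roots are the
    pivots, which stay positive when A is symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_cholesky(A, b):
    """Solve Ax = b via Ly = b (forward sweep), then L^T x = y (backward)."""
    L = cholesky(A)
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[2.0, 0.0, 1.0], [0.0, 3.0, 1.0], [1.0, 1.0, 2.0]]
b = [9.0, 6.0, 9.0]
x = solve_cholesky(A, b)
```

For this system the solution is $x = (24/7,\ 9/7,\ 15/7)^T$, which can be checked by substituting back into $Ax = b$.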

d)

i) No. The Gauss-Seidel iteration converges when the matrix is ei-
ther strictly diagonally dominant or symmetric and positive definite,
whereas the Jacobi iteration is only guaranteed to converge when the
system is strictly diagonally dominant. So the Gauss-Seidel iteration might con-
verge while the Jacobi iteration does not.
ii) For the Jacobi method we split $A = M - N$, where in this
method we denote $M = D$ and $N = L + U$; the iteration matrix is
$$B_J = M^{-1}N = D^{-1}(L + U).$$
For this method to converge, the spectral radius of $B_J$ should be
less than 1, i.e.
$$\rho(B_J) = \max_{\lambda \in \lambda(B_J)} |\lambda| < 1.$$

For the Gauss-Seidel method we split $A = M - N$,
where in this method we denote $M = D - L$ and $N = U$; the
iteration matrix is $B_G = M^{-1}N = (D - L)^{-1}U$.
For the Gauss-Seidel method to converge, the spectral radius of $B_G$
should be less than one, i.e.
$$\rho(B_G) = \max_{\lambda \in \lambda(B_G)} |\lambda| < 1.$$

These are the conditions that establish a convergence criterion.


Now, looking at what happens during the corresponding iterations,
the Gauss-Seidel method is indeed faster since it operates on as much
updated data as possible within each sweep, unlike the Jacobi method, which
relies only on values from the previous iteration and therefore takes more iterations
for the relative error to fall below the set tolerance. In fact, the
convergence rate of Gauss-Seidel is roughly twice the speed of convergence of the
Jacobi method, especially for strictly diagonally dominant matrices, i.e. those with
$$|a_{ii}| > \sum_{j=1,\, j\neq i}^{n} |a_{ij}|, \quad \forall i = 1, 2, \dots, n.$$
For such matrices both methods converge, so there is no strictly diagonally
dominant matrix for which the Jacobi iteration is convergent but the Gauss-Seidel
iteration for the same problem is divergent.
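The speed comparison above can be checked empirically by counting iterations of both schemes on a strictly diagonally dominant system (a minimal sketch; the matrix and right-hand side are illustrative choices):

```python
def run(update, A, b, tol=1e-10, maxit=1000):
    """Iterate until the max change between sweeps drops below tol;
    return the iteration count."""
    n = len(A)
    x = [0.0] * n
    for k in range(maxit):
        xn = update(A, b, x)
        if max(abs(xn[i] - x[i]) for i in range(n)) < tol:
            return k + 1
        x = xn
    return maxit

def jacobi_step(A, b, x):
    """All components updated from the previous iterate only."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """Each component update immediately reuses the freshly computed ones."""
    n = len(A)
    xn = x[:]
    for i in range(n):
        xn[i] = (b[i] - sum(A[i][j] * xn[j] for j in range(n) if j != i)) / A[i][i]
    return xn

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]   # strictly diag. dominant
b = [6.0, 8.0, 9.0]                                        # exact solution (1, 1, 1)
k_jacobi = run(jacobi_step, A, b)
k_gs = run(gauss_seidel_step, A, b)
```

On this example Gauss-Seidel needs noticeably fewer sweeps than Jacobi to reach the same tolerance, as the text claims.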
iii) We find the range of values of $c$ for which the Gauss-Seidel iteration
for the solution of $Bx = b$, for some $b$, converges, where
$$B = \begin{pmatrix} 1 & c & 1 \\ c & 1 & c \\ -c^2 & c & 1 \end{pmatrix}.$$
Diagonal dominance fails here for every $c$ (row 1 would require $|c| + 1 < 1$),
so we examine the spectral radius of the iteration matrix $B_G = (D - L)^{-1}U$
directly. The eigenvalues $\lambda$ of $B_G$ satisfy $\det(\lambda(D - L) - U) = 0$, i.e.
$$\det\begin{pmatrix} \lambda & c & 1 \\ \lambda c & \lambda & c \\ -\lambda c^2 & \lambda c & \lambda \end{pmatrix} = \lambda^3 - \lambda c^4 = 0,$$
so $\lambda \in \{0,\ c^2,\ -c^2\}$ and $\rho(B_G) = c^2$. The iteration converges if and only if
$$\rho(B_G) = c^2 < 1 \iff -1 < c < 1.$$
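One can cross-check numerically that the Gauss-Seidel iteration matrix $B_G = (D-L)^{-1}U$ of this $B$ has $\operatorname{tr}(B_G) = 0$ and $\operatorname{tr}(B_G^2) = 2c^4$, consistent with eigenvalues $\{0, c^2, -c^2\}$ and hence $\rho(B_G) = c^2$. A minimal sketch, building $B_G$ column by column by forward substitution (the value $c = 0.5$ is an arbitrary test choice):

```python
def gs_iteration_matrix(c):
    """B_G = (D - L)^{-1} U for B = [[1, c, 1], [c, 1, c], [-c^2, c, 1]],
    where D - L is the lower triangle of B and U = -(strict upper of B)."""
    DL = [[1.0, 0.0, 0.0], [c, 1.0, 0.0], [-c * c, c, 1.0]]
    U = [[0.0, -c, -1.0], [0.0, 0.0, -c], [0.0, 0.0, 0.0]]
    B = [[0.0] * 3 for _ in range(3)]
    for col in range(3):
        y = [0.0] * 3
        for i in range(3):          # forward substitution on (D - L) y = U[:, col]
            y[i] = U[i][col] - sum(DL[i][k] * y[k] for k in range(i))
        for i in range(3):
            B[i][col] = y[i]
    return B

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

c = 0.5
B = gs_iteration_matrix(c)
B2 = matmul(B, B)
tr_B = sum(B[i][i] for i in range(3))     # eigenvalue sum: 0 + c^2 - c^2 = 0
tr_B2 = sum(B2[i][i] for i in range(3))   # sum of squares: c^4 + c^4 = 2 c^4
```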

Question 2

i) Let $u \in C^3([0,1];\mathbb{R})$. For a quadratic interpolating polynomial we use 3 points
$x_0, x_1, x_2$, and the interpolation error is
$$u(x) - P_2(x) = \frac{u^{(3)}(\xi)(x - x_0)(x - x_1)(x - x_2)}{3!}$$
for some $\xi$ in the interval, where $x_0, x_1, x_2$ are equally spaced. Let $h = x_1 - x_0 = x_2 - x_1$
and write $x = x_1 + rh$, so that $r = 1$ gives $x = x_2$ and $r = -1$ gives $x = x_0$.
Define
$$\Pi_3(x) = (x - x_0)(x - x_1)(x - x_2)$$
$$= (x_1 + rh - x_0)(x_1 + rh - x_1)(x_1 + rh - x_2)$$
$$= (rh + h)(rh)(rh - h)$$
$$= h^3(r + 1)\,r\,(r - 1) = h^3(r^3 - r) = h^3\phi(r),$$
where $\phi(r) = r^3 - r$. Then
$$\phi'(r) = 3r^2 - 1, \qquad \phi''(r) = 6r.$$
The critical points are $r = \pm\frac{1}{\sqrt{3}}$; since $\phi''(-\frac{1}{\sqrt{3}}) = -\frac{6}{\sqrt{3}} < 0$,
$\phi$ has a local maximum at $r = -\frac{1}{\sqrt{3}}$.
We note that $x_0 \leq x \leq x_2$ corresponds to $-1 \leq r \leq 1$, and
$$\max_{-1\leq r\leq 1} |\phi(r)| = \Big|\phi\Big(-\frac{1}{\sqrt{3}}\Big)\Big| = \frac{2}{3\sqrt{3}}.$$
Hence
$$|u(x) - P_2(x)| = \Big|\frac{u^{(3)}(\xi)\,\Pi_3(x)}{6}\Big| \leq \max_{x\in[0,1]}\Big|\frac{d^3u}{dx^3}\Big| \cdot \frac{1}{6}\cdot\frac{2h^3}{3\sqrt{3}}$$
$$\implies |u(x) - P_2(x)| \leq \frac{h^3}{9\sqrt{3}}\max\Big\{\Big|\frac{d^3u}{dx^3}\Big| : x\in[0,1]\Big\} \cdots\cdots (1)$$
(a) Now we calculate (1) when $h = 0.01$ and $u(x) = \sin\pi x$.
If
$$u(x) = \sin\pi x,$$
$$u'(x) = \pi\cos\pi x,$$
$$u''(x) = -\pi^2\sin\pi x,$$
$$u'''(x) = -\pi^3\cos\pi x,$$
then $\max_{x\in[0,1]}|u'''(x)| = \pi^3$, and
$$|u(x) - P_2(x)| \leq \frac{(0.01)^3}{9\sqrt{3}}\,\pi^3 \approx 1.99\times 10^{-6}.$$
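The bound in (1) can be checked against the observed interpolation error; a minimal sketch (the base point $x_0 = 0.30$ is an arbitrary interior choice, and the maximum error is estimated by sampling):

```python
import math

def p2(x, xs, ys):
    """Quadratic Lagrange interpolant through (xs[i], ys[i]), i = 0, 1, 2."""
    total = 0.0
    for i in range(3):
        li = 1.0
        for j in range(3):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

h = 0.01
x0 = 0.30                                   # illustrative interior point of [0, 1]
xs = [x0, x0 + h, x0 + 2 * h]
ys = [math.sin(math.pi * t) for t in xs]

# Theoretical bound (1): h^3 / (9 sqrt(3)) * max |u'''|, with max |u'''| = pi^3
bound = h**3 * math.pi**3 / (9 * math.sqrt(3))

# Observed worst-case error over the interpolation interval [x0, x0 + 2h]
worst = max(abs(math.sin(math.pi * t) - p2(t, xs, ys))
            for t in (xs[0] + k * 2 * h / 1000 for k in range(1001)))
```

The observed error stays below the bound, as the derivation guarantees.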
(b)
ii) We show that if the function $f$ defined on the interval $[a, b]$ is approx-
imated by a polynomial of degree 2, then
$$\int_a^b f(x)\,dx \approx \int_a^b P_2(x)\,dx = \frac{1}{6}(b - a)\Big[f(a) + 4f\Big(\frac{a+b}{2}\Big) + f(b)\Big].$$
Proof
Let $m = \frac{a+b}{2}$ and $h = \frac{b-a}{2}$. We approximate $f$ with the quadratic
through $a$, $m$, $b$ in Newton divided-difference form:
$$P_2(x) = f[a] + f[a, m](x - a) + f[a, m, b](x - a)(x - m).$$
Integrating term by term,
$$\int_a^b (x - a)\,dx = \frac{(b-a)^2}{2} = 2h^2,$$
and, substituting $t = x - m$,
$$\int_a^b (x - a)(x - m)\,dx = \int_{-h}^{h}(t + h)t\,dt = \int_{-h}^{h} t^2\,dt = \frac{2h^3}{3}.$$
The divided differences are
$$f[a, m] = \frac{f(m) - f(a)}{h}, \qquad f[a, m, b] = \frac{f[m, b] - f[a, m]}{b - a} = \frac{f(b) - 2f(m) + f(a)}{2h^2}.$$
Hence
$$\int_a^b P_2(x)\,dx = 2h\,f(a) + 2h^2\cdot\frac{f(m) - f(a)}{h} + \frac{2h^3}{3}\cdot\frac{f(b) - 2f(m) + f(a)}{2h^2}$$
$$= 2h\,f(m) + \frac{h}{3}\big(f(a) - 2f(m) + f(b)\big) = \frac{h}{3}\big[f(a) + 4f(m) + f(b)\big].$$
Since $h = \frac{b-a}{2}$, this is
$$\int_a^b f(x)\,dx \approx \int_a^b P_2(x)\,dx = \frac{1}{6}(b - a)\Big[f(a) + 4f\Big(\frac{a+b}{2}\Big) + f(b)\Big].$$

iii) With $a = 0.1$, $b = 1.3$ and midpoint $\frac{a+b}{2} = 0.7$,
$$\int_{0.1}^{1.3} 5xe^{-2x}\,dx \approx \frac{1}{6}(1.3 - 0.1)\big[f(0.1) + 4f(0.7) + f(1.3)\big],$$
where $f(x) = 5xe^{-2x}$. Now
$$f(0.1) = 5(0.1)e^{-0.2} = 0.4094$$
$$f(0.7) = 5(0.7)e^{-1.4} = 0.8631$$
$$f(1.3) = 5(1.3)e^{-2.6} = 0.4828$$
$$\therefore \int_{0.1}^{1.3} f(x)\,dx \approx \int_{0.1}^{1.3} P_2(x)\,dx = 0.2\big[0.4094 + 4(0.8631) + 0.4828\big] = 0.8689.$$
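The computation can be verified with a one-panel Simpson routine, compared against the closed-form antiderivative $F(x) = -\big(\tfrac{5x}{2} + \tfrac{5}{4}\big)e^{-2x}$ (a quick check of my own, not part of the exam solution):

```python
import math

def simpson(f, a, b):
    """One-panel Simpson rule: (b - a)/6 * [f(a) + 4 f((a+b)/2) + f(b)]."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

f = lambda x: 5.0 * x * math.exp(-2.0 * x)
approx = simpson(f, 0.1, 1.3)

# Closed-form antiderivative F(x) = -(5x/2 + 5/4) e^{-2x} for comparison
F = lambda x: -(2.5 * x + 1.25) * math.exp(-2.0 * x)
exact = F(1.3) - F(0.1)
```

With one panel over a fairly wide interval the rule is only moderately accurate here; subdividing $[0.1, 1.3]$ (composite Simpson) would shrink the gap to the exact value.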
