
Study Questions - Applied Numerical Methods Part 1

This document contains study questions and solutions for applied numerical methods. It is divided into sections by different authors and covers topics related to numerical solutions of differential equations, including:
- conditions for boundedness and stability of solutions
- critical points and stability of nonlinear systems
- Newton's method for computing critical points
- diagonalizability of matrices
- the matrix exponential and its relation to solutions of systems of ODEs
- initial value problems
- eigenvalues and eigenvectors
- conditions for boundedness of solutions to difference equations


STUDY QUESTIONS - APPLIED NUMERICAL METHODS PART 1

Contents

Question 1-8, By Mengxi Wu
Question 9-17, Solutions by Mengxi Wu, Written by Patrik Rufelt
Question 18-26, By Henrik Sjöström
Question 27-35, By Elin Hynning
Question 36-44, By Lars-Lowe Sjösund
Question 45-51, By Olof Bergvall
Question 52-54, By Gustav Sædén Ståhl
Question 55-60, By Alexander Eriksson

Question 1-8, By Mengxi Wu

1. Given a system of ODE's ẏ = Ay, where A is an n × n matrix. Give a sufficient condition that the analytical solution is bounded for all t > 0. Will there be bounded solutions when the matrix A = ...? (p.21).

The system is asymptotically stable if all the eigenvalues λi of A have real parts satisfying Re(λi) < 0, i.e. if all eigenvalues are situated in the left half of the complex plane; in that case the solution is always bounded. If some eigenvalue has Re(λi) = 0, the solution is still bounded provided every such eigenvalue is simple (not repeated). If Re(λi) = 0 for a repeated eigenvalue, the specific case has to be investigated.
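As a quick numerical illustration (my own sketch, not part of the original answer), the eigenvalue condition can be checked with NumPy; the matrix A below is just a placeholder example:

    import numpy as np

    A = np.array([[-2.0, 1.0],
                  [0.0, -0.5]])          # placeholder example matrix

    lam = np.linalg.eigvals(A)
    if np.all(lam.real < 0):
        print("All Re(lambda) < 0: asymptotically stable, solution bounded")
    elif np.all(lam.real <= 0):
        print("Some Re(lambda) = 0: bounded only if those eigenvalues are simple")
    else:
        print("Some Re(lambda) > 0: solution unbounded")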

2. For a nonlinear system of ODE's ẏ = f(y), what is meant by the critical points of the system? Give a sufficient condition that a critical point is stable. Are the critical points of the following system stable? (p.23).

A point ȳ is a critical point of the system if it satisfies f(ȳ) = 0. Such points are also referred to as stationary or steady-state solutions.
If we perturb ȳ to ȳ + δy(t) in a neighborhood of ȳ, then the critical point is asymptotically stable if all the perturbed solutions converge to ȳ. If there is any perturbed solution which diverges from ȳ, then the critical point is unstable.

3. Given a nonlinear system of ODE's ẏ = f(y). Assume that the right hand side consists of differentiable functions. Describe Newton's method for computing the critical points of the system. For the following system, choose an initial vector and make one iteration with Newton's method. (p.181).

Newton's method is based on a Taylor expansion of f(y) around a point y^(i), where y^(i) is the current estimate of a critical point. The function f(y) is assumed to be twice differentiable. We have

f(y) ≈ f(y^(i)) + J(y^(i))(y − y^(i)),

where J(y^(i)) is the Jacobian ∂f/∂y of f(y) evaluated at y^(i). Since we are searching for critical points we seek f(y) = 0, so we set the left hand side to zero and obtain

f(y^(i)) + J(y^(i))(y^(i+1) − y^(i)) = 0.

This gives us the iteration

y^(i+1) = y^(i) − J(y^(i))^(-1) f(y^(i)).

With a starting guess y^(0), we iterate until we find the critical point.
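A minimal sketch of this iteration in Python (my own illustration; the system f and its Jacobian J below are placeholders, not the system from the exam problem):

    import numpy as np

    def f(y):                       # example system, f(y) = 0 at the critical point
        return np.array([y[0]**2 + y[1] - 1.0,
                         y[0] - y[1]])

    def J(y):                       # Jacobian of f
        return np.array([[2*y[0], 1.0],
                         [1.0, -1.0]])

    y = np.array([1.0, 0.5])        # starting guess y^(0)
    for _ in range(20):
        dy = np.linalg.solve(J(y), -f(y))   # solve J dy = -f instead of inverting J
        y = y + dy
        if np.linalg.norm(dy) < 1e-12:
            break
    print("critical point approx:", y)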

4. What does it mean that an n × n matrix A is diagonalizable? Is the following matrix A diagonalizable?

That A is diagonalizable means that we can write it as a product built from its eigenvectors and eigenvalues in the following manner:

A = SΛS⁻¹,

where the columns of S are the eigenvectors s1, s2, ..., sn and Λ is a diagonal matrix containing λ1, λ2, ..., λn.
A is diagonalizable exactly when it has n linearly independent eigenvectors.

5. For a system of ODE's ẏ = Ay, y(0) = y0, the solution can be written as y(t) = e^{At} y0. What is meant by the matrix e^{At}? What is e^{At} when A = ...? (p.16-18).

Assume that one solution of the ODE has the form

y(t) = c e^{λt}.

We want to find λ and c. Inserting this into the ODE gives

c λ e^{λt} = A c e^{λt},

giving us

λc = Ac.

This is an eigenvalue problem where λ and c are an eigenvalue and eigenvector of A. If the eigenvectors of A are linearly independent, we can write

A = SΛS⁻¹,

where S and Λ are defined as in the question above. The n solutions corresponding to the eigenvalues are

y_i(t) = c_i e^{λ_i t},

and the general solution is a linear combination of them,

y(t) = Σ_{i=1}^{n} α_i c_i e^{λ_i t},

which can also be written as

y(t) = S e^{Λt} ᾱ.

Inserting the initial value y(0) = y0 we obtain y0 = S ᾱ, i.e. ᾱ = S⁻¹ y0, giving us

y(t) = S e^{Λt} S⁻¹ y0.

Thus

e^{At} = S e^{Λt} S⁻¹,

where e^{Λt} is the diagonal matrix with entries e^{λ_i t}.
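As an illustration (my own sketch, not part of the original answer), the identity e^{At} = S e^{Λt} S⁻¹ can be checked numerically against SciPy's expm for a diagonalizable example matrix:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])        # example diagonalizable matrix
    t = 0.7

    lam, S = np.linalg.eig(A)           # columns of S are eigenvectors
    E1 = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S)
    E2 = expm(A * t)                    # reference computation
    print(np.allclose(E1, E2))          # True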

6. For a system of ODE's ẏ = f(y), what is meant by an initial value problem?

An initial value problem is the problem of solving ẏ = f(y) together with an initial value y(t0) = y0 prescribed at t = t0, which gives us a unique solution to the ODE.

7. What is meant by the eigenvalues and the eigenvectors of an n × n matrix A? The eigenvalues are roots of the characteristic equation. What is the characteristic equation of A? Which are the eigenvalues and eigenvectors for the matrix A = ...? (p.17).

The eigenvalues λi and eigenvectors ci of a given matrix A satisfy

A ci = λi ci.

The eigenvalues can be found as the roots of the characteristic equation

det(A − λI) = 0,

and for each eigenvalue λi the corresponding eigenvector ci is obtained by solving (A − λi I) ci = 0.

8. Given a difference equation yn+1 = a1 yn + a2 yn−1 + ... + ak+1 yn−k, where a1, a2, ..., ak+1 are real. Formulate a sufficient condition that the sequence yn is bounded for all n. Is the solution bounded when n > 0 for the following difference equation? ... (p.185).

The characteristic equation corresponding to the homogeneous difference equation is

μ^(k+1) − a1 μ^k − a2 μ^(k−1) − ... − a_(k+1) = 0.

Denote the roots of the characteristic equation μ1, μ2, ..., μ_(k+1). The sequence yn is bounded as n → ∞ if |μi| ≤ 1 for all simple roots and |μi| < 1 for all repeated roots.
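A small sketch (my own illustration) of this check using numpy.roots; the coefficients a below are placeholders:

    import numpy as np

    # y_{n+1} = a1*y_n + a2*y_{n-1} + a3*y_{n-2}; coefficients are placeholders
    a = [0.5, 0.3, 0.1]
    coeffs = [1.0] + [-ai for ai in a]       # mu^3 - a1*mu^2 - a2*mu - a3
    mu = np.roots(coeffs)
    print("roots:", mu)
    print("max |mu|:", np.max(np.abs(mu)))   # bounded if <= 1 (strictly < 1 for repeated roots)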

Question 9-17, Solutions by Mengxi Wu, Written by Patrik Rufelt

9. Given a difference equation on the form ∇yn + (1/2)∇²yn = 0. Is the solution yn bounded for all n > 0?
Answer: (Pages: 185-186)
With ∇yn = yn − yn−1 and ∇²yn = yn − 2yn−1 + yn−2 inserted in the equation we get

yn − yn−1 + (1/2)[yn − 2yn−1 + yn−2] = 0
(3/2)yn − 2yn−1 + (1/2)yn−2 = 0
yn − (4/3)yn−1 + (1/3)yn−2 = 0

with the characteristic equation μ² − (4/3)μ + 1/3 = 0, which has the roots μ = 2/3 ± 1/3, i.e. μ1 = 1 and μ2 = 1/3. Both roots satisfy |μi| ≤ 1 and the root on the unit circle, μ1 = 1, is simple, so the solution is bounded (but not decaying).

10. Formulate the following 3rd order ODE as a system of three first order ODE's ......
Answer: (Pages: 13-14)
If you are unsure of how to rewrite an nth order ODE as a system of n first order ODE's, read pages 13-14.
11. Give the recursion formula when the Euler backward method with time step h is used on e.g. the ODE-problem y′ = y³ + t, y(0) = 1. In each time step t_{n+1} a nonlinear equation must be solved. Formulate that equation on the form F(y_{n+1}) = 0 and give the iteration formula when Newton's method is applied to this equation.
Answer: (Euler backward: p.42)
Euler backward: uk = uk−1 + h f(uk, tk), tk = tk−1 + h.
For this problem f(y, t) = y³ + t, so

yk = yk−1 + h(yk³ + tk)  ⇒  F(yk) = yk − yk−1 − h yk³ − h tk = 0.

Newton's method applied to F(yk) = 0 gives the iteration

yk^(i+1) = yk^(i) − F(yk^(i)) / F′(yk^(i)) = yk^(i) − [yk^(i) − yk−1 − h (yk^(i))³ − h tk] / [1 − 3h (yk^(i))²].
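A rough Python sketch of these backward Euler steps with an inner Newton iteration for ẏ = y³ + t, y(0) = 1 (my own illustration of the formulas above; the number of steps and tolerances are arbitrary):

    import numpy as np

    def backward_euler_step(y_prev, t_new, h, newton_iters=10):
        # Solve F(y) = y - y_prev - h*(y**3 + t_new) = 0 with Newton's method
        y = y_prev                          # start guess: previous value
        for _ in range(newton_iters):
            F  = y - y_prev - h * (y**3 + t_new)
            dF = 1.0 - 3.0 * h * y**2
            y -= F / dF
        return y

    h, y, t = 0.01, 1.0, 0.0
    for _ in range(10):                     # ten steps of backward Euler
        t += h
        y = backward_euler_step(y, t, h)
    print(t, y)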
12. What is meant by an explicit method for solving an initial value
problem
ẏ = f (y), y(0) = y0
Give at least two examples (with their formulas) of explicit methods
often used to solve such a problem.

Answer: (Pages: 42, 55)

An explicit method only uses data we already know to calculate the next step. Examples:
Euler forward: uk = uk−1 + h f(uk−1, tk−1), tk = tk−1 + h
Classical Runge-Kutta (RK4): uk = uk−1 + (h/6)(k1 + 2k2 + 2k3 + k4), tk = tk−1 + h,
where
k1 = f(uk−1, tk−1)
k2 = f(uk−1 + h k1/2, tk−1 + h/2)
k3 = f(uk−1 + h k2/2, tk−1 + h/2)
k4 = f(uk−1 + h k3, tk−1 + h)
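A compact sketch of these two explicit steppers in Python (my own illustration; the right-hand side f is an arbitrary test problem):

    import numpy as np

    def euler_forward_step(f, u, t, h):
        return u + h * f(u, t)

    def rk4_step(f, u, t, h):
        k1 = f(u, t)
        k2 = f(u + h * k1 / 2, t + h / 2)
        k3 = f(u + h * k2 / 2, t + h / 2)
        k4 = f(u + h * k3, t + h)
        return u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    f = lambda u, t: -u                     # test problem y' = -y, exact solution e^{-t}
    u, t, h = 1.0, 0.0, 0.1
    for _ in range(10):
        u = rk4_step(f, u, t, h)
        t += h
    print(u, np.exp(-1.0))                  # RK4 result vs exact value at t = 1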
13. What is meant by an implicit method for solving an initial value
problem
ẏ = f (y), y(0) = y0
Give at least two examples of implicit methods (with their formulas)
often used to solve such a problem.
Answer: (Pages: 42, 54)
An implicit method also uses data from the step currently being computed (the unknown uk appears on the right-hand side), so an equation must be solved in each step. Examples:
Euler backward: uk = uk−1 + h f(uk, tk), tk = tk−1 + h
Trapezoidal method: uk = uk−1 + (h/2)[f(uk, tk) + f(uk−1, tk−1)], tk = tk−1 + h.
14. What is meant by the order of accuracy for a method used to solve
an initial value problem
ẏ = f (y), y(0) = y0
What is the order of the Euler forward method and the classical Runge-
Kutta method?
Answer: ( Pages 44, 55)
The global error behaves as e ≈ C·h^p, where h is the stepsize and p is the order of accuracy.
Accuracy of Euler forward: order 1
Accuracy of classical Runge-Kutta (RK4): order 4.
15. What is meant by automatic stepsize control for a method used to
solve an initial value problem ẏ = f (y), y(0) = y0 ?
Answer:
The local error δ is estimated in each step (for instance by comparing two methods of different order, or by step doubling).
(1) Accept step k if δ ≤ tol.
(2) Reject step k if δ > tol; restart from tk−1 with a new, smaller stepsize, e.g. hnew = h/2, and return to (1). (Correspondingly, the stepsize can be increased when δ is much smaller than tol.)
16. Why is the midpoint method not suited for ODE-systems where the eigenvalues of the Jacobian are real and negative? Are there any ODE-systems for which the method could be suitable?

Answer:
The (leap frog) midpoint method, of accuracy O(h²), is used to solve y′(t) = f(t, y(t)), y(t0) = y0:

uk = uk−2 + 2h f(tk−1, uk−1), tk = tk−1 + h, k = 2, 3, ..., N.

Applying it to the test equation u′ = f = λu gives the difference equation

uk − 2hλ uk−1 − uk−2 = 0,

with characteristic equation

μ² − 2λhμ − 1 = 0,  (μ − μ1)(μ − μ2) = 0,  μ1 μ2 = −1,  μ1 + μ2 = 2hλ.

Since μ1 μ2 = −1, both roots must lie on the unit circle for the solution to stay bounded. Write one root in polar form, μ1 = e^{iφ}; then μ2 = −e^{−iφ} and

μ1 + μ2 = e^{iφ} − e^{−iφ} = 2i sin(φ) = 2hλ  ⇒  hλ = i sin(φ).

Only purely imaginary values of hλ lie in the stability region, so the method is unsuitable when the eigenvalues of the Jacobian are real and negative. It can be suitable for systems whose Jacobian has purely imaginary eigenvalues, e.g. undamped oscillatory problems.
17. Explain the difference between local error and global error for the explicit Euler method.
Answer:
Global error: ek = u(tk) − uk = O(h).
The global error can be regarded as the accumulated effect of all the local errors up to the point tk (note that it is seldom simply the sum of all the local errors). In short: how much the numerical solution differs from the "real" value up to step tk.
Local error: l(tk, h) = [u(tk) − u(tk−1)]/h − f(tk−1, u(tk−1)).
The local error can be regarded as the residual when the exact solution is inserted into the explicit Euler formula; it is basically the error committed in one single step.
In conclusion: global error = error up to a certain point; local error = error in one point.

Question 18-26, By Henrik Sjöström

(2) 18. An ODE-problem, initial value and boundary value problem, can be
solved by a discretization method or an ansatz method. Give a brief description of
what is meant by these two method classes.

Discretization: approximating the derivatives by calculating differences between nearby points, resulting in a set of equations that can be used to find the solution in the discretised set of points.

Ansatz: approximating the solution with a best fit from a set of ansatz functions, often exponential functions.

(4) 19. When an initial value problem ẏ = f (y), y(0) = y0 is solved with a
discretization method, what is meant by the stability area in the complex hq-plane
for the method? Give a sketch of the stability area for the method .........

The stability area is the region in the complex plane of values of the product hλ for which the numerical solution remains bounded, where λ is an eigenvalue of the Jacobian of f.

To get the stability area of a method, apply the method to the test equation ẏ = λy (i.e. replace f(y) by λy and ẏ by the method's difference approximation). Rewriting the resulting recursion on the form y_{k+1} = a(hλ) y_k, the numerical solution is bounded if |a(hλ)| ≤ 1; the set of values of hλ for which this condition is fulfilled is the stability area.

(2) 20. What is meant by a stiff system of ODE's ẏ = f(y)?

A system where the eigenvalues λi of the Jacobian J = ∂f/∂y are orders of magnitude different from each other, and where Re(λi) ≤ 0 for all i.

(2) 21. For a linear system of ODE's ẏ = Ay, where the eigenvalues of A are λ1, λ2, ..., λn, when is the system stable? Assume that all eigenvalues are real and negative; when is the system stiff?

The system is stable when Re(λi) < 0 for all i, and is considered stiff when the eigenvalues are of very different magnitude, usually differing by factors of 10² or larger.

(2) 22. Describe some discretization methods that are suitable for stiff initial value ODE problems.

Euler implicit:
(uk − uk−1)/h = f(tk, uk); from this equation one solves for uk, which usually has to be done numerically in every step, making the method somewhat inefficient per step.
Trapezoidal method:
a symmetric combination of Euler's implicit and explicit methods,
(uk − uk−1)/h = (1/2)[f(tk, uk) + f(tk−1, uk−1)]; this method is not recommended for very stiff problems.

Both Euler implicit and the trapezoidal method require solving for uk in every step.

(2) 23. Which is the computational problem when a stiff initial value ODE-system is solved with an explicit method?

A very large spread of the eigenvalues λ means that the stability condition of the explicit method forces a very small step h (set by the largest |λ|), while the slowly varying part of the solution, corresponding to a small |λ|, is usually the one of interest and must be followed over a long time. This requires very many steps, so the problem becomes very expensive to solve.

(3) 24. Given e.g. the following ODE-system ẏ = −100y+z , y(0) = 1, ż = −0.1z ,
z(0) = 1. For which values of the stepsize h is the Euler forward method stable?
Same question for the Euler backward method.

Calculate the eigenvalues of the system u̇ = Au from the equation det(A − λI) = 0. In this case: λ1 = −100 and λ2 = −0.1.

Euler forward approximates the derivative according to (uk+1 − uk)/h = Auk, which gives uk+1 = uk + hAuk. Replacing A by an eigenvalue λ gives uk+1 = (1 + hλ)uk, i.e. uk = (1 + hλ)^k u0, which converges if |1 + hλ| < 1. The most restrictive eigenvalue is λ1 = −100, so in this particular case the method is stable for 0 < h < 1/50.

Euler backward approximates the derivative according to (uk+1 − uk)/h = Auk+1, which gives uk+1(1 − hλ) = uk and further uk = (1 − hλ)^(−k) u0. This converges if |1 − hλ|^(-1) < 1, which for these negative real eigenvalues holds for all h > 0.

(4) 25. Given the vibration equation mẍ + cẋ + kx = 0, x(0) = 0, ẋ(0) = v0. Use scaling of x and t to formulate this ODE on dimensionless form. Determine scaling factors so that the scaled equation contains as few parameters as possible.

Set x = aξ and t = bτ. Then

∂²x/∂t² = (a/b²) ∂²ξ/∂τ²  and  ∂x/∂t = (a/b) ∂ξ/∂τ.

Inserting this into ẍ + (c/m)ẋ + (k/m)x = 0 gives

(a/b²) ξ̈ + (ac/(bm)) ξ̇ + (ak/m) ξ = 0.

Multiplying by b²/a (which removes a) and setting b = m/c gives

ξ̈ + ξ̇ + (mk/c²) ξ = 0.

From ẋ(0) = v0 = (a/b) ξ̇(0), selecting a = v0 b gives ξ̇(0) = 1, and x(0) = 0 gives ξ(0) = 0. The scaled equation thus contains the single parameter mk/c².

(2) 26. Describe the finite difference method used to solve a boundary value problem y″ + a(x)y′ + b(x)y = c(x), y(0) = a, y′(1) + y(1) = b.

Replace the derivatives with the central difference approximations

y′_k = (y_{k+1} − y_{k−1})/(2h),
y″_k = (y_{k+1} − 2y_k + y_{k−1})/h².

Inserted into the ODE this gives, in each interior point,

(y_{k+1} − 2y_k + y_{k−1})/h² + a(x_k)(y_{k+1} − y_{k−1})/(2h) + b(x_k) y_k = c(x_k),

i.e.

(1/h² + a(x_k)/(2h)) y_{k+1} + (−2/h² + b(x_k)) y_k + (1/h² − a(x_k)/(2h)) y_{k−1} = c(x_k).

This gives N equations with N + 2 unknowns. The boundary condition y(0) = a gives us the value of the unknown y_0, and y′_N + y_N ≈ (y_{N+1} − y_{N−1})/(2h) + y_N = b gives us one more equation. We then have sufficient information to set up a system of equations and solve for all values of y_k. This results in a tridiagonal system of equations that can be set up and solved numerically.
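A rough Python sketch of this discretisation for concrete choices of a(x), b(x), c(x) (all placeholders, and with Dirichlet conditions at both ends instead of the Robin condition, to keep the sketch short):

    import numpy as np

    # y'' + a(x) y' + b(x) y = c(x) on [0,1], y(0)=alpha, y(1)=beta;
    # a, b, c and the boundary values are placeholder examples.
    a = lambda x: 1.0
    b = lambda x: -2.0
    c = lambda x: np.sin(np.pi * x)
    alpha, beta, N = 0.0, 1.0, 50

    h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)          # x_0 ... x_{N+1}
    A = np.zeros((N, N))
    rhs = c(x[1:-1]).copy()

    for k in range(N):
        xk = x[k + 1]
        lower = 1.0 / h**2 - a(xk) / (2 * h)   # coefficient of y_{k-1}
        diag  = -2.0 / h**2 + b(xk)            # coefficient of y_k
        upper = 1.0 / h**2 + a(xk) / (2 * h)   # coefficient of y_{k+1}
        A[k, k] = diag
        if k > 0:
            A[k, k - 1] = lower
        else:
            rhs[0] -= lower * alpha            # move known boundary value to RHS
        if k < N - 1:
            A[k, k + 1] = upper
        else:
            rhs[-1] -= upper * beta

    y_inner = np.linalg.solve(A, rhs)          # tridiagonal structure; dense solve for brevity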

Question 27-35 By Elin Hynning

27) Verify with Taylor expansion that the following two approximations are of second order, i.e. O(h²): 1) y′(x) ≈ (y(x + h) − y(x − h))/(2h), 2) y″(x) ≈ (y(x + h) − 2y(x) + y(x − h))/h².

1) Taylor expansion of y(x) gives

y(x + h) = y(x) + y′(x)h + y″(x)h²/2 + O(h³),
y(x − h) = y(x) − y′(x)h + y″(x)h²/2 + O(h³).

This yields

y(x + h) − y(x − h) = 2h y′(x) + O(h³),

so that

y′(x) = (y(x + h) − y(x − h))/(2h) + O(h²).

Thus, we have shown that y′(x) ≈ (y(x + h) − y(x − h))/(2h) with second order accuracy.

2) Taylor expansion of y(x) yields

y(x + h) = y(x) + y′(x)h + y″(x)h²/2 + y‴(x)h³/6 + O(h⁴),
y(x − h) = y(x) − y′(x)h + y″(x)h²/2 − y‴(x)h³/6 + O(h⁴).

This yields

y(x + h) − 2y(x) + y(x − h) = y″(x)h² + O(h⁴),

so that

y″(x) = (y(x + h) − 2y(x) + y(x − h))/h² + O(h²).

Thus, we have shown that y″(x) ≈ (y(x + h) − 2y(x) + y(x − h))/h² with second order accuracy.

28) Derive a second order difference approximation to y⁽⁴⁾(x) using the values y(x + 2h), y(x + h), y(x), y(x − h) and y(x − 2h).

We are looking for a formula

y⁽⁴⁾(x) = a y(x + 2h) + b y(x + h) + c y(x) + d y(x − h) + e y(x − 2h) + O(h²).

Taylor expansion yields

y⁽⁴⁾ = a(y + 2h y′ + 2h² y″ + (4h³/3) y‴ + (2h⁴/3) y⁽⁴⁾)
    + b(y + h y′ + (h²/2) y″ + (h³/6) y‴ + (h⁴/24) y⁽⁴⁾)
    + c y
    + d(y − h y′ + (h²/2) y″ − (h³/6) y‴ + (h⁴/24) y⁽⁴⁾)
    + e(y − 2h y′ + 2h² y″ − (4h³/3) y‴ + (2h⁴/3) y⁽⁴⁾) + O(h⁵).

Thus, we obtain the following system of equations:

a + b + c + d + e = 0
2ah + bh − dh − 2eh = 0
2ah² + bh²/2 + dh²/2 + 2eh² = 0
4ah³/3 + bh³/6 − dh³/6 − 4eh³/3 = 0
2ah⁴/3 + bh⁴/24 + dh⁴/24 + 2eh⁴/3 = 1

We may write this on matrix form:

[  1  1  1  1  1 ] [a]            [0]
[  2  1  0 −1 −2 ] [b]            [0]
[  4  1  0  1  4 ] [c]  =  24/h⁴  [0]
[  8  1  0 −1 −8 ] [d]            [0]
[ 16  1  0  1 16 ] [e]            [1]

Solving this system yields

a = 1/h⁴,  b = −4/h⁴,  c = 6/h⁴,  d = −4/h⁴,  e = 1/h⁴.

Now we need to determine the order of the error. The next term in the Taylor expansion (the y⁽⁵⁾ term) is

(a − e)(2h)⁵ y⁽⁵⁾/120 + (b − d) h⁵ y⁽⁵⁾/120.

Since b = d and a = e it holds that this term is zero. Thus, we look at the term following the previous one in the Taylor expansion:

a y⁽⁶⁾ (2h)⁶/6! + b y⁽⁶⁾ h⁶/6! + d y⁽⁶⁾ h⁶/6! + e y⁽⁶⁾ (2h)⁶/6!.

This term will obviously not be zero, and since the coefficients are of size 1/h⁴ our error will be of the order h⁶/h⁴ = h². Thus, we may write

y⁽⁴⁾(x) = [y(x + 2h) − 4y(x + h) + 6y(x) − 4y(x − h) + y(x − 2h)]/h⁴ + O(h²).

29) Derive a second order difference approximation to y′(x) using the values y(x), y(x − h) and y(x − 2h).

We are looking for a formula on the form

y′(x) = a y(x) + b y(x − h) + c y(x − 2h) + O(h²).

Taylor expansion yields

y′(x) = a y(x) + b(y(x) − y′(x)h + y″(x)h²/2) + c(y(x) − 2y′(x)h + 2y″(x)h²) + O(h³).

This yields the following equations:

a + b + c = 0
−bh − 2ch = 1
bh²/2 + 2ch² = 0

We may write this on matrix form:

[ 1  1  1 ] [a]         [0]
[ 0 −1 −2 ] [b]  =  1/h [1]
[ 0  1  4 ] [c]         [0]

Solving this system yields

a = 3/(2h),  b = −4/(2h),  c = 1/(2h).

Since the coefficients are of order h⁻¹ and we know that the first possibly non-zero term we have not taken into account is of order h³, we may deduce that we have an error term of order h². Thus, we may write the first derivative of y(x) in the following way:

y′(x) = [3y(x) − 4y(x − h) + y(x − 2h)]/(2h) + O(h²).

30) Given a second order ODE y″ = f(x, y, y′). Assume a Dirichlet boundary value is given in the left interval point, y(0) = 1. Present two other ways in which a boundary condition can be given in the right boundary point.

For second order BVPs there are three kinds of boundary conditions. Apart from the above mentioned Dirichlet boundary conditions, which consist of specifying the solution at the boundary, there are Neumann boundary conditions and Robin (mixed) boundary conditions. The Neumann BCs consist of specifying the derivative at the boundary. The Robin BCs specify a combination of y′(x) and y(x) at the boundary.

31) When a boundary value problem y″ = p(x)y′ + q(x)y + r(x), y(0) = 1, y(1) = 0 is solved with discretisation based on the approximations y″(x_n) ≈ (y_{n+1} − 2y_n + y_{n−1})/h² and y′(x_n) ≈ (y_{n+1} − y_{n−1})/(2h), we obtain a linear system Ay = b of equations to be solved. Set up this system. Which special structure does the matrix A have?

In order to set up the system we start by substituting the discretised derivatives into the equation. This yields

(y_{n+1} − 2y_n + y_{n−1})/h² ≈ p_n (y_{n+1} − y_{n−1})/(2h) + q_n y_n + r_n,

where p_n = p(x_n) and so on. Rearranging the above expression yields

(1/h² − p_n/(2h)) y_{n+1} − (2/h² + q_n) y_n + (1/h² + p_n/(2h)) y_{n−1} = r_n.

We may thus express the equation as the following system of equations:

Ay = b,

where

A =
[ c2 c3  0  0 ···  0 ]
[ c1 c2 c3  0 ···  0 ]
[  0  ⋱  ⋱  ⋱      0 ]
[  0 ···  0 c1 c2 c3 ]
[  0 ···  0  0 c1 c2 ]

with c1 = 1/h² + p_n/(2h), c2 = −2/h² − q_n and c3 = 1/h² − p_n/(2h), evaluated at the corresponding grid point x_n. Moreover, we have

b = (r_1, r_2, ..., r_{N−1})ᵀ,  y = (y_1, y_2, ..., y_{N−1})ᵀ,

where y_0 = y(0) and y_N = y(1) are known from the boundary conditions and therefore not included among the unknowns. Known nonzero boundary values are moved into the b-vector: here y(0) = 1 contributes the term −c1·y(0) to b_1, while y(1) = 0 contributes nothing.

The matrix A is said to be tridiagonal, since all nonzero elements lie on the three centermost diagonals of A.

32) What is meant by a tridiagonal n × n matrix A? The number of flops needed to solve a corresponding linear system of equations Ax = b can be expressed as O(n^p). What is the value of p? The number of bytes needed in the memory of a computer to store such a system can be expressed as O(n^q). What is the value of q?

A tridiagonal n × n matrix A is an n × n matrix where all the nonzero elements are situated on the three centermost diagonals of the matrix.
If the tridiagonal matrix algorithm (TDMA) is used, the number of flops needed to solve the linear system Ax = b is O(n), i.e. p = 1.
(Source: http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm)
One may store the elements of a tridiagonal matrix in an n × 3 matrix, thus only requiring the storage of 3n elements. The number of bytes needed to store A in the memory of a computer may therefore be expressed as O(n), i.e. q = 1.
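A sketch of the O(n) tridiagonal (Thomas) algorithm in Python, storing only the three diagonals (my own illustration; in practice a banded solver such as scipy.linalg.solve_banded does the same job):

    import numpy as np

    def solve_tridiagonal(lower, diag, upper, b):
        """Solve Ax = b where A has sub-, main- and super-diagonals lower, diag, upper."""
        n = len(diag)
        d, rhs = diag.astype(float), b.astype(float)
        for i in range(1, n):                 # forward elimination, O(n)
            w = lower[i - 1] / d[i - 1]
            d[i] -= w * upper[i - 1]
            rhs[i] -= w * rhs[i - 1]
        x = np.empty(n)
        x[-1] = rhs[-1] / d[-1]
        for i in range(n - 2, -1, -1):        # back substitution
            x[i] = (rhs[i] - upper[i] * x[i + 1]) / d[i]
        return x

    # small test: the -y'' discretisation matrix tridiag(-1, 2, -1)
    n = 5
    x = solve_tridiagonal(-np.ones(n - 1), 2 * np.ones(n), -np.ones(n - 1), np.ones(n))
    print(x)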

33) What is meant by a banded n × n matrix A? Describe some way of storing such a matrix in a sparse way. Same question for a profile matrix.

A banded n × n matrix A is a matrix where all non-zero elements are situated on central diagonals of the matrix. If a banded matrix has p diagonals containing non-zero elements, these diagonals may be stored as vectors in an n × p matrix.
Profile storage of a matrix is a storage type applicable to symmetric positive definite (SPD) matrices. Since the matrix is symmetric, we only need to store the diagonal and the entries below the diagonal. The profile of the matrix is the border for which all elements to the left of the border are zero elements. A profile matrix can be stored in the following way: all nonzero elements of A are stored in a vector a. The elements are stored row by row, starting from the first element in the profile. Together with a, a pointer vector p is also stored. The elements of p hold the indices in a of the diagonal elements of A. (See page 197 in Edsberg's book for an example.)

34) What input and output data are suitable for a Matlab function solving a tridiagonal system of linear equations Ax = b if the goal is to save number of flops and computer memory?

If we want to save computer memory and flops, it is wise to pass only the three diagonals of A as vectors (or A as a sparse matrix) together with the right-hand side b as input, and to return the solution vector x as output.
(I have no idea of what he is looking for with this question.)

35) The boundary value problem −y″ = f(x), y(0) = y(1) = 0 can be solved with an ansatz method based on Galerkin's method. Formulate the Galerkin method for this problem.

The Galerkin method starts with an ansatz on the form

y(x) ≈ ỹ(x) = Σ_{j=1}^{N} c_j ϕ_j(x),

where ϕ_j(x) are given basis functions and c_j are coefficients to be determined so that ỹ(x) is a good approximation. We assume that the basis functions satisfy the boundary conditions, i.e.

ϕ_j(0) = ϕ_j(1) = 0 for all j.

If we insert the ansatz into the BVP we get the residual function

r(x) = ỹ″(x) + f(x) ≠ 0.

Preferably, we want r(x) to be small. Galerkin's method deals with this in the following way: we demand that r(x) be orthogonal to the basis functions. This may be expressed as

∫₀¹ r(x) ϕ_i(x) dx = 0 for all i.

Inserting the expression for r(x) yields

∫₀¹ ( Σ_{j=1}^{N} c_j ϕ_j″(x) + f(x) ) ϕ_i(x) dx = 0,

i.e.

Σ_{j=1}^{N} c_j ∫₀¹ ϕ_j″(x) ϕ_i(x) dx + ∫₀¹ f(x) ϕ_i(x) dx = 0 for all i.

We perform partial integration on the integral in the first term:

∫₀¹ ϕ_j″(x) ϕ_i(x) dx = [ϕ_j′(x) ϕ_i(x)]₀¹ − ∫₀¹ ϕ_j′(x) ϕ_i′(x) dx = −∫₀¹ ϕ_j′(x) ϕ_i′(x) dx.

The last equality follows from the fact that ϕ_i(0) = ϕ_i(1) = 0. Inserting this expression into the orthogonality conditions yields

Σ_{j=1}^{N} c_j ∫₀¹ ϕ_j′(x) ϕ_i′(x) dx = ∫₀¹ f(x) ϕ_i(x) dx for all i.

This is a linear system of equations Ac = b, where

a_{i,j} = ∫₀¹ ϕ_j′(x) ϕ_i′(x) dx,  b_i = ∫₀¹ f(x) ϕ_i(x) dx.

Question 36-44 By Lars-Lowe Sjösund

36. Classify the following PDEs with respect to linearity (linear or nonlinear), order (first, second), type (elliptic, parabolic, hyperbolic):
a) ut = uxx + u,
b) uxx + 2uyy = 0,
c) ut = (a(u)ux)x,
d) ut + u ux = 0,
e) uy = ux x + x

Given a differential equation A uxx + 2B uxy + C uyy + D ux + E uy + F = 0, it is elliptic if

Z = [ A  B ]
    [ B  C ]

is positive definite. If det(Z) = 0 it is parabolic, and if det(Z) < 0 it is hyperbolic. The order is determined by the highest order of derivative that occurs.
a) det(Z) = 0 ⇒ parabolic; linear and of order 2.
b) Z has the eigenvalues 1 and 2, so Z is positive definite and the equation is elliptic. The order is 2 and the equation is linear.
c) Nonlinear, order 2, parabolic (course book eq. (5.8)).
d) Nonlinear, order 1, hyperbolic (course book eq. (5.10)).
e) Nonlinear, order 1, unclear...

37. Formulate the ODE-system u̇ = Au + b when the Method of Lines is applied to the PDE-problem ut = uxx + u, t > 0, 0 < x < 1, u(0, t) = 0, u(1, t) = 1, u(x, 0) = x. uxx is approximated by the central difference formula.

Notation: u(xi, tj) = ui,j.

Given the PDE ut = uxx + u, the central difference approximation gives

ut(xi) ≈ (ui−1,j − 2ui,j + ui+1,j)/hx² + ui,j = (ui−1,j + (−2 + hx²)ui,j + ui+1,j)/hx².

Set a = 1 and b = −2 + hx². We obtain the system

ū̇ = Aū + b̄,

where

A = (1/hx²) ·
[ b a 0 ··· ··· 0 ]
[ a b a 0 ···  0 ]
[ 0 a b a 0  ··· ]
[ ⋮   ⋱ ⋱ ⋱    ⋮ ]
[ 0 0 ··· a b  a ]
[ 0 ···  0  a  b ]

and

b̄ = (1/hx²) (0, ..., 0, 1)ᵀ.

A is an N × N matrix where N is the number of interior points. We get b̄ from the boundary condition u(1, t) = 1. The initial values are discretised as ui(0) = xi.
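A small Python sketch of this construction (my own illustration), building A and b̄ for N interior points and taking a few explicit Euler steps in time:

    import numpy as np

    N = 20                                   # number of interior points
    hx = 1.0 / (N + 1)
    x = np.linspace(hx, 1.0 - hx, N)

    # A = (1/hx^2) * tridiag(1, -2 + hx^2, 1), from u_t = u_xx + u
    main = (-2.0 + hx**2) * np.ones(N)
    off = np.ones(N - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / hx**2
    b = np.zeros(N)
    b[-1] = 1.0 / hx**2                      # from the boundary condition u(1, t) = 1

    u = x.copy()                             # initial value u(x, 0) = x
    ht = 0.4 * hx**2                         # small step, for explicit Euler stability
    for _ in range(200):
        u = u + ht * (A @ u + b)
    print(u[:5])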
38. Formulate the ODE-system u̇ = Au + b when the Method of Lines is applied to the PDE-problem ut + ux = 0, t > 0, 0 < x < 1, u(0, t) = 0, u(x, 0) = x. ux is to be approximated by the backward Euler difference formula.

Given the PDE ut + ux = 0, the backward difference discretisation gives

ut ≈ −(ui,j − ui−1,j)/hx.

We obtain the system

ū̇ = Aū + b̄,

where

A = (1/hx) ·
[ −1  0  0 ···  0 ]
[  1 −1  0 ···  0 ]
[  0  1 −1 ···  0 ]
[  ⋮      ⋱  ⋱  ⋮ ]
[  0 ···   0 1 −1 ]

and

b̄ = 0̄

(because of the boundary condition u(0, t) = 0).
A is an (N+1) × (N+1) matrix, where N is the number of interior points (unlike question 37 we have no boundary value at x = 1 and are forced to compute that value ourselves, hence N+1). The initial value is discretised in the same way as in question 37.

39. What is meant by an upwind scheme for solving ut + a ux = 0, a > 0? (2)

The upwind scheme for ut + a ux = 0, a > 0, means forward time, backward space (FTBS):

(ui,j+1 − ui,j)/ht + a (ui,j − ui−1,j)/hx = 0.

FTBS is of first order accuracy in both t and x and has the stability criterion 0 < a ht/hx ≤ 1.

40. Derive a difference approximation and the corresponding stencil to e.g. the Laplace-type operator

∂²u/∂x² + ∂²u/∂y² + ∂u/∂x + ∂u/∂y.

We use approximations of second order (that they really are of second order is shown in question 41):

uxx + uyy + ux + uy ≈ (ui−1,j − 2ui,j + ui+1,j)/hx² + (ui,j−1 − 2ui,j + ui,j+1)/hy² + (ui+1,j − ui−1,j)/(2hx) + (ui,j+1 − ui,j−1)/(2hy)

= ui−1,j [1/hx² − 1/(2hx)] + ui,j [−2/hx² − 2/hy²] + ui+1,j [1/hx² + 1/(2hx)] + ui,j−1 [1/hy² − 1/(2hy)] + ui,j+1 [1/hy² + 1/(2hy)]

= c1 ui−1,j + c2 ui,j + c3 ui+1,j + c4 ui,j−1 + c5 ui,j+1.

Stencil (computational molecule): the five-point stencil with the coefficients c1, ..., c5 placed at the points (i−1, j), (i, j), (i+1, j), (i, j−1) and (i, j+1). (Figure omitted in this text version.)

41. What is the order of accuracy in t and x when the Method of Lines with central differences in the x-variable and the implicit Euler method is applied to e.g. the heat equation ut = uxx?

We use the central difference in the x-direction; it is of second order, since

u(x + h) = u(x) + u′(x)h + u″(x)h²/2 + u‴(x)h³/6 + O(h⁴)
u(x − h) = u(x) − u′(x)h + u″(x)h²/2 − u‴(x)h³/6 + O(h⁴)
⇒ u(x + h) + u(x − h) = 2u(x) + u″(x)h² + O(h⁴)
⇒ (u(x + h) − 2u(x) + u(x − h))/h² = u″(x) + O(h²).

By the same technique, or according to p.42 in the course book, implicit Euler is of order 1 in time. The method is thus first order accurate in t and second order accurate in x.

42. Give a second order accurate difference approximation formula for (d(x)ux)x.

(d(x)ux)x = dx ux + d uxx

dx = (d(x + h) − d(x − h))/(2h) + O(h²) ⇒ (di+1 − di−1)/(2h) + O(h²)
ux = (u(x + h) − u(x − h))/(2h) + O(h²) ⇒ (ui+1 − ui−1)/(2h) + O(h²)
uxx = (ui+1 − 2ui + ui−1)/h² + O(h²)

(d(x)ux)x ≈ [(di+1 − di−1)/(2h)] · [(ui+1 − ui−1)/(2h)] + di (ui+1 − 2ui + ui−1)/h² + O(h²).
43. Describe the Crank-Nicolson method for solving the heat equation.
From Wikipedia, first a description of the method in general.

The Crank-Nicolson method is based on the central difference in space and the trapezoidal rule in time, giving second-order convergence in time. For example, in one dimension, if the partial differential equation is

∂u/∂t = F(u, x, t, ∂u/∂x, ∂²u/∂x²)

then, letting u(iΔx, nΔt) = u_i^n, the equation for the Crank-Nicolson method is the average of the forward Euler method at n and the backward Euler method at n + 1 (note, however, that the method itself is not simply the average of those two methods, as the equation has an implicit dependence on the solution):

(u_i^{n+1} − u_i^n)/Δt = F_i^n(u, x, t, ∂u/∂x, ∂²u/∂x²)   (forward Euler)
(u_i^{n+1} − u_i^n)/Δt = F_i^{n+1}(u, x, t, ∂u/∂x, ∂²u/∂x²)   (backward Euler)
(u_i^{n+1} − u_i^n)/Δt = (1/2)[F_i^{n+1}(u, x, t, ∂u/∂x, ∂²u/∂x²) + F_i^n(u, x, t, ∂u/∂x, ∂²u/∂x²)]   (Crank-Nicolson)

The function F must be discretized spatially with a central difference.
Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast O(n) direct solution as opposed to the usual O(n³) for a full matrix.

Applied to the heat equation:

The Crank-Nicolson method is often applied to diffusion problems. As an example, for linear diffusion,

∂u/∂t = a ∂²u/∂x²,

the Crank-Nicolson discretization is

(u_i^{n+1} − u_i^n)/Δt = a/(2(Δx)²) [(u_{i+1}^{n+1} − 2u_i^{n+1} + u_{i−1}^{n+1}) + (u_{i+1}^n − 2u_i^n + u_{i−1}^n)]

or, letting r = aΔt/(2(Δx)²):

−r u_{i+1}^{n+1} + (1 + 2r) u_i^{n+1} − r u_{i−1}^{n+1} = r u_{i+1}^n + (1 − 2r) u_i^n + r u_{i−1}^n,

which is a tridiagonal problem, so that u_i^{n+1} may be efficiently solved by using the tridiagonal matrix algorithm in favor of a much more costly matrix inversion.

2D
When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation

∂u/∂t = a (∂²u/∂x² + ∂²u/∂y²)

can be solved with the Crank-Nicolson discretization

u_{i,j}^{n+1} = u_{i,j}^n + (1/2) aΔt/(Δx)² [(u_{i+1,j}^{n+1} + u_{i−1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j−1}^{n+1} − 4u_{i,j}^{n+1}) + (u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − 4u_{i,j}^n)].

44. Formulate the Galerkin method for the elliptic problem

uxx + uyy = f(x, y), (x, y) ∈ Ω,
u(x, y) = 0, (x, y) ∈ ∂Ω.

Galerkin's method is well described in the book; the basic principle is on p.86-88. From p.140 the book treats problem 44, but one has to replace f(x, y) by −f(x, y), since the book treats Δu = −f while problem 44 is Δu = f.

Question 45-51, By Olof Bergvall

Q.45. Formulate the Galerkin method for the parabolic problem

ut = uxx, u(x, 0) = f(x), u(0, t) = u(1, t) = 0.

Solution: (See Edsberg p.123-124.) Below follows a derivation of the Galerkin formulation as well as the actual formulation.
We express u(x, t) as a time-dependent linear combination of basis functions ϕi,

u(x, t) = Σ_{i≥1} ci(t) ϕi(x),

where each ϕi satisfies the BC ϕi(0) = ϕi(1) = 0. We now approximate u by the first N terms of the above sum (for some suitable N),

u(x, t) ≈ ũ(x, t) = Σ_{i=1}^{N} ci(t) ϕi(x).

Inserting ũ into the PDE gives

ũt ≈ ũxx  ⇐⇒  Σ_{i=1}^{N} (dci(t)/dt) ϕi(x) ≈ Σ_{i=1}^{N} ci(t) ϕi″(x).

Since ũ is an approximation we will not have exact equality in general. Hence it is interesting to consider the residual function r(x, t) = ũt − ũxx. We have

r(x, t) = Σ_{i=1}^{N} (dci(t)/dt) ϕi(x) − Σ_{i=1}^{N} ci(t) ϕi″(x).

We now impose the condition that r(x, t) ⊥ ϕj(x) for j = 1, ..., N and all t, i.e.

∫₀¹ r(x, t) ϕj(x) dx = 0, for j = 1, ..., N and all t.

This gives

(*)  Σ_{i=1}^{N} (dci(t)/dt) ∫₀¹ ϕi(x) ϕj(x) dx − Σ_{i=1}^{N} ci(t) ∫₀¹ ϕi″(x) ϕj(x) dx = 0.

Consider the second integral in the expression above. By partial integration we obtain

−∫₀¹ ϕi″(x) ϕj(x) dx = −[ϕi′(x) ϕj(x)]₀¹ + ∫₀¹ ϕi′(x) ϕj′(x) dx = ∫₀¹ ϕi′(x) ϕj′(x) dx,

since ϕj(0) = ϕj(1) = 0. Inserting this into (*) gives a system of ODE's called the Galerkin formulation of the problem,

M dc/dt + Ac = 0,

where

Mij = ∫₀¹ ϕi(x) ϕj(x) dx  and  Aij = ∫₀¹ ϕi′(x) ϕj′(x) dx.

The IC's are obtained from

ũ(x, 0) = Σ_{i=1}^{N} ci(0) ϕi(x) = f(x).

(Here Edsberg's notation is slightly confusing, since in his text the initial value is called u0 while f denotes something else in the PDE.) By multiplying with ϕj(x) and integrating over [0, 1] we obtain

(**)  M c(0) = f̂,

where

f̂j = ∫₀¹ f(x) ϕj(x) dx.

By solving (**) we obtain the initial values of ci(t).

Q.46. In an ansatz method for a 1D problem the solution u(x) is approximated by the linear expression uh(x) = Σi αi ϕi(x), where the basis functions ϕi(x) can be chosen differently. Give a description of the roof functions, i.e. the piecewise linear basis functions in x.

Solution: (See Edsberg p.88-89.) Let h be the stepsize in an equidistant discretisation of the x-interval in the problem, and let xi denote the ith point of the discretisation. The roof functions ρi(x) are then defined as

ρi(x) = 0                      if x ≤ xi−1,
ρi(x) = (x − xi−1)/h           if xi−1 ≤ x ≤ xi,
ρi(x) = −(x − xi+1)/h          if xi ≤ x ≤ xi+1,
ρi(x) = 0                      if xi+1 ≤ x.

Intuitively, the roof functions can be thought of as triangles with corners in (xi−1, 0), (xi, 1) and (xi+1, 0).

Q.47. In an ansatz method for a 2D problem the solution u(x, y) is approximated by the linear expression uh(x, y) = Σi αi ϕi(x, y), where the basis functions ϕi(x, y) can be chosen differently. Give a description of the pyramid functions, i.e. the piecewise linear basis functions in x, y.

Solution: (See Edsberg p.142-143.) Suppose that we have discretised the region of our 2D problem and thus obtained points (x1, y1), ..., (xn, ym). The pyramid function ρij(x, y) is defined as the function which is zero everywhere except on the quadrangle with corners (xi+1, yj), (xi, yj+1), (xi−1, yj), (xi, yj−1). On this quadrangle ρij describes a pyramid, i.e. if we see the quadrangle as a subset of R³ then ρij connects the sides of the quadrangle to the point (xi, yj, 1) with filled triangles. To describe the pyramid explicitly as a function of x and y becomes rather messy (we have to subdivide the quadrangle into four triangles and define ρij as four affine (linear) functions over the respective triangles, in a way similar to Q.46).

Q.48. When solving PDE-problems in 2D and 3D with difference or ansatz methods we are led to solving large linear systems of equations Ax = b. What is meant by a direct method and an iterative method for solving Ax = b? Give examples by name of some direct methods and some iterative methods.

Solution: (See Edsberg Appendix A.5.) A direct method is one where the system is solved directly, i.e. in each step an unknown is solved for and a solution is obtained in terms of the unknowns not already solved for.
An iterative method is a method where one starts from some initial guess and successively computes better approximations to the solution.
The standard direct method is Gaussian elimination. Often one makes this method more efficient by applying e.g. LU-factorisation or Cholesky-factorisation.
Examples of iterative methods are Jacobi's method and Gauss-Seidel's method.

Q.49. What is meant by dissipation and dispersion when a conservation law is


solved numerically?

Solution: (See Edsberg p.147.) Suppose that we solve a conservation law numerically. It may occur that the property that should be conserved is not, due to numerical effects. This phenomenon is called dissipation (or, more accurately, numerical dissipation).
It may also occur that the phase relations are distorted from what they should be, and that the wave speed is variable even though it should be constant, because of numerical properties of our algorithm. This phenomenon is called (numerical) dispersion.
Q.50. What is meant by fill-in when solving a sparse linear system of equations Ax = b with a direct method?

Solution: (See Edsberg p.137.) When solving the system directly, for instance using Gaussian elimination, some of the zeros of the matrix A may become nonzero in the process. (In the case of Gaussian elimination this occurs when we add or subtract a nonzero element to a zero element.) We then say that there is fill-in in A, since some of the zero elements are filled with nonzero elements.

Q.51. What is meant by the Cholesky-factorization of a symmetric positive definite matrix A? Can the following matrix A be Cholesky factorized?

Solution: (See Edsberg p.194.) The Cholesky-factorisation of a symmetric and positive definite matrix A is a factorisation of A as a product of a lower triangular matrix L and its transpose, i.e. A = LLᵀ. (We can generalise to A being a complex hermitian matrix, equal to its conjugate transpose, but then we must take the conjugate transpose of L, and we then have the decomposition A = LL*.)
We have not been given any matrix, so the second question is somewhat hard to answer, but what one has to check is whether A is symmetric (hermitian in the complex case) and positive definite. One checks if A is symmetric simply by inspection. To check if A is positive definite one may compute the eigenvalues of A: if they are all positive, then A is positive definite. If A is very large (so that the characteristic equation gets an uncomfortably high degree) we may instead compute the determinants of the north-westmost (upper left corner) 1×1, 2×2, 3×3, ..., n×n submatrices of the n×n matrix A (i.e. the leading principal minors of A). If they are all positive, then A is positive definite. This is tedious, but it does not involve solving polynomial equations of high degree.

Question 52-54, By Gustav Sædén Ståhl

Q.52 If Ax = b is rewritten on the form x = Hx + c and the iteration x_{k+1} = Hx_k + c is defined, what is the condition on H for convergence of the iterations to the solution of Ax = b? Also formulate a condition on H for fast convergence.
Solution. The idea is to look at a splitting of the matrix A, say A = M − N for some matrices M and N. Then we get Ax = b ⇔ Mx = Nx + b. We want to find the x for which this is fulfilled, and we can do that by first guessing a solution x_0 and then iterating with the formula M x_{k+1} = N x_k + b, and keep doing this until ||x_{k+1} − x_k|| is as small as we want. Of course, if we choose M to be invertible we can rewrite this iterative process as

x_{k+1} = M⁻¹N x_k + M⁻¹ b = H x_k + c.

Let G = M⁻¹N. Then, in order to study the convergence, we consider the eigenvalues of G and let

ρ(G) = max_i |λ_i(G)|.

The following result is not really derived in the book, but I suppose that it comes from the fact that r_k = A e_k, where r_k is the residual and e_k is the error, and we get convergence if e_k → 0 as k → ∞. The result, however, is that if ρ(G) < 1 we have convergence. Also, the convergence is faster the smaller ρ(G) is. A little trivia: ρ(G) is called the spectral radius of G.
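A small sketch (my own illustration) of such a fixed-point iteration, using the Jacobi splitting M = diag(A), N = M − A, and checking the spectral radius of H = M⁻¹N:

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])          # diagonally dominant example
    b = np.array([1.0, 2.0, 3.0])

    M = np.diag(np.diag(A))                  # Jacobi splitting A = M - N
    N = M - A
    H = np.linalg.solve(M, N)
    c = np.linalg.solve(M, b)
    print("spectral radius:", max(abs(np.linalg.eigvals(H))))   # < 1 => convergence

    x = np.zeros_like(b)
    for _ in range(50):
        x = H @ x + c                        # x_{k+1} = H x_k + c
    print(x, np.linalg.solve(A, b))          # compare with direct solution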

Q.53 What is meant by the steepest descent method for solving Ax = b, where A is symmetric and positive definite?
Solution. When we consider the previous question for a symmetric, positive definite matrix A (SPD) we get a special case (which might have been seen in the basic course in optimization, for those of you who have taken that course, but I'm not 100% sure). For this problem, finding the solution x to the system of equations Ax = b is equivalent to minimizing the function F(x) = (1/2) xᵀAx − xᵀb with respect to x. Next, in order to minimize this we use a search procedure defined by

x_{k+1} = x_k + α_k d_k,

where d_k is the search direction and α_k is the stepsize in that direction. It follows that the optimal value of α_k along d_k is

α_k = (r_kᵀ d_k)/(d_kᵀ A d_k).

Now we can define what the steepest descent method is: it is the method where we let d_k = r_k (the residual r_k = b − Ax_k) and use the starting guess x_0 = 0. This simply means that we go in the direction of the residual when we search for our minimizer.
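A sketch of the steepest descent iteration in Python (my own illustration; the SPD matrix A and the vector b are placeholders):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])               # symmetric positive definite example
    b = np.array([1.0, 1.0])

    x = np.zeros(2)                          # starting guess x0 = 0
    for _ in range(100):
        r = b - A @ x                        # residual, used as search direction d_k = r_k
        if np.linalg.norm(r) < 1e-12:
            break
        alpha = (r @ r) / (r @ (A @ r))      # optimal steplength along r
        x = x + alpha * r
    print(x, np.linalg.solve(A, b))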

Q.54 Describe the preconditioning when used with the steepest descent method. Assume that the matrix A is symmetric and positive definite.
Solution. The convergence of the steepest descent method is related to the condition number

κ(A) = max_i λ_i(A) / min_i λ_i(A).

If κ is large, the convergence is slow, and if it is small, the convergence is fast. If the condition number is large we can reduce it with preconditioning. We introduce a square matrix E and the substitution y = Ex. With this substitution we get a transformed function

F̃(y) = F(x) = (1/2) yᵀ Ã y − yᵀ b̃,

where Ã = E⁻ᵀAE⁻¹ and b̃ = E⁻ᵀb. We now want to choose E such that κ(Ã) << κ(A). If E = I we do not change anything, so that wouldn't work. If E = Lᵀ, where L is the Cholesky factor of A, then we would have perfect conditioning, but the Cholesky factor is too expensive to calculate. Therefore, something in between I and Lᵀ is a good choice. We then define C = EᵀE as the preconditioning matrix. A common choice for C is C = diag(A) (which is the same as choosing E = diag(√a11, √a22, ..., √ann)).

Question 55-60, By Alexander Eriksson

Q55. For a hyperbolic PDE in x and t the solution u(x, t) is constant along certain curves in the x, t-plane. What are these curves called? Give the analytic expression for these curves belonging to ut + 2ux = 0.
A55. They are called characteristics (page 150), and are the curves in the x, t-plane resulting from the ODE dx/dt = a(u(x(t), t)). First, a general case to explain where a(u) comes from, with the equation being

(0.1)  ∂u/∂t + ∂f(u)/∂x = 0.

f(u) is called the flux function. If f(u) is differentiable we can instead rewrite the equation as

(0.2)  ∂u/∂t + a(u) ∂u/∂x = 0,  where a(u) = f′(u).

Along a characteristic the solution is constant, shown simply by du(x(t), t)/dt = 0 (for the final step, look at the original equation):

(0.3)  du(x(t), t)/dt = ∂u/∂t + (dx(t)/dt) ∂u/∂x = ∂u/∂t + a(u(x(t), t)) ∂u/∂x = 0.

A constant solution uC along a characteristic then gives, with the definition of a characteristic,

(0.4)  dx/dt = a(uC) = aC  →  x(t) = aC t + C.

In our case a = 2, which is constant, and the analytic expression becomes simply x(t) = 2t + C.

(Characteristic curves can be used to generate entire solutions: if you know the boundary value and the initial condition and you have a collection of curves that describe how that initial solution propagates, you can solve the entire system.)

Q56. Given the system of PDEs

(0.5)  ut + [ 0 1 ] ux = 0.
            [ 2 0 ]

Is the system hyperbolic? Which are the characteristics?

A56. A system of PDEs on the form

(0.6)  ∂u/∂t + ∂f(u)/∂x = ∂u/∂t + A(u) ∂u/∂x = 0,

where A(u) is the Jacobian of f(u), is hyperbolic if the matrix A(u) is diagonalizable and has real eigenvalues. For this matrix the eigenvalues are λ1 = √2 and λ2 = −√2, so yes, the system is hyperbolic, as the eigenvalues of A are real and distinct.

The characteristic curves of a system of PDEs would, according to some quick google action and a book published by SIAM on numerical methods¹, turn into a system of ODEs that can be expressed as follows, with Λ being the diagonal matrix with the eigenvalues along the diagonal:

(0.7)  dx/dt = Λ.

Since the eigenvalues have no time or spatial dependence (in this case), this system is solved in complete analogy with the previous case, (0.4), except that you get two families of curves, one for each eigenvalue, with each eigenvalue as the slope instead of a:

x(t) = √2 t + C,
x(t) = −√2 t + C.

Also, note that this A belongs to a system of PDEs, not a system of ODEs, so the sign of the eigenvalues has no bearing on the stability of the system (at least, I sincerely hope this is the case; proof by elegance?).

Q57. Given the system of PDEs

(0.8)  ut + [ 0 1 ] ux = 0.
            [ 2 0 ]

A solution is wanted on the interval 0 ≤ x ≤ 1, t ≥ 0. Suggest initial and boundary conditions that give a mathematically well posed problem.

A57. (This particular answer distills a fair bit of mathematical voodoo that Edsberg doesn't explain properly, contained in the previous couple of questions. It may be a tad unclear, at which point the previously mentioned SIAM book could prove useful. Correctness of the answer is questionable.)
That the problem is well posed means that the solution is continuous with respect to the given conditions (direct quote, page 105); however, the book provides an example where the boundary is a Riemann step and talks about how it is a solution in the weak sense. It is not clear exactly how that relates to the stated requirement. Moving on, however:
Recall that λ1 = √2, λ2 = −√2. Note in particular the signs of the two eigenvalues, as this means that the two families of characteristic curves originate from "opposite ends" of the x-interval (go back to the question describing characteristics if that made no sense), and the requirement for a continuous solution thus means we need boundary conditions on both sides for the problem to be well posed (compare with page 152 in the book, where it is either-or depending on the sign of a; if both eigenvalues had been positive, we could have employed that again). The boundary and initial values propagate along the characteristics.
This probably answers the question, but it's not entirely clear. With u = (u1, u2)ᵀ:

I.C.: u(x, 0) = u0(x), 0 ≤ x ≤ 1
B.C.: u1(0, t) = α(t), t > 0
B.C.: u2(1, t) = β(t), t > 0

The initial condition u0(x) should, for safety's sake, be continuously differentiable for −∞ < x < ∞ (though, again, page 105 suggests it doesn't have to be, without properly explaining how this "solution in the weak sense" affects the problem). The boundary conditions, in turn, are only relevant for their specific component of u, depending on where the characteristics emanate (i.e. the signs of the eigenvalues).

¹ The SIAM book was actually a lot more methodical in explaining hyperbolic PDEs and systems of PDEs than Edsberg, and if you're interested it's at http://www.siam.org/books/textbooks/OT88sample.pdf

Q58. What is meant by a numerical boundary condition?

A58. When the discretization leads to the last inner calculation requiring an outer point (the stencil contains a u_{i+1,k}) that may not exist among the real boundary conditions, you impose an artificial "numerical boundary condition" to make the calculation in the final point possible anyway.

An example of this is in Lab 6, when we use the Lax-Wendroff method. In that problem we had no boundary condition at x = 1, but the stencil still needs the point u_{N,k} for the last calculation. We then used linear extrapolation to express u_{N,k} = 2u_{N−1,k} − u_{N−2,k}, thus constructing values for x_N, which is imposing a numerical boundary condition.

Q59. Use Neumann analysis to verify that the upwind scheme is unstable when applied to ut = a ux, a > 0.

A59. Points of order: (von) Neumann analysis is a stability analysis that does not rely on the eigenvalues of a system matrix. The upwind scheme here is FTBS, Forward Time Backward Space, a name which becomes apparent after we use forward and backward discretization to generate

(0.9)  (u_{i,k+1} − u_{i,k})/ht = a (u_{i,k} − u_{i−1,k})/hx.

We rewrite this, employing the notation σ = a ht/hx for the Courant number:

u_{i,k+1} = u_{i,k} + σ(u_{i,k} − u_{i−1,k}) = (1 + σ) u_{i,k} − σ u_{i−1,k}.

In the Neumann analysis we insert the ansatz u_{i,k} = G^k e^{iβ x_i}, with x_i = i hx, which gives the amplification factor

G = (1 + σ) − σ e^{−iβ hx}.

Stability requires |G| ≤ 1 for all β. For β hx = π we get G = 1 + 2σ > 1 for every σ > 0, so there is no way to choose the stepsizes hx and ht to make the inequality hold; the stability criterion can never be fulfilled. QED.
(pages 152 and 162)

Q60. What is meant by artificial diffusion? Why is it sometimes used when solving hyperbolic PDEs?

A60. (Skip to the bottom for the short answer. This initial bit is mostly me not understanding the question and attempting to reason out what's going on.)
Step 1: Recall spurious oscillations: the numerical solution begins to oscillate in a ridiculously unrealistic fashion, yet it is not unstable, as instability requires the solution to become unbounded, which does not occur with spurious oscillations. One way to attempt to address this is to lower the stepsize significantly (page 80). Using the Peclet number Pe = h/(2ε), we have a condition for spurious oscillations: Pe < 1 means no oscillations.
Step 2: Lab 6 features a hyperbolic PDE. To solve it we employ a backward difference (which we just did because he said so, but it actually fixed the problem above). We do this because, unlike the central difference, this finite difference method "postpones the oscillations" (quote unquote) and reduces the accuracy to first order, but on the other hand it works.
Step 3: Example problem, the advection-diffusion equation (page 79) and how it looks discretized with central differences on both derivatives:

(0.10)  −ε d²u/dx² + du/dx = 0,  0 ≤ x ≤ 1,  u(0) = 0,  u(1) = 1

(0.11)  −ε (u_{i+1} − 2u_i + u_{i−1})/h² + (u_{i+1} − u_{i−1})/(2h) = 0

The spurious oscillations occur, in this case, because of the second boundary condition, as in a neighborhood of x = 1 the solution is going to jump sharply (it spends most of its time near zero before reaching the second boundary). This is referred to as a "boundary layer" at x = 1, and we have a "singular perturbation problem".

You address this in your discretization by using the backward difference for the first derivative and the central difference for the second derivative (this is done in the book on pages 79-80; this is intended as a quick reference). Then rewrite the first order derivative approximation according to (using trivial algebra; I tested it and succeeded, ergo it has to be trivial):

(0.12)  (u_i − u_{i−1})/h = (u_{i+1} − u_{i−1})/(2h) − (h/2) (u_{i+1} − 2u_i + u_{i−1})/h²

Inserting this into the mixed discretization (i.e. where you use central for the 2nd derivative and backward for the 1st) and collecting the terms so that it looks as if both derivatives were the result of central differences gives the result

(0.13)  −ε̃ (u_{i+1} − 2u_i + u_{i−1})/h² + (u_{i+1} − u_{i−1})/(2h) = 0,

where ε̃ = ε(1 + Pe) and Pe = h/(2ε) is the Peclet number.

Answer: The extra term ε·Pe = h/2 added to the diffusion coefficient is the artificial diffusion. We use it when solving hyperbolic PDEs to avoid spurious oscillations and deal with propagating discontinuities. (Actually, the more I think about it the more I suspect the latter is a sufficient answer, as it implies the first. So: "deal with propagating discontinuities".)
(page 81-82)
