Study Questions - Applied Numerical Methods Part 1
A = SΛS^{-1}
where S contains the eigenvectors s_1, s_2, ..., s_n and Λ is a diagonal matrix containing λ_1, λ_2, ..., λ_n.
A is diagonalizable when it has n linearly independent eigenvectors.
Inserting the ansatz y(t) = c e^{λt} into ẏ = Ay gives
cλ e^{λt} = A c e^{λt}
giving us
λc = Ac
We see that this is an eigenvalue problem where we want to find the eigenvalues λ and eigenvectors c of A. If the eigenvectors of A are linearly independent then we can write
A = SΛS^{-1}
where S and Λ are defined as in the question above.
The n solutions corresponding to the eigenvalues are then
y_i(t) = c_i e^{λ_i t}
The general solution can be written as a linear combination of these solutions,
y(t) = Σ_{i=1}^{n} α_i c_i e^{λ_i t},   i.e.   y(t) = S e^{Λt} ᾱ
Inserting the initial value y(0) = y_0 we obtain y_0 = S ᾱ, i.e. ᾱ = S^{-1} y_0, giving us
y(t) = S e^{Λt} S^{-1} y_0
Thus
e^{At} = S e^{Λt} S^{-1}
The eigenvalues λ_i and eigenvectors c_i satisfy
A c_i = λ_i c_i
They can be found using the characteristic equation
det(A − λI) = 0
a_1 μ^k + a_2 μ^{k−1} + ... + a_{k+1} = 0
Denote the roots of the characteristic equation by μ_1, μ_2, μ_3, ..., μ_k. As n → ∞ the sequence y_n is bounded if |μ_i| ≤ 1 for all i, provided all roots are distinct. If a root is repeated, the strict inequality |μ_i| < 1 is required for that root.
10. Formulate the following 3rd order ODE as a system of three first order ODE's ......
Answer: (Pages: 13-14)
If you are unsure of how to rewrite an n-th order ODE as a system of n first order ODE's, read pages 13-14. A worked sketch is given below.
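The concrete ODE is elided in the question above, so the right-hand side g below is a hypothetical example. The general recipe: for y''' = g(t, y, y', y''), introduce u_1 = y, u_2 = y', u_3 = y'', which gives u_1' = u_2, u_2' = u_3, u_3' = g(t, u_1, u_2, u_3). A minimal Python sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical example (not the elided ODE from the question):
# y''' = -y'' - y' - y + sin(t), rewritten with u1 = y, u2 = y', u3 = y''.
def rhs(t, u):
    return [u[1], u[2], -u[2] - u[1] - u[0] + np.sin(t)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])   # approximation of y(10)
```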
11. Give the recursion formula when the Euler backward method with time step h is used on e.g. the ODE-problem y' = y^3 + t, y(0) = 1. In each time step t_{n+1} a nonlinear equation must be solved. Formulate that equation on the form F(y_{n+1}) = 0 and the iteration formula when Newton's method is applied to this equation.
Answer: (Euler backward: 42, Newton's method: )
Euler backward: u_k = u_{k−1} + h f(u_k, t_k),   t_k = t_{k−1} + h
For y' = y^3 + t this gives
(y_k − y_{k−1})/h = y_k^3 + t_k   ⟹   F(y_k) = y_k^3 + t_k − (y_k − y_{k−1})/h = 0
Newton's method applied to F(y_k) = 0 gives the iteration
y_k^{i+1} = y_k^i − [ (y_k^i)^3 + t_k − (y_k^i − y_{k−1})/h ] / [ 3(y_k^i)^2 − 1/h ]
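A minimal sketch of these formulas in Python, assuming the ODE y' = y^3 + t from the question (step size, number of steps and Newton tolerances are arbitrary choices):

```python
import numpy as np

# Backward Euler with a Newton iteration in every step, for y' = y^3 + t, y(0) = 1.
def backward_euler(y0=1.0, h=1e-3, n_steps=100, tol=1e-12, maxit=20):
    y, t = y0, 0.0
    for _ in range(n_steps):
        t_new = t + h
        yk = y                                   # Newton start guess: previous value
        for _ in range(maxit):
            F = yk**3 + t_new - (yk - y) / h     # F(y_k) = 0 from the text
            dF = 3.0 * yk**2 - 1.0 / h
            step = F / dF
            yk -= step
            if abs(step) < tol:
                break
        y, t = yk, t_new
    return y

print(backward_euler())   # approximation of y(0.1)
```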
12. What is meant by an explicit method for solving an initial value
problem
ẏ = f (y), y(0) = y0
Give at least two examples (with their formulas) of explicit methods
often used to solve such a problem.
Answer:
The (leap frog) midpoint method is used to solve y'(t) = f(t, y(t)), y(t_0) = y_0, and is of order O(h^2):
u_k = u_{k−2} + 2h f(t_{k−1}, u_{k−1}),
t_k = t_{k−1} + h,   k = 2, 3, ..., N
Applying it to the test equation u' = f = λu gives the difference equation
u_k − 2hλ u_{k−1} − u_{k−2} = 0
The characteristic equation of the difference equation is
μ^2 − 2λhμ − 1 = 0,
(μ − μ_1)(μ − μ_2) = 0,   μ_1 μ_2 = −1,   μ_1 + μ_2 = 2hλ
Write one root in polar form: μ_1 = re^{iφ}. Since μ_1 μ_2 = −1 we have |μ_2| = 1/r, so both roots stay bounded (|μ_i| ≤ 1, required for stability) only if r = 1, i.e. μ_2 = −e^{−iφ}. Then μ_1 + μ_2 = 2hλ → e^{iφ} − e^{−iφ} = 2hλ → 2i sin(φ) = 2hλ → hλ = i sin(φ).
Only purely imaginary values of hλ lie in the stability region.
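A quick numerical check of that conclusion (a sketch; the test values of hλ are my own choices): the roots of μ^2 − 2hλμ − 1 = 0 both stay on the unit circle for purely imaginary hλ with |hλ| < 1, while one root leaves the unit disc for a real negative hλ.

```python
import numpy as np

def leapfrog_root_moduli(h_lambda):
    # Roots of the characteristic equation mu^2 - 2*(h*lambda)*mu - 1 = 0.
    return np.abs(np.roots([1.0, -2.0 * h_lambda, -1.0]))

print(leapfrog_root_moduli(0.5j))   # both moduli equal 1 -> bounded
print(leapfrog_root_moduli(-0.5))   # one modulus > 1 -> unbounded
```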
17. Explain the difference between local error and global error for the explicit Euler method.
Answer:
Global error: e_k = u(t_k) − u_k = O(h)
This is also known as the global truncation error and can be regarded as the accumulated effect of all the local errors up to the point t_k (note that it is seldom simply the sum of all the local errors).
Summation: how much the numerical solution differs from the "real" value up to step t_k.
Local error: l(t_k, h) = (u(t_k) − u(t_{k−1}))/h − f(t_{k−1}, u(t_{k−1}))
It can be regarded as the residual when the exact solution is inserted into the explicit Euler formula. The local error is basically the error committed in one single step.
In conclusion: Global error: error accumulated up to a certain point.
Local error: error in one step.
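A small sketch illustrating the O(h) behaviour of the global error for explicit Euler (the test problem y' = −y, y(0) = 1 is my own choice; its exact solution is e^{−t}):

```python
import numpy as np

# Explicit Euler for y' = -y, y(0) = 1; global error at T = 1 versus step size h.
def euler_global_error(h, T=1.0):
    u = 1.0
    for _ in range(int(round(T / h))):
        u += h * (-u)
    return abs(np.exp(-T) - u)

for h in [0.1, 0.05, 0.025]:
    print(h, euler_global_error(h))   # the error roughly halves when h is halved -> O(h)
```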
(2) 18. An ODE-problem, initial value and boundary value problem, can be solved by a discretization method or an ansatz method. Give a brief description of what is meant by these two method classes.
Discretization: the interval is divided into grid points and the derivatives are replaced by difference approximations, so that approximate solution values are computed at the grid points.
Ansatz: approximating the solution with a best fit from a set of ansatz functions, often exponential functions.
(4) 19. When an initial value problem ẏ = f(y), y(0) = y_0 is solved with a discretization method, what is meant by the stability area in the complex hq-plane for the method? Give a sketch of the stability area for the method .........
The area in the complex plane where the numerical solution is stable for the product λh.
20. A stiff system is a system where the eigenvalues λ_i of the Jacobian J(f(y)) are orders of magnitude different from each other and for all i it holds that Re(λ_i) ≤ 0.
(2) 21. For a linear system of ODE's ẏ = Ay, where the eigenvalues of A are λ_1, λ_2, ..., λ_n, when is the system stable? Assume that all eigenvalues are real and negative, when is the system stiff?
The system is stable when Re(λ_i) < 0 ∀i and is considered stiff when the different λ are of very different sizes, usually differing by factors of 10^2 or larger.
(2) 22. Describe some discretization methods that are suitable for stiff initial value ODE problems.
Euler implicit
(u_k − u_{k−1})/h = f(t_k, u_k); from this equation one solves for u_k, and usually this needs to be done in every step, making the method somewhat inefficient.
Trapezoidal method
A symmetric combination of Euler's implicit and explicit methods,
(u_k − u_{k−1})/h = (1/2)[f(t_k, u_k) + f(t_{k−1}, u_{k−1})]; this method is not recommended for very stiff problems.
Both Euler implicit and the trapezoidal method necessitate solving for u_k in every step; a sketch of this for a linear system is given below.
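A minimal sketch of Euler implicit applied to a stiff linear test system ẏ = Ay (the matrix below is the system from question 24, with eigenvalues −100 and −0.1); for a linear problem, solving for u_k amounts to one linear solve per step:

```python
import numpy as np

# Implicit Euler for y' = A y: (I - h A) y_{k+1} = y_k.
A = np.array([[-100.0, 1.0],
              [0.0, -0.1]])        # stiff: eigenvalues -100 and -0.1
h = 0.1                            # far too large for explicit Euler, fine here
y = np.array([1.0, 1.0])
M = np.eye(2) - h * A
for _ in range(50):
    y = np.linalg.solve(M, y)      # "solve for u_k" = one linear solve per step
print(y)
```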
(2) 23. Which is the computational problem when a stiff initial value ODE-system is solved with an explicit method?
A very large spread in λ forces a very small step h, since the stability condition is set by the largest |λ|, while the slow part of the solution, corresponding to the smallest |λ|, will most likely be interesting to study over a long period of time. Many steps are therefore needed and the problem becomes very expensive to solve.
(3) 24. Given e.g. the following ODE-system ẏ = −100y+z , y(0) = 1, ż = −0.1z ,
z(0) = 1. For which values of the stepsize h is the Euler forward method stable?
Same question for the Euler backward method.
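A sketch of the check one would do (assuming the eigenvalues −100 and −0.1 are read off the triangular system): Euler forward requires |1 + hλ| ≤ 1 for both eigenvalues, which gives h ≤ 2/100 = 0.02, while Euler backward has amplification factor 1/(1 − hλ), whose modulus is ≤ 1 for every h > 0 when Re λ < 0, i.e. no restriction on h.

```python
import numpy as np

# Quick check of the eigenvalues and the forward Euler step size bound (a sketch).
A = np.array([[-100.0, 1.0],
              [0.0, -0.1]])
lams = np.linalg.eigvals(A)                 # -100 and -0.1
print(lams, 2.0 / np.max(np.abs(lams)))     # forward Euler stable for h <= 0.02
# Backward Euler: |1/(1 - h*lambda)| <= 1 for all h > 0 when Re(lambda) < 0.
```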
(4) 25. Given the vibration equation mẍ + cẋ + kx = 0, x(0) = 0, ẋ(0) = v0 .
Use scaling of x and t to formulate this ODE on dimensionless form. Determine
scaling factors so that the scaled equation contains as few parameters as possible.
Set x = aξ and t = bτ. Then
d²x/dt² = (a/b²) d²ξ/dτ²   and   dx/dt = (a/b) dξ/dτ.
Rewriting the equation as ẍ + (c/m)ẋ + (k/m)x = 0 and inserting the above gives
(a/b²) ξ̈ + (c/m)(a/b) ξ̇ + (k/m) a ξ = 0.
Dividing by a/b² removes a, and choosing b = m/c gives
ξ̈ + ξ̇ + (mk/c²) ξ = 0.
For the initial condition, ẋ(0) = v_0 = (a/b) ξ̇(0); selecting a = v_0 b we get
ξ̇(0) = 1.
(2) 26. Describe the finite difference method used to solve a boundary value problem y'' + a(x)y' + b(x)y = c(x), y(0) = a, y'(1) + y(1) = b
27) Verify with Taylor expansion that the following two approximations are of second order, i.e. O(h²). 1) y'(x) ≈ (y(x + h) − y(x − h))/2h, 2) y''(x) ≈ (y(x + h) − 2y(x) + y(x − h))/h².
y(x + h) − y(x − h) =
= y(x) + y'(x)h + y''(x)h²/2 + O(h³) − (y(x) + y'(x)(−h) + y''(x)h²/2 + O(h³)) =
= 2hy'(x) + O(h³)
⟹ 2hy'(x) = y(x + h) − y(x − h) + O(h³)
y'(x) = (y(x + h) − y(x − h))/(2h) + O(h²).
Thus, we have shown that y'(x) ≈ (y(x + h) − y(x − h))/2h with second order accuracy. The second approximation is verified analogously (the corresponding expansion for y'' is carried out in question 41).
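A small numerical sanity check of both formulas (a sketch; the test function sin(x) and the step sizes are my own choices):

```python
import numpy as np

f, x = np.sin, 1.0
d1, d2 = np.cos(x), -np.sin(x)          # exact first and second derivatives at x
for h in [1e-2, 5e-3, 2.5e-3]:
    e1 = abs((f(x + h) - f(x - h)) / (2 * h) - d1)
    e2 = abs((f(x + h) - 2 * f(x) + f(x - h)) / h**2 - d2)
    print(h, e1, e2)                    # both errors drop by ~4 when h is halved -> O(h^2)
```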
28) Derive a second order difference approximation to y⁽⁴⁾(x) using the values y(x + 2h), y(x + h), y(x), y(x − h) and y(x − 2h).
We are looking for a formula
y⁽⁴⁾(x) = a y(x + 2h) + b y(x + h) + c y(x) + d y(x − h) + e y(x − 2h) + O(h²)
Taylor expanding each term and matching coefficients gives the conditions
a + b + c + d + e = 0
2ah + bh − dh − 2eh = 0
2ah² + bh²/2 + dh²/2 + 2eh² = 0
4ah³/3 + bh³/6 − dh³/6 − 4eh³/3 = 0
2ah⁴/3 + bh⁴/24 + dh⁴/24 + 2eh⁴/3 = 1
In matrix form:
[ 1   1   1   1   1 ] [a]            [0]
[ 2   1   0  −1  −2 ] [b]            [0]
[ 4   1   0   1   4 ] [c]  =  24/h⁴  [0]
[ 8   1   0  −1  −8 ] [d]            [0]
[16   1   0   1  16 ] [e]            [1]
with solution
a = 1/h⁴,   b = −4/h⁴,   c = 6/h⁴,   d = −4/h⁴,   e = 1/h⁴.
Now we need to determine the order of the error. We look at the next term in the Taylor expansion, namely
a y⁽⁵⁾ 32h⁵/120 + b y⁽⁵⁾ h⁵/120 − d y⁽⁵⁾ h⁵/120 − e y⁽⁵⁾ 32h⁵/120.
Since b = d and a = e, this term is zero. Thus, we look at the term following it in the Taylor expansion, which is proportional to h⁶. That term will obviously not be zero, and since the coefficients are of order h⁻⁴ our error will be of the order h⁶/h⁴ = h².
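A sketch that reproduces the coefficients by solving the 5×5 system numerically and checks the stencil on a test function (x⁶ and the value of h are my own choices):

```python
import numpy as np

h = 0.1
# Linear system from the Taylor conditions above (already scaled by 24/h^4).
M = np.array([[1, 1, 1, 1, 1],
              [2, 1, 0, -1, -2],
              [4, 1, 0, 1, 4],
              [8, 1, 0, -1, -8],
              [16, 1, 0, 1, 16]], dtype=float)
a, b, c, d, e = np.linalg.solve(M, 24.0 / h**4 * np.array([0, 0, 0, 0, 1.0]))
print(np.round(np.array([a, b, c, d, e]) * h**4))   # -> [1, -4, 6, -4, 1]

f = lambda x: x**6                                  # fourth derivative is 360*x^2
x = 1.0
approx = a*f(x + 2*h) + b*f(x + h) + c*f(x) + d*f(x - h) + e*f(x - 2*h)
print(approx, 360.0 * x**2)                         # close; the error is O(h^2)
```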
Thus, we may write
y'(x) = a y(x) + b y(x − h) + c y(x − 2h)
      = a y(x) + b(y(x) − y'(x)h + y''(x)h²/2) + c(y(x) − 2y'(x)h + 2y''(x)h²) + O(h³).
Matching coefficients gives
a + b + c = 0
−bh − 2ch = 1
bh²/2 + 2ch² = 0
i.e., in matrix form,
[1   1   1] [a]         [0]
[0  −1  −2] [b]  =  1/h [1]
[0   1   4] [c]         [0]
with solution
a = 3/(2h),   b = −4/(2h),   c = 1/(2h).
Since the coefficients are of order h⁻¹ and we know that the first possible non-zero term we have not taken into account is of order h³, we may deduce that we have an error term of order h². Thus, we may write the first derivative of y(x) in the following way:
y'(x) = (3y(x) − 4y(x − h) + y(x − 2h))/(2h) + O(h²).
30) Given a second order ODE y'' = f(x, y, y'). Assume a Dirichlet boundary value is given in the left interval point y(0) = 1. Present two other ways in which a boundary condition can be given in the right boundary point.
For second order BVP's there are three kinds of boundary conditions. Apart from the above mentioned Dirichlet boundary conditions, which consist of specifying the solution at the boundary, there are Neumann boundary conditions and Robin boundary conditions (or mixed boundary conditions). The Neumann BC's consist of specifying the derivative at the boundary. The Robin BC's specify a combination of y'(x) and y(x) at the boundary.
31) When a boundary value problem y'' = p(x)y' + q(x)y + r(x), y(0) = 1, y(1) = 0 is solved with discretisation based on the approximations y''(x_n) ≈ (y_{n+1} − 2y_n + y_{n−1})/h² and y'(x_n) ≈ (y_{n+1} − y_{n−1})/2h we obtain a linear system Ay = b of equations to be solved. Set up this system. Which special structure does the matrix A have?
In order to set up the system we start by substituting the discretised derivatives into the equation. This yields
(1/h² − p_n/(2h)) y_{n+1} − (2/h² + q_n) y_n + (1/h² + p_n/(2h)) y_{n−1} = r_n.
Collecting the equations for the inner points gives
Ay = b,
where
A = [c2  c3   0   0  ···   0 ]
    [c1  c2  c3   0  ···   0 ]
    [ 0  c1  c2  c3  ···   0 ]
    [          ...           ]
    [ 0  ···   0  c1  c2  c3 ]
    [ 0  ···   0   0  c1  c2 ]
where c1 = 1/h² + p_n/(2h), c2 = −2/h² − q_n and c3 = 1/h² − p_n/(2h). Moreover, we have
b = (r_1, r_2, ..., r_{N−1})ᵀ,   y = (y_1, y_2, ..., y_{N−1})ᵀ,
where y_0 = y(0) and y_N = y(1) are given by the boundary conditions and are therefore not included as unknowns in the y-vector. The boundary value y(1) = 0 contributes nothing, while the nonzero value y(0) = 1 is moved into the b-vector: the first equation gets b_1 = r_1 − c_1 y_0.
The matrix A is said to be tridiagonal, since all nonzero elements lie on the three centermost diagonals of A. A sketch of the assembly is given below.
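A sketch of how this tridiagonal system could be assembled and solved with sparse storage (the coefficient functions p, q, r below are hypothetical examples, not taken from the question):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# y'' = p(x) y' + q(x) y + r(x), y(0) = 1, y(1) = 0, central differences on a uniform grid.
p = lambda x: 0.0 * x                    # hypothetical example coefficients
q = lambda x: 1.0 + 0.0 * x
r = lambda x: np.sin(np.pi * x)

N = 100
h = 1.0 / N
xi = np.linspace(h, 1.0 - h, N - 1)      # inner points x_1 ... x_{N-1}

c1 = 1.0 / h**2 + p(xi) / (2 * h)        # coefficient of y_{n-1}
c2 = -2.0 / h**2 - q(xi)                 # coefficient of y_n
c3 = 1.0 / h**2 - p(xi) / (2 * h)        # coefficient of y_{n+1}

A = diags([c1[1:], c2, c3[:-1]], offsets=[-1, 0, 1], format="csc")
b = r(xi)
b[0] -= c1[0] * 1.0                      # nonzero boundary value y(0) = 1 moved to b
# y(1) = 0 contributes nothing to the last equation.

y = np.concatenate(([1.0], spsolve(A, b), [0.0]))
print(y[:5])
```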
34) What input and output data are suitable for a Matlab function solving a tridiagonal system of linear equations Ax = b if the goal is to save number of flops and computer memory? If we want to save computer memory and number of flops, it is wise to give the input matrix A as a sparse matrix.
(I have no idea of what he is looking for with this question.)
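One possible answer (my sketch, not necessarily what the examiner is after): pass only the three diagonals as vectors plus the right-hand side, and return the solution vector; the Thomas algorithm then needs O(n) flops and O(n) memory.

```python
import numpy as np

def solve_tridiagonal(sub, main, sup, b):
    # Thomas algorithm: sub (length n-1), main (length n), sup (length n-1), rhs b (length n).
    n = len(main)
    c, d, y = np.array(sup, float), np.array(main, float), np.array(b, float)
    for i in range(1, n):                 # forward elimination, O(n)
        w = sub[i - 1] / d[i - 1]
        d[i] -= w * c[i - 1]
        y[i] -= w * y[i - 1]
    x = np.empty(n)
    x[-1] = y[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution, O(n)
        x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
    return x

# Example: a small 1D Laplacian-like system; the solution is [1, 1, 1].
print(solve_tridiagonal([-1, -1], [2, 2, 2], [-1, -1], [1, 0, 1]))
```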
35) The boundary value problem −y'' = f(x), y(0) = y(1) = 0 can be solved with an ansatz method based on Galerkin's method. Formulate the Galerkin method for this problem.
The Galerkin method starts with an ansatz on the form
y(x) ≈ ỹ(x) = Σ_{j=1}^{N} c_j φ_j(x),
where φ_j(x) are given basis functions and c_j are coefficients to be determined so that ỹ(x) is a good approximation. We assume that the basis functions satisfy the boundary conditions, i.e. φ_j(0) = φ_j(1) = 0.
If we insert the ansatz into the BVP we get the following residual function
r(x) = Σ_{j=1}^{N} c_j d²φ_j/dx² + f(x).
Preferably, we want r(x) to be small. Galerkin's method deals with this in the following way: we demand r(x) to be orthogonal to the basis functions. This may be expressed as
∫₀¹ r(x) φ_i(x) dx = 0   ∀i,
i.e.
∫₀¹ ( Σ_{j=1}^{N} c_j d²φ_j/dx² + f(x) ) φ_i dx = 0   ∀i,
or
Σ_{j=1}^{N} c_j ∫₀¹ (d²φ_j/dx²) φ_i dx + ∫₀¹ f(x) φ_i dx = 0   ∀i.
Integration by parts gives
∫₀¹ (d²φ_j/dx²) φ_i dx = [ (dφ_j/dx) φ_i ]₀¹ − ∫₀¹ (dφ_j/dx)(dφ_i/dx) dx = −∫₀¹ (dφ_j/dx)(dφ_i/dx) dx.
The last equality follows from the fact that φ_i(0) = φ_i(1) = 0.
Inserting the expression for the integral into the orthogonality conditions yields
Σ_{j=1}^{N} c_j ∫₀¹ (dφ_j/dx)(dφ_i/dx) dx = ∫₀¹ f(x) φ_i dx,   i = 1, ..., N,
i.e. a linear system Ac = b with
a_{i,j} = ∫₀¹ (dφ_j/dx)(dφ_i/dx) dx,   b_i = ∫₀¹ f(x) φ_i dx.
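A sketch of the resulting method with piecewise linear roof (hat) functions on a uniform grid (the right-hand side f below is a hypothetical example with known exact solution). For hat functions with spacing h the stiffness integrals give a tridiagonal matrix with 2/h on the diagonal and −1/h off it, and b_i = ∫ f φ_i dx ≈ h f(x_i) with a simple quadrature:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Galerkin with hat functions for -y'' = f(x), y(0) = y(1) = 0.
f = lambda x: np.pi**2 * np.sin(np.pi * x)       # hypothetical example; exact solution sin(pi*x)

N = 50                                           # number of inner nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)

A = diags([-np.ones(N - 1), 2 * np.ones(N), -np.ones(N - 1)], [-1, 0, 1], format="csc") / h
b = h * f(x)                                     # simple quadrature of int f*phi_i dx

c = spsolve(A, b)                                # c_i are the nodal values of the approximation
print(np.max(np.abs(c - np.sin(np.pi * x))))     # small error
```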
36. Classify the following PDEs with respect to linearity (linear or nonlinear), order (first, second), type (elliptic, parabolic, hyperbolic):
a) u_t = u_xx + u,
b) u_xx + 2u_yy = 0,
c) u_t = (a(u)u_x)_x,
d) u_t + uu_x = 0,
e) u_y = u_x x + x
37. Given PDE:
u_t = u_xx + u
Set a = 1, b = −2 + h_x². We get the system
dū/dt = Aū + b̄,
where
A = (1/h_x²) ·
[ b  a  0  ···  ···  0 ]
[ a  b  a   0  ···  0 ]
[ 0  a  b   a  ···  0 ]
[         ...         ]
[ 0  0  ···  a  b  a ]
[ 0  ···  0  0  a  b ]
and
b̄ = (1/h_x²) (0, ..., 0, 1)ᵀ.
A is an N×N matrix, where N is the number of inner points. The vector b̄ comes from the boundary value u(1, t) = 1. The initial values are discretised as u_i(0) = x_i.
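A sketch of the resulting Method of Lines system in code (assuming, as the answer above suggests, u(0, t) = 0, u(1, t) = 1 and u(x, 0) = x; the end time and solver choice are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of Lines for u_t = u_xx + u on 0 < x < 1.
# Assumed data (the full question text is not reproduced above):
# u(0, t) = 0, u(1, t) = 1, u(x, 0) = x.
N = 49                                   # inner points
hx = 1.0 / (N + 1)
x = np.linspace(hx, 1.0 - hx, N)

a, b = 1.0, -2.0 + hx**2
A = (np.diag(b * np.ones(N)) + np.diag(a * np.ones(N - 1), 1)
     + np.diag(a * np.ones(N - 1), -1)) / hx**2
bvec = np.zeros(N)
bvec[-1] = 1.0 / hx**2                   # from the boundary value u(1, t) = 1

sol = solve_ivp(lambda t, u: A @ u + bvec, (0.0, 1.0), x, method="BDF")
print(sol.y[:, -1][::10])
```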
38. Formulate the ODE-system u̇ = Au + b when the Method of Lines is applied to the PDE-problem u_t + u_x = 0, t > 0, 0 < x < 1, u(0, t) = 0, u(x, 0) = x. u_x is to be approximated by the backward Euler difference formula.
u_t ≈ −(u_{i,j} − u_{i−1,j})/h_x
Set a = 1/h_x and b = −1/h_x. We get the system
dū/dt = Aū + b̄,
where
A = [ b  0  0  ···  ···  0 ]
    [ a  b  0   0  ···  0 ]
    [ 0  a  b   0  ···  0 ]
    [         ...         ]
    [ 0  ···  0  a  b  0 ]
    [ 0  ···  0  0  a  b ]
and
b̄ = 0̄
A is an (N+1)×(N+1) matrix, where N is the number of inner points (unlike question 37 we have no boundary value at x = 1 and are forced to compute that value ourselves, hence N+1). The initial value is discretised in the same way as in question 37.
39. What is meant by an upwind scheme for solving u_t + au_x = 0, a > 0? (2)
An upwind scheme for u_t + au_x = 0, a > 0 means Forward Time, Backward Space:
(u_{i,j+1} − u_{i,j})/h_t + a (u_{i,j} − u_{i−1,j})/h_x = 0.
FTBS is of first order accuracy in the t- and x-directions and has the stability criterion 0 < a h_t/h_x ≤ 1.
40. Derive a difference approximation and the corresponding stencil to e.g. the Laplace operator
∂²u/∂x² + ∂²u/∂y² + ∂u/∂x + ∂u/∂y
We use an approximation of second order (that it really is of second order is shown in question 41):
u_xx + u_yy + u_x + u_y ≈ (u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h_x² + (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/h_y² + (u_{i+1,j} − u_{i−1,j})/(2h_x) + (u_{i,j+1} − u_{i,j−1})/(2h_y) =
= u_{i−1,j}[1/h_x² − 1/(2h_x)] + u_{i,j}[−2/h_x² − 2/h_y²] + u_{i+1,j}[1/h_x² + 1/(2h_x)] + u_{i,j−1}[1/h_y² − 1/(2h_y)] + u_{i,j+1}[1/h_y² + 1/(2h_y)]
41. What is the order of accuracy in t and x when the Method of Lines with central differences in the x-variable and the implicit Euler method is applied to e.g. the heat equation u_t = u_xx?
u(x + h) = u(x) + u'(x)h + u''(x)h²/2 + u'''(x)h³/6 + O(h⁴)
u(x − h) = u(x) − u'(x)h + u''(x)h²/2 − u'''(x)h³/6 + O(h⁴)
⟹ u(x + h) + u(x − h) = 2u(x) + u''(x)h² + O(h⁴)
⟹ (u(x + h) − 2u(x) + u(x − h))/h² = u''(x) + O(h²)
so the central difference in x is of second order. By the same method, or according to p. 42 in the course book, implicit Euler is of order 1. The combination is thus of first order in t and second order in x.
(d(x)u_x)_x = d_x u_x + d u_xx
d_x = (d(x + h) − d(x − h))/(2h) + O(h²) ⟹ (d_{i+1} − d_{i−1})/(2h) + O(h²)
u_x = (u(x + h) − u(x − h))/(2h) + O(h²) ⟹ (u_{i+1} − u_{i−1})/(2h) + O(h²)
u_xx = (u_{i+1} − 2u_i + u_{i−1})/h² + O(h²)
(d(x)u_x)_x ≈ [(d_{i+1} − d_{i−1})/(2h)] · [(u_{i+1} − u_{i−1})/(2h)] + d_i (u_{i+1} − 2u_i + u_{i−1})/h² + O(h²)
43. Describe the Crank-Nicolson method for solving the heat equation.
From Wikipedia, first a description of the method in general. Consider a PDE
∂u/∂t = F(u, x, t, ∂u/∂x, ∂²u/∂x²)
then, letting u(iΔx, nΔt) = u_i^n, the equation for the Crank-Nicolson method is the average of the forward Euler method at n and the backward Euler method at n + 1 (note, however, that the method itself is not simply the average of those two methods, as the equation has an implicit dependence on the solution):
(u_i^{n+1} − u_i^n)/Δt = F_i^n(u, x, t, ∂u/∂x, ∂²u/∂x²)   (forward Euler)
(u_i^{n+1} − u_i^n)/Δt = F_i^{n+1}(u, x, t, ∂u/∂x, ∂²u/∂x²)   (backward Euler)
(u_i^{n+1} − u_i^n)/Δt = (1/2)[ F_i^{n+1}(u, x, t, ∂u/∂x, ∂²u/∂x²) + F_i^n(u, x, t, ∂u/∂x, ∂²u/∂x²) ]   (Crank-Nicolson)
The function F must be discretized spatially with a central difference.
Note that this is an implicit method: to get the "next" value of u in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast O(n) direct solution as opposed to the usual O(n³) for a full matrix.
Applied to the heat equation
∂u/∂t = a ∂²u/∂x²
the Crank-Nicolson discretization is:
(u_i^{n+1} − u_i^n)/Δt = a/(2(Δx)²) [ (u_{i+1}^{n+1} − 2u_i^{n+1} + u_{i−1}^{n+1}) + (u_{i+1}^n − 2u_i^n + u_{i−1}^n) ]
or, letting r = aΔt/(2(Δx)²):
−r u_{i+1}^{n+1} + (1 + 2r) u_i^{n+1} − r u_{i−1}^{n+1} = r u_{i+1}^n + (1 − 2r) u_i^n + r u_{i−1}^n,
which is a tridiagonal problem, so that u_i^{n+1} may be efficiently solved for by using the tridiagonal matrix algorithm in favor of a much more costly matrix inversion.
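A sketch of Crank-Nicolson time stepping for the 1D heat equation (grid size, time step, diffusion coefficient and initial data are my own choices; homogeneous Dirichlet boundary values):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

# Crank-Nicolson for u_t = a*u_xx on 0 < x < 1 with u = 0 at both boundaries.
a, N, dt, nsteps = 1.0, 99, 1e-3, 200
dx = 1.0 / (N + 1)
r = a * dt / (2 * dx**2)
x = np.linspace(dx, 1.0 - dx, N)
u = np.sin(np.pi * x)                   # example initial condition, exact decay known

one = np.ones(N - 1)
L = diags([-r * one, (1 + 2 * r) * np.ones(N), -r * one], [-1, 0, 1], format="csc")
R = diags([ r * one, (1 - 2 * r) * np.ones(N),  r * one], [-1, 0, 1])
lu = splu(L)                            # factor the tridiagonal matrix once

for _ in range(nsteps):
    u = lu.solve(R @ u)                 # one tridiagonal solve per time step

exact = np.exp(-np.pi**2 * a * nsteps * dt) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))        # small error
```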
2D
When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation
∂u/∂t = a(∂²u/∂x² + ∂²u/∂y²)
can be solved with the Crank-Nicolson discretization
u_{i,j}^{n+1} = u_{i,j}^n + (1/2) aΔt/(Δx)² [ (u_{i+1,j}^{n+1} + u_{i−1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j−1}^{n+1} − 4u_{i,j}^{n+1}) + (u_{i+1,j}^n + u_{i−1,j}^n + u_{i,j+1}^n + u_{i,j−1}^n − 4u_{i,j}^n) ]
ũ_t ≈ ũ_xx   ⟺   Σ_{i=1}^{N} (dc_i(t)/dt) φ_i(x) ≈ Σ_{i=1}^{N} c_i(t) d²φ_i(x)/dx².
r(x, t) = Σ_{i=1}^{N} (dc_i(t)/dt) φ_i(x) − Σ_{i=1}^{N} c_i(t) d²φ_i(x)/dx².
We now impose the condition that r(x, t) ⊥ φ_j(x) for j = 1, . . . , N and all t, i.e.
∫₀¹ r(x, t) φ_j(x) dx = 0,   for j = 1, . . . , N, and all t.
This gives
(*)   Σ_{i=1}^{N} (dc_i(t)/dt) ∫₀¹ φ_i(x)φ_j(x) dx − Σ_{i=1}^{N} c_i(t) ∫₀¹ (d²φ_i(x)/dx²) φ_j(x) dx = 0.
Integration by parts gives
−∫₀¹ (d²φ_i(x)/dx²) φ_j(x) dx = −[ (dφ_i(x)/dx) φ_j(x) ]₀¹ + ∫₀¹ (dφ_i(x)/dx)(dφ_j(x)/dx) dx = ∫₀¹ (dφ_i(x)/dx)(dφ_j(x)/dx) dx,
since φ_j(0) = φ_j(1) = 0. Inserting this into (*) gives a system of ODE's called the Galerkin formulation of the problem,
M dc/dt + Ac = 0,
where
M_ij = ∫₀¹ φ_i(x)φ_j(x) dx,   and   A_ij = ∫₀¹ (dφ_i(x)/dx)(dφ_j(x)/dx) dx.
The IC's are obtained from
ũ(x, 0) = Σ_{i=1}^{N} c_i(0) φ_i(x) = f(x).
(Here Edsberg does something weird since his f is named u_0 and f is a function of the PDE.) By multiplying with φ_j(x) and integrating over [0, 1] we obtain the linear system
M c(0) = b,   with b_j = ∫₀¹ f(x) φ_j(x) dx.
Intuitively, the roof-functions can be thought of as triangles with corners in (xi−1 , 0),
(xi , 1) and (xi+1 , 0).
Solution: (See Edsberg p. 142-143.) Suppose that we have discretised the region of our 2D problem and thus obtained points (x_1, y_1), . . . , (x_n, y_m). The pyramid function ρ_ij(x, y) is defined as the function which is zero everywhere except on the quadrangle with corners (x_{i+1}, y_j), (x_i, y_{j+1}), (x_{i−1}, y_j), (x_i, y_{j−1}). On this quadrangle ρ_ij describes a pyramid, i.e. if we see the quadrangle as a subset of R³ then ρ_ij connects the sides of the quadrangle to the point (x_i, y_j, 1) with filled triangles. To describe the pyramid explicitly as a function of x and y seems to become rather messy (we have to subdivide the quadrangle into four triangles and define ρ_ij as four affine (linear) functions over the respective triangles, in a way similar to Q.46).
Solution: (See Edsberg Appendix A.5.) A direct method is one where the system is solved directly, i.e. in each step an unknown is solved for and a solution is obtained in terms of the unknowns not already solved for.
An iterative method is a method where one starts at some initial guess and successively computes better approximations to the solution.
The standard direct method is Gaussian elimination. Often one makes this method more efficient by applying e.g. LU-factorisation or Cholesky-factorisation.
Examples of iterative methods are Jacobi's method and Gauss-Seidel's method.
Solution: (See Edsberg p.147.) Suppose that we solve a conservation law nu-
merically. It may occur that the property that should be conserved is not, due
to numerical effects. This phenomenon is called dissipation (or, more accurately,
numerical dissipation).
It may also occur that the phase relations are distorted from what they should
be and that the wave speed is variable even though it should be constant, because
of numerical properties of our algorithm. This phenomenon is called (numerical)
dispersion.
Q.50. What is meant by fill-in when solving a sparse linear system of equations Ax = b with a direct method?
Solution: (See Edsberg p. 137.) When solving the system directly, for instance using Gaussian elimination, some of the zeros of the matrix A may become nonzero in the process. (In the case of Gaussian elimination this occurs when we add or subtract a nonzero element to a zero element.) We then say that there is a fill-in in A, since some of the zero elements are filled with nonzero elements.
then A is positive definite. This is tedious, but it does not involve solution of polynomial equations of high degree.
Q.52 If Ax = b is rewritten on the form x = Hx + c and the iteration x_{k+1} = Hx_k + c is defined, what is the condition on H for convergence of the iterations to the solution of Ax = b? Also formulate a condition on H for fast convergence.
Solution. The idea is to look at a split of the matrix A as, say, A = M − N for some matrices M and N. Then we get Ax = b ⟺ Mx = Nx + b. We want to find the x for which this is fulfilled, and that we can do by first guessing a solution x_0 and then iterating by the formula M x_{k+1} = N x_k + b, and keep doing this until ||x_{k+1} − x_k|| is as small as we want. Of course, if we choose M to be invertible we can rewrite this iterative process as
x_{k+1} = M^{-1} N x_k + M^{-1} b = Hx_k + c.
Let G = M^{-1} N; then, in order to study the convergence, we consider the eigenvalues of G and let
ρ(G) = max_i |λ_i(G)|.
The following result is not really derived in the book, but I suppose that it comes from the fact that r_k = Ae_k, where r_k is the residual and e_k is the error, and we get convergence if e_k → 0 as k → ∞. The result, however, is that if ρ(G) < 1 we have convergence. Also, the convergence is faster the smaller ρ(G) is. A little trivia: ρ(G) is called the spectral radius of G.
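A sketch illustrating the criterion for a Jacobi-type splitting A = M − N with M = diag(A) (my choice of splitting and of the diagonally dominant example matrix): ρ(M⁻¹N) < 1 here, so the iteration converges to the solution of Ax = b.

```python
import numpy as np

# Iteration x_{k+1} = H x_k + c with H = M^{-1} N and c = M^{-1} b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])            # diagonally dominant example
b = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))                    # Jacobi choice of M in A = M - N
N = M - A
H = np.linalg.solve(M, N)
c = np.linalg.solve(M, b)

rho = max(abs(np.linalg.eigvals(H)))       # spectral radius; < 1 => convergence
x = np.zeros(3)
for _ in range(50):
    x = H @ x + c
print(rho, np.max(np.abs(x - np.linalg.solve(A, b))))
```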
Q.53 What is meant by the steepest descent method for solving Ax = b, where A is symmetric and positive definite?
Solution. When we consider the previous question for a symmetric, positive definite matrix A (SPD) we get a special case (which might have been seen in the basic course of optimization, for those of you who have taken that course, but I'm not 100% sure). For this problem, finding the solution x to the system of equations Ax = b is equivalent to minimizing the function F(x) = (1/2) xᵀAx − xᵀb with respect to x. In each iteration one takes a step x_{k+1} = x_k + α_k d_k along a search direction d_k, where r_k = b − Ax_k is the residual and the optimal step length is
α_k = (r_kᵀ d_k)/(d_kᵀ A d_k).
Now we can define what the steepest descent method is! It is the method where we let d_k = r_k and use starting guess x_0 = 0. This simply means that we go in the direction of the residual when we search for our minimizer.
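A minimal sketch of the method as described (the SPD example matrix, right-hand side and tolerance are my own choices):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-10, maxit=10_000):
    # Minimize F(x) = 0.5*x^T A x - x^T b by stepping along the residual direction.
    x = np.zeros_like(b)
    for _ in range(maxit):
        r = b - A @ x                      # residual = search direction d_k
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))    # optimal step length along r
        x = x + alpha * r
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])     # symmetric positive definite example
b = np.array([1.0, 1.0])
print(steepest_descent(A, b), np.linalg.solve(A, b))
```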
Q.54 Describe the preconditioning when used with the steepest descent method. Assume that the matrix A is symmetric and positive definite.
Solution. The convergence of the steepest descent method is related to the condition number
κ(A) = max_i λ_i(A) / min_i λ_i(A).
If κ is large, the convergence is slow and if it is small, the convergence is fast. If the condition number is large we can reduce it with preconditioning. We introduce a preconditioning matrix that approximates A but is cheap to invert, and apply the method to the preconditioned system, whose condition number is smaller.
Q55. For a hyperbolic PDE in x and t the solution u(x, t) is constant along certain curves in the x, t-plane. What are these curves called? Give the analytic expression for these curves belonging to u_t + 2u_x = 0.
A55. They are called characteristics (page 150), and are the curves resulting from the ODE dx/dt = a(u(x(t), t)) (in the x, t plane). First, a general case to explain where a(u) comes from, with the equation being:
(0.1)   ∂u/∂t + ∂f(u)/∂x = 0
f(u) is called the flux function. If f(u) is twice differentiable, we can instead rewrite it to
(0.2)   ∂u/∂t + a(u) ∂u/∂x = 0,   with a(u) = f'(u).
Along a characteristic, the solution is constant, shown simply by du(x(t), t)/dt = 0 (final step: look at the original equation):
(0.3)   du(x(t), t)/dt = ∂u/∂t + (dx(t)/dt) ∂u/∂x = ∂u/∂t + a(u(x(t), t)) ∂u/∂x = 0
A constant solution u_C along a characteristic gives, with the definition of a characteristic:
(0.4)   dx/dt = a(u_C) = a_C   ⟶   x(t) = a_C t + C
In our case, a = 2, which is a constant, and the analytic expression becomes simply x(t) = 2t + C.
(Characteristic curves can be used to generate entire solutions; if you know the boundary value and the initial condition and you have a collection of curves that describe how that initial solution propagates, you can solve the entire system.)
(0.6)   ∂u/∂t + ∂f(u)/∂x = ∂u/∂t + A(u) ∂u/∂x = 0
A(u) is the Jacobian of f(u). Yes, the system is hyperbolic, as the eigenvalues of A are real and distinct.
This, after some manipulation and according to a book by SIAM on numerical methods¹, would turn into a system of ODE's that could be expressed as follows, with Λ being the diagonal matrix with the eigenvalues along the diagonal:
(0.7)   dx/dt = Λ
Since the eigenvalues have no time or spatial dependence (in this case), this system is solved in complete analogue to the previous case, (0.4), except you get two families of curves, one for each eigenvalue, with each eigenvalue as the slope instead of a:
x(t) = √2 t + C
x(t) = −√2 t + C
Also, note that this A belongs to a system of PDEs, not a system of ODEs, so the sign of the eigenvalues has no bearing on the stability of the system (at least, I sincerely hope this is the case. Proof by elegance?)
A57. (This particular answer distills a fair bit of mathematical voodoo, which Edsberg doesn't explain properly, contained in the previous couple of questions. It may be a tad bit unclear, at which point the previously mentioned book by SIAM could prove useful. Correctness of answer is questionable.)
That the problem is well posed means that the solution is continuous with respect to the given conditions (direct quote, page 105); however, the book provides an example where the boundary is a Riemann step and talks about how it is a solution in the weak sense. It is not clear exactly how that relates to the stated requirement. Moving on, however:
Recall that λ_1 = √2, λ_2 = −√2. Note in particular the signs of the two eigenvalues, as this means that the two characteristic curves originate from "opposite ends" of the x-interval (go back to the question describing characteristics if that made no sense), and the requirement for a continuous solution thus means we need boundary conditions on both sides for it to be well posed (compare to page 152 in the book, where it's either-or depending on the sign of a; if both eigenvalues had been positive, we could have employed that again). The boundary and initial values propagate along the characteristics.
This probably answers this question, but it's not clear. With u = (u_1, u_2)ᵀ:
¹It was actually a lot more methodical in explaining hyperbolic PDEs and systems of PDEs than
A58. When discretizing in a way that leads to the last inner calculation requiring an outer point (the stencil contains a u_{i+1,k}) that may not exist in the real boundary conditions, you impose an artificial "numerical boundary condition" to make the calculation in the final point possible anyway.
Q59. Use Neumann analysis to verify that the upwind scheme is unstable when applied to u_t = au_x, a > 0.
A59. Points of order: Neumann analysis is a stability analysis you employ when the eigenvalues are all the same, rendering eigenvalue-based stability analysis useless. The upwind scheme is FTBS, Forward Time Backward Space, a name which becomes apparent after we've used forward and backward discretization to generate the following.
Q60. What is meant by artificial diffusion? Why is that sometimes used when solving hyperbolic PDEs?
A60. (Skip to the bottom for the short answer. This initial bit is mostly me not understanding the question and attempting to reason out what's going on.)
Step 1: Recall spurious oscillations: the numerical solution begins to oscillate in a ridiculously unrealistic fashion, yet it is not unstable, as instability requires it to become unbounded, which does not occur with spurious oscillations. One way to attempt to address this is to lower the stepsize significantly (page 80). Using the Peclet number Pe = h/(2ε), where ε is the diffusion coefficient, we have a condition for spurious oscillations, where Pe < 1 means no oscillations.
Step 2: Lab 6 features a hyperbolic PDE. To solve it we employ a backward difference (which we just did because he said so, but it actually fixed the problem above). We do this because, unlike the central difference, this finite difference method "postpones the oscillations" (quote unquote) and reduces accuracy to first order, but on the other hand, it works.
Step 3: Example problem, the advection diffusion equation (page 79) and how it looks discretized with just central discretization on both derivatives:
(0.10)   −ε d²u/dx² + du/dx = 0,   0 ≤ x ≤ 1,   u(0) = 0, u(1) = 1
(0.11)   −ε (u_{i+1} − 2u_i + u_{i−1})/h² + (u_{i+1} − u_{i−1})/(2h) = 0
The spurious oscillations occur, in this case, because of the second boundary condition, as in a neighborhood of x = 1 the solution is going to jump sharply (as it spends most of its time being near zero before reaching the second boundary). This is referred to as a "boundary layer" at x = 1, and we have a "singular perturbation problem".
You address this in your discretization by using the backward difference for the first derivative, central for the second derivative (this is done in the book on pages 79-80; this is intended as a quick reference). Then rewrite the first order derivative approximation according to (using trivial algebra; I tested it and succeeded, ergo it has to be trivial):
(u_i − u_{i−1})/h = (u_{i+1} − u_{i−1})/(2h) − (h/2) (u_{i+1} − 2u_i + u_{i−1})/h².
Short answer: the backward (upwind) difference thus equals the central difference plus an extra term that acts like an added diffusion of size h/2; this numerically added diffusion is the artificial diffusion, and it is used because it suppresses the spurious oscillations when solving hyperbolic and convection-dominated problems, at the price of reducing the accuracy to first order.
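A sketch comparing the two discretizations on the boundary-layer problem above (ε = 0.01 and the grid are my own choices, giving Pe = h/(2ε) > 1): with the central difference for u' the computed solution oscillates near the boundary layer, while the backward (upwind) difference, i.e. central difference plus artificial diffusion, stays monotone.

```python
import numpy as np

# -eps*u'' + u' = 0, u(0) = 0, u(1) = 1, on a grid coarse enough that Pe = h/(2*eps) > 1.
eps, N = 0.01, 20
h = 1.0 / N
n = N - 1                                        # number of inner points

def solve(first_derivative):
    A, b = np.zeros((n, n)), np.zeros(n)
    for i in range(n):
        # -eps * central second difference
        A[i, i] += 2 * eps / h**2
        if i > 0:
            A[i, i - 1] += -eps / h**2
        if i < n - 1:
            A[i, i + 1] += -eps / h**2
        else:
            b[i] += eps / h**2                   # boundary value u(1) = 1 moved to the rhs
        # first derivative term
        if first_derivative == "central":
            if i > 0:
                A[i, i - 1] += -1.0 / (2 * h)
            if i < n - 1:
                A[i, i + 1] += 1.0 / (2 * h)
            else:
                b[i] -= 1.0 / (2 * h)            # boundary contribution of u(1) = 1
        else:                                    # backward (upwind) difference
            A[i, i] += 1.0 / h
            if i > 0:
                A[i, i - 1] += -1.0 / h
    return np.linalg.solve(A, b)

print(solve("central")[-5:])   # oscillates (sign changes) near the boundary layer at x = 1
print(solve("upwind")[-5:])    # monotone, no oscillations
```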