
Lecture notes for

Finite Element Methods


Autumn Term 2013, Uppsala Universitet

Patrick Henning

Date: December 2, 2013

(these notes are based on the previous lecture course by A. Målqvist and the book 'The Finite Element Method: Theory, Implementation, and Practice' by M.G. Larson and F. Bengzon)
Contents

1 Lecture 1 and 2
  1.1 Piecewise polynomial approximations in 1D
  1.2 Continuous Piecewise Linear Polynomials
  1.3 Interpolation
  1.4 Continuous Piecewise Linear Interpolation
  1.5 L2-Projection
  1.6 Computation of an L2-Projection Ph(f)
  1.7 Numerical Integration with Quadrature Rules
  1.8 Error in Quadrature Rules
  1.9 Implementation of the L2-Projection
  1.10 Exercise 4.1 - Cauchy-Schwarz inequality

2 Lecture 3 and 4
  2.1 Weak formulation of the problem
  2.2 The Finite Element Method
  2.3 Derivation of the discrete system
  2.4 Basic a priori error estimate
  2.5 Mathematical modeling and boundary conditions
  2.6 Model problem with coefficient and general Robin BC
  2.7 A posteriori error estimate and adaptivity

3 Lectures 5 and 6
  3.1 Meshes
  3.2 Data structure for mesh
  3.3 Mesh generation
  3.4 Piecewise polynomial spaces
  3.5 Interpolation
  3.6 L2-projection
  3.7 Existence and uniqueness of the projection
  3.8 A priori error estimate
  3.9 Quadrature and numerical integration
  3.10 Implementation details

4 Lecture 7, 8 and 9
  4.1 Weak formulation
  4.2 Finite Element Method
  4.3 The Dirichlet Problem
  4.4 The Neumann Problem
  4.5 Elliptic Problems with a Convection Term
  4.6 Eigenvalue Problem
  4.7 Error analysis and adaptivity

5 Lecture 10
  5.1 Systems of Ordinary Differential Equations
  5.2 Heat equation
  5.3 Wave equation
Chapter 1

Lecture 1 and 2

1.1 Piecewise polynomial approximations in 1D


Summary:

• Introduction of a class of functions that can approximate quite arbitrary functions and that are easy to implement,

• Introduction of two methods of approximation,

• Discussion of the quality of the approximations.

Let I := [x0 , x1 ] ⊂ R for x0 < x1 and let P1 (I) := {v| v(x) = c0 + c1 x; c0 , c1 ∈ R}.
Every linear function v on I is uniquely defined by {c0 , c1 } via v(x) = c0 + c1 x.







[Figure: graph of a linear function v on [x0, x1] with intercept c0 and slope c1 = v'(x).]
Alternatively, the function v can be also characterized by two values {α0 , α1 } via
α0 = v(x0 ) and α1 = v(x1 ).

3
4 CHAPTER 1. LECTURE 1 AND 2


α1 t


α0 t
 



-
x0 x1

Note that typically {c_0, c_1} ≠ {α_0, α_1}; however, they are related by a linear system of equations:

$$c_0 + c_1 x_0 = \alpha_0, \quad c_0 + c_1 x_1 = \alpha_1 \qquad\Longrightarrow\qquad \begin{pmatrix} 1 & x_0 \\ 1 & x_1 \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} = \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix}.$$

The matrix is invertible since the determinant is positive:

$$\det\begin{pmatrix} 1 & x_0 \\ 1 & x_1 \end{pmatrix} = x_1 - x_0 > 0.$$

Consequently, every pair {c_0, c_1} corresponds to a pair {α_0, α_1} and vice versa.

Now, let λ0 , λ1 ∈ P1 (I) such that λ0 (x0 ) = 1, λ0 (x1 ) = 0 and such that λ1 (x0 ) = 0,
λ1 (x1 ) = 1. λ0 and λ1 are uniquely determined. Any v ∈ P1 (I) can be written as
v(x) = α0 λ0 (x) + α1 λ1 (x). What are the equations for λ0 and λ1 ?

[Figure: the nodal basis functions λ0 and λ1 on [x0, x1].]
We obtain

$$\lambda_0(x) = \frac{x_1 - x}{x_1 - x_0} \quad\text{and}\quad \lambda_1(x) = \frac{x - x_0}{x_1 - x_0}.$$
The set {λ0 , λ1 } spans P1 (I) (i.e. it is a basis).

1.2 Continuous Piecewise Linear Polynomials


Let a = x_0 < x_1 < ... < x_N = b, where {x_i}_{i=0}^{N} are nodes. Let furthermore I_i := [x_{i-1}, x_i], i = 1, 2, ..., N, and let h_i := x_i − x_{i-1} denote the length of I_i. With I := [a, b] we let V_h := {v ∈ C^0(I) | v|_{I_i} ∈ P_1(I_i)}. Here, C^0(I) denotes the space of continuous functions on I.

A function v ∈ V_h is uniquely defined by its nodal values {v(x_i)}_{i=0}^{N}.

[Figure: a continuous piecewise linear function v on the mesh a = x_0 < x_1 < ... < x_N = b.]

Let {φ_j}_{j=0}^{N} ⊂ V_h be functions such that for i, j = 0, ..., N:

$$\varphi_j(x_i) = \begin{cases} 1 & i = j, \\ 0 & i \neq j. \end{cases}$$

[Figure: the 'hat function' φ_j, equal to 1 at the node x_j and 0 at all other nodes.]

Now {φ_j}_{j=0}^{N} forms a basis of V_h, i.e. any v ∈ V_h can be written as

$$v(x) = \sum_{i=0}^{N} \alpha_i\, \varphi_i(x), \qquad\text{where } \alpha_i = v(x_i).$$

We have

$$\varphi_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{h_i}, & x \in I_i, \\[4pt] \dfrac{x_{i+1} - x}{h_{i+1}}, & x \in I_{i+1}, \\[4pt] 0, & \text{otherwise}. \end{cases}$$
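The piecewise formula above translates directly into code. A minimal sketch in Python/NumPy (the function name `hat` and the vectorized interface are our own choices, not from the notes):

```python
import numpy as np

def hat(x, nodes, i):
    """Evaluate the hat function phi_i at the points x on the grid `nodes`."""
    x = np.asarray(x, dtype=float)
    phi = np.zeros_like(x)
    if i > 0:  # rising part on I_i = [x_{i-1}, x_i]
        h = nodes[i] - nodes[i - 1]
        mask = (x >= nodes[i - 1]) & (x <= nodes[i])
        phi[mask] = (x[mask] - nodes[i - 1]) / h
    if i < len(nodes) - 1:  # falling part on I_{i+1} = [x_i, x_{i+1}]
        h = nodes[i + 1] - nodes[i]
        mask = (x > nodes[i]) & (x <= nodes[i + 1])
        phi[mask] = (nodes[i + 1] - x[mask]) / h
    return phi
```

By construction φ_i(x_j) = δ_ij, and at every point of [a, b] the hat functions sum to 1.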

1.3 Interpolation
Let f : I = [x0 , x1 ] → R be a continuous function. We define the linear interpolant
Π(f ) ∈ P1 (I) by Π(f ) := f (x0 )φ0 + f (x1 )φ1 .
[Figure: a function f and its linear interpolant Π(f) on [x0, x1].]
We would like to study the error f − Π(f) in the L^2-norm. Let therefore

$$\|w\|_{L^2(I)} := \left(\int_I w(x)^2\,dx\right)^{1/2} \qquad(\text{the } L^2\text{-norm}).$$

Why is this a norm? Properties that characterize a norm:

(i) kwk = 0 ⇔ w = 0

(ii) kλwk = |λ|kwk for all w ∈ Vh and λ ∈ R

(iii) kv + wk ≤ kvk + kwk for all v, w ∈ Vh .

They are fulfilled by the L^2-norm, since:

(i) if w = 0, we obviously have ||w||_{L^2(I)} = 0; conversely, if ||w||^2_{L^2(I)} = ∫_I w(x)^2 dx = 0, then w = 0, because w^2 ≥ 0.

(ii) ||λw||_{L^2(I)} = (∫_I (λ w(x))^2 dx)^{1/2} = (λ^2)^{1/2} (∫_I w(x)^2 dx)^{1/2} = |λ| ||w||_{L^2(I)}.

(iii) To verify (iii), we require the Cauchy-Schwarz inequality (respectively Hölder's inequality), which implies

$$\int_I v(x) w(x)\,dx \le \|v\|_{L^2(I)} \|w\|_{L^2(I)}. \tag{1.3.1}$$

Using it we obtain:

$$\|v + w\|_{L^2(I)}^2 = \int_I (v(x) + w(x))^2\,dx = \int_I v(x)^2 + 2 v(x) w(x) + w(x)^2\,dx \le \|v\|_{L^2(I)}^2 + 2\|v\|_{L^2(I)}\|w\|_{L^2(I)} + \|w\|_{L^2(I)}^2 = \big(\|v\|_{L^2(I)} + \|w\|_{L^2(I)}\big)^2.$$

Proposition 1.3.1. Let f ∈ C^2(I) (twice continuously differentiable); then it holds

$$\|f - \Pi(f)\|_{L^2(I)} \le h^2 \|f''\|_{L^2(I)},$$

$$\|(f - \Pi(f))'\|_{L^2(I)} \le h \|f''\|_{L^2(I)},$$

where h := x_1 − x_0.

Proof. Let e := f − Π(f). For arbitrary y ∈ I we have

$$e(y) = e(x_0) + \int_{x_0}^{y} e'(x)\,dx.$$

Since e(x_0) = 0 by the definition of Π(f), we get with (1.3.1)

$$e(y) = \int_{x_0}^{y} e'(x)\,dx = \int_{x_0}^{y} 1 \cdot e'(x)\,dx \le \Big(\int_{x_0}^{y} 1^2\,dx\Big)^{1/2} \Big(\int_{x_0}^{y} (e'(x))^2\,dx\Big)^{1/2} \le h^{1/2}\,\|e'\|_{L^2(I)}.$$

Therefore we get e(y)^2 ≤ h ||e'||^2_{L^2(I)}, which implies:

$$\|e\|_{L^2(I)}^2 = \int_I e(y)^2\,dy \le h\,\|e'\|_{L^2(I)}^2 \int_I 1\,dy = h^2\,\|e'\|_{L^2(I)}^2. \tag{1.3.2}$$

It remains to estimate ||e'||_{L^2(I)}. The mean value theorem (applied to e) gives us the existence of a ξ ∈ I such that

$$e'(\xi) = \frac{e(x_1) - e(x_0)}{x_1 - x_0} = 0 \qquad(\text{because } e(x_0) = e(x_1) = 0). \tag{1.3.3}$$
Now let y ∈ I be arbitrary; then we get e'(y) = e'(ξ) + ∫_ξ^y e''(x) dx = ∫_ξ^y e''(x) dx, which implies

$$e'(y) = \int_{\xi}^{y} e''(x)\,dx \le \int_I |e''(x)|\,dx \le h^{1/2} \Big(\int_I |e''(x)|^2\,dx\Big)^{1/2} = h^{1/2} \Big(\int_I |f''(x)|^2\,dx\Big)^{1/2},$$

where we used Π(f)'' = 0. We get

$$\int_I |e'(y)|^2\,dy \le h \int_I \int_I |f''(x)|^2\,dx\,dy = h^2 \int_I |f''(x)|^2\,dx.$$

Combining the last estimate with (1.3.2), we get

$$\|e\|_{L^2(I)} \le h\,\|e'\|_{L^2(I)} \le h^2\,\|f''\|_{L^2(I)}.$$

1.4 Continuous Piecewise Linear Interpolation

Now let Π(f) := Σ_{i=0}^{N} f(x_i) φ_i.
Proposition 1.4.1. Let f ∈ C^0(I) and f ∈ C^2(I_i) for all 1 ≤ i ≤ N; then it holds

$$\|f - \Pi(f)\|_{L^2(I)}^2 \le \sum_{i=1}^{N} h_i^4\, \|f''\|_{L^2(I_i)}^2,$$

$$\|(f - \Pi(f))'\|_{L^2(I)}^2 \le \sum_{i=1}^{N} h_i^2\, \|f''\|_{L^2(I_i)}^2,$$

where h_i := x_i − x_{i-1}.
Proof. We can use Proposition 1.3.1 and apply it to each of the intervals I_i to obtain

$$\|f - \Pi(f)\|_{L^2(I)}^2 = \sum_{i=1}^{N}\|f - \Pi(f)\|_{L^2(I_i)}^2 \le \sum_{i=1}^{N} h_i^4\, \|f''\|_{L^2(I_i)}^2$$

and simultaneously

$$\|(f - \Pi(f))'\|_{L^2(I)}^2 = \sum_{i=1}^{N}\|(f - \Pi(f))'\|_{L^2(I_i)}^2 \le \sum_{i=1}^{N} h_i^2\, \|f''\|_{L^2(I_i)}^2.$$

We make three observations:

1. Π(f) → f as max_{1≤i≤N} h_i → 0.

2. Π(f) is a bad approximation if f'' is big.

3. Non-uniform refinement of the h_i can be used to reduce terms where ||f''||_{L^2(I_i)} is big.

1.5 L2-Projection

Let I = [a, b]. Notation: in the following we just write ∫_I v instead of ∫_I v(x) dx if the variable of integration is clear from the context. We consider the space L^2(I) := {v | ∫_I v(x)^2 dx < ∞} with the scalar product (v, w) := ∫_I v · w. Recall the properties of a scalar product:

(i) (v, w) = (w, v) for all v, w ∈ L2 (I) (symmetry),

(ii) (αv+βw, z) = α(v, z)+β(w, z) for all v, w, z ∈ L2 (I) and α, β ∈ R,

(iii) ∀v ∈ L2 (I): (v, v) ≥ 0 and ’(v, v) = 0 ⇔ v=0’.

Each of the properties is obviously fulfilled.

We have already defined the corresponding norm

$$\|v\|_{L^2(I)} = \Big(\int_I v^2\Big)^{1/2} = (v, v)^{1/2}.$$

The normed vector space L^2(I) with the above scalar product (inner product) is a Hilbert space (i.e. every Cauchy sequence converges). The 'L' comes from Lebesgue, who introduced the so-called 'Lebesgue integral' that we use nowadays.

We say two L2 -functions v and w are orthogonal if (v, w) = 0.

The Cauchy-Schwarz inequality holds for all v, w ∈ L2 (I):

(v, w) ≤ kvkL2 (I) kwkL2 (I) (proved later).

The triangle inequality holds for all v, w ∈ L2 (I):

kv + wkL2 (I) ≤ kvkL2 (I) + kwkL2 (I) .



Proof. Using Cauchy-Schwarz, we get:

$$\|v + w\|_{L^2(I)}^2 = (v + w, v + w) = (v, v + w) + (w, v + w) \le \|v\|_{L^2(I)}\|v + w\|_{L^2(I)} + \|w\|_{L^2(I)}\|v + w\|_{L^2(I)}.$$

Dividing by ||v + w||_{L^2(I)} yields the result. (If ||v + w||_{L^2(I)} = 0, the result is trivial.)

Let f ∈ L^2(I), where I is an interval in R. We define the L^2-projection

$$P_h : L^2(I) \to V_h = \{v \in C^0(I)\,|\, v|_{I_i} \in P_1(I_i)\}$$

by: P_h(f) ∈ V_h fulfills

$$\int_I (f - P_h(f)) \cdot v_h = 0 \qquad\text{for all } v_h \in V_h.$$

Note that, for arbitrary v_h ∈ V_h,

$$\|f - P_h(f)\|_{L^2(I)}^2 = (f - P_h(f), f - P_h(f)) = (f - P_h(f), f - v_h) + \underbrace{(f - P_h(f), v_h - P_h(f))}_{=0} \le \|f - P_h(f)\|_{L^2(I)}\, \|f - v_h\|_{L^2(I)}$$

and therefore

$$\|f - P_h(f)\|_{L^2(I)} \le \|f - v_h\|_{L^2(I)} \qquad\text{for all } v_h \in V_h.$$

P_h(f) is the best approximation of f in V_h with respect to the L^2-norm.

1.6 Computation of an L2-Projection Ph(f)

Finding P_h(f) ∈ V_h with (f − P_h(f), v_h) = 0 for all v_h ∈ V_h is equivalent to finding P_h(f) ∈ V_h with

$$(f - P_h(f), \varphi_j) = 0 \qquad\text{for all } j = 0, \dots, N,$$

where V_h = span({φ_j}_{j=0}^{N}) and the φ_j are the hat functions defined in Section 1.2 (which form a basis of V_h).

Since P_h(f) ∈ V_h we can write it as P_h(f) = Σ_{j=0}^{N} ξ_j φ_j with appropriate ξ_j ∈ R. Therefore

'find P_h(f) ∈ V_h : (P_h(f), v_h) = (f, v_h) for all v_h ∈ V_h'

$$\iff\quad\text{'find } \xi_0, \dots, \xi_N \in \mathbb{R} : \sum_{j=0}^{N} \xi_j\, (\varphi_j, \varphi_i) = (f, \varphi_i) \text{ for all } 0 \le i \le N\text{'}.$$

The real numbers ξ = (ξ_0, ..., ξ_N) are the coefficients describing P_h(f) in terms of the hat basis functions: P_h(f) = Σ_{j=0}^{N} ξ_j φ_j. The problem can be expressed as a linear system of equations. We define the matrix M ∈ R^{(N+1)×(N+1)} and the vector b ∈ R^{N+1} by

$$M_{ij} := (\varphi_j, \varphi_i) = \int_I \varphi_i \cdot \varphi_j \quad\text{(computable)}, \qquad b_i := (f, \varphi_i) = \int_I f \cdot \varphi_i \quad\text{(computable)}.$$

We get b_i = Σ_{j=0}^{N} M_{ij} ξ_j for i = 0, ..., N, or equivalently b = Mξ. This leads to the following algorithm for the computation of P_h(f):

Algorithm
1. Initiate a mesh with N elements x_0 < x_1 < ... < x_N.
2. Compute M and b.
3. Solve Mξ = b.
4. Let P_h(f) = Σ_{j=0}^{N} ξ_j φ_j.

1.7 Numerical Integration with Quadrature Rules

We want to approximate integrals numerically. We denote

$$J(f) := \int_I f(x)\,dx.$$

A quadrature rule is a formula to approximate an integral. The formula takes the values f(x_j), multiplies them with so-called weights ω_j (which represent small volume units) and sums everything up, i.e.:

$$J(f) \approx Q_I(f) := \sum_{j=1}^{n} \omega_j\, f(x_j), \qquad x_j \in [a, b].$$

Examples:

1. Midpoint rule. Let n = 1, x_1 = (a+b)/2, ω_1 = b − a:

$$J(f) \approx Q_I(f) := f\!\left(\frac{a+b}{2}\right)(b-a).$$

This rule is exact for linear functions f.

[Figure: midpoint rule, approximating J(f) by the rectangle of height f((a+b)/2).]

2. Trapezoidal rule. Let n = 2, x_1 = a, x_2 = b, ω_1 = ω_2 = (b−a)/2:

$$J(f) \approx Q_I(f) := \frac{f(a) + f(b)}{2}\,(b-a).$$

This rule is exact for linear functions f.

[Figure: trapezoidal rule, approximating J(f) by the trapezoid through f(a) and f(b).]
3. Simpson's rule. Let g(x) := γx^2 + βx + α and I = [0, h]. We wish to determine α, β and γ such that g(0) = f(0), g(h/2) = f(h/2) and g(h) = f(h), i.e. g is a quadratic interpolant of f. In particular: if f is a quadratic polynomial, then f = g. Now, find α, β, γ ∈ R such that

$$g(0) = \alpha = f(0), \qquad g\!\left(\tfrac{h}{2}\right) = \gamma\frac{h^2}{4} + \beta\frac{h}{2} + \alpha = f\!\left(\tfrac{h}{2}\right), \qquad g(h) = \gamma h^2 + \beta h + \alpha = f(h).$$

We obtain the following linear system of equations for α, β and γ:

$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & \frac{h}{2} & \frac{h^2}{4} \\ 1 & h & h^2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = \begin{pmatrix} f(0) \\ f(\frac{h}{2}) \\ f(h) \end{pmatrix} \qquad\text{with solution}\qquad \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = \begin{pmatrix} f(0) \\ \frac{2}{h}\big(-\frac{1}{2}f(h) - \frac{3}{2}f(0) + 2f(\frac{h}{2})\big) \\ \frac{2}{h^2}\big(f(h) - 2f(\frac{h}{2}) + f(0)\big) \end{pmatrix}.$$

Computing the integral of g over I by using the above values for α, β and γ, we obtain

$$\int_0^h g(x)\,dx = \frac{h}{6}\Big(f(0) + 4f\!\left(\tfrac{h}{2}\right) + f(h)\Big).$$

This means that the quadrature rule

$$J(f) \approx Q_I(f) := \frac{f(a) + 4f(\frac{a+b}{2}) + f(b)}{6}\,(b-a)$$

is exact for quadratic functions f (because in this case we have f = g).

4. Gauss quadrature rules. So-called Gauss quadrature rules are higher-order quadrature rules; with n points they are exact for all polynomials of degree up to 2n − 1.
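The first three rules above fit in a few lines of code. A minimal sketch (function names are our own):

```python
def midpoint(f, a, b):
    # exact for polynomials of degree <= 1
    return (b - a) * f(0.5 * (a + b))

def trapezoidal(f, a, b):
    # exact for polynomials of degree <= 1
    return 0.5 * (b - a) * (f(a) + f(b))

def simpson(f, a, b):
    # exact for polynomials of degree <= 3
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
```

A quick sanity check: applied to f(x) = x^2 on [0, 1], Simpson's rule reproduces the exact value 1/3, while midpoint and trapezoidal do not.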

1.8 Error in Quadrature Rules

Let Q_I(f) be a quadrature rule which is exact for polynomials of degree ≤ p. Then

$$\Big| \int_I f(x)\,dx - Q_I(f) \Big| \le C\,|I|^{p+2} \sup_{y \in I} \Big| \frac{\partial^{p+1} f}{\partial x^{p+1}}(y) \Big|,$$

where C is a constant independent of I and f. (For example, for the midpoint rule p = 1 and the error is at most |I|^3 max|f''|/24.)

1.9 Implementation of the L2-Projection

We describe the assembly of the mass matrix M with entries

$$M_{ij} = \int_I \varphi_j \cdot \varphi_i, \qquad\text{where}\quad \varphi_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{h_i}, & x \in I_i, \\[4pt] \dfrac{x_{i+1} - x}{h_{i+1}}, & x \in I_{i+1}, \\[4pt] 0, & \text{otherwise}. \end{cases}$$

[Figure: two neighbouring hat functions φ_{i−1} and φ_i.]

We compute the entries. First, observe that M_ij = 0 if |i − j| > 1. For M_ii with i = 1, ..., N − 1 we get

$$M_{ii} = \int_I \varphi_i^2 = \int_{I_i} \Big(\frac{x - x_{i-1}}{h_i}\Big)^2 dx + \int_{I_{i+1}} \Big(\frac{x_{i+1} - x}{h_{i+1}}\Big)^2 dx = \frac{1}{h_i^2}\Big[\frac{(x - x_{i-1})^3}{3}\Big]_{x_{i-1}}^{x_i} - \frac{1}{h_{i+1}^2}\Big[\frac{(x_{i+1} - x)^3}{3}\Big]_{x_i}^{x_{i+1}} = \frac{h_i}{3} + \frac{h_{i+1}}{3}.$$

At the boundary, M_00 = h_1/3 and M_NN = h_N/3.
Since the matrix M is symmetric, it remains to calculate M_{i+1,i}:

$$M_{i+1,i} = \int_I \varphi_i \cdot \varphi_{i+1} = \int_{I_{i+1}} \frac{x_{i+1} - x}{h_{i+1}} \cdot \frac{x - x_i}{h_{i+1}}\,dx = \frac{1}{h_{i+1}^2}\Big[-\frac{(x_{i+1} - x)^2}{2}(x - x_i)\Big]_{x_i}^{x_{i+1}} + \frac{1}{h_{i+1}^2}\int_{x_i}^{x_{i+1}} \frac{(x_{i+1} - x)^2}{2}\,dx = \frac{h_{i+1}}{6}, \qquad i = 0, \dots, N-1.$$

We obtain a tridiagonal matrix:

$$M = \begin{pmatrix} \frac{h_1}{3} & \frac{h_1}{6} & 0 & \cdots & 0 \\ \frac{h_1}{6} & \frac{h_1}{3} + \frac{h_2}{3} & \frac{h_2}{6} & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \frac{h_N}{6} \\ 0 & \cdots & 0 & \frac{h_N}{6} & \frac{h_N}{3} \end{pmatrix}.$$

Observe that M can be written in the following localized structure:

$$M = \begin{pmatrix} \frac{h_1}{3} & \frac{h_1}{6} & & \\ \frac{h_1}{6} & \frac{h_1}{3} & & \\ & & & \\ & & & \end{pmatrix} + \begin{pmatrix} & & & \\ & \frac{h_2}{3} & \frac{h_2}{6} & \\ & \frac{h_2}{6} & \frac{h_2}{3} & \\ & & & \end{pmatrix} + \dots + \begin{pmatrix} & & & \\ & & & \\ & & \frac{h_N}{3} & \frac{h_N}{6} \\ & & \frac{h_N}{6} & \frac{h_N}{3} \end{pmatrix}.$$

We can therefore easily identify the local contribution from an element I_i and denote this contribution by M^{I_i}, i.e.

$$M^{I_i} := \frac{h_i}{6} \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \qquad\text{(local mass matrix)}.$$
Algorithm:

1. Allocate memory for an (N + 1) × (N + 1) matrix.

2. For i = 1, ..., N:

   Compute M^{I_i} = (h_i/6) [ 2 1 ; 1 2 ].

   Add M^{I_i}_{1,1} to M_{i,i},
       M^{I_i}_{1,2} to M_{i,i+1},
       M^{I_i}_{2,1} to M_{i+1,i},
       M^{I_i}_{2,2} to M_{i+1,i+1}.

   end.
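The element loop above can be sketched in a few lines of Python/NumPy. We use 0-based storage, so element I_i adds its local 2×2 block to rows/columns i−1 and i (the notes' Matlab-style 1-based indices shift accordingly); the function name is our own:

```python
import numpy as np

def assemble_mass(nodes):
    """Assemble the (N+1)x(N+1) tridiagonal mass matrix from local 2x2 blocks."""
    N = len(nodes) - 1
    M = np.zeros((N + 1, N + 1))
    for i in range(1, N + 1):          # loop over elements I_i = [x_{i-1}, x_i]
        h = nodes[i] - nodes[i - 1]
        M_loc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # local mass matrix
        M[i - 1:i + 1, i - 1:i + 1] += M_loc
    return M
```

A useful check: since the hat functions sum to 1, the sum of all entries of M equals the length of the interval.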

Assembly of the load vector

We use the trapezoidal rule to assemble an approximation of the exact right-hand side vector for the L^2-projection. An exact entry of the right-hand side is given by

$$b_i := \int_I f \cdot \varphi_i = \int_{x_{i-1}}^{x_i} f \cdot \varphi_i + \int_{x_i}^{x_{i+1}} f \cdot \varphi_i.$$

Using the trapezoidal rule and φ_i(x_{i−1}) = φ_i(x_{i+1}) = 0, we can approximate b_i by

$$b_i \approx \frac{f(x_{i-1})\varphi_i(x_{i-1}) + f(x_i)\varphi_i(x_i)}{2}(x_i - x_{i-1}) + \frac{f(x_i)\varphi_i(x_i) + f(x_{i+1})\varphi_i(x_{i+1})}{2}(x_{i+1} - x_i) = f(x_i)\,\frac{h_i + h_{i+1}}{2}.$$

The approximate right-hand side b̃ is therefore given by:

$$\tilde b := \begin{pmatrix} f(x_0)\,\frac{h_1}{2} \\ f(x_1)\,\frac{h_1 + h_2}{2} \\ \vdots \\ f(x_{N-1})\,\frac{h_{N-1} + h_N}{2} \\ f(x_N)\,\frac{h_N}{2} \end{pmatrix} = \underbrace{\frac{h_1}{2}\begin{pmatrix} f(x_0) \\ f(x_1) \\ 0 \\ \vdots \\ 0 \end{pmatrix}}_{\tilde b^{I_1}} + \underbrace{\frac{h_2}{2}\begin{pmatrix} 0 \\ f(x_1) \\ f(x_2) \\ \vdots \\ 0 \end{pmatrix}}_{\tilde b^{I_2}} + \dots + \underbrace{\frac{h_N}{2}\begin{pmatrix} 0 \\ \vdots \\ 0 \\ f(x_{N-1}) \\ f(x_N) \end{pmatrix}}_{\tilde b^{I_N}}.$$

Algorithm:

1. Allocate memory for an (N + 1) × 1 vector.

2. For i = 1, ..., N:

   Compute b^{I_i} = (h_i/2) (f(x_{i−1}), f(x_i))^T.

   Add b^{I_i}_1 to b̃_{i−1},
       b^{I_i}_2 to b̃_i.

   end.
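Combining the mass-matrix loop with the trapezoidal-rule load vector gives a complete sketch of the L²-projection algorithm in Python/NumPy (the function name is ours; the returned vector holds the nodal values ξ of P_h(f)):

```python
import numpy as np

def l2_project(nodes, f):
    """Approximate L2-projection of f onto V_h: assemble M and the
    trapezoidal-rule load vector b~, then solve M xi = b~."""
    N = len(nodes) - 1
    M = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(1, N + 1):                  # element loop over I_i
        h = nodes[i] - nodes[i - 1]
        M[i - 1:i + 1, i - 1:i + 1] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        b[i - 1:i + 1] += 0.5 * h * np.array([f(nodes[i - 1]), f(nodes[i])])
    return np.linalg.solve(M, b)               # nodal values xi of P_h(f)
```

For a constant f the trapezoidal load vector coincides with the exact one, so the computed projection reproduces f exactly — a convenient sanity check.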

1.10 Exercise 4.1 - Cauchy-Schwarz inequality

We prove the estimate

$$\int_I u \cdot v \le \|u\|_{L^2(I)} \cdot \|v\|_{L^2(I)}$$

for all u, v ∈ L^2(I).

Proof. Assume ||v||_{L^2(I)} ≠ 0 (otherwise the statement is trivial). Let λ ∈ R. We get

$$0 \le \|u - \lambda v\|_{L^2(I)}^2 = \|u\|_{L^2(I)}^2 - 2\lambda \int_I u \cdot v + \lambda^2 \|v\|_{L^2(I)}^2. \tag{1.10.1}$$

Now let

$$\lambda := \frac{\int_I u \cdot v}{\|v\|_{L^2(I)}^2} \qquad\text{(trick!)},$$

which makes (1.10.1) read

$$0 \le \|u\|_{L^2(I)}^2 - 2\,\frac{\big(\int_I u \cdot v\big)^2}{\|v\|_{L^2(I)}^2} + \frac{\big(\int_I u \cdot v\big)^2}{\|v\|_{L^2(I)}^2}.$$

Multiplying with ||v||^2_{L^2(I)} yields

$$0 \le \|u\|_{L^2(I)}^2 \|v\|_{L^2(I)}^2 - \Big(\int_I u \cdot v\Big)^2,$$

which directly implies

$$\int_I u \cdot v \le \|u\|_{L^2(I)} \|v\|_{L^2(I)}.$$
Chapter 2

Lecture 3 and 4
Finite Element Method in 1D
Summary:

• derive the Finite Element Method,

• study the error,

• implement the method.

Model problem: Find u ∈ C^2(0, 1) such that

$$-u''(x) = f(x), \quad x \in I := (0, 1), \qquad u(0) = u(1) = 0. \tag{2.0.1}$$

Even in 1D it might be difficult or impossible to solve equations of type (2.0.1) analytically. We therefore seek a numerical approximation.

2.1 Weak formulation of the problem

We call a function v ∈ C^0(0, 1) weakly differentiable if there exists a function w with ∫_I |w| < ∞ and such that

$$\int_I v \cdot \phi' = -\int_I w \cdot \phi$$

for all φ ∈ C^1(0, 1) with φ(0) = φ(1) = 0. We write v' := w for the weak derivative of v. Let

$$V_0 := \{v \in C^0(0, 1)\,|\, \|v\|_{L^2(I)} < \infty,\ \|v'\|_{L^2(I)} < \infty \text{ and } v(0) = v(1) = 0\},$$

where v' denotes the weak derivative. Multiplying (2.0.1) with a test function v ∈ V_0 and integrating over I yields (using integration by parts):

$$\int_I f \cdot v = -\int_I u'' \cdot v = \int_I u' \cdot v' - u'(1)v(1) + u'(0)v(0) = \int_I u' \cdot v'.$$

The weak form reads: find u ∈ V_0 s.t.

$$\int_I u' \cdot v' = \int_I f \cdot v \qquad\text{for all } v \in V_0. \tag{2.1.1}$$

Comments:
1. If u is a strong solution (i.e. a solution of (2.0.1)), then it is also a weak solution.
2. If u is a weak solution with u ∈ C^2(I), it is also a strong solution.
3. Existence and uniqueness of weak solutions is obtained by the Lax-Milgram theorem.
4. Using the weak formulation, we can consider solutions with lower regularity.
5. FEM gives an approximation of the weak solution.

In the following, we use the notation ||·|| := ||·||_{L^2(I)}.

2.2 The Finite Element Method

Let V_{h,0} := {v ∈ V_h | v(0) = v(1) = 0}, where

$$V_h := \{v \in C^0(I)\,|\, v|_{I_i} \in P_1(I_i)\}.$$

Find u_h ∈ V_{h,0} s.t.

$$\int_I u_h' \cdot v' = \int_I f \cdot v \qquad\text{for all } v \in V_{h,0}. \tag{2.2.1}$$

We call u_h the finite element approximation of u.

2.3 Derivation of the discrete system

Problem (2.2.1) is equivalent to: find u_h ∈ V_{h,0} s.t.

$$\int_I u_h' \cdot \varphi_i' = \int_I f \cdot \varphi_i \qquad\text{for } 1 \le i \le N-1.$$

Note that the index runs only from 1 to N − 1, because of the zero boundary condition! Now, let u_h = Σ_{j=1}^{N−1} ξ_j φ_j. Then

$$\int_I f \cdot \varphi_i = \int_I \Big(\sum_{j=1}^{N-1} \xi_j \varphi_j'\Big) \cdot \varphi_i' = \sum_{j=1}^{N-1} \xi_j \int_I \varphi_j' \cdot \varphi_i' \qquad\text{for } 1 \le i \le N-1.$$

Let A ∈ R^{(N−1)×(N−1)} and b ∈ R^{N−1} be given by the entries

$$A_{i,j} := \int_I \varphi_j' \cdot \varphi_i' \quad\text{for } 1 \le i, j \le N-1, \qquad b_i := \int_I f \cdot \varphi_i \quad\text{for } 1 \le i \le N-1.$$

We have

$$b_i = \sum_{j=1}^{N-1} A_{i,j}\, \xi_j \qquad\text{for } 1 \le i \le N-1.$$

The algebraic problem therefore reads: find ξ ∈ R^{N−1} such that

$$b = A\xi.$$

A is called the stiffness matrix.

Algorithm
1. Initiate a mesh with N elements.
2. Compute A and b.
3. Solve the system Aξ = b.
4. Set u_h = Σ_{j=1}^{N−1} ξ_j φ_j.
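The whole algorithm fits in a few lines of Python/NumPy. A minimal sketch (trapezoidal-rule load vector as in Section 1.9; the homogeneous Dirichlet conditions are imposed by solving only for the interior unknowns; the function name is ours):

```python
import numpy as np

def fem_poisson_1d(nodes, f):
    """P1-FEM for -u'' = f on (0, 1) with u(0) = u(1) = 0.
    Assembles the stiffness matrix element by element and solves A xi = b."""
    N = len(nodes) - 1
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(1, N + 1):                  # element loop over I_i
        h = nodes[i] - nodes[i - 1]
        A[i - 1:i + 1, i - 1:i + 1] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        b[i - 1:i + 1] += 0.5 * h * np.array([f(nodes[i - 1]), f(nodes[i])])
    # enforce u(0) = u(1) = 0 by restricting the system to interior nodes
    xi = np.zeros(N + 1)
    xi[1:N] = np.linalg.solve(A[1:N, 1:N], b[1:N])
    return xi
```

For f ≡ 1 the computed nodal values coincide with the exact solution u(x) = x(1 − x)/2, which makes a convenient sanity check.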

2.4 Basic a priori error estimate

We study the error e := u − u_h.

Theorem 2.4.1. u_h ∈ V_{h,0} satisfies the Galerkin orthogonality:

$$\int_I (u - u_h)' \cdot v_h' = 0 \qquad\text{for all } v_h \in V_{h,0}. \tag{2.4.1}$$

Proof. Since V_{h,0} ⊂ V_0, we obtain from (2.1.1) and (2.2.1) that

$$\int_I u' \cdot v_h' = \int_I f \cdot v_h \quad\text{and}\quad \int_I u_h' \cdot v_h' = \int_I f \cdot v_h \qquad\text{for all } v_h \in V_{h,0}.$$

Subtracting both equations gives the result.

Theorem 2.4.2. It holds

$$\|(u - u_h)'\|_{L^2(I)} \le \|(u - v_h)'\|_{L^2(I)} \qquad\text{for all } v_h \in V_{h,0}.$$

Proof. With u − u_h = u − v_h + v_h − u_h we obtain

$$\|(u - u_h)'\|_{L^2(I)}^2 = ((u - u_h)', (u - u_h)') = ((u - u_h)', (u - v_h)') + \underbrace{((u - u_h)', (v_h - u_h)')}_{=0 \text{ because of } (2.4.1)} \le \|(u - u_h)'\|_{L^2(I)}\, \|(u - v_h)'\|_{L^2(I)}.$$

Dividing by ||(u − u_h)'||_{L^2(I)} finishes the proof. (If ||(u − u_h)'||_{L^2(I)} = 0, the result is trivial.)

Theorem 2.4.3 (A priori error estimate). It holds

$$\|(u - u_h)'\|_{L^2(I)}^2 \le \sum_{i=1}^{N} h_i^2\, \|u''\|_{L^2(I_i)}^2.$$

Note: even though we do not prove it, the solution u of (2.1.1) has a second derivative in the weak sense. Therefore, the above estimate is justified.

Proof. We have by Theorem 2.4.2

$$\|(u - u_h)'\|_{L^2(I)}^2 \le \|(u - \Pi(u))'\|_{L^2(I)}^2 \le \sum_{i=1}^{N} h_i^2\, \|u''\|_{L^2(I_i)}^2,$$

where we used the estimate from Proposition 1.4.1.

The error is expressed in terms of the exact solution. If it is expressed in terms of the computed solution u_h, it is an a posteriori error estimate (this yields a computable error bound). We will return to this.

We note:

1. u_h → u in the ||v'||-norm as max_{1≤i≤N} h_i → 0. If ||(u − u_h)'||_{L^2(I)} = 0, then u − u_h is constant, but since u(0) = u_h(0) we also have u − u_h = 0 and therefore u_h = u.

2. u_h is the best approximation within the space V_{h,0} with respect to the ||v'||-norm.

3. The error e = u − u_h is orthogonal to V_{h,0} in the (v', w') scalar product.

4. The norm ||v'|| is referred to as the energy norm and often has a physical meaning.

2.5 Mathematical modeling and boundary conditions

Stationary heat equation:

[Figure: a rod on (0, L) with heat flux q(x0) entering and q(x1) leaving the segment [x0, x1], and heat source f.]

Let q denote the heat flux along the x-axis. The conservation of energy yields:

$$q(x_0) - q(x_1) + \int_{x_0}^{x_1} f(x)\,dx = 0,$$

and therefore

$$-\int_{x_0}^{x_1} q'(x)\,dx + \int_{x_0}^{x_1} f(x)\,dx = 0.$$

The heat flux is proportional to the negative temperature gradient, q = −kT', where k is the thermal conductivity, i.e. heat flows from hot to cold. This gives us

$$-\int_{x_0}^{x_1} (kT')'(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx.$$

Since the interval [x0, x1] was arbitrary, we get

$$-(kT')'(x) = f(x) \qquad\text{for all } x \in I = (0, L).$$

Boundary conditions

There are three important types of boundary conditions (BC):

1. Dirichlet: T(0) = α and T(L) = β for two real numbers α and β. This BC is also known as a strong or essential BC. The temperature is kept at a constant value at the boundary points (temperature regulator).

2. Neumann: T'(0) = α and T'(L) = β for two real numbers α and β. This BC is also known as a natural BC. If T'(0) = 0 then q(0) = 0, which means that there is no flow over the boundary (no flow in, no flow out).

3. Robin: T'(0) = αT(0) and T'(L) = βT(L) for two real numbers α and β. This says that the flux is proportional to the temperature: the larger the temperature, the larger the flow.

Note that any combination is possible at the two boundary points.



2.6 Model problem with coefficient and general Robin BC

Model problem:

$$-(au')' = f \quad\text{in } I := (0, 1), \qquad a(0)u'(0) = \kappa_0\,(u(0) - g_0), \qquad a(1)u'(1) = \kappa_1\,(u(1) - g_1),$$

where a = a(x) with a(x) ≥ a_0 > 0, f ∈ L^2(I), κ_0, κ_1 ≥ 0. We derive the weak form. Let

$$V := \{v \in C^0(0, 1)\,|\, \|v\|_{L^2(I)} < \infty \text{ and } \|v'\|_{L^2(I)} < \infty\},$$

where v' denotes the weak derivative. Multiplying with v ∈ V and integrating yields:

$$\int_0^1 f v = \int_0^1 -(au')' v = \int_0^1 a u' v' - a(1)u'(1)v(1) + a(0)u'(0)v(0) = \int_0^1 a u' v' - \kappa_1 (u(1) - g_1) v(1) + \kappa_0 (u(0) - g_0) v(0)$$

for all v ∈ V. We gather all u-dependent terms on the left and obtain

$$\int_0^1 a u' v' + \kappa_0 u(0) v(0) - \kappa_1 u(1) v(1) = \int_0^1 f v + \kappa_0 g_0 v(0) - \kappa_1 g_1 v(1)$$

for all v ∈ V.

Implementation:
For simplicity let a = 1. We need to assemble a stiffness matrix A and a load vector b. Let therefore u_h = Σ_{j=0}^{N} ξ_j φ_j and v = φ_i for i = 0, ..., N (observe that the sum is from 0 to N!). We get:

$$A\xi = b$$

with A ∈ R^{(N+1)×(N+1)} and b ∈ R^{N+1} given by the entries

$$A_{i,j} := \int_0^1 \varphi_j' \varphi_i' + \kappa_0\, \varphi_j(0)\varphi_i(0) - \kappa_1\, \varphi_j(1)\varphi_i(1),$$

$$b_i := \int_0^1 f \varphi_i + \kappa_0\, g_0\, \varphi_i(0) - \kappa_1\, g_1\, \varphi_i(1),$$

where 0 ≤ i, j ≤ N. Since

$$\varphi_i(x) = \begin{cases} \dfrac{x - x_{i-1}}{h_i}, & x \in I_i, \\[4pt] \dfrac{x_{i+1} - x}{h_{i+1}}, & x \in I_{i+1}, \\[4pt] 0, & \text{otherwise}, \end{cases}$$

we have

$$\varphi_i'(x) = \begin{cases} \dfrac{1}{h_i}, & x \in I_i, \\[4pt] -\dfrac{1}{h_{i+1}}, & x \in I_{i+1}, \\[4pt] 0, & \text{otherwise}. \end{cases}$$

We note that

$$\int_0^1 \varphi_i' \varphi_i' = \int_{x_{i-1}}^{x_i} (\varphi_i')^2 + \int_{x_i}^{x_{i+1}} (\varphi_i')^2 = \frac{1}{h_i} + \frac{1}{h_{i+1}}$$

for i = 0, ..., N, and

$$\int_0^1 \varphi_i' \varphi_{i+1}' = \int_0^1 \varphi_{i+1}' \varphi_i' = \int_{x_i}^{x_{i+1}} \frac{-1}{h_{i+1}} \cdot \frac{1}{h_{i+1}} = \frac{-1}{h_{i+1}}$$

for i = 0, ..., N − 1. The terms κ_0 φ_j(0)φ_i(0) are only non-zero for i = j = 0, and κ_1 φ_j(1)φ_i(1) is only non-zero for i = j = N. We therefore get:

$$A = \begin{pmatrix} \kappa_0 + \frac{1}{h_1} & -\frac{1}{h_1} & 0 & \cdots & 0 \\ -\frac{1}{h_1} & \frac{1}{h_1} + \frac{1}{h_2} & -\frac{1}{h_2} & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & -\frac{1}{h_N} \\ 0 & \cdots & 0 & -\frac{1}{h_N} & \frac{1}{h_N} - \kappa_1 \end{pmatrix}.$$
− κ1

If we use the trapezoidal rule for b, we get

$$\tilde b = \begin{pmatrix} f(x_0)\,\frac{h_1}{2} + \kappa_0 g_0 \\ f(x_1)\,\frac{h_1 + h_2}{2} \\ \vdots \\ f(x_{N-1})\,\frac{h_{N-1} + h_N}{2} \\ f(x_N)\,\frac{h_N}{2} - \kappa_1 g_1 \end{pmatrix}.$$

We assemble the matrix A in the same way as M. Observe that A can be written in the following localized structure:

$$A = \frac{1}{h_1}\begin{pmatrix} 1 & -1 & & \\ -1 & 1 & & \\ & & & \\ & & & \end{pmatrix} + \dots + \frac{1}{h_N}\begin{pmatrix} & & & \\ & & & \\ & & 1 & -1 \\ & & -1 & 1 \end{pmatrix} + \begin{pmatrix} \kappa_0 & & & \\ & & & \\ & & & \\ & & & -\kappa_1 \end{pmatrix}.$$
Algorithm:

1. Allocate memory for an (N + 1) × (N + 1) matrix A.

2. For i = 1, ..., N:

   Compute A^{I_i} = (1/h_i) [ 1 −1 ; −1 1 ].

   Add A^{I_i}_{1,1} to A_{i,i},
       A^{I_i}_{1,2} to A_{i,i+1},
       A^{I_i}_{2,1} to A_{i+1,i},
       A^{I_i}_{2,2} to A_{i+1,i+1}.

   end.

   Add κ_0 to A_{0,0} and −κ_1 to A_{N,N}.

3. Assemble b as before and solve Aξ = b.
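The full assembly can be sketched in Python/NumPy with a = 1 and the trapezoidal-rule load vector (function name and test case are our own; note the sign convention of these notes, where κ1 enters with a minus sign):

```python
import numpy as np

def robin_fem(nodes, f, kappa0, kappa1, g0, g1):
    """P1-FEM for -u'' = f with the notes' Robin conditions (a = 1):
    u'(0) = kappa0 (u(0) - g0),  u'(1) = kappa1 (u(1) - g1)."""
    N = len(nodes) - 1
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(1, N + 1):                  # element loop over I_i
        h = nodes[i] - nodes[i - 1]
        A[i - 1:i + 1, i - 1:i + 1] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        b[i - 1:i + 1] += 0.5 * h * np.array([f(nodes[i - 1]), f(nodes[i])])
    A[0, 0] += kappa0                          # boundary contributions
    A[N, N] -= kappa1
    b[0] += kappa0 * g0
    b[N] -= kappa1 * g1
    return np.linalg.solve(A, b)
```

As a check: for f ≡ 0, κ0 = 1, κ1 = 0, g0 = 2, the exact solution is the constant u ≡ 2, and since it lies in V_h the FEM reproduces it exactly.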

2.7 A posteriori error estimate and adaptivity

We go back to the original model problem: find u (weakly) such that

$$-u'' = f \quad\text{in } I := (0, 1), \qquad u(0) = u(1) = 0.$$

Proposition 2.7.1. It holds

$$\|(u - u_h)'\|^2 \le \sum_{i=1}^{N} R_i(u_h)^2,$$

where the element residual is given by

$$R_i(u_h) := h_i \cdot \|f + u_h''\|_{L^2(I_i)}.$$

Note that u_h'' = 0 on each element for piecewise linear V_h.
Proof. Let e := u − u_h and observe that (e − Π(e))(x_i) = 0 for every node x_i (we use that subsequently). With Galerkin orthogonality and the estimates from Proposition 1.3.1 we get:

$$\|e'\|^2 = \int_0^1 e' e' = \int_0^1 e'(e - \Pi(e))' = \sum_{i=1}^{N} \int_{x_{i-1}}^{x_i} e'(e - \Pi(e))'$$

$$= \sum_{i=1}^{N} \int_{x_{i-1}}^{x_i} \underbrace{(-e'')}_{=f + u_h''}\,(e - \Pi(e)) + \sum_{i=1}^{N} \underbrace{\big[e'(e - \Pi(e))\big]_{x_{i-1}}^{x_i}}_{=0 \text{ in the nodes}}$$

$$\le \sum_{i=1}^{N} \|f + u_h''\|_{L^2(I_i)}\, \|e - \Pi(e)\|_{L^2(I_i)} \le \sum_{i=1}^{N} \|f + u_h''\|_{L^2(I_i)}\, h_i^2\, \|e''\|_{L^2(I_i)} = \sum_{i=1}^{N} h_i^2\, \|f + u_h''\|_{L^2(I_i)}^2,$$

where in the last step we used −e'' = −u'' + u_h'' = f + u_h'' on each element.

Adaptive mesh refinement

We want to use the bound to increase the number of nodes in the 'right areas'. We start from:

$$\|(u - u_h)'\|^2 \le \sum_{i=1}^{N} h_i^2 \cdot \|f + u_h''\|_{L^2(I_i)}^2.$$

Algorithm:

1. Given a coarse mesh of N nodes.

2. While N 'not too large':

   Compute u_h,
   Compute R_i := h_i · ||f + u_h''||_{L^2(I_i)},
   Select and refine the I_i where R_i is big.

   end.

How to select and refine?

Example: let 0 ≤ α ≤ 1. If R_i > α max_{j=1,...,N} R_j, then split I_i = [x_{i−1}, x_i] into two, i.e.

$$I_i^{\text{new},1} := \Big[x_{i-1}, \frac{x_{i-1} + x_i}{2}\Big] \quad\text{and}\quad I_i^{\text{new},2} := \Big[\frac{x_{i-1} + x_i}{2}, x_i\Big].$$
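One sweep of this marking-and-splitting strategy can be sketched as follows (the indicator values R_i are assumed to be computed beforehand, one per element; the function name is ours):

```python
import numpy as np

def refine(nodes, R, alpha=0.9):
    """One refinement sweep: split every element whose indicator R_i
    exceeds alpha * max_j R_j by inserting its midpoint."""
    R = np.asarray(R, dtype=float)
    new_nodes = [nodes[0]]
    for i in range(1, len(nodes)):
        if R[i - 1] > alpha * R.max():        # element I_i is marked
            new_nodes.append(0.5 * (nodes[i - 1] + nodes[i]))
        new_nodes.append(nodes[i])
    return np.array(new_nodes)
```

Repeating the loop "compute u_h → compute R → refine" gives the adaptive algorithm above.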

[Figure: an original grid and the refined grid after one adaptive refinement step.]
Chapter 3

Lectures 5 and 6
Piecewise polynomial approximation in 2D
Summary:

• construct a mesh,

• construct the space V_h,

• interpolation,

• L2-projection,

• implementation.

3.1 Meshes

Let Ω ⊂ R^2 be bounded with polygonal boundary ∂Ω. A triangulation T_h of Ω is a set of triangles T such that

$$\overline{\Omega} = \bigcup_{T \in \mathcal{T}_h} T$$

and two triangles intersect in either a common edge, a common corner, or not at all. Corners will be referred to as nodes. We let h_T := diam(T) denote the length of the largest edge of T.

3.2 Data structure for mesh

Let T_h have N nodes and M triangles. The data is stored in two matrices. The matrix P ∈ R^{2×N} describes the nodes, i.e. the nodes (x_1, y_1), ..., (x_N, y_N) form the entries:

$$P = \begin{pmatrix} x_1 & x_2 & \cdots & x_N \\ y_1 & y_2 & \cdots & y_N \end{pmatrix},$$

and the matrix K ∈ R^{3×M} describes the triangles, i.e. which nodes (numbered from 1 to N) form a triangle T and how it is oriented:

$$K = \begin{pmatrix} n_1^{\alpha} & n_2^{\alpha} & \cdots & n_M^{\alpha} \\ n_1^{\beta} & n_2^{\beta} & \cdots & n_M^{\beta} \\ n_1^{\gamma} & n_2^{\gamma} & \cdots & n_M^{\gamma} \end{pmatrix}.$$

This means that triangle T_i is formed by the nodes n_i^{α}, n_i^{β} and n_i^{γ} (enumerated in counter-clockwise direction).


[Figure: a triangle T with its nodes enumerated counter-clockwise.]
Example: consider the mesh with nodes N1 = (0,0), N2 = (1,0), N3 = (2,0), N4 = (0,1), N5 = (1,1), N6 = (2,1), N7 = (0,2), N8 = (1,2) and triangles T1, ..., T6.

[Figure: triangulation of an L-shaped domain with 8 nodes and 6 triangles.]

That implies for our example:

$$P = \begin{pmatrix} 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 \end{pmatrix} \quad\text{and}\quad K = \begin{pmatrix} 1 & 2 & 2 & 3 & 4 & 5 \\ 2 & 5 & 3 & 6 & 5 & 8 \\ 4 & 4 & 5 & 5 & 7 & 7 \end{pmatrix}.$$
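In code, the two matrices translate directly into arrays. A small sketch (the helper `tri_area` is our own; it checks the counter-clockwise orientation via the sign of the signed area):

```python
import numpy as np

# Node coordinates P (2 x N) and connectivity K (3 x M) for the example mesh,
# triangles listed counter-clockwise.
P = np.array([[0, 1, 2, 0, 1, 2, 0, 1],
              [0, 0, 0, 1, 1, 1, 2, 2]], dtype=float)
K = np.array([[1, 2, 2, 3, 4, 5],
              [2, 5, 3, 6, 5, 8],
              [4, 4, 5, 5, 7, 7]]) - 1   # shift to 0-based indexing

def tri_area(P, K, t):
    """Signed area of triangle t; positive iff the nodes are counter-clockwise."""
    (x1, y1), (x2, y2), (x3, y3) = P[:, K[:, t]].T
    return 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
```

All six triangles of the example are half unit squares, so each signed area comes out as +0.5.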

3.3 Mesh generation

In 2D, given a set of nodes, the Delaunay algorithm produces a triangulation of good quality (it maximizes the smallest angle of the 'worst' element).

Matlab has a built-in toolbox called 'PDE Tool Box' that includes a mesh generation algorithm.

1. Define the geometry:

$$geom = \begin{pmatrix} 2 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & 1 & 1 & 0 \\ 2 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 2 & 2 \\ 0 & 1 & 1 & 2 & 2 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

Each column describes one boundary edge of the polygon: the first row marks the edge type ('polygon'), rows 2-3 hold the x coordinates of its two endpoints, rows 4-5 the y coordinates, and the last two rows give the subdomain labels on the two sides of each edge: the domain Ω (label 1) lies to the left of each edge and the exterior R^2 \ Ω (label 0) to the right.

[Figure: the polygonal (L-shaped) domain with its numbered boundary edges.]

2. [p, e, t] = initmesh(geom, 'hmax', 0.1), where e denotes the edge matrix.

3. pdemesh(p, e, t)

3.4 Piecewise polynomial spaces

Let T be a triangle with nodes

$$N_1 = (x_1, y_1), \quad N_2 = (x_2, y_2), \quad N_3 = (x_3, y_3).$$

[Figure: a triangle T with corners (x1, y1), (x2, y2), (x3, y3).]
We define

$$P_1(T) := \{v \in C^0(T)\,|\, v(x, y) = c_1 + c_2 x + c_3 y,\ c_1, c_2, c_3 \in \mathbb{R}\}.$$

Now let v_i = v(N_i) for i = 1, 2, 3. Note that v ∈ P_1(T) is determined by {v_i}_{i=1}^{3}. Given the v_i, we compute the c_i from

$$\begin{pmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$

This is solvable due to

$$\det\begin{pmatrix} 1 & x_1 & y_1 \\ 1 & x_2 & y_2 \\ 1 & x_3 & y_3 \end{pmatrix} = 2|T| \neq 0.$$

Let λ_j ∈ P_1(T) be given by the nodal values

$$\lambda_j(N_i) = \begin{cases} 1 & i = j, \\ 0 & i \neq j. \end{cases}$$

This gives us

$$v(x, y) = \alpha_1 \lambda_1(x, y) + \alpha_2 \lambda_2(x, y) + \alpha_3 \lambda_3(x, y),$$

where α_i = v(N_i) for i = 1, 2, 3.

Example: for the reference triangle T with corners (0, 0), (1, 0) and (0, 1), the basis functions are λ_1(x, y) = 1 − x − y, λ_2(x, y) = x and λ_3(x, y) = y.
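The 3×3 linear system above determines all three nodal basis functions at once: solving against the identity matrix yields, column by column, the coefficients of λ_1, λ_2, λ_3. A small sketch (the helper name is ours):

```python
import numpy as np

def p1_basis_coeffs(tri):
    """Coefficients (c1, c2, c3) of the three nodal basis functions
    lambda_j(x, y) = c1 + c2*x + c3*y on a triangle given by its 3 corners.
    Column j of the returned 3x3 matrix solves lambda_j(N_i) = delta_ij."""
    A = np.column_stack([np.ones(3), tri[:, 0], tri[:, 1]])  # rows (1, x_i, y_i)
    return np.linalg.solve(A, np.eye(3))
```

For the reference triangle this recovers exactly λ_1 = 1 − x − y, λ_2 = x, λ_3 = y.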

Let T_h be a triangulation of Ω; then we let

$$V_h := \{v \in C^0(\Omega)\,|\, \forall T \in \mathcal{T}_h : v|_T \in P_1(T)\}.$$

Functions in V_h are piecewise linear and continuous. We know that v ∈ V_h is uniquely determined by {v(N_i) | i = 1, ..., N}. We let

$$\varphi_j(N_i) = \begin{cases} 1 & i = j, \\ 0 & i \neq j, \end{cases}$$

and let {φ_j | 1 ≤ j ≤ N} ⊂ V_h be a basis for V_h ('hat functions'), i.e.:

$$v(x, y) = \sum_{i=1}^{N} \alpha_i\, \varphi_i(x, y), \qquad \alpha_i = v(N_i), \quad i = 1, \dots, N.$$

3.5 Interpolation

Given f ∈ C^0(T) on a single triangle with nodes N_i = (x_i, y_i), i = 1, 2, 3, we let

$$\Pi(f) := \sum_{i=1}^{3} f(N_i)\, \varphi_i;$$

in particular Π(f)(N_i) = f(N_i).

We want to estimate f − Π(f). First we need to measure derivatives in 2D. Let

$$|Df| := \Big( \Big|\frac{\partial f}{\partial x}\Big|^2 + \Big|\frac{\partial f}{\partial y}\Big|^2 \Big)^{1/2},$$

$$|D^2 f| := \Big( \Big|\frac{\partial^2 f}{\partial x^2}\Big|^2 + 2\Big|\frac{\partial^2 f}{\partial x \partial y}\Big|^2 + \Big|\frac{\partial^2 f}{\partial y^2}\Big|^2 \Big)^{1/2},$$

and let ||f||_{L^2(Ω)} = (∫_Ω |f(x)|^2 dx)^{1/2}.
Proposition 3.5.1. For f ∈ C^2(T), it holds:

$$\|f - \Pi(f)\|_{L^2(T)} \le C h_T^2\, \|D^2 f\|_{L^2(T)},$$

$$\|D(f - \Pi(f))\|_{L^2(T)} \le C h_T\, \|D^2 f\|_{L^2(T)},$$

where C is a generic constant independent of h_T and f, but depending on the ratio between the smallest and largest interior angle of the triangle T.

Proof. Not in this course.


Now, we consider the continuous piecewise linear interpolant Π(f) = Σ_{i=1}^N f(N_i) φ_i.

Proposition 3.5.2. For f ∈ C 0 (Ω), with f ∈ C 2 (T ) for all T ∈ Th , it holds


    ‖f − Π(f)‖²_{L²(Ω)} ≤ C Σ_{T∈Th} h_T⁴ ‖D²f‖²_{L²(T)},

    Σ_{T∈Th} ‖D(f − Π(f))‖²_{L²(T)} ≤ C Σ_{T∈Th} h_T² ‖D²f‖²_{L²(T)},

where C is a generic constant independent of h and f , but that depends on the


ratio between smallest and largest interior angle of the triangles of Th .

Proof. An immediate consequence of Proposition 3.5.1.

3.6 L2-projection
Let P_h : L²(Ω) → V_h be the L²-projection, with P_h(f) ∈ V_h given s.t.

    ∫_Ω (f − P_h(f)) v_h = 0   for all v_h ∈ V_h.   (3.6.1)

Linear system:

Problem (3.6.1) is equivalent to

    ∫_Ω (f − P_h(f)) φ_j = 0   for j = 1, …, N,

and P_h(f) can be expressed as P_h(f) = Σ_{i=1}^N ξ_i φ_i, where the coefficients ξ_i (for i = 1, …, N) are determined by

    ∫_Ω f φ_j = ∫_Ω Σ_{i=1}^N ξ_i φ_i φ_j = Σ_{i=1}^N ξ_i ∫_Ω φ_i φ_j,   j = 1, …, N.

Let M ∈ R^{N×N} and b ∈ R^N be given by

    M_{i,j} := ∫_Ω φ_j φ_i   for i, j = 1, …, N,

    b_i := ∫_Ω f φ_i   for i = 1, …, N.
This gives us Σ_{j=1}^N M_{i,j} ξ_j = b_i for i = 1, …, N, which leads to

    M ξ = b.

Algorithm for L2 -projection:


1. generate triangulation and Vh ,
2. compute M and b,
3. solve M ξ = b,
4. set P_h(f) = Σ_{i=1}^N ξ_i φ_i.

3.7 Existence and uniqueness of the projection

Theorem 3.7.1.
For any f ∈ L2 (Ω) the L2 -projection Ph (f ) exists and is unique.

Proof. Uniqueness: assume that P_h(f)₁ and P_h(f)₂ are two solutions with the L²-projection property. We obtain:
    ∫_Ω P_h(f)₁ v_h = ∫_Ω f v_h   for all v_h ∈ V_h,

    ∫_Ω P_h(f)₂ v_h = ∫_Ω f v_h   for all v_h ∈ V_h.

This gives us
    ∫_Ω (P_h(f)₁ − P_h(f)₂) v_h = 0   for all v_h ∈ V_h.

Choosing v_h = P_h(f)₁ − P_h(f)₂ gives us

    ‖P_h(f)₁ − P_h(f)₂‖_{L²(Ω)} = 0

and therefore P_h(f)₁ = P_h(f)₂.

Existence: P_h(f) is given by an N × N linear system M x = b. We just saw that

    M x = 0  ⟹  x = 0,

i.e. the kernel of M is trivial, and therefore the system is solvable for an arbitrary right-hand side b.

3.8 A priori error estimate

Theorem 3.8.1.
Let f ∈ L2 (Ω) and let Ph (f ) be the L2 -projection of f , then

    ‖f − P_h(f)‖ ≤ ‖f − v_h‖   for all v_h ∈ V_h.

Proof. As in 1D, write f − P_h(f) = (f − v_h) + (v_h − P_h(f)) and observe

    ‖f − P_h(f)‖² = ∫_Ω (f − P_h(f)) (f − v_h + v_h − P_h(f))
                  = ∫_Ω (f − P_h(f)) (f − v_h) + ∫_Ω (f − P_h(f)) (v_h − P_h(f))
                  = ∫_Ω (f − P_h(f)) (f − v_h)
                  ≤ ‖f − P_h(f)‖ ‖f − v_h‖   for all v_h ∈ V_h,

where the second integral vanishes by (3.6.1) and the last step is the Cauchy-Schwarz inequality.

Theorem 3.8.2.
For f ∈ C 0 (Ω), with f ∈ C 2 (T ) for all T ∈ Th , it holds
    ‖f − P_h(f)‖²_{L²(Ω)} ≤ C Σ_{T∈Th} h_T⁴ ‖D²f‖²_{L²(T)},

with a generic constant C.

Proof. Use Theorem 3.8.1 and Proposition 3.5.2 to obtain:

    ‖f − P_h(f)‖²_{L²(Ω)} ≤ Σ_{T∈Th} ‖f − Π(f)‖²_{L²(T)} ≤ C Σ_{T∈Th} h_T⁴ ‖D²f‖²_{L²(T)}.

Theorem 3.8.3.
The mass matrix M is symmetric and positive definite.

Proof. The symmetry is obvious from M_{i,j} = M_{j,i}. The positive definiteness, i.e. xᵀ M x > 0 for all x ∈ R^N \ {0}, can be verified by:

    xᵀ M x = Σ_{i,j=1}^N x_i M_{i,j} x_j = Σ_{i,j=1}^N x_i x_j ∫_Ω φ_i φ_j
           = ∫_Ω ( Σ_{i=1}^N x_i φ_i ) ( Σ_{j=1}^N x_j φ_j )
           = ‖ Σ_{i=1}^N x_i φ_i ‖²_{L²(Ω)} ≥ 0,

where xᵀ M x = 0 only if Σ_{i=1}^N x_i φ_i = 0, which is only the case if x_i = 0 for i = 1, …, N.

3.9 Quadrature and numerical integration


In general ∫_T f(x) dx ≈ Σ_j ω_j f(q_j) |T|, where the ω_j denote the weights, the q_j the quadrature points and |T| the area of T.

Examples:

1. Midpoint rule:

       ∫_T f(x) dx ≈ f(x̄) |T|,   with midpoint x̄ = (N₁ + N₂ + N₃)/3.

2. Corner rule:

       ∫_T f(x) dx ≈ Σ_{i=1}^3 f(N_i) |T|/3,   with triangle corners N_i.

3. Gauss quadratures: Gauss quadratures are designed with general points and
weights so that they are exact for polynomials of a given degree.
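The first two rules can be sketched as follows (a hedged Python illustration; both rules are exact for linear integrands, which the check on the reference triangle confirms):

```python
import numpy as np

def midpoint_rule(f, corners, area):
    """Midpoint (centroid) rule: f evaluated at the centroid, weight |T|."""
    xbar = corners.mean(axis=0)
    return f(*xbar) * area

def corner_rule(f, corners, area):
    """Corner rule: average of the three corner values, weight |T|."""
    return sum(f(*c) for c in corners) * area / 3.0

corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
area = 0.5  # |T| for the reference triangle

# Exact value of the integral of f(x, y) = x over T is 1/6.
I_mid = midpoint_rule(lambda x, y: x, corners, area)
I_cor = corner_rule(lambda x, y: x, corners, area)
```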

3.10 Implementation details


Assembly of mass matrix M with
    M_{i,j} = ∫_Ω φ_j φ_i   for i, j = 1, …, N.

Useful formula (without proof): let m, n, p ∈ N and let φT,1 , φT,2 and φT,3 denote
the three basis functions that belong to the three corners of the triangle T . Then
it holds:
    ∫_T φ_{T,1}^m φ_{T,2}^n φ_{T,3}^p = ( m! n! p! / (m + n + p + 2)! ) · 2|T|.   (3.10.1)
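Formula (3.10.1) is easy to evaluate directly; the following snippet (an illustration, not part of the original notes) reproduces the values ∫_T φ₁φ₂ = |T|/12 and ∫_T φ₁² = |T|/6 used below:

```python
from math import factorial

def p1_product_integral(m, n, p, area):
    """Integral over T of phi_{T,1}^m phi_{T,2}^n phi_{T,3}^p, formula (3.10.1)."""
    return factorial(m) * factorial(n) * factorial(p) \
        / factorial(m + n + p + 2) * 2.0 * area

area = 1.0                                     # |T| (example value)
off_diag = p1_product_integral(1, 1, 0, area)  # int_T phi1*phi2 = |T|/12
diag = p1_product_integral(2, 0, 0, area)      # int_T phi1^2    = |T|/6
```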

Example: a mesh of the unit square with the five nodes

    N₁ = (0, 0), N₂ = (3/4, 0), N₃ = (1, 0), N₄ = (1, 1), N₅ = (0, 1)

and the three triangles T₁ (corners N₁, N₄, N₅), T₂ (corners N₁, N₂, N₄) and T₃ (corners N₂, N₃, N₄).

   
    M = ∫_Ω ( φ_i φ_j )_{i,j=1}^5 = Σ_{T∈Th} ∫_T ( φ_i φ_j )_{i,j=1}^5 =: M^{T₁} + M^{T₂} + M^{T₃},

where each M^{T_k} is nonzero only in the rows and columns belonging to the corners of T_k. For example, T₁ has the corners N₁, N₄ and N₅, so

    M^{T₁} = ∫_{T₁} [ φ₁φ₁ 0 0 φ₁φ₄ φ₁φ₅ ; 0 0 0 0 0 ; 0 0 0 0 0 ; φ₄φ₁ 0 0 φ₄φ₄ φ₄φ₅ ; φ₅φ₁ 0 0 φ₅φ₄ φ₅φ₅ ],

and analogously M^{T₂} (corners N₁, N₂, N₄) and M^{T₃} (corners N₂, N₃, N₄).

Using formula (3.10.1), we obtain


    ∫_T φ_i φ_j = (1/12) (1 + δ_{ij}) |T|   for i, j = 1, 2, 3,

where δ_{ij} = 1 for i = j and δ_{ij} = 0 for i ≠ j. This gives us

    M^{T_i} = (|T_i|/12) [ 2 1 1 ; 1 2 1 ; 1 1 2 ].

Algorithm for Mass matrix

• Construct the point matrix p and the connectivity (triangle) matrix t.

• Allocate memory for M (N × N )

• for T ∈ Th :
 
    M^T = (|T|/12) [ 2 1 1 ; 1 2 1 ; 1 1 2 ]

M (T (1 : 3), T (1 : 3)) = M (T (1 : 3), T (1 : 3)) + M T .

end

Remark: ’M (T (1 : 3), T (1 : 3)) = M (T (1 : 3), T (1 : 3)) + M T ’ is Matlab notation.
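The assembly loop can be sketched in Python/numpy (a stand-in for the Matlab pseudocode; the three-triangle example mesh from above is assumed). Since Σ_i φ_i = 1, the entries of M sum to |Ω| = 1, and M is symmetric positive definite (Theorem 3.8.3):

```python
import numpy as np

# Example mesh of the unit square (cf. the figure above):
p = np.array([[0.0, 0.0], [0.75, 0.0], [1.0, 0.0],
              [1.0, 1.0], [0.0, 1.0]])           # node coordinates
t = np.array([[0, 3, 4], [0, 1, 3], [1, 2, 3]])  # triangle corners (0-based)

N = p.shape[0]
M = np.zeros((N, N))
M_local = (np.ones((3, 3)) + np.eye(3)) / 12.0   # entries (1 + delta_ij)/12

for tri in t:
    v = p[tri]
    area = 0.5 * abs(np.linalg.det(np.column_stack([v[1] - v[0], v[2] - v[0]])))
    # Add the local matrix M^T into the rows/columns of T's corner indices.
    M[np.ix_(tri, tri)] += M_local * area
```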


Chapter 4

Lecture 7, 8 and 9
Finite Element Method in 2D
Summary:

• weak formulation,

• finite element method,

• error estimation,

• implementation,

• adaptivity.

4.1 Weak formulation


First, we derive the so-called Green's formula from the Divergence Theorem (Stokes' Theorem).

Let d = 1, 2, 3 and Ω ⊂ R^d be a bounded domain with Lipschitz-continuous boundary ∂Ω and unit outer normal n. The Divergence Theorem says:

    ∫_Ω ∂f(x)/∂x_i dx = ∫_{∂Ω} f(x) n_i dσ(x),   i = 1, …, d.

Let f = u · v for two sufficiently regular (i.e. H 1 (Ω)) functions u and v. Then we
obtain:
    ∫_Ω (∂u/∂x_i) v dx = − ∫_Ω u (∂v/∂x_i) dx + ∫_{∂Ω} u v n_i dσ(x),   i = 1, …, d.


Now let u = w_i be the components of a vector-valued function w. We sum over i = 1, …, d and obtain:

    ∫_Ω (∇·w) v dx = Σ_{i=1}^d ∫_Ω (∂w_i/∂x_i) v dx
                   = Σ_{i=1}^d ( − ∫_Ω w_i (∂v/∂x_i) dx + ∫_{∂Ω} w_i v n_i dσ(x) )
                   = − ∫_Ω w · ∇v dx + ∫_{∂Ω} v w·n dσ(x).
Ω ∂Ω

Now let w = −a∇u for given functions a and u. We obtain:


    − ∫_Ω (∇·(a∇u)) v dx = ∫_Ω a∇u · ∇v dx − ∫_{∂Ω} v (a∇u)·n dσ(x).   (4.1.1)

For the special case a = 1, this formula is called Green’s formula.

General elliptic equation of second order


Find u s.t.
−∇ · (a∇u) + bu = f in Ω, (4.1.2)
a∇u · n = κ(g − u) on ∂Ω,
where a > 0, b ≥ 0, κ ≥ 0, f ∈ L2 (Ω) and g ∈ C 0 (∂Ω).

We seek a weak solution in


V := H¹(Ω) := {v ∈ L²(Ω) | v has a weak derivative and ‖v‖_{L²(Ω)} + ‖∇v‖_{L²(Ω)} < ∞}.
In order to derive the weak formulation we multiply (4.1.2) with v ∈ V , integrate
over Ω and use (4.1.1):
    ∫_Ω f v dx = − ∫_Ω ∇·(a∇u) v dx + ∫_Ω b u v dx
               = ∫_Ω a∇u · ∇v dx + ∫_Ω b u v dx − ∫_{∂Ω} (a∇u·n) v dσ(x)
               = ∫_Ω a∇u · ∇v dx + ∫_Ω b u v dx + ∫_{∂Ω} κ(u − g) v dσ(x).

We obtain the weak form: find u ∈ V s.t.


    ∫_Ω a∇u · ∇v dx + ∫_Ω b u v dx + ∫_{∂Ω} κ u v dσ(x) = ∫_Ω f v dx + ∫_{∂Ω} κ g v dσ(x)   for all v ∈ V.   (4.1.3)

4.2 Finite Element Method


We can formulate the method as in the 1D case by using the weak formulation
(4.1.3).

Definition 4.2.1 (Finite Element Method in 2D).


Find u_h ∈ V_h s.t.

    ∫_Ω a∇u_h · ∇v_h dx + ∫_Ω b u_h v_h dx + ∫_{∂Ω} κ u_h v_h dσ(x)
        = ∫_Ω f v_h dx + ∫_{∂Ω} κ g v_h dσ(x)   for all v_h ∈ V_h.   (4.2.1)

Implementation

Let a = 1 and b = g = 0. Then we have u_h = Σ_{i=1}^N ξ_i φ_i. We pick v_h = φ_j in (4.2.1) and obtain:

    Σ_{i=1}^N ξ_i ( ∫_Ω ∇φ_i · ∇φ_j dx + ∫_{∂Ω} κ φ_i φ_j dσ(x) ) = ∫_Ω f φ_j   for 1 ≤ j ≤ N.

This gives us the system

(A + R) ξ = b, where

A ∈ R^{N×N}, R ∈ R^{N×N} and b ∈ R^N are given by the entries

    A_{i,j} := ∫_Ω ∇φ_j · ∇φ_i dx,   R_{i,j} := ∫_{∂Ω} κ φ_i φ_j dσ(x),

    b_j := ∫_Ω f φ_j,   for 1 ≤ i, j ≤ N.

Assembly of the stiffness matrix

We can again identify the local contributions that come from a particular triangle T:

    A^T_{i,j} := ∫_T ∇φ_i · ∇φ_j,   for i, j = 1, 2, 3.

[Figure: a triangle T with corners (x₁, y₁), (x₂, y₂) and (x₃, y₃).]
We have φi (x, y) = ai + bi x + ci y, for i = 1, 2, 3. Let us denote αi := (ai , bi , ci ).
We know
    φ_i(N_j) = 1 if i = j,  0 if i ≠ j,

which gives us, e.g. for i = 1,

    B α₁ = [ 1 x₁ y₁ ; 1 x₂ y₂ ; 1 x₃ y₃ ] (a₁, b₁, c₁)ᵀ = (1, 0, 0)ᵀ = e₁.
In general we have Bαi = ei for i = 1, 2, 3. Furthermore, we obviously have
 
    ∇φ_i = (b_i, c_i)ᵀ,

which gives

    A^T_{i,j} = (b_i b_j + c_i c_j) |T|   for i, j = 1, 2, 3.

Algorithm
Let N be the number of nodes and M the number of triangles.
1. Allocate memory for a N × N matrix A.
2. For K ∈ T_h:

   Compute ∇φ_i^K = (b_i, c_i)ᵀ for i = 1, 2, 3. Set

       A^K := [ b₁²+c₁²  b₁b₂+c₁c₂  b₁b₃+c₁c₃ ; b₂b₁+c₂c₁  b₂²+c₂²  b₂b₃+c₂c₃ ; b₃b₁+c₃c₁  b₃b₂+c₃c₂  b₃²+c₃² ] |K|.

Add A(T (1 : 3, K), T (1 : 3, K)) = A(T (1 : 3, K), T (1 : 3, K)) + AK .

end.
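A hedged Python/numpy sketch of the stiffness assembly (the same example mesh as in the mass-matrix example; the gradients (b_i, c_i) are obtained by solving B α_i = e_i). Since the φ_i sum to 1, the rows of A sum to zero:

```python
import numpy as np

# Example mesh of the unit square:
p = np.array([[0.0, 0.0], [0.75, 0.0], [1.0, 0.0],
              [1.0, 1.0], [0.0, 1.0]])
t = np.array([[0, 3, 4], [0, 1, 3], [1, 2, 3]])

N = p.shape[0]
A = np.zeros((N, N))

for tri in t:
    v = p[tri]
    B = np.column_stack([np.ones(3), v])     # rows (1, x_i, y_i)
    area = 0.5 * abs(np.linalg.det(B))       # |det B| = 2|T|
    coeffs = np.linalg.solve(B, np.eye(3))   # column i = (a_i, b_i, c_i)
    grads = coeffs[1:, :]                    # gradients (b_i, c_i), shape (2, 3)
    # Local matrix A^T with entries (b_i b_j + c_i c_j) |T|.
    A[np.ix_(tri, tri)] += grads.T @ grads * area
```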

Assembly of boundary matrix

Let Γ_h^out denote the set of boundary edges of the triangulation, i.e.

    Γ_h^out := {E | E = T ∩ ∂Ω for some T ∈ T_h}.

Assume that κ is constant on E. For E ∈ Γ_h^out, we define R^E ∈ R^{2×2} by the entries

    R^E_{i,j} := ∫_E κ φ_i φ_j dσ(x) = κ (1/6) (1 + δ_{i,j}) |E|,   for i, j = 1, 2,

where δi,j is 1 for i = j and 0 else.

In Matlab, the boundary edges are stored in the edge matrix e, with one column per edge: rows 1 and 2 contain the indices n_{α_j} and n_{β_j} of the edge's start and end nodes, and the remaining rows contain edge parameter values, the boundary segment number, and labels of the adjacent subdomains. J denotes the number of boundary edges (boundary segments).

Algorithm

For j = 1, .., J:

Compute |E| given p(:, e(1 : 2, j)).

Evaluate κ in midpoint (if it is not constant).


 
    R(e(1 : 2, j), e(1 : 2, j)) = R(e(1 : 2, j), e(1 : 2, j)) + (κ/6) [ 2 1 ; 1 2 ] |E|.

end.

Assembly of Load Vector

We use a one-point quadrature rule for approximating the integral. We obtain for
T ∈ Th :
    b^T_j = ∫_T f φ_j ≈ f(N_j) |T|/3,   for j = 1, 2, 3.

Algorithm

For i = 1, …, M:

    Compute |T_i|.

    b(t(1 : 3, i)) = b(t(1 : 3, i)) + ( f(N₁ⁱ), f(N₂ⁱ), f(N₃ⁱ) )ᵀ |T_i| / 3.
end.
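A minimal Python sketch of the load vector assembly (the example mesh from before and f ≡ 1 are assumed; then Σ_j b_j ≈ ∫_Ω f = |Ω| = 1):

```python
import numpy as np

p = np.array([[0.0, 0.0], [0.75, 0.0], [1.0, 0.0],
              [1.0, 1.0], [0.0, 1.0]])
t = np.array([[0, 3, 4], [0, 1, 3], [1, 2, 3]])

f = lambda x, y: 1.0   # example right-hand side

b = np.zeros(p.shape[0])
for tri in t:
    v = p[tri]
    area = 0.5 * abs(np.linalg.det(np.column_stack([v[1] - v[0], v[2] - v[0]])))
    # b_j^T ~ f(N_j) |T| / 3 for the three corners of T.
    b[tri] += np.array([f(*v[0]), f(*v[1]), f(*v[2])]) * area / 3.0
```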
Given A, R and b, we can solve

(A + R)ξ = b
and write u_h = Σ_{i=1}^N ξ_i φ_i.

4.3 The Dirichlet Problem


Find u s.t.

    −Δu = f   in Ω,   (4.3.1)
u=g on ∂Ω.

We seek the (weak) solution in:

Vg := {v ∈ V | v|∂Ω = g}.

Multiplying (4.3.1) with a test function v ∈ V₀ and integrating gives us:

    ∫_Ω f v = − ∫_Ω Δu v = ∫_Ω ∇u · ∇v − ∫_{∂Ω} (∇u·n) v = ∫_Ω ∇u · ∇v.

So the weak problem reads: find u ∈ Vg s.t.


    ∫_Ω f v = ∫_Ω ∇u · ∇v   for all v ∈ V₀.

Finite Element Method:

Assume that g is piecewise linear on ∂Ω with respect to the triangulation. Then


our FEM approximation is uh ∈ Vh,g := {v ∈ Vh | v|∂Ω = g} with
    ∫_Ω f v_h = ∫_Ω ∇u_h · ∇v_h   for all v_h ∈ V_{h,0}.

Assume that we have N nodes and J boundary nodes, then the matrix form
of the FEM problem reads:
    [ A_{0,0}  A_{0,g} ; A_{g,0}  A_{g,g} ] (ξ₀, ξ_g)ᵀ = (b₀, b_g)ᵀ,

with A_{0,0} ∈ R^{(N−J)×(N−J)}, A_{g,g} ∈ R^{J×J}, A_{0,g} ∈ R^{(N−J)×J} and A_{g,0} ∈ R^{J×(N−J)}. Note that ξ_g ∈ R^J is known (it contains the values of g in the boundary nodes). We can therefore solve the simplified problem: find ξ₀ ∈ R^{N−J} with

    A_{0,0} ξ₀ = b₀ − A_{0,g} ξ_g.

In Matlab let int = 1 : N − J and bnd = N − J + 1 : N.

    b = b(int) − A(int, bnd) ∗ g;
    A = A(int, int);
    U(int) = A \ b;  U(bnd) = g;

(note that b must be restricted before A is overwritten).
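The elimination of the Dirichlet nodes can be sketched in Python (a toy SPD matrix stands in for the assembled stiffness matrix; interior/boundary mirror the Matlab index sets int/bnd):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 6, 2                        # number of nodes / boundary nodes (toy sizes)
X = rng.standard_normal((N, N))
A = X @ X.T + N * np.eye(N)        # artificial SPD "stiffness" matrix
b = rng.standard_normal(N)

interior = np.arange(N - J)        # indices 'int'
boundary = np.arange(N - J, N)     # indices 'bnd'
g = rng.standard_normal(J)         # prescribed boundary values

# Solve A_{0,0} xi_0 = b_0 - A_{0,g} xi_g, then glue in the boundary values.
xi0 = np.linalg.solve(A[np.ix_(interior, interior)],
                      b[interior] - A[np.ix_(interior, boundary)] @ g)
U = np.empty(N)
U[interior] = xi0
U[boundary] = g
```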

4.4 The Neumann Problem


Find u s.t.

    −Δu = f   in Ω,   (4.4.1)
∇u · n = g on ∂Ω.

In order to guarantee solvability, we note that

    ∫_Ω f · 1 + ∫_{∂Ω} g = ∫_Ω (−Δu) · 1 + ∫_{∂Ω} (∇u·n) · 1 dσ(x) = ∫_Ω ∇u · ∇1 = 0.

We therefore need to assume the compatibility condition

    ∫_Ω f · 1 + ∫_{∂Ω} g = 0

to ensure that a solution can exist. Note that if u exists, it is only determined up
to a constant, since u + c is a solution if u is a solution and c ∈ R. We therefore
define the solution space:
    Ṽ := {v ∈ V | ∫_Ω v(x) dx = 0}.

This space guarantees a unique weak solution (with the weak formulation as usual, with test functions in V). Numerically the zero-average constraint can be realized via so-called Lagrange multipliers.
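A hedged sketch of the Lagrange-multiplier approach: one extra unknown λ enforces the zero-average constraint m·ξ = 0, where m_i stands in for ∫_Ω φ_i (a toy singular matrix with the constants in its kernel plays the role of the pure-Neumann stiffness matrix):

```python
import numpy as np

# Toy singular matrix with the constants in its kernel, standing in for the
# pure-Neumann stiffness matrix (here: the Laplacian of a path graph).
N = 4
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A[0, 0] = A[-1, -1] = 1.0

m = np.ones(N) / N                       # stands in for m_i = int_Omega phi_i
b = np.array([1.0, -0.5, 0.25, -0.75])   # compatible data: sum(b) = 0

# Saddle-point system enforcing the zero-average constraint m . xi = 0.
K = np.block([[A, m[:, None]],
              [m[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([b, [0.0]]))
xi, lam_mult = sol[:N], sol[N]
```

Although A alone is singular, the augmented system is uniquely solvable because the constraint removes the constant nullspace.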

4.5 Elliptic Problems with a Convection Term


Find u s.t.

−∇ · (a∇u) + b · ∇u + cu = f in Ω, (4.5.1)
u=0 on ∂Ω.

Weak formulation: find u ∈ V0 := {v ∈ V | v|∂Ω = 0} s.t.


    ∫_Ω a∇u · ∇v + ∫_Ω (b · ∇u) v + ∫_Ω c u v = ∫_Ω f v   for all v ∈ V₀.

The FEM approximation is given by uh ∈ Vh,0 := {vh ∈ Vh | (vh )|∂Ω = 0} solving


    ∫_Ω a∇u_h · ∇v_h + ∫_Ω (b · ∇u_h) v_h + ∫_Ω c u_h v_h = ∫_Ω f v_h   for all v_h ∈ V_{h,0}.
With the ansatz u_h = Σ_{j=1}^N ξ_j φ_j and v_h = φ_i in the weak formulation, we get

    Σ_{j=1}^N ξ_j ( ∫_Ω a∇φ_j · ∇φ_i + ∫_Ω (b · ∇φ_j) φ_i + ∫_Ω c φ_j φ_i ) = ∫_Ω f φ_i   for 1 ≤ i ≤ N.

Let therefore A, B, C ∈ R^{N×N} and F ∈ R^N be given by the entries

    A_{i,j} := ∫_Ω a∇φ_i · ∇φ_j dx,   B_{i,j} := ∫_Ω (b · ∇φ_j) φ_i dx,

    C_{i,j} := ∫_Ω c φ_i φ_j dx,   F_i := ∫_Ω f φ_i,   for 1 ≤ i, j ≤ N.

We solve

(A + B + C)ξ = F.

Note that B is not symmetric, i.e.


    B_{i,j} = ∫_Ω (b · ∇φ_j) φ_i dx ≠ ∫_Ω (b · ∇φ_i) φ_j dx = B_{j,i}.

4.6 Eigenvalue Problem


Find λ ∈ R and u s.t.

    −Δu = λu   in Ω,   (4.6.1)
∇u · n = 0 on ∂Ω.

Weak formulation: we multiply the equation with a test function v ∈ V, integrate over Ω and use Green's formula (the boundary term vanishes since ∇u·n = 0) to obtain:

    ∫_Ω ∇u · ∇v = λ ∫_Ω u v   for all v ∈ V.

The FEM approximation is given by u_h ∈ V_h and Λ ∈ R s.t.

    ∫_Ω ∇u_h · ∇v_h = Λ ∫_Ω u_h v_h   for all v_h ∈ V_h.

This leads to an algebraic system of the structure

Aξ = ΛM ξ,

i.e. an algebraic eigenvalue problem. In Matlab eig or eigs can be used.
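A hedged numpy sketch of the generalized eigenvalue problem Aξ = ΛMξ: with the Cholesky factorization M = LLᵀ it is equivalent to the standard symmetric problem L⁻¹AL⁻ᵀ y = Λ y with y = Lᵀξ (toy matrices used below; a singular Neumann-type A has the constant vector as eigenvector with Λ = 0):

```python
import numpy as np

# Toy stand-ins: a singular Neumann-type "stiffness" matrix A (constants in
# the kernel) and an SPD "mass" matrix M.
A = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
M = (np.ones((3, 3)) + np.eye(3)) / 12.0

# Reduce A xi = Lambda M xi to a standard symmetric eigenproblem.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
S = Linv @ A @ Linv.T
Lam, Y = np.linalg.eigh(S)    # eigenvalues in ascending order
Xi = Linv.T @ Y               # back-transformed eigenvectors of (A, M)
```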



4.7 Error analysis and adaptivity


Model problem: find u s.t.

    −Δu = f   in Ω,
u=0 on ∂Ω.

Weak solution: find u ∈ V0 s.t.


    ∫_Ω ∇u · ∇v = ∫_Ω f v   for all v ∈ V₀.

FEM approximation: find uh ∈ Vh,0 with


    ∫_Ω ∇u_h · ∇v_h = ∫_Ω f v_h   for all v_h ∈ V_{h,0}.

Linear system of equations

We choose v_h = φ_j, with j = 1, …, N, where span({φ_i}_{i=1}^N) = V_{h,0}. Now, find u_h = Σ_{i=1}^N ξ_i φ_i s.t.:

    Σ_{i=1}^N ξ_i ∫_Ω ∇φ_i · ∇φ_j dx = ∫_Ω f φ_j   for 1 ≤ j ≤ N.

This gives us the system

Aξ = b,

where A ∈ R^{N×N} and b ∈ R^N are given by the entries

    A_{i,j} := ∫_Ω ∇φ_i · ∇φ_j dx,   b_j := ∫_Ω f φ_j,   for 1 ≤ i, j ≤ N.

Theorem 4.7.1.
The stiffness matrix A is symmetric and positive definite.

Proof. The symmetry of A is trivial. Furthermore,

    xᵀ A x = Σ_{i,j=1}^N x_i A_{i,j} x_j = Σ_{i,j=1}^N x_i x_j ∫_Ω ∇φ_i · ∇φ_j
           = ∫_Ω ( Σ_{i=1}^N x_i ∇φ_i ) · ( Σ_{j=1}^N x_j ∇φ_j )
           = ∫_Ω |∇( Σ_{i=1}^N x_i φ_i )|² ≥ 0,

where equality holds only for x = 0: the gradient vanishes only for constant functions, and 0 is the only constant in V_{h,0}.


A priori error bound

Theorem 4.7.2.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approx-
imation. It holds:
    ∫_Ω ∇(u − u_h) · ∇v_h = 0   for all v_h ∈ V_{h,0}.   (4.7.1)

Proof. By definition of weak solution and FEM approximation we get


    ∫_Ω ∇u · ∇v_h = ∫_Ω f v_h   for all v_h ∈ V_{h,0} ⊂ V₀,

    ∫_Ω ∇u_h · ∇v_h = ∫_Ω f v_h   for all v_h ∈ V_{h,0}.

Subtracting both equations gives the result.


Now let |||v|||² := ∫_Ω ∇v · ∇v be the squared energy norm on V₀. Note that 0 is the only constant in V₀.

Theorem 4.7.3.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approx-
imation. It holds:

|||u − uh ||| ≤ |||u − vh ||| for all vh ∈ Vh,0 .



Proof. We get for arbitrary v_h ∈ V_{h,0}:

    |||u − u_h|||² = ∫_Ω ∇(u − u_h) · ∇(u − u_h)
                  = ∫_Ω ∇(u − u_h) · ∇(u − v_h)      (by (4.7.1))
                  ≤ |||u − u_h||| |||u − v_h|||      (Cauchy-Schwarz).

Theorem 4.7.4.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approx-
imation. If u ∈ C 2 (Ω) it holds:
    |||u − u_h|||² ≤ C Σ_{T∈Th} h_T² ‖D²u‖²_{L²(T)},

where C is independent of hT and u.

Proof. Let v_h = Π(u) ∈ V_{h,0} in Theorem 4.7.3. By the interpolation estimates for Π we obtain

    |||u − u_h|||² ≤ |||u − Π(u)|||² = Σ_{T∈Th} ‖D(u − Π(u))‖²_{L²(T)} ≤ C Σ_{T∈Th} h_T² ‖D²u‖²_{L²(T)}.

Note that u_h → u for h → 0, with |||u − u_h||| ∼ h.

Adaptive Finite Element Methods

Preliminary results:

There exists an interpolation operator Π (Clément-type interpolation) mapping to V_{h,0} that fulfills the interpolation bound

    ‖v − Π(v)‖_{L²(T)} ≤ C h_T ‖Dv‖_{L²(ω_T)},

where ω_T := ∪{K ∈ T_h | T ∩ K ≠ ∅} and C is a constant that is independent of h_T and v.

Trace inequality:

    ‖v − Π(v)‖_{L²(∂T)} ≤ C h_T^{1/2} ‖Dv‖_{L²(ω_T)},

where C is again a generic constant.

Jump of functions: let v_h ∈ V_h be piecewise linear, then

    [n · ∇v_h]_{∂T₁∩∂T₂} := n_{T₁} · (∇v_h)|_{T₁} + n_{T₂} · (∇v_h)|_{T₂}.

Observe that n_{T₁} = −n_{T₂} on ∂T₁ ∩ ∂T₂.

[Figure: two neighbouring triangles T₁ and T₂ with opposite unit normals n_{T₁} and n_{T₂} on their shared edge, across which (∇v_h)|_{T₁} and (∇v_h)|_{T₂} jump.]

Theorem 4.7.5 (A posteriori error estimate).


It holds

    |||u − u_h|||² ≤ C Σ_{T∈Th} R_T(u_h)²,

where C is a (computable) constant that is independent of h_T and

    R_T(u_h)² := h_T² ‖f + Δu_h‖²_{L²(T)} + (h_T/4) ‖[n · ∇u_h]‖²_{L²(∂T∖∂Ω)}.

Proof. Let e := u − u_h. Using Galerkin orthogonality (4.7.1) with v_h = Π(e) we get:

    |||e|||² = ∫_Ω ∇e · ∇e = ∫_Ω ∇e · ∇(e − Π(e)) = Σ_{T∈Th} ∫_T ∇e · ∇(e − Π(e))
             = − Σ_{T∈Th} ∫_T Δe (e − Π(e)) + Σ_{T∈Th} ∫_{∂T∖∂Ω} (n · ∇e)(e − Π(e)) dσ(x)
             = Σ_{T∈Th} ∫_T (f + Δu_h)(e − Π(e)) + Σ_{T∈Th} ∫_{∂T∖∂Ω} (n · ∇e)(e − Π(e)) dσ(x),

using (4.1.1) elementwise and −Δe = −Δu + Δu_h = f + Δu_h on each T. Note that

    Σ_{T∈Th} ∫_{∂T∖∂Ω} (n · ∇u)(e − Π(e)) dσ(x) = 0,

since [n · ∇u] = 0 because of u ∈ C². We conclude

    |||e|||² = Σ_{T∈Th} ∫_T (f + Δu_h)(e − Π(e)) − Σ_{T∈Th} ∫_{∂T∖∂Ω} (n · ∇u_h)(e − Π(e)) dσ(x) =: I + II,

where II carries the minus sign.

Let Γ_h denote the set of interior edges, i.e.

    Γ_h := {E = T ∩ K | T, K ∈ T_h and E contains more than one point}.

We start with estimating II:

    II = − Σ_{T∈Th} ∫_{∂T∖∂Ω} (n · ∇u_h)(e − Π(e)) dσ(x)
       = − Σ_{E∈Γ_h} ∫_E [n · ∇u_h](e − Π(e)) dσ(x)
       = − (1/2) Σ_{T∈Th} ∫_{∂T∖∂Ω} [n · ∇u_h](e − Π(e)) dσ(x)
       ≤ (1/2) Σ_{T∈Th} ‖[n · ∇u_h]‖_{L²(∂T∖∂Ω)} ‖e − Π(e)‖_{L²(∂T∖∂Ω)}
       ≤ C Σ_{T∈Th} (1/2) ‖[n · ∇u_h]‖_{L²(∂T∖∂Ω)} h_T^{1/2} ‖∇e‖_{L²(ω_T)}
       ≤ C ( Σ_{T∈Th} (h_T/4) ‖[n · ∇u_h]‖²_{L²(∂T∖∂Ω)} )^{1/2} |||e|||,

where the last step is the Cauchy-Schwarz inequality for sums (the finite overlap of the patches ω_T is absorbed in C).

Next, we deal with I:

    I = Σ_{T∈Th} ∫_T (f + Δu_h)(e − Π(e))
      ≤ C Σ_{T∈Th} h_T ‖f + Δu_h‖_{L²(T)} ‖∇e‖_{L²(ω_T)}
      ≤ C ( Σ_{T∈Th} h_T² ‖f + Δu_h‖²_{L²(T)} )^{1/2} |||e|||.

Combining both estimates yields


    |||e|||² ≤ C Σ_{T∈Th} ( h_T² ‖f + Δu_h‖²_{L²(T)} + (h_T/4) ‖[n · ∇u_h]‖²_{L²(∂T∖∂Ω)} ).

Adaptive algorithm

We want to refine the mesh where RT (uh ) is large. Main difficulties are:

• no hanging nodes,

• ’good quality’ triangulations (e.g. avoid very small angles in the triangles).

There are several algorithms including

∗ Rivara refinement (largest edge),

∗ regular refinement.

A combination is used in Matlab.

Alg.:

1. Construct initial mesh Th .

2. Solve finite element problem for uh .

3. Compute local indicators RT (uh )2 .

4. Compute maximum m := maxT ∈Th RT (uh )2 .

5. Mark elements with error over γ · m, where 0 < γ < 1 is a fixed parameter.

6. Refine elements and get new mesh Th .

7. Return to step 2 (stop when N becomes too large or when the estimated error Σ_{T∈Th} R_T(u_h)² is small enough).
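Steps 4 and 5 (the maximum marking strategy) can be sketched as follows (illustrative indicator values; γ is the fixed marking parameter):

```python
import numpy as np

def mark_elements(indicators, gamma=0.5):
    """Mark all elements whose indicator exceeds gamma times the maximum."""
    m = indicators.max()
    return np.flatnonzero(indicators > gamma * m)

# Illustrative values of R_T(u_h)^2 on five elements.
eta2 = np.array([0.01, 0.40, 0.05, 0.80, 0.20])
marked = mark_elements(eta2, gamma=0.5)
```

Only the elements strictly above the threshold γ·m are refined; the rest of the mesh is kept.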

Matlab syntax

Regard problems of the type

−∇ · (a∇u) + cu = f.

Step 3: indicator = pdejmps(p, t, a, c, f, uh, 1, 1, 1), where the last three numbers stand for the weighting constants c₁ and c₂ and the exponent m in h_T^m.

Step 6: [p, e, t] = refinemesh(geom, p, e, t, index). Here tol = γ ∗ max(indicator) and index = find(indicator > tol).
Chapter 5

Lecture 10
Time dependent problems
Summary:

• Numerical methods for ODEs

• Heat equation

– Weak form
– FEM
– Implementation

• Wave equation

– Weak form
– FEM
– Implementation

5.1 Systems of Ordinary Differential Equations


Problem: find ξ(t) ∈ R^N so that

    M ξ̇(t) + A(t) ξ(t) = f(t),   t ∈ (0, T],
    ξ(0) = ξ₀,

where M is constant and only A and f are time-dependent.


Let 0 = t₀ < t₁ < t₂ < … < t_L = T be a discretization, let k_n := t_n − t_{n−1} for n = 1, …, L be the time step sizes and let ξⁿ ≈ ξ(t_n) for n = 1, …, L denote corresponding approximations. We integrate in time to obtain

    ∫_{t_{n−1}}^{t_n} M ξ̇(t) dt + ∫_{t_{n−1}}^{t_n} A(t) ξ(t) dt = ∫_{t_{n−1}}^{t_n} f(t) dt.

This yields
    M (ξ(t_n) − ξ(t_{n−1})) + ∫_{t_{n−1}}^{t_n} A(t) ξ(t) dt = ∫_{t_{n−1}}^{t_n} f(t) dt.

We can approximate this with the Euler scheme:

Backward Euler

    M (ξⁿ − ξⁿ⁻¹) + k_n A(t_n) ξⁿ = k_n f(t_n),

or equivalently

    M (ξⁿ − ξⁿ⁻¹)/k_n + A_n ξⁿ = f_n

with A_n := A(t_n) and f_n := f(t_n).

Algorithm

• Set 0 = t0 < t1 < ...tL = T .

• Let ξ 0 = ξ(0). For n = 1, ..., L

solve (M + kn An )ξ n = M ξ n−1 + kn fn ,

end.
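The backward Euler loop can be sketched in Python (a scalar test problem ξ' = −ξ is used for illustration, for which the scheme gives ξⁿ = (1 + k)⁻ⁿ):

```python
import numpy as np

def backward_euler(M, A, f, xi0, T, L):
    """Solve M xi'(t) + A(t) xi(t) = f(t), xi(0) = xi0, with L uniform steps."""
    k = T / L
    xi = xi0.copy()
    t = 0.0
    for _ in range(L):
        t += k
        # One step: (M + k A_n) xi^n = M xi^{n-1} + k f_n.
        xi = np.linalg.solve(M + k * A(t), M @ xi + k * f(t))
    return xi

# Scalar test problem xi' = -xi, xi(0) = 1 (M = A = 1, f = 0).
xi = backward_euler(np.eye(1), lambda t: np.eye(1), lambda t: np.zeros(1),
                    np.ones(1), T=1.0, L=10)
```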

Alternative Schemes

• Backward Euler: (M + kn An )ξn = M ξn−1 + kn fn .

• Forward Euler: M ξn = M ξn−1 − kn An−1 ξn−1 + kn fn−1 .



• Crank-Nicolson: (M + (k_n/2) A_n) ξⁿ = (M − (k_n/2) A_{n−1}) ξⁿ⁻¹ + (k_n/2)(f_n + f_{n−1}),

  based on the approximations

    ∫_{t_{n−1}}^{t_n} A(t) ξ(t) dt ≈ (k_n/2) A_n ξⁿ + (k_n/2) A_{n−1} ξⁿ⁻¹,

    ∫_{t_{n−1}}^{t_n} f(t) dt ≈ (k_n/2)(f_n + f_{n−1}).

5.2 Heat equation


Find u = u(x, t) s.t.

    u̇ − Δu = f     in Ω ⊂ R², t ∈ (0, T),
    u(·, t) = 0    on ∂Ω and t ∈ (0, T],
    u(x, 0) = u₀   for x ∈ Ω.

Let V₀ := {v | ‖v‖ + ‖∇v‖ < ∞, v|_{∂Ω} = 0}. Multiplying the heat equation with v ∈ V₀ and integrating over Ω yields
    ∫_Ω f v = ∫_Ω u̇ v − ∫_Ω Δu v = ∫_Ω u̇ v + ∫_Ω ∇u · ∇v

for 0 < t < T. The weak form therefore reads: find u(t) ∈ V₀ s.t. for all t > 0

    ∫_Ω u̇ v + ∫_Ω ∇u · ∇v = ∫_Ω f v   for all v ∈ V₀.

FEM

Find u_h(t) ∈ V_{h,0} s.t. for all t > 0

    ∫_Ω u̇_h v_h + ∫_Ω ∇u_h · ∇v_h = ∫_Ω f v_h   for all v_h ∈ V_{h,0}.

Let u_h(t) = Σ_{i=1}^N ξ_i(t) φ_i(x). Choosing v_h = φ_j in the FEM formulation, we obtain:

    Σ_{i=1}^N ξ̇_i ∫_Ω φ_i φ_j + Σ_{i=1}^N ξ_i ∫_Ω ∇φ_i · ∇φ_j = ∫_Ω f φ_j   for 1 ≤ j ≤ N.

Note that

    ξ(t) = (ξ₁(t), …, ξ_N(t))ᵀ = (u_h(x₁, t), …, u_h(x_N, t))ᵀ,

where xi denotes the node that belongs to the basis function φi .

Let A, M ∈ R^{N×N} and b(t) ∈ R^N be given by the entries

    M_{i,j} := ∫_Ω φ_j φ_i,   A_{i,j} := ∫_Ω ∇φ_j · ∇φ_i   for i, j = 1, …, N,

    b_i := ∫_Ω f φ_i   for i = 1, …, N,

then we obtain the system

    M ξ̇(t) + A ξ(t) = b(t),

which we can e.g. solve with the Backward Euler method.

Algorithm:

Assume that only b is time-dependent (if it is not time-dependent, move its com-
putation outside of the loop over n).

• Construct mesh and Vh,0 .

• Compute M and A.

• Let ξ 0 = ξ(0).

• For n = 1, ..., L

compute bn ,

solve (M + kn A)ξ n = M ξ n−1 + kn bn ,

    (u_h)ⁿ := Σ_{i=1}^N ξᵢⁿ φ_i,

end.

For ξ⁰ we can either use

    ξ⁰ := (ξ₁(0), …, ξ_N(0))ᵀ = (u₀(x₁), …, u₀(x_N))ᵀ,

or we can let ξ⁰ be the L²-projection of u₀. We set (u_h)⁰ := Σ_{i=1}^N ξᵢ⁰ φ_i.

Remark 5.2.1 (Stability).


There hold continuous and discrete stability estimates:

    ‖u(·, t)‖ ≤ ‖u(·, 0)‖ + ∫₀ᵗ ‖f(·, s)‖ ds,

    ‖(u_h)ⁿ‖ ≤ ‖(u_h)ⁿ⁻¹‖ + k_n ‖f_n‖ ≤ ‖(u_h)⁰‖ + Σ_{i=1}^n k_i ‖f_i‖.

5.3 Wave equation


Find u = u(x, t) s.t.

    ü − ∇·(ε∇u) = f    in Ω ⊂ R², t ∈ (0, T),
    n · ∇u(·, t) = 0   on ∂Ω and t ∈ (0, T],
    u(x, 0) = u₀       for x ∈ Ω,
    u̇(x, 0) = v₀       for x ∈ Ω,

where ε = ε(x, t) and f = f(x, t).

Weak form

Let V := {v | ‖v‖ + ‖∇v‖ < ∞}. Multiplying the wave equation with v ∈ V and integrating over Ω yields with Green's formula

    ∫_Ω f v = ∫_Ω ü v − ∫_Ω ∇·(ε∇u) v = ∫_Ω ü v + ∫_Ω ε∇u · ∇v − ∫_{∂Ω} (ε∇u·n) v

for 0 < t < T, where the boundary term vanishes because of n · ∇u = 0. The weak form therefore reads: find u(t) ∈ V s.t. for all t > 0

    ∫_Ω ü v + ∫_Ω ε∇u · ∇v = ∫_Ω f v   for all v ∈ V.

FEM - semi-discrete

Find u_h(t) ∈ V_h s.t. for all t > 0

    ∫_Ω ü_h v_h + ∫_Ω ε∇u_h · ∇v_h = ∫_Ω f v_h   for all v_h ∈ V_h.

We let u_h(x, t) = Σ_{j=1}^N ξ_j(t) φ_j(x), where

    ξ(t) = (ξ₁(t), …, ξ_N(t))ᵀ = (u_h(x₁, t), …, u_h(x_N, t))ᵀ.

This leads to

    Σ_{i=1}^N ξ̈_i ∫_Ω φ_i φ_j + Σ_{i=1}^N ξ_i ∫_Ω ε∇φ_i · ∇φ_j = ∫_Ω f φ_j   for 1 ≤ j ≤ N.

We obtain the system

    M ξ̈(t) + A(t) ξ(t) = b(t)   for 0 < t < T,

where M denotes the mass matrix and A(t) the stiffness matrix (time-dependent through ε).

FEM - time discretization


We transform the problem into a first-order system. Let therefore η(t) := ξ̇(t) and consider the new coupled system:

    M ξ̇(t) = M η(t),             for 0 < t < T,
    M η̇(t) + A(t) ξ(t) = b(t),   for 0 < t < T.

The application of the Crank-Nicolson scheme gives for n > 0:

    M (ξⁿ − ξⁿ⁻¹)/k_n = M (ηⁿ + ηⁿ⁻¹)/2,
    M (ηⁿ − ηⁿ⁻¹)/k_n + (A_n ξⁿ + A_{n−1} ξⁿ⁻¹)/2 = (b_n + b_{n−1})/2.

In matrix form we get:

    [ M  −(k_n/2)M ; (k_n/2)A_n  M ] (ξⁿ, ηⁿ)ᵀ
        = [ M  (k_n/2)M ; −(k_n/2)A_{n−1}  M ] (ξⁿ⁻¹, ηⁿ⁻¹)ᵀ + ( 0, (k_n/2)(b_n + b_{n−1}) )ᵀ,

where we abbreviate the left-hand matrix by W_n and the whole right-hand side by w_n.

Algorithm: Crank-Nicolson for the wave equation:

• Construct mesh and V_h.

• Compute M .

• Choose ξ 0 = ξ(0) and η 0 = η(0)

• For n = 1, ..., L

compute An and bn ,

solve

    W_n (ξⁿ, ηⁿ)ᵀ = w_n,

    (u_h)ⁿ := Σ_{i=1}^N ξᵢⁿ φ_i,

end.
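For the scalar model problem ξ'' + ξ = 0 (M = A = 1, b = 0) the block system reduces to 2×2 matrices, and the Crank-Nicolson map conserves the discrete energy ξ² + η² exactly; a hedged Python sketch:

```python
import numpy as np

k = 0.1                                         # time step k_n
W  = np.array([[1.0, -k / 2], [ k / 2, 1.0]])   # left-hand block matrix W_n
Wp = np.array([[1.0,  k / 2], [-k / 2, 1.0]])   # right-hand block matrix

z = np.array([1.0, 0.0])    # (xi^0, eta^0): initial value and initial velocity
for _ in range(100):
    z = np.linalg.solve(W, Wp @ z)              # one Crank-Nicolson step

energy = z @ z              # discrete energy xi^2 + eta^2
```

The step map W⁻¹Wp is a Cayley transform of a skew-symmetric matrix and hence orthogonal, which is the discrete counterpart of the conservation property in Remark 5.3.1.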

Remark 5.3.1 (Conservation of energy).

Let f = 0 and ε = 1, then

    ‖u̇(·, t)‖² + ‖∇u(·, t)‖² = C,

where C is independent of t.
