Skript
Patrick Henning
(this script is based on the previous lecture by A. Målqvist and the book
’The Finite Element Method: Theory, Implementation, and Practice’ by M.G. Larson and F. Bengzon)
Contents
1 Lecture 1 and 2
  1.1 Piecewise polynomial approximations in 1D
  1.2 Continuous Piecewise Linear Polynomials
  1.3 Interpolation
  1.4 Continuous Piecewise Linear Interpolation
  1.5 L2-Projection
  1.6 Computation of an L2-Projection Ph(f)
  1.7 Numerical Integration with Quadrature Rules
  1.8 Error in Quadrature Rules
  1.9 Implementation of the L2-Projection
  1.10 Exercise 4.1 - Cauchy-Schwarz inequality
2 Lecture 3 and 4
  2.1 Weak formulation of the problem
  2.2 The Finite Element Method
  2.3 Derivation of the discrete system
  2.4 Basic a priori error estimate
  2.5 Mathematical modeling and boundary conditions
  2.6 Model problem with coefficient and general Robin BC
  2.7 A posteriori error estimate and adaptivity
3 Lectures 5 and 6
  3.1 Meshes
  3.2 Data structure for mesh
  3.3 Mesh generation
  3.4 Piecewise polynomial spaces
  3.5 Interpolation
  3.6 L2-projection
  3.7 Existence and uniqueness of the projection
  3.8 A priori error estimate
  3.9 Quadrature and numerical integration
4 Lecture 7, 8 and 9
  4.1 Weak formulation
  4.2 Finite Element Method
  4.3 The Dirichlet Problem
  4.4 The Neumann Problem
  4.5 Elliptic Problems with a Convection Term
  4.6 Eigenvalue Problem
  4.7 Error analysis and adaptivity
5 Lecture 10
  5.1 Systems of Ordinary Differential Equations
  5.2 Heat equation
  5.3 Wave equation
Chapter 1
Lecture 1 and 2
Let I := [x0 , x1 ] ⊂ R for x0 < x1 and let P1 (I) := {v| v(x) = c0 + c1 x; c0 , c1 ∈ R}.
Every linear function v on I is uniquely defined by {c0 , c1 } via v(x) = c0 + c1 x.
[Figure: a linear function v on I with intercept c0 and slope c1 = v'(x)]
Alternatively, the function v can be also characterized by two values {α0 , α1 } via
α0 = v(x0 ) and α1 = v(x1 ).
[Figure: v determined by its nodal values α0 = v(x0) and α1 = v(x1)]
Consequently, every pair {c0 , c1 } corresponds to a pair {α0 , α1 } and vice versa.
Now, let λ0 , λ1 ∈ P1 (I) such that λ0 (x0 ) = 1, λ0 (x1 ) = 0 and such that λ1 (x0 ) = 0,
λ1 (x1 ) = 1. λ0 and λ1 are uniquely determined. Any v ∈ P1 (I) can be written as
v(x) = α0 λ0 (x) + α1 λ1 (x). What are the equations for λ0 and λ1 ?
[Figure: the basis functions λ0 and λ1 on I]
We obtain
    λ0(x) = (x1 − x)/(x1 − x0)    and    λ1(x) = (x − x0)/(x1 − x0).
The set {λ0 , λ1 } spans P1 (I) (i.e. it is a basis).
1.2 Continuous Piecewise Linear Polynomials
Consider a partition a = x0 < x1 < ... < xN = b of the interval I = [a, b] into subintervals Ii = [xi−1, xi] of length hi = xi − xi−1, and let Vh denote the space of continuous functions that are linear on each subinterval Ii.

[Figure: a continuous piecewise linear function with values v(x0), ..., v(xN) on the mesh a = x0 < x1 < ... < xN = b]
Let {φj}_{j=0}^N ⊂ Vh be functions such that for i, j = 0, ..., N:

    φj(xi) = { 1,  i = j,
               0,  i ≠ j.
[Figure: the "hat function" φj, equal to 1 at the node xj and 0 at all other nodes]
Now {φj}_{j=0}^N forms a basis of Vh, i.e. any v ∈ Vh can be written as

    v(x) = Σ_{i=0}^N αi φi(x),    where αi = v(xi).
We have

    φi(x) = { (x − xi−1)/hi,      x ∈ Ii,
              (xi+1 − x)/hi+1,    x ∈ Ii+1,
              0,                  otherwise.
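This piecewise formula translates directly into code. A minimal sketch in Python/numpy (the lecture itself works with Matlab; the names `hat` and `nodes` are illustrative):

```python
import numpy as np

def hat(nodes, j, x):
    """Evaluate the hat function phi_j of the mesh `nodes` at the points x."""
    x = np.asarray(x, dtype=float)
    phi = np.zeros_like(x)
    if j > 0:                      # rising part on I_j = [x_{j-1}, x_j]
        left = (nodes[j - 1] <= x) & (x <= nodes[j])
        phi[left] = (x[left] - nodes[j - 1]) / (nodes[j] - nodes[j - 1])
    if j < len(nodes) - 1:         # falling part on I_{j+1} = [x_j, x_{j+1}]
        right = (nodes[j] < x) & (x <= nodes[j + 1])
        phi[right] = (nodes[j + 1] - x[right]) / (nodes[j + 1] - nodes[j])
    return phi

nodes = np.array([0.0, 0.25, 0.5, 1.0])    # a non-uniform mesh
```

Note that at any point the hat functions sum to 1 (partition of unity), which is a quick sanity check for an implementation.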
1.3 Interpolation
Let f : I = [x0 , x1 ] → R be a continuous function. We define the linear interpolant
Π(f ) ∈ P1 (I) by Π(f ) := f (x0 )φ0 + f (x1 )φ1 .
[Figure: a function f and its linear interpolant Π(f) on I = [x0, x1]]
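Numerically, the continuous piecewise linear interpolant is exactly what `numpy.interp` evaluates; a small sketch (the mesh and the test function are chosen for illustration):

```python
import numpy as np

f = np.sin
nodes = np.linspace(0.0, np.pi, 9)        # mesh x_0 < ... < x_N on I = [0, pi]
x = np.linspace(0.0, np.pi, 2001)         # fine evaluation grid

# Pi(f): continuous piecewise linear, matches f at the nodes
Pi_f = np.interp(x, nodes, f(nodes))

# approximate L2 error ||f - Pi(f)|| on the fine grid
dx = x[1] - x[0]
err = np.sqrt(np.sum((f(x) - Pi_f) ** 2) * dx)
```

Halving the mesh size should roughly quarter `err`, in line with the h² estimate derived below.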
We would like to study the error f − Π(f) in the L2-norm. Let therefore

    ‖w‖_{L2(I)} := ( ∫_I w(x)² dx )^{1/2}    (the L2-norm).

The L2-norm satisfies, for example:

(i) ‖w‖_{L2(I)} = 0 ⇔ w = 0,
(ii) ‖λw‖_{L2(I)} = ( ∫_I (λw(x))² dx )^{1/2} = (λ²)^{1/2} ( ∫_I w(x)² dx )^{1/2} = |λ| ‖w‖_{L2(I)}.
Using the Cauchy-Schwarz inequality we obtain:

    ‖v + w‖²_{L2(I)} = ∫_I (v(x) + w(x))² dx = ∫_I v(x)² + 2 v(x)w(x) + w(x)² dx
                     ≤ ‖v‖²_{L2(I)} + 2 ‖v‖_{L2(I)} ‖w‖_{L2(I)} + ‖w‖²_{L2(I)} = ( ‖v‖_{L2(I)} + ‖w‖_{L2(I)} )².
where h := x1 − x0.
It remains to estimate ‖e'‖_{L2(I)}, where e := f − Π(f) denotes the interpolation error. We use the mean value theorem (applied to e), which gives us the existence of a ξ ∈ I such that

    e'(ξ) = (e(x1) − e(x0))/(x1 − x0) = 0    (because e(x0) = e(x1) = 0).    (1.3.3)
Now let y ∈ I be arbitrary; then we get e'(y) = e'(ξ) + ∫_ξ^y e''(x) dx = ∫_ξ^y e''(x) dx, which implies

    |e'(y)| = | ∫_ξ^y e''(x) dx | ≤ ∫_I |e''(x)| dx ≤ h^{1/2} ( ∫_I |e''(x)|² dx )^{1/2} = h^{1/2} ( ∫_I |f''(x)|² dx )^{1/2},
where hi := xi − xi−1 .
Proof. We can use Proposition 1.3.1 and apply it to each of the intervals Ii to
obtain
    ‖f − Π(f)‖²_{L2(I)} = Σ_{i=1}^N ‖f − Π(f)‖²_{L2(Ii)} ≤ Σ_{i=1}^N hi^4 ‖f''‖²_{L2(Ii)}

and simultaneously

    ‖(f − Π(f))'‖²_{L2(I)} = Σ_{i=1}^N ‖(f − Π(f))'‖²_{L2(Ii)} ≤ Σ_{i=1}^N hi² ‖f''‖²_{L2(Ii)}.
1. Π(f) → f as (max_{1≤i≤N} hi) → 0.
1.5 L2-Projection
Let I = [a, b]. Notation: in the following we just write ∫_I v instead of ∫_I v(x) dx if the variable of integration is clear from the context.
We consider the space L2(I) := {v | ∫_I v(x)² dx < ∞} with the scalar product (v, w) := ∫_I v · w. Recall the properties of a scalar product:
The normed vector space L2(I) with the above scalar product (inner product) is a Hilbert space (i.e. every Cauchy sequence converges). The 'L' comes from Lebesgue, who introduced the so-called 'Lebesgue integral' that we use nowadays.
Dividing by ‖v + w‖_{L2(I)} yields the result. (If ‖v + w‖_{L2(I)} = 0, the result is trivial.)
We define the L2-projection Ph(f) ∈ Vh of f by: Ph(f) ∈ Vh fulfills

    ∫_I (f − Ph(f)) · vh = 0    for all vh ∈ Vh.
Note that for any vh ∈ Vh:

    ‖f − Ph(f)‖²_{L2(I)} = (f − Ph(f), f − Ph(f))
                         = (f − Ph(f), f − vh) + (f − Ph(f), vh − Ph(f))    [the last term is 0]
                         ≤ ‖f − Ph(f)‖_{L2(I)} ‖f − vh‖_{L2(I)}

and therefore ‖f − Ph(f)‖_{L2(I)} ≤ ‖f − vh‖_{L2(I)} for all vh ∈ Vh,
where Vh = span({φj}_{j=0}^N) and the φj are the hat functions defined in Section 1.2 (which form a basis of Vh).
Since Ph(f) ∈ Vh we can write it as Ph(f) = Σ_{j=0}^N ξj φj with appropriate ξj ∈ R. Therefore

    'find Ph(f) ∈ Vh : (Ph(f), vh) = (f, vh) for all vh ∈ Vh'
    ⇐⇒ 'find ξ0, ..., ξN ∈ R : Σ_{j=0}^N ξj (φj, φi) = (f, φi) for all 0 ≤ i ≤ N'.
The real numbers ξ = (ξ0, ..., ξN) are the coefficients for describing Ph(f) in terms of the hat basis functions: Ph(f) = Σ_{j=0}^N ξj φj. The problem can be expressed as a linear system of equations. We define the matrix M ∈ R^{(N+1)×(N+1)} and the vector b ∈ R^{N+1} by

    Mij := (φj, φi) = ∫_I φi · φj    (computable),
    bi := (f, φi) = ∫_I f · φi    (computable).
We get bi = Σ_{j=0}^N Mij ξj for i = 0, ..., N, or equivalently b = Mξ. This leads to the following algorithm for the computation of Ph(f):

Algorithm
1. Initiate a mesh with N elements x0 < x1 < ... < xN.
2. Compute M and b.
3. Solve Mξ = b.
4. Let Ph(f) = Σ_{j=0}^N ξj φj.
Examples:
1. Midpoint rule. Let n = 1, x1 = (a + b)/2, ω1 = b − a.

       J(f) ≈ Q_I(f) := f((a + b)/2)(b − a).

   This rule is exact for linear functions f.
   [Figure: the midpoint rule approximation Q_I(f) of ∫_a^b f]

2. Trapezoidal rule. Let n = 2, x1 = a, x2 = b, ω1 = ω2 = (b − a)/2.

       J(f) ≈ Q_I(f) := ((f(a) + f(b))/2)(b − a).
This rule is exact for linear functions f .
   [Figure: the trapezoidal rule approximation Q_I(f)]
3. Simpson's rule. Let g(x) := γx² + βx + α and I = [0, h]. We wish to determine α, β and γ such that g(0) = f(0), g(h/2) = f(h/2) and g(h) = f(h), i.e. g is the quadratic interpolant of f. In particular: if f is a quadratic polynomial, then f = g. Now, find α, β, γ ∈ R such that

       g(0) = α = f(0),    g(h/2) = γ h²/4 + β h/2 + α = f(h/2)    and    g(h) = γh² + βh + α = f(h).

   We obtain the following linear system of equations for α, β and γ:

       [ 1  0    0    ] [ α ]   [ f(0)   ]
       [ 1  h/2  h²/4 ] [ β ] = [ f(h/2) ]
       [ 1  h    h²   ] [ γ ]   [ f(h)   ]

   with solution

       α = f(0),
       β = (2/h)( 2f(h/2) − (3/2)f(0) − (1/2)f(h) ),
       γ = (2/h²)( f(h) − 2f(h/2) + f(0) ).
   Computing the integral of g over I by using the above values for α, β and γ, we obtain

       ∫_0^h g(x) dx = (h/6)( f(0) + 4f(h/2) + f(h) ).

   This means that the quadrature rule

       J(f) ≈ Q_I(f) := (( f(a) + 4f((a + b)/2) + f(b) )/6)(b − a)

   is exact for quadratic functions f (because in this case we have f = g).
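The three quadrature rules are easy to compare in code; a sketch (Python instead of the lecture's Matlab, function names illustrative):

```python
def midpoint(f, a, b):
    """Midpoint rule: exact for linear f."""
    return f((a + b) / 2) * (b - a)

def trapezoid(f, a, b):
    """Trapezoidal rule: exact for linear f."""
    return (f(a) + f(b)) / 2 * (b - a)

def simpson(f, a, b):
    """Simpson's rule: exact for quadratic f."""
    return (f(a) + 4 * f((a + b) / 2) + f(b)) / 6 * (b - a)
```

For f(x) = x² on [0, 2] only Simpson's rule reproduces the exact value 8/3, while all three rules are exact for linear integrands.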
[Figure: the neighboring hat functions φi−1 and φi; their supports overlap only on the interval [xi−1, xi]]
We compute the entries. First, observe that Mij = 0 if |i − j| > 1. For Mii we get

    Mii = ∫_I φi² = ∫_{Ii} ((x − xi−1)/hi)² dx + ∫_{Ii+1} ((xi+1 − x)/hi+1)² dx
        = (1/hi²)[ (x − xi−1)³/3 ]_{xi−1}^{xi} − (1/hi+1²)[ (xi+1 − x)³/3 ]_{xi}^{xi+1}
        = hi/3 + hi+1/3,    i = 1, ..., N − 1.

At the boundary nodes only one interval contributes, hence

    M00 = h1/3    and    MNN = hN/3.
Since the matrix M is symmetric, it remains to calculate Mi+1,i:

    Mi+1,i = ∫_I φi · φi+1 = ∫_{Ii+1} ((xi+1 − x)/hi+1) · ((x − xi)/hi+1) dx = hi+1/6,    i = 0, ..., N − 1.
We can therefore easily identify the local contribution from an element Ii and denote this contribution by M^{Ii}, i.e.

    M^{Ii} := (hi/6) [ 2  1 ]
                     [ 1  2 ]    (local mass matrix).
Algorithm:
1. Allocate a zero (N + 1) × (N + 1) matrix M.
2. For i = 1, ..., N:
       Compute M^{Ii} = (hi/6) [ 2 1 ; 1 2 ].
       Add M^{Ii}_{1,1} to Mi,i,
           M^{Ii}_{1,2} to Mi,i+1,
           M^{Ii}_{2,1} to Mi+1,i,
           M^{Ii}_{2,2} to Mi+1,i+1,
   end.
Approximating each entry of the load vector with the trapezoidal rule gives

    bi ≈ ((f(xi−1)φi(xi−1) + f(xi)φi(xi))/2)(xi − xi−1) + ((f(xi)φi(xi) + f(xi+1)φi(xi+1))/2)(xi+1 − xi)
       = f(xi)(hi + hi+1)/2.
The resulting approximate load vector can again be assembled from local element contributions:

    b̃ := ( f(x0) h1/2,  f(x1)(h1 + h2)/2,  ...,  f(xN−1)(hN−1 + hN)/2,  f(xN) hN/2 )^T
       = b^{I1} + b^{I2} + ... + b^{IN},

where the local load vector of the element Ii is b^{Ii} = (hi/2)( f(xi−1), f(xi) )^T, added to the entries of b̃ that belong to the nodes xi−1 and xi.
Algorithm:
Now let

    λ := ( ∫_Ω u · v ) / ‖v‖²_{L2(Ω)}    (trick!)

which makes (1.10.1) read

    0 ≤ ‖u‖²_{L2(Ω)} − 2 ( ∫_Ω u · v )² / ‖v‖²_{L2(Ω)} + ( ∫_Ω u · v )² / ‖v‖²_{L2(Ω)}.

Multiplying with ‖v‖²_{L2(Ω)} yields

    0 ≤ ‖u‖²_{L2(Ω)} ‖v‖²_{L2(Ω)} − ( ∫_Ω u · v )²,
Chapter 2
Lecture 3 and 4
Finite Element Method in 1D
Summary:
• derive the Finite Element Method,
for all φ ∈ C^1(0, 1) with φ(0) = φ(1) = 0. We write v' := w for the weak derivative of v. Let

    V0 := {v ∈ C^0(0, 1) | ‖v‖_{L2(I)} < ∞, ‖v'‖_{L2(I)} < ∞ and v(0) = v(1) = 0},
where v' denotes the weak derivative. Multiplying (2.0.1) with a test function v ∈ V0 and integrating over I yields (using integration by parts):

    ∫_I f · v = −∫_I u'' · v = ∫_I u' · v' − u'(1)v(1) + u'(0)v(0) = ∫_I u' · v'.
Comments:
1. If u is a strong solution (i.e. a solution of (2.0.1)), then it is also a weak solution.
2. If u is a weak solution with u ∈ C^2(I), then it is also a strong solution.
3. Existence and uniqueness of weak solutions is obtained by the Lax-Milgram theorem.
4. We can consider solutions with lower regularity using the weak formulation.
5. FEM gives an approximation of the weak solution.
In the following, we use the abbreviation ‖ · ‖ := ‖ · ‖_{L2(I)}.
    bi := ∫_I f · φi    for 1 ≤ i ≤ N − 1.
We have

    bi = Σ_{j=1}^{N−1} Ai,j ξj    for 1 ≤ i ≤ N − 1.
Algorithm
1. Initiate a mesh with N elements.
2. Compute A and b.
3. Solve the system Aξ = b.
4. Set uh = Σ_{j=1}^{N−1} ξj φj.
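The four steps can be sketched as follows (Python/numpy instead of Matlab; for f = 1 the exact solution of −u'' = f with u(0) = u(1) = 0 is u(x) = x(1 − x)/2, and since the trapezoidal load vector is exact for constant f, the nodal values come out exact in this 1D setting):

```python
import numpy as np

def fem_1d_dirichlet(nodes, f):
    """P1 FEM for -u'' = f on (a, b) with u = 0 at both endpoints."""
    N = len(nodes) - 1
    h = np.diff(nodes)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    for i in range(N):
        A_loc = 1.0 / h[i] * np.array([[1.0, -1.0], [-1.0, 1.0]])
        A[i:i+2, i:i+2] += A_loc                  # local stiffness matrix
        b[i:i+2] += h[i] / 2.0 * f(nodes[i:i+2])  # trapezoidal-rule load
    xi = np.zeros(N + 1)                          # homogeneous Dirichlet BC:
    xi[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])  # interior nodes only
    return xi

nodes = np.linspace(0.0, 1.0, 21)
uh = fem_1d_dirichlet(nodes, lambda x: np.ones_like(x))
u_exact = nodes * (1.0 - nodes) / 2.0
```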
Theorem 2.4.1.
uh ∈ Vh,0 satisfies the Galerkin orthogonality:

    ∫_I (u − uh)' · vh' = 0    for all vh ∈ Vh,0.    (2.4.1)
Theorem 2.4.2.
It holds

    ‖(u − uh)'‖_{L2(I)} ≤ ‖(u − vh)'‖_{L2(I)}    for all vh ∈ Vh,0.
Dividing by ‖(u − uh)'‖_{L2(I)} finishes the proof. (If ‖(u − uh)'‖_{L2(I)} = 0, the result is trivial.)
Note: even though we do not prove it, the solution u of (2.1.1) has a second derivative in the weak sense. Therefore, the above equation is justified.
We note:
2. uh is the best approximation within the space Vh,0 with respect to the ‖v'‖-norm.

4. The norm ‖v'‖ is referred to as the energy norm and often has a physical meaning.
[Figure: a bar on the interval (0, L) with heat flux q(x0), q(x1) through the points x0, x1 and a heat source f]
Let q denote the heat flux along the x-axis. The conservation of energy yields:

    q(x0) − q(x1) + ∫_{x0}^{x1} f(x) dx = 0,

and therefore

    −∫_{x0}^{x1} q'(x) dx + ∫_{x0}^{x1} f(x) dx = 0.
Since x0 and x1 are arbitrary, this gives q' = f; with Fourier's law q = −kT' we obtain

    −(kT')'(x) = f(x)    for all x ∈ I = (0, L).
Boundary conditions
1. Dirichlet: T(0) = α and T(L) = β for two real numbers α and β. This BC is also known as a strong BC or essential BC. The temperature is kept at a constant value at the boundary points (temperature regulator).
2. Neumann: T 0 (0) = α and T 0 (L) = β for two real numbers α and β. This BC
is also known as natural BC. If T 0 (0) = 0 then q(0) = 0, which means that
we do not have flow over the boundary (no flow in, no flow out).
3. Robin: T 0 (0) = αT (0) and T 0 (L) = βT (L) for two real numbers α and β. It
says that the flux is proportional to the heat - the larger the heat the larger
the flow.
V := {v ∈ C^0(0, 1) | ‖v‖_{L2(I)} < ∞ and ‖v'‖_{L2(I)} < ∞},
for all v ∈ V. We gather all u-dependent terms on the left and obtain

    ∫_0^1 a u' v' + κ0 u(0)v(0) − κ1 u(1)v(1) = ∫_0^1 f v + κ0 g0 v(0) − κ1 g1 v(1)

for all v ∈ V.
Implementation:
Let for simplification a = 1. We need to assemble a stiffness matrix A and a load vector b. Let therefore uh = Σ_{j=0}^N ξj φj and v = φi for i = 0, ..., N (observe that the sum is from 0 to N!). We get:

    Aξ = b
where 0 ≤ i, j ≤ N. Since

    φi(x) = { (x − xi−1)/hi,      x ∈ Ii,
              (xi+1 − x)/hi+1,    x ∈ Ii+1,
              0,                  otherwise,

we have

    φi'(x) = { 1/hi,       x ∈ Ii,
               −1/hi+1,    x ∈ Ii+1,
               0,          otherwise.
We note that

    ∫_0^1 φi' φi' = ∫_{xi−1}^{xi} (φi')² + ∫_{xi}^{xi+1} (φi')² = 1/hi + 1/hi+1

for i = 1, ..., N − 1, while ∫_0^1 (φ0')² = 1/h1 and ∫_0^1 (φN')² = 1/hN. The terms κ0 φj(0)φi(0) are only non-zero for i = j = 0 and κ1 φj(1)φi(1) is only non-zero for i = j = N. We therefore get:
    A = [ κ0 + 1/h1   −1/h1                                    0         ]
        [ −1/h1       1/h1 + 1/h2   −1/h2                                ]
        [             ...           ...         ...                      ]
        [                           ...         ...         −1/hN       ]
        [ 0                                     −1/hN       1/hN − κ1   ]
    b̃ = ( f(x0) h1/2 + κ0 g0,  f(x1)(h1 + h2)/2,  ...,  f(xN−1)(hN−1 + hN)/2,  f(xN) hN/2 − κ1 g1 )^T.
We assemble the matrix A in the same way as M. Observe that A can be written in the following localized structure:

    A = (1/h1) [ 1 −1 ; −1 1 ]  (in rows/columns 0 and 1)
      + (1/h2) [ 1 −1 ; −1 1 ]  (in rows/columns 1 and 2)
      + ...
      + (1/hN) [ 1 −1 ; −1 1 ]  (in rows/columns N − 1 and N),

plus the boundary contributions κ0 to A00 and −κ1 to ANN.
Algorithm:
1. Allocate a zero (N + 1) × (N + 1) matrix A.
2. For i = 1, ..., N:
       Compute A^{Ii} = (1/hi) [ 1 −1 ; −1 1 ].
       Add A^{Ii}_{1,1} to Ai,i,
           A^{Ii}_{1,2} to Ai,i+1,
           A^{Ii}_{2,1} to Ai+1,i,
           A^{Ii}_{2,2} to Ai+1,i+1,
   end.
3. Add κ0 to A1,1 and −κ1 to AN+1,N+1.
We want to use the bound to increase the number of nodes in the 'right areas'. We start from:

    ‖(u − uh)'‖² ≤ Σ_{i=1}^N hi² ‖f + uh''‖²_{L2(Ii)},
Algorithm:
Compute uh ,
end.
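A sketch of the adaptive loop this outlines, for the model problem −u'' = f with u(0) = u(1) = 0: since uh'' = 0 on each element, the indicator reduces to hi²‖f‖²_{L2(Ii)}, which is approximated below with the midpoint rule (the names and the marking parameter γ are illustrative):

```python
import numpy as np

def refine_adaptively(nodes, f, gamma=0.9, n_iter=5):
    """Repeatedly bisect the elements whose residual indicator is large."""
    for _ in range(n_iter):
        h = np.diff(nodes)
        mid = (nodes[:-1] + nodes[1:]) / 2.0
        # eta_i^2 = h_i^2 * ||f||^2_{L2(I_i)}, midpoint-rule approximation
        eta2 = h**2 * f(mid)**2 * h
        marked = eta2 > gamma * eta2.max()      # mark elements with large error
        nodes = np.sort(np.concatenate([nodes, mid[marked]]))
    return nodes

f = lambda x: 1.0 / (0.01 + x**2)               # f is large near x = 0
nodes = refine_adaptively(np.linspace(0.0, 1.0, 11), f)
```

The refined mesh ends up much finer near x = 0, where f (and hence the indicator) is large.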
[Figure: the original grid and the adaptively refined grid]
Chapter 3
Lectures 5 and 6
Piecewise polynomial approximation in 2D
Summary:
• construct mesh,
• interpolation,
• L2-projection,
• implementation.
3.1 Meshes
Let Ω ⊂ R² be bounded with ∂Ω assumed to be polygonal. A triangulation Th of Ω is a set of triangles T such that

    Ω = ∪_{T ∈ Th} T.
The node coordinates are stored in a matrix P ∈ R^{2×N} with entries:

    P = [ x1 x2 ··· xN ]
        [ y1 y2 ··· yN ]

and the matrix K ∈ R^{3×M} describes the triangles, i.e. it describes which nodes (numbered from 1 to N) form a triangle T and how it is oriented:

    K = [ n1^α n2^α ··· nM^α ]
        [ n1^β n2^β ··· nM^β ]
        [ n1^γ n2^γ ··· nM^γ ]

[Figure: a triangle T with its three nodes nα, nβ, nγ listed in counterclockwise order]
Example: [Figure: a triangulation of an L-shaped domain with nodes N1 = (0, 0), N2 = (1, 0), N3 = (2, 0), N4 = (0, 1), N5 = (1, 1), N6 = (2, 1), N7 = (0, 2), N8 = (1, 2) and triangles T1, ..., T6]
That implies for our example:

    P = [ 0 1 2 0 1 2 0 1 ]
        [ 0 0 0 1 1 1 2 2 ]

and

    K = [ 1 2 2 3 4 5 ]
        [ 2 5 3 6 5 8 ]
        [ 4 4 5 5 7 7 ].
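The same data structure in Python (note the shift between the 1-based node numbers above and 0-based array indices):

```python
import numpy as np

# node coordinates, one column per node (the matrix P above)
P = np.array([[0, 1, 2, 0, 1, 2, 0, 1],
              [0, 0, 0, 1, 1, 1, 2, 2]], dtype=float)

# connectivity, one column per triangle (the matrix K above, 1-based numbers)
K = np.array([[1, 2, 2, 3, 4, 5],
              [2, 5, 3, 6, 5, 8],
              [4, 4, 5, 5, 7, 7]])

def triangle_area(P, K, m):
    """Area of triangle m via the determinant formula 2|T| = det[1 x y]."""
    idx = K[:, m] - 1                       # convert to 0-based indices
    x, y = P[0, idx], P[1, idx]
    B = np.column_stack([np.ones(3), x, y])
    return 0.5 * abs(np.linalg.det(B))
```

A quick consistency check: the six triangle areas must sum to the area 3 of the L-shaped domain.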
Matlab has a built-in toolbox called the 'PDE Toolbox' which includes a mesh generation algorithm.
1. Define geometry:

       geom = [ 2 2 2 2 2 2 ]   ← polygon (segment type)
              [ 0 2 2 1 1 0 ]   ← x coordinate 1
              [ 2 2 1 1 0 0 ]   ← x coordinate 2
              [ 0 0 1 1 2 2 ]   ← y coordinate 1
              [ 0 1 1 2 2 0 ]   ← y coordinate 2
              [ 1 1 1 1 1 1 ]   ← domain to the right
              [ 0 0 0 0 0 0 ]   ← domain to the left
   [Figure: the L-shaped example domain with its boundary segments numbered 1-6]
2. [p, e, t] = initmesh(geom, 'hmax', 0.1), where e denotes the edge matrix.
3. pdemesh(p, e, t)
[Figure: a triangle T with corners (x1, y1), (x2, y2), (x3, y3)]
We define
P1(T) := {v ∈ C^0(T) | v(x, y) = c1 + c2 x + c3 y, c1, c2, c3 ∈ R}.
Now let vi = v(Ni) for i = 1, 2, 3. Note that v ∈ P1(T) is determined by {vi}_{i=1}^3. Given the vi we compute the ci by

    [ 1 x1 y1 ] [ c1 ]   [ v1 ]
    [ 1 x2 y2 ] [ c2 ] = [ v2 ]
    [ 1 x3 y3 ] [ c3 ]   [ v3 ].
This is solvable due to

    det [ 1 x1 y1 ]
        [ 1 x2 y2 ] = 2|T| ≠ 0.
        [ 1 x3 y3 ]
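Solving this 3×3 system with the unit vectors as right-hand sides gives the coefficients of the nodal basis directly; a small sketch, checked on the reference triangle with corners (0, 0), (1, 0), (0, 1):

```python
import numpy as np

def p1_basis_coefficients(x, y):
    """Columns of the result are (a_i, b_i, c_i) with phi_i = a_i + b_i x + c_i y."""
    B = np.column_stack([np.ones(3), x, y])   # det B = 2|T| != 0
    return np.linalg.solve(B, np.eye(3))      # solve B alpha_i = e_i, i = 1, 2, 3

# reference triangle: phi_1 = 1 - x - y, phi_2 = x, phi_3 = y
coeff = p1_basis_coefficients(np.array([0.0, 1.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]))
```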
Let λj ∈ P1(T) be given by the nodal values

    λj(Ni) = { 1,  i = j,
               0,  i ≠ j.
This gives us

    v(x, y) = α1 λ1(x, y) + α2 λ2(x, y) + α3 λ3(x, y),

where αi = v(Ni) for i = 1, 2, 3.
Example: [Figure: the reference triangle T with corners (0, 0), (1, 0), (0, 1)]
3.5 Interpolation
Given f ∈ C^0(T) on a single triangle with nodes Ni = (xi, yi), i = 1, 2, 3, we let

    Π(f) := Σ_{i=1}^3 f(Ni) φi
3.6 L2-projection
Let Ph : L2(Ω) → Vh be the L2-projection with Ph(f) ∈ Vh given s.t.

    ∫_Ω (f − Ph(f)) vh = 0    for all vh ∈ Vh.    (3.6.1)
Linear system:
Theorem 3.7.1.
For any f ∈ L2 (Ω) the L2 -projection Ph (f ) exists and is unique.
This gives us

    ∫_Ω (Ph(f)1 − Ph(f)2) vh = 0    for all vh ∈ Vh.

Choosing vh = Ph(f)1 − Ph(f)2 gives us

    ‖Ph(f)1 − Ph(f)2‖_{L2(Ω)} = 0

and therefore Ph(f)1 = Ph(f)2.
Theorem 3.8.1.
Let f ∈ L2 (Ω) and let Ph (f ) be the L2 -projection of f , then
    ‖f − Ph(f)‖ ≤ ‖f − vh‖    for all vh ∈ Vh.
    ‖f − Ph(f)‖² = (f − Ph(f), f − Ph(f)) = ∫_Ω (f − Ph(f))(f − vh)    (by (3.6.1))
                 ≤ ‖f − Ph(f)‖ ‖f − vh‖    for all vh ∈ Vh.
Theorem 3.8.2.
For f ∈ C^0(Ω), with f ∈ C^2(T) for all T ∈ Th, it holds
    ‖f − Ph(f)‖²_{L2(Ω)} ≤ C Σ_{T ∈ Th} hT^4 ‖D²f‖²_{L2(T)},
Theorem 3.8.3.
The mass matrix M is symmetric and positive definite.
Proof. The symmetry is obvious since Mi,j = Mj,i. The positive definiteness, i.e. x^T M x > 0 for all x ∈ R^N \ {0}, can be verified by:

    x^T M x = Σ_{i,j=1}^N xi Mi,j xj = Σ_{i,j=1}^N xi xj ∫_Ω φi φj
            = ∫_Ω ( Σ_{i=1}^N xi φi )( Σ_{j=1}^N xj φj )
            = ‖ Σ_{i=1}^N xi φi ‖²_{L2(Ω)} ≥ 0,

where x^T M x = 0 only if Σ_{i=1}^N xi φi = 0, which is only the case if xi = 0 for i = 1, ..., N.
Examples:
1. Midpoint rule:

       ∫_T f(x) dx ≈ f(x̄)|T|,    with midpoint x̄ = (N1 + N2 + N3)/3.

2. Corner rule:

       ∫_T f(x) dx ≈ Σ_{i=1}^3 f(Ni) |T|/3,    with triangle corners Ni.
3. Gauss quadratures: Gauss quadratures are designed with general points and
weights so that they are exact for polynomials of a given degree.
Useful formula (without proof): let m, n, p ∈ N and let φT,1 , φT,2 and φT,3 denote
the three basis functions that belong to the three corners of the triangle T . Then
it holds:
    ∫_T φT,1^m φT,2^n φT,3^p = ( m! n! p! / (m + n + p + 2)! ) 2|T|.    (3.10.1)
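Formula (3.10.1) gives, for example, ∫_T φT,1² = (2!/4!)·2|T| = |T|/6 and ∫_T φT,1 φT,2 = (1!·1!/4!)·2|T| = |T|/12, i.e. exactly the entries of the local mass matrix M^T = (|T|/12)[2 1 1; 1 2 1; 1 1 2] used below. A sketch verifying this on the reference triangle with the edge-midpoint quadrature rule (which is exact for quadratic integrands; the names are illustrative):

```python
import numpy as np
from math import factorial

def magic(m, n, p, area):
    """Right-hand side of (3.10.1)."""
    return (factorial(m) * factorial(n) * factorial(p)
            / factorial(m + n + p + 2)) * 2.0 * area

# reference triangle with |T| = 1/2; barycentric coordinates at edge midpoints
mids = np.array([[0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
lam = np.column_stack([1.0 - mids[:, 0] - mids[:, 1], mids[:, 0], mids[:, 1]])
area = 0.5
Q = lambda g: area / 3.0 * np.sum(g)       # edge-midpoint rule, weights |T|/3

int_11 = Q(lam[:, 0] * lam[:, 0])          # integral of phi_1^2
int_12 = Q(lam[:, 0] * lam[:, 1])          # integral of phi_1 * phi_2
```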
Example: [Figure: a triangulation with nodes N1 = (0, 0), N2 = (3/4, 0), N3 = (1, 0), N4 = (1, 1), N5 = (0, 1) and triangles T1, T2, T3]
    M = ∫_Ω ( φi φj )_{i,j=1}^5 = Σ_{T ∈ Th} ∫_T ( φi φj )_{i,j=1}^5 =: M^{T1} + M^{T2} + M^{T3},

where each M^{Tk} has non-zero entries only in the rows and columns of the three nodes of Tk (for instance, M^{T1} is non-zero only in rows/columns 1, 4, 5, M^{T2} only in rows/columns 1, 2, 4, and M^{T3} only in rows/columns 2, 3, 4).
• for T ∈ Th:

      M^T = (|T|/12) [ 2 1 1 ]
                     [ 1 2 1 ]
                     [ 1 1 2 ]

  end
Chapter 4
Lecture 7, 8 and 9
Finite Element Method in 2D
Summary:
• weak formulation,
• error estimation,
• implementation,
• adaptivity.
Let f = u · v for two sufficiently regular (i.e. H^1(Ω)) functions u and v. Then we obtain:
    ∫_Ω (∂u/∂xi) v dx = −∫_Ω u (∂v/∂xi) dx + ∫_{∂Ω} u · v · ni dσ(x),    i = 1, ..., d.
Implementation
    (A + R) ξ = b,    where bj := ∫_Ω f φj    for 1 ≤ j ≤ N.
We can again identify the local contributions that come from a particular triangle T:

    A^T_{i,j} := ∫_T ∇φi · ∇φj,    for i, j = 1, 2, 3.
[Figure: a triangle T with corners (x1, y1), (x2, y2), (x3, y3)]
We have φi(x, y) = ai + bi x + ci y for i = 1, 2, 3. Let us denote αi := (ai, bi, ci). We know

    φi(Nj) = { 1,  i = j,
               0,  i ≠ j,

which gives us (e.g. for i = 1)

    B α1 = [ 1 x1 y1 ] [ a1 ]   [ 1 ]
           [ 1 x2 y2 ] [ b1 ] = [ 0 ] = e1.
           [ 1 x3 y3 ] [ c1 ]   [ 0 ]
In general we have Bαi = ei for i = 1, 2, 3. Furthermore, we obviously have

    ∇φi = (bi, ci)^T,

which gives

    A^T_{i,j} = (bi bj + ci cj)|T|    for i, j = 1, 2, 3.
Algorithm
Let N be the number of nodes and M the number of triangles.
1. Allocate memory for an N × N matrix A.
2. For K ∈ Th:
   Compute ∇φi^K = (bi, ci)^K for i = 1, 2, 3. Set

       A^K := [ b1² + c1²       b1 b2 + c1 c2   b1 b3 + c1 c3 ]
              [ b2 b1 + c2 c1   b2² + c2²       b2 b3 + c2 c3 ] |K|.
              [ b3 b1 + c3 c1   b3 b2 + c3 c2   b3² + c3²     ]
end.
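The local computation of A^K can be written compactly; a Python/numpy sketch (the helper name is illustrative), checked on the reference triangle:

```python
import numpy as np

def local_stiffness(x, y):
    """A^K_{ij} = (b_i b_j + c_i c_j)|K| for a triangle with corners (x_i, y_i)."""
    B = np.column_stack([np.ones(3), x, y])
    area = 0.5 * abs(np.linalg.det(B))        # det B = 2|K|
    coeff = np.linalg.solve(B, np.eye(3))     # column i = (a_i, b_i, c_i)
    grads = coeff[1:, :]                      # column i = grad phi_i = (b_i, c_i)
    return (grads.T @ grads) * area

AK = local_stiffness(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

A useful sanity check: the rows of every local stiffness matrix sum to zero, because the gradients of the three basis functions sum to zero (partition of unity).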
Let Γh^out denote the set of boundary edges of the triangulation, i.e.

    Γh^out := {E | E = T ∩ ∂Ω, for T ∈ Th}.

In Matlab structure, the edge matrix is

    e = [ n1^α n2^α n3^α ··· nJ^α ]
        [ n1^β n2^β n3^β ··· nJ^β ]
        [ ⋮ ]

where column j lists the two endpoint nodes nj^α, nj^β of the j-th boundary edge (the remaining rows contain the edge parametrization and boundary-segment labels).
Algorithm
For j = 1, ..., J:
end.
We use a one-point quadrature rule for approximating the integral. We obtain for
T ∈ Th :
    bj^T = ∫_T f φj ≈ f(Nj) |T|/3,    for j = 1, 2, 3.
Algorithm
For i = 1, ..., M:
    Compute |K| for the i-th triangle K.
    b(t(1:3, i)) = b(t(1:3, i)) + [ f(N1^i) ; f(N2^i) ; f(N3^i) ] |K|/3
end.
Given A, R and b, we can solve

    (A + R)ξ = b

and write uh = Σ_{i=1}^N ξi φi.
    −Δu = f  in Ω,    (4.3.1)
       u = g  on ∂Ω.

    Vg := {v ∈ V | v|∂Ω = g}.
Assume that we have N nodes of which J are boundary nodes; then the matrix form of the FEM problem reads:

    [ A_{0,0}  A_{0,g} ] [ ξ0 ]   [ b0 ]
    [ A_{g,0}  A_{g,g} ] [ ξg ] = [ bg ]

with A_{0,0} ∈ R^{(N−J)×(N−J)}, A_{g,g} ∈ R^{J×J}, A_{0,g} ∈ R^{(N−J)×J} and A_{g,0} ∈ R^{J×(N−J)}. Note that ξg ∈ R^J is known (it contains the values of g in the boundary nodes). We can therefore solve the simplified problem reading: find ξ0 ∈ R^{N−J} with

    A_{0,0} ξ0 = b0 − A_{0,g} ξg.
    A = A(int, int);
    b = b(int) − A(int, bnd) * g;
    U(int) = A \ b;  U(bnd) = g;
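The same condensation in Python (a sketch; `bnd_idx` and the tiny 1D Laplacian test system are illustrative stand-ins for the 2D arrays):

```python
import numpy as np

def solve_dirichlet(A, b, bnd_idx, g_vals):
    """Solve A U = b with prescribed boundary values U[bnd_idx] = g_vals."""
    N = A.shape[0]
    int_idx = np.setdiff1d(np.arange(N), bnd_idx)
    U = np.zeros(N)
    U[bnd_idx] = g_vals
    # move the known boundary contributions to the right-hand side
    rhs = b[int_idx] - A[np.ix_(int_idx, bnd_idx)] @ g_vals
    U[int_idx] = np.linalg.solve(A[np.ix_(int_idx, int_idx)], rhs)
    return U

# tiny 1D check: discrete Laplacian, f = 0, u = 1 and 3 at the ends => u linear
h = 0.25
A = (1.0 / h) * (2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1))
U = solve_dirichlet(A, np.zeros(5), np.array([0, 4]), np.array([1.0, 3.0]))
```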
    −Δu = f       in Ω,    (4.4.1)
    ∇u · n = g    on ∂Ω.
to ensure that a solution can exist. Note that if u exists, it is only determined up
to a constant, since u + c is a solution if u is a solution and c ∈ R. We therefore
define the solution space:
    Ṽ := {v ∈ V | ∫_Ω v(x) dx = 0}.
This space guarantees a unique weak solution (with the weak formulation as usual, with test functions in V). Numerically the zero-average constraint can be realized via so-called Lagrange multipliers.
−∇ · (a∇u) + b · ∇u + cu = f in Ω, (4.5.1)
u=0 on ∂Ω.
    Σ_{j=1}^N ξj ( ∫_Ω a∇φj · ∇φi + ∫_Ω (b · ∇φj)φi + ∫_Ω c φj · φi ) = ∫_Ω f φi    for 1 ≤ i ≤ N.
We solve
(A + B + C)ξ = F.
    −Δu = λu      in Ω,    (4.6.1)
    ∇u · n = 0    on ∂Ω.
Aξ = ΛM ξ,
    −Δu = f  in Ω,
       u = 0  on ∂Ω.
    Σ_{i=1}^N ξi ∫_Ω ∇φi · ∇φj dx = ∫_Ω f φj    for 1 ≤ j ≤ N.
Aξ = b,
Theorem 4.7.1.
The stiffness matrix A is symmetric and positive definite.
Theorem 4.7.2.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approximation. It holds:
    ∫_Ω ∇(u − uh) · ∇vh = 0    for all vh ∈ Vh,0.    (4.7.1)
Theorem 4.7.3.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approximation. It holds:

    |||u − uh||| ≤ |||u − vh|||    for all vh ∈ Vh,0.
Theorem 4.7.4.
Let u ∈ V0 denote the weak solution and uh ∈ Vh,0 the corresponding FEM approximation. If u ∈ C^2(Ω) it holds:
    |||u − uh|||² ≤ C Σ_{T ∈ Th} hT² ‖D²u‖²_{L2(T)},
Proof. Let vh = Π(u) ∈ Vh,0 in Theorem 4.7.3. By the interpolation estimates for Π we obtain

    |||u − uh|||² ≤ |||u − Π(u)|||² = Σ_{T ∈ Th} ‖D(u − Π(u))‖²_{L2(T)} ≤ C Σ_{T ∈ Th} hT² ‖D²u‖²_{L2(T)}.
Preliminary results:
Trace inequality:
    ‖v − Π(v)‖_{L2(∂T)} ≤ C hT^{1/2} ‖Dv‖_{L2(ωT)},
[Figure: two neighboring triangles T1 and T2 with unit normals nT1, nT2 on the common edge, across which ∇vh jumps]
Using (4.1.1) we obtain

    = Σ_{T ∈ Th} ∫_T −Δe (e − Π(e)) + Σ_{T ∈ Th} ∫_{∂T\∂Ω} (n · ∇e)(e − Π(e)) dσ(x)
    = Σ_{T ∈ Th} ∫_T (f + Δuh)(e − Π(e)) + Σ_{T ∈ Th} ∫_{∂T\∂Ω} (n · ∇e)(e − Π(e)) dσ(x).
Note that

    Σ_{T ∈ Th} ∫_{∂T\∂Ω} (n · ∇u)(e − Π(e)) dσ(x) = 0
    ≤ (1/2) Σ_{T ∈ Th} ‖[n · ∇uh]‖_{L2(∂T\∂Ω)} ‖e − Π(e)‖_{L2(∂T\∂Ω)}
    ≤ C Σ_{T ∈ Th} (1/2) ‖[n · ∇uh]‖_{L2(∂T\∂Ω)} hT^{1/2} ‖∇e‖_{L2(ωT)}
    ≤ C ( Σ_{T ∈ Th} (hT/4) ‖[n · ∇uh]‖²_{L2(∂T\∂Ω)} )^{1/2} |||e|||.
Adaptive algorithm
We want to refine the mesh where RT (uh ) is large. Main difficulties are:
• no hanging nodes,
• ’good quality’ triangulations (e.g. avoid very small angles in the triangles).
• regular refinement.
Alg.:
5. Mark elements with error over γ · m, where 0 < γ < 1 is a fixed parameter.
7. Return to step 2) (stop when N becomes too large or when the error Σ_{T ∈ Th} RT(uh)² is small enough).
Matlab syntax
−∇ · (a∇u) + cu = f.
Step 3: indicator = pdejmps(p, t, a, c, f, uh, 1, 1, 1), where the last three numbers stand for the weighting constants c1 and c2 and the exponent m in hT^m.
Chapter 5
Lecture 10
Time dependent problems
Summary:
• Heat equation
– Weak form
– FEM
– Implementation
• Wave equation
– Weak form
– FEM
– Implementation
This yields

    M (ξ(tn) − ξ(tn−1)) + ∫_{tn−1}^{tn} A(t)ξ(t) dt = ∫_{tn−1}^{tn} f(t) dt.
Backward Euler
or equivalently

    M (ξn − ξn−1)/kn + An ξn = fn
Algorithm
solve (M + kn An )ξ n = M ξ n−1 + kn fn ,
end.
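A sketch of the backward Euler loop for the 1D heat equation u̇ − u'' = 0 on (0, 1) with homogeneous Dirichlet BC, using the 1D mass and stiffness matrices assembled as in Chapter 2 (here A is time-independent, so An = A; the mesh sizes are illustrative):

```python
import numpy as np

# uniform mesh on (0, 1); interior nodes only (homogeneous Dirichlet BC)
N, T_end, L = 20, 0.1, 50
h, k = 1.0 / N, 0.1 / 50
x = np.linspace(0.0, 1.0, N + 1)[1:-1]

# 1D mass and stiffness matrices on the interior nodes (uniform h)
M = h / 6.0 * (4.0 * np.eye(N - 1) + np.eye(N - 1, k=1) + np.eye(N - 1, k=-1))
A = 1.0 / h * (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1))

xi = np.sin(np.pi * x)                    # nodal values of u(x, 0)
for n in range(L):
    # one backward Euler step: (M + k A) xi^n = M xi^{n-1} + k b_n, here b = 0
    xi = np.linalg.solve(M + k * A, M @ xi)

u_exact = np.exp(-np.pi**2 * T_end) * np.sin(np.pi * x)
```

For this initial value the exact solution is u(x, t) = exp(−π²t) sin(πx), so the discrete solution at t = 0.1 can be compared against it directly.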
Alternative Schemes
for 0 < t < T. The weak form therefore reads: find u(t) ∈ V0 s.t. for all t > 0

    ∫_Ω u̇ v + ∫_Ω ∇u · ∇v = ∫_Ω f v    for all v ∈ V0.
FEM
Note that

    ξ(t) = ( ξ1(t), ..., ξN(t) )^T = ( uh(x1, t), ..., uh(xN, t) )^T,
    bi := ∫_Ω f φi    for i = 1, ..., N,
Algorithm:
Assume that only b is time-dependent (if it is not time-dependent, move its computation outside of the loop over n).
• Compute M and A.
• Let ξ0 = ξ(0).
• For n = 1, ..., L
      compute bn,
      solve (M + kn A)ξn = M ξn−1 + kn bn,
      (uh)n := Σ_{i=1}^N ξi^n φi,
  end.
Weak form
Let V := {v | ‖v‖ + ‖∇v‖ < ∞}. Multiplying the wave equation with v ∈ V and integrating over Ω yields with Green's formula

    ∫_Ω f v = ∫_Ω ü v − ∫_Ω ∇ · (ε∇u) v = ∫_Ω ü v + ∫_Ω ε∇u · ∇v − ∫_{∂Ω} (ε∇u · n) v.
for 0 < t < T . The weak form therefore reads: find u(t) ∈ V s.t. for all t > 0
    ∫_Ω ü v + ∫_Ω ∇u · ∇v = ∫_Ω f v    for all v ∈ V.
FEM - semi-discrete
This leads to

    Σ_{i=1}^N ξ̈i ∫_Ω φi φj + Σ_{i=1}^N ξi ∫_Ω ∇φi · ∇φj = ∫_Ω f φj    for 1 ≤ j ≤ N.
    M (ξn − ξn−1)/kn = M (ηn + ηn−1)/2,
    M (ηn − ηn−1)/kn + (An ξn + An−1 ξn−1)/2 = (bn + bn−1)/2.

In matrix form we get:

    [ M            −(kn/2) M ] [ ξn ]   [ M              (kn/2) M ] [ ξn−1 ]   [ 0                  ]
    [ (kn/2) An     M        ] [ ηn ] = [ −(kn/2) An−1   M        ] [ ηn−1 ] + [ (kn/2)(bn + bn−1)  ],

where the matrix on the left is denoted Wn and the whole right-hand side wn.
• Compute M .
• For n = 1, ..., L
      compute An and bn,
      solve

          Wn [ ξn ; ηn ] = wn,

      (uh)n := Σ_{i=1}^N ξi^n φi,
  end.
where C is independent of t