

SIAM J. NUMER. ANAL. © 1994 Society for Industrial and Applied Mathematics
Vol. 1, No. 1, pp. 000–000, September 1994

A CONVERGENT ADAPTIVE ALGORITHM FOR POISSON'S EQUATION*

WILLY DÖRFLER**

Abstract. We construct a converging adaptive algorithm for linear elements applied to Pois-
son’s equation in two space dimensions. Starting from a macro triangulation, we describe how to
construct an initial triangulation from a priori information. Then we use a posteriori error estimators
to get a sequence of refined triangulations and approximate solutions. It is proved that the error,
measured in the energy norm, decreases at a constant rate in each step until a prescribed error bound
is reached. Extensions to higher order elements in two space dimensions and numerical results are
included.

Key words. Adaptive mesh refinement, a posteriori error estimator, Poisson’s equation.

AMS subject classifications. 65N15, 65N30, 65N50

0. Introduction. In order to obtain approximate solutions to partial differential


equations, adaptive strategies have become very popular. The aim is to compute a
numerical solution in such a way that the error (i.e. the difference between the exact
and the approximate solution measured in a suitable norm) is of a prescribed accuracy
and the number of degrees of freedom is as small as possible.
For simplicity, we assume that Ω ⊂ IR2 is a bounded domain with a polygonal
boundary. A triangulation or mesh is a decomposition of Ω into disjoint triangles. At
first we assume that a coarse triangulation is given which we will refer to as macro
triangulation. Having established a finite element space, we iterate the procedure
... – Solve – Estimate – Refine – ...
until a stopping criterion is satisfied. We refer to this procedure as SER-algorithm.
Macro triangulations may be created by hand or automatically. Here, we simply
assume that we are given one. As we will see later in an example (3.), such a trian-
gulation may be too coarse to start the SER-algorithm. Instead, we have to create
a so called initial triangulation that is “fine enough” to start with. We will give a
mathematical formulation of the quoted expression in terms of the given data and an
algorithm to obtain such a triangulation approximately (5.1).
Here, we will not discuss the item “solve”. We assume that we can obtain exact
solutions of the finite dimensional problems.
In order to "estimate", we assume that we have a subroutine that returns estimated local errors. As an example, we may have a routine that returns a number ηT ∈ IR+ for every triangle T of the triangulation. ηT should depend only on the actual numerical solution and the numerical data. The numbers ηT also provide a measure for the total error via η_T² := ∑_{T∈T} η_T², the sum running over all triangles of the triangulation T. We assume that ηT bounds the true error by inequalities in both directions, where the constants in these inequalities depend only on properties of the triangulation (and are of moderate size). The upper estimate shows that ηT can be used as a reliable stopping criterion for the algorithm, while the lower estimate suggests that an unnecessary amount of work can be avoided.
Different classes of such error estimators are known [BR] [BW] [ZKGB].

*Received by the editors.


∗∗ Institut für Angewandte Mathematik, Universität Freiburg, Hermann-Herder-Str. 10, D-

79104 Freiburg.


Given the set of local error estimates for all T , we now need an algorithm that
uses this information to obtain a new triangulation. This task splits in two parts: the
problem of selecting the triangles to be refined (marking strategy) and the construction
of the new triangulation (refinement strategy).
The way we select (or mark) these triangles influences the efficiency of our procedure. Let for example η_max := max_T η_T, θ ∈ (0, 1), and consider the (widely used) criterion

(M) mark T if η_T ≥ θ η_max.

If θ ≈ 0, we will refine globally. Then we will not iterate the SER-algorithm very often, but we will probably end up with an unnecessarily large number of degrees of freedom. On the other hand, we may let θ ≈ 1. If the error is not equally distributed, we will mark only a very small number of triangles. So we expect a large number of iterations but a "nearly optimal" number of degrees of freedom (assuming that η_T is a good estimate). Since iterations are costly, we think that this strategy is not very efficient.
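To make the effect of θ in (M) concrete, here is a minimal Python sketch of this criterion; the dictionary interface for the local indicators is an illustrative assumption, not part of the procedure described here.

# Minimal sketch of the maximum criterion (M); the dict interface is an illustrative assumption.
def mark_maximum(eta, theta):
    """eta: local indicators eta_T keyed by triangle; theta in (0, 1)."""
    eta_max = max(eta.values())
    return {T for T, e in eta.items() if e >= theta * eta_max}

# Small theta marks almost every triangle, large theta only the largest indicators.
indicators = {"T1": 0.9, "T2": 0.5, "T3": 0.2}
print(mark_maximum(indicators, 0.2))   # all three triangles are marked
print(mark_maximum(indicators, 0.9))   # only "T1" is marked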
Different proposals for marking strategies can be found in [BR] [Ja] [Jo]. We
present a new one in section 4.2. Our goal is to obtain uniform convergence of the SER-algorithm while achieving this rate with as few triangles to be refined as possible.
We will also make some remarks on refinement strategies and how they have to
be applied to obtain our results.
1. Notations.
1.1. Sobolev spaces. Let G ⊂ IR2 be a bounded domain with Lipschitz-conti-
nuous boundary. For m > 0 and p ∈ [1, ∞] let H m,p (G) and H̊ m,p (G) denote the well
known Sobolev spaces with corresponding norms || . ||m,p;G and semi-norms [[ . ]]m,p;G .
For convenience let H 0,p (G) := Lp (G). We write H m (G) and || . ||m;G if p = 2.
In this paper we will refer to the following norm inequalities: There is an optimal
constant CP (G), called Poincaré’s constant, such that for all u ∈ H̊ 1 (G)

||u||0;G ≤ CP (G)||∇u||0;G .

If we set

d_G := diam(G) and Ĝ := (1/d_G) G,

then by a scaling argument we see that C_P(G) = C_P(Ĝ) d_G.
There is an optimal constant C_∂(G) such that for all u ∈ H¹(G)

||u||_{0;∂G} ≤ C_∂(G) ( ||∇u||²_{0;G} + (1/d_G²) ||u||²_{0;G} )^{1/2}.

Again by a scaling argument we see that C_∂(G) = C_∂(Ĝ) d_G^{1/2}.
Let Ω ⊂ IR² be a bounded domain with polygonal boundary. For abbreviation we will write || . ||_{m,p} for norms defined on Ω and C_P for C_P(Ω).


1.2. Triangulations. A (conforming) triangulation T of Ω is a decomposition of Ω into N_T open triangles T_j, such that
− Ω̄ = ∪_{j=1}^{N_T} T̄_j,
− for i ≠ j the set T̄_i ∩ T̄_j is empty or consists of a vertex or a common edge.

Let T0 be a triangulation of Ω. If we decompose a subset of triangles of T0 into


subtriangles such that the resulting set of triangles is again a triangulation of Ω, we
call this a refinement of T0 . We may denote this triangulation by T1 . In this way we
can construct a sequence of triangulations {Tk }k≥0 such that Tk+1 is a refinement of
Tk .
On each triangulation Tk let Vk be a finite element space. Let the basis functions
of Vk be given by
{ψqk }q∈Nk .
Here, Nk is the set of nodes of Tk . These are the points that define the basis functions
by
ψqk (q ′ ) := cq δqq′ ,
where cq is an appropriate scaling factor. By Nk∂ we denote the set Nk ∩ ∂Ω and by

V̊k := {ϕ ∈ Vk : ϕ(q) = 0, q ∈ Nk∂ }

the subspace of Vk of functions with vanishing boundary values. Furthermore we


assume these spaces to be nested in the following way

V0 ⊂ . . . ⊂ Vk ⊂ Vk+1 ⊂ . . .

and that the inclusions


Vk ⊂ H 1 (Ω), V̊k ⊂ H̊ 1 (Ω)
hold for all k ≥ 0.
We impose the following geometrical condition on these triangulations. We assume that all triangles are regular in shape: the quotient d_T/ρ_T, ρ_T being the radius of the largest ball contained in T, is bounded independently of T and k. A consequence is that locally triangles are of comparable size, uniformly in k. That is, there is a constant σ_0, independent of k, such that for all T, T' ∈ T_k having a common edge

d_T / d_{T'} ≤ σ_0.

For T ∈ T_k we define the set of neighbours of T by

ω_T := { T' ∈ T_k : T̄ ∩ T̄' ≠ ∅ }.

Another consequence of the foregoing assumption is that we have for all T ∈ T_k

d_{ω_T} / d_T ≤ σ_1,

and that each such T belongs to at most δ sets ω_{T'}, which implies that

( ∑_{T∈T_k} ||v||²_{0;ω_T} )^{1/2} ≤ √δ ||v||_0   ∀ v ∈ L²(Ω).

Both constants δ, σ_1 do not depend on k.



Non-uniformly refined meshes will require mesh dependent norms. Let h_k be the piecewise constant function with h_k|_T := d_T for all T ∈ T_k. Then for each v ∈ L²(Ω)

||h_k v||²_0 = ∑_{T∈T_k} d_T² ||v||²_{0;T}.

1.3. Projections. Let P_k^0 denote the L²-projection from L²(Ω) onto V_k and P_k^1 the projection from H̊¹(Ω) onto V̊_k constructed in [Cl]. It is also shown there that there are constants C_0(ω̂_T), C_{1,m}(ω̂_T) such that for all v ∈ H̊¹(Ω), respectively H²(Ω), and all T ∈ T_k

||v − P_k^1 v||_{0;T} ≤ C_0(ω̂_T) d_{ω_T} ||∇v||_{0;ω_T},

||∇(v − P_k^1 v)||_{0;T} ≤ C_{1,m}(ω̂_T) d_{ω_T}^m ||∇^{1+m} v||_{0;ω_T}, for m = 0, 1.

With respect to our assumption in 1.2 the constants

Ĉ_0 := √δ σ_1 max_{k≥0} max_{T∈T_k} C_0(ω̂_T),  Ĉ_{1,m} := √δ σ_1^m max_{k≥0} max_{T∈T_k} C_{1,m}(ω̂_T), for m = 0, 1,

are well defined and we have

||h_k^{-1}(v − P_k^1 v)||_0 ≤ Ĉ_0 ||∇v||_0,

||∇(v − P_k^1 v)||_0 ≤ Ĉ_{1,m} ||h_k^m ∇^{1+m} v||_0, for m = 0, 1.

2. The model problem. As a model problem, we consider the partial differential equation

−∆u = f in Ω,
u = g^∂ on ∂Ω.

f ∈ L²(Ω) and g^∂ ∈ H^{1/2}(∂Ω) are given functions. In the weak formulation this problem reads as follows:

find u ∈ H¹(Ω) with u|_{∂Ω} = g^∂ and

(P) ∫_Ω ∇u · ∇ϕ = ∫_Ω f ϕ  ∀ϕ ∈ H̊¹(Ω).

Analogously, we define the discretized problems: let f_k and g_k^∂ be suitable approximations of f and g^∂ respectively. Then we seek the solutions of

find u_k ∈ V_k with u_k|_{∂Ω} = g_k^∂ and

(P_k) ∫_Ω ∇u_k · ∇ϕ_k = ∫_Ω f_k ϕ_k  ∀ϕ_k ∈ V̊_k.

It is easily proved that both problems are uniquely solvable by the Riesz representation theorem. Especially for (P) with f ≡ 0, the solution g^∆ satisfies [LM; p. 195]

||∇g^∆||_0 ≤ Ĉ_∆ [[g^∂]]_{1/2;∂Ω}.

We immediately see that Ĉ_∆ is scale invariant.



Now we have to define the approximations f_k and g_k^∂. In view of (P_k) it would be desirable to compute ∫_Ω f ϕ_k exactly. That means we would take f_k to be the L²(Ω)-projection of f onto V_k. But in general f and g^∂ are complicated functions that cannot be integrated exactly. We make the following assumptions concerning f and g^∂:
Assumption. Let f ∈ C⁰(Ω̄) and g^∂ ∈ H¹(∂Ω). Both functions are only available as routines x ↦ f(x) and x ↦ g^∂(x).
Here, we define f_k to be the element of V_k that is given by

f_k(q) = f(q)  ∀q ∈ N_k

and let g_k^∂ be the restriction to ∂Ω of a function in V_k fulfilling

g_k^∂(q) = g^∂(q)  ∀q ∈ N_k^∂.

3. Example. Let Ω := [0, 1]², f(x, y) := sin(8πx) and g^∂(x, y) := 0. Let the nodes of the macro triangulation be a subset of

N_0 = { (i/2, j/2) : 0 ≤ i, j ≤ 2 }.

This implies f_0 = 0 (since sin(8π · i/2) = sin(4πi) = 0 at every node) and thus we will compute u_0 = 0. An error estimator T ↦ η_T that only depends on f_0 and u_0 returns η_T = 0 for all T ∈ T_0, and the procedure will stop, assuming an exact solution. Note that even two global refinements of T_0 (by partition into 4) will not give a better result.
Such a problem may also occur locally. Let f_0 = 0 in a subregion of Ω, although f ≠ 0 there. If u_0 has small oscillations there, it may happen that this subregion will not be refined in the following iterations either.
The reason for this unsatisfactory behaviour is simply that f is badly represented on T_0. What we are lacking is a quantitative criterion that guarantees that f and g^∂ are well approximated on a given mesh.
4. Error reduction for linear finite elements.
4.1. A posteriori error estimates. In this section, we restrict ourselves to
linear finite elements. We will discuss the case of higher order elements in section 6.
The basis functions ψqk are assumed to be normalized by cq = 1 in 1.2. Due to
the shape regularity and the scale–invariance of the H̊ 1 -norm, there is a constant σψ ,
independent of q and k, such that

||∇ψqk ||0 ≤ σψ .

We now want to study how the error behaves if we change from a given triangu-
lation Tk to a refined triangulation Tk+1 . In this section we will use subscripts H, h
instead of k, k + 1.
To simplify the treatment of the boundary data, we introduce prolongations into Ω defined by

g_H(q) := g_H^∂(q) = g^∂(q) for q ∈ N_H^∂ and g_H(q) := 0 else,

and

g_h(q) := g^∂(q) for q ∈ N_h^∂ \ N_H^∂ and g_h(q) := g_H(q) else.

That is, g_h − g_H is a linear combination of the basis functions ψ_q^h with q ∈ N_h^∂ \ N_H^∂.
Let u, u_H and u_h be the solutions of the respective problems (P), (P_H) and (P_h). e_H := u − u_H and e_h := u − u_h are the corresponding error functions. Because of u_h − u_H − (g_h − g_H) ∈ V̊_h we have the identity

∫_Ω ∇e_h · ∇(u_h − u_H − (g_h − g_H)) = ∫_Ω (f − f_h) (u_h − u_H − (g_h − g_H)),

which leads to

(∗)  ||∇e_H||²_0 = ||∇e_h||²_0 + ||∇(u_h − u_H)||²_0 + 2 ∫_Ω ∇e_h · ∇(g_h − g_H) + 2 ∫_Ω (f − f_h) (u_h − u_H − (g_h − g_H)).

Note that the last two terms disappear if f_h = P_h^0 f and g_h = g_H.


To prove an error reduction, we obviously have to impose a condition on the
approximation of the data, in the end an assumption on the initial triangulation. It
will turn out that the error from the approximation of the data should be small com-
pared to the error of the actual numerical solution. We intend to stop the numerical
computations if ||∇e_H||_0 is of size ϵ for a given ϵ > 0. ϵ is called the prescribed error bound.
For this, we make the following definition:
Definition 1. Let T_h be any triangulation, f_h and g_h^∂ approximations of the data f and g^∂. Let ϵ be the prescribed error bound. Then the triangulation has fineness µ with respect to ϵ (and positive weights w_1, w_2, w_3), if

max{ w_1 ||f − f_{h'}||_0 , w_2 ||h' f_{h'}||_0 , w_3 [[g^∂ − g_{h'}^∂]]_{1/2;∂Ω} } ≤ µϵ

for any refinement T_{h'} of T_h.


The weights are introduced to get a better balance between the different error
terms. We will choose specific values later.
Lemma 1. Assume that T_H has fineness µ with respect to ϵ and weights w_1 := C_P, w_2 and w_3 ≥ σ_ψ. Let T_h be any refinement of T_H. If, for some constant C_e > 0,

||∇e_H||_0 ≥ ϵ/C_e,

then the estimate

||∇e_H||²_0 ≥ ||∇e_h||²_0 + (1/2) ||∇(u_h − u_H)||²_0 − 4C_e µ (1 + 6C_e µ) ||∇e_H||²_0

holds true.
Proof. Using Young's inequality 2ab ≤ (1/4)a² + 4b² for all a, b ∈ IR, the estimate is obtained from (∗) as follows:

||∇e_H||²_0 − ||∇e_h||²_0
  ≥ ||∇(u_h − u_H)||²_0 − 2||∇(g_h − g_H)||_0 ( ||∇e_H||_0 + ||∇(u_h − u_H)||_0 )
    − 2C_P ||f − f_h||_0 ( ||∇(u_h − u_H)||_0 + ||∇(g_h − g_H)||_0 )
  ≥ (1/2) ||∇(u_h − u_H)||²_0 − 4||∇(g_h − g_H)||²_0 − 4C_P² ||f − f_h||²_0
    − 2||∇e_H||_0 ||∇(g_h − g_H)||_0 − 2C_P ||f − f_h||_0 ||∇(g_h − g_H)||_0.
For a linear function v on an interval [a, b], the H^{1/2}-semi-norm is

[[v]]²_{1/2;[a,b]} := ∫_a^b ∫_a^b |v(x) − v(y)|² / |x − y|² dx dy = |v(b) − v(a)|².
Thus (with E_q denoting the edge that has q as midpoint):

||∇(g_h − g_H)||_0 ≤ 2 ( ∑_{q ∈ N_h^∂\N_H^∂} |(g_h^∂ − g_H^∂)(q)|² ||∇ψ_q^h||²_0 )^{1/2}
  ≤ σ_ψ ( ∑_{q ∈ N_h^∂\N_H^∂} ∫_{E_q} ∫_{∂Ω} |(g_h^∂ − g_H^∂)(x) − (g_h^∂ − g_H^∂)(y)|² / |x − y|² )^{1/2}
  = σ_ψ [[g_h^∂ − g_H^∂]]_{1/2;∂Ω} ≤ σ_ψ ( [[g^∂ − g_H^∂]]_{1/2;∂Ω} + [[g^∂ − g_h^∂]]_{1/2;∂Ω} )
  ≤ 2µϵ,

and the result is readily obtained.
Now we seek an estimate for the difference ||∇(u_h − u_H)||_0 of the form

||∇(u_h − u_H)||_0 ≥ c_0 ||∇e_H||_0.

Obviously, c_0 depends on the marking strategy, and in order to get a uniform convergence factor, this constant should be independent of the mesh size.
At this point, we need some additional notation. In the following we consider
refinement strategies that introduce new vertices at the midpoints of edges.
Assume that T_H is given and let E_H be the set of edges, E_H^◦ and E_H^∂ the subsets of those in the interior and on the boundary of Ω respectively. By R_{H/2} (R_{H/2}^◦, R_{H/2}^∂) we denote the set of midpoints of the edges in E_H (E_H^◦, E_H^∂).
Let R_h be a subset of points in R_{H/2}, and let T_h be a refinement of T_H having (at least) the additional nodes R_h. For v_H ∈ V̊_H and each edge E ∈ E_H^◦, the jump of the normal derivative is denoted by [∂_n v_H]_E or, if there is no ambiguity, simply [∂_n v_H]. Let the sign of [∂_n v_H] be determined by a given (arbitrary) orientation.
We now show that ||∇(uh − uH )||0 can be (essentially) estimated from below in
terms of jumps of normal derivatives of uH on refined edges of TH . In addition, it
is shown that we can (essentially) estimate ||∇eH ||0 from below by the sum of these
quantities over all edges. Note that the terms containing fh resp. f are negligible in
our context (see Definition 1 and Theorem 1).
Lemma 2. Let R_h be any subset of R_{H/2}^◦. For each q ∈ R_h let E_q be the edge in E_H that contains q. Then we have the following estimates

∑_{q ∈ R_h} d_{E_q} ||[∂_n u_H]||²_{0;E_q} ≤ 24σ_ψ² ||∇(u_h − u_H)||²_0 + Ĉ_2² ||h f_h||²_0,

∑_{q ∈ R_{H/2}^◦} d_{E_q} ||[∂_n u_H]||²_{0;E_q} ≤ 24σ_ψ² ||∇e_H||²_0 + Ĉ_2² ||H f||²_0,

where Ĉ_2² := 12(1 + σ_0²).


Proof. For q ∈ R_h let ω_q := T_1 ∪ T_2 be the union of the two triangles in T_H that meet at E_q, hence

∫_{E_q} [∂_n u_H] ψ_q^h = ∫_{ω_q} ∇u_H · ∇ψ_q^h = ∫_{ω_q} ∇(u_H − u_h) · ∇ψ_q^h + ∫_{ω_q} f_h ψ_q^h.

Estimating the second term on the right gives

∫_{ω_q} f_h ψ_q^h ≤ (1/2) (d_{T_1}² + d_{T_2}²)^{1/2} ||f_h||_{0;ω_q} ≤ (1/2) (1 + σ_0²)^{1/2} ||h f_h||_{0;ω_q}.

Squaring and summing over all q yields

∑_{q ∈ R_h} (1/4) d_{E_q} ||[∂_n u_H]||²_{0;E_q} = ∑_{q ∈ R_h} ( ∫_{E_q} [∂_n u_H] ψ_q^h )²
  ≤ 2 ∑_{q ∈ R_h} ( ||∇(u_h − u_H)||²_{0;ω_q} ||∇ψ_q^h||²_0 + (1/2) (1 + σ_0²) ||h f_h||²_{0;ω_q} )
  ≤ 6σ_ψ² ||∇(u_h − u_H)||²_0 + 3(1 + σ_0²) ||h f_h||²_0.

The second inequality is proved analogously with u_h, f_h replaced by u, f.


The following lemma proves an estimate from above for the true error in terms of the jumps of the normal derivatives (compare this to the a priori error estimate presented in 2.). It is a slight generalization of a result presented in [Ve].
Lemma 3. For any triangulation T_H the error estimate

||∇e_H||_0 ≤ Ĉ_3 ( ∑_{E ∈ E_H} d_E ||[∂_n u_H]||²_{0;E} )^{1/2} + Ĉ_0 ||H f_H||_0 + C_P ||f − f_H||_0 + Ĉ_∆ [[g^∂ − g_H^∂]]_{1/2;∂Ω}

holds, where Ĉ_3 is a constant that depends only on the scale-invariance properties of the triangulation and is defined in the proof.
Proof. Let v ∈ H̊¹(Ω) be arbitrary and v_H := P_H^1 v. Then

∫_Ω ∇e_H · ∇v = ∫_Ω f v − ∫_Ω ∇u_H · ∇(v − v_H) − ∫_Ω f_H v_H
  = ∑_{E ∈ E_H} ∫_E [∂_n u_H] (v − v_H) + ∫_Ω (f − f_H) v + ∫_Ω f_H (v − v_H)
  ≤ ∑_{E ∈ E_H} ||[∂_n u_H]||_{0;E} ||v − v_H||_{0;E} + ( C_P ||f − f_H||_0 + Ĉ_0 ||H f_H||_0 ) ||∇v||_0.

To estimate the first term, we note that

||v − v_H||²_{0;∂T} ≤ C_∂(T̂)² d_T ( ||∇(v − v_H)||²_{0;T} + (1/d_T²) ||v − v_H||²_{0;T} )
  ≤ C_∂(T̂)² d_T ( C_{1,0}(ω̂_T)² ||∇v||²_{0;ω_T} + C_0(ω̂_T)² σ_1² ||∇v||²_{0;ω_T} ).

This gives

∑_{E ∈ E_H} ||[∂_n u_H]||_{0;E} ||v − v_H||_{0;E} ≤ (1/2) ∑_{T ∈ T_H} ∑_{E ⊂ ∂T} ||[∂_n u_H]||_{0;E} ||v − v_H||_{0;E}
  ≤ (1/√2) ( ∑_{E ∈ E_H^◦} d_E ||[∂_n u_H]||²_{0;E} )^{1/2} max_{T ∈ T_H} C_∂(T̂) max_{E ⊂ ∂T} (d_T/d_E)^{1/2} (Ĉ_{1,0}² + Ĉ_0²)^{1/2} ||∇v||_0.

We choose Ĉ_3 such that

Ĉ_3 ≥ (1/√2) (Ĉ_{1,0}² + Ĉ_0²)^{1/2} max_{T ∈ T_H} C_∂(T̂) max_{E ⊂ ∂T} (d_T/d_E)^{1/2}.

Now let g^∆ and g_H^∆ be the harmonic extensions of g^∂ and g_H^∂ into Ω, that is, g^∆ − g_H^∆ solves (P) with f ≡ 0 and boundary data g^∂ − g_H^∂. As already stated in 2., it holds

||∇(g^∆ − g_H^∆)||_0 ≤ Ĉ_∆ [[g^∂ − g_H^∂]]_{1/2;∂Ω}.

Estimating first for e_H − (g^∆ − g_H^∆) (note that ∫_Ω ∇(g^∆ − g_H^∆) · ∇v = 0 for all v ∈ H̊¹(Ω)) and then using the triangle inequality gives the required result.
4.2. The error reduction property. To estimate local errors, the results of 4.1 suggest to use

η_E² := d_E ||[∂_n u_H]||²_{0;E}

for E ∈ E_H^◦. Alternatively, we can use error estimators based on triangles T, given by

η_T² := (1/2) ∑_{E ⊂ ∂T\∂Ω} d_E ||[∂_n u_H]||²_{0;E}.
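For linear elements the gradient of u_H is constant on each triangle, so [∂_n u_H] is constant along an interior edge E and η_E² = d_E ||[∂_n u_H]||²_{0;E} = d_E² [∂_n u_H]². The following Python sketch evaluates these quantities from the two elementwise gradients; the argument layout is an assumption made only for illustration.

import numpy as np

# Hedged sketch: for P1 elements the jump of the normal derivative is constant along E,
# so ||[d_n u_H]||_{0;E}^2 = d_E * jump^2 and eta_E^2 = d_E^2 * jump^2.
def edge_indicator_sq(grad_T, grad_Tp, p, q):
    """grad_T, grad_Tp: constant gradients of u_H on the two triangles sharing the
    edge with endpoints p and q (all 2-vectors).  Returns eta_E^2."""
    e = np.asarray(q, float) - np.asarray(p, float)
    d_E = np.linalg.norm(e)
    n = np.array([e[1], -e[0]]) / d_E              # a unit normal; the sign cancels after squaring
    jump = float(np.dot(np.asarray(grad_T, float) - np.asarray(grad_Tp, float), n))
    return d_E ** 2 * jump ** 2

def triangle_indicator_sq(interior_edges):
    """interior_edges: list of (grad_T, grad_Tp, p, q) tuples, one per interior edge of T.
    Returns eta_T^2 = (1/2) * sum of the corresponding eta_E^2."""
    return 0.5 * sum(edge_indicator_sq(*e) for e in interior_edges)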

In the following, we consider triangle based estimates only because they are widely used. The differences are only technical.
If A is any subset of T, we define for convenience η_A² := ∑_{T∈A} η_T². The estimated global error is then given by η_T (i.e. A = T). As we want to apply Lemmas 2 and 3, we need them to hold with η_E replaced by η_T. We impose the following condition on the refinement strategy:

(R) The refinement strategy guarantees to divide all 3 edges of any marked triangle.

If A is a set of marked triangles of T_H and T_h is the resulting refined triangulation, we can estimate η_A² ≤ ∑_{q ∈ R_h} η_{E_q}², and we have the equality ∑_{q ∈ R_{H/2}^◦} η_{E_q}² = η_T². Hence we can formulate Lemmas 2 and 3 in terms of η_A² and η_T².
In order to estimate ||∇(uh − uH )||0 from below by ||∇eH ||0 , we obviously have
to estimate ηA from below by ηT . This motivates the following marking strategy:

(M∗ ) Mark a set A ⊆ TH such that ηA ≥ (1 − θ∗ ) ηT ,

for a fixed given value θ∗ ∈ (0, 1).
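A possible way to realize (M*) is to collect the largest indicators until the required share of η_T² is reached. The following hedged Python sketch does this by sorting (O(N log N)); the cheaper approximate procedure actually used in the SER-algorithm is described in 5.2, and the dictionary interface is an illustrative assumption.

# Hedged sketch of (M*): smallest A with eta_A^2 >= (1 - theta_star)^2 * eta_T^2,
# found greedily after sorting the local indicators.
def mark_doerfler(eta, theta_star):
    goal = (1.0 - theta_star) ** 2 * sum(e * e for e in eta.values())
    marked, acc = set(), 0.0
    for T, e in sorted(eta.items(), key=lambda item: item[1], reverse=True):
        if acc >= goal:
            break
        marked.add(T)
        acc += e * e
    return marked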


The constants CP , Ĉ0 , Ĉ2 , Ĉ3 , Ĉ∆ , σψ appearing in the following theorem have
been defined in the previous sections.
Theorem 1. There are constants κ ∈ (0, 1), µ_* > 0, depending only on Ĉ_0, Ĉ_2, Ĉ_3, σ_ψ, θ_* and d_Ω, such that the following holds: let T_H be a triangulation with fineness µ ≤ µ_* with respect to ϵ and weights

w_1 := C_P, w_2 := max{Ĉ_0, Ĉ_2}, w_3 := max{Ĉ_∆, σ_ψ}.



Create a new triangulation Th using the local error estimator η defined above, the
marking strategy (M∗ ) and a refinement strategy that fulfills (R). Then it holds

||∇eh ||0 ≤ κ ||∇eH ||0

or we have already ηT ≤ ϵ.
Proof. First, let us assume that we have the estimate ||∇eH ||0 > ϵ/Ce for a
positive constant Ce .
Using Lemma 2, the choice of A, and Lemma 3 we prove the following inequalities

||∇(u_h − u_H)||²_0 ≥ (1/(24σ_ψ²)) ( η_A² − (µ_* ϵ)² ) ≥ (1/(24σ_ψ²)) ( (1 − θ_*)² η_T² − (C_e µ_*)² ||∇e_H||²_0 )
  ≥ (1/(24σ_ψ²)) ( (1 − θ_*)²/(2Ĉ_3²) − (9/Ĉ_3² + 1) (C_e µ_*)² ) ||∇e_H||²_0.

Inserting this into Lemma 1 gives

||∇e_H||²_0 ≥ ||∇e_h||²_0 + ( (1 − θ_*)²/(96σ_ψ² Ĉ_3²) − 4C_e µ_* − c̃ (C_e µ_*)² ) ||∇e_H||²_0,

where c̃ is a constant depending on σψ , Ĉ3 . Thus there is a γ > 0, depending on


σψ , Ĉ3 , θ∗ , such that the bracket on the right hand side becomes positive for Ce µ∗ ≤ γ.
Fix such a γ.
Now consider the case ||∇e_H||_0 ≤ ϵ/C_e. From Lemma 2, we now have that

η_T² ≤ ( 24σ_ψ² + (1 + (Ĉ_2/Ĉ_0) H_max)² γ² ) ϵ²/C_e².

Here H_max denotes the maximal diameter of the triangles in T_H, which may be estimated by d_Ω. Now we fix C_e such that η_T ≤ ϵ and finally define µ_* := γ/C_e. With these settings the theorem is proved.
Remark. i. The theorem proves monotone convergence until ηT ≤ ϵ. But from
Lemma 3 we see that then ||∇eH ||0 ≤ (Ĉ3 + 3µ∗ )ϵ holds, hence the true error is also of
size ϵ. Taking Ce = 1, Theorem 1 also holds if “ηT ≤ ϵ” is replaced by “||∇eH ||0 ≤ ϵ”
(with different constants µ∗ , κ).
ii. In [BV] convergence for an adaptive procedure for elliptic equations in one dimen-
sion was shown. However, the authors used the marking strategy (M), exact data
representation, and gave no bounds for the convergence rate.
So far we considered the case of a special choice for η. We now want to generalize
this result to other local error estimators. First, we want to point out that our results
can be carried over to the case of the so called residual error estimator (e.g. [Ve]):

η_T^R := ( η_T² + s_0² ||H f_H||²_0 )^{1/2}

(s_0 a suitable constant). In fact, Lemma 2 and Lemma 3 hold with only minor modifications. We introduce the following class of local error estimators:
Definition 2. A local error estimator η̃ is locally equivalent to the residual error estimator η^R if there are sets S_T', S_T'' ⊂ T (usually subsets of ω_T) and constants C_η̃', C_η̃'' such that for all T ∈ T:

η̃_T ≤ C_η̃' η^R_{S_T'} and η^R_T ≤ C_η̃'' η̃_{S_T''}.

As before, we need a certain assumption on the refinement strategy. Let η̃ be a


local error estimator that meets the conditions stated in the previous definition.

(R’) If T is marked, the refinement strategy guarantees to divide


all 3 edges of any triangle in ST′ .

For example, it was proved in [Ve] that an error estimator based on local Neumann
problems fulfills these requirements with ST′ = {T }.
Theorem 2. Theorem 1 holds also for any error estimator η̃ that is locally equiv-
alent to η R , if the refinement strategy (R’) is used. µ∗ and κ will now depend also on
Cη̃′ , Cη̃′′ .
Proof. Using the assumption on the refinement strategy and the first inequality
in Definition 2, we derive an estimate for η̃A analogous to the first one in Lemma 2.
The remaining estimates of Lemma 2 and Lemma 3 follow immediately.
5. The convergent adaptive algorithm.
5.1. Constructing the initial triangulation. Fix ϵ > 0, µ > 0 and positive
weights w1 , w2 , w3 . Given any macro triangulation, our first aim is to construct an
initial triangulation that has prescribed fineness µ.
As seen earlier, numerical integration is a delicate task. For any deterministic
choice of quadrature points we will have a class of counterexamples like in 3. Note
that an underestimation of the error may also occur locally. But no procedure can
guarantee that the integrals are approximated sufficiently well. To circumvent this
situation practically, we propose a non–deterministic choice of the quadrature points.
We proceed as follows (for more sophisticated methods see [SM]):
– For each T ∈ T choose 3 points p_1, p_2, p_3 at random and let

||f − f_h||²_{0;T} ≈ (|T|/3) ∑_{i=1}^{3} |(f − f_h)(p_i)|².

– For each edge E ⊂ ∂Ω let p_1 be its center and p_2, p_3 be randomly chosen. Compute

[[g^∂ − g_h^∂]]²_{1/2;E} ≈ (1/3) ∑_{i=1}^{3} |(g^∂ − g_h^∂)(p_i)|².
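A minimal Python sketch of these two rules follows; the uniform sampling in a triangle uses the standard reflection trick, and f, f_h, g^∂, g_h^∂ are assumed (for illustration) to be available as callables, in the spirit of the Assumption in 2.

import random

def random_point_in_triangle(a, b, c):
    """Uniformly distributed random point in the triangle with vertices a, b, c."""
    r1, r2 = random.random(), random.random()
    if r1 + r2 > 1.0:                              # reflect back into the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return (a[0] + r1 * (b[0] - a[0]) + r2 * (c[0] - a[0]),
            a[1] + r1 * (b[1] - a[1]) + r2 * (c[1] - a[1]))

def approx_data_error_sq_on_T(f, fh, a, b, c):
    """||f - f_h||_{0;T}^2 ~ (|T|/3) * sum_{i=1..3} |(f - f_h)(p_i)|^2, p_i random in T."""
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    pts = [random_point_in_triangle(a, b, c) for _ in range(3)]
    return area / 3.0 * sum((f(*p) - fh(*p)) ** 2 for p in pts)

def approx_boundary_error_sq_on_E(g, gh, p, q):
    """[[g - g_h]]_{1/2;E}^2 ~ (1/3) * (value at the edge center plus two random points)."""
    def point(t):
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    pts = [point(0.5), point(random.random()), point(random.random())]
    return sum((g(*x) - gh(*x)) ** 2 for x in pts) / 3.0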

Remark. To justify the approximation of the H^{1/2}-seminorm, we refer to the proof of Lemma 1. As there, we can show that

( ∑_{q ∈ N_{h/2}^∂\N_h^∂} |g^∂(q) − g_h^∂(q)|² )^{1/2} ≤ σ_ψ ( [[g^∂ − g_h^∂]]_{1/2;∂Ω} + [[g^∂ − g_{h/2}^∂]]_{1/2;∂Ω} ).

But on the other hand we have

[[g_h^∂ − g_{h/2}^∂]]_{1/2;∂Ω} ≤ Ĉ_tr ||∇(g_h − g_{h/2})||_0 ≤ Ĉ_tr σ_ψ ( ∑_{q ∈ N_{h/2}^∂\N_h^∂} |g_h^∂(q) − g_{h/2}^∂(q)|² )^{1/2},

where Ĉ_tr stems from the continuity of the trace operator [A; p. 216]. If we now have [[g^∂ − g_{h/2}^∂]]_{1/2;∂Ω} ≤ γ [[g^∂ − g_h^∂]]_{1/2;∂Ω} for some γ ∈ (0, 1) (which is the case if g^∂ is sufficiently well approximated), then we can prove inequalities between [[g^∂ − g_h^∂]]_{1/2;∂Ω} and ( ∑_{q ∈ N_{h/2}^∂\N_h^∂} |g^∂(q) − g_h^∂(q)|² )^{1/2} in both directions, with constants that do not depend on the grid size.

In the sequel, we assume that this provides sufficiently good approximations of ||f − f_h||_{0;T} and [[g^∂ − g_h^∂]]_{1/2;E}. Note that ||h f_h||_0 can be computed exactly. To obtain our mesh we take

η_T^0 := max{ w_1 ||f − f_h||_{0;T}, w_2 ||h f_h||_{0;T}, w_3 [[g^∂ − g_h^∂]]_{1/2;∂T∩∂Ω} }

as local error estimates. We claim that the best strategy (in order to obtain as few elements as possible) would be to mark triangles by

mark T if η_T^0 = η_max^0.

As mentioned earlier, this would also maximize the number of steps, hence the number of function evaluations. We propose to use the marking strategy (M) given in 0. or (M*) described in 4.2. One may choose θ (resp. θ*) ∈ (0, 1) depending on the complexity of f (see also the remark below). Taking any refinement strategy we can now construct the new mesh and proceed until η^0_{T_h} ≤ µϵ.
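The loop just described can be summarized in a few lines. The following sketch is only an illustration: the mesh object, the refinement routine and the per-triangle data estimate η_T^0 are hypothetical callables (the latter could be built from the random quadrature sketched above), and mark_doerfler is the sketch given after (M*).

import math

# Hedged driver for constructing the initial triangulation (5.1).
def build_initial_mesh(mesh, estimate_data_error, refine, mu, eps, theta_star=0.5):
    """estimate_data_error(T) should return eta^0_T; refine(mesh, marked) a refined mesh."""
    while True:
        eta0 = {T: estimate_data_error(T) for T in mesh.triangles()}
        if math.sqrt(sum(e * e for e in eta0.values())) <= mu * eps:
            return mesh
        mesh = refine(mesh, mark_doerfler(eta0, theta_star))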
We now want to comment on the choice of µ and the weights w_1, w_2, w_3. In principle, we can estimate the constants that appeared in the theoretical considerations for a given Ω and refinement algorithm. Practically, however, we would like to have an easier choice. For µ, see the end of 5.2. To choose weights, we propose to set all scale-invariant constants equal to one, which gives w_2 = w_3 = 1. But w_1 is not scale-invariant, as is seen in 1.1. Using the fact that C_P(Ω') ≤ C_P(Ω'') for Ω' ⊂ Ω'', we can estimate C_P(Ω) by C_P(R), R being a rectangle containing Ω.
Remark. If the integrals are sufficiently well approximated, we can give a convergence estimate for our strategy. Assume that the integrals appearing in the definition of η_T^0 are of the form ||h^σ f̃||_{0;T}, where f̃ is a function that does (for simplicity) not depend on the grid and σ ∈ (0, 2] depends on the regularity of the approximated function. Let T_H, T_h be the coarse resp. refined mesh. Define g_T^H := ||H^σ f̃||²_{0;T} for T ∈ T_H. Now let T_H' ⊆ T_H be the set of triangles marked by strategy (M*). For each refined triangle T ∈ T_H' there are triangles T_1, . . . , T_l ∈ T_h (for some l ∈ IN), and a number γ ∈ (0, 1) (depending on the refinement strategy), such that γ^{2σ} g_T^H = g_{T_1}^h + . . . + g_{T_l}^h.
Therefore

(η^0_{T_h})² = ∑_{T ∈ T_h} g_T^h = γ^{2σ} ∑_{T ∈ T_H'} g_T^H + ∑_{T ∈ T_H\T_H'} g_T^h
  ≤ γ^{2σ} ∑_{T ∈ T_H} g_T^H + (1 − γ^{2σ}) ∑_{T ∈ T_H\T_H'} g_T^H ≤ max{ γ^{2σ}, (1 − γ^{2σ}) θ_*(2 − θ_*) } (η^0_{T_H})².

For example, if f and g^∂ are sufficiently regular, we have σ = 1, and for the refinement strategy we may assume that γ = 1/2. In this case, the optimal value for θ_* is about 0.18.
5.2. The SER-algorithm. Let T0 be the (already locally refined) initial trian-
gulation. Now we start the procedure to compute a sequence of meshes and approxi-
mate solutions. For some k ≥ 0 let Tk and uk be given.
After having computed the local error estimates, we now face the problem of marking the triangles that have to be refined. For this we want to construct a set A_k ⊂ T_k which is as small as possible while satisfying η_{A_k} ≥ (1 − θ_*) η_{T_k} (as proposed in section 4.2). If we order the set of local errors by size, the optimal set A_k^* is readily obtained. But the ordering would require more than O(N_{T_k}) operations.
Therefore, we look for an approximation of A∗k that can be computed with O(NTk )
operations. We propose the following procedure: compute, say, all ηT and choose
ν ∈ (0, 1) small. Then
Fig. 1. One new node q0 and four hanging nodes q1, . . . , q4.

sum := 0.0;
τ := 1.0;
while (sum < (1 − θ_*)² η²_{T_k}) do
    τ := τ − ν;
    for all T ∈ T_k
        if (T is not marked and η_T > τ η_max)
            mark T; sum := sum + η_T²;
Obviously the algorithm has to stop, because τ will eventually become nonpositive, after which all triangles with η_T > 0 get marked and the loop terminates. By choosing ν we can control how fine the procedure should work. For small ν we expect to obtain good approximations of A_k^*. Note that this algorithm is cheap, because all local errors have already been computed.
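A direct Python transcription of this sweep (illustrative only; η is assumed to be a dictionary of precomputed local indicators η_T):

# Sketch of the sweep above: lower the threshold tau in steps of nu and mark every
# not-yet-marked triangle with eta_T > tau * eta_max until the (1 - theta_star)^2
# share of the total is collected.
def mark_by_sweep(eta, theta_star, nu):
    eta_max = max(eta.values())
    goal = (1.0 - theta_star) ** 2 * sum(e * e for e in eta.values())
    marked, acc, tau = set(), 0.0, 1.0
    while acc < goal:
        tau -= nu
        for T, e in eta.items():
            if T not in marked and e > tau * eta_max:
                marked.add(T)
                acc += e * e
    return marked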
Having marked the triangles, we refine the mesh appropriately and then solve
again.
Theoretically, we can expect from our theorems that for k > 0

||∇ek+1 ||0 ≤ κ||∇ek ||0

with some κ ∈ (0, 1) as long as ηTk > ϵ. Note that the constants appearing there
can be chosen independently of k if the refinement strategy leads to triangulations
satisfying the assumptions stated in 1.2. We stop this algorithm if ηTk ≤ ϵ.
At last, we want to address the problem of choosing µ. We exploit the fact that
the error will decrease if the data is sufficiently well approximated. So, begin with
µ = 1. Now perform the SER-algorithm until it stops due to our stopping criterion or the error does not decrease sufficiently. As an example, this would be the case if ||∇e_{k+1}||_0 / ||∇e_k||_0 ≤ (1/2) ||∇e_1||_0 / ||∇e_0||_0. Then we would replace µ by a
much smaller value and repeat the construction described in 5.1. Iterate this until the
prescribed error is achieved.
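Putting 5.1 and 5.2 together, the whole procedure can be sketched as follows. This is only an illustration: solve, estimate, estimate_data_error and refine are hypothetical callables, the stall test is one possible interpretation of the criterion above (an assumption), and build_initial_mesh and mark_by_sweep are the sketches given earlier.

import math

def adaptive_solve(macro_mesh, solve, estimate, estimate_data_error, refine, eps,
                   theta_star=0.5, nu=0.05):
    mu = 1.0
    while True:
        mesh = build_initial_mesh(macro_mesh, estimate_data_error, refine, mu, eps)  # 5.1
        u = solve(mesh)
        eta = estimate(mesh, u)                     # local indicators eta_T
        first_rate = None
        while True:                                 # the SER loop
            total = math.sqrt(sum(e * e for e in eta.values()))
            if total <= eps:
                return mesh, u                      # estimated error below the prescribed bound
            mesh = refine(mesh, mark_by_sweep(eta, theta_star, nu))
            u = solve(mesh)
            eta = estimate(mesh, u)
            rate = math.sqrt(sum(e * e for e in eta.values())) / total
            if first_rate is None:
                first_rate = rate
            elif 1.0 - rate < 0.5 * (1.0 - first_rate):   # one possible stall test (assumption)
                break
        mu *= 0.1                                   # data not resolved well enough: rebuild with smaller mu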
5.3. The refinement strategy. Let us have a short look at some refinement
strategies that create triangulations with the desired properties. We will distinguish
between the cases: local error estimators based on triangles or edges, respectively,
partition of triangles into 2 or 4.
i. Let a set of marked edges be given. For any marked edge we may subdivide
the two neighbouring triangles into 4 similar triangles. This would, in addition to
q0 , create (at least) 4 additional nodes q1 , . . . , q4 as shown in Figure 1 if we want to
establish a new triangulation as defined in 1.2. But we try to avoid these when the corresponding edges are not marked. In fact, we can treat such hanging nodes: we have to introduce them in the data structure, but their function values are assigned by interpolation from the coarser mesh. Thus they are not additional degrees of freedom. Although
Fig. 2. Partition into 4 combined with bisection

Fig. 3. Newest node bisection.


Fig. 4. Refinement pattern for quadratic elements


this does not fit our definition of a triangulation in 1.2, all our results remain valid in this case if we guarantee that the geometrical conditions are fulfilled. Note that for all refinements the triangles are similar to those of the macro triangulation.
ii. If we have a set of marked triangles, a commonly used way to create a new mesh is to subdivide marked triangles into 4 (as above) and then use bisection to obtain a triangulation with respect to our definition in 1.2 (Figure 2). To get triangulations with the required geometrical properties, bisection has to be done carefully. There is an algorithm (using so called red, green and blue refinement) that meets this requirement [Ve]. Note that this concept carries over immediately to the case where we use an error estimator based on edges, because these different refinement patterns depend on the number of marked edges per triangle.
iii. Finally, we can refine using only bisection of triangles. Some algorithms that
create meshes that fulfill our geometrical assumptions are discussed in [Mi]. To meet
requirement (R) stated in 4.2, however, we have to apply this algorithm twice to the
triangles selected by our marking strategy.
If we are given a set of marked edges, we can perform the bisection algorithm for all
triangles for which at least one edge of its boundary is marked. Repeat this algorithm
a second time if necessary. Due to its properties, the newest node bisection algorithm
(Figure 3) will then stop.
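To make the bisection step concrete, here is a hedged Python sketch of one refinement step by newest node bisection. The data layout (vertex list, triangles stored as (peak, a, b) with refinement edge (a, b)) and the simple iterate-until-conforming closure are illustrative choices, not the data structures of [Bä], [Se] or [Mi]; marked triangles are bisected twice so that all three of their edges are divided, as required by (R).

def bisect(tri, midpoint):
    """Newest node bisection of tri = (peak, a, b) through its refinement edge (a, b)."""
    p, a, b = tri
    m = midpoint(a, b)
    return [(m, p, a), (m, b, p)]      # the midpoint becomes the new peak of both children

def refine_nvb(vertices, triangles, marked):
    """vertices: list of (x, y) (extended in place); triangles: list of (peak, a, b)
    index triples; marked: set of indices into `triangles`.  Returns the new triangle list."""
    mid = {}                           # frozenset({a, b}) -> index of the edge midpoint
    def midpoint(a, b):
        key = frozenset((a, b))
        if key not in mid:
            xa, ya = vertices[a]
            xb, yb = vertices[b]
            vertices.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
            mid[key] = len(vertices) - 1
        return mid[key]

    new = []
    for i, t in enumerate(triangles):
        if i in marked:                # two bisection levels: all 3 edges of t are divided
            for child in bisect(t, midpoint):
                new.extend(bisect(child, midpoint))
        else:
            new.append(t)
    triangles = new

    # Conformity closure: any triangle owning an edge that already carries a midpoint
    # is bisected; the loop stops because the completion cascade of newest node
    # bisection in two dimensions is finite.
    changed = True
    while changed:
        changed, new = False, []
        for t in triangles:
            p, a, b = t
            if any(frozenset(e) in mid for e in ((a, b), (p, a), (p, b))):
                new.extend(bisect(t, midpoint))
                changed = True
            else:
                new.append(t)
        triangles = new
    return triangles

In this sketch hanging nodes are removed completely; the variant of 5.3.i would instead keep them and constrain their values by interpolation from the coarser mesh.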
6. Extensions to higher order elements. If we use higher order elements,

there are differences in the analysis which stem from the fact that now ∆u_H does not vanish. We can prove Lemma 1 just as before if the basis functions are properly scaled and (e.g.) g_h^∂, g_H^∂ are defined to be still linear functions. Next we consider Lemma 3 and will find that now we have ||H(∆u_H + f_H)||_0 instead of ||H f_H||_0 and new constants Ĉ_3, Ĉ_0. This is no longer a term which is known a priori. So in general the a priori error is given by

max{ w_1 ||f − f_h||_0 , w_3 [[g^∂ − g_h^∂]]_{1/2;∂Ω} },

but the ideas formulated in 5.1 to create the initial triangulation carry over immediately to this case.
The most striking part is to establish a result similar to Lemma 2. For this we look into its proof and find that now for q ∈ R_h

∫_{E_q} [∂_n u_H] ψ_q^h − ∫_{ω_q} (∆u_H + f_H) ψ_q^h = ∫_{ω_q} ∇(u_H − u_h) · ∇ψ_q^h + ∫_{ω_q} (f_h − f_H) ψ_q^h,

where ω_q := supp(ψ_q^h). In view of the modified version of Lemma 3, we want to estimate a combination of d_{E_q} ||[∂_n u_H]||²_{0;E_q} and ||H(∆u_H + f_H)||²_{0;ω_q} from above in terms of ||∇(u_H − u_h)||²_{0;ω_q} and ||H(f_h − f_H)||²_{0;ω_q}.
As an example, let us consider quadratic elements. The nodes are given by the
vertices and the centers of the edges. To separate the two integrals on the left, we
would like to have enough functions that are supported only in T (and thus vanish on
Eq ). But if we divide T into 4 (as in Figure 1), there would be only 3 such functions
per triangle and that will be in general not enough to obtain the required estimate.
(Indeed, these would suffice in the case of fH being a linear function.)
To circumvent this situation we propose to use finer refinement patterns. Consider
the following situation for an equilateral reference triangle T̂ (Figure 4). T̂ is two times
regularly refined and from all the fine basis functions with support in T̂ we fix those
with nodes q1 , . . . , q6 . If we can show that for all v ∈ IP 2 (T̂ )

∑_{i=1}^{6} ( ∫_{ω̂_{q_i}} v ψ̂_{q_i}^h )² ≥ ĉ ||v||²_{0;T̂}

holds, then we can prove an analogous result to Lemma 2. Transforming T̂ to any T


then yields

||H(∆uH + fH )||0;T ≤ c1 ||∇(uh − uH )||0;T + c2 ||H(fh − fH )||0;T .

The constants c1 , c2 depend only on the shape regularity of T . Using this result, we
can take basis functions with nodes located on ∂T to derive an analogous estimate for
||[∂n uH ]||0;∂T , thus obtaining the required result.
To prove the above inequality for T̂, we have to show that the matrix

[ ∫_{T̂} ψ̂_{p_i}^H ψ̂_{q_j}^h ]_{i=1,...,6; j=1,...,6}

is invertible. Making use of the local mass matrix (e.g. in [Sch; p. 76]) and the symmetries of T̂, we find that the matrix is indeed regular.
Fig. 5. Macro triangulation.

Now we can derive a result that is similar to that of Theorem 1. We only have to introduce the local error estimator

η_T² := w_2' ||[∂_n u_H]||²_{0;∂T} + w_2'' ||H(∆u_H + f_H)||²_{0;T}

and to choose the set of marked triangles as before. The weights may be chosen according to a more careful calculation or, in absence of this information, simply set to 1. Moreover, every marked triangle has to be refined as in Figure 4. Note that this can be achieved by applying the refinement strategy described in 5.3.ii twice for each marked triangle, or that of 5.3.iii four times.
We presume that one can find refinement patterns with the same property that introduce fewer new nodes than the one presented here.
7. Numerical results. In this section we present some numerical results for
linear finite elements.
We used the newest node bisection [Bä] [Se] in all examples as refinement strategy.
Our analysis requires that we have to perform 2 bisection steps on every marked
triangle in the SER-iteration. All systems of equations were solved using CG with
BPX-preconditioning [BPX]. The CG-iteration was stopped when r · Cr ≤ 10^{-6} |b| (r
the current residual, C the preconditioning matrix, and b the right hand side of the
equation). In all examples we took the same seed for the random number generator.
Our first example is that of section 3, that is

f(x, y) := (2/3) π² sin(8πx), g^∂(x, y) ≡ 0, Ω := [0, 1]².

The macro triangulation is shown in Figure 5. The following parameters were chosen:

ϵ = 5 · 10^{-2}, µ = 1.0, w_1 = 0.225, w_2 = 1.0, ν = 0.05.

Here, w1 is approximately CP (Ω) and f is normalized such that CP (Ω)||f ||0 = 1. We


ran the program for different values of θ resp. θ∗ for the marking strategies (M) resp.
(M∗ ). In the following table, the first part gives the results for the construction of
the initial grid. The columns are: the number of steps s0 until the required error was
reached, the number N0 of nodes inside Ω, the number Nf of necessary function eval-
uations (the values of f at the vertices were only computed once), and the computing
time Tig in seconds. The second part gives the results for the SER-iteration using
the initial triangulation created with θ∗ = 0.5: the number of steps s, the number of
unknowns N after the last step and the total computation time Ttot in seconds.
Table 1
Example 1, different marking strategies.

marking strategy s0 N0 Nf Tig s N Ttot


(M): θ = 0.2 14 15681 207047 5 3 49873 69
θ = 0.5 20 13649 193644 5 4 35091 61
θ = 0.9 181 11393 697803 15 16 31035 162
(M∗ ): θ∗ = 0.2 19 10369 179642 5 2 36353 47
θ∗ = 0.5 41 10369 279533 6 3 27339 49
θ∗ = 0.9 249 10369 811622 18 18 25031 188

For the example θ∗ = 0.5 we obtained, for k = 1, 2, 3, the following numbers of un-
knowns Nk , numbers NCG of (preconditioned) CG-iterations and estimated errors
ηTk :
Table 2
Example 1, θ∗ = 0.5.

k 1 2 3
Nk 10369 13883 27339
NCG 21 18 18
η_{T_k} 7.4E-2 6.0E-2 4.7E-2

Since we do not know the exact solution in this example, we cannot show “exact”
errors. But we saw from the final triangulation that the oscillations of f were clearly
resolved.
To illustrate the dependency on the parameter ν we fixed θ∗ = 0.5 and computed
Table 3
Example 1, different ν.

ν s0 N0 Nf Tig s N Ttot
0.01 48 11393 356283 9 4 25677 65
0.05 41 10369 279533 6 3 27339 49
0.20 27 10369 235424 6 3 49873 72

In the second example we computed a harmonic function on a domain having a reentrant corner. Let

Ω := [−0.5, 0.5]² \ triangle[(0, 0), (−0.5, 0.5), (−0.5, −0.5)]

and choose data such that


u(r, α) := r^{2/3} sin((2/3) α)

(r, α being the polar coordinates) is the exact solution. The macro triangulation
was taken to be the one from Figure 5, restricted to the present domain. We fixed
parameters
ϵ = 2 · 10^{-2}, µ = 1.0, w_3 = 1.0, ν = 0.05.

In all the following computations the initial triangulation was obtained with the mark-
ing strategy (M) and θ = 0.5. It was achieved after 5 steps and consists of 16 triangles
and 2 internal nodes. For the SER algorithm we considered different variants of mark-
ing strategies. The notation is as before.
Table 4
Example 2, different marking strategies.

marking strategy s N Ttot


(M): θ = 0.2 14 13459 42
θ = 0.5 16 14975 46
θ = 0.9 71 9560 144
(M∗ ): θ∗ = 0.2 11 35229 55
θ∗ = 0.5 22 10234 44
θ∗ = 0.9 110 9246 182

For the case θ_* = 0.5 and some values of k we show some details of the iteration. The notation is as in example 1. In addition, let γ_k := ( ||∇e_k||_0 / ||∇e_{k−3}||_0 ) / √(N_{k−3}/N_k).
Table 5
Example 2, θ∗ = 0.5.

k 1 4 7 10 13 16 19 22
Nk 2 21 59 160 469 1463 3999 10234
NCG 1 12 16 20 21 22 20 15
η_{T_k} 5.8E-1 3.3E-1 2.2E-1 1.4E-1 8.7E-2 5.0E-2 3.0E-2 1.9E-2
||∇ek ||0 2.0E -1 1.0E -1 6.6E -2 4.2E -2 2.5E -2 1.4E -2 8.7E -3 5.5E -3
γk 1.6 1.1 1.1 1.0 1.0 1.0 1.0

When the algorithm stopped, the "exact" relative error of the numerical solution was about 6.4 · 10^{-3}.
Again, we fix θ∗ = 0.5 and consider several values of ν
Table 6
Example 2, different ν.

ν s N Ttot
0.01 25 11347 54
0.05 22 10234 44
0.20 17 13894 47

From these two examples we can draw the following conclusions:


A marking strategy that gives fewer unknowns does not necessarily lead to a faster algorithm. This can be seen if we compare the results for the cases: small θ vs. large θ (resp. θ_*; Tables 1, 4), small ν vs. large ν (Tables 3, 6). The "right" choice of the parameters depends on the costs of the different parts of the algorithm.
Compared to the marking strategy (M) our new strategy (M∗ ) is preferable in the
first example and comparable in the second if θ, θ∗ ∈ {0.2, 0.5}. The results in Table
5 show a nice correspondence between the increase in the number of unknowns and the decrease of the error.
From our experience we suggest to take the following choices: create the initial
triangulation using θ = 0.5 or θ∗ ∈ [0.2, 0.5] and ν = 0.05. Perform the SER-iteration
with θ∗ = 0.5 and two bisection steps.
We also recommend to make experiments with the following modifications: (1) if T is marked, perform two bisection steps on T if η_T ≥ (1 − ν) η_max and only one otherwise, (2) switch to a smaller ν or θ_* if the global error becomes smaller than 2ϵ.
This last point may be important if our global error estimate misses the prescribed error bound only slightly, in which case more new nodes than necessary may be introduced in the last step. A possible remedy is the following: suppose the error e_k in step k is less than the one in the previous step. Assuming that the decrease of the error is approximately given by e_k² ≈ (1 − c(1 − θ_*)²) e_{k−1}² (see the proof of Theorem 1), we expect that the

next error ek+1 will not be less than 0.9ϵ if we replace θ∗ by θ̃∗ , given by

1 − θ̃_* := (1 − θ_*) [ (1 − (0.9ϵ/e_k)²) / (1 − (e_k/e_{k−1})²) ]^{1/2}.
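A small illustrative helper for this adjustment, following the formula as reconstructed above; in practice e_{k−1}, e_k would be the estimated errors η_{T_{k−1}}, η_{T_k} (an assumption), as is the guard against e_k being already close to 0.9ϵ.

import math

def adjusted_theta(theta_star, e_prev, e_curr, eps):
    """Return theta~_* from theta_*, the last two (estimated) errors and the bound eps."""
    num = max(1.0 - (0.9 * eps / e_curr) ** 2, 0.0)   # guard: e_curr may already be close to 0.9*eps
    den = 1.0 - (e_curr / e_prev) ** 2                # positive, since e_curr < e_prev
    return 1.0 - (1.0 - theta_star) * math.sqrt(num / den)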

8. Remarks. i. In the literature one sometimes finds (e.g. [BW, p. 288]) a so


called saturation assumption. It states that the structure of the exact solution has
to be sufficiently resolved on the given mesh so that the approximated solution on a
somewhat larger finite element space (e.g. one degree higher than the one considered)
would result in a smaller error. In the present work the condition on the initial
triangulation plays the role of this saturation assumption.
ii. Our criterion for a solution to be acceptable was based on the absolute error
in a suitable norm || . ||. But we may want to have a solution with small relative error

||u − u_h|| / ||u||.

We do not know estimates that relate relative errors of the data to the relative error
of an approximate solution as we have for the absolute errors. Hence, we have no a
priori step. In the SER algorithm we can however exploit the fact that

||u − uh || δ
||u − uh || ≤ δ ⇒ ≤ ,
||u|| ||uh || − δ

for δ > 0. We suggest to perform an algorithm like the one we proposed at the end
of 5.2. Create an initial triangulation, such that the a priori error is smaller than a
given ϵ (that is µ = 1). Then start the SER algorithm. Take δ = ηTh and use the right
hand side of the above inequality as the error control if δ has become much smaller
than ||uh ||. Stop if the prescribed bound is reached. If the convergence becomes too
slow, create a new grid for which the a priori error is smaller than before.
9. Conclusions. In this paper we explained how to construct a converging adaptive procedure for linear finite elements for the Poisson equation in two space dimensions.
Given a macro triangulation, we first refine it by means of the data functions
to obtain an initial triangulation. We gave a mathematical criterion for the initial
triangulation to be “sufficiently fine”. Since these functions can in general not be in-
tegrated exactly, we perform numerical integration using randomly chosen quadrature
points.
Having a “sufficiently fine” initial triangulation, we start the SER-algorithm that
creates sequences of triangulations T_k and approximations u_k. We stated a new marking strategy that selects the set which has to be refined. It has the advantage that we are able to show that the error decreases monotonically, at a rate that does not depend on the size of the mesh, until the estimated error reaches the prescribed error bound.
In addition, we showed how these results can be generalized to higher order ele-
ments by considering quadratic elements.

REFERENCES
[A] R. A. Adams, Sobolev spaces, Academic Press, New York, 1975.

[Bä] E. Bänsch, Local mesh refinement in 2 and 3 dimensions, Impact Comput. Sci. Engrg.,
3 (1991), pp. 181–191.
[BPX] J. H. Bramble, J. E. Pasciak, J. Xu, Parallel multilevel preconditioners, Math. Comp.,
55 (1990), pp. 1–22.
[BR] I. Babuška, W. C. Rheinboldt, Error estimators for adaptive finite element computa-
tions, SIAM J. Numer. Anal., 15 (1978), pp. 736–754.
[BV] I. Babuška, M. Vogelius, Feedback and adaptive finite element solution of one-dimensi-
onal boundary value problems, Numer. Math., 44 (1984), pp. 75–102.
[BW] R. E. Bank, A. Weiser, Some a posteriori error estimators for elliptic partial differential
equations, Math. Comp., 44 (1985), pp. 283–301.
[Cl] P. Clément, Approximation by finite element functions using local regularizations, RAIRO
Anal. Numér., 2 (1975), pp. 77–84.
[Ja] H. Jarausch, On an adaptive grid refinement technique for finite element approximations,
SIAM J. Sci. Statist. Comput., 7 (1986), pp. 1105–1120.
[Jo] C. Johnson, Adaptive finite element methods for diffusion and convection problems, Com-
put. Meth. Appl. Mech. Engrg., 82 (1990), pp. 301–322.
[LM] J. L. Lions, E. Magenes, Non-homogeneous boundary value problems and applications
I, Springer, New York, 1972.
[Mi] W. F. Mitchell, A comparison of adaptive refinement techniques for elliptic problems,
ACM Trans. Math. Software, 15 (1989), pp. 327–346.
[Sch] H. R. Schwarz, Finite element methods, Academic Press, London, 1988.
[Se] E. G. Sewell, Automatic generation of triangulations for piecewise polynomial approxi-
mations, Ph. D. dissertation, Purdue Univ., West Lafayette, Ind., 1972.
[SM] J. Spanier, E. H. Maize, Quasi-Random Methods for estimating integrals using relatively
small samples, SIAM Review, 36 (1994), pp. 18–44.
[Ve] R. Verfürth, A posteriori error estimation and adaptive mesh-refinement techniques, J.
Comput. Appl. Math., 50 (1994), pp. 67–83.
[ZKGB] O. C. Zienkiewicz, D. W. Kelly, J. Gago, I. Babuška, Hierarchical finite element
approaches, error estimators and adaptive refinement, in The mathematics of finite
elements and applications IV, J. R. Whiteman, ed., Academic Press, London, 1982, pp.
313–346.

