
Introduction to the finite element method

Instructor: Ramsharan Rangarajan


March 23, 2016

One of the key concepts we have learnt in this course is that of the stress
intensity factor (SIF). We have come to appreciate it as an important
quantity in the design and safety assessment of structures containing cracks.
Using complex variable
techniques, we derived a few special solutions for displacement/stress fields in
cracked, linearly elastic solids. Using these solutions, we were able to compute
the SIF. It is worth noting that we assumed (semi-)infinite, two-dimensional
geometries in deriving these solutions. Handbooks provide SIF values for a large
number of special configurations. Formulae in such tables have been found using
empirical fits to experimental data and are routinely used in fracture testing.
Over the course of the next four lectures, we will learn about a few com-
putational techniques to compute SIFs in the context of linear elastic fracture
mechanics (LEFM). The boundary element and finite element methods (FEM)
are perhaps the two most commonly used numerical methods to approximate
solutions in LEFM. We will focus solely on the latter. At the end of this part
of the course, we hope that you will
• appreciate the use of FEM in computational fracture mechanics,

• realize the pitfalls in the mesh→solve→plot approach invariably followed
  when using commercial FEM packages,

• be aware of the intricacies in FEM arising from approximating solutions
  with singularities,

• be cognizant of the different ways of computing SIFs from numerical
  solutions, and constantly evaluate/compare the merits of each approach,

• always check the documentation/manual of your FEM package to know
  what special techniques are being used to compute LEFM solutions and
  estimate SIFs.

In this lecture, we will get a bird’s eye view of the finite element method.
We will provide just enough detail so as to develop some intuition behind the
method. We will start paying attention to concepts more directly relevant to
LEFM in subsequent lectures. This will also help you start working on the
computing assignment. A highly recommended textbook on FEM especially
suitable for engineers wishing to learn the subject is the Dover classic “The
finite element method: Linear static and dynamic finite element analysis” by
Tom Hughes.

1 The Poisson problem


We will use the 2D Poisson equation as the prototypical example for discussing
the FEM. We consider the problem of computing a scalar-valued function φ
defined on a two-dimensional domain Ω that satisfies
\[
-\Delta\phi \,\triangleq\, -\left( \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} \right) = f \quad \text{over } \Omega, \tag{1a}
\]
\[
\phi = 0 \quad \text{over } \partial\Omega. \tag{1b}
\]

We can think of φ as the equilibrium temperature distribution of a plate
represented by the domain Ω in the presence of a heat source with intensity f (x, y).
The boundary condition (1b) represents the fact that the plate is maintained at
zero temperature along its boundary ∂Ω.
Eq. (1) is an example of a linear, elliptic partial differential equation. The
equilibrium equations of linear elasticity also fall under the same classification,
except that the analogous equations have multiple components (as many as the
number of spatial dimensions) and are usually more complex looking because
of the coupling between various stress-strain components introduced by the
constitutive relation. Eq. (1) is often referred to as the classical form of the
Poisson equation. Its study is a dominant aspect of harmonic analysis and
a unique solution can be proved to exist with some smoothness assumptions
on f . An excellent text on this topic is “Partial differential equations” by
Lawrence Evans. The analytical solution for (1) can be written down using the
fundamental solution of the Laplace operator. Boundary element methods in
fact exploit the knowledge of such fundamental solutions to approximate φ.
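For reference (an added note, not part of the original text): in two dimensions
the fundamental solution of the Laplace operator is
\[
\Phi(x) = -\frac{1}{2\pi}\,\log|x|, \qquad -\Delta\Phi = \delta_0,
\]
so that, on all of ℝ², the convolution φ = Φ ∗ f solves −∆φ = f for sufficiently
well-behaved f . On a bounded domain with the boundary condition (1b), additional
boundary corrections (Green’s functions or boundary-integral representations) are
needed, which is precisely the structure that boundary element methods exploit.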
The overarching goal of numerical methods is to compute a sequence of
approximations {φh }h such that φh converges to φ in some sense as h → 0. Then,
we can progressively reduce h until ‖φ − φh ‖ is smaller than a tolerance specified
by the application of interest. Although it is not the approach we pursue here,
we mention the possibility of approximating (1) using finite
differences. The idea here is that the partial derivatives appearing in (1) can be
computed approximately using finite difference formulae:

\[
\frac{\partial^2\phi}{\partial x^2} \approx \frac{\phi(x + h, y) - 2\,\phi(x, y) + \phi(x - h, y)}{h^2}, \tag{2a}
\]
\[
\frac{\partial^2\phi}{\partial y^2} \approx \frac{\phi(x, y + h) - 2\,\phi(x, y) + \phi(x, y - h)}{h^2}. \tag{2b}
\]

Representing −(∂²/∂x² + ∂²/∂y²) as the linear operator L, we effectively replace
the equation Lφ = f in (1a) by a different one, Lh φh = f , where Lh is obtained by
using the finite difference approximations (2) in place of the partial derivatives
appearing in L. Next, rather than insisting that Lh φh = f hold at every (x, y) in
Ω, we only request that it hold at a finite number of points. In this way, we
arrive at a system of equations represented by Lh φh (xi , yj ) = f (xi , yj ) for a
chosen collection of points {(xi , yj )}i,j , usually referred to as a grid (see Figure 1).

Figure 1: A representative finite difference grid.

The computed solution, labeled φh , is only expected to be an approximation of φ.
Noticing that (2) reproduces L in the limit as h → 0, we expect that φh will be a
progressively better approximation of φ as h is reduced, i.e., as the grid is refined.
See any text on finite difference methods for a more complete picture. A
recommended text is the book “Fundamentals of engineering numerical analysis”
by Parviz Moin.
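To make the finite difference idea concrete, the following sketch (an added
illustration, not part of the original notes) sets up the 5-point stencil version
of Lh on a uniform grid over the unit square and solves Lh φh = f by Jacobi
iteration; the grid size, the source term, and the iteration count are arbitrary
choices made only for demonstration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Solve -Laplacian(phi) = f on the unit square with phi = 0 on the boundary,
// using the 5-point finite difference stencil and Jacobi iteration.
// Sketch only: grid size, source term, and iteration count are illustrative.
int main() {
  const int n = 33;                 // grid points per direction (includes boundary)
  const double h = 1.0 / (n - 1);   // uniform grid spacing
  std::vector<double> phi(n * n, 0.0), phi_new(n * n, 0.0), f(n * n, 0.0);

  // Example source: f = 2*pi^2*sin(pi x)*sin(pi y); the corresponding exact
  // solution is phi = sin(pi x)*sin(pi y), which vanishes on the boundary.
  const double pi = std::acos(-1.0);
  for (int j = 0; j < n; ++j)
    for (int i = 0; i < n; ++i)
      f[j * n + i] = 2.0 * pi * pi * std::sin(pi * i * h) * std::sin(pi * j * h);

  // Jacobi sweeps: at interior points, phi_new = (sum of 4 neighbours + h^2 f)/4.
  for (int iter = 0; iter < 5000; ++iter) {
    for (int j = 1; j < n - 1; ++j)
      for (int i = 1; i < n - 1; ++i)
        phi_new[j * n + i] = 0.25 * (phi[j * n + i - 1] + phi[j * n + i + 1] +
                                     phi[(j - 1) * n + i] + phi[(j + 1) * n + i] +
                                     h * h * f[j * n + i]);
    phi.swap(phi_new);
  }

  // The exact solution equals 1 at the centre of the plate.
  std::printf("phi(0.5, 0.5) ~ %f\n", phi[(n / 2) * n + (n / 2)]);
  return 0;
}
```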

1.1 An alternate form


To compute a finite element approximation of φ, we adopt a radically different
approach. To this end, we will first convert (1) into an optimization problem.
Let v : Ω → R be any function that vanishes on ∂Ω and is smooth enough to
warrant the manipulations that follow. Multiplying (1a) by v, we get
\[
f v = -v\, \Delta\phi. \tag{3}
\]
Integrating (3) over Ω yields
\[
\begin{aligned}
\int_\Omega f v \, d\Omega &= -\int_\Omega v \, \Delta\phi \, d\Omega && (4a)\\
&= \int_\Omega \left( \nabla\phi \cdot \nabla v - \operatorname{div}(v\, \nabla\phi) \right) d\Omega && (4b)\\
&= \int_\Omega \nabla\phi \cdot \nabla v \, d\Omega - \int_{\partial\Omega} v \, \frac{\partial\phi}{\partial n} \, ds && (4c)\\
&= \int_\Omega \nabla\phi \cdot \nabla v \, d\Omega. && (4d)
\end{aligned}
\]

Denote the collection of all sufficiently smooth functions on Ω that vanish on
∂Ω by S (strictly speaking, S is the Sobolev space H¹₀(Ω); we will define this
space in one of the questions in the following subsection). Noting that the
choice of v in (4) is arbitrary, we conclude that the solution φ of (1) necessarily
satisfies
\[
\int_\Omega \nabla\phi \cdot \nabla v \, d\Omega = \int_\Omega f v \, d\Omega \quad \text{for any/every } v \in S. \tag{5}
\]

Eq. (5) is called the weak form of (1). There are a few ways of understanding
in what sense (5) is “weaker” than (1). Here we note that a solution of (1) is
necessarily twice continuously differentiable (denoted C²). However, solutions
to (5) need not even be differentiable! That is, (5) is solvable for a much
larger class of functions f than (1). For example, when f is a point source, (5)
has a solution while (1) does not. However, when f is smooth enough (e.g.,
continuous), say when f is a constant function, the solutions of (1) and (5)
coincide.
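To make the point-source remark concrete, here is a small worked example (added
for illustration, assuming the one-dimensional analogue of (1) and (5) on
Ω = (0, 1) with a unit point source at x = 1/2):
\[
-\phi'' = \delta\!\left(x - \tfrac{1}{2}\right) \ \text{on } (0,1), \qquad \phi(0) = \phi(1) = 0,
\qquad \text{weak form: } \int_0^1 \phi'\, v'\, dx = v\!\left(\tfrac{1}{2}\right).
\]
The tent function φ(x) = ½ min(x, 1 − x) satisfies the weak form: φ′ = ±½ on
either side of x = ½, so ∫₀¹ φ′v′ dx = ½[v(½) − v(0)] − ½[v(1) − v(½)] = v(½) for
every admissible v. Yet φ has a kink at x = ½ and is not twice differentiable, so
there is no classical solution of the form (1) for this loading.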
In the finite element method, we choose to approximate (5) rather than (1).
We are yet to understand why (5) is simpler to approximate than (1). However,
we have seen that one important advantage of FEM is that it permits loading
scenarios and boundary conditions that are relevant in engineering, but not
admissible in the classical sense.

Questions:
(i) Justify the manipulation from (4a) to (4b).
(ii) Justify the manipulation from (4b) to (4c).

(iii) Justify why the boundary term appearing in (4c) vanishes in (4d).

1.2 An optimization problem


Eq. (5) appears daunting at first. It may even seem that (1) was simpler.
To resolve this question, we adopt an optimization-based perspective. The
big picture is that in general, there is no prescriptive way of solving PDEs.
However, we have numerous tools and approximation methods at our disposal
to solve optimization problems. Plainly speaking, we are good at maximiz-
ing/minimizing/extremizing functions; at least we are better at it than solving
PDEs.
Consider the functional
\[
J(v) = \int_\Omega \left( \frac{1}{2} \nabla v \cdot \nabla v - f v \right) d\Omega, \tag{6}
\]

where J : S → R assigns a scalar to each function in S. We can think of J(v)
as a “cost” associated with the function v. The key observation here is that the
Euler-Lagrange equation corresponding to the minimization of (6) is precisely
the weak form (5). To wit, requesting that φ be a stationary point of J yields
\[
0 = \langle \delta J(\phi), v \rangle \,\triangleq\, \left. \frac{d}{d\eta} \int_\Omega \left( \frac{1}{2} \nabla(\phi + \eta v) \cdot \nabla(\phi + \eta v) - f\,(\phi + \eta v) \right) d\Omega \right|_{\eta = 0} \tag{7a}
\]
\[
= \int_\Omega \left( \nabla\phi \cdot \nabla v - f v \right) d\Omega, \tag{7b}
\]

which is precisely the weak form of the Poisson problem. Hence one interpreta-
tion of the FEM for the Poisson problem is that to solve (1), we approximately
solve the optimization problem of finding a minimizer of the functional J in (6),
namely

\[
\text{Find } \phi \,\triangleq\, \arg\min_{v \in S} J(v). \tag{8}
\]

An additional appeal of (8) over (1) is that the former is much more amenable
to approximation using computer codes. Moreover, we can use known results
from optimization theory to understand conditions for existence and uniqueness
of solutions to (8).

Questions:
(i) Justify that (7b) follows from (7a).
(ii) Justify that minimization problem (8) is equivalent to the weak form
(5).

1.3 The Galerkin method


Having decided to solve the weaker version of the Poisson problem, we arrive
at the question of how to solve the optimization problem (8), at least approxi-
mately. An elegant answer is provided by the Galerkin method. Choose a finite
dimensional subspace Sh of S. Then, an approximation of the optimization
problem (8) is computed as

\[
\phi_h \,\triangleq\, \arg\min_{v \in S_h} J(v). \tag{9}
\]

Notice that the only distinction between (8) and (9) is the choice of the collec-
tion of functions over which the minimum is sought. While we seek to find φ as
a minimizer of J over the space S in (8), we seek φh as a minimizer of J over
the subspace Sh in (9).

Questions:

(i) Define a vector space (the real kind will suffice).

Figure 2: The Galerkin approximation φh of φ.

(ii) Show that the collection of functions


\[
H_0^1(\Omega) \,\triangleq\, \left\{\, v : \Omega \to \mathbb{R} \;:\; \int_\Omega \left( v^2 + \nabla v \cdot \nabla v \right) d\Omega < \infty \,\right\}
\]

is a vector space. This is an example of a Sobolev space, which happens
to be the natural function space in which to study (5). In one dimension,
it can be shown that H¹₀(Ω ⊂ ℝ) consists of continuous functions. In higher
dimensions, such a characterization does not hold, meaning that there are
functions in H¹₀(Ω) that are not continuous (a standard example is sketched
after this list of questions). Hence it does not even make sense to talk about
point-wise values of functions in H¹₀(Ω). Moreover, the gradient appearing
in the definition above should in fact be interpreted as the “weak derivative”.
These points should be kept in mind, but are beyond the scope of our
current discussion.

(iii) Define what we mean by Sh being a finite dimensional subspace of S.


(iv) Comparing (8) and (9), prove that J(φ) ≤ J(φh ).
(v) Convince yourself that although Sh is finite dimensional, it has in-
finitely many functions.
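As an added illustration of the remark in question (ii) above: on the disk
Ω = {x ∈ ℝ² : |x| < 1/e}, the function
\[
v(x) = \log\!\big( \log(1/|x|) \big)
\]
vanishes on ∂Ω and is unbounded as |x| → 0, yet
\[
\int_\Omega \nabla v \cdot \nabla v \, d\Omega = 2\pi \int_0^{1/e} \frac{dr}{r \, \log^2(1/r)} = 2\pi < \infty,
\]
and ∫ v² dΩ is also finite since v grows only doubly logarithmically. Hence v
belongs to H¹₀(Ω) even though it has no sensible point value at the origin.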

Considering that both S and Sh contain infinitely many functions (being
vector spaces), it may be a little puzzling as to why (9) is any easier to solve
than (8). This is easy to answer. Since Sh is finite dimensional, say of dimension
0 < n < ∞, we can find a basis {Ni }ni=1 that spans Sh . Any function v ∈ Sh
can be expressed uniquely as a linear combination of these basis functions, say
v(x, y) = vi Ni (x, y), where the vi ’s are scalar coefficients and summation over
the repeated index i is implied. Then notice that
\[
J(v) = \frac{1}{2} K_{ij}\, v_i v_j - F_k v_k \quad \text{for } v \in S_h, \tag{10}
\]
where
\[
K_{ij} \,\triangleq\, \int_\Omega \nabla N_i \cdot \nabla N_j \, d\Omega \quad \text{and} \quad F_i \,\triangleq\, \int_\Omega f N_i \, d\Omega. \tag{11}
\]

Hence the Galerkin approximation (9) reduces to finding the coefficients {φi }ni=1
of φh with respect to the basis {Ni }ni=1 :
 
\[
\text{Find } \phi \,\triangleq\, \arg\min_{v \in \mathbb{R}^n} \left( \frac{1}{2}\, v^T K\, v - F^T v \right), \tag{12}
\]
which in turn reduces to solving the linear system of equations
\[
K \phi = F. \tag{13}
\]
In the FEM vernacular, the n × n square matrix K is called the stiffness matrix
and the n × 1 vector F is called the force vector.
Although (13) is likely to be very familiar, it is usually not derived in this way
using an optimization-based approach. This is because the FEM is used also in
problems that do not arise from a variational principle (extremal problem) such
as (8). We choose the optimization route to provide the intuition behind FEM
rather than merely go through a more “mechanical” derivation. It is common
to arrive at (13) using analogies with spring or truss networks, or directly from
the weak form as you will show in the following questions.
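To see the reduction from (9) to (13) in the simplest possible setting, here is a
brief sketch (an added illustration, not from the notes) for the one-dimensional
analogue −φ′′ = f on (0, 1) with φ(0) = φ(1) = 0: K and F are assembled from
(11) for piecewise linear “hat” functions on a uniform mesh, and the resulting
tridiagonal system is solved directly. The choices f = 1 and n = 9 are arbitrary.

```cpp
#include <cstdio>
#include <vector>

// Galerkin/FEM sketch for -phi'' = f on (0, 1), phi(0) = phi(1) = 0, using n
// interior piecewise linear hat functions on a uniform mesh. K and F follow
// (11); the tridiagonal system K * phi = F is solved with the Thomas algorithm.
int main() {
  const int n = 9;                  // number of interior nodes (= dim of S_h)
  const double h = 1.0 / (n + 1);   // uniform element size
  std::vector<double> diag(n), off(n, 0.0), F(n), phi(n);

  for (int i = 0; i < n; ++i) {
    // For hat functions on a uniform mesh:
    //   K_ii = 2/h,  K_{i,i+1} = K_{i+1,i} = -1/h,  F_i = f * h  (with f = 1).
    diag[i] = 2.0 / h;
    if (i + 1 < n) off[i] = -1.0 / h;
    F[i] = 1.0 * h;
  }

  // Thomas algorithm for the symmetric tridiagonal system.
  std::vector<double> c(n), d(n);
  c[0] = off[0] / diag[0];
  d[0] = F[0] / diag[0];
  for (int i = 1; i < n; ++i) {
    const double m = diag[i] - off[i - 1] * c[i - 1];
    c[i] = (i + 1 < n) ? off[i] / m : 0.0;
    d[i] = (F[i] - off[i - 1] * d[i - 1]) / m;
  }
  phi[n - 1] = d[n - 1];
  for (int i = n - 2; i >= 0; --i) phi[i] = d[i] - c[i] * phi[i + 1];

  // The exact solution of -phi'' = 1 is phi(x) = x(1 - x)/2; compare nodally.
  for (int i = 0; i < n; ++i) {
    const double x = (i + 1) * h;
    std::printf("x = %.2f  phi_h = %.6f  exact = %.6f\n",
                x, phi[i], 0.5 * x * (1.0 - x));
  }
  return 0;
}
```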

Questions:
(i) Using v = vi Ni in (9), derive (10).
(ii) Prove (13) starting from (12).
(iii) Prove (12) starting from (13). That is, show that J(φ) ≤ J(v) for
any v ∈ Sh .
(iv) An alternative perspective of FEM, applicable also to problems that
may not be posed as an optimization problem, deals directly with the
weak problem (5). The Galerkin method is more generally defined as
\[
\text{Find } \phi_h \in S_h \ \text{ such that } \ \int_\Omega \nabla\phi_h \cdot \nabla v \, d\Omega = \int_\Omega f v \, d\Omega \ \text{ for each } v \in S_h. \tag{14}
\]

That (9) is equivalent to (14) is shown with exactly the same ar-
guments that showed equivalence of (5) and (8). By choosing test
functions v = vi Ni and φh = φi Ni in (14), show that we arrive at the
same set of equations as (13).

Figure 3: Choice of shape functions in the finite element method as piecewise
polynomials. The image shows the ubiquitous choice of the “hat function”: a
piecewise linear function with support localized to a small region in Ω.

1.4 Finite elements


There is considerable freedom in the choice of basis functions {Ni }ni=1 for the
subspace Sh . They only need to be included in S = H¹₀(Ω), i.e., be sufficiently
smooth and vanish on the boundary ∂Ω. However, arbitrary choices for these
functions are not advisable. After all, we would like φh to provide a progressively
better approximation of φ as we increase n. That is, as we increase the number
of basis functions, thereby making Sh a progressively larger subspace of S, φ in
S should be better approximated by φh in Sh . These considerations intricately
link the Galerkin method with interpolation theory.
For our purposes, the FEM is an instance of the Galerkin method for a
specific choice of basis functions {Ni }ni=1 . The most common choice of basis
functions is piecewise polynomials, a distinctive feature of the finite element
method. Adopting piecewise polynomials is judicious because:
(i) Basis functions are conveniently constructed by “meshing” the domain Ω.
This essentially consists of breaking up Ω into smaller pieces called finite
elements and constructing polynomial functions over them. Elements
usually have polygonal shapes in 2D and polyhedral shapes in 3D.
(ii) It is straightforward to compute K and F by evaluating the necessary
integrals since the integrands are usually either polynomials or rational
polynomials. Numerical quadrature may be required when the forcing
function f is not a polynomial (a sketch of one such element-level
computation appears after this list).
(iii) By insisting that each function Ni be nonzero only over a small region of
Ω, we ensure that K becomes a sparse matrix. A sparse system Kφ = F
can be solved far more efficiently than if K were dense. In fact, in realistic
engineering applications, K is never inverted.

(iv) The approximation of functions in Sobolev spaces with polynomials is well
understood. Hence it is possible to estimate a priori how well φh will
approximate φ.
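Below is a brief sketch (added for illustration, not the code provided with the
course) of the element-level computation referred to in item (ii): the 3 × 3
element stiffness matrix and 3 × 1 element force vector of a single straight-edged
triangle with linear shape functions, from which the global K and F in (11) are
assembled by summing over all elements of the mesh.

```cpp
#include <cstdio>

// Element stiffness Ke[a][b] = \int_T grad(N_a) . grad(N_b) dT and element
// force Fe[a] ~ f(centroid) * area / 3 (one-point quadrature) for a linear
// triangle. Node coordinates are assumed to be listed counterclockwise.
struct Triangle { double x[3], y[3]; };

void elementMatrices(const Triangle& T, double f_at_centroid,
                     double Ke[3][3], double Fe[3]) {
  // Twice the signed area from the cross product of two edge vectors.
  const double twoA = (T.x[1] - T.x[0]) * (T.y[2] - T.y[0]) -
                      (T.x[2] - T.x[0]) * (T.y[1] - T.y[0]);
  const double area = 0.5 * twoA;

  // Constant gradients of the linear shape functions: grad(N_a) = (b_a, c_a)/(2A).
  const double b[3] = {T.y[1] - T.y[2], T.y[2] - T.y[0], T.y[0] - T.y[1]};
  const double c[3] = {T.x[2] - T.x[1], T.x[0] - T.x[2], T.x[1] - T.x[0]};

  for (int a = 0; a < 3; ++a) {
    for (int d = 0; d < 3; ++d)
      Ke[a][d] = (b[a] * b[d] + c[a] * c[d]) / (4.0 * area);
    // Each linear shape function equals 1/3 at the centroid.
    Fe[a] = f_at_centroid * area / 3.0;
  }
}

int main() {
  // Unit right triangle as a quick check; with f = 1 each Fe entry is 1/6.
  Triangle T = {{0.0, 1.0, 0.0}, {0.0, 0.0, 1.0}};
  double Ke[3][3], Fe[3];
  elementMatrices(T, 1.0, Ke, Fe);
  for (int a = 0; a < 3; ++a)
    std::printf("%8.4f %8.4f %8.4f | %8.4f\n", Ke[a][0], Ke[a][1], Ke[a][2], Fe[a]);
  return 0;
}
```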
Figure 4: Topics of study in finite element methods. The original diagram is a
concept map connecting physical models and boundary value problems, variational
principles (weak solutions), nonconforming methods, well-posedness (existence and
uniqueness for coercive problems), Sobolev spaces (traces, embedding theorems,
Poincaré and Friedrichs inequalities), the Galerkin approximation and its
best-approximation property, approximation theory (polynomial approximation in
Sobolev spaces), a priori rates of convergence, and a posteriori error estimates
with adaptivity.

2 Computing assignment: Part 1

Instructions: For this computing assignment, you will require an implementation
of the finite element method for the Poisson equation. Please
team up in pairs. Ideally, a student who is currently taking the FEM class
(or has taken one previously) will team up with a student who has not. You
are welcome to use any non-commercial FEM code of your choice, including
the Matlab program from the FEM class.
For the benefit of those without exposure to FEM or seeking to learn
an implementation of the method in C++, a simple code is provided to
you. The source code contains only header files (.h) and therefore does
not require compiling any libraries. You will require a C++11 compliant
compiler. The example eg-poisson.cpp computes the finite element solution
of a Poisson problem. This example will be discussed in detail in class. The
main benefits of using the code provided are:
• the instructor will be able to help in case you experience any difficul-
ties or have any questions.
• you will receive half a point for each bug you find and report. You
will earn good karma by helping students who take this class in the
future.
Please do not expect me to debug code either in person or over email. I
will never look at your code except when you turn it in. If you have any
difficulties, I will be happy to help you come up with ideas to enable you to
resolve them.
Questions:
(i) Choose a 2D domain Ω and a boundary condition φ0 along ∂Ω. Find
an exact solution to the Laplace equation ∆φ = 0 over Ω satisfying
φ = φ0 along ∂Ω.
(ii) Mesh Ω with relatively uniformly sized straight-edged triangles. This
can be done in Matlab. An alternative is the open-source, easy-to-use
program gmsh.
(iii) Prescribe the function φ0 as the boundary condition in your program
and compute the finite element approximation φh of φ.
(iv) Compute the L2 (Ω) norm of the error in the solution defined as
\[
\|\phi - \phi_h\| \,\triangleq\, \left( \int_\Omega (\phi - \phi_h)^2 \, d\Omega \right)^{1/2}.
\]

(v) Subdivide your mesh. Compute the solution over the refined mesh
and the norm of the resulting error. The subdivision procedure will
be described to you in class.
(vi) Plot the L2 (Ω) norm of the error versus the mesh size h. Take the
mesh size h to be the maximum among the edge lengths in your mesh.
(vii) Highlight the trend you observe in the error plot (a sketch for estimating
the observed convergence rate follows this list).
(viii) Consult a textbook or online resource on finite elements. Quote any
result/theorem that explains how the error is expected to behave as a
function of the mesh size.
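The convergence study in questions (v)-(vii) can be post-processed with a small
sketch like the one below (an added illustration; the (h, error) values are
placeholders to be replaced with your own measurements):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Estimate the observed convergence rate from (mesh size, error) pairs: fit the
// exponent p in error ~ C * h^p between successive refinements via
// p = log(e1/e2) / log(h1/h2). All numbers below are placeholders.
int main() {
  const std::vector<double> h   = {0.20, 0.10, 0.05, 0.025};
  const std::vector<double> err = {4.1e-3, 1.0e-3, 2.6e-4, 6.5e-5};

  for (std::size_t i = 1; i < h.size(); ++i) {
    const double rate = std::log(err[i - 1] / err[i]) / std::log(h[i - 1] / h[i]);
    std::printf("h: %.3f -> %.3f   observed rate: %.2f\n", h[i - 1], h[i], rate);
  }
  // For linear triangles and a smooth solution, the L2 error is expected to
  // behave like O(h^2), so the printed rates should approach 2 (question viii).
  return 0;
}
```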
What to turn in: Either on paper or electronically or in dropbox, turn
in the following:

(a) The function implementing your choice of boundary condition.
(b) A plot of the coarsest mesh you used.
(c) A plot of the solution computed over the coarsest mesh.

(d) The function implementing mesh refinement if you coded one yourself.
(e) The function computing the L2 error if you implemented one yourself.
(f) A plot of the error versus mesh size.

(g) A result/theorem from the literature that helps justify the trend you
observed for the error as a function of the mesh size.
