Dirichlet's Principle
John Greer, Duke University, Email: [email protected]
Kirsten R. Messer, University of Nebraska, Lincoln, Email: [email protected] or [email protected]
June 30, 2002
Abstract. A dominant technique in early variational approaches was Dirichlet's principle, which lost favor after some mathematicians (notably Weierstrass) pointed out its weaknesses. Here we will discuss Dirichlet's principle, its flaws, and its salvation via direct minimization methods. [3]
Introduction
The majority of the material in this paper was generated from course notes and handouts from the VIGRE minicourse on Variational Methods and Partial Differential Equations, held at the University of Utah from 28 May to 8 June 2002 [3]. The tie between extrema of integral expressions of the form
$$\int_a^b f(x, y(x), y'(x))\,dx$$
and solutions of differential equations of the form
$$\frac{d}{dx}\left[\frac{\partial f}{\partial y'}(x, y(x), y'(x))\right] - \frac{\partial f}{\partial y}(x, y(x), y'(x)) = 0$$
was discovered independently by both Euler and Lagrange in the middle of the eighteenth century. At first mathematicians sought to exploit this relationship by solving the differential equation in order to find maxima or minima of the integral expression. Near the middle of the 19th century, however, mathematicians began looking at this relationship in reverse. They sought to find solutions of the differential equation by maximizing or minimizing the integral expression.
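For a concrete instance of this correspondence, take the arc-length integrand $f(x, y, y') = \sqrt{1 + (y')^2}$. Then $\partial f/\partial y = 0$ and $\partial f/\partial y' = y'/\sqrt{1 + (y')^2}$, so the equation above reduces to
$$\frac{d}{dx}\left[\frac{y'(x)}{\sqrt{1 + y'(x)^2}}\right] = 0,$$
which forces $y'$ to be constant: the extremals of arc length are straight lines.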
One of the first problems looked at in this way was the so-called Dirichlet problem
$$\Delta y(x) = 0, \quad x \in \Omega,$$
$$y(x) = f(x), \quad x \in \partial\Omega.$$
Solutions of this problem correspond, at least formally, to minimizers of the integral expression
$$\int_\Omega |\nabla y(x)|^2\,dx$$
over functions satisfying the boundary condition.
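The formal computation behind this correspondence is the multi-variable analogue of the Euler-Lagrange equation above: if $y$ minimizes the Dirichlet integral among functions with the prescribed boundary values, and $\varphi$ is smooth with $\varphi = 0$ on $\partial\Omega$, then, assuming enough smoothness to integrate by parts,
$$0 = \frac{d}{d\varepsilon}\bigg|_{\varepsilon = 0} \int_\Omega |\nabla(y + \varepsilon\varphi)|^2\,dx = 2\int_\Omega \nabla y \cdot \nabla\varphi\,dx = -2\int_\Omega (\Delta y)\,\varphi\,dx,$$
and since $\varphi$ is arbitrary, $\Delta y = 0$ in $\Omega$.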
This integral expression is always nonnegative, and thus it was assumed that a function $y$ minimizing the expression must exist, and hence that the PDE must have a solution. This became known as Dirichlet's principle. In 1870, however, this viewpoint was challenged when Weierstrass gave an example of an integral expression which is always nonnegative, yet fails to achieve its minimum. In the remainder of this paper, we will discuss two such counterexamples, and explore conditions that may be imposed on either the class of admissible functions or the integral expression itself to ensure that the minimum is achieved.
Counterexamples
Here we will look at two counterexamples to Dirichlet's principle, and discuss the behavior that causes them to fail.

Example 1. Minimize
$$\Phi(y) = \int_0^1 y(x)^2\,dx$$
subject to
$$y(0) = 0, \quad y(1) = 1.$$
Let $A := \{y \in C[0, 1] \mid y(0) = 0 \text{ and } y(1) = 1\}$, and consider the sequence $\{y_n\}$ defined by $y_n(x) = x^n$.
Then for each $n \in \mathbb{N}$, $y_n$ is continuous on $[0, 1]$, $y_n(0) = 0$, and $y_n(1) = 1$. Therefore each $y_n \in A$. Now, note that
$$\Phi(y_n) = \int_0^1 y_n^2\,dx = \int_0^1 x^{2n}\,dx = \frac{1}{2n+1},$$
and we see that $\lim_{n\to\infty} \Phi(y_n) = 0$. Noting that $\Phi(y) \ge 0$ for all $y$, this gives us that $\inf_A \Phi(y) = 0$ and that $\{y_n\}$ is a minimizing sequence for $\Phi$.
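It is worth noting where this minimizing sequence is heading. Pointwise, $y_n(x) = x^n \to 0$ for $0 \le x < 1$, while $y_n(1) = 1$ for every $n$; the pointwise limit is therefore discontinuous and lies outside $A$. This already suggests that the infimum will not be attained within $A$.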
So, we've identified the infimum of $\Phi$ over $A$, but does $\Phi$ achieve this infimum? I claim it does not. Suppose $y \in C[0, 1]$ minimizes $\Phi$. Then
$$\Phi(y) = 0 \;\Longrightarrow\; \int_0^1 y^2\,dx = 0 \;\Longrightarrow\; y = 0 \text{ a.e.}$$
But $y$ is continuous. Thus $y \equiv 0$, and so $y(1) = 0 \ne 1$, contradicting $y \in A$. The problem in this counterexample appears to be the domain: we have a nice, bounded minimizing sequence ($\|y_n\|_\infty \le 1$ for all $n \in \mathbb{N}$), yet $\{y_n\}$ does not converge to a point in our domain.

Now, our second counterexample:

Example 2. Minimize
$$\Phi(y) = \int_{-1}^{1} [x\,y'(x)]^2\,dx$$
subject to
$$y(-1) = a, \quad y(1) = b, \qquad a \ne b.$$
Without loss of generality, assume $b > a$. Let $A := \{y \in H^1[-1, 1] \mid y(-1) = a \text{ and } y(1) = b\}$, and consider the sequence $\{y_n\}$ defined by
$$y_n(x) = \begin{cases} a, & -1 \le x \le -\tfrac{1}{n}, \\[4pt] \dfrac{a+b}{2} + \dfrac{(b-a)n}{2}\,x, & -\tfrac{1}{n} \le x \le \tfrac{1}{n}, \\[4pt] b, & \tfrac{1}{n} \le x \le 1. \end{cases}$$
Then
$$y_n'(x) = \begin{cases} 0, & -1 \le x < -\tfrac{1}{n}, \\[4pt] \dfrac{(b-a)n}{2}, & -\tfrac{1}{n} < x < \tfrac{1}{n}, \\[4pt] 0, & \tfrac{1}{n} < x \le 1. \end{cases}$$
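Each $y_n$ is continuous (the three pieces agree at $x = \pm\tfrac{1}{n}$), piecewise linear, and satisfies $y_n(-1) = a$ and $y_n(1) = b$, so $y_n \in A$.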
We compute
$$\Phi(y_n) = \int_{-1/n}^{1/n} \left[x\,\frac{(b-a)n}{2}\right]^2 dx = \left[\frac{(b-a)n}{2}\right]^2 \int_{-1/n}^{1/n} x^2\,dx = \left[\frac{(b-a)n}{2}\right]^2 \left[\frac{x^3}{3}\right]_{-1/n}^{1/n} = \frac{(b-a)^2}{6n}.$$
So $\lim_{n\to\infty} \Phi(y_n) = 0$. As in the previous example, this tells us that $\inf_A \Phi(y) = 0$ and that $\{y_n\}$ is a minimizing sequence for $\Phi$. So, does $\Phi$ achieve its infimum? Again, I claim it does not. Note that if $y \in A$, we know $y$ is continuous and $y(-1) < y(1)$, since $y(1) - y(-1) = \int_{-1}^{1} y'\,dx = b - a > 0$. Thus $y' > 0$ on some set of positive measure, and thus $\Phi(y) > 0$. So there is no $y \in A$ such that $\Phi(y) = 0$. This time there is no problem with the domain: we are working in a nice Hilbert space. This time, the problem lies with the functional itself. Note that
$$\int_{-1}^{1} y_n'(x)^2\,dx = \left[\frac{(b-a)n}{2}\right]^2 \int_{-1/n}^{1/n} dx = \left[\frac{(b-a)n}{2}\right]^2 \cdot \frac{2}{n} = \frac{(b-a)^2\,n}{2}.$$
So $\lim_{n\to\infty} \|y_n\|_{H^1} = \infty$. Our minimizing sequence blows up. This results from the fact that $\Phi$ is not coercive.
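Recall that $\Phi$ is called coercive on $A$ if $\Phi(y) \to \infty$ whenever $\|y\|_{H^1} \to \infty$ with $y \in A$. Here exactly the opposite happens along $\{y_n\}$:
$$\|y_n'\|_{L^2}^2 = \frac{(b-a)^2\,n}{2} \to \infty \qquad \text{while} \qquad \Phi(y_n) = \frac{(b-a)^2}{6n} \to 0,$$
so a bound on $\Phi$ gives no control over the $H^1$ norm of a minimizing sequence.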
As the above counterexamples show, a functional can fail to achieve its minimum due either to a problem with the domain, or to a characteristic of the functional itself. The particular requirements that guarantee a functional will achieve its minimum were not nailed down until the early twentieth century. The following theorem and corollary outline these requirements.

Theorem 3. Let $E$ be a reflexive Banach space, $C \subseteq E$ weakly closed, and $\Phi : C \to \mathbb{R}$ weakly lower semicontinuous. Then $\Phi$ has a minimum over $C$ if and only if $\Phi$ has a bounded minimizing sequence.

Corollary 4. Let $E$ be a reflexive Banach space, $C \subseteq E$ closed and convex, and $\Phi : C \to \mathbb{R}$ weakly lower semicontinuous and coercive. Then $\Phi$ has a minimum over $C$.

We now state and prove the following related result. This theorem and its proof are taken from [1].

Theorem 5. Let $U$ be a bounded set in $\mathbb{R}^n$. Let $L : \mathbb{R}^n \times \mathbb{R} \times U \to \mathbb{R}$ be continuously differentiable in each variable and define
$$I[u] = \int_U L(\nabla u, u, x)\,dx$$
for $u \in C^1(U)$. Assume that $L$ is bounded below, and in addition that the mapping $p \mapsto L(p, z, x)$ is convex for each $z \in \mathbb{R}$, $x \in U$. Then $I[\cdot]$ is weakly lower semicontinuous on $H^1(U)$.

Proof. Choose a sequence $\{u_k\}$ with
$$u_k \rightharpoonup u \quad \text{weakly in } H^1(U), \tag{1}$$
and let $l = \liminf_{k\to\infty} I[u_k]$; passing to a subsequence if necessary, we may assume that $l = \lim_{k\to\infty} I[u_k]$. We must show that $I[u] \le l$.
Due to (1), $\{u_k\}$ is bounded in $H^1(U)$. By Rellich's compactness theorem (described in, e.g., [2]), there is a subsequence such that
$$u_k \to u \quad \text{in } L^2(U). \tag{2}$$
Once again passing to a subsequence, we have
$$u_k \to u \quad \text{almost everywhere in } U. \tag{3}$$
Pick $\varepsilon > 0$. By Egorov's Theorem (see, e.g., [4]) there exists a measurable set $E_\varepsilon$ such that
$$u_k \to u \quad \text{uniformly on } E_\varepsilon \tag{4}$$
and $\operatorname{meas}(U \setminus E_\varepsilon) \le \varepsilon$. Let
$$F_\varepsilon = \left\{x \in U \;\middle|\; |u(x)| + |\nabla u(x)| \le \frac{1}{\varepsilon}\right\}. \tag{5}$$
Clearly $\operatorname{meas}(U \setminus F_\varepsilon) \to 0$ as $\varepsilon \to 0$. Let $G_\varepsilon = E_\varepsilon \cap F_\varepsilon$, noticing that $\operatorname{meas}(U \setminus G_\varepsilon) \to 0$ as $\varepsilon \to 0$. Without loss of generality, assume $L \ge 0$. Letting $D_p L(p, z, x)$ denote the derivative of $L$ with respect to its $p$-variable, and using the convexity of $L$ in the same variable, we see
$$\begin{aligned} I[u_k] &= \int_U L(\nabla u_k, u_k, x)\,dx && (6) \\ &\ge \int_{G_\varepsilon} L(\nabla u_k, u_k, x)\,dx && (7) \\ &\ge \int_{G_\varepsilon} L(\nabla u, u_k, x)\,dx + \int_{G_\varepsilon} D_p L(\nabla u, u_k, x)\cdot(\nabla u_k - \nabla u)\,dx. && (8) \end{aligned}$$
Since $u$ and $\nabla u$ are bounded on $G_\varepsilon$,
$$\lim_{k\to\infty} \int_{G_\varepsilon} L(\nabla u, u_k, x)\,dx = \int_{G_\varepsilon} L(\nabla u, u, x)\,dx,$$
and, since $\nabla u_k \rightharpoonup \nabla u$ weakly in $L^2(U)$,
$$\lim_{k\to\infty} \int_{G_\varepsilon} D_p L(\nabla u, u_k, x)\cdot(\nabla u_k - \nabla u)\,dx = 0.$$
l = lim I[uk ]
k G
L ( u, u, x) dx.
lim I[uk ]
U
L ( u, u, x) dx.
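In particular, these results recover what Dirichlet's principle was after, now in the Sobolev setting. The Dirichlet integrand $L(p, z, x) = |p|^2$ is nonnegative and convex in $p$, so by Theorem 5 the Dirichlet integral is weakly lower semicontinuous on $H^1(\Omega)$; and, at least when the boundary datum extends to an $H^1(\Omega)$ function, the Dirichlet integral is coercive on the closed, convex class of $H^1(\Omega)$ functions taking those boundary values (a consequence of the Poincaré inequality). Corollary 4 then guarantees that a minimizer exists, which is the salvation by direct minimization referred to in the abstract.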
References
[1] Lawrence C. Evans. Partial differential equations. American Mathematical Society, Providence, RI, 1998.
[2] Gerald B. Folland. Introduction to partial differential equations. Princeton University Press, Princeton, NJ, second edition, 1995.
[3] Jean Mawhin and Klaus Schmitt. Course notes and handouts. VIGRE minicourse, University of Utah, 28 May - 8 June 2002.
[4] H. L. Royden. Real analysis. Macmillan Publishing Company, third edition.