MATH0043 2 - Calculus of Variations
$$I(y) = \int_a^b F(x, y, y')\,dx.$$
A typical problem in the calculus of variations involves finding a particular function y(x) to maximize or minimize the integral I(y) subject to boundary conditions y(a) = A and y(b) = B.
The integral I (y) is an example of a functional, which (more generally) is a mapping from a set of allowable functions to
the reals.
We say that I (y) has an extremum when I (y) takes its maximum or minimum value.
We use Pythagoras' theorem to relate ds to dx and dy: drawing a triangle with sides of length dx and dy at right angles to one another, the hypotenuse is approximately ds, and so
$$ds^2 = dx^2 + dy^2 \quad\text{and}\quad ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx.$$
This means the arc length equals
$$\int_0^1 \sqrt{1 + y'^2}\,dx.$$
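As a quick sanity check, here is a short numerical sketch (plain Python, with an illustrative `arc_length` helper that is not part of the notes) confirming that for y = x on [0, 1] this integral gives the expected value √2:

```python
import math

def arc_length(f_prime, a, b, n=10_000):
    """Approximate the arc length integral of sqrt(1 + y'(x)^2)
    over [a, b] with the midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += math.sqrt(1.0 + f_prime(x) ** 2) * h
    return total

# For y = x on [0, 1], y' = 1 everywhere, so the arc length is sqrt(2).
L = arc_length(lambda x: 1.0, 0.0, 1.0)
print(L)  # ≈ 1.414213...
```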
The proof to follow requires the integrand F(x, y, y′) to be twice differentiable with respect to each argument. What's more, the methods that we use in this module to solve problems in the calculus of variations will only find those solutions which are in $C^2[a,b]$. More advanced techniques (i.e. beyond MATH0043) are designed to overcome this last restriction. This isn't just a technicality: discontinuous extremal functions are very important in optimal control problems, which arise in engineering applications.
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0.$$
You might be wondering what $\partial F/\partial y'$ is supposed to mean: how can we differentiate with respect to a derivative? Think of it like this: F is given to you as a function of three variables, say F(u, v, w), and when we evaluate the functional I we plug in x, y(x), y′(x) for u, v, w and then integrate. The derivative $\partial F/\partial y'$ is just the partial derivative of F with respect to its third argument: to compute it, just pretend y′ is a variable.
Equally, there's an important difference between $dF/dx$ and $\partial F/\partial x$. The former is the derivative of F with respect to x, taking into account the fact that y = y(x) and y′ = y′(x) are functions of x too. The latter is the partial derivative of F with respect to its first variable, so it's found by differentiating F with respect to x and pretending that y and y′ are just variables which do not depend on x. Hopefully the next example makes this clear:
Let F(x, y, y′) = 2x + xyy′ + y′² + y. Then
$$\frac{\partial F}{\partial y'} = xy + 2y', \qquad \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = y + xy' + 2y'',$$
$$\frac{\partial F}{\partial y} = xy' + 1, \qquad \frac{\partial F}{\partial x} = 2 + yy',$$
$$\frac{dF}{dx} = 2 + yy' + xy'^2 + xyy'' + 2y''y' + y'.$$
Y satisfying the Euler–Lagrange equation is a necessary, but not sufficient, condition for I (Y ) to be an extremum. In
other words, a function Y (x) may satisfy the Euler–Lagrange equation even when I (Y ) is not an extremum.
Consider the family of functions $Y_\epsilon(x) = Y(x) + \epsilon\eta(x)$, where $\eta(x) \in C^2[a,b]$ satisfies η(a) = η(b) = 0, so that $Y_\epsilon(a) = A$ and $Y_\epsilon(b) = B$, i.e. $Y_\epsilon$ still satisfies the boundary conditions. Informally, $Y_\epsilon$ is a function which satisfies our boundary conditions and which is 'near to' Y when ϵ is small.
$$I[\epsilon] = \int_a^b F(x, Y_\epsilon, Y_\epsilon')\,dx.$$
Since I[ϵ] has an extremum at ϵ = 0,
$$\frac{dI}{d\epsilon} = 0 \quad\text{when } \epsilon = 0.$$
We can compute $\frac{dI}{d\epsilon}$ by differentiating under the integral sign:
$$\frac{dI}{d\epsilon} = \frac{d}{d\epsilon}\int_a^b F(x, Y_\epsilon, Y_\epsilon')\,dx = \int_a^b \frac{dF}{d\epsilon}(x, Y_\epsilon, Y_\epsilon')\,dx.$$
We now use the multivariable chain rule to differentiate F with respect to ϵ. For a general three-variable function
F (u(ϵ), v(ϵ), w(ϵ)) whose three arguments depend on ϵ, the chain rule tells us that
$$\frac{dF}{d\epsilon} = \frac{\partial F}{\partial u}\frac{du}{d\epsilon} + \frac{\partial F}{\partial v}\frac{dv}{d\epsilon} + \frac{\partial F}{\partial w}\frac{dw}{d\epsilon}.$$
In our case, the first argument x is independent of ϵ, so $\frac{dx}{d\epsilon} = 0$, and since $Y_\epsilon = Y + \epsilon\eta$ we have $\frac{dY_\epsilon}{d\epsilon} = \eta$ and $\frac{dY_\epsilon'}{d\epsilon} = \eta'$.
Therefore
$$\frac{dF}{d\epsilon}(x, Y_\epsilon, Y_\epsilon') = \frac{\partial F}{\partial y}\eta(x) + \frac{\partial F}{\partial y'}\eta'(x).$$
Recall that $\frac{dI}{d\epsilon} = 0$ when ϵ = 0. Since $Y_0 = Y$ and $Y_0' = Y'$,
$$0 = \int_a^b \frac{\partial F}{\partial y}(x, Y, Y')\,\eta(x) + \frac{\partial F}{\partial y'}(x, Y, Y')\,\eta'(x)\,dx.$$
Integrating the second term by parts,
$$\int_a^b \frac{\partial F}{\partial y'}\eta'(x)\,dx = \left[\frac{\partial F}{\partial y'}\eta(x)\right]_a^b - \int_a^b \frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\eta(x)\,dx.$$
The first term on the right hand side vanishes because η(a) = η(b) = 0. Substituting the second term back into the expression for dI/dϵ above,
$$\int_a^b \left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right)\eta(x)\,dx = 0.$$
By the fundamental lemma of the calculus of variations (see below), the term in brackets must vanish:
$$\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) - \frac{\partial F}{\partial y} = 0. \qquad\blacksquare$$
By considering y + g, where y is the solution from exercise 1 and g(x) is a variation in y(x) satisfying g(0) = g(1) = 0 ,
and then considering I (y + g), show explicitly that y(x) minimizes I (y) in Exercise 1 above. (Hint: use integration by
parts, and the Euler–Lagrange equation satisfied by y(x) to simplify the expression for I (y + g)).
Prove that the straight line y = x is the curve giving the shortest distance between the points (0, 0) and (1, 1).
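For the straight-line exercise, a small numerical experiment can make the conclusion plausible (this is an illustration, not the requested proof): perturb y = x by ϵ sin(πx), which preserves the boundary conditions y(0) = 0 and y(1) = 1, and compare arc lengths.

```python
import math

def length(eps, n=20_000):
    """Arc length of y(x) = x + eps*sin(pi*x) on [0, 1] by the
    midpoint rule. Every member of this family satisfies
    y(0) = 0 and y(1) = 1."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        yp = 1.0 + eps * math.pi * math.cos(math.pi * x)
        total += math.sqrt(1.0 + yp ** 2) * h
    return total

base = length(0.0)  # the straight line y = x, length sqrt(2)
for eps in (0.5, 0.1, 0.01):
    print(eps, length(eps) - base)  # every difference is positive
```

Every perturbed curve comes out longer than the straight line, as the calculus of variations predicts.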
Lemma (FLCV). Let y(x) be continuous on [a, b], and suppose that for all $\eta(x) \in C^2[a,b]$ such that η(a) = η(b) = 0 we have
$$\int_a^b y(x)\eta(x)\,dx = 0.$$
Then y(x) = 0 for all x ∈ [a, b].
Here is a sketch of the proof. Suppose, for a contradiction, that for some a < α < b we have y(α) > 0 (the case when α = a or α = b can be done similarly, but let's keep it simple). Because y is continuous, y(x) > 0 for all x in some interval (α₀, α₁) containing α. Now choose an η which is zero outside (α₀, α₁) and strictly positive inside it; we take it on trust that such an η is in $C^2[a,b]$ (it's difficult to give a formal proof without using a formal definition of continuity and differentiability). Then y(x)η(x) is continuous, vanishes outside (α₀, α₁), and is strictly positive for x ∈ (α₀, α₁). A strictly positive continuous function on an interval like this has a strictly positive integral, so this is a contradiction. Similarly we can show y(x) never takes values < 0, so it is zero everywhere on [a, b].
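The bump function in this argument can be written down concretely. The sketch below uses the illustrative choice η(x) = ((x − α₀)(α₁ − x))³ on (α₀, α₁) and zero elsewhere (the cubed factors make η and its first two derivatives vanish at α₀ and α₁, so η is C²), and checks that ∫ yη dx > 0 for a y that is positive on that interval:

```python
import math

def eta(x, a0=0.3, a1=0.6):
    """A C^2 bump: strictly positive on (a0, a1), zero elsewhere.
    Cubing makes the first and second derivatives vanish at a0, a1."""
    if a0 < x < a1:
        return ((x - a0) * (a1 - x)) ** 3
    return 0.0

def integral(f, a=0.0, b=1.0, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

# If y > 0 on (a0, a1), then y*eta is non-negative and strictly positive
# there, so its integral is strictly positive: the contradiction in the proof.
y = lambda x: math.sin(math.pi * x)       # positive on (0, 1)
print(integral(lambda x: y(x) * eta(x)))  # a strictly positive number
```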
The Brachistochrone
A classic example of the calculus of variations is to find the brachistochrone, defined as that smooth curve joining two
points A and B (not underneath one another) along which a particle will slide from A to B under gravity in the fastest
possible time.
Using the coordinate system illustrated, we can use conservation of energy to obtain the velocity v of the particle as it
makes its descent
$$\tfrac{1}{2}mv^2 = mgx,$$
so that
$$v = \sqrt{2gx}.$$
The brachistochrone is an extremal of this functional, and so it satisfies the Euler-Lagrange equation
$$\frac{d}{dx}\left(\frac{y'}{\sqrt{2gx(1 + (y')^2)}}\right) = 0, \qquad y(0) = 0, \quad y(h) = a.$$
The constant α is determined implicitly by the remaining boundary condition y(h) = a. The equation of the cycloid is
often given in the following parametric form (which can be obtained from the substitution in the integral)
$$x(\theta) = \frac{\alpha}{2}(1 - \cos 2\theta)$$
$$y(\theta) = \frac{\alpha}{2}(2\theta - \sin 2\theta)$$
and can be constructed by following the locus of the initial point of contact when a circle of radius α/2 is rolled (an angle
2θ ) along a straight line.
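We can verify numerically that this parametric curve really does make the bracketed quantity in the Euler–Lagrange equation constant. The sketch below (with illustrative values g = 9.81 and α = 1) computes y′ = (dy/dθ)/(dx/dθ) by central differences and evaluates y′/√(2gx(1 + y′²)) at several θ:

```python
import math

g = 9.81       # gravitational acceleration (illustrative)
alpha = 1.0    # illustrative value of the constant alpha

def x_of(theta):
    return (alpha / 2) * (1 - math.cos(2 * theta))

def y_of(theta):
    return (alpha / 2) * (2 * theta - math.sin(2 * theta))

def conserved(theta, h=1e-6):
    """y' / sqrt(2 g x (1 + y'^2)) evaluated on the cycloid, with
    y' = (dy/dtheta)/(dx/dtheta) computed by central differences."""
    dy = (y_of(theta + h) - y_of(theta - h)) / (2 * h)
    dx = (x_of(theta + h) - x_of(theta - h)) / (2 * h)
    yp = dy / dx
    return yp / math.sqrt(2 * g * x_of(theta) * (1 + yp ** 2))

vals = [conserved(t) for t in (0.3, 0.7, 1.1, 1.4)]
print(vals)  # all approximately equal to 1 / sqrt(2 g alpha)
```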
If the integrand F has no explicit dependence on x, extremals satisfy an equation called the Beltrami identity, which can be easier to solve than the Euler–Lagrange equation.
If I(Y) is an extremum of the functional
$$I = \int_a^b F(y, y')\,dx,$$
then Y satisfies
$$F - y'\frac{\partial F}{\partial y'} = C$$
for some constant C.
Consider
$$\frac{d}{dx}\left(F - y'\frac{\partial F}{\partial y'}\right) = \frac{dF}{dx} - y''\frac{\partial F}{\partial y'} - y'\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right).$$
Because F = F(y, y') has no explicit x-dependence, the chain rule gives
$$\frac{dF}{dx} = \frac{\partial F}{\partial y}y' + \frac{\partial F}{\partial y'}y'',$$
and substituting this in,
$$\frac{d}{dx}\left(F - y'\frac{\partial F}{\partial y'}\right) = \frac{\partial F}{\partial y}y' + \frac{\partial F}{\partial y'}y'' - \frac{\partial F}{\partial y'}y'' - y'\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right) = y'\left(\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'}\right).$$
Since Y is an extremal, it is a solution of the Euler–Lagrange equation and so this is zero for y = Y . If something has
zero derivative it is a constant, so Y is a solution of
$$F - y'\frac{\partial F}{\partial y'} = C$$
for some constant C.
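For the arc-length integrand F(y, y′) = √(1 + y′²) the Beltrami identity gives F − y′ ∂F/∂y′ = 1/√(1 + y′²) = C, which forces y′ to be constant, i.e. a straight line. Here is a small numerical check of that simplification (the function names are illustrative):

```python
import math

def F(yp):
    """Arc-length integrand F(y, y') = sqrt(1 + y'^2); no y dependence."""
    return math.sqrt(1 + yp ** 2)

def beltrami(yp):
    """F - y' * dF/dy'. Here dF/dy' = y' / sqrt(1 + y'^2), so the
    whole expression simplifies to 1 / sqrt(1 + y'^2)."""
    dF_dyp = yp / math.sqrt(1 + yp ** 2)
    return F(yp) - yp * dF_dyp

# Setting 1/sqrt(1 + y'^2) = C forces y' to be constant: a straight line.
for yp in (0.0, 0.5, 2.0):
    print(beltrami(yp), 1 / math.sqrt(1 + yp ** 2))  # equal pairs
```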
Answer:
$$y = f(x) = \frac{2\sinh x}{\sinh 1}$$
(again).
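As a check, assuming (the original functional is not restated here) that this exercise's Euler–Lagrange equation reduces to y″ = y with boundary conditions y(0) = 0 and y(1) = 2, as in the standard version of this problem, the answer can be verified numerically:

```python
import math

def y(x):
    """Candidate extremal y(x) = 2 sinh(x) / sinh(1)."""
    return 2 * math.sinh(x) / math.sinh(1)

# Assumed setup: Euler-Lagrange equation y'' = y, with y(0) = 0, y(1) = 2.
# Check y'' = y at a few interior points via a second central difference.
h = 1e-5
for x0 in (0.2, 0.5, 0.8):
    ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h ** 2
    print(abs(ypp - y(x0)))  # ≈ 0

print(y(0.0), y(1.0))  # the boundary values 0.0 and 2.0
```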
Isoperimetric Problems
So far we have dealt with boundary conditions of the form y(a) = A, y(b) = B or y(a) = A, y′(b) = B. For some problems the natural boundary conditions are expressed using an integral. The standard example is Dido's problem: if
you have a piece of rope with a fixed length, what shape should you make with it in order to enclose the largest possible
area? Here we are trying to choose a function y to maximise an integral I (y) giving the area enclosed by y, but the fixed
length constraint is also expressed in terms of an integral involving y. This kind of problem, where we seek an extremal of
some function subject to ‘ordinary’ boundary conditions and also an integral constraint, is called an isoperimetric
problem.
You will need to know about Lagrange multipliers to understand this proof: see the handout on moodle (the constant λ
will turn out to be a Lagrange multiplier).
Suppose I(Y) is a maximum or minimum subject to J(y) = L, and consider the two-parameter family of functions given by
$$Y_{\epsilon,\delta}(x) = Y(x) + \epsilon\eta(x) + \delta\zeta(x)$$
where ϵ and δ are constants and η(x) and ζ(x) are twice differentiable functions such that η(a) = ζ(a) = η(b) = ζ(b) = 0, with ζ chosen so that Y + ϵη + δζ obeys the integral constraint.
Because I has a maximum or minimum at Y (x) subject to J = L , at the point (ϵ, δ)=(0, 0) our function I [ϵ, δ] takes an
extreme value subject to J [ϵ, δ] = L .
It follows from the theory of Lagrange multipliers that a necessary condition for a function I [ϵ, δ] of two variables subject
to a constraint J [ϵ, δ] = L to take an extreme value at (0, 0) is that there is a constant λ (called the Lagrange multiplier)
such that
$$\frac{\partial I}{\partial \epsilon} + \lambda\frac{\partial J}{\partial \epsilon} = 0$$
$$\frac{\partial I}{\partial \delta} + \lambda\frac{\partial J}{\partial \delta} = 0$$
$$(F_y + \lambda G_y)(x, Y, Y') - \frac{d}{dx}(F_{y'} + \lambda G_{y'})(x, Y, Y') = 0$$
Note that to complete the solution of the problem, the initially unknown multiplier λ must be determined at the end using
the constraint J (y) = L.
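The finite-dimensional Lagrange multiplier fact invoked in the proof can be illustrated with a tiny example: take I(ϵ, δ) = ϵ + δ subject to J(ϵ, δ) = ϵ² + δ² = 1 (purely illustrative functions, not the functionals from the proof), and check that a single λ satisfies both partial-derivative conditions at the constrained maximum (1/√2, 1/√2):

```python
import math

# Illustrative objective and constraint, maximized at (1/sqrt(2), 1/sqrt(2)).
I = lambda e, d: e + d
J = lambda e, d: e ** 2 + d ** 2

e0 = d0 = 1 / math.sqrt(2)
h = 1e-6

# Partial derivatives at the constrained maximum, by central differences.
dI_de = (I(e0 + h, d0) - I(e0 - h, d0)) / (2 * h)
dJ_de = (J(e0 + h, d0) - J(e0 - h, d0)) / (2 * h)
dI_dd = (I(e0, d0 + h) - I(e0, d0 - h)) / (2 * h)
dJ_dd = (J(e0, d0 + h) - J(e0, d0 - h)) / (2 * h)

lam = -dI_de / dJ_de        # lambda chosen from the first condition
print(dI_dd + lam * dJ_dd)  # ≈ 0: the same lambda satisfies the second
```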
(Sheep pen design problem): A fence of length l must be attached to a straight wall at points A and B (a distance a apart,
where a < l) to form an enclosure. Show that the shape of the fence that maximizes the area enclosed is the arc of a
circle, and write down (but do not try to solve) the equations that determine the circle’s radius and the location of its
centre in terms of a and l.
Suggested reading
There are many introductory textbooks on the calculus of variations, but most of them go into far more mathematical detail than is required for MATH0043. If you'd like to know more of the theory, Gelfand and Fomin's Calculus of Variations is available in the library. A less technical source is chapter 9 of Boas' Mathematical Methods in the Physical Sciences. There are many short introductions to calculus of variations on the web, e.g.
https://courses.maths.ox.ac.uk/node/view_material/37762
http://www-users.math.umn.edu/ olver/ln_/cv.pdf
https://personalpages.manchester.ac.uk/staff/david.harris/MT30021/30021CalcVarLec.pdf
although all go into far more detail than we need in MATH0043. Lastly, as well as the moodle handout you may find
2. Don't confuse this with 'extremum'. The terminology is standard, e.g. Gelfand and Fomin p.15, but can be confusing.