

Lagrange multipliers, examples

Examples of the Lagrangian and Lagrange multiplier technique in action.


Background
Introduction to Lagrange multipliers
Gradient

Lagrange multiplier technique, quick recap

[Image credit: By Nexcis (Own work) [Public domain], via Wikimedia Commons]

When you want to maximize (or minimize) a multivariable function f(x, y, …) subject to the constraint that another multivariable function equals a constant, g(x, y, …) = c, follow these steps:

Step 1: Introduce a new variable λ, and define a new function L as follows:

L(x, y, …, λ) = f(x, y, …) − λ(g(x, y, …) − c)

This function L is called the "Lagrangian", and the new variable λ is referred to as a "Lagrange multiplier".

Step 2: Set the gradient of L equal to the zero vector.

∇L(x, y, …, λ) = 0  ← Zero vector

In other words, find the critical points of L.

Step 3: Consider each solution, which will look something like (x₀, y₀, …, λ₀). Plug each one into f. Or rather, first remove the λ₀ component, then plug it into f, since f does not have λ as an input. Whichever one gives the greatest (or smallest) value is the maximum (or minimum) point you are seeking.
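These three steps translate almost directly into a computer algebra system. Here is a minimal sympy sketch of the recipe; the toy objective f(x, y) = xy and constraint x + y = 4 are made up purely for illustration and are not part of the examples below.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Hypothetical toy problem (illustration only): maximize f = x*y subject to x + y = 4.
f = x * y
g = x + y
c = 4

# Step 1: build the Lagrangian L = f - lam*(g - c).
L = f - lam * (g - c)

# Step 2: set the gradient of L equal to the zero vector.
critical_points = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)

# Step 3: plug each solution (ignoring its lam component) back into f.
for point in critical_points:
    print(point, "f =", f.subs(point))   # x = y = 2, lam = 2, f = 4
```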

Example 1: Budgetary constraints


Problem

Suppose you are running a factory, producing some sort of widget that requires steel as a raw material. Your costs are predominantly human labor, which is $20 per hour for your workers, and the steel itself, which runs for $170 per ton. Suppose your revenue R is loosely modeled by the following equation:

R(h, s) = 200⋅h^(2/3)⋅s^(1/3)

h represents hours of labor
s represents tons of steel

If your budget is $20,000, what is the maximum possible revenue?

Solution

The $20 per hour labor costs and $170 per ton steel
costs tell us that the total cost of production, in terms
of h and s, is

20h + 170s

Therefore the budget of $20,000 can be translated to the constraint

20h + 170s = 20,000

Before we dive into the computation, you can get a feel for this problem using the following interactive diagram. You can see which values of (h, s) yield a given revenue (blue curve) and which values satisfy the constraint (red line).

[Interactive diagram: a blue level curve of R = 200⋅h^(2/3)⋅s^(1/3), for a revenue between $20,000 and $60,000, plotted together with the red constraint line 20h + 170s = 20,000 in the hs-plane.]
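The interactive widget itself doesn't carry over to this format, but here is a minimal matplotlib sketch (numpy and matplotlib are assumptions of the sketch) that draws roughly the same picture: a few blue level curves of R together with the red budget line.

```python
import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(1, 1500, 400)
s = np.linspace(1, 200, 400)
H, S = np.meshgrid(h, s)
R = 200 * H**(2/3) * S**(1/3)

# Level curves of the revenue function (blue) ...
plt.contour(H, S, R, levels=[20000, 40000, 51777, 60000], colors='blue')
# ... and the budget constraint 20h + 170s = 20,000 (red).
plt.plot(h, (20000 - 20*h) / 170, color='red')
plt.xlabel('h (hours of labor)')
plt.ylabel('s (tons of steel)')
plt.ylim(0, 200)
plt.show()
```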

Since we need to maximize a function R(h, s), subject to a constraint, 20h + 170s = 20,000, we begin by writing the Lagrangian function for this setup:

L(h, s, λ) = 200⋅h^(2/3)⋅s^(1/3) − λ(20h + 170s − 20,000)

Next, set the gradient ∇L equal to the 0 vector. This is the same as setting each partial derivative equal to 0. First, we handle the partial derivative with respect to h.

0 = ∂L/∂h
0 = ∂/∂h (200⋅h^(2/3)⋅s^(1/3) − λ(20h + 170s − 20,000))
0 = 200 ⋅ (2/3) ⋅ h^(−1/3)⋅s^(1/3) − 20λ

Next, we handle the partial derivative with respect to s.

0 = ∂L/∂s
0 = ∂/∂s (200⋅h^(2/3)⋅s^(1/3) − λ(20h + 170s − 20,000))
0 = 200 ⋅ (1/3) ⋅ h^(2/3)⋅s^(−2/3) − 170λ

Finally we set the partial derivative with respect to λ equal to 0, which as always is just the same thing as the constraint. In practice, you can of course just write the constraint itself, but I'll write out the partial derivative here just to make things clear.

0 = ∂L/∂λ
0 = ∂/∂λ (200⋅h^(2/3)⋅s^(1/3) − λ(20h + 170s − 20,000))
0 = −20h − 170s + 20,000

20h + 170s = 20,000

Putting it together, the system of equations we need to solve is

0 = 200 ⋅ (2/3) ⋅ h^(−1/3)⋅s^(1/3) − 20λ
0 = 200 ⋅ (1/3) ⋅ h^(2/3)⋅s^(−2/3) − 170λ
20h + 170s = 20,000

In practice, you should almost always use a computer once you get to a system of equations like this, especially because the equations will likely be more complicated than these in real applications. Once you do, you'll find that the answer is

h = 2,000/3 ≈ 666.667
s = 2,000/51 ≈ 39.2157
λ = ∛(8,000/459) ≈ 2.593

This means you should employ about 667 hours of labor, and purchase 39 tons of steel, which will give a maximum revenue of

R(667, 39) = 200(667)^(2/3)(39)^(1/3) ≈ $51,777

The interpretation of this constant λ = 2.593 is left to the next article.
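To make "use a computer" concrete, here is a minimal sketch of one way to solve this particular system numerically with scipy's fsolve (scipy and the starting guess are assumptions of this sketch, not the method used on the original page).

```python
from scipy.optimize import fsolve

def gradient_of_L(vars):
    """The three equations above: each partial derivative of L set to 0."""
    h, s, lam = vars
    return [
        200 * (2/3) * h**(-1/3) * s**(1/3) - 20 * lam,
        200 * (1/3) * h**(2/3) * s**(-2/3) - 170 * lam,
        20 * h + 170 * s - 20000,
    ]

h, s, lam = fsolve(gradient_of_L, x0=[600.0, 40.0, 2.0])
print(h, s, lam)                      # ~666.667, ~39.216, ~2.593
print(200 * h**(2/3) * s**(1/3))      # maximum revenue, ~51,777
```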

Example 2: Maximizing dot product

Problem: Let the three-dimensional vector v⃗ be defined as follows.

v⃗ = [2, 3, 1]

Consider every possible unit vector û in three-dimensional space. For which one is the dot product û ⋅ v⃗ the greatest?

The diagram below is two-dimensional, but not much changes in the intuition as we move to three dimensions.

[Figure: two-dimensional analogy to the three-dimensional problem we have, showing v⃗ and a unit vector û in the xy-plane. Which unit vector û maximizes the dot product û ⋅ v⃗?]

If you are fluent with dot products, you may already know the answer. It's one of those mathematical facts worth remembering. If you don't know the answer, all the better! Because we will now find and prove the result using the Lagrange multiplier method.

Solution:

First, we need to spell out how exactly this is a constrained optimization problem. Write the coordinates of our unit vectors as x, y and z:

û = [x, y, z]

The fact that û is a unit vector means its magnitude is 1:

||û|| = √(x² + y² + z²) = 1
x² + y² + z² = 1

This is our constraint.

Maximizing û ⋅ v⃗ means maximizing the following quantity:

[x, y, z] ⋅ [2, 3, 1] = 2x + 3y + z

The Lagrangian, with respect to this function and the constraint above, is

L(x, y, z, λ) = 2x + 3y + z − λ(x² + y² + z² − 1)

We now solve for ∇L = 0 by setting each partial derivative of this expression equal to 0.

0 = ∂/∂x (2x + 3y + z − λ(x² + y² + z² − 1)) = 2 − 2λx
0 = ∂/∂y (2x + 3y + z − λ(x² + y² + z² − 1)) = 3 − 2λy
0 = ∂/∂z (2x + 3y + z − λ(x² + y² + z² − 1)) = 1 − 2λz

Remember, setting the partial derivative with respect to λ equal to 0 just restates the constraint.

0 = ∂/∂λ (2x + 3y + z − λ(x² + y² + z² − 1)) = −x² − y² − z² + 1

Solving for x, y and z in the first three equations above, we get

x = 2 ⋅ 1/(2λ)
y = 3 ⋅ 1/(2λ)
z = 1 ⋅ 1/(2λ)

Ah, what beautiful symmetry. Each of these expressions has the same 1/(2λ) factor, and the coefficients 2, 3 and 1 match up with the coordinates of v⃗. Being good math students as we are, we won't let good symmetry go to waste. In this case, combining the three equations above into a single vector equation, we can relate û and v⃗ as follows:

û = [x, y, z] = (1/(2λ)) [2, 3, 1] = (1/(2λ)) v⃗

Therefore û is proportional to v⃗! Geometrically, this means û points in the same direction as v⃗. There are two unit vectors proportional to v⃗:

One which points in the same direction; this is the vector that maximizes û ⋅ v⃗.
One which points in the opposite direction; this one minimizes û ⋅ v⃗.

[Figure: two-dimensional analogy showing the two unit vectors û_max and û_min which maximize and minimize the quantity û ⋅ v⃗.]
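As a quick aside not spelled out above, the constraint itself pins down the value of λ: substituting x = 2⋅1/(2λ), y = 3⋅1/(2λ), z = 1⋅1/(2λ) into x² + y² + z² = 1 gives

$$\frac{2^2 + 3^2 + 1^2}{4\lambda^2} = \frac{14}{4\lambda^2} = 1 \quad\Longrightarrow\quad \lambda = \pm\frac{\sqrt{14}}{2},$$

so û = (1/(2λ)) v⃗ = ±v⃗/√14, which is exactly the pair of unit vectors described next.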

We can write these two unit vectors by normalizing v⃗, which just means dividing v⃗ by its magnitude:

û_max = v⃗ / ||v⃗||
û_min = −v⃗ / ||v⃗||

The magnitude ||v⃗|| is √(2² + 3² + 1²) = √14, so we can write the maximizing unit vector û_max explicitly like this:

û_max = [2/√14, 3/√14, 1/√14]
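If you want to sanity-check this result numerically, here is a small numpy sketch (numpy is an assumption of the sketch, not something the article uses): it confirms that û_max is a unit vector, that û_max ⋅ v⃗ = √14, and that randomly sampled unit vectors never do better.

```python
import numpy as np

v = np.array([2.0, 3.0, 1.0])
u_max = v / np.linalg.norm(v)             # the claimed maximizer: v normalized

print(np.linalg.norm(u_max))              # 1.0 -> satisfies the unit-vector constraint
print(u_max @ v)                          # 3.7417... = sqrt(14), the maximum value

# Sample many random unit vectors; none should beat u_max . v
rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
print((samples @ v).max())                # close to, but never above, sqrt(14)
```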

Just skip the Lagrangian

If you read the last article, you'll recall that the whole point of the Lagrangian L is that setting ∇L = 0 encodes the two properties a constrained maximum must satisfy:

Gradient alignment between the target function and the constraint function,

∇f(x, y) = λ∇g(x, y)

The constraint itself,

g(x, y) = c

When working through examples, you might wonder why we bother writing out the Lagrangian at all. Wouldn't it be easier to just start with these two equations rather than re-establishing them from ∇L = 0 every time? The short answer is yes, it would be easier. If you find yourself solving a constrained optimization problem by hand, and you remember the idea of gradient alignment, feel free to go for it without worrying about the Lagrangian.

In practice, it's often a computer solving these problems, not a human. Given that there are many highly optimized programs for finding when the gradient of a given function is 0, it's both clean and useful to encapsulate our problem into the equation ∇L = 0.
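To illustrate that point, here is a minimal sympy sketch (sympy and the starting guess are assumptions of the sketch) applied to Example 1: you only write down L, and the computer builds ∇L and finds where it vanishes.

```python
import sympy as sp

h, s, lam = sp.symbols('h s lam', positive=True)

# The whole problem, packaged into the single object L.
L = 200 * h**sp.Rational(2, 3) * s**sp.Rational(1, 3) - lam * (20*h + 170*s - 20000)

# grad L = 0: sympy differentiates L itself, then a numeric root-finder does the rest.
grad_L = [sp.diff(L, var) for var in (h, s, lam)]
print(sp.nsolve(grad_L, (h, s, lam), (600, 40, 2)))   # ~[666.667, 39.216, 2.593]
```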

Furthermore, the Lagrangian itself, as well as several functions deriving from it, arise frequently in the theoretical study of optimization. In this light, reasoning about the single object L rather than multiple conditions makes it easier to see the connection between high-level ideas. Not to mention, it's quicker to write down on a blackboard.

In either case, whatever your future relationship with constrained optimization might be, it is good to be able to think about the Lagrangian itself and what it does. The examples above illustrate how it works, and hopefully help to drive home the point that ∇L = 0 encapsulates both ∇f = λ∇g and g(x, y) = c in a single equation.
Questions, Tips & Thanks

In example 2, why do we put a hat on u? Is it because it is a unit vector, or because it is the vector that we are looking for?
— clara.vdw, 2 years ago (6 votes)

Reply: It is because it is a unit vector. Unit vectors will typically have a hat on them.
— u.yu16, 2 years ago (7 votes)

Use the method of Lagrange multipliers to compute the optimal investments x and y in mutual Funds 1 and 2 respectively. The expressions for x and y should not contain the Lagrange multiplier.
— Learner, about a year ago (2 votes)

Instead of constraining the optimization to a curve in the x-y plane, is there a method to constrain the optimization to a region/area of the x-y plane? Like the region x² + y² ≤ 2, which is all the points inside the circle, including the boundary.
— hamadmo77, 9 months ago (1 vote)

For problems where the number of constraints is one less than the number of variables (i.e. every example we've gone over except the unit vector one), is there a reason why we can't just solve the system of equations of the function and constraint? I.e. the result is a single-variable function; take its derivative and set it to 0.
— David O'Connor, about a year ago (1 vote)

How do you maximize the function f(x, y) = x² − y² + 3 subject to the constraint 2x + y = 3?
— jam008, 10 months ago (1 vote)

Hello, and thank you for your amazing site. Can you please explain why we don't use the whole Lagrangian but only the first part? Why don't we use the 2nd derivatives?
— nikostogas, 3 months ago (1 vote)

What should we do if we have constraints as well as boundaries and we need a local extremum?
— Garbage can jr., 2 years ago (1 vote)

At the start of example 1, it would be good if you mentioned that the problem is very hard to solve completely by hand, so that people don't waste their time.
— Zaz Brown, about a year ago (0 votes)

Reply: It's indeed tricky, but I found it useful and good practice.
— aflenoir, 10 months ago (1 vote)

The temperature f(x, y, z) at any point in space is f = 400xyz². Find the highest temperature on the surface of the sphere x² + y² + z² = 1.
— gakhil1018, 2 years ago (0 votes)
