
Advanced Macroeconomics 2024-2025

Euler 1744 optimization, a complement

Etienne Wasmer

NYUAD

March 10, 2025

Optimization

▶ Static case
▶ Dynamic case
▶ Transversality conditions
▶ Source: the excellent Further Mathematics for Economic Analysis, Knut Sydsaeter, Peter
Hammond, Atle Seierstad, Arne Strom, Pearson Ed., ISBN-13: 9780273713289

Optimization

▶ Static case
▶ Dynamic case
▶ Transversality conditions

Static optimization
▶ Suppose that you want to maximize a function of n variables

f(x_1, x_2, \dots, x_n)

subject to m constraints

g_j(x_1, x_2, \dots, x_n) = b_j, \quad j = 1, \dots, m < n

▶ The standard method is to generalize the problem: with more unknowns, the solution of this more general problem satisfies conditions
▶ which are necessary for your original problem, so that you know the shape of candidate solutions.
▶ Restrictions then lead to necessary and sufficient conditions.
▶ Adopt vector notation x = (x_1, x_2, ..., x_n), b = (b_1, ..., b_m) and g(x) = (g_1(x), g_2(x), ..., g_m(x)).
▶ Define an admissible solution as a vector x satisfying the constraints.
Static optimization, Lagrangian

▶ Introduce the Lagrangian

L(x) = f(x) - \sum_{j=1}^{m} \lambda_j \left[ g_j(x) - b_j \right]

▶ It is a function of x, but the multipliers λ_j are also inputs.
▶ The necessary conditions for optimality of the Lagrangian are:

\frac{\partial L(x)}{\partial x_i} = 0 \iff \frac{\partial f}{\partial x_i} = \sum_{j=1}^{m} \lambda_j \frac{\partial g_j(x)}{\partial x_i}
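To make the recipe concrete, here is a minimal numerical sketch (my own illustration, not from the slides): it solves the Lagrangian first-order conditions above for the toy problem max f(x_1, x_2) = x_1 x_2 subject to x_1 + x_2 = 4, whose analytic solution is x_1 = x_2 = 2 with multiplier λ = 2.

```python
# Hypothetical toy problem: max x1*x2  s.t.  x1 + x2 = 4.
from scipy.optimize import fsolve

def foc(z):
    x1, x2, lam = z
    return [
        x2 - lam,        # dL/dx1 = df/dx1 - lambda * dg/dx1 = 0
        x1 - lam,        # dL/dx2 = df/dx2 - lambda * dg/dx2 = 0
        x1 + x2 - 4.0,   # the constraint g(x) = b
    ]

x1, x2, lam = fsolve(foc, x0=[1.0, 1.0, 1.0])
print(x1, x2, lam)  # approx. 2.0 2.0 2.0: lam is the shadow price of b
```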
Necessary and sufficient conditions, static optimization
1. Part 1: Necessary condition
▶ Suppose that x∗ is an optimum of the initial problem and that the functions f and g_j, j = 1, ..., m, are differentiable around x∗.
▶ Suppose that the m × n matrix

\begin{pmatrix}
\frac{\partial g_1(x^*)}{\partial x_1} & \dots & \frac{\partial g_1(x^*)}{\partial x_n} \\
\vdots & & \vdots \\
\frac{\partial g_m(x^*)}{\partial x_1} & \dots & \frac{\partial g_m(x^*)}{\partial x_n}
\end{pmatrix}

has rank m.
▶ Then there exist unique Lagrange multipliers satisfying the FOC of the Lagrangian.
2. Part 2: Sufficiency condition
▶ If Lagrange multipliers exist and x∗∗ maximizes the Lagrangian,
▶ and if the Lagrangian L(x) is concave in x and the set over which the functions are defined is convex,
▶ then x∗∗ solves the original problem.
Concavity of a function: definition
▶ Concave means, in general, for a real differentiable function G(y) of n variables on an open convex set S, that

G(y') - G(y) \le \nabla G(y) \cdot (y' - y)

for all y, y′ in the domain S of G,
▶ where · is the scalar product
▶ and ∇ is the gradient vector, that is,

\nabla G(y) = \left( \frac{\partial G(y)}{\partial y_1}, \dots, \frac{\partial G(y)}{\partial y_i}, \dots, \frac{\partial G(y)}{\partial y_n} \right)

▶ so that

G(y') - G(y) \le \sum_{i=1}^{n} \frac{\partial G(y)}{\partial y_i} (y_i' - y_i)

for all y, y′.
▶ In short, in one dimension: starting from any point y, the function subsequently grows less fast than if it followed its slope at y; it lies everywhere below its tangents.
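A quick numerical sanity check of this gradient inequality (my own sketch; the function G(y) = −|y|² is an assumed example, for which the inequality reduces to 0 ≤ |y′ − y|²):

```python
# Verify G(y') - G(y) <= grad G(y) . (y' - y) for the concave G(y) = -|y|^2 on R^2.
import numpy as np

G = lambda y: -np.dot(y, y)
gradG = lambda y: -2.0 * y

rng = np.random.default_rng(0)
for _ in range(10_000):
    y, yp = rng.normal(size=2), rng.normal(size=2)
    # small slack for floating-point rounding
    assert G(yp) - G(y) <= gradG(y) @ (yp - y) + 1e-12
print("gradient inequality verified on 10,000 random pairs")
```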
Necessary and sufficient conditions, static optimization,
proof of part 2: sufficiency

▶ A solution x∗∗ to the FOCs of the Lagrangian is an interior maximum if the Lagrangian is concave.
▶ This implies that L(x∗∗) ≥ L(x) for all x.
▶ If x is admissible, then the constraints are satisfied, so that g_j(x) = b_j and hence L(x) = f(x).
▶ Hence, f(x∗∗) = L(x∗∗) ≥ L(x) = f(x) for any admissible x (using that x∗∗ is itself admissible).
▶ Then x∗∗ solves the original problem.
▶ NB: if x∗ maximizes the original problem, it does not always maximize the Lagrangian.
▶ NB2: the Lagrange multipliers are the shadow prices of the associated resources.
Necessary and sufficient conditions, static optimization,
proof of part 1

▶ The proof of the first part involves the rank condition. It uses the implicit function theorem.
▶ One then solves for a set of m variables satisfying the constraints, as functions of the n − m remaining variables.
▶ One can then optimize over the remaining free variables to obtain the optimum.
If constraints are inequality conditions
▶ Suppose now that you want to maximize a function of n variables

f(x_1, x_2, \dots, x_n)

subject to m constraints

g_j(x_1, x_2, \dots, x_n) \le b_j, \quad j = 1, \dots, m < n

▶ Still introduce the Lagrangian

L(x) = f(x) - \sum_{j=1}^{m} \lambda_j \left[ g_j(x) - b_j \right]

▶ Same FOC (called the Kuhn-Tucker conditions); there are also complementary slackness conditions: λ_j ≥ 0, with λ_j = 0 whenever g_j(x) < b_j.
▶ A sufficiency condition is that the Lagrangian is concave when the multipliers satisfy the FOC.
▶ Another sufficiency condition is that f is concave and the g_j, j = 1, ..., m, are quasi-convex. NB: f(x) = x² and f(x) = |x| are convex, while f(x) = √|x| is only quasi-convex.
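A hedged sketch of the inequality-constrained case (my own toy numbers): scipy's SLSQP handles the Kuhn-Tucker conditions internally; here the constraint binds at the optimum, consistent with a strictly positive multiplier.

```python
# Toy problem: max f(x) = -(x1-1)^2 - (x2-2)^2  s.t.  x1 + x2 <= 2.
# The unconstrained maximum (1, 2) violates the constraint, so it binds:
# the solution is x = (0.5, 1.5).
import numpy as np
from scipy.optimize import minimize

neg_f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2              # minimize -f
cons = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]  # g(x) <= b as fun >= 0

res = minimize(neg_f, x0=np.zeros(2), constraints=cons, method="SLSQP")
print(res.x)  # approx. [0.5, 1.5]; complementary slackness: constraint binds
```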
Optimization

▶ Static case
▶ Dynamic case
▶ Transversality conditions

A dynamic environment

▶ A country produces Y = f(K) with f increasing and concave.
▶ The country consumes and invests:

Y = f(K) = C + \dot K

▶ NB: all variables are functions of time, K = K(t) as well as C = C(t), and so are the time derivatives K̇, Ċ.
▶ The country has collective preferences U(C), increasing and concave, and discounts the future exponentially with e^{−σt}.
A dynamic environment

▶ Let T be a final period.
▶ The objective is to find the best value of

\int_0^T U(C) e^{-\sigma t} \, dt

▶ Equivalent to maximizing

\int_0^T U[f(K) - \dot K] e^{-\sigma t} \, dt \qquad (1)
A dynamic environment

▶ A more general version of equation (1) is

\int_{t_0}^{t_1} F(t, x, \dot x) \, dt \qquad (2)

▶ with initial and final (landing) conditions x(t_0) = x_0, x(t_1) = x_1.
▶ Problem: we know how to maximize a function with respect to a scalar or even a vector x: FOC or Lagrangian.
▶ But maximizing a functional, i.e. an operator on functions, with respect to a function itself: we do not really know how. What is the derivative with respect to a function?
Here comes Euler!

▶ In 1744, Euler showed that a solution x(t) to (2) must satisfy

\frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) = 0 \qquad (3)

▶ Note the total derivative of F above. In the most general case,

\frac{d}{dt} \left( \frac{\partial F(t, x, \dot x)}{\partial \dot x} \right) = \frac{\partial^2 F}{\partial t \, \partial \dot x} + \frac{\partial^2 F}{\partial x \, \partial \dot x} \dot x + \frac{\partial^2 F}{\partial \dot x \, \partial \dot x} \ddot x \qquad (4)
Here comes Euler!

▶ Written out in full, the Euler equation is therefore:

\frac{\partial F}{\partial x} = \frac{\partial^2 F}{\partial t \, \partial \dot x} + \frac{\partial^2 F}{\partial x \, \partial \dot x} \dot x + \frac{\partial^2 F}{\partial \dot x \, \partial \dot x} \ddot x \qquad (5)

▶ This is a second-order differential equation in x(t) if \frac{\partial^2 F}{\partial \dot x \, \partial \dot x} \ne 0.
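To see equation (5) at work, here is a minimal sketch (my own example, with an assumed integrand): for F(t, x, ẋ) = −(ẋ² + x²)/2, the Euler equation reduces to ẍ = x, a second-order ODE we can solve as a boundary-value problem and compare with the closed form sinh(t)/sinh(1).

```python
# Euler equation for F = -(xdot^2 + x^2)/2:  dF/dx - d/dt(dF/dxdot) = -x + xddot = 0,
# i.e. xddot = x, solved with landing conditions x(0) = 0, x(1) = 1.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(t, y):              # y[0] = x, y[1] = xdot
    return np.vstack([y[1], y[0]])

def bc(ya, yb):             # x(0) = 0, x(1) = 1
    return np.array([ya[0], yb[0] - 1.0])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, t, np.vstack([t, np.ones_like(t)]))
err = np.max(np.abs(sol.sol(t)[0] - np.sinh(t) / np.sinh(1.0)))
print(f"max deviation from sinh(t)/sinh(1): {err:.2e}")
```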
Proof 1/N: Necessary condition

▶ Very important logic here in the proof: the calculus of variations.
▶ It leads to a proof that the Euler equation is a necessary condition for optimality, and identifies sufficiency conditions.
▶ Denote by x∗ = x∗(t) a solution to the original problem.
▶ Consider a twice differentiable variation function μ(t) such that μ(t_0) = μ(t_1) = 0, and an arbitrary constant α.
▶ Given this, the function x = x∗ + αμ is a candidate for the problem, since it has the same initial and final values.
▶ Idea: demonstrate that if the Euler equation is not satisfied, the maximum is not achieved for small values of α, i.e. for small variations.
Proof 2/N: Necessary condition

▶ Denote by I(α) the value

I(\alpha) = \int_{t_0}^{t_1} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) \, dt \qquad (6)

▶ By definition, I(0) is the highest value, since x∗ is a solution.
▶ Therefore, one knows that I′(α) = 0 at α = 0.
Proof 3/N: Necessary condition

▶ That's almost it:

I'(\alpha) = \int_{t_0}^{t_1} \frac{\partial}{\partial \alpha} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) \, dt

▶ First calculate that partial derivative under the integral:

\frac{\partial}{\partial \alpha} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) = \frac{\partial F}{\partial x} \mu + \frac{\partial F}{\partial \dot x} \dot\mu

▶ Apply it at α = 0:

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial}{\partial x} F(t, x^*, \dot x^*) \, \mu + \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \dot\mu \right] dt \qquad (7)
Proof 4/N: Necessary condition

▶ Now comes the boring part: integration by parts.
▶ The formula is

\int_{t_0}^{t_1} u v' \, dt = [u v]_{t_0}^{t_1} - \int_{t_0}^{t_1} u' v \, dt

▶ Apply it to the second term of equation (7) above, with v(t) = μ(t):

\int_{t_0}^{t_1} \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \dot\mu \, dt = \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1} - \int_{t_0}^{t_1} \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \mu \, dt
Proof 5/N: Necessary condition

▶ Substitute back into I′(0):

I'(0) = \int_{t_0}^{t_1} \frac{\partial}{\partial x} F(t, x^*, \dot x^*) \, \mu \, dt - \int_{t_0}^{t_1} \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \mu \, dt + \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1}

▶ that is,

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial}{\partial x} F(t, x^*, \dot x^*) - \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \right] \mu \, dt + \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1}
Proof 6/N: Necessary condition

▶ First use that μ is 0 at the beginning t_0 and the end t_1 of the time interval. So the last term disappears, and one is left with only

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial}{\partial x} F(t, x^*, \dot x^*) - \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \right] \mu \, dt \qquad (8)
Proof 7/N: Necessary condition

▶ Second: this has to be true for all twice differentiable functions μ(t) (no slope discontinuity); therefore it must be that the integrand of (8) is zero (this step is known as the fundamental lemma of the calculus of variations):

\frac{\partial}{\partial x} F(t, x^*, \dot x^*) - \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) = 0

▶ That is, x∗(t) satisfies the Euler equation.
23/47
Proof 8/N: Sufficiency

▶ Sufficiency is another matter. We need to show that the objective functional, evaluated at an x∗∗ satisfying the Euler equation, is at least as high as at any other admissible function.
▶ For that, we need to assume that F(t, x, ẋ) is concave in (x, ẋ).
▶ Why? Suppose a function x∗∗(t) satisfies the Euler equation and the two limit conditions.
▶ And compare it to all possible functions x(t) that only satisfy the two limit conditions, so that x∗∗(t_0) = x(t_0), and the same at t_1.
Proof 9/N: Sufficiency
▶ Recall: concave means, in general, for a real differentiable function G(y) of n variables on an open convex set S, that

G(y') - G(y) \le \nabla G(y) \cdot (y' - y)

for all y, y′ in the domain S of G,
▶ where · is the scalar product
▶ and ∇ is the gradient vector, that is,

\nabla G(y) = \left( \frac{\partial G(y)}{\partial y_1}, \dots, \frac{\partial G(y)}{\partial y_i}, \dots, \frac{\partial G(y)}{\partial y_n} \right)

▶ so that

G(y') - G(y) \le \sum_{i=1}^{n} \frac{\partial G(y)}{\partial y_i} (y_i' - y_i)

for all y, y′.
Proof 10/N: Sufficiency
▶ Get back to the proof. The concavity of F in (x, ẋ) implies that:

F(t, x, \dot x) - F(t, x^{**}, \dot x^{**}) \le \frac{\partial F(t, x^{**}, \dot x^{**})}{\partial x} (x - x^{**}) + \frac{\partial F(t, x^{**}, \dot x^{**})}{\partial \dot x} (\dot x - \dot x^{**})

▶ Now insert the Euler equation:

\frac{\partial}{\partial x} F(t, x^{**}, \dot x^{**}) = \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^{**}, \dot x^{**}) \right)

▶ therefore

F(t, x, \dot x) - F(t, x^{**}, \dot x^{**}) \le \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^{**}, \dot x^{**}) \right) (x - x^{**}) + \frac{\partial F(t, x^{**}, \dot x^{**})}{\partial \dot x} (\dot x - \dot x^{**})
Proof 11/N: Sufficiency
▶ Inverting the inequality leads to:

F(t, x^{**}, \dot x^{**}) - F(t, x, \dot x) \ge \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \right) (x^{**} - x) + \frac{\partial F(t, x^{**}, \dot x^{**})}{\partial \dot x} (\dot x^{**} - \dot x)

▶ Now the miracle: the right-hand side can be rewritten as:

F(t, x^{**}, \dot x^{**}) - F(t, x, \dot x) \ge \frac{d}{dt} \left[ \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (x^{**} - x) \right] \qquad (9)

▶ Why? Just expand, using the product rule:

\frac{d}{dt} \left[ \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (x^{**} - x) \right] = \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (\dot x^{**} - \dot x) + (x^{**} - x) \, \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \right)
Proof 12/N: Sufficiency

▶ Start from the inequality (9),

F(t, x^{**}, \dot x^{**}) - F(t, x, \dot x) \ge \frac{d}{dt} \left[ \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (x^{**} - x) \right]

and integrate it between t_0 and t_1:

\int_{t_0}^{t_1} \left[ F(t, x^{**}, \dot x^{**}) - F(t, x, \dot x) \right] dt \ge \int_{t_0}^{t_1} \frac{d}{dt} \left[ \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (x^{**} - x) \right] dt = \left[ \frac{\partial F}{\partial \dot x}(t, x^{**}, \dot x^{**}) \cdot (x^{**} - x) \right]_{t_0}^{t_1}
Proof 13/N: Sufficiency

▶ And from the limit (boundary) conditions on both x and x∗∗, the right-hand side is zero.
▶ So we have just proved that

\int_{t_0}^{t_1} \left[ F(t, x^{**}, \dot x^{**}) - F(t, x, \dot x) \right] dt \ge 0

for all admissible functions,
▶ which implies that x∗∗(t) is indeed a solution to the original problem. QED (quod erat demonstrandum).
Optimization

▶ Static case
▶ Dynamic case (continued)
▶ Transversality conditions

Euler is back!

▶ Euler found that an optimal solution must satisfy:

\frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) = 0 \qquad (10)

with

\frac{d}{dt} \left( \frac{\partial F}{\partial \dot x}(t, x, \dot x) \right) = \frac{\partial^2 F}{\partial t \, \partial \dot x} + \frac{\partial^2 F}{\partial x \, \partial \dot x} \dot x + \frac{\partial^2 F}{\partial \dot x \, \partial \dot x} \ddot x \qquad (11)
Apply it to our "nation's problem"

▶ The problem was to find the best capital accumulation policy, and then deduce consumption:

\max_K \int_0^T U[f(K) - \dot K] e^{-\sigma t} \, dt \qquad (12)

▶ That is,

F(t, K, \dot K) = U[f(K) - \dot K] e^{-\sigma t}

▶ Euler says

\frac{\partial F}{\partial K} = \frac{\partial^2 F}{\partial t \, \partial \dot K} + \frac{\partial^2 F}{\partial K \, \partial \dot K} \dot K + \frac{\partial^2 F}{\partial \dot K \, \partial \dot K} \ddot K \qquad (13)
Apply it to our "nation's problem"

▶ That is, with F(t, K, K̇) = U[f(K) − K̇] e^{−σt}:

\frac{\partial F}{\partial K} = f'(K) \, U'[f(K) - \dot K] \, e^{-\sigma t}

\frac{\partial F}{\partial \dot K} = -U'[f(K) - \dot K] \, e^{-\sigma t}

▶ And so the RHS of (13) is

\frac{\partial^2 F}{\partial t \, \partial \dot K} + \frac{\partial^2 F}{\partial K \, \partial \dot K} \dot K + \frac{\partial^2 F}{\partial \dot K \, \partial \dot K} \ddot K = \sigma U'(C) e^{-\sigma t} - f'(K) U''(C) e^{-\sigma t} \dot K + U''(C) e^{-\sigma t} \ddot K
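These derivatives can be cross-checked symbolically. A minimal sketch with sympy, keeping U and f abstract (the variable names are my own; sympy's euler_equations returns exactly the condition ∂F/∂K − d/dt(∂F/∂K̇) = 0):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, sigma = sp.symbols("t sigma", positive=True)
K = sp.Function("K")(t)
# F(t, K, Kdot) = U[f(K) - Kdot] * exp(-sigma*t), with U and f abstract
F = sp.Function("U")(sp.Function("f")(K) - K.diff(t)) * sp.exp(-sigma * t)

eq, = euler_equations(F, [K], [t])  # dF/dK - d/dt(dF/dKdot) = 0
print(sp.simplify(eq.lhs))          # reproduces the terms computed above
```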
Apply it to our "nation's problem"

▶ Combining, and dividing by U'(C) e^{−σt}:

f'(K) - \sigma = -\frac{U''(C)}{U'(C)} \left[ f'(K) \dot K - \ddot K \right]

▶ How do we continue?
Apply it to our "nation's problem"

▶ Write it again:

f'(K) - \sigma = -\frac{U''(C)}{U'(C)} \left[ f'(K) \dot K - \ddot K \right]

▶ Now use:

C + \dot K = f(K)

▶ That is, differentiating with respect to time:

\dot C + \ddot K = f'(K) \dot K

▶ So we can substitute and get:

f'(K) - \sigma = -\frac{U''(C)}{U'(C)} \dot C

f'(K) - \sigma = -\frac{C U''(C)}{U'(C)} \, \frac{\dot C}{C}

\frac{\dot C}{C} = \frac{f'(K) - \sigma}{\theta}

our regular Euler equation, where θ ≡ −CU''(C)/U'(C) is the relative risk aversion (a constant under CRRA utility).
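A hedged simulation sketch of these dynamics (my own parameterization; f(K) = K^α and the CRRA coefficient θ are assumptions): consumption stops growing exactly where f′(K∗) = σ, so starting the system at that steady state keeps both series flat.

```python
import numpy as np

alpha, sigma, theta = 0.3, 0.05, 2.0
f = lambda K: K**alpha
f_prime = lambda K: alpha * K**(alpha - 1.0)

K_star = (alpha / sigma)**(1.0 / (1.0 - alpha))  # f'(K*) = sigma
C_star = f(K_star)                               # Kdot = 0 at the steady state

# Forward-Euler simulation from the steady state: both series stay flat.
K, C, dt = K_star, C_star, 0.01
for _ in range(1000):
    K += dt * (f(K) - C)                          # resource constraint Kdot = f(K) - C
    C += dt * C * (f_prime(K) - sigma) / theta    # Euler equation Cdot/C = (f'(K)-sigma)/theta
print(K / K_star, C / C_star)  # both approx. 1.0
```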
Apply it to our "nation's problem"

▶ Why? We had earlier this semester:

\frac{\dot c_t}{c_t} = \frac{\log(\beta R)}{\theta}

▶ log(β) = −log(1 + σ) ≈ −σ when the time period goes to zero.
▶ log(R) = log(1 + f'(K)) ≈ f'(K) when, again, the time period over which returns are expressed goes to zero.
▶ Implication: if the return to capital is too large, consumption will increase; the reverse if it is too small.
▶ Consumption stops growing at the point where the return to investment equals the discount rate.
Optimization

▶ Static case
▶ Dynamic case
▶ Transversality conditions

Return to the original problem: transversality

▶ One wants to maximize the following objective:

\int_{t_0}^{t_1} F(t, x, \dot x) \, dt \qquad (14)

▶ with initial condition x(t_0) = x_0.
▶ The final (landing) condition is either i) x(t_1) free, or ii) x(t_1) ≥ x_1.
▶ Examples: the capital stock cannot become negative, or no-Ponzi-game conditions, or debt cannot be too large (the limit cannot be too negative).
Return to the original problem: transversality
Theorem
1. Necessary condition:
▶ If x∗ is an optimal solution to the problem, it must solve the Euler equation.
▶ In case i) of a free terminal value x∗(t_1), one has

\frac{\partial F}{\partial \dot x}(t = t_1) = 0

▶ In case ii), this is only

\frac{\partial F}{\partial \dot x}(t = t_1) \le 0

with possibly strict inequality if x∗(t_1) = x_1, and equality if x∗(t_1) > x_1.
2. Sufficient condition:
▶ Reciprocally, if F(t, x, ẋ) is concave in (x, ẋ), then an admissible solution x∗∗ satisfying the Euler equation and the transversality condition solves the original problem.
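A minimal sketch of case i) (my own example, reusing the assumed integrand F = −(ẋ² + x²)/2): with x(0) = 1 and a free terminal value, the transversality condition ∂F/∂ẋ = −ẋ = 0 at t₁ = 1 replaces the landing condition, and the closed-form solution is cosh(1 − t)/cosh(1).

```python
# Solve xddot = x with x(0) = 1 and the transversality condition xdot(1) = 0.
import numpy as np
from scipy.integrate import solve_bvp

rhs = lambda t, y: np.vstack([y[1], y[0]])          # y[0] = x, y[1] = xdot
bc = lambda ya, yb: np.array([ya[0] - 1.0, yb[1]])  # x(0)=1, xdot(1)=0 (transversality)

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, t, np.vstack([np.ones_like(t), np.zeros_like(t)]))
err = np.max(np.abs(sol.sol(t)[0] - np.cosh(1.0 - t) / np.cosh(1.0)))
print(f"max deviation from cosh(1-t)/cosh(1): {err:.2e}")
```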
Return to Proof 1/N: Necessary condition

▶ Calculus of variations again.
▶ Denote by x∗ = x∗(t) a solution to the original problem.
▶ Consider a twice differentiable variation function μ(t) such that μ(t_0) = 0 (but not necessarily at t_1), and an arbitrary constant α.
▶ Given this, the function x = x∗ + αμ is a candidate for the problem, since it has the same initial value.
Return to Proof 2/N: Necessary condition

▶ Denote by I(α) the value

I(\alpha) = \int_{t_0}^{t_1} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) \, dt \qquad (15)

▶ By definition, I(0) is the highest value, since x∗ is a solution.
▶ Therefore, one knows that I′(α) = 0 at α = 0.
Return to Proof 3/N: Necessary condition

▶ That's almost it:

I'(\alpha) = \int_{t_0}^{t_1} \frac{\partial}{\partial \alpha} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) \, dt

▶ First calculate that partial derivative under the integral:

\frac{\partial}{\partial \alpha} F(t, x^* + \alpha \mu, \dot x^* + \alpha \dot\mu) = \frac{\partial F}{\partial x} \mu + \frac{\partial F}{\partial \dot x} \dot\mu

▶ Apply it at α = 0:

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial}{\partial x} F(t, x^*, \dot x^*) \, \mu + \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \dot\mu \right] dt \qquad (16)
Return to Proof 4/N: Necessary condition

▶ Now comes the boring part: integration by parts.
▶ The formula is

\int_{t_0}^{t_1} u v' \, dt = [u v]_{t_0}^{t_1} - \int_{t_0}^{t_1} u' v \, dt

▶ Apply it to the second term of equation (16) above:

\int_{t_0}^{t_1} \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \dot\mu \, dt = \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1} - \int_{t_0}^{t_1} \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \mu \, dt
Return to Proof 5/N: Necessary condition

▶ Substitute back into I′(0):

I'(0) = \int_{t_0}^{t_1} \frac{\partial}{\partial x} F(t, x^*, \dot x^*) \, \mu \, dt - \int_{t_0}^{t_1} \frac{d}{dt} \left( \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \right) \mu \, dt + \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1}

▶ that is,

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) \right] \mu \, dt + \left[ \frac{\partial}{\partial \dot x} F(t, x^*, \dot x^*) \, \mu \right]_{t_0}^{t_1}
Return to Proof 6/N: Necessary condition

▶ First use that μ is 0 at the beginning of the time interval, but not necessarily at the end. So the last term half-disappears: the t_0 part vanishes and everything depends on t_1:

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) \right] \mu \, dt + \mu(t_1) \, \frac{\partial F}{\partial \dot x}(t_1)
Return to Proof 6/N, continued: Necessary condition


I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) \right] \mu \, dt + \mu(t_1) \, \frac{\partial F}{\partial \dot x}(t_1)

▶ Therefore: two cases.
▶ First: if the terminal value x(t_1) is free, any μ(t_1) is admissible, so one needs to land on

\frac{\partial F}{\partial \dot x}(t_1) = 0
Return to Proof 6/N, continued: Necessary condition

▶ Second case:

I'(0) = \int_{t_0}^{t_1} \left[ \frac{\partial F}{\partial x} - \frac{d}{dt} \left( \frac{\partial F}{\partial \dot x} \right) \right] \mu \, dt + \mu(t_1) \, \frac{\partial F}{\partial \dot x}(t_1)

▶ If one imposes instead that x∗(t_1) is above x_1, there are two sub-cases: x∗(t_1) can be strictly above or exactly at the limit.
▶ If the candidate solution x∗ is strictly above the limit, then again the partial derivative of F with respect to ẋ must be zero at t_1 for I′(0) to be zero.
▶ If the candidate solution x∗ is exactly at the limit, then admissibility requires x(t_1) = x∗(t_1) + αμ(t_1) ≥ x_1, which implies μ(t_1) ≥ 0 for all positive α. Optimality then only requires I(α) ≤ I(0) for such variations, i.e. I′(0) ≤ 0. Since the Euler equation makes the integral term vanish and μ(t_1) ≥ 0 is arbitrary, the partial derivative of F with respect to ẋ at t_1 has to be non-positive.
