Cost Minimization and the Cost Function

Juan Manuel Puerta

October 5, 2009

So far we have focused on profit maximization; we can also look at a different problem, the cost minimization problem. This is useful for several reasons:
A different look at the supply behavior of competitive firms
A way to model the supply behavior of firms that do not face competitive output prices
(Pedagogic) We get to use the tools of constrained optimization
Cost Minimization Problem: $\min_x wx$ such that $f(x) = y$
Begin by setting up the Lagrangian: $\mathcal{L}(\lambda, x) = wx - \lambda(f(x) - y)$
Differentiating with respect to $x_i$ and $\lambda$, you get the first order conditions,
$$w_i - \lambda \frac{\partial f(x^*)}{\partial x_i} = 0 \quad \text{for } i = 1, 2, \dots, n$$
$$f(x^*) = y$$
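A small numerical sketch (not from the original slides) may help fix ideas: it solves the cost minimization problem for a hypothetical Cobb-Douglas technology $f(x_1, x_2) = \sqrt{x_1 x_2}$ with assumed prices $w = (2, 3)$ and output $y = 10$, and checks that the factor price ratio equals the ratio of marginal products at the solution.

```python
# A numerical sketch (illustrative; assumed technology and prices, not from
# the slides): solve min_x w.x s.t. f(x) = y for Cobb-Douglas f = sqrt(x1*x2).
import numpy as np
from scipy.optimize import minimize

w = np.array([2.0, 3.0])                 # assumed input prices
y = 10.0                                 # assumed output requirement

f = lambda x: np.sqrt(x[0] * x[1])       # production function
res = minimize(lambda x: w @ x,          # objective: total cost w.x
               x0=np.array([5.0, 5.0]),
               constraints=[{"type": "eq", "fun": lambda x: f(x) - y}],
               bounds=[(1e-6, None), (1e-6, None)])

x_star = res.x
print("conditional factor demands x(w, y):", x_star)
print("minimum cost c(w, y):", res.fun)
# FOC check: w1/w2 should equal f1/f2 = x2/x1 for this technology.
print("w1/w2 =", w[0] / w[1], "  f1/f2 =", x_star[1] / x_star[0])
```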

Letting $Df(x)$ denote the gradient of $f(x)$, we can write the $n$ derivative conditions in matrix notation as
$$w = \lambda Df(x^*)$$
Dividing the $i$th condition by the $j$th condition, we get the familiar first order condition,
$$\frac{w_i}{w_j} = \frac{\partial f(x^*)/\partial x_i}{\partial f(x^*)/\partial x_j} \quad \text{for } i, j = 1, 2, \dots, n \qquad (1)$$

1 This is the standard "slope of the isocost = slope of the isoquant" condition †
2 Economic intuition: what would happen if (1) were not an equality?


[Figure omitted. Source: Varian, Microeconomic Analysis, Chapter 4, p. 51.]



Second Order Conditions

In our discussion above, we assumed that the isocost "approaches" the isoquant from below. Do you see that there is a problem if the isoquant is such that the isocost approaches it from above?
Another way of saying this: if we move along the isocost, we cannot increase output; output should remain constant or fall.
Assume differentiability and take a second-order Taylor approximation of $f(x_1 + h_1, x_2 + h_2)$, where the $h_i$ are small changes in the input factors. Then,
$$f(x_1 + h_1, x_2 + h_2) \approx f(x_1, x_2) + \frac{\partial f(x_1, x_2)}{\partial x_1} h_1 + \frac{\partial f(x_1, x_2)}{\partial x_2} h_2 + \frac{1}{2}\left[\frac{\partial^2 f(x_1, x_2)}{\partial x_1^2} h_1^2 + 2\,\frac{\partial^2 f(x_1, x_2)}{\partial x_1 \partial x_2} h_1 h_2 + \frac{\partial^2 f(x_1, x_2)}{\partial x_2^2} h_2^2\right]$$

Second Order Conditions


Since we assumed a move along the isocost,
$$w_1 h_1 + w_2 h_2 = 0 = \lambda(f_1 h_1 + f_2 h_2)$$
where the last equality follows from the FOC ($w_i = \lambda f_i$).
But in order to be at an optimum, $f(x_1 + h_1, x_2 + h_2) - f(x_1, x_2) \le 0$, which means that
$$\begin{pmatrix} h_1 & h_2 \end{pmatrix} \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \le 0 \qquad (2)$$
for $f_1 h_1 + f_2 h_2 = 0$.
Generalizing to the $n$-factor case,
$$h' D^2 f(x) h \le 0 \quad \text{for all } h \text{ satisfying } wh = 0$$
where $h = (h_1, h_2, \dots, h_n)$ is a quantity vector (a column vector according to our convention) and $D^2 f(x)$ is the Hessian of the production function.
Intuitively, the FOC imply that the isocost is tangent to the isoquant. The SOC imply that a move along the isocost results in a reduction (or at best no change) in output.
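A small numerical check (illustrative; it reuses the hypothetical Cobb-Douglas setup from the earlier sketch, which is an assumption, not part of the slides): at the cost-minimizing bundle, $h' D^2 f(x) h \le 0$ for any $h$ along the isocost, i.e. any $h$ with $wh = 0$.

```python
# SOC check for f(x1, x2) = sqrt(x1*x2) at the optimum of the earlier example.
import numpy as np

w, y = np.array([2.0, 3.0]), 10.0
x1 = y * np.sqrt(w[1] / w[0])            # conditional factor demands
x2 = y * np.sqrt(w[0] / w[1])

# Hessian of f at the optimum
f11 = -0.25 * x1**-1.5 * x2**0.5
f12 = 0.25 / np.sqrt(x1 * x2)
f22 = -0.25 * x1**0.5 * x2**-1.5
H = np.array([[f11, f12], [f12, f22]])

h = np.array([w[1], -w[0]])              # any h with w.h = 0 is proportional to this
print("w.h =", w @ h)                    # 0: the move stays on the isocost
print("h' D2f h =", h @ H @ h)           # <= 0: second order condition holds
```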

The second order conditions can also be expressed in terms of the Hessian of the Lagrangian,
$$D^2 \mathcal{L}(\lambda^*, x_1^*, x_2^*) = \begin{pmatrix} \dfrac{\partial^2 \mathcal{L}}{\partial \lambda^2} & \dfrac{\partial^2 \mathcal{L}}{\partial \lambda \partial x_1} & \dfrac{\partial^2 \mathcal{L}}{\partial \lambda \partial x_2} \\ \dfrac{\partial^2 \mathcal{L}}{\partial x_1 \partial \lambda} & \dfrac{\partial^2 \mathcal{L}}{\partial x_1^2} & \dfrac{\partial^2 \mathcal{L}}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 \mathcal{L}}{\partial x_2 \partial \lambda} & \dfrac{\partial^2 \mathcal{L}}{\partial x_2 \partial x_1} & \dfrac{\partial^2 \mathcal{L}}{\partial x_2^2} \end{pmatrix}$$
Compute these derivatives for the case of our Lagrangian function $\mathcal{L}(\lambda, x) = wx - \lambda(f(x) - y)$.

The result is the Bordered Hessian
$$D^2 \mathcal{L}(\lambda^*, x_1^*, x_2^*) = \begin{pmatrix} 0 & -f_1 & -f_2 \\ -f_1 & -\lambda f_{11} & -\lambda f_{12} \\ -f_2 & -\lambda f_{21} & -\lambda f_{22} \end{pmatrix}$$
It turns out that the sufficient conditions stated in (2) are satisfied with strict inequality if and only if the determinant of the bordered Hessian is negative. Similarly, with $n$ factors, the leading principal minors of the bordered Hessian of order 3 through $n + 1$ should all be negative.
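As an illustration (same hypothetical Cobb-Douglas example as above, an assumed setup rather than one from the slides), one can evaluate the bordered Hessian at the optimum and confirm that its determinant is negative:

```python
# Determinant of the bordered Hessian for f = sqrt(x1*x2) at the optimum.
import numpy as np

w, y = np.array([2.0, 3.0]), 10.0
x1, x2 = y * np.sqrt(w[1] / w[0]), y * np.sqrt(w[0] / w[1])

f1 = 0.5 * np.sqrt(x2 / x1)              # first partials of f
f2 = 0.5 * np.sqrt(x1 / x2)
lam = w[0] / f1                          # from the FOC w_i = lam * f_i
f11 = -0.25 * x1**-1.5 * x2**0.5         # second partials of f
f12 = 0.25 / np.sqrt(x1 * x2)
f22 = -0.25 * x1**0.5 * x2**-1.5

B = np.array([[0.0, -f1,        -f2],
              [-f1, -lam * f11, -lam * f12],
              [-f2, -lam * f12, -lam * f22]])
print("det =", np.linalg.det(B))         # negative at a regular interior minimum
```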

Difficulties

For each choice of $w$ and $y$ there will be an optimal $x^*$ that minimizes the cost of producing $y$. This is the Conditional Factor Demand $x(w, y)$ (cf. the factor demands in profit maximization).
Similarly, the Cost Function gives the minimum cost of producing $y$ at factor prices $w$:
$$c(w, y) = wx(w, y)$$

As in the profit maximization case, there are cases in which the first order conditions will not work:
1 The technology may not be representable by a differentiable production function (e.g. Leontief).
2 We are assuming an interior solution, i.e. that all inputs are used in strictly positive amounts. Otherwise, we have to modify the conditions along Kuhn-Tucker lines,
$$\lambda \frac{\partial f(x^*)}{\partial x_i} - w_i \le 0 \quad \text{with equality if } x_i^* > 0$$
3 The third issue concerns the existence of the optimizing bundle. The cost minimization problem, unlike the profit maximization problem, will always achieve a minimum. This follows from the fact that a continuous function achieves a minimum and a maximum on a compact (closed and bounded) set. (More on that on the next slide.)
4 The fourth problem is uniqueness. As we saw, calculus only ensures a local minimum. To guarantee a global minimum you have to make extra assumptions, namely that $V(y)$ is convex.

More on why existence of a solution is not a problem in the cost minimization case:
Because we are minimizing a continuous function on a closed and bounded set. To see this, $wx$ is certainly continuous, and $V(y)$ is closed by assumption (a regularity assumption). Boundedness is easily established: take an arbitrary $x^0 \in V(y)$; the minimal cost bundle must cost no more, $wx \le wx^0$. We can therefore restrict attention to the subset $\{x \in V(y) : wx \le wx^0\}$, which is bounded so long as $w \gg 0$.

Some Examples of Cost Minimization

Cobb-Douglas with 2 inputs, $f(x_1, x_2) = A x_1^a x_2^b$. †
CES, $f(x_1, x_2) = (x_1^\rho + x_2^\rho)^{1/\rho}$. Homework!
Leontief, $f(x_1, x_2) = \min\{a x_1, b x_2\}$. †
Linear, $f(x_1, x_2) = a x_1 + b x_2$. An illustration of the Kuhn-Tucker conditions. † (A symbolic sketch of the Cobb-Douglas case follows.)
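The following is a symbolic sketch of the first example (not part of the slides): it derives the Cobb-Douglas conditional factor demands and cost function from the FOC, under the simplifying assumptions $A = 1$ and $a = b = 1/2$.

```python
# Cobb-Douglas cost minimization via the FOC, solved symbolically with sympy.
import sympy as sp

x1, x2, w1, w2, y, lam = sp.symbols("x1 x2 w1 w2 y lam", positive=True)
a, b = sp.Rational(1, 2), sp.Rational(1, 2)      # assumed exponents; A = 1
f = x1**a * x2**b

sol = sp.solve([w1 - lam * sp.diff(f, x1),       # FOC for x1
                w2 - lam * sp.diff(f, x2),       # FOC for x2
                f - y],                          # technology constraint
               [x1, x2, lam], dict=True)[0]

print(sp.simplify(sol[x1]))                      # x1(w, y) = y*sqrt(w2/w1)
print(sp.simplify(sol[x2]))                      # x2(w, y) = y*sqrt(w1/w2)
print(sp.simplify(w1 * sol[x1] + w2 * sol[x2]))  # c(w, y) = 2*y*sqrt(w1*w2)
```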

2-input case

In the usual fashion, the conditional factor demand functions imply the following identities,
$$f(x(w, y)) \equiv y$$
$$w - \lambda Df(x(w, y)) \equiv 0$$
For the simpler 1-output, 2-input case, the FOC imply
$$f(x_1(w_1, w_2, y), x_2(w_1, w_2, y)) \equiv y$$
$$w_1 - \lambda \frac{\partial f(x_1(w_1, w_2, y), x_2(w_1, w_2, y))}{\partial x_1} \equiv 0$$
$$w_2 - \lambda \frac{\partial f(x_1(w_1, w_2, y), x_2(w_1, w_2, y))}{\partial x_2} \equiv 0$$

As we did with the FOC of the profit maximization problem, we can differentiate these identities with respect to the parameters, e.g. $w_1$ †
$$\frac{\partial f}{\partial x_1}\frac{\partial x_1}{\partial w_1} + \frac{\partial f}{\partial x_2}\frac{\partial x_2}{\partial w_1} \equiv 0$$
$$1 - \lambda\left[\frac{\partial^2 f}{\partial x_1^2}\frac{\partial x_1}{\partial w_1} + \frac{\partial^2 f}{\partial x_1 \partial x_2}\frac{\partial x_2}{\partial w_1}\right] - \frac{\partial f}{\partial x_1}\frac{\partial \lambda}{\partial w_1} \equiv 0$$
$$0 - \lambda\left[\frac{\partial^2 f}{\partial x_2 \partial x_1}\frac{\partial x_1}{\partial w_1} + \frac{\partial^2 f}{\partial x_2^2}\frac{\partial x_2}{\partial w_1}\right] - \frac{\partial f}{\partial x_2}\frac{\partial \lambda}{\partial w_1} \equiv 0$$
These can be written in matrix form as
$$\begin{pmatrix} 0 & -f_1 & -f_2 \\ -f_1 & -\lambda f_{11} & -\lambda f_{21} \\ -f_2 & -\lambda f_{12} & -\lambda f_{22} \end{pmatrix} \begin{pmatrix} \partial \lambda / \partial w_1 \\ \partial x_1 / \partial w_1 \\ \partial x_2 / \partial w_1 \end{pmatrix} \equiv \begin{pmatrix} 0 \\ -1 \\ 0 \end{pmatrix}$$
Note that the matrix on the left is precisely the "Bordered Hessian".

Recall Cramer's Rule; we can use it to solve for $\partial x_i / \partial w_1$ †
$$\frac{\partial x_1}{\partial w_1} = \frac{\begin{vmatrix} 0 & 0 & -f_2 \\ -f_1 & -1 & -\lambda f_{21} \\ -f_2 & 0 & -\lambda f_{22} \end{vmatrix}}{\begin{vmatrix} 0 & -f_1 & -f_2 \\ -f_1 & -\lambda f_{11} & -\lambda f_{21} \\ -f_2 & -\lambda f_{12} & -\lambda f_{22} \end{vmatrix}}$$
Solving the determinant on top, and letting $H$ denote the determinant on the bottom,
$$\frac{\partial x_1}{\partial w_1} = \frac{f_2^2}{H} < 0$$
The SOC require $H < 0$, which means that the conditional factor demand has a negative slope.

Similarly, you can use Cramer's rule to solve for $\partial x_2 / \partial w_1$,
$$\frac{\partial x_2}{\partial w_1} = \frac{\begin{vmatrix} 0 & -f_1 & 0 \\ -f_1 & -\lambda f_{11} & -1 \\ -f_2 & -\lambda f_{12} & 0 \end{vmatrix}}{\begin{vmatrix} 0 & -f_1 & -f_2 \\ -f_1 & -\lambda f_{11} & -\lambda f_{21} \\ -f_2 & -\lambda f_{12} & -\lambda f_{22} \end{vmatrix}}$$
Carrying out the calculations,
$$\frac{\partial x_2}{\partial w_1} = \frac{-f_2 f_1}{H} > 0$$
Similarly, you can differentiate the identities above with respect to $w_2$ to get
$$\begin{pmatrix} 0 & -f_1 & -f_2 \\ -f_1 & -\lambda f_{11} & -\lambda f_{21} \\ -f_2 & -\lambda f_{12} & -\lambda f_{22} \end{pmatrix} \begin{pmatrix} \partial \lambda / \partial w_2 \\ \partial x_1 / \partial w_2 \\ \partial x_2 / \partial w_2 \end{pmatrix} \equiv \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix}$$

Using Cramer's rule again, you can obtain
$$\frac{\partial x_1}{\partial w_2} = \frac{-f_1 f_2}{H} > 0$$
Compare the expressions for $\partial x_1 / \partial w_2$ and $\partial x_2 / \partial w_1$. You will notice that, as in the case of the (unconditional) factor demand functions, there is a symmetry effect.
In the 2-factor case, $\partial x_i / \partial w_j > 0$ for $i \ne j$ means that the two factors are always substitutes.
Of course, this analysis is readily extended to the $n$-factor case. As in the profit maximization problem, it is better to use matrix notation there; a numerical illustration follows.
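As a numerical illustration (again the hypothetical Cobb-Douglas example; the closed forms below come from the sympy sketch above, not from the slides), finite differences of the conditional demands reproduce the negative own-price slope and the symmetric cross effects:

```python
# Comparative statics by finite differences for x1 = y*sqrt(w2/w1),
# x2 = y*sqrt(w1/w2) (the Cobb-Douglas conditional demands derived above).
import numpy as np

y, eps = 10.0, 1e-6
x = lambda w: np.array([y * np.sqrt(w[1] / w[0]), y * np.sqrt(w[0] / w[1])])

w = np.array([2.0, 3.0])
dx_dw1 = (x(w + np.array([eps, 0.0])) - x(w)) / eps
dx_dw2 = (x(w + np.array([0.0, eps])) - x(w)) / eps

print("dx1/dw1 =", dx_dw1[0])                            # negative own-price effect
print("dx2/dw1 =", dx_dw1[1], " dx1/dw2 =", dx_dw2[0])   # symmetric cross effects
```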

n-input case

The first order conditions for cost minimization are¹
$$f(x(w)) \equiv y$$
$$w - \lambda Df(x(w)) \equiv 0$$
Differentiating these identities with respect to $w$,
$$Df(x(w)) Dx(w) = 0$$
$$I - \lambda D^2 f(x(w)) Dx(w) - Df(x(w))' D\lambda(w) = 0$$
Rearranging this expression,
$$\begin{pmatrix} 0 & -Df(x(w)) \\ -Df(x(w))' & -\lambda D^2 f(x(w)) \end{pmatrix} \begin{pmatrix} D\lambda(w) \\ Dx(w) \end{pmatrix} = -\begin{pmatrix} 0 \\ I \end{pmatrix}$$

¹ We omit $y$ as an argument since it is held fixed.

Assume a regular optimum, so that the bordered Hessian is non-degenerate. Then, pre-multiplying each side by the inverse of the bordered Hessian, we obtain
$$\begin{pmatrix} D\lambda(w) \\ Dx(w) \end{pmatrix} = \begin{pmatrix} 0 & Df(x(w)) \\ Df(x(w))' & \lambda D^2 f(x(w)) \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ I \end{pmatrix}$$
From this expression it follows that, since the Hessian is symmetric, the cross-price effects are symmetric.
It can also be shown that the substitution matrix $Dx(w)$ is negative semi-definite.
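A numerical sketch of this block-matrix computation (hypothetical Cobb-Douglas example as before, an assumed setup): build the matrix, solve for $Dx(w)$, and observe that it is symmetric with a non-positive diagonal.

```python
# Substitution matrix Dx(w) from the bordered system, for f = sqrt(x1*x2).
import numpy as np

w, y = np.array([2.0, 3.0]), 10.0
x = np.array([y * np.sqrt(w[1] / w[0]), y * np.sqrt(w[0] / w[1])])

Df = np.array([0.5 * np.sqrt(x[1] / x[0]), 0.5 * np.sqrt(x[0] / x[1])])
lam = w[0] / Df[0]                                   # FOC: w_i = lam * f_i
D2f = np.array([[-0.25 * x[0]**-1.5 * x[1]**0.5, 0.25 / np.sqrt(x[0] * x[1])],
                [0.25 / np.sqrt(x[0] * x[1]), -0.25 * x[0]**0.5 * x[1]**-1.5]])

M = np.block([[np.zeros((1, 1)), Df[None, :]],
              [Df[:, None],      lam * D2f]])
rhs = np.vstack([np.zeros((1, 2)), np.eye(2)])       # right-hand side (0; I)
sol = np.linalg.solve(M, rhs)                        # first row: Dlam(w); rest: Dx(w)
Dx = sol[1:, :]
print(Dx)                                            # symmetric, diagonal <= 0
```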

The cost function

The cost function tells us the minimum cost of producing a level of output given certain input prices.
The cost function can be expressed in terms of the conditional factor demands we talked about earlier,
$$c(w, y) \equiv wx(w, y)$$
Properties of the cost function. As with the profit function, there are a number of properties that follow from cost minimization:
1 Nondecreasing in $w$. If $w' \ge w$, then $c(w', y) \ge c(w, y)$.
2 Homogeneous of degree 1 in $w$: $c(tw, y) = t\,c(w, y)$ for $t > 0$.
3 Concave in $w$: $c(tw + (1 - t)w', y) \ge t\,c(w, y) + (1 - t)\,c(w', y)$ for $t \in [0, 1]$.
4 Continuous in $w$: $c(w, y)$ is a continuous function of $w$ for $w \gg 0$.
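These properties can be spot-checked numerically (illustrative; uses the Cobb-Douglas cost function $c(w, y) = 2y\sqrt{w_1 w_2}$ derived in the earlier sketch, with assumed price vectors):

```python
# Spot-check homogeneity of degree 1 and concavity in w.
import numpy as np

c = lambda w, y: 2 * y * np.sqrt(w[0] * w[1])

w, wp = np.array([2.0, 3.0]), np.array([5.0, 1.0])
y, t = 10.0, 0.4
print(c(2 * w, y), "==", 2 * c(w, y))                  # HD1: c(tw, y) = t c(w, y)
print(c(t * w + (1 - t) * wp, y), ">=",
      t * c(w, y) + (1 - t) * c(wp, y))                # concavity in w
```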

Proof †:
Non-decreasing
Homogeneous of Degree 1
Concave
Continuous
Intuition for concavity of the cost function †

Shephard's Lemma

Shephard's Lemma: Let $x_i(w, y)$ be the firm's conditional factor demand for input $i$. Then if the cost function is differentiable at $(w, y)$ and $w_i > 0$ for $i = 1, 2, \dots, n$, then
$$x_i(w, y) = \frac{\partial c(w, y)}{\partial w_i} \quad \text{for } i = 1, 2, \dots, n$$
Proof †
In general there are four approaches to proving and understanding Shephard's Lemma:
1 Differentiate the identity and use the FOC (Problem set 2).
2 Use the Envelope theorem directly (see next section).
3 A geometric argument. †
4 An economic argument: at the optimal $x$, a small change in factor prices has a direct and an indirect effect. The indirect effect operates through the re-optimization of $x$, but at the optimum it is negligible. So we are left with the direct effect alone, which is just equal to $x$.
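A quick numerical verification of the lemma (illustrative; same hypothetical Cobb-Douglas example, where $c(w, y) = 2y\sqrt{w_1 w_2}$ and $x_1(w, y) = y\sqrt{w_2/w_1}$):

```python
# Shephard's lemma: dc/dw1 should equal the conditional demand x1(w, y).
import numpy as np

c = lambda w, y: 2 * y * np.sqrt(w[0] * w[1])
w, y, eps = np.array([2.0, 3.0]), 10.0, 1e-6

dc_dw1 = (c(w + np.array([eps, 0.0]), y) - c(w, y)) / eps
x1 = y * np.sqrt(w[1] / w[0])
print(dc_dw1, "~", x1)      # agree up to finite-difference error
```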

Envelope theorem for constrained optimization

Shephard's lemma is another application of the envelope theorem, this time for constrained optimization.
Consider the following constrained maximization problem,
$$M(a) = \max_{x_1, x_2} g(x_1, x_2, a) \quad \text{such that} \quad h(x_1, x_2, a) = 0$$
Setting up the Lagrangian for this problem and obtaining the FOC,
$$\frac{\partial g}{\partial x_1} - \lambda \frac{\partial h}{\partial x_1} = 0$$
$$\frac{\partial g}{\partial x_2} - \lambda \frac{\partial h}{\partial x_2} = 0$$
$$h(x_1, x_2, a) = 0$$
From these conditions you obtain the optimal choice functions $x_1(a), x_2(a)$, yielding the identity
$$M(a) \equiv g(x_1(a), x_2(a), a)$$

Envelope theorem for constrained optimization

The envelope theorem says that $\frac{dM(a)}{da}$ is equal to
$$\frac{dM(a)}{da} = \frac{\partial g(x_1, x_2, a)}{\partial a}\bigg|_{x = x(a)} - \lambda \frac{\partial h(x_1, x_2, a)}{\partial a}\bigg|_{x = x(a)}$$
In the case of cost minimization, the envelope theorem implies
$$\frac{\partial c(w, y)}{\partial w_i} = \frac{\partial \mathcal{L}}{\partial w_i} = x_i\Big|_{x_i = x_i(w, y)} = x_i(w, y)$$
$$\frac{\partial c(w, y)}{\partial y} = \frac{\partial \mathcal{L}}{\partial y} = \lambda$$
The second implication also follows from the envelope theorem and means that, at the optimum, the Lagrange multiplier of the cost minimization problem is exactly the marginal cost.
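Numerically (illustrative, same hypothetical Cobb-Douglas example): the multiplier $\lambda = w_1/f_1$ recovered from the FOC matches a finite-difference estimate of marginal cost $\partial c/\partial y$.

```python
# Check lambda = marginal cost at the optimum.
import numpy as np

w, y, eps = np.array([2.0, 3.0]), 10.0, 1e-6
c = lambda w, y: 2 * y * np.sqrt(w[0] * w[1])      # cost function from before

x1 = y * np.sqrt(w[1] / w[0])                      # conditional demands
x2 = y * np.sqrt(w[0] / w[1])
f1 = 0.5 * np.sqrt(x2 / x1)                        # df/dx1 for f = sqrt(x1*x2)
lam = w[0] / f1                                    # FOC: w1 = lam * f1

mc = (c(w, y + eps) - c(w, y)) / eps               # marginal cost dc/dy
print(lam, "~", mc)                                # the two coincide
```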

Comparative statics using the cost function


Shephard's lemma relates the cost function to the conditional factor demands. From the properties of the former, we can infer properties of the latter.
1 From the cost function being nondecreasing in factor prices, it follows that conditional factor demands are non-negative:
$$\frac{\partial c(w, y)}{\partial w_i} = x_i(w, y) \ge 0$$
2 From $c(w, y)$ being HD1 in $w$, it follows that the $x_i(w, y)$ are HD0 in $w$.
3 From the concavity of $c(w, y)$ it follows that its Hessian is negative semi-definite. From Shephard's lemma it follows that the substitution matrix for the conditional factor demands is equal to the Hessian of the cost function. Thus,
  1 The cross-price effects are symmetric: $\partial x_i/\partial w_j = \partial^2 c/\partial w_i \partial w_j = \partial^2 c/\partial w_j \partial w_i = \partial x_j/\partial w_i$.
  2 Own-price effects are non-positive: $\partial x_i/\partial w_i = \partial^2 c/\partial w_i^2 \le 0$.
  3 The vector of factor demand changes moves "opposite" to the vector of factor price changes: $dw\,dx \le 0$.

Average and Marginal Costs

Cost Function: $c(w, y) \equiv wx(w, y)$
Let's split $w$ into the prices of fixed and variable inputs, $w = (w_f, w_v)$. Fixed inputs enter the optimization as constants.
Short-run Cost Function: $c(w, y, x_f) = w_v x_v(w, y, x_f) + w_f x_f$
Short-run Average Cost: $SAC = \dfrac{c(w, y, x_f)}{y}$
Short-run Average Variable Cost: $SAVC = \dfrac{w_v x_v(w, y, x_f)}{y}$
Short-run Average Fixed Cost: $SAFC = \dfrac{w_f x_f}{y}$
Short-run Marginal Cost: $SMC = \dfrac{\partial c(w, y, x_f)}{\partial y}$
Long-run Average Cost: $LAC = \dfrac{c(w, y)}{y}$
Long-run Marginal Cost: $LMC = \dfrac{\partial c(w, y)}{\partial y}$

The total cost function is usually assumed to be monotonic: the more we produce, the higher the cost.
The average cost could be increasing or decreasing with output. We generally assume it achieves a minimum. The economic rationale is given by:
1 Average variable costs could be decreasing over some range, but eventually they become increasing.
2 Even if average variable costs are increasing all the way, average fixed costs are decreasing.
The output level at which average cost is minimized is called the minimum efficient scale.
The marginal cost curve cuts the SAC and SAVC curves from below at their respective minima (see the sketch below).
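A small numerical sketch (assumed quadratic variable cost, not from the slides) that exhibits these facts: for $c(y) = F + ay + by^2$, SAC is U-shaped and SMC crosses it at its minimum.

```python
# Short-run cost curves for c(y) = F + a*y + b*y**2 (assumed coefficients).
import numpy as np

F, a, b = 50.0, 2.0, 0.5
y = np.linspace(0.5, 30.0, 3000)

sac = (F + a * y + b * y**2) / y        # short-run average cost
savc = a + b * y                        # average variable cost
safc = F / y                            # average fixed cost
smc = a + 2 * b * y                     # marginal cost

i = np.argmin(sac)
print("minimum efficient scale ~", y[i])            # analytically sqrt(F/b) = 10
print("SAC min ~", sac[i], " SMC there ~", smc[i])  # equal where SMC cuts SAC
```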

Constant Returns to Scale: If the production function exhibits constant returns to scale, the cost function may be written as
$$c(w, y) = y\,c(w, 1)$$
Proof: Suppose $x^*$ solves the cost minimization problem for $(w, 1)$ but $yx^*$ does not solve it for $(w, y)$. Then there is some $x'$ with $f(x') = y$ and $wx' < w(yx^*)$. By CRS, $f(x'/y) = 1$, so $x'/y \in V(1)$ and $w(x'/y) < wx^*$. We have found an input vector that produces one unit of output more cheaply than $x^*$, which contradicts $x^*$ being cost-minimizing for $(w, 1)$. From this it follows that
$$c(w, y) = wx(w, y) = w\,yx(w, 1) = y\,wx(w, 1) = y\,c(w, 1)$$
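As a sanity check (illustrative): the Cobb-Douglas example $f = \sqrt{x_1 x_2}$ exhibits CRS, and its cost function $c(w, y) = 2y\sqrt{w_1 w_2}$ from the earlier sketch indeed scales linearly in $y$.

```python
# c(w, y) = y * c(w, 1) for the CRS Cobb-Douglas example.
import numpy as np

c = lambda w, y: 2 * y * np.sqrt(w[0] * w[1])
w = np.array([2.0, 3.0])
for y in (1.0, 5.0, 10.0):
    print(c(w, y), "==", y * c(w, 1.0))
```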

For the first unit produced, marginal cost equals average cost.
Long-run cost is never higher than short-run cost.
Short-run marginal cost equals long-run marginal cost at the optimum (an application of the envelope theorem). Let $c(y) \equiv c(y, z(y))$ and let $z^*$ be the optimal choice given $y^*$ †. Differentiating with respect to $y$ yields,
$$\frac{dc(y^*, z(y^*))}{dy} = \frac{\partial c(y^*, z^*)}{\partial y} + \frac{\partial c(y^*, z^*)}{\partial z}\frac{\partial z(y^*)}{\partial y} = \frac{\partial c(y^*, z^*)}{\partial y}$$
where the last equality follows from the FOC of the minimization problem with respect to $z$ ($\frac{\partial c(y^*, z^*)}{\partial z} = 0$).
