
Single Variable Optimization · Partial Differentiation · Multiple Variable Optimization

Mathematical Economics

Lecture 5

© Hui Xiao, All Rights Reserved


Unconstrained Optimization

Most economic models are about optimization, which is a key solution principle.

This often means maximizing or minimizing some function y = f (x), x ∈ R.

Assume f (x) is at least twice continuously differentiable and possesses a max and/or min.

Unconstrained max or min: all x ∈ R are feasible.

Constrained max or min: only x ∈ X ⊂ R are feasible.


Extreme values of y = f (x): a max or a min.

Stationary values of y = f (x): points at which f ′ (x) = 0.

Local max at x∗ : f (x∗ ) ≥ f (x) for all x in a small neighbourhood of x∗ .

Global max at x∗ : f (x∗ ) ≥ f (x) for all feasible x.

Local min at x∗ : f (x∗ ) ≤ f (x) for all x in a small neighbourhood of x∗ .

Global min at x∗ : f (x∗ ) ≤ f (x) for all feasible x.


First Order Conditions (Necessary Conditions)

For now, we focus only on unconstrained max and min solutions.

If y = f(x) has an extreme value at x∗, then it has a stationary value at x∗:

f(x∗) ≥ f(x) for all x ⇒ f′(x∗) = 0, or f(x∗) ≤ f(x) for all x ⇒ f′(x∗) = 0.

To show this, consider dy = f ′ (x∗ )dx.

If f ′ (x∗ ) ̸= 0 then it is always possible to find feasible dx ̸= 0 such that dy > 0 or dy < 0

So f ′ (x∗ ) = 0 is a necessary condition for a max or min.

Because this condition is based on the first order derivative, it is also known as the first order
condition.

But to be sure, we need second order conditions.


First Order Conditions

If x∗ is a local max, then f ′ (x∗ ) = 0.

If x∗ is a local min, then f ′ (x∗ ) = 0.

First Order Conditions not Sufficient

Here f′(x∗) = 0 yields neither a min nor a max of the function f(x).


The min–max problem for this f(x) leads to corner solutions.

Second Order Conditions (Sufficient Conditions)

A function's stationary points may occur at a max, a min, or a point of inflexion.

Therefore f′(x∗) = 0 alone is not sufficient for a max or min.

Example:
claiming that x∗ maximizes the function might be wrong if it really gives a min or a point of
inflexion.

We need to check whether the claimed solution is indeed a max or min.

Second Order Conditions (S.O.C.):

f′(x∗) = 0 and f″(x∗) < 0 ⇒ x∗ gives a max.

f′(x∗) = 0 and f″(x∗) > 0 ⇒ x∗ gives a min.

Since we apply this by checking whether f ′′ (x∗ ) < 0 in case of a max, or f ′′ (x∗ ) > 0 in case of
a min, these inequalities are called second order conditions.
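
A minimal computational sketch of this F.O.C./S.O.C. recipe (assuming SymPy is available; the cubic below is an illustrative choice, not from the slides):

```python
# Find stationary points and classify them with the second derivative.
import sympy as sp

x = sp.Symbol('x')
f = x**3 - 3*x

fprime = sp.diff(f, x)            # first derivative: 3*x**2 - 3
stationary = sp.solve(fprime, x)  # stationary points: x = -1, x = 1

fsecond = sp.diff(f, x, 2)        # second derivative: 6*x
for xstar in stationary:
    curvature = fsecond.subs(x, xstar)
    kind = 'local max' if curvature < 0 else 'local min' if curvature > 0 else 'inconclusive'
    print(xstar, kind)            # -1: local max (f'' = -6), 1: local min (f'' = 6)
```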


Second Order Conditions

To prove the above sufficiency, and also to see what happens when f″(x∗) = 0, take a Taylor
series expansion around x∗:

f(x) = f(x∗) + f′(x∗)(x − x∗)/1! + f″(x∗)(x − x∗)²/2!,

where f′(x∗) = 0, so the first-order term vanishes, and (x − x∗)² > 0 for x ≠ x∗.

Thus,

f″(x∗) < 0 ⇒ f(x) = f(x∗) + f″(x∗)(x − x∗)²/2! with a negative second term ⇒ f(x) < f(x∗).

f″(x∗) > 0 ⇒ f(x) = f(x∗) + f″(x∗)(x − x∗)²/2! with a positive second term ⇒ f(x) > f(x∗).

If f″(x∗) = 0, continue the expansion until the first non-zero higher-order derivative and apply
the same argument, the "nth derivative test": if that derivative is of even order, its sign decides
between max and min; for example, f(x) = x⁴ has f″(0) = 0 but a fourth derivative of 24 > 0, so x∗ = 0 gives a min.


Second Order Conditions

Always check Second Order Conditions for the Optimum.



Economic Applications
Monopoly with Linear Demand and Costs:

Inverse demand function:


p(q) = 100 − q, where p is price and q is output.

Cost function:
C(q) = 25q.

Profit function:
π(q) = p(q)q − C(q) = (100 − q)q − 25q = 75q − q².

Profit maximization problem:


First order condition (F.O.C) for the profit function is

π′(q∗) = 75 − 2q∗ = 0 ⇒ 2q∗ = 75,

implying q∗ = 37.5, p∗ = $62.50, π∗ = $1406.25.

Check second order condition (S.O.C):

π″(q) = −2 < 0 ⇒ q∗ is the global max.


Economic Applications

Publisher vs. Author:

Suppose a book publisher sets the book's price, the inverse demand function is p(q) = 100 − q, and the
cost function is C(q) = 25q.

If the author is paid a royalty of 10% of the book's price, then her income is based on the book
sales:

Y(q) = 0.1p(q)q = 0.1(100 − q)q = 0.1(100q − q²) = 10q − 0.1q².

Author’s income maximization problem:

max_q Y(q) = 10q − 0.1q².

F.O.C: Y ′ (q) = 10 − 0.2q ∗ = 0 =⇒ q ∗ = 50 =⇒ p∗ = 100 − q ∗ = 50.

Check S.O.C: Y ′′ (q) = −0.2 < 0 =⇒ q ∗ is the global max.

The author would want to sell 50 copies at a price of $50.


Economic Applications

But for the publisher’s profit maximization problem:

The publisher’s profit function:


π(q) = p(q)q − Y(q) − C(q) = p(q)q − 0.1p(q)q − C(q) = 0.9p(q)q − C(q) =
0.9(100q − q²) − 25q = 90q − 25q − 0.9q² = 65q − 0.9q².

Publisher’s profit maximization problem:

max_q π(q) = 65q − 0.9q².

F.O.C: π′(q∗) = 65 − 1.8q∗ = 0 ⇒ q∗ = 65/1.8 ≈ 36.1 ⇒ p∗ = 100 − q∗ ≈ 63.9.

Check S.O.C: π″(q) = −1.8 < 0 ⇒ q∗ is the global max.

The publisher wants to sell about 36 copies at a price of about $64 each.

The conflict of interest arises because the publisher wants to maximize profit, while the author
wants to maximize sales revenue.
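
A hedged numeric check of the whole example (SymPy assumed), confirming both optima and the conflict of interest:

```python
# Author maximizes royalty income; publisher maximizes profit net of royalty.
import sympy as sp

q = sp.Symbol('q', positive=True)
p = 100 - q                       # inverse demand
C = 25*q                          # publisher's cost

Y = sp.Rational(1, 10)*p*q        # author's royalty: 10% of revenue
pi = p*q - Y - C                  # publisher's profit net of royalty

q_author = sp.solve(sp.diff(Y, q), q)[0]      # author's F.O.C.
q_publisher = sp.solve(sp.diff(pi, q), q)[0]  # publisher's F.O.C.

print(q_author, p.subs(q, q_author))          # 50 and 50
print(q_publisher, p.subs(q, q_publisher))    # 325/9 ~ 36.1 and 575/9 ~ 63.9
print(sp.diff(Y, q, 2), sp.diff(pi, q, 2))    # -1/5 and -9/5: both S.O.C.s hold
```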


Economic Applications

A Monopolist Always Produces Where Demand Is Elastic:

Monopolist’s revenue function:


R(q) = p(q)q.

Recall ϵ as the price elasticity of demand (p′ (q) < 0 for downward sloping demand):

ϵ = −(dq/q)/(dp/p) = −(p/q)(dq/dp) = −(p/q)(1/p′(q)) ⇒ p′(q) = −(p/q)(1/ϵ).

The firm’s marginal revenue is

R′(q) = p(q) + qp′(q) = p(1 + (q/p)p′(q)) = p(1 − 1/ϵ).

Since the firm produces where R′(q) = p(1 − 1/ϵ) ≥ 0 around its profit maximum, and p > 0,
it follows that 1 − 1/ϵ ≥ 0 ⇒ 1 ≥ 1/ϵ ⇒ ϵ ≥ 1 at that point.
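
As a quick hedged check of this identity (SymPy assumed), take the linear demand from the earlier monopoly example, p = 100 − q:

```python
# Verify R'(q) = p*(1 - 1/eps) and that MR = 0 exactly where eps = 1.
import sympy as sp

q = sp.Symbol('q', positive=True)
p = 100 - q

R = p*q
MR = sp.diff(R, q)                     # marginal revenue: 100 - 2*q

eps = -(sp.diff(p, q))**(-1) * p / q   # elasticity: -(dq/dp)*(p/q) = p/q here
claim = p*(1 - 1/eps)

print(sp.simplify(MR - claim))         # 0: the identity holds
print(sp.solve(MR, q))                 # [50]: MR = 0 where p = q, i.e. eps = 1
```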


Economic Applications
Competitive market supply and demand functions:
S = bPS , D = a0 − a1 PB , a0 , a1 , b > 0.

S is quantity supplied, D is quantity demanded.

PS is the price received by sellers, PB is the price paid by buyers.

PS = PB − t where t ≥ 0 is a specific tax per unit bought and sold, i.e. $t per unit.

The equilibrium condition D = S =⇒ a0 − a1 PB = bPS =⇒

a0 − a1 PB = b(PB − t) =⇒ (a1 + b)PB = a0 + bt.


Solve for the equilibrium PB :

PB∗ = a0/(a1 + b) + [b/(a1 + b)]t, PS∗ = PB∗ − t,

S∗ = D∗ = a0 − a1PB∗ = a0 − a1[a0/(a1 + b) + bt/(a1 + b)] = a0[1 − a1/(a1 + b)] − [a1b/(a1 + b)]t.

So, the equilibrium PB∗ increases with the tax t, while the equilibrium quantity bought and sold,
D∗ = S∗, decreases with t.


Economic Applications

After substitution, we have that tax revenue R(t) = tD ∗ = tS ∗ :

R(t) = (a0[1 − a1/(a1 + b)] − [a1b/(a1 + b)]t)t = a0[1 − a1/(a1 + b)]t − [a1b/(a1 + b)]t² = αt − βt²,

where

α = a0[1 − a1/(a1 + b)] > 0, since a1 + b > a1 implies 0 < a1/(a1 + b) < 1, and a0 > 0,

β = a1b/(a1 + b) > 0, since a1 > 0, b > 0.

So the tax rate t∗ that maximizes tax revenue, by the F.O.C:

R′(t∗) = α − 2βt∗ = 0 ⇒ t∗ = α/(2β) > 0, since α > 0, β > 0.

Check the S.O.C to confirm the true maximum:

R″(t∗) = −2β < 0, since β > 0.
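
A numeric sketch of the revenue-maximizing tax (SymPy assumed; the parameter values a0 = 100, a1 = 1, b = 1 are illustrative choices, not from the slides):

```python
# Maximize R(t) = alpha*t - beta*t**2 for specific demand/supply parameters.
import sympy as sp

t = sp.Symbol('t', nonnegative=True)
a0, a1, b = 100, 1, 1

alpha = a0*(1 - a1/sp.Rational(a1 + b))   # = 50
beta = sp.Rational(a1*b, a1 + b)          # = 1/2

R = alpha*t - beta*t**2                   # tax revenue R(t)
t_star = sp.solve(sp.diff(R, t), t)[0]    # F.O.C. gives t* = alpha/(2*beta)

print(t_star, R.subs(t, t_star))          # t* = 50, R(t*) = 1250
print(sp.diff(R, t, 2))                   # -2*beta = -1 < 0: S.O.C. holds
```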


Economic Applications

Note that R′′ (t) < 0 for all values of t.

In fact, this is a strictly concave quadratic function with a unique stationary value at t∗ = α/(2β).

This function is an inverted U-shape and is known as "the Laffer Curve".

For this curve, if t > t∗ , then a reduction in the tax increases tax revenue, gladdening politicians
and taxpayers.

More generally, the example suggests:

If a function is everywhere strictly concave (f″(x) < 0 at all points in its domain) and has a
stationary value, then this is the unique global maximum.

If a function is everywhere strictly convex (f ′′ (x) > 0 at all points in its domain) and has a
stationary value, then this is the unique global minimum.

This is what economists call ”well-behaved” models.


Partial Differentiation

Continuity, differentiability, and the differentiation rules also apply to functions of many variables,
which adds richness to economic applications by allowing many more economic variables:

y = f(x1 , x2 , . . . , xn ),

rather than simply y = f(x).

Example:
Consider y = f(x1 , x2 ).

If x2 is fixed at some value, we can treat y as a function of x1 alone and compute the
derivative of f(x1, x2) with respect to x1 as if f(x1, x2) were a function of only x1.

Similarly, we can do this for x2 changing while holding x1 fixed at some value.

Thus, we find a pair of first order partial derivatives, which are often simply referred to as the first
derivatives of f(x1 , x2 ) .

We can do this iteratively for any number of variables; i.e., for y = f(x1 , x2 , . . . , xn ).


Partial Differentiation
Formally, the partial derivative is defined as the limit of a difference quotient, given below.


Partial Differentiation

Graphically, partial derivatives show the slopes in the direction of change of the relevant variable.


Finding Partial Derivatives


The partial derivative of a function y = f(x1, x2, ..., xn) with respect to the variable xi is:

∂f/∂xi = lim(Δxi → 0) [f(x1, ..., xi + Δxi, ..., xn) − f(x1, ..., xi, ..., xn)] / Δxi.

Since the partial derivative of y = f(x1, x2, ..., xn) with respect to any of the xi, say xj, is simply the rate of
change of f (or y) with respect to xj while holding all other variables xi, i ≠ j, fixed, the same
differentiation rules as for a function of a single variable apply to a function of multiple variables.

Example:
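
As an illustrative stand-in for the slide's worked example (SymPy assumed; the function below is a sketch choice, not necessarily the original):

```python
# First-order partials by the usual single-variable rules,
# holding the other variable fixed.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 * x2 + 3*x2

f1 = sp.diff(f, x1)   # treat x2 as a constant: 2*x1*x2
f2 = sp.diff(f, x2)   # treat x1 as a constant: x1**2 + 3
print(f1, f2)
```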


Gradient

The first-order derivatives of f are frequently displayed as a vector, also known as the gradient vector.

Each element fi indicates the steepness or grade of the function as one ‘moves’ in the direction of xi .
Example:
f(x1, x2) = 5 − 2x1 + 3x2, where f1 = −2 and f2 = 3, so ∇f = [f1, f2] = [−2, 3]:

For a 1-unit change in x1, the value of f(x1, x2) changes by −2.

For a 1-unit change in x2, the value of f(x1, x2) changes by 3.

Note:
This is the idea behind the gradient descent algorithm.
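
A minimal gradient-descent sketch (NumPy assumed), included only because the slide mentions the algorithm; the test function, step size, and starting point are arbitrary choices:

```python
# Repeatedly step against the gradient to walk downhill to a minimum.
import numpy as np

def grad(x):
    # gradient of f(x1, x2) = x1**2 + 3*x2**2, a simple convex test function
    return np.array([2*x[0], 6*x[1]])

x = np.array([4.0, -2.0])   # arbitrary starting point
eta = 0.1                   # step size (learning rate)
for _ in range(100):
    x = x - eta*grad(x)     # move in the direction of steepest descent

print(x)                    # approaches the minimizer (0, 0)
```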


Additively Separable Functions

Sometimes it is helpful to look at the special case where there are no interaction
(compound) effects among the variables, i.e., where no partial derivative depends on the other variables.

These are the additively separable functions.

A function y = f(x1, x2, ..., xn) which can be written in the form

f(x1, x2, ..., xn) = g1(x1) + g2(x2) + ... + gn(xn) = Σ gi(xi), summing over i = 1, ..., n,

is additively separable; the simplest example is f(x1, x2, ..., xn) = x1 + x2 + ... + xn.
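
A quick check (SymPy assumed) that an additively separable function has zero cross-partials, i.e., no interaction effects:

```python
# Cross-partial of a separable function vanishes; a non-separable one need not.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + sp.exp(x2)                 # g1(x1) + g2(x2): additively separable

print(sp.diff(f, x1, x2))              # 0: f12 vanishes identically
print(sp.diff(x1**2 * x2, x1, x2))     # 2*x1: a non-separable f has f12 != 0
```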


Second-Order Partial Derivatives


Recall that for y = f(x), one can find successively higher-order derivatives by iteratively
differentiating f(x) with respect to x.
We can also do this for y = f(x1 , x2 , . . . , xn ).
However, there is the complication that for each of the first order derivatives,
f1 (x1 , x2 , . . . , xn ), f2 (x1 , x2 , . . . , xn ), . . . , fn (x1 , x2 , . . . , xn ), one can differentiate with
respect to each of x1 , x2 , . . . , xn .
Thus, one ends up with n2 second order partials.
When taking the derivative of fi with respect to another variable xj , i ̸= j, the end result is called
a cross-partial (second) derivative.
Thus, the set of second-order partials:

fij ≡ ∂fi(x1, x2, ..., xn)/∂xj, i, j = 1, 2, ..., n.

For n = 2, this gives the four second-order partials f11, f12, f21, and f22.


Second-Order Partial Derivatives

The second-order derivatives of f can be shown as the n × n (Hessian) matrix

[ f11 f12 ... f1n ]
[ f21 f22 ... f2n ]
[ ...  ...  ...  ]
[ fn1 fn2 ... fnn ]


Economics Applications

The previous example shows there can be 'interaction' effects among the variables (i.e., the partial
derivative with respect to x2 depends on the levels of the other variables x1 and x3):

∂y/∂x2 = 20x1²x2³x3⁶.

These interaction effects among the variables reflect important economic properties.

Example:
The Cobb-Douglas production function with labor L and capital K, which are complementary inputs
needed in production: Y = AL^α K^β, where A is the technological parameter, L, K > 0, 0 < α < 1, and
0 < β < 1. Then

MPL = ∂Y/∂L = αAL^(α−1)K^β, (1)

where the marginal product of labor L also depends on K, the capital invested, and

MPK = ∂Y/∂K = βAL^αK^(β−1), (2)

where the marginal product of capital K also depends on the labor L working with the capital invested.
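
A SymPy sketch of these marginal products and of Young's theorem, using the slide's notation Y = A·L^α·K^β:

```python
# Marginal products of a Cobb-Douglas technology and equal cross-partials.
import sympy as sp

L, K, A, alpha, beta = sp.symbols('L K A alpha beta', positive=True)
Y = A * L**alpha * K**beta

MPL = sp.diff(Y, L)        # alpha*A*L**(alpha-1)*K**beta
MPK = sp.diff(Y, K)        # beta*A*L**alpha*K**(beta-1)

# cross-partials coincide (Young's theorem): alpha*beta*A*L**(alpha-1)*K**(beta-1)
print(sp.simplify(sp.diff(MPL, K) - sp.diff(MPK, L)))   # 0
```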


Economics Applications

Similarly, for f(x1, x2) = Ax1^α x2^β, the cross-partials are f12 = f21 = αβAx1^(α−1)x2^(β−1), equal by Young's theorem.

Unconstrained Optimization

A function has extreme values (maxima and minima) and stationary values, which may not be the
same.

For functions of n variables y = f (x1 , x2 , ..., xn ), it is useful also to write these as f (x), where
x = [x1 , x2 , ..., xn ] is a point in Rn .

Need both necessary and sufficient conditions to ensure that functions’ stationary values give
optimum solutions (global extreme values).

An unconstrained optimization problem has the whole of Rn as its feasible set.

A constrained optimization problem has a feasible subset x ∈ X ⊂ Rn.




First Order Conditions (Necessary Conditions)


However, for multiple variable functions,
the stationary value can also be a saddle point: a maximum of the function with respect to
some variables and a minimum with respect to the others.
To visualise in three dimensions, consider f(x1, x2) = x1² − x2², which has a saddle at the origin: a min in the x1 direction but a max in the x2 direction.

Saddle points are important in constrained optimizations.


In unconstrained optimizations, regard them as stationary points that may not give a max or min
of the function, so we need second order conditions.

First Order Conditions

Assume the multi-variable function f is at least twice continuously differentiable (well-behaved);
then the first order conditions are analogous to those for a single variable function.

If an extreme value, max or min, occurs at the point x∗ = [x1∗, x2∗, ..., xn∗], then

f1(x1∗, x2∗, ..., xn∗) = 0
f2(x1∗, x2∗, ..., xn∗) = 0
...
fn(x1∗, x2∗, ..., xn∗) = 0

Or more concisely: fi(x∗) = 0, i = 1, 2, ..., n, x∗ ∈ Rn.
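
A minimal sketch of solving such a system symbolically (SymPy assumed; the two-variable function below is illustrative, not from the slides):

```python
# The F.O.C. as a system f_i(x*) = 0 solved simultaneously.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x2**2 - x1*x2 - 3*x1

foc = [sp.diff(f, v) for v in (x1, x2)]   # [2*x1 - x2 - 3, 2*x2 - x1]
print(sp.solve(foc, [x1, x2]))            # {x1: 2, x2: 1}
```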


First Order Conditions

Given the total differential

dy = f1(x∗)dx1 + f2(x∗)dx2 + ... + fn(x∗)dxn,

if any one fi (x∗ ) ̸= 0, then it is always possible to find a feasible dxi ̸= 0 such that dy ≷ 0 and
so the function cannot be at an extreme value.

To find an optimum using the first order conditions means having to solve a set of quite possibly
nonlinear simultaneous equations, which is generally neither easy nor sufficient.

This is why we prefer well-behaved economic models: their first order conditions are relatively easy to
solve, albeit still insufficient on their own.


Second Order Conditions (Sufficient Conditions)

The first order conditions may give inflexion points or saddle points, which may not be the true
(local or global) max or min, so we need second-order conditions for an optimum.

Intuitively, if any small deviation in any direction away from a stationary value at x∗ results in a
decrease in the value of the function, then x∗ must yield a local max.

Consider again the total differential of the function at the stationary point x∗ :

dy = f1(x∗)dx1 + f2(x∗)dx2 + ... + fn(x∗)dxn = 0.

If d(dy) = d2 y = d[f1 (x∗ )dx1 + f2 (x∗ )dx2 + ... + fn (x∗ )dxn ] < 0, then the value of dy
must fall as we move away from x∗ in any direction, implying that dy is negative in a small
neighbourhood of x∗ .

This implies that the value of the function is decreasing as we move from x∗ in any direction.


Second Order Conditions

By quadratic forms, we can write this condition as

d²y = Σi Σj fij(x∗)dxi dxj = dxᵀ H dx < 0, summing over i, j = 1, ..., n,

where dx = [dx1, dx2, ..., dxn], dxᵀ is its transpose, and H is the Hessian matrix of
second order partial derivatives.

A sufficient condition for x∗ to yield a (strict local) max:

fi(x∗) = 0 for all i = 1, 2, ..., n, and H is negative definite at x∗.

Similarly, a sufficient condition for x∗ to yield a (strict local) min:

fi(x∗) = 0 for all i = 1, 2, ..., n, and H is positive definite at x∗.

In practice, we just need to check the definiteness of H at the point x∗.

For concave and convex functions of n variables, if the function is globally strictly concave
(convex) and possesses a stationary value, then this must be the unique global max (min).


Quadratic Forms

Given an n × n matrix A, and an n × 1 vector x, a quadratic form is

q(x) = xᵀAx = Σi Σj aij xi xj, summing over i, j = 1, ..., n.

If A is not symmetric, then we can always define a symmetric matrix A∗ that yields the same quadratic
form as A.

If q(x) = xT Ax > 0 for all x ̸= 0, then A is a positive definite matrix.

If q(x) = xT Ax ⩾ 0 for all x ̸= 0, then A is a positive semidefinite matrix.

If q(x) = xT Ax < 0 for all x ̸= 0, then A is a negative definite matrix.

If q(x) = xT Ax ⩽ 0 for all x ̸= 0, then A is a negative semidefinite matrix.

If q(x) is positive for some x and negative for some other x, then A is indefinite.
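
Checking these definitions over all x ≠ 0 directly is impractical; a standard computational check (NumPy assumed; the matrix below anticipates the Hessian of the next example) uses eigenvalues, since a symmetric matrix is negative definite exactly when all its eigenvalues are negative:

```python
# Classify a symmetric matrix's definiteness via its eigenvalues.
import numpy as np

H = np.array([[-3.0, -1.0],
              [-1.0, -5.0]])       # the Hessian from the example below

eigvals = np.linalg.eigvalsh(H)    # eigvalsh: eigenvalues of a symmetric matrix
print(eigvals)                     # approx -5.41 and -2.59: both negative
print(np.all(eigvals < 0))         # True => H is negative definite
```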


Economic Applications
Monopoly Price Discrimination:
A monopoly firm sells an identical good in two separate markets with no possibility of arbitrage.
The inverse demand functions for the two markets are p1 = 12 − q1 and p2 = 10 − 2q2, where the pi are
prices and the qi are quantities, and its cost function is C = ½(q1 + q2)². Profit function:

π(q1, q2) = p1q1 + p2q2 − C = (12 − q1)q1 + (10 − 2q2)q2 − ½(q1 + q2)² = 12q1 + 10q2 − q1² − 2q2² − ½(q1 + q2)².

The first order conditions (F.O.Cs):

∂π/∂q1 = π1(q1∗, q2∗) = 12 − 2q1∗ − (q1∗ + q2∗) = 12 − 3q1∗ − q2∗ = 0 ⇒ q2∗ = 12 − 3q1∗,

∂π/∂q2 = π2(q1∗, q2∗) = 10 − 4q2∗ − (q1∗ + q2∗) = 10 − q1∗ − 5q2∗ = 0,

as the necessary conditions for profit maximization, which imply equating marginal revenues, since the
marginal costs of the two outputs are identical at (q1∗ + q2∗).
Optimal outputs are q1∗ = 3.57, q2∗ = 1.29, so the firm sells more in market 1. Optimal prices
are p1∗ = 12 − q1∗ = 8.43 and p2∗ = 10 − 2q2∗ = 7.43, so the price is higher in market 1.

Optimal profit across the 2 markets is

π∗(q1∗, q2∗) = 12q1∗ + 10q2∗ − (q1∗)² − 2(q2∗)² − ½(q1∗ + q2∗)² = 39.66 − 11.81 = 27.85.
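
As a hedged check of this worked example (SymPy assumed), one can solve the F.O.C.s and inspect the Hessian directly; the exact optimum is q1∗ = 25/7 ≈ 3.57, q2∗ = 9/7 ≈ 1.29, with profit 195/7 ≈ 27.86:

```python
# Solve the price-discrimination F.O.C.s and test the Hessian.
import sympy as sp

q1, q2 = sp.symbols('q1 q2', positive=True)
pi = (12 - q1)*q1 + (10 - 2*q2)*q2 - sp.Rational(1, 2)*(q1 + q2)**2

foc = [sp.diff(pi, q1), sp.diff(pi, q2)]
sol = sp.solve(foc, [q1, q2])          # {q1: 25/7, q2: 9/7}

H = sp.hessian(pi, (q1, q2))           # [[-3, -1], [-1, -5]]
print(sol, pi.subs(sol))               # optimal outputs and profit 195/7
print(H.eigenvals())                   # both eigenvalues negative: H negative definite
```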

Economic Applications

Second Order Conditions to verify the profit maximum:

For π∗(q1∗, q2∗) to be the maximum, we just need to verify that the Hessian matrix of second order partial
derivatives H is negative definite.
Recall the Hessian matrix of second order partial derivatives H:

H = [ ∂²π/∂q1²    ∂²π/∂q1∂q2 ] = [ ∂π1/∂q1  ∂π1/∂q2 ] = [ −3  −1 ]
    [ ∂²π/∂q2∂q1  ∂²π/∂q2²   ]   [ ∂π2/∂q1  ∂π2/∂q2 ]   [ −1  −5 ]

where π1(q1, q2) = 12 − 3q1 − q2, π2(q1, q2) = 10 − q1 − 5q2, and π12(q1, q2) = π21(q1, q2) by Young's
Theorem.
Then test the Hessian matrix H for negative definiteness:

Recall the quadratic form q(x) = xᵀHx, which must be negative for all x ≠ 0, not just for one particular x;
checking a single vector such as x = [1, 0]ᵀ, which gives xᵀHx = −3 < 0, is necessary but not sufficient.
For a symmetric 2 × 2 matrix, negative definiteness can be verified with the leading principal minors:

|−3| = −3 < 0 and det H = (−3)(−5) − (−1)(−1) = 14 > 0, (3)

so the leading principal minors alternate in sign starting negative. Hence H is a negative definite matrix,
and π∗(q1∗, q2∗) is the true maximum.


Monopoly Price Discrimination


Now that the profit maximum π∗(q1∗, q2∗) is confirmed, the optimal pricing strategy
for the monopolist is to price the same good differently in the 2 markets.
This is also known as Price Discrimination:

Since we can write MRi = pi(1 − 1/ϵi), equality of the marginal revenues implies

MR1 = MR2 ⇒ p1∗(1 − 1/ϵ1) = p2∗(1 − 1/ϵ2) ⇒ p1∗/p2∗ = (1 − 1/ϵ2)/(1 − 1/ϵ1). (4)

This says that the market with the lower demand elasticity ϵ1 (demand not as responsive to price
changes) will have the higher price p1∗. Here the direct demands are q1 = 12 − p1 and q2 = 5 − ½p2.
1

Recall that the monopolist always produces where ϵi > 1, which is confirmed at the 2 markets'
corresponding price points p1∗ = 8.43 and p2∗ = 7.43:

At p1∗ in Market 1:

ϵ1 = −(dq1/dp1)(p1∗/q1∗) = −(−1) × (8.43/3.57) = 2.36 > 1. (5)

At p2∗ in Market 2:

ϵ2 = −(dq2/dp2)(p2∗/q2∗) = −(−1/2) × (7.43/1.29) = 2.89 > 1. (6)

At the solution π∗(q1∗, q2∗), ϵ1 < ϵ2, so p1∗ > p2∗.
