
Basic Calculus for Economics

1 Single Variable Calculus

Consider a differentiable function f in the XY plane, where y = f(x). The derivative of f at x, denoted f'(x) or dy/dx = df(x)/dx, is the slope of the function f at the point (x, y). The interpretation of the derivative (similar to what a slope means) is how much the Y variable changes in response to an infinitesimal (very small) change in the X variable.

There are a few useful rules related to derivatives, with which most of you are already familiar.


1. Constant Rule: If f is a constant function, that is, if y = f(x) = k for all values of x, then dy/dx = f'(x) = 0. Notice that such a function has a graph that is a horizontal line, which has a slope of 0. Another very important rule related to constants is: if y = f(x) = k × g(x), then f'(x) = d/dx[k × g(x)] = k × dg(x)/dx = k × g'(x).


2. Power Rule: If f(x) = x^n, where n is a constant, then f'(x) = d/dx[x^n] = n x^(n-1).

• Examples:

If f(x) = x^2 then f'(x) = 2x.

If f(x) = 4x^3 then f'(x) = 4 × d/dx[x^3] = 4 × 3x^2 = 12x^2.

If f(x) = √x = x^(1/2) then f'(x) = (1/2) x^(1/2 − 1) = (1/2) x^(−1/2) = 1/(2√x).

3. Sum Rule: If f(x) = g(x) + h(x), then f'(x) = g'(x) + h'(x).

• Examples:

If f(x) = 3x^2 + 2x^4 then f'(x) = d/dx[3x^2] + d/dx[2x^4] = 6x + 8x^3.

If f(x) = x + x^3 then f'(x) = d/dx(x) + d/dx[x^3] = 1 + 3x^2.

4. Product Rule: If f(x) = g(x) h(x), then f'(x) = g'(x) h(x) + h'(x) g(x).

• Example:

If f(x) = x^3 (x^2 + 1) then f'(x) = d/dx[x^3] × (x^2 + 1) + d/dx[x^2 + 1] × x^3 = 3x^2 (x^2 + 1) + 2x (x^3) = 5x^4 + 3x^2.

5. Quotient Rule: If f(x) = g(x)/h(x), then f'(x) = [h(x) g'(x) − g(x) h'(x)] / [h(x)]^2.

6. Chain Rule: This is one of the most important and frequently used rules in economics. Sometimes a variable is actually a function of another variable; for example, a firm's profit is a function of its quantity sold of a product, but the quantity sold is itself a function of the product's price. We will often be interested in how the firm's profit changes if it changes its price, in which case we will need to use the chain rule. The chain rule says: if y = f(x) and we have a function h(x) = g(y) = g(f(x)) = (g ∘ f)(x), then h'(x) = g'(f(x)) × f'(x). Another way of writing this is: dh(x)/dx = dg(y)/dx = (dg(y)/dy) × (dy/dx).

Exercise 1. Prove the quotient rule directly by using the product rule, the power rule and the chain rule.
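These rules can also be checked symbolically. The following is a minimal sketch using Python's sympy library; the specific functions chosen here are our own illustrations, not taken from the text.

import sympy as sp

x = sp.symbols('x')

# Power rule: d/dx[x^3] = 3x^2
assert sp.diff(x**3, x) == 3*x**2

# Product rule: d/dx[x^3 * (x^2 + 1)] = 5x^4 + 3x^2
assert sp.expand(sp.diff(x**3 * (x**2 + 1), x)) == 5*x**4 + 3*x**2

# Quotient rule: d/dx[g/h] = (h g' - g h') / h^2, checked on g = x^2 + 1, h = x
g, h = x**2 + 1, x
assert sp.simplify(sp.diff(g/h, x) - (h*sp.diff(g, x) - g*sp.diff(h, x))/h**2) == 0

# Chain rule: d/dx[(x^2 + 1)^5] = 5 (x^2 + 1)^4 * 2x
assert sp.simplify(sp.diff((x**2 + 1)**5, x) - 5*(x**2 + 1)**4 * 2*x) == 0

print("All derivative rules verified.")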

2 Multivariable Calculus

Consider now a function f of two variables, in three-dimensional XYZ space, where z = f(x, y). That is, f is a function of x and y.

2.1 Partial Derivatives


The partial derivative of f with respect to x is ∂f(x,y)/∂x, which measures how f changes in response to an infinitesimal change in x, while keeping y constant. Sometimes this is written as fx(x, y).


Similarly, the partial derivative of f with respect to y is ∂f(x,y)/∂y, which measures how f changes in response to an infinitesimal change in y, while keeping x constant. Sometimes this is written as fy(x, y).

• Examples:
If f(x, y) = x + y then ∂f(x,y)/∂x = 1 and ∂f(x,y)/∂y = 1.

If f(x, y) = xy then ∂f(x,y)/∂x = y and ∂f(x,y)/∂y = x.

If f(x, y) = x^2 y^3 + 2x + 4y then ∂f(x,y)/∂x = 2x y^3 + 2 and ∂f(x,y)/∂y = 3x^2 y^2 + 4.

Remark. When you are taking a partial derivative you treat other variables as constants.
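For instance, the last example above can be verified symbolically; here is a minimal sketch in Python using sympy:

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y**3 + 2*x + 4*y

# Partial derivative with respect to x: treat y as a constant
print(sp.diff(f, x))  # 2*x*y**3 + 2

# Partial derivative with respect to y: treat x as a constant
print(sp.diff(f, y))  # 3*x**2*y**2 + 4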

2.2 Differentials

As we described earlier, given y = f(x), the derivative of the function is dy/dx = f'(x).

However (in case it helps you remember), we can treat dy/dx as a fraction and write dy = f'(x) dx.

Here, dy and dx are called differentials. They represent changes in the respective variables.

That is,

• dy = change in y;

• dx = change in x; and


• f'(x) = dy/dx = how the change in x causes the change in y.

Example. Suppose y = x^2. Then dy = 2x dx. When initially x = 2, and then x increases by 0.01, by how much does y change (approximately)?

Here, dx = 0.01, so the change in y is (approximately) dy = 2x dx = 2(2)(0.01) = 0.04, so y increases by (approximately) 0.04.¹
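A quick numeric check of this approximation (a sketch in Python; the numbers match the example above):

# y = x^2: compare the differential approximation dy = 2x dx with the exact change
def f(x):
    return x**2

x0, dx = 2.0, 0.01
dy_approx = 2 * x0 * dx           # dy = f'(x) dx, with f'(x) = 2x
dy_exact = f(x0 + dx) - f(x0)     # exact change in y

print(dy_approx)  # 0.04
print(dy_exact)   # 0.0401 (up to floating-point rounding; see the footnote)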

Now, when we have a function z = f (x, y) of two variables, then the differential dz is also sometimes called a total

differential, and
dz = df(x, y) = [∂f(x, y)/∂x] dx + [∂f(x, y)/∂y] dy

The interpretation is: the total change in z is the sum of the changes caused by a change in x (which is

fx (x, y) dx) and a change in y (which is fy (x, y) dy).

Remark. The total differential is the sum of all the partial differentials.
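As an illustration, the same kind of numeric check works for the total differential. The function below is the one from Section 2.1; the point and the changes dx, dy are our own choices.

# z = f(x, y) = x^2 y^3 + 2x + 4y: compare the total differential with the exact change
def f(x, y):
    return x**2 * y**3 + 2*x + 4*y

x0, y0 = 1.0, 2.0
dx, dy = 0.01, -0.02

fx = 2*x0*y0**3 + 2       # partial of f with respect to x at (x0, y0)
fy = 3*x0**2*y0**2 + 4    # partial of f with respect to y at (x0, y0)

dz_approx = fx*dx + fy*dy                    # total differential fx dx + fy dy
dz_exact = f(x0 + dx, y0 + dy) - f(x0, y0)   # exact change in z

print(dz_approx, dz_exact)  # -0.14 vs roughly -0.1414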

3 Maximization Problems

Here we give a very brief overview of how to solve unconstrained maximization problems. In our class, we will

(almost always) deal with constrained maximization problems by first incorporating the constraint into the function

to be maximized (turning it into an unconstrained maximization problem), and then solving it.

In class, we will first write down a maximization problem of the following form:

max_{x ∈ X} f(x)

We read this problem as: we have to choose the value of x from a set X of allowed values in order to maximize

the value of f (x). We call x the choice variable, and f the objective function.

In much of what we do, we will solve problems that have interior solutions. That is, the optimal value of x that maximizes f(x) will not be at the boundary of the set X, but rather in the interior. Let us call the optimal choice x∗ (sometimes also called the maximizer), so that f(x∗) is the maximum value. For all problems with interior solutions (assuming a solution exists and it is interior), the following necessary First Order Condition (FOC) must be true at the optimal choice x∗:

f'(x∗) = 0

¹ Here, we are getting an approximate change because we are treating f'(x) = 4 at x = 2 as fixed, even though as x increases from 2 to 2.01, the rate of change (which is f'(x)) also changes slightly. This approximation works reasonably well for small changes in x, but badly for large changes. In order to compute the exact change in y (not just an approximation), we would have to add up all the "small changes" in y that happen as a result of all the "small changes" in x. This is the purpose of integration, where we could compute the total change in y = ∫_{x=2}^{x=2.01} dy = ∫_{x=2}^{x=2.01} 2x dx = [x^2]_{x=2}^{x=2.01} = (2.01)^2 − (2)^2 = 0.0401.

Notice that this condition will not be true at all values of x, only at the optimal choice x∗. Knowing this about x∗, we can solve this FOC equation, which gives us the solution x∗.
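As a small illustration (the objective function here is our own example, not from the text), the FOC can be solved symbolically:

import sympy as sp

x = sp.symbols('x')
f = 10*x - x**2                    # illustrative concave objective

foc = sp.Eq(sp.diff(f, x), 0)      # first-order condition: f'(x*) = 0
x_star = sp.solve(foc, x)[0]       # x* = 5
print(x_star, f.subs(x, x_star))   # maximizer and maximum value (5 and 25)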

If we instead have a maximization problem where the objective function f (x, y) is a function of two variables,

and both x and y are choice variables², we write down the maximization problem as:

max_{(x, y) ∈ X × Y} f(x, y)

Suppose again we have an interior solution (x∗ , y ∗ ). In this case, we have two necessary FOCs, one for each

choice variable:

fx (x∗ , y ∗ ) = 0

fy (x∗ , y ∗ ) = 0

Here, we will have two equations and two unknowns, so we can simultaneously solve the two equations to find

x∗ and y ∗ .
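A minimal sketch of solving the two FOCs simultaneously (again with an objective function of our own choosing):

import sympy as sp

x, y = sp.symbols('x y')
f = 4*x + 6*y - x**2 - y**2        # illustrative two-variable objective

focs = [sp.Eq(sp.diff(f, x), 0),   # fx(x*, y*) = 0
        sp.Eq(sp.diff(f, y), 0)]   # fy(x*, y*) = 0
solution = sp.solve(focs, [x, y])  # {x: 2, y: 3}
print(solution)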

These are the general methods for solving unconstrained maximization problems. However, as is often the case

in mathematics, a complicated problem can sometimes be transformed into a simpler problem, and then we can

more easily solve the simpler problem. In the following example, we take a constrained maximization problem with

two choice variables, and transform it into a much simpler unconstrained problem with one choice variable, and

only then take the FOC.

Example. A new car salesman keeps two types of cars in his store, Toyotas and Range Rovers. He always manages

to sell all cars within a set period. Each Toyota earns him a per-unit profit of $5000, and each Range Rover $1000.

However, the cost of keeping Toyotas is 250T^2, and the cost for Range Rovers is 50R^2, where T is the number of Toyotas and R the number of Range Rovers. He always stores a total of 10 cars in his store. The problem is: how many

Toyotas and how many Range Rovers should he choose in order to maximize his total profit? (We assume, for the sake of convenience and in order to take derivatives, that any non-negative real number of cars is allowed; the numbers do not have to be whole.)

The constrained profit maximization problem is:

max_{(T, R) ∈ R^2_+} 5000T − 250T^2 + 1000R − 50R^2

s.t. T + R = 10
² In case only x is the choice variable and y is fixed at some level that cannot be changed, we solve it in the same way as a single-variable objective function; however, in this case, the optimal choice x∗ may be different at different levels of y, in which case x∗(y) is a function of y. For example, if a competing firm is choosing to produce a high quantity, a firm's optimal quantity choice will be different than when the competitor chooses a low quantity.

The constraint gives us R = 10 − T. So, if we plug the constraint into the objective function, it becomes an unconstrained maximization problem.

max_T 5000T − 250T^2 + 1000(10 − T) − 50(10 − T)^2

Simplifying,

max_T 5000T − 300T^2 + 5000

FOC:

5000 − 600T = 0

This gives us the optimal T∗ = 25/3 ≈ 8.33, and then the optimal R∗ = 10 − T∗ = 5/3 ≈ 1.67.
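The example can be checked with the same symbolic approach (a sketch in Python using sympy; the variable names are ours):

import sympy as sp

T = sp.symbols('T')

# Substitute the constraint R = 10 - T into the profit function
profit = 5000*T - 250*T**2 + 1000*(10 - T) - 50*(10 - T)**2

foc = sp.Eq(sp.diff(profit, T), 0)   # simplifies to 5000 - 600T = 0
T_star = sp.solve(foc, T)[0]         # 25/3
R_star = 10 - T_star                 # 5/3
print(T_star, R_star)                # 25/3 and 5/3

Both approaches give the same answer as taking the FOC by hand.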
