
Mathematical knowledge

1 Functions of a single variable


• Function y = f(x)
• The first derivative of f with respect to x is

  f'(x) = df/dx.

  It gives, at each value x, the slope or instantaneous rate of change in f(x).


• The second derivative of f with respect to x is

  f''(x) = d²f/dx².

  It gives the rate at which the slope of f changes. It is thus related to the
  curvature of the function f.
• Rules of differentiation

  — For constants α:

    d/dx (α) = 0,

  — For sums:

    d/dx [f(x) ± g(x)] = f'(x) ± g'(x),

  — Power rule:

    d/dx (αx^n) = nαx^(n−1),

  — Product rule:

    d/dx [f(x)g(x)] = f'(x)g(x) + f(x)g'(x),

  — Quotient rule:

    d/dx [f(x)/g(x)] = (f'(x)g(x) − f(x)g'(x)) / [g(x)]²,

  — Chain rule:

    d/dx [f(g(x))] = f'(g(x)) g'(x).

• Examples. Calculate the derivatives in each of the following cases:

  1. y = 5x^(−2)
  2. f(x) = 2x² + 3x + 4
  3. g(x) = 3x − 1
  4. (f(x)g(x))'?
  5. (f(x)/g(x))'?
  6. f(g(x))'?
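
  One possible worked solution, applying the rules above (items 4 to 6 use the f and g defined in items 2 and 3):

  1. y' = 5 · (−2)x^(−3) = −10x^(−3)
  2. f'(x) = 4x + 3
  3. g'(x) = 3
  4. (f(x)g(x))' = f'(x)g(x) + f(x)g'(x) = (4x + 3)(3x − 1) + 3(2x² + 3x + 4) = 18x² + 14x + 9
  5. (f(x)/g(x))' = [(4x + 3)(3x − 1) − 3(2x² + 3x + 4)] / (3x − 1)² = (6x² − 4x − 15) / (3x − 1)²
  6. f(g(x))' = f'(g(x)) g'(x) = (4(3x − 1) + 3) · 3 = 36x − 3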

• Concavity and first and second derivatives.

  If f is twice differentiable, the following statements 1 to 3 are equivalent:

  1. f is concave.

  2. f''(x) ≤ 0 ∀x.

  3. If λ ∈ [0, 1], ∀x and ∀x',

     f(λx + (1 − λ)x') ≥ λf(x) + (1 − λ)f(x')

  (Equivalently, for all x': f(x) ≤ f(x') + f'(x')(x − x') ∀x, i.e. f lies below each of its tangent lines.)

• Moreover,

  4. If f''(x) < 0 ∀x, then f is strictly concave.

• Examples. Are the following functions concave?

  1. f(x) = 3x² − 2x + 5
  2. g(x) = −5x³ + 9x + 3
  3. h(x) = 3x − 3
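
  A sketch of the answers, using the second-derivative test:

  1. f''(x) = 6 > 0 ∀x, so f is strictly convex, not concave.
  2. g''(x) = −30x changes sign at x = 0, so g is neither concave nor convex on all of R (it is concave on x ≥ 0).
  3. h''(x) = 0 ∀x, so h is concave (and convex): linear functions satisfy statement 2 weakly.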

2 Functions of several variables (n = 2)


• Function y = f(x1, x2)
• Partial derivatives

  ∂f/∂xi for i = 1, 2

• Total differential dy of the function y = f(x1, x2)

  dy = (∂f/∂x1) dx1 + (∂f/∂x2) dx2

  where dx1 and dx2 are small changes in x1 and x2.

• Example: f(x1, x2) = x1² + 3x1x2 − x2².

  — What are the partial derivatives?
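
  A sketch of the answer:

    ∂f/∂x1 = 2x1 + 3x2, ∂f/∂x2 = 3x1 − 2x2,

  so the total differential is dy = (2x1 + 3x2) dx1 + (3x1 − 2x2) dx2.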

3 Optimization
• Function y = f(x) is differentiable.
• The function achieves a local maximum (respectively global maximum)
at x*, if f(x*) ≥ f(x) for all x in some neighborhood of x* (respectively
for all x).
• The function achieves a local minimum (respectively global minimum)
at x̃, if f(x̃) ≤ f(x) for all x in some neighborhood of x̃ (respectively for
all x).

• Necessary conditions for local interior optima in the single-variable case:
f(x) is twice continuously differentiable. It reaches a local interior

  1. maximum at x* ⇒ f'(x*) = 0 (FOC)
                   ⇒ f''(x*) ≤ 0 (SOC)

  2. minimum at x̃ ⇒ f'(x̃) = 0 (FOC)
                  ⇒ f''(x̃) ≥ 0 (SOC)
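
  An illustrative example (not in the original notes): f(x) = −x² + 4x. The FOC gives f'(x) = −2x + 4 = 0, so x* = 2; the SOC holds since f''(x) = −2 ≤ 0. Because f''(x) < 0 ∀x, f is strictly concave and x* = 2 is in fact a global maximum, with f(x*) = 4.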

4 Constrained optimization
• Consider the following problem

    max_{x1, x2} f(x1, x2)
    subject to g(x1, x2) = 0

• f(x1, x2): objective function or maximand,
• x1 and x2 are choice variables,
• g(x1, x2): constraint.

• Let us solve this problem by substitution:

  — Suppose we can rewrite the constraint g(x1, x2) = 0 as x2 = g̃(x1).
  — We can substitute this directly into the objective function: the two-variable
    constrained maximization problem can be rephrased as a single-variable
    problem with no constraints:

    max_{x1} f(x1, g̃(x1))

  — x1* is defined by

    ∂f(x1, g̃(x1))/∂x1 + ∂f(x1, g̃(x1))/∂x2 · dg̃(x1)/dx1 = 0

  — and x2* by

    x2* = g̃(x1*)
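
  An illustrative example (not in the original notes): maximize f(x1, x2) = x1x2 subject to x1 + x2 − 1 = 0. The constraint rewrites as x2 = g̃(x1) = 1 − x1, so the problem becomes max_{x1} x1(1 − x1). The FOC is 1 − 2x1 = 0, giving x1* = 1/2 and x2* = g̃(x1*) = 1/2. (Equivalently, the general FOC above reads x2 + x1 · (−1) = 0, i.e. x1 = x2.)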

• Example: a cost-minimizing firm (C = wL + rK) with a Cobb-Douglas
  production function

    min_{K, L} wL + rK
    subject to Q = L^α K^β

  — The constraint Q = L^α K^β becomes L = Q^(1/α) K^(−β/α).
  — Unconstrained minimization:

    min_K w Q^(1/α) K^(−β/α) + rK

  — FOC:

    dC/dK = −(β/α) w Q^(1/α) K^(−β/α − 1) + r = 0

  — SOC:

    d²C/dK² = (β/α)((β/α) + 1) w Q^(1/α) K^(−β/α − 2) ≥ 0

  — Solving the FOC gives

    K* = [(w/r)(β/α) Q^(1/α)]^(α/(α+β))

  — Substituting K* into L = Q^(1/α) K^(−β/α) gives

    L* = [(r/w)(α/β) Q^(1/β)]^(β/(α+β))
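
  A quick sanity check, under the illustrative assumption α = β = 1/2 (not in the original notes): then β/α = 1, 1/α = 2, and α/(α + β) = 1/2, so K* = [(w/r)Q²]^(1/2) = Q·(w/r)^(1/2) and, symmetrically, L* = Q·(r/w)^(1/2). The constraint holds, since L^(1/2) K^(1/2) = Q, and the minimized cost is wL* + rK* = 2Q(wr)^(1/2).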
