1 Directional Derivatives
3 Optimization Problems
4 Double Integrals
Recall the partial derivatives of f at (a, b):
$$f_x(a, b) = \lim_{h \to 0} \frac{f(a + h, b) - f(a, b)}{h},$$
and
$$f_y(a, b) = \lim_{h \to 0} \frac{f(a, b + h) - f(a, b)}{h}.$$
The rate of change of f along the direction u is then the limit of the ratio
$$\frac{f(a + hu_1, b + hu_2) - f(a, b)}{h}$$
as h → 0.
Definition (1.)
The directional derivative of f at (a, b) in the direction of the unit vector u = (u₁, u₂) is
$$D_u f(a, b) = \lim_{h \to 0} \frac{f(a + hu_1, b + hu_2) - f(a, b)}{h},$$
provided this limit exists.
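As a quick numerical sketch of this definition, one can evaluate the difference quotient for shrinking h; the sample function f(x, y) = x²y³ − 4y and the direction v = (2, 5) below are taken from the worked example that follows.

```python
import math

# Sample data taken from the worked example below:
# f(x, y) = x**2 * y**3 - 4*y, the point (a, b) = (2, -1), and v = (2, 5).
def f(x, y):
    return x**2 * y**3 - 4*y

v = (2.0, 5.0)
norm_v = math.hypot(v[0], v[1])
u = (v[0] / norm_v, v[1] / norm_v)        # unit vector u = v/||v||

a, b = 2.0, -1.0
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    quotient = (f(a + h*u[0], b + h*u[1]) - f(a, b)) / h
    print(h, quotient)

print(32 / math.sqrt(29))                 # the exact value obtained later: ≈ 5.9409
```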
Theorem (2.)
If f(x, y) is a differentiable function of two variables x and y, then f has a directional derivative in the direction of any unit vector u = (u₁, u₂), and
$$D_u f(x, y) = \nabla f \cdot u = f_x(x, y)\, u_1 + f_y(x, y)\, u_2.$$
Consider f(x, y) = x²y³ − 4y (so that ∇f(x, y) = (2xy³, 3x²y² − 4)) and the direction v = (2, 5). The unit vector in the direction of v is
$$u = \frac{v}{\|v\|} = \frac{1}{\sqrt{29}}(2, 5).$$
Now we have
$$D_u f(x, y) = \nabla f(x, y) \cdot u = (2xy^3,\ 3x^2y^2 - 4) \cdot \frac{(2, 5)}{\sqrt{29}},$$
so
$$D_u f(2, -1) = (-4, 8) \cdot \frac{(2, 5)}{\sqrt{29}} = \frac{-4 \cdot 2 + 8 \cdot 5}{\sqrt{29}} = \frac{32}{\sqrt{29}}.$$
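A symbolic cross-check of this computation is sketched below, assuming f(x, y) = x²y³ − 4y (recovered, up to an additive constant, from the gradient used above); sympy reproduces the same value via Theorem 2.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y**3 - 4*y                     # assumed f, recovered from the gradient above
u = sp.Matrix([2, 5]) / sp.sqrt(29)       # unit vector in the direction of v = (2, 5)

grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
Duf = grad.dot(u)                         # Theorem 2: D_u f = grad f . u
print(sp.simplify(Duf.subs({x: 2, y: -1})))   # 32*sqrt(29)/29, i.e. 32/sqrt(29)
```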
Similarly, for a differentiable function f of three variables,
$$D_u f(x, y, z) = \nabla f(x, y, z) \cdot u.$$
Definition (6.)
The directional derivative of f at x₀ along the unit vector u is
$$D_u f(x_0) = \lim_{h \to 0} \frac{f(x_0 + hu) - f(x_0)}{h}.$$
If f is differentiable, then
$$D_u f(x) = \nabla f(x) \cdot u,$$
i.e.,
$$D_u f(x_1, x_2, \ldots, x_n) = \nabla f(x_1, x_2, \ldots, x_n) \cdot u.$$
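A small generic helper in this spirit (an illustrative sketch; the function names numerical_gradient and directional_derivative are ad hoc, not from any fixed library) approximates ∇f by central differences and returns ∇f(x)·u for any number of variables.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at the point x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

def directional_derivative(f, x, u):
    """D_u f(x) = grad f(x) . u, normalising u to a unit vector first."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    return float(numerical_gradient(f, x) @ u)

# Example: f(x, y, z) = x*y + z**2 at (1, 2, 3) in the direction of (1, 1, 0);
# the exact value is (2 + 1)/sqrt(2) = 3/sqrt(2) ≈ 2.1213.
print(directional_derivative(lambda p: p[0]*p[1] + p[2]**2, [1, 2, 3], [1, 1, 0]))
```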
We know that the value Du f gives the rate of change of f along the direction u. Since u is a unit vector,
$$D_u f = \nabla f \cdot u = \|\nabla f\|\,\|u\|\cos\theta = \|\nabla f\|\cos\theta,$$
where θ is the angle between the gradient vector ∇f and u. Note that θ = 0 means u has the same direction as the gradient vector ∇f, and θ = π means u has the opposite direction to the gradient vector ∇f.
Theorem (1.)
Suppose f is a differentiable function (of two or three variables).
(a) The maximum rate of change in f at P₀ is ‖∇f(P₀)‖, and it occurs in the direction
$$u = \frac{\nabla f(P_0)}{\|\nabla f(P_0)\|},$$
which is the same direction as the gradient vector ∇f(P₀).
Solution.
(a) The rate of change of f at the point P in the direction of the unit vector u is given by Du f(P) = ∇f(P) · u. Hence
$$D_u f(2, 0) = \nabla f(2, 0) \cdot u = (1, 2) \cdot \left(-\frac{3}{5}, \frac{4}{5}\right) = -\frac{3}{5} + 2 \cdot \frac{4}{5} = 1.$$
(b) We have computed ∇f (2, 0) = (1, 2). Then the maximum rate of
change in f at P is
$$\|\nabla f(2, 0)\| = \|(1, 2)\| = \sqrt{1^2 + 2^2} = \sqrt{5}.$$
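A numerical sketch of Theorem 1, using only the value ∇f(2, 0) = (1, 2) computed above: scanning unit vectors u = (cos θ, sin θ) shows that ∇f(2, 0)·u is largest when u points along the gradient, with maximum value √5.

```python
import math

grad = (1.0, 2.0)                          # the value of ∇f(2, 0) computed above

best_theta, best_rate = 0.0, -float("inf")
for k in range(3600):                      # sample unit vectors u = (cos θ, sin θ)
    theta = 2 * math.pi * k / 3600
    rate = grad[0] * math.cos(theta) + grad[1] * math.sin(theta)
    if rate > best_rate:
        best_theta, best_rate = theta, rate

print(best_rate, math.sqrt(5))                         # both ≈ 2.2360
print(math.cos(best_theta), math.sin(best_theta))      # best direction found by sampling
print(grad[0] / math.sqrt(5), grad[1] / math.sqrt(5))  # ∇f/‖∇f‖, essentially the same
```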
Remark. The direction −∇f(P) (respectively, −∇F(P)) is used in the steepest descent algorithm when seeking a minimum value of f (respectively, of F).
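A minimal steepest-descent sketch in the spirit of this remark (the quadratic test function, the fixed step size, and the stopping rule below are illustrative choices only): each iteration simply moves in the direction −∇f. In practice the step size is usually chosen by a line search, but a small fixed step already shows the idea.

```python
import numpy as np

def f(p):
    x, y = p
    return (x - 1)**2 + 2*(y + 3)**2      # illustrative function with minimum at (1, -3)

def grad_f(p):
    x, y = p
    return np.array([2*(x - 1), 4*(y + 3)])

p = np.array([5.0, 5.0])                  # arbitrary starting point
step = 0.1                                # fixed step size, kept simple on purpose
for _ in range(1000):
    g = grad_f(p)
    if np.linalg.norm(g) < 1e-8:          # stop once the gradient is essentially zero
        break
    p = p - step * g                      # move along the steepest-descent direction -grad f

print(p, f(p))                            # ≈ [1, -3], with f(p) ≈ 0
```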
Definition (3.)
(a) Points at which fx or fy does not exist are called singular points.
(b) If (a, b) is not a singular point, and ∇f (a, b) = (0, 0), i.e. both
fx (a, b) = fy (a, b) = 0, then we call the point (a, b) a stationary
point.
Solution. We have
$$\nabla f(x, y) = \left(\sqrt[3]{y} - 2x,\ \frac{x}{3\sqrt[3]{y^2}}\right), \quad \text{if } y \neq 0.$$
Then
$$\nabla f(x, y) = 0 \iff \begin{cases} \sqrt[3]{y} - 2x = 0, \\ \dfrac{x}{3\sqrt[3]{y^2}} = 0. \end{cases}$$
From x/(3∛(y²)) = 0, we get x = 0. Substituting x = 0 into ∛y − 2x = 0, we obtain y = 0.
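The gradient above can be double-checked symbolically; the sketch below assumes f(x, y) = x·∛y − x², which is what this gradient corresponds to up to an additive constant, and restricts to y > 0 so that the cube root stays real.

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)        # y > 0 keeps the cube root real
f = x * y**sp.Rational(1, 3) - x**2       # assumed f, matching the gradient above up to a constant

fx = sp.diff(f, x)
fy = sp.simplify(sp.diff(f, y))
print(fx)    # y**(1/3) - 2*x
print(fy)    # x/(3*y**(2/3))
# Setting both partials to zero reproduces the hand computation: x = 0 and then y = 0.
```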
Now we see how to use partial derivatives to find and classify the local
extrema of functions of two variables.
If f(a, b) ≤ f(x, y) for all points (x, y) near (a, b), then f has a local minimum at (a, b), and f(a, b) is called a local minimum value.
The following result tells us that the possible points for local extrema are stationary points, provided both fx and fy exist.
Theorem (7.)
If f has a local extremum at (a, b), where (a, b) is an interior point, and
the first-order partial derivatives exist there, then (a, b) is a stationary
point, i.e.
∇f (a, b) = (0, 0).
However, note that at a stationary point (a, b), the value f (a, b) may or
may not be a local maximum or a local minimum.
Definition (8.)
A point (a, b) is called a saddle point of f if it is a stationary point but f has neither a local maximum nor a local minimum at (a, b).
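A standard concrete illustration of Definition 8 (the function f(x, y) = xy here is an illustrative choice, not an example from these notes): the origin is a stationary point, yet f takes both positive and negative values arbitrarily close to it, so it is a saddle point.

```python
# Illustrative saddle point: f(x, y) = x*y has ∇f(0, 0) = (0, 0),
# yet f > 0 along y = x and f < 0 along y = -x arbitrarily close to the origin.
def f(x, y):
    return x * y

eps = 1e-3
print(f(0.0, 0.0))        # 0 at the stationary point
print(f(eps, eps))        # positive nearby
print(f(eps, -eps))       # negative nearby, so (0, 0) is neither a max nor a min
```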
Solution. We have found above that the stationary points of f are (0, 0), (0, 1/√2), and (0, −1/√2).
For all (x, y),
$$f(x, y) = \frac{1}{4} - \left(y^2 - \frac{1}{2}\right)^2 - x^2 \leq \frac{1}{4}.$$
Also we have f(0, ±1/√2) = 1/4. Thus both points (0, ±1/√2) give local maximum values. (In fact, they are global maximum values.)
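The claim that 1/4 is the largest value can also be checked numerically; the sketch below uses f(x, y) = y² − y⁴ − x², the expanded form of the completed square above.

```python
import numpy as np

def f(x, y):
    return y**2 - y**4 - x**2        # equals 1/4 - (y**2 - 1/2)**2 - x**2

xs = np.linspace(-2.0, 2.0, 401)
ys = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, ys)

print(f(X, Y).max())                 # ≈ 0.25: no sampled value exceeds 1/4
print(f(0.0, 1 / np.sqrt(2)))        # 0.25 up to rounding, the value at (0, ±1/√2)
```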
(b) The origin (0, 0) is a stationary point, and f (0, 0) = 0.
Note that
- along x = 0, f(0, y) = y² − y⁴ = y²(1 − y²) ≥ 0 = f(0, 0), if −1 ≤ y ≤ 1.