
First-Order Taylor Approximation in Multiple Variables
1. Introduction
The First-Order Taylor Approximation is a fundamental concept in
multivariable calculus, allowing the approximation of a function near a
given point using a linear function. It generalizes the Taylor Series
Expansion from single-variable calculus to multiple dimensions, playing a
critical role in optimization, machine learning, and numerical analysis.

2. Differentiability in Rⁿ

A function f : Rⁿ → R is differentiable at a point x' if there exists a linear map L : Rⁿ → R such that:

lim_{h→0} [f(x' + h) − f(x') − L(h)] / ∥h∥ = 0

where h = (h₁, h₂, ..., hₙ) is a small perturbation vector, and ∥h∥ is its Euclidean norm:

∥h∥ = √(h₁² + h₂² + ... + hₙ²)

2.1 Properties of Differentiability

• Linearity: The best linear approximation to f is the dot product with the gradient vector.
• Continuity: Differentiability implies continuity, but continuity does not imply differentiability.
• Gradient Existence: The gradient exists and gives the direction of maximum change.

3. First-Order Approximation Formula

If f is differentiable, the best linear approximation is given by:

L(h) = ∇f(x') · h

where ∇f(x') is the gradient vector:

∇f(x') = (∂f/∂x₁, ∂f/∂x₂, ..., ∂f/∂xₙ), with all partial derivatives evaluated at x'.

Thus, we obtain the First-Order Taylor Approximation:

f(x' + h) = f(x') + ∇f(x') · h + ε(h)

where ε(h) is an error term satisfying:

lim_{h→0} ε(h) / ∥h∥ = 0

This means that for sufficiently small h, we can approximate:

f(x' + h) ≈ f(x') + ∇f(x') · h
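
As a concrete illustration, here is a minimal Python sketch (the example function, base point, and step are illustrative choices, not taken from the text) that builds the linear model f(x') + ∇f(x')·h with a finite-difference gradient and compares it to the true value:

import numpy as np

def grad_fd(f, x, eps=1e-6):
    # Central-difference estimate of the gradient of f at x.
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def taylor1(f, x0, h):
    # First-order Taylor model: f(x0) + grad f(x0) . h
    return f(x0) + grad_fd(f, x0) @ h

f = lambda x: x[0]**2 + x[0]*x[1] + x[1]**2   # illustrative function
x0 = np.array([1.0, 2.0])
h = np.array([0.01, -0.02])
print(taylor1(f, x0, h), f(x0 + h))           # 6.94 vs 6.9403: agreement up to O(∥h∥²)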

4. Explicit Formulation in R²

For a function f : R² → R, let:

(x, y) = (x', y') + (h, k)

where h, k are small. Then:

f(x' + h, y' + k) ≈ f(x', y') + ∇f(x', y') · (h, k)

Expanding the dot product:

f(x, y) ≈ f(x', y') + (∂f/∂x)(x', y') · h + (∂f/∂y)(x', y') · k

Using vector notation:

f(x, y) ≈ f(x', y') + ∇f(x', y') · [(x, y) − (x', y')]

5. Theoretical Framework and Principles


5.1 Principle of Local Linearity
The fundamental principle behind Taylor approximations is local linearity,
which states that a differentiable function behaves like a linear function
when viewed in a sufficiently small neighborhood.
5.2 Chain Rule and Partial Derivatives
The gradient of a function is built using partial derivatives, which
describe how the function changes with respect to each variable
independently.

5.3 Connection to the Total Differential


The total differential:

df = (∂f/∂x) dx + (∂f/∂y) dy

is precisely the linear part of the First-Order Taylor Approximation.

6. Patterns and Regularities

• Gradient as a Direction Indicator: The gradient always points in the direction of maximum increase.
• Local vs. Global Behavior: The approximation holds only in a local neighborhood.
• Higher-Order Corrections: The first-order term is the dominant contributor for small perturbations.

7. Practical Relevance
7.1 Machine Learning & Optimization

• The gradient descent algorithm in machine learning relies on first-order Taylor approximations (see the sketch below).
• Newton's Method improves optimization using higher-order derivatives.
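
A minimal sketch of that idea (the quadratic objective, learning rate, and iteration count are illustrative assumptions): each step moves against the gradient, the direction in which the local linear model decreases fastest.

import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient, i.e. follow the local linear model downhill.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Illustrative objective f(x, y) = x² + xy + y², minimized at (0, 0).
grad = lambda x: np.array([2*x[0] + x[1], x[0] + 2*x[1]])
print(gradient_descent(grad, [1.0, 2.0]))  # converges toward [0, 0]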

7.2 Physics & Engineering

• Used to approximate functions in thermodynamics, electromagnetism, and fluid dynamics.
• Applied in robotics and control systems to linearize nonlinear models.

7.3 Computational Applications

• Essential in finite difference methods for numerical analysis.
• Used in scientific simulations and signal processing.


8. Historical Records and Expert Consultation
8.1 Historical Development

• Brook Taylor (1715) introduced the Taylor Series.
• Joseph-Louis Lagrange extended Taylor's work.
• Carl Gustav Jacobi and Augustin-Louis Cauchy formalized gradient-based analysis.

8.2 Expert Insights

Mathematicians and engineers use Taylor approximations in fields like AI, control systems, and finance. Experts emphasize the necessity of considering higher-order terms for more accurate approximations.

9. Observation and Documentation

• Observing error behavior in numerical methods validates Taylor expansions.
• Documenting convergence properties is critical in stochastic gradient descent (SGD).

10. Variants and Extensions

• Second-Order Taylor Expansion: Includes the Hessian matrix for quadratic approximation (see the sketch after this list).
• Taylor Approximations in Function Spaces: Used in functional analysis and PDE solutions.
• Multivariable Taylor Series: Extends beyond first order to capture curvature.
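
As a sketch of the second-order variant flagged above (the quadratic test function, its gradient, and its Hessian are hand-coded illustrative assumptions), the model adds ½·hᵀHh to the linear term:

import numpy as np

def taylor2(f_val, grad, hess, h):
    # Second-order Taylor model: f + g·h + 0.5·hᵀHh
    h = np.asarray(h, dtype=float)
    return f_val + grad @ h + 0.5 * h @ hess @ h

# Illustrative: f(x, y) = x² + xy + y² at (1, 2), where f = 7.
f_val = 7.0
grad = np.array([4.0, 5.0])                 # (2x + y, x + 2y) at (1, 2)
hess = np.array([[2.0, 1.0], [1.0, 2.0]])   # constant Hessian of f
print(taylor2(f_val, grad, hess, [0.1, -0.1]))  # 6.91, exact here since f is quadratic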

11. Practical Applications

• Control Theory: Designing PID controllers.
• Computer Graphics: Texture mapping approximations.
• Finance: Risk modeling in option pricing.

12. Conclusion
The First-Order Taylor Approximation provides a linear estimation of
differentiable functions and serves as a fundamental tool across
mathematics, engineering, and data science. Understanding its principles,
properties, and applications enhances problem-solving abilities in advanced
computational fields.

Understanding the First-Order Taylor Approximation in Multiple Variables
1. Definition of Differentiability in Rⁿ

A function f : Rⁿ → R is differentiable at a point x' ∈ Rⁿ if there exists a linear map L : Rⁿ → R such that:

lim_{h→0} [f(x' + h) − f(x') − L(h)] / ∥h∥ = 0

where h = (h₁, h₂, ..., hₙ) is a small perturbation vector, and ∥h∥ is its Euclidean norm.

This definition ensures that the function can be well approximated by a linear transformation when we zoom in close enough to x'.
2. Best Linear Approximation via the Gradient

If f is differentiable at x', the best linear approximation is given by:

L(h) = ∇f(x') · h

where ∇f(x') is the gradient vector, defined as:

∇f(x') = (∂f/∂x₁, ∂f/∂x₂, ..., ∂f/∂xₙ), evaluated at x'.

Thus, differentiability implies:

lim_{h→0} [f(x' + h) − f(x') − ∇f(x') · h] / ∥h∥ = 0.

From this, for sufficiently small h, we can express:

f(x' + h) = f(x') + ∇f(x') · h + ε(h),

where the error term ε(h) satisfies:

lim_{h→0} ε(h) / ∥h∥ = 0.

This means that as h approaches zero, the error term diminishes faster than ∥h∥, making the linear approximation highly accurate for small perturbations.
3. Explicit Formulation for R²

For a function f : R² → R, let:

(x, y) = (x', y') + (h, k),

where h and k are small perturbations. Differentiability ensures:

f(x' + h, y' + k) ≈ f(x', y') + ∇f(x', y') · (h, k).

Expanding the dot product:

f(x, y) ≈ f(x', y') + (∂f/∂x)(x', y') · h + (∂f/∂y)(x', y') · k.

Rewriting using vector notation:

f(x, y) ≈ f(x', y') + ∇f(x', y') · [(x, y) − (x', y')].

This is the first-order Taylor approximation in two variables.

4. Concrete Example

Consider the function:

f(x, y) = x² + xy + y².

Let's approximate f(x, y) near (x', y') = (1, 2).

1. Compute the partial derivatives:

∂f/∂x = 2x + y,  ∂f/∂y = x + 2y.

2. Evaluate at (1, 2):

∂f/∂x(1, 2) = 2(1) + 2 = 4,  ∂f/∂y(1, 2) = 1 + 2(2) = 5.

3. The first-order approximation at (x', y') = (1, 2) is:

f(x, y) ≈ f(1, 2) + 4(x − 1) + 5(y − 2).

Since f(1, 2) = 1² + (1)(2) + 2² = 7, the final approximation is:

f(x, y) ≈ 7 + 4(x − 1) + 5(y − 2).

For small perturbations around (1, 2), this linear function provides an accurate approximation of f(x, y).
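
A quick numerical check in Python (the probe point (1.05, 2.1) is an arbitrary choice):

f = lambda x, y: x**2 + x*y + y**2
approx = lambda x, y: 7 + 4*(x - 1) + 5*(y - 2)
print(f(1.05, 2.1), approx(1.05, 2.1))  # 7.7175 vs 7.7; the gap is O(∥h∥²)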

5. Expected Questions & Problems

Here are some types of problems you may encounter:

1. Conceptual Understanding
   o Explain why differentiability implies a linear approximation.
   o How does the error term ε(h) influence the approximation?
   o Compare differentiability and continuity in the context of Taylor approximations.

2. Computational Exercises
   o Compute the first-order Taylor approximation of a given function at a specific point.
   o Evaluate the accuracy of the linear approximation for small perturbations.

3. Theoretical Questions
   o Prove that if f is differentiable, then it is continuous.
   o Derive the first-order Taylor approximation in three or more variables.

4. Applications
   o Use the first-order approximation to estimate function values in physics or engineering problems.
   o Analyze the error of approximation in optimization and machine learning contexts.

Example 1: Linear Approximation Near a Point

Function: f(x, y) = ln(xy)
Point of Approximation: (x₀, y₀) = (1, e)
Objective: Approximate f(1.02, e − 0.03)

Solution:

1. Compute Partial Derivatives at (1, e):
   o ∂f/∂x = y/(xy) = 1/x; at x = 1: ∂f/∂x = 1
   o ∂f/∂y = x/(xy) = 1/y; at y = e: ∂f/∂y = 1/e

2. Calculate Displacements:
   o Δx = 1.02 − 1 = 0.02
   o Δy = (e − 0.03) − e = −0.03

3. Apply First-Order Taylor Approximation:

f(x, y) ≈ f(x₀, y₀) + (∂f/∂x)Δx + (∂f/∂y)Δy
f(1.02, e − 0.03) ≈ ln(1 · e) + (1)(0.02) + (1/e)(−0.03)
f(1.02, e − 0.03) ≈ 1 + 0.02 − 0.03/e

4. Compute Numerical Value:

f(1.02, e − 0.03) ≈ 1 + 0.02 − 0.03/2.718
f(1.02, e − 0.03) ≈ 1.02 − 0.011
f(1.02, e − 0.03) ≈ 1.009
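
A short Python check of this result against the exact value:

import math

f = lambda x, y: math.log(x * y)
exact = f(1.02, math.e - 0.03)
approx = 1 + 1 * 0.02 + (1 / math.e) * (-0.03)
print(exact, approx)  # ≈ 1.0087 vs ≈ 1.0090; both round to 1.009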

Example 2: Estimating Temperature Change

Function: T(x, y) = x² + xy + y²
Point: (2, 3)
Objective: Estimate the temperature at (2.1, 2.9)

Solution:

1. Compute Partial Derivatives at (2, 3):
   o ∂T/∂x = 2x + y; at (2, 3): ∂T/∂x = 4 + 3 = 7
   o ∂T/∂y = x + 2y; at (2, 3): ∂T/∂y = 2 + 6 = 8

2. Calculate Displacements:
   o Δx = 0.1
   o Δy = −0.1

3. Approximate Temperature Change:

ΔT ≈ (∂T/∂x)Δx + (∂T/∂y)Δy
ΔT ≈ (7)(0.1) + (8)(−0.1) = 0.7 − 0.8 = −0.1

4. Estimate New Temperature:
   o Original T(2, 3) = 2² + (2)(3) + 3² = 4 + 6 + 9 = 19
   o Estimated T(2.1, 2.9) ≈ 19 + (−0.1) = 18.9

Example 3: Error Estimation in Engineering

Function: V(l, w, h) = lwh (Volume of a Box)
Measurements:
• Length (l) = 50 cm (± 0.2 cm)
• Width (w) = 30 cm (± 0.1 cm)
• Height (h) = 20 cm (± 0.1 cm)
Objective: Estimate the maximum error in volume.

Solution:

1. Compute Partial Derivatives:
   o ∂V/∂l = wh = (30)(20) = 600 cm²
   o ∂V/∂w = lh = (50)(20) = 1000 cm²
   o ∂V/∂h = lw = (50)(30) = 1500 cm²

2. Calculate Maximum Possible Errors:
   o Δl = 0.2 cm
   o Δw = 0.1 cm
   o Δh = 0.1 cm

3. Estimate Maximum Error in Volume (ΔV):

ΔV ≈ |(∂V/∂l)Δl| + |(∂V/∂w)Δw| + |(∂V/∂h)Δh|
ΔV ≈ (600 · 0.2) + (1000 · 0.1) + (1500 · 0.1) = 120 + 100 + 150 = 370 cm³

So, the maximum possible error in the volume measurement is 370 cm³.
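
The same worst-case estimate as a short Python sketch, using the dimensions and tolerances given above:

# First-order (worst-case) error propagation for V = l·w·h.
l, w, h = 50.0, 30.0, 20.0     # cm
dl, dw, dh = 0.2, 0.1, 0.1     # measurement tolerances, cm
dV = abs(w * h) * dl + abs(l * h) * dw + abs(l * w) * dh
print(dV)                      # 370.0 cm³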

Example 4: Approximating Economic Output

Production Function: Q(K, L) = K^0.3 L^0.7
Initial Inputs: Capital K = 100, Labor L = 200
Changes:
• ΔK = 5
• ΔL = −10
Objective: Estimate the change in output (ΔQ).

Solution:

1. Compute Partial Derivatives at (100, 200):
   o ∂Q/∂K = 0.3 K^(−0.7) L^0.7 = 0.3 · (100)^(−0.7) · (200)^0.7
   o ∂Q/∂L = 0.7 K^0.3 L^(−0.3) = 0.7 · (100)^0.3 · (200)^(−0.3)

2. Calculate Numerical Values:
   o For brevity, let ∂Q/∂K ≈ a and ∂Q/∂L ≈ b; the sketch below evaluates them numerically.

3. Estimate ΔQ:

ΔQ ≈ aΔK + bΔL

4. Compute ΔQ Numerically:
   o Substitute the computed values of a and b and the given ΔK and ΔL.
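
Since the solution leaves a and b symbolic, here is a short Python sketch (an illustration, not part of the original text) that evaluates them and the resulting ΔQ:

# Marginal products of Q = K^0.3 · L^0.7 at (K, L) = (100, 200).
K, L = 100.0, 200.0
a = 0.3 * K**-0.7 * L**0.7    # ∂Q/∂K ≈ 0.487
b = 0.7 * K**0.3 * L**-0.3    # ∂Q/∂L ≈ 0.569
dQ = a * 5 + b * (-10)
print(a, b, dQ)               # ΔQ ≈ 2.44 − 5.69 ≈ −3.25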

Example 5: Pressure Change in Physics

Equation of State: P(V, T) = nRT/V
Initial Conditions:
• Volume V = 10 L
• Temperature T = 300 K
Changes:
• ΔV = 0.1 L
• ΔT = 5 K
Objective: Estimate ΔP.

Solution:

1. Compute Partial Derivatives:
   o ∂P/∂V = −nRT/V²
   o ∂P/∂T = nR/V

2. Evaluate at Initial Conditions:
   o ∂P/∂V = −nR · 300/(10)² = −3nR
   o ∂P/∂T = nR/10 = 0.1nR

3. Estimate ΔP:

ΔP ≈ (−3nR)(0.1) + (0.1nR)(5) = −0.3nR + 0.5nR = 0.2nR

So, the pressure increases by approximately 0.2nR units.

Example 6: Linearization in Biology

Enzyme Reaction Rate: R(S, E) = V_max S / (K_m + S)
Initial Conditions:
• Substrate Concentration S = 50 µM
• Enzyme Concentration E = 10 µg/mL
Changes:
• ΔS = 5 µM
• ΔE = 1 µg/mL
Objective: Estimate ΔR.

Solution:

1. Assume V_max = kE, where k is a constant.

2. Compute Partial Derivatives:
   o ∂R/∂S = V_max K_m / (K_m + S)²
   o ∂R/∂E = kS / (K_m + S)

3. Evaluate at Initial Conditions and Estimate ΔR.

Example 7: Approximating a Change in Gravitational Force

Newton's Law of Gravitation: F(r) = G m₁m₂/r²
Initial Distance: r = 100 km
Change in Distance: Δr = −1 km (objects move 1 km closer)
Objective: Estimate ΔF.

Solution:

1. Compute Derivative:
   o dF/dr = −2G m₁m₂/r³

2. Evaluate at r = 100 km:
   o dF/dr = −2G m₁m₂/(100)³

3. Estimate ΔF:

ΔF ≈ (dF/dr)Δr = −2G m₁m₂/(100)³ · (−1) = 2G m₁m₂/1,000,000

So, the gravitational force increases by 2G m₁m₂/1,000,000 units.

Example 8: Error in Approximating Distance

Function: D(x, y) = √(x² + y²)
Point: (3, 4)
Changes:
• Δx = 0.1
• Δy = −0.1
Objective: Estimate ΔD.

Solution:

1. Compute Partial Derivatives:
   o ∂D/∂x = x/√(x² + y²); at (3, 4): ∂D/∂x = 3/5 = 0.6
   o ∂D/∂y = y/√(x² + y²); at (3, 4): ∂D/∂y = 4/5 = 0.8

2. Estimate ΔD:

ΔD ≈ (0.6)(0.1) + (0.8)(−0.1) = 0.06 − 0.08 = −0.02

So, the distance decreases by approximately 0.02 units.

Example 9: Approximating a Function in Thermodynamics

Function: U(S, V) = a S^b V^c
Given Constants: a, b, c
Initial Conditions: S = 100, V = 50
Changes:
• ΔS = 5
• ΔV = −2
Objective: Estimate ΔU.

Solution:

1. Compute Partial Derivatives:
   o ∂U/∂S = ab S^(b−1) V^c
   o ∂U/∂V = ac S^b V^(c−1)

2. Evaluate at Initial Conditions and Compute ΔU:

ΔU ≈ (ab S^(b−1) V^c)ΔS + (ac S^b V^(c−1))ΔV

3. Substitute Numerical Values to Find ΔU.

Example 10: Electric Field Approximation

Function: E(x, y) = (1/4πε₀) · q/(x² + y²)
Point: (1, 1)
Changes:
• Δx = 0.05
• Δy = −0.05
Objective: Estimate ΔE.

Solution:

1. Compute Partial Derivatives:
   o ∂E/∂x = (1/4πε₀) · (−2xq)/(x² + y²)²
   o Similarly for ∂E/∂y.

2. Evaluate at (1, 1) and Compute ΔE.

Example 11: Approximation in Financial Mathematics

Function: P(R, T) = P₀ e^(RT)
Initial Conditions:
• Interest Rate R = 0.05
• Time T = 10 years
Changes:
• ΔR = 0.001
• ΔT = 0
Objective: Estimate ΔP.

Solution:

1. Compute Partial Derivative:
   o ∂P/∂R = P₀ T e^(RT)

2. Compute ΔP:

ΔP ≈ (∂P/∂R)ΔR = P₀ T e^(RT) ΔR

3. Substitute Numerical Values and Calculate ΔP.

Example 12: Estimating Change in Demand

Demand Function: Q(P, I) = c P^(−d) I^e
Variables:
• Price P
• Income I
• Constants c, d, e
Objective: Estimate ΔQ when P increases by 2% and I increases by 1%.

Solution:

1. Compute Elasticities:
   o E_P = −d
   o E_I = e

2. Estimate Percentage Change in Q:

ΔQ/Q ≈ E_P (ΔP/P) + E_I (ΔI/I)
ΔQ/Q ≈ (−d)(0.02) + e(0.01)

3. Compute ΔQ Using Q = c P^(−d) I^e.
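
A minimal sketch of step 2; the elasticity constants d = 1.5 and e = 0.8 are assumed purely for illustration and are not given in the text:

# Percentage change in demand: ΔQ/Q ≈ E_P·(ΔP/P) + E_I·(ΔI/I).
d, e = 1.5, 0.8                  # assumed elasticity constants
pct_Q = (-d) * 0.02 + e * 0.01
print(pct_Q)                     # -0.022, i.e. demand falls by about 2.2%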

Example 13: Change in Surface Area

Function: S(r, h) = 2πrh + 2πr² (Surface Area of a Cylinder)
Initial Measurements:
• Radius r = 5 cm
• Height h = 10 cm
Changes:
• Δr = 0.1 cm
• Δh = −0.2 cm
Objective: Estimate ΔS.

Solution:

1. Compute Partial Derivatives:
   o ∂S/∂r = 2πh + 4πr
   o ∂S/∂h = 2πr

2. Evaluate at Initial Conditions:
   o ∂S/∂r = 2π(10) + 4π(5) = 20π + 20π = 40π
   o ∂S/∂h = 2π(5) = 10π

3. Estimate ΔS:

ΔS ≈ (40π)(0.1) + (10π)(−0.2) = 4π − 2π = 2π cm²

So, the surface area increases by approximately 6.283 cm².

Example 14: Linearizing a Multivariable Function

Function: f(x, y, z) = x sin(yz)
Point: (1, 0, π)
Objective: Find the linear approximation near this point.

Solution:

1. Compute Partial Derivatives at (1, 0, π):
   o ∂f/∂x = sin(yz); at the point: sin(0) = 0
   o ∂f/∂y = xz cos(yz); at the point: (1)(π)cos(0) = π
   o ∂f/∂z = xy cos(yz); at the point: (1)(0)cos(0) = 0

2. Linear Approximation:

f(x, y, z) ≈ f(1, 0, π) + 0 · (x − 1) + π · (y − 0) + 0 · (z − π)
f(x, y, z) ≈ 0 + πy = πy

Example 15: Estimating Profit Change

Profit Function: Π(p, q) = pq − C(q), where C(q) = aq²
Initial Conditions:
• Price p = $50
• Quantity q = 100
• Cost coefficient a = 0.2
Changes:
• Δp = $1
• Δq = 2
Objective: Estimate ΔΠ.

Solution:

1. Compute Partial Derivatives:
   o ∂Π/∂p = q
   o ∂Π/∂q = p − 2aq

2. Evaluate at Initial Conditions:
   o ∂Π/∂p = 100
   o ∂Π/∂q = 50 − 2(0.2)(100) = 50 − 40 = 10

3. Estimate ΔΠ:

ΔΠ ≈ (100)(1) + (10)(2) = 100 + 20 = $120

So, profit increases by approximately $120.

Example 16: Change in Air Pressure with Altitude

Function: P(h, T) = P₀ e^(−Mgh/RT)
Variables:
• h: Altitude
• T: Temperature
• Constants: P₀, M, g, R
Objective: Estimate ΔP when h increases by 100 m and T decreases by 2 K.

Solution:

1. Compute Partial Derivatives:
   o ∂P/∂h = −(Mg/RT) P
   o ∂P/∂T = (Mgh/RT²) P

2. Estimate ΔP Using the Given Changes.

Example 17: Estimating Error in Area Measurement

Function: A(r) = πr²
Measurement: r = 10 cm (± 0.05 cm)
Objective: Estimate the maximum error in area.

Solution:

1. Compute Derivative:
   o dA/dr = 2πr

2. Estimate ΔA:

ΔA ≈ (dA/dr)Δr = (2π · 10)(0.05) = π

So, the maximum error in area is approximately π cm².

Example 18: Linear Approximation in Meteorology

Function: P(T, H) = a e^(−bT) H^c (pressure depends on temperature T and humidity H)
Objective: Estimate ΔP given small changes in T and H.

Solution:

1. Compute Partial Derivatives:
   o ∂P/∂T = −ab e^(−bT) H^c
   o ∂P/∂H = ac e^(−bT) H^(c−1)

2. Estimate ΔP Using the Given Changes.

Example 19: Approximation in Optics

Lens Maker's Equation:

1/f = (n − 1)(1/R₁ − 1/R₂)

Objective: Estimate the change in focal length (Δf) when the radii of curvature R₁ and R₂ change slightly.

Solution:

1. Compute Partial Derivatives with Respect to R₁ and R₂.
2. Use the First-Order Approximation to Estimate Δf.


Example 20: Estimating Change in Sound Intensity

Function: I(d) = P/(4πd²)
Objective: Estimate ΔI when distance d increases by 1 m.

Solution:

1. Compute Derivative:
   o dI/dd = −2P/(4πd³) = −P/(2πd³)

2. Estimate ΔI:

ΔI ≈ (dI/dd)Δd

1. Linear Function in R²

• Function: f(x, y) = 3x + 4y + 5.
• Point: (x', y') = (1, 2).
• Gradient: ∇f = (3, 4) (constant).
• Taylor Approximation: f(1 + h, 2 + k) = 16 + 3h + 4k.
• Error Term: ε(h, k) = 0.
• Differentiability: lim_{(h,k)→0} 0/√(h² + k²) = 0, so f is differentiable.

2. Quadratic Function

• Function: f(x, y) = x² + y².
• Point: (x', y') = (1, 1).
• Gradient: ∇f = (2x, 2y) = (2, 2).
• Taylor Approximation: f(1 + h, 1 + k) ≈ 2 + 2h + 2k.
• Error Term: ε(h, k) = h² + k².
• Limit Analysis: (h² + k²)/√(h² + k²) = √(h² + k²) → 0.

3. Differentiable Non-C¹ Function

• Function: f(x, y) = (x² + y²) sin(1/(x² + y²)) for (x, y) ≠ (0, 0), and f(0, 0) = 0.
• Gradient at (0, 0): ∇f(0, 0) = (0, 0).
• Differentiability:

lim_{(h,k)→0} (h² + k²) sin(1/(h² + k²)) / √(h² + k²) = √(h² + k²) sin(1/(h² + k²)) → 0.

• Conclusion: Differentiable at (0, 0), but the partial derivatives are discontinuous.
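
A numerical spot check (an illustration along one shrinking path, not a proof) that the error ratio vanishes even though the partials oscillate near the origin:

import math

f = lambda x, y: (x*x + y*y) * math.sin(1 / (x*x + y*y)) if (x, y) != (0, 0) else 0.0
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    # Since ∇f(0,0) = (0,0), the Taylor error is just f(h,k); the ratio is bounded by √(h²+k²).
    print(abs(f(t, t)) / math.hypot(t, t))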

4. Non-Differentiable Function with Partial Derivatives

• Function: f(x, y) = x³/(x² + y²) for (x, y) ≠ (0, 0), and f(0, 0) = 0.
• Partial Derivatives at (0, 0): ∂f/∂x(0, 0) = 1, ∂f/∂y(0, 0) = 0.
• Limit Analysis:

lim_{(h,k)→0} [h³/(h² + k²) − h] / √(h² + k²) = lim_{(h,k)→0} −hk²/(h² + k²)^(3/2), which depends on the path.

• Conclusion: Not differentiable at (0, 0).
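
The path dependence can be checked numerically with a short sketch (the two paths are chosen for illustration): the Taylor-error ratio vanishes along the x-axis but settles near −1/(2√2) ≈ −0.354 along k = h, so no single linear map works.

f = lambda x, y: x**3 / (x*x + y*y) if (x, y) != (0, 0) else 0.0
L = lambda h, k: 1.0 * h + 0.0 * k   # candidate linear map built from the partials
ratio = lambda h, k: (f(h, k) - L(h, k)) / (h*h + k*k) ** 0.5
for t in [1e-1, 1e-3, 1e-5]:
    print(ratio(t, 0.0), ratio(t, t))  # 0.0 along the axis; about -0.354 along k = h
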
5. Exponential Function

• Function: f(x, y) = e^(x+y).
• Point: (0, 0).
• Gradient: ∇f = (e^(x+y), e^(x+y)) = (1, 1).
• Taylor Approximation: f(h, k) ≈ 1 + h + k.
• Error Term: ε(h, k) = (h + k)²/2 + ⋯.
• Limit: (h + k)²/(2√(h² + k²)) → 0.

6. Trigonometric Function

• Function: f(x, y) = sin(x) + cos(y).
• Point: (π/2, 0).
• Gradient: ∇f = (cos(π/2), −sin(0)) = (0, 0).
• Taylor Approximation: f(π/2 + h, 0 + k) ≈ 2.
• Error Term: −h²/2 − k²/2 + ⋯.
• Limit: −(h² + k²)/(2√(h² + k²)) → 0.

7. Absolute Value Function (Non-Differentiable)

• Function: f(x, y) = |x| + |y|.
• Point: (0, 1).
• Partial Derivatives: ∂f/∂x(0, 1) does not exist.
• Conclusion: Not differentiable at (0, 1).

8. Polynomial with Cross Terms

• Function: f(x, y) = x²y + xy².
• Point: (1, 1).
• Gradient: ∇f = (2xy + y², x² + 2xy) = (3, 3).
• Taylor Approximation: f(1 + h, 1 + k) ≈ 2 + 3h + 3k.
• Error Term: h² + k² + 4hk + ⋯.
• Limit: (h² + k² + 4hk)/√(h² + k²) → 0.

9. Rational Function

• Function: f(x, y) = (x + y)/(1 + x² + y²).
• Point: (0, 0).
• Gradient: ∇f(0, 0) = (1, 1).
• Error Term:

ε(h, k) = (h + k)/(1 + h² + k²) − (h + k) = −(h + k)(h² + k²)/(1 + h² + k²).

• Limit: ε(h, k)/√(h² + k²) → 0.

10. Logarithmic Function

• Function: f(x, y) = ln(1 + x + y).
• Point: (0, 0).
• Gradient: ∇f = (1/(1 + x + y), 1/(1 + x + y)) = (1, 1).
• Error Term: −(x + y)²/2 + ⋯.
• Limit: −(x + y)²/(2√(x² + y²)) → 0.

11. Discontinuous Function

• Function: f(x, y) = 1 if y = x² and x ≠ 0, and f(x, y) = 0 otherwise.
• Point: (0, 0).
• Analysis: Discontinuous at (0, 0); hence, not differentiable.

12. Homogeneous Function

• Function: f(x, y) = (x² + y²)^(1/3).
• Point: (0, 0).
• Partial Derivatives: Do not exist at (0, 0).
• Conclusion: Not differentiable.

13. Directional Derivatives Exist but Not Differentiable

• Function: f(x, y) = x³/(x² + y²) for (x, y) ≠ (0, 0), and f(0, 0) = 0.
• Directional Derivatives: D_v f(0, 0) = a³/(a² + b²) for v = (a, b).
• Conclusion: Directional derivatives exist but f is not differentiable.
14. Hyperbolic Function

• Function: f(x, y) = sinh(x) cosh(y).
• Point: (0, 0).
• Gradient: ∇f = (cosh(x) cosh(y), sinh(x) sinh(y)) = (1, 0).
• Taylor Approximation: f(h, k) ≈ h.
• Error Term: h³/6 + hk²/2 + ⋯.
• Limit: Error/√(h² + k²) → 0.

15. Product of Functions

• Function: f(x, y) = e^x ln(1 + y).
• Point: (0, 0).
• Gradient: ∇f = (0, 1).
• Taylor Approximation: f(h, k) ≈ k.
• Error Term: hk + ⋯.
• Limit: hk/√(h² + k²) → 0.

16. Additive Exponential Function

• Function: f(x, y) = x e^y + y e^x.
• Point: (0, 0).
• Gradient: ∇f = (1, 1).
• Error Term: 2hk + ⋯.
• Limit: 2hk/√(h² + k²) → 0.
17. Square Root Function

• Function: f(x, y) = √(1 + x + y).
• Point: (0, 0).
• Gradient: ∇f = (1/2, 1/2).
• Error Term: −(x + y)²/8 + ⋯.
• Limit: −(x + y)²/(8√(x² + y²)) → 0.

18. Function with Oscillatory Behavior

• Function: f(x, y) = x² sin(1/x) + y² for x ≠ 0, and f(0, y) = y².
• Point: (0, 0).
• Gradient: ∇f(0, 0) = (0, 0).
• Error Analysis:

[x² sin(1/x) + y²]/√(x² + y²) → 0.

• Conclusion: Differentiable at (0, 0).

19. Function with Isolated Non-Differentiability

• Function: f(x, y) = √(x² + y²).
• Point: (0, 0).
• Analysis: Not differentiable at (0, 0) (sharp point of the cone).

20. Function with Zero Partial Derivatives but Not Differentiable

• Function: f(x, y) = x²y²/(x⁴ + y⁴) for (x, y) ≠ (0, 0), and f(0, 0) = 0.
• Partial Derivatives: f vanishes on both axes, so ∂f/∂x(0, 0) = ∂f/∂y(0, 0) = 0.
• Differentiability: lim_{(h,k)→0} f(h, k)/√(h² + k²) does not exist (path-dependent; f itself tends to 1/2 along k = h but to 0 along the axes).
• Conclusion: Not differentiable (indeed, discontinuous at (0, 0)).

Key Takeaways:
1. Differentiability requires the error term ε(h) to vanish faster than ∥h∥.
2. Gradient existence (partial derivatives) is necessary but insufficient for differentiability.
3. Path-dependent error limits or mismatched directional derivatives rule out differentiability, while discontinuous partials alone do not (see Example 3 above).
4. The first-order Taylor approximation is valid if and only if the function is differentiable at the point.
