Notes
Abhinav Pradeep
October 24, 2023
1 Multivariate Limits
1.1 Continuity in calculus of one variable
To prove f is continuous at c, show
lim_{x→c⁺} f(x) = lim_{x→c⁻} f(x) = f(c)
For c⁺: ∀ϵ > 0, ∃δ > 0 s.t. c < x < c + δ ⇒ |f(x) − L| < ϵ
Analogously, for c⁻: ∀ϵ > 0, ∃δ > 0 s.t. c − δ < x < c ⇒ |f(x) − L| < ϵ
Combining both: ∀ϵ > 0, ∃δ > 0 s.t. 0 < |x − c| < δ ⇒ |f(x) − L| < ϵ
1.4 Euclidean distance in R^n
For P, Q ∈ R^n, where P = (P_1, P_2, P_3, ..., P_n) and Q = (Q_1, Q_2, Q_3, ..., Q_n):

D(P, Q) = √( Σ_{i=1}^{n} (P_i − Q_i)² )
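This distance can be sanity-checked numerically (a minimal sketch; `dist` is an illustrative helper name, not from the notes):

```python
import math

def dist(P, Q):
    """Euclidean distance D(P, Q) = sqrt(sum_i (P_i - Q_i)^2)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(P, Q)))

# classic 3-4-5 right triangle in R^2
print(dist((0, 0), (3, 4)))  # 5.0
```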
A sequence (x_i) in R^n converges to X if ∀ϵ > 0, ∃I ∈ N s.t. ∀i ≥ I,
D(x_i, X) < ϵ
For multivariate functions we have:
f : D → Rm
Where
D ⊆ Rn
Considering the set of all sequences that converge to X is equivalent to considering all possible directions in R^n (or in D) from which X can be approached. Hence, where the single variable case considers approaches to X⁺ and X⁻, the multivariate case considers approaches from all directions possible in abstract R^n.
1.7 Limits of multivariate functions
f : Rn → Rm
To prove
lim_{x→C} f(x) = L
1.9 Smoothness
2 Multivariate Derivatives
NOTE: |·| : R^n → R, where for V ∈ R^n, |V| = D(0, V)
2.1 Differentiability
f : R^n → R^m is differentiable at x ∈ R^n if ∃ a linear map T : R^n → R^m such that:

lim_{h→0} |f(x + h) − f(x) − T h| / |h| = 0

Where h ∈ R^n
2.2 Directional Derivatives

D_V f(x) = lim_{h→0} ( f(x + hV) − f(x) ) / h

Where h ∈ R is a scalar and V ∈ R^n is the direction. Componentwise:

D_V f = (D_V f_1, D_V f_2, D_V f_3, ..., D_V f_m)
2.3 Partial Derivatives
Partial derivatives are the special case of D_V f where V is a standard basis vector e_i, e_j, e_k, etc.
In this case it is written as:

D_i f ⇔ ∂f/∂x_i
2.5.1 As an n × m matrix

Row i of Df is D_i f:

Df = [D_1 f; D_2 f; ...; D_n f]

Which forms:

     [ D_1 f_1  D_1 f_2  D_1 f_3  ...  D_1 f_m ]
Df = [ D_2 f_1  D_2 f_2  D_2 f_3  ...  D_2 f_m ]
     [   ...      ...      ...    ...    ...   ]
     [ D_n f_1  D_n f_2  D_n f_3  ...  D_n f_m ]
2.5.2 As an m × n matrix

Column j of Df is (D_j f)^T:

Df = [(D_1 f)^T  (D_2 f)^T  (D_3 f)^T  ...  (D_n f)^T]

Which forms:

     [ D_1 f_1  D_2 f_1  D_3 f_1  ...  D_n f_1 ]
Df = [ D_1 f_2  D_2 f_2  D_3 f_2  ...  D_n f_2 ]
     [   ...      ...      ...    ...    ...   ]
     [ D_1 f_m  D_2 f_m  D_3 f_m  ...  D_n f_m ]
Both forms are valid(ish), but the m × n form (the usual Jacobian convention) is preferred.
For f : R^n → R the total derivative, which is called the gradient ∇f, can be defined as:

∇f = (D_1 f, D_2 f, D_3 f, ..., D_n f)
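The preferred m × n form can be built numerically from partial derivatives (a sketch; `jacobian` and the test map `F` are illustrative):

```python
def jacobian(f, x, h=1e-6):
    """m x n total derivative: row i is (D_1 f_i, ..., D_n f_i), via central differences."""
    m = len(f(x))
    J = []
    for i in range(m):
        row = []
        for j in range(len(x)):
            xp = list(x); xp[j] += h
            xm = list(x); xm[j] -= h
            row.append((f(xp)[i] - f(xm)[i]) / (2 * h))
        J.append(row)
    return J

F = lambda p: (p[0] * p[1], p[0] + p[1], p[0] ** 2)  # f : R^2 -> R^3
J = jacobian(F, (2.0, 3.0))  # approximately [[3, 2], [1, 1], [4, 0]]
```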
2.6 Derivative Laws
2.6.1 Linearity
Consider f : R^n → R^m, g : R^n → R^m which are totally differentiable at x ∈ R^n.

f + g : R^n → R^m

is totally differentiable at x, and D(f + g) = Df + Dg. Likewise

λf : R^n → R^m, λ ∈ R

is totally differentiable at x, and D(λf) = λ Df.
∇f points in the direction of steepest increase; the magnitude |∇f| gives the slope (rate of increase) along this maximal direction.
2.8.2 Divergence: (∇ · f ) : Rn → R
Identifies sources and sinks:
2.8.3 Curl: (∇ × f ) : R³ → R³
3 Tangent Plane
Consider f : R2 → R
Let z = f (x, y)
The plane tangent to z = f(x, y) at some point P⃗ = (a, b, f(a, b)) is of the form:

⃗n · ⃗r = ⃗n · P⃗
Where ⃗n is the vector normal to z = f (x, y) at P⃗ . This would be:
⃗n = (1, 0, D_x f) × (0, 1, D_y f)

Where, writing F(x, y) = (x, y, f(x, y)) for the graph, (1, 0, D_x f) denotes how much the surface changes in the x direction and (0, 1, D_y f) denotes how much it changes in the y direction:

D_x F = (1, 0, D_x f)
D_y F = (0, 1, D_y f)

Either way,

         [ i  j  k     ]
⃗n = det [ 1  0  D_x f ]
         [ 0  1  D_y f ]

⃗n = −(D_x f) i − (D_y f) j + k

⃗n = (−D_x f, −D_y f, 1)
Let ⃗r = (x, y, z). Then the tangent plane is:

⃗n · ⃗r = ⃗n · P⃗
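The normal-form plane above can be computed directly (a sketch; `tangent_plane` is an illustrative helper, with the partials supplied by hand):

```python
def tangent_plane(dxf, dyf, P):
    """Given the partials at P = (a, b, f(a, b)), return (n, n . P),
    so the tangent plane is n . r = n . P with n = (-D_x f, -D_y f, 1)."""
    n = (-dxf, -dyf, 1.0)
    d = sum(ni * pi for ni, pi in zip(n, P))
    return n, d

# f(x, y) = x^2 + y^2 at (1, 1, 2): D_x f = 2, D_y f = 2
n, d = tangent_plane(2.0, 2.0, (1.0, 1.0, 2.0))
# n = (-2, -2, 1), d = -2, i.e. the plane z = 2x + 2y - 2
```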
4 Taylor Series

4.1 Multi-index Notation

α ∈ N_0^n

i.e. an n-dimensional vector/n-tuple of natural numbers including 0. Elements denoted by:

α = (α_1, α_2, α_3, ..., α_n)
4.1.1 Rules
For n-dimensional α:

|α| = Σ_{i=1}^{n} α_i

α! = Π_{i=1}^{n} α_i!

For x ∈ R^n:  x^α = Π_{i=1}^{n} x_i^{α_i}

D^α f = ∂^{|α|} f / ( Π_{i=1}^{n} ∂x_i^{α_i} )
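The three algebraic rules translate directly into code (a sketch; the helper names are illustrative):

```python
from math import factorial, prod

def abs_alpha(alpha):
    """|alpha| = sum_i alpha_i"""
    return sum(alpha)

def fact_alpha(alpha):
    """alpha! = prod_i alpha_i!"""
    return prod(factorial(a) for a in alpha)

def pow_alpha(x, alpha):
    """x^alpha = prod_i x_i^{alpha_i}"""
    return prod(xi ** a for xi, a in zip(x, alpha))

a = (2, 0, 1)
# |a| = 3, a! = 2! * 0! * 1! = 2, (3, 5, 2)^a = 3^2 * 5^0 * 2^1 = 18
```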
4.2 General Taylor Series for Rn → R
For f : Rn → R
T(h) = Σ_{|α|≤k} ( D^α f(x) / α! ) · h^α
Where the summations can alternatively be written as matrix products. For example, consider T (h)
of order 2:
T_2(h) = Σ_{|α|≤2} ( D^α f(x) / α! ) · h^α

T_2(h) = f(x) + Σ_{|α|=1} ( D^α f(x) / α! ) · h^α + Σ_{|α|=2} ( D^α f(x) / α! ) · h^α
Only one possibility for |α| = 1: α is an n-tuple containing one entry with value 1 and all other entries
are 0. In such case, α! = 1
T_2(h) = f(x) + Σ_{|α|=1} D^α f(x) · h^α + Σ_{|α|=2} ( D^α f(x) / α! ) · h^α

Or, as f : R^n → R,

T_2(h) = f(x) + ∇f · h + Σ_{|α|=2} ( D^α f(x) / α! ) · h^α
For |α| = 2 there are two cases: (1) α has a single entry equal to 2, so α! = 2; (2) α has two entries equal to 1, so α! = 1. Summing Σ_{|α|=2} ( D^α f(x) / α! ) · h^α requires summing over both cases:

Σ_{|α|=2} ( D^α f(x) / α! ) · h^α = (1/2) Σ_{i=1}^{n} D_i D_i f(x) · h_i² + (1/2) Σ_{i≠j} D_j D_i f(x) · h_i · h_j

The 1/2 factor on the i ≠ j terms appears because 'commutativity' (i.e. D_j D_i = D_i D_j) means that Σ_{i≠j} double counts each pair. Combining both parts:

Σ_{|α|=2} ( D^α f(x) / α! ) · h^α = (1/2) Σ_{i,j=1}^{n} D_i D_j f(x) · h_i · h_j
Hence,
T_2(h) = f(x) + ∇f · h + (1/2) Σ_{i,j=1}^{n} D_i D_j f(x) · h_i · h_j
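This formula can be checked on a quadratic, where T_2 must reproduce the function exactly (a sketch; `taylor2` is an illustrative helper, with gradient and Hessian supplied by hand):

```python
def taylor2(f0, grad, H, h):
    """T2(h) = f(x) + grad . h + (1/2) sum_{i,j} H[i][j] h_i h_j."""
    n = len(h)
    lin = sum(g * hi for g, hi in zip(grad, h))
    quad = sum(H[i][j] * h[i] * h[j] for i in range(n) for j in range(n))
    return f0 + lin + 0.5 * quad

# f(x, y) = x^2 + 3xy at (1, 1): f = 4, grad = (5, 3), Hessian = [[2, 3], [3, 0]].
# f is quadratic, so T2 reproduces it exactly: f(1.5, 0.8) = 5.85.
t = taylor2(4.0, (5.0, 3.0), [[2.0, 3.0], [3.0, 0.0]], (0.5, -0.2))
```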
4.3 R2 → R
For some z = f(x, y), given the point P = (a, b, f(a, b)).
Before everything, recall:

T(x + δ) = Σ_{|α|≤k} ( D^α f(x) / α! ) · δ^α

Let h = x + δ ⇔ δ = h − x:

T(h) = Σ_{|α|≤k} ( D^α f(x) / α! ) · (h − x)^α

Hence:

z ≈ T_1(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b)

z ≈ T_2(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b) + (1/2) (x − a, y − b) H_f(a, b) (x − a, y − b)^T
To calculate T_2, the Hessian H_f must be given: (H_f)_{ij} = D_i D_j f, which is symmetric by 'commutativity' of D.
5 Optimization on f : Rn → R
5.1 Definition of local extrema
Consider f : Rn → R
Define Bϵ (x) = {a ∈ Rn |D(x, a) < ϵ}
The second derivative test classifies a critical point X by the definiteness of the Hessian H_f(X); in particular, H_f(X) indefinite corresponds to a saddle point. For a symmetric matrix M:
M is positive definite iff:
x^T M x > 0 ∀x ∈ R^n, x ≠ 0
M is negative definite iff:
x^T M x < 0 ∀x ∈ R^n, x ≠ 0
M is indefinite iff:
x^T M x takes both positive and negative values over x ∈ R^n
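For 2 × 2 symmetric matrices the definiteness can be read off the determinant and the top-left entry (Sylvester's criterion; `classify_2x2` is an illustrative helper, and the semi-definite boundary case is not subdivided):

```python
def classify_2x2(M):
    """Classify a symmetric 2x2 matrix by Sylvester's criterion:
    det > 0 with M[0][0] > 0 -> positive definite,
    det > 0 with M[0][0] < 0 -> negative definite,
    det < 0 -> indefinite (x^T M x takes both signs)."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    det = a * d - b * b
    if det > 0:
        return "positive definite" if a > 0 else "negative definite"
    if det < 0:
        return "indefinite"
    return "semi-definite"

# classify_2x2([[2, 0], [0, 3]]) -> "positive definite"
# classify_2x2([[1, 2], [2, 1]]) -> "indefinite" (det = -3)
```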
f : Rn → R
gi : Rn → R, i ∈ [1, m]
Optimize f with respect to constraints gi = 0 ∀i ∈ [1, m]
g_i = 0 forms a contour or level curve. From the proof below, it is evident that ∇g_i(x) ⊥ {g_i = 0} at x:
For any curve r(t) lying in the level set:

g_i(r(t)) = 0

(d/dt) g_i(r(t)) = 0

∇g_i · r′(t) = 0    (1)

Since r′(t) is tangent to the level set, ∇g_i is perpendicular to it. QED
However, it is intuitive (and a fact) that the minima and maxima of f(r(t)) occur where f(r(t)) is flat, i.e. where (d/dt) f(r(t)) = ∇f · r′(t) = 0.
gi = 0 ∀i ∈ [1, m]
7.2 Order:
1st order: highest derivative appearing is the first derivative
2nd order: highest derivative appearing is the second derivative
...
Nth order: highest derivative appearing is the Nth derivative
7.3 Linearity:
Linear:

Σ_{i=1}^{n} a_i(x) · d^i f(x)/dx^i = b(x)

That is, it can be written as:

( Σ_{i=1}^{n} a_i(x) · d^i/dx^i ) f(x) = b(x)
Non-Linear:
The function cannot be written as some linear operator acting on f(x). Examples of non-linear terms:

f(x)^n

( d^i f(x)/dx^i ) · ( d^j f(x)/dx^j )

g(f(x))

Where g is not linear.
d^{i+1} f/dx^{i+1} = F(x, df/dx, d²f/dx², d³f/dx³, ..., d^i f/dx^i)
7.5 Autonomous
The derivative is independent of the independent variable: it depends only on the function itself, e.g. y′ = f(y).
7.6 Homogeneous
Lack of any free terms: every term contains the unknown function or one of its derivatives.
8 First-Order Differential Equations
General first order ODE is:
y ′ = f (y, x)
8.1 Linear
General solution to:
y ′ + Q(x)y = R(x)
Multiply both sides by the integrating factor e^{∫Q dx}:

e^{∫Q dx} y′ + Q(x) e^{∫Q dx} y = e^{∫Q dx} R(x)

By the product rule in reverse (the 'inverse chain rule'), the left-hand side is a single derivative:

(d/dx) [ e^{∫Q dx} y ] = e^{∫Q dx} R(x)

e^{∫Q dx} y = ∫ e^{∫Q dx} R(x) dx

y = (1 / e^{∫Q dx}) ∫ e^{∫Q dx} R(x) dx
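As a concrete check of the formula, take Q = 1 and R = x, so the integrating factor is e^x (a sketch; the function names are illustrative and the derivative is checked numerically):

```python
import math

# y' + Q(x) y = R(x) with Q = 1, R = x. The integrating factor is e^x and
# the formula gives y(x) = x - 1 + C e^{-x}.
def y(x, C=2.0):
    return x - 1 + C * math.exp(-x)

def residual(x, C=2.0, h=1e-6):
    """Check y' + y - x = 0 using a central-difference derivative."""
    yprime = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return yprime + y(x, C) - x

# residual(x) is ~0 for any x and C, confirming the formula
```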
8.2 Separable
If

y′ = f(y, x)

can be written as

M(x) + N(y) y′ = 0

M(x) + N(y) (dy/dx) = 0

M(x) dx + N(y) dy = 0

then the solution can be implicitly written as

∫ M(x) dx + ∫ N(y) dy = C
8.3 Existence and Uniqueness Theorems
8.4 Exact Differential Equations
Consider a first order differential equation of the form:

M(x, y) + N(x, y) y′ = 0

If, in some region R,

∂M(x, y)/∂y = ∂N(x, y)/∂x
Then it is exact in R. To find y, find f (x, y) such that:
∂f(x, y)/∂x = M(x, y)

∂f(x, y)/∂y = N(x, y)
When integrating M(x, y) w.r.t. x, the constant of integration +h becomes a function +h(y). Contours of f(x, y) are the implicit expression of y. That is:

f(x, y) = C

Where C ∈ R is determined by initial values.
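The exactness condition can be verified numerically for a concrete pair (a sketch; `is_exact` is an illustrative helper using finite differences):

```python
def is_exact(M, N, x, y, h=1e-6, tol=1e-5):
    """Exactness test for M(x, y) + N(x, y) y' = 0: dM/dy == dN/dx."""
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    return abs(My - Nx) < tol

M = lambda x, y: 2 * x * y   # f_x for f(x, y) = x^2 y
N = lambda x, y: x * x       # f_y
# is_exact(M, N, 1.3, 0.7) -> True; the solution curves are x^2 y = C
```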
If instead

∂M(x, y)/∂y ≠ ∂N(x, y)/∂x

the goal is to find an integrating factor µ(x, y) such that µM + µN y′ = 0 is exact, i.e. ∂(µM)/∂y = ∂(µN)/∂x:

µ_y(x, y) M(x, y) − µ_x(x, y) N(x, y) + µ(x, y) ( M_y(x, y) − N_x(x, y) ) = 0

This only simplifies when µ depends on a single variable:

µ = µ(x):  (dµ(x)/dx) N(x, y) = µ(x) ( M_y(x, y) − N_x(x, y) )

µ = µ(y):  (dµ(y)/dy) M(x, y) = µ(y) ( N_x(x, y) − M_y(x, y) )
9 Second-Order Differential Equations
General second order ODE is:

y′′ = f(y, y′, x)

For the constant-coefficient homogeneous case a y′′ + b y′ + c y = 0, assume y = e^{λx} to get the characteristic equation:

aλ² + bλ + c = 0
9.4 Wronskian
The Wronskian W(y1, y2)(x) is defined as

                    [ y1(x)   y2(x)  ]
W(y1, y2)(x) = det  [ y1′(x)  y2′(x) ]  = y1(x) y2′(x) − y2(x) y1′(x)
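The 2 × 2 determinant unrolls directly (a sketch; `wronskian` is an illustrative helper, with the derivatives passed in by hand):

```python
import math

def wronskian(y1, y1p, y2, y2p, x):
    """W(y1, y2)(x) = y1(x) y2'(x) - y2(x) y1'(x)."""
    return y1(x) * y2p(x) - y2(x) * y1p(x)

# y1 = e^x, y2 = e^{2x}: W = e^x * 2 e^{2x} - e^{2x} * e^x = e^{3x}
w = wronskian(math.exp, math.exp,
              lambda x: math.exp(2 * x), lambda x: 2 * math.exp(2 * x), 0.0)
# at x = 0: W = 1, nonzero, so e^x and e^{2x} are linearly independent
```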
If
a y′′ + b y′ + c y = A e^{Bx}
Assume y = λ e^{Bx}
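Substituting the ansatz gives λ in closed form (a sketch; `particular_coeff` is an illustrative helper, valid only when B is not a root of the characteristic equation):

```python
def particular_coeff(a, b, c, A, B):
    """For a y'' + b y' + c y = A e^{Bx}, the ansatz y = lam e^{Bx} gives
    lam (a B^2 + b B + c) = A, provided B is not a characteristic root."""
    return A / (a * B * B + b * B + c)

# y'' - y = 6 e^{2x}: lam = 6 / (4 - 1) = 2, so y_p = 2 e^{2x}
lam = particular_coeff(1, 0, -1, 6, 2)
```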
If
a y′′ + b y′ + c y = A cos(Bx)
or
a y′′ + b y′ + c y = A sin(Bx)
Assume y = λ_1 sin(Bx) + λ_2 cos(Bx)
If
a y′′ + b y′ + c y = A cos(Bx) e^{Cx}
or
a y′′ + b y′ + c y = A sin(Bx) e^{Cx}
Assume y = λ_1 e^{Cx} sin(Bx) + λ_2 e^{Cx} cos(Bx). In general:
The general solution of the above equation can be written as the homogeneous solution plus a particular solution, y = y_h + y_p.
For a power series f(x) = Σ_n a_n (x − x_0)^n, the Taylor coefficients are:

a_n = f^{(n)}(x_0) / n!
Radius of convergence:

lim_{n→∞} | a_{n+1} (x − x_0)^{n+1} / ( a_n (x − x_0)^n ) | = c |x − x_0|

Where, by the ratio test, the series converges when:

|x − x_0| < 1/c
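For a concrete series the ratio limit can be estimated by taking n large (a sketch; `ratio_limit` is an illustrative helper and n = 60 is an arbitrary cutoff):

```python
def ratio_limit(a, n=60):
    """Estimate c = lim |a_{n+1} / a_n|; the radius of convergence is 1/c."""
    return abs(a(n + 1) / a(n))

a = lambda n: 1 / 2 ** n   # the series sum_n x^n / 2^n
c = ratio_limit(a)         # c = 0.5, so the series converges for |x| < 2
```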
Consider two series given by:

s_1 = Σ_{n=1}^{∞} b_n (x − x_0)^n
Consider that:

f(x) ± g(x)

f(x) g(x)

f(x) / g(x)
Differentiation rules:
y = Σ_{n=1}^{∞} a_n (x − x_0)^n

y′ = (d/dx) Σ_{n=1}^{∞} a_n (x − x_0)^n = Σ_{n=1}^{∞} n a_n (x − x_0)^{n−1}
The equation:

y′′ + p(x) y′ + q(x) y = 0

The solution takes the form:

y = a_0 y_1 + a_1 y_2

Where a_0 and a_1 are arbitrary and depend on the initial conditions. y_1 and y_2 are power series. Provided p and q are analytic at x = x_0, the solution is analytic there, with R(y) ≥ min(R(p), R(q)).
To solve for y, write y and its derivatives as Taylor series, then solve for the coefficients. Change of index and the above derived rules for operations on power series will be useful. Find a relation (explicit or recursive) for the coefficients of y_1 in terms of a_0, and do the same for y_2 in terms of a_1.
y′′ + p(x) y′ + q(x) y = r(x)

Solve the homogeneous equation (r = 0) using series solutions to find:

y_h = a_0 y_1 + a_1 y_2

The general solution of the non-homogeneous equation will be of the form:

y = a_0 y_1 + a_1 y_2 + Y
Let Y be of the form:
Y = u1 (x)y1 + u2 (x)y2
It can be shown that:

u_1(x) = − ∫ y_2(x) r(x) / W(y_1, y_2)(x) dx

u_2(x) = ∫ y_1(x) r(x) / W(y_1, y_2)(x) dx

Hence, the solution can be written as:

y = a_0 y_1 + a_1 y_2 − y_1 ∫ y_2(x) r(x) / W(y_1, y_2)(x) dx + y_2 ∫ y_1(x) r(x) / W(y_1, y_2)(x) dx
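Working one example through the formulas and checking the result numerically (a sketch; the equation, solutions, and helper names are chosen for illustration):

```python
import math

# y'' - y = e^{2x} (p = 0, q = -1, r = e^{2x}); y1 = e^x, y2 = e^{-x}, W = -2.
# u1 = -int(y2 r / W) dx = (1/2) e^x, u2 = int(y1 r / W) dx = -(1/6) e^{3x},
# so Y = u1 y1 + u2 y2 = e^{2x} / 3.
def Y(x):
    return math.exp(2 * x) / 3

def residual(x, h=1e-4):
    """Check Y'' - Y - e^{2x} = 0 with a central-difference second derivative."""
    ypp = (Y(x + h) - 2 * Y(x) + Y(x - h)) / (h * h)
    return ypp - Y(x) - math.exp(2 * x)

# residual(x) ~ 0 for all x
```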