
Notes

Abhinav Pradeep
October 29, 2023

1 Multivariate Limits
1.1 Continuity in calculus of one variable
To prove f is continuous at c, show that

lim_{x→c⁺} f(x) = lim_{x→c⁻} f(x) = f(c)

For the right-hand limit (x → c⁺) with value L,

∀ϵ > 0, ∃δ > 0 s.t. for 0 < x − c < δ,

|f(x) − L| < ϵ

Analogously, for the left-hand limit (x → c⁻),

∀ϵ > 0, ∃δ > 0 s.t. for −δ < x − c < 0,

|f(x) − L| < ϵ

1.2 Limits in calculus of one variable


f :R→R
To prove that lim_{x→c} f(x) = L, show that

∀ϵ > 0, ∃δ > 0 s.t. for 0 < |x − c| < δ,

|f(x) − L| < ϵ

1.3 Euclidean distance in R2


For P, Q ∈ R², where P = (P₁, P₂), Q = (Q₁, Q₂),

D(P, Q) = √((P₁ − Q₁)² + (P₂ − Q₂)²)
1.4 Euclidean distance in Rn
For P, Q ∈ Rⁿ, where P = (P₁, P₂, P₃, ..., Pₙ), Q = (Q₁, Q₂, Q₃, ..., Qₙ),

D(P, Q) = √( Σ_{i=1}^{n} (Pᵢ − Qᵢ)² )

1.5 Convergence of a sequence in Rⁿ


NOTE: SEQUENCES ARE DENOTED WITH SUPERSCRIPT

Consider a sequence (xⁱ)_{i∈ℕ} in Rⁿ. lim_{i→∞} xⁱ = X ∈ Rⁿ if

∀ϵ > 0, ∃I ∈ ℕ s.t. ∀i ≥ I,

D(xⁱ, X) < ϵ

1.6 Continuity of multivariate functions


f : Rn → Rm
To prove f is continuous at X, show that for all sequences (xⁱ)_{i∈ℕ} such that

lim_{i→∞} xⁱ = X

we have that

lim_{i→∞} f(xⁱ) = f(X)

Similar process for

f : D → Rm
Where

D ⊆ Rn
Considering the set of all sequences that converge to X is equivalent to considering the set of all
directions in Rⁿ (or in D) from which X can be approached. Hence, where the single variable case
considers approaches from X⁺ and X⁻, the multivariate case considers approaches from all directions
possible in abstract Rⁿ.
1.7 Limits of multivariate functions
f : Rn → Rm
To prove

lim_{x→C} f(x) = L

show that ∀ϵ > 0, ∃δ > 0 s.t. for 0 < D(x, C) < δ,

D(f(x), L) < ϵ

1.8 Limit laws


Let lim_{(x,y)→(a,b)} f(x, y) = L and lim_{(x,y)→(a,b)} g(x, y) = M

lim_{(x,y)→(a,b)} (f ± g) = L ± M

lim_{(x,y)→(a,b)} (f × g) = L × M

lim_{(x,y)→(a,b)} (f / g) = L / M, provided M ≠ 0

lim_{(x,y)→(a,b)} kf = kL

Suppose that lim_{(x,y)→(a,b)} f(x, y) = L AND g is continuous at L. Then lim_{(x,y)→(a,b)} g(f(x, y)) = g(L)

1.9 Finding limits of multivariate functions


Factoring: Factor the numerator and denominator and cancel the factor that makes the denominator vanish.

Radical Conjugate: Multiply by the radical conjugate to remove the square root in the denominator.

Squeeze Theorem: If

f ≤ g ≤ h

lim_{(x,y)→(a,b)} f = lim_{(x,y)→(a,b)} h = L

then, by the squeeze theorem, lim_{(x,y)→(a,b)} g = L

Change of variables: Set two new variables u and v such that the limit can be split up into two separate
single variable limits.
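As a worked example of the squeeze theorem (my own illustration, not from a set text), consider lim_{(x,y)→(0,0)} x²y/(x² + y²). Since x² ≤ x² + y²,

0 ≤ |x²y/(x² + y²)| ≤ |y|

and |y| → 0 as (x, y) → (0, 0), so by the squeeze theorem the limit is 0.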
2 Multivariate Derivatives
NOTE: |·| : Rⁿ → R where for V ∈ Rⁿ, |V| = D(0, V)

2.1 Differentiability
f : Rⁿ → Rᵐ is differentiable at x ∈ Rⁿ if there exists a linear map T : Rⁿ → Rᵐ (an m × n matrix) such that:

lim_{h→0} |f(x + h) − f(x) − Th| / |h| = 0

Where h ∈ Rⁿ

2.2 Directional Derivatives


For f : Rⁿ → R and f differentiable on Rⁿ, the derivative in the direction of a unit vector V ∈ Rⁿ is defined as:

D_V f(x) = lim_{h→0} (f(x + hV) − f(x)) / h

Where h ∈ R

When extended to f : Rⁿ → Rᵐ, where f can be written as f = (f₁, f₂, f₃, ..., fₘ), it can be proved
that D_V f exists iff D_V fₖ exists ∀k ∈ [1, m]. In this case,

D_V f = (D_V f₁, D_V f₂, D_V f₃, ..., D_V fₘ)

2.3 Partial Derivatives


Partial derivatives are a special case of D_V f where V is a standard basis vector e₁, e₂, ..., eₙ.
In this case it is written as:

Dᵢf ⇔ ∂f/∂xᵢ
2.3.1 Clairaut’s Theorem
If f : R² → R has all partial derivatives up to second order continuous at (a, b), then:

∂²f/∂x∂y = ∂²f/∂y∂x

2.4 Total Derivative


For f : Rⁿ → Rᵐ the total derivative can be defined:

Df = [ (D₁f)ᵀ  (D₂f)ᵀ  (D₃f)ᵀ  ...  (Dₙf)ᵀ ]

Which forms:

Df = [ D₁f₁  D₂f₁  D₃f₁  ...  Dₙf₁ ]
     [ D₁f₂  D₂f₂  D₃f₂  ...  Dₙf₂ ]
     [   ⋮      ⋮      ⋮     ⋱    ⋮  ]
     [ D₁fₘ  D₂fₘ  D₃fₘ  ...  Dₙfₘ ]

In such case Df is also called the Jacobian matrix Jf(x⃗) where x⃗ ∈ Rⁿ:

Jf(x⃗) = [ D₁f₁(x⃗)  D₂f₁(x⃗)  D₃f₁(x⃗)  ...  Dₙf₁(x⃗) ]
        [ D₁f₂(x⃗)  D₂f₂(x⃗)  D₃f₂(x⃗)  ...  Dₙf₂(x⃗) ]
        [    ⋮         ⋮         ⋮       ⋱      ⋮     ]
        [ D₁fₘ(x⃗)  D₂fₘ(x⃗)  D₃fₘ(x⃗)  ...  Dₙfₘ(x⃗) ]

For f : Rⁿ → R the total derivative, which is called the gradient ∇, can be defined as:

∇f = (D₁f, D₂f, D₃f, ..., Dₙf)

The gradient ∇f points in the direction of maximum increase of the function f.

The magnitude |∇f| gives the slope (rate of increase) along this maximal direction.
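A minimal sympy sketch (my own example, not from these notes) computing the Jacobian and gradient as defined above:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x**2 * y, sp.sin(x) + y])  # f : R^2 -> R^2
g = x**2 + 3*x*y                           # g : R^2 -> R

print(f.jacobian([x, y]))               # Matrix([[2*x*y, x**2], [cos(x), 1]])
print(sp.Matrix([g]).jacobian([x, y]))  # gradient as a row: [2*x + 3*y, 3*x]
```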

2.5 Derivative Laws


2.5.1 Linearity
Consider f : Rn → Rm , g : Rn → Rm which are totally differentiable at x ∈ Rn

f + g : Rn → Rm
Is totally differentiable at x and

D(f (x) + g(x)) = Df (x) + Dg(x)


Alternatively, in Jacobian notation,

Jf +g (x) = Jf (x) + Jg (x)


λf : Rn → Rm , λ ∈ R
Is totally differentiable at x and

D(λf (x)) = λDf (x)


Alternatively, in Jacobian notation,

Jλf (x) = λJf (x)

2.5.2 Chain rule


For f : Rⁿ → R, uᵢ : Rᵐ → R, and x one of the m coordinates,

∂/∂x f(u₁(x), u₂(x), u₃(x), ..., uₙ(x)) = Σ_{i=1}^{n} ∂uᵢ(x)/∂x · ∂f/∂uᵢ (u₁, u₂, u₃, ..., uₙ)
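A small sympy check of the chain rule above (my own example, with f(u₁, u₂) = u₁u₂, u₁ = x², u₂ = sin x):

```python
import sympy as sp

x, u1, u2 = sp.symbols('x u1 u2')
f = u1 * u2                  # f(u1, u2)
U1, U2 = x**2, sp.sin(x)     # u1(x), u2(x)

lhs = sp.diff(f.subs({u1: U1, u2: U2}), x)
rhs = (sp.diff(U1, x) * sp.diff(f, u1).subs({u1: U1, u2: U2})
       + sp.diff(U2, x) * sp.diff(f, u2).subs({u1: U1, u2: U2}))
print(sp.simplify(lhs - rhs))  # 0: both sides are 2*x*sin(x) + x**2*cos(x)
```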

3 Tangent Plane
Consider f : R2 → R
Let z = f (x, y)

The plane tangent to z = f(x, y) at some point P⃗ = (a, b, f(a, b)) is of the form:

⃗n · ⃗r = ⃗n · P⃗

Where ⃗n is the vector normal to z = f(x, y) at P⃗. This would be:

⃗n = (1, 0, Dₓf) × (0, 1, D_yf)

Where (1, 0, Dₓf) denotes how much f(x, y) changes in the x direction and (0, 1, D_yf) denotes how
much f(x, y) changes in the y direction.

Alternatively, (1, 0, Dₓf) and (0, 1, D_yf) can be obtained by considering F : R² → R³ as:

F(x, y) = (x, y, f(x, y))ᵀ

This communicates the same idea as the original f(x, y). In this case,

DₓF = (1, 0, Dₓf)
D_yF = (0, 1, D_yf)

Either way,

⃗n = det [ i  j  k    ]
         [ 1  0  Dₓf  ]
         [ 0  1  D_yf ]

⃗n = −(Dₓf)i − (D_yf)j + k
⃗n = (−Dₓf, −D_yf, 1)

Let ⃗r = (x, y, z)

⃗n · ⃗r = ⃗n · P⃗

(−Dₓf, −D_yf, 1) · (x, y, z) = (−Dₓf, −D_yf, 1) · (a, b, f(a, b))

−(Dₓf)x − (D_yf)y + z = −(Dₓf)a − (D_yf)b + f(a, b)

z = (Dₓf)x − (Dₓf)a + (D_yf)y − (D_yf)b + f(a, b)

z = (Dₓf)(x − a) + (D_yf)(y − b) + f(a, b)
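A hedged sympy sketch (my own example) of the final tangent plane formula, for f(x, y) = x² + y² at (a, b) = (1, 1):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2
a, b = 1, 1

fx = sp.diff(f, x).subs({x: a, y: b})  # D_x f at (a, b) -> 2
fy = sp.diff(f, y).subs({x: a, y: b})  # D_y f at (a, b) -> 2
z_plane = fx*(x - a) + fy*(y - b) + f.subs({x: a, y: b})
print(sp.expand(z_plane))              # 2*x + 2*y - 2
```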

4 Optimization on f : Rn → R
4.1 Definition of local extrema
Consider f : Rn → R
Define Bϵ (x) = {a ∈ Rn |D(x, a) < ϵ}

If f has a local maximum at X, then ∃ϵ > 0 such that:

f (X) ≥ f (x) ∀x ∈ Bϵ (X)


If f has a local minimum at X, then ∃ϵ > 0 such that:

f (X) ≤ f (x) ∀x ∈ Bϵ (X)


For strict (isolated) extrema, replace ≥ and ≤ with strict inequalities, for x ≠ X.

4.2 Definition of local extrema in terms of ∇f


Consider f : Rⁿ → R

If f has a local extremum at X,

∇f(X) = (0, 0, ..., 0)ᵀ

Then, at such a critical point X:

If Hf(X) is negative definite, f has a local maximum at X.

If Hf(X) is positive definite, f has a local minimum at X.

If Hf(X) is indefinite, f has a saddle point at X.

4.3 Definite and indefinite matrices


Definition:
M is positive definite iff:

xᵀMx > 0 ∀x ∈ Rⁿ \ {0}

M is negative definite iff:

xᵀMx < 0 ∀x ∈ Rⁿ \ {0}

M is indefinite iff:

xᵀMx > 0 for some x and yᵀMy < 0 for some y

4.3.1 Regarding eigenvalues


M is positive definite if and only if all of its eigenvalues are positive.
M is negative definite if and only if all of its eigenvalues are negative
M is indefinite if and only if it has both positive and negative eigenvalues.

4.4 Hessian determinant test


For a Hessian of the form:

H = [ f_xx  f_xy ]
    [ f_yx  f_yy ]

let D = det H, evaluated at the critical point.

If D = 0, the test is inconclusive.

D < 0 (-ve) implies saddle point.

D > 0 (+ve) and f_xx > 0 (+ve) implies minimum.

D > 0 (+ve) and f_xx < 0 (-ve) implies maximum.
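A sketch of the test in sympy (my own example, f = x³ − 3x + y², critical points (±1, 0)):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)  # [{x: -1, y: 0}, {x: 1, y: 0}]

H = sp.hessian(f, (x, y))
for pt in crit:
    D = H.det().subs(pt)   # Hessian determinant at the critical point
    fxx = H[0, 0].subs(pt)
    # D == 0 would be inconclusive; not handled in this sketch.
    kind = 'saddle' if D < 0 else ('minimum' if fxx > 0 else 'maximum')
    print(pt, kind)        # {x: -1, y: 0} saddle, {x: 1, y: 0} minimum
```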

4.5 Constrained optimization and Lagrange multipliers


The problem:
Given:

f : Rn → R

gi : Rn → R, i ∈ [1, m]
Optimize f with respect to constraints gi = 0 ∀i ∈ [1, m]

gᵢ = 0 forms a contour or level curve. From the proof below, it is evident that ∇gᵢ(x) is perpendicular to the level set gᵢ = 0 at x:

Consider r(t) which satisfies

gᵢ(r(t)) = 0

d/dt gᵢ(r(t)) = 0

∇gᵢ · r′(t) = 0 — (1)

QED

Next consider f(r(t)). f(r(t)) need not be a level curve.

However, it is intuitive and a fact that the minimums and maximums of f(r(t)) will occur
where f(r(t)) is flat.

That is, where

d/dt f(r(t)) = 0

∇f · r′(t) = 0 — (2)

Putting (1) and (2) together allows for a formulation that is independent of parameterization.
Maximums and minimums occur when:

∇f = Σ_{i=1}^{m} λᵢ ∇gᵢ

With the added initial constraints of:

gᵢ = 0 ∀i ∈ [1, m]
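A minimal sympy sketch (my own example): extremize f = xy subject to the single constraint g = x + y − 4 = 0 via ∇f = λ∇g:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x*y
g = x + y - 4

# grad f = lambda * grad g, plus the constraint g = 0
eqs = [sp.diff(f, v) - lam*sp.diff(g, v) for v in (x, y)] + [g]
print(sp.solve(eqs, [x, y, lam], dict=True))
# [{x: 2, y: 2, lambda: 2}] -> f = 4 is the constrained maximum
```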
5 Arc Length
Arc length is the limit of sums of straight-line segment lengths:

s = lim_{n→∞} Σ_{i=1}^{n} √( (∆x/∆t)² + (∆y/∆t)² + (∆z/∆t)² ) ∆t

Therefore, for some curve r in R³ that is parametrized as r(t) : [a, b] → R³:

s = ∫ₐᵇ |r′(t)| dt

6 Line integrals
Given f : R² → R. For some curve C ⊂ R², that can be parametrized as r : [a, b] → C,

I = ∫_C f(r) ds

By the arc length formula:

ds = |r′(t)| dt

Hence,

∫_C f(r) ds = ∫ₐᵇ f(r(t)) |r′(t)| dt
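A numeric sketch of this formula (my own example): f(x, y) = xy over the quarter circle r(t) = (cos t, sin t), t ∈ [0, π/2], where |r′(t)| = 1 and the exact value is 1/2:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x, y: x * y                 # scalar field
r = lambda t: (np.cos(t), np.sin(t))   # quarter circle
r_prime_norm = lambda t: 1.0           # |r'(t)| = |(-sin t, cos t)| = 1

val, _ = quad(lambda t: f(*r(t)) * r_prime_norm(t), 0.0, np.pi/2)
print(val)  # ~0.5, matching the exact value of 1/2
```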

7 Polar Co-Ordinates
Go from (r, θ) to (x, y):

x = r cos(θ)
y = r sin(θ)

Go from (x, y) to (r, θ):

r = √(x² + y²)

θ = arctan(y/x)

8 Vector fields
Vector fields are defined as F : Rⁿ → Rⁿ. In this case, n is usually 3, i.e. F : R³ → R³.

8.1 Line integral over a vector field


A line integral over a field measures the contribution of the field along the path.

Hence, add these contributions up:

∫_C F · T ds

If C is parametrized by r(t) : [a, b] → R³, the tangent vector is

T = dr/ds

∫_C F · T ds ⇐⇒ ∫_C F · dr

= ∫ₐᵇ F(r(t)) · r′(t) dt
a
8.2 Conservative vector fields
A conservative vector field F : Rⁿ → Rⁿ is such that:

F = ∇f

Where f : Rⁿ → R. More explicitly,

F = (∂f/∂x₁, ∂f/∂x₂, ∂f/∂x₃, ..., ∂f/∂xₙ)

This property leads to the fact that the line integral is path independent:

∫ₐᵇ F(r(t)) · r′(t) dt

In R², writing

r(t) = (x(t), y(t))

f(r(t)) = f(x(t), y(t))

F(r(t)) = ∇f(r(t))

F(r(t)) = (∂f/∂x, ∂f/∂y)

Hence,

∫ₐᵇ (∂f/∂x, ∂f/∂y) · (dx/dt, dy/dt) dt

∫ₐᵇ (∂f/∂x · dx/dt + ∂f/∂y · dy/dt) dt

By the chain rule in reverse,

∫ₐᵇ df/dt dt

∫_{r(a)}^{r(b)} df

= f(r(b)) − f(r(a))
8.2.1 Easy way to check for 2D conservative vector fields
Assume F is conservative. Hence,

F = (∂f/∂x, ∂f/∂y)

By Clairaut's theorem,

∂²f/∂x∂y = ∂²f/∂y∂x

Hence,

∂F₂/∂x = ∂F₁/∂y
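A sympy sketch of this check (my own example, F = (2xy, x² + 1), which has potential f = x²y + y):

```python
import sympy as sp

x, y = sp.symbols('x y')
F1, F2 = 2*x*y, x**2 + 1

print(sp.diff(F2, x) == sp.diff(F1, y))  # True: both equal 2x, so conservative

# Recover a potential f with F = grad f: integrate F1 w.r.t. x,
# then fix the "constant" h(y) using F2.
f = sp.integrate(F1, x)                   # x**2*y (+ h(y) still unknown)
h = sp.integrate(F2 - sp.diff(f, y), y)   # integrate (x**2 + 1 - x**2) -> y
print(f + h)                              # x**2*y + y
```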

9 Physics stuff
9.1 Motion in conservative fields
9.1.1 Conservation of energy derivation
Given some conservative field,

F = ∇f

V = −f

Consider an object of mass m in a position-dependent force field. The object can be characterized as having
an initial position and velocity. There are no external forces.
By Newton's second law,

F(r) = m r̈

∇(−V(r)) = m r̈

−∇V(r) = m r̈

−∇V(r) · ṙ = m r̈ · ṙ


   
−(∂V(r)/∂x, ∂V(r)/∂y, ∂V(r)/∂z) · (dx/dt, dy/dt, dz/dt) = m r̈ · ṙ

−(∂V(r)/∂x · dx/dt + ∂V(r)/∂y · dy/dt + ∂V(r)/∂z · dz/dt) = m r̈ · ṙ

−dV(r)/dt = m r̈ · ṙ

−∫ (dV(r)/dt) dt = ∫ m r̈ · ṙ dt

−V(r) + C = ½ m|ṙ|²

C = ½ m|ṙ|² + V(r)
Here the constant of integration is the energy E. It comes from an equation of position and velocity.
Hence, it depends on initial position and velocity.
E = ½ m|ṙ|² + V(r)

The sum of potential and kinetic energy is time invariant. That is:

d/dt E = d/dt (½ m|ṙ|² + V(r))

0 = d/dt (½ m|ṙ|² + V(r))

9.1.2 Central forces

F = |F| r̂

F = (|F| / |r|) r

9.2 Dimensional Analysis


Given:

Dependent variable: V_d

Set of n − 1 independent variables: {V₁, V₂, V₃, ..., V_{n−1}}

Fundamental units are m, l, and t.

Steps:

1. A relation between all the variables can be written as:

f(V_d, V₁, V₂, V₃, ..., V_{n−1}) = 0

2. Hence, f has n parameters. There are generally 3 fundamental units. Therefore, there exist n − 3
dimensionless groups Π.

3. Pick 3 repeating variables whose exponents can yield m⁰l⁰t⁰. Repeating variables are those that
cannot form m⁰l⁰t⁰ without multiplying in any other terms. Try not to pick V_d. Assume the repeating
variables are V₁, V₂, V₃.

4. Hence, the problem becomes:

Π₁ = V₁^{a₁} V₂^{b₁} V₃^{c₁} V_d^{d₁}
Π₂ = V₁^{a₂} V₂^{b₂} V₃^{c₂} V₄^{d₂}
Π₃ = V₁^{a₃} V₂^{b₃} V₃^{c₃} V₅^{d₃}
...

As the Π are unitless,

m⁰l⁰t⁰ = units(V₁)^{a₁} × units(V₂)^{b₁} × units(V₃)^{c₁} × units(V_d)^{d₁}
m⁰l⁰t⁰ = units(V₁)^{a₂} × units(V₂)^{b₂} × units(V₃)^{c₂} × units(V₄)^{d₂}
m⁰l⁰t⁰ = units(V₁)^{a₃} × units(V₂)^{b₃} × units(V₃)^{c₃} × units(V₅)^{d₃}
...

Hence solve for the Πs.

5. Solving for this guarantees the existence of Φᵢ such that:

Φ₁(Π₁, Π₂, Π₃, ..., Π_{n−3}) = 0
Π₁ = Φ₂(Π₂, Π₃, ..., Π_{n−3})
Π₂ = Φ₃(Π₁, Π₃, ..., Π_{n−3})
...

In this case, if a form such that V_d = f(V₁, V₂, V₃, ..., V_{n−1}) was asked for, then pick and manipulate:

Π₁ = Φ₂(Π₂, Π₃, ..., Π_{n−3})
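A worked sketch (my own example, not from the notes): the pendulum period T with variables l, g, m has one dimensionless group Π = lᵃ gᵇ mᶜ T. Solving the exponent equations gives Π = T√(g/l):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# Unit exponent rows are (m, l, t):
units = {'l': (0, 1, 0), 'g': (0, 1, -2), 'm': (1, 0, 0), 'T': (0, 0, 1)}

# Require m^0 l^0 t^0 for Pi = l^a g^b m^c T^1:
eqs = [a*units['l'][i] + b*units['g'][i] + c*units['m'][i] + units['T'][i]
       for i in range(3)]
print(sp.solve(eqs, [a, b, c]))  # {a: -1/2, b: 1/2, c: 0} -> Pi = T*sqrt(g/l)
```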
9.3 First order EOM to know
Newton's law of cooling:

dT/dt = −k(T − Tₘ)

Tₘ: temperature of surrounding areas

k is chosen such that if T > Tₘ, dT/dt is negative.

Electric circuit (Kirchhoff's voltage law around the loop):

Σ_{i=1}^{3} Vᵢ = 0

L dJ/dt + RJ = E(t)
Single-species population models:
1. Non-damped

dP/dt = rP

r : Rate

2. Damped

dP/dt = rP(1 − P/θ)

θ : Carrying capacity

Predator-prey system (a numeric sketch follows below):

dx/dt = αx − βxy

dy/dt = −γy + δxy

x is prey and y is predator. All parameters are positive.

Species competing for same food source:

dx₁/dt = x₁(α₁ − β₁x₁ − γ₁x₂)

dx₂/dt = x₂(α₂ − β₂x₂ − γ₂x₁)
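A numeric sketch integrating the predator-prey system above (parameter values and initial conditions are my own illustration):

```python
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.1, 1.5, 0.075  # assumed values

def rhs(t, z):
    x, y = z  # x: prey, y: predator
    return [alpha*x - beta*x*y, -gamma*y + delta*x*y]

sol = solve_ivp(rhs, (0.0, 30.0), [10.0, 5.0])
print(sol.y[:, -1])  # populations at t = 30; the trajectories cycle
```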

9.4 Second order EOM to know


9.4.1 The undamped spring

L is natural length of spring. s is length that the spring is stretched by when mass is tied to the end
and the system is at equilibrium. Hooke’s law states:

T = −kd
Therefore the force on the mass is:

F = T + mg

F = −kd + mg
At equilibrium:

F = −ks + mg
F = 0 at equilibrium as the mass is not accelerating.
Hence,

0 = −ks + mg ⇒ ks = mg
When the spring is stretched some x away from s,

F = −k(s + x) + mg

F = −ks − kx + mg
As ks = mg
F = −kx

mẍ = −kx

mẍ + kx = 0
Let ω² = k/m,

ẍ + ω²x = 0
Consider its characteristic equation

λ2 + ω 2 = 0

λ2 = −ω 2

λ = ±ωi
Consider e^{iωt} = cos(ωt) + i sin(ωt). Hence,

x = c1 cos (ωt) + c2 sin (ωt)


x = c₁ cos(√(k/m) t) + c₂ sin(√(k/m) t)
The equation can be rewritten using trig identities. Let

c1 = A cos Φ

c2 = A sin Φ
Hence,

x = A cos Φ cos (ωt) + A sin Φ sin (ωt)


By trig identities,

x = A cos (ωt − Φ)
Φ ∈ (−π/2, π/2)
9.4.2 The damped spring
F = T + mg + Fd
Let damping force be proportional to velocity. Fd = −β ẋ

F = −k(s + x) + mg − β ẋ
As ks = mg

F = −kx − β ẋ
Hence,

mẍ = −kx − β ẋ

mẍ + β ẋ + kx = 0
Let ω² = k/m, 2p = β/m:

ẍ + 2pẋ + ω²x = 0

λ² + 2pλ + ω² = 0
By the quadratic formula:

λ = (−b ± √(b² − 4ac)) / 2a

λ = (−2p ± √((2p)² − 4ω²)) / 2 = −p ± √(p² − ω²)

Consider the discriminant:

D = 4p² − 4ω²

D = 4(p² − ω²)


If D < 0, the spring is under damped, and the EOM solution looks like:

x = e^{−pt}(c₁ cos(√(ω² − p²) t) + c₂ sin(√(ω² − p²) t))

If D = 0, the spring is critically damped, λ = −p, and using reduction of order the below solution will
be found:

x = e^{−pt}(c₁ + c₂t)

If D > 0, the spring is over damped. Two real roots, which are guaranteed to be negative: λ = −α and
λ = −β. Hence, the general solution:

x = c₁e^{−αt} + c₂e^{−βt}

The forced spring:

mẍ + βẋ + kx = f(t)

mẍ + βẋ + kx = F₀ cos(γt)
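A sympy sketch (my own numbers) reproducing the under-damped form above, with m = 1, β = 2, k = 10 so that p = 1, ω² = 10, and D < 0:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
m, b, k = 1, 2, 10  # p = b/(2m) = 1, w^2 = k/m = 10

sol = sp.dsolve(m*x(t).diff(t, 2) + b*x(t).diff(t) + k*x(t), x(t))
print(sol)  # x(t) = (C1*sin(3*t) + C2*cos(3*t))*exp(-t), sqrt(w^2 - p^2) = 3
```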

10 Taxonomy of Differential Equations:


10.1 Derivatives:
Ordinary
Partial

10.2 Order:
1st order: Highest derivative is first
2nd order: Highest derivative is second
...
Nth order: Highest derivative is Nth

10.3 Linearity:
Linear:

Σ_{i=0}^{n} aᵢ(x) dⁱf(x)/dxⁱ = b(x)

That is, it can be written as:

( Σ_{i=0}^{n} aᵢ(x) dⁱ/dxⁱ ) f(x) = b(x)

Non-Linear:
The equation cannot be written as a linear operator acting on f(x). Terms such as the following make an equation non-linear:

f(x)ⁿ

(dⁱf(x)/dxⁱ) · (dʲf(x)/dxʲ)

g(f(x))

Where g is not linear.
10.4 Auxiliary conditions:
Initial Value Problems (IVP):

d^{i+1}f/dx^{i+1} = F(x, f, df/dx, d²f/dx², d³f/dx³, ..., dⁱf/dxⁱ)

Given

f(x₀) = v₀, df(x₀)/dx = v₁, d²f(x₀)/dx² = v₂, ..., dⁱf(x₀)/dxⁱ = vᵢ

Boundary Value Problems (BVP): conditions are given at more than one point of the domain (e.g. at both ends of an interval) rather than all at x₀.

10.5 Autonomous
The derivative is independent of the independent variable. It depends only on the function itself.

10.6 Homogeneous
Lack of any free terms: b(x) = 0, i.e. no term independent of f and its derivatives.

11 First-Order Differential Equations


General first order ODE is:

y ′ = f (y, x)

11.1 Linear
General solution to:

p(x)y′ + q(x)y = r(x)

Which can be rewritten as:

y′ + Q(x)y = R(x)

Multiply both sides by e^{∫Q dx}:

e^{∫Q dx} y′ + Q(x)e^{∫Q dx} y = e^{∫Q dx} R(x)

By the product rule in reverse,

d/dx (e^{∫Q dx} y) = e^{∫Q dx} R(x)

e^{∫Q dx} y = ∫ e^{∫Q dx} R(x) dx

y = (1 / e^{∫Q dx}) ∫ e^{∫Q dx} R(x) dx
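A sympy sketch of this recipe (my own example, y′ + y = x, so Q = 1, R = x, e^{∫Q dx} = eˣ):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

Q, R = sp.Integer(1), x                      # y' + y = x
mu = sp.exp(sp.integrate(Q, x))              # integrating factor e^x
y_particular = sp.integrate(mu * R, x) / mu  # ((x - 1)*e^x)/e^x = x - 1
print(sp.simplify(y_particular))             # x - 1
print(sp.dsolve(y(x).diff(x) + y(x) - x, y(x)))  # y(x) = C1*exp(-x) + x - 1
```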

11.2 Non-Linear Separable


Rewrite

y′ = f(y, x)

As

M(y, x) + N(y, x)y′ = 0

In the case that

M(x) + N(y)y′ = 0

M(x) + N(y) dy/dx = 0

M(x)dx + N(y)dy = 0

Then the solution can be implicitly written as

∫ N(y)dy = −∫ M(x)dx

11.3 Existence and Uniqueness Theorems


11.4 Exact Differential Equations
Consider a first order differential equation of the form:

M(x, y) + N(x, y)y′ = 0

If M(x, y) and N(x, y) are both continuous in some rectangular region R and

∂M(x, y)/∂y = ∂N(x, y)/∂x

Then it is exact in R. To find y, find f(x, y) such that:

∂f(x, y)/∂x = M(x, y)

∂f(x, y)/∂y = N(x, y)

When integrating M(x, y) w.r.t. x, the constant of integration +h becomes a function +h(y). Contours of f(x, y) are the implicit expression
of y. That is:

f(x, y) = C

Where C ∈ R is determined by initial values.
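A sympy sketch of the exactness test and solution (my own example, (y cos x + 2x) + (sin x)y′ = 0):

```python
import sympy as sp

x, y = sp.symbols('x y')
M, N = y*sp.cos(x) + 2*x, sp.sin(x)

print(sp.diff(M, y) == sp.diff(N, x))   # True: both equal cos(x), so exact

f = sp.integrate(M, x)                   # x**2 + y*sin(x), + h(y) pending
h = sp.integrate(N - sp.diff(f, y), y)   # 0 here, since N - f_y = 0
print(sp.Eq(f + h, sp.Symbol('C')))      # x**2 + y*sin(x) = C, implicitly
```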

11.5 Integrating Factors to convert non-exact to exact


M(x, y) + N(x, y)y′ = 0

Where

∂M(x, y)/∂y ≠ ∂N(x, y)/∂x

The goal is to find µ(x, y) such that

µ(x, y)M(x, y) + µ(x, y)N(x, y)y′ = 0

is exact. That is:

∂(µ(x, y)M(x, y))/∂y = ∂(µ(x, y)N(x, y))/∂x

By the product rule,

∂µ/∂y M + µ ∂M/∂y = ∂µ/∂x N + µ ∂N/∂x

∂µ/∂y M − ∂µ/∂x N + µ (∂M/∂y − ∂N/∂x) = 0

Written in simpler notation,

µ_y(x, y)M(x, y) − µ_x(x, y)N(x, y) + µ(x, y)(M_y(x, y) − N_x(x, y)) = 0

The only way this simplifies is if µ depends on one variable only: µ(x, y) = µ(x) or µ(x, y) = µ(y).

For µ(x, y) = µ(x):

−(dµ(x)/dx) N(x, y) + µ(x)(M_y(x, y) − N_x(x, y)) = 0

dµ(x)/dx = µ(x) (M_y(x, y) − N_x(x, y)) / N(x, y)

Solvable iff (M_y(x, y) − N_x(x, y)) / N(x, y) is a function of x only.

For µ(x, y) = µ(y):

(dµ(y)/dy) M(x, y) + µ(y)(M_y(x, y) − N_x(x, y)) = 0

dµ(y)/dy = µ(y) (N_x(x, y) − M_y(x, y)) / M(x, y)

Solvable iff (N_x(x, y) − M_y(x, y)) / M(x, y) is a function of y only.
12 Second-Order Differential Equations
General form:

y ′′ = f (y, y ′ , x)

12.1 Homogeneous Linear with Constant Coefficients, Characteristic Equations

ay′′ + by′ + cy = 0

Taking the ansatz e^{λx}, the two values of λ can be determined by solving:

aλ² + bλ + c = 0

12.2 Imaginary Solutions


Given some imaginary solution:

e^{(a+bi)x} = e^{ax}e^{bxi}

= e^{ax}(cos(bx) + sin(bx)i)

= e^{ax} cos(bx) + e^{ax} sin(bx)i

If e^{(a+bi)x} is a solution then so are ℜ(e^{(a+bi)x}) and ℑ(e^{(a+bi)x}).
Hence,

y = c₁e^{ax} cos(bx) + c₂e^{ax} sin(bx)

12.3 Singular Root, Reduction of Order


Given that the characteristic equation yields only one root and therefore one solution y₁, set y₂(x) = v(x)y₁(x).

In this case, setting y₂ = Axy₁(x) will always work.

12.4 Wronskian
The Wronskian W(y₁, y₂)(x) is defined as

W(y₁, y₂)(x) = det [ y₁(x)   y₂(x)  ]
                   [ y₁′(x)  y₂′(x) ]

W(y₁, y₂)(x) = y₁(x)y₂′(x) − y₂(x)y₁′(x)

If the Wronskian is non-zero for some x, y₁ and y₂ are linearly independent, and hence a combination
c₁y₁ + c₂y₂ with specific c₁ and c₂ solves any IVP.
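A quick sympy check (my own example, y₁ = eˣ, y₂ = e⁻ˣ):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(x), sp.exp(-x)

W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(sp.simplify(W))  # -2: nonzero, so y1 and y2 are linearly independent
```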
12.5 Non-Homogeneous Linear with Constant Coefficients
ay′′ + by′ + cy = p(x)

A particular solution must be determined by ansatz. Below are general rules for the ansatz:

If

ay′′ + by′ + cy = Ae^{Bx}

Assume y = λe^{Bx}

If

ay′′ + by′ + cy = A cos(Bx)

or

ay′′ + by′ + cy = A sin(Bx)

Assume y = λ₁ sin(Bx) + λ₂ cos(Bx)

If

ay′′ + by′ + cy = A cos(Bx)e^{Cx}

or

ay′′ + by′ + cy = A sin(Bx)e^{Cx}

Assume y = λ₁e^{Cx} sin(Bx) + λ₂e^{Cx} cos(Bx). In general:
The general solution of the above equation can be written as:

y = c₁y₁(x) + c₂y₂(x) + Y(x)


y₁ and y₂ are solutions of the homogeneous equation ay′′ + by′ + cy = 0. Y is a particular solution.

12.6 Series Solutions


Refresher of the Taylor series of a function f about a point x₀:

f(x) = Σ_{n=0}^{∞} (f⁽ⁿ⁾(x₀) / n!) (x − x₀)ⁿ

Let

aₙ = (f⁽ⁿ⁾(x₀) / n!) (x − x₀)ⁿ

Radius of convergence: by the ratio test,

lim_{n→∞} |a_{n+1} / aₙ| = c|x − x₀|

and the series converges where

|x − x₀| < 1/c
Consider two series given by:

s₁ = Σ_{n=0}^{∞} bₙ(x − x₀)ⁿ

Which converges to f(x) for some radius |x − x₀| ≤ l, and

s₂ = Σ_{n=0}^{∞} cₙ(x − x₀)ⁿ

Which converges to g(x) for some radius |x − x₀| ≤ l.

Then f(x) ± g(x), f(x)g(x), and f(x)/g(x) (where g(x) ≠ 0) can likewise be computed term-wise as power series within the common radius of convergence.
Differentiation rules: power series are differentiated term by term.

y = Σ_{n=0}^{∞} aₙ(x − x₀)ⁿ

y′ = Σ_{n=1}^{∞} n aₙ(x − x₀)^{n−1}
The equation:

P(x)y′′ + Q(x)y′ + R(x)y = 0

Can be solved/approximated at an ordinary point x₀ (x₀ such that P(x₀) ≠ 0). In such a case the above
can be rewritten as:

y′′ + p(x)y′ + q(x)y = 0

The solution takes the form:

y = a₀y₁ + a₁y₂

Where a₀ and a₁ are arbitrary and depend on the initial conditions. y₁ and y₂ are power series. The
solution is analytic at x = x₀, with radius of convergence R(y) ≥ min(R(p), R(q)).

To solve for y, write y and its derivatives as Taylor series. Then solve for the coefficients. Change of
index and the above derived rules for operations on power series will be useful. Find a relation (explicit
or recursive) for the coefficients of y₁ in terms of a₀ and do the same for y₂ in terms of a₁.
12.7 Non-Homogeneous Linear with Function Coefficients: General solution
To find the general solution to a second order linear differential equation

y′′ + p(x)y′ + q(x)y = r(x)

Consider the corresponding homogeneous equation:

y′′ + p(x)y′ + q(x)y = 0

Solve the above using series solutions to find that:

y = a₀y₁ + a₁y₂

The general solution of the non-homogeneous equation will be of the form

y = a₀y₁ + a₁y₂ + Y

Let Y be of the form:

Y = u₁(x)y₁ + u₂(x)y₂

It can be shown that:

u₁(x) = −∫ y₂(x)r(x) / W(y₁, y₂)(x) dx

u₂(x) = ∫ y₁(x)r(x) / W(y₁, y₂)(x) dx

Hence, the solution can be written as:

y = a₀y₁ + a₁y₂ − y₁ ∫ y₂(x)r(x) / W(y₁, y₂)(x) dx + y₂ ∫ y₁(x)r(x) / W(y₁, y₂)(x) dx

13 Parameterization of curves in R³
14 Envelope of a family of functions
y(x, t) = f(x, t)

Hold t constant to get one member of the family:

y⁽ᵗ⁾(x) = f(x, t)

Envelope:

1. Find f_t(x, t)
2. Set f_t(x, t) = 0
3. Solve for t in terms of x. Call this function g(x). The envelope is therefore:

E = f(x, g(x))
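A minimal sympy sketch following these three steps (my own example, the family of lines f(x, t) = tx − t², whose envelope is the parabola x²/4):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = t*x - t**2

g = sp.solve(sp.diff(f, t), t)[0]  # f_t = x - 2t = 0  ->  t = x/2
E = f.subs(t, g)                    # envelope E = f(x, g(x))
print(sp.simplify(E))               # x**2/4
```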

15 Partial fractions
