
Notes

Abhinav Pradeep
October 24, 2023

1 Multivariate Limits
1.1 Continuity in calculus of one variable
To prove f is continuous at c,

\lim_{x→c^+} f(x) = \lim_{x→c^-} f(x) = f(c)

For c^+,

∀ϵ > 0, ∃δ > 0 s.t for 0 < x − c < δ

|f (x) − L| < ϵ
Analogously, for c− ,

∀ϵ > 0, ∃δ > 0 s.t for − δ < x − c < 0

|f (x) − L| < ϵ

1.2 Limits in calculus of one variable


f :R→R
To prove that \lim_{x→c} f(x) = L

∀ϵ > 0, ∃δ > 0 s.t for 0 < |x − c| < δ

|f (x) − L| < ϵ

1.3 Euclidean distance in R2


For P, Q ∈ R^2, where P = (P_1, P_2), Q = (Q_1, Q_2),

D(P, Q) = \sqrt{(P_1 - Q_1)^2 + (P_2 - Q_2)^2}

1.4 Euclidean distance in Rn
For P, Q ∈ Rn , where P = (P1 , P2 , P3 ...Pn ), Q = (Q1 , Q2 , Q3 ...Qn )
D(P, Q) = \sqrt{\sum_{i=1}^{n} (P_i - Q_i)^2}

1.5 Convergence of a sequence in R^n


NOTE: SEQUENCES ARE DENOTED WITH SUPERSCRIPT

Consider a sequence (x^i)_{i∈N} in R^n. \lim_{i→∞} x^i = X ∈ R^n if

∀ϵ > 0, ∃I ∈ N s.t ∀i ≥ I,

D(x^i, X) < ϵ

1.6 Continuity of multivariate functions


f : Rn → Rm
To prove f is continuous at X,

∀ sequences (x^i)_{i∈N} such that:

\lim_{i→∞} x^i = X

we have that:

\lim_{i→∞} f(x^i) = f(X)

Similar process for

f : D → Rm
Where

D ⊆ Rn
Considering the set of all sequences that converge to X is equivalent to considering the set of all
directions in R^n (or in D) from which X can be approached. Hence, where the single variable case
considers the approaches x → X^+ and x → X^-, the multivariate case considers approaches from all
directions possible in abstract R^n
1.7 Limits of multivariate functions
f : Rn → Rm
To prove

\lim_{x→C} f(x) = L

∀ϵ > 0, ∃δ > 0 s.t for 0 < D(x, C) < δ

D(f (x), L) < ϵ
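As an example of why all directions matter, consider f : R^2 → R with f(x, y) = \frac{xy}{x^2 + y^2} for (x, y) ≠ (0, 0). Along the line y = mx,

f(x, mx) = \frac{m}{1 + m^2}

which depends on m, so different directions of approach give different values and \lim_{(x,y)→(0,0)} f(x, y) does not exist.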

1.8 Limit laws for multivariate functions


Generalized triangle inequality

1.9 Smoothness

2 Multivariate Derivatives
NOTE: |·| : R^n → R where for V ∈ R^n, |V| = D(0, V)

2.1 Differentiability
f : R^n → R^m is differentiable at X ∈ R^n if there exists a linear map T : R^n → R^m such that:

\lim_{h→0} \frac{|f(X + h) - f(X) - Th|}{|h|} = 0

Where h ∈ R^n

2.2 Directional Derivatives


For f : R^n → R and f differentiable on R^n, the derivative in the direction of unit vector V ∈ R^n is defined
as:

D_V f(x) = \lim_{h→0} \frac{f(x + hV) - f(x)}{h}

Where h ∈ R

When extended to f : R^n → R^m, where f can be written as f = (f_1, f_2, f_3, ... f_m), it can be proved
that D_V f exists iff D_V f_k exists ∀k ∈ [1, m]. In this case,

DV f = (DV f1 , DV f2 , DV f3 , ...DV fm )
2.3 Partial Derivatives
Partial derivatives are a special case of DV f where V is a standard basis vector such as ei , ej , ek ... etc.
In this case it is written as:
D_i f ⇔ \frac{∂f}{∂x_i}
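For example, for f(x, y) = x^2 y: D_1 f = \frac{∂f}{∂x} = 2xy and D_2 f = \frac{∂f}{∂y} = x^2. Using the standard fact that, for differentiable f, D_V f = \sum_i V_i D_i f, the derivative in the direction of the unit vector V = (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}) is D_V f = \frac{2xy + x^2}{\sqrt{2}}.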

2.4 Total Differentiability


2.5 Total Derivative
For f : Rn → Rm the total derivative can be defined:

2.5.1 As an n × m matrix

Df = \begin{pmatrix} D_1 f \\ D_2 f \\ \vdots \\ D_n f \end{pmatrix}

Which forms:

Df = \begin{pmatrix} D_1 f_1 & D_1 f_2 & D_1 f_3 & ... & D_1 f_m \\ D_2 f_1 & D_2 f_2 & D_2 f_3 & ... & D_2 f_m \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ D_n f_1 & D_n f_2 & D_n f_3 & ... & D_n f_m \end{pmatrix}

2.5.2 As an m × n matrix

Df = \begin{pmatrix} (D_1 f)^T & (D_2 f)^T & (D_3 f)^T & ... & (D_n f)^T \end{pmatrix}

Which forms:

Df = \begin{pmatrix} D_1 f_1 & D_2 f_1 & D_3 f_1 & ... & D_n f_1 \\ D_1 f_2 & D_2 f_2 & D_3 f_2 & ... & D_n f_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ D_1 f_m & D_2 f_m & D_3 f_m & ... & D_n f_m \end{pmatrix}
Both forms are valid(-ish), but the m × n form is preferred.

In such case Df is also called the Jacobian matrix J_f(\vec{x}) where \vec{x} ∈ R^n:

J_f(\vec{x}) = \begin{pmatrix} D_1 f_1(\vec{x}) & D_2 f_1(\vec{x}) & D_3 f_1(\vec{x}) & ... & D_n f_1(\vec{x}) \\ D_1 f_2(\vec{x}) & D_2 f_2(\vec{x}) & D_3 f_2(\vec{x}) & ... & D_n f_2(\vec{x}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ D_1 f_m(\vec{x}) & D_2 f_m(\vec{x}) & D_3 f_m(\vec{x}) & ... & D_n f_m(\vec{x}) \end{pmatrix}

For f : Rn → R the total derivative, which is called the gradient ∇, can be defined as:

∇f = (D1 f, D2 f, D3 f, ...Dn f )
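For example, for f : R^2 → R^2 with f(x, y) = (x^2 y, \sin x), the (preferred) m × n form gives:

J_f(x, y) = \begin{pmatrix} 2xy & x^2 \\ \cos x & 0 \end{pmatrix}

and for the scalar function g(x, y) = x^2 y, ∇g = (2xy, x^2).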
2.6 Derivative Laws
2.6.1 Linearity
Consider f : Rn → Rm , g : Rn → Rm which are totally differentiable at x ∈ Rn

f + g : Rn → Rm
Is totally differentiable at x and

D(f (x) + g(x)) = Df (x) + Dg(x)


Alternatively, in Jacobian notation,

Jf +g (x) = Jf (x) + Jg (x)

λf : Rn → Rm , λ ∈ R
Is totally differentiable at x and

D(λf (x)) = λDf (x)


Alternatively, in Jacobian notation,

Jλf (x) = λJf (x)

2.6.2 Chain rule
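The standard multivariate statement: for f : R^n → R^m totally differentiable at x and g : R^m → R^k totally differentiable at f(x), the composite g ∘ f : R^n → R^k is totally differentiable at x and

D(g ∘ f)(x) = Dg(f(x)) Df(x)

Alternatively, in Jacobian notation,

J_{g∘f}(x) = J_g(f(x)) J_f(x)

a k × m matrix times an m × n matrix, giving the k × n Jacobian of the composite.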

2.7 Higher Order Derivatives


2.8 Using ∇ := (\frac{∂}{∂x_1}, \frac{∂}{∂x_2}, ... \frac{∂}{∂x_n})
There are three ways the operator ∇ can act:
1. On a scalar function f : Rn → R : ∇f (the gradient);
2. On a vector function f : Rn → Rn , via the dot product: ∇ · f (the divergence);
3. On a vector function f : Rn → Rn , via the cross product: ∇ × f (the curl)

2.8.1 Gradient: (∇f ) : Rn → Rn


For f : Rn → R
The gradient ∇f points in the direction of maximum increase of the function f .

The magnitude |∇f | gives the slope (rate of increase) along this maximal direction

2.8.2 Divergence: (∇ · f ) : Rn → R
Identifies sources (positive divergence) and sinks (negative divergence) of a vector field.
2.8.3 Curl: (∇ × f ) : Rn → Rn
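As a quick illustration of all three operators, take the scalar function f(x, y, z) = x^2 yz and the vector field F(x, y, z) = (x^2, xy, z):

∇f = (2xyz, x^2 z, x^2 y)

∇ · F = 2x + x + 1 = 3x + 1

∇ × F = (0, 0, y)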

3 Tangent Plane
Consider f : R2 → R
Let z = f (x, y)

The plane tangent to z = f(x, y) at some point \vec{P} = (a, b, f(a, b)) is of the form:

\vec{n} · \vec{r} = \vec{n} · \vec{P}

Where \vec{n} is the vector normal to z = f(x, y) at \vec{P}. This would be:

⃗n = (1, 0, Dx f ) × (0, 1, Dy f )
Where (1, 0, D_x f) is the tangent vector to the surface in the x direction and (0, 1, D_y f) is the
tangent vector in the y direction.

Alternatively, (1, 0, D_x f) and (0, 1, D_y f) can be obtained by considering F : R^2 → R^3 as:


 
F(x, y) = \begin{pmatrix} x \\ y \\ f(x, y) \end{pmatrix}
This communicates the same idea as the original f(x, y). In this case,

Dx F = (1, 0, Dx f )
Dy F = (0, 1, Dy f )
Either way,
 
\vec{n} = \det \begin{pmatrix} i & j & k \\ 1 & 0 & D_x f \\ 0 & 1 & D_y f \end{pmatrix}

⃗n = −(Dx f )i − (Dy f )j + k

⃗n = (−Dx f, −Dy f, 1)
Let ⃗r = (x, y, z)

\vec{n} · \vec{r} = \vec{n} · \vec{P}

(-D_x f, -D_y f, 1) · (x, y, z) = (-D_x f, -D_y f, 1) · (a, b, f(a, b))

−(Dx f )x − (Dy f )y + z = −(Dx f )a − (Dy f )b + f (a, b)

z = (Dx f )x − (Dx f )a + (Dy f )y − (Dy f )b + f (a, b)

z = (D_x f)(x − a) + (D_y f)(y − b) + f(a, b), where D_x f and D_y f are evaluated at (a, b)
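For example, take f(x, y) = x^2 + y^2 at (a, b) = (1, 1): D_x f(1, 1) = 2, D_y f(1, 1) = 2 and f(1, 1) = 2, so the tangent plane is z = 2(x − 1) + 2(y − 1) + 2 = 2x + 2y − 2.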

4 Approximations: Taylor series


4.1 Multi-index notation
n-dimensional multi-index:

α ∈ N_0^n

i.e. an n-dimensional vector/n-tuple of natural numbers including 0. Elements denoted by:

α = (α1 , α2 , α3 , ...αn )

4.1.1 Rules
For n-dimensional α
|α| = \sum_{i=1}^{n} α_i

α! = \prod_{i=1}^{n} α_i!

For x ∈ R^n, x^α = \prod_{i=1}^{n} x_i^{α_i}
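For example, for α = (2, 0, 1) ∈ N_0^3: |α| = 3, α! = 2! · 0! · 1! = 2, and x^α = x_1^2 x_3.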

4.1.2 Applications to derivatives


For f : R^n → R, where D_i f ⇔ \frac{∂f}{∂x_i},

D^α f = \frac{∂^{|α|} f}{\prod_{i=1}^{n} ∂x_i^{α_i}}
4.2 General Taylor Series for Rn → R
For f : Rn → R
T(h) = \sum_{|α| ≤ k} \frac{D^α f(x)}{α!} · h^α

Where the summations can alternatively be written as matrix products. For example, consider T (h)
of order 2:
T_2(h) = \sum_{|α| ≤ 2} \frac{D^α f(x)}{α!} · h^α

T_2(h) = \sum_{|α| = 0} \frac{D^α f(x)}{α!} · h^α + \sum_{|α| = 1} \frac{D^α f(x)}{α!} · h^α + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

T_2(h) = f(x) + \sum_{|α| = 1} \frac{D^α f(x)}{α!} · h^α + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

The α that satisfy |α| = 1 are n-tuples containing one entry with value 1 and all other entries 0. For
each such α, α! = 1:

T_2(h) = f(x) + \sum_{|α| = 1} D^α f(x) · h^α + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

Summing over all such α is equivalently:

T_2(h) = f(x) + \sum_{i=1}^{n} D_i f(x) · h_i + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

This summation is equivalently:

T_2(h) = f(x) + J_f(x) · h + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

Or, as f : R^n → R,

T_2(h) = f(x) + ∇f · h + \sum_{|α| = 2} \frac{D^α f(x)}{α!} · h^α

The α that satisfy |α| = 2 are of one of two forms.

Contains only one 2:

α = (0, 0, 0, ..., 2, ..., 0, 0, 0)

Contains two 1s:

α = (0, 1, 0, ..., 1, ..., 0, 0, 0)


Summing over case 1 is equivalent to:

\frac{1}{2} \sum_{i=1}^{n} D_i D_i f(x) · h_i^2

Summing over case 2 is equivalent to:

\frac{1}{2} \sum_{i ≠ j} D_j D_i f(x) · h_i · h_j

The \frac{1}{2} factor is needed as 'commutativity' (i.e. D_j D_i = D_i D_j) means that \sum_{i ≠ j} double counts each pair.

Summing over \sum_{|α|=2} \frac{D^α f(x)}{α!} · h^α requires summing over both cases 1 and 2:

\sum_{|α|=2} \frac{D^α f(x)}{α!} · h^α = \frac{1}{2} \sum_{i=1}^{n} D_i D_i f(x) · h_i^2 + \frac{1}{2} \sum_{i ≠ j} D_j D_i f(x) · h_i · h_j

\sum_{|α|=2} \frac{D^α f(x)}{α!} · h^α = \frac{1}{2} \sum_{i,j=1}^{n} D_i D_j f(x) · h_i · h_j

Hence,

T_2(h) = f(x) + ∇f · h + \frac{1}{2} \sum_{i,j=1}^{n} D_i D_j f(x) · h_i · h_j

Which, when re-written as:

T_2(h) = f(x) + ∇f · h + \frac{1}{2} \sum_{i,j=1}^{n} h_i · D_i D_j f(x) · h_j

is 'evidently' equivalent to:

T_2(h) = f(x) + ∇f · h + \frac{1}{2} h^T H_f(x) h

Where H_f(x) is the Hessian matrix.

4.3 R2 → R
For some z = f(x, y), given a point P = (a, b, f(a, b)).

Before everything,

T(x + δ) = \sum_{|α| ≤ k} \frac{D^α f(x)}{α!} · δ^α

Let h = x + δ ⇔ δ = (h − x):

T(h) = \sum_{|α| ≤ k} \frac{D^α f(x)}{α!} · (h − x)^α

To calculate the first order Taylor approximation centered at P,

z ≈ T_1(x, y) = f(a, b) + ∇f(a, b) · (x − a, y − b)

∇f(a, b) must be given:

∇f(a, b) = (D_x f(a, b), D_y f(a, b))

z ≈ T_1(x, y) = f(a, b) + (D_x f(a, b), D_y f(a, b)) · (x − a, y − b)

z ≈ T_1(x, y) = f(a, b) + D_x f(a, b)x − D_x f(a, b)a + D_y f(a, b)y − D_y f(a, b)b

z ≈ T_1(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b)

Second order Taylor approximation centered at P:

z ≈ T_2(x, y) = f(a, b) + ∇f(a, b) · (x − a, y − b) + \frac{1}{2}(x − a, y − b)^T H_f(a, b)(x − a, y − b)

From before,

z ≈ T_2(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b) + \frac{1}{2}(x − a, y − b)^T H_f(a, b)(x − a, y − b)

To calculate T_2(x, y), H_f(a, b) must be given. The Hessian is the total derivative of the gradient:

H_f = D(∇f), where ∇f : R^2 → R^2

H_f = \begin{pmatrix} D_x D_x f & D_y D_x f \\ D_x D_y f & D_y D_y f \end{pmatrix}

Evaluating at (a, b) and expanding the matrix product:

z ≈ T_2(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b) + \frac{1}{2}(x − a, y − b) \begin{pmatrix} D_x D_x f(a, b) & D_y D_x f(a, b) \\ D_x D_y f(a, b) & D_y D_y f(a, b) \end{pmatrix} \begin{pmatrix} x − a \\ y − b \end{pmatrix}

z ≈ T_2(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b) + \frac{1}{2}\left( D_x D_x f(a, b)(x − a)^2 + D_x D_y f(a, b)(x − a)(y − b) + D_y D_x f(a, b)(x − a)(y − b) + D_y D_y f(a, b)(y − b)^2 \right)

By 'commutativity' of D:

z ≈ T_2(x, y) = f(a, b) + D_x f(a, b)(x − a) + D_y f(a, b)(y − b) + \frac{1}{2} D_x D_x f(a, b)(x − a)^2 + D_x D_y f(a, b)(x − a)(y − b) + \frac{1}{2} D_y D_y f(a, b)(y − b)^2
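For example, for f(x, y) = x^2 y centered at (a, b) = (1, 1): f(1, 1) = 1, D_x f(1, 1) = 2, D_y f(1, 1) = 1, D_x D_x f(1, 1) = 2, D_x D_y f(1, 1) = 2 and D_y D_y f(1, 1) = 0, so

z ≈ T_2(x, y) = 1 + 2(x − 1) + (y − 1) + (x − 1)^2 + 2(x − 1)(y − 1)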

5 Optimization on f : Rn → R
5.1 Definition of local extrema
Consider f : Rn → R
Define Bϵ (x) = {a ∈ Rn |D(x, a) < ϵ}

If f has a local maximum at X, then ∃ϵ > 0 such that:

f (X) ≥ f (x) ∀x ∈ Bϵ (X)


If f has a local minimum at X, then ∃ϵ > 0 such that:

f (X) ≤ f (x) ∀x ∈ Bϵ (X)


For isolated (strict) extrema, the inequalities are strict for x ≠ X.

5.2 Definition of local extrema in terms of ∇f


Consider f : Rn → R

If f has a local extremum at X,

∇f(X) = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
If, in addition, H_f(X) is negative definite, f has a local maximum at X.

If H_f(X) is positive definite, f has a local minimum at X.

If H_f(X) is indefinite, f has a saddle point at X.

5.3 Definite and indefinite matrices


Definition:
M is positive definite iff:

x^T M x > 0 ∀x ∈ R^n, x ≠ 0

M is negative definite iff:

x^T M x < 0 ∀x ∈ R^n, x ≠ 0

M is indefinite iff:

x^T M x > 0 for some x ∈ R^n and x^T M x < 0 for some other x ∈ R^n

5.3.1 Regarding eigenvalues


M is positive definite if and only if all of its eigenvalues are positive.
M is negative definite if and only if all of its eigenvalues are negative
M is indefinite if and only if it has both positive and negative eigenvalues.
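For example, M = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix} has eigenvalues 2 and −3, so it is indefinite; as a Hessian at a critical point, it would indicate a saddle point.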

5.4 Constrained optimization and Lagrange multipliers


The problem:
Given:

f : Rn → R

gi : Rn → R, i ∈ [1, m]
Optimize f with respect to constraints gi = 0 ∀i ∈ [1, m]

Each g_i = 0 forms a contour or level set. From the proof below, it is evident that ∇g_i(x) ⊥ (g_i = 0) at x:

Consider r(t) which satisfies

gi (r(t)) = 0

\frac{d}{dt} g_i(r(t)) = 0

By the chain rule,

∇g_i · r'(t) = 0 — (1)

QED

Next consider f (r(t)). f (r(t)) need not be a level curve.

However, it is intuitive and a fact that the minimums and maximums of f (r(t)) will occur
where f (r(t)) is flat.

That is, where


\frac{d}{dt} f(r(t)) = 0

∇f · r'(t) = 0 — (2)
Putting (1) and (2) together allows for a formulation that is independent of parameterization: at an
extremum, ∇f is perpendicular to every tangent direction r'(t) of the constraint set, just as each ∇g_i
is, so ∇f must lie in the span of the ∇g_i. Maximums and minimums occur when:

∇f = \sum_{i=1}^{m} λ_i ∇g_i

With the added initial constraints of:

gi = 0 ∀i ∈ [1, m]
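For example, optimize f(x, y) = xy subject to g(x, y) = x + y − 2 = 0: ∇f = (y, x) and ∇g = (1, 1), so (y, x) = λ(1, 1) gives x = y = λ. The constraint then gives x = y = 1, and f(1, 1) = 1 is the maximum of f on the line.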

6 Line, Surface and Volume Integrals


7 Taxonomy of Differential Equations:
7.1 Derivatives:
Ordinary
Partial

7.2 Order:
1st order: highest derivative is a first derivative
2nd order: highest derivative is a second derivative
⋮
Nth order: highest derivative is an Nth derivative

7.3 Linearity:
Linear:
\sum_{i=0}^{n} a_i(x) \frac{d^i f(x)}{dx^i} = b(x)

That is, it can be written as:

\left( \sum_{i=0}^{n} a_i(x) \frac{d^i}{dx^i} \right) f(x) = b(x)
Non-Linear:
The equation cannot be written as some linear operator acting on f(x); it contains terms such as:

f(x)^n

\frac{d^i f(x)}{dx^i} · \frac{d^j f(x)}{dx^j}

g(f(x))

Where g is not linear.

7.4 Auxiliary conditions:


Initial Value Problems (IVP):

\frac{d^{i+1} f}{dx^{i+1}} = F\left(x, \frac{df}{dx}, \frac{d^2 f}{dx^2}, \frac{d^3 f}{dx^3}, ... \frac{d^i f}{dx^i}\right)

Given

f(x_0) = v_0, \frac{df(x_0)}{dx} = v_1, \frac{d^2 f(x_0)}{dx^2} = v_2, ... \frac{d^i f(x_0)}{dx^i} = v_i

Boundary Value Problems (BVP):

7.5 Autonomous
The derivative is independent of the independent variable; it depends only on the function itself: y' = f(y).

7.6 Homogeneous
Lack of any term free of the unknown function (in the linear case, b(x) = 0).
8 First-Order Differential Equations
General first order ODE is:

y ′ = f (y, x)

8.1 Linear
General solution to:

p(x)y ′ + q(x)y = r(x)


Which can be rewritten as:

y ′ + Q(x)y = R(x)
Multiply both sides by e^{\int Q dx}:

e^{\int Q dx} y' + Q(x) e^{\int Q dx} y = e^{\int Q dx} R(x)

By the inverse chain rule (the product rule read in reverse),

\frac{d}{dx}\left( e^{\int Q dx} y \right) = e^{\int Q dx} R(x)

e^{\int Q dx} y = \int e^{\int Q dx} R(x) dx

y = \frac{1}{e^{\int Q dx}} \int e^{\int Q dx} R(x) dx
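For example, y' + y = x, i.e. Q(x) = 1 and R(x) = x: the integrating factor is e^x, so (e^x y)' = xe^x, then e^x y = (x − 1)e^x + C, and y = x − 1 + Ce^{-x}.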

8.2 Non-Linear Separable


Rewrite

y ′ = f (y, x)
As

M (y, x) + N (y, x)y ′ = 0


In the case that

M (x) + N (y)y ′ = 0

dy
M (x) + N (y) =0
dx

M (x)dx + N (y)dy = 0
Then the solution can be implicitly written as
\int N(y) dy = -\int M(x) dx
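For example, y' = -\frac{x}{y} can be written as x + yy' = 0, with M(x) = x and N(y) = y. Then \int y\, dy = -\int x\, dx gives \frac{y^2}{2} = -\frac{x^2}{2} + C, i.e. the circles x^2 + y^2 = 2C.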
8.3 Existence and Uniqueness Theorems
8.4 Exact Differential Equations
Consider a first order differential equation of the form:

M(x, y) + N(x, y)y' = 0

If M(x, y) and N(x, y) are both continuous in some rectangular region R and,

\frac{∂M(x, y)}{∂y} = \frac{∂N(x, y)}{∂x}
Then it is exact in R. To find y, find f (x, y) such that:

\frac{∂f(x, y)}{∂x} = M(x, y)

\frac{∂f(x, y)}{∂y} = N(x, y)
When integrating M(x, y) w.r.t. x, the constant of integration +h becomes +h(y), which is then fixed by
requiring ∂f/∂y = N(x, y). Contours of f(x, y) are the implicit expression of y. That is:

f (x, y) = C
Where C ∈ R is determined by initial values.
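For example, 2xy + x^2 y' = 0: M = 2xy, N = x^2, and \frac{∂M}{∂y} = 2x = \frac{∂N}{∂x}, so the equation is exact. Integrating M w.r.t. x gives f(x, y) = x^2 y + h(y); then \frac{∂f}{∂y} = x^2 = N forces h' = 0, so the solution is x^2 y = C.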

8.5 Integrating Factors to convert non-exact to exact


M (x, y) + N (x, y)y ′ = 0
Where

\frac{∂M(x, y)}{∂y} ≠ \frac{∂N(x, y)}{∂x}
Goal is to find µ(x, y) such that

µ(x, y)M(x, y) + µ(x, y)N(x, y)y' = 0

is exact. That is:

\frac{∂(µ(x, y)M(x, y))}{∂y} = \frac{∂(µ(x, y)N(x, y))}{∂x}

By the product rule,

µ_y(x, y)M(x, y) + µ(x, y)M_y(x, y) = µ_x(x, y)N(x, y) + µ(x, y)N_x(x, y)

µ_y(x, y)M(x, y) − µ_x(x, y)N(x, y) + µ(x, y)\left( M_y(x, y) − N_x(x, y) \right) = 0

The only way this simplifies is if µ depends on one variable only:

Case µ(x, y) = µ(x): then µ_y = 0, so

\frac{dµ(x)}{dx} = \frac{µ(x)\left( M_y(x, y) − N_x(x, y) \right)}{N(x, y)}

Solvable iff \frac{M_y(x, y) − N_x(x, y)}{N(x, y)} is a function of x only.

Case µ(x, y) = µ(y): then µ_x = 0, so

\frac{dµ(y)}{dy} = \frac{−µ(y)\left( M_y(x, y) − N_x(x, y) \right)}{M(x, y)}

Solvable iff \frac{M_y(x, y) − N_x(x, y)}{M(x, y)} is a function of y only.

8.6 Difference Functions

9 Second-Order Differential Equations


General form:

y ′′ = f (y, y ′ , x)

9.1 Homogeneous Linear with Constant Coefficients, Characteristic Equations

ay'' + by' + cy = 0

Taking the ansatz e^{λx}, the two values of λ can be determined by solving:

aλ^2 + bλ + c = 0
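For example, y'' − 3y' + 2y = 0 has characteristic equation λ^2 − 3λ + 2 = 0, with roots λ = 1 and λ = 2, so the general solution is y = c_1 e^x + c_2 e^{2x}.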

9.2 Imaginary Solutions


Given some complex root a + bi of the characteristic equation, the corresponding solution is:

e^{(a+bi)x} = e^{ax} e^{bxi}

= e^{ax}(\cos(bx) + \sin(bx)i)

= e^{ax}\cos(bx) + e^{ax}\sin(bx)i


 
If e^{(a+bi)x} is a solution then so are ℜ(e^{(a+bi)x}) and ℑ(e^{(a+bi)x}).
Hence,

y = c_1 e^{ax}\cos(bx) + c_2 e^{ax}\sin(bx)


9.3 Singular Root, Reduction of Order
Given that the characteristic equation yields only one (repeated) root λ, and therefore one solution
y_1 = e^{λx}, set y_2(x) = v(x)y_1(x). Substituting into the equation forces v'' = 0, so v(x) = x and
y_2 = xe^{λx}.

9.4 Wronskian
The Wronskian W (y1 , y2 )(x) is defined as
 
W(y_1, y_2)(x) = \det \begin{pmatrix} y_1(x) & y_2(x) \\ y_1'(x) & y_2'(x) \end{pmatrix}

W(y_1, y_2)(x) = y_1(x)y_2'(x) − y_2(x)y_1'(x)


If the Wronskian is non-zero for some x, y_1 and y_2 are linearly independent and hence can be combined
with specific c_1 and c_2 to solve an IVP.

9.5 Non-Homogeneous Linear with Constant Coefficients


ay ′′ + by ′ + cy = p(x)
Particular solution must be determined by ansatz. Below are general rules for the ansatz:

If

ay'' + by' + cy = Ae^{Bx}

Assume y = λe^{Bx}

If

ay'' + by' + cy = A\cos(Bx)

or

ay'' + by' + cy = A\sin(Bx)

Assume y = λ_1\sin(Bx) + λ_2\cos(Bx)

If

ay'' + by' + cy = A\cos(Bx)e^{Cx}

or

ay'' + by' + cy = A\sin(Bx)e^{Cx}

Assume y = λ_1 e^{Cx}\sin(Bx) + λ_2 e^{Cx}\cos(Bx). In general:
The general solution of the above equation can be written as:

y = c_1 y_1(x) + c_2 y_2(x) + Y(x)

y_1 and y_2 are solutions of the homogeneous equation ay'' + by' + cy = 0. Y is a particular solution.
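For example, y'' + y = e^{2x}: assume Y = λe^{2x}; then 4λe^{2x} + λe^{2x} = e^{2x} gives λ = \frac{1}{5}, so y = c_1\cos(x) + c_2\sin(x) + \frac{1}{5}e^{2x}.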

9.6 Series Solutions


Refresher of the Taylor series of a function f about the point x_0:

f(x) = \sum_{n=0}^{∞} \frac{f^{(n)}(x_0)}{n!} (x − x_0)^n
Let,

a_n = \frac{f^{(n)}(x_0)}{n!} (x − x_0)^n

Radius of convergence (by the ratio test):

\lim_{n→∞} \left| \frac{a_{n+1}}{a_n} \right| = c|x − x_0|

Where the radius of convergence is:

|x − x_0| < \frac{1}{c}
Consider two series given by:

s_1 = \sum_{n=0}^{∞} b_n (x − x_0)^n

Which converges to f(x) for some radius |x − x_0| < l, and

s_2 = \sum_{n=0}^{∞} c_n (x − x_0)^n

Which converges to g(x) for some radius |x − x_0| < l

Within this common radius, the series can be combined termwise to represent:

f(x) ± g(x)

f(x)g(x)

\frac{f(x)}{g(x)}
Differentiation rules:

y = \sum_{n=0}^{∞} a_n (x − x_0)^n

y' = \frac{d}{dx} \sum_{n=0}^{∞} a_n (x − x_0)^n = \sum_{n=1}^{∞} n a_n (x − x_0)^{n−1}
The equation:

P (x)y ′′ + Q(x)y ′ + R(x)y = 0


Can be solved/approximated at an ordinary point x_0 (x_0 such that P(x_0) ≠ 0). In such case, dividing
by P(x), the above can be rewritten as:

y'' + p(x)y' + q(x)y = 0
The solution takes the form:

y = a_0 y_1 + a_1 y_2

Where a_0 and a_1 are arbitrary and depend on the initial conditions. y_1 and y_2 are power series. The
solution is analytic at x = x_0, with radius of convergence R(y) ≥ min(R(p), R(q)).

To solve for y, write y and its derivatives as Taylor series. Then solve for the coefficients. Change of
index and the above derived rules for operations on power series will be useful. Find a relation (explicit
or recursive) for the coefficients of y_1 in terms of a_0 and do the same for y_2 in terms of a_1.

9.7 Non-Homogeneous Linear with Function Coefficients: General solution
To find the general solution to a second order linear differential equation

y ′′ + p(x)y ′ + q(x)y = r(x)


Consider the corresponding homogeneous equation:

y ′′ + p(x)y ′ + q(x)y = 0
Solve the above using series solutions to find that:

y = a0 y1 + a1 y2
General solution of the non-homogeneous equation will be of the form,

y = a0 y1 + a1 y2 + Y
Let Y be of the form:

Y = u1 (x)y1 + u2 (x)y2
It can be shown that:

u_1(x) = -\int \frac{y_2(x)r(x)}{W(y_1, y_2)(x)} dx

u_2(x) = \int \frac{y_1(x)r(x)}{W(y_1, y_2)(x)} dx
Hence, the solution can be written as:

y = a_0 y_1 + a_1 y_2 - y_1 \int \frac{y_2(x)r(x)}{W(y_1, y_2)(x)} dx + y_2 \int \frac{y_1(x)r(x)}{W(y_1, y_2)(x)} dx
