
Engineering Mathematics I

Prof. Hyungbo Shim (심형보)

Department of Electrical and Computer Engineering, College of Engineering, Seoul National University

February 2015

Lesson 1: Introduction to matrices


 terminologies
 addition and scalar multiplication
 product of matrices
 transpose of a matrix

1
Matrix (행렬) & Vector (벡터)

3
Addition & scalar multiplication of matrices (vectors)

     
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad \begin{bmatrix} -1 & 0 \\ 2 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$
4

Rules for addition and scalar multiplication
For A, B, C ∈ Rm×n and c, k ∈ R,

A+B =B+A
(A + B) + C = A + (B + C)
A+0=A
A + (−A) = 0

and

c(A + B) = cA + cB
(c + k)A = cA + kA
c(kA) = (ck)A
1A = A

5
Matrix multiplication

$$\begin{bmatrix} 1 & 2 & 1 \\ 3 & 4 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \\ 2 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} = $$

7
Rules for matrix multiplication
For A, B, C ∈ Rm×n and k ∈ R,

(kA)B = k(AB) = A(kB)


A(BC) = (AB)C
(A + B)C = AC + BC
C(A + B) = CA + CB

Transposition

(A^T)^T = A
(A + B)^T = A^T + B^T
(cA)^T = c A^T
(AB)^T = B^T A^T
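As a quick check of the product example and the rule (AB)^T = B^T A^T, here is a minimal numpy sketch (Python/numpy is an assumption here; the course itself practices with MATLAB):

import numpy as np

# The 2x3 and 3x3 matrices from the multiplication example above
A = np.array([[1, 2, 1],
              [3, 4, 1]])
B = np.array([[-1, 0, 1],
              [ 2, 0, 1],
              [ 1, 1, 0]])

C = A @ B                               # matrix product, a 2x3 matrix
print(C)                                # [[4 1 3]
                                        #  [6 1 7]]
assert np.array_equal(C.T, B.T @ A.T)   # transposition rule (AB)^T = B^T A^T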
9
Example: land-use change

10

Example: rotation transformation

11
Lesson 2: System of linear equations, Gauss elimination
 existence and uniqueness of solution
 elementary row operation
 Gauss elimination, pivoting
 echelon form

System of linear equations & solution

$$\begin{aligned} a_{11}x_1 + \cdots + a_{1n}x_n &= b_1 \\ a_{21}x_1 + \cdots + a_{2n}x_n &= b_2 \\ &\;\;\vdots \\ a_{m1}x_1 + \cdots + a_{mn}x_n &= b_m \end{aligned}$$

2
Existence and uniqueness of solution (해의 존재성과 유일성)

4
How to find the solution

$$x_1 - x_2 + x_3 = 0, \qquad 10x_2 + 25x_3 = 90, \qquad -95x_3 = -190$$

$$\begin{aligned} 2x_1 + 5x_2 &= 2 \\ -4x_1 + 3x_2 &= -30 \end{aligned} \qquad\qquad \begin{bmatrix} 2 & 5 & 2 \\ -4 & 3 & -30 \end{bmatrix}$$

1. Swap two equations / Swap two rows
2. Add one equation to another / Add one row to another
3. Multiply an equation by a nonzero constant / Multiply a row by a nonzero constant
4. Add a constant multiple of one equation to another / Add a constant multiple of one row to another

6
Gauss elimination

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3 \end{aligned}$$

Gauss elimination (partial pivoting)

x1 − x2 + x3 = 0
2x1 − 2x2 + 2x3 = 0
10x2 + 25x3 = 90
20x1 + 10x2 = 80
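The elimination steps illustrated above can be mechanized. Below is a minimal Python/numpy sketch of forward elimination with partial pivoting, applied to the 4-equation example; it is only an illustration (back substitution is omitted), not the textbook's exact algorithm:

import numpy as np

def forward_eliminate(A, b):
    """Reduce the augmented matrix [A | b] to row echelon form with partial pivoting."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    m, n = A.shape
    row = 0
    for col in range(n):
        p = row + np.argmax(np.abs(M[row:, col]))   # partial pivoting: largest |entry|
        if np.isclose(M[p, col], 0.0):
            continue                                # no pivot in this column
        M[[row, p]] = M[[p, row]]                   # swap rows
        for r in range(row + 1, m):                 # eliminate entries below the pivot
            M[r] -= (M[r, col] / M[row, col]) * M[row]
        row += 1
        if row == m:
            break
    return M

# x1 - x2 + x3 = 0, 2x1 - 2x2 + 2x3 = 0, 10x2 + 25x3 = 90, 20x1 + 10x2 = 80
A = np.array([[1, -1, 1], [2, -2, 2], [0, 10, 25], [20, 10, 0]])
b = np.array([0, 0, 90, 80])
print(forward_eliminate(A, b))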

8
Gauss elimination (the case of infinitely many solutions)
$$\begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0.6 & 1.5 & 1.5 & -5.4 & 2.7 \\ 1.2 & -0.3 & -0.3 & 2.4 & 2.1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0 & 1.1 & 1.1 & -4.4 & 1.1 \\ 0 & -1.1 & -1.1 & 4.4 & -1.1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0 & 1.1 & 1.1 & -4.4 & 1.1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

10
Gauss elimination (the case of no solution)

$$\begin{bmatrix} 3 & 2 & 1 & 3 \\ 2 & 1 & 1 & 0 \\ 6 & 2 & 4 & 6 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} & -2 \\ 0 & -2 & 2 & 0 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} & -2 \\ 0 & 0 & 0 & 12 \end{bmatrix}$$

11

Echelon form (계단 형태)

Gauss elimination: $[A\;\;b] \;\Rightarrow\; [R\;\;f]$

$$[R\;f] = \begin{bmatrix} r_{11} & r_{12} & \cdots & \cdots & \cdots & r_{1n} & f_1 \\ & r_{22} & \cdots & \cdots & \cdots & r_{2n} & f_2 \\ & & \ddots & & & \vdots & \vdots \\ & & & r_{rr} & \cdots & r_{rn} & f_r \\ & & & & & & f_{r+1} \\ & & & & & & \vdots \\ & & & & & & f_m \end{bmatrix}$$

12
Lesson 3: Rank of a matrix, Linear independence of vectors
 linear combination (of vectors)
 linear independence (of vectors)
 rank (of a matrix)
 practice using MATLAB

Linear combination (of vectors) & linear independence (of a set of vectors)

2
Example


a1 = [ 3   0   2   2 ]
a2 = [ -6  42  24  54 ]
a3 = [ 21  -21  0  -15 ]

Rank of a matrix
DEF: rank A = the maximum number of linearly independent row vectors of the matrix A

$$\begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$

4
Properties of ‘rank’
THM: Every matrix obtained from A by elementary row operations has the same rank as A.
(Rank is invariant under elementary row operations.)

$$\begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$

Properties of ‘rank’
THM: rank A is also equal to the maximum number of linearly independent column vectors of A.
(Hence rank A = rank A^T.)

6
7

Properties of ‘rank’
 For A ∈ Rm×n , rank A ≤ min{m, n}.
 For v1 , · · · , vp ∈ Rn , if n < p, then they are linearly dependent.
 Let A = [v1 , v2 , . . . , vp ] where vi ∈ Rn .
If rank A = p, then they are linearly independent.
If rank A < p, then they are linearly dependent.

Ex:
$$\begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$$
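A minimal numpy check of this example (numpy is an assumption; MATLAB would work the same way): the three row vectors span only a two-dimensional space.

import numpy as np

A = np.array([[ 3,   0,  2,   2],
              [-6,  42, 24,  54],
              [21, -21,  0, -15]])

print(np.linalg.matrix_rank(A))   # 2, so the three rows are linearly dependent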

8
Practice with MATLAB
http://www.mathworks.com

Lesson 4: Vector space


 vector space (in Rn ), subspace
 basis, dimension
 column space, null space of a matrix
 existence and uniqueness of solutions
 vector space (in general)

1
Vector space

3
4

Solutions of a system of linear equations: existence and uniqueness

Ax = b with A ∈ Rm×n and b ∈ Rm


1. existence: a solution x exists iff
 b ∈ column space of A
 rank A = rank [A b]
2. uniqueness: when a solution x exists, it is the unique solution iff
 dim(null space of A) = 0
 rank A = n
3. existence & uniqueness: the solution x uniquely exists iff
 rank A = rank [A b] = n
4. existence for any b ∈ Rm : a solution x exists for any b ∈ Rm iff
 rank A = m
5. unique existence for any b ∈ Rm : the unique solution x exists for any b ∈ Rm iff
 rank A = m and rank A = n (i.e., A ∈ Rn×n has ‘full rank’)
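The rank conditions above translate directly into a small Python/numpy helper (a sketch, under the assumption that numerical rank computation is acceptable here):

import numpy as np

def classify(A, b):
    """Classify solvability of Ax = b using the rank conditions listed above."""
    n = A.shape[1]
    rank_A  = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_Ab:
        return "no solution"
    return "unique solution" if rank_A == n else "infinitely many solutions"

# The earlier Gauss-elimination example with infinitely many solutions:
A = np.array([[3.0, 2.0, 2.0, -5.0],
              [0.6, 1.5, 1.5, -5.4],
              [1.2, -0.3, -0.3, 2.4]])
b = np.array([8.0, 2.7, 2.1])
print(classify(A, b))   # infinitely many solutions (rank A = rank [A b] = 2 < n = 4)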

Ex: rank A = r < n ⇒


6

Homogeneous case

Ax = 0 A ∈ Rm×n

 non-trivial solution exists iff rank A = r < n


 If the number of equations is smaller than the number of unknowns, the system always has a non-trivial solution.

Q: Dimension of the ‘solution space’ =

7
Nonhomogeneous case

Ax = b ≠ 0,   A ∈ R^{m×n}

 Any solution x can be written as

x = x0 + xh

where x0 is a solution to Ax = b and xh is a solution to Ax = 0.

Vector space
: set of vectors with “addition” and “scalar multiplication”

For A, B, C ∈ V and c, k ∈ R,

A+B =B+A
(A + B) + C = A + (B + C)
A+0=A
A + (−A) = 0

and

c(A + B) = cA + cB
(c + k)A = cA + kA
c(kA) = (ck)A
1A = A
9
Examples of vector space

10

Normed space
: vector space with “norm”

ex: for v ∈ R^n, the norm is ‖v‖ = √(v1^2 + v2^2 + · · · + vn^2)

11
Inner product space
: vector space with “inner product”

1. (c1 A + c2 B, C) = c1 (A, C) + c2 (B, C)


2. (A, B) = (B, A)
3. (A, A) ≥ 0 and (A, A) = 0 iff A = 0

12

Lesson 5: Determinant of a matrix


 determinant (of a matrix)
 Cramer’s rule

1
Determinant (of a matrix)
For A ∈ Rn×n ,

$$\det A = |A| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} = $$

3
4

Elementary row operation & determinant


1. Swapping two rows reverses the sign of the determinant.
2. A matrix with two identical rows has determinant 0.
3. Adding a constant multiple of one row to another row does not change the determinant.
4. Multiplying a row by a nonzero constant c multiplies the determinant by c.
(This also holds for c = 0, but that case is not useful.)

5
6

Properties of ‘determinant’
 Properties 1-4 on the previous page hold equally for columns in place of rows.
 det A^T = det A
 If A has a zero row or a zero column, then det A = 0.
 If two rows or two columns are proportional, then det A = 0.

7
Properties of ‘determinant’
THM: A matrix A ∈ R^{m×n} has rank r (≥ 1) iff
 A has an r × r submatrix whose determinant is nonzero, and
 every submatrix of A larger than r × r (if any exists) has determinant zero.

Cramer’s rule

$$Ax = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} x = b, \qquad A \in \mathbb{R}^{n\times n}, \qquad \det A =: D \neq 0$$

Cramer's rule:
$$x_1 = \frac{D_1}{D}, \quad x_2 = \frac{D_2}{D}, \quad \cdots, \quad x_n = \frac{D_n}{D}$$
where
$$D_k = \det\begin{bmatrix} a_1 & \cdots & a_{k-1} & b & a_{k+1} & \cdots & a_n \end{bmatrix}$$

Ex:

2x − y = 1
3x + y = 2
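A small Python/numpy sketch of Cramer's rule applied to this example (for illustration only; Gauss elimination is preferable for anything but tiny systems):

import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_k = D_k / D."""
    D = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                      # replace the k-th column by b
        x[k] = np.linalg.det(Ak) / D
    return x

A = np.array([[2.0, -1.0],
              [3.0,  1.0]])
b = np.array([1.0, 2.0])
print(cramer(A, b))                       # [0.6 0.2], i.e. x = 3/5, y = 1/5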

9
10

Lesson 6: Inverse of a matrix


 inverse (of a matrix)
 Gauss-Jordan elimination (computing inverse)
 formula for the inverse
 properties of inverse and nonsingular matrices

1
Inverse of a matrix
 For A ∈ Rn×n , the inverse of A is a matrix B such that

AB = I and BA = I

and we denote B by A−1 .


 A^{-1} exists iff rank A = n, iff det A ≠ 0, iff A is ‘non-singular’

3
Computing the inverse: Gauss-Jordan elimination

$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}$$

$$[A\,|\,I] = \left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right]$$

$$[I\,|\,B] = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{3}{4} & \tfrac{1}{2} & \tfrac{1}{4} \\ 0 & 1 & 0 & \tfrac{1}{2} & 1 & \tfrac{1}{2} \\ 0 & 0 & 1 & \tfrac{1}{4} & \tfrac{1}{2} & \tfrac{3}{4} \end{array}\right]$$
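A quick numpy confirmation of the Gauss-Jordan result (numpy is an assumption; the point is only that AB = BA = I):

import numpy as np

A = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]], dtype=float)

B = np.linalg.inv(A)
print(B)                                  # [[0.75 0.5  0.25]
                                          #  [0.5  1.   0.5 ]
                                          #  [0.25 0.5  0.75]]
assert np.allclose(A @ B, np.eye(3))      # AB = I
assert np.allclose(B @ A, np.eye(3))      # BA = I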

5
A formula for the inverse
For A = [aij ] ∈ Rn×n ,

Properties about nonsingular matrix, inverse, and determinant


 Inverse of ‘diagonal matrix’ is easy.
 (AB)−1 = B −1 A−1
 (A−1 )−1 = A
 For A, B, C ∈ Rn×n , if A is nonsingular (i.e., rank A = n),
 AB = AC implies B = C.
 AB = 0 implies B = 0.
 For A, B ∈ Rn×n , if A is singular, then AB and BA are singular.
 det(AB) = det(BA) = det A det B

7
8

Lesson 7: Eigenvalues and eigenvectors


 eigenvalues and eigenvectors
 symmetric, skew-symmetric, and orthogonal matrices

1
Eigenvalue and eigenvector of a matrix

3
4

5
Find eigenvalues and eigenvectors of
$$A = \begin{bmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{bmatrix}.$$

−λ^3 − λ^2 + 21λ + 45 = 0

λ1 = 5, λ2 = λ3 = −3

$$A - 5I = \begin{bmatrix} -7 & 2 & -3 \\ 2 & -4 & -6 \\ -1 & -2 & -5 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} -7 & 2 & -3 \\ 0 & -\tfrac{24}{7} & -\tfrac{48}{7} \\ 0 & 0 & 0 \end{bmatrix}$$

$$A + 3I = \begin{bmatrix} 1 & 2 & -3 \\ 2 & 4 & -6 \\ -1 & -2 & 3 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 1 & 2 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
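The same eigenvalues and eigenvectors can be obtained numerically; a minimal numpy sketch (an assumption, useful for checking hand computations):

import numpy as np

A = np.array([[-2,  2, -3],
              [ 2,  1, -6],
              [-1, -2,  0]], dtype=float)

w, V = np.linalg.eig(A)
print(np.round(w, 6))                     # 5, -3, -3 (up to ordering)
for lam, v in zip(w, V.T):                # each column of V is an eigenvector
    assert np.allclose(A @ v, lam * v)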

7
Symmetric, skew-symmetric, and orthogonal matrices

9
10

Lesson 8: Similarity transformation, diagonalization, and quadratic form


 similarity transformation
 diagonalization
 quadratic form

1
Similarity transformation

When a matrix A ∈ R^{n×n} has n linearly independent eigenvectors...

3
When does a matrix A have n linearly independent eigenvectors? (1)

When does a matrix A have n linearly independent eigenvectors? (2)

 
$$A_1 = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix}, \;\; \lambda_1 = -1,\ \lambda_2 = -3 \qquad A_2 = \begin{bmatrix} 0 & 1 \\ -4 & -4 \end{bmatrix}, \;\; \lambda_1 = \lambda_2 = -2 \qquad A_3 = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}, \;\; \lambda_1 = \lambda_2 = -2$$
Diagonalization

Cases where diagonalization fails

7
Quadratic form

Q = 17x1^2 − 30x1x2 + 17x2^2 = 128
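Writing Q = x^T A x with the symmetric matrix A = [[17, -15], [-15, 17]], the principal axes come from the eigenvalues of A; a short numpy sketch (the ellipse interpretation follows from the eigenvalues 2 and 32):

import numpy as np

A = np.array([[ 17, -15],
              [-15,  17]], dtype=float)

w, V = np.linalg.eigh(A)        # eigen-decomposition of a symmetric matrix
print(w)                        # [ 2. 32.]
# In the rotated coordinates y = V^T x the form becomes 2 y1^2 + 32 y2^2 = 128,
# i.e. an ellipse with semi-axes 8 and 2 along the eigenvector directions.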

9
Topics not covered

Exercises in the textbook: trace, (induced) norm of a matrix, positive definite matrix, positive semi-definite matrix
Out of the scope: (generalized eigenvectors,) Jordan form

further study:

http://snuon.snu.ac.kr [Modern Control Techniques]
http://snui.snu.ac.kr [Modern Control Techniques]
http://lecture.cdsl.kr [Basics of Linear Algebra and Linear Systems]

10

Lesson 9: Introduction to differential equation


 function, limit, and differentiation
 differential equation, general and particular solutions
 direction field, solving DE by computer

1
Function, limit, and differentiation

3
Basic concepts and ideas

y'(x) + 2y(x) − 3 = 0
y'(x) = −27x + x^2
y'(t) = 2t
y''(x) + y'(x) + y(x) = 0
y''(x) y'(x) + sin(y(x)) + 2 = 0

y1'(x) + 2y2(x) + 3 = 0
y2'(x) + 2y1(x) + y2(x) = 2

2 ∂y/∂x (x, z) + 3 ∂y/∂z (x, z) − 2x = 0
* ODE (ordinary differential equation) / PDE (partial differential equation)
* Solving DE:

* Explicit/implicit solution
Why do we have to study DE?

General solution and particular solution

7
Direction fields (a geometric interpretation of y' = f(x, y))

An idea of solving DE by computer
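One simple way a computer can "follow the direction field" is Euler's method; a minimal Python sketch (the test equation y' = x + y, y(0) = 0 is an assumed example, not necessarily the one used in class):

import numpy as np

def euler(f, x0, y0, h, n):
    """Take n Euler steps y_{k+1} = y_k + h f(x_k, y_k) for y' = f(x, y)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return np.array(xs), np.array(ys)

# y' = x + y, y(0) = 0 has the exact solution y = e^x - x - 1
xs, ys = euler(lambda x, y: x + y, 0.0, 0.0, 0.1, 10)
print(ys[-1], np.exp(1.0) - 2.0)          # Euler estimate vs exact value at x = 1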

Lesson 10: Solving first order differential equations


 separable differential equations
 exact differential equations

1
Separable DE
f, g: continuous functions

g(y) y' = f(x)  ⇒  g(y) dy = f(x) dx

3
y' = g(y/x)

replacing ay + bx + k with v

(2x − 4y + 5) y' + (x − 2y + 3) = 0

5
Exact differential equation: introduction
(observation:) For u(x, y),

$$du = \frac{\partial u}{\partial x}(x, y)\,dx + \frac{\partial u}{\partial y}(x, y)\,dy \;:\; \text{the differential of } u.$$

So, if u(x, y) = c (constant), then du = 0.

Exact differential equation


Given DE: M(x, y) + N(x, y) dy/dx = 0

If ∃ a function u(x, y) s.t.

∂u/∂x (x, y) = M(x, y)  &  ∂u/∂y (x, y) = N(x, y)

then
u(x, y) = c
is a general sol. to the DE.

The DE is called “exact DE”.

7
How to check if the given DE is exact?

How to solve the exact DE?

9
10

Lesson 11: More on first order differential equations


 integrating factor
 linear differential equation
 Bernoulli equation
 obtaining orthogonal trajectories of curves
 existence and uniqueness of solutions to initial value problem

1
Integrating factor

P (x, y)dx + Q(x, y)dy = 0

(e^{x+y} − y e^y) dx + (x e^y − 1) dy = 0

3
Linear DE

y' + p(x) y = r(x)
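For a concrete instance, sympy's dsolve handles such linear equations symbolically; a sketch with an assumed example p(x) = 2, r(x) = e^{-x}:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(-x)), y(x))
print(sol)        # y(x) = C1*exp(-2*x) + exp(-x)  (up to how sympy groups the terms)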

5
Bernoulli DE

y' + p(x) y = g(x) y^a,   a ≠ 0, 1

Verhulst logistic model (population model):

y' = Ay − By^2,   A, B > 0
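A numerical look at the logistic model (the parameter values are assumptions chosen for illustration); every positive solution approaches the equilibrium A/B:

import numpy as np
from scipy.integrate import solve_ivp

A, B = 1.0, 0.25                          # assumed values; equilibrium at y = A/B = 4
sol = solve_ivp(lambda t, y: A*y - B*y**2, (0, 20), [0.1])
print(sol.y[0, -1])                       # close to 4, the carrying capacity A/B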

7
Orthogonal trajectories of curves

Existence of solutions to initial value problem

y' = f(x, y),   y(x0) = y0

THM 1: IF f (x, y) is continuous, and bounded such that


|f (x, y)| ≤ K, in the region

R = {(x, y) : |x − x0 | < a, |y − y0 | < b}

THEN the IVP has at least one sol. y(x) on the interval
|x − x0 | < α where α = min(a, b/K).
9
Uniqueness of solutions to initial value problem

y' = f(x, y),   y(x0) = y0

THM 2: IF f(x, y) and ∂f/∂y (x, y) are continuous and bounded, i.e. |f(x, y)| ≤ K and |∂f/∂y (x, y)| ≤ M in R,
THEN the IVP has a unique sol. y(x) on the interval |x − x0| < α where α = min(a, b/K).
10

Lesson 12: Solving the second order linear DE


 overview
 homogeneous linear DE
 reduction of order
 homogeneous linear DE with constant coefficients

1
Overview: Linear ODEs of second order

y'' + p(x)y' + g(x)y = r(x),   y(x0) = K0,   y'(x0) = K1


1. The homogeneous linear ODE:
y'' + p(x)y' + g(x)y = 0    (1)
has two “linearly independent” solutions y1 (x) and y2 (x).
2. Let yh (x) = c1 y1 (x) + c2 y2 (x) with two constant coefficients c1 and c2 , which is
again a solution to (1).
3. Solve
y'' + p(x)y' + g(x)y = r(x)    (2)
without considering the initial condition. Let the solution be yp (x).
4. The general solution is
y(x) = yh (x) + yp (x) = c1 y1 (x) + c2 y2 (x) + yp (x).

5. Determine c1 and c2 using the initial conditions.

Homogeneous linear ODEs of second order

y'' + p(x)y' + g(x)y = 0
Claim: Linear homogeneous ODE of the second order has two linearly independent
solutions.

3
4

How to obtain a basis if one sol. is known? (Reduction of order)


Obtaining another y2 (x) with a known y1 (x)

5
Homogeneous linear ODEs with constant coefficients

7
8

9
10
Lesson 13: The second order linear DE
 case study: free oscillation
 Euler-Cauchy equation
 existence and uniqueness of a solution to IVP
 Wronskian and linear independence of solutions

Modeling: Free oscillation

2
3

Euler-Cauchy equation

x^2 y'' + a x y' + b y = 0

4
5

Existence and uniqueness of a solution to IVP

y'' + p(x)y' + q(x)y = 0,   y(x0) = K0,   y'(x0) = K1

THM: IF p(x) and q(x) are continuous (on an open interval I ∋ x0),


THEN ∃ a unique sol. y(x) (on the interval I).

6
Wronskian and linear independence of solutions
With y1(x) and y2(x) being the solutions of

y'' + p(x)y' + q(x)y = 0,

the Wronski determinant (Wronskian) of y1 and y2 is defined by

$$W(y_1, y_2) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1 y_2' - y_2 y_1'$$

THM:
1. Two sol. y1, y2 are linearly indep. on I ⇔ W(y1(x), y2(x)) ≠ 0 at some x* ∈ I.
2. If W(y1(x), y2(x)) = 0 at some x* ∈ I, then W(y1(x), y2(x)) ≡ 0 on I.
3. If W(y1(x), y2(x)) ≠ 0 at some x* ∈ I, then y1 and y2 are linearly indep. on I.

8
y'' + p(x)y' + q(x)y = 0 has two indep. sol. y1 and y2,
so it has a general sol. y(x) = c1 y1(x) + c2 y2(x).

Any sol. to y'' + p(x)y' + q(x)y = 0 has the form c1 y1(x) + c2 y2(x).

10
Lesson 14: Second order nonhomogeneous linear DE
 nonhomogeneous linear DE
 solution by undetermined coefficient method
 solution by variation-of-parameter formula

Nonhomogeneous linear DE

y'' + p(x)y' + q(x)y = r(x)

2
3

Candidate for yp(x) in y'' + p(x)y' + q(x)y = r(x)

The above rules are applied for each term r(x).


If the candidate for yp (x) happens to be a sol. of the
homogeneous equation, then multiply yp (x) by x (or by x2 if
this sol. corresponds to a double root of the characteristic
eq. of the homogeneous equation).
4
y'' + 4y = 8x^2

5
y'' + 2y' + y = e^{-x}

y'' + 2y' + 5y = 1.25e^{0.5x} + 40 cos 4x − 55 sin 4x

y'' + 2y' + 5y = 1.25e^{0.5x} + 40 cos 2x

y'' + 2y' + 5y = 1.25e^{0.5x} + 40e^{-x} cos 2x
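These examples can be checked symbolically; a sympy sketch for the first one, y'' + 4y = 8x^2 (sympy is an assumption here):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + 4*y(x), 8*x**2), y(x)))
# y(x) = C1*sin(2*x) + C2*cos(2*x) + 2*x**2 - 1, so y_p(x) = 2x^2 - 1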

Solution by variation of parameters

y'' + p(x)y' + q(x)y = r(x)
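The slide leaves the derivation for class; for reference, the standard second-order variation-of-parameters formula (the n = 2 case of the general formula in Lesson 15, with W = y1 y2' − y2 y1') is

$$y_p(x) = -\,y_1(x)\int \frac{y_2(x)\,r(x)}{W(y_1, y_2)}\,dx \;+\; y_2(x)\int \frac{y_1(x)\,r(x)}{W(y_1, y_2)}\,dx.$$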

8
9

Lesson 15: Higher order linear DE


 higher order homogeneous linear DE
 higher order homogeneous linear DE with constant coefficients
 higher order nonhomogeneous linear DE

1
Higher order homogeneous linear DE

y^(n) + p_{n-1}(x) y^(n-1) + · · · + p_1(x) y' + p_0(x) y = 0

General sol.: y(x) = c1 y1 (x) + c2 y2 (x) + · · · + cn yn (x)


where yi (x)’s are linearly indep.

y^(n) + p_{n-1}(x) y^(n-1) + · · · + p_1(x) y' + p_0(x) y = 0,   y^(i)(x0) = Ki

THM: If all pi ’s are conti. (on I), then IVP has a unique sol. (on I).

THM: With all pi ’s being conti.,

sol. {y1, · · · , yn} are lin. dep. on I

$$\Leftrightarrow\; W(y_1, \cdots, y_n) = \begin{vmatrix} y_1 & \cdots & y_n \\ y_1' & \cdots & y_n' \\ \vdots & & \vdots \\ y_1^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} = 0 \;\text{ at some } x_0 \in I$$

⇔ W(y1, · · · , yn) ≡ 0 on I

3
y'''' − 5y'' + 4y = 0

THM: With all pi ’s being conti., the (H) has n lin. indep. sol. (i.e., there is a general
solution).

THM: With all pi ’s being conti., the general sol. includes all solutions.

5
Higher order homogeneous linear DE with constant coefficients

y^(n) + a_{n-1} y^(n-1) + · · · + a_1 y' + a_0 y = 0

* distinct roots

* multiple roots

7
Higher order nonhomogeneous linear DE

y^(n) + p_{n-1}(x) y^(n-1) + · · · + p_1(x) y' + p_0(x) y = r(x)

* undetermined coefficient method:

* variation-of-parameter formula:
  
$$y_p(x) = y_1 \int \frac{W_1\, r}{W}\,dx + y_2 \int \frac{W_2\, r}{W}\,dx + \cdots + y_n \int \frac{W_n\, r}{W}\,dx$$

where W = W(y1, · · · , yn) and W_j is W with its j-th column replaced by [0, · · · , 0, 1]^T.

Lesson 16: Case studies


 mass-spring-damper system: forced oscillation
 RLC circuit
 elastic beam

1
Case study: forced oscillation (my'' + cy' + ky = r)

$$y_p(t) = F_0\,\frac{m(\omega_0^2 - \omega^2)}{m^2(\omega_0^2 - \omega^2)^2 + c^2\omega^2}\cos\omega t + F_0\,\frac{c\omega}{m^2(\omega_0^2 - \omega^2)^2 + c^2\omega^2}\sin\omega t, \qquad y(t) = y_h(t) + y_p(t)$$

3
$$y(t) = y_h(t) + F_0\,\frac{m(\omega_0^2 - \omega^2)}{m^2(\omega_0^2 - \omega^2)^2 + c^2\omega^2}\cos\omega t + F_0\,\frac{c\omega}{m^2(\omega_0^2 - \omega^2)^2 + c^2\omega^2}\sin\omega t$$
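A short scipy sketch of this case study (all parameter values are assumptions); after the transient y_h dies out, the response oscillates at the forcing frequency with the steady-state amplitude given by the formula above:

import numpy as np
from scipy.integrate import solve_ivp

m, c, k, F0, w = 1.0, 0.5, 4.0, 1.0, 1.5    # assumed values
w0 = np.sqrt(k / m)

def rhs(t, y):                              # state y = [y, y']
    return [y[1], (F0*np.cos(w*t) - c*y[1] - k*y[0]) / m]

sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], max_step=0.05)
amp = F0 / np.sqrt(m**2*(w0**2 - w**2)**2 + c**2*w**2)
print(amp, np.max(np.abs(sol.y[0][-400:])))   # formula vs simulated steady-state amplitude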

Modeling: RLC circuit

5
RLC circuit: forced response

Elastic beam

7
Lesson 17: Systems of ODEs
 introduction
 existence and uniqueness of solutions to IVP
 linear homogeneous case
 linear homogeneous constant coefficient case

Systems of ODE

2
Existence and uniqueness of solutions to IVP
$$y' = f(t, y), \qquad y(t_0) = \begin{bmatrix} k_1 \\ \vdots \\ k_n \end{bmatrix}$$

THM: If all f_i(t, y) and ∂f_i/∂y_j (t, y) are conti. on some region of (t, y1, y2, · · · , yn)-space containing (t0, k1, · · · , kn), then a sol. y(t) exists and is unique in some local interval of t around t0.

$$y' = A(t)y + g(t), \qquad y(t_0) = \begin{bmatrix} k_1 \\ \vdots \\ k_n \end{bmatrix}$$
THM: If A(t) and g(t) are conti. on an interval I, then a sol. y(t) exists and is unique
on the interval I.
3

Linear homogeneous case

y' = A(t)y

General sol.: y(t) = c1 y^(1)(t) + c2 y^(2)(t) + · · · + cn y^(n)(t),
where the y^(i)(t)'s are lin. indep. sol.

4
Linear homogeneous constant coefficient case

y' = Ay
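For constant A the general solution can be built from eigenvalues/eigenvectors, or equivalently from the matrix exponential; a numpy/scipy sketch reusing the matrix A1 from Lesson 8 (reusing that example here is my choice, not the slide's):

import numpy as np
from scipy.linalg import expm

A  = np.array([[0.0, 1.0], [-3.0, -4.0]])   # eigenvalues -1 and -3
y0 = np.array([1.0, 0.0])
t  = 2.0

w, V = np.linalg.eig(A)
c = np.linalg.solve(V, y0)                  # coefficients from y(0) = V c
y_eig = V @ (c * np.exp(w * t))             # y(t) = sum_i c_i e^{lambda_i t} v_i
assert np.allclose(y_eig, expm(A * t) @ y0) # same as y(t) = e^{At} y(0)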

6
7

8
9

Handling complex e.v/e.vectors

10
Lesson 18: Qualitative properties of systems of ODE
 phase plane and phase portrait
 critical points
 types and stability of critical points

Phase plane and phase portrait

2
Critical point (= equilibrium)

Example: undamped pendulum

4
Types of critical points: node

Types of critical points: saddle / center

6
Types of critical points: spiral / degenerate node

Stability
DEF: stability of a critical point P0:
all trajectories of y' = f(y) whose initial condition y(t0) is sufficiently close to P0 remain close to P0 for all future time;
that is, for each ε > 0, there is δ > 0 such that

|y(t0) − P0| < δ  ⇒  |y(t) − P0| < ε,   ∀t ≥ t0

DEF: asymptotic stability of P0 = stability + attractivity (lim_{t→∞} y(t) = P0)

8
Example: second order system

Lesson 19: Linearization and nonhomogeneous linear systems of ODE


 linearization
 nonhomogeneous case

1
Linearization

y' = f(y)
Let y = 0 be a critical point (without loss of generality; WLOG), and be isolated.

$$y_1' = f_1(y_1, y_2) = f_1(0,0) + \frac{\partial f_1}{\partial y_1}(0,0)\,y_1 + \frac{\partial f_1}{\partial y_2}(0,0)\,y_2 + h_1(y_1, y_2)$$
$$y_2' = f_2(y_1, y_2) = f_2(0,0) + \frac{\partial f_2}{\partial y_1}(0,0)\,y_1 + \frac{\partial f_2}{\partial y_2}(0,0)\,y_2 + h_2(y_1, y_2)$$

$$y' = f(y) \;\Rightarrow\; y' = Ay = \left.\frac{\partial f}{\partial y}\right|_{y=0} y$$

 If no e.v. of A lies in the imaginary axis, then stability of the critical point of the
nonlinear system is determined by A.
 If Re(λ) < 0 for all λ, it is asymptotically stable.
 If Re(λ) > 0 for at least one λ, it is unstable.
 If all e.v.’s are distinct and no e.v. of A lies in the imaginary axis, then the type of
the critical point of the nonlinear system is determined by A.
 The node, saddle, and spiral are preserved, but center may not be preserved.
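As a worked instance (my choice of example, matching the undamped pendulum from Lesson 18 in normalized form y1' = y2, y2' = −sin y1), the Jacobian at the origin can be computed with sympy:

import sympy as sp

y1, y2 = sp.symbols('y1 y2')
f = sp.Matrix([y2, -sp.sin(y1)])            # normalized undamped pendulum

J = f.jacobian([y1, y2])
A = J.subs({y1: 0, y2: 0})                  # linearization at the critical point (0, 0)
print(A, A.eigenvals())                     # Matrix([[0, 1], [-1, 0]]), eigenvalues ±i
# Purely imaginary eigenvalues: the linearization is a center, which by the
# remarks above cannot be carried over to the nonlinear system automatically.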

3
Nonhomogeneous linear case

Method of undetermined coefficients (for time-invariant case)

5
6

Method of variation of parameters (for time-varying case)

7
Method of diagonalization (for time-invariant case)

Lesson 20: Series solutions of ODE


 power series method
 Legendre equation

1
Power series


$$\sum_{m=0}^{\infty} a_m (x - x_0)^m = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + \cdots$$

3


$$\sum_{m=0}^{\infty} a_m (x - x_0)^m = \underbrace{a_0 + a_1(x - x_0) + \cdots + a_n(x - x_0)^n}_{S_n(x)} + \underbrace{a_{n+1}(x - x_0)^{n+1} + \cdots}_{R_n(x)}$$

For a given x1,
if lim_{n→∞} S_n(x1) exists (or, lim_{n→∞} R_n(x1) = 0,
or for any ε > 0, ∃ N(ε) s.t. |R_n(x1)| < ε for all n > N(ε)),
then the series is called "convergent at x = x1" and we write S(x1) = lim_{n→∞} S_n(x1).

Radius of convergence
If
$$R = \frac{1}{\lim_{m\to\infty} \sqrt[m]{|a_m|}} \qquad \text{or} \qquad R = \frac{1}{\lim_{m\to\infty} \left|\dfrac{a_{m+1}}{a_m}\right|}$$
is well-defined, then the series is convergent for x s.t. |x − x0| < R.

5
6

Power series method

y''(x) + p(x) y'(x) + q(x) y(x) = r(x)

If p, q, and r are analytic at x = x0,

then there exists a power series solution around x0 (i.e., R > 0):

$$y(x) = \sum_{m=0}^{\infty} a_m (x - x_0)^m.$$

7
8

Legendre equation

(1 − x^2) y'' − 2x y' + n(n + 1) y = 0,   n : real number
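For nonnegative integer n the polynomial solutions are the Legendre polynomials P_n(x); they are available in scipy/numpy, e.g. a small check that two library routes agree with P_3(x) = (5x^3 − 3x)/2:

import numpy as np
from numpy.polynomial import legendre
from scipy.special import eval_legendre

x = np.linspace(-1, 1, 5)
print(eval_legendre(3, x))                  # P_3 evaluated on a grid
print(legendre.legval(x, [0, 0, 0, 1]))     # same values via the Legendre basis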

9
10

11
Legendre polynomial (of degree n)

12 "Legendrepolynomials6" by Geek3

Lesson 21: Frobenius method


 Frobenius method
 Euler-Cauchy equation revisited

1
Frobenius method
The DE
$$y'' + \frac{b(x)}{x}\,y' + \frac{c(x)}{x^2}\,y = 0,$$
where b and c are analytic at x = 0, has at least one sol. around x = 0 of the form
$$y(x) = x^r \sum_{m=0}^{\infty} a_m x^m = x^r (a_0 + a_1 x + a_2 x^2 + \cdots).$$

 Case 1: distinct roots, not differing by an integer

 Case 2: double roots


 Case 3: distinct roots differing by an integer
3
General sol.: y(x) = c1 y1(x) + c2 y2(x) where
 Case 1:
y1(x) = x^{r1} (a0 + a1 x + · · · )
y2(x) = x^{r2} (A0 + A1 x + · · · )
 Case 2: r = (1 − b0)/2
y1(x) = x^r (a0 + a1 x + · · · )
y2(x) = y1(x) ln x + x^r (A1 x + A2 x^2 + · · · )
 Case 3: r1 > r2
y1(x) = x^{r1} (a0 + a1 x + · · · )
y2(x) = k y1(x) ln x + x^{r2} (A0 + A1 x + · · · )

5
6

7
8

Example: Euler-Cauchy equation revisited

9
10

Lesson 22: Bessel DE and Bessel functions


 example for Frobenius method
 Bessel DE and its solutions

1
Example: a simple hypergeometric equation

x(x − 1) y'' + (3x − 1) y' + y = 0

3
4

Example: another simple hypergeometric equation

x(x − 1) y'' − x y' + y = 0

5
6

Gamma function
$$\Gamma(\nu) := \int_0^\infty e^{-t}\, t^{\nu - 1}\,dt$$

has the properties:


1. Γ(ν + 1) = νΓ(ν)
2. Γ(1) = 1
3. Γ(n + 1) = n!

7 "Gamma plot" by Alessio Damato


Bessel’s DE

x^2 y'' + x y' + (x^2 − ν^2) y = 0,   ν ≥ 0

9
Computing y1 (x)

10

Bessel function of the first kind of order n



$$J_n(x) = x^n \sum_{m=0}^{\infty} \frac{(-1)^m\, x^{2m}}{2^{2m+n}\, m!\, (n+m)!}$$
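A quick scipy check that the series above matches the library Bessel function (the truncation at 20 terms is an arbitrary choice):

from math import factorial
from scipy.special import jv

n, x = 1, 1.5
series = sum((-1)**m * x**(2*m + n) / (2**(2*m + n) * factorial(m) * factorial(n + m))
             for m in range(20))
print(series, jv(n, x))                     # both approximately 0.5579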

11 "Bessel Functions" by Inductiveload


Finding y2 (x)

12

13
14

Bessel function of the second kind of order ν

$$Y_\nu(x) = \frac{1}{\sin \nu\pi}\,\bigl[\,J_\nu(x) \cos \nu\pi - J_{-\nu}(x)\,\bigr]$$
$$Y_n(x) = \lim_{\nu \to n} Y_\nu(x) = \cdots$$

15 "Bessel Functions" by Inductiveload


Lesson 23: Laplace transform I
 introduction to Laplace transform
 linearity, shifting property
 existence and uniqueness of Laplace transform
 computing inverse Laplace transform
 partial fraction expansion & Heaviside formula

Laplace transform
$$\mathcal{L}\{f\} = \int_0^\infty f(t)\, e^{-st}\, dt = F(s)$$

2
(Property) Linearity: L{af (t) + bg(t)} = aL{f (t)} + bL{g(t)}

(Property) s-shifting property: L{e^{at} f(t)} = F(s − a)

4
Transform table: f (t) ↔ F (s)

1       ↔ 1/s                         cos ωt        ↔ s/(s^2 + ω^2)
t       ↔ 1/s^2                       sin ωt        ↔ ω/(s^2 + ω^2)
t^2     ↔ 2!/s^3                      cosh at       ↔ s/(s^2 − a^2)
t^n     ↔ n!/s^{n+1}, n = integer     sinh at       ↔ a/(s^2 − a^2)
t^a     ↔ Γ(a+1)/s^{a+1}, a > 0       e^{at} cos ωt ↔ (s − a)/((s − a)^2 + ω^2)
e^{at}  ↔ 1/(s − a)                   e^{at} sin ωt ↔ ω/((s − a)^2 + ω^2)
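Individual table entries can be reproduced with sympy (an assumption; any CAS would do), e.g. the e^{at} cos ωt entry:

import sympy as sp

t, s, a, w = sp.symbols('t s a omega', positive=True)
F = sp.laplace_transform(sp.exp(a*t) * sp.cos(w*t), t, s, noconds=True)
print(sp.simplify(F))                       # (s - a)/((s - a)**2 + omega**2)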

Existence and uniqueness of Laplace transform


IF f (t) is piecewise continuous on every finite interval in {t : t ≥ 0}, and

|f (t)| ≤ M ekt , t≥0

with some M and k,


THEN L{f (t)} exists for all Re(s) > k.

6
7

Computing inverse Laplace transform

L−1 {F (s)} = f (t) =?

* Partial fraction expansion:

8
Finding coefficients in partial fraction expansion: Heaviside formula
$$Y(s) = \frac{s+1}{s^3 + s^2 - 6s} = \frac{A_1}{s} + \frac{A_2}{s+3} + \frac{A_3}{s-2}$$

$$Y(s) = \frac{s^3 - 4s^2 + 4}{s^2(s-2)(s-1)} = \frac{A_2}{s^2} + \frac{A_1}{s} + \frac{B}{s-2} + \frac{C}{s-1}$$

$$Y(s) = \cdots = \frac{A_3}{(s-1)^3} + \frac{A_2}{(s-1)^2} + \frac{A_1}{s-1} + \frac{B_2}{(s-2)^2} + \frac{B_1}{s-2}$$

$$Y(s) = \frac{20}{(s^2+4)(s^2+2s+2)} + \frac{s-3}{s^2+2s+2}$$
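The coefficients in these expansions can be found (or checked) with sympy's apart; a sketch for the first example:

import sympy as sp

s = sp.symbols('s')
Y = (s + 1) / (s**3 + s**2 - 6*s)
print(sp.apart(Y, s))
# 3/(10*(s - 2)) - 2/(15*(s + 3)) - 1/(6*s), i.e. A1 = -1/6, A2 = -2/15, A3 = 3/10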

10
Lesson 24: Laplace transform II
 transform of derivative and integral
 solving linear ODE
 unit step function and t-shifting property
 Dirac’s delta function (impulse)

(Property) Transform of differentiation: L{f'(t)} = sL{f(t)} − f(0)

2
3

(Property) Transform of integration: L{∫_0^t f(τ) dτ} = (1/s) F(s)

4
Solving IVP of linear ODEs with constant coefficients

y'' + ay' + by = r(t),   y(0) = K0,   y'(0) = K1

6
Unit step function (Heaviside function)

(Property) t-shifting property: L{f(t − a) u(t − a)} = e^{-as} F(s)

8
9

(Dirac’s) delta function


δ(t) is a (generalized) function such that
$$\delta(t) = \begin{cases} 0, & t \neq 0 \\ \infty, & t = 0 \end{cases} \qquad \text{and} \qquad \int_{-a}^{a} \delta(t)\,dt = 1 \;\text{ for any } a > 0$$

sifting property: $\int_0^\infty g(t)\,\delta(t-a)\,dt = g(a)$,   g: conti., a > 0
10
Lesson 25: Laplace transform III
 convolution
 impulse response
 differentiation and integration of transforms
 solving system of ODEs

(Property) Convolution: L^{-1}{F(s)G(s)} = f(t) ∗ g(t)

2
Properties of convolution:

f ∗g =g∗f
f ∗ (g1 + g2 ) = f ∗ g1 + f ∗ g2
(f ∗ g) ∗ v = f ∗ (g ∗ v)
f ∗ 0 = 0 ∗ f = 0, f ∗ 1 = f

Impulse response

4
5

6
(Property) Differentiation of transform: L{t f(t)} = −F'(s)

8
(Property) Integration of transform: L{f(t)/t} = ∫_s^∞ F(s̃) ds̃

Solving system of ODEs

10
