Formulas - MathSc1 (2024)

The document provides a comprehensive overview of mathematical formulas and concepts, including functions, polar and Cartesian coordinates, complex numbers, matrix algebra, differentiation, and Taylor series. It outlines key formulas for rational functions, hyperbolic functions, and the properties of matrices, as well as methods for solving systems of linear equations. Additionally, it covers differentiation rules and partial derivatives for functions of multiple variables.

Functions:

Polar coordinates (r, θ) ↔ Cartesian coordinates (x, y):

r = √(x² + y²)        x = r·cos(θ)
tan(θ) = y/x          y = r·sin(θ)
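The conversions above can be checked numerically; a minimal Python sketch (the names `to_polar` and `to_cartesian` are illustrative, not from the sheet):

```python
import math

def to_polar(x, y):
    # r = sqrt(x^2 + y^2); atan2 resolves the quadrant that
    # tan(theta) = y/x alone leaves ambiguous
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    # x = r*cos(theta), y = r*sin(theta)
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3.0, 4.0)
x, y = to_cartesian(r, theta)   # round-trips back to (3, 4)
```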

Rational functions: f(x) = p(x)/q(x), where p and q are polynomials

Strictly proper rational function: 𝑑𝑒𝑔𝑟𝑒𝑒(𝑝) < 𝑑𝑒𝑔𝑟𝑒𝑒(𝑞)

Improper rational function: degree(p) ≥ degree(q)

Factorization of polynomial: f(x) = A·(x − a)·(x − b)·(x − c)²·(x² + dx + e)

where (x − a) and (x − b) are distinct linear factors, (x − c)² is a repeated linear factor, and (x² + dx + e) is an irreducible quadratic factor

Hyperbolic functions:

cosh(x) = ½·(e^x + e^(−x))

sinh(x) = ½·(e^x − e^(−x))

tanh(x) = sinh(x)/cosh(x)
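The three definitions can be checked against Python's built-in hyperbolic functions; a minimal sketch with an arbitrary test point:

```python
import math

x = 0.7
# definitions from the sheet, written out with the exponential function
cosh_x = 0.5 * (math.exp(x) + math.exp(-x))
sinh_x = 0.5 * (math.exp(x) - math.exp(-x))
tanh_x = sinh_x / cosh_x
```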

Parametric representation: curve given by { x(t), y(t) }, t ∈ ℝ

Implicit function: relation between x and y of the form f(x, y) = 0

Complex Numbers:

Cartesian form of 𝑧 ∈ ℂ 𝑧 = 𝑥 + 𝑦𝑗 𝑥, 𝑦 ∈ ℝ

Imaginary unit, 𝑗 𝑗 2 = −1

Real part 𝑥 = 𝑅𝑒(𝑧)


Imaginary part 𝑦 = 𝐼𝑚(𝑧)

Modulus |𝑧| = √𝑥 2 + 𝑦 2

Argument arg(z) = θ, −π < arg(z) ≤ π

Polar form 𝑧 = 𝑟 ∙ (𝑐𝑜𝑠𝜃 + 𝑠𝑖𝑛𝜃 ∙ 𝑗) or: 𝑧 = 𝑟∠𝜃 where 𝑟 = |𝑧|

Multiplication in polar form: z₁·z₂ = r₁r₂·(cos(θ₁ + θ₂) + sin(θ₁ + θ₂)·j)

Division in polar form: z₁/z₂ = (r₁/r₂)·(cos(θ₁ − θ₂) + sin(θ₁ − θ₂)·j)

Euler’s Formula 𝑒 𝑗𝜃 = 𝑐𝑜𝑠𝜃 + 𝑠𝑖𝑛𝜃 · 𝑗

Exponential form 𝑧 = 𝑟 · 𝑒 𝑗𝜃

sin/cos/hyperbolic functions cosh(𝑗𝑥) = cos(𝑥) sinh(𝑗𝑥) = 𝑗 sin (𝑥)

tanh(𝑗𝑥) = 𝑗 tan(𝑥) cos(𝑗𝑥) = cosh(𝑥) sin(𝑗𝑥) = 𝑗 sinh (𝑥)

cos(θ) = (e^(jθ) + e^(−jθ))/2        sin(θ) = (e^(jθ) − e^(−jθ))/(2j)
Logarithm of complex number ln(𝑧) = ln(|𝑧|) + arg(𝑧) · 𝑗

Power of a complex number 𝑧 𝑛 = 𝑟 𝑛 · [cos(𝑛𝜃) + sin(𝑛𝜃) · 𝑗] (De Moivre’s Theorem)

Root of a complex number


z^(1/n) = r^(1/n)·[cos(θ/n + 2πk/n) + sin(θ/n + 2πk/n)·j],  k = 0, 1, 2, …, n − 1

z^(1/n) = r^(1/n)·e^((θ/n + 2πk/n)·j)
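The root formula can be tried out with Python's `cmath` module; a small sketch (the function name `nth_roots` is illustrative):

```python
import cmath
import math

def nth_roots(z, n):
    # z^(1/n) = r^(1/n) * e^((theta/n + 2*pi*k/n)*j), k = 0..n-1
    r, theta = cmath.polar(z)            # r = |z|, theta = arg(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (theta / n + 2 * math.pi * k / n))
            for k in range(n)]

roots = nth_roots(8 + 0j, 3)             # the three cube roots of 8
```

Each root, raised to the nth power, recovers the original number.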

Matrix algebra:
Matrix A of order (m,p) and matrix B of order (p,n)

Multiplication: AB = C where c_ij = Σ_(k=1..p) a_ik·b_kj for i = 1, …, m and j = 1, …, n

Determinant for (2×2)-matrix:

A = [a₁₁ a₁₂; a₂₁ a₂₂] ⇒ det(A) = a₁₁·a₂₂ − a₂₁·a₁₂

Co-factors:

Sign pattern: [+ − + ⋯; − + − ⋯; + − + ⋯; ⋯]

The minor (the value): determinant of the remaining matrix when removing row i and column j

Co-factor element A_ij: combine sign pattern and the minor

Determinant for (n×n)-matrix: det(A) = Σ_j a_ij·A_ij (for any row i) = Σ_i a_ij·A_ij (for any column j)

Transpose matrix 𝑨𝑇 Rows of matrix A are turned into columns (or vice versa)

Adjoint matrix 𝑎𝑑𝑗 𝑨 = (𝑨𝑐𝑜 )𝑇

Inverse matrix: A⁻¹ = (1/|A|)·(A_co)ᵀ and A·A⁻¹ = A⁻¹·A = I

Inverse matrix (order (2×2)): A = [a b; c d] ⇒ A⁻¹ = 1/(ad − bc)·[d −b; −c a]
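The (2×2) inverse formula is short enough to implement directly; a minimal sketch (the name `inv2` is illustrative):

```python
def inv2(a, b, c, d):
    # A = [[a, b], [c, d]]  ->  A^-1 = 1/(ad - bc) * [[d, -b], [-c, a]]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

Ainv = inv2(1.0, 2.0, 3.0, 4.0)   # det = -2
```

Multiplying A by the result recovers the identity matrix.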

System of linear equations:


𝑨𝒙=𝒃

(a) Non-homogenous and non-singular 𝒃 ≠ 𝟎 ∧ |𝑨| ≠ 0 ⇒ 𝒙 = 𝑨−1 𝒃 (unique solution)

(b) Homogenous and non-singular 𝒃 = 𝟎 ∧ |𝑨| ≠ 0 ⇒ 𝒙 = 𝑨−1 𝟎 = 𝟎 (trivial solution)

(c) Non-homogenous and singular 𝒃 ≠ 𝟎 ∧ |𝑨| = 0 ⇒ no solutions or infinitely many solutions

(d) Homogenous and singular 𝒃 = 𝟎 ∧ |𝑨| = 0 ⇒ infinitely many solutions

Characteristic polynomial 𝑐(𝜆) = |𝜆𝑰 − 𝑨|

Characteristic equation 𝑐(𝜆) = 0

Eigenvalues 𝜆: 𝑨 𝒙 = 𝜆 𝒙 𝜆1 , 𝜆2 , 𝜆3 … that are solutions to 𝑐(𝜆) = 0
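For a (2×2) matrix the characteristic polynomial reduces to λ² − (a + d)·λ + (ad − bc), which the quadratic formula solves directly; a minimal sketch assuming real eigenvalues (the name `eig2` is illustrative):

```python
import math

def eig2(a, b, c, d):
    # c(lambda) = lambda^2 - (a + d)*lambda + (a*d - b*c) for A = [[a, b], [c, d]];
    # sketch assumes a non-negative discriminant (real eigenvalues)
    tr = a + d
    det = a * d - b * c
    s = math.sqrt(tr * tr - 4 * det)
    return (tr + s) / 2, (tr - s) / 2

lam1, lam2 = eig2(2.0, 1.0, 1.0, 2.0)   # symmetric matrix: eigenvalues 3 and 1
```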

Differentiation:
Quotient rule: (f(x)/g(x))′ = (f′(x)·g(x) − f(x)·g′(x)) / (g(x))²   or   d/dx(u/v) = (v·du/dx − u·dv/dx) / v²


Composite rule (Chain rule): (f(g(x)))′ = f′(g(x))·g′(x)   or   dy/dx = (dy/dz)·(dz/dx)

Inverse rule: (f⁻¹(x))′ = 1/f′(y)   or   dy/dx = 1/(dx/dy)

2nd order derivative: f″(x) = d²f/dx² = f⁽²⁾(x)

For parametric representation: dy/dx = (dy/dt)/(dx/dt)   and   d²y/dx² = [d/dt(dy/dx)] / (dx/dt)
For implicit function f(x, y) = 0: dy/dx = −(∂f/∂x)/(∂f/∂y)

Taylor polynomials:
of nth degree about x = a:

p_n(x) = f(a) + (f′(a)/1!)·(x − a) + (f″(a)/2!)·(x − a)² + (f‴(a)/3!)·(x − a)³ + ⋯ + (f⁽ⁿ⁾(a)/n!)·(x − a)ⁿ

Note: p_n(x) ≈ f(x) when x ≈ a
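The polynomial can be evaluated from a list of derivative values; a minimal sketch using f(x) = e^x about a = 0, where every derivative equals 1 (the name `taylor_poly` is illustrative):

```python
import math

def taylor_poly(derivs_at_a, a, x):
    # p_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k,
    # given the derivative values [f(a), f'(a), ..., f^(n)(a)]
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# f(x) = e^x about a = 0: every derivative is 1 there
p = taylor_poly([1.0] * 11, 0.0, 0.5)   # 10th-degree polynomial at x = 0.5
```

Close to the expansion point the polynomial matches e^0.5 to many digits, illustrating the note above.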

Taylor series expansion, x ≈ a: f(x + a) = f(a) + (f′(a)/1!)·x + (f″(a)/2!)·x² + ⋯ = Σ_(n=0..∞) (f⁽ⁿ⁾(a)/n!)·xⁿ


Maclaurin series expansion, x ≈ 0: f(x) = f(0) + (f′(0)/1!)·x + (f″(0)/2!)·x² + ⋯ = Σ_(n=0..∞) (f⁽ⁿ⁾(0)/n!)·xⁿ

Functions of 2 or more variables: f(x, y) or f(x, y, z) etc.

Partial derivatives: ∂f/∂x = [df/dx] with y = constant,  ∂f/∂y = [df/dy] with x = constant

∂f/∂x = f₁′(x, y) ~ slope on the surface in x-axis direction

∂f/∂y = f₂′(x, y) ~ slope on the surface in y-axis direction

Directional derivative (slope in any direction α): m_α(x, y) = (∂f/∂x)·cos(α) + (∂f/∂y)·sin(α)

Chain rule: z = f(x, y) with x(s, t) and y(s, t) ⇒ z = F(s, t)

Then: ∂z/∂s = (∂z/∂x)·(∂x/∂s) + (∂z/∂y)·(∂y/∂s)   and   ∂z/∂t = (∂z/∂x)·(∂x/∂t) + (∂z/∂y)·(∂y/∂t)

2nd order partial derivatives:

∂²f/∂x² = ∂/∂x(∂f/∂x) ~ info about curvature on surface in x-axis direction

∂²f/∂y² = ∂/∂y(∂f/∂y) ~ info about curvature on surface in y-axis direction

∂²f/∂x∂y = ∂/∂x(∂f/∂y) ~ increase in slope in y-direction when moving in x-axis direction

∂²f/∂y∂x = ∂/∂y(∂f/∂x) ~ increase in slope in x-direction when moving in y-axis direction

∂²f/∂y∂x = ∂²f/∂x∂y ~ the cross partials are equal (for all relevant functions)

The differential (2 variables): u = f(x, y): du = (∂f/∂x)·Δx + (∂f/∂y)·Δy   or   du = (∂f/∂x)·dx + (∂f/∂y)·dy

Δu ≈ (∂f/∂x)·Δx + (∂f/∂y)·Δy

Ordinary differential equations:

Separation of the variables ODE(1): g(x)·dx/dt = h(t) ⇒ ∫ g(x) dx = ∫ h(t) dt

ODE(2): dx/dt = f(x/t) ⇒ ∫ 1/(f(y) − y) dy = ∫ 1/t dt, with y = x/t

ODE on exact form ODE(3): p(t, x) + q(t, x)·dx/dt = 0 ⇒ h(t, x) = c

where ∂h/∂t = p(t, x) and ∂h/∂x = q(t, x)

Test for existence of the function h(t, x): ∂p/∂x = ∂q/∂t

First order linear ODE ODE(4): dx/dt + p(t)·x = r(t) ⇒ x(t) = e^(−k(t)) · ∫ e^(k(t))·r(t) dt

where k(t) = ∫ p(t) dt
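As a sanity check on the integrating-factor formula, a small worked example; the coefficients p(t) = 2 and r(t) = 4 are chosen purely for illustration:

```python
import math

# ODE(4) with p(t) = 2, r(t) = 4:  k(t) = ∫2 dt = 2t, so
# x(t) = e^(-2t) * ∫ e^(2t)*4 dt = e^(-2t) * (2*e^(2t) + C) = 2 + C*e^(-2t)
C = 3.0
x = lambda t: 2.0 + C * math.exp(-2.0 * t)

# verify that dx/dt + 2x = 4 holds, using a central finite difference
t0, eps = 0.5, 1e-6
dxdt = (x(t0 + eps) - x(t0 - eps)) / (2 * eps)
residual = dxdt + 2.0 * x(t0) - 4.0   # should be ~0 for any C
```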

Bernoulli ODE ODE(5): dx/dt + p(t)·x = q(t)·x^α

⇒ dy/dt + (1 − α)·p(t)·y = (1 − α)·q(t), where y = x^(1−α)

which is solved as an ODE(4)

Numerical solutions:

Euler's method for dx/dt = f(t, x):  t_(n+1) = t_n + h

X_(n+1) = X_n + h·f(t_n, X_n)

Runge-Kutta method (of 2nd order): X_(n+1) = X_n + h·f(t_n + ½h, X_n + ½h·f(t_n, X_n))
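Both stepping formulas can be compared on a test problem with a known solution; a minimal sketch using dx/dt = x, x(0) = 1, whose exact solution is e^t (the step-function names are illustrative):

```python
import math

def euler_step(f, t, x, h):
    # X_{n+1} = X_n + h * f(t_n, X_n)
    return x + h * f(t, x)

def rk2_step(f, t, x, h):
    # X_{n+1} = X_n + h * f(t_n + h/2, X_n + (h/2) * f(t_n, X_n))
    return x + h * f(t + h / 2, x + (h / 2) * f(t, x))

f = lambda t, x: x      # test problem dx/dt = x, exact solution x(t) = e^t
h, n = 0.01, 100        # integrate from t = 0 to t = 1
xe = xr = 1.0
for i in range(n):
    xe = euler_step(f, i * h, xe, h)
    xr = rk2_step(f, i * h, xr, h)
```

At t = 1 the 2nd-order method is much closer to e than Euler's method for the same step size, as the order of the methods predicts.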
Differential operator Φ[𝑓(𝑡)] = expression where one or more derivatives of 𝑓 appear

Linear differential operator 𝐿[𝑎𝑥1 + 𝑏𝑥2 ] = 𝑎𝐿[𝑥1 ] + 𝑏𝐿[𝑥2 ]

Homogenous linear ODE 𝐿[𝑥(𝑡)] = 0

Non-homogenous linear ODE 𝐿[𝑥(𝑡)] = 𝑓(𝑡)

Linearity principle: if x₁ and x₂ solve L[x(t)] = 0 and L[x(t)] is linear, then a·x₁ + b·x₂ is a solution

In addition:

If the ODE is of order p and x₁, x₂, …, x_p are independent solutions, then the general solution is A₁·x₁ + A₂·x₂ + ⋯ + A_p·x_p

Independent functions: Σ_(i=1..p) k_i·f_i(t) = 0 for all t only when k₁ = k₂ = ⋯ = k_p = 0

General solution to L[x] = f(t) ODE(7):

x*: any solution to L[x] = f(t)
x_c: the general solution to L[x] = 0
⇒ General solution: x* + x_c

Linear, constant coefficient ODE ODE(6): a·d²x/dt² + b·dx/dt + c·x = 0, a ≠ 0

Characteristic equation: a·m² + b·m + c = 0 with solutions m₁ and m₂

General solution to ODE(6): m₁ ≠ m₂ ∈ ℝ: x(t) = A·e^(m₁t) + B·e^(m₂t)

m₁ = m₂ = k: x(t) = A·t·e^(kt) + B·e^(kt)

m₁, m₂ ∈ ℂ: m₁ = α + βj and m₂ = α − βj, then x(t) = e^(αt)·(C·cos(βt) + D·sin(βt))

(The method can readily be generalized to higher order ODEs)
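The three cases follow directly from the roots of the characteristic equation, which the quadratic formula gives for any coefficients; a minimal sketch using an illustrative example ODE (the name `char_roots` is not from the sheet):

```python
import cmath

def char_roots(a, b, c):
    # roots of a*m^2 + b*m + c = 0 via the quadratic formula (complex-safe)
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# example: x'' + 2x' + 5x = 0  ->  m = -1 ± 2j,
# so x(t) = e^(-t)*(C*cos(2t) + D*sin(2t))  (complex-root case)
m1, m2 = char_roots(1.0, 2.0, 5.0)
```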
