Maths P1 Sheet

This document discusses linear algebra and calculus concepts. It covers topics like vector spaces, matrices, eigenvalues, eigenvectors, linear transformations, continuity of functions, and uniform continuity. Many definitions, properties, theorems and examples regarding these topics are provided.

Linear Algebra

1. Diagonalisable ⇔ AM = GM (algebraic multiplicity = geometric multiplicity) for all eigenvalues

Vectorspaces
1. Non-empty set V over field F is a vectorspace if:
○ Abelian group under +: closure, associativity, identity, inverse, commutativity
○ Scalar multiplication distributivity:
i. a(α + β) = aα + aβ ∀a ∈ F, α, β ∈ V
ii. (a + b)α = aα + bα ∀a, b ∈ F, α ∈ V
iii. (ab)α = a(bα) ∀a, b ∈ F, α ∈ V
iv. 1·α = α ∀α ∈ V, where 1 is the unity of F (important!)
○ V is not a vectorspace if F is not a field; when V is itself a field, F must be a subfield of V for V to be a vectorspace
2. Subspace test: W ⊆ V(F) is a subspace iff W ≠ ∅ and aα − bβ ∈ W ∀α, β ∈ W, a, b ∈ F
3. Theorems:
○ Intersection of arbitrary number of subspaces is a subspace
○ Union of two subspaces is a subspace iff one is contained in the other
4. dim(𝑉 + 𝑊) = dim(𝑉) + dim(𝑊) − dim(𝑉∩𝑊)
5. Vector homomorphism/LT: φ: V → W such that φ(aα + bβ) = aφ(α) + bφ(β)
○ First principle: φ: V → W ⇒ V/ker φ ≅ im φ
○ Second principle
○ Third principle
○ Sylvester’s theorem: rank(T) + null(T) = dim V (proof by extending basis of null(T))
○ Linear transform on a vectorspace: non-singularity ⇔ invertibility
6. If L(V, W) is the vectorspace of all LTs from V to W, dim L(V, W) = dim V · dim W
○ Proof: basis = {T_ij | T_ij(v_i) = w_j, 0 elsewhere}
7. Infinite set 𝑆 is linearly independent iff every finite subset is linearly independent
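Sylvester’s theorem above (rank(T) + null(T) = dim V) can be sanity-checked numerically; a minimal pure-Python sketch, with an illustrative matrix (not from the sheet) representing T:

```python
from fractions import Fraction

def rank(rows):
    """Row rank via Gaussian elimination over the rationals (exact, no float issues)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# T: R^3 -> R^2 given by the matrix below
A = [[1, 2, 3],
     [2, 4, 6]]          # second row is a multiple of the first
dim_V = 3                # domain dimension = number of columns
rk = rank(A)             # rank(T) = dim of column space
nullity = dim_V - rk     # Sylvester: rank + nullity = dim V
print(rk, nullity)       # 1 2
```

Using `Fraction` keeps the elimination exact, so the rank is never corrupted by floating-point pivots.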

Matrices

Definitions
1. Involutory: A^2 = I_n; idempotent matrix: A^2 = A; nilpotent matrix: A^n = 0 for some n ∈ ℕ
2. Symmetric: A^T = A; skew-symmetric/anti-symmetric: A^T = −A
3. Hermitian matrix: A^θ = A; skew-Hermitian: A^θ = −A
4. Orthogonal matrix: A^T A = I; unitary matrix: A^θ A = I
5. Similar matrices: for square matrices A, B: A ∼ B ⇔ B = C^−1 A C for some non-singular C
○ Same characteristic equation and hence eigenvalues

Utkarsh Kumar
○ e.g. A = [[1, 0], [0, 1]] and B = [[1, 1], [0, 1]] are non-similar matrices with the same eigenvalue, det, trace, and characteristic polynomial; but B is non-diagonalisable
6. Rank of matrix = largest r such that an r × r non-singular minor exists in the matrix
○ Rank = dim columnspace = dim rowspace
○ Proof: reduce A to normal form; show that row operations don’t change column rank (C_i → EC_i ⇒ (EC_j LI ⇒ C_j LI)) and column operations don’t change row rank (R_i → R_i E′)
○ Not affected by elementary row and column operations, or pre/post-multiplication with a non-singular matrix P (∵ P = ΠR_i ΠC_i, a product of elementary matrices)
7. Equivalent matrices (same rank): matrices of same order that can be reduced to same normal form
○ Can be obtained from each other by series of elementary row/column operations
○ 𝐵 = 𝑃𝐴𝑄, 𝑃 𝑎𝑛𝑑 𝑄 being non-singular matrices
8. Row reduced echelon form: pivots unity, all other elements in pivot column vanish, pivots are arranged

Theorems
1. Every square matrix can be written as the sum of a symmetric (Hermitian) and a skew-symmetric (skew-Hermitian) matrix: A = (A + A^T)/2 + (A − A^T)/2
2. A^T B + B^T A is symmetric, A^T B − B^T A is skew-symmetric
3. Σ_k a_ik C_jk = Σ_k a_ki C_kj = det(A) if i = j, 0 otherwise
4. Row operations on AB can be effected on the pre-factor, column operations on the post-factor
5. Matrix A = (∏P) X_r (∏Q), where P (Q) represents elementary row (column) matrices, and X_r is the normal/first canonical form: [I_r | 0], [I_r ; 0], or [I_r 0; 0 0]
6. rank(AB) ≤ rank(A), rank(B)
a. Proof: rank doesn’t change on multiplication with a non-singular matrix
b. Proof 2: R_i(AB) = Σ_j A_ij R_j(B); C_i(AB) = Σ_j C_j(A) B_ji
c. det AB = det A · det B (hint: reduce to upper triangular matrices)
7. System of linear equations AX = B:
a. rank(A|B) > rank(A) ⇒ no solutions
b. rank(A|B) = rank(A) = #variables ⇒ unique solution
c. rank(A|B) = rank(A) < #variables ⇒ infinitely many solutions
d. Cramer’s rule: x_i = ∆_i/∆; ∆_i obtained by replacing the ith column with b
8. O(GL(n, Z_p)) = (p^n − 1)(p^n − p)(p^n − p^2)...(p^n − p^(n−1)); O(SL(n, Z_p)) = O(GL(n, Z_p))/(p − 1)
9. AB, BA (det A, det B ≠ 0) have the same eigenvalues; proof: (AB)X = λX ⇒ (BA)(BX) = λ(BX)
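The order formula for GL(n, Z_p) can be verified by brute force for small cases; a sketch assuming n = p = 2 (illustrative values), counting invertible 2×2 matrices over Z_2:

```python
from itertools import product

def det2_mod(a, b, c, d, p):
    """Determinant of [[a, b], [c, d]] reduced mod p."""
    return (a * d - b * c) % p

p, n = 2, 2
# Count invertible n x n matrices over Z_p by brute force (n = 2 here)
count = sum(1 for a, b, c, d in product(range(p), repeat=4)
            if det2_mod(a, b, c, d, p) != 0)
formula = (p**n - 1) * (p**n - p)   # (p^n - 1)(p^n - p)...(p^n - p^{n-1})
print(count, formula)               # 6 6
```

The formula counts ordered choices of linearly independent rows: the first row is any nonzero vector, the second anything off the line spanned by the first.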

Eigenvalues and eigenvectors


1. Eigenvalues satisfy characteristic equation of the matrix

○ Geometric multiplicity ≤ algebraic multiplicity for all eigenvalues
2. Hermitian matrices have real eigenvalues (proof: pre-multiply AX = λX with X^θ)
○ Skew-Hermitian matrices have purely imaginary or 0 eigenvalues
○ Unitary matrices have eigenvalues with unit modulus
○ Real symmetric matrix ⇒ eigenvectors for distinct eigenvalues are orthogonal
3. Eigenvectors corresponding to distinct eigenvalues are linearly independent
4. Cayley–Hamilton theorem: every matrix satisfies its characteristic equation p(t) = det(t I_n − A) = 0
○ Proof: B = adj(t I_n − A) = Σ_i t^i B_i (a matrix polynomial of degree n − 1)
○ (t I_n − A)·B = t^n B_(n−1) + Σ_{i=1}^{n−1} t^i (B_(i−1) − A B_i) − A B_0 = p(t) I_n = (Σ c_i t^i) I_n
○ Comparing: c_n I = B_(n−1), c_i I = B_(i−1) − A B_i, c_0 I = −A B_0; multiply the ith equation with A^i and add
5. Diagonalizability: 𝑛 linearly independent eigenvectors ⇔ diagonalizable (proof)
○ 𝑛 distinct eigenvalues ⇒ diagonalizable
○ Two matrices with the same n distinct eigenvalues are similar (both diagonalize to the same diagonal matrix)
○ Geometric multiplicity = algebraic multiplicity for all eigenvalues ⇒ diagonalizable
○ Real symmetric matrix ⇒ orthogonal eigenvectors ⇒ orthogonally diagonalizable
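Cayley–Hamilton is easy to check numerically for a 2×2 matrix, where the characteristic polynomial is t² − tr(A)·t + det(A); a minimal sketch with an illustrative matrix:

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1],
     [1, 3]]
tr = A[0][0] + A[1][1]                       # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant
A2 = matmul(A, A)
# Cayley-Hamilton: A^2 - tr(A)*A + det(A)*I should be the zero matrix
CH = [[A2[i][j] - tr * A[i][j] + (det if i == j else 0) for j in range(2)]
      for i in range(2)]
print(CH)   # [[0, 0], [0, 0]]
```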

Inner product space


1. Inner product properties:
a. v·w = conj(w·v) = w^θ v (= w^T v over ℝ)
b. v·v > 0 for v ≠ 0; v·v = 0 ⇔ v = 0
c. (aα + bβ)·v = a(α·v) + b(β·v)
2. V(ℝ) is called Euclidean space; V(ℂ) is called unitary space
3. norm(v) = ||v|| = √(v·v) = √(v^θ v)
4. Orthogonally similar: A ∼_ortho B ⇔ B = P^−1 A P = P^T A P, where P is orthogonal

Matrix examples:

1. Non-similar matrices with same characteristic equation: e.g. [[1, 0], [0, 1]], [[1, 1], [0, 1]]

2. Non-diagonalisable square matrix: e.g. [[1, 1], [0, 1]]: eigenvalue 1, but single LI eigenvector

Calculus

Continuity of functions
1. Continuous function on closed interval ⇒ bounded, attains its bounds (proof: if unbounded, consider ⟨x_n⟩ with f(x_n) > n; Bolzano–Weierstrass gives a convergent subsequence)
2. 𝑓(𝑥 + 𝑦) = 𝑓(𝑥) + 𝑓(𝑦), 𝑓 𝑖𝑠 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠 𝑎𝑡 𝑠𝑜𝑚𝑒 𝑝𝑜𝑖𝑛𝑡 ⇒ 𝑓(𝑥) = 𝑎𝑥
3. Uniform continuity: δ doesn’t depend on the point in consideration (stronger than continuity)
○ ∀ϵ > 0, ∃δ > 0: ∀x₁, x₂, |x₁ − x₂| < δ ⇒ |f(x₁) − f(x₂)| < ϵ
○ Continuity ⇏ uniform continuity (like f(x) = x² on (0, ∞))
○ Continuity on compact set ⇒ uniform continuity (proof: for ϵ > 0, the balls B(x, δ_x/2) form an open cover of the compact set)
4. Intermediate Value Theorem: proof by order-completeness of ℝ (c = sup{x ∈ [a, b] | f(x) < k} ⇒ f(c) = k)
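The Intermediate Value Theorem is also what justifies the bisection method; a minimal sketch (the function and interval are illustrative):

```python
def bisect(f, a, b, tol=1e-10):
    """Locate c with f(c) = 0, assuming f continuous and f(a), f(b) of opposite sign."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:
            return c
        if (fa < 0) != (fc < 0):
            b = c              # root lies in [a, c]
        else:
            a, fa = c, fc      # root lies in [c, b]
    return (a + b) / 2

# f(x) = x^3 - 2 changes sign on [1, 2], so IVT guarantees a root there
root = bisect(lambda x: x**3 - 2, 1.0, 2.0)
print(root)   # ≈ 1.2599 (cube root of 2)
```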

Differentiation

Differentiability of functions
1. Differentiable at all points ⇏ continuity of derivative (y = x² sin(1/x))
2. Rolle’s theorem:
𝑓 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠 𝑜𝑛 [𝑎, 𝑏], 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑡𝑖𝑎𝑏𝑙𝑒 𝑜𝑛 (𝑎, 𝑏), 𝑓(𝑎) = 𝑓(𝑏) ⇒ 𝑓𝑜𝑟 𝑠𝑜𝑚𝑒 𝑐 ∈ (𝑎, 𝑏), 𝑓'(𝑐) = 0
○ Proof: continuous on compact set, bounded, attains bounds, stationary point concept
3. LMVT: f continuous on [a, b], differentiable on (a, b) ⇒ for some c ∈ (a, b), f′(c) = (f(b) − f(a))/(b − a)
○ Proof using Rolle’s theorem
4. Cauchy’s MVT:
f, g continuous on [a, b], differentiable on (a, b) ⇒ for some c ∈ (a, b), f′(c)/g′(c) = (f(b) − f(a))/(g(b) − g(a))
○ Proof using Rolle’s theorem
5. Taylor’s series: f(b) = f(a) + (b − a) f′(a) + ((b − a)²/2!) f″(a) + ... + remainder term
○ Lagrange remainder = (b − a)^(n+1) f^(n+1)(ξ)/(n + 1)!
○ Cauchy’s remainder = (b − a)(b − ξ)^n f^(n+1)(ξ)/n!
○ Proof: G(x) = F(x) − ((b − x)/(b − a))^p F(a); F(x) = f(b) − f(x) − (b − x)f′(x) − (b − x)² f″(x)/2! − ... (n terms)
i. Rolle’s on G(x) ⇒ G′(c) = 0; p = 1 ⇒ Cauchy’s remainder; p = n + 1 ⇒ Lagrange’s remainder
○ Maclaurin expansion: take 𝑎 = 0 in Taylor series
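The Lagrange remainder gives a computable error bound for a truncated Maclaurin series; a sketch for f = exp (illustrative choice), where f^(n+1)(ξ) ≤ e^b for ξ ∈ (0, b):

```python
import math

def maclaurin_exp(b, n):
    """Partial Maclaurin sum of e^b up to the x^n term."""
    return sum(b**k / math.factorial(k) for k in range(n + 1))

b, n = 1.0, 8
approx = maclaurin_exp(b, n)
err = abs(math.exp(b) - approx)
# Lagrange remainder: (b - a)^{n+1} f^{(n+1)}(xi) / (n+1)! with a = 0, xi in (0, b);
# for f = exp it is at most b^{n+1} e^b / (n+1)!
bound = b**(n + 1) * math.exp(b) / math.factorial(n + 1)
print(err <= bound)   # True
```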

Functions of several variables
1. Individual limits exist ⇏ simultaneous limit exists: f(x, y) = x/(x + y) (the iterated limits are not even equal)
2. Partial derivatives exist and are equal ⇏ continuity: f(x, y) = xy/(x² + y²)
3. Differentiability: f(x + h, y + k) − f(x, y) = Ah + Bk + √(h² + k²) φ(h, k), where lim_{(h,k)→(0,0)} φ(h, k) = 0
○ A = f_x(x, y), B = f_y(x, y); and hence independent of h, k
○ Sufficient for differentiability: f_x exists and is bounded in a nbd; f_y exists at (a, b)
○ Sufficient for differentiability: f_x is continuous in a nbd, f_y exists at (a, b)
4. f_xy(x, y) and f_yx(x, y) need not always be equal: f(x, y) = xy(x² − y²)/(x² + y²)
○ Schwarz theorem: f_xy continuous in nbd, f_x exists ⇒ f_xy = f_yx (sufficient, not necessary)
○ Young’s theorem: f_x, f_y differentiable at (a, b) ⇒ f_xy = f_yx
5. Taylor’s series: f(x, y) = f(x₀, y₀) + [(x − x₀) f_x(x₀, y₀) + (y − y₀) f_y(x₀, y₀)] + ...
○ nth term = Σ_{i+j=n} (1/(i! j!)) (x − x₀)^i (y − y₀)^j [∂^n f/∂x^i ∂y^j] at (x₀, y₀)
○ Remainder, R_n = Σ_{i+j=n} (1/(i! j!)) (α − x₀)^i (β − y₀)^j [∂^n f/∂x^i ∂y^j] at (α, β); (α, β) lies on the line segment joining (x₀, y₀) and (x, y)

Extrema
1. Bivariate case: consider stationary points and non-differentiable points
○ f_xy² − f_xx f_yy < 0 ⇒ maxima (f_xx < 0) or minima (f_xx > 0)
○ f_xy² − f_xx f_yy > 0 ⇒ saddle point
○ f_xy² − f_xx f_yy = 0 ⇒ inconclusive
2. 3+ variables: consider stationary points where d²f(x₁, x₂, ...) has the same sign for arbitrary dx₁, dx₂, ... (curvature up/down in all directions)
○ Maxima if d²f < 0 (concave down); minima if d²f > 0 (concave up)
3. Lagrange’s method of multipliers: optimise φ = f(x, y) + Σ λ_i c_i; c_i being the constraints
○ Verify maxima/minima by checking for constant sign of d²φ

Jacobian
1. Jacobian = ∂(u,v)/∂(x,y) = det of the matrix of partial derivatives; = 0 ⇔ u, v are functionally related
2. ∂(u,v)/∂(x,y) × ∂(x,y)/∂(u,v) = 1

3. Jacobians are multipliable (chain rule)
4. In case of n implicit equations f₁, f₂, ...: ∂(f₁,f₂,...)/∂(y₁,y₂,...) · ∂(y₁,y₂,...)/∂(x₁,x₂,...) = (−1)^n ∂(f₁,f₂,...)/∂(x₁,x₂,...)
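Property 2 (the Jacobian of the inverse map is the reciprocal) can be checked with finite differences; a sketch using the polar ↔ Cartesian pair, where ∂(x,y)/∂(r,θ) = r (the step size h and test point are illustrative):

```python
import math

def jacobian(f, x, y, h=1e-6):
    """Numeric 2x2 Jacobian determinant of f(x, y) -> (u, v) by central differences."""
    ux = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    uy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    vx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    vy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return ux * vy - uy * vx

to_cart = lambda r, t: (r * math.cos(t), r * math.sin(t))     # (r, θ) -> (x, y)
to_polar = lambda x, y: (math.hypot(x, y), math.atan2(y, x))  # (x, y) -> (r, θ)

r, t = 2.0, 0.7
x, y = to_cart(r, t)
J1 = jacobian(to_cart, r, t)    # ∂(x,y)/∂(r,θ) = r
J2 = jacobian(to_polar, x, y)   # ∂(r,θ)/∂(x,y) = 1/r
print(round(J1 * J2, 6))        # 1.0
```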

Uniform convergence
1. Definition: < 𝑓𝑛(𝑥) > ⇉ 𝑓(𝑥) ⇔ ∀ϵ > 0, ∃𝑛0 > 0: |𝑓(𝑥) − 𝑓𝑛(𝑥)| ≤ ϵ ∀𝑛 > 𝑛0, 𝑥 ∈ 𝐷
2. Continuity
○ 𝑓𝑛(𝑥) 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠, {𝑓𝑛(𝑥)} ⇉ 𝑓(𝑥) ⇒ 𝑓 𝑖𝑠 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠
○ 𝑢𝑛(𝑥) 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠, ∑𝑢𝑛(𝑥) ⇉ 𝑢(𝑥) ⇒ 𝑢 𝑖𝑠 𝑐𝑜𝑛𝑡𝑖𝑛𝑢𝑜𝑢𝑠
3. Integrability
○ f_n(x) integrable, {f_n(x)} ⇉ f(x) ⇒ f is integrable, lim_{n→∞} ∫_a^b f_n(x) dx = ∫_a^b f(x) dx
○ u_n(x) integrable, Σu_n(x) ⇉ u(x) ⇒ u is integrable, Σ ∫_a^b u_n(x) dx = ∫_a^b u(x) dx
4. Differentiability
○ 𝑓𝑛(𝑥) is differentiable, 𝑓'𝑛(𝑥) is continuous, 𝑓𝑛(𝑥) → 𝑓(𝑥), 𝑓𝑛'(𝑥) ⇉ 𝑔(𝑥) ⇒ 𝑓'(𝑥) = 𝑔(𝑥)
○ 𝑓𝑛(𝑥)differentiable, 𝑓'𝑛(𝑥) continuous, ∑𝑓𝑛(𝑥) = 𝑓(𝑥), ∑𝑓𝑛'(𝑥) ⇉ 𝑔(𝑥) ⇒ 𝑓'(𝑥) = 𝑔(𝑥)

Uniform convergence for sequences


1. ∀ϵ > 0 ∃𝑚0 ∈ ℕ: ∀𝑚 ≥ 𝑚0, | 𝑓𝑚(𝑥) − 𝑓(𝑥)| < ϵ ∀𝑥 ∈ ℝ (𝑚0 independent of 𝑥)
2. M_n test: f(x) = lim_{n→∞} f_n(x); M_n = sup_{x∈ℝ} |f(x) − f_n(x)|; lim_{n→∞} M_n = 0 ⇔ {f_n(x)} ⇉ f(x)

Uniform convergence for series


1. Series uniformly converges iff its sequence of partial sums (SOPS) uniformly converges
2. Weierstrass’ M test: M_n ≥ |u_n(x)| ∀x ∈ ℝ; ΣM_n converges ⇒ Σu_n(x) ⇉ u(x)
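The M_n test can be illustrated numerically: for f_n(x) = x/(1 + nx²) → 0, the sup is 1/(2√n) → 0, so convergence is uniform; a sketch (the grid-based sup is a crude stand-in for the true supremum):

```python
import math

def f_n(n, x):
    """f_n(x) = x / (1 + n x^2), which converges pointwise to 0 on R."""
    return x / (1 + n * x**2)

def M_n(n, grid=10000, lo=-10.0, hi=10.0):
    """Numeric sup of |f_n - 0| over a grid (approximates the true sup)."""
    return max(abs(f_n(n, lo + (hi - lo) * i / grid)) for i in range(grid + 1))

# The maximum is attained at x = 1/sqrt(n) and equals 1/(2 sqrt(n)) -> 0
for n in (1, 100, 10000):
    print(n, M_n(n), 1 / (2 * math.sqrt(n)))   # numeric sup ≈ 1/(2*sqrt(n))
```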

Analytic Geometry

Sphere
1. Need to choose 2 spheres: x² + y² + z² + 2u₁x + d = 0, x² + y² + z² + 2u₂x + d = 0
2. Orthogonality: r₁² + r₂² = d², d being the distance between the centres

Cone
1. Condition for cone: ∆ = det(𝑎, 𝑏, 𝑐, 𝑓, 𝑔, ℎ, 𝑢, 𝑣, 𝑤, 𝑑) = 0; vertex by 𝑓𝑥 = 𝑓𝑦 = 𝑓𝑧 = 0
2. Cone with origin as vertex ⇔ homogeneous 2nd-degree equation
3. Cone with 3 ⊥ generators ⇔ 𝑎 + 𝑏 + 𝑐 = 0
4. Cone through coordinate axes: ℎ𝑥𝑦 + 𝑓𝑦𝑧 + 𝑔𝑧𝑥 = 0
5. Reciprocal cone: generated by lines through the vertex ⊥ to tangent planes of the original cone
○ Obtained by replacing a, b, c, f, g, h with their cofactors in ∆′ = det(a, b, c, f, g, h)
6. Enveloping curve: a curve that touches each curve of a given family
○ Enveloping cone of sphere: SS₁ = T²
7. Plane lx + my + nz = 0 touches cone ax² + by² + cz² = 0 ⇔ l²/a + m²/b + n²/c = 0
8. Normal plane through generator 𝐺 will be perpendicular to tangent plane at G and G itself

Cylinder

General conic
1. Condition of tangency: consider point (α, β, γ) on the conicoid; a tangent line through it meets the conicoid with both roots zero
2. Central conic: ax² + by² + cz² = 1 has tangent planes P: lx + my + nz = ±√(l²/a + m²/b + n²/c)
a. Director sphere: x² + y² + z² = a⁻¹ + b⁻¹ + c⁻¹
3. Sections of ax² + by² + 2hxy + 2ux + 2vy + d = 0 by z = 0 will be
a. Parabola if second-degree terms form a perfect square
b. Rectangular hyperbola if a + b = 0
4. Section of a conic with centre (α, β, γ) is obtained by plane 𝑇 = 𝑆1
5. Plane of contact: 𝑇 = 0; polar plane: locus of harmonic conjugate of P wrt R, S: 𝑇 = 0
6. Pole of given plane, 𝑃: 𝑙𝑥 + 𝑚𝑦 + 𝑛𝑧 = 𝑝 is point X whose polar plane is P, ie 𝑇(𝑋) ≡ 𝑃
7. Conjugate points 𝑃, 𝑄: polar plane of 𝑃 passes through 𝑄 (and vice versa)
8. Conjugate planes 𝑃1, 𝑃2: pole of 𝑃1 lies on 𝑃2 (and vice-versa)
9. Conjugate lines 𝐿1, 𝐿2: conjugate plane of each point in 𝐿1 passes through 𝐿2 (and vice-versa)

10. Conjugate semi-diameters 𝑂𝑃, 𝑂𝑄, 𝑂𝑅: conjugate plane of one passes through other two
a. Condition: Σx_i² = a², Σy_i² = b², Σz_i² = c², Σx_i y_i = 0 = Σx_i x_j (think of the matrix Mat(x₁, x₂, x₃; y₁, y₂, y₃; z₁, z₂, z₃))
11. Conjugate plane 𝑃 to (𝑙, 𝑚, 𝑛): locus of midpoints of system of chord parallel to (𝑙, 𝑚, 𝑛)
a. Conjugate planes 𝑃1, 𝑃2, 𝑃3: 𝑃1 is conjugate to 𝑃2 ∩ 𝑃3 and so on
12. Eccentricity
a. Ellipse: b² = a²(1 − e²); e < 1
b. Hyperbola: b² = a²(e² − 1); e > 1
c. e = 0 for circle; e = 1 for parabola
13. Hyperboloid generators through the principal ellipse: (x − a cos θ)/(a sin θ) = (y − b sin θ)/(−b cos θ) = z/(±c)

Reduction of 2nd degree equations to canonical forms


1. Compute λ𝑖’s as eigenvalues of ∆ = 𝑀𝑎𝑡(𝑎, 𝑏, 𝑐, 𝑓, 𝑔, ℎ)
a. λ_i ≠ 0 ∀i:
i. d′ = d + uα + vβ + wγ
ii. λ₁x² + λ₂y² + λ₃z² + d′ = 0
iii. f_x = f_y = f_z = 0 ⇒ (α, β, γ) is the centre
b. Else, (l, m, n) is the eigenvector corresponding to 0, normalised
i. k = lu + mv + nw ≠ 0:
● Centre: f_x/l = f_y/m = f_z/n = 2k; kΣlx + Σux + d = 0
● d′ = 0
ii. k = lu + mv + nw = 0:
● Line of centres: f_x = f_y = f_z = 0
● d′ = d + uα + vβ + wγ
iii. λ₁x² + λ₂y² + 2kz + d′ = 0
2. If 2nd-degree terms form a perfect square,
a. Linear terms line up with the square term ⇒ pair of planes
b. Else, (px + qy + rz + λ)² = −2ux − 2vy − 2wz − d + λ² + 2λ(px + qy + rz)
i. λ chosen so that the normal vectors of the planes on the LHS and RHS are ⊥
3. Direction ratios of normal axes/planes are eigenvectors for ∆
4. Type of conicoid:
a. Ellipsoid: x²/a² + y²/b² + z²/c² = 1
b. Hyperboloid of one sheet: x²/a² + y²/b² − z²/c² = 1
c. Hyperboloid of two sheets: x²/a² − y²/b² − z²/c² = 1
d. Elliptic paraboloid: x²/a² + y²/b² = 2z/c
e. Hyperbolic paraboloid: x²/a² − y²/b² = 2z/c
f. Parabolic cylinder: x² = 4ay
g. Elliptic cylinder: x²/a² + y²/b² = 1
h. Conicoid of revolution: λ₁ = λ₂
5. Generators for hyperboloids: two sets (λ, µ)

Miscellaneous
1. Check degree of parameterised curve, 𝑥 = 𝑋(λ), 𝑦 = 𝑌(λ), 𝑧 = 𝑍(λ): intersection with arbitrary plane
𝑃: 𝑙𝑥 + 𝑚𝑦 + 𝑛𝑧 = 𝑝, count solutions for λ
2. Plane of contact: locus of points on conicoid, tangent planes at which pass through given point
3. Polar points and planes
4. Enveloping cone

Ordinary Differential Equations
1. Order: highest derivative term involved; Degree: power of the highest-order derivative when the equation is made free of negative and fractional powers
2. Exact form ⇔ ∂M/∂y = ∂N/∂x
a. Homogeneous eqn ⇒ IF = (Mx + Ny)⁻¹
■ proof: M dx + N dy = ½(Mx + Ny)(dx/x + dy/y) + ½(Mx − Ny)(dx/x − dy/y)
b. M = y f(xy), N = x g(xy) ⇒ IF = (Mx − Ny)⁻¹; proof like above
c. (M_y − N_x)/N = f(x) ⇒ IF = e^(∫f(x)dx)
d. (N_x − M_y)/M = g(y) ⇒ IF = e^(∫g(y)dy)
3. Don’t forget to check for linear in 𝑥 or 𝑦
4. Euler–Cauchy form: Σ x^i D^i y = f(x) ⇒ substitute x = e^z
a. 1/(D − α) X(x) = x^α ∫ x^(−1−α) X(x) dx; D ≡ d/dz; z = log x
5. CF, PI:
a. 1/(D − α) X(x) = e^(αx) ∫ e^(−αx) X(x) dx; D ≡ d/dx
6. Solving y″ + Py′ + Qy = R (2nd-order eqn with non-constant coefficients)
a. When z, part of the CF, is known: y = Vz; V″ + (2z′/z + P)V′ = R/z
b. Normal form: y = Vz; z = e^(−∫P/2 dx); V″ + (Q − P²/4 − P′/2)V = R/z
c. Change of variable x → z: coeff of y′ vanishes or coeff of y″ becomes 1
d. Variation of parameters to find y_p
i. W = det(u, v; u′, v′); u, v are solutions of the homogeneous equation
ii. PI = uφ + vψ; φ = ∫ −Rv/W; ψ = ∫ Ru/W
iii. Proof: find φ and ψ such that φu + ψv is a particular solution; take uφ′ + vψ′ = 0
iv. Can find the other part of the CF using v″ + (P + 2z′/z)v′ = 0

Clairaut’s equation
1. y = px + f(p) ⇒ general solution y = cx + f(c)
2. Check for solvability for x, y, p
3. Useful substitutions:
a. x² = X, y² = Y ⇒ yp/x = P
b. 1/x = X, 1/y = Y ⇒ x²p/y² = P
c. Identify expressions from derivative terms: xp + y ⇒ xy = v etc
d. Watch out for expressions: x² + y² ⇒ polar coordinates etc
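For Clairaut’s equation y = px + f(p) with f(p) = p² (an illustrative choice), eliminating p from y = px + p² and 0 = x + 2p gives the singular solution y = −x²/4; a sketch verifying it satisfies the ODE:

```python
def ode_residual(x, y, p):
    """Residual of y = p x + p^2; zero when (x, y, p) satisfies the ODE."""
    return y - (p * x + p**2)

def singular(x):
    """Candidate singular solution y = -x^2/4 (envelope of y = c x + c^2)."""
    return -x**2 / 4

h = 1e-6
ok = True
for x in (-2.0, -0.5, 1.0, 3.0):
    y = singular(x)
    p = (singular(x + h) - singular(x - h)) / (2 * h)   # numeric slope y'
    ok = ok and abs(ode_residual(x, y, p)) < 1e-9
print(ok)   # True
```

Each general-solution line y = cx + c² also satisfies the ODE with p = c; the singular solution is the envelope of that family.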

¹ Image taken from IMS notes
Singular solution
1. A solution that satisfies the given ODE but is usually not obtainable from the general solution for any value of c
2. ODE f = 0 with solution φ = 0: eliminate p from f = 0 = ∂f/∂p, or c from φ = 0 = ∂φ/∂c
3. Extraneous loci: appear in the discriminant but don’t satisfy the given ODE; obtain from both f = ∂f/∂p = 0 and φ = ∂φ/∂c = 0
○ p-discriminant = ET²C (envelope, tac-locus squared, cusp-locus)
○ c-discriminant = EC³N² (envelope, cusp-locus cubed, node-locus squared)

Family of curves
1. Orthogonal family: replace p → −1/p; dr/dθ → −r² dθ/dr
2. Oblique family at angle α: 𝑝 → (𝑝 + tan α)/(1 − 𝑝 tan α)

Laplace transform

1. ℒ(f(t)) = ∫₀^∞ e^(−st) f(t) dt = F(s)
2. ℒ(f(at)) = (1/a) F(s/a); ℒ(e^(at) f(t)) = F(s − a)
3. ℒ(t^n f(t)) = (−1)^n D^n F(s); ℒ(f(t)/t) = ∫_s^∞ F(s) ds
4. ℒ(f′(t)) = s ℒ(f) − f(0); ℒ(∫₀^t f(t) dt) = (1/s) ℒ(f(t))
5. ℒ(f(t − a)·H(t − a)) = e^(−as) ℒ(f)
6. Convolution: ℒ⁻¹(FG) = f * g = ∫₀^t f(u) g(t − u) du (proof by changing order of integration)
7. ℒ(t^n) = Γ(n + 1)/s^(n+1) (n ≠ −1); ℒ(sin at) = a/(s² + a²); ℒ(cos at) = s/(s² + a²)
8. As a last resort, consider Taylor series: ℒ(sin t), ℒ⁻¹((1/s) cos(1/s)), ℒ⁻¹((1/s) e^(−1/s))
9. Initial value theorem: lim_{t→0} f(t) = lim_{s→∞} s F(s); final value theorem: lim_{t→∞} f(t) = lim_{s→0} s F(s)
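Table entries like ℒ(sin at) = a/(s² + a²) can be checked by truncating the defining integral; a crude numeric sketch (the cutoff T, step count, and the values of a and s are illustrative):

```python
import math

def laplace_numeric(f, s, T=60.0, steps=200000):
    """Approximate ∫_0^T e^{-st} f(t) dt by the trapezoid rule (T stands in for ∞)."""
    h = T / steps
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

a, s = 3.0, 2.0
num = laplace_numeric(lambda t: math.sin(a * t), s)
exact = a / (s**2 + a**2)     # table entry: L(sin at) = a/(s^2 + a^2)
print(round(num, 6), round(exact, 6))   # both ≈ 0.230769
```

The truncation at T is harmless here because e^(−sT) is already negligible for s = 2, T = 60.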

Miscellaneous
1. Euler’s theorem: f(x, y) is homogeneous of degree n ⇒ x ∂f/∂x + y ∂f/∂y = n f(x, y)
2. Wronskian, W = det(u, v, ...; u′, v′, ...; u″, v″, ...)
a. W ≠ 0 ⇒ linearly independent functions
b. W = 0 is a necessary, not sufficient, condition for linear dependence of functions
c. If u, v are solutions to an ODE, W is either identically 0 or never 0; W ≡ 0 ⇔ linear dependence
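A quick Wronskian check for two solutions of a constant-coefficient ODE: e^x and e^(2x) solve y″ − 3y′ + 2y = 0, and W = e^(3x) never vanishes, so they are independent (derivatives below taken numerically):

```python
import math

def wronskian2(u, v, x, h=1e-5):
    """Numeric 2x2 Wronskian det([[u, v], [u', v']]) via central differences."""
    up = (u(x + h) - u(x - h)) / (2 * h)
    vp = (v(x + h) - v(x - h)) / (2 * h)
    return u(x) * vp - v(x) * up

# W(e^x, e^{2x}) = e^{3x}; at x = 0 it equals 1
W = wronskian2(math.exp, lambda x: math.exp(2 * x), 0.0)
print(round(W, 4))   # 1.0
```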

Vector Analysis
1. Serret–Frenet formulae:
○ Curvature vector: dT/ds = kN; torsion vector: dB/ds = −τN
○ k = |dr/dt × d²r/dt²| / |dr/dt|³; τ = [dr/dt, d²r/dt², d³r/dt³] / |dr/dt × d²r/dt²|²
i. Proof using k = |r′ × r″|, τ = [r′, r″, r‴] / |r′ × r″|², with derivatives taken wrt s
2. Green’s theorem: ∮_C (P dx + Q dy) = ∬_A (∂Q/∂x − ∂P/∂y) dx dy; A is the area inside simple closed curve C
○ Check that the function is defined both on C and within A
○ Proof: divide the curve into two parts; show ∬_A (∂Q/∂x) dx dy = ∮_C Q dy
3. Stokes’ theorem: ∮_C F·dr = ∬_S (∇ × F)·n̂ dS; C is the border of surface S
○ check for continuity and differentiability within the surface
○ Take the clockwise line integral (seen from +Z) if the surface is below the curve
4. Gauss’ divergence theorem: ∬_S F·n̂ dS = ∭_V ∇·F dV; S is the surface of volume V
○ check for continuity and differentiability within the volume
5. Scalar triple product: [u v w] = (u × v)·w = det(u, v, w)
○ |[u v w]| is the volume of the parallelepiped, and 6 × the volume of the tetrahedron
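The scalar triple product as a volume; a minimal sketch with illustrative edge vectors:

```python
def cross(u, v):
    """Cross product of 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple(u, v, w):
    """Scalar triple product [u v w] = (u x v) . w."""
    return dot(cross(u, v), w)

u, v, w = (1, 0, 0), (1, 2, 0), (1, 1, 3)
vol_ppd = abs(triple(u, v, w))   # parallelepiped volume
vol_tet = vol_ppd / 6            # tetrahedron on the same edges
print(vol_ppd, vol_tet)          # 6 1.0
```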

Miscellaneous terms
1. Reciprocal system of vectors for u, v, w is u′, v′, w′:
a. u·u′ = v·v′ = w·w′ = 1
b. u·v′ = v·w′ = ... = 0
c. Thus, u′ = (v × w)/[u v w], ...
2. a is irrotational ⇔ ∇ × a = 0; a is solenoidal ⇔ ∇·a = 0
3. ∂F/∂n = ∇F·n̂

Vector integration
1. ∫ u·(du/dt) dt = ½ u·u
2. ∫ u × (d²u/dt²) dt = u × du/dt
3. ∬_S φ∇φ·n̂ dS = ∭_V (|∇φ|² + φ∇²φ) dV

Dynamics and Statics

Dynamics
● Lagrangian
○ 𝐿=𝐾−𝑈
○ 𝑑(∂𝐿/∂𝑥˙)/𝑑𝑡 = ∂𝐿/∂𝑥
● Hamiltonian
○ p_x = ∂L/∂ẋ
○ H = Σ p_x ẋ − L = K + U (for conservative forces)
○ ∂H/∂p_x = ẋ; ∂H/∂x = −ṗ_x


● Motion under central force −P r̂:
○ v = ṙ r̂ + rθ̇ θ̂; dv/dt = (r̈ − rθ̇²) r̂ + (2ṙθ̇ + rθ̈) θ̂
○ ⇒ r̈ − rθ̇² = −P; r²θ̇ = h (constant)
○ dr̂/dt = θ̇ θ̂; dθ̂/dt = −θ̇ r̂
● Motion constrained on cycloid: s = 4a sin ψ; θ = 2ψ; x = a(θ + sin θ), y = a(1 − cos θ)
○ m d²s/dt² = −mg sin ψ ⇒ SHM with angular frequency ω = √(g/4a)
○ N − mg cos ψ = mv²/ρ; ρ = ds/dψ
● Kepler’s laws
● Principal axis: tan 2θ = 2F/(B − A)
○ L, T are principal axes of lamina: ∫ p₁p₂ dx dy = 0; p₁, p₂ are signed distances from L, T
