
DAMA50 - MATHEMATICS FOR MACHINE LEARNING

Analytic Geometry

2nd tutorial meeting notes (2023-24)


Geometric Vectors

Example:
Inner Product (Dot Product)
Length (Euclidean Norm)
Symmetric Matrices
A matrix A is called symmetric if A = Aᵀ, i.e. if it equals its own transpose.
Thus, a symmetric matrix is necessarily square.
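A minimal NumPy sketch (not part of the original notes) of this definition; the helper name is_symmetric is my own:

```python
import numpy as np

# A small symmetric matrix: it equals its own transpose.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])

def is_symmetric(M, tol=1e-12):
    """Return True if M is square and equals its transpose (within a tolerance)."""
    return M.shape[0] == M.shape[1] and np.allclose(M, M.T, atol=tol)

print(is_symmetric(A))          # True
print(is_symmetric(A[:, :2]))   # False: a non-square matrix cannot be symmetric
```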

Examples

Example
Quadratic forms

Examples

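As an illustrative sketch (not from the notes), the quadratic form associated with a symmetric matrix A is Q(x) = xᵀ A x; the helper name quadratic_form is my own:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # symmetric matrix

def quadratic_form(A, x):
    """Evaluate Q(x) = x^T A x."""
    return float(x @ A @ x)

x = np.array([1.0, -2.0])
print(quadratic_form(A, x))    # 10.0
```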
Symmetric, Positive Definite Matrices

A symmetric n × n matrix A with real entries is called symmetric positive definite if for every nonzero real column vector x,

xᵀ A x > 0.

In the case that xᵀ A x ≥ 0 for every x, the matrix is called positive semi-definite.
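As a numerical illustration (a sketch, not from the notes): a symmetric matrix is positive definite exactly when all of its eigenvalues are positive, which gives a simple check in NumPy. The helper name is_positive_definite is my own:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """A symmetric matrix is positive definite iff all its eigenvalues are > 0."""
    if not np.allclose(A, A.T, atol=tol):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

A = np.array([[9.0, 6.0],
              [6.0, 5.0]])   # both eigenvalues positive -> positive definite
B = np.array([[9.0, 6.0],
              [6.0, 3.0]])   # negative determinant -> one negative eigenvalue

print(is_positive_definite(A))  # True
print(is_positive_definite(B))  # False
```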


Lengths and Distances
Angle between Vectors
Orthogonality

If the inner product of two vectors x and y is zero, i.e. ⟨x, y⟩ = 0, the two vectors are said to be orthogonal.
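A short NumPy sketch (not part of the original notes) covering length, distance, angle and the orthogonality check above:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

length_x = np.linalg.norm(x)          # Euclidean norm: sqrt(x . x) = 3.0
distance = np.linalg.norm(x - y)      # Euclidean distance between x and y

cos_angle = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))   # angle in radians

print(length_x, distance, angle)
print(np.isclose(x @ y, 0.0))         # True: <x, y> = 0, so x and y are orthogonal
```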


General inner product
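A sketch of the idea (my own illustration, assuming the usual construction): a general inner product on Rⁿ can be defined through a symmetric positive definite matrix A as ⟨x, y⟩ = xᵀ A y, with the dot product as the special case A = I. Two vectors can then be orthogonal with respect to one inner product but not another; the helper name inner is hypothetical:

```python
import numpy as np

def inner(x, y, A):
    """General inner product <x, y> = x^T A y for a symmetric positive definite A."""
    return float(x @ A @ y)

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # symmetric positive definite

x = np.array([1.0, 1.0])
y = np.array([-1.0, 1.0])

print(float(x @ y))        #  0.0 -> orthogonal w.r.t. the dot product
print(inner(x, y, A))      # -1.0 -> not orthogonal w.r.t. <x, y> = x^T A y
```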
Orthogonal Matrix
A square matrix A is an orthogonal matrix if and only if its columns are orthonormal vectors,

Aᵀ A = A Aᵀ = I,

which implies that A⁻¹ = Aᵀ.

Transformations by orthogonal matrices do not change the length of a vector or the angle between two vectors.
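A minimal NumPy sketch (not from the notes) verifying these properties for a rotation matrix, which is a standard example of an orthogonal matrix:

```python
import numpy as np

theta = np.pi / 3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns are orthonormal

print(np.allclose(Q.T @ Q, np.eye(2)))            # True: Q^T Q = I, so Q^{-1} = Q^T

x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # length preserved

cos_before = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
cos_after = (Q @ x) @ (Q @ y) / (np.linalg.norm(Q @ x) * np.linalg.norm(Q @ y))
print(np.isclose(cos_before, cos_after))          # angle preserved
```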
Orthogonal, Orthonormal Basis
Consider an n-dimensional vector space V and a basis {b1, … , bn} of V. If ⟨bi, bj⟩ = 0 for all i ≠ j, the basis is said to be orthogonal. If the basis vectors are also normalised, so that ⟨bi, bi⟩ = 1 for every i, the basis is called orthonormal.

Orthogonal complement

Consider a D-dimensional vector space V and an M-dimensional subspace U ⊆ V.

Then its orthogonal complement U⊥ is a (D − M)-dimensional subspace of V and contains all vectors in V that are orthogonal to every vector in U.

Furthermore, U ∩ U⊥ = {0}, where {0} is the set containing only the zero vector. Thus any vector x ∈ V can be uniquely decomposed into a sum of a vector in U and a vector in U⊥, i.e. written with respect to a basis of U and a basis of U⊥.
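An illustrative NumPy sketch (my own, not from the notes): the vectors orthogonal to a subspace spanned by the columns of a matrix B form the null space of Bᵀ, and a basis for it can be read off from the SVD.

```python
import numpy as np

# U is the M = 2 dimensional subspace of R^3 (D = 3) spanned by the columns of B.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Rows of Vt beyond the rank of B^T span the null space of B^T,
# i.e. the orthogonal complement of U.
U_, s, Vt = np.linalg.svd(B.T)
rank = int(np.sum(s > 1e-12))
complement = Vt[rank:]                     # basis of the (D - M)-dimensional complement

print(complement.shape)                    # (1, 3): here D - M = 3 - 2 = 1
print(np.allclose(complement @ B, 0.0))    # True: orthogonal to every vector in U
```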
Orthogonal Projections
Orthogonal projections (onto 1D subspaces)
If we have a vector x⃗ and a line determined by a vector b⃗, how do we find the point p⃗ on the line that is closest to x⃗?

If we think of p⃗ as an approximation of x⃗, then the length of the error vector e⃗ = x⃗ − p⃗ is the error in that approximation. Since p⃗ lies on the line through b⃗, we know that p⃗ = λ b⃗ for some number λ. In order to minimize the length of the error vector e⃗, we choose e⃗ perpendicular to b⃗.
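A short NumPy sketch (not from the notes) of this construction: perpendicularity of e⃗ and b⃗ gives λ = (b⃗ · x⃗)/(b⃗ · b⃗). The helper name project_onto_line is my own:

```python
import numpy as np

def project_onto_line(x, b):
    """Orthogonal projection of x onto the line spanned by b: p = (b.x / b.b) b."""
    lam = (b @ x) / (b @ b)
    return lam * b

x = np.array([1.0, 2.0])
b = np.array([3.0, 1.0])

p = project_onto_line(x, b)
e = x - p                      # error vector
print(p)                       # [1.5 0.5]: the closest point on the line to x
print(np.isclose(e @ b, 0.0))  # True: the error is perpendicular to b
```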
Orthogonal Projections (onto general subspaces)

Example: Projection onto a two-dimensional subspace with basis b1, b2.
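A NumPy sketch of such a projection (my own illustration, with example numbers chosen by me): stacking the basis vectors as columns of B, the projection of x solves the normal equations (BᵀB)λ = Bᵀx, and p = Bλ.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])     # basis vectors b1, b2 as columns
x = np.array([6.0, 0.0, 0.0])

lam = np.linalg.solve(B.T @ B, B.T @ x)   # normal equations
p = B @ lam                               # projection of x onto span{b1, b2}

print(p)                                  # [ 5.  2. -1.]
print(np.allclose(B.T @ (x - p), 0.0))    # True: the error is orthogonal to the subspace
```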
Gram-Schmidt Orthogonalization
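A minimal sketch of the classical Gram-Schmidt process (my own implementation, not from the notes): each basis vector has its components along the previously produced orthonormal vectors subtracted, then is normalised.

```python
import numpy as np

def gram_schmidt(B):
    """Orthonormalise the columns of B (assumed linearly independent)."""
    Q = np.zeros_like(B, dtype=float)
    for j in range(B.shape[1]):
        v = B[:, j].astype(float)
        for i in range(j):
            v = v - (Q[:, i] @ B[:, j]) * Q[:, i]   # remove components along earlier vectors
        Q[:, j] = v / np.linalg.norm(v)             # normalise
    return Q

B = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(B)
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q are orthonormal
```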
Rotations (in R2)
Rotations (in R3)
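As an illustrative sketch (not from the notes), the standard rotation matrix in R² and a rotation about the z-axis in R³; the function names rotation_2d and rotation_3d_z are my own:

```python
import numpy as np

def rotation_2d(theta):
    """Counter-clockwise rotation by theta in R^2."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def rotation_3d_z(theta):
    """Rotation by theta about the z-axis in R^3 (the x-y plane rotates, z is fixed)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

print(rotation_2d(np.pi / 2) @ np.array([1.0, 0.0]))      # ~ [0, 1]
print(rotation_3d_z(np.pi) @ np.array([1.0, 0.0, 2.0]))   # ~ [-1, 0, 2]
```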
Properties of Rotations

1) Rotations preserve distances.

2) Rotations preserve angles, i.e., the angle between R(θ)x⃗ and R(θ)y⃗ equals the angle between x⃗ and y⃗.

3) Rotations in three (or more) dimensions are generally not commutative. Only in two dimensions are vector rotations commutative, such that R(φ)R(θ) = R(θ)R(φ) for all φ, θ ∈ [0, 2π).
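A NumPy sketch (my own, not from the notes) checking these properties numerically; the helper names rot_x, rot_z and rot_2d are mine:

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def rot_2d(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

t = np.pi / 4
R1, R2 = rot_x(t), rot_z(t)

# 1) Lengths (and hence distances) are preserved by each rotation ...
x = np.array([1.0, 2.0, 3.0])
print(np.isclose(np.linalg.norm(R1 @ x), np.linalg.norm(x)))   # True

# 3) ... but rotations about different axes in R^3 do not commute ...
print(np.allclose(R1 @ R2, R2 @ R1))                           # False

# ... while rotations in R^2 always commute.
print(np.allclose(rot_2d(0.3) @ rot_2d(1.1), rot_2d(1.1) @ rot_2d(0.3)))  # True
```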
