Understanding Vectors, Vector Spaces and Linear Algebra

The document covers fundamental concepts in linear algebra, including vectors, vector spaces, linear independence, matrix rank, and the dot product. It highlights the importance of these concepts in machine learning, particularly in feature independence and model training. Additionally, it discusses orthogonality and its significance in reducing redundancy in data representation.


Table of Contents

Vectors, Vector Spaces and Linear Independence

Basis and Rank of Matrix

Norms (Euclidean and Manhattan)

Dot Product of Vectors

Angles and Orthogonality

Anish Pandey (20223039)
Aryan Mishra (20223056)
Vectors
A vector is a mathematical object that can be thought of as an array of numbers. These numbers, known as the components of the vector, represent a point in space or a quantity that has both magnitude and direction. A vector can have any number of dimensions.
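
As a quick sketch in Python with NumPy (our illustration, not part of the original slides), a vector is simply an array of components:

import numpy as np

# A 3-dimensional vector: three components giving magnitude and direction.
v = np.array([2.0, -1.0, 3.0])
print(v.shape)  # (3,) -> the vector lives in R^3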
Vector Spaces
A vector space (or linear space) is a collection of vectors that can be
added together and multiplied by scalars (numbers) while satisfying
certain properties (like closure, commutativity, and distributivity).
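
A minimal NumPy sketch of the closure properties in R^3 (the example vectors are arbitrary):

import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.5, -1.0, 4.0])

# Closure under addition: the sum is again a vector in R^3.
print(u + v)        # [1.5  1.  4.]
# Closure under scalar multiplication: scaling stays in R^3.
print(3.0 * u)      # [3. 6. 0.]
# Commutativity of addition: u + v equals v + u.
print(np.allclose(u + v, v + u))  # True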
Linear Independence

Vectors are linearly independent if no vector in the set can be written as a linear combination of the others. In other words, each vector provides unique information, and none of them is redundant. If vectors are linearly dependent, at least one vector can be expressed as a linear combination of the others, meaning it adds no new, independent information.
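
A small NumPy sketch (the vectors are invented for illustration): v3 is constructed as v1 + 2*v2, and least squares recovers exactly those coefficients, confirming the set is linearly dependent:

import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2   # deliberately a linear combination of v1 and v2

# Solve c1*v1 + c2*v2 = v3 for the coefficients.
A = np.column_stack([v1, v2])
coeffs, _, _, _ = np.linalg.lstsq(A, v3, rcond=None)
print(coeffs)                       # [1. 2.]
print(np.allclose(A @ coeffs, v3))  # True -> v3 adds no new information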

Example in Machine Learning:

In the context of machine learning, linear independence plays an important role in the features used for training models (a small PCA sketch follows below):

Feature Independence: If the input features are linearly independent, no feature is redundant; each one contributes something unique to the prediction. For example, principal component analysis (PCA) transforms a dataset into a new set of linearly independent features (principal components) that explain the variance in the data.

Model Weights: In linear regression, the weight vector w is found by fitting the model to the data. If the data has linearly dependent features, the model struggles to find unique coefficients for each feature, leading to overfitting or multicollinearity problems.
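
As a hedged illustration (synthetic data and a bare-NumPy PCA via eigendecomposition of the covariance matrix, rather than a library implementation), the sketch below adds a redundant third feature and shows that only two principal components carry variance:

import numpy as np

rng = np.random.default_rng(0)
# Two informative features plus a third that is their sum (redundant).
X = rng.normal(size=(100, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1]])

# PCA by eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
# Sorted largest first: the third eigenvalue is ~0, so the data
# really only spans two linearly independent directions.
print(np.round(eigvals[::-1], 6))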
Rank Of A Matrix
The rank of a matrix is a measure of the matrix's "non-redundancy." It gives us the
dimension of the column space (or equivalently, the row space) of the matrix. In
simple terms, the rank of a matrix tells us how many linearly independent columns
(or rows) the matrix has.
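
A one-line check with NumPy; the matrix here is a made-up example whose third column is the sum of the first two:

import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])
print(np.linalg.matrix_rank(M))  # 2 -> only two linearly independent columns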
Basis Of A Matrix

A basis of a matrix refers to a set of linearly independent vectors that span the column space (or row
space) of the matrix. In other words, a basis is a minimal set of vectors that can be combined to
express all other vectors in that space.
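
One way to extract such a basis, sketched with SymPy's columnspace() and reusing the matrix from the rank example above:

from sympy import Matrix

M = Matrix([[1, 0, 1],
            [0, 1, 1],
            [2, 1, 3]])
# columnspace() returns a set of linearly independent columns
# that span the column space, i.e. a basis.
for b in M.columnspace():
    print(b.T)   # Matrix([[1, 0, 2]]) and Matrix([[0, 1, 1]])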
Norm of a Vector
A norm measures the length (magnitude) of a vector. The Euclidean norm (L2) is the square root of the sum of the squared components, ||x||2 = sqrt(x1^2 + ... + xn^2), while the Manhattan norm (L1) is the sum of the absolute values of the components, ||x||1 = |x1| + ... + |xn|.
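
Both norms are available directly in NumPy:

import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x))         # 5.0 -> Euclidean (L2) norm
print(np.linalg.norm(x, ord=1))  # 7.0 -> Manhattan (L1) norm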
Dot Product of Two Vectors
The dot product of two vectors is a scalar quantity (a single number), computed as the sum of the products of corresponding components: a · b = a1 b1 + a2 b2 + ... + an bn. Geometrically, it equals ||a|| ||b|| cos θ and represents the projection of one vector onto the other.
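
A minimal NumPy check of the componentwise definition (the vectors are chosen arbitrarily):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -5.0, 6.0])

print(np.dot(a, b))   # 12.0
print(np.sum(a * b))  # 12.0 -> the same sum of componentwise products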
Properties of the Dot Product

Commutative Property: order does not matter, a · b = b · a.

Distributive Property: the dot product distributes over vector addition, a · (b + c) = a · b + a · c.

Scalar Multiplication Property: scaling one vector by k scales the dot product by the same factor, (k a) · b = k (a · b).
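
A quick numerical verification of all three properties (the specific vectors and the factor k are arbitrary):

import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
c = np.array([-1.0, 5.0])
k = 2.5

print(np.isclose(np.dot(a, b), np.dot(b, a)))                     # commutative
print(np.isclose(np.dot(a, b + c), np.dot(a, b) + np.dot(a, c)))  # distributive
print(np.isclose(np.dot(k * a, b), k * np.dot(a, b)))             # scalar multiplication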
Significance in ML and Neural Networks

In neural networks, each neuron computes a dot product between its weight vector and its input (z = w · x + b) before applying an activation function, so the dot product is the core operation of every dense layer.
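
A minimal sketch of a single neuron's forward pass; the names w, x, and b and the sigmoid activation are illustrative choices, not from the slides:

import numpy as np

x = np.array([0.5, -1.0, 2.0])  # input features
w = np.array([0.1, 0.4, -0.2])  # learned weights
b = 0.3                         # bias

z = np.dot(w, x) + b            # pre-activation: a dot product plus bias
y = 1.0 / (1.0 + np.exp(-z))    # sigmoid activation
print(z, y)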
Angle Between Two Vectors

The angle θ between two vectors measures how much they point in the same direction; it is obtained from the dot product as cos θ = (a · b) / (||a|| ||b||).

θ ranges from 0° (aligned) to 180° (opposite direction).
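
A short NumPy sketch computing the angle; np.clip guards against floating-point values slightly outside [-1, 1]:

import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
print(theta)  # 45.0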


Special Cases of Angles

θ = 0° (cos θ = 1): the vectors point in the same direction.
θ = 90° (cos θ = 0): the vectors are orthogonal.
θ = 180° (cos θ = -1): the vectors point in opposite directions.
Significance in ML and Neural Networks

The cosine of this angle underlies cosine similarity, a standard way to compare feature vectors and embeddings regardless of their magnitudes.
Orthogonality

Definition of Orthogonality

Two vectors are orthogonal when their dot product is zero: a · b = 0. This means the vectors are perpendicular (90° apart) and have no component in each other's direction.
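
A minimal check, using a made-up pair of perpendicular vectors in the plane:

import numpy as np

a = np.array([1.0, 2.0])
b = np.array([-2.0, 1.0])

print(np.dot(a, b))  # 0.0 -> a and b are orthogonal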
Significance in ML and Neural Networks

Feature Engineering: Reducing Redundancy. Orthogonal features carry no overlapping information, which is why decorrelating transforms such as PCA produce orthogonal components: each captures variance that the others do not.

Word Embeddings & Cosine Similarity. The similarity of two word embeddings is measured by the cosine of the angle between them; a cosine near 1 indicates related meanings, while orthogonal embeddings (cosine 0) are unrelated. A minimal sketch follows below.
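
A minimal sketch with made-up three-dimensional "embeddings" (real embeddings typically have hundreds of dimensions):

import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy embeddings, invented for illustration.
king = np.array([0.9, 0.7, 0.1])
queen = np.array([0.85, 0.75, 0.15])
car = np.array([0.1, 0.0, 0.95])

print(cosine_similarity(king, queen))  # close to 1: similar
print(cosine_similarity(king, car))    # much smaller: dissimilar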
