2.2 Linear Algebra & Geometry

Vectors, matrices, and tensors are fundamental concepts in machine learning and deep learning. Vectors are 1D arrays that represent data points, matrices are 2D arrays, and tensors generalize these to higher dimensions. Vectors are added tip to tail: place the tail of one at the tip of the other, and the sum runs from the first vector's tail to the second's tip. Matrices represent linear transformations that map input vectors to output vectors while preserving properties like parallel lines and distance ratios. These linear transformations are important for tasks like image and signal processing.


• Vectors, Matrices, and Tensors

o In any ML/DL problem, most data is presented as vectors, matrices, and tensors.
o A vector is a 1D array; e.g. a point in space is a vector of 3 coordinates (x, y, z). It is defined as having a magnitude and a direction.
o A matrix is a 2D array of numbers with a fixed number of rows and columns. It contains a number at the intersection of the nth row and the dth column. A matrix is usually denoted by square brackets [].
o A tensor is a generalization of vectors and matrices. E.g., a 1D tensor is a vector, a 2D tensor is a matrix.
o We can have a 3D tensor, like an image with RGB colours. This continues to expand to n-dimensional tensors.
o Following are some basic matrix terms used to represent a matrix.
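The shapes above can be illustrated with a short sketch (assuming NumPy is available; the array values are arbitrary examples, not from the notes):

```python
import numpy as np

# A vector is a 1D array: a point in 3D space with coordinates (x, y, z).
v = np.array([1.0, 2.0, 3.0])

# A matrix is a 2D array with a fixed number of rows and columns.
M = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 rows, 3 columns

# A tensor generalizes both, e.g. a 3D tensor for an RGB image
# (height x width x colour channels).
T = np.zeros((4, 4, 3))

print(v.ndim, M.ndim, T.ndim)    # number of dimensions: 1 2 3
print(M.shape)                   # (2, 3)
```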

• Vectors
o In a machine learning context, vectors are lists of numbers. E.g., if we are predicting a student's final test marks based on his previous test marks, age, and gender: Student1 = [65, 15, 0]
o In real-world projects, this dimension can go up to thousands.
o In linear algebra, each vector coordinate tells the distance and direction to go from the origin along that axis.

o Row Vector: all values of the vector written as a row (1 × n)
A = [a1 ⋯ an] (1×n)
o Column Vector: all values of the vector written as a column (n × 1)
B = [b1; ⋮; bn] (n×1)
• Vector Addition

o Suppose we have two vectors v⃗ and w⃗.
o To add v⃗ and w⃗, we move w⃗ so that its tail sits at the tip of v⃗. The sum is the vector drawn from the tail of v⃗ to the tip of the moved w⃗. This is because if you take a step along v⃗ and then a step along w⃗, the overall effect is the same as if you moved along v⃗ + w⃗.
v⃗ + w⃗ = [x1; y1] + [x2; y2] = [2; 4] + [6; −2] = [8; 2]
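The tip-to-tail picture reduces to element-wise addition of the coordinates, which a quick NumPy check (assuming NumPy is available) confirms for the example above:

```python
import numpy as np

v = np.array([2, 4])
w = np.array([6, -2])

# Tip-to-tail addition is just element-wise addition of coordinates.
print(v + w)   # [8 2]
```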

Rohan Sawant 2.2 Linear Algebra & Geometry 1


• Scalar-vector multiplication

o A vector is multiplied by a scalar by multiplying every element of the vector by the scalar.
o This process of stretching or shrinking the vector is called scaling. A scalar is a number that scales some vector.
• Unit Vectors (basis of a coordinate system)
o In the 2D coordinate system there are two unit vectors, one in the x direction and one in the y direction: î = [1; 0] and ĵ = [0; 1]
o A vector [3, −2] scales î by a factor of 3 and ĵ by a factor of −2.
o These coordinates describe the sum of two scaled vectors: 3î + (−2ĵ)
o The coordinates act as scalars that scale the unit vectors.

o It is possible to choose different basis (unit) vectors. With these different basis vectors, we get a completely new coordinate system.
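A minimal NumPy sketch of this idea (assuming NumPy is available): the coordinates of a vector are exactly the scalars applied to the unit vectors.

```python
import numpy as np

i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# The coordinates of [3, -2] scale the unit vectors: 3*i_hat + (-2)*j_hat.
v = 3 * i_hat + (-2) * j_hat
print(v)   # [ 3. -2.]
```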

• Linear combination & Span of vectors

o If two vectors (with any basis) are scaled by two scalars and then added, the result is a linear combination of the two vectors.

o The span of vectors v⃗ and w⃗ is the set of all of their linear combinations, i.e. the points they can reach. For most pairs of vectors, the span is the whole plane.



• Linearly independent and dependent vectors
o Vectors v⃗ and w⃗ are linearly dependent if they lie along the same direction.
o If adding a vector adds a new dimension to the span, the vectors are linearly independent; such a vector gives a contribution to the span. Formally, for all values of a, v⃗ ≠ a·w⃗.
o A set of vectors is linearly dependent if β1·a1 + ⋯ + βk·ak = 0 for coefficients β1, ⋯, βk that are not all zero: we can form the zero vector as a non-trivial linear combination of the vectors.
o When a collection of vectors is linearly dependent, at least one of the vectors can be expressed as a linear combination of the other vectors.

a1 = [0.2; −7.0; 8.6], a2 = [−0.1; 2.0; −1.0], a3 = [0.0; −1.0; 2.2] are linearly dependent, since a1 + 2a2 − 3a3 = 0.
We can express any of these vectors as a linear combination of the other two, e.g. a2 = (−1/2)·a1 + (3/2)·a3.
o A set of vectors is linearly independent if β1·a1 + ⋯ + βk·ak = 0 only when β1 = ⋯ = βk = 0: we can't form the zero vector as a linear combination of the vectors unless all coefficients are zero.
a1 = [1; 0; 0], a2 = [0; −1; 1], a3 = [−1; 1; 1] are linearly independent.
Suppose β1·a1 + β2·a2 + β3·a3 = 0; this is only possible if all of the coefficients are equal to zero.
o A linearly independent collection of n-vectors can have at most n elements.
o A collection of n linearly independent n-vectors (i.e., a collection of linearly independent vectors of the maximum possible size) is called a basis. If the n-vectors a1, ..., an are a basis, then any n-vector b can be written in a unique way as a linear combination of them.
o In many areas of ML, we search for features and examine their independence; a good understanding of bases helps transform data from one domain to another.
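The dependence example above can be verified numerically (assuming NumPy is available): the dependent combination gives the zero vector, and the matrix built from the three vectors has rank less than 3.

```python
import numpy as np

# The three vectors from the dependence example above.
a1 = np.array([0.2, -7.0, 8.6])
a2 = np.array([-0.1, 2.0, -1.0])
a3 = np.array([0.0, -1.0, 2.2])

# a1 + 2*a2 - 3*a3 is (up to floating-point rounding) the zero vector.
print(np.allclose(a1 + 2 * a2 - 3 * a3, 0))   # True -> linearly dependent

# Equivalently, the matrix with these columns has rank < 3.
A = np.column_stack([a1, a2, a3])
print(np.linalg.matrix_rank(A))               # 2
```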
• Linear transformations and matrices
o A linear transformation is a function that maps an input vector to an output vector. E.g., the Fourier transform maps an input sequence of N signal samples to a sequence of another N samples; the new samples are complex numbers which capture the amplitude and phase (time shift) of the input signal.
o With a linear transformation, a vector is moved in some particular way in the plane to get the output vector. E.g., the input vector is rotated 90° clockwise to form the output vector.

Rohan Sawant 2.2 Linear Algebra & Geometry 3


o This transform can also be applied to the whole set of vectors, i.e. transform the whole plane to see where the vectors will be mapped. One way to visualize this is to represent each vector as a point (the tip of the vector).

o A linear transformation must obey 3 properties:
▪ A line should remain a line after the transformation.
▪ The origin should remain fixed in place.
▪ Grid lines should remain parallel and evenly spaced.

o Several important geometric transformations can be expressed as a matrix-vector product w = Av, where A is a 2×2 matrix. E.g., scaling, rotation, reflection.
o To transform an input vector (x_in, y_in) into (x_out, y_out), we observe where the basis vectors î = [1; 0] and ĵ = [0; 1] land. E.g., to obtain the vector v⃗ = [−2; 4] we multiply î by −2 and ĵ by 4.
o With a linear transformation, we obtain a transformed î and a transformed ĵ. E.g. î_trans = [1; −2] and ĵ_trans = [3; 0].
v⃗_trans = −2·[1; −2] + 4·[3; 0] = [1·(−2) + 3·4; (−2)·(−2) + 0·4] = [10; 4]

o To map any input vector [x; y] with this linear transformation, we multiply it with the transformed basis vectors:
v⃗_trans = [1·x + 3·y; −2·x + 0·y]
o We can map the whole 2D plane if we know the transformed basis vectors.
o We can collect the transformed basis vectors into a 2×2 matrix which can be used for vector processing, e.g. [1 3; −2 0]. Intuitively, matrix-vector multiplication is summing two scaled column vectors:
[a b; c d]·[x; y] = x·[a; c] + y·[b; d] = [ax + by; cx + dy]
o If the columns of a matrix are linearly dependent, the 2D plane will be mapped onto a single line.
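The worked example above can be checked with a one-line matrix-vector product (assuming NumPy is available); `A @ v` is exactly the sum of the scaled columns:

```python
import numpy as np

# Columns of A are the transformed basis vectors i_trans and j_trans.
A = np.array([[1, 3],
              [-2, 0]])
v = np.array([-2, 4])

# A @ v sums the scaled columns: -2 * [1, -2] + 4 * [3, 0].
print(A @ v)                         # [10  4]
print(-2 * A[:, 0] + 4 * A[:, 1])    # same result, column view
```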



• The determinant
o A more descriptive intuition for linear transformations is how the area of an object changes under the transformation. We keep track of a unit square, defined by the basis vectors, and how it is transformed under an operation. E.g. dilation & shear transformation.

o Any object can be approximated by squares, and every unit square transforms the same way. So, under a shear operation, every grid square of size 1 will be mapped into a parallelogram.

o This area-scaling factor is called the determinant of the transformation:
det([a b; c d]) = (a·d) − (b·c)
The a·d term tells how much the square has stretched in the x and y directions, while b·c tells how much it has been squished in the diagonal direction.
The following transformation doubles every area:
det([−1 1; −1 −1]) = (−1·−1) − (1·−1) = 1 − (−1) = 1 + 1 = 2

o If a linear transformation maps the 2D plane onto a single line, the determinant value = 0:
det([4 2; 2 1]) = (4·1) − (2·2) = 4 − 4 = 0



o If the determinant is negative, it implies the order of î and ĵ is reversed, i.e. after the transformation L(ĵ) is to the right of L(î): the orientation of the space has been reversed.
det([2 1; −1 −3]) = (2·−3) − (1·−1) = −6 − (−1) = −6 + 1 = −5

o In 3D space, we use the determinant to calculate how the volume of the unit cube transforms into a parallelepiped.

det([a b c; d e f; g h i]) = a·det([e f; h i]) − b·det([d f; g i]) + c·det([d e; g h])
= a(ei − fh) − b(di − fg) + c(dh − eg)
= aei − afh − bdi + bfg + cdh − ceg
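As a quick numerical check of both formulas (assuming NumPy is available; the 3×3 matrix is an arbitrary example), `np.linalg.det` computes the same area/volume scaling factor:

```python
import numpy as np

# 2x2: the area scaling factor of the unit square under the shear example.
shear = np.array([[-1, 1],
                  [-1, -1]])
print(round(np.linalg.det(shear)))   # 2: doubles every area

# 3x3: the volume scaling factor of the unit cube.
B = np.array([[1, 2, 3],
              [0, 1, 4],
              [5, 6, 0]])
# Cofactor expansion: 1*(0-24) - 2*(0-20) + 3*(0-5) = 1
print(round(np.linalg.det(B)))       # 1
```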
• Inverse matrices
o In a linear equation, the variables x, y are only scaled by coefficients and summed. To solve a system, we align the variables on the left and the constants on the right, then collect the coefficients into a matrix:
2x + 2y = −4
1x + 3y = −1
[2 2; 1 3]·[x; y] = [−4; −1], i.e. A·x⃗ = b⃗
o x⃗ is the input vector and it is transformed into b⃗ by the matrix A. So, it moved somewhere in the 2D space to land on the resulting vector b⃗.
o So, we are looking for a vector x⃗ which lands on b⃗.



o Case 1: matrix A transforms the 2D plane into a line, i.e. the columns of A are linearly dependent, i.e. det = 0.
Case 2: matrix A transforms the 2D plane into a 2D plane, i.e. the columns of A are linearly independent, i.e. det ≠ 0.

o If the determinant is not 0, there is a unique solution x⃗ that A transforms into b⃗. Since x⃗ is the input vector transformed into b⃗, we can also go back from b⃗ in search of x⃗. This backward search is the inverse transformation.
E.g., A = [3 1; 0 2] and its inverse transformation is A⁻¹ = [3 1; 0 2]⁻¹ = (1/6)·[2 −1; 0 3]

o If we consecutively apply A and then A⁻¹, we go back to the basis vectors î = [1; 0] and ĵ = [0; 1]. This consecutive transformation is a matrix multiplication; A⁻¹·A is also called the identity transformation, as it does nothing and gives us the identity matrix.
o To solve the equation A·x⃗ = b⃗, we multiply both sides by A⁻¹:
A⁻¹·A·x⃗ = A⁻¹·b⃗, so x⃗ = A⁻¹·b⃗
o Intuitively, we play the transformation in reverse and follow b⃗: wherever b⃗ lands is the solution x⃗.

o When det(A) = 0, every vector is squashed onto a line. For a vector mapped onto this line, we cannot determine its location of origin, i.e. we can't find A⁻¹.
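The system above can be solved numerically (assuming NumPy is available); `np.linalg.solve` is the usual route, and multiplying by the explicit inverse gives the same answer:

```python
import numpy as np

# 2x + 2y = -4
# 1x + 3y = -1
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
b = np.array([-4.0, -1.0])

# det(A) = 2*3 - 2*1 = 4 != 0, so a unique solution exists.
x = np.linalg.solve(A, b)      # numerically preferred over inv(A) @ b
print(x)                       # [-2.5  0.5]
print(np.linalg.inv(A) @ b)    # same: x = A^-1 b
```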



• Rank, Column Space and Null Space
o Rank is the number of dimensions in the output of a linear transformation.
o If a linear transformation of 3D vectors lands on a line, which is 1D, we say that we have a rank-1 transformation. If we have a rank-2 transformation, then the 3D vectors land on a plane.

o The column space is the span of the columns of the matrix, i.e. all possible outputs b⃗ = A·x⃗ of the matrix A.
o The null space is the set of all vectors that are mapped into the zero vector b⃗ = [0; 0].
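A small sketch of rank and null space (assuming NumPy is available; the matrix is a hypothetical example with dependent columns, the same one used for det = 0 above):

```python
import numpy as np

# Linearly dependent columns: the plane collapses onto a line, so rank 1.
A = np.array([[4.0, 2.0],
              [2.0, 1.0]])
print(np.linalg.matrix_rank(A))   # 1

# A vector in the null space: A maps it to the zero vector.
n = np.array([1.0, -2.0])
print(A @ n)                      # [0. 0.]
```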
• Distance between 2 points

(Figure: points p(a1, a2) and q(b1, b2) in the plane; the distance d_pq is the hypotenuse of a right triangle with legs a1 − b1 and a2 − b2.)

o By the Pythagorean theorem:
o In 2D space, d_pq = √((a1 − b1)² + (a2 − b2)²)
o In 3D space, for p(a1, a2, a3) and q(b1, b2, b3): d_pq = √((a1 − b1)² + (a2 − b2)² + (a3 − b3)²)
o In nD space, for p(a1, a2, a3, ⋯, an) and q(b1, b2, b3, ⋯, bn):
d_pq = √(Σᵢ₌₁ⁿ (aᵢ − bᵢ)²)
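The n-dimensional formula is a one-liner in code (assuming NumPy is available; the points are arbitrary examples), and `np.linalg.norm` computes the same Euclidean distance:

```python
import numpy as np

p = np.array([5.0, 1.0, 4.0])
q = np.array([2.0, 5.0, 4.0])

# Pythagoras in n dimensions: sqrt of the sum of squared differences.
d = np.sqrt(np.sum((p - q) ** 2))
print(d)                        # 5.0 (legs 3, 4, 0)

# np.linalg.norm of the difference gives the same distance.
print(np.linalg.norm(p - q))    # 5.0
```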

• Distance of a point from the origin

(Figure: point p(a1, a2) in the plane and origin o(0, 0); the distance d_po is the hypotenuse of a right triangle with legs a1 and a2.)

o By the Pythagorean theorem:
o In 2D space, d_po = √(a1² + a2²)
o In 3D space, for p(a1, a2, a3): d_po = √(a1² + a2² + a3²)
o In nD space, for p(a1, a2, a3, ⋯, an):
d_po = √(Σᵢ₌₁ⁿ aᵢ²)



• Dot Product or Scalar Product or Inner Product
o Two vectors can be multiplied using the "dot product". We mostly use the dot product in ML.
o In matrix notation, the dot product of two vectors is the sum of their component-wise products:

a = [2; 3; 4], b = [5; 6; 7]
a · b = a_x·b_x + a_y·b_y + a_z·b_z = 2·5 + 3·6 + 4·7 = 56
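The component-wise definition maps directly to code (assuming NumPy is available):

```python
import numpy as np

a = np.array([2, 3, 4])
b = np.array([5, 6, 7])

# Sum of component-wise products: 2*5 + 3*6 + 4*7.
print(np.dot(a, b))    # 56
print(np.sum(a * b))   # 56, same computation spelled out
```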

o Geometrically, the dot product of two vectors projects vector w⃗ onto vector v⃗, then multiplies the length of the projected w⃗ by the length of v⃗:
v⃗ · w⃗ = [8; 2] · [−2; −4] = (length of projected w⃗) · (length of v⃗)

o If the projection falls on the opposite side of v⃗, the dot product is negative, and if the vectors are perpendicular, the dot product is zero.

o If v⃗ and w⃗ have the same length, the projected length of each vector on the other is the same. Thus, the calculation order does not matter:
v⃗ · w⃗ = w⃗ · v⃗

o If v⃗ is scaled, e.g. 3v⃗, the scaling scales its length. Thus:
(3v⃗) · w⃗ = 3(v⃗ · w⃗)



o The relation between the two calculation methods comes from duality. To see it, consider a linear transformation from multiple dimensions to a single dimension, i.e. to the number line.
o A line with evenly spaced dots on it is mapped to the 1D line such that the dots on the mapped line remain evenly spaced (a property of linear transformations). We can keep track of where the basis vectors are transformed.

o Using the basis vectors, we can calculate where any vector will land on the transformed line.
E.g. î = [1; 0] and ĵ = [0; 1] change to [1 −2]

o A vector v⃗ = [4; 3] will be transformed to
[1 −2]·[4; 3] = 4·1 + 3·(−2) = 4 − 6 = −2

o For further clarification, think of how 2D points are projected onto a single line. The transformed single line has a unit vector û.

o By symmetry, the projection of the basis vector î onto û equals the x-coordinate u_x of û, and the projection of ĵ onto û equals the y-coordinate u_y of û.



o When we multiply a vector (x, y) with u_x and u_y respectively and sum the products, we get the position where the (x, y) vector lands on the line defined by the vector û, i.e. its new coordinate on the single line.
o The matrix-vector product is thus dual to the dot product interpretation:
[u_x u_y]·[x; y] = u_x·x + u_y·y
[u_x; u_y] · [x; y] = u_x·x + u_y·y
o To calculate the length or norm of a single vector using the dot product:
‖v⃗‖ = √(v⃗ · v⃗)
 e.g. v⃗ = [1; 3; 2]
 v⃗ · v⃗ = 1 + 9 + 4 = 14
 ‖v⃗‖ = √(v⃗ · v⃗) = √14
o To normalize a vector, i.e. make ‖v⃗‖ = 1, we divide the vector v⃗ by ‖v⃗‖:
 û = v⃗/‖v⃗‖ = [1/√14; 3/√14; 2/√14]
 ‖û‖² = 1/14 + 9/14 + 4/14 = 1
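The norm and normalization steps above can be checked directly (assuming NumPy is available):

```python
import numpy as np

v = np.array([1.0, 3.0, 2.0])

# ||v|| = sqrt(v . v) = sqrt(1 + 9 + 4) = sqrt(14)
norm = np.sqrt(np.dot(v, v))
print(np.isclose(norm, np.sqrt(14)))        # True

# Normalizing: dividing by the norm gives a unit vector.
u = v / norm
print(np.isclose(np.linalg.norm(u), 1.0))   # True
```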
o We can also calculate the angle between two vectors. For a unit vector u, the projection of a on u has length ‖a‖·cos θ, so:
a · u = ‖a‖·cos θ
o Writing u = b/‖b‖ for a general vector b:
(a · b)/‖b‖ = ‖a‖·cos θ
a · b = ‖a‖·‖b‖·cos θ
cos θ = (a · b)/(‖a‖·‖b‖)
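The cosine formula gives the angle directly (assuming NumPy is available; the two vectors are arbitrary examples chosen to make a 45° angle):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# cos(theta) = (a . b) / (||a|| * ||b||)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.degrees(np.arccos(cos_theta))
print(round(theta))   # 45
```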
• Cross Product or Vector Product
o Two vectors can also be multiplied using the "cross product". The cross product is defined for 3D vectors.

o The cross product a × b of two vectors is another vector that is at right angles to both.

o The magnitude (length) of the cross product equals the area of a parallelogram with vectors a and b for sides.


o Its magnitude can be calculated as
|a × b| = |a| · |b| · sin θ

|a| = √(a1² + a2² + a3²)

|b| = √(b1² + b2² + b3²)

θ is the angle between a and b

o It is to be noted that the cross product is a vector with a specified direction. The resultant is always perpendicular to both a and b.
o In case a and b are parallel vectors, the resultant is zero, as sin(0) = 0.
o When a and b start at the origin point (0, 0, 0), the cross product c = a × b will end at:

a = [2 3 4], b = [5 6 7] (components in the order [x y z])
c_x = a_y·b_z − a_z·b_y = 3·7 − 4·6 = −3
c_y = a_z·b_x − a_x·b_z = 4·5 − 2·7 = 6
c_z = a_x·b_y − a_y·b_x = 2·6 − 3·5 = −3

o The cross product could point in the completely opposite direction and still be
perpendicular to the two other vectors, so we have the "Right Hand Rule"
o With your right-hand, point your index finger along vector a, and point your middle
finger along vector b: the cross product goes in the direction of your thumb.
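The component formulas and the perpendicularity property can both be checked numerically (assuming NumPy is available; `np.cross` follows the right-hand rule):

```python
import numpy as np

a = np.array([2, 3, 4])
b = np.array([5, 6, 7])

c = np.cross(a, b)
print(c)                            # [-3  6 -3]

# The result is perpendicular to both inputs: both dot products are zero.
print(np.dot(c, a), np.dot(c, b))   # 0 0
```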



• Equation of a line (2D), Plane (3D) and Hyperplane (nD)
o Equation of a 2D line: w1·x1 + w2·x2 + w0 = 0 (w1, w2 are constants, while x1, x2 are dimensions)
o Equation of a 3D plane: w1·x1 + w2·x2 + w3·x3 + w0 = 0 (w1, w2, w3 are constants, while x1, x2, x3 are dimensions)
o As we know, a line separates all 2D points into 2 regions, and a plane separates 3D space into 2 parts.

o For an n-dimensional hyperplane, the equation would be w0 + Σᵢ wᵢ·xᵢ = 0

o Consider two vectors: one is the vector of all constants, W; the other is the column vector having x1, x2, x3, …, xn, as X.
o The equation of the hyperplane is then the dot product of these two vectors: Wᵀ·X + w0 = 0

o What is the constant c, or w0?

▪ We can understand this using the 2D equation. The general line equation is y = mx + c
▪ m is the slope of the line & c is the y-intercept
▪ So, for a line passing through the origin, c = 0



o We can summarise it as: a hyperplane passing through the origin has w0 = 0, i.e. Wᵀ·X = 0.
o Now, consider Wᵀ·X = 0. Geometrically, X represents a vector (or a point) with components x1, x2, and W is a vector which is perpendicular to X.
o i.e., Wᵀ·X = 0 signifies that W and X are perpendicular to each other, since their product is 0.
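The sign of Wᵀ·X + w0 tells which side of the hyperplane a point lies on, which is how linear classifiers use this equation. A minimal sketch (assuming NumPy is available; the w, w0 values define a hypothetical 2D line x1 + x2 − 1 = 0):

```python
import numpy as np

# Hypothetical hyperplane w^T x + w0 = 0 in 2D: x1 + x2 - 1 = 0.
w = np.array([1.0, 1.0])
w0 = -1.0

def side(x):
    """Sign of w^T x + w0: which side of the hyperplane x lies on."""
    return np.sign(np.dot(w, x) + w0)

print(side(np.array([2.0, 2.0])))   # 1.0: positive side
print(side(np.array([0.0, 0.0])))   # -1.0: negative side
print(side(np.array([0.5, 0.5])))   # 0.0: on the hyperplane
```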

• Distance of a point from a Plane

o We know that a plane through the origin satisfies Wᵀ·X = 0 (the W vector is perpendicular to the plane).
o The shortest distance of a point X from this plane is |Wᵀ·X| / ‖W‖



• Circle
o The equation of a circle whose centre is at (0, 0) is r² = x² + y²
o A circle divides the whole region into 2 parts: one inside & one outside.
o Suppose we have a point p(d, c) and we want to check whether it is inside or outside the circle:
▪ If d² + c² < r²: p lies inside the circle
▪ If d² + c² > r²: p lies outside the circle
▪ If d² + c² = r²: p lies on the circle
o For a sphere (3D) having x, y, z as its dimensions, the equation is r² = x² + y² + z²
o For a hypersphere (nD) having x1, x2, x3, …, xn as its dimensions, the equation is r² = x1² + x2² + x3² + ⋯ + xn²
o The inside/outside check remains the same in n-dimensional space.
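The inside/outside check works in any dimension by comparing the squared distance from the centre with r². A small sketch (assuming NumPy is available; `locate` is a hypothetical helper name, and the sample points are arbitrary):

```python
import numpy as np

def locate(point, r):
    """Compare ||p||^2 with r^2 for a circle/hypersphere centred at the origin."""
    s = np.sum(np.array(point, dtype=float) ** 2)
    if s < r ** 2:
        return "inside"
    if s > r ** 2:
        return "outside"
    return "on"

print(locate((1, 1), 2))       # inside:  1 + 1 < 4
print(locate((3, 4), 5))       # on:      9 + 16 == 25
print(locate((1, 2, 3), 2))    # outside: 14 > 4 (works in 3D too)
```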
• Ellipse
o The equation of an ellipse with centre (h, k) and semi-axes a and b is (x − h)²/a² + (y − k)²/b² = 1, where (x, y) means any point with coordinates x, y.
o Suppose we have a point p(d, c) and we want to check whether it is inside or outside the ellipse:
o If (d − h)²/a² + (c − k)²/b² < 1: p lies inside the ellipse
o If (d − h)²/a² + (c − k)²/b² > 1: p lies outside the ellipse
o If (d − h)²/a² + (c − k)²/b² = 1: p lies on the ellipse

• Ellipsoid (3D)
o An ellipsoid is the 3D variant of the ellipse; its equation is the ellipse equation extended to 3D. The point-checking properties remain the same for an ellipsoid as for an ellipse.
o The above concept can be extended to a hyperellipsoid, which is basically an n-dimensional ellipse.



• Square & Rectangle

