
MTL101-Linear Algebra

Adarsh Roy

Other important points to keep in mind:


 rank(A) = number of non-zero rows in an echelon form of A.
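A minimal sketch of this rank computation in Python with sympy (the library choice and the example matrix are illustrative assumptions, not from the notes):

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])          # row 2 = 2 * row 1, so rank should be 2
    R, pivots = A.rref()             # reduced row echelon form + pivot columns
    non_zero_rows = sum(1 for i in range(R.rows) if any(R.row(i)))
    print(non_zero_rows, A.rank())   # 2 2  (both counts agree)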
 det(E) is not necessarily 1, where E is an elementary matrix. (It is −1 for an elementary matrix formed by exchanging two rows of I.)
 Every vector space has the zero vector in it.
 C[a, b] is notation for the set of all functions which are continuous on [a, b].
 The dimension of the zero vector space is 0, and its basis is the empty set.
 The zero vector cannot belong to an L.I. set.
 A subset of an L.I. set is L.I., and a superset of an L.D. set is L.D.
 If a set B is a basis of a vector space V(F) then:
o B is L.I.
o L(B) = V(F)
o B is a maximal L.I. set; the number of elements in B is the dimension of V(F).
o Every u ∈ V(F) is a unique linear combination of vectors in B.
 Any finite L.I. set in V(F) is part of some basis (it can be extended to a basis).
 If V(F) has no finite basis, we say it is infinite-dimensional.
 An ordered basis lets us write the "co-ordinate" vector [u]_B of any u ∈ V(F); a sketch follows below.
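A sketch in Python/numpy (the basis and vector are assumed examples): the coordinate vector [u]_B solves the linear system whose coefficient columns are the ordered basis vectors.

    import numpy as np

    basis = np.column_stack([(1.0, 0.0), (1.0, 1.0)])  # ordered basis b1, b2 as columns
    u = np.array([3.0, 2.0])
    coords = np.linalg.solve(basis, u)                 # [u]_B
    print(coords)                                      # [1. 2.], i.e. u = 1*b1 + 2*b2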
 row rank(A) = dim(row space of A) = column rank(A) = dim(column space of A) = simply rank(A).
 If A and B are row-equivalent matrices (i.e., each can be converted into the other by elementary row operations), then the row space of A is the same as the row space of B.
 To find which columns of A form a basis for the column space of A (i.e., which columns constitute an L.I. set), we find which columns have a leading (pivot) entry in rref(A). Likewise, to find which rows of A form a basis for the row space of A, we find which columns have a leading entry in rref(Aᵀ). A sketch follows below.
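A sketch of this procedure with sympy (the example matrix is an illustrative assumption):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [2, 4, 1, 1],
                [3, 6, 1, 2]])      # col 2 = 2*col 1, col 4 = col 1 - col 3, row 3 = row 1 + row 2
    _, col_pivots = A.rref()        # pivot columns of rref(A) -> column-space basis
    _, row_pivots = A.T.rref()      # pivot columns of rref(A^T) -> row-space basis
    print(col_pivots)               # (0, 2): columns 1 and 3 of A form a basis
    print(row_pivots)               # (0, 1): rows 1 and 2 of A form a basis
    col_basis = [A.col(j) for j in col_pivots]
    row_basis = [A.row(i) for i in row_pivots]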
 If W₁ (= L(S₁)) and W₂ (= L(S₂)) are subspaces of V(F), then W₁ + W₂ (= L(S₁ ∪ S₂)) is also a subspace.
 W₁ + W₂ is called a direct sum if W₁ ∩ W₂ = {0}.
 If V = W₁ + W₂, then V is the direct sum of W₁ and W₂ if and only if every v ∈ V can be written as v = w₁ + w₂ for a unique w₁ ∈ W₁ and a unique w₂ ∈ W₂ (which means that if this condition is satisfied then W₁ ∩ W₂ = {0}, and vice-versa).
 For any linear transformation T, T(0) = 0.
 Any linear transformation T: R² → R² is given by:
T(x, y) = (ax + by, cx + dy) for some choice of a, b, c, d ∈ R.
 A linear transformation T: V → V is also called a linear operator.
 If a linear transformation T: V → W is bijective, then the inverse map is also a linear transformation. Such a T is called an isomorphism, and V and W are called isomorphic if such a T exists.
 ker(T) = {v ∈ V : T(v) = 0} is the kernel, and T(V) = {T(v) : v ∈ V} is the image (range) of T.
 For a linear operator T: V → V on a finite-dimensional vector space:
T is injective ⇔ T is surjective.
 Two matrices A and B are called conjugate/similar if ∃ an invertible P s.t. B = P⁻¹AP. The trace and determinant of conjugate matrices are the same (numerical check below).
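A quick numerical check with numpy (random matrices are assumed for illustration; a random P is invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    P = rng.standard_normal((3, 3))              # invertible with probability 1
    B = np.linalg.inv(P) @ A @ P                 # B is similar to A
    print(np.isclose(np.trace(A), np.trace(B)))              # True
    print(np.isclose(np.linalg.det(A), np.linalg.det(B)))    # True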
 Suppose T: V → W and T′: W → U are linear maps, and B, B′, B″ are bases of V, W, U respectively. Then [T′∘T]_{B,B″} = [T′]_{B′,B″} · [T]_{B,B′}.
 By definition, an eigenvector cannot be the zero vector.
 For an invertible operator T, zero cannot be an eigenvalue. And if λ is an eigenvalue of T, then λ⁻¹ is an eigenvalue of T⁻¹.
 For a linear operator T: V → V, det(XI − T) is the characteristic polynomial. The degree of this polynomial is dim(V).
 These three statements are equivalent (a numerical check follows below):
o λ is an eigenvalue of T.
o The operator λI − T: V → V is not injective (its kernel is not the zero space).
o det(λI − T) = 0.
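A numerical sketch of the equivalence in matrix form (Python/numpy; the 2×2 example is an assumption):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])                   # triangular, eigenvalues 2 and 3
    lams, vecs = np.linalg.eig(A)
    for lam, v in zip(lams, vecs.T):
        print(np.isclose(np.linalg.det(lam * np.eye(2) - A), 0))  # True: det = 0
        print(np.allclose((lam * np.eye(2) - A) @ v, 0))          # True: v is in the kernel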
 All the above properties related to T have their matrix form too, where T is replaced by [T]_B and a vector v by the column matrix [v]_B; or, in general, with a square matrix A and a column matrix x. For e.g., (λI − A)x = 0 says that x belongs to the eigenspace of A associated to the eigenvalue λ (i.e., a non-zero such x is an eigenvector).
 Subspaces W₁, W₂, …, Wₙ ⊆ V are called independent if:
w₁ + w₂ + ⋯ + wₙ = 0 with wᵢ ∈ Wᵢ ⇒ wᵢ = 0 ∀ i = 1, 2, …, n.
 Eigenspaces (and thus eigenvectors) corresponding to distinct
eigenvalues are independent.
 A linear operator T: V → V is called diagonalizable if ∃ a basis B s.t. [T]_B is a diagonal matrix. Likewise, a matrix A is called diagonalizable if it is conjugate/similar to a diagonal matrix (sketch below).
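A sketch with numpy (the symmetric example matrix is assumed, chosen because symmetric matrices are always diagonalizable): with eigenvectors as the columns of P, P⁻¹AP comes out diagonal.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                   # symmetric, hence diagonalizable
    vals, P = np.linalg.eig(A)                   # columns of P are eigenvectors
    D = np.linalg.inv(P) @ A @ P
    print(np.allclose(D, np.diag(vals)))         # True: diagonal in this basis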
 Cayley-Hamilton Theorem: every matrix satisfies its characteristic equation. Applications of the Cayley-Hamilton Theorem (sketch below):
o To find the inverse of a matrix.
o p(0) = (−1)ⁿ det(A), where p is the characteristic polynomial of an n × n matrix A, so p(0) ≠ 0 ⇒ A is invertible.
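A hedged sketch of the inverse-via-Cayley-Hamilton application (Python/numpy; the 2×2 matrix is an assumed example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    n = A.shape[0]
    c = np.poly(A)       # characteristic polynomial coefficients, monic, highest degree first
    # Cayley-Hamilton: A^n + c[1] A^(n-1) + ... + c[n] I = 0, so
    # A^(-1) = -(A^(n-1) + c[1] A^(n-2) + ... + c[n-1] I) / c[n], valid when c[n] = p(0) != 0.
    B = np.eye(n)        # Horner-style evaluation of the bracketed sum
    for k in range(1, n):
        B = B @ A + c[k] * np.eye(n)
    A_inv = -B / c[n]
    print(np.allclose(A_inv @ A, np.eye(n)))     # True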
 A vector space with an inner product defined on it is called an inner product space (i.p.s.).
 An inner product on a vector space is a map V × V → F with the following properties:
o <u|u> ≥ 0 (equality holds iff u = 0)
o <u|v> = conj(<v|u>) (conjugate symmetry)
o <αu + βv | w> = α<u|w> + β<v|w> (linearity in the first argument), but
o <w | αu + βv> = conj(α)<w|u> + conj(β)<w|v> (conjugate-linear in the second argument)
o |v|² = <v|v> defines the norm.
 For functions, <f|g> = ∫ₐᵇ f(t) g(t) dt; then <f|f> ≥ 0, and <f|f> = 0 ⇒ f = 0 if f is continuous (numerical sketch below).
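A numerical sketch (Python/numpy; the interval [0, π] and the functions are assumed examples), approximating the integral with the trapezoidal rule:

    import numpy as np

    t = np.linspace(0.0, np.pi, 10001)
    f, g = np.sin(t), np.cos(t)
    print(np.trapz(f * g, t))    # ~0: sin and cos are orthogonal on [0, pi]
    print(np.trapz(f * f, t))    # ~pi/2 > 0; <f|f> vanishes only for f = 0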
 The zero vector is orthogonal to every vector. Every orthogonal set of non-zero vectors is L.I.
 All inner products on R² are characterized as <x|y> = ax₁y₁ + b(x₁y₂ + x₂y₁) + dx₂y₂, for some choice of a, b, d ∈ R s.t. a > 0 and ad − b² > 0. Here, x = (x₁, x₂) and y = (y₁, y₂).
 Let V be an i.p.s. and let S₁ ⊆ V be an L.I. set of n vectors in V; then we can always construct an orthogonal/orthonormal L.I. set S₂ ⊆ V of n vectors such that L(S₁) = L(S₂). ~ Gram-Schmidt orthogonalization process (sketch below).
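A minimal sketch of the process in Python/numpy (the standard dot product and the input list are assumptions; the input must be L.I. so no residual norm is zero):

    import numpy as np

    def gram_schmidt(vectors):
        """Return an orthonormal list spanning the same subspace as `vectors`."""
        ortho = []
        for v in vectors:
            w = np.array(v, dtype=float)
            for q in ortho:
                w = w - np.dot(w, q) * q         # remove the component along q
            ortho.append(w / np.linalg.norm(w))  # normalize; norm > 0 for L.I. input
        return ortho

    S2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
    print(np.dot(S2[0], S2[1]))                  # ~0: the output is orthogonal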
 If an i.p.s. V has an orthogonal basis {q₁, q₂, …, qₙ} and we write some v ∈ V as v = Σᵢ αᵢqᵢ, then αᵢ = <v|qᵢ>/<qᵢ|qᵢ>, so finding coefficients is easier if we have an orthogonal basis; this is the main advantage of an orthogonal basis (sketch below).
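A sketch of the coefficient formula (Python/numpy; the orthogonal basis of R³ and the vector v are assumed examples):

    import numpy as np

    Q = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]              # an orthogonal basis of R^3
    v = np.array([3.0, 1.0, 2.0])
    alphas = [np.dot(v, q) / np.dot(q, q) for q in Q]
    print(alphas)                                # [2.0, 1.0, 2.0]
    print(np.allclose(v, sum(a * q for a, q in zip(alphas, Q))))  # True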
 For an i.p.s. V, a subspace W ⊆ V, and some v ∈ V: w₀ ∈ W is the best approximation of v by a vector in W (i.e., w₀ is the nearest vector to v in W) if |v − w₀| ≤ |v − w| ∀ w ∈ W (sketch below).
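A sketch (Python/numpy; W and v are assumed examples): the best approximation is the orthogonal projection of v onto W, computed here by least squares.

    import numpy as np

    M = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])                   # columns span W = the xy-plane in R^3
    v = np.array([1.0, 2.0, 5.0])
    coeffs, *_ = np.linalg.lstsq(M, v, rcond=None)
    w0 = M @ coeffs                              # nearest vector to v in W
    print(w0)                                    # [1. 2. 0.]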