
Span

Rough Idea: The span of a set of vectors {v1, . . . , vk} is the “smallest”
“subspace” of Rn containing v1, . . . , vk.
This is not very precise as stated (e.g., what is meant by “subspace”?).
Here is the precise definition:

Def: The span of a set of vectors {v1, . . . , vk} is the set of all linear
combinations of v1, . . . , vk.
i.e.: span(v1, . . . , vk) = {c1 v1 + · · · + ck vk | c1, . . . , ck ∈ R}.
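To make this concrete in code, here is a minimal sketch (not from the notes;
the vectors are made up) that tests whether a vector b lies in span(v1, v2)
by solving c1 v1 + c2 v2 = b numerically with numpy:

    import numpy as np

    v1 = np.array([1.0, 0.0, 1.0])   # illustrative vectors
    v2 = np.array([0.0, 1.0, 1.0])
    b = np.array([2.0, 3.0, 5.0])    # here b = 2*v1 + 3*v2

    V = np.column_stack([v1, v2])    # columns are the spanning vectors
    c, *_ = np.linalg.lstsq(V, b, rcond=None)
    print(np.allclose(V @ c, b), c)  # True [2. 3.]: b is in the span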

Linear Independence
The definition in the textbook is:

Def: A set of vectors {v1, . . . , vk} is linearly independent if none of the
vectors is a linear combination of the others.
∴ A set of vectors {v1, . . . , vk} is linearly dependent if at least one of
the vectors is a linear combination of the others.
Caveat: This definition only applies to a set of two or more vectors.

There is also an equivalent definition, which is somewhat more standard:

Def: A set of vectors {v1, . . . , vk} is linearly independent if the only
linear combination c1 v1 + · · · + ck vk = 0 equal to the zero vector is the
one with c1 = · · · = ck = 0.
∴ A set of vectors {v1, . . . , vk} is linearly dependent if there is a linear
combination c1 v1 + · · · + ck vk = 0 equal to the zero vector, where not all
the scalars c1, . . . , ck are zero.

Point: Linear independence of {v1 , . . . , vk } means:

If c1 v1 + · · · + ck vk = 0, then c1 = · · · = ck = 0.

This way of phrasing linear independence is often useful for proofs.
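This condition can also be tested numerically: {v1, . . . , vk} is linearly
independent exactly when the matrix with columns v1, . . . , vk has rank k.
A small sketch (the example vectors are made up, and numpy's rank is a
floating-point computation, not a proof):

    import numpy as np

    vs = [np.array([1.0, 2.0, 0.0]),
          np.array([0.0, 1.0, 1.0]),
          np.array([1.0, 3.0, 1.0])]   # v3 = v1 + v2, so the set is dependent

    M = np.column_stack(vs)
    # Independent <=> only the trivial combination gives 0 <=> rank = k.
    print(np.linalg.matrix_rank(M) == len(vs))   # False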


Linear Independence: Intuition
Why is “linear independence” a concept one would want to define? What
does it mean intuitively? The following examples may help explain.

Example 1: The set span(v) is one of the following:


(i) A line.
(ii) The origin.
Further: The first case (i) holds if and only if {v} is linearly independent.
Otherwise, the other case holds.

Example 2: The set span(v1 , v2 ) is one of the following:


(i) A plane.
(ii) A line.
(iii) The origin.
Further: The first case (i) holds if and only if {v1 , v2 } is linearly independent.
Otherwise, one of the other cases holds.

Example 3: The set span(v1 , v2 , v3 ) is one of the following:


(i) A “3-dimensional space.”
(ii) A plane.
(iii) A line.
(iv) The origin.
Further: The first case (i) holds if and only if {v1, v2, v3} is linearly
independent. Otherwise, one of the other cases holds.

Q: Do you see the pattern here? What are the possibilities for the span of
four vectors {v1 , v2 , v3 , v4 }? Seven vectors {v1 , . . . , v7 }?

Q: Looking at Example 3, what happens if the vectors v1, v2, v3 are in R2?
Can possibility (i) occur in that case? What does this tell you about sets of
three vectors in R2?
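For the last question, a quick numerical illustration (an added sketch, not
part of the notes): three vectors in R2 form a 2 × 3 matrix, whose rank is
at most 2, so possibility (i) can never occur and the set is always dependent.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((2, 3))   # three random vectors in R2, as columns
    print(np.linalg.matrix_rank(M))   # at most 2: never "3-dimensional"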
Dot Products - Algebra
Def: The dot product of two vectors v, w ∈ Rn is

v · w = (v1, . . . , vn) · (w1, . . . , wn) = v1 w1 + · · · + vn wn.

The length of a vector v ∈ Rn is:

‖v‖ = √(v1² + · · · + vn²).

Note that v · v = ‖v‖². (Q: Why is this a reasonable definition of length?)

Cauchy-Schwarz Inequality: For any non-zero vectors v, w ∈ Rn:

|v · w| ≤ ‖v‖‖w‖.

Equality holds if and only if w = cv for some non-zero scalar c.

Triangle Inequality: For any non-zero vectors v, w ∈ Rn:

‖v + w‖ ≤ ‖v‖ + ‖w‖.

Equality holds if and only if w = cv for some positive scalar c.
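Both the definitions and the two inequalities are easy to check numerically;
here is a small sketch with made-up vectors, using numpy's dot and norm:

    import numpy as np

    v = np.array([1.0, 2.0, 2.0])
    w = np.array([3.0, 0.0, 4.0])

    print(np.dot(v, w))            # 11.0 = 1*3 + 2*0 + 2*4
    print(np.linalg.norm(v))       # 3.0 = sqrt(1 + 4 + 4)
    # Cauchy-Schwarz: |v.w| <= ||v|| ||w||
    print(abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w))
    # Triangle inequality: ||v + w|| <= ||v|| + ||w||
    print(np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w))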

Dot Products - Geometry

Prop: Let v, w ∈ Rn be non-zero vectors. Then:

v · w = ‖v‖‖w‖ cos θ,

where θ is the angle between v and w.


Therefore, v and w are perpendicular if and only if v · w = 0.

Def: We say that two vectors v, w are orthogonal if v · w = 0.

Pythagorean Theorem: If v and w are orthogonal, then

‖v + w‖² = ‖v‖² + ‖w‖².

(Q: How exactly is this the “Pythagorean Theorem” about right triangles?)
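The angle formula and the Pythagorean Theorem can likewise be checked in a
few lines (an added sketch; the vectors are chosen for illustration):

    import numpy as np

    v = np.array([1.0, 0.0])
    w = np.array([1.0, 1.0])
    cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    print(np.degrees(np.arccos(cos_theta)))   # 45.0 degrees

    u = np.array([3.0, 0.0])
    p = np.array([0.0, 4.0])                  # u . p = 0: orthogonal
    print(np.isclose(np.linalg.norm(u + p)**2,
                     np.linalg.norm(u)**2 + np.linalg.norm(p)**2))  # True: 25 = 9 + 16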
Cross Products
Def: The cross product of two vectors v, w ∈ R3 is

v × w = (v2 w3 − v3 w2, v3 w1 − v1 w3, v1 w2 − v2 w1).
Note: Dot products make sense in Rn for any dimension n. But cross
products only really work in R3.

Prop: Let v, w ∈ R3 . Then:

v · (v × w) = 0
w · (v × w) = 0.

In other words: v × w is orthogonal to both v and w.

Prop: Let v, w ∈ R3 be non-zero vectors. Then

‖v × w‖ = ‖v‖‖w‖ sin θ,

where θ is the angle between v and w.

Prop: Let v, w ∈ R3 . The area of the parallelogram formed by v and w is
‖v × w‖.
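All three propositions can be verified numerically with numpy.cross; this is
an illustrative sketch, not part of the notes:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0, 6.0])
    x = np.cross(v, w)                   # (-3, 6, -3)

    print(np.dot(v, x), np.dot(w, x))    # 0.0 0.0: orthogonal to both
    cos_t = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
    sin_t = np.sqrt(1 - cos_t**2)
    # ||v x w|| = ||v|| ||w|| sin(theta), which is also the parallelogram area
    print(np.isclose(np.linalg.norm(x),
                     np.linalg.norm(v) * np.linalg.norm(w) * sin_t))  # True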
Reduced Row Echelon Form; Solutions of Systems
Row Operations:
(1) Multiply/divide a row by a non-zero scalar.
(2) Add/subtract a scalar multiple of one row from another row.
(3) Exchange two rows.

Facts:
(a) Row operations do not change the set of solutions of a linear system.
(b) Using row operations, every matrix can be put in reduced row
echelon form.

Def: A matrix is in reduced row echelon form if:


(1) The first non-zero entry in each row is 1. (These 1’s are called pivots.)
(2) Each pivot is further to the right than the pivot of the row above it.
(3) In the column of a pivot, all other entries are zero.
(4) Rows containing all zeros are at the very bottom.

Def: Consider a linear system of equations (whose augmented matrix is) in
reduced row echelon form.
The variables whose corresponding column contains a pivot are called
pivot variables. The other variables are called free variables.

Note: For an m × n matrix (i.e., m rows and n columns), we have:


(# of pivot variables) + (# of free variables) = n.
This basic fact is surprisingly important!

Prop 6.2: For a linear system of equations (whose augmented matrix is) in
reduced row echelon form, there are three possibilities:
(A) No solutions. One of the equations is 0 = 1.
(B) Exactly one solution. There’s no 0 = 1, and no free variables.
(C) Infinitely many solutions. There’s no 0 = 1, but there’s at least
one free variable.

Geometrically: The solution set looks like one of:


(A) The empty set. (i.e.: The set {} with nothing inside it.)
(B) A single vector.
(C) A line, or a plane, or a 3-dimensional space, or... etc.
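Prop 6.2 can be watched in action with sympy's rref method. The example
system below is made up; the three cases correspond to whether the reduced
matrix has a pivot in the last (augmented) column, and whether any columns
of the coefficient part lack pivots:

    from sympy import Matrix

    aug = Matrix([[1, 2, 1, 3],
                  [2, 4, 0, 2],
                  [0, 0, 1, 2]])    # augmented matrix [A | b]
    R, pivots = aug.rref()
    print(R)        # [[1, 2, 0, 1], [0, 0, 1, 2], [0, 0, 0, 0]]
    print(pivots)   # (0, 2): no pivot in the last column (no "0 = 1"),
                    # and column 1 is free, so: infinitely many solutions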
Linear Systems as Matrix-Vector Products
A linear system of m equations in n unknowns is of the form:
a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2        (∗)
  ...
am1 x1 + am2 x2 + · · · + amn xn = bm .

We can write a linear system as a single vector equation:

[ a11 x1 + a12 x2 + · · · + a1n xn ]   [ b1 ]
[ a21 x1 + a22 x2 + · · · + a2n xn ]   [ b2 ]
[              ...                 ] = [ .. ]
[ am1 x1 + am2 x2 + · · · + amn xn ]   [ bm ]
The coefficient matrix of the system is the m × n matrix

    [ a11 a12 · · · a1n ]
A = [ a21 a22 · · · a2n ]
    [  ..  ..      ..   ]
    [ am1 am2 · · · amn ]

The matrix-vector product of the m × n matrix A with the vector x ∈ Rn
is the vector Ax ∈ Rm given by:

     [ a11 a12 · · · a1n ] [ x1 ]   [ a11 x1 + · · · + a1n xn ]
Ax = [ a21 a22 · · · a2n ] [ x2 ] = [ a21 x1 + · · · + a2n xn ]
     [  ..  ..      ..   ] [ .. ]   [           ..            ]
     [ am1 am2 · · · amn ] [ xn ]   [ am1 x1 + · · · + amn xn ]
We can now write the system (∗) as:
Ax = b.
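In numpy this product is the @ operator, so the whole system (∗) really does
collapse to one line of code (a minimal sketch with a made-up matrix):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])      # m = 3 equations, n = 2 unknowns
    x = np.array([1.0, 1.0])
    print(A @ x)                    # [ 3.  7. 11.]: row i gives ai1*x1 + ai2*x2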

Homogeneous vs Inhomogeneous
Def: A linear system of the form Ax = 0 is called homogeneous.
A linear system of the form Ax = b with b ≠ 0 is called inhomogeneous.

Fact: Every homogeneous system Ax = 0 has at least one solution (why?).


∴ For homogeneous systems: only cases (B) and (C) of Prop 6.2 can occur.
Null Space
Def: Let A be an m × n matrix, so A : Rn → Rm .
The null space of A is:
N (A) = {x ∈ Rn | Ax = 0}.
So: N (A) is the set of solutions to Ax = 0.

Fact: Either Ax = b has no solutions, or at least one solution (logic!).


If Ax = b has at least one solution, then the solution set of Ax = b is a
translation of N (A). Therefore, in this case:
◦ Ax = 0 has exactly one solution ⇐⇒ Ax = b has exactly one solution.
◦ Ax = 0 has infinitely many solutions ⇐⇒ Ax = b has infinitely many
solutions.
Careful: This fact assumes Ax = b has at least one solution. If Ax = b
has no solutions, then we cannot necessarily draw these conclusions!
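sympy makes the translation picture concrete: nullspace() returns a basis
for N (A), and linsolve() returns the full solution set of Ax = b, which
(when non-empty) is a particular solution translated by the null space
directions. A sketch with a made-up matrix:

    from sympy import Matrix, linsolve, symbols

    A = Matrix([[1, 2, 3],
                [2, 4, 6]])          # rank 1, so N(A) is 2-dimensional
    b = Matrix([6, 12])

    print(A.nullspace())             # two basis vectors for N(A)
    x1, x2, x3 = symbols('x1 x2 x3')
    print(linsolve((A, b), [x1, x2, x3]))
    # {(6 - 2*x2 - 3*x3, x2, x3)}: a particular solution plus N(A)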

Column Space
There are two equivalent definitions of the column space.

Def 1: Let A be an m × n matrix. Let A have columns [v1 · · · vn ].


The column space of A is
C(A) = span(v1 , . . . , vn ).
So: the column space is the span of the columns of A.

Def 2: Let A be an m × n matrix, so A : Rn → Rm .


The column space of A is
C(A) = {Ax | x ∈ Rn }.
So: the column space is just the range of A. (i.e., the set of all actual outputs.)

Therefore: The linear system Ax = b has a solution ⇐⇒ b ∈ C(A).

Important:
◦ N (A) is a subspace of the domain of A.
◦ C(A) is a subspace of the codomain of A.
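Def 2 gives a practical test in code: b ∈ C(A) exactly when appending b as
an extra column does not increase the rank. (This rank trick is an added
illustration, not something stated in the notes.)

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    b = np.array([2.0, 3.0, 5.0])    # b = 2*(col 1) + 3*(col 2)

    in_col_space = (np.linalg.matrix_rank(np.column_stack([A, b]))
                    == np.linalg.matrix_rank(A))
    print(in_col_space)              # True: Ax = b has a solution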
Two Crucial Facts
Fact 1 (Prop 8.3): Let A be an m×n matrix. The following are equivalent:
(i) N (A) = {0}.
(ii) The columns of A are linearly independent.
(iii) rref(A) has a pivot in each column.
Further: If any of these hold, then n ≤ m.

Fact 2 (Prop 9.2): Let A be an m×n matrix. The following are equivalent:
(i) C(A) = Rm .
(ii) The columns of A span Rm .
(iii) rref(A) has a pivot in each row.
Further: If any of these hold, then n ≥ m.
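Both facts reduce to counting pivots of rref(A), which sympy reports
directly; a quick sketch with a made-up 3 × 2 matrix:

    from sympy import Matrix

    A = Matrix([[1, 0],
                [0, 1],
                [1, 1]])
    R, pivots = A.rref()
    m, n = A.shape
    print(len(pivots) == n)   # True: pivot in every column, so N(A) = {0}
    print(len(pivots) == m)   # False: no pivot in row 3, so C(A) != R^3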

Subspaces
Def: A (linear) subspace of Rn is a subset V ⊂ Rn such that:
(1) 0 ∈ V.
(2) If v, w ∈ V , then v + w ∈ V.
(3) If v ∈ V , then cv ∈ V for all scalars c ∈ R.

N.B.: For a subset V ⊂ Rn to be a (linear) subspace, all three properties
must hold. If any one fails, then the subset V is not a (linear) subspace!

Fact: For any m × n matrix A:


(a) N (A) is a subspace of Rn .
(b) C(A) is a subspace of Rm .

So, the set of solutions to Ax = 0 is a linear subspace. But what about the
set of solutions to Ax = b? Assuming there are solutions to Ax = b, then
the set of solutions is an affine subspace.

Def: An affine subspace of Rn is a translation of a (linear) subspace.

Important: In this class, when we say “subspace,” we mean linear subspace.


This is more specific than the broader concept of “affine subspace.”
Solutions of Linear Systems (again)
For a linear system Ax = b, there are three possibilities:

No solutions              | There is a 0 = 1 equation    | b ∉ C(A)
Exactly one solution      | No 0 = 1 equation, and       | b ∈ C(A) and
                          | no free variables            | N (A) = {0}
Infinitely many solutions | No 0 = 1 equation, and       | b ∈ C(A) and
                          | at least one free variable   | N (A) ≠ {0}

Subspace & Dimension


Def: A (linear) subspace of Rn is a subset V ⊂ Rn such that:
(i) 0 ∈ V.
(ii) If v, w ∈ V , then v + w ∈ V.
(iii) If v ∈ V , then cv ∈ V for all scalars c ∈ R.

Def: A basis for a subspace V ⊂ Rn is a set of vectors {v1, . . . , vk} such
that:
(1) V = span(v1, . . . , vk).
(2) {v1, . . . , vk} is linearly independent.

◦ Condition (1) ensures that every vector v in the subspace V can be
written as a linear combination of the basis elements: v = x1 v1 + · · · + xk vk.
◦ Condition (2) ensures that these coefficients are unique – that is, for a
given vector v, there is only one possible choice of x1, . . . , xk.

Def: The dimension of a subspace V ⊂ Rn is the number of elements in
any basis for V.

But what if one basis for V has (say) 5 elements, but another basis for
V has 7 elements? Then how could we make sense of the dimension of V?
Fortunately, that can never happen, because:

Fact: For a given subspace, every basis has the same number of elements.

Rank-Nullity Theorem: Let A be an m × n matrix, so A : Rn → Rm . Then

dim(C(A)) + dim(N (A)) = n.
This is fantastic! (We call dim(C(A)) the rank, and dim(N (A)) the nullity.)
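As a final sanity check, the theorem is one line in sympy (the matrix below
is made up): the rank is dim(C(A)) and the number of null space basis
vectors is dim(N (A)).

    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4],
                [2, 4, 6, 8],
                [1, 1, 1, 1]])                   # 3 x 4, rank 2
    rank = A.rank()
    nullity = len(A.nullspace())
    print(rank, nullity, rank + nullity == A.cols)   # 2 2 True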
