Chapter 2 Lecture Notes

Vector and Matrix

• We define a column n-vector to be an array of n numbers arranged in a column, denoted

        ⎡ a1 ⎤
    a = ⎢ a2 ⎥
        ⎢ ⋮  ⎥
        ⎣ an ⎦
Denote by ℝ the set of real numbers and by ℝn the set of column n-vectors with real components. We call ℝn an n-dimensional real vector
space. We commonly denote elements of ℝn by lowercase bold letters (e.g., x). The components of x ∈ ℝn are denoted x1,…, xn.
• We define a row n-vector as

    [a1, a2,…, an].
• The transpose of a given column vector a is a row vector with corresponding elements, denoted a⊤. For example, if

        ⎡ a1 ⎤
    a = ⎢ ⋮  ⎥ ,
        ⎣ an ⎦

then a⊤ = [a1, a2,…, an].
• Equivalently, we may write a = [a1, a2,…, an]⊤. Throughout the course, we adopt the convention that the term vector (without the
qualifier row or column) refers to a column vector.

• The sum of the vectors a and b, denoted a + b, is the vector

    a + b = [a1 + b1, a2 + b2,…, an + bn]⊤.
○ The operation of addition of vectors has the following properties:
1. The operation is commutative: a + b = b + a.
2. The operation is associative: (a + b) + c = a + (b + c).
3. There is a zero vector 0 = [0, 0,…, 0]⊤ such that a + 0 = 0 + a = a.
• We define an operation of multiplication of a vector a ∈ ℝn by a real scalar α ∈ ℝ as

    αa = [αa1, αa2,…, αan]⊤.
○ This operation has the following properties (these and the addition properties above are checked numerically in the sketch after this list):
1. The operation is distributive: for any real scalars α and β,
   α(a + b) = αa + αb and (α + β)a = αa + βa.
2. The operation is associative: α(βa) = (αβ)a.
3. The scalar 1 satisfies 1a = a.
4. Any scalar α satisfies α0 = 0.
5. The scalar 0 satisfies 0a = 0.
6. The scalar –1 satisfies (–1)a = –a.
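The following minimal numpy sketch (an illustration added here, not part of the original notes) spot-checks the addition and scalar-multiplication properties on concrete vectors; the particular arrays and scalars are arbitrary choices.

```python
import numpy as np

# Spot-check of the vector-space properties on concrete (arbitrary) vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])
alpha, beta = 2.0, -3.0
zero = np.zeros(3)

assert np.allclose(a + b, b + a)                            # commutativity
assert np.allclose((a + b) + c, a + (b + c))                # associativity
assert np.allclose(a + zero, a)                             # zero vector
assert np.allclose(alpha * (a + b), alpha * a + alpha * b)  # distributivity
assert np.allclose((alpha + beta) * a, alpha * a + beta * a)
assert np.allclose(alpha * (beta * a), (alpha * beta) * a)  # associativity of scaling
assert np.allclose(1 * a, a)                                # the scalar 1
assert np.allclose(alpha * zero, zero)                      # alpha * 0 = 0
assert np.allclose(0 * a, zero)                             # the scalar 0
assert np.allclose(-1 * a, -a)                              # the scalar -1
print("all properties hold for this example")
```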

• A vector a is said to be a linear combination of vectors a1, a2,…, ak if there are scalars α1,…, αk such that

    a = α1a1 + α2a2 + … + αkak.

• The vectors a1, a2,…, ak are linearly dependent if there exist scalars α1,…, αk, not all zero, such that α1a1 + α2a2 + … + αkak = 0.
Proposition 1: A set of vectors {a1, a2,…, ak} is linearly dependent if and only if one of the vectors from the set is a linear
combination of the remaining vectors.

Note: A set of vectors {a1,…, ak} is linearly independent if it is not linearly dependent.
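As an added numpy illustration (not from the notes), Proposition 1 suggests a practical test: k vectors are linearly dependent exactly when the matrix having those vectors as its columns has rank less than k.

```python
import numpy as np

# Linear dependence test: stack the vectors as columns and compare the
# rank of the resulting matrix with the number of vectors.
a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + 2 * a2                     # a3 is a linear combination of a1 and a2

A = np.column_stack([a1, a2, a3])
k = A.shape[1]
print(np.linalg.matrix_rank(A) < k)  # True: the set is linearly dependent
```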

• A subset ν of ℝn is called a subspace of ℝn if ν is closed under the operations of vector addition and scalar multiplication. That is,
if a and b are vectors in ν, then the vectors a + b and αa are also in ν for every scalar α.
○ Every subspace contains the zero vector 0, for if a is an element of the subspace, so is (–1)a = –a. Hence, a – a = 0 also belongs to the subspace.

• Let a1, a2,…, ak be arbitrary vectors in ℝn. The set of all their linear combinations is called the span of a1, a2,…, ak and is denoted

    span[a1, a2,…, ak] = {α1a1 + α2a2 + … + αkak : α1,…, αk ∈ ℝ}.
Note: The span of any set of vectors is a subspace.

• Given a subspace ν, any set of linearly independent vectors {a1, a2,…, ak} ⊂ ν such that ν = span[a1, a2,…, ak] is referred to as a basis of
the subspace ν.
○ All bases of a subspace ν contain the same number of vectors; this number is called the dimension of ν, denoted dim ν.
Proposition 2:
If {a1, a2,…, ak} is a basis of ν, then any vector a of ν can be represented uniquely as

    a = α1a1 + α2a2 + … + αkak,

where α1,…, αk ∈ ℝ.
• Suppose that we are given a basis {a1, a2,…, ak} of ν and a vector a ∈ ν such that

    a = α1a1 + α2a2 + … + αkak.
○ The coefficients αi, i = 1,…, k, are called the coordinates of a with respect to the basis {a1, a2,…, ak}.

• Which set of vectors forms the natural basis for ℝn, and why? (See the sketch below.)
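A small numpy sketch (illustrative, with an arbitrarily chosen basis of ℝ3) showing that coordinates with respect to a basis are found by solving a linear system; the natural basis e1,…, en, where ei has a 1 in the ith position and 0 elsewhere, consists of the columns of the identity matrix, so coordinates with respect to it are just the components of the vector.

```python
import numpy as np

# Coordinates of a with respect to a basis {a1, a2, a3} of R^3: solve
# [a1 a2 a3] * alpha = a for the (unique) coefficient vector alpha.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([1.0, 1.0, 0.0])
a3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([a1, a2, a3])    # basis vectors as columns

a = np.array([3.0, 2.0, 1.0])
alpha = np.linalg.solve(B, a)        # coordinates of a in this basis
print(alpha)                         # [1. 1. 1.]

# In the natural basis (columns of the identity), the coordinates of a
# coincide with its components:
E = np.eye(3)
print(np.linalg.solve(E, a))         # [3. 2. 1.]
```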

• A matrix is a rectangular array of numbers, commonly denoted by uppercase bold letters (e.g., A). A matrix with m rows and n columns is called an m × n matrix, and we write

        ⎡ a11  a12  …  a1n ⎤
    A = ⎢ a21  a22  …  a2n ⎥
        ⎢  ⋮    ⋮        ⋮ ⎥
        ⎣ am1  am2  …  amn ⎦
○ The real number aij located in the ith row and jth column is called the (i, j)th entry.
○ We can think of A in terms of its n columns, each of which is a column vector in ℝm.
○ Alternatively, we can think of A in terms of its m rows, each of which is a row n-vector.

• Consider the m × n matrix A above. The transpose of matrix A, denoted A⊤, is the n × m matrix

         ⎡ a11  a21  …  am1 ⎤
    A⊤ = ⎢ a12  a22  …  am2 ⎥
         ⎢  ⋮    ⋮        ⋮ ⎥
         ⎣ a1n  a2n  …  amn ⎦
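For concreteness, here is a brief numpy sketch (an added illustration with arbitrary values) of a matrix's entries, columns, rows, and transpose.

```python
import numpy as np

# A 2x3 matrix: its (i, j)th entry, columns, rows, and transpose.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])      # m = 2, n = 3

print(A[0, 1])       # the (1, 2)th entry a12 = 2 (numpy indexes from 0)
print(A[:, 0])       # first column, a column vector in R^2
print(A[1, :])       # second row, a row 3-vector
print(A.T.shape)     # the transpose A.T is 3 x 2
```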
Rank of a Matrix
• Consider the m × n matrix

        ⎡ a11  a12  …  a1n ⎤
    A = ⎢  ⋮    ⋮        ⋮ ⎥
        ⎣ am1  am2  …  amn ⎦
○ Let us denote the kth column of A by ak:

         ⎡ a1k ⎤
    ak = ⎢  ⋮  ⎥
         ⎣ amk ⎦
• The maximal number of linearly independent columns of A is called the rank of the matrix A, denoted rank A.
○ Note that rank A is the dimension of span[a1, a2,…, an].
• A matrix A is said to be square if the number of its rows is equal to the number of its columns (i.e., it is n × n).
○ Associated with each square matrix A is a scalar called the determinant of the matrix A, denoted det A or |A|.

• A pth-order minor of an m × n matrix A, with p ≤ min{m, n}, is the determinant of a p × p matrix obtained from A by deleting m – p rows
and n – p columns.
Note: The notation min{m, n} represents the smaller of m and n.

Proposition 3:
If an m × n (m ≥ n) matrix A has a nonzero nth-order minor, then the columns of A are linearly independent; that is, rank A = n.
• The rank of a matrix is equal to the highest order of its nonzero minor(s).
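The following numpy sketch (illustrative values, not from the notes) computes the rank of a matrix whose third row is the sum of the first two, and relates it to the matrix's minors as described above.

```python
import numpy as np

# Rank as the maximal number of linearly independent columns, here
# computed with numpy on an illustrative matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])      # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))     # 2
print(np.linalg.det(A))             # ~0: the only 3rd-order minor vanishes
# A nonzero 2nd-order minor exists, e.g. the top-left 2x2 block,
# so the rank equals 2 (the highest order of a nonzero minor):
print(np.linalg.det(A[:2, :2]))     # 1*5 - 2*4 = -3, nonzero
```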

• A nonsingular (or invertible) matrix is a square matrix whose determinant is nonzero.


○ Suppose that A is an n × n square matrix. Then, A is nonsingular if and only if there is another n × n matrix B such that

    AB = BA = In,

where In denotes the n × n identity matrix.
• We call the matrix B above the inverse matrix of A, and write B = A–1.
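A minimal numpy check (with an illustrative matrix) that a nonzero determinant goes together with the existence of an inverse B = A–1 satisfying AB = BA = I.

```python
import numpy as np

# Nonsingularity and the inverse matrix on a small example.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(A))             # 1.0, nonzero, so A is nonsingular
B = np.linalg.inv(A)                # B = A^{-1}
I = np.eye(2)
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True
```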

Linear Equations:
Suppose that we are given m equations in n unknowns of the form

    a11x1 + a12x2 + … + a1nxn = b1
    a21x1 + a22x2 + … + a2nxn = b2
        ⋮
    am1x1 + am2x2 + … + amnxn = bm.

We can represent the set of equations above as a vector equation

    x1a1 + x2a2 + … + xnan = b,

where aj = [a1j, a2j,…, amj]⊤ and b = [b1, b2,…, bm]⊤.

Associated with this system of equations is the matrix

    A = [a1, a2,…, an],

and an augmented matrix

    [A, b] = [a1, a2,…, an, b].

We can also represent the system of equations above as

    Ax = b,   where x = [x1, x2,…, xn]⊤.
Theorem 1:
The system of equations Ax = b has a solution if and only if

    rank A = rank [A, b].
Theorem 2:
Consider the equation Ax = b, where A ∈ ℝm×n and rank A = m. A solution to Ax = b can be obtained by assigning arbitrary values to n – m of the variables and solving for the remaining ones.
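A numpy sketch of both theorems under assumed illustrative data: the rank test of Theorem 1 confirms solvability, and, as in Theorem 2, a solution is found by fixing one of the n – m free variables arbitrarily and solving for the rest.

```python
import numpy as np

# Theorem 1: Ax = b is solvable iff rank A = rank [A, b].
# Theorem 2 (rank A = m): fix n - m variables, solve for the remaining m.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])     # m = 2, n = 3, rank A = 2
b = np.array([4.0, 2.0])

Ab = np.column_stack([A, b])        # augmented matrix [A, b]
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab))  # True: solvable

x3 = 0.0                                            # arbitrary value for x3
x12 = np.linalg.solve(A[:, :2], b - A[:, 2] * x3)   # solve for x1, x2
x = np.append(x12, x3)
print(np.allclose(A @ x, b))                        # True: x solves Ax = b
```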

Inner Products and Norms:


For x, y ∈ ℝn, we define the Euclidean inner product by

    〈x, y〉 = x1y1 + x2y2 + … + xnyn = x⊤y.
The inner product is a real-valued function 〈·, ·〉 : ℝn × ℝn → ℝ having the following properties:
1. Positivity: 〈x, x〉 ≥ 0, 〈x, x〉 = 0 if and only if x = 0.

2. Symmetry: 〈x, y〉 = 〈y, x〉.

3. Additivity: 〈x + y, z〉 = 〈x, z〉 + 〈y, z〉.

4. Homogeneity: 〈rx, y〉 = r〈x, y〉 for every r ∈ ℝ.

• The vectors x and y are said to be orthogonal if 〈x, y〉 = 0.
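A small sketch (with arbitrary illustrative vectors) of the Euclidean inner product and an orthogonality check.

```python
import numpy as np

# Euclidean inner product <x, y> = x^T y and an orthogonality check.
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])

print(np.dot(x, y))                    # <x, y> = 1*2 + 2*(-1) + 2*0 = 0
print(np.isclose(np.dot(x, y), 0.0))   # True: x and y are orthogonal
print(np.dot(x, y) == np.dot(y, x))    # True: symmetry property
```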

• The Euclidean norm of a vector x is defined as

    ∥x∥ = √〈x, x〉 = √(x1² + x2² + … + xn²).
Theorem 3: Cauchy-Schwarz Inequality


For any two vectors x and y in ℝn, the Cauchy-Schwarz inequality

    |〈x, y〉| ≤ ∥x∥ ∥y∥

holds. Furthermore, equality holds if and only if x = αy for some α ∈ ℝ.

The Euclidean norm of a vector ∥x∥ has the following properties:

1. Positivity: ∥x∥ ≥ 0, ∥x∥ = 0 if and only if x = 0.

2. Homogeneity: ∥rx∥ = |r|∥x∥, r ∈ ℝ.

3. Triangle inequality: ∥x + y∥ ≤ ∥x∥ + ∥y∥.
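To close, a numerical spot-check (an added illustration, not a proof) of the norm definition, the Cauchy-Schwarz inequality, and the triangle inequality on random vectors.

```python
import numpy as np

# Spot-check of the norm definition and both inequalities.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

assert np.isclose(np.linalg.norm(x), np.sqrt(np.dot(x, x)))            # ||x|| = sqrt(<x, x>)
assert abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y)      # Cauchy-Schwarz
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)  # triangle inequality

# Equality in Cauchy-Schwarz holds when x = alpha * y:
x = 2.5 * y
print(np.isclose(abs(np.dot(x, y)), np.linalg.norm(x) * np.linalg.norm(y)))  # True
```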
