Vector and Matrix

We define a column n-vector to be an array of n numbers, denoted

        ⎡ a1 ⎤
    a = ⎢ a2 ⎥
        ⎢ ⋮  ⎥
        ⎣ an ⎦.

The number ai is called the ith component of the vector a. Denote by ℝ the set of real numbers
and by ℝ^n the set of column n-vectors with real components. We call ℝ^n an n-dimensional
real vector space. We commonly denote elements of ℝ^n by lowercase bold letters (e.g., x).
The components of x ∈ ℝ^n are denoted x1, …, xn.
We define a row n-vector as

    [a1, a2, …, an].

The transpose of a given column vector a is a row vector with corresponding elements,
denoted a^T. For example, if

        ⎡ a1 ⎤
    a = ⎢ ⋮  ⎥
        ⎣ an ⎦,

then

    a^T = [a1, a2, …, an].

Equivalently, we may write a = [a1, a2, …, an]^T. Throughout the text we adopt the convention
that the term vector (without the qualifier row or column) refers to a column vector.
Two vectors a = [a1, a2, …, an]^T and b = [b1, b2, …, bn]^T are equal if ai = bi, i = 1, 2, …, n.
The sum of the vectors a and b, denoted a + b, is the vector

    a + b = [a1 + b1, a2 + b2, …, an + bn]^T.

The operation of addition of vectors has the following properties:

1. The operation is commutative: a + b = b + a.
2. The operation is associative: (a + b) + c = a + (b + c).
3. There is a zero vector 0 = [0, 0, …, 0]^T such that a + 0 = 0 + a = a.

The vector

    [a1 - b1, a2 - b2, …, an - bn]^T

is called the difference between a and b and is denoted a - b.
The vector 0 - b is denoted -b. Note that

    a - b = a + (-b).


The vector b - a is the unique solution of the vector equation

    a + x = b.

Indeed, suppose that x = [x1, x2, …, xn]^T is a solution to a + x = b. Then,

    a1 + x1 = b1,
    a2 + x2 = b2,
        ⋮
    an + xn = bn,

and thus

    x = b - a.

We define an operation of multiplication of a vector a ∈ ℝ^n by a real scalar α ∈ ℝ as

    αa = [αa1, αa2, …, αan]^T.

This operation has the following properties:

1. The operation is distributive: for any real scalars α and β,

       α(a + b) = αa + αb,
       (α + β)a = αa + βa.

2. The operation is associative: α(βa) = (αβ)a.
3. The scalar 1 satisfies 1a = a.
4. Any scalar α satisfies α0 = 0.
5. The scalar 0 satisfies 0a = 0.
6. The scalar -1 satisfies (-1)a = -a.

Note that αa = 0 if and only if α = 0 or a = 0. To see this, observe that αa = 0 is equivalent
to αa1 = αa2 = ··· = αan = 0. If α = 0 or a = 0, then αa = 0. If a ≠ 0, then at least one of its
components ak ≠ 0. For this component, αak = 0, and hence we must have α = 0. Similar
arguments can be applied to the case when α ≠ 0.
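Scalar multiplication and a few of the properties listed above can be checked directly on componentwise lists; a small sketch (the helper name `scalar_mul` is ours):

```python
def scalar_mul(alpha, a):
    """Return alpha * a, scaling each component of a."""
    return [alpha * ai for ai in a]

a = [1.0, -2.0, 0.5]
assert scalar_mul(0.0, a) == [0.0, 0.0, 0.0]     # property 5: 0a = 0
assert scalar_mul(-1.0, a) == [-1.0, 2.0, -0.5]  # property 6: (-1)a = -a

# distributivity: (alpha + beta)a = alpha*a + beta*a
alpha, beta = 2.0, 3.0
lhs = scalar_mul(alpha + beta, a)
rhs = [x + y for x, y in zip(scalar_mul(alpha, a), scalar_mul(beta, a))]
assert lhs == rhs
```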
A set of vectors {a1, …, ak} is said to be linearly independent if the equality

    α1 a1 + α2 a2 + ··· + αk ak = 0

implies that all coefficients αi, i = 1, …, k, are equal to zero. A set of vectors {a1, …, ak} is
linearly dependent if it is not linearly independent.
Note that the set composed of the single vector 0 is linearly dependent, for if α ≠ 0, then α0
= 0. In fact, any set of vectors containing the vector 0 is linearly dependent.
A set composed of a single nonzero vector a ≠ 0 is linearly independent, since αa = 0
implies that α = 0.
A vector a is said to be a linear combination of vectors a1, a2, …, ak if there are scalars α1,
…, αk such that

    a = α1 a1 + α2 a2 + ··· + αk ak.

Proposition 2.1 A set of vectors {a1, a2, …, ak} is linearly dependent if and only if one of the
vectors from the set is a linear combination of the remaining vectors.

Proof. ⇒: If {a1, a2, …, ak} is linearly dependent, then

    α1 a1 + α2 a2 + ··· + αk ak = 0,

where at least one of the scalars αi ≠ 0, whence

    ai = -(α1/αi) a1 - ··· - (α_{i-1}/αi) a_{i-1} - (α_{i+1}/αi) a_{i+1} - ··· - (αk/αi) ak.

⇐: Suppose that

    a1 = α2 a2 + α3 a3 + ··· + αk ak;

then

    (-1)a1 + α2 a2 + ··· + αk ak = 0.

Because the first scalar is nonzero, the set of vectors {a1, a2, …, ak} is linearly dependent. The
same argument holds if ai, i = 2, …, k, is a linear combination of the remaining vectors.
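A concrete instance of Proposition 2.1 can be worked out numerically: build a set in which one vector is, by construction, a linear combination of the others, and exhibit a nontrivial combination that sums to zero. The helper name `lin_comb` is ours, not from the text:

```python
def lin_comb(coeffs, vectors):
    """Return sum_i coeffs[i] * vectors[i], computed componentwise."""
    out = [0.0] * len(vectors[0])
    for c, v in zip(coeffs, vectors):
        out = [o + c * vi for o, vi in zip(out, v)]
    return out

a1 = [1.0, 0.0]
a2 = [0.0, 1.0]
a3 = lin_comb([2.0, 1.0], [a1, a2])  # a3 = 2*a1 + a2, so {a1, a2, a3} is dependent

# A nontrivial combination giving the zero vector: 2*a1 + 1*a2 + (-1)*a3 = 0.
print(lin_comb([2.0, 1.0, -1.0], [a1, a2, a3]))  # [0.0, 0.0]
```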
A subset V of ℝ^n is called a subspace of ℝ^n if V is closed under the operations of vector
addition and scalar multiplication. That is, if a and b are vectors in V, then the vectors a + b
and αa are also in V for every scalar α.
Every subspace contains the zero vector 0, for if a is an element of the subspace, so is
(-1)a = -a. Hence, a - a = 0 also belongs to the subspace.
Let a1, a2, …, ak be arbitrary vectors in ℝ^n. The set of all their linear combinations is called
the span of a1, a2, …, ak and is denoted

    span[a1, a2, …, ak] = {α1 a1 + α2 a2 + ··· + αk ak : α1, …, αk ∈ ℝ}.

Given a vector a, the subspace span[a] is composed of the vectors αa, where α is an arbitrary
real number (α ∈ ℝ). Also observe that if a is a linear combination of a1, a2, …, ak, then

    span[a1, a2, …, ak, a] = span[a1, a2, …, ak].

The span of any set of vectors is a subspace.
Given a subspace V, any set of linearly independent vectors {a1, a2, …, ak} ⊂ V such that
V = span[a1, a2, …, ak] is referred to as a basis of the subspace V. All bases of a subspace V
contain the same number of vectors. This number is called the dimension of V, denoted dim V.

Proposition 2.2 If {a1, a2, …, ak} is a basis of V, then any vector a of V can be represented
uniquely as

    a = α1 a1 + α2 a2 + ··· + αk ak,

where αi ∈ ℝ, i = 1, 2, …, k.

Proof. To prove the uniqueness of the representation of a in terms of the basis vectors, assume
that

    a = α1 a1 + α2 a2 + ··· + αk ak

and

    a = β1 a1 + β2 a2 + ··· + βk ak.

We now show that αi = βi, i = 1, …, k. We have

    α1 a1 + α2 a2 + ··· + αk ak = β1 a1 + β2 a2 + ··· + βk ak,

or

    (α1 - β1)a1 + (α2 - β2)a2 + ··· + (αk - βk)ak = 0.

Because the set {ai : i = 1, 2, …, k} is linearly independent, α1 - β1 = α2 - β2 = ··· = αk - βk =
0, which implies that αi = βi, i = 1, …, k.
Suppose that we are given a basis {a1, a2, …, ak} of V and a vector a ∈ V such that

    a = α1 a1 + α2 a2 + ··· + αk ak.

The coefficients αi, i = 1, …, k, are called the coordinates of a with respect to the basis {a1,
a2, …, ak}.
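Finding coordinates with respect to a basis amounts to solving a linear system. For ℝ^2 the system is 2 × 2 and can be solved in closed form; a small sketch using Cramer's rule (the function name `coordinates_2d` and the particular basis are ours, chosen only for illustration):

```python
def coordinates_2d(a, b1, b2):
    """Coordinates (alpha1, alpha2) of a with respect to the basis {b1, b2}
    of R^2, found by solving alpha1*b1 + alpha2*b2 = a via Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]   # nonzero since {b1, b2} is a basis
    alpha1 = (a[0] * b2[1] - b2[0] * a[1]) / det
    alpha2 = (b1[0] * a[1] - a[0] * b1[1]) / det
    return alpha1, alpha2

b1, b2 = [1.0, 1.0], [1.0, -1.0]
a = [3.0, 1.0]
alpha1, alpha2 = coordinates_2d(a, b1, b2)
print(alpha1, alpha2)  # 2.0 1.0

# check the representation: a = alpha1*b1 + alpha2*b2
recon = [alpha1 * b1[i] + alpha2 * b2[i] for i in range(2)]
print(recon)  # [3.0, 1.0]
```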
The natural basis for ℝ^n is the set of vectors

    e1 = [1, 0, …, 0]^T, e2 = [0, 1, …, 0]^T, …, en = [0, 0, …, 1]^T.

The reason for calling these vectors the natural basis is that for any x = [x1, x2, …, xn]^T,

    x = x1 e1 + x2 e2 + ··· + xn en.

We can similarly define complex vector spaces. For this, let ℂ denote the set of complex
numbers and ℂ^n the set of column n-vectors with complex components. As the reader can
easily verify, the set ℂ^n has properties similar to those of ℝ^n, where scalars can take complex
values.
A matrix is a rectangular array of numbers, commonly denoted by uppercase bold letters
(e.g., A). A matrix with m rows and n columns is called an m × n matrix, and we write

        ⎡ a11  a12  …  a1n ⎤
    A = ⎢ a21  a22  …  a2n ⎥
        ⎢  ⋮    ⋮        ⋮  ⎥
        ⎣ am1  am2  …  amn ⎦.

The real number aij located in the ith row and jth column is called the (i, j)th entry. We can
think of A in terms of its n columns, each of which is a column vector in ℝ^m. Alternatively,
we can think of A in terms of its m rows, each of which is a row n-vector.
Consider the m × n matrix A above. The transpose of matrix A, denoted A^T, is the n × m
matrix

          ⎡ a11  a21  …  am1 ⎤
    A^T = ⎢ a12  a22  …  am2 ⎥
          ⎢  ⋮    ⋮        ⋮  ⎥
          ⎣ a1n  a2n  …  amn ⎦;

that is, the columns of A are the rows of A^T, and vice versa.
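Representing a matrix as a list of its rows, the transpose is exactly the swap of rows and columns described above; a minimal sketch (the helper name `transpose` is ours):

```python
def transpose(A):
    """Transpose an m x n matrix stored as a list of rows:
    the rows of A become the columns of A^T, and vice versa."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
At = transpose(A)        # a 3 x 2 matrix
print(At)  # [[1, 4], [2, 5], [3, 6]]

assert transpose(At) == A  # (A^T)^T = A
```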
Let the symbol ℝ^{m×n} denote the set of m × n matrices whose entries are real numbers. We
treat column n-vectors as elements of ℝ^{n×1}. Similarly, we treat row n-vectors as elements
of ℝ^{1×n}. Accordingly, vector transposition is simply a special case of matrix transposition,
and we will no longer distinguish between the two. Note that there is a slight inconsistency in
the notation of row vectors when identified as 1 × n matrices: we separate the components of
the row vector with commas, whereas in matrix notation we do not generally use commas.
However, the use of commas in separating elements in a row helps to clarify their separation.
We use such commas even in separating matrices arranged in a horizontal row.
