Vector Space


Definitions

• Sets: A set is a well-defined collection of objects. The objects in a set can be anything:
numbers, people, letters, rivers, etc. These objects are called the elements or members of
the set.

• A binary operation ∗ on a set S is a function from S × S into S. For each (a, b) ∈ S × S, we
will denote the element ∗((a, b)) of S by a ∗ b.

• Example: Our usual addition + is a binary operation on the set ℝ.


Group: A group (G, ∗) is a set G, closed under the binary operation ∗, such that the following axioms are satisfied:

1. For all a, b, c ∈ G, we have (a ∗ b) ∗ c = a ∗ (b ∗ c). (associativity)

2. There is an element e in G such that for all x ∈ G,

e ∗ x = x ∗ e = x. (identity element e for x)

3. Corresponding to each a ∈ G, there is an element a′ in G such

that a ∗ a′ = a′ ∗ a = e. (inverse a′ of a)

Example: (ℚ, +), (ℤ, +) and (ℝ, +) are groups.

(ℤ⁺, +) is not a group (there is no identity element in ℤ⁺).

Definition (Abelian group): A group is abelian if its binary operation is commutative,

i.e., x ∗ y = y ∗ x for all x, y in the group.
Ring: A ring (R, +, ·) is a set R together with two binary operations +
and ·, which we call addition and multiplication, defined on R such
that the following axioms are satisfied:
1. (R, +) is an abelian group.
2. Multiplication is associative, i.e., a · (b · c) = (a · b) · c.
3. For all a, b, c ∈ R, the left distributive law, a · (b + c) = (a · b) + (a · c),
and the right distributive law, (a + b) · c = (a · c) + (b · c), hold.

Example: (ℤ, +, ·), (ℚ, +, ·), (ℝ, +, ·) and (ℂ, +, ·)


Division Ring : Let 𝑅 be a ring with unity 1 ≠ 0. An element 𝑢 in 𝑅 is a unit
if it has a multiplicative inverse in 𝑅. If every nonzero element of 𝑅 is a unit,
then 𝑅 is a division ring.

Field : A field is a commutative division ring.

Examples : 𝑄, 𝑅 are fields. 𝑍 is not a field.

Note: Unity is the multiplicative identity element, while a unit is any
element having a multiplicative inverse.
• Field: Let F be a set of elements x, y, z, …. Suppose that the operations of addition and multiplication are

defined such that we can find the sum x + y and the product x · y of any two elements x and y. Then the set F is

called a field if the following axioms are satisfied for all elements of the set.

1. Closure laws: For all x, y ∈ F, x + y and x · y are elements of F.

2. Commutative laws: For all x, y ∈ F, x + y = y + x and x · y = y · x.

3. Associative laws: For all x, y, z ∈ F, (x + y) + z = x + (y + z) and x · (y · z) = (x · y) · z.

4. Distributive laws: For all x, y, z ∈ F, x · (y + z) = x · y + x · z and (y + z) · x = y · x + z · x.

5. Identity elements: There exist in F an element 0 and an element 1 such that for every x in F, x + 0 = x

and x · 1 = x. These are called the zero element and the unity element, respectively.

6. Inverse elements: (a) For each x in F, there exists an element denoted by −x, called the negative or

additive inverse of x, such that x + (−x) = 0.

(b) For each x ≠ 0 in F, there exists an element denoted by x⁻¹, called the multiplicative inverse of x,

such that x · x⁻¹ = 1. The element x⁻¹ can be written as 1/x when convenient.
Vector Space
Definition: Let V be a non-empty set with two operations:

1. Vector addition: This assigns to any u, v ∈ V a sum u + v ∈ V.

2. Scalar multiplication: This assigns to any u ∈ V, k ∈ K a product ku ∈ V.

Then V is called a vector space (over the field K) if the following axioms hold for any vectors u, v, w ∈ V:

[A1] (u + v) + w = u + (v + w) (associative law)

[A2] There is a vector in V, denoted by 0 and called the zero vector, such that, for

any u ∈ V, u + 0 = 0 + u = u. (identity element)

[A3] For each u ∈ V, there is a vector in V, denoted by −u and called the negative

of u, such that u + (−u) = (−u) + u = 0. (additive inverse)

[A4] u + v = v + u (commutative law)

[M1] k(u + v) = ku + kv, for any scalar k ∈ K

[M2] (a + b)u = au + bu, for any scalars a, b ∈ K

[M3] (ab)u = a(bu), for any scalars a, b ∈ K

[M4] 1u = u, for the unit scalar 1 ∈ K

It is denoted by (V, +, ·) over K, or simply V(K).


Examples of vector space

Example 1: Let V = ℝⁿ = {(x₁, x₂, …, xₙ) : xᵢ ∈ ℝ} and K = ℝ.

Vector addition:
(x₁, x₂, …, xₙ) + (y₁, y₂, …, yₙ) = (x₁ + y₁, x₂ + y₂, …, xₙ + yₙ)

Scalar multiplication:
k · (x₁, x₂, …, xₙ) = (kx₁, kx₂, …, kxₙ)

(V, +, ·) is a vector space over the field K, i.e., V(K) is a vector space.
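The ℝⁿ axioms can be spot-checked numerically. Below is a minimal NumPy sketch (the dimension n = 4, the vectors, and the scalars are arbitrary illustrative choices; random sampling supports the axioms but of course does not prove them):

    import numpy as np

    # Spot-check several vector space axioms for V = R^4 with K = R.
    rng = np.random.default_rng(0)
    u, v, w = rng.normal(size=(3, 4))        # three random vectors in R^4
    a, b = rng.normal(size=2)                # two random scalars

    assert np.allclose((u + v) + w, u + (v + w))    # [A1] associativity
    assert np.allclose(u + v, v + u)                # [A4] commutativity
    assert np.allclose(a * (u + v), a * u + a * v)  # [M1]
    assert np.allclose((a + b) * u, a * u + b * u)  # [M2]
    assert np.allclose(a * (b * u), (a * b) * u)    # [M3]
    assert np.allclose(1.0 * u, u)                  # [M4]
    print("all sampled axioms hold")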
Example 2 : If 𝑚 and 𝑛 are positive integers, the set of all 𝑚 × 𝑛 matrices with elements from an
arbitrary field 𝐹 forms a vector space with respect to the operations of matrix addition and scalar
multiplication.

Example 3: Let V = P(t) = {a₀ + a₁t + a₂t² + ⋯ + aₙtⁿ : aᵢ ∈ ℝ, n = 0, 1, 2, …}, the set of
all polynomials with real coefficients. V is a vector space over ℝ.

Example 4: Consider the linear differential equation

a₀ (dⁿy/dxⁿ) + a₁ (dⁿ⁻¹y/dxⁿ⁻¹) + ⋯ + aₙ y = 0    (1)

in which a₀, a₁, …, aₙ are real constants.

The set V of all real solutions of (1) is not empty, because the zero function is a solution. Since
sums and scalar multiples of solutions of (1) are again solutions, V is a vector space over ℝ.
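As a numerical illustration (not a proof), take the concrete instance y″ − y = 0 of equation (1), whose solutions include eˣ and e⁻ˣ; a finite-difference check confirms that an arbitrary linear combination of the two is again a solution:

    import numpy as np

    # For y'' - y = 0, e^x and e^(-x) are solutions; check that a linear
    # combination a*e^x + b*e^(-x) also satisfies the equation.
    a, b, h = 2.5, -1.3, 1e-4
    x = np.linspace(0.0, 1.0, 11)

    def y(x):
        return a * np.exp(x) + b * np.exp(-x)   # linear combination of solutions

    # central finite-difference approximation of y''
    y_xx = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    # prints a value near 1e-7: y'' - y vanishes up to numerical error
    print(np.max(np.abs(y_xx - y(x))))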
Linear combinations, spanning sets
• Linear combinations: Let V be a vector space over a field K. A vector v in
V is a linear combination of vectors u₁, u₂, …, uₙ in V if there exist scalars
a₁, a₂, …, aₙ in K such that v = a₁u₁ + a₂u₂ + ⋯ + aₙuₙ.

• Alternatively, v is a linear combination of u₁, u₂, …, uₙ if there is a solution to
the vector equation v = x₁u₁ + x₂u₂ + ⋯ + xₙuₙ, where x₁, x₂, …, xₙ are
unknown scalars.
Spanning sets

• Let V be a vector space over K. Let S = {u₁, u₂, …, uₘ}, uᵢ ∈ V. The
span of S is defined as the collection of all linear combinations of
the vectors u₁, u₂, …, uₘ, and is denoted by the linear span of S, or L(S).

i.e., L(S) = {a₁u₁ + a₂u₂ + ⋯ + aₘuₘ : aᵢ ∈ K}
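Deciding whether a given v lies in L(S) amounts to solving the vector equation above for the scalars. A small NumPy sketch (the vectors u₁, u₂, u₃ and v below are illustrative choices in ℝ³):

    import numpy as np

    # Is v a linear combination of u1, u2, u3?  Writing v = x1*u1 + x2*u2 + x3*u3
    # gives the linear system A x = v, where the columns of A are the u_i.
    u1, u2, u3 = [1, 0, 1], [0, 1, 1], [1, 1, 0]
    A = np.column_stack([u1, u2, u3])
    v = np.array([2, 3, 3])

    x = np.linalg.solve(A, v)   # solvable here since A is square and invertible
    print(x)                    # [1. 2. 1.]  ->  v = 1*u1 + 2*u2 + 1*u3, so v is in L(S)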
Subspace
Definition: Let V be a vector space over a field K and let W be a subset of V.
Then W is a subspace of V if W is itself a vector space over K with respect to
the operations of vector addition and scalar multiplication on V.

Example 1: Let V = ℝ² and K = ℝ. Let W = {(x, 0) : x ∈ ℝ}.

Example 2: Let V be the vector space of 2 × 2 real matrices and K = ℝ.
Let W = { [x z; z y] : x, y, z ∈ ℝ }, the set of symmetric 2 × 2 matrices.
Theorem 1: A subset W of a vector space V(K) is a subspace of V if and only if
a) W is not empty,
b) W is closed under vector addition, i.e., if u and v are in W, then u + v is also in W, and
c) W is closed under scalar multiplication, i.e., for every u in W and every k ∈ K, ku is also in W.

Theorem 2: A subset W of a vector space V(K) is a subspace of V if and only if

a) W is not empty, and
b) for each pair of vectors u, v ∈ W and all scalars a, b ∈ K, the vector au + bv ∈ W.
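Theorem 2's condition can be spot-checked numerically for the W of Example 1. This is only a sampling-based sanity check, never a proof of closure:

    import numpy as np

    # W = {(x, 0) : x in R} from Example 1: for u, v in W and scalars a, b,
    # the combination a*u + b*v should again have second coordinate 0.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        u = np.array([rng.normal(), 0.0])    # arbitrary element of W
        v = np.array([rng.normal(), 0.0])
        a, b = rng.normal(size=2)            # arbitrary scalars
        assert np.isclose((a * u + b * v)[1], 0.0)   # au + bv stays in W
    print("closure held on all sampled cases")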
Algebra of subspaces

Theorem 3 : The intersection of any two subspaces 𝑈 and 𝑊 of a


vector space 𝑉 is also a subspace.

Theorem 4: The union of two subspaces U and W is a subspace if and

only if one is contained in the other.

In other words, the union of two subspaces is not, in general, a subspace.


Definition: Let U and W be subspaces of a vector space V. The
sum of U and W, written U + W, is the set of all vectors of the
form u + w, where u ∈ U and w ∈ W.

i.e., U + W = {u + w : u ∈ U, w ∈ W}

It is also known as the linear sum of the two subspaces U and W.

Theorem: The sum U + W of the subspaces U and W of V is also a subspace of V.
Definition: Let U and W be subspaces of a vector space V.
Then the space V is said to be the direct sum of U and W if every
vector v ∈ V can be written in one and only one way as v = u + w,
where u ∈ U and w ∈ W.

Theorem: The vector space V is the direct sum of its subspaces U

and W if and only if (i) V = U + W and (ii) U ∩ W = {0}.
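A concrete illustration of a direct sum, assuming U is the x-axis and W is the y-axis of ℝ²:

    import numpy as np

    # R^2 is the direct sum of U = {(x, 0)} and W = {(0, y)}: U + W = R^2 and
    # U ∩ W = {0}, so every v splits uniquely as v = u + w.
    v = np.array([3.0, -7.0])
    u = np.array([v[0], 0.0])    # the component of v in U
    w = np.array([0.0, v[1]])    # the component of v in W
    assert np.allclose(u + w, v)
    print(u, w)                  # [3. 0.] [ 0. -7.] -- the unique decomposition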
Linear dependence and independence
Let V be a vector space over a field K, and let v₁, v₂, …, vₘ be vectors in V.

• Linear dependence: We say that the vectors v₁, v₂, …, vₘ are linearly

dependent if there exist scalars a₁, a₂, …, aₘ in K, not all of them 0, such that
a₁v₁ + a₂v₂ + ⋯ + aₘvₘ = 0

Otherwise, we say that the vectors are linearly independent.

A set S = {v₁, v₂, …, vₘ} of vectors in V is linearly dependent or independent

according as the vectors v₁, v₂, …, vₘ are linearly dependent or independent.
• Echelon form of a matrix:

A matrix A is called an echelon matrix, or is said to be in echelon form, if the following two
conditions hold (where a leading nonzero element of a row of A is the first nonzero element in
the row):
1. All zero rows, if any, are at the bottom of the matrix.
2. Each leading nonzero entry in a row is to the right of the leading nonzero entry in the
preceding row.

Example:
⎡ 1 2 3 4 5 ⎤
⎢ 0 6 0 8 9 ⎥
⎢ 0 0 0 3 0 ⎥
⎣ 0 0 0 0 0 ⎦
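Row reduction can be carried out with a computer algebra system. A sketch using SymPy (an outside tool, not prescribed by these notes); rref() returns the reduced row echelon form, a special echelon form in which each leading entry is 1 and is the only nonzero entry in its column:

    from sympy import Matrix

    # Reduce the example matrix; rref() returns the reduced row echelon
    # form together with the indices of the pivot columns.
    A = Matrix([
        [1, 2, 3, 4, 5],
        [0, 6, 0, 8, 9],
        [0, 0, 0, 3, 0],
        [0, 0, 0, 0, 0],
    ])
    R, pivots = A.rref()
    print(R)         # the reduced row echelon form of A
    print(pivots)    # (0, 1, 3): three pivots, so rank(A) = 3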
Remarks
1. Suppose 0 is one of the vectors v₁, v₂, …, vₘ, say v₁ = 0. Then the vectors must be
linearly dependent.

2. Suppose v is a nonzero vector. Then v, by itself, is linearly independent.

3. Suppose two of the vectors v₁, v₂, …, vₘ are equal, or one is a scalar multiple of the other, say
v₁ = kv₂. Then the vectors must be linearly dependent.

4. If the set {v₁, v₂, …, vₘ} is linearly independent, then any rearrangement
{vᵢ₁, vᵢ₂, …, vᵢₘ} of the vectors is also linearly independent.

5. If a set S of vectors is linearly independent, then any subset of S is linearly
independent. Equivalently, if S contains a linearly dependent subset, then S is linearly
dependent.
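In ℝⁿ, linear dependence can be tested by computing the rank of the matrix whose rows are the given vectors: the vectors are independent exactly when the rank equals their number. A NumPy sketch with illustrative vectors:

    import numpy as np

    v1, v2, v3 = [1, 2, 3], [0, 1, 1], [1, 3, 4]    # note v3 = v1 + v2

    print(np.linalg.matrix_rank(np.array([v1, v2])))      # 2 -> v1, v2 independent
    print(np.linalg.matrix_rank(np.array([v1, v2, v3])))  # 2 < 3 -> dependent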
Basis and Dimension
Definition A: A set S = {u₁, u₂, …, uₙ} of vectors is a basis of V if it has
the following two properties:
(1) S is linearly independent.
(2) S spans V.

Definition B: A set S = {u₁, u₂, …, uₙ} of vectors is a basis of V if every

v ∈ V can be written uniquely as a linear combination of the basis
vectors.

Theorem: Let V be a vector space such that one basis has m elements
and another basis has n elements. Then m = n.

• A vector space V is said to be of finite dimension n, or n-dimensional,

written dim V = n, if V has a basis with n elements.

• Suppose a vector space V does not have a finite basis. Then V is said
to be an infinite-dimensional vector space.
Theorems on Basis
Theorem: Let V be a vector space of finite dimension n. Then

1. Any n + 1 or more vectors in V are linearly dependent.

2. Any linearly independent set S = {u₁, u₂, …, uₙ} with n elements is a basis of V.

3. Any spanning set T = {v₁, v₂, …, vₙ} of V with n elements is a basis of V.

Theorem: Suppose S spans a vector space V. Then

1. Any maximal set of linearly independent vectors in S forms a basis of V.

2. Suppose one deletes from S every vector that is a linear combination of

preceding vectors in S. Then the remaining vectors form a basis of V.

Theorem: Let V be a vector space of finite dimension and let S = {u₁, u₂, …, uᵣ} be a
set of linearly independent vectors in V. Then S is part of a basis of V, i.e., S may
be extended to a basis of V.

Theorem: Let W be a subspace of an n-dimensional vector space V. Then dim W ≤ n.
In particular, if dim W = n, then W = V.

Theorem: If U and W are finite-dimensional subspaces of a vector space V, then

U + W has finite dimension and

dim(U + W) = dim U + dim W − dim(U ∩ W)
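The dimension formula can be checked on a concrete pair of subspaces of ℝ³, say U = span{e₁, e₂} and W = span{e₂, e₃}, whose intersection is span{e₂}:

    import numpy as np

    # dim is computed as the rank of a matrix whose rows span the subspace.
    e1, e2, e3 = np.eye(3)
    dim_U = np.linalg.matrix_rank(np.array([e1, e2]))             # 2
    dim_W = np.linalg.matrix_rank(np.array([e2, e3]))             # 2
    dim_sum = np.linalg.matrix_rank(np.array([e1, e2, e2, e3]))   # rows span U + W -> 3
    dim_int = 1                                                   # U ∩ W = span{e2}

    assert dim_sum == dim_U + dim_W - dim_int                     # 3 == 2 + 2 - 1
    print(dim_U, dim_W, dim_sum)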


Inner product space
• The definition of a vector space V involves an arbitrary field K. Here we first restrict K to be the
real field ℝ, in which case V is called a real vector space; in the case K = ℂ, the vector space V is a

complex vector space.

• Definition: Let V be a real vector space. Suppose to each pair of vectors u, v ∈ V there is assigned a
real number, denoted by ⟨u, v⟩. This function is called a (real) inner product or dot product on V if

it satisfies the following axioms:

1. Linear property: ⟨au₁ + bu₂, v⟩ = a⟨u₁, v⟩ + b⟨u₂, v⟩

2. Symmetric property: ⟨u, v⟩ = ⟨v, u⟩

3. Positive definite property: ⟨u, u⟩ ≥ 0; and ⟨u, u⟩ = 0 if and only if u = 0.

The vector space V with an inner product is called a (real) inner product space.
Examples of inner product spaces
• Example 1: ℝ²(ℝ).
⟨u, v⟩ = ⟨(u₁, u₂), (v₁, v₂)⟩ = 3u₁v₁ + 2u₂v₂ is an inner product on ℝ²(ℝ).

• Example 2: V = ℝⁿ(ℝ). Let u = (u₁, u₂, …, uₙ) and v = (v₁, v₂, …, vₙ).

⟨u, v⟩ = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ is an inner product on ℝⁿ(ℝ) (the usual dot product).

• Example 3: Let V be the vector space of real continuous functions on

the interval a ≤ t ≤ b.

⟨f, g⟩ = ∫ₐᵇ f(t) g(t) dt is an inner product on V.

• Example 4: Let V be the vector space of m × n matrices over ℝ.
⟨A, B⟩ = tr(AᵀB), the trace of AᵀB, is an inner product on V.

• Example 5: Let V be the vector space of real polynomials of degree ≤ n.

⟨p(t), q(t)⟩ = a₀b₀ + a₁b₁ + ⋯ + aₙbₙ is an inner product,

where p(t) = a₀ + a₁t + a₂t² + ⋯ + aₙtⁿ and q(t) = b₀ + b₁t + b₂t² + ⋯ + bₙtⁿ.
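Example 4's matrix inner product is easy to experiment with: tr(AᵀB) equals the sum of the elementwise products of A and B, and the axioms can be spot-checked on random matrices (a numerical illustration only):

    import numpy as np

    rng = np.random.default_rng(1)
    A, B = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))

    ip = np.trace(A.T @ B)                       # <A, B> = tr(A^T B)
    assert np.isclose(ip, np.sum(A * B))         # equals sum_ij A_ij * B_ij
    assert np.isclose(ip, np.trace(B.T @ A))     # symmetry: <A, B> = <B, A>
    assert np.trace(A.T @ A) >= 0                # positive definiteness (spot check)
    print(ip)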
Complex inner product space
• Definition: Let V be a complex vector space. Suppose to each pair of vectors u, v ∈ V
there is assigned a complex number, denoted by ⟨u, v⟩. This function is called a (complex)
inner product or dot product on V if it satisfies the following axioms:
1. Linear property: ⟨au₁ + bu₂, v⟩ = a⟨u₁, v⟩ + b⟨u₂, v⟩
2. Conjugate symmetric property: ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩
3. Positive definite property: ⟨u, u⟩ ≥ 0; and ⟨u, u⟩ = 0 if and only if u = 0.

(Note the change from the real case: symmetry is replaced by conjugate symmetry, which
keeps ⟨u, u⟩ real so that the positive definite axiom makes sense.)

The vector space V over ℂ with an inner product is called a (complex) inner product space.
Distance and angle in two-dimensional space
• In two-dimensional space, let u = OP₁ and v = OP₂ be the vectors from the origin O
to the points P₁(x₁, y₁) and P₂(x₂, y₂), and let θ be the angle between them.

• The length of u: ‖u‖ = |OP₁| = √(x₁² + y₁²) = ⟨u, u⟩^(1/2)

• The distance between u and v:

d(u, v) = ‖v − u‖ = √((x₂ − x₁)² + (y₂ − y₁)²) = ⟨v − u, v − u⟩^(1/2)

• The angle between the vectors u and v:

cos θ = (x₁x₂ + y₁y₂) / (√(x₁² + y₁²) √(x₂² + y₂²)) = ⟨u, v⟩ / (‖u‖ ‖v‖)
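These formulas translate directly into NumPy; the points P₁ = (3, 4) and P₂ = (4, 3) below are illustrative:

    import numpy as np

    u = np.array([3.0, 4.0])                      # OP1
    v = np.array([4.0, 3.0])                      # OP2

    length_u = np.sqrt(u @ u)                     # ||u|| = 5.0
    dist_uv = np.sqrt((v - u) @ (v - u))          # ||v - u|| = sqrt(2)
    cos_theta = (u @ v) / (np.sqrt(u @ u) * np.sqrt(v @ v))      # 24/25
    print(length_u, dist_uv, np.degrees(np.arccos(cos_theta)))   # 5.0 1.414... 16.26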
Norm of a vector
• The inner product ⟨u, u⟩ is nonnegative for any vector u, so its nonnegative square root exists.
This number is called the norm or length of u:

‖u‖ = √⟨u, u⟩

i.e., ‖u‖² = ⟨u, u⟩

Remarks:

1. If ‖u‖ = 1, or equivalently, if ⟨u, u⟩ = 1, then u is called a unit vector and is said to be

normalized.

2. Every nonzero vector v in V can be multiplied by the reciprocal of its length to obtain the
unit vector v̂ = (1/‖v‖) v.
Properties of the inner product
• If V is an inner product space, then for any vectors u, v in V:

1. |⟨u, v⟩| ≤ ‖u‖ ‖v‖ (Cauchy-Schwarz inequality)

(Equality holds if and only if u and v are linearly dependent.)

2. Re ⟨u, v⟩ ≤ ‖u‖ ‖v‖

3. ‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality)

4. ‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²) (parallelogram law)
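These properties can be spot-checked on random vectors (a numerical illustration, not a proof); the dimension and seed below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(2)
    u, v = rng.normal(size=5), rng.normal(size=5)
    norm = np.linalg.norm

    assert abs(u @ v) <= norm(u) * norm(v)                     # Cauchy-Schwarz
    assert norm(u + v) <= norm(u) + norm(v)                    # triangle inequality
    assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                      2 * (norm(u)**2 + norm(v)**2))           # parallelogram law
    w = 3.0 * u                                                # linearly dependent pair
    assert np.isclose(abs(u @ w), norm(u) * norm(w))           # equality case of C-S
    print("all properties verified on this sample")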
• Orthogonality: Two vectors u, v ∈ V, where V is an inner product space, are said
to be orthogonal (or perpendicular) if ⟨u, v⟩ = 0.

• Orthogonal set: A set {uᵢ} of vectors in V is said to be orthogonal if all pairs
of distinct vectors are orthogonal,

i.e., if ⟨uᵢ, uⱼ⟩ = 0 for i ≠ j.

• Orthonormal set: The set is said to be orthonormal if it is orthogonal and each
uᵢ has length 1,

i.e., (i) ⟨uᵢ, uⱼ⟩ = 0 for i ≠ j, and

(ii) ‖uᵢ‖ = 1.
Gram-Schmidt orthogonalization process
• Suppose {v₁, v₂, …, vₙ} is a basis of an inner product space V. One can use this basis to
construct an orthogonal basis {w₁, w₂, …, wₙ} of V as follows. Set

w₁ = v₁

w₂ = v₂ − (⟨v₂, w₁⟩ / ⟨w₁, w₁⟩) w₁

w₃ = v₃ − (⟨v₃, w₁⟩ / ⟨w₁, w₁⟩) w₁ − (⟨v₃, w₂⟩ / ⟨w₂, w₂⟩) w₂

…

wₙ = vₙ − (⟨vₙ, w₁⟩ / ⟨w₁, w₁⟩) w₁ − (⟨vₙ, w₂⟩ / ⟨w₂, w₂⟩) w₂ − ⋯ − (⟨vₙ, wₙ₋₁⟩ / ⟨wₙ₋₁, wₙ₋₁⟩) wₙ₋₁

This construction process is known as the Gram-Schmidt orthogonalization process.
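A direct NumPy implementation of the process above, using the standard dot product on ℝⁿ (the basis vectors below are illustrative):

    import numpy as np

    def gram_schmidt(vs):
        """Orthogonalize a list of linearly independent vectors (classical G-S)."""
        ws = []
        for v in vs:
            w = v.astype(float)
            for wj in ws:
                w = w - (v @ wj) / (wj @ wj) * wj   # subtract projection onto w_j
            ws.append(w)
        return ws

    v1, v2, v3 = np.array([1, 1, 1]), np.array([1, 0, 1]), np.array([0, 1, 2])
    w1, w2, w3 = gram_schmidt([v1, v2, v3])

    # the w_i are pairwise orthogonal; normalizing each yields an orthonormal basis
    for a, b in [(w1, w2), (w1, w3), (w2, w3)]:
        assert np.isclose(a @ b, 0.0)
    print([w / np.linalg.norm(w) for w in (w1, w2, w3)])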
