
BY

NYANABO IBITAMUNO SIMEON


U2018/5575009

DEPARTMENT OF MATHEMATICS AND STATISTICS


FACULTY OF PHYSICAL SCIENCE AND INFORMATION TECHNOLOGY, COLLEGE
OF NATURAL AND APPLIED SCIENCE, UNIVERSITY OF PORT HARCOURT,
NIGERIA.

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE AWARD OF BACHELOR


OF SCIENCE (B.Sc.) DEGREE IN MATHEMATICS AND COMPUTER SCIENCE

CHAPTER 1

INTRODUCTION

Definition: A vector space over a field F is a non-empty set V together with two
binary operations, called vector addition and scalar multiplication, which satisfy
the following axioms:

Vector addition axioms:


o Commutativity: u + v = v + u for all u, v in V.
o Associativity: (u + v) + w = u + (v + w) for all u, v, w in V.
o Additive identity: There exists an element 0 ∈ V such that u + 0 = u for
all u ∈ V.
o Additive inverse: For every u ∈ V, there exists an element -u ∈ V such
that u + (-u) = 0.
Scalar multiplication axioms:
o Associativity: c(du) = (cd)u for all c, d ∈ F and u ∈ V.
o Distributivity over vector addition: c(u + v) = cu + cv for all c ∈ F and u, v ∈ V.
o Distributivity over scalar addition: (c + d)u = cu + du for all c, d ∈ F and u ∈ V.
o Scalar identity: 1u = u for all u ∈ V.

Examples: Some common examples of vector spaces include:

 The set of all real numbers, ℝ, under the usual operations of addition and
multiplication.
 The set of all complex numbers, ℂ, under the usual operations of addition
and multiplication.
 The set of all n-tuples of real numbers, ℝn under the usual operations of
vector addition and scalar multiplication.
 The set of all polynomials of degree less than or equal to n, ℙn, under the
usual operations of polynomial addition and scalar multiplication.
* The set of all m × n matrices, Mm×n(F), with entries from the field F, under
the usual operations of matrix addition and scalar multiplication.
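To make the axioms concrete, here is a small numerical spot-check in Python with NumPy (an illustrative sketch, not part of the formal development): it verifies each axiom for randomly chosen vectors in ℝ³ and randomly chosen real scalars. The variable names are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.random(3), rng.random(3), rng.random(3)   # vectors in R^3
c, d = rng.random(), rng.random()                       # scalars in R

# Vector addition axioms
assert np.allclose(u + v, v + u)                 # commutativity
assert np.allclose((u + v) + w, u + (v + w))     # associativity
assert np.allclose(u + np.zeros(3), u)           # additive identity
assert np.allclose(u + (-u), np.zeros(3))        # additive inverse

# Scalar multiplication axioms
assert np.allclose(c * (d * u), (c * d) * u)     # associativity of scaling
assert np.allclose(c * (u + v), c * u + c * v)   # distributivity over vector addition
assert np.allclose((c + d) * u, c * u + d * u)   # distributivity over scalar addition
assert np.allclose(1.0 * u, u)                   # scalar identity
print("all vector space axioms hold for this sample")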

1.1.1 Basic properties:

 Uniqueness of the additive identity: There is only one additive identity


element in a vector space.
 Uniqueness of the additive inverse: For every vector u in a vector space,
there is only one additive inverse element -u.
 Zero vector: The zero vector is the additive identity element of a vector
space.
 Negative of a scalar: The negative of a scalar c is -c, and (-c)u = -cu for
all vectors u in a vector space.
 Scalar multiple of the zero vector: The scalar multiple of the zero vector
by any scalar is the zero vector.
 Distributivity over addition: Scalar multiplication distributes over vector
addition.
 Associativity of scalar multiplication: Scalar multiplication is associative.
 Scalar identity: The scalar identity element is the multiplicative identity
of the field F.
1.1.2 Linear combinations and subspaces:

 A linear combination of vectors u1,u2,…,un in a vector space V is an


expression of the form c1u1 + c2u2 + . . . + cnun, where c1, c2, . . . cn are
scalars in the field F.

 A subset S of a vector space V is a subspace of V if it is closed under


vector addition and scalar multiplication. This means that if u and v are in
S, then u + v and cu are also in S for all scalars c in F. Equivalently, a subspace
of a vector space V is a non-empty subset of V that is itself a vector space with
respect to the operations of vector addition and scalar multiplication in V.
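As a concrete illustration of these two ideas, the following sketch (Python/NumPy, with made-up vectors) tests whether a given vector b is a linear combination of u1 and u2, i.e., whether b lies in the subspace spanned by u1 and u2.

import numpy as np

u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, -1.0])
b  = np.array([3.0, 2.0, 4.0])           # is b = c1*u1 + c2*u2 for some scalars c1, c2?

A = np.column_stack([u1, u2])            # columns are the spanning vectors
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# b belongs to span{u1, u2} exactly when the combination reproduces b
in_span = np.allclose(A @ coeffs, b)
print(coeffs, in_span)                   # approximately [3. 2.] True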

1.1.3 Linear independence and dependence:


 A set of vectors u1, u2, . . . , un in a vector space V is linearly independent if
no vector in the set can be expressed as a linear combination of the other
vectors in the set.

 A set of vectors u1, u2, . . . , un in a vector space V is linearly dependent if
one of the vectors in the set can be expressed as a linear combination of
the other vectors in the set.
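In practice, linear independence of a finite set of vectors in ℝⁿ can be tested by placing the vectors as columns of a matrix and comparing its rank with the number of vectors; the following is a small sketch of that check in Python/NumPy with illustrative data.

import numpy as np

vectors = [np.array([1.0, 2.0, 3.0]),
           np.array([0.0, 1.0, 4.0]),
           np.array([1.0, 3.0, 7.0])]    # the third equals the first plus the second

M = np.column_stack(vectors)
rank = np.linalg.matrix_rank(M)

# Independent exactly when the rank equals the number of vectors
print("linearly independent" if rank == len(vectors) else "linearly dependent")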
1.1.4 Basis and dimension:

 A basis for a vector space V is a linearly independent set of vectors that


spans V. This means that every vector in V can be expressed as a linear
combination of the vectors in the basis.

 The dimension of a vector space V is the number of vectors in a basis for


V.
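The same rank computation gives the dimension of the subspace spanned by a finite set of vectors, and a basis can be read off by discarding dependent vectors; the sketch below (Python/NumPy, illustrative data) shows this for a dependent set of three vectors in ℝ³.

import numpy as np

spanning_set = np.column_stack([[1.0, 2.0, 3.0],
                                [0.0, 1.0, 4.0],
                                [1.0, 3.0, 7.0]])   # columns span a subspace of R^3

dim = np.linalg.matrix_rank(spanning_set)            # dimension of the span
print("dimension of the span:", dim)                 # 2, since column 3 = column 1 + column 2

# The first two columns are already independent, so they form a basis of the span
basis = spanning_set[:, :2]
print(basis)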

1.1.5 Applications of vector spaces:

Vector spaces are used in many areas of mathematics, physics, and engineering.
Here are a few examples:

 Linear algebra: Vector spaces are the foundation of linear algebra, which
is a branch of mathematics that deals with vectors, matrices, and linear
transformations. Linear algebra has applications in many areas, including
physics, engineering, computer science, and economics.
 Physics: Vector spaces are used to represent physical quantities such as
forces, velocities, and accelerations. For example, the forces acting on an
object can be represented as a vector in a vector space.
 Engineering: Vector spaces are used in many areas of engineering, such as
structural engineering, control engineering, and signal processing. For
example, vector spaces are used to design and analyze structures, to
control systems, and to process signals.
 Computer science: Vector spaces are used in many areas of computer
science, such as computer graphics, machine learning, and data mining.
For example, vector spaces are used to represent images, to train machine
learning models, and to cluster data.

Here are some specific examples of how vector spaces are used in different
fields:

 In physics, vector spaces are used to describe the motion of objects, the
forces acting on objects, and the fields that exist in space. For example,
the position of an object can be represented as a vector in a vector space,
and the force acting on an object can be represented as a vector in a
different vector space.

 In engineering, vector spaces are used to design and analyze structures, to


control systems, and to process signals. For example, vector spaces are
used to design bridges and buildings, to control robots and airplanes, and
to compress and decompress digital media.

 In computer science, vector spaces are used to represent images, to train


machine learning models, and to cluster data. For example, vector spaces
are used to store and process images, to train machine learning models to
recognize objects in images, and to cluster data points into groups.

Vector spaces are a powerful tool that can be used to solve a wide variety of
problems in many different fields.

1.1.6 Examples of Vector Spaces:

Here are examples of vector spaces:


1. The set of all real numbers.
2. The set of all complex numbers.
3. The set of all polynomials of degree at most n.
4. The set of all matrices of dimension m x n.
5. The set of all vectors in Rn (where n is a positive integer).
6. The set of all functions from R to R.
7. The set of all continuous functions from R to R.
8. The set of all square matrices of dimension n.

1.3 Scope of Study
Vector spaces, also known as linear spaces, are fundamental mathematical
structures with widespread applications across various branches of pure
mathematics and numerous practical disciplines. The primary scope of this
study, titled "A Study on Basic Properties of Vector Spaces", is to provide a
comprehensive examination of the foundational properties that define vector
spaces.
This study encompasses the following key areas:
Axiomatic Definition: We will delve into the formal axiomatic definition of
vector spaces, elucidating the necessary conditions that a set and its operations
must satisfy to qualify as a vector space. This exploration will encompass both
finite-dimensional and infinite-dimensional vector spaces.
Properties of Vector Addition: We will thoroughly investigate the properties
governing vector addition within vector spaces. This includes commutativity,
associativity, the existence of a zero vector, and the existence of additive
inverses.
Properties of Scalar Multiplication: We will explore the properties associated
with scalar multiplication in vector spaces. This encompasses compatibility with
the field of scalars, distributive properties, and the existence of a multiplicative
identity.
Applications: We will discuss the practical applications of vector spaces in
mathematics and various other disciplines, emphasizing how an understanding
of these fundamental properties plays a crucial role in solving real-world
problems.

1.4 Purpose of Study

The primary purpose of this study is to provide an in-depth analysis of the basic
properties of vector spaces, with the following objectives in mind:
Comprehensive Understanding: To offer readers a clear and comprehensive
understanding of what vector spaces are and how they are formally defined.
This includes an exploration of the axioms that govern vector spaces.

Fundamental Properties: To elucidate the fundamental properties of vector
addition and scalar multiplication within vector spaces. By doing so, we aim to
highlight the core principles that underpin vector space theory.
Applications: To demonstrate the practical significance of vector spaces by
showcasing their applications in various fields. This includes but is not limited
to physics, engineering, computer science, economics, and other disciplines
where vector spaces are foundational.
Laying the Foundation: To lay a solid foundation for further exploration of
advanced topics in linear algebra and related mathematical fields.
Understanding the basic properties of vector spaces is essential for tackling
more complex concepts and problem-solving techniques.
Historical Context: To provide insight into the historical development of
vector spaces, acknowledging the contributions of key mathematicians and their
role in shaping this mathematical framework.
By fulfilling these objectives, we aim to equip readers with a profound
understanding of vector spaces and their essential properties. This knowledge
will not only enrich their grasp of pure mathematics but also empower them to
apply these concepts effectively in practical scenarios across diverse disciplines.

1.5 Definition of terms

1.5.1 Set: A collection of distinct objects called elements.

1.5.2 Numbers: an arithmetical value, expressed by a word, symbol, or figure,
representing a particular quantity and used in counting and making calculations

Types:
1. Natural Numbers:
Natural numbers are also called "counting numbers"; they are the positive
integers from 1 onward. The set of natural numbers is represented by the
letter "N" and is defined by:
N = {1, 2, 3, 4, 5, ……….}
Examples: 35, 59, 110, etc.
2. Whole Numbers:
Whole numbers are also known as natural numbers with zero. The set consists
of non-negative integers where it does not contain any decimal or fractional
part. The whole number set is represented by the letter "W". The whole
number set is defined by:
W = {0,1, 2, 3, 4, 5, ……….}
Examples: 67, 0, 49, 52, etc.
3. Integers:

Integers are defined as the set of all whole numbers together with the negatives
of the natural numbers. The integer set is represented by the symbol "Z". The set
of integers is defined as:
Z = {…, -3, -2, -1, 0, 1, 2, 3, …}
Examples: -52, 0, -1, 16, 82, etc.
4. Real Numbers:
Real numbers include all positive and negative integers, fractions, and decimal
numbers, with no imaginary part. The set of real numbers is represented by the
letter "R".
Examples: ¾, 0.333, √2, 0, -10, 20, etc.
5. Rational Numbers:
Any number that can be written in the form p/q, where p and q are integers and
q ≠ 0, is known as a rational number. The set of rational numbers is
represented by the letter "Q".
Examples: 7/1, 10/2, 1/1, 0/1, etc.

6. Irrational Numbers:
A number that cannot be expressed in the form p/q, with p and q integers and
q ≠ 0, is known as an irrational number. The set of irrational numbers is often
represented by the letter "P".
Examples: √2, π, Euler's number e, etc.
7. Complex Numbers:
A number of the form a + bi is called a complex number, where a and b are
real numbers and i is the imaginary unit (i² = -1).
Examples: 4 + 4i, -2 + 3i, 1 +√2i, etc.
8. Imaginary Numbers:
The imaginary numbers are categorized under complex numbers: they are the
products of real numbers with the imaginary unit "i". The imaginary part of a
complex number z is written Im(z).
Examples: √2 i, 3i, etc.

1.5.3 Space: A set whose elements are governed by a collection of rules or
axioms describing how each element relates to the others within the set.
OR
The ‘space’ in the terminology ‘Vector space’ refers to the mathematical
definition of a ‘Set’ which is a collection of objects (in this case vectors) with
some properties.

1.5.4 Vector space: A set that is closed under vector addition and scalar
multiplication and satisfies the vector space axioms listed earlier in this chapter.
1.5.5 Scalar: An ordinary number; whereas vectors have direction and
magnitude, scalars have only magnitude. The scalars we will be dealing with
will all be real numbers, but other kinds of numbers can also be scalars. 5 miles
represents a scalar.

1.5.6 Magnitude: The magnitude of a vector is its length, or distance from the
origin.

This study will also cover the following topics:
1. Definition and fundamental concepts of vector spaces
2. Basic properties of vector spaces
3. Subspaces, linear independence, and basis of vector spaces
4. Dimension of vector spaces
5. Linear transformations and matrices
6. Applications of vector spaces in physics, engineering, and computer science.

Here are some basic properties of vector spaces:

Closure under addition: The sum of any two vectors in a vector space is also a
vector in the vector space.
Closure under scalar multiplication: The product of any scalar and a vector in a
vector space is also a vector in the vector space.
Associativity of addition: The sum of three vectors is the same regardless of
how they are grouped.
Commutativity of addition: The sum of two vectors is the same regardless of the
order in which they are added.
Distributivity of scalar multiplication over addition: The product of a scalar and
the sum of two vectors is the same as the sum of the products of the scalar and
each vector.
Existence of a zero vector: Every vector space has a vector called the zero
vector, which, when added to any vector, gives that same vector.
Existence of additive inverses: For every vector in a vector space, there exists a
vector called the additive inverse of the vector, which when added to the vector
gives the zero vector.

These properties are essential for the development of linear algebra, which is
the study of vector spaces and their applications. Linear algebra is used in many
areas of mathematics, including physics, engineering, and computer science.

1.6 Usage of properties in linear algebra:


 The closure under addition property allows us to define the sum of two
vectors.
 The closure under scalar multiplication property allows us to define the
product of a scalar and a vector.
 The associativity of addition property allows us to simplify expressions
involving vector addition.
 The commutativity of addition property allows us to rearrange terms in
expressions involving vector addition.
 The distributivity of scalar multiplication over addition property allows us
to distribute scalar multiplication over vector addition.
 The existence of a zero vector allows us to define the subtraction of two
vectors.
 The existence of additive inverses allows us to define the negative of a
vector.

CHAPTER 2

LITERATURE REVIEW

2.1 The purpose and significance of the literature review.

In the context of a project on the fundamental characteristics of vector spaces,


the goal of a literature review is to give an in-depth summary of the body of
knowledge and research that has already been done on the subject. Finding,
reading, and analyzing pertinent sources is part of this process, as is removing
and summarizing the most important information.

A literature review is crucial for several reasons, such as:


 To set the scene for your investigation. You can more effectively find the
gaps in the literature and formulate your own research questions if you are
aware of the current level of knowledge on the subject.
 To gain knowledge of the many definitions and axiomatizations of vector
spaces throughout history.
 To gain knowledge of the various techniques employed to show the
fundamental characteristics of vector spaces.
 To study the various uses of vector spaces in other branches of
mathematics.
 To determine the main obstacles and unanswered concerns in the field of
vector space research.
You can present your own study as a meaningful contribution to the topic by
showcasing your understanding of the existing literature and by pointing out
the main gaps in it.
A literature review can also be beneficial for honing your own research
techniques. You can become knowledgeable about various research techniques
and analytical philosophies by reading and analyzing the writings of others.
Additionally, you can learn how to properly write about the results of your
research.
Here are a few examples of how a project on the key qualities of vector spaces
can benefit from using the content of a literature review:

 A review of the literature could be used to determine the many methods
used to demonstrate the fundamental characteristics of vector spaces. This
could assist you in creating better and more effective proofs of these
qualities or in developing your own proofs.
 A review of the literature could be used to investigate the various uses of
vector spaces in other branches of mathematics. This could help you better
grasp how vector spaces relate to your own research interests or inspire
new research efforts.
 A review of the literature could be used to determine the main obstacles
and unanswered concerns in the field of vector space research.
Overall, a literature review is an essential part of any research project, including
a project on the basic properties of vector spaces. It enables you to:
 Establish the context information for your study.
 Determine the main gaps in the body of knowledge.
 Create original research questions.
 Establishes the basis for the conclusions of your own investigation.
 Show off your expertise in the area.
 Expand upon prior research and provide a novel addition to the field.
 Improve your own investigative abilities.
Other researchers in the field may find great value in a well-written literature
review. It can give them a thorough picture of the present level of knowledge on
the subject and assist them in identifying the main areas that require more
investigation.

2.2 Historical Development of vector spaces.


The historical development of vector spaces can be traced back to the early
concepts of vectors and linear combinations.
2.2.1 Early concepts of vectors and linear combinations
The concept of vectors can be traced back to ancient Greece, where they were
used to represent geometric objects such as lines, planes, and volumes.
However, it was not until the 17th century that vectors began to be used in a
more systematic way, with the development of analytic geometry by René
Descartes and Pierre de Fermat.

Linear combinations were also first used in analytic geometry, to represent
geometric objects as sums of other geometric objects. For example, a line could
be represented as the linear combination of two points on the line.

Hermann Grassmann
Hermann Grassmann was a German mathematician who is credited with
developing the first formal theory of vector spaces. In his book "Die lineale
Ausdehnungslehre" (1844), Grassmann introduced the concept of a "linear
space" as a set of objects that could be added together and multiplied by scalars.
He also defined the basic operations of vector addition and scalar multiplication,
and proved many of the fundamental properties of vector spaces.
Georg Friedrich Bernhard Riemann
Georg Friedrich Bernhard Riemann was a German mathematician who made
significant contributions to many areas of mathematics, including vector spaces.
In his habilitation thesis "Über die Hypothesen, welche der Geometrie zu
Grunde liegen" (1854), Riemann introduced the concept of a "Riemannian
manifold," which is a type of curved space that can be modeled using vector
spaces.
David Hilbert
David Hilbert was a German mathematician who made significant contributions
to many areas of mathematics, including vector spaces. In his book "Grundlagen
der Geometrie" (1899), Hilbert provided a rigorous and axiomatic treatment of
Euclidean geometry, using vector spaces as a foundation.
Stefan Banach
Stefan Banach was a Polish mathematician who is considered to be one of the
founders of functional analysis. In his book "Théorie des Opérations Linéaires"
(1932), Banach defined and studied Banach spaces, which are a type of vector
space that is complete with respect to a norm.

2.2.2 How their work laid the foundation for the formalization of vector spaces?
The work of Grassmann, Riemann, Hilbert, and Banach laid the foundation for
the formalization of vector spaces by providing a rigorous and axiomatic
treatment of the basic concepts involved. Grassmann's work defined the basic
operations of vector addition and scalar multiplication, and proved many of the

fundamental properties of vector spaces. Riemann's work introduced the
concept of a Riemannian manifold, which is a type of curved space that can be
modeled using vector spaces. Hilbert's work provided a rigorous and axiomatic
treatment of Euclidean geometry, using vector spaces as a foundation. And
Banach's work defined and studied Banach spaces, which are a type of vector
space that is complete with respect to a norm.

The formalization of vector spaces has had a profound impact on many areas of
mathematics, including linear algebra, functional analysis, and differential
geometry. Vector spaces are now one of the most fundamental concepts in
mathematics, and they are used in a wide variety of applications, including
physics, engineering, and computer science.

Here are some specific examples of how the work of Grassmann, Riemann,
Hilbert, and Banach laid the foundation for the formalization of vector spaces:

* Grassmann's definition of a vector space and his proofs of the fundamental


properties of vector spaces provided a rigorous foundation for the study of
vector spaces.
* Riemann's work on Riemannian manifolds showed that vector spaces could be
used to model curved spaces, which led to the development of differential
geometry.
* Hilbert's use of vector spaces to axiomatize Euclidean geometry showed that
vector spaces could be used to provide a rigorous foundation for geometry.
* Banach's work on Banach spaces led to the development of functional
analysis, which is a branch of mathematics that studies vector spaces of
functions.

The work of these mathematicians laid the foundation for the formalization of
vector spaces, which has had a profound impact on many areas of mathematics
and science.

2.3 Axiomatic definition of vector spaces
A vector space is a set of objects, called vectors that can be added together and
multiplied by scalars (numbers). The axiomatic definition of a vector space
specifies the following properties that the operations of vector addition and
scalar multiplication must satisfy:

Closure under vector addition: The sum of any two vectors in a vector space is
also a vector in the vector space.
Closure under scalar multiplication: The product of any scalar and a vector in a
vector space is also a vector in the vector space.
Associativity of vector addition: The sum of three vectors is the same regardless
of how they are grouped: (u + v) + w is equal to u + (v + w) for all
vectors u, v, and w.
Commutativity of addition: The sum of two vectors is the same regardless of the
order in which they are added.
Distributivity of scalar multiplication over addition: The product of a scalar and
the sum of two vectors is the same as the sum of the products of the scalar and
each vector.
Existence of a zero vector: Every vector space has a vector, called the zero
vector, which when added to any vector in the vector space gives the same
vector.
Existence of additive inverses: For every vector in a vector space, there exists
another vector in the vector space, called the additive inverse of the vector,
which when added to the vector gives the zero vector.

2.3.1 Role of axioms in defining the fundamental properties of vector


spaces
The axioms of a vector space define the fundamental properties of vector
spaces. For example, the axiom of closure under addition ensures that the sum
of any two vectors in a vector space is also a vector in the vector space. This
allows us to define the operation of vector addition on vector spaces without
having to worry about the result being outside of the vector space.
Similarly, the other axioms of a vector space ensure that the operation of scalar
multiplication and the properties of vector addition and scalar multiplication

that we are familiar with from real and complex numbers also hold for vector
spaces. This allows us to develop a systematic theory of vector spaces without
having to check each individual property separately.

2.3.2 Works of mathematicians who contributed to the formalization of


vector spaces
Richard Dedekind was a German mathematician who made significant
contributions to many areas of mathematics, including vector spaces. In his
book "Was sind und was sollen die Zahlen?" (1888), Dedekind introduced the
concept of a "linear space" as a set of elements that could be added together and
multiplied by scalars. He also defined the basic operations of vector addition
and scalar multiplication, and proved many of the fundamental properties of
vector spaces.
Stefan Banach was a Polish mathematician who is considered to be one of the
founders of functional analysis. In his book "Théorie des Opérations Linéaires"
(1932), Banach defined and studied Banach spaces, which are a type of vector
space that is complete with respect to a norm.
Nelson Dunford was an American mathematician who made significant
contributions to functional analysis, including the study of vector spaces. In the
book "Linear Operators" (1958), Dunford and his co-author, Jacob T. Schwartz,
provided a comprehensive and rigorous treatment of the theory of vector spaces
and linear operators.

2.3.3 Examples of how the axiomatic definition is applied in various


mathematical contexts
 The axiomatic definition of vector spaces is used in a wide variety of
mathematical contexts. For example; it is used to define vector spaces of
real numbers, complex numbers, polynomials, matrices, and functions.
 The axiomatic definition is also used to study the properties of vector
spaces, such as linear independence, dependence, basis, dimension, and
eigenvalues and eigenvectors.
 In addition, the axiomatic definition is used to develop many other areas
of mathematics, such as linear algebra, functional analysis, and
differential geometry.
 Here are some specific examples of how the axiomatic definition of vector
spaces is applied in various mathematical contexts:

 In linear algebra; the axiomatic definition of vector spaces is used to
define the basic concepts of vector addition, scalar multiplication, linear
independence, dependence, basis, and dimension.
 In functional analysis; the axiomatic definition of vector spaces is used to
define Banach spaces and Hilbert spaces, which are two important types
of vector spaces that are used to study various problems in mathematics
and physics.
 In differential geometry, the axiomatic definition of vector spaces is used
to define tangent spaces and cotangent spaces, which are two important
types of vector spaces that are used to study the geometry of curved
surfaces.
Overall, the axiomatic definition of vector spaces is a powerful tool that is used
in a wide variety of mathematical contexts. It allows us to study the properties
of vector spaces in a systematic way, and to develop new mathematical theories
based on the concept of vector spaces.

2.4 Properties of Vector Spaces

2.4.1 Closure under vector addition and scalar multiplication


Definition: A vector space is a set V of objects, called vectors, together with
two operations: vector addition (+) and scalar multiplication (⋅), such that the
following properties hold:
Closure under vector addition: For all vectors u and v in V, the sum u+v is also
in V.
Closure under scalar multiplication: For all scalars c and all vectors u in V, the
product cu is also in V.
Examples:
The set of all real numbers is a vector space under the usual operations of
addition and multiplication.
The set of all complex numbers is a vector space under the usual operations of
addition and multiplication.
The set of all polynomials with real coefficients is a vector space under the
usual operations of polynomial addition and scalar multiplication.

The set of all n×n matrices with real entries is a vector space under the usual
operations of matrix addition and scalar multiplication.
Proofs:
Proof that the sum of any two vectors in a vector space is also in the vector
space:
Let V be a vector space and let u and v be any two vectors in V. Since V is a
vector space, the sum u+v is defined. We need to show that u+v is also in V.
By the definition of a vector space, closure under vector addition means that the
sum of any two vectors in V is also in V. Therefore, u+v is in V.
Proof that the product of any scalar and a vector in a vector space is also in the
vector space:
Let V be a vector space and let c be any scalar and let u be any vector in V.
Since V is a vector space, the product cu is defined. We need to show that cu is
also in V.
By the definition of a vector space, closure under scalar multiplication means
that the product of any scalar and a vector in V is also in V. Therefore, cu is in
V.
2.4.2 Associativity, commutativity, and the existence of identity elements
Definition: A vector space is said to be associative under vector addition if the
sum of three vectors is the same regardless of how they are grouped.
A vector space is said to be commutative under vector addition if the sum of
two vectors is the same regardless of the order in which they are added. A
vector space is said to have an identity element for vector addition if there exists
a vector in the vector space, called the zero vector, which when added to any
other vector in the vector space gives the same vector.
Examples:
The set of all real numbers is associative, commutative, and has an identity
element for vector addition.
The set of all complex numbers is associative, commutative, and has an identity
element for vector addition.
The set of all polynomials with real coefficients is associative, commutative,
and has an identity element for vector addition.

The set of all n×n matrices with real entries is associative, commutative, and has
an identity element for vector addition.
Proofs:
Proof that a vector space is associative under vector addition:
Let V be a vector space and let u, v, and w be any three vectors in V. We need
to show that (u+v)+w=u+(v+w).
Associativity of vector addition is one of the defining axioms of a vector space,
so for any u, v, and w in V we have directly:
(u + v) + w = u + (v + w)
Therefore, the vector space is associative under vector addition.
Proof that a vector space is commutative under vector addition:
Let V be a vector space and let u and v be any two vectors in V. We need to
show that u+v=v+u.
Commutativity of vector addition is likewise one of the defining axioms, so for
any u and v in V:
u + v = v + u
Therefore, the vector space is commutative under vector addition.
Proof that a vector space has an identity element for vector addition:
Let V be a vector space. We need to show that there exists a vector 0 in V such
that for any vector u in V, the following equation holds:
0+u=u
The existence of an additive identity is itself one of the vector space axioms:
by definition, V contains a vector 0 such that u + 0 = u for every u in V.
Combining this with the commutativity of vector addition gives
0 + u = u + 0 = u
for every vector u in V.
This proves that every vector space has an identity element for vector addition.
Example:
The set of all real numbers is a vector space under the usual operations of
addition and multiplication. The identity element for vector addition in this
vector space is the number 0. For any real number x, we have:
0+x=x

This shows that the vector space of all real numbers has an identity element for
vector addition.

2.4.3 The existence of additive inverses and multiplicative inverses


Definition: A vector space is said to have additive inverses if for every vector u
in the vector space, there exists a vector −u in the vector space such that
u + (−u) = 0. The vector space axioms do not provide a multiplication of vectors,
so multiplicative inverses make sense only when the set also carries such a
multiplication (as ℝ and ℂ do); such a space is said to have multiplicative
inverses if for every nonzero element u there exists a nonzero element v such
that uv = 1.
Examples:
The set of all real numbers has additive inverses and multiplicative inverses.
The set of all complex numbers has additive inverses and multiplicative
inverses.
The set of all polynomials with real coefficients has additive inverses but does
not have multiplicative inverses.
The set of all n×n matrices with real entries has additive inverses but does not
have multiplicative inverses for all matrices.
Proofs:
Proof that a vector space has additive inverses:

Let V be a vector space and let u be any vector in V. We need to show that there
exists a vector −u in V such that u+(−u)=0.
The existence of additive inverses is one of the vector space axioms, so such a
vector −u exists in V by definition, and
u + (−u) = 0
Therefore, the vector space has additive inverses.

Multiplicative inverses:

A general vector space need not have multiplicative inverses, because its axioms
provide no multiplication of vectors. In the vector spaces ℝ and ℂ, however,
every nonzero element u has the multiplicative inverse v = 1/u, since
u · (1/u) = 1
so these particular vector spaces do have multiplicative inverses.
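To see this contrast concretely, the short sketch below (Python with NumPy; the particular numbers are only illustrative) inverts a nonzero real number, inverts an invertible matrix, and shows that a nonzero but singular matrix has no multiplicative inverse.

import numpy as np

x = 2.5
print(1.0 / x)                      # every nonzero real number has a multiplicative inverse

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.inv(A))             # this matrix happens to be invertible (determinant 1)

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # nonzero, but its determinant is 0
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError:
    print("B is nonzero yet has no multiplicative inverse")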

Conclusion
The basic properties of vector spaces are closure under vector addition and
scalar multiplication, associativity, commutativity, and the existence of identity
elements and additive inverses; multiplicative inverses arise only in those vector
spaces, such as ℝ and ℂ, that also carry a multiplication. These properties are
important because they allow us to perform operations on vectors in a consistent
and predictable way.

2.5 Applications of Vector Spaces in Mathematics and Beyond

2.5.1 Physics
Vectors are used extensively in physics to represent physical quantities such as
forces, velocities, and accelerations. For example, the force acting on an object
can be represented as a vector in ℝ3, with the three components representing the
forces in the x, y, and z directions. Vectors can also be used to represent other
physical quantities such as electric and magnetic fields, momentum, and energy.
Vectors play a crucial role in many areas of physics, including:
 Mechanics: Vectors are used to describe the motion of objects, the forces
acting on objects, and the work and energy associated with motion. For
example, Newton's laws of motion can be expressed in terms of vectors.
 Electromagnetism: Vectors are used to describe electric and magnetic
fields, as well as the forces acting on charged particles in these fields. For
example, Maxwell's equations, which describe the fundamental laws of
electromagnetism, are expressed in terms of vectors.
 Quantum mechanics: Vectors are used to represent the state of a quantum
system. For example, the wave function of a particle can be represented as
a vector in a complex vector space.

2.5.2 Engineering
Vector spaces are used in many areas of engineering, including control systems,
electrical circuits, and structural analysis.
 Control systems: Vector spaces are used to model and control dynamic
systems such as robots, airplanes, and power plants. For example, state-
space models of dynamic systems are represented using vector spaces.
 Electrical circuits: Vector spaces are used to analyze the behavior of
electrical circuits. For example, the currents and voltages in a circuit can
be represented as vectors in a vector space.

 Structural analysis: Vector spaces are used to design and analyze


structures such as bridges, buildings, and airplanes. For example, the
forces acting on a structure can be represented as vectors in a vector
space.
2.5.3 Computer Science
Vector spaces are used in many areas of computer science, including computer
graphics, data analysis, and machine learning.
 Computer graphics: Vector spaces are used to represent images, 3D
models, and other graphical objects. For example, the pixels in an image
can be represented as vectors in a vector space.

 Data analysis: Vector spaces are used to analyze large datasets. For
example, principal component analysis, which is a technique used to
reduce the dimensionality of data, is based on vector spaces.
 Machine learning: Vector spaces are used to train and deploy machine
learning models. For example, support vector machines, which are a type
of machine learning model used for classification, are based on vector
spaces.

2.5.4 Economics
Vector spaces are also used in economics to model economic systems and
optimization problems.
 Modeling economic systems: Vector spaces can be used to model the
behavior of economic systems such as markets and economies of scale.
For example, the supply and demand curves for a good can be represented
as vectors in a vector space.
 Optimization problems: Vector spaces can be used to formulate and solve
optimization problems such as linear programming and quadratic
programming. For example, the linear programming problem of
maximizing profits subject to constraints on resources can be represented
as a vector space optimization problem.

2.5.5 Specific Examples and Case Studies


Here are some specific examples and case studies of how vector spaces are used
in different fields:
 Physics: Vector spaces are used to model the motion of planets around the
sun, the forces acting on a rocket during launch, and the behavior of light
and other electromagnetic waves.
 Engineering: Vector spaces are used to design the control systems for
robots and airplanes, to analyze the behavior of electrical circuits, and to
design bridges and other structures.
 Computer science: Vector spaces are used to store and process images, to
train machine learning models to recognize objects in images, and to
cluster data points into groups.
 Economics: Vector spaces are used to model the behavior of financial
markets, to analyze the impact of government policies on the economy,
and to forecast future economic trends.

Conclusion
Vector spaces are a powerful tool that can be used to solve a wide variety of
problems in many different fields. By understanding the basic properties of
vector spaces, students can develop the skills to apply them to real-world
problems.

2.6 Key Findings from the Literature Review

2.6.1 Historical Development


The concept of vector spaces has its roots in ancient Greek geometry, where
vectors were used to represent geometric quantities such as lines and segments.
However, the formal development of vector spaces began in the 17th century
with the work of mathematicians such as René Descartes and Pierre de Fermat.
Descartes introduced the concept of coordinate geometry, which allowed
vectors to be represented as points in a coordinate space. Fermat developed the
concept of analytic geometry, which allowed vectors to be represented as
ordered pairs of numbers.
The modern axiomatic definition of vector spaces was first introduced in the
19th century by mathematicians such as Hermann Grassmann and Arthur
Cayley. These mathematicians developed a set of axioms that define the basic
properties of vector spaces, such as addition, subtraction, and scalar
multiplication.

2.6.2 Axiomatic Definition


A vector space is a set V over a field F, together with two binary operations,
called vector addition and scalar multiplication, which satisfy the following
axioms:
Vector addition axioms:
 Commutativity: u + v = v + u for all u, v ∈ V.
 Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V.
 Additive identity: There exists an element 0 ∈ V such that u + 0 = u for all
u ∈ V.
 Additive inverse: For every u ∈ V, there exists an element -u ∈ V such that
u + (-u) = 0.
Scalar multiplication axioms:
 Associativity: c(du) = (cd)u for all c, d ∈ F and u ∈ V.
 Distributivity over vector addition: c(u + v) = cu + cv for all c ∈ F and u, v ∈ V.
 Distributivity over scalar addition: (c + d)u = cu + du for all c, d ∈ F and u ∈ V.
 Scalar identity: 1u = u for all u ∈ V.

2.6.3 Fundamental Properties


Some of the fundamental properties of vector spaces include:
Linear independence and dependence: A set of vectors is linearly independent if
no vector in the set can be expressed as a linear combination of the other vectors
in the set. Otherwise, the set of vectors is linearly dependent.
Basis and dimension: A basis for a vector space is a linearly independent set of
vectors that spans the vector space. The dimension of a vector space is the
number of vectors in a basis for the vector space.
Subspaces: A subspace of a vector space is a subset of the vector space that is
closed under vector addition and scalar multiplication.

2.6.4 Practical Applications


Vector spaces have a wide range of practical applications in many different
fields, including:
Physics: Vector spaces are used to represent physical quantities such as forces,
velocities, and accelerations. For example, the forces acting on an object can be
represented as a vector in a vector space.
Engineering: Vector spaces are used to design and analyze structures, to control
systems, and to process signals. For example, vector spaces are used to design
bridges and buildings, to control robots and airplanes, and to compress and
decompress digital media.
Computer science: Vector spaces are used to represent images, to train machine
learning models, and to cluster data. For example, vector spaces are used to
store and process images, to train machine learning models to recognize objects
in images, and to cluster data points into groups.

Economics: Vector spaces are used to model economic systems and
optimization problems. For example, vector spaces are used to model the
behavior of financial markets, to analyze the impact of government policies on
the economy, and to forecast future economic trends.

2.7 Significance of Understanding the Basic Properties of Vector Spaces


Understanding the basic properties of vector spaces is essential for a number of
reasons:
Foundation for advanced mathematical concepts: Vector spaces are a
fundamental concept in many areas of mathematics, including linear algebra,
analysis, and differential geometry. A good understanding of the basic
properties of vector spaces is necessary for studying these advanced
mathematical concepts.
Real-world problem-solving: Vector spaces have a wide range of practical
applications in many different fields. By understanding the basic properties of
vector spaces, students can develop the skills to apply them to real-world
problems.
Conclusion
Vector spaces are a powerful tool that can be used to solve a wide variety of
problems in many different fields. By understanding the basic properties of
vector spaces, students can develop the skills to apply them to real-world
problems and to study more advanced mathematical concepts.

CHAPTER 3
METHODOLOGY

3.1 Introduction
The purpose of Chapter Three is to delineate the methodology employed in the
comprehensive study of the basic properties of vector spaces. This chapter
serves as a guide to the research design, approach, and methods used to achieve
the goals and objectives of the study.

3.2 Key Concepts, Theorems, and Foundational Principles

Concepts:
 Vector: An element of a vector space, often represented by an arrow with
magnitude and direction. (Think of arrows moving in any direction in
space.)

Vector represented by an arrow

 Scalar: A number used to scale a vector. (Think of multiplying the length


of the arrow by a number.)
 Vector addition: Combining two vectors to get a new vector. (Imagine
placing the tail of one arrow at the head of the other and drawing a new
arrow from the first tail to the second head.)

Vector addition

 Scalar multiplication: Multiplying a vector by a scalar to get a new


vector. (Imagine stretching or shrinking the arrow by the scalar amount.)

Scalar multiplication

 Zero vector: The vector that adds nothing to any other vector. (Think of
an arrow with zero length.)

 Inverse vector: The vector that, when added to another vector, results in
the zero vector. (Think of an arrow pointing in the opposite direction with
the same length.)
 Linear independence: A set of vectors is linearly independent if no vector
in the set can be expressed as a linear combination of the others. (Think
of a set of arrows where none can be created by adding or subtracting
multiples of the others.)

 Basis: A minimal set of vectors that spans a vector space. (Think of the
smallest set of arrows that can be combined to reach any point in the
space.)
Key concepts

Basis Definition:
A basis for a vector space V is a linearly independent set of vectors in V that
spans V. Mathematically, if B ={v1,v2,…,vn} is a set of vectors in V, then B is a
basis for V if:
 B is linearly independent.
 The span of B is V.
The concept of a basis is fundamental in the study of vector spaces. In linear
algebra, a basis is a set of vectors that possesses two key properties: linear
independence and spanning. Let's delve into these concepts and understand the
significance of a basis in a vector space.

1. Linear Independence:
A set of vectors is said to be linearly independent if no vector in the set can be
expressed as a linear combination of the others. Mathematically, if v1,v2,…,vn
are vectors in a vector space, they form a linearly independent set if the

equation c1v1 + c2v2 + … + cnvn = 0 has only the trivial solution (c1 = c2 = … = cn = 0).
Linear independence ensures that each vector in the basis contributes uniquely
to the span of the vector space.
2. Spanning:
A set of vectors spans a vector space if every vector in the space can be
expressed as a linear combination of the vectors in the set. In other words, the
linear span of the set is the entire vector space. The combination of linear
independence and spanning ensures that the basis provides a unique and
minimal representation for each vector in the vector space.
3. Coordinate System:
A basis provides a coordinate system for the vector space. Every vector in the
space can be uniquely represented by a set of coordinates, which are the
coefficients in the linear combination of the basis vectors that form the vector.
This coordinate representation is essential for various applications, including
solving systems of linear equations and understanding transformations.
4. Uniqueness of Representation:
The linear independence property ensures that the representation of a vector in
terms of the basis is unique. Each vector in the space can be expressed as a
unique linear combination of the basis vectors.
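Because the representation is unique, the coordinates of a vector with respect to a basis can be computed by solving a single linear system; the sketch below (Python/NumPy, with an illustrative basis of ℝ³) recovers these coordinates and reconstructs the vector from them.

import numpy as np

# The columns of P form a basis of R^3 (they are linearly independent)
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 1.0])

coords = np.linalg.solve(P, v)      # the unique coordinates of v in this basis
print(coords)                       # [0. 2. 1.]
assert np.allclose(P @ coords, v)   # v is reconstructed from its coordinates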

Linear Independence and Dependence

Application of Linear Independence


Basis Construction
 Applying the concept of linear independence to construct bases for vector
spaces.
 Illustrating how linearly independent sets can form a basis, providing a
unique representation of vectors.

Dimension Determination
 Exploring the application of linear independence in determining the
dimension of vector spaces.

 Discussing how the number of linearly independent vectors in a space
relates to its dimension.
Significance of Linear Independence
 Emphasizing the foundational role of linear independence in
characterizing vector spaces.
 Highlighting its significance in establishing the structure, uniqueness, and
dimensionality of vector spaces.
What is the dimension of a vector space?
Here are the key points related to the dimension of a vector space:
1. Definition:
 The dimension of a vector space V, denoted as dim(V), is the
maximum number of linearly independent vectors that can form a
basis for V.
2. Basis and Dimension:
 A basis for a vector space is a linearly independent set of vectors
that spans the entire space. The number of vectors in a basis is
equal to the dimension of the vector space.
3. Notation:
 If B={v1,v2,…,vn} is a basis for V, then dim(V)=n. The dimension is
a non-negative integer or, in the case of infinite-dimensional
spaces, it may be infinite.
4. Examples:
 The dimension of the vector space Rn (Euclidean space) is n.
 The dimension of the space of m×n matrices, denoted as Rm×n, is
mn because matrices can be represented as vectors of dimension
mn.
 The dimension of the space of polynomials of degree at most n is
n+1.
5. Properties:
 The dimension of a vector space is uniquely determined; every
basis for the same vector space has the same number of vectors.

 The dimension is always less than or equal to the total number of
vectors in any spanning set.
6. Finite and Infinite Dimension:
 A vector space is said to be finite-dimensional if it has a finite
basis. If a vector space does not have a finite basis, it is called
infinite-dimensional.
7. Relation to Subspaces:
 The dimension of a subspace is always less than or equal to the
dimension of the ambient space.
8. Linear Independence and Dimension:
 The dimension of a vector space is the maximum number of
linearly independent vectors it can contain.
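The examples in point 4 above can be checked numerically. The sketch below (Python/NumPy) flattens the mn standard basis matrices of the space of m×n matrices into vectors and confirms that they are linearly independent, so the dimension of the matrix space is mn (here m = 2 and n = 3, giving dimension 6).

import numpy as np

m, n = 2, 3
# Standard basis: E_ij has a 1 in position (i, j) and zeros elsewhere
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0
        basis.append(E.flatten())    # identify each m x n matrix with a vector in R^(mn)

M = np.column_stack(basis)
print(np.linalg.matrix_rank(M))      # 6 = m*n, the dimension of the space of m x n matrices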

Properties of vector spaces.


1. Closure: For any vectors u,v in the vector space, their sum u+v is also in
the vector space, and for any scalar a the product av is also in the vector space.
2. Commutativity of Addition: Addition of vectors is commutative:
u+v=v+u for all vectors u,v in the vector space.
3. Associativity of Addition: Addition of vectors is associative: u+
(v+w)=(u+v)+w for all vectors u,v,w in the vector space.
4. Existence of an Identity Element for Addition: There exists a vector 0
(called the zero vector) such that v+0=v for all vectors v in the vector
space
5. Existence of Inverses for Addition: For every vector v, there exists an
additive inverse −v such that v+(−v)=0.
6. Distributivity of Scalar Multiplication over Vector Addition: Scalar
multiplication distributes over vector addition: a(u+v)=au+av for all
scalars a and vectors u,v.
7. Existence of an Identity Element for Scalar Multiplication: There exists a
scalar 1 such that 1v=v for all vectors v.
8. Distributivity of Scalar Multiplication over scalar Addition: Scalar
multiplication distributes over scalar addition: (a+b)v=av+bv for all

scalars a,b and vector v.

Applications
Taking Flight with Vector Spaces
Let us soar into the practical realm, demonstrating how the seemingly abstract
properties of vector spaces can tackle real-world challenges like calculating an
airplane's resultant velocity amidst the whims of the wind. By tackling this
problem, we'll not only showcase the power of linear combinations but also
solidify our understanding of key vector space concepts.

The Scenario: Imagine an airplane cruising at a constant velocity vplane relative


to the air. However, a pesky wind blows against it with a velocity vwind.
Mission: To find the airplane's actual, resultant velocity (vresultant) considering
both its own movement and the wind's influence.
Enter the Stage: Linear Combinations: Fortunately, vector spaces come to the
rescue! We can represent both vplane and vwind as vectors in a 2D plane, with
direction and magnitude captured by their respective arrows. The key insight
lies in recognizing that vresultant is a linear combination of these vectors, with
coefficients chosen to account for the directions in which they point.
Taking Flight with Math: This is where linear combinations come into play. We
can express vresultant as a linear combination of vplane and vwind:
vresultant = c1vplane + c2vwind
Where c1 and c2 are scalars reflecting the scaling factors we apply to each
vector. But wait, what are these values?
The Art of Coefficients: Here's where the beauty of vector spaces shines. Since
the wind opposes the airplane's movement, c2, the wind's coefficient, will be
negative. Additionally, because we want the resultant to reflect the true
direction of motion, c1 and c2 should maintain the angle between vplane and
vresultant. Solving for c1 and c2 using these constraints requires further analysis,
considering specific values for vplane and vwind, but the underlying principle
remains the same.
Putting it all Together: Once we calculate c1 and c2, we can use them to scale
vplane and vwind, and then add the scaled vectors to obtain the final vresultant. This

process demonstrates how linear combinations within a vector space not only
capture the relationships between different velocities but also provide a clear
and rigorous framework for solving the problem.
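As a small numerical illustration (a Python/NumPy sketch with made-up values), suppose the airplane's velocity relative to the air and the wind's velocity are both given directly as vectors in the plane; the resultant is then the linear combination with c1 = c2 = 1, i.e., the vector sum, from which the ground speed and heading follow.

import numpy as np

v_plane = np.array([250.0, 0.0])     # airplane velocity relative to the air (km/h), heading east
v_wind  = np.array([-40.0, 30.0])    # wind velocity (km/h): a headwind with a crosswind component

c1, c2 = 1.0, 1.0                    # coefficients of the linear combination
v_resultant = c1 * v_plane + c2 * v_wind

ground_speed = np.linalg.norm(v_resultant)
heading_deg  = np.degrees(np.arctan2(v_resultant[1], v_resultant[0]))   # angle measured from east
print(v_resultant, ground_speed, heading_deg)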

Conclusion
In conclusion, Chapter Three has outlined the methodology employed in the
investigation of the basic properties of vector spaces. This chapter serves as the
foundation for the subsequent exploration and analysis of key concepts,
theorems, and foundational principles in linear algebra.

Chapter Four: Implementation

This chapter presents the practical implementation of the theoretical concepts
discussed in Chapter Three, emphasizing the role of basis, dimension, linear
independence, and linear combinations in understanding the basic properties of
vector spaces.

4.1 Basis Construction in 3x3 Matrix Space


4.1.1 Selection of Basis Matrices
In this section, we extend our study to a 3x3 matrix space M over a field F. The
careful selection of basis matrices is crucial for building a foundation for further
exploration. We aim to choose 3x3 matrices that meet the criteria of linear
independence and spanning.

Basis Matrices:
Let A1, A2, and A3 be the following 3x3 matrices:

Basis Matrices in Coordinate Form


Represent the chosen basis matrices in terms of their coordinates within a
chosen coordinate system. Define a basis as B = {A1, A2, A3}.

     [ 1 ]        [ 0 ]        [ 0 ]
A1 = [ 0 ]   A2 = [ 1 ]   A3 = [ 0 ]
     [ 0 ]        [ 0 ]        [ 1 ]

Basis in matrix form


Present the basis matrices in their original matrix form, emphasizing the
algebraic or structural nature of the matrices.

     [ 1 0 0 ]
A1 = [ 0 1 0 ]
     [ 0 0 1 ]

     [ 0 1 0 ]
A2 = [ 0 0 1 ]
     [ 1 0 0 ]

     [ 1 1 0 ]
A3 = [ 0 1 1 ]
     [ 0 0 1 ]

4.1.2 Verification of Linear Independence
To show linear independence, consider the linear combination c1A1+c2A2+c3A3
=0, where c1, c2, and c3 are scalars. The only solution to this equation is c1 = c2 =
c3 = 0, confirming linear independence.
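This conclusion can be double-checked numerically by identifying each 3x3 matrix with a vector in ℝ⁹ and computing the rank of the resulting 9x3 matrix; a sketch in Python/NumPy:

import numpy as np

A1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
A2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
A3 = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)

# Identify each 3x3 matrix with a vector in R^9 and stack these vectors as columns
M = np.column_stack([A.flatten() for A in (A1, A2, A3)])

rank = np.linalg.matrix_rank(M)
print(rank)                          # 3: the three matrices are linearly independent
print(rank == M.shape[1])            # True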
4.1.3 Verification of Spanning
To show spanning, take an arbitrary 3x3 matrix B:

    [ b11 b12 b13 ]
B = [ b21 b22 b23 ]
    [ b31 b32 b33 ]

A matrix B can be written as a linear combination of the chosen matrices,
B = c1A1 + c2A2 + c3A3,
only when B lies in their span. Since the space of all 3x3 matrices over F has
dimension 9, three matrices cannot span all of it: A1, A2, and A3 span a
three-dimensional subspace of that space, and in the remainder of this chapter
M denotes this subspace, M = span{A1, A2, A3}.

4.2 Dimension Determination

4.2.1 Exploration of Dimension


Utilize the selected basis matrices to determine the dimension of the matrix
space M. The dimension is the number of matrices in any basis for M, providing
insights into the "size" or "degree of freedom" of the matrix space.
In the chosen basis above, there are three matrices: A1, A2, and A3. Therefore,
the dimension dim(M) of the subspace M = span{A1, A2, A3} is the number of
matrices in the basis, which is 3.
dim(M)=3
So, the dimension of the subspace M spanned by A1, A2, and A3 is 3 (the space of
all 3x3 matrices, by contrast, has dimension 9).

4.2.2 Verification of Dimension

To validate the determined dimension, we re-examine the chosen set and confirm that it is indeed a basis of W; any other basis of W would necessarily contain the same number of matrices, so the dimension is well defined.

To demonstrate that the given set of matrices forms a basis of W (and hence yields the same dimension), we need to show two things:
1. Linear Independence: Prove that the set of matrices is linearly independent.
2. Spanning: Prove that every matrix in W, that is, every linear combination of A1, A2 and A3, is reached by the set, with uniquely determined coefficients.
Let's go through each step:
1. Linear Independence:
Consider the linear combination C1A1+C2A2+C3A3=0, where C1, C2, and C3 are
scalars.
The matrices are:

A1 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]

A2 = [ 0 1 0 ]
     [ 0 0 1 ]
     [ 1 0 0 ]

A3 = [ 1 1 0 ]
     [ 0 1 1 ]
     [ 0 0 1 ]

To show linear independence, we need to verify that the only solution to c1A1 + c2A2 + c3A3 = 0 is c1 = c2 = c3 = 0.

Let's set up and solve the system:

C1 [ 1 0 0 ]  +  C2 [ 0 1 0 ]  +  C3 [ 1 1 0 ]  =  [ 0 0 0 ]
   [ 0 1 0 ]        [ 0 0 1 ]        [ 0 1 1 ]     [ 0 0 0 ]
   [ 0 0 1 ]        [ 1 0 0 ]        [ 0 0 1 ]     [ 0 0 0 ]

Comparing entries on both sides leads to the system of equations:

C1 + C3 = 0   (from the diagonal entries)
C2 + C3 = 0   (from the entries in positions (1,2) and (2,3))
C2 = 0        (from the entry in position (3,1))

From the last equation C2 = 0, the second then gives C3 = 0, and the first gives C1 = 0. Thus the only solution is C1 = C2 = C3 = 0, confirming that the set of matrices is linearly independent.

2. Spanning:
Now let us show that every matrix B in W, i.e. every matrix of the form
B = c1A1 + c2A2 + c3A3,
is reached by the set and that the coefficients are uniquely determined. Writing out the combination entry by entry,

c1 [ 1 0 0 ]  +  c2 [ 0 1 0 ]  +  c3 [ 1 1 0 ]  =  B,
   [ 0 1 0 ]        [ 0 0 1 ]        [ 0 1 1 ]
   [ 0 0 1 ]        [ 1 0 0 ]        [ 0 0 1 ]

the coefficients can be read off from the entries of B: c2 = b31, c3 = b12 − b31 and c1 = b11 − c3. Hence every matrix in W has exactly one representation in terms of A1, A2 and A3, confirming that the set spans W.
Conclusion:
Since we have shown both linear independence and spanning, the given set of matrices {A1, A2, A3} forms a basis for the subspace W of the 3x3 matrix space M, and the dimension of that subspace remains unchanged:
dim(W) = 3
This completes the demonstration that the set of matrices forms a basis with the same dimension.

4.3 Application of Linear Independence

4.3.1 Real-World Scenario: Aerospace Engineering


Let's explore a practical application of linear independence in the field of
aerospace engineering, specifically in the representation of forces acting on an
object in three-dimensional space.
Scenario: Representing Forces on an Aircraft
Consider an aircraft in flight experiencing forces due to wind, thrust, and gravitational pull. The components of the net force acting on the aircraft can be collected into a force vector:

F = [ Fx ]
    [ Fy ]
    [ Fz ]

Now, let's define a set of linearly independent basis matrices B = {A1, A2, A3}
chosen from the 3x3 matrix space:

A1 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]

A2 = [ 0 1 0 ]
     [ 0 0 1 ]
     [ 1 0 0 ]

A3 = [ 1 1 0 ]
     [ 0 1 1 ]
     [ 0 0 1 ]

Linear independence is what makes such a representation non-redundant. In particular, the columns of any one of these matrices (each is invertible) are three linearly independent direction vectors in three-dimensional space, so each force component (in the x, y, and z directions) enters the representation in exactly one way, providing a robust and non-redundant system.

4.3.2 Solving the System for Force Representation
Given a specific force vector F acting on the aircraft, one concrete way to set this up is to take the columns a1, a2, a3 of the invertible basis matrix A3 as direction vectors and express F as the linear combination

F = C1 a1 + C2 a2 + C3 a3,   i.e.   A3 · c = F with c = (C1, C2, C3).

Because the columns are linearly independent, this 3x3 system of equations has exactly one solution, so the coefficients C1, C2, C3 are determined uniquely. This showcases the practical utility of linear independence: it allows engineers to accurately represent and analyze the forces acting on the aircraft without ambiguity.
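A minimal numerical sketch of this setup is given below. The force components are invented values used only for illustration; the point is that np.linalg.solve returns the unique coefficient vector because A3 is invertible.

import numpy as np

A3 = np.array([[1, 1, 0],
               [0, 1, 1],
               [0, 0, 1]], dtype=float)

# Assumed force components in newtons (illustrative values only).
F = np.array([1500.0, -200.0, 9800.0])

# Because the columns of A3 are linearly independent, A3 is invertible and
# the system A3 @ c = F has exactly one solution.
c = np.linalg.solve(A3, F)
print("coefficients C1, C2, C3 =", c)
print("reconstruction matches F:", np.allclose(A3 @ c, F))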

4.3.3 Examples and Solutions

Matrix Representations
Linear independence in the chosen 3x3 matrix space (M) plays a fundamental
role in ensuring the uniqueness of matrix representations. Let's delve into an
illustrative example to demonstrate this concept.
Scenario: Representing Spatial Transformations
Consider a scenario in computer graphics where spatial transformations are
represented using matrices. A transformation matrix T might be defined as:
T = C1A1+C2A2+C3A3

Where:
 A1, A2, and A3 are linearly independent basis matrices.
 C1, C2, C3 are coefficients.
The linear independence of the basis matrices ensures that each transformation T lying in their span corresponds to exactly one coefficient triple (C1, C2, C3), providing a distinct matrix representation. This uniqueness is crucial in computer graphics to avoid ambiguity and ensure precise transformations.

Uniqueness in Solutions

Linear independence also contributes to the uniqueness of solutions in various
mathematical applications. Let's explore an example in the context of a system
of linear equations.
Example: Solving a System of Linear Equations
Consider the system of linear equations:
Ax = B
Where:
 A is an invertible 3x3 matrix, for example any one of the basis matrices A1, A2, A3 above, each of which has nonzero determinant.
 x is a column matrix of variables.
 B is a column matrix of constants.
The invertibility of A (equivalently, the linear independence of its columns) guarantees a unique solution to the system: for a given B, there exists exactly one column matrix x that satisfies the equation.
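This uniqueness can be exhibited symbolically. In the hedged sketch below, A is taken to be the invertible basis matrix A3 (an assumption made for the example; any invertible 3x3 matrix would do), and SymPy returns a single solution valid for every choice of the constants b1, b2, b3.

import sympy as sp

A = sp.Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])   # the invertible basis matrix A3
b1, b2, b3 = sp.symbols('b1 b2 b3')
B = sp.Matrix([b1, b2, b3])

# Because det(A) = 1 is nonzero, LU solving gives the unique symbolic solution,
# valid for every choice of the constants b1, b2, b3.
x = A.LUsolve(B)
print(sp.simplify(x))   # Matrix([[b1 - b2 + b3], [b2 - b3], [b3]])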
Illustrative Example
Let's consider a specific example using the basis matrices A1, A2, and A3 from
the 3x3 matrix space:

A1 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]

A2 = [ 0 1 0 ]
     [ 0 0 1 ]
     [ 1 0 0 ]

A3 = [ 1 1 0 ]
     [ 0 1 1 ]
     [ 0 0 1 ]

This set of matrices is linearly independent and therefore forms a basis for the subspace W of the 3x3 matrix space that it spans. Any matrix B belonging to W can be represented in exactly one way using these basis matrices, as the short computation below illustrates.
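One way to make that demonstration explicit is sketched below, with coefficients invented for the example: a matrix B is built from a known coefficient triple, and the triple is then recovered from the entries of B using the read-off relations noted in Section 4.2.2, confirming that the representation is unique.

import numpy as np

A1 = np.eye(3)
A2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
A3 = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)

# Build a matrix in W from known coefficients.
c1, c2, c3 = 5.0, -2.0, 4.0
B = c1 * A1 + c2 * A2 + c3 * A3

# Recover the coefficients from the entries of B:
#   c2 = B[2,0],  c3 = B[0,1] - c2,  c1 = B[0,0] - c3
c2_rec = B[2, 0]
c3_rec = B[0, 1] - c2_rec
c1_rec = B[0, 0] - c3_rec
print(c1_rec, c2_rec, c3_rec)   # 5.0 -2.0 4.0 -- the original triple, uniquely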

4.4 Linear Combinations

4.4.1 Linear Combination Analysis
Provide specific examples of linear combinations and their implications. For
instance:
 Linear Combination 1: T = A1 represents a specific transformation.
 Linear Combination 2: T = A2 represents a different transformation.
 Linear Combination 3: T = C1A1 + C2A2 showcases the combined effect of two basis matrices; a short numerical sketch of this case follows the list.
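The following small Python sketch illustrates the third combination with assumed coefficients C1 = 2 and C2 = 0.5, and shows its effect on a sample point.

import numpy as np

A1 = np.eye(3)
A2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)

# Combined effect of two basis matrices with illustrative coefficients.
C1, C2 = 2.0, 0.5
T = C1 * A1 + C2 * A2

# Applying T to a sample point shows how the combination acts geometrically:
# a scaling (from C1 * A1) superposed with a scaled cyclic permutation
# of the coordinates (from C2 * A2).
p = np.array([1.0, 2.0, 3.0])
print("T =\n", T)
print("T @ p =", T @ p)   # [2*1 + 0.5*2, 2*2 + 0.5*3, 2*3 + 0.5*1]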

Implications for the 3x3 Matrix Space


Insights into Matrix Operations
By analyzing specific linear combinations, we gain insights into matrix
operations within the 3x3 matrix space. The combination of basis matrices
influences the outcome of operations, providing a nuanced understanding of the
space's properties.
Geometric Interpretations
Explore geometric interpretations of specific linear combinations. Consider how
different combinations affect the geometry of transformations, providing a
visual representation of the matrix space's behavior.

4.4.2 Spanning Properties


This section investigates the influence of spanning on matrix operations within
the chosen 3x3 matrix space (M). By understanding how the linear
combinations of basis matrices contribute to various operations, we gain
insights into the properties of the space.
Basis Matrices and Matrix Operations
Recall the set of linearly independent basis matrices B = {A1, A2, A3} from the
3x3 matrix space:

A1 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]

A2 = [ 0 1 0 ]
     [ 0 0 1 ]
     [ 1 0 0 ]

A3 = [ 1 1 0 ]
     [ 0 1 1 ]
     [ 0 0 1 ]

These matrices are linearly independent, and they therefore form a basis for the three-dimensional subspace of the 3x3 matrix space that they span.
Linear Combinations and Matrix Operations
Explore how linear combinations of basis matrices contribute to fundamental
matrix operations:
Matrix Addition
Consider the linear combination C = c1A1 + c2A2 + c3A3. Discuss how adding
matrices using this linear combination yields insights into the summation of
basis matrices and the resulting matrix's properties.

C = c1 [ 1 0 0 ]  +  c2 [ 0 1 0 ]  +  c3 [ 1 1 0 ]
       [ 0 1 0 ]        [ 0 0 1 ]        [ 0 1 1 ]
       [ 0 0 1 ]        [ 1 0 0 ]        [ 0 0 1 ]

Scalar Multiplication
Explore the impact of scalar multiplication on the basis matrices. Discuss how
scaling the basis matrices affects the resulting matrix and the implications for
the entire space.

D = kA1, where k is a scalar
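Both operations can be illustrated with a few lines of Python (the scalars below are arbitrary choices for the example): the results C and D are again linear combinations of A1, A2 and A3, which previews the closure properties discussed next.

import numpy as np

A1 = np.eye(3)
A2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
A3 = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)

# Matrix addition via a linear combination (illustrative coefficients).
c1, c2, c3 = 1.0, 2.0, -1.0
C = c1 * A1 + c2 * A2 + c3 * A3
print("C =\n", C)

# Scalar multiplication of a basis matrix.
k = 3.0
D = k * A1
print("D =\n", D)

# Both C and D are again linear combinations of A1, A2, A3, illustrating
# closure of the subspace under addition and scalar multiplication.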

Insights into Space Properties


Discuss the insights gained from exploring matrix operations within the space:
 Closure: How the linear combinations of basis matrices ensure closure
under addition and scalar multiplication.

 Associativity and Commutativity: Examine how these properties
manifest in the linear combinations of basis matrices.
 Identity and Inverse: Explore whether special linear combinations act as identity elements (the zero matrix, obtained with c1 = c2 = c3 = 0, is the additive identity, and A1 itself is the multiplicative identity matrix) or possess inverse elements within the space.

Chapter Five: Summary and Conclusion

5.1 Summary
In this project, we embarked on a thorough investigation into the fundamental
properties of vector spaces, focusing specifically on the intriguing domain of
3x3 matrix space. The exploration unfolded across several chapters, each
contributing to our understanding of the basic principles that govern vector
spaces. Let's recap the key concepts discussed:

5.1.1 Key Concepts

Definition and Properties of Vector Spaces:


We delved into the foundational aspects of vector spaces, understanding the
essential characteristics that define these mathematical structures. The axioms
of vector spaces provided a framework for our subsequent analyses.

Linear Independence and Bases:


The concept of linear independence was explored, highlighting its significance
in forming a basis for vector spaces. Bases, as sets of linearly independent
vectors, lay the groundwork for expressing any vector in the space through
unique combinations.

Dimension and Types of Vector Spaces:


The dimension of a vector space emerged as a crucial parameter, representing the number of vectors in any basis, equivalently the minimum number of vectors needed to span the space. We also touched upon
various types of vector spaces, recognizing their diverse applications and
structures.

5.1.2 Methodology and Implementation

Identification of Basis Matrices in 3x3 Matrix Space:

In Chapter Three, we systematically identified a set of basis matrices B = {A1, A2, A3} for the 3x3 matrix space. Each matrix served as a unique direction
within the space.

Linear Combinations and Spanning:


Chapter Four focused on the concept of spanning, showcasing how linear combinations of the basis matrices covered the subspace they generate within the 3x3 matrix space. We explored how these linear combinations allowed us to reach any point of that subspace.

Influence on Matrix Operations:


Our investigation into spanning extended to matrix operations, revealing
insights into how linear combinations impacted fundamental operations such as
addition and scalar multiplication. This exploration provided a deeper
understanding of the space's properties.

5.2 Conclusion
This project has been a captivating journey into the heart of vector spaces,
specifically within the realm of 3x3 matrices. As we conclude our study, several
key takeaways emerge:

5.2.1 Comprehensive Understanding of Vector Spaces


Fundamental Properties Emphasized:
Our study underscored the importance of fundamental properties such as linear
independence, bases, and dimension in characterizing vector spaces. These
properties serve as the cornerstone for further exploration and application.
Versatility and Uniqueness of Basis Matrices:
The chosen basis matrices B = {A1, A2, A3} demonstrated their versatility in spanning a three-dimensional subspace of the 3x3 matrix space. The linear independence of these matrices ensured the uniqueness of their contributions to that subspace.

5.2.2 Practical Implications and Applications


Real-World Relevance:
The project went beyond theoretical abstraction, showcasing the practical
implications of vector spaces. From linear combinations representing points in
the space to their influence on matrix operations, the study revealed the real-
world relevance of these mathematical structures.
Foundation for Further Exploration:
The concepts explored in this project provide a solid foundation for future
research in advanced linear algebra topics. The understanding gained serves as a
launching pad for more intricate analyses involving eigenvalues, eigenvectors,
and advanced transformations.

5.3 Future Directions

As we bring this project to a close, it's essential to identify avenues for future
exploration:
Advanced Topics in Linear Algebra:
Future research could delve into more advanced topics such as eigendecomposition, singular value decomposition, and applications in fields like quantum mechanics, computer graphics, and machine learning.
Extension to Higher Dimensions:
While we focused on the 3x3 matrix space, extending the study to higher
dimensions could unveil additional complexities and insights into the behavior
of vector spaces.

5.4 Final Reflection

In conclusion, our study on the basic properties of vector spaces has been a
captivating and intellectually enriching endeavor. The concepts explored in this
project form the bedrock of linear algebra, a field with profound implications
across various disciplines. From the theoretical underpinnings to the practical
applications, this project aimed to provide a holistic and accessible exploration
of vector spaces.

Appendix:

A.1 Additional Matrices Explored


In the course of the study, several additional matrices were examined to
enhance our understanding of vector spaces. These matrices provide
supplementary insights into the properties and behaviors within the vector
space.
A.1.1 Matrix A4

A4 = [ 2 -1  3 ]
     [ 1  0  2 ]
     [ 0  1 -1 ]

Matrix A4 was introduced to illustrate a more complex scenario, investigating the impact of non-trivial coefficients on linear combinations and interactions within the vector space.
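As a numerical aside (not part of the original discussion), one can check whether A4 actually lies in the subspace spanned by A1, A2 and A3. It does not: every matrix in that subspace has a zero entry in position (1, 3), whereas A4 has a 3 there. The illustrative sketch below confirms this with a least-squares test.

import numpy as np

A1 = np.eye(3)
A2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
A3 = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
A4 = np.array([[2, -1, 3], [1, 0, 2], [0, 1, -1]], dtype=float)

basis = np.column_stack([A.flatten() for A in (A1, A2, A3)])
coeffs, *_ = np.linalg.lstsq(basis, A4.flatten(), rcond=None)

# The best approximation from the subspace does not reproduce A4 exactly,
# so A4 is not a linear combination of A1, A2 and A3.
print(np.allclose(basis @ coeffs, A4.flatten()))   # False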
A.1.2 Matrix A5

A5 = [ 0 0 0 ]
     [ 0 0 0 ]
     [ 0 0 0 ]

The zero matrix A5 was included to explore its role in linear combinations,
emphasizing its significance as the additive identity within the vector space.
A.2 Visual Representations
Visualizations were created to supplement the theoretical exploration and
provide a more intuitive understanding of vector space properties. The figures
below depict selected scenarios involving linear combinations of matrices.
A.2.1 3D Plot: Linear Combinations
This 3D plot visually represents linear combinations of matrices within the
vector space. Different coefficients result in distinct points within the space,
showcasing the versatility and richness of vector space properties.

