
Course No: CSM 2229

Course Title: Linear Algebra and Complex Variables

Jaionto Karmokar
Assistant Professor
Dept. of Computer Science and Mathematics
Bangladesh Agricultural University

Linear Algebra

➢ Introduction to Systems of Equations

➢ Geometric Interpretation

➢ Augmented Matrices

Introduction to Systems of Equations

Linear Equations:
• Any straight line in the xy-plane can be represented algebraically by an
  equation of the form:
        a1 x + a2 y = b
• General form: a linear equation in the n variables x1, x2, ..., xn is an equation
  of the form
        a1 x1 + a2 x2 + ... + an xn = b

  – where a1, a2, ..., an, and b are real constants.

  – The variables in a linear equation are sometimes called unknowns.

Example:
• The equations x1 − 2x2 − 3x3 + x4 = 7, x + 3y = 7, and y = (1/2)x + 3z + 1 are linear.
• The equations x + 3y² = 5, 3x + 2y − z + xz = 4, and y = sin x are not linear.

• A solution of a linear equation is a sequence of n numbers s1, s2, ..., sn such
  that the equation is satisfied when we substitute x1 = s1, x2 = s2, ..., xn = sn.
  The set of all solutions of the equation is called its solution set or the general
  solution of the equation.
Linear Systems

• A finite set of linear equations in the variables x1, x2, ..., xn is called a system
  of linear equations or a linear system.

• A sequence of numbers s1, s2, ..., sn is called a solution of the system if
  x1 = s1, x2 = s2, ..., xn = sn is a solution of every equation in the system.

• A system that has no solution is said to be inconsistent; if there is at least one
  solution of the system, it is called consistent.

        a11 x1 + a12 x2 + ... + a1n xn = b1
        a21 x1 + a22 x2 + ... + a2n xn = b2
           ⋮          ⋮                ⋮        ⋮
        am1 x1 + am2 x2 + ... + amn xn = bm

  An arbitrary system of m linear equations in n unknowns
Geometric Interpretation. Existence and Uniqueness of Solutions

If we interpret x1, x2 as coordinates in the x1x2-plane, then each of the two
equations represents a straight line, and (x1, x2) is a solution if and only if the
point P with coordinates x1, x2 lies on both lines. Hence there are three
possible cases:
(a) Precisely one solution if the lines intersect
(b) Infinitely many solutions if the lines coincide
(c) No solution if the lines are parallel
• Every system of linear equations has either no solutions,
exactly one solution, or infinitely many solutions.

• A general system of two linear equations: (Figure1.1.1)


a1 x + b1 y = c1 (a1 , b1 not both zero)
a2 x + b2 y = c2 (a2 , b2 not both zero)

– Two lines may be parallel => no solution

– Two lines may intersect at only one point => one solution
– Two lines may coincide => infinitely many solutions
Linear Systems in Two Unknowns

Geometric interpretation of three possible cases

Linear Systems in Three Unknowns
Augmented Matrices

◼ A system of m linear equations in n unknowns can be abbreviated by writing
  only the rectangular array of numbers, keeping in mind the locations of the
  +'s, the x's, and the ='s.
◼ This array is called the augmented matrix for the system.
◼ Note: the unknowns must be written in the same order in each equation, and
  the constants must be on the right.

        a11 x1 + a12 x2 + ... + a1n xn = b1
        a21 x1 + a22 x2 + ... + a2n xn = b2
           ⋮          ⋮                ⋮        ⋮
        am1 x1 + am2 x2 + ... + amn xn = bm

  The augmented matrix (the 1st row corresponds to the 1st equation, the 1st
  column to the unknown x1):

        [ a11  a12  ...  a1n  b1 ]
        [ a21  a22  ...  a2n  b2 ]
        [  ⋮    ⋮          ⋮    ⋮ ]
        [ am1  am2  ...  amn  bm ]
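In code, an augmented matrix is simply the coefficient matrix with the column of constants appended on the right. A minimal sketch (not part of the lecture), assuming Python with NumPy, using the small system that is solved on the following slides:

```python
import numpy as np

# Coefficients and constants of: x + y + 2z = 9, 2x + 4y - 3z = 1, 3x + 6y - 5z = 0
A = np.array([[1.0, 1.0, 2.0],
              [2.0, 4.0, -3.0],
              [3.0, 6.0, -5.0]])
b = np.array([9.0, 1.0, 0.0])

augmented = np.hstack([A, b.reshape(-1, 1)])   # append b as the last column
print(augmented)
```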
Linear Algebra

➢ Types of Problems

➢ Elementary Row Operations

➢ Gaussian/Gauss Elimination

➢ Echelon Forms

➢ Solutions of Four Linear Systems

Types of Problems

Type-I:
  • Elementary Row Operations

Type-II: Gaussian/Gauss Elimination
  • Gauss Elimination (Unique Solution)
  • Gauss Elimination (Infinitely Many Solutions)
  • Gauss Elimination (No Solution)

Type-III: Gauss–Jordan Elimination
  • Inverse of a Matrix
Elementary Row Operations

• The basic method for solving a system of linear equations is to replace the
  given system by a new system that has the same solution set but is
  easier to solve.

• Since the rows of an augmented matrix correspond to the equations in the
  associated system, the new system is generally obtained in a series of steps by
  applying the following three types of operations to eliminate unknowns
  systematically. These are called elementary row operations.
  1. Multiply an equation through by a nonzero constant.
  2. Interchange two equations.
  3. Add a multiple of one equation to another.

  In terms of the rows of the augmented matrix:
  1. Multiply a row through by a nonzero constant.
  2. Interchange two rows.
  3. Add a constant times one row to another.
Example: Elementary row Operations of a matrix
𝑥 + 𝑦 + 2𝑧 = 9
2𝑥 + 4𝑦 − 3𝑧 = 1
3𝑥 + 6𝑦 − 5𝑧 = 0

Solution: The augmented matrix of the given system is

    [ 1  1   2    9 ]
    [ 2  4  −3    1 ]
    [ 3  6  −5    0 ]

Add −2 times the first row to the second:

    [ 1  1   2    9 ]
    [ 0  2  −7  −17 ]
    [ 3  6  −5    0 ]

Add −3 times the first row to the third:

    [ 1  1   2    9 ]
    [ 0  2  −7  −17 ]
    [ 0  3 −11  −27 ]

Multiply the second row by 1/2:

    [ 1  1    2      9   ]
    [ 0  1  −7/2  −17/2 ]
    [ 0  3  −11   −27   ]

Add −3 times the second row to the third:

    [ 1  1    2      9   ]
    [ 0  1  −7/2  −17/2 ]
    [ 0  0  −1/2   −3/2 ]

Multiply the third row by −2:

    [ 1  1    2      9   ]
    [ 0  1  −7/2  −17/2 ]
    [ 0  0    1      3   ]

Add −1 times the second row to the first:

    [ 1  0   11/2   35/2 ]
    [ 0  1  −7/2  −17/2 ]
    [ 0  0    1      3   ]

Add −11/2 times the third row to the first and 7/2 times the third row to the second:

    [ 1  0  0  1 ]
    [ 0  1  0  2 ]
    [ 0  0  1  3 ]

◼ The solution x = 1, y = 2, z = 3 is now evident.
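As a quick numerical cross-check of the row-operation example above, the same system can be solved directly. A minimal sketch (not from the lecture), assuming NumPy is available:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [2.0, 4.0, -3.0],
              [3.0, 6.0, -5.0]])
b = np.array([9.0, 1.0, 0.0])

x = np.linalg.solve(A, b)   # valid because A is square and invertible
print(x)                    # [1. 2. 3.]  ->  x = 1, y = 2, z = 3
```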


Echelon Forms

• A matrix that has all of the following properties is in reduced row-echelon
  form (Examples 1, 2):
  1. If a row does not consist entirely of zeros, then the first nonzero
     number in the row is a 1. We call this a leading 1.
  2. If there are any rows that consist entirely of zeros, then they are
     grouped together at the bottom of the matrix.
  3. In any two successive rows that do not consist entirely of zeros, the
     leading 1 in the lower row occurs farther to the right than the leading 1
     in the higher row.
  4. Each column that contains a leading 1 has zeros everywhere else.

• A matrix that has the first three properties is said to be in row-echelon
  form (Examples 1, 2).
• A matrix in reduced row-echelon form is of necessity in row-echelon
  form, but not conversely.
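Reduced row-echelon form can also be computed mechanically. A small sketch (not from the lecture) using SymPy's Matrix.rref() on the augmented matrix of the earlier example:

```python
from sympy import Matrix

M = Matrix([[1, 1, 2, 9],
            [2, 4, -3, 1],
            [3, 6, -5, 0]])

rref_M, pivot_cols = M.rref()   # (RREF matrix, tuple of pivot column indices)
print(rref_M)                   # Matrix([[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3]])
print(pivot_cols)               # (0, 1, 2)
```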
Example 1
Row-Echelon & Reduced Row-Echelon form

❖ Row-echelon form:

    [ 1  4  −3  7 ]     [ 1  1  0 ]     [ 0  1  2   6  0 ]
    [ 0  1   6  2 ],    [ 0  1  0 ],    [ 0  0  1  −1  0 ]
    [ 0  0   1  5 ]     [ 0  0  0 ]     [ 0  0  0   0  1 ]

❖ Reduced row-echelon form:

    [ 1  0  0   4 ]     [ 1  0  0 ]     [ 0  1  −2  0  1 ]
    [ 0  1  0   7 ],    [ 0  1  0 ],    [ 0  0   0  1  3 ],    [ 0  0 ]
    [ 0  0  1  −1 ]     [ 0  0  1 ]     [ 0  0   0  0  0 ]     [ 0  0 ]
                                        [ 0  0   0  0  0 ]
Example 2
More on Row-Echelon and Reduced Row-Echelon form

❖ All matrices of the following types are in row-echelon form (any real numbers
   may be substituted for the *'s):

    [ 1 * * * ]    [ 1 * * * ]    [ 1 * * * ]    [ 0 1 * * * * * * * * ]
    [ 0 1 * * ]    [ 0 1 * * ]    [ 0 1 * * ]    [ 0 0 0 1 * * * * * * ]
    [ 0 0 1 * ],   [ 0 0 1 * ],   [ 0 0 0 0 ],   [ 0 0 0 0 1 * * * * * ]
    [ 0 0 0 1 ]    [ 0 0 0 0 ]    [ 0 0 0 0 ]    [ 0 0 0 0 0 1 * * * * ]
                                                 [ 0 0 0 0 0 0 0 0 1 * ]

❖ All matrices of the following types are in reduced row-echelon form (any real
   numbers may be substituted for the *'s):

    [ 1 0 0 0 ]    [ 1 0 0 * ]    [ 1 0 * * ]    [ 0 1 * 0 0 0 * * 0 * ]
    [ 0 1 0 0 ]    [ 0 1 0 * ]    [ 0 1 * * ]    [ 0 0 0 1 0 0 * * 0 * ]
    [ 0 0 1 0 ],   [ 0 0 1 * ],   [ 0 0 0 0 ],   [ 0 0 0 0 1 0 * * 0 * ]
    [ 0 0 0 1 ]    [ 0 0 0 0 ]    [ 0 0 0 0 ]    [ 0 0 0 0 0 1 * * 0 * ]
                                                 [ 0 0 0 0 0 0 0 0 1 * ]
Solutions of Four (a, b, c, d) Linear Systems
Suppose that the augmented matrix for a system of linear equations has been
reduced by row operations to the given reduced row-echelon form. Solve the
system.

(a)  [ 1  0  0   5 ]
     [ 0  1  0  −2 ]
     [ 0  0  1   4 ]

     The corresponding system of equations is:  x = 5,  y = −2,  z = 4

(b)  [ 1  0  0  4  −1 ]
     [ 0  1  0  2   6 ]
     [ 0  0  1  3   2 ]

  1. The corresponding system of equations is

         x1             + 4x4 = −1
              x2        + 2x4 = 6
                   x3   + 3x4 = 2

     where x1, x2, x3 are the leading variables and x4 is a free variable.
2. We see that the free variable can be assigned an arbitrary value, say t,
which then determines values of the leading variables.
𝑥1 = −1 − 4𝑥4
𝑥2 = 6 −2𝑥4
𝑥3 = 2−3𝑥4

3. There are infinitely many solutions, and the general solution is given
by the formulas
𝑥1 = −1 − 4𝑡,
𝑥2 = 6 − 2𝑡,
𝑥3 = 2 − 3𝑡,
𝑥4 = 𝑡

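The one-parameter general solution found in case (b) can be checked numerically: every value of t must satisfy the reduced system. A minimal sketch (not from the lecture) using NumPy:

```python
import numpy as np

# Reduced system of case (b), written as [A | b]
A = np.array([[1.0, 0.0, 0.0, 4.0],
              [0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 3.0]])
b = np.array([-1.0, 6.0, 2.0])

def general_solution(t):
    # x1 = -1 - 4t, x2 = 6 - 2t, x3 = 2 - 3t, x4 = t
    return np.array([-1 - 4*t, 6 - 2*t, 2 - 3*t, t], dtype=float)

for t in (0.0, 1.0, -2.5):
    assert np.allclose(A @ general_solution(t), b)
print("every tested value of t gives a valid solution")
```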
(c)  [ 1  6  0  0  4  −2 ]
     [ 0  0  1  0  3   1 ]
     [ 0  0  0  1  5   2 ]
     [ 0  0  0  0  0   0 ]

Solution (c):
1. The 4th row of zeros leads to the equation 0x1 + 0x2 + 0x3 + 0x4 + 0x5 = 0,
   which places no restrictions on the solutions (why?). Thus, we can omit this
   equation. The remaining equations are

𝑥1 + 6𝑥2 + 4𝑥5 = −2
𝑥3 + 3𝑥5 = 1
𝑥4 + 5𝑥5 = 2

2. Solving for the leading variables in terms of the free variables:


𝑥1 = −2 −6𝑥2 −4𝑥5
𝑥3 = 1 −3𝑥5
𝑥4 = 2−5𝑥5

3. The free variables x2 and x5 can be assigned arbitrary values, say s and t;
   there are infinitely many solutions, and the general solution is given by the formulas.
𝑥1 = −2 −6𝑠−4𝑡 ,
𝑥2 = 𝑠
𝑥3 = 1 −3𝑡
𝑥4 = 2−5𝑡,
𝑥5 = 𝑡

(d)  [ 1  0  0  0 ]
     [ 0  1  2  0 ]
     [ 0  0  0  1 ]

Solution (d): The last equation in the corresponding system of equations is


0𝑥1 + 0𝑥2 + 0𝑥3 = 1

Since this equation cannot be satisfied, there is no solution to the system.

Linear Algebra

➢ Elimination Methods (step-by-step)

➢ Exercise: Gauss Elimination

Elimination Methods (step-by-step)
We shall give a step-by-step elimination procedure that can be used to reduce
any matrix to reduced row-echelon form.

• For example:

    [ 0  0   −2  0    7   12 ]
    [ 2  4  −10  6   12   28 ]
    [ 2  4   −5  6   −5   −1 ]

➢ Step 1. Locate the leftmost column that does not consist entirely of zeros.

    [ 0  0   −2  0    7   12 ]
    [ 2  4  −10  6   12   28 ]      (the leftmost nonzero column is the 1st column)
    [ 2  4   −5  6   −5   −1 ]

➢ Step 2. Interchange the top row with another row, if necessary, to bring a
  nonzero entry to the top of the column found in Step 1.

    [ 2  4  −10  6   12   28 ]      The 1st and 2nd rows of the preceding
    [ 0  0   −2  0    7   12 ]      matrix were interchanged.
    [ 2  4   −5  6   −5   −1 ]
➢ Step 3. If the entry that is now at the top of the column found in Step 1 is a,
  multiply the first row by 1/a in order to introduce a leading 1.

    [ 1  2   −5  3    6   14 ]      The 1st row of the preceding matrix
    [ 0  0   −2  0    7   12 ]      was multiplied by 1/2.
    [ 2  4   −5  6   −5   −1 ]

➢ Step 4. Add suitable multiples of the top row to the rows below so that all
  entries below the leading 1 become zeros.

    [ 1  2  −5  3    6    14 ]      −2 times the 1st row of the preceding
    [ 0  0  −2  0    7    12 ]      matrix was added to the 3rd row.
    [ 0  0   5  0  −17   −29 ]

➢ Step 5. Now cover the top row in the matrix and begin again with Step 1
  applied to the sub-matrix that remains. Continue in this way until the entire
  matrix is in row-echelon form.

    [ 1  2  −5  3    6    14 ]
    [ 0  0  −2  0    7    12 ]      Leftmost nonzero column in the
    [ 0  0   5  0  −17   −29 ]      sub-matrix is the 3rd column.
➢ Step 5 (cont.)

    [ 1  2  −5  3    6    14  ]     The 1st row in the sub-matrix was
    [ 0  0   1  0  −7/2   −6  ]     multiplied by −1/2 to introduce a
    [ 0  0   5  0  −17   −29  ]     leading 1.

    [ 1  2  −5  3    6    14  ]     −5 times the 1st row of the sub-matrix
    [ 0  0   1  0  −7/2   −6  ]     was added to the 2nd row of the sub-
    [ 0  0   0  0   1/2    1  ]     matrix to introduce a zero below the
                                    leading 1.

    [ 1  2  −5  3    6    14  ]     The top row in the sub-matrix was
    [ 0  0   1  0  −7/2   −6  ]     covered, and we returned again to Step 1.
    [ 0  0   0  0   1/2    1  ]     Leftmost nonzero column in the new
                                    sub-matrix is the 5th column.

    [ 1  2  −5  3    6    14  ]     The first (and only) row in the new sub-
    [ 0  0   1  0  −7/2   −6  ]     matrix was multiplied by 2 to introduce
    [ 0  0   0  0    1     2  ]     a leading 1.

◼ The entire matrix is now in row-echelon form.
➢ Step 6. Beginning with the last nonzero row and working upward, add suitable
  multiples of each row to the rows above to introduce zeros above the leading 1's.

    [ 1  2  −5  3   6   14 ]     7/2 times the 3rd row of the preceding
    [ 0  0   1  0   0    1 ]     matrix was added to the 2nd row.
    [ 0  0   0  0   1    2 ]

    [ 1  2  −5  3   0    2 ]     −6 times the 3rd row was added to
    [ 0  0   1  0   0    1 ]     the 1st row.
    [ 0  0   0  0   1    2 ]

    [ 1  2   0  3   0    7 ]     5 times the 2nd row was added to
    [ 0  0   1  0   0    1 ]     the 1st row.
    [ 0  0   0  0   1    2 ]

◼ The last matrix is in reduced row-echelon form.

Steps 1–5: the above procedure produces a row-echelon form and is called
Gaussian elimination.
Steps 1–6: the above procedure produces a reduced row-echelon form and is
called Gauss–Jordan elimination.
Every matrix has a unique reduced row-echelon form, but a row-echelon form
of a given matrix is not unique.
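Steps 1–6 translate almost directly into code. The following is a minimal sketch (not from the lecture) of Gauss–Jordan elimination in Python/NumPy, applied to the example matrix above; the function name and structure are my own.

```python
import numpy as np

def gauss_jordan(M, tol=1e-12):
    """Reduce M to reduced row-echelon form (Steps 1-6 above), in floating point."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):                          # Step 1: scan columns left to right
        candidates = np.where(np.abs(A[pivot_row:, col]) > tol)[0]
        if candidates.size == 0:
            continue                                 # no nonzero entry at or below the pivot row
        r = pivot_row + candidates[0]
        A[[pivot_row, r]] = A[[r, pivot_row]]        # Step 2: interchange rows
        A[pivot_row] /= A[pivot_row, col]            # Step 3: scale to create a leading 1
        for i in range(rows):                        # Steps 4 and 6: zeros below and above
            if i != pivot_row:
                A[i] -= A[i, col] * A[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

M = np.array([[0, 0,  -2, 0,  7, 12],
              [2, 4, -10, 6, 12, 28],
              [2, 4,  -5, 6, -5, -1]])
print(gauss_jordan(M))   # expected: [[1, 2, 0, 3, 0, 7], [0, 0, 1, 0, 0, 1], [0, 0, 0, 0, 1, 2]]
```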
Exercise: Gauss Elimination

Problems: Solve the following linear systems by Gaussian elimination.


x + y + 2z = 9
2 x + 4 y − 3z = 1
3x + 6 y − 5 z = 0
Solution:
– We convert the augmented matrix

    [ 1  1   2   9 ]
    [ 2  4  −3   1 ]
    [ 3  6  −5   0 ]

– to the row-echelon form

    [ 1  1    2      9   ]
    [ 0  1  −7/2  −17/2 ]
    [ 0  0    1      3   ]

– The system corresponding to this matrix is

    x + y + 2z = 9,   y − (7/2)z = −17/2,   z = 3
– Solving for the leading variables:
    x = 9 − y − 2z,
    y = −17/2 + (7/2)z,
    z = 3
– Substituting the bottom equation into those above:
    x = 3 − y,
    y = 2,
    z = 3
– Substituting the 2nd equation into the top:
    x = 1,  y = 2,  z = 3
Linear Algebra

➢ Gauss Elimination (Unique Solution)

➢ Gauss Elimination (Infinitely Many Solutions)

➢ Gauss Elimination (no Solution)

Type-II (A): Gauss Elimination (Unique Solution)

Solve the linear system Find unknown currents from electrical network

This is the system for the unknown currents in the electrical network (in figure). To
obtain it, we label the currents as shown, choosing directions arbitrarily; if a current
will come out negative, this will simply mean that the current flows against the
direction of our arrow. The current entering each battery will be the same as the
current leaving it. The equations for the currents result from Kirchhoff’s laws:

Kirchhoff’s Current Law (KCL): At any point of a circuit, the sum of the inflowing
currents equals the sum of the outflowing currents.

Kirchhoff’s Voltage Law (KVL): In any closed loop, the sum of all voltage drops
equals the impressed
electromotive force.
Node P gives the first equation, node Q the second, the right loop the third, and
the left loop the fourth, as indicated in the figure.

Fig: Network and equations relating the currents


Solution: (Unique Solution)
This system could be solved rather quickly by noticing its particular form. But
this is not the point. The point is that the Gauss elimination is systematic and will
work in general, also for large systems. We apply it to our system and then do
back substitution. As indicated, let us write the augmented matrix of the system
first and then the system itself:

Type-II (B): Gauss Elimination (Infinitely Many Solutions)

Solve the following linear system of three equations in four unknowns whose
augmented matrix is

Solution: As in the previous example, we circle pivots and box terms of equations and
corresponding entries to be eliminated. We indicate the operations in terms of equations
and operate on both equations and matrices.

Step 2. Elimination of x2 from the third equation by adding 1.1/1.1=1 times the second
equation to the third equation. This gives

Type-II (C): Gauss Elimination (no Solution)

Exercise

In Problems 7–8, using Kirchhoff's laws (see Type-I) and showing the details, find the
currents:

(7) (8)

Linear Algebra

➢ Gauss-Jordan Elimination

➢ Homogeneous Linear Systems

Type-IV: Gauss–Jordan Elimination

Exercise

Problems: Find the inverse by Gauss–Jordan

Problems: Solve the system of linear equations by Gauss–Jordan elimination.

    x1 + 3x2 − 2x3        + 2x5         = 0
    2x1 + 6x2 − 5x3 − 2x4 + 4x5 − 3x6   = −1
                5x3 + 10x4       + 15x6 = 5
    2x1 + 6x2        + 8x4 + 4x5 + 18x6 = 6

Solution:
The augmented matrix for the system is

    [ 1  3  −2   0  2    0   0 ]
    [ 2  6  −5  −2  4   −3  −1 ]
    [ 0  0   5  10  0   15   5 ]
    [ 2  6   0   8  4   18   6 ]

Adding −2 times the 1st row to the 2nd and 4th rows gives

    [ 1  3  −2   0  2    0   0 ]
    [ 0  0  −1  −2  0   −3  −1 ]
    [ 0  0   5  10  0   15   5 ]
    [ 0  0   4   8  0   18   6 ]

Multiplying the 2nd row by −1 and then adding −5 times the new 2nd row to the
3rd row and −4 times the new 2nd row to the 4th row gives

    [ 1  3  −2  0  2  0  0 ]
    [ 0  0   1  2  0  3  1 ]
    [ 0  0   0  0  0  0  0 ]
    [ 0  0   0  0  0  6  2 ]

Interchanging the 3rd and 4th rows and then multiplying the 3rd row of the
resulting matrix by 1/6 gives the row-echelon form

    [ 1  3  −2  0  2  0   0  ]
    [ 0  0   1  2  0  3   1  ]
    [ 0  0   0  0  0  1  1/3 ]
    [ 0  0   0  0  0  0   0  ]

Adding −3 times the 3rd row to the 2nd row and then adding 2 times the 2nd
row of the resulting matrix to the 1st row yields the reduced row-echelon form

    [ 1  3  0  4  2  0   0  ]
    [ 0  0  1  2  0  0   0  ]
    [ 0  0  0  0  0  1  1/3 ]
    [ 0  0  0  0  0  0   0  ]

The corresponding system of equations is

    x1 + 3x2 + 4x4 + 2x5 = 0
    x3 + 2x4 = 0
    x6 = 1/3

The solution for the system is

    x1 = −3x2 − 4x4 − 2x5
    x3 = −2x4
    x6 = 1/3

We assign the free variables x2 = r, x4 = s, x5 = t, and the general solution is

    x1 = −3r − 4s − 2t
    x2 = r
    x3 = −2s
    x4 = s
    x5 = t
    x6 = 1/3
Homogeneous Linear Systems

A system of linear equations is said to be homogeneous if the constant terms
are all zero:

    a11 x1 + a12 x2 + ... + a1n xn = 0
    a21 x1 + a22 x2 + ... + a2n xn = 0
       ⋮          ⋮               ⋮       ⋮
    am1 x1 + am2 x2 + ... + amn xn = 0

Every homogeneous system of linear equations is consistent, since all such
systems have x1 = 0, x2 = 0, ..., xn = 0 as a solution [the trivial solution].
Other solutions are called nontrivial solutions.

Example: [Gauss-Jordan Elimination]

2 x1 + 2 x2 − x3 + x5 = 0
− x1 − x2 + 2 x3 − 3x4 + x5 = 0
x1 + x2 − 2 x3 − x5 = 0
x3 + x4 + x5 = 0
Solution: The augmented matrix for the system is

    [  2   2  −1   0   1  0 ]
    [ −1  −1   2  −3   1  0 ]
    [  1   1  −2   0  −1  0 ]
    [  0   0   1   1   1  0 ]

which reduces (solved in the same way as before) to the reduced row-echelon form

    [ 1  1  0  0  1  0 ]
    [ 0  0  1  0  1  0 ]
    [ 0  0  0  1  0  0 ]
    [ 0  0  0  0  0  0 ]

The corresponding system of equations is

    x1 + x2 + x5 = 0
    x3 + x5 = 0
    x4 = 0

Solving for the leading variables yields

    x1 = −x2 − x5
    x3 = −x5
    x4 = 0

The general solution is x1 = −s − t, x2 = s, x3 = −t, x4 = 0, x5 = t.
The trivial solution is obtained when s = t = 0.
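For a homogeneous system the solution set is the null space of the coefficient matrix, so it can also be obtained directly. A small sketch (not from the lecture) using SymPy on the example above:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous example (right-hand sides are all zero)
A = Matrix([[ 2,  2, -1,  0,  1],
            [-1, -1,  2, -3,  1],
            [ 1,  1, -2,  0, -1],
            [ 0,  0,  1,  1,  1]])

# Basis vectors of the null space; every solution is a combination s*v1 + t*v2
for v in A.nullspace():
    print(v.T)
# Expected (up to ordering/scaling): [-1, 1, 0, 0, 0] and [-1, 0, -1, 0, 1],
# i.e. x1 = -s - t, x2 = s, x3 = -t, x4 = 0, x5 = t
```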
Linear Algebra

Optimization

Mathematical Programming Problem

A mathematical programming problem is a special class of decision problem where we
are concerned with the efficient use of limited resources to meet desired objectives.
Mathematically the problem can be stated as,

Linear Programming:

Before proceeding to discuss the theory of linear programming, we shall
present a few examples to illustrate the applications of linear programming.

Graphical Solution
Draw the graph
To illustrate some basic features of linear programming, let us consider some
problems involving only two variables which permit graphical solutions.
Example 1.

Example 2.

Exercise

Solve the following problems graphically.

The general linear programming problem is to find values of a set of variables
X1, X2, ... Xn which optimizes (maximizes or minimizes) a linear function

General Form

Example: Refinery input and output schematic.
Solution:

Let x1 = crude #1 (bbl/day)


x2 = crude #2 (bbl/day)
Maximize profit (minimize cost):
    y = income – raw mat'l cost – processing cost

Calculate amounts of each product produced (yield matrix):
    gasoline   x3 = 0.80 x1 + 0.44 x2
    kerosene   x4 = 0.05 x1 + 0.10 x2
    fuel oil   x5 = 0.10 x1 + 0.36 x2
    residual   x6 = 0.05 x1 + 0.10 x2

Income:
    gasoline  (36)(0.80 x1 + 0.44 x2)
    kerosene  (24)(0.05 x1 + 0.10 x2)
    fuel oil  (21)(0.10 x1 + 0.36 x2)
    residual  (10)(0.05 x1 + 0.10 x2)

So, Income = 32.6 x1 + 26.8 x2
Raw mat'l cost = 24 x1 + 15 x2
Processing cost = 0.5 x1 + x2

Then, the objective function is
    Profit = f = 8.1 x1 + 10.8 x2

Constraints (maximum allowable production):
    0.80 x1 + 0.44 x2 ≤ 24,000   (gasoline)
    0.05 x1 + 0.10 x2 ≤ 2,000    (kerosene)
    0.10 x1 + 0.36 x2 ≤ 6,000    (fuel oil)
    x1 ≥ 0,  x2 ≥ 0
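The resulting linear program can also be solved numerically. A minimal sketch (not part of the lecture) using SciPy's linprog, which minimizes by convention, so the profit coefficients are negated; the data come from the constraints above.

```python
from scipy.optimize import linprog

# Maximize 8.1*x1 + 10.8*x2  ->  minimize the negated objective
c = [-8.1, -10.8]
A_ub = [[0.80, 0.44],    # gasoline capacity
        [0.05, 0.10],    # kerosene capacity
        [0.10, 0.36]]    # fuel oil capacity
b_ub = [24000, 2000, 6000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)        # optimal crude rates x1, x2 (bbl/day)
print(-res.fun)     # maximum profit per day
```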
Solution:

[Same solve]
Linear Algebra

Linear Transformations

• Introduction to Linear Transformations


• Matrices for Linear Transformations

Introduction to Linear Transformations

A function T (a linear transformation) maps a vector space V into a vector
space W:

    T: V → W,    V, W: vector spaces
    V: the domain of T,    W: the codomain of T

◼ Image of v under T:
  If v is in V and w is in W such that T(v) = w, then w is called the image of
  v under T.
◼ The range of T: the set of all images of vectors in V.

◼ The preimage of w: the set of all v in V such that T(v) = w.
❖ Example 1: A function from R2 into R2 such that 𝑇: 𝑅2 → 𝑅2
𝐯 = (𝑣1 , 𝑣2 ) ∈ 𝑅2
𝑇(𝑣1 , 𝑣2 ) = (𝑣1 − 𝑣2 , 𝑣1 + 2𝑣2 )

(a) Find the image of v = (-1,2). (b) Find the preimage of w = (-1,11)

Solution: (𝑎) 𝐯 = (−1, 2)


⇒ 𝑇(𝐯) = 𝑇(−1, 2) = (−1 − 2, −1 + 2(2)) = (−3, 3)

(𝑏) 𝑇(𝐯) = 𝐰 = (−1, 11)


𝑇(𝑣1 , 𝑣2 ) = (𝑣1 − 𝑣2 , 𝑣1 + 2𝑣2 ) = (−1, 11)

⇒ 𝑣1 − 𝑣2 = −1
𝑣1 + 2𝑣2 = 11

⇒ 𝑣1 = 3, 𝑣2 = 4

Thus {(3, 4)} is the preimage of w=(-1, 11).


❖ Linear Transformation (L.T.):
𝑉, 𝑊:vector space
𝑇: 𝑉 → 𝑊:𝑉 to 𝑊 linear transformation
(1) 𝑇(𝐮 + 𝐯) = 𝑇(𝐮) + 𝑇(𝐯), ∀𝐮, 𝐯 ∈ 𝑉

(2) 𝑇(𝑐𝐮) = 𝑐𝑇(𝐮), ∀𝑐 ∈ 𝑅


Notes:
(1) A linear transformation is said to be operation preserving.
(2) A linear transformation 𝑇: 𝑉 → 𝑉 from a vector space into itself is called a
linear operator.

    T(u + v) = T(u) + T(v)        (addition in V on the left, addition in W on the right)
    T(cu) = cT(u)                 (scalar multiplication in V on the left, in W on the right)
❖ Example 2: Verifying a linear transformation T from R2 into R2
𝑇(𝑣1 , 𝑣2 ) = (𝑣1 − 𝑣2 , 𝑣1 + 2𝑣2 )

Proof: Suppose
𝐮 = (𝑢1 , 𝑢2 ), 𝐯 = (𝑣1 , 𝑣2 ) : vector in 𝑅2 , 𝑐: any real number

(1)Vector addition:
𝐮 + 𝐯 = (𝑢1 , 𝑢2 ) + (𝑣1 , 𝑣2 ) = (𝑢1 + 𝑣1 , 𝑢2 + 𝑣2 )

𝑇(𝐮 + 𝐯) = 𝑇(𝑢1 + 𝑣1 , 𝑢2 + 𝑣2 )
= ((𝑢1 + 𝑣1 ) − (𝑢2 + 𝑣2 ), (𝑢1 + 𝑣1 ) + 2(𝑢2 + 𝑣2 ))
= ((𝑢1 − 𝑢2 ) + (𝑣1 − 𝑣2 ), (𝑢1 + 2𝑢2 ) + (𝑣1 + 2𝑣2 ))
= (𝑢1 − 𝑢2 , 𝑢1 + 2𝑢2 ) + (𝑣1 − 𝑣2 , 𝑣1 + 2𝑣2 )
= 𝑇(𝐮) + 𝑇(𝐯)

(2) Scalar multiplication


𝑐𝐮 = 𝑐(𝑢1 , 𝑢2 ) = (𝑐𝑢1 , 𝑐𝑢2 )

𝑇(𝑐𝐮) = 𝑇(𝑐𝑢1 , 𝑐𝑢2 )


= (𝑐𝑢1 − 𝑐𝑢2 , 𝑐𝑢1 + 2𝑐𝑢2 )
= 𝑐(𝑢1 − 𝑢2 , 𝑢1 + 2𝑢2 )
= 𝑐𝑇(𝐮) Therefore, T is a linear transformation.
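A quick numerical sanity check of the two defining properties for this particular T; a small sketch (not from the lecture) using NumPy with a few random sample vectors and scalars:

```python
import numpy as np

def T(v):
    # T(v1, v2) = (v1 - v2, v1 + 2*v2)
    return np.array([v[0] - v[1], v[0] + 2*v[1]])

rng = np.random.default_rng(0)
for _ in range(5):
    u, v = rng.normal(size=2), rng.normal(size=2)
    c = rng.normal()
    assert np.allclose(T(u + v), T(u) + T(v))   # additivity
    assert np.allclose(T(c * u), c * T(u))      # homogeneity
print("both linearity properties hold on the samples")
```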
❖ Example 3: Functions that are not linear transformations

(a) f(x) = sin x
    sin(x1 + x2) ≠ sin(x1) + sin(x2)
    e.g. sin(π/2 + π/3) ≠ sin(π/2) + sin(π/3)  ⇐  f(x) = sin x is not a linear transformation

(b) f(x) = x²
    (x1 + x2)² ≠ x1² + x2²
    e.g. (1 + 2)² ≠ 1² + 2²  ⇐  f(x) = x² is not a linear transformation

(c) f(x) = x + 1
    f(x1 + x2) = x1 + x2 + 1
    f(x1) + f(x2) = (x1 + 1) + (x2 + 1) = x1 + x2 + 2
    f(x1 + x2) ≠ f(x1) + f(x2)  ⇐  f(x) = x + 1 is not a linear transformation
◼ Zero transformation:
𝑇: 𝑉 → 𝑊 𝑇(𝐯) = 0, ∀𝐯 ∈ 𝑉

◼ Identity transformation:
𝑇: 𝑉 → 𝑉 𝑇(𝐯) = 𝐯, ∀𝐯 ∈ 𝑉

◼ Properties of linear transformations: If 𝑇: 𝑉 → 𝑊, where 𝐮, 𝐯 ∈ 𝑉 then

(1) 𝑇(𝟎) = 𝟎

(2) 𝑇(−𝐯) = −𝑇(𝐯)

(3) 𝑇(𝐮 − 𝐯) = 𝑇(𝐮) − 𝑇(𝐯)

(4) If 𝐯 = 𝑐1 𝑣1 + 𝑐2 𝑣2 + ⋯ + 𝑐𝑛 𝑣𝑛
Then 𝑇(𝐯) = 𝑇(𝑐1 𝑣1 + 𝑐2 𝑣2 + ⋯ + 𝑐𝑛 𝑣𝑛 )
= 𝑐1 𝑇(𝑣1 ) + 𝑐2 𝑇(𝑣2 ) + ⋯ + 𝑐𝑛 𝑇(𝑣𝑛 )

❖ Example 4: Linear transformations and bases.

Let 𝑇: 𝑅3 → 𝑅3 be a linear transformation such that


𝑇(1, 0, 0) = (2, −1, 4)
𝑇(0, 1, 0) = (1, 5, −2)
𝑇(0, 0, 1) = (0, 3, 1)
Find T (2, 3, -2).

Solution: Suppose T is a Linear transformations (L.T.)

(2, 3, −2) = 2(1, 0, 0) + 3(0, 1, 0) − 2(0, 0, 1)

𝑇(2, 3, −2) = 2𝑇(1, 0, 0) + 3𝑇(0, 1, 0) − 2𝑇(0, 0, 1)

= 2(2, −1, 4) + 3(1, 5, −2) − 2(0, 3, 1)

= (7, 7, 0)

Matrices for Linear Transformations

Let A be an m×n matrix. The function T defined by T(v) = Av maps Rn into Rm,
and this T is a linear transformation: T(v) = Av, where T: Rn → Rm
(v is an Rn vector, Av is an Rm vector).

         [ a11  a12  ...  a1n ] [ v1 ]   [ a11 v1 + a12 v2 + ... + a1n vn ]
    Av = [ a21  a22  ...  a2n ] [ v2 ] = [ a21 v1 + a22 v2 + ... + a2n vn ]
         [  ⋮    ⋮          ⋮ ] [  ⋮ ]   [               ⋮                ]
         [ am1  am2  ...  amn ] [ vn ]   [ am1 v1 + am2 v2 + ... + amn vn ]

◼ Two representations of the linear transformation T: R3 → R3:

    (1)  T(x1, x2, x3) = (2x1 + x2 − x3, −x1 + 3x2 − 2x3, 3x2 + 4x3)

    (2)  T(x) = Ax = [  2  1  −1 ] [ x1 ]
                     [ −1  3  −2 ] [ x2 ]
                     [  0  3   4 ] [ x3 ]
◼ Standard matrix for a linear transformation

Let T: Rn → Rm be a linear transformation such that

    T(e1) = [ a11 ]     T(e2) = [ a12 ]     ...     T(en) = [ a1n ]
            [ a21 ],            [ a22 ],                    [ a2n ]
            [  ⋮  ]             [  ⋮  ]                     [  ⋮  ]
            [ am1 ]             [ am2 ]                     [ amn ]

Then the m×n matrix whose n columns are T(e1), ..., T(en),

    A = [ T(e1)  T(e2)  ...  T(en) ] = [ a11  a12  ...  a1n ]
                                       [ a21  a22  ...  a2n ]
                                       [  ⋮    ⋮          ⋮ ]
                                       [ am1  am2  ...  amn ]

satisfies T(v) = Av for every v in Rn. Then A is called the standard matrix for T.
◼ Example 1: Finding the standard matrix of a linear transformation.
  Find the standard matrix for the L.T. T: R3 → R2 defined by
      T(x, y, z) = (x − 2y, 2x + y)

Solution:
    Vector notation                       Matrix notation

    T(e1) = T(1, 0, 0) = (1, 2)           T(e1) = T([1, 0, 0]ᵀ) = [ 1 ]
                                                                  [ 2 ]

    T(e2) = T(0, 1, 0) = (−2, 1)          T(e2) = T([0, 1, 0]ᵀ) = [ −2 ]
                                                                  [  1 ]

    T(e3) = T(0, 0, 1) = (0, 0)           T(e3) = T([0, 0, 1]ᵀ) = [ 0 ]
                                                                  [ 0 ]
A = T (e1 ) T (e2 ) T (e3 )
1 − 2 0
=
2 1 0

◼ Check:
 x  x
  1 − 2 0   x − 2 y
A y =  y =
   2 1 0    2 x + y 
z z
i.e. T ( x, y, z ) = ( x − 2 y,2 x + y)

◼ Note:

1 − 2 0  1x − 2 y + 0 z
A=
2 1 0  2 x + 1y + 0 z

13
78
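The standard matrix can be assembled mechanically by applying T to the standard basis vectors and stacking the images as columns; a small sketch (not from the lecture) in NumPy for the T of Example 1:

```python
import numpy as np

def T(v):
    # T(x, y, z) = (x - 2y, 2x + y)
    x, y, z = v
    return np.array([x - 2*y, 2*x + y])

# Columns of the standard matrix are the images of e1, e2, e3
A = np.column_stack([T(e) for e in np.eye(3)])
print(A)                          # [[ 1. -2.  0.], [ 2.  1.  0.]]

v = np.array([3.0, -1.0, 2.0])
assert np.allclose(A @ v, T(v))   # A v reproduces T(v) for any v
```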
◼ Example 2: The standard matrix of a composition.
  Let T1 and T2 be linear transformations from R3 into R3 such that
      T1(x, y, z) = (2x + y, 0, x + z)  and  T2(x, y, z) = (x − y, z, y).
  Find the standard matrices for the compositions T = T2 ∘ T1 and T' = T1 ∘ T2.

Solution: Suppose

    A1 = [ 2  1  0 ]  (standard matrix for T1)      A2 = [ 1  −1  0 ]  (standard matrix for T2)
         [ 0  0  0 ]                                     [ 0   0  1 ]
         [ 1  0  1 ]                                     [ 0   1  0 ]

The standard matrix for T = T2 ∘ T1 is

    A = A2 A1 = [ 1  −1  0 ] [ 2  1  0 ]   [ 2  1  0 ]
                [ 0   0  1 ] [ 0  0  0 ] = [ 1  0  1 ]
                [ 0   1  0 ] [ 1  0  1 ]   [ 0  0  0 ]

The standard matrix for T' = T1 ∘ T2 is

    A' = A1 A2 = [ 2  1  0 ] [ 1  −1  0 ]   [ 2  −2  1 ]
                 [ 0  0  0 ] [ 0   0  1 ] = [ 0   0  0 ]
                 [ 1  0  1 ] [ 0   1  0 ]   [ 1   0  0 ]
Linear Algebra

• Vector spaces and Subspaces


• Linear Combinations of Vectors
• Orthogonality
• Linear Dependence and Independence
• Basis and dimension

Vector Space

Vector space means a space of vectors where we are allowed to do two


operations such as Vector Addition & Scalar Multiplication.
(i) Vector Addition: Let α, β be two vectors; then α + β is again a vector in the same space.

    (If we add two vectors, we get a vector belonging to the same space.)

(ii) Scalar Multiplication: Let α be a vector and k be a scalar; then kα is again a vector in the space.

    (If we multiply a vector by a scalar, say a real number, we still get a vector.)
Example 1

Solution:

Example 2
Let u = ( –1, 4, 3) and v = ( –2, –3, 1) be elements of R3.
Find u + v and 3u.
Solution: u + v = (–1, 4, 3) + (–2, –3, 1) = (–3, 1, 4)
          3u = 3(–1, 4, 3) = (–3, 12, 9)
Example 3

In R2 , consider the two elements (4, 1)


and (2, 3).
Find their sum and give a geometrical
interpretation of this sum.
we get (4, 1) + (2, 3) = (6, 4).
The vector (6, 4), the sum, is the diagonal
of the parallelogram.
Fig.1

Subspace

Subspace:
Let V be a vector space and U be a nonempty subset of V.
U is said to be a subspace of V if it is closed under addition and scalar multiplication.

Fig.2 One and two-dimensional subspaces of R3

Example 4
Let U be the subset of R3 consisting of all vectors of the form (a, 0, 0),
i.e., U = {(a, 0, 0) ∈ R3}. Show that U is a subspace of R3.

Solution:

Let (a, 0, 0), (b, 0, 0) ∈ U, and let k ∈ R.

We get
    (a, 0, 0) + (b, 0, 0) = (a + b, 0, 0) ∈ U
    k(a, 0, 0) = (ka, 0, 0) ∈ U
The sum and scalar product are in U.
Thus U is a subspace of R3.
Example 5
Let V be the set of vectors of R3 of the form (a, a², b), namely
V = {(a, a², b) ∈ R3}. Show that V is not a subspace of R3.

Solution:

Let (a, a², b), (c, c², d) ∈ V.

(a, a², b) + (c, c², d) = (a + c, a² + c², b + d)
                        ≠ (a + c, (a + c)², b + d)  in general,
since a² + c² ≠ (a + c)² in general.
Thus (a, a², b) + (c, c², d) ∉ V.
V is not closed under addition, so V is not a subspace.
Linear Combinations of Vectors

Definition
Let v1, v2, …, vm be vectors in a vector space V. We say that v, a
vector of V, is a linear combination of v1, v2, …, vm, if there exist
scalars c1, c2, …, cm such that v can be written
    v = c1v1 + c2v2 + … + cmvm.

Example:
The vector (5, 4, 2) is a linear combination of the vectors (1, 2, 0), (3, 1, 4), and
(1, 0, 3), since it can be written
    (5, 4, 2) = (1, 2, 0) + 2(3, 1, 4) – 2(1, 0, 3)
Example 6
Determine whether or not the vector (-1, 1, 5) is a linear combination of the
vectors (1, 2, 3), (0, 1, 4), and (2, 3, 6).

Solution

Suppose c1(1, 2, 3) + c2(0, 1, 4) + c3(2, 3, 6) = (−1, 1, 5)

(c1, 2c1, 3c1) + (0, c2, 4c2) + (2c3, 3c3, 6c3) = (−1, 1, 5)

(c1 + 2c3, 2c1 + c2 + 3c3, 3c1 + 4c2 + 6c3) = (−1, 1, 5)

    c1           + 2c3 = −1
    2c1 + c2 + 3c3 = 1        ⟹  c1 = 1, c2 = 2, c3 = −1
    3c1 + 4c2 + 6c3 = 5
                                  [Gauss elimination]

Thus (−1, 1, 5) is a linear combination of (1, 2, 3), (0, 1, 4), and (2, 3, 6), where

    (−1, 1, 5) = (1, 2, 3) + 2(0, 1, 4) − 1(2, 3, 6).
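Finding the coefficients of a linear combination is itself a linear system, with the candidate vectors as columns; a small sketch (not from the lecture) for Example 6 using NumPy:

```python
import numpy as np

# Columns are the vectors (1,2,3), (0,1,4), (2,3,6); right-hand side is (-1,1,5)
V = np.array([[1, 0, 2],
              [2, 1, 3],
              [3, 4, 6]], dtype=float)
target = np.array([-1.0, 1.0, 5.0])

c = np.linalg.solve(V, target)
print(c)                                   # expected: [ 1.  2. -1.]
assert np.allclose(V @ c, target)          # (-1,1,5) = 1*(1,2,3) + 2*(0,1,4) - 1*(2,3,6)
```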
Example 7
Express the vector (4, 5, 5) as a linear combination of the vectors (1, 2, 3), (-1, 1, 4),
and (3, 3, 2).

Solution

Suppose c1(1, 2, 3) + c2(−1, 1, 4) + c3(3, 3, 2) = (4, 5, 5)

(c1, 2c1, 3c1) + (−c2, c2, 4c2) + (3c3, 3c3, 2c3) = (4, 5, 5)

(c1 − c2 + 3c3, 2c1 + c2 + 3c3, 3c1 + 4c2 + 2c3) = (4, 5, 5)

    c1 − c2 + 3c3 = 4
    2c1 + c2 + 3c3 = 5        ⟹  c1 = −2r + 3, c2 = r − 1, c3 = r
    3c1 + 4c2 + 2c3 = 5
                                  [Gauss elimination]

Thus (4, 5, 5) can be expressed in many ways as a linear combination of (1, 2, 3),
(−1, 1, 4), and (3, 3, 2):

    (4, 5, 5) = (−2r + 3)(1, 2, 3) + (r − 1)(−1, 1, 4) + r(3, 3, 2)
Example 8
Show that the vector (3, -4, -6) cannot be expressed as a linear combination of the
vectors (1, 2, 3), (-1, -1, -2), and (1, 4, 5).

Solution

Suppose c1(1, 2, 3) + c2(−1, −1, −2) + c3(1, 4, 5) = (3, −4, −6)

    c1 − c2 + c3 = 3
    2c1 − c2 + 4c3 = −4
    3c1 − 2c2 + 5c3 = −6
                                  [Gauss elimination]

This system has no solution.

Thus (3, −4, −6) is not a linear combination of the vectors (1, 2, 3), (−1, −1, −2), and
(1, 4, 5).
Example 9
Determine whether the matrix [ −1  7 ] is a linear combination of the matrices
                             [  8 −1 ]
[ 1  0 ],   [ 2 −3 ],   and   [ 0  1 ]   in the vector space M22 of 2 × 2 matrices.
[ 2  1 ]    [ 0  2 ]          [ 2  0 ]

Solution
Suppose   c1 [ 1  0 ] + c2 [ 2 −3 ] + c3 [ 0  1 ] = [ −1  7 ]
             [ 2  1 ]      [ 0  2 ]      [ 2  0 ]   [  8 −1 ]

Then      [ c1 + 2c2      −3c2 + c3 ]   [ −1  7 ]
          [ 2c1 + 2c3     c1 + 2c2  ] = [  8 −1 ]

    c1 + 2c2          = −1
         −3c2 + c3    = 7
    2c1         + 2c3 = 8
    c1 + 2c2          = −1
                                  [Gauss elimination]
This system has the unique solution c1 = 3, c2 = −2, c3 = 1.
Therefore
    [ −1  7 ] = 3 [ 1  0 ] − 2 [ 2 −3 ] + [ 0  1 ]
    [  8 −1 ]     [ 2  1 ]     [ 0  2 ]   [ 2  0 ]
Example 10
Determine whether the function f(x) = x² + 10x − 7 is a linear combination
of the functions g(x) = x² + 3x − 1 and h(x) = 2x² − x + 4.

Solution
Suppose c1 g + c2 h = f.
Then  c1(x² + 3x − 1) + c2(2x² − x + 4) = x² + 10x − 7
      (c1 + 2c2)x² + (3c1 − c2)x − c1 + 4c2 = x² + 10x − 7

    c1 + 2c2 = 1
    3c1 − c2 = 10
    −c1 + 4c2 = −7

⟹  c1 = 3, c2 = −1  ⟹  f = 3g − h.
Orthogonal Vectors

Two nonzero vectors are orthogonal if the angle between them is a right angle.

Example 11
Show that the following pairs of vectors are orthogonal.
(a) (1, 0) and (0, 1).
(b) (2, –3, 1) and (1, 2, 4).

Solution  (a) (1, 0)·(0, 1) = (1 × 0) + (0 × 1) = 0.
              The vectors are orthogonal.
          (b) (2, –3, 1)·(1, 2, 4) = (2 × 1) + (–3 × 2) + (1 × 4) = 2 – 6 + 4 = 0.
              The vectors are orthogonal.

Theorem 1

Two nonzero vectors u and v are orthogonal if and only if u·v = 0.

Proof

u, v are orthogonal  ⟺  cos θ = 0  ⟺  u·v = 0
Norm of a Vector in Rn

Definition
The norm (length or magnitude) of a vector u = (u1, …, un) in Rn
is denoted ||u|| and defined by

    ||u|| = √( (u1)² + ⋯ + (un)² )

Note:
The norm of a vector can also be written in terms of the dot product:

    ||u|| = √(u·u)

Fig.3 Length of u
Example 12

Find the norm of each of the vectors u = (1, 3, 5) of R3 and v = (3, 0, 1, 4) of R4.

Solution

    ||u|| = √(1² + 3² + 5²) = √(1 + 9 + 25) = √35

    ||v|| = √(3² + 0² + 1² + 4²) = √(9 + 0 + 1 + 16) = √26

Definition
A unit vector is a vector whose norm is 1.
If v is a nonzero vector, then the vector

    u = (1 / ||v||) v

is a unit vector in the direction of v.

This procedure of constructing a unit vector in the same direction
as a given vector is called normalizing the vector.
Example 13
(a) Show that the vector (1, 0) is a unit vector.
(b) Find the norm of the vector (2, –1, 3). Normalize this vector.

Solution

(a) ||(1, 0)|| = √(1² + 0²) = 1. Thus (1, 0) is a unit vector. It can be
    similarly shown that (0, 1) is a unit vector in R2.

(b) ||(2, −1, 3)|| = √(2² + (−1)² + 3²) = √14. The norm of (2, –1, 3) is √14.
    The normalized vector is

        (1/√14)(2, −1, 3)

    The vector may also be written (2/√14, −1/√14, 3/√14).
    This vector is a unit vector in the direction of (2, –1, 3).
Angle between Vectors ( in R2)

The law of cosines gives:

← Fig.4

Angle between Vectors (in Rn)

Definition
Let u and v be two nonzero vectors in Rn.
The cosine of the angle θ between these vectors is

    cos θ = (u·v) / (||u|| ||v||),    0 ≤ θ ≤ π

Example 14
Determine the angle between the vectors u = (1, 0, 0) and
v = (1, 0, 1) in R3.

Solution   u·v = (1, 0, 0)·(1, 0, 1) = 1

    ||u|| = √(1² + 0² + 0²) = 1        ||v|| = √(1² + 0² + 1²) = √2

Thus  cos θ = (u·v) / (||u|| ||v||) = 1/√2,  so the angle between u and v is 45°.
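Norms, normalization, and angles are one-liners in NumPy; a small sketch (not from the lecture) reproducing Examples 13 and 14:

```python
import numpy as np

v = np.array([2.0, -1.0, 3.0])
norm_v = np.linalg.norm(v)          # sqrt(14)
unit_v = v / norm_v                 # normalized vector, norm 1
print(norm_v, np.linalg.norm(unit_v))

u, w = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0])
cos_theta = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
print(np.degrees(np.arccos(cos_theta)))   # 45.0
```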
Spanning Sets:
The vectors v1, v2, …, vm are said to span a vector space if every vector in the
space can be expressed as a linear combination of these vectors.
In this case {v1, v2, …, vm} is called a spanning set.

Example 15 Show that the vectors (1, 2, 0), (0, 1, -1), and (1, 1, 2) span R3.

Solution   Let (x, y, z) be an arbitrary element of R3.

Suppose (x, y, z) = c1(1, 2, 0) + c2(0, 1, −1) + c3(1, 1, 2)

⟹ (x, y, z) = (c1 + c3, 2c1 + c2 + c3, −c2 + 2c3)

    c1           + c3 = x             c1 = 3x − y − z
    2c1 + c2 + c3 = y       ⟹        c2 = −4x + 2y + z
          −c2 + 2c3 = z               c3 = −2x + y + z

⟹ (x, y, z) = (3x − y − z)(1, 2, 0) + (−4x + 2y + z)(0, 1, −1) + (−2x + y + z)(1, 1, 2)

⟹ The vectors (1, 2, 0), (0, 1, −1), and (1, 1, 2) span R3.
Linear Dependence and Independence
Definition
(a) The set of vectors {v1, …, vm} in a vector space V is said to be linearly
dependent if there exist scalars c1, …, cm, not all zero, such that
c1v1+ … + cmvm= 0
(b) The set of vectors { v1, …, vm } is linearly independent if c1v1+ … + cmvm= 0
can only be satisfied when c1= 0,…, cm= 0.

{v1, v2} linearly dependent: the vectors lie on a line.
{v1, v2} linearly independent: the vectors do not lie on a line.

Fig. Linear dependence and independence of {v1, v2} in R2.
Linear Dependence of {v1, v2, v3}

{v1, v2, v3} linearly dependent: the vectors lie in a plane.
{v1, v2, v3} linearly independent: the vectors do not lie in a plane.

Fig. Linear dependence and independence of {v1, v2, v3} in R3.
Example 16
Show that the set {(1, 2, 3), (-2, 1, 1), (8, 6, 10)} is linearly dependent in R3.

Solution
Suppose c1(1, 2, 3) + c2(−2, 1, 1) + c3(8, 6, 10) = 0

⟹ (c1, 2c1, 3c1) + (−2c2, c2, c2) + (8c3, 6c3, 10c3) = 0

(c1 − 2c2 + 8c3, 2c1 + c2 + 6c3, 3c1 + c2 + 10c3) = 0

    c1 − 2c2 + 8c3 = 0
    2c1 + c2 + 6c3 = 0        ⟹  c1 = 4, c2 = −2, c3 = −1  (one nontrivial solution)
    3c1 + c2 + 10c3 = 0
                                  [Gauss elimination]

Thus 4(1, 2, 3) − 2(−2, 1, 1) − (8, 6, 10) = 0

The set of vectors is linearly dependent.
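Linear dependence of a set of vectors can be detected by comparing the rank of the matrix whose columns are the vectors with the number of vectors; a small sketch (not from the lecture) for Examples 16 and 17 using NumPy:

```python
import numpy as np

def is_dependent(vectors):
    # Stack the vectors as columns; they are dependent iff rank < number of vectors
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) < len(vectors)

print(is_dependent([(1, 2, 3), (-2, 1, 1), (8, 6, 10)]))   # True  (Example 16)
print(is_dependent([(3, -2, 2), (3, -1, 4), (1, 0, 5)]))   # False (Example 17)
```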
Example 17
Show that the set {(3, -2, 2), (3, -1, 4), (1, 0, 5)} is linearly independent in R3.

Solution
Suppose c1(3, −2, 2) + c2(3, −1, 4) + c3(1, 0, 5) = 0

⟹ (3c1, −2c1, 2c1) + (3c2, −c2, 4c2) + (c3, 0, 5c3) = 0

(3c1 + 3c2 + c3, −2c1 − c2, 2c1 + 4c2 + 5c3) = 0

    3c1 + 3c2 + c3 = 0
    −2c1 − c2         = 0
    2c1 + 4c2 + 5c3 = 0
                                  [Gauss elimination]

This system has the unique solution c1 = 0, c2 = 0, and c3 = 0.

Thus the set is linearly independent.
Example 18
Consider the functions f(x) = x2 + 1, g(x) = 3x – 1, h(x) = – 4x + 1 of
the vector space P2 of polynomials of degree  2.
Show that the set of functions { f, g, h } is linearly independent.

Solution
Suppose c1 f + c2 g + c3 h = 0.
Then, for any real number x,  c1(x² + 1) + c2(3x − 1) + c3(−4x + 1) = 0.
Consider three convenient values of x. We get
    x = 0 :   c1 − c2 + c3 = 0
    x = 1 :   2c1 + 2c2 − 3c3 = 0
    x = −1:   2c1 − 4c2 + 5c3 = 0
It can be shown that this system of three equations has the unique
solution c1 = 0, c2 = 0, c3 = 0.

Thus c1 f + c2 g + c3 h = 0 implies that c1 = 0, c2 = 0, c3 = 0.

The set { f, g, h } is linearly independent.
Span and Linearly independent

Linearly independent:
The set { f, g, h } is linearly independent if c1f + c2g + c3h = 0
implies that c1 = 0, c2 = 0, c3 = 0.

Span:

[Change w by x, since w not exist in region R]

Basis and Dimension

Basis:
Let V be a vector space and S = {v1, …, vn} a set of vectors in V. If
  i) S is a linearly independent set, and
  ii) S spans V,
then S is a basis of V.

Dimension:
If a vector space V has a basis consisting of n vectors, then the
dimension of V is n.
For example, for a basis S of R3 this can be written
    dim(V) = dim(R3) = |S| = 3.    [S is a set]

The dimension of a vector space is equal to the
"number of elements" in its basis.
Example 19
Show that the set {(1, 0, -1), (1, 1, 1), (1, 2, 4)} is a basis for R3.

Solution
Case I (span):
Let (x1, x2, x3) be an arbitrary element of R3.
Suppose (x1, x2, x3) = a1(1, 0, −1) + a2(1, 1, 1) + a3(1, 2, 4)

    a1 + a2 + a3 = x1
          a2 + 2a3 = x2
    −a1 + a2 + 4a3 = x3

Adding the first equation to the third, and then eliminating a2:

    a1 + a2 + a3 = x1                 a1 + a2 + a3 = x1
          a2 + 2a3 = x2        ⟹            a2 + 2a3 = x2
        2a2 + 5a3 = x1 + x3                       a3 = (x1 + x3) − 2x2

Back substitution gives

    a1 = 2x1 − 3x2 + x3
    a2 = −2x1 + 5x2 − 2x3
    a3 = x1 − 2x2 + x3

Thus the set spans the space.

Case II (linearly independent):

Consider the identity
    b1(1, 0, −1) + b2(1, 1, 1) + b3(1, 2, 4) = (0, 0, 0)
The identity leads to the system of equations
    b1 + b2 + b3 = 0
          b2 + 2b3 = 0
    −b1 + b2 + 4b3 = 0
⟹ b1 = 0, b2 = 0, and b3 = 0 is the unique solution.
Thus the set is linearly independent.

{(1, 0, −1), (1, 1, 1), (1, 2, 4)} spans R3 and is linearly independent.

⟹ It forms a basis for R3.
Example 20
Prove that the set B={(1, 3, -1), (2, 1, 0), (4, 2, 1)} is a basis for R3.
Solution
From the given set we see that,
dim(R3)=|B|=3.
It’s enough to show that, the set B is linearly independent or it spans R3.
Let us check for linear independence.
Suppose c1 (1, 3, − 1) + c2 (2, 1, 0) + c3 (4, 2, 1) = (0, 0, 0)
It can be expressed in the form of equations
c1 + 2c2 + 4c3 = 0
3c1 + c2 + 2c3 = 0
− c1 + c3 = 0

Solving this system of equations, we get

    c1 + 2c2 + 4c3 = 0            c1 + 2c2 + 4c3 = 0
          5c2 + 10c3 = 0    ⟹          5c2 + 10c3 = 0
    −c1          + c3 = 0                      15c3 = 0

This system has the unique solution, i.e. c1 = 0, c2 = 0, c3 = 0.

Thus the vectors are linearly independent.

The set {(1, 3, −1), (2, 1, 0), (4, 2, 1)} is therefore a basis for R3.
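For n vectors in Rn, the basis test reduces to checking that the matrix with those vectors as columns is invertible (nonzero determinant, or full rank); a small sketch (not from the lecture) for Example 20 using NumPy:

```python
import numpy as np

# Columns are the vectors of B = {(1,3,-1), (2,1,0), (4,2,1)}
B = np.array([[ 1, 2, 4],
              [ 3, 1, 2],
              [-1, 0, 1]], dtype=float)

print(np.linalg.det(B))                   # nonzero, so the columns are independent
print(np.linalg.matrix_rank(B) == 3)      # True: 3 independent vectors in R3 form a basis
```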
Application of Linear Algebra

How much traffic flows through the four labeled segments?

Sum of inflow at a point = Sum of outflow at a point

Civil Engineering
Application of Linear Algebra

Suppose the rabbits population growth rate in 2016 are 0, 6, 8 rabbits in


the first, second and third years respectively.
1. If only half of newborn rabbits survive their first year
2. Half of them survive their second year
3. Their maximum life span is three years.
According to the above conditions, what will be the population growth
rate in 2017?

In a population of rabbits. . .

Biology
Applications of Linear Algebra
Application 1: Constructing Curves and Surfaces Passing through Specified Points

Application 2: Least Squares Approximation

Application 3: Traffic Flow

Application 4: Electrical Circuits

Application 5: Determinant

Application 6: Genetics

Application 7: Graph Theory

Application 8: Cryptography

Application 9: Markov Chain

Application 10: Leontief Economic Model

https://www.math.ucdavis.edu/~daddel/linear_algebra_appl/Applications/applications.html
Linear Algebra

(Post-Optimization)
• Sensitivity Analysis
• General stages of the solution
• Transportation Array
• Simplex method

Post-Optimization Problems: Sensitivity Analysis
Sensitivity Analysis:

The general stages of the solution of any mathematical-programming problem
There are five stages, i.e.

• Selection of a Time Horizon

• Selection of Decision Variables and Parameters
• Definition of the Constraints
• Selection of the Objective Function
Transportation Array

Methods
Solving Process

[Note: It is a balanced
transportation problem]

The role of digital computers in solving mathematical-programming problems
Simplex method

Problem:

Solution:

Solution:
Exercise

Problems: Solve the problems by using the Simplex method.