Linear Algebra
Dr. M. Tanveer
Department of Mathematics
A square matrix A is said to be a diagonal matrix if aii = di and aij = 0 for j ≠ i.
The square matrix A is said to be a lower triangular matrix if
aij = 0, i < j.
Example: A =
[4 0 0]
[1 1 0]
[4 9 1]
The square matrix A is said to be an upper triangular matrix if
aij = 0, i > j.
Example: A =
[9 3 3]
[0 1 0]
[0 0 5]
A matrix is said to be triangular if it is either lower or upper triangular.
The square matrix A is said to be an orthogonal matrix if
AᵀA = AAᵀ = In
Example: A =
[1  0]
[0 −1]
The square matrix A is said to be a unitary matrix if
A∗A = AA∗ = In
Example: A = (1/2)
[1+i 1−i]
[1−i 1+i]
Theorem
If A is a nonsingular n × n matrix with adjoint (adjugate) matrix adj(A), then
A⁻¹ = (1/|A|) adj(A).
Proof:
First we show that A · adj(A) = adj(A) · A = |A| In. We have

A · adj(A) =
[a11 a12 ... a1n]   [A11 A21 ... An1]
[a21 a22 ... a2n]   [A12 A22 ... An2]
[ .   .  ...  . ] · [ .   .  ...  . ]
[an1 an2 ... ann]   [A1n A2n ... Ann]

and the (i, j)th element of A · adj(A) is
ai1 Aj1 + ai2 Aj2 + ... + ain Ajn,
which equals |A| when i = j (the cofactor expansion of |A| along the ith row) and 0 when i ≠ j (a cofactor expansion with a repeated row). Hence A · adj(A) = |A| In, and dividing by |A| ≠ 0 gives the result.
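The theorem can be checked numerically. Below is a quick sanity check, assuming NumPy is available; the cofactor-based `adjugate` helper and the 3 × 3 matrix are our own illustrative choices, not from the slides.

```python
import numpy as np

def adjugate(A):
    """Adjugate (classical adjoint): transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

detA = np.linalg.det(A)
assert abs(detA) > 1e-12          # A must be nonsingular
# A^{-1} = adj(A) / |A|, as in the theorem above.
A_inv = adjugate(A) / detA
```

Checking `A @ A_inv` against the identity (and against `np.linalg.inv`) confirms both the identity A · adj(A) = |A| In and the inverse formula.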
Example
The matrices
[0 1 2 3  0]       [1 0 1]
[0 0 1 −2 3]  and  [0 1 5]
[0 0 0 1  5]       [0 0 1]
are in row echelon form.
The matrix
[0 1 2 3  0]
[0 1 1 −2 3]
[0 0 0 1  5]
is not in row echelon form.
Theorem
Every matrix is row equivalent to a unique matrix in reduced row echelon form.
Proof (2)
The product AB can be viewed in two ways: each row of AB is a linear combination of the rows of B, and each column of AB is a linear combination of the columns of A. In either case, the number of linearly independent rows (or columns, as the case may be) of AB cannot exceed that of the corresponding factor. In other words, the rank of the product AB cannot be greater than the number of linearly independent columns of A nor greater than the number of linearly independent rows of B. Another way to express this is
ρ(AB) ≤ min(ρ(A), ρ(B)).
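The rank inequality above is easy to observe numerically. A minimal sketch, assuming NumPy; the random 5 × 3 and 3 × 5 factors are our own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-limited factors: A is 5x3 and B is 3x5, so rank(A), rank(B) <= 3,
# and the product AB is 5x5 but can have rank at most 3.
A = rng.standard_normal((5, 3))
B = rng.standard_normal((3, 5))

rank = np.linalg.matrix_rank
# Each row of AB is a combination of the rows of B; each column of AB is
# a combination of the columns of A, so rank(AB) can exceed neither.
```

The assertion ρ(AB) ≤ min(ρ(A), ρ(B)) holds for any such pair of factors.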
Null(A) = {x : Ax = 0}
3x + 4y − 2z = 5
x + 4z = 17
−2x + y − 3z = −20
a1x1 + a2x2 + ... + anxn = b
Example:
x1 + 2x2 + 9x3 = 8
7x1 + 3x2 + 8x3 = 2
x1 − x2 − 2x3 = −1
The system
x − y = 1
x + y = 3
has the unique solution x = 2, y = 1, whereas the system
x − y = 1
x − y = 3
is inconsistent.
Source: https://www.slideserve.com/terri/examples
Dr. M. Tanveer Linear Algebra January 27, 2023 37 / 164
Figure: Types of Solution
Source: https://slideplayer.com/slide/14894118/
Note:
After Gaussian elimination, the columns having no pivot entries are often referred to as nonpivot columns, while those with pivots are called pivot columns.
The variables corresponding to nonpivot columns are called independent variables or free variables, while those corresponding to pivot columns are dependent variables.
2x2 + 3x3 = 8
2x1 + 3x2 + x3 = 5
x1 − x2 − 2x3 = −5
x1 − x2 − 2x3 = −5
x2 + x3 = 3
x3 = 2
x − y = 1
y = 2
3x − 3y = 3
Solving this system by the Gaussian elimination method gives x = 3 and y = 2.
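The first system above can be verified numerically. A small sketch, assuming NumPy; `np.linalg.solve` performs the elimination internally:

```python
import numpy as np

# Coefficients of the system
#   2x2 + 3x3       = 8
#   2x1 + 3x2 + x3  = 5
#    x1 -  x2 - 2x3 = -5
A = np.array([[0.0,  2.0,  3.0],
              [2.0,  3.0,  1.0],
              [1.0, -1.0, -2.0]])
b = np.array([8.0, 5.0, -5.0])

x = np.linalg.solve(A, b)
```

Back substitution on the triangular form gives x3 = 2, x2 = 3 − x3 = 1, x1 = −5 + x2 + 2x3 = 0, matching the solver.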
Result
Let AX = O be a homogeneous system of m linear equations in n
variables. If m < n, then the system has a nontrivial solution.
Example: Find all values of a for which the system
x + y + z = 1
x + 2y + 4z = a
x + 4y + 10z = a²
is consistent.
Solution: Augmented matrix,
[A|b] =
[1 1 1  | 1 ]
[1 2 4  | a ]
[1 4 10 | a²]
Apply R2 → R2 − R1 and R3 → R3 − R1; we get
[1 1 1 | 1     ]
[0 1 3 | a − 1 ]
[0 3 9 | a² − 1]
Now apply R3 → R3 − 3R2; we get
[1 1 1 | 1          ]
[0 1 3 | a − 1      ]
[0 0 0 | a² − 3a + 2]
This implies that
0x + 0y + 0z = a² − 3a + 2,
so the system is consistent only if a² − 3a + 2 = 0, i.e., a = 1 or a = 2.
Now for a = 1 we have y + 3z = 0 ⇒ y = −3z, and x + y + z = 1 ⇒ x = 1 + 2z.
Here x and y are dependent variables and z is a free variable. Take z = r; then y = −3r and x = 1 + 2r for any r.
Therefore, the given system has infinitely many solutions.
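The consistency analysis can be cross-checked with the standard criterion that a system Ax = b is consistent exactly when rank(A) = rank([A|b]). A sketch, assuming NumPy; the helper name `is_consistent` is ours:

```python
import numpy as np

A = np.array([[1.0, 1.0,  1.0],
              [1.0, 2.0,  4.0],
              [1.0, 4.0, 10.0]])

def is_consistent(a):
    # Consistent iff rank(A) == rank of the augmented matrix [A|b].
    b = np.array([1.0, a, a * a])
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)
```

Only a = 1 and a = 2 (the roots of a² − 3a + 2) make the system consistent, in agreement with the elimination above.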
2x − y + z = 4
3x − y + z = 6
4x − y + 2z = 7
−x + y − z = 9
Note:
1 The elements of V and F are called vectors and scalars, respectively.
2 A Vector Space has a unique additive identity.
3 Every element in a Vector Space has a unique additive inverse.
4 If F = R, the vector space is called a Real Vector Space.
5 If F = C, the vector space is called a Complex Vector Space.
z1 + z2 = (x1 + x2) + i(y1 + y2) and az1 = (a1x1 − a2y1) + i(a1y1 + a2x1).
z + w = (z1 + w1, ..., zn + wn)
and
az := (az1, ..., azn)
Then Cⁿ(F) is a vector space.
Let Mn(R) denote the set of all n × n matrices with real entries. Then Mn(R) is a real vector space with vector addition and scalar multiplication defined entrywise.
p(x) = a0 + a1x + a2x² + ... + amx^m
q(x) = b0 + b1x + b2x² + ... + bmx^m
Note:
Any vector space has two improper subspaces: {0} and the vector space V itself. All other subspaces are called proper subspaces.
Proof:
If W is a subspace of V, then all the vector space properties are satisfied; in particular, the two properties (1) and (2) above are satisfied.
Conversely, assume the two conditions (1) and (2) above hold. Let u be any vector in W. By condition (2), au ∈ W for every scalar a. Setting a = 0, 0u = 0 ∈ W, and setting a = −1, (−1)u = −u ∈ W. All other properties are automatically satisfied by the vectors in W since they are satisfied by all vectors in V.
Proof:
Suppose that W is a non-empty subset of V such that au + v belongs to W for all vectors u, v in W and all scalars a in F. Since W is non-empty, there is a vector p in W, and (−1)p + p = 0 is in W. Then if u is any vector in W and a any scalar, the vector au = au + 0 is in W. In particular, (−1)u = −u is in W. Finally, if u and v are in W, then u + v = 1u + v is in W. Thus W is a subspace of V.
Conversely, if W is a subspace of V, u and v are in W, and a is a scalar, then au + v is in W.
Question:
Q1. Is W = {(x1, x2) ∈ R² : 2x1 + x2 = 0, x1 + x2 = 1} a subspace of R²?
Q2. Is W = { [a b; b 0] : a, b ∈ R } a subspace of R^{2×2}(R)? (Here [a b; b 0] denotes the 2 × 2 matrix with rows (a, b) and (b, 0).)
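These two questions can be explored with a small numerical check (an illustration, not a proof); the helper names below are our own:

```python
# Q1: W = {(x1, x2) : 2*x1 + x2 == 0 and x1 + x2 == 1}.
# A subspace must contain the zero vector, and (0, 0) fails x1 + x2 == 1,
# so W cannot be a subspace of R^2.
def in_W1(x1, x2):
    return 2 * x1 + x2 == 0 and x1 + x2 == 1

# Q2: W = {[[a, b], [b, 0]] : a, b real}. Membership depends only on the
# entry pattern (symmetric off-diagonal, zero bottom-right), which is
# preserved by sums and scalar multiples.
def in_W2(m):
    (p, q), (r, s) = m
    return q == r and s == 0

def add(m1, m2):
    return [[m1[i][j] + m2[i][j] for j in range(2)] for i in range(2)]

def scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]
```

The zero-vector test settles Q1 in the negative, while closure under `add` and `scale` is consistent with Q2 being a subspace.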
Examples:
S1 = {(x, 0, 0) ∈ R3 : x ∈ R} and S2 = {(0, y , 0) ∈ R3 : y ∈ R}. Then
S1 + S2 = {(x, y , 0) : x, y ∈ R}.
Intersection of Subsets
Let S1 , ..., Sm be the subsets of a vector space V , Then the intersection
of S1 , ..., Sm is defined as:
∩Si = S1 ∩ ... ∩ Sm = {s : s ∈ Si , 1 ≤ i ≤ m}
Proof:
Let u, v ∈ ∩Wi and a ∈ F; then u, v ∈ Wi for all i
⇒ u + v ∈ Wi for all i and au ∈ Wi for all i
⇒ u + v ∈ ∩Wi and au ∈ ∩Wi
⇒ ∩Wi is a subspace of V.

Let u, v ∈ W1 + ... + Wn
⇒ u = w1 + ... + wn and v = w1′ + ... + wn′, where wi, wi′ ∈ Wi
⇒ u + v = (w1 + w1′) + ... + (wn + wn′) ∈ W1 + ... + Wn and
au = aw1 + ... + awn ∈ W1 + ... + Wn
⇒ W1 + ... + Wn is a subspace of V.
Example:
Let S = {(1, −1, 0), (1, 0, 2), (0, −2, 5)} be a subset of R³. Then the vector (1, −2, −2) is a linear combination of the vectors in S, since (1, −2, −2) = 2(1, −1, 0) − (1, 0, 2).
Theorem
Let S be a nonempty subset of a vector space V . Then:
1 S ⊆ Span(S).
2 Span(S) is a subspace of V (under the same operations as V ).
3 If W is a subspace of V with S ⊆ W , then Span(S) ⊆ W .
4 Span(S) is the smallest subspace of V containing S.
Examples
1 A set of two vectors in V is L.I. if and only if neither vector is a
scalar multiple of the other.
2 (1, 0, 0, 0), (0, 1, 0, 0) and (0, 0, 1, 0) are L.I. in R4 .
Examples:
1 (2, 3, 1), (1, −1, 2), (7, 3, 8) is L.D. in R³ because (7, 3, 8) = 2(2, 3, 1) + 3(1, −1, 2).
Note:
If S is a linearly dependent subset of V then every set containing S
is also linearly dependent.
If S is a linearly independent subset of a vector space V , then every
subset of S is also linearly independent.
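Linear dependence of a finite set in Rⁿ can be tested by stacking the vectors into a matrix and computing its rank. A sketch for the example above, assuming NumPy:

```python
import numpy as np

# The vectors (2,3,1), (1,-1,2), (7,3,8) from the example, stacked as rows.
S = np.array([[2.0,  3.0, 1.0],
              [1.0, -1.0, 2.0],
              [7.0,  3.0, 8.0]])

# Three vectors in R^3 are L.I. exactly when this matrix has rank 3;
# a smaller rank exhibits a dependence relation.
rank = np.linalg.matrix_rank(S)
```

Here the rank is 2, reflecting the relation (7, 3, 8) = 2(2, 3, 1) + 3(1, −1, 2).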
Theorem
Let V be a vector space. Then {u1, ..., un} is linearly dependent if and only if one of the ui's is a linear combination of the other uj's.
Proof:
Since {u1, ..., un} is L.D., there exist scalars ai, not all zero, such that a1u1 + ... + anun = 0. Suppose ai ≠ 0 for some i. Then ui = −ai⁻¹(Σj≠i ajuj), so ui is a linear combination of the other uj's.
Proof
Suppose there is a linear dependence relation in the elements of S ∪ {u} of the form
a1u1 + ... + anun + bu = 0,  ui ∈ S.
If b ≠ 0 then u = (−1/b)(a1u1 + ... + anun) ∈ L(S). Therefore, b = 0. But then a1u1 + ... + anun = 0, and this implies that all ai = 0, since S is L.I. Therefore, S ∪ {u} is linearly independent.
Examples:
Let V = Rⁿ and B = {e1, e2, ..., ek}, 1 ≤ k ≤ n, where ej = (0, ..., 0, 1, 0, ..., 0), 1 ≤ j ≤ k (1 is at the jth place). Then B is called the standard basis of Rⁿ. Any vector u ∈ Rⁿ can be written as
u = (x1, x2, ..., xn) = x1e1 + x2e2 + ... + xnen.
Let Pi(x) = xⁱ for i = 0, 1, 2, ..., m, where m is some positive integer. Then the set B = {P0(x), P1(x), ..., Pm(x)} is called the standard basis for Pm(R).
Proof:
If S ⊆ L(S1), then L(S1) = L(S) = V and we can take B = S1.
Otherwise there exists u1 ∈ S\S1 and, by the above lemma, S1′ = S1 ∪ {u1} is L.I. Replace S1 by S1′ and repeat the above process. Since S is a finite set, this process will terminate in a finite number of steps and we have the desired result.
u = c1u1 + c2u2 + ... + cnun
  = c1u1 + c2u2 + ... + ci(Σj≠i bjuj) + ... + cnun
This implies that {u1, u2, ..., ui−1, ui+1, ..., un} is a spanning set for V, which contradicts that B is a minimal spanning set of V. Therefore, all ai = 0. Hence, B is L.I.
Now for any u ∈ V, u = d1u1 + ... + dnun where di ∈ F ⇒ {u1, u2, ..., un, u} is L.D. ⇒ B is a maximal L.I. subset of V.
Corollary:
If B is a L.I subset of vector space V , then B can be extended to a basis
of V .
Proof:
Since S spans V, every vector in V can be expressed as a linear combination of the vectors in S. Suppose that some vector u can be written in two ways:
u = a1u1 + ... + anun and u = b1u1 + ... + bnun.
Subtracting, we get
0 = (a1 − b1)u1 + ... + (an − bn)un
Since S is a linearly independent set, it implies that
a1 = b1, ..., an = bn.
Theorem
Let V be a vector space, and let B1 and B2 be bases for V such that B1 has finitely many elements. Then B2 also has finitely many elements, and |B1| = |B2|.
Proof
Because B1 and B2 are bases for V, L(B1) = V and B2 is L.I. Hence, the above lemma shows that B2 has finitely many elements and |B2| ≤ |B1|.
Now, B2 is finite and spans V, and B1 is L.I. Then, using the above lemma, |B1| ≤ |B2|.
Therefore, the number of elements in B1 and B2 is the same, i.e., |B1| = |B2|.
Examples:
dim(Rn ) = n.
dim(Mm,n ) = mn.
dim(Pn ) = n + 1.
So {(1, 0), (i, 0), (0, 1), (0, i)} is a basis of C²(R) and thus dim(V ) = 4.
Proof
Let {u1, u2, ..., ur} be a basis of U1 ∩ U2. Then, as U1 ∩ U2 is a subspace of U1 and U2, {u1, u2, ..., ur} can be extended to bases {u1, u2, ..., ur, v1, v2, ..., vt} and {u1, u2, ..., ur, w1, w2, ..., wp} of U1 and U2, respectively.
Now we will show that the set S = {u1, ..., ur, v1, ..., vt, w1, ..., wp} is L.I. Let ai, bi and ci be scalars such that
a1u1 + ... + arur + b1v1 + ... + btvt + c1w1 + ... + cpwp = 0
⇒ a1u1 + ... + arur + c1w1 + ... + cpwp = −(b1v1 + ... + btvt) ∈ U1 ∩ U2
dim(U1 + U2) = r + t + p = (r + t) + (r + p) − r
             = dim(U1) + dim(U2) − dim(U1 ∩ U2)
dim(U1 + U2 ) = 1 + 1 − 0 = 2
Theorem
Let V and W be vector spaces, and let T : V → W be a function. Then T is called a linear transformation if and only if
T(au + bv) = aT(u) + bT(v), ∀ u, v ∈ V, a, b ∈ F
In particular, taking a = b = 1,
T(1u + 1v) = 1T(u) + 1T(v) = T(u) + T(v)
and taking b = 0,
T(au + 0v) = aT(u) + 0T(v) = aT(u) + 0 = aT(u)
Examples: T(x) = Ax (a linear transformation) and T(x) = x² (not linear, since T(x + y) ≠ T(x) + T(y) in general).
Proof
Proof(1)
Since T is a L.T., T(au) = aT(u), ∀a ∈ F, u ∈ V. Taking a = 0, we have T(0) = 0T(u) = 0.
Proof(2)
Since T is a L.T., T(−u) = T(−1u) = (−1)T(u) = −T(u).
Proof(3) Prove it by induction.
Identity Transformation
Let V be a vector space and let T : V → V be the function defined by
T (u) = u, ∀u ∈ V
Proof
Let w1, w2 ∈ W. Since T is one-one and onto, there exist unique vectors u1, u2 ∈ V such that T⁻¹(w1) = u1 and T⁻¹(w2) = u2, or equivalently, T(u1) = w1 and T(u2) = w2. So,
T(au1 + bu2) = aT(u1) + bT(u2) = aw1 + bw2, ∀a, b ∈ F.
Thus for any a, b ∈ F,
T⁻¹(aw1 + bw2) = au1 + bu2 = aT⁻¹(w1) + bT⁻¹(w2).
T(u) = Au = [cos(θ) −sin(θ)] [r cos(α)]
            [sin(θ)  cos(θ)] [r sin(α)]

           = [r cos(α + θ)]
             [r sin(α + θ)]
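The rotation identity can be confirmed numerically. A sketch, assuming NumPy; the particular values of r, α, θ are our own:

```python
import numpy as np

theta = np.pi / 6          # rotation angle
alpha = np.pi / 4          # initial polar angle of u
r = 2.0                    # length of u

A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = np.array([r * np.cos(alpha), r * np.sin(alpha)])

v = A @ u   # the rotated vector
```

The result agrees with (r cos(α + θ), r sin(α + θ)), and the rotation preserves the length r.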
Note:
Let T : V → W be a linear transformation, then T may not map
L.I. subsets of V into L.I. subsets of W .
Example:
Let T : R² → R², T(x, y) = (x, 0), which is a L.T., and {(1, 0), (0, 1)} is a L.I. subset of R², but {T(1, 0), T(0, 1)} = {(1, 0), (0, 0)} is not a L.I. subset of R².
Define a matrix
A = [a11 a12 ... a1n]
    [a21 a22 ... a2n]
    [ .   .  ...  . ]
    [am1 am2 ... amn]
Then the coordinate of the vector T(u) with respect to the ordered basis B2 is

[T(u)]B2 = [Σⁿj=1 a1j aj]
           [Σⁿj=1 a2j aj]
           [     .      ]
           [Σⁿj=1 amj aj]

         = [a11 a12 ... a1n] [a1]
           [a21 a22 ... a2n] [a2]
           [ .   .  ...  . ] [ .]
           [am1 am2 ... amn] [an]

         = A[u]B1.

Hence [T(u)]B2 = A[u]B1.
T(x, y) = (x + y, x − y)
and B1 = {(1, 0), (0, 1)} and B2 = {(1, 1), (1, −1)} are ordered bases of R².
For any vector u = (x, y) ∈ R², [u]B1 = [x, y]ᵀ. Now,
T(1, 0) = (1, 1) = 1(1, 1) + 0(1, −1), so [T(1, 0)]B2 = [1, 0]ᵀ,
and T(0, 1) = (1, −1) = 0(1, 1) + 1(1, −1), so [T(0, 1)]B2 = [0, 1]ᵀ.
So, T[B1, B2] = [1 0]
               [0 1]
Notice: [T(x, y)]B2 = [(x + y, x − y)]B2 = [x, y]ᵀ, since (x + y, x − y) = x(1, 1) + y(1, −1).
And
T[B1, B2][(x, y)]B1 = [1 0] [x] = [x] = [T(x, y)]B2
                      [0 1] [y]   [y]
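The matrix of T with respect to B1 and B2 can be computed mechanically: each column is the B2-coordinate vector of T applied to a B1 basis vector, obtained by solving a small linear system. A sketch, assuming NumPy:

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([x + y, x - y])

B1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
B2 = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]

# Columns of M are the B2-coordinates of T applied to each B1 vector:
# solve [b2_1 | b2_2] c = T(b) for the coordinate vector c.
P2 = np.column_stack(B2)
M = np.column_stack([np.linalg.solve(P2, T(b)) for b in B1])
```

For this particular pair of bases M is the identity matrix, and M [u]B1 equals [T(u)]B2 for any u, matching the example above.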
Theorem
Let V and W be vector spaces and T : V → W be a linear
transformation. Then
1 N(T ) is a subspace of V and it is called null space of T or the
kernel of T .
2 R(T ) is a subspace of W and it is called range space of T .
Proof(2)
Let w1, w2 ∈ R(T); then w1 = T(u1) and w2 = T(u2) for some u1, u2 ∈ V. Now, ∀a, b ∈ F,
aw1 + bw2 = aT(u1) + bT(u2) = T(au1 + bu2) ∈ R(T).
Proof (2)
Suppose T is onto ⇒ ∀w ∈ W, there exists u ∈ V such that T(u) = w. It means that R(T) = W ⇒ ρ(T) = dim(W).
Conversely, we know that R(T) ⊆ W, and ρ(T) = dim(W) ⇒ R(T) = W. Therefore, T is onto.
We now prove that the set S1 = {T(ur+1), T(ur+2), ..., T(un)} is L.I.
Suppose that S1 is not L.I. Then there exist scalars ar+1, ar+2, ..., an, not all zero, such that
ar+1T(ur+1) + ar+2T(ur+2) + ... + anT(un) = 0
⇒ T(ar+1ur+1 + ar+2ur+2 + ... + anun) = 0
⇒ ar+1ur+1 + ar+2ur+2 + ... + anun ∈ N(T).
Hence, we can express this vector with the help of the basis B, so there exist scalars ai, 1 ≤ i ≤ r, such that
ar+1ur+1 + ar+2ur+2 + ... + anun = a1u1 + ... + arur.
That is,
a1u1 + ... + arur − ar+1ur+1 − ar+2ur+2 − ... − anun = 0
But the set {u1, u2, ..., un} is a basis of V and so L.I. Thus,
ai = 0, ∀i, 1 ≤ i ≤ n.
In particular, all the scalars ar+1, ar+2, ..., an are zero, a contradiction. Therefore, the set S1 is L.I. and a basis of R(T). Hence ρ(T) = n − r, i.e., ρ(T) + ν(T) = dim(V).
T (ax 2 + bx + c) = (a − c)x + (b − c)
T (x, y , z) = (x − y , x + z, 2y − z)
T (x, y , z, w ) = (x − y , z + w , x + z)
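The rank-nullity relation can be checked on the last example above, T(x, y, z, w) = (x − y, z + w, x + z), by working with its standard matrix. A sketch, assuming NumPy:

```python
import numpy as np

# Matrix of T(x, y, z, w) = (x - y, z + w, x + z) w.r.t. standard bases.
A = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 1.0],
              [1.0,  0.0, 1.0, 0.0]])

n = A.shape[1]                    # dimension of the domain R^4
rank = np.linalg.matrix_rank(A)   # dim R(T)
nullity = n - rank                # dim N(T)
```

Here ρ(T) = 3 and ν(T) = 1, so ρ(T) + ν(T) = 4 = dim(R⁴), as the theorem predicts.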
Now define,
T(u) = a1T(u1) + a2T(u2) + ... + anT(un).
Example
Let
A = [2 −3]   B = [1  0]   P = [3 1]
    [1 −2]       [0 −1]       [1 1]
then AP = PB. It means that A is similar to B.
A = [1 2]  and  B = [2 1]
    [2 1]           [1 2]
are not similar matrices, since |A| ≠ |B|.
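Both claims in the example are quick to verify numerically. A sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, -3.0], [1.0, -2.0]])
B = np.array([[1.0,  0.0], [0.0, -1.0]])
P = np.array([[3.0,  1.0], [1.0,  1.0]])

# AP = PB with P invertible means B = P^{-1} A P, i.e. A and B are similar.
AP = A @ P
PB = P @ B
```

The determinant check rules out similarity for the second pair, since similar matrices must have equal determinants.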
Proof
Let T(ui) = Σⁿj=1 aji uj and T(vi) = Σⁿj=1 bji vj. Let P = (pij) be the transition matrix from B1 to B2; then ui = Σⁿj=1 pji vj. So

T(ui) = Σⁿj=1 pji T(vj)
      = Σⁿj=1 pji Σⁿk=1 bkj vk
      = Σⁿk=1 (Σⁿj=1 bkj pji) vk
Cont...
Also, since
T(ui) = Σⁿj=1 aji uj
      = Σⁿj=1 aji Σⁿk=1 pkj vk
      = Σⁿk=1 (Σⁿj=1 pkj aji) vk
Comparing the coefficients of vk in the two expressions for T(ui),
Σⁿj=1 bkj pji = Σⁿj=1 pkj aji
But the left-hand side is the (k, i)th element of BP and the right-hand side is the (k, i)th element of PA, and since this is true for all k = 1, 2, ..., n and i = 1, 2, ..., n,
⇒ BP = PA ⇒ B = PAP⁻¹.
T(x, y) = (x + 3y, x − y)
and let B1 = {(1, 0), (0, 1)} and B2 = {(1, 2), (2, 1)} be the bases of R².

A = T[B1, B1] = [1  3]  and  B = T[B2, B2] = [−3 −1]
                [1 −1]                       [ 5  3]

Now the transition matrix from B2 to B1 is P = [1 2]
                                               [2 1]
then we can see that
PB = AP
⇒ A and B are similar matrices.
be a given matrix then with respect to the standard basis of R3 find the
linear transformation T : R3 → R3 .
Let (x, y , z) ∈ R3 ,
⟨au + bv, w⟩ = Σⁿi=1 (aui + bvi)wi = a Σⁿi=1 uiwi + b Σⁿi=1 viwi = a⟨u, w⟩ + b⟨v, w⟩
⟨u, u⟩ = Σⁿi=1 ui² ≥ 0, and ⟨u, v⟩ = Σⁿi=1 uivi = Σⁿi=1 viui = ⟨v, u⟩.
hA, Bi = tr (AB T )
Proof
If u = 0 or v = 0, then the inequality holds. Let u ≠ 0 and v ≠ 0. For any scalar a,
0 ≤ ⟨u − av, u − av⟩
  = aā⟨v, v⟩ − a⟨v, u⟩ − ā⟨u, v⟩ + ⟨u, u⟩
  = ⟨u, u⟩ − a⟨v, u⟩ − ā{⟨u, v⟩ − a⟨v, v⟩}
Choose a = ⟨u, v⟩/⟨v, v⟩; then
0 ≤ ||u||² − (⟨u, v⟩/⟨v, v⟩)⟨v, u⟩ − 0
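The resulting Cauchy-Schwarz inequality |⟨u, v⟩| ≤ ||u|| ||v|| is easy to check numerically. A sketch, assuming NumPy; the random vectors are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Cauchy-Schwarz with the standard dot product on R^5.
lhs = abs(np.dot(u, v))
rhs = np.linalg.norm(u) * np.linalg.norm(v)
```

Equality holds exactly when one vector is a scalar multiple of the other, e.g. v = 3u.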
⟨ui, uj⟩ = 0, 1 ≤ i ≠ j ≤ n.
||ui|| = 1, 1 ≤ i ≤ n.
Theorem
Let V be an inner product space. Let S = {u1 , u2 , ..., un } be a set of
non-zero, mutually orthogonal vectors of V . Then,
1 The set S is linearly independent.
2 Let dim(V) = n and also let ||ui|| = 1, for all i = 1, 2, ..., n. Then for any v ∈ V,
v = Σⁿi=1 ⟨v, ui⟩ui.

⟨v, uj⟩ = ⟨Σⁿi=1 ai ui, uj⟩ = Σⁿi=1 ai⟨ui, uj⟩ = aj⟨uj, uj⟩.
Now, aj = ⟨v, uj⟩/⟨uj, uj⟩ and given that ||uj|| = 1 for all j = 1, 2, ..., n, therefore
v = Σⁿi=1 ⟨v, ui⟩ui.
Then
[u]B = [a/2, (−a + 2b)/2]ᵀ,
Notice that:
u = (a, b) = (a/2)(2, 1) + ((−a + 2b)/2)(0, 1)
Gram-Schmidt Process
Let B = {w1, w2, ..., wk} be a linearly independent subset of an inner product space V. We can create a new set {u1, u2, ..., uk} of vectors as follows:
Let u1 = w1.
u2 = w2 − (⟨w2, u1⟩/⟨u1, u1⟩)u1.
u3 = w3 − (⟨w3, u1⟩/⟨u1, u1⟩)u1 − (⟨w3, u2⟩/⟨u2, u2⟩)u2.
...
uk = wk − (⟨wk, u1⟩/⟨u1, u1⟩)u1 − (⟨wk, u2⟩/⟨u2, u2⟩)u2 − ... − (⟨wk, uk−1⟩/⟨uk−1, uk−1⟩)uk−1.
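The steps above translate directly into code. A minimal sketch, assuming NumPy and the standard dot product on R³; the three starting vectors are our own illustrative choice:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors (no normalization)."""
    ortho = []
    for w in vectors:
        u = w.astype(float)
        for q in ortho:
            # Subtract the projection of w onto each previously built u,
            # exactly as in the formulas above.
            u = u - (np.dot(w, q) / np.dot(q, q)) * q
        ortho.append(u)
    return ortho

B = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
U = gram_schmidt(B)
```

The output vectors are pairwise orthogonal and span the same subspace as B; dividing each ui by ||ui|| would give an orthonormal set.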
⟨u1, u2⟩ = ⟨u1, w2 − (⟨w2, u1⟩/⟨u1, u1⟩)u1⟩
         = ⟨u1, w2⟩ − conj(⟨w2, u1⟩)
         = 0,
since conj(⟨w2, u1⟩) = ⟨u1, w2⟩.

⟨u2, u3⟩ = ⟨u2, w3 − (⟨w3, u1⟩/⟨u1, u1⟩)u1 − (⟨w3, u2⟩/⟨u2, u2⟩)u2⟩
         = ⟨u2, w3⟩ − (conj(⟨w3, u1⟩)/⟨u1, u1⟩)⟨u2, u1⟩ − (conj(⟨w3, u2⟩)/⟨u2, u2⟩)⟨u2, u2⟩
         = ⟨u2, w3⟩ − 0 − conj(⟨w3, u2⟩)
         = 0.
Similarly, ⟨u1, u3⟩ = 0.
Cont...
⇒ {u1, u2, u3} is an orthogonal set.
Continuing this way, an orthogonal set of vectors {u1, u2, ..., un} will be obtained after n steps.
Note: This preceding step-by-step construction for converting an arbitrary L.I. set into an orthogonal set is called the Gram-Schmidt process.
Note
If B = {u1, u2, ..., un} is an orthogonal basis of V, then {u1/||u1||, u2/||u2||, ..., un/||un||} will be an orthonormal basis of V.
Therefore, every finite dimensional inner product space has an orthonormal basis.
⟨u, w⟩ = 0, ∀w ∈ W.
i.e.,
W⊥ = {u ∈ V : ⟨u, w⟩ = 0, ∀w ∈ W}

Proof(1)
Let u, v ∈ W⊥ and a, b ∈ F; then
⟨u, w⟩ = ⟨v, w⟩ = 0, ∀w ∈ W.
Now, for all w ∈ W, ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩ = 0
⇒ au + bv ∈ W⊥. Therefore, W⊥ is a subspace of V.

Proof(2)
Let u ∈ W ∩ W⊥ ⇒ u ∈ W and u ∈ W⊥
⇒ ⟨u, u⟩ = 0 ⇒ u = 0.
Therefore, W ∩ W⊥ = {0}.
⟨v − Pu(v), u⟩ = ⟨v, u⟩ − ⟨Pu(v), u⟩
              = ⟨v, u⟩ − (⟨u, v⟩/⟨u, u⟩)⟨u, u⟩
              = ⟨v, u⟩ − ⟨u, v⟩
              = ⟨v, u⟩ − ⟨v, u⟩ (since the inner product is real)
              = 0.
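The orthogonality of the residual v − Pu(v) can be confirmed numerically. A sketch, assuming NumPy; the vectors u and v are our own illustrative choice:

```python
import numpy as np

def proj(u, v):
    """P_u(v): orthogonal projection of v onto the line spanned by u."""
    return (np.dot(u, v) / np.dot(u, u)) * u

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, -1.0, 4.0])

# The residual should be orthogonal to u, as derived above.
residual = v - proj(u, v)
```

The same projection is also the best approximation to v on the line spanned by u: ||v − Pu(v)|| ≤ ||v − au|| for every scalar a.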
Proof
We know that v − Pu(v) ⊥ u. Hence, v − Pu(v) ⊥ au for all a ∈ R, and therefore
(v − Pu(v)) ⊥ (Pu(v) − au), ∀ a ∈ R.
Thus, by the Pythagoras theorem, ||v − au||² = ||v − Pu(v)||² + ||Pu(v) − au||², so we get
||v − Pu(v)|| ≤ ||v − au||, and equality holds if and only if au = Pu(v).
Proof
Since W is a subspace of V and V is an inner product space, W is also an inner product space. Then there exists an orthonormal basis, say B = {w1, w2, ..., wn}, of W.
Let v ∈ V and x = v − Σⁿi=1 ⟨v, wi⟩wi; then ⟨x, wi⟩ = 0 for i = 1, 2, ..., n.
Since any element of W can be written as a linear combination of w1, w2, ..., wn, we get ⟨x, w⟩ = 0, ∀w ∈ W ⇒ x ∈ W⊥.
Hence v ∈ W + W⊥ ⇒ V ⊆ W + W⊥ ⇒ V = W + W⊥.
Now, since W ∩ W⊥ = {0}, therefore
V = W ⊕ W⊥.