Linear Algebra
A vector space over a field is a set that is closed under finite vector addition and scalar multiplication, where the scalars belong to the field. In this context, we shall deal with vector spaces over the field of complex numbers, denoted by C. A vector space V has the following properties:
1. Closure under vector addition:
(u + v) ∈ V ∀ u, v ∈ V
A vector can be represented by a matrix. The entries of the matrix are called the components of the vector.
For example:
v = (v1, v2, v3, . . . , vn)^T , u = (u1, u2, u3, . . . , un)^T , v + u = (v1 + u1, v2 + u2, v3 + u3, . . . , vn + un)^T ,
where the superscript T indicates that the vectors are written as columns.
Here n is called the dimension of the vector space. The vector space can also be infinite dimensional.
2.2 Linear Dependence and Independence
A set of vectors v1, v2, v3, . . . , vn is said to be linearly dependent iff ∃ some constants ci, not all zero, such that:
∑_{i=1}^{n} ci vi = 0.
(The trivial case where all the ci's are 0 is excluded.)
Each vector depends on the rest of the vectors through a linear relationship; hence the name linearly dependent. The vectors are called linearly independent iff they do not satisfy the above condition: for linearly independent vectors, no vector is related to the rest by a linear relationship. The maximum number of linearly independent vectors in a vector space is called the dimension of the vector space, denoted by n.
Any vector x in this vector space can be uniquely described as a linear combination of these n vectors of the vector space. The proof is quite simple. An n-dimensional vector space has a set of n linearly independent vectors. If we add the vector x to this set, we still have an n-dimensional space, so the set v1, v2, v3, . . . , vn, x is linearly dependent. So:
∑_{i=1}^{n} ci vi + cx x = 0
so: ∑_{i=1}^{n} ci vi = −cx x
⇒ ∑_{i=1}^{n} ci′ vi = x,  where ci′ = −ci/cx (and cx ≠ 0, since the vi alone are linearly independent)
Hence, we have shown that the general vector x can be expressed uniquely as a linear combination of the n linearly independent vectors of the vector space. A set of n linearly independent vectors is called a basis of the vector space. The set of all vectors that can be represented by this basis is called the span of the basis.
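To make this concrete, here is a small numpy sketch (the specific vectors and values are illustrative assumptions, not from the notes) that checks linear independence via the matrix rank and solves for the expansion coefficients ci of a vector in a given basis:

import numpy as np

# Hypothetical example: three vectors in C^3, stacked as columns of a matrix.
v1 = np.array([1, 0, 1j])
v2 = np.array([0, 1, 0])
v3 = np.array([1, 1, 0])
V = np.column_stack([v1, v2, v3])

# The vectors are linearly independent iff the rank equals the number of vectors.
print("linearly independent:", np.linalg.matrix_rank(V) == 3)

# Any vector x can then be written uniquely as x = sum_i c_i v_i;
# the coefficients c_i solve the linear system V c = x.
x = np.array([2, 3, 1j])
c = np.linalg.solve(V, x)
print("coefficients c_i:", c)
print("reconstruction matches x:", np.allclose(V @ c, x))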
A linear function can be defined over the elements of the vector space. Let f be a linear function, f : x → f(x), such that f(x) ∈ C. The function maps an element of the vector space to an element of the field over which the vector space is defined. By our convention we represent a vector x by a column
vector. So, we can see by inspection that f should be represented using a row vector. Then on multiplying
the row and the column vector, we get a single scalar. The function satisfies the requirement:
f(a x + b y) = a f(x) + b f(y)
where {a, b} ∈ C and {x, y} ∈ V
The space of such functions is called the dual space to the corresponding vector space. If the vector space
is V, then the corresponding dual space is represented by V∗ .
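As a concrete sketch (names and values are illustrative, not from the notes), a linear functional on C^n can be stored as a row vector of coefficients; applying it to a column vector is just a matrix product, and linearity can be checked numerically:

import numpy as np

# A linear functional f on C^3, stored as a row vector (1 x 3 matrix).
f_row = np.array([[1 - 1j, 2, 0.5j]])

x = np.array([[1], [1j], [2]])   # column vectors in C^3
y = np.array([[0], [3], [1]])
a, b = 2 + 1j, -0.5

f = lambda v: (f_row @ v).item()  # row times column gives a single scalar

# Linearity: f(a x + b y) = a f(x) + b f(y)
print(np.isclose(f(a * x + b * y), a * f(x) + b * f(y)))  # True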
2.4 Dirac's Bra-Ket Notation
The Dirac bra-ket notation is widely used in quantum mechanics. Any vector x (represented as a column matrix) is written as |x⟩ and is called a ket vector. The corresponding dual vector is called the bra vector, denoted by ⟨x|; hence the name bra-ket notation. We will follow this notation from now on, and the reasons will become evident as we proceed. So, we have the following conventions.
Consider a vector |x⟩ and its dual vector ⟨x|. We saw that the former is a column matrix and the latter a row matrix; ⟨x| is called the bra dual of |x⟩. The product of a bra vector with its corresponding ket vector is a scalar (as we saw, this is multiplying a row matrix with a column matrix). The product is represented as ⟨x|x⟩ and is called the inner product. The inner product of a bra and a ket vector is defined as:
⟨x|y⟩ = ∑_i xi∗ yi    (2.3)
The first property can be derived by pulling the constant out of the ket vector. The second can be obtained by the same technique, but using the rule given in (2.2). Another important property satisfied by the inner product is:
⟨x|y⟩ = (⟨y|x⟩)∗    (2.6)
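A small numpy sketch of the inner product convention above (illustrative values, not from the notes); numpy's vdot conjugates its first argument, matching ⟨x|y⟩ = ∑_i xi∗ yi and the conjugate-symmetry property (2.6):

import numpy as np

x = np.array([1 + 1j, 2, 3j])
y = np.array([0.5, 1 - 1j, 2])

braket_xy = np.vdot(x, y)          # <x|y> = sum_i conj(x_i) * y_i
braket_yx = np.vdot(y, x)          # <y|x>

print(braket_xy)
print(np.isclose(braket_xy, np.conj(braket_yx)))   # <x|y> = (<y|x>)*  -> True
print(np.vdot(x, x).real >= 0)                     # <x|x> is the squared norm, non-negative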
The inner product of a vector with its bra dual gives the square of the norm of the vector. It is represented as ⟨x|x⟩ = || |x⟩ ||². The norm is a scalar quantity. However, there is another possibility which we haven't yet explored: what about the product |x⟩⟨x|? This is a product of a column matrix with a row matrix, which gives a matrix whose dimensions are (n×n), where n is the dimension of the vector space containing |x⟩. This product is called an outer product. A matrix can be thought of as a transformation on a vector. In quantum mechanics, these transformations, or matrices, are called operators.
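As an illustrative numpy sketch (not from the notes), the outer product |x⟩⟨x| can be formed with np.outer, taking care to conjugate the bra side; the result is an (n×n) matrix, i.e., an operator:

import numpy as np

x = np.array([1 + 1j, 0, 2j])          # a ket in C^3

ketbra = np.outer(x, np.conj(x))       # |x><x| : column times conjugated row
print(ketbra.shape)                    # (3, 3) -- an operator on the space

# Acting on an arbitrary vector y, |x><x| y = x * <x|y>
y = np.array([3, 1j, 1])
print(np.allclose(ketbra @ y, x * np.vdot(x, y)))   # True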
2.6 Orthonormal Basis and Completeness Relations
We saw that any set of n linearly independent vectors {|v1⟩, |v2⟩, |v3⟩, . . . , |vn⟩} forms a basis for an n-dimensional vector space. We impose the condition that ⟨vi|vj⟩ = δij. The basis that satisfies this condition is called an orthonormal basis. The orthonormal basis is conventionally represented by {|e1⟩, |e2⟩, |e3⟩, . . . , |en⟩}. The basis has some useful properties. Consider a vector |x⟩ in the vector space spanned by the basis {ei}. We can use this orthonormality property to get the coefficients multiplying the basis states:
|x⟩ = ∑_i ci |ei⟩
Multiplying both sides by ⟨ej|:  ⟨ej|x⟩ = ∑_i ci ⟨ej|ei⟩
Since ⟨ej|ei⟩ = δji, only the i = j term survives, so cj = ⟨ej|x⟩.
Therefore, to get the coefficient multiplying the jth (orthonormal) basis state in the basis expansion of a vector, we take the inner product of this basis state with the vector. Now, we can look at some results involving the outer products of the basis states. The outer product leads to an operator. Now, let us take the sum of the outer products of the basis states, ∑_i |ei⟩⟨ei|, and act with it on an arbitrary vector |x⟩:
(∑_i |ei⟩⟨ei|) |x⟩ = ∑_i |ei⟩⟨ei|x⟩ = ∑_i ci |ei⟩ = |x⟩.
Now, looking at the form of this operator, we can easily guess that the operator is the identity operator. Therefore:
∑_i |ei⟩⟨ei| = I    (2.9)
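A quick numerical check of the completeness relation (2.9), using an orthonormal basis obtained from a random unitary (an illustrative construction, not part of the notes):

import numpy as np

rng = np.random.default_rng(0)
n = 4

# Columns of a unitary matrix form an orthonormal basis {|e_i>}.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q, _ = np.linalg.qr(A)                       # Q is unitary
basis = [Q[:, i] for i in range(n)]

# Sum of outer products |e_i><e_i| should equal the identity.
completeness = sum(np.outer(e, np.conj(e)) for e in basis)
print(np.allclose(completeness, np.eye(n)))  # True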
2.7 Projection Operator
Consider the operator |ek⟩⟨ek|. When this operator acts on any vector, it gives the coefficient of |ek⟩ in the {ei} basis expansion of that vector, multiplied by |ek⟩. In other words, it takes any vector and projects it along the |ek⟩ direction. The projection occurs through the formation of the inner product of the bra ⟨ek| with the arbitrary ket vector. Hence this operator |ek⟩⟨ek| is called the projection operator. It is denoted by Pk.
The projection operator satisfies some important properties:
(Pk)^n = Pk    (2.10)
Pi Pj = 0  for i ≠ j    (2.11)
∑_i Pi = I    (2.12)
The first statement (2.10) follows by expanding the power and forming inner products: using the fact that ⟨ek|ek⟩ = 1, we have (Pk)^n = |ek⟩⟨ek|ek⟩ . . . ⟨ek|ek⟩⟨ek| = |ek⟩⟨ek| = Pk, since only the terminal |ek⟩ and ⟨ek| remain after all the inner factors contract to 1.
The second statement (2.11) can be proved by considering Pi Pj = |ei⟩⟨ei|ej⟩⟨ej|. For i ≠ j the inner product ⟨ei|ej⟩ = 0, so the entire expression reduces to 0. Hence the statement is proved.
The third statement (2.12) was already proved in (2.9), when we were looking at the orthonormal basis and completeness relation.
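An illustrative numpy check of the projector properties (2.10)-(2.12), reusing an orthonormal basis from a QR factorization (the specific matrices are assumptions made for the example):

import numpy as np

rng = np.random.default_rng(1)
n = 3
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Projectors P_k = |e_k><e_k| onto each basis direction.
P = [np.outer(Q[:, k], np.conj(Q[:, k])) for k in range(n)]

print(np.allclose(P[0] @ P[0], P[0]))                 # (P_k)^2 = P_k
print(np.allclose(P[0] @ P[1], np.zeros((n, n))))     # P_i P_j = 0 for i != j
print(np.allclose(sum(P), np.eye(n)))                 # sum_i P_i = I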
2.8 Gram-Schmidt Orthonormalization
Suppose we are given a set of linearly independent vectors {|vi⟩} that is not orthonormal; we wish to construct an orthonormal basis {|Oi⟩} corresponding to the set {vi}. By definition of an orthonormal basis, we need ⟨Oi|Oi⟩ = 1; that is, the norm of each vector must be 1. So, we try to construct such a basis. We define
|Oi⟩ = |vi⟩ / || |vi⟩ ||    (2.13)
Our set of vectors is now of unit norm, so we need to construct orthogonal vectors out of these. Briefly, our idea is the following; we illustrate the process with vectors in a 2-dimensional real vector space, which can be represented on a plane.
1. Without any loss of generality, take the first vector of the given set, |v1⟩, as the first orthonormal vector |O1⟩. Hence, we have |O1⟩ = |v1⟩.
Figure 2.1: |v1⟩ and |v2⟩ are the given basis vectors; choose |O1⟩ = |v1⟩.
2. Take the second vector |v2⟩ of the given set, and take the projection of |v2⟩ along |O1⟩ (take the projection by multiplying |O1⟩⟨O1| with |v2⟩). Now, we have a vector that lies along |O1⟩. Call this vector |PO1⟩.
3. Now, subtract |PO1⟩ from |v2⟩ (using the triangle law of vector addition) to get |O2⟩ (up to normalization).
Figure 2.3: Diagram for step 3: get |O2⟩ = |v2⟩ − |PO1⟩. The dotted line indicates that we are subtracting vectors using the triangle law of addition.
Figure 2.4: Final diagram: |O1⟩ and |O2⟩ are formed.
5. Similarly, we go on to construct all the {|Oi⟩}. In the ith step, we subtract from |vi⟩ its projections onto all the previously constructed vectors |O1⟩, . . . , |Oi−1⟩.
So, summing up, we got |O2⟩ by performing |v2⟩ − |PO1⟩, and we got |PO1⟩ by performing (|O1⟩⟨O1|)|v2⟩. Hence, we can write:
|Oj⟩ = (|vj⟩ − ∑_{i=1}^{j−1} |Oi⟩⟨Oi|vj⟩) / || |vj⟩ − ∑_{i=1}^{j−1} |Oi⟩⟨Oi|vj⟩ || ,  where 1 ≤ j ≤ n    (2.14)
The denominator normalizes the resultant vector (setting its norm to 1); n is the dimensionality of the vector space spanned by the given basis.
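A minimal numpy sketch of the Gram-Schmidt procedure in equation (2.14) (the function and variable names are mine, not from the notes):

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors, as in eq. (2.14)."""
    ortho = []
    for v in vectors:
        # Subtract the projections onto all previously constructed vectors.
        w = v - sum(np.vdot(o, v) * o for o in ortho)
        ortho.append(w / np.linalg.norm(w))   # normalize the remainder
    return ortho

# Illustrative input: three linearly independent vectors in C^3.
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1j]),
      np.array([0.0, 1.0, 1.0])]
os_ = gram_schmidt(vs)

G = np.array([[np.vdot(a, b) for b in os_] for a in os_])
print(np.allclose(G, np.eye(3)))   # orthonormal: <O_i|O_j> = delta_ij -> True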
2.9 Linear Operators
A linear operator A between vector spaces V and U is a linear map A : V → U. From this definition we see that this function assigns a value in U to each element of V. If V and U are n- and m-dimensional respectively, then the function will transform each dimension out of n to a
dimension out of m. In other words, if V is spanned by the basis {|v1⟩, |v2⟩, |v3⟩, . . . , |vn⟩} and U is spanned by {|u1⟩, |u2⟩, |u3⟩, . . . , |um⟩}, then for every i in 1 to n, there exist complex numbers A1i to Ami such that
A|vi⟩ = ∑_{j=1}^{m} Aji |uj⟩
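As an illustrative numpy sketch (assuming the standard bases of C^n and C^m, which is my simplification), the numbers Aji are just the entries of the matrix of A, and A|vi⟩ reproduces the ith column:

import numpy as np

n, m = 3, 2
A = np.array([[1, 2j, 0],
              [3, 1, 1 - 1j]])          # an operator from C^3 to C^2

# Standard basis kets |v_i> of C^3 and |u_j> of C^2.
v = [np.eye(n)[:, i] for i in range(n)]
u = [np.eye(m)[:, j] for j in range(m)]

# A|v_i> = sum_j A_ji |u_j>, where A_ji is the (j, i) matrix entry.
i = 1
lhs = A @ v[i]
rhs = sum(A[j, i] * u[j] for j in range(m))
print(np.allclose(lhs, rhs))   # True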
2.10 Hermitian Matrices
The conjugate transpose of a matrix A is denoted by A†, where the relation (2.16) holds. A matrix A is called Hermitian iff A = A†. Since a Hermitian matrix has this property, we can see that its diagonal entries have to be real. A very similar definition holds for Hermitian operators as well: an operator is Hermitian if its matrix representation is Hermitian. The conjugate transpose of an operator is defined through ⟨u|A†|v⟩ = (⟨v|A|u⟩)∗. Therefore, if an operator is Hermitian,
⟨u|A|v⟩ = (⟨v|A|u⟩)∗    (2.17)
is satisfied. The definition of a conjugate transpose applies to vectors as well. We have:
(|x⟩)† ≡ (x1∗  x2∗  x3∗  . . .  xn∗) ≡ ⟨x|    (2.19)
(A + B)† = A† + B†    (2.20)
(cA)† = c∗ A†    (2.21)
(AB)† = B† A†    (2.22)
Let the matrix A have the elements {Aij} and the matrix B the elements {Bij}. If we prove that the above laws hold for an arbitrary element of A and B, then we have proved the laws in general.
For the first case, (A + B) has the elements (Aij + Bij). Let this sum be Cij. Now, C† has the elements (Cji)∗. By the laws of matrix addition,
(Cji)∗ = (Aji + Bji)∗ = (Aji)∗ + (Bji)∗,
which are precisely the elements of A† + B†. Now, since the law holds for the individual elements, it should also hold for the matrix or the operator. Therefore, we can say:
C† = (A + B)† = A† + B†
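An illustrative numpy check of the conjugate-transpose rules (2.20)-(2.22) on random complex matrices (the matrices themselves are arbitrary examples):

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
c = 2 - 0.5j

dag = lambda M: M.conj().T   # conjugate transpose

print(np.allclose(dag(A + B), dag(A) + dag(B)))       # (A + B)^† = A^† + B^†
print(np.allclose(dag(c * A), np.conj(c) * dag(A)))   # (cA)^† = c* A^†
print(np.allclose(dag(A @ B), dag(B) @ dag(A)))       # (AB)^† = B^† A^†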
2.11 Spectral Theorem
Theorem: For every normal operator M acting on a vector space V, there exists an orthonormal basis for V in which the operator M has a diagonal representation.
Proof: We prove this theorem by induction on the dimension of the vector space V. We know that for a one-dimensional vector space, any operator is diagonal. So, let M be a normal operator on a vector space V which is n-dimensional. Let λ be an eigenvalue of M with eigenvector |a⟩, so that M|a⟩ = λ|a⟩. Let P be the projector onto the λ eigenspace of M, and let Q be the complementary projector, so that (P + Q) = I.
We now make a series of manipulations that express the operator M, which acts on V, in terms of operators that act on subspaces of V. This is because we assume (the induction hypothesis) that the spectral theorem holds in the lower-dimensional spaces (subspaces), i.e., every normal operator on a subspace of V has a diagonal representation in some orthonormal basis for that subspace.
M = IMI
Using P + Q = I:
M = (P + Q)M(P + Q) = PMP + PMQ + QMP + QMQ
Now, from these terms, we need to identify those on the RHS which evaluate to 0. Take QMP: P projects a vector onto the λ eigenspace, i.e., P acting on a vector gives its component in the λ eigenspace (when the vector is expressed as a linear combination of basis vectors including the λ eigenvectors). M then acts on this component and, since it lies in the λ eigenspace, returns a vector that is still in the λ eigenspace. Q then projects onto the orthogonal complement of the λ eigenspace. Since after the action of P and M the result lies entirely in the λ eigenspace, which is orthogonal to its complement, the action of Q produces 0. Therefore, QMP = 0.
We can now look at the term PMQ. Recall that M is a normal matrix:
MM† = M†M
MM†|a⟩ = M†M|a⟩
⇒ M(M†|a⟩) = λ(M†|a⟩)
We see that M†|a⟩ is also an eigenvector of M with eigenvalue λ. Therefore, by the same argument as above, M† acting on an eigenvector of the λ eigenspace gives another vector in the λ eigenspace, and hence QM†P = 0. Taking the adjoint of this equation gives (QM†P)† = PMQ = 0 (as P and Q are Hermitian operators).
Therefore, we have:
M = PMP + QMQ
The operators on the RHS act on subspaces of V, and we have already assumed that the spectral theorem holds on those subspaces. So, if these operators are normal, then we can say that they are diagonal in some basis.
We can easily show that PMP is normal. Since P projects any vector onto the λ eigenspace, and M acts on that eigenspace simply as multiplication by λ, we have PMP = λP, and hence
(PMP)† = λ∗P    (2.24)
Now, since P is Hermitian, it is obviously normal, and a scalar multiple of a normal operator is normal. Therefore we conclude that PMP = λP is also normal.
We now show that QMQ is also normal. (Note: Q2 = Q and Q is Hermitian.) We also have:
QM = QMI = QM(P + Q) = QMP + QMQ = QMQ  (since QMP = 0)    (2.25)
Similarly: QM† = QM†Q    (2.26)
So, using the above equations, we can prove that QMQ is normal.
(QMQ)† = QM†Q
∴ (QMQ)†(QMQ) = QM†Q QMQ
since Q2 = Q:  = QM†QMQ
since QM†Q = QM†:  = QM†MQ
since MM† = M†M:  = QMM†Q
since QM = QMQ:  = QMQM†Q
since Q = Q2:  = QMQ QM†Q
Therefore:  = (QMQ)(QMQ)†
Therefore, QMQ is normal. Now, since PMP and QMQ are normal operators on the subspaces onto which P and Q project, they are diagonal in some orthonormal basis for those respective subspaces. Since PMP and QMQ are diagonal in those bases, their sum is also diagonal in the combined basis. Therefore, M = PMP + QMQ is diagonal in some basis for V. Hence, proved.
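An illustrative numpy check of the spectral theorem for a normal (but non-Hermitian) matrix, which I build here from a random unitary and complex eigenvalues; its eigenvectors form an orthonormal basis in which it is diagonal:

import numpy as np

rng = np.random.default_rng(3)
n = 4

# Build a normal matrix M = Q diag(d) Q^† with Q unitary and complex d.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
d = rng.normal(size=n) + 1j * rng.normal(size=n)
M = Q @ np.diag(d) @ Q.conj().T

print(np.allclose(M @ M.conj().T, M.conj().T @ M))   # M is normal

# Its eigenvectors give an orthonormal basis in which M is diagonal.
w, U = np.linalg.eig(M)
print(np.allclose(U.conj().T @ U, np.eye(n)))        # orthonormal eigenbasis
D = U.conj().T @ M @ U
print(np.allclose(D, np.diag(np.diag(D))))           # diagonal representation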
2.12 Operator Functions
It is possible to extend the notion of functions defined over the complex numbers to functions defined over operators. It is necessary that these operators are normal. A function of a normal matrix or a normal operator is defined in the following way: let A be some normal operator with spectral decomposition A = ∑_a a|a⟩⟨a|; then:
f(A) = ∑_a f(a)|a⟩⟨a|    (2.27)
So, in the above equation, we represent the operator in its diagonal form and then apply the function to each diagonal entry. We can try to prove the above equation for special functions. Take for example A^n for some positive integer n. Using the orthonormality of the eigenstates, ⟨a|a′⟩ = δaa′, we get
A^n = (∑_a a|a⟩⟨a|)^n = ∑_a a^n |a⟩⟨a|,
which is f(A) for f(a) = a^n. Hence, we prove equation (eq. 2.27) for the special case of the function being a power operation.
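An illustrative numpy sketch of equation (2.27): apply a function to a Hermitian (hence normal) matrix through its eigendecomposition, and compare the power case against repeated matrix multiplication (the example matrix is arbitrary):

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = X + X.conj().T            # Hermitian, hence normal

def operator_function(A, f):
    """f(A) = sum_a f(a) |a><a|, using the spectral decomposition of A."""
    w, V = np.linalg.eigh(A)                  # eigenvalues a and eigenvectors |a>
    return V @ np.diag(f(w)) @ V.conj().T

# Check the power case f(a) = a^3 against direct multiplication.
print(np.allclose(operator_function(A, lambda a: a**3), A @ A @ A))   # True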
1. Why must we, in general, consider only normal operators, or operators having a spectral decomposition?
2. How do we prove equation (eq. 2.27) for the case of a square-root or a logarithm operation, that is, f(A) = √A and f(A) = log A?
2.12.1 Trace
The trace of a matrix is also a function of the matrix. The trace of a matrix A is defined as the sum of all the diagonal elements of A (A need not be a diagonal matrix). The trace can be defined using the matrix notation as well as the outer-product form:
tr(A) = ∑_i Aii    (2.30)
tr(A) = ∑_i ⟨i|A|i⟩    (2.31)
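A small numpy check that the two forms (2.30) and (2.31) agree, using the standard basis |i⟩ (the example matrix is arbitrary):

import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

trace_diag = np.trace(A)                                  # sum_i A_ii
basis = np.eye(4)
trace_braket = sum(np.vdot(basis[:, i], A @ basis[:, i])  # sum_i <i|A|i>
                   for i in range(4))
print(np.allclose(trace_diag, trace_braket))              # True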
2.13 Simultaneous Diagonalizable Theorem
On subtracting the above two equations, (eq. 2.32) − (eq. 2.33), we get
[A, B] = 0.    (2.34)
We have thus shown that A and B commute if they have a common eigenbasis; together with the converse proved below, this establishes the simultaneous diagonalizable theorem.
Proof of the converse: Let |a, j⟩ be the eigenstates of the operator A with eigenvalue a, where the index j labels the degeneracy. Let the eigenstates with eigenvalue a span the vector space Va, and let the projection operator onto the Va eigenspace be called Pa. Now let us assume that [A, B] = 0. Therefore, we have:
A(B|a, j⟩) = BA|a, j⟩ = aB|a, j⟩    (2.35)
Therefore, B|a, j⟩ is also an element of the eigenspace Va. Let us define an operator
Ba = Pa BPa    (2.36)
We can now see how Ba acts on an arbitrary vector. From definition (def. 2.36), Pa cuts off all the components of a vector which do not belong to the Va eigenspace; the vector produced by the action of Pa lies in the Va eigenspace. By equation (2.35), the action of B on any vector inside the Va eigenspace produces another vector inside the Va eigenspace, so the second action of Pa leaves that vector unchanged. Now we can see how B†a acts on an arbitrary vector. We have B†a = Pa B† Pa. When an arbitrary vector is acted upon by B†a, the projection operator Pa first produces a vector that lies entirely in the Va eigenspace. The action of B† then produces some vector (which need not lie in Va). The projection operator Pa acts again on this vector (produced by B†), taking it back to a vector that lies entirely in the Va eigenspace.
Therefore, summing up, the action of Ba and of B†a on any arbitrary vector each produce a vector in the Va eigenspace. Therefore, restricted to the subspace Va, Ba and B†a can be regarded as operators acting entirely within Va.
From equation (eq. 2.35), we also have that B|a, b, k⟩ is an element of the space Va. So, similarly, we can say:
Pa B|a, b, k⟩ = B|a, b, k⟩
We can now modify this equation by replacing |a, b, k⟩ on the LHS with Pa|a, b, k⟩ (refer to equation (eq. 2.37)):
Pa BPa|a, b, k⟩ = B|a, b, k⟩    (2.38)
Comparing the LHS of the above equation with equation (eq. 2.36), we see that it is the same as Ba|a, b, k⟩. Since |a, b, k⟩ is an eigenstate of Ba with eigenvalue b, we can rewrite equation (eq. 2.38) as:
B|a, b, k⟩ = b|a, b, k⟩    (2.39)
Therefore, we see from the above equation that |a, b, k⟩ is also an eigenstate of B, with eigenvalue b. Hence the set of vectors |a, b, k⟩ forms a common eigenbasis for A and B, and we have proved that if [A, B] = 0, then there exists an orthonormal basis in which A and B are both diagonal.
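An illustrative numpy construction of two commuting Hermitian matrices that share an eigenbasis (the construction via a common unitary is my example, not the notes' proof):

import numpy as np

rng = np.random.default_rng(6)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Two Hermitian matrices built from the same orthonormal eigenbasis Q.
A = Q @ np.diag(rng.normal(size=n)) @ Q.conj().T
B = Q @ np.diag(rng.normal(size=n)) @ Q.conj().T

print(np.allclose(A @ B, B @ A))                      # [A, B] = 0

# In the common basis Q, both A and B are diagonal.
for M in (A, B):
    D = Q.conj().T @ M @ Q
    print(np.allclose(D, np.diag(np.diag(D))))        # True, True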
2.14 Polar Decomposition
Theorem: For every matrix A, there exist a unitary matrix U and positive (semidefinite) matrices J and K such that:
A = UJ = KU    (2.40)
In other words, every matrix A admits both a right polar decomposition UJ and a left polar decomposition KU, with the same unitary U.
Proof: Consider the operator J = √(A†A). By construction, the operator J is positive, and hence Hermitian. Therefore, there exists a spectral decomposition for J (involving its eigenvalues and eigenstates). So let J = ∑_i λi|i⟩⟨i|, where λi (≥ 0) and |i⟩ are the eigenvalues and the corresponding eigenstates of J.
Let us define
|ψi⟩ = A|i⟩.    (2.41)
From this, we can see that ⟨ψi|ψi⟩ = ⟨i|A†A|i⟩ = λi². Now, consider all the non-zero eigenvalues, that is, all λi ≠ 0, and let
|ei⟩ = |ψi⟩ / λi    (2.42)
Therefore, we have ⟨ei|ej⟩ = ⟨ψi|ψj⟩ / (λi λj) = δij (since all λi's are real). We are still considering the λi ≠ 0
case. Let us now construct an orthonormal basis using the Gram-Schmidt orthonormalization technique, by extending the set {|ei⟩} (defined for λi ≠ 0) to a complete orthonormal basis of the space.
If A (and hence J) is invertible, U can be obtained by post-multiplying A = UJ by J−1. We have U = AJ−1; so, if A is invertible, then the U is also uniquely defined for every A.
We can also do the left polar decomposition, starting from equation (eq. 2.44). On post-multiplying the RHS
with U† U (since U is unitary, this will preserve the equality) we get:
A = UJU† U
Now, let UJU† = K. So, we can rewrite the above equation as:
A = KU (2.47)
This now gives us the left polar decomposition of A. We can also show that the matrix K is uniquely defined for every A. For this, we multiply equation (eq. 2.47) with its adjoint. We get
AA† = KUU†K†
Since U is unitary and K is Hermitian, this gives AA† = K², and hence
K = √(AA†)    (2.48)
Therefore, we see that K is uniquely defined. Similarly, we can show that if A is invertible, then the matrix U is uniquely defined.
Therefore, combining both the left and right parts, we have proved the polar decomposition.
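An illustrative numpy sketch of the right polar decomposition A = UJ for an invertible matrix, computing J = √(A†A) via the eigendecomposition and U = AJ⁻¹ (the helper names are mine, and invertibility of A is assumed):

import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # generically invertible

def polar_right(A):
    """Return (U, J) with A = U J, J = sqrt(A^† A) positive, U unitary."""
    w, V = np.linalg.eigh(A.conj().T @ A)       # A^† A is positive semidefinite
    w = np.clip(w, 0, None)                     # guard against tiny negative rounding
    J = V @ np.diag(np.sqrt(w)) @ V.conj().T    # J = sqrt(A^† A)
    U = A @ np.linalg.inv(J)                    # U = A J^{-1} (A assumed invertible)
    return U, J

U, J = polar_right(A)
print(np.allclose(U @ J, A))                          # A = U J
print(np.allclose(U.conj().T @ U, np.eye(3)))         # U unitary
print(np.allclose(J, J.conj().T))                     # J Hermitian (and positive)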
2.15 Singular Value Decomposition
2.15.1 Proving the theorem for the special case of square matrices
1. If A and B are two unitary matrices, then their product AB is also a unitary matrix.
Proof: To prove (AB)†(AB) = I, given A†A = AA† = B†B = BB† = I:
(AB)†(AB) = B†A†AB = B†IB = B†B = I
∴ (AB)†(AB) = I
Proof: From the polar decomposition of A, we know that there exist a unitary matrix S and a positive operator J such that A = SJ. Since J is Hermitian (it is defined as √(A†A); refer to equation (eq. 2.45)), we know that there exists a spectral decomposition for J. So we have some unitary matrix T and a diagonal matrix D (with non-negative entries) such that J = TDT†. Now, we can rewrite A as:
A = SJ = STDT†
Now, since S and T are unitary, from assumption (1) we can define another unitary operator U = ST. Also, since T is unitary (which implies that T† is unitary too), we can define another new unitary operator V = T†. Putting all the definitions together, we have:
A = STDT† = UDV    (2.50)
The above equation (eq. 2.50) now proves the singular value decomposition theorem.
Proof: Before getting into this proof, we first need to make some assumptions.
Let us now construct the Hermitian operator A†A and consider its eigenvalue equation, with eigenvalues {λi} and corresponding eigenvectors {|λi⟩}. We also claim (from assumption no. 2) that the eigenvalues {λi} are all real and non-negative. That is, the eigenvalues {λi} are all > 0 for 0 < i ≤ r, while the {λi} for (r+1) ≤ i ≤ n are all zero. Therefore, the set of eigenvectors {|λ1⟩, |λ2⟩, |λ3⟩, . . . , |λr⟩} is orthogonal to the set of eigenvectors {|λr+1⟩, |λr+2⟩, |λr+3⟩, . . . , |λn⟩}, as they correspond to different eigenvalues (see assumption 3).
From equation (eq. 2.55), we see that the first r columns (|µ1⟩, |µ2⟩, |µ3⟩, . . . , |µr⟩) of the matrix UD can be simplified; the columns after |µr⟩ can be left unchanged.
∴ UD = ( A|λ1⟩  A|λ2⟩  A|λ3⟩  . . .  A|λr⟩  λr+1|µr+1⟩  . . .  λm|µm⟩ )    (2.58)
⇒ UDV† = ( A|λ1⟩  A|λ2⟩  . . .  A|λr⟩  λr+1|µr+1⟩  . . .  λm|µm⟩ ) ( ⟨λ1| ; ⟨λ2| ; . . . ; ⟨λr| ; ⟨λr+1| ; . . . ; ⟨λn| )    (2.59)
where the first factor is the matrix with the listed columns and the second factor is the matrix whose rows are the bras ⟨λi|. Multiplying out,
∴ UDV† = A|λ1⟩⟨λ1| + A|λ2⟩⟨λ2| + · · · + A|λr⟩⟨λr| + λr+1|µr+1⟩⟨λr+1| + · · · + λm|µm⟩⟨λm|    (2.60)
Since we assumed earlier that the eigenvalues {λi} are all zero for i > r, we can ignore the terms in equation (eq. 2.60) that succeed A|λr⟩⟨λr|:
UDV† = A ∑_{i=1}^{r} |λi⟩⟨λi|    (2.61)
Since all the eigenvalues after λr are 0, we have ||A|λi⟩||² = ⟨λi|A†A|λi⟩ = λi = 0, and hence A|λi⟩ = 0 for r < i ≤ n. If A|λi⟩ = 0, then A|λi⟩⟨λi| = 0. So, we can write the above equation (eq. 2.61) as:
UDV† = A ∑_{i=1}^{r} |λi⟩⟨λi| + ∑_{i=r+1}^{n} A|λi⟩⟨λi|    (2.62)
⇒ UDV† = A ∑_{i=1}^{n} |λi⟩⟨λi|    (2.63)
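An illustrative numpy check of the singular value decomposition for a square complex matrix, A = UDV† with U, V unitary and D diagonal with non-negative entries (np.linalg.svd returns V† directly; the example matrix is arbitrary):

import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

U, s, Vh = np.linalg.svd(A)            # A = U diag(s) V^†, with s >= 0
D = np.diag(s)

print(np.allclose(U @ D @ Vh, A))                      # A = U D V^†
print(np.allclose(U.conj().T @ U, np.eye(4)))          # U unitary
print(np.allclose(Vh @ Vh.conj().T, np.eye(4)))        # V unitary
print(np.all(s >= 0))                                  # non-negative singular values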