
Quantum Chemistry II: Math Introduction

Albeiro Restrepo
May 27, 2009

Contents
1 Operator matrix representation

2 Matrix algebra ↔ operator algebra isomorphism

3 Yet another switch over rule

4 Complete basis sets


1 Operator matrix representation


Let's say that an operator $O$ has a solution space $V$. Any ket belonging to $V$ can be written as a linear combination of the $n$ kets of some basis set $\{|i\rangle\} = \{|i\rangle, |j\rangle, |k\rangle, \ldots, |n\rangle\}$ or of the $n$ kets of any other basis set of the same space $\{|\alpha\rangle\} = \{|\alpha\rangle, |\beta\rangle, |\gamma\rangle, \ldots, |\eta\rangle\}$. In vector language, any vector belonging to $V$ can be written as a linear combination of the $n$ vectors of some basis set $\{\hat e_i\} = \{\hat e_i, \hat e_j, \hat e_k, \ldots, \hat e_n\}$ or of the vectors of any other basis set of the same space $\{\hat e_\alpha\} = \{\hat e_\alpha, \hat e_\beta, \hat e_\gamma, \ldots, \hat e_\eta\}$.

Operating with $O$ over one of the vectors of $\{\hat e_i\}$ will result in another vector of $V$, say $\vec v_i$, which in turn can be written as a linear combination of the vectors of $\{\hat e_i\}$ (including $\hat e_i$ itself):

$$O\hat e_i = \vec v_i = \sum_{j=1}^{n} O_{ji}\,\hat e_j \qquad (1)$$

where the $O_{ji}$, the $n$ expansion coefficients, are numbers (real or complex); $j$ is the summation index, i.e., it labels the contribution from each individual basis vector to $\vec v_i$, while $i$ labels the vector that was acted upon with $O$. We now tackle the issue of the nature of the $O_{ji}$. Let's take the scalar product of equation 1 with any other basis vector ($\hat e_k$ for instance):

 
$$\hat e_k \cdot (O\hat e_i) = \hat e_k \cdot \vec v_i = \hat e_k \cdot \left( \sum_{j=1}^{n} O_{ji}\,\hat e_j \right) = \sum_{j=1}^{n} O_{ji}\,\hat e_k \cdot \hat e_j = \sum_{j=1}^{n} O_{ji}\,\delta_{kj} = O_{ki}$$

$$\Longrightarrow \hat e_k \cdot (O\hat e_i) = \hat e_k \cdot O\hat e_i = O_{ki} \qquad (2)$$

where I used the orthonormalization condition of the basis vectors, $\hat e_k \cdot \hat e_j = \delta_{kj}$, and the fact that out of the $n$ terms in the sum, only the one for which $j = k$ survives the action of Kronecker's delta; all the others vanish. Equation 2 tells us that the expansion coefficients are not arbitrary: they are fixed by the choice of basis set. For example, in the basis $\{|\alpha\rangle\}$, the expansion coefficients are given by $\hat e_\alpha \cdot O\hat e_\beta = \Omega_{\alpha\beta}$, which are clearly different numbers from the $O_{ji}$. In Dirac's notation we have:

$$O|i\rangle = |v_i\rangle = \sum_{j=1}^{n} O_{ji}\,|j\rangle$$

$$\langle k|O|i\rangle = \langle k| \left( \sum_{j=1}^{n} O_{ji}\,|j\rangle \right) = \sum_{j=1}^{n} O_{ji}\,\langle k|j\rangle = \sum_{j=1}^{n} O_{ji}\,\delta_{kj} = O_{ki} \Longrightarrow O_{ki} = \langle k|O|i\rangle$$

For a set containing $n$ elements, there are a total of $n \times n = n^2$ possible binary combinations, thus we have a total of $n^2$ expansion coefficients obtained from $O_{ki} = \langle k|O|i\rangle = \hat e_k \cdot O\hat e_i$. If we write them in a square array and label rows and columns, we have $\mathbf{O}$, the matrix representation of $O$ in the basis $\{|i\rangle\} = \{\hat e_i\}$:

 
$$\mathbf{O} = \begin{pmatrix} O_{11} & O_{12} & \cdots & O_{1n} \\ O_{21} & O_{22} & \cdots & O_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ O_{n1} & O_{n2} & \cdots & O_{nn} \end{pmatrix}$$

$$\mathbf{O} = \begin{pmatrix} \hat e_1 \cdot O\hat e_1 & \hat e_1 \cdot O\hat e_2 & \cdots & \hat e_1 \cdot O\hat e_n \\ \hat e_2 \cdot O\hat e_1 & \hat e_2 \cdot O\hat e_2 & \cdots & \hat e_2 \cdot O\hat e_n \\ \vdots & \vdots & \ddots & \vdots \\ \hat e_n \cdot O\hat e_1 & \hat e_n \cdot O\hat e_2 & \cdots & \hat e_n \cdot O\hat e_n \end{pmatrix} = \begin{pmatrix} \langle 1|O|1\rangle & \langle 1|O|2\rangle & \cdots & \langle 1|O|n\rangle \\ \langle 2|O|1\rangle & \langle 2|O|2\rangle & \cdots & \langle 2|O|n\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle n|O|1\rangle & \langle n|O|2\rangle & \cdots & \langle n|O|n\rangle \end{pmatrix}$$

In the basis $\{|\alpha\rangle\} = \{\hat e_\alpha\}$, the matrix representation of $O$ becomes

 
$$\boldsymbol{\Omega} = \begin{pmatrix} \Omega_{\alpha\alpha} & \Omega_{\alpha\beta} & \cdots & \Omega_{\alpha\eta} \\ \Omega_{\beta\alpha} & \Omega_{\beta\beta} & \cdots & \Omega_{\beta\eta} \\ \vdots & \vdots & \ddots & \vdots \\ \Omega_{\eta\alpha} & \Omega_{\eta\beta} & \cdots & \Omega_{\eta\eta} \end{pmatrix}$$

$$\boldsymbol{\Omega} = \begin{pmatrix} \hat e_\alpha \cdot O\hat e_\alpha & \hat e_\alpha \cdot O\hat e_\beta & \cdots & \hat e_\alpha \cdot O\hat e_\eta \\ \hat e_\beta \cdot O\hat e_\alpha & \hat e_\beta \cdot O\hat e_\beta & \cdots & \hat e_\beta \cdot O\hat e_\eta \\ \vdots & \vdots & \ddots & \vdots \\ \hat e_\eta \cdot O\hat e_\alpha & \hat e_\eta \cdot O\hat e_\beta & \cdots & \hat e_\eta \cdot O\hat e_\eta \end{pmatrix} = \begin{pmatrix} \langle\alpha|O|\alpha\rangle & \langle\alpha|O|\beta\rangle & \cdots & \langle\alpha|O|\eta\rangle \\ \langle\beta|O|\alpha\rangle & \langle\beta|O|\beta\rangle & \cdots & \langle\beta|O|\eta\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle\eta|O|\alpha\rangle & \langle\eta|O|\beta\rangle & \cdots & \langle\eta|O|\eta\rangle \end{pmatrix}$$
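The recipe $O_{ki} = \hat e_k \cdot O\hat e_i$ can be checked numerically. A minimal plain-Python sketch (the 90° rotation operator in the plane is my own assumed example, not from these notes):

```python
# Sketch: matrix representation O_ki = e_k . (O e_i), as in equation 2.
# The operator (90-degree rotation in the plane) is an assumed example.

def O(v):
    """Apply the operator: rotate (x, y) by 90 degrees -> (-y, x)."""
    x, y = v
    return [0.0 - y, x]

def dot(u, v):
    """Euclidean scalar product."""
    return sum(ui * vi for ui, vi in zip(u, v))

basis = [[1.0, 0.0], [0.0, 1.0]]   # orthonormal basis {e_1, e_2}

# O_ki = e_k . (O e_i): rows labeled by k, columns by i
Omat = [[dot(e_k, O(e_i)) for e_i in basis] for e_k in basis]
print(Omat)   # [[0.0, -1.0], [1.0, 0.0]]
```

Repeating the same recipe with a different orthonormal basis (say, the same two vectors rotated by 45°) would produce a different matrix for the same operator, exactly the $\mathbf{O}$ versus $\boldsymbol{\Omega}$ distinction above.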

2 Matrix algebra ↔ operator algebra isomorphism


A nice consequence of dealing with matrix representations of functions (vectors) and operators is that there is an isomorphism between matrix algebra and operator algebra. For example, consider a vector $\vec a$ in the solution space $V$ of some operator $O$; acting upon $\vec a$ with $O$ produces $\vec b$, another vector in $V$. Both $\vec a$ and $\vec b$ can be written as linear combinations of the $n$ vectors of some basis $\{|i\rangle\} = \{\hat e_i\}$ for $V$:

$$O\vec a = \vec b, \qquad \vec a = \sum_{i=1}^{n} a_i\,\hat e_i, \qquad \vec b = \sum_{j=1}^{n} b_j\,\hat e_j$$

or, in Dirac’s notation,

$$O|a\rangle = |b\rangle, \qquad |a\rangle = \sum_{i=1}^{n} a_i\,|i\rangle, \qquad |b\rangle = \sum_{j=1}^{n} b_j\,|j\rangle$$

Let's explore the relationship between the sets of coefficients $\{a_i\}$ and $\{b_j\}$:

$$\langle k|b\rangle = \langle k|O|a\rangle = \sum_{j=1}^{n} b_j\,\langle k|j\rangle = \sum_{j=1}^{n} b_j\,\delta_{kj} = b_k$$

$$\langle k|O|a\rangle = \sum_{i=1}^{n} a_i\,\langle k|O|i\rangle = \sum_{i=1}^{n} a_i\,O_{ki} = \sum_{i=1}^{n} O_{ki}\,a_i \Longrightarrow \sum_{i=1}^{n} O_{ki}\,a_i = b_k$$

which is exactly the definition of the matrix product between $\mathbf{O}$, the $n \times n$ matrix representation of $O$, and $\mathbf{a}$, the $n \times 1$ matrix representation of $\vec a$, to produce $\mathbf{b}$, the $n \times 1$ matrix representation of $\vec b$; all matrix representations are in the $\{|i\rangle\} = \{\hat e_i\}$ basis. We started by operating with $O$ on any vector of $V$ to produce another vector of $V$, then concluded that matrix multiplication of the matrix representations of the involved vectors and operator yields exactly the same results, hence explicitly showing the isomorphism between operator algebra and the algebra of matrix representations:

$$O\vec a = \vec b \;\leftrightarrow\; O|a\rangle = |b\rangle \;\leftrightarrow\; \mathbf{O}\mathbf{a} = \mathbf{b} \qquad (3)$$
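Equation 3 can be spot-checked with a toy case. The matrix and vector below are assumed values for illustration only; the component formula $b_k = \sum_i O_{ki} a_i$ is the one derived above:

```python
# Sketch: the matrix side of the isomorphism O a = b (equation 3).

def matvec(M, v):
    """b_k = sum_i M_ki v_i -- the component formula derived above."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

O = [[0.0, -1.0],
     [1.0,  0.0]]      # matrix representation of some operator (assumed example)
a = [3.0, 4.0]         # n x 1 matrix representation of the vector a

b = matvec(O, a)       # n x 1 matrix representation of the vector b
print(b)   # [-4.0, 3.0]
```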

3 Yet another switch over rule


Let's begin again by writing any ket $|b\rangle$ in $V$ as a linear combination of the kets of the basis $\{|i\rangle\}$ and finding the component along a given direction:

$$|b\rangle = \sum_{j=1}^{n} b_j\,|j\rangle \Longrightarrow \langle i|b\rangle = \sum_{j=1}^{n} b_j\,\langle i|j\rangle = \sum_{j=1}^{n} b_j\,\delta_{ij} = b_i$$

Now, in bra space,

$$\langle b| = \sum_{j=1}^{n} b_j^*\,\langle j| \Longrightarrow \langle b|i\rangle = \sum_{j=1}^{n} b_j^*\,\langle j|i\rangle = \sum_{j=1}^{n} b_j^*\,\delta_{ji} = b_i^*$$

Since $\langle i|b\rangle = b_i$ and $\langle b|i\rangle = b_i^*$, we can say that $\langle i|b\rangle = \langle b|i\rangle^*$, or more generally,

$$\langle a|b\rangle = \langle b|a\rangle^* \qquad (4)$$

which, of course, for Hermitian operators ($O = O^\dagger$, so that $O|b\rangle = |c\rangle \Longrightarrow \langle b|O^\dagger = \langle c|$) leads to

$$\langle a|O|b\rangle = \langle a|c\rangle = \langle c|a\rangle^* = \langle b|O^\dagger|a\rangle^* = \langle b|O|a\rangle^* \qquad (5)$$
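Equation 5 is easy to verify numerically. The Hermitian matrix and the complex coefficient vectors below are my own assumed example; the bracket is evaluated in a finite orthonormal basis as $\langle a|O|b\rangle = \sum_{k,i} a_k^*\, O_{ki}\, b_i$:

```python
# Sketch: <a|O|b> = <b|O|a>* for a Hermitian O (equation 5).

def braket(a, O, b):
    """<a|O|b> = sum_{k,i} a_k^* O_ki b_i in a finite orthonormal basis."""
    n = len(a)
    return sum(a[k].conjugate() * O[k][i] * b[i]
               for k in range(n) for i in range(n))

# A Hermitian 2x2 matrix: O[k][i] == O[i][k].conjugate()  (assumed values)
O = [[2.0 + 0.0j, 1.0 - 1.0j],
     [1.0 + 1.0j, 3.0 + 0.0j]]
a = [1.0 + 2.0j, 0.5 + 0.0j]
b = [2.0 - 1.0j, 1.0 + 1.0j]

lhs = braket(a, O, b)
rhs = braket(b, O, a).conjugate()
print(abs(lhs - rhs) < 1e-12)   # True
```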

4 Complete basis sets


A complete basis set spans the entire space; that is, any vector in the space can be written as a linear combination of the $n$ vectors of the basis. The basis vectors satisfy the orthonormality condition $\langle i|j\rangle = \delta_{ij}$. Let's investigate the properties of a complete basis set:

$$|a\rangle = \sum_{i=1}^{n} a_i\,|i\rangle = \sum_{i=1}^{n} |i\rangle a_i = \sum_{i=1}^{n} |i\rangle\langle i|a\rangle = \left( \sum_{i=1}^{n} |i\rangle\langle i| \right) |a\rangle$$

Whatever is inside the parentheses leaves $|a\rangle$ unchanged, therefore it must equal unity:

$$\sum_{i=1}^{n} |i\rangle\langle i| = 1 \qquad (6)$$

The left side of equation 6 is known as the dyadic of the vector space and is a testament to the completeness of the basis set: every vector in the basis must be included.

Very important note: For the sake of simplicity, I chose to write 1 for the sum in equation 6 because it resembles the multiplicative identity; however, it must be clear that the sum is not a number, as numbers are not being added! In fact, the sum in equation 6 comprises an entirely new and until now unknown (at least for most of you) kind of vector multiplication. So far you have heard of the scalar (dot, inner) product $\vec a \cdot \vec b$ and the cross product $\vec a \times \vec b$; dot products yield a number, while cross products yield a vector of the same dimension as the vectors being multiplied. Let's now focus on the matrix representation of vectors (bras and kets) of the space in question:

 
$$|a\rangle = a_1|1\rangle + a_2|2\rangle + \cdots + a_n|n\rangle \;\leftrightarrow\; \mathbf{a}_{n\times 1} = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$

$$\langle a| = a_1^*\langle 1| + a_2^*\langle 2| + \cdots + a_n^*\langle n| \;\leftrightarrow\; \mathbf{a}^\dagger_{1\times n} = \begin{pmatrix} a_1^* & a_2^* & \cdots & a_n^* \end{pmatrix}$$

Therefore, each vector product represented by $|a\rangle\langle a|$, in matrix representation, becomes a product $\mathbf{a}_{n\times 1}\mathbf{a}^\dagger_{1\times n}$, which is a new matrix of dimension $n \times n$, clearly not a number, and certainly not 1. It so happens that when the (orthonormal) basis vectors are considered, the resulting matrix is exactly $\mathbf{I}_{n\times n}$ (please take the time to prove it; it is very easy and it might come up in the exam!), therefore, upon multiplication, it leaves unchanged the matrix representation of any vector: $\mathbf{I}_{n\times n}\mathbf{a}_{n\times 1} = \mathbf{a}_{n\times 1}$.
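The claim that the basis dyads sum to $\mathbf{I}_{n\times n}$ (the proof being left to the reader) can at least be spot-checked numerically. A plain-Python sketch, assuming the standard basis of a 3-dimensional complex space:

```python
# Sketch: sum_i |i><i| = I, built from outer products of column vectors.

def outer(u, v):
    """|u><v|: the n x n matrix with entries u_k v_i^*."""
    return [[uk * vi.conjugate() for vi in v] for uk in u]

n = 3
# standard orthonormal basis of C^3 as column representations
basis = [[1.0 + 0j if k == i else 0j for i in range(n)] for k in range(n)]

S = [[0j] * n for _ in range(n)]
for e in basis:
    P = outer(e, e)                                       # the dyad |e><e|
    S = [[S[r][c] + P[r][c] for c in range(n)] for r in range(n)]

identity = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
print(S == identity)   # True
```

Dropping any one term from the loop leaves a projector onto a subspace instead of the identity, which is why every basis vector must be included.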

To illustrate the power of a complete basis set and of the dyadic of the vector space, let's work out the matrix representation of the sequential application of two operators. We already know the result: if there is an isomorphism between operator algebra and matrix algebra, then we had better obtain the multiplication of the matrix representations of the operators. Put in other words, we will prove the isomorphism $AB \leftrightarrow \mathbf{A}\mathbf{B}$. Take the sequential application $AB$ to be equal to the application of some other operator $C$, and begin with the matrix representation of $C$:

$$C_{ij} = \langle i|C|j\rangle = \langle i|AB|j\rangle = \langle i|A\,1\,B|j\rangle$$

$$C_{ij} = \langle i|A \left( \sum_{k=1}^{n} |k\rangle\langle k| \right) B|j\rangle = \sum_{k=1}^{n} \langle i|A|k\rangle\langle k|B|j\rangle$$

$$C_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj} \Longrightarrow \mathbf{C} = \mathbf{A}\mathbf{B}$$
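The isomorphism $AB \leftrightarrow \mathbf{A}\mathbf{B}$ can be confirmed numerically: applying $B$ and then $A$ to a vector gives the same result as applying the single matrix $\mathbf{C} = \mathbf{A}\mathbf{B}$ built from $C_{ij} = \sum_k A_{ik}B_{kj}$. The matrices and vector below are assumed toy values:

```python
# Sketch: sequential application A(B v) equals application of C = AB.

def matvec(M, v):
    """b_k = sum_i M_ki v_i."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    """C_ij = sum_k A_ik B_kj -- the formula derived via the dyadic."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, -1.0], [1.0, 0.0]]
v = [5.0, 6.0]

seq = matvec(A, matvec(B, v))    # operator algebra: apply B first, then A
one = matvec(matmul(A, B), v)    # matrix algebra: C = AB in one step
print(seq, one)   # [4.0, 2.0] [4.0, 2.0]
```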
