
On active and passive transformations

Costas J. Papachristou
Department of Physical Sciences, Hellenic Naval Academy, Piraeus, Greece
[email protected]

The concepts of active and passive transformations on a vector space are discussed.
Orthogonal coordinate transformations and matrix representations of linear operators
are considered in particular.

1. Introduction

A physical situation may appear to change for two reasons: the physical system itself
may pass from one state to another, or the same state of the system may be viewed
from two different points of view (e.g., by two different observers using different
frames of reference). The former case corresponds to an "active" view of the situation,
while the latter corresponds to a "passive" view.
Given that many physical quantities are vectors, linear transformations on vector
spaces are of particular interest in Physics. Starting with the prototype transformation
of rotation on a plane, we study both the active and the passive view of these
transformations. In the case of a Euclidean space with Cartesian coordinates, a passive
transformation corresponding to a change of basis is an orthogonal transformation. On
the other hand, an active transformation on a vector space is produced by a linear
operator, which is represented by a matrix in a given basis. A change of basis, leading to
a different representation, is a passive transformation on this space.

2. Active view of transformations

Consider the xy-plane with Cartesian coordinates (x, y) and basis unit vectors
$\{\hat u_x, \hat u_y\}$. We call $R(\theta)$ the rotation operator on this plane, i.e., the operator which
rotates any vector $\vec A$ on the plane by an angle θ (see Fig. 2.1; by convention, θ > 0 for
counterclockwise rotation while θ < 0 for clockwise rotation). This operator is linear,
given that adding two vectors and then rotating the sum is the same as first rotating
the vectors and then adding them.

Figure 2.1: A vector $\vec A$ in the xy-plane, with basis vectors $\hat u_x, \hat u_y$ at the origin O, is rotated by an angle θ into the vector $\vec A\,'$.


Imagine, in particular, that we rotate each vector in the basis $\{\hat u_x, \hat u_y\}$ by an angle
θ to obtain a new set of vectors $\{\hat u_{x'}, \hat u_{y'}\}$ (Fig. 2.2). The transformation equations
describing these rotations are

\[
\begin{aligned}
\hat u_{x'} &= R(\theta)\,\hat u_x = \cos\theta\,\hat u_x + \sin\theta\,\hat u_y \\
\hat u_{y'} &= R(\theta)\,\hat u_y = -\sin\theta\,\hat u_x + \cos\theta\,\hat u_y
\end{aligned}
\qquad (2.1)
\]

Figure 2.2: The basis vectors $\hat u_x, \hat u_y$ are rotated by an angle θ into the new basis vectors $\hat u_{x'}, \hat u_{y'}$.

Now, let $\vec A = A_x \hat u_x + A_y \hat u_y$ be a vector on the xy-plane (see Fig. 2.1). The rotation
operator $R(\theta)$ will transform it into a new vector

\[
\vec A\,' = R(\theta)\,\vec A = A_x'\,\hat u_x + A_y'\,\hat u_y \qquad (2.2)
\]

We want to express the components $A_x'$ and $A_y'$ in terms of $A_x$, $A_y$ and θ. By the
linearity of $R(\theta)$ and by using (2.1), we have:

\[
\begin{aligned}
\vec A\,' = R(\theta)\left(A_x \hat u_x + A_y \hat u_y\right) &= A_x R(\theta)\,\hat u_x + A_y R(\theta)\,\hat u_y \\
&= (A_x\cos\theta - A_y\sin\theta)\,\hat u_x + (A_x\sin\theta + A_y\cos\theta)\,\hat u_y
\end{aligned}
\]

By comparing this with (2.2), we get:

\[
\begin{aligned}
A_x' &= A_x\cos\theta - A_y\sin\theta \\
A_y' &= A_x\sin\theta + A_y\cos\theta
\end{aligned}
\qquad (2.3)
\]

We define the matrix

\[
M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \qquad (2.4)
\]


The systems (2.1) and (2.3) are then rewritten in the form of matrix equations as

\[
\begin{pmatrix} \hat u_{x'} \\ \hat u_{y'} \end{pmatrix} = M^T \begin{pmatrix} \hat u_x \\ \hat u_y \end{pmatrix}
\qquad\text{and}\qquad
\begin{pmatrix} A_x' \\ A_y' \end{pmatrix} = M \begin{pmatrix} A_x \\ A_y \end{pmatrix}
\qquad (2.5)
\]

respectively, where $M^T$ is the transpose of M.


  
We note that the vectors $\vec A$ and $\vec A\,' = R(\theta)\vec A$ are different geometrical objects, the
latter being a transformation of the former. On the other hand, the components of
these vectors, connected by (2.3), are referred to the same basis $\{\hat u_x, \hat u_y\}$. This is the
general idea of the active view of a linear transformation.
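
As a quick numerical illustration of the active view (a minimal sketch in Python/NumPy, not part of the original derivation): the matrix M of Eq. (2.4) acts on the column of components of $\vec A$ and returns the components (2.3) of the rotated vector $\vec A\,'$ in the same basis.

    import numpy as np

    def rotation_matrix(theta):
        """Matrix M of Eq. (2.4) for an active rotation by the angle theta."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]])

    theta = np.pi / 6                       # rotate by 30 degrees
    A = np.array([2.0, 1.0])                # components (Ax, Ay) in the fixed basis
    A_prime = rotation_matrix(theta) @ A    # components (Ax', Ay') of Eq. (2.3)
    print(A_prime)                          # approximately [1.232, 1.866]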
In a more abstract sense, we consider an n-dimensional vector space Ω with basis
vectors $\{\hat e_1, \hat e_2, \ldots, \hat e_n\} \equiv \{\hat e_k\}$, and we let R be a linear operator on Ω. We assume that
the basis vectors transform under R as follows:

\[
\hat e_i{}' = R\,\hat e_i = \hat e_j\, R^j{}_i \quad (\text{sum on } j) \qquad (2.6)
\]

where the familiar summation convention for repeated upper and lower indices has
been used. Thus, for each value of i, the right-hand side of (2.6) is actually a sum over
all values of j, i.e., from j = 1 to j = n. Explicitly,

\[
\begin{aligned}
\hat e_1{}' &= \hat e_1 R^1{}_1 + \hat e_2 R^2{}_1 + \cdots + \hat e_n R^n{}_1 \\
\hat e_2{}' &= \hat e_1 R^1{}_2 + \hat e_2 R^2{}_2 + \cdots + \hat e_n R^n{}_2 \\
&\ \ \vdots \\
\hat e_n{}' &= \hat e_1 R^1{}_n + \hat e_2 R^2{}_n + \cdots + \hat e_n R^n{}_n
\end{aligned}
\qquad (2.7)
\]

Now, let

\[
\vec V = V^1 \hat e_1 + V^2 \hat e_2 + \cdots + V^n \hat e_n \equiv V^i \hat e_i \qquad (2.8)
\]

be a vector in Ω, and let $\vec V{}' = R\,\vec V$. We have:

\[
\vec V{}' = R\,(V^j \hat e_j) = V^j R\,\hat e_j = V^j \hat e_i\, R^i{}_j \equiv V^{i\,\prime}\, \hat e_i\,.
\]

Therefore the components of the original and the transformed vector are related by

\[
V^{i\,\prime} = R^i{}_j\, V^j \qquad (2.9)
\]

or, explicitly,


\[
\begin{aligned}
V^{1\,\prime} &= R^1{}_1 V^1 + R^1{}_2 V^2 + \cdots + R^1{}_n V^n \\
V^{2\,\prime} &= R^2{}_1 V^1 + R^2{}_2 V^2 + \cdots + R^2{}_n V^n \\
&\ \ \vdots \\
V^{n\,\prime} &= R^n{}_1 V^1 + R^n{}_2 V^2 + \cdots + R^n{}_n V^n
\end{aligned}
\qquad (2.10)
\]

Define the n×n matrix

\[
M = \left[\, R^i{}_j \,\right] \quad\text{with}\quad M_{ij} = R^i{}_j \qquad (2.11)
\]

The basis transformations (2.6) are then written as

\[
\begin{pmatrix} \hat e_1{}' \\ \vdots \\ \hat e_n{}' \end{pmatrix}
= M^T
\begin{pmatrix} \hat e_1 \\ \vdots \\ \hat e_n \end{pmatrix}
\qquad (2.12)
\]

while the component transformations (2.9) become

\[
\begin{pmatrix} V^{1\,\prime} \\ \vdots \\ V^{n\,\prime} \end{pmatrix}
= M
\begin{pmatrix} V^1 \\ \vdots \\ V^n \end{pmatrix}
\qquad (2.13)
\]
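
As a small sketch (NumPy; illustrative assumptions only) of how such a representation is assembled in practice: by (2.6) the components of the transformed basis vector $\hat e_j{}'$ form the j-th column of M, and (2.13) then yields the transformed components of any vector.

    import numpy as np

    def matrix_of_operator(op, n):
        """Build M (with M_ij = R^i_j) from the action of `op` on the basis.

        `op` maps a component column to a component column; by Eq. (2.6) the
        image of the j-th basis vector supplies the j-th column of M.
        """
        basis = np.eye(n)
        return np.column_stack([op(basis[:, j]) for j in range(n)])

    # Example operator: a rotation by 30 degrees in the plane (n = 2).
    theta = np.pi / 6
    rot = lambda v: np.array([np.cos(theta) * v[0] - np.sin(theta) * v[1],
                              np.sin(theta) * v[0] + np.cos(theta) * v[1]])

    M = matrix_of_operator(rot, 2)
    V = np.array([2.0, 1.0])
    print(np.allclose(M @ V, rot(V)))       # True: Eq. (2.13) reproduces the operator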

3. Passive view of transformations

Imagine that our previous x-y system of axes on the plane, with basis unit vectors
$\{\hat u_x, \hat u_y\}$, is rotated counterclockwise by an angle θ to obtain a new system of axes x′
and y′ with corresponding basis $\{\hat u_{x'}, \hat u_{y'}\}$ (Fig. 3.1). As before, the two bases are
related by the system of equations

\[
\begin{aligned}
\hat u_{x'} &= \cos\theta\,\hat u_x + \sin\theta\,\hat u_y \\
\hat u_{y'} &= -\sin\theta\,\hat u_x + \cos\theta\,\hat u_y
\end{aligned}
\qquad (3.1)
\]

Figure 3.1: The x-y axes, with basis $\hat u_x, \hat u_y$, are rotated counterclockwise by an angle θ into the x′-y′ axes with basis $\hat u_{x'}, \hat u_{y'}$; the vector $\vec A$ itself remains fixed.



A vector $\vec A$ on the plane can be expressed in both these bases, as follows:

\[
\vec A = A_x \hat u_x + A_y \hat u_y = A_x'\,\hat u_{x'} + A_y'\,\hat u_{y'} \qquad (3.2)
\]

Substituting the basis transformations (3.1) into the right-hand side of (3.2), and
equating coefficients of similar unprimed basis vectors, we find:

\[
\begin{aligned}
A_x &= A_x'\cos\theta - A_y'\sin\theta \\
A_y &= A_x'\sin\theta + A_y'\cos\theta
\end{aligned}
\qquad (3.3)
\]

Solving this for the primed components, we get:

\[
\begin{aligned}
A_x' &= A_x\cos\theta + A_y\sin\theta \\
A_y' &= -A_x\sin\theta + A_y\cos\theta
\end{aligned}
\qquad (3.4)
\]

Notice that, in contrast to what we did in the previous section, here we keep the
geometrical object $\vec A$ fixed and simply expand it in two different bases. This is the
adopted practice in the passive view of a transformation.
Introducing the matrix

\[
M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\]

we rewrite our previous equations in the matrix forms

\[
\begin{pmatrix} \hat u_{x'} \\ \hat u_{y'} \end{pmatrix} = M^T \begin{pmatrix} \hat u_x \\ \hat u_y \end{pmatrix}
\qquad (3.5)
\]
and

\[
\begin{pmatrix} A_x \\ A_y \end{pmatrix} = M \begin{pmatrix} A_x' \\ A_y' \end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix} A_x' \\ A_y' \end{pmatrix} = M^{-1} \begin{pmatrix} A_x \\ A_y \end{pmatrix}
\qquad (3.6)
\]

where

\[
M^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = M^T \qquad (3.7)
\]

Notice that the transformation matrix M is orthogonal. As will be shown below, this is
related to the fact that the transformation (rotation of axes) relates two Cartesian bases
in a Euclidean space.


By comparing (2.3) and (3.4) it follows that the transformation equations of the
passive view reduce to those of the active view upon replacing θ with −θ. Physically,
this means that a passive transformation in which the vector $\vec A$ is fixed and the basis
of our space is rotated counterclockwise is equivalent to an active transformation in
which the basis is fixed and the vector $\vec A$ is rotated clockwise.
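
This equivalence is easy to check numerically (an illustrative sketch only): the passive matrix $M^{-1} = M^T$ of Eq. (3.7) evaluated at θ gives the same components as the active matrix of Eq. (2.4) evaluated at −θ.

    import numpy as np

    def active(theta):
        """Matrix M of Eq. (2.4): active rotation of the vector by theta."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def passive(theta):
        """Matrix M^(-1) = M^T of Eq. (3.7): components in a basis rotated by theta."""
        return active(theta).T

    theta = 0.7
    A = np.array([3.0, -2.0])
    print(np.allclose(passive(theta) @ A, active(-theta) @ A))   # True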
Let us generalize to the case of an n-dimensional vector space Ω with basis
$\{\hat e_1, \hat e_2, \ldots, \hat e_n\} \equiv \{\hat e_k\}$. Let $\{\hat e_{k'}\}$ be another basis related to the former one by

\[
\hat e_{i'} = \hat e_j\, \Lambda^j{}_{i'} \qquad (3.8)
\]

(note the sum on j). A vector $\vec V$ in Ω may be expressed in both these bases, as follows:

\[
\vec V = V^i \hat e_i = V^{j'} \hat e_{j'} = V^{j'} \hat e_i\, \Lambda^i{}_{j'}
\]

where use has been made of (3.8). This yields

\[
V^i = \Lambda^i{}_{j'}\, V^{j'} \qquad (3.9)
\]

Introducing the n×n matrix

\[
M = \left[\, \Lambda^i{}_{j'} \,\right] \quad\text{with}\quad M_{ij} = \Lambda^i{}_{j'} \qquad (3.10)
\]

we write
\[
\begin{pmatrix} \hat e_{1'} \\ \vdots \\ \hat e_{n'} \end{pmatrix}
= M^T
\begin{pmatrix} \hat e_1 \\ \vdots \\ \hat e_n \end{pmatrix}
\qquad (3.11)
\]
and

\[
\begin{pmatrix} V^1 \\ \vdots \\ V^n \end{pmatrix}
= M
\begin{pmatrix} V^{1'} \\ \vdots \\ V^{n'} \end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix} V^{1'} \\ \vdots \\ V^{n'} \end{pmatrix}
= M^{-1}
\begin{pmatrix} V^1 \\ \vdots \\ V^n \end{pmatrix}
\qquad (3.12)
\]

4. Orthogonal transformations in a Euclidean space

In this section the passive view of transformations will be adopted. Let Ω be an
n-dimensional Euclidean space with Cartesian¹ coordinates $(x^1, x^2, \ldots, x^n) \equiv (x^k)$ and
corresponding Cartesian basis $\{\hat e_k\}$. Let $(x^{k'})$ be another Cartesian coordinate system for
Ω, with corresponding basis $\{\hat e_{k'}\}$. We assume that the two coordinate systems have a
common origin O ≡ (0, 0, ..., 0). Both Cartesian bases are orthonormal, in the sense that

\[
\hat e_i \cdot \hat e_j = \hat e_{i'} \cdot \hat e_{j'} = \delta_{ij} \qquad (4.1)
\]

¹ Cartesian systems of coordinates exist only in Euclidean spaces. For example, you can define a system of Cartesian coordinates on a plane, but you cannot define such coordinates on the surface of a sphere, which is a non-Euclidean space.

Assuming that the handedness of the two coordinate systems is the same (e.g., for
n = 3, both coordinate systems are right-handed), it is apparent that a linear transformation
from one basis to the other is a "rotation" in Ω. Let us explore this in more detail.

Definition: A linear transformation from a Cartesian basis to another is said to be
an orthogonal transformation.

Proposition 4.1: An orthogonal transformation is represented by an orthogonal
matrix M:

\[
M^{-1} = M^T \;\Leftrightarrow\; M^T M = M M^T = \mathbf{1} \qquad (4.2)
\]

Proof: Assume a linear basis transformation of the form (3.8): $\hat e_{i'} = \hat e_j \Lambda^j{}_{i'}$. Also,
let M be the transformation matrix defined in (3.10). We have:

\[
\hat e_{i'} \cdot \hat e_{j'}
= \left(\hat e_k\, \Lambda^k{}_{i'}\right)\cdot\left(\hat e_l\, \Lambda^l{}_{j'}\right)
= \delta_{kl}\, \Lambda^k{}_{i'} \Lambda^l{}_{j'}
= \sum_k \Lambda^k{}_{i'} \Lambda^k{}_{j'}
= \sum_k M_{ki} M_{kj}
= \sum_k \left(M^T\right)_{ik} M_{kj}
= \left(M^T M\right)_{ij}
\]

where we have taken into account that the original (unprimed) basis is orthonormal.
Given that the same is true for the transformed (primed) basis, we have:

\[
\left(M^T M\right)_{ij} = \delta_{ij} \;\Leftrightarrow\; M^T M = \mathbf{1}\,.
\]
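
A brief numerical sanity check (a sketch, using the planar rotation of Sec. 3): the matrix M whose columns are the components of the primed basis vectors indeed satisfies $M^T M = \mathbf 1$, as Proposition 4.1 requires.

    import numpy as np

    theta = 0.9
    # Components of the primed basis vectors in the unprimed basis, Eq. (3.1):
    e_xp = np.array([np.cos(theta), np.sin(theta)])
    e_yp = np.array([-np.sin(theta), np.cos(theta)])

    # By Eqs. (3.8) and (3.10), these components form the columns of M.
    M = np.column_stack([e_xp, e_yp])
    print(np.allclose(M.T @ M, np.eye(2)))   # True: M is orthogonal, Eq. (4.2)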


The magnitude of a vector $\vec V$ is a non-negative quantity whose square is
expressed in a Cartesian basis in terms of the scalar (dot) product, as follows:

\[
|\vec V|^2 = \vec V \cdot \vec V = \left(V^i \hat e_i\right)\cdot\left(V^j \hat e_j\right) = V^i V^j\, \hat e_i \cdot \hat e_j = \delta_{ij}\, V^i V^j \qquad (4.3)
\]

[Obviously, the last term in (4.3) is the sum of the squares of the components of $\vec V$.]

Proposition 4.2: An orthogonal transformation preserves the Cartesian form (4.3)
of the magnitude of a vector.

Proof: By using the transformation formula (3.9) for components of vectors, derived
in the previous section, we have:

\[
\delta_{ij}\, V^i V^j
= \delta_{ij} \left(\Lambda^i{}_{k'} V^{k'}\right)\left(\Lambda^j{}_{l'} V^{l'}\right)
= \left(\sum_i \Lambda^i{}_{k'} \Lambda^i{}_{l'}\right) V^{k'} V^{l'}
= \left(\sum_i M_{ik} M_{il}\right) V^{k'} V^{l'}
= \left(\sum_i \left(M^T\right)_{ki} M_{il}\right) V^{k'} V^{l'}
= \left(M^T M\right)_{kl} V^{k'} V^{l'}
= \delta_{kl}\, V^{k'} V^{l'}
\]

For a more compact proof, define the matrices

\[
\left[V^k\right] \equiv \begin{pmatrix} V^1 \\ \vdots \\ V^n \end{pmatrix}
\qquad\text{and}\qquad
\left[V^k\right]^T \equiv \begin{pmatrix} V^1 & \cdots & V^n \end{pmatrix}
\]

and similarly for the corresponding primed quantities. Then, in the unprimed basis,

\[
|\vec V|^2 = \left[V^k\right]^T \left[V^k\right].
\]

Using the fact that, by (3.12), $\left[V^k\right] = M \left[V^{k'}\right]$, we have:

\[
\left[V^k\right]^T \left[V^k\right]
= \left(M\left[V^{k'}\right]\right)^T \left(M\left[V^{k'}\right]\right)
= \left[V^{k'}\right]^T M^T M \left[V^{k'}\right]
= \left[V^{k'}\right]^T \left[V^{k'}\right]
\]

Comment: The above proof suggests an alternate definition of an orthogonal
transformation as a linear transformation in a Euclidean space that preserves the Cartesian
form of the magnitude of vectors. In fact, this is the way orthogonal transformations
are usually defined in textbooks.
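
As a quick check of Proposition 4.2 (an illustrative sketch, not from the original text): an orthogonal matrix leaves the Euclidean magnitude of a component column unchanged.

    import numpy as np

    rng = np.random.default_rng(0)

    # A random orthogonal matrix (the Q factor of a QR decomposition is orthogonal).
    Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

    V = rng.normal(size=4)      # components of a vector in the old basis
    V_primed = Q.T @ V          # components in the new basis, Eq. (3.12) with M = Q
    print(np.allclose(np.linalg.norm(V), np.linalg.norm(V_primed)))   # True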

Now, let P be a point in Ω, with Cartesian coordinates $(x^1, x^2, \ldots, x^n) \equiv (x^k)$. In this
system of coordinates the position vector of P can be written as $\vec r = x^i \hat e_i$. Since this
vector is a geometrical object independent of the system of coordinates, we can write:

\[
\vec r = x^i \hat e_i = x^{j'} \hat e_{j'}\,.
\]

By using (3.8) we find, as in Sec. 3,

\[
x^i = \Lambda^i{}_{j'}\, x^{j'} \qquad (4.4)
\]

which is the analog of (3.9). If M is the matrix defined in (3.10), and if $[x^k]$ is the
column vector of the $x^k$, then by the general matrix relation (3.12) we have:


\[
\left[x^k\right] = M \left[x^{k'}\right] \;\Rightarrow\; \left[x^{k'}\right] = M^{-1}\left[x^k\right] = M^T \left[x^k\right] \qquad (4.5)
\]

where the orthogonality condition (4.2) has been used. Let us call

\[
M^T \equiv L \quad\text{with}\quad L_{ij} = M_{ji} = \Lambda^j{}_{i'} \qquad (4.6)
\]

Then the matrix relation (4.5) can be written as a system of n linear equations of the
form

\[
\begin{aligned}
x^{1'} &= L_{11} x^1 + L_{12} x^2 + \cdots + L_{1n} x^n \\
x^{2'} &= L_{21} x^1 + L_{22} x^2 + \cdots + L_{2n} x^n \\
&\ \ \vdots \\
x^{n'} &= L_{n1} x^1 + L_{n2} x^2 + \cdots + L_{nn} x^n
\end{aligned}
\qquad (4.7)
\]

These equations represent an orthogonal coordinate transformation in Ω.


As an example for n = 2, let Ω be a plane with Cartesian coordinates $(x^1, x^2) \equiv (x, y)$.
A position vector in Ω is written $\vec r = x\,\hat u_x + y\,\hat u_y$. As seen in Sec. 3, the transformation
matrix M for a rotation of the basis vectors by an angle θ is

\[
M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\;\Rightarrow\;
L = M^T = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.
\]

The coordinate transformation equations (4.7) are written here as

\[
\begin{aligned}
x' &= x\cos\theta + y\sin\theta \\
y' &= -x\sin\theta + y\cos\theta
\end{aligned}
\]


Exercise: By using the relations $\vec V = V^j \hat e_j$ and $\hat e_{j'} = \hat e_l\, \Lambda^l{}_{j'}$, together with (3.10)
and (4.1), show the following:

\[
V^i = \hat e_i \cdot \vec V\,, \qquad M_{ij} = \hat e_i \cdot \hat e_{j'}\,.
\]

Under an orthogonal transformation from one Cartesian system of coordinates to
another, the components $V^k$ of a vector transform like the coordinates $x^k$ themselves.
That is,

\[
V^{i'} = L_{ij}\, V^j\,.
\]

From (4.7) we have that


\[
L_{ij} = \frac{\partial x^{i'}}{\partial x^j}\,.
\]
Therefore,

\[
V^{i'} = \frac{\partial x^{i'}}{\partial x^j}\, V^j
\qquad\text{and, conversely,}\qquad
V^i = \frac{\partial x^i}{\partial x^{j'}}\, V^{j'}
\qquad (4.8)
\]
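
A short sketch (NumPy; the planar rotation of this section) illustrating that vector components transform with the same matrix $L = M^T$ as the coordinates, Eq. (4.8):

    import numpy as np

    theta = 0.4
    M = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    L = M.T                      # L_ij = dx'^i/dx^j for this linear transformation

    r = np.array([1.0, 2.0])     # coordinates (x, y) of a point
    V = np.array([-0.5, 3.0])    # components (Vx, Vy) of some vector

    r_primed = L @ r             # coordinates in the rotated system, Eq. (4.7)
    V_primed = L @ V             # the components transform the same way, Eq. (4.8)
    print(r_primed, V_primed)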

5. Active and passive view combined

Let Ω be an n-dimensional vector space with basis $\{\hat e_k\}$ (k = 1, 2, ..., n). Let A be a
linear operator on Ω. The action of A on the basis vectors is given by

\[
A\,\hat e_j = \sum_i \hat e_i\, A_{ij} \equiv \hat e_i\, A_{ij} \qquad (5.1)
\]

(Note a slight change in the summation convention; in this section subscripts only will
be used.) The n×n matrix $A = [A_{ij}]$ is the matrix representation of the operator A in the
basis $\{\hat e_k\}$.
A vector in Ω is written:

\[
\vec x = \sum_i x_i\, \hat e_i \equiv x_i\, \hat e_i \qquad (5.2)
\]

  
Let $\vec y = A\,\vec x$. If $\vec y = y_i \hat e_i$, then, by the linearity of A and by using (5.1) and (5.2), we
find that

\[
y_i = A_{ij}\, x_j \quad (\text{sum on } j) \qquad (5.3)
\]

which represents a system of n linear equations for i=1,...,n. In matrix form,

\[
[\,y_k\,] = A\,[\,x_k\,] \qquad (5.4)
\]

where [xk] and [yk] are column vectors.


Now, let A and B be linear operators on Ω. We define their product C = AB by

\[
C\,\vec x = (AB)\,\vec x \equiv A\,(B\,\vec x)\,, \quad \forall\, \vec x \in \Omega \qquad (5.5)
\]

Then, in the basis $\{\hat e_k\}$,

\[
C\,\hat e_j = A\,(B\,\hat e_j) = A\,(\hat e_l\, B_{lj}) = B_{lj}\,(A\,\hat e_l) = A_{il} B_{lj}\, \hat e_i \equiv \hat e_i\, C_{ij}
\]

where

\[
C_{ij} = A_{il}\, B_{lj} \quad\text{or, in matrix form,}\quad C = AB \qquad (5.6)
\]


That is, in any basis of Ω,

the matrix of the product of two operators is the product of the matrices of
these operators.
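
This is easy to confirm numerically (a sketch, under the assumption of Eq. (5.4) that operators act on component columns by matrix multiplication):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3))   # matrix of operator A in some basis
    B = rng.normal(size=(3, 3))   # matrix of operator B in the same basis
    x = rng.normal(size=3)

    # Acting with B and then with A agrees with acting once with the product matrix AB.
    print(np.allclose(A @ (B @ x), (A @ B) @ x))   # True, Eqs. (5.5)-(5.6)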

Consider now a change of basis (passive transformation) with transformation
matrix $T = [T_{ij}]$:

\[
\hat e_{j'} = \hat e_i\, T_{ij} \qquad (5.7)
\]

The inverse transformation is

\[
\hat e_j = \hat e_{i'}\, \left(T^{-1}\right)_{ij} \qquad (5.8)
\]


The same vector may be expressed in both these bases as $\vec x = x_i \hat e_i = x_{j'} \hat e_{j'}$, from
which we get, by using (5.7) and (5.8),

\[
x_i = T_{ij}\, x_{j'} \quad\text{and}\quad x_{i'} = \left(T^{-1}\right)_{ij}\, x_j \qquad (5.9)
\]

In matrix form,

\[
[\,x_k\,] = T\,[\,x_{k'}\,] \quad\text{and}\quad [\,x_{k'}\,] = T^{-1}\,[\,x_k\,] \qquad (5.10)
\]

How do the matrix elements of a linear operator A transform under a change of
basis of the form (5.7)? In other words, how does the matrix of an active transformation
transform under a passive transformation? Let $\vec y = A\,\vec x$. By combining (5.10)
with (5.4), we have:

\[
[\,y_{k'}\,] = T^{-1}[\,y_k\,] = T^{-1} A\,[\,x_k\,] = T^{-1} A\, T\,[\,x_{k'}\,] \equiv A'\,[\,x_{k'}\,]
\;\Rightarrow\;
A' = T^{-1} A\, T \qquad (5.11)
\]

For an alternative proof, note that

\[
A\,\hat e_{j'} = A\,(\hat e_i\, T_{ij}) = T_{ij}\, A\,\hat e_i = T_{ij}\, \hat e_l\, A_{li} = A_{li}\, T_{ij}\, \hat e_{k'} \left(T^{-1}\right)_{kl}
= \left(T^{-1} A\, T\right)_{kj}\, \hat e_{k'} \equiv \hat e_{k'}\, A'_{kj}
\;\Rightarrow\; A' = T^{-1} A\, T
\]

as before. A transformation of the form (5.11) is called a similarity transformation.


By applying the properties of the trace and the determinant of a matrix to (5.11), it
is not hard to show that, under basis transformations, the trace and the determinant of
the matrix representation of an operator remain unchanged: tr A = tr A′, det A = det A′.
This means that the trace and the determinant are basis-independent quantities that are
properties of the operator itself, rather than properties of its representation.
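
A quick numerical illustration of this invariance (a sketch only; T here is an arbitrary invertible matrix):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(3, 3))          # matrix of the operator in the old basis
    T = rng.normal(size=(3, 3))          # change-of-basis matrix (invertible)

    A_prime = np.linalg.inv(T) @ A @ T   # similarity transformation, Eq. (5.11)
    print(np.isclose(np.trace(A), np.trace(A_prime)))             # True
    print(np.isclose(np.linalg.det(A), np.linalg.det(A_prime)))   # True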



Definition: A vector $\vec x \neq 0$ is said to be an eigenvector of the linear operator A if a
constant λ exists such that

\[
A\,\vec x = \lambda\, \vec x \qquad (5.12)
\]

The constant λ is an eigenvalue of A, to which eigenvalue this eigenvector belongs.
Note that, in general, more than one eigenvector may belong to the same eigenvalue.

In a given basis $\{\hat e_k\}$, the linear system (5.3) corresponding to the eigenvalue
equation (5.12) takes on the form

\[
A_{ij}\, x_j = \lambda\, x_i \quad\text{or}\quad \left(A_{ij} - \lambda\, \delta_{ij}\right) x_j = 0 \qquad (5.13)
\]

where $[A_{ij}] = A$ is the matrix of the operator A in the given basis. This is a homogeneous
linear system of equations, which has a nontrivial solution for the eigenvector
components iff

\[
\det\left[A_{ij} - \lambda\, \delta_{ij}\right] = 0 \quad\text{or}\quad \det\left(A - \lambda \mathbf{1}\right) = 0 \qquad (5.14)
\]

where $\mathbf{1}$ here is the n-dimensional unit matrix. This polynomial equation determines
the eigenvalues $\lambda_i$ (not necessarily all different from each other) of the operator A.

Now, in general, for any value of the constant λ the matrix $(A - \lambda\mathbf{1})$ is the representation
of the operator (A − λ1) in the considered basis $\{\hat e_k\}$. Under a basis transformation
to $\{\hat e_{k'}\}$ this matrix transforms according to (5.11):

\[
\left(A - \lambda \mathbf{1}\right)' = T^{-1} \left(A - \lambda \mathbf{1}\right) T = T^{-1} A\, T - \lambda \mathbf{1} \equiv A' - \lambda \mathbf{1}\,.
\]

On the other hand, by the invariance of the determinant under this transformation,

\[
\det\left(A' - \lambda \mathbf{1}\right) = \det\left(A - \lambda \mathbf{1}\right)\,.
\]

In particular, if λ is an eigenvalue of the operator A, the right-hand side of the above
equation vanishes in view of (5.14) and, therefore, the same must be true for the
left-hand side for the same value of λ. That is, the polynomial equation (5.14) determines
the eigenvalues of A uniquely, regardless of the chosen representation. We conclude
that

the eigenvalues of an operator are a property of the operator itself and do not
depend on the choice of basis of the space Ω.

If we can find n linearly independent eigenvectors $\{\vec x_k\}$ of A, belonging to the
corresponding eigenvalues $\lambda_k$ (not necessarily all different), we can use these vectors to
define a basis of Ω. The matrix representation of A in this basis is given by (5.1):
$A\,\vec x_j = \vec x_i\, A_{ij}$. On the other hand, if $\lambda_j \equiv \lambda'$, then $A\,\vec x_j = \lambda'\, \vec x_j = \lambda'\, \delta_{ij}\, \vec x_i$. Therefore, since
the $\vec x_k$ are linearly independent, we must have $A_{ij} = \lambda'\, \delta_{ij}$. We conclude that, in the
eigenvector basis, the matrix representation of the operator A has the diagonal form

\[
A = \mathrm{diag}\left(\lambda_1, \lambda_2, \ldots, \lambda_n\right)\,.
\]


Moreover, by the above formula and by the fact that the quantities tr A, det A and $\lambda_k$ are
basis-independent (i.e., invariant under basis transformations), it follows that, in any
basis of Ω,

\[
\mathrm{tr}\, A = \lambda_1 + \lambda_2 + \cdots + \lambda_n\,, \qquad \det A = \lambda_1 \lambda_2 \cdots \lambda_n \qquad (5.15)
\]
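
The relations (5.15) are easy to verify numerically (an illustrative sketch; a random real matrix is used, which is almost surely diagonalizable):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(4, 4))

    eigenvalues = np.linalg.eigvals(A)   # roots of det(A - lambda*1) = 0, Eq. (5.14)
    print(np.isclose(np.trace(A), eigenvalues.sum()))         # True, Eq. (5.15)
    print(np.isclose(np.linalg.det(A), eigenvalues.prod()))   # True, Eq. (5.15)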

Proposition 5.1: Let A and B be two linear operators on Ω. We assume that A and
B have a common set of n linearly independent eigenvectors $\{\vec x_k\}$. Then the operators
A and B commute:

\[
AB = BA \;\Leftrightarrow\; [A, B] \equiv AB - BA = 0
\]

where [A, B] denotes the commutator of A and B.



Proof: Since the n vectors $\{\vec x_k\}$ are linearly independent, they define a basis of Ω.
By assumption, for each value of k the vector $\vec x_k$ is an eigenvector of both A and B,
with corresponding eigenvalues, say, α and β. Then,

\[
(AB)\,\vec x_k \equiv A\,(B\,\vec x_k) = A\,(\beta\, \vec x_k) = \beta\,(A\,\vec x_k) = \beta\alpha\, \vec x_k
\]

and similarly, $(BA)\,\vec x_k = \alpha\beta\, \vec x_k$. Thus,

\[
(AB)\,\vec x_k = (BA)\,\vec x_k \;\Leftrightarrow\; [A, B]\,\vec x_k = 0\,,
\]

for all k = 1, ..., n. Now, let $\vec\Psi = \xi_i\, \vec x_i$ be an arbitrary vector in Ω. Then,

\[
[A, B]\,\vec\Psi = [A, B]\,(\xi_i\, \vec x_i) = \xi_i\, [A, B]\,\vec x_i = 0\,, \quad \forall\, \vec\Psi \in \Omega\,.
\]

This means that [A, B] = 0.
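
A numerical illustration of Proposition 5.1 (a sketch; the common eigenvector basis and the eigenvalues below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)

    # Columns of X form a common eigenvector basis (a random matrix is a.s. invertible).
    X = rng.normal(size=(3, 3))
    X_inv = np.linalg.inv(X)

    # Two operators sharing these eigenvectors but with different eigenvalues.
    A = X @ np.diag([1.0, 2.0, 3.0]) @ X_inv
    B = X @ np.diag([-1.0, 5.0, 0.5]) @ X_inv

    print(np.allclose(A @ B, B @ A))   # True: [A, B] = 0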

Definition: An operator A is said to be nonsingular if det A ≠ 0 (note that this is a
basis-independent property). A nonsingular operator is invertible, in the sense that an
inverse linear operator $A^{-1}$ on Ω exists such that $A A^{-1} = A^{-1} A = 1_{op}$, where $1_{op}$ is the
unit operator. This allows us to write

\[
\vec y = A\,\vec x \;\Leftrightarrow\; \vec x = A^{-1}\, \vec y\,.
\]

By (5.4) it follows that, if A is the matrix representation of the nonsingular operator
A in some basis, then the matrix of the inverse operator $A^{-1}$ is the inverse $A^{-1}$ of A.
As is well known, the matrix A may have an inverse iff det A ≠ 0, whence the definition
of a nonsingular operator. In view of the second relation in (5.15),

all eigenvalues of a nonsingular operator are nonzero.

Indeed, if even one eigenvalue vanishes, then detA=0 in any representation.
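
A final small sketch (illustrative only): a matrix with a zero eigenvalue has a vanishing determinant and hence no inverse, in line with the statement above.

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 0.0]])      # eigenvalues 2 and 0

    print(np.linalg.eigvals(A))     # [2. 0.]
    print(np.linalg.det(A))         # 0.0: a zero eigenvalue makes det A vanish
    # np.linalg.inv(A) would raise LinAlgError here: the operator is singular.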


6. Comments

Both the active and the passive views are of importance in Physics. Let us see some
examples:
1. The Galilean transformation of Classical Mechanics and the Lorentz transformation
of Relativity² are passive transformations connecting different inertial frames
of reference. When expressed in terms of mathematical equations, all physical laws
are required to be invariant in form upon passing from one inertial frame to another.
2. The operators of Quantum Mechanics³ are active transformations from a quantum
state to a new state. On the other hand, both states and operators may be represented
by matrices in different bases, the transformation from one basis to another being
a passive transformation. Typically, the basis vectors of the quantum-mechanical
space are chosen to be eigenvectors of linear operators representing physical quantities
such as energy, angular momentum, etc. In such a basis the related operator is
represented by a diagonal matrix, the diagonal elements being the eigenvalues of the
operator. Physically, these eigenvalues give the possible values that a measurement of
the associated physical quantity may yield in an experiment.

² H. Goldstein, Classical Mechanics, 2nd Ed. (Addison-Wesley, 1980).
³ E. Merzbacher, Quantum Mechanics, 3rd Ed. (Wiley, 1998).

