
Lecture 3

Linear operators
Linear algebra reminder
Dual spaces
• Let’s introduce an antilinear functional F on vector space V: it maps each vector A ∈ V to a (generally complex) number F(A) such that
F(αA + βB) = α*F(A) + β*F(B).
• Examples of the Functionals:


1. The scalar product with an arbitrary fixed vector B from V: F(A) = (A, B).
2.
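The antilinearity of the scalar-product functional from Example 1 is easy to check numerically. A minimal NumPy sketch, assuming the physics convention (A, B) = Σᵢ aᵢ* bᵢ (conjugation on the first argument); the particular component values are arbitrary:

```python
import numpy as np

# Fixed vector B from V defines the functional F(A) = (A, B).
# Physics convention: (A, B) = sum_i conj(a_i) * b_i, so F is
# antilinear in its argument A.
B = np.array([1.0 + 2.0j, 0.5, -1.0j])

def F(A):
    # np.vdot conjugates its first argument, matching (A, B)
    return np.vdot(A, B)

# Antilinearity: F(alpha*A1 + beta*A2) = conj(alpha)*F(A1) + conj(beta)*F(A2)
A1 = np.array([1.0, 1.0j, 2.0])
A2 = np.array([0.0, 3.0, 1.0 - 1.0j])
alpha, beta = 2.0 - 1.0j, 0.5j
lhs = F(alpha * A1 + beta * A2)
rhs = np.conj(alpha) * F(A1) + np.conj(beta) * F(A2)
```

Note that plain linearity, α F(A1) + β F(A2), would give a different number here; only the conjugated combination matches.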
• All possible functionals on V also form a linear space of their own, as is easy to verify: addition is simple pointwise addition, (F + G)(A) = F(A) + G(A), multiplication by a scalar is the usual one, and the zero functional (the one sending every vector to 0) is the zero element, since adding it to any other functional does not change the latter.
• We will call this space the dual (adjoint, or conjugate) of the space V and denote it V*.
• Theorem (Riesz representation theorem): for “complete” inner product spaces (all finite-dimensional vector spaces, for example), every functional F ∈ V* can be written as F(A) = (A, B_F) for a unique vector B_F ∈ V.

“Complete” inner product spaces are also known as “Hilbert” spaces. Such spaces are called self-dual: V and V* are actually the same space from the mathematical point of view.
• The inner product is the prototypical example of such a functional, and this theorem states that in these spaces any functional can be expressed through the inner product with some vector from the space itself.
Bra-ket notation
• Dirac’s bra-ket notation is mathematically meaningful if we identify the ket |B⟩ with an element of the dual space V* of antilinear functionals, while the bra ⟨A| is to be identified with a vector of the vector space V itself:
⟨A|B⟩ = F_B(A).
• In other words, the object ⟨A|B⟩ is a number (generally complex) that is the result of the action of the functional |B⟩ from the dual space on the vector A from the space V.
• If the representation theorem is satisfied, then there is a one-to-one correspondence between elements of V and V*, and the object ⟨A|B⟩ is the same as the inner product (A, B).
• Physicists almost always ignore that distinction: both bras and kets are identified with vectors from V, while ⟨A|B⟩ is identified with the inner product (A, B), B being the vector whose existence is assured by the representation theorem. I will (hopefully) make clear when the distinction is important, when it is not, and how to make it.
Examples of Dual spaces
• For 3D coordinate space: the Riesz theorem holds, thus the dual is also a 3-dimensional space. If we represent a vector as a column of 3 numbers, it is customary to represent the corresponding vector in the dual space as a row of (conjugated) numbers.
• Further, we can view the inner product as a product between a vector from the vector space and a vector from the dual space, and use matrix multiplication rules to perform the calculations. The space: columns. The dual space: rows (conjugated).
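In NumPy this column/row picture looks as follows (an illustrative sketch; the particular components are arbitrary):

```python
import numpy as np

# A ket as a 3x1 column of components...
ket_B = np.array([[1.0 + 1.0j],
                  [2.0],
                  [-1.0j]])

# ...and the dual (bra) vector corresponding to A as the conjugated
# 1x3 row, so the inner product (A, B) is plain matrix multiplication.
A = np.array([1.0, -2.0j, 0.5])
bra_A = A.conj().reshape(1, 3)

inner = (bra_A @ ket_B)[0, 0]   # a single (generally complex) number
```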
Linear Operators
• An operator is a rule (mapping) that transforms one vector into another vector, Â: V → V (not necessarily into the same vector space): |B⟩ = Â|A⟩.

• A linear operator obeys this:
Â(α|A⟩ + β|B⟩) = α Â|A⟩ + β Â|B⟩.

Note that this includes commutativity with multiplication by a number: Â(α|A⟩) = α Â|A⟩.
• A product of two linear operators is their successive application to a vector, and is also a linear operator: (ÂB̂)|A⟩ = Â(B̂|A⟩).
The sum: (Â + B̂)|A⟩ = Â|A⟩ + B̂|A⟩.
Let us also define the inverse operator: Â⁻¹Â = ÂÂ⁻¹ = Î, where Î is the identity operator (Î|A⟩ = |A⟩ for every vector).
Note 1: the inverse operator does not always exist for a given operator…
Note 2: you might have noticed that all linear operators defined on a given vector space in turn form another linear space themselves… With some additional properties, such a space is called an algebra.
Note 3: nowhere has commutativity of operators been mentioned; in general, two linear operators do not commute: ÂB̂ ≠ B̂Â.

The operator that tells “how badly” two operators fail to commute is called the commutator: [Â, B̂] = ÂB̂ − B̂Â.
Properties of commutators:
[Â, B̂] = −[B̂, Â]
[Â, B̂ + Ĉ] = [Â, B̂] + [Â, Ĉ]
[Â, B̂Ĉ] = [Â, B̂]Ĉ + B̂[Â, Ĉ]
[Â, [B̂, Ĉ]] + [B̂, [Ĉ, Â]] + [Ĉ, [Â, B̂]] = 0 (the Jacobi identity)
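The standard commutator identities (antisymmetry, the product/Leibniz rule, the Jacobi identity) are easy to verify numerically for a few random matrices; a sketch (the dimension 3 and the random seed are arbitrary choices):

```python
import numpy as np

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Three random complex 3x3 matrices; generic matrices do not commute.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
           for _ in range(3))

antisymmetry = comm(A, B) + comm(B, A)                        # should be 0
leibniz = comm(A, B @ C) - (comm(A, B) @ C + B @ comm(A, C))  # should be 0
jacobi = (comm(A, comm(B, C)) + comm(B, comm(C, A))
          + comm(C, comm(A, B)))                              # should be 0
```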
“Components” of the operator
(Matrix representation)
• Let’s consider Â acting on an arbitrary vector |A⟩ = Σⱼ aⱼ|eⱼ⟩ expanded over a basis {|eⱼ⟩}:
Â|A⟩ = Σⱼ aⱼ Â|eⱼ⟩.
So to define a linear operator, we only need to know what it does to the basis vectors, and then we know how to operate on the entire vector space.

Let’s find the components of the vector Â|A⟩ in the basis:
(Â|A⟩)ᵢ = ⟨eᵢ|Â|A⟩ = Σⱼ ⟨eᵢ|Â|eⱼ⟩ aⱼ = Σⱼ Aᵢⱼ aⱼ,
where Aᵢⱼ = ⟨eᵢ|Â|eⱼ⟩ is the matrix element of the operator.
Thus, for any vector, the transformed ket can now be found by multiplying the matrix representing the operator onto the column of components representing the vector: b = A a.
The sum and the product of operators correspond to the sum and the product of the corresponding matrices (show this!).
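A NumPy sketch of this: the operator is defined by its action on the basis vectors, column j of the matrix holds the components of Â|eⱼ⟩, and acting on any vector is then matrix-vector multiplication. The particular operator below is an invented illustrative choice, not from the lecture:

```python
import numpy as np

# Images of the basis vectors under the operator: suppose
# A e1 = e1 + e2 and A e2 = 2 e2 (an arbitrary illustrative choice).
images = [np.array([1.0, 1.0]),   # components of A e1
          np.array([0.0, 2.0])]   # components of A e2

# Column j of the matrix = components of the image of e_j.
A = np.column_stack(images)

# Acting on an arbitrary vector v = 3 e1 - e2:
v = np.array([3.0, -1.0])
Av = A @ v

# The product of two operators corresponds to the product of their
# matrices: applying B then A equals applying the single matrix A @ B.
Bm = np.array([[0.0, 1.0],
               [1.0, 0.0]])       # an operator swapping e1 and e2
```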
Examples:
• Rotation of a vector in 2D by angle 𝜙:
x′ = x cos𝜙 − y sin𝜙
y′ = x sin𝜙 + y cos𝜙
• Is that a linear operator? Yes: in matrix form,
R(𝜙) = ( cos𝜙 −sin𝜙 ; sin𝜙 cos𝜙 ).
• Eigenproblem: R(𝜙)|v⟩ = λ|v⟩.
• Eigenvalues λ = e^{±i𝜙}, with eigenvectors proportional to (1, ∓i)ᵀ: no eigenvectors in the linear space itself! (x and y must be real.)

• But imagine an extension into the 3rd dimension: the rotation operator in the plane is still defined, and the vector that is unchanged by the rotation is the normal vector to the plane: an eigenvector does not necessarily belong to the vector space itself!
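Both observations can be confirmed numerically; a sketch for one arbitrary angle:

```python
import numpy as np

phi = 0.3                       # an arbitrary rotation angle
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Over the complex numbers the eigenvalues are exp(+i phi) and
# exp(-i phi); the eigenvectors are complex, so no real eigenvector
# exists in the 2D real space itself.
eigvals, eigvecs = np.linalg.eig(R)

# Embedded in 3D as a rotation about the z-axis, the normal vector
# (0, 0, 1) is left unchanged: a real eigenvector appears.
R3 = np.eye(3)
R3[:2, :2] = R
normal = np.array([0.0, 0.0, 1.0])
```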
Projector operator:
• Definition: P̂² = P̂, i.e. if |B⟩ = P̂|A⟩, then P̂|B⟩ = |B⟩, i.e. applying the projector a second time changes nothing.
• Consider an arbitrary normalized vector |ψ⟩. The symbolic operator P̂ = |ψ⟩⟨ψ| is a projector.
• P̂|A⟩ = |ψ⟩⟨ψ|A⟩. Verify that P̂² = P̂.
• Matrix elements: let the vector |ψ⟩ have components aᵢ in some basis {|eᵢ⟩}, i.e. |ψ⟩ = Σᵢ aᵢ|eᵢ⟩. Then Pᵢⱼ = aᵢaⱼ*.
• Let |ψ⟩ = |eᵢ⟩, i.e. |ψ⟩ is one of the basis vectors. Its components are then aⱼ = δᵢⱼ, and the matrix representing P̂ has all zeroes except the Pᵢᵢ element, which is 1.
• Consider the sum of the n projection operators for each basis vector; then we have Σᵢ |eᵢ⟩⟨eᵢ| = Î: a representation of the unity.
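A NumPy sketch of the projector and of the resolution of unity (the particular |ψ⟩ is arbitrary):

```python
import numpy as np

# P = |psi><psi| for a normalized psi; matrix elements P_ij = a_i * conj(a_j).
psi = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)
P = np.outer(psi, psi.conj())

# Sum of the projectors onto all n basis vectors: representation of unity.
unity = sum(np.outer(e, e.conj()) for e in np.eye(3))
```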
