
Lecture 3: Operators in quantum computing

Rajat Mittal

IIT Kanpur

We saw that the states of a quantum system can be described as vectors. We also looked at linear
operators on these vectors. This lecture will extend our understanding of these linear operators and introduce
the postulate which specifies the operators allowed in quantum computing.
Let us consider a toy problem, called Deutsch's problem. Suppose you are given a subroutine to compute
a one-bit function f : {0, 1} → {0, 1}. We need to find whether f(0) = f(1) using the minimum number of
queries to the subroutine. It is easy to find the solution if we can query the subroutine twice, once for f(0)
and once for f(1). The question is, can we do it with just one query?
Using the property of superposition, it might seem like we can do it with one query on a quantum
computer. Just create the |+⟩ state and apply the subroutine to it (linearly); then we will have both f(0) as well
as f(1). This idea does not work directly, but we can still find whether f(0) = f(1) in just one query on a
quantum computer!
To understand why the simple idea does not work and how to modify it, we need to learn the 2nd and 3rd
postulates (how to operate on a state and how to get output from it). Before that, though, we need
to look at some linear algebra concepts.
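The one-query solution itself is developed in later lectures, but for the curious reader, here is a minimal numerical sketch of Deutsch's algorithm in Python (not part of the lecture). It assumes the standard unitary oracle U_f|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩, which is how a subroutine for f is usually modeled on a quantum computer:

```python
import numpy as np

# Minimal sketch of Deutsch's algorithm (details appear in later lectures).
# The oracle U_f acts on two qubits as U_f|x>|y> = |x>|y XOR f(x)>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def oracle(f):
    """4x4 permutation matrix for U_f|x>|y> = |x>|y ^ f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>, apply H on both qubits, query the oracle ONCE,
    # apply H on the first qubit, then measure the first qubit.
    state = np.kron(H @ np.array([1, 0]), H @ np.array([0, 1]))
    state = oracle(f) @ state
    state = np.kron(H, np.eye(2)) @ state
    prob_first_is_0 = state[0] ** 2 + state[1] ** 2
    return "constant" if prob_first_is_0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant: f(0) = f(1)
print(deutsch(lambda x: 1 - x))  # balanced: f(0) != f(1)
```

The single call to `oracle(f)` is the one query; the superposition trick plus the final Hadamard (phase kickback) is exactly what the naive "query on |+⟩" idea was missing.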

1 Eigenvalues and eigenvectors

You must have seen the content of this section in previous courses. The content given here is meant as a
refresher. If you do not feel comfortable with this material, please refer to any standard textbook on linear
algebra (e.g. [1]).
Let V, W be two vector spaces over the complex numbers; recall that L(V, W) is the set of linear
operators from V to W. For simplicity, you can assume them to be Cn and Cm. A matrix M ∈ L(V, W)
is square if dim(V) = dim(W) (m = n). In particular, a matrix M ∈ L(V) is always square. For a matrix
M ∈ L(V), a vector v ∈ V satisfying,

M v = λv for some λ ∈ C,

is called an eigenvector of the matrix M with eigenvalue λ.

Exercise 1. Given two eigenvectors v, w, when is their linear combination an eigenvector itself?

The previous exercise can be used to show that all the eigenvectors corresponding to a particular eigen-
value form a subspace. This subspace is called the eigenspace of the corresponding eigenvalue.
An eigenvalue λ of an n × n matrix M satisfies the equation

Det(λI − M ) = 0,

where Det(M) denotes the determinant of the matrix M. The polynomial Det(λI − M), in λ, is called
the characteristic polynomial of M. The characteristic polynomial has degree n and will have n roots in the
field of complex numbers. However, these roots might not be real.
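The relation between the characteristic polynomial and the eigenvalues is easy to check numerically; a small sketch in Python (using numpy, not part of the lecture; note it also gives away one answer to the next exercise, so try the exercise first):

```python
import numpy as np

# A 90-degree rotation matrix: its characteristic polynomial is
# Det(lambda*I - M) = lambda^2 + 1, which has no real roots.
M = np.array([[0.0, -1.0], [1.0, 0.0]])

roots = np.roots([1.0, 0.0, 1.0])   # roots of lambda^2 + 0*lambda + 1
eigvals = np.linalg.eigvals(M)      # eigenvalues computed directly

print(roots)    # two purely imaginary numbers, +i and -i
print(eigvals)  # the same two eigenvalues
```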

Exercise 2. Give an example of a matrix with no real eigenvalue.

The next theorem shows that the eigenvalues are preserved under the action of a full rank matrix.

Theorem 1. Given a matrix P of full rank, matrix M and matrix P −1 M P have the same set of eigenvalues.
Proof (Extra reading). Suppose λ is an eigenvalue of P −1 M P ; we need to show that it is an eigenvalue of
M too. Say v is a corresponding eigenvector. Then,

P −1 M P v = λv ⇒ M (P v) = λP v.

Hence P v is an eigenvector with eigenvalue λ.


The opposite direction follows similarly. Given an eigenvector v of M with eigenvalue λ, it can be shown that P −1 v is an
eigenvector of P −1 M P :
P −1 M P (P −1 v) = P −1 M v = λP −1 v.
Hence proved.

Exercise 3. Where did we use the fact that P is a full rank matrix?

Exercise 4. Consider the matrix representation of the operator which takes (1/√2)(|0⟩ + |1⟩) to |0⟩ and (1/√2)(|0⟩ − |1⟩) to |1⟩. What are its eigenvalues and eigenvectors?

Pauli matrices are used widely in quantum computing. They are defined as,

X = [[0, 1], [1, 0]],   Y = [[0, −i], [i, 0]],   Z = [[1, 0], [0, −1]].

Exercise 5. What is the action of the X gate on the standard basis?

Notice that |u⟩⟨v| is uv ∗ and is a matrix. On the other hand ⟨u||v⟩ is u∗ v, giving a scalar; we write it
succinctly as ⟨u|v⟩.
Exercise 6. Show that any matrix can be written as ∑_i |ui⟩⟨vi|, for any orthogonal basis of ui's, by choosing the vi's carefully.

Notice that Pauli X can be written as |0⟩⟨1| + |1⟩⟨0|. Dirac notation allows us to compute the action of
X easily.

(|0⟩⟨1| + |1⟩⟨0|)|0⟩ = |0⟩⟨1|0⟩ + |1⟩⟨0|0⟩ = |1⟩.


We get the last step because ⟨1|0⟩ = 0 and ⟨0|0⟩ = 1. You can similarly see that X|1⟩ = |0⟩; X is
known as the quantum NOT gate.
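This outer-product computation can be reproduced numerically; a small sketch (assuming the standard column-vector representation |0⟩ = (1, 0)ᵀ and |1⟩ = (0, 1)ᵀ):

```python
import numpy as np

ket0 = np.array([[1], [0]])   # |0>
ket1 = np.array([[0], [1]])   # |1>

# X = |0><1| + |1><0|, built directly from outer products.
X = ket0 @ ket1.conj().T + ket1 @ ket0.conj().T

print(X @ ket0)  # the column vector for |1>
print(X @ ket1)  # the column vector for |0>
```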

Exercise 7. Show that Z = |0⟩⟨0| − |1⟩⟨1|.

Z is known as the phase gate, as it puts a phase of −1 in front of the state |1⟩.


Before we go further, let us look at another useful notation. Given a square n × n matrix M and two
vectors u, v ∈ Cn ,
⟨u|M |v⟩ := u∗ M v = ⟨u|M v⟩ = ⟨M ∗ u|v⟩.

Exercise 8. Show that ⟨u|M |v⟩ = (|u⟩⟨v|) • M , where A • B := ∑_{ij} A∗ij Bij is the sum of the entry-wise product of the matrices (conjugating the first).

1.1 Spectral decomposition

How does the set of eigenvectors and eigenvalues look for a matrix? Is there a structure to it?

Exercise 9. Let v1 , v2 be two eigenvectors of a matrix M with distinct eigenvalues. Show that these two
eigenvectors are linearly independent.

Given an n × n matrix M , it need not have n linearly independent eigenvectors. Can it have more than
n linearly independent eigenvectors? The matrix M is called diagonalizable iff the set of eigenvectors of M
spans the complete space Cn . Let P be the matrix whose columns are n linearly independent eigenvectors;
then P −1 M P will be a diagonal matrix. By Theorem 1, our original matrix and this diagonal matrix will
have the same eigenvalues.

Exercise 10. What are the eigenvalues and eigenvectors of a diagonal matrix?

For a diagonalizable matrix, the basis of eigenvectors need not be an orthonormal basis. We will show a
characterization of matrices whose eigenvectors can form an orthonormal basis. Fortunately, these matrices
are of great importance in quantum computing, and are called normal matrices.
A normal matrix is defined to be a matrix M such that M M ∗ = M ∗ M . The spectral theorem shows that we can
form an orthonormal basis of Cn using the eigenvectors of a normal matrix. It allows us to
view a normal matrix in terms of its eigenvalues and eigenvectors.

Theorem 2 (Spectral theorem). For a normal matrix M ∈ L(Ck ), there exists an orthonormal basis
{|x1 ⟩, · · · , |xk ⟩} of Ck and λi ∈ C (∀i ∈ [k]) such that

M = ∑_{i=1}^{k} λi |xi ⟩⟨xi |.

Exercise 11. Show that |xi ⟩ is an eigenvector of M with eigenvalue λi .

Before the proof, let us discuss implications of the spectral theorem. It means that any normal matrix
M = U DU ∗ for a diagonal matrix D with entries λi and the matrix U with the |xi ⟩ as columns. So, under a
basis change (the columns of U are orthonormal), a normal matrix is similar to a diagonal matrix. Since diagonal
matrices are simple and easier to deal with, many properties of diagonal matrices can be lifted to normal
matrices.
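As a quick numerical illustration (not from the lecture): numpy's `eigh` returns exactly such an orthonormal eigenbasis for a Hermitian, hence normal, matrix. The example matrix below is an arbitrary choice:

```python
import numpy as np

# Spectral decomposition of a normal (here Hermitian) matrix:
# M = sum_i lambda_i |x_i><x_i|, equivalently M = U D U* with the
# eigenvectors |x_i> as the columns of U.
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # Hermitian, hence normal
lams, U = np.linalg.eigh(M)              # eigenvalues and orthonormal eigenvectors

# Rebuild M as a sum of projections onto the eigenvectors.
rebuilt = sum(lam * np.outer(U[:, i], U[:, i].conj()) for i, lam in enumerate(lams))
print(np.allclose(rebuilt, M))           # True

# Equivalently, M = U D U*.
print(np.allclose(U @ np.diag(lams) @ U.conj().T, M))  # True
```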

Note 1. ⟨y|x⟩ is a scalar, but |y⟩⟨x| is a matrix. Also, the λi ’s need not be different. If we collect all the |xi ⟩’s
corresponding to a particular eigenvalue λ, the space spanned by those |xi ⟩’s is the eigenspace of λ.
Proof idea (Extra reading). The proof of spectral theorem essentially hinges on the following lemma.
Lemma 1. Let S be an eigenspace (of eigenvalue λ) of a normal matrix M . Then M acts on the spaces S
and S ⊥ separately. In other words, M |v⟩ ∈ S if |v⟩ ∈ S and M |v⟩ ∈ S ⊥ if |v⟩ ∈ S ⊥ .
Proof of lemma. Since S is an eigenspace, M |v⟩ ∈ S if |v⟩ ∈ S. For a vector |v⟩ ∈ S,

M M ∗ |v⟩ = M ∗ M |v⟩ = λM ∗ |v⟩.

This shows that M ∗ preserves the subspace S. Suppose |v1 ⟩ ∈ S ⊥ and |v2 ⟩ ∈ S, then M ∗ |v2 ⟩ ∈ S. So,

0 = ⟨v1 |M ∗ |v2 ⟩ = ⟨M v1 |v2 ⟩.

The above equation implies M |v1 ⟩ ∈ S ⊥ . Hence, the matrix M acts separately on S and S ⊥ .


The lemma implies that M is a linear operator on S ⊥ , i.e., it moves every element of S ⊥ to an element
in S ⊥ linearly. It can be easily shown that this linear operator (the action of M on S ⊥ ) is also normal. The
proof of spectral theorem follows by using induction and is given below.
From the fundamental theorem of algebra, there is at least one root λ0 of det(λI − M ) = 0. Start with
the eigenspace of the eigenvalue λ0 . Using Lem. 1, we can restrict the matrix to the orthogonal subspace (which
is of smaller dimension). We can divide the entire space into orthogonal eigenspaces by induction.

Exercise 12. Show that if we take the orthonormal basis of all these eigenspaces, then we get the required
decomposition.

Exercise 13. Given the spectral decomposition of M , what is the spectral decomposition of M ∗ ?

Exercise 14. What is the spectral decomposition of Identity matrix?

Exercise 15. If M is normal, prove that the rank of M is the sum of the dimensions of the eigenspaces of the non-zero eigenvalues.
Exercise 16. Let the spectral decomposition of M be ∑_i λi |xi ⟩⟨xi | and v = ∑_i αi |xi ⟩. Find ⟨v|M |v⟩ in terms of
λi , xi , αi .

It is easy to show that any matrix with an orthonormal basis of eigenvectors is a normal matrix. Hence, spectral
decomposition provides another characterization of normal matrices.
Clearly the spectral decomposition is not unique (essentially because of the multiplicity of eigenvalues).
But the eigenspaces corresponding to each eigenvalue are fixed. So there is a unique decomposition in terms
of eigenspaces and then any orthonormal basis of these eigenspaces can be chosen.

Note 2. It is also true that, for a normal matrix, if an eigenvalue is a root of the characteristic polynomial with multiplicity k, then its
eigenspace is of dimension k. In other words, the geometric and algebraic multiplicities of an eigenvalue are the same.
Spectral decomposition allows us to define functions over normal matrices.

1.2 Functions on operators


The first notion is of applying a function to a linear operator. We will assume that the linear operators given
to us belong to the set of normal operators or some subset of it. Suppose we have a function f : C → C,
from complex numbers to complex numbers. It can be naturally extended to a function on a normal linear
operator in L(Cn ). By the definition of an operator function, we apply the function to all the eigenvalues of the
operator. So, if
A = λ1 |x1 ⟩⟨x1 | + · · · + λn |xn ⟩⟨xn |,
then
f (A) = f (λ1 )|x1 ⟩⟨x1 | + · · · + f (λn )|xn ⟩⟨xn |.
In particular, we can now define the square-root, exponential and logarithm of an operator.
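This definition can be implemented directly; a sketch for a Hermitian operator (the example matrix is an arbitrary choice with positive eigenvalues, so its square root is well defined):

```python
import numpy as np

def apply_function(M, f):
    """f(M) for a Hermitian M, by applying f to the eigenvalues."""
    lams, U = np.linalg.eigh(M)
    return U @ np.diag(f(lams)) @ U.conj().T

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3, both positive

# Square root via the spectral definition: sqrt(M) @ sqrt(M) == M.
R = apply_function(M, np.sqrt)
print(np.allclose(R @ R, M))             # True
```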

Exercise 17. Find eiX , eiY , eiZ ; where X, Y, Z are Pauli matrices.

Extra reading: Trace

Another very important function on operators, introduced before, is the trace. We defined the
trace to be Tr(A) = ∑_i Aii . At this point, it is a function on matrices and not linear operators.

Exercise 18. What is the problem?

For a linear operator, the trace might seem to depend on the basis chosen to represent it. In other words, from the definition given above, there is no guarantee
that it is independent of the basis.

Exercise 19. Show that the trace is cyclic, i.e., tr(AB) = tr(BA).

This exercise implies that tr(U ∗ AU ) = tr(A). Hence, trace is independent of the representation.

Exercise 20. Show that tr(|u⟩⟨v|) = ⟨u|v⟩.


If M = ∑_{i=1}^{n} λi |xi ⟩⟨xi |, the previous exercise shows that tr(M ) = ∑_i λi . Since the λi 's do not depend on the
basis chosen, this gives us a basis-independent definition of the trace. This definition allows us to define the trace of
a linear operator (and not just a matrix).
We also know that ⟨v|A|w⟩ = ∑_{ij} Aij vi∗ wj .

Exercise 21. Show that Aij = ⟨i|A|j⟩, where matrix A is represented in the standard basis |1⟩, · · · , |n⟩.

From the previous exercise, tr(A) = ∑_i ⟨i|A|i⟩. In fact, for any orthonormal basis v1 , · · · , vn ,

tr(A) = ∑_i ⟨vi |A|vi ⟩

(the trace is independent of the basis).
If we take the vi to be the eigenvectors, we get the same equation

tr(A) = ∑_i λi .

Here, the λi are the eigenvalues of the operator A.
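Both facts, tr(U∗AU) = tr(A) and tr(A) = ∑_i λi, are easy to check numerically; a sketch with a random matrix and a random unitary (obtained from a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex matrix A and a random unitary Q (via QR decomposition).
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# tr(Q* A Q) = tr(A): the trace does not depend on the basis.
print(np.isclose(np.trace(Q.conj().T @ A @ Q), np.trace(A)))       # True

# tr(A) also equals the sum of the eigenvalues of A.
print(np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A))))       # True
```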

2 Special class of matrices


We know that all eigenvalues of a normal matrix are complex numbers. Conversely, we can pick any set of complex
numbers and an orthonormal basis, and that will give us a normal matrix. Why?
If we impose more constraints on the eigenvalues, it gives us specific classes of normal matrices (some of
them are very important for quantum computing).

2.1 Hermitian matrix


A matrix M is said to be Hermitian if M = M ∗ (the analog of symmetric matrices). It is easy to check that
any Hermitian matrix is normal. You can also show that all the eigenvalues of a Hermitian matrix are real
(given as an exercise).
Conversely, if all the eigenvalues of a normal matrix are real, then the matrix is Hermitian (from the spectral
theorem).
Note 3. In quantum mechanics, we use eigenvalues to denote the physical properties of a system. Since
these quantities should be real, we use Hermitian operators.
For any matrix B, a matrix of the form B ∗ B or B + B ∗ is always Hermitian. The sum of two Hermitian
matrices is Hermitian, but the product of two Hermitian matrices need not be Hermitian.
Exercise 22. Give an example of two Hermitian matrices whose multiplication is not Hermitian.
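A quick numerical check of these claims (the random matrix B is an arbitrary choice; this does not spoil Exercise 22, which you should still try by hand):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# B*B and B + B* are Hermitian for any B, and Hermitian matrices
# have real eigenvalues.
for M in (B.conj().T @ B, B + B.conj().T):
    print(np.allclose(M, M.conj().T))                 # True: Hermitian
    print(np.allclose(np.linalg.eigvals(M).imag, 0))  # True: real eigenvalues
```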

2.2 Unitary matrix


A matrix M is unitary if M M ∗ = M ∗ M = I (the analog of orthogonal matrices). In other words, the columns
of M form an orthonormal basis of the whole space. Unitary matrices need not be Hermitian, so their
eigenvalues can be complex. For a unitary matrix, M −1 = M ∗ .
Exercise 23. Give an example of a unitary matrix which is not Hermitian.
Unitary matrices can be viewed as matrices which implement a change of basis. Hence they preserve the
angle (inner product) between the vectors. So for a unitary M ,
⟨u|v⟩ = ⟨M u|M v⟩.
Exercise 24. Prove the above equation.
That means a unitary matrix preserves the norm of a vector and the angle between vectors. Another way to
characterize unitary matrices is that they move any orthonormal basis to another orthonormal basis.
If two matrices A, B are related by A = M −1 BM , where M is unitary, then they are unitarily equivalent.
If two matrices are unitarily equivalent then they are similar. Spectral theorem can be stated as the fact that
normal matrices are unitarily equivalent to a diagonal matrix. The diagonal of this diagonal matrix contains
its eigenvalues.

Exercise 25. What is the rank of a unitary matrix?

Note 4. Since unitary matrices preserve the norm, they will be used as operators in the postulates of quantum
mechanics.

Exercise 26. Show that the Pauli matrices are Hermitian as well as unitary by calculating their eigenvalues.

Exercise 27. Show that the Pauli matrices (with identity) form a basis of all Hermitian 2 × 2 operators.

One of the most important unitary matrices is the Hadamard matrix; it takes |0⟩ to (1/√2)(|0⟩ + |1⟩) and |1⟩
to (1/√2)(|0⟩ − |1⟩). The matrix representation is

H = (1/√2) [[1, 1], [1, −1]].

Exercise 28. Show that H is a unitary matrix. Can you show directly that it is unitary (without
computing its matrix representation)?
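For reference, the matrix part of this claim is a one-line numerical check (try the exercise first); a sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# H H* = I, so H is unitary; H is also Hermitian, hence H @ H = I as well.
print(np.allclose(H @ H.conj().T, np.eye(2)))  # True
print(np.allclose(H @ H, np.eye(2)))           # True
```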

3 Evolution of a quantum system


The next postulate specifies how a closed quantum system evolves. You might already know this postulate
in terms of the very famous Schrödinger equation. It is a partial differential equation which describes how
a quantum state evolves with time.
The evolution is described by a Hamiltonian H which depends on the system being observed. For us, as
computer scientists, it is just some Hermitian matrix H. Given the Hamiltonian H, the equation
i d|ψ⟩/dt = H|ψ⟩,
describes how the quantum system will change its state with time. For readers who are already familiar with
this equation, we have assumed that Planck’s constant can be absorbed in the Hamiltonian.
This equation can be considered as the second postulate of quantum mechanics. But, we will modify it
a little bit to get rid of the partial differential equation and write it in terms of unitary operators.

Exercise 29. Read about Schrödinger’s equation.

Suppose the quantum system is in state |ψ(t1 )⟩ at time t1 . Then, using the Schrödinger’s equation, the
state at time t2 is
|ψ(t2 )⟩ = e−iH(t2 −t1 ) |ψ(t1 )⟩.

Exercise 30. Show that the matrix e−iH(t2 −t1 ) is unitary.
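One way to see this numerically (not a proof, and it gives a hint for the exercise): build e^{−iHt} from the spectral decomposition of the Hermitian H and check that it is unitary and norm preserving. The choice of Pauli Z as Hamiltonian and the time t are arbitrary:

```python
import numpy as np

def evolution(Ham, t):
    """U = exp(-i * Ham * t) for a Hermitian Ham, via its spectral decomposition."""
    lams, V = np.linalg.eigh(Ham)
    return V @ np.diag(np.exp(-1j * lams * t)) @ V.conj().T

# Take the Pauli Z as a toy Hamiltonian.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
U = evolution(Z, t=0.7)

# U is unitary, so the evolution preserves the norm of the state.
print(np.allclose(U @ U.conj().T, np.eye(2)))    # True
psi = np.array([0.6, 0.8])                       # a unit vector
print(np.isclose(np.linalg.norm(U @ psi), 1.0))  # True
```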

Using the previous exercise,


|ψ(t2 )⟩ = U (t2 , t1 )|ψ(t1 )⟩.
This gives us the “working” second postulate.

Postulate 2: A closed quantum system evolves unitarily. The unitary matrix only depends on time t1 and t2 .
If the state at t1 is |ψ(t1 )⟩ then the state at time t2 is,

|ψ(t2 )⟩ = U (t2 , t1 )|ψ(t1 )⟩.

Note 5. Unitary operators preserve norm and inner products.


Do you remember any unitary operators considered in this course before?

Exercise 31. Show that all Pauli matrices and the Hadamard matrix H are unitary operators.
Exercise 32. “Guess” the eigenvalues and eigenvectors of H. Check your guess; if it is wrong, find the actual ones.
Notice that it is enough to specify the action of a gate/unitary on any basis (it is a linear operator). If
we pick the standard basis, then we just need to specify the action on classical inputs. For example, Pauli
X negates the classical inputs: it takes |0⟩ to |1⟩ and |1⟩ to |0⟩. That means, on any state α|0⟩ + β|1⟩, Pauli
X will return α|1⟩ + β|0⟩. This is one of the preferred methods of specifying the action of gates (or circuits) in
quantum computing.
The Hadamard operator,

H = (1/√2) [[1, 1], [1, −1]],
can be thought of as a random coin toss in the quantum world. If we apply Hadamard on standard basis
and measure, we get 0 and 1 with equal probability.
Exercise 33. Why is

H′ = (1/√2) [[1, 1], [1, 1]]

not a random coin toss in the quantum world?

Controlled operators: Another class of gates, useful in quantum computing, consists of the controlled versions of a
unitary gate U . There are two inputs to these gates: one is the control part and the other is the target part. The
unitary U is applied to the target part if and only if the control part is in the ON (set to 1) state. Usually, if the
control part is a set of qubits, setting all of them to 1 is seen as the ON state.
The simplest and most useful of these gates is called the CNOT gate. It has one control and one target
qubit.
Exercise 34. Suppose the first qubit is the control and the second qubit is the target; write the matrix representation of the
CNOT gate.
The CNOT gate is drawn as a dot on the control wire joined by a vertical line to a ⊕ on the target wire; here the first qubit is the control and the second qubit is the target.


Exercise 35. What is the output of the CNOT gate on the state (1/√2)(|00⟩ + |11⟩)? Is the state still entangled?

Notice that there are two ways to specify the action of a quantum gate. At first glance, we need to
describe the action of a gate on every quantum state. The obvious way to do it is by a matrix in a particular
basis (generally the standard basis). This way the output state can be obtained by multiplying the matrix
with the input state vector. This is how we specified the Hadamard gate.
However, another description is at least equally useful. Remember that a linear operator can be specified
by its action on a basis. Naturally, we can take the basis formed by the classical states, and the action specified
on them is enough to specify the action on any quantum state (the entire Hilbert space). This was the way
we specified the action of the CNOT gate.
You should be able to convert between these descriptions.
Exercise 36. Write the matrix associated to CNOT gate with respect to the classical states as basis.
If you had difficulty thinking about this conversion, notice that the matrix will give you the action on the basis
in a straightforward manner. For the opposite direction, notice that the columns of the matrix are essentially the
output states when the input is a basis vector. If it was difficult to come up with the matrix for the CNOT gate,
can you do it now?
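The conversion can also be checked numerically; the sketch below builds the CNOT matrix column by column from its action on classical states (note it effectively answers Exercise 34, so attempt that first). The basis ordering |00⟩, |01⟩, |10⟩, |11⟩ is assumed:

```python
import numpy as np

# CNOT in the standard basis |00>, |01>, |10>, |11> (first qubit = control):
# it flips the target bit exactly when the control bit is 1.
CNOT = np.zeros((4, 4))
for c in (0, 1):
    for t in (0, 1):
        CNOT[2 * c + (t ^ c), 2 * c + t] = 1   # column 2c+t maps to row 2c+(t^c)

# The columns are the images of the basis vectors: |10> -> |11>, |11> -> |10>.
print(CNOT.astype(int))

ket = np.zeros(4); ket[2] = 1    # |10>
print(np.argmax(CNOT @ ket))     # 3, i.e. |11>
```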

4 Assignment
Exercise 37. Read about singular values of a matrix; show that the matrices M and M ∗ have the same singular
values.

Exercise 38. Find the eigenvalues and eigenvectors of the Pauli operators.

Exercise 39. Prove that the eigenvalues of a Hermitian matrix are real.

Exercise 40. Prove that the absolute value of the eigenvalues of a unitary matrix is 1. Is the converse true?
What condition do we need to get the converse?

Exercise 41. Prove that a matrix M is Hermitian iff ⟨v|M |v⟩ is real for all |v⟩.

Exercise 42. Show that the set of Hermitian matrices of a fixed dimension form a vector space (over which
field?). What is the dimension of this vector space?
Exercise 43. Let σ = α1 X + α2 Y + α3 Z, where the αi 's are real numbers and |α1 |² + |α2 |² + |α3 |² = 1. Show
that,
e^{iθσ} = cos(θ)I + i sin(θ)σ.

Exercise 44. Prove that if H is Hermitian then eiH is a unitary matrix.

References
1. Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, 2009.
