3 Operations
Rajat Mittal
IIT Kanpur
We saw that the states of a quantum system can be described as a vector. We also looked at linear
operators on these vectors. This lecture will extend our understanding of these linear operators and state the
postulate that specifies the operators allowed in quantum computing.
Let us consider a toy problem, called Deutsch’s Problem. Suppose you are given a subroutine to compute
a one-bit function f : {0, 1} → {0, 1}. We need to find whether f (0) = f (1) using minimum number of
queries to the subroutine. It is easy to find the solution if we can query the subroutine twice, once for f (0)
and once for f (1). The question is, can we do it with just one query?
Using the property of superposition, it might seem like we can do it with one query on a quantum
computer. Just create |+⟩ state, apply subroutine on it (linearly) and then we will have both f (0) as well
as f (1). The idea doesn’t work directly, but still we can find whether f (0) = f (1) in just one query on a
quantum computer!
To understand why the simple idea doesn't work and how to modify it, we need to learn the second and third
postulates (how to operate on states and how to get output from them). Before that, though, we need
to look at some linear algebra concepts.
You must have seen the content of this section in previous courses. The content given here is meant as a
refresher. If you do not feel comfortable with this material, please refer to any standard textbook on linear
algebra (e.g. [1]).
Let V, W be two vector spaces over the complex numbers; recall that L(V, W ) is the set of linear
operators from V to W . For simplicity, you can assume them to be Cn and Cm . A matrix M ∈ L(V, W )
is square if dim(V ) = dim(W ) (m = n). In particular, a matrix M ∈ L(V ) is always square. For a matrix
M ∈ L(V ), a vector v ∈ V satisfying

M v = λv for some λ ∈ C

is called an eigenvector of M with eigenvalue λ.
Exercise 1. Given two eigenvectors v, w, when is their linear combination an eigenvector itself?
The previous exercise can be used to show that all the eigenvectors corresponding to a particular eigen-
value form a subspace. This subspace is called the eigenspace of the corresponding eigenvalue.
An eigenvalue λ of an n × n matrix M satisfies the equation
Det(λI − M ) = 0,
where Det(M ) denotes the determinant of the matrix M . The polynomial Det(λI − M ), in λ, is called
the characteristic polynomial of M . The characteristic polynomial has degree n and hence has n roots (counted
with multiplicity) in the field of complex numbers, though these roots need not be real.
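As a quick sanity check, the roots of the characteristic polynomial agree with the eigenvalues computed directly. A small numpy sketch (the matrix M below is an arbitrary choice for illustration, not from the notes):

```python
import numpy as np

# For a 2x2 matrix, the characteristic polynomial is
# lambda^2 - tr(M)*lambda + Det(M).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = [1.0, -np.trace(M), np.linalg.det(M)]  # lambda^2 - 4*lambda + 3
roots = np.sort(np.roots(coeffs).real)

# Eigenvalues computed directly by numpy.
eigvals = np.sort(np.linalg.eigvals(M).real)
```

Both computations should give the eigenvalues 1 and 3 for this matrix.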
The next theorem shows that eigenvalues are preserved under conjugation by a full rank matrix.
Theorem 1. Given a matrix P of full rank, the matrix M and the matrix P −1 M P have the same set of eigenvalues.
Proof (Extra reading). Suppose λ is an eigenvalue of P −1 M P , we need to show that it is an eigenvalue for
M too. Say λ is an eigenvalue with eigenvector v. Then,
P −1 M P v = λv ⇒ M (P v) = λP v.
Exercise 3. Where did we use the fact that P is a full rank matrix?
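Theorem 1 is easy to confirm numerically. A numpy sketch (the random matrices below are illustrative assumptions; a generic random matrix is full rank with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # generic random matrix: full rank

# Conjugate M by P and compare the spectra.
A = np.linalg.inv(P) @ M @ P
ev_M = np.sort_complex(np.linalg.eigvals(M))
ev_A = np.sort_complex(np.linalg.eigvals(A))
```

Up to numerical error, `ev_M` and `ev_A` coincide.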
Exercise 4. Consider the matrix representation of the operator which takes (1/√2)(|0⟩ + |1⟩) to |0⟩ and (1/√2)(|0⟩ − |1⟩) to |1⟩. What are its eigenvalues and eigenvectors?
Pauli matrices are used widely in quantum computing. They are defined as

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
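The Pauli matrices are easy to play with numerically. A numpy sketch defining them and checking two standard identities (the particular checks are illustrative choices):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Each Pauli matrix squares to the identity.
I2 = np.eye(2)
squares_ok = all(np.allclose(P @ P, I2) for P in (X, Y, Z))

# They also multiply cyclically: XY = iZ.
xy = X @ Y
```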
Notice that |u⟩⟨v| is uv ∗ and is a matrix. On the other hand ⟨u||v⟩ is u∗ v, giving a scalar; we write it
succinctly as ⟨u|v⟩.
Exercise 6. Show that any matrix can be written as ∑_i |ui⟩⟨vi|, for any orthogonal basis ui 's, by choosing
vi 's carefully.
Notice that Pauli X can be written as |0⟩⟨1| + |1⟩⟨0|. Dirac notation allows us to compute the action of
X easily.
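The outer-product description of X can be verified directly. A numpy sketch, with |0⟩ and |1⟩ as standard basis column vectors:

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

# X = |0><1| + |1><0|, built from outer products u v*.
Xmat = ket0 @ ket1.conj().T + ket1 @ ket0.conj().T
```

The action on the basis is now immediate: `Xmat @ ket0` gives |1⟩ and `Xmat @ ket1` gives |0⟩.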
Exercise 8. Show that ⟨u|M |v⟩ = M • |u⟩⟨v|, where • is the sum of the entry-wise product of the matrices.
How does the set of eigenvectors and eigenvalues of a matrix look? Is there a structure to it?
Exercise 9. Let v1 , v2 be two eigenvectors of a matrix M with distinct eigenvalues. Show that these two
eigenvectors are linearly independent.
Given an n × n matrix M , it need not have n linearly independent eigenvectors. Can it have more than
n linearly independent eigenvectors? The matrix M is called diagonalizable iff the set of eigenvectors of M
span the complete space Cn . Let P be the matrix whose columns are n linearly independent eigenvectors,
then P −1 M P will be a diagonal matrix. By Theorem 1, our original matrix and this diagonal matrix will
have the same eigenvalues.
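The diagonalization recipe can be sketched in numpy (the matrix M below is an arbitrary diagonalizable example):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of vecs are linearly independent eigenvectors of M.
vals, vecs = np.linalg.eig(M)
P = vecs

# Conjugating by P diagonalizes M, with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ M @ P
```

For this M, the eigenvalues are 2 and 5, and `D` is (numerically) the diagonal matrix with those entries.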
Exercise 10. What are the eigenvalues and eigenvectors of a diagonal matrix?
For a diagonalizable matrix, the basis of eigenvectors need not be an orthonormal basis. We will show a
characterization of matrices whose eigenvectors can form an orthonormal basis. Fortunately, these matrices
are of great importance in quantum computing, and are called normal matrices.
A normal matrix is defined to be a matrix M such that M M ∗ = M ∗ M . The spectral theorem shows that we can
form an orthonormal basis of Cn using the eigenvectors of a normal matrix, and allows us to
view a normal matrix in terms of its eigenvalues and eigenvectors.
Theorem 2 (Spectral theorem). For a normal matrix M ∈ L(Ck ), there exists an orthonormal basis
{|x1 ⟩, · · · , |xk ⟩} of Ck and λi ∈ C (∀i ∈ [k]) such that

M = \sum_{i=1}^{k} λi |xi ⟩⟨xi |.
Before the proof, let us discuss implications of the spectral theorem. It means that any normal matrix can be written as
M = U ∗ DU for a diagonal matrix D with entries λi and the matrix U with |xi ⟩ as columns. So, under a
basis change (columns of U are orthonormal), a normal matrix is similar to a diagonal matrix. Since diagonal
matrices are simple and easier to deal with, many properties of diagonal matrices can be lifted to normal
matrices.
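The decomposition M = ∑_i λi |xi⟩⟨xi| can be checked numerically for a small normal matrix. A numpy sketch (the Hermitian example below is an illustrative assumption; Hermitian matrices are normal):

```python
import numpy as np

M = np.array([[1.0, 1j],
              [-1j, 1.0]])          # Hermitian, hence normal
assert np.allclose(M @ M.conj().T, M.conj().T @ M)

# eigh returns an orthonormal basis of eigenvectors for Hermitian input.
vals, vecs = np.linalg.eigh(M)

# Rebuild M as sum_i lambda_i |x_i><x_i|.
recon = sum(vals[i] * np.outer(vecs[:, i], vecs[:, i].conj())
            for i in range(2))
```

The reconstruction matches M, and the eigenvector matrix is unitary (its columns are orthonormal).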
Note 1. ⟨y|x⟩ is a scalar, but |y⟩⟨x| is a matrix. Also, the λi ’s need not be different. If we collect all the |xi ⟩’s
corresponding to a particular eigenvalue λ, the space spanned by those |xi ⟩’s is the eigenspace of λ.
Proof idea (Extra reading). The proof of spectral theorem essentially hinges on the following lemma.
Lemma 1. Given an eigenspace S (of eigenvalue λ) for a normal matrix M , then M acts on the space S
and S ⊥ separately. In other words, M |v⟩ ∈ S if |v⟩ ∈ S and M |v⟩ ∈ S ⊥ if |v⟩ ∈ S ⊥ .
Proof of lemma. Since S is an eigenspace, M |v⟩ ∈ S if |v⟩ ∈ S. For a vector |v⟩ ∈ S,

M (M ∗ |v⟩) = M ∗ (M |v⟩) = M ∗ (λ|v⟩) = λ(M ∗ |v⟩),

where the first equality uses M M ∗ = M ∗ M . This shows that M ∗ preserves the subspace S. Suppose |v1 ⟩ ∈ S ⊥ and |v2 ⟩ ∈ S, then M ∗ |v2 ⟩ ∈ S. So,

⟨v2 |M |v1 ⟩ = ⟨M ∗ v2 |v1 ⟩ = 0.

Since this holds for every |v2 ⟩ ∈ S, we get M |v1 ⟩ ∈ S ⊥ .
Exercise 12. Show that if we take the orthonormal basis of all these eigenspaces, then we get the required
decomposition.
Exercise 13. Given the spectral decomposition of M , what is the spectral decomposition of M ∗ ?
Exercise 15. If M is normal, prove that the rank of M is the sum of the dimensions of the eigenspaces of non-zero eigenvalues.
Exercise 16. Let the spectral decomposition of M be ∑_i λi |xi ⟩⟨xi | and v = ∑_i αi |xi ⟩. Find ⟨v|M |v⟩ in terms of
λi , xi , αi .
It is easy to show that any matrix with an orthonormal basis of eigenvectors is a normal matrix. Hence, spectral
decomposition provides another characterization of normal matrices.
Clearly the spectral decomposition is not unique (essentially because of the multiplicity of eigenvalues).
But the eigenspaces corresponding to each eigenvalue are fixed. So there is a unique decomposition in terms
of eigenspaces and then any orthonormal basis of these eigenspaces can be chosen.
Note 2. It is also true that, for a normal matrix, if an eigenvalue is a root of the characteristic polynomial with multiplicity k, then its
eigenspace is of dimension k. In other words, the geometric and algebraic multiplicities of an eigenvalue are the same.
Spectral decomposition allows us to define functions over normal matrices.
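Concretely, a function f can be applied to a normal matrix via f(M) = ∑_i f(λi) |xi⟩⟨xi|. A numpy sketch using f = √ on an arbitrary positive example (chosen so the square root is real):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, hence normal; eigenvalues 1, 3

vals, vecs = np.linalg.eigh(M)

# f(M) = sum_i f(lambda_i) |x_i><x_i| with f = sqrt.
sqrtM = sum(np.sqrt(vals[i]) * np.outer(vecs[:, i], vecs[:, i])
            for i in range(2))
```

Squaring the result recovers M, as expected of a matrix square root.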
Exercise 17. Find eiX , eiY , eiZ ; where X, Y, Z are Pauli matrices.
For a linear operator, the trace (the sum of the diagonal entries of a matrix representation) might, a priori, be
different for different bases: the definition alone gives no guarantee that it is independent of the basis.
Exercise 19. Show that the trace is cyclic, i.e., tr(AB) = tr(BA).
This exercise implies that tr(U ∗ AU ) = tr(A). Hence, trace is independent of the representation.
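Both facts are easy to confirm numerically. A numpy sketch (the random matrices below are illustrative assumptions; a real orthogonal matrix plays the role of the unitary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Cyclic property: tr(AB) = tr(BA).
lhs, rhs = np.trace(A @ B), np.trace(B @ A)

# Invariance under a change of orthonormal basis: tr(Q^T A Q) = tr(A).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q is orthogonal
t = np.trace(Q.T @ A @ Q)
```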
Exercise 21. Show that Aij = ⟨i|A|j⟩, where matrix A is represented in the standard basis |1⟩, · · · , |n⟩.
From the previous exercise, tr(A) = ∑_i ⟨i|A|i⟩. In fact, for any orthonormal basis v1 , · · · , vn ,

tr(A) = ∑_i ⟨vi |A|vi ⟩.
Exercise 25. What is the rank of a unitary matrix?
Note 4. Since unitary matrices preserve the norm, they will be used as operators in the postulates of quantum
mechanics.
Exercise 26. Show that the Pauli matrices are Hermitian as well as unitary by calculating their eigenvalues.
Exercise 27. Show that the Pauli matrices (with identity) form a basis of all Hermitian 2 × 2 operators.
One of the important unitary matrices is the Hadamard matrix; it takes |0⟩ to (1/√2)(|0⟩ + |1⟩) and |1⟩
to (1/√2)(|0⟩ − |1⟩). The matrix representation is

H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
Exercise 28. Show that H is a unitary matrix. Can you show this directly from its action on the basis (without
using its matrix representation)?
Suppose the quantum system is in state |ψ(t1 )⟩ at time t1 . Then, using Schrödinger's equation, the
state at time t2 is

|ψ(t2 )⟩ = e−iH(t2 −t1 ) |ψ(t1 )⟩,

where H is the Hamiltonian of the system (not to be confused with the Hadamard matrix).
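A minimal numpy sketch of this evolution, assuming a toy diagonal Hamiltonian (so the matrix exponential reduces to exponentials of the diagonal entries):

```python
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, -1.0]])         # toy Hamiltonian (diagonal), an assumption
dt = 0.7                            # t2 - t1

# e^{-iH dt} for a diagonal H: exponentiate the diagonal entries.
U = np.diag(np.exp(-1j * H.diagonal() * dt))

psi1 = np.array([1.0, 1.0]) / np.sqrt(2)   # state at time t1
psi2 = U @ psi1                            # state at time t2
```

The resulting U is unitary, so the evolved state still has norm 1.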
Postulate 2: A closed quantum system evolves unitarily. The unitary matrix only depends on the times t1 and t2 .
If the state at t1 is |ψ(t1 )⟩, then the state at time t2 is

|ψ(t2 )⟩ = U (t1 , t2 )|ψ(t1 )⟩,

for some unitary matrix U (t1 , t2 ).
Exercise 31. Show that all Pauli matrices and the Hadamard matrix H are unitary operators.
Exercise 32. "Guess" the eigenvalues and eigenvectors of H. Check your guess; if it is wrong, find the actual ones.
Notice that it is enough to specify the action of a gate/unitary on any basis (it is a linear operator). If
we pick the standard basis, then we just need to mention the action on classical inputs. For example, Pauli
X negates the classical inputs: it takes |0⟩ to |1⟩ and |1⟩ to |0⟩. That means, on any state α|0⟩ + β|1⟩, Pauli
X will return α|1⟩ + β|0⟩. This is one of the preferred methods of specifying the action of gates (or circuits) in
quantum computing.
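The action on a superposition follows by linearity, and can be checked directly. A numpy sketch (the amplitudes α = 0.6, β = 0.8i are arbitrary choices with |α|² + |β|² = 1):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

alpha, beta = 0.6, 0.8j
state = np.array([alpha, beta])     # alpha|0> + beta|1>

# By linearity, X swaps the amplitudes: beta|0> + alpha|1>.
out = X @ state
```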
The Hadamard operator,

H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
can be thought of as a random coin toss in the quantum world. If we apply Hadamard to a standard basis state
and measure, we get 0 and 1 with equal probability.
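This can be simulated classically. A numpy sketch computing the measurement probabilities as squared magnitudes of the amplitudes (the Born rule, formalized by the measurement postulate):

```python
import numpy as np

H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
amps = H @ ket0                 # (|0> + |1>)/sqrt(2)

# Probability of each outcome = |amplitude|^2.
probs = np.abs(amps) ** 2
```

Both outcomes come out with probability 1/2.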
Exercise 33. Why is

H' = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

not a random coin toss in the quantum world?
Controlled operators: Another class of gates, useful in quantum computing, are the controlled versions of a
unitary gate U . There are two inputs to these gates: one is the control part and the other is the target part. The
unitary U is applied to the target part if and only if the control part is in the ON (set to 1) state. Usually, if the
control part is a set of qubits, setting all of them to 1 is treated as the ON state.
The simplest and most useful of these gates is called the CNOT gate. It has one control and one target
qubit.
Exercise 34. Suppose the first qubit is the control and the second qubit is the target; write the matrix representation of the
CNOT gate.
The CNOT gate is drawn as a dot on the control wire, joined by a vertical line to a ⊕ symbol on the target wire.
Notice that there are two ways to specify the action of a quantum gate. At first glance, it seems we need to
describe the action of a gate on every quantum state. The obvious way to do it is by a matrix in a particular
basis (generally the standard basis). This way the output state can be obtained by multiplying the matrix
with the input state vector. This is how we had specified the Hadamard gate.
However, another description is at least as useful. Remember that a linear operator can be specified
by its action on a basis. Naturally, we can take the basis formed by classical states, and the action specified
on them is enough to specify the action on any quantum state (the entire Hilbert space). This was the way
we specified the action of the CNOT gate.
You should be able to convert between these descriptions.
Exercise 36. Write the matrix associated with the CNOT gate, with respect to the classical states as the basis.
If you had difficulty with this conversion, notice that the matrix gives you the action on the basis
in a straightforward manner. For the opposite direction, notice that the columns of the matrix are exactly the
output states when the input is a basis vector. If it was difficult to come up with the matrix for the CNOT gate,
can you do it now?
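One way to internalize the conversion: build the matrix column by column from the action on classical states. A numpy sketch for CNOT with the first qubit as control (the action table is the standard CNOT behavior):

```python
import numpy as np

def basis(i):
    """Standard basis column vector e_i in C^4."""
    e = np.zeros(4)
    e[i] = 1.0
    return e

# CNOT on classical states (first qubit = control):
# |00> -> |00>, |01> -> |01>, |10> -> |11>, |11> -> |10>.
action = {0: 0, 1: 1, 2: 3, 3: 2}

# Column i of the matrix is the image of the i-th basis state.
CNOT = np.column_stack([basis(action[i]) for i in range(4)])
```

Applying CNOT twice returns every basis state to itself, so the matrix squares to the identity.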
4 Assignment
Exercise 37. Read about singular values of a matrix, show that the matrix M and M ∗ have the same singular
values.
Exercise 39. Prove that the eigenvalues of a Hermitian matrix are real.
Exercise 40. Prove that the absolute value of the eigenvalues of a unitary matrix is 1. Is the converse true?
What condition do we need for the converse to hold?
Exercise 41. Prove that a matrix M is Hermitian iff ⟨v|M |v⟩ is real for all |v⟩.
Exercise 42. Show that the set of Hermitian matrices of a fixed dimension form a vector space (over which
field?). What is the dimension of this vector space?
Exercise 43. Let σ = α1 X + α2 Y + α3 Z, where the αi 's are real numbers and α1² + α2² + α3² = 1. Show
that

e^{iθσ} = cos(θ)I + i sin(θ)σ.
References
1. Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, 2009.