Quantum Computing - Lectures 1, 2, and 3.
Paulo Palmuti Sigiani Neto
Exercise 1 (1).
(a) Let U be a complex matrix that is diagonalizable, with orthogonal eigenvectors and eigenvalues of absolute value 1. Since U is diagonalizable we have that:
\[ U = P D P^{-1} \]
where D is a diagonal matrix consisting of the eigenvalues of U, and the column vectors of P are the eigenvectors of U, which by hypothesis are orthogonal (and may be normalized, hence orthonormal). We have that:
\[
D^* D =
\begin{pmatrix}
\bar{\lambda}_1 & 0 & \cdots & 0 \\
0 & \bar{\lambda}_2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \bar{\lambda}_n
\end{pmatrix}
\cdot
\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \lambda_n
\end{pmatrix}
=
\begin{pmatrix}
|\lambda_1|^2 & 0 & \cdots & 0 \\
0 & |\lambda_2|^2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & |\lambda_n|^2
\end{pmatrix}
\]
And since, by hypothesis, the eigenvalues of U all have absolute value equal to 1, we get $D^* D = I$,
and it implies that D∗ = D−1 . Now, let {|v1 ⟩ , . . . , |vn ⟩} be the eigenvectors of U . We have
that:
\[
P^* P =
\begin{pmatrix}
\text{---} \langle v_1 | \text{---} \\
\vdots \\
\text{---} \langle v_n | \text{---}
\end{pmatrix}
\cdot
\begin{pmatrix}
| & & | \\
|v_1\rangle & \cdots & |v_n\rangle \\
| & & |
\end{pmatrix}
=
\begin{pmatrix}
\langle v_1 | v_1 \rangle & \cdots & \langle v_1 | v_n \rangle \\
\vdots & \ddots & \vdots \\
\langle v_n | v_1 \rangle & \cdots & \langle v_n | v_n \rangle
\end{pmatrix}
\]
And since the eigenvectors, by hypothesis, are orthogonal (and normalized to unit length), we have:
=I
This implies that P ∗ = P −1 . Now, we can prove our result:
\[
\begin{aligned}
U^* U &= (P D P^{-1})^* (P D P^{-1}) \\
&= (P D P^*)^* (P D P^*) && \text{(since } P^{-1} = P^* \text{)} \\
&= P D^* P^* P D P^* \\
&= P D^* D P^* && \text{(since } P^* P = I \text{)} \\
&= P D^{-1} D P^* && \text{(since } D^* = D^{-1} \text{)} \\
&= P P^* = I
\end{aligned}
\]
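As a numerical sanity check (not part of the proof), the sketch below builds a hypothetical 2×2 matrix U = P D P* from an orthonormal eigenbasis and unit-modulus eigenvalues (the phases 0.7 and -1.3 are arbitrary choices), and verifies that U*U = I:

```python
import cmath

# Hypothetical 2x2 example: columns of P are orthonormal eigenvectors,
# D holds eigenvalues of modulus 1; U = P D P* should then be unitary.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(A):
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

s = 1 / 2 ** 0.5
P = [[s, s], [s, -s]]                               # orthonormal columns
D = [[cmath.exp(0.7j), 0], [0, cmath.exp(-1.3j)]]   # |lambda_i| = 1
U = matmul(matmul(P, D), dagger(P))                 # P^{-1} = P* here

UdU = matmul(dagger(U), U)                          # should be the identity
assert all(abs(UdU[i][j] - (i == j)) < 1e-12
           for i in range(2) for j in range(2))
```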
(b) Let U be a unitary matrix, and suppose that λ is an eigenvalue of U associated with
|v⟩. Then:
U |v⟩ = λ |v⟩ =⇒ (U |v⟩)∗ = (λ |v⟩)∗
=⇒ ⟨v| U ∗ = λ̄ ⟨v|
Multiplying both sides on the right by U |v⟩ we obtain:
=⇒ ⟨v| U ∗ U |v⟩ = λ̄ ⟨v| U |v⟩
=⇒ ⟨v|v⟩ = (λ̄λ) ⟨v|v⟩
And since ⟨v|v⟩ ≠ 0 we have that $|\lambda|^2 = \bar{\lambda}\lambda = 1$, hence $|\lambda| = 1$.
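This can be spot-checked numerically. The sketch below builds a hypothetical 2×2 unitary (a rotation by an arbitrary angle with an arbitrary phase), computes its eigenvalues via the quadratic formula from trace and determinant, and checks that each has modulus 1:

```python
import cmath

# Hypothetical 2x2 unitary: rotation by theta with an extra phase phi.
theta, phi = 0.9, 0.4
c, s = cmath.cos(theta), cmath.sin(theta)
U = [[c, -s * cmath.exp(1j * phi)],
     [s * cmath.exp(-1j * phi), c]]

# Eigenvalues of a 2x2 matrix from its characteristic polynomial:
# lambda = (tr +/- sqrt(tr^2 - 4 det)) / 2
tr = U[0][0] + U[1][1]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]

assert all(abs(abs(lam) - 1) < 1e-12 for lam in eigs)
```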
(c) Let |v⟩ be an eigenvector of U associated with λ. Then we have that:
\[
U |v\rangle = \lambda |v\rangle \implies U^* U |v\rangle = \lambda U^* |v\rangle \implies |v\rangle = \lambda U^* |v\rangle \implies U^* |v\rangle = \frac{1}{\lambda} |v\rangle
\]
So |v⟩ is an eigenvector of $U^*$ associated with $\frac{1}{\lambda}$. For any complex number z we have that
$z^{-1} = \frac{\bar{z}}{|z|^2}$. So, since $|\lambda| = 1$, we have that $\frac{1}{\lambda} = \bar{\lambda}$. Let |w⟩ be another eigenvector of U,
associated with θ ≠ λ, and let (·, ·) be the notation for the inner product (as in Nielsen's book).
Then we have:
⟨v|U |w⟩ = (|v⟩ , U |w⟩) = (|v⟩ , θ |w⟩) = θ ⟨v|w⟩
and:
⟨v|U |w⟩ = (U ∗ |v⟩ , |w⟩) = (λ̄ |v⟩ , |w⟩) = λ ⟨v|w⟩
Since θ ⟨v|w⟩ = λ ⟨v|w⟩ and θ ≠ λ, it must be that ⟨v|w⟩ = 0, so the eigenvectors are
orthogonal.
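As an illustration (the Pauli-X matrix is a standard example chosen here for concreteness, not taken from the exercise): X is unitary with distinct eigenvalues +1 and -1, and the corresponding eigenvectors come out orthogonal, as the argument predicts.

```python
# Pauli-X: unitary, eigenvalues +1 and -1 with eigenvectors (1,1)/sqrt(2)
# and (1,-1)/sqrt(2); distinct eigenvalues, so the eigenvectors should be
# orthogonal.
X = [[0, 1], [1, 0]]
s = 1 / 2 ** 0.5
v = [s, s]      # eigenvector for eigenvalue +1
w = [s, -s]     # eigenvector for eigenvalue -1

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

assert apply(X, v) == [+1 * c for c in v]   # X v = +v
assert apply(X, w) == [-1 * c for c in w]   # X w = -w

inner = sum(a * b for a, b in zip(v, w))    # <v|w> (real entries)
assert abs(inner) < 1e-12
```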
(d)
Exercise 2 (2).
(→) We need to prove that the eigenvalues of M are all non-negative. Let |w⟩ be an
eigenvector of M associated with λ. Then:
\[
\langle w | M | w \rangle \geq 0 \implies \lambda \langle w | w \rangle \geq 0
\]
By the definition of inner product, and since |w⟩ ≠ 0, we know that ⟨w|w⟩ > 0. So λ cannot be
negative, otherwise we would have a contradiction. Since M is Hermitian, it is diagonalizable
with orthonormal eigenvectors. Let $M = P D P^*$. Since D is the diagonal matrix of eigenvalues and
they are all non-negative, we can define a matrix $\sqrt{D}$ (with diagonal entries being the square
roots of the eigenvalues) such that $\sqrt{D}\sqrt{D} = D$. Now suppose that ⟨v|M|v⟩ = 0. We have
that:
\[
0 = \langle v | M | v \rangle
= \langle v | P \sqrt{D} \sqrt{D} P^* | v \rangle
= \left( (\sqrt{D})^* P^* |v\rangle ,\; \sqrt{D} P^* |v\rangle \right)
\]
And since $\sqrt{D}$ is a real matrix:
\[
= \left( \sqrt{D} P^* |v\rangle ,\; \sqrt{D} P^* |v\rangle \right)
\]
But this is the inner product of a vector with itself, and (x, x) = 0 if and only if x = 0.
Then we conclude that $\sqrt{D} P^* |v\rangle = 0$. Multiplying both sides on the left by $P\sqrt{D}$ we obtain:
\[
(P\sqrt{D})\sqrt{D} P^* |v\rangle = 0 \implies P D P^* |v\rangle = 0 \implies M |v\rangle = 0
\]
(←) Suppose that M |v⟩ = 0. Then ⟨v|M|v⟩ = ⟨v| · 0 = 0.
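The equivalence just proved can be illustrated on a hypothetical positive matrix M = diag(2, 0) (chosen only for illustration): the vectors with ⟨v|M|v⟩ = 0 are exactly those annihilated by M.

```python
# For the positive matrix M = diag(2, 0), <v|M|v> = 0 holds exactly when
# M|v> = 0, matching the equivalence of Exercise 2.
M = [[2, 0], [0, 0]]

def apply(A, x):
    """Matrix-vector product A x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def quad(A, x):
    """Quadratic form <x|A|x> for a real vector x."""
    return sum(a * b for a, b in zip(x, apply(A, x)))

v = [0, 3]           # lies in the kernel of M
u = [1, 1]           # does not
assert quad(M, v) == 0 and apply(M, v) == [0, 0]
assert quad(M, u) > 0 and apply(M, u) != [0, 0]
```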
Exercise 3 (3).
Exercise 4 (5).
(b) (I performed a search on https://en.wikipedia.org/wiki/Lagrange_polynomial.)
Consider the polynomial obtained with the Lagrange method:
\[
p_j(x) = \prod_{\substack{i = 1 \\ i \neq j}}^{m} \frac{x - \lambda_i}{\lambda_j - \lambda_i}
\]
Note that, for any polynomial $p(x) = \sum_{i=1}^{n} a_i x^i$ and Hermitian matrix M with spectral decomposition $M = \sum_{j=1}^{d} \lambda_j F_j$, we have (using that the $F_j$ are orthogonal projections, so $F_j F_k = \delta_{jk} F_j$ and hence $M^i = \sum_{j} \lambda_j^i F_j$):
\[
\begin{aligned}
p(M) &= \sum_{i=1}^{n} a_i M^i \\
&= \sum_{i=1}^{n} a_i \left( \sum_{j=1}^{d} \lambda_j F_j \right)^{\!i} \\
&= \sum_{i=1}^{n} a_i \sum_{j=1}^{d} \lambda_j^i F_j \\
&= \sum_{j=1}^{d} \sum_{i=1}^{n} a_i \lambda_j^i F_j \\
&= \sum_{j=1}^{d} p(\lambda_j) F_j
\end{aligned}
\]
Evaluating $p_j(M)$ we have:
\[
p_j(M) = \sum_{k=1}^{d} p_j(\lambda_k) F_k
= \sum_{k=1}^{d} \left( \prod_{\substack{i = 1 \\ i \neq j}}^{m} \frac{\lambda_k - \lambda_i}{\lambda_j - \lambda_i} \right) F_k
\]
Note that $p_j(\lambda_k) = 0$ for all $k \neq j$, since the factor with $i = k$ vanishes. So:
\[
p_j(M) = \prod_{\substack{i = 1 \\ i \neq j}}^{m} \frac{\lambda_j - \lambda_i}{\lambda_j - \lambda_i} \cdot F_j = 1 \cdot F_j = F_j
\]
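The identity $p_j(M) = F_j$ can be checked on a small example (the Pauli-X matrix, an illustrative choice not taken from the exercise): X has eigenvalues +1 and -1, so the Lagrange polynomial for +1 is $p(x) = \frac{x - (-1)}{1 - (-1)} = \frac{x+1}{2}$, and $p(X) = (X + I)/2$ should be the spectral projector onto the +1 eigenspace.

```python
# Lagrange polynomial for eigenvalue +1 of X, applied to X itself:
# p(X) = (X + I)/2 should be idempotent and act as the +1 projector.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]
F = [[(X[i][j] + I2[i][j]) / 2 for j in range(2)] for i in range(2)]

assert matmul(F, F) == F          # F is idempotent: a projection
assert matmul(X, F) == F          # X acts as +1 on the range of F
```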
Exercise 5 (6).
(a) (→) Since $P_1 + P_2$ is an orthogonal projection, we have that:
\[
P_1 + P_2 = (P_1 + P_2)^2 = P_1^2 + P_1 P_2 + P_2 P_1 + P_2^2 = P_1 + P_1 P_2 + P_2 P_1 + P_2
\]
It implies that $P_1 P_2 + P_2 P_1 = 0$ (*). Multiplying both sides on the left by $P_1$ we obtain $P_1 P_2 + P_1 P_2 P_1 = 0$. Subtracting the two equations, we have $P_2 P_1 - P_1 P_2 P_1 = 0$. Then:
P2 P1 − P1 P2 P1 = 0 =⇒ P2 P1 = P1 P2 P1
=⇒ (P2 P1 )∗ = (P1 P2 P1 )∗
=⇒ P1 P2 = P1 P2 P1
And then P2 P1 = P1 P2 P1 = P1 P2 =⇒ P1 P2 − P2 P1 = 0. Adding this equation with (*)
we obtain:
2P1 P2 = 0 =⇒ P1 P2 = 0
(←) Suppose $P_1 P_2 = 0$. Then, since the $P_i$ are Hermitian, $0 = (P_1 P_2)^* = P_2 P_1$ as well, and:
\[
(P_1 + P_2)^2 = P_1^2 + P_1 P_2 + P_2 P_1 + P_2^2 = P_1 + P_2
\]
Furthermore, $(P_1 + P_2)^* = P_1^* + P_2^* = P_1 + P_2$, so $P_1 + P_2$ is an orthogonal projection.
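A numerical sketch of part (a), with hypothetical 2×2 projections chosen for illustration: the sum of two orthogonal projections is a projection exactly when their product is zero.

```python
# P1, P2 project onto orthogonal lines (P1 P2 = 0), so P1 + P2 is a
# projection; Q projects onto a line not orthogonal to P1's, and P1 + Q
# fails to be idempotent.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

P1 = [[1, 0], [0, 0]]           # projection onto span{e1}
P2 = [[0, 0], [0, 1]]           # projection onto span{e2}
S = add(P1, P2)
assert matmul(P1, P2) == [[0, 0], [0, 0]]
assert matmul(S, S) == S        # S^2 = S: P1 + P2 is a projection

Q = [[0.5, 0.5], [0.5, 0.5]]    # projection onto span{(1,1)}; P1 Q != 0
T = add(P1, Q)
assert matmul(T, T) != T        # (P1 + Q)^2 != P1 + Q: not a projection
```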
(b) (→) Proof by induction. The base case is the previous item.