Schmidt Examples
Physics 342
by: Nina Coyle ([email protected])
1 Schmidt decomposition
We're finding the Schmidt decomposition of three different states. There are two possible methods (among others) that we can use to find the decomposition: by inspection, or by finding the eigenstates of the reduced density matrices. We'll solve the first two by inspection, but the last will require some more work to figure out.
\[
\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right) = \frac{1}{\sqrt{2}}\,|0\rangle|0\rangle + \frac{1}{\sqrt{2}}\,|1\rangle|1\rangle = \sum_{i=0}^{1}\frac{1}{\sqrt{2}}\,|i_A\rangle|i_B\rangle \tag{1.1}
\]
\[
\frac{1}{2}\left(|00\rangle + |01\rangle + |10\rangle + |11\rangle\right) = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right) \otimes \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right) \tag{1.2}
\]
where we've solved the second one by noting that we have every combination of $|x_A, x_B\rangle$, all with the same coefficient, so it must be a product of $(|0\rangle + |1\rangle)$ states.[1] Also note that we have included the $1/\sqrt{2}$ in each state $|\lambda_{A,B}\rangle$ to ensure that they are normalized, as they need to be for a Schmidt decomposition.
For the last state, we can solve by finding the eigenstates of the reduced density matrices.[2] This works because the eigenstates of a Hermitian matrix are mutually orthogonal, and because if we take, for example, the trace over $B$ of $\rho = \sum_{i,j}\sqrt{\lambda_i\lambda_j}\,|\lambda_{i,A}\rangle|\lambda_{i,B}\rangle\langle\lambda_{j,A}|\langle\lambda_{j,B}|$, we end up with $\rho_A = \sum_i \lambda_i\,|\lambda_{i,A}\rangle\langle\lambda_{i,A}|$. Notice that this successfully reproduces both reduced density matrices (you could also trace over $A$ to get $\rho_B$, finding the equivalent result), so the sum $\sum_i \sqrt{\lambda_i}\,|\lambda_{i,A}\rangle|\lambda_{i,B}\rangle$ gives us the correct Schmidt decomposition for the state.
\[
|\psi\rangle = \frac{1}{\sqrt{3}}\left(|00\rangle + |01\rangle + |10\rangle\right) \tag{1.3}
\]
\[
\Rightarrow \rho = |\psi\rangle\langle\psi| = \frac{1}{3}\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \tag{1.4}
\]
[1] You might ask why we didn't simply use a sum of states of the form $|0\rangle|0\rangle$, $|0\rangle|1\rangle$, etc. The reason is that the $|i_A\rangle$ need to be orthogonal to one another (and likewise each $|i_B\rangle$ with the other $|i_B\rangle$), and $|0_A\rangle$ is certainly not orthogonal to itself. Since $|0_A\rangle$ would need to show up twice (once for $|00\rangle$ and once for $|01\rangle$), we cannot use this as the decomposition. This is unlike the first state, where each of $|0_A\rangle$ and $|1_A\rangle$ is only used once in the sum.
To see how we got this, consider the "bra side" of the density matrix as telling you the vector you're giving the matrix, and the "ket side" as telling you what vector you get from that. So for example, if I give the matrix $|10\rangle$, I get $\frac{1}{3}(|00\rangle + |01\rangle + |10\rangle)$. The columns of a matrix are the new vectors you get from the original basis vectors, so the corresponding third column is $(1, 1, 1, 0)$.
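As a quick numerical check, here is a minimal numpy sketch (assuming the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$) that confirms this column reading of $\rho$:

```python
# Minimal sketch: build rho = |psi><psi| for |psi> = (|00> + |01> + |10>)/sqrt(3)
# in the basis ordering (|00>, |01>, |10>, |11>) and inspect its third column.
import numpy as np

psi = np.array([1, 1, 1, 0]) / np.sqrt(3)
rho = np.outer(psi, psi.conj())

print(rho * 3)        # the matrix of eq. (1.4), times 3
print(rho[:, 2] * 3)  # third column (from |10>): expect (1, 1, 1, 0)
```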
Now we take the trace over $B$ to find the reduced density matrix $\rho_A$. Note that since the original $|\psi\rangle$ is symmetric with respect to $A, B$, this will be the same as $\rho_B$.
\begin{align}
\rho_A = \mathrm{Tr}_B(\rho) &= \sum_{i=0}^{1} \langle i_B|\rho|i_B\rangle \tag{1.5}\\
&= \frac{1}{3}\,\langle 0_B|\Big(|00\rangle + |01\rangle + |10\rangle\Big)\Big(\langle 00| + \langle 01| + \langle 10|\Big)|0_B\rangle \tag{1.6}\\
&\quad + \frac{1}{3}\,\langle 1_B|\Big(|00\rangle + |01\rangle + |10\rangle\Big)\Big(\langle 00| + \langle 01| + \langle 10|\Big)|1_B\rangle \tag{1.7}\\
&= \frac{1}{3}\Big(|0\rangle + |1\rangle\Big)\Big(\langle 0| + \langle 1|\Big) + \frac{1}{3}\,|0\rangle\langle 0| \tag{1.8}\\
&= \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \tag{1.9}
\end{align}
Now we find the eigenstates and eigenvalues. You can put this into something like Mathematica to find
\[
|\lambda_1\rangle = \sqrt{\frac{2}{5+\sqrt{5}}}\begin{pmatrix} (1+\sqrt{5})/2 \\ 1 \end{pmatrix} \qquad \lambda_1 = \frac{1}{6}\left(3+\sqrt{5}\right) \tag{1.10}
\]
\[
|\lambda_2\rangle = \sqrt{\frac{2}{5-\sqrt{5}}}\begin{pmatrix} (1-\sqrt{5})/2 \\ 1 \end{pmatrix} \qquad \lambda_2 = \frac{1}{6}\left(3-\sqrt{5}\right) \tag{1.11}
\]
(Note the eigenvalues sum to 1, as they must for a density matrix.)
Finally, we write down the Schmidt decomposition as:
\[
|\psi\rangle = \sum_{i=1}^{2}\sqrt{\lambda_i}\,|\lambda_i\rangle|\lambda_i\rangle \tag{1.12}
\]
using the eigenstates and eigenvalues found above, noting that because $\rho_A = \rho_B$, we will find the same eigenvalues and eigenstates for each one (up to a sign in each pair: the relative sign between $|\lambda_{i,A}\rangle$ and $|\lambda_{i,B}\rangle$ must be chosen so that each Schmidt coefficient $\sqrt{\lambda_i}$ comes out positive).
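As a sanity check on these numbers, here is a minimal numpy sketch that pulls the Schmidt data from the singular value decomposition of the coefficient matrix $c_{ab}$; the squared singular values should match the eigenvalues above:

```python
# Sketch: Schmidt data of |psi> = (|00> + |01> + |10>)/sqrt(3) from the SVD
# of its coefficient matrix c_{ab} (basis ordering |00>, |01>, |10>, |11>).
import numpy as np

psi = np.array([1, 1, 1, 0]) / np.sqrt(3)
M = psi.reshape(2, 2)                       # M[a, b] = c_{ab}

U, s, Vh = np.linalg.svd(M)                 # |psi> = sum_i s_i |u_i>|v_i>
print(s**2)                                 # Schmidt weights: (3+sqrt(5))/6, (3-sqrt(5))/6
print(np.linalg.eigvalsh(M @ M.conj().T))   # eigenvalues of rho_A: same numbers

rebuilt = sum(s[i] * np.kron(U[:, i], Vh[i, :]) for i in range(2))
print(np.allclose(rebuilt, psi))            # True
```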
[2] Thanks to Katrina for bringing up this approach in office hours.
2 Product states and Schmidt decomposition
First we show that a state is a product state iff its Schmidt number is 1. For an iff statement, we need to prove both directions.

First let's suppose $|\psi\rangle$ is a product state. This means we can write
\[
|\psi\rangle = |a\rangle \otimes |b\rangle
\]
for some $|a\rangle$ in space $A$ and some $|b\rangle$ in space $B$. This already gives us a Schmidt decomposition and there is only one term, so the Schmidt number is 1. This proves the first direction.

Now suppose the Schmidt number is 1. This means we can write the state $|\psi\rangle$ as
\[
|\psi\rangle = |a\rangle \otimes |b\rangle
\]
for some normalized $|a\rangle$ in $A$ and some normalized $|b\rangle$ in $B$ (normalization of $|\psi\rangle$ forces the single Schmidt coefficient to be 1). This is the definition of a product state, so $|\psi\rangle$ must be a product state. This proves the second direction, so we are done.
Now we want to show that a state is a product state iff $\rho_A$ and $\rho_B$ are pure states.

First let's suppose that $|\psi\rangle$ is a product state. Then we can write
\[
\rho = |\psi\rangle\langle\psi| = |a\rangle\langle a| \otimes |b\rangle\langle b|
\]
so that $\rho_A = \mathrm{Tr}_B(\rho) = |a\rangle\langle a|$ and $\rho_B = \mathrm{Tr}_A(\rho) = |b\rangle\langle b|$ are both pure states.

Now suppose that $\rho_A$ and $\rho_B$ are pure states. Then we can use the eigenstates to write down the Schmidt decomposition: a pure $\rho_A = |a\rangle\langle a|$ has the single nonzero eigenvalue 1, so the Schmidt decomposition contains the single term $|\psi\rangle = |a\rangle|b\rangle$. This is a product state, so we are done.
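To make this concrete, here is a minimal numpy sketch (the helper `schmidt_number` is a name I'm introducing for illustration) that counts the nonzero singular values of the coefficient matrix, which is exactly the Schmidt number for a two-qubit pure state:

```python
# Sketch: Schmidt number = number of nonzero singular values of the
# coefficient matrix c_{ab} (basis ordering |00>, |01>, |10>, |11>).
import numpy as np

def schmidt_number(psi, tol=1e-12):
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > tol))

product = np.array([1, 1, 1, 1]) / 2        # eq. (1.2): a product state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # eq. (1.1): entangled

print(schmidt_number(product))  # 1 -> product state
print(schmidt_number(bell))     # 2 -> entangled
```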
3 Multipartite systems
(i) We’re considering the Schmidt decomposition for a tripartite system. First of all, we can
show that there does not necessarily exist a Schmidt decomposition for every three-particle
system. To see this, consider the state
\[
|\psi\rangle = \frac{|000\rangle + |011\rangle}{\sqrt{2}} \tag{3.1}
\]
Recall that the Schmidt decomposition is written as
\[
|\psi\rangle = \sum_i \alpha_i\,|i_A\rangle|i_B\rangle|i_C\rangle \tag{3.2}
\]
so if we take the partial trace over a subsystem, we see that the eigenvalues of the resulting reduced density matrices must be the same no matter which subsystem we trace over. Thus, to show this state does not have a Schmidt decomposition we will look at the eigenvalues of the reduced density matrices.[3] Tracing over $A$ gives
\[
\rho_{BC} = \mathrm{Tr}_A(\rho) = \frac{1}{2}\Big(|00\rangle + |11\rangle\Big)\Big(\langle 00| + \langle 11|\Big)
\]
which has eigenvalues $1, 0, 0, 0$, while tracing over $B$ gives
\[
\rho_{AC} = \mathrm{Tr}_B(\rho) = \frac{1}{2}\Big(|00\rangle\langle 00| + |01\rangle\langle 01|\Big)
\]
which has eigenvalues $1/2, 1/2, 0, 0$. The two sets of eigenvalues disagree, so we've shown this state does not have a Schmidt decomposition.
[3] Thanks to Zach and Rachel for suggesting this approach in office hours.
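Here is a minimal numpy sketch of this eigenvalue comparison (the helper `reduced_evals` is my own naming), confirming that the two partial traces give different spectra:

```python
# Sketch: eigenvalues of the two-party reduced density matrices of
# |psi> = (|000> + |011>)/sqrt(2), tracing out one subsystem at a time.
import numpy as np

psi = np.zeros(8)
psi[0b000] = psi[0b011] = 1 / np.sqrt(2)
T = psi.reshape(2, 2, 2)                     # amplitudes psi_{abc}

def reduced_evals(traced_axis):
    M = np.moveaxis(T, traced_axis, 0).reshape(2, 4)
    rho = M.T @ M.conj()                     # rho_{xy} = sum_a psi_{ax} psi*_{ay}
    return np.round(np.linalg.eigvalsh(rho), 6)

print(reduced_evals(0))   # trace over A -> rho_BC: [0, 0, 0, 1]
print(reduced_evals(1))   # trace over B -> rho_AC: [0, 0, 0.5, 0.5]
```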
We can also propose some process that gives us similar information to the Schmidt decomposition. Recall that the Schmidt number is the number of terms in the Schmidt decomposition, and is a characterization of whether or not a state is entangled (as we saw in problem 2). We can propose the Schmidt vector, which performs the Schmidt decomposition of each bipartite subsystem and finds the Schmidt number for each. For a tripartite system, we would have three numbers, $\vec{r} = (r_A, r_B, r_C)$, where each $r_n$ indicates the Schmidt number of the bipartite system obtained by tracing over subsystem $n$, $\mathrm{Tr}_n(\rho)$; a sketch of this computation appears below.
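Here is a minimal sketch of the Schmidt vector (my own illustration: it interprets each $r_n$ as the number of nonzero Schmidt coefficients across the cut separating subsystem $n$ from the rest, which equals the rank of $\mathrm{Tr}_n(\rho)$; `schmidt_vector` is a name I'm introducing):

```python
# Sketch: "Schmidt vector" of a three-qubit pure state, taking the Schmidt
# number of each bipartition n | rest from the singular values across that cut.
import numpy as np

def schmidt_vector(psi, tol=1e-12):
    T = psi.reshape(2, 2, 2)
    ranks = []
    for axis in range(3):                       # keep subsystem `axis`, cut off the rest
        M = np.moveaxis(T, axis, 0).reshape(2, 4)
        s = np.linalg.svd(M, compute_uv=False)  # Schmidt coefficients across this cut
        ranks.append(int(np.sum(s > tol)))
    return tuple(ranks)

ghz = np.zeros(8); ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
psi = np.zeros(8); psi[0b000] = psi[0b011] = 1 / np.sqrt(2)

print(schmidt_vector(ghz))  # (2, 2, 2)
print(schmidt_vector(psi))  # (1, 2, 2): A is unentangled with BC
```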
(ii) Now we're looking at the separability of a bipartite system. Since the definition of a separable density matrix is that it can be written as a sum of tensor products of pure-state $\rho$'s,
\[
\rho = \sum_k p_k\, \rho_A^k \otimes \rho_B^k \tag{3.11}
\]
we can notice that the following must also be a valid density matrix:
\[
\rho' = \sum_k p_k\, (\rho_A^k)^T \otimes \rho_B^k \tag{3.12}
\]
since the complex conjugate $(\rho_A^k)^* = ((\rho_A^k)^\dagger)^T = (\rho_A^k)^T$ is also a valid density matrix (here we used that $\rho^\dagger = \rho$ for a density matrix), and thus must have non-negative eigenvalues. This is known as the positive partial transpose[4] (PPT) or Peres-Horodecki criterion. In general this is a necessary condition for the density matrix to be separable, but not necessarily sufficient (that is, it has to satisfy this criterion to be separable, but it is not necessarily separable if it satisfies it). However, although we won't prove it here, it turns out that in 6 or fewer total dimensions ($2\times 2$ and $2\times 3$ systems) this is also sufficient.
Let's write out the indices. The indices for a tensor product are the following:[5]
\[
\rho_{m\mu,n\nu} = \sum_k p_k\, (\rho_A^k)_{mn}\, (\rho_B^k)_{\mu\nu} \tag{3.13}
\]
\[
\rho'_{m\mu,n\nu} = \sum_k p_k\, (\rho_A^k)_{nm}\, (\rho_B^k)_{\mu\nu} \tag{3.14}
\]
so we've swapped $m$ and $n$. To understand the notation $\rho_{m\mu,n\nu}$, recall that the tensor product of two $2\times 2$ matrices looks like the following:
\[
A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{pmatrix} \tag{3.15}
\]
\[
= \begin{pmatrix}
a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\
a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\
a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\
a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22}
\end{pmatrix} \tag{3.16}
\]
[4] Thanks to Nolan for informing me of this name in office hours.
[5] If this is unfamiliar to you, which is entirely understandable, I highly recommend taking a look at the Wikipedia page for the Kronecker product.
so the combination $m\mu$ tells us about the row and the combination $n\nu$ about the column (consider the labeling $a_{mn}b_{\mu\nu}$ and compare with the above matrix).

Why bother with this? Looking at the matrix $\rho'$ and requiring that it also satisfies the properties of a density matrix gives us some constraints on the allowed values of $\alpha$. The original density matrix comes out to be:
\[
\rho = \alpha\,|\beta_{11}\rangle\langle\beta_{11}| + (1-\alpha)\,\frac{\mathbb{1}}{4} \tag{3.17}
\]
\[
= \frac{1}{4}\begin{pmatrix}
1-\alpha & 0 & 0 & 0 \\
0 & 1+\alpha & -2\alpha & 0 \\
0 & -2\alpha & 1+\alpha & 0 \\
0 & 0 & 0 & 1-\alpha
\end{pmatrix} \tag{3.18}
\]
Switching $m$ and $n$ boils down to switching the upper right and lower left blocks, and gives the matrix:
\[
\rho' = \frac{1}{4}\begin{pmatrix}
1-\alpha & 0 & 0 & -2\alpha \\
0 & 1+\alpha & 0 & 0 \\
0 & 0 & 1+\alpha & 0 \\
-2\alpha & 0 & 0 & 1-\alpha
\end{pmatrix} \tag{3.19}
\]
Since this is a valid density matrix, we know that it should have real, non-negative eigenvalues. Putting $\rho'$ into Mathematica tells us that we have the following eigenvalues:
\[
\lambda_{1,2,3} = \frac{1+\alpha}{4} \qquad \lambda_4 = \frac{1-3\alpha}{4} \tag{3.20}
\]
Solving the last one for $\lambda_4 \ge 0$ gives us:
\[
\alpha \le \frac{1}{3} \tag{3.21}
\]
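A minimal numpy sketch (helper names mine) reproduces this bound by scanning the minimum eigenvalue of the partial transpose:

```python
# Sketch: PPT (Peres-Horodecki) check for rho = alpha |beta11><beta11| + (1-alpha) I/4.
# For two qubits PPT is also sufficient, so rho is separable iff min eigenvalue >= 0.
import numpy as np

beta11 = np.array([0, 1, -1, 0]) / np.sqrt(2)        # singlet (|01> - |10>)/sqrt(2)

def partial_transpose_A(rho):
    R = rho.reshape(2, 2, 2, 2)                      # indices (m, mu, n, nu)
    return R.transpose(2, 1, 0, 3).reshape(4, 4)     # swap m <-> n

for alpha in (0.2, 1/3, 0.4):
    rho = alpha * np.outer(beta11, beta11) + (1 - alpha) * np.eye(4) / 4
    rho_pt = partial_transpose_A(rho)
    print(alpha, np.linalg.eigvalsh(rho_pt).min())   # equals (1 - 3*alpha)/4
```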
4 Quantum Hall Effect
(i) We are introducing a magnetic field pointing in the $\hat{x}_3$ direction. Recall that for cases where there is no time dependence in $\vec{E}$ or $\vec{B}$, the magnetic field is related to the vector potential by $\vec{B} = \vec{\nabla} \times \vec{A}$. Also recall that the curl of a gradient is zero, which means that we can shift $\vec{A}$ by the gradient of any function $\lambda$:
\[
\vec{A} \to \vec{A} + \vec{\nabla}\lambda \tag{4.1}
\]
(ii) Note that without the magnetic field, $[p_i, H] = 0$, but we now have functions of $x_1, x_2$ in our Hamiltonian. In this part we will heavily use the fact that $[p_i, f(x)] = -i\hbar\,\partial f/\partial x_i$ for $p_i = -i\hbar\partial_i$. We calculate (with $j \neq i$):
\begin{align}
[p_i, H] &= \frac{1}{2m}\left[p_i,\ \left(p_i - \frac{e}{c}A_i\right)^2 + \left(p_j - \frac{e}{c}A_j\right)^2\right] \tag{4.2}\\
&= -\frac{1}{2m}\Bigg(\frac{e}{c}[p_i, p_iA_i] + \frac{e}{c}[p_i, A_ip_i] - \frac{e^2}{c^2}[p_i, A_i^2] \tag{4.3}\\
&\qquad\qquad + \frac{e}{c}[p_i, p_jA_j] + \frac{e}{c}[p_i, A_jp_j] - \frac{e^2}{c^2}[p_i, A_j^2]\Bigg) \tag{4.4}\\
&= \frac{i\hbar e}{2mc}\Bigg(p_i\frac{\partial A_i}{\partial x_i} + \frac{\partial A_i}{\partial x_i}p_i - \frac{e}{c}\frac{\partial A_i^2}{\partial x_i} + p_j\frac{\partial A_j}{\partial x_i} + \frac{\partial A_j}{\partial x_i}p_j - \frac{e}{c}\frac{\partial A_j^2}{\partial x_i}\Bigg) \tag{4.5}\\
&= \frac{i\hbar e}{2mc}\Bigg(\{p_i, \partial_iA_i\} + \{p_j, \partial_iA_j\} - \frac{e}{c}\,\partial_i\!\left(A_i^2 + A_j^2\right)\Bigg) \tag{4.6}\\
&= \frac{i\hbar e}{2mc}\Bigg(2(\partial_iA_i)\,p_i + 2(\partial_iA_j)\,p_j - i\hbar\left(\partial_i^2A_i + \partial_i\partial_jA_j\right) - \frac{e}{c}\,\partial_i\!\left(A_i^2 + A_j^2\right)\Bigg) \tag{4.7}
\end{align}
which is not in general equal to zero. However, we can make a choice of gauge that gets us $[p_2, H] = 0$ (you could also choose $p_1$; here I'm just choosing $p_2$):
\[
A_1 = 0 \qquad A_2 = Bx_1 \tag{4.8}
\]
and since $H$ has no dependence on $x_2$, all derivatives with respect to $x_2$ vanish and we are left with $[p_2, H] = 0$.
(iii) Defining $\pi_i = p_i - eA_i/c$, we can find the commutator $[\pi_1, \pi_2]$ to be:
\begin{align}
[\pi_1, \pi_2] &= \left[p_1 - \frac{eA_1}{c},\ p_2 - \frac{eA_2}{c}\right] \tag{4.11}\\
&= \frac{e}{c}\Big([A_2, p_1] - [A_1, p_2]\Big) \tag{4.12}\\
&= \frac{i\hbar e}{c}\left(\frac{\partial A_2}{\partial x_1} - \frac{\partial A_1}{\partial x_2}\right) \tag{4.13}\\
&= \frac{i\hbar e}{c}\,B_3 \tag{4.14}
\end{align}
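As a quick symbolic check of this sign, here is a minimal sympy sketch that applies $\pi_1$ and $\pi_2$ to a generic test function in the gauge $A_1 = 0$, $A_2 = Bx_1$:

```python
# Minimal sympy sketch: check [pi_1, pi_2] = i*hbar*(e/c)*B in the Landau
# gauge A_1 = 0, A_2 = B*x1, by acting on a test function f(x1, x2).
import sympy as sp

x1, x2, B, e, c, hbar = sp.symbols('x1 x2 B e c hbar')
f = sp.Function('f')(x1, x2)

def pi1(g):
    # pi_1 = p_1 = -i*hbar d/dx1 (since A_1 = 0 in this gauge)
    return -sp.I * hbar * sp.diff(g, x1)

def pi2(g):
    # pi_2 = p_2 - (e/c) A_2 with A_2 = B*x1
    return -sp.I * hbar * sp.diff(g, x2) - (e / c) * B * x1 * g

comm = sp.simplify(pi1(pi2(f)) - pi2(pi1(f)))
print(comm)  # I*B*e*hbar*f(x1, x2)/c, i.e. +i*hbar*(e/c)*B_3
```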
(iv) Following the hint in the problem, let's see if we can rewrite our Hamiltonian in some way that looks like a harmonic oscillator Hamiltonian. We'll first follow up on the observation made in part (ii), which is that we can make a gauge choice such that one of the $p_i$ commutes with $H$. I will choose $p_2$ as in part (ii), and so my choice for the vector potential is $A_1 = 0$, $A_2 = Bx_1$. This gives the Hamiltonian
\[
H = \frac{p_1^2}{2m} + \frac{1}{2m}\left(p_2 - \frac{eBx_1}{c}\right)^2 \tag{4.19}
\]
Acting on an eigenstate of $p_2$ with eigenvalue $\hbar k$, this becomes
\[
H = \frac{p_1^2}{2m} + \frac{1}{2}m\omega^2\left(x_1 - x_0\right)^2
\]
where we have completed the square and defined $x_0 = \hbar ck/qB$ and $\omega = |qB|/mc$. This is now just a 1D harmonic oscillator potential, shifted by a constant $x_0$ that depends on the $p_2$ eigenvalue $k$. Notice also that the choice of $p_2$ eigenstate does not affect the energy $E$. The functions $\nu(x_1)$ are therefore the 1D harmonic oscillator solutions $u_n(x_1 - x_0)$, and we have energy
\begin{align}
E_n &= \hbar\omega\left(n + \frac{1}{2}\right) \tag{4.22}\\
&= \frac{\hbar|qB|}{mc}\left(n + \frac{1}{2}\right) \tag{4.23}
\end{align}
The eigenstates of this Hamiltonian are
\[
\psi_{n,k}(x_1, x_2) = e^{ikx_2}\,u_n\big(x_1 - x_0(k)\big)
\]
which have an infinite degeneracy labeled by $k$. This degeneracy is related to the translation symmetry we have in $x_2$.
(v) Now we are placing our eigenstates in a box of dimensions $L_1$ and $L_2$ and counting the degeneracy. To place our state in a box, we require that the state is localized within the box.

For the $x_2$ direction, we need to choose some boundary conditions in order to localize our plane wave. We can choose periodic boundary conditions, in which case we require
\[
e^{ikL_2} = 1 \quad\Rightarrow\quad k = \frac{2\pi m}{L_2}, \quad m \in \mathbb{Z}
\]
For the $x_1$ direction, we can notice that the simple harmonic oscillator eigenfunctions are localized around their center point $x_0(k)$. Thus we require $0 \le x_0(k) \le L_1$, which translates into
\[
0 \le m \le \frac{|qB|L_2L_1}{2\pi\hbar c} = \frac{|qB|}{hc}A \tag{4.29}
\]
This tells us the degeneracy within our finite box of area $A = L_1L_2$ for a given $B$ field and particle with charge $q$: we can have up to $\mathrm{int}(|qB|A/hc)$ states localized within our finite box.
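For a sense of scale, here is a minimal numerical sketch in Gaussian units; the field strength and box area below are illustrative placeholders, not values from the problem:

```python
# Sketch: Landau-level degeneracy N = int(|q| B A / (h c)) in Gaussian units,
# for an electron. B and A are illustrative placeholder values.
import math

hbar = 1.054571817e-27  # erg*s
h = 2 * math.pi * hbar
c = 2.99792458e10       # cm/s
q = 4.80320425e-10      # esu (electron charge magnitude)
B = 1e4                 # gauss (= 1 tesla), placeholder
A = 1.0                 # cm^2, placeholder

N = math.floor(abs(q) * B * A / (h * c))
print(N)                # ~2.4e10 states per Landau level
```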
A SVD Schmidt decomposition
To be added soon