Quantum Computing Lecture 23
December 4, 2003
1 Introduction
Before von Neumann proposed classical fault-tolerance in the 1940's, it was assumed that a computational device comprised of more than 10^6 components could not perform a computation without encountering a fatal hardware error. Von Neumann proved that one could indeed make the computation work, as long as a certain degree of overhead was tolerable. This result is the classical fault-tolerance theorem.
We can sketch von Neumann's proof of fault-tolerance as follows: given classical AND, NOT, and OR gates, encode a 0 as a string of c log n 0's, where c is some constant.
0 → 0000000 (1)
Now take two identical copies of the encoded "1", call them a and b, and put them through the AND gate. Ideally one should get 1111111. Instead, suppose the strings received are 1110101 on a and 0111111 on b. Performing a bitwise AND of a and b gives 0110101. Taking triples of bits of this result, one performs a majority vote within each triple. Thus, if one bit has an error probability of p, two bits in a triple being erroneous occurs with probability about 3p^2. As long as p is small, one can perform operations fault-tolerantly. The same type of proof works for the NOT and OR gates, thus giving universality.
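The 3p^2 scaling of the majority vote can be checked with a short simulation; the sketch below (plain Python, with an assumed per-bit flip probability p = 0.01 and an assumed trial count) estimates how often the majority vote over a triple of independently noisy bits is wrong.

```python
import random

def majority(bits):
    """Majority vote over a triple of bits."""
    return 1 if sum(bits) >= 2 else 0

def flip(bit, p, rng):
    """Flip a bit with probability p (independent errors)."""
    return bit ^ (rng.random() < p)

p = 0.01                    # assumed per-bit error probability
rng = random.Random(0)
trials = 200_000
errors = 0
for _ in range(trials):
    # send the triple (1, 1, 1); each bit flips independently with prob p
    received = [flip(1, p, rng) for _ in range(3)]
    if majority(received) != 1:
        errors += 1

estimate = errors / trials
bound = 3 * p ** 2          # probability of >= 2 errors, to leading order
print(estimate, bound)      # estimate should be close to 3p^2 - 2p^3
```

The point of the sketch is that the voted error rate is quadratic in p, so for small p the encoded bit is far more reliable than any single component.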
In short, if the components of a computer have high enough fidelity, and adding additional components is relatively easy, then the computation is indeed plausible. As it turns out, the critics were too critical: your desktop computer does not even use software error correction for its computations, as the error probability p of our silicon-based hardware has become vanishingly small.
P. Shor – 18.435/2.111 Quantum Computation – Lecture 23 2
One classical technique is checkpointing: running the computation to a specific point, checking the result, then starting the computation again. The probabilistic nature of quantum mechanics and the no-cloning theorem make this technique useless for QC. Classically, one might also make many copies of the computation to perform a massive redundancy scheme; however, this fails like consistency checks do, as it is not "powerful enough" for quantum computations. Thus, one is left with error-correcting codes, which have previously been shown to carry over from the classical to the quantum domain.
Since dim(C1) = k and dim(C2) = k − 1, this code encodes a single qubit, and the codes satisfy the containments {0} ⊆ C2 ⊆ C1. The coset representative v lies in C1 but not in C2: v ∈ C1 − C2.
Since the encoded |0⟩ and |1⟩ are orthogonal, a logical σx should just interchange the two encodings. These two states differ by the vector v, and the interchange amounts to performing a σx on each individual qubit. Now suppose an error is made in performing σx on one of the qubits, where the errors on each qubit are uncorrelated. Then the code will be able to correct such errors, yielding a quantum error-correcting code together with an operation that performs σx fault-tolerantly.
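As a concrete instance (an illustration, not from the lecture), take C1 to be the [7,4] Hamming code and C2 its dual — the standard Steane-code choice. The sketch below builds the two encoded states as state vectors and checks that flipping all seven physical bits interchanges them. The generator matrix and the coset representative v (the all-ones string) are textbook values, stated here as assumptions.

```python
import numpy as np
from itertools import product

# Generator matrix of C2 = dual of the [7,4] Hamming code (rows are the
# Hamming parity checks).  Standard Steane-code values -- an assumption.
G2 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]])
v = np.ones(7, dtype=int)              # coset representative, v in C1 - C2

def span(G):
    """All GF(2) linear combinations of the rows of G."""
    return {tuple(np.array(c) @ G % 2) for c in product([0, 1], repeat=G.shape[0])}

def encode(words):
    """Equal superposition over a set of 7-bit strings, as a length-128 vector."""
    psi = np.zeros(2 ** 7)
    for w in words:
        psi[int("".join(map(str, w)), 2)] = 1.0
    return psi / np.sqrt(len(words))

C2 = span(G2)                                            # 8 codewords
E0 = encode(C2)                                          # E(|0>) = sum over C2
E1 = encode({tuple((np.array(w) + v) % 2) for w in C2})  # E(|1>) = coset C2 + v

# sigma_x on every physical qubit sends |x> to |x + 1111111>, i.e. flips
# every index bit -- a permutation of the 128 basis states.
perm = [i ^ 0b1111111 for i in range(2 ** 7)]
assert np.allclose(E0[perm], E1) and np.allclose(E1[perm], E0)
```

The two encoded states are orthogonal (disjoint cosets), and the transversal bit-flip swaps them exactly, matching the argument above.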
\[
\frac{1}{\sqrt{|C_2|}} \sum_{x \in C_2} |x+v\rangle \;\longrightarrow\; \frac{1}{\sqrt{|C_2|}} \sum_{x \in C_2} (-1)^{x \cdot w} (-1)^{v \cdot w} \, |x+v\rangle \;=\; -\frac{1}{\sqrt{|C_2|}} \sum_{x \in C_2} |x+v\rangle \tag{5}
\]
since x · w = 0 for every x ∈ C2, while v · w = 1.
\[
E(|0\rangle) \longrightarrow \frac{1}{\sqrt{2}}\bigl(E(|0\rangle) + E(|1\rangle)\bigr) \tag{6}
\]
\[
E(|1\rangle) \longrightarrow \frac{1}{\sqrt{2}}\bigl(E(|0\rangle) - E(|1\rangle)\bigr) \tag{7}
\]
A Hadamard transformation of each individual qubit, H^{⊗k}, applied to E(|0⟩) gives precisely the correct encoded transformation (6):
\[
H^{\otimes k} \, \frac{1}{\sqrt{|C_2|}} \sum_{x \in C_2} |x\rangle
= \frac{1}{2^{k/2} \sqrt{|C_2|}} \sum_{y \in \{0,1\}^k} \sum_{x \in C_2} (-1)^{x \cdot y} \, |y\rangle
= \frac{1}{\sqrt{|C_2^\perp|}} \sum_{y \in C_1} |y\rangle
= E\!\left(\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right)
\]
The last line follows because E(|0⟩) is composed of all codewords in C2 and E(|1⟩) is everything in C1 but not in C2, by definition. A Hadamard transformation applied to E(|1⟩) simply adds in the phase factor (−1)^{y·v}, which follows from the same calculation. This phase factor is unity if y ∈ C2, but −1 if y ∈ (C1 − C2), thus appropriately adding a phase to the states in the code that make up E(|1⟩).
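This can be verified numerically for the same assumed Steane-code construction (generators as in the earlier sketch, an assumption rather than the lecture's example): applying a Hadamard to each of the seven physical qubits reproduces equations (6) and (7) exactly.

```python
import numpy as np
from itertools import product
from functools import reduce

# Standard Steane-code generators (an assumption; any weakly self-dual
# CSS code with C2-perp = C1 would do).
G2 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]])
v = np.ones(7, dtype=int)

def encode(words):
    """Equal superposition over a set of 7-bit strings."""
    psi = np.zeros(2 ** 7)
    for w in words:
        psi[int("".join(map(str, w)), 2)] = 1.0
    return psi / np.sqrt(len(words))

C2 = {tuple(np.array(c) @ G2 % 2) for c in product([0, 1], repeat=3)}
E0 = encode(C2)
E1 = encode({tuple((np.array(w) + v) % 2) for w in C2})

# Hadamard on each of the 7 physical qubits.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H7 = reduce(np.kron, [H1] * 7)

# Equations (6) and (7): the encoded states transform like one encoded qubit.
assert np.allclose(H7 @ E0, (E0 + E1) / np.sqrt(2))
assert np.allclose(H7 @ E1, (E0 - E1) / np.sqrt(2))
```

The minus sign on E(|1⟩) is exactly the (−1)^{y·v} phase discussed above: for this code v is the all-ones vector, so the phase is +1 on the even-weight words of C2 and −1 on the odd-weight words of C1 − C2.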
\[
U_{CNOT}^{\otimes k} \, \frac{1}{|C_2|} \sum_{x \in C_2} \sum_{y \in C_2} |x+v_a\rangle \otimes |y+v_b\rangle
= \frac{1}{|C_2|} \sum_{x \in C_2} \sum_{y \in C_2} |x+v_a\rangle \otimes |x+y+v_a+v_b\rangle
\]
\[
= \frac{1}{|C_2|} \sum_{x \in C_2} \sum_{y \in C_2} |x+v_a\rangle \otimes |y+v_a+v_b\rangle,
\]
where the last line relabels x + y → y (for fixed x ∈ C2, x + y runs over C2 as y does). This is exactly the encoded CNOT.
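The transversal CNOT calculation can also be checked numerically. The sketch below (an illustration, not from the lecture) again uses the assumed Steane-code generators and verifies that bitwise CNOT between two encoded blocks acts as the encoded CNOT on all four logical basis states.

```python
import numpy as np
from itertools import product

# Assumed Steane-code construction, as in the earlier sketches.
G2 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]])
v = np.ones(7, dtype=int)

def encode(words):
    psi = np.zeros(2 ** 7)
    for w in words:
        psi[int("".join(map(str, w)), 2)] = 1.0
    return psi / np.sqrt(len(words))

C2 = {tuple(np.array(c) @ G2 % 2) for c in product([0, 1], repeat=3)}
E = [encode(C2),
     encode({tuple((np.array(w) + v) % 2) for w in C2})]   # E[0], E[1]

# Transversal CNOT: bit i of the first block controls bit i of the second.
# On basis states |x>|y> -> |x>|x XOR y>, a permutation of the 2^14 indices.
def transversal_cnot(state):
    out = np.zeros_like(state)
    for idx in range(2 ** 14):
        x, y = idx >> 7, idx & 0b1111111
        out[(x << 7) | (x ^ y)] = state[idx]
    return out

for a, b in product([0, 1], repeat=2):
    in_state = np.kron(E[a], E[b])
    out_state = np.kron(E[a], E[a ^ b])   # encoded |a>|b> -> |a>|a XOR b>
    assert np.allclose(transversal_cnot(in_state), out_state)
```

Because each physical CNOT couples only one qubit of each block, an error on one physical gate touches at most one qubit per block, which is what makes this construction fault-tolerant.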
Measurement of |ψ⟩ in the canonical basis will result in either an odd-parity bit string, indicating a syndrome of 1, or an even-parity string for a 0 syndrome. The state |ψ⟩ can be created by applying the Hadamard transformation to the "cat" state.
\[
H^{\otimes k} |\psi\rangle = \frac{1}{\sqrt{2}} \bigl(|0\rangle + |1\rangle\bigr) \tag{9}
\]
(The state |x⟩ here represents the string of k copies of the bit x.)
Now suppose a maximally entangled state can be created and verified by performing a few CNOT gates between the bits of the cat state and an ancilla. Fault tolerance is not an issue here; one only wants to know whether the state is maximally entangled. Measuring |0⟩ on the ancilla for a reasonable number of test qubits ensures that the state is in some superposition of the states |0⟩ and |1⟩. The Hadamard transform of this state is:
\[
H^{\otimes k} \bigl(\alpha |0\rangle + \beta |1\rangle\bigr)
= \Bigl(\frac{\alpha+\beta}{\sqrt{2}}\Bigr) \frac{1}{2^{(k-1)/2}} \sum_{s\ \mathrm{even}} |s\rangle
+ \Bigl(\frac{\alpha-\beta}{\sqrt{2}}\Bigr) \frac{1}{2^{(k-1)/2}} \sum_{s\ \mathrm{odd}} |s\rangle \tag{10}
\]
Thus if α = β, the state is all zeros and no back-action will occur. The all-ones state simply adds the all-ones vector to the qubits. Thus, fault-tolerant measurements can be obtained.
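Equation (10) is easy to check numerically for small k; the sketch below (k = 3, with arbitrary assumed real amplitudes α, β) confirms that every even-parity string carries amplitude proportional to α + β and every odd-parity string to α − β.

```python
import numpy as np
from functools import reduce

k = 3
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hk = reduce(np.kron, [H1] * k)        # Hadamard on each of the k qubits

# |0...0> and |1...1> on k qubits
zeros = np.zeros(2 ** k); zeros[0] = 1.0
ones = np.zeros(2 ** k); ones[-1] = 1.0

alpha, beta = 0.6, 0.8                # assumed amplitudes, |a|^2 + |b|^2 = 1
state = Hk @ (alpha * zeros + beta * ones)

# Equation (10): amplitudes depend only on the parity of the bit string s.
for s in range(2 ** k):
    parity = bin(s).count("1") % 2
    expected = (alpha - beta if parity else alpha + beta) / (np.sqrt(2) * 2 ** ((k - 1) / 2))
    assert np.isclose(state[s], expected)
```

In particular, when α = β every odd-parity amplitude vanishes, which is the no-back-action case described above.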