
CS 294-2 Simon’s Algorithm 9/28/04

Fall 2004 Lecture 7

0.1 Overview
Recall that our basic primitive for designing quantum algorithms is Fourier sampling: prepare some quantum state |ψ⟩ = ∑_x α_x |x⟩ on n qubits; perform a Hadamard transform, resulting in the superposition ∑_x β_x |x⟩; now measure to sample x with probability |β_x|². The point is that classically it is difficult to simulate the effects of quantum interference, and therefore to determine for which strings x there is constructive interference, making those strings the likely outputs of the measurement.
How do we set up the initial superposition |ψ⟩ = ∑_x α_x |x⟩? So far we have done so classically: for classically computable functions f and g, we can set up superpositions such as ∑_x (−1)^{f(x)} g(x) |x⟩. A major innovation in Simon's algorithm is the use of quantum techniques to set up the initial superposition.
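As a point of reference, the effect of the Hadamard transform on an amplitude vector can be simulated classically (inefficiently, in time exponential in n). The sketch below is ours, not part of the lecture; the name `hadamard` is illustrative, and the code simply evaluates the definition of the transform.

```python
import random

def hadamard(alphas):
    """beta_y = 2^{-n/2} * sum_x (-1)^{x.y} alpha_x, for len(alphas) = 2^n."""
    n = len(alphas).bit_length() - 1
    dot = lambda x, y: bin(x & y).count("1") % 2   # inner product mod 2
    return [sum((-1) ** dot(x, y) * ax for x, ax in enumerate(alphas)) / 2 ** (n / 2)
            for y in range(len(alphas))]

# Starting from |0...0>, the Hadamard transform gives the uniform
# superposition, so Fourier sampling returns every string equally often.
n = 3
betas = hadamard([1.0] + [0.0] * (2**n - 1))
probs = [b * b for b in betas]
assert all(abs(p - 1 / 2**n) < 1e-12 for p in probs)
sample = random.choices(range(2**n), probs)[0]
```

This also illustrates why the primitive is only interesting when there is interference: starting from |0···0⟩ there is none, and sampling returns a uniformly random string.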
Suppose we're given a function f : {0, 1}^n → {0, 1}^n, specified by a black box. (Note that the outputs of f are n-bit strings, rather than single bits.) We're promised the following about f : there exists a nonzero secret string a ∈ {0, 1}^n such that

• For all inputs x ∈ {0, 1}^n, f(x) = f(x ⊕ a).

• For all inputs x, y ∈ {0, 1}^n, if x ≠ y ⊕ a, then f(x) ≠ f(y).

These conditions mean that f is a 2-to-1 function, and that any two inputs mapping to the same output differ in exactly those positions i for which a_i = 1. For example, take f(x) = 2⌊x/2⌋. For any k, f(2k) = f(2k + 1), but if i and j do not form such a pair (2k, 2k + 1), then f(i) ≠ f(j). In this example, a = 0···01.
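To make the promise concrete, here is a small sketch (ours, not from the lecture) that builds a random function satisfying both conditions for a chosen secret a, and also checks the lecture's example f(x) = 2⌊x/2⌋, which hides a = 1.

```python
import random

def make_simon_function(n, a):
    """Return a dict f on n-bit integers with f[x] == f[x ^ a], 2-to-1."""
    assert 0 < a < 2**n
    f, outputs = {}, list(range(2**n))
    random.shuffle(outputs)              # arbitrary distinct output labels
    next_label = 0
    for x in range(2**n):
        if x not in f:                   # pair x with x ^ a: same output
            f[x] = f[x ^ a] = outputs[next_label]
            next_label += 1
    return f

n, a = 4, 0b0110
f = make_simon_function(n, a)
assert all(f[x] == f[x ^ a] for x in range(2**n))      # first promise
assert len(set(f.values())) == 2**(n - 1)              # 2-to-1: second promise

g = {x: 2 * (x // 2) for x in range(2**n)}             # lecture's example
assert all(g[x] == g[x ^ 1] for x in range(2**n))      # hidden a = 1
```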

0.2 Simon’s Algorithm


Let x ⊕ y denote the bitwise mod-2 addition of x and y, and x · y denote the inner product of x and y, ∑_{i=1}^{n} x_i y_i mod 2. We now present Simon's quantum algorithm for finding a. The algorithm uses two registers, both with n qubits. The registers are initialized to the basis state |0···0⟩ |0···0⟩. We then perform the

[Circuit: the first register |0^n⟩ passes through H_{2^n}, then the oracle C_f, then a second H_{2^n} before being measured to give |y⟩; the second register |0^n⟩ holds |f(x)⟩ after the oracle.]

Figure 1: Simon's algorithm



Hadamard transform H_{2^n} on the first register, producing the superposition

    (1/2^{n/2}) ∑_{x ∈ {0,1}^n} |x⟩ |0···0⟩ .

Then, we compute f(x) through the oracle C_f and store the result in the second register, obtaining the state

    (1/2^{n/2}) ∑_{x ∈ {0,1}^n} |x⟩ |f(x)⟩ .

The second register is not modified after this step. Thus we may invoke the principle of safe storage and assume that the second register is measured at this point.

Let f(z) be the result of measuring the second register. By the definition of f, there are exactly two inputs x such that f(x) = f(z): one is z and the other is z ⊕ a. The quantum state after measuring is

    ( (1/√2) |z⟩ + (1/√2) |z ⊕ a⟩ ) |f(z)⟩ .
We're now done with the second register, so in the discussion to follow, we'll omit it from our notation. The state of the n-qubit first register,

    (1/√2) |z⟩ + (1/√2) |z ⊕ a⟩,

clearly contains some information about a; the question is how to extract it. If we measured at this point, we would see either z or z ⊕ a, a string distributed uniformly at random over {0, 1}^n and hence containing no information about a at all. Therefore some more computation is required. The key, once again, is to apply the Hadamard transform H_{2^n} to the register. Doing so, we obtain a superposition

    ∑_{y ∈ {0,1}^n} α_y |y⟩

where

    α_y = (1/√2)(1/2^{n/2}) (−1)^{y·z} + (1/√2)(1/2^{n/2}) (−1)^{y·(z⊕a)} = (1/2^{(n+1)/2}) (−1)^{y·z} [1 + (−1)^{y·a}] .
There are now two cases. For each y, if y · a = 1, then α_y = 0, whereas if y · a = 0, then

    α_y = ±1 / 2^{(n−1)/2} .
So when we observe the first register, with certainty we'll see a y such that y · a = 0. Hence, the output of the measurement is a random y such that y · a = 0; furthermore, each such y occurs with equal probability. Therefore what we've managed to learn is an equation

    y_1 a_1 ⊕ · · · ⊕ y_n a_n = 0        (1)

where y = (y_1, . . . , y_n) is chosen uniformly at random among the strings satisfying y · a = 0. Now, that isn't enough information to determine a, but assuming that y ≠ 0, it reduces the number of possibilities for a by half.
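A single run of the algorithm can be simulated classically from the amplitudes just derived: pick z at random, weight each y by |α_y|², and sample. The sketch below is ours, not from the lecture (the names `dot` and `simon_sample` are illustrative); it confirms that every sampled y satisfies y · a = 0.

```python
import random

def dot(x, y):
    """Inner product x . y over Z_2 (parity of the bitwise AND)."""
    return bin(x & y).count("1") % 2

def simon_sample(n, a):
    """Return one y distributed as in a single run of Simon's algorithm."""
    z = random.randrange(2**n)       # measuring f picks a random pair {z, z^a}
    # unnormalized amplitudes alpha_y ~ (-1)^{y.z} [1 + (-1)^{y.a}]
    amps = [(-1) ** dot(y, z) * (1 + (-1) ** dot(y, a)) for y in range(2**n)]
    probs = [amp * amp for amp in amps]
    total = sum(probs)
    return random.choices(range(2**n), [p / total for p in probs])[0]

n, a = 5, 0b10110
ys = [simon_sample(n, a) for _ in range(200)]
assert all(dot(y, a) == 0 for y in ys)   # every sample satisfies y . a = 0
```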
It should now be clear how to proceed. We run the algorithm over and over, accumulating more and more
equations of the form in (1). Then, once we have enough of these equations, we solve them using Gaussian



elimination to obtain a unique value of a. But how many equations is enough? From linear algebra, we know that the nonzero string a is uniquely determined once we have n − 1 linearly independent equations; in other words, n − 1 equations

    y^(1) · a ≡ 0 (mod 2)
        ⋮
    y^(n−1) · a ≡ 0 (mod 2)

such that the set {y^(1), . . . , y^(n−1)} is linearly independent in the vector space Z_2^n. (Such a system determines a only up to the two solutions 0 and a, and the promise a ≠ 0 picks out a.) Thus, our strategy will be to lower-bound the probability that the n − 1 equations returned by the algorithm are independent.
Suppose we already have k linearly independent equations, with associated vectors y^(1), . . . , y^(k). These vectors span a subspace S ⊆ Z_2^n of size 2^k, consisting of all vectors of the form

    b_1 y^(1) + · · · + b_k y^(k)

with b_1, . . . , b_k ∈ {0, 1}. Now suppose we learn a new equation with associated vector y^(k+1). This equation will be independent of all the previous equations provided that y^(k+1) lies outside of S, which happens with probability at least (2^n − 2^k)/2^n = 1 − 2^{k−n}. So the probability that n − 1 successive equations are all independent is at least the product

    (1 − 1/2^n) × (1 − 1/2^{n−1}) × · · · × (1 − 1/4) × (1 − 1/2) .
Can we lower-bound this expression? Trivially, it's at least

    ∏_{k=1}^{∞} (1 − 1/2^k) ≈ 0.28879;

the infinite product here is related to something in analysis called a q-series. Another way to look at the constant 0.28879 . . . is this: it is the limit, as n goes to infinity, of the probability that an n × n random matrix over Z_2 is invertible.
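The constant is easy to check numerically, since the partial products of (1 − 1/2^k) converge very quickly. (This computation is ours, not part of the lecture.)

```python
# Partial products of prod_{k>=1} (1 - 1/2^k); by k = 63 the remaining
# factors differ from 1 by less than 2^{-64}, so this is accurate.
prod = 1.0
for k in range(1, 64):
    prod *= 1.0 - 0.5 ** k
print(round(prod, 5))   # -> 0.28879
```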
But we don't need heavy-duty analysis to show that the product has a constant lower bound. We use the inequality (1 − a)(1 − b) = 1 − a − b + ab > 1 − (a + b) for a, b ∈ (0, 1). We just multiply the product out and ignore monomials involving two or more 1/2^k factors multiplied together (these terms are positive, so dropping them only lowers the estimate); applying the inequality to all factors except the last, the product is lower-bounded by

    [1 − (1/2^n + 1/2^{n−1} + · · · + 1/4)] · (1/2) ≥ (1/2) · (1/2) = 1/4,

since 1/4 + 1/8 + · · · + 1/2^n < 1/2.

We conclude that we can determine a, with constant probability of error, after repeating the algorithm O(n) times. So the number of queries to f used by Simon's algorithm is O(n). The number of computation steps, though, is at least the number of steps needed to solve a system of linear equations, and the best known upper bound for this is O(n^{2.376}), due to Coppersmith and Winograd.
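The Gaussian-elimination step can be sketched as follows; this is our illustration, not code from the lecture. Vectors are packed into Python integers, rows are kept in reduced echelon form over Z_2, and a is read off as the unique nonzero vector orthogonal to all the y's.

```python
def recover_secret(ys, n):
    """Row-reduce the y's over Z_2; return a nonzero a with y . a = 0 for all y."""
    pivot_rows = {}                          # pivot bit -> row, kept reduced
    for y in ys:
        for p, r in pivot_rows.items():
            if (y >> p) & 1:
                y ^= r                       # clear existing pivot bits from y
        if y:
            p = y.bit_length() - 1           # new pivot: highest remaining bit
            for q in list(pivot_rows):       # back-reduce older rows by new row
                if (pivot_rows[q] >> p) & 1:
                    pivot_rows[q] ^= y
            pivot_rows[p] = y
    free = next(b for b in range(n) if b not in pivot_rows)
    a = 1 << free                            # set the free variable to 1 ...
    for p, r in pivot_rows.items():          # ... and solve for each pivot bit
        if (r >> free) & 1:
            a |= 1 << p
    return a

# With the full set of vectors orthogonal to a secret (rank n - 1),
# the secret is recovered exactly.
n, a = 6, 0b101101
ys = [y for y in range(2**n) if bin(y & a).count("1") % 2 == 0]
assert recover_secret(ys, n) == a
```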

0.3 Classical solution


We are going to prove that any classical probabilistic algorithm needs exponentially many queries to f to solve this problem. Suppose that a is chosen uniformly at random from {0, 1}^n − {0^n}. Now consider a classical probabilistic



algorithm that's already made k queries, to inputs x_1, . . . , x_k. We want to know how much information the algorithm could have obtained about a, given the queried pairs (x_i, f(x_i)).
 
On the one hand, there might be a pair of inputs x_i, x_j (with 1 ≤ i < j ≤ k) such that f(x_i) = f(x_j). In this case, the algorithm already has enough information to determine a: a = x_i ⊕ x_j.

On the other hand, suppose no such pair exists. Then the queried values f(x_i) are distinct, and all the algorithm has learned is that a is none of the (k choose 2) values x_i ⊕ x_j.
The probability that the (k + 1)-th query succeeds in finding a collision is at most

    k / (2^n − 1 − (k choose 2))

because, conditioned on what has been seen so far, a is uniformly distributed over at least 2^n − 1 − (k choose 2) remaining candidate values, and the new query x_{k+1} produces a collision, i.e. f(x_{k+1}) equal to one of the previously observed f(x_i) for i ∈ [1, k], only if a = x_{k+1} ⊕ x_i for one of the k prior inputs x_i.
Taking the sum over all k ∈ {1, . . . , m}, the probability that an algorithm making m queries ever finds a collision is at most

    ∑_{k=1}^{m} k / (2^n − 1 − (k choose 2)) ≤ ∑_{k=1}^{m} k / (2^n − m^2) ≤ m^2 / (2^n − m^2) .

In order for this to be a constant, we must choose m = Ω(2^{n/2}). Hence, any classical probabilistic algorithm has to make exponentially many queries to get a correct answer with probability larger than a constant.
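The birthday-style behavior is easy to see empirically. The sketch below is ours, not from the lecture: it uses f(x) = min(x, x ⊕ a) as a stand-in function satisfying the promise (each pair {x, x ⊕ a} maps to its smaller element) and counts random queries until a collision appears; the average is on the order of 2^{n/2}.

```python
import random

def queries_until_collision(n, a):
    """Query random inputs until two distinct inputs collide under f."""
    f = lambda x: min(x, x ^ a)      # stand-in 2-to-1 function hiding a
    seen, count = {}, 0              # seen: f-value -> an input producing it
    while True:
        x = random.randrange(2**n)
        count += 1
        fx = f(x)
        if fx in seen and seen[fx] != x:
            return count             # collision found: a = x XOR seen[fx]
        seen[fx] = x

n, a = 16, 0b1011001110001011
trials = [queries_until_collision(n, a) for _ in range(50)]
avg = sum(trials) / len(trials)
print(avg)   # typically a few hundred queries, on the order of 2^{n/2} = 256
```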

