Grover's Search Algorithm
Grover's Search Algorithm
i
c
i
[i (1.1)
where c
i
are the probability amplitudes of each basis state [i. These probability amplitudes are complex
numbers with both a magnitude and a phase, the square of which gives us the probability of [ being
in that particular basis state when measured. It is this probability amplitude that allows us to speak of
states in the language of waves, alluding to superposition, interference and so on so forth. This language
will be extremely helpful in understanding Grovers Algorithm.
However, for all its benets over classical computing, quantum computing runs into problems of its
own. The most signicant diculty that confronts quantum computing is the phenomenon of decoher-
ence [1]. As quantum states interact with the environment they decohere into classical states, losing their
quantum-ness their ability to superpose and be in entangled states and thus, losing their quantum
edge. Yet, interaction with the environment is necessary for us to both manipulate the states as well as to
encode and retrieve information from them; all of which are integral parts of our computation processes.
Thus, a quantum information theorist is often in the dicult dilemma of searching for a physical system
that is robust, and yet, one that allows him a measure of control over the qubits. Furthermore, as larger
INTRODUCTION
and larger numbers of qubits interact together to form the framework of our computational system, this
becomes more serious a problem, making scalability of quantum computers a very challenging project.
For this reason, much time and eort have been devoted to studying how dephasing noise aects
quantum systems and quantum computation. Dephasing noise causes o-diagonal terms in our density
matrix to decay to 0, eectively reducing our system to a classical system. This project does a similar
sort of analysis with a signicantly new contribution: the model we introduce to study noise allows us
to study the dynamics of the system as it evolves with time. Thus, we can peek into the system itself
as it evolves with time, rather than treating the noisy time-evolution like a blackbox, the way most
of the currently prevailing studies have been done. We look not merely at the output state after each
interaction is over, but the evolution of the state throughout the interaction.
The model introduced can be applied over a wide range of quantum computation processes. In this
project, we apply this analysis to Grovers Search [2, 3, 4]. Grovers Search is an algorithm that pro-
vides a polynomial speedup to the classical solution of the following problem: Searching for a desired
entry in an unsorted database. Classically, unlike binary searches in a sorted database, searching an
unsorted database can only be executed by examining each item in the database until the desired item
is located. Such a search has a time complexity of O(N), with the worse-case scenario being N accesses
to the database where N = 2
n
is the number of possible states available in an n-bit or n-qubit system.
Grovers Search, on the other hand, requires at most
i=1
1
N
[i
where [i are the encoded qubits. In our study, we shall assume the database to already be in this
distribution, with no defects a perfect database. An iteration of Grovers algorithm involves two main
stages: (a) inverting the phase of the desired basis state, and (b) performing an inversion about the av-
erage operation. The rst stage of the search marks the state being searched for and the second enforces
its probability.
The essence of the algorithm is captured in the interference of probability amplitudes, where the
probability amplitude of the desired state undergoes constructive interference whilst those of the rest
undergo destructive. Thus, stage one of the search is crucial as it inverts the phase of the desired state,
singling it out from the rest of the states. With the phase of the marked basis state inverted, the
constructive and destructive interferences can be realized by doing an inversion about the average, as
can be better visualized in the picture below taken from Ref. [3]. The lines in the picture represent the
probability amplitudes of each basis state.
Computational Models GROVERS IDEAL ALGORITHM
groverA.jpg
(a) First Stage
groverB.jpg
(b) Second Stage
Figure 2.1: Probability Amplitude Distribution of Basis States
As the average (the dotted line in the picture) will be towards the tip of the probability amplitude
of all the states except the marked state, inverting all phases with respect to this average causes
constructive interference for the desired state, and destructive interference for the rest. With enough
iterations, we can easily see how the probability of the desired basis state will far outweigh the other basis
states. The specic number of iterations necessary is worked out by Grover himself [4] and discussed
further by Boyer et al. [5]. For a two-qubit system however, which is the focus of our project, we require
only one iteration, as shall be shown in the upcoming subsections.
2.2 Computational Models
The two computational models through which our search is executed is the familiar circuit model and
the measurement-based one-way-computing model. The circuit model, like most classical models that
we are familiar with, is executed by applying the necessary unitary interaction (gates) on each qubit to
realized the desired eects, such as rotation.
The one-way-computing model [6] however, having no classical analogue or example, is more unique.
This model requires the entire resource for the computation to be a highly entangled cluster state [7]
involving a large number of qubits. Doing projective measurements in dierent basis realizes dierent
6
GROVERS IDEAL ALGORITHM Two-Qubit Search
eects on the encoded qubits. Cluster states are prepared by initializing all qubits into the [+ state,
followed by applying CPhase (controlled-phase) gates onto pairs of neighbouring qubits. A CPhase
operation yields the following: [i [j (1)
ij
[i [j where (i, j 0, 1), that is, it does a phase ip
operation on the second qubit when the rst qubit or control-qubit is in the [1 state. Our study, with
respect to this model, is directed towards the eects of noise on the generation of the entangled resource
rather than the execution of Grovers Search via measurements. Thus for the one-way-computing model,
we shall be looking at the delity of two dierent states: the delity of the generated cluster state under
noise; and the delity of the output of the search after the relevant idealized measurements are executed
on the cluster state generated in noise.
2.3 Two-Qubit Search
2.3.1 Circuit Model
The circuit model works by application of gates. The application of a particular sequence of gates
will achieve a particular eect on the qubits. This subsection shall attempt explain the eect we intend
to achieve (as shall be denoted in the matrix forms seen) as well as the sequence of gates necessary to
attain these eects.
Generic Two-Qubit Search
The rst stage of Grovers Search can be achieved by applying a selective phase inversion matrix,
which for one that inverts the [11 state, has the following form,
I
|11
=
_
_
_
_
_
_
_
_
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 -1
_
_
_
_
_
_
_
_
(2.1)
7
Two-Qubit Search GROVERS IDEAL ALGORITHM
Applying I
|11
to the input state of a two-qubit Grovers Search yields,
v
in
= [++ (2.2)
v
m
= I
|11
v
in
=
_
_
_
_
_
_
_
_
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 -1
_
_
_
_
_
_
_
_
1
2
_
_
_
_
_
_
_
_
1
1
1
1
_
_
_
_
_
_
_
_
=
1
2
_
_
_
_
_
_
_
_
1
1
1
-1
_
_
_
_
_
_
_
_
=
1
2
([00 +[01 +[10 [11) (2.3)
where v
in
is the input state and v
m
is the marked state the state that the system will be in after stage
1 of the search has been completed. To invert any of the other basis states, just switch the sign on the
relevant diagonal element. This matrix can be expanded to higher qubits by just adding 0 entries to the
o-diagonal and 1 entries to the diagonal.
The second stage of the search is then realized through the application of the diusion transform
matrix, D, where the componenets take the form
D
ij
=
2
N
ij
(2.4)
8
GROVERS IDEAL ALGORITHM Two-Qubit Search
For two qubits, it has the following eect,
v
out
= D v
m
=
_
_
_
_
_
_
_
_
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
_
_
_
_
_
_
_
_
_
_
_
_
_
_
_
_
1
2
1
2
1
2
-
1
2
_
_
_
_
_
_
_
_
=
_
_
_
_
_
_
_
_
0
0
0
1
_
_
_
_
_
_
_
_
= [11 (2.5)
where v
out
is the output of the search. One can easily see how the eect would be similar for the other
three marked states. Thus for a two-qubit system, we need only one iteration to obtain the desired state.
Furthermore, we obtain it with 100% probability. Again, this treatment can easily be expanded to deal
with higher qubits by just expanding the matrix D according to Eq. (2.4).
We can achieve the eects of matrices I
|ij
and D through the following equation
I
|ij
= R
12
z
[
2
,
2
]
_
U
SWAP
R
1
z
[]
_
U
SWAP
(2.6)
D = W W I
|00
W W (2.7)
where W is the Walsh Hadamard Transformation acting on one qubit, which shall be further explained in
a moment. Notice that the equtions is read from right to left, where the rightmost operation is the rst
gate. The R gates are rotation gates, rotating, in our case, the electron spin along the direction specied
by the subscript by the angle as specied inside the square brackets and acting on the qubits specied
by the superscript. The
U
SWAP
gate comes from the SWAP operation, which basically swaps the rst
qubit with the second, [01 [10 and vice versa. The
U
SWAP
gate stops this process at midpoint,
causing the resultant state to be a maximally entangled state, some combination of [01 + i [10 with
appropriate phase factors. The gates that we will be using through out this project take the following
9
Two-Qubit Search GROVERS IDEAL ALGORITHM
explicit forms:
R
x
[] =
_
_
_
cos[
2
] i sin[
2
]
i sin[
2
] cos[
2
]
_
_
_
(2.8)
R
z
[] =
_
_
_
cos[
2
] i sin[
2
] 0
0 cos[
2
] i sin[
2
]
_
_
_
(2.9)
_
U
SWAP
=
_
_
_
_
_
_
_
_
1 0 0 0
0
1+i
2
1i
2
0
0
1i
2
1+i
2
0
0 0 0 1
_
_
_
_
_
_
_
_
(2.10)
A = rotation of R
x
and R
z
leaves a bit ip and phase ip eect respectively. Working in the Dirac
notation (bras and kets) and understanding the gate operations in more phenomenological terms such as
these aords us a much deeper insights and appreciation of all that takes place. Thus, we will attempt, as
far as we can, to switch between the purely mathematical matrices and the more elegant Dirac notation,
explaining the phenomenological eects of the system as it evolves through time.
Referring back to Eq. (2.6), one can control the basis state marked by I
|ij
by choosing the appropriate
rotation angle in the last z-rotation gate. The following table summarizes the necessary rotation angle
to mark the appropriate basis states.
[00 [01 [10 [11
R
12
z
[
2
,
2
] R
12
z
[
2
,
2
] R
12
z
[
2
,
2
] R
12
z
[
2
,
2
]
Table 2.1: Gates Marking the Appropriate Basis States
For simplicity, we shall denote the search for the [00 as GS00, [01 as GS01 and so on so forth.
We now return to the Walsh Hadamard Transformation, W. This transformation creates and destroys
superposition. The one qubit W gate can be exactly reproduced with two single qubit rotation, iR
z
[]
10
GROVERS IDEAL ALGORITHM Two-Qubit Search
R
y
[
2
], and has the following matrix form
W =
1
2
_
_
_
1 1
1 -1
_
_
_
From the explicit expression of W above, we can see that it is its own inverse. This explains how W can
both create and destroy superposition
[0
1
2
([0 +[1) (2.11)
[1
1
2
([0 [1) (2.12)
1
2
([0 +[1) [0 (2.13)
1
2
([0 [1) [1 (2.14)
Thus, substituting the relevant gates in for I
|00
and W, we get,
D = R
12
z
[, ] R
12
y
[
2
,
2
] R
12
z
[
2
,
2
]
_
U
SWAP
R
1
z
[]
_
U
SWAP
R
12
z
[, ] R
12
y
[
2
,
2
] (2.15)
Most Ecient Search
As we are studying the eects of noise, the sensible thing to do before we begin our analysis is to
ensure that our circuit has achieved maximum eciency. Thus, in this subsection, we check if there
are redundant gates, asking if we could have achieved a similar eect (where the only system under
consideration is a two-qubit system) with a smaller number of gates. This is extremely important as it
is unwise to study the eects of noise on a search that does not accurately reect the least possible noise
one could have. Unfortunately, this inevitably limits the generality of our most ecient search to only
a search that works on two qubits.
This portion was dealt with through a lot of trial and error, as well as educated guessing and some
background researching. Bodoky and Blaauboers paper [8] was of much assistance in this portion of the
11
Two-Qubit Search GROVERS IDEAL ALGORITHM
project. In this most ecient search, we only have two
U
SWAP
gates, one to entangle the two qubits,
and the other to disentangle them. We denote the most ecient selective phase inversion and diusion
transform matrix by J
|ij
and T respectively.
J
|ij
= R
12
z
[
2
,
2
]
_
U
SWAP
R
1
z
[] (2.16)
T
a
= R
1
x
[]
_
U
SWAP
R
12
x
[
2
,
2
] (2.17)
T
b
=
_
U
SWAP
R
12
x
[
2
,
2
] (2.18)
where T
a
inverts about the average for GS00 and GS11, and T
b
, for GS01 and GS10. Although we
have sacriced the generality of the search for its eciency the dierent searches now require dierent
application of gates this sacrice seems reasonable for the following reason: In most cases, one is well
aware of the entry that is being searched for. And since we know which state is being searched for, we can
apply the appropriate algorithm accordingly. Do note that although J
|ij
and T achieve, in a two-qubit
database, the exact same eect as I
|ij
and D, they do not take the exact same explicit mathematical
form. The matrix components of J and T are dierent from those of I and D.
As we trudged through this project, we found the explicit form of each input state helpful. Thus the
table below is produced for future reference.
12
GROVERS IDEAL ALGORITHM Two-Qubit Search
Gate GS00 GS01 GS10 GS11
R
1
z
[] [++
U
SWAP
[+
R
12
z
[
2
,
2
]
1+i
2
([+ +i [+)
1i
2
_
[ y y
1+i
2
_
[ y y
1+i
2
_
[ y y
1+i
2
_
[ y y
R
12
x
[
2
,
2
]
i [ y y
_
+i [ y y
_
i [ y y
_
+i [ y y
_
U
SWAP
1+i
2
([01 +i [10)
1+i
2
([00 +i [11)
1+i
2
([00 i [11)
1i
2
([01 i [10)
R
2
x
[] [01 - - [10
OUTPUT [00 [01 [10 [11
Table 2.2: Explicit Form of States Involved in Grovers Search
Note that the states listed are the input state of each gate, and recall that GS01 and GS10 require
one less gate operation compared to GS00 and GS11. As we shall be referring to the specic states and
gates of the search in later sections, we impose the following nomenclature. Each input state and each
gate shall be indexed with two numbers. The rst number establishes the stage of Grovers Search being
executed, whilst the second number details the n
th
gate operation of that stage. Thus G
11
refers to the
rst gate, R
1
z
[] and v
11
refers to state as it evolves through R
1
z
[]. Any properties describing specic
states can be indexed in the same way, as shall be seen later. The gates to which each index refers to is
tabulated below.
G
11
G
12
G
13
G
21
G
22
G
23
R
1
z
[]
U
SWAP
R
12
z
[
2
,
2
] R
12
x
[
2
,
2
]
U
SWAP
R
1
x
[]
2.3.2 One-Way-Computing Model
Two-Qubit Grovers Search
The cluster state needed for Grover Search has the form as pictured in the image below [9]. The line
connecting the qubits denotes entanglement.
13
Two-Qubit Search GROVERS IDEAL ALGORITHM
boxclusterstate.jpg
Figure 2.2: Cluster State
Computation on cluster states proceed via measurements on particular qubits in the appropriate
bases. When we speak of measurement in a particular basis, [+ , [ for example, the 0 outcome
of that measurement refers to a measurement outcome of [+, and the 1 to a measurement outcome
of [. If the basis of the measurement was [ , [+ instead, the 0 outcome will refer to [ and
1 to [+. For simplicity, a basis measurement shall be referred to as B
j
[] [+
j
, [
j
, where
[
j
=
1
2
_
[0
j
+e
i
[1
j
_
and the subscript j refers to the qubit under consideration. After each
measurement is done, we trace those qubits out before proceeding to the next measurement for the
cluster state reduces in size as one proceeds with computation.
In cluster state computations, only resulting clusters where previous measurement outcomes were 0
will be allowed to proceed to the next measurement or the next step of computation. A feedforward will
have to be applied to cluster states resulting from measurements where the outcome was 1 before they
can be allowed to proceed with computation. Feedforward is basically a process in which we reinterpret
the resulting cluster by either changing the basis in which future measurements are to be done or by
performing rotation operations on certain qubits. For Grovers Search, the feedforward required is s2
s4, s3 s1, s5s2, s6s3, meaning that if the measurement on qubit 4 (or 1) returns us 1 rather than
0, we should measure qubit 2 (or 3) in a dierent basis [9]. If however, the measurement on qubit 2 (or
3) returns us an outcome of 1, we would need to apply a bit ip operation on qubit 5 (or 6) to correctly
interpret the resulting state.
14
GROVERS IDEAL ALGORITHM Two-Qubit Search
For our particular conguration of qubits in Fig. 2.2, measuring qubits 1 and 4 in the B
1
[] and B
4
[]
basis respectively, followed by qubits 2 and 3 in the B
2
[] and B
3
[] basis, will eect of the following
gate operation on the two encoded qubits:
Gates.jpg
Figure 2.3: Gate Operations
where the two with a line connecting them denotes that the two encoded qubits have undergone
a CPhase operation. The R and W operations represent the rotation gate and the Walsh-Hadamard
transform respectively. It would be helpful to realize that the gate operations in the gure above are
read from left to right, with the CPhase gate on the left being the rst operation on the two encoded
qubits, unlike how gate operations are often read o from an equation.
The rst three gate operations (CPhase, R
12
z
[, ] and WW) on the two encoded qubits completes
the rst stage of Grovers Search. As the measurements on the rst and fourth qubits are responsible
for these three gates, these two measuremetns mark the desired state in the search. The next two
15
Two-Qubit Search GROVERS IDEAL ALGORITHM
measurements complete the second stage, leaving the output on qubits 5 and 6. The table below
summarizes the basis in which measurements must be done to obtain the desired results.
State desired Basis measurement
[00 B
14
[, ] B
23
[, ]
[01 B
14
[, 0] B
23
[0, 0]
[10 B
14
[0, ] B
23
[0, 0]
[11 B
14
[0, 0] B
23
[, ]
Table 2.3: Basis in Which Measurements Should Be Made
where B[] = [ , [+ and B[0] = [+ , [.
This amounts to the following operations for
1. GS00
v
out
= W W R
12
z
[, ] CPhase W W R
12
z
[, ] CPhase [++
= [00 (2.19)
2. GS01
v
out
= W W R
12
z
[0, 0] CPhase W W R
12
z
[, 0] CPhase [++
= [01 (2.20)
3. GS10
v
out
= W W R
12
z
[0, 0] CPhase W W R
12
z
[0, ] CPhase [++
= [10 (2.21)
4. GS11
v
out
= W W R
12
z
[, ] CPhase W W R
12
z
[0, 0] CPhase [++
= [11 (2.22)
16
GROVERS IDEAL ALGORITHM Two-Qubit Search
Recalling the role of CPhase, R
z
[] and W and their eects on a particular qubit, one can easily see how
these gates return us the desired state.
Running cluster state computation on Mathematica is slightly more challenging than running the
usual circuit model. We ran it on Mathematica by applying the appropriate two-qubit projector (applying
the [++++[ operator, for instance) before using a program (supplied by Wolfram website) to trace out
the relevant qubits. This is done twice, to simulate the measurements done on the rst 4 qubits. Then
we summed up over all the possible cases, applying the proper feedforward when necessary. That sum
yields the probabilities of the states that qubits 5 and 6 would be in the output of Grovers Search.
The program for performing the search via one-way-computing is attached in the appendix.
Obtaining Cluster State
To obtain our cluster state, we apply the CPhase gate to the appropriate pair of qubits.
boxclusterstate2.jpg
Figure 2.4: Cluster State
The letters in the picture above show the order in which the entanglements were generated, where
a denotes that it was the rst entanglement generated, b second and so on so forth. We did it in this
specic order because entangling two previously unentangled qubits require one less gate compared to
entangling two qubits which are already entangled to a third and fourth party. An entanglement between
the n
th
and (n+1)
th
qubits which are both disentangled can be achieved by applying the following gates:-
v
ent
= R
n,n+1
z
[
2
,
2
]
_
U
SWAP
n,n+1
R
n
z
[] v
in
(2.23)
17
Two-Qubit Search GROVERS IDEAL ALGORITHM
whilst to generate an entanglement between the n
th
and (n+1)
th
qubit where either one or both of them
are already entangled to another qubit can be achieved by:-
v
ent
= R
n,n+1
z
[
2
,
2
]
_
U
SWAP
n,n+1
R
n
z
[]
_
U
SWAP
n,n+1
v
in
(2.24)
where the superscript of
U
SWAP
denotes which qubit the gate is acting on. Note that these equations
are almost entirely the same as the equation of I
|11
because I
|11
is indeed the CPhase operation. A
CPhase operation is, after all, a selective phase inversion. Thus, referring back to Eq. (2.22), we notice
that the second and third gates after the rst CPhase operation (R
12
z
[0, 0] and W W) leave no net
eect on the state, for the CPhase operation is sucient to produce the marked state.
Beginning with a disentangled six-qubit state,
v
in
= [+ + + + ++ (2.25)
we apply the relevant gate operations to end up with,
v
out
=
1
2
2
_
_
_
_
[+0 + 0 + 0 +[+0 0 + 1 +[+1 + 1 0 +[+1 1 1
+[0 + 1 + 1 +[0 1 + 0 +[1 + 0 1 +[1 0 0
_
_
_
_
Recall that the main aim in this part of the project was to determine how noise would aect the
generation of this state and to determine what happens when this noisy state undergoes the relevant
idealized measurements necessary for Grovers Search. We thus would have been able to determine how
the delity of the entangled resource diers from the delity of the nal output of Grovers Search which
utilizes this state as its entangled resource.
18
GROVERS IDEAL ALGORITHM Two-Qubit Search
g
19
Chapter 3
Grovers Algorithm In Noise
3.1 Lindblad Equation
The noise models used in this project to characterize environmental eects are phenomenological
models, i.e. they characterize noise by the possible eects the environment could have had on the sys-
tem. We consider models where the only eect of the environment is a dephasing eect that is, the
only eect the environment leaves on our quantum system is the eect of decohering it into a classical
system. However, environmental eects could also be dissipative, where the environmental eects cause
energy to be lost from the system.
The phenomenological eects of the environment that shall be under consideration in this project
are bit ip, bit-phase ip and phase ip eects. Thus, we shall study how the system evolves under noise
that causes the following eects:-
bit ip bit-phase ip phase ip
Operator
x
=
_
0 1
1 0
_
y
=
_
0 i
i 0
_
z
=
_
1 0
0 1
_
[0 [1 [0 [1 [0 [0
Eects
[1 [0 [1 [0 [1 [1
Table 3.1: Summary of Phenomenological Noise Models Studied
where
x
,
y
and
z
are the usual Pauli matrices, which shall also be denoted as
1
,
2
and
3
for ease
of computation.
Other noise models that could be of interest are amplitude damping and depolarizing models, the
former being a dissipative noise on top of being dephasing. This whole study can be easily repeated for
GROVERS ALGORITHM IN NOISE Lindblad Equation
these noise models with appropriate but simple tweaks made to the current Mathematica les.
To determine the evolution of a particular state in a noisy environment, we solved the Lindblad
form of the Markovian master equation. A Markovian master equation posits a Markovian environment,
where it assumes that the environment has no memory of the past, or that self-correlations within the
environment decays rapidly as compared to the time taken for the system to vary noticeably [1, 10]. The
Lindblad equation takes the following form:
d
dt
=
i
[1, ]
a
2
[L, [L, ]] (3.1)
where L are Lindblad operators; a, the decay rate which captures the strength of the coupling between
the system and the environment; and 1 is the Hamiltonian of the interaction under consideration. The
tilde on species that this is a reduced density matrix.
At this point, we are required to specify a physical system upon which these algorithms shall take
place. Having already determined the unitary evolution necessary to both execute Grovers Search and
generate the cluster state, we need the specic Hamiltonian operators that will generate these unitary
evolutions. Dierent physical systems utilize dierent interactions to achieve the same unitary opera-
tions. In our project, we deal specically with the quantum dot system, which exploits electron spin as
their fundamental quantum unit [11].
From the discussions in section 2, the only gate operations necessary in this project are one-qubit
local rotation gates and two-qubit
U
SWAP
gates. In fact, generally, these two gates are sucient to
form a universal set [12, 13] of transformations required to perform any necessary computation. In
quantum dot systems, rotation gates rely on an external magnetic eld to produce its eect whilst the
U
SWAP
gate relies on the Heisenberg exchange interaction.
1
R
i
=
1
2
i
(3.2)
1
U
SWAP
n,n+1
=
3
i=1
1
4
2
J
n
i
n+1
i
(3.3)
21
Noise Model GROVERS ALGORITHM IN NOISE
Here and J capture the strength of interaction, and thus, the speed of the operation. To obtain the
unitary evolution which are explicitly dependent on these parameters, one can do a spectral decomposi-
tion of the relevant 1 in terms of its normalized eigenvectors and eigenvalues before exponentiating it.
The generic equation relating a unitary operator to a Hamiltonian is,
U(t) = e
Ht
(3.4)
whilst the unitary operator for a rotation and
U
SWAP
gate (in a solid state quantum dot system) takes
the form of the equations below
U
R
i
(t) = e
i
2
t
i
= e
i
2
i
(3.5)
U
U
SWAP
(t) = e
i
4
Jt
3
i=1
n
i
n+1
i
= e
i
4
3
i=1
n
i
n+1
i
(3.6)
where = t and = Jt. shall determine the extent or angle of rotation, whilst determines the
extent to which the SWAP operation is executed. A of returns us the complete SWAP operation
whilst a of
2
returns us the
U
SWAP
operation we desire. Thus, throughout this project, shall be
set to
2
.
3.2 Noise Model
Our noise model diers from any as far as we know models presented before in this respect: the
input density matrix into the Lindblad equation is not the original input density matrix, v
in
= [++ for
instance, or any of the other input states listed in Table 2.2. Instead, it is the basis states that we solve
for in the Lindblad equation [00[, [01[, [10[ and [11[ for a one-qubit system. Thus, to obtain how
v
in
evolves under a Hamiltonian 1
R
z
in the presence of a bit-ip noise L =
x
, we solve not one Lindblad
equation with = [++++[, but 16 Lindblad equations where the 16 are the 16 basis states. Whilst
there are now more equations to solve, the advantages of this formulation is two-fold:-
22
GROVERS ALGORITHM IN NOISE Noise Model
1. The input density matrices of the partial dierential equations are simple. In a noisy search,
the output of the rst noisy gate in itself is a relatively complicated density matrix. We would
then have to input this complicated density matrix into the Lindblad equation again, doing this
repeatedly with the input state becoming more and more complicated until the search is nished
or the cluster state is achieved. This formulation however, solves the partial dierential equations
for very simple basis states.
2. Determining how the basis states of a two-qubit system evolves with a particular noise under
a particular Hamiltonian eectively solves for all such evolutions, regardless of the form of the
input states. This is true as the basis states span a complete basis. Any two-qubit state can be
represented in terms of the 16 basis states. Thus, we have potentially solved the z-rotation under
bit-ip noise for virtually any possible superposition of two-qubit states possible by just solving
the 16 relatively simple partial dierential equations stated above.
From the Lindblad equation, we then have the following map of a particular basis state for one and
two qubits:
[ij[
ij
(3.7)
[ijkl[ = [ik[ [jl[
ijkl
(3.8)
where [ij[ and [ijkl[ are the original basis states and is the reduced density matrix that the original
basis state is mapped to. It is now a reduced density matrix due to environmental interaction for we
have no access to the environment. Take note of the subscripts of and how they relate to the original
basis state, for this could be a source of confusion. In the following sections, shall refer to the solutions
of the Lindblad equations.
Upon solving the Lindblad equation for all the basis states and obtaining the relevant , we would
need a way to compile these solutions, allowing us to simulate the eect of a noisy gate operation acting
on any input state. This is done in accordance to the following equations for the two-qubit (circuit
23
Solutions to the Lindblad Equation GROVERS ALGORITHM IN NOISE
model) and six-qubit (one-way computing model) gates respectively,
out
=
1
i,j,k,l=0
Tr
_
in
[klij[
_
ijkl
(3.9)
out
=
1
i,j,k,l,m,n,p,q,r,s,u,v=0
Tr
_
in
[pqrsuvijklmn[
_
ijklmnpqrsuv
(3.10)
Basically, we multiply the coecient of a particular basis [ijkl[ of
in
(as obtained from the trace opera-
tion) with the corresponding noisy basis evolution
ijkl
. By summing over all the possible basis, we obtain
the eect of a particular 1 under the inuence of a particular noise model of a particular input state,
in
.
With this simplication, we are able to obtain the explicit expression of how each density matrix
involved in the search evolves with time. Thus, we can trace the evolution of the density matrix through
time, studying its dynamics.
3.3 Solutions to the Lindblad Equation
In this section, we shall list the solutions of the Lindblad equation for the dierent noise models.
We ask that the readers keep in mind that the Lindblad equation returns us a map of how a particular
state evolves, as can be seen in Eq. (3.8). In our list, we use equality signs because we are referring
only to the right side of the map. The notations used for the right side is sucient to identify the
original state on the left-hand side of the map, and thus, we leave that out of our equations. For the
one-qubit maps, each reduced density matrix is written in the complete basis spanned by I,
x
,
y
and
z
. For the two-qubit solutions, each reduced density matrix is written in terms of its matrix elements, S
ij
As superoperators preserve hermiticity, an o-diagonal matrix element will obey the following re-
lation: S
ij
= S
ji
. Recalling that we are dealing with time-evolution of basis states, which essentially
are components of a matrix, each basis evolution obeys that relation that is it obeys
ij
=
ji
and
ijkl
=
klij
. Thus, for the one-qubit evolutions, we list down 3 rather than 4 solutions and for two-qubit
evolutions, we list down 10 rather than 16 solutions. For all the evolutions under the inuence of 1
R
,
24
GROVERS ALGORITHM IN NOISE Solutions to the Lindblad Equation
2
a
2
, whilst for those under the inuence of 1
U
SWAP
,
J
2
4a
2
.
3.3.1 Bit Flip Noise (
x
)
Under 1
R
x
inuence
00
=
1
2
_
0
e
2at
(sin[t]
2
cos[t]
3
)
_
(3.11)
01
=
1
2
_
1
+ie
2at
(cos[t]
2
+ sin[t]
3
)
_
(3.12)
11
=
1
2
_
0
+e
2at
(sin[t]
2
cos[t]
3
)
_
(3.13)
Under 1
R
z
inuence
00
=
1
2
_
0
+e
2at
3
_
(3.14)
01
=
e
at
2
_
cos[t](
1
+i
2
) +
sin[t]
_
(a i)
1
(a +i)i
2
__
(3.15)
11
=
1
2
_
0
e
2at
3
_
(3.16)
Under 1
U
SWAP
inuence, for the expressions with or signs, the top signs apply for the S
ij
as
listed before the semicolon. For the S
ij
after the semicolon, the bottom sign applies.
0000
=
_
_
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
11
S
ij
=
1 e
4at
4
if S
ij
= S
22
, S
33
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
44
S
ij
= 0 otherwise
(3.17)
1111
=
_
_
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
11
S
ij
=
1 e
4at
4
if S
ij
= S
22
, S
33
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
44
S
ij
= 0 otherwise
(3.18)
25
Solutions to the Lindblad Equation GROVERS ALGORITHM IN NOISE
0101
=
_
_
S
ij
=
1 e
4at
4
if S
ij
= S
11
, S
44
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.19)
1010
=
_
_
S
ij
=
1 e
4at
4
if S
ij
= S
11
, S
44
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.20)
0001
=
_
_
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t]
iJ
sin[t])
_
if S
ij
= S
12
, S
13
S
ij
=
1
8
_
1 e
4at
4ae
2at
sin[t]
_
if S
ij
= S
21
, S
34
; S
24
, S
31
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t] +
iJ
sin[t])
_
if S
ij
= S
42
, S
43
S
ij
= 0 otherwise
(3.21)
0010
=
_
_
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t]
iJ
sin[t])
_
if S
ij
= S
12
; S
13
S
ij
=
1
8
_
1 e
4at
4ae
2at
sin[t]
_
if S
ij
= S
21
, S
34
; S
24
, S
31
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t] +
iJ
sin[t])
_
if S
ij
= S
42
; S
43
S
ij
= 0 otherwise
(3.22)
0011
=
_
_
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
14
S
ij
=
1 e
4at
4
if S
ij
= S
23
, S
32
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
41
S
ij
= 0 otherwise
(3.23)
26
GROVERS ALGORITHM IN NOISE Solutions to the Lindblad Equation
0110
=
_
_
S
ij
=
1 e
4at
4
if S
ij
= S
14
, S
41
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.24)
0111
=
_
_
S
ij
=
1
8
_
1 e
4at
4ae
2at
sin[t]
_
if S
ij
= S
12
, S
43
; S
13
, S
42
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t]
iJ
sin[t])
_
if S
ij
= S
21
; S
31
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t] +
iJ
sin[t])
_
if S
ij
= S
24
; S
34
S
ij
= 0 otherwise
(3.25)
1011
=
_
_
S
ij
=
1
8
_
1 e
4at
4ae
2at
sin[t]
_
if S
ij
= S
12
, S
43
; S
13
, S
42
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t]
iJ
sin[t])
_
if S
ij
= S
21
; S
31
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t] +
iJ
sin[t])
_
if S
ij
= S
24
; S
34
S
ij
= 0 otherwise
(3.26)
3.3.2 Bit-Phase Flip Noise (
y
)
Under 1
R
x
inuence
00
=
1
2
_
0
+
e
at
_
cos[t]
3
sin[t](
2
+a
3
)
__
(3.27)
01
=
1
2
_
e
2at
1
+
ie
at
_
cos[t]
2
+ sin[t](a
2
+
3
)
__
(3.28)
11
=
1
2
_
e
at
_
cos[t]
3
sin[t](
2
+a
3
)
__
(3.29)
Under 1
R
z
inuence
00
=
1
2
_
0
+e
2at
3
_
(3.30)
01
=
e
at
2
_
cos[t](
1
+i
2
)
sin[t]
_
(a +i)
1
+ (a +i)i
2
__
(3.31)
11
=
1
2
_
0
e
2at
3
_
(3.32)
27
Solutions to the Lindblad Equation GROVERS ALGORITHM IN NOISE
Under 1
U
SWAP
inuence
0000
=
_
_
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
11
S
ij
=
1 e
4at
4
if S
ij
= S
22
, S
33
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
44
S
ij
= 0 otherwise
(3.33)
1111
=
_
_
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
11
S
ij
=
1 e
4at
4
if S
ij
= S
22
, S
33
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
44
S
ij
= 0 otherwise
(3.34)
0101
=
_
_
S
ij
=
1 e
4at
4
if S
ij
= S
11
, S
44
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.35)
1010
=
_
_
S
ij
=
1 e
4at
4
if S
ij
= S
11
; S
44
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.36)
0001
=
_
_
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t]
iJ
sin[t])
_
if S
ij
= S
12
; S
13
S
ij
=
1
8
_
(1 e
4at
)
4ae
2at
sin[t]
_
if S
ij
= S
21
; S
24
S
ij
=
1
8
_
(1 e
4at
) +
4ae
2at
sin[t]
_
if S
ij
= S
31
; S
34
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t] +
iJ
sin[t])
_
if S
ij
= S
42
, S
43
S
ij
= 0 otherwise
(3.37)
28
GROVERS ALGORITHM IN NOISE Solutions to the Lindblad Equation
0010
=
_
_
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t]
iJ
sin[t])
_
if S
ij
= S
12
; S
13
S
ij
=
1
8
_
(1 e
4at
) +
4ae
2at
sin[t]
_
if S
ij
= S
21
; S
24
S
ij
=
1
8
_
1 e
4at
4ae
2at
sin[t]
_
if S
ij
= S
31
; S
34
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t] +
iJ
sin[t])
_
if S
ij
= S
42
; S
43
S
ij
= 0 otherwise
(3.38)
0011
=
_
_
S
ij
= e
2at
cosh
2
[at] if S
ij
= S
14
S
ij
=
e
4at
1
4
if S
ij
= S
23
, S
32
S
ij
= e
2at
sinh
2
[at] if S
ij
= S
41
S
ij
= 0 otherwise
(3.39)
0110
=
_
_
S
ij
=
e
4at
1
4
if S
ij
= S
14
, S
41
S
ij
=
i sin[Jt]
2e
2at
if S
ij
= S
22
; S
33
S
ij
=
1 +e
4at
4
cos[Jt]
2e
2at
if S
ij
= S
23
; S
32
S
ij
= 0 otherwise
(3.40)
0111
=
_
_
S
ij
=
1
8
_
(1 e
4at
)
4ae
2at
sin[t]
_
if S
ij
= S
12
; S
42
S
ij
=
1
8
_
(1 e
4at
) +
4ae
2at
sin[t]
_
if S
ij
= S
13
; S
43
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t]
iJ
sin[t])
_
if S
ij
= S
21
; S
31
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t] +
iJ
sin[t])
_
if S
ij
= S
24
; S
34
S
ij
= 0 otherwise
(3.41)
29
Solutions to the Lindblad Equation GROVERS ALGORITHM IN NOISE
1011
=
_
_
S
ij
=
1
8
_
(1 e
4at
) +
4ae
2at
sin[t]
_
if S
ij
= S
12
; S
42
S
ij
=
1
8
_
(1 e
4at
)
4ae
2at
sin[t]
_
if S
ij
= S
13
; S
43
S
ij
=
e
2at
4
_
2 sinh
2
[at] (e
iJt
cos[t]
iJ
sin[t])
_
if S
ij
= S
21
; S
31
S
ij
=
e
2at
4
_
2 cosh
2
[at] (e
iJt
+ cos[t] +
iJ
sin[t])
_
if S
ij
= S
24
; S
34
S
ij
= 0 otherwise
(3.42)
3.3.3 Phase Flip Noise (
z
)
Under 1
R
x
inuence
00
=
1
2
_
e
at
sin[t]
2
+e
at
_
cos[t] +
a
sin[t]
_
3
_
(3.43)
01
=
1
2
_
e
2at
1
+ie
at
_
cos[t]
a
sin[t]
_
2
+ie
at
sin[t]
3
_
(3.44)
11
=
1
2
_
0
+
e
at
sin[t]
2
e
at
_
cos[t] +
a
sin[t]
_
3
_
(3.45)
Under 1
R
z
inuence
00
=
1
2
_
0
+
3
_
(3.46)
01
=
1
2
e
2atit
_
1
+i
2
_
(3.47)
11
=
1
2
_
3
_
(3.48)
Under 1
U
SWAP
inuence
0000
= [0000[ (3.49)
1111
= [1111[ (3.50)
0101
=
_
_
S
ij
=
1
2
+
cos[t] + 2a sin[t]
2e
2at
if S
ij
= S
22
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
23
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
32
S
ij
=
1
2
cos[t] + 2a sin[t]
2e
2at
if S
ij
= S
33
S
ij
= 0 otherwise
(3.51)
30
GROVERS ALGORITHM IN NOISE Solutions to the Lindblad Equation
1010
=
_
_
S
ij
=
1
2
cos[t] + 2a sin[t]
2e
2at
if S
ij
= S
22
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
23
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
32
S
ij
=
1
2
+
cos[t] + 2a sin[t]
2e
2at
if S
ij
= S
33
S
ij
= 0 otherwise
(3.52)
0001
=
_
_
S
ij
=
1 +e
iJt
2e
2at
if S
ij
= S
12
S
ij
=
1 e
iJt
2e
2at
if S
ij
= S
13
S
ij
= 0 otherwise
(3.53)
0010
=
_
_
S
ij
=
1 e
iJt
2e
2at
if S
ij
= S
12
S
ij
=
1 +e
iJt
2e
2at
if S
ij
= S
13
S
ij
= 0 otherwise
(3.54)
0011
=
_
_
S
ij
= e
4at
if S
ij
= S
14
S
ij
= 0 otherwise
(3.55)
0110
=
_
_
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
22
S
ij
=
1
2e
4at
+
cos[t] 2a sin[t]
2e
2at
if S
ij
= S
23
S
ij
=
1
2e
4at
cos[t] 2a sin[t]
2e
2at
if S
ij
= S
32
S
ij
=
iJ sin[t]
2e
2at
if S
ij
= S
33
S
ij
= 0 otherwise
(3.56)
0111
=
_
_
S
ij
=
1 +e
iJt
2e
2at
if S
ij
= S
24
S
ij
=
1 e
iJt
2e
2at
if S
ij
= S
34
S
ij
= 0 otherwise
(3.57)
31
Extending To A Larger Number of Qubits GROVERS ALGORITHM IN NOISE
1011
=
_
_
S
ij
=
1 e
iJt
2e
2at
if S
ij
= S
24
S
ij
=
1 +e
iJt
2e
2at
if S
ij
= S
34
S
ij
= 0 otherwise
(3.58)
3.4 Extending To A Larger Number of Qubits
As the circuit model requires two-qubit rotation gates, and the cluster state model requires gates
that act on six qubits, we will need to extend the formulation such that it would be able to act on a
larger number of qubits. Extending the one-qubit noisy basis evolution under the inuence of 1
R
into
two-qubit basis evolutions is simple. Each basis evolution shall take the following form for,
1. Gates that rotate only the rst qubit
ijkl
=
ik
[jl[ (3.59)
2. Gates that rotate only the second qubit
ijkl
= [ik[
jl
(3.60)
3. Gates that rotate both qubits
ijkl
=
ik
jl
(3.61)
Extending this to make a six-qubit rotation gate utilizes a similar concept. Noisy basis evolution under
1
U
SWAP
inuence can be extended to six-qubit basis evolutions as such:-
1. For a
_
U
SWAP
12
eect
ijklmnpqrsuv
=
ijpq
[klmnrsuv[ (3.62)
2. For a
_
U
SWAP
23
eect
ijklmnpqrsuv
= [ip[
jkqr
[lmnsuv[ (3.63)
32
GROVERS ALGORITHM IN NOISE Extending To A Larger Number of Qubits
3. For a
_
U
SWAP
34
eect
ijklmnpqrsuv
= [ijpq[
klrs
[mnuv[ (3.64)
4. For a
_
U
SWAP
41
eect, swap qubits 2 and 4, apply
_
U
SWAP
12
, and swap it back. This is a purely
mathematical constraint, necessary only when doing mathematical computation. Physically and
experimentally, there ought to be no complications when it comes to applying the
U
SWAP
gate to
qubits 1 and 4, for they are or can indeed be made to be, side by side in order for the appropriate
1 to be applied to both. Mathematically though, we cannot incorporate this as simplistically,
because the generation of the noisy evolutions require a two-qubit 1 that acts on n and (n + 1),
rather than n and (n +m) as can be seen from Eq. (3.3).
5. For a
_
U
SWAP
25
eect, swap qubits 3 and 5 apply
_
U
SWAP
23
, before swapping it back.
6. For a
_
U
SWAP
36
eect, swap qubits 4 and 6 apply
_
U
SWAP
34
, before swapping it back.
Each SWAP operation can be achieved by
SWAP
14
=
1
i,j,k,l,m,n,p,q,r,s,u,v=0
[ljkimn ijklmn[ (3.65)
SWAP
25
=
1
i,j,k,l,m,n,p,q,r,s,u,v=0
[imkljn ijklmn[ (3.66)
SWAP
36
=
1
i,j,k,l,m,n,p,q,r,s,u,v=0
[ijnlmk ijklmn[ (3.67)
33
Chapter 4
Analysis
4.1 Introduction
In our analysis, we shall focus on two particular quantities: delity and negativity. Fidelity is a
means of determining how close the output state mirrors the desired state. As the trace of a number is
the number itself, by the linearity of trace, we can reexpress delity as
P = [
out
[
= Tr[[
out
[]
= Tr[
out
[[] (4.1)
where we have used P to denote delity.
out
is the outcome of our computation while [ is ideal state
that the search ought to return. Please note that unless stated, delity and P shall always refer
to the delity of the output of the search, v
out
.
Negativity on the other hand, provides us with a means of studying entanglement for two-qubits
by distilling out entanglement. Using only local operations and classical communication (LOCC), we
transform our N copies of an arbitrarily entangled state to some number of purely entangled Bell state.
To obtain negativity we rst obtain the partial transpose of our density matrix, which is also a criterion
called partial positive tranpose, PPT to check for separability of density matrices [14, 15]. We obtain
the partial transpose by rst noting that a density matrix living in the Hilbert space H
a
H
b
can be
expressed as
=
ijkl
p
ijkl
[ijkl[
=
ijkl
p
ijkl
[ik[ [jl[ (4.2)
ANALYSIS Introduction
where p
ijkl
are the matrix elements of the [ijkl[ basis. A partial transpose then consists of us transposing
only a part of the state, doing an identity map on system A and a transpose map on system B, for instance.
T
B
=
ijkl
p
ijkl
[ik[ [lj[ (4.3)
where
T
B
is the partial transpose of system B. If the system is separable, such a partial transpose ought
to still return us a density matrix, and hence, positive eigenvalues. That is the basis of the PPT test, and
hence, a system whose PPT has negative eigenvalues is guaranteed to be entangled. Upon obtaining the
partial transpose, negativity consists in taking the absolute value of the sum of its negative eigenvalues,
A =
i
_
Re[e
i
]
Re[e
i
]
_
(4.4)
When a system is maximally entangled, A = 1.
As our nal aim is to provide practical advice on ways to improve Grovers Search, we would rstly
need a list of the parameters that are within our control. The parameters within our control are:.
1. The ratio of the speed of the gates to the decay rate, a : J and a : .
2. The fraction or amount of gate completed: for the rotation gates and for the
U
SWAP
gates
(refer Eq. (3.5) and Eq. (3.6)). Recall that controls the angle by which a state is rotated by,
whilst controls the extent to which the SWAP operation is applied onto the state.
3. The computational basis in which the search is carried out. The default basis this study is con-
ducted in is the [0 , [1 basis eigenbasis of the Pauli z matrix.
As a is the decay rate, the higher the a the larger the eect of the environment on the system. Recall
that = t and = Jt, and thus can be rearranged to t = / and t = /J. From Eq. (3.13) through
to Eq. (3.58), we notice that the noisy basis maps depend on the the parameter at. Substituting t, and
keeping and constant, we nd that they depend on the ratio of a : and a : J such that the smaller
this ratio, the better the delity. The lower the decay rate, the quicker the gate, the better the results
of the search.
35
Introduction ANALYSIS
In the rst parameter, the total and by which the unitary operation takes the state through
is kept at a constant. The second parameter then relaxes this constraint. Given the speed of a gate
(determined by setting the rst parameter), we can vary the time parameter to obtain the and such
that the search yields maximum delity possible. The and that yields maximum delity shall be
labelled
max
and
max
, whilst the full swing ones shall be
full
and
full
. In our discussions relating to
this issue, we shall speak in terms of the fraction of
max
/
full
and
max
/
full
. Theoretically, it is possible
for this ratio to exceed one. This may seem counter-intuitive for it is natural to assume the smaller the
angle of rotation, or the lesser the system is exposed to the SWAP operation, the less severe the eect
of the environment. And indeed this is most often than not, true. However, it is theoretically possible,
for the system becomes complicated when noise is present and the simple intuitive relationships may no
longer necessarily hold.
In this project, we will be studying and comparing the eects of noise for three dierent noise models.
In reality, we have no control over the environment, and hence the type of noise the environment imposes
on our system. However, what is really relevant to the performance of our search is the eect of the
environment relative to the basis chosen for computation. And this is within our control. Starting o
the search by encoding information in the [+ , [ or the [ y , [ y basis changes the eect that
the environment leaves on our search. This is better visualized through an example: assume that the
environment is characterized by a bit-ip noise. Since this project tells us how all three models of noise
aect a search whose computational basis is in [0 , [1, we could identify the model that has the least
eect on our search (phase ip, say) and choose our computational basis such that the bit ip noisy
environment that we are in has a phase ip eect on the search. Thus, we choose [+ , [ as our
computational basis.
For the purposes of this project, we remind the reader that a delity of 0.5 shall be the minimum
requirement of a successful search. We shall present our analysis in the following manner: Firstly, we
discuss the eect of noise on just one gate. Thus, in that section, all other gates are idealized and
36
ANALYSIS One Gate Noisy
assumed to run to full swing, i.e. =
full
and =
full
. As having only one gate noisy is an implausible
physical situation, this section is meant merely to deepen our understanding as to how the three noise
models may aect the two types of gates (rotation and
U
SWAP
) as they act on the dierent states.
It aims to deepen our appreciation and understanding of why the steps proposed in later sections do
indeed improve the search. Secondly, we study the more realistic situation where all the interactions are
infused with noise, but where all the gates are still running to full swing. Only in the last section do we
relax the constraint on and in attempts to maximize the delity of the search algorithm.
4.2 One Gate Noisy
In this section we shall rstly present a table that summarizes the eects of noise on individual
gates, before looking into some very interesting features of Grovers Search in noise. We then ask the
following question: what happens if entanglement fails to be generated in the course of the search?
As entanglement is an apparent benet that quantum mechanics has over classical mechanics, it would
be intriguing indeed to learn what happens to the quantum algorithm when no entanglement is generated.
Several clarications are in order before one can possibly make sense of the table presented in the
following page. The blue graphs plot for delity and negativity under
x
noise, the red,
y
and the
purple under
z
noise. Recall that indexing anything with ij species the i
th
stage of the search and the
j
th
operation of that stage (refer to section 2.3.1). The delity graphs in the table plots P (recall that
P is always the delity of the nal output state of the search) against t
ij
, where i and j are indices of
the noisy gate. The negativity graphs on the other hand, plots for A
ij
against t
ij
.
The last three columns with the (s) and (s) is merely a summary the denotes that the delity
of the search under that particular noise model is higher than the one with the . This evaluation is
done only at the point when the gate is completed, i.e. at
full
and
full
. The input states listed in this
table (second column) are for the GS11 search and are included merely for reference. Do refer to Table
37
One Gate Noisy ANALYSIS
2.2 for the input states of the other searches. The delity and negativity plots look exactly the same for
all 4 searches, with the exception that for [01 and [10, the last rotation gate is unnecessary. For these
graphs, we have set a : = a : J = 1 : 10. In most plots, we see only two distinct lines rather than
three because two of the lines overlap. The and might assist in determining which are the lines
that overlap.
38
ANALYSIS One Gate Noisy
Gate Input Fidelity Graph Negativity Graph L =
x
L =
y
L =
z
R
1
z
[] [++
RZ[pi].pdf
U
SWAP
[+
rootSWAP1fid.pdf rootSWAP1neg.pdf
R
12
z
[
2
,
2
] [+ +i [+
RZ12[ent].pdf rotneg.pdf
R
12
x
[
2
,
2
] [ y y +i [ y y
RZ12[ent].pdf rotneg.pdf
U
SWAP
[01 i [10
rootSWAP2fid.pdf rootSWAP2neg.pdf
R
2
x
[] [10
RX[pi].pdf
Table 4.1: Eect of Noise on Individual Gates
39
One Gate Noisy ANALYSIS
The graphs plotted in the table, and for the rest of this section, has time t
ij
as the x-axis as it
should for we are exploring the time-evolution and dynamics of the search. However, the maximum
value of t
ij
, as well as the meaning of the explicit values of t
ij
varies according to the dierent operations
applied (for and varies depending on the operation under scrutiny) and the speed of the operations,
as t
ij
= / or /J. The y-axis represents either delity P, or negativity A as specied.
Our rst observation is that there is a direct correlation between the generation of entanglement and
of P. The following graphs plot for delity and negativity of the rst
U
SWAP
gate, G
12
.
4Fid.pdf
(a) Fidelity
4Neg.pdf
(b) Negativity
Figure 4.1: Time evolution of P and A
12
against t
12
a : J = 1 : 10
Comparing the delity plot (a) and negativity plot (b) of
x
(blue), or those of
y
and
z
(red and
purple), we observe that at the point where negativity is at its highest, delity reaches its maximum.
Included are graphs where delity and negativity are plotted on the same axis, with a certain constant
value added to the negativity plot to allow the two plots maximum value to coincide.
40
ANALYSIS One Gate Noisy
4a.pdf
(a) L =
x
4b.pdf
(b) L =
y
and
z
Figure 4.2: Comparing delity and negativity
These graphs allows us to clearly see that the maximum point for delity and negativity occurs at the
same time
1
.
This correlation with delity however, holds only for the generation of entanglement and not for
entanglement in general. Even when all other gates are idealized, this correlation does not hold for any
of the other three gates involving entangled states not even for the
U
SWAP
gate that disentangles the
two qubits. Whilst it is simplistically true that the more entanglement generated, the better the delity
of the algorithm (other gates idealized); once we have generated a nite amount of entanglement, it is not
necessarily as clear anymore that further loss of entanglement would have as direct a correlation with -
delity. This is best seen in the G
13
and G
21
gates rotation gates whose application should not aect the
negativity of the states evolving through them. In the presence of noise, the negativity of any entangled
state evolving through a rotation gate will necessarily drop. Yet, it would be ridiculous to suggest that it
would then be better to skip over these gates to obtain a better delity. To understand why this is true,
consider a
1
2
([01 +[10) state bound to enter a noisy R
2
x
[]. The output of this operation would be
1
2
([00 +[11). If this gate were to be skipped over, we would nd that the delity of the state with re-
spect to
1
2
([00 +[11) would be 0. If it undergoes noisy evolution, even if entanglement is lost, delity
of the nal output state with respect to
1
2
([00 +[11) would at least have a chance to be higher than 0.
1
No label was utilized to distinguish the negativity plot from the delity plot in the graphs below because the distinction
is of no consequence. This graph is but a tool to show that the maximum point of both coincide.
41
One Gate Noisy ANALYSIS
Next, we direct out attention to the two
U
SWAP
gates. Notice that if the input state is in the
eigenbasis of the the Pauli matrix that characterizes the noise in the environment, the eect of the
environment is signicantly lesser.
Gate Input Fidelity Graph Negativity Graph L =
x
L =
y
L =
z
U
SWAP
[+
rootSWAP1fid.pdf rootSWAP1neg.pdf
U
SWAP
[01 i [10
rootSWAP2fid.pdf rootSWAP2neg.pdf
From the rst row, when the input state is an eigenstate of
x
, the
x
Lindblad operator (blue
plot) has less of a detrimental eect on the delity and negativity of the search, as compared to the
y
(red plot) and
z
(purple plot) Lindblad operators. For the second column, where the input state
is a maximally entangled Bell state as expressed in the eigenstates of
z
, the
z
Lindblad operator has
less of a harmful eect on it. (Do refer to Table 2.2 to observe the forms the input states that the
other searches take. This connection holds true for all the four searches). Thus, if the input state is the
eigenbasis of the Lindblad operator, that state, as it evolves through a
U
SWAP
gate, will be more robust.
Furthermore, as v
12
evolves through G
12
in an environment characterized by the
x
Lindblad oper-
ator, entanglement will always be generated regardless of the ratio a : J. For v
22
, if the environment
is characterized by a
z
noise, the entanglement, when destroyed will always be immediately generated
again. The graphs below show this
2
:-
2
The reason for this particular choice of a : J shall be apparent later
42
ANALYSIS One Gate Noisy
3aNeg.pdf
(a) N
12
against t
12
3bNeg.pdf
(b) N
22
against t
22
Figure 4.3: Time evolution of A
ij
against t
ij
a : J = 51 : 100
This places a natural constraint on the question raised what happens if G
12
fails to generate entan-
glement? In our attempts to answer this question, we are limited to a discussion of only the bit-phase
ip and phase ip noise models. The condition that must be fullled to prevent entanglement from being
generated under these two noise models is a : J > 1 : 2. When a = 2J,
J
2
4a
2
as introduced in
the noisy maps of Section 3.3 reduces to 0.
Analysing the behaviour of delity when this ratio exceeds
1
2
, we shall nd that with no entanglement
generated, it is impossible for the delity of the search to reach 0.5. Recall that in this section, the rest
of the gates are idealized. Hence, even in idealized situations, the search will fail when no entanglement
is generated. Fidelity can, however, exceed 0.25, and thus, this search still returns us results that exceed
what we would have been able to obtain via classical means. The P and A
12
curve against t
12
under
L =
z
inuence where a : J 51 : 100 is plotted below (one will obtain the exact same curve with
L =
y
).
43
One Gate Noisy ANALYSIS
1aFid.pdf
(a) Fidelity
1aNeg.pdf
(b) Negativity
Figure 4.4: Time evolution of P and A
12
against t
12
a : J = 51 : 100
Comparing it with the delity plot of L =
x
inuence, under the same a : J ratio, we notice that the
state under the bit ip noise is much more robust than those under phase ip noise.
3aFid.pdf
Figure 4.5: Time evolution of P against t
12
a : J = 51 : 100
Even more interesting is the following graphs:
44
ANALYSIS One Gate Noisy
3cFid.pdf
(a) Fidelity
3cNeg.pdf
(b) Negativity
Figure 4.6: Time evolution of P and A
12
against t
12
a : J = 100 : 1
These graphs show that when there is even a tiny amount of entanglement generated, given that the
rest of the gates are idealized, it is still possible to obtain a successful Grovers Search. This is a rather
amazing nd, considering the large ratio of a : J. This shows how fundamental a role entanglement
plays in quantum computing.
Thus, from this section, we learn the following:-
1. Entanglement is central to Grover's Search, for the following reasons. Firstly, there is a direct connection between the amount of entanglement one is able to generate and the fidelity of the search: if the U_√SWAP operation is stopped at the point where maximum entanglement is generated, the search will have yielded an output with the highest fidelity attainable under the conditions it was subjected to. Secondly, if entanglement is not generated, Grover's Search can never succeed, even when it is idealized in every other respect. If, on the other hand, entanglement is generated ever so slightly, in a situation idealized in every other manner, the search will succeed.
2. A state evolving through a U_√SWAP gate will be relatively robust if it happens to be an eigenket of the Lindblad operator characterizing the environment. Entanglement will definitely be generated in such cases.
3. Even in the absence of entanglement, a quantum algorithm can still have the potential to surpass a classical algorithm.
4.3 All Gates Noisy
In this section, we consider what happens to the search when all the gates involved are noisy. We first consider cases where the speeds of the rotation gates and the U_√SWAP gates are the same, i.e. it takes the same amount of time to perform a π/2 rotation as it does to perform a U_√SWAP gate. We then vary the relative speed of the two operations. Throughout this subsection, however, θ and φ still run to θ_full and φ_full.
4.3.1 Same Speed Gates
When the speeds of the rotation gates and the U_√SWAP gates are the same, we observe the following:

1. The fidelity of the marked state is least affected by an environment characterized by L = σ_x.

2. For GS01 and GS10, conducting a search in an environment characterized by bit flip noise has a higher probability of success than in the other two noisy environments. For GS00 and GS11, however, no such preference can be discerned.

3. Upon re-evaluating the correlation between the entanglement generated and the fidelity of the output of the search (now that all the gates are noisy), we find that for two of the three noise models studied the correlation still holds, a happy conclusion for quantum computing indeed.

The tables below tabulate the fidelity results for two particular initial conditions, in which the ratio of a to both gate couplings is 1 : 15 and 1 : 20 respectively. These two conditions were chosen because they provide fidelity values that are close to a successful search.
                          a : J = 1 : 20                     a : J = 1 : 15
              L = σ_x    L = σ_y    L = σ_z      L = σ_x    L = σ_y    L = σ_z
P_M            0.748      0.697      0.663        0.686      0.628      0.594
P              0.517      0.516      0.517        0.440      0.438      0.440

Table 4.2: Fidelity of GS00 and GS11
                          a : J = 1 : 20                     a : J = 1 : 15
              L = σ_x    L = σ_y    L = σ_z      L = σ_x    L = σ_y    L = σ_z
P_M            0.748      0.697      0.663        0.686      0.628      0.594
P              0.578      0.546      0.548        0.500      0.468      0.471

Table 4.3: Fidelity of GS01 and GS10
As the two ratios are equal, only one of them is listed in the table headers. In the tables presented above, P_M refers to the fidelity of the marked state, whilst P, as always, refers to the fidelity of the final output. The values for GS00 and GS11 are the same because the structures of these two searches are similar; the same goes for GS01 and GS10. The first two observations mentioned can be drawn simply by referring to the tables above. Understanding or explaining them, however, requires us to refer back to a simplified version of Table 4.1.
Table 4.4: Effect of Noise on Individual Gates. For each gate of the search, in order R_z^1[π], U_√SWAP, R_z^12[π/2, -π/2], R_x^12[π/2, -π/2], U_√SWAP and R_x^2[π], together with its input state, the table records a ✓ or ✗ under each of L = σ_x, σ_y and σ_z.
Recall that the ✓ and ✗ refer to the relative performances of the search under the different Lindblad operators, assuming firstly that all other gates are ideal and secondly that all gates are applied to θ_full and φ_full. From Table 4.4, it is clear that the effect of the σ_x Lindblad operator (two ✓s) on the gates prior to obtaining the marked state is less detrimental than that of σ_y (one ✓), which is in turn less detrimental than the σ_z Lindblad operator. This explains the trend of P_M, our first observation above.

The second can be explained as follows. As GS01 and GS10 involve only the first five gates, the effect of the σ_x Lindblad operator on the search is less than that of σ_y or σ_z, as the former has two ✓s and one ✗ whilst the latter two each have one ✓ and two ✗s. Over the six gates in GS00 and GS11, there are two ✓s and two ✗s for all three noise models. Thus, we should indeed expect the different noise models to return roughly the same fidelity. Note that both these explanations presuppose that the difference between a ✓ and a ✗ on a U_√SWAP gate is exactly the same as the difference between a ✓ and a ✗ on a rotation gate. Thus, comparing scenario (a), where we have a ✓ on the U_√SWAP gate and a ✗ on a rotation gate, with scenario (b), where we have a ✗ on the U_√SWAP gate and a ✓ on the rotation gate, we ought to find that the two yield the same effect. Referring back to the third column of Table 4.1, it can be seen that this is indeed roughly true. (Do note that the ranges of the axes differ between graphs.)
To prove the third observation, we plot the negativity of v_12 (N_12) and the final fidelity, P, against t_12, where the conditions have been set such that the ratio of a to both gate couplings is 1 : 20. Recall that v_12 is the state upon which the first U_√SWAP gate, G_12, acts. In the following plots, a certain constant has been added to the value of the fidelity; this allows us to compare the fidelity and negativity curves with ease (again, the only important point is to determine whether the maximum points of the two curves coincide, so it matters not which curve is the fidelity and which the negativity). The blue curves plot for bit flip noise, red for bit-phase flip and purple for phase flip.
Figure 4.7: Time evolution of P and N_12 against t_12, all gates noisy. (a) L = σ_x; (b) L = σ_y; (c) L = σ_z.
Thus, we note that for the bit flip and bit-phase flip noise, assuming that all gates run to full swing, there is a direct correlation between the entanglement generated and the fidelity. These graphs were obtained rather late in the project, and we therefore lacked the time needed to explore why the case is different for phase flip noise.
Taking the first and second observations together, we seem to reach a rather disturbing conclusion: that there is actually no simple correlation between the fidelity of the marked state and the fidelity of the final output, at least for GS00 and GS11. Comparing across the three noise models, it seems clear that a higher P_M does not necessarily entail a higher P. Would a search really be a search if there were no correlation between how well I can mark my state and how well I can find it? Fortunately, this is not necessarily the conclusion to be drawn from this observation. All that this study shows is that there is no simple correlation between the fidelity of the marked state and the fidelity of the final output for searches done across different environments. It is still possible, likely even, that for a particular search in one specific environment, characterized by one Lindblad operator, there would still be a correlation between the fidelity of the marked state and the final output of the search, such that the better the fidelity of the marked state, the better the resulting output of the search. The fact that, between two or more different searches, the search with the highest P_M does not result in the highest P does not mean that, between two or more different scenarios of the same search, the highest P_M will not result in the highest P.
4.3.2 Different Speed Gates

Now, when the speeds of the two gates are not the same, we can make two further observations:

1. Speeding up the rotation gates improves the search considerably more than speeding up the U_√SWAP gates does.

2. A quicker U_√SWAP gate lessens the effect of bit-phase flip noise on the search.

The tables below tabulate data for the case where the speed of one gate is three times that of the other, while the ratio of a to the slower gate remains at 1 : 15.
                 rotation speed : J = 3 : 1          rotation speed : J = 1 : 3
              L = σ_x    L = σ_y    L = σ_z      L = σ_x    L = σ_y    L = σ_z
P_M            0.824      0.746      0.728        0.724      0.703      0.658
P              0.609      0.581      0.609        0.501      0.519      0.501

Table 4.5: Fidelity of GS00 and GS11 (rotation speed : J ≠ 1 : 1)
                 rotation speed : J = 3 : 1          rotation speed : J = 1 : 3
              L = σ_x    L = σ_y    L = σ_z      L = σ_x    L = σ_y    L = σ_z
P_M            0.824      0.746      0.728        0.724      0.703      0.658
P              0.644      0.596      0.627        0.578      0.563      0.541

Table 4.6: Fidelity of GS01 and GS10 (rotation speed : J ≠ 1 : 1)
To notice the first observation, simply compare the P values of one Lindblad operator across the two ratios 3 : 1 and 1 : 3. When the rotation gates run three times faster than the U_√SWAP gates, P is much higher than in the opposite case. The second observation can be made by looking at the L = σ_y column: when J is the faster coupling, the value of P for L = σ_y is higher than the values of P for L = σ_x and L = σ_z. Do compare the values in these two tables with those in Tables 4.2 and 4.3 to appreciate the fidelity increment afforded to us when we speed up the gate operations.

Explaining the first observation, that the fidelity improves more when the rotation gates are sped up, is probably as simple as this: the search requires a larger number of rotation gates than U_√SWAP gates. Thus, improving the efficacy of the rotation gates gives a larger overall fidelity improvement than speeding up the U_√SWAP gates does.

That a higher J improves the fidelity of the search under the bit-phase flip operator more than it does for the searches under the other two Lindblad operators can be understood through the following two graphs:
Figure 4.8: Time evolution of fidelity through the noisy G_12. (a) a : J = 1 : 5; (b) a : J = 1 : 20.
These two graphs plot the fidelity of the output of the Grover's search when only G_12 is noisy, assuming the rest of the gates run to θ_full and φ_full ideally. Numerically, by the end of the gate for the first initial condition, a : J = 1 : 5, the fidelity under L = σ_x is 0.789, whilst the fidelity of the search under L = σ_y and σ_z is 0.588. For the second condition, a : J = 1 : 20, the fidelities are 0.929 and 0.860 respectively. Thus, for bit flip noise, the increase in fidelity as one increases the speed of the gate is a mere 17.9 percent, whilst for the bit-phase flip and phase flip noise the increase is a whopping 46.2 percent.
The graphs for G_22 are as follows:
Figure 4.9: Fidelity as it evolves through a noisy G_22. (a) a : J = 1 : 5; (b) a : J = 1 : 20.
We notice that a similar phenomenon happens here, with the evolution under L = σ_x being replaced by that under L = σ_z noise and vice versa (the blue and purple curves swap). Hence, when we combine both gates, we find that the fidelity under bit-phase flip noise enjoys a larger benefit, as it increases more in both gates, compared with the search under bit flip and phase flip noise, for which only one of the two gates affords a larger increment.

Note that this increase in fidelity for the L = σ_y environment is seldom large enough to outweigh the natural advantage that the L = σ_x environment has for GS01 and GS10. That is, for GS01 and GS10, it is often the case that the bit flip environment still offers the highest fidelity possible, regardless of the ratio of the rotation speed to J.
Translating all these observations into practical advice, we arrive at the following three recommendations:

1. For GS01 and GS10, one should always aim to run the search in a computational basis such that the environment has a bit flip effect on the search.

2. If one had to choose between speeding up rotation gates and speeding up U_√SWAP gates, one should choose to speed up the rotation gates, for they allow a higher increment in fidelity for a given speedup.

3. If it happens that one's U_√SWAP gates are much faster than one's rotation gates, one ought to choose a computational basis such that the environment is characterized by a bit-phase flip Lindblad operator when running GS00 and GS11.
4.4 Maximizing Fidelity
4.4.1 Program
For any given ratios of a to the rotation coupling and of a to J, a simple program to maximize the fidelity of Grover's Search can be written. The Mathematica file for this program is attached in the appendix. In this section, however, we give pseudo-code so that the reader can obtain a general idea of how the file ought to work (the program itself may not work exactly like the pseudo-code, whose role is merely to paint a general picture).

From our solution to the Lindblad equation, we have explicit forms of how a particular density matrix, and thus its fidelity, evolves in time. Given that the evolution of the fidelity under noise is a smooth curve, determining the point of maximum fidelity is simple: find the stationary point of the fidelity and the corresponding time variable at that point. We therefore wrote a program that does precisely that. We call the method maxFid_(ij), where ij again refers to the i-th stage of the search and the j-th gate operation of that stage. This method returns the t_ij at which P (the fidelity of the final output state) is maximized. Do note that, in general, we shall use the θ and φ variables (the fraction of the gate completed) for the discussion, rather than the time parameter.
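To make the "fraction of gate completed" picture concrete, the sketch below parameterizes ideal versions of the two gate families by completion fractions. This is only an illustrative assumption of how θ and φ enter; in the thesis the partially completed gates are noisy and come from the Lindblad solutions instead.

import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rz(theta):
    """Ideal single-qubit z rotation carried out up to angle theta
    (theta_full = pi for the R_z[pi] gate of the search)."""
    return expm(-0.5j * theta * sz)

def partial_swap(phi):
    """Exchange evolution exp(i*phi*(pi/4)*(sigma1.sigma2 - 1)):
    phi = 1 gives SWAP, phi = 1/2 gives a square root of SWAP."""
    dot = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
    return expm(1j * phi * (np.pi / 4) * (dot - np.eye(4)))

print(np.round(partial_swap(1.0).real, 3))   # the SWAP matrix
print(np.round(partial_swap(0.5), 3))        # a sqrt(SWAP)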
As the fidelity of the search depends strongly on the explicit form of the input state, do note that calling maxFid_(ij) once for each gate is insufficient to ensure that we have obtained the maximal fidelity possible. This is because maximizing the fidelity for a later gate can change the effect of the noisy evolution of an earlier gate: when maxFid_(11) was called, it was called under the assumption that the later gates obeyed a certain condition, and this condition is no longer obeyed once maxFid_(ij) has been called for those later gates. Thus, an iteration is called for, until the maximization procedure produces no further significant increment in fidelity. We take the increment to be insignificant once the ratio of two successive values equals 1.0000 to 4 decimal places (as shall be explained later). The program that runs a particular gate of Grover's Search (G_ij) is called searchG_ij(t), where t is the parameter we supply to it. Do note that maxFid_(ij) already has a version of searchG_ij() (with no t parameter) incorporated in it, plus additional instructions to find the maximum fidelity and to return the corresponding t value.
Our next step is to recall that Grover's Search is divided into two main stages, the marking and the enforcing. Generally, a maximum fidelity in the final state is meaningless if the search did not successfully mark the state. Thus, for the first three gates, maxFid_(ij) maximizes P_M, and for the next two (GS01 and GS10) or three (GS00 and GS11) gates, maxFid_(ij) maximizes P. The pseudo-code to maximize P_M has the following form:
i = 1,  C_11 = 1,  C_12 = 1,  C_13 = 1,
T_11 = t(θ_full^11),  T_12 = t(φ_full^12),  T_13 = t(θ_full^13)      (durations of the complete gates)

while  T_11/C_11 ≠ 1.0000  ||  T_12/C_12 ≠ 1.0000  ||  T_13/C_13 ≠ 1.0000  do
    C_11 ← T_11
    C_12 ← T_12
    C_13 ← T_13        (C stores the value of T_ij from the previous iteration,
                        allowing the T_ij/C_ij comparison in the while condition)
    if i == 1 then
        T_11 ← maxFid_(11)
        searchG_12(T_12)
        searchG_13(T_13)
        i++
    else if i == 2 then
        searchG_11(T_11)
        T_12 ← maxFid_(12)
        searchG_13(T_13)
        i++
    else if i == 3 then
        searchG_11(T_11)
        searchG_12(T_12)
        T_13 ← maxFid_(13)
        i ← 1
    end if
end while
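A self-contained sketch of the same iteration in Python is given below. The fidelity function is a smooth stand-in for the marked-state fidelity P_M obtained from the Lindblad solutions, and the helper names, bounds and tolerance are illustrative rather than those of the actual Mathematica program.

import numpy as np
from scipy.optimize import minimize_scalar

# Stand-in for P_M as a function of the three gate durations (T_11, T_12, T_13).
# Any smooth function will do for illustrating the iteration structure.
def marked_fidelity(T):
    t11, t12, t13 = T
    return np.cos(t11 - 0.9)**2 * np.cos(t12 - 0.8)**2 * np.cos(t13 - 0.95)**2

def maximize_marked_fidelity(T_full, tol=1e-4):
    """Coordinate-ascent analogue of the maxFid_(ij) loop: optimize one gate
    duration at a time while holding the others fixed, stopping once no
    duration changes by more than the tolerated ratio (agreement to 4
    decimal places in the thesis)."""
    T = list(T_full)
    while True:
        C = list(T)                                  # durations from the previous sweep
        for i in range(3):                           # plays the role of i = 1, 2, 3
            def objective(t, i=i):
                trial = list(T)
                trial[i] = t
                return -marked_fidelity(trial)       # minimize the negative fidelity
            res = minimize_scalar(objective, bounds=(1e-6, T_full[i]), method="bounded")
            T[i] = res.x
        if all(abs(T[i] / C[i] - 1.0) < tol for i in range(3)):
            return T

print(np.round(maximize_marked_fidelity([1.0, 1.0, 1.0]), 4))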
One can easily reproduce this to maximize P. With this program, we can now obtain θ_max and φ_max for any given initial condition. The rest of this section consists of applying the algorithm above to obtain the θ and φ values that maximize the search, given an initial condition. We then study the trends in the data, seeking nuggets of insight into the search. We shall find ourselves endeavouring to explain the trends of the different data sets more than anything else in the upcoming discussion, for the trends are relatively diverse.

Notice that with this program we can propose very specific advice for obtaining better fidelity, depending on the given initial condition. The advice here takes the form of explicit numerical values of when one ought to stop each gate operation.
4.4.2 Same Speed Gates
The initial conditions chosen are as before: the ratio of a to both gate couplings is 1 : 15. The table below tabulates θ_max/θ_full (for the rotation gates) and φ_max/φ_full (for the U_√SWAP gates) for GS00 and GS11.
                                              |00⟩                              |11⟩
Gate                              L = σ_x   L = σ_y   L = σ_z      L = σ_x   L = σ_y   L = σ_z
G_11   R_z^1[π]                    1.001     0.949     0.956        1.003     0.968     0.959
G_12   U_√SWAP                     0.923     0.835     0.866        0.923     0.853     0.873
G_13   R_z^12[π/2, -π/2]           0.921     0.907     0.914        0.921     0.918     0.917
G_21   R_x^12[π/2, -π/2]           0.827     0.828     0.816        0.827     0.850     0.839
G_22   U_√SWAP                     0.679     0.602     0.750        0.679     0.669     0.797
G_23   R_x^2[π]                    0.997     0.993     1.013        0.997     0.998     1.020

Table 4.7: θ_max/θ_full and φ_max/φ_full for GS00 and GS11
The first trend that we shall focus on stems from G_11 and G_23, the first and last gates. Pulling out the relevant fidelity-time graphs from Table 4.1:
Figure 4.10: P evolution through different rotation gates, G_11 and G_23. (a) G_11; (b) G_23.
The axes in the graphs above have been rescaled for easier comparison. Notice from Table 4.7 that for G_11 only the environment characterized by a σ_x Lindblad operator has θ_max/θ_full greater than 1, whilst for G_23 it is the phase flip noise that has θ_max/θ_full > 1. Notice also that the form of the blue curve (L = σ_x) in Fig. 4.10a differs slightly from that of the red (L = σ_y) and purple (L = σ_z) curves. The same goes for Fig. 4.10b, where the purple curve differs from the blue and red. Compared with the other curves, their form suggests that their maximum point is not reached over the course of the gate. Finding the stationary points of these two curves does indeed yield θ_max/θ_full greater than 1, unlike for the other curves. Thus, knowledge of how the search performs when noise is present in only one gate can actually motivate an understanding of Table 4.7.
The second trend we shall look at is the value of φ_max/φ_full for the two U_√SWAP gates, G_12 and G_22, over the three different noise models. Notice that one of the noise models has a significantly higher value of φ_max/φ_full: for G_12 it is under the bit flip environment that φ_max/φ_full takes on a larger value, whilst for G_22 it is under the phase flip environment. Drawing out the relevant graphs from Table 4.1 again, we obtain:
Figure 4.11: P evolution through different U_√SWAP gates, G_12 and G_22. (a) G_12; (b) G_22.
Comparing the blue curve (L = σ_x) of Fig. 4.11a with the red and purple curves, we notice that the maximum point of the blue curve occurs later in the operation than do those of the red and purple ones. Thus, more of the gate needs to be completed by a search executed in a bit flip environment. A similar observation holds for G_22, where instead of the blue curve we turn our focus to the purple curve (L = σ_z). This explains why φ_max/φ_full for G_12 is significantly higher under bit flip noise, whilst for G_22 it is significantly higher under phase flip noise. Observe also that for the blue curve in Fig. 4.11a, the fidelity at φ_max and the fidelity at φ_full do not differ much, unlike for the red and purple curves. A similar observation can be made for Fig. 4.11b. Another way of putting this is to notice that when the environment's effect on the fidelity is more detrimental, the maximum fidelity attainable tends to be quite a bit larger than the fidelity at the end of a complete cycle of the gate. Thus, we can draw the following connection: the lower the value of φ_max/φ_full and the more detrimental the effect of the environment on the search (these two normally come together), the more useful and beneficial is the whole process of maximizing fidelity.

Being able to relate these observations to increments in fidelity may be helpful in explaining the following table, which tabulates the percentage increase in fidelity attainable given the initial constraint that the ratio of a to both gate couplings is 1 : 15.
                                              |00⟩                              |11⟩
                                  L = σ_x   L = σ_y   L = σ_z      L = σ_x   L = σ_y   L = σ_z
P_M    θ_full and φ_full           0.686     0.628     0.594        0.686     0.628     0.594
       θ_max and φ_max             0.693     0.643     0.605        0.693     0.641     0.605
       % increase                  1.0       2.5       1.9          1.0       2.2       1.9
P      θ_full and φ_full           0.440     0.438     0.440        0.440     0.438     0.440
       θ_max and φ_max             0.476     0.488     0.477        0.476     0.477     0.471
       % increase                  8.2       11.3      8.5          8.2       8.7       7.0

Table 4.8: Percentage Increase in Fidelity for GS00 and GS11
That the percentage increase in the fidelity of the marked state, P_M, is smaller for bit flip noise than for the other two noise models may be explained by precisely the fact that when the environment's effect on the fidelity of G_12 is more detrimental, the potential for the fidelity to be maximized is also higher. That, however, does not explain why P_M under the L = σ_y environment enjoys a slightly higher percentage increase than under L = σ_z. To answer that, we once again plot the fidelity through G_11, with rescaled axes and a certain constant added to the L = σ_z curve to ease the task of comparison. We compare the extent to which the maximum fidelity exceeds the full-swing fidelity. Plotting the graphs such that their fidelities are equal at θ_full, we obtain the graph below:
Figure 4.12: P evolution through G_11.
Note that at θ_max the red curve returns a higher maximum fidelity, which clears up why the percentage increase in P_M under bit-phase flip is higher than that under phase flip.

Another interesting point to note is that, comparing the percentage increase of P_M with the percentage increase of P, there is a large jump between the two. We attribute this to the second U_√SWAP gate, G_22. This endorses the earlier statement that the smaller the value of φ_max/φ_full, the higher the percentage increase afforded to us. Considering the value of φ_max/φ_full for G_22, it should come as no surprise that there is a large percentage increase in the fidelity of the search, as the value of φ_max/φ_full for G_22 is very small. Furthermore, the percentage increase rises to a whopping 11.3% for the GS00 search under bit-phase flip noise because φ_max/φ_full for G_22 under those conditions is significantly lower than the rest.
As to why the φ_max of G_22 is so small, this is a question we have not yet been able to explain. This, amongst several other unexplained observations, we attribute to the dependence of the evolution of the fidelity on the explicit form of the input state. As the noise does not seem to affect the various states in a strictly systematic way, small changes to the state give rise to numbers and data that look very different. This difference is further augmented by the cumulative effect of these slightly different input states evolving through a noisy gate repeatedly (the program loops). Thus, some of the numbers churned out by the program may be as they are simply as a matter of descriptive fact.

This qualification arises especially when we attempt to look at GS01 and GS10.
                                              |01⟩                              |10⟩
Gate                              L = σ_x   L = σ_y   L = σ_z      L = σ_x   L = σ_y   L = σ_z
G_11   R_z^1[π]                    1.059     1.019     1.000        0.942     0.822     0.850
G_12   U_√SWAP                     0.922     0.847     0.870        0.922     0.852     0.868
G_13   R_z^12[π/2, -π/2]           0.873     0.896     0.916        0.869     0.740     0.784
G_21   R_x^12[π/2, -π/2]           0.919     0.914     0.910        0.919     0.937     0.931
G_22   U_√SWAP                     1.057     1.079     1.085        1.057     1.089     1.120

Table 4.9: θ_max/θ_full and φ_max/φ_full for GS01 and GS10
As can be seen, very few of the explanations used in the discussion of GS00 and GS11 can be reused for this table. The percentage increases are tabulated below:
                                              |01⟩                              |10⟩
                                  L = σ_x   L = σ_y   L = σ_z      L = σ_x   L = σ_y   L = σ_z
P_M    θ_full and φ_full           0.686     0.628     0.594        0.686     0.628     0.594
       θ_max and φ_max             0.696     0.640     0.603        0.696     0.661     0.615
       % increase                  1.4       2.0       1.6          1.4       5.3       3.6
P      θ_full and φ_full           0.500     0.468     0.471        0.500     0.468     0.471
       θ_max and φ_max             0.510     0.479     0.476        0.510     0.489     0.484
       % increase                  2.0       2.3       1.0          2.0       4.5       2.8

Table 4.10: Percentage Increase in Fidelity for GS01 and GS10
The percentage increment obtained from maximizing the GS01 and GS10 searches is small compared with that obtained from maximizing GS00 and GS11. This, we think, can largely be attributed to the difference in the G_22 gate, the second U_√SWAP gate. As can be seen from a comparison of the two tables, φ_max/φ_full for GS00 and GS11 is considerably smaller than 1, whilst for GS01 and GS10 it is slightly larger than 1. As discussed, a φ_max/φ_full smaller than 1 carries an advantage in terms of the potential to maximize fidelity, and thus the percentage increase for GS01 and GS10 is a lot lower.
Why this difference in φ_max/φ_full should exist, however, is not very clear to us. Amongst the postulates that we figured made the most sense was the following explanation. In our analysis, the state whose fidelity is under scrutiny is the output of the Grover Search: the output of R_x^2[π] (G_23) for GS00 and GS11 and the output of U_√SWAP (G_22) for GS01 and GS10. As the fidelity was not measured directly on the output of G_22 for GS00 and GS11 (it was measured on the output of G_23 instead), the φ_max differed from that obtained for GS01 and GS10, for in those searches the fidelity was measured on the output of G_22, the U_√SWAP gate itself. However, when we then tried plotting the fidelity of v_22 itself in the GS00 and GS11 searches, we obtained the same result, φ_max/φ_full < 1. Even when we tried subjecting the outputs of GS01 and GS10 to an extra R_x gate, there was no change in the plots that could allow us to explain this odd feature.
Whilst the first U_√SWAP gate in Table 4.9, G_12, performs as expected, the values of θ_max for all the other gates, like the value of φ_max for G_22, completely befuddle us, for the only two differences between GS00, GS11 on the one hand and GS01, GS10 on the other are the final additional R_x gate and the direction of rotation of R_z and R_x in G_13 and G_21. Furthermore, when all other gates are idealized, a noisy two-qubit rotation gate acting on an entangled state should return the same plots regardless of the direction of rotation (refer to Table 4.1). Thus, the results baffle us, and we can but fall back on the position that the dependence of the evolution of the fidelity on the explicit form of the input state can cause trends that are somewhat unpredictable. Given that neither the additional R_x gate nor the relative direction of rotation (with all other gates idealized) can account for the discrepancy between these searches, we conclude that the source of the discrepancy is the less-than-ideal entangled state being acted upon by the two-qubit noisy rotation gates (G_13 and G_21). This hypothesis is further strengthened by recalling that the first three rows of data in Table 4.9 result from the maximization of the first stage of Grover's Search. The first stage of the search, however, involves only three gates, two of which are exactly identical across all four searches. The only difference between the four searches in this stage is the R_z gate. Thus, there could be no other source, from what we can understand at least, for the different numbers churned out by the computer.
The few things that we have learned from this portion are, firstly, that in general the smaller the ratios θ_max/θ_full and φ_max/φ_full, the higher the potential to maximize the fidelity of the overall search. Secondly, we infer that for U_√SWAP gates, the more detrimental the effect of the environment, the better the improvement when maximization is applied. And lastly, which is actually a rather important lesson, this whole process of maximizing a noisy search, even for merely two qubits, is an extremely tedious and complicated procedure.
4.4.3 Different Speed Gates

Moving on to cases where the speed of the operations is within our control, we obtain the data tabulated below for the four different searches. The ratio of a to the slower gate remains at 1 : 15. We tabulate only the percentage increase attainable from the maximization procedure, as previous sections have dealt with the general dependence of the fidelity on the relative speed of the rotation gates and J.
Gate speed ratio        rotation : J = 1 : 1      rotation : J = 3 : 1      rotation : J = 1 : 3
L                        σ_x    σ_y    σ_z         σ_x    σ_y    σ_z         σ_x    σ_y    σ_z
% increase   GS00        8.2    11.3   8.5         6.8    7.4    6.2         4.1    5.2    4.5
of P in      GS01        2.0    2.3    1.0         0.5    1.3    0.3         2.0    1.4    1.0
             GS10        2.0    4.5    2.8         0.5    1.6    0.6         2.0    3.9    3.3
             GS11        8.2    8.7    7.0         6.8    7.1    6.0         4.2    4.2    3.9

Table 4.11: Percentage Increase in Fidelity, rotation speed : J ≠ 1 : 1
Three observations immediately jump out at us. Firstly, we notice that, unlike the scenario where all gates are executed to full swing (θ_full and φ_full), the maximum fidelity attainable is not the same for GS00 and GS11, nor for GS01 and GS10, except when the environment is modelled by a bit flip Lindblad operator. Secondly, we notice that for GS00 and GS11 the percentage increase of the search when J is sped up is lower than when the rotation gates are sped up, which in turn is lower than when neither gate is sped up (refer to Table 4.8). And thirdly, for GS01 and GS10, the percentage increase of the search when the rotation gates are sped up is lower than when J is sped up, which in turn is only slightly lower, if at all, than when neither gate is sped up.
In analysing the first observation, we point out that, given a particular end point, there may be many means of reaching that end. Thus, it is plausible for the evolution to yield exactly the same fidelity at the end of the interaction (θ_full and φ_full) and yet have a different fidelity at every other point of the interaction, i.e. at every other value of θ and φ. Indeed, from the solutions of the Lindblad equations, Eq. (3.13) to Eq. (3.58), we notice that the noisy evolutions of |0⟩ and |1⟩ are indeed slightly different, although they may return the same results for selected values of t; the t corresponding to a completed gate is amongst those. That these values of t are amongst them could be because the generators of the two operations discussed in this project and the Lindblad operators are all Pauli matrices. That the fidelity curves have different forms, and that the output density matrix of the Lindblad equation can differ for the same value of θ and φ, serves at least to further motivate the lack of a clear trend between Tables 4.7 and 4.9, as well as between Tables 4.8 and 4.10. As the maximization procedure performs several iterations, stopping interactions halfway through and using those output states as new input states, it is plausible to imagine the input states diverging further and further from each other, hence returning such diverse results.
All this, however, does not explain why, under L = σ_x, the numbers produced are consistent across the different searches. In all honesty, we have no idea why this is the case. The only plausible explanation we could think of is that under an L = σ_x environment the amount of entanglement generated is significantly greater than for the other two noise models. The question then would be: is there a critical amount of entanglement necessary to stabilize the results of the search, and what would that imply? Unfortunately, we lacked the time to check whether this explanation is the right one, and to explore it further if it were.
The second observation is sufficiently easy to account for, and it reinforces some of the conclusions made in the previous section, i.e. the more detrimental the effect of the environment on the search, the greater the potential for maximization. Thus, when neither the rotation gates nor J are sped up, we find that the percentage increase when the search undergoes maximization is very much higher than when either one of the gates is sped up. As was also determined previously, the smaller the fractions θ_max/θ_full and φ_max/φ_full, the higher the percentage increase. Since for GS00 and GS11 the second U_√SWAP gate (G_22) has the lowest fraction, a large degree of the percentage increment in those two searches is rooted in the U_√SWAP gate. Thus, speeding up J reduces the potential for fidelity increment under the maximization process.
For GS01 and GS10, the observations are similarly accounted for. That the percentage increase of the search is lower when the rotation gates are sped up (as opposed to when J is sped up) is explained by recalling that, for GS01 and GS10, the ratio φ_max/φ_full is very large, and hence the relative contribution of the U_√SWAP gate to the increment in fidelity during maximization is significantly reduced, so much so that it is now less than that of the rotation gates.
From this we can infer that, for a successful search, it would be unlikely for us to obtain a percentage increase higher than the ones stated here, as the detrimental effect of the environment would be lower in those cases, and thus the increment in fidelity smaller.
Chapter 5
Conclusion
Recalling that our main aims in this project were, firstly, to propose ways to run Grover's Search such that we obtain the best possible results and, secondly, to understand the dynamics of the search as it evolves through noise, we now pull the stray strands of ideas presented into a more cohesive summary. Before restating these explicit results, however, we shall briefly run through all that had to be accomplished, all that formed the basis of this study, before summarizing the actual analysis.

We first attempted to understand Grover's Search under ideal conditions for two types of computational model, the circuit model and the one-way computing model. We learned that Grover's Search consists of two stages, and we learned how each stage is executed in the two different computational models. We further explored how to make the circuit on which Grover's Search is executed as efficient as possible, as well as how to generate the cluster state necessary for the search in the one-way computing framework. Following that, we introduced our mathematical model, the essential element of our project. This mathematical model was used to simplify the study of the effects of noise on Grover's Search. We then solved the necessary Lindblad equations, specific to the solid-state quantum dot system, for three different noise models: bit flip noise, bit-phase flip noise and phase flip noise. We also extended the treatment to two qubits and six qubits for application in the circuit model and the one-way computing model. Only then were we ready to analyze the effects of noise on the search.

The following is an important point to keep in mind before we summarize our results: we have sacrificed the generality of Grover's Search (circuit model) in pursuit of the most efficient circuit. Thus, GS00 refers to the circuit that searches for |00⟩; GS01, for |01⟩; and so on. Also, owing to the constraints of time, we only managed to develop the model and the tools necessary to analyze Grover's Search on the one-way computing model; we lacked the time to actually run that analysis.
Finally, to obtain the best possible results for Grover's Search, we first point out that GS01 and GS10 ought to be executed in the computational basis that allows the effect of the environment on the search to take on a bit flip character. For GS00 and GS11, there does not seem to be a preferred computational basis. In general, the quicker the gate operations, the better the search. On top of that, quicker rotation gates offer better results than quicker U_√SWAP gates. A very quick U_√SWAP gate, however, causes a larger increase in the fidelity of the searches in which the effect of the environment takes on a bit-phase flip character. Whilst this increment is seldom large enough to override the natural prerogative of the bit flip environment for GS01 and GS10, it does cause GS00 and GS11 to have a preferred computational basis: the basis that allows the environment to have a bit-phase flip effect on our system.
With the model we introduced to study noise, one can, as we did, write a program to maximize the fidelity of the search for any given initial condition. Thus, we can compute the extent to which each gate should be applied (θ_max and φ_max) for every operation under any given initial condition. As this advice varies on a case-by-case basis, depending on the initial conditions, there is little more to be said about it here.
Aside from all these steps that can be taken to improve the search, we learn that entanglement is indeed central to Grover's Search. In many cases, there is a direct correlation between the entanglement generated and the fidelity. We further learn that, for all four searches, regardless of how strong the effect of the environment, entanglement will always be generated if the environment is characterized by a bit flip Lindblad operator. We also learn that the more detrimental the effect of the environment on the search, the higher the capacity for the maximization of fidelity: the lower the ratios θ_max/θ_full and φ_max/φ_full, the better the percentage increase from the maximization procedure. We learn that the potential for maximization is relatively low for GS01 and GS10, which is acceptable, for these searches have a higher fidelity than GS00 and GS11 to begin with. Most importantly, we learn that even for a two-qubit system, attempting to maximize fidelity is extremely tedious and complicated. As a realistic quantum computer would need far more than two qubits, this gives us an idea of how messy the picture would be were we really to attempt such a maximization on a realistic quantum computer.
Listing, point by point, what we have learned from this analysis does not really do full justice to this project. A large part of the contribution of our work lies in the introduction of a new mathematical model for studying noise, a model that allows us to study the dynamics of the system. Only this ability to probe into the system gives us the means of understanding the features that a noisy system exhibits, and thus only this ability gives us a means of figuring out how a search can be improved. Applying it to Grover's Search merely shows the extent to which this can be done. This model should be equally applicable to other algorithms realized with different physical systems on different computational models.
Bibliography
[1] M. Schlosshauer, Decoherence and the Quantum-to-Classical Transition (Springer-Verlag, 2007).
[2] L. K. Grover, in Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, 1996 (ACM Press, New York, 1996), pp. 212-218.
[3] L. K. Grover, Phys. Rev. Lett. 79, 325 (1997).
[4] L. K. Grover, Am. J. Phys. 69, 769 (2001).
[5] M. Boyer, G. Brassard, P. Høyer and A. Tapp, e-print quant-ph/9605034v1.
[6] R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
[7] H. J. Briegel and R. Raussendorf, Phys. Rev. Lett. 86, 910 (2001).
[8] F. Bodoky and M. Blaauboer, Phys. Rev. A 76, 052309 (2007).
[9] P. Walther et al., Nature 434, 169 (2005).
[10] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000).
[11] D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
[12] D. P. DiVincenzo, Phys. Rev. A 51, 1015 (1995).
[13] A. Barenco et al., Phys. Rev. A 52, 3457 (1995).
[14] A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
[15] M. Horodecki, P. Horodecki and R. Horodecki, Phys. Lett. A 223, 1 (1996).
[16] G. Vidal and R. F. Werner, Phys. Rev. A 65, 032314 (2002).