Unit 4 Materials
Shor's Algorithm
Shor's algorithm is a quantum algorithm that can factor large numbers into their prime factors
far more quickly than any known classical method. It was developed by mathematician Peter
Shor in 1994, while he was at AT&T Bell Labs (he is now at MIT). It is one of the most famous
and impactful algorithms in quantum computing, as it provides an exponential speedup over
the best-known classical factoring algorithms. The ability to factor large numbers has
significant implications for cryptography, since widely used schemes such as RSA rely on
factoring being computationally hard.
Shor's algorithm is based on Quantum Phase Estimation (QPE), a quantum subroutine that
estimates the phase associated with an eigenvalue of a unitary operator.
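The classical reduction at the heart of Shor's algorithm, factoring via order finding, can be
sketched for a toy case. In this minimal sketch, N = 15 and the base a = 7 are illustrative
choices; the quantum speedup comes entirely from using QPE to find the order r, which the
brute-force loop below does classically.

```python
from math import gcd

N = 15      # number to factor (toy example)
a = 7       # chosen base; requires gcd(a, N) = 1

# Order finding: the smallest r > 0 with a^r = 1 (mod N).
# This brute-force loop is the exponentially hard step classically;
# Shor's algorithm replaces it with Quantum Phase Estimation.
r = 1
while pow(a, r, N) != 1:
    r += 1

# When r is even (it is here), a^(r/2) - 1 shares a
# nontrivial factor with N, recovered by a gcd computation.
factor = gcd(pow(a, r // 2) - 1, N)
print(r, factor, N // factor)   # -> 4 3 5
```

For N = 15 and a = 7 the order is r = 4, and the gcd step yields the factors 3 and 5. A full
implementation would also retry with a different random base a when r is odd or the gcd is trivial.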
Grover’s Algorithm
Lov Grover, an Indian-American computer scientist, published the 1996 paper introducing his
quantum search algorithm, which brought him fame in quantum computing.
An algorithm is a set of instructions used to perform some well-defined task on a computer.
A large part of the desire to develop a quantum computer has come from the discovery that
some algorithms work dramatically better on a quantum computer than they could ever work
on a classical computer. This is because the nature of quantum systems, captured in the
superposition and interference of qubits, often allows a quantum computer to work on many
computational paths at once in a way that is not possible, even in principle, on a classical
computer.
Grover's algorithm can be described as a quantum database-searching algorithm. The
algorithm significantly reduces the number of operations needed to solve the search problem
compared to a classical computer. Once again, we have a function f(x) on n-bit inputs, with
x ∈ {0, 1}^n.
The output of the function is a single bit, so we can have f(x) = 0 or f(x) = 1. The task at
hand, which is solved by Grover's algorithm, is the following: there is a single x such that
f(x) = 1, and we want to find that x. (Note that it may be the case that f(x) ≡ 0 identically, in
which case no such x exists.) To solve this problem classically, we can generate input strings
x and simply test the function to see whether f(x) = 1. In the worst case, this requires
2^n − 1 tries. In contrast, Grover's algorithm solves the problem with on the order of √(2^n)
tries. Now suppose that the bit string is small, just 5 bits. The classical algorithm would
require up to 2^5 − 1 = 31 attempts to find the correct x, while Grover's algorithm would
require about √(2^5) ≈ 6 attempts. We already have a significant improvement on a small bit
string. As the bit strings get larger and larger, the improvement offered by Grover's algorithm
becomes very significant.
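The amplitude amplification behind Grover's algorithm can be simulated directly for the 5-bit
example above. This is a minimal NumPy sketch of the statevector evolution; the marked
index 13 is an arbitrary choice standing in for the unknown x with f(x) = 1.

```python
import numpy as np

n = 5                      # number of input bits
N = 2 ** n                 # search-space size, 2^5 = 32
marked = 13                # the single x with f(x) = 1 (arbitrary choice)

# Start in the uniform superposition: every x has amplitude 1/sqrt(N).
state = np.full(N, 1 / np.sqrt(N))

# The optimal number of Grover iterations is about (pi/4) * sqrt(N).
iterations = int(np.round(np.pi / 4 * np.sqrt(N)))   # 4 for N = 32

for _ in range(iterations):
    state[marked] *= -1                  # oracle: phase-flip the marked x
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print(iterations, np.argmax(state ** 2))      # 4 oracle queries, outcome 13
print(round(float(state[marked] ** 2), 3))    # success probability ~0.999
```

After just 4 oracle queries (versus up to 31 classically), measuring the state returns the
marked x with probability above 99%, matching the ≈ √(2^5) scaling discussed above.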
Quantum algorithms can be used for a variety of tasks, including:
Supervised learning
Unsupervised learning
Generative and discriminative models
Dimensionality reduction
Generalized eigenvalue problems in machine learning
Evaluating a classifier
Determining graph connectivity
Pattern matching
2. Machine Learning:
Classical Models: Classical machine learning algorithms process data sequentially
and require extensive computational resources for large datasets.
Quantum Models: Quantum machine learning (QML) can exploit quantum
superposition and entanglement to analyze multiple data points simultaneously,
potentially leading to faster processing times and more efficient algorithms.
3. Drug Discovery:
Classical Models: Classical computational methods simulate molecular interactions
and drug interactions, which can be time-consuming and computationally expensive.
Quantum Models: Quantum computers can represent molecular quantum states
natively, potentially making simulations of chemical interactions more tractable and
accelerating the screening of drug candidates.
4. Optimization:
Classical Models: Classical optimization algorithms, such as linear programming and
genetic algorithms, are used to solve complex optimization problems but can be slow for
large-scale problems.
Quantum Models: Quantum optimization algorithms, like the Quantum Approximate
Optimization Algorithm (QAOA), can potentially solve optimization problems more
efficiently by exploring multiple solutions simultaneously.
5. Artificial Intelligence:
Classical Models: Classical AI models, such as neural networks, have been successful
in various applications but face limitations in handling extremely large and complex
datasets.
Quantum Models: Quantum-enhanced AI models could, in principle, represent and
process very large, high-dimensional datasets more efficiently, though practical advantages
over classical neural networks have yet to be demonstrated.
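The QAOA mentioned in the optimization comparison above can be illustrated with a tiny
numerical sketch: depth-1 QAOA for MaxCut on the smallest possible graph (two nodes
joined by one edge), simulated with plain NumPy. The graph, the angle grid, and the grid
search standing in for a classical optimizer are all illustrative choices, not any particular
library's API.

```python
import numpy as np

# MaxCut on two nodes joined by one edge. The cost (number of cut
# edges) is diagonal in the computational basis:
# states 00, 01, 10, 11 -> cut values 0, 1, 1, 0.
cost = np.array([0.0, 1.0, 1.0, 0.0])

def qaoa_expectation(gamma, beta):
    """Expected cut value of the depth-1 QAOA state |gamma, beta>."""
    # Start in the uniform superposition |++>.
    psi = np.full(4, 0.5, dtype=complex)
    # Phase separation: e^{-i*gamma*C}, with C diagonal.
    psi = np.exp(-1j * gamma * cost) * psi
    # Mixer: e^{-i*beta*X} applied to each qubit.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = np.kron(rx, rx) @ psi
    # Expected cut value of the resulting state.
    return float(np.sum(np.abs(psi) ** 2 * cost))

# Grid search over the two angles (a stand-in for the classical
# optimizer that tunes QAOA parameters in practice).
best = max(qaoa_expectation(g, b)
           for g in np.linspace(0, np.pi, 101)
           for b in np.linspace(0, np.pi, 101))
print(round(best, 3))   # close to 1.0, the true maximum cut
```

For this one-edge graph, depth-1 QAOA can reach the optimal cut value of 1 exactly; the
grid search finds an expectation above 0.99, compared with 0.5 for the unoptimized uniform
superposition.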