
Quantum Machine Learning Tensor Network States

Andrey Kardashin,∗ Alexey Uvarov,† and Jacob Biamonte‡


Deep Quantum Laboratory, Skolkovo Institute of Science and Technology, 3 Nobel Street, Moscow, Russia 121205
(Dated: September 15, 2020)
Tensor network algorithms seek to minimize correlations to compress the classical data representing quantum states. Tensor network algorithms and similar tools—called tensor network methods—form the backbone of modern numerical methods used to simulate many-body physics and have a further range of applications in machine learning. Finding and contracting tensor network states is a computational task which quantum computers might be used to accelerate. We present a quantum algorithm which returns a classical description of a rank-r tensor network state satisfying an area law and approximating an eigenvector, given black-box access to a unitary matrix. Our work creates a bridge between several contemporary approaches, including tensor networks, the variational quantum eigensolver (VQE), quantum approximate optimization (QAOA), and quantum computation.

arXiv:1804.02398v4 [quant-ph] 12 Sep 2020

Keywords: quantum algorithms, VQE, QAOA, quantum machine learning, optimisation; tensor networks,
matrix product states; linear algebra, eigenvectors

INTRODUCTION

Tensor network methods provide the contemporary state-of-the-art in the classical simulation of quantum systems. A range of numerical and analytical tools have now emerged, including tensor network algorithms to simulate quantum systems classically—these algorithms are based in part on powerful insights related to the area law [1–9]. The area law places bounds on the quantum entanglement that a many-body system can generate, which translates directly to the amount of memory required to store a given quantum state, see e.g. [8].

The leading classical methods to simulate random circuits for quantum computational supremacy demonstrations are also based on tensor network contractions. Additionally, classical machine learning has been merged with matrix product states and other tensor network methods [10–14]. How might quantum computers accelerate tensor network algorithms?

Although tensor network tools have traditionally been developed to simulate quantum systems classically, we propose a quantum algorithm to approximate an eigenvector of a unitary matrix with bounded-rank tensor network states. The algorithm works given only black-box access to a unitary matrix. In general, tensor network contraction can simulate any quantum computation.

We focus on 1D chains of tensors (matrix product states) due to some associated analytical simplifications: indeed, matrix product states can be approximated classically, which offers an attractive gold standard against which to compare the quantum algorithm. The general framework we develop applies equally well to 2D and e.g. sparse networks (projected entangled pair states, etc.). However, an early merger between these topics is better situated to focus on 1D.

Even in 1D, tensor networks offer certain insights into quantum algorithms. For example, the maximal degree of entanglement can often be bounded in the description of the tensor network state itself. In other words, the bond dimension (the dimension of the wires) in the tensor network acts to bound the maximal entanglement. Merging quantum computation with ideas from tensor networks provides new tools to quantify the entanglement that a given quantum circuit can generate [15, 16].

For the sake of simplicity, we work in the black-box setting and assume access to a provided unitary Q. The black-box setting does not consider the implementation of Q. Prima facie this appears to be a limitation; in practice, however, the restriction can easily be lifted. For example, in QAOA the problem Hamiltonian can be applied for varying times, offering a natural extension of the oracle idea by giving Q a simple time dependence [17].

In the Discussion we drop the black-box access restriction and cast the steps needed to perform a meaningful near-term demonstration of our algorithm on a quantum computer, providing a low-rank approximation to eigenvectors of the quantum computer's free (or effective) Hamiltonian. The presented algorithm falls into the class of variational quantum algorithms [18–25]. It returns a classical description, in the form of a tensor network, of an eigenvector of an operator found through an iterative classical-to-quantum optimization process.

We present a general framework to determine tensor networks using quantum processors. We focus on 1D, which enables several results related to the maximum amounts of entanglement required to demonstrate these methods. This analysis is followed by a discussion focused on applications of these techniques, and what might be required for a meaningful near-term experimental demonstration.

[email protected]
[email protected]
[email protected]; http://quantum.skoltech.ru

METHODS

The algorithm we propose solves the following problem: given black-box access to a unitary Q, find any eigenvector of Q.

We work in the standard mathematical setting of quantum computing. We define n qubits arranged on a line and fix the standard canonical (computational) basis. We consider the commutative Hermitian subalgebra generated by the n projectors

    P_i = |0\rangle\langle 0|_i    (1)

where the subscript i denotes the corresponding ith qubit acted on by P_i, with the remainder of the state space acted on trivially. These form our observables.

Rank is the maximum Schmidt number (the number of non-zero singular values) across any of the n − 1 step-wise partitions of the qubits on a line. Rank provides an upper bound on the bipartite entanglement that a quantum state can support—as will be seen, a rank-r state has at most k = log2(r) ebits of entanglement. The quantum algorithm we present works by finding a sequence of maximally k-ebit approximations, where the k'th approximation can be used to seed the (k+1)'th approximation.
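As a concrete illustration of this rank measure (our own sketch, not part of the original analysis; the function names and numerical tolerance are arbitrary choices), the Schmidt number across any step-wise cut of a pure state can be computed with a singular value decomposition:

import numpy as np

def schmidt_rank(psi, cut, n, tol=1e-12):
    # Schmidt number of an n-qubit pure state across the cut [0, cut) | [cut, n).
    mat = psi.reshape(2**cut, 2**(n - cut))       # bipartition written as a matrix
    s = np.linalg.svd(mat, compute_uv=False)      # Schmidt coefficients
    return int(np.sum(s > tol))

def max_rank(psi, n):
    # Maximum Schmidt number over the n-1 step-wise cuts; its log2 bounds the ebits.
    return max(schmidt_rank(psi, c, n) for c in range(1, n))

# Example: a two-qubit Bell state has rank 2, i.e. log2(2) = 1 ebit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert max_rank(bell, 2) == 2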
An ebit is the amount of entanglement contained in a maximally entangled two-qubit (Bell) state. A quantum state with q ebits of entanglement (quantified by any entanglement measure) has the same amount of entanglement (in that measure) as q Bell states. If a task requires l ebits, it can be done with l or more Bell states, but not with fewer. Maximally entangled states in

    \mathbb{C}^d \otimes \mathbb{C}^d    (2)

have log2(d) ebits of entanglement. The question is then how to upper bound the maximum amount of entanglement a given quantum computation can generate, provided a coarse graining to classify quantum algorithms in terms of both the circuit depth as well as the maximum ebits possible. For low-depth circuits, these arguments are surprisingly relevant.

We parameterize a circuit family generating matrix product states with θ, a real vector with entries in [0, 2π). We consider the action on the initial rank-1 state |0⟩ = |0⟩^⊗n and define two states

    |\psi(\theta)\rangle = U^\dagger(\theta)\, Q\, U(\theta)\, |0\rangle    (3)

and

    |\tilde{\psi}(\theta)\rangle = U(\theta)\, |0\rangle,    (4)

both of yet-to-be-specified rank.

We will construct an objective function (6) to minimize and hence recover our approximate eigenvector. The choice of this function provides a desirable degree of freedom to further tailor the algorithm to the particular quantum processor at hand. We choose

    p_i(\theta) = \langle \psi(\theta) | P_i | \psi(\theta) \rangle    (5)

and call

    L(\theta) = -\sum_{i=1}^{n} \ln p_i(\theta)    (6)

the negative log-likelihood of the n-point correlator

    \prod_{i=1}^{n} p_i(\theta).    (7)

The minimization of (6) corresponds to maximizing the probability of measuring each qubit in |0⟩. This minimization can be done using a variety of optimization and machine learning algorithms.
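For concreteness, the following statevector sketch (our own illustration; on hardware the p_i would instead be estimated from single-qubit measurement statistics, and the ansatz function shown here is an assumed interface returning the full 2^n × 2^n matrix U(θ)) evaluates p_i(θ) of Eq. (5) and the objective L(θ) of Eq. (6):

import numpy as np

def objective(theta, Q, ansatz, n):
    # L(theta) = -sum_i ln p_i(theta), with p_i = <psi|P_i|psi> and P_i = |0><0| on qubit i.
    U = ansatz(theta)                               # 2^n x 2^n unitary U(theta)
    zero = np.zeros(2**n, dtype=complex); zero[0] = 1.0
    psi = U.conj().T @ (Q @ (U @ zero))             # |psi(theta)> = U^dag Q U |0>
    probs = np.abs(psi)**2
    L = 0.0
    for i in range(n):
        # p_i: total probability that qubit i (taken as the i-th most significant bit) reads 0
        mask = np.array([((idx >> (n - 1 - i)) & 1) == 0 for idx in range(2**n)])
        p_i = probs[mask].sum()
        L -= np.log(p_i + 1e-15)                    # small offset guards against log(0)
    return L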
The table below summarizes the steps of the algorithm.

    Choose the maximum number of ebits k_max
    Choose the maximum number of optimization iterations n_it
    for k ← 1 to k_max do
        Construct the ansatz U_k corresponding to a k-ebit MPS
        Set θ_k randomly
        for j ← 1 to n_it do
            Evaluate p(θ_k)
            Evaluate L(p)
            Update θ_k using a classical optimizer
        end for
        Store L_k = L(p)
    end for
    return {θ_k}_{k=1}^{k_max}, {L_k}_{k=1}^{k_max}

ALGORITHM 1: Find successive tensor network approximations of an eigenvector of Q

The algorithm begins with rank-1 qubit states as

    |\tilde{\psi}(\theta)\rangle = \bigotimes_{i=1}^{n} \left( \cos\theta_1^i\, |0\rangle + e^{-\imath \theta_2^i} \sin\theta_1^i\, |1\rangle \right).    (8)

Minimization of the objective function (6) returns 2n real numbers describing a local matrix product state. Approximations of higher rank are made by utilizing the quantum circuit structure given in Figure 1.
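A minimal end-to-end sketch of Algorithm 1 (our own illustration, restricted for brevity to the rank-1 ansatz of Eq. (8); all names are illustrative, it reuses the objective function sketched above, and the product ansatz stands in for the k-ebit circuits of Figure 1 that a fuller implementation would construct) might read:

import numpy as np
from scipy.optimize import minimize

def product_ansatz(theta):
    # Rank-1 ansatz of Eq. (8): a tensor product of single-qubit rotations.
    n = len(theta) // 2
    U = np.array([[1.0 + 0j]])
    for i in range(n):
        t1, t2 = theta[2 * i], theta[2 * i + 1]
        u = np.array([[np.cos(t1), -np.exp(1j * t2) * np.sin(t1)],
                      [np.exp(-1j * t2) * np.sin(t1), np.cos(t1)]])
        U = np.kron(U, u)                           # u|0> = cos(t1)|0> + e^{-i t2} sin(t1)|1>
    return U

def find_eigenvector(Q, n, n_it=1000, seed=0):
    # Rank-1 stage of Algorithm 1: minimize L(theta) with BFGS, return theta and L.
    rng = np.random.default_rng(seed)
    theta0 = rng.uniform(0, 2 * np.pi, size=2 * n)
    res = minimize(lambda th: objective(th, Q, product_ansatz, n),
                   theta0, method="BFGS", options={"maxiter": n_it})
    return res.x, res.fun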
RESULTS

The algorithm works given only oracle access to a unitary Q. The spectrum of Q is necessarily contained on the complex unit circle, and so we note immediately that

    1 = \max_{\phi} |\langle \phi | Q | \phi \rangle|^2 \ge \max_{\theta} |\langle 0 | \psi(\theta) \rangle|^2 = \max_{\theta} |\langle \tilde{\psi}(\theta) | Q | \tilde{\psi}(\theta) \rangle|^2    (9)

where the maximum on the left-hand side is attained if and only if |φ⟩ is an eigenvector of Q. One advantage of the presented method is that it terminates when the measured objective reaches a given value; this implies that the system is in an eigenstate. Such a certificate is not directly established using other variational quantum algorithms.
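For illustration (our own addition), this certificate is easy to check in a statevector simulation: once the optimized overlap is within a chosen tolerance of 1, the prepared state is, numerically, an eigenvector of Q.

import numpy as np

def certificate(theta, Q, ansatz, n, tol=1e-6):
    # Return the overlap |<psi~|Q|psi~>|^2 of Eq. (9) and whether it certifies an eigenvector.
    zero = np.zeros(2**n, dtype=complex); zero[0] = 1.0
    psi = ansatz(theta) @ zero                      # |psi~(theta)> = U(theta)|0>
    overlap = abs(np.vdot(psi, Q @ psi))**2
    return overlap, overlap > 1.0 - tol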

FIG. 1: Example of a tensor network as a quantum circuit. (left) Quantum circuit realization of a matrix product state with
open boundary conditions. (right) Using standard graphical rewrite rules—or by manipulating equations—one readily
recovers the familiar matrix product state depiction as a ‘train of tensors’.

Importantly, the maximization over θ on the right-hand side of (9) corresponds to the minimization of the negative log-likelihood (6). We will then parameterize ψ̃(θ_k), where k denotes a k-ebit matrix product state of interest. Learning this matrix product state recovers an approximation to an eigenvector of Q. With a further promise on Q that all eigenvectors have a rank-p matrix product state representation, we conclude that r < p implies a fundamental error in our approximation. We consider then that the r'th singular value of the state takes the value ε. It then follows that the one-norm error scales with O(ε) and the two-norm error scales only with O(ε²). In general, we arrive at the monotonic sequence ordered by the following relation

    1 \ge \max_{\theta_{k+1}} |\langle \tilde{\psi}(\theta_{k+1}) | Q | \tilde{\psi}(\theta_{k+1}) \rangle|^2 \ge \max_{\theta_k} |\langle \tilde{\psi}(\theta_k) | Q | \tilde{\psi}(\theta_k) \rangle|^2    (10)

valid for k = 1 to ⌊n/2⌋ (the minimum to the maximum possible number of ebits).

Indeed, increasing the rank of the matrix product state approximation can improve the eigenvector approximation. Yet it should be noted that ground-state eigenvectors of physical systems are in many cases known to be well approximated with low-rank matrix product states [1–9]. This depends on further properties of Q and is a subject of intensive study in numerical methods, further motivating the quantum algorithm we present here. We will develop our algorithm agnostic to Q, leaving a more specific near-term demonstration (in which Q is implemented by e.g. a free Hamiltonian) to the Discussion. Generally we will express any |ψ̃(θ)⟩ as a matrix product state as

    |\tilde{\psi}(\theta)\rangle = \sum_{q,s,\ldots,n} A_q^{[\theta_q]} A_s^{[\theta_s]} \cdots A_n^{[\theta_n]} \, |q, s, \ldots, n\rangle.    (11)

In (11) the rank r of the representation is embedded into the realization of the A's. Quantum mechanics allows the deterministic generation of a class of isometries, where an isometry U that is also an endomorphism on a particular space is called unitary.
Matrix product states (11) are not isometries—though correlation functions are readily calculated from them. Furthermore, matrix product states can be deterministically generated by the uniform quantum circuit given in Figure 1. Other isometric structures of interest include trees and the so-called Multiscale Entanglement Renormalization Ansatz (MERA) networks [3, 26–28].

Consider then a rank-r approximation to an eigenvector of Q. The blocks in Figure 1 represent unitary maps. These circuits act on at most ⌈log2(r)⌉ qubits. Hence, each of these blocks has at most r² real degrees of freedom in [0, 2π). The general realization of these blocks using the typical basis of CNOT gates and arbitrary local unitaries can be done by a range of methods, see e.g. [29]. A commonly used theoretical lower bound requires

    \tfrac{1}{4}\left( r^2 - 3 \log_2 r - 1 \right)    (12)

CNOT gates; the method in [29] requires r² local qubit gates and does not reach this theoretical lower bound on CNOT gates. The total number of single-qubit and CNOT gates nevertheless scales as O(r²) for each block, where the number of blocks is bounded by n. Hence the implementation complexity scales as O(l · n · r²), where the optimization routine terminates after l steps (perhaps in a local minimum).
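These counts are straightforward to tabulate; the helper below (our own addition, simply restating Eq. (12) and the O(l · n · r²) estimate above) gives a feel for the scaling.

import math

def cnot_lower_bound(r):
    # Lower bound of Eq. (12) on the CNOT count for one block acting on log2(r) qubits.
    return (r**2 - 3 * math.log2(r) - 1) / 4

def implementation_cost(n, r, l):
    # Rough total gate count O(l * n * r^2): l optimizer steps, at most n blocks per circuit.
    return l * n * r**2

# Example: a two-qubit block (r = 4) needs at least ceil(2.25) = 3 CNOT gates.
print(math.ceil(cnot_lower_bound(4)), implementation_cost(n=6, r=4, l=1000))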
Instead of preparing |ψ̃(θ)⟩ by a quantum circuit with θ ∈ (0, 2π]^×l tunable parameters as

    |\tilde{\psi}(\theta)\rangle = \prod_{l} U_l \, |0\rangle^{\otimes n}    (13)

where U_l is adjusted by θ_l, one might adopt an alternative (heuristic) circuit realization performed by adjusting controllable Hamiltonian parameters realizing each block, subject again to minimization of (6).

With such an approach, one will prepare |ψ̃(θ)⟩ by tuning accessible time-dependent parameters (θ_k(t) corresponding to Hermitian A^k) as

    |\tilde{\psi}\rangle = \mathcal{T}\{ e^{-\imath \sum_k \theta_k(t) A^k} \} \, |0\rangle^{\otimes n}    (14)

where \mathcal{T} time-orders the sequence and the superscript k indexes the kth operator A^k. Provided these sequences are localized appropriately, the matrix product structure still remains.
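As a rough numerical illustration of Eq. (14) (our own sketch, assuming piecewise-constant pulses over a unit total time), the time-ordered exponential can be approximated by composing short evolution slices generated by the controllable operators A^k:

import numpy as np
from scipy.linalg import expm

def pulsed_state(schedules, generators, n, steps=100, total_time=1.0):
    # Approximate T{exp(-i sum_k theta_k(t) A^k)}|0...0> with piecewise-constant slices.
    # schedules[k] maps t -> theta_k(t); generators[k] is the Hermitian operator A^k.
    dt = total_time / steps
    psi = np.zeros(2**n, dtype=complex); psi[0] = 1.0
    for s in range(steps):
        t = (s + 0.5) * dt
        H_t = sum(th(t) * A for th, A in zip(schedules, generators))
        psi = expm(-1j * H_t * dt) @ psi            # one time-ordered slice
    return psi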
We then consider vertical partitions of a quantum circuit with the n qubits positioned horizontally on a line. For an m-depth quantum circuit (where m is presumably bounded above by a low-order polynomial in n), the maximum number of two-qubit gates crossed by a vertical partition is never more than m. The maximum number of ebits generated by a fully entangling two-qubit CNOT gate is never more than a single ebit. We then consider the (n − 1) partitions of the qubits; the maximal partition with respect to ebits is into two (ideally) equal halves, which is never more than ⌈n/2⌉. We then arrive at the general result that an m-depth quantum circuit on n qubits never generates more than

    \min\{ \lceil n/2 \rceil, m \}    (15)

ebits of entanglement. This immediately puts a lower bound of ∼ n/2 on the two-qubit gate depth required for Q to potentially drive a system into a state supporting the maximum possible ebits of entanglement.

In Figure 2 we demonstrate our algorithm on finding an eigenstate of a randomly generated 6-qubit unitary matrix. For minimizing the function (6), we used the Broyden–Fletcher–Goldfarb–Shanno (BFGS) minimization method [30]. For each k-ebit MPS, we place the k-layered hardware-efficient ansatz as the operators in the blocks [21]. The number of optimization iterations n_it is set to 1000.

FIG. 2: Algorithm demonstration on a randomly generated 6-qubit unitary Q: the value of (9) (upper), the overlap between the variational state |ψ̃(θ)⟩ and the closest eigenstate of Q (middle), and the von Neumann entropy of the subsystem of the first three qubits (lower). The vertical black solid lines correspond to k: the points between 1 and 75 iterations are obtained with a 1-ebit MPS, 76–1075 iterations with a 2-ebit MPS, and 1076–2075 iterations with a 3-ebit MPS.
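A toy version of this experiment (our illustrative setup, not the code behind Figure 2) can be assembled from the earlier sketches by drawing a Haar-random unitary and running the rank-1 stage:

import numpy as np
from scipy.stats import unitary_group

n = 6
Q = unitary_group.rvs(2**n, random_state=42)        # Haar-random 64 x 64 unitary

theta, L_final = find_eigenvector(Q, n, n_it=1000)
overlap, certified = certificate(theta, Q, product_ansatz, n)
print(f"objective {L_final:.4f}, overlap {overlap:.4f}, certified: {certified}")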
DISCUSSION

We now turn to the realization of Q and sketch a possible demonstration for a near-term device. Polynomial-time simulation of Hamiltonian evolution is well known to be BQP-hard. This provides an avenue for Q to represent a problem of significant computational interest, as simulating quantum evolution and quantum factoring are in BQP. We aim to bootstrap properties of the quantum processor as much as possible to reduce resources for a realization—see for example [21].

Let Q(t) be the one-parameter unitary group generated by H, where H represents a 3-SAT instance. Given access to an oracle computing

    \langle \tilde{\psi}(\theta_1) | H | \tilde{\psi}(\theta_1) \rangle,    (16)

we can minimize over all eigenvectors, which is NP-hard. Hence, finding even rank-1 states can be NP-hard. This provides a connection between our method and QAOA [31]. Similarly, we can also use this external minimization to connect our method to VQE [20]. However, our method provides a certificate that, on proper termination, the system is indeed in such a desired eigenstate.

When H is a general quantum Hamiltonian, minimization of

    \langle \tilde{\psi}(\theta_k) | H | \tilde{\psi}(\theta_k) \rangle    (17)

is, in turn, QMA-hard. For example, pairing our procedure with an additional procedure (quantum phase estimation) to minimize Q over all eigenvectors would hence provide rank-k variational states, and hence our methods provide a research direction which incorporates tensor network methods into works such as e.g. [19–21]. It should, however, be noted that phase estimation adds significant experimental difficulty compared with the algorithm presented here, and the algorithm is closer to VQE (with evident differences as listed above and in the main text).

For a near-term demonstration, we envision Q to be realized by bootstrapping the underlying physics of the system realizing Q, e.g. using the hardware-efficient ansatz [21]. For instance, one can realize Q as a modification of the system's free Hamiltonian using effective-Hamiltonian methods (modulating local gates). This greatly reduces practical requirements on Q.

The interaction graph of the Hamiltonian generating Q can be used to define a PEPS tensor network (as it will have the same structure as the layout of the chip; it will no longer have the contractable properties of matrix product states, yet it is still of interest) [4]. The algorithm works otherwise unchanged, but the circuit acts on this interaction graph (instead of a line) to create a corresponding tensor network state (a quantum circuit in the form of e.g. the variational ansatz). Tailored free evolution of the system Hamiltonian generates Q. Our algorithm returns a tensor network approximation of an eigenstate of Q.

The first interesting demonstrations of the quantum algorithm we have presented should realize rank-2 tensor networks (matrix product states), and the corresponding tensor network can be realized with a few hundred gates for a system with a few hundred qubits.

ACKNOWLEDGMENTS

A.K. and J.B. acknowledge support from agreement No. 014/20, Leading Research Center on Quantum Computing. A.U. acknowledges RFBR Project No. 19-31-90159 for financial support. This manuscript has been released as a preprint as arXiv:1804.02398 [32].

[1] J. Biamonte. Charged String Tensor Networks. Proceedings of the National Academy of Sciences, 114(10):2447, March 2017.
[2] R. Orús. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics, 349:117–158, October 2014.
[3] G. Vidal. Entanglement renormalization: an introduction. In Lincoln D. Carr, editor, Understanding Quantum Phase Transitions. Taylor & Francis, Boca Raton, 2010.
[4] F. Verstraete, V. Murg, and J. I. Cirac. Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Advances in Physics, 57:143–224, 2008.
[5] J. I. Cirac and F. Verstraete. Renormalization and tensor product states in spin chains and lattices. J. Phys. A: Math. Theor., 42(50):504004, 2009.
[6] U. Schollwöck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326:96–192, January 2011.
[7] R. Orús. Advances on tensor network theory: symmetries, fermions, entanglement, and holography. European Physical Journal B, 87:280, November 2014.
[8] J. Eisert. Entanglement and tensor network states. Modeling and Simulation, 3:520, August 2013.
[9] G. Evenbly and G. Vidal. Tensor Network States and Geometry. Journal of Statistical Physics, 145:891–918, November 2011.
[10] Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao, and Danilo P. Mandic. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Foundations and Trends in Machine Learning, 9(4-5):249–429, 2016.
[11] Andrzej Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, and Danilo P. Mandic. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Foundations and Trends in Machine Learning, 9(6):431–673, 2017.
[12] Stephen R. Clark. Unifying neural-network quantum states and correlator product states via tensor networks. Journal of Physics A: Mathematical and Theoretical, 51(13):135301, 2018.
[13] William Huggins, Piyush Patil, Bradley Mitchell, K. Birgitta Whaley, and E. Miles Stoudenmire. Towards quantum machine learning with tensor networks. Quantum Science and Technology, 4(2):024001, January 2019.
[14] Ding Liu, Shi-Ju Ran, Peter Wittek, Cheng Peng, Raul Blázquez García, Gang Su, and Maciej Lewenstein. Machine Learning by Unitary Tensor Network of Hierarchical Tree Structure. arXiv e-prints, arXiv:1710.04833, October 2017.
[15] Jacob D. Biamonte, Mauro E. S. Morales, and Dax Enshan Koh. Entanglement scaling in quantum advantage benchmarks. Physical Review A, 101(1), January 2020.
[16] J. Biamonte, V. Bergholm, and M. Lanzagorta. Tensor network methods for invariant theory. Journal of Physics A: Mathematical and General, 46:475301, November 2013.
[17] Mauro E. S. Morales, Timur Tlyachev, and Jacob Biamonte. Variational learning of Grover's quantum search algorithm. Physical Review A, 98(6), December 2018.
[18] M. H. Yung, J. Casanova, A. Mezzacapo, J. McClean, L. Lamata, A. Aspuru-Guzik, and E. Solano. From transistor to trapped-ion computers for quantum chemistry. Scientific Reports, 4:3589, January 2014.
[19] Jarrod McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18:023023, 2016.
[20] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien. A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5:4213, July 2014.
[21] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 549:242–246, September 2017.
[22] Jacob Biamonte. Universal Variational Quantum Computation. arXiv e-prints, arXiv:1903.04500, March 2019.
[23] Rongxin Xia and Sabre Kais. Quantum machine learning for electronic structure calculations. Nature Communications, 9(1), October 2018.
[24] Edward Grant, Marcello Benedetti, Shuxiang Cao, Andrew Hallam, Joshua Lockhart, Vid Stojevic, Andrew G. Green, and Simone Severini. Hierarchical quantum classifiers. npj Quantum Information, 4:65, December 2018.
[25] V. Akshay, H. Philathong, M. E. S. Morales, and J. D. Biamonte. Reachability deficits in quantum approximate optimization. Physical Review Letters, 124(9), March 2020.
[26] G. Vidal. Entanglement renormalization. Physical Review Letters, 99(22):220405, November 2007.
[27] V. Giovannetti, S. Montangero, and R. Fazio. Quantum Multiscale Entanglement Renormalization Ansatz Channels. Physical Review Letters, 101(18):180503, October 2008.
[28] G. Vidal. Class of quantum many-body states that can be efficiently simulated. Physical Review Letters, 101(11):110501, September 2008.
[29] M. Möttönen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa. Quantum Circuits for General Multiqubit Gates. Physical Review Letters, 93(13):130502, September 2004.
[30] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, 2006.
[31] E. Farhi, J. Goldstone, and S. Gutmann. A Quantum Approximate Optimization Algorithm. arXiv e-prints, November 2014.
[32] Jacob Biamonte, Andrey Kardashin, and Alexey Uvarov. Quantum machine learning tensor network states, 2018. Present manuscript preprint.
