
t|ket⟩: a retargetable compiler for NISQ devices

Seyon Sivarajah, Silas Dilkes, Alexander Cowtan, Will Simmons, Alec Edgington and Ross Duncan

Published 6 November 2020 © 2020 IOP Publishing Ltd
Focus on Quantum Software. Citation: Seyon Sivarajah et al 2021 Quantum Sci. Technol. 6 014003. DOI: 10.1088/2058-9565/ab8e92

Abstract

We present t|ket⟩, a quantum software development platform produced by Cambridge Quantum Computing Ltd. The heart of t|ket⟩ is a language-agnostic optimising compiler designed to generate code for a variety of NISQ devices, which has several features designed to minimise the influence of device error. The compiler has been extensively benchmarked and outperforms most competitors in terms of circuit optimisation and qubit routing.


1. Introduction

Quantum computing devices promise significant advantages for a wide variety of information processing tasks [1–3]. For some tasks, notably the simulation of condensed-matter physics, the abstract structure of the problem may be sufficiently similar to the physical structure of the device that translation from one to the other is natural and (relatively) straightforward [4]. However, for most problems, and most quantum computers, this is not the case. Quantum algorithms are often described in terms that facilitate proving correctness or deriving asymptotic complexity estimates, without reference to a specific computing device on which to execute them. The translation from a high-level description of the algorithm to a machine-specific sequence of physical operations is called compilation, and is essential to realising the supposed computational advantage of quantum algorithms.

In computer science, the term compiler was introduced by Hopper in the early 1950s [5], and originally referred to a routine which 'compiled' a desired program from pre-existing pieces. Today the term denotes a program that translates a human-readable programming language into the binary language of the machine that will execute it. Early compilers produced code that was grossly inefficient, compared to what an average human programmer could write; however, today's sophisticated optimising compilers reliably generate code that runs more quickly and uses less memory than even the best human programmer could manage [6]. The steady improvement of compiler technology has, in turn, enabled programming languages to increase in power and sophistication, increasing the conceptual distance between the programmer and the executable machine language.

By comparison, almost all programming systems available for quantum computing are conceptually primitive, remaining extremely close to the basic quantum circuit model [7]. Although higher-level application-oriented toolkits are becoming available [8, 9], the programmer must usually describe the algorithm to be run in terms of basic unitary gates. On the other hand, quantum computing hardware displays great diversity. Superconducting and ion-trap-based quantum processors are now available from multiple commercial companies [10–14], while other technologies such as photonics are not far behind [15, 16]. Different underlying technologies have very different performance parameters and trade-offs, and even broadly similar devices may differ in what basic operations are available. Even in the context of the simple, circuit-centric, programming model, the requirement to translate an abstract circuit into something suitable for the chosen device creates the need for a compiler. Naive approaches to this translation can significantly increase the size of the circuit; therefore the other major task for a quantum compiler is circuit optimisation, to minimise the resources required by the program.

Circuit optimisation is especially pertinent on so-called noisy intermediate-scale quantum (NISQ) devices. Preskill [17] defines an NISQ device as having a memory size of 50–100 qubits, and sufficient gate fidelity to carry out around 1000 two-qubit operations with tolerable error rates. We will adopt a wider definition: an NISQ device is any quantum computer for which general-purpose quantum error correction [18] is not feasible and hardware errors are expected. Because of these ineradicable errors, mere qubit count is a poor measure of the capability of NISQ devices. The longer the computation runs, the more noise builds up.

NISQ devices therefore impose strict limitations both on the number of qubits available to algorithms and on the maximum circuit depth that can be achieved. Aside from the obvious requirement to use this limited hardware budget in the most efficient manner possible, the noisiness of NISQ machines introduces further complications. Since many common textbook algorithms such as quantum phase estimation 4 are not feasible in the available circuit depth, hybrid algorithms such as the variational quantum eigensolver (VQE) [20] and the quantum approximate optimisation algorithm (QAOA) [21] have been proposed instead. While the circuit depths required by these algorithms are more favourable to NISQ devices, they are based on repeatedly executing circuits inside a classical optimisation loop, where both the rate of convergence and the accuracy of the final result can be adversely affected by device noise. In consequence, any compiler for NISQ devices should aim to maximise the overall fidelity of the computation. Minimising the number of operations helps, but other techniques may be employed [22–25].

This paper describes t|ket⟩, a compiler system for NISQ devices that aims to achieve these objectives. The core of t|ket⟩ is a flexible optimising compiler which supports multiple programming frameworks, and multiple quantum devices. It is specifically designed for NISQ devices, and includes features that minimise the influence of device errors on computation. As we demonstrate in section 9, t|ket⟩'s optimisation and qubit mapping routines reliably outperform other compilers. The system also includes runtime management features to facilitate the variational algorithms typical of NISQ devices.

1.1. NISQ devices and their software

Before addressing the t|ket⟩ system, we consider a schematic variational algorithm in the context of the system architecture of an idealised NISQ device. For purposes of illustration, a toy VQE algorithm is shown in figure 1.

Figure 1. The typical structure of a variational quantum algorithm.

The first point to note in this example is that the central 'execute $\left(C\left(\overline{\theta }\right);m\right)$' subroutine is the only part that runs on the quantum device: the other subroutines and the main loop are classical. The subroutines 'EstimatorFunction' and 'ClassicalOptimiser' are used repeatedly inside the main loop—the characteristic of a hybrid algorithm—and their outputs are used in the next quantum execution. The first two subroutines, 'GenerateParameterisedCircuit' and 'GenerateListOfPauliMeasurements', are tasks that are usually considered part of the compiler, but observe that inside the main loop a fresh quantum circuit must be built using the parameterised circuit C(⋅), the measurement m, and the new parameters $\overline{\theta }$. How does an algorithm like this map onto a realistic quantum computer system?

While it is common to talk about a 'quantum computer' as a unified device, in practice it consists of multiple subsystems, each of which is a computer in its own right. Running a quantum algorithm therefore involves a large number of software components in a mixture of runtime environments, with very different performance demands. Figure 2 displays a realistic architecture for an NISQ computer. The lowest level comprises the programmable devices which drive the evolution of the qubits and read out their states. An example of this kind of device is an arbitrary waveform generator, as found in many superconducting architectures. The microwave pulse sequences output by these devices are generated by simple low-level programs optimised for speed of execution. These devices, and the real-time controller which synchronises them, operate in a hard real-time environment where the computation takes place on the time-scale of the coherence time of the qubits. These components combine to execute a single instance of a quantum circuit, possibly with some classical control. By analogy with GPU computing, we refer to this layer as a kernel.

Figure 2. Idealised system architecture for an NISQ computer.

One level higher, the scheduler is responsible for dispatching circuits to be run and packaging the results for the higher layers. It is also likely to be heavily involved in the device calibration process. (Calibration data are an important input to the compiler.) This layer and those below may be thought of as the low-level system software of the quantum computer, and must normally be physically close to the device. In the layer above we find service-oriented middleware, principally the task manager, which may distribute jobs to different quantum devices or simulators, GPUs, and perhaps conventional HPC resources, to perform the various subroutines of the quantum algorithm. This layer may also allocate access to the quantum system among multiple users. Finally, at the highest level is the user runtime, which defines the overall algorithm and integrates the results of the subcomputations to produce the final answer.

With this picture in mind, we see that the path from a high-level program describing a quantum algorithm to its final result involves many stages of decomposition and compilation in order to run in this heterogeneous environment. In practice, some of these stages may be amalgamated or absent. In this paper we will focus on the generation of the kernel, since this is the indispensable part of the process, and can be (to some extent) decoupled both from the high-level architecture and the low-level system-specific parts.

The picture is further complicated when considering quantum computers capable of error correction. The 'logical' kernel must be translated to an encoded equivalent; subroutines to perform gate synthesis must be added; and error detection and correction stages must be interleaved in the main algorithm. However, every part of the NISQ process is also required in the error-corrected case, so we focus on compilation in the NISQ context.

1.2. Related work

The last few years have seen an explosion of interest in quantum programming languages, and the problems of quantum compilation have been explored at various levels of abstraction [26], from high-level algorithm design to pulse control at the machine level.

Several languages have been developed for quantum programming. Quipper [27] is a functional language for quantum circuits embedded in Haskell. The ScaffCC compiler [28], based on the LLVM framework, compiles an extension of C, and can be configured for routing to specific architectures. Q# [29] is a hybrid classical–quantum language designed to facilitate the development of programs that can be run on a simulator (and eventually on actual hardware). Strawberry Fields [30] is a Python-based quantum programming framework that is based on the 'continuous variable' (CV) model of computation.

Several other compilation systems have been developed as Python modules targeting specific hardware. These include the Forest SDK/pyQuil [31] (for Rigetti backends), Qiskit [32] and ProjectQ [33] (for IBM backends), and Cirq [34] (for Google backends). Other projects have adopted a backend-agnostic approach. XACC [35] is a quantum programming framework that can target several different backends as plug-ins. TriQ [36] uses ScaffCC to compile quantum software for several different architectures in order to study their performance characteristics. Even for the full-stack systems, the compiler element (e.g. Quilc [37] from the Forest SDK or the transpiler passes in Qiskit Terra) can be invoked to compile for arbitrary devices.

A range of gate-level circuit-optimisation techniques have been explored, including the use of phase polynomials [38] and constraint programming [39]. There are also promising results for using information on the noise characteristics and fidelities of the target device to assist compilation [40–42]. Meanwhile at the level of machine control there have been efforts to optimise the implementation of variational algorithms using automatic differentiation and interleaving compilation with execution [43, 44].

1.3. Synopsis

Here we give an overview of the t|ket⟩ system. Subsequent sections give detailed descriptions of the modular front- and back-ends, the intermediate representation used, the transform system, some of the optimisation methods, and the system of qubit placement and routing. Section 9 provides comparative benchmark results for the performance of the optimiser and qubit allocation engine. The benchmark set of circuits is available for download.

1.4. How to get t|ket⟩

While the core of t|ket⟩ is a highly optimised C++ library, the system is available as the Python module pytket, which provides the programming interface and interoperability with other systems. It can be installed on Linux and MacOS using the command:

pip install pytket

and the documentation is available online at: https://cqcl.github.io/pytket/. Figure 3 shows the components that are installed by this command.

Figure 3. Components of t|ket⟩.

To interface with other software packages, and to use back-ends that depend on external software, the user must also install plug-in packages. At the time of writing, the available plug-ins are: pytket_qiskit, pytket_cirq, pytket_pyquil, pytket_projectq, and pytket_pyzx. All of these can be installed using pip in the same manner as the core package. The pytket module is free for non-commercial use. We encourage the reader to try it out for themselves!

2. System overview

The t|ket⟩ system consists of two main components: a powerful optimising compiler written in C++, and a lightweight user interface and runtime system written in Python. This Python layer allows the user to define circuits and invoke compiler functions, while the runtime environment marshals and dispatches kernels for execution, and provides convenience methods for defining variational loops, updating parameters, and collating statistics across circuit evaluations. Optional Python extensions provide interfaces to third-party quantum software systems. The overall structure is illustrated in figure 3.

In the classical setting, a compiler translates a human-readable programming language into machine-executable object code. This process can be divided into three stages: a front-end, which handles lexing, parsing, semantic analysis, and other tasks which depend on the source language; a back-end, which allocates registers and generates suitable instruction sequences in the target machine language; and an intermediate stage, which performs data and control-flow analysis on an intermediate representation (IR) of the program, which is independent of both the source and the target languages. Modern compiler systems, such as LLVM [45], use a standard IR to decouple these three stages, making it relatively simple to add support for a new programming language or machine architecture to an existing compiler framework.

t|ket⟩ was designed from the ground up to be retargetable, meaning that it can generate code for many different quantum devices, and language agnostic, meaning that it accepts input from most of the major quantum software platforms. For this reason, its overall structure, shown in figure 4, follows the same basic pattern as the LLVM. A variety of lightweight front-end units translate the desired input language into the t|ket⟩ IR. This internal representation is a generalisation of the usual language of quantum circuits based on hierarchical non-planar maps; this is described in more detail in section 4. Standard quantum circuits are easily embedded into the t|ket⟩ IR—a fact which eases the task of adding new front-ends—but many node types that are not unitary gates are also available.

Figure 4. Modular front-ends and back-ends for t|ket⟩.

Once the input has been translated to the IR, the central circuit transformation engine can begin its work. The transformation engine performs a user-configurable sequence of rewrites of the IR; some examples are described in section 6. Typically this proceeds in two phases: an architecture-independent optimisation phase, which aims to reduce the size and complexity of the circuit; and an architecture-dependent phase, which prepares the circuit for execution on the target machine. This phase itself decomposes into a rebase, which maps the gates present in the circuit into those supported by the device, and a qubit mapping phase. The mapping phase is necessary to ensure that all qubits that are required to interact during the program are physically able to do so; this typically increases the size of the circuit, since most devices do have restricted interactions between their qubits. This is described in detail in section 7.

The end product of this process is a kernel: a circuit that can be executed on the chosen target device. The kernel may then be scheduled for execution by the runtime environment, or simply saved for later.

In keeping with its focus on NISQ devices, the design of t|ket⟩ is minimalistic compared to the schema proposed by Häner et al [26]. There is no error correction, and t|ket⟩ does not include a linker, preferring to rely on application programming frameworks such as CQC's Eumen or IBM's Qiskit Aqua to provide libraries of common routines. The lowest layer of compilation—translation of kernels to control signals for the lasers, microwave generators, and so on—is left to the hardware implementor. However, since t|ket⟩ takes into account the device's architectural constraints during the compilation phase, this last stage of translation can be minimal.

Thanks to its retargetability, t|ket⟩ can be used as a cross-compiler: source programs produced from any supported front-end can be compiled to run on hardware produced by any vendor.

3. Front-ends and back-ends

The pytket interface can be used directly to build quantum circuits from individual gates in the standard way. While this may be acceptable for small experiments, more powerful high-level tools are preferable for larger or more complex tasks. For this reason t|ket⟩ sports a range of lightweight front-end modules for different quantum programming systems. The industry-standard OpenQASM [46] and the functional language Quipper [27] are supported via direct source-file input. Python converters provide support for IBM's Qiskit [32], Google's Cirq [34] and Rigetti's pyQuil [31], as well as the independent open-source projects ProjectQ [33] and PyZX [47]. These Python libraries in turn support higher-level application programming frameworks such as OpenFermion [48] or Rigetti Grove. Support for Q# [29] is planned for the next release.
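As an illustration, a circuit can be built gate by gate or read from an OpenQASM source file; the snippet below is a minimal sketch assuming a recent pytket release (the converters for Qiskit, Cirq and the other systems live in separate extension packages, and the QASM filename here is hypothetical).

```python
from pytket import Circuit
from pytket.qasm import circuit_from_qasm

# build a small circuit directly with the pytket interface
bell = Circuit(2)
bell.H(0)
bell.CX(0, 1)
bell.measure_all()

# or read an existing OpenQASM file (path is hypothetical)
circ = circuit_from_qasm("my_algorithm.qasm")
```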

Similarly, t|ket⟩ offers multiple back-ends, each supporting a different quantum hardware platform or classical simulator. Supporting a given platform involves two tasks: first, generating a circuit that respects the constraints of the hardware or simulator (generally, connectivity and primitive-gate limitations); second, dispatching the kernel for execution and collating the results. The first of these tasks is handled by the system described in section 5. Each back-end class provides a default compiler pass, which guarantees that a compiled circuit will respect the relevant constraints.

t|ket⟩ attempts to provide a uniform interface across the various back-end platforms, so that a user can easily change back-ends for an experiment without changing anything else in their code. At the time of writing, t|ket⟩ supports all IBM Q and Rigetti devices via their online access services, and experimental devices produced by Honeywell Quantum Systems, Oxford Quantum Circuits and the University of Maryland. In addition, t|ket⟩ can use the ProjectQ, IBM Aer and Rigetti QVM simulators. Various other machines are supported indirectly using either QASM output or t|ket⟩'s integration with Qiskit and Cirq. Figure 5 shows an example of front-end input followed by circuit compilation, execution and result retrieval via a back-end. For back-ends that support it, circuit submission and job retrieval can be performed separately, allowing asynchronous execution of the quantum circuit.

Figure 5. Code example showing front-end and back-end use. A circuit is read in from a QASM file; operations are appended to it using the pytket interface; the circuit is compiled to satisfy the constraints of a back-end, and then executed. The IBMQBackend class is included in the pytket_qiskit extension package.
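Since the code in figure 5 is not reproduced here, the following is a minimal sketch of the same workflow written against a recent pytket release; class locations and method names (such as get_compiled_circuit and process_circuit) have changed across versions, so treat them as assumptions. A simulator backend stands in for IBMQBackend, and the QASM filename is hypothetical.

```python
from pytket.qasm import circuit_from_qasm
from pytket.extensions.qiskit import AerBackend  # simulator stand-in for a hardware backend

circ = circuit_from_qasm("example.qasm")   # read in a circuit from a QASM file
circ.CX(0, 1)                              # append further operations via pytket

backend = AerBackend()
compiled = backend.get_compiled_circuit(circ)             # backend's default compiler pass
handle = backend.process_circuit(compiled, n_shots=100)   # asynchronous submission
result = backend.get_result(handle)                       # retrieve when execution completes
print(result.get_counts())
```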

Utility functions are also provided for postprocessing of results, such as calculation of expectation values. Generic mitigation of classical state-preparation-and-measurement (SPAM) [49] errors across back-ends is scheduled for release in 2020.

4. Representing circuits

The standard intermediate representation (IR) in t|ket⟩ is the circuit. A circuit is a labeled directed acyclic graph (DAG) with some additional structure. Vertices in the DAG correspond to operations, usually quantum or classical logic gates, but also boxes, a kind of opaque container which we will define later, and certain compiler-internal meta-operations. Edges in the DAG track the flow of computational resources from operation to operation. Typically, these resources are qubits, and the operations are unitary gates. Since many operations do not act symmetrically on their inputs, we add port labels for the incoming and outgoing edges at each vertex to distinguish between, for example, the control and target qubits of a CX gate. Each input port of a quantum operation is paired with an output port, allowing the path of a particular resource unit to be traced through the circuit, as shown in figure 6.
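For example, the DAG can be inspected through pytket's command interface, which lists each operation together with the resource units it acts on, in an order compatible with the DAG (a sketch using the current pytket API):

```python
from pytket import Circuit

circ = Circuit(2, 2)
circ.H(0)
circ.CX(0, 1)
circ.Measure(0, 0)
circ.Measure(1, 1)

# get_commands() returns the operations in a topological order of the DAG;
# each command records its operation type and the qubits/bits it uses
for cmd in circ.get_commands():
    print(cmd.op.type, cmd.args)
```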

Figure 6. t|ket⟩ circuit internal representation. Pairs of values labeling edges correspond to port numbers at the source and target vertices. Note that the input and output label numbers are even and odd respectively, so that qubit 0 corresponds to the path from 'Input, 0' to 'Output, 1', qubit 1 is the path from 'Input, 2' to 'Output, 3', and so on.

For qubits, this linear resource management is justified by the no-cloning [50] and no-deleting [51] theorems. However, for classical operations, this effectively means that output values must overwrite their previous value. This treatment of classical bits is an artefact of the simplified classical computational model of QASM [46] and devices that have adopted it, requiring explicit allocation of classical registers (both for classical input to execution and for result retrieval) and forbidding dynamic allocation of scratch space.

At the input and output boundaries of a circuit, the resource units—which we may identify with storage locations—are partitioned into registers. A circuit can contain arbitrarily many registers, and resource units are represented within a register by identifiers that are unique within the circuit. The registers specify an ordering of the resource units 5 . This ordering allows circuits to be composed sequentially and in parallel, and acts as a set of 'port labels' for entire circuits, just as individual operations have ports within a circuit. This means that the process of composing circuits is identical to the composition of individual operations.

4.1. Gate types

t|ket⟩ allows a wide array of logic gates, covering the native gates of the platforms that it can interface with. An overview of the kinds of supported gates is given in table 1. The most common quantum gates are one- and two-qubit gates, reflecting the native gates on physical superconducting and ion-trap hardware, but some gates with arbitrary quantum controls are allowed; these must eventually be decomposed to hardware-native gates by the transform engine. All quantum gates in t|ket⟩ can have arbitrary classical control, and primitive classical logic gates are supported. However, adding classical control to gates can limit the ability of the rewrite engine to optimise the circuit. An enumeration of all the allowed operation types in t|ket⟩ can be found in the documentation at https://cqcl.github.io/pytket/build/html/optype.html.

Table 1. Classes of operations available for circuits in t|ket⟩.

Class of operation                 Example
Basic single-qubit gate            Hadamard
Parameterized single-qubit gate    Rz(α)
Basic two-qubit gate               CX
Parameterized two-qubit gate       CRz(α)
Basic multi-qubit gate             CⁿX
Parameterized multi-qubit gate     CⁿRy(α)
Classical output gate              Measure
Meta-operation                     Barrier
Hierarchical vertex                CircBox

Boxes are a special class of operations in t|ket⟩. A box vertex is a container which encapsulates a whole circuit. In figure 7, the circuit from figure 6 is put into a box within another circuit. Boxes allow front-ends to take in high-level descriptions containing subroutines. As this subcircuit can also contain box vertices, a single circuit can contain a hierarchy of arbitrary rank. The hierarchy must be decomposed at the kernel generation stage, but this decomposition is trivial because of the compositional structure whereby circuits are equivalent to individual operations. As well as explicit circuits, box vertices can also contain other representations, which can be useful for optimising certain classes of quantum circuits. Because boxes are opaque, the parent circuit is undisturbed by the optimisation procedure acting on the subroutine. The next release of t|ket⟩ will include such optimisations acting directly on boxes.
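For illustration, a subroutine can be wrapped in a CircBox and inserted into a parent circuit; the sketch below uses the current pytket API, with the DecomposeBoxes pass flattening the hierarchy as would happen at kernel generation.

```python
from pytket import Circuit
from pytket.circuit import CircBox
from pytket.passes import DecomposeBoxes

sub = Circuit(2)          # a subroutine circuit (e.g. the circuit from figure 6)
sub.H(0)
sub.CX(0, 1)
box = CircBox(sub)        # wrap the subcircuit as an opaque box vertex

main = Circuit(3)
main.add_circbox(box, [0, 2])   # place the box on qubits 0 and 2 of the parent circuit

DecomposeBoxes().apply(main)    # expand all boxes into primitive gates
```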

Figure 7. t|ket⟩ circuit with a box containing the circuit from figure 6. A parameterised controlled-Rz gate is also shown, with a symbolic parameter.

4.2. Gate parameters

Unlike large-scale, fault-tolerant quantum computers, NISQ devices generally allow arbitrary angles on parameterised gates. Accordingly, t|ket⟩ allows arbitrary angles on all parameterised gates, up to IEEE 754 double-precision [53].

In section 1 we briefly described the variational hybrid quantum–classical algorithms proposed for NISQ devices. To enable the efficient compilation of this class of algorithms, t|ket⟩ supports symbolic parameters. This allows the compilation of a parameterised circuit corresponding to an entire variational algorithm without requiring repeated compilation from scratch at each iteration of the classical optimiser. The circuit in figure 7 contains a parameterised controlled-Rz gate with a symbolic parameter. This class of circuits is handled using partial compilation: the circuit is precompiled with unknown, symbolic parameters using an expressive symbolic manipulation library. The result can be used as a template circuit and, after parameter values at a given iteration have been substituted, further simple circuit rewriting can be performed before the resulting kernel is sent to a backend to be run. This minimises the computation required between iterations of the classical optimiser, reducing the overall runtime of a variational algorithm while still using the rewrite engine of t|ket⟩ to minimise the resource costs of the circuits. The implementation of the circuit class uses local adjacency lists at each vertex to allow near constant-time edge and vertex insertion and removal.
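A minimal sketch of symbolic compilation with pytket (angles are given in half-turns, and sympy provides the symbolic expressions):

```python
from sympy import Symbol
from pytket import Circuit
from pytket.passes import RemoveRedundancies

alpha = Symbol("alpha")

circ = Circuit(2)
circ.Rz(alpha, 0)      # parameterised gate with a symbolic angle
circ.CX(0, 1)
circ.Rz(-alpha, 0)

RemoveRedundancies().apply(circ)   # optimise once, with the symbol left free

# at each iteration of the classical optimiser, substitute concrete values
concrete = circ.copy()
concrete.symbol_substitution({alpha: 0.25})
```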

5. The t|ket⟩ transform system

In general, a quantum algorithm can be expressed in multiple ways using a given gate set; the goal is to express it in a way that minimises the gate count and circuit depth. The field of circuit optimisation is well developed, with a variety of optimisation strategies employed for different algorithms and target hardware devices [38, 54, 55]. Most commonly, a circuit can be rewritten using unitary equality between circuits, where a resource-inefficient subcircuit can be found and replaced with a closer-to-optimal one 6 . The core of t|ket⟩ is a high-performance circuit rewriting engine, referred to as the transform system. A function that performs rewrites using this system is called a transform pass. Circuit optimisation in t|ket⟩ is described in more detail in section 6. Aside from optimisation, the transform system has an essential role in generating circuits that are executable on the target hardware.

Each backend that t|ket⟩ can target has associated with it a series of properties that any valid circuit must satisfy. This will include, as a minimum, the set of supported gates; for many architectures it will also include a graph representing the connectivity between the qubits. Mapping logical qubits to physical qubits also requires rewriting the circuit so that the interactions between qubits correspond only to edges in the associated connectivity graph. The transform pass that performs this rewriting is described in section 7. Other properties may also be required, depending on the platform, and these also require transform passes.

Transform passes are composed sequentially; the resulting function is also a transform pass. For instance, a typical compiler flow will consist first of some optimisation on the circuit that has no regard for connectivity graph or gate set, followed by passes that bring the circuit closer to satisfying all of the constraints. Only if a circuit satisfies all these properties can it be executed on the target hardware.

To document and constrain the composition of transforms, the t|ket⟩ transform engine implements a simple expression language, which follows the same principles as 'contracts' in object-oriented programming [57].

The functions that verify that properties are satisfied are called predicates. Each predicate is a function from a circuit to a Boolean value: true if the circuit satisfies the corresponding property and false otherwise. These functions can incorporate some external information about the target hardware, such as connectivity graph and desired gate set; when external information is required the predicates are generated by higher-order functions. For example, to verify that the connectivity graph of a specific architecture is satisfied, a higher-order function will take in a connectivity graph and return the corresponding predicate. The full list of allowed predicates is documented at https://cqcl.github.io/pytket/build/html/predicates.html.
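For example, a gate-set predicate can be constructed from the device's supported gate set and checked directly (a sketch using pytket's predicates module):

```python
from pytket import Circuit
from pytket.circuit import OpType
from pytket.predicates import GateSetPredicate

# predicate generated from external information: the device's supported gate set
pred = GateSetPredicate({OpType.CX, OpType.Rz, OpType.H})

circ = Circuit(2)
circ.H(0)
circ.CX(0, 1)
print(pred.verify(circ))   # True: every gate in the circuit is in the allowed set

circ.CZ(0, 1)
print(pred.verify(circ))   # False: CZ is not in the allowed set
```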

Each transform pass has a precondition and a postcondition, so that the resulting compiler pass is a Hoare triple. This is illustrated in figure 8(a). The compiler pass may be used on a circuit that satisfies the precondition, and will guarantee that afterwards the circuit satisfies the postcondition. Both the precondition and the postcondition are sets of predicates. For example, a peephole optimisation may require that the circuit be presented in a certain gate set before it can be applied. This gate-set predicate forms the precondition of the pass. The optimisation can then guarantee to the user that the rewrite rule will return the circuit in a different gate set; this guarantee is the postcondition.

Figure 8. (a) A compiler pass is a transform pass with associated pre- and post-conditions. (b) Composition of compiler passes. The resulting Hoare triple is the standard sequential execution schema for two programs.

These Hoare triples may then be composed, so that a custom rewriting sequence can be generated, as shown in figure 8(b). If the triples are correctly matched, so that no intermediate conditions are conflicting, the custom sequence is valid.

More sophisticated combinators, such as loops, can be useful for optimisation passes. For example, a user may wish to continue applying a sequence of rewrite rules until no further rewrites are possible. These combinators may be composed in the same way as sequences. When looping combinators are used, termination of the resulting pass is not guaranteed.

The full list of compiler passes and combinators can be found at https://cqcl.github.io/pytket/build/html/passes.html.

6. Circuit optimisation methods

With the limited fidelity available on NISQ devices, effective circuit optimisation is essential in order to extract all available performance out of the machines. The goal is to identify equivalent circuits that will accumulate less noise when run on a real device.

Circuit optimisations in t|ket⟩ are provided as compiler passes, which can be composed into larger routines. High performance is obtained by optimising at each stage in the compilation pipeline, so it is beneficial to have both powerful optimisations that can yield better results when not constrained by qubit connectivity or gate set and procedures targeted at specific architectures. t|ket⟩ contains some methods that are architecture-agnostic and others that are architecture-aware (parametrised over the properties of the device). Many of the architecture-agnostic passes will additionally preserve any connectivity already satisfied by the inputs, allowing them to be applied after routing. Designing optimisations in this way provides retargetability without sacrificing performance.

6.1. Circuit metrics

Attempting to use the actual fidelity as a cost function would require accurate simulation of the quantum circuit with realistic noise models, which is both computationally expensive and highly dependent on a specific target architecture. Further, because real devices have noise sources that are complex and hard to characterise, simple extrapolation from single-gate performance can significantly overestimate the actual performance of the device, necessitating more sophisticated, holistic measures [58–60]. However, simpler metrics can give good, device-independent approximations to noise.

Naively optimising for gate count acknowledges the key fact that all gates will introduce some degree of noise. However, NISQ devices tend to provide fast, high-fidelity single-qubit rotations, with the error rates of multi-qubit operations being an order of magnitude worse [61]. The primary focus for most optimisations in t|ket⟩ is to minimise the two-qubit gate count, which penalises the use of these slower and less accurate operations.

Definition 6.1. The two-qubit gate count of a circuit is the number of maximally-entangling two-qubit gates used in the circuit.

This is often referred to as CX-count, since any other maximally-entangling two-qubit operation (such as CZ or ${\mathrm{e}}^{\mathrm{i}XX\pi /4}$) is equivalent to a single CX up to local unitaries. This is analogous to the T-count metric used at the error-correcting level.

Omitting single-qubit gates entirely from consideration improves device-independence, since the number of gates required varies significantly with the gate set (for example, a single U3 gate from the IBM specification can capture any rotation, while up to three are needed if decomposed into the underlying Rz and Ry gates).

The short coherence times of qubits strongly correlate the fidelity of a circuit with the time taken to execute it. An ideal device will be able to parallelise gates acting on disjoint qubits to mitigate this. We can obtain a good approximation to the time taken on such a device by considering the depth of the circuit.

Definition 6.2. For a gate g with predecessors P(g), we define depth(g) by:

$$\mathrm{depth}\left(g\right)=\begin{cases}1 & \text{if}\ P\left(g\right)=\varnothing \\ 1+\max_{{g}^{\prime }\in P\left(g\right)}\mathrm{depth}\left({g}^{\prime }\right) & \text{otherwise.}\end{cases}$$

The depth of a circuit is the maximum value of depth(g) over all gates g. For any gate type G, the G-depth of the circuit is obtained by considering only the contribution from G gates.

Again, given the characteristics of multi-qubit operations on current hardware, CX-depth (or depth with respect to any other maximal two-qubit gate) gives a device-agnostic estimate of the time cost of a circuit.
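These metrics can be read off a circuit directly; a small sketch using pytket's built-in accessors:

```python
from pytket import Circuit
from pytket.circuit import OpType

circ = Circuit(3)
circ.CX(0, 1)
circ.Rz(0.5, 1)
circ.CX(1, 2)
circ.CX(0, 1)

print(circ.n_gates)                        # total gate count
print(circ.n_gates_of_type(OpType.CX))     # two-qubit (CX) gate count: 3
print(circ.depth())                        # overall circuit depth
print(circ.depth_by_type(OpType.CX))       # CX-depth
```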

6.2. Peephole optimisations

Circuit optimisations in t|ket⟩ can broadly be categorised into peephole optimisations and macroscopic analysis. Peephole optimisations are analogous to their namesake in classical compilers, where a sliding window traverses the instruction graph, looking for specific small patterns or classes of subcircuits and substituting equivalent subcircuits (with lower gate counts or depth) in place. Basic examples include the elimination of redundant gates such as identities, gate-inverse pairs, and diagonal gates before measurements. Local gate commutation rules can be considered at the point of pattern identification, or as standalone passes to enable further optimisations.

These techniques are generic, in the sense that they are not tuned for particular applications. The majority are written for best performance in the intermediate gate set of CX, Rz, and Rx, though when they can be expressed more naturally in a different gate set (such as a set of Clifford gates), rebase passes can be applied to convert between them.

Clifford circuits are defined as the class generated by CX, Hadamard, and $\mathrm{R}\mathrm{z}\left(\frac{\pi }{2}\right)$ gates. These are known to be efficiently simulable [62, 63], and there is a wide literature on simplification techniques [64–66]. In particular, there are several useful small identities for reducing the CX-count of a circuit, which t|ket⟩ can recognise and apply: these are summarised in figure 9.

Figure 9. Clifford identities that can be recognised and applied in t|ket⟩ to reduce the CX count. $\mathrm{R}\mathrm{z}\left(\frac{\pi }{2}\right)$ (phase) gates are represented by P in the diagrams. The two identities that would introduce SWAPs can invalidate any connectivity, so can optionally be disabled.
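These Clifford identities are applied by the CliffordSimp pass; a minimal sketch is shown below (the allow_swaps flag is assumed to control whether the SWAP-introducing identities from figure 9 may be used).

```python
from pytket import Circuit
from pytket.passes import CliffordSimp

circ = Circuit(2)
circ.CX(0, 1)
circ.S(1)
circ.CX(0, 1)
circ.H(1)

# disable the identities that would implicitly introduce SWAPs,
# so any connectivity already satisfied by the circuit remains valid
CliffordSimp(allow_swaps=False).apply(circ)
```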

The Cartan decomposition [67] specifies a way to synthesise arbitrary n-qubit unitaries into sequences of local unitaries on fewer qubits and a small number of entangling operations between them. This decomposition gives the following upper bounds for small instances:

Theorem 6.3. Any single-qubit unitary can be decomposed into a sequence of at most three rotations using any choice of Rx, Ry, and Rz gates. The angles of rotation are given by the Euler-angle decomposition of the combined rotation on the Bloch sphere.

Theorem 6.4. Any two-qubit unitary can be synthesised using at most three CX gates and 15 parametrised single-qubit gates (from any choice of Rx, Ry, and Rz), given by the KAK decomposition [68, 69].

t|ket⟩ implements the Euler and KAK decompositions by scanning the circuit graph for long sequences of gates over one or two qubits and replacing them whenever this helps to reduce CX count or overall gate count. Known closed-form expressions for manipulating Euler angles allow the single-qubit reduction to be performed on symbolic circuits. t|ket⟩ does not currently support performing the KAK decomposition with symbolic gate parameters, or a generic Cartan decomposition for more than two qubits.
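In pytket these reductions are exposed as compiler passes; the sketch below uses pass names from a recent release, which should be treated as assumptions.

```python
from pytket import Circuit
from pytket.passes import KAKDecomposition, FullPeepholeOptimise

circ = Circuit(2)
circ.Rz(0.1, 0).Rx(0.2, 0).Rz(0.3, 0)    # single-qubit chain: reducible to <= 3 rotations
circ.CX(0, 1).Rz(0.7, 1).CX(0, 1).Rx(0.4, 0).CX(0, 1)

KAKDecomposition().apply(circ)   # two-qubit (KAK) resynthesis: at most 3 CX gates per block
# FullPeepholeOptimise() bundles this with Euler-angle squashing and Clifford simplification
FullPeepholeOptimise().apply(circ)
```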

6.3. Macroscopic optimisations

Other optimisation procedures aim to identify high-level macroscopic structures in the circuit or alternative mathematical representations of different classes of circuits that are easier to manipulate than individual gates. The general procedure here is to recognise these structures or subcircuits of the appropriate class and treat them as first-class gates. The algebra of the structure or representation can identify non-local optimisations on the original circuit. Efficient synthesis methods can then be applied to reduce these back down to primitive gates in a way that uses fewer CX gates, parallelises them better, or restructures the circuit to enable more peephole optimisations.

As simulation of molecular systems is a leading candidate application for NISQ devices, t|ket⟩ implements a novel technique for optimising a new class of multi-qubit subcircuits, called Pauli gadgets, which occur frequently in chemistry circuits designed for this purpose.

Definition 6.5. The phase gadget ${{\Phi}}_{n}\left(\alpha \right)$ is a canonical representation of a multi-qubit operator of the form ${\mathrm{e}}^{\frac{1}{2}\mathrm{i}\alpha {Z}^{\otimes n}}$.

Definition 6.6. The Pauli gadget $P\left(\alpha ,s\right){:=}U\left(s\right);{{\Phi}}_{\left\vert s\right\vert }\left(\alpha \right);U{\left(s\right)}^{{\dagger}}$ is a canonical representation of a multi-qubit operator of the form ${\mathrm{e}}^{\frac{1}{2}\mathrm{i}\alpha s}$, where s is a string (tensor product) of Pauli operators and the unitary U(s) is defined recursively:

$$U\left(Z\otimes {s}^{\prime }\right)=I\otimes U\left({s}^{\prime }\right),\qquad U\left(X\otimes {s}^{\prime }\right)=H\otimes U\left({s}^{\prime }\right),\qquad U\left(Y\otimes {s}^{\prime }\right)=\mathrm{R}\mathrm{x}\left(\frac{\pi }{2}\right)\otimes U\left({s}^{\prime }\right),$$

with U of the empty string being the empty circuit.

Example 6.7. The simplest construction of a Pauli gadget is a single parameterised rotation gate conjugated by a cascade of CX gates and some single-qubit Clifford gates.
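A Pauli gadget can be added to a circuit in abstract form via a PauliExpBox (as mentioned in section 8), and sequences of gadgets can be resynthesised with the PauliSimp pass; a sketch using the current pytket API:

```python
from pytket import Circuit
from pytket.circuit import PauliExpBox
from pytket.pauli import Pauli
from pytket.passes import DecomposeBoxes, PauliSimp

circ = Circuit(3)
# a rotation generated by the Pauli string Z X Y, with the angle given in half-turns
gadget = PauliExpBox([Pauli.Z, Pauli.X, Pauli.Y], 0.3)
circ.add_pauliexpbox(gadget, [0, 1, 2])

DecomposeBoxes().apply(circ)   # expand the gadget into CX, Hadamard and rotation gates
PauliSimp().apply(circ)        # resynthesise sequences of Pauli gadgets to reduce depth
```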

Figure 10. Code example showing how individual optimisation passes can be composed into a more complex routine.

Figure 11. Some of the elementary optimisation passes available in t|ket⟩.

The authors have previously written a comprehensive account of Pauli gadgets and their use in t|ket⟩ [70]. Such gadgets enjoy a powerful equational theory, giving rules for commutation, merging, and interaction with Clifford gates, which are easily proven using the zx-calculus [71]. By recognising these structures in the input circuit, optimising the sequences of gadgets, and efficiently transforming them back to a standard gate set, we can achieve depth reductions greater than 50%. See figure 20 for a summary of results comparing this technique in t|ket⟩ to other compiler stacks for optimising circuits relating to electronic structure problems.

The first step when optimising with macroscopic structures is to identify good candidates in the circuit. It is obviously preferable to work with circuits that are already built from the structures to simplify this step. The integration of t|ket⟩ with application software can make this possible by, for instance, allowing users to directly insert Pauli gadgets into the circuit using the corresponding box type.

Future versions of t|ket⟩ will expand on this area of optimisations to cover other useful intermediate representations, including phase polynomials [38, 72], zx-diagrams [65, 73, 74], Clifford tableaus [63, 75], and linear-reversible functions [76].

6.4. Example procedure

Each of these methods gives rise to a compiler pass that can either be invoked on its own or composed (as described in section 5) into more effective routines. t|ket⟩ comes with some predefined passes combining several of these optimisations. Each backend has a default compilation pass, which guarantees (as far as possible) that the output will be compatible with the backend's hardware or simulator requirements; these passes include a small selection of the peephole optimisations for fast, basic gate reduction.

Figure 10 demonstrates how to compose the basic passes in pytket, recreating the effect of the pre-built SynthesiseIBM pass. Starting with RebaseIBM will decompose multi-qubit gates into a consistent gate set that is easier to manipulate. The RemoveRedundancies pass covers a handful of optimisations based on removing different types of redundant gates. Applying commutation rules can potentially uncover more candidates for removal, so the simplify routine is repeated until the gate count stops decreasing (figure 11).
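A sketch of such a routine is shown below, written against a recent pytket release; the RebaseIBM and SynthesiseIBM passes of the version described here have since been renamed, so the specific pass names are assumptions.

```python
from pytket import Circuit
from pytket.passes import (SequencePass, RepeatWithMetricPass,
                           RemoveRedundancies, CommuteThroughMultis, RebaseTket)

simplify = SequencePass([CommuteThroughMultis(), RemoveRedundancies()])
# repeat the simplification loop until the total gate count stops decreasing
loop = RepeatWithMetricPass(simplify, lambda circ: circ.n_gates)
routine = SequencePass([RebaseTket(), loop])

circ = Circuit(2).H(0).CX(0, 1).CX(0, 1).H(0)   # toy circuit with an adjacent CX pair
routine.apply(circ)
print(circ.n_gates)   # the CX pair cancels, so the gate count drops
```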

7. Mapping to physical qubits

Quantum computing devices have different constraints on possible operations between their physical qubits. Some devices allow two-qubit (or higher-order) operations between any set of physical qubits, while others do not. We define two physical qubits as being connected if the hardware's primitive multi-qubit operations can be executed between them. The connectivity constraints of a device can be specified by an undirected graph GD = (VD, ED), where the vertices VD are the physical qubits and edges ED connect physical qubits which can interact. Figure 12 shows an example connectivity graph. As logical quantum circuits are usually written without considering these connectivity constraints, they typically must be modified before execution on a device to ensure that every logical multi-qubit operation is mapped to connected physical qubits. We define logical qubits and operations as those present in the logical quantum circuit, and state the routing problem as finding a mapping of logical operations to allowed physical operations.

The routing problem is solved by permuting the mapping of logical qubits to physical qubits throughout a circuit's execution, which is achieved by adding SWAP operations. Sometimes gate decompositions can be used to convert non-adjacent multi-qubit gates to distance-1 implementations. An example of this for the CX gate is shown in figure 13. Finding an optimal solution to the routing problem in this manner is NP-complete in general [77]. Note that SWAP operations have different implementations on different hardware; ion-trap devices have physical SWAPs, while superconducting devices require the logical states to be transferred between two physical qubits through three CX gates, as shown in figure 14. A solution is reached when the logical circuit has been modified such that there is an injective map of logical qubits to physical qubits, or placement, p, for which every two-qubit gate acting on logical qubits (q, q') is mapped to physical qubits that are connected on the connectivity graph GD, i.e. (p(q), p(q')) ∈ ED.

In some cases a placement p can be found that solves the routing problem without adding SWAP operations. Treating logical qubits as vertices and two-qubit interactions between them in the circuit as edges, we can form an interaction graph for a logical circuit GI = (VI, EI). If there is a subgraph monomorphism p : VI → VD which respects (q, q') ∈ EI ⇒ (p(q), p(q')) ∈ ED, then only a relabeling of logical qubits to physical qubits is required.

In figure 15, a short example CX circuit is shown. This circuit can be mapped to the connectivity graph from figure 12 without adding any extra gates, as shown in figure 16.

t|ket⟩ solves the problem in two steps: finding an initial placement of logical qubits to physical qubits, and subsequently adding SWAP operations to the circuit. We consider this to be a dynamic approach, in contrast to static approaches [78–80], which partition circuits into parallelised slices of two-qubit interactions and then use SWAP networks to permute logical qubits between placements that satisfy these slices.
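A minimal mapping sketch, assuming the current pytket module layout (pytket.architecture and pytket.passes; earlier releases exposed these under pytket.routing), with a hypothetical nine-qubit connectivity graph:

```python
from pytket import Circuit
from pytket.architecture import Architecture
from pytket.passes import DefaultMappingPass
from pytket.predicates import ConnectivityPredicate

# connectivity graph given as an edge list over physical qubits 0..8
arch = Architecture([(0, 1), (1, 2), (1, 4), (3, 4), (4, 5), (4, 7), (6, 7), (7, 8)])

circ = Circuit(6)
circ.CX(0, 3).CX(1, 4).CX(2, 5).CX(0, 5)

DefaultMappingPass(arch).apply(circ)              # placement followed by SWAP insertion
print(ConnectivityPredicate(arch).verify(circ))   # True once all CXs respect the graph
```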

7.1. Noise aware graph placement

The initial placement p is chosen to maximise the fidelity of the circuit implementation on the device, using both a proxy heuristic which tries to minimise additional gate overhead from routing, and an error heuristic which uses device error characteristics.

Indirectly, knowing only the connectivity graph GD, candidate placements are chosen to minimise the number of gates the subsequent routing procedure will need to add, as these additional operations are most likely to be error-prone two-qubit gates. Routing adds gates dynamically as it proceeds through a circuit, so it is in general not possible to predict which placements will correspond to the fewest gates added. As a heuristic, placements are found such that a maximum number of two-qubit operations at the beginning of the circuit can be completed with no SWAP gates added.

First, this problem is cast as finding a subgraph monomorphism p : VI → VD which respects (q, q') ∈ EI ⇒ (p(q), p(q')) ∈ ED, for the interaction graph GI = (VI, EI) and device graph GD = (VD, ED). If a monomorphism cannot be found, the routine removes an edge from GI belonging to the latest circuit slice and attempts the graph matching again. This iterates until a monomorphism is found.

Logical qubits q ∈ VI which no longer have any edges are removed from GI; thus the subgraph monomorphism routine in practice produces a set of candidate partial placements: maps which only act on a subset of the logical qubits in the circuit. The subsequent routing procedure can accept this as input, and will naively place unmapped qubits near those they next interact with as it proceeds. As device architecture graphs GD currently have, and will likely continue to have, large regular subgraphs, the set of matches can be large, especially when VI is small compared to VD.

If gate-fidelity information for individual qubits is available for the target hardware, these candidate placements are scored for maximum expected overall fidelity and the highest-scoring one is chosen. In NISQ devices, qubits and primitive gates often have highly heterogeneous error characteristics; using this information to choose from the possible equivalent graph matches can result in a higher-fidelity implementation of a given circuit on a given device. Section 9.2 compares the performance of different placement methods available in t|ket⟩.
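Continuing the sketch above (reusing arch), the placement and routing steps can also be invoked separately, so that a noise-aware placement can be substituted when calibration data are available; the class and pass names below come from a recent pytket release and are assumptions.

```python
from pytket import Circuit
from pytket.placement import GraphPlacement   # NoiseAwarePlacement lives in the same module
from pytket.passes import PlacementPass, RoutingPass

circ = Circuit(6)                        # a fresh, unmapped copy of the earlier circuit
circ.CX(0, 3).CX(1, 4).CX(2, 5).CX(0, 5)

# GraphPlacement uses only the connectivity graph; NoiseAwarePlacement can additionally
# be constructed with per-qubit and per-link error rates from device calibration data
placement = GraphPlacement(arch)
PlacementPass(placement).apply(circ)     # initial (possibly partial) placement
RoutingPass(arch).apply(circ)            # add SWAPs for the remaining interactions
```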

Figure 12. An undirected graph showing connectivity constraints for a hypothetical nine qubit device.

Figure 13. Distance 2 distributed CX gate and decompositions to distance 1 CX gates.

Figure 14. SWAP gate and decompositions to CX gates.

Figure 15. An example six qubit circuit with only CX gates.

Figure 16. An example mapping of logical qubits in figure 15 to physical qubits in figure 12. This example satisfies the routing problem without logical circuit modification. The solid lines between red nodes represent physical interactions performed by the circuit; grey nodes and dashed lines are unused by the circuit.

7.2. Routing

Given an initial partial placement p, the routing algorithm adds SWAP operations until all logical operations satisfy connectivity constraints. As SWAP operations are added, p is permuted, and so we define a temporary placement p' which is the permutation of p from added SWAP operations up to some slice of circuit gates S.

Two-qubit gates in the circuit are iterated through in time order (via a topological sort of the DAG), finding the first set of two-qubit interactions (q, q') such that (p(q), p(q')) does not respect GD and no q is in multiple interactions. We call this set the first slice S0, and record the permutation p' of p up to S0. The routing algorithm then aims to pick the optimal edge e ∈ ED of the connectivity graph on which to implement a SWAP operation, given the interacting logical qubits in S0 and p'.

A set of candidate placements $\left\{{p}_{0}^{\prime \prime },\enspace \dots ,\enspace {p}_{n}^{\prime \prime }\right\}$ is constructed by permuting instances of p' with SWAP operations on edges in ED. If an edge has no qubits in S0 it is ignored. Each candidate placement is scored and the winning placement is chosen, with the scoring function based on the distance between interactions in S0 given p''. If there is no winning placement for S0 then tied placements are scored for a new slice S1, where S1 is the next set of two-qubit interactions (q, q') in the circuit such that (p(q), p(q')) does not respect GD and no q is in multiple interactions. If there is no winning placement for S1 then tied placements are scored for a new slice S2. This is repeated until there is a winning placement p'' for some Sn.

The winning placement p'' is produced from p' via a permutation along its associated winning edge e. In most cases a SWAP operation is inserted along e directly before S0 and a new first slice S0 is found.

In some cases a distributed CX is considered instead: this happens when at least one of the logical qubits associated with e is in an interaction (a two-qubit gate g) in S0. If g is a CX gate and its logical qubits are at distance two on the device graph GD (for the temporary placement p'), then a distributed CX may be added instead. A new two-element set of candidate placements, comprising p' and p'', is constructed, and a similar scoring process is applied, comparing p' and p'' over multiple slices (S0, S1, S2 and so on). If p' wins, g is replaced with a distributed CX and no SWAP operation is added. Otherwise, if p'' wins, the SWAP operation is added and p' is replaced with p''.

This whole process is then repeated, finding new first slices S0 and new winning placements p''. The algorithm terminates when S0 is returned empty.

The algorithm employs a high-performance heuristic which, when coupled with an efficient C++ implementation, results in fast runtimes. t|ket⟩ routing typically performs at least as well as other software solutions when comparing circuit size and depth [77].

However, heterogeneity in NISQ device noise means that a routed circuit with minimal SWAP overhead may not always prove best. Some solutions consider device noise [36, 81] when routing, using gate fidelity information to produce a routed circuit with the best execution fidelity. This motivated a routing solution we implemented that used a fidelity-based heuristic to score and pick SWAP operations, in which the scoring method used an estimate of the noise accumulated in realising all interactions in a slice S0. The estimate was produced by finding the SWAP paths required to permit the interactions in S0, and then calculating the potential error accrued by each logical qubit in realising these paths. In practice we found that the fidelity heuristic could not accurately determine when diverging from adding the minimal number of SWAP gates would improve circuit fidelity, and so in general aiming to minimise the SWAP operation overhead provided better results.

8. Applications

Quantum chemistry simulations are performed using a supplementary software package called Eumen. This provides an interface between traditional quantum chemistry problems and various hybrid classical–quantum algorithms, enabling effective chemistry simulations on NISQ hardware using t|ket⟩. For such simulations, Eumen accepts a range of input parameters, such as the molecular or lattice geometry, system charge, multiplicity restrictions, type of simulation, ansatz, optimisations, hardware backend, and qubit mapping. t|ket⟩ mediates between Eumen and the hardware on which the quantum-algorithmic part of the chemistry simulation runs.

Eumen can compute optimal geometries and properties of the ground state or excited states. For example, ground-state energies may be calculated using VQE or imaginary-time evolution methods. For excited-state calculations, one can use methods such as quantum subspace expansion, reduced density matrix approximation, penalty functions, and symmetry constraints [82]. These methods require the measurement of either the expectation value of many-body operators or the overlap of two different states; these measurements are performed by t|ket⟩ using results from the quantum hardware. The states may be prepared with hardware-efficient ansätze or the approximated circuit representation of various physically motivated ansätze, such as UCCSD, k-UpCCGSD, or the time evolution operator.

The depth of the state preparation circuit is significantly reduced by t|ket⟩'s ansatz-specific optimisation methods, which can identify specific structures in circuits and reduce the gate count required for their execution, as described in section 6. Future optimisation methods specific to QAOA instances are also planned. Construction of such structures and subsequent optimisation is aided by boxes: for example, a Pauli gadget can be added to a circuit in abstract form via a PauliExpBox operation. These operations, and variational circuits in general, can make use of symbolic parameters and partial compilation to simplify their use and reduce the compilation work required at each iteration. Finally, variational algorithms, and other applications that use Hamiltonian estimation via calculation of multiple terms of the Hamiltonian, can benefit from t|ket⟩'s back-end methods, which allow compilation and submission of multiple circuits. Circuit execution on the back-end can occur asynchronously, with results being retrieved for processing when execution is complete.

9. Benchmarks

In this section we provide some benchmarks of t|ket⟩ compiler performance. First we conduct benchmarks of end-to-end compiler performance, including comparisons with other available quantum compilation tools. Secondly, we perform experiments on a publicly available quantum device to determine whether noise-aware placement offers a benefit. The full datasets and scripts used for generating these results can be found at https://github.com/CQCL/tket_benchmarking.

9.1. End-to-end compilation

We present a series of benchmarks of end-to-end compiler performance on a set of circuits, and compare the performance of t|ket⟩ with that of two widely-used alternatives, Qiskit and the Quilc compiler, which are able to do both general circuit optimisation and routing.

We define end-to-end compilation as the process of taking a circuit presented in OpenQASM and outputting an equivalent circuit that has been optimised and satisfies the relevant device constraints, i.e. has been routed and converted to the correct gate set. We do not include high-level algorithm design, which is beyond the scope of a compiler, or low-level pulse optimisation, which is still in its infancy as a research topic.

9.1.1. Benchmark circuits

The benchmark set is made up of three test sets, with circuits of at most $10^4$ initial gates. (This threshold was chosen as circuits larger than this are several orders of magnitude too large for near-term devices. In addition, the runtimes of Qiskit and Quilc already reach several minutes per circuit at this range, making it impractical to benchmark against the larger circuits.)

  • (a)  
    The IBM test set is a series of circuits published as part of the Qiskit Developer Challenge, a public competition to design a better routing algorithm. These circuits were designed not to be amenable to significant peephole optimisation, restricting the impact optimisation can have and focusing the benchmark on the efficiency of the routing algorithm. However, they were designed to be easily verified for correctness by mapping the ${\left\vert 0\right\rangle }^{\otimes n}$ state into some other computational basis state; as such, these tests could be circumvented by applying state preparation optimisations. The IBM circuit set in OpenQASM can be found at https://github.com/iic-jku/ibm_qx_mapping.
  • (b)  
    The UCCSD test set is a series of circuits for electronic-structure calculations. They correspond to VQE circuits for estimating the ground state energy of small molecules by the unitary coupled cluster approach [83], using some choice of qubit encoding (Jordan–Wigner, parity mapping, or Bravyi–Kitaev [84]). These circuits are very amenable to optimisation, as well as requiring routing. They are representative of algorithms that have been proposed as suitable for application on NISQ devices [83], and were generated using Qiskit Aqua. The set used here updates and extends that used by Cowtan et al [70], whose OpenQASM files can be found at https://github.com/CQCL/pytket.
  • (c)  
    The product formula test set is a series of circuits for Hamiltonian simulation. These circuits are thought to be candidates for quantum advantage [85], and were used as a test case for the circuit optimiser of Nam et al [38]. They are given in the ASCII format of the Quipper language, and each is formed of a repeated subroutine; we convert this subroutine to a quantum circuit in OpenQASM. We included circuits both before and after optimisation by Nam et al [38], since they still require mapping to the architecture and have potential for further optimisation. We had to further edit the circuits to ensure the rotation angles of gates exceeded Qiskit's very high cutoff ($10^{-5}$), below which rotations are treated as identities, which would make these circuits almost trivial. These circuits contain some Pauli gadgets, but also have large regions which are not amenable to this kind of optimisation. These circuits can be found at https://github.com/njross/optimizer.

The full collated benchmark set can be found at https://github.com/CQCL/tket_benchmarking.


Figure 17. Device connectivity layouts used in end-to-end compilation benchmarks. (a) Rochester 53-qubit layout [86]. (b) Sycamore 53-qubit layout [61]. (c) Rigetti Aspen 16-qubit layout [87].


9.1.2. Experiments

We compare compilation for four different architectures:

  • (a)  
    The fully-connected graph, for which no routing is required.
  • (b)  
    The connectivity graph of IBM Rochester, a 53-qubit device.
  • (c)  
    The connectivity graph of Google's 53-qubit Sycamore device.
  • (d)  
    The Rigetti Aspen 16-qubit architecture. For this case, all circuits with more than 16 qubits were discarded.

These connectivity graphs are shown in figure 17. Two end-to-end comparisons were made:

  • (a)  
    To compare t|ket⟩ to other available compilation software, the default compiler passes of Qiskit (optimisation level 3) and Quilc, and the recommended generic pass sequence for t|ket⟩ (FullPeepholeOptimise, followed by the default qubit mapping pass, SynthesiseIBM, and a rebase into the desired gate set; henceforth referred to as 'FullPass', and sketched in code after this list) were applied to all available circuits.
  • (b)  
    To demonstrate the necessity of appropriate usage of situational compiler optimisations, we compare t|ket⟩'s UCCSD-specific pass (the PauliSimp pass followed by the 'FullPass' routine, henceforth dubbed 'ChemPass') against the default Qiskit and Quilc passes, on the UCCSD circuits (test set (b) above). These circuits contain adjacent Pauli gadgets, and we demonstrate that the reduction in two-qubit gate count and depth can be substantial compared to optimising naively.
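The sketch below illustrates how the 'FullPass' and 'ChemPass' sequences might be assembled with the pytket API, assuming the pass names used in this paper (e.g. SynthesiseIBM) and eliding the final rebase; pass names and module paths differ in later pytket releases.

```python
# A minimal sketch of building the 'FullPass' and 'ChemPass' sequences in
# pytket, under the pass names used in this paper; the rebase step is elided.
from pytket.passes import (SequencePass, FullPeepholeOptimise,
                           DefaultMappingPass, SynthesiseIBM, PauliSimp)
from pytket.routing import Architecture   # pytket.architecture in newer releases

arch = Architecture([(0, 1), (1, 2), (2, 3)])        # toy line coupling map

full_pass = SequencePass([
    FullPeepholeOptimise(),       # architecture-independent optimisation
    DefaultMappingPass(arch),     # qubit placement and routing
    SynthesiseIBM(),              # post-routing clean-up synthesis
])

chem_pass = SequencePass([PauliSimp(), full_pass])   # UCCSD-specific pre-pass
```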


Figure 18. Default compilation benchmarks over all benchmark circuits. The multiplicative overhead in two-qubit gate count from input circuit to output circuit is plotted against input two-qubit gate count. The table shows means across the circuit sets and associated standard error. In general, routing induces an overhead greater than 1, but for FullConnectivity, where no routing is required, the overhead is 1 or below. 'FullPass' refers to the recommended t|ket⟩ routine, consisting of the FullPeepholeOptimise pass, followed by the corresponding qubit mapping pass, SynthesiseIBM, and rebasing to the final gate set.


Figure 19. Default compilation benchmarks over all benchmark circuits. The multiplicative overhead in two-qubit gate depth from input circuit to output circuit is plotted against input two-qubit depth. The table shows means across the circuit sets and associated standard error. In general, routing induces an overhead greater than 1, but for FullConnectivity, where no routing is required, the overhead is 1 or below. 'FullPass' refers to the recommended t|ket⟩ routine, consisting of the FullPeepholeOptimise pass, followed by the corresponding qubit mapping pass, SynthesiseIBM, and rebasing to the final gate set.


Figure 20. Chemistry-specific compilation benchmarks over the UCCSD test set. The multiplicative overhead in two-qubit gate count from input circuit to output circuit is plotted against input two-qubit gate count. The table shows means across the circuit sets and associated standard error. 'ChemPass' refers to application of the t|ket⟩ PauliSimp pass, followed by the 'FullPass' routine.


Figure 21. Chemistry-specific compilation benchmarks over the UCCSD test set. The multiplicative overhead in two-qubit depth from input circuit to output circuit is plotted against input two-qubit depth. The table shows means across the circuit sets and associated standard error. 'ChemPass' refers to application of the t|ket⟩ PauliSimp pass, followed by the 'FullPass' routine.


The benchmarks were performed using t|ket⟩ v0.4.1, Quilc v1.16.3 and Qiskit Terra v0.12.0. All results were obtained using a machine with a 2.3 GHz Intel Core i5 processor and 8 GB of 2133 MHz LPDDR3 memory, running macOS Mojave v10.14.

9.1.3. Metric

The figures of merit for these benchmarks are two-qubit gate count and two-qubit depth. As described in section 6, two-qubit gates have error rates an order of magnitude higher than single-qubit gates on existing architectures [61], so these counts and depths are reasonable proxies for the overall expected error rate of a circuit run on a NISQ device.
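For illustration, both figures of merit can be read directly off a circuit; the sketch below assumes pytket's n_gates_of_type and depth_by_type methods, whose names may differ in other versions.

```python
# A minimal sketch of computing the two figures of merit with pytket
# (method names assumed from recent pytket releases).
from pytket.circuit import Circuit, OpType

circ = Circuit(3).CX(0, 1).H(1).CX(1, 2).CX(0, 1)

two_qubit_count = circ.n_gates_of_type(OpType.CX)   # total number of CX gates
two_qubit_depth = circ.depth_by_type(OpType.CX)     # depth counting only CX layers
```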

We defined end-to-end compilation earlier as including conversion to the device's native gate set. Google's Sycamore device accepts CZ gates natively as a two-qubit operation, whereas IBM Rochester only supports CX gates. Since only single-qubit Hadamard gates are required to convert between CX and CZ, we discount the gate-conversion step and accept either gate set for the two-qubit gate-count and depth metrics. Thus, unlike total gate count, the two-qubit gate count for these backends is independent of the final gate set chosen, so the comparison between architectures is based purely on connectivity. The exception is the Rigetti Aspen device, which can use both CZ gates and the XY family (including the iSWAP gate) natively; since these can be realised with similar fidelities, weighting their costs equally in a simple two-qubit gate count is justified.
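Concretely, the interchangeability rests on the standard Hadamard-conjugation identity (with the Hadamards acting on the target qubit),

$$\mathrm{CZ}=\left(\mathbb{1}\otimes H\right)\,\mathrm{CX}\,\left(\mathbb{1}\otimes H\right),$$

so a CX can be exchanged for a CZ (and vice versa) at the cost of only two single-qubit gates, leaving both the two-qubit gate count and the two-qubit depth unchanged.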

9.1.4. Results

Two-qubit gate count and depth benchmark results for default compilation are shown in figures 18 and 19. The corresponding results for chemistry-specific compilation are shown in figures 20 and 21 respectively. Each figure includes results for all three compilers and all four target backends. The figure of merit in each is the multiplicative overhead in two-qubit gate count or depth, i.e. the ratio of the value after compilation to the value before.

The FullConnectivity case shows the results without the effects of routing. For the default compilation passes, the majority of benchmark circuits show little to no change in the number of two-qubit gates, demonstrating the difficulty of entangling-gate reduction with generic optimisation passes. In the cases where gains can be made, t|ket⟩'s optimisations achieve larger reductions than the other compilers, and do so in more instances.

In contrast, as shown in figures 20 and 21, adding the PauliSimp pass leads to much more significant reductions on the UCCSD circuits, even after mapping onto a device with restricted connectivity. However, on other classes of circuits that do not resemble the UCCSD set, applying this optimisation can drastically degrade performance, as it attempts to force the circuit into this model and can make it less amenable to routing.

Across the end-to-end compilation results targeting restricted-connectivity architectures, the general ranking of performance is: t|ket⟩ first, followed by Quilc (with special note of its performance for the Aspen device), with Qiskit consistently introducing a very high gate overhead. The results are sufficiently spread that, for circuits viable for execution on near-term devices, the choice of compiler and pass sequence makes a significant difference to the size of the final circuit. By comparing to the FullConnectivity case, we can see that these differences are dominated by differences in routing performance.

9.2. Noise-aware placement

In section 7.1 we outlined graph placement (GP), a subgraph-monomorphism-based method for finding initial qubit mappings, and noise-aware graph placement (NAGP), a fidelity-aware heuristic for scoring those placements. As the effectiveness of these methods depends strongly on the error characteristics of physical devices and the fidelity with which they execute the circuit, we assess and compare their performance by running benchmark circuits on a device with and without the mapping methods applied.

9.2.1. Benchmark circuits

At the time of writing, it is difficult to implement many common algorithms on publicly-available quantum devices and extract a signal from the noise; this is why implemented algorithms are usually variational. In order to test a large enough data set to have confidence in measured differences, we instead choose to implement random circuits of constrained sizes.

Sets of random circuits are parameterised by the number of qubits and the total number of gates. Each circuit is generated by uniformly sampling gates from {X, Y, Z, H, T, S, CX}, sampling uniformly over all qubits for single-qubit gates and over pairs of distinct qubits for a CX. As two-qubit operations are the most error-prone [61], samples containing no CX gates were excluded. Circuit sets with 4 and 8 qubits, and gate counts of 20, 40, 60 and 80, were generated. The four-qubit circuits with 80 gates were too deep, and therefore too noisy, for an effective comparison of methods, so this set was omitted, leaving a total of seven sets of 90 samples each. A sketch of such a generator is given below.
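The following is a minimal sketch of such a generator, assuming the pytket Circuit API; the function name and the resampling loop are our own.

```python
# Hedged sketch of the random-circuit generation described above.
import random
from pytket.circuit import Circuit

SINGLE_QUBIT_GATES = ["X", "Y", "Z", "H", "T", "S"]

def random_circuit(n_qubits: int, n_gates: int, seed: int = 0) -> Circuit:
    """Uniformly sample gates from {X, Y, Z, H, T, S, CX} on random qubits,
    discarding any sample that contains no CX gate."""
    rng = random.Random(seed)
    while True:
        circ = Circuit(n_qubits)
        has_cx = False
        for _ in range(n_gates):
            gate = rng.choice(SINGLE_QUBIT_GATES + ["CX"])
            if gate == "CX":
                control, target = rng.sample(range(n_qubits), 2)
                circ.CX(control, target)
                has_cx = True
            else:
                getattr(circ, gate)(rng.randrange(n_qubits))  # circ.X(q), circ.H(q), ...
        if has_cx:
            return circ
```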

9.2.2. Experiments

As described in section 7, placement is the task of finding initial maps from logical qubits of the circuit to physical qubits of the device, and routing is the addition of two-qubit gates to satisfy the connectivity constraints of the device. For these experiments, each benchmark circuit was compiled in three different ways, corresponding to three methods of calculating an initial partial placement: 'None' (corresponding to no qubits placed, therefore relying on default on-the-fly placement performed by routing), 'graph placement', and 'noise-aware graph placement'. Each placed circuit was then compiled with identical routing and post-routing optimisation passes from t|ket⟩.

All circuits were run on the publicly-available ibmq_16_melbourne device via the IBM Q Experience [88]. Correspondingly, compilation also included translating the circuits to the IBM Q gate set of {U1, U2, U3, CX}. Programs for execution on an IBM Q device are sent via the API as 'jobs', with a maximum of 75 circuits in each job. All compiled circuits that corresponded to the same initial circuit were evaluated consecutively and within the same job, to mitigate the effects of device-characteristic deviations between jobs on the method comparisons. Each compiled circuit was evaluated with the maximum 8192 shots.
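A hedged sketch of batching circuits into jobs of at most 75 and retrieving results asynchronously is shown below; `backend` is assumed to be a pre-constructed pytket backend object exposing the generic process_circuits/get_result interface, and its construction is elided.

```python
# Hedged sketch of batched, asynchronous submission via a pytket backend.
MAX_CIRCUITS_PER_JOB = 75   # IBM Q job limit quoted above
N_SHOTS = 8192              # maximum shots per circuit

def run_batched(backend, circuits):
    """Submit circuits in jobs of at most 75 circuits each, then collect counts."""
    handles = []
    for start in range(0, len(circuits), MAX_CIRCUITS_PER_JOB):
        batch = circuits[start:start + MAX_CIRCUITS_PER_JOB]
        # process_circuits submits asynchronously and returns result handles
        handles.extend(backend.process_circuits(batch, n_shots=N_SHOTS))
    # retrieve measurement distributions once execution has completed
    return [backend.get_result(h).get_counts() for h in handles]
```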

9.2.3. Metric

The ideal metric for comparing the placement methods is the overall fidelity of the implemented circuits. However, measuring this requires a number of circuit evaluations that scales exponentially with qubit number, which is infeasible for large experiment sets [89]. Instead, we choose a metric that quantifies the distance between the distribution of measurements in the computational basis and the same distribution generated by an ideal simulation. This methodology requires classical resources that scale exponentially with qubit number, as it involves simulating all circuits. However, as the proposed techniques are only relevant while NISQ-era error rates and device heterogeneity persist, we expect qubit numbers to remain low enough for this methodology to apply for as long as the techniques themselves do.

The Kullback–Leibler (KL) divergence has been used for comparing measured and simulated distributions [90]. However, it has some shortcomings. The KL divergence between two distributions P, Q over values $x_i$ is defined as:

$${D}_{\mathrm{KL}}\left(P\,\Vert\, Q\right)={\sum }_{i}P\left({x}_{i}\right)\mathrm{log}\,\frac{P\left({x}_{i}\right)}{Q\left({x}_{i}\right)}$$
This is asymmetric between P and Q. More importantly, it is defined to be infinite if support(P) ⊈ support(Q). A standard technique to account for this is to pad the zero entries of the distribution with small values and renormalise. Although this would still show qualitative differences, the absolute value would depend on the free padding parameter, so it would not be a useful quantitative measure.

We instead use the Jensen–Shannon divergence $D_{\mathrm{JS}}$ [91], which is closely related to $D_{\mathrm{KL}}$. For distributions P, Q and $M=\frac{1}{2}\left(P+Q\right)$ it is defined by:

$${D}_{\mathrm{JS}}\left(P,Q\right)=\frac{1}{2}{D}_{\mathrm{KL}}\left(P\,\Vert\, M\right)+\frac{1}{2}{D}_{\mathrm{KL}}\left(Q\,\Vert\, M\right)$$

This is a symmetric, finite function with bounds 0 ⩽ $D_{\mathrm{JS}}(P,Q)$ ⩽ 1 when the base-2 logarithm is used.
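For reference, a direct implementation of this definition over two aligned discrete distributions is sketched below (Python with numpy; the function name is ours).

```python
import numpy as np

def js_divergence(p, q, base: float = 2.0) -> float:
    """Jensen-Shannon divergence between two discrete distributions given as
    arrays over the same outcome ordering; bounded in [0, 1] for base 2."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()          # normalise to probability vectors
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                          # 0 * log(0/x) contributes nothing
        return np.sum(a[mask] * (np.log(a[mask] / b[mask]) / np.log(base)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```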

9.2.4. Results

Figure 22 plots the mean value of $D_{\mathrm{JS}}$ over each benchmark circuit set for the three placement methods, each benchmark set parameterised by qubit number and gate count. In general, GP is seen to reduce the mean $D_{\mathrm{JS}}$ when compared to no initial placement; this can be explained by GP reducing the number of error-prone two-qubit gates that need to be added to map the circuit. We also see that scoring of these placements using device-reported error information by NAGP is able to make further significant reductions to $D_{\mathrm{JS}}$, suggesting that such exploitation of device heterogeneity is a worthwhile avenue of exploration for maximising near-term device use.

Comparing four-qubit and eight-qubit results, $D_{\mathrm{JS}}$ means are higher in the eight-qubit case, as expected, since more qubits are entangled together in general and so the system is more prone to error. In the eight-qubit case, $D_{\mathrm{JS}}$ is also seen to increase monotonically with gate count, again matching expectations. The peak in the mean $D_{\mathrm{JS}}$ at 40 gates for four qubits, present for all placement methods, is unexpected and warrants further investigation.


Figure 22. Experimental comparison of placement methods. 'None' refers to the case of no initial placement of qubits, 'GP' to partial placement via graph placement, and 'NAGP' to noise-aware graph placement. Each data point corresponds to a mean Jensen–Shannon divergence $D_{\mathrm{JS}}$ of measured distribution from ideal distribution, over the 90 random input circuit samples of the given size, compiled and executed on the device with the three different placement methods.


10. Conclusions and future work

In this paper we have described CQC's compiler system t|ket⟩, with particular emphasis on its transformation engine and qubit mapping routine. We showed that t|ket⟩ offers significant improvement in terms of gate count and gate depth over other comparable compiler systems when evaluated on realistic quantum circuits and real quantum architectures. Further, for devices with heterogeneous gate and qubit error rates t|ket⟩ can use the component-level fidelity information to appreciably improve overall device performance. For NISQ-era quantum computing such performance differences may be the margin between success and failure.

The flexible design of t|ket⟩ presents many possibilities for future improvements. Here we sketch three promising directions.

For all-to-all connectivity, the PauliSimp pass achieved staggering depth reductions on the chemistry benchmark set, which is not totally surprising because it was designed to exploit the recurring structures found in UCCSD ansätze. However, there is considerable scope to improve this method, particularly if the Pauli gadgets are treated as multi-qubit gates and synthesised by the architecture-aware phase of the compilation process, solving the problem mentioned in section 9.1.4. More generally, we expect the use of higher-level 'big gates', equipped with their own equational theory and tuned to particular algorithms or ansätze (for example QAOA instances), to yield similar improvements. This kind of application-specific optimisation cannot be discovered by working with random circuits; it requires real use cases.

Recent results on quantum volume [58] suggest that available qubit numbers already exceed what gate fidelities can support, to the extent that a large fraction of the qubits cannot be effectively exploited. This suggests that, in the near term at least, extremely shallow circuits will be required. One possible route to such depth reduction is to exploit large numbers of ancillary qubits, combined with techniques from measurement-based quantum computation (MBQC) [92], to effectively trade time for space. The zx-calculus [71] is already incorporated into t|ket⟩ and has proven an effective tool for MBQC calculations in the past [73, 93, 94].

Finally, we will look at techniques to attack the noisiness of NISQ devices head on. Incorporating even a very naive noise model into t|ket⟩'s qubit placement algorithm (section 9.2) made a noticeable difference to our results. However, it is well known that the noise channels in real devices are far more complex and more difficult to characterise [60, 95, 96]. Incorporating better analyses of device errors into the compilation process, together with techniques to suppress and mitigate errors [22, 23, 25], will surely have a role to play in compilation for NISQ devices for the foreseeable future.

Footnotes

  • But see O'Brien et al [19].

  • In category-theoretic terms, the triple (I, O, G) of an input ordering, output ordering and graph corresponds to a structured cospan [52], where G is the apex.

  • The process of finding and replacing subgraphs in this manner is called double pushout rewriting. See Ehrig et al [56].
