
Series ISSN: 1935-3235

Synthesis Lectures on Computer Architecture
Series Editors: Natalie Enright Jerger, University of Toronto
Margaret Martonosi, Princeton University

Quantum Computer Systems
Research for Noisy Intermediate-Scale Quantum Computers
Yongshan Ding, University of Chicago
Frederic T. Chong, University of Chicago

This book targets computer scientists and engineers who are familiar with concepts in
classical computer systems but are curious to learn the general architecture of quantum
computing systems. It gives a concise presentation of this new paradigm of computing from
a computer systems’ point of view without assuming any background in quantum mechanics.
As such, it is divided into two parts. The first part of the book provides a gentle overview on
the fundamental principles of the quantum theory and their implications for computing. The
second part is devoted to state-of-the-art research in designing practical quantum programs,
building a scalable software systems stack, and controlling quantum hardware components.
Most chapters end with a summary and an outlook for future directions. This book celebrates
the remarkable progress that scientists across disciplines have made in the past decades
and reveals what roles computer scientists and engineers can play to enable practical-scale
quantum computing.

About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of
Engineering and Computer Science. Synthesis books provide concise, original presentations of
important research and development topics, published quickly, in digital and print formats.

Morgan & Claypool Publishers
store.morganclaypool.com
Quantum Computer Systems
Research for Noisy Intermediate-Scale
Quantum Computers
Synthesis Lectures on
Computer Architecture
Editors
Natalie Enright Jerger, University of Toronto
Margaret Martonosi, Princeton University
Founding Editor Emeritus
Mark D. Hill, University of Wisconsin, Madison
Synthesis Lectures on Computer Architecture publishes 50- to 100-page publications on topics
pertaining to the science and art of designing, analyzing, selecting and interconnecting hardware
components to create computers that meet functional, performance and cost goals. The scope will
largely follow the purview of premier computer architecture conferences, such as ISCA, HPCA,
MICRO, and ASPLOS.

Quantum Computer Systems: Research for Noisy Intermediate-Scale Quantum Computers
Yongshan Ding and Frederic T. Chong
2020

Efficient Processing of Deep Neural Networks


Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer
2020

A Primer on Memory Consistency and Cache Coherence, Second Edition


Vijay Nagarajan, Daniel J. Sorin, Mark D. Hill, and David Wood
2020

Innovations in the Memory System


Rajeev Balasubramonian
2019

Cache Replacement Policies


Akanksha Jain and Calvin Lin
2019

The Datacenter as a Computer: Designing Warehouse-Scale Machines, Third Edition


Luiz André Barroso, Urs Hölzle, and Parthasarathy Ranganathan
2018

Principles of Secure Processor Architecture Design


Jakub Szefer
2018

General-Purpose Graphics Processor Architectures


Tor M. Aamodt, Wilson Wai Lun Fung, and Timothy G. Rogers
2018

Compiling Algorithms for Heterogenous Systems


Steven Bell, Jing Pu, James Hegarty, and Mark Horowitz
2018

Architectural and Operating System Support for Virtual Memory


Abhishek Bhattacharjee and Daniel Lustig
2017

Deep Learning for Computer Architects


Brandon Reagen, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, and David Brooks
2017

On-Chip Networks, Second Edition


Natalie Enright Jerger, Tushar Krishna, and Li-Shiuan Peh
2017

Space-Time Computing with Temporal Neural Networks


James E. Smith
2017

Hardware and Software Support for Virtualization


Edouard Bugnion, Jason Nieh, and Dan Tsafrir
2017

Datacenter Design and Management: A Computer Architect’s Perspective


Benjamin C. Lee
2016

A Primer on Compression in the Memory Hierarchy


Somayeh Sardashti, Angelos Arelakis, Per Stenström, and David A. Wood
2015
Research Infrastructures for Hardware Accelerators
Yakun Sophia Shao and David Brooks
2015

Analyzing Analytics
Rajesh Bordawekar, Bob Blainey, and Ruchir Puri
2015

Customizable Computing
Yu-Ting Chen, Jason Cong, Michael Gill, Glenn Reinman, and Bingjun Xiao
2015

Die-stacking Architecture
Yuan Xie and Jishen Zhao
2015

Single-Instruction Multiple-Data Execution


Christopher J. Hughes
2015

Power-Efficient Computer Architectures: Recent Advances


Magnus Själander, Margaret Martonosi, and Stefanos Kaxiras
2014

FPGA-Accelerated Simulation of Computer Systems


Hari Angepat, Derek Chiou, Eric S. Chung, and James C. Hoe
2014

A Primer on Hardware Prefetching


Babak Falsafi and Thomas F. Wenisch
2014

On-Chip Photonic Interconnects: A Computer Architect’s Perspective


Christopher J. Nitta, Matthew K. Farrens, and Venkatesh Akella
2013

Optimization and Mathematical Modeling in Computer Architecture


Tony Nowatzki, Michael Ferris, Karthikeyan Sankaralingam, Cristian Estan, Nilay Vaish, and
David Wood
2013

Security Basics for Computer Architects


Ruby B. Lee
2013
The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale
Machines, Second Edition
Luiz André Barroso, Jimmy Clidaras, and Urs Hölzle
2013

Shared-Memory Synchronization
Michael L. Scott
2013

Resilient Architecture Design for Voltage Variation


Vijay Janapa Reddi and Meeta Sharma Gupta
2013

Multithreading Architecture
Mario Nemirovsky and Dean M. Tullsen
2013

Performance Analysis and Tuning for General Purpose Graphics Processing Units
(GPGPU)
Hyesoon Kim, Richard Vuduc, Sara Baghsorkhi, Jee Choi, and Wen-mei Hwu
2012

Automatic Parallelization: An Overview of Fundamental Compiler Techniques


Samuel P. Midkiff
2012

Phase Change Memory: From Devices to Systems


Moinuddin K. Qureshi, Sudhanva Gurumurthi, and Bipin Rajendran
2011

Multi-Core Cache Hierarchies


Rajeev Balasubramonian, Norman P. Jouppi, and Naveen Muralimanohar
2011

A Primer on Memory Consistency and Cache Coherence


Daniel J. Sorin, Mark D. Hill, and David A. Wood
2011

Dynamic Binary Modification: Tools, Techniques, and Applications


Kim Hazelwood
2011

Quantum Computing for Computer Architects, Second Edition


Tzvetan S. Metodi, Arvin I. Faruque, and Frederic T. Chong
2011
High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
Dennis Abts and John Kim
2011

Processor Microarchitecture: An Implementation Perspective


Antonio González, Fernando Latorre, and Grigorios Magklis
2010

Transactional Memory, Second Edition


Tim Harris, James Larus, and Ravi Rajwar
2010

Computer Architecture Performance Evaluation Methods


Lieven Eeckhout
2010

Introduction to Reconfigurable Supercomputing


Marco Lanzagorta, Stephen Bique, and Robert Rosenberg
2009

On-Chip Networks
Natalie Enright Jerger and Li-Shiuan Peh
2009

The Memory System: You Can’t Avoid It, You Can’t Ignore It, You Can’t Fake It
Bruce Jacob
2009

Fault Tolerant Computer Architecture


Daniel J. Sorin
2009

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines
Luiz André Barroso and Urs Hölzle
2009

Computer Architecture Techniques for Power-Efficiency


Stefanos Kaxiras and Margaret Martonosi
2008

Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency


Kunle Olukotun, Lance Hammond, and James Laudon
2007
Transactional Memory
James R. Larus and Ravi Rajwar
2006

Quantum Computing for Computer Architects


Tzvetan S. Metodi and Frederic T. Chong
2006
Copyright © 2020 by Morgan & Claypool

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.

Quantum Computer Systems: Research for Noisy Intermediate-Scale Quantum Computers


Yongshan Ding and Frederic T. Chong
www.morganclaypool.com

ISBN: 9781681738666 paperback
ISBN: 9781681738673 ebook
ISBN: 9781681738680 hardcover

DOI 10.2200/S01014ED1V01Y202005CAC051

A Publication in the Morgan & Claypool Publishers series


SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE

Lecture #51
Series Editors: Natalie Enright Jerger, University of Toronto
Margaret Martonosi, Princeton University
Founding Editor Emeritus: Mark D. Hill, University of Wisconsin, Madison
Series ISSN
Print 1935-3235 Electronic 1935-3243

Cover Photo: A Noisy Intermediate-Scale Quantum (NISQ) Machine from David Schuster’s laboratory at the
University of Chicago.
Quantum Computer Systems
Research for Noisy Intermediate-Scale
Quantum Computers

Yongshan Ding
University of Chicago

Frederic T. Chong
University of Chicago

SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE #51

Morgan & Claypool Publishers
ABSTRACT
This book targets computer scientists and engineers who are familiar with concepts in classi-
cal computer systems but are curious to learn the general architecture of quantum computing
systems. It gives a concise presentation of this new paradigm of computing from a computer
systems’ point of view without assuming any background in quantum mechanics. As such, it is
divided into two parts. The first part of the book provides a gentle overview on the fundamental
principles of the quantum theory and their implications for computing. The second part is de-
voted to state-of-the-art research in designing practical quantum programs, building a scalable
software systems stack, and controlling quantum hardware components. Most chapters end with
a summary and an outlook for future directions. This book celebrates the remarkable progress
that scientists across disciplines have made in the past decades and reveals what roles computer
scientists and engineers can play to enable practical-scale quantum computing.

KEYWORDS
quantum computing, computer architecture, quantum compilation, quantum pro-
gramming languages, quantum algorithms, noise mitigation, error correction, qubit
implementations, classical simulation

Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

List of Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

PART I Building Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 The Birth of Quantum Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 The Rise of a New Computing Paradigm . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 What Is a Quantum Computer? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Models of Quantum Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Analog Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Gate-Based Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Measurement-Based Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 A QPU for Classical Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Architectural Design of a QPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Quantum Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 A Road Map for Quantum Computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Computer Science Research Opportunities . . . . . . . . . . . . . . . . . . . . . 11

2 Think Quantumly About Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13


2.1 Bits vs. Qubits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Computing with Bits: Boolean Circuits . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2 Computing with Qubits: Quantum Circuits . . . . . . . . . . . . . . . . . . . . 20
2.1.3 Architectural Constraints of a Quantum Computer . . . . . . . . . . . . . . 25
2.2 Basic Principles of Quantum Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.1 Quantum States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.2 Composition of Quantum Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.3 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.4 Quantum Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 Noisy Quantum Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.1 Quantum Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.2 Operator Sum Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3.3 Qubit Decoherence and Gate Noise . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Qubit Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.1 Trapped Ion Qubits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.4.2 Superconducting Qubits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4.3 Other Promising Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3 Quantum Application Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


3.1 General Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1.1 The Computing Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.1.2 The Query Model and Quantum Parallelism . . . . . . . . . . . . . . . . . . . . 56
3.1.3 Complexity, Fidelity, and Beyond . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2 Gate-Based Quantum Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.2.1 Deutsch–Josza Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.2.2 Bernstein–Vazirani Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3 NISQ Quantum Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.3.1 Variational Quantum Eigensolver (VQE) . . . . . . . . . . . . . . . . . . . . . . 67
3.3.2 Quantum Approximate Optimization Algorithm (QAOA) . . . . . . . . 68
3.4 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

PART II Quantum Computer Systems . . . . . . . . . . . . . . 71


4 Optimizing Quantum Systems–An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1 Structure of Quantum Computer Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.2 Quantum-Classical Co-Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.3 Quantum Compiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4 NISQ vs. FT Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

5 Quantum Programming Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


5.1 Low-Level Machine Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2 High-Level Programming Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Program Debugging and Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.3.1 Tracing via Classical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.3.2 Assertion via Quantum Property Testing . . . . . . . . . . . . . . . . . . . . . . . 85
5.3.3 Proofs via Formal Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.4 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

6 Circuit Synthesis and Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91


6.1 Synthesizing Quantum Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.1.1 Choice of Universal Instruction Set . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.1.2 Exact Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.1.3 Approximate Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.1.4 Higher-Dimensional Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.2 Classical vs. Quantum Compiler Optimization . . . . . . . . . . . . . . . . . . . . . . . 105
6.3 Gate Scheduling and Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.3.1 Primary Constraints in Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.3.2 Scheduling Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.3.3 Highlight: Gate Teleportation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.4 Qubit Mapping and Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.4.1 Finding a Good Qubit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.4.2 Strategically Reusing Qubits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.4.3 Highlight: Uncomputation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.5 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

7 Microarchitecture and Pulse Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


7.1 From Gates to Pulses–An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1.1 General Pulse Compilation Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2 Quantum Controls and Pulse Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.2.1 Open-Loop vs. Closed-Loop Control . . . . . . . . . . . . . . . . . . . . . . . . 129
7.3 Quantum Optimal Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.3.1 Highlight: Compilation for Variational Algorithms . . . . . . . . . . . . . 131
7.4 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

8 Noise Mitigation and Error Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


8.1 Characterizing Realistic Noises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8.1.1 Measurements of Decoherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
8.1.2 Quantum-State Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.1.3 Randomized Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.2 Noise Mitigation Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.2.1 Randomized Compiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.2.2 Noise-Aware Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.2.3 Crosstalk-Aware Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.3 Quantum Error Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3.1 Basic Principles of QEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3.2 Stabilizer Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
8.3.3 Transversality and Eastin–Knill Theorem . . . . . . . . . . . . . . . . . . . . . 146
8.3.4 Knill’s Error Correction Picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.4 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

9 Classical Simulation of Quantum Computation . . . . . . . . . . . . . . . . . . . . . . . . 151


9.1 Strong vs. Weak Simulation: An Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
9.1.1 Distance Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
9.2 Density Matrices: The Schrödinger Picture . . . . . . . . . . . . . . . . . . . . . . . . . . 154
9.3 Stabilizer Formalism: The Heisenberg Picture . . . . . . . . . . . . . . . . . . . . . . . . 156
9.4 Graphical Models and Tensor Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.5 Summary and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

10 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

Authors’ Biographies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203



Preface
Quantum computing is at a historic time in its development and there is a great need for research
in quantum computer systems. This book stems from a course we co-taught in 2018 and the re-
search efforts of the EPiQC NSF Expedition in Computing and others. Our goal is to provide
a broad overview of some of the emerging research areas in the development of practical com-
puting systems based upon emerging noisy intermediate-scale quantum hardware. It is our hope
that this book will encourage other researchers in the computer systems community to pursue
some of these directions and help accelerate real-world applications of quantum computing.
Despite the impressive capability of today’s digital computers, there are still some compu-
tational tasks that are beyond their reach. Remarkably, some of those tasks seem to be relatively
easy with a quantum computer. Over the past four decades or so, our understanding of the
theoretical power of quantum computation and our skills in quantum engineering have advanced significantly.
Small-scale prototypes of programmable quantum computers are emerging from academic and
industry labs around the world. This is undoubtedly an exciting time, as we may be soon fortunate
enough to be among the first to witness the application of quantum computers on problems that
are unfeasible for today’s classical computers. What has been truly remarkable is that the field
of quantum information science has brought scientists together across disciplines—physicists,
electrical engineers, computer architects, and theorists, just to name a few.
Looking back at the historical progress in digital computers, we remark upon the three
major milestones that led to the integration of millions of computational units that make up the
computing power in today’s computers: low-cost integrated circuit technology, efficient archi-
tectural design, and interconnected software ecosystem. It is not too unrealistic to assume that
the evolution of quantum computers will follow a similar trajectory; we are starting to see some
innovations in hardware, software, and architecture designs that have the potential to scale up
well. The progress and prospects of this new paradigm of computing have motivated us to write
this Synthesis Lecture, which hopefully can bring together more and more computer scientists
and engineers to join the expedition to practical-scale quantum computation.
This introduction to quantum computer systems should primarily appeal to computer sys-
tems researchers, software engineers, and electrical engineers. The focus of this book is on sys-
tems research for noisy intermediate-scale quantum (NISQ) computers, highlighting the recent
progress and addressing the near-term challenges for realizing the computational power of QC
systems.
Reading This Book
The aim of this book is to provide computer systems researchers and engineers with an introduc-
tory guide to the general principles and challenges in designing practical quantum computing
systems. Compared to its predecessor in the series, Quantum Computing for Computer Archi-
tects by Metodi, Faruque, and Chong [1], this book targets near-term progress and prospects of
quantum computing. Throughout the book, we emphasize how computer systems researchers
can contribute to this exciting emerging field. As such, the structure of this book is as follows.
Chapter 2 reviews the central concepts in quantum computation, compares and contrasts with
those of classical computation, and discusses the leading technologies for implementing qubits.
Chapter 3 summarizes the general features in quantum algorithms and reviews some of the
important NISQ applications.
The second part of the book starts in Chapter 4 with an overview of the quantum ar-
chitectural vertical stack and the cross-cutting themes that enable synergy among the different
disciplines in the field. The rest of the book illuminates the opportunities in quantum com-
puter systems research, broadly split into five tracks: (i) Chapter 5 describes existing quantum
programming languages and techniques for debugging and verification; (ii) Chapter 6 intro-
duces important quantum compilation methods including circuit optimization and synthesis;
(iii) Chapter 7 dives into low-level quantum controls, pulse generation, and calibration; (iv) a
number of noise mitigation and error correction techniques are reviewed in Chapter 8; (v) Chap-
ter 9 discusses different methods in classical simulations of quantum circuits and their implica-
tions; and (vi) a summary of progress and prospects of quantum computer systems research can
be found in Chapter 10.
The reader is encouraged to start with the Summary and Outlook section in some chap-
ters for a quick overview of fundamental concepts, highlights of state-of-the-art research, and
discussions of future directions.

Yongshan Ding and Frederic T. Chong


Chicago, June 2020

Acknowledgments
Our views in the book are strongly informed by ideas formed from discussions with Yuri Alex-
eev, Kenneth Brown, Chris Chamberland, Isaac Chuang, Andrew Cross, Bill Fefferman, Diana
Franklin, Alexey Gorshkov, Hartmut Haeffner, Danielle Harlow, Aram Harrow, Henry Hoff-
man, Andrew Houck, Ali Javadi-Abhari, Jungsang Kim, Peter J. Love, Margaret Martonosi,
Akimasa Miyake, Chris Monroe, William Oliver, John Reppy, David Schuster, Peter Shor,
Martin Suchara, members of the EPiQC Project (Enabling Practical-scale Quantum Com-
putation, an NSF Expedition in Computing), and members of the STAQ Project (Software-
Tailored Architecture for Quantum co-design). Thanks are extended to the students who took
the 2018 course on quantum computer systems for their helpful lecture scribing notes: Anil
Bilgin, Xiaofeng Dong, Shankar G. Menon, Jean Salac, and Lefan Zhang, among others.
Thanks to Morgan & Claypool Publishers for making the publication of this book possi-
ble. Many thanks to Michael Morgan, who invited us to write on the subject, for his patience
and encouragement. Thanks also to our Synthesis Lecture series editors Natalie Enright Jerger
and Margaret Martonosi, who shepherded this project to its final product. YD and FTC are
grateful to Frank Mueller and the anonymous reviewers for providing in-depth comments and
suggestions on the original manuscript. Thanks to Sara Kreiman for her thorough copyedit of
the book.
YD has learned a tremendous amount from his advisor FTC, and is very grateful for
FTC’s mentorship in quantum information science research and education. YD also thanks
Ryan O’Donnell, who first introduced him to the field of quantum computation and informa-
tion. YD worked on this book while visiting the Massachusetts Institute of Technology. YD es-
pecially thanks Isaac Chuang, Aram Harrow, and Peter Shor for the many inspiring discussions
during his visit. YD thanks all of his colleagues, friends, and relatives for their encouragement
and support in writing and finishing the book, especially Meizi Liu, and YD’s parents, Genlin
Ding and Shuowen Feng.
Finally, YD and FTC gratefully acknowledge the support from the National Science
Foundation, specifically by EPiQC, an NSF Expedition in Computing, under grants CCF-
1730449, in part by STAQ, under grant NSF Phy-1818914, and in part by DOE grants DE-
SC0020289 and DE-SC0020331.

Yongshan Ding and Frederic T. Chong


Chicago, June 2020

List of Notations
The nomenclature and notations used in this book may be unfamiliar to many readers and may
have different meanings in a different context. We devote this section to clarifying some of the
conventions this book uses to prevent confusion.

Systems Terminology

• Adiabatic quantum computing is a model of analog quantum computing where a quantum
system remains in its ground energy state.

• Analog quantum computing (AQC) is a model of quantum computation such that the
state of a quantum system is evolved smoothly.

• Boolean circuit is a model of classical computation that expresses computation by sending
data through a combination of logic gates.

• FT refers to being fault tolerant; a fault-tolerant quantum computer relies on quantum
error correction.

• The gate scheduling problem is to design an ordering or synchronization of quantum
gates to be applied to the qubits in the target architecture, under constraints such as
data dependencies, parallelism, communication, and noise.

• Hamiltonian refers to the mathematical representation of the energy configuration of
a physical system. It is commonly used as a linear algebraic operator in quantum
mechanics.

• Host processor is an abstraction that refers to the classical computer that controls the
processes in quantum computer systems.

• Quantum annealing is a model of analog quantum computing wherein the quantum
system interacts with the thermal environment.

• Lambda calculus is a model of classical computation based on functional expressions
using variable binding and substitution.

• Measurement-based quantum computing (MBQC) is a model of computation that
performs computation via only measurements on qubits previously initialized to a
cluster state.
• A NISQ computer refers to a noisy intermediate-scale quantum computer.
• Turing machine is a model of classical computation for abstract computing machines
based on manipulating data sequentially on a strip of tape following a set of rules.
• Quantum compiling refers to the framework for efficiently implementing a given quan-
tum program or target unitary to high precision, using gates from a set of primitive
instructions supported in the underlying quantum architecture.
• Quantum communication is a branch of quantum technology wherein entangled qubits
are used to encrypt and transmit data.
• Quantum circuit synthesis refers to the technique that constructs a gate out of a series of
primitive operations.
• Quantum device topology (or device connectivity) describes the layout of the physical
qubits and the allowed direct interactions between any pair of qubits.
• Quantum logic gates (or qubit operations or quantum instructions) are transformations to
be applied to qubits, represented by unitary matrices.
• The qubit mapping problem aims to find an optimal mapping from the qubit registers
in a quantum program to the qubits in the target architecture, under constraints such
as system size, data dependencies, communication, ancilla reuse, and noise.
• Quantum processing unit (QPU) refers to a hardware component that implements qubits
as well as the control apparatus.
• A quantum program is an abstraction that refers to the sequence of instructions and
control flow that a quantum computer must follow according to a protocol or an algo-
rithm.
• Quantum sensing is a branch of quantum technology that takes advantage of quantum
coherence to perform measurements of physical quantities.
• Quantum simulation is a branch of quantum technology that studies the structures and
properties of electronic or molecular systems.
• Schoelkopf ’s law is an empirical scaling projection for quantum decoherence—delayed
by a factor of 10 roughly every three years.
• The von Neumann architecture is a stored-program computer architecture that controls
instruction fetch and data operations via a common system computer bus.
Linear Algebra and Probability in Quantum Computing

• The basis of a qubit is a set of linearly independent vectors that span the Hilbert space.
The two most common bases for single qubits are the computational basis ($z$ basis):
$$\{|0\rangle, |1\rangle\} \equiv \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\},$$
and the Fourier basis ($x$ basis):
$$\{|+\rangle, |-\rangle\} \equiv \left\{ \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix} \right\}.$$

• The Bloch sphere is a visualization of the single-qubit Hilbert space $\mathcal{H}$ in three-dimensional
Euclidean space $\mathbb{R}^3$:
$$\rho(x, y, z) = \frac{1}{2}\left(I + x\sigma_x + y\sigma_y + z\sigma_z\right).$$

• The bra vector is the conjugate transpose of a ket vector:
$$\langle\psi| = \begin{pmatrix} \alpha^* & \beta^* \end{pmatrix}.$$

• A cluster state is a quantum state defined by a graph, where the nodes in the graph are
qubits initialized to the $|+\rangle$ state, and the edges are controlled-Z gates between the qubits.

• A complex number $z \in \mathbb{C}$ is a number of the form $a + bi$, where $a, b$ are real numbers
and $i$ is the imaginary unit satisfying $i^2 = -1$. $a$ is called the real part, and $b$ is called
the imaginary part of $z$. The conjugate of $z$ is $z^* = a - bi$.

• The conjugate transpose of a matrix $M$ is denoted $M^\dagger$, whose matrix elements are
$[M^\dagger]_{ij} = [M]_{ji}^*$.

• An EPR pair refers to two qubits in the quantum state $|\mathrm{epr}\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$.

• The common mapping is $|0\rangle$ for the ground energy state, and $|1\rangle$ for the first excited
energy state. In the context of the physical implementation of a qubit, the computational
basis corresponds to the discrete energy levels.

• A complex square matrix $H$ is Hermitian if its complex conjugate transpose $H^\dagger$ is equal
to itself: $H^\dagger = H$.

• The Hilbert space $\mathcal{H}$ is a complex inner product space in which an $n$-qubit quantum state
is a $2^n$-dimensional vector of complex entries.

• The inner product of two quantum states $|\psi\rangle = \sum_j \alpha_j |j\rangle$ and $|\phi\rangle = \sum_k \beta_k |k\rangle$ is
$\langle\psi|\phi\rangle = \sum_i \alpha_i^* \beta_i$.

• An identity matrix $I$ is a matrix with 1 along the diagonal and 0 everywhere else.

• For any real number $p \geq 1$, the $\ell_p$ norm of a vector $x = (x_1, \ldots, x_n)$ is defined as
$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.$$

• $e^M$ and $\exp(M)$ are notations for the matrix exponential, which is defined as
$$e^M = \sum_{k=0}^{\infty} \frac{1}{k!} M^k.$$

• A mixed quantum state or density matrix is a probability ensemble of pure quantum
states: $\rho = \sum_i p_i |\psi_i\rangle\langle\psi_i|$.

• The Pauli matrices are
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

• A probability distribution refers to a finite set of non-negative real numbers $p_i$ that sum
to 1: $p_i \geq 0$ and $\sum_i p_i = 1$.

• A quantum channel is a linear mapping from one mixed state to another mixed state:
$\rho \to \mathcal{E}(\rho)$.

• Quantum states are represented by (column) vectors in the Hilbert space using Dirac's
ket vector notation:
$$|\psi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.$$

• $\mathrm{sgn}(x)$ is the sign of the number $x$.

• The tensor product of two quantum states $|\psi\rangle = \sum_j \alpha_j |j\rangle$ and $|\phi\rangle = \sum_k \beta_k |k\rangle$ is
$|\psi\rangle \otimes |\phi\rangle = \sum_{j,k} \alpha_j \beta_k \, (|j\rangle \otimes |k\rangle)$.

• The trace of a matrix $A$ is the sum of its diagonal elements, $\mathrm{tr}(A) = \sum_i A_{ii} = \sum_i \langle e_i | A | e_i \rangle$,
where $|e_i\rangle$ is the basis vector with 1 at the $i$-th index and 0 everywhere else.

• A complex square matrix $U$ is unitary if its complex conjugate transpose $U^\dagger$ is also its
inverse:
$$U^\dagger U = U U^\dagger = I.$$

• The state, or wave function, of a qubit can be written as a linear combination of basis
states; several of the objects in this list are illustrated numerically in the short sketch that follows.
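To make these definitions concrete, the short NumPy sketch below builds a few of the objects above
and checks their basic properties; it is purely illustrative, and all variable names are our own choice
rather than standard notation.

```python
import numpy as np

# Computational basis states |0>, |1> and the Fourier-basis state |+>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# Pauli matrices sigma_x, sigma_y, sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A generic single-qubit state |psi> = alpha|0> + beta|1> (normalized).
alpha, beta = 0.6, 0.8j
psi = alpha * ket0 + beta * ket1

# Inner product <psi|+>; np.vdot conjugates its first argument, i.e., forms the bra.
print("<psi|+> =", np.vdot(psi, plus))

# Tensor products: the EPR pair (|00> + |11>)/sqrt(2) is a 4-dimensional vector.
epr = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print("EPR pair:", epr)

# Density matrix of the pure state |psi>; its trace is 1 for a normalized state.
rho = np.outer(psi, psi.conj())
print("tr(rho) =", np.trace(rho).real)

# Unitarity check: U^dagger U = I, here for the Hadamard matrix.
Hgate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print("H unitary:", np.allclose(Hgate.conj().T @ Hgate, np.eye(2)))

# Bloch sphere: the point (x, y, z) = (0, 0, 1) gives rho = (I + sigma_z)/2 = |0><0|.
rho_north = (np.eye(2) + sz) / 2
print("rho(0,0,1) == |0><0|:", np.allclose(rho_north, np.outer(ket0, ket0)))
```

Running the script prints a trace of 1, confirms that the Hadamard matrix is unitary, and verifies
that the Bloch vector (0, 0, 1) corresponds to the density matrix $|0\rangle\langle 0|$.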
PART I

Building Blocks

CHAPTER 1

Introduction
Just 40 years ago, the connection between computer science and quantum mechanics was made.
For the first time, scientists thought to build a device to realize information processing and com-
putation using the extraordinary theory that governs the particles and nuclei that constitute our
universe. Since then, we find ourselves time and again amazed by the potential computing power
offered by quantum mechanics as we understand more and more about it. Some problems that
were previously thought to be intractable now have efficient solutions with a quantum computer.
This potential advantage stems from the unconventional approach that a quantum computer
uses to store and process information. Unlike traditional digital computers that represent two
states of a bit with the on and off states of a transistor switch, a quantum computer exploits its
internal states through special quantum mechanical properties such as superposition and entan-
glement. For example, a quantum bit (qubit) lives in a combination of the 0 and 1 states at the
same time. Astonishingly, these peculiar properties offer new perspectives to solving difficult
computational tasks. This chapter is dedicated to a high-level overview of the rise of quantum
computing and its disruptive impacts. More importantly, we highlight the computer scientists’
role in the endeavor to take quantum computing to practice sooner.

1.1 THE BIRTH OF QUANTUM COMPUTING


Paul Benioff began research on the theoretical possibility of building a quantum computer in the
1970s, resulting in his 1980 paper on quantum Turing machines [2]. His work was influenced
by the work of Charles Bennett on classical reversible Turing machines from 1973 [3].
In 1982, the Nobel-winning physicist Richard Feynman famously imagined building a
quantum computer to tackle problems in quantum mechanics [4, 5]. The theory of quantum
mechanics aims to simulate material and chemical processes by predicting the behavior of the
elementary particles involved, such as the electrons and the nuclei. These simulations quickly
become unfeasible on traditional digital computers, which simply cannot model the staggering
number of all possible arrangements of electrons in even a very small molecule. Feynman then
turned the problem around and proposed a simple but bold idea: why don’t we store information
on individual particles that already follow the very rules of quantum mechanics that we try to
simulate? He remarked:

“If you want to make a simulation of nature, you’d better make it quantum mechanical,
and by golly it’s a wonderful problem, because it doesn’t look so easy.”
The idea of quantum computation was made rigorous by pioneers including David
Deutsch [6, 7] and David Albert [8]. Since then, the development of quantum computing has
profoundly altered how physicists and chemists think about and use quantum mechanics. For in-
stance, by inventing new ways of encoding a quantum many-body system as qubits on a quantum
computer, we gain insights on the best quantum model for describing the electronic structure
of the system. It gives rise to interdisciplinary fields like quantum computational chemistry. As
recent experimental breakthroughs and theoretical milestones in quantum simulation are made,
we can no longer talk about how to study a quantum system without bringing quantum com-
putation to the table.

1.1.1 THE RISE OF A NEW COMPUTING PARADIGM


For computer scientists, the change that quantum computing brings has also been nothing short
of astounding. It is so far the only new model of computing that is not bounded by the extended
Church–Turing thesis [9, 10], which states that all computers can only be polynomially faster
than a probabilistic Turing machine. Strikingly, a quantum computer can solve certain compu-
tational tasks drastically more efficiently than anything ever imagined in classical computational
complexity theory.
It was not until the mid-1990s that the power of quantum computing became fully
appreciated. In 1993, Bernstein and Vazirani [9] demonstrated a quantum algorithm with expo-
nential speedup over any classical algorithms, deterministic or randomized, for a computational
problem named recursive Fourier sampling. Many more astonishing discoveries followed. In
1994, Dan Simon [10] showed another computational problem for which a quantum computer has
an exponential advantage over any classical computer.
Then in the same year, Peter Shor [11, 12] discovered that more problems, namely fac-
toring large integers and solving discrete logarithms, also have efficient solutions on a quantum
computer, far more so than any classical algorithms that are ever known. The implication of this
discovery is breathtaking. Existing cryptographic codes encrypt today’s private network commu-
nications, data storage, and financial transactions, relying on the fact that prime factorization
for sufficiently large integers is so difficult that the most powerful digital supercomputers could
take thousands or millions of years to compute. But the security of our private information could
be under threat, should a quantum computer capable of running Shor’s algorithm be built.
In 1996, another algorithm by Lov Grover was discovered [13]. Once again, a quantum
algorithm was shown to provide an improvement over classical algorithms, and in this case Grover's
algorithm exhibits quadratic speedup for the problem of unstructured database search in which we
are given a database and aim to find some marked items. For example, given an unordered set
$S$ of $N$ elements, we want to find where $x \in S$ is located in the set. Classically, we need $O(N)$
accesses to the database in the worst case, while quantumly, we can do it with $O(\sqrt{N})$ accesses.
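As a rough numerical illustration of this quadratic scaling (a sketch of our own in NumPy, following
the standard Grover construction with a phase-flip oracle and a reflection about the mean; it is not
an efficient search, since it manipulates the full state vector), the code below counts how many oracle
calls, roughly $(\pi/4)\sqrt{N}$, are needed before the marked item dominates the outcome probability.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate textbook Grover iterations on a dense 2**n_qubits state vector."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition over N items
    oracle = np.eye(N)
    oracle[marked, marked] = -1                  # oracle phase-flips the marked item
    s = np.full((N, 1), 1 / np.sqrt(N))
    diffusion = 2 * (s @ s.T) - np.eye(N)        # reflection about the mean amplitude
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)
    return iterations, np.abs(state[marked]) ** 2

for n in (4, 8):
    iters, p_success = grover_search(n, marked=3)
    print(f"N = {2**n:4d}: {iters:2d} oracle calls, success probability = {p_success:.3f}")
```

For $N = 16$ this uses 3 oracle calls and for $N = 256$ about 13, close to $(\pi/4)\sqrt{N}$, with success
probability near 1 in both cases, whereas a classical search would need on the order of $N$ accesses.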
These are just a few examples of quantum algorithms that have been discovered. When
implemented appropriately on a quantum computer, they offer efficient solutions to problems
that seem to be intractable in the classical computing paradigm.
Building a quantum computer is, however, extremely challenging. When the idea was first
proposed, no one knew how to build such powerful computers. To realize the computational
power, we must learn to coherently control and manipulate highly-entangled, complex, physical
systems to near perfection.
In the last 30 years or so, technologies for manufacturing quantum chips have significantly
advanced. Today, we are at an exciting time where small- and intermediate-scale prototypes have
been built. It marks a new era for quantum computing, as John Preskill, a long-time leader in
quantum computing at Caltech, puts it, “we have entered the Noisy Intermediate-Scale Quantum
(NISQ) era,” [14] in which quantum computing hardware is becoming large and reliable enough
to perform small useful computational tasks. Research labs from both academia and industry,
domestic and abroad, are now eager to experimentally demonstrate a first application of quantum
computers to some real-world problems that any classical computers would have a hard time
solving efficiently.

1.1.2 WHAT IS A QUANTUM COMPUTER?


In a nutshell, a quantum computer is a computing device that stores information in objects called
quantum bits (or qubits) and transforms them by exploiting certain very special properties from
quantum mechanics. Despite the peculiarity in the behavior of quantum mechanical systems
(e.g., particles at very small energy and distance scales), quantum mechanics is one of the most
celebrated and well-tested theories for explaining those behaviors. Remarkably, the non-intuitive
properties and transformations in quantum systems carry significant computational consequences,
as they allow a quantum computer to operate on an exponentially large computational space.
In contrast, a traditional digital computer stores information in a sequence of bits, each of
which takes two possible values, 0 or 1, represented by the on and off of a transistor switch, for
example. To manipulate information, it sends an input sequence of bits in the form of electrical
signals through integrated circuits (IC) to produce another sequence of bits. This process is
deterministic and fast, thanks to the advanced technologies in IC fabrication. Computers today
can execute billions of instructions per second, without worrying about experiencing an error for
billions of device hours.

1.2 MODELS OF QUANTUM COMPUTATION


The approaches to quantum computing (QC) can be roughly split into three main categories:
(i) analog QC, (ii) digital gate-based QC, and (iii) measurement-based QC.
1.2.1 ANALOG MODEL
In analog QC [15, 16], one gradually evolves the state of a quantum system using quantum
operations that smoothly change the system such that the information encoded in the final
system corresponds to the desired answer with high probability. When the quantum system
is restricted to evolve slowly and remains in its ground energy state throughout the evolution,
then this approach is typically referred to as “adiabatic quantum computing” [17, 18]. When
the restriction is lifted and the system is allowed to interact with the thermal environment, it is
referred to as “quantum annealing” [19, 20]. This analog approach is sought after by companies
including D-Wave systems, Google, and others. However, whether or not existing quantum
annealing devices achieve universal quantum computation or any quantum speedup remains
unclear.

1.2.2 GATE-BASED MODEL


In digital QC, information is encoded onto a discrete and finite set of quantum bits (qubits),
and quantum operations are broken down to a sequence of a few basic quantum logic gates.
We obtain the correct answer with high probability from the digital measurement outcomes
of the qubits. A digital QC is typically more sensitive to noise from the environment than an
analog QC. For instance, qubit decoherence is usually considered undesirable in digital QC ex-
cept sometimes during initialization and measurement, whereas in adiabatic QC, decoherence
helps the system relax to the ground energy state [17, 21]. In the NISQ era, noise including
qubit decoherence, imprecise control, and manufacturing defects has non-negligible detrimental
effects and can accumulate when running long quantum algorithms, necessitating noise mitiga-
tion techniques to protect the information during the computation. These devices are called the
“NISQ digital quantum computers.” In principle, the discretization of information allows for
the discretization of errors and use of redundancy to encode information, which give rise to the
use of quantum error correction (QEC) to achieve system-level fault tolerance. However, the
overheads of conventional QEC approaches are found to be prohibitive in the near term. Devices
that implement QEC are called “fault-tolerant quantum computers.” Throughout the remainder
of the book, the discussion will be centered around NISQ digital QC. Nonetheless, the general
principles and techniques introduced here are applicable to all types of quantum computers. We
refer the interested readers to a number of pertinent textbooks, reviews, and theses [22–29].
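Before turning to the measurement-based model, here is a minimal numerical illustration of the
gate-based model just described (a NumPy sketch of our own, not tied to any particular hardware or
software stack, with an arbitrary random seed): two basic gates prepare an entangled two-qubit state,
and repeated measurement produces digital 0/1 outcomes with the probabilities given by the state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gate-based model in miniature: a Hadamard and a CNOT prepare the Bell state
# (|00> + |11>)/sqrt(2); measuring both qubits yields digital 0/1 outcomes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = CNOT @ (np.kron(H, np.eye(2)) @ state)    # H on qubit 0, then CNOT

probs = np.abs(state) ** 2                        # Born rule: outcome probabilities
shots = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)
print({outcome: int((shots == outcome).sum()) for outcome in ["00", "01", "10", "11"]})
```

Out of 1,000 shots, roughly half return 00 and half return 11; this digital, probabilistic readout is
exactly what the gate-based model produces, and on a NISQ device the same experiment would also
show a small fraction of erroneous 01 and 10 outcomes.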

1.2.3 MEASUREMENT-BASED MODEL


One example of measurement-based quantum computation (MBQC) is the cluster state
model—see a short review in [30]. In this model of quantum computation, one initializes a
number of qubits in the cluster state. The cluster state is represented by a graph, in which each
node is a qubit initialized in $|+\rangle$ and each edge denotes a controlled-Z gate. The graph can have
any topology, e.g., a 1-D chain, or a 2-D grid. The computation process involves measuring (in
some measurement basis) some of the qubits in the cluster state.

[Figure 1.1: A QPU (quantum processing unit) and how it interacts with classical computers. A host
PC (CPU and main memory, running compilation and optimization) issues classical and quantum
instructions; the QPU's system controller executes gates on the quantum chip (qubits), reads out
measured data and detected noise, and feeds the results back for validation.]

Some of the measurements are possibly conditioned on the outcomes of previously measured
qubits. The key observation here
is that each measurement equivalently accomplishes a quantum gate due to gate teleportation.
The output of the computation is the measurement bit-string outcome and the remaining state
of the qubits that are not measured. It is shown that this is a universal quantum computation
model.
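To see this gate-teleportation mechanism on the smallest possible example, the sketch below (our own
NumPy illustration of the standard one-bit teleportation identity, with illustrative variable names)
entangles an input qubit with a $|+\rangle$ ancilla via a controlled-Z gate, measures the input in the
X basis, and checks that the unmeasured qubit is left in $X^m H|\psi\rangle$ for measurement outcome $m$.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])              # controlled-Z on two qubits
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def one_bit_teleport(psi, outcome):
    """Entangle |psi>|+> with CZ, measure qubit 1 in the X basis (outcome m),
    and return the post-measurement state of qubit 2, which equals X^m H |psi>."""
    state = CZ @ np.kron(psi, plus)              # qubit 1 = input, qubit 2 = ancilla
    bra = plus if outcome == 0 else minus        # X-basis outcome <+| (m=0) or <-| (m=1)
    qubit2 = np.kron(bra, np.eye(2)) @ state     # apply <m|_1 (x) I_2 to the pair
    return qubit2 / np.linalg.norm(qubit2)       # renormalize the leftover qubit

psi = np.array([0.6, 0.8])                       # an arbitrary (real) input state
for m in (0, 1):
    out = one_bit_teleport(psi, m)
    expected = np.linalg.matrix_power(X, m) @ (H @ psi)
    print(f"m = {m}: output equals X^m H|psi> ->", np.allclose(out, expected))
```

Choosing the measurement basis differently implements other single-qubit rotations (up to Pauli
corrections), which is how chains of such measurements on a larger cluster state build up arbitrary
computations.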
The focus of this book is on the gate-based model; we present the other models of computation
here for completeness, but their details are outside the scope of this book.

1.3 A QPU FOR CLASSICAL COMPUTING


Quantum computing hardware is currently envisioned to be a hardware accelerator for classical
computers, as shown in Figure 1.1. To some extent, it is like a processing unit specialized in
dealing with quantum information, in a way similar to a GPU (graphics processing unit) that
specializes in numerical acceleration of kernels, including creation of images for display. For
this reason, the QC hardware is referred to as a QPU (quantum processing unit). Unlike a GPU,
which can perform arithmetic logic and data fetching at the same time, a QPU does not fetch
data or instructions on its own. A host processor controls every move of the QPU. Let us now
dive deeper into the architectural design of a QPU.
[Figure 1.2: Architectural designs of classical vs. quantum computers. The abstraction layers for
1950s classical computing, today's classical computing, and quantum computing are compared: each
stack runs from algorithms down to devices (vacuum tubes, transistors, and qubits, respectively), with
the quantum stack including quantum DSLs, compilation, circuit and unitary synthesis, pulse shaping,
noise mitigation, and error correction.]

It is often misunderstood that a quantum computer is going to replace all classical digital
computers. A quantum computer should never be viewed as a competitor with a classical
computer. In fact, classical processing and classical control play vital roles in quantum computing.
On one hand, a quantum algorithm generally involves classical pre- or post-processing. On the
other hand, efficient classical controls are needed for running the algorithm on hardware. As
such, a better way of regarding the QC hardware is as a co-processor or an accelerator, that is a
QPU, as opposed to direct replacements of classical computers.

1.3.1 ARCHITECTURAL DESIGN OF A QPU


A quantum computer implements a fundamentally different model of computation than a mod-
ern classical computer does. It would be surprising if the exact design of a computer architecture
would extend well to a quantum computer [31, 32]. As shown in Figure 1.2, the architectural
design of a quantum computer resembles that of a classical computer in the 1950s where de-
vice constraints are so high that full-stack sharing of information is required from algorithms to
devices. In time, as technology advances and resources become abundant, a quantum computer
perhaps will adapt to the modularity and layering models as seen in classical architectures. But
in the short term, as long as the NISQ era lasts, it is premature to copy the abstraction layers of
today’s conventional computer systems to a quantum system.
Furthermore, quantum information processing is fundamentally different from what com-
puter engineers are used to. For instance, for conventional computers, engineers go to great
lengths in minimizing the noises caused by quantum mechanics in the transistor components.
Rather than suppressing its effects, a quantum computer harnesses the power of quantum me-
chanics. As such, the control apparatus for a quantum computer would look drastically different
from that of a conventional computer.
In reality, for successful operation, a quantum computer must implement a well-isolated
physical system that encodes a sufficiently large number of qubits, and controls these qubits with
extremely high speed and precision in order to carry out computation. The rest of the section
describes at a high level the key components in a fully functional quantum computer architecture.
The development of digital quantum computers, for both the NISQ and FT eras, still faces
challenges, which include reliably addressing and controlling qubits and correcting errors.
[Figure 1.3: Quantum computation is one of the promising technologies made for harnessing the
power of quantum systems. The figure places the four quantum technologies (quantum communication,
quantum computation, quantum simulation, and quantum sensing) alongside the quantum computer
systems stack (quantum algorithms, quantum programming languages, quantum compilation, unitary
synthesis, microarchitecture and pulse control, and quantum hardware).]

The control complexity becomes overwhelming as the number of qubits scales up, necessitating
system-level automation to guarantee the successful execution of quantum programs [33]. As
such, classical computers are needed to control and assist the quantum processor. Quantum
computers are generally viewed as co-processors or accelerators of classical computers, as shown
in Figure 1.1.
To some extent, the quantum computer architecture illustrated above arguably resembles
in-memory processing or reconfigurable computing architectures. As shown in Figure 1.1, inside
a QPU, quantum data are implemented by physical (quantum mechanical) objects such as atoms
while quantum gates are control signals such as lasers acting on the data—this “gates-go-to-data”
model of computation motivates a control unit close to the quantum data and an interface that
talks frequently with the quantum memory and the classical memory.

1.4 QUANTUM TECHNOLOGIES


The broad field of quantum technology encompasses more than just quantum computation; it
can be roughly divided into four domains shown in Figure 1.3. (i) In quantum computation,
quantum systems are carefully isolated and controlled to store and transform information in a
way that is promised to be drastically more efficient than classical digital computers. (ii) Quantum
communication [34–39] aims to use entangled photons to encrypt and transmit data securely.
(iii) To study the structure and properties of electronic systems, quantum simulation [4, 40–
46] maps the problem to a well-defined, controlled quantum system to mimic the behavior of
quantum systems of interest. (iv) Quantum sensing [47–49] uses quantum coherence to improve
precision measurements of physical quantities. Each domain has its own focus, yet one can
usually benefit from the techniques developed for another. Although mainly about quantum
computation, this book will extend its discussions from time to time to the other domains of
quantum technologies.

1.5 A ROAD MAP FOR QUANTUM COMPUTERS


Today’s quantum computers resemble, in many aspects, the digital computers we had in the
1950s. They are large in physical size, limited in the number of computing units, expensive to
build, and demanding in power. The machines we build in the NISQ era will be equipped with
50–1000 qubits and are capable of performing operations with error rates around $10^{-3}$ or $10^{-4}$.
The state-of-the-art quantum gate error rate is around $10^{-2}$. As a result, when programming
for a quantum computer, we have no choice but to optimize every bit of the limited resources.
When qubits are not only short-lived but also limited in number and when quantum logic gates
are noisy, every variable and every instruction in a quantum program matter.
In fact, we have been here before. After all, this is where we started for classical digital
computers. The introduction of integrated circuits (IC) in the 1960s laid the groundwork
for the impressive performance growth of contemporary digital computers. In 1965, Gordon
Moore accurately projected the exponential growth in the number of transistors per integrated
circuit based on the cost of IC fabrication, now known as Moore's Law. After half a century
of investment and development in hardware, architecture, and software, we have built a com-
puting ecosystem that has deeply changed our society and transformed the way we live, work
and communicate [50].
Many believe that similar scaling will be achieved for quantum computers. For instance, the reported qubit coherence times (i.e., lifetimes) for superconducting qubits have so far been on track for an encouraging exponential increase, following the so-called Schoelkopf's Law. But whether this scaling will last depends on continued investment in the field of quantum computing, driven not only by our scientific curiosity but also by its economic and social impacts.
Fueled by joint efforts from research institutions and technology companies worldwide, progress in quantum hardware has been impressive. IBM [51, 52] and Google [53] are testing superconducting machines with more than 50 quantum bits (qubits) and providing users with cloud access to their prototypes. Intel [54, 55] is building quantum computers with silicon spin qubits and cryogenic controls. IonQ [56] has announced a 79-qubit "tape-like" trapped-ion quantum computer. Other multinational companies, including Microsoft and Toshiba, are also working toward practical-scale, fully programmable quantum computers. Many others, although not building prototypes themselves, are joining the effort by investing in the field of quantum computing. Machines with up to 100 qubits are around the corner, and even a 1,000-qubit machine appears buildable. John Preskill notes that we are at a "privileged time in the history of science and technology" [14]. Specifically, classical supercomputers cannot simulate quantum machines larger than 50–100 quantum bits. Emerging physical machines will bring us into unexplored territory and will allow us to learn how real computations scale in practice.
Figure 1.4: Status of qubit technologies [57–67], plotting average two-qubit gate error rate against the number of physical qubits for superconducting and trapped-ion devices from roughly 2003 to 2020 (the size of each data point indicates connectivity; larger means denser). Also shown are the regimes required by quantum algorithms, from NISQ-era VQE/QAOA to a fault-tolerant Heisenberg-model simulation (~10³ qubits at ~10⁻⁴ error) and fault-tolerant Shor's and Grover's algorithms (~10⁶ qubits at ~10⁻⁵ error), together with the classically simulable region. The gap between algorithms and realistic machines is evident; breaking abstractions via software-hardware co-design will be key in closing this gap for NISQ computers, hence the overarching theme of this book.

The key to quantum computation is that every additional qubit doubles the computational
space in which the quantum machines operate. However, this extraordinary computing power
is far from fully realized with today’s technology, as the quantum machines will have high error
rates for some time to come. Ideally, in the long term, we would use the remarkable theory of quantum error-correcting codes to support error-free quantum computation. The idea of error correction is to use redundant encoding, i.e., grouping many physical qubits to represent a single, fault-tolerant logical qubit. As a consequence, a 100-qubit machine can only support, for example, 3–5 usable logical qubits. Until qubit resources become much larger, another practical approach in the near term is to explore error-tolerant algorithms and use lightweight error-mitigation techniques. NISQ machines therefore imply living with errors and exploring the effects of noise on the performance and correctness of quantum algorithms.
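To get a feel for these scaling claims, the following back-of-the-envelope Python sketch (ours, not from the text; it assumes one complex128 amplitude of 16 bytes per basis state) shows how much classical memory a full state vector would need:

def state_vector_bytes(n_qubits: int) -> int:
    # 2**n basis states, one 16-byte complex amplitude each
    return 16 * 2 ** n_qubits

for n in (10, 30, 50):
    print(n, "qubits:", state_vector_bytes(n) / 2**30, "GiB")
# 10 qubits: ~1.5e-05 GiB (16 KiB)
# 30 qubits: 16 GiB
# 50 qubits: 16,777,216 GiB (16 PiB), beyond the memory of any classical supercomputer

Doubling the computational space with every added qubit is exactly why classical simulation runs out of steam around 50–100 qubits.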

1.5.1 COMPUTER SCIENCE RESEARCH OPPORTUNITIES


Despite technology advances, there remains a wide gap between the machines we expect and
the algorithms necessary to make full use of their power. In Figure 1.4, we can see the size of
physical machines (in this case trapped ion machines) over time. Ground-breaking theoretical
work produced Shor’s algorithm [12] for the factorization of the product of two primes and
Grover’s algorithm [13] for quantum search, but both would require machines many orders of
magnitude larger than currently practical. This gap has led to a recent focus on smaller-scale,
heuristic quantum algorithms in areas such as quantum simulation, quantum chemistry, and
quantum approximate optimization algorithms (QAOA) [68]. Even these smaller-scale algo-
rithms, however, suffer from a gap of two to three orders of magnitude with respect to recent
machines. Relying solely on technology improvements may require 10–20 years to close even
this smaller gap.
A promising way to close this gap sooner is to create a bridge from algorithms to physical
machines with a software-architecture stack that can increase qubit and gate efficiency through
automated optimizations and co-design [31]. For example, recent work [69] on quantum circuit
compilation tools has shown that automated optimization produces significantly more efficient
results than hand-optimized circuits, even when only a handful of qubits are involved. The ad-
vantages of automated tools will be even greater as the scale and complexity of quantum programs grow. Other important targets for optimization are mapping and scheduling compu-
tations to physical qubit topologies and constraints, specializing reliability and error mitigation
for each quantum application, and exploiting machine-specific functionality such as multi-qubit
operators.
Quantum computing technologies have recently advanced to a point where quantum de-
vices are large and reliable enough to execute some applications such as quantum simulation
of small-size molecules. This is an exciting new era because being able to program and control
small prototypes of quantum computers could lead to the discovery of algorithms for real-world problems that are more efficient than anything imagined in the classical computing paradigm. Public interest, including commercial and military interest, is essential for sustaining substantial support for basic research. Recent discoveries of quantum applications in chemistry, finance, machine learning, and optimization are early evidence of a promising future ahead. Looking forward, we are on track to continue growing the performance of quantum hardware, complemented by the efficient, scalable, and robust software toolflow that it demands [32, 70, 71].

CHAPTER 2

Think Quantumly About Computing
We begin this chapter with a presentation of the intuitions behind quantum information and quantum computation (Section 2.1). These intuitions are then made rigorous with mathematical formulations in Section 2.2.

2.1 BITS VS. QUBITS


In this section, the elements of classical computing are compared and contrasted with those of quantum computing (Section 2.1.1). The introduction to quantum mechanics in this section is tailored for computer scientists, assuming no prior knowledge of physics (Section 2.1.2). A number of architectural implications, arising from the special properties and transformations of this new computing paradigm, are then introduced (Section 2.1.3).

2.1.1 COMPUTING WITH BITS: BOOLEAN CIRCUITS


Part of the learning curve of quantum computing (QC) stems from its unfamiliar nomenclature.
Some is required for expressing the special properties of quantum mechanics, but the rest is
merely a reformulation of what we already know about what an ordinary computer can do. As
such, to prepare the reader for later discussion in QC, we briefly revisit how classical digital
computers work, but in the language and notation used by QC. In particular, we will review
four fundamental concepts in the classical theory of computing: the circuit model, von Neumann
architecture, reversible computation, and randomized computation.

Boolean Circuits
A number of classical models of computation have been developed to describe the components of a computer necessary to compute a mathematical function. Familiar examples include the Turing machine model (a sequential description) and the lambda calculus (a functional description). In this section, we choose to review the Boolean circuit model of computation, which is considered the easiest to extend to the theory of quantum computing. These models, although expressing computability and complexity from different perspectives, are in fact equivalent. Specifically, every function computable by an n-input Boolean circuit is also computable by a Turing machine on length-n inputs, and vice versa. The size of a circuit, defined by the number of gates it uses, is closely related to the running time of a Turing machine.

Figure 2.1: A Boolean circuit implementing the XOR function using a NAND gate, an OR gate, and an AND gate. Lines are wires that transmit signals, and shaped boxes are gates. Signals are copied/duplicated where wires split into two.
In a classical digital computer (under the Boolean circuit model), information is stored
and manipulated in bits—strings of zeros and ones, such as 10011101. The two states of each
bit in the string are represented in the computer by a two-level system, such as charge (1) or no
charge (0) in a memory cell (for storing) and high (1) or low (0) voltage signal in a circuit wire
(for transmitting).
In the "bra-ket" notation invented by Paul Dirac in 1939, the state of a bit is denoted by the symbol |·⟩. So the two-level system can be written as |0⟩ and |1⟩, or |↑⟩ and |↓⟩, or |charge⟩ and |no charge⟩, etc. The length-8 bit string above can thus be written as |1⟩|0⟩|0⟩|1⟩|1⟩|1⟩|0⟩|1⟩, or |10011101⟩ for short. Why is this called the "bra-ket" notation? The symbol |·⟩ is called the "ket" and ⟨·| is called the "bra"; together they form a bracket ⟨·|·⟩. Later, in the linear algebra representation of quantum bits, we will see that kets and bras correspond to column vectors and row vectors, respectively. For now, the reader may regard this notation as pure symbolism; its advantages will become clear once we discuss operations on quantum bits.
Any computation can be realized as a circuit of Boolean logic gates. For example, the circuit in Figure 2.1 computes the XOR function of two input bits, f(x1, x2) = x1 ⊕ x2.
In this classical Boolean circuit, lines are “wires” that transmit signals, and boxes are “gates”
that transform the signals. Signals are copied/duplicated at places where wires split into two. The
above shows one possible implementation of the XOR function with AND, OR, and NAND
gates. It is well known that the NAND gate, along with duplication of wires and use of ancilla
bits (i.e., ancillary input bits typically initialized to 0), is universal for computation. In other
words, any Boolean function is computable by “wiring together” a number of NAND gates.
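As a small illustration of this universality claim, here is a Python sketch (ours; it is not the circuit of Figure 2.1, which also uses AND and OR gates) that builds XOR out of NAND gates and wire duplication alone:

def NAND(a: int, b: int) -> int:
    return 1 - (a & b)

def XOR(x1: int, x2: int) -> int:
    t = NAND(x1, x2)                       # shared intermediate signal (a duplicated wire)
    return NAND(NAND(x1, t), NAND(x2, t))  # the classic four-NAND construction of x1 ⊕ x2

assert [XOR(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]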
The Boolean circuit model is a useful theoretical tool for analyzing what functions can be
efficiently implemented. It is also a convenient tool for computer architects and electrical engi-
neers, as it is close to the physical realization of today's computers. The von Neumann architecture, described next, is one example of how modern computers are designed.
Figure 2.2: The von Neumann Architecture of a classical computer. (The diagram shows input devices and output devices connected to a central processing unit containing a control unit, an ALU, and registers, which in turn connects to a memory unit.)

von Neumann Architecture


In the following, we describe the key components of modern digital computers, as first proposed by John von Neumann in 1945. In his description, a von Neumann architecture computer has the following components, shown in Figure 2.2: (i) a central processing unit (CPU) including an arithmetic logic unit (ALU) and a control unit; (ii) a random-access memory (RAM) that stores data and program instructions; (iii) input and output (I/O) devices; and (iv) external storage.
The instruction set architecture (ISA), serving as the interface between hardware and soft-
ware, defines what a computer natively supports, including data types, registers, memory mod-
els, I/O support, etc. Modern ISAs are commonly classified into two categories: (i) complex instruction set computers (CISC), which support many specialized operations regardless of how rarely they are used in a program (one example is the Intel x86 family of architectures); and (ii) reduced instruction set computers (RISC), which include only a small number of essential operations (one example is the RISC-V architecture [72]).
The CPU realizes (implements) the ISA. While its design can be very complex, the CPU
typically has a control unit that fetches and executes instructions by directing signals accordingly,
and an ALU that performs arithmetic and logic operations on data. Most modern CPUs are
implemented in electric circuitry, as seen in the Boolean circuit model, printed on a flat piece of
semiconductor material, known as an integrated circuit (IC).
Over the past few decades, production costs for ICs have been drastically reduced thanks to advances in technology [50]. We can build transistors, the building blocks of an IC, smaller and smaller, and cheaper and cheaper. The number of transistors that can be economically printed per IC has been growing exponentially over time, approximately doubling every 1.5 years. This trend is referred to as Moore's Law. But, as most believe, this trend is not sustainable, due to both physical limitations and market size. It is expected that within five years the feature size of transistors will bottom out at a few nanometers. As feature sizes approach the atomic scale (also on the order of nanometers), noise from quantum mechanical processes will start to dominate and perturb the system.

Reversible Computation
The study of reversible computing originally arose from efforts to improve computational energy efficiency, led in the 1960s and 1970s by Landauer [73] and Bennett [74]. Quantum computers transform quantum bits reversibly (except for initialization and measurement). The connection between reversible computation and quantum mechanics was discovered by Benioff in the 1980s [2]. As a result, QC benefits a great deal from the study of reversible computing, and vice versa. Later, we will see the roles reversible computing plays in quantum circuits.
According to the second law of thermodynamics, an irreversible bit operation, such as the OR gate, must dissipate energy, typically in the form of heat. Specifically, suppose the output of an OR gate is |1⟩. We cannot infer what the inputs were; they could have been any of |01⟩, |10⟩, or |11⟩. The von Neumann–Landauer limit states that kT ln(2) of energy is dissipated per irreversible bit operation. However, some bit operations are theoretically (logically) "reversible," in the sense that the output state uniquely determines the input state of the operation. For example, the NOT gate is reversible. Flipping the state of a bit from |0⟩ to |1⟩, or vice versa, does not create or erase information in the system. To some extent, reversible also means time-reversible: the transformation done by a reversible circuit can be undone by applying the inverse transformation (which always exists).
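For a rough sense of scale, the following one-line computation evaluates the Landauer bound at room temperature; the constants are standard physical values and the chosen temperature is an assumption of ours, not a figure from this chapter:

import math
k = 1.380649e-23            # Boltzmann constant, in J/K
T = 300.0                   # approximate room temperature, in K
print(k * T * math.log(2))  # ~2.87e-21 J dissipated per irreversible bit operation

Tiny as this number is per bit, it sets a fundamental floor that only reversible (and hence quantum) operations can in principle avoid.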
One could imagine building a computer consisting solely of reversible operations. In analogy to the NAND gate being universal for Boolean logic, is there a reversible gate set that is universal? The answer is yes. To illustrate this, we introduce three example reversible gates: the NOT gate, the CNOT (controlled-NOT) gate, and the Toffoli (controlled-controlled-NOT) gate, all of which are self-inverse (i.e., applying the gate twice returns the bits to their original state). Their Boolean circuit notations and truth tables can be found in Table 2.1. Specifically, the NOT gate negates the state of the input bit. The CNOT gate is a conditional gate: the state of the target bit x2 is flipped if the control bit x1 is |1⟩. It is the reversible version of the XOR gate. The Toffoli gate has two control bits, x1 and x2, and one target bit x3. Similarly, the target bit is flipped if both control bits are |1⟩. The Toffoli gate is particularly handy because it can be used to simulate the NAND gate and the DUPE gate (with the use of ancillas), and it is thus a universal reversible gate.
More formally, we note that the Toffoli gate is universal, in that any (possibly non-reversible) Boolean logic can be simulated with a circuit consisting solely of Toffoli gates, given that ancilla inputs and garbage outputs are allowed.
Table 2.1: Reversible logic gates. The truth table of each gate shows a permutation of bit-strings; the Toffoli gate is universal for reversible computation.

NOT gate: x1 ↦ NOT(x1)
  |0⟩ → |1⟩, |1⟩ → |0⟩

CNOT gate: (x1, x2) ↦ (x1, x1 ⊕ x2)
  |00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, |11⟩ → |10⟩

Toffoli gate: (x1, x2, x3) ↦ (x1, x2, x3 ⊕ AND(x1, x2))
  |000⟩ → |000⟩, |001⟩ → |001⟩, |010⟩ → |010⟩, |011⟩ → |011⟩,
  |100⟩ → |100⟩, |101⟩ → |101⟩, |110⟩ → |111⟩, |111⟩ → |110⟩

Proof of this theorem is omitted here. As such, a generic reversible circuit has the form shown in Figure 2.3.
In this circuit, a Boolean function f: {0,1}ⁿ → {0,1}ᵐ is computed reversibly using only Toffoli gates. All ancilla inputs are initialized to |1⟩ (if needed, a |0⟩ ancilla can be produced as well, because a Toffoli gate acting on |111⟩ gives |110⟩). All garbage bits are discarded at the end of the circuit.
One cannot overemphasize this theorem's implication for quantum computing: as noted before, a quantum computer transforms quantum bits reversibly, so the theorem implies that any Boolean circuit can be transformed into a reversible one, and then into a quantum one, by implementing a quantum Toffoli gate and replacing each bit with a quantum bit. Reversible circuit synthesis is thus a useful tool in designing quantum circuits.
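To make the universality argument concrete, here is a minimal classical Python sketch (ours) showing that the Toffoli gate, with ancilla bits fixed to |1⟩ or |0⟩, reproduces both the NAND gate and the DUPE (fan-out) gate:

def TOFFOLI(x1: int, x2: int, x3: int):
    # (x1, x2, x3) -> (x1, x2, x3 XOR (x1 AND x2))
    return x1, x2, x3 ^ (x1 & x2)

def NAND(a: int, b: int) -> int:
    _, _, out = TOFFOLI(a, b, 1)   # ancilla target initialized to |1>
    return out                     # 1 XOR (a AND b) = NOT(a AND b)

def DUPE(a: int):
    _, _, copy = TOFFOLI(a, 1, 0)  # ancillas: control |1>, target |0>
    return a, copy                 # the target wire now carries a copy of a

assert [NAND(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]
assert DUPE(0) == (0, 0) and DUPE(1) == (1, 1)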
Figure 2.3: A generic reversible circuit for implementing a possibly irreversible function f: {0,1}ⁿ → {0,1}ᵐ. (Inputs x1, …, xn enter a block of Toffoli gates together with ancilla bits initialized to |1⟩; the circuit outputs f(x)1, …, f(x)m along with garbage bits that are discarded.)

Randomized Computation
So far, we have not discussed one familiar ingredient of computation that appears commonly in classical computing: randomness. Many natural processes exhibit unpredictable behavior,
and we should be able to take advantage of this unpredictability in computation and algorithms.
The notion of probabilistic computation seems realistic and necessary. On one hand, the physi-
cal world contains randomness, as commonly seen in quantum mechanics. On the other hand,
we can propose several computational problems that we do not yet know how to solve efficiently
without randomness. If BPP=P, however (i.e., the complexity class bounded-error probabilistic
polynomial time is equivalent to the class deterministic polynomial time), as some believe, then
randomness is unnecessary and we can simulate randomized algorithms as efficiently with de-
terministic ones. Nonetheless, randomness is still an essential tool in modeling and analyzing
the physical world. We can find many examples where randomness is useful: in economics, it is
well known that a Nash equilibrium always exists if players can adopt probabilistic strategies, and in cryptography, the security of a secret key relies on the uncertainty in the key itself.
Randomness as a resource is typically used in computation in the following two forms:
(i) an algorithm can take random inputs; and (ii) an algorithm is allowed to make random
choices. As such, we introduce the notion of random bits and coin flips, again in the “bra-ket”
and circuit notations.
Suppose x1 is a random bit, and the state of x1 is |0⟩ with probability 1/2 and |1⟩ with probability 1/2, denoted as:

|x1⟩ = (1/2)|0⟩ + (1/2)|1⟩.
For now, this notation may look strange and cumbersome, but the benefit of writing the state of a bit this way will become clear when we generalize to the quantum setting. The state is called a probability distribution over |0⟩ and |1⟩. To describe a general n-bit probabilistic system, we write down the underlying state of the system as:

∑_{b ∈ {0,1}ⁿ} p_b |b⟩,

where b is any possible length-n bit-string, and p_b is called the probability of b. By basic principles of probability, all p_b values must be non-negative and must sum to 1.
In reality, the physical system is in one of those possible states. When we execute a randomized algorithm, we expect to observe (sample) the outcome at the end. From the observer's perspective, the values of the random bits are uncertain (hidden) until they are observed. Once some of the random bits in the system are observed, the state of the system (to the observer's knowledge) is updated to reflect what was just learned, following the laws of conditional probability. For example, a random system can be described by:

|x1 x2⟩ = (1/8)|00⟩ + (1/4)|01⟩ + (5/8)|10⟩ + 0|11⟩.

Now suppose it is observed that the first bit is |0⟩ (the probability of this scenario is Pr[x1 = 0] = 1/8 + 1/4 = 3/8). The state of the system after the observation is then conditioned on that outcome:

|x1 x2⟩ (given x1 = 0) = (1/8)/(3/8)|00⟩ + (1/4)/(3/8)|01⟩ = (1/3)|00⟩ + (2/3)|01⟩.

Here the bit-strings inconsistent with the outcome are eliminated, and the remaining ones are renormalized.
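The same bookkeeping can be written out in a few lines of Python. This sketch (ours; the dictionary representation is just one convenient encoding, not the book's notation) reproduces the conditioning computed above:

from fractions import Fraction as F

state = {"00": F(1, 8), "01": F(1, 4), "10": F(5, 8), "11": F(0)}

def condition(state, bit_index, observed):
    kept = {b: p for b, p in state.items() if b[bit_index] == observed}
    total = sum(kept.values())                      # Pr[observed value], here 3/8
    return {b: p / total for b, p in kept.items()}  # renormalize the surviving bit-strings

print(condition(state, 0, "0"))   # {'00': Fraction(1, 3), '01': Fraction(2, 3)}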
In a randomized algorithm, we typically allow that (i) it is correct only with high probability, or (ii) it does not always run within the desired time. Some of the uncertainty comes from its ability to make decisions based on the outcome of a coin flip. Now suppose we have implemented a conditional-coin-flip gate, named CCOIN. When the input bit is |0⟩, CCOIN does nothing. When the input is |1⟩, CCOIN tosses a fair coin:

CCOIN: |0⟩ ↦ |0⟩,
       |1⟩ ↦ (1/2)|0⟩ + (1/2)|1⟩.

Suppose we have a random program that reads: (1) initialize a bit to x1 = |1⟩; (2) flip a fair coin if x1 is |1⟩ and write the result to x1; (3) repeat step 2. As a circuit, the program looks like:

x1: |1⟩ ──CCOIN──CCOIN──
One is interested in observing the outcome at the end of the program. Let's analyze the circuit step by step. After the first CCOIN gate, |x1⟩ is set to |0⟩ and |1⟩ with equal probability (i.e., |x1⟩ = (1/2)|0⟩ + (1/2)|1⟩). After the second CCOIN gate, the state becomes |x1⟩ = (1/2)|0⟩ + (1/2)((1/2)|0⟩ + (1/2)|1⟩) = (3/4)|0⟩ + (1/4)|1⟩. It is convenient to write the above process in a state transition diagram:
(State transition diagram: the initial |1⟩ branches under the first CCOIN into |0⟩ and |1⟩ with probability 1/2 each; the second CCOIN leaves the |0⟩ branch alone and splits the |1⟩ branch again, giving the final distribution (3/4)|0⟩ + (1/4)|1⟩.)

The following is a slightly larger circuit, in which a CNOT gate correlates the two bits:

x1: |1⟩ ──CCOIN────●─────────
x2: |1⟩ ───────────⊕──CCOIN──

The system is initialized in |11⟩. After the first CCOIN gate, the system is put into the random state (1/2)|01⟩ + (1/2)|11⟩. The CNOT gate then transforms the system to (1/2)|01⟩ + (1/2)|10⟩, correlating the two bits. Finally, after the second CCOIN gate, |x1 x2⟩ = (1/2)((1/2)|00⟩ + (1/2)|01⟩) + (1/2)|10⟩ = (1/4)|00⟩ + (1/4)|01⟩ + (1/2)|10⟩. Again, as a state transition diagram:

(State transition diagram: |11⟩ branches under the first CCOIN into |01⟩ and |11⟩ with probability 1/2 each; the CNOT maps these to |01⟩ and |10⟩; the second CCOIN then splits the |01⟩ branch into |00⟩ and |01⟩, giving the final distribution (1/4)|00⟩ + (1/4)|01⟩ + (1/2)|10⟩.)

2.1.2 COMPUTING WITH QUBITS: QUANTUM CIRCUITS


Finally, we present to the reader, as efficiently as possible, the fundamental concepts in the quantum mechanical model of computation. Many believe that quantum computing can be described simply as randomized computing with a twist: the "probabilities" are allowed to take negative (possibly complex) values. Alternatively, it can also be described as reversible computing with an additional "Hadamard" gate. The goal of this section is to explain the meaning and implications of these statements.
Quantum Circuit Model
As usual, we start by describing the state of the quantum system using the “bra-ket” notation
introduced earlier. Suppose |ψ⟩ is an n-qubit (quantum bit) system:

|ψ⟩ = ∑_{b ∈ {0,1}ⁿ} α_b |b⟩,

where the coefficient α_b is called the amplitude (as opposed to the probability) of the basis bit-string b. The amplitudes differ from probabilities in that (i) they can take any complex value, while (ii) their squared magnitudes must still sum to 1: ∑_{b ∈ {0,1}ⁿ} |α_b|² = 1. In the context of qubits, the distribution of amplitude across bit-strings is called a superposition of all bit-strings; the correlation between bits is called entanglement of qubits. It is important to note that these are not renamings of the same concepts,¹ as random bits and quantum bits are fundamentally different objects. Despite the striking parallelism between the two, we should always be wary of the subtleties that differentiate them when analyzing a random circuit vs. a quantum circuit.
To measure (observe) the outcome of a qubit, we follow almost exactly what we did with a random bit. For an n-qubit system, if we measure all qubits at the end of a circuit,² then from |ψ⟩ = ∑_{b ∈ {0,1}ⁿ} α_b |b⟩ we observe the bit-string |b⟩ with probability |α_b|². Upon measurement, the state of the system "collapses" to a single, classical, definite value, Meas(|ψ⟩) = |b⟩, and can no longer revert to the superposition it was in before. For example, the superposition state |ψ⟩ = (1/√2)|0⟩ + (1/√2)|1⟩ yields, upon measurement, either outcome with equal probability: Pr[Meas(|ψ⟩) = |0⟩] = Pr[Meas(|ψ⟩) = |1⟩] = 1/2.
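The measurement rule is easy to mimic classically once the amplitudes are known. A short Python sketch (ours; the dictionary-of-amplitudes representation is illustrative, not from the text):

import random

def measure(amplitudes):
    # amplitudes: dict mapping bit-strings to (possibly complex) amplitudes
    outcomes = list(amplitudes)
    probs = [abs(a) ** 2 for a in amplitudes.values()]   # Born rule: Pr[b] = |alpha_b|^2
    return random.choices(outcomes, weights=probs, k=1)[0]

plus = {"0": 2 ** -0.5, "1": 2 ** -0.5}   # (1/sqrt(2)) |0> + (1/sqrt(2)) |1>
samples = [measure(plus) for _ in range(10_000)]
print(samples.count("0") / len(samples))   # ~0.5, as expected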
One operation that is of fundamental importance to quantum computation is called the Hadamard transformation, a single-qubit quantum gate denoted as H in the circuit model:

H: |0⟩ ↦ (1/√2)|0⟩ + (1/√2)|1⟩,
   |1⟩ ↦ (1/√2)|0⟩ − (1/√2)|1⟩.

It turns out that allowing Hadamard gates in a reversible circuit (consisting of Toffoli gates) extends the circuit model to any function that can be computed on qubits (up to global phase). For this reason, the H gate together with the Toffoli gate is universal for quantum computation. Note that this does not mean that Nature allows only Hadamard and Toffoli transformations on qubits; as we will see in later sections, the laws of quantum mechanics allow a whole class of transformations, called unitary transformations.
¹ Many believed that quantum mechanics has deterministic explanations, notably by the argument from the EPR paradox (put forward by Einstein, Podolsky, and Rosen in 1935 [75]) and other hidden-variable theories, which try to equate statistical correlation with entanglement. But in 1964, John Bell famously proved Bell's theorem [76], which disproved the existence of local hidden variables of certain types.
² This is a reasonable assumption by the law of deferred measurement.

One could argue that any interesting quantum mechanical phenomenon can be explained by interference. Unlike probability values, which are always non-negative, amplitudes (being possibly negative) can either accumulate or cancel. When two amplitudes accumulate, we say they interfere constructively; when they cancel each other out, we say they interfere destructively. The example circuit below illustrates this phenomenon:

x1: |0⟩ ──H──H──

As usual, let's analyze the circuit step by step. After the first H gate, |x1⟩ is set to the superposition state |x1⟩ = (1/√2)|0⟩ + (1/√2)|1⟩. After the second H gate, the state becomes |x1⟩ = (1/√2)((1/√2)|0⟩ + (1/√2)|1⟩) + (1/√2)((1/√2)|0⟩ − (1/√2)|1⟩) = |0⟩. Note that in this circuit, the amplitudes of |1⟩ cancel each other out (destructively interfere), while those of |0⟩ accumulate (constructively interfere).
Again we track the state of the qubit as the circuit runs (from left to right), using a transition diagram. In the context of qubit states, the diagram is called the Feynman path diagram, named after physicist Richard Feynman:
(Feynman path diagram for the two-Hadamard circuit: the qubit starts in |0⟩ and branches into |0⟩ and |1⟩, each with amplitude 1/√2, after the first H; after the second H each branch splits again, every edge carrying amplitude 1/√2 except the |1⟩ → |1⟩ edge, which carries −1/√2. The paths recombine to give final amplitudes 1 for |0⟩ and 0 for |1⟩.)

In this diagram, the state of the qubits also evolves from left to right.
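The cancellation can also be verified numerically. A minimal numpy check (ours; the 2×2 matrix form of H anticipates the linear-algebra formulation in Section 2.2):

import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

after_one_H = H @ ket0          # [0.707..., 0.707...], a uniform superposition
after_two_H = H @ after_one_H   # [1., 0.], back to |0> (up to floating-point rounding)
print(after_one_H, after_two_H)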

Feynman’s Sum-Over-Path Approach


Here we describe the precise prescription for tracking the amplitudes of a quantum state using Feynman's "sum-over-path" approach. The idea comes from the well-known path-integral formulation of quantum mechanics [77, 78]. Feynman's key observation is that the final amplitude of a quantum state can be written as a weighted sum over all possible paths the quantum system can take from the initial to the final state. In particular:
• the final amplitude is given by adding the contributions from all paths; and
• the contribution from a path is given by multiplying the coefficients along the
path.
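As a concrete illustration, apply these two rules to the two-Hadamard circuit analyzed above. Two paths lead from the initial |0⟩ to the final |0⟩: the path |0⟩ → |0⟩ → |0⟩ contributes (1/√2)(1/√2) = 1/2, and the path |0⟩ → |1⟩ → |0⟩ also contributes (1/√2)(1/√2) = 1/2, so the final amplitude of |0⟩ is 1/2 + 1/2 = 1. The two paths to |1⟩ contribute (1/√2)(1/√2) = 1/2 and (1/√2)(−1/√2) = −1/2, which sum to 0. Summing contributions over paths thus reproduces exactly the constructive and destructive interference seen in the Feynman path diagram.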
