Project Hamilton Phase 1 Whitepaper


Payments Innovation

February 3, 2022

Project Hamilton Phase 1


A High Performance Payment
Processing System Designed for
Central Bank Digital Currencies
Federal Reserve Bank of Boston and
Massachusetts Institute of Technology Digital Currency Initiative
Project Hamilton Phase 1 Executive Summary

Contents
Introduction .......................................................................................................................... 3
Core Design and Results ..................................................................................................... 4
Learnings ............................................................................................................................. 5
Phase 2 ................................................................................................................................ 6
References ........................................................................................................................... 7

The views expressed in this paper are those of the authors and do not
necessarily represent those of the Federal Reserve Bank of Boston or the
Federal Reserve System.

© 2022 Federal Reserve Bank of Boston. All rights reserved.



Introduction
In light of continued innovation in money and payments, many central banks are
exploring the creation of a central bank digital currency (CBDC), a new form of central
bank money which supplements existing central bank reserve account balances and
physical currency [5]. CBDCs could exist in various forms depending on a central bank’s
objectives, including a general-purpose CBDC that can be made available to the public
for retail, e-commerce, and person to person payments. Central banks, researchers, and
policymakers have proposed various objectives including fostering financial inclusion,
improving efficiency in payments, prompting innovation in financial services, maintaining
financial stability, and promoting privacy [2,3,9,19].
Because the CBDC research process is still in early stages in many jurisdictions, several
technical design questions remain open for investigation. The answers to these questions
will have meaningful implications and consequences for what options are, or are not,
available to policymakers.
The Federal Reserve Bank of Boston (Boston Fed) and the Massachusetts Institute of
Technology’s Digital Currency Initiative (MIT DCI) are collaborating on exploratory
research known as Project Hamilton, a multiyear research project to explore the CBDC
design space and gain a hands-on understanding of a CBDC’s technical challenges and
opportunities. This paper presents the project’s Phase 1 research. Our primary goal was
to design a core transaction processor that meets the robust speed, throughput, and fault
tolerance requirements of a large retail payment system. Our secondary goal was to
create a flexible platform for collaboration, data gathering, comparison with multiple
architectures, and other future research. With this intent, we are releasing all software
from our research publicly under the MIT open source license.1

By focusing Phase 1 on the feasibility and performance of basic, but resilient,
transactions, we aim to create a foundation for more complex functionality in Phase 2.
The processor’s baseline requirements include time to finality of less than five seconds,
throughput of greater than 100,000 transactions per second, and wide-scale geographic
fault tolerance. Topics left to Phase 2 include critical questions around high-security
issuance, systemwide auditability, programmability, how to balance privacy with
compliance, technical roles for intermediaries, and resilience to denial of service attacks.
As exploratory research on the implications of different design choices, this work is not
intended for a pilot or public launch. That said, we consider performance under a variety
of extensive, realistic workloads and fault tolerance requirements.

1 https://github.com/mit-dci/opencbdc-tx

Federal Reserve Bank of Boston | bostonfed.org | Payments Innovation 3



Core Design and Results


In Phase 1, we created a design for a modular, extensible transaction processing system,
implemented it in two distinct architectures, and evaluated their speed, throughput, and
fault tolerance. Furthermore, our design can support a variety of models for
intermediaries and data storage, including allowing users to custody their own funds
and avoiding the storage of personally identifying user data in the core of the
transaction processor.
In our design users interact with a central transaction processor using digital wallets
storing cryptographic keys. Funds are addressed to public keys and wallets create
cryptographic signatures to authorize payments. The transaction processor, run by a
trusted operator (such as the central bank), stores cryptographic hashes representing
unspent central bank funds. Each hash commits to a public key and value. Wallets issue
signed transactions which destroy the funds being spent and create an equivalent
amount of new funds owned by the receiver. The transaction processor validates
transactions and atomically and durably applies changes to the set of unspent funds. In
this version of our work, there are no intermediaries, fees, or identities outside of public
keys. However, our design supports adding these roles and other features in the future.
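To make this funds model concrete, the following is a minimal Python sketch of the idea. It is illustrative only: the class and function names, and the exact hash layout, are our own simplifications rather than the project's implementation. The processor stores only opaque hashes committing to a public key and a value; executing a payment deletes the spent hashes and inserts the newly created ones.

```python
import hashlib

def commitment(public_key: bytes, value: int) -> bytes:
    """Opaque 32-byte hash committing to an owner's public key and an amount."""
    return hashlib.sha256(public_key + value.to_bytes(8, "big")).digest()

class TransactionProcessor:
    """Stores only hashes of unspent funds, not owners or amounts."""

    def __init__(self):
        self.unspent = set()  # the set of unspent-fund hashes

    def mint(self, public_key: bytes, value: int) -> bytes:
        h = commitment(public_key, value)
        self.unspent.add(h)
        return h

    def execute(self, inputs: list, outputs: list) -> bool:
        # Reject the transaction unless every input is currently unspent
        # (the double-spend check). Signature validation would happen in
        # a separate validating layer and is omitted here.
        if not all(h in self.unspent for h in inputs):
            return False
        for h in inputs:   # destroy the funds being spent
            self.unspent.remove(h)
        for h in outputs:  # create equivalent new funds owned by the receiver
            self.unspent.add(h)
        return True

# Alice pays Bob: her fund is destroyed and an equivalent fund for Bob is created.
tp = TransactionProcessor()
alice_fund = tp.mint(b"alice-pubkey", 10)
bob_fund = commitment(b"bob-pubkey", 10)
assert tp.execute([alice_fund], [bob_fund])      # payment succeeds
assert not tp.execute([alice_fund], [bob_fund])  # replayed spend is rejected
```

Note that the core set never learns who owns what or how much: it sees only 32-byte digests.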
The flexibility, performance, and resiliency challenges of this design are addressed with
three key ideas. The first idea is to decouple transaction validation from execution, which
enables us to use a data structure that stores very little data in the core transaction
processor. It also makes it easier to scale parts of the system independently. The second
idea is a transaction format and protocol that is secure and provides flexibility for potential
functionality like self-custody and future programmability. The third idea is a system
design and commit protocol that efficiently executes these transactions, which we
implemented with two architectures.

Both architectures met and exceeded our speed and throughput requirements. The first
architecture processes transactions through an ordering server which organizes fully
validated transactions into batches, or blocks, and materializes an ordered transaction
history. This architecture durably completed over 99% of transactions in under two
seconds, and the majority of transactions in under 0.7 seconds. However, the ordering
server became a bottleneck, limiting peak throughput to approximately 170,000
transactions per second. Our second architecture processes transactions in parallel on
multiple computers and does not rely on a single ordering server to prevent double
spends. This results in superior scalability but does not materialize an ordered history for
all transactions. This second architecture demonstrated throughput of 1.7 million
transactions per second with 99% of transactions durably completing in under a second,
and the majority of transactions completing in under half a second. It also appears to
scale linearly with the addition of more servers. In order to provide resilience, each
architecture can tolerate the loss of two datacenter locations (for example, due to natural
disasters or loss of network connectivity) while seamlessly continuing to process
transactions and without losing any data.
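The first architecture's ordering step can be sketched as follows. This is a toy, single-process illustration under assumed names (Atomizer, submit, make_block), not the project's implementation: an ordering server collects fully validated transactions into batches, or blocks, producing a single ordered history, which is also why every transaction must pass through it and why it bounds throughput.

```python
class Atomizer:
    """Toy ordering server: batches validated transactions into blocks,
    materializing one ordered transaction history."""

    def __init__(self, block_size: int):
        self.block_size = block_size
        self.pending = []  # validated transactions awaiting a block
        self.blocks = []   # the ordered history: (height, [txs])

    def submit(self, validated_tx):
        self.pending.append(validated_tx)
        if len(self.pending) >= self.block_size:
            self.make_block()

    def make_block(self):
        # Seal the current batch at the next block height.
        if self.pending:
            self.blocks.append((len(self.blocks), list(self.pending)))
            self.pending.clear()

a = Atomizer(block_size=2)
for tx in ["tx1", "tx2", "tx3"]:
    a.submit(tx)
a.make_block()  # flush the partial batch
assert a.blocks[0] == (0, ["tx1", "tx2"])
assert a.blocks[1] == (1, ["tx3"])
```

In contrast, the second architecture has no such single sequencing point, which is what lets it scale with added servers at the cost of not materializing an ordered history.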


Learnings
Phase 1 has surfaced several key learnings on the potential design of a CBDC:
Select ideas from cryptography, distributed systems, and blockchain technology can
provide unique functionality and robust performance. We suspect existing database and
distributed systems technology is sufficient to provide a more traditional payment
architecture for CBDC where one actor stores users’ accounts, users cannot custody
their own funds, and there is no transaction scripting functionality. We created a new
design to offer both these features and new opportunities for different intermediary roles.

A CBDC can provide functionality that is not currently possible with either cash or bank
accounts. For example, a CBDC could support cryptographic proofs of payment, more
complex transfers to or from multiple sources of funds, and flexible forms of authorization
to spend, such as varying transaction limits.
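As a hypothetical illustration of one such flexible authorization rule (not a Phase 1 feature; Phase 1 uses plain public-key ownership, and the name LimitedKey is invented here), a fund's spending condition could enforce a per-transaction limit in addition to checking the signer:

```python
class LimitedKey:
    """Hypothetical spending rule: authorizes payments only up to a
    per-transaction limit, in addition to checking the owner."""

    def __init__(self, owner: str, limit: int):
        self.owner = owner
        self.limit = limit

    def authorize(self, amount: int, signer: str) -> bool:
        # Both conditions must hold for the spend to be valid.
        return signer == self.owner and amount <= self.limit

rule = LimitedKey("alice", limit=100)
assert rule.authorize(50, "alice")        # within limit
assert not rule.authorize(500, "alice")   # exceeds the transaction limit
assert not rule.authorize(50, "mallory")  # wrong signer
```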
We found that separating a transaction processor into modular components improves
system scalability and flexibility; for example, we can scale and replicate transaction
validation independently from preventing double spending and committing transactions,
and our architecture can support many future designs for programmability and privacy.
Despite using ideas from blockchain technology, we found that a distributed ledger
operating under the jurisdiction of different actors was not needed to achieve our goals.
Specifically, a distributed ledger does not match the trust assumptions in Project
Hamilton’s approach, which assumes that the platform would be administered by a
central actor. We found that even when run under the control of a single actor, a
distributed ledger architecture has downsides. For example, it creates performance
bottlenecks and requires the central transaction processor to maintain a transaction
history; one of our designs omits this history entirely, which significantly improves
transaction throughput and scalability.
CBDC design choices are more granular than commonly assumed. Currently, CBDC
designs are categorized as direct, two-tier, or hybrid models, with “token” or “account”
access models [1, 2, 7, 12, 15]. We found these limited categorizations lacking and
insufficient to surface the complexity of choices in access, intermediation, institutional
roles, and data retention in CBDC design [10]. For example, wallets can support both an
account-balance view and a coin-specific view for the user regardless of how funds are
stored in the database.
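For instance, a wallet holding individual coin records can expose either view without changing how funds are stored. This is a hypothetical sketch (the Wallet class and its methods are invented for illustration, not taken from the project):

```python
class Wallet:
    """Stores coin-level records; presents both coin and balance views."""

    def __init__(self):
        self.coins = []  # each entry: (coin_id, value)

    def receive(self, coin_id: str, value: int):
        self.coins.append((coin_id, value))

    def coin_view(self):
        """Token-style view: the individual funds and their amounts."""
        return list(self.coins)

    def balance_view(self) -> int:
        """Account-style view: a single aggregate balance."""
        return sum(value for _, value in self.coins)

w = Wallet()
w.receive("coin-1", 5)
w.receive("coin-2", 20)
assert w.balance_view() == 25     # the user sees one balance...
assert len(w.coin_view()) == 2    # ...or the underlying coins, as preferred
```

The point is that "token" versus "account" describes a presentation choice as much as a storage choice.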
By breaking transaction processing into steps like creation, authorization, submission,
execution, and storing history, CBDC designers can consider the potential roles for
intermediaries at each stage, creating opportunities for innovation.
By implementing a robust system, we identify new questions for CBDC designers and
policymakers to address, regarding tradeoffs in performance, auditability, functionality,
and privacy. Our work raised important questions about how the technical
architecture might affect the use and function of CBDC in payments. For example, it is an
open question how important from an economic perspective it might be to support atomic
transactions. In database parlance, this implies multiple operations to different pieces of
the data are applied in a way that appears instantaneous (atomic), or the set of updates
does not happen at all; there is no partial application [4,14]. In the context of a payment
processor, this means users could reliably issue payments that might transfer multiple
bills (or funds from multiple accounts) entirely, and would never see partial transfers,
even if there are crashes or system errors. We chose to implement atomic transactions,
which has a direct impact on the performance of the system [8].
The main functional difference between our two architectures is that one materializes an
ordered history for all transactions, while the other does not. This highlights initial
tradeoffs we found between scalability, privacy, and auditability. In the architecture that
achieves 1.7M transactions per second, we do not keep a history of transactions nor do
we use any cryptographic verification inside the core of the transaction processor to
achieve auditability. Doing so in the future would help with security and resiliency but
might impact performance. In the other architecture, we can audit the set of unspent
funds to make sure they were created correctly. Storing the history of transactions implies
the central transaction processor can reconstruct the transaction graph, which, in
combination with other data sources, could reveal sensitive user information [16,17]. In
the next phase of work, we will focus on adding privacy-preserving designs for
auditability.
Similarly, our goals of supporting self-custody and reducing data stored in the core of the
transaction processor had direct implications on data users might be required to store,
failure scenarios, recovery protocols, and on what types of payment functionality we can
support.

Phase 2
In Phase 2 of Project Hamilton, the Boston Fed and MIT DCI will explore new
functionality and alternative technical designs. Research topics may include
cryptographic designs for privacy and auditability, programmability and smart contracts,
offline payments, secure issuance and redemption, new use cases and access models,
techniques for maintaining open access while protecting against denial of service attacks,
and new tools for enacting policy. In addition, we hope to collaborate and explore these
challenges with other technical contributors from a variety of backgrounds in the open
source repository.
Through the development and testing of its own custom software, Project Hamilton
provides unique insight into the technical considerations and tradeoffs involved with the
development of a core processing engine for a CBDC. Project Hamilton’s research and
experimentation with a fast, highly scalable, resilient, and secure technical architecture
will supplement previous work by central banks including policy and economic research
[13], proofs-of-concept and pilot testing [11, 18], as well as CBDCs which have been
made available to the public [6].


References
[1] R. Auer and R. Böhme. The technology of retail central bank digital currency. BIS
Quarterly Review, March, 2020.

[2] Bank for International Settlements. CBDCs: an opportunity for the monetary system.
BIS Annual Economic Report 2021, pages 65–91, June 2021.
[3] Bank of Canada et al. Central bank digital currencies: foundational principles and
core features. BIS Working Group, 2020. https://www.bis.org/publ/othp33.pdf.
[4] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency control and recovery
in database systems, volume 370. Addison-Wesley, 1987.
[5] C. Boar and A. Wehrli. Ready, steady, go? Results of the third BIS survey on central
bank digital currency. BIS Papers No 114, 2021.
https://www.bis.org/publ/bppdf/bispap114.htm.
[6] Central Bank of The Bahamas. Sand dollar. https://www.sanddollar.bs.

[7] Committee on Payments and Market Infrastructures and Markets Committee. Central
bank digital currencies. BIS Quarterly Review, March 2018.
[8] European Central Bank. Work stream 3: A new solution – blockchain & eID, 2021.
https://haldus.eestipank.ee/sites/default/files/2021-07/Work%20stream%203%20-
%20A%20New%20Solution%20-%20Blockchain%20and%20eID_1.pdf.

[9] R. Garratt, M. J. Lee, et al. Monetizing privacy with central bank digital currencies.
Technical report, Federal Reserve Bank of New York, 2020.
[10] R. Garratt, M. J. Lee, B. Malone, A. Martin, et al. Token- or Account-based? A digital
currency can be both. Technical report, Federal Reserve Bank of New York, 2020.
[11] J. C. Jiang and K. Lucero. Background and implications of China’s central bank
digital currency: E-CNY. Available at SSRN 3774479, 2021.

[12] C. M. Kahn, F. Rivadeneyra, and T.-N. Wong. Should the central bank issue e-
money? Money, pages 01–18, 2019.
[13] J. Kiff, J. Alwazir, S. Davidovic, A. Farias, A. Khan, T. Khiaonarong, M. Malaika, H.
Monroe, N. Sugimoto, H. Tourpe, and P. Zhou. A survey of research on retail central
bank digital currency, 2020. https://www.elibrary.imf.org/view/journals/
001/2020/104/001.2020.issue-104-en.xml.
[14] B. W. Lampson. Atomic transactions. In Distributed Systems Architecture and
Implementation, pages 246–265. Springer, 1981.
[15] T. Mancini-Griffoli, M. S. M. Peria, I. Agur, A. Ari, J. Kiff, A. Popescu, and C. Rochon.
Casting light on central bank digital currency. IMF staff discussion note, vol 8, 2018.

[16] S. Meiklejohn, M. Pomarole, G. Jordan, K. Levchenko, D. McCoy, G. M. Voelker,
and S. Savage. A fistful of bitcoins: characterizing payments among men with no names.
In Proceedings of the 2013 Conference on Internet Measurement Conference, pages
127–140, 2013.
[17] D. Ron and A. Shamir. Quantitative analysis of the full bitcoin transaction graph. In
International Conference on Financial Cryptography and Data Security, pages 6–24.
Springer, 2013.

[18] Sveriges Riksbank. E-krona pilot phase 1. Sveriges Riksbank Report, 2021.
https://www.riksbank.se/en-gb/payments--cash/e-krona/technical-solution-for-the-e-
krona-pilot/.
[19] A. Usher, E. Reshidi, F. Rivadeneyra, S. Hendry, et al. The positive case for a CBDC.
Bank of Canada Staff Discussion Paper, 2021.



Foreword

Project Hamilton

“The ultimate test we’ll apply when assessing a central bank digital currency and other digital
innovations is: Are there clear and tangible benefits that outweigh any costs and risks?” – Jerome
Powell, chairman of the Federal Reserve Board of Governors, Sept. 22, 2021

“Given enough eyeballs, all bugs are shallow.” – Eric S. Raymond, “The Cathedral and the Bazaar”

We present here the findings of Phase 1 of Project Hamilton, the Federal Reserve Bank of Boston’s
collaboration with researchers from the Digital Currency Initiative at the Massachusetts Institute of
Technology. This research aims to understand technical opportunities and tradeoffs associated with a
hypothetical general purpose central bank digital currency. By building a platform from scratch, we hope
to better understand the risks and benefits this technology may bring and the many nuanced choices
that impact the ultimate design. Importantly, by issuing an open-source license for the code, we ensure
maximum sharing of what we’ve learned and expand the pool of experts debating and contributing to
the code base. We encourage those working in this code to push this effort forward – creating benefits,
reducing risks, and bringing all bugs to the surface.

Jim Cunha & Robert Bench

Federal Reserve Bank of Boston


A High Performance Payment Processing System Designed for
Central Bank Digital Currencies
James Lovejoy, Cory Fields, Madars Virza, Tyler Frederick, David Urness,
Kevin Karwaski, Anders Brownworth, Neha Narula

1 Introduction

Central banks are increasingly investigating general-purpose central bank digital
currency (CBDC), defined as a currency that is electronic, a liability of the central bank
denominated in the national unit of account, broadly available, and used for retail and
person-to-person payments [10, 11, 19, 20, 24, 29, 30, 45, 62, 81]. Figure 1 summarizes
the different properties of a CBDC as compared to other forms of payment
instruments [13]. Researchers have proposed that a CBDC could help address public
policy objectives such as ensuring public access to central bank money, fostering
payment competitiveness and resilience, supporting financial inclusion, and offering a
privacy-preserving digital payment method [4, 10, 20, 52, 84].

A CBDC's primary use case is to act as a payment instrument for individuals and
businesses as part of a broader exchange of goods or services. For example, a user
might pay for coffee in a cafe by sending digital currency to the cafe owner. However,
beyond this core use case, the design of a CBDC can vary considerably based upon the
public policy objectives and unique characteristics of various jurisdictions. Importantly,
the feasibility, operating performance, and impact of different CBDC design choices are
inextricably linked to the technical design of the underlying transaction processor. To
better inform policy discussions, central banks are recognizing the importance of
technical experimentation in understanding the implications and tradeoffs of different
CBDC models and design decisions on possible policy outcomes.

The Federal Reserve Bank of Boston (Boston Fed) and the Massachusetts Institute of
Technology's Digital Currency Initiative (MIT DCI) are collaborating on a multi-year
exploratory research project, known as Project Hamilton, to gain a hands-on
understanding of a CBDC's technical challenges and opportunities.1 This paper
presents the first phase of Project Hamilton's research and describes the technical
design of Hamilton, a research transaction processing system flexible enough to support
experimentation with multiple CBDC models. Hamilton is the first contribution to
OpenCBDC, a place for collaboration on technical research and development for CBDC.2

1.1 Goals

Project Hamilton's Phase 1 goal is to investigate the technical feasibility of a high
throughput, low latency, and resilient transaction processor that provides flexibility for a
range of eventual CBDC design choices. We intend to investigate more complex
functionality in future phases. Note that our design is not a complete CBDC system; it is
neither production-ready nor does it provide all the functionality needed for a working
CBDC. In this technical paper, we do not assess a CBDC's policy, regulatory, and legal
questions or whether or how it could be issued.

Performance. To support the scale of retail transactions in a large country such as the
US, a CBDC transaction processor should be able to process, at minimum, tens of
thousands of transactions per second in real time and scale to account for the potential
growth in payment volumes [54]. These figures well exceed the transaction volumes
interbank settlement systems are designed to process [7, 47, 48, 74]. We set the
following initial performance targets to guide our design:

Speed. To capture the benefits of faster or real-time payments [6], we set a target of
99% of transactions completing within 5 seconds. Completion includes a transaction
being validated, executed, and confirmed back to users. This is comparable to card
payment methods and existing interbank instant payment systems.

Throughput and scalability. To support settlement finality and CBDC models which
don't require intermediaries to aggregate transactions, Hamilton must be able to handle
peak projected transaction volumes produced by hundreds of millions of users. We
chose 100,000 transactions per second as a minimum target based on existing cash and
card volumes and expected growth rates.

Resiliency. To maintain trust in the digital currency, a CBDC must guarantee the
ongoing existence and usability of funds. In this phase of research, we focus on
continuing to provide system access and preventing data loss even in the presence of
multiple data center failures.

1 This project is named in tribute to two Hamiltons: Margaret, an MIT computer scientist
who led the software development for the Apollo Program's guidance system at NASA,
and Alexander, who laid the foundation for a U.S. central bank.
2 https://github.com/mit-dci/opencbdc-tx
Property                 Cash   Bank deposits   Central bank reserves   CBDC
Electronic               No     Yes             Yes                     Yes
Central-bank issued      Yes    No              Yes                     Yes
Universally accessible   Yes    Yes             No                      Yes

Figure 1: Table describing the properties of various monetary instruments, summarized
from Graph 3 in [13].

We will address upgradeability and other measures of resilience in future phases of
research.

Privacy and minimizing data retention. There is strong user demand for financial
privacy since fine-grained transaction data can reveal sensitive user details [59], even if
anonymized [50]. Respondents to a Eurosystem CBDC public consultation ranked
privacy as the most important feature of a digital euro (46% of respondents) [45]. Any
payment system's architecture is influenced by the design choices made around data
privacy, access, and retention, and achieving robust privacy requires making explicit
architectural choices at each layer of a system's design. In particular, if many parts of a
system require access to sensitive data (either raw or derived), it can be challenging to
retrofit such a system to provide data protection after the fact. Though exploring the
implications of cryptographic designs for strong privacy will be a part of our Phase 2
research, during Phase 1 we focused solely on design options that limit data access and
retention in the central transaction processor, to support future research and design
optionality. Note that the safest way to secure data is not to collect it in the first place.
We designed Hamilton's transaction processor to retain very little data about
transactions.

Intermediary and custody flexibility. One of the most important questions in CBDC
design is that of the role of the central bank and other intermediaries.3 These roles will
likely vary by jurisdiction, due to policymaker decisions and consumer preferences.

Currently, members of the public who want to digitally store funds and make payments
must open accounts with financial institutions or payment service providers which are
linked to the identity of the owner. These institutions are responsible for processing
transactions on behalf of their customers, interfacing with payment networks, and
safeguarding customer funds.

In contrast, cash can be held directly by the public and used to conduct transactions
without the need for a financial institution to process the payment on their behalf. A
CBDC could be designed to offer similar functionality to cash and provide users the
power to spend their own funds without the need for an account provider or custodian to
generate transactions [22].

The Bank for International Settlements (BIS) simplifies intermediary choices to three
possibilities: the "direct" model, in which the central bank issues CBDC to users directly;
"two-tier", in which the central bank issues CBDC to intermediaries who then manage
relationships with users; and a hybrid of the two [8].

We do not directly address intermediary roles in Phase 1. However, we foresee much
more complexity of choice in the roles for intermediaries in a CBDC, along dimensions
like authorization, custody, and viewing transactions. Importantly, our work shows the
design space for intermediaries is much broader than previously assumed.

Design choices not addressed in Phase 1. Fees, compliance and fraud controls, and
several other design considerations were not addressed in Phase 1 and are left to future
work.

1.2 System design

Our system processes payments from users who address and sign transactions using
their public/private key pairs stored in their digital wallets, as is the case in many
cryptocurrencies.

User wallets submit transactions to the Hamilton transaction processor to move unspent
funds—a representation of money containing an amount and the rules required to spend
it (in our case, a public key indicating ownership). A transaction indicates the unspent
funds being used and the new unspent funds being generated (i.e., the new data record
indicating who now has ownership over the money). We refer to these as transaction
inputs and outputs, respectively, consistent with many cryptocurrency systems [14, 32,
67]. Hamilton validates the transaction is correct and executes it by deleting the inputs
and creating the outputs. We implement two architectures for high throughput, low
latency, and fault-tolerant transaction processing. The first, the atomizer architecture,
uses an ordering server to create a linear history of all transactions. The second, the
two-phase commit (2PC) architecture, executes non-conflicting transactions
(transactions which do not spend or receive the same funds) in parallel and does not
create a single, ordered history of transactions.

1.3 Technical challenges and contributions

We had to solve the following challenges. First, we had to build a flexible platform that
could support multiple designs without explicit policy requirements or well-defined
tradeoffs. For example, it is unclear what balance to target between end-user privacy
and data storage requirements for users at the central transaction processor. We take a
layered approach with a design where additional functionality can be built outside the
core transaction processor. Our design can support a range of intermediary roles
including one where users custody their own funds. We explore a design which
minimizes storing personally identifying user data and information about transaction
addresses and amounts in the core of the system.

The second challenge is in providing strong consistency, geographic fault tolerance,
high throughput, and low latency, all with a workload that consists of 100% read/write,
multi-server transactions. In payment applications, all transactions require strong
consistency; it is vital that payments execute correctly even in the presence of
unforeseen events or computer crashes. Given our performance and resiliency
requirements, we must store data on multiple computers. This requires correctly
coordinating data updates across computers for most transactions, since we cannot rely
on payments having data locality, which is often exploited by traditional database
systems for partitioning to make workloads predominantly single-partition transactions.
We decided to support atomic transactions, meaning a payment is guaranteed to
execute in an all-or-nothing fashion. Atomicity provides better semantics for payments
and guarantees to users, and is helpful for programmability in the future, but increases
the cost of achieving these requirements. It remains to be seen if it will be required for a
CBDC.

Hamilton addresses these challenges using three key ideas:

The first is to decouple transaction validation from fund existence checks; only a
validating layer needs to see the details of a transaction. Beyond the validating layer,
Hamilton stores funds as opaque 32-byte hashes inside an Unspent funds Hash Set, or
UHS [49] (§3.2). This hides details about the funds (like amounts and addresses) from
the UHS storage, reduces storage requirements, and creates opportunities to improve
performance.

Our second key idea is the UHS-designed transaction format (§3.3), which is extensible
and secure against double spends, inflation attacks, replay attacks, and malleability, and
also has the benefit of supporting future layer 2 designs for even higher throughput in
the future. It borrows heavily from Bitcoin's transaction format but

underlying certain types of programmability and cryptographic privacy-preserving
designs [14, 66, 83, 85], which, along with auditability, we intend to explore in Phase 2.
Second, we can upgrade the scripting language or add a cryptographic
privacy-preserving protocol (even supporting multiple concurrent designs), as long as
they are compatible with 32-byte hash storage, without needing any changes to the
backing UHS, making it possible to defer decisions on specific programmability features.
Third, if needed, it is always possible to store more data at other layers outside the
transaction processor, for example in user wallets or an intermediary such as a
custodian. However, our design choices have implications on what data users or
intermediaries need to store in their wallets and what messages are required to confirm
a payment (§3.4).

Our third key idea is a system design and protocol for efficiently committing atomic
payment transactions that leverage the UHS to achieve high performance, strong
consistency, and geographically-replicated fault tolerance in a 100% read/write,
non-partitionable workload. We implemented two high-performance architectures with
different properties (§4). In both architectures, the UHS is partitioned across servers to
support higher throughput and an expanding UHS; executing a single transaction often
involves multiple servers. Each architecture uses a different technique to coordinate the
consistent application of a transaction across servers. In the atomizer architecture, we
use a replicated server to order all updates, which are then applied to the state of the
rest of the system; one can think of this as an attempt at a high-performance blockchain.

In the 2PC architecture, we exploit payment transaction semantics and our transaction
format to limit the locking required to achieve atomic transactions and serializability [15].
Transactions using different funds do not conflict and can execute in parallel; once a
valid transaction's funds are confirmed to be unspent, the transaction can always
proceed, and we can batch many transactions together to amortize two-phase commit
overhead. Because of these choices, we can use a simpler version of two-phase commit
without rollback.

Our evaluation demonstrates 1.7M transactions per second in the 2PC architecture with
less than one second 99% tail latency, under 0.5 seconds 50% latency, and adding
more resources could increase throughput further without negatively affecting latency.
The atomizer design peaks at 170K transactions per second with un-

3 We use the term "intermediaries" to include financial institutions, custodians, payment
service providers, and other third parties who perform payment-related functions and
services. Other entities which do not perform payment-related functions, such as
Internet service providers, are not included in this definition.
is designed to be validated without looking up data from der two seconds 99% tail latency and 0.7s 50% latency.
the UHS, which we term transaction-local validation. We reduced the functionality in the atomizer state ma-
The UHS design, in combination with our transaction chine to simply ordering and deduplicating the inputs for
format, affords us substantial flexibility. First, we be- a small set of transactions; even so, we were limited in
lieve that the abstractions our system provides and the throughput because the atomizer could not be sharded
assumptions it makes are compatible with most ideas across multiple servers. This implies that a design which

3
requires strongly ordering valid transactions to prevent double spends will be throughput-limited.

In summary, the contributions of this paper are the following:

• Hamilton, a flexible transaction processor design that supports a range of models for a CBDC and minimizes data storage in the core transaction processor while supporting self-custody or custody provided by intermediaries

• A transaction format and implementation for a UHS which together support modularity and extensibility

• Two architectures to implement Hamilton: the atomizer architecture, which provides a globally ordered history of transactions but is limited in throughput, and the 2PC architecture, which scales peak throughput almost linearly with resources but does not provide a globally ordered list of transactions

• An evaluation of the performance of the two architectures with different types of transaction workloads. Hamilton and the software to evaluate its performance are implemented in OpenCBDC-tx.

Our architectures are for research purposes and, accordingly, have limitations that need to be addressed in future work. These experimental designs are not ready for real-world use and do not provide system-wide auditability, protection against internally compromised machines, complete privacy guarantees, or resilience to denial of service attacks.

The rest of this paper discusses the system model and security goals for Hamilton (§2), explains the transaction format and UHS (§3), describes the design of the two architectures (§4) and their implementation (§5), evaluates Hamilton's performance on a variety of transaction workloads (§6), and puts Hamilton in context with related work (§7). We discuss broader learnings, limitations of our design, and future work in §8.

2 System model and security goals

This section describes the actors in Hamilton, their roles, and the security properties we want Hamilton to satisfy. In our description, we make the simplifying assumption that users directly custody their money without the assistance of an intermediary. We note that adding an intermediary would not change the core security properties of the transaction processor.

2.1 Actors

We distinguish three types of actors: the transaction processor, the issuer, and users. At a high level they operate and interact as follows. The transaction processor keeps track of funds which are owned by different users. Funds are a representation of money and as such refer to an amount of money (such as dollars) and a condition that must be satisfied to move this amount (say, to another user or users). The funds enter and exit the system through acts of the issuer, who can mint and redeem funds to add and remove them from the transaction processor, respectively. Users can execute transfer operations (transactions or payments) that atomically change the ownership of funds, with the requirement that the total amount of funds stored in the transaction processor has not changed. A user does so by submitting their transaction to the transaction processor over the Internet, which the processor then validates and executes. We leave offline transactions and transfers without Internet connectivity to future research. Figure 2 shows the high-level system model and potential communication channels between users and the transaction processor.

[Figure 2 diagram: a sender wallet (Alice: $20) and a recipient wallet (Bob: $0) exchange transaction requests and confirmations with the transaction processor, which stores all funds and executes transfers.]

Figure 2: Data flows between all participants in a transaction.

Users run wallet software to manage cryptographic keys, track funds, and facilitate transactions. Wallets could run on a mobile phone or on specialized hardware in smart cards. We do not discuss how users obtain wallets and get system access; this could be done using a PKI and access control, or the system could be open to all users. An important piece of future work is preventing spam and denial of service attacks, which we briefly discuss in §8.

2.2 Threat model

Our goal is to design a system where each user's funds and the integrity of the monetary system are safe from interference by an external actor. For the purposes of this paper we assume that the transaction processor is faithfully executing our design, that users' wallets are able to maintain secret keys, and that users are able to use secure channels to communicate with the transaction processor. Our design is a cryptographic system, so we assume the security of standard cryptographic primitives such as hash functions and digital signatures.

We aim to protect against an adversary who can freely interact with the system as a regular user, and as such make no additional assumptions about an adversary's capabilities or behavior. For example, the adversary is free
to create arbitrarily many identities and wallets, receive funds from other users, and engage in elaborate transaction patterns. Some of our designs are multi-server systems, and the adversary is free to attempt concurrent attacks against all externally-exposed parts of the system.

2.3 Data representation: prior work

To design a transaction processor we have to make a choice about how the users' funds are represented in the system. The two most common ways are the account balance model and the UTXO model, which we now summarize.4

4 There are other, less common, representation models, such as David Chaum's original eCash design [33] and the ECB prototype [46] using fixed-value bills that atomically change ownership.

Tracking of balances. The simplest way to implement a payment system is using balances. The system can store unspent funds as balances associated with unique identifiers, and a user can make a payment by issuing a request to the transaction processor to transfer balance to another identifier. Traditional payment systems choose this approach and manage authorization by storing identifiers under user accounts, usually accessed via a username and password. Traditional payment systems could use public key cryptography and digital signatures instead of passwords for authorization, but this is not widely used in practice outside of cryptocurrency.5 Several cryptocurrencies, like Ethereum [87], choose this data representation.

5 Public-private key pairs have significant advantages over usernames and passwords. Private keys are harder to guess or crack with brute force and can be reused without the same risks as passwords (a common security problem). Furthermore, private keys do not need to be seen or stored by the central transaction processor; signatures made with a private key only authorize a single transaction instead of providing permanent access to a user's money. They also allow for interoperability with other public-private key systems and for novel privacy options.

Tracking of discrete funds. Another way to implement a payment system is to track outstanding funds without explicitly consolidating them into balances. Here a system maintains an append-only ledger of accounting entries (sometimes called "coins"), each of which records a value (i.e., an amount of dollars) and conditions to spend the funds. Furthermore, each entry is marked as either "spent" or "unspent". To transfer funds, a user creates and authorizes a transaction which: (a) marks some entries (called inputs) as spent, each with a witness that satisfies the conditions to spend the entry; and (b) appends new (unspent) entries (called outputs) to the ledger. A valid transaction must preserve balance: the sum of a transaction's input values must equal the sum of its output values.6

6 In cryptocurrencies with fees, the requirement is that the sum of the transaction's input values must be greater than the sum of its output values, with the difference going to the block miner as fees.

Tracking of unspent entries is central to this model, so, following Bitcoin, these have a special name: UTXOs (Unspent Transaction Outputs). Importantly, UTXOs are never modified and must be spent in their entirety. Therefore, Alice, who wants to use her $20.00 UTXO to send $4.99 to Bob, will create a transaction with two outputs: one $4.99 output meant for Bob and one $15.01 change output meant for Alice herself. In contrast to physical banknotes or coins, the UTXO values are not restricted to a fixed set of denominations. Note that it is not required to make change in a system that tracks balances, since the default is that the remaining balance stays under the same identifier.

2.4 Data representation in Hamilton

Both of these designs have benefits and drawbacks, but we chose to build Hamilton in the UTXO model. The choice of UTXOs is compatible with privacy extensions in the future. Notably, most scalable privacy designs [14, 21, 33, 60, 63, 83, 85], including those deployed on top of account-based systems [76, 86], use a UTXO-style data representation internally. In contrast, privacy designs in the account model [25, 69] require locking all of the accounts in the anonymity set. The UTXO model also offers greater transaction execution parallelism. However, UTXOs can be less intuitive to the user than account balances. Although UTXOs can support arbitrary programmability, it is much easier to implement general programmability in an account-balance design. Account balances are also more fungible, which is an important property for money. It might be useful in the future to consider an account balance data model which minimizes the amount of data stored in the transaction processor.

We emphasize that the transaction processor's internal data representation is distinct from the interface presented to the user. In particular, both of these choices support an account balance user interface abstraction (i.e., tallying the total balance of a user's holdings, showing their transaction history, etc.), even though only one has an account balance internal data representation.

2.5 Unspent funds

Formally, we represent unspent funds as triples utxo := (v, P, sn). Here v is the amount of money, and its role in representing unspent funds is clear. The other two elements are an encumbrance predicate P and a serial number sn, which we now explain.

The encumbrance predicate P takes two arguments: a transaction tx (to be formally defined later) seeking to spend this utxo, and a witness wit. The predicate returns true if and only if the witness signifies that this spending transaction should be authorized. This is similar to Bitcoin, where each UTXO is encumbered with a script, an executable program which evaluates the conditions for a valid spend.
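The predicate interface P(tx, wit) -> bool described above can be sketched in a few lines. This is an illustrative toy, not Hamilton's implementation: the "signature" below is a keyed hash whose verification needs the secret key, standing in for a real public-key signature scheme, and all names (`toy_sign`, `make_signature_encumbrance`) are hypothetical.

```python
import hashlib
from typing import Callable, NamedTuple

# Toy stand-in for a real signature scheme: here a "signature" is just a
# keyed hash, so verification needs the secret key. A deployed system
# would use an actual public-key signature scheme instead.
def toy_sign(sk: bytes, message: bytes) -> bytes:
    return hashlib.sha256(sk + message).digest()

class Utxo(NamedTuple):
    v: int                              # value, e.g. in cents
    P: Callable[[bytes, bytes], bool]   # encumbrance predicate P(tx, wit)
    sn: bytes                           # globally unique serial number

def make_signature_encumbrance(sk: bytes) -> Callable[[bytes, bytes], bool]:
    # P hard-codes the key material and accepts a witness iff it is a
    # valid signature over the serialized spending transaction.
    def P(tx: bytes, wit: bytes) -> bool:
        return wit == toy_sign(sk, tx)
    return P

# Alice encumbers a $5.00 UTXO with her key; only a signature over the
# exact spending transaction satisfies the predicate.
sk_alice = b"alice-secret-key"
utxo = Utxo(v=500, P=make_signature_encumbrance(sk_alice), sn=b"\x01" * 32)

tx = b"serialized-spending-transaction"
wit = toy_sign(sk_alice, tx)
assert utxo.P(tx, wit)               # correct witness authorizes the spend
assert not utxo.P(b"other-tx", wit)  # the witness does not transfer to another tx
```

As in the text, the witness only ever authorizes one specific serialized transaction, which is what later rules out signature reuse.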
A common encumbrance is that of digital signature authorization. Here the predicate P hard-codes a public key pk, and P(tx, wit) checks that wit consists of a valid signature where the message comprises the serialized spending transaction tx and the signature is under the public key pk. To spend such a utxo, the user creates a transaction tx having the utxo as an input and signs tx with the corresponding secret key sk. In a system supporting only digital signature authorization, a predicate P can be represented by the public key pk itself.

In our system we permit users to reuse encumbrances; e.g., a user Alice could publish her public key pkAlice and receive multiple payments meant for it. Therefore, we need a way to reference and distinguish funds that share the same encumbrance and value (e.g., Alice having received the same $5.00 value in two different transactions encumbered with the same public key pkAlice).

We express this distinction between otherwise identical UTXOs through a globally unique serial number sn, the third component in a utxo. In our security definitions below we require that serial numbers do not repeat across time: a serial number associated with a spent UTXO cannot "reappear" as a serial number for a new unspent UTXO. Global uniqueness of serial numbers is not a mere technicality: they express the intent of singling out a particular UTXO and prevent replay attacks (see §2.8 for discussion).

Skipping ahead, our system assigns each UTXO a serial number by deterministically hashing all the corresponding transaction's inputs, as well as the output UTXO's encumbrance, value, and its index among all outputs. This in turn references previous serial numbers and recursively incorporates the entire transaction history.7 The collision resistance of the hash function and the system property that valid inputs can only be spent once guarantee that all serial numbers are globally unique.

7 This is similar to how the Bitcoin whitepaper [67] defined a coin to be a chain of digital signatures.

2.6 System operations

Logically, Hamilton maintains a record of all unspent funds in existence; consistent with other cryptocurrencies we call this record the UTXO set. In order to spend funds, they must be present in the UTXO set. Our system supports the following three kinds of operations: Mint, Redeem, and Transfer, all of which are atomic and are applied one at a time.

Minting and redeeming. The Mint operation creates new unspent funds and adds UTXOs to the UTXO set, whereas the Redeem operation removes unspent funds from the UTXO set, making them unspendable. When deployed, these operations also have semantics outside Hamilton: namely, minting would normally correspond to currency in the outside world being set aside for use in Hamilton, whereas redeeming would make it available again. The issuer must choose unique serial numbers for newly minted UTXOs. It suffices to set these as uniformly random, or as the result of a monotonically increasing counter (i.e., the issuer minting the i-th UTXO would set its serial number to i).

Value transfers. The Transfer operation both consumes UTXOs and creates new UTXOs; this is the only operation which both adds to and removes from the UTXO set. The input to Transfer is a transaction tx comprised of: (a) a list of input UTXOs to be spent; (b) two lists of output values and encumbrances specifying output UTXOs to be created; and (c) a list of witnesses, one for each input. In a valid transaction, balances are preserved, and each input UTXO to be spent has its encumbrance predicate satisfied by the corresponding witness (e.g., a signature). When a transfer operation succeeds, the input UTXOs are completely consumed (removed from the UTXO set) and cannot be used again, and the outputs are available to be used as inputs to other Transfer or Redeem operations. Hamilton also computes and assigns unique serial numbers to the output UTXOs.

No editing of unspent funds. The above three operations are the only ways the UTXO set can be modified. In particular, the unspent funds tracked in Hamilton cannot be modified to change their ownership (encumbrance), value, or serial number (see the change output discussion in §2.3).

Payment discovery. Transaction history in Hamilton is not public. The sender must give the recipient the newly created UTXOs (or the information needed to reconstruct them) so that the recipient can further spend them. To ensure users know a Transfer is completed and has been applied, the transaction processor is also responsible for responding to queries from users about the existence of UTXOs.8

8 This is unlike public blockchains, where users can search the publicly available history of transactions to see if they have received payment.

2.7 Security properties

In brief, the system must faithfully execute transactions, ensuring that each was authorized by the owner of the input funds, and safeguard that transactions do not disturb the overall balance of funds (outside of minting and redemption). The transaction processor in Hamilton ensures this by satisfying the following four security properties.

Authorization. Hamilton only accepts and executes Mint and Redeem operations authorized by the issuer, i.e., only the issuer can mint and redeem funds. Similarly, Hamilton only accepts and executes Transfer operations where the encumbrances of each consumed UTXO
are satisfied (e.g., all three operations are covered by digital signature authorization).

Authenticity. The UTXO set of Hamilton only contains authentic funds, as we now define. Define UTXOs created by authorized Mint operations to be authentic. Moreover, define UTXOs created by Transfer operations to also be authentic if and only if all inputs consumed by the transaction were authentic and the transaction preserves balance. Note that the recursive authenticity property depends on both the contents of the transaction itself, as well as the UTXO set when Transfer is applied.

Durability. Mint, Redeem, and Transfer are the only operations in Hamilton that change the UTXO set.

Note that, as a consequence of the three integrity properties defined above, the UTXO set always remains authentic and transactions in Hamilton cannot be reverted. We further require that the transaction processor makes the following availability guarantee and always makes progress:

Availability. An authorized transaction spending authentic funds will always be accepted by the transaction processor.9

9 This does not preclude potential access control outside the transaction processor.

2.8 Discussion

We carefully designed our data representation (§2.5), system operations (§2.6), and security properties (§2.7) so that any system satisfying these maintains an authentic and authorized UTXO set, eliminates the possibility of double spends, and also achieves additional security goals related to its use. In particular, transactions in Hamilton are not replayable and digital signature authorizations are not reusable.

These properties are a consequence of the fact that each UTXO created by a Mint or Transfer transaction is unique and guaranteed to not equal any other member of the UTXO set either in the past or in the future. The issuer chooses uniformly random serial numbers for each Mint transaction output. In Hamilton, for each Transfer the output UTXO serial numbers are set by hashing all the corresponding transaction's inputs, as well as details pertinent to the particular output UTXO itself (see §2.5). Therefore each UTXO serial number recursively incorporates the entire transfer history up to the original Mint transactions that introduced these source funds into the system. Under standard cryptographic assumptions, it is infeasible to create two distinct chains of transfers resulting in the same serial number; thus all serial numbers and all UTXOs are globally unique.

No double-spends. Transfer operations permanently mark UTXOs as spent. Therefore, as serial numbers are unique, no UTXO can be spent more than once or recreated after having been spent.

No replay attacks. In a basic replay attack the victim has signed a single transaction to authorize a single value transfer. The attacker, however, submits this transaction twice in hopes of effectuating two value transfers. For example, Alice, who has two unspent $5.00 "bills", might give Bob a transaction that spends one of her $5.00 bills to pay for ice cream, which Bob then submits twice to take possession of both. Or, if Alice only has one $5.00 bill available right now, Bob can wait until she receives $5.00 as change, resubmit the (old, already confirmed) transaction, and take possession of Alice's newly received change.

Hamilton's transaction format prevents replay attacks, as each transaction references globally unique input UTXOs, and each signature covers the entire transaction, including all its inputs and outputs. Thus, signatures are not valid for spending any other UTXO, including those created in the future, and it is not possible to copy a Hamilton transaction and apply it multiple times to spend additional funds.10

10 There are other ways to prevent replay and signature reuse attacks, for example, by incorporating a timestamp or an incrementing nonce, or enforcing unique encumbrances: each of them ensures that a signed transaction can effectuate at most one transfer, and that signatures cannot be repurposed. We made our choice to incorporate serial numbers derived from the transaction's history due to its simplicity and flexibility. For example, deterministic serial numbers do not require the sender to maintain state and allow for pre-signing transactions that can be kept online to be broadcast later. This does introduce challenges to programmability, since a transaction cannot be signed until the user knows exactly what outputs it is spending; we could use other techniques from cryptocurrency systems to address this.

Transactions are non-malleable. In a system with malleable transactions, an attacker can change some details about the transaction (e.g., the witnesses used to satisfy input encumbrances or output UTXO serial numbers) without otherwise changing the input UTXOs or modifying output UTXO values or encumbrances. For example, if the transaction format included an auxiliary field not covered by the signatures but used in serial number computation, an attacker could change this field. This would change output UTXO serial numbers and make it unsafe to accept a chain of unconfirmed transactions, thus preventing certain higher-level protocols like the Lightning Network. In 2014, the largest Bitcoin exchange, Mt. Gox, closed after claiming to be a victim of malleability attacks [38]. In our implementation, we require signatures to cover all fields of a uniquely-encoded transaction and derive UTXO serial numbers from the same fields (plus output indexes).
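The serial-number derivation described above (a hash over all of the transaction's inputs plus the particular output's encumbrance, value, and index) can be sketched as follows. The field encoding is an assumption for illustration, not Hamilton's actual wire format, and `derive_serial` is a hypothetical helper name.

```python
import hashlib

def derive_serial(input_sns, out_encumbrance: bytes,
                  out_value: int, out_index: int) -> bytes:
    # Hash all input serial numbers together with the output's
    # encumbrance, value, and index among all outputs. Because input
    # serial numbers are themselves such hashes, each serial number
    # recursively commits to the entire transfer history.
    h = hashlib.sha256()
    for sn in input_sns:
        h.update(sn)
    h.update(out_encumbrance)
    h.update(out_value.to_bytes(8, "big"))
    h.update(out_index.to_bytes(4, "big"))
    return h.digest()

# A minted UTXO gets an issuer-chosen serial number; spending it yields
# outputs whose serial numbers commit to that history.
mint_sn = hashlib.sha256(b"mint-0").digest()
sn_bob = derive_serial([mint_sn], b"pkBob", 499, 0)
sn_alice_change = derive_serial([mint_sn], b"pkAlice", 1501, 1)

# Distinct outputs of the same transaction get distinct serial numbers,
# and a different transfer history yields a different serial number.
assert sn_bob != sn_alice_change
assert derive_serial([sn_bob], b"pkCarol", 499, 0) != \
       derive_serial([mint_sn], b"pkCarol", 499, 0)
```

Under collision resistance of SHA-256, two distinct transfer histories cannot produce the same serial number, which is the uniqueness argument the section relies on.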
3 Transaction design

A payment system's transaction format determines the user experience when making a payment and has policy implications in a wide range of areas, including the level of user privacy, whether interaction with financial institutions is required, and how minting is performed.

In the abstract design described in §2.6, the transaction processor has full visibility into transactions, including public keys, the transaction graph, and values, and stores the entire UTXO set. Storing the entire UTXO set is unfortunate because it requires the transaction processor to store encumbrances and values. This has an effect on storage and bandwidth requirements (Bitcoin's UTXO state is over 4 gigabytes and Ethereum's is almost a terabyte [80, 91]), and, as described in §1.1, this poses data retention and user privacy challenges. Instead, we explored a design which does not require storing encumbrances (which could identify users) and values in cleartext in the transaction processor. Depending on how the system is architected, we believe this design can later be extended to avoid even temporarily showing this data to the transaction processor. In Hamilton, the transaction processor stores unspent funds as a set of opaque 32-byte cryptographic hashes of UTXOs, not UTXOs themselves. The rest of this section explains the technical motivation behind this choice and how to securely create and process transactions in this model. We introduce the transaction format, the steps in which a transaction is processed and applied to the state, and the implications these choices have on future functionality.

3.1 Processing transactions in Hamilton

Processing a Transfer transaction involves confirming that it is valid and then applying it to the state. Validation involves checking the following:

1. whether the funds exist to be spent;

2. whether the spender has provided authorization to spend the funds; and

3. whether the transaction preserves the balance of funds.

The first and third items provide authenticity. The second item is authorization. Applying a valid transaction to the UTXO set involves atomically removing the spent funds and creating the new funds under the control of the recipient(s); this, in combination with the other checks, provides durability.

Separating validation checks. An important part of the design of Hamilton is that these three validation checks can be divided into transaction-local validation, which does not require access to shared state, and existence validation, which does. We can then scale these two pieces of work independently. This is useful because they have different scalability profiles, with transaction-local validation requiring mostly compute resources (i.e., verifying digital signatures used in spend authorization) and existence validation requiring mostly persistent storage I/O.

Performing local validation. With this separation in mind, Hamilton has dedicated components, which we call sentinels, that receive transactions from users, perform transaction-local validation, which is stateless, and then forward the locally-validated transactions for further processing. This local validation (1) checks that the transaction is correctly formatted, (2) confirms that each input has a valid signature for the output it is spending, and (3) confirms that balance is preserved (i.e., the sum of the outputs equals the sum of the inputs).

Checking existence and executing a locally-validated transaction. Now, given a transaction that passes transaction-local validation, our system needs to atomically check for input existence and, if valid, update the UTXO set as follows. First, check if all of the transaction's input UTXOs exist in the UTXO set, and abort further processing if any of the input UTXOs are missing. Otherwise, continue and (a) remove the transaction's input UTXOs from the UTXO set, and (b) add the newly created output UTXOs to the UTXO set.

In our current design, sentinels ensure that locally-valid transactions with inputs in the UTXO set have globally unique outputs; therefore we do not need to explicitly check that none of the transaction's output UTXOs exist in the UTXO set once the sentinel has correctly derived serial numbers. However, we do so in the transaction executor in case we wish to support a different transaction format in the future which might not have this property (see §4.3.1).

3.2 UTXO hash set

We start by observing that executing a transaction that passes transaction-local validation does not require access to the transaction's witness data, e.g., digital signatures. This is because neither input nor output UTXOs depend on the witness data, and so the atomic update is independent of witnesses. Therefore, after sentinels have checked that a transaction passes transaction-local validation, the sentinels could strip witness data and only forward the transaction's inputs and outputs for processing.

Existing UTXO-based cryptocurrencies look up the contents of input UTXOs in a transaction-processor-maintained UTXO set to confirm the user has provided valid UTXOs (i.e., part of the current UTXO set) to spend. Our key insight was relying on the (untrusted) user to provide UTXO data by reducing the problem of checking UTXO correctness to existence—do the funds the user is claiming they can spend actually exist?

By doing this, one can go further and observe that after transaction-local validation, instead of processing and storing the entire UTXO, the transaction processor can operate on cryptographic commitments to the UTXOs. In Hamilton we replace the UTXO set with a UTXO hash set (UHS), and instead of storing a set of entire UTXOs utxo = (v, P, sn), we store cryptographic commitments h := H(v, P, sn) to UTXOs, which we subsequently refer to as hashes, or UHS IDs. Here H is a cryptographic hash function, and in Hamilton we use SHA-256 to derive these hash commitments.

Converting a user's Transfer transaction into commitments to be applied to the UTXO hash set is a new step in transaction processing which we call compaction. When processing a Transfer transaction, Hamilton's sentinel computes the hashes for input UTXOs, deterministically derives serial numbers for output UTXOs, and computes hashes for output UTXOs. These two sets of hashes form a compact transaction. Sentinels forward compact transactions to the execution engine to be applied to the UHS, described in §3.3.

We note that replacing UTXOs with cryptographic commitments preserves security, and an attacker cannot create a transaction that would be invalid in the UTXO set model but succeed in the UHS model. Because UHS hashes commit to the same UTXO data which must be provided in the transaction, an attacker cannot fit a different UTXO preimage into the same UHS hash without violating the collision resistance of H. Therefore, if a transaction format is secure in the UTXO model, then it must be secure in the UHS model. We explain the security of our transaction format in §3.3.

The idea of storing unspent funds as commitments was first proposed as a Bitcoin storage and scalability improvement [49]. We now discuss the benefits and drawbacks of a UHS. It lowers storage requirements, increases flexibility, and improves privacy, but creates challenges for auditing, transaction flows, and programmability.

Storage. In the UHS model the transaction processor only stores a 32-byte hash per individual UTXO, inde-

UTXOs; it could be applied to a digital currency with balances or some other application that requires atomically swapping hashes. This means we can experiment with different transaction formats or scripting languages without needing to change the core execution engine.

Privacy. The transaction processor does not need to store balances or account information, though sentinels do need to see (but do not need to retain) parts of this information to validate a transaction. We anticipate being able to remove this requirement using cryptographic privacy-preserving designs which we will investigate in the next phase of work [14, 66, 85].

However, this design also presents challenges for certain kinds of auditability, transaction protocols, and programmability.

Auditability. The UHS does not contain enough information to audit the total amount of unspent funds. This type of auditing would probably be important in the context of a digital currency, but can be achieved either by logging data outside the UHS or, to continue preserving privacy, by storing values in homomorphic commitments that can be maintained and tallied using additional cryptographic techniques [69, 75].

Sender/recipient transaction protocols. The UHS design requires a recipient to learn the commitment to find out if they have received funds, and to know the serial number, encumbrance, and value to further spend their funds; the transaction processor does not store enough information to help a user recover this if they lose it. This information could be stored elsewhere, or third parties could conceivably provide this service to users. Requiring a user to receive the serial number, encumbrance, and value to spend their funds has implications on our transaction protocol and the types of transactions supported, which we discuss in §3.4.

Programmability. Decoupling transaction-local validation and access to shared state means that future transaction programmability is restricted to only transaction-local state. The UHS requires the person constructing the transaction to be able to specify the start and end states
for the modifications to the spender’s funds and the recip-
pendent of a UTXO’s size. If transactions contain pro-
ient’s funds. If there are concurrent transactions debiting
grammable features in the future that require a large
or crediting an account balance this might be challeng-
amount of storage space in the transaction format, the
ing. This is easier in the UTXO model since we do not
storage requirement for the state remains the same. This
need to support concurrent access to UTXOs. It would be
state would be maintained by wallets and the user would
challenging to implement a complex smart contracting
need to provide the necessary commitment preimages
language (such as Solidity [44]) using this abstraction.
alongside the transaction. It also keeps the data format
We will consider auditing, alternative data models
uniform and for transaction formats that include user-
and advanced transaction semantics in the next phase of
supplied data, this hampers users from storing arbitrary
work.
data (such as copyrighted or illegal data [79]) in the
transaction processor. 3.3 Transaction format and execution
Flexibility. The UHS makes no assumptions about what Recall that we represent unspent funds as UTXO triples
hashes represent, and this data structure is not limited to utxo = (v, P, sn), comprised of a value v, encumbrance

9
P, and serial number sn (§2.5). We now describe the concrete choices for v, P, and sn, Hamilton's transaction format, and transaction execution in detail.

outpoint:
  transaction_id: byte[32]
  index: uint

output:
  public_key: byte[32]
  value: uint

input:
  outpoint: outpoint
  output: output

witness:
  signature: byte[64]

transaction:
  inputs: input[]
  outputs: output[]
  witnesses: witness[]

Figure 3: Description of a Transfer transaction. When submitted to the execution engine, the transaction is byte serialized to remove labels and delimiters required for a human-readable format.

Values. We represent values v as 64-bit unsigned integers specifying multiples of the smallest subdivision of money, i.e., multiples of $0.01.

Encumbrances. Currently we only support encumbrances of public keys, indicating that the authorization needed to spend this output is a signature on some specific data by the corresponding private key.^11 Thus, an encumbrance P is a 32-byte public key. Our model supports future encumbrances, such as requiring a subset of signatures from multiple public keys.

Serial numbers. It is important that UHS hashes (or, equivalently, UTXO set entries) do not collide, yet at the same time it is possible to spend the same amount v to the same encumbrance P multiple times. This property is both about completeness—the ability to put multiple outputs with the same encumbrance and value in the UHS—and about security—to prevent replay and signature reuse attacks (§2.8). This is the role of the globally unique serial number sn.

We make UTXOs (i.e., UHS hashes) unique by deriving the serial numbers sn as pairs sn := (txid, idx) as follows. The first component, txid, is the unique transaction identifier: the cryptographic hash of the Mint or Transfer transaction that created this UTXO. This hash covers all input UTXOs, output encumbrances, and values (as well as a unique nonce for each Mint transaction, which has no inputs). The second component, idx, is the particular output index, i.e., the first, second, etc., output of the transaction.^12 Since inputs can only be spent once and they are all unique, this ensures that valid transactions create unique serial numbers sn and unique output hashes: txids are different for distinct transactions, and the idx values distinguish multiple outputs of the same transaction.

^11 In Bitcoin and other cryptocurrencies, such encumbrances are known as Pay-to-Pubkey, or P2PK, scripts.
^12 While in this exposition we use 1-based indexing, our software implementation uses 0-based indexing.

This design matches Bitcoin, where previous outputs being spent are referenced via an outpoint, the transaction identifier/output index pair. A Bitcoin outpoint uniquely identifies a previous output and is never reused for a different output once spent; therefore we use serial number and outpoint interchangeably when describing Hamilton transactions.

A notable difference is that Bitcoin transactions only contain outpoints but not the outputs themselves, so validating nodes must look up output information (like the amount) in a local database in order to validate a transaction. As we operate in the UHS model, our transaction processor does not store this output information (it stores a cryptographic commitment to it). Therefore, when spending a previous output in our system, the output's value and encumbrance are included in an input along with the outpoint referencing that output.

We are now ready to fully specify the transaction format, computation of transaction identifiers, and transaction validation.

Mint transactions. Unspent funds enter the system as outputs created by Mint transactions. A k-output Mint transaction txMint is a quadruple (v_out, P_out, nonce; σ), comprised of two size-k lists of output values v_out and encumbrances P_out, as well as a unique nonce and the issuer's signature σ. Such a transaction creates k UTXOs with value/encumbrance pairs (v_out,1, P_out,1), ..., (v_out,k, P_out,k). We define txid(txMint) := H((v_out, P_out, nonce)), where H is a cryptographic hash function.

Transfer transactions. A k-input, l-output Transfer transaction seeks to fully consume k UTXOs currently present in the system, and create l new UTXOs specified by encumbrances and values. Such a transaction txTransfer = (utxo_inp, v_out, P_out; wit) is comprised of (a) a size-k list utxo_inp of input UTXOs to be spent; (b) two size-l lists v_out and P_out of output values and encumbrances specifying output UTXOs to be created; and (c) a size-k list of witnesses wit, one for each input. (See Figure 3 for a machine-readable specification of a Transfer transaction.)

The transaction's inputs utxo_inp,i = (v_inp,i, P_inp,i, sn_inp,i) must have values that sum up exactly to the values of the transaction's outputs:

    v_inp,1 + ... + v_inp,k = v_out,1 + ... + v_out,l

Note that this is different from Bitcoin, which requires the sum of the outputs to be less than or equal to the sum of the inputs, because the difference is used as transaction fees which go to the block miner. We do not require fees, but could consider them in a future phase of this work.

Similar to Mint transactions, such a txTransfer creates l UTXOs with value/encumbrance pairs (v_out,i, P_out,i), and we define txid(txTransfer) := H((utxo_inp, v_out, P_out)). (See Figure 4 for an explicit description of this computation.^13)

The transaction format satisfies the properties specified in §2.7 of authorization, authenticity, and durability. It is not possible to create counterfeit money in the system, as an outpoint is globally unique and unusable once spent, and transaction-local validation checks, described later, ensure preservation of balance.

Transaction creation. To create a Transfer transaction, users use their private keys to create a digital signature on the txid, which serves as the witness authorizing the transaction, obtaining one signature per transaction input. Witnesses are not included in the transaction identifier, so signing can be deferred by the sender to after the outpoint has been shared with the recipient. This is useful to support future smart contract functionality where unsigned transactions could be shared between parties to be signed and broadcast later under certain conditions. Recall that encumbrances are applied to individual outputs rather than whole transactions, meaning that funds can be spent atomically from multiple public keys in a single transaction.

Once a transaction is finalized, the users will deterministically derive outpoints (i.e., serial numbers) of each of the output UTXOs from the transaction contents. Users store this outpoint information in their wallets.

Transaction execution. As described in §3.1, transaction execution in Hamilton can be separated into two parts: (a) transaction-local validation, and (b) checking for UHS hash existence and execution of a locally-validated transaction.

The sentinel completes transaction-local validation of a Transfer transaction by performing the following three checks:

1. Syntactical correctness. Check that the transaction has at least one input and output, and that the transaction supplies exactly one witness per input.

2. Balance. Check that the transaction's input values tally up to exactly the same value as the outputs to be created.

3. Authorization. Check that each input UTXO is accompanied by a valid signature, relative to the input's public key, on a message comprised of the transaction's identifier txid.

See Figure 5 for Python pseudocode of a validate function specifying the transaction validation algorithm.

Once validated, a transaction is compacted. First, the sentinel derives the output UTXO serial numbers; together with the output encumbrances and values they fully specify the output UTXOs to be created. Next, the sentinel hashes the input and output UTXOs and obtains two lists of hashes, which it sends to the transaction processor, which maintains the UHS, for existence checks and execution. See Figure 6 for a pseudocode description of the transaction compaction algorithm.

The swap abstraction. Note that while the validate function does not reference any data from the state and only uses transaction-local data, the UHS, in turn, does not reference a transaction's contents and only operates on the compacted hash values. Consequently, processing Hamilton transactions at scale reduces to the challenge of implementing a fast, scalable, and durable system for executing the following kind of UHS primitive, which we call swap. We describe two such systems in §4.

A UHS system maintains a set of hashes and exposes a single operation called swap. The inputs to swap are two lists of hashes: one for existence checks and removal (called input hashes), and one for insertion (called output hashes). To execute a swap, the system atomically checks that all input hashes are present. If an input hash is missing, swap aborts. Otherwise, it obtains an updated UHS by erasing all input hashes and inserting all output hashes. All other hashes in the UHS remain unchanged. Figure 7 describes the contents of a compact transaction and how such a transaction is then processed by swap.

We note that separating transaction-local validation and execution means that with swap we can support multiple transaction formats concurrently without affecting UHS performance.

Security. Note that the transaction format itself guarantees that old and new hashes output by the compact function are unique, as the hashes commit to the entirety of pertinent transfer history up to the distinct (due to the presence of a nonce) Mints. Once swap has removed hashes from the UHS they cannot be recreated (this would require duplicate outpoints), thus ensuring that outputs cannot be double-spent and transactions cannot be replayed: the subsequent spends would be rejected by swap's existence checks, as the input hashes would not be present in the UHS. Similarly, since the swap abstraction provides atomic deletion and addition of inputs and outputs, the transaction is final once accepted and cannot be reversed. Finally, transaction IDs will never repeat for valid transactions as described above, so signatures cannot be reused once the transaction is settled: changing any aspect of the inputs or outputs of the transaction will change the transaction ID, resulting in an invalid signature.

^13 In our system we use SHA-256 both for computing UHS hashes and transaction identifiers. To make vector serialization unambiguous we also explicitly hash k and l as part of tuple serialization.
def transaction_id(transaction):
    hash_args = [len(transaction['inputs'])]
    for inp in transaction['inputs']:
        hash_args += [inp['outpoint']['transaction_id'], inp['outpoint']['index'],
                      inp['output']['public_key'], inp['output']['value']]

    hash_args += [len(transaction['outputs'])]
    for out in transaction['outputs']:
        hash_args += [out['public_key'], out['value']]

    return serialize_and_hash(hash_args)

Figure 4: Calculation of transaction identifier.
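The Mint identifier defined in §3.3, txid(txMint) := H((v_out, P_out, nonce)), can be sketched in the same style as Figure 4. This is an illustrative sketch only: the serialize_and_hash stand-in below (length-prefixed fields hashed with SHA-256) is our own assumption, not the project's actual serializer.

```python
import hashlib

def serialize_and_hash(args):
    # Stand-in serializer: encode ints as 8-byte little-endian,
    # length-prefix every field, and SHA-256 the concatenation.
    fields = (a.to_bytes(8, 'little') if isinstance(a, int) else a for a in args)
    data = b''.join(len(f).to_bytes(4, 'little') + f for f in fields)
    return hashlib.sha256(data).digest()

def mint_transaction_id(values, public_keys, nonce):
    # txid(txMint) := H((v_out, P_out, nonce)); the output count is
    # hashed explicitly so that list serialization is unambiguous
    # (as footnote 13 prescribes for k and l).
    hash_args = [len(values)]
    for pk, v in zip(public_keys, values):
        hash_args += [pk, v]
    hash_args.append(nonce)
    return serialize_and_hash(hash_args)

# The same outputs minted under different nonces yield distinct txids,
# so their outpoints (and hence UHS IDs) never collide.
pk = bytes(32)
assert mint_transaction_id([100], [pk], b'nonce-1') != \
       mint_transaction_id([100], [pk], b'nonce-2')
```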

def validate_local(transaction):
    if len(transaction['inputs']) < 1:
        return False
    if len(transaction['outputs']) < 1:
        return False
    if len(transaction['witnesses']) != len(transaction['inputs']):
        return False

    total_input_value = 0
    for inp in transaction['inputs']:
        total_input_value += inp['output']['value']

    total_output_value = 0
    for out in transaction['outputs']:
        total_output_value += out['value']

    if total_input_value != total_output_value:
        return False

    txid = transaction_id(transaction)

    for inp, wit in zip(transaction['inputs'], transaction['witnesses']):
        if not check_signature(inp['output']['public_key'], wit['signature'], txid):
            return False

    return True

Figure 5: Transaction validation algorithm.
def input_hash(input):
    hash_args = [input['outpoint']['transaction_id'], input['outpoint']['index'],
                 input['output']['public_key'], input['output']['value']]
    return serialize_and_hash(hash_args)

def compact(transaction):
    txid = transaction_id(transaction)
    input_hashes = []
    for inp in transaction['inputs']:
        h = input_hash(inp)
        input_hashes.append(h)

    output_hashes = []
    for i, out in enumerate(transaction['outputs']):
        inp = {
            'outpoint': {
                'transaction_id': txid,
                'index': i
            },
            'output': out
        }
        h = input_hash(inp)
        output_hashes.append(h)

    return (txid, input_hashes, output_hashes)

Figure 6: Calculation of UHS input hashes and transaction compaction algorithm.
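Figure 6's input_hash is, concretely, the commitment h := H(v, P, sn) from §3.2 with sn expanded to the outpoint (txid, idx). A self-contained sketch using SHA-256 follows; the byte layout here is our assumption for illustration, not Hamilton's wire format.

```python
import hashlib
import struct

def uhs_id(value: int, public_key: bytes, txid: bytes, index: int) -> bytes:
    """h := H(v, P, sn) with sn = (txid, idx), using SHA-256 as in Hamilton."""
    preimage = (txid + struct.pack('<Q', index) +
                public_key + struct.pack('<Q', value))
    return hashlib.sha256(preimage).digest()

# Equal value and encumbrance but different outpoints: the UHS IDs
# differ, so repeated payments to the same key never collide in the UHS.
pk = bytes(32)
a = uhs_id(215, pk, b'\x00' * 32, 0)
b = uhs_id(215, pk, b'\x00' * 32, 1)
assert len(a) == 32 and a != b
```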

[Figure 7 diagram: a compact transaction Tcompact (txid 81fc...d94, with a list of input hashes and a list of output hashes) is applied to UHS_old, removing the input hashes and inserting the output hashes to produce UHS_new. Example entries show each hash committing to an outpoint (txid, output index), a public key, and a value, e.g. $2.15 and $7.00.]

Figure 7: Processing of a compact transaction. As explained in §3.3, a transaction T is first validated, and after that T is compacted to obtain the corresponding compact transaction Tcompact. The compact transaction Tcompact consists of a transaction identifier (txid), input hashes (referring to previously committed transaction outputs), and output hashes (referring to outputs of the transaction T itself). To process Tcompact, the swap function atomically does the following: it checks that all input hashes of Tcompact are in the UHS, and if so it obtains an updated UHS by erasing Tcompact's input hashes (highlighted in italics) and adding Tcompact's output hashes (highlighted in bold). All other hashes in the UHS remain unchanged.
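The swap primitive is small enough to state directly in code. The following is a minimal single-process sketch of our own; in a single thread the atomicity is trivial, whereas the real execution engine must additionally be fast, durable, and distributed, as §4 discusses. The short byte strings are arbitrary example data.

```python
class UHS:
    """In-memory UTXO hash set exposing the atomic swap primitive."""

    def __init__(self, hashes=()):
        self.hashes = set(hashes)

    def swap(self, input_hashes, output_hashes):
        # Abort with no changes unless every input hash is present.
        if not all(h in self.hashes for h in input_hashes):
            return False
        self.hashes.difference_update(input_hashes)  # spend the inputs
        self.hashes.update(output_hashes)            # create the outputs
        return True

uhs = UHS({b'24ba', b'2808', b'5015'})
assert uhs.swap([b'2808', b'5015'], [b'677b', b'3e1e'])  # executes
assert not uhs.swap([b'2808'], [b'cd1d'])                # double spend aborts
```

Note how replay protection falls out of the existence check: once an input hash is erased, any transaction referencing it fails.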
3.4 Transaction protocol

A transaction protocol is the series of user actions (or actions performed by wallets on the user's behalf) needed to create and submit a transaction to the transaction processor. This includes how the recipient shares their public key with the sender, who participates in constructing and authorizing the transaction, who submits the transaction to the transaction processor, how confirmation (or rejection) is communicated, and any other actions needed for a transaction to succeed. For example, a protocol may be: (1) the recipient shares their public key with the sender, (2) the sender constructs, signs, and submits a transaction to the transaction processor, and (3) both the sender and recipient query the transaction processor (possibly repeatedly) to find out if the transaction has completed successfully. Note that once constructed and shared, either the sender or recipient could submit the transaction.

Our choice of transaction data model and format directly impacts potential transaction protocols. For example, transaction compaction for the UHS adds a new communication step requirement between sender and recipient. Note that the recipient does not need to authorize the transfer, beyond sharing a public key with the sender. This means that a sender could construct and submit a transaction without the recipient's knowledge (e.g., by reusing a public key), or without sending the recipient the constructed transaction. This would make the funds unspendable, and the recipient might not even know they exist. The recipient should not consider a payment "complete" until they have received both a confirmation from the transaction processor and the full preimage data for their new outputs. If the recipient does not receive these, the sender has essentially destroyed the funds.

In theory, other cryptocurrencies in which the recipient's address is obfuscated also have this problem. In practice, because the entire blockchain is public and standard address formats are used, recipients can scan every transaction to detect if they have been paid and, if so, construct new transactions to further spend those funds. Even if the UHS were public, recipients would not be able to unilaterally detect payments, as the output hashes are only generated during transaction construction.

This communication requirement means we cannot always safely execute certain transaction protocols, including non-interactive or "billboard" payments. We define non-interactive payments as transactions where the recipient does not need to engage with the sender at all at the time of transaction. For example, a charity may want to solicit donations in a train station by posting their public key as a QR code. If the sender did not communicate with the charity to also send the new outputs, the money would be rendered unspendable (it is controlled by the charity's public key, but the charity does not have enough information to construct a valid transaction to spend it).

One way to address this would be to have the transaction processor store the outputs as well, so the recipient could query for them later, but this would require storing public keys and amounts, which would allow users to be tracked across transactions.

An alternative transaction format could compute the hash with only the public key and value, so the recipient could deterministically find out if they have received money without needing to know exactly how it was spent. This fixes the above problem but has downsides. The swap function would need to explicitly check for and reject duplicate transaction IDs to prevent transactions from being replayed. Unlike in the format described above, it would be trivial to recreate the same input set, and thus the transaction ID, if outputs with the same public key and value were created, allowing signatures to be reused. This would effectively force users to generate new public-private key pairs for transactions of the same value, because the swap function must reject transactions that repeat the same public key and value pair.

Learning transaction confirmation. There is no public ledger of transactions, so recipients must rely on the transaction processor to learn about the status of outstanding transactions. In our system they do this by querying the transaction processor directly, but we could also consider a design where the transaction processor signs confirmed transactions so the spender could relay confirmation directly to the recipient. In §4, we introduce a service that responds to user queries about whether a transaction was successful. This service stores transaction IDs and output hashes. As described above, recipients must receive either the transaction ID or output data about the transaction before they can confirm it has been successful; this can be shared at any point after transaction construction (including before submission).

Instead of requiring the user to poll for transaction confirmation, the processor could support receipt callback endpoints. Users would specify a callback endpoint in the transaction format, and the transaction processor would push a notification to that endpoint when the transaction is complete. Users are already familiar with this payment protocol, as it is commonplace for credit card payments over the Internet: an e-mail address or phone number is provided at transaction time, and a receipt is sent to that address upon completion. It may be possible for third-party intermediaries to emerge who do nothing but provide a finality inbox service to users, much like how e-mail providers hold messages until users grab them. Importantly, this callback would not affect the execution of the transaction itself, merely the finality notification, so these intermediaries would not need to take custody of user money.

Using a receipt callback endpoint has two primary drawbacks. First, it increases data storage requirements within the UHS or within an alternative look-up service and, second, it requires high availability for the callback endpoint. To link a successful transaction from the UHS to an endpoint (e.g., an email address), the endpoint data would need to be included within the UHS. If not included, a separate service would need to scrape the endpoint data along with the transaction ID or output hashes from the validation set. This increases data retention and, accordingly, impacts privacy and likely performance. Furthermore, if the endpoint is unavailable or incorrectly specified by the user, the confirmation notification would fail, leaving polling the transaction processor as the only alternative. A central directory containing all public keys and notification endpoints would simplify the process, but creates a similar privacy risk by linking transactions to personally identifying data (e.g., email addresses).

Limitations on types of transactions supported. Hamilton only supports push payments—the sender must explicitly authorize and initiate each transaction. We do not yet support pull payments, where the sender can pre-authorize the recipient to continuously charge money to the sender, as with a subscription service. It is not clear how to support this using a UHS, because transactions, and transaction authorization, must reference the specific funds being spent.

3.5 Learnings

Constructing a payment system using a UHS showed how choices in transaction design and data storage can impact data retention requirements and transaction protocols (including potential use cases). Importantly, the flexibility of a system's transaction protocols will impact what use cases are possible and the user experience, which are critical for adoption. The amount of data the transaction processor retains, and to whom it is visible, dictates what out-of-band interactions between users are needed. During out-of-band communication, wallet communication protocols could fail or transaction data could be lost, creating edge cases where a sender no longer has the authority to spend funds and the receiver does not have the information required to reference them. We leave solving these tradeoffs and building fully functional user wallets to future work.

4 Processing transactions at scale

To illustrate how architecture design choices for the transaction processor affect the broader properties of a CBDC, we designed and implemented two architectures. This required exploring the tradeoffs in user-facing wallet software, the payment processor's back-end software, and the communication layers between them. Importantly, these transaction processing systems would require significant further development for real-world CBDC usage. We present them as examples to illustrate key ideas and facilitate discussion.

There are many other potential architectures to explore for fast transaction processing. We made several early design choices that ultimately defined other properties of the system. Examples of these early design choices include defining how users learn about execution results, or whether those results are globally linearizable, meaning that a time-based ordered list of transaction history logically exists and can be materialized [58].

In this section we describe the two architectures we implemented and evaluated for processing transactions at scale. Both would require solving significant additional challenges before they would be ready for use in a production-quality system.

4.1 Consistency

As described in §3, transaction processing can be split into transaction-local validation, existence validation, and execution, which creates opportunities for improving performance. To process more transactions, we partition the set of unspent funds across multiple computers. Transactions might reference unspent funds stored on different machines, requiring a coordination protocol to check the existence of inputs and execute transactions atomically. One way to achieve this is to first explicitly order all valid transactions and subsequently apply them to the partitioned state in the same order, if the inputs exist and have not already been spent. We investigate this type of architecture in §4.2. However, our correctness requirements do not require materializing a linear transaction history. In §4.3, we describe an architecture which uses a variant of two-phase commit [51] to achieve atomicity and serializability without actually materializing a linear order.

Our invariants suggest that we could further relax consistency requirements so transactions would not need to execute atomically. That is, the new funds could be created lazily, and a user might observe that their spent funds are not available for some time before the transferred funds are available to spend. (Note that delayed execution is quite common in today's payment systems, where settlement might even take days.) In addition, we might not require that a total order of all transactions exists (even an implicit one). Relaxing one or both of these guarantees might improve performance. We leave these explorations to future work.

As described in §3.1, in both of our implemented designs a sentinel receives a transaction from a user, performs transaction-local validation, condenses the transaction into a compact transaction, and sends it to the execution engine to enact the transfer and update the UHS.
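To make the coordination problem above concrete, consider routing UHS IDs to shards by hash prefix. This is a simplified stand-in of our own for the contiguous UHS-ID ranges shards actually own: whenever one compact transaction's input hashes map to more than one shard, the existence checks and deletions span machines and must be coordinated.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; a real deployment sizes this to load

def shard_for(uhs_id: bytes) -> int:
    # Route each 32-byte UHS ID to a shard by its leading byte.
    return uhs_id[0] % NUM_SHARDS

# Three inputs of one compact transaction, as UHS IDs:
inputs = [hashlib.sha256(bytes([i])).digest() for i in range(3)]
touched = {shard_for(h) for h in inputs}
# If len(touched) > 1, executing the transaction atomically requires
# cross-shard coordination (ordering via an atomizer, or 2PC).
assert all(0 <= s < NUM_SHARDS for s in touched)
```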
4.2 Atomizer design

This design takes a two-stage pipelined approach: users submit transactions to sentinels and then subscribe to a watchtower to learn transaction status. Shards, each of which stores some portion of the set of unspent outputs (the UHS), receive compact transactions from the sentinels. Shards check to see if the inputs to a transaction exist, and then send this information to an ordering server we call an atomizer, which produces a linear ordering of transactions in blocks of state updates to the UHS. These blocks are made durable on the archiver. Finally, each block is broadcast and applied atomically in order (by block height) to each shard in parallel. Each shard keeps track of its current block height. The watchtower also digests blocks and keeps state on transaction status for users.

Figure 8 shows a diagram of the components in the atomizer architecture and the data flow between components. The order of messages during normal transaction execution is described below:

1. User wallet submits a valid transaction to the sentinel for execution by the system.

2. Sentinel validates the transaction and responds to the user that the transaction is valid and is now pending execution.

3. Sentinel converts the transaction to a compact transaction and forwards it to the shards.

4. Shards check that the input UHS IDs are unspent and forward the compact transaction to the atomizer. The shards attach to the notification their current block height and the list of input indexes the shard is attesting are unspent.

5. Atomizer collects notifications from shards and appends the compact transaction to its current block once a full set of attestations for all transaction input UHS IDs has been received. Once the make-block timer has expired, the atomizer seals the current block and broadcasts it to listeners. Shards update their current block height and their set of unspent UHS IDs by deleting UHS IDs spent by transactions in the block and inserting newly created UHS IDs. The watchtower updates its cache of UHS IDs to indicate which have been spent and created recently.

6. User wallet queries the watchtower to determine whether their transaction has been successfully executed.

7. Watchtower responds to the user wallet to confirm the transaction has succeeded.

Figure 8: System diagram for the atomizer architecture and inter-component data flow.
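Steps 4 and 5 above (attestation collection and block sealing) can be sketched as follows. This is our simplified, single-process illustration: it omits the block-height checks, duplicate-input rejection, and replication the real atomizer performs (discussed in §4.2.3).

```python
class Atomizer:
    """Collects shard attestations; seals a block when the timer fires."""

    def __init__(self):
        self.pending = {}        # txid -> set of attested input indexes
        self.needed = {}         # txid -> (compact_tx, total input count)
        self.current_block = []  # fully attested compact transactions

    def receive_attestation(self, compact_tx, attested_indexes):
        txid, input_hashes, _ = compact_tx
        self.needed[txid] = (compact_tx, len(input_hashes))
        got = self.pending.setdefault(txid, set())
        got.update(attested_indexes)
        if len(got) == self.needed[txid][1]:   # full set of attestations
            self.current_block.append(compact_tx)
            del self.pending[txid], self.needed[txid]

    def make_block(self):
        # Called when the make-block timer expires: seal and broadcast.
        block, self.current_block = self.current_block, []
        return block

atomizer = Atomizer()
tx = (b'txid-1', [b'in-0', b'in-1'], [b'out-0'])
atomizer.receive_attestation(tx, {0})  # shard covering input 0
atomizer.receive_attestation(tx, {1})  # shard covering input 1
assert atomizer.make_block() == [tx]   # fully attested, enters the block
```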
4.2.1 Validating transactions

The sentinel is responsible for validating all transaction rules except the existence of inputs. This includes checking that the transaction is correctly formatted, that it preserves funds, and that any necessary signatures are present and valid. If a transaction does not meet these criteria, the sentinel will return an error to the user without forwarding the transaction for further processing. We could extend the transaction format and sentinel validation to support more complex encumbrances in the future.

Assuming the sentinel validates the transaction successfully, it converts the transaction to a compact transaction and broadcasts this to the shards. As described in §3, a compact transaction is the minimal data necessary to validate that all transaction inputs are present in the UHS, and update the UHS by deleting spent inputs and inserting new outputs. Each shard is responsible for a range of UHS IDs. Relevant shards (responsible for a UHS ID range covering input UHS IDs in a transaction) will check if the inputs exist. If they do, they will form an attestation for the atomizer, which contains the compact transaction, a list of the input UHS ID indexes the shard is attesting are unspent, and the block height for which the attestations are valid. This means an attestation is a confirmation by the shard that the input exists as of a specific block height. Note that the shard does not remove inputs or change state in any way at this point, and might attest to the same input across multiple transactions. This could conceivably result in a double spend but is prevented by the atomizer, as described below.

4.2.2 Ordering transactions

The atomizer collects, processes, and applies attestations from shards. The atomizer stores attestations by block height and transaction ID, and when a transaction has a complete set of attestations at the latest block height (we will relax this requirement later), the atomizer considers it for a block. In our implementation, the atomizer produces blocks on a specific schedule (in §6, every 250ms), but could also produce blocks when a certain number of complete transactions are ready to be included in a block. The atomizer creates a block of complete compact transactions, based on the order in which a complete set of input attestations were received for the transaction. Importantly, the atomizer does not include transactions containing inputs already referenced by another transaction in the block, even if both transactions have a full set of input attestations. The atomizer assigns the block the next sequential block height, makes the block durable, and then broadcasts the block to the shards, watchtower, and archiver. A transaction is considered finalized (meaning its effects will eventually be visible to users) once the block is made durable, as described in §4.2.4.

4.2.3 Updating state

As shards receive blocks, they atomically apply the blocks to their local state; they remove any inputs that were spent in the block and insert new UHS IDs created in the block into their local data stores. (Each shard does this for its own UHS ID range.) Once it has completely processed a block, the shard updates its block height to be used in future attestations.

Watchtowers receive blocks from the atomizer and maintain a time-limited cache of recently executed compact transactions. Users can query the watchtower by transaction ID to find out whether the system has successfully executed their transactions. Another service could provide longer-term, historical transaction status by reading the blocks from the archiver and maintaining an index, much like a cryptocurrency block explorer.

Correctness relies on the atomizer as an ordering server. In the design described above, an atomizer will not consider an attestation if it is not marked with the latest block height. A shard also will not update its block height until it has fully processed the previous block's updates, so at the time the shard produces an attestation the atomizer will accept, it must have processed the previous block, destroying any spent inputs. A shard might attest to the same input twice at one block height for different transactions, but the atomizer will deduplicate this and allow only one of the transactions into a block, whichever receives a full set of input attestations first.

The reliance on block height for attestations creates a synchronization loop problem between shards and the atomizer. A shard's attestations may no longer be valid after the atomizer updates its block height and before the shard processes new blocks. To allow the use of attestations that are still valid but not current (i.e., UHS IDs still not spent as of a certain block height), we introduce a spent transaction output (STXO) cache in the atomizer. If an attestation has a non-current block height, the atomizer checks in the STXO cache if the attestation's UHS ID has been spent in recent blocks. If not, the attestation is still valid and the transaction can proceed. The STXO cache depth determines the maximum usable attestation "age" (i.e., the difference between the block heights of the attestation and the atomizer). With each new block produced, the atomizer adds newly spent UHS IDs to its STXO cache and discards UHS IDs older than the cache's depth.

The STXO cache significantly improves performance because stale attestations can still be considered by the atomizer across block boundaries. Furthermore, the atomizer's STXO cache makes it possible for shards to process new compact transactions from sentinels in parallel with digesting a block. By taking a snapshot of its existing UHS partition before processing a block at
height h, the shard can issue attestations with the snapshot's block height of h − 1. Once the shard has fully digested the block, the old snapshot can be discarded and attestations will reference the latest block height h.

4.2.4 Fault tolerance

The atomizer operates in a replicated state machine; in our implementation we use Raft [73]. We replicate inputs to the atomizer's functions to process transactions, make blocks, and prune blocks (shard attestations, complete transactions, and block heights). The replication process makes sure that blocks are replicated across atomizers; the lead atomizer (and at least half the replicas) will remember the block until an archiver has received it and notified the lead atomizer that the block is safe to prune. The lead atomizer will make sure this operation is replicated. Archivers are the long-term storage for historical data in the system to reduce the storage requirement for the atomizer. A block, and thus the transactions contained within it, is committed once the command to produce the block has been replicated by the atomizer state machine. At this point, a majority of the atomizer replicas have the state necessary to broadcast the block.

Interestingly, shards do not require consensus to stay up to date, since they apply blocks from the atomizer in block-height order. We can replicate shards by simply creating shards with overlapping UHS ID ranges. Each shard range copy can process blocks and provide correct attestations as of their current block height. Sentinels can send transactions to any of the copies of a shard range; if one fails, it can try another. Note that if a replica is out of date (has not yet processed the most recent block outside the atomizer's STXO cache) its attestations will be discarded at the atomizer; this would require the user or sentinel to retry the transaction. The atomizer must broadcast blocks to all shard replicas.

If there is a leadership change in the atomizer Raft cluster after a MakeBlock command has been replicated but before the resulting block has been broadcast to the archiver, the archiver will request the missing block from the new atomizer leader once a subsequent block is received and the discontinuity is recognized. Similarly, shards and watchtowers will request missing blocks from the archiver, allowing them to catch up after an atomizer leadership change. Since blocks are stored by the atomizer cluster until an archiver has backed them up, there is no risk of blocks being lost even if broadcasting them fails.

Sentinels do not need to retain state and thus do not need state recovery. New sentinels may be spawned at any time to support higher loads or drained as load decreases.

Shard state will be a consistent but possibly stale view of the overall UHS maintained by the system. Shard data loss is prevented by replicating each UHS hash on more than one shard. If a shard becomes unavailable, the service can still be maintained if the replication factor does not fall below one. A replacement shard can be created by either re-applying the blocks stored by the archiver or copying the required state from other replica shards. Additionally, if a shard falls more than one block behind, the shard can get the missed block(s) from the archiver and apply them to catch up. Blocks not yet in the archiver can be retrieved from the atomizer leader, making the atomizer the real-time source of consistency and synchronization between all system components.

The archiver is the historical record of state transitions of the overall UHS and can be used for recovery in the event of component failures or network degradation. If archive data is lost, the system can continue to operate as long as all shards remain synchronized with the atomizer. However, in this case, future shard reconstruction from archive data would be impossible. To alleviate this problem and speed up shard recovery from blocks in the archive, periodic snapshots of shard state at regular block heights could be taken. This would require fewer blocks to be processed while reconstructing shard state and would also remove the necessity of the archiver to store blocks prior to the most recent snapshot. In this way, the archiver could be recovered to full functionality.

4.2.5 Preventing double spends

Suppose an adversary tries to double-spend an output that was previously spent in a transaction confirmed a long time in the past (e.g., minutes). Typically, this will be caught and prevented at the shard layer. A shard copy that is responsible for the UHS ID of the input that references the previously spent output will check its UHS range and see that the UHS ID is not present, and thus not spendable. The shard will thus not forward an attestation to the atomizer for the offending input. Since the atomizer will never receive a full set of attestations for each input to the malicious compact transaction, the transaction will not be included in a block and therefore will not execute. The atomizer eventually discards these incomplete compact transactions.

Consider the case where two transactions (txA and txB) are submitted concurrently and double spend an output o; this means each transaction references o in an input. Assume that this double spend succeeds; this means that the UHS ID for this output is attested to twice at block heights h1 and h2, by shards s1 and s2. Assume the atomizer is at height h, and there is no STXO cache.

• Case 1: h1 = h2: shards s1 and s2 (these might be the same shard) will send attestations a1 and a2 with heights h1 and h2; h1 = h2 must be ≤ h (if the
atomizer is at height h, it could not have broadcast a previous block higher than h). The atomizer will receive attestations a1 and a2 in some order; assume it is a1 first. If there is no new block created before the atomizer receives a2, then later when making a block the atomizer will detect the duplicate attestation a2 in txB and discard txB. If there is a new block created, then the atomizer will be at height h + 1 > h, which means h + 1 > h2 and the atomizer will discard a2 because its height is not current.

• Case 2: Assume h1 < h2. If h1 < h2 < h, then the atomizer will discard both attestations. If h1 < h2 = h (h2 cannot be greater than h for the reason above), then the atomizer will discard s1's attestation when it is received, but accept s2's because it is up-to-date.

Since both attestations will not be accepted by the atomizer, txA and txB cannot both succeed.

We can extend this argument to include the STXO cache by considering that if h1 < h2 < h the atomizer will reject the attestation from shard s1 if it is too old (h1 has been phased out of the cache), and reject the attestation from s2 if the attestation from s1 at height h1 is still in the cache.

4.2.6 Watchtower

The atomizer design uses a queryable watchtower to efficiently communicate a transaction's success to users. A transaction reaches finality (i.e., success) when the atomizer includes its compact version (containing input hashes, output hashes, and transaction ID) in a finished block. The simplest way to notify users would be for the system to broadcast completed blocks to all users, and require users to check each block for their transaction ID. This is analogous to how each node in the Bitcoin network stores the entire block history. Given this system's throughput requirement of 100,000 transactions per second, this high volume of transactions would create unreasonable bandwidth and processing demands for users. Similarly, broadly sharing the complete transaction history would undermine privacy (e.g., the transaction graph could be seen).

Instead, the system provides a watchtower which aggregates error messages from system components and blocks from the atomizer, and stores an index of recently confirmed transactions and errors to share with authorized clients upon their request. Users query a watchtower with a transaction ID and UHS IDs, and the watchtower returns the status of the UHS IDs corresponding to the given transaction ID within the system to indicate whether or not it was successful.

By requiring a tuple of transaction ID and UHS IDs as the watchtower query payload from users, the watchtower reveals minimal information about transactions. A recipient of funds in a transaction will be able to query about the status of their own outputs, but they cannot learn the status of the transaction inputs or other outputs which the sender has not shared with them. Similarly, the sender of funds in a transaction can confirm that the system accepted the outputs they created, but they cannot learn how the recipient spends those outputs, since the sender will not know the transaction ID for the transaction in which the recipient spends those outputs. For additional privacy, the watchtower could challenge the user to produce a signature for the UHS ID they are querying to ensure the user actually has the ability to spend the given output.

4.3 Two-phase commit design

In this architecture, shards use variants of two-phase commit and conservative two-phase locking [43] to atomically apply transactions to the UHS. There is no materialized order of transactions, though two-phase commit ensures serializability. There are two components: transaction coordinators and shards. Each logical shard is responsible for a subset of the UHS IDs which are unspent within the system, in the same fashion as in the atomizer architecture. Unlike in the atomizer design, there are no blocks, archivers, or atomizers; shards do not have any notion of block height; sentinels are responsible for communicating transaction status back to users synchronously; and we require a replication protocol for shard fault tolerance.

Figure 9 shows a diagram of the components in the 2PC architecture and the data flow between components. The order of messages during a single transaction's successful execution is described below:

1. User wallet submits a valid transaction to sentinel.

2. Sentinel converts the transaction to a compact transaction and forwards it to the coordinator.

3. Coordinator splits input and output UHS IDs to be relevant for each shard and issues a prepare with each UHS ID subset.

4. Each shard locks the relevant input IDs and reserves output IDs, records data about the transaction locally, and responds to coordinator indicating it was successful.

5. Coordinator issues a commit to each shard.

6. Each shard finalizes the transaction by atomically deleting the input IDs, creating the output IDs, and updating local transaction state about the status of the transaction. The shard then responds to coordinator to indicate that the commit was successful.
7. Coordinator issues a discard to each shard informing them that the transaction is now complete and it can forget the relevant transaction state.

8. Coordinator responds to sentinel indicating that the transaction was successfully executed.

9. Sentinel responds to user wallet, forwarding the success response from coordinator.

Figure 9: System diagram for the 2PC architecture and inter-component data flow

4.3.1 Batching Transactions

Instead of processing one transaction at a time, a coordinator receives compact transactions from sentinels (the same as in §4.2) and adds them to a batch of many compact transactions, which represents a single distributed transaction, or a dtxn. After a delay, or when a batch has reached a size threshold, a coordinator initiates the protocol to try to commit the transaction batch. Many coordinators could create and execute dtxns in parallel. There are two phases to commit a dtxn:

1. Lock. The coordinator contacts each shard responsible for a UHS ID included in the batch and requests that it durably lock the input UHS IDs and reserve the output UHS IDs. (Note that in the transaction format described in §3.3, output UHS IDs are guaranteed to be unique across transactions by the nature of our transaction format, so this reservation is not strictly necessary. It is possible that in other transaction format designs UHS IDs will not be guaranteed to be unique, so we do not assume this and reserve outputs.) Each shard responds to the request indicating which transactions in the batch had their IDs successfully locked or reserved, and which no longer exist or were already locked/reserved by a different dtxn.

2. Apply. The coordinator uses the shards' responses to determine which compact transactions in the batch can be completed, and which cannot complete because some of the inputs are unavailable or already locked. The coordinator makes this decision durable and then contacts each shard again to indicate which transactions in the dtxn batch to complete and which to cancel. Each shard then atomically unlocks the input UHS IDs belonging to a canceled transaction, and deletes input UHS IDs and creates the output UHS IDs for successful transactions.

Once every shard participating in the batch has completed the second phase, the coordinator informs each sentinel whether its transactions were successfully executed or rejected by the shards. The sentinels in turn forward these responses to the users who submitted the transactions.

It is possible that if two concurrent transactions by different transaction coordinators spend the same inputs, neither will succeed, because both will be canceled due to observing the other's lock conflicts. This means that at least one will need to be retried, which is left to the user's wallet. An adversary could try to continually conflict a user's transaction by spending the same input. However, this requires the adversary to have the authorization to spend the same input. Investigating methods to fairly resolve concurrency conflicts is left to future work.

Batching many user payments into larger distributed transactions amortizes the cost of making the result of each phase of the protocol durable on each shard, whether by flushing to persistent storage or replicating as part of a distributed state machine.

Because our application semantics are constrained, this is slightly different from traditional two-phase commit in that dtxns always complete successfully, and individual compact transactions are executed (or not) deterministically: if all of a compact transaction's input UHS IDs are locked and output UHS IDs are reserved, the compact transaction will succeed. The transaction coordinator always completes both phases of dtxns, even if some of the compact transactions within do not succeed. General 2PC designs need to support transaction coordinators that might make arbitrary decisions about whether to commit or abort transactions.
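A shard's role in the Lock and Apply phases described above can be sketched as follows. This is a simplified illustration with invented names, not the project's implementation; it omits batching, output reservation, durability, and Raft replication:

```cpp
#include <set>
#include <string>
#include <vector>

// Illustrative sketch of a shard's part in the Lock and Apply phases.
// Invented names; the real implementation also makes each phase durable.
struct Shard {
    std::set<std::string> unspent;  // unspent UHS IDs in this shard's range
    std::set<std::string> locked;   // input UHS IDs locked by an in-flight dtxn

    // Lock phase: an input can be locked only if it exists and is not
    // already locked by a different dtxn.
    bool lock_inputs(const std::vector<std::string>& inputs) {
        for (const auto& id : inputs)
            if (unspent.count(id) == 0 || locked.count(id) > 0) return false;
        for (const auto& id : inputs) locked.insert(id);
        return true;
    }

    // Apply phase, completed transaction: atomically delete inputs and
    // create outputs.
    void complete(const std::vector<std::string>& inputs,
                  const std::vector<std::string>& outputs) {
        for (const auto& id : inputs) { locked.erase(id); unspent.erase(id); }
        for (const auto& id : outputs) unspent.insert(id);
    }

    // Apply phase, canceled transaction: just release the locks.
    void cancel(const std::vector<std::string>& inputs) {
        for (const auto& id : inputs) locked.erase(id);
    }
};
```

A conflicting dtxn observes either the lock or the absence of the input, which is the property the double-spend argument in §4.3.3 relies on.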
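Because execution is deterministic, the coordinator's apply-phase decision reduces to checking, per compact transaction in the dtxn batch, that every input lock succeeded. A minimal sketch with invented names (durability and replication omitted):

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch of the coordinator's deterministic apply decision for a dtxn
// batch: a compact transaction completes iff every one of its input UHS ID
// locks succeeded. Invented names; illustration only.
std::map<std::string, bool> decide_apply(
    const std::map<std::string, std::vector<bool>>& lock_reports) {
    std::map<std::string, bool> decision;  // tx ID -> complete (true) / cancel
    for (const auto& [txid, locks] : lock_reports) {
        bool all_locked = true;
        for (bool ok : locks) all_locked = all_locked && ok;
        decision[txid] = all_locked;
    }
    return decision;
}
```

The coordinator makes this decision durable before the apply phase, so a new coordinator leader can repeat the phase with the same outcome.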
4.3.2 Fault Tolerance

Each transaction coordinator and shard is made fault tolerant via a replicated state machine. Our implementation uses Raft. Sentinels maintain state for the duration of the user wallet request to return transaction status to the user. If a sentinel fails before a client request has been forwarded to a coordinator, the user's wallet will need to retry its transaction with another sentinel.

Only the leader node in the transaction coordinator Raft cluster actively processes dtxns; followers simply replicate the inputs to each phase of the dtxn. Before initiating each phase of the distributed transaction, the coordinator replicates the inputs to both the lock and apply commands to each shard. Shards remember which phase each dtxn has last executed and the response to the coordinator. If the coordinator leader changes mid-dtxn, the new leader reads the list of active dtxns from the coordinator state machine and continues each dtxn from the start of its most recent phase. Shards that have already completed the phase will return the stored response to the new coordinator leader. To ensure proper completion of the apply phase across all shards, shards will remember the response for the apply phase until the coordinator has received responses from all shards in the dtxn and issued a "discard" message to inform shards the dtxn is complete and can be forgotten. Note that discards can be applied lazily, and the transaction coordinators can inform the sentinels the transactions were successful before issuing the discard.

Similar to coordinators, only the leader in a given shard cluster processes dtxns and responds to sentinels. Although followers do not handle RPCs, they maintain the same UHS as the shard leader, so they are prepared to take over processing RPCs if the leader fails, without a specific recovery procedure beyond that provided by the Raft protocol. Once a dtxn has entered the lock phase and has been replicated by the coordinator cluster, the dtxn will always run to completion. If a shard leader fails mid-transaction, the coordinator leader will retry requests until a new shard leader processes and responds to the request.

If a user's wallet loses connection to its sentinel while waiting for a response to its transaction, that response will be lost and the wallet will have to query the shards to discover whether its transaction has succeeded or whether it needs to be retried.

4.3.3 Preventing double spends

Assume there is a double spend of output o by txA and txB (as described in §4.2.5). The UHS ID u for output o is handled by at most one shard cluster. In order for txA to succeed, the transaction coordinator handling the compacted txA (c(txA)) must submit a dtxn containing c(txA) which locks u. If c(txA) succeeds, then the transaction coordinator will eventually call apply and the shard will destroy u. For c(txB) to succeed, another transaction coordinator must also lock u, but it cannot do so without contacting the same shard and seeing either that u has already been locked by the transaction coordinator executing c(txA) or that u no longer exists.

4.3.4 Comparison to atomizer design

There are two primary differences between the 2PC and atomizer architectures. First, the 2PC architecture does not materialize an immediately available total ordering of transactions, which the atomizer architecture does through a sequence of blocks. Although it might be possible to generate a partial ordering of transactions post-execution using a technique such as Lamport timestamps [64], this is left to future work. Ultimately, however, in two-phase commit, unrelated transactions could execute in any order while maintaining serializability and correctness from double-spends. This difference may have negative implications for future auditability but positive implications for the privacy of the system from post-execution transaction flow analysis. What's more, relaxing the requirement for a total ordering removes the primary bottleneck in the atomizer architecture (the atomizer cluster itself). As shown in §6, this means the 2PC architecture can scale linearly in throughput by deploying additional shards and transaction coordinators, whereas the atomizer architecture is limited by the resource constraints (network bandwidth and CPU) of a single server, the atomizer leader.

Second, the atomizer uses asynchronous communication between components whereas the 2PC architecture uses typical synchronous remote procedure calls for inter-component communication. Using blocks to coordinate state between individual components makes the consistency and replication story for the atomizer simpler, but it also means that transactions can fail for transitory reasons related to inter-component message timing that are opaque to the end user. When the atomizer-based system is operating at or close to peak capacity, or during degraded network conditions, users may have to retry their transaction multiple times before successful execution if a shard or the atomizer is overloaded and cannot provide or validate attestations before they expire. Furthermore, since transaction status and error reporting is handled entirely by the watchtower, users will need to actively poll the watchtower at the time of the transaction to discover its result.

In contrast, 2PC uses a more complex availability and consistency strategy that relies on replicated state machines for shards and coordinators. This adds significantly more code complexity, increasing the attack surface for exploiting bugs. It also requires careful consideration of how to safely recover partially completed
dtxns. However, from an end-user perspective, much less complex software is required to successfully complete a transaction. Once a coordinator has replicated a user's transaction, it will always run to completion, and the user will receive a success or error response directly from the sentinel that received their transaction. Furthermore, as shown in §6.2.1, the lack of message timing complexity between internal system components and the lack of fixed inter-block delays results in reduced transaction tail latency for the same throughput. Users would only need to retry transactions in the rare case of simultaneous failure of several internal system components, and never once replicated by a coordinator. The 2PC system itself ensures successful completion of transactions rather than depending on the user to work within the best-effort semantics provided by the atomizer architecture.

4.4 Considering blockchain technology

Many have suggested using blockchain technology to design a central bank digital currency; blockchain technology has been used to refer to a wide range of technologies comprising distributed consensus protocols, hashing, digital signatures, zero-knowledge proofs, and distributed databases. Many of these technologies predate the first time the term was used in Bitcoin [68].

We found that using a blockchain-based system in its entirety was not a good match for our requirements. The first reason is performance. Byzantine fault tolerant consensus algorithms and other new blockchain consensus protocols generally provide lower performance than Raft, and any single state machine architecture will be limited by the resources of one server.14 Our atomizer architecture is inspired, in part, by a permissioned blockchain design. Though we minimized the functionality in the atomizer to just deduplicating inputs, we were unable to achieve throughput greater than 170K transactions per second in a geo-replicated environment; the cause was network bandwidth limitations between replicas in other regions. If bandwidth constraints are relaxed, computation in the leader atomizer to manage Raft replication and execute the state machine becomes the bottleneck. Section 6 describes bottlenecks and the performance of the atomizer under different workloads.

14 Layer 2 designs can provide higher throughput, but add timing complexity and have different security assumptions.

Second, there was no requirement to distribute trust amongst a set of distrusting participants. The transaction processing platform is, by its nature, controlled and governed by a central administrator, the central bank. Blockchains use relatively new distributed consensus protocols which operate in a very different adversarial environment. This introduces software and operational complexity. A CBDC has different adversarial assumptions and should rely on the simplest, most well-understood, well-tested protocols to achieve its goals.

Note that it might be beneficial to distribute read-only copies of the data to other actors for auditing purposes. This can be done in many architectures and must be carefully balanced with data privacy and performance considerations. Given a workload target of 100K transactions per second and a minimum transaction size of 64 bytes, this would require transferring over 500GB of data per day, which is out of scope for most users. In the next phase of work, we intend to explore adding forms of cryptographic auditing that do not require replicating all transactions.

Reasons to consider blockchain technology. Central banks that wish to distribute trust and governance might still consider blockchain technology for their implementations, and it might make sense to use blockchain technology if CBDC designers decide that intermediaries should run nodes in the system that validate and execute transactions. The state-of-the-art in blockchain performance is improving, which might remove this concern as a factor in the future.

5 Implementation

We implemented the 2PC and atomizer architectures as a set of standalone applications in C++. We used C++17, the most recent C++ specification that was widely supported by mainline compilers at the time, supporting builds using both GNU GCC and LLVM Clang. The codebase15 has been tested on Linux and macOS but should be portable to any UNIX-like system with relatively minimal changes. The primary dependence on a UNIX-compatible API is our use of UNIX sockets for network communication. Aside from that, the codebase uses only standard C++ and some third-party libraries, so it may be portable to non-UNIX systems in the future.

15 Published at https://github.com/mit-dci/opencbdc-tx

Clients communicate with the sentinel and watchtower components via a custom serialization protocol, over single, short-lived TCP connections. Our watchtower implementation accepts polling client status requests. We anticipate that users will have different status confirmation needs regarding latency, client overhead, interoperability, and range of historical data availability. Future implementations could allow clients to make archival queries for historical transaction information, reduce latency by adopting a more sophisticated publish/subscribe design where clients could subscribe to asynchronous updates for pending transactions, or accept queries via alternate protocols or more standard serialization formats.

We used four third-party libraries for the core codebase: LevelDB [56], NuRaft [41], libsecp256k1 [18], and vendored components from Bitcoin Core [17]. We use LevelDB for internal shard storage and atomic write
transactions as well as a persistent implementation of a Raft log. We use NuRaft to provide the Raft replicated state machine abstraction used for fault tolerance in the atomizer, 2PC shards, and 2PC coordinators.

From Bitcoin Core, we use libsecp256k1 for BIP-340 compatible Schnorr signatures [90], which we use as our public-key signature scheme for transactions. We also use the cryptography components of Bitcoin Core to provide optimized implementations of SHA256 [71], used as the cryptographic hash function in the codebase, SipHash [5], used for hashmaps, and bech32 [88, 89], used for error-correcting public key encoding. Unit and integration tests also require the GoogleTest [55] framework, but the framework is not required to build and run the main codebase.

6 Evaluation

In this section, we evaluate and compare the atomizer and 2PC architectures against our original project requirements of high throughput and low latency, ability to tolerate the failure of multiple data center regions, and performance under a variety of workloads. We also describe our benchmarking environment.

6.1 Setup

For benchmarking and testing we deployed the codebase in Amazon Web Services (AWS) using EC2 virtual servers. All servers run Ubuntu 20.04. Atomizers, shards, coordinators, watchtowers, and 2PC sentinels used c5n.2xlarge instances (8 vCPUs, 21GB RAM) whereas load generators and atomizer sentinels used c5n.large instances (2 vCPUs, 5.25GB RAM). Both instance types are virtualized, so the underlying hardware is being shared by other virtual machines operated by other AWS customers. Each EC2 instance has a network interface card (NIC) that provides up to 25 Gbps to Amazon's internal network, and used elastic block store (EBS) volumes for persistent storage rather than local disks.

We ran the system components in three geographical regions, Virginia (us-east-1), Ohio (us-east-2), and Oregon (us-west-2), with VPC peering connections between each region utilizing Amazon's private network rather than the public Internet. Unless otherwise stated, all 2PC shards, coordinators, and atomizers were replicated by a factor of three (one node in each region, tolerating one failure per Raft cluster), and atomizer shards replicated by a factor of two (tolerating one failure per UHS ID range). Replicated components were equally distributed between regions to simulate conditions where the entire system is geo-replicated and tolerant to the loss of an entire region. Similarly, non-replicated components such as sentinels, load generators, and watchtowers were equally distributed between regions to simulate offered load from across the United States. We modified the default TCP window size [42] settings to increase the maximum window size to account for the high bandwidth, high latency links between servers in different regions. Without this change we were unable to maximize a single TCP connection over long distances, hurting performance at the edge of the transaction throughput envelope.

Measurement and generating load. Load generators are simulated wallets that manage their own set of unspent outputs, public and private key pairs, and pending transactions. Load generators create and sign transactions and wait for confirmations from the sentinel in the 2PC architecture, or query a watchtower in the atomizer architecture. We simulate both the sender and the receiver querying transaction status separately. Latency is measured in the load generator as the time taken between the sender broadcasting a transaction and receiving a confirmation. In the 2PC architecture, load generators also record transaction throughput, and the values are aggregated to produce throughput values over time for the overall system. In the atomizer architecture, the archiver calculates the transaction throughput based on the number of transactions in each block and the time between blocks. Since sentinels are not replicated and can be scaled independently of the remainder of the system components, each load generator is paired with a sentinel in a one-to-one relationship so that static transaction checks, signature validation, and conversion to compact transactions are not a bottleneck for overall system throughput and latency. Load generators start with a fixed number of outputs minted in the system and send transactions as fast as they can, limited by the speed of their virtual machine and the number of outputs available to spend due to existing pending transactions. Unless otherwise stated, load generators produce transactions with two inputs and two outputs.

If, in an experiment, Raft clusters are unable to reliably replicate data between all online nodes in the cluster, we discard the data point. This is to avoid counting high throughput or low latency numbers in which data is not fully replicated as expected; we intend to show where bandwidth constraints between regions or variations in virtual machine performance prevent reliable fault tolerance for a given workload.

For peak finding, we ran sweeps with increasing load over a number of system configurations. To select the peak throughput configuration where the system was not overloaded, we only considered results where the average tail latency was below 5 seconds, with the maximum below 15 seconds, and which completed successfully at least 3 times, or for the majority of test runs. Once the peak configuration was identified, we acquired at least 3 data points to plot the throughput and latency results irrespective of the individual latency values. For scalability plots,

we included all experiments that completed successfully regardless of the latency values.

[Plot: peak throughput (TX/s, ×10^6) versus logical shard count (1–32) for 2PC and Atomizer.]

Figure 10: Peak throughput of the atomizer and 2PC architectures when varying logical shard count to be 1, 2, 4, . . . , 32.

6.2 Scalability

In this subsection, we consider two forms of system performance and scalability. The first relates to increased load from users in terms of transactions per second, and how varying the number of system components affects the maximum supported transaction throughput and tail latency. The second explores how increasing the size of the unspent transaction output set affects the performance metrics of both architectures. These two experiments compare how readily the architectures could support a high number of users.

6.2.1 Throughput and latency

Figure 10 compares the peak transaction throughput between the atomizer and 2PC as the number of logical shards increases. The atomizer architecture has a peak throughput of 170,000 transactions per second, beyond which adding additional shards fails to increase throughput, whereas the 2PC architecture scales linearly as the number of logical shards increases, up to 1.7 million transactions per second, though we expect peak throughput would continue to increase with more shards. The atomizer itself is the limiting factor for the overall system, as all transaction notifications have to be routed via the atomizer Raft cluster. Adding more shards actually increases the network bandwidth and computation required of the atomizer leader as there are more block subscribers, as can be seen by the drop in performance between eight and sixteen shards. The leader is unable to serve both the followers in the atomizer Raft cluster and the subscribed shards and watchtowers with its available network bandwidth and compute resources. These constraints could be alleviated through an extra service purely responsible for distributing blocks to shards so that there are fewer subscribers to the leader atomizer node, or by using IP multicast. However, since the leader atomizer node has to ingest and replicate all transaction notifications from shards, there will always be a bottleneck at the atomizer cluster even if block distribution is offloaded to another service. This is the key drawback of the atomizer architecture and the cost of generating a total ordering of all transactions.

[Plots: four panels showing atomizer and 2PC average throughput (TX/s) and 99% latency (ms) versus number of clients, for logical shard counts 1–16 (atomizer) and 1–32 (2PC).]

Figure 11: Atomizer and 2PC peak (considering clients) average throughput and 99% latency at peak average throughput with varying logical shard count and clients. In 2PC there are the same number of coordinators as shards.

Diving deeper, Figure 11 shows the throughput and latency varying the number of clients for different shard counts for both architectures. Recall we only include data points where the system was not overloaded and the transaction data could be replicated reliably between all regions containing nodes. Often benchmarks with greater offered load succeeded when others with less load failed based on the above criteria, or there were large variations in latency between experiments using the same system configuration. We suspect that this is because of variation in the peak network bandwidth and compute available when running the benchmark in AWS, due to operating on shared hardware and network links. Since we were unable to control for these variations, many of the plots show large error bars, and may contain data points where an experiment was retried multiple times to obtain at least three results. It is possible that the variability is actually due to our system design or its implementation, but a controlled testing environment would be required to evaluate this hypothesis.

Here we see 2PC does not have a drop-off in performance, supporting a greater offered load by increasing the number of shards. Additionally, if a lower tail latency is desired for a particular transaction throughput, increasing the number of shards can decrease tail latency for the same offered load. Crucially, the 2PC architecture has no experimentally demonstrated bottleneck and can support more throughput without trading off tail latency by scaling the number of shard clusters. In the worst case each transaction requires the participation of a subset of shards equal to the number of inputs and outputs in the transaction. Since transactions in the test load have an upper bounded number of inputs and outputs, increasing the number of shards results in each transaction requiring the participation of a smaller proportion of the total shards in the system. By contrast, the atomizer architecture has a clear peak throughput plateau with 8 shards, where increasing to 16 nodes results in a drop in peak throughput.

6.2.2 Database Size

Figure 12 compares how the transaction throughput and tail latency for both architectures change as the number of unspent outputs increases, with the number of shards fixed at 8. The plot shows that the atomizer architecture can handle up to 100 million outputs with minimal effect on transaction throughput and latency. At one billion outputs, throughput suffers slightly for the same offered load. Recall that the shards must store the UHS on disk, meaning that as the size of the database grows, each lookup of a UHS ID and update of the UHS when a block is processed takes longer. Thus, peak throughput might have to be limited to support a larger number of outputs, as the default atomizer architecture cannot easily accommodate more shards due to network bandwidth constraints on the leader atomizer node. Conversely, while the peak throughput decreases with a larger UHS in the 2PC architecture, it is able to scale by increasing the number of shards and thus maintain performance. Unlike the atomizer architecture, which is limited by the atomizer leader, the 2PC architecture is only limited by the performance of the shards themselves, the number of which can be increased to spread load between a greater number of shards.

The atomizer architecture may be able to accommodate a larger UHS if shards did not use an on-disk database, as in 2PC. Note, however, that an in-memory only shard would not survive a crash or power failure and would need to be rebuilt completely from the archiver, which may be challenging in a long-running system. The 2PC shard's state is still persisted to disk, but through a sequentially written Raft log and snapshots. This method of persistence is more performant than the random reads and writes to disk needed by the atomizer architecture's shard. The 2PC shard's state machine is entirely in-memory, leading to much better performance with a large number of outputs. Because of this, replicating the shards in the atomizer architecture using a Raft cluster might lead to better raw throughput for a given number of shards.

6.3 Fault Tolerance

In this subsection, we consider how the system responds to failures, such as random hardware failures, natural disasters, and network partitions. We evaluate how both architectures handle up to two regional data center failures, and the scalability of each architecture as the number of supported failures increases.

Figure 13 shows the transaction throughput over time for the atomizer architecture when two simulated data center failures occur and shards have a replication factor of three and the atomizer a replication factor of five (supporting up to two failures per cluster). At both 120 and 180 seconds into the test, an atomizer node and a shard replica for each logical shard is killed to simulate two failures of entire data centers. The plot shows that the system can recover successfully and automatically restore the availability of the system in a matter of seconds. The failures cause a drop in throughput to zero for several seconds as the atomizer Raft cluster performs a leader election to select a new leader. Interestingly, we only see a dip in performance when the atomizer leader

is killed, and the Raft cluster needs to elect a new leader, which is what happens at 120 seconds. At 180 seconds, in addition to shards, a follower Raft atomizer node is killed, which does not impact performance. Shards in the atomizer architecture do not use Raft consensus, so any sentinels previously using the failed shard simply connect to a different online shard covering the same range of UHS IDs. After the atomizer leader election has completed, the shards connect to the new leader and continue processing transactions. There is no loss of data or inconsistency in the unspent output set as a result of the failures. Load generators simply retry any transactions that were dropped by failed shards or the previous atomizer cluster leader.

[Plots: peak throughput (TX/s) and 99% latency (ms) at peak throughput versus seeded UTXOs (10^6–10^9) for 2PC and Atomizer.]

Figure 12: Comparison of 2PC and atomizer with different UHS sizes.

[Plot: atomizer throughput (TX/s, 1250ms moving average) over time (mm:ss) with two regional failures.]

Figure 13: Atomizer architecture throughput over time with atomizer replication factor five, shard replication factor three and two whole-data center failures at 120s (leader) and 180s (follower) (see discussion for why the follower failure does not cause a throughput drop). 5 sample moving average (1 sample per block).

[Plot: 2PC throughput (TX/s, 5000ms moving average) over time (mm:ss) with two regional failures.]

Figure 14: 2PC architecture throughput over time with replication factor 5 and 2 whole data center failures at 120s and 180s. 5 sample moving average (1 second per sample).

The plot in Figure 14 shows the overall system throughput of the 2PC architecture over time when shards and coordinators have a replication factor of five (supporting up to two failures per cluster). To simulate continued system uptime and recovery when up to two data centers fail completely, the Raft leaders for coordinators and shards were killed at 120 seconds into the test, and a subsequent set of nodes for each cluster were killed at 180 seconds into the test (which comprised some leaders and some followers). The plot shows that the 2PC architecture is successfully able to handle and recover from the failure of two entire data centers with minimal downtime and no loss of system performance. For each failure, throughput was temporarily reduced for less than fifteen seconds, before automatically recovering to the baseline. As in the atomizer architecture, there is no loss of data from each failure, and the system is not left in an inconsistent state, as the replacement coordinators continue any distributed transactions that were in progress at the time of each failure.

Figure 15 compares the change in transaction throughput and tail latency between architectures as the number of supported system failures increases from zero through four. For 2PC this shows how the system performs when the replication factor of shards and coordinators increases from one through nine; the number of clusters remains fixed at eight and the offered load is increased until peak throughput is achieved. The plot shows that 2PC is tolerant to an increased replication factor, showing only a modest decrease in peak throughput.
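Figures 13 and 14 above report throughput smoothed with a 5-sample moving average. As an illustration of that smoothing (a minimal sketch in Python, not code from the opencbdc-tx codebase), a trailing window average can be computed as:

```python
def moving_average(samples, window=5):
    """Trailing moving average of a throughput time series (TX/s).

    Early points are averaged over the shorter prefix that is available,
    mirroring how the first few samples of a plot are typically handled.
    """
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)  # start of the trailing window
        win = samples[lo:i + 1]
        out.append(sum(win) / len(win))
    return out

# A one-sample outage is visibly damped by the 5-sample window:
print(moving_average([100, 100, 0, 100, 100]))
```

With a 5-sample window, a brief dip in raw throughput reduces the plotted value by only a fraction of its depth, which is why short recoveries appear as shallow notches rather than hard drops to zero.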

[Plots: peak throughput (TX/s) and 99% latency (ms) at peak throughput versus the number of failures the system can tolerate (0–4) for 2PC and Atomizer.]

Figure 15: Throughput and 99% latency for different choices of number of faults tolerated, f. In the atomizer architecture this means 2f + 1 atomizer replicas and f + 1 shard replicas. In 2PC, it means 2f + 1 transaction coordinator replicas and shard replicas.
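The replica counts in the caption follow from majority-quorum arithmetic: a Raft cluster needs 2f + 1 nodes to keep a quorum after f failures, while the atomizer architecture's shards, which are not Raft-replicated, only need one surviving copy per UHS ID range. A small sketch (illustrative, not project code):

```python
def raft_replicas(f):
    """Raft cluster size needed to tolerate f failed nodes (majority quorum)."""
    return 2 * f + 1

def atomizer_shard_replicas(f):
    """Copies of each UHS ID range so at least one survives f failures."""
    return f + 1

def failures_tolerated(cluster_size):
    """Largest f a Raft cluster of the given size can survive."""
    return (cluster_size - 1) // 2
```

This matches the configurations in §6.3: f = 2 gives an atomizer replication factor of five and a shard replication factor of three, and the sweep to f = 4 corresponds to Raft clusters of up to nine replicas.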

This suggests that, if desired, the 2PC architecture may be able to support a high number of simultaneous failures.

For the atomizer architecture, the shard replication factor is increased from one through five and the atomizer cluster from one through nine, showing a peak throughput decrease. Since the atomizer is the bottleneck in the system, increasing the replication factor of the atomizer cluster results in increased bandwidth requirements on the leader atomizer node, causing a decrease in peak throughput. Increasing the replication factor of shards also results in more bandwidth utilization on the leader atomizer. As explained previously for the shard scaling plot in Figure 10, the leader must broadcast the latest blocks to a larger number of subscribers, as each shard replica receives all blocks. The atomizer architecture is therefore less tolerant to increased redundancy than 2PC due to bandwidth constraints on the leader atomizer node.

6.4 Workload Variability

This subsection compares how both architectures perform under varying transaction workloads from users. We vary the proportion of transactions with a high number of inputs and outputs, and the proportion of double-spending transactions. We are unsure how the transaction workload will look in practice; however, for Bitcoin we found that over 75% of transactions consist of one input and two outputs, or vice versa.

6.4.1 Transaction Size

Figures 16 and 17 compare how the proportion of transactions sent with a high number of inputs and outputs, respectively, affects the throughput and latency of the two architectures. In this test, the proportion of transaction load sent to the system with eight rather than two inputs/outputs was increased from 0% through 30%. The benchmarks were conducted using a database containing 1 billion UHS IDs.

As the number of inputs per transaction increases, the peak throughput drops in the atomizer architecture. This is because inputs must be checked by the shards to ensure they are unspent and aggregated within the atomizer to ensure all outputs have been attested to by shards. Since more shards on average are required to attest to a transaction with a larger number of inputs, more data must be replicated by the atomizer cluster. There is also a higher probability that it will take multiple blocks before all required attestations have been accumulated for a transaction in the atomizer.

Conversely, the increase in output count in the atomizer architecture exhibits only a minor loss in throughput and increase in latency, because outputs are not the limiting factor for the atomizer to process transactions. Our transaction format guarantees unique output UHS IDs if the transaction is valid, so as an optimization the atomizer and shards are not required to check them. Therefore, additional outputs only increase the size of blocks and transaction notifications, and thus the network bandwidth requirement between atomizers and shards.

For the 2PC architecture, as the proportion of large transactions (inputs or outputs) increases, the peak throughput decreases as the system becomes overloaded. This is similar to increasing the number of clients offering two-input, two-output transactions. Ultimately the system is limited by the overall number of UHS IDs being processed, regardless of how they are grouped into transactions. Tail latency is largely unaffected by the transaction size, as latency is dominated by Raft replication delays rather than the lookup time in the state machine for each UHS ID. As a result, in a production environment it may be necessary to over-provision the number of shard clusters to absorb workloads with a high proportion of large transactions, or discourage large transactions via other means.
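Since, as noted above, the system is ultimately limited by the number of UHS IDs processed rather than by transaction count, the effect of a workload mix can be estimated directly. A back-of-the-envelope sketch (illustrative only; the shapes match the benchmarks' 2-in-2-out baseline and 8-in-2-out large transactions):

```python
def avg_uhs_ids_per_tx(large_fraction, large_shape=(8, 2), base_shape=(2, 2)):
    """Expected UHS IDs (inputs + outputs) touched per transaction when
    `large_fraction` of the offered load uses the large transaction shape."""
    base = sum(base_shape)    # a 2-in-2-out transaction touches 4 UHS IDs
    large = sum(large_shape)  # an 8-in-2-out transaction touches 10 UHS IDs
    return (1 - large_fraction) * base + large_fraction * large
```

At the 30% mix used in the transaction-size benchmarks, this gives 5.8 UHS IDs per transaction, a 45% increase in per-transaction work over the two-input, two-output baseline.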

[Plots: peak throughput (TX/s) and 99% latency (ms) at peak throughput versus the percentage of transactions being 8-in-2-out (0%–30%) for 2PC and Atomizer.]

Figure 16: Throughput and 99% latency varying the proportion of transactions with eight inputs and two outputs.

[Plots: peak throughput (TX/s) and 99% latency (ms) at peak throughput versus the percentage of transactions being 2-in-8-out (0%–30%) for 2PC and Atomizer.]

Figure 17: Throughput and 99% tail latency varying the proportion of transactions with two inputs and eight outputs.
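Part of the cost of large transactions is fan-out: each UHS ID falls in one logical shard's range, so an 8-in-2-out transaction can involve up to ten shards. A toy sketch of such range partitioning (the first-byte partition and the hashed example IDs are illustrative assumptions, not the scheme used by opencbdc-tx):

```python
import hashlib

def shard_for(uhs_id: bytes, n_shards: int) -> int:
    """Map a 32-byte UHS ID to the logical shard owning its range,
    partitioning the ID space by the first byte (illustrative)."""
    return uhs_id[0] * n_shards // 256

def shards_touched(input_ids, output_ids, n_shards):
    """Distinct logical shards that must participate in one transaction."""
    return {shard_for(u, n_shards) for u in list(input_ids) + list(output_ids)}

# Hypothetical UHS IDs, here just SHA-256 digests of small labels.
ids = [hashlib.sha256(str(n).encode()).digest() for n in range(10)]
touched = shards_touched(ids[:8], ids[8:], n_shards=32)
```

An 8-in-2-out transaction touches at most 10 shards no matter how many exist; as the shard count grows, each transaction involves a shrinking fraction of the system, which is the 2PC scaling property noted in §6.2.1.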

6.4.2 Double Spends

Figure 18 compares how the transaction throughput and latency of valid transactions change between architectures as the proportion of double-spending transactions sent from the load generators is varied between 0% and 30%. The load generators send double-spending transactions by storing previously confirmed transactions and re-issuing them at a later time. This ensures the inputs to the transaction are either not present in any shard's UHS or are present in the atomizer's spent transaction output cache. Only the throughput and latency of valid transactions are included in the plot. After sending a double-spend transaction, there is an artificial delay within load generators to simulate the additional time it takes to generate new valid transactions.

[Plots: peak throughput (TX/s) and 99% latency (ms) at peak throughput versus the percentage of transactions being double spends (0%–30%) for 2PC and Atomizer.]

Figure 18: Throughput and 99% latency of valid transactions varying the proportion of transactions with double-spending inputs.

Double-spends do not greatly affect the throughput and latency of valid transactions in the atomizer architecture. This is because most double-spends are trivially caught at the shard layer, so that additional load is not put on the atomizer cluster. Double-spends negatively affect the peak throughput of valid transactions in 2PC because each transaction, valid or not, has to be replicated as part of a distributed transaction batch. This requires shards to replicate all transactions as part of the lock phase, and the coordinators have to replicate the status of all transactions, so double-spends cause the same load as valid transactions. Absorbing an increased proportion of double-spending transactions in 2PC while maintaining the same load of valid transactions could be achieved by increasing the number of shards and coordinators. It may be more difficult to scale the atomizer architecture to absorb more double-spends by adding shards because of the increased load additional shards put on the atomizer cluster, as shown in Figure 10.

7 Related Work

Central banks around the world are in a wide variety of stages with regard to CBDCs. Some are in research and development phases while others are running pilots and even launching products to the public. China's e-CNY is currently in public trials [30, 61, 77] and is a centralized system based on the UTXO model. e-CNY involves a two-tier model and does not support end-user custody. On a smaller scale, the Central Bank of the Bahamas has launched a two-tier CBDC, the Sand Dollar [29], which is built on the NZIA Cortex DLT platform. The Central Bank of Nigeria has launched eNaira [28], a two-tiered system based on Bitt's DCMS platform. Some projects are in a pilot phase, such as the Eastern Caribbean Central Bank's DCash [40] system, which is also based on Bitt's platform.

Other projects are in research and development phases, such as the Riksbank's e-krona project, built on R3's Corda Enterprise Blockchain platform [78], which requires all transactions to go through a single notary to enforce double-spend protection. This creates a similar scaling bottleneck to our atomizer architecture [81]. Several projects have achieved linear scalability with a parallelized architecture. Eesti Pank, along with several other banks in the Eurosystem, has tested a CBDC design based on tracking groups of bills using a set of parallelized blockchains [46]. While it achieves linear scalability, transactions involving multiple bills require external coordination. No internal guarantee of atomicity for these transactions is provided.

Several central banks already support real-time gross settlement (RTGS) and fast payment systems [6]. These systems are designed to settle transactions between eligible financial institutions with low latency. In practice, these systems do not handle a volume of traffic representative of a national retail payment system, nor do they provide direct access to the public [7, 47, 48, 74]. Allen et al. identify these and other technical and legal issues related to CBDC design [2].

The Bank for International Settlements together with a group of seven central banks outlined [9] some of the tradeoffs between privacy, interoperability, resilience and other topics, but do not propose a potential design. The Regulated Liability Network [36] from SETL and Amazon AWS presents a CBDC design which claims to achieve 1 million transactions per second utilizing multiple coordinated blockchains. However, the paper does not discuss deployment across multiple geographic regions, which is vital for resiliency, and does not provide transaction latency figures.

Hamilton borrows ideas from both cryptocurrency and electronic cash designs. Hamilton uses the UTXO transaction model first used in Bitcoin and stores state as unspent coins [67]. Unlike Bitcoin, Hamilton operates in a model of centralized trust. Our transaction flow diverges from Bitcoin because the complete ledger is not publicly available to users, and the transaction processor only stores transaction hashes to reduce stored information [49]. Output data is blinded in the process of generating UHS IDs, and the transaction processor does not store the output data itself. As a result, we introduce an interactive transaction protocol that relies on the sender of funds sharing the output data and identifier with the recipient. In Bitcoin, all parties can independently verify the success of a transaction by checking if it is included in a block, which is not practical at scale as described in §4.4. We address this issue in the atomizer architecture with the addition of a watchtower where senders and receivers can verify transaction success. In the 2PC architecture, senders and receivers learn of success directly through the shards. Another design option might be for payers to send recipients cryptographic proofs of transaction inclusion, for example by using something like SkipChain [70], so recipients do not need a query service.

Hamilton's 2PC architecture uses a variant of two-phase commit [57] which does not need to support rollbacks. Like Google's Spanner [35], it uses a combination of two-phase commit with a replicated state machine (in this case, Raft [73]), but does not support general SQL. Narwhal/Tusk [37] is a consensus algorithm which commits to hashes of transaction sets using a DAG, but does not present a full-featured state machine nor a transaction system. It might be possible for Hamilton to use this instead of Raft for improved performance, but it is not clear how a deterministic transaction execution state machine would be built that could take advantage of the increased consensus performance.

Chaumian eCash [33], and designs based on it [23, 26, 27], also operate with a central trusted intermediary, but either require maintaining an ever-growing list of all spent coins for double spend prevention, or require users to manage expiring coins. The Swiss National Bank's CBDC [34] project expands upon Chaum's model by proposing epoch windows in which coins must be spent. This addresses the issue of maintaining an ever-growing list of spent coins by pruning older entries, but imposes a

new requirement on users who might not be familiar with money that cannot be used across epochs, and has significant policy implications. Many of these schemes strive to achieve unlinkability, with mixed success against colluding attackers, while Hamilton does not. It is unclear what level of performance these schemes can achieve in practice, since few of them have been implemented.

Unlike most CBDC research efforts to date, the Hamilton project is open source. This allows results to be independently reproducible and fosters collaboration with external parties on continuing research. It also encourages global interoperability standards and provides a much lower barrier to adoption.

Contrary to other projects proposing backed stablecoin designs [16, 31, 39, 82], Hamilton is designed to be administered directly by the central bank or a related entity, and transacts in central bank liabilities.

8 Discussion

Phase 1 of Project Hamilton has identified several key results which challenge preexisting technical design assumptions, and highlighted several open questions to be explored in future phases of the project. We discuss our learnings and opportunities for future research below.

8.1 Key Results

CBDC design choices are more granular than commonly assumed. Existing research often assumes that blockchain or distributed ledger technology is required to implement many of the desirable features for a CBDC, or makes broad suppositions about the capabilities of particular data models, such as so-called "token-based" and "account-based" models [3, 12, 65]. We found these limited categorizations lacking and insufficient to surface the complexity of choices in access, intermediation, institutional roles, and data retention in CBDC design [53]. Our research identified several key design choices that would need to be made. For example, the CBDC's trust and threat model, transaction format, and fault-tolerance and scaling strategy, the primary choices explored by this phase of research, present a range of potential options that affect user experience. Auditability, tamper-resistance, spam prevention, programmability semantics, and privacy are among the most important design choices which have been left to future research.

CBDCs can adopt a wide variety of design characteristics depending on public policy objectives and sys-

of CBDC research, it is important that policy and technical research are not conducted in isolation from each other.

Techniques from cryptography, distributed systems and blockchain technology can be combined to provide unique functionality and robust performance. By leveraging classical distributed computing algorithms, we implemented a highly scalable CBDC platform while supporting a Bitcoin-like transaction format. Without implementing a blockchain, our two-phase commit architecture supports both intermediation and self-custodying user wallets, and eliminates single points of failure to provide geographic fault tolerance. Our system also supports a range of potential privacy options by not requiring central storage of user balances or identities. The atomizer architecture uses a globally-ordered sequence of transactions grouped into batches, similar to a blockchain, which potentially provides better support for auditability in the future. However, generating the transaction sequence in the atomizer architecture limits its scalability potential compared to the two-phase commit architecture.

Using a Byzantine fault tolerant (BFT) single state machine approach might cater for an unnecessarily strong threat model if the central bank directly operates the CBDC. Systems in which transaction validation and execution are distributed among multiple separate entities, such as in cryptocurrencies like Bitcoin and permissioned chains like the proposed Diem blockchain, can be implemented as replicated state machines using distributed consensus algorithms which provide Byzantine agreement or full Byzantine fault tolerance. This approach allows such systems to tolerate malicious nodes when multiple mutually untrusted parties participate in settling transactions and defining system rules. In a central bank operated CBDC, only the central bank settles transactions and defines the system rules, so there is no requirement to expect malicious nodes under normal operation. If the CBDC is not operated directly by the central bank, and instead via multiple, distrusted third parties, a distributed BFT-based approach may be a better solution. We leave exploring this option to future work. Byzantine fault tolerant algorithms such as HotStuff [1] might still be useful to protect against bugs or compromised components as a drop-in replacement for Raft, the non-BFT consensus algorithm already used for this project.

Executing all transactions via a single-threaded state
tem performance demands. Robust technical research machine, whether generating a blockchain-like data
and experimentation is required to inform policymakers structure or not, prevents horizontally scaling the
as to the wide variety of technical capabilities and trade- maximum throughput of the system by adding more
offs. Equally, clear public policy objectives and product nodes. Our research was unable to partition the atomizer
design decisions are required to inform the appropriate service, which must be scaled vertically using additional
technical design for the system. As a result, at this stage network bandwidth and processor speed for an increase

30
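The contrast between the vertically scaled atomizer and a horizontally partitioned UHS can be sketched in a few lines. The sketch below is illustrative only and is not the Project Hamilton codebase (which is written in C++); the shard count, the prefix-byte placement rule, and the `prepare`/`commit`/`abort` names are assumptions for this example. It shows why a transaction whose inputs span several partitions needs a two-phase commit: each shard first locks its inputs, and the swap is applied only if every shard prepared successfully.

```python
NUM_SHARDS = 4  # illustrative; a real deployment sizes this to the workload

def shard_for(uhs_id: bytes) -> int:
    # UHS IDs are uniformly distributed hashes, so a prefix byte
    # spreads funds evenly across partitions.
    return uhs_id[0] % NUM_SHARDS

class Shard:
    """One partition of the unspent funds hash set (UHS)."""
    def __init__(self):
        self.uhs = set()     # opaque 32-byte IDs of unspent outputs
        self.locked = set()  # IDs locked by an in-flight transaction

    def prepare(self, spends):
        # Phase 1: lock the inputs if they all exist and are unlocked.
        if all(u in self.uhs and u not in self.locked for u in spends):
            self.locked.update(spends)
            return True
        return False

    def commit(self, spends, creates):
        # Phase 2: atomically erase spent IDs and insert newly created ones.
        self.uhs.difference_update(spends)
        self.locked.difference_update(spends)
        self.uhs.update(creates)

    def abort(self, spends):
        self.locked.difference_update(spends)

def execute(shards, spends, creates):
    """Coordinator: two-phase commit across every shard the transaction touches."""
    spend_by, create_by = {}, {}
    for u in spends:
        spend_by.setdefault(shard_for(u), set()).add(u)
    for u in creates:
        create_by.setdefault(shard_for(u), set()).add(u)
    prepared = []
    for s, ids in spend_by.items():
        if shards[s].prepare(ids):
            prepared.append((s, ids))
        else:  # e.g. a double-spend: roll back every lock taken so far
            for s2, ids2 in prepared:
                shards[s2].abort(ids2)
            return False
    for s in set(spend_by) | set(create_by):
        shards[s].commit(spend_by.get(s, set()), create_by.get(s, set()))
    return True
```

Because each shard only sees its own slice of the ID space, adding shards raises aggregate throughput; the atomizer's single ordered log cannot be split this way.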
Vertical scalability is more difficult to achieve than horizontal scalability because improvements in network bandwidth and processor speed occur over long timeframes and have increasingly plateaued in recent years. However, it may be impossible to avoid a limited capacity to scale for increased throughput if materializing a total ordering of all transactions proves to be the best method for implementing tamper-detection and programmability, important questions for future research. By contrast, depending on the workload, a traditional partitioned database implementation can scale horizontally to accommodate a greater maximum transaction throughput by adding more nodes to the system. In our specific data model, funds are uniformly distributed across partitions and transactions can require the participation of multiple partitions, but we expect most transactions will only reference a small number of unspent outputs relative to the total number of partitions. Other data models, such as accounts, may reduce the maximum number of partitions involved in a transaction and make the cross-partition workload more predictable.

It is challenging to implement a non-interactive payment protocol while maintaining user-to-user privacy. In public cryptocurrencies, transactions are visible to all parties, making it easy for a user to independently discover whether they have received a payment under certain conditions. If transactions use standardized encumbrances, the recipient of a payment can identify funds they can spend by searching all transactions settled by the system for encumbrances they can satisfy. Public visibility of all transactions is unlikely to be a desirable feature for a CBDC due to user-to-user privacy concerns. Although some cryptocurrencies use cryptography to obfuscate or hide the transaction participants and values from observers, the volume of transactions settled by a CBDC may be too great for a user to check every transaction to determine whether they have received a payment. Since the transactions executed by the system are not broadcast to all users, the sender and recipient have to communicate with each other, either directly or via a third party, as part of the transaction protocol to provide a payment notification. Public cryptocurrencies allow for non-standard encumbrances which also require user-to-user communication to provide a payment notification, but this is uncommon in practice, and our system requires out-of-band notification for all transactions regardless of whether a standard encumbrance is used. Third parties included in the transaction protocol would be useful if the sender and receiver are not online at the same time, and could be the central bank itself or external service providers. Zero-knowledge proofs might make it possible to publish all transactions executed by the system without compromising user-to-user privacy, an interesting area of future research.

The central bank does not need to retain all transaction information to implement a secure CBDC system. We show that the central transaction processor only needs to store commitments to unspent funds, as opaque 32-byte hashes. This limits data retention by the central bank, which is appealing, but makes self-custody more operationally challenging for users, and the system harder to audit internally. Our data model stores only cryptographic commitments to unspent funds at the central bank and discards the underlying preimage of the commitment required to spend. To spend funds, the user must provide the preimage of the commitment with their transaction so it can be validated by the sentinels. Therefore, the sender of a payment must provide the recipient with the preimage required to spend the money before the transaction can be considered complete. The preimage must be retained by the user until they spend their funds, as it cannot be recovered if lost, and without it the sentinels cannot check whether the transaction is valid. The task of storing transaction data and communicating it between users could be conducted by a third party; however, that third party would have access to the transaction data of participating users. Zero-knowledge proofs have the potential to hide transaction data from sentinels, eliminate the need for direct communication between transacting parties, and enable internal system auditing.

8.2 Future Work

This paper demonstrates a high-performance, fault-tolerant CBDC implementation. However, we have not yet explored all design considerations for a practical CBDC deployment. Some ideas for future areas of research and implementation are presented below. We plan to investigate many of these research topics in future phases of Project Hamilton.

Privacy and auditability The UHS is a powerful data model enabling transaction validation to be fully decoupled from the database layer of the system. It also minimizes data retention in the core system, and opens the possibility of zero-knowledge sentinels which would hide transaction data and greatly increase user privacy from the central bank. However, only storing commitments to the underlying data makes the system difficult to audit for correctness of transaction execution, the total supply of money, and intrusion detection. Furthermore, it is unclear how to balance user privacy from the central bank against the desire of law enforcement to access transaction data.

Programmability Our current transaction format and data model restrict programmability features to those which can be implemented using transaction-local validation.
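The commitment-and-preimage scheme described above can be made concrete with a small sketch of transaction-local validation. The field layout and function names below are invented for illustration and are not Hamilton's actual preimage format: the real system also carries signature and encumbrance data, which this sketch omits. The point is that the core only ever answers set-membership questions over opaque 32-byte hashes; everything needed to check the spend travels with the transaction itself.

```python
import hashlib

def uhs_id(value: int, owner: bytes, nonce: bytes) -> bytes:
    """The only thing the core keeps per output: an opaque 32-byte hash.
    The (value, owner, nonce) layout is a hypothetical stand-in for the
    real preimage, which also carries data needed for signature checks."""
    return hashlib.sha256(value.to_bytes(8, "big") + owner + nonce).digest()

def validate_and_swap(uhs: set, inputs, outputs) -> bool:
    """Sentinel-style check using only the transaction's own contents plus
    set membership. `inputs` holds the (value, owner, nonce) preimages the
    sender retained; `outputs` has the same shape for the recipients."""
    spent = [uhs_id(*i) for i in inputs]
    if len(set(spent)) != len(spent) or any(s not in uhs for s in spent):
        return False  # unknown, already-spent, or duplicated input
    if sum(i[0] for i in inputs) != sum(o[0] for o in outputs):
        return False  # a valid transaction preserves total value
    # (signature verification over the owners would also happen here)
    uhs.difference_update(spent)
    uhs.update(uhs_id(*o) for o in outputs)
    return True
```

If the preimage is lost it cannot be reconstructed from the 32-byte ID held by the core, which is why users (or a third party acting for them) must retain preimages and transmit them out of band.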
Transactions are deterministic in that they must provide all state elements that will be mutated prior to transaction execution, and fully specify the state transition should the transaction complete. It is unclear whether these two restrictions affect the space of contracts that can be implemented. In either case, the UHS makes contract engineering more difficult than a model which supports non-deterministic transactions, so the performance of a system implementing such a model will need to be evaluated.

Interoperability To support further innovation on top of the CBDC, techniques for interacting with cryptocurrencies and existing payment solutions in the traditional financial sector will need to be researched. We are confident that our designs could support interoperability with cryptocurrencies via Layer-2 payment channel networks, though specific implementation details still need to be determined. It is unclear whether the CBDC will need to directly support formal standards used by payment platforms in the banking sector, or whether interoperable functionality could instead be delivered by third parties. Easier cross-border payments are often cited as an important policy goal for a CBDC, and our designs support such payments if users from multiple countries are able to directly use the CBDC. Techniques for cross-border payments between separate CBDCs will depend largely upon how CBDCs from other countries are designed.

Offline payments We have not yet explored the potential for payments using CBDC without an Internet connection. Our transaction format and data model require interactive communication between the central bank and both transacting parties. One option is to operate a parallel system using trusted hardware that requires no connectivity with the central bank to conduct a transaction. Trusted hardware would be responsible for enforcing the authenticity of CBDC while outside central bank systems, and is thus vulnerable to supply chain attacks or end-user tampering. Alternatively, radio, satellite, or mesh networks could be exploited to retain connectivity with the central bank during an Internet outage.

Minting and redemption Our experiments assumed the entirety of the CBDC in circulation was already present in the system. In practice, CBDC will need to be minted or removed from circulation depending on the flow of money into and out of the system. We have yet to explore how best to implement changing the supply of CBDC while maintaining security against both insider attacks and external adversaries.

Productionization Our designs are fully fault-tolerant against multiple geographic data center failures, ensuring high availability while preventing data loss. However, the implementation has not been hardened or tested for long-term, production-level readiness. Our evaluation focused on measuring peak performance for short periods of time under high load with a static number of system components. We did not evaluate system performance over extended periods of time, where supporting a large UHS may require a greatly increased number of shards. Scaling the number of shards and rebalancing UHS IDs between them may have performance implications that need to be fully investigated. We also do not provide an implementation for various important production processes such as system health monitoring, shard rebalancing, and automated component scaling.

Denial of service attacks Our designs support self-custody of private keys, and we assume there are no fees per transaction in the base layer, making the system vulnerable to denial-of-service attacks. Adversaries could submit large volumes of invalid or valid transactions at no cost, consuming central bank resources and degrading system performance for legitimate users. Rate-limiting and spam-prevention techniques (aside from fees) could mitigate this risk. Options include network-level throttling, enforcing a cool-off period before money can be re-spent, charging nominal fees past a certain transaction volume threshold, or requiring a proof-of-work per transaction.

Quantum resistance If large-scale quantum computers are built, most cryptographic systems powering today’s Internet, e-commerce, and finance could eventually be at risk, because these systems rely on cryptographic primitives that are vulnerable to quantum adversaries. However, standards bodies such as NIST [72] are developing a portfolio of cryptographic primitives resistant to classical, quantum, and hybrid attacks. This is a highly mature effort and is expected to yield final selections in the not-too-distant future. The cryptographic primitives used in Hamilton are either post-quantum with minimal modifications (e.g., hash functions, where post-quantum resistance can be obtained by a suitable increase in parameters), or can be replaced with a standardized post-quantum alternative once one is available. Similarly, the extensions of Hamilton that we have identified, e.g., a privacy-enhanced version, use cryptographic primitives for which post-quantum alternatives are known. Transitioning to post-quantum systems will be an industry-wide effort. We are confident that Hamilton is well-prepared for such a transition and can remain a long-term secure system in a post-quantum world.
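Of the spam mitigations listed under denial of service attacks above, network-level throttling is the most self-contained to sketch. The token bucket below is a standard rate-limiting technique and is purely illustrative; neither the parameters nor a per-sender keying scheme come from the Phase 1 implementation.

```python
class TokenBucket:
    """Classic token-bucket throttle: a sender may burst up to `burst`
    submissions, then is limited to `rate` submissions per second."""
    def __init__(self, rate: float, burst: int):
        self.rate = float(rate)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = 0.0  # timestamp of the previous refill, in seconds

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A deployment would hold one bucket per sender (keyed, for example, by source address), dropping or delaying submissions when allow() returns False; cool-off periods and per-transaction proof-of-work trade off differently against latency and client complexity.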
9 Conclusion

CBDCs are being considered widely by central banks, and technical research is critical to understand what is feasible, identify interdependencies between technical and policy choices, and discover novel approaches to achieving goals for a CBDC.

Our research presents a CBDC transaction processor design and implements two potential architectures that support transactions at scale with high performance and resilience. We find that technical and policy choices are highly interdependent, and that these choices are more granular, with more permutations, than commonly discussed. Our work is limited to the transaction processor component of a CBDC and, as a research platform, is designed neither to launch a CBDC nor to address all potential requirements. Further research is needed in a wide range of technical areas and on how these different technical options impact desired policy outcomes.

Through software design, development, and testing, Project Hamilton provides unique insight into technology relevant to implementing a CBDC. By designing a flexible research platform and issuing an open-source license for the software, the Project Hamilton team hopes to share its learnings with others and receive feedback and contributions to the code from other digital currency experts.

This open-source release concludes Phase 1 of Project Hamilton. The flexible core infrastructure developed in Phase 1 was designed to support future research and development with various potential designs and features. In Phase 2 of Project Hamilton, the Boston Fed and MIT DCI will continue their CBDC infrastructure research and explore different options and configurations in areas such as data privacy, programmability, and interoperability. The team will assess how these choices impact a platform’s technical design and performance.

As the global CBDC discussion evolves and the Federal Reserve’s research continues, Project Hamilton aims to continue providing valuable insights to policymakers and the general public through its experimentation with leading-edge technical research.

10 Acknowledgements

The authors express gratitude to Robert Bench, Jim Cunha, Ken Montgomery, and Eric Rosengren for their leadership and direction in this work. In addition, we thank Robleh Ali, Jonathan Allen, Spencer Connaughton, Thomas Cowan, Tadge Dryja, Rob Flynn, Kristin Forbes, Shira Frank, Nikhil George, Gert-Jaap Glasbergen, Ethan Heilman, Simon Johnson, Sean Neville, Ronald L. Rivest, Bernard Snowden, Michael Specter, Sam Stuewe, Robert Townsend, Reuben Youngblom, and staff at the Federal Reserve Board for their helpful contributions, feedback, and comments. We are also grateful to the funders of the Digital Currency Initiative for their ongoing support of the MIT researchers that participated in this work.

References

[1] I. Abraham, G. Gueta, and D. Malkhi. Hot-Stuff the linear, optimal-resilience, one-message BFT devil. CoRR, abs/1803.05069, 2018.
[2] S. Allen, S. Čapkun, I. Eyal, G. Fanti, B. A. Ford, J. Grimmelmann, A. Juels, K. Kostiainen, S. Meiklejohn, A. Miller, et al. Design choices for central bank digital currency: Policy and technical considerations. Technical report, National Bureau of Economic Research, 2020.
[3] R. Auer and R. Böhme. The technology of retail central bank digital currency. BIS Quarterly Review, March 2020.
[4] R. Auer, J. Frost, M. Lee, A. Martin, and N. Narula. Why central bank digital currencies? Liberty Street Economics, 2021. https://libertystreeteconomics.newyorkfed.org/2021/12/why-central-bank-digital-currencies/.
[5] J. Aumasson and D. J. Bernstein. SipHash: a fast short-input PRF. Cryptology ePrint Archive, Report 2012/351, 2012. https://eprint.iacr.org/2012/351.
[6] Bank for International Settlements. Fast payments – enhancing the speed and availability of retail payments. Committee on Payments and Market Infrastructures, 2016. https://www.bis.org/cpmi/publ/d154.pdf.
[7] Bank for International Settlements. BIS statistics explorer, 2019. https://stats.bis.org/statx/toc/CPMI.html.
[8] Bank for International Settlements. CBDCs: an opportunity for the monetary system. BIS Annual Economic Report 2021, pages 65–91, 6 2021.
[9] Bank for International Settlements et al. Central bank digital currencies: System design and interoperability, 9 2021. https://www.bis.org/publ/othp42_system_design.pdf.
[10] Bank of Canada et al. Central bank digital currencies: foundational principles and core features. BIS Working Group, 2020. https://www.bis.org/publ/othp33.pdf.
[11] Bank of England. Central bank digital currency: Opportunities, challenges and design, 2020. https://www.bankofengland.co.uk/-/media/boe/files/paper/2020/central-bank-digital-currency-opportunities-challenges-and-design.pdf.
[12] Bank of Thailand. Central bank digital currency: The future of payments for corporates, 2021. https://www.bot.or.th/English/FinancialMarkets/ProjectInthanon/Documents/20210308_CBDC.pdf.
[13] M. L. Bech and R. Garratt. Central bank digital currencies. BIS Quarterly Review, September 2017.
[14] E. Ben-Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza. Zerocash: Decentralized anonymous payments from Bitcoin. In Proceedings of the 2014 IEEE Symposium on Security and Privacy, SP ’14, pages 459–474, 2014.
[15] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency control and recovery in database systems, volume 370. Addison-Wesley Reading, 1987.
[16] Binance. Binance USD. https://www.binance.com/en/busd.
[17] Bitcoin Core Developers. Bitcoin Core. https://github.com/bitcoin/bitcoin.
[18] Bitcoin Core Developers. libsecp256k1. https://github.com/bitcoin-core/secp256k1.
[19] C. Boar and A. Wehrli. Ready, steady, go? – Results of the third BIS survey on central bank digital currency. BIS Papers No 114, 2021. https://www.bis.org/publ/bppdf/bispap114.htm.
[20] Board of Governors of the Federal Reserve System. Money and payments: The U.S. dollar in the age of digital transformation, January 2022.
[21] S. Bowe, A. Chiesa, M. Green, I. Miers, P. Mishra, and H. Wu. Zexe: Enabling decentralized private computation. In Proceedings of the 41st IEEE Symposium on Security and Privacy, S&P ’20, 2020. ePrint: https://eprint.iacr.org/2018/962.
[22] L. Brainard. Update on digital currencies, stablecoins, and the challenges ahead, 2019. https://www.federalreserve.gov/newsevents/speech/brainard20191218a.htm.
[23] S. Brands. Untraceable off-line cash in wallet with observers. In Annual International Cryptology Conference, pages 302–318. Springer, 1993.
[24] N. Brewster and S. Bishop. Getting out the message. http://www.centralbank.org.bb/economic-insightbb/getting-out-the-message.
[25] B. Bünz, S. Agrawal, M. Zamani, and D. Boneh. Zether: Towards privacy in a smart contract world. In Proceedings of the 24th International Conference on Financial Cryptography and Data Security, FC ’20, 2020. ePrint: https://eprint.iacr.org/2019/191.
[26] J. Camenisch, S. Hohenberger, and A. Lysyanskaya. Compact e-cash. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 302–321. Springer, 2005.
[27] J. Camenisch, S. Hohenberger, and A. Lysyanskaya. Balancing accountability and privacy using e-cash. In International Conference on Security and Cryptography for Networks, pages 141–155. Springer, 2006.
[28] Central Bank of Nigeria. Design paper for the eNaira. https://enaira.gov.ng/download/eNaira_Design_Paper.pdf.
[29] Central Bank of The Bahamas. Sand dollar. https://www.sanddollar.bs.
[30] Central Banking Newsdesk, 2020. https://www.centralbanking.com/fintech/cbdc/7529621/pboc-confirms-digital-currency-pilot.
[31] Centre Foundation. USD-C. https://www.centre.io/usdc.
[32] M. M. Chakravarty, J. Chapman, K. MacKenzie, O. Melkonian, M. P. Jones, and P. Wadler. The extended UTXO model. In International Conference on Financial Cryptography and Data Security, pages 525–539. Springer, 2020.
[33] D. Chaum. Blind signatures for untraceable payments. In Advances in Cryptology: Proceedings of Crypto 82, pages 199–203. Springer, 1983.
[34] D. Chaum, C. Grothoff, and T. Moser. How to issue a central bank digital currency. arXiv preprint arXiv:2103.00254, 2021.
[35] J. C. Corbett, J. Dean, M. Epstein, A. Fikes, C. Frost, J. J. Furman, S. Ghemawat, A. Gubarev, C. Heiser, P. Hochschild, et al. Spanner: Google’s globally distributed database. ACM Transactions on Computer Systems (TOCS), 31(3):1–22, 2013.
[36] A. Culligan, N. Pennington, M. Delatine, P. Morel, E. M. Salinas, G. Vargas, N. Dusane, J. Iu, S. Sheikh, N. Kerigan, T. McLaughlin, P. D. Courcy, M. Low, and K. H. Park. The regulated liability network, 12 2021. https://setldevelopmentltd.box.com/shared/static/18mff2m990qabgzseiex3h7itq7qdnls.pdf.
[37] G. Danezis, E. K. Kogias, A. Sonnino, and A. Spiegelman. Narwhal and Tusk: A DAG-based mempool and efficient BFT consensus, 2021. https://arxiv.org/pdf/2105.11827.pdf.
[38] C. Decker and R. Wattenhofer. Bitcoin transaction malleability and MtGox. In Proceedings of the 19th European Symposium on Research in Computer Security, pages 313–326, 2014.
[39] Diem Foundation. Diem. https://www.diem.com/en-us/white-paper/.
[40] Eastern Caribbean Central Bank. ECCB digital EC currency pilot, 2021. https://www.eccb-centralbank.org/p/about-the-project.
[41] eBay. NuRaft. https://github.com/eBay/NuRaft.
[42] ESnet. Linux tuning. https://fasterdata.es.net/host-tuning/linux/.
[43] K. Eswaran, J. Gray, and L. Traiger. The notion of consistency and predicate locks in a database system. Communications of the ACM, 19(11):624–632, November 1976.
[44] Ethereum Developers. Solidity, the smart contract programming language. https://github.com/ethereum/solidity.
[45] European Central Bank. ECB publishes the results of the public consultation on a digital euro, 2021. https://www.ecb.europa.eu/press/pr/date/2021/html/ecb.pr210414~ca3013c852.en.html.
[46] European Central Bank. Work stream 3: A new solution – blockchain & eID, 2021. https://haldus.eestipank.ee/sites/default/files/2021-07/Work%20stream%203%20-%20A%20New%20Solution%20-%20Blockchain%20and%20eID_1.pdf.
[47] Eurosystem. TARGET Instant Payments Settlement user requirements, 2017. https://www.ecb.europa.eu/paym/target/tips/profuse/shared/pdf/tips_crdm_uhb_v1.0.0.pdf.
[48] Eurosystem. T2-T2S consolidation user requirements document for T2-RTGS component, 2018. https://www.ecb.europa.eu/paym/pdf/consultations/T2-T2S_Consolidation_User_Requirements_Document_T2_RTGS_v1.2_CLEAN.pdf.
[49] C. Fields. UHS: Full-node security without maintaining a full UTXO set. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-May/015967.html.
[50] M. Fleder and D. Shah. I know what you bought at Chipotle for $9.81 by solving a linear inverse problem. In Proceedings of the ACM on Measurement and Analysis of Computing Systems, volume 4, pages 1–17, 2020.
[51] B. I. Galler and L. Bos. A model of transaction blocking in databases, 1983. https://www.sciencedirect.com/science/article/pii/0166531683900123.
[52] R. Garratt, M. J. Lee, et al. Monetizing privacy with central bank digital currencies. Technical report, Federal Reserve Bank of New York, 2020.
[53] R. Garratt, M. J. Lee, B. Malone, and A. Martin. Token- or account-based? A digital currency can be both. Liberty Street Economics, 2020. https://libertystreeteconomics.newyorkfed.org/2020/08/token-or-account-based-a-digital-currency-can-be-both/.
[54] G. Gerdes, C. Greene, X. M. Liu, and E. Massaro. The 2019 Federal Reserve payments study, 2019.
[55] Google. GoogleTest. https://github.com/google/googletest.
[56] Google. LevelDB. https://github.com/google/leveldb.
[57] J. N. Gray. Notes on data base operating systems. In Operating Systems: An Advanced Course, pages 394–481. Springer, 1978.
[58] M. P. Herlihy and J. M. Wing. Linearizability: A correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems (TOPLAS), 12(3):463–492, 1990.
[59] K. Hill. How Target figured out a teen girl was pregnant before her father did, 2012. https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/.
[60] D. Hopwood, S. Bowe, T. Hornby, and N. Wilcox. Zcash protocol specification, 2021. https://zips.z.cash/protocol/protocol.pdf.
[61] J. C. Jiang and K. Lucero. Background and implications of China’s central bank digital currency: E-CNY. Available at SSRN 3774479, 2021.
[62] J. Kiff, J. Alwazir, S. Davidovic, A. Farias, A. Khan, T. Khiaonarong, M. Malaika, H. Monroe, N. Sugimoto, H. Tourpe, and P. Zhou. A survey of research on retail central bank digital currency, 2020. https://www.elibrary.imf.org/view/journals/001/2020/104/001.2020.issue-104-en.xml.
[63] koe, K. M. Alonso, and S. Noether. Zero to Monero: Second edition, 2020. https://www.getmonero.org/library/Zero-to-Monero-2-0-0.pdf.
[64] L. Lamport. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7):558–565, July 1978.
[65] T. Mancini-Griffoli, M. S. M. Peria, I. Agur, A. Ari, J. Kiff, A. Popescu, and C. Rochon. Casting light on central bank digital currency. IMF Staff Discussion Note, 8, 2018.
[66] G. Maxwell. Confidential transactions – investigation. https://elementsproject.org/features/confidential-transactions/investigation.
[67] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. Cryptography Mailing List at https://metzdowd.com, 10 2008. https://bitcoin.org/bitcoin.pdf.
[68] A. Narayanan and J. Clark. Bitcoin’s academic pedigree. Communications of the ACM, 60(12):36–45, 2017.
[69] N. Narula, W. Vasquez, and M. Virza. zkLedger: Privacy-preserving auditing for distributed ledgers. In Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation, NSDI ’18, 2018. ePrint: https://eprint.iacr.org/2018/241.
[70] K. Nikitin, E. Kokoris-Kogias, P. Jovanovic, N. Gailly, L. Gasser, I. Khoffi, J. Cappos, and B. Ford. CHAINIAC: Proactive software-update transparency via collectively signed skipchains and verified builds. In 26th USENIX Security Symposium (USENIX Security ’17), pages 1271–1287, 2017.
[71] NIST. Secure Hash Standard, 2002. https://csrc.nist.gov/csrc/media/publications/fips/180/2/archive/2002-08-01/documents/fips180-2.pdf.
[72] NIST. Post-quantum cryptography, 2016. https://csrc.nist.gov/Projects/Post-Quantum-Cryptography.
[73] D. Ongaro and J. Ousterhout. In search of an understandable consensus algorithm. In 2014 USENIX Annual Technical Conference (USENIX ATC ’14), pages 305–319, 2014.
[74] Pay.UK. Pay.UK 2020 annual self-assessment against the principles for financial market infrastructure, 2020. https://www.wearepay.uk/wp-content/uploads/Pay.UK-PFMI-Self-Assessment-Jun-20.pdf.
[75] T. P. Pedersen. Non-interactive and information-theoretic secure verifiable secret sharing. In Proceedings of the 11th Annual International Cryptology Conference, CRYPTO ’91, pages 129–140, 1992.
[76] A. Pertsev, R. Semenov, and R. Storm. Tornado Cash privacy solution: Version 1.4, 2019. https://tornado.cash/Tornado.cash_whitepaper_v1.4.pdf.
[77] Y. Qian. Technical aspects of CBDC in a two-tiered system, 2018. https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20180718/Documents/Yao%20Qian.pdf.
[78] R3. Corda. https://www.corda.net.
[79] K. Shirriff. Hidden surprises in the Bitcoin blockchain and how they are stored: Nelson Mandela, WikiLeaks, photos, and Python software. http://www.righto.com/2014/02/ascii-bernanke-wikileaks-photographs.html.
[80] Statoshi.info. Bitcoin unspent transaction output set. https://statoshi.info/d/000000009/unspent-transaction-output-set?orgId=1&refresh=10m.
[81] Sveriges Riksbank. E-krona pilot phase 1. Sveriges Riksbank Report, 2021. https://www.riksbank.se/globalassets/media/rapporter/e-krona/2021/e-krona-pilot-phase-1.pdf.
[82] Tether Operations Ltd. Tether. https://tether.to/.
[83] UkoeHB. Mechanics of MobileCoin. https://github.com/UkoeHB/Mechanics-of-MobileCoin.
[84] A. Usher, E. Reshidi, F. Rivadeneyra, S. Hendry, et al. The positive case for a CBDC. Bank of Canada Staff Discussion Paper, 2021.
[85] N. van Saberhagen. CryptoNote v 2.0. https://web.archive.org/web/20201028121818/https://cryptonote.org/whitepaper.pdf.
[86] T. Walton-Pocock. Why hashes dominate in SNARKs: A primer by AZTEC, 2019. https://medium.com/aztec-protocol/why-hashes-dominate-in-snarks-b20a555f074c.
[87] G. Wood et al. Ethereum: A secure decentralised generalised transaction ledger. Ethereum project yellow paper, 151(2014):1–32, 2014.
[88] P. Wuille. Bech32m format for v1+ witness addresses, 2020. https://github.com/bitcoin/bips/blob/master/bip-0350.mediawiki.
[89] P. Wuille and G. Maxwell. Base32 address format for native v0-16 witness outputs, 2017. https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki.
[90] P. Wuille, J. Nick, and T. Ruffing. Schnorr signatures for secp256k1, 2020. https://github.com/bitcoin/bips/blob/master/bip-0340.mediawiki.
[91] YCharts. Ethereum chain full sync data size. https://ycharts.com/indicators/ethereum_chain_full_sync_data_size.
