Foreword
Preface
  What Is This Book About
    Cloud-native applications
    Working in a team
  Who Is This Book For
1 Getting Started
  1.1 Installing The Rust Toolchain
    1.1.1 Compilation Targets
    1.1.2 Release Channels
    1.1.3 What Toolchains Do We Need?
  1.2 Project Setup
  1.3 IDEs
    1.3.1 Rust-analyzer
    1.3.2 IntelliJ Rust
    1.3.3 What Should I Use?
  1.4 Inner Development Loop
    1.4.1 Faster Linking
    1.4.2 cargo-watch
  1.5 Continuous Integration
    1.5.1 CI Steps
    1.5.2 Ready-to-go CI Pipelines
4 Telemetry
  4.1 Unknown Unknowns
  4.2 Observability
  4.3 Logging
    4.3.1 The log Crate
    4.3.2 actix-web’s Logger Middleware
    4.3.3 The Facade Pattern
  4.4 Instrumenting POST /subscriptions
    4.4.1 Interactions With External Systems
    4.4.2 Think Like A User
    4.4.3 Logs Must Be Easy To Correlate
  4.5 Structured Logging
    4.5.1 The tracing Crate
    4.5.2 Migrating From log To tracing
Foreword
When you read these lines, Rust has achieved its biggest goal: making an offer to programmers to write their production systems in a different language. By the end of the book it will still be your choice whether to follow that path, but you will have everything you need to consider the offer. I’ve been part of the growth process of two widely different languages, Ruby and Rust - by programming in them, but also by running events, taking part in their project management and running businesses around them. Through that, I had the privilege of being in touch with
many of the creators of those languages and consider some of them friends. Rust has been my one chance
in life to see and help a language grow from the experimental stage to adoption in the industry.
I’ll let you in on a secret I learned along the way: programming languages are not adopted because of a feature
checklist. It’s a complex interplay between good technology, the ability to speak about it and finding enough
people willing to take long bets. When I write these lines, over 5000 people have contributed to the Rust
project, often for free, in their spare time - because they believe in that bet. But you don’t have to contribute
to the compiler or be recorded in a git log to contribute to Rust. Luca’s book is such a contribution: it gives
newcomers a perspective on Rust and promotes the good work of those many people.
Rust was never intended to be a research platform - it was always meant as a programming language solving
real, tangible issues in large codebases. It is no surprise that it comes out of an organization that maintains a
very large and complex codebase - Mozilla, creators of Firefox. When I joined Rust, it was just ambition - but
the ambition was to industrialize research to make the software of tomorrow better. For all of its theoretical concepts - linear typing, region-based memory management - the programming language was always meant for everyone. This is reflected in its lingo: Rust uses accessible names like “Ownership” and “Borrowing” for the concepts I just mentioned. Rust is an industry language, through and through.
And that reflects in its proponents: I’ve known Luca for years as a community member who knows a ton about Rust. But his deeper interest lies in convincing people that Rust is worth a try by addressing their needs. The title and structure of this book reflect one of the core values of Rust: finding its worth in writing production software that is solid and works. Rust’s strength shows in the care and knowledge that went into it, letting you write stable software productively. Such an experience is best found with a guide, and Luca is one of the best guides you can find around Rust.
Rust doesn’t solve all of your problems, but it has made an effort to eliminate whole categories of mistakes.
There’s a view out there that safety features in languages exist because of the incompetence of programmers. I don’t subscribe to this view. Emily Dunham captured it well in her RustConf 2017 keynote:
“safe code allows you to take better risks”. Much of the magic of the Rust community lies in this positive
view of its users: whether you are a newcomer or an experienced developer, we trust your experience and
your decision-making. In this book, Luca offers a lot of new knowledge that can be applied even outside of
Rust, well explained in the context of daily software praxis. I wish you a great time reading, learning and
contemplating.
Florian Gilcher,
Management Director of Ferrous Systems and
Co-Founder of the Rust Foundation
Preface
Zero To Production will focus on the challenges of writing Cloud-native applications in a team of
four or five engineers with different levels of experience and proficiency.
Cloud-native applications
Defining what Cloud-native application means is, by itself, a topic for a whole new book1. Instead of prescribing what Cloud-native applications should look like, we can lay down what we expect them to do.
Paraphrasing Cornelia Davis, we expect Cloud-native applications:
• To achieve high-availability while running in fault-prone environments;
• To allow us to continuously release new versions with zero downtime;
• To handle dynamic workloads (e.g. request volumes).
These requirements have a deep impact on the viable solution space for the architecture of our software.
High availability implies that our application should be able to serve requests with no downtime even if
one or more of our machines suddenly starts failing (a common occurrence in a Cloud environment2 ). This
1. Like the excellent Cloud-native patterns by Cornelia Davis!
2. For example, many companies run their software on AWS Spot Instances to reduce their infrastructure bills. The price of Spot
forces our application to be distributed - there should be multiple instances of it running on multiple machines.
The same is true if we want to be able to handle dynamic workloads - we should be able to measure if our system is under load and throw more compute at the problem by spinning up new instances of the application. This also requires our infrastructure to be elastic to avoid overprovisioning and its associated costs.
Running a replicated application influences our approach to data persistence - we will avoid using the local
filesystem as our primary storage solution, relying instead on databases for our persistence needs.
Zero To Production will thus extensively cover topics that might seem tangential to pure backend application development. But Cloud-native software is all about rainbows and DevOps, therefore we will be spending plenty of time on topics traditionally associated with the craft of operations.
We will cover how to instrument your Rust application to collect logs, traces and metrics, so that you can observe your system.
We will cover how to set up and evolve your database schema via migrations.
We will cover all the material required to use Rust to tackle both day one and day two concerns of a Cloud-native API.
Working in a team
The impact of those three requirements goes beyond the technical characteristics of our system: it influences
how we build our software.
To be able to quickly release a new version of our application to our users we need to be sure that our application works.
If you are working on a solo project you can rely on your thorough understanding of the whole system: you wrote it, and it might be small enough to fit entirely in your head at any point in time.3
If you are working in a team on a commercial project, you will very often be working on code that was neither written nor reviewed by you. The original authors might not be around anymore.
You will end up being paralysed by fear every time you are about to introduce changes if you are relying on
your comprehensive understanding of what the code does to prevent it from breaking.
You want automated tests.
Running on every commit. On every branch. Keeping main healthy.
You want to leverage the type system to make undesirable states difficult or impossible to represent.
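As a quick, hypothetical sketch of that idea (the SubscriberEmail name and its validation rule are illustrative, not the book's actual implementation): a newtype whose constructor enforces an invariant makes "an invalid email inside the system" an unrepresentable state.

```rust
// Illustrative sketch: a newtype whose constructor enforces an invariant.
// `SubscriberEmail` is a hypothetical name, not this book's actual API.
#[derive(Debug, PartialEq)]
pub struct SubscriberEmail(String);

impl SubscriberEmail {
    /// Accepts the input only if it vaguely resembles an email address.
    /// (A real implementation would use a proper validation crate.)
    pub fn parse(s: String) -> Result<SubscriberEmail, String> {
        if !s.trim().is_empty() && s.contains('@') {
            Ok(SubscriberEmail(s))
        } else {
            Err(format!("{} is not a valid subscriber email.", s))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    // Any function taking `SubscriberEmail` can no longer receive junk:
    // the invalid state is rejected once, at the boundary.
    assert!(SubscriberEmail::parse("ursula@example.com".to_string()).is_ok());
    assert!(SubscriberEmail::parse("definitely-not-an-email".to_string()).is_err());
    println!("ok");
}
```

Validation happens exactly once, at construction; every other function in the system can take the newtype and stop worrying.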
You want to use every tool at your disposal to empower each member of the team to evolve that piece of
software. To contribute fully to the development process even if they might not be as experienced as you or
equally familiar with the codebase or the technologies you are using.
instances is the result of a continuous auction and it can be substantially cheaper than the corresponding full price for On Demand instances (up to 90% cheaper!). There is one gotcha: AWS can decommission your Spot instances at any point in time. Your software must be fault-tolerant to leverage this opportunity.
3. Assuming you wrote it recently. Your past self from one year ago counts as a stranger for all intents and purposes in the world of software development. Pray that your past self wrote comments for your present self if you are about to pick up again an old project of yours.
Zero To Production will therefore put a strong emphasis on test-driven development and continuous integration from the get-go - we will have a CI pipeline set up before we even have a web server up and running!
We will be covering techniques such as black-box testing for APIs and HTTP mocking - not yet wildly popular or well documented in the Rust community, but extremely powerful.
We will also borrow terminology and techniques from the Domain Driven Design world, combining them
with type-driven design to ensure the correctness of our systems.
Our main focus is enterprise software: correct code which is expressive enough to model the domain
and supple enough to support its evolution over time.
We will thus have a bias for boring and correct solutions, even if they incur a performance overhead that
could be optimised away with a more careful and chiseled approach.
Get it to run first, optimise it later (if needed).
Who Is This Book For
I am writing this book for the seasoned backend developers who have read The Rust Book and are now
trying to port over a couple of simple systems.
I am writing this book for the new engineers on my team, a trail to help them make sense of the codebases
they will contribute to over the coming weeks and months.
I am writing this book for a niche whose needs I believe are currently underserved by the articles and resources
available in the Rust ecosystem.
Chapter 1
Getting Started
There is more to a programming language than the language itself: tooling is a key element of the experience
of using the language.
The same applies to many other technologies (e.g. RPC frameworks like gRPC or Apache Avro) and it often
has a disproportionate impact on the uptake (or the demise) of the technology itself.
Tooling should therefore be treated as a first-class concern both when designing and teaching the language
itself.
The Rust community has put tooling at the forefront since its early days: it shows.
We are now going to take a brief tour of a set of tools and utilities that are going to be useful in our journey. Some of them are officially supported by the Rust organisation, others are built and maintained by the community.
1.1.2 Release Channels
The Rust project strives for stability without stagnation. Quoting from Rust’s documentation:
[..] you should never have to fear upgrading to a new version of stable Rust. Each upgrade should
be painless, but should also bring you new features, fewer bugs, and faster compile times.
That is why, for application development, you should generally rely on the latest released version of the
compiler to run, build and test your software - the so-called stable channel.
A new version of the compiler is released on the stable channel every six weeks1 - the latest version at the
time of writing is v1.72.02.
Testing your software using the beta compiler is one of the many ways to support the Rust project - it helps catch bugs before the release date3.
nightly serves a different purpose: it gives early adopters access to unfinished features4 before they are released (or even on track to be stabilised!).
I would invite you to think twice if you are planning to run production software on top of the nightly
compiler: it’s called unstable for a reason.
You can update your toolchains with rustup update, while rustup toolchain list will give you an overview of what is installed on your system.
We will not need (or perform) any cross-compiling - our production workloads will be running in containers, hence we do not need to cross-compile from our development machine to the target host used in our production environment.
1. More details on the release schedule can be found in the Rust book.
2. You can check the next version and its release date at Rust forge.
3. It’s fairly rare for beta releases to contain issues thanks to the CI/CD setup of the Rust project. One of its most interesting components is crater, a tool designed to scrape crates.io and GitHub for Rust projects to build them and run their test suites to identify potential regressions. Pietro Albini gave an awesome overview of the Rust release process in his Shipping a compiler every six weeks talk at RustFest 2019.
4. You can check the list of feature flags available on nightly in The Unstable Book. Spoiler: there are loads.
You will not be spending a lot of quality time working directly with rustc - your main interface for building
and testing Rust applications will be cargo, Rust’s build tool.
You can double-check everything is up and running with
cargo --version
1.2 Project Setup
Let’s use cargo to create the skeleton of the project we will be working on for the whole book:
cargo new zero2prod
You should have a new zero2prod folder, with the following file structure:
zero2prod/
Cargo.toml
.gitignore
.git
src/
main.rs
We will be using GitHub as a reference given its popularity and the recently released GitHub Actions feature
for CI pipelines, but you are of course free to choose any other git hosting solution (or none at all).
1.3 IDEs
The project skeleton is ready, it is now time to fire up your favourite editor so that we can start messing
around with it.
Different people have different preferences, but I would argue that the bare minimum you want to have, especially if you are starting out with a new programming language, is a setup that supports syntax highlighting, code navigation and code completion.
Syntax highlighting gives you immediate feedback on glaring syntax errors, while code navigation and code completion enable “exploratory” programming: jumping in and out of the source of your dependencies,
quick access to the available methods on a struct or an enum you imported from a crate without having to
continuously switch between your editor and docs.rs.
You have two main options for your IDE setup: rust-analyzer and IntelliJ Rust.
1.3.1 Rust-analyzer
rust-analyzer5 is an implementation of the Language Server Protocol for Rust.
The Language Server Protocol makes it easy to leverage rust-analyzer in many different editors, including
but not limited to VS Code, Emacs, Vim/NeoVim and Sublime Text 3.
Editor-specific setup instructions can be found on rust-analyzer’s website.
1.4 Inner Development Loop
While working on our project, we will go through the same steps over and over again: make a change, compile the application, then:
• Run tests;
• Run the application.
This is also known as the inner development loop.
The speed of your inner development loop acts as an upper bound on the number of iterations that you can complete in a unit of time.
If it takes 5 minutes to compile and run the application, you can complete at most 12 iterations in an hour.
Cut it down to 2 minutes and you can now fit in 30 iterations in the same hour!
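The back-of-the-envelope arithmetic above can be captured in a tiny helper (purely illustrative):

```rust
// Purely illustrative: how many iterations fit in an hour, given the
// duration of a single compile-test-run cycle in minutes.
fn iterations_per_hour(minutes_per_iteration: u32) -> u32 {
    60 / minutes_per_iteration
}

fn main() {
    assert_eq!(iterations_per_hour(5), 12); // a 5-minute loop caps you at 12/hour
    assert_eq!(iterations_per_hour(2), 30); // a 2-minute loop allows 30/hour
    println!("ok");
}
```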
Rust does not help us here - compilation speed can become a pain point on big projects. Let’s see what we
can do to mitigate the issue before moving forward.
1.4.1 Faster Linking
A good chunk of compilation time is spent in the linking phase. We can configure cargo to use lld, a faster linker, via a .cargo/config.toml file:
# On Windows
# ```
# cargo install -f cargo-binutils
# rustup component add llvm-tools-preview
# ```
[target.x86_64-pc-windows-msvc]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
[target.x86_64-pc-windows-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
# On Linux:
# - Ubuntu, `sudo apt-get install lld clang`
# - Arch, `sudo pacman -S lld clang`
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "linker=clang", "-C", "link-arg=-fuse-ld=lld"]
# On MacOS, `brew install llvm` and follow steps in `brew info llvm`
[target.x86_64-apple-darwin]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
6 CHAPTER 1. GETTING STARTED
[target.aarch64-apple-darwin]
rustflags = ["-C", "link-arg=-fuse-ld=/opt/homebrew/opt/llvm/bin/ld64.lld"]
There is ongoing work on the Rust compiler to use lld as the default linker where possible - soon enough
this custom configuration will not be necessary to achieve higher compilation performance!8
1.4.2 cargo-watch
We can also mitigate the impact on our productivity by reducing the perceived compilation time - i.e. the
time you spend looking at your terminal waiting for cargo check or cargo run to complete.
Tooling can help here - let’s install cargo-watch:
cargo install cargo-watch
cargo-watch monitors your source code to trigger commands every time a file changes.
For example:
cargo watch -x check
1.5 Continuous Integration
In trunk-based development we should be able to deploy our main branch at any point in time.
Every member of the team can branch off from main, develop a small feature or fix a bug, merge back into
main and release to our users.
Continuous Integration empowers each member of the team to integrate their changes into the main
branch multiple times a day.
1.5.1 CI Steps
1.5.1.1 Tests
If your CI pipeline had a single step, it should be testing.
Tests are a first-class concept in the Rust ecosystem and you can leverage cargo to run your unit and integration tests:
cargo test
cargo test also takes care of building the project before running tests, hence you do not need to run cargo build beforehand (even though most pipelines will invoke cargo build before running tests to cache dependencies).
1.5.1.2 Code Coverage
The easiest way to measure code coverage of a Rust project is via cargo tarpaulin, a cargo subcommand developed by xd009642. You can install tarpaulin with
# At the time of writing tarpaulin only supports
# x86_64 CPU architectures running Linux.
cargo install cargo-tarpaulin
while
cargo tarpaulin --ignore-tests
will compute code coverage for your application code, ignoring your test functions.
tarpaulin can be used to upload code coverage metrics to popular services like Codecov or Coveralls - instructions can be found in tarpaulin’s README.
1.5.1.3 Linting
Writing idiomatic code in any programming language requires time and practice.
It is easy at the beginning of your learning journey to end up with fairly convoluted solutions to problems
that could otherwise be tackled with a much simpler approach.
Static analysis can help: in the same way a compiler steps through your code to ensure it conforms to the
language rules and constraints, a linter will try to spot unidiomatic code, overly-complex constructs and
common mistakes/inefficiencies.
The Rust team maintains clippy, the official Rust linter9 .
clippy is included in the set of components installed by rustup if you are using the default profile. Often
CI environments use rustup’s minimal profile, which does not include clippy.
You can easily install it with
rustup component add clippy
In our CI pipeline we would like to fail the linter check if clippy emits any warnings.
We can achieve it with
cargo clippy -- -D warnings
Static analysis is not infallible: from time to time clippy might suggest changes that you do not believe to
be either correct or desirable.
You can mute a warning using the #[allow(clippy::lint_name)] attribute on the affected code block or disable the noisy lint altogether for the whole project with a configuration line in clippy.toml or a project-level #![allow(clippy::lint_name)] directive.
9. Yes, clippy is named after the (in)famous paperclip-shaped Microsoft Word assistant.
Details on the available lints and how to tune them for your specific purposes can be found in clippy’s
README.
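As a minimal sketch of the first option (the lint name here is just an example):

```rust
// Example: silence a single clippy lint on one item.
// `clippy::needless_return` is just an illustrative lint name.
#[allow(clippy::needless_return)]
fn answer() -> i32 {
    // clippy would normally flag this explicit `return` as needless.
    return 42;
}

fn main() {
    println!("{}", answer());
}
```

Scoping the attribute to the smallest possible item keeps the lint active everywhere else in the codebase.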
1.5.1.4 Formatting
Most organizations have more than one line of defence for the main branch: one is provided by the CI
pipeline checks, the other is often a pull request review.
A lot can be said on what distinguishes a value-adding PR review process from a soul-sucking one - no need
to re-open the whole debate here.
I know for sure what should not be the focus of a good PR review: formatting nitpicks - e.g. Can you add a
newline here?, I think we have a trailing whitespace there!, etc.
Let machines deal with formatting while reviewers focus on architecture, testing thoroughness, reliability,
observability. Automated formatting removes a distraction from the complex equation of the PR review process. You might dislike this or that formatting choice, but the complete erasure of formatting bikeshedding is worth the minor discomfort.
rustfmt is the official Rust formatter.
Just like clippy, rustfmt is included in the set of default components installed by rustup. If missing, you
can easily install it with
rustup component add rustfmt
In our CI pipeline we can add a formatting check with cargo fmt -- --check. It will fail when a commit contains unformatted code, printing the difference to the console.10
You can tune rustfmt for a project with a configuration file, rustfmt.toml. Details can be found in rustfmt’s README.
The Rust Secure Code working group provides cargo-audit11, a convenient cargo sub-command to check if vulnerabilities have been reported for any of the crates in the dependency tree of your project.
You can install it with
cargo install cargo-audit
Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.
Hopefully I have taught you enough to go out there and stitch together a solid CI pipeline for your Rust
projects.
We should also be honest and admit that it can take multiple hours of fidgeting around to learn how to use
the specific flavour of configuration language used by a CI provider and the debugging experience can often
be quite painful, with long feedback cycles.
I have thus decided to collect a set of ready-made configuration files for the most popular CI providers - the
exact steps we just described, ready to be embedded in your project repository:
• GitHub Actions;
• CircleCI;
• GitLab CI;
• Travis.
It is often much easier to tweak an existing setup to suit your specific needs than to write a new one from
scratch.
11. cargo-deny, developed by Embark Studios, is another cargo sub-command that supports vulnerability scanning of your dependency tree. It also bundles additional checks you might want to perform on your dependencies - it helps you identify unmaintained crates, define rules to restrict the set of allowed software licenses and spot when you have multiple versions of the same crate in your lock file (wasted compilation cycles!). It requires a bit of upfront effort in configuration, but it can be a powerful addition to your CI toolbox.
Chapter 2
Building An Email Newsletter
Zero To Production will focus on the challenges of writing cloud-native applications in a team of four
or five engineers with different levels of experience and proficiency.
It flips the hierarchy you are used to: the material you are studying is not relevant because somebody claims
it is, it is relevant because it is useful to get closer to a solution.
You learn new techniques and when it makes sense to reach for them.
The devil is in the details: a problem-based learning path can be delightful, yet it is painfully easy to misjudge
how challenging each step of the journey is going to be.
Our driving example needs to be small enough to tackle in a book, yet complex enough to surface the challenges of real production systems. We will go for an email newsletter - the next section will detail the functionality we plan to cover1.
1. Who knows, I might end up using our home-grown newsletter application to release the final chapter - it would definitely provide me with a sense of closure.
As a …,
I want to …,
So that …
A user story helps us capture who we are building for (as a), the actions they want to perform (want to), as well as their motives (so that).
We will fulfill two user stories:
• As a blog visitor,
I want to subscribe to the newsletter,
So that I can receive email updates when new content is published on the blog;
• As the blog author,
I want to send an email to all my subscribers,
So that I can notify them when new content is published.
We will not add features to
• unsubscribe;
• manage multiple newsletters;
• segment subscribers in multiple audiences;
• track opening and click rates.
2. Make no mistake: when buying a SaaS product it is often not the software itself that you are paying for - you are paying for the peace of mind of knowing that there is an engineering team working full time to keep the service up and running, for their legal and compliance expertise, for their security team. We (developers) often underestimate how much time (and headaches) that saves us over time.
As said, pretty barebones. We would definitely not be able to launch publicly without giving users the possibility to unsubscribe.
Nonetheless, fulfilling those two stories will give us plenty of opportunities to practice and hone our skills!
2.3.1 Coming Up
The strategy is clear, we can finally get started: the next chapter will focus on the subscription functionality.
Getting off the ground will require some initial heavy-lifting: choosing a web framework, setting up the
infrastructure for managing database migrations, putting together our application scaffolding as well as our
setup for integration testing.
Expect to spend way more time pair programming with the compiler going forward!
The accompanying repository contains snapshots, showing what the project looks like at the end of each chapter and key sections.
If you get stuck, make sure to compare your code with the one in the repository!
Chapter 3
Sign Up A New Subscriber
We spent the whole previous chapter defining what we will be building (an email newsletter!), narrowing
down a precise set of requirements. It is now time to roll up our sleeves and get started with it.
This chapter will take a first stab at implementing this user story:
As a blog visitor,
I want to subscribe to the newsletter,
So that I can receive email updates when new content is published on the blog.
We expect our blog visitors to input their email address in a form embedded on a web page.
The form will trigger an API call to a backend server that will actually process the information, store it and
send back a response.
This chapter will focus on that backend server - we will implement the /subscriptions POST endpoint.
We will be relying on our Continuous Integration pipeline to keep us in check throughout the process - if
you have not set it up yet, go back to Chapter 1 and grab one of the ready-made templates.
Throughout this chapter and beyond I suggest you keep a couple of extra browser tabs open: actix-web’s website, actix-web’s documentation and actix-web’s examples collection.
We can use /health_check to verify that the application is up and ready to accept incoming requests.
Combine it with a SaaS service like pingdom.com and you can be alerted when your API goes dark - quite a
good baseline for an email newsletter that you are running on the side.
A health-check endpoint can also be handy if you are using a container orchestrator to juggle your application (e.g. Kubernetes or Nomad): the orchestrator can call /health_check to detect if the API has become unresponsive and trigger a restart.
#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(greet))
.route("/{name}", web::get().to(greet))
})
.bind("127.0.0.1:8000")?
.run()
.await
}
We have not added actix-web and tokio to our list of dependencies, therefore the compiler cannot resolve
what we imported.
We can either fix the situation manually, by adding
#! Cargo.toml
# [...]
[dependencies]
actix-web = "4"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
under [dependencies] in our Cargo.toml or we can use cargo add to quickly add the latest version of both
crates as a dependency of our project:
cargo add actix-web@4
cargo add tokio@1 --features macros,rt-multi-thread
1. During our development process we are not always interested in producing a runnable binary: we often just want to know if our code compiles or not. cargo check was born to serve exactly this use case: it runs the same checks that are run by cargo build, but it does not bother to perform any machine code generation. It is therefore much faster and provides us with a tighter feedback loop. See link for more details.
You can now launch the application with cargo run and perform a quick manual test:
curl http://127.0.0.1:8000
Hello World!
#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(greet))
.route("/{name}", web::get().to(greet))
})
.bind("127.0.0.1:8000")?
.run()
.await
}
App is the component whose job is to take an incoming request as input and spit out a response.
Let’s zoom in on our code snippet:
App::new()
.route("/", web::get().to(greet))
.route("/{name}", web::get().to(greet))
App is a practical example of the builder pattern: new() gives us a clean slate to which we can add, one bit at
a time, new behaviour using a fluent API (i.e. chaining method calls one after the other).
We will cover the majority of App’s API surface on a need-to-know basis over the course of the whole book:
by the end of our journey you should have touched most of its methods at least once.
route takes two parameters:
• path, a string, possibly templated (e.g. "/{name}") to accommodate dynamic path segments;
• route, an instance of the Route struct.
"/" will match all requests without any segment following the base path - i.e. http://localhost:8000/.
web::get() is a short-cut for Route::new().guard(guard::Get()) a.k.a. the request should be passed to
the handler if and only if its HTTP method is GET.
You can start to picture what happens when a new request comes in: App iterates over all registered endpoints
until it finds a matching one (both path template and guards are satisfied) and passes over the request object
to the handler.
This is not 100% accurate but it is a good enough mental model for the time being.
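That mental model can be sketched in plain Rust. Everything below (the Endpoint struct, the matching rules, the handler signature) is invented for illustration and is a much cruder model than actix-web's real router:

```rust
// A toy router: each endpoint is a path template plus a method guard;
// the router scans them in order and hands the request to the first
// handler whose template and guard both match.
type Handler = fn(&str) -> String;

struct Endpoint {
    template: &'static str, // e.g. "/" or "/{name}"
    method: &'static str,   // the guard: the HTTP method the handler accepts
    handler: Handler,
}

// A template matches a concrete path if they have the same number of
// segments and every literal segment is equal ("{...}" matches anything).
fn matches(template: &str, path: &str) -> bool {
    let t: Vec<&str> = template.split('/').collect();
    let p: Vec<&str> = path.split('/').collect();
    t.len() == p.len() && t.iter().zip(&p).all(|(ts, ps)| ts.starts_with('{') || ts == ps)
}

// Scan registered endpoints in order, stopping at the first one whose
// template and method guard are both satisfied.
fn route(endpoints: &[Endpoint], method: &str, path: &str) -> Option<String> {
    endpoints
        .iter()
        .find(|e| e.method == method && matches(e.template, path))
        .map(|e| (e.handler)(path))
}

fn greet(path: &str) -> String {
    let name = path.rsplit('/').next().unwrap_or("");
    if name.is_empty() {
        "Hello World!".to_string()
    } else {
        format!("Hello {}!", name)
    }
}

fn main() {
    let endpoints = [
        Endpoint { template: "/", method: "GET", handler: greet },
        Endpoint { template: "/{name}", method: "GET", handler: greet },
    ];
    println!("{:?}", route(&endpoints, "GET", "/Luca")); // Some("Hello Luca!")
    println!("{:?}", route(&endpoints, "POST", "/Luca")); // None: the guard rejects it
}
```

Note how the guard failure and the template mismatch are indistinguishable from the caller's point of view: the request simply does not reach the handler.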
What does a handler look like instead? What is its function signature?
We only have one example at the moment, greet:
async fn greet(req: HttpRequest) -> impl Responder {
    [...]
}
greet is an asynchronous function that takes an HttpRequest as input and returns something that implements
the Responder trait[2]. A type implements the Responder trait if it can be converted into a HttpResponse -
it is implemented off the shelf for a variety of common types (e.g. strings, status codes, bytes, HttpResponse,
etc.) and we can roll our own implementations if needed.
Do all our handlers need to have the same function signature as greet?
No! actix-web, channelling some forbidden trait black magic, allows a wide range of different function
signatures for handlers, especially when it comes to input arguments. We will get back to it soon enough.
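The conversion idea behind Responder can be sketched without any framework at all. The trait and type names below (IntoResponse, Response) are made up for this example and actix-web's real machinery is considerably richer, but the principle is the same: anything that can be converted into a response can be returned from a handler.

```rust
// Invented stand-ins for actix-web's `HttpResponse` and `Responder`.
#[derive(Debug, PartialEq)]
struct Response {
    status: u16,
    body: String,
}

trait IntoResponse {
    fn into_response(self) -> Response;
}

// Off-the-shelf implementations for a few common types, mirroring the
// "strings, status codes, ..." list above.
impl IntoResponse for &str {
    fn into_response(self) -> Response {
        Response { status: 200, body: self.to_string() }
    }
}

impl IntoResponse for u16 {
    fn into_response(self) -> Response {
        Response { status: self, body: String::new() }
    }
}

impl IntoResponse for Response {
    fn into_response(self) -> Response {
        self
    }
}

// Handlers can return whatever type is most convenient: the framework
// only requires `impl IntoResponse` and performs the conversion itself.
fn hello() -> impl IntoResponse {
    "Hello World!"
}

fn not_found() -> impl IntoResponse {
    404u16
}

fn main() {
    assert_eq!(hello().into_response().status, 200);
    assert_eq!(not_found().into_response().status, 404);
}
```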
#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
What is #[tokio::main] doing here? Well, let’s remove it and see what happens! cargo check screams at
us with these errors:
error[E0277]: `main` has invalid return type `impl std::future::Future`
--> src/main.rs:8:20
|
8 | async fn main() -> Result<(), std::io::Error> {
| ^^^^^^^^^^^^^^^^^^^
| `main` can only return types that implement `std::process::Termination`
|
= help: consider using `()`, or a `Result`
[2]: impl Responder is using the impl Trait syntax introduced in Rust 1.26 - you can find more details in Rust’s 2018 edition guide.
3.3. OUR FIRST ENDPOINT: A BASIC HEALTH CHECK 21
We need main to be asynchronous because HttpServer::run is an asynchronous method but main, the
entrypoint of our binary, cannot be an asynchronous function. Why is that?
Asynchronous programming in Rust is built on top of the Future trait: a future stands for a value that
may not be there yet. All futures expose a poll method which has to be called to allow the future to make
progress and eventually resolve to its final value. You can think of Rust’s futures as lazy: unless polled, there
is no guarantee that they will execute to completion. This has often been described as a pull model compared
to the push model adopted by other languages[3].
Rust’s standard library, by design, does not include an asynchronous runtime: you are supposed to bring
one into your project as a dependency, one more crate under [dependencies] in your Cargo.toml. This
approach is extremely versatile: you are free to implement your own runtime, optimised to cater for the
specific requirements of your use case (see the Fuchsia project or bastion’s actor framework).
This explains why main cannot be an asynchronous function: who would be in charge of calling poll on it?
There is no special configuration syntax that tells the Rust compiler that one of your dependencies is an
asynchronous runtime (e.g. as we do for allocators) and, to be fair, there is not even a standardised definition
of what a runtime is (e.g. an Executor trait).
You are therefore expected to launch your asynchronous runtime at the top of your main function and then
use it to drive your futures to completion.
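Both properties - laziness and the pull model - can be seen with a toy executor built only on the standard library. This is nowhere near a real runtime like tokio (the waker does nothing, there is no I/O, just a busy poll loop), but it shows the bare mechanics of "launch a runtime, then use it to drive futures to completion":

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: good enough for a busy-polling executor.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn noop(_: *const ()) {}
    RawWaker::new(std::ptr::null(), &RawWakerVTable::new(clone, noop, noop, noop))
}

// The "runtime": repeatedly poll the future until it resolves.
fn block_on<F: Future>(future: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut future = pin!(future);
    loop {
        if let Poll::Ready(value) = future.as_mut().poll(&mut cx) {
            return value;
        }
    }
}

fn main() {
    let fut = async {
        println!("this line only runs once the future is polled");
        21 * 2
    };
    // Nothing has been printed yet: constructing the future ran none of
    // its body. Only `block_on` - by calling poll - makes it progress.
    assert_eq!(block_on(fut), 42);
}
```

A production runtime differs in one crucial way: instead of spinning in a loop, it parks the task and relies on the waker to be notified when polling is worth retrying.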
You might have guessed by now what the purpose of #[tokio::main] is, but guesses are not enough to
satisfy us: we want to see it.
How?
tokio::main is a procedural macro and this is a great opportunity to introduce cargo expand, an awesome
addition to our Swiss army knife for Rust development:
Rust macros operate at the token level: they take in a stream of symbols (e.g. in our case, the whole main
function) and output a stream of new symbols which then gets passed to the compiler. In other words, the
main purpose of Rust macros is code generation.
How do we debug or inspect what is happening with a particular macro? You inspect the tokens it outputs!
That is exactly where cargo expand shines: it expands all macros in your code without passing the output
to the compiler, allowing you to step through it and understand what is going on.
Let’s use cargo expand to demystify #[tokio::main]:
cargo expand
Unfortunately, it fails:
error: the option `Z` is only accepted on the nightly compiler
error: could not compile `zero2prod`
[3]: Check out the release notes of async/await for more details. The talk by withoutboats at Rust LATAM 2019 is another excellent reference on the topic. If you prefer books to talks, check out Futures Explained in 200 Lines of Rust.
We are using the stable compiler to build, test and run our code. cargo-expand, instead, relies on the
nightly compiler to expand our macros.
You can install the nightly compiler by running
rustup toolchain install nightly --allow-downgrade
Some components of the bundle installed by rustup might be broken/missing on the latest nightly release:
--allow-downgrade tells rustup to find and install the latest nightly where all the needed components are
available.
You can use rustup default to change the default toolchain used by cargo and the other tools managed by
rustup. In our case, we do not want to switch over to nightly - we just need it for cargo-expand.
Luckily enough, cargo allows us to specify the toolchain on a per-command basis:
# Use the nightly toolchain just for this command invocation
cargo +nightly expand
/// [...]
We are starting tokio’s async runtime and we are using it to drive the future returned by HttpServer::run
to completion.
In other words, the job of #[tokio::main] is to give us the illusion of being able to define an asynchronous
main while, under the hood, it just takes our main asynchronous body and writes the necessary boilerplate
to make it run on top of tokio’s runtime.
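If you run cargo +nightly expand on our project, the output looks roughly like the snippet below (the exact shape varies across tokio versions): main is synchronous again, and the asynchronous body is handed to tokio's runtime via block_on.

```rust
fn main() -> Result<(), std::io::Error> {
    // The body of our `async fn main` becomes a plain future...
    let body = async {
        HttpServer::new(|| {
            App::new()
                .route("/", web::get().to(greet))
                .route("/{name}", web::get().to(greet))
        })
        .bind("127.0.0.1:8000")?
        .run()
        .await
    };
    // ...and tokio's runtime is the one calling poll on it.
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("Failed building the Runtime")
        .block_on(body)
}
```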
#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
First of all we need a request handler. Mimicking greet we can start with this signature:
async fn health_check(req: HttpRequest) -> impl Responder {
    todo!()
}
We said that Responder is nothing more than a conversion trait into a HttpResponse. Returning an instance
of HttpResponse directly should work then!
Looking at its documentation we can use HttpResponse::Ok to get a HttpResponseBuilder primed with a
200 status code. HttpResponseBuilder exposes a rich fluent API to progressively build out a HttpResponse
response, but we do not need it here: we can get a HttpResponse with an empty body by calling finish on
the builder.
Gluing everything together:
async fn health_check(req: HttpRequest) -> impl Responder {
    HttpResponse::Ok().finish()
}
A quick cargo check confirms that our handler is not doing anything weird. A closer look at
HttpResponseBuilder unveils that it implements Responder as well - we can therefore omit our call to
finish and shorten our handler to:
async fn health_check(req: HttpRequest) -> impl Responder {
    HttpResponse::Ok()
}
The next step is handler registration - we need to add it to our App via route:
App::new()
    .route("/health_check", web::get().to(health_check))
#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    HttpServer::new(|| {
        App::new()
            .route("/health_check", web::get().to(health_check))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
Our health check response is indeed static and does not use any of the data bundled with the incoming
HTTP request (routing aside). We could follow the compiler’s advice and prefix req with an underscore…
or we could remove that input argument entirely from health_check:
async fn health_check() -> impl Responder {
    HttpResponse::Ok()
}
Surprise surprise, it compiles! actix-web has some pretty advanced type magic going on behind the scenes
and it accepts a broad range of signatures as request handlers - more on that later.
What is left to do?
Well, a little test!
# Launch the application first in another terminal with `cargo run`
curl -v http://127.0.0.1:8000/health_check
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET /health_check HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.61.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 0
< date: Wed, 05 Aug 2020 22:11:52 GMT
Congrats, you have just implemented your first working actix_web endpoint!
We do not want to manually check that all our assumptions on its behaviour are still valid every time we
perform some changes. We’d like to automate as much as possible: those checks should be run in our CI
pipeline every time we commit a change in order to prevent regressions.
While the behaviour of our health check might not evolve much over the course of our journey, it is a good
starting point to get our testing scaffolding properly set up.
#[cfg(test)]
mod tests {
    use crate::health_check;

    #[tokio::test]
    async fn health_check_succeeds() {
        let response = health_check().await;
        // This requires changing the return type of `health_check`
        // from `impl Responder` to `HttpResponse` to compile
        // You also need to import it with `use actix_web::HttpResponse`!
        assert!(response.status().is_success())
    }
}
Changing either of these two properties would break our API contract, but our test would still pass - not
good enough.
actix-web provides some conveniences to interact with an App without skipping the routing logic, but there
are severe shortcomings to its approach:
• migrating to another web framework would force us to rewrite our whole integration test suite. As
much as possible, we’d like our integration tests to be highly decoupled from the technology underpinning
our API implementation (e.g. having framework-agnostic integration tests is life-saving when
you are going through a large rewrite or refactoring!);
• due to some of actix-web’s limitations[4], we wouldn’t be able to share our App startup logic between our
production code and our testing code, therefore undermining our trust in the guarantees provided
by our test suite due to the risk of divergence over time.
We will opt for a fully black-box solution: we will launch our application at the beginning of each test and
interact with it using an off-the-shelf HTTP client (e.g. reqwest).
#[cfg(test)]
mod tests {
    // Import the code I want to test
    use super::*;

    // My tests
}
[4]: App is a generic struct and some of the types used to parametrise it are private to the actix_web project. It is therefore impossible (or, at least, so cumbersome that I have never succeeded at it) to write a function that returns an instance of App.
///
/// assert!(is_even(2));
/// assert!(!is_even(1));
/// ```
pub fn is_even(x: u64) -> bool {
    x % 2 == 0
}
An embedded test module has privileged access to the code living next to it: it can interact with structs,
methods, fields and functions that have not been marked as public and would normally not be available to
a user of our code if they were to import it as a dependency of their own project.
Embedded test modules are quite useful for what I call iceberg projects, i.e. the exposed surface is very
limited (e.g. a couple of public functions), but the underlying machinery is much larger and fairly complicated
(e.g. tens of routines). It might not be straightforward to exercise all the possible edge cases via the exposed
functions - you can then leverage embedded test modules to write unit tests for private sub-components to
increase your overall confidence in the correctness of the whole project.
Tests in the external tests folder and doc tests, instead, have exactly the same level of access to your code
that you would get if you were to add your crate as a dependency in another project. They are therefore used
mostly for integration testing, i.e. testing your code by calling it in the same exact way a user would.
Our email newsletter is not a library, therefore the line is a bit blurry - we are not exposing it to the world as
a Rust crate, we are putting it out there as an API accessible over the network.
Nonetheless we are going to use the tests folder for our API integration tests - it is more clearly separated
and it is easier to manage test helpers as sub-modules of an external test binary.
If you won’t take my word for it, we can run a quick experiment:
# Create the tests folder
mkdir -p tests
//! tests/health_check.rs
use zero2prod::main;
#[test]
fn dummy_test() {
    main()
}
For more information about this error, try `rustc --explain E0432`.
error: could not compile `zero2prod`.
We need to refactor our project into a library and a binary: all our logic will live in the library crate while the
binary itself will be just an entrypoint with a very slim main function.
First step: we need to change our Cargo.toml.
It currently looks something like this:
[package]
name = "zero2prod"
version = "0.1.0"
authors = ["Luca Palmieri <[email protected]>"]
edition = "2021"
[dependencies]
# [...]
We are relying on cargo’s default behaviour: unless something is spelled out, it will look for a src/main.rs
file as the binary entrypoint and use the package.name field as the binary name.
Looking at the manifest target specification, we need to add a lib section to add a library to our project:
[package]
name = "zero2prod"
version = "0.1.0"
authors = ["Luca Palmieri <[email protected]>"]
edition = "2021"
[lib]
# We could use any path here, but we are following the community convention
# We could specify a library name using the `name` field. If unspecified,
# cargo will default to `package.name`, which is what we want.
path = "src/lib.rs"
[dependencies]
# [...]
The lib.rs file does not exist yet and cargo won’t create it for us:
cargo check
It fails with couldn’t read src/lib.rs: No such file or directory. Create the file (touch src/lib.rs) and everything should be working: cargo check passes and cargo run still launches our application.
Although it is working, our Cargo.toml file no longer gives you the full picture at a glance: you see a
library, but you don’t see our binary there. Even if not strictly necessary, I prefer to have everything spelled
out as soon as we move out of the auto-generated vanilla configuration:
[package]
name = "zero2prod"
version = "0.1.0"
authors = ["Luca Palmieri <[email protected]>"]
edition = "2021"

[lib]
path = "src/lib.rs"

# Notice the double square brackets: it's an array in TOML's syntax.
# We can only have one library in a project, but we can have multiple binaries!
[[bin]]
path = "src/main.rs"
name = "zero2prod"

[dependencies]
# [...]
use zero2prod::run;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    run().await
}
//! lib.rs
When we receive a GET request for /health_check we return a 200 OK response with no body.
    // Act
    let response = client
        .get("http://127.0.0.1:8000/health_check")
        .send()
        .await
        .expect("Failed to execute request.");

    // Assert
    assert!(response.status().is_success());
    assert_eq!(Some(0), response.content_length());
}
#! Cargo.toml
# [...]
# Dev dependencies are used exclusively when running tests or examples
# They do not get included in the final application binary!
[dev-dependencies]
reqwest = "0.11"
# [...]
3.5. IMPLEMENTING OUR FIRST INTEGRATION TEST 33
The test also covers the full range of properties we are interested in checking:
The test as it is crashes before doing anything useful: we are missing spawn_app, the last piece of the integra-
tion testing puzzle.
Why don’t we just call run in there? I.e.
//! tests/health_check.rs
// [...]
Running target/debug/deps/health_check-fc74836458377166
running 1 test
test health_check_works ...
test health_check_works has been running for over 60 seconds
No matter how long you wait, test execution will never terminate. What is going on?
use zero2prod::run;

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    // Bubble up the io::Error if we failed to bind the address
    // Otherwise call .await on our Server
    run()?.await
}
// if we fail to perform the required setup we can just panic and crash
// all the things.
fn spawn_app() {
    let server = zero2prod::run().expect("Failed to bind address");
    // Launch the server as a background task
    // tokio::spawn returns a handle to the spawned future,
    // but we have no use for it here, hence the non-binding let
    let _ = tokio::spawn(server);
}
#[tokio::test]
async fn health_check_works() {
    // No .await, no .expect
    spawn_app();
    // [...]
}
cargo test
Running target/debug/deps/health_check-a1d027e9ac92cd64
running 1 test
test health_check_works ... ok
3.5.1 Polishing
We got it working; now we need to take a second look and improve it, if needed or possible.
3.5.1.1 Clean Up
What happens to our app running in the background when the test run ends? Does it shut down? Does it
linger as a zombie somewhere?
Well, running cargo test multiple times in a row always succeeds - a strong hint that port 8000 is getting
released at the end of each run, therefore implying that the application is correctly shut down.
A second look at tokio::spawn’s documentation supports our hypothesis: when a tokio runtime is shut
down all tasks spawned on it are dropped. tokio::test spins up a new runtime at the beginning of each
test case and shuts it down at the end of each test case.
In other words, good news - no need to implement any clean up logic to avoid leaking resources between test
runs.
There is a catch, though: we are binding to a hard-coded port (8000), which causes two problems:
• if port 8000 is being used by another program on our machine (e.g. our own application!), tests will
fail;
• if we try to run two or more tests in parallel, only one of them will manage to bind the port; all the
others will fail.
We can do better: tests should run their background application on a random available port.
First of all we need to change our run function - it should take the application address as an argument instead
of relying on a hard-coded value:
//! src/lib.rs
// [...]
fn spawn_app() {
    let server = zero2prod::run("127.0.0.1:0").expect("Failed to bind address");
    let _ = tokio::spawn(server);
}
Done - the background app now runs on a random port every time we launch cargo test!
There is only a small issue… our test is failing[5]!
running 1 test
test health_check_works ... FAILED
failures:
failures:
health_check_works
Our HTTP client is still calling 127.0.0.1:8000 and we really don’t know what to put there now: the
application port is determined at runtime, we cannot hard code it there.
We need, somehow, to find out what port the OS has gifted our application and return it from spawn_app.
There are a few ways to go about it - we will use a std::net::TcpListener.
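The trick we are about to use can be seen in isolation with just the standard library: binding to port 0 lets the OS pick any free port, and local_addr tells us which one we were given.

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Port 0 is special-cased at the OS level: binding to it asks the OS
    // to pick an available port on our behalf.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // `local_addr` reveals which port we actually got.
    let port = listener.local_addr()?.port();
    println!("bound to http://127.0.0.1:{}", port);
    assert_ne!(port, 0);
    Ok(())
}
```

This is exactly the information spawn_app needs to return to the test.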
Our HttpServer right now is doing double duty: given an address, it will bind it and then start the
application. We can take over the first step: we will bind the port on our own with TcpListener and then
hand that over to the HttpServer using listen.

[5]: There is a remote chance that the OS ended up picking 8000 as random port and everything worked out smoothly. Cheers to you, lucky reader!
use actix_web::dev::Server;
use actix_web::{web, App, HttpResponse, HttpServer};
use std::net::TcpListener;
// [...]
The change broke both our main and our spawn_app function. I’ll leave main to you; let’s focus on
spawn_app:
//! tests/health_check.rs
// [...]
We can now leverage the application address in our test to point our reqwest::Client:
//! tests/health_check.rs
// [...]
#[tokio::test]
async fn health_check_works() {
    // Arrange
    let address = spawn_app();
    let client = reqwest::Client::new();

    // Act
    let response = client
        // Use the returned application address
        .get(&format!("{}/health_check", &address))
        .send()
        .await
        .expect("Failed to execute request.");

    // Assert
    assert!(response.status().is_success());
    assert_eq!(Some(0), response.content_length());
}
All is good - cargo test comes out green. Our setup is much more robust now!
3.6 Refocus
Let’s take a small break to look back: we covered a fair amount of ground!
We set out to implement a /health_check endpoint and that gave us the opportunity to learn more about
the fundamentals of our web framework, actix-web, as well as the basics of (integration) testing for Rust
APIs.
It is now time to capitalise on what we learned to finally fulfill the first user story of our email newsletter
project:
As a blog visitor,
I want to subscribe to the newsletter,
So that I can receive email updates when new content is published on the blog.
We expect our blog visitors to input their email address in a form embedded on a web page.
The form will trigger a POST /subscriptions call to our backend API that will actually process the
information, store it and send back a response.
We will have to dig into:
• how to read data collected in a HTML form in actix-web (i.e. how do I parse the request body of a
POST?);
• what libraries are available to work with a PostgreSQL database in Rust (diesel vs sqlx vs
tokio-postgres);
the keys and values [in our form] are encoded in key-value tuples separated by ‘&’, with a ‘=’ between
the key and the value. Non-alphanumeric characters in both keys and values are percent encoded.
For example: if the name is Le Guin and the email is [email protected], the POST request body
should be name=le%20guin&email=ursula_le_guin%40gmail.com (spaces are replaced by %20 while @
becomes %40 - a reference conversion table can be found on w3schools’ website).
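To make the encoding rule concrete, here is a hand-rolled sketch of it in Rust. In the application itself this conversion is handled by the framework, not by code like this; the helper names below are invented, and the set of characters left unencoded is simplified to alphanumerics plus - . _ * (which is why the underscores in the email survive while the space and the @ do not).

```rust
// Percent-encode a string: every byte outside the "safe" set becomes %XX.
fn percent_encode(input: &str) -> String {
    input
        .bytes()
        .map(|b| {
            if b.is_ascii_alphanumeric() || matches!(b, b'-' | b'.' | b'_' | b'*') {
                (b as char).to_string()
            } else {
                format!("%{:02X}", b)
            }
        })
        .collect()
}

// Key-value pairs joined by '&', with '=' between each key and value.
fn form_urlencode(pairs: &[(&str, &str)]) -> String {
    pairs
        .iter()
        .map(|(k, v)| format!("{}={}", percent_encode(k), percent_encode(v)))
        .collect::<Vec<_>>()
        .join("&")
}

fn main() {
    let body = form_urlencode(&[
        ("name", "le guin"),
        ("email", "ursula_le_guin@gmail.com"),
    ]);
    // The space became %20, the @ became %40 - as described above.
    assert_eq!(body, "name=le%20guin&email=ursula_le_guin%40gmail.com");
    println!("{}", body);
}
```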
To summarise:
• if a valid pair of name and email is supplied using the application/x-www-form-urlencoded format
the backend should return a 200 OK;
• if either name or email are missing the backend should return a 400 BAD REQUEST.