
THE IMPOSTER’S ROADMAP

ESSENTIAL TOOLS AND SKILLS FOR SELF-TAUGHT DEVELOPERS WHO WANT TO GROW THEIR CAREER.

ROB CONERY
The Imposter’s Roadmap

A guide for self-taught programmers building their careers in the tech industry.

ROB CONERY

COPYRIGHT BIG MACHINE, INC, 2024

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any
form or by any means, including photocopying, recording, or other electronic or mechanical methods,
without the prior written permission of the publisher, except in the case of brief quotations embodied
in critical reviews and certain other noncommercial uses permitted by copyright law. For permission
requests, write to the publisher, addressed "Attention: Rob," at the address below.

Published by: Big Machine, Inc

Please forward all questions RE publication to [email protected]

Publisher’s version: 0.1.0


CONTENTS

Foreword v
Preface vii
Introduction xiii

PART ONE
PREPARATION
1. Your Career Journal 3
Keeping receipts, keeping your job
2. Essential Soft Skills 46
Because you will need to work with other people
3. Leading Your Team 69
You can change someone’s life and career with your actions
4. Simple Tools For Managing Projects 119
A look at tools and services for handling the details
5. Principles of Interface Design 160
Adding some polish can make all the difference
6. Creating User Stories 186
This Agile practice can help keep you focused on what’s important
7. Software Project Management Basics 207
Getting to know Agile and its various flavors
8. The First Sprint: Your Prototype 231
Action is the name of the game - get something in front of your boss or client
9. Summary: Buckle up 269
Shipping is winning; shipping gets you noticed

PART TWO
DEVELOPMENT
10. A Brief Review of Git 275
You will likely be using Git to manage your source control, so let’s have a
quick review
11. Flexing Git and GitHub 289
Your source history tells a story, let’s make it a good one
12. Trunk-based Development 323
When process gets in the way, throw it all out
13. Building Things Using Containers 338
Docker is essential to understand if you’re going to lead a team
14. Simple Container Orchestration 371
Digging in to Docker Compose
15. Formalizing Our Container Strategy 399
Commonly known as DevOps, this is where things get interesting
16. Kubernetes 423
A critical tool to understand so you can make someone else do it
17. Architectural Approaches 457
The structure of your application impacts more than you realize
18. Testing Strategies 499
Unit, Behavioral, Integration, User Acceptance, and Exploratory
19. Debugging 545
A wonderful skill to master
20. Congrats on Your MVP! 558
You’re in the spotlight now, friendo

PART THREE
DELIVERY
21. What It Means to Ship 571
We love shipping, but we don’t love the politics that come with it
22. The Build 586
Formalizing a critical process
23. Adding Features, Fixing Things 616
Change is the hardest part of any application lifetime
24. Code Reviews 639
In which we get to flex our people skills for great good
25. Oh, Right, Documentation 665
No one likes creating documentation, but when it’s done well, you’re a hero
26. Monitoring 699
Knowing what’s happening when so you can avoid problems
27. We’re Gonna Need a Bigger Boat 732
The art and science of scaling your application
28. A Loud Bang, Then Silence 770
Creating a Disaster Plan for when things go very badly
29. Reporting, Once Again 790
Understanding how good data can save your job
30. And Here We Are 819
Go make some magic happen
FOREWORD

Here I am writing the foreword for Rob Conery’s latest impostors book. We collaborated on the second one and I've written for and with him before. But today I am sitting here reading his latest draft. I stop, and I text him, “you're a very good writer. I'm jealous.”

What am I supposed to do with that? Where am I supposed to compartmentalize the fact that I'm writing a foreword to an imposters book when I myself am an imposter, and I'm feeling bad that I didn't write this book? You can imagine how one quickly goes down the drain.

I'm telling you this for the same reason that Rob wrote the book: because it's important that you know that you're not alone. We're all just doing our best and we are all just making it up as we go along. Very rarely is there a plan. You can go to Instagram reels and you'll find out that the algorithm is designed to make you feel bad about yourself, and all of the Hustle Culture Bros will tell you that the reason that you're not successful is because you're not getting up at 5:00 in the morning to do burpees. That’s not why.

Feeling like an imposter - or impostor, even! - is a complicated web of reasons. But you can and will dig out slowly. You'll never completely feel impostor-free, BUT you will have tiny victories, and this book is a great start with a powerful series of tools and techniques that are sure to set you up for success.

Imposters Unite!

Scott Hanselman

VP of Developer Community, Microsoft


PREFACE

This book is a practical guide to growing your career in the tech industry. We’ll discuss tools and skills, I’ll relate a few stories from my experience, and I hope that you walk away with a reasonable understanding of what’s in front of you. It should be a fun ride; it was for me.

Most of the time.

I have screwed up, a lot. You will screw up too, and that’s OK because
this is how we learn and grow. Don’t be afraid to fail because it will
happen. It’s how you fail that counts.

ABOUT ME
I’ve never met you, most likely, so I have to make a few assumptions. This is dangerous ground, given our differing social, cultural, and other backgrounds. Yet, we’re both working in the tech industry, so with that, we share common ground.

That common ground is where I hope to plant the seeds in this book.

There are a few things you’re about to read that you’re going to find
challenging, especially in what people consider socially acceptable. Yet,
I would ask you to suspend your judgment until each chapter has
concluded because, you see, we live in a world of massive
contradiction between what we say publicly and what we think (and
sometimes do) privately.

Our private thoughts drive us, while our external presentation is what
allows us to execute that drive. For some people, it’s important to
align the two and present themselves exactly as they feel on the
inside. These people are exceedingly difficult to get along with.

Others have an overly practiced veneer that reeks of insincerity and pretense. These people are also repellent.

Like everything in this book, there is a wonderful middle ground, where you give just enough of yourself so that people trust you and feel they truly know you. You hold back the things that can go unsaid, respecting the other person’s space as well as holding your own.

Working effectively in this middle ground is called politics. This is where you will need to operate as you climb the ladder at your job, whether (and especially if) you’re a lone contractor or a junior at their first job right out of bootcamp. You simply won’t succeed if you blurt out everything that comes to mind, or “fake it until you make it”. I know too many people like that, and they are found out in this industry.

What I’m trying to say is this: You will be playing a game and learning the art of mastering perception. I know. It sounds duplicitous and slimy, and you’re probably wondering if I’ve “flipped the Bozo bit” and am about to go off on some rant about contrails.

Yet, that’s what leadership is. As Martin Luther King Jr. once said:

A genuine leader is not a searcher for consensus, but a molder of consensus.

Your influence on others will change their perception of their work and even of themselves. Think of the great bosses you’ve had in your life; I’m sure they’ve had a dramatic effect on you. Your influence will also guide the perception of your bosses and clients — aka, “the people who pay the bills”. You will often have to advocate for your project and your team, and your skills at leadership, at molding consensus, are critical.

As with so many things in life, the idea of taking a step forward into a
leadership position means that you believe you can actually lead, or, at
the very least, that you want to lead, and you have faith that you can
do things better than the person who’s leading you now.

That’s some scary stuff, especially if you believe yourself to be “nice” and “do no harm” is your personal motto. That’s a good motto, and you can keep it if you like, and still run the team you’re on.

A non-slimy, non-Bozo way of seeing the above paragraphs is that you are, in fact, a pleasant person: good-hearted, smart, and able to see the love in everyone. If that’s the case, you have a duty to lead the team you’re on.

Politics, innit.

If you’re going to move into a lead position, you will need to believe in
yourself and that you’re the wonderful person you are (hopefully). You
are, aren’t you? I have a feeling that, given this book’s title, you’re not
an egotistical ass that believes the position is yours for the taking.

LET’S TALK ABOUT YOU


I’m assuming you’re in the first 5 or so years of your career, but I’m
hoping there is something in this book for everyone, regardless of
what stage they’re at in their career.

I’m also going to assume you’re a good person, which means you might struggle with the soft skills chapter, where I dig into interpersonal politics and games. As much as we want to avoid it, these “games”, if you will, are essential to your career. You don’t have to engage in them, of course, but that means giving up your power and letting other people decide where you end up.

For some, that works. For others, not so much. By learning about the
“dark arts” of office politics, you can avoid quite a few clumsy
attempts to derail your project by unscrupulous scumbags. Don’t
avoid learning about these things — they can save your ass.

I do hope you read this book with an open mind, and if something strikes you as challenging, try to see both the good and the bad that come with it. That’s one of the skills you’ll need to learn as you move along.

THIS IS A LIVING BOOK


I’m self-published, which means I need to do many things on my own, or by hiring out for help (I do both). That said, it’s entirely possible you’re going to find an issue with some content in this book.

If you do, and you feel like helping out, I have a GitHub repository
where I track the issues. If you have a general question, maybe email
me first. If you find a grammar problem, or you disagree with
something I’ve said, an issue would be super helpful. I’ll have some
templates in there as well, to help things along.

As I fix the issues that come in, I’ll push out a book update. When I
do, I’ll be sure to let you know.

RESOURCES
I use images throughout this book; some of them are for code samples. The reason I use images for code is that I find that publishing tools are horrible at rendering these things properly. I’ve tried just about every tool out there, but I always come back to images because they’re easy to read.

To that end, I have a code repository where you can download all the
code in this book, which is right here.

I also have several online courses that I’ve put together over the years
that you might find interesting:

I add to these occasionally, and I might add a few videos to support this book, too.

I used to blog a lot, but I find that having a newsletter is more fun. It’s
the same idea, but you don’t need an RSS feed. I don’t send out
marketing stuff, and if you’re interested in reading about what I’m
learning, you can do so right here.

OK, off we go!
INTRODUCTION

Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself.

In 2005 I bought Mike Gunderloy’s classic Coder to Developer. The book is short and details the skills a junior programmer needs to focus on if they want to advance. I wasn’t sure I needed this book as I had been programming and leading teams for years at that point. How wrong I was.

If you’re a programmer, you write the stuff. If you’re a developer, you both write and ship programs. That’s a sizable distinction. It speaks to you as a person more than your skill as a programmer, and this is where I need to use some editorial caution.

For the majority of the book I will do my best to avoid philosophical musings and the trappings of the “Coding Career Coach” (which I certainly am not) while at the same time impressing upon you the importance of certain practices and a clarity of mind. I’m not here to tell you “you can do it!” - because honestly, maybe you can’t. At the same time there might be a solid leader inside you that just needs a kick in the pants.

I assume you’re reading this book because you want more out of your
career. More challenges, more recognition, more meaning and, likely,
more money. Probably all of the above. If this is you, please embrace
these motives. These desires are human and the people who accept
that fact are usually the ones who end up in leadership positions,
which, I assume, is what you want because that’s what this book is all
about: helping you navigate into a leadership position.

DELIVERY IS A DRUG
There is nothing, nothing better than delivering a result on time and
exceeding the client’s expectations. The road may have been rough
getting there, but a successful launch will make everyone forget all of
that and you’ll be a hero.

In 2005 I was contracted by PayPal to build a “starter kit” for ASP.NET 2.0, which was yet to be launched. These starter kits, as they were called, were literally that: basic applications that could be built out into custom solutions. There was a Club Starter Kit, Membership Starter Kit, Blog Starter Kit and the one I was working on: the Commerce Starter Kit.

I can’t believe I found this screenshot! That’s IE 7 running on Windows XP, by the way.

I worked with people from PayPal, including my friend Dave Nielsen, and also people from Microsoft - most notably Brian Goldfarb. I had 4 months to build it and it had to look good and work well.

I led development of a small team - me and 2 other contractors - and did my best to manage my time. We’ll touch on this more later in the book: fending off meetings is a solid skill to have! We had work to do on a very tight schedule and I had to manage two clients from two very high-powered companies. Let’s just say the lack of meetings caused some tension.

Brian, in particular, was feeling the heat. Steve Ballmer, then-CEO of Microsoft, was going to demo this starter kit on stage at a huge developer conference when they announced the arrival of ASP.NET 2.0. He didn’t have time for mistakes or fixes - it had to work perfectly.

Well, as we know, no software works perfectly the first time, does it?
As delivery neared, Brian, Dave and I would talk daily. We would go
over the demo that Brian was to give with Ballmer, and make sure it
all worked.

We found bugs, Dave and Brian freaked out, I fixed the bugs, rinse,
and repeat. Some of these meetings got tense, but I held my tongue
(for the most part), wrote things down and kept saying “trust me, I
got this”.

I was no stranger to pressure. I had been working with Fortune 50 companies (a big deal here in the US) for the last 8 years. Big budgets, big egos, and big pressure. I learned quickly (and often the hard way) how to make room for yourself and your team so you can deliver something of value. And like the title of this section says: it’s kind of a drug.

We delivered on time. The demo went off perfectly, and I received a joyous call from Brian later that day. I’ll never forget what he said: “You’re with me now! I’m going to keep you busy for years, I hope you’re ready!” He wasn’t joking either. Four months later I received a call from Scott Guthrie himself, who, at the time, was managing the ASP.NET group.

He had a job I might be interested in…

THE POWER OF DOING


Delivery is a drug. It feels good to ship something when you’re on the team, sure, but when you’re the one responsible for guiding, helping, defending and inspiring… that feeling of sending your team’s creation into the world just can’t be beat.

People will do anything to keep that high, including losing themselves when they’re drunk with power. Ever wonder why there are so many assholes in management?

There is a downside to all of this, however, which is that you have to defend yourself and your achievements if you ever expect to do more. Some might see this as “you have to fight to maintain your power”, which is true from a negative perspective, or that “delivering results puts a target on your back”, which is also true.

Execution and gravity have a lot in common: they draw people and
interest to you and you can get crushed if you’re not careful.

Every bit of software ever written started off as a spark deep in someone’s imagination. That spark grew into an idea that overtook them and, at some point, you got involved. Perhaps the idea was yours and you’re trying to make it real. Maybe it was a good friend’s and you’re now a cofounder or, most likely, you’ve been hired to lay more digital bricks as this idea crosses from the fantastic to the real.

Reality, alas, is thick with nonsense. Once you start to truly execute
and actually deliver something you’ll be told “great work!” to your face
and shortly thereafter the deception starts from your envious
colleagues (meaning all of them) in the form of back-stabbing,
sabotage and outright aggression. If it sounds like I’m being hyperbolic
or otherwise overstating the power of execution, consider an example.

You and 4 of your friends are planning a night out over the weekend.
You’re not sure what you want to do but you’ve debated going to a
movie, perhaps bowling, Thai food or possibly a dinner party at
someone’s house. It’s summer, so something outdoors was also
discussed.

One of your friends, Kim, decided to call each of you independently and ask a few more questions about what you enjoy doing on warm summer nights. Everyone was more or less agreeable, aside from Chino who wondered aloud if this should be a group decision. Kim agreed with him but thought it best if they suggested a few things, and asked Chino what movie he wanted to see most. Chino got excited and gave her the name of a movie he was looking forward to.

On Friday, Kim sends out a group text that says:

There’s a public night market on Saturday and I bought us tickets! They have a bunch of food stalls there from local restaurants (including Thai food) and here are their menus. There are some bands playing and a biergarten! The bowling alley has league night so everything is closed – so I bought us tickets for the late show at the theater, as none of us has seen New Movie, which has 89% on Rotten Tomatoes and some great reviews! Chino suggested New Movie – good call!

My car seats 8 so I’ll be by to get everyone at 6:30 on Saturday… see you then!

How would you feel if one of your friends or coworkers did something
like this? Some people might be happy – all of that sounds like good
fun! It’s also nice to not have to think through all the particulars while
trying to punch through group inertia.

And then you might have others who feel put out. Who is Kim to take
control of this situation? Your slightly sad friend might feel minimized
or worse, silenced, which is never a good feeling. A group effort is
exactly that… now it just feels like Kim is being a dictator.

So: how would this make you feel?

I know more than a few Kims: the doers. People who realize the best
way forward is through. These people are treasured and despised at
the same time, and tend to either wreck a situation or knock the ball
cleanly out of the park. Perhaps you know some of these people too?
Equally annoying and lovable.

There’s more to the story, however, because Kim did some savvy
things. Little tactics she’s learned throughout her life that help avoid
the unpleasant side effects of her actions:

She narrowed the choices. It’s league night at the bowling alley, so that’s out… movie it is! The little deception here is the “lie of omission”, but is this really a lie? She’s suggesting a solution… but yes, she’s also not offering a full list.

She called everyone first. Consensus (or, more importantly, the appearance of consensus) is everything to people like Kim. Doing this allowed her to figure out how many people wanted to go to the movies and maybe what food they wanted. Again: is this deceptive? A gray area, but yes, this is manipulation with intent.

She removed a roadblock. In her call with Chino, she sensed pushback and used her skills at persuasion. She asked him what movie he was looking forward to and made him feel involved. Kim’s not alone in this decision now, Chino is with her. Ah, deception can be so subtle, can’t it?

In short: she molded consensus. You can’t lead a team without using your
elbows from time to time.

As a lead, you need to feel OK manipulating perception and persuading people. You need to actively remove roadblocks and defend your people as you and your team execute your project vision, ruthlessly. You will also need to narrow the focus, set expectations, and remove any inertia that your team is feeling, which (I hate to say) means addressing members who are causing issues. More on that in a later chapter.

Freight train. Bull in a china shop. Jumping without a parachute. Chaotic neutral force of nature. These are all terms that are likely to be applied to you over your career of shipping software. You will be lauded and maligned and, yes, people will be out for your job if they sense you’re a doer.

Bring it on.

THE PATH OF MOST RESISTANCE


I’ll have more to say on this in a later chapter, but I find the idea of
“no pain, no gain” to be entirely true in the world of organized
software development. If it was easy, we’d all be rich!

Don’t get me wrong: if you gain enough experience, coding an application becomes easier. Shipping those applications, however, tends to become harder. Bigger applications come with more experience, and bigger applications mean bigger teams with a bigger budget. People + Money = Drama and Politics, always.

As a programmer, I understand (and often prefer) working with code and computers because it’s a world with rules. As a lead, you get away from that and it can be disorienting and confusing, which leads to frustration and burnout. Go talk to your boss about their job and how much they enjoy meetings…

I’m not going to shy away from any of the “human nonsense” in this
book; in fact, I’m going to go right at it. If you feel resistance, that’s a
good thing. It means you’ve found a weak part of your game which
you need to strengthen if you’re going to move up in your career.

Even if you don’t intend to manage people, you will still need to
master the art of getting your way. That shouldn’t sound slimy to you!
After 20 years in this industry, you’re going to have a lot more
experience that others will try to ignore. Your experience is extremely
valuable; you owe it to your bosses and clients to push for yourself.

That’s what this book is about: you sharing your knowledge with the
world, building from the inside and using a solid set of tools on the
outside.

In Part 1 of this book, we’re going to focus on preparation:

You will learn how to log your daily efforts through journaling.
Even if you keep a journal now, there might be some things
you’re missing.

Choosing the right tools to manage your project and report progress and, more importantly, your successes. You need to get used to that last part.
Making sure you’re building the right thing. It’s wonderful how often this doesn’t happen.
The basics of Agile development and its variations. It’s likely you’ll be working in some form of Agile, so let’s understand what it is, and more importantly, what it isn’t.

In Part 2 we’ll dig into the development process. I’ll share with you
the strategies you’ll need to be certain your team delivers value when
all is said and done.

You’ll learn different ways to use GitHub to help you track what’s going on with your codebase.
We’ll be introduced to Docker, a very necessary tool in this day and age.
We’ll discuss AI and how it can help, and destroy, your project.
We’ll dig into architectural considerations as well as testing and how to do it.

In Part 3 we’ll figure out where your application is going to live and
for how long. SHIP IT!

We’ll create The Build, a process by which your code is assembled and tested in an automated way.
You’ll want to be certain you have monitoring ready to go because Downtime Is Death!
We’ll discuss cloud providers and whether they’re worth it. Containers or VMs? Not an easy choice.
Docs. You have to have them, and now’s the time to make sure they’re up to speed.

In Part 4 we’ll dive into fixes, improvements, and scaling - aka “maintenance”. Most developers think initial development is the largest part of the application process, but they would be wrong.

We’ll discuss the Art of Scaling. How to measure, what to tweak, and how to make sure it worked.
What happens when things explode? That’s in your Disaster Recovery Plan, isn’t it?
We’ll talk about change and how to go about it: code, databases, and more.
Fending off power grabs and rewrites. Your boss might leave the company at some point! That new CTO might not like what you’ve made and think it’s time for a rewrite…

Finally, the last part of the book is about your success and what to do
with it. Do you want to manage other people, or do you want to be an
“Individual Contributor”? You should know this right now, before the
journey begins.

QUICK THOUGHTS ON AI
For some reason, I was hell-bent on not adding anything about AI to this book because it’s an oversaturated topic these days. That said, I do feel that it’s worth noting the following:

I wrote this book 100% with my own fingers, on my own keyboard, with my own brain. I Googled numerous things for research, and I did the same with ChatGPT to see what other research I could find. But nothing was written by any AI tool, anywhere.
AI tools can be amazingly helpful, and you should use them.

Your opinions are groovy, but your job is to ship software. There is no
holy war happening here, at least not yet, and not using tools to your
project’s advantage is a function of your ego, not your client’s needs.

Give yourself every advantage you can to deliver the thing you’ve been
asked to create.

GitHub Copilot (or any other AI coding tool) is phenomenal at helping people figure things out on the fly. It helped me throw together a project in C# in a week’s time - and C# is a language that I hadn’t used in over 9 years!

So why the guarantee, then? Why did I start this section off with the
assertion that I didn’t use AI to write any of the content herein?

There’s a fine line between writing code and writing prose. I suppose
it’s like buying a hamburger and finding out it was grown in a lab: it’s
a synthetic experience.

But couldn’t you say the same about code? Writing code is a creative
endeavor, isn’t it? I think so. So why, then, is it important to me that
you know I wrote this entire thing by hand?

It’s simply this: because I care. Code can be expressive, but not as much
as prose on a page, fitted together line by line. I try to put my voice in
everything I write, not the synthetic voice of an LLM.

That, to me, is the difference. Code should be efficient, expressive, and concise. Code is read by a machine to execute a logical operation. A book is different in that it offers ideas, experiences, and emotion. At least if it’s done with care. I’m not going to suggest that my writing skills are that good, but I’m trying, and I think that matters.

With that, let us begin.


PART ONE
PREPARATION
ONE
YOUR CAREER JOURNAL
KEEPING RECEIPTS, KEEPING YOUR JOB

If you have a journaling system that captures your efforts at work, and you’re happy with it, feel free to skip this section. If you don’t have one, that needs to change.

There are so many journaling systems out there and I’m sure you’ve heard
of some of them. The king of the hill is Getting Things Done (or
GTD), but there’s also Bullet Journaling, Time Blocking, Pomodoro,
etc.

I have tried almost all of these and finally settled on one that I like
based on its flexibility. You might find other systems that resonate
with you, which is grand! Just make sure you pick one and stick
with it.

Why? Because a journal can keep you sane. It will be your best friend,
confidant, and long-term memory. It can be a source of joy and calm,
and a place you retreat to before and after work.

Your journal and notes are your life, and they’re immensely valuable.

JOURNAL VS. TIME MANAGEMENT


I think of these two things together and I blame the Bullet Journal for
that. To me, thinking about time and tasks goes hand in hand with the
question of “what did I do with my day?” In the world of productivity,
however, they’re not typically discussed together.

What you do is up to you, of course, as long as you do it. To me, the most important thing is keeping a journal, thus the title of the chapter. As long as you’re doing that, however, you might as well learn to track your time and what you’re doing personally. Team tasks are a separate affair and need to be shared with the entire team. This chapter is focused on you and only you.

TRACKING YOUR WORK


Saving emails and setting reminders for tasks and appointments will
work for you when you’re just getting started, but as you move up in
the ranks, you’ll need a more formal system. This will help you be
productive, yes, but you’ll also need to make sure your memory is
correct.

In the next chapter, we’re going to discuss “Soft Skills”, which go beyond the idea of professionalism and into navigating how to work with other people. Other ambitious people who have their agendas which might not align with yours. To that end, it is absolutely critical to document your day, which includes:

Any calls or meetings, with a summary agenda.
Conversations that you think are important, or could be important.
Hunches you may have.
A daily summary, in bullet form, of what you did and the actions you took.

I can’t tell you how many times this habit has saved my skin both
professionally and personally. Gaslighting is a very real thing, and
being able to document what was said and when is crucial.

Saying “keep a work journal” is one thing, but having a useful system
is another. Journaling is different from time and task management, but
if you’re careful, they can be one and the same. I’ll share what I do at
the end of this chapter.

A Quick Note on Screenshotting

Be prudent with this practice. I think it’s perfectly fine to screenshot conversations in Slack or Teams (or whatever chat app you use at work), as long as it’s for you and your records. Same with email.

If you pass those around, however, you will quickly lose trust, even
from your friends. If you become known as a screenshotter, people
will assume that’s what you’re doing in their conversation with you as
well, and you do not want that.

It’s so tempting to screenshot hard conversations and grumpily share them with confidants at work, a “can you believe he/she/they said this!” Avoid the temptation!

Some administrators will install a screenshot monitoring plugin in Slack that will notify them, and also the channel, when someone takes a screenshot, which can be highly embarrassing.

TRACKING YOUR LIFE BY TAKING NOTES


What books have you read on personal finance that you might want to
share with a friend’s kid who is graduating college and wants to start
actively investing in their future? How do you hook up Passport to a
Node web application again? Oh, and what was that great business
idea you had a month ago that you meant to ask your friend about?

When you start writing things down, you quickly discover that there’s a natural crossover between time and task management, planning, and journaling. Some people prefer to keep these separate and others try to integrate them. I’ve tried variations of each, and had some success. Let’s discuss a few.

Building a Second Brain

Tiago Forte’s book Building a Second Brain is fascinating, and I read it in the span of three days. It’s all about note-taking and developing a system so that you can unload information from your overtaxed cortex into a system that you can tag and categorize, a thing he calls the “personal database”.

I love that idea. I really love unloading my brain! Code snippets, thoughts on books I’ve read, ideas from others, inspiration, things I don’t like, meeting notes - on and on. You can use any note-taking app as well (I used Apple Notes and Obsidian), and he even offers a framework for notes called PARA.
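For reference, PARA stands for Projects, Areas, Resources, and Archive. A vault organized along those lines might look roughly like this (the folder contents here are my own illustrative examples, not Forte’s):

```
Projects/    - things with an end state ("Launch the new docs site")
Areas/       - ongoing responsibilities ("Health", "Finances")
Resources/   - reference material ("Node snippets", "Book notes")
Archive/     - inactive items moved out of the other three folders
```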

The book is worth a read as it underscores, perfectly, the need to unburden your brain and reduce distracting thoughts.

The Bullet Journal

The goal of the Bullet Journal is to help you be “mindful” of your time, using a Zen-like approach to, basically, meditate on your day.

The system is incredibly simple: you have a series of “logs” that work
together, and the stuff that’s truly important becomes apparent, while
the stuff that isn’t is thrown away.

Tasks are bulleted items, events have circles, notes are dashes. When
you’ve executed a task, you cross it out. If you didn’t do it, you move
it to a different log (monthly or future) to be done another day.

The big breakthrough for me was understanding that your daily log,
which you see in the image above, is a plan for the day and it’s also a
bucket for catching anything that comes to mind during the day. At
the end of the day, you decide what to do with the things that
occurred to you, or that you need to plan.

All of these logs end up being just that: a visual track record of what
you’ve done, and what you’ve discarded over time. Using a pen and
paper also slows you down, which is on purpose, and encourages you
to doodle and have some creative fun.

You can also relate tasks and events into “collections”, which can take
whatever form you like, and that’s a major draw for me: the Bullet
Journal is yours to mold to your life, however you see fit. There are a
few simple rules, but the rest is left to you.

I’ll have more to say on this in a few sections.

TRACKING YOUR LIFE BY TRACKING YOUR TIME


People have made millions on productivity systems over the years. It’s
kind of nuts. The reason why is straightforward: there’s too much
information in the world and we’re trying to consume it all. Honestly, the best
way I’ve found to feel more productive is to focus on reading less and
saying “no” more often.

You will still need a system, however, and the best advice you’ll see
online and in books is to keep it as simple as you can. Friction in your
process will stop you from using it! Speaking of, let’s talk about
processes now.

GTD and Trello

One of the more famous methods of time management is Getting Things Done (aka GTD) by David Allen. You may have heard of this as it’s extremely popular and still in use.

The idea is that you have an inbox, where you dump everything you
need to do. You then follow a simple process at some point during
your day to categorize things as needed.

The main concepts in GTD are:

1. Capture: when inspiration hits, a task presents itself, something needs to go on your calendar, or anything else must get done, it gets captured and stored somewhere. This can be a notes app, task manager, or a shoebox full of paper.
2. Clarify: go through the inbox and decide what to do with each thing. If you can do it quickly, just do it and get it out of the way. If you must do something to complete it (it’s “actionable”), decide when (today, upcoming, waiting for, someday).

3. Organize: some things are references, other things are tasks for a project. Add some organization in the form of folders or tags.
4. Reflect: set a schedule to do the organizing and clarification of your inbox. This should happen regularly.
5. Engage: use your GTD system to figure out what comes next and then do it.
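To make the flow concrete, here’s a sketch of those steps as a set of lists in whatever notes or board app you prefer; the list names follow GTD, but the items are made up:

```markdown
## Inbox: capture everything here first
- Email the accountant
- Idea: a blog post on journaling

## Today: clarified and actionable now
- Review the deploy checklist

## Upcoming: organized for later this week
- Renew passport

## Someday: no date yet
- Learn woodworking

## Done: scan these when you reflect
- Book dentist appointment
```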

It’s a simple system and there are quite a few apps you can use that will help you. I used Trello for this for a long time and it works really well, and best of all, it’s free for individual use.

I’ve also used Things 3 on my Mac, as well as Apple Reminders, which works surprisingly well.

GTD is great, but when it comes to looking over what you’ve done and
logging your efforts, it can be lacking because the focus isn’t on that,
it’s on, well, getting things done. Once they’re done, they’re gone unless
you impose your own system.

Time Blocking

This is a very intuitive way of planning your day, but like GTD it lacks a focus on journaling and logging. I think that’s OK, though: use a journal as a journal and no more.

Anyway, time blocking centers around “blocking” parts of your day to do certain groups of things. Your day might start with 2 hours for exercise and personal time, 4 hours for work focus, 2 hours for lunch and a walk, and the remaining time for family.

The rules for this are simple: your focus during each block must be total. You can adjust these blocks as needed, of course, but your goal should be something like “when I’m exercising, I should be at the gym, on a walk, or on my bike”. You can then assign yourself those tasks within that block for the day.
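Written out as a plain list, a blocked day might look like this; the times and blocks are just an example, not a prescription:

```markdown
06:00-08:00  Exercise and personal time
08:00-12:00  Focused work (no meetings, no chat)
12:00-14:00  Lunch and a walk
14:00-17:00  Meetings, email, and admin
17:00-       Family time
```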

Breaks are also important, as is the need to focus on downtime to “sharpen the saw”. Most people find that blocking the week on a Sunday is useful, with individual tasks being added daily in the morning during a planning block.

Google Calendar is excellent at this, if you use Google Tasks. You can
drag the tasks right into the calendar and see what you’re doing when
easily. You can also do this with other calendar systems, but I find
Google Calendar works great.

If you’re an Apple person (as I am), the built-in Reminders app along
with a capable calendaring app, like Fantastical, works great:

When you schedule a task for a given day in Reminders, it shows up at the top of Fantastical on that day. Since there’s no time associated with it, it’s up at the top next to the all day events.

If you want to do that task during a given time block, however, you
can drag it down into the day and drop it at a given time, which will
add a time alert in Reminders. When you’re done with the task, you
can tick it off right in the calendar, or you can do it in your Reminders
app.

Outlook does something similar:



Here, I’m using Tasks together with my day view in Outlook to block
out my day. The tasks don’t overlap with the underlying calendar
event, however, which is a bummer but it’s also understandable.

Turning Your Calendar Into a Journal

One of the neat things about calendar apps that track tasks is that you
can use them as a pseudo journal, tracking what you did when:

I have the setting “Show Completed Tasks” turned on so they show up right on my calendar as you see. I did not have those tasks set for those times, however, because nothing breaks your concentration faster than a reminder pinging you on your computer, phone, and iPad all at the same time.

What I did do was to drag those tasks from the top of the day down to
the time block that I actually did them in. They won’t remind me,
either, since they’ve been completed. As you can see from the red line,
the exact timing doesn’t line up but that doesn’t matter to me; I like
the idea that I know what I did in that given block.

Tasks and Events in Your Ideal Week

I learned this trick from Ali Abdaal, and I like it a lot. Time blocking is
extremely useful, but how you go about blocking things can be
confusing. What Ali does is to create a new calendar called “Ideal
Week”, and then block out, with repeating tasks, what that might look like. You can see mine above, in green and purple (blue is work). In Outlook, the Ideal Week is purple.

You will adjust this at the start of each week, which is when you’re
supposed to do your planning. That’s OK, at least you start from
somewhere rather than an empty calendar. You can move things
around and adjust as you need, which then gives you a framework for
your tasks and when to do them.

The Important Times of the Week

When you time block your week, there are three very important
events that should be considered above all others: Weekly setup, morning
routine, nightly reflection. I probably don’t need to explain this too much,
but time blocking will only work if the blocks make sense, which is
where the Sunday setup comes in (or whatever day you choose). The
morning routine is the same process (fitting your daily events and
tasks to reality), and the nightly reflection is where you get to
consider your day and write a little journal entry.

The nightly reflection is the only event in my Ideal Week calendar with
an alert set. This is where I log what happened in life and in work,
screenshotting as I please because receipts are critical. This is the time
that you will save your job in the future, potentially, so take care to
make the time to actually do it!

I have used the Day One journaling app for years and I love it:

It’s Mac-only, but you can use your favorite note-taking system instead
of a journal like this one. I like the security and encryption here, which
is why I use it, and it also supports Markdown.

If you don’t have a journaling system and want to use Markdown, and
also want to keep everything as stupidly simple as you can, I got you!

Our goal is least friction, so if you don’t have a setup, let’s ease you into one…

JUST SHOW ME WHAT TO DO


I love the Bullet Journal approach, but having it on paper means that I
can’t search things easily or quickly refer to a note during a meeting or
call, unless the journal is right in front of me. Each journal of mine
lasts 6-8 months, and I have 5 or 6 of them that go back a few years.

A digital version of Bullet Journal would solve this, but that goes
against one of the main selling points, which is to get your head out of
the digital world and slow down.

To me, the tradeoff is worth it. I have been using what I’m about to
show you successfully for a few months now, and I love it. I still have
the same peace of mind, but I do miss taking time away from a screen
to collect my thoughts. Ah well, life is compromise.

Hello Obsidian

I will be mentioning Obsidian throughout this book, but just know you can replace it with your favorite note-taking app. I tried a similar approach with Notion, but I found it to be slow and fiddly, and it made me want to tinker far more than I wanted to.

Obsidian, on the other hand, is not web-based. It’s a collection of markdown documents that live locally on disk. The app itself is a glorified markdown editor and considered one of the very best by quite a few people.

It comes in every flavor, too: Mac, Windows, Linux, iOS, and Android.
If you store your files on a shared drive, you can sync mobile and
desktop easily. I use iCloud for this, but Dropbox works great too, or
whatever sync system you use. If you upgrade to the paid version, you
can use their syncing service.

Obsidian is completely free for personal use, but if you want to use it “commercially”, it’s $50/year. Given how much I use this thing, I don’t mind giving them money at all. There is also a one-time “Catalyst” license that gets you early access and helps support development.

So, go get yourself Obsidian if you want to play along, and let’s do
this.

Your First Vault

When you start Obsidian for the first time, you’ll be asked which
folder to use to store your “Vault”. You can have many vaults, which is
great if you intend to separate work from life. For me, a single vault is
fine.

Once you’ve done that, you should see something that looks like
this:

Have a look around in here, and click on things. Obsidian is super simple to use, with an outstanding editor that is open in the middle pane. On the right is the “Graph View”, which shows how your notes are linked together. I’ll touch on that later, but this is an incredible way to “see” your notes and how they relate.

You can do all the normal things an editor can do, including setting up
a folder hierarchy, tagging documents, and more. I’m going to focus on
getting our journal set up, so play around on your own and watch the
many YouTube videos out there, but don’t get too sucked into the
“systems” just yet… there are sooooo many and they’re very
academic. We need to remain focused, for now.

Organization Philosophy

I don’t really have a system because systems require mental space and
I find that any system is a form of friction, which will keep me from
keeping the notes I need to keep.

I do have a set of folders, which I’ll go into, but aside from the
calendar-based journal stuff, that’s the extent of any “system” I use.

The goal, for me, is to capture the flexibility of the Bullet Journal and
make my system bend to the way I think. It should reflect my brain, not
someone else’s.

The Folders

I don’t like to organize things by folder, but that’s just me. I use tags
instead, and link things together for a more “organic” structure, if you
will.

Obsidian supports linking notes, and pushes the idea of traversing those notes so you can follow your thinking on a given subject. Given that, I like to generate everything in my daily notes, unless I need a special collection.

For instance, I might want to remember something I coded during the day, and I can drop it right in my daily note, making sure to tag it:
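As a sketch, that daily-note entry might look like this in markdown; the snippet and tags here are hypothetical:

```markdown
## Things I figured out today

Formatting dates without a library: #code #javascript

    new Intl.DateTimeFormat("en-US", { dateStyle: "long" }).format(new Date());
```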

I’ll talk more about that in a minute - but my point is that I don’t like
to get hung up in a contrived structure; I’d rather build it out
organically.

To that end, I only have a few folders in my root:

Year Folders, 2024, for example. Inside of those I have month folders which contain my daily notes.
Templates, this is a meta folder that Obsidian uses, and we’ll discuss this more in a second.
Assets, another Obsidian meta folder where the images go.

It looks something like this:



I keep every daily note in the root until the end of the month, when I
move them all into their month folders for a given year. I’m getting
ahead of myself here - let’s back up a bit.

Collections and Logs

Bullet Journaling is all about putting your notes into a log somewhere.
The most common log is the Daily Log and, as you can see above,
mine has the date as the title along with the day of the week. The
Daily Log is something you’ll often use, as well as the Monthly and
Future Log.

These are useful concepts, but to me the power of Bullet Journaling comes from being able to create custom collections on the fly. These are groups of tasks, events, and ideas that should stand apart from your daily notes. There are no set criteria for them; you just sort of “feel it” according to your own process.

For instance, you might get inspired watching a YouTube video and
want to remember it, so you log it in your daily log and add a link. It’s a video about wandering around Bangkok, Thailand, as a digital nomad. The more you think about it, the more you like the idea…

This is where you might create a custom collection, and link it back to
your daily log:

Creating a custom collection is as easy as wrapping some words with [[ ]], and when you click on the new link, a new note will be created for you, where you can freeform your thoughts:
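For example, a line like this in a daily log (the note name is made up):

```markdown
- [ ] Research being a [[Digital Nomad in Bangkok]]
```

Clicking the bracketed text creates and opens a new note with that title, ready for freeform thoughts.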

This is a real note in my journal, by the way. I’m using the emoji here
because it’s a list, but it might turn into a pro/con list later on, which
is just fine because I can change things around as I need, when I need:

That’s the flexibility factor here: you’re not locked into any system
but your own. I’m a visual person, so seeing icons like this really
helps.

Here are a few icons I like to use:

At a glance, I can see my daily notes (no icon), the monthly log (has
the month number with the name), and each collection after that.

These collections are simply notes that I’ve taken during May, 2024.
They include books I’ve read, work I’ve done, ideas I’ve had, and
general inspirations. I don’t do long-form journaling here, as I use a different app for that. This journal captures my work and life in stark detail, which is the key.

Just the Facts, Please

The goal of Bullet Journaling is to clearly see what you spend your
time on so you can make more meaningful decisions in the future. If
you read the book (I listened to it on Audible and it’s outstanding),
one of the things they discuss is to be as concise as possible, without
embellishing.

Each Bullet Journal becomes another volume in the story of your life. Does it represent the life you want to live? If not, then leverage the lessons you've learned to change the narrative in the next volume.

That’s Ryder Carroll, the creator of the Bullet Journal method, and his
assertion is that your journal should be just that: the facts of your day.
Emotions tend to skew our perception - the 30th gray day in a row
may make you feel very gloomy about your life, so your journal might
naturally reflect that. Conversely, meeting someone new is exciting
and makes everything feel wonderful, and your journal could reflect
that as well.

Now think about reading back over those pages a year in the future.
You’re completely removed from the emotional element, so you might
be confused as to the “truth” of your day. Or be confused as to why
you neglected to write down something more important, such as the
phone call with a parent or friend whom you might not see anymore.

This is a daily log of mine from earlier this month. There is another
half, which I’ll show in a second:

Each note in Obsidian is a simple markdown document that works like a Hugo or Jekyll post in that it can also have metadata at the top in the form of YAML front matter. This is extremely useful if you do any tracking, which I’ll discuss in a bit.
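A minimal sketch of what that looks like; the property names here are invented, but Obsidian reads anything in the YAML block at the top of a note as its properties:

```markdown
---
tags: [daily, gym]
weight: 180
sleep-hours: 7
---

# 2024-05-14, Tuesday
```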

My tasks are broken out into checkboxes and, as you can see, my
theme supports all kinds. Completed tasks are in green, pushed tasks
(meaning “do them tomorrow”) are blue arrows, red boxes are things
I decided to let go.
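The markdown behind those states looks something like this; note that the exact set of alternate checkboxes (and their colors) depends on your theme:

```markdown
- [ ] An open task
- [x] A completed task
- [>] A task pushed to tomorrow
- [<] A task scheduled into the monthly or future log
- [-] A task struck out, meaning I let it go
```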

Down below in my daily log, I have my Journey and a reflection on The Day:

I’ve been focusing on physical fitness over the last month and I want
to keep myself responsible, so I make sure I log what I did. I also tag
the things I did so I can click on the tag and see the entries later on,
all of which are titled with the day, so I’ll know when I did them.

The final bit is important as it ties up your day and is the “reflection”
you look at when you examine your life. It’s easy to embellish here,
but keeping to bullets really helps. Here, it’s easy to see that my
battery wearing down on a 30 mile (ca. 48 km) bike ride after the gym
was quite taxing, so I didn’t do many of the things I was supposed to.

The goal with these daily notes is to give your future self a snapshot of
who you were on this day in your past. This could be a year from now,
or it could be next month!

The Daily Log

We discussed this above, but let’s dig into this just a little more.

There’s a lot of repetition in the Bullet Journal, and that’s the point.
Your daily log is a place to plan your day, but it’s also a place to catch
thoughts, ideas, inspirations, and events that you need to remember.
This is key: when something hits you, drop it in your daily log and get
to it when you can. I have Obsidian on my phone, so it’s simple to add an idea when I’m out and about. I can also just add a reminder to add something to my journal that night.

For instance: 3 weeks ago, I was watching a Parts Unknown episode on Singapore (you can stream this on Max and other places). I have a good friend who lives there and is constantly trying to get me to visit, and I keep thinking “someday”. As it turns out, someday, for me, could be any day as my kids are in college and I’m divorced; I just need to plan and go.

Boom. Write it down! I paused the show and grabbed my iPad which
was next to me and on my daily log added a task:

As I was writing the task out, I realized I would need to write down
what I found, so I just surrounded everything with [[ ]], which will
turn it into a “note placeholder”. The note doesn’t exist yet, but if I
click it, it will! This is a custom collection, which we discussed above,
but it’s such a powerful concept that I wanted to discuss it again.

This is how collections are born, and they’re one of the reasons I love
the Bullet Journal. If you’re using a physical book, a collection can
occupy the next empty page because flexibility is important. Obsidian
gives you the same idea.

Normally, I don’t have time to fill out a collection when inspiration strikes, plus I wanted to get back to my show, so I just left it the way it was until later that night, when I went over the day. I do this before I go to bed, but occasionally it slips to the next morning.

Reflecting on your day is crucial. This is where you write down things
that happened that you want to remember (just the facts, thanks), but
also where you triage your task list. Seeing an open task means one of
three things:

It gets pushed to tomorrow.
It gets pushed to the monthly log.
It gets tossed out.

Tossing things out is another cornerstone of the Bullet Journal. Inspiration might hit, but careful consideration later on is critical. Do I really think Singapore is a good idea, or was I just being carried away in the moment?

I do think it’s worth thinking about, but we’ll get to it at some point,
so I’ll dump it to the monthly log, which I signify with a little calendar
(the icon code for this is -[<] for “scheduled”):

You don’t want to mark things as scheduled and then forget about
them, however, so be certain you copy the thing over to your monthly
or future log:
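In markdown, the pair of entries might look like this; the task text follows my Singapore example, and the comments just label which note each line lives in:

```markdown
<!-- In the daily log: marked as scheduled -->
- [<] Plan a trip to Singapore

<!-- In the monthly log: a plain task, waiting to be dealt with -->
- [ ] Plan a trip to Singapore
```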

There’s no calendar icon here - just a task that needs to be dealt with,
and I’ll do that when I migrate my May tasks to July, which I’ll get into
in just a minute.

Logging Events

Occasionally I find myself wondering whether something should be in a collection or not. Maybe it’s a 1:1 with my boss, or a team meeting. If it’s something I care about, like whether I’ll get a promotion, then yeah, I would put it in a collection because the conversation is important to me, but there could also be followup emails and so on.

If it’s a quick catch up or review, I’ll just add a header to my day (third
level header), something like “Meeting with Sonia” and under that,
add some bullets with the interesting points.

Bullets, again, underscore brevity and the need to record facts, not
meaning. A stream of notes is difficult to read, but quick bullets that
are bolded in places are very useful.

If something comes up in this meeting that requires me to do something, I’ll pop it into the daily log directly so I can resolve it later that night. Sometimes things come up that I just know will become a collection, such as the show recommendation. Sonia and I have very similar tastes and she’s recommended a few bangers, so I might set up an empty link using [[Constellation]] which will create a collection page for me when I click it.

The Monthly Log

Your monthly log should be a concise summary of what you did over
the month. It’s also the place you put things that you don’t take care
of immediately during the day.

In addition, you can store all kinds of information that future you will
find useful. At the top of my monthly log, I have the tasks that I
wanted to get done this month, as well as the things “migrated” from
my daily logs, like my Singapore inspiration:
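Stripped of styling, the top of a monthly log might look something like this (the entries are illustrative):

```markdown
# 05 May

## Tasks for this month
- [ ] Finish the draft of chapter 12
- [ ] Swim twice a week

## Migrated from daily logs
- [ ] Plan a trip to Singapore
```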

Below that, I also list out the collections I want to remember. Note
that this isn’t every one of them, because some of them are just
random notes or ideas that are “fleeting”. I can find them if I search,
but they’re not an important part of my month:

I’m only halfway through May, so I expect these things to grow.



Below this, I have my daily summary “calendar”, which gives a “greatest hits” overview of the month. I’m not entirely sure that I’m going to keep doing this, but I am so used to having a grid overview of the month that I decided, “why not”.

Your monthly log should be a quick reference/snapshot of what you did during the month so that future you can find when something happened easily. Hopefully, you’ll have a good tagging discipline, which makes life super simple too.

On the right side of the grid are my trackers: G is for gym, S is for
swimming and W is for 2 mile (3.22 km) walk. There are plugins you
can get for Obsidian that will actually create heat maps and charts for
you automatically based on note properties (the YAML front matter)
and tags. I think these are interesting, but I prefer my manual
approach because it keeps things more in your control, which is what I
like about Bullet Journal in general.

The Future Log

This is where you put your “year at a glance”, if you will. It works the same as the monthly log, with a few small differences:

This is the first thing you create for every new journal. For my
Obsidian system, you get one per year.
Things that won’t get done in a given month get popped into
the future log. If going to Singapore was a long way off, I
might pop that task into the future log instead of the monthly
log, knowing it’s a “someday” kind of thing.
This is also where you put your intentions for the year, your
high-level goals, and monthly “snapshots”.

I routinely look at the yearly plan, which is a dump of both what I want to do, and what’s happened. A planner that turns into a log over time. I have course ideas in there, book writing goals, physical fitness goals that I update as I get stronger/slimmer, and I also have a breakout summary for each month.

As you can see, I’m also linking to each of the collections I created in
May that, I think, are important. This is another bit of redundancy, but
I don’t care, it keeps things just the way I want them. When I look
back at 2024, I want to scan this list and quickly see what was done
when, and what I was thinking about.

Note: when you change the title of a note, or move it, Obsidian will
automatically change all links for you.

That’s what your future log is: a roadmap of your year that you build
out as the year goes by. If it helps, you can liken this to GTD:

Today is your daily log. These are things that you want to get
done now.
Your inbox is also your daily log. Whatever comes to mind, for
now or in the future, goes there.
Upcoming is your monthly log. When you do your nightly
reflection (or whenever you do it), move things from your
daily to your monthly or future log depending on where it
feels better to go. Be sure to mark it in your daily log so that
you know it’s been scheduled.
Someday is your future log.
Every morning, when you’re setting up your day, scan your
future and monthly log to see what you might want to
“migrate” into the day, then do it.

There’s that word again: migrate. I mentioned it once before; let’s go a bit deeper.

Migrating Tasks

Bullet Journals are alive with tasks, events, and notes, which is why I
like them so much. You can even get meta about things, creating a
task to remind you to create a collection for a project you’re
working on.

Whenever you move things from one collection to another, it’s called
“migrating”. A common thing to do is to migrate tasks from one
month to another, because there’s very little chance you’re going to
finish everything that lands in your monthly log. This is totally fine
and, in fact, is one of the main points of the Bullet Journal.

Whenever you migrate something, you get to think about it and whether it really is that important to you. If you’ve migrated it a few times, the answer is probably “no”. If you can’t make yourself do the task and it’s not absolutely required of you, consider letting it go.

I’ve done this with chapters for this book. I’ll get an idea for one and
drop it in my daily log, something like “outline a chapter for imposter
focused on programming languages”. At the time I had this idea, it
seemed like a good one.

Later that night, I decided it wasn’t going to happen any time soon, so
I pushed it to my monthly log. You might be wondering if I have a
dedicated collection for this book and all the things I needed to do for
it, and I don’t. I like to keep custom collections as targeted as I can.

Anyway, a few days went by and I would see this task sitting there on
the monthly log, and the more I looked at it, the less motivated I was
to actually do it. Programming languages are interesting, but
comparing and contrasting them is pointless. You could talk about
performance, syntax, and other tech topics, or you could discuss job
opportunities, pay scale and job security. To each their own, honestly.

So, I tossed it.

You don’t “delete” tasks from your log - you strike them out if you
decide not to do them. Here, I’m using a - [-] in markdown to show
that I struck this out. It’s important to know what you decided not to
do, because you’ll probably want to do it again later! Seeing that
you’ve already thought about it (in your collection backlinks) is a
gigantic timesaver.

The value in decluttering your mind is wonderful. It’s like deleting code you know you won’t need. Reading about it might not have the same impact as actually doing it, so if you try Bullet Journaling, you can see for yourself.

Collections, Projects, and Tags

When I create a custom collection, which I often do, I try to make them as small and targeted as possible. If they’re for a work thing, I typically consider them to be individual sprints.

For example, the collection I have in May for working on this book is a
simple checklist:
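Screenshots don’t survive the trip to text, but a plain-markdown sketch of that kind of collection might look something like this (the task names here are invented for illustration):

```markdown
#Imposter-Roadmap

- [ ] Finish the soft skills chapter
- [ ] Review tech edits
- [ ] Gather cover ideas
```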

This is a simple checklist, but it can grow to include notes, doodles,
cover ideas, research, or anything else that goes into the idea of
finishing this book.

Notice, also, that I have a tag called #Imposter-Roadmap, which is
what I use instead of a formal project structure. I could do that, if I
wanted, using Obsidian’s organizational tools and some plugins, but I
really like keeping things as dead simple as I can. If I click on that tag, I
can see all the daily logs and collections I’ve created for the book over
time, which is all I really need.

Everyone has a different way of tagging things, but for me, I follow a
simple rule. Tags should add:
- Meaning. Is there some context that this note belongs to?
- People. I will always tag important people (my boss, friends,
  kids, etc.) using their name so I can quickly look over any
  meeting notes, calls, or other events.
- Tracking. When did I go to the gym or go on a bike ride?
  What books were bad, good, or great?

It’s all about summarizing for me, personally, and Obsidian is great at
helping you see and track your tags.

Using Backlinks To Tell a Story

Obsidian has so many powerful features, and part of me would rather
not share everything with you so you can have the joy of discovering
them yourself! But this one, right here, is glorious:

The right sidebar has several helper commands, and one of them is to
“show backlinks”. When you click it, you can see where your current
document is referenced throughout your vault.

Right here I can see the dates that I was trying to finish this book, and
also that I struck out the programming language idea. This is
outstanding, but it gets better, believe it or not.

You can install a plugin called “AutoMOC” (MOC == “map of
content”) which will add a set of links right into your document:

It won’t add the context, however, meaning you can’t see what was
done on Friday, May 10, but you can click through to find out quickly.

But wait, there’s more! You can click on “Open graph view” on your
left sidebar menu (called “The Ribbon”) and see all your notes,
connected:

This is an animated view, too, and you can set filters, sizes, and more.
I mean… just… look at how your days are connected to your
collections!

There’s even more goodness - but I’m leaving that for you to discover.
Obsidian is wonderful.

Archiving and Moving On

At the end of each month I take extra time and plan out the next one.
This is when you get to migrate your tasks to your future log, to the
next month, or just ditch them entirely.

Once I’m done “wrapping up” a given month, I drag all the notes
(daily logs, the monthly log, and any custom collections) and drop
them into their month folder inside the current year folder:

It doesn’t matter where things live in terms of folders. Books, code
snippets, fleeting ideas, inspirations - they’re all tagged and
searchable. To me, it’s more useful to have these notes together based
on the time I had them. That’s just me, of course, you do you!

Which is why I love the Bullet Journal so much: it’s so flexible. You can
use the simple mechanics (daily logs, future logs, custom collections)
and go to town with your own bad self.

My Theme and Plugins

We’re programmers, so we like abstraction and also a pretty editor. To
that end, I’ll share my templates and plugins, as well as my theme,
which is AnuPpuccin. You can find it in the themes directory, and I
have the supplemental plugins for styling also installed:

It’s not too difficult to set up the styling bits, give it a Google and
you’ll see how to do it. It’s not necessary at all, but if you like the
pretty rainbow colors, this is where and how you do it.

In terms of plugins, I have:

- Advanced Tables, a fun plugin that helps you work with
  markdown tables.
- AutoMOC, which helps with backlinks in documents.
- Book Search, a killer plugin that will search for graphics and
  metadata for a book using an API, and create a note for you.
- Calendar, so I can see a calendar view in my right sidebar.
- Convert to URL preview, which is handy for YouTube
  embeds, which I use a lot.
- Dataview, which everyone uses. Just get it, it’s wonderful.
- Tag Wrangler, which helps you manage your tags.

There are so many more, and everyone has their list of “must have”
plugins. Explore, add what you like, and freak out!
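To give you a taste of why Dataview is on everyone’s list: a minimal query, dropped into any note, that lists every note tagged for this book might look like the following (assuming the #Imposter-Roadmap tag from earlier; see the plugin’s docs for the full query language):

```dataview
LIST FROM #Imposter-Roadmap
```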

Templates

Templates work like code snippets and are simply markdown files
you’ve filled out a little bit to get you started. You type ⌘T and you
see a list of things you can inject in the current note. I have a few that
I like:

I encourage you to make your own of each of these, with a few
suggestions.

The Yearly template should have placeholders for Tasks, Goals,
Intentions (if you want), and headings for January through December:

The same goes for your monthly log, but I added a monthly grid to
mine that I fill out with days:

The daily log should be as simple as you can make it:

The only properties I have here are where I am when the note is
created (I fill it in manually).
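A sketch of such a minimal daily template, with a single location property and whatever headings you like (these names are my invention, not a prescription):

```markdown
---
location:
---

## Log

## Notes
```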

Once you have set up your templates, go to your settings and make
sure you have your Daily Note set up like this:

Of course, set this as you need, but the big thing is to make sure you
have the template set to /Templates/Daily.

The Rest Is Up To You



I’m not a productivity guru, but I thought it might be valuable to
share what works for me. The important thing here is that you do
what makes you happy, but be careful: it’s so easy to get lost in the
plumbing and mechanics that you forget your goal, which is to
document your life and create a log of your work, and your value.

Why? Because, at some point, you will need to quickly access things
you’ve done in the past. You might need some code snippets (great to
add to the end of the daily log. It’s just markdown, and fenced blocks
are supported), pictures, screen grabs (keep those receipts!), or
musings. If you’re asked why you should get a raise, you should be
able to search your notes and come up with a compelling answer.

If you’re blamed for something you had nothing to do with, your notes
should vindicate you. Conversely, if you kicked ass and delivered
something wonderful, celebrate it in your journal and add a few
pictures of the celebration.

Adding pictures, code, and video embeds is the final reason I like
Obsidian over the analog Bullet Journal. Being able to drop code on a
daily log and find it later by search is wonderful - and you can see the
context for it from your log, even tracking through to the collection,
and following the logic path from there.

WHATEVER YOU CHOOSE, USE IT


You need to keep receipts as you move along in your career. This is
for both good and not-so-good reasons. You might be asked to review
a team member’s performance for a promotion, or for layoffs. You
must have solid notes for this. This is also why you log only facts, because
as much as you might love this person on your team, looking over
their work might tell another story. The opposite is also true!

Develop your journaling habit and make time for yourself during the
day. It’s a wonderful experience, and once it saves your ass, you’ll
never go back.
TWO
ESSENTIAL SOFT SKILLS
BECAUSE YOU WILL NEED TO WORK WITH
OTHER PEOPLE

I love reading books on personal growth. Emotional,
motivational, psychological, and financial—all of it. It’s fun to
see how other people have done things and succeeded because if
you read enough of these things, you start to see overlapping patterns.

A few books have stood out to me, and I’ll share them with you in a
minute, but let’s get to the soft skills first. These are the “human”
skills you’re going to need as you move through your tech career.

It should be noted that the software industry is a bit different from
most, especially when it comes to delivery. We deal in a world with
zero materials cost—it’s just bits! This amplifies the profit potential,
meaning that you can produce far more money with a ridiculously
small staff and extremely low overhead.

I know of quite a few businesses that make over $1M/yr that are one
person with a laptop. That’s the business we’re in.

I mention this because, aside from the drug trade, there isn’t an
industry quite like ours anywhere. And, like the drug trade, the money
flying around can make otherwise good people do very unpleasant
things.

Before we get to the particulars, let’s revisit one of the themes of this
book: every situation has two sides. You could read a few of the
things I’m about to write about and sigh, “OK, Boomer”. I wouldn’t
blame you, many of these subjects are difficult to read about, let alone
write about.

In reply, I would first tell you that I am most definitely not a Boomer
and that much of what you’re about to read comes from my
observations of bosses I’ve had from across the spectrum of
identification. I have worked with some very, very high-powered
people, and they know these subjects very well.

They understand, either directly or intuitively, The 48 Laws of Power. I
love and hate this book at the same time. It goes by another title, too:
The Psychopath’s Playbook. We will discuss this work throughout this
book, but I mention it now because it illustrates, perfectly, something
you’ll need to get used to: seeing both sides of a situation.

Consider one of the “Laws” from this book: appeal to people’s self-
interest. Yeah, it’s like that throughout the book. On one hand, you
could read this statement and think it’s all about manipulating people
through lies and deceit, acting as if you care about them.

Or you could read this and see a call to empathy, putting yourself in
someone else’s shoes, and helping them to succeed, which is the
hallmark of a great boss, by the way. As a lead, if your team succeeds,
so do you.

What I’m trying to say here is this: have an open mind, understand there’s a
yin/yang balance in every interaction. “Taking the higher ground” is
actually a very judgmental, arrogant thing to do. Who are you to know
what the higher ground looks like? And if you do, why aren’t you
helping someone else get there?

Let’s be positive. I think you do know what the higher ground looks
like, and we’ll start there.

KNOWING YOUR VALUE


There’s a term you should know in case you’ve never heard it before:
The Dunning-Kruger Effect. You’ve probably heard someone say it at
some point, as it’s rampant in our industry (both the accusations and
the actual effect).

According to Dunning-Kruger, the most incompetent people believe
they’re spectacular, while the most competent folks think they’re
imposters. It comes down to self-awareness, and understanding your
impact on others. Well, that and a few hundred other psychological
principles, but let’s avoid the academic angle here, we’re talking
about you.

Your value is what you produce. No more, no less. I’m not talking
about conversations or emails here, I’m talking about an actual product
of your skillset. When is the last time you took stock of the things
you’ve done in your career? The code you’ve shipped, and the money
other people have made from your efforts. If you don’t know the
answer to that question, now’s the time!

I just went through this exercise recently as I was gearing up for an
interview with a large tech company. I always take interviews, even if
everything is going fine at my current job, because you never know
who you’ll meet and what job might come your way.

Interviews are your time to shine, and it’s a good idea to write down
your accomplishments so that you understand your value. For
instance:

- Designed and built the frontend for expiredfoods.com using
  Next.js 13 and MongoDB Atlas.
- Wrote 3 articles for JavaScript Monthly, focusing on state
  management in React. Each article received 30,000 views.
- Presented “React, State, and You” at my local user group to an
  audience of 250.
- Ran a marathon in San Diego, CA, and came in 45th for my
  age class.
- Volunteered at the local vocational school and showed how to
  get started coding.

This is a personal resume for you so that you can understand the
tangible, measurable things you’ve done that have had an impact. You
built a major asset for your company and helped educate thousands of
people. You also pushed yourself to run a marathon for the first time
in your life! Personal achievements count!

If you find this exercise difficult, that’s a good sign. If you think you’re
awesome but are coming up short on accomplishments, it’s time for a
gut check.

Planning, Talk, and Motion

There is a difference between action that leads to impact, and motion
that leads to nothing. Picking up a piece of trash you find on the beach
and throwing it away is an action that has impact. Complaining to a
friend when you see someone throw a tissue on the ground is motion
—though I can’t blame you for not wanting to pick it up! Gross…

Motion is different from action. Motion is taking notes, surfing the
web, and planning the big trip. Action is actually buying the plane
ticket and booking the hotel. Motion is holding yet another meeting
and coming up with action items, and action is… actually doing the
action items.

When it comes to value, action will always produce far more than
motion. Talking is motion, walking is action. You can talk in a
meeting, or you can be the person to convert that motion into action
in the form of a plan, or action items.

This is where we need to ask ourselves the hard question: am I the
talker and pontificator (aka ‘consultant’), or am I the one writing down the
plan?

Next time you’re in a meeting, observe the dynamic and see who is
playing the role of diplomat. They’re the person relating stories,
saying things like, “I think what Steve is suggesting reflects what
Sonia is expecting from us. I was in a meeting last week when Kim
also said that…” and the stories go on from there. Diplomats like
stories because that means they get to talk, and to them, talking
provides value.

If you’re a speaker helping your audience do their jobs, then there is
value in talking. If you like to relate stories and summarize other
people’s points, this is your moment to consider just how much value
you’re providing your team.

FINDING YOUR TRIBE, BUILDING YOUR NETWORK


And we’re off. This is where we get to put ourselves in a double bind,
wanting to build our career while not wanting to look like we’re trying
to build our career. This will be on repeat for the rest of this chapter
(and large parts of this book): there’s a good side, and a bad side to all
of this.

Right, so, here’s the thing: you will only get as far as your network. The
old saying “it’s not what you know, but who you know” is absolutely
true, at least in the business world, and especially in the tech industry.
If you’ve ever looked at a powerful person in your company who is a
complete ass and wondered why they haven’t been fired, it’s because of
whom they know.

Conversely, if there’s a powerful person at your company whom you
deeply admire and whose footsteps you want to follow in, build your
network. It’s not as hard as you think, but it does require some social
engineering.

It’s human nature to seek alliances, and alliances are formed with
people who share common interests and backgrounds. This leads to a
natural, implicit bias that every human has: we’re more comfortable
around people who share our culture, background, and other traits.

You have to be aware of this when you’re trying to expand your
network.

Dealing With Your Natural Bias

People don’t like to think of themselves as biased because it has a
negative connotation. Without bias, however, we wouldn’t exist as a
species. Our strength as a species comes from our ability to form
ad-hoc groups fitting whatever situation arises, something unique in
the animal kingdom.

I’ll sidestep the religious and political implications here, but if you’re
interested and want to read a book that will blow your mind, pick up
Sapiens by Yuval Noah Harari. One quote from this book has stuck
with me, and made me realize the power of implicit bias. I can’t find
the exact wording, but it went something like this:

Put 800 human beings in a room behind locked doors, and they
will quickly form into groups, with the singular goal of finding a
way out. Put 800 chimpanzees in a room with locked doors, it will
be a massacre.

Do a quick assessment of your life. Your friends, your neighborhood,
how you grew up, and so on. The patterns you see reflect your implicit
bias, which is the thing you need to be most aware of when you
expand your social base at work.

Why is this important? Because influence needs to be universal, not
just with the people you like. You don’t need to put on a show of this
either—just be aware of it and, ideally, see it as a challenge to face.
Spend time with people you’re not naturally drawn to, and see what
kind of common ground you share.

Getting Tribal

This is where we get to the darker side of your network: your tribe. As
you meet people naturally at your job, some connections will be
stronger than others, for whatever reason. Perhaps you work well
together, or pulled together to get something shipped “against all
odds”.

These bonds are strong, and tend to last if they are cared for. In fact,
they tend to span companies and live on for years. I still get calls from
friends I worked with over a decade ago, trying to recruit me so we
could work together again. These are the friends you want. This is your
tribe.

The phrases “we worked together at” and “I followed them here” are
commonplace. As I write this, a good friend accepted an offer at a
company where 6 of her tribe currently work! They keep pulling each
other along, from company to company.

The word “tribe” is exclusive, by nature, but it doesn’t have to be.
Find your way in by befriending a boss you like, or a coworker who
does outstanding work (that you also like). These people will be
managers one day, then directors and VPs. That could happen to you,
too, and you’ll want to surround yourself with people you trust who
you know do a good job.

DEVELOPING YOUR EMPATHETIC SHELL


Your job is to ship software and, if you do it well, you’ll move up the
ranks or, better yet, maybe start your own company and do well for
yourself. We’re very lucky to work in this industry, don’t you think?

The trouble is, there are other people in this industry that want to do
the same as you. I could be mean about this and tell you they’re out to
get you, or we could be real about this, and understand that they want
the same thing as you, and every so often those goals conflict.

In fact, they often conflict. This is where empathy helps, and please
note that I’m not about to suggest you understand someone else’s
motives to convince yourself to roll over and not cause a problem. Far
from it; in fact, quite the opposite.

Conflict will happen, and you will carry out the conflict as humans
have forever:

- A perceived threat.
- Posturing (my tribe is bigger than yours, and I have the
  backing of 5 managers).
- Cold confrontation (aka backroom dealing, whispers, and
  trying to get people fired).
- Hostile confrontation, which usually results in someone being
  fired, or their project canceled by a VP or CEO. Think of the
  OpenAI nonsense from December 2023. Sam won because
  Satya Nadella was in his tribe.

You will face conflict, it’s part of the deal, but there are ways to get
around this, and it involves understanding whether there really is a
threat to begin with. To achieve that, you have to have the ability to
truly understand the other side.

Empathy isn’t something you can teach; it’s cultivated. You
practice by pretending you’re the other person, “switching sides” if
you will, and coming up with ways that you would propel what
they’re doing over what you’re doing. Fair warning: this can be
extremely stressful, especially if it involves your project and a team
whose jobs depend on you.

Emotional Kung Fu

The first order of business is to check yourself and your emotional
state when confronted with a potential conflict. Are you frustrated?
Angry? Defensive? This is where journals are so very helpful because
you can be honest with them and, hopefully, yourself.

If you’re feeling negative, it will come out and be used against you.
You’ll hear terms like “ah, yes, they’re very passionate about their
work” or “they tend to wear their emotions on their sleeve”. I hate
that this is true, but no matter how you identify or what culture you
call home: having an emotional reaction will be used against you. I
didn’t make the rules here.

Also: this is likely where you might find yourself muttering “OK,
Boomer” (I’m not a Boomer, dammit). If this is you, bookmark this
page and come back to this chapter as your career progresses and your
success grows. You will find things becoming more difficult and that
difficulty will likely be from your colleagues, who will smile and say
nice things to you, but cause trouble otherwise.

Assuming you accept the idea that success brings trouble, you can be
mindful of how you perceive and deal with that trouble. Having clear
thinking when you’re challenged in a meeting or email chain will help
you absorb whatever blow comes your way and carefully think about
your response.

Consider a meeting with your boss and peers. One of your peers, Jo,
has plans of their own and is about to let everyone know this:

You: “So, in short, the goal of our project is to increase the visibility of
our platform to non-Python developers, which could increase our
market share dramatically”.

Jo: “If I may—Python developers should be our primary focus. They
have put us where we are, and I don’t see the point in diluting our
efforts with our existing market”.

Jo has a point, unfortunately, and has chosen this day to express it.
More than that, however, Jo has an agenda which you didn’t know
about until now. This will surprise you, and it will also frustrate you
because, as you mention, outreach to non-Python folks is the entire
goal of your project and Jo just stated that your project basically
shouldn’t exist.

What do you do? Here are a few possible responses (of many):

You: (glaring slightly): “I’m sorry, what? We’ve been working on this
project for months, and you’re discussing this now? Has something
changed that I’m unaware of? We’ve put great effort into this project,
and it’s frustrating to hear this, to say the least”.

Say hello to Rob, circa 2003. I didn’t care for conflict, but if it
happened, I tended to go right at it aggressively. This can work if
you’re the CEO, CTO, VP, or run your own company. It absolutely
does not work if you’re somewhere on the org chart that’s not the top.

Let’s use a little awareness, and see if we can adopt what I like to call
“Emotional Kung Fu”, which is to accept the barbs coming your way
and let them pass through you.

You: (taking notes, waiting until a count of five): “I agree, Jo, diluting
our efforts would be counterproductive. Where do you see the dilution
happening?”

This is a passive-aggressive response, but it can work. If Jo wants to
stick their neck out, let them. It’s also exceedingly clear, however,
that you’re taking up the fight.

We can do better:

You: (taking notes, waiting until a count of five): “I wonder if it’s
possible to do both things. Expand our non-Python efforts, while also
strengthening our current user experience? I have a few thoughts on
that, and we could work together to come up with a plan, if you like.”

This validates Jo’s assertion without validating Jo. Asking a question
vs. making a statement is still passive-aggressive, but has the veneer
of being a bit more polite. It will be clear, however, that you will likely
do no such thing to help Jo, and many will see this as duplicitous, at
best.

Now, consider the fourth response:

You: (taking notes, waiting until a count of five): silence.

Countering someone openly is a brazen effort, and your silence
amplifies this. By not engaging, you’re dismissing Jo’s nonsense as
something that doesn’t deserve a reply. Which it doesn’t: this is your
project and if there’s a question about its validity, it should come from
management.

Never underestimate the value of silence. It conveys strength far
beyond words and, most importantly, you won’t say something
that will erode your position. If Jo wants to waste time and money
undoing what’s been done, that’s on them.

People Will Always Be People

Consider your group of friends, or your family. Do they get along all
the time? Doubtful, because all of us have things we want out of this
life (and career) and will fight for them. Every so often that means
fighting each other, as unpleasant as it seems.

The rules of engagement in the workplace are much different than at
home or with friends, however. You can be fired at work, or promoted.
You can cause someone else to be fired, or promoted. Don’t run from
these things, as they will never change.

Yes, these interactions can be extremely annoying and make you want
to give up, but I would encourage you not to do this. Work is not real
life, it’s a strange blend of tribal politics and being an old-world
member of the royal court: people jockeying for position and trying to
gain power. Yes, work can be a social place too, but it is rarely just that.

It can take years of work experience to learn how to navigate other
people’s self-interests, and also your own. Thankfully, you’re reading
this book and I have some practical advice for you.

THE LAWS OF POWER AND INFLUENCE


You might need to take some time before you read this section as it
will be challenging. That said, I’m hoping it’s also enlightening and
that it helps you see your motivations when it comes to the workplace
and delivering software.

I’m sure there’s someone at your current job whom you regard as
brilliant and respect more than others. If that person told you
that learning Java was The Way, you would likely spend the following
weeks learning the Old Standard.

If you’re good at delivering things, your trajectory can match this
person’s:

- Your lead will assign you more work and depend on you more
  because you’re capable of working through problems and
  delivering.
- You will take your lead’s job, garnering respect (and jealousy)
  from others at your company.
- You will become a director if you keep delivering value,
  building out your network, your tribe, and your enemies.

In short: your influence on what happens to you, your team, and your
company will grow. This is power, and it’s OK to have it as long as
you’re not a jerk.

That’s the critical part you need to understand: with great power comes
great responsibility, to borrow from Spiderman. It’s true. People will
depend on you, and your care of the principles you’re about to read
will mean everything for your growth as a person as well as a boss.

To that end: every one of the principles I’m going to discuss below has
a positive perspective and a negative one. Seeing both is critical, so
you can stay away from the negative and embrace the positive.

What you’re about to read is largely based on Robert Greene’s work,
The 48 Laws of Power, which I mentioned above. It’s a great read even if
you never plan to move up at your company because you’ll know what
the more conniving people are up to. I both love and hate this book
because I find it revolting that people can behave this way, but I also
find it eerily accurate.

Right, enough said, let’s do this. Here are some of the principles
discussed in the book, adjusted to our field. I’ve also added the
negative and positive aspects of each.

Never Outshine the Master

This, in simple terms, means you should never make your boss look
bad. The better they look, the better you look because you’re their go-
to. If your work outshines theirs, you look like a climber and your
boss, who might be thrilled at your efforts, will resent you for it
silently.

Many managers in our industry will make a point to proclaim,
publicly, that their job is to hire people smarter than them and to
watch them excel and move up the ranks. This is a contradictory, self-
serving statement in plain sight: “I’m surrounding myself with a
wonderful team that will make me look like a good manager”.

Either way: always make your boss look like a star. It’s good for you both.

Always Say Less Than Necessary

Silence is power in just about every way. More than that, when you
open your mouth to protest (or write prickly emails and posts), it will
likely make you look weak, almost every time.

The worst possible case is overreaction by misunderstanding someone
else’s intentions. This creates drama, and puts a mark on you.

Silence is your friend when it comes to the typical workday. This, of
course, does not mean you should remain silent if you’re harassed or
experience anything that requires attention from HR or worse.

Guard Your Reputation With Your Life

It’s all you have. Once it’s dented, it is almost impossible to recover,
even if you leave your job and go somewhere else. In 2009, I found
myself embroiled in a controversial situation at the company I worked
at and had to be the point person for a very public debacle. I did my
best to handle the situation, but there was no winning here. I was
branded as “difficult”, someone who enjoys drama, and “a loose
cannon”.

Old colleagues, whom I consider friends, still see me this way.
Ironically, the people whom I interacted with publicly respected my
message because it was truthful. You win some, you lose some… but
my reputation in that company is ruined to this day.

Attention is a Good Thing

This reminds me of Rahm’s Rule: “Never let a serious crisis go to
waste”, which is how many people on YouTube, Twitter, and other
social media view being the “Main Character” for the day. Any attention
is good attention, and I hate that notion.

That’s the negative way of seeing it, the more positive way is “always
put on a good show” or “there is no ceiling” when it comes to being
dramatic. I read a funny story in a marketing book about a $100
hamburger, which seems utterly ridiculous, but a Las Vegas restaurant
wanted to make an impression and generate buzz, so they came up
with this overpriced burger.

Who the hell would pay that amount for a burger? Turns out that
everyone wanted to know the answer to that question—because there
were people who stepped up to the challenge. There always are.

The attention that this restaurant received for such a ridiculous stunt
put it on the map and made it stick in people’s minds. I’m writing
about it right now!

The point is: don’t just give a demo, give a Demo That People Will
Remember and Talk About. David Heinemeier Hansson (DHH) knows
this all too well. He famously dropped a slide in one of his talks with
the simple phrase “Fu— You” right in the middle. I don’t remember
what he was talking about, and it didn’t matter. The slide set the tone
for the rebellious Ruby on Rails community at the time and caused
many, myself included, to write about it.

Win Through Action

This is a theme throughout this book: don’t talk, do. Action is power,
and it can, and will, get you into trouble if you’re not careful. But
that’s why you’re here, reading these “laws”!

The silence in the meeting when your colleague confronts you should
be followed by a successful deployment, or publication, or whatever
you’re delivering. One of my favorite quotes comes from David
Goggins’ book, Can’t Hurt Me:

Don’t kill ‘em with kindness, torture them with success.

The best rebuttal to a challenge is to deliver on what you’re doing, and keep delivering. Let your challenger look like nothing but hot air.

Appeal To Others’ Self-Interest

This one feels slimy, to say the least, but there’s a more positive spin
here. It’s not about playing someone by faking what you care about,
it’s about aligning your interests with theirs. This is also called molding
consensus, which we discussed at the beginning of the chapter.

You work at a large pet supplies company, and you’re developing a social dog-walking application that’s coming together well, and should
boost the brand’s loyalty. You’re also being challenged by the person
running the grooming services department, who thinks your dog-
walking app won’t bring in revenue and that the company should put
more resources into expanding grooming.

You have an idea: what if you could provide a 20% grooming discount
for people who shared pictures of their adorable, well-groomed dogs
on walks throughout the city while using the dog-walking app?

You’ve just won over an adversary and aligned your interests. You’ve
also expanded your network and your tribe if things work out.

Don’t ask for favors or increase the divide, find alignment.

Mediocrity Kills

This goes along with “Attention is a Good Thing”, but never waste an
opportunity to do outstanding work and blow people’s minds. You’ve
probably heard the term “underpromise and overdeliver”, and this is
why: mediocrity is toxic.

Nothing is worse for your reputation than mediocrity. It sends the message that you either phoned it in, or that you’re incapable of
making an impression.

Always have a trick up your sleeve, or a grand exit when doing a demo. Steve Jobs’ “One More Thing” is canonical in this regard, and
people came to expect that of him. Your demo can showcase the work
you were asked to do, but the finale should blow people away with
your inspiration and skill.

Conceal Your Intentions

There’s no spinning this one: it’s slimy. That said, there is wisdom in
not letting everyone know your plans, especially the colleagues who
are trying to kill your project or your position in it. But… yeah, it still
feels slimy to me.

Consider this “law” and the ones that follow as a bit of warning. A
“this is what other people might do” kind of thing. In my experience,
it’s completely accurate. As you rise in the ranks of your company,
people are rarely fully transparent and if they spend time insisting that
they are, you know for sure they’re not.

Get Others To Do the Work, You Take Credit

Eww! Yeah, this one just doesn’t hit right, does it? And yet: this is the
very definition of being a good manager. You let the people you hire do the
work you hired them for. Give them autonomy and the ability to take risks. The fact that you did so makes you a good manager, which
means you naturally get to accept some credit for the work that gets
done.

Many managers will do more than that, taking credit for your work
outright. In many companies, this is de rigueur and expected of you. I
think it’s horrible, but if you’re trying to get into a big company (a
FAANG, Microsoft, etc.) don’t be surprised when your boss, who has
seemed so kind and helpful, takes more than their share of your credit
and then gaslights you. The unspoken agreement here is that you’ll do
the same later on.

Be aware this happens and don’t be surprised when it happens to you. Know it’s going to happen, and prepare for it.

Avoid Unhappy or Unlucky People

They will bring you down and, worse, you can be labeled simply by
befriending them. Also: working with these people is a downer!

It’s also a good thing to check yourself occasionally and see if you’re
becoming “unhappy or unlucky”. It’s nearly impossible to recover a
broken reputation (you can trust me on this one) so if you believe this
label has landed on you, it might be time to move on and try again.

Make People Depend on You

This is why DBAs like being DBAs: everyone counts on them. It’s also
why people like to “own” information on “Holy Spreadsheets” (the
source of truth for a project or team)—information is power and if you
own the information, people must depend on you.

Let’s turn this more positive, shall we? If you’re good at what you do,
people will naturally depend on you to keep doing it. If you’re a good
leader, motivator, and consensus-builder, people will follow you and
be inspired by you. This is a good thing.

Using trickery is crappy, but it does happen and is something you should be aware of. It’s a silly, brazen tactic but silly, brazen people are everywhere in the workplace and if you know what they’re doing, all the better for you.

Crush Your Enemy, Completely

This is one of the true tests of leadership, and something most people
aren’t willing to do, which is very human. Most of us are brought up
to believe mercy is a good thing, and an enemy turned friend is a
powerful thing. We’ll discuss the latter assertion in a minute, but
usually an enemy that’s been defeated by you will make it their goal in
life to come back stronger and take you out completely.

Consider Jo from before, the person who didn’t think your project was
worth doing because they had other ideas. As it turns out, your
project did very well, and you received a promotion in the form of
becoming the group manager. Jo now reports to you.

There’s bad blood between the two of you, even though you kept your
cool each time and obeyed the laws, saying nothing most of the time,
and as little as possible other times. You also kept your intentions to
yourself, so Jo didn’t fully understand your plans.

What do you do about Jo now?

You could be accommodating and kind, having a meeting with Jo straight away and asking what they think is required to work well
together moving forward. Jo would likely respond with a plan and do
their best to assure you that they’re there to help you as you need.

Do you believe them? A better question is: should you believe them? If
I’m honest, I would believe them because I just don’t have it in me to
do anything more. I’ve had this conversation with so many people,
mostly managers, and 90% of the time, we would keep Jo around and
hope that they will become a trusted ally.

The brutal truth, however, is that Jo is an unknown and there will always be suspicion that bad blood is still there. A defeated enemy
with a chip on their shoulder is formidable, and you can also trust me
on this one.

If you respect this law, you would ask Jo what their plan is for their
future at the company because it shouldn’t be in your group. You
would be honest: you don’t trust their intentions, regardless of what
they say, and your group is built on trust. You’ve been given full
control over the matter and arranged for Jo to transition to another
group, or out the door.

You might think this will brand you as unkind and a harsh boss. On
the contrary, it will gain you respect. Even if Jo leaves the company
and complains non-stop throughout the industry, posting on Reddit or
social media, they’re the one who will look weak.

Yeah, I know. I couldn’t do this either. I’ll be honest and say I wish I
could because it’s the people that can do this that rise to the level of
VP and beyond.

Let Your Enemy Think You’ve Been Beaten

An enemy turned friend is a powerful thing, which is what most kind people think when they win a battle. They would rather not be
considered unkind or harsh, so they’ll do their best to mend fences
and “play nice”. If you’re going up against someone who is willing to
crush you (something you should always assume), this is a deadly
mistake.

In martial arts, you’re most vulnerable when you attack. Your body
motion is given over to aggression and impact, and you have little to
no defense. If your enemy absorbs your blow, their counterattack will
likely do you in completely.

The same is true in professional life. When Jo comes after your project
(back before you were promoted, let’s say), saying very little, or
nothing at all, can give the impression that you’ve been beaten. Jo
might assume they’ve won, which means your counterattack (in the
form of successful delivery) will be all the more devastating.

So, let me ask you this: does all this conversation about conflict turn
you off? It’s difficult to write about because I’ve lived it far too many times. And I hate to be a downer, but this is going to happen to you, even
if you decide to work on your own as a contractor or found your own
company. Conflict and competition are core to humans, it’s how we
grow and become stronger. It’s OK to not engage, but that also means
you won’t rise to your potential.

Chaos Is Your Friend

Gross. I hate this, yet it’s also true. There will be people at your
company who, occasionally, will inject chaos into the daily standup,
monthly meetings, or off-site retreat. They will sow doubt, backstab,
and flat out lie, all with the goal of causing a little chaos.

Believe it or not, this is a common, natural role. Chaos causes destruction and destruction causes regrowth. Regrowth helps refine a
system. All of this is the evolutionary, organic cycle and there’s little
you can do to avoid it. Someone, and it could be the nicest person you
know, will take on the role of The Joker and sit back, hoping to watch
the project burn for little to no reason at all.

If there’s a positive to this, it’s that chaos will shake loose the
decaying parts of a project that should probably go anyway. This could
be features, entire products, or people themselves. The chaos could be
infighting, budget cuts, or layoffs. It could even be the lack of funding
moving forward, causing a pivot and downsizing of a bloated startup.

So, how do you make chaos your friend? Simply by knowing it’s
coming, regardless of what you do. Routinely interviewing at
companies you admire can help you take advantage of layoffs at your
company, before they happen. You could even explain to your boss you
know they’re coming and that you’ve been interviewing and have an
offer at a competitor. You never know, you might receive a guarantee,
or even a raise if you stay.

Chaos always brings opportunities, if you see it as a positive. Something will pop up, like Jo sending a note they shouldn’t have sent on Slack, or doing some other slimy thing you can out them for. For me, it all comes back to surfing. Waves are organized chaos in the water, and it’s fun to see the patterns and ride them. You can do the same at work, it just takes patience.

Make Your Wins Look Easy

Imagine you’re at a company all-hands meeting and the president stands up to make a speech. They want to recognize Terry, who did
remarkable work on Project X, delivering something critical on time
and under budget. Terry is beaming, and is asked to come up and say a
few words:

Oh my gosh what can I say—I wish my entire team could be up here with me. It was such a struggle, and they handled all the setbacks with grace and professionalism. It felt like being in a battle together, and there’s no other team I would rather go to war with than these fantastic people…

A pretty standard thank you speech for a manager: recognize the team,
acknowledge the effort, strengthen the bonds.

But what if Terry said this, instead:

Thank you so much for trusting us with this project. I know how
much it means to the company and everyone who works here; it is
a privilege to work with such a talented team, and I had every
faith in their ability to deliver.

I suppose you could call this a humblebrag, but then again, Terry is on
stage accepting praise for a job well done. It’s a very fine line when
you try to make a win look easy—you can easily be branded as
arrogant. If people come to expect excellence from you, however, it’s
worth it.

Barry Sanders is a famous running back in American football, and is one of the very few running backs to rush for over 2000 yards in a
single season. He was absolutely electric to watch on the field, and it
was only a matter of time until he slashed and spun his way through
the defense, breaking off a full-field sprint to the end zone.

When he got there, however, he didn’t dance, spike the ball, point,
and shout. He just tossed the ball to the referee and made his way to
the sideline, with his teammates dancing all around him. To him, it
was just another set of downs ending up with him putting points on
the board. That was his job.

Barry retired at the height of his career, shocking every football fan
around the world, including me. He walked away, caring very little
what people thought. It was effortless for him, and the right thing
to do.

When you win, toss the ball to the ref and make your way to the
sidelines. Of course, you delivered, that’s what you’re here for.

YOU DOING YOU


If you read Robert Greene’s The 48 Laws of Power you’ll see the rest of
the list which I think, frankly, are variations on the above. It’s a tough
book to get through because I don’t ever want to be a person who is
so conniving and calculating!

Yet, there is value in knowing what other people are up to. There is
also value in understanding that they probably aren’t “evil” or
conniving either—they’re just looking out for their own self-interests
and doing what comes naturally to them. Which is what you’ll do as
well.

If you see yourself doing something you think is slimy, and it makes
you feel slimy, then stop and ponder your motive. Reflect on these
laws here and consider how you could do better. Perhaps not sending
that email and saying less. Maybe you could push for a stronger demo because mediocrity is toxic. Instead of embracing conflict with Jo, perhaps there’s a way you can appeal to their self-interests, and align your projects.

Your job is to ship software, which means you need to mold consensus. Just because you make something doesn’t mean it will be shipped! In fact, when you make something, it will likely be targeted because action is powerful, as is delivery.

Building software is easy, getting it out the door and shipped is the
hard part, as it requires a masterful use of the soft skills of power.
THREE
LEADING YOUR TEAM
YOU CAN CHANGE SOMEONE’S LIFE AND
CAREER WITH YOUR ACTIONS

At some point in your career, it’s likely you’ll be asked to lead a
team of others. It’s OK not to do this, by the way, if
leadership isn’t something you’re comfortable with. Many
of my friends have decided to remain “IC”, or Individual Contributors,
and they have progressed their career just fine.

Leading a team, however, can fast track your career if you do it well.
This is all I did earlier in my career, and I loved it. These days I prefer
to work on my own, which may change in the future if the right role
presents itself.

As a team lead, you’re the captain of the ship. You don’t hoist the sails
or turn the wheel yourself, you let your team do it. A better way to put
that is that you enable your team to do it. Your role is to ensure the ship is
pointed where it’s supposed to be pointed, that you stay out of danger,
and that your crew is united in their effort.

That last bit can be difficult and, frankly, is why I don’t lead teams
anymore. We’ll get to that shortly.

For this chapter, I’m going to assume leadership is in your future, somewhere. You might be:

Asked to lead an existing team because your current lead got promoted or left. The most common scenario at any type of company.
Joining a new company and asked to create a team. A
startup, perhaps.
Joining an existing team as the lead. This is common in big
corps after a reorganization.

Managing others is challenging, and it increases the intensity of your career, in both positive and negative ways. Watching someone on your
team grow in confidence and ability is truly rewarding, especially
when they go on to lead a team themselves. Keeping the ship pointed
in the right direction and shipping your project … there’s no other
feeling like it.

Many factors need to align if your role as lead is to be successful, however.

YOU’RE ONLY AS GOOD AS YOUR ENVIRONMENT


It doesn’t matter how great you are with people, if your manager (and
their manager) doesn’t support you well, you’re doomed. The only
way you find these things out as you begin your leadership journey is
to try and then fail. Most books on life and career urge you to keep
trying and failing until you succeed, which is generally good advice,
but in this case, I don’t know.

Consider this horror story.

Good Job, See Ya Later

I’ve worked at big companies and small startups, including two of my own. One of the things you have to watch out for is the psychopaths,
and I am not being flippant about that. These people are out there and
have a strong desire to lead as it fills their ego. We’ll talk more about
tough personalities later, but for now, just know that I worked under
one (a few levels above me) and it wasn’t fun. I hadn’t read The 48 Laws of Power, which we discussed in the Soft Skills chapter, but I truly
wish I had because I got eaten alive.

I had been in the business long enough to see that this person was
exceptionally toxic and superb at manipulating people, especially the
VPs and executives above them. I mentioned the overt toxic behavior
a few times to my direct boss, and the reply was always the same: “I
think everyone knows it, but there’s not much we can do about it”.
You’ll find that response is common.

I need to expand on this because at some point you will work for one of
these people. The tech industry seems to attract them more than other
industries, though I can’t prove that. It might be the money, the
status, who knows! But according to this study, 1 out of every 100 people has diagnosable psychopathic traits:

About 1.2% of U.S. adult men and 0.3% to 0.7% of U.S. adult
women are considered to have clinically significant levels of
psychopathic traits.

We’re not talking about violent criminals or despotic leaders here. We’re talking about people who simply can’t feel emotion towards
others (empathy), have an inflated sense of self and can be particularly
aggressive. Know someone like that?

If 1% of the population has diagnosable psychopathic tendencies, that means in a company of 10,000 people, 100 of them are
psychopaths. For massive companies (like FAANGs, for example),
that’s well over 1000 people!

It gets worse when you consider that some people just have the traits,
but might not be a clinical case. And even worse than that:
psychopathic people tend to make “good leaders” in that they drive people extremely hard to deliver results, which they then take
credit for.

So, how do you spot a psychopath/sociopath? Here are some tips:

They never really look at you but, instead, look through you.
It’s a strange trait, but their face is almost devoid of
expression and the eyes seem lifeless. This is because they
have very little regard for you and probably aren’t listening to
what you’re saying anyway.
They say rehearsed things, as if reading from a script. If you’re
in conversation with them, they’ll take an extra beat or two
before they reply. When they do reply (in a meeting, perhaps),
they will sound as if they’re reading from the company manual
or saying something so rote or cliché that it’s almost comical.
Something like “I hear and validate your position, and I’m
interested to hear more. Perhaps you could make some time on
my calendar so we can discuss this further”.
They leave a trail of destruction. The thing with psychopaths is that their words and perception of themselves do not align with reality. Good people quitting, one after the other, is
always a sign.

That last bit is especially difficult to deal with. At this particular job,
about 40% of the group I was in left. 20% of them “rage-quit”, calling
out this toxic person in particular. Nothing was done.

At one point, I was offered a lead position that was vacated because of
this toxic manager. I was told that if I didn’t take the job, there might
not be a position for me at all — which is another sign that you’re
working for a psychopath (do what I want, or I’ll make your life hell).

I took the role and within 2 weeks I realized there was no way I was
going to succeed. Team dynamics, miserable morale, increased
workload without a promotion and a leadership structure that was
happy to take credit for any success the team had.

Getting Out of a Tough Situation



There’s a saying that I like and that I’ve found particularly true: Human
Resources is there to protect the company, not the employee. You’re always told
to “go to HR” whenever things get weird or challenging, but that
rarely works for people. What does work is a paper trail in your journal
and a good lawyer.

I keep a journal of absolutely everything — emails, chats, conversations, meetings, and so on, which we discussed in the Journal
chapter previously. Psychopathic people can be devious and one of
their tricks is to “gaslight” you, reshaping the past to their needs.
Being the good person that you are, you will obviously question
yourself and wonder if you truly are forgetting things.

This is why you keep a journal. This is where you keep a solid
project trail with GitHub issues and PRs, relevant emails and chats,
your brag book, and more. You can’t be successful unless you have a
trail of facts about your actions, and that includes protecting yourself
from Not Nice People.

I booked a meeting with the Toxic Manager and asked that we record
it. This caught them off guard, but they agreed. I knew a trap was
waiting for me, that I would be asked to find a new team or leave the
company if I couldn’t do the job.

Before we got to that point, I began by recounting the meetings I had, the conversations that took place and even the chats. I made sure to
include dates, and I also made sure Toxic Manager knew I was reading
these things from my journal.

There were a few things that I had not agreed to with this position (to
keep this as anonymous as I can, I’m going to omit those details) but
had been assumed after the fact. There were also a few things said to
me that violated company policy and no, I would not be stepping
down, I will be resuming my old position.

Here’s a fun thing about psychopaths: they tend to be impulsive and while they can plan ahead, they tend to focus on only one thing, harming everything else. A fun story from the article I linked explains
this well:

An example is Robert Durst, the real-estate heir who was convicted of murder in 2021 and died in custody in January. At
one point, he was on the run from police for killing his landlord
with $30,000 in his car and $900 in his pocket. But impulsively,
he decided he was hungry, so he parked his car, entered a
Wegmans supermarket, stole a hoagie, and, predictably, got
caught.

It’s never a good idea to confront a psychopath, as they will become aggressive and try to lie their way out of any difficult situation. It can
escalate to include outlandish accusations, more lies, and extreme
gaslighting, which is why I did the meeting on video and recorded it as
well. I wanted this person to melt down and have it recorded as I
confronted them.

This is an issue I have that I wish I didn’t: when I encounter people like this, I go to war. What I should have done was to leave, picking up
my career at another company with better management. I chose to
stay because I was convinced this person was on their way out, and I
simply had to be a part of that.

They never left and, instead, got promoted shortly thereafter. I stayed
another 6 months but finally left, fatigued and deflated.

Don’t Stay Under a Toxic Lead

Don’t hesitate to get the hell out of a bad situation. If you spot a
psychopath in your management chain, it won’t end well for you,
especially if they seem to select you as their pal. You won’t make this
situation work out, and you’ll end up burned out, never wanting to lead
a team again.

It’s not leading a team that’s hard, it’s the support you don’t have
from your managers. This is critical to understand.

WHEN YOUR MANAGER KICKS ASS


You can always spot a good manager when their team follows them
from job to job. Or, when they follow their team from job to job. I
have worked for a few of these people, and they truly are wonderful
people who dedicate themselves to raising those who work for them.

One of the best leads I worked for started our very first conversation
with “let’s talk about why you’re not a lead yourself ”. They wanted to
know what “moving up” meant to me, and where I wanted to be in 3–5 years. We talked for an hour and came up with a plan, and I talked
most of the time. They weren’t condescending at all, and no subject
was off limits.

Sadly, I only worked for them for a year before they left the company.
The “alignment” I felt was wonderful, like I was part of a greater
effort, truly making a difference and being recognized for it.

These are the people you want to work for and, yes, if you get a
chance to stay with them, do it. Your manager is everything when it
comes to your progress in a company, and if you think they’re not
doing their job, fire them.

You might have heard the term “managing up” before, and it’s a skill
you need to cultivate. Your boss is there primarily to guide the team to
success, and they should be doing that by empowering you and then
getting out of the way. They should also make sure you get the
recognition you deserve and, if they’re not, you should change roles,
change groups, or change jobs.

You are your best advocate!



BUILDING A TEAM

You just stepped into a lead role at a company you respect, working
for a person who has a solid reputation as a great lead. You’re set up
for success, now let’s deal with your leadership skills.

Check Yourself: Are You Emotionally Ready For This?

If you’re excited to lead a team, ask yourself why. Are you looking for
validation? Perhaps a little power at your company? Respect? It’s
human to want these things, but if you find yourself needing them, we
should talk.

The terms “psychopath” and “sociopath” don’t sound good, but they
define a spectrum that everyone is on, somewhere. I’m not a
psychologist, but I’ve studied enough on the subject to realize that
these toxic people have armored themselves emotionally, usually as
the result of some type of trauma. They don’t feel things like a
“normal” person might and, instead, feel joy and fulfillment under
many different circumstances.

For some, validation is the goal. They need to be recognized by others continually to feel fulfilled and, if they’re not, they cause problems
because they feel attacked due to the lack of praise. It sounds messed
up, but it comes from a point of pain, and people do weird things with
emotional pain.

If you’re going into leadership hoping to feel validated in the work you
do, talk to a therapist about it. The job isn’t about you, it’s about your
project and your team. If this section is making you angry, that’s a
good sign that talking to a therapist is paramount for you.

No Room For Heroes

OK, you’re emotionally ready to jump in, and you have a great
management chain above you. Let’s do this!

Now let’s get to the other side of the emotional needs spectrum: over-compensation and over-attachment. Didn’t think we’d be discussing emotional intelligence in a technical book, did you! And yet, it’s one
of the critical skills of any good leader.

Here’s the thing: you’re not allowed to be a hero!

From XKCD

Your job, as the lead of your team, is to enable your team’s success. At some points, you’re going to “lead from the front”, fixing
broken builds, resolving simple issues such as spelling mistakes in the
documentation, and even centering a few divs. Leading from the front
means you’re in the mix with your team, not stuck in an office staring
at a spreadsheet.

When your team is humming, get yourself out of the way. This is
when you buy a round on a Friday night, give small spot bonuses for
hard work, and ensure that your door is always open for a drop-in
when people have a question.

It’s a delicate balance and a fun one to maintain, but there’s also the
temptation to jump in and resolve a problem you know how to fix,
rescuing a bad situation. This often comes up in a code review, where
you might see an N+1 problem (we’ll discuss this in the scaling
chapter). These are usually simple fixes with any ORM, one that you
could do quickly, so your team focuses on other things.

Making this fix, however, can deflate people. Avoid the temptation and
fill out an issue, assigning it to the last person to work on the file (use
git blame for that), and then move on. Your team needs to trust you
to lead, not to fix.
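That `git blame` step is easy to try for yourself. Here’s a rough sketch using a throwaway repository — the file name, code, and author here are made up for illustration; in a real project you’d just run the final command against the file in question:

```shell
# Sketch: find who last touched the lines you're filing an issue about.
# Everything below builds a disposable example repo so the command has
# something to run against.
set -e
repo=$(mktemp -d)
git init -q "$repo"
printf 'def charge(order):\n    pass\n' > "$repo/billing.py"
git -C "$repo" add billing.py
git -C "$repo" -c user.name="Sam" -c user.email="sam@example.com" \
  commit -q -m "add billing stub"

# Who last touched lines 1-2 of billing.py? Each output line shows the
# commit, author, and date alongside the line itself.
git -C "$repo" blame -L 1,2 --date=short billing.py
```

The `-L <start>,<end>` flag restricts blame to the lines you care about, which keeps the output readable on large files and gives you the name to put on the issue assignment.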

As the lead, you’re setting the tone for the team, and your leadership
informs the overall emotional health of everyone on it. If you’re
constantly stressed out and complaining, so is your team. If you’re
overly positive and lack sincerity, you won’t connect with them, and
they won’t connect with you. You’re a regular person tasked with
helping them be outstanding, which is a privilege! It should be fun,
but always remember to keep it real.

At my first startup (cofounded with a friend during the first Dotcom boom) I would come in between 0830 and 0900 and leave, promptly,
at 1700. One or two nights a week, I would invite a few folks to go
have a beer with me at Thirsty Bear (an SF brewery that’s sadly gone
now) to blow off steam. I enjoyed this ritual and would share some of
the interesting projects we were bidding on, and how exciting it was
to be in the exploding tech industry. If I found myself complaining, I
would stop as fast as I could (it’s human nature, I suppose). “Stay
away from unlucky or unhappy people” is really a thing, be grateful
you have a job in a great industry, and stay positive.

Who Do You Want Working For You?



Leading a team means helping people do their best work. I could fill
an entire book on just this subject but, instead, I’ll point you to this
book right here by Sarah Drasner. I worked with Sarah at Microsoft
and, like so many will testify, she’s a fantastic person and ruthlessly
competent.

Your job as a technical/engineering manager is to give everything you


have to your team in terms of support, clear guidance, and ensuring
they can excel. Easy sentence to write in summary, isn’t it? There’s a
lot of psychology, of course, so once again I would push you to buy
Sarah’s book; I need to stick to the practical bits here, which are:

Assessing maturity levels
Soft skills, which we discussed in a previous chapter, and
Build vs. Buy

I think the tech industry attracts neurodiverse folks in large numbers. People who prefer their computer to social interaction, people who are
ridiculously smart, and people who need structure. Usually all three.

Our industry is also a very young one, which means that, on top of the neurodiversity, you have people who are still maturing and
growing into themselves. This can make social interactions a
challenge, and interviewing an absolute nightmare.

Professionalism: Tips For Getting Along

A team needs to be a team, which means they need to communicate and have a degree of professionalism. In the Soft Skills chapter, we
examined the human motivations we all have and that we hide under
the veneer of “professionalism”. This might sound disingenuous and
fake, but we have social rules for a reason, which is to keep us from
killing each other. More importantly, these rules actually help us to
work together, which is the hallmark evolutionary skill of Homo
sapiens.

When I was starting out in the professional world, I had to go into an
office every day with a tie on. I was a geologist at a large
environmental company in San Francisco, and you never knew when
someone from Chevron (our largest client) was going to stop by, so
you wore the tie.

I remember thinking it was ridiculous. I was a geologist! Why did I
have a tie on? My boss coached me on several things, one of the
biggest being the idea of professionalism. Get ready as this is going to
sound extremely old school, but consider the impact you have on
others and what it says about you socially.

OK, here goes:

Make yourself as presentable as you feel comfortable doing.
For me, this meant shaving, clean clothes, combed hair and
yes, a tie. I’m not suggesting you wear a tie, by the way, please
don’t. Just know that extra effort can go a long way when it
comes to clothing.
Take a second and consider the words you’re about to say or
write. Inspire, don’t offend, ever.
Smile and look people in the eye.

I know, I know. It’s like I worked in an episode of Mad Men. Applying
this to the modern tech industry workplace and… yeah you would sort
of stick out. That said, you would sort of stick out.

My point with all of this is that it is possible to coach someone who
needs coaching. Perhaps they’re straight out of school and know how
to code, but require a little help adjusting to life outside of campus. It
will take effort on your part, but it can often pay off, as people will
stay if they have a strong relationship with their boss.

This will be part of your job whether you like it or not: you will be
the coach that unites the team and, typically, the person who has to
push others to “do better” when it comes to working with their
colleagues.

The best way to do this is by example. How you care for yourself, the
words you choose, and the attention you pay to others when they
speak will be emulated. Scott Guthrie was my “skip” for a few years at
Microsoft (my boss’s boss) and every meeting we had was a study in
interpersonal communication. I still use some of his phrases to this
very day, including:

I wonder if it’s possible to (do this thing I want you to do)
What would happen if we (didn’t do what you want but did what I
suggested instead)
I really like what you’ve done here. I wonder if you would have the time
to (try this thing asked you to do before)

Some might flag this as passive-aggressive, but that’s a different thing.
If Scott had said “OK, that’s fine, do what you want even though I
suggested something else. We’ll see if it works for you” — THAT
would be toxic.

He didn’t do that. He gave you room to try things and would gently,
but firmly, suggest you also try what he wanted you to try.

Words matter and your careful use of them will set the tone for the
entire team.

Your Words Are You

It’s common to see “my opinions are my own and not those of my
employer” on social media sites, namely Twitter. Good luck with that:

When I check a candidate’s Facebook or Twitter, my aim is more
to get a sense of them as a person than to look for damaging
information.

We’ve all heard horror stories about people getting angry, tweeting
something inflammatory, and getting fired for it. This has happened to
quite a few friends of mine and, thankfully, hasn’t happened to me.
Mostly because I’m paranoid and routinely delete my tweets!

I used to “get spicy” because I figured that Twitter was a personal,
social thing and why have an opinion if you’re just going to water it
down so you don’t offend someone? I didn’t get fired, but I did get
pulled into a few meetings discussing my use of social media.

But that’s not what you have to worry about! Employers will toss your
resume or overlook you completely if your Twitter/Facebook/LinkedIn
profile is overly challenging or, simply put, paints you as a jerk.

As a future employer, this is something you need to take seriously.
What will you do with someone’s online profile?

On one hand, people should be free to express themselves however
they see fit. Twitter isn’t the office and who knows? They might be an
entirely different person at work. I know many people like that —
savage online, kind and thoughtful in person.

On the other hand, there’s a saying that I believe in, completely:

When someone tells you who they are, believe them the first time

I have worked with some extremely challenging people over the years,
and I’m sure a few would say the same about me. There are so many
factors that go into interpersonal conflicts — the trick for you is
figuring out what factors drive your potential team member.

I’ll let you decide whether screening someone’s Twitter, LinkedIn, or
GitHub (I don’t do Facebook, personally) is warranted or acceptable. I
do it all the time when meeting people — it’s a fun way to get to know
what makes someone tick.

Would I use this to get to know a candidate for my next big startup?
Honestly, I might look over their GitHub profile but wouldn’t put any
stock in Twitter. That platform drives people to do and say some weird
things. LinkedIn, on the other hand, is a corporate event, so I don’t
think I’d fully trust that either.

One thing I would never do, however, is ask friends of friends their
opinion…

Warning: DO NOT Ask Friends!

I was in an interview back in 1993 (geologist again) when the
interviewer gave me a pretty intense look and said:

I spoke to my friend Dave over at (YOUR COMPANY) and he had
some challenging things to say about you. He told me about an
incident with a contractor …

This paragraph is difficult to write. I wanted that job! The “Dave” my
interviewer was mentioning was a colleague who did not care for me
at all, and completely fabricated an interaction with a contractor (a
deep core mud driller), saying that I made the guy so angry we almost
came to blows.

Dave was a psychopath, you see. He got fired a few years later for
lying about numerous things and no, Dave isn’t his real name.

Back to the interview: I was so completely caught off guard that I
found myself getting angry. I pushed back and tried to explain what
happened, and my interviewer kept saying, “I don’t know, Dave is a
good friend, it’s his word vs. yours.”

I didn’t get the job.

I called my old employer when I got home and, if you can believe it,
politely said to my old boss: “look, I understand I don’t work there
anymore, but could you please allow me to get another job?” And then
recounted what had happened.

He asked what I was talking about, and I told him, and within 15
minutes I was on a conference call with the company legal team. An
hour later, the legal team from the company I interviewed at was
calling me. Turns out that plenty of laws were broken, and I could
have sued both companies for a lot of money.

I didn’t, of course. I am not sure if I should have — but my point is
this: do not talk to other people about potential candidates. This is incredibly
common, believe it or not! I still get calls from people asking my
thoughts on hiring person X and I shut it down immediately.

Be sure you do the same.

APPROACHING THE INTERVIEW PROCESS


There are many controversial topics in the tech industry, and I would
say this is right there in the top 5. If you’ve been working in this
industry for a while, you know what I’m talking about!

Big companies have extremely impersonal processes, and smaller
companies are all over the place in terms of fairness and consistency.
What I’m trying to get at here is that there’s no way I can offer you a
definitive answer when it comes to the interview process because one
of these will be true:

Your company will already have a process
You’ll copy a process from another company
You’ll do your best and still piss someone off

People don’t like being rejected, and that’s precisely what you’ll be
doing. Telling someone “no” is no fun, to be sure, but it can also get
you into hot water if you’re not careful.

I sense that all I’m doing is flinging horror stories at you, but it’s
essential that you understand just how bad this process can turn out!

No Explanation, Thank You



If you have an HR department, they will likely be handling all of this.
If you use a recruiter, which you should, they’ll also handle this kind
of thing as part of their job. Either way, the following is something
you should know in general.

The first thing to understand is likely the hardest: you don’t owe anyone
a reason why you passed on them. I’ll take it one step further: please don’t
tell them. Sounds cold, doesn’t it? It’s not. The truth is that you need to
protect yourself and your company.

If there’s even a hint of bias or discrimination, you could get sued. As
an employer, you’re not allowed to discriminate based on gender
identity, sexual orientation, race, ethnicity, physical ability, etc. When
someone asks you politely what they could work on for the next round
of interviews, you might feel inclined to help them out, which is very
nice of you.

Consider the following conversation, that happened to a close friend:

Friend: Thank you for your time spent with us. I’m sorry to tell you that it’s a
‘no’. We’re going to move forward with another candidate.

Candidate: Darn, that’s unfortunate. May I ask what went into the decision? I
would like to make sure I study more and apply again in the future. Or perhaps it
wasn’t my skill level?

Look out! Any comment that is not directly related to technical skills
can and likely will get you into trouble. Consider the situation where
the interviewee’s social profile (Twitter, GitHub) was extremely
combative and disrespectful, suggesting they might not be the best fit
for your team. When this happens, the responses tend to be
something like “it just doesn’t seem like a good cultural fit”.

I’ve been told that before and it stings. I was honestly depressed and
angry for about a week. You’re basically being told that you’re not a
nice person, which would make anyone gear up for a fight. And that’s
usually what happens!

So, how did my friend handle this?



Friend: Technical skills are essential, but they’re not the only factor in our
hiring process. We consider your answers to all of our questions very carefully
and your answers were fine, but we decided to move on with someone else because
they fit the role better.

Good response here. Nothing negative is said… unless you’re the
person hearing it. They were just rejected for a job and if they choose
to “read between the lines”, they’ll likely come to the conclusion that
it was the “soft skills” that limited them.

Candidate: It sounds like you’re telling me I’m lacking some personal skills.
Can you elaborate on that? This company doesn’t exactly have the best
reputation either, you know. I’ve been through numerous interviews and you guys
have taken up hours and hours of my time. I think it’s quite rude to reject me
after all this time without something I can work on in the future.

I mean… I can understand how this person felt. I’ve been there. You
go through a loop of 5 or 6 interviews plus a screening call, only to
hear “no thanks”. I will also add that my friend didn’t help themselves
much by using the phrase “consider your answers to all of our
questions”.

On the other hand, the candidate would be proving the point right
here, wouldn’t they? It’s one thing to get a rejection, but it shows a
lack of maturity and professionalism to put an interviewer on blast
like this.

So, how do you get out of this?

Friend: Once again, I do appreciate the time you’ve spent with us. This position
is highly competitive, and we’ve had over 50 applications, yours being one that
rose to the top. I know it’s frustrating to hear a ‘no’, but I would encourage you
to apply once again in a few months. Now that you know the questions we ask,
you should have a leg up.

I’m happy to report that nothing more happened. I don’t know if the
interviewee ever applied again, but they did give a crystal clear
example of why you don’t give reasons for rejection: as much as you
want to be kind, you’ll likely just make someone defensive and
angry.

Or worse. You could start into “cultural fit” and throw open the door
to discriminatory hiring practices, and then you’re in big trouble.

SCREENING
Hiring someone usually involves multiple interviews with multiple
folks on your end. Current members of your team, you, maybe your
manager as well. It’s important that you don’t waste anyone’s time,
especially the candidate’s, so doing a quick screening call is
paramount.

If you’re using a recruiter, they’ll do this for you with your guidance as
to what’s important.

Let’s say you’re looking for a frontend developer with 5 or so years of
experience with React (for instance). HTML and CSS knowledge are
also required, and a willingness to grow into further roles later on
(backend, database, etc.).

Note: when I use the term “recruiter” here, I’m not necessarily talking about the
annoying spam emails you get trying to hook you up with a job. Yes, those are
recruiters too, but often a larger company will have a person whose sole job is to
facilitate hiring. They’re also called recruiters.

If you’re working with a technical recruiter through your company or
one that you’ve contracted with (always a good idea if you can afford
it) they’ll usually know right away how to screen applicants fitting
this description.

You can also use a service like Indeed, Glassdoor, Dice, and so many
more. You can set this process up to be as automated as you like, but
one thing they lack is the personal connection you’ll make when
talking to someone virtually or in person.

An Effective Screening Interview Strategy



You have one question in mind for the screening interview: should we
move forward with a proper interview? To answer this, you need to know:

The technical skills of the candidate
Job history and experience level
Culture fit

I know, I know, but soft skills are critically important! I’ll touch on
“Culture fit” last, as it’s likely the greatest challenge.

To establish technical skills, you’re going to need to hear some
technical details. A screening interview isn’t the place to ask
algorithmic or coding questions, as those take time. Instead, you need
to get into a technical conversation quickly in which you also discuss
work history and, ideally, some soft skills questions.

Everyone has a different way of doing this, but I’ll share what I’ve
done in the past as well as what other companies have done when I’ve
gone to interview.

State clearly your understanding of their experience. It’s on the
resume, which ideally sums up what they know and what they’ve
done. You should be able to tell their level of skill quickly.

Ask about the last project they worked on. You don’t want them to
go into detail or divulge secrets, of course, as that could get them into
trouble. If their last project didn’t involve the skills you need (no
React, for example), ask them about the last project they did using
React.

Probe and Listen. The candidate might be nervous and, in fact, if
they’re junior level (less than 5 years) they definitely will be. If they’re
not, that’s something to consider. Arrogance? Possibly, but they might
also really know their stuff.

Ask for deeper technical explanations as they talk about their project.
You can usually assess whether they’re talking about the project in
general, or their contribution to it. Push to find out the latter. You can
also ask them about a challenging bug they might have solved or, if
they could do it all over again, what approach would they take.

I like to avoid asking opinions about technical things. For instance, a
question like “do you enjoy working with React?” can easily lead to
someone voicing frustrations with the platform or, worse, general
reasons why “React is awesome”. These things don’t tell you about
skill and experience.

I once shared a coding story with a candidate during a screening
interview for a web development position. This was years ago, and it
had to do with the way I was doing data access for a given page. I told
them about a bug I ran into that took me an hour to solve, and then
asked if they had ever hit the same bug. Admittedly, this was an
extremely common mistake in the platform I was using, and I wanted
to see what the candidate said.

They laughed and said “oh, the dreaded xyz issue. Yeah, they don’t
make it obvious, do they? I hit that so many times when I was starting
out that I now have a snippet I use to make sure I avoid the problem.”
I liked this person and I liked the way they handled the question.
Conversational, aware of the problem which meant they had the
experience they said they did, and they communicated their
experience well. And yes, we hired her. Her name is Kelly, and she
ended up being one of our top coders that people went to constantly
for help and advice.

Taking Notes

You want to be certain you come prepared with notes, outlining what
you intend to ask and also a few things to check off (see list above).
Your Journal works well for this, but also be sure to put something in
your company files, wherever you keep them.

It’s a good idea to bullet the important stuff, such as:

6 years working with React. All of it? Next.js? Single quote
issue and how to get around it?

3 jobs in the past 5 years. Looking to stay here? Mention
mentoring program and room to grow.
Diver and student pilot!

A good resume that hasn’t been overly “embellished”, shall we say,
can tell you a lot about someone. You can help drive the interview by
prepping some notes for yourself and filling them in as you go.

Giving Them Time

Screening interviews usually take 30–40 minutes but can go longer if
you’re having fun. I’ve had this happen a few times and it’s refreshing.
Make sure, however, you leave at least 10 minutes for the candidate to
screen you. One thing that drives me nuts when interviewing is when
the company acts as a gatekeeper, allowing you entrance to their
sacred lair only if you’re lucky enough!

Your candidate is also screening you, so be sure you give them time. If
they’re a junior level person, however, they might not have any
questions — so let’s help them out.

“I’m sure you might be wondering what it’s like to work here.” You
can then describe the everyday, some programs they might like, the
financial health of the company (which is important if you’re smaller
— let them know they’ll have a job for at least a few years), the other
team members and what they’re like, and so on.

Always keep in mind that the person you’re talking to could be your
next Kelly, and they’re interviewing you, too.

THE STRUCTURED INTERVIEW


If you’ve interviewed at companies like Amazon or Google, you’ve
been through a “behavioral” or “structured” interview. These are
open-ended interviews with a set of questions that remain constant
over all the candidates. Some companies, like Amazon, will go so far
as to tell you what they’re going to ask you right up front.

The idea is to avoid bias and make the interview process as even as
possible. I like this type of interview because it becomes a bit of a
game — one you can’t cheat at, by the way.

Here’s a typical question from AWS: tell me about a time you challenged
your team and management, putting the customer first. How can you cheat at
this one?

AWS has a set of Leadership Principles that they’re very serious
about. They expect you to know and study them before you interview
because you will be asked a few. They will also grade your answers,
which should be concise and on point. I wrote a post about the
process here if you would like to know more.

Note: I know that working at a big company like Amazon, Microsoft, Google,
etc. is not for everyone. The interview process, however, is fascinating to me,
which is why I’m detailing it here. I think preparing this way will help you for
any interview process.

All the AWS questions start with “tell me about a time that X” where
X is a thing you and your company care about. These are usually non-
technical things, but they can be made technical as long as everyone
you interview can answer them fairly.

Tell me about a time when you faced a scoping bug in your JavaScript code. How
did you identify and solve it? It’s almost inconceivable that someone
would never have faced one of these! If they tell you they’ve never
faced a scoping issue, that, in itself, is something to dig in to.
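If you want a concrete version of this prompt to put in front of a candidate, the classic example looks something like this (a minimal, made-up sketch; function names are mine):

```javascript
// The bug: `var` is function-scoped, so all three arrow functions
// capture the same `i`, which is already 3 by the time any of them run.
function buggyCallbacks() {
  const fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(() => i);
  }
  return fns.map((fn) => fn()); // [3, 3, 3]
}

// The fix: `let` creates a fresh binding for each loop iteration,
// so each closure captures its own copy.
function fixedCallbacks() {
  const fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(() => i);
  }
  return fns.map((fn) => fn()); // [0, 1, 2]
}
```

A candidate who has genuinely hit this will usually name `let` (or wrapping the body in an IIFE) right away, which tells you plenty.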

Tell me about a time when your code broke the build. How did you find out, and
how did you fix it? We’ve all broken the build! If you haven’t, that’s OK!
It could simply mean you were on a team that did things differently.
Some teams encourage you to commit right to the trunk, which we’ll
read about later on. Some teams actually expect you to break the build
and will think you’re not trying hard enough if you don’t.

Tell me about a time when you wrote a routine correctly the first time. How did
you know? If you’re working with a junior person, they might not have
a story like this, which is good to know — it means they’re honest.
Sometimes, however, we write something correctly the very first time,
and it feels like a miracle. I never believe it, however, until it’s fully
covered with tests and yes, we’ll talk about that later too.

Hopefully, you get the idea about a structured interview. Before you do
one, however, make sure that you and your team agree on scoring.
Each of these questions will draw a story out of the candidate that you
need to evaluate. What do you want to hear?

For the React job, you might want to understand what they think
about testing their code, which the second and third questions will
tell us. Communication skills and attention to detail: the second
question should tell us that too.

The ability to solve problems, ask for help, and researching — the first
question tells us this (ideally).

Personally, I think you can come to understand someone’s technical
abilities easily by asking questions like this. That said, there are some
psychopaths out there (see above) that are particularly good at fooling
people.

Code doesn’t lie, so let’s get into the toughest part of this whole
chapter.

CODING QUESTIONS
So here we are, asking someone to write code so we can decide what
they know. Honestly, I don’t know anyone who has ever felt good
about this, mainly because:

The questions are off-topic, algorithmic nonsense that have
nothing to do with the job. Knowing your Big-O is great, but if
we’re still talking about that React job, do they really need to
know Big-O? I’m guessing not.

The questions take too long and are intimidating. I was
asked to create a scoring routine for tic-tac-toe results once. I
thought it through and came up with an answer that was
consistently wrong or incomplete. The right answer had to do
with an array of arrays preloaded with every possible tic-tac-
toe move and whether it was a win…
The questions are just plain stupid. I was once asked how I
would perform a hot image rebuild/container push on n
Docker containers in a “pre-warm” state for deployment (this
was before Kubernetes) with zero downtime. I know Docker,
but I’m not a container/ops person. I was applying for a
completely unrelated position, but I knew enough about
Docker to know that this seemed to be a silly thing to ask. So I
replied, “I wouldn’t. I would ask you to do it” which, to me,
was the right answer, but they didn’t find it funny.
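For what it’s worth, the lookup-table trick behind that tic-tac-toe answer is simple once you see it. Here’s a rough sketch (my reconstruction, not the interviewer’s exact code; the board is a flat array of 9 cells holding "X", "O", or null):

```javascript
// Precompute every winning line of board indexes, then check whether
// any line is fully owned by one player.
const WINNING_LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(board) {
  for (const [a, b, c] of WINNING_LINES) {
    if (board[a] && board[a] === board[b] && board[a] === board[c]) {
      return board[a]; // "X" or "O"
    }
  }
  return null; // no winner yet (or a draw)
}
```

Obvious in hindsight, and exactly the kind of thing that’s miserable to invent under interview pressure.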

We all have our stories, I’m sure you have a few crazy ones too. Sitting
through an interview loop with people asking you to write an
algorithm on the fly that traverses a tree using depth-first search is
extremely annoying. What’s even worse is when someone asks you
something that you truly know, top to bottom, and they tell you that
you’re wrong!

That happened to me with PostgreSQL once. I was asked about the
SQL for an aggregate query that would normally require several joins,
both left and right. A massive pain, but I pulled out a windowing
expression (which is ANSI SQL, by the way) and they didn’t know
what to make of it. Thankfully, they were OK learning something new
that day.

So, how do you gauge the coding skill of a candidate?

The Dreaded Whiteboard (or Google Doc)

I’m sure there are companies out there that still make you write code
on a whiteboard. So, so annoying. They’ll even take a picture so they
can see if it compiles! Google did this to me, and my code did not, in
fact, compile, and I didn’t get the job.

If you choose to do this, you can make the experience so much better
by asking someone to spot a bug and then fix it. You could also ask
them to write some tests to be sure the bug is fixed.

This is just slightly less annoying, but here’s a fun one: turn the code
below into a closure:
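Something as simple as this works as the starting snippet (a made-up example; any small bit of shared state will do):

```javascript
// Before: a counter that leaks its state into the surrounding scope.
let count = 0;
function increment() {
  count += 1;
  return count;
}

// After: the same behavior, but `count` is private to the closure.
function makeCounter() {
  let count = 0; // only reachable through the returned function
  return function () {
    count += 1;
    return count;
  };
}

const counter = makeCounter();
counter(); // 1
counter(); // 2 (and nothing outside can reset it)
```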

Seems silly, I know, but from there you can ask them to turn it into a
revealing module (which they might already have done), a class, or
some other pattern they should know.

Going with the React example, once again, you could ask if this code
will work, and if not, why not?
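Here’s a stand-in for that snippet (a made-up component, not a real one from any project), with the problems called out in comments:

```jsx
// Looks like perfectly good HTML, but two of these lines won't fly in JSX:
function TeamCard() {
  return (
    <div class="card">        {/* JSX wants className, not class */}
      <img src="/team.png">   {/* every tag must close: <img ... /> */}
      <p>Meet the team</p>
    </div>
  );
}
```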

I’m not a React expert by any stretch, and I found out the hard way
that this won’t work because JSX isn’t HTML. I think most React
people would spot this immediately, and if they do, you might ask
how they could expand on this component to add team members,
maybe some company information, and so on.

The point is: coding interview questions can be painful, but they don’t
have to be.

Take Home Tests

I have the Microsoft Ignite 2023 keynote streaming on my other
monitor as I write this, and the entire presentation is about Copilot
and AI. GitHub Universe was last week and there, once again, the
topic was Copilot and AI for coders.

AI is an incredibly valuable tool when used correctly, and we discussed
this at the beginning of the book. I bring it up here because you will
need to be savvy with it if you offer take home interview questions,
which many companies have started doing.

I’ll be honest with you: I hate these. It feels like homework. I know
you have to study for interviews, but that feels different from
taking home an assignment. I would probably do what I normally
do: ask Google and find some suggestions on Stack Overflow. I’m
not a fan of copy/paste as I like to understand what I’m doing, but
that is a reality of what we do as programmers: we research. We
have to. There’s no way you’re going to know how to do
everything.

This is why I find Copilot so valuable. I ask it how to do something
and it tells me. It’s like having Kelly (the developer I mentioned
above) sitting next to you, constantly willing to help you out. I don’t
copy/paste from Copilot, either. I need to write the solution, so I
remember it or at least have it stored somewhere in my memory, and
often the code provided can be improved (variable naming, comments,
and so on).

If you send someone home with an assignment, this is almost
certainly what they’ll be doing. I, for one, don’t mind this at all. As an
interviewer, I’m more interested in someone’s ability to 1) solve the
problem and 2) deliver that solution. How they do it doesn’t
matter to me, as long as it’s not infringing on a copyright or flatly
stealing someone else’s work.

I think a better solution would be to embrace the process and have
them do the assignment right in front of you and instead of looking
for a working solution, see what their working process is. You can move
from there to assuming the code is working and then ask what they
would do next. That, to me, is the more critical aspect.

Some things I would look for out of an interview like this:

How do they know the code is “correct”?
How would they “ship” their code to the project?
How do they know their code is addressing the problem
directly?

Coders love to jump in and start coding, solving a problem using a
“greedy” algorithmic approach: let’s see if we can get this code to compile
and then give us a correct result, and then I’ll write a few tests for some edge
cases and then commit it to Git and push to GitHub and then open a PR. We’ll
get into the GitHub process stuff later.

The funny thing is that this is precisely how I do things when I’m
working on a solo project. I love exploring and trying new ideas as you
never know when something will hit you, and you’ll be inspired. That
doesn’t work in a group setting.

If I was interviewing an experienced programmer, I would expect them
to follow some kind of process, or at least mention it at some point.
Something like:

I normally like to work against an issue, and I assume we have
one for a given milestone? (Yes, here’s the Issue).
My next step would be to fork the repository and open a PR
immediately, referencing the issue, so I know what’s going on,
and then probably popping in a few checkboxes to keep track
of things.
I’m not strictly TDD (again, coming later) but I think covering
the best/worst case is a good idea. With at least 3-5 error
cases.
Do some research, write the code, make sure the tests pass (I
do both unit and spec) and commit with comments for the PR.
Once things look good, I’ll commit the PR using a name for
my repo that makes sense. My typical format is
username/project-issue, but I’ll use whatever format you
have.

This, to me, is 10 times more valuable than reversing a linked list. By
the way: I wouldn’t expect people with less than 5 years experience to
know these steps in exactly the same way, but I would want them to
follow the basic theme of knowing what to do before you try to do it.

Boring!

The risk you run with a take home assignment is that people will find
it boring, and you’ll lose your candidate who might otherwise be a
great fit for your company. This happened to me a few years back: I
was given a React project (I’m more of a Vue person but know React
well enough to get things done and was clear about this) and I needed
to … honestly I can’t even remember… and about 30 minutes in I just
threw my hands up and thought “this isn’t worth it”.

The problem was basic enough, I suppose, but I was missing context
and I didn’t know how to go the extra mile, which is what I normally
like to do. This might sound like chest-thumping, but I’ve made a
career out of going well beyond what’s expected (Mediocrity is Toxic,
after all), over-delivering and basically kicking ass.

Anyway: I was forced to be mediocre doing this and I tried to get over it
but… I just couldn’t. I suppose it’s perfectionism, but not really. I don’t
like contrived nonsense, is the problem, and that’s precisely what this was.

I wrote back and asked if there was something else I could do and
tried to explain the problem, which, of course, didn’t work. They
would be making an exception for me, and that wasn’t fair to other
candidates. I thanked them for their time and that was that.

You’re Asking Too Much

The last thing I’ll add here is that if you do decide to go with a
take-home test, be sure the candidate can do it in a reasonable amount of
time and make it clear what that timeframe is. Even then, they’re
probably going to spend twice as much time on it which, let’s face it,
is an imposition. It’s also stressful.

A company I interviewed with a few years back wanted to see my
presentation skills and I offered a list of talks I had given over the last
few years, but they had a specific talk and format in mind. This
position was Director-level, so I could see why they wanted this
from me.

They suggested a topic based on my resume and asked for a 10-minute talk, which I would present to the team during the next call.

I almost immediately said no, but again, this was a job I really wanted.
Why did I almost say no? Because:

I have a solid history of talks they could review and ask questions about.
A 10-minute talk is actually very difficult to give (distillation, getting your point across, and delivering value in a very, very short time frame).
I don’t do bullet points. I put a TON of effort into every talk, and I rehearse it at least 20 times (seriously) before giving it. I’m not a natural speaker, I need to practice, and I find the repetition drives you to go deeper, delivering better information.

I knew this talk was going to take me 4–6 hours at least. Sure, I could probably phone it in and do just fine, but that’s not who I am. I care a lot about the content I create and the people I’m presenting it to. I try to take it as far as I can, no matter the circumstances or context.

I did the presentation, which went over well, and I had a few
comments similar to “well that certainly exceeded our expectations. I
didn’t know about X and Y, thanks for that”. That’s why I care so
much — that reaction is wonderful. The fun part is that I didn’t know
X and Y either until I put the talk together!

The point: it might be convenient for you to do take-home tests, but they can be imposing, and not nearly as fun and effective as the next approach.

Pairing Sessions

I haven’t been through one of these before. Well, not intentionally, that is. Years ago I read that a solid technique for getting through a coding interview was to involve the interviewer as much as possible, because they’re problem-solvers like you are and often can’t resist helping. As long as you make it clear you have a good idea how to solve the problem, you can lean on the hope that your interviewer is human and that they want you to succeed.

Pairing sessions are, of course, much different than that. If you’ve never pair coded, it can be quite fun if the person you’re pairing with has the time and energy.

It’s a simple process: one person is the coder, or “driver”, the other is
the “navigator” who watches what’s going on and catches bugs,
suggests changes and direction, and offers basic feedback. That’s
usually a senior person but doesn’t have to be.

In practice, which we’ll talk about in a later chapter, you can get a lot
done in a shorter time. And by “a lot” I mean code that’s much closer
to being ready to ship. In an interview setting, however, the social
element is missing because you’ve likely just met the person, and this
is where things get interesting.

The most common setup for a pair coding interview is for the
interviewer, you, to be the navigator and the candidate to be the coder.
You have a problem you need to solve, hopefully a short one, and you
do it together. This is barely a step above a typical coding problem
where the candidate asks a load of questions. It can be less stressful in
some ways (the candidate doesn’t feel like they’re in the spotlight so
much) but it can also be more stressful for the candidate because:

The skill gap. Pair coding is typically done with a peer, not a
lead, and you can easily end up pushing someone to a solution
that you think is correct, not allowing them to be creative. If
you have a lot of experience, this will be stressful for you to
watch, and that will come through in the session.
You don’t get along when coding. They might not start with
tests, and you might be a TDD fan. They might make a joke
about your age, or some quip about being a zealot — both of
which they likely won’t mean as insults. Tabs vs. spaces, Vim
or Emacs instead of Your Favorite Editor. Missing semicolons
and, worst of all, a light editor theme!
They know you know the answer. That last is tough and, I’ll
be honest, it would stress me out as well as make me feel like
a dancing puppet.

One way to address this is to change places — you code, they navigate. Sounds a bit backwards, but you could purposely do things wrong and see if they catch you. I know that sounds manipulative, but this entire process is manipulative, isn’t it?

PROMOTING THE STARS

This is one of the best parts of your job: rewarding good work and watching someone’s career take off, knowing you played a small part. You might hear this sentiment from one of your managers someday: “my goal as a manager is to find my replacement”, and I like the sentiment. As a lead, your goal should be to surround yourself with extremely capable people, or as some friends have put it: “people much smarter than me”.

Again, a good sentiment, but one that can ring insincere at times. You are the lead of your group, and you set the tone, the pace, and the bar. If you find someone more capable than you are, that’s great! They still need to know you’re the boss, however, and passive-aggressive, self-effacing “humor” should be kept to an absolute minimum.

You will indeed find a superstar outlier at some point, who will blow past everyone in your group, past you, and probably past your boss. I know a few of these people, and they’re truly superb. Your goal is to help them and then get out of their way.

Usually, however, you’ll be playing coach: sometimes barking, sometimes pulling back, and sometimes celebrating. There is a lot of psychology that goes into being a good manager, and you cannot shy away from the people skills; they’re essential. Often, the only thing you need to do is listen, validate, unstick, and motivate.

Above all: be real.

Common Processes

It doesn’t matter the size of your company: you’re going to need a transparent promotion process, and it revolves around you and your reports checking in regularly.

Here’s a general process you can adapt as you need:

Have a clear set of goals that your report can hit.
Review progress on those goals (and other things) at least once a month.
Offer structured training and mentorship. You want them to expand what they know so they can grow into whom they’ll become, with your help.
Promote from within! Nothing kills morale like hiring externally when there are qualified people internally.
Offer a bonus and a pay increase every year, and base it on performance.

Above all: be transparent about all of this. Your report should know where
they stand, and what they can expect from you.

All of that said, sometimes a promotion just can’t happen. The reasons are called “discretionary factors”, and this is where the problems live.

Discretionary Factors

Hitting goals is one thing, but sometimes a promotion can’t happen “just because”. I’ve been on both sides of this, and it’s not a fun conversation to have. Often, it’s not enough to check boxes and grow your skills. Other things need to align, which include:

Budget. Promotions come with a pay raise, and if the company can’t afford it, there will be no promotion. You might lose a good report if you can’t pay them, but typically there are ways to “find the money” to offer at least some amount of compensation.
Room. If your team is full of seniors, do you have room for another? This is something that you can fight for as a lead. Make room for someone who deserves it.
Diversity issues. This is an extremely volatile topic, of course, but sometimes ensuring that a team is diverse requires being proactive and denying someone else’s privilege. It happens, and it’s not a fun conversation to have. Be honest about it, or, at least, as honest as you can. This does get into legal territory, however, so be careful.
Bad people in upper management. Every company has them — the people who go out of their way to exact petty revenge by denying promotions or raises. If someone is being blocked, push back and demand to know why. If you don’t get a solid answer, go over their head and fight for your report to have a fair chance.

That last one is tough. If your report is being blocked and there’s
nothing you can do about it, it’s only a matter of time until you’re
toyed with as well. The only thing you can do is to try to get this bad
person’s attitude changed, or walk in protest.

So how do you approach the “discretionary factors” with your report? As with everything: be honest. Impressions matter, as do politics. If there’s a weird vibe between your report and someone on the management team, urge your report to initiate some kind of conversation to smooth things over. Let them handle it themselves, rather than you playing mediator.

If there’s no room, or budget is an issue, your report needs to know this as soon as possible. If there is room, be sure they know that too! That can be extremely motivating, knowing that a promotion is there, waiting for them, if they can show the impact outlined in the promotion package.

The Promotion Package

One way to prepare someone for a try at a promotion is to put together a “package”. At larger companies, this is a formalized thing where you need endorsements from people across the company, along with a few showcase projects that highlight your efforts.

If your company doesn’t do this, start the practice yourself! Promoting someone based on a subjective review automatically involves your bias (and those of your managers). We’re human and bias is built in, so it’s always a good idea to formalize a promotion in the same way a structured interview offers (basically) the same chance to everyone.

Here are a few ideas:

Three written recommendations from skip-level managers (2 levels above your report).
Three projects which clearly show a high level of impact. GitHub issues and PRs are good for this.
Two examples of initiative and leadership within your group.

Yes, these are subjective as well and, of course, might need to be altered based on your company size, and so on. The goal is to have “something to shoot for”, a performance target that your report can think about if they want to move up.

Let’s see what some bigger companies do.

Microsoft’s Promotion Process

Microsoft has a formal “check in” process called the “Connect”, which
is a bit like a career journal. You document your goals, the impact
you’ve had, and the things where you think you could improve. You
also have a chance to discuss your commitment to diversity and
inclusion.

Your lead looks over the Connect and offers their thoughts as well,
and you might get a chance to redo parts of it before it becomes a
matter of record.

Your Connect is used to gauge readiness for promotion, and also your
rewards.

At Microsoft, it’s all about “impact” on the business and customers: what did you do to make a difference? Your manager might ask you to help document this as part of a promotion package, which typically includes endorsements from higher-level people you’ve worked with (typically 3), and a few other factors dependent on level.

While there’s no formalized training process, there are many resources you can ask for throughout the year. You document these on your Connect to show your commitment to growth.

All of this said, if there’s no room or budget for your promotion, it won’t happen. People stay at Microsoft for long periods of time, which can hinder upward growth. It’s common to switch groups every 3 or so years to avoid this problem.

Google’s Promotion Process

Like most larger companies, Google has a formalized promotion process that involves creating a package, which includes:

Endorsements from your peers, based on your work with them.
Performance reviews with your manager, which happen (typically) twice a year.
Your readiness for a move up, which is reflected in your training and what you’ve done over the year.

This package is reviewed by a committee of managers and senior people to help avoid favoritism and suppression.

Like every other company: if there’s no room or money to move up, a promotion won’t happen.

Make This Process Your Own

Promoting people is necessary if you want to keep the good ones, and they need to know what you expect from them, as we’ve discussed. You don’t need to use a grand, formal process, but you do need to document what factors went into the promotion, or why it was denied.

This is a tough discussion to have, especially if the reason is “we’re under a hiring freeze” or “there’s no budget” when you work at a trillion-dollar company. Those are tough words to hear, and you, as the manager, are going to bear the brunt of the frustration.

Conversely, if you help your report move up, you’ll be thanked endlessly (hopefully). Sometimes this means you will need to put up a stink, including:

Pushing to promote from within, rather than hire externally.
Insisting on making room for just one more senior role.
Advocating for someone who has been passed over one too many times previously.

Don’t take no for an answer if your report is passed over when they deserve a raise or level bump. This is the main reason companies lose good people!

CUTTING BAD APPLES

I have a theory that Bad Apples (people on your team who are toxic and drag everyone down — the unhappy and unlucky from our Soft Skills chapter) aren’t born, they’re made. The work you do and how you handle yourself is guided by so many factors, including:

Your sense of value and impact. When people get put in a position they don’t do well in, they’re likely to self-destruct and sabotage what they do rather than speak up.
Interpersonal conflicts. Cultural, personal, age — so many factors can add friction between two otherwise wonderful people. I worked with a person from Paris, and we got along well. Another person from Belgium joined us, and things took a weird turn. I learned that France and Belgium have a funny/not-funny thing, and more than once the not-funny stuff happened.
A lack of recognition. This is a deep one, psychologically, and hits on the need for validation. A vast topic that usually reaches into childhood, a person’s need for validation (or lack of need) can cause all kinds of behavioral issues (as discussed above). These might not pop up until someone comes along and reminds a colleague of their oldest brother and…

You, as the lead, are the one who sets the tone for your group: the way you communicate, the way you carry yourself, the words you use and (more importantly) the words you don’t use. We’ll get into communication styles later; for now, the thing you need to know is that you are the primary guard against the Bad Apple.

Our professional situations will no doubt differ, but I’ll share with you
an approach that worked well for me when I was a team lead and
CTO. It’s an extension of the idea above, that I set the tone for the
team, but it goes even further into team dynamics and defusing the
Bad Apple Bomb.

Expectations. Set Them, Write Them Down, Review Them.

I learned this process from one of my very first managers during my geologist days. His name was Mike, and the first thing we did on day one was to go over a list of expectations for my role.

He flatly told me that bonuses and raises are based on these expectations and, most importantly, that the expectations can change based on workload, personal issues, and so on. If you’re in a large company, your HR department will probably have a formalized process for this, which is great, but you will be the one to provide recommendations for compensation and advancement. The only way you can do that is to have a scorecard, and it must be transparent.

Another person, Bill, who worked for me, was difficult to deal with.
He joined my team as a transfer from another department (IT) and I
was warned that he could be “challenging”. Bill had a history of
coming in late and missing deadlines, and also making quips during
meetings that would bring the mood down. He thought they were
funny, but soon enough most people ignored him, and he requested a
change of scene.

I sat with Bill on day one, remembering what Mike did for me many
years before. I asked Bill what he thought success looked like, and his
answer didn’t surprise me:

Me: what would success look and feel like for you in this role?

Bill: I guess that I just do what you tell me? Is there something else I need to
know?

Me: I’m not going to tell you what to do, Bill. That’s not how I work.

Bill: OK… (smirk, laugh)

So that didn’t start off well, and I’ll be honest here that conversations
like this are frustrating to me. Bill clearly was burned out and likely
looking for another job. He was also extremely talented, and I felt that
with the right guardrails in place, he could do well.

I waited for a minute before responding, looking at something on my screen to dial back the frustration. I needed to defuse the moment, and it wasn’t easy. Bill was, in fact, challenging and extremely condescending. When people act like this, it means they’re defending themselves. They see you as a threat of some kind and will do anything to keep from being vulnerable in the moment.

It’s difficult to pause a conversation for a whole minute, but I did. I then leaned forward on my desk, giving Bill, who was sitting on the other side, all of my attention. I regarded him for a moment (which means I didn’t stare at him, just looked at him with a neutral expression… I hope):

Me: you do want to work here, right? In my group?

Bill: well yeah, that’s why I’m sitting here.

Me: OK, what is it that makes you want to work in my group?

Bill: I don’t know… I guess the project with the X, Y and Z stuff is pretty cool. I
was talking to Kelly about it the other day, and she mentioned you just shipped B
functionality, which sounds kind of cool.

Me: yeah, it was fun! I had Kelly study up on B, and she took some online
courses that we paid for so she could be ready for it. She learned enough to ship,
which is great.

Bill: I didn’t know you were into B…


Me: I’m not, Kelly is. She recommended it and came up with a plan that
included training, so we did it.

Bill: that sounds fun. Would you be into using C and maybe D too? I read about
those…

I need to substitute names here, for obvious reasons, and I hope it’s
not too confusing. Bill was a classic case of feeling like he was
“working for the man” and had no autonomy or authority. I made it
clear to him that his future was his own with me, but I wasn’t going
to tolerate BS (that’s an acronym, by the way).

GIVING FEEDBACK

This is where things become extremely tricky: how do you give someone direct, meaningful feedback without making them defensive? I love the way Sara Drasner explains this:

How do you get around this? Asking helps. I’ve started doing an
exercise with my team where I ask the group as a whole how they
would like to get feedback. Not only does it open up ideas, but it
also helps that each individual has to think for themselves how
they prefer to receive feedback. Normalizing this type of
vulnerability and self-reflection can help us all feel like partners,
instead of some top-down edict.

Sara’s article is all about mistakes she’s made as a manager, and it’s a
wonderful read. Sara recognizes something that takes a while to learn:
people will offer feedback in a much different way than they prefer to
receive it. This can be for personal, cultural, or social reasons, and we should never assume that, just because a person is direct with you, they want you to be direct with them. Matching styles rarely works!

I took over a team at a big company once and within the first 4 days
one of my team members emailed me detailing precisely what they
expected from me: meeting cadence, how they offer feedback and how
they expect to receive it. Pretty extraordinary, the definition of
managing up.

Many years ago, I offered a suggestion rather directly to a person with 6 years of experience, and he exploded at me, clearly under extreme stress. I sent the feedback in an email (mistake one) and it was pretty terse. I wasn’t happy with the work being done, so I didn’t take the time to care about how hard the feedback landed. That was a mistake.

The person stood up from his chair and let me have it. We had an open office space that was now filled with F-bombs and reasons why I was a horrible manager: if I knew the answer, why didn’t I just tell him what to do instead of belittling him? He then yelled that I needed to learn how to relate to people, which I found ironic, but I knew he was right. It was scarring.

This person clearly had enough and no, he didn’t get fired, but he and
I went out for a beer that night and patched things up personally. I
didn’t realize how hard I had been pushing him, and he bottled up his
rage. We mutually agreed that working together was a bad idea and
since it was my company, he agreed to leave with a severance. I want
to be clear that he wasn’t fired or pushed out the door — it was his
choice. We simply did not get along.

There was a lot I could have done better, to be sure, and for months I
thought that I was a crap manager and the Bad Apple myself. I still
think about what happened, it made a massive impact on me. This
was one of my first times managing, and I was not paying attention to
this person’s needs.

Back to Bill’s story now. I had learned from getting yelled at, and
realized that I needed to be sure that expectations were clearly set,
and that Bill understood what it took to succeed and kick ass.

Fast-forwarding our conversation:



Bill: this actually sounds fun. What do you need me to do to get rolling with C
and D?

Me: that’s for you to define. We need to ship (thing) by the end of (month)
which means that we need to hit milestones (names). I’ll forward you the details
and the specs — see if that’s something that works for C and D. I’m happy to
help at any time, but I’m looking for you to drive this. If you can hit these dates
with some “reasonable” code, that’s all I ask.

Bill: define “reasonable” code.

Me: no one does things perfectly the first time, and I don’t expect that. The term “reasonable” means that you’ve written tests for the common scenarios, it’s commented, and the variable names make sense. It’s readable, so when we do go in to debug something, we know what’s happening.

Bill: who’s “we”?

Me: the people who might be owning this code after you get promoted to lead.

Bill: you going somewhere?

Me: hopefully. If I do my job right, one of you will be sitting here at some point
in the future.

I’m obviously paraphrasing this conversation, but the gist of it is all there. I had to repeat it a few times, but Bill soon began to understand that his career was in his hands, and it inspired him.

Over the following months I met with Bill on a routine basis, and we
went over his “plan”, which was an informal document with
summaries and bullet points. He fell behind in a few areas, which was
expected, and excelled in others. Change was welcome, so we added
and tweaked as we needed to, making sure to keep the original in
place.

I want to wrap this up with a bow and say that Bill was one of my best
coders and went on to lead a team of his own, but that didn’t happen.
He learned “C and D” and was hired by another company to run a
small team using these tools. A great opportunity for him, too.

It happens. It’s the tech industry. As much as I came to enjoy working with Bill, I will always maintain a clear definition of roles.

FRIEND VS. BOSS

You will develop social relationships with your team, and that’s a good thing! We’re human, we like connection. The problem is when you need to do boss things, and you find yourself feeling like a friend instead.

I feel that this much is obvious: you need to maintain your position as lead.
You’re not there to make friends, necessarily, you’re there to guide
the ship and help your team kick ass. It’s not really that simple,
though.

People move around a lot in the tech industry, and it’s extremely
common to develop a solid network of peers that tend to hire each
other — your tribe. “I’m here because so-and-so recommended me for
the job” is the norm. What’s also the norm is that someone may work
for you and love you, and then hire you away a few years later so you
could come work for them at a different company.

I say all of this to underscore how important it is to balance the social with the professional. Empathy and kindness go a long way, but clarity, vision, and direct communication are even better. Your position as lead will occasionally be at odds with your status as “friend”, and that’s OK. The best thing you can do for your friend is help them keep their job and get a bigger bonus!

FIRING PEOPLE

I’ve had to do this 7 times in my career, and it made me physically ill. The first time I did it, I was an absolute wreck and handled it very poorly. When the person came into the meeting, they had no idea they were about to be fired, and I just blurted it out because I couldn’t handle the interaction: “I hate saying this to you right now, but I didn’t make this decision, it came from above me, we have to let you go…”

As I write this, I am reliving the shame of that moment, especially the way I handled it. I blamed someone else, and I tried to comfort the person as they teared up and practically begged for a way they could stay and improve.

It really hurts thinking back on that.

The other times were still horrible, but I built up a little armor and
lobbied for a better severance for the person each time. As a manager,
you should have the ability to fight for a severance for a person who’s
being let go for performance reasons. If they’re being fired “for cause”
(an incident that can harm the company or people within), no
severance is required.

There’s No Easy Way

I like the scene in Moneyball with Brad Pitt and Jonah Hill, where they
discuss how Brand (Jonah Hill) would cut someone from the team.
Brand doesn’t like the idea, but Beane (Brad Pitt) is pushing him,
telling him it’s part of the job. Brand begins a drawn-out, “you’ve been a huge part of this team, but sometimes you have to make decisions that are best for the team…”.

At this point, Beane plays the role of the player being cut, working on
Brand’s emotions, telling him he just bought a house, his kids have
made friends, and Brand becomes very uncomfortable.

What the hell are you talking about? … They’re professional ball players, just be straight with them. No fluff, just facts.

This seems cold, but when you bumble your way through firing someone, it can make everything worse. We work in one of the richest industries in the world. There are always jobs to be found, and if someone isn’t cutting it, or they’re bringing everyone else down (like Bill, above), they have to go.

Some countries and US states have labor laws that require steps to be
performed before you fire someone, even if the employment contract
says employment is “at will”. These are typically good things to do
anyway:

Have a written trail for the cause of termination. This is usually a writeup of some kind, or a performance improvement plan (PIP).
If you have an HR department, be sure to consult them first. They exist to protect and defend the company, so use them.
Make sure the employee knows how they need to improve, and make sure they understand the consequences if they don’t.

If all of that is done, the conversation becomes simpler: “it’s time to end your employment here, as this role isn’t working out for either of us. We have a severance for you which includes (provide details) and wish you all the best. This decision is final, and I’ll need your badge and other resources. Please have your belongings cleared when you leave.”

There is no ‘good’ way to deliver this news, so be direct. The more direct you
are, the less backlash you’ll receive, but you should be ready for it. It’s
best to expect the worst, and if that happens, you need to stay calm
and remember that you’re not there to console, comfort, validate, or
sympathize. This sounds extremely cold, I know, but if you do this
you’re going to make it worse.

You are the gray rock:

Them: “You’re seriously firing me? What the actual F@@K? Tell me
why, right now, tell me why”.

You: “You have 30 minutes to clear your desk, and please lower your
voice.”

Them: “Lower my voice? That’s all you have to -“

You: “If you need help clearing your desk, I can have a security person
help you.”

Them: “Oh my god, I knew you were an @sshole but-“

You: gesture to the door

Them: “Fine. This place sucks”.

It seems so inhuman, but at the same time, do not let yourself be abused.
This is work, after all, and there are other jobs out there. Most of us
have been laid off or fired at some point.

Let’s try another scenario:

Them: “Oh no. No no no no no. This can’t be happening! My youngest was just diagnosed with diabetes and my partner was laid off too… please, there has to be a way to make this work out.”

You: “The decision is final, and you’ve already been locked out of our
network and security system.”

Them: “How can you be so cold! I trusted you! This-“

You: “If you need help with your belongings, I can have security come
and help you.”

Them: “I feel sorry for you”

You: gesture to the door

You’ll have one of these in your career, and it’s never easy. There’s also
no good time, either. Please trust me when I tell you that
commiserating, trying to be a friend, “letting them down gently” —
none of these work. Doing this might make you feel better, but it will
only offer false hope to the person getting fired and, if you say too
much, could land you in legal trouble.

Note: believe me, I know this sounds heartless and inhumane. If you do consider
this person a friend, offer to talk to them after work, or take them out for a beer.
Even then, your temptation to tell them more than they should know could end
up getting you fired, too. There’s no good way out of this.

Firing via Email

Large companies, like Google, have begun firing people via email. This is usually during a mass layoff, and it makes sense in a twisted, corporate way: we have to fire 12,000 people, so we need to automate the message.

You can even find termination email templates online!

On one hand, I hate this. On the other, it’s clean: I’m not humiliated in front of others, and I don’t have to see my boss’s face, which would make everything worse. For the company, this makes sense too, because they control what’s being said and don’t open themselves to lawsuits.

I don’t have advice for you on this, apart from…

This is a Big Industry With a Long Memory

Always remember: the person sitting across from you might end up as your boss someday, years down the line. Or worse, they might be the one interviewing you for your next job. This might sound far-fetched; believe me, it’s not!

Sending an email as a termination notice is cowardly, and it will be remembered.

The same goes for “managing someone out”, which is an alternative to firing that many big companies use, because firing people costs a lot of money in severance and lawsuits. Managing out is easy: don’t give a promotion, isolate them, don’t give a bonus, and put them on a project they hate.

This is a risky move, especially in this industry, as it’s an easy way to make an enemy. A wounded enemy will often make it their mission to destroy you.

RESOURCES

I could easily spend the next year writing a comprehensive book on management, hiring, evaluation, leadership skills, and more. What I would rather do is get down to specifics in the next chapters: the tools, how to use them, and what to think about.

To that end, I’m going to offer you some resources that I have found
extremely valuable! These aren’t books on management (aside from
Sarah’s which is outstanding), but books on how to improve yourself.
Management is a very personal thing, and my style and perspective
won’t necessarily align with yours.

Mastering interpersonal communication means being in tune with
yourself and understanding your motivations.

Sarah Drasner’s blog and book on managing software
engineers.
How to Win Friends and Influence People by Dale Carnegie.
The standard for inspiring others and acting with integrity and
professionalism. A bit dated, but hopefully, you can overlook
that and hear what’s being said.
The Tools by Barry Michels and Phil Stutz. This book changed
my life. Therapy is great, but what’s even better is to actually be
able to do something to improve yourself as a person. I like
uncovering the things that drive my thoughts, but what I like
even more is to have the tools to improve myself!

In the next chapter we’ll meet your team and your project, as we have
to have something to build, don’t we?
FOUR
SIMPLE TOOLS FOR MANAGING
PROJECTS
A LOOK AT TOOLS AND SERVICES FOR
HANDLING THE DETAILS

The emphasis for this chapter is on the word: simple. The
principles I’m going to discuss will scale to any team, but the
tools needed to manage that team will vary depending on
your company.

I’ll start off with GitHub and its new management features, then I’ll
move on to Basecamp, Notion, Airtable, and Trello — other services
that are very popular for managing things.

I’m not going to do a deep dive on the services and how they work,
nor am I going to spend much time in this section on how you should
use them — otherwise this chapter would be massive. Instead, I’ll be
revisiting how to track and manage things throughout the book
which, to me, is more organic and natural.

For now, let’s get to know the tools available to us. I should add here
that yes, of course there are so many more out there (like Retool), but
I find these the simplest and most powerful. If you start here, you’ll do
just fine.

Note: I’ll be discussing features of GitHub below, specifically Issues and Pull
Requests (PRs). If you don’t know what these things are or how they work, fear
not, we’ll explore them in a later chapter.

GITHUB
Programmers think of GitHub mostly in terms of centralized source
control. It is that, but there is much more to it.

With GitHub, you can:

Manage issues
Run code reviews and automated checks
Define a process for code contribution from your team or from
the public
Manage discussions
Create documentation
See the entire development health of your project at a single
glance

It’s a powerful site, and we’ll get into the details of how to use it to
manage the development process in the chapter on Project
Management Basics. For now, let’s focus on the tooling bits.

The GitHub Setup

When you first set up your repository, it will look something like this:

Note that I’ve created an organization inside GitHub called red-4. This
is useful for businesses to use to segregate work, or for individuals to
use if they’re self-employed or have an open-source project.

The first thing you’ll want to do is to head to Settings -> Features
and be certain you have the following checked:

The wiki is where our growing documentation will go. Issues, of
course, are where we’re going to create and assign programming tasks.
What we’re most concerned with from the start are Projects and
Discussions.

Projects

In the last year, GitHub has added a feature to their site that allows
project leads like you to manage tasks and timelines. As far as
management tools go, it’s basic — but that’s OK, as we can change
things whenever we feel like it.

If you click on “Projects” in the menu for your repository, this is what
you’ll see as of the writing of this book in 2024:

Unfortunately, this is a little confusing. Originally, you could have one
or more projects per repository that focused on a particular effort. For
instance: UI design might be a completely separate effort from
development, so you could create a project for it:

You can choose your project template if you like, using automations
built into the GitHub workflow. These are useful, but a “Basic
Kanban” is all we need, and it looks like this:

This is called a Kanban Board, which consists of cards moved from one
column to the next. Each card represents a singular task, and the
columns represent the state of that task. I have the default column
names in place (To do, In Progress and Done) but you can change
them to be whatever you want.

What Is Kanban?

If you already know, skip on ahead. If you don’t, Kanban is a flavor of
Agile that brings in processes developed in the automotive industry —
specifically Toyota’s factories in Japan. It might seem weird, and if I’m
honest, it is, but it’s also great for helping keep your focus. We’ll
devote an entire chapter to Agile and discuss Kanban as well as its
cousin, Lean.

For now, know that a Kanban board is supposed to represent a cork
board full of cue cards. The idea is that the visual representation of
cards and columns is much easier to understand than a spreadsheet
full of numbers:

Kanban (English: signboard) started as a visual scheduling
system, part of the Toyota production system. A few decades later
(2007), David Anderson further developed the Kanban method's
idea and introduced the Kanban board. Indeed, Darren Davis
(Anderson’s colleague) was the one who suggested that the
workflow should be visualized on a whiteboard. This is how the
Kanban board was born, as we know it today, to become one of
the most useful agile project management tools for knowledge
work. Nowadays, its usage by Agile teams is so widespread that
you can often hear people refer to Kanban boards as agile task
boards.

There’s nothing to learning them, but keeping them up to date is
work. This is why automated boards are so useful, which GitHub
supports, as do the other services we’ll be looking at.
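If it helps to see the idea in code, here’s a minimal sketch of a Kanban board in Python. The column names are GitHub’s defaults; the card title is made up:

```python
# A tiny model of a Kanban board: columns are named lists, and "moving"
# a card just means removing it from one column and appending it to another.
class KanbanBoard:
    def __init__(self, columns):
        self.columns = {name: [] for name in columns}

    def add_card(self, column, title):
        self.columns[column].append(title)

    def move_card(self, title, src, dest):
        self.columns[src].remove(title)
        self.columns[dest].append(title)

# The default GitHub column names.
board = KanbanBoard(["To do", "In Progress", "Done"])
board.add_card("To do", "Hire a designer")
board.move_card("Hire a designer", "To do", "In Progress")
```

That’s really all a board is; the value comes from keeping it current, which is why automation matters so much.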

The New GitHub Project

In early 2022, GitHub rolled out their new version of Projects. The
main differences are:

More ways to view and filter project information. There are
boards, but the default view is a spreadsheet.
No longer tied to a repository. Many projects have multiple
repositories, but only need a single place to track things.
Other projects use a “mono repo” approach (which we’ll
discuss later) and might need multiple projects to deal with it.

As you can see, there are a lot more features here:



Different views and filters help leads like you better understand what’s
going on.

To get started, don’t overthink things. This project is devoted to the
LMS design, so use stream of consciousness and let it fly:

These are the things I immediately think of when it comes to a new
project and UI design. The titles themselves will likely change; for
instance, “Hire a designer” might become “Find out if we can hire a
designer”. It’s critical to dump all the thoughts in your head right
from the start.

Here’s what the Kanban view looks like:

Note that you can change the column names too. Here I’ve added a
column “Needs Approval” as it fits what I’m doing right now. Later
on, however, we should be sailing through the process and I might
need “Needs Review” when we produce things that others need
to see.

Finally, while rudimentary, GitHub Projects has workflow automations
that you can set up as you need:

Right now, this is mostly setting statuses, which moves cards around
the board. On the right side, there are changes coming soon which
will allow you to automate things in more detail.

Cons of GitHub Projects

GitHub’s Projects are by far the easiest when it comes to automating
input and task updates. Your developers will work against open issues,
filling out PRs when their code is ready for review and eventual merge.
This process is well-defined in terms of programming, and GitHub
Projects works perfectly with it.

The only downside here is that there are no reports to speak of as of
this writing. You can screenshot the board, which is usually enough,
and you can create different views filtered on things like Title,
Assignee, and Label. You can turn a task into an issue with the click of
a button and send it to any repository in the organization, which is a
cool feature.

Unfortunately, these tasks are missing one crucial thing:

There’s no built-in due date. You can use Milestones for this, which
do have a due date, or you can create your own new field with a date
data type, which allows you to see an actual date next to the task.
Unfortunately, you can’t filter on these with something like < today.
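If you export your tasks (or pull them via the API), the < today filter is easy to do yourself. A quick sketch in Python (the task list and field names here are invented for illustration):

```python
# Filter tasks whose custom due-date field is earlier than a given day --
# the comparison GitHub Projects won't do for you yet.
from datetime import date

tasks = [
    {"title": "Hire a designer", "due": date(2024, 1, 15)},
    {"title": "Wireframe the LMS", "due": date(2099, 6, 1)},
]

def overdue(tasks, today):
    return [t["title"] for t in tasks if t["due"] < today]

print(overdue(tasks, date(2024, 3, 1)))  # → ['Hire a designer']
```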

There is one other data type which I find interesting: the iteration. This
is a timespan selection that defines … well, iterations, I suppose:

What’s an “iteration” you ask? This is where we get into technical
project management (specifically Agile) and the idea of Epics, Stories,
Sprints, and Iterations. It’s nice that these are here and in a later chapter
we’ll fill things out a bit more, but in my mind they cause confusion
with Milestones, and I’d rather stick with those.

So, what it comes down to is this: super tight automation and ease of
information management vs. flexible reporting. For me, a snapshot of
the current tasks queued, in progress and completed under a given
milestone would suffice.

BASECAMP
I loved this site back in the mid to late 2000s. No one had ever seen
anything like it: friendly, obvious, and human. It’s still that way, and
many organizations choose to use it over GitHub because its focus is
primarily on team collaboration and communication.

GitHub is great for issue tracking, basic discussions, and reporting,
but you need to have some form of collaboration app on top of it.
Slack and Discord are popular choices, and many companies are
moving to Microsoft Teams as well.

In terms of “human”, this is what I mean:

The app is clean, well-organized, and full of innovative ways of
tracking what’s happening with your project.

They don’t just offer you a Kanban board, either. It’s a “Card Table”
and improves on it (according to them):

The power of Basecamp is not just one tool, but all of them put together
in the context of a project. This image is from their home page, but it
nicely illustrates how teams can come together in a single space to
create and share documents and design assets, chat, work on tasks,
and post things to a general message board:

The pricing is reasonable for starting up, which, as of today, is
$15/user/month. For a team of 10–20, that’s a pretty good deal. You
can also opt for Pro at $299/mo and get unlimited users as well as a
few other perks.

Basecamp focuses heavily on collaboration for any project, not just
technical ones. Keeping communication tied to a project in a single
space is much nicer than using email + 3-4 different applications
(Slack, GitHub, and Dropbox, e.g.). It’s also cheaper.

NOTION
Notion is one of those tools that you can’t help but love. It can do just
about anything you need:

There are templates to help you get going, but from there it’s all up to
you. You can literally store anything digital in Notion. Lists, tasks, docs,
calendars, project plans, messages… it’s kind of crazy.

The pricing is the same as Basecamp, although it’s free for individual
use. I have a few Notion boards of my own where I keep all kinds of
things.

The only problem with Notion is that, as opposed to Basecamp, it’s a bit
unfocused. I like Basecamp’s structure — the idea is that all information
you add to Basecamp pertains to a given project. With Notion,
it’s… whatever you want, really. If you’re a person with ADHD, or
even think you have ADHD… well, you might find Notion challenging.

Everything in Notion is a “page” that is represented in one way or
another. Pages have properties that you define, as well as covers and
descriptions. Pages can also contain other pages, too. Here’s a page
that has all my covers, logos, and other images I use for my business:

Each one of these images is its own page:



Each page has properties, as I mentioned, that you can create to suit the
thing you’re putting into Notion. You don’t have to come up with
everything on your own, either — Notion has a pretty solid set of
templates for you to plug in as needed. This is where I got the “Brand
Assets” page from:

With Notion, it’s “pages all the way down”. Your top-level page can
represent an entire library of knowledge, some shortcuts, or an
in-depth system for managing your project and daily life.

This is Thomas Frank’s Ultimate Brain 2.0, which I bought last year,
believing that I could hopefully organize my life:

It’s remarkable. There are calculations, automations, and data
relationships in there that must have taken months to think up and
put together. I love that Notion has this ecosystem! The Ultimate
Brain is focused on individuals trying to unload what they know and
what they need to do. It’s based on Tiago Forte’s book Building a
Second Brain, which I’ve read and love.

As a quick aside: I do love these ideas, but I find that unloading my brain
ironically takes a lot of brainpower on the daily. I do create notes and lists, so I
don’t forget things, but sorting and sifting is… a challenge for me.

If you want to use Notion to run your project, you absolutely can.
Everything is web-based and shareable, so you just need to set
yourself up with a team and pay per seat, just like Basecamp, and
you’re off and running. They have a mess of templates for you as well:

Most of these are free, but there are some premium ones in there that
might be a little more comprehensive.

The Cons of Notion

Notion is a tool that I want to love. It can represent any data and let
you work with it in ridiculous ways. I would say that it’s almost
perfect, except for the following:

You most likely need to be online. There are things you can
do offline, but it’s hit or miss whether it will work. I’ve tried
using it on an Airplane (seriously) and it was painful if I’m
honest. For most people and teams, this isn’t a showstopper.
It’s a Skyrim approach to working with data: a wide open
world. On one hand, this flexibility is wonderful and lets you
work however you want. On the other hand, if you have
ADHD (or ADHD tendencies)… the faffing with covers,
structure, rules, and so on is a major distraction to getting
work done. Your team can also destroy things or not know
where something goes, and just create their own silo of
information. It takes work to stay organized.
Your information is stored on their servers. It’s the web and
everything you do and say is subject to their data privacy and
security, which, so far, hasn’t been a problem for most people.
Exporting your data is possible, but it’s weird. You can
export a page and all of its subpages as PDF, HTML, or
Markdown with CSV. You get the data, but if you’re going to
move to another system it’s pointless.
It can crash. I’ve had this happen more than once, where you
get the “spinner of death” that just sits there and spins and
spins. It’s a JavaScript-intense application, so there will be
issues with browser resources as well as server resources,
which do crop up from time to time.

None of these, to me, is a blocker. I’ve used Notion on 3 projects so
far (with other people) and the team/collaboration aspect is
outstanding. Every page can have a comment from a user on the team
and, yes, you can lock things down based on role.

The pricing is reasonable as of this writing, and I know that Notion is
the go-to solution for quite a few teams out there.

AIRTABLE
Airtable is one of those apps that is instantly appealing:

You can manage just about anything with Airtable. It’s a combination
online database, spreadsheet, and intelligence tool all rolled into one. I
use it for every non-coding project I work on, it’s that good.

Airtable is free, but there are paid upgrades that are definitely worth
the price. It starts at $24/month and goes up if you add people (such
as other team members) to your base.

Getting Started with Templates

The easiest way to get started is to take a look at their templates:



These are databases to start out with, but they’re extremely flexible in
terms of data and how it’s represented.

On the left side, there’s a category called “Software Development”. If I
choose this, I see 13 different templates:

So many of these are useful to us at this point. User Research, User
Studies, Product Launch, Product Planning… it’s overwhelming.

Once again: keep it simple, stay focused. Our goal is to know what
we’re doing right now. We can always expand as we need to, but
presently we need to know how to go from nothing to something with
our new product, the Red:4 LMS.

Let’s take a look at the Product Planning template, as that’s precisely
what we’re doing. The description of the template is promising:

Customer stories? Sprints? Epics? If hearing those words gets you
excited about rapid product development, then this is the
template for you. This product planning template is perfect for
teams of all sizes working throughout the product life cycle,
whether you're a startup iterating on a product concept searching
for product/market fit or a large organization with significant
market share deploying new features on a regular cadence.

If you don’t know what stories, sprints, and epics are, don’t worry.
We’ll get into that later. What is important is that we have a place to
start with some test data to help us out:

Each tab here can be thought of as a “table” in a relational database
sense. You can relate one tab to another by creating a column and
setting its definition to “Link to another record”.

The “Facets” tab shows what I mean:



Here you can see the fields “App section”, “Stories” and “Epics” — all
of which relate to the other tabs.

The true power of Airtable comes with the ability to filter and view
data. The most common view is the spreadsheet, but you can also
view data using a Kanban if there’s a status field or a calendar if the
data is time-dependent. You can set up input forms for people to enter
their data, and you can also add fun “apps” that do interesting things
with your base:

The Pivot Table is great for exploring historical data, and SendGrid
will email a group of people based on whatever conditions you’ve
set up.

In 2018, I used Airtable to help the events team at Microsoft organize
and set up Microsoft Ignite, The Tour, a localized version of their
successful Ignite conference. We had dozens of different venues and
talks around the world, not to mention a high-powered rotation of
speakers. We needed to be sure everyone was informed of what they
were speaking about, when, and where, and Airtable worked for us
perfectly. They even have an API that allows you to work with the data
in any base!
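As a taste of that API: Airtable exposes each base over REST, authenticated with a bearer token. Here’s a hedged Python sketch that only builds the request and doesn’t send it; the base ID, table name, and token below are placeholders, not real values:

```python
# Build (but don't send) a request for the records in one Airtable table.
# The endpoint shape and Bearer auth follow Airtable's public REST API;
# the base ID, table name, and token are placeholders.
import urllib.request

API_ROOT = "https://api.airtable.com/v0"

def list_records_request(base_id, table, token):
    url = f"{API_ROOT}/{base_id}/{table}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

req = list_records_request("appXXXXXXXXXXXXXX", "Tasks", "YOUR_TOKEN")
# urllib.request.urlopen(req) would return JSON with a "records" array.
```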

A Wealth of Data Types

The power of Airtable lies in the way it can represent data. There
are the usual types (strings, numbers, dates, etc.), but you can also
attach files, barcodes, references to users, formulas (which are very,
very useful), long text with formatting, phone numbers, emails,
ratings, rollups, URLs, single or multiple choice, checkboxes,
durations, auto-numbers, and more.

One of the main things you’ll use is a single-choice dropdown,
particularly for various status choices:

Here I’m adding a column called Status set to Single Select. I then
added the selections I wanted. When I added the column it was empty,
but I set the first to “To Do” and was able to drag/copy the choice
down in the same way you might do with Excel.

Now that I have the Status set, I can create a new Kanban view, using
Status as the column definition:

If I add or remove a Status item, these columns will change.

There is so much more customization I can do here, but I would
encourage you to explore things and see what you come up with. If
Airtable has a weakness, it’s that you can almost do too much!

Airtable’s strength is its ability to display data. If you ever worked
with Microsoft Access, you’ll feel right at home with Airtable.

Cons of Airtable

I like Airtable a lot, if you couldn’t tell. It’s extremely flexible, and
you’re able to create reports and views easily. The only downside is
that data entry is much more manual and, likely, will have to be
done by you.

There is an integration with GitHub using Zapier, but I’m not a big fan
of stitching together what feels like a funky machine just to know
what’s going on. As I mentioned, I use Airtable for most of my
non-technical projects, like writing books or making videos. For these
tasks, it’s extremely useful and makes collaboration a breeze.

If you’re OK sorting/sifting issues and PRs and then updating
Airtable, I say rock on! But doing so makes you an information
bottleneck, and at some point that’s going to get painful.

TRELLO
Trello is Kanban on steroids. They offer templates, just like Airtable,
but it’s all about the board:

This is the Product Planner template, which is like the Airtable
template but simpler. Trello is free, but if you want to have different
views you have to upgrade:

Working with Trello can be overwhelming as there is so much you can
do with it. The cards themselves can contain almost any kind of
information, and if you can’t find it with the basic board settings, you
have options with Power Ups.

Power Ups

Power Ups are essentially “plugins” that extend your board in any
number of ways, and they’re totally free! Some integrate with other
services, others enhance cards so that specific information can be
embedded easily:

Here are some of the Product and Design Power Ups. You can add
User Testing information to your cards, wireframes, Figma diagrams
and a lot more. Like I said: overwhelming.

Let’s add one! I want to track what’s going on with our GitHub repo,
so I’ll add the GitHub Power Up:

Once I’ve added the GitHub Power Up and authorized Trello to access
my GitHub account, I can open a card and see GitHub in the Power
Up menu:

Clicking on this button allows me to attach a specific branch, commit,
issue, or pull request.

For instance, let’s say I have a card representing a task to fix the tab
alignment in our user interface. I created an issue for this on GitHub,
but I also want to track it in Trello — and I can:

Back in GitHub, the issue has been updated with a Trello link:

My next course of action is to add a label to the issue (something we’ll
cover later on), do the work, and then close the issue once everything
is resolved. These changes can then be seen on Trello:

Notice that the label for the issue is also attached to the card!

Automations

Trello has outstanding automations that will move cards around,
create macros, and also email you and your boss status reports! That
last bit is a huge deal, which I’ll discuss in just a minute.

You can build a trigger on just about anything:

You can even make the trigger time and date based, acting like a cron job:

Once again, I could spend the entire book on this one topic:
automating Trello. It’s unbelievable how powerful this tool is! It will
even send you periodic emails on the current state of your board that
you can review every Monday, for instance:

This is a “board snapshot”, but you can also have Trello send you
overdue tasks, tasks assigned to you, and tasks that are due soon.

The one major downside of the Trello automations is that they don’t
seem to work with Power Ups very well. For instance: when an issue
is closed on GitHub, it would be nice to move the card containing it to
“Completed”. I can’t find a way to do that as of this writing,
unfortunately.

Cons of Trello

Trello is probably the most powerful of all the simple management
tools at your disposal. You can review your cards and, as long as
they’re hooked up to the issues and PRs you’re tracking, you should
see things change as the issues, PRs, branches, and commits change.

Unfortunately, this too is manual, and the integration between
GitHub and Trello only goes so far. As I mentioned above, you can’t
automatically change columns when an issue is closed or a PR is
merged — it’s a manual thing. You can’t create a card when an issue
or PR is created, either. These things are probably OK if you need the
powerful cards and automations that Trello offers.

At the beginning of the project, however, this can be extremely
overwhelming.

WHERE’S JIRA?
If you’ve worked in an enterprise setting, then you’ve likely had to
work with JIRA at some point. I used it back in 2000-something and
let’s just say it was challenging. It started life as an open-source bug
tracker and morphed into a complex Agile project management tool
owned by Atlassian.

This image captures JIRA pretty well:

It has all the greatest hits that almost every tool in this chapter has:

Kanban boards.
Task tracking and reporting.
Issue triage and tracking.
Roadmaps and team structure.

As you can see, JIRA is all about helping your team focus on Agile
concepts. Things like chat and messages, however, require a separate
service.

It also has a few bells and whistles for managers:

Here’s the thing: if you’re in an enterprise/big company setting, it’s
likely you already have your tooling set by company standard. JIRA
excels in this setting, where reporting and tracking are critical for
stakeholders to understand your “project velocity”. If this is your
situation, then you want to be certain you hand the keys to JIRA to
your Project Manager, because it’s not exactly something a developer
lead handles. At least not in my experience.

OKRs? Elevator Pitch? These are things that you might end up caring
about later in your career, when you take over an organization,
perhaps, or you found your first startup.

I certainly don’t want this section to come off as dismissive. Some
people love JIRA — especially devoted Agile teams. I think the blurb
from a few images ago captures it pretty well:

Developers want to focus on code, not update issues. We get it!
Open DevOps makes it easier to do both regardless of the tools
you use. Now developers can stay focused and the business can
stay aligned.

WHICH SERVICE IS THE BEST?!?
You can’t go wrong with any of these services, but I will give you my
opinion… here goes.

It comes down to being able to answer a simple question from your
boss:

So, where are we with the project?

To answer that, you’ll need to be certain you know:

What you and your team are supposed to be building
When it’s due
What tasks are needed for the current iteration
Who’s doing those tasks
The progress of those tasks

Someday you might have a PM that will do this for you… for now, you
are the PM and that means you need to know your tools.

Your project information is only as good as your ability to collect it,
which usually means hounding your team for updates. This can
happen in periodic standup meetings (yuck), emails prompting them
(“answer me, or I’ll call a meeting” is a great threat, by the way) or,
best of all, something automated.

Up to date, reliable information is only as good as your ability to
synthesize it and create simple reports. A reply your boss (pretend it’s
me) might be looking for is:

Right now, we’re focused on UI Design because we’re building
this application ‘outside in’. I’ve broken things down into X, Y and
Z buckets with explicit tasks for each. So far, we’re 30% along
with X, 25% with Y and 10% with Z as we’re waiting for approval
to hire a designer. If that doesn’t come, we’ll move on with a
secondary set of tasks designed to help us focus on an off-the-shelf
template.

This tells me a lot and I might have some questions, but overall, it’s
where I might expect you to be as you’re just starting out.
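The math behind a status line like that is trivial to automate. A sketch in Python, with the buckets and counts invented for illustration:

```python
# Percent-complete per bucket: (done / total), rounded to whole percents.
def progress(buckets):
    return {name: round(100 * done / total) for name, done, total in buckets}

report = progress([("X", 3, 10), ("Y", 1, 4), ("Z", 1, 10)])
print(report)  # → {'X': 30, 'Y': 25, 'Z': 10}
```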

Going Further With Project Reports

Automated reports are nice, but the best reports offer a quick
summary at the beginning, hopefully stating “we’re right on schedule”
or “we’re ahead of schedule by a week”, and then detailing the
feature/task progress. All I want to know, as your boss, are three
things:

Do you need me to clear the way or unblock you or your team?
Are we on schedule?
Are there any risks to the schedule?

That’s core project management right there and yes, it’s boring, but
it’s also the first step to a brighter, more exciting future when you can
hire a PM to do this stuff for you… and you can be the CTO.

For now, however, you’re the one generating reports and your tools
will need to help you, so whatever you choose needs to be able to
support you and what you’re doing. Do not underestimate this.


There are ways to export things from GitHub’s issues and projects to
Excel, so you can run up some numbers if you need. There are also
apps in the GitHub marketplace that you can plug in to help you out.
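If you’d rather skip the marketplace, a few lines of Python will turn an issue list into a CSV that Excel can chew on. The record shape below matches what GitHub’s REST API returns for a repo’s issues, but the sample payload and the red-4/lms repo name are stand-ins:

```python
# Flatten a list of GitHub-style issue records into CSV rows.
import csv, io

def issues_to_csv(issues, out):
    writer = csv.writer(out)
    writer.writerow(["number", "title", "state"])
    for issue in issues:
        writer.writerow([issue["number"], issue["title"], issue["state"]])

# In real use you'd fetch https://api.github.com/repos/red-4/lms/issues
# (repo name is a stand-in); here we use a sample payload so the sketch
# runs anywhere.
sample = [{"number": 1, "title": "Fix tab alignment", "state": "closed"}]
buf = io.StringIO()
issues_to_csv(sample, buf)
```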

Take the time to ensure your boss knows you’re a star.

Summary and Recommendation

I think you can sense my thoughts on this topic: keep it simple, use
GitHub. At some point, you might outgrow GitHub completely and
that’s OK, nothing will be lost. Your status updates will be more
manual, but that’s OK too, as having an executive summary at the top
of your email with specific detail will be appreciated.

You can define tasks on your project board and turn them into issues
easily, and you can also see the progress of a PR directly on your
board. Unfortunately, as of this writing, you have to manually add
issues and PRs you want to track — it’s not automatic. I think this is
OK — that’s what triage is all about when issues come in. You don’t
want every issue automatically added to a work queue, as they often
require discussion and careful consideration.

We’ll learn about that in Part 2: Development.

For the rest of this book I’ll be focusing on GitHub as my PM tool, but
if you have a different tool, that’s fine too! Ideally, you can translate
what I’m doing to your tool of choice. And no, I’m not making this
choice because I work at Microsoft; it’s my choice for code-heavy
projects!
FIVE
PRINCIPLES OF INTERFACE
DESIGN
ADDING SOME POLISH CAN MAKE ALL THE
DIFFERENCE

There are two questions you might be pondering straight
away:

Why is there a chapter on interface design?
Why is that chapter at the beginning of the book?

The reason it’s here is that we’re doers. We execute and deliver
something even when, especially when, it’s not completely thought out.
Reflecting on our laws of power:

Win through action, not words

We don’t have an application or a database, but we can deliver a
mockup, and we can do it quickly. This is important because having
something to look at and consider is far more effective than pondering
words on a page.

I’m not a designer, but I do know there are a few principles that are
easy to learn. This is extremely useful in the early stages of a new
project.

Let’s add some tasks for ourselves:

Notice that I have denied the use of a designer (sorry… not sorry)
because I would rather not spend money just yet, in case this thing
doesn’t work out and we can’t deliver something useful!

THE GOLDEN RATIO


Let’s talk about structure first. If your application has a user interface,
you’ll likely be working with some kind of grid system using CSS or
some other style tooling. These grids are not arbitrary! They’re based
on an ancient mathematical ratio called The Golden Ratio or The Golden
Mean, or just plain old ɸ (phi), which is roughly 1.618:

The number phi, often known as the golden ratio, is a
mathematical concept that people have known about since the
time of the ancient Greeks. It is an irrational number like pi and e,
meaning that its terms go on forever after the decimal point
without repeating.
Phi is closely related to The Fibonacci Sequence, which is a set of
numbers where each number is the sum of the previous two in the
set: 0, 1, 1, 2, 3, 5, 8, 13, 21 and so on. The ratio of successive
numbers approaches 1.618, which describes natural growth in
fascinating ways:

Phi has been used in science and art for centuries, and many artists
have claimed that it holds the secrets of beauty. Scientists tend to scoff
at this stuff, and I won’t dig into the debate here — but the ratio is
useful to us when thinking about UI design.
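A quick sketch in Python (the function is mine, purely for illustration) shows the ratio of consecutive Fibonacci numbers homing in on 1.618:

```python
# Ratios of consecutive Fibonacci numbers approach phi (~1.618).
def fib_ratios(n):
    a, b = 0, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        if a:  # skip division by zero on the very first step
            ratios.append(b / a)
    return ratios

print(round(fib_ratios(12)[-1], 3))  # 1.618
```

Twelve steps in, the ratio is already accurate to three decimal places.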

A Grid Based on Phi

This is Sketch, a Mac application that many designers use to create
mockup interface designs. There’s a pixel grid that you can turn on,
which I have done:

I want a window that’s 800 pixels wide, but the question is, what
should the height be? If I’m using phi, I can divide 800 / 1.618 ≈ 494,
which I’ll round to 495, so I stay “in ratio”:

Great. So now I want a header bar, and I want it to be “in ratio”. I could
divide 495 by 1.618, but that would be far too big. If I keep dividing,
however, I’ll eventually get to a number that I can round to 45, which
looks pretty good in terms of proportion:

You can keep going in this way, or you could just use a grid system
that’s figured this out already.
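If you’d rather script the repeated division than reach for a calculator, a small sketch like this does the job (the helper and its cutoff value are my own invention):

```python
PHI = 1.618  # the golden ratio, rounded to three decimals

def phi_heights(width, floor=40):
    """Keep dividing by phi until the result drops below `floor` pixels."""
    sizes = []
    size = width
    while size >= floor:
        size = size / PHI
        sizes.append(round(size))
    return sizes

print(phi_heights(800))  # [494, 306, 189, 117, 72, 45, 28]
```

Note that 45 shows up in the list, which is the header-bar height we arrived at above.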

Just Use a Framework

If you’ve used a CSS framework before, you’ve likely used a grid
system like this one, which is built into Sketch:

The “container” for the bars you see here is 960 pixels wide. Each
pink column is 60 pixels wide and the gap between columns is 20
pixels, with a 10-pixel offset at each end. While not exact, this
spacing is based on phi.
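You can sanity-check that spacing with a little arithmetic. Here’s a rough sketch using the numbers above (the constant names are mine, and this assumes the classic 12-column, 960-pixel layout):

```python
# 12 columns of 60px, 20px gutters between them, 10px margin at each edge.
COLS, COL_W, GUTTER, MARGIN = 12, 60, 20, 10

# Everything should add up to the 960px container.
total = MARGIN * 2 + COLS * COL_W + (COLS - 1) * GUTTER
print(total)  # 960

# x-position of each column's left edge
xs = [MARGIN + i * (COL_W + GUTTER) for i in range(COLS)]
print(xs[:3])  # [10, 90, 170]
```

Twelve 60-pixel columns, eleven 20-pixel gutters, and two 10-pixel margins land exactly on 960.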

You don’t need to whip out your calculator every time you do a page
layout! It is a good idea, however, to know if you’re looking at a grid
that’s part of a framework or someone’s arbitrary idea of what a grid
should be (that happens). As long as phi is in there somewhere, the
layout should look pleasing to the eye.

CHOOSING COLORS
It’s important to understand what you’re doing with color from the
very start of the project. People have an emotional reaction to color, so
you need to take care to get it right and have that reaction match your
project or company branding.

Red:4 has its own branding, but you’re free to do something more
creative and appropriate to an LMS, whatever that means. I suppose
I’m offloading the responsibility to you, so choose wisely!

Let’s explore the basics of color theory and why it matters to you.

Accessibility Concerns

Before we dive into colors, know that 1 in 12 men and 1 in 200
women are color-blind. To make the site accessible to everyone, there
are some guidelines you should consider from the start:

Don’t use color as a primary form of navigation. In other
words, “click the red button” isn’t helpful if a person can’t
see red.

Make sure text is at a reasonable contrast, and avoid
overlaying text on a picture unless the opacity of the picture is
dialed way down.

If you’re displaying a color option (for a shirt, e.g.) make sure
to add a label for each color swatch (aka a red box with the
word “red”).

Make links easy to see without needing to hover. It used to be
a trend back in the 90s and early 2000s that links would be
blue and underlined only when you hovered. Not very usable for
someone who can’t differentiate blue.

Certain combinations of contrasting colors can cause a
problem: blue and gray, red and green, green and gray, and
so on.

The article linked above is a great first start when considering your
color choices. It’s difficult to get it right, but trying is everything.
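If you want to check contrast programmatically, the WCAG 2.x contrast-ratio formula is straightforward to sketch. This is a minimal version (the function names are mine), not a substitute for a proper accessibility audit:

```python
# A rough sketch of the WCAG 2.x contrast-ratio math.
# Colors are (r, g, b) tuples with channels in 0-255.
def _linear(c):
    """Convert an sRGB channel to its linear-light value."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance per the WCAG definition."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """Contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast: 21:1.
print(round(contrast((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

As a rule of thumb, WCAG AA asks for at least 4.5:1 for normal body text.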

The Color Wheel



This is a color wheel, something first described by Isaac Newton in the
1600s. You can see the primary colors red, yellow, and blue as well as
the secondary colors of orange (red + yellow), purple (red + blue),
and green (blue + yellow).

We care about the color wheel because with it, we can work with color
in a straightforward way.

Warm and Cool Colors

Using the color wheel, we can divide all the colors into two buckets,
warm and cool:

These colors are associated with emotions. Warm colors are
considered passionate, angry, feisty, troublesome, etc. Cool colors are
more peaceful, calming, cold, and so on.

We have the freedom to decide our branding colors without a designer’s
help, as we need to start somewhere, so the first thing we need to
consider is what our brand should “feel” like. If we were a startup, we’d
have a marketing team to figure this all out with user studies and so on.
We’re on our own, unfortunately, so let’s consider what we’re doing.

We’re helping programmers learn things using video and text. Ideally,
they should be calm and feel comfortable in a non-distracting
environment.

This tells me we need cool colors. But which ones?

The Three Color Combinations



You’ll find 3 main colors for most brands, with an emphasis on one or
two “main” colors.

Twitter is an example of a site that went with cool colors:

The main brand color is a sky blue. The secondary color is a variation
of the main color, and I suppose you could say there’s a third color in
there, represented by the muted “Tweet” button.

It’s a simple interface that you could describe as non-threatening,
calm, and inviting despite the content within.

Trying to figure out which colors should be in your brand is made a
little easier by considering 3 common color choices:

Complementary: 2 opposite colors. A minimalist, bold
approach to branding.

Analogous: 3 colors relatively close to each other on the color
wheel. This is what we see on the Twitter page, with the colors
being separated by tint and tone, which I’ll discuss in the next
section.

Triadic: 3 colors evenly separated on the color wheel. This is
for brands with “pop”, for lack of a better word, that are
looking for a brash, festive feel.
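These three schemes are easy to play with in code if you model the wheel as 360 degrees of hue, the way HSL does (red at 0°). This sketch, and its function names, are mine:

```python
# The three classic color schemes as hue rotations on a 360° wheel.
def rotate(hue, degrees):
    """Rotate a hue around the wheel, wrapping at 360."""
    return (hue + degrees) % 360

def complementary(hue):
    return [hue, rotate(hue, 180)]

def analogous(hue, step=30):
    return [rotate(hue, -step), hue, rotate(hue, step)]

def triadic(hue):
    return [hue, rotate(hue, 120), rotate(hue, 240)]

print(complementary(0))  # [0, 180]
print(triadic(0))        # [0, 120, 240]
```

One caveat: on the HSL wheel the complement of red is cyan, while on the traditional painter’s wheel shown above it’s green, so the exact hues depend on which wheel you use.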

A great example of a triadic mix of colors is Mozilla:



Bold black on white typography, tricolor gradients everywhere.

You can see complementary colors at work with Netflix:

Red on black, that’s it. Let the videos shine through!

Hue, Shade, Tint, and Tone

There’s a great article on 99designs regarding color theory, and I refer
to it often when I need to choose something regarding colors. They
have an illustration that perfectly encapsulates Hue, Shade, Tint, and
Tone:

From 99designs, The 7-step Guide to Understanding Color Theory

Simply put, tints, tones, and shades are variations of hues, or
colors, on the color wheel. A tint is a hue to which white has
been added. For example, red + white = pink. A shade is a
hue to which black has been added. For example, red + black
= burgundy. Finally, a tone is a color to which black and
white (or grey) have been added. This darkens the original
hue while making the color appear more subtle and less
intense.

Twitter changed the tint and tone of its main color to produce its
secondary color. Tweaking shade, tint, and tone is a great way to come
up with your starting color, allowing you to pick complementary or
analogous colors from there.
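You can approximate tints, shades, and tones yourself by mixing a color toward white, black, or gray. This toy sketch (mine, not from the 99designs article) shows the idea:

```python
# Tint = mix toward white, shade = mix toward black, tone = mix toward gray.
# `amount` runs from 0 (unchanged) to 1 (fully mixed); colors are (r, g, b).
def _mix(rgb, target, amount):
    return tuple(round(c + (t - c) * amount) for c, t in zip(rgb, target))

def tint(rgb, amount):   # toward white
    return _mix(rgb, (255, 255, 255), amount)

def shade(rgb, amount):  # toward black
    return _mix(rgb, (0, 0, 0), amount)

def tone(rgb, amount):   # toward middle gray
    return _mix(rgb, (128, 128, 128), amount)

red = (255, 0, 0)
print(tint(red, 0.5))   # (255, 128, 128) -- pink-ish
print(shade(red, 0.5))  # (128, 0, 0) -- heading toward burgundy
```

Half a tint of red lands on pink; half a shade lands near burgundy, exactly as the quote describes.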

Or Just Use ColourLovers

At this point, we’ve decided to go with something on the cool side,
which helps people concentrate and not get distracted. But then what?
Look, I’m not a designer, so I need some help picking the right shade
(or tint or tone) of a color. There are numerous sites out there that
can help with this, but my favorite (sorry, “favourite”) is
ColourLovers:

Their branding, by the way, is complementary green and red…

You can get lost exploring this site, but let’s stay focused and head to
the palettes:

This is the “Most Loved” tab, and I can see why — these combinations
are wonderful! I wouldn’t say they are bold, so they fit what I’m after.
I like the blues and greens — cool tones that I’ve been looking for.

To me, the trick with choosing a color palette is to use the first one
that jumps out at you without thinking about it too much. Here’s one
that fits for me:

I only want three colors to start with, so I’ll choose the dark and
lighter blues with the lightest green as an accent color:

I might change this as time goes on, but this is a great first start,
mostly because we haven’t spent too much time on it!

THE BASICS OF TYPOGRAPHY


Now that we have our color palette, let’s focus on typography. Once
more: it’s crucial that we don’t get pulled into a rabbit hole. The
keyword for these (in fact, all early stage) exercises is velocity. The
sooner we can get something in front of our client/boss, the better.

If we’re clumsy about it, however, we’re in for a tough time.

Personally, I love fonts. If I see one I particularly like, it can ruin my
day, as I will find it irresistible to chase it down and find out where I
can get it. There’s something magical about clear, legible type that I
have yet to understand. My late friend, Bill Hill, was the creator of
ClearType and commissioned fonts for Microsoft that many of us use
to this day: Georgia, Garamond, and Calibri.

I interviewed Bill in 2010 and if you have a moment, have a listen. The
entire episode is devoted to the power of typography, but Bill’s stories,
to me, were magnificent. His Scottish brogue coupled with his wild
excitement was something truly special.

Anyway: Bill had a theory that the hunting instinct inside of us saw
fonts as animal tracks laid down on a trail. We would find the shapes
pleasing and distinct, and our brains evolved an attraction to the
symmetry.

No one knows, of course, but there is something to beautiful
typography. Let’s dissect it.

Font and Line Size

Obviously, our text needs to be legible, and font size is key. Users can,
of course, change the zoom on the screen to read what they want, but
that will throw off our layout. Legibility goes far beyond the main
body text as well: we need size gradients between headings that are
pleasing to the eye, with proportional margins.

You’ll never guess how we figure that out! That’s right: phi.

There’s no need to overthink this! The most legible body size we can
use is 16px, but that, of course, depends on the font.

Consider some dummy text from Word:

Note: if you don’t know what Lorem Ipsum is: it’s placeholder text that designers
and copywriters use so that they can ‘block’ their designs with text. It’s very
useful, especially when showing your client something as they will often fixate on
what you’ve written, rather than the design itself.

It’s impossible to recreate 16px font size using an image and an ebook
reader, but do your best to imagine!

The headings and body that you see here were blocked out using the
Golden Ratio Typography Calculator. Go play around if you like — but
hopefully, you can see how the type flows easily from one heading to
the next and the lines are spaced evenly.

I’m using a typical approach to fonts here, which is pretty boring (but
very legible): Helvetica for the headers, Georgia for the body. Helvetica
is a sans serif, the word “sans” meaning “without” in French and
“serif” coming from the Dutch word for “line”. Georgia is a serif font
because it has little “serifs”, or lines, on the tops and bottoms of each
letter.

Having fonts with contrasting serifs is a common way to distinguish
between a header and the body following it, and helps the eye scan
the document easily.

The opposite also works:

The header here is Alpha Slab One and the body is Avenir. This font
choice is less “traditional”, if you will, than Helvetica/Georgia, but that
could work in your favor if you’re trying to create something that
appears more “fun”.

A Simpler Way To Choose Proper Font Sizes

Some fonts are bigger than others, or at least they appear that way.
Compare Alpha Slab One to Helvetica above it: it simply takes up
more space on the page.

You can use the GRT Calculator above, but a simpler resource is
type-scale.com, which will also generate the CSS for you. Easy-peasy!
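If you’re curious what a tool like type-scale.com is doing under the hood, a modular scale is just repeated multiplication by a ratio. Here’s a rough sketch, assuming a 16px base and phi as the ratio (the function name is mine):

```python
# Generate a modular type scale: each size is the previous one times a ratio.
PHI = 1.618

def type_scale(base=16, steps=4, ratio=PHI):
    sizes = [base]
    for _ in range(steps):
        sizes.append(round(sizes[-1] * ratio, 1))
    return sizes

# Roughly: body, h4, h3, h2, h1 sizes in pixels.
print(type_scale())  # [16, 25.9, 41.9, 67.8, 109.7]
```

Phi produces a dramatic scale; tools like type-scale.com also offer gentler ratios (1.25, 1.333, and so on) for denser interfaces.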

Choosing the Right Font Combinations

Now we come to it: which fonts are the best? It’s impossible for me to
answer that because fonts convey so much to your reader. It’s like
asking me what the best shoes are! Some people wear trainers
everywhere, other people have a separate closet for their shoe choices.

It comes down to the feeling you want to convey. Bringing this home to
our project, I think a “playful ease” would work well. At least that’s
what I want our user to feel when using our site because learning is
fun and, if taught well, should also be easy.

A great resource to research font use is also the place that will likely
supply your fonts: Google Fonts.

They have extensive articles on what fonts to choose, why and when.
One article that I love is all about “Emotive considerations”:

When choosing typefaces, there are two key considerations: How
does this type make us feel and how does this type work? The
emotional response to the shape of letterforms is a very personal
experience, and when readers first see type, they react to it in an
emotional way before anything else. It’s a major part of why so
much emphasis is placed on choosing type even when it’s not
technically a part of typography.

Selecting the right font is not a simple matter, and you’ll likely go
through a few iterations before settling on the “right one” for your
project. What you consider playful vs. what I do is up for debate. As
your CTO, do I trust your judgement or do you just try to make me
happy?

This is where compromise sets in, and we’ll eventually settle on
something. To that end, I suggest coming up with three font sets to
show me that spark a “playful, easy” feeling to you.

Here are some:



This is Doppio One for the headers and Georgia for the body. I’m
using all caps on the headers for legibility, as they’re easy to find and
Georgia is easy to read. Doppio also has some playful geometry to it
that’s a little more rounded.

This is Rockwell for the header and Avenir for the body. I like it as
it’s a bit more bold than Doppio/Georgia.

This final set has been my go-to for book writing for years:

The headers are Bebas Neue and the body is Iowan Old Style, one of
the default fonts for reading on the iPad. To me, this font is extremely
legible and the slim, clean lines of Bebas Neue really stand out
against it.

I am not sure if I’d call it “fun”, but it does look nice.

Let’s decide and stop thinking about it, shall we? I like the second
choice of Rockwell and Avenir. It’s simple and straightforward, with
some “clean whimsy”. How’s that for a description!

If you want to play around with other combinations, be my guest. So
much has been written about font choice for the web, so have a
Google and have some fun. I just have one request…

Please Don’t Use Scripts (and Arial)!

If you’re extremely clever about fonts, you might be able to get away
with using a script font as your header. If you try to use one as the
body, I’ll find you!

A script font is something ad hoc and meant more as an “artistic”
statement:

This one is supposed to convey handwritten care, which is silly; it’s a
computer font.

This is National Forest and (I hope) conveys why you don’t want to use
scripts:

It might work for the Yosemite logo or as a funky quote, but as a
header, it’s difficult to read and annoying.

The bottom line is this: scripted fonts are for decoration, not reading.
The most notorious misuse of a script font is Comic Sans, which you
see everywhere:

This font looks like it came from a comic book, thus the name. If used
in that context, OK sure, but it’s come to be recognized as the go-to
font used by people who try to liven up their corporate email template
with something “exciting”. It is, truly, the Rick-roll of the typographic
world.

Arial is the last font I would like to address. It was created as a
knock-off of Helvetica for various reasons that you can read about here.
Design-wise, this is a perfect summation of Arial:

Despite its pervasiveness, a professional designer would rarely—at
least for the moment—specify Arial. To professional designers,
Arial is looked down on as a not-very-faithful imitation of a
typeface that is no longer fashionable. It has what you might call a
“low-end stigma.” The few cases that I have heard of where a
designer has intentionally used Arial were because the client
insisted on it. Why? The client wanted to be able to produce
materials in-house that matched their corporate look, and they
already had Arial because it’s included with Windows. True to its
heritage, Arial gets chosen because it’s cheap, not because it’s a
great typeface.

It’s not so much that it’s “bad”, it’s just… overused, watered-down
and kind of not good. It’s like wearing trainers everywhere…

SUMMARY
We’ve discussed layout, colors, and typography, so let’s update our board:

What I’m doing here with the board might seem like overkill, and
more than a little contrived. Believe me, it’s not: tracking your
work and communicating what you’re doing (and what you did) says a
lot about you as a leader.

Perception and Deception, right? We’re not deceiving anyone with
this… well, not really. We are presenting a version of reality, however,
and we need to embrace that idea. It’s easy to see this in a negative
light, that we’re manipulating our boss or client. The truth is: we are.
That’s our job as the lead, to keep people informed, enough to make
them happy but not so much that they start to meddle in what we’re
doing. Meddling is the dark side of human kindness. People like to
help other people and sometimes get lost in their own needs. When
bosses get over-anxious and feel uninformed, they start to
micromanage and make your life difficult.

We now get to take the next step in our project: getting to know who
our users are through stories. This can be a highly orchestrated dance,
or it can be an hour’s exercise with a keyboard — either way, it’s a
crucial step.
SIX

CREATING USER STORIES

THIS AGILE PRACTICE CAN HELP KEEP YOU FOCUSED ON WHAT’S IMPORTANT

We are about to step delicately into the world of software
development methodologies, namely Agile and the ideas
behind it. It’s been 20 years now and Agile’s footprint is
everywhere; even if smart people think they’re breaking away
from it, they’re not.

There is no way we can move forward without understanding this, so
let’s take a quick second and review.

THE AGILE PRINCIPLES


In 2001, 17 dudes got together and tried to figure out a better way
to build software than the ridiculous processes they were currently
using. I lived this pain and I know it well.

I worked on a project for Ameritech, a massive phone company in the
US, back in the late 1990s. We had a budget of $2.5 million and spent
the first 4 months creating The Project Plan. We planned everything…
like literally every single thing. Every process, every interface, every
query and more. We couldn’t do it unless it was in the plan.

I flew out to Chicago every other week as we built The Plan. My client
insisted I start building things anyway, sort of in secret, as he sat
through meeting after meeting creating this beastly set of documents.
It was a funny way to build software.

So in 2001, the 17 dudes put together the “Agile Manifesto” and gifted
software developers with a better way to do things. I don’t mean that
sarcastically; it was a radical and very welcome shift. People complain
about Agile processes these days, but let me assure you that you do not
want to live through The Project Plan. It kills any desire to build anything.

Here are the principles these 17 people came up with:

Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software.

Welcome changing requirements, even late in development. Agile
processes harness change for the customer's competitive
advantage.

Deliver working software frequently, from a couple of weeks to a
couple of months, with a preference to the shorter timescale.

Business people and developers must work together daily
throughout the project.

Build projects around motivated individuals. Give them the
environment and support they need, and trust them to get the job
done.

The most efficient and effective method of conveying information
to and within a development team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely.

Continuous attention to technical excellence and good design
enhances agility.

Simplicity—the art of maximizing the amount of work not done—is
essential.

The best architectures, requirements, and designs emerge from
self-organizing teams.

At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly.

In the next chapter, we’re going to dig into Agile and what it’s become
in the modern day, because you will need to understand it if you
expect to be leading a team.

For now, I’d like to focus on the very first principle:

Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software.

WHY PROGRAMMERS SHOULD CARE ABOUT CUSTOMERS


A customer-first perspective gets messy, fast. Before Agile, the idea of
creating requirements and specifications for software usually sat with
Marketing or Product Managers. CEOs and cofounders would weigh in
as well, but customer needs were left to other people.

This created a problem. I’ll illustrate this issue using a story from my
experience.

Ameritech’s First Information Portal

Let’s revisit the Ameritech project with the gigantic Project Plan. My
company was hired along with 6 other consultants to help design and
build an “information portal” on the Ameritech intranet for call center
representatives. The idea was that they would be able to look up
company policies, pricing and so on for customers that called in with
questions. Their current system was an old, rickety DOS-based thing
that required constant maintenance and took months to master.

Keep in mind that “the web” was just born a few years prior and
Google didn’t exist yet. Yahoo was the default search engine and
companies were making millions by putting things online. The public
was yet to catch up with the whole thing, but the wave was starting to
roll.

An executive at Ameritech was web-savvy and knew that having a
centralized portal meant the software could be updated easily without
the user needing to do anything. This seems obvious to us now, but
back then it was a massive deal. That’s why we were there: we knew
how to create these things.

The requirements for the project came from another consultant who
frankly had very little experience with the web and just “wanted in” so
he did what consultants do: convinced you he knew what he was
talking about. His plan reflected that lack of experience and his design
for this information portal looked just like the DOS screens, but as a
web page. His idea was, basically, “let’s not shock our call center reps
too much”.

My client, who worked at Ameritech (his name was Dino), had a
different idea. He asked us to get crazy and build something that we
thought the user would like, so I did. I copied Yahoo’s general
structure: a big search box at the top and a category drill-down on the
left (the web was small enough back then that this was feasible). It
took me a weekend, but I had a prototype up and running in short
order and when I was asked how I came up with it, I replied that “I
didn’t, Yahoo did. They did all the research, so let’s use it.”

Dino liked the idea and absolutely loved the interface. But I took it
one step further and asked the IT team for a dump of sample
documents from the DOS system, and they gave me 6,000! I took
those documents and had a Microsoft server tool called Index Server
scan over them, making them searchable by the web framework I was
using, classic ASP. When I showed a working prototype to Dino, he
almost lost his mind.

Mediocrity is toxic — put on a good show. That, and Make your wins look
easy. “Oh, and by the way, the search box works and uses your data.
Give it a try…”

That made an impression.

I’ll tell you more about this story later, as it quite literally defined my
career — for now, I need to get back to my point! Which is this: our
customers were the call center representatives, and the three of us
(me, Dino, and the inexperienced consultant) all had three separate
ideas in mind about what they wanted. That was messy, to say the
least.

THE CUSTOMER?
This is where we come back to Agile and reflect once again on the first
principle:

Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software.

It’s a good principle, in principle. The idea is that you have as much
empathy as possible for the people who are about to use your
software, and then you show it to them as often as you can, asking for
feedback. You take that feedback and change things around and deploy
as often as possible until you “get it right” and the customer likes
what they have.

Sounds good, doesn’t it? “The customer is always right” and “give the
customer what they want” are rules of modern business… but then
again there’s a very famous quote from Steve Jobs that I like better:

Some people say give the customers what they want, but that's
not my approach. Our job is to figure out what they're going to
want before they do. I think Henry Ford once said, 'If I'd ask
customers what they wanted, they would've told me a faster
horse.' People don't know what they want until you show it to
them. That's why I never rely on market research. Our task is to
read things that are not yet on the page.

There’s a lot of truth to this. Let’s return to the Ameritech story; I’ll
share what I learned.

The Tale of the Three

Back in the late 1990s, big companies like Ameritech had no problem
letting consultants go to war for them. In fact, that was expected and
business as usual. That’s what Dino did to my company and the
Inexperienced Consultant. I’ll never forget the day I demoed what we
had put together — a full working prototype that looked like a modern
web application — and what the consultant had put together — a
PowerPoint presentation that looked like crap.

He was livid. I didn’t care one bit.

The following day, we had three customers come in (call center
people) to have a look and play around with what we had made. It was
a fascinating experience! Their interactions were entirely scripted,
right down to the phone call they received from Dino, who was sitting
in the other room recording their responses.

Each rep had to sit in a small, glass booth about 4 meters wide and 8
meters tall with cameras pointing at their face, hands, and screen.
They had to pretend they were at work, but instead of their DOS app,
they were using the site I created.

Bless You… Bless You!

We weren’t allowed to tell them how to use it; they had to figure it
out based on what they saw. The first person got it instantly and
searched for the term “marina install” (for installing a phone line on a
boat at the marina on Lake Michigan). It came up instantly as the first
result (thank you, Index Server) and she clicked the link, read the
answer, and, literally, started to cry.

Can I please have this right now? You don’t understand… this is
wonderful. The current system is so confusing … bless you…
bless you!

She really did say “bless you” twice and gave Dino a massive hug. She
had been with the company for 3 months and was really having a hard
time learning the system. Call center reps were rated based on the
number of calls they could successfully answer per hour, and their
knowledge of their system played right into that. If our first rep had
our system, she could answer more calls and get paid a lot more.

Is This Necessary?

The second rep had been at Ameritech for 8 years and was a
supervisor. His reaction was instantly negative, and he said something
I’ll never forget:

Is this necessary? Our system works fine right now, what’s the
problem?

Hard to argue with that. From Rep 2’s perspective, this entire effort
was a waste of time. He fumbled around the page a bit, read the help
document I hastily threw together based on Dino’s feedback, and
managed to answer the call professionally within a reasonable time.
He then told us he would have done far better with the old system.

Why I Gotta Use a Mouse?

The final rep that came in that day had some fire in her eyes. She
wasn’t exactly happy to be there, mostly because she worked in
downtown Chicago, and we were out at the corporate headquarters in
Hoffman Estates, about 45 minutes away. Her pay was the same, but
this was inconvenient.

She sat down, saw the screen and said, “what’s this?” Dino explained
over the monitor that it was a website and started to give her a little
more information, but she cut him off with “why do I need a
website?” At that point, Dino asked her to focus on the test call and
do her best, and we would try to answer her questions afterward.

He started the call, and she looked baffled, but then looked down to
her right and saw a mouse and rolled her eyes. “I hate these damn
things” she muttered, making sure the call was muted. The pointer
shot around the screen as she adjusted, and eventually, she made it
through the call and told Dino how he could order service for his boat
in the Lake Michigan marina.

Note: it’s always a good idea to empower keyboard warriors out there with
shortcuts. Extra credit for doing a keyboard command overlay too.

Her feedback was, shall we say, direct:

Why I gotta use a mouse? These things are hell on my wrists, and
it took me months to get the keystrokes down for the system we
use now. I don’t see how this is any better.

I hadn’t considered keyboard shortcuts at that point and, if I’m
honest, I don’t think I would have been able to figure it out back then.
Dino and I did work out how to tab from one spot to the next, but…
well, let’s just say the interaction was difficult.

Dino was one of those “fun clients” who liked to geek out with you.
He was a trained musician and insanely smart, often buying books on
HTML, programming and databases (as that’s how you learned things
back then) so he could be sure he knew what we were up to. Together,
we figured out how to add keyboard shortcuts to the portal, and our
third rep was thrilled about that later on.

FINDING THE INSPIRATION


I don’t want this entire chapter devoted to Steve Jobs, but I do find
many of his insights useful, especially when it comes to product
development. This one in particular, when he was asked about doing
market research for the iMac:

We have a lot of customers, and we have a lot of research into our
installed base. We also watch industry trends pretty carefully. But
in the end, for something this complicated, it's really hard to
design products by focus groups. A lot of times, people don't
know what they want until you show it to them.

In short: we know our customers and what they think they’ll want. That’s a
bold statement and, in fact, I would say it’s pretty arrogant. Then
again, Apple’s product launches are famous for having lines around
the block, so they must understand something about people.

I find Apple to be on one extreme and Agile to be on the other. With
Agile, you ask the customer what they think until you deliver that to
them, and then you hope they like your “faster horse”. With Apple’s
approach, you build something according to a vision and hope that
people will want it when you show it to them.

So what are we going to do?

Putting on my CTO hat, I will tell you that I loathe mediocrity and no,
I don’t want an application built by committee that retreads
everything else out there because that’s what the user will be familiar
with. Yuck!

On the other hand, I’m not a visionary. You might be, and I’d be lucky
to have you in that case, but for now, let’s just say that both of us
might have some fun ideas about how to build a given application.

Let’s see if we can be human about this, and we’ll focus on the same
LMS system we discussed in a previous chapter, the Red:4 Developer
Portal.

We can start by thinking about three possible users of our system in
terms of experience and preferences. We can then describe the things
they love, hate, and feel indifferent about in our system.

Then comes the fun part: we synthesize our efforts and describe our
perfect user and just try to please them, forgetting the rest.

RELYING ON EMPATHY WITH USER STORIES


We need to use both things: Agile and Vision, moving the needle
between the two until we get it right. As CTO, I would rather not
overload customers with constant delivery; I want to wow them so
they’re inspired and excited to use our stuff.

I also don’t want to confuse the hell out of them with something
completely unexpected. Let’s see how we can meet in the middle
somewhere with a set of user stories, something you’ve likely heard
about.

Instead of identifying users based on their demographics, age, experience,
and so on, let’s lean into the “vision” space and see if we can discuss them
in terms of their experience: what would delight them, infuriate them,
or, worst of all, put them completely to sleep when using our application?

I like to be positive, so let’s lead off with Jill, our most excited user.

Jill: Loves Learning New Things

Jill has been working at X Corp for the last 15 years as an Oracle DBA
and has hit a point where the work has become routine, and she wants
to make a change in her career. She heard from a friend that
PostgreSQL is gaining in popularity and features, and she has decided
to stop dismissing it as trivial and to learn what she can about it.

Her friend recommended Red:4’s PostgreSQL tutorial as it’s an easy,
entertaining video that she might enjoy. She’s very excited about the
possibility of a “one-stop” learning channel rather than cobbling
together books, YouTube, and blog posts. She feels that by paying
someone for a well-made tutorial, she can expedite her learning
efforts.

She’s so optimistic that she has begun searching for opportunities
with other companies.

Santosh: Not a Fan of Change

Santosh is a career SQL Server DBA who has worked at a mid-sized
insurance company for the last 12 years. The company was just
bought by a larger one that relies heavily on open-source technology,
including PostgreSQL.

Santosh has been offered a position at the new company, but he’ll
need to skill up on PostgreSQL, which he is not excited about. He has
invested an incredible amount of time getting to know SQL Server and
how to effectively administer it. He likes his company and the people
he works with and does not want to leave. He’s also been offered a
substantial raise if he stays and a healthy budget for training
materials.

He reluctantly found Red:4’s PostgreSQL tutorial and read the positive
reviews. He likes the idea of a full-scope tutorial that covers so much,
and he also likes that there are additional courses he can take to get
him within 80% of what he knew with SQL Server.

He’s decided to hold off on searching for a different job… for now.

Fen: It’s Just a Job

Fen works as a programmer at a startup that is migrating its blog from
WordPress to a headless CMS that runs on PostgreSQL. They don’t
know PostgreSQL at all, but consider it “just another database that
runs SQL”.

Fen doesn’t get caught up in technical arguments, preferring instead
to do the work they’ve been asked to do and call it a day. They care
about their job and their colleagues and really enjoy the challenges
that come up, but at the end of the day, work-life balance is critical
and Fen treats the job as just that.

Fen has been appointed the “Dev-DBA” as they know the most about
MySQL. Red:4’s PostgreSQL tutorial came up in discussion and looked
promising, and Fen decided to try it out.

Fen enjoys the job and doesn’t feel the need to look for another one.
The tech industry is about adapting to changes and learning new
things, so this is all in a day’s work.

CONTRASTING TRADITIONAL AGILE USER STORIES


The approach I’m using above is a bit different from the Agile-flavored
user story, which is typically created by engineers who are trying to
define features, or “specifications”. In fact, I’ll take that further and
say that Agile user stories are little more than Project Plans rewritten
using a silly formula.

That “silly formula” is typically described as “As a…, I want…, so that…”,
something like:

As a shopper, I want a cart, so I can remember the things I want
to buy.

It might seem like I’m being just a little cheeky with this example, but
it’s actually one of the better Agile-flavored user stories I’ve read!

You’ll see the word “empathy” often used in articles written about
user stories. This explanation is from Atlassian, the creators of Jira,
when discussing “As a”:

Who are we building this for? We’re not just after a job title,
we’re after the persona of the person. Max. Our team should have
a shared understanding of who Max is. We’ve hopefully
interviewed plenty of Max’s. We understand how that person
works, how they think and what they feel. We have empathy
for Max.

In spirit, I agree with everything said here, but in practice, this often
falls apart. Consider the examples given (same article):

As Max, I want to invite my friends, so we can enjoy this service
together.

As Sascha, I want to organize my work, so I can feel more in
control.

As a manager, I want to be able to understand my colleagues’
progress, so I can better report our success and failures.

These aren’t stories; these are feature definitions. Those are
important, but they don’t drive the story into the vision end of things.
That’s what we want to do with our next exercise.

WE’RE NOT GOING TO PLEASE EVERYONE


At some point in your career, you’re going to learn some marketing
skills. I was resistant to this forever, but soon realized I needed these
when I started my own business. Most people consider marketers to
be slimy snake-oil salesmen and there are definitely plenty of those,
but I have found there is a very human side to the practice.

A friend of mine, Chad Fowler, puts it really well:

Suppose you had a service you could offer that could save people
$100 a year. This is a service that could benefit anyone. It’s a
service you can perform and anyone can consume… If you don’t
tell people about this service, then you are cheating them out of
that $100. You owe it to them.

Chad talked about this at a conference in Poland that I was at, and it
really stuck with me. And, to be fair, this thought didn’t originate with
Chad — he is relaying it from an article he read once and try as I
might, I can’t find the link!

Why am I discussing marketing? Because, at its core, that’s what
user stories are: a marketing strategy. You’re trying to build
something that people want to buy and use, and the key to that
sentence is “want to buy and use”. That, friends, is marketing.

You don’t have to agree with me on this, and you can make whatever
you feel like making and hope that people use it, and you know what?
Maybe they will! In fact, I’ll pile on that sentiment and say that a
common thing you hear when successful entrepreneurs are asked
what their inspiration was for making Product X goes like this:

I dunno, I just made it for me. The fact that other people like it is
cool.

This is going to sound weird, but this is still marketing. Allow me to
explain…

DERIVING YOUR PERFECT USER AND IGNORING THE REST


People that build something for themselves typically do so because
they couldn’t find the thing they were looking for. That happened to
me when I wrote The Imposter’s Handbook. I was tired of feeling like I
didn’t know what I needed to know, and I hated every book I found
that touched on the problem (too dull, too theoretical, poorly
written, etc.) so I just wrote one for myself. People liked it, and there
you go.

What I did when I wrote that book is use myself as my “perfect
customer”: the person who needs what you’re selling, will buy it
without hesitation, and then go on to tell others about it. If we focus
on our perfect person, we’ll do well. If we try to please everyone, we’ll
drive our product into mediocrity.

Let’s bring this back to Jill, Santosh, and Fen. Is one of them our
perfect customer? Let’s see:

Jill wants out of her job yesterday and is very excited to learn
PostgreSQL. It might seem like she’s our perfect customer, but
there are plenty of reasons why this might not be true,
including the fact that she might prefer books to videos or find
our presentations off-putting.
Santosh might seem like our nightmare customer, but then
again, he might be astounded by PostgreSQL after 10 minutes
and make a full turnaround, crediting us with changing his
career.
Fen is just happy to be here and likely the easiest to please…
which is balanced by the easiest to make unhappy. Their time
is important to them, and they only have so much to give to
work, which means we need to be on our game if we expect to
impress them.

I think it’s clear that none of these people, as written here, is our
perfect customer, but there are parts of them that fit well. Jill’s
enthusiasm for PostgreSQL, the opportunity to change Santosh’s life,
and Fen’s desire to optimize their time.

We can work with this.

Focusing On the Maximum Impact

It feels strange to be thinking about marketing so much when we
haven’t written a line of code, but trust me: either we do it now or
someone will do it for us later on if we’re successful.

I was reading the book Dotcom Secrets by serial hustler Russell Brunson
because, as I mention above, I finally gave in and realized that
understanding who was buying my stuff was critical (aka
“marketing”). Unfortunately for me, I normally thought about this
after I created things, not before. That’s OK too — but I could have
saved myself a ton of time if I had gone through the exercise of
figuring out who I thought would benefit the most from what I was
making.

Put another way: I should have figured out where my product would
have maximum impact. At this point, I need to ask for your patience
because yes, I know we’re veering into the non-programming realm,
but in truth, programmers began doing that with the rise of Agile. We
stepped out from behind our glowing monitors and asked to be part of
the project process… so here we are.

In Dotcom Secrets, Brunson did a mental exercise one day when he
considered who his “perfect customer” was: where they were in life,
their job, where they wanted to go, even what they looked like. As you
can probably guess: by focusing like this, he worked less and made
more sales. His “true fans” grew, and they did the sales work for him
through word of mouth, and life became better.

I think we’re on the right track, but we need to have a conversation
about who we think our perfect customer is. Let’s call them Alex, a
reasonable gender-neutral name, and you can assign whatever is most
comfortable to you. Our task now is to squeeze the “desirable” parts
of our stories into the persona of Alex. Your persona and mine may
differ — but I’ll go through the process here, and you can use it as
you like.

It’s important to note that you can’t do this alone, even if you’re the
project lead. Being the doer that you are, you can take a stab at it but
make sure your client (me) is clear that it’s a starting point and
validation is needed.

So, now, let’s pretend that you and I walked through that process and
“rounded out” our main character.

Meet Alex

Alex is a programmer who enjoys their job but wants to do more.
They’re overwhelmed by all the resources and options out there and
just need to know how to take the first step forward. They would
rather not mess things up by trying to learn too many things, too
quickly. They’re curious and are hoping to find a “niche”, if you will,
that they can master over the next 5–10 years.

While Alex is enthusiastic about the job, they also know their work
comes second to their personal life.

Alex found Red:4’s site through a friend and is excited about the
prospect of becoming a PostgreSQL DBA. The comprehensive video
library and book collection aren’t overwhelming at all, and the blog
posts are simple, focused and direct. Alex is excited to get started and
feels like they could be at the beginning of a major career shift.

Alex’s age is not relevant. They could be straight out of college or a
senior who has just graduated from bootcamp after a long career in
the military. They have a drive to learn and are excited about the
opportunity.

OUR SIMPLE STORIES


Now that we know Alex, let’s use them to help us move forward with
three stories that we can use to get our initial prototype off the
ground. We’ll use these interactions:

Alex first encounters our site and decides to look around
Alex finds something interesting and decides to take a look
Alex is keen to know more about us, so … does something

Simple enough and a great place to get started. You’ll hear me say this
a lot through this book: simplicity at every step is the key to
shipping software. If something you’re doing is too complicated,
break it into smaller, simpler parts and make those your focus, one at
a time. I’ll come back to this, repeatedly.

Alex Meets Red:4

Alex heard about Red:4 from a friend and was intrigued, so they
Googled us and found our site at the top of the search results. They
click the first link, which is our homepage, and are greeted with a
simple, elegant site and a compelling headline. The images are
engaging without being the typical corporate, bubbly vector artwork.
They’re images of smiling people of all ages, having fun writing code.

Alex scrolls down and sees a headline and some imagery that
describes choices for career paths: Data Professional, Machine
Learning, Backend Programmer, Frontend Specialist and more.

Below that section is content divided by technology, including
databases (PostgreSQL, MySQL, MongoDB, etc.), platforms (Node.js,
.NET, Ruby on Rails, Django, etc.) and languages (JavaScript, Python,
Ruby, C#, etc.).

Alex clicks into the Data Professional career path and sees a headline,
description and 3-minute video at the very top. The video describes
the life of a data pro and what the work is like, and then shows how to
get started.

Alex sees the courses below and that the first lessons are free, so they
click through and watch for a few minutes. The videos are high quality
with great production value and the narrator has the most pleasing
voice. Alex is eager to know more.

They click on “About” in the top menu and instead of a wall of text,
Alex sees a picture of the entire Red:4 staff on a beach in Hawaii. The
page is broken into parts, including who they are individually, what
their purpose is, and who they hope to help.

Alex sees they have a blog as well, and the option to have it delivered
via email (a list service). Alex joins the mailing list and reads through
the blog posts, which are entertaining and personable. They like the
lack of corporate veneer and really appreciate the shared enthusiasm
the team seems to have.

Not ready to commit just yet (but very close), Alex decides to think
about it for a few days. Over the next week, a post is delivered via
email and at the bottom is a mention of a discount for being on the
mailing list. Alex uses the discount and signs up, delighted with their
choice.

Go Forth, and Build!

If we were using Agile for our project process, I’d be in trouble with
that story. It’s too long and has far too many individual tasks. Agile
user stories tend to be extremely focused and spawn very few tasks,
perhaps 5 at most.

What I just wrote is more like an Epic in an Agile sense, which is a
collection of stories. The entire next chapter is devoted to this stuff,
but I wanted to prepare you for the jargon blast that’s coming your
way, and arm you with a simple defense:

Agile is just a way to think about and organize what you and your
team need to do

That’s it. Things have names which come along with rules attached
and, like anything, adhering to the process does not guarantee success.
That said, being organized really does help. So we’re going to do our
best to comply.

What we know right now is that we need a few things to make Alex
happy, and those are:

A Home page
An About page with information about the team, our
vision, etc.
Career Paths, which we can also think of as “Categories”
Technologies, which we can think of as “Tags”
A Blog
A Mailing List with a signup option somewhere on the site

This is our prototype. We don’t need to have actual content or
imagery; we can block this stuff out with lorem ipsum and placeholder
graphics. This will be our first Sprint (another Agile thing) and we’ll
define what’s happening in the next chapter. We’re getting
somewhere, and that feels good! The ship is leaving port!
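
Before we move on, one way to make that page list concrete, well before any real design work happens, is to block it out as a simple route map. The paths and titles below are placeholders I made up, the code equivalent of lorem ipsum:

```typescript
// Hypothetical route map for the prototype pages from Alex's story.
// Paths and titles are placeholders, not a spec.
interface Route {
  path: string;
  title: string;
}

const routes: Route[] = [
  { path: "/", title: "Home" },
  { path: "/about", title: "About" },             // team, vision, etc.
  { path: "/paths/:slug", title: "Career Path" }, // our "Categories"
  { path: "/tags/:slug", title: "Technology" },   // our "Tags"
  { path: "/blog", title: "Blog" },
  { path: "/signup", title: "Mailing List" },
];

// A quick sanity check that every page from the story is represented.
function hasPage(title: string): boolean {
  return routes.some((r) => r.title === title);
}
```

Whether this becomes a static site, a Rails app, or something else doesn’t matter yet; the point is that the prototype’s entire scope now fits in a dozen lines we can argue about.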

SUMMARY
As you can probably tell, Alex is my perfect customer and the person I
gear all of my content towards. Personally, I feel that the 5-year mark in the
career of a programmer is a “make or break” moment where you’re
either inspired to take a leap of faith or you shift into cruise control
and let the days roll out. There’s no way I can help the latter folks — I
like exploring far too much!

You can probably also tell that I like the Agile process in general, but
I’ve seen far too many projects fail because the process outweighed the
product. People can’t help themselves, they like to know what they’re
doing before they do it and will often take things too far. That’s a
people problem, not a process problem.

Managing the process is straightforward; the people… not so much.
This is where we begin the dark art of managing perceptions, both
within the team and without. This is where you’ll need to start
checking yourself as well, because you’re human too, and it’s very easy
to let success go right to your head.

Onward!
SEVEN
SOFTWARE PROJECT
MANAGEMENT BASICS
GETTING TO KNOW AGILE AND ITS
VARIOUS FLAVORS

We focused on our users in the last chapter, which is as it
should be. In this chapter, we’re going to use Alex’s story
to prepare for our first sprint: the initial demo. This is a
crucial time for us.

We will use this demo to convey three essential things:

Our understanding of the project
Our understanding of the customer
Our ability to deliver

This is when things get real, and we can’t take a single step until we
have some process in place. This project will be reviewed someday,
right back to the first day, and every move we’ve made will need to be
documented and understandable without sloppy cowboy coder
nonsense.

In the next chapter, we’ll go through the steps to actually build the
demo. In this chapter, we’re going to figure out what those steps are
and wrap them in some kind of process.

What process, you ask? Let’s find out.

THE AGILE PHILOSOPHY


It’s important to understand that when discussing Agile, you’re
discussing a philosophy, not an actual framework for doing things. Let’s
revisit the Agile Principles so we can understand this:

Our highest priority is to satisfy the customer through early and
continuous delivery of valuable software.

Welcome changing requirements, even late in development. Agile
processes harness change for the customer's competitive advantage.

Deliver working software frequently, from a couple of weeks to a
couple of months, with a preference to the shorter timescale.

Business people and developers must work together daily
throughout the project.

Build projects around motivated individuals. Give them the
environment and support they need, and trust them to get the job
done.

The most efficient and effective method of conveying information
to and within a development team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely.

Continuous attention to technical excellence and good design
enhances agility.

Simplicity--the art of maximizing the amount of work not done--is
essential.

The best architectures, requirements, and designs emerge from
self-organizing teams.

At regular intervals, the team reflects on how to become more
effective, then tunes and adjusts its behavior accordingly.

To sum this up: Agile is about infusing software with humanity. At our
best, we’re messy, change our minds often as we learn new things,
enjoy building for a better future and generally try to improve
ourselves.

As Jim Highsmith (one of the manifesto signees) puts it:

At the core, I believe Agile Methodologists are really about
"mushy" stuff—about delivering good products to customers by
operating in an environment that does more than talk about
"people as our most important asset" but actually "acts" as if
people were the most important, and lose the word "asset". So in
the final analysis, the meteoric rise of interest in—and sometimes
tremendous criticism of—Agile Methodologies is about the mushy
stuff of values and culture.

I think this approach is difficult to argue with.

UBIQUITOUS TERMS
Like any good religion, Agile has a few denominations to go along
with it that muddy the waters. People have learned the how, but make
up their own reasons when it comes to the why, and arguments start.
One of the primary knocks on a team trying to be Agile is just that:
they’re trying, but not doing it right (according to the person leveling
the criticism).

I’m going to sidestep what “going Agile right” means, and get into the
terms that every Agile team needs to understand. You have probably
heard some of these terms before, and they are:

Stories: discussed last chapter, a user story defines who a user
is and what they want from the application. In some
applications of Agile, stories are given “points” which dictate
how difficult they are to implement, usually derived during a
Planning Poker session (see below).
Sprint: a small period in which focused work gets done and
something gets shipped. Typically, two weeks long, a sprint
will focus on one or more user stories and deliver story
points.
Epic: an overall idea driving development, made up by
individual user stories. Epics can and often do span many
sprints.
Backlog: work to be done. “Grooming” the backlog is
something that a dev team might do with a product owner,
deciding what work goes into a sprint.
MVP: Minimum Viable Product — the least thing a team can
create to generate user feedback on a new idea. Usually, the
result of the first sprints or epic.
Planning Poker: When planning something, the entire team
is asked to assess the difficulty of a task or a user story. Each
team member has a set of “cards” to play that have ascending
values, typically based on the Fibonacci sequence (1, 2, 3, 5, 8,
13, etc.). Each member “plays” a card that conveys how hard
they think the task will be. The scale is arbitrary, with the
minimum and maximum values decided by the team
beforehand. The goal is to spawn discussion until agreement
is reached on the difficulty of the task or story.
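
The Planning Poker mechanics are simple enough to sketch in a few lines. The deck values and the “everyone shows the same card” consensus rule below reflect one common convention, not a standard; your team will pick its own:

```typescript
// A Fibonacci-style Planning Poker deck. Teams choose their own scale;
// this one is just the usual convention mentioned above.
const deck = [1, 2, 3, 5, 8, 13, 21];

// A round reaches consensus when every member plays the same card.
function consensus(votes: number[]): number | null {
  if (votes.length === 0) return null;
  return votes.every((v) => v === votes[0]) ? votes[0] : null;
}

// The gap between the highest and lowest card played. A wide spread is
// the signal to stop and talk: someone knows something the rest don't.
function spread(votes: number[]): number {
  return Math.max(...votes) - Math.min(...votes);
}
```

When `consensus` comes back `null` and `spread` is large, that’s exactly the discussion the exercise is designed to spawn.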

There are other terms used which are pretty obvious, and you tend to
pick these up as you go along. In addition, each framework might have
additional bits of jargon (like Scrum) as well, and we’ll discuss those
in due course.

Speaking of frameworks, there are two that dominate the Agile space:
Lean and Scrum. If you’ve been working as a programmer for any
length of time, you’ve probably had to deal with one of these, and it’s
likely that you followed Scrum, which I’ll discuss below. Lean is also
very popular, so let’s start with that.

LEAN / KANBAN
I remember when Lean started gaining popularity as a necessary
revision to Agile back in the mid/late 2000s. People began talking
about Toyota production lines and Kanban boards and the idea that
relentless focus on quality, minimalism, and eliminating “waste” was
the natural evolution of Agile. Given its roots in the Toyota production
lines, people began to infuse Japanese cultural practices and even Zen
principles into their software development process.

I worked on a team that was using Lean ideas, and the project
manager took it so seriously that he would never send an email longer
than a single sentence. His code review comments and issues were
famously terse — two sentences at most. Somehow this made him a
good Agile person, I guess.

Unfortunately, this person wasn’t Hemingway, so his single sentences
usually turned into multiple single-sentence replies to questions from
the team. This kind of thing often happens, and I’ll come back to it
throughout this chapter: Agile is a great idea, people make it hard.

Anyway: the core idea is maximum efficiency, minimum waste.
Every action you take as a developer should contribute to the product
being produced and nothing more. The product being produced
should be as close as possible to what the client wants, and that’s it.

Kaizen

As you iterate, you use Kaizen, or “continuous improvement using
small, positive change”. You’ll frequently hear people explain that this
process flexes the idea that “the little positive things add up to a big
positive thing”.

Positivity is at its core, in that each team member:

Lets go of their ego and assumptions
Questions everything and tries to find a solution to any
problem identified
Accepts that failure is OK, as long as they’re committed to
learning and improving
Never stops trying to improve

These are fun ideas and take a committed team. The theory is good,
but, as we know, people tend to be people and dig chaos (Chaos is your
friend). This is where a strong lead, such as yourself, earns their
paycheck.

The Five Whys

This process is a unique way to solve a given problem, something I’ve
used before when pondering a problem in my journal: The Five Whys.
The main idea is that when you identify a concern, issue, or goal, you
ask “why?” five times to ensure that you’re getting to the essence of
things.

For instance, I’m trying to finish this book and can’t seem to do it. It’s
taking forever! If I were journaling this problem, I might ponder it
thus:

I’m trying to finish this book and can’t.

Why (1)? Because I can’t seem to find enough time in the day and
the mental strength.

Why (2)? Because my day job is very taxing mentally and
spiritually, and I’m totally drained at the end of the day.

Why (3)? There is constant internal drama in my organization


that I keep getting dragged into, and my workload constantly
changes, so I can’t focus and deliver on one single thing, which
makes me feel like I’m failing.

Why (4)? Because delivery is important to me and makes me feel
good about my career choice. If you’re doing what you’re good at,
you should be able to deliver on it.

Note: I like my job, and the above conversation is just an example.

This process doesn’t always take all five whys. Here, it’s clear that my
day job is sucking the motivation out of me and my chosen writing
time (the evening) is the wrong time. I can solve this problem simply
by changing the times that I write.

I’m currently writing this in the morning, bright and early, right after
doing yoga for 20 minutes. Every so often, I go on a walk before I
write as well to make sure my brain is clear. I started waking up 90
minutes before I normally do, and I have a target of 2000 words/day.
So far, it’s working like a charm!
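
If you like capturing these sessions somewhere structured, a Five Whys chain is just a problem statement plus a short list of answers, where the deepest answer you reached is your working root cause. Here’s a minimal sketch; the type and field names are my invention:

```typescript
// A Five Whys session: a problem plus up to five "why" answers, each
// one answering "why?" about the entry before it.
interface FiveWhys {
  problem: string;
  whys: string[];
}

// Treat the deepest "why" reached as the working root cause; as the
// example above shows, it doesn't always take all five to get there.
function rootCause(session: FiveWhys): string {
  const { problem, whys } = session;
  return whys.length > 0 ? whys[whys.length - 1] : problem;
}
```

Feeding my unfinished-book example into this, the chain stops at the answer about delivery mattering to me, which pointed straight at the schedule fix.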

MVP

When you hear the term “MVP” it means “minimum viable product”
— the absolute least possible thing that will deliver value. You then
iterate on that (making sure you don’t waste effort and keystrokes)
until you reach product enlightenment.

There are advantages to this approach. The idea of focusing on the
value of the product and avoiding distraction is good. Going minimal
for the sake of being minimal (like my old PM’s emails) is ridiculous.

The Downside of Lean


I suppose the first (and most commonly heard) criticism of Lean is
that we’re not building cars. That’s a significant thing to consider!
Cars go through design phases that last years, with every single part
dialed in with precision. The manufacturing line just needs to fulfill
their role and do what’s asked at every station on the assembly line.

It might be more apt to say that we build the factory… but even then,
that’s not entirely accurate. Creating software is an organic process
where an idea is conceived, grown internally over time and then
introduced to the world. It doesn’t stop there — growth continues,
followed by eventual decline. This, to me, doesn’t reconcile with the
idea of an assembly line that assembles parts and then everything is
finished. That’s not software development.

On a more personal note, I find it ethically difficult to adopt
traditional practices from another culture for the sake of building
software. I have the same problem with some large tech companies
shamelessly adopting Hawaiian cultural identity as their corporate
identity because the founder had a good time on vacation over there.

Applying a cultural tradition to software feels wrong to me. I do think
there are plenty of things to learn from Kaizen, as there are from the
“spirit of Aloha” (kindness, love, etc.). But maybe focus on those
things rather than the traditions of another culture?

On the more concrete side of things: projects using Lean tend to lose
sight of the big picture and the opportunities that distraction and
inspiration can bring. The best features of any application are often
wild bits of inspiration that come in the middle of the night or while
on a walk at the end of the day. Twitter, for example, was a goofy side
project at Odeo and basically thrown together in a short timeframe.

Given time and room to grow, who knows what these things can turn
into.

There are plenty of good things about Lean, to be sure. The Five
Whys, constant small improvement, and a focus on positivity are all
things we can bring into our project.

SCRUM
Probably the most popular Agile framework, Scrum is focused on
turning programming into a team sport. It’s almost a cult:

The Scrum framework is fairly simple being made up of a team
with three accountabilities, who take part in five events and
produce three artifacts. With Scrum, important decisions are
based on the perceived state of the three artifacts making their
transparency critical. Artifacts that have low transparency can lead
to decisions that diminish value and increase risk. Transparency
enables inspection and leads to greater trust among the team and
others within the organization.

Scrum is simultaneously rigorous and ad-hoc, which is why the rugby
term is used, I suppose. There are only three roles in the Scrum
framework with very specific jobs. Let’s take a look at each.

The Scrum Master

This person’s job is, basically, the “master of ceremonies”. The word
“master” might be in the title, but there is no actual authority in this
role. In fact, all the roles are equal in terms of authority.

The Scrum Master’s job is to make sure the entire team is following
the game plan and is playing by the rules. They are both coach and
referee, ensuring that the project scope and vision are understood by
all and that everyone is participating in the Agile process.

They’re also there to handle anything that might be blocking progress.
This could be an internal issue such as a personal conflict or an
external one such as a budget restriction.

In the “coaching” role, the Scrum Master will coordinate meetings,
sprints (which we’ll talk about in a minute) and demos. They track
progress and ensure that everything is running smoothly. In other
settings, this role is often viewed as that of a Project Manager.

It’s more than that, however. The Scrum Master is there to support
and help, not to dictate policy. It’s a delicate role to play as there is a
lot of implied authority, yet no actual authority.

The Product Owner

As the name implies, the Product Owner’s job is to make sure that
what gets shipped is what is intended. This doesn’t have to be strictly
what the customer wants, it can be somewhere along our Agile/Apple
spectrum: aligning with customer demand or with the CEO’s vision.
What gets shipped just has to comply with whatever vision has been
stated.

A Product Owner will guide the creation of user stories and epics (a
collection of user stories), work with the business
owner/founder/CEO to ensure that the stories align with the vision,
and also ensure that the team is executing according to the stories
given.

This role is a “big picture” role that can be thought of as a version of
a “Product Manager” in other settings. The difference with a Scrum
Product Owner is that they’re involved daily, helping with sprints and
ensuring that the team is building what the stories ask for.

The Team (or Squad)

The team’s job is to build things within the scope and time given. If
the Product Owner and Scrum Master do their jobs, the team will have
a concise set of requirements to work on in a well-defined timeframe.

The cool part about a Scrum team is that they can self-organize to
execute as they see fit. This means that the team should be big enough
to organize into smaller “pods”, sort of like a caper flick where you
need a team to handle the crowd in the bank, a team to disable
security, and another to break into the safe.

The team has the sole authority to execute as they see fit, as long as
their goal is to execute against the sprint.

Sprints

You’ll hear this term repeatedly in Agile settings, and it basically
means doing some work within a given block of time. Every Agile
framework handles sprints a little differently — but they’re actually
quite simple.

In the Scrum sense, the Scrum Master and Product Owner work
together to define what goes into a sprint, favoring small, effective,
positive results. These are traditionally 2-week periods.

Ideally, a sprint results in something being deployed or shipped. This
could mean a feature, bug fixes, or documentation update. It’s up to
the Scrum Master to determine what gets shipped from a sprint,
which they, of course, work on with the team and Product Owner.

The Daily Standup

Scrum won’t work if the team isn’t united and sharing what they’re
doing and what they’ve learned. This is also called transparency and
must be in place from top to bottom.

The goals of a daily standup are to share:

what’s been done (usually the day before)
what’s been learned and how it applies to today
what things are blocking progress, if any

These meetings should only last 15 minutes, but usually stretch to 20
minutes or a half hour. If your Scrum Master knows what they’re
doing, the meeting will go quickly and any longer discussions will be
handled individually.

The goal of the meetings is to ensure everyone is on the same page.



The Retrospective

When a sprint has ended, the Scrum Master might call a meeting to
discuss how the sprint went. A few things they’ll want to know are:

Did we have enough time to do the things planned?
What worked?
What didn’t work?
What can we do better next time?

This kind of meeting can spiral if the Scrum Master does not focus
it. Even then, you can expect to be in this meeting for hours as
people dive into project process, communication and overall
improvement.

Problems with Scrum

The first and, to me, most glaring problem with Scrum is that
programmers aren’t professional athletes competing on a team. It’s
critical to understand that what works for a professional rugby team is
entirely different from a group of developers.

Athletes are driven towards mental and physical excellence in pursuit
of a championship. If they don’t win, they’ll lose their job. If they’re
playing on a team sport, then their success depends heavily on their
teammates and their coach. It’s a high-pressure situation, to say the
least.

Developing software is also high-pressure and yes, depends greatly on
the team and yes, if you don’t deliver something you could all lose
your jobs. That’s where the comparisons end.

Success in software is not nearly as well-defined as a sporting match
with a score that defines both a winner and a loser. For some
companies, this type of thing works. For others, not at all.

As a programmer, you can’t spend hours in a mental gymnasium
hoping to decrease your bug count. Sure, you can read books and try
to learn a few things, but it’s hardly the same. Your performance in
the daily standup won’t make or break an entire sprint!

I’m sure there are programmers (like myself) who enjoy competing in
organized sporting competitions. Personally speaking, that’s not the
way I want to build software.

There’s also the abundance of ceremony to consider: it’s annoying.
Daily standups, for instance, can grow tiresome quickly and end up
being the worst part of the day with the majority of the team on mute.

I don’t think I’ve ever been part of a standup meeting with 10 or more
people that hasn’t gone over by at least 15 minutes. That’s usually a
function of the Scrum Master, but unless you have an experienced one
who is good at working with people (aka “managing perception”),
you’re doomed.

I like this quote from Know Your Team:

Each of your team members’ faces is blank. Mouths are moving,
shaping the words, “what I worked on” and “what my blockers
are,” but no one is truly listening.

This is the reality of daily stand-up meetings. And it might be
your reality as a manager.

As a manager, you’ve likely witnessed this first hand. Your daily
stand-up meetings have become bloated and unengaging, the
more time passes and the bigger your team grows.

I have been on this team and it’s discouraging. You start to feel like
you’re acting out the process for the sake of saying you’re being Agile
without actually being agile!

For Scrum to work, everyone on your team needs to buy into it
and, more importantly, understand the benefits of it. I find this to be
the exception instead of the rule. I’ve been on many Scrum teams and
there are always people who bristle against the formality and boo-ya
sportiness of it.

That’s just the way people are. Give them a set of rules and the first
thing they’ll do is try to break them because chaos is your friend.

AGILE IS WHAT YOU MAKE IT


Let’s end our discussion where it began: Agile is about people. Your
team can make or break whatever process you put in place, so it’s
critical to make sure you embrace the human aspect of all of this.

You don’t need to be terribly strict, and should feel free to drop what
doesn’t work for you. The daily standup can be done via email every
other day, or maybe you can host something for after work twice a
week — make it a social event!

Remember: the goal is the sharing of knowledge, so everyone knows
what everyone else is doing. This can often be solved by making sure
the issues and tasks are updated on GitHub. We’ll talk more about
working against a PR in a later chapter, but watching someone commit
code against a PR (with solid comments) can be enough to keep
everyone else informed.

In my experience, projects never adhere strictly to Scrum, Lean, XP or
any other flavor of Agile. They use them as guides and adjust along
the way.

Let me show you what I mean.

IN THE REAL WORLD


Right now, the project is just you. Some ideas discussed above make
good sense in terms of organizing your efforts, making sure you
understand what’s going on, and also ensuring you focus on rapid
delivery.

I suppose if you wanted to do a one-person standup, you could! This
isn’t a bad idea altogether; using your journal to answer the same
questions (what I did yesterday, what I’ll do today, here are my
blockers) will start a practice of thinking about those things daily.

At the very least, let’s organize our efforts using Agile so we know
what we’re doing.

Our First Epic

An epic is a collection of user stories, but it can be a collection of
whatever you want it to be. Here is how the ASP.NET team uses epics:

The team is using GitHub’s Issues functionality to manage their epics,
stories, and sprints in the form of issues and milestones. Issues are
extremely flexible, as you’re about to see, and you can use them in an
“Agile” way. I put that in quotes, by the way because it’s easy to start
arguments when you say you’re doing things in an Agile way. There is
always someone to tell you that you’re not.

Here’s what an epic looks like in this project:



You’ll notice that there are no user stories here, only issues. This is
where we start arguing about being “Agile”. A user story is one or
more tasks rewritten from a user’s perspective. The point of this is
that you put yourself in the user’s place and, using some empathy, try
to build something they want to use.

These stories go through a translation, however, and are turned into
tasks that then become part of a sprint. What Damian is doing here is
skipping the story part and going straight to the tasks, recording them
as referenced issues and slick markup, which you’re about to learn.

Our First Epic

I like the way the ASP.NET team has adapted Agile to their culture and
project. If it works, it works.

Personally, I'm in favor of having a story element as well as the tasks
that go into it. Let’s use our repository and its issues list to generate
our first epic, which will describe the first time someone visits our
site:

Let’s break this down:

I prepended [EPIC] to the title just for clarity. Developers
have been identifying issues like this for a long time now, and
it’s common to see things like [WIP] (work in progress),
[BUG], [FEATURE X] (some feature), and so on. This kind
of thing makes scanning an issue list easier in a larger
repository.
I labeled the issue an Epic so we can quickly scan it along with
other epics in the future.
There are no stories or tasks just yet because I haven’t defined
them.

I really want to emphasize that we don’t need to go overboard on
detail here. Just get the basics out and if people have comments or
questions they can add them! If you’re following Scrum, this might be
where the Scrum Master and Product Owner figure out what an epic
will look like.

Adding User Stories

Our first user story is what happens in the first 3 seconds of Alex’s
visit to our home page, which is the most crucial interaction we’re
going to have. This is especially true for our upcoming first demo —
we need to grab people for another 30 seconds after they get there.

Why 3 and then 30? It’s a marketing thing. You have up to 3 seconds
to grab someone’s attention with your site, and if they’re interested,
you have up to 30 more seconds to engage them. If they’re still
around after 30 seconds, your chances of converting them into a
customer are exponentially higher.

I’m supporting this idea with my first story:

I could also have written this as “create a beautiful landing page with
nice fonts and an engaging headline” but I decided to keep with the
“Agile” way of doing things.

I labeled the issue as a Story and, in addition, I added some
checkboxes to indicate the tasks that need to be done. A story doesn’t
have to revolve around a single task. In fact, it’s common to have up to
5, although I think that’s too many for one story.

Notice that 13 on the bottom left? That represents the story points for
this story. I’m using a Fibonacci sequence with values 1, 2, 3, 5, 8, 13,
21, 34, 55.

Why Fibonacci? I suppose it’s a geeky thing, but I like the way it
represents actual difficulty: it’s never linear! Difficulty tends to grow
on an organic curve so, to keep things effective and simple, those are
my choices. I chose 13 because I suspect that I won’t be going with
static HTML to start off with, and will probably opt to use whatever
framework we’ll be using to build the app. Not sure yet.
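If you're curious, the scale itself is trivial to generate. Here's a quick sketch (capping at 55 is just my preference, not a rule):

```javascript
// Build a Fibonacci-style story point scale, capped at a maximum value.
// Starts at 1, 2 (skipping the duplicate 1) and grows from there.
function storyPointScale(max = 55) {
  const scale = [1, 2];
  while (scale[scale.length - 1] + scale[scale.length - 2] <= max) {
    scale.push(scale[scale.length - 1] + scale[scale.length - 2]);
  }
  return scale;
}

console.log(storyPointScale()); // [1, 2, 3, 5, 8, 13, 21, 34, 55]
```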

Let’s create our next story, which is what happens when Alex stays
around for 30 seconds:

Same deal with this issue, but this time I have a score of 8 because I
think this will be a little simpler to implement. If I was on a Scrum
team, the Scrum Master might call a planning meeting and set up a
sprint (which we’ll do in a second). Instead of taking my 8 at face
value, would instead opt for a poker session so we get a better
estimate.

It’s just me right now, so I’m going with this.

Right then, it’s time for our final story. What happens when Alex is
engaged for up to 5 minutes. This is now “investigation mode” and
Alex is seriously considering signing up for the service. There are
multiple things we can do at this point (get them on a mailing list,
offer a freebie, take a quiz, etc.) but we’re going to focus on simple:

This is much more in-depth and requires significant design work, thus
the 21 points. I do see value in just starting, which is what this story is
all about, and if we decide it needs more work in the future (which we
will) we can add another story to it.

Issues vs. Subtasks

You might be wondering why I don’t break this story into separate
issues, and I definitely could. For some organizations, this would
make sense to do. For me, I like the simplicity of having checkboxes
right here. Opening separate issues feels like overkill, if I’m honest,
and like I keep saying: simplicity, simplicity, simplicity.

Least friction is our goal for getting things off the ground, as long as it
doesn’t come at a cost.

That said, there is one major upside to dividing tasks out as issues:
email notifications. If your boss (aka me) is keen to follow your progress,
then take some time and create an issue for each task. You can then
reference those issues from the story as I did with the epic.

But, as your boss, I have far too much email already, so just getting a
notification when you add a comment is fine.

Rolling Our Stories Back Into Our Epic



We have 4 issues right now: 1 Epic and 3 Stories. Let’s roll those
stories back into our epic so we can keep track of them. We can do
this using GitHub’s editor, which will also tell us whether those
stories are completed:

If you go into edit mode on the issue, you’ll see a checkbox icon,
which will place the - [ ] markup in the body. This is “GitHub flavored
markdown”, which specifies that a checkbox will be rendered here. If
you add an x to it: - [x] then it will be checked.

We can then click the external link icon to reference another issue. I
want to reference our three stories and can do that using #11 where
11 is the number of the issue. If you click the external icon link, this
all becomes visual.
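In raw form, the epic’s body ends up as ordinary Markdown, something like this (the issue numbers here are placeholders standing in for our three stories):

```markdown
## Stories

- [ ] #11
- [ ] #12
- [ ] #13
```

GitHub renders each referenced issue with its title and open/closed state, and checks the box off for you as stories are completed.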

Behold, our epic!



This serves many purposes:

If we end up doing some form of Scrum or Lean, we’ve started
off on the right foot. We could hire a few people in the next
week, maybe a Scrum Master and a Product Owner, for
instance, and they would know exactly what’s going on.
As your CTO, I would be thrilled to see this page. I might have
a few comments, asking for a few more details, but all in all, if
I don’t have to bug you, that makes me happy.
It has helped us organize our thoughts and structure our
tasks. We can now have some more fun attaching these stories
to sprints!

OUR FIRST SPRINT


The only thing we can use to track our progress over time with
GitHub is the Milestone feature. It’s pretty simple: there’s a due date
and a progress percentage based on what issues are in that milestone.
It works perfectly for a sprint:

I created our first sprint and set the due date for two weeks from now.
I also attached our 3 stories to it:

You can attach issues to a milestone by selecting them in the list view
and clicking on “Milestones” in the list menu, which is what I did
here.
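If you prefer the terminal, GitHub’s `gh` CLI can do the same thing. This is a sketch: milestone creation goes through the REST API since there’s no dedicated subcommand for it, and the repo name, date, and issue numbers are all placeholders:

```shell
# Create the milestone via the REST API (swap in your OWNER/REPO and due date).
gh api repos/OWNER/REPO/milestones \
  -f title="Sprint 1" \
  -f due_on="2024-03-18T00:00:00Z"

# Attach the story issues to it (issue numbers are placeholders).
gh issue edit 11 12 13 --milestone "Sprint 1"
```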

ENGAGE!
This is our first sprint! How exciting! We have stories to work against,
a due date, and people will receive notifications as the work goes on
over time.

This, friendo, is a clean and well-organized GitHub repository and
something you should be proud of. When your colleagues join you
later on they will be impressed by your thoroughness, and they will
also appreciate the respect you showed for the work and for the work
they’re about to do. It’s not very fun to go to a party in a filthy house
with no chairs to sit in, is it?

Our repo looks professional and focused, which is precisely what you
want people to think of you. Managing perceptions and molding consensus,
look at you go!
EIGHT
THE FIRST SPRINT: YOUR PROTOTYPE
ACTION IS THE NAME OF THE GAME - GET SOMETHING IN FRONT OF YOUR BOSS OR CLIENT

Hey, we get to write some code! Well, soon anyway. One of
the more unsettling things about moving up in the
development world is that you code less, plan more. It’s
like any career: the more experienced people do the strategy and
planning and help everyone else. It can be a bummer, but that sad
feeling goes away when you ship something radical. Trust me.

You’re about to rock some boats, so get ready! Up to this point, we’ve
been dealing with words on a page and arms waving in the air, and
that’s about to change. If you ever wanted to know if your boss is a
micromanager, you’re about to find out.

We’re going to have some fun in this chapter, and in the next we will
discuss how to handle the fallout. It would be nice to think there
won’t be any, but in my experience there always is, and I’ll walk you
through some scenarios that you’ll probably encounter.

Before we get to the fun (and the fallout), we need to discuss
something important.

YOU AND YOUR TEAM


So far, I’ve been assuming that you’re the first person to start work on
the project and that your team will eventually join you. This is
common in startups, mid-size companies and the enterprise. I’ve
worked jobs in all three, and 90% of the time I was the one who
helped get things off the ground.

If that’s not the case and your team is ready to start with you (or has
already started), we need to be careful. You can easily burn bridges at
this point.

The emphasis in this book is execution, doing rather than talking (Win
through action). If it’s just you, then things are simpler, but if you have
a team working with you, your job is to help them execute, which
means you need to let them decide things for the most part.

For now, let’s ponder what this first sprint will produce.

THE PROTOTYPE
It’s human nature to want to tread lightly and carefully when starting
a new thing. “A fool rushes in” etc. This is true, but I think it’s a good
time to revisit my Ameritech Odyssey. I promised I would come back
to the story about using Microsoft’s IndexServer and how it shaped
my career, so here goes.

I flew back to San Francisco from Chicago on a Friday evening with
25Mb of XML documents – 6000 in all – from the Ameritech IT
department. These documents contained processes and procedures
that call center representatives used to answer questions from
Ameritech customers.

The next day, a warm Saturday morning in the Oakland hills, I was
scouring the web trying to find any blog posts on Microsoft’s
IndexServer. The web was pretty small back then and information was
extremely difficult to find. You would usually end up at Barnes &
Noble in the tech section if you needed to know something.

All I knew about IndexServer is that it created a search catalog that
was accessible using COM (the Component Object Model) and that I
could hook into that using classic ASP, otherwise known as just “ASP”
at the time. I just didn’t know how to do it.

And finally, I struck gold: an article on MSDN, Microsoft’s tech hub. It
was covering the more obscure parts of Windows NT, an “I bet you
didn’t know this was possible” kind of article with goofy tips and tricks
you could try if you were a cowboy coder, which I very much was.

I called my business partner at the time, Dave Nielsen, and told him
my discovery and how I was thinking we could create a prototype and
mock up the search results once I knew how it all worked and what
code I needed to write. His response changed my life:

Or we could just build it.

That hadn’t occurred to me. For some reason, I thought we needed
permission or something, but Dave pressed on:

I suppose we could wait for Dino to green-light a demo, or we
could just build it and show it to him in its working state. Every
so often, the best way to convince someone is to do the thing you
want done.

So I did. And we blew everyone’s mind. We also made quite a few
enemies in the process, but I’ll discuss that more down below – for
now we have to answer a simple question: do we start building the
thing, or do we expend a little less effort and create a throwaway
prototype?

Let’s consider both things.

PROTOTYPING WITH WIREFRAME TOOLS


Some developers are more into design than others and enjoy doing
things like wireframing. If you don’t know what that means, it’s
simply sketching things out using an illustration program to mock out
an interface, often with cartoon-like representations:

From https://balsamiq.com

This example is from Balsamiq, mocking up a music player. The neat
thing about this is that it focuses more on structure, flow, and where
things live in the interface rather than on colors and typography.

This kind of thing can be extremely useful, but it can also be
distracting if not done correctly.

For instance: if you were to show the image above to someone on the
marketing team, they might say something like “Is Winter Set a
playlist we have internally? Shouldn’t that be called out — whether
it’s ours or the user’s? Also, I think ‘Kidkanevil’ is spelled
‘Kidkaneival’…”

This is why we use Lorem Ipsum in this process and why blocks are used
in wireframes. If you ever want to get work done, do not show real
information in a mockup.

Figma

Note: when I first wrote this chapter in 2022, Figma was its own company. They
were subsequently bought by Adobe, and then in 2023 I got to change this note
once again as the deal fell through and Figma remained on its own.

It’s extraordinary what people can build with JavaScript these days and
if you want proof of that, visit Figma’s site and create a free account:

If you’re new to Figma, don’t bother trying to figure it out as you go, it
just won’t work. It’s truly overwhelming what’s possible with this
tool! Instead, head over to the learning section:

Or you can watch a video on YouTube — of which there are many. It’s
easy to see why this tool is so popular; it can do anything! Many have
called it the “Adobe Killer” but I’m not sure about that.

What I do know is that you can kick up a wireframe in pretty short
order:

This is from a template that I imported and as you can see, I don’t
need to worry about HTML or CSS to get the exact layout I want.

Things are blocked for me, which I like, and I can have multiple
screens that link together so when I click a button or a link — I’ll see
the new screen.

I like Figma a lot, but it’s overkill for what I need done. I would need
to spend 3–5 days learning the tool and then likely a week to get a
wireframe up. To be sure, a wireframe is not the same as a prototype,
but it does get the design in front of someone, which is what we want.

I’m leaning towards using a starter site or an off-the-shelf template
that I can tweak later on, so I’ll add that comment:

Let’s take a look at a few more tools.

Sketch and Balsamiq

Sketch is one of my favorite design tools and I use it regularly for the
graphics on my site. Its true strength, however, is with prototyping an
interface:

It’s the same as Figma in so many ways, with one exception: it’s a Mac
app whereas Figma is web-based.

With Sketch, you work with “artboards” that are dedicated to specific
screens. You can choose mobile, tablet, desktop, and even watch. You
can animate things to bring across an experience, create a brand color
palette, collaborate with coworkers and so on.

I love this tool and I have taken the time to get to know it, but I’m
confronted with the same issue: I would rather have something that works
when I’m done.

I feel the same way about Balsamiq: neat tool, doesn’t fit what I want
at this stage. Balsamiq is a little more focused on structure and flow
vs. design:

From the Balsamiq website

I do think this is very useful and of all the tools I’ve tried out,
Balsamiq is by far the simplest.

All of that said: CSS kits and starter templates are so good that
sometimes it’s just easier to get rolling right away rather than go
through a big design process only to start again with CSS and HTML.
The downside is that your prototype can end up looking exactly like
someone else’s site.

Either way, let’s take a look at a few themes.

UI KITS AND PREPARED THEMES


Unless you’re a CSS wizard, you’re going to want to lean on some
kind of ready-made toolset, and they’re very cheap and also easy to
find. Templates and UI kits are a lucrative business if you’re a
designer, and they’re only getting better every year.

I will share my choices with you, but there are so many more out
there that people love, and I encourage you to explore and come up
with your own selection. For now, here’s what I typically do.

Themeforest

I normally go to Themeforest looking for WordPress or Ghost (the
blogging CMS) templates, but they also have straight up HTML too.
It’s downright ridiculous how many high-quality, low-cost themes this
site has!

We can browse what we’re looking for by selecting “HTML” in the
category ribbon and then, in the search bar, type in LMS. You’ll see the
results immediately — but let’s be sure to sort them in descending
order:

Right away, we see some incredible templates for $20 to $30, which is
a steal!

For $24, you get every page your website needs, built with Bootstrap
(or so the sales page says), a popular CSS framework:

That’s just ridiculous! Most of these themes come with CSS
“preprocessors” too, things like SASS or LESS, which allow you to
work with CSS in a programmatic way. This is great for us because we
can quickly change the color and typography to meet our needs.
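In practice, rebranding a SASS-based theme is often just a matter of editing a few variables and recompiling. The variable names below are typical of what these themes expose, not taken from any specific one:

```scss
// _variables.scss: hypothetical brand overrides for a purchased theme.
$primary: #6d28d9;                      // our brand purple, replacing the theme default
$secondary: #f59e0b;                    // accent color for buttons and badges
$font-family-base: "Inter", sans-serif; // swap in the typeface we actually want
$border-radius: 0.5rem;                 // soften corners site-wide in one place
```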

What I’ve shown here is just one of many templates that we can use to
get our prototype off the ground. As always: it’s a good idea to read
through the reviews to see what people have to say. You can also view
a demo of the theme and click around to see how it “feels”. I typically
will open up the developer tools to inspect the themes as well, just to
see how difficult it will be to figure things out:

Here I can see that the template is created using Tailwind, not
Bootstrap, which is OK because I know TailwindCSS very well and this
makes me happier. Another thing to be certain you check is which
JavaScript frameworks are used. Most notable to me are the use of a
Bootstrap script as well as UIKit.js, which is yet another CSS
framework!

It looks impressive, doesn’t it? But things are starting to get a little
weird. Why the use of so many different CSS frameworks? Most of
these toolkits (except Tailwind) come with their own JavaScript
support files to handle things like dropdown lists and animations. The
author of this template is using Tailwind for the bulk of the CSS while
also using these other frameworks’ JavaScript files to… do…
something?

What is all of this doing to the browser payload? Let’s take a look at
the dev tools one more time, under the Network tab:

YIKES, that’s 22Mb for what is a pretty simple page! But before we
have a complete freakout on this, that size is probably due to a lack of
optimization to images, JS and CSS files and so on. I’m not terribly
worried about the payload — but it is something to keep in mind:
designers aren't always concerned with page size.
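That check is easy to script, too. In a real browser console you would feed the function below `performance.getEntriesByType("resource")`; here it’s written as a plain function over made-up entries so the idea stands on its own:

```javascript
// Sum the transfer sizes of a page's resources, in megabytes (1 decimal).
// In a browser: totalTransferMB(performance.getEntriesByType("resource"))
function totalTransferMB(resources) {
  const bytes = resources.reduce((sum, r) => sum + (r.transferSize || 0), 0);
  return Math.round((bytes / (1024 * 1024)) * 10) / 10;
}

// Hypothetical entries, roughly matching the page above:
console.log(totalTransferMB([
  { transferSize: 14 * 1024 * 1024 }, // unoptimized hero images
  { transferSize: 8 * 1024 * 1024 },  // JS and CSS bundles
])); // 22
```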

Thoughts on Prebuilt Templates

I’ve used HTML templates before and will likely do so again because
they fit my business: I want to stay small. I’ve done the startup grind,
growing things to what some consider “mid-size”. I’ve worked the
corporate and enterprise projects too, but for me, the velocity and
simplicity of “doing my own thing” keeps me interested. I don’t see
one thing as better or worse than another, it’s just what I like.

My site is currently using Rails with a custom template from
Bootstrap Themes. Before that, I used WordPress — the default for so
many businesses like mine. It works out OK and as far as look and feel
go, I use a theme builder (currently Thrive Themes) that helps me
focus on content, not design. I have a love/hate relationship with
WordPress: I just haven’t been able to stick with it for more than 6
months. Something always goes wrong with it, but dang it’s nice to get
your MVP up quickly.

The difficult part with using HTML templates from a service like
Themeforest is getting to know both them and the frameworks they
use because they all use CSS frameworks. If you know Bootstrap or
Tailwind (the most common), you’ve got a leg up, and you should be
able to customize things easily. Until you can’t.

That’s where the trouble comes in: when you change a thing in one
place, and it blows up your entire site. This has happened to me more
times than I care to remember, and it’s not because these designers
are bad people or bad designers for that matter — they just take a few
liberties.

I remember trying to dissect a template for use with a Node web app built on the Express framework. The view engine I was using (EJS) allows for “partials”, which are snippets of HTML that can be reused throughout the app. The designer, however, just did their thing and put special classes on the html and body tags that cascaded, meaning that elements within the body of a given page looked different based on the class set on the body. This included headers and the site-wide nav component. To the designer, things were just fine. As for me, a programmer, I was looking for abstraction.
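To make that concrete: the kind of abstraction I was hoping for looks something like this in EJS, where the shared chrome lives in partials instead of being restyled per page (the file names here are hypothetical):

```html
<!-- views/index.ejs: the page only describes its own content -->
<%- include("partials/header") %>
<%- include("partials/nav") %>

<main>
  <h1>Welcome!</h1>
</main>

<%- include("partials/footer") %>
```

With this layout, a change to the nav partial shows up on every page, which is exactly the reuse that a body-class-driven design quietly defeats.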

It’s not always this way, of course, but it happens often enough that
it’s a good idea to make sure you can do things from scratch if you
have to. This is why many people prefer “UI Kits” which are more
modular and built on top of popular CSS frameworks.

CSS Kits: Bootstrap and Tailwind UI

If you search for “Bootstrap UI Kit”, you’ll be awash in colorful buttons, shaded tables, and flashy hero sections. There are so, so many of these things; they’re all amazing, and each one has its own approach.

246 ROB CONERY

I own a few Bootstrap Themes myself, and I really like them. Tons to choose from, and they do just about everything you need, usually surpassing templates you might find on Themeforest.

In the last year, however, I started using Tailwind CSS because I like
their “utility first” approach. This is a subjective thing, of course. I
know CSS just enough to truly damage any UI I work with, but
somehow Tailwind keeps me from doing that. I think it’s because they
don’t try to abstract the idea of CSS — just make it more apparent.

I would rather not get into a discussion about CSS frameworks; ideally
you have your favorite and if you do, I think that’s the one you should
use, assuming they have themes or UI kits.

Here’s the deal when you work with a UI Kit: you assemble pages from
components, customize later. For instance, consider my favorite: Tailwind
UI:

That’s one of their prebuilt landing pages, constructed out of smaller UI components like a header, an inline form, badges, headings, and buttons. It’s gorgeous! All of Tailwind is, and the approach is thoughtful and smart all at the same time.

The process is straightforward:
1. You figure out the pages you need and create the files for them (index.html, about.html, and so on).
2. You add the “harness” for your site, which includes the html, head, and body tags as well as CSS and JavaScript tags.
3. You add the components you want to each page. The home page might have a hero section followed by a features section, testimonials, and so on.
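The “harness” in step two is just a skeleton that every page copies. A minimal sketch might look like this (the Tailwind Play CDN script shown is a prototyping shortcut, not a production setup, and the titles are placeholders):

```html
<!-- index.html: the shared harness each page starts from -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Home</title>
    <!-- Tailwind via its Play CDN; a built, purged stylesheet replaces this later -->
    <script src="https://cdn.tailwindcss.com"></script>
  </head>
  <body>
    <!-- Step three: paste UI kit components here (hero, features, testimonials) -->
  </body>
</html>
```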

How do you figure out which components and sections you need? A
great place to start is by reviewing the templates on Themeforest! The
designers who built those themes typically use former projects,
changing things just enough to not cause issues with their clients.

You could also look around at what other services offer. For instance:
Teachable is an online LMS that specializes in video. You could, if you
wanted, see what they have on their home and about pages and if you
felt inclined, you could do a free trial to see how they lay out their
courses and video page.

Personally, I don’t care for the design of Teachable. The font mix they
use for headlines is annoying to my eyes and the lack of transitions
between sections along with the oversized images remind me of…
something I would design… which is scary indeed!

SOMEWHERE IN BETWEEN: WEBFLOW


Webflow is a visual web design tool that reminds me of the old desktop tools I used back in the 90s and early 2000s, namely Microsoft’s FrontPage:

You could drag and drop HTML “stuff” to your heart’s content and build a page that looked… well, like you dragged and dropped things to your heart’s content.

Webflow is a bit easier on the eyes. You start by creating a project and
selecting a template for it:

They do the font selection, grid layout, effects, and animations as well
as the color scheme – all of which you can change.

If you’re familiar with an “Adobe Interface”, meaning property panes with a heavy click-this-then-that interaction scheme, then you’ll feel right at home here:

You can customize everything on this page using both dedicated and shared components, which will update across the site. Webflow also offers tools to help you build a site that doesn’t suck, including an auditor that can help you with accessibility and well-formed HTML:

The crazy power of Webflow comes in, however, when you begin to
work with collections. Webflow has a built-in CMS that you pop your
data into and then show that data through template components.

For instance, I can define our LMS courses and lessons by adding a
new collection called “Courses”:

It’s… powerful. Notice the “Collection Templates” at the top of the New Collection page? That’s a great touch, I think.

If you want to sell things, you can do that too. There’s a special area
for eCommerce on the left menu (a shopping cart) and when you click
it for a new store, two collections are added for you, as well as a few
more interesting things:

This is a full-featured storefront, and Webflow will even handle the checkout for you, integrating both Stripe and PayPal. If you’re selling digital products (like a book or downloadable videos), the order backend can handle fulfillment easily.

But what about an LMS like ours? This is where things get very, very
interesting indeed. Webflow is considered one of the main
components of the “No Code” movement, and as such, is trying to
embrace anything and everything that their users might need,
including Membership:

It’s powerful stuff and if you’re interested, go take a look!

Thoughts on Webflow

I really like this service, especially for “brochure” or marketing sites that provide information about a company, service, or product. For a very low price (less than $20 USD) you can have a beautiful site up and running with some stunning interactivity. E-commerce and Membership cost more on top of that, and they’re also rolling out orchestrated logic features which could replace so many other services (such as email marketing automation)… it’s a compelling service!

The main issue I have with services like this is that you’re part of their
ecosystem and if they change something, you change something. I have
been burned by so many of these “No Code” apps in the past! Allow
me to share some sadness:

In 2015, I started selling The Imposter’s Handbook through Gumroad. They do checkout and delivery with a simple HTML snippet. It seemed ideal until I noticed that they charge you transaction fees and a monthly fee per customer. There was no mention of these fees until you signed up on their monthly plan. Customer count never declines, which means that the more you sell, the higher the monthly fee goes, forever.
In 2017, I moved to using SendOwl as the checkout and
delivery service. The book was selling extremely well, so I
signed up for a higher tier that included “Unlimited Download
Bandwidth”. After selling on their platform for only 3 weeks, I
received an email from them asking me for an additional $100,
or they were going to shut my account down. It turns out I
was using a lot of bandwidth (11G to be precise) and they
were losing money. I asked them if they could define what
“Unlimited Bandwidth” meant, and they said “well, of course
within reason” so I left them too.
In 2012, Shopify changed their API and didn’t actively tell anyone. They did post something on their developer blog, but they didn’t let their store owners know of the change. This broke my Webhook receiver, which I was using to deliver things to my customers. Because it was broken, Shopify shut the Webhook off due to the 500 response. I was livid because I had no idea until 50 or so furious customers wanted to know where their stuff was. Shopify claimed they let me know because they wrote a post about it; I had some unkind words for them.
In 2015, I started to roll my users into UserApp, a hosted
authentication service like Auth0 or Okta. UserApp was one
of the very first players in this field, and it looked
remarkable. Later that year, the founders up and walked away
from the business, ghosting everyone on the platform and
letting it rot. I dodged that bullet by weeks, as I didn’t
actually flip everything over, but I lost a ton of time in the
process.

The point of all of this: aside from Shopify, these companies are a funding round away from going away and taking a chunk of your business with them. Webflow just hit $4 billion in valuation due to a Series C round of funding and has apparently been profitable for the last few years, so that’s a good thing. But will they be around 3, 5, or 10 years from now?

What about our company? Do we go with a simple service like Webflow and get moving quickly, or do we flex what we know and build something laser-focused on our users? This requires some long-term thinking, which is a skill you’ll need to have as a senior developer.

THINKING LONG TERM


Back in 2002, I was CTO of a small healthcare company that had 8
developers. It was a startup, and we were growing quickly, and I
remember trying to wrap my head around our workload. We didn’t
use Agile because I didn’t know what it was nor how it worked.

One day, I was reading CODE magazine and there was an article
written by Mike Gunderloy (I don’t have a link for him) and he was
talking about the longevity of a project. I can’t find the article either,
and I’ve tried, but the idea was simple enough: you should consider
what will happen to your project in a year from now, 5 years from
now, and then 10 years from now.

Software people have a hard time thinking this way, and I think it’s
because technology changes rapidly. New frameworks come about,
new trends, companies die or kill their products. It seems like a 3-year
shelf life is about the most you can expect.

Our project isn’t even off the ground yet, but the moves we make now
will be critical to its future health. There are many questions we need
to answer, and when I’m confronted with such things, I find it helpful
to look at what other successful projects have done. I also look at the
failures — but not as much as the successes.

Why, you ask? While I do believe there is a lot to learn from failure, I find seeing what someone else did right to be even better. It’s common knowledge that most startups fail because they try to grow too fast, pulling in funding, making promises, and not delivering on the vision. They get liquidated and “sold off” so investors get repaid; rinse, repeat.

Other projects take a much more “slow and steady” approach. Pluralsight, for example, was profitable from day one. Same with my company Tekpub – we never took a single dime of funding, but I did think about it.

GitHub is, to me, a company that did it their way. They didn’t go
public, but they have stayed true to themselves for over a decade. Let’s
use them as a case study.

CASE STUDY: GITHUB PROJECT EVOLUTION


I talk a lot about GitHub in this book, and it’s for good reason: so much
of your work will happen there. Of course, there are other sites out there,
like GitLab and so on, but if I had to pick one site for you to master, it
would be GitHub. It’s been around forever and still leads the way in
terms of developer collaboration.

I interviewed Tom Preston-Werner, the cofounder and first CEO of GitHub, back in 2009. He was, shall we say, very self-assured and was having a great time with his new company. He was turning down funding and wanted to take the company public, an idea which seemed far-fetched at the time… but it turns out, the dude had vision.

Here was their site when they launched in 2008:



And here is their launch announcement, which they still have on their
blog!

It did not, of course, look like this back then… but the words are the
same.

One Year

A year later it was a bit cleaner, with super-bro copy sprinkled everywhere. This is 2009, and the service largely remained the same; however, they were having internal engineering issues because the service was growing much faster than they anticipated. That’s a good problem to have, and the GitHub user base forgave them… and kept signing up:

Tom wrote a blog post, celebrating the one-year mark:

It’s hard to imagine, but just one year ago today we made the first
commit to the GitHub repository. We don’t have a baby book for
GitHub, so we’ll have to settle on the blog to record our handprint
and first words.

We have four full-time employees: Tom Preston-Werner, Chris Wanstrath, PJ Hyett, and Scott Chacon. Our support man, Tekkub, answers all your nuanced questions via email, IRC, and on the forums. We’ve taken a grand total of $0 in venture capital or other outside investment. Just recently we topped 20,000 public repositories.

But we couldn’t have done it without you, our loyal users. You’ve
dared to try a new version control system and seen how much
better things can be. Thanks for joining us on this adventure, we
look forward to the upcoming year to make your life as a
developer even more amazing!

5 Years

Now let’s jump ahead to 5 years after the project launched. They have
matured as a company and solved many of their engineering issues.
My Phriend Phil joined GitHub in 2011 with the title of “Windows
Badass” and went on to become Director of Engineering, overseeing
the creation of the Atom text editor, Electron, and GitHub Desktop.
He retired in 2018 after Microsoft bought GitHub.

In 2013, when Phil’s team was working on Electron, this is what GitHub looked like, with Phil’s work front and center on the home page:

I wonder if Tom and his cofounders ever thought they would expand
into the enterprise, build a text editor, and change the entire landscape
of desktop development with Electron.

I don’t know, to be honest, and I don’t think Tom did either, but I’m
sure he had a dream.

Ten Years

The 10-year mark always seems so very far away, and at the risk of
sounding old… it really does get here much faster than you think it
will. In 2018, 4 years ago, GitHub turned 10:

The headline has changed as more and more developers, and the
businesses they work for, embrace Git. Microsoft has transferred all of
its open-source work there (which is a lot) and then bought the place
outright.

GitHub has continued to focus on its core strength: helping developers communicate and ship software:

Tom’s cofounder and CEO for many years, Chris Wanstrath (defunct),
wrote a nice letter to the community to celebrate this 10th year:

On this day 10 years ago, GitHub officially went live. We started with a pretty simple purpose: to connect developers and make it
easier for them to work together on projects with Git. In the last
decade, we’ve evolved as a company and as a platform, but the
reason GitHub exists is fundamentally the same. What makes this
platform special isn’t an idea or an invention. It’s the people using
it—and GitHub is celebrating 10 years because of you, our
community.

When we look back at the last decade, it’s not any one individual
piece of software that we remember, it’s what people have done
with it. You’ve shared, taught, tinkered, and built on GitHub from
all around the world. At launch, we couldn’t have anticipated the
number of projects we’ve seen take shape, the one-line programs
and massive frameworks. We also never imagined that businesses
would become so deeply invested in the open source community
or that so many of you would learn from each other’s code.

GitHub launched at a time when technology was connecting people in new ways, but as I wrote in our launch post, let’s not
pontificate on the journey. Your work speaks for itself—and we’ve
collected some of our favorite moments and milestones to
celebrate just a few of the ways you’ve pushed software forward.

As we look ahead, I’ll keep this simple. Together, you have defined
what software is today. And you’ll continue to shape its future in
the years to come. So what’s in store for the next 10 years of
software? We’ll follow your lead.

In the meantime, we thank you for the code you’ve committed, the pull requests you’ve merged, the documentation you’ve
written, the projects you’ve shared, and for the 10 years of GitHub
you’ve made possible. We’re grateful for them all, and we can’t
wait to see what you build next.

The Common Thread

Imagine yourself writing a retrospective post a year from when your project starts. Then at 5 years and finally at 10. This isn’t a weird
metaphysical practice! Fantastic ideas and insights can happen when
you let yourself daydream about such things.

For instance: I used to daydream about running a large online subscription site with 20,000 users who were learning together and
collaborating. When I imagined what my day would look like, I freaked
out! People that give you money each month expect certain things at
certain times. There’s no way I could ever run a site like that on my
own. This is OK because with that many people I could afford to hire
out for support, which would probably suck as I would have to
manage that effort. That would detract from my service, so no thanks.

When thinking about GitHub’s “arc”, it seems clear that they concentrated on delivering their vision of enhancing developer collaboration. They didn’t pivot, though I’m sure they talked about it, and they didn’t spin up multiple tangential projects that would dilute their brand.

They stayed focused. Using GitHub today feels like using it 14 years
ago, and I think it’s pretty cool that 90% of the work I do uses the
same features that were available back then.

OK, case study over, let’s get to work and write some code!

CHOOSING A PROTOTYPE FRAMEWORK


If you’re working with a team, it’s likely you already know what tech
stack you’re going to use for your application. If you’re the first person
on the project and will be assuming the lead position, this decision
will likely fall to you.

There is no way I can offer you help or suggestions on what stack to use other than to say “go with what you know”. Early on in my career
this was a simple matter: I knew ASP and later, ASP.NET, so the
decision was easy. After that, I moved into the Ruby on Rails world,
which, once again, made my decisions straightforward.

Nowadays, I have more options than I can deal with. I can build applications using Django (a Python web framework), Rails, Node (with Express) and most frameworks built with Node, Vue, and Nuxt, the framework built on top of Vue. I’ve written applications using static site generators as well: Jekyll, Hugo, and Middleman, to name a few. I’ve used Firebase as my storage and backend functions too…

Having so many options is not always a good thing, believe me! But I
mostly work on my own and build things for myself. When it comes
to working with a team, things become easier.

In 2019, I rebuilt Microsoft’s LearnTV management app. I knew that the position was temporary because I was “loaned out” to the group who needed it. We talked at length about what they required, and I also asked about the people they were going to bring on in the future, and the answer was simple: they were focused on the frontend more than the back. They described a complex frontend using calendaring, reordering of lists, a simple search, etc., and it became clear to me that they were going to want some flavor of single-page application. I ended up choosing Nuxt, a Vue framework that I know well and that can be customized easily.

It’s still in production, which is great, and the developers who came
on after I left have been able to support the application easily. Or so I
hope!

For you, leading this effort, the first thing you need to consider is who
will be picking up for you as your team expands. And, of course,
where things will be in 1, 5, and 10 years from now. Rails and Django
have been around for 15 or so years, and they’ll probably be around
for a long time to come — those would be solid choices if you’re
looking to hire Ruby or Python people.

Node is also a good choice, as JavaScript is one of those languages that everyone knows (mostly). There is a little more churn, however, when it comes to frameworks and longevity.

Putting Off the Rewrite

It doesn’t matter what framework or language you pick, you’re going to rewrite everything at some point. It’s a natural thing, and it’s also a pain in
the butt. I rewrote Tekpub within a year of launching, moving from
ASP.NET to Ruby on Rails. It was a lot of work, but completely worth
it. I rewrote everything again a year later when Rails 3 came out — it
gave me an excuse to clean things up.

Why did I do these rewrites? Here were my reasons:

Velocity. Businesses evolve and change given customer demand. The very last thing you want to do is say “we can’t do this with our current codebase”, which happens more than you might think.
Tests. Rails makes testing so, so simple. In fact, I would say it’s more than that — they make it hard not to test! The generators cover everything for you, and it made me feel more confident in my codebase.
Faster development meant more content. Tekpub was a
content-driven site and I needed to focus on that, not building
things. I enjoyed the latter a lot, it kept me sharp, but
dropping everything because of a problem really sucks. Being
able to fix things quickly is a HUGE win.

I spoke with my partner (James Avery) about these changes every time. He was a hard one to convince, but eventually, we both agreed that more nimble and flexible was the right call.

The thing is: you don’t want to kick off A Great Rewrite when your
brand is heating up and people are coming to your site. They like
what they see! Let them like it! Twitter was notorious for their “fail
whale” back in the late 2000s. They were trying to solve massive
scaling problems with their Ruby on Rails codebase and single
MySQL database. They stuck with it and let things crash occasionally
until the whole thing was rewritten in Scala in the early 2010s. They
took their time, hired the talent they needed, and set about fixing
things.

The point is: Twitter needed the rewrite. Developers will often rewrite
something just for the sake of “cleaning things up”. Joel Spolsky,
cofounder of StackOverflow, wrote:

We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to
bulldoze the place flat and build something grand. We’re not
excited by incremental renovation: tinkering, improving, planting
flower beds.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

It’s harder to read code than to write it.

So, so very true, and I have been guilty of this. The thing is: we are indeed going to create something messy. It can’t be avoided. Early-stage applications are subject to dramatic change and whimsical, fantastical marketing requests, all within an oppressive ramp-up schedule. It’s fun, but the code often pays the price.

It’s tempting to think that organized and structured frameworks like Rails or Django can help with this, ideally forcing you to follow
patterns and ideas as you build things out. Speaking from experience,
however, the only thing worse than an unorganized mess is an
organized, structured mess.

This is the Denver Public Library. I’m sure there are some people in the world who think this building is interesting. Next to the Seattle Space Needle, I think this is one of the ugliest bits of architecture I’ve ever seen. You can’t “fix” the organized visual chaos of this structure, either. It needs to come down entirely and be rebuilt from scratch… nothing can improve this.

So Denver just has to deal with it for now.

HELLO WORLD
The antidote to creating a colossal organized mess is to ship early and iterate slowly. Easier said than done! If I were in your shoes, I would take a deep breath and look 3 years in the future. What platform/language/framework would I want to support?

To answer that question, I might think 3 years in the past: what was I
working on then? For me, I was working with Node a lot, Vue, Rails
and Firebase. I also worked with Django on a project and I had a great
time.

I like each of these frameworks, although if I’m honest, I’d rather work with Ruby or Python instead of JavaScript or TypeScript. But I know that’s a personal decision, and I’ll be hiring a team soon — probably best to go with something we’ll all know or be able to pick up quickly.

The key is to get off the ground quickly using the thing you know (or
can learn) the best:

This is our first prototype and it looks wonderful! No, this isn’t a
“draw the owl” moment, I promise. The entire next part of the book is
how I got here and the process I followed.
NINE
SUMMARY: BUCKLE UP
SHIPPING IS WINNING; SHIPPING GETS YOU NOTICED

In this first part of the journey we focused on tools, philosophies, and sprinkled in some practical thoughts as well. We didn’t do a lot of coding, but that’s what being a senior/lead developer is all about: less coding, more shipping.

Your job is to deliver software, which is entirely different from writing it, although you’ll probably do that too.

If you come to Hawaii on vacation, you might see a group of outrigger canoes out in the water, paddling in a bay or close to shore:

I paddled for 4 seasons many years ago before I blew out my shoulder, and it was one of the most intensely relaxing things I’ve ever done. There was no time to let your mind wander; you had to focus and keep your paddle in time with the person in front of you. You had to control your breath as well, so you didn’t hyperventilate because your diaphragm was getting compressed by leaning over too far. If you made the mistake of extending too far forward, you ran the risk of pulling your arm out of its socket at the shoulder, which is what happened to me.

Each person in this canoe has a job: the person in the very front is called the “stroker” or just “stroke”. They set the pace, and every other person in the canoe has to follow their lead so you’re all stroking in time. The person in the second seat paddles on the opposite side of the stroker, so they keep time for their side of the boat.

Seats 3, 4, and 5 are the “wheelhouse”. These are usually the biggest,
strongest paddlers that generate the most power/stroke.

Seat 6 is where you sit: the ho’okele. You steer the boat, deciding where everyone is going by using your paddle (and skills). Every now and again you might paddle as well — when the boat needs a boost or everything is humming along, and you want to up your velocity.

OUR BOAT IS IN THE WATER


Arriving at practice was always a joyful experience. You see your
friends, it’s late in the day, and you know you’re about to be out on
the calm blue water of the bay, watching the sunset. The coach yells
out “let’s get the boats!” and you, your crew, and all the other crews
head to the boat tents, pick up your boats and put them on the
“wheelies”.

At this point, the thing is just a big, awkward bit of fiberglass that
requires a lot of care. Two of your crew need to hold up the amas, the
arms that extend to the side and connect to the outrigger, to make
sure they don’t drag, and you gently convey the boat to the water.

Once in the water, it’s a boat and you’re its crew. As my coach used
to say:

When she’s in the water, she’s a part of you, and you’re a part
of her

Our boat is now in the water. It’s launched for everyone to see and you’re
the steersman. But as we well know, there’s a lot more to paddling
this boat than just sticking our paddle in the water and pulling as hard
as we can.

Everyone on the team has a role to play which depends on their skill,
and no role is more critical than yours.

As the steersman of this project, you just spent some time reading up
on techniques, tools, and ideas on how to steer your project
successfully. Now it’s time to push off and win a race or two!
PART TWO
DEVELOPMENT
TEN
A BRIEF REVIEW OF GIT
YOU WILL LIKELY BE USING GIT TO MANAGE YOUR SOURCE CONTROL, SO LET’S HAVE A QUICK REVIEW

We are underway! In this part of the book we’re going to focus on our first sprint and how it was executed. Successfully, I might add (good job!).

The actual code isn’t relevant. As I mentioned in the last chapter: I used Nuxt and a template I purchased so we could get off the ground, but your choice is probably going to be different because your experience is different. Either way: the “what” at this point isn’t what we care about: it’s all about the how.

FOCUSING ONCE AGAIN ON GITHUB


In the following chapters we’re going to be using GitHub as our
primary source control and management tool. I know there are other tools
out there, but I had to focus on the thing I know best, which is GitHub.

I do feel that many of the processes you’ll go through are the same, regardless of where your source code is hosted. To that end, many of the processes used with GitHub translate to other services, such as GitLab, Bitbucket, and yes, even Team Foundation Server.

If you’re not a GitHub person, I hope you can still find value in these
chapters as the processes are all based on Agile ideas (customer focus,
quick iteration, ongoing communication, etc.).

STARTING, GROWING, LAUNCHING


I’m going to start at the very beginning, discussing source control and
the nuances of using it and, as you might expect, I’ll be focusing on
Git. It’s become the de facto source control provider and even if you
disagree and hate it, it’s still something you’ll need to know if you’re
going to work on a team in the software industry. It’s everywhere.

We’ll then dive into tips and strategies for working with GitHub, specifically the “GitHub Flow” (as it’s called) as well as the simpler Trunk-Based Development approach that is gaining popularity.

We’ll discuss architectural approaches, summarizing common techniques, who uses them and why. I’ll have a few opinions on the
matter but know this before we even get started: everyone has opinions
on this stuff. The only right one for your project is the one that ships.

Lastly, we’ll discuss job retention. Your job, to be specific. If you build an
application and ship it off without an analytics and reporting story,
you’re asking to be replaced. Your boss (me) will want to know how
the app is performing and whether we are doing the right things.
From customer interactions all the way to monitoring and logging, we
need to know what’s happening in there!

ASSUMPTIONS
I’m going to assume that you know and understand Git, at least to a
reasonable extent. If you don’t, I’ve added a small primer below, as
well as a history of source control as we know it. Believe it or not,
people used to argue about whether the use of source control was
useful!

Spoiler: it is.

As a senior developer, your source history is the story of your project. It is vital that you know what you’re doing; otherwise, all your work will look like a complete mess.

For now, here are the essentials that you’ll need to know before
moving on. If you need to go deeper, I would suggest spending an
hour or two on YouTube so you understand the concepts at the very
least.

GIT ESSENTIALS
There are fewer than 10 commands that you must understand to effectively use Git:

git init will create a new repository on disk. This should be in the same directory as your code.
git add marks untracked files as “to be added” to your
repository.
git commit will add/update/delete files based on what has
changed. You add a message using -m, which is critical, so
people know what you did.
git remote will add a remote repository link to your local
repository. This is usually the “origin”, a central place (or
branch) that a team typically regards as the source of truth.
git remote add origin is a command you’ll use with every
project, and it often points to GitHub: git remote add
origin git@github.com:robconery/red4-portal is an
example.
git pull will fetch updates from a remote repository. An
example might be git pull origin which will try to merge
whatever changes are in the origin with your local.
git push pushes your changes to another branch in a remote
repository. A command you’ll use often is git push origin
master or, more likely, git push origin my-local-branch because
committing to master is messy (we’ll discuss this in the next chapter).

git merge will take the changes from one branch and do its
best to non-destructively merge them with another branch.
Occasionally, this doesn’t work, and you end up with a “merge
conflict” that you need to step through. We’ll talk about this
later on too.
git checkout allows you to move between branches in your
local repository. A common use of this is git checkout -b
which will create a branch if one doesn’t exist. An example
would be git checkout -b robconery/fix-something-broken-
#32. That’s a long branch name, but in a larger project it’s
extremely helpful to have descriptive branch names like this.
The #32 part references a GitHub issue number.
git branch shows you information about a given branch, but
the most common use of this command is git branch -d,
which deletes a branch from the repository.

There are, of course, a load of other commands, and I’m sure I’m
leaving a few out that some readers will find essential (such as stash).
There’s a lot to learn, and I suggest spending some time to master this
tool.

When using Git and writing code, it’s a good idea to be doing it inside
a branch. Many teams consider committing directly to main (the
default branch) a Very Bad Idea because main is only where tested,
deployed code lives. Others are just fine working in the trunk, and
we’ll see both ways in the next chapter.
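To make those essentials concrete, here’s a sketch of the whole loop in a throwaway repository. The directory, branch name, and identity below are all invented for the demo, so adjust them for your own project:

```shell
set -e                                # stop on the first error

repo=$(mktemp -d)                     # throwaway directory standing in for a real project
cd "$repo"

git init -q .                         # 1. create the repository on disk
git checkout -q -b main               #    make sure the default branch is named main
git config user.email "dev@example.com"   # placeholder identity, demo only
git config user.name "Demo Dev"

echo "# Red:4 Portal" > README.md
git add README.md                     # 2. mark the untracked file as "to be added"
git commit -q -m "Initial commit"     # 3. record it, with a message so people know what you did

# 4. do the actual work in a descriptive branch, not in main
git checkout -q -b robconery/fix-something-broken-32
echo "fix applied" >> README.md
git add README.md
git commit -q -m "Fix the broken thing (#32)"

# 5. merge the finished work back and clean up the branch
git checkout -q main
git merge -q robconery/fix-something-broken-32
git branch -d robconery/fix-something-broken-32 > /dev/null
```

On a real team you’d push the work branch to the origin and open a pull request rather than merging locally, which is exactly where the next chapter picks up.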

There are different ways of dealing with Git and GitHub, flexing it to
tell a compelling story about your project. This is your job, and you need
to be certain you know your tooling! It could very well save your butt
in the coming months.

To understand Git, it’s a good idea to understand where it came from
and why. Aside from being an entertaining story, you’ll learn why
centralized source control (CVS, Subversion, etc.) has been largely
abandoned in favor of the lighter-weight, easier to use Git.

I invite you to read on and get lost in a fun story or, if you like, just
jump to the next chapter and we’ll get rolling with more exciting stuff.

THE HISTORY OF MODERN SOURCE CONTROL


I didn’t know what source control was until 2002 and by that time I
had been coding professionally for over 5 years. For big clients.
Including Ameritech, the Fortune 50 supermassive telecom client I
wrote about in earlier chapters.

I think about those days and want to smack myself.

Source control was difficult back then and, believe it or not,
programmers found it cumbersome. In the Microsoft world, you had
one choice: Visual SourceSafe. To say that this tool was reviled across
the industry is far too kind — it is directly responsible for
programmers hating source control.

Then there was Subversion, or I should say “is” because people still
use it. It came out in 2004 as an “update” to CVS and was a breath of
fresh air. This is the tool that finally got me using source control
routinely.

It’s difficult to admit this, but source control was much more of a
chore than it is today. I love how Mike Gunderloy puts it in “Coder to
Developer”, written back in 2004:

Most developers start out writing code without any sort of source
code control, except perhaps for making occasional backups of an
entire directory of stuff. But gradually, most of us realize that
having a tool help keep track of code changes and prevent code
loss is a good idea. We tend to go through three levels of
enlightenment in this process, starting as beginning users and
gradually developing expertise …

The first level of enlightenment is simple: You start using source
code control…

The next step in your path to source code enlightenment is to be
able to effectively manage your source code tree…

The 3rd level of source control is, essentially, mastery, but keep in
mind that this was 2004 and this stuff was a lot harder than it is
today. At least to me.

THE OLD, FAMILIAR STING


I liked Subversion, but that’s mainly because I didn’t know any better.
The idea of branching, merging and resolving conflicts is
straightforward today with tools like Git and Mercurial (and others),
but back then… egads.

Subversion, like Team Foundation Server today, uses a “file checkout”
metaphor which “locks” a file when you check it out to work on it. If
you forget to check it back in, no one else can work on it and it’s an
extremely painful process. There are settings to disable this and
administrators can also override the checkouts… which causes all
kinds of fun pain.

Branching in Subversion is literally copying a directory, often the main
“trunk”, and then merging the directories back together. It’s… not
fast, and it causes a lot of file bloat.

A BRIEF HISTORY OF GIT AND OTHER TOOLS


As you probably know, Git was created by Linus Torvalds who also
created the Linux kernel. The story goes that he had been using
BitKeeper to manage the source for the Linux kernel project when one
of the project members, Andrew Tridgell, created a special client called
SourcePuller to read the metadata coming off the BitKeeper servers.

Tridgell (known as “Tridge”) was working on a project for the Open
Source Development Labs (OSDL) at the time. The project he was
working on had nothing to do with the Linux kernel, and BitKeeper
didn’t like that.

Back then, BitKeeper was proprietary and people needed to pay to use
it, but Linus worked out a deal with the company behind BitKeeper
(named “BitMover”) so that he and his contributors could use it for
free. This caused problems on the Linux kernel project because some
maintainers didn’t like that closed-source software was being used on
a free, open-source project. Others didn’t see how source control tools
affected anything and, simply put, there was nothing that worked as
well as BitKeeper for their project. Linus knew the tool well and he
liked it. Since he was responsible for maintaining the source history
and managing the patches, his decision was final.

To make matters more fun, the CEO of BitMover, Larry McVoy, was a
contributor to the Linux project. He wanted to help the project so he
decided to open up parts of the BitKeeper server so that bridges could
be made for project members using Subversion and CVS. In doing so,
he made it possible for “Tridge” to create his metadata reader.

This is where things blew up. BitMover didn’t like that OSDL was
accessing metadata they shouldn’t be, so in 2005 they stopped giving
free licenses to people that worked at OSDL. Unfortunately, one of the
names on that list was Linus Torvalds.

This made Linus pretty angry:

Linux founder Linus Torvalds has followed up his weekend
condemnation of reverse engineering with an astonishing personal
attack on the integrity of one of the most respected figures in the
open source community, rsync author and Samba co-lead Andrew
Tridgell.

Torvalds accuses Tridgell of playing dirty tricks with his
proprietary source code tool of choice, Bitkeeper and destabilizing
the product. These are serious accusations to make.

Torvalds uses the pay-for proprietary software to manage the
Linux source code (obliging other kernel developers to follow
suit), but last week its owner, Bitkeeper CEO Larry McVoy, yanked
the license, pushing Torvalds to look for an alternative. He's now
going to write his own. For this inconvenience, he blames Tridgell.

So Linus decided to take a vacation and see what he could come up
with as he could not find a free source control system that did what he
needed. Olivia Mackall decided to do the same thing, and created the
Mercurial project at the same time.

HELLO GIT
Linus’s design goals for the new system, which he called “Git” as a
self-deprecating joke, were:

Patching should take no more than 3 seconds
Be distributed like BitKeeper
Include strong safeguards against corruption
Unlike CVS in every way. If there was a decision to be made,
do whatever CVS doesn’t

Development began on April 3, 2005, and on April 4, Git was
managing its own source. On June 16, just 2 and a half months after
the project started, Git was managing the Linux kernel. On July 26th,
Junio Hamano took over the project and has led it since.

MERCURIAL
The history of Mercurial is tied directly to Git, so I think it’s worth
bringing it up here. It’s also a fascinating story. The original project
was kicked off by Olivia Mackall (historically known as Matt, in case
you look things up). She named the project Mercurial because:

Shortly before the first release, I read an article about the ongoing
Bitkeeper debacle that described Larry McVoy as mercurial (in the
sense of 'fickle'). Given the multiple meanings, the convenient
abbreviation, and the good fit with my pre-existing naming
scheme (see my email address), it clicked instantly. Mercurial is
thus named in Larry's honor. I do not know if the same is true
of Git

Larry McVoy (CEO of BitMover) didn’t like this at all and required
that commercial users of BitKeeper not use the tool to create a
competitor to BitKeeper, which seems like a very shortsighted decision
to me.

He actually followed through with this threat! A person named Bryan
O’Sullivan was working on Mercurial with Olivia and their employer
was contacted by McVoy, who demanded O’Sullivan stop working on
the project:

At my workplace, we use a commercial SCM tool called BitKeeper
to manage a number of source trees. Last week, Larry McVoy (the
CEO of BitMover, which produces BitKeeper) contacted my
company's management.

Larry expressed concern that I might be moving BitKeeper
technology into Mercurial. In a phone conversation that followed,
I told Larry that of course I hadn't done so.

However, Larry conveyed his very legitimate worry that a fast,
stable open source project such as Mercurial poses a threat to his
business, and that he considered it "unacceptable" that an
employee of a customer should work on a free project that he sees
as competing.

To avoid any possible perception of conflict, I have volunteered
to Larry that as long as I continue to use the commercial version
of BitKeeper, I will not contribute to the development of
Mercurial.

I don’t think this kind of thing could happen today, but who knows.

Mercurial is still around, but Git is far, far more popular. Bitbucket, a
competitor to GitHub many years ago, made its name by using
Mercurial instead of Git, as many people felt that the Git API was too
“hostile” (whatever that means).

Unfortunately, they moved to using Git as well in 2020:

The version control software market has evolved a lot since
Bitbucket began in 2008. When we launched, centralized version
control was the norm and we only supported Mercurial repos.

But Git adoption has grown over the years to become the default
system, helping teams of all sizes work faster as they become
more distributed.

SO LONG, BITKEEPER
Git made an immediate splash when it came out and has grown
massively ever since. Its popularity soon drove BitKeeper into the
dustbin, and it went open source in May 2016. This is what Larry
McVoy had to say about that:

A few days ago, in a Q and A on Hacker News, McVoy wrote:


"Git/GitHub has all the market share. Trying to compete with that
just proved to be too hard. So rather than wait until we were
about to turn out the lights, we decided to open source it while
we still had money in the bank and see what happens.

"We've got about two years of money and we're trying to build up
some additional stuff that we can charge for. We're also open to
doing work for pay to add whatever it is that some company wants
to BK, that's more or less what we've been doing for the last 18
years.

"Will it work? No idea. We have a couple of years to find out. If
nothing pans out, open sourcing it seemed like a better answer
than selling it off."

Honestly, I hate to see anyone’s business explode… or I suppose
“implode” is a better word. I do understand that you have to protect
your company, but demanding that coders not build the things you
don’t want them to is going to have a Streisand effect, where telling a
group of people not to do something results in many more people
doing it just because you said not to.

THE RISE OF GIT


In 2006, when I worked at Microsoft the first time, I went to lunch
with two friends: Phil Haack (yeah, him again) and John Lam. John
was working on a project I was very excited about: IronRuby. He was
trying to create a compiler which would turn Ruby code into
something that could run on the .NET runtime. An interesting
project, but unfortunately doomed to fail.

During our lunch, John started talking about “Git” and then Phil
jumped in, talking about how much he loved it. We were eating Thai
food in Redmond, and I remember looking around, thinking to myself:
what did he just say? Git?

I asked John what he was talking about and he described the idea of
“distributed source control” which, to me, sounded absolutely absurd.
I had been using Subversion for years and the idea of your trunk (the
source of truth) being everywhere was ridiculous! Where did the
trunk live?!

“Nowhere. Everywhere” is what John said. He then gave some
examples that included a beachball metaphor, which confused me even
more.

I went home and downloaded Scott Chacon’s “Pro Git”, which
everyone was reading, and within a few days things started to click.

EVERYTHING CHANGED
I never created branches in Subversion; it was just too cumbersome. If
I wanted to goof around and try something different, I would just copy
my code over to a different directory on my machine and try it out.

Creating a branch and watching files fly around on my hard drive…
what the hell!?! And then I read up on branching workflows, and it
was at that point that I felt the earth move below me. Things would
never be the same.

Git brought the idea of source control into the realm of project
management. You could, all of a sudden, track progress using
branches and commits and figure out who did what using blame. You
didn’t worry about losing your source server because everyone had a
copy of the repository!

That was wild!

Best of all: it was small and fast. Subversion (and tools like it) required a
server to be set up somewhere and that’s where your repository lived.
Git flipped that on its head and said, “nope! Your repository lives on
your hard drive, right next to your codebase”.

NOT FOR EVERYONE


The idea of a distributed source control system took everyone by
surprise and there were quite a few who didn’t care for it. I remember
reading a post back in 2012 where the author simply hated the entire
idea. The author brought up some good points, but I just have to start
with this one because it made me laugh:

Leaky abstraction

Git doesn’t so much have a leaky abstraction as no abstraction.
There is essentially no distinction between implementation
detail and user interface. It’s understandable that an advanced
user might need to know a little about how features are
implemented, to grasp subtleties about various commands. But
even beginners are quickly confronted with hideous internal
details. In theory, there is the “plumbing” and “the porcelain” –
but you’d better be a plumber to know how to work the
porcelain.

It’s not a leaky abstraction — there is no abstraction! Circular logic
aside, I do understand the point being made here and I use this quote
because it’s what I hear most often from people who don’t like
working with Git: there are too many inconsistencies and obtuse commands.

I think this is true. To delete a file in your repository you use git rm
and the file name. The rm echoes the bash command that does the
same thing: removes stuff. To delete a branch, however, you use git
branch mybranch -d. This one always gets me… I don’t know why.
Deleting isn’t an optional argument, is it? It’s a damn command!

Anyway. There are numerous concepts in Git, yes. You do need to
understand what an index is, branching, stashes, merges and so on.
You can view this as hostile or contemptuous, or you can view it as
“you have the controls, RTFM”.

I lean towards the latter idea. We’re talking about the source code for
a project worth a lot of money — I don’t want someone else’s abstraction
doing stuff to it. That’s me. I don’t mind spending a weekend reading up
on the commands and strategies as you’re about to do — I’m a coder
and I have mastered the ability to do things like this.

MAKING IT WORK FOR US


Using Git to control the source of your project is one thing, but
centralizing it and using it to track your bug reports, roadmap,
changelog, and versioning is where the power is. That’s what we’re
going to do in the next chapter: combine the power of Git and GitHub
to help us manage things.
ELEVEN
FLEXING GIT AND GITHUB
YOUR SOURCE HISTORY TELLS A STORY,
LET’S MAKE IT A GOOD ONE

Both Git and GitHub are wide open and allow you to essentially
do what you want and/or need to get your work done. We can
either create a mess, or we can make something beautiful.

Let’s focus on making something beautiful. Before we do, however, we
need to be clear about why we’re doing what we’re doing, with one
piece of context in mind: our efforts are only as good as the ability
of our team.

That last bit is crucial. If you’re a Git wizard who can do magic with
your repository — that’s great! But if your team doesn’t understand
what you’re doing or why, you will quickly have a mess, and all of
your efforts will go into wrangling your tooling rather than
building your product.

How do we get around this? By using sane, simple approaches that
don’t ask too much of people yet still support what we’re trying to do:
tell a good story. What we build is a direct reflection of who we are
as a team, and a team is only as good as its coach and its plan.

RECOGNIZING YOURSELF
In this chapter, you’re going to be the coach: creating a game plan and
making sure your team can carry it out. In other words: creating tasks
and optionally assigning them to people, making sure they know
what’s expected and, most importantly, that your efforts are
recognized and recorded somewhere.

Did you feel a weird twinge reading that sentence, wondering why you
would also focus on yourself? Cultures around the world treat this
kind of thing differently, but I can share with you that when I grew up,
you simply did not focus on yourself in a group setting.

This is the worst part about being a lead or manager: treading the line
between toxic narcissism and toxic passive-aggressiveness. I think we
all know what a toxic narcissist can be like, everyone seems to have
had a boss at one time who stole credit for something, needed to be
the center of every decision and made the project more about them
than about the client (aka a “toxic psychopath”).

Toxic passive-aggressive behavior, however, can easily go under the
radar. These people focus outward and hope that some day they get
recognized but, invariably, they don’t because they constantly praise
everyone else and divert any recognition at all. Then they get angry
and lash out at the worst times because, surprise, they don’t feel
recognized.

There is a healthy middle ground, and you can apply whatever
practices make sense to you so you can stay in that happy place:
mindfulness, positive self-talk, or long walks in the desert. How you
get there doesn’t matter: getting there is the point. It’s critical that
you understand the importance of your role and that you are the
coach, the center around which the team will pivot. You’re doing this
because you want to do it, and you want the recognition that comes with
it. This is OK. It’s natural, and it’s human, and if you do a good job
then you should, naturally, continue leading others and hopefully do
bigger things.

I’m obviously not a guru, nor am I in a position to give anyone mental
health advice. What I can do is emphasize that passive-aggressive
behavior is just as toxic as narcissism, probably even more so.

When you’re asked about the health of your project, you’re going to
need to list your achievements and failures, clearly understanding
what you’ve done well and what can improve. Your boss or client will
want to know they chose the right person, so go ahead and gloat if you
deserve to. It’s not arrogance if it’s deserved, and it’s not bragging if
you’re asked. It’s also not true if it’s not written down somewhere, so
make sure you have the answers at the ready if someone comes
looking.

And they will come looking. Especially when you start kicking butt
and succeeding. They’ll want to know who, how and why because
success is gravity and everyone will get pulled to you. It’s just the way
people are.

Enough with the pep talk. Let’s get into some real situations.

IT’S PEOPLE. ALWAYS PEOPLE.


As the lead, you’re going to have to deal with messy human stuff too.
In fact, I might argue that this will be your primary focus.

Allow me to explain.

One of the tasks that our team needs to perform is to come up with a
placeholder headline and subheading, along with a hero graphic. This
isn’t really “programmer stuff ” but, as my buddy used to say back in
our startup days: “everyone hangs white boards” (meaning: no task is
beneath any of us).

I would rather not see Lorem Ipsum as a headline; I want to see a real
headline, and I’m OK if it’s not perfect. It’s up to you to figure out
how that gets done… which might be a challenge.

Let’s pretend you have a team and on this team is a great frontend
programmer named Sara. You know that Sara has a background in
copywriting and marketing, and she would be the ideal person to take
this task on. You also know that you want to use Scrum for the team,
and part of using Scrum is that teams can self-organize and choose
which tasks they wish to work on. This is a useful thing, but you
know that Sara is the right person for this particular task, so you
decide to just assign the task to her directly, removing the whole
“assign yourself what you want to work on” thing.

Oh, but Sara doesn’t like that! She’s been trying to get away from
copywriting for years and finally landed a job at Red:4 working on
JavaScript and Typescript stuff. This has made her happy, and your
assignment without asking first is rubbing her the wrong way. That
said, you’re the boss, aren’t you? You and the team need her expertise,
and it’s aggravating that she is having issues with this.

When Leading Isn’t Fun

Ah, the joys of managing people. This is yet another point in the
project where your ability to manage perception and persuade people
will be extremely useful. Your goal is to ship this thing, which means
you need to push every so often. How much you push is up to you,
and it might not feel good to do so, but that’s the job. The trick is to
do it well.

I’ve had many managers over my career and there are two that I
respect more than all the others combined. I won’t share their names
as they are public figures, but I will say that the skill they shared was
the uncanny ability to get you to agree with them. Even if I went to a
meeting with a head full of steam thinking “there’s no way I’m going
to back down on this feature”, I would leave agreeing that their way
was the best way.

They did this with a mix of unnerving attention, listening to
everything I had to say and asking questions about it. They wouldn’t
interrupt and would wait until I was done. Both of them could also
appear completely relaxed and confident in their idea. They would
then ask me questions as if they hadn’t heard everything I said,
focusing instead on what I was going to do for them. They would
even use the same phrases (they both worked at the same company, so
this wasn’t too coincidental):

I wonder if we could explore an approach where you did
WhatIWanted and AlsoWhatIWanted which would get us to
WhatIReallyWant which is where we need to be. Maybe you could
add YourIdea and YourOtherIdea to this other part and ask Other
Person for their input too? I like your idea a lot — I’d love to see it
added when we have the time.

Reading that message doesn’t do it justice; you really had to be there
to fully understand. Technical discussions are won by persuasion, not
by the merits of the thing being discussed, although yes, that does
add to the persuasion part too. The same goes for shipping software:
you’re the one who will make the difference, and it starts now, dividing up
the work and making sure it gets done as well as possible, tracking
what, when and how.

Avoiding the Coup

As the lead, you, friendo, must answer to your tribe repeatedly, and
you need to do it with evidence of your competence. You will also
need to answer to those above you, repeatedly, who will use your
success to further themselves and when you fail, you’ll be alone. This
is OK, however, because if you’re comfortable playing this very human
game, you’ll be after their head soon enough, won’t you? You’re not
going to admit that out loud… but that’s how these things are set up,
and it’s the game you’re playing when you choose to elevate your
career.

The good news for you is that you’re reading this book, and I’m telling
you this now, rather than you learning the hard way and having your
spirit crushed. And no, you don’t need to have shouting matches or
hatch wild plots to overthrow your superiors. What you do need to do
is get in the habit of tracking everything you and your team does,
highlighting the wins and correcting the mistakes. And also writing in
your journal daily, which I assume you’re doing already!

That’s why we have the mechanics I’m about to describe and why
they’re important: so you don’t have to fight. We’re going to make sure
that everything our team does is tracked with a timeline of execution.
We will blow people away with our reporting and communication skills
because the last person who anyone wants to tangle with is the person
who comes with receipts.

That will be you, and your success will bleed into the team and make
them love you and their jobs. This is the good part of being a leader:
believing in yourself, which makes your team believe in themselves.

What a great feeling!

FIRST THINGS FIRST: THE ORGANIZATION


In the previous chapter on tools we explored setting up a GitHub
repository with discussions, issues, a wiki, and even a GitHub project.
You’ve likely interacted with a repository on GitHub and many of
these features, so I’m going to assume you’re familiar. If you’re not,
that’s OK… you will be when I’m done with you!

The first thing we need to do is to think 5 to 10 years down the line. If
we’re successful and decide to retire, the last thing we want is to have
to go through the pain of transferring the repository to someone else.
That’s simply a glorified redirect, and it causes all kinds of confusion.

No, what we want to do is to set up an organization within GitHub:



Organizations are shared accounts where businesses and open-
source projects can collaborate across many projects at once.
Owners and administrators can manage member access to the
organization's data and projects with sophisticated security and
administrative features.

As you can see, I set up the Red:4 organization a while ago, in one of
my first acts as CTO:

It’s a simple thing to do and worth the time to do it. There’s a free tier
which is pretty useful, but if you want to have tighter control on
things, it’s worth it to upgrade to the Team license, which is
$4/mo/user.

Note: GitLab supports the same concept, but they’re called ‘Groups’ and don’t
have quite the same control as GitHub

I’ll make sure I call out which features are premium and which aren’t
as we go through the “GitHub Flow” below.

THE ORGANIZATION SETTINGS


The first thing we want to do is make sure all default branches are set
to main instead of master. I’m not going to touch on the social
implications or have a debate on this, and neither should you. You will
face social issues as a lead and will need to navigate them as you see
fit, but this isn’t one of those. It’s just a word, and if it avoids drama,
then that’s your priority.
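The organization-wide default lives in GitHub’s web UI, but Git itself has the same knob for your own machine (the init.defaultBranch setting needs Git 2.28 or newer). Here’s a sketch using a throwaway home directory so nothing real gets touched:

```shell
set -e
export HOME="$(mktemp -d)"                    # throwaway home so your real config is untouched
git config --global init.defaultBranch main   # every new repo now starts on main

proj=$(mktemp -d)
cd "$proj"
git init -q .
git symbolic-ref --short HEAD                 # prints the name of the default branch
```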

Guarding Your Codebase

The next setting is the Commit signoff, which will be important as the
team grows. In the open-source world, it’s critical that copyrighted
code isn’t added to an open-source project so they have what’s called a
“Developer Certificate of Origin” (DCO) and, believe it or not, the
same goes for the work you do.

We could have a healthy debate sometime about the meaning of a
code copyright, patents and intellectual property and no doubt one of
us would have some strong words on the subject. I think the idea is
sound: with the rise of StackOverflow, copy/pasting code is rampant.
Everything is online and easily accessible, which means there are little
landmines everywhere. AI has made this worse, as you never know if
that generated code originated under copyright.

As the lead of your project (or a senior reviewer), it’s critical to make
sure that whatever code is committed into your main branch complies
with the DCO sign off.
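Git supports this directly: committing with -s appends the Signed-off-by trailer that DCO checks look for. A quick sketch, with an invented identity:

```shell
set -e
dco=$(mktemp -d)
cd "$dco"
git init -q . && git checkout -q -b main
git config user.email "sara@example.com"  # placeholder identity for the demo
git config user.name "Sara Dev"

echo "console.log('hi');" > app.js
git add app.js
git commit -q -s -m "Add app scaffold"    # -s appends a Signed-off-by trailer to the message

git log -1 --format=%B                    # show the full commit message, trailer included
```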

Getting Nuts, and How To Avoid Lockups

These next screens will make managers happy and coders cry. When
you’re on the paid plan for your organization, a little box will appear
at the top of your repository:

Clicking the link will take you here… where the fun starts:

As you can see, these are the rules you can put in place for how code
makes its way into the main branch — what I keep calling the “holiest
of holies”. It really should be treated with reverence; call it
whatever you want, but do respect it. Nothing’s worse than dealing
with bad code in main.

Why, you ask? Because this is “the state” of development and a direct
reflection of you as the lead. Commits to main are considered your
progress or “cadence”, which is a reflection of how well your team
knows what they’re doing and where they need to go.

Your boss or client, if they know GitHub, will likely head right to the
“Insights” screen so they know what’s going on, and they might see
something like this:

That’s your story. The commits to main are literally the product being
built, so you want to be sure the code is as vetted and clean as possible.

But this leads to a question: how many rules do we need to put in
place? Every rule that you apply in the Branch Protection screen is
going to slow people down, and this graph will change. That said, you
will probably sleep better at night knowing that no code can get in
without at least 2 people reviewing it.

For now, let’s make a few sane and reasonable choices. I’ll start by
making sure that no commits can go directly into main without a pull
request (PR):

There’s also a section here where you can specify approval before a
merge can happen. I’m going with 1 for now — I’m the only dev and
lead — but two is a good number to start with that won’t slow things
down. I’m going to leave the rest of these boxes unchecked because
they will likely cause confusion and chaos later on. When it comes to
these types of decisions, I lean on a saying I learned while doing Ruby:

It’s not a problem until it’s a problem

Also known as YAGNI. Let the problem present itself first, then fix it.

As such, I’m going to leave the rest of the boxes unchecked as well
until we feel the pain of a growing team:

Imagine your team in triage mode, the worst imaginable kind: it’s
4am, the app is offline and no one knows why. Alerts have woken you
and members of your team, and you’re trying to figure out, and fix,
what’s gone wrong.

Someone realizes that a test credential got pushed to production somehow, which is extremely irritating (and common), but they can’t push a change until they get a second approver to sign off on the change. That person is on vacation and out of cell range, so you step in and override the rule and things are OK again.

Here we have an all-too-common scenario: the rules didn’t prevent someone from pushing a critical bug, but they did prevent the fix. The more rules you have, the more likely this scenario becomes.

Rules don’t stop people from being people. Instead, they force people
to create workarounds at the worst possible time.

Labels

It’s always nice to make sure the labels fit your organization. We’re
using an Agile-ish approach, so I’ve added Epic and Story as labels:

There are also “Topics” that you can add to each repository, which are
little descriptions, but that might be overkill. Don’t get too bogged
down in here — it’s easy to do!

Allowing Forks

When your team comes on, they’re likely going to use their personal GitHub accounts to work on your project. This might seem weird but, in reality, it’s extremely helpful — you would rather not be managing credentials and accounts for your team… it’s a nightmare!

We’re going to be certain our repositories are private, at least to begin with, which means we’ll need to grant our team members access as they come on. That also means we’ll need to be certain we check this little box in our organization settings:

The private repo that we have will also be private for our team
members when they fork it, so we’re still safe. Oh, and if you don’t
know what a fork is, hold tight, I’m going to walk through everything
in just a minute.

SETTING UP ISSUES AS TASKS


In chapter 6 we set up our Epic and Stories for use in our first sprint,
The Initial Build:

This is a great start, and good enough if our team is just one person (you). You have your checkboxes, and you can tick them off as you go:

But if you’re leading a team, you have to do things differently. This is where Scrum’s “self-organizing teams” idea is very useful.

The idea is that you create isolated work items that:

Move the story and epic forward
Are part of this sprint only
Will be complete and shippable when the sprint is over

Here’s an example:

This issue references our story, so we’ll see it in the story timeline. The directions here are clear but, if they weren’t, there’s plenty of room for comment/discussion below.

Notice that I tagged the issue Up For Grabs too. You don’t need to be that explicit — you can leave the label empty if you want — but I find that tracking labels is very useful. Notice on the bottom of the graphic that it says I applied the label Up For Grabs? When the issue is assigned, that will show up in the timeline too, and the label will be removed. This says a lot about the process involved in getting the work done.

Once the issue is assigned, the person doing the work should feel free
to label as needed.

EXECUTING TASKS WITH THE GITHUB FLOW


The basic idea behind the GitHub Flow is that your team is always working in a branch in their repository. The only way code gets into the main project repository is through a pull request (henceforth a “PR”). Each pull request is (ideally) reviewed by you and one other person and relates back to one or more issues.

That last part is important. Nothing is more frustrating for a senior developer than a PR coming in out of the blue that is not the result of some issue. On a private project like ours, this is usually solvable by having a one-on-one meeting and asking a few questions, followed up with a strong suggestion that the team should stay focused or bring up issues in the right setting, such as a standup or sprint kickoff.

On public, open-source projects, random PRs cause stress. A lot of it. People who offer their time to your project don’t like it when you push them for a little more rigor, and they’ll often go away, which is probably a good thing.

TANGENT: HACKTOBERFEST AND PRS GONE WRONG


If you want to know what I mean by PRs coming in out of the blue,
consider Hacktoberfest:

If you aren't familiar, Hacktoberfest is an annual event that occurs every October. It is held by Digital Ocean and encourages developers to submit Pull Requests to Open Source repositories and as a reward you get a T-Shirt… There's almost no limits, so if your request is merged into any Open Source repository, you qualify. Amazing.

Sounds like fun, doesn’t it? DigitalOcean is a good company with a good heart, and they thought they were helping things by “sponsoring” this “game”. I use quotes here because what ended up happening is the opposite:

No rule was given regarding the type of PR nor what it addressed. You
didn’t need to be solving any issues at all, just offering
“improvements”. It got so bad that a Twitter account was created,
devoted to tracking the crazy things happening to Open Source
projects during October:

It turns out that people will do just about anything to win a t-shirt,
which is weird. But as I keep saying: people will be people. This
phenomenon apparently happened because some person with over
600K YouTube followers decided to show how easy it was to game the
DigitalOcean contest. They thought it would be fun to flash mob
open-source projects, thus spamming them relentlessly:

You can imagine being a maintainer on one of these projects, doing the thing you love for free, and being punished for it.

To bring this back to the subject at hand — there is very little distance
between Hacktoberfest spam and someone offering a PR without first
creating an issue and getting the go ahead. You’re asking the project
owner to stop what they’re doing and come into your world to
understand what you think about their work.

This can be a good thing: a critical security risk, a broken link in a README, etc. But it’s mostly a nightmare, one I’ve had to deal with more than once.

HAVING YOUR PR ACCEPTED


You will likely find yourself needing to fix something completely
unrelated before you can submit the PR that addresses the issue
you’re working under. This happens all the time — in fact, it happened
to me recently on a project at Microsoft that caused a bit of a problem
for me.

I was asked to improve the dialog flow for a CLI project, and I realized
I could cut down on the 6 questions being asked by breaking the
single, general command into two more specific commands. To
achieve that, however, meant tweaking a few things.

I stopped what I was doing and submitted an issue with the tag suggestion and outlined my ideas as clearly as I could. I’ll discuss how to write a compelling issue/PR note in a later paragraph; for now, just know that it was pretty detailed.

The issue caused some discussion, which resulted in a “no thanks” from the project owner. Unfortunately for me, this and a few other issues I submitted gave the impression that I was trying to change too many things, which is true, I was. I could sense an ego battle about to take place, so I kept quiet and kept working.

Conflicts happen, and they’re going to happen on your project and will
probably involve you too. I’ll discuss that in a later chapter.

For now, a good rule to live by is: issue first, PR second. If an issue
isn’t assigned to you, and you have to create one, the ideal workflow
goes like this:

An opportunity is found to improve the code (bug fix, new feature, etc.) and an issue is created. Hopefully, it’s a minuscule fix that won’t give the impression you’re trying to shake the foundation of the project.
The repository is forked, and the contributor assigns the issue to themselves in the main repository.
The contributor creates a branch in the forked repository, naming it something like contributor/add-something-#209, and does their work, committing over time as things progress.
They open a PR with their branch and describe the work they’ve done. The PR is opened on a special branch you’ve created (perhaps 12-authentication, which means “sprint 12 in which we’re focusing on authentication”).
The code goes through review, and you sign off, as does another collaborator or code owner.
You merge the PR to your sprint branch and the collaborator deletes their working branch and syncs their main branch upstream.
When the sprint is over and everyone signs off, the sprint branch is merged with the main branch and things get deployed.

That’s the GitHub flow: everyone works in a branch, and all work is
tied to one or more issues.
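To make the shape of that flow concrete, here’s a hedged command-line sketch of the contributor’s side. Everything is simulated with local repositories so it can run anywhere; the paths, branch names, and issue number are illustrative stand-ins, not the book’s actual repos:

```shell
set -e
ROOT=$(mktemp -d)

# Stand-in for the project ("upstream") repository, with one commit on main:
git init -b main "$ROOT/upstream"
cd "$ROOT/upstream"
git config user.email dev@example.com && git config user.name Dev
echo "# Portal" > README.md
git add README.md && git commit -m "Initial commit"

# "Fork" the project (a local bare clone plays the part of your GitHub fork):
git clone --bare "$ROOT/upstream" "$ROOT/fork.git"

# Clone your fork, and keep a second remote pointing at the project repo:
git clone "$ROOT/fork.git" "$ROOT/work"
cd "$ROOT/work"
git config user.email dev@example.com && git config user.name Dev
git remote add upstream "$ROOT/upstream"

# All work happens in a small, dedicated branch tied to an issue:
git checkout -b contributor/add-something-209
echo "new feature" > feature.txt
git add feature.txt && git commit -m "Add something, per issue #209"
git push origin contributor/add-something-209   # then open the PR on GitHub

# Pretend the PR was reviewed and merged into the project repo:
cd "$ROOT/upstream"
git pull --ff-only "$ROOT/fork.git" contributor/add-something-209

# Back in your clone: sync main from upstream and delete the short-lived branch:
cd "$ROOT/work"
git checkout main
git fetch upstream
git merge --ff-only upstream/main
git branch -d contributor/add-something-209
```

The only step that can’t be simulated locally is opening the PR itself; everything else is exactly the branch-and-sync choreography described above.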

Let’s see this in detail with the initial work I did for the Red:4 Portal.

Note: I’ll be using GitHub for my examples in this chapter, but GitLab has corollaries for every practice. I had to choose one and GitHub is what I know, so I went with that.

Assigning the Issue and Forking

I’m going to put my developer hat on now, removing my CTO one so we can get some work done. For the purposes of this book I created a separate account at GitHub with the handle robred4… don’t tell them I did that, OK?

The first thing I did was to self-assign the issue for work on the home
page and change the label from Up For Grabs to enhancement:

It might not seem like it, but the history tracking here is critical to
understanding how and when work gets done, and it will save your
bacon on your projects.

The next step is to be certain I don’t work on the main repository. I could clone this repository directly to my hard drive and work on it there, but that’s considered bad form because the origin would be set to the project origin and, if I messed up a push, I could end up pushing changes directly into the main branch and screwing everything up.

I might have done this more times than I can remember. Maybe.

The better way to do this is to fork (or “create a copy of”) the main project, so I can work on it from my developer account. Before I do that, however, I need to get clear on something.

WORKING IN A FEATURE BRANCH


The GitHub flow is all about branches and making sure the main branch at the origin represents what’s deployed live. This can seem overly ceremonial, especially with a small team, but there are reasons for ceremony and structure even if the team is 5 people or fewer:

It’s a simple process to begin with, and something people will be used to if they have a few years of experience.
Starting early is a good thing because your team could grow quickly. Having processes in place and part of the daily work is much easier than trying to change as a team grows.
You can track sprints and work much more easily using a branch-based flow.

It takes a few extra emails or Slack messages, but the next step that I
want to take as Rob the Dev is to know which branch I should be
working from. When you fork a repository, you fork all of its active
branches as well, as you can see here:

There are only two in our project: the holy main branch and the initial-build branch, which was put in place just for our sprint. As a lead, you need to be sure everyone knows which branch to clone from for their work!

Now I’m ready to fork the project repository, and I can do that from anywhere in the repository — I just need to click on the Fork button:

I choose an owner (me) and wait for a second and boom! I have a
fresh repo to work from:

This will be my development repository.

CLONING THE DEVELOPMENT REPOSITORY AND DOING SOME WORK


The first thing to do is to clone my development repository:

git clone git@github.com:robconery/developer-portal

This does as you would expect and clones my repository to a local directory. Now I have to be sure that I work in a specific branch. If I work in initial-build (our sprint branch), then I’m not using “GitHub Flow” — all work should be done in a dedicated branch, which means even my work for this specific issue (#10) should be done in a smaller, dedicated branch based on the one I forked (initial-build).

To accomplish that, I’ll immediately branch my local repository:

git checkout -b robconery/homepage-initial

Naming things is hard, just do your best. Notice here that I’m using
my name followed by the work. I could have also done
robconery/issue-19 but to me, right now, this conveys what I’m
doing.
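Since knowing your base branch matters so much here, it’s worth noting that git lets you name the starting point explicitly when you create a branch, so there’s no doubt what you branched from. A small, self-contained sketch (the repo setup is illustrative):

```shell
set -e
DEMO=$(mktemp -d)
git init -b main "$DEMO/portal" && cd "$DEMO/portal"
git config user.email dev@example.com && git config user.name Dev
echo "# Portal" > README.md
git add . && git commit -m "Initial commit"

# The sprint branch everyone is supposed to base their work on:
git branch initial-build

# Even if you're currently sitting on main, you can branch from the
# sprint branch explicitly by naming it as the starting point:
git checkout -b robconery/homepage-initial initial-build
git branch --show-current   # prints robconery/homepage-initial
```

Passing the start point as the last argument removes the “wait, which branch was I on?” class of mistakes entirely.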

Let’s say I made the improvements requested:



I’m not a copywriter, OK? I did the best I could here, and I do think
it’s a great placeholder until we can get some help. If the CTO doesn’t
like it, I’m sure I’ll hear about it during code review.

Speaking of, the next step is to push the changes back up to my development repository once I consider things finished:

git commit -am "Added headline and lede as placeholders per #10"

It’s at this point that I realize that my .gitignore file is missing something crucial, so I add that too:

git commit -am "Added .DS_Store to gitignore, not strictly part of issue but needed"

Is this a good idea? Let’s discuss.

Changing What’s Not Yours to Change

I was assigned a specific task: update the headline and lede on the
home page. That means that I get to change the home page and also
implies that I don’t get to change anything else. That last bit is
important!

By changing the .gitignore file the way I did, I open the door to a merge conflict: when the same file has been changed independently in two branches and the changes now need to be reconciled. This is the problem with doing GitHub Flow: more branches, more possible merge conflicts.

We’ll see why this is when we get into the next chapter: Trunk-Based
Development.

Merge conflicts suck and can take days to unravel. They’re also a “smell”, if you will, of a crappy process that is a direct reflection on you. We want to avoid these as much as we can by impressing on our team the importance of only working in the files they need to.
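One habit that helps enforce this: stage files by name rather than reaching for git commit -am, which sweeps up every tracked change in the repo. A self-contained sketch (the file names and the #10 issue reference are illustrative):

```shell
set -e
DEMO=$(mktemp -d)
git init -b main "$DEMO/scoped" && cd "$DEMO/scoped"
git config user.email dev@example.com && git config user.name Dev
echo "old headline" > index.html
echo "node_modules" > .gitignore
git add . && git commit -m "Initial commit"

# Your issue covers the home page, but you've edited both files:
echo "new headline" > index.html
printf "node_modules\n.DS_Store\n" > .gitignore

# Stage and commit only the file your issue actually covers:
git add index.html
git commit -m "Update headline per #10"

# The .gitignore change stays local, to be raised as its own issue:
git status --short
```

The commit contains only index.html; the unrelated .gitignore edit remains an uncommitted local change instead of a future merge conflict.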

OK, let’s get back to our work.

Creating a PR

The next part of the GitHub flow is to push our changes up to our working branch and then submit it back to the sprint branch.

We start by pushing the changes:

git push origin robconery/homepage-initial

This will simultaneously create a branch for us on GitHub and push our code to it. Once completed, we’ll see a new message:

Great! I love how GitHub makes this so simple. I’ll click on “Compare
& pull request” and I’m taken to a new screen:

Notice that main is selected by default for the base branch — yikes! We would rather not have our work merged directly into main; we want it to go into the sprint branch, initial-build. If you screw up, that’s OK. I’ve done this a million times myself. You’ll probably be asked to resubmit and it’s not a big deal.

I’ll write a descriptive message that follows a template I like, keeping things direct and terse:

Two things every project lead loves: why you did the work and what
you changed. I explain both in direct and detailed language that’s not
overwhelming. It can take a while to get this down but these initial
PRs and issues will set the tone for the entire project.

We’re playing a silly game here, where I’m both developer and CTO,
but I’m also showing what I expect from others when they join the
project. I could spend days writing up documentation or trying to
explain what I’m looking for as a project lead when reading a commit
or an issue — but showing people is far easier and more effective.

This first PR and issue will do just fine for that.

When I submit the PR, people will be notified, and if I set a reviewer or tag them in the PR body (using @robconery, for instance) then they will also get a notification:

At this point, discussion happens, and a link is provided to the commits for further review. Many times, more work is requested — as you can see, Rob the CTO is wondering if we can “tighten up” the headline:

This is good discussion. More work can happen here (with commit
links following in a timeline fashion) or other issues can be created,
which will also be referenced here.

Here’s what a pull request looks like in the wild, taken from the
firebase-tools project:

While not exactly an example of “GitHub Flow”, this is a great example of using GitHub and the PR process to work on a feature.

The timeline here shows everything that went into the changes
suggested. It’s a long list, but you can see the description is just
descriptive enough to drive the conversation, review, and coding
process.

You might be wondering whether mastering GitHub like this is useful. As I keep saying: this is your project legacy. You want it to look vibrant and healthy.

Take a look at the firebase-tools project’s Insights:



This page says a ton — most notably, that they’re fixing problems and pushing code. So yes, mastering a tool like GitHub is critical!

CAN THIS BE MADE SIMPLER?


There are many moving parts in GitHub Flow, and that means plenty of chances for screw-ups. The process slows people down, and having to support your team through a difficult onboarding and development structure can be… taxing.

In the next chapter, we’ll take a look at an alternative, where we get to push directly to main often: Trunk-Based Development.
TWELVE
TRUNK-BASED DEVELOPMENT
WHEN PROCESS GETS IN THE WAY,
THROW IT ALL OUT

When working with a branch-based source control system (which is just about all of them), you’re going to need to come up with a merge strategy that doesn’t cause problems as your team grows.

GitFlow has its advantages, but many teams find it a bit too
ceremonial and, more importantly, find themselves trying to figure out
how to merge massive feature or sprint branches.

As an example: you’re working on a sprint and are 10 days in when you find out that a library you’ve been using has been found to have malware embedded in it. The team is working on a fix, but you decide to change course and replace it entirely.

The sprint is suspended for a few days and the issue is triaged and
fixed (in a bug branch) and then deployed. Now the fun begins.

Replacing the library you’ve been using involved a few changes: updates to a few models, some services, and four different application pages. It’s at this point where you start to think about architectural choices and why isolation, cohesion, and coupling are terms you should have studied more. Don’t worry, we’ll get to that in a few chapters! For now, you’re stuck with a difficult situation.

The code your developers are working from has changed, and that
change is important to keep. This has caused what’s known as a merge
conflict.

MERGE CONFLICTS

You’ve probably been exposed to one of these beasties but, just in case, let’s step through it.

There’s an issue posted regarding an example for merge conflicts for this book, and it’s labeled as up for grabs. I assign it to myself and see that the feature branch is called demos/merge-conflict, which has a single README file:

I fork this branch to my personal repo and then create a development branch:

git checkout -b robconery/demos-merge-conflict

I then update the text:



Looks good to me, let’s commit these changes:

git commit -am "Added additional wording to define how a merge conflict happens"

Unfortunately, my lead thought I was out today and decided to sneak in a quick change to the feature branch, giving me a little guidance as to what he wanted to see. Again: this is in the feature branch, so his commit “changes history”, if you will, and the branch that I forked is now one commit behind the parent feature branch:

git commit -am "Added some prompts for what I want to see"

The sad part is that I, as the developer, have no idea that this change
happened and the only time we find this out is during a merge:

This merge can happen as part of a PR, a “pull update” or any other
time when two branches have divergent histories. For instance: my
lead could have realized his mistake and sent me a note saying “oops, I
screwed up, can you pull from the feature branch and resolve this
conflict for me?”

In fact, a merge conflict will usually be yours to resolve in a situation like this. You’re the one who is writing the feature code, so it’s most important that you ensure that code works. The only way that happens is if you’re the one resolving the conflict.

Want to erode team trust and confidence? Create merge conflicts for
them and ask them to resolve them. This can be the result of direct
action (as with our contrived example) OR it can be the result of you
not understanding the codebase, assigning issues that overlap in
terms of code files.

I need to expand on that last bit.

Tangent: When Crappy Code Causes Merge Conflicts

I worked on a project that was, in a word, horrible. The lead was an “Enterprise Architect” and made sure that we had interfaces everywhere, used Gang of Four patterns, and so on.

The problem, however, was that this lead was also the first developer,
and decided that it was best to have a MasterContext object that was
passed around between objects and services. At some point, the
MasterContext began vacuuming up application settings,
configuration, and random instances of classes. I think his idea was
that we could inject this MasterContext easily, and mock it in our
tests. This was true, but it also made changing things an absolute
nightmare.

He would assign an issue to one developer, let’s call him Jason, which
was something like “Add an inventory check to the CartService on
add()”. He would then assign a different issue to Kim: “Debit
inventory on sale()”.

The current Cart instance lived on the MasterContext, and the inventory was represented by a Product instance that lived in the current Cart. Long story short: Jason and Kim both needed to work with the MasterContext, they both made changes in their feature branches, and when they created their PRs, they had merge conflicts.

If you ever want to know if there’s too much coupling in your project,
constant merge conflicts will tell you that.

Resolving a Merge Conflict

So what, exactly, happens with a merge conflict? Git will happily tell us if we do a git diff which, in a conflicted state, shows the conflicting hunks in each affected file:

We can also just look at the file(s) affected:



This notation here might look strange, but it’s very likely you’ve seen it before. This is called a DIFF and, simply put, shows two competing versions of the same lines in a text file. On the top is the version from the branch you’re on (marked <<<<<<<, usually labeled HEAD), on the bottom is the incoming version from the branch being merged (marked >>>>>>>), and they’re separated by a line: =======.
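If you want to see these markers without waiting for a real conflict to bite you, here’s a self-contained way to manufacture one; the repo and wording are illustrative, echoing the demo above:

```shell
set -e
DEMO=$(mktemp -d)
git init -b main "$DEMO/conflict" && cd "$DEMO/conflict"
git config user.email dev@example.com && git config user.name Dev
echo "A merge conflict happens when..." > README.md
git add . && git commit -m "Initial README"

# The developer rewords the file on their branch:
git checkout -b robconery/demos-merge-conflict
echo "A merge conflict happens when two branches change the same lines." > README.md
git commit -am "Define how a merge conflict happens"

# Meanwhile, the lead changes the same line on the base branch:
git checkout main
echo "PROMPT: describe a merge conflict here." > README.md
git commit -am "Added some prompts for what I want to see"

# Merging now fails, leaving the conflict markers in the file:
git merge robconery/demos-merge-conflict || true
cat README.md   # shows the <<<<<<< HEAD, =======, and >>>>>>> sections
```

Running this once makes the top/bottom layout of the markers much easier to remember than any description of it.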

A conflict can exist between two branches at a time, but multiple files can be involved. This is important to understand because this file could have been worked on by multiple developers and/or teams, and each of them could have caused their own merge conflict. READMEs are particularly subject to these problems, as developers like me like to update project READMEs even when not asked to (adding clarity, fixing grammar, etc.). It’s a serious difficulty!

The point is: resolving a conflict means that you first need to
understand what happened and why. Here we have a simple situation:
my lead created a simple conflict with a change after my fork. If this
was a larger project, say 40 developers who are grammar nuts, the
potential merge conflict could take days to resolve!

Speaking of resolution, our conflict can be resolved by removing the previous change altogether. Tools like VS Code have this ability built in, but I’m just going to do mine by hand:

To get out of a conflicted state, I just need to remove the DIFF markers, merging as I see fit, which is what I did. I can then commit this change, making sure I describe what I did:

git commit -am "Resolved conflict by removing the prompt text and reverting to mine"

Here’s a tip: you can also tag this commit for quick reference later on.
It’s good to know how many merge conflicts arise during a project, as
they are the very definition of “friction” in a development project.
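Putting the resolve-and-tag steps together, here’s a minimal end-to-end sketch; the repo contents and the tag name are illustrative:

```shell
set -e
DEMO=$(mktemp -d)
git init -b main "$DEMO/resolve" && cd "$DEMO/resolve"
git config user.email dev@example.com && git config user.name Dev
echo "original wording" > README.md
git add . && git commit -m "Initial README"

# Manufacture a conflict between two branches:
git checkout -b mine
echo "my wording" > README.md && git commit -am "My change"
git checkout main
echo "the lead's wording" > README.md && git commit -am "Lead's change"
git merge mine || true          # README.md is now conflicted

# Resolve by hand: write the final content (no DIFF markers), then
# stage and commit, which completes the merge:
echo "my wording" > README.md
git add README.md
git commit -m "Resolved conflict by reverting to my wording"

# Tag the resolution so conflicts are easy to find and count later:
git tag -a conflict-readme -m "Merge conflict in README.md"
```

Later, a quick `git tag -l "conflict-*"` gives you a rough friction count for the project.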

AVOIDING MERGE CONFLICTS


When considering the nature of merge conflicts, even with my
contrived demo, you begin to see that the more branches we have, the
more merge conflicts will occur. This is especially true if we have long-
lived branches.

To get around this, many teams have started using what’s known as “trunk-based development”, where “trunk” refers to the main or master branch… the thing I called the “holiest of holies” in the last chapter. To many developers, this is complete heresy — you don’t develop in main! I used to be one of these people… and then you live through “merge hell” and wonder if you’ve been wrong about this perspective.

The idea is straightforward: smaller branches, shorter lives. Once the PR is accepted, the branch is merged directly into the trunk.

Sounds exciting, doesn’t it? Let’s go through the demo exercise above
and see if it really is.

THE BASICS

Let’s get our heads into the right space for this exercise. What we’re going to want to focus on is:

Fewer than 3 branches at any given time in our project
Committing/merging to main at least once a day
Timely code reviews (like, right now) that happen in real time rather than waiting on one or more people.
Small updates rather than one big one.

Put these together, and you get fewer merge conflicts, at least in practice. If you do end up with a conflict then, theoretically, it should be easier to resolve given the smaller updates.
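At the command line, a trunk-based day looks something like this hedged sketch. Note that the branch exists for minutes, not weeks; names and contents are illustrative:

```shell
set -e
DEMO=$(mktemp -d)
git init -b main "$DEMO/tbd" && cd "$DEMO/tbd"
git config user.email dev@example.com && git config user.name Dev
echo "v1" > app.txt && git add . && git commit -m "Initial commit"

# A small, short-lived branch off the trunk:
git checkout -b quick/tighten-copy
echo "v2" > app.txt
git commit -am "Tighten the homepage copy"

# Review happens right away, then the branch merges straight back:
git checkout main
git merge --no-ff quick/tighten-copy -m "Merge quick/tighten-copy"
git branch -d quick/tighten-copy    # and the branch is gone again
```

The whole cycle (branch, change, review, merge, delete) fits inside a single sitting, which is exactly what keeps divergence, and therefore conflicts, from accumulating.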

Forking Main

The idea here is velocity, so I’ll be using GitHub primarily instead of


going through a text editor and console. If I was writing code then yes,
of course I would want my editor. But I really want to emphasize the
idea of thinking in terms of the least friction and, to me, that means
just going with the simplest thing for now.

Here’s our trunk:



We can see that my CTO self (robred4) created the initial bits and, for now, let’s assume that I saw the issue, assigned it to myself, and off we go.

The first thing I want to do is fork it:



This is now in my development account where I can do things in reasonable isolation. Using GitFlow, I would have forked the feature (or sprint) branch and then created a branch of my own here in my development repository.

This time I’ll just do my work right in main, and I’ll do it by editing
the file directly in GitHub’s editor. Once finished I’ll commit right
there through the web interface:

This looks good to me and yes, I want to commit directly to main in my repo. I do have the option of creating a new branch and PR, but we’re doing TBD, so start the engines!

Once committed, I head back to the code window and see that I can
open a PR:

I do that and everything goes well:



As you can see, the merge is blocked because a code review is required
before the merge can happen. I’ll get to this in a second, but one thing
to notice is that I moved too quickly for Rob the CTO to make the
conflicting change!

I know, I know, this is so contrived, but it’s also so real. The velocity of
development is the point — it keeps conflicts to a minimum.

In our case, Rob was notified for a review and will hopefully respond
right now.

Synchronous Code Review

For some practitioners of TBD, my review rule is overkill — I should be able to merge directly and let the build process find any issues (I’ll get to that in a second). I’m not ready for that level of commitment, however.

I ping Rob on Slack, and he’s able to review this quickly — he knows
that quick commits are the name of the game and can get my code
into main within 10 minutes of my PR. He heads over to the repo and
sees my PR:

He can do the review with a diff straight away, approving the changes:

Notice, too, that simple comments and a request for changes can be
made, right here, if needed. No need for changes here, however, and
CTO Rob approves the PR, which I see immediately:

The merge is now ready to go, directly into main. Your organization
may vary, but the projects I’m on typically allow the developer to
merge their PR, which I do:

All developers usually have merge authority in a TBD project, with a review step being required. The most critical thing, however, is the Continuous Integration/Continuous Delivery build step, also known as “the build”. This is where the code is compiled, tests are run, and any other quality checks are applied.
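Conceptually, that build is just an automated script that fails loudly if any stage fails. Here’s a hedged sketch; the actual compile/test/lint commands are placeholders (shown as `true` so the sketch runs), and you would substitute your project’s own:

```shell
#!/usr/bin/env sh
set -e                       # any failing step fails the whole build

# Run a named step; set -e aborts the script if the step's command fails.
run_step() {
  name="$1"; shift
  echo "==> $name"
  "$@"
}

run_step "Compile" true      # e.g., make, npm run build, cargo build
run_step "Test"    true      # e.g., npm test, pytest, go test ./...
run_step "Lint"    true      # e.g., eslint ., golangci-lint run

echo "Build passed"
```

CI services wrap this idea in YAML and runners, but the contract is the same: every merge to the trunk triggers the script, and a non-zero exit blocks the code from shipping.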

SUMMARY

Velocity is magical for development teams and, most of all, it removes the tedium of dealing with the problems that arise from your process in the first place.

Trunk-Based Development requires strong leadership and a group that’s a bit more senior, or has done it before. Focusing on speed is great, but your team needs to be able to move fast and not spend days on a problem. With the right mentorship in place, or things like pair coding (which we’ll get into later), you can do this pretty easily.
THIRTEEN
BUILDING THINGS USING
CONTAINERS
DOCKER IS ESSENTIAL TO UNDERSTAND IF
YOU’RE GOING TO LEAD A TEAM

Docker has revolutionized software development, especially when it comes to deployment. It still has a ways to go in terms of adoption, but I think that will change as the years go by. It’s a technology you need to know, especially if you’re leading a development team.

I was at a conference in London in the spring of 2022, and I asked the audience if they had used Docker before. To my surprise, only 10% of the hands went up! When I asked who had heard of Docker, only 50% raised their hand!

A friend of mine had a similar experience at an enterprise conference in the US, and it surprised us both: Docker has yet to work itself into the mainstream.

Given all this, I’m going to assume a few things about you:

You’ve heard of Docker and might know what it does
You’ve put off learning it until you’ve needed to
You’ve been skeptical because it seems like a fad

If that’s you, let me address each point quickly and then we can get to
the good stuff:

Docker uses Linux kernel features to create isolated “subsystems” called containers, each of which does one thing (on macOS and Windows, these run inside a lightweight Linux VM). It’s easy to see a container as a VM, but it’s not one: it uses the host’s resources directly, rather than the virtualized CPU, RAM, and hard drive of a virtual machine.
Docker is coming for you, if not with your current job then your next one. Someone in your management chain will read about Docker and wonder aloud if it saves you time and effort (yes, it will). Maybe your next job will use it daily to orchestrate their build or create a development environment. Either way: knowing Docker upfront will ease these transitions.
Fads come and go with a slowing adoption curve over time. Docker is doing the opposite, as companies around the world are trying out new ways to use Docker to increase efficiency. Yes, it’s another toolset to go with all the other toolsets you need to learn… but that’s what we do as programmers, isn’t it?

Let’s dig in and see how we can use Docker in our project. As you’re
about to see, Docker can do a lot more than just handle deployments
for you.

WHEN DOCKER BLOWS YOUR MIND


I remember learning Ruby on Rails for the first time in 2005 and then,
in 2008, deploying my business (Tekpub.com) to a VM running on
AWS. At that time, I used Capistrano, which is basically a collection of
local bash scripts that do remote things over SSH, building your
server, deploying your code, configuring your web server, and so on.
340 ROB CONERY

This was a powerful thing to me! The ability to version and deploy my
site automatically was something I had never seen before and, if I’m
honest, was one of the main reasons I wanted to use Rails.

I felt the same thing when I learned Docker many years later. I was
trying to deploy an Elixir app to Heroku, which was supposed to be
easy, but I found myself troubleshooting build packs and wishing life
could be easier.

And then a good friend recommended using Docker. I couldn't see
how it would solve my problem, but I decided to learn what I could…
and it changed my career.

Instead of pushing my site to a service and hoping for the best, I
could now:

Pull a Docker image for Elixir and copy my code into it
Pull Docker images for Redis and Postgres, setting them up just
the way I wanted
Create containers from all 3 images, connecting them
together, without executing a single bash script

This … was tremendous. I chose Ruby on Rails because the deployment
story was simple, but creating images of my app and deploying
those? WOW.

I learned how I could build an application image locally, with all my
settings just so, and then push that image to a private repository on
Dockerhub. From there, I could create a build file using Docker
Compose (which we'll talk about in a second) and deploy that to the
cloud… and everything would just… work.

Well, most of the time. There are some things you need to know
before you get to the point of everything "just working", but we'll
get into all that in a minute. The point is: little specialty
containers, orchestrated just so, opened up a whole new world
for me.

Erlang is a functional language that powers massive
telecommunications companies, and they have this philosophy called
let it crash, the idea being that your application should be composed of
small processes that can (and should) die and be replaced easily. Elixir
runs on the Erlang VM, so this idea naturally applies to Elixir too. It's
a fascinating way to build software! A single process or package can't
kill the VM. They're all designed to fail and restart, so resilience is
something that's "built in", if you will.

Using Docker, we can do the same thing. Instead of one massive,
monolithic application that can die because a single package throws an
exception, we can now build small, specialized containers that can be
restarted if something goes wrong.

Why have a single container running a large Node application when
you can have 5 that run in individual containers, reacting to specific
events, and one that runs your UI?

This is where microservices come from, and we'll be discussing them in
the next chapter, including how my friend Chad Fowler scaled his old
company, Wunderlist, to service millions of requests per second using
them.

Docker has changed the way people build software, and also how they
write their code daily. For it to work properly, however, you’ll need to
know the basics as well as the little tricks Docker people have
developed over the years.

LEARNING THE JARGON


People who are just learning Docker often get confused as to the
difference between an image and a container. In object-oriented
programming terms, an image is like a class or a type and a container
would be an instance of that class or type.

To create an image, you issue a set of commands in a Dockerfile, and
it looks like this:
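
```dockerfile
FROM node:latest
CMD ["node"]
```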

These commands tell Docker to go get the latest build of Node.js from
Dockerhub and, when it’s run, to execute the command node if no
other command is given.

We can then build this image using the Docker CLI, making sure we
tag the image (which is like naming it) so we can use it easily later on.
We do that with the -t flag:

docker build -t red4app .

It might be easy to miss, but that little dot at the end is the build
context, letting Docker know where to find the Dockerfile and the files
it references. You would think that . would be the default (meaning
"this directory" in Unix terms) but it's not, and you'll get an error if
you try to do things without it.

Using the OO programming analogy again: the Dockerfile defines our
class, and docker build -t red4app . compiles it into an image. To use the
image, we run it, specifying the tag that we want to run:

docker run -it --rm red4app

This creates a container from our image — an "instance" if you will —
with node running inside it using a specialized build of Linux. You'll
notice it's very fast (if you're playing along, and I do hope you are) and
when it runs we'll be in interactive mode because I specified -i for
interactive. I also told it we wanted a terminal with -t (or, literally, a
TTY, which allows for text input) and finally, I told Docker to kill the
container when I exit using --rm.

I know, that's a lot of flags, but in all honesty you don't end up
using docker run from the command line all that often. At least I
don't — but even if you did, this is what you'd likely need to
remember.

This is what all of this looks like when run:
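
```
$ docker build -t red4app .
$ docker run -it --rm red4app
Welcome to Node.js.
Type ".help" for more information.
>
```

(Your Node.js version banner will vary, of course.)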

Here’s the super fun radical fantastic part: I didn’t have to install anything
to get this to work, aside from Docker itself. I have a functioning Linux
server running node, and I didn’t need to install anything in my OS to
do so.

That’s outstanding! Want to learn Redis but don’t want to deal with
installing it?

docker run -it --rm redis

Docker won’t find the Redis image locally, so it will scan Dockerhub
for it, pull it down, and then run it:

You might think this would take a few minutes, but I was able to get
everything up and running within 8 seconds.

So as you’re probably imagining: nothing can be this simple and


powerful — there have to be little traps that will get you, and there
are. Let’s take a look at some of these right now.

CREATING A CUSTOM IMAGE


We’ve seen that it’s pretty easy to run a Docker command that pulls
down something like Node.js or Redis, but that’s not usually what we
would do in production. How do we access our application code or
have our database initialized properly?

For this, we need to create a custom image. Before we go further, I'm
going to warn you: this isn't supposed to be a Docker tutorial. My goal in
explaining all of this is to introduce you to the world of containers if
you're not already a part of it. If you would like to learn more, I
encourage you to do so. There are loads of books and tutorials —
some quite good. Go spend a weekend and go as deep as you need. My
goal here is to show you the practical side of using Docker so you can
ship things.

OK, back to the show. Here’s a common task you’ll likely have to
deal with in the future: getting your application to run in a
container.

Here’s a common way to do just that with Node.js:

If you’re a Docker person, you’ve probably spotted a few places where


this file can be improved — and I’m going to discuss all of that in a
minute. Right now, I want to go into how this file is used to create a
custom image.

We’ve already seen how Docker will pull an image from Dockerhub in
the same way you might clone things using Git. But that’s just the
start!

We’re also specifying that we want to work in the /usr/src/app


directory inside the container, so we set the working directory as such.
At this point, I could just copy my application code directly into the
image using COPY, but that would be a bad idea!

Frequently we use packages that are written in C and need to be
compiled natively. This often happens in the Node, Ruby and Python
worlds, and it's become a goal for many of these developers to have
completely "non-native" packages (all packages written in JavaScript,
for instance) because compilation can cause problems during
deployment.

Our containers will be able to compile this code no problem, but
they'll most likely use a different compiler than what's available on
my development box. If I compile something on my M1 Mac, for
instance, it won't run on a Linux machine because the processor
architecture will likely be different.

All of this is to say: we need to be certain we install our packages inside the
container, not copy them in. In our custom Node.js image, I’m copying
over the package.json and the package-lock.json (thus the package*
wildcard). Once that’s done, I execute a command inside the container
using RUN, and that tells NPM to go out and do its thing.

Once that’s done, I run COPY one more time to pull in my code. This
command assumes I’m executing the Dockerfile in my application
root, which is common practice.

Finally, I expose the port I expect the app to run on. With Node.js, this
is typically port 3000, but some cloud providers (such as Azure)
expect your container to expose port 8080. I could do both, but I
chose 8080. All other ports in the container are closed by default.

I set the default command to be npm start and that’s that! To create a
container from this image, I can use docker run like before:
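
```
docker build -t red4app .
docker run -it --rm red4app
```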

Our image builds, we see a weird hash… and … what happened?

For people new to Docker, this is a common occurrence. They read an
article or book (like this one), play along and enter some commands,
get blown away and then totally confused.

Here’s the thing: Docker is simple until you try to use it. Let’s get past
that initial confusion and see what we just did.

Images All the Way Down

Something to keep in mind when working with a Dockerfile to create a
custom image is that each command you run creates an image based on
the previous command — these are also called "layers". The exception
is the first command, which is always a FROM command. You can read
more about the individual commands here.

Take a look at the example again:
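
```dockerfile
FROM node:latest

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080
CMD ["npm", "start"]
```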

The first line creates an image using FROM. The second command
sets the working directory for a new image that is based on the one
created from the FROM command, but this one has the working
directory (or pwd in the Unix world) set to /usr/src/app.

The COPY command creates yet another image based on the one
created before it, and so on, and the net result is a set of layered
images that make up our final one.

Docker does a good job optimizing things for size as well: each layer is
only a delta on top of the previous layer — just the changed bits — not
a whole new copy. Git does something similar under the covers when
it packs your history, storing deltas rather than full copies.

Why do we care about this? Because we want to be mindful of our
commands and take care not to create a gigantic image.

For instance, I might want to run a custom NPM task for setting up the
environment for first use:

RUN npm run dev_init

It might be tempting to keep things clean, simple and readable by giving
each command its own line — it's very Unix-y after all (do one thing
well). Doing things like this with Docker, however, creates an image
that can become bloated and difficult to use because of its sheer size.

A better approach is to use the line continuation character:
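Combining the install and setup steps into a single RUN keeps them in one layer (a sketch, assuming the dev_init task from above):

```dockerfile
RUN npm install && \
    npm run dev_init
```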

The RUN command is usually where layers are abused the most, so
be mindful of how you’re using it, so your image stays nimble. Your
team will thank you!

DOCKER BEST PRACTICES


Now that we know that image size is a thing with Docker people —
what other "things" should we know? The good news is that there
aren't that many hard and fast rules in the Docker world, but there are
some things you should know, so your applications run in the
smallest, most secure image possible.

Use Official Images

It can be tempting to read a blog post where someone has set up an
application or framework in the exact way you want it and then
created an image to share with others. While this might sound like a
great time-saver, you're also trusting a random stranger with your
entire application infrastructure! This is a giant security issue, and
hopefully, you can see why.

I used an official image straight from Dockerhub in my example
Dockerfile above:

FROM node:latest

If I had suggested you use my special version of Node.js, it might look
like this:

FROM robconery/node:latest

This would tell Docker to go to my Dockerhub repo and pull down the
latest image of Node.js… and alarm bells should start ringing in your
mind! While I like the idea of you trusting me, it’s generally not a
good idea to trust any image that’s not the official one.

What people do nowadays is just share their Dockerfile with the
official image visible as the starting point.

Use Specific Versions

You’re setting up to use Docker locally and create the image above,
write some code, and then put the book down for a year or so. When
you come back, you want to redo the demos because you’re
interviewing and POW! Nothing works! The image built, and the
container started just fine — but one of the packages I’m using doesn’t
work with the latest version of Node.js.

When you specify :latest as the tag, it will go get whatever version is
the very latest, and this could break everything. It’s better to make
sure you know what version you’re starting from — regardless of what
build your Dockerfile is based on.

Let’s fix this by specifying we only want version 16 of Node.js:

Small Builds

We’ve already seen how Docker works with layers and how we should
keep our custom images small, but there’s more we can do to ensure
that we get the smallest image possible — this includes making sure
our FROM image is the lightest weight we can find.

When running our application code, we don’t need all the tools and
utilities that come preinstalled with the more popular distributions
like Fedora, CentOS, or Ubuntu. These distributions of Linux are built
to handle a wide variety of workloads, including hosting one or more
users.

These tools and utilities take up space. Fedora is 231Mb, CentOS is
193Mb and Ubuntu 118Mb. These are extremely lightweight when
thinking about Windows or macOS — but they're extremely
heavyweight when thinking about a Docker image.

This is where specialized distributions of Linux come in, such as
Alpine, which has an installation size of only 4Mb! It's likely that you
can find a special, official build with the -alpine tag attached to it,
which means that they started with the bare bones Alpine Linux and
added only the packages they required.

Debian (another distribution of Linux) is another choice and is the
default base image for most frameworks such as Node.js and Python.
When I specified node:latest above, it was built using Debian.

Like so many things in the Linux world, you can specify which version
of Debian you want using a specific tag:

bullseye is Debian 11, which works with ARM machines (like
my Apple Silicon M1)
buster is Debian 10
stretch is Debian 9
jessie is Debian 8

These are the full operating systems, but if you want the trimmed
down version, you’ll want to find the -slim tag. These versions have
only what it takes to run the thing you want to run.
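Putting that together, a slim Debian 11 base for our Node app would look something like this:

```dockerfile
FROM node:16-bullseye-slim
```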

The question then becomes: which one do you use? The answer isn't
as casual as "it depends", believe it or not. You might encounter
compatibility problems with one image vs. another, depending on
what you're trying to do and what machine/processor you're trying to
do it on.

For instance: I’m running on an ARM processor locally, so I would


want to be certain I chose an image that works for that architecture. If
I’m on a team, however, I would want an image that can work on
multiple processors… which gets tricky if we all want to use the same
image.

I’ve found success most often with -alpine images, so if you’re stuck
and just want someone to tell you what to do, then there you go!

Understanding the Root User and File Permissions

This is a big issue and, once again, I could fill an entire book on just
this subject. Instead, I’ll summarize what I’ve researched, and I’ll leave
it to you to dig in deeper if you like.

If you’re running Docker on Windows or a Mac using Docker Desktop


(the most common scenario), there’s a lot happening in terms of
processes, permissions, and users.

Here is the Docker Desktop (and Docker itself) running on my Mac:
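
```
$ ps aux | grep -i docker
rob   ...  /Applications/Docker.app/Contents/MacOS/Docker
rob   ...  /Applications/Docker.app/Contents/MacOS/com.docker.backend
```

(I've trimmed the output, but the user column is the interesting part.)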

As you can see, it’s running as me (I’m rob if you didn’t know). This
process is really two processes in one: the Docker client (the CLI and
GUI) and the Docker Engine.

There are also these processes running:



Docker Desktop uses Apple's HyperKit to manage a Linux virtual
machine, which itself is actually running Docker. This is important to
remember: Docker only works on Linux, so when running on something
that's not Linux, you need a virtual machine.

This can be confusing, as you often read that "Docker isn't a VM!",
which is true — it has full and complete access to the resources on
whatever host is running it. Unfortunately, Macs don't run Linux
(macOS is BSD-derived, which is a bit different) and neither does
Windows, so we need a Linux VM for this.

Why am I bringing all this up? Because as you can see in the images
above, Docker and all the subprocesses are running as rob, and rob is
an administrator on this machine. I’m not root, but I can still do
admin things.

We care about this stuff because when we create a container from an
image, that container will run with full access to the host's resources
— it has to, that's how containers work. The host is a Linux VM that's
running on my machine, as rob, which means the host has access to
everything I have access to.

The upshot is this: if you found your way into one of my container
instances (and I didn’t take precautions) then you would have full
access to my machine! Yikes!

So what do we do?

The most common thing people do is to ensure that the user the
container runs as is not root. I can do that with our Node.js
image by specifying a USER:
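
```dockerfile
FROM node:16-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

# everything from here on runs as the unprivileged node user
USER node

EXPOSE 8080
CMD ["npm", "start"]
```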

The official Node.js image, like so many other official images, already
has a user created that has locked down privileges. With the node
images, that user is node. When I specify USER node toward the end
of the Dockerfile, I’m telling Docker that every command following
should use that user account. I set it at the end because I need to do
root-y things above, like install packages, copy files and so on.

If the official image you’re using doesn’t have a user already created,
you can use the adduser and addgroup commands:
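Something like this (the Debian-style long flags are shown here; Alpine's BusyBox tools use short flags like -u, -G, -h, -D and -g instead):

```dockerfile
RUN addgroup --gid 1001 docker && \
    adduser --uid 1001 --ingroup docker \
            --home /usr/src/app \
            --disabled-password --gecos "" docker
```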

In short, the command will create a locked-down user with no
password, with the home directory set to whatever WORKDIR is (for
convenience), and a user ID of 1001. Most people that do this by hand
will use the name docker by convention — but this is up to you. Oh,
and the --gecos flag passes an empty string for the name field so you
aren't slowed down by a prompt.

This works, but there is another way, believe it or not, and you can do
it by going without a Linux distribution at all.

Distroless Images

So far, we’ve been focusing on two Docker-centric distributions of


Linux: -alpine and -slim, the latter being a trimmed down version of
Debian. They’re small and work great… but there’s still more we can
do here.

If we’re running a node process, do we need a package manager (like


apt, for instance), a shell (like bash) or any other system that comes
along for the ride in these distributions? We really don’t.

What if we could strip out everything except for the exact packages and
processes we need to run the thing our image is designed to run?
That’s what a distroless image is.

You might be wondering if this is worth doing, or are we just
engineers being engineers? The short answer is that if security is a big
deal to you, and you can get by with a slightly larger image: then yes,
it's worth doing. You can't log in to a running container that doesn't
have a terminal, SSH or a shell, can you?

The only way this can be done with Docker is to have a multi-stage
image, which is, essentially, two FROM commands:
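A sketch of what that looks like (index.js here stands in for whatever your entry script is):

```dockerfile
# stage one: install dependencies using the full Node image
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .

# stage two: copy the app into Google's distroless Node image
FROM gcr.io/distroless/nodejs:16
COPY --from=build /usr/src/app /usr/src/app
WORKDIR /usr/src/app
# the distroless entrypoint is already node, so we just pass the script
CMD ["index.js"]
```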

The first stage of the build uses an alias, AS build, that will let us
copy things from it. We don’t need NPM (Node’s package manager) in
our container, so let’s use the first stage to install and build everything
we need and then copy it into the second stage, which is built FROM
a special distroless image created by Google.

This will run our Node.js code in a container that has a Linux kernel, a
running node process and the libraries that are needed to make it run.
That’s it. Weird, wild and interesting if you ask me. The resulting
image is slightly larger given the layers involved, but it’s also
extremely locked down, which is a good thing!

Right now, there are only 5 distroless images you can choose from:

Java
Python3
Go
Node.js
Rust

But I would imagine these choices will expand over time.

SHARING AND DEPLOYING YOUR IMAGE


Docker is fun to play with on your own, but that’s not why we’re
using it. What we want is:

A shared environment for our developers that we are sure
works for everyone.
An environment that we ship, instead of the code.

That’s the power of Docker: you ship the entire environment, not just
your code! Even better: you can remove yourself entirely from the
process and have your build tools (GitHub Actions, for instance) build
the image for you.

Let’s see how this works, shall we?

Understanding Docker Tags

We’ve been building our image without the use of tags, despite using
the -t flag, which looks like it should mean “tag” but doesn’t. Well,
sort of doesn’t. It’s confusing, I suppose.

If you ask for help using docker build --help, you’ll see the -t (or --
tag) command:
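
```
$ docker build --help
...
  -t, --tag list   Name and optionally a tag in the "name:tag" format
```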

It’s critical to tag your build, especially if you’re going to be deploying


it in production. Doing so allows you to roll things back easily.

So, as you can see, if we want to tag our image we need to use a
specific command, something like docker build -t red4app:1.0.0 .
(where 1.0.0 might be our version number). This is a common
approach, but there are others you might think about.

Tagging Strategies

If you head to DockerHub, which is where most public images are
stored, you can browse the more popular repositories to see what they
do. Here's the repo for WordPress, and here's one for Ghost. They, of
course, have different concerns than you do as their images are built
for specific purposes.

For example: wordpress:6.4.2-php8.1-alpine is… well, you can
probably guess. WordPress version 6.4.2 using PHP 8.1 on Alpine
Linux. They need to tell you this as the end user because it's
something you'll want to know right away given the needs of your
Docker host environment.

For us, it’s a different story. I think using semantic versioning should
be enough. If, by the way, that’s the first time you’ve encountered that
term, you should click that link and have a read. “Semver”, as it’s
called, is the “1.0.0” format we use for software versioning, which is
deciphered as “major.minor.patch”. Major versions are allowed to break
previous versions. Minor versions add features that are backwards
compatible, and patches are bug fixes. Fun stuff! But off-topic.

I know some organizations that will add a sprint name, or a milestone,
to keep things "tidy". I suppose it depends on how "agile" you're
feeling, but it can be helpful to know what builds go with which
milestones. If you name your milestones after version numbers, well,
that makes it all the more simple!

I think all of that is confusing, if you ask me, and I would suggest
using SemVer unless you have a good reason not to, such as your
company's Docker naming policy.

Pushing to DockerHub

If you’re building open-source stuff, DockerHub is fantastically simple.


It’s also the default when you use the docker push command.

The first thing we need to do is tag our image with some meaningful
name. As it stands, our Dockerfile is building on top of node:16,
which is a good start, but I think it’s more helpful to use a specific
platform too. Now that we know more about the naming of things,
let’s go with Alpine, since it’s one we know and like:

Groovy. Now we just need to name our image in the same manner, if
it’s important to us, which it very well could be! I think the tag
red4app:0.0.1-alpine is a good one, don’t you?
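
```
docker build -t red4app:0.0.1-alpine .
```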

Outstanding. Docker really is wonderful, isn't it? 0.0.1, by the way, is
a typical "first step" version number as we work our way to the first
"stable release", which is reserved for 1.0.0.

Right then, now we need to push this to DockerHub! For this, we use
docker push:
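
```
$ docker push red4app:0.0.1-alpine
The push refers to repository [docker.io/library/red4app]
...
denied: requested access to the resource is denied
```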

Oh man! Well, I suppose that makes sense, doesn't it? I haven't told
DockerHub who I am, which is simple to do. You first need an
account, however, and that's free.

To login, we use docker login of course, and are guided gently along:
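
```
$ docker login
Login with your Docker ID to push and pull images from Docker Hub.
Username: robconery
Password:
Login Succeeded
```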

You also need a repository to push your image to, which you can
create through the DockerHub web site.

You can have a single private repo too, which is nice.

Now that this is done, we should be able to push our image! Well…
sort of. Actually, not really because if we try:
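
```
$ docker push red4app:0.0.1-alpine
denied: requested access to the resource is denied
```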

Crap! What the hell! This can be frustrating, but I suppose it’s also a
good thing too: your image name needs to match the path to your
DockerHub repository, which means we need to rename things:
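(I'm using my robconery account name here; yours will differ.)

```
$ docker tag red4app:0.0.1-alpine robconery/red4app:0.0.1-alpine
$ docker push robconery/red4app:0.0.1-alpine
The push refers to repository [docker.io/robconery/red4app]
```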

Notice that little blurb at the end there? Our image name resolves to
the default container registry, which is docker.io. We haven’t discussed
registries just yet, but as you might be able to figure out, these are
places that host Docker images for you, and there are a lot of them.
Your local Docker install will default to docker.io (DockerHub) if you
don’t specify otherwise. We’ll change that in the next section.

Wooohooo! We’re up, and our image is live to the public!



There are loads of things I can do at this point, including setting up
automated builds using GitHub and other services. That costs money,
but it's worth it! I'll get to that later when we talk about Continuous
Integration and Deployment (CI/CD).

USING YOUR OWN CONTAINER REGISTRY


Almost every cloud has a container registry, which is a good thing. It
makes deploying your app on those clouds extremely easy. Well, easy if
you know how that cloud works, I guess.

Anyway: let’s set up a registry somewhere besides DockerHub because


we want it to be close to where we’re going to deploy things.

I use Digital Ocean, and have for years. I know it well, so I will
show you how I do this kind of thing up there, but you can translate
everything I'm doing to other cloud providers as well. AWS, Azure,
and GCP will probably be a little more complex, but hopefully, you can
work out what goes where easily.

So I’ll click that big blue button there and create my first registry.
They have a free tier, which is pretty darn good if you ask me, and it
gives you 500Mb of space. Our container is a bit smaller than that,
coming in at 40.5Mb (according to DockerHub) and we could
probably make it even smaller if we wanted but, for now, I’ll leave it.

More space is extremely cheap, however, and tops out at $20/mo for
unlimited repositories and 100G of storage. That’s a lot.

OK, I’m good to go!

One of the things I like about Digital Ocean is how friendly it is. They
make things plain and easy to follow, for the most part, including
logging in:

I won’t walk you through all the instructions, but it took me about 20
seconds to get logged in to Digital Ocean’s registry, and I’m now good
to go!

Note: we’ll be talking about Kubernetes in a few chapters, but this


worth bringing up now. Digital Ocean will “connect” your registry
with your Kubernetes clusters if you like:

That is extremely helpful when it comes to quickly spinning up new
containers!

Anyway: I’m all set now, and can tag a new image for deployment to
Digital Ocean:
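(I'm using redfour as the registry name here; use whatever you named yours.)

```
docker tag red4app:0.0.1-alpine registry.digitalocean.com/redfour/red4app:0.0.1-alpine
```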

I’ll admit, that registry address is confusing, isn’t it? This kind of
naming is common in the Docker world: registry/reposito-
ry/image:tag. It can get pretty verbose but, again, we’ll fix that in a
second!

Now we push!
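(Again, redfour here is my example registry name.)

```
docker push registry.digitalocean.com/redfour/red4app:0.0.1-alpine
```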

Just like that, our image is up and ready to be pulled wherever we’re
going to deploy it.

MAKING LIFE SIMPLER WITH MAKE


You knew this was coming, didn’t you! One thing that will mess you
up quickly is a typo in your naming scheme, which will happen if you
keep typing the names manually! You can trust me on that one…

To remind you: Make is a build tool that I like to abuse. It works like
most build tools do, with targets and recipes, and is older than the
hills. In many ways, it’s perfect to use with Docker because we are,
indeed, building things!

I can automate the steps we’ve been going through using 3 targets in a
Makefile:
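Something like this, swapping in your own image and registry names:

```makefile
IMAGE := red4app
TAG := 0.0.1-alpine
REGISTRY := registry.digitalocean.com/redfour

# recipes must be indented with tabs
build:
	docker build -t $(IMAGE):$(TAG) .

tag:
	docker tag $(IMAGE):$(TAG) $(REGISTRY)/$(IMAGE):$(TAG)

push: tag
	docker push $(REGISTRY)/$(IMAGE):$(TAG)
```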

If you’ve never seen a Makefile before, they are incredibly easy to


learn. At the top are some variables that work the same as variables in
a shell environment. We have 3 targets (build,tag, and push) that do
what they suggest and, as you can see, push has a dependency on tag,
which need to run first.

The first target in any Makefile is run by default, so if I execute
make in my terminal:

You can see the command that’s being run. Yay! Notice, however, that
I’m not naming the image using the registry. We haven’t discussed
that yet, but I figure I would mention it now.

You can tag an image using a naming convention when you build it, or
you can use the tag command, which is docker tag [source] [target].
We need our image tagged with the registry address, so that's what
that command does.

Now we push!

Glorious! As you can see, we’re rejected here because our image
exists. If we want to rev that image, we just need to tweak the TAG
variable in our Makefile, and we’re good to go.

Make your wins look easy

FLEX! Make is one of those things that, when done well, will set you
apart.

SUMMARY
Right then! If you have an interview with a team tomorrow that uses
Docker, hopefully you’ll be just a little more prepared than you were
before you started this chapter.

Docker is a wonderful tool, but like so many wonderful tools, it can
have extremely sharp edges. Just know this: if you understand the
basics of security and the reasons for the obsession with image sizes,
then you're way ahead of the game.

Now comes the fun part: getting containers to work happily together.
FOURTEEN
SIMPLE CONTAINER ORCHESTRATION
DIGGING IN TO DOCKER COMPOSE

The power of Docker lies in its speed and simplicity. You can
create a container to do just about anything, from running
Node, Python or Ruby scripts to housing your application
data. In fact, you can do it all at once!

Imagine creating an application that had components and services
written in whatever languages your team felt like writing them in.
Maybe one of the services has to do video transcoding — write that
service in Rust! Another might do simple database queries — let's use
Python for that.

Each service can do its thing and communicate with other services
using some kind of message transport. Perhaps it’s a queueing system
like RabbitMQ or a data routing system like Kafka. You could also use
an event-based backend and have your services listen for events and
then do their thing, causing yet more events.

Sounds fascinating, doesn’t it? It is — but it’s also difficult to get
right. This practice is called microservices, and I’m getting ahead of
myself talking about it right now. We’ll discuss it in a later chapter,
but I felt it was a good example of how Docker has upended software
development and created entirely new ways to build an application.

And it all starts with the idea of orchestrating these containers
together. That’s what we’ll get into in this chapter, and we’re going to
start out gracefully using Docker Compose and then ramp it up to
more industrial strength solutions, including Kubernetes and “cloud
native” offerings from the Big Cloud Providers.

Welcome to the wild, fun world of DevOps!

THIS IS IT
To get the most out of working with Docker, it’s important to
understand that you’re not in the happy land of code anymore. This is your
application infrastructure — IT!

Applications used to live in data centers running on servers cared for
by people with a long, long list of certifications. They had to think
about networking, security and system updates — all, so our
applications could run as well as possible. This is not a trivial
concern.

When you step into the world of containers and container
orchestration, you’re going through a “context shift”, from coder to IT
admin, and you should not take this duty lightly.

Each container should be treated as a server, talking to other
containers over a network that you create, alongside other networks
on a different subnet controlled by a virtual switch somewhere.

It’s easy to get overwhelmed, but Docker Compose takes a lot of this
off our shoulders, letting us happily create our containerized
applications. We, on the other hand, trust that Docker Compose won’t
blow our foot off with its default settings.

Are you happy with that idea? Me neither. Let’s get to know Docker
Compose and what it’s doing for us. We don’t need to go that deep —
just deep enough to get below the abstraction a bit and understand
what’s being done.

HELLO DOCKER COMPOSE


For many developers, Docker Compose is all you need. It’s a very
capable system that wraps up most of what you would want from a
container orchestration tool because it will:

Create containers and network them together.
Set properties of those containers like ports, users,
environment variables and so on.
Restart any failed container.
Start containers in order if desired.
Create volumes and attach as needed.

Docker Compose is a virtualized IT admin that’s on the clock, 24/7,
looking after your containers and making sure they stay up. Well…
sort of. It won’t apply security patches or monitor disk space — but it
will restart things if something explodes!

That’s a lot of stuff and, as I mentioned, for simpler apps this is all
you need, and you can trust Compose to do the right thing. But what
if you’re not working on a common scenario? Maybe you’re creating a
logging service that is constantly receiving input from apps in your
enterprise.

Docker is a great fit for that too because, if your code is written to
handle such a thing, you can “fan out” multiple containers to handle
the ingestion load, as needed. Creating a Docker container is fast,
allowing your app to respond by simply creating more containers.

Compose can’t handle that, unfortunately, which is OK as it was never
meant to. We’ll see how to deal with that problem in the next chapter;
for now, let’s see what a typical Compose file might look like as we dip
our toe into the wild world of IT administration.

The docker-compose.yml File

The markup language YAML is ubiquitous in the DevOps world, and it
stands for “Yet Another Markup Language” or, in some circles, “YAML
Ain't Markup Language” which is a Unix-y ha ha self-referencing thing
like “GNU’s Not Unix”, another recursive acronym.

YAML was created in 2001 and popularized by Ruby on Rails, which
used it as a configuration mechanism because Ruby could process it.
All of that is interesting, but what we really care about is the docker-
compose.yml file that’s going to orchestrate our app.

Let’s take a look at what the application directory structure might look
like for an e-commerce application, and how Docker Compose might
work with it:

There’s obviously a lot going on here, and I’ll explain it in a second,
but just know that organizing things (like directories and files) is
subjective. This is a system I’ve used in the past, but it’s likely that
you and your team will want to do whatever works for you.

Here we have an app directory that contains our services, cache, and
frontend code. Let’s say we’re building a Vue storefront with a Flask
API for the cart written in Python that uses a Redis cache. The
checkout api also uses Flask but saves its data in PostgreSQL.

Each of these parts of the application, or “services”, has a Dockerfile
that describes the image it needs to run. The cart API will need a
Python image and Flask installed. The Redis cache will need Redis as
well as a volume for persistent data storage — you get the idea.

At the top level, we have two docker-compose.yml files, one for
development and one for deployment to a Docker host.
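One possible layout (the names here are illustrative) looks like this:

```
app/
├── cart/              # Flask API + Dockerfile, backed by Redis
├── checkout/          # Flask API + Dockerfile, backed by PostgreSQL
├── cache/             # Redis configuration
├── frontend/          # Vue storefront
├── docker-compose.yml # development
└── deployment/
    └── docker/
        └── docker-compose.yml  # production
```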

Let’s stop here and take a breath. I would rather not overwhelm you,
but I also don’t want to start off with a HelloWorld microservices
app, either. If you find this bewildering and strange — welcome to
DevOps. There’s a lot that goes on in this joint, but if we move at a
pace, hopefully we won’t fall into any weird holes.

Defining The Cart Service

Let’s assume that we’ve already created the Dockerfile for the cart
API. Here’s how we can now use that image, together with a Redis
image, orchestrated by Docker Compose:
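A sketch of what that file might contain (the ports and environment variable names are illustrative, not lifted from my screenshot):

```yaml
version: "3.8"
services:
  cart:
    build: ../cart
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```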

The top levels of the docker-compose file are, typically, version,
services, volumes and networks. The version is… well, the version of the
Docker Compose syntax, and the services are what we care about the
most. It’s not required, but it is useful. I’ll talk about volumes and
networks later.

The cart service specifies that the Dockerfile used to build the
cart image and container lives in the relative directory ../cart. When
Compose sees this build directive, it will go and grab that Dockerfile
and build it.

The next directive, environment, sets the ENV variables in the
container at runtime. Our cart service will need this because this is
how it will know where Redis is. Something to notice about these
settings is that the REDIS_HOST is set to redis. This is because
Compose creates an isolated network that these containers run in, and
the names of these containers correspond to their service names.
Wonderfully simple and direct!

This doesn’t mean that one container can instantly go rocking around
in another — remember, each port is locked down unless you specify
that it should be open. Indeed: if you look at the redis service down
below, you can see that ports are mapped specifically for the Redis
default port of 6379.

The final bit is depends_on. This tells Compose that the redis service
should be started before the cart service.

It’s a remarkably simple system, I think. It does take some time to
understand the syntax nuances, but once you get those, you’re off and
running.

One final thing to notice: we didn’t use a Dockerfile for the Redis
service here. We definitely could have, but instead I used image,
which tells Compose to go out to DockerHub (or other registry) and
pull down whatever image I specified.

We have to do a little more work with the redis image, however, as we
need to give it a command to start, which we do with command. This
container is critical to our application, so I’m telling Compose to
restart always if things fail.

Understanding Volumes

Volumes are simply mapped directories back to the host and are used
if things need to be persisted between containers. For instance, if
Redis crashes, we don’t want the data to be lost, so we ensure that it’s
stored in a volume.

The Redis image stores its data in a /data directory, so if I map that to
my local Docker volumes directory, it will be saved. I can also map a
volume somewhere else on my hard drive if I like.

You can see that in action in the cart service:
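The mapping looks something like this (paths are illustrative):

```yaml
services:
  cart:
    build: ../cart
    volumes:
      - ./cart:/usr/src/app
```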

Here I’m mapping the cart directory on my local drive to
/usr/src/app in my container. The syntax for this stuff does start to
build up and, for some reason, I keep forgetting how to map volumes
like this! A simple way to remember is: host : container.

Dev vs. Prod

We’ve been working on the docker-compose.yml file in the root of
our application, which is typically used for development purposes. If
you were to pull this repository and want to get started working on it,
you could just use docker-compose up right in the root and your
development environment would start right up. Neat!

This is especially useful when you’re working on a team and where
using Compose shines: a simple system that one user can run locally
to code up a given service.

In theory, this works great, but in practice it’s a different story. If you
have a part of your application that requires extensive builds (like
Nuxt, for example, which is a Vue.js frontend framework), Docker
(and Compose) can be quite slow. This is due to everything running in
a virtual machine rather than using your computer’s resources
directly.

The biggest place you’ll notice this is with file input and output, aka
“I/O”. Running something like npm install inside a container can be
massively slow, especially if you need to install for the first time.

You can get around these problems by tweaking Docker’s resources,
which is easy to do. I’m running on an M1 Mac and I have a lot of
extra power to give to Docker, so I’m happy to use it for development.

When running production, however, we’ll have a whole mess of
different concerns, primarily where Redis and PostgreSQL will be
living. We’ll discuss this more in the following sections, but the
essentials are, once again, I/O. PostgreSQL needs good I/O to be
performant, and you want it running on a glorious SSD for maximum
benefit. If you’re in a virtualized container, however, you’re going
through virtual I/O, which isn’t good.

Companies like DigitalOcean and others have optimized this and,
from conversations I’ve had, more and more people are feeling
comfortable deploying their database using a container. I, for one,
think this is crazy talk but… I guess if it works, it works!

I will typically go with a hosted solution, which most cloud providers
offer. DigitalOcean, AWS, Azure and GCP all have managed database
offerings. If we go with one of these, then we need to be certain our
/deployment/docker/docker-compose.yml file uses the proper ENV
variables, and we need to be certain we don’t spin up our own Redis
and PostgreSQL containers.

STARTING THINGS UP
Here is the code for our cart, which, for now, is just an express
application. We don’t have any routes just yet, but we can add those
later:

As you can see, we’re using the redis service to store the cart data. To
keep things brief and on-target, I’m going to ask you to imagine that
code exists :). The main thing to notice is the url of the Redis client:

redis://redis:6379

If you’re not familiar with URL connection strings, they’re built the
same everywhere, including the web:

protocol://user:password@server:port/resource

Pretty handy for connecting to things, but let’s not get caught up in
this either — so many little rabbit holes to fall into! The main thing I
want you to notice is that our Redis connection URL is
redis://redis:6379. That’s Compose networking in action, as the
name of the server resolves to the name of the service. We’ll get into
networking in just a minute — but I think this is super groovy.

Furthermore, to remind you, my cart Dockerfile looks like this:
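Something along these lines (the base image tag is an assumption, not the exact one from my screenshot):

```dockerfile
FROM node:20-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
```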

When the cart container starts, the default command will be npm
start, which will simply call node index.js — standard stuff for
Node.js. Express will spin up and so will our Redis client — and off
we go:

As you can see, our Redis server was started first because we specified
depends_on: redis for our cart service. The npm start command was
called, and our app spun up!

It’s pretty wild to see this happen for the very first time, isn’t it? The
output shows the STDOUT from our running containers, which
suggests we’re good to go.

Each service we define will be accessible from the other services in our
docker-compose.yml file using the name of the service. For instance,
if I wanted to use my cart API over HTTPS, I would just need to make
sure I exposed port 443 (using EXPOSE in the Dockerfile or expose
in the docker-compose.yml service definition) and that’s that.
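In compose terms, that’s one extra setting on the service (the port number is illustrative):

```yaml
services:
  cart:
    expose:
      - "443" # reachable by other services on the Compose network, not the host
```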

This will be the same for every service we specify in our docker-compose.yml
file: each one gets a hostname, matching the service name, that resolves
to its container, which means we can access any one of them from the others.
This does not mean that the ports are open and access is easy! All
ports are locked down by default, and you have to explicitly open
them. Still, each container is on the same subnet.

Is this a good idea? If you have an IT background, you might be feeling
pretty twitchy right now, so let’s dig in to networking.

UNDERSTANDING DOCKER NETWORKING


When running Docker Compose locally, things are done for us in the
background, and it just works. I think most of us, especially me, would
be happy to trust Compose and get on with our day — IT and
networking in general are not my favorite subjects.

That said, we are doing “IT stuff ” and we definitely should not take
that lightly, as tempting as it might be. Networking is a massive topic,
and we could get lost easily, but rather than do that, let’s dip our toe
into the networking world so we begin to know what we don’t know,
going deeper as we need.

Docker Compose supports several different “drivers”, or types of networks,
that you can use depending on your needs. For most people, the
default bridge type will be fine, and Compose will create this for you
with all the necessary settings.

Now, I realize that for anyone out there with IT experience… well,
you’re probably jumping out of your chair screaming right now.
Setting up a network is NOT something you normally leave to a tool!
Routers, switches, subnets, external access vs. internal, not to
mention DHCP and DNS! How is it possible that Compose can do all
of this and do it correctly?

That is a great question. The answer is simply this: Compose tries its best
but leaves many choices up to you. I know, sounds wishy-washy, doesn’t it?
Let’s dig in and see what’s going on.

The Default Bridge

If you don’t specify any network, this is the one that will be created
for you. The idea is that you have containers that want to talk to
each other — so a lot goes on in the background to allow this to
happen.

Now that our containers are started and networked together, let’s see
what’s going on using docker network ls:
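The output looks roughly like this (the network IDs here are made up; yours will differ):

```
NETWORK ID     NAME              DRIVER    SCOPE
9f6b2c1a4d3e   bridge            bridge    local
2c8e7a1f5b4d   compose_default   bridge    local
7d1a3e9c2f8b   host              host      local
4b5c6d7e8f9a   none              null      local
```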

Here we see a total of four networks with an interesting story. The
bridge network is built in and is Docker’s default network when you
start a container, unless otherwise specified. The host network is
fascinating, and I’ll get to that in just a minute — but the short story
is that this is a connection to the Docker host, which is a VM running
on my local machine.

Finally, we have none, which is literally no network at all. I’ll talk
about that down below too.

The one we care about is compose_default, which was created for us
by Compose; my directory is called compose, so the suffix _default
was added.

Let’s take a look at this network using docker network inspect
compose_default:
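Abbreviated, the interesting bits of that output look like this (the exact values will vary):

```json
[
  {
    "Name": "compose_default",
    "Driver": "bridge",
    "IPAM": {
      "Config": [
        { "Subnet": "172.27.0.0/16", "Gateway": "172.27.0.1" }
      ]
    }
  }
]
```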

There’s a lot going on here, and networking geeks might find all of
this fascinating. I’m not a networking geek, but I know enough to
realize that we have a full, virtual network happening here with a
switch, subnet and DNS. It’s OK if you don’t know what those things
are — the point I’m trying to make is that this is a full, real network
running virtually behind the scenes.

Should we care about this at a deeper level? I say we should, and the
first question we might want to ask is “what about other Docker
Compose files… are they on the same network!?!”

I was going to demo this and show you that no, every network is
isolated, but I thought it's easier to just say it. When Docker Compose
starts, it looks at the existing networks and increments the subnet IPs.
Our subnet looks like this:

But if I started up a new container set, the subnet would start with
172.28.0.0/16 — notice the “28” rather than the “27”.

By the way, if you don’t understand routing, subnets, DHCP and NAT
stuff — take a second and have a Google. If you set up wifi in your
house or apartment, you’ve worked with all of this stuff. Every
machine on your home network talks to your router, which is your
gateway to the outside world. Internally, your home machines (and
phones, Xboxes and so on) are given an IP address within a certain
range defined by the subnet. In our config, our subnet is written as
172.27.0.0/16: the /16 means the first 16 bits (the 172.27 part)
identify the network, leaving the rest for hosts, so addresses run
from 172.27.0.0 to 172.27.255.255.

OK — hopefully that’s enough to get you through this chapter!

The Named Bridge

Let’s look back at the services we’re eventually going to define for our
application:

There are many services here! The question becomes this: do they need
to have access to each other, or should they be isolated?

Our credit card processing might be asynchronous, reacting to events
queued in a service bus somewhere. It might be a good idea for our
checkout service to be completely isolated from the rest of the
services, just in case a bad person found their way in to one of them.

We can do this by defining custom networks:
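A sketch of that setup, trimmed down to three services for brevity:

```yaml
services:
  frontend:
    networks:
      - frontend
  cart:
    networks:
      - frontend
      - backend
  checkout:
    networks:
      - backend
networks:
  frontend:
  backend:
```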



Here I tell Compose to create 2 named networks: frontend and
backend, and then I specify which services use which network. You
can have a single service use more than one network too, just add it
under the networks setting in the service definition.

We can now inspect our networks again using docker network ls and
here’s what we would see:

The compose_default network is still there — but we have two new
ones, called compose_frontend and compose_backend. If we inspect
these networks, we’ll see three different subnets, which is grand.

We can specify, if we like, a whole mess of options for each network,
including an entirely different subnet and gateway. If you’re interested
in learning more, you can do so here. As I keep saying: I could write an
entire book on this one subject alone, and that’s especially true here. My
goal, with this chapter, is to get you familiar enough that you can
know which questions to ask and rabbit holes to fall into when
problems pop up for you.

For now, let’s move on and discuss a few more network types.

MacVLAN

Fair warning: if you’re a twiddler, you might lose the rest of your day
messing around with Docker networking. It’s so interesting!
MacVLAN networking is where we leave the comfortable world of
defaults and get into the customizations that you might need if you’re
running your own IT setup.

Note: self-hosting your applications on your hardware is happening more and
more as the complexity and cost of cloud providers increases. If you have the IT
know-how, you can save a lot of money running your machines.

The MacVLAN has one purpose: bind your containers to your local
network. You give it a subnet, IP range, and the gateway address, and
each container is treated as a peer on that network. Like a bunch of
little VMs!

That’s crazy. You might be thinking: why in the world would I ever do this?
Good question! If you’re the “network person” at your company, you
might want to expose a service to everyone in your subnet — maybe
it’s the dev team, and they need a centralized PostgreSQL or Redis
service — MacVLAN makes this simple to do.

You need to understand your network first, however, before you can
set up your MacVLAN, specifically you’ll need to know:

The gateway address. This is the IP address of your router,
typically, that can see the outside world. I’m on a Mac, which
means I can run arp -a to see all the devices on my
network as well as their IP addresses.
The subnet range you want to join. If you don’t know, a
“subnet” is a set of IP addresses of an internal network. When
your router reaches out to the internet, it’s given a public IP
address so that information can be routed back to it. We don’t
want a public IP for every device connecting to the internet, so
our router creates a subnetwork, or “subnet”, using IP
addresses that it assigns. This is typically something like
10.1.1.0/24 or 192.168.1.0/24.
The name of your host network interface. That can be tricky
to figure out — what the heck is that? MacVLAN needs a non-
virtual network interface card (or NIC) to bind to, so we need
to know its name.

If you’re on a Mac, the simplest thing you can do here is go to System
> Network and take a look at your active network connection. Here’s
the setup for my wifi connection:

Here I can see my gateway (Router), which is 192.168.159.1, and my IP
address, which tells me I’m on the 192.168.159.x subnet, but no
network interface name. To get that, I need to ask the terminal using
networksetup -listallhardwareports:

Hardware Port: Wi-Fi

Device: en0

Ethernet Address: XXXXXXXX #my Mac address which you
aren't gonna know

You can find your MAC address by going to the “Hardware” tab in the network
settings.

Here I see that my network interface is en0, which is pretty typical for
Apple machines. You might have to Google a bit to figure out how to
get this information on your machine, but it should take 10 minutes
to get your answer.

I have both wireless and wired connections, and for this example I’m
going to use my wired connection, eth0. Here’s the new docker-
compose.yml network setting:
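The network block looks roughly like this (the network name is mine; the subnet and gateway come from my setup above):

```yaml
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 192.168.159.0/24
          gateway: 192.168.159.1
```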

Now I just need to update the services to use this network setup:
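Each service joins that network with a hand-picked address, something like this (the network name and address are illustrative):

```yaml
services:
  cart:
    networks:
      lan:
        ipv4_address: 192.168.159.201
```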

You’ll notice here, too, that I have to assign a manual IP address for
my services, making sure they’re outside my router’s DHCP range
(auto-assigned IP pool). You can also let Docker assign these IP
addresses for you, using ip_range: in the network setup, but I prefer
doing it manually.

This is where things start to get weird and fall apart.

The Downside of MacVLAN

Long story short: MacVLAN doesn’t have the ability to use Dynamic
Host Configuration Protocol, or DHCP, which is how your computer
connects to a network and is configured automatically by the router.
For that reason, we have to assign IP addresses manually, and we also
have to be sure that those assignments fall outside the router’s DHCP
range to avoid collisions.

Let’s say we do that — like I did above — we’re still not out of the
woods! Docker containers have virtualized everything, including a
Media Access Control (or MAC) address. These things are meant to
identify hardware — physical things on our network — but our MAC
addresses are virtual. This should be OK, but the problem comes when
these MAC addresses are all using the same Network Interface Card,
or NIC.

Three machines using the same network card? Our router might take
exception to that and issues can arise, including lack of access to the
network and the outside world.

There are ways around this, including tweaking your router and telling
it to stop being so strict — but that’s beyond the scope of this book. If
your services can’t connect, for some reason, this would be the reason.

IPVLAN


what’s at stake with the settings we’re using. IPVLAN is interesting
because it virtualizes the MAC addresses of your containers, and
masks them, so that they use the Docker host’s MAC address when
accessing the network.

Wild.


network settings for each of my services to ipvlan:
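Roughly like this, using the same subnet as before, with the driver and mode swapped (names are illustrative):

```yaml
networks:
  lan:
    driver: ipvlan
    driver_opts:
      parent: eth0
      ipvlan_mode: l2
    ipam:
      config:
        - subnet: 192.168.159.0/24
          gateway: 192.168.159.1
```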

Here, we’re specifying l2 (that’s a lowercase L, not a one) as the mode, or “Layer 2
networking”. This is neat, but where things get really crazy is when
you specify l3, or “Layer 3 networking”, which turns your host
machine into a router.

That’s… kind of nuts. Doing this allows machines outside our Docker
network to talk to our containers using typical routing rules that we
set up with the host. Port-forwarding, default responses — all the
stuff you currently do with your home internet router can be done
with your IPVLAN L3.

To get this to work properly, you need to let your existing network
know that a new router is in town, setting up routing and traffic rules
(and more subnets) so that everything works…

This is where I need to gracefully step back from the subject. I am not a
networking person, though I’ve done enough of it in the past when
running my first business. I think it’s good enough if you know this is
possible — and you can go dig in on your own if you would like to
know more. I am, honestly, uncomfortably over the edge of a
knowledge cliff here — even though I’ve spent days studying this stuff!

Overlay

Overlay networks are for people who are, basically, managing an entire
Docker-based infrastructure or an on-premises “cloud”. I’m not going
to go into detail on it because, well, I don’t have the time or
knowledge to dig in much further! Besides, this type of setup is way,
WAY outside the scope of this book.


Docker hosts and their containers to talk to each other. This takes
some configuration, as you can imagine. The idea is that you might be
running your IT department at your company and want each
“developer unit” to have semi-isolated container environments that
talk to each other. To be honest, I can’t imagine what that might be,
but nevertheless you should know it’s possible.

Docker networking is crazy.

None

You might be asking a simple question: “do I have to use a
network?”. Docker Compose creates one for you, even if you don’t
specify it, and we can see it using docker network ls:

Here you can see the ones we’ve been playing with, including the
default bridge network. If you don’t want a network, you have to use
the last one there: none.

It’s the most secure, to be sure, but also a bit useless unless you really
need to be sure no one can get in and snoop your containers! As you
might have guessed, you just specify network_mode: none for your service,
and you’re good to go.

SHARING YOUR IMAGE


One of the great things about working with Docker is that you can
share your image with others. Maybe you’re working on an open-
source application that does something amazing — like a full
PostgreSQL web-based GUI!

This is PGWeb, a web-based Go application that is a fully functional
GUI frontend for PostgreSQL. I’ve used this a few times and it’s
remarkable!

It still blows my mind that you can share an image like this, complete
with Linux, Go and a web server. But how is that done?

To share your Docker image, you need what’s called a Container
Registry. Dockerhub is the most popular and also the default. When
you publish your image there, you build a local binary file and push it,
completely contained and versioned, ready to be shared with the
world. If you want to keep it private, you can do that too, but you have
to pay money.

There are other registries, of course. In fact, you can create your very
own registry by using Docker itself, which is wild. Doing this is a tad
out of scope for this book, but here’s a great article on how to do it
from Digital Ocean, who will also set one up for you with their one-
click deployments.

If I’m honest, there really is no need to run your own registry. Every
cloud has this ability and makes deploying applications from their
registries effortless. GitHub also has a registry that you can tie to a
repository, building a new image as part of a workflow. I would
suggest you use one of these unless you have a good reason not to —
it’s just another thing you would have to maintain!

DOCKER AND YOUR TEAM


Using Docker for development is still, in 2024, a bit of a debate.
Actually, let me rephrase that: it’s still a polarizing topic. For some
people (like me), nothing beats having your development environment
set up locally without another service to think about. Docker does
crash on me, and I’ve been stuck for hours trying to use it to enable
local development. I don’t know what it is — but something always
seems to go wrong, and I end up having to restart the service
constantly.

That said, there are some elementary ways you can put your
development environment completely inside Docker. Seamlessly, I
might add.

Dev Containers for VS Code

I work at Microsoft, so I get to see things before the public does, and
I’ll never forget when my friend Burke Holland showed me Dev
Containers. You have to have VS Code, of course, with the Dev
Containers extension. You also need Docker up and running.

Now it’s a matter of telling VS Code that you need a container for
your environment. You can do this by using the command palette (⌘-
shift-P on a Mac, ctrl-shift-P on Windows). You should then see this:

The second choice will walk you through the setup for your dev
container image. There are some easy “quick starts”, including .NET,
Node, Go and various database combinations, and you can customize
things as you need to once you get started.

The fascinating thing is that you can also store information from VS Code
itself in there! Extensions you need only for a given project, a special
theme, and even project-specific snippets. If you want to see more
about all of this, I did a talk at NDC London in 2022 all about it!
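A minimal .devcontainer/devcontainer.json might look like this (the image tag and extension ID here are just examples):

```json
{
  "name": "node-dev",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  }
}
```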

Neat, but does it work?

This is the thing: in theory, all of this sounds wonderful. In practice,
however, there is a lot of starting and stopping of containers until you
get things the way you need. It’s not a matter of “I need Node and
Postgres — GO!”. You will also need Redis, most likely, SSH with
proper key settings so you can push your code to GitHub, and nginx
so you can serve your app properly.

For some people who know Docker, this is a breeze, which makes
working on a team simple. For other dev leads out there, velocity is key
and, being extremely honest here: Docker slows you down because it’s
another thing between your developers and your application code.

I’ve tried to make the jump to Docker for development 4 times now. It
really does help with things like making sure you have the correct
ruby gems for your Rails app, or the right Python version and
environment for your Django app.

That said, these platforms (Node, Ruby, Python) have excellent
tooling for creating virtual environments within which to work. Rbenv
(which is what I use), RVM, Pyenv, N, NodeEnv and more — these
can help isolate and manage dependencies easily.

Database tools such as Postgres.app and DBngin for the Mac make
setting up databases simple too.

Balance all of this, however, with your knowledge of and comfort with Docker. If you’re good at it and know what you’re doing, installing environment tools and extra software just to support local development seems extremely silly. I get this, I really do. I don’t have a definitive answer for you on this — it comes down to you and your team!

SUMMARY
Docker Compose does a lot, but it lacks one thing, which can be
important: logic. When you create your docker-compose.yml file, you
can, indeed, specify some rules such as “restart on failure” and scale,
which will create multiple instances of a service for you.

But what if you wanted that scaling to happen according to traffic load? Or, perhaps, have different scaling plans (up vs. out, for instance) with a centralized state store for each container?

If you’ve worked with Erlang in the past, you might be sensing we’re
about to enter some familiar ground.

Indeed, we are. In the next section we’ll look at more advanced container topics, getting to know Kubernetes, cloud orchestration services, and more.
FIFTEEN
FORMALIZING OUR CONTAINER
STRATEGY
COMMONLY KNOWN AS DEVOPS, THIS IS
WHERE THINGS GET INTERESTING

When I started my programming career, there was only one way to get your web application in front of other people: through an IT department somewhere. It could be your own at the company you worked at, or a host of some kind. Either way, you dealt with Information Technology types who typically enjoyed the power their job gave them over you.

When containers became popular, this all changed.

Developers could describe the precise environment they needed using a bit of YAML and some shell scripts! “Ship the environment” became the mode of deployment, and Developer Operations, or “DevOps”, was born.

Turns out that developers like to tweak and twiddle things, constantly
inventing and reinventing, hoping to keep improving whatever it is
they’re twiddling. Orchestrating containers is a twiddler honeypot!

Before we dig in, I need to be certain we have some context set: I am not a DevOps person. I know how to use many of the tools, but I haven’t dug in on a big project and freaked out on YAML. What you’re about to read is the result of my investigations and self-education. It’s what I enjoy doing and, hopefully, what I’m good at!

If you discover an issue, please do let me know! I’ll be glad to fix it — it’s how we all learn.

STARTING AT ZERO: HOW YOUR CODE IS RUN


We need to keep our feet on the ground if we’re to get through this.
The jargon and concepts can be overwhelming. This stuff gets complex,
fast, and it’s not just the tools — it’s also a conceptual understanding
of what your app needs and how you can tweak your infrastructure to
support it.

In the “old” days, we bought two Dell servers: one for the web, one for the database. We SSH’d (or remote desktopped) into the database server, installed whatever database we were going to use, ran some SQL files, and crossed our fingers.

To deploy our code, we would set up an FTP client on our dev machine; deployment was basically an rsync away, or an FTP push using FileZilla.

If you had an IT department, you would drop things (code files, binaries, SQL files, etc.) on a network share and let your DBA and server folks know an update was needed.

This, in summary, is our process: we write code, sometimes compile it, and
push it to a machine that executes it for our users. This was true then, and
it’s true now. The difference is that fewer people are involved and IT
departments are slowly going away.

OK, BUT REALLY: HOW IS THE CODE ACTUALLY RUN?


Your code is turned from text into a binary execution format at some point in the deployment lifecycle. If you’re using .NET, Java, Go, or the like, your code must be compiled before it’s even run, with the compiler throwing errors if you break the rules.

In other languages, such as Ruby, JavaScript, and (in most cases) Python, the code is compiled “Just in Time”, which is typically right before it’s executed for the first time.

We know these things, but let’s go deeper. If you’re going to do DevOps, you always have to think about the execution of code and queries: when, where, and how.

Each language I mentioned above has a runtime of some kind. This is a separate binary that will execute (and sometimes compile, too) the byte code that it’s given. For Ruby and JavaScript (using Node), these are single-process runtimes. Put another way: if your application throws an unrecoverable error, your process will die and so will your code.

Some platforms, such as Java and .NET, use threading internally. If you know how to use these, then you can write code a bit more defensively and let a thread die instead of the entire application going down. This makes these platforms a bit more resilient and “self-healing”.

Our goal here, therefore, is to ensure that our application stays up and
happy. We understand how our runtimes work (and don’t, for that
matter), so we can come up with a plan.

PROCESS MANAGEMENT
I need to pick a platform here, so let’s use Node. Let’s say we have an
Express web application that uses Postgres to store data and Redis as
a cache. Right now, we have no idea what a Redis “cache” does nor
how to use it to make our application more efficient, aside from
storing user sessions, but I’ll get there.

My first goal is to make sure our single-process Node application can stay up for our users. We’ve tested our code and feel pretty good about it, but errors always happen and, as Alan Turing showed with the Halting Problem, you can’t say in general whether a program will exit. The corollary is also true: you can’t say whether a program will stay running!

Googling “Node process management”, we land on this page, which answers our questions:

I’ve used PM2 many times and really like it. In fact, Azure (and other cloud providers) use PM2 behind the scenes to run their Node applications. It’s simple to configure, can scale your app if needed, provides monitoring information, and restarts crashed processes. It can also do simple load balancing, which is wild.
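As a sketch (the app name and entry file are hypothetical), a typical PM2 session looks something like this:

```shell
# Start the app in cluster mode, one process per CPU core
pm2 start app.js -i max --name web

pm2 logs web       # tail the logs
pm2 monit          # live monitoring dashboard
pm2 restart web    # restart the processes

# Re-launch everything automatically after a server reboot
pm2 startup
pm2 save
```

If a process crashes, PM2 restarts it for you; that `-i max` flag is what gives you the simple load balancing across cores.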

So, here’s the question:



The short answer is “no”. PM2 handles the application service, which is Node, but doesn’t handle scaling anything else. Often, this might be all you require but, as we’re going to see in a few chapters, CPU-bound processes (like the Node service) don’t typically slow things down; it’s the I/O stuff that hits the disk or network.

Either way: let’s do something a bit more comprehensive, shall we?

THE NEED FOR MORE CONTROL


This is where the pool gets a bit deeper. We have a seemingly simple
application, but a production environment that is far, far too simple
for the long run. At first, everything is code, database, cache — the
trifecta of simple infrastructure. As time goes on, these things get split
out slowly to something a bit more complex:

Services
Messaging
Reporting and Analytics
Event tracking
Monitoring
Logging
One or more databases (Postgres, Redis, Cloud of some kind)

I’m probably missing a few things in here… propping a “modern” application up for the long term is tricky business. At some point, your simple PM2 deployment will start to act “weird” and you’ll want to know why, so you’ll add extra logging so you can see certain events happen and diagnose your problems.

You’ll then add monitoring so you know when things go pear-shaped. It’s always nice when your app tells you what the issue is instead of your users!

Through exploring logs and realtime application monitoring, you discover that one of your services accounts for 90% of the load on your application, which is also slowing things down and causing the “weirdness” you’ve seen in the logs. Someone on your dev team suggests refactoring and breaking things out into smaller services, orchestrated by Docker Compose.

You’ve read this book and understand Docker to a comfortable level, and you understand the role of Docker Compose — so why not? Rather than depend on PM2 to manage Node processes, you can scale with Docker Compose.
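A sketch of what that scaling looks like, assuming a worker service is defined in your docker-compose.yml (the service name is hypothetical):

```shell
# Run three instances of the worker service
docker compose up -d --scale worker=3

# See what's running
docker compose ps
```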

The rewrite takes a few months, but eventually, you get things
working. The container images are up on GitHub and your
deployment process is now, basically, pushing a Docker Compose file
to your host.

You’ve moved your databases to managed hosting, and your monthly hosting bill has gone up by 800%, but your app is humming along happily.

You do wonder, however, if there’s a better way. Things still feel manual, and you wonder if you’ve done things “properly”. In terms of our bar and grill, you’ve spent a few weeks putting together an employee manual and installing a computerized ordering system that tracks everything from orders to sales to food costs and inventory. You’ve also automated payroll and set up Quickbooks. Look at you!

But did you do these things correctly? Having tried (and failed,
multiple times) to set up my books using Quickbooks, I know that
dread feeling! Having all of these systems running is wonderful… but
having them siloed like this is unsettling.

Let’s see what else is out there.

YOUR OWN CLOUD WITH DOKKU


I’ve been using Dokku on and off for the last 8 years. It’s a brilliant
layer of abstraction that sits on top of Docker, and it handles a
ridiculous amount of setup and orchestration for you.

At its core, Dokku uses Git, Docker, and a mighty set of shell scripts
to create your application’s runtime environment. If the CLI doesn’t
scare you, Dokku might be something interesting to look into.

For most applications, you can create the Dokku “stuff” you need using various plugins, then tie them together with a few commands.

The steps are as simple as you can imagine. You’ll need a Linux VM somewhere, and it should have some resources, as it’s going to power your entire infrastructure. I use DigitalOcean for this, but you can set up a Dokku server effortlessly, just about anywhere.

If you have a Linux machine with Dokku installed, you simply need to
SSH in and run some commands:

There it is — that’s all you need to get started! Ha ha, YOU WISH.
There are quite a few more things we need to do here, namely:

Expose the postgres service, briefly, so we can set up the database with some SQL files.
Add the letsencrypt plugin, which will generate and update
an SSL certificate for our site.
Add backup directives, so our database is backed up nightly to
a remote location, such as cloud storage.
Set up monitoring and health checks
Set up a persistent storage volume for things we upload
Set up a scaling plan.
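A hedged sketch of what those follow-up commands might look like. The app, service, and directory names are illustrative, and exact commands vary a bit between Dokku versions, so treat the docs as the source of truth:

```shell
# Create the app and a Postgres service, then link them
dokku apps:create myapp
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create myapp-db
dokku postgres:link myapp-db myapp

# Briefly expose Postgres to load your SQL files, then close it again
dokku postgres:expose myapp-db
dokku postgres:unexpose myapp-db

# SSL via the letsencrypt plugin
sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku letsencrypt:enable myapp

# Persistent storage for uploads
dokku storage:ensure-directory myapp-uploads
dokku storage:mount myapp /var/lib/dokku/data/storage/myapp-uploads:/app/uploads

# Scale the web process
dokku ps:scale myapp web=2
```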

It’s incredible how many things you can do with Dokku, as long as you spend a little time reading the docs and memorizing the mantra “there’s a plugin for that” or “there’s a setting for that”. The people who made Dokku wanted to copy what it’s like using Heroku, and I think they did a good job!

To get our application deployed, we simply do a Git push:

git push dokku@mydokkuserver:app

This fires off a post-receive hook in our remote Git repo that was set
up for us when we created our application. This hook does the needful
and deploys our code for us.

Simple.

Microservices with Dokku

Our application has been broken out into a set of services, and Dokku
can handle this for us just fine — it’s just a little manual. There are a
few approaches we can use to approximate what we put together with
Docker Compose:

Create a set of apps for the services we need and add them to
a Dokku network.
Create a single app with workers that do things
asynchronously, using some type of asynchronous processing,
such as Redis pub/sub.
Skip microservices altogether and scale things using Dokku
directly.
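The first approach can be sketched like this, with hypothetical app names — Dokku can attach apps to a shared network so they can find each other by name:

```shell
# Create a bridge network and attach both apps to it
dokku network:create app-net
dokku network:set api attach-post-create app-net
dokku network:set web attach-post-create app-net
# The web app can now reach the api container by name, e.g. http://api:5000
```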

We’ll talk about this more later on, but managing microservices is
tricky business, and one that you really need to justify vs. “just scale
the entire damned thing out”. Services like Heroku are good at this,
but it’s also a bit simplistic for many designs.

I don’t have an answer on this. I like Dokku; I know it can do a lot, and I prefer the simplicity. That said, if I was leading a team that wasn’t comfortable doing “server stuff”, I would be a bit hesitant.

I feel that Docker is a needed skill for every developer these days. Using Dokku, however, is kind of niche. You would need to be very sure the entire team knew what the setup was and why, just in case you found a new job and were walked out the door where you are now (it happens).

It’s a good option, in my mind, because you definitely have more control over your infrastructure without the feeling that you are doing things too “manually”.

Now let’s get to the good stuff.

FLEXING THE CLOUDS


You can do a lot with containers; we haven’t even scratched the
surface! Docker and Docker Compose can do a lot for you, but there’s
so, so much more that’s possible, especially using the various cloud
providers out there.

Rather than using a single orchestration tool like Docker Compose or Kubernetes to create your application infrastructure, you can assemble various cloud services, building your data center and tuning it exactly as you want.

The big players are names you’ve likely heard before:

Amazon Web Services, or AWS. This is where hosted Virtual Machines became a thing, and it is still, to this day, the dominant cloud service. People that know and understand AWS are in high demand, and for good reason: you need a Ph.D. to figure out how to use it!
Azure is Microsoft’s cloud and is slowly chipping away at
Amazon’s dominance. They work much the same way and
offer similar services… and you need another Ph.D. if you’re
going to use it.
Google Cloud Platform, or GCP. This is an interesting service because Google also offers Firebase, which I’ll discuss below, as an “entry level” cloud service that’s extremely simple to use. Google is an intriguing player here because people simply don’t trust the company to keep anything around for any length of time. That’s a tough reputation to have if you’re offering a cloud service.

You need to know at least one of these services if you’re to move up in your career. I know there will be someone reading this saying “well, actually…” because no one likes declarative statements, do they? It’s true, I suppose, that if you tried hard enough in your career, you could avoid using one of the Big Cloud Services and, instead, use one or more of the services I list below.

If you do this, however, you’re limiting your experience and jobs will
be harder to find. And for what? Bragging rights?

AWS is by far the most popular, and most complex, of the Big Clouds.
If you had to pick one, I would suggest AWS. If you’re a Microsoft
person (.NET, Office apps, etc.) go with Azure. If you choose AWS,
then start with this video from NetworkChuck (Charles Keith). He is
one of my heroes in terms of teaching people things online; so good.
His video should be enough to get you started on your journey.

You can learn these services in a month or so of part-time study, well enough to deploy and maintain a reasonably sized application. When I joined Microsoft a few years back, I had never used Azure. I spent a month getting to know what the various services are, and the tools for using them. I’m a big CLI fan, and the Azure CLI was extremely straightforward and easy to understand.

Once again, I could write 10 books on these three cloud services alone. Instead, I’ll suggest you pick one and get to know it reasonably well.

Those are the bigger services, now let’s discuss the smaller, cheaper,
and simpler services that are super focused with their offerings.

Firebase

I’ve been using Firebase for years and I love it. It gives you (almost)
everything you need to run a web or mobile application, and it’s
extremely simple to use.

Firebase started life as an independent company, offering a hosted, realtime database with simple hosting. Google acquired them about a decade ago and has kept building the service out as a “launching pad” into their full cloud service.

The original Firebase realtime database (just called Firebase) was interesting, but it suffered from scaling issues. It could handle a lot, but your entire database was essentially a single document with lots of subdocuments representing collections of even more documents.

Google released the Firestore database in 2017 as an alternative to the original Firebase and confused everyone, because it was essentially the same thing but, according to Google, could scale a little better. This caused a bit of panic in existing Firebase users (myself included) who feared the original Firebase was about to get axed. It hasn’t (so far), but Google does suggest you use Firestore.

In addition to a database, you get an entire “backend in a box”, so to speak, with services that include:

A dedicated CLI, admin, and client libraries
Authentication
Storage
Analytics
Simple hosting (static sites)
Functions powered by Node or Python

You can also integrate pretty easily with the main GCP services if you
like.

One of the more compelling aspects of Firestore is that you can access it directly from a web or mobile client. There are rules you put in place, so people can’t arbitrarily mess with your data or see things you want kept secret. The net effect, however, is that the collection of services Firebase offers replaces the need for a server.
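Those rules are written in Firebase’s security rules language. A minimal sketch (the collection names are made up) that limits each signed-in user to their own documents might look like this:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each user can only read and write documents under their own ID
    match /users/{userId}/notes/{noteId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```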

This is great if you’re working on a JavaScript-powered front end (React, Vue, etc.) as it makes hosting your application essentially free. The only thing you pay for is data size and bandwidth, which can get expensive if you’re not careful. The largest bill I have ever received from Firebase was $1.67, which was mostly due to storing books like the one you’re reading now.

AWS Amplify

Imagine you could copy/paste Firebase, but instead of pasting a new service on GCP, you pasted it to AWS. That’s what Amplify is:

A dedicated CLI with libraries (like Firebase)
Hosting
Storage
Authentication
Serverless Lambda Functions

In addition to these, you also get a set of premade components and a visual toolset to help you model data and bind it to your interface.

The goal of AWS Amplify is to wrap the most commonly used AWS
services for frontend and mobile applications, and make it “easy” for
you as the developer to build things.

I’m not sure what I think of this approach. Trying to simplify a service
like AWS down into a drag and drop interface seems… overly
ambitious. That said, it also looks extremely compelling:

Most applications that you’ll build in your career are some version of
“forms over data”, which means you spend most of your time figuring
out how to get user input (forms) into a database. Amplify is focusing
on that, and trying to “help” you move faster:

I’ll be honest: I really hate abstractions like this. This is one of those
opinions that builds over time, by the way, and each person’s
experience will be unique, I suppose.

Frameworks and tools that try to hide massive complexity (and AWS
qualifies as that) tend to do things that will surprise you, in both good
and bad ways. Right off the top of my head, I’m wondering what AWS
services Amplify will create for me, and how I’ll be paying for it. Every
month, AWS charges my credit card right around $1, and I have yet to
figure out why. I’ve tried to shut down all of my services, too, but
nothing seems to work.

The last thing you want is a surprise AWS bill, and if you click that
link, you’ll see quite a few stories about how people did a tutorial or
forgot to turn something off when they were goofing around. This can
sink your new company! Thankfully, AWS is pretty good at helping out with unexpected charges; you just need to navigate a phone tree and get a real person to help you out.

Financial issues aside, the problem with Big Abstractions like this is that you have to play by their rules if you want to get the most out of the tool. Ruby on Rails is the same way: play along, and you’ll get a lot of work done. Come up against an edge case where Rails gets in your way, and it’s painful.

These tools and frameworks are great for getting started, but if you
decide to use them, know what they cost and how much workarounds
will hurt.

Digital Ocean and Linode

There are fun “middle ground” cloud services, and two of my favorites
are Digital Ocean and Linode, which was recently bought by Akamai.
I’ve been a customer of both for years, but have used Digital Ocean
the most.

Both clouds lean on simplicity, and instead of offering you hundreds of smaller services, they offer you prebuilt virtual machines you can use as you like.

If I was starting a company today and building a web/mobile application, I would use Ruby on Rails and deploy it to Digital Ocean using the Dokku droplet.

I talked about Dokku above. I use it for all my server-based applications as it’s simple, and I love it. I discuss Ruby on Rails more in the Architectural Approaches chapter later in the book — but I think I should probably answer a question here before I go on: aren’t both of these things magical abstractions that I don’t trust?

Indeed. Dokku orchestrates Docker using Git, and Ruby on Rails is…
well, it’s Ruby on Rails. I know them both, which is to say I know
where, when, and how I’ll need to use workarounds and when both
things will become too hard to maintain as the complexity of my
application grows.

For instance: Dokku is great at stateless container orchestration. Things like web applications (without a database), Redis caching, and more.
It can do stateful stuff, like running Postgres, but I know through
experience that having someone else manage Postgres is always the
best idea.

That’s my current setup: I have a server-based application (using Node and Express) which uses Redis for session management, and that’s run in Dokku happily for years. I also have a Postgres database which is running at Supabase, which I’ll discuss next.

Back to the point, however: simple VMs with prebuilt images can take
you a long, long way. That’s what you get with Digital Ocean and
Linode, with a sprinkling of other services like managed databases,
storage, functions, and monitoring.

Digital Ocean will also run Kubernetes for you, which we’ll see in the
next chapter.

Supabase

One of the challenges I have writing a book like this is that it takes a
long time to pull the content together, and things will inevitably
change.

Figma, for instance, agreed to be acquired by Adobe right after I wrote about it in chapter 7, so I had to reconsider if it was going to be part of the book. Nothing against Adobe, but they might have been planning to absorb it, the team, or just kill it outright. Recently, however, the deal fell through, so Figma is back on its own.

Supabase is in an even more precarious position. They’re a startup with a good idea that is only 3 or so years old as of this writing. They’re competing against Google (specifically: Firebase) and will likely be acquired in the next year unless they can put up some impressive numbers.

All of that said, it’s a service I’ve been using for almost a year now,
and I love it. They have managed to hit a very tight niche, and they’re
doing it extremely well.
Using the copy/paste analogy once again, imagine you copied Firebase and pasted it on top of Postgres, running on managed AWS services:

Supabase is following a popular model: wrapping up AWS services in a pretty package and acting as your IT team. Sounds great to me! But… isn’t this another magical abstraction? Yes… and no.

Supabase is a collection of services running on top of Postgres and AWS, and it offers an API that you can use directly from the client, which flexes PostgreSQL’s row-level security using a pretty nifty idea: passing a JWT directly through to a query using a Postgres extension.
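As a sketch of that idea in plain Postgres terms (the table and column names are hypothetical; auth.uid() is the Supabase helper that reads the user ID out of the verified JWT):

```sql
-- Turn on row-level security for the table
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;

-- Each user only sees (and can only modify) their own rows
CREATE POLICY owners_only ON notes
  FOR ALL
  USING (user_id = auth.uid());
```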

The only reason I don’t like Supabase is because I wanted to build this
exact service a few years ago (minus the row-level security) and
decided against it!

I know and love Postgres, so none of what Supabase does is a mystery to me. Storage is handled through AWS S3 (simple storage), and functions use TypeScript and Deno on AWS Lambda.

So yes: there is a good level of abstraction here, but nothing magical. If Supabase was to run out of funding tomorrow, I would be sad, but I would also be able to take matters into my own hands:

Supabase is open source, which is extremely wonderful. You can pay them to host it for you (which I do), or you can run your own instance and have some peace of mind.

WHAT IF I BOUGHT A SERVER AND PUT IT ON A RACK?


That’s what we did back in the early 2000s. In fact, one of the things
that kept my consulting company up and running for years was our
server rack, which held some massive hardware for our clients.

Not so funny aside: our server rack was in the recreation area of our office, which
I suppose, in retrospect, was a terrible idea. One day, 2 of our core programmers
were playing foosball and one of them absent-mindedly put their can of Diet
Coke on top of the switch while they joined a game. I saw it, and launched the
can across the office, against the wall. No, I didn’t apologize for it. People tried
to tease me about it too… which wasn’t a good idea either, as I would remind
them, every time, how much money was represented by those machines.

I remember talking to Geoff Dalgas on an episode of This Developer’s Life (a podcast I host with Scott Hanselman) about the time that StackOverflow went down. Geoff was one of the original developers of the site, and took care of their servers at an Oregon colocation facility. Long story short, the data center that hosted their servers decided to upgrade their electrical system and, in doing so, cut the power to everything.

This is the problem if you decide to host your own servers: people will
put cokes on the rack. The power will go out. You (or someone on your team) will
need to physically protect your machines from other humans.

I’ve hosted machines for clients, as I mentioned above, but I also maintained servers at Level3 in San Francisco, as well as a few other colocation facilities, the names of which I can’t remember. These facilities are pretty good at maintaining security and making sure things like power don’t go out. But they’re run by humans, and these humans know that visual theater is as important as actual safety.

Case in point:

A fire broke out in Digital Realty's LAX12 data center in El Segundo, California on Sunday 21 May. The fire caused two suites at the facility to be shut down and destroyed an unknown number of servers, with power cut off and some users locked out of the server space. The suite affected by the fire is still apparently out of action three days after the fire.

According to an incident report from Evocative, which used space at the facility, the fire took place at 1:40am on Sunday, in the Colo 4 space within the Digital Realty facility at 2260 E El Segundo Blvd.

The fire sprinkler system was activated in the Colo space above
row 4 and was limited to row 4 with some exposure to adjacent
rows, according to a report from Digital Realty. Power was cut off
to the Colo 3 and 4 spaces.

Fires breaking out in data centers are a big deal. There is a lot of electricity pumping in the walls, and a short somewhere can cause absolute disaster and wipe out your entire business, never to be recovered again.

Is it worth it?

A Money Saving Alternative

David Heinemeier Hansson, or DHH as he’s better known, is the creator of Ruby on Rails and also the cofounder of 37Signals, the company that runs Basecamp.com and Hey.com. About a year ago, he and his cofounder decided they were spending far too much money on AWS and GCP:

The back of the napkin math is that we'll save at least $1.5 million
per year by owning our own hardware rather than renting it from
Amazon. And crucially, we've been able to do this without
changing the size of the operations team at all. Running our
applications in the cloud just never provided the promised
productivity gains to do with any smaller of a team anyway.

This is possible because the way we operate our own hardware actually isn't too dissimilar from how people use rental clouds like AWS. We buy new hardware from Dell, have it shipped directly to the two data centers we use, and ask the white-glove service hands at Deft to rack the new machines. Then we see the new IP addresses pop online, and can immediately put them to work with KVM/Docker/Kamal.

DHH makes a great argument, as he tends to do, for the money they’re saving on their infrastructure. He’s not alone in this: StackOverflow.com still runs on custom-built servers that, I’m guessing, are at a different data center than the one in Oregon.

37Signals didn’t need to hire additional staff to do this, and apparently the IT services they get from Deft are still creating savings for them. One thing David doesn’t mention, however, is that they’ve upped their risk quite a bit. You need to insure these servers, and you also have to be certain you have disaster plans in place, response teams, and a backup plan that is bullet-proof.

They’re smart people over there, and I’m sure they figured all of this out. DHH also offers a bit of caution to his readers, that this isn’t a solution for everyone:

As I've mentioned before, I still think the cloud has a place for
companies early enough in their lifecycle that the spend is either
immaterial or the risk that they won't be around in 24 months is
high. Just be careful that you don't look at those lavish cloud
credits as a gift! It's a hook. And if you tie yourself too much to
their proprietary managed services or serverless offerings, you'll
find it very difficult to escape, once the bills start going to the
moon.

Should you rack your own servers? To me, the question feels like “should I buy a motorcycle?” They’re powerful, cost less than a car, and that hardware…

I don’t know if I could stand the stress, myself. But then again, I’m
not paying out millions a month in cloud fees!

Speaking of, in the next chapter we’ll dig into Kubernetes, which you
can use in the cloud or your own datacenter.
SIXTEEN
KUBERNETES
A CRITICAL TOOL TO UNDERSTAND SO
YOU CAN MAKE SOMEONE ELSE DO IT

Docker Compose does a lot for our container-based setup, but there are limitations. For instance:

It won’t work across host machines.
It won’t load balance between containers.
Not so good when things go wrong. It can restart a container if it crashes, but if the system as a whole has an issue, there are no safeguards.
No built-in configuration for the entire system.

The bottom line is this: Kubernetes sets you up for massive scaling from day one. It has a vibrant community, and is a miracle of engineering. It is, as we’re about to see, a data center in a box, and it requires the same level of understanding that an IT department used to have.

To that end, we’ll start at a high level and work our way down. There
are quite a few concepts and terms you’ll need to learn, but hopefully,
it won’t be too bad. Kubernetes is dense, very dense, and the people
who know it are in extremely high demand.

Before we start, however, let’s ask the most important question.



DO YOU NEED KUBERNETES?


If you ask strangers online, they’ll say “nope” 99% of the time, with
1% telling you “Kubernetes is ASWERORM!” In other words: not
helpful.

Kubernetes gives you control over your deployment and scaling. You
have an entire datacenter at your fingertips, and there is almost no
limit to how high you can scale your stuff. That said, you will be
bathing in YAML regularly, learning new terms and plugins hourly,
and lining your office walls with cheatsheets. Or, I don’t know, maybe
you have a great memory? My notes on Kubernetes go for pages and
pages…

Kubernetes is a complex tool with complex terms that do extremely complex things well. The same can be said about computer programming, and I think it’s fair to say that you’ll need to make room for Kubernetes in your head if you’re going to be good at it. It also helps if you can think in 3 dimensions (we’ll get to that later) because understanding the layout of your Kubernetes topology is critical.

Kubernetes is worth investing in if you:

- Expect to scale your application at some point in the future (when are you not planning this?)
- Have a dedicated person on the team who can “own” it.
- Want an infrastructure to grow with your project, over time, and you’re OK with adding a layer of process and structure to your deployments (rather than git push with Dokku or Heroku).
- Want to add a solid, useful bit of experience to your resume.

That’s it, really. If you want my personal advice on this — get to MVP
(Minimum Viable Product) first and make sure you have users and
potential to grow. Dokku and Heroku are outstanding at facilitating
this! Growing pains are good pains to have, and if you experience them, investing in Kubernetes might work well for you in the long run.

THE MOST BASIC UNDERSTANDING


Kubernetes is software that runs on one or more networked machines
that is centrally coordinated by a single machine called the master.
Each machine that the master controls is called a node, and each node
can also be thought of as a Docker host. I’m going to be using Docker
from now on but when you read it, just know I mean “container”
because Kubernetes will work with all kinds of container services.

This is called a cluster, and it needs to be set up first. Once that’s done,
we can start playing with YAML, and we get to understand one of the
main concepts behind Kubernetes: desired state.

In our YAML file, we specify the desired state of our Kubernetes infrastructure. This includes the images we want to use, configuration options, networking, and so on. When we submit the YAML to Kubernetes, it will do its best to conform to this desired state and, if it can’t, it’ll let us know.

If something happens that puts our cluster out of sync with the desired state, such as a container crash, Kubernetes will do its best to heal itself, back to that state.

That’s Kubernetes at a high level, but wow, there is so much more, and we must move slowly if you’re new to the whole thing. If you’re a Kubernetes person, feel free to move on; what comes next is for people (like me) who only have the slightest knowledge of Kubernetes, namely that it exists.

CREATING A CLUSTER
You can play around with Kubernetes locally, provided you have Linux
running somewhere, or you can play around online. I’m going to do
the latter.

There’s a simple site you can go to that will let you play with
Kubernetes for a few hours. It will set up a cluster for you, and you
can play around for a bit. You do need to log in, but that’s just to keep
people from causing problems:

As you can see, we can play for a few hours and then everything is wiped out.

When you create an “instance”, you get a new “Docker in Docker running Kubernetes” instance, which, I know, is wild, but it’s fun to play with:

We’re given a few warnings and also instructions, the first of which is to initialize our cluster with kubeadm, which manages our cluster. This is the first of two binary tools that we care about; the second is kubectl, which manages the Kubernetes state.

So, to be clear: Kubernetes is the software that powers everything, but our cluster is a set of Kubernetes tools to help us orchestrate Docker. So let’s create one — and I’ll do that with a little copy/paste:
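The screenshot of the command isn’t reproduced here, but the playground’s instructions give you an init command to paste that looks something like this sketch (the exact flags come from the play-with-k8s playground’s own on-screen instructions, so verify against those; running it spits out a wall of status output):

```shell
# Initialize the control plane, advertising on this host's own IP and
# reserving a CIDR range for pod networking (playground-specific values).
kubeadm init --apiserver-advertise-address $(hostname -i) \
    --pod-network-cidr 10.5.0.0/16
```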

Ick. What… is happening here? Do we need to care? I think so, but not for all of it. Let it wash over you and know we’re going to figure it out slowly, one thing at a time.

So, oddly, the playground is calling our Kubernetes instance node1, which is confusing because node is a term within Kubernetes — but just know that’s what you’re looking at there.

The next things to notice are:

- We specified --apiserver-advertise-address and a few other networking things. This is because we’re in a playground and our host machine needs to configure things. Let’s not worry about that now. The API Server is a thing we’ll need to worry about, but we’ll get there.
- You will see mention of things like kubelets, control plane, Pod manifest, and various config stuff. We’ll get to know these things in time; just know they are friends and here to help.
- You will see instructions at the very end, which are essential.

Good. Lord. There is a lot you have to learn, and it’s all good if you want to dig into the administrative bits and bobs of Kubernetes. For some applications, this could be a good idea, as you have a lot more control and know what’s going on at any given moment. If you’re just starting out… well, there are simpler options.

CREATING A CLUSTER, THE EASIER WAY


Here’s the thing: if you needed a web server, would you go and download the source for nginx, build it, and configure it by hand? Probably not! We’re in manual mode here, running Kubernetes directly on a server. It’s an excellent idea to get to know how this works, OR you could let someone else do all of this for you, like your favorite cloud provider!

I’ve been using Digital Ocean for years and like them a lot. Linode is
great too, as are so many other services (Azure, Google, AWS, etc.). It
just comes down to “how detailed and in control do I want to be?”

For me, I’m happy to have Digital Ocean do the first parts for me. I’ll
show you how they do it, but just know that most other cloud
providers do the same kind of thing. I’m honestly not trying to send
them business — but you should see the difference.

The first thing I need to do is select a datacenter and version:



I’ll go locally and then use the recommended version of Kubernetes.

Next up, we need to select our cluster’s capacity. What size do we want for each machine, or node, in our cluster? This is all part of capacity planning, and normally, if you’re here setting up one of these clusters, you know what your needs are, and you can run the math to figure out a good balance between each node’s power and how many containers you’ll be running overall.

Is it better to have 3 cheap nodes for $36/month, or have twice the RAM and CPU over two total nodes at $48/month? The neat thing about services like Digital Ocean is that you don’t need to decide straight away — you can let them do it for you, or you can adjust your nodes manually as you need:

You can see at the bottom there that my monthly bill will range
between $24 and $60/month, which I’m fine with. For starting out,
I'm in favor of knowing just how much I’ll be spending, so I’m going
with “Fixed size”.

We’re asked a few more questions here, one of which is critical:



High availability is a great option if you require uptime guarantees. This is something you’ll need to decide with your team, but if we’re going with the Red:4 training example here, I would say “no, people can wait if we’re down for a bit”.

The second option is the big one, however. Where do we put a database? I like having this option right here because it makes you realize that Kubernetes was designed for stateless applications. Containers come and go, as you’re going to see, and remembering things wasn’t in the original plan.

This has changed, of course, and there are a few ways to run a
database reliably right inside Kubernetes. Or you can do a managed
thing with Digital Ocean, which I’ll skip for now.

After a minute or so, our new cluster should be up and running!



Again, this is what Digital Ocean does, but other cloud providers will
do something similar for you, including letting you download the
configuration file:

This is where things get fun! I think… we’re about to download instructions for a local admin app to reach out to Digital Ocean, telling it what to do. Let’s download that file and put it in a local working directory.

Find a place on your local drive to work from. Ideally, this won’t be in your project’s code directory, because you do not want this stuff in source control. The config file we just downloaded holds the keys to our Kubernetes kingdom, which you should view as your personal, virtual data center.

Let’s play around!

CREATING THE LOCAL COMMAND CENTER


We tell Kubernetes what to do using a local admin tool called kubectl,
which I guess is short for “Kubernetes Controller”. People generally
refer to this as “cube cuddle”, which is … whatever.

You can find the installation instructions here, or if you’re on a Mac like me, just use Homebrew (brew install kubectl).

Once we have that installed, navigate to your local working directory, which should also have the configuration file we just downloaded in it. Digital Ocean gives this a cryptic name, but you can change it if you like. I’m going to rename mine to do_kube_config.yaml, so I know what it is.

Whenever we run kubectl against our new cluster, we need to tell it where the config file is. Let’s check in and take a look at our nodes using the following command:

kubectl --kubeconfig=./do_kube_config.yaml get nodes

Boom! You should see something like this come back quickly:

Don’t you love it when things just work? Let’s tighten this up a bit so
we don’t need to keep adding the --kubeconfig flag, shall we? To
achieve that, we can create an environment variable to hold that info:
KUBECONFIG=./do_kube_config.yaml

That makes things just a bit easier, doesn’t it? If you’re using a shell
plugin that autoloads environment stuff from a .env file, you can set
the variable above right there. If you’re new to environment variables,
they’re settings in your terminal session that your shell can use when
executing commands.
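If you’re setting the variable by hand in the terminal, note that it needs to be exported so that child processes like kubectl can see it. A minimal sketch:

```shell
# Export so kubectl (a child process of your shell) inherits the setting.
export KUBECONFIG=./do_kube_config.yaml

# Now this works without the --kubeconfig flag:
kubectl get nodes
```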

Going for the flex using Make

I love Make. I know I abuse it because it’s really meant for building
software, but you can also use it to execute shell commands (and
scripts) in an orchestrated way. It’s super helpful when it comes to
remembering commands, like get nodes:

This step is, of course, optional. But it’s also extremely fun and useful! You can create a Makefile in your project root, adding a variable for our config:
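The Makefile from the book’s screenshot isn’t reproduced here; a minimal sketch of the idea (the variable and target names are my own stand-ins, not the book’s) might look like this:

```makefile
# Point kubectl at our downloaded config without typing the flag each time.
KUBECONFIG_FILE := ./do_kube_config.yaml

.PHONY: nodes pods

# `make nodes` lists the nodes in our cluster.
nodes:
	kubectl --kubeconfig=$(KUBECONFIG_FILE) get nodes

# `make pods` shows detail on the running pods.
# Note: recipe lines must be indented with a real tab character.
pods:
	kubectl --kubeconfig=$(KUBECONFIG_FILE) describe pods
```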

Kind of looks like YAML, doesn’t it? It’s not, and it can be frustrating
if you don’t know the particulars, which are:

- Each “key” here is called a “target”. Make is supposed to orchestrate compilation and building of software, so like any build tool, a “target” should produce some artifact. They don’t strictly have to, however.
- .PHONY means “these targets don’t build anything”. Make will actually check to see if a target’s build artifact exists and, if it does and hasn’t been modified since the last build, will try to speed things up for you by skipping it.
- Each target’s commands need to be indented with exactly one tab.

We’ll keep playing around with this and see if it’s worth it as we
go on.

A MORE DETAILED LOOK AT OUR INFRASTRUCTURE


I think it’s imperative that we move at a reasonable pace and that you
don’t get overwhelmed with jargon and terms. Or maybe I’m being
selfish, trying not to overwhelm myself.

We have been working with the idea of a “master node” along with
“child nodes”, or just “nodes”, but obviously things are more
complicated than that.

Our master node is also referred to as the control plane, which is a funky
term describing a collection of services at the heart of Kubernetes:

The control plane consists of 4 parts:

- The API Server, which is how the commands are processed from kubectl and other tools.
- The Scheduler, which tracks what nodes are in use and their current load. It also decides when to increase the number of nodes and how, as well as decreasing them (and how).
- The Controller Manager is responsible for maintaining the state of the cluster. We’ll get into that in more detail in a minute, but basically the Controller Manager executes a bunch of run loops that ensure the cluster looks the way we want it to look. If a node crashes, the Controller Manager ensures a new one takes its place, for instance.
- The distributed key/value store, etcd (pronounced “et-see-dee”). This data store isn’t specific to Kubernetes, by the way.

All of these services work to ensure our nodes are in a desired state.

UNDERSTANDING DESIRED STATE


Right now, our cluster isn’t doing anything, other than sitting there idle. This is the state that we’ve asked it to be in: three nodes, no deployments. We can change that, however, by describing the state we’d like it to change to.

We do this by using a bit of YAML, of course. I’ll create a file which will describe 4 containers (also called “pods”) running an Nginx web server image. I’ll put this in a file called nginx-deployment.yaml right in our working root:
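The book shows this file as a screenshot; a sketch of what a deployment matching the description typically looks like (the names, label values, and starting image tag are my guesses from the surrounding text, not the book’s exact file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4               # four pods, spread across our nodes
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx          # the label our load balancer will select on later
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.2   # stand-in tag; any recent Nginx image works
          ports:
            - containerPort: 80
```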

If you don’t know Kubernetes, many of these things won’t mean much
to you, which is OK as I’m about to go over them. For now, know that
this file represents the desired state that we want our cluster to
change to.

The main things to notice are:

- The app label. This helps us keep track of things for our needs, but doesn’t really impact the system.
- The spec:replicas setting. This will create containers for us and evenly distribute those containers across our cluster.
- The template:spec:containers section, which contains the Docker settings we’ll be using for each container. Looks like Docker Compose in there, doesn’t it? Aside from containerPort which, hopefully, you can figure out!

Let’s see if it works!

OUR FIRST DEPLOYMENT


Our next job is to alter the state of our cluster using kubctl, and we
do that by using kubectl apply:
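The command from the screenshot isn’t reproduced; assuming the manifest file name from above, it would be along these lines:

```shell
kubectl apply -f nginx-deployment.yaml
```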

It’s ridiculous how fast this is! It should take well under a second to
issue the command to our API Server, which hands it off to the
Controller Manager and Scheduler.

But did it work? We can find out by asking kubectl to give us a little more info than get returns. For that, we’ll use describe pods:

It’s unnerving how fast Kubernetes is! There is a TON of info dumped
on us here, but I highlighted the big things we need to know, namely
that our deployment was honored and that our image was applied
properly. The services are running, yay!

But where’s our site?

SHOWING THE WORLD OUR WORK


Kubernetes doesn’t know we have a web server in there and that our
intent is to show it to the world. All we have at this point is 4
containers, or “pods”, running Nginx.

Before we get to that, it’s a good idea for us to start using the term
pod instead of container because that’s what the replicas setting
refers to: the number of pods we want running across our nodes.

That’s one of the neat things about Kubernetes: you can have (almost)
as many pods as you want! They don’t correspond at all to the number
of nodes you have in your cluster (hopefully these terms are sinking
in). The Scheduler knows the workload of each node, so when we

change the state of things through a deployment, the Scheduler will decide which node can handle the request. It might redistribute things, if it wants to, but the goal will always be a “graceful change of state”. If anything fails, your deployment will fail!

OK, so how do we tell Kubernetes to show our stuff? We could go the route of setting up a network with port forwarding, but that’s silly! What’s the point of having all of these nodes and pods if we can’t balance the load, eh?

Kubernetes Services

Our first state change deployed an application to Kubernetes, but now we need to add a service, which is also a deployment… but not really. What we need to do is to tell Kubernetes to essentially “turn on” load balancing, and to route incoming requests to a group of pods.

I’ll create a new YAML file, or “manifest” as they’re called in Kubernetes land, and call it load-balancer.yaml:
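Again, the file itself is a screenshot in the book; a sketch of a Service manifest matching the description (the name app-service appears later in the cleanup section, and the ports are my assumptions) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: LoadBalancer    # ask the cloud provider for an external load balancer
  selector:
    app: nginx          # route traffic to pods carrying this label
  ports:
    - port: 80          # the port the load balancer listens on
      targetPort: 80    # the containerPort on each pod
```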

That wasn’t too hard, was it? The main difference from our first deployment is the kind, which is of course a Service. From there, we had to describe the type as LoadBalancer and then which pods to balance things with.

For that, we use the selector setting. This is where the app labels from our deployment above become useful! We specified that each of the pods would have the label nginx, hoorah for us!

Let’s see if it worked. We can do this using the command kubectl get
services:

It’s running, yay! This is a service that sits outside our nodes, as you
can see, and it will be assigned an external IP once one becomes
available.

After a minute or so…

That was almost too easy! Let’s see if we can hit our site:

Oh my goodness! We have a highly scalable, load balanced app running on Kubernetes! This is… almost too fun.

ADDING SSL THE SIMPLE WAY


In 2023/2024, when this book was written, most domains use
Cloudflare to mitigate attacks, handle DNS, do caching, serverless
functions, load balancing, and a lot more. It’s one of the core services
that 1) hasn’t let me down, which is a miracle for how long I’ve been
using it, and 2) gives you everything you need for free.

I’ll talk about Cloudflare in more detail later on, but one of the
services they provide is “proxying” your domain settings if you want
them to. I do this with every site I own, including bigmachine.io. If
you proxy your domain, you get a free SSL certificate, and you don’t
have to do a damned thing to your server.

Check it out:

This is an “A” record for my domain, bigmachine.io, which resolves the name k8s.bigmachine.io to my Kubernetes Load Balancing service. These are Domain Name System (DNS) records, something you’ll need to know even if you don’t do “web stuff”, and something that Cloudflare handles extremely well.

Before we can do this, however, you’ll want to make sure the “SSL/TLS” settings for your domain are set to “Flexible”:

I was having issues with this demo and my friend Aaron Wislang
jumped to my rescue by reminding me of this setting. I had it set to
“Full” which didn’t work because my server doesn’t have a self-signed
certificate.

Note: some services do have this when you create a site with them. They’ll typically offer you a preset site URL that’s something like “https://youraccount.theirservice.something.com”, which will already have an SSL certificate. In that case, using “Full” mode with Cloudflare works great. If that’s not the case, you’ll need to ensure “Flexible” is selected.

Et voila!

You know, I used to spend days, sometimes weeks, waiting for SSL
certificates. You had to prove who you were, send in business and
banking documents, and then you’d get a PEM file with some
(typically) lame instructions on how to set the certificate up with your
server.

Now it’s literally a slider and it works perfectly!

The Not-so-simple Way

I can understand why someone might not want to use Cloudflare for everything they do; depending on a service that much is scary. If I was leading a project, however, Cloudflare would definitely be a hill I would die on. They simply do everything better than any service I have used.

That said, you do have some choices.

Your Kubernetes host/cloud provider will likely have an SSL option for you, or at the very least, instructions for how to set up SSL in your cluster using something like Certbot or cert-manager. Digital Ocean, which is where my Kubernetes cluster is hosted, has several walkthroughs for how to set up SSL with them using Let’s Encrypt.

It’s not difficult and, like everything Kubernetes, comes down to learning you some YAML. I can’t possibly cover everything here, so I’ll just recommend you have a Google. I’ll also strongly suggest you flex Cloudflare. I’ve been with them for close to 8 years now, and they continue to be wonderful.

UPGRADING AND CHANGING OUR CLUSTER


This is why people love Kubernetes: desired state. You tell it what it’s
supposed to do, and it will figure out how to do it. “I want 5 pods
running my app with a single MySQL service, a single Redis service,
and I want it NOW!”

It’s fun to bark orders like that.



All we need to do to change things is update the image for the deployment or service that we want to change. Let’s upgrade Nginx to the latest as of this writing: 1.25.3:
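The screenshot isn’t shown here; in the deployment manifest, the change is just the image tag on the container. A sketch:

```yaml
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.3   # bumped from the previous tag
          ports:
            - containerPort: 80
```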

Now we just need to run apply one more time and off we go!

How easy was that? Let’s take a look at the state of our cluster again,
using kubectl describe pods:

It still amazes me every time I see this. The Scheduler decides where
things should go, and the Controller Manager makes it happen. Here
you can see that the image for Nginx 1.25.3 is being pulled and
applied across our pods, gracefully rolling out one by one, ensuring
there’s no downtime.

This, right here, is your application deployment process:

- Finish your sprint and build your new image, tagging it with something meaningful, such as the name of the sprint.
- Push your image to your registry.
- Update your deployment manifest with the latest image and apply.
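The steps above, as a sketch — the registry, image name, and tag here are made-up stand-ins, not from the book:

```shell
# Build and push the sprint's image (registry/name/tag are hypothetical).
docker build -t registry.example.com/red4/web:sprint-12 .
docker push registry.example.com/red4/web:sprint-12

# After updating the image: line in the deployment manifest, apply the change.
kubectl apply -f nginx-deployment.yaml
```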

This can be a manual process or something you do with continuous integration and deployment, which we’ll talk about in a later chapter.

TANGENT: IS THE FRICTION WORTH IT?


In the early days of your project, you’re going to need to move quickly, making changes on the fly to meet deadlines and demos. If you have the added step of formalizing your codebase into a Docker image, pushing, applying, and praying for success — it will slow you down.

You might then want to speed things up by automating a CI/CD build process that creates a container from your deployment branch on push, and then applies that change to your Kubernetes cluster. This is a common way of doing things, and it’s also extremely process heavy.

The assumption developers have (including myself) when they do DevOps is that it will “just work”. For the most part, these tools do indeed “just work”… but when they don’t, you have to stop everything and unwind a pretty intense process snarl.

What’s worse is that this is a mess you created, and you’ll have to tell
your boss about it. All they’re going to hear is “blah blah blah I have
to figure out this process I created before I can fix things blah blah
blah so it’ll take another few hours”.

I can’t tell you just how disastrous it feels when your development process gets in your way, and I speak from experience on this one. I used to have automated deployments happening from my GitHub repo for my publishing site, bigmachine.io. I would push, a process would go off that ran my tests, built an image, and then pushed off to Digital Ocean. It seemed simple, until it wasn’t.

I decided to move from an Alpine-based image to a Debian one… Bullseye, I believe, because I bought a Mac M1 and wanted to be certain I could run the image locally. Still, to this day, I don't understand why the build broke. I spent a few hours probing the issue, so I could learn something new and fun… which I didn’t have time for, unfortunately. I was in the middle of trying to get a video out the door, and so I scrapped the entire thing and moved everything to Dokku.

I could have fixed this within a few hours. In fact, thinking about it
now I think I might have chosen the ARM image which, of course,
wouldn’t work on an Intel processor which… ah whatever.

Eyes On the Prize

I set that process up because of my ego, if I’m honest. Not to show everyone how amazing I am, but because I wanted to learn it and say I knew how to do it. I think that’s a good ego reason! Unfortunately, it turned out to be the wrong reason entirely when everything blew up on me.

My site exists to show customers the stuff I’ve made: Books and
Videos. If that doesn’t happen, I don’t have a business. This is a good
argument for using Kubernetes, but it’s an even better argument for
having some way of keeping your site alive if it fails. Kubernetes will
keep your site up, but so will PM2 if you’re running Node, or any
process manager, really, including Dokku!

If Kubernetes has a downside, it’s the psychic weight of the thing and
the process it creates in your project. If you have a team that can
handle that, Kubernetes is rad! If not, it could be a disastrous choice
for you and your team.

Moral of the story: consider the friction before adopting this fantastic tool.

YOUR NEW ROLE AS YAML ENGINEER


Thankfully, Kubernetes YAML is reasonably readable. The first lines
tell the service what it’s being handed, and the following lines specify
what’s needed.

Let’s look over the Nginx deployment, once again, but with more
comments:

If you ignore the top bits and only read from spec onward, it makes
reasonable sense. We know how important selectors are now because
we added a load balancer that used those labels.

Each pod has a spec applied to it as well, configuring the images and
their settings as needed.

Here’s the load balancer with more comments:

API version, what the manifest is describing, a name and some labels.
Then a configuration specification for the thing we’re asking for.

Here’s a setup for something more complicated: MySQL on a “persistent volume”. Here’s another specification for using Redis. Reading through the YAML for these, you start to see the pattern:

- API, Kind (Deployment, Service, Pod, etc.), Metadata (name, labels)
- The Spec: configuration for the deployment, service, pod, etc.
- The Spec Template: metadata you apply to every pod created for the spec.
- The Spec Spec: Docker info, which typically aligns with Docker Compose.
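That pattern, as a bare skeleton (this isn’t a runnable manifest, just the shape every one of these files shares; the names are placeholders):

```yaml
apiVersion: apps/v1        # API
kind: Deployment           # Kind
metadata:                  # Metadata: name and labels
  name: some-app
  labels:
    app: some-app
spec:                      # The Spec
  template:                # The Spec Template
    metadata:
      labels:
        app: some-app
    spec:                  # The "Spec Spec": the container/Docker info
      containers: []
```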

It’s one of those things you get used to and need to exercise from time
to time. Kubernetes is all about the YAML and understanding the keys
and switches.

OR JUST USE YOUR CLOUD PROVIDER


Most cloud providers these days try to help you do the simpler, more
common things such as setting up a load balancer and SSL. Here’s
Digital Ocean’s suggestion for the cluster I created:

There are other “one-click” setups that will run the YAML for you as
well, which you can see here. Monitoring, networking, and even
WordPress!

To be honest, it makes my head hurt. There is so much twiddling and configuration happening with Kubernetes that… thinking of another tool doing it for me is… kind of cringe. To me, if you’re going to use Kubernetes, you really should take the time to master your cluster and know exactly what’s going on.

That said…

MEET HELM
Kubernetes is hard, so we need more abstractions to make it less hard!
Apparently. Yes, I’m being snarky, but consider:

- A VM abstracts a physical computer into a virtual thing.
- Docker (kind of) abstracts a VM. There are binary files of course (the image) and the actual process (container), but it’s a step up from VMs in general.
- Docker Compose orchestrates Docker containers so they can work together.
- Kubernetes abstracts the idea of Docker Compose, and the entire data center where you would use those tools.
- Helm abstracts Kubernetes.

It’s like a whole different universe up in here. But, be honest: you were probably thinking the same thing! All of this YAML to set up something so common, like a web app and database etc… can’t we centralize this somehow? Well, I know I was!

That’s what Helm does. It’s a CLI tool that works off things called
“charts”, which are recipes for Kubernetes. If you understand
Kubernetes, you will jump to Helm quickly because it just makes
sense.

Here’s a Helm chart for the Ghost CMS that I love! You can view the
templates that go into this as well, so you’re not just blindly trusting
someone else to build things for you.

13 manifests for setting up all the Ghost goodies. I think that’s pretty
neat!

Once again: I could write 2 or 3 books on Helm, so I’m going to make sure you know it exists and what it does. There’s a Quickstart tutorial as well, which is pretty damned good at describing what’s going on.

Ultimately, Helm is a YAML generator that amounts to a “package manager” for Kubernetes, which I suppose makes sense.
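To give a feel for it, installing a chart like the Ghost one mentioned above typically looks something like this (the repo URL and release name here are illustrative; check the chart’s own documentation for the exact incantation):

```shell
# Add the Bitnami repo, which hosts the Ghost chart, then install it
# as a release named "my-ghost".
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-ghost bitnami/ghost
```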

CLEANING THINGS UP
Let’s drop our deployment, shall we? It’s a good idea to understand
how to clean things up, and as you might expect, you can do this with
a simple command:
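The screenshot isn’t shown; assuming the manifest name from earlier, the command is along these lines:

```shell
kubectl delete -f nginx-deployment.yaml
```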

Bye bye Nginx…

Now let’s do the same for our load balancer. Can you guess what that
command would be?

I probably should have named this something better than app-service. Ah well, naming.

We’ve deleted our pods… but our cluster is still sitting there idle, with
nothing to do! Qué horrible! We’re going to get billed for all of this, so
let’s make sure we drop the cluster at our cloud provider too, which
you can do with Digital Ocean right in the settings screen.

Not Quite Dead Though

This is such a cool thing about Kubernetes. Imagine you’re goofing around and all of a sudden realize you just dropped prod! You weren’t paying attention, and down it goes!

What now!

Your manifests still exist in your working directory, so all you need to
do to get your environment back up is create a new cluster and apply
your deployment and services.

It’s at this point you’ll probably be delighted you decided to put your
data into a managed service!

This is when I get to flex about Make. I can rebuild my entire everything with a single Make command: make.

Love this tool!

WHAT ABOUT…?
Imagine you’re on a plane, flying far away, and you fall asleep during
the flight, happily passing the hours with Morpheus in the Dream.

Boom! You land. You were happily asleep, and now you’re groggy and
don’t quite know what happened. You see that you’re in an airplane
and can’t remember how you got there. People are now getting off the
plane, and you pick up your stuff and as you walk down the aisle, a
flight attendant comes up behind you and hands you your backpack
with your laptop in it and your phone — both of which you forgot as
you stumbled toward the exit.

You walk down the jetway and still can’t remember where you are, and then you see the signs. Oh, right. The Big Trip. I forgot that I’m starting out here… in a big country I’ve never been to…

If you’ve never used Kubernetes before, that person is you (me too!),
landing in Kubernetes Land with only the slightest notion of what’s
going on.

I wish I could fill this chapter with loads of examples and use cases,
but others have done this so much better than I can. If I had to
recommend one, it would be Nigel Poulton. I don’t know Nigel, but
I’ve watched his Pluralsight courses and also read his book. He’s
excellent.

I have to stop at some point, however, otherwise this entire book would be about Kubernetes and, even then, I wouldn’t do it justice!

A NOTE ON CLOUD SERVICES, AKA CLOUD NATIVE


I’ve been writing this book for well over a year now. It’s currently
December 2023, and I started putting together the outline back in July
2022. Amazon (AWS), Azure, and Google (GCP) are the main players
here, with smaller services like Linode, Digital Ocean, Heroku, and
others trying to compete at a lower price point.

Full disclosure: I currently work as a Cloud Advocate at Microsoft in the Cloud Native group. What you’re about to read is my personal, unbiased opinion on things.

In the time that I’ve been writing this book, the major cloud services
have launched over 100 new services, each of which is trying to make
the cloud easier for you as the person who needs your application in
front of people. And it’s going to keep growing and churning.

You can host your Kubernetes cluster on each of the services, or you
can approximate Kubernetes using a collection of offerings, a so-called
“Cloud Native” approach.

This is a massive topic and one that is frankly out of the scope of this
book. I work in the Cloud Native advocacy group at Microsoft, and I’m
still learning about our services!

If you want to go all in on a cloud service, that’s a fine strategy,
provided you understand the billing. Nothing is worse than getting
surprised by a cloud bill you didn’t expect. This happened to me just 3
months ago when I was trying to get a Kubernetes cluster up on Azure
on my personal subscription. I forgot to turn it off. $245 later, I was
pretty unhappy with myself. Cloud companies are pretty
understanding about these things, but still… it’s not fun.

If your company is just starting out, perhaps wait a bit on deciding
what your infrastructure will look like. You’ll want to hire for this
role, or learn it yourself. If you’re at an established company, it’s likely
this decision has already been made for you, and you’ll have a resource
somewhere that can help out.

Right then, let’s move on.

Docker, Docker Compose, Kubernetes, and Helm can do miraculous
things, but only if our applications are built to enable those miracles.
That’s what we’ll look at in the next chapter: different ways to cobble
together an application and hopefully not make a mess.
SEVENTEEN
ARCHITECTURAL APPROACHES
THE STRUCTURE OF YOUR APPLICATION
IMPACTS MORE THAN YOU REALIZE

FUN WAYS TO BUILD YOUR APPLICATION

Putting a book together is hard work. Aside from the planning
and the writing of the thing, there’s also organizing and
editing. I struggled with this chapter in terms of where to put
it: before DevOps or after DevOps.

On one hand, you have to know how your application will be deployed
and where it will ultimately live before you build it. If your company
has Azure credits you’ll probably go with Azure, which means that
you’re most likely a .NET shop, which means you’re going to build a
certain style of application.

Maybe you like Rails and want to move extra fast, so you go with the
Big Monolith. In that case, Dokku or Heroku (or services like them)
are going to work better for you.

Anyway, I had to pick a place to discuss architectural stuff, and it
seemed to make sense to discuss it after we thought about
infrastructure. The two go hand in hand, so off we go.

WHAT PROBLEMS ARE WE SOLVING?


With so many architectural patterns in software, how are you
supposed to choose? Well, I have some thoughts on that.

My golden rule when considering how to build an application comes
down to a single question: what problems are we trying to solve here? With
every discussion below, it’s important to know what problems you’re
facing, and how each approach can help you or slow you down.

For some teams, velocity is key. For others, it’s clarity and structure. It
all depends on your context, but do keep this question in mind,
always. In fact, write it down in a README somewhere, or at least in
your personal journal.

Knowing why you’re doing a thing is critical, because you will be asked
why you’re doing what you’re doing. You should have quite a few
answers at the ready.

STARTUP VS. REWRITE VS. ENTERPRISE


I might have mentioned this, but your job is to ship, and that means that
you need to ensure that you, your team, and your stakeholders
understand the context under which you’re working.

I’m going to assume that you’re familiar with the startup landscape vs.
“the enterprise”. I’ve worked both, and one thing I’ve never entirely
understood is why so much ceremony and structure go into
“enterprise” applications. If you’re talking about accounting, HR,
inventory, or payroll systems, then yes, I can understand that.

I suppose it’s not relevant whether I understand what it is you’re
working on: the point is still the same! You’re here to ship it.

Startups move fast and “break things”, which is fun and exciting until
it’s not. They really do expect you to work 10 to 12 hours a day; even
if they publicly say “we believe in work/life balance”, the rest of that
sentence is “unless you’re winding up a sprint or have a broken
build.”

Velocity and structure are on sliders here, but one thing isn’t, and it’s
the common thread through the rest of this chapter: the ease of fixing
and changing things. Bad architectural choices are easy to spot: “fixing
this bug means we need to change X, which means we need to change
how we Y and ultimately Z.” I’ve been there so many times.

In the startup world, no one cares. Build it, ship it, let’s go go go go!
It’s all about MVP and MRR (monthly recurring revenue) and getting
profitable so we can get another round of funding… so we’re not
profitable anymore.

In the enterprise world you plan, diagram things in UML, sit through
design meetings and, sometimes, write code.

No matter what world you’re in, you’re going to rewrite everything at
some point. We discussed Joel Spolsky’s post earlier, but I think it’s a
good idea to bring up his key point once again:

There’s a subtle reason that programmers always want to throw
away the code and start over. The reason is that they think the old
code is a mess. And here is the interesting observation: they are
probably wrong. The reason that they think the old code is a mess
is because of a cardinal, fundamental law of programming:

It’s harder to read code than to write it.

This is why code reuse is so hard. This is why everybody on your
team has a different function they like to use for splitting strings
into arrays of strings. They write their own function because it’s
easier and more fun than figuring out how the old function works.

I do wonder if it’s in our DNA: we become the alpha person and want
to ensure we have control of everything. It doesn’t matter, it’s going to
happen, so let’s just embrace it.

UNDERSTANDING TTR
Every time I write an application, I wonder how long it will be until I
replace it. The last one I shipped was 3 months ago, and it was for my
side hustle. An insidious bug crept in, and it took me 3 hours to both
figure out what I had written and then resolve the issue. The old site
was a big frontend application written in Vue 3 and the backend was
Firebase and, after 7 years of this setup, I finally had enough.

Over the course of 12 days during my winter vacation, I tore
everything out and redid the site using Ruby on Rails and PostgreSQL.
All server-side stuff. It’s working great and I leave it alone.

But I know I’m going to redo it again someday. I always do. The
question becomes: what’s my Time to Rewrite, or TTR? You can calculate
this by figuring out:

How much “technical debt” you’ve accumulated. We’ll discuss
this term in a second.
How easy is it to isolate and fix a bug?
How often do new platforms and languages come out that
interest you, or that claim insanely high developer
productivity?

That last one gets me every time. When framework developers discuss
productivity, they’re usually referring to the initial build of the
application, which is NOT where the bulk of your time will be spent.
Nope, that’s maintenance, which we’ll get into in the next part of this
book.

Revving to the next version, enhancing things, fixing bugs, adding
features, etc. That’s where you’re going to spend your time.

Technical Debt

Every so often, I get angry with data access tools (OK, very often) so I
end up writing queries by hand because I know SQL and love it. This
is fine, until something crashes, and you realize the query you wrote
sidestepped the validation rule you have in your model code.

That’s technical debt: making decisions for the sake of speed or efficiency that
build up until you’re technically bankrupt.

It’s unavoidable. Even your framework choice is a form of technical
debt! These things don’t last forever and even if they did, that would
be worse! Every new version of a framework or language requires you
to change things if you want to use that new version, which is a pain
in the butt. Vue.js 2 has just reached “end of life” and will no longer
be supported in 2024. How good does that feel to developers who
have been maintaining a Vue.js 2.0 app successfully for the last 5 or so
years?

All we can hope to do is minimize the accrual of technical debt, and we
do that by recognizing when it happens:

How long has your framework of choice been around, what’s
their license, and what is their cadence for major version
changes (major versions can break things)?
How often do you find yourself stuck, creating workarounds?
How easy is it to test the code you’ve written?

Testing to the Rescue

Testing is your primary defense against too much technical debt, and
your style of testing will help you find and “adjust” it as time goes on.
We’ll discuss this more in the next chapter, but for now, it’s important
to understand that a “well-written” suite of tests can help you sleep
so, so much better at night.

We Need It, Just Do It

I have a monitor on my site that emails me any time a critical bug
happens, and I hate it. I love knowing it’s there, and I’m happy when
I’m able to find and fix a bug quickly… but when that thing goes off
it’s like the dentist saying “hmm, we’re going to have to do something
about this aren’t we?”

This is the danger zone for technical debt. The app is down or some
bit of core functionality is doing weird things, and a slow panic sets in.
You throw some code at the problem, building in a workaround of
some kind, and promise to return to the issue when you have more
time.

A day goes by and things look normal. “Guess I fixed it… weird” is a
phrase I’ve used in the past. A week and then maybe a month. And
then your boss asks to see you: why are these reports showing 0 sales tax
collected?

Again: this is where your tests can keep you from stepping on a rusty
nail. As long as you run them, that is, which is why having CI/CD
builds is so important — and we’ll talk about that in the next
section.

Never let a workaround stay in place. It’s a bit of rot in the walls of
your application, and you need to get in there and cut it out, replacing
the rotten wood carefully.

Ooh! Shiny!

Ruby on Rails 7 shipped in 2021 with… well, let’s just say its creator,
DHH, can be a bit dramatic (among other things):

This version of Rails has been years in the conceptual making. It’s
the fulfillment of a vision to present a truly full-stack approach to
web development that tackles both the front- and back-end
challenges with equal vigor. An omakase menu that includes
everything from the aperitif to the dessert.

I’ll be honest: Rails was instrumental in keeping my business going
many, many years ago. Reading this post made me want to drop
everything and rewrite my existing site using Rails, which I did
because occasionally, I just can’t help myself.

When you’re first building your site, there is nothing as fast as Ruby on
Rails in terms of developer velocity. Django is probably the closest, and
I’m sure Python people will be jumping up and down right now, which
is fine — imagine I wrote “Django” up there.

Either way: it’s fun to see your site come together with tests written
for you. You don’t have to think about what goes where, and if you
understand the conventions, you’re flying!

Velocity is fun, isn’t it? For personal or side projects, yes. In fact, I’ll
add to that by saying fun and simple are critical if you’re a small team.
For bigger teams, more discipline is needed.

Your TTR Is Under Your Control

Tests and discipline, that’s what it takes. We’ll talk about writing
easily changeable code below, and good testing strategies
in the next chapter — both will help save you from building up
technical debt.

That and avoiding Hacker News as much as you can. A working
application that’s generating revenue is a wonderful thing. Rebuilding
it is not a wonderful thing unless it’s sick and ailing but, even then,
you will have to address what made the old app sick and ailing in the
first place, because it’s likely the problem is a process one, not a “the
code just got that way” one.

Let’s see what that means.

YOLO ARCHITECTURE: WHATEVER, WHO CARES


One of the reasons Rails (and Django, I suppose) are so popular is
that they remove a lot of thinking from building an application. Rails
uses generators that put the code in the places it’s supposed to go,
and you don’t need to think about it. It does this by creating several
directories with reasonable naming conventions, and then popping
Ruby files in them:

This is referred to as a “monolith”, or “Majestic Monolith” if
you’re DHH.

Honestly, if I had an idea for a startup (and I have many), this is the
framework I would go with. I can build a pretty solid MVP with this
thing, and get it out the door fast.

Rails is built with the following architectural patterns baked in and
ready to go:

Model, View, Controller (or MVC). This application
structure was popularized back in the 1980s and 90s as a way
to build out complex, distributed, stateful applications. It was
introduced by Trygve Reenskaug, who created it as a way to
represent a Norwegian shipping yard in a desktop application.
ActiveRecord. Each model uses the ActiveRecord pattern,
which means it represents a table in your database and each
instance of the model represents a row.
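To make the pattern concrete, here’s a tiny, framework-free sketch of the ActiveRecord idea in plain Ruby. The Product class and its columns are invented for illustration; a real Rails model inherits all of this (and far more) from ActiveRecord itself:

```ruby
# A toy version of the ActiveRecord pattern (NOT the real Rails library).
# The class stands in for a table; each instance stands in for a row.
class Product
  @@rows = [] # pretend this is the "products" table

  attr_accessor :id, :name, :price

  def initialize(name:, price:)
    @name = name
    @price = price
  end

  # Roughly "INSERT INTO products ...": persist this instance as a row
  def save
    @id = @@rows.size + 1
    @@rows << self
    self
  end

  # Roughly "SELECT * FROM products WHERE id = ?": a row comes back
  # as an instance of the class
  def self.find(id)
    @@rows.find { |row| row.id == id }
  end
end

product = Product.new(name: "Video Course", price: 49).save
Product.find(product.id).price # => 49
```

The real thing derives the column accessors from your database schema and writes the SQL for you, but this table-to-class, row-to-instance mapping is the heart of it.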

That’s the core of it, really. The goal here is to get to market fast, fail
early, and I support it completely. Unfortunately, it’s easy to start
piling up technical debt.

The old “Rails doesn’t scale” arguments have little to no merit
anymore, since you can plug in caching and break the app into smaller
pieces if you like. That’s a good problem to have, by the way, and you
probably wouldn’t have it unless Rails got you there.

ActiveRecord is also a concern because you have to consider just how
much you want it to do for you. Relational database tables don’t map
cleanly to classes in object-oriented programming, and this
becomes extremely apparent at scale. You can work around this but, as
I mentioned above, workarounds cause even more technical debt.

When working with Rails, you eventually come to the idea of a service
class, something that wraps other classes into some kind of process.
My favorite example of this is the Shopping class. Let’s discuss.

TTR with YOLO

Consider an e-commerce application. Most people think of Product,
Cart, Customer, and Order classes right off the top of their head. Let’s
go with that for a minute.

Our use case is that a Customer comes to our site and wants to buy a
Product. We might think something like “well, let’s have the
Customer add a Product to a Cart using a Cart.addItem method”, and
that’s a pretty typical approach.

You then add several methods to the Cart class, including
removeItem, checkForDuplicates (because you want to increment
quantity, not keep adding Product items), subTotal for calculating
things, and so on.
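In Ruby, that fat-model Cart might look something like this sketch (the method names mirror the ones above, converted to snake_case, and the item structure is my own invention):

```ruby
# A "fat model" Cart: data concerns and business logic all live in one
# class. In a real Rails app this would inherit from ApplicationRecord,
# which is exactly the problem.
class Cart
  attr_reader :items

  def initialize
    @items = [] # each item: { product:, price:, quantity: }
  end

  def add_item(product, price)
    if (existing = check_for_duplicates(product))
      existing[:quantity] += 1 # increment instead of adding a new line
    else
      @items << { product: product, price: price, quantity: 1 }
    end
  end

  def remove_item(product)
    @items.reject! { |item| item[:product] == product }
  end

  def check_for_duplicates(product)
    @items.find { |item| item[:product] == product }
  end

  def sub_total
    @items.sum { |item| item[:price] * item[:quantity] }
  end
end
```

Adding the same product twice bumps the quantity rather than duplicating the line item, and sub_total rolls it all up. Handy, but notice how every new business rule lands right here.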

This works, but soon you feel that you might be getting lost a bit
because you’re adding a bunch of “business” logic into your model,
which is there to represent a database table.

And you would be right. This is called a “fat model” in the Rails
world, and it can work, but your time to rewrite will be weeks, maybe
a month or two, as you simply won’t be able to remember what went
where.

A better choice here is to have a Shopping class that represents a
service or a process. A Customer goes Shopping, which kind of
makes sense, and you can add all the methods you like to your
Shopping class because that literally represents your business logic.
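Here’s a minimal sketch of that idea. Everything beyond the Shopping name is my own invention, including the tax rule, which is only there to show business logic living in the service rather than the model:

```ruby
# Cart stays a dumb data holder...
Cart = Struct.new(:items) do
  def sub_total
    items.sum { |item| item[:price] * item[:quantity] }
  end
end

# ...while the Shopping service owns the process and the rules.
class Shopping
  TAX_RATE = 0.1 # a made-up business rule, for illustration only

  def initialize(customer_name)
    @customer_name = customer_name
    @cart = Cart.new([])
  end

  def add_item(product, price, quantity: 1)
    @cart.items << { product: product, price: price, quantity: quantity }
  end

  # Business logic goes here, not in Cart or Customer
  def total_due
    (@cart.sub_total * (1 + TAX_RATE)).round(2)
  end
end

trip = Shopping.new("Rob")
trip.add_item("Video Course", 49)
trip.total_due # => 53.9
```

When marketing campaigns and coupons show up later, they become methods on Shopping (or a sibling service), and the models stay thin.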

This setup can last you about a year, until your business needs to
expand into marketing campaigns, “Buy Now” deals, coupons and
other marketplaces that resell your stuff for you.

All of that said, I have friends who are working on Rails applications
that are 10+ years old. They do a rewrite every other year or so, and
it’s mostly a migration to the newest version of Rails.

All in all, I would say you have a year, possibly two, before you redo
your Rails (or other ‘batteries included’ framework) application.
That’s honestly not bad!

THE MONO REPO


I remember hearing the term “Monorepo” and being a bit vexed. A …
single… repo? Is it related to a monolith?

It turns out that no, a monorepo is basically a single directory that
contains your project “stuff ”. If you’re a .NET person, it’s like a
solution, but each directory can be whatever you need:

Here I have my Rails app, but I also have a Vue.js app that I use within
the Rails app, and some shared libraries that might include Ruby gems
and client libraries. Playwright (the frontend testing framework) can
stand on its own as well, so let’s pop it right here.

For reporting, I use Metabase, which is an open-source reporting tool
that runs in a Docker container, and it lets me build ad-hoc reports
that are pretty damned powerful.

Finally, I like to work with databases on their own, outside whatever
framework I’m using. I don’t use Rails migrations because I’m very
particular about how I database — so I pop my SQL stuff in there
along with some Node tests to make sure my database behaves as I
want it to.

All of this stuff is in a single repository, too, and it represents the state
of my project at any given time. This makes sense to me because all of
these projects depend on one another, so they should also share a
version history.

As you can see, this type of setup is very team-friendly and allows
each team member to work in an isolated environment, free of the
context of a “main” application, like my YOLO Rails app.

It also invites some interesting ideas regarding splitting your monolith
into… littler monoliths:

Here, I’ve split my main Rails application into 3 smaller ones, each
performing a service that needs performing. They can all work off the
same database (which I can change later, if I want), but I might want
to use a static site generator for the catalog since it doesn’t change all
that often. I like Rails for invoicing and fulfillment, but I might want
to go with something else for the administrative stuff, like Retool or
JetAdmin.

Shoot, I could even use Django, just for its built-in admin tool!

This structure offers a lot of flexibility, right from the start, and allows
you to plug in what you need, and remove what you don’t. The
downside is that it allows you to tinker.

One of the great things about Rails (and other opinionated
frameworks) is they remove a lot of you from the equation.

Programmers complicate things because (speaking for myself here) we
tend to overthink them:

Do I need to split my main Rails app into 3 apps?
Can the admin stuff just stay protected in the main Rails app
with a cool template and layout?

There might be good reasons for these things, which include:

You’ve hired a few more people and instead of having them
work in the same codebase, trampling each other, they can
own their own space.
You tend to update one part of your application more than
others. For me, that would be the catalog, and having a more
CMS-y application (Ghost, WordPress, Jekyll, or Hugo) is
much better than rolling my own thing with Rails.

TTR: Monorepo

Will you rewrite things? Yes, of course! Instead of one big rewrite,
however, we might have 2 smaller ones now. This can be more
manageable with a team, of course, but you will still do it!

Same timeframe here: within a year. This time, however, there will
probably be smaller ones at a shorter cadence.

MICROSERVICES
As the name implies: microservices are individual applications that
operate independently of one another. This approach is applicable for
distributed, highly scalable applications and the complexity that goes
into it is everything you’ve heard.

I talked about my good friend (and old boss) Chad Fowler in a
previous chapter. He was the CTO of Wunderlist back in the
mid-2010s. He was brought in to build out their API and to address
some performance issues with the Ruby-based startup, and used
microservices to do it.

In short: he broke everything apart and scaled the things that needed
scaling. He insisted that each “cell” (his term for each service) be as
small as possible, preferably a single file that could be read without
scrolling.

It worked incredibly well. Wunderlist scaled handily to meet demand
and was then bought by Microsoft… which… ummm…

In short, Wunderlist has become so hard to maintain that
Microsoft has decided to abandon the app and migrate users to
another Microsoft-owned task management application.

That’s the problem with Microservices, in summary. Only you and your
team will understand what’s going on. Maybe. Let’s see, shall we?

Heading out on the internets to see if I can find some “reference
microservice architectures”, if you will. Here’s one from Google
Cloud, the “Online Boutique”:

Online Boutique is a cloud-first microservices demo application.
Online Boutique consists of an 11-tier microservices application.
The application is a web-based e-commerce app where users can
browse items, add them to the cart, and purchase them.

Google uses this application to demonstrate the use of
technologies like Kubernetes, GKE, Istio, Stackdriver, and gRPC.
This application works on any Kubernetes cluster, like Google
Kubernetes Engine (GKE). It’s easy to deploy with little to no
configuration.

This is a Kubernetes application, and since we know all about
Kubernetes now, we can take a look at the running pods (from the
README of the project) to see what services we have:

I think this is a fun read and, for the most part, it’s approachable. The
hard part, for me, is that demos like this either lean way too far into
architectural wonderland or they tell a fun fairy tale. This is the latter.

Rather than pile on and be mean to an open-source project like this
(I’m sure the people who made it had good intentions), I’m going to
stay positive.

Choosing Your Services

Want to start a fight between developers? Ask them how to
implement a microservice design! Seriously… it works every time.

I like the e-commerce example above. They made choices as to what
goes where, but don’t explain how or why they did so. Did they:

Profile load on an existing application and decide they needed
to isolate one aspect from another?
Realize that one language/platform was better than another
for a given task? C#, for instance, runs on one of the fastest
runtimes in the world. Is this why it’s the cart service, backed
by Redis? (I would back that choice.)

The email service using Python is a curious choice to me. You
can embrace asynchronous programming here because emails
don’t necessarily need to be sent immediately.

There are many considerations that go into choosing what your
services are and, of course, it’s easy to get carried away. You’ll be
accused of that no matter what, so I suppose be sure of your
decisions!

Speaking of, here are some common considerations:

How much does the thing you’re modeling change?
How “volatile” is it? Meaning: will it crash? Can it crash? I’ve
discussed Erlang previously; it approaches small “services”
with the assumption that they will die, and that you just need
to restart them!
Will the thing need to scale differently than other things in
your application? The shopping cart might be one of these, as
well as the recommendation engine. Email is another, if you’re
using it to send out batch emails to all your customers for
marketing purposes (and also to handle their unsubscribe
requests).
Does it have to work with external services? The tax
calculator and the currency converter services would likely
need to ping an external resource for just-in-time data to run
their calculations. Wrapping these in a service could be a good
thing.

Having read the Kubernetes section, hopefully you can see how
microservices and K8s go hand in hand. Microservices are all about
scaling out instead of up (getting a bigger machine), and the patterns
that emerge follow functional programming pretty closely.

Note: if you’re a bit hazy on functional programming, I made a fun video for you.

The biggest question that needs answering, however, is this: do you
need this architecture?

If I was designing Stack Overflow today, I might consider it. Or
designing an API for a popular to-do list application like my friend
Chad, yeah, I might consider it right from the start. Otherwise: no way.

The one thing that will destroy your project or your startup is over-
engineering from the start. We’ll go into this more when we discuss
stress testing and benchmarking in the next chapter, but having a
solid understanding of where your application could fail and having a
plan for that is far, far better than spending the extra days, weeks, and
months orchestrating a site that could easily have been a monolith.

The e-commerce site, above, should be a Rails (or Django) application.
Shopify is a Rails monolith that uses Kubernetes, by the way:

All systems at Shopify have to be designed with the scale in mind.
At the same time, it still feels like you're working on a classic Rails
app. The amount of engineering efforts put into this is incredible.
For a developer writing a database migration, it looks just like it
would for any other Rails app, but under the hood that migration
would be asynchronously applied to 100+ database shards with
zero downtime. This story is similar for any other aspect of our
infrastructure, from CI and tests to deploys.

In Production Engineering, we've put a lot of efforts to migrate
our infrastructure to Kubernetes. Some approaches and design
decisions had to be evaluated as they were not ready for cloud
environments. At the same time, many of those investments into
Kubernetes have already started to pay off. What took me days of
writing Chef cookbooks before, now is a matter of a couple of
changes in Kubernetes' YAML. I expect that our Kubernetes
foundation will mature, and unlock us even more possibilities to
scale.

This blows my mind. Could they have a “more scalable” architecture
if they went with microservices? Maybe. Maybe not! You could argue
that they might save some money using fewer resources, but then they
would probably spend more money on DevOps and people to keep
that infrastructure alive.

The ultimate issue, however, is not money or people.

Handling Change with Microservices

This is where understanding functional programming ideas is useful.
Let’s go back to the online boutique example, above. Here is their
architectural diagram… do you notice anything confusing?

The first thing I notice is that these services are not independent at all.
The checkout service, for instance, depends on productcatalog,
shipping, currency, cart, payment, and email. The frontend service
(which is written in Go) also depends on shipping, currency, cart
and productcatalog.

Pretend that you now own this application and the marketing team
comes to you and says:

We would like to know more about how people are using our site,
and want to track cart behavior. How hard is it to add event
tracking?

I don’t know about you, but my first instinct would be to jump right
into the cart service to implement this change. Here’s what we have
to work with (it’s C# if you’re wondering):

There’s no way, apparently, to remove an item from the cart.
Moreover, there’s no context passed here. Did they empty the cart
because they were frustrated, did the session time out because they
left, or did they buy something?

Our next task is to figure out that context. What other services are
calling these endpoints… and how? Are they event-driven, or
synchronous?

At this point, I would probably throw my hands in the air and say
“let’s create a logging service that we can build behavioral stuff on top
of ” which sounds like it should have been there all along, right? I
think there’s a reason it’s not there, and I can illustrate it using a little
AI fun:

I’ve been using (and abusing) Microsoft Designer, and it’s pretty
damned fun! I like thinking of services as robots, I don’t know why.

Architectural spaghetti. That’s what we’ve created here, and our
ability to change something has become the stuff of nightmares.

So what should we do? Well, this is where complexity is about to go
through the roof, but if you study up and understand it, it could work
extremely well for you.

Or your entire team will quit and go get jobs doing Rails.

EVENTED ARCHITECTURE
This is where our architectural approach is going to depend on our
infrastructure. The two simply go hand in hand! To see what I mean,
let’s recap how we got here.

Microservices are supposed to be independent services that do a thing.
This can and often does resolve to a “container per service”, which
makes perfect sense. Containers are fast and easy to scale — so let’s
go with that idea for now.

To be clear: you don’t need to be working in microservices to use
events, but people can and often do split things into different service
levels which you can think of as microservices, I suppose.

One thing that event-driven architecture can do for you is remove the
ridiculous infrastructure spaghetti that we created above. To see what
I mean, let’s see how our checkout service might interact with the
cart service using events.

The Checkout Example

Services don’t talk to each other directly when you’re using events, as
we are now. You use a broker to send messages back and forth, and
that broker can be set up with routing rules that are actually quite
amazing. We’ll discuss those later on.

I only have experience with one broker, RabbitMQ, and I was able to
orchestrate some pretty intense message queues with it. And, for full
disclosure, I did this for fun over a weekend, not for production use.

The simplest thing we can do for our checkout is to create what’s
called a Remote Procedure Call (RPC) if we’re using RabbitMQ. Other
queues and cloud services do roughly the same thing, with the idea
that we’re doing an asynchronous request and response. I could do the
same thing over HTTP if I set my services up as APIs, but as you’ll
see, RabbitMQ does it better.

When working with a message broker, you need to embrace a few
terms:

You send a message to a channel within a queue.
That message is consumed by a consumer.

Our cart service might send a ready_to_checkout message to the
broker, which would route that message to the consuming checkout
service. It would do its work and, when everything is transacted
properly and stored, would send a checkout_success or checkout_fail
message back to the cart.

This is the RPC pattern in its most basic form, and the only way that
it works properly is if the messages sent back and forth adhere to a
given structure. This is referred to as a “service” or “messaging”
contract and can be as simple, or complex, as you like. Using a tool
like RabbitMQ can help with this, as it will prevent random messages
being sent into a channel.
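To make the round trip concrete, here’s a toy, in-memory broker in Ruby. This is only the shape of the pattern; a real broker like RabbitMQ adds routing, durability, acknowledgments, and delivery guarantees, and the channel and field names below are just the ones from the example above:

```ruby
# A toy broker: consumers subscribe to a named channel, and publishing
# a message hands it to every subscribed consumer. No network, no
# persistence; just the RPC message flow.
class Broker
  def initialize
    @consumers = Hash.new { |hash, key| hash[key] = [] }
  end

  def subscribe(channel, &handler)
    @consumers[channel] << handler
  end

  def publish(channel, message)
    @consumers[channel].each { |handler| handler.call(message) }
  end
end

broker = Broker.new
replies = []

# The checkout service consumes ready_to_checkout messages and replies
# on whatever channel the message names in :reply_to.
broker.subscribe("ready_to_checkout") do |message|
  status = message[:items].any? ? "checkout_success" : "checkout_fail"
  broker.publish(message[:reply_to],
                 { order_id: message[:order_id], status: status })
end

# The cart service listens for the reply...
broker.subscribe("cart_replies") { |message| replies << message }

# ...and kicks off the round trip.
broker.publish("ready_to_checkout",
               { order_id: 42, items: ["Video Course"], reply_to: "cart_replies" })

replies.first # => { order_id: 42, status: "checkout_success" }
```

The :reply_to field is the heart of the RPC pattern (RabbitMQ pairs it with a correlation ID so replies can be matched to requests), and the fixed message shape is the “contract” mentioned above.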

Let’s see another example.

Firestore Events

I’ve been using Firebase for many, many years and I love it. It does
take some getting used to, especially if you lean into their event
system.

I used to have a checkout process on my site that was 100%
Firebase-driven, meaning that you could watch the transaction go in
realtime!
THE IMPOSTER’S ROADMAP 479

I showed how I did this in my video course Going Serverless with Firebase, which is now deprecated, but I thought I would share it with you here because it’s tons of fun.

If you don’t know, Firebase is a “backend in a box”, meaning you get a realtime database, authentication, serverless functions, and more. You access the database directly from the client and there are rules in place, so the client can’t do evil things.

You can get data snapshots (one and done), or you can set up a
listener for a particular document, which may or may not exist. That’s
what I did for my checkout!

Here’s how it worked:

You would choose the thing you want to buy and head to the
checkout page, where you would pop in your credit card using
Stripe.
When Stripe returned with a successful response, I would set
up a listener for the orders collection, using a GUID that I
passed to Stripe on the initial call. This was my order number.
I set up Stripe to send out a webhook on the
payment.received event, which would ping my first Firebase
480 ROB CONERY

function: stripe_receiver. This would send along the payment


information, including the order number, which would be
stored in the orders collection using the order number as the
key and a default status of status.received: true and every
other status flag set to false.
On the client, I would “listen” for changes to that document
and whenever the status key of the document changed, I
would update the UI.
My second function, create_invoice, would react to a created
event in the orders collection. Whenever an order was
created, an invoice would also be created and stored in the
invoices collection. When this process was done, the order’s
status.invoiced flag would be set to true.
The third function, fulfill_order, would also go off on the
created event for orders. This function would do the needful
in terms of authorizing downloads and course access, as well
as creating a record in the fulfillments collection. When it
was done, it would update the status.fulfilled flag and set it
to true.
Once the order was fulfilled, my email_customer function
would go off, as it was set to fire on created for documents in
the fulfillments table. This would send the user their invoice
information as well as the download links and course access.
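The domino chain above can be sketched as a toy Python version, using a dictionary as a stand-in for Firestore and plain functions as stand-ins for triggered Cloud Functions. This is my own sketch, not Firebase’s API; note that fulfill_order is wired to fire on invoice creation rather than order creation, for reasons that show up in a moment.

```python
from collections import defaultdict

# Hypothetical in-memory stand-in for Firestore: collections of documents,
# plus functions that fire when a document is created (like Firestore triggers).
db = defaultdict(dict)
on_created = defaultdict(list)

def trigger(collection):
    def register(fn):
        on_created[collection].append(fn)
        return fn
    return register

def create(collection, key, doc):
    db[collection][key] = doc
    for fn in on_created[collection]:
        fn(key, doc)

@trigger("orders")
def create_invoice(order_id, order):
    create("invoices", order_id, {"order_id": order_id, "amount": order["amount"]})
    db["orders"][order_id]["status"]["invoiced"] = True

# Chained trigger: fulfill on *invoice* creation, so an invoice is
# guaranteed to exist before fulfillment runs.
@trigger("invoices")
def fulfill_order(order_id, invoice):
    create("fulfillments", order_id, {"order_id": order_id})
    db["orders"][order_id]["status"]["fulfilled"] = True

@trigger("fulfillments")
def email_customer(order_id, fulfillment):
    db["orders"][order_id]["status"]["emailed"] = True

# The stripe_receiver equivalent: the webhook creates the order document,
# which tips the first domino.
create("orders", "ord-1", {"amount": 30.0,
                           "status": {"received": True, "invoiced": False,
                                      "fulfilled": False, "emailed": False}})
```

A single create on orders cascades through all three functions, flipping each status flag as it goes.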

I tell ya, there’s nothing like watching these dominoes fall as an order
is processed. Your customer knows exactly what’s going on as well,
which is also a bonus.

Two actually only problem one there’s: race conditions (that sentence is
a joke, by the way). That and unintended triggers.

When I first set this up, emails would show up with empty invoice numbers, and I couldn’t figure out why. Can you see why? When Firebase queues these function calls, there’s no guarantee of what will happen when, so in some cases a fulfillment would happen before an invoice was created, which is called a “race condition”: two processes executing concurrently, creating chaos.

I got around this by resetting the trigger for fulfill_order to the created event on invoices, which I suppose makes more sense — but wow, it took me a while to figure out what was going on!

That’s the downside of doing “event stuff”: debugging. We’ve gotten rid
of the spaghetti, but we’ve introduced some fantastic misdirection and
concealment.

I believe this is an image taken inside my Firebase database…

Testing Your Event Handlers

As we’re going to see in the next chapter, the idea behind writing tests
for your application is that if you write enough of them, you can sleep better
at night. I believe that. I wrote a lot of tests for my checkout routine,
making sure each handler did as little as possible, and called out to
service classes that I could isolate and test independently.

Tests happen synchronously, however, and if we rely on them entirely, we’re omitting a massive part of the entire application.

When Events Bite Back

Everything was great with my application until I deployed it, and then
that race condition hit, and I was completely vexed for a few hours.
That kind of thing happens a lot: you’ve spent hours setting your stack
of dominos only to realize, too late, that your design won’t work once
you’ve tipped the first piece:

This isn’t the first time the Firebase scheduler got the best of me. I
had a function that synchronized customer information with Shopify
once, and it turns out that executing thousands of function calls in the
background will slow down your entire service (the initial run had to
synchronize my historical data too).

Orders were coming in at the same time, but weren’t being authorized. Emails weren’t being sent either, and one person had to wait for 4 hours before receiving instructions on how to download her stuff. I didn’t want to intervene, either, as I knew that somewhere in
the background deep in the guts of Firebase, a process was queued up,
and I didn’t want to trample it and cause several other events to get
queued up… yikes.

The novelty of this approach quickly wore off. I found it mentally challenging to keep the process in my head, and would so often find
myself muttering, “WTF is happening…?”

That’s not something you want to say when you’re the lead!

An Easier Team Sell

Working with an evented system can actually be much, much easier in terms of team management. Like a monorepo, you can have a team
that “owns” a certain aspect of your application, and all they need to
do is worry about the message that’s being sent to them, and then
respond with messages of their own.

Setting up and maintaining your message queue can also be a dedicated effort. The team that handles that doesn’t need to know how
or why a certain message is sent — they just need to set up the routing
rules.

TTR with Events

If you have a capable team that’s onboard with this setup, an event-
based system can last for years… unless there’s major trouble that
results because of the use of events. Like my scheduler issue, or
having to solve one-too-many race conditions.

It takes discipline and a bit of expertise to avoid these problems, but it is possible. I think there’s always a fear of “poking the beast” when
you work with a system like this, because it seems like there’s magic
happening in the background, constantly.

I have an Apple HomeKit thing at my house that plays music for me when I come home. 9 times out of 10 it works fine. The 10th time it won’t do anything and I have no idea why. Somewhere inside of it, something goes wrong, and so everything else dies as a result.

I feel sorry for the Apple engineers who have to solve those problems.
I think it just happens with the complexity curve. More services, more
events, more event wiring… and boom, no music for you when you
come home!

THE ACTOR MODEL


The natural progression of an Evented Architecture is the Actor Model,
which is a generic no-name that is about as vague and amorphous as it
sounds. The idea, thankfully, is simple: your application is made up of
smaller applications which, themselves, are made up of smaller
processes that do all kinds of things, including managing other
processes.

I’m getting pretty good at my prompting, I think! OK, I’ll stop now.

The Actor model is all about concurrent programming. All these processes are built to do a thing, and all the things that happen in
your application can and typically do happen at the same time. This
means that you’ll typically find functional languages powering these
applications, but that is not at all a hard and fast rule. It’s just a
nice fit.

Why? Here comes a tangent, feel free to skip ahead if you’re a functional person.

Functional Programming Approach

One of the core aspects of functional programming is purity, which I’ve discussed before, but here it is again! Code purity is the idea that a
given function can do its thing without external influence, which
means it needs to be given everything it needs to execute as an
argument; otherwise it will fail.

No global variables, no configuration settings that will magically be read. No environment variables that could stop the process and, if
we’re being strict about it, no database stuff! Any outside influence is
called a “side effect”, and, correspondingly, your function should never
change anything outside itself! It can only return a value or die trying.

Change is carefully managed in a functional application and, in fact, is frowned upon! If you give a function a bit of data, a new bit of data should be returned; never alter the existing data, because doing so could cause problems with other functions. This is called immutability, and it can take some getting used to.

Why does code purity and immutability go so well together with The
Actor Model? Because it helps you do concurrent things without
worrying about destabilizing your system. Race conditions don’t
matter (ideally), because a running process never relies on another
running process.

At least that’s the idea.
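In Python terms, purity and immutability look something like this. It’s a minimal sketch (real functional languages enforce these rules; Python only lets you choose them), and the cart/item shapes are invented:

```python
# A pure function: everything it needs arrives as arguments, and it returns
# a *new* cart rather than mutating the one it was given (immutability).
def add_item(cart, item):
    return cart + [item]  # builds a new list; `cart` is untouched

# An impure version, for contrast — the kind of side effect FP discourages.
def add_item_impure(cart, item):
    cart.append(item)  # mutates the caller's data in place

original = [{"sku": "book"}]
updated = add_item(original, {"sku": "pen"})  # original still has one item

mutable = [{"sku": "book"}]
add_item_impure(mutable, {"sku": "pen"})  # mutable now has two items
```

Because add_item never touches shared state, two concurrent callers can never trip over each other — which is exactly why this style pairs so well with actors.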



In reality, it’s difficult to pull off, but with languages like Erlang, Elixir,
F#, and Haskell, it becomes a bit easier given the way the language
compilers keep you honest.

Example: Erlang’s BEAM

The Erlang runtime, called the BEAM, is considered one of the most
bullet-proof and resilient runtimes on the planet. This has to do with
the way people write code with it using The Actor Model and the
framework for writing Erlang applications called OTP.

Every bit of code you write in Erlang runs inside a process, which is
managed by the BEAM. Processes can spawn other processes, kill
them, and resurrect them as needed. This leads to an “organic” design
process, where you create the cells of a thing and naturally expect
them to die and then regrow anew. I know, sounds super theoretical,
but let’s see an example.

Using the e-commerce example once again, we can imagine a user visiting the store, creating a session process. That process might spawn some child processes, which it owns:

A shopping process that has methods like add_item, remove_item, browse, etc.

A shopping_state process that stores state information for the session, such as the cart items for the methods above.

A shopping_supervisor process that ensures our shopping process stays alive. To do that, it might store a session_id GUID so that, if anything dies, it can be brought back to life with the same state.

There would be more processes, obviously, but I think this setup captures what The Actor Model is like. For everything to work,
however, a process hierarchy needs to be understood, and we also
have to give in to the necessary side effect, which is storing data
somewhere.

For us, the hierarchy goes something like this:

The session process spawns (and therefore “owns”) the shopping_supervisor and shopping_state processes.
The shopping_supervisor spawns the shopping process, and
keeps track of its running state, restarting it if/when it dies,
using the state data from shopping_state.

The session would probably also have a supervisor process too. All
the data would likely get stored somewhere like etcd using built-in
Erlang storage libraries, just in case the entire system went down, and
the processes needed to come back up with their state data intact.
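Here’s a very loose Python sketch of that hierarchy. It’s nothing like real OTP (no concurrency, no BEAM) — just the shape of the idea: a supervisor that restarts its worker, rehydrating state from a separate store keyed by session_id. All class names are invented.

```python
# Stands in for the shopping_state process (or etcd): state lives outside
# the worker, so the worker is free to die.
shopping_state = {}

class ShoppingProcess:
    """The worker: on startup it rehydrates its cart from stored state."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.cart = shopping_state.get(session_id, [])

    def add_item(self, item):
        self.cart.append(item)
        shopping_state[self.session_id] = self.cart  # persist every change

class ShoppingSupervisor:
    """Owns the worker and brings it back to life when it dies."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.worker = ShoppingProcess(session_id)

    def restart(self):
        # "Let it crash": throw the dead worker away, build a fresh one.
        self.worker = ShoppingProcess(self.session_id)

sup = ShoppingSupervisor("session-42")
sup.worker.add_item("book")
sup.restart()  # the worker "died"; the new one rehydrates from stored state
```

After the restart, the fresh worker picks up right where the old one left off — which is the whole trick behind a supervision tree.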

It’s really difficult to crash Erlang, especially when it’s run in a distributed environment. If one node (Erlang process) goes offline, the
other ones will jump in to carry its workload. You can typically keep
Erlang up and running happily for years, which led the late Joe
Armstrong, one of the creators of Erlang, to offer his take on
databases:

OO developers care way too much about databases

Note: I believe he said this on the Elixir Fountain podcast and, try as I might, I
can’t find the episode nor the transcript! I hope I got close there.

Joe liked to speak his mind. I hung out with him and his family on two
different occasions at NDC London. Kind, funny, incredibly smart, and
a prime example of someone whose written word cuts sharply, but in
person is caring and lovely.

Joe’s point about databases is, “why care, if you build a system that
embraces failure?” You don’t catch errors in Erlang, you let them
happen and move on. This isn’t a careless approach; it’s quite the opposite. You build systems that can rebound from failure successfully.

If this sounds like Kubernetes on a language level, it is. Kubernetes is the infrastructure version of the BEAM, which uses Erlang concepts but with Docker containers instead of processes.

TTR with Actor Model

If you decide to go with a distributed system, you’re going to stick with that system unless you throw it out completely, top to bottom. That means your code, your infrastructure and probably your entire team too.

Of all the patterns discussed, I think this one has the most longevity,
and it’s not due to “sunk costs” (meaning: you’ve put so much effort
into your application that you can’t give up on it). Distributed systems
require dedication and a clear choice made when starting out. It would be like replacing the family car for you, your partner, and 2 kids with a motorcycle. It could work just fine, but your context would
have to be completely replaced.

Unless everyone else was cool taking the bus. If you live in a city… it
could actually work out.

Yes, You Should Try It Out

Erlang is a bit creaky as a language goes, with syntax rules that take
some getting used to. I have a few friends who are quite skilled at
Erlang and don’t mind it one bit.

If you care about syntax and expressiveness, Elixir might be worth playing with. It compiles to a BEAM executable and is inspired heavily by Ruby.

Finally: as you can probably tell, I have some experience with Elixir
and the BEAM, and I do love it and wish I could work with it more.
We’re talking about The Actor Model, however, and I want to stay on target and address the questions you might have, such as “should I do
this?”

If you’re building a system that can work concurrently, and you have
the skill on your team to build something like that, then I think it’s a
solid choice. Systems are scaling outward more and more thanks to
Docker and Kubernetes, and if you build something that embraces that
idea from day one, it’s great!

Otherwise, functional approaches can confuse a team of Object-oriented (OO) coders. It is a fun weekend experiment, however, and you’ll learn a lot!

WHAT ABOUT…
I can’t cover every approach in detail, and I know I’m going to leave a
few out. I have tried to detail the more common approaches, but there
are, of course, many others. Some of these aren’t full application
architectural approaches; some could be considered strategies or
techniques.

Either way, let’s explore a bit.

Command Query Responsibility Segregation

This is more of an approach to working with data, but it also can directly affect your modeling effort if you’re doing OO. The idea is
straightforward: inserting, updating, or deleting data in your database
has a dedicated command model. It could be one model for changing
things, or one for each operation.

A command can also span tables and encapsulate a single transaction. This, to me, is where CQRS shines. A FulfillOrder model might take
in a Checkout as an argument, and then write to 4 or 5 different
tables in a single transaction. A data access tool, such as an ORM, can
struggle with this idea and cause you to write a bit of extra code to get
it to do what you want.

Reading data, however, has its own model that, once again, is
dedicated to the need. You might do a rollup query on sales, need a
single product with associated vendors, tags, and technical
specifications, or just a list of customers. Each of these concerns gets
its own model.

CQRS can use a data access tool to work with the data, or, what I’ve
actually seen a lot of, is people writing the exact SQL they need in
each model. I’ll sidestep that argument and say that I like CQRS, but
I’m a data person, so I’m a bit biased. I don’t care much for ORMs.
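A minimal sketch of the split, using Python and SQLite: a FulfillOrder command that writes to two tables in one transaction, and a read model that owns exactly the SQL it needs. The table names and the shape of the checkout dictionary are invented for illustration.

```python
import sqlite3

# A throwaway schema for the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE fulfillments (order_id INTEGER, status TEXT);
""")

class FulfillOrder:
    """Command model: spans two tables inside a single transaction."""
    def execute(self, checkout):
        with conn:  # commits on success, rolls back on any error
            cur = conn.execute("INSERT INTO orders (total) VALUES (?)",
                               (checkout["total"],))
            conn.execute(
                "INSERT INTO fulfillments (order_id, status) VALUES (?, ?)",
                (cur.lastrowid, "pending"))
            return cur.lastrowid

class OrdersWithStatus:
    """Query model: exactly the SQL this one screen needs, nothing more."""
    def fetch(self):
        sql = """SELECT o.id, o.total, f.status
                 FROM orders o JOIN fulfillments f ON f.order_id = o.id"""
        return conn.execute(sql).fetchall()

order_id = FulfillOrder().execute({"total": 30.0})
rows = OrdersWithStatus().fetch()
```

The command and the query never share a model class, which is the whole point: each side changes independently as the writes and the screens evolve.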

Event Sourcing

This is an interesting one that’s popular in the .NET and Java worlds. The idea is that you record changes to application state as a series of events, rather than isolated database transactions.

With the e-commerce shop, once again, a SessionState might be recorded every step of the way (state change) until a purchase is made
or the session ends. You might end up with 10 or so state changes,
until you arrive at the tail, which is the current state.

Each state change is triggered by an Event, which is the data you keep.
AddItemToCart would be an event, as would
RemoveItemFromCart, Checkout, ShipItems and so on.

The benefit to this is that you have an audit log, and you know
precisely what happened when. This is great for reporting, because
you (typically) have time-based data, which can help you understand
your user’s behavior.

It can also be confusing and take up a lot of space on your database hard drive with data that may or may not be valuable. In some
circumstances, understanding the chain of events that led to the
current state is critical. A hospital record of care, for example, or a diet
tracker. In other circumstances, however, a good log or Notes field
might be good enough.
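The mechanics boil down to a fold: the events are the data you keep, and the current state is just a reduce over them. Here’s a minimal sketch using the event names mentioned above (the state shape is my own invention):

```python
from functools import reduce

# Each event is applied to the previous state, producing a new state.
def apply(state, event):
    kind, payload = event
    if kind == "AddItemToCart":
        return {**state, "items": state["items"] + [payload]}
    if kind == "RemoveItemFromCart":
        return {**state, "items": [i for i in state["items"] if i != payload]}
    if kind == "Checkout":
        return {**state, "checked_out": True}
    return state  # unknown events are ignored

# The audit log: this is what actually gets stored.
events = [
    ("AddItemToCart", "book"),
    ("AddItemToCart", "pen"),
    ("RemoveItemFromCart", "pen"),
    ("Checkout", None),
]

initial = {"items": [], "checked_out": False}
current = reduce(apply, events, initial)  # the "tail" is the current state
```

Because the log is never mutated, you can replay it to any point in time — which is exactly the audit/reporting benefit described above.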

Service-Oriented Architecture

With SOA, you break your application into independent services that
do a thing. This aligns with the monorepo idea pretty well, and also
corresponds to The Actor Model and microservices discussed above.

SOA is an enterprise way of doing things and is dependent on messages being routed successfully between services using one or
more protocols and interfaces. One service doesn’t need to know
about another in any way, and one service can be an entire application
in and of itself, completely independent of other services.

You might be wondering: what’s the difference between microservices and SOA? The short answer is that microservices are:

Part of a single application. They’re still independent, but they don’t make sense on their own.
Small, typically just a single file.
Built for scaling.

Back in the early 2000s, Web Services gained popularity, and the idea was that you could build an entire application to be a single service that other applications could consume. In our e-commerce example, a ShippingService could be a SOA service. Its sole job is to calculate shipping rates and set up shipments to customers.

Web Services used a standard called WSDL, or “Web Services Description Language”, which was metadata that consumers could use to discover what methods a service exposed, what protocol they used (HTTPS, TCP/IP, or SOAP), and what arguments they expected.

Ah, the memories. I made quite a few Web Services back in the day,
and my cofounder was convinced that Web Services were going to
replace the web as we knew it. I can see why.

SOAP (Simple Object Access Protocol) made it elementary to do OO things using XML over HTTP. It was as if you did an npm install, but with a remote service. I think some of these things are still out there, but REST (Representational State Transfer) kind of trounced things with its simplicity.

And no, I won’t be talking about REST in this book. REST is the web, and “what it is” has become a rather heated argument; most modern programmers think of it as “sending JSON over HTTPS”, which is fine with me.

MVC

We discussed MVC, or “Model, View, Controller”, above and how Rails made it popular. It’s primarily designed for client-server (web, mobile).

MVC splits your application into 3 main parts:

The model, which represents a business thing, encapsulating properties and methods only for that thing.
The view, which is the visual representation of the model.
The controller, which displays model information to the user,
and takes in user input.
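Boiled down to Python, the three parts look something like this. Framework machinery (routing, templates) is omitted, and all the names are invented:

```python
# Model: the business thing, with its own properties and methods.
class Product:
    def __init__(self, name, price):
        self.name, self.price = name, price

# View: the visual representation of the model (a template, in practice).
def product_view(product):
    return f"{product.name}: ${product.price:.2f}"

# Controller: takes in user input and hands model data to the view.
class ProductController:
    def show(self, params):
        product = Product(params["name"], params["price"])
        return product_view(product)

page = ProductController().show({"name": "Book", "price": 30.0})
```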

MVC is an application pattern, not necessarily an architectural one. Architectural patterns affect your infrastructure and are often used to
help scale an application as well as make it easier to fix bugs and
change things in the future. Application patterns can also do that, but
are typically more about where code and files live on disk.

MVC doesn’t help an application scale by its very nature. In fact, you could even argue that it can prevent scaling and increase coupling because all the parts work together. This is left to the developer to solve; MVC, on its own, doesn’t enforce architectural ideas.

You can, of course, separate a larger Rails app into smaller ones, each
using the MVC pattern, and some people do this when their apps get
large enough. That puts us back into microservices land.

Let’s discuss two offshoots of MVC now.



MVVM and MVP

MVVM stands for “Model, View, ViewModel”, which is popular in frontend frameworks and web frameworks that use file-based routing.

With MVC frameworks like Rails and Django, you have to specify which HTTP endpoints are handled by which controller. In Rails, this is done in a routes.rb file, and if you follow their naming conventions, you can get away with very little code in that routes file.

But you still need to specify some kind of route, even if you’re using a
built-in helper.

In the Good Old Days, requests that came in were for literal files on
disk, so you might ask for /some/directory/doc.html and that file
would be returned to you. MVC frameworks made this more of an
“RPC” (Remote Procedure Call) kind of thing by using controllers.

With the shift to frontend frameworks over the years, and frameworks
like ASP.NET embracing file-based routing, the need for a Controller
went away. The need for arranging and orchestrating UI logic didn’t go
away, however, which is where ViewModels and Presenters come in.

ViewModels (and Presenters) do, essentially, what controllers do in Rails and Django:

As you can see, the CartViewModel gives the view the data it
requires in terms of the cart and supporting methods. You could, if
you wanted, shield the model entirely if your language supports
private properties.
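A small sketch of the idea in Python: the view binds only to what the ViewModel exposes, and the underlying cart model stays hidden behind it. This CartViewModel is my own minimal take, not a particular framework’s API:

```python
# The model: a plain cart the view should never touch directly.
class Cart:
    def __init__(self):
        self.items = []

class CartViewModel:
    def __init__(self, cart):
        self._cart = cart  # "private" by convention; the view never sees it

    def add_item(self, name, price):
        self._cart.items.append({"name": name, "price": price})

    # Read-only, display-ready properties for the view to bind to.
    @property
    def item_count(self):
        return len(self._cart.items)

    @property
    def display_total(self):
        return f"${sum(i['price'] for i in self._cart.items):.2f}"

vm = CartViewModel(Cart())
vm.add_item("book", 30.0)
```

Formatting lives in the ViewModel (display_total is already a string), so the view stays dumb — which is the point of the pattern.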

What’s the difference between ViewModels and Presenters? Good question. I’ll give you my opinion, but please know that what I’m about to say is highly subjective…

I think it depends on the language and framework you’re using. Microsoft’s ASP.NET has a bunch of things built in that do magical
bindings if you follow the MVVM pattern. Android embraces MVP as
do other frameworks, and you find a mix of the two when it comes to
frontend development.

If you’re wondering about frontend frameworks and how they fit in here, well… it’s complicated. Here’s what the Vue.js team has to say
on the matter:

In Vue.js, every Vue instance is a ViewModel. They are instantiated with the Vue constructor or its sub-classes.

This makes sense if you’re using Vue.js on a single page. It doesn’t make sense if you plug in Vue’s router and build out a single page
application. In that case, well, I have no idea. On one hand, you could
argue that your model lives on a server and there is some type of
controller handling requests.

On the other hand, you might be working with Firebase and your logic
is handled in your application using a Presenter or ViewModel
(usually something like Pinia, the state store). I guess. It’s confusing
and honestly, if I could offer any advice here… just go with it and
don’t ask questions.

Onion

If you’re doing work in .NET or Java, you might come across various
terms like “Hexagonal” or “Onion” architecture, and then you’ll
Google what they are and see many drawings of concentric circles,
discussion on “separation of concerns”, “layering”, “encapsulation”
and more. All the buzzwords can be confusing, but overall the concept
is familiar if you’re into operating systems.

Here’s an image I found on the ThoughtWorks blog, where enterprise guru Martin Fowler works:

Martin is a brilliant person, and if you’re looking up software design patterns, you’ll probably end up at his blog.

The core idea of Onion stems from Linux: a series of shells protecting
a kernel, or domain model. This is where your business rules live in
code form, and are protected from every other part of your application
by various service classes. From there, you have application services,
which handle user requests, and then, of course, your user interface.

You can make this as simple or as complex as you like, but I find it
oddly straightforward. I’m not an enterprise person, but I do love the
idea of a domain model (the classes you create to represent the
business you’re modeling) being hidden from everything else.
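A toy pass at the layering in Python: the domain model at the center knows nothing about what surrounds it, the application service wraps it, and the UI only ever sees the service’s response. All names are invented for illustration.

```python
# Domain model (the kernel): business rules only — no I/O, no framework.
class Order:
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(i["price"] for i in self.items)

# Application service (an outer shell): handles a user request, enforcing
# rules by delegating to the domain model it protects.
class OrderService:
    def place_order(self, items):
        order = Order(items)
        if order.total() <= 0:
            raise ValueError("order must have a positive total")
        return {"total": order.total(), "status": "placed"}

# UI layer: sees only the service's response, never the Order class itself.
response = OrderService().place_order([{"price": 30.0}, {"price": 12.0}])
```

Because the Order class never leaks outward, you can reshape the domain model without the UI ever noticing — the encapsulation the concentric circles are drawn to show.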

There are variations to this layering approach, of course, including Jimmy Bogard’s Vertical Slice and the good old Hexagonal
Architecture. Where you put your layers and how you access them is a
checkers game that enterprise people will keep playing throughout the
years, which, I think, is grand.

SUMMARY
Let’s end where we began: your choices here reflect where you work
as much as the application you’re working on. Your team setup, your
company policies, and the support from your manager and
stakeholders.

I know it sounds like I’m winding up with an “it depends”, and I hate
that! So I’ll end this chapter with this.

What Would Rob Do?

If I’m starting a project from scratch and I have complete control, I’m
using Ruby on Rails. The idea needs to take form quickly, and the
people who hired me will want to see progress right from the start.
After 1.0 is shipped, I’ll do some profiling and tweaking if there are
any scaling issues, and I’ll hold tight to “it’s not a problem until it’s a
problem”.

If I’m taking over a project that’s working well, I’ll get to know the
team and the codebase, make sure everyone is happy in what they’re
doing. I would then focus my time on ensuring they know what
they’re working on and where we’re going as a company. Finally, I
would get out of their way.

If I’m taking over a project that’s not working well in terms of team
satisfaction, I’ll do the same thing: get to know the codebase and the
problems, see what the common patterns are, and write it all down.
Any changes made need justification and buy-in from your
manager/stakeholder, and a rewrite is extremely costly! However,
sometimes they are justified and if one is, I’ll go with Ruby on Rails
every time.

If this same project is also suffering from scaling issues, I would do the same thing as above, but first look at using a monorepo with a
single database (PostgreSQL, possibly sharded or distributed if
required). Autonomy is key with teams, and feeling a sense of ownership too. It also makes rewrites simpler because they’re smaller. We would address the pain points and scale as we can.

If I ended up working at some large company and was asked to rebuild their commerce infrastructure, I would ask for a lot of money and hire
a strong team that understands messaging and event-based
architecture. I would also hire a few DevOps folks to spin up
Kubernetes, and then work with the team to figure out what services
make sense for the job at hand, and then I would get the hell out of
the way.

Your Focus: SHIPPING

In every case, you need to deliver, or you’re not being a good lead.
Whatever architecture you choose needs to lead to you and your team
shipping software; otherwise it’s useless.

In the next chapter, we’re going to dig into testing, which is one of the
primary ways you can get to know your application. A well-made test
suite can also act as documentation, helping new folks to the team get
up to speed quickly.
EIGHTEEN
TESTING STRATEGIES
UNIT, BEHAVIORAL, INTEGRATION, USER
ACCEPTANCE, AND EXPLORATORY

If you’ve been coding for a while now, you’ve hopefully been
writing tests to ensure your code is as correct as possible.
Notice the word correct here — it’s an intentional choice. You’re
not testing that your code works because the compiler or runtime will
tell you that pretty quickly.

Words are important, especially with testing, and it takes discipline to write a set of tests that make any kind of sense.

AUTOMATED TESTING?
I don’t know where you are in your career, so I would rather not make
any assumptions. If you’re a few years in, you know all about testing.
At least, I hope you do. If you’re just out of a bootcamp (welcome!),
then testing may or may not have been covered much. Hopefully,
it was.

When you write code, you want to know that it works, so you might
try it out in a script or by hitting “Run” in your IDE of choice. Maybe
the debugger? Either way — it’s manual and not a good way to test
your code.

A better way is to orchestrate things. Make sure the test conditions are set up just as you need for each test, so the environment is the
same each time. You then use a testing framework to help you execute
a test, evaluating the result. If things go well, the test passes and
people are happy.

Rather than you hitting “Run”, you tell the test framework to execute your test suite: dedicated bits of code that make sure things “work” the way you want them to. I put “work” in quotes because, as you’re going to find out, testing your code is a terribly difficult thing to get “right”. Bias, crap testing strategies, bad code — so many things can derail your testing efforts.
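This orchestration is what a test framework’s setup hooks are for. A minimal sketch using Python’s built-in unittest: setUp rebuilds the same conditions before every single test, and the framework, not you, runs the suite:

```python
import io
import unittest

class CartTest(unittest.TestCase):
    def setUp(self):
        # Fresh conditions before every test, so each run starts identically.
        self.cart = []

    def test_starts_empty(self):
        self.assertEqual(self.cart, [])

    def test_add_item(self):
        self.cart.append({"sku": "book"})
        self.assertEqual(len(self.cart), 1)

# The framework executes the suite and reports the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CartTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Every framework (pytest, RSpec, JUnit, and friends) has its own flavor of this same setup/run/evaluate loop.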

If you still don’t understand what I mean by “automated testing”, have a Google and come back once it makes sense. I know that it took me a
bit to understand the need for it, and it then took me even longer to
figure out how I could actually do it. As I’ve been saying with so many
chapters: this isn’t an in-depth guide on how to, so I will assume you know
about testing to some level.

Now, let’s discuss fun ways to torture your code and your application.

YOUR TESTS TELL YOUR STORY


Before we get into the technical bits, I want to underscore just how
much your tests say about you and your application. When senior
folks look at a codebase, they (typically) go straight for the tests, as
there’s always a story there.

For example, if there are:

None, very few, or sparse tests. You have a lot of confidence in your skills, perhaps too much. Maybe you never learned
how to test “properly”, or perhaps you find testing to be too
much friction.
Numerous tests that cover everything. You’re either automating your testing (Rails generators, for instance) or you’re using a code coverage tool (which analyzes the percent of your codebase that’s under test) and you want a high score. This is a cargo cult mentality, focused on process rather than results.
Expressive tests that are easy to follow and read like a story
during the run. This is, typically, the realm of behavior-driven
testing (BDD), which we’ll discuss below. Your tests focus on
the end user and describe how the application should react. I
like these tests, a lot, and it takes skill to get them as correct
and relevant as possible.

You can clearly read my bias here, and there’s a reason for that, which
I’ll explain below. That said, unit testing does have a place, but it
involves a bit more work.

Use Every Style

Your application should have all the test styles below, as each one has
a specific focus:

- Unit tests are fine-grained and help you isolate and debug problems.
- Behavioral tests are more generalized and focus on specifications, rather than implementation. These tests are user-focused.
- Integration testing is a full run-through of your application, using every service (or integration) without mocks of any kind. This is typically a scripted process using scenarios and stories.
- Acceptance testing is more rigorous and typically involves your users. This can include focus groups, scripted testing scenarios, alpha, and beta releases.
- Stress testing is, as it sounds, when you load your app to see where it breaks.
502 ROB CONERY

As a programmer, you’ve probably focused on only one of these: unit tests, which is great! It wasn’t that long ago that unit testing was the exception — not many people tested their code in an automated way and, instead, relied on a Quality Assurance (QA) team to do it for them.

As a lead, however, you’re going to need to think about all of these things. Here’s how I approach them, and it’s just me and my opinion — all of this is highly subjective. Here is what I find works well:

- I create unit tests for purely technical things. For instance, I might have some generalized data transformation (date handling, say) or a sequential GUID generator (like a snowflake ID) that fits unit testing perfectly. Anything I might reuse in the future, or that could go into a shareable package, gets a set of unit tests.
- I create behavioral tests for anything business-y. I sometimes ask myself “will user input touch this?” or, more aptly, “will this generate any database records?” If so, I write behavior tests for it.
- I always create integration tests. I will never trust that an ORM, email service, logger, or whatever will do what it says it will do. Docs have a way of going out of date quickly, and occasionally, I’ll write the integration tests first just to see if a given external thing is what I want.
- Acceptance tests are challenging to do, but are so, so necessary. I wait until the end of the process and create a suite that I can run whenever things change.
- Stress testing is something I’m not good at, if I’m honest. I will do it at least once, to make sure that a cloud service, for example, scales the way it says it will. Over time, I’ve learned where the bottlenecks in an application are, and I can, generally speaking, avoid most major scaling obstacles.

Allow me to explain that last bit just a little more. When it comes to
scaling your application and looking for code improvements, the first
place to look, always, is input/output (IO). Reading things from disk,
writing to disk, or sending/receiving over a network is always going to
slow things down. This is why Node became so popular: it made that kind of I/O asynchronous.

We’ll talk about this more in the scaling chapter, but you should know
what your data story is right from the start. If you’re going to be
writing constantly, you’ll need a strategy from day one as to how to
handle that in the best way (connection pooling, where your
transaction boundaries are, etc.). If you’re reading constantly, caching
is your friend but also a nightmare.

There are ways to plan for these things before they become a problem,
so stress testing may or may not be useful for you. I’ll discuss this
more down below.

PATIENCE. DISCIPLINE.
You hear it a lot from test-driven development (TDD) fans: it takes a
good measure of self-discipline to create useful tests, and to also see
the benefits of testing in general. I agree with this, for the most part.

It’s stressful to have your monitoring alarm go off when your application crashes. Or, worse yet, when a potential customer emails you with the subject “I’m trying to check out on your site and I can’t…”.

It’s at this point where you realize just how valuable tests are. It’s also
the point where you wish you didn’t have so many tests! Double-
edged sword, that. A “good” test suite will help you isolate bugs if you
can get your hands on a reasonable crash report, which your logs will
hopefully provide. Once you reproduce the bug, you can (hopefully)
easily resolve the problem so your tests pass. You can then push that
fix, so the error goes away.

Hold on there, friendo! Your fix needs to be covered by a test so you can prove the bug has been dealt with, and that you didn’t introduce Yet Another Bug with your fix. This is where discipline comes in, and it’s oh so very necessary.

In this chapter, I won’t be focusing on how to write tests in a given style — that’s easy to figure out with the help of Google or ChatGPT. Instead, I want to offer my perspective on how to write a “good” test suite. One that doesn’t get bogged down with unnecessary cruft, causing you to delete tests rather than fix your code.

Before we get there, however, we need to be certain our best friend is in place: the Logger.

LOGGING, YOUR SAVIOR


I’ll discuss Logging and Monitoring in depth in a later chapter; for now, I need to offer a gentle introduction, because without reasonable logging, your test suite will be operating at less than half its capacity.

Here’s a truth that you’ll discover over your career as a lead: the true
challenge of software development is keeping your app alive, not writing it in the
first place.

When people think about writing software, they think about the
creation process. I think that’s where the fun is, honestly, because
you’re watching an idea come to life! After you launch, however, is
where the game truly starts. I remember a friend saying that about
World of Warcraft (a popular MMO game) many, many years ago. It
was such a long, arduous slog to get to max level but, once you did…
wow, there was a lot to do.

Same goes for your application! Once deployed, you’ll be focused on:

- Handling any bugs, of course.
- Adding features.
- Fine-tuning, refactoring, and optimizing.
- Planning the next version, using what you’ve learned with the existing version.
- Scaling as your users grow.

And so much more. Not the most exciting thing, but absolutely the
bulk of your development career will be spent here.

So, why do I mention this now? Because the only way you’re going to
know what’s going on with your application and your users is through
logging and reporting. We’ll get to reporting later, as it’s much more
of a data concern than logging is.

What a Good Log Looks Like

Logging is an art form, and if you do it well, your coworkers will love
you forever. A log entry has one job: to convey what happened, when,
how, and where. You’re the one who gets to figure out why — isn’t
that lovely and simple!

Here’s the simplest log entry that does just that:
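An entry shaped something like this does the job. The format and field values here are my own illustration — yours will depend on your logger — but the idea is the same:

```json
{
  "level": "info",
  "time": "2024-03-02T18:44:09Z",
  "msg": "product added to cart",
  "path": "/cart/add",
  "user_id": "u_1182",
  "cart_id": "cart_9915",
  "product_id": "prod_2289"
}
```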



Informational log entries are great because they let you know that
people are, in fact, using your application. A good logging system can
actually be used to recover lost data, too, which is an interesting
rabbit hole to fall into… but let’s not.

The entry above is an info entry, but the ones we care about most
when things go wrong are the error and warning logs:
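Here’s an illustration of the kind of error entry I mean — the values are hypothetical, with the same made-up product_id as the info entry:

```json
{
  "level": "error",
  "time": "2024-03-02T18:46:31Z",
  "msg": "Error: product does not exist",
  "method": "POST",
  "path": "/cart/add",
  "referrer": "https://example.com/products/imposter-video",
  "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
  "product_id": "prod_2289",
  "location": "lib/services/cart.js:88"
}
```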

This is a reasonably detailed error log that should help you track the
error quickly. It tells you:

- How the error was triggered (a POST request to /cart/add).
- Where the user came from (the referrer).
- What browser they were using.
- The location, to the line, of the error (which is likely a throw).

Using this information, we can see that our user is using the app the
way it’s intended, and that nothing necessarily is out of order… which
is when our heart starts racing.

Looking at the product_id, it’s the same one that’s in the info log, so how could it not exist? It’s at this point where we realize that our logging strategy is a bit incomplete.

Relying Too Heavily on Try/Catch

At this point, we know the problem, but we now have an even bigger question: how is this even possible? So yes, this happened to me, and I was mad at myself for creating a set of logs that contradicted themselves.

My first thought sent a chill down my spine: what if the product table is
just… gone? Maybe all the records are deleted or, worse, the table got dropped!

I run tests locally against a test database, and every time I do, I drop
and recreate my tables to keep everything in sync. If I somehow
messed up an environment variable… it’s entirely likely that this
could have happened. If you ever “drop prod” in your career, I can
almost guarantee it will be due to environment variables.

The terror only got worse when I realized that the way I coded the
getProduct() method was with a single try/catch, and I wasn’t
evaluating the caught error. Any error would have been treated as a
“product doesn’t exist” error! ROB!

I was certain I had dropped production, but thankfully that wasn’t the
case. A frantic 10 minutes later, I figured out that what I was missing
was a log entry that says, “product out of stock”:
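Something like this, again with illustrative values:

```json
{
  "level": "warn",
  "time": "2024-03-02T18:46:31Z",
  "msg": "product out of stock",
  "path": "/cart/add",
  "product_id": "prod_2289",
  "inventory": 0
}
```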

This log entry, which should have appeared right before the error entry, would have told me something critically important: the error only happened when the product went on backorder. That would suggest an edge case where my customer had the item in their cart, but while they were bopping around my site, someone else bought the last one.

This happens a lot, believe it or not. But this raises an even bigger
question: I only sell digital products, why is this going out of stock?!?!

Yes, this really did happen to me. Long story short: when I created the product, I didn’t set its type to digital, so the default inventory was set to 100 (bad idea) and it was being debited during every sale… and you get the idea.

I didn’t have a message in the cart about inventory because I don’t do inventory. Isn’t programming fun? It took me about an hour to get everything resolved, and that’s mostly because my first inclination was that there was something wrong with my query.

If It Changes, Record It

You can obviously dump far too much information in a log entry, though there are those who say that’s not possible. My rule of thumb is this:

- If any state changes in your application, track it with an info. User logs in, logs out, product status changes, price changes, description changes, etc. Track every change, as it becomes a road map.
- If any state change needs some action (like restocking, for instance), it gets a warning.
- If it crashes, it’s an error.

Logging errors properly is a challenge, and I’ll get into that more in
the logging chapter. For now, let’s focus on how our logs can help us
with our tests.

UNIT TESTS
Unit testing is how a programmer thinks about the application they’re writing or maintaining. Unit tests are a bit more mechanical and have the singular purpose of helping you isolate and fix errors as they arise.

Here’s a very typical unit test from a popular Node.js project. I obfuscated the code so we don’t get lost in the details:

Some people call the example above “Sad Path, Happy Path” testing,
where you think of as many things as possible that can go wrong,
ensuring that you also have a guard in place: the “Happy Path” test.
It’s entirely possible, in fact, it’s highly likely, that you can focus on the
negative so much that you end up creating a function that doesn’t
actually work. The happy path is critical, and defines “the truth”, if
you will, of what the function should do.

If I had unit tests for every service and every method, like this project does, I could have figured out my “missing product” bug pretty quickly… possibly before it even existed.

Test-driven Design

One of the problems of unit testing is that you end up with so many
tests that can easily break when you change something, causing
friction. We hate friction!

Test-driven design, or TDD, flips this notion on its head: your tests
dictate your application design.

It’s a fun game, honestly, if you have the patience to do it correctly. Here’s the process you follow:

1. You think about what needs to be created and write the simplest possible test to validate that it exists. In my case, with the add_to_cart method, I would need a Product and the method itself. Run the test and see what breaks, which will likely be that the method doesn’t exist (we’ll assume we already have a Product class).
2. You take the next, reasonable step, which is that the Product is added to some kind of backing store. The simplest thing here would be to have a product property on the cart, and the add_to_cart method sets that property.
3. We obviously need the ability to have more than one Product in our cart, so we write another test ensuring that’s the case. We watch it fail, and then fix our Cart class accordingly so we can watch it pass.

And so on. The game here is to remove any assumptions about anything, and just do the least possible thing you can do to get the test to pass.

Pro: Speedy Bug Fixes

A well-structured set of unit tests can almost act as documentation, and if a given test fails, you know exactly where and what to fix. That is the strength of unit testing: fine-grained control, speedy fixes.

But would this have worked for my missing product problem? A TDD
person would say, “of course, because that condition wouldn’t exist in
your code.”

That comes at a cost, however.

Con: Troublesome Refactoring

The popular open-source project I’m using as an example has hundreds of tests. Maybe even thousands; I didn’t count.

Imagine you find a bug in this project, yet all the tests pass. This is
normal, by the way, and frequently happens. You decide you’re going to
help out, and you fork the project and clone it to your local machine.

Debugging is an art form, which we’ll get into in the next section, but
let’s pretend you go old school and pop console.log(...) everywhere
as it’s a Node.js project, and why not (pretend you’re using the
debugger if you want). Finally, you find the culprit: a certain character
present in an environment variable (let’s say it’s an equals sign, =) is
causing things to break.

Fun. Where does this fix go? You open an issue, some suggestions
come in, there’s debate about whether this is a valid bug, and then a
few suggestions on how to fix. You have some ideas, and decide to
create a few local branches to try them out.

You tweak the way the environment variable is read and set, replacing a split() operation with a more comprehensive use of substring(). 82 tests failed as a result.

You look over the output and realize that there is additional work
being done on the environment variable setting somewhere else in the
application, so you decide to follow the trail and do what you can to
resolve the problem. 96 tests are now failing.

You get the idea. And this part I’m not making up, as it just happened
to me. Not with this project, but with another one I was working on.
My database connection string had an = character in it and … yeah,
everything choked when I messed with the code to read the values in.

The point is: when you build up a unit test suite and try for the most
code coverage possible, it makes life difficult when it comes to making
changes. In fact, it gets worse: you avoid making changes because you would
rather not break your tests.

I’ve been locked in that cycle before and it’s horrible. 82 tests are
failing when I try to resolve this issue? #WONTFIX, thank you.

Con: Tests That Don’t Actually Test Anything

One of the problems you face with a big test suite like this one is that
you start adding packages and code to avoid working with external
systems. External systems, like mail services, databases, caches, etc.,
can fail or take too long to respond. In that case, people resort to using
mocks or stubs, also known as “test doubles” (like stunt doubles).

This chapter isn’t about how to test; I cover that in The Imposter’s
Handbook. Rather, it’s about testing approaches and strategies. In other
words: how you should think about testing in a team context. To that
end, I’m not going to dive into the wild world of proper mocking and
stubbing. The debate is highly contentious anyway and, if you ask me, a waste of time.

Consider this pattern:



A mock is there to avoid using an external service, with the idea that
your test should be isolated as much as possible. Looking at this test,
however, are you sure we’re properly testing things?

One of the problems with mocking a database is that you might violate some rule of the database, such as a unique email address, an integer out of range, or a null value, and never know it. We don’t get that here.

I know, I know. You could use an ORM that has validations built in,
which is a fine way to go! Sequelize is one of my favorites, and you can
define your database schema right in your code. Doing this allows you
to run a validate() method on your model to see if it can be saved,
which is what we might do in our mock. Or, better yet, we could use
an in-memory instance of our database (SQLite does such a thing)…
but then we’re involving the database in our test, aren’t we?

The point is: mocks can work well if you’re super focused on making
sure they don’t interfere with your tests.

I don’t use mocks, myself, when it comes to databases. If you’re using a data access tool (like Sequelize) then you should be able to use an in-memory database and not worry about it. When it comes to other things, like email services, I will absolutely use a mock, because there is a golden rule that you must follow when doing any kind of test…

Don’t Test What You Don’t Own

It’s a dumb trap people fall into easily: you write code to “verify” that
a service or package does what it’s supposed to do. A 128-bit GUID, for
instance, is your primary key type in your database. You don’t want
the default values that your platform provides, so you download a
package that will create sequential GUIDs so you have ordered keys
that are (supposedly) easier to index.

In one of your tests, you ensure the key is 128 bits and that one record
has a “greater” key than the next record. You’re paranoid, which is
fine, I would be too. Unfortunately, you’re testing the package for the
package developers, which is a waste of your time. If you can’t trust
the package, then you shouldn’t be using it.

This happens with things like commerce services (Stripe, for instance)
that are going to generate an order and a customer record for you. You
want to be sure the totals add up, so you write a quick test against the
Test Account… (I’ve done this).

Mocking can keep you from testing what you don’t own as far as unit
tests go, but successfully mocking something is an art form that also
depends on the language and frameworks you’re using. Rails, for
instance, strongly suggests you just give in to things and use a test
database. It also gives you the ability to “turn off ” sending email at
the framework level during testing, so you don’t need to worry about
spamming [email protected].

If you’re using .NET, you have quite a few blog posts and YouTube
videos to support you on your mocking journey. And there are quite a
few opinions too. With Node, there are tools and packages to do just
about every kind of testing you need. Mocking, intercepts, in-memory,
and more. Nock and Sinon are two of the most common, with Sinon
adding a very useful ability to check for asynchronous callbacks.

Your strategy will depend on the language and platform you use, just
have the question “am I testing what I own?” running on repeat as
you build out your test suite.

If you find, however, that you have a few too many tests that change
when your code changes (aka “brittle”), it might be time to think
about a different approach.

BEHAVIORAL TESTING
Behavioral testing and Behavior-driven Development (BDD) is a
fascinating way to think about testing. The good part is that you test
things from a usage perspective, not a mechanical one. The downside
is that it’s easy to write tests with bias in them that don’t actually test
what you think they’re testing.

Let’s consider my cart situation once again. If I were to test this using
simple behavioral testing or BDD, I would write specifications that I
would test against:

- Given we have a product with SKU X, when I add it to a cart, the cart should have one more item.
- When a cart’s item count changes, the cart’s item total should be the sum of quantity × item price for each item.

And so on. A general rule of thumb, when you’re doing behavioral testing (which most people call “BDD” whether they write the tests first or not), is that you try to use “Given, When, Then” as a guide for writing your “specs”.

The Jargon Trap

BDD has plenty of jargon, and you can use that jargon all you like, but
it doesn’t mean you’re doing BDD. I know, I know! It sounds like I’m
gatekeeping here, but it’s so important — you would be amazed at how
many BDD “tutorials” are out there that completely miss the dang
point.

It’s critical to take your brain out of engineering and coding land and
take on more of a product focus. You’re not testing classes and methods,
you’re specifying how a feature works given some scenario.

That’s where we’ll start: features. Using an e-commerce system, a feature might be “The Shopping Cart”. Now we get to exercise that feature by thinking up every scenario that we can:

- Adding a product to the cart.
- Removing a product from the cart.
- Changing the quantity of a product in the cart.
- The product is on backorder.
- The product becomes backordered while in the cart.

When you think in terms of features and scenarios, it frees your mind
a bit, and you can come up with several scenarios that may or may not
happen. That last bullet would have caught my problem, most likely,
but once again that’s more about me and not the style of testing I’m
using. I will say, however, that writing specs like this without thinking
about code makes stumbling on that scenario much easier.

What a Spec Looks Like

I think programmers have a difficult time doing BDD because you have to think like a Project/Product Manager, not a coder. That’s a good thing, because it’s always good to remember why you’re building the thing you’re building.

If, however, you need a nudge, there is a handy cadence you can use when putting features, scenarios, and specs together: Given, When, Then:

I’m using JavaScript here, with the Mocha testing framework, but hopefully the language doesn’t matter and you can get the idea. The code doesn’t matter anyway — it’s how you describe your tests and specifications that counts. You want each description to be as clear and precise as possible, making your assumptions explicit.

Why? Because of this:
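Run through a spec-style reporter, the suite reads like a checklist of the feature itself. A mocked-up illustration of that kind of output:

```
Feature: The Shopping Cart
  Scenario: adding a product
    ✓ Given a product with SKU X, when I add it to a cart, then the cart has one more item
    ✓ When the cart's item count changes, then the total is the sum of quantity times price
  Scenario: the product becomes backordered while in the cart
    - pending

  2 passing
  1 pending
```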



It’s a lofty goal, and you’ll be accused of being a total nerd (I am, all the time), but if you create pretty test reports like this, your boss or client will be thrilled. That’s the theory, anyway; they’re still going to make fun of you.

I have a friend I used to work with at Microsoft who printed his test runs out and taped them to his door, so people wouldn’t bother him.

This is the goal of BDD: executable specifications, and it’s usually at this
point that most people have their AHA! moment. We don’t write code
in a vacuum… well, at least not most of the time. As a programmer,
you have a specification that you work against, like anyone who builds
anything. So why not formalize it and make it executable?

In the example above, every blue item is a specification that’s pending, but as I move along, you can see exactly where I am in terms of implementing the features of the cart:

Having a bit of trouble with the cart’s item count… but I’ll get there.

Your testing framework might look different, but most languages and
testing tools support some descriptive output like this. I even made it
work in C# using XUnit with the Trait attribute!

Pro: Your Specs Won’t Change As Much

This is the thing with BDD: your application specifications will likely stay
the same, so your test suite will be far less brittle. That’s a huge deal! Unit
tests, like the first example we use with a class and testing every
method, tend to break when you refactor your code or change things
around. BDD specs don’t, unless the features and specifications
themselves change.

Pro: You Will Think More Clearly About What Your App Is Doing

When creating your specs, you have to think through how your
application will be used. This aligns your thinking with the
Project/Product Managers and helps you understand the value of the
thing you’re creating.

If you understand the value, you will likely have some input on how to
increase that value, which is always a good thing.

Con: Some Things Will Go Untested

The benefit of Unit Testing and code coverage tools is that more of
your code tends to get tested, especially if you have test generators in
the framework you’re using (such as Rails).

With BDD, you’re focused on specs, not coverage. It’s a little harder to
ensure that every part of your application is exercised. Ideally, your
scenarios will cover your codebase as completely as possible, but I
have yet to have that happen in my case!

Con: Some People Just Hate It

You will encounter other developers who will throw their hands in the
air and tell you that you’re being a cult leader and adhering to syntax
rules rather than a solid testing strategy. They will have a point, and
that’s OK. BDD does have a place and the jargon is there to keep your
head in the right place, with your attention focused on specifying
features rather than testing method implementations.

There’s room for both!

The Best of Both Worlds

There are some things that are best tested with a simple unit test. A
snowflake generator, for instance, is a simple numerical
transformation that you can test directly with a unit test, not worrying
about behavior because it really doesn’t have any.

BDD is great for describing the way your application behaves when a
user interacts with it, thus the name, but occasionally, you’ll want
something a bit more targeted and fine-grained, so feel free to pop in a
few unit tests here and there, as needed!

You don’t have to commit to a single style, and in fact you shouldn’t!
Just know which tests are tests and which are specs, and you should
have a happier team.

COMMON TESTING TOOLS


Tests are just code that runs code and are usually constructed with
some type of testing framework, such as:

- XUnit for .NET and C#
- JUnit or TestNG for Java
- Mocha, Chai, or a million others for Node
- RSpec or MiniTest for Ruby
- Pytest for Python

You’re probably using one of these, but if you’re not, that’s perfectly
fine. As long as your framework keeps you thinking about tests and
not the test framework, you’re good to go.

Once you write the tests, you need to run them and there are quite a
few opinions on this, which usually go something like:

- “We use Blah Blah IDE which has testing built in, so use that.”
- “Our favorite testing extension for VS Code is Bloop Bloop, so use that.”
- “Testing tool? You mean the CLI?”

I typically find myself testing in the CLI; however, there are some
wonderful testing tools out there that can speed your “feedback loop”
tremendously. This is our goal if we’re doing TDD or BDD: fast
feedback and iteration. Sometimes the CLI just doesn’t cut it for
people who want the most speed possible.

Here’s the WebStorm IDE (from JetBrains) test runner (image from
their site):

You can pop this out and let it float in the corner, setting it to watch
your code files. When something changes, the tests will automatically
run again, showing you quickly if you screwed something up.

Visual Studio has something similar:

If you have the Enterprise edition, you get “Live Testing”, which will
show you test results right next to your code:

I used this once before, about 3 years ago, and it was unbelievably fun
to see things change in real time.

If you’re a VS Code user, then you have plenty of extensions to choose from, even premium ones (not free) that will approximate the live testing experience in Visual Studio.

I like these as they’re nice and visual, but I tend to try to keep things
as simple as possible. To that end, I’ll have a CLI open on my right
monitor while I code on my left. If I’m just using my laptop, I’ll create
a new desktop and swipe over to it occasionally to see how the tests
are doing.

Most frameworks allow you to set a “watch” command which will rerun the tests when a file changes. I work in JavaScript a lot, so this has worked well for me. As always: the best choice for you and your team is to go with what people are comfortable with.

INTEGRATION TESTING
You’ll reach a point in the development process where you’re going to
need to make sure everything works as intended. In our e-commerce
example, for instance, I might need to ensure that:

- Things are stored properly in the database.
- Emails are being sent.
- Stripe is handling the payments as expected, and our webhook receivers are actually working (good grief, how many times has this bitten me!)
- The fulfillment system is generating the proper expiring URLs for things actually in storage.

The only way you can be confident that everything is working is to run
a full integration test, and that’s stressful.

How do you actually do it? And where?

I’ll share with you what I’ve done, and then we’ll get into the
theoretical approaches after that. Normally, I would do this the other
way around (theory first, my thoughts second), but I tend to do
integration tests right from the start, which I find simpler, and I think
it’s useful to start there.

The Simple Approach: Isolate, Script, Rollback

This is what I have done in the past, and it works well for me. I won’t
claim that it’s The One True Way, but you might find it useful.

I tend to script interactions that are carefully focused, which is a version of Big Bang testing, which you’ll read about in a minute.

For instance: let’s say I want to ensure that the “Happy Path” works
for my e-commerce checkout, which is:

1. A Customer picks a Product.
2. The Customer buys that Product with a credit card, using Stripe.
3. Stripe returns a successful Payment.
4. An Order is created and is fulfilled in some way.
5. They get an email with their Invoice attached.

This process, right here, is the core of what my e-commerce application


should do: help my customers give me money :D.

So how do I run this? Using a simple script. If I’m working in Node, I’ll create a directory for the integration tests, and then I’ll create a simple method that runs them and reports things to the console:

Here, I’m testing two “modules”, as they’re referred to in integration testing land. I call them services, but let’s not get hung up on names, shall we?

As you can see, this is just a straight-up script that isn’t using a
testing framework. I could be more formal about it, but (this is just
me, again) I don’t run these tests all that often. I only run them if I
refactor something big, add some features, or think that something
might break because of a change to the codebase.

Hopefully, you get the idea. I’ve written my service class methods so they return whatever things they’ve changed or created, so I can interrogate them using Node’s assert library. My service classes also log what they’re doing, so I can see it as it all happens, but I can also use the debugger here if I want to know more.

There are some key considerations when doing this simple approach:

- You’ll need to be certain you’re working against Stripe’s test account (each account has a test flag with test API keys). When I run prepareStripe, I’m sending in a test card number, which will fail if I use live, and that’s a good thing!
- Obviously, send the email to yourself if you want to see it, or to “[email protected]”, which goes nowhere.
- Have a cleanup routine, or make this process idempotent if you aren’t concerned about accumulating test data.

I run these tests against my local development database, so I’m not all
that worried about filling it with test data. That said, it can be very
annoying when tests like this fail because of a data validation error
(duplicate key, etc.).

What I typically do is to make checkout, order, payment, and fulfillment processes idempotent, which means that if an order exists
with a given order number (which is supposed to be unique), it’s
simply replaced. Same with Stripe payment records, checkout records,
and authorization data. How you do this is up to you, of course, but
there are some fun ways:

Use a stored procedure (or function in Postgres) that will remove existing data. With Postgres, you can use on conflict
to run updates if you like, ignore things, or do whatever you
like. This should all happen in a transaction so you don’t mess
up your data!
Run a cleaner routine before you run your test. This is more
manual, but might give you more peace of mind.
Run a rollback routine after your test. Same thing, just
happens after the fact.
Use an INTEGRATION environment setting that uses an in-memory SQLite database. This will only work if you’re using an ORM that supports SQLite or some type of in-memory system, such as Entity Framework.

Hopefully, the external services you use can run in test mode too.
Most payment systems allow for this, and there are also email testing
services out there that won’t actually send any email and, instead, will
show you a nice interface. One that I like a lot is Ethereal Mail, which
is free and gives you a set of SMTP credentials that you can plug in as
you need.

Right, that’s my way of doing things. Let’s take a look at a more formal approach.

Integration Testing Tools

If you do a quick Google search, you’ll find several tools dedicated to Integration Testing. I’ll be right up front on this: I have never used a single one of them aside from Pytest, which I’ve used for other things.

There really is no need for a dedicated toolset if all you’re doing is ensuring that the services your application needs are working as
they’re supposed to. That’s me and my opinion, and I know that there
are quite a few opinions out there on these things, but before you
flame me, hang out until the next section on Acceptance Tests, and
hopefully, you’ll see where I’m coming from.

For now, let’s talk about the different styles of Integration Tests.

Bottom Up

An e-commerce application is pretty straightforward, but there are still architectural “layers”, if you will, that do a thing. For instance, I have:

A database to store data transactionally using SQL.
Models that represent my business objects.
Services that use my models to implement business logic.
UI components that interact with services.

The lowest level here could be considered my data layer, or anything that integrates with the operating system or disk. Next is my domain
model and any data access logic that goes along with it. Finally:
services and then the UI components.

The idea with bottom up is that you test your data access bits first,
then weave in your models, then services, and finally UI stuff, working
your way from the bottom to the top. You can do this like I did, using
a script, or you can use some type of testing tool.

By focusing on your layers, you can find problems easier when things
go wrong, and if you do this bottom up, you can find problems that
could ripple throughout the application first, rather than get surprised
later.

Top Down

As the name implies, you test the layers of your application from the
top (or outside) down to the bottom, which is usually some type of
data interaction. Personally, I find this type of testing a bit weird, but I
do understand why it’s there.

You’ll need test doubles (mocks, stubs) if you do this, and as you
progress down the layers, you remove those doubles and let the
integration happen. I think that’s strange, and a bit contrived, yet I can
also see how it might be valuable to know that your service logic is
sound, but your data access logic has issues. You’ll find that out
quickly if you use this approach.

Big Bang

As the name implies: you just do everything, all at once, just like I did
in the test above. The shopping and checkout services do their thing
and use models that hit the database. They also send emails, and
so on.

The benefit here is that you exercise everything and don’t need to
work in stubs. The downside is that finding and killing any bugs can
be tedious, since you’re doing everything at once. For instance: if I
have a problem in my fulfillment routine, I’ll need to write more tests
and possibly a test double or two to isolate the issue. This takes time
during testing, whereas doing top-down, I would naturally stumble on
this issue during the test creation process.

If I’m honest, I find that this type of testing is a bit tedious. It sits in
the middle of unit/behavioral testing and the thing that’s coming
next: fully scripted acceptance tests. To me, having a solid set of
unit/behavior tests as well as acceptance tests covers just about
everything I need.

ACCEPTANCE TESTING
This one’s tricky. For some organizations (like where I work at
Microsoft), this means getting users in a room to try a thing you’re
creating. You take notes, measure things like eye-movement, clicks,
frustration and happiness levels, and so on. These are usually
conducted by a specialized team and if you’re involved, it’s likely just
as a spectator.

Before we go further, it’s a good idea to go all the way back to the
chapter where we thought about our application users (Getting To Know
Our Users). We had to flex our empathy but, more importantly, we had
to involve our entire team. This is a very hard thing to do because you
ride the line between groupthink vs. a singular perspective (aka
“bias”).

These things evolve, but you have to start somewhere, so hopefully you have this list of users at the ready!

Right then! Now that you have your users, let’s consider our testing
tool. For you, it’s more likely you’ll use a dedicated testing tool for the
platform you’re building for:

Selenium, Cypress, or Playwright if you’re building a web application.
Appium, Calabash, or your IDE of choice if you’re building a
mobile app.

There are others, of course, but they all basically do the same thing:
interact with your application based on a script of some kind.

For this chapter, I’ll discuss Playwright, as it’s the tool I have the most
experience with. Full disclosure as well: I’m a Microsoft employee as
of this writing, and Playwright is a Microsoft project. That’s not why
I’m writing about it, however; I genuinely love this thing.

Writing Your Scripts

Each test you create is a scripted interaction, which means you need
to fully develop one or more target end users. This isn’t easy to do,
and you can easily build in your biases to these tests. In fact, I can
guarantee that you will.

I’ve discussed this throughout the chapter: your biases will almost
always be the thing that undermines your tests, and that is especially
true here. I’ll go further: your initial tests will probably be marginally
useful, but if you’re willing to change them to reflect actual user
interactions, they will organically improve.

How do you do that? Logs, baby! A good set of logs tells your
application’s story, which always involves your users. The Logging
chapter comes in the next part, where everything will, hopefully, start
coming together.

So, to start, know that each script has a beginning, middle, and end
and, ideally, ends with some outcome that you can measure. For our e-
commerce application, we can script out the cart interaction which
ends with one of our users buying one of our books.

This is easy enough, but is it useful? Nope. Why not? Because we’ve
already tested this! Duh! We need to shift our thinking here because
we’re going after the experience, not the mechanics.

Embracing The Dimensions

This is the first thing that I love about Playwright: it’s headless. It will
test your site, but it does so using headless browsers (you can choose
from the major ones), which puts you in a very interesting position as
the test creator. How do you write code for a non-visual interaction?

Using Accessible Rich Internet Applications (ARIA) tools is how! You’ve seen these HTML attributes before; they’re the ones that start with aria-. Each browser has support for these tags, as they help people who need assistive technology.

Here’s what that looks like in the browser:


That’s my blog, and you can see how the DOM is viewed in a different
“dimension”. Rather than a tree of elements, it’s now a tree of roles
which help people with vision challenges interact with your site.

This is how Playwright works: you make your pages more accessible,
which increases your ability to write scripts for it. There are other
ways, of course, but this is a “best of both worlds” thing and I think
it’s wonderful!

That’s the goal with Acceptance, or “End to End” testing: you want to go
after the experience of the site, and what better way to do it than to try
to visualize it in your mind, thinking about the colors and what you see.

If you would like to know more about Playwright, I made a free hour-
long video that you can watch right here.

Is This Acceptable, Though?

Your scripts need to answer a simple question: will my users like what I’ve made? The users, of course, are the ones we defined during our
Agile spin up, way, way long ago. If we’re serious about these users,
who they are, and what they care about, then we need to imagine
them using our site and being happy, sad, critical, excited, and bored.

Beginning, middle, and end. Why is Santosh here, and what does he
care about? Will Fen be pissed off and write another email
complaining about our fonts? You can have some fun with these
scripts, but it’s critical that they tell the intended story so you and
your team understand when you’re actually done building the version
of the app you’re creating.

That’s the thing: these tests tell you when you’re done. How would you
know otherwise? I wish I could go deeper technically here, but it’s not
really a technical issue, it’s a human issue. Your application needs to
deliver value to your users, which means you need to understand who
those users are and what they care about.

You get that right, your career is set.

ALPHA TESTING
You’ve built out a well-tested codebase, and your Integration and
Acceptance tests look respectable. In short: you feel good about what
you’ve made… which should always make you feel uneasy.

Now comes the fun: letting someone else use what you’ve made. This
typically involves dedicated groups internally and, at a later point, the
actual users who will use the application.

With alpha testing, you gather people who know nothing about your
code and, ideally, know very little about your application. These could
be folks internal to your company or your friends. Maybe your
“insiders” group. Either way: these should be people you trust to keep
the rough edges to themselves and understand that this is a very early
launch.

The alpha test period is only as good as the group doing the testing. If
they’re your friends, you might get a text with a screenshot or an
email saying something like “I tried this, and it didn’t work”, which is
always a good time.

If you’re lucky, however, you might have access to a team that does
this kind of thing for a living. Yes, there are people who do this kind of
thing as their job! And you know what, it looks like a lot of fun if I’m
honest.

A software tester’s life is all about exploring things from a user’s perspective and doing the weird stuff that you, as the developer, will never have thought of.

I think this joke from Reddit illustrates perfectly the life of a QA tester:

A software tester walks into a bar.

Walks into a bar

Runs into a bar.

Crawls into a bar.

Dances into a bar.

Flies into a bar.

Jumps into a bar.

And orders:

a beer.

2 beers.

0 beers.

99999999 beers.

a lizard in a beer glass.

-1 beer.

“qwertyuiop” beers.

Testing complete.

A real customer walks into the bar and asks where the bathroom
is.

The bar goes up in flames.

One of the comments from this post also sums up the perfect
developer response:

The bathroom worked at my house

I’ve had a few popular open-source projects over the years, and every
now and again a tester would drop by and dump gold in my issues list.
A complete, well-documented description of what they did, why, and
what happened.

These people are gold.

Exploratory Testing

When I first wrote publicly about putting this book together, one of
my blog readers offered this suggestion:

Testing is just as iterative as development. If I had my druthers, Exploratory Testing would be taught to every dev/tester out there
in the hopes that everyone can be on the hunt for bugs, edge
cases, and side effects before release.

If I’m honest, that’s the first time I’ve ever heard the term
“exploratory testing”, which makes sense because I’m not a dedicated
tester, and it’s been a long time since I’ve had access to a QA
department that wasn’t the general public.

When you hand your app over to a QA team, they will probably do
some form of Exploratory Testing, which treats your application like a
crime scene. That’s how Cem Kaner, the creator of the process,
describes it: “Testing is like CSI” (CSI is a television show that deals
with crime scene investigation and I used to watch it religiously).

Before Exploratory Testing came along, QA people would typically follow a script — usually something that aligns with your User
Acceptance tests above. “Open page X, click the blue button that says
‘Login’ and then enter the credentials …” might be a scripted test for a
login process.

Exploratory Testing takes a different route in that it’s completely unstructured. Well, sort of. There are some terms and general
processes the tester needs to follow, just like in a CSI show, but the
tester is free to explore the application within a structured process. I
know, it sounds a little contradictory, so let’s dig in.

When testing an application, the tester is given a Charter, which gives them a specific task, but in an open way, allowing for exploration and
discovery. Testing the login page, for example, might have the Charter
“Login to the application using the credentials X for user name and Y
for password”.

From there, the process is straightforward, wrapped with a little jargon:

Oracles are indicators that something has happened, either good or bad. Testing the login page, for example, you might
see a green alert welcoming you back once you log in. You
might see a red alert telling you your credentials are wrong.
You might also see a 500 error too. All of these tell you that
something happened.
Heuristics refer to how you handle certain situations as a tester, based on your experience. The red alert when you log in
could mean you entered a typo in the login box, so you try
again. It could also mean you were given the wrong credentials
deliberately, so you registered to create an account. Heuristics
allow the tester to try what they can to discover more about
the Oracle they’re dealing with.
Risks are anything that might threaten the value of the
application for the given Charter. The text on the login button
might not have enough contrast, the font size of the alerts
might be difficult to read for people with vision issues. There
are no ARIA tags for assistive technology, which might
alienate quite a few users.

A tester will work with an application for a set amount of time, or “time box” their session, during a focused period. This is important,
because an open-ended testing session could easily become unfocused.

During that time, they’ll take numerous notes, might document their
Charter and Oracles, and of course document any risks they come
across — all while treating your application like a crime scene.

Once completed, the tester will document everything and send it off to the team. Or they might create some issues in GitHub — this is all
part of your internal process.

BETA TESTING
Beta testing is a bit more unstructured, unless your users are testers:
fewer bugs, fewer rapid changes. A “we think this is ready” statement,
but understanding that your users tell you when you’re ready.

Alpha and beta are actually considered Acceptance Testing phases, so sometimes you’ll hear people say “we’re beta testing this” which also
means “we’re letting our users find the bugs now.” That’s a perfectly
acceptable thing to do, by the way, because as we’ve been discussing,
you and your team will be completely blind to certain bugs!

In reality, beta can mean whatever you want it to mean, but one
understanding that everyone agrees on is that any release that’s less
than 1.0.0 (using SemVer here) is allowed to change and break things.
Moving your app from 0.4.3 to 0.5.0, for instance, probably will break
any other services relying on it. These are called “breaking changes”
and are really a pain in the butt.

Gmail was released on April 1st (April Fools' Day… which is weird) of
2004 and stayed in beta until July 2009, which is also weird. It was in
beta for so long that it became a bit of a joke.

The PostgreSQL driver for Elixir, one of my favorite languages, was released in alpha almost 10 years ago as of this writing. As of today, a
bright sunny Saturday in January 2024, the driver is still in beta with
the latest release set to 0.17.4.

I asked José Valim about this. He’s the creator of Elixir and a
maintainer of the driver (among many others), and he responded that
they’re evolving the tooling for Elixir so rapidly that it makes life
much easier if you don’t go past 1.0.0, which means you don’t get to
break things as often.

Unfortunately for me, I created a data access project for Elixir called
Moebius, which used Postgrex. When it broke, my stuff broke (if I
upgraded). Eventually, I stopped upgrading the driver, which was a
problem because Elixir itself was changing as well… and you get the
idea. The most significant change was when they added date support
to Elixir, which meant they added it to Postgrex too, which meant I
needed to change my code and bump to version 4 or just give up. I
hate giving up, but this was such a massive change, I didn’t know
what else to do.

Alpha and beta stages are a great idea if you want to involve your
users. They’re not so great if you want a free license to change things
on an ongoing basis.

Gathering User Feedback

This is a tricky one, and, of course, depends on your audience. It’s common to open up your application to users at a discount (if you’re a
paid service), with the implicit understanding that they’re getting the
discount to offer you feedback. If you’re not a paid service, then the
lure might be that they can be one of the first users on the platform
for the price of simply trying things out and letting you know what
they think.

But how do you get that feedback? There are a few ways, of course,
depending on who your users will be and how much you want to
spend… which all leads to the same question: what quality of
feedback do you want?

Start With a Waiting List

I know most developers hate the idea of email, especially mass email,
but trust me, email services can be your absolute best friend.

A common thing to do is to set up an email list service, which is usually free up to a certain number of people. ConvertKit, the service I
use, is free up to 1000 subscribers, which is actually quite a lot.

Note: I’ve used quite a few list services, including MailChimp, Drip,
ActiveCampaign, Mautic, etc. ConvertKit has worked great for me, consistently, and no, I get no consideration for this.

You can create landing pages with ConvertKit and use that to do all
kinds of interesting things with your beta users, including not
bothering them at all! When they sign up, you tag them with beta-user or something like that, and then thank them for signing up, and
that you’ll let them know when the beta is open for them.

When you’re ready, you can send out a broadcast email to everyone on
the list that you want to let in to your beta. You can do this in waves,
if you like, or all at once — up to you.

One really nice thing about using ConvertKit is that it works well with
automation sites like Zapier, and you can add your user directly to
your system when they sign up, or pop them into a Google
spreadsheet!

I’m not going to get into the details of how Zapier works. It’s super
simple to learn and, in summary, connects APIs together.

Here’s a “zap” (their term) that I used to have running for when an
order came in from Stripe. I hooked up my Stripe API to Zapier, and
also Google Sheets, and then put the two of them together so that each new Stripe order added a row to the spreadsheet.

You can do the same with ConvertKit: add a beta invite person to a
spreadsheet, and when you’re ready, export a CSV and import those
users into your application.

Now the fun begins: handling the user feedback!

Simple: Hit Reply

One of the pleasant things about using a service like ConvertKit is that you can create email sequences, which, as the name implies, are
emails sent to people sequentially over time.

One extremely useful thing is to have a “Beta Feedback Prompt” sequence, which you spread out over a week or two, maybe sending an
email every other day.

Each email can be a “did you know that our app does X?” Another
could be, “here’s an invitation code to send out to your friends.” The
one you care about, however, is “hit reply and let us know how we’re
doing!”

You would be surprised at how effective that is, as long as your email
is 1) helpful and 2) sincere. Instead of asking directly (“Please tell us
how we’re doing”), consider making it more personal with a story that
offers some value:

The idea for this application came to me in a dream, if you can believe
it. I think it was because I always wanted to use an app that helped
me do X in half the time and do Y at the same time. I’m a big believer
in the Z Theory, which tells us that A, B, and C can happen if you take
the time to journal every day, do yoga, and eat nothing but pork chops
for 36 hours. That might sound extreme, but I guess that’s just me.

Hopefully, you see the benefit like I do, and if so, have fun! If not,
I would love to hear from you to know more. Don’t worry about
hurting my feelings, I do yoga. Just hit reply and let it rip!

You’ll get responses, and you will be surprised at how supportive these responses will be, for the most part. Those are fun, but be sure
to focus on the people who take the time to tell you “meh” or, better
yet, “I tried and don’t get it”. These responses will often be light on
details because time is money, friend, but with some skill, you can get
more detail.

The person you really want to hear from and the one who will make
all the difference is the one who’s just flat out confused. Maybe they
tried your app and shut it within seconds. They didn’t understand
what you were trying to say, and so on.

Many times you can reply to them, and they’ll help you out — they
wanted to be on your beta list to begin with, so there’s some perceived
value on their part! Other times, you’ll need to get creative.

One way to lure them is to offer a free subscription, if your site’s for
pay, or a gift certificate for coffee or similar, if you have the budget.
That might sound extreme, but people will appreciate your gesture
and probably decline it anyway. The point is: if they feel you’re
genuinely curious about what they think, they’ll answer you.

When you get your valuable feedback, pop it into a to-do list or notes
application, or even fill out a GitHub issue straight away!

Speaking of…

Nerdy: GitHub Issues or Discussions

This works great if your audience is technical, and it’s good for you,
too, because comments might not be as pointed as an email reply.

You still want to use an email service to reach out to people, but you
don’t have to and, instead, you can pop “see a problem?” links
everywhere and hope people click on them. You’ll want to be certain
you have an issue template in place for your beta period (see the
GitHub chapters earlier), but don’t be overbearing! People would
rather not give you their time, unless they really love you and want
you to succeed, but don’t count on that.

Asking your users to fill out a GitHub issue can save you time and
energy, but you’re asking a lot. If you think you’ll have a big invite list,
this can be just fine.

Spendy: A Third-Party Service

I remember when StackOverflow launched, and watching Jeff Atwood and team handle the intense public interest. Jeff didn’t use an email
service for this, instead they just sent out an invitation list and wired
up a service called UserVoice. You can read the entire process here.

I’ve used UserVoice in the past, and it worked pretty well. If you’re
familiar with the Discourse forum system or (once again)
StackOverflow, then you’ll understand UserVoice. At least the way it
used to work, back when Jeff was using it.

People provided suggestions, asked questions, and so on, and things were voted on. The idea is that these discussions were self-ranking,
which helped you understand what users wanted.

It’s a much different service these days, of course, but it’s difficult to
tell as the price is extreme and there’s no demo content there. Oh
well.

Intercom was another service I used that had the interesting approach
of popping a chat window right on your site as an overlay. People
could chat with you in realtime, or leave a message that you reply to
when you had a moment. I liked it, but it was expensive.

It has since evolved into a more complete customer support system, using AI of course, but still lets your users chat with you, which, I
think, is groovy.

If you have the budget for a service like this, it can save you a lot of
time and also raise the quality of feedback you receive. The starting
price is pretty reasonable, which as of this writing is $39/mo in
the US.

If you Google “user feedback service” you’ll see numerous services that essentially do the same thing, with slightly different user
interfaces. If I had to choose one, I’d go with Intercom as I’ve used
them before, and I'm in favor of evolving chats and user feedback into
support documents.

SUMMARY
I can’t tell you how to test; there are quite a few tutorials for that.
Similarly, I can’t tell you what to test — these are things you learn as
you gain experience.

What I can do is nudge you down the right paths, and if you leave this
chapter with only one thing, it’s this: don’t limit yourself to a single way of
testing your application. Every approach is “good”, and also has its
drawbacks. Overlap them, and you can “prove” (notice the quotes) that
you’ve done what you have set out to do.

That last bit is everything. As I keep mentioning throughout this book: you are your best cheerleader. At some point, you will need to
flex, and make damned sure that your bosses and the people that work
for you know that you were an important part of the success.

I know that this is problematic for many people, myself included.

You’re working hard to get to a place in your career where you can
enable a team to deliver. Let’s stay focused. This isn’t about ego, this
is about you doing a good job and making sure people notice you.

Attention is a good thing

A spectrum of tests, unit to acceptance, is as close to “proof” of you doing a good job as you’re going to get. The best part is that it’s not
you doing the talking, it’s the tests. Even then, you need to be OK
with letting people love you for doing good work.

You’re good at your job. You’re reading this book so you can improve,
aren’t you? That says a lot.
NINETEEN
DEBUGGING
A WONDERFUL SKILL TO MASTER

I love debugging, I really do. I think it’s the clarity of purpose and sense of urgency that focuses my entire mind onto one task: fix the damned problem.

If you want to be extremely valuable to your team, become good at debugging. If you want to reach god status as a lead, add points to
your bug-slayer skill tree. In my experience, there are two people
absolutely revered on any project: the data person and the bug killer.
They remove the blocks, solve the problems and, best of all, offer the
joy of learning to others.

That, right there, is why debugging is wonderful: you always learn something.

REPEAT AFTER ME: IT’S NOT HAUNTED


When I was starting out in my career, I was routinely convinced that
languages and frameworks were buggy, or that my machine was
somehow haunted. Or at least infested with real bugs (which did
happen once).

I would yell at my screen, claiming “IMPOSSIBLE!” I would swear bluntly, like an adolescent toddler, and generally behave in the worst
ways imaginable. Until one day, when my buddy (who was also my
cofounder) put his hand on my shoulder and said, “you know, Rob,
you sound ignorant when you yell like that.”

Dave is a good guy, through and through, and his comment hit me
pretty hard. I wasn’t angry, just ashamed. “Computers are never wrong” is a
good mantra to have. Sure, sometimes you’ll find a weird bug or
language behavior, especially if you use JavaScript, but 99.999% of the
time, it will be your fault (or whoever wrote the code).

I used to think the patience and resolve needed to find and fix a bug
was something you gained with experience, but I am not convinced
that’s necessarily true. Experience comes with time, sure, but patience
is something you can cultivate at any point in your life.

You’ll need it.

CLEAR YOUR MIND


As we saw in the last chapter, incomplete or poor logging can lead you
to the wrong answer, easily. I thought I dropped my production
database and wasted precious minutes with my checkout broken,
chasing a problem that didn’t exist. Of all the time-wasters, this is the
biggest. I was worried, stressed, and let my fear get the best of me.

Before you start debugging anything, clear your mind. I know it’s easy
to say, but it’s critical because often the answer is right in front of you,
if only you would let it come to you.

Imagine that you can’t find your keys. The first thing you probably do
is check your pockets (which is reasonable), and then search the area
where you think your keys should be. Maybe they fell? Might be I put
them just… next to the place I normally put them? And then your
search radius widens as you check places near where your keys
should be.

We believe so strongly that our keys should be where we left them that
we can’t free our minds to think “maybe they’re still in the front
door” (most likely, for me at least), or “I bet I dropped them on the
way up the steps”. These are far more likely than you breaking a
pattern and setting your keys near where you think you set them.

My point is: the best bug killers that I know of can clear their mind
and come up with 3 scenarios within seconds, and one of them usually
involves DNS.

From https://www.cyberciti.biz/humour/a-haiku-about-dns/

Let’s start here because why not. If your site is down because it’s not
responding, immediately check your DNS settings:

DNS is a simple concept to understand, but extremely difficult to put into practice. DNS is often to blame for massive internet
outages because even the most savvy companies and educated
minds make mistakes while DNS misconfigurations are
unforgiving.

At the same time, DNS issues often don’t manifest themselves as DNS issues. Based on the symptom, DNS is often the last thing
you’d expect it to be—this explains the denial stage of the haiku.

Been there, too many times. I set up email through AWS Simple Email
Service, which is an amazingly cheap email sender, but to work
properly you have to ensure that your DNS is set correctly (eek that
last bit sent shivers). I did. In fact, I verified my settings four times
before rolling things over… and yet email still failed to send at least 3
out of 10 times.

Yep. DNS issues (it had to do with my DKIM setting).

I bring up DNS because it’s a great example of built-in bias, and one
that you can get over by cultivating the ability to empty your brain. I’ll
offer some step by step ideas in a minute, but for now, understand
that often the answer will be something completely unexpected that’s
also obvious once you figure it out.

We need to shorten the distance between unexpected and obvious.

WORK IT BACKWARDS
If you’re debugging your code, you are the problem, not the code. You
wrote the bug, you’re also going to be in your own way as you try to
debug the issue. By “in your own way” I mean that your bias for
“what should happen” will absolutely cause issues.

I claim that the primary reason that the so-called “bug slayers” on any
team are so good, is because the code they’re looking at is not
their own.

Think about the last major bug you had that took time to solve. At
some point, you probably, like I did in the last chapter, sat back and
said some variation of “right, we have a product. I know it’s there. It’s

being added to the cart here because I can see it in the logs… and then
it’s not being added there!”

This is restating the events as they happen, or “working your way forwards”. How many crime novels or TV shows have a “break the case” moment where they said, earnestly: OK, let’s work our way forwards?

We need to do the opposite, and here’s a system that I have used countless times to unravel the nastiest bugs:

Accept, as a truth, that the language and framework you’re using aren’t buggy.
Stop what you’re doing, and do something else that will break your concentration completely. If you play an instrument, do it. Or go for a walk, the gym, or play Elden Ring.
Take a hot shower and explain the problem to your shadow, working your way backwards. “The app is crashing with a connection error, which is caused by the Cart class method that adds a Product to the items if it exists…”
Give yourself a high five once you understand what the issue is.

Clear your mind, take a shower to engage your body and relieve the
tension. Then work your way backwards from the problem. Don’t
explain what should happen. Just explain what is happening and
how we got to this point.

Of course, I’m going to quote Sherlock Holmes, but I think this is perfect (from A Study in Scarlet):

… the grand thing is to be able to reason backwards. That is a very useful accomplishment, and a very easy one, but people do not practise it much. In the every-day affairs of life it is more useful to reason forwards, and so the other comes to be neglected. There are fifty who can reason synthetically for one who can reason analytically… Let me see if I can make it clearer. Most people, if you describe a train of events to them, will tell you what the result would be. They can put those events together in their minds, and argue from them that something will come to pass. There are few people, however, who, if you told them a result, would be able to evolve from their own inner consciousness what the steps were which led up to that result. This power is what I mean when I talk of reasoning backwards, or analytically.

You might be wondering: why a shower? Don’t skip that. The body
holds emotion and if the body is busy, it means the mind can be freer
from emotional interference. Solving tough bugs is extremely
frustrating, and anger can flare up quickly; next thing you know, your buddy is telling you that you sound ignorant.

If you’re at work then a shower might be out of the question, unless you work at home, of course. If you belong to a gym that has one, go
there. If you don’t belong to a gym, you can justify the expense if only
to help you solve bugs. Showers (or baths, hot tubs) are magical
things!

The mental and emotional break you’re on will help you to understand
an essential truth: you’re being lied to.

THE DEBUGGER LIES


When you discuss debugging with anyone, they will likely ask what tool you’re using to do the debugging. What they’re really asking is if you’re using a development environment, or IDE, like JetBrains or Visual Studio, or if you’re winging it with an editor like Vim or VS Code (both of which have debugging capabilities that are pretty dang good).

Debuggers typically work the same way. Here’s what VS Code offers
for debugging JavaScript using Node.js:

I have highlighted the important stuff here, which is the variables window (top left), the breakpoint (top middle), and the call stack (bottom left).

You probably know this, but for the sake of completeness, the idea
here is that you step through your code, line by line, and watch the
variables change. The call stack grows and reminds you how you got
to where you are, and the output of your program is on the right side
in the Debug Console. This is pretty standard fare for IDE debuggers.

They work pretty well when it comes to “why is this thing set this
way”, but they can also waste your time and lie to you. That might
sound a bit harsh, but it’s absolutely true.

When you walk through a debug session, you’re working the problem forward and reinforcing your bias as a developer. “This should work!!!” is a common exclamation from people using debuggers. They can help, of course, especially in the example above where I’m trying to reassign a constant variable. The debug console will tell me this and my debug session will end right where the error happened.
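For completeness, here’s a tiny reconstruction of that constant-reassignment scenario. The function name and values are illustrative, but the error is exactly what Node (V8) throws:

```javascript
// Reassigning a const is the kind of bug a debugger catches cleanly:
// execution stops at the offending line with a TypeError.
function describeDay() {
  const today = "Christmas";
  try {
    today = "Boxing Day"; // the line a breakpoint would land on
  } catch (err) {
    return err.message; // "Assignment to constant variable."
  }
  return today;
}

console.log(describeDay());
```

A bug like this is the debugger’s happy path: the error is deterministic and tied to a single line of code, so stepping to it works every time.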

If you’re dealing with a data error, as I was, a debugger can be all but
useless. Worse than useless: it keeps you running in circles, looking for your
keys where they should be, rather than where they’re likely to be. If you’ve
convinced yourself you have a logic bug, you’ll work that debugger for
hours before you drop everything and start over, and wow that’s
frustrating.

That’s why I say debuggers lie: they keep telling you what you want
to hear.

I have a feeling that quite a few readers won’t appreciate this assertion, which is OK. Debuggers certainly do have their place. I think, however, they should be used to confirm a hunch rather than to explore a problem.

Using Debuggers to Blow Up Your Code

You can use a debugger to try to reproduce an error or bug by using a combination of adding watch expressions and messing with variables in memory.

Here’s what that looks like:



I’ve stopped on line 3, which is great, and then I manually changed the
local variable today to “Boxing Day” right in the Variables pane. I also
added my today variable to the Watch window, and you can see how
today has changed in memory, but it didn’t cause an error to happen
as I didn’t reassign today, I changed its value in memory.

Using the Watch window can be extremely valuable if you’re trying to validate your hunch. Using my Cart problem from the Testing
Strategies chapter, let’s say that I’ve run through my tests and have
used the debugger against my development database without any
success (which is true, I was pulling my hair out).

The only difference now between my local application and the one my
customer is using is the data. Once this realization hits, I can narrow
the differences quickly… or that’s how you’re supposed to do it. What
I actually ended up doing was pulling down a backup of my production
system and running it locally… but let’s pretend I didn’t do that.

The only data that should be different in this scenario is the Product.
My logs show the same SKU, price, and name… what else could be
there? Let’s see what the debugger says:

There’s obviously more information in there, but in terms of a data error, the first things I look for are status, category, and type. These things can change the way an application behaves and, as you can see, I have a type set to “digital”.

Hmm… what would happen if that changed to “physical”? I can reset my breakpoint to just above the call to addToCart, and then change the product.type to physical:

Boom! Found it:

Now obviously this isn’t the same code I was using — that’s long gone — so I had to recreate a few things to highlight this debugging technique. To be honest, I never do this. I much prefer writing more tests for a given scenario, which includes error reports.

This is a fine first step, however, especially if it keeps you from using
your production database to locate the issue! Mine was pretty small,
so it was easy, but in The Real World this is typically never an option.

Changing the variable data is a great way to tweak the state of things
and see where stuff blows up. Once you find a problem, as we did, you
can write your tests based on your findings.

THE LOGS LIE


Your logs will tell the story of your application. Never trust them.
Well, not until you’ve proven that they tell as real a story as possible.

We talked about bias in the last chapter, and this is where bias lives: in
the logs. I’ve heard developers claim “the logs never lie”, and it makes
me laugh, every time. Of course they do! You set them up!

Imagine hiring someone to take pictures of you throughout the day; your meals, exercise, what you’re working on, the emails you write, etc.

What’s that? Did you skip the gym today? Oh, and got up late, too?

Here’s a fun exercise: imagine you’re at the store and looking to buy a
new pair of pants, maybe a shirt or a sweater. You put it on in the
dressing room (seriously: try to imagine yourself doing this) and turn
to the left or the right.

Which side did you turn to? I’m going to bet it was the side of your
body that you liked best. The way your hair parts, maybe you (like me)
have a more developed right arm than left.

The point is: we do weird stuff when it comes to our reflection, and
your application logs are, indeed, a reflection of you and your team.
It’s only human to want them to look as good as possible, isn’t it?
Even when it comes to errors.

Hey! Look at me, I’m reporting this error here, which means I’m a good
developer!

The thing is: it takes a few months for your logs to start telling a
reasonable story. They’ll never be free of bias, but they will get closer
to objectivity as time goes on.

The Weekly Log Review

One way to adjust your logs once your application is up and running is
to have a weekly log review with your team. This can be tedious, but
ideally, you’re going to use a log service (which we’ll cover in a later
chapter) that will enable you to sift, sort, and query in some online
interface.

What do these logs tell you? What do they tell the team? It’s a fun
exercise to go through and compare stories and then compare them
with actual reporting data to see if things make sense.

I’ll talk more about that in the Logging chapter. For now: trusting your
logs takes time, and they are indispensable when it comes to
debugging.

They have to be telling you the right story, however.

THE DOCS LIE


No one likes writing documentation, even people hired to do that
exact thing. They’re dry, boring, formulaic, and often are irrelevant
within a few months as things change. Which they do, often.

We’ll talk about writing docs for our application in a later chapter. The
docs I’m referring to for this section are the ones for the frameworks
you’re using. If you Google “the docs are wrong” you’ll see what I
mean.

If you haven’t had a problem like this hit you yet, you will. If you’re a frontend JavaScript person, you know this all too well! These frameworks change so rapidly that the documentation would need to be written daily!

I created a fun course for the frontend framework Nuxt 3, and I used
the Vuetify component framework for the UI. Long story short: the
Vuetify documentation was ridiculously wrong in several places. It’s
an open-source project and I don’t want to give them too much of a
hard time, but all the same: I wasted hours trying to figure out how to
do the littlest things.

The documentation would tell me to use a particular boolean property to show an active state for a link and I would use that property, only to have an error thrown that says, “this property doesn’t exist”. Things like that.

My point is: if you’re relying on large frameworks, which you likely will be, the documentation can be misleading, at best, and downright false at worst. There are no bad people here, either! Writing docs is hard work, and if you’re an open-source project, writing docs is typically the last thing you have time for. Relying on volunteers is fine, but that’s where the “wrong” part typically happens.

SO HOW DO YOU DEBUG THEN?


Like I said at the beginning of the chapter: this is a skill you work on
for years, and the only tool you can rely on is behind your eyes.

Clues are everywhere, as are distractions and bias. It is essential that you cultivate the ability to clear your mind and forget the code you’ve written. If you can, ask someone else for their thoughts, then work the problem backwards from what you’re able to observe.

Debuggers work great for this part, but so does logging — even
console.log! I constantly use that, but I don’t trust it all that much.
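If you do lean on console.log, log state you can reason backwards from, not just checkpoints. A minimal helper, purely a suggestion:

```javascript
// Logging observable state (not just "got here") gives you something to
// work backwards from later. Returns the line so it's easy to test.
function traceLine(label, state) {
  return `[${new Date().toISOString()}] ${label}: ${JSON.stringify(state)}`;
}

console.log(traceLine("addToCart called", { sku: "SHIRT-1", type: "physical" }));
```

A timestamped snapshot of the actual data beats a bare “made it to line 40” when you’re retracing how the app got into a bad state.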

Debugging happens in your head, not in a tool or by reading some documentation. Those help, of course, but it’s the leap of inspiration that will solve the tough problems.
TWENTY
CONGRATS ON YOUR MVP!
YOU’RE IN THE SPOTLIGHT NOW, FRIENDO

There is nothing like the rush of deploying your application for the very first time. I don’t mean to staging or to test out your production environment — I mean deploying your Minimum Viable Product. The thing you feel OK charging money for.

Maybe your marketing department is just one person (you), or perhaps a whole team of people waiting to hit social networks to let people know your service is now in alpha… or possibly, you went straight to beta! Either way: you’re ready for customers.

Now, we wait. While we do, we need to be thinking about the next steps.

LAUNCH DAY
One of the best things about alpha and beta testing is that you get to
“harden” your production environment, to an extent. It’s tempting to
put scaling measures in place before you launch, but that amounts to
wasted effort and money, for the most part.

The reason is simple: you have no idea about your real-world traffic and usage patterns. No one wants to see their app crash,
and I’m certainly not suggesting that you YOLO your launch! At the
same time, it will crash, and you need to be prepared for that.

We’ll get into monitoring, disaster plans, and more in the next part of
the book but, for now, let’s talk about you, your team, and launch day.
It’s a big day!

It’s a Marketing Thing, Really

Launching your app means letting people know that your app is now
generally available, or “GA”. If you have a marketing team, they’re
going to be making a big deal out of this. If you don’t, then you need
to understand how this process works.

The first term you need to understand is “channel”. You reach out to
potential customers where they live, which means you need to find a
way to get your message in front of their eyes, which usually involves:

Email. That beta signup list is where you have to start when
announcing your app’s launch. These are your insiders, your
favs, your BFFs — let them know first! If you charge money,
offer a discount if they sign up and make sure they know it’s
only for them.
YouTube. Post some videos that are tangential to what your
app does, and make sure they’re full of value and not some
masqueraded advertising. For instance: if you sell videos or
books, do a video that goes into detail on one of the topics. Or
do a video on how to write a book. Just make sure you offer
value!
Ads. Marketing is all about figuring out where your potential
customers are, and then letting them know you exist. I know, I
know! Programmers think marketing is slimy — but that’s not
a good take. It’s the slimy marketers that are slimy! Scraping
emails from GitHub, crapping on competitors in public,
dropping in to social media threads to shill. That’s clumsy and dumb… so don’t do that, I suppose. Ads are a great way to let people know what you offer, and you can target places your customers are, like LinkedIn, StackOverflow, etc.
Social. This is the least effective way to let people know about
you, as people have trained their brains to skim over ads or
market-y posts. It can work, however, if your audience is large
enough.
Influencers. Yeah, I know. But I’ll be honest: you wouldn’t be
reading this book if I hadn’t taken the time to work with some
influencers. If they like what you do, offer them a cut. You
would be amazed at how much traffic a single post can drive from someone with a few hundred thousand followers!

Marketing is a lot of work, and this is all I’m going to say about it
here. If you’re on your own, you’ll need to study up on this stuff. If
you’re not, and you have a marketing team, lucky you!

The Soft Launch

When James Avery and I launched Tekpub back in 2009, we posted about it on Twitter, I wrote about it on my blog, and James reached out to blog aggregator services (DotNetKicks in particular) to let them know what we were up to. We didn’t take out any ads — we were hoping to “soft launch” and catch any weird performance issues before they happened.

We had 3000 signups within 24 hours, which might not seem like a
lot, but for us, it was far more than we expected!

When you “soft launch” you let a select group know that you’re live,
and you hope for “organic” growth via word of mouth. Your email list
could be considered a soft launch.

Tekpub stayed up just fine and our traffic spiked as our tweets were
retweeted, and we went semi-viral. I remember discussing financials
with James, and we were absolutely blown away. We hadn’t planned
on reaching that number for another 6 months!

I’m talking about MRR, or “monthly recurring revenue”, which is the lifeblood of a subscription service like ours. We hit $21,000 in the first month, which was crazy.

The Hard Launch

As you can probably guess, a Hard Launch is something that Apple likes to do, using ads and social media posts to draw attention and create excitement. It works. Lines around the block on big launch days are still happening for iPhones, which make the news, of course, and that causes even more excitement and interest.

I suppose if I had invented the iPhone, I might do the same thing. I’m
not a fan of big productions like that because it makes failing appear
that much more dramatic.

Cyberpunk 2077 needs no explanation. The worst launch of any video game in history, the game was unplayable for many and had some of
the most hilarious bugs ever encountered. The only thing that saved it
was the company’s reputation. CD Projekt Red is beloved for many
games, including the Witcher franchise, so devoted fans gave them a
chance to address the problems and they did. It’s safe to say, however,
the game suffered because of the launch.

A launch like that can destroy your company and, for me, it’s just not
worth the risk.

Do It On a Tuesday

I worked with a TechCrunch writer (Kara Swisher) when we announced the sale of Tekpub to Pluralsight. Everything had to be timed just right; otherwise we would “miss” the cycle.
I swear, the media is a weird gig.

Long story short: that’s when I found out that Tuesdays are the best
days to let people know a thing. 10am Pacific time, to be precise,
which of course depends on where your audience is, but if it’s a tech
audience, it’s 10am on Tuesday.

I don’t know if it’s still the case, but for some reason when you
announce on Tuesday at 10am (making sure you’re not colliding with
some other event like a conference or Apple announcement), people
will be far more likely to read what you have to say. Email, social,
blog, whatever. Even if you’re going for a soft launch — you want that
announcement to be read.

Now you get to hold your breath and…

Buckle Up

I’ve worked in every type of software company in quite a few different roles. I’ve been CTO 3 times (smaller companies), tech lead more times than I can recall, DBA three times, and regular developer more times than I can count. Every single time I’ve been involved in a launch, for myself or a client, there is some level of scrambling that happens.

It’s always stressful, but just be ready to fix, quickly, anything that
comes up. A crash shouldn’t happen — it’s usually some type of
confusion or a setting you forgot to change. Just make sure your build
is set up so deployment takes the least amount of time possible, and
that you have tests in place so you don’t push more bugs.

It’s nerve-wracking, but with some good fortune, you should be
through the process in 24 hours, feeling good about yourself and your
future.

THINKING ABOUT REPORTING AND ANALYTICS


Remember a long, long time ago, right around the first chapter, where
I mentioned that you’re going to come up against your psychology,
often? This is one of those times, and you’ll have to bear with me.

You’ve shipped your MVP, you have tests in place. You’ve chosen a
solid strategy for scaling your application in terms of both architecture
and infrastructure, and your GitHub repo looks clean and solid.

Now we have to focus on you. That last sentence might make you feel:

Complete cringe. You don’t like egotistical leads that take credit for other people’s work, and you’ll be damned if you ever do the same. This is a team effort, and everyone shares in the glory!
You aren’t concerned about recognition. You know you’ve done good work, that’s what you’re paid to do, why shout about it? The only thing that calling attention to yourself does is make you a target, and you’re fine smiling and moving on with the work.
Of course! This project should get you promoted. That’s why you bought this book, isn’t it? You want to move up, so you’ve educated yourself, applied what you’ve learned, and delivered. Isn’t that the way this works?

I feel each of these, simultaneously. I have delivered a lot in my career, and it’s always nice when your work is recognized… but I hate being the center of attention. Recognition is not why I do these things, but yeah, this is a career that I want to grow so promotions are part of the deal.

I think most people cycle between these reactions when they do good
work. It’s human, I suppose, but I think it’s also good to check
yourself. There are plenty of toxic egos in the programming world, and
not one of these people thinks of themselves as toxic. Or egotistical.

There’s an easy check on this: who are you thinking about most often when
it comes to recognition? If it’s you, you’re being toxic. That goes for a
complete cringe reaction too. Passive-aggressive people define
themselves by focusing on negative things they see in others. I’m not
like that person and never will be, is a common thought with these toxic
people.

You did good work, it’s OK to own that. Take a second and write down
how you feel about being recognized, or close your eyes and consider
what you would write. We’ll come back to this at the end of the
chapter.

Reporting Starts Yesterday

I wasn’t sure where to stick this section. If I added it too early, you
would probably forget about it because programmers don’t like to
think about reporting. Marketing people love it, so do sales and
managers. Coders… not so much. That’s a general statement, but I
find it to be accurate.

Think of the project you’re working on now. Have you thought about
how you’re going to report viability and value? I love reports and
analytics, it’s how I got into this business! Yet even with my own side
business, reports are something that I consider once a quarter, which
is weird.

Why is it weird? Because reports are how you will be evaluated as the
project lead. Most programmers laugh at this statement, and I can see
why: their job is to deliver software, not make sales.

Is that true, though?

As unfair as it seems, if the app “isn’t performing”, you’re going to share in the blame. Marketers are superb at deflecting blame to programmers and, once again, I know I’m speaking in general terms, but damn if I haven’t lived this 1000 times over.

But let’s be positive, shall we?

You don’t need to worry about the actual reports, just yet, but you do
need to think about the data that goes into those reports. Specifically:

Anything that shows a result. Marketers call these “conversions”: when an anonymous user converts to a lead, or a customer, or partner, etc. Your app is there to convert people, the trick is to know what those conversions are before you go 1.0! In the analytics world, results are typically called “facts”. More on that in a second.
Anything that shows a path to a result. When a user converts, how did they get there? Email campaign? Blog post? What application events did they go through before they converted?
Your dimensions. This is a tough one, but understanding the
role of dimensions in analytics is a big dang deal. You will
have facts in your database, and dimensions are how you
aggregate those facts. A sale is a fact, time is a dimension, as
is referral source, product category, and demographics.

Reporting and analytics veers heavily into the sales and marketing
realm, and you need to be OK with that. As I mentioned, you will get
thrown under the bus if you can’t prove how valuable your application
is or if others can’t prove it with the data you’re collecting.

Let’s flip this into the practical, shall we? I’ll use the e-commerce
example once again, as I find this scenario relates to just about
everything. Translate as you need.

As the lead, you want to be certain you’re collecting and storing all the
data you need to answer as many questions as possible, without
violating people’s privacy. Let’s start with the results, identifying what
they are:

Sales. This one is obvious, I hope.


Email list signups. At some point, you will likely be asked to
support a marketing campaign that involves email. Maybe your
beta campaign pushed you into this process? Either way: when
people sign up, your list service should be able to ping you to
let you know. Save that data because it shows interest from
potential customers.
Repeat sales. This one isn’t obvious, but is likely the most
valuable thing you’ll ever be able to tell a marketing person.
Find out if they have a rule (yes, they will) for when a person
goes from simple Customer to Valued Customer. This is
usually 3 repeat sales.

Note: the people who will keep your business going are the Valued Customers.
The people who love your service and will come back for anything you sell.

Just the fact that you’re asking these questions is going to make
people happy, especially your marketing team. If you don’t have one,
or you are the marketing team, you’ll thank yourself later, I promise.

Now that we have our results, let’s consider our dimensions:

Product. This is the one that managers typically focus on: what’s selling the most? Make sure you have rollups in place, like categories, store location (if applicable), payment processor, and more.
Demographic. This is where privacy issues come in, so be
careful. I don’t like to plan on disaster, but I do think, “what
information will be stolen during a breach?” People don’t
like it when you store more than you need, so typically
keeping city, state, country is fine. Note: IP Addresses are
considered personal information, so do not store these unless
you tell your users in a privacy statement, and you better
have a good reason for doing so, apart from “marketing
wanted it”.
Time. You obviously want to timestamp your sales and given
that you have some experience, you will likely store it in UTC,
which is also fine. Until that data is exported. I explain more
here, but you might also want to consider storing the
timezone your database was in because it’s often not the same
as your company location’s time zone.
Campaign or Referral. Your customers found you somehow,
storing referral information is paramount. You can store this
in the session, and make sure you look for any utm_
parameters in the query string, as this is what your marketers
will plant in their referral links from blog posts, email, and
so on.
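Capturing those utm_ parameters takes only a few lines. A sketch, assuming the common utm_ naming convention (the exact parameter names are whatever your marketers use):

```javascript
// Pull referral data out of a landing URL so it can be stored with
// the session. Only utm_-prefixed parameters are kept.
function extractReferral(url) {
  const referral = {};
  for (const [key, value] of new URL(url).searchParams) {
    if (key.startsWith("utm_")) referral[key] = value;
  }
  return referral;
}

// Example: a link your marketing team planted in a newsletter
const ref = extractReferral(
  "https://example.com/pricing?utm_source=newsletter&utm_campaign=launch"
);
console.log(ref); // → { utm_source: 'newsletter', utm_campaign: 'launch' }
```

Stash the result in the session on first hit, and write it alongside the sale when a conversion happens — that’s the join your marketing team will ask for.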

This is a great start. We’ll talk more about reporting in a later chapter, but just know that the data your application creates will almost certainly end up here (or something just like it):

Ah Excel, how I love/hate thee! It’s funny, you can provide all the
graphs, tables, and rollups you can think of to your web app, but it
will never be enough! You will soon be asked to provide CSV download
support too, so the Excel jockeys at your company can do their thing.

We can be snarky about this, or we can recognize that “doing their thing” means using the data you provided, making you look like a…

SUPERSTAR!
You’re there to ship, yes, but shipping doesn’t simply mean putting
your code on a server somewhere. It means putting an application in
front of users, which offers value in return for revenue. That is
your job!

And hey, you’re kicking butt! Sort of — the fun has just begun. Now we need to make sure our application stays up and can change as needed.

That comes next.


PART THREE
DELIVERY
TWENTY-ONE
WHAT IT MEANS TO SHIP
WE LOVE SHIPPING, BUT WE DON’T LOVE
THE POLITICS THAT COME WITH IT

I cannot overstate the importance of shipping an application, especially when you launch 1.0. Yes, it feels good for you and your team, but the most important thing for you is that you’ve just strapped a rocket to your back.

Sound hyperbolic? It really isn’t. Some people are excellent at parlaying delivery into a raise or a promotion. Other people really suck at it, and stay where they are, happy not to make a fuss.

I’m not going to assume anything about your motives reading this
book, but if you’ve made it this far, you must at least understand what
just happened.

By shipping your MVP, you:

Increased your experience dramatically.
Demonstrated that you can execute and deliver when asked.
Motivated a team of people to do a thing together. If there was no team, it’s even harder to deliver so feel good about that too.
Created good will with your boss, client, and stakeholders.
Created enemies out of random strangers who now see you as a competitor.

Yes, politics. There is no getting around the amount of influence you gain by shipping something. Some people like to use the word “power” here, which is fine, but it has a negative connotation… like we’re all fighting each other for the top position.

Some people do see it that way, however, and you need to know how
their brains work. I talked about this briefly a few chapters back, when
discussing sociopaths, but do keep in mind: they’re everywhere. When
you ship, you become a target.

This next section might be a little cringy for some of you, but if you
choose to raise your profile and influence by shipping software, you’re
going to have to understand human motivation to a deeper level.

REVISITING THE 48 LAWS OF POWER


Back in the soft skills chapter, we discussed Robert Greene’s book The 48 Laws of Power. I’m certain it made a few readers extremely uncomfortable, but there is no escaping the human drive to increase one’s influence. It’s survival, and it’s built into us.

To that end, let’s quickly review them here:

Never outshine the master. Make sure the work you do makes your boss look good.
Always say less than necessary. Or: don’t say something
you’ll regret, including making promises or committing to
something you’re unsure of.
Guard your reputation with your life.
Attention is a good thing. Eyes on you, even for the wrong reasons, are better than being ignored. This is less about being in the spotlight, more about preventing people from forgetting about you and your team’s work.
Win through action, never argument or words.
Appeal to people’s self-interest; never ask for mercy or a favor. In a more positive way: if you need help, the best way to get it is to align what you do with someone else’s interests. Never owe a favor.
Timing is everything. This takes experience, but if you
understand when to have a conversation or do a thing, it can
change everything for your project.
Mediocrity kills. If you give a demo, make it a spectacle and
go well beyond what people expect. Steve Jobs’ “one more
thing” is an example of this. If someone on your team settles
for less than outstanding, they need to go.

These are ones I like because they’re mostly honest and make good
sense. There are some negative ones, however, and it’s imperative that
you learn them so you can spot them. They are:

Conceal your intentions. Always keep people guessing about what you may, or may not, really be up to. If people think you have a plan, they’ll want in.
Get others to do the work, you take the credit. Eww. All too
common, however.
Avoid unhappy or unlucky people. Bad apples on a team
destroy it for a reason: their attitude. Bad apples need to go;
don’t feel bad showing them the door. You’re actually doing
them a favor.
Make people depend on you and keep them there.
Crush your enemy completely. A wounded foe will come back
stronger.
Keep your enemies close.
Never commit. Another ick from me, but once again, all too
common in the working world.

Let your enemy think you’ve been beaten, then destroy them
completely. Nothing is worse in sports than thinking you’ve
won handily at the halfway point. That’s when you’re at your
weakest.
Hide your mistakes, or blame someone else, but don’t make
it obvious. Gross.
Chaos is your friend. Also gross.
Make your wins look easy.

Yep, still makes me cringe. Remember: you can (and should) use this
as armor. Do the things that align with your moral compass, but
always know that some other folks don’t have one.

You Can Be a Good, Influential Person, But It Takes Effort

So much has been written about the “need” to be “evil” when it comes
to being in a position of authority. Those are simplistic terms for an
incredibly nuanced topic, but if you’re going to do well as your career
takes off, whether you want to lead a team or not, you will have to
shake loose a few things inside your mind.

If you wish to lead a team, for example, helping and supporting them
is your primary goal, but right next to that is keeping your project
moving forward and preventing sabotage from other teams. Another
way to put that last point is you’ll have to carefully manage interest
and understand whether collaboration will help your project or not.

That’s what happens when you ship. You hear a ping in Slack or
Teams and then: I think our projects are very compatible. Let’s pick a time and
hop on a call and see what we can do together.

This can work to your advantage if you understand their intentions, but more often than not, it’s someone trying to keep their project momentum going by drafting a bit of yours.

Let’s spin that around now. You might find that someone else in your company is running a project that has a sizable amount of overlap with yours. They’ve shipped too, and done a good job. You realize that it’s wasteful to overlap like this, so you ping them and ask if they have time for a call.

Politics are incredibly subtle, and I want, terribly, to tell you that you
can, indeed, spin things positively so you don’t think everyone is out
to get you. You can also lean on your “evil” side, if you will, and be
suspicious of people’s motives and, if required for your team and
project, take action that might not feel “good”.

If You’re Going To Do This, I’ll Need You To Do Something For Me

That was a call I had a few years back when I was running a very
visible project for a client. I was given full control of the team and the
product, yet here I was on a call with someone whom I just met, and
they were making demands. People do this, and it’s weird.

The problem was that I was not a full-time employee; I was a contractor brought in to get a project off the ground. I was told repeatedly that it didn’t matter — as far as the company was concerned, I was just another employee.

And yet. If there’s a way that someone can flex on you, they will.
Usually, it’s because of a higher level or longer tenure, anything that
puts them “above” you.

It was interesting to be in that meeting, as they used what I like to call a “mafia tactic”: they ask you to do something in exchange for some kind of benefit, then follow that up with a threat. You’ve probably heard the famous line from The Godfather: “I’m going to make him an offer he can’t refuse”.

I was told, in short, that I had to use a product this person supported
because the “entire company was going to be moving to it” and if I
would be one of the first projects to adopt it, my boss and everyone
else would be pleased.

Oh, and Rob, I’m going to need to review your disaster plan as
that’s my role here in Division X, and we have to ensure that your
project will meet company standards.

I still can’t quite believe it when these things happen, and I’m sure
you’re wondering if, in fact, this really happened. It’s just so clumsily
aggressive, don’t you think? And yes, it did happen. That’s the way
these things always seem to happen: right there in the open.

There are a few ways to handle this, politically speaking. But take a
second and think what your gut reaction would be if this happened to
you. Remember: you’ve been given full control of your team and
project.

Now think about how you would handle this situation using the laws
above, without worrying about losing your soul. This is just a thought
exercise, so pretend you’re playing a game.

OK, here are some choices that I came up with when this happened:

Be a “good” person and don’t commit to anything without talking to your manager. Let them know the details, and run the risk of them looking back at you, saying “this is entirely your decision and why I put you in this role.”
Be a “strategic” person and don’t commit to anything, but
spring the trap and cause a little chaos, making other people
react to your inaction.
Be neutral and ignore them entirely, calling their bluff and
forcing their hand. Thank them for the call and let them know
you have another meeting, and then ghost them.

There are, of course, a few variations, and I’m sure some of you are
extremely clever and could probably navigate this situation with ease.

So what did I do? I used two of the “evil” rules above: never commit and cause chaos. I also added a dash of conceal your intentions with fake surrender. Wow, writing it here makes me feel gross, but then consider that I’m trying to fend off someone who could very well cause problems for the project. This would upend our work and cause us to rip out entire chunks of functionality for their benefit.

I ended the call as quickly as I could using some vague excuse and let
them know I would follow up in an email as soon as I did a little
more investigation on my own. Seriously: it was extremely abrupt, as
I didn’t want them to try to leverage the disaster plan threat, as I
didn’t actually have a disaster plan for an application with 4 total
users.

I then pinged my client and let them know I had interest from that team and that, at some point, I might need their help clearing a few things up. I waited until the end of the week (this happened on a Tuesday) and wrote to them on Friday, letting them know I was still trying to assess the impact of replacing our stuff with theirs.

And then I ignored them and did nothing. Which didn’t work.

My client pinged me a week or so later, letting me know that she had the same conversation with the same person I did. They went right over my head… which is a power move and, honestly, that’s OK. They made their move and if they’re going to force the situation, then they can answer for the delay.

I worked the details with my client, and she agreed to grant extra time
and budget to do this. It was, as it turns out, important for the
company. This allowed me to instantly pivot and say whatever you need.
I worked with the clumsy, aggressive person over the following
months and, unfortunately, they had enough influence to eventually
take the project over and end my contract.

A negative view of this would be that I flat-out lost this battle and, if I were more clever, I could have rescued my project and my job. A more positive view is that the company saved money by merging the projects, going in a direction that was the plan all along. A very natural ending, and everyone is happy.

I’ll go with the last one, I guess. That’s the essential thing: my
reputation remained intact.

KEEPING TRACK
I also mentioned at the beginning of this book that you should get in
the habit of keeping a journal. Personal journals are great for catching
yourself turning into an asshole (power corrupts, etc.) but you should
also be keeping a professional one as well.

I cannot emphasize enough how important a journal is for protecting yourself from people coming after you. If your experience is in a smaller company, you might think I’m overreacting or that I’ve worked in some crappy places. Let me tell you: nothing is more vicious than a new hire trying to make a name for themselves, wherever you are.

By journaling what was said when, and in what context, you can refer
to it when you’re being told “I never said that” or “I don’t think we
had that conversation”. Who knows, they might be right! Either way:
my journals have kept me from thinking I was losing my mind due to
some not-so-nice behavior from other people.

The entries don’t need to be extensive, just a summary of what was said, when, and the context. Tagging is always useful, and if you can snap a photo of something, that’s even better (keep the receipts).

Having a journal is like having a dash cam: looks like overkill until the
day you need it, then it’s worth 100 times the effort and cost.

I had a version of this happen to me back in 2003 when I was working at an analytical startup. I didn’t know the “rules of power” at the time, but I did know that I didn’t trust my boss, at all. So I journaled most things, like this:

What you journal is up to you, but “notable” things, like your boss asking you to go fishing with them in Montana, are typically what I shoot for.

And guess what happened a month later…



Keep the receipts, friends. When you catch someone, which you will,
always give them an out (don’t outshine the boss). We had a laugh over
this and, of course, the conversation turned to assurances I needed to
give so we could meet the deadline.

This is how sociopaths work, by the way: ensuring that you owe them
something. Back then, I just played the game and reassured my boss
that we’re on target and not to worry. Today, I’m not sure.

The rules of power give me some options here:

Surrender. Give in and then show the conversation to your boss’s boss confidentially. This will only work if you have multiple instances of crappy behavior.
Keep your boundaries. Work/life balance is important, and
my boss was choosing life while choosing work for me. You
can just state flatly what your boundary is without fighting,
and let them know you track everything in your journal.
Go on the offensive. Sometimes drawing a sociopath out can work in your favor, especially if they’re denied access to you. You have to have the receipts, however, because if they make a big deal out of it, you’ll have to be willing to crush them completely (see the rules).

Keeping boundaries is a solid way to go. Pick your battles and all.

Your Brag Book

I just learned this term a month ago, and it seems like such an obvious
thing! A Brag Book is where you keep your “wins”, which can be:

Nice things people say to you.


Nice things people say about something you made (a
testimonial if you will).
Awards you’ve received.
Numbers that show you’re outstanding (“I just passed 10,000
subs on YouTube!”).

Dates you’ve shipped something and how it went.

These are only starting points — the book is yours, so add whatever
you think defines “good work”. It’s also not bragging if it’s for you, by
you, so brag away.

No one likes the idea of bragging until they need to bring the receipts
into a discussion, which you will. Someday, you’ll have conversations
where you’re asked:

Why your rate is so high (if you do contract work).


How you’ve contributed to the quarterly bottom line. I hate
these.
To do your manager’s job and fill out a form which applies to
the group OKR (Objectives and Key Results).
To document your contributions as part of your employee
roadmap. At Microsoft, we have things called “Connects”,
which are structured ways we show our impact. It’s important
to know what to say there.

Your professional life is not your personal life, and your reputation is
everything. As the laws state: guard your reputation with your life. On a
more positive note, build your reputation at every opportunity. If we were
talking about being social and dealing with friends, yeah, this would
be weird. But we’re not. This is work, and you have to be rigorous
with it!

ALWAYS GET IT IN WRITING


I really do hope you read The 48 Laws of Power so you can see the games
going on around you. People seem so much less deceitful and slimy
when you recognize what’s going on.

The phrase “let’s hop on a call” is one such problematic area. Things said via voice can so easily be manipulated and misconstrued. In fact, I’ll go so far as to say they will be manipulated, especially if you keep racking up the wins. Some people hate a winner and will just come after you. It’s why most American Football lovers hate the New England Patriots and are starting to loathe the Kansas City Chiefs.

We discussed this in the journaling chapter, but it’s worth repeating here as we reflect on our victory of shipping our MVP. Don’t rest on your win — make sure you’re geared up for the next effort by making sure your boss is ready to help you when needed.

MANAGING UP
Managing your manager is always tricky, and it takes experience to get
it right. If they’re good, they will hopefully follow the same points laid
out in this book in that they:

See their job as a support role, helping their team do their work as efficiently as possible.
Set the tone and the direction for the team, and make sure
everyone knows what they’re doing.
Make sure you get the credit you deserve.

Bosses require reminding, from time to time, just like you do. These
conversations can be difficult, but I’ve found that being direct while
also personal can really help.

Avoiding Traps, Intended or Not

I sat in a group meeting a few years ago and the group manager was
asking us what our core priorities were and how they were helping the
group. This person had a lot of experience, but didn’t have too many
management roles under their belt at the time.

Case in point: when the meeting started, they summarized the goal
(describe your core priorities and how they help) and then said, “let’s
start with you, Sue” (pseudonym).

No one wants to be in a meeting, and learning how to run one effectively is difficult. Asking if anyone has any questions right at the start is a good idea because it could change everything. I have a coworker at Microsoft who is notorious for immediately asking, “can this be an email?” He’s usually right. Others might suggest postponing the meeting due to an absence or some other event.

Note: If you’re the one running the meeting, try this fun trick: send out a quick
message (Slack, Teams, whatever) and ask, “does everyone know what they’re
working on? If so, reply with a quick summary, and we can skip the meeting”.
You’ll see applause emojis from everywhere.

When my manager forgot to cue for questions, I raised my hand quickly and asked if I could get clear on something. I was about to challenge my boss directly, which goes against the rule “never outshine the master”! Worded correctly, however, you can save their bacon:

I’m wondering if we can have a bit more detail on the group priorities before we detail our own. The last thing I want to do is create overlap between the group’s goals and mine, or to do something entirely different. As of today, I’m still a bit hazy on a few points.

The thing is, there were no group priorities. We, as a group, had a loose
direction that my manager’s manager had put in place and my
manager decided to hand that down to us.

That’s not our job. Setting direction is your manager’s job, and often
you’ll find yourself in a situation where that’s delegated to you. Never
accept this because, once again citing the rules of power, you can
easily be blamed if the priorities of the group are off.

I put my boss in a corner without an out, which is never a good idea, especially in a group setting. To help them shine once again, I quickly added:

Sorry if you mentioned this, and I missed it — is that our goal today? Helping you detail the group’s priorities so you can report that to XXX?

He took the out, thankfully.

Again: I don’t think my boss intentionally tried to screw us over. Good people can do bad things in the work world and think they’re doing good things. I don’t know the psychology of it, but I do know I need to be on guard for the silly tricks people don’t even know they’re playing!

IF YOU DON’T, SOMEONE ELSE WILL


This chapter has been devoted to soft skills, once again, and I do hope
you revisit it from time to time. I know how the laws of power sound,
and when I read the book I promised myself I would never stoop that
low for work. And then I realized just how important they are.

Good and evil are vague and shifting concepts; if you’re a person who sees the world in those rigid, contrasting terms, you might want to check yourself. This is called “Black and White thinking”, and it’s an extreme way of looking at the world that ends in hatred, prejudice, and war. That’s a bold statement, but when has “us vs. them” ever worked out for anyone?

You will need to learn how to manipulate people and circumstances to ship
software. This doesn’t mean being “evil”, it means being diplomatic,
fending off aggressive folks, assessing risks, managing your boss or
client, and being professional.

We all manage what we think vs. what we say, every single day. This is called society, but you could also see it as manipulation if you chose. Your clothes, your hair, your smile, and your choice of words are all designed, by you, to give the best impression possible — or at least the impression you want others to have of you.

Always remember, however: if you’re not learning these skills and putting yourself in leadership roles, someone else will. If you think you’re a good person, you owe it to the world to block the assholes.
TWENTY-TWO
THE BUILD
FORMALIZING A CRITICAL PROCESS

Let’s jump back into the world of programming now because we need to set up the all-important Build. You probably know what “the build” is and what it means, but just in case you’re new around here: the build is the process that goes off when you build the software for your project. This is typically done in steps, including:

Linting, which is the process of running a code checker that looks for style or compliance issues. The name comes from removing lint from a shirt — you’re cleaning things up.
Compiling (if your project uses a language that needs
compilation).
Running tests.
Notifying of success, warnings, or failures.

Builds can be as simple as running make on your local machine (or clicking “Build” in your IDE of choice) or as complex as integrating multiple services that deploy changes to your Kubernetes cluster.
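On the simple end, the steps listed above can be sketched as a small Makefile. This is my own sketch, not anything from a real project; the npm and npx commands are hypothetical stand-ins for whatever lint, compile, and test tools your project actually uses:

```make
# A minimal local build pipeline: lint, compile, test, notify.
# The npm/npx commands below are placeholders for your own tools.
.PHONY: all lint build test

lint:
	npx eslint .

build: lint
	npm run build

test: build
	npm test

all: test
	@echo "Build OK"
```

Running `make all` walks the chain in order and stops at the first failure, which is exactly the behavior you want from a build.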

Simple is fast, complex is safe. But what do I mean by “safe”?



BREAKING THE BUILD


If you’ve been coding for a while, you’ve likely heard this term. When
you “break the build” you commit code to a repository that then runs
the build process for you. Linting, compiling, testing, in that order,
and something broke. The question is: why?

Your build tool should tell you that. If you use GitHub, you’ll see pretty accurate results, depending on how you set things up, along with guidance on how to fix things before you submit a PR. If you’re doing trunk-based development, the build is there to keep things safe so you don’t push bugs into production, or worse: send bugs to your coworkers so they have to stop working until you fix your mess!

Breaking the build happens, but you should expect it to happen and
not freak out when it does. It’s there for a reason, so we need to be
sure that we set things up to help those who need it.

ARCHITECTURE VS. BUILD GYMNASTICS


When you set up a build, you typically create a set of Docker
containers that do something with your codebase. Linting and
compiling are straightforward (sort of, we’ll get into the latter in a
minute), but testing is another story.

Yes, you can test on your dev box just fine, but what happens when
you want to run those tests on another machine, in a container? Using
a service like GitHub, that’s precisely what you’re going to be doing.
This is where things might get interesting for you!

For example: I have no problem writing tests that hit a database, but I
know plenty of people who avoid it at all costs, so they will create test
doubles and use software patterns (Dependency Inversion/Injection,
e.g.) to avoid touching the database. They do this to isolate the tests
as completely as possible in an environment that’s easily reproducible,
such as a test container running on a build server.

I covered these software patterns in The Imposter’s Handbook, but I’ll touch on them lightly here.

Mocking Your Data Access

I’ll use C# for this example, as the patterns I’m about to show are very
common with that language. If you’re a .NET person, I’ll discuss
patterns with Entity Framework later on.

So, this might be a very typical setup using C# and .NET (TypeScript
as well):

We have our class, a repository, and an interface for that repository. This might seem verbose if you’re not a .NET person, but this type of setup has “loosened” up our code quite a bit in that we no longer have to rely completely on CustomerRepository; we can use ICustomerRepository instead, which “hides” the implementation details, allowing us to create test doubles easily.

For instance: if I’m writing tests for my CustomerService, I can create a quick mock for the given test:

This works because my CustomerService doesn’t create the repository instance directly; I inject it:

This is decoupling in action, where you remove a dependency from a given class by injecting that dependency instead of creating it.
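The C# listings aren’t reproduced here, but the shape of the pattern carries over to plain JavaScript. This is my own sketch, not the book’s listing; the class and method names simply mirror the C# example:

```javascript
// The service never constructs its repository -- it receives one
// through the constructor, so anything with a findById() will do.
class CustomerService {
  constructor(repository) {
    this.repository = repository;
  }

  async getCustomerName(id) {
    const customer = await this.repository.findById(id);
    return customer.name;
  }
}

// In a test, we inject a double instead of a real repository,
// so nothing ever touches the database.
const fakeRepository = {
  findById: async (id) => ({ id, name: "Test Customer" }),
};

const service = new CustomerService(fakeRepository);
service.getCustomerName(42).then((name) => console.log(name)); // prints "Test Customer"
```

In C#, the ICustomerRepository interface enforces the contract that duck typing gives us for free here: anything satisfying the contract can be swapped in, including a test double.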

That’s a good first step, but our class is still dependent on a specific
repository interface, the ICustomerRepository. This might be
acceptable, or we might go one step further and create a base class or
a more generalized interface such as IQuery or something.

I’ll leave this up to you. It’s easy to get lost in the architectural forest
and for some, it’s fun to see just how loose you can make your class
associations. For me, I like to use the phrase “it’s not a problem until
it’s a problem”, which is something I learned doing Ruby. A .NET
person might tell me that’s a great way to pile on technical debt, to
which I would counter that it’s also a great way to ship software.

There’s no answer to this debate, so I’m going to move on.

Letting SQLite Handle It

One of my favorite testing tools is SQLite. It’s a full-blown, ANSI-compliant relational database that is embedded, which means there is no central service running in the background; your application reads and writes a database file directly. I will assume you’ve heard of it before.

The reason I like it for testing is that you can use it to emulate your database if you’re using some type of object-relational mapper, or ORM. Just about every ORM out there will support SQLite, which means you can run your tests with it and not have to worry about hitting your database.

That seems contradictory, doesn’t it? SQLite is a database, and we’re hitting it, aren’t we? Well, yeah, I suppose so, but if we set things up so that SQLite runs only in-memory, we’re not actually storing anything and our tests run lightning fast!

Let’s switch to Node.js for this one. I’m using the Sequelize ORM,
which I really like, with the following schema definition:

Hopefully, this is straightforward: I’m declaring a class and a table at the same time, which Sequelize will build for me in the background. There’s an email field with a unique constraint, and a name field that’s nullable.

I can use this class with most database platforms; my app will use PostgreSQL for development and production, but SQLite for testing:

This connection string tells SQLite that I don’t want anything written to disk, which will speed things up tremendously and also make it easier to clean up after my tests.

I’m not a huge ORM fan because they tend to be difficult to debug,
and also (at times) create horrendous SQL that causes very slow
queries. That said, this use case is wonderful.

Entity Framework also supports SQLite in-memory, as does ActiveRecord and most other ORMs out there.

Just Use Containers

I don’t do this myself, but I have quite a few friends who work on
larger projects that use this technique regularly: just put everything into
test containers.

One of the main benefits of Docker is that your development environments can match across your team, which will also match production. In a team setting, this is wonderful, but if you work on your own or the team’s work is siloed well enough, Docker shenanigans can slow you down.

The idea is straightforward: every time you run a test, you start a
dedicated set of Docker containers using something like Docker
Compose. It spins up your code and all the services your application
needs (PostgreSQL, Redis, etc.). Once the tests are done, the
containers are destroyed.

This might sound slow, but Docker is ridiculously fast and can start up
incredibly quickly. If you’re a TDD person, it won’t be quick enough,
which I agree with (friction is the enemy when you’re doing TDD). If
you embrace testing, but do it on a slower cadence, this could work
out for you really well.

We discussed creating Docker containers and orchestrating them with Docker Compose in a previous chapter, and the idea is the same; however, with a test setup, you’ll want to do things a bit differently, namely:

Seeding your database with test data.


Setting your ENV variables to a specific configuration.
Using a specific branch of your repository.
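Sketching that disposable test environment as a Docker Compose file might look like the following; every service name, image tag, and variable here is hypothetical, not from the book:

```yaml
# docker-compose.test.yml -- spun up per test run, destroyed afterward.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test

  app:
    build: .
    depends_on:
      - db
    environment:
      NODE_ENV: test
      DATABASE_URL: postgres://postgres:test@db:5432/postgres
    command: npm test

# Run the suite, then tear everything down:
#   docker compose -f docker-compose.test.yml up --abort-on-container-exit
#   docker compose -f docker-compose.test.yml down --volumes
```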

Ideally, you’ll know how to do each of these, given what we covered in the Docker chapters. Except for the seed data — we should briefly talk about that.

Seeding Data For Your Test Database

Instead of using fixtures and doubles, you might want to just have the data exist in your database before you run your tests. Since we’re using Docker, we can do whatever we want because we know the environment will be set just so, for every test suite we want to create containers for.

Your container startup script can execute the SQL file directly, but
with MySQL and PostgreSQL, we have a fun shortcut! You can copy an
initialization file (plain SQL) to a special directory, which will be run
right when the container starts for the first time. Here’s what that
looks like:
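The original screenshot isn’t reproduced here; the idea fits in a short Dockerfile (the image tag and file path are my own):

```dockerfile
FROM postgres:16

# Any .sql file in this directory runs automatically the first time
# the container starts, seeding the database before the tests run.
COPY ./db/seed.sql /docker-entrypoint-initdb.d/seed.sql
```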

The important part is the COPY statement, which pops my initialization file into /docker-entrypoint-initdb.d/seed.sql and from there, your data will be ready for your tests.

For SQLite, you can just copy in the binary database file directly, and
you’re good to go. This (oddly) works for SQL Server as well, but
you’re probably better off initializing your data using a RUN
command.

CREATING A BUILD WITH GITHUB


There are numerous build tools out there, but I find the most
common scenario is using GitHub Actions. These are jobs that run on
a given event, which is typically when a branch of your repository
receives a new commit. Most builds work on this type of idea: you
push something to source control and a build process goes off.

If you don’t use GitHub Actions, you should be able to translate what
you read here. Most of the packages and tools are the same, and use
containers under the hood to run things in a given environment. The
“Just Use Containers” section above is, essentially, how builds work at
GitHub and other services.

The steps for creating a build on GitHub are, thankfully, ridiculously simple. At least in the beginning.

Continuous Integration and Delivery

I have a small project I created many years ago called node-pg-start, which is a Node.js starter site (database, authentication, theme, etc.) for Node.js and PostgreSQL:

I don’t have a build for this repository, but I can add one quickly by
clicking on “Actions” in the tab menu (fourth from the left).

I’m not presented with a lot of information, which can be confusing if this is your first time here:

You can build and deploy Docker images whenever you push to a
given branch, publish your code as a package, and so much more.

I’m mainly interested in linting and running tests, so I need to scroll down until I get to this section on Continuous Integration:

The ones you see here are just the recommended ones based on the
code in my repository. If I had a Rails application, or maybe Django or
.NET, I would see those suggestions here instead.

I’m using Node.js, so let’s see what type of CI/CD I can do with the
Node.js action. I’ll click on Configure:

This is where things get confusing for many people. You might expect
some checkboxes, radio buttons and other form selection things, but
instead, you’re given a blob of YAML. DevOps!

Reading through this is simple enough, but it’s not immediately obvious what’s actually happening here! We’re adding a file to our project, in a magical directory called .github/workflows. Each action that you add to your repository is a YAML file that describes whatever job you want done.

Reading through our YAML, we can see that:

Our action is triggered on push to the master branch (or “main” with more current repositories).
Our job will run on Ubuntu, whatever the latest image is.
We’re going to run our code on Node versions 14, 16, and 18.
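The generated YAML isn’t shown here, but it’s essentially GitHub’s starter workflow for Node.js; a reconstruction of roughly what it contains (your branch name and action versions may differ):

```yaml
# .github/workflows/node.js.yml
name: Node.js CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x, 16.x, 18.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
```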

At the end is the main part: the commands we will run within the
container, which are:

The first step checks our code out using a built-in GitHub action,
which will use whatever branch you set for the action on push.

The next step sets up Node.js, and as you can see, we’re building
against 3 different versions of Node, which isn’t strictly necessary
unless you need to do that.

Next, we’re going to install dependencies, run build if we have a build command (my project doesn’t), and then run our tests — that’s it!
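Put together, the generated workflow looks roughly like this (action versions and branch names will vary from template to template):

```yaml
# .github/workflows/node.js.yml
name: Node.js CI
on:
  push:
    branches: ["master"]
  pull_request:
    branches: ["master"]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x, 16.x, 18.x]
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: "npm"
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
```

Committing this file is all it takes; GitHub picks it up on the next push.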

Reviewing this YAML file, it doesn’t look like there’s anything I need
to do, which is exciting! Let’s commit the change and see what
happens, shall we?

If we click on “Actions” again, we’ll see something entirely different:

Hey, it’s our first workflow! As you can see, it’s triggered on a given push, which was the addition of the workflow YAML file (“Create node.js.yml”). The status is “Queued”, which means it’s waiting to run.

If we wait a minute, we’ll find out what happened:

Crap. We broke the build! That red X there means that there’s a
problem, and if I check my email, I’ll see that yes, indeed, there’s a
problem:

Damn! What happened? Thankfully, that’s easy to answer by clicking on that green box. I’m taken back to GitHub, where I can see the results of each run. Clicking on the Node 14 run, I’m given the answer I want:

This type of build output is incredibly valuable. Each step is logged, so you can see how things are set up, and each error is output in the same way you would see it if you ran things locally.

Here, I’m being told that my database connection string URL is invalid
(I know this error all too well). This makes sense! I don’t have a
PostgreSQL database to run my tests against, so of course things will
fall over!

This is why we like test doubles, innit?

There are a few ways to fix this — let’s explore.

Fixing The Build: Use SQLite

I never thought I would write these words, but thankfully, I’m using
an ORM. That means I can easily switch over to SQLite for testing,
which doesn’t need a dedicated service and can run directly from
Node.js.

I just need to install the SQLite package for Node.js and update my
data access code:
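The change itself is small. Assuming a Sequelize-style setup, the decision can be isolated to one tiny function (the function name and the env shape here are illustrative, not from the repo):

```javascript
// Pick the database URL based on environment, so tests use in-memory
// SQLite and everything else uses PostgreSQL.
function connectionUrl(env) {
  if (env.NODE_ENV === "test") {
    return "sqlite::memory:"; // no database server needed in CI
  }
  return env.DATABASE_URL;
}

module.exports = { connectionUrl };

// Usage with an ORM would look something like:
// const db = new Sequelize(connectionUrl(process.env));
```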

Here, you can see that I’m using SQLite in memory. Let’s see if this
fixes the build! I’ll commit the change and push.

Note: I need to do a git pull before I do a git push because I added the YAML
file to my repo using the GitHub website. You’ll get an error if you try to push
straight away, so don’t forget to pull down the changes and merge them in.

Heading back to my repo now, clicking on the “Actions” tab…



Woohoo! Green checks are always exciting. As you can see, I had to
merge the changes (addition of YAML file) to my codebase, which is
why the commit title is a bit weird, but everything works. Hurrah!

Fixing The Build: Using PostgreSQL Anyway

The fun thing about the GitHub action containers is that they have
Docker installed because they’re already running Docker! That means
we can use Docker to build our test containers as we need.

For this, we’ll replace our run directives with a single run, and have it
execute some Docker commands as well as shell scripts:

Credit for this script goes to my friend, Aaron Wislang.



This might look a bit crazy, but what we’re doing here is creating a
PostgreSQL container on the fly and setting things up as we require.
We have to wait a bit for the container to start, that’s why we have an
until loop in there.

We write out a .env file containing the DATABASE_URL, and then run our tests. It might seem convoluted, but welcome to CI/CD.
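The shape of that step is roughly this — a sketch, not Aaron’s exact script, and the image tag, password, and port are placeholders:

```yaml
- name: Test against a throwaway PostgreSQL container
  run: |
    docker run -d --name pg -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:15
    # Wait until the server accepts connections
    until docker exec pg pg_isready -U postgres; do sleep 1; done
    echo "DATABASE_URL=postgres://postgres:postgres@localhost:5432/postgres" > .env
    npm ci
    npm test
```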

Hey, it worked! Unfortunately, doing things the way we’re doing them
here will take longer given the Docker gymnastics — but often that
doesn’t matter, as timely build feedback isn’t really a necessity for
most projects. Obviously, you would rather not wait hours, but an
extra minute or two tends to be acceptable.

I like this idea, but if using SQLite is an option, I’ll just do that.
Hopefully, you can see how your application’s architecture has a direct
impact on your ability to safeguard your production environment
using a CI/CD build.

Finding Out When Things Go Wrong

As you’ve seen, an email is sent out to anyone who is subscribed to repository notifications. You can also enable the GitHub mobile
application to notify you when things go wrong, and I highly
recommend you do so.

I also want to reiterate: breaking the build is not a bad thing and is to be expected. Trying, failing, fixing, succeeding: this is how we learn and grow as programmers on a team. The thing you don’t want is
a delay getting a fix in, so it’s imperative that everyone is subscribed
to notifications.

This can get a little overwhelming if you have a large company with
numerous repositories. If you’re part of an organization at GitHub,
notifications are turned on by default for every repository in that
organization. Some people (like me) reflexively turn off their GitHub
notifications, which is a bad thing because you never want a ping at
the end of the day asking if you fixed your broken build.

Yes, this happened to me and it was embarrassing. A team I was on was waiting for my PR, which I created, but it broke the build and I
didn’t know. Nothing happened to the main repository, just my fork,
however the broken build sat there for hours while I went off and did
other things.

Ideally, you won’t get to a point where a single PR will stop the
development process. Unfortunately for me, I was on a team with a
snarled mess of a codebase (which we were trying to fix) so my PR
touched about 60% of the application. This is how poor architecture
can impact your team: one change ripples throughout the codebase,
which is disastrous!

The fix was simple, thankfully, once I found out about it!

Managing Environment Variables and Secrets

Every application has secret settings which are usually handled by environment variables, and GitHub has a place for you to store these.

You might need to send emails during an integration test, but you
don’t want to send actual emails to some test account. Instead, you’ve
decided to use a service like Ethereal, which stores the email as a
record that you can browse in a web interface.

Those credentials need to be stored somewhere, and we never commit our environment variable files (.env and the like)… so what do we do?

That’s where GitHub Secrets is magical:

You can find this in your repository settings tab.

You can store secrets that are used for your application, or for your
repository. It might not seem obvious what the difference is, so let’s
take a look.

Environment Secrets

An environment secret is an encrypted value that your application might need access to for a given Action. We only have one action with
1 workflow, and if our tests needed to access a secret value, such as an
SMTP server setting, we would want to set that as an environment
secret:

This is a straightforward process that is thankfully flexible and simple.


The first thing you need to do is to configure an environment. I’ll call
mine “Test”:

You can access your environments from the side menu as well.

The next step is to set up the secrets for the environment, which again
is a matter of clicking a button or two and adding the values you need.
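Once the environment exists, a workflow opts into it and reads its secrets like so (a sketch; SMTP_PASSWORD is a hypothetical secret name):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    environment: Test   # pulls in the secrets defined for "Test"
    steps:
      - uses: actions/checkout@v3
      - run: npm ci && npm test
        env:
          SMTP_PASSWORD: ${{ secrets.SMTP_PASSWORD }}
```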

Repository Secrets

At some point, we might create a GitHub action that deploys our code
to a cloud provider such as AWS, Azure, or GCP. Doing so will require
API keys, which aren’t part of our application, just our infrastructure.

That’s what repository secrets are all about:

These aren’t tied to any one environment — they work across the entire repository and are accessible from your Actions.

Deployment to a Docker Image

When you start playing with GitHub Actions, it’s hard to stop. There’s
so much you can do, and we’ve barely scratched the surface.
Compiling, building, linting, and running your tests is just the start — you can also create and deploy a Docker image to whatever registry
you like.

For this to work, our repository needs to have a Dockerfile prepared for our code, which we covered in a previous chapter. Here’s the one for this repository:
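In broad strokes, it runs along these lines (the base image and start command here are illustrative; the real file may differ):

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]
```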

A very typical Node application, which we could build and then push
to our registry using a CLI or some kind of build tool, or we could let
GitHub do it for us!

This workflow will create an image from our Dockerfile and, by default, deploy that image to GitHub’s container registry. This is extremely handy because permissions for your registry will be inherited from your repository, which means there’s very little setup.
You can change this, of course, if you need to.

You can also use whichever registry you like, whether it’s your own,
your company’s, or DockerHub. To see this in action, let’s click
“Configure” in that Docker Image box:

Here, you can see that we’re taken directly to an editor for a new
YAML file called docker-publish.yml. The registry is defaulted to
GitHub’s (ghcr.io), but you can change this, as needed.
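The trimmed-down shape of that file is something like this (a sketch of the template; action versions vary, and the real file also handles tagging and signing):

```yaml
# docker-publish.yml
name: Docker
on:
  push:
    branches: ["main"]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
```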

Note: if you’re deploying to Azure, AWS, or some other cloud, it’s likely there’s
a prebuilt workflow for you, which is what you should use rather than trying to
alter this file.

I’m not going to do a thing to this file — let’s see if it works! I’ll
commit the change, which will kick off the workflow:

That was about as simple as it gets! But… where’s the image?

If we head back now to the repository home page (or just click “<>
Code”), you’ll see our image as a “package”:

Clicking on that link will bring us to the image page, which has our
Docker details:

It honestly couldn’t be any easier!

OTHER FUN GITHUB ACTIONS TRICKS


There are many interesting things you can do with GitHub Actions,
which can be a good thing, or a bad thing. The more workflows you
add to your build, the longer it will take overall, which can be
annoying if you get nuts with it.

That said, building and deploying are only some of the things you can
do. You can also do “maintenance” stuff, such as scanning your issues
list or welcoming new committers:

You can also build and deploy frontend web applications or static web apps to GitHub Pages:

If you don’t know what GitHub Pages is, it’s a way to use GitHub to
host your HTML files and display them as a website. You can also
create a pages site using Markdown files and the Jekyll or mdBook
frameworks. It’s great for documentation or a personal site.

LINTING AND SECURITY


I mentioned at the beginning of this chapter that code analysis might
be important for some organizations. With JavaScript, for example,
this can be extremely useful using a tool like ESLint; Ruby might use
Rubocop or Brakeman and Python might use something like Pylint.

Now I know that people have opinions on linting and “the style
police”, and I’m going to sidestep this argument by saying that you
can, typically, configure these tools to be as strict as you need — or
just not use them at all.

Either way: you have the choice of plugging in a GitHub workflow that
can do this for you. I believe that you might want to discuss this with
your team first, however.

All the tools mentioned above can be run locally, and they can also be
run as part of your build step. For Node.js, it’s common to run a linter
locally, or as part of your editor’s IDE (such as a plugin for VS Code,
or a bigger IDE like Jetbrains WebStorm). The reason you might want
to consider doing this is simple: pride and, more directly, avoiding
humiliation.

Breaking the build due to a linting “violation” is annoying, mostly because you get called out in front of your peers. To me, team morale
is critical, so I might suggest you let your team handle the linting as
part of the build and, if you have to, run your linter during the build
process, not separately.
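If linting does end up in the build, it’s just one more step in the workflow from earlier (this assumes ESLint is already configured for the project):

```yaml
- name: Lint
  run: npx eslint .
```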

As far as security goes, there are numerous scanning workflows you can plug in that will scan your dependencies for known vulnerabilities.
There are also scanners that detect if you committed some kind of
secret (like a connection string) or wrote something that’s vulnerable
to a SQL Injection attack.

I think these tools are interesting, but they also beg to be ignored.
One such tool is dependabot, which GitHub enables by default.
Sounds like a good idea, doesn’t it?

Well…

I must admit… I have turned off dependabot in every repository I have. I do understand that it’s trying to help projects become more
secure; however, I have numerous repositories and some of them are
just for me, goofing around, and each one of them gets a notification
about once a week.

These aren’t emails, either; dependabot will open a PR with the suggested change:

This is for the repository I was just working with (the Node/PG
starter), and honestly this is tame when compared to some
applications. What makes this even more frustrating is that these PRs
will stay there, even if I bump my dependencies locally (which I did
last year).

All of that said: security is a big deal and should be annoying. As far as
which tool you use — well that’s up to you. There are quite a few to
choose from, and it’s a bit out of scope for this chapter to talk about
scanning tools; just know that GitHub has quite a few to choose from.

SUMMARY

There are so many CI/CD tools out there that help you create a build, including GitHub’s alter ego: GitLab. They all work in roughly the
same way, applying workflows (or “jobs”) to your codebase, building,
linting, compiling, securing, and even deploying it to wherever your
code will live out in the wild.

I had to choose one to show the concepts, so I went with a popular choice: GitHub, which is what I know and what I use.

Ultimately, a build is your safeguard from pushing crap into the world,
which already has far too much crap to deal with (software and
otherwise). It’s an essential part of a healthy team, and will help you
deal with the storms of change that will inevitably crash your good
time.

That’s coming next.


TWENTY-THREE
ADDING FEATURES, FIXING THINGS
CHANGE IS THE HARDEST PART OF ANY APPLICATION LIFETIME

I remember when I first played World of Warcraft right after it
came out of beta. It was hard. It took me months to get to level
30. I kept on, and after 4 months of playing semi-casually with
friends, I hit max level which, at the time, was 60.

I remember feeling like I had accomplished something — like I finished the game. It was at that point that a person in my guild (a
group of other players you play with) pinged me and said, “grats! Now
the real game starts…”

They weren’t wrong. An entire world of raids, instances, reputation rewards, and armor upgrades just sprang out of nowhere. Not to
mention player vs. player. I played a rogue and I specialized in causing
havoc for armored up Chads because I had a specialty called “bleed”
which ignored armor.

Anyway: that’s where we are now. Our MVP is out the door, and our
initial build out is done. Now comes the endgame, which is the
hardest part of building software: keeping it running, which is balanced
by an equally pressing need: avoiding rewrites.

LEARNING TO SAY NO, ESPECIALLY TO YOURSELF


One of the downsides to focusing on speed as you build out the MVP
is that you end up with a load of technical debt that needs to be
addressed. That usually happens after the MVP is shipped, and it’s a
moment of reckoning that is not fun at all. Well, at least for me.

My specialty has always been the MVP effort. I’m good at it. I have,
typically, found the “long haul, steady as she goes” ensuing years to be
rather dull. I think it’s because I came up during the first Dotcom
boom, when money was falling from the sky and my clients were
getting wealthy.

The problem you face, when technical debt stacks up too high, is fighting off the urge to carve out massive sections of the application, or perhaps
the entire thing, and rebuilding them. The promise you make yourself is
that “this time, I know the problem space better, can lose a lot of code,
and make things more efficient.” Occasionally this is true! Typically,
however, you waste effort on replacing something instead of expanding
and enhancing your application, which increases the value to your users.

I know this all too well. When I ran Tekpub, many years ago, and
even more recently with my publishing company Big Machine, I could
not resist the temptation to rebuild. I like coding and building things,
and I love the idea of making things more efficient. Unfortunately for
me, that came at the expense of creating content for my customers.

Sometimes you have no choice. I got bit by MongoDB back in 2011 when we decided to use it with Ruby on Rails. Orders were just…
disappearing. Our logs showed that checkouts happened, and our
payment processor (Braintree) showed receipts as well. But, no data.
This doesn’t happen anymore, to be ultra-clear. Back then, it did, and I
had to drop what I was doing and move over to PostgreSQL.

You will face this problem, many times, from yourself and also from
your team. A platform upgrade, a new framework, a bug that you keep trying to patch but, somehow, keeps coming back. We should just redo
this entire module…

No! Spend the time it takes to understand, clearly, what the issues are
in your codebase; especially the ones that keep recurring. Take the
time to do a clear cost/benefit on a new framework, and you’ll almost
always come away with the understanding that it’s not worth it.

Focusing on recurring problems, however, can uncover some wonderful results. I ran into a precision error back in 2011, again with
Tekpub, where orders were randomly off by a penny or two. I was
trying to reconcile my data with the processor’s reports, and it was off
by a few dollars. Nothing will drive a data person crazier than “off by a
few cents”.

I figured it was Ruby on Rails doing silly things or MongoDB, but when we moved to PostgreSQL, the same thing happened! That was
when I was reminded about floating-point errors and how I was
running my calculations. Long story short: I moved everything to
pennies and my problem went away.
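That fix is easy to demonstrate. Binary floating point can’t represent most decimal fractions exactly, which is where the stray cents come from; integers don’t have that problem:

```javascript
// Why "off by a few cents" happens: floats drift on decimal fractions.
const price = 0.1 + 0.2;
console.log(price);          // 0.30000000000000004 — not 0.3
console.log(price === 0.3);  // false

// The fix: store and calculate in integer pennies; format only for display.
const pennies = 10 + 20;     // exact
console.log(pennies === 30); // true
console.log(`$${(pennies / 100).toFixed(2)}`); // "$0.30"
```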

When to Say Yes

My friend Burke Holland introduced me to Nuxt back in 2019. I loved it immediately, as I had been using Vue.js for the longest time, and
loved it, too.

I played around with Nuxt for a day or so, and like I do, I decided to
see how long it would take me to rebuild my bigmachine.io site. It’s a
problem space I know well, and I usually stop after an hour of messing
around. It’s a great way to find the edges of a framework, and my
feeling is, typically, that if I don’t stop, then that’s a good thing!

That’s what happened with Nuxt. I just… kept going. It made things
so much simpler than what I was doing (combination of static site and
Firebase).

Fast-forward 2 years, and I wanted to barf every time I had to work on my site. The file density (components, pages, layouts, etc.) caused me confusion every time, as did the markdown I was using. It worked fine; there was nothing wrong with Nuxt. I just… made a monster out of it.

I tried upgrading to Nuxt 3, which I also liked a lot. That helped, but
then I started running into problems where answers that I found
online were 2 or 3 years old, and out of date. The newer stuff (a year
old by the time I was working with it) wasn’t documented very well,
and I snapped.

Over the course of two weekends, I ported the entire site to Node.js,
Express, and Sequelize. It didn’t take long at all, and I can’t tell you
how freeing it was. I no longer had that visceral reaction, and fixing
things was straightforward.

If you’re struggling with a mess of your making, sometimes cleaning up means throwing away and starting again. Most of the time, it
doesn’t mean that at all. In fact, I would say 90% of the time, you need
to push back against this most common of urges.

But you also need to recognize the lingering 10%, and you can only do
that by taking notes and carefully thinking through the cost/benefit
with your team.

REFACTORING
Typically, what you have to do (instead of rewriting) is to refactor your
approach, which is a fancy word for “reducing and simplifying”. This
is where your careful attention to your testing strategy starts to
pay off.

A solid refactoring effort is a joy. In fact, just this last week, I refactored an application I’m working on for Microsoft. I had tried to
keep things extremely simple, using a data access strategy I had used
with other .NET applications in the past. After a few weeks, I
recognized that what I was creating was getting messy. The patterns
started to emerge:

- Hacks and workarounds. I’m pretty good at commenting code with a HACK tag, which is a standard practice when you know you need to fix something, but it solves the problem at the time.
- That “clanking sound”. I don’t know how else to describe it, but there’s a weird mental “clank” that will go off when I know something just isn’t fitting together correctly. In my case, I found that I was writing raw SQL to get around a limitation in my hand-spun query tool. That’s a massive clank, right there. I don’t mind using raw SQL at all, but that is typically when I need to flex the power of a given database (CTEs in PostgreSQL, for instance).
- Change becomes difficult. My colleague, Aaron Wislang, was setting up our build and suggested that we could use a database interface (IDbConnection instead of NpgsqlConnection) in my data access stuff, which would allow us to use SQLite for testing. That idea made my eyes glaze over because… well… the NpgsqlConnection does implement IDbConnection, but it has a few more methods and properties that I was using and… yeah, changing would suck.

Refactoring is largely kicked off by a “feeling” you get, as well as too much inertia when it comes to changing things. It’s at this point that
you discover why programmers use certain patterns and principles:
change sucks. If I had taken the time to code to an interface, like I know
I should do, I wouldn’t have had this issue.

So, what did I do? Well, allow me to share!

Step 1: Discussion and Evaluation

Every team faces this problem at some point: a given change is just
too hard. This is almost always the driver for a refactor. The other big
driver is, frankly, vanity. Change might not be an issue, but opening a
file that’s 1200 lines long just looks… eww.

Code editors are pretty good at summarizing what you’re seeing, and
there’s always the find function if you need it. Ugly code that works
isn’t really a problem, is it? Or is it? This depends on so many things,
and you usually hear the argument “well if code is ugly then we won’t
understand the intent of the thing” which I think is a load of bollocks.

This is where we get into tabs vs. spaces, where you put your braces,
and how you name your variables, which should be in a style guide, by
the way. If the style guide was followed, you are probably wasting your
time trying to make the code “prettier”.

This is where discussion comes in. Is the refactor going to loosen things up? Increase cohesion and decrease coupling? Will the
codebase become “simpler”? These are subjective, to be sure, which is
why we talk it out.

In my case, above, I could replace my homespun code with an object-relational mapper (ORM) or some other simpler query tool. Here’s
what we discussed:

- Does this make change easier? Yes. By using a more complete package, I can focus on the needs of the application, not supporting my homespun solution. While I have made 5 data access tools over the years (I’m good at it by now), that’s not something I want to support.
- We could use a full-featured ORM. The go-to data access story in .NET is Entity Framework (EF). It works well, and supports many databases. It also introduces complexity of its own. I’m using a Command/Query approach, which means I don’t need the abstraction EF provides. It is an option, however, and might be preferable to my homespun code, but that’s arguable.
- We could use a midsized tool, such as Dapper. This is more of a query tool than a mapper, but there are plugins for it that add basic mapping. Reading the documentation, it’s simple and straightforward to use, and it supports quite a few databases. It’s well-maintained too.

Fun fact: one of the data access tools I created was called Massive. I introduced it
as a single file data access solution for .NET that used dynamic data. Jeff Atwood
and team, who were building out Stack Overflow, liked the idea and considered
it, briefly, for the site. They ended up making their own solution, however, which
became known as Dapper.

After a brief discussion, the choice became clear: we’ll use Dapper.

Step 2: Branching and Experimenting

We have a possible solution with Dapper, and I read the documentation extensively, so I feel good about it. That said, replacing
your data access strategy is never straightforward.

I allowed myself 2 hours to see just how much work the refactor
would take. I created a local branch, using Git, and jumped right in.

I left my tests the way they were and added a new one. My old
solution was still in place, but I decided to create new code that used
Dapper directly. Within 2 hours I was able to replace, entirely, 2 of my
existing commands.

It’s a go!

Step 3: Refactor in Parallel

This is a trick that I learned many years back: don’t replace, rebuild. It’s
tempting to jump into the files that we need to update and start
hacking away. That will inevitably lead to test failures, a lot of them,
and cause you to comment things out and make a mess.

Instead, create new files in a new directory, and rebuild the existing
things using the new way of doing things. In my case, I had the old
command open on my right monitor, and the new one on my left. I
changed the class name, of course, so the compiler wouldn’t freak out,
and then I rebuilt things with the Dapper code.

This is more of a mental thing, really. Seeing splashes of red everywhere (test results) can be frustrating, even if you know that it’s only temporary.

Once my new command was put together, I would use it in the existing test suite, fixing any errors that came up, which were
thankfully few.

Step 4: Merge and Push

The entire process took about 4 hours, which is great. I added a few
more tests and also took the time to ensure that database connections
were managed better, and I refactored a few tests for readability.

That’s it! I merged the changes in, pushed to my personal GitHub, and
then opened a PR for review. We’ll discuss code reviews in the next
chapter, but I will mention this here: a major downside of a refactor
like this is that I touched quite a few files. I also went a little extra by
refactoring the tests and connection code.

Doing too much will drive reviewers crazy, but thankfully my colleagues are good people and it went smoothly.

Your Tests Are Your Friends

Hopefully, you can see the role your tests play during a refactor. If
they’re too precise (or “brittle”, as we discussed in the testing
chapter), then refactoring means changing tests, too, which is a major
drag. When you change, or as is often the case, delete a test, you can
never be certain you’ll get that test coverage back. In fact, you
probably won’t.

By writing tests in a more behavioral way, you make refactoring that much easier. If you want to write a test for every method, go for it!
However, I will say, from this perspective (during a refactor), that’s a
lot of extra work! Is it worth it, especially when you have coverage
with behavioral tests? I’ll leave that to you — it’s a very subjective
answer. I hate friction, so I tend to lean on behavioral testing a lot
more than unit tests.

DATABASE CHANGES
When creating an application, you typically build out the database at
the same time as the application code. Rails, for example, ties these
concepts together with its generators and database migrations. Same
with Django and other “batteries included” frameworks.

This is great during the initial build, but as time goes on, you’ll need
to separate the idea of database maintenance from application
maintenance.

The Database Is Your Priority

I can’t tell you how many times I’ve had arguments over this.
Programmers hate to hear that the application they built is not the
business priority! I guess I understand. It’s a lot of work putting all
that code together.

You’ve heard me say this throughout the book and, yes, it’s my
opinion, but then again, it’s really not. Ask yourself: what would
happen if your application code (the source, repository, everything) just
vanished one day? Panic, surely, but outages do happen and if your
company is OK with being transparent, you could rebuild everything in
the open and slowly get back on your feet. I know this might sound like
I’m just glossing over the details, but the truth is, you could probably
get your app up and running again within a month if you had to.

Now consider what happens if your database disappears, with all of your business data? You could rebuild from logs, I suppose, and if
you sell things you could pull data from your payment processor. But
what if you couldn’t?

Your business is done. Your customers would never trust you!

OK, maybe you still don’t believe me. Let’s try another question!
What’s worse: a code breach or a data breach? I think I’ve read about source being stolen perhaps twice in the last year, and that was for a video game. I read about data breaches every day, and as a user at some of these companies, let me say that it’s horrifying. Do I care if
their source is stolen? Not at all. My data? Oh yes.

My point: changing code is expected. You have a build to protect that change as you adapt your application to your users’ needs. Changing
your database is also expected, but you absolutely have to safeguard
against screwing things up.

Moving Forward

I was toying with the idea of adding a chapter on database maintenance and management, but it scared me. Seriously! Can you just imagine the email?

Read your chapter on DB management and tried that thing you mentioned and managed to drop prod and get fired. The business tanked too. Thanks, Conery.

I will say this, however: you must have a backup plan! One that runs on a
timer (nightly is a good idea) and one that runs when you require it.
Backing up before you make any changes is critical.

OK, that said, let’s move on.

Any time you change a database, even when there’s a problem, you
need to think about moving forward instead of “rolling back”. I know I
just spent time telling you to back up your database nightly, but that’s
for complete disasters where your database is in a terminal state (aka
“dead”). That happens if you forget a where clause a delete or update
statement, and data is lost forever.

Or is it?

Let’s consider the time that Rob forgot a where clause on an update statement and managed to set every user in the Tekpub database to an Annual Subscriber.

Transitioning State, Not Rolling Back

It’s true. I was trying to update the expiration dates for my annual
subscribers because, as luck would have it, my date management skills
were off. I lived in Hawaii at the time, which is GMT-10, and my users
in Europe and Australia didn’t like that their subscription ended a day
early. So I decided to just make the subs last a year and a day,
why not?

I’ll spare you the SQL, but I’ll summarize by saying that running
update users set... without a where condition is one of the oldest,
dumbest problems that you can run into.
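One habit blunts this particular foot-gun: wrap risky statements in a transaction and eyeball the affected row count before committing (the table and column names here are made up):

```sql
BEGIN;

UPDATE users
   SET subscription = 'annual'
 WHERE subscription = 'annual-pending';

-- The client reports something like "UPDATE 42" here.
-- If that count looks wrong, ROLLBACK; if it looks right:
COMMIT;
```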

If I had snapped a backup of my data first, I would have quickly rebuilt the database and hoped that no one saw what just happened. That’s
rarely the case, however, especially if your site has many users. Once
you screw up, the clock starts ticking. New data is coming in (if you
didn’t crater your entire system, that is), and it’s landing right on top
of your screw-up. Your window for a restore is closing rapidly!

I went to Google Analytics to see how many people were on my site, live, and it was in the hundreds. I had just released a new video, so traffic was a bit higher than normal.

I had a decision to make:

1. YOLO and restore. My users would probably notice their watch time and some other convenience stats look weird, but that’s about it.
2. Probe the problem and figure out a fix, rolling forward.

My decision was made for me, unfortunately, as two orders came in while I was looking at Google Analytics. If I restored, those orders were gone!
THE IMPOSTER’S ROADMAP 627

In general, when you’re faced with this dilemma, you should always
look to fix forward, which some people call “version forward”. Your
database is a fluid thing and, usually, you can resolve the problem
you’re having by understanding precisely what happened, and creating
a fix.

In my case, I could address my issue in one of three ways:

1. Restore a backup on my local machine, create a new table called user_fix with the correct subscription information, import into production and then run an update query, replacing the bad subscription data.
2. Pull subscription data from my payment processor and do the same process (create a new table, and then update as needed).
3. Create a dedicated code script, and execute it against the live system. I was using Rails at the time, so a migration running against production with the updated data would amount to the same thing.
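Option one boils down to a join-style update from the imported fix table. Something like this, with invented table and column names:

```sql
-- user_fix was rebuilt from the last good backup and
-- imported into production with the correct data.
UPDATE users
SET subscription = user_fix.subscription,
    expires_at = user_fix.expires_at
FROM user_fix
WHERE users.id = user_fix.user_id;

-- Drop the helper table once the data is verified.
DROP TABLE user_fix;
```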

I did option 3, but not before I thoroughly investigated and made sure
that nothing else happened that I was unaware of. Thank goodness, I
hate triggers!

Fixing forward takes time, and it can be difficult to shed the panic and
get your brain into the right space. You can, usually, build a forward
fix from your last backup instead of doing a full restore. If you can’t,
well, we’ll talk about disasters in a few chapters.

Migrating Your Schema

Migrating data is one thing, but changing your database schema is a whole new world of potential pain. This is where a DBA can be your best friend, reviewing your change scripts and spotting potential problems.

What kind of problems, you ask? Here’s a fun one!



Let’s say you have a table with 10,000,000 rows in it. For some, that’s
not all that much data. For others, like me, it’s huge. Let’s say that’s
your users table, which holds their login credentials. If you have
10,000,000 users, you’re in pretty good shape, friend!

In your last sprint, you and your team decided to add IP address
tracking to your user logs, and to add last_ip_address to your users
table:
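The script itself is a one-liner. Here’s a sketch in PostgreSQL flavor (the column type is a guess), and that DEFAULT is the part to watch:

```sql
-- Looks innocent, but on many systems (MySQL at the time,
-- older Postgres versions) the default means touching every
-- one of those 10,000,000 existing rows while the table is locked.
ALTER TABLE users
  ADD COLUMN last_ip_address varchar(45) NOT NULL DEFAULT '0.0.0.0';
```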

You run this script locally and it works fine. You ask for a code review, everyone asks if the syntax is correct, you say it is, and you get the green light.

You run the script and production goes down for 40 minutes. Uh
oh. You look at your database application and the spinner is still
spinning… should you kill it? Let it run?

Monitoring alarms start going off, Slack starts pinging…

So a version of this really happened to Twitter, back in 2010. The outage was caused by MySQL (the database they were using at the time) locking the users table while the structure was changed and the default IP address of 0.0.0.0 was applied. If you’ve never been bitten by a row-locking bug, lucky you!

Every query was halted during the lock, which meant users couldn’t
tweet nor could they see any tweets at all. Ah the good old days!

So, what should have happened here? To be honest, I’m not sure. At
that time, MySQL required table locks when altering the schema, so I
suppose the best choice would have been to make the change as fast
as possible (no defaults), with an announced maintenance period.
They could have also decided not to do the column addition, deciding
it might not be completely essential.

These are things you need to consider as you change your table schema:

How large the table is, and how many reads/writes happen per second. If there’s a lot, you’ll need to schedule a maintenance window and let your users know you’re going down for a while.

Is that default value necessary? Defaults can put a strain on the system as it’s one more write that must go in.

Can we do this concurrently? Some systems, like PostgreSQL, allow you to apply indexing in the “background”, which won’t lock your table as the index builds.

Does this constraint really help? Every constraint you put on your table is another thing that needs to happen during a write, such as setting a default. Your downtime is growing.
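In PostgreSQL terms, the lower-impact versions of those moves look something like this (names invented):

```sql
-- Build the index in the background; reads and writes continue.
-- (Note: CONCURRENTLY can't run inside a transaction block.)
CREATE INDEX CONCURRENTLY idx_users_last_ip ON users (last_ip_address);

-- Add the column with no default (a quick metadata change),
-- then backfill in small batches instead of one giant write:
ALTER TABLE users ADD COLUMN last_ip_address varchar(45);
UPDATE users SET last_ip_address = '0.0.0.0'
WHERE id BETWEEN 1 AND 100000;  -- repeat for each batch
```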

Writing the SQL is easy, as is creating the migration code in Rails or some other platform. Understanding the impact is, by far, the greater concern.

Migration Tools, Or Not

As a database person, the idea of programmers writing database migrations scares me. See the above section as to why. I do think they’re useful, but, in the case of something like Rails, once you’re in, you’re in. Every change needs to happen through those migrations. If you decide to run a quick schema change by hand, and then try to roll your migrations back, you’ll be out of sync and sad.

If you’re a Rails person, this is probably OK. If you’re a Rails person at a startup that might eventually hire a DBA, your days writing migrations are numbered. This is OK because… see above section, again.

There are alternatives here. I’ll show you some tools, and then I’ll tell
you what I do. Let me say this right here, too: I do what I do because
it works for me. It’s critical that your change plans work for you and
your team.

If you’re using a relational system, such as Postgres, you’ll have no problem finding a tool out there that will create a change script for you, based on the state of one database compared to another. One of my long-time favorites is Navicat:

For a few hundred dollars, you’re given everything you need to manage just about any database system. I’ve had a license for the PostgreSQL edition for years, and I love it.

Need to sync the structure between your development system and your production one? No problem:

This is extremely useful because it lets you analyze the differences, and it will then create the change script for you. Just like every other high-end GUI tool out there.

Don’t get me wrong, I do think it’s worth it! I, however, like to have
my scripts. I like SQL, it’s powerful, and if I’m disciplined in what I do
and make sure I capture the SQL changes needed, I don’t need to
spend the extra money.

My process goes like this:

Initial build out to MVP (all prelaunch stuff): create a file called db.sql in a /db directory. Run that file before every test run using a Makefile.

Once deployed, I archive the SQL file to /db/archive/db.sql. I don’t want to run this again! I make sure to tag the commit as well, so I know that’s the initial database version. I stop running the SQL file during tests.

I create a new SQL file called 1_change.sql. Now, whenever I need to change anything, I pop it in here. I run this before every test.

Once I’m ready to push the changes, I archive the SQL file once again, and create a new one called 2_change.sql.

If you’re curious, here’s a Makefile from a recent project:
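The gist of it looks like this (the connection string and test command here are stand-ins, not the actual project’s):

```makefile
# Recipe lines must start with a tab character.
DB_URL=postgres://localhost/myapp_test

db:
	psql $(DB_URL) -f db/db.sql
	psql $(DB_URL) -f db/seed.sql

test: db
	npm test

.PHONY: db test
```

Running make test rebuilds the database from db.sql, loads the seed data, and then runs the test suite.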

You can see that I also have a seed file, which I generated because I
needed 10,000 users to go in. Believe it or not, this all happens in
milliseconds!

Doing things in this way requires idempotency, which means that every
SQL file can be run repeatedly with the same results. This is one
reason I like working with table schemas because dropping and
reloading is pretty simple:
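In Postgres, that guard is as simple as:

```sql
-- Rerunnable: drop the whole schema and everything in it,
-- then recreate it before the CREATE TABLE statements below.
DROP SCHEMA IF EXISTS mail CASCADE;
CREATE SCHEMA mail;
```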

This goes right at the top, and wipes out everything in the mail
schema. All the create statements are down below, and all the data is
in the seed.sql file.

If you’re not working in a schema, you can drop public without harming your system. You just need to recreate it and make sure you also install any extensions.

For the changes, you have a choice: keep rebuilding everything from scratch (using db.sql and then change.sql), which means tweaking our Makefile a bit:
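That tweak looks something like this (the connection string is a stand-in):

```makefile
# Rebuild from scratch, layer the changes on top, then seed.
db:
	psql $(DB_URL) -f db/db.sql
	psql $(DB_URL) -f db/change.sql
	psql $(DB_URL) -f db/seed.sql
```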

This little change will ensure that everything goes in at the right time,
and we can layer in our changes on top of our database initialization
script.

Or, we could make sure that our change.sql script can be rerun
consistently without problems by ensuring the changes don’t exist
before adding them:
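A rerunnable change.sql guards every statement, something like this (column and index names invented):

```sql
-- Safe to rerun: each change checks for itself first.
ALTER TABLE users
  ADD COLUMN IF NOT EXISTS last_ip_address varchar(45);

DROP INDEX IF EXISTS idx_users_last_ip;
CREATE INDEX idx_users_last_ip ON users (last_ip_address);
```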

This is clunky, but it does give you a way to “undo”, if you will, the
changes that go in.

One of the major benefits of doing things this way is that you can ask
for code reviews on your change scripts, and ensure that you know
precisely what will happen and how. It’s good to be a control freak
when it comes to databases.

And finally, again: this is what works for me as a person comfortable with SQL and tools like Make. Hopefully, you can find a good balance.

WHEN SERVICES AND DEPENDENCIES BETRAY YOU


I’m generally an optimistic person, except when it comes to external services and dependencies for applications that my livelihood depends on. There are only two that have proven me wrong, and they are:

Stripe. I’ve been using this payment processor since it began back in 2009 and, somehow, they have always been solid.

PostgreSQL. I started using Postgres right around the same time as Stripe, and it’s never let me down.

I should probably elaborate on what “betrayal” and “let me down” mean. It’s when a platform, framework, or service:

Changes their terms of service in a detrimental way, which usually involves privacy or some other thing that makes me mad. A JavaScript framework did this to me once, activating telemetry without an opt-in and not being open about it. I hate that.

Raises their rates or changes service levels. Amazon Prime just did this with their video service: if you want to watch videos without commercials, you now have to pay more than you were paying.

Goes out of business. This happened to RethinkDB right when I published a course on it.

Gets bought by a company you despise. This happened to MySQL many years ago. I don’t have an opinion on Oracle, but I know others do.

Is proven to be insecure. Okta was in the news recently because all of its users' data was breached. News like this is becoming “normal”, but given that Okta’s business is managing user identity, this is bad. This is one of many security incidents, and by far the worst. I’m not sure how they’re still in business, if I’m honest.

Locks you in. This happens most with cloud providers. As the years go by, your services grow, as does your data. That’s when you discover “egress fees”, which can cost thousands of dollars.

Flips the Bozo Bit, which basically means they do slimy things to make money, act like jerks publicly, or support political causes you disagree with.

Keeps breaking things. It’s great that platforms evolve, but having to upgrade your application because the platform underneath it changes can be difficult to deal with. .NET gets a new version every year, and migrating can be painful. Ruby on Rails is less frequent, but moving to a new version can really take time.

Doesn’t allow the customization you need. As your business grows, you will need to adjust to what your customers want, and if you’re locked into a service that doesn’t allow you to change things, you’re in trouble. I ran my business on WordPress for a year or so, but the plugins required to do what I required absolutely killed me. Eventually, I had to write my own, which is when I stopped using WordPress.

I could go on — but I think you get it. You cannot rely completely on
external services or dependencies. This doesn’t mean you have to create
everything yourself. You just need to have a backup plan for every
dependency you take and service that you rely on.

Creating a Backup Plan

My focus is always the data. If I own the data, I’m happy. No matter what
service I use, I set up webhooks to update a centralized data store
(PostgreSQL), which I use for reports and customer fulfillment.

That PostgreSQL database is currently hosted by Supabase, but I don’t trust them to stay around forever, though I hope they do. They’re a startup living off Venture Capital, so it’s entirely likely they’re going to go away someday, which is fine because I have a backup going off every night which sends my data to Amazon S3 (cloud storage).

I have two main applications: my main site and my checkout site. The main site runs on Rails (currently), but I also have a Ghost CMS site that I can deploy quickly should something happen to my Rails app.

My checkout site is a service I’ve been using happily for years: ThriveCart. They seem to be doing well, but it’s possible they could go away too, and if they do, I have all of my checkouts ready to go at Stripe, using Stripe checkout. I also have my own checkout pages on my Rails site, just in case.

You don’t need to do all of this: just protect your data and be ready to
deploy to a different host/cloud/service within an hour if anything
happens.

STEADY ON!
Changing, updating, and versioning your application is the long haul
and if you do it for years, that’s a great sign of success. As you go on,
your change process becomes your business process, and can only be
as healthy as your organization and leadership. This is where I will
leave off, as this isn’t a book about building a company or
organization.

Your success, however, will be scrutinized, which is also a good thing. Think about the leadership books you’ve read or heard of: they’re all from people who have delivered, and how they delivered. Some people care about the founding, or the start of a project — but most people care about growing the project into something amazing and successful.

Stripe and GitHub, for instance, are companies that started with simple ideas that were unremarkable at the time. Over the years, however, the Collison brothers built Stripe into a multi-billion dollar company trusted by developers around the world. Tom Preston-Werner, Chris Wanstrath, PJ Hyett, and Scott Chacon thought it would be fun to create a website that let you see your Git repository information in a centralized place. That idea grew into another multi-billion dollar company as the founders adapted to what customers liked and needed.

Change and growth, that’s where the fun is.


TWENTY-FOUR
CODE REVIEWS
IN WHICH WE GET TO FLEX OUR PEOPLE
SKILLS FOR GREAT GOOD

Your job now is to manage change, as we discussed in the previous chapter. You’ve probably been doing code reviews as you and your team ramped up the MVP, but they were most likely a bit rushed. Which is fine — MVP is about getting your code out and into the hands of users with as few bugs as possible.

Thereafter, it’s all about pace and rigor. This is the endgame!

FLEXING YOUR SOCIAL SKILLS


Aside from the technical interview, I can’t think of any other process
that will test your social skills more than the code review. It is the
most socially demanding thing you will do in your job as a
programmer.

Before we get into the details, it’s important to break things down so
you understand the context for what you’re about to read.

The first, and most important, thing is this: all of this is highly subjective. Some people have a “right between the eyes” approach to a code review, and see it as a way to challenge their colleagues (and themselves) to become better, faster, stronger. No BS, no touchy-feely nonsense — just give it to me (or hand it out) straight, direct, and to the point.

Many people like to flex and play domination games, while others like
to give out hugs and boost others however they possibly can.

So, to the point: there is no one true way to do a code review. Your
approach will shift based on whom you’re reviewing and what type of
feedback they prefer. This will also have to align with what type of
feedback you like to give.

There is one bit of solid truth here, however: you’re here to keep shipping. This might push you right out of your comfort zone, especially when reviewing code from someone who is clearly in over their head. What do you do then? Encourage them and try to help? Or give them a nudge toward the door? The latter might seem harsh, but teams that lose a bad apple can transform overnight (see the law about avoiding negative people).

I’m going to start out with the human part of code reviews, but I will
keep us on track with the singular goal of getting the code out the
door and into the hands of our customer. It’s critical to remember,
that’s why you’re here.

We’ll then take a look at code review tools that you can use, and as
usual, I’m going to go with the most widely used tool: GitHub. Other
source control platforms (TFS, GitLab) work roughly the same way, so
hopefully you can translate as you need.

UNDERSTANDING YOUR OPPORTUNITY


There are so many ways to become essential to your project, including being an outstanding code reviewer. It’s important to understand that you don’t have to be an expert at something to offer solid feedback; you just have to know what the other person is trying to do, and see if they’re doing it.

It’s so easy to lapse into horror stories when it comes to code reviews
(and there are many), but let’s start out on a positive footing,
shall we?

To Code Is Human, To Review Is Divine

That’s a variation on a famous quote from Stephen King, from his book On Writing, in which he discusses the writing process:

To write is human. To edit is divine.

I’ve written quite a few books over the years and I can tell you, as a
fact, that a good editor is a profound gift from the gods. One of my
books, A Curious Moon, was edited by my friend Dian M. Faye and the
process was surreal. She saw what I was trying to say with this book,
and focused her time on helping me say it. I don’t quite understand
what magic she possesses, but the joy I felt as this book took shape
with her deft skills… it was, as Mr. King states, divine.

That’s how I see code reviews. You’re there to help someone shape
their code into something spectacular. Who knows? Maybe it already
is, and your vote of confidence can make a wonderful difference in
their day.

On the other hand, you can also suggest some small tweaks here and
there that your review partner might have overlooked, like we all do,
and quite literally change their life.

You’re a code editor, there to do the things that editors at a publishing house do: trim, tweak, nudge, and help focus. Keep this in mind and everyone wins!

CODE REVIEW STRUCTURE


If you work at a larger company, the code review process will likely be
defined for you and expectations for tone and content will be laid out
already. There is typically wiggle room in there, however, for you to
provide a personal approach.

I have found that most companies will adopt a tone and style from one
of the senior managers or leads, which can be a good thing… or not.
Always remember that the “style” you use should be your own, as long
as it’s effective.

Let’s consider a few styles that you might consider on a case by case
basis. They each have their pros and cons — your job is to figure out
which to use, when.

The Gatekeeper

I both love and hate this type of code review. On one hand, as a coder,
it pushes you to “measure up” and make sure you lint, comment, and
follow the style guides to avoid “PR ping pong”. I think this is a good
thing.

On the other hand, it sets up a false hierarchy, that the reviewer is somehow “above” the person submitting the PR. This might literally be true, in terms of project structure or company seniority, but if you’re following an Agile way of doing things, there are no levels.

The main problem with gatekeeper code reviews is that the reviewer
might not be as skilled as the coder, even if (sometimes especially if)
they’re senior. I have been in this position far too many times.

I think it’s OK to have a “decline by default” mentality because it can cut down on the back and forth, like asking for better comments (a typical request). The phrase “please follow the comment guidelines and use proper voice and grammar” is one that makes me want to throw things.

The Nice Person



We like nice people, don’t we? We like to be liked and often hate the idea of conflict, especially at work. I was talking to a good friend just the other day about code reviews, and he told me “that’s why I never made it as a manager”, which I found odd! He was simply too nice and didn’t give solid feedback that could help someone grow.

Being nice, however, doesn’t mean “avoiding conflict”. It can also mean that you’re showing respect for your colleagues and doing your best to use supportive words.

As with most things, it depends on who’s listening to those supportive words, and whether they convey what you’re truly thinking. If you’re reviewing someone’s code, and they forgot, for the 40th time, to comment their code with something more meaningful than the obvious (and with bad grammar and voice), a response of “great comments! Thanks for taking the extra time to add them” isn’t going to stop the problem from happening again in the future.

In fact, it will almost guarantee it, making more work for you or the
future code reviewer, and not helping out your colleague.

Something more direct will likely help: thanks for taking the time to add
the comment, but I would like to see it follow our guidelines, so the
documentation reads the way we decided as a team.

Cold Reality

I struggled with naming this style because it sounds so negative, but I think this is my favorite style: you don’t need to hold my hand, just tell it to me straight. Time is money! When I get a code review, I want to get the fixes in as fast as I can, so I can keep going with the other things I’m working on. If you’re reviewing my code, tell me directly what I need to do, so I can make the fixes. I trust you.

Using our comments example, one more time, let’s imagine that I
forgot to add one (for the 40th time) and you catch it. You decide to
be direct:

It would be helpful if you could remember to add comments without being asked to do so. Time is important to all of us.

I wouldn’t read this as a reprimand or someone being “mean”. It’s truthful, which is a gift.

This, of course, is just me. I know others that would read this and
think you’re being unduly harsh. Terseness has that effect, but you can
mitigate the damage by sending an email before you do the review:

Hey, I tend to leave terse comments with as much detail as possible, minus the niceties. Please don’t take this as condescending or mean — I value your time (and mine) so tend to just get to the fixes as fast as I can.

You might also want to follow up with the person to ensure you haven’t gone over the top, which is easy to do. Good relations on your team, especially if you’re the lead, are critical!

The Coach

A few years ago, I was on a project that required me to learn Go. The syntax of the language was simple enough as it is C-style (braces, etc.), but the idioms escaped me (null checks everywhere? Really?). Thankfully, my good friend Aaron Schlesinger, a Go expert, was the one reviewing my code fairly often.

Aaron and I are peers, yet Aaron knows a lot more about Go than I do,
and it was critical that he took the time to encourage me to learn more
about a topic. To that end, he took extra time to explain concepts that I
was obviously missing.

While I can’t show you the exact comments, I can show you a
representation:

Love the progress, and I especially like how you’re starting to adopt more gopher idioms! One thing you might consider, when handling nulls repeatedly, is that instead of doing X, you can do Y and handle it in far fewer lines. It’s tempting to get clever, creating helpers and so on, but Go developers like clear code, so using Y will put you right on track.

This was extremely helpful. It did cause me to rewrite big chunks of code, but it didn’t matter. The idea that I was writing more idiomatic code made me happy.

This is the power of a good code review, especially when experience and need align. Aaron fast-tracked my Go skills tremendously, and it obviously made a giant impression.

That said, I have also had “coaches” review my code and offer advice
on how to “properly” query PostgreSQL. This usually involves the
strong suggestion that I use a data access tool instead of writing inline
SQL, which is “prone to SQL injection attacks”.

I’ve written 5 data access libraries in my day, 3 of which were pretty successful (Subsonic, Massive, and Massive.js). I understand SQL injection issues well, and I’m also proficient at SQL, especially with PostgreSQL.

It’s a common theme in my life: someone sees me write inline SQL and calls me out for promoting SQL injection attacks. To be clear: I only write inline SQL when ORMs do silly things, like creating an insert query with 10,000 parameters instead of letting me write an insert...select.

Anyway: the condescension that drips from these “reviews” easily balances the wonderful help that can happen when things align. This, of course, happens when skills don’t match and egos get out of control.

It’s critical for each person (coder and reviewer) to understand the
other’s strengths. It’s also critical to know when you know something,
or are just repeating dogma. We’ll talk more about that in a bit.

The Hero

Here’s a scenario: you’re the senior lead on your team, and you’ve
been given a PR to review from one of the junior developers on the
team. It’s for a React application, and the PR is for adding a feature
that uses the application state to store some data and update a status
element. Simple stuff, at least to you, because you know React (let’s
assume you do in case you don’t).

What you see is instantly confusing, and also alarming. The style is
dated, and it’s clear they wrote their own components and functions
instead of reusing the core bits the team has been using the entire
time.

This is the first time you’ve worked with this person, and you realize
they were hired a month ago. You look at their profile, and they claim
to have 6 years of active React experience, but they clearly don’t, and
you start to feel that the entire PR is probably a waste of time.

If you’re a Hero code reviewer, you might:

Rewrite everything yourself and send it to the person as a suggestion.

Set up an in-person meeting to go over WTAF.

Suggest they quickly delete the PR and you’ll follow up with them.

These aren’t entirely bad responses to a PR from hell, but they do frame you as someone trying to “rescue” a person on your team. You might have a good reason for this, but by taking on the hero role, you’re involving yourself unduly in someone else’s problem.

I am guilty of this. I told myself that the project needed to move forward, and I didn’t want someone else to get hammered, so I just did the work and discussed it with the coder over lunch. It didn’t go well, as you can imagine.

You never want to make someone feel erased. That said, you’re here to
ship something, not wait while someone who clearly lied on their
resume fumbles around with a feature required for your sprint.

We’re going to discuss the ups and downs of politics in the future, and
how you will need to sometimes flip the evil switch — now is one of
those times, although “evil” is far too strong a word.

In short: this person needs to go, or up their skill level dramatically. Right now, they’re a liability and slowing your team down, and your impulse to be a hero and save them is very kind of you, but if you think they lied on their resume (and the hiring process didn’t catch it), that’s a hard stop.

If this were me, today, I would probably:

Set a 30-minute meeting before I reviewed the code further.

Tell the person directly, without accusations, my impression of their code.

Ask them point-blank about their experience.

You can crush someone doing this, and that’s never a good thing.
People quit all the time because of bad code reviews; quite a few give
up on coding altogether. I don’t quite know what to say about this
because, on one hand, it could be a good thing. Quite a few people get
into programming because it pays well, not fully understanding the
commitment they’re making, only to realize after a few months they
hate it.

Other people just need a little more time to get up to speed, and find
themselves in the deep end, sitting opposite you at their first code
review trying not to break into tears.

So what do you do? If someone lied on their resume about their experience, that’s “cause” for their termination. Don’t feel weird about it, either. If they lied about that, what else will they lie about?

But what if this person was desperate for work? We’ve all been there. Maybe they have kids to feed, or a sick relative, and they were forced into a tough decision. They try to convince you that they are a fast learner, and that they can get the job done. They promise to study, take an online course or two, and put themselves on 30-day probation.

The truth of the situation remains: this team member is letting you
and everyone else down and, moreover, is taking the place of someone
else who needs a job. Someone who’s qualified and much more
deserving.

I don’t have an answer for you. “Fake it ’til you make it” people are
extremely challenging, yet I also understand that work is work and
when times are hard, people do weird things. They are typically
discovered during code reviews, however, and how you handle it is up
to you, of course.

The action you take will define you as a leader.

PUSHING DOGMA, NOT FIXES


I had a contract with an e-commerce gateway years ago, one you’ve
heard of, most likely. I was asked to update their C# SDK to be more
“modern”, and to add updates that the underlying API had recently
received.

I was proud of my work. I decided to use method chaining, what some in the .NET world would call a “fluent interface”, to help you build your Charge and then send it to the service from your .NET code.

Building an object like this has roots in functional programming, where you transform data by piping it from one function to another, wrapping the data in a structure that helps you understand what happened. That’s precisely what I did for this project, and it was backed by 100 or so tests written so that developers could understand what was going on (I like thinking of tests as documentation when possible).

I felt good walking into my first code review with one of the company engineers, whose first comment was:

What the f$&# is this sh#@!

I thought he was joking, so my response was to laugh and say, “sh#% is my specialty”. Which is when I found out that he was not, in fact, joking.

He proceeded to tear me apart because I wasn’t using a proper constructor pattern, which is what he was used to seeing. He didn’t like my domain model either, telling me I needed far more immutable structs and blah blah blah blah.

I let him finish, of course, and didn’t say a word during the process
(always say less than necessary). My client was on the call as well, and I
could tell she was absolutely mortified.

The problem here is straightforward: this isn’t the way he learned how to
use C#. This happens in the Ruby community too, especially when it
comes to Rails. Pythonistas and gophers have their idioms too, and if
you go against that, you’re probably wrong.

This is toxic.

I do think it’s true that certain patterns can help with a complex
codebase. Factory methods can make things much, much clearer, and
Builder objects can be used when Factory methods get out of control.

There was nothing tricky or weird about what I was doing with my
fluent interface. It worked, and the API discovery (if you use
intellisense) can be extremely helpful. That said, there are people who
dislike it because it seems verbose, like my friend Jimmy Bogard:

The replies to Jimmy’s post are even more interesting:

You can take any coding “style” too far, which goes without saying,
and if you see someone doing that, do let them know. That’s when
code reviews work best: when you take the time to offer substantive
feedback, such as:

I’m not a fan of fluent interfaces because they tend to be verbose and difficult to read and understand. Like this method here… I think this API might benefit from becoming a little more terse. If you were to make these small changes, then the chaining would be 3, at most 4 methods long, which is much more readable.

This will likely cause some discussion, which may or may not be
constructive. No one likes to hear that their code doesn’t measure up,
especially when they’re proud of it. That said, I’ll end this discussion,
before we get to the tools and more practical stuff, with another quote
that I like, which is targeted at aspiring writers:

Kill your darlings.

This has to do with the editing process, when it becomes obvious that
a character or plot point should be cut to tighten up the story. As a
writer, you tend to fall in love with your characters and, often, the
scenes or plot elements you put them through. Your editor will likely
tell you, more than once, “time to kill this little darling” (darling,
here, is the thing you fell in love with).

The same goes for code. People fall in love with what they write and
will defend it, even though it might not be needed or could be made
more powerful and clear by refactoring or just cutting it out altogether
in favor of a simpler approach.

Editing is divine, after all!

AUTOMATING THINGS
Code reviews used to be done in person, not so long ago. In fact, they
still are done in person at many large companies, and even startups.
There’s a good reason for that: in-person communication tends to be
much smoother than written.

That said, you can easily end up with misunderstandings, no written record of what was discussed when, and unclear direction from the reviewer. One way around this is with an automated code review, using a tool like GitHub.

To be clear: I’m not saying automatic code review — there are still
humans involved. An automated code review simply tracks what was
said, when, about code that’s under review.

I’ve been using GitHub throughout this book, and I’ll keep on going
for this example, because code reviews with GitHub are powerful. If
you’re using GitLab, Atlassian Crucible, or BitBucket — the code
review process is basically identical to this.

Hi, It’s Me, I’m The Problem

I have a bit of an issue with my current codebase. Can you spot the
problem?

I’m not sure why I did this if I’m honest. I know better.

The reason I’m showing you this is because of the link, right above the
image. If you click it, you’ll be taken right to that line in the code. This
is one really nice thing about GitHub: they allow you to navigate
directly to a place in your codebase.

The url is straightforward as well:

https://github.com/robconery/node-pg-start/blob/master/mail/index.js#L43

The end bit there, #L43, highlights the line so you can see it better.
This is extremely useful when creating an issue, which my very
considerate alter ego just did:

Issue, Then PR, Then Review

There’s a fairly standard flow with most projects, open source or internal, that is considered “good form”:

- Create an issue first, before offering a Pull Request (PR). You never know what a maintainer is thinking — they might have a good reason for the way things are! This also gives you a forum to discuss your ideas for a fix. If you’re on a project, an issue will have likely been created for you to work against.
- Create the PR and reference the issue. People who look at the issue will see that work is being done via an existing PR, and that can keep duplicate work at a minimum.

Lucky for us, my alter ego is on the job. I read the issue and gave a
thumbs up, so a PR followed shortly after:

Very nice. Again: we covered all of this in a previous chapter, but repetition is a good thing if this is new to you.

Reviewing a Pull Request

Code reviews with GitHub (and most other Git-based tools) are
isolated to the files that change for a given commit, which is
wonderful. As a reviewer, you know exactly what has changed and can
assume that all other code has remained the same.

When you receive a PR, you’re given the chance to review the changes if you like. If you don’t have time or want a second pair of eyes, you can also request that someone else add their review too, by clicking on “Request” under the “Reviewers” section:

The incoming PR has two separate commits, each with a link. When I
click those links, I’m taken to a diff that shows exactly what was
changed and how:

This is a nice, clean diff, and it makes me happy. Why? Consider:



- The commit is small, with changes that are easy to review. It is no fun to review a massive candy-striped diff with changes across multiple files! If you want to impress your lead, make your commits isolated and easy to review with a message that makes sense.
- There is no “housekeeping”. Developers are the worst when it comes to sneaking little things in, such as recasing a variable, changing your spaces to tabs, or messing with where your braces are. Some editors do this by default when you save a file — I hate this. If you want your PR rejected, reformat with spaces. The entire diff turns into a sea of red…

Here, I can see exactly what’s going on. Line 47 is checking for a variable named SEND_FROM on the environment, and I think the only response I might have here concerns that variable name.
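For context, the pattern under review is a common Node idiom: read a setting from process.env and fall back to a default. This is a guess at the shape of that line, not the actual contents of mail/index.js:

```javascript
// Hypothetical reconstruction of the code under review; the real
// mail/index.js may differ. The sender address comes from the
// environment, falling back to a default when the variable isn't set.
function getSendFrom(env) {
  return env.SEND_FROM || "noreply@example.com";
}

const sendFrom = getSendFrom(process.env);
```

A review comment here would focus on the name: SEND_FROM reads like a command, while something descriptive would read like a setting.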

Right now, I’m looking at the changes for just one commit, but I can
view them all by changing the selection:

This can make reviews much easier to navigate as you can see
everything at once.

Let’s take a look at the code changes. This red and green striped file is called a “diff ”, which you’ve probably heard of, and shows the “difference”, or before and after, for this file. The red lines are the ones that were removed, and the green lines are what replaced them.

If I hover over a line number (any color), I’ll see a big blue plus icon
appear, which tells me I can add a comment:

I’ll add a message here, and make sure that I click “Start a review” as
that’s what I’m doing.

Note: if someone requested a review from me, the review would already be
started, assuming I accept. Also: you can leave a comment on the code without
doing a formal review, which will hold up the merge, by using “Add single
comment”.

There’s only one change with this commit, so I don’t need to add any
more comments, and I’m done reviewing this commit.

Up on the top right of the page, there’s a button that will wrap things
up and let my alter ego know that I’ve reviewed the code and that I
have some change requests:

Applying a Review Style

This is where diplomacy and the “styles” discussed above come into
play. Let’s go through a few different responses:

1. Just accept the PR and worry about possibly changing the reply-to address later, when it’s needed.
2. Be more direct, using fewer words. My alter ego is busy, so a reply here similar to “let’s call it DEFAULT_REPLY_TO” might be appreciated. Short, direct.
3. Try to sound kind, using diplomacy so you don’t ruin someone’s day. Thank them for their work, and make sure they understand their role is important.
4. Accept the PR and then change the name myself.
5. Don’t request a change, but instead leave a comment reminding my alter ego just how busy I am, and directing them to someone else who can review this trivial change.
6. Reject the PR immediately because the guidelines for PR descriptions weren’t followed. We have standards for a reason.

Each of these represents a valid code review, even if they have a tinge of negativity. Let’s discuss!

Accepting by default (response #1), trying to be nice, can be the most toxic thing you do as a lead. Programmers want to get better, which means they need feedback from you. If you don’t question them or suggest improvements, they’ll resent you for not helping them improve. Or they’ll lose trust in you and think you don’t know what you’re doing.

Being direct (response #2) is, as you can probably tell, my preferred way. It can also be incredibly frustrating for a programmer who is unclear on what you’re asking for. I’ve been that programmer, and driven my lead insane with followup comments that simply say “more words please”.

Diplomacy and professionalism are good things. With response #3, which happens to be the style I chose for my reply, I’m taking a few extra steps to ensure that the person submitting the PR feels good about doing so. This is useful if it’s a new person on your team, or if you’re doing open-source work and this is a first-time committer. Otherwise, it’s a waste of words. Don’t get me wrong: kindness is grand, but often people just want to be told what needs to happen.

Response #4 is horrible. Nothing will deflate your team faster than you making changes for them. It is so, so tempting, especially when you’re busy, to just… click edit… right in the repo and make that change. Resist this, because it will come back to bite you.

Response #5 sounds harsh, doesn’t it? It doesn’t have to be! With some well-chosen words (being diplomatic, but direct), you can set much-needed boundaries as a project lead. In fact, quite a few leads I know will automatically appoint someone else (usually at least two people) to do all code reviews. GitHub lets you set policies on this, which we discussed in a previous chapter, one of which is “each PR must have two reviews”.

Finally, response #6 is what will likely happen in the Real World, outside the pages of this book. It’s true: PR standards are there for a reason. If you’re the lead, it’s your job to make sure people respect the rules.

Note: This is where AI can really help you. If you’re unsure of your tone, throw it
into ChatGPT (or whatever you use) and see if it can be reworded. If you’re
unsure, just use fewer words and be direct with an emoji at the end. Never fails!

Reviewing the Review

My alter ego should have received an email, letting them know my review is done. Clicking on the link, they end up back at the PR:

The review comment is appended to the PR itself, as is the diff and my inline comment. Super useful! My alter ego can reply here, or just get to work.

Tip: just get to work. Replying with a thumbs up or OK isn’t needed unless you
work for someone who has asked you to acknowledge their comments. If you’re
unsure, let your lead know (or your team) that no reply on a PR is implied
acknowledgment.

My alter ego made the changes I asked for and updated the PR, adding
a comment for me:

They also clicked “Resolve conversation” as they did what I asked. You
can change who has the right to do this, but in general, it’s good form
to let the reviewer resolve any change requests. For small changes like
this, it can save time if the coder resolves instead.

You might wonder why outdated is there next to the file name. This
is letting us know that the code in that diff no longer represents the
actual code in the codebase.

We’re almost done! I can’t merge the PR yet as my review, as far as GitHub is concerned, isn’t finished. To achieve that, I click “View reviewed changes”, which takes me back into review mode.

Here, I can navigate each commit, including the latest one, to see what
has changed. The very last commit is what I’m interested in:

Looks good! Before I close out this review, I might want to navigate
through the commits using the dropdown in the top left (outlined).
This is, once again, why small, targeted commits are always a good
thing — they’re easy to review! The smaller they are, the quicker they
get approved.

Speaking of: let’s approve this. Once again, I’ll click the “Review
changes” button in the top right, but this time I’ll toggle “Approve”.
There’s no need for a comment here — so I’ll keep things terse and
speedy and just let it rip!

Back on the PR, you can see the timeline has updated, including a build
run, which we’ll discuss in a few chapters! We wouldn’t want to
accept a PR that broke the build, would we? If the build failed, we
could address it in the same way — using the review tools.

Add a Dash of AI

Your tone is everything in a code review that’s not in person. Terse is great for busy people, but can easily be misconstrued as “mean”. Being too nice isn’t good either, because diplomacy can easily breed misunderstanding.

As I write this chapter, GitHub just rolled out Copilot for comments
and PRs if you’re a maintainer of a project. It works pretty well, and
I’m sure it will evolve.

If I click the Copilot icon in a comment window, I will get a summary of the changes:

It takes a second, but once it’s finished, the comment box is filled with
a detailed explanation, complete with links:

Not a bad summary, even if it’s a bit verbose. If I'm a maintainer, this
can be incredibly useful for creating PRs from the start.

SUMMARY
You’re here to ship code, which is the biggest impact you can have on
your company. The next biggest impact you can have is to help others
do their best work.

As a lead or senior developer, that’s one of the major roles you will
play. By mastering the art of the code review, you can, quite literally,
change the course of your project and also change someone’s career
and therefore their life.

This isn’t hyperbole! It’s tempting to think that I’m exhorting you to
be kind and thoughtful, but I am absolutely not! The captain of a ship
does not need to be kind to be a good captain — they need to be effective.
That doesn’t mean yelling at people either, it means clearing blocks for
them and helping them to write their best possible code.

That doesn’t come through kindness alone. It comes through challenge, caring, setting an example, and yes, a dose of kindness here and there.

If people start requesting code reviews from you, it means they trust
you. Which means you will be the boss in short order.
TWENTY-FIVE
OH, RIGHT, DOCUMENTATION
NO ONE LIKES CREATING
DOCUMENTATION, BUT WHEN IT’S DONE
WELL, YOU’RE A HERO

Please don’t skip this chapter. I know you’re tempted to because I was tempted to not write it. The only thing worse than writing docs is writing about docs!

That said, I think I can make this more interesting for you and your
project. Good docs are a sign of a healthy project, which is a sign of a
healthy process, and that means good leadership.

I know, it’s probably not motivating you, still. Let’s keep at it, though,
and see what I can come up with, which will be in two parts:

Types of documentation.
Tools for documenting things.

Simple enough! And yes, I’ll keep this chapter short, but hopefully, I
will also make it exciting!

USER DOCS VS. DEVELOPER DOCS


This chapter is focused on Developer Documentation, which means the stuff you write for yourself, your team, and the developers in the future who are going to work with your application. This is a different concern than end user documentation, which describes how to actually use what you’ve made. That’s important too! But I’m not going to get into that here, as that is an entirely different discipline, and a valuable one at that.

WRITING DOCUMENTATION SUCKS


Have you ever wondered why that is? I love showing other people
what I’ve created, especially if it does a really cool thing. I’ll walk them
through the process, why I made it, and how it works. Documentation!
The only difference is that I’m telling someone a fun story of discovery
and creation, not dry, dull technical docs.

Here’s a question: can we write documentation that doesn’t suck? Of course we can, which is where I think we need to start. No one wants to read boring process manuals, so let’s make it more fun, shall we?

Here’s an example of what I mean:

This is a documentation site I built for an internal project I’m working on at Microsoft. As you can see, I have a template that I bought (using Tailwind CSS) and I’m writing about things that I think the end user (the public) will want to know when using the application we’ve created.

Dependency Injection is a funky principle and tough to make entertaining. If you pretend you’re explaining it to a friend at the pub, however, that can be a fun challenge!

Here’s another project with great documentation, Django:

I’ve gone through this tutorial, which is famous for being thorough
and written by a human for other humans.

Another project is Ruby on Rails, and their Rails Guides:



I have read all of these as well. Like Django, the text is straightforward
and human, not a dry wall of procedural nonsense.

My point: let your freak flag fly! Have some style (without trying to be
too cute, of course) and remember that your readers have no idea
what’s in your head, but are here because they like what you’ve put
together and want to know more!

STRATEGIES
The guides above, including the site I created, are online documentation,
but that’s only one type, or “strategy”, that you need to consider. Let’s
take a look at a few more.

Code Comments

Modern languages and frameworks like .NET, Python, Elixir, and Ruby allow you to add extensive comments to your code. I’ll assume that you know how to comment your code, and also how horrible comments can be.

Inline comments should serve one of two purposes:

- Why the code exists and is written the way it is.
- A placeholder for #HACK, #TODO, #FIXME, etc., indicating a need to refactor.

That’s it. Short, concise text that lets the person reading the code, often the reviewer, know why that code exists and why it’s written the way it is. This is a time-saving feature, mostly because reviewers (if they agree with you) won’t create an issue, and we all save time.

Tagging your code helps you remember where you need to go back and
fix things. Most editors and IDEs will help you with this, adding
bookmarks that help you click your way back fast.
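In JavaScript, those two purposes might look something like this; the code around the comments is invented for illustration:

```javascript
// WHY comment: explains a decision the code itself can't.
// We retry a couple of times because the upstream gateway
// occasionally drops the first request (hypothetical rationale).
const MAX_RETRIES = 2;

// TODO: pull MAX_RETRIES from config instead of hardcoding it.
// FIXME: retries should back off instead of firing immediately.
function shouldRetry(attempt) {
  return attempt < MAX_RETRIES;
}
```

The first comment answers “why”, which the code alone can’t; the tagged comments are the bookmarks your editor can jump to later.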

Long form comments are there to help future you understand what
you’ve created. They’re a journal, of sorts, which is a great way to
think about it.

Stepping back into a project after a long time away can be difficult,
frustrating, and discouraging. I think this is why so many people
rewrite their applications so much. They can’t remember what they
did (nor why), and they hate their code, like many people can’t stand
the sound of their voice.

Long Form Comments

Most programming languages support the idea of long form comments, which are more structured and provide a lot more context for a given class or function. Generally, they look something like this:

I asked GitHub Copilot to generate these comments for me, which basically repeat what the code is conveying just a few lines down.

Writing isn’t easy. Knowing what to write in a long form comment is nearly impossible if your brain is also filled with writing tests and how you’re going to refactor your code so it’s clear and concise.

Let’s update this to have a bit more meaning:

This is still a bit terse, but it does convey why the Checkout class
exists, that it’s a Model, and what fields do what.
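The screenshot doesn’t reproduce well in text, but in JavaScript a long form comment usually means JSDoc. Here is a hedged sketch of what a more meaningful comment on a Checkout model might look like; the field names are guesses, not the actual example:

```javascript
/**
 * Checkout is a Model representing a single checkout session.
 * It exists because the cart and the payment flow need a shared
 * record of what is being purchased, and for how much.
 *
 * @property {string} id - Unique identifier for this checkout.
 * @property {number} total - Order total, in cents.
 * @property {string} status - One of "open", "paid", or "cancelled".
 */
class Checkout {
  constructor(id, total) {
    this.id = id;
    this.total = total;
    this.status = "open";
  }
}

const checkout = new Checkout("chk_001", 2500);
```

Notice the comment leads with why the class exists, not a restatement of the code below it.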

Some languages, such as Elixir, will generate documentation for you if you take the time to add it. Here’s an example from Moebius, a data access library I created for Elixir many years ago, which is now being maintained by a lovely chap named Chase Pursley:

That might seem like a lot of effort, not to mention extra noise in the code, but look at the results:

If you publish your code to Hex, the Elixir package manager, you get a
“Hexdoc” that’s generated for you, so people can read all the fun
things you added in your long form comment.

I think that is interesting, but it’s mechanical. There’s no flow to it — you’re just reading over a collection of functions and modules. This isn’t a knock on Hex, by any stretch, it’s a great tool. It’s just a bit challenging to do a comprehensive, readable set of documentation using generator tools.

It is, however, better than nothing. Barely. Let’s see what other
choices we have.

README files

To borrow from Keats and Mary Poppins: a well-formed README is a thing of beauty and a joy forever. It’s the least a programmer can do, especially if you’re working in open source.

Here’s a recent experience I had, building an application using .NET. I needed the simplest data access tool possible, which meant using Dapper:

This is the README up at GitHub, and it’s delightfully detailed with simple instructions. It took me all of 2 minutes to figure out how to use it, which is wonderful.

Dapper is built around the idea that writing the SQL you need, in your
code, is just fine. I like SQL and I don’t disagree with this idea, but I
also like a little help now and again, building SQL from classes,
structs, or maybe a dynamic object or two.

As it turns out, there are some plugins you can add to Dapper, one of
which is Dapper.Rainbow. Let’s take a look.

Welp… there goes that idea. In the team’s defense: Dapper is pretty
dang simple and if you “get it”, you can likely figure out what it’s
going to do. I could install this package and probably use intellisense
to figure out how the CRUD (Create, Read, Update, Delete) functions
work. On the other hand… it doesn’t take that much time to kick up a
README, does it?

A README is a simple step up from long form comments, and is the very least you can do if you’re doing open-source work. For internal projects, a README will help new folks get up to speed quickly, and also help you clarify what you’re currently working on.

README Tools

Writing is hard, have I mentioned that? Luckily, we have a few tools at our disposal. The first is the README generator from GitHub, which will pop a README file for you:

This is interesting and useful, as are the options to add a license and a
.gitignore file… but it’s also a bit lacking:

Just the name of the repo and my description. What comes next?

The simplest README should have the following sections, no matter if it’s an internal project or open source:

- The name of your project, with a fuller description as to why it exists and the problems it will solve.
- Badges indicating the project health. I suppose these are optional, but they’re sure helpful when you want to know if a build is passing.
- A list of features. If you’re just getting off the ground, these can be planned features with checkboxes (using [ ] Feature Name in markdown).
- A roadmap so that people know what you’re planning.
- A quickstart. Sometimes people hope to kick the tires and see if what you’re doing is useful. This applies mostly to open source, but internal projects can benefit as well if you have a new marketing person or a PM who needs to ramp up fast.
- Development setup. This is key for new developers coming on and for people who might want to submit PRs.
- Where to find help. This can be a link to more extensive documentation, IRC, or the email of key people in the project.
- Code of Conduct. This is not just for open source! Set the tone for the project, and make sure people understand what will be tolerated, and more importantly, what won’t be.
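Put together, those sections make a skeleton you can paste into a new repository. The names, badge, and wording here are placeholders:

```markdown
# Project Name

A sentence or two on why this project exists and the problem it solves.

![build status](https://example.com/build-badge.svg)

## Features

- [x] A feature that already works
- [ ] A planned feature

## Roadmap

What's coming next, and roughly when.

## Quickstart

The fastest path to kicking the tires.

## Development Setup

How to clone, install dependencies, and run the tests.

## Getting Help

Links to fuller docs, chat, or the people to email.

## Code of Conduct

What will be tolerated, and what won't be.
```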

If you have an open-source project, consider adding some more details to your README the way that Dapper does. These won’t replace more complete documentation, but can often be sufficient to get people up and running.

If you need some more help, try a tool like readme.so from Katherine
Oelsner, a senior engineer at GitHub:

I love this. Writing Markdown isn’t terribly difficult, but knowing what sections to add can cause anxiety and lock up. At least for me!

READMEs are lovely, but nothing replaces a solid set of documentation for your future self and, ideally, your ever-expanding team! Let’s make sure they have what they need.

USING AI
Most code editors have some form of AI to help you, such as Copilot
for VS Code. You can ask Copilot to generate the start of a README
for you by using the @workspace chat participant:

This is a good start, but you’ll probably want to work on the prompt
and keep generating a few more times to get something useful. Which
you should then augment to sound human.

SIMPLE DOCS: USE YOUR REPO


Most code repositories have functionality that is there to help you
with your docs. You can use a Wiki or a static HTML site, for example.

Let’s see what GitHub has to offer.

The Wiki

The first thing you want to do is head over to Settings for your
repository and make sure that Wikis is turned on:

Some organizations choose to share their code as open source, which, I think, is great. If you’re doing open-source work, consider turning off the editing restrictions. That’s what wikis were originally! Open to all and policed as needed.

This can be scary, but as with most things, it’s only a problem when
it’s a problem, right? Also: contributors have to be logged in to
GitHub, so if someone defaces your documentation you can report
them.

Editing a page is basic:

You have your choice of text styles: Markdown, AsciiDoc, RDoc, MediaWiki, and more.

My example here is a bit empty as I don’t use the wiki for this repository, but here’s someone who does! This is the wiki for pg-promise, a Postgres query tool I love:

That’s a TON of documentation, and it’s right there next to the code,
right in the repository.

GitHub Pages

If you have a project that needs a little pop, GitHub Pages might work
well for you. By “pop”, I mean clean design, easy to read, and its own
domain.

Like this:

This is the Guidebook theme from the Bootstrap CSS folks, and it
costs $49. I think that’s worth it because I’m not a CSS person. You
can buy this theme and then plug it into the Jekyll static site
generator, which is the application behind GitHub Pages.

You don’t need to use Jekyll, however, you can use whatever static
app you like — as long as it doesn’t require a server. GitHub Pages,
you see, runs inside your repo from a special source, or a special
branch.

Let me show you what I mean.

In my Node/PG Starter repository, I have turned on GitHub Pages:



This setting tells GitHub to build the files in my /docs directory, treating them like a Jekyll application. GitHub will do that, and will serve that directory from the URL https://robconery.github.io/node-pg-start. I can change that to a custom domain, if I like, and I’ll show you how to do that in a second.

First, let’s have a look inside that /docs directory:

That’s it! Just a simple markdown file with some placeholder text:

If you’re not familiar with static site generators, this is a standard format: some YAML at the top with page metadata, and Markdown below.
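A typical page might start like this; the values are made up for illustration:

```markdown
---
title: Getting Started
layout: default
permalink: /getting-started/
---

Everything below the closing `---` is plain Markdown, which
Jekyll renders into the body of the page.
```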

When I go to my pages URL, however, this is turned into a web page!

This is how Jekyll works with GitHub Pages. It looks for Markdown
files with names that correspond to URLs, and builds out a static site
for you.

This site doesn’t have any pop, however. We can change that by
adding a few directories with templating and a theme, like Just the
Docs, which is free.

Jekyll started out as a simple blog application, but you can do all kinds of fun things with it, and I’ll leave it to you to read up on the documentation. But, if you’re curious, here’s my old blog, which was running on Jekyll until a year ago or so:

That’s what a Jekyll site looks like. I could pop this, for the most part,
into my /docs directory, and it would build just fine. I have also set it
up to be served from a custom domain, which is handled by
Cloudflare:

I love Cloudflare. I’m on the free plan, and they still offer free SSL certificates through a proxy that you don’t need to install anywhere; they just work. That means that my old blog, which is parked at https://blog.bigmachine.io, will work just fine.

This is a basic way to host a more complete documentation site, but it’s also more work. You have to work with HTML and CSS to get things to look the way you want, and you’ll need to be certain people don’t mess things up!

Let’s take a look at more automated ways.

DISCUSSION: GENERATORS
As we saw with the Hex toolset above, you can generate
documentation from your code. Some of it is pretty nice looking, too,
but they all share the same issue: they don’t read very well.

Documentation is for humans. It tells a story with a beginning, middle, and end that should bring your reader on a journey of understanding. I’m not trying to be deep and philosophical here — this is how humans think.

I like this summation, which I read on Reddit just the other day:

I agree with this; however, there are a few exceptions.

Swagger-like Tools

Sometimes people want to understand your API and don’t need the
full backstory of how the application was written. This is end-user
documentation, to be sure, but if your API is going to extend
someone’s application, then something like Swagger can be extremely
helpful.

This is an open-source application I’m building and, as you can see, I’m using Swagger to generate the API details for me. That’s what Swagger does: it scans your code, looking for attributes and other details, and builds out this web-based UI. You can set this to be public, or for internal use only.

For internal projects, this could easily be all you need for new
developers. Clicking on one of the routes, you see a lot of information:

This application sends broadcast emails to people, and before these emails are sent, they need to be validated. Here, we have a description of the endpoint as well as response types.

Setting this up means writing some additional code, but it’s worth it if
you ask me:

This is ASP.NET Minimal API, and as you can see, I’ve chained on a
method that documents the API using Swashbuckle, an open-source
project that builds out Swagger manifests. Swashbuckle is smart
enough to understand the request and response types too, so it builds
out examples and other documentation.

There are other choices out there, but Swagger (now OpenAPI) is the
most widely used.
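The code in the screenshot is C# and Swashbuckle, but the OpenAPI document that Swagger renders is just data. Here is a hand-rolled sketch in JavaScript of the shape such a document takes; the route and descriptions are invented to mirror the validation endpoint described above:

```javascript
// A minimal OpenAPI 3.0 document, built by hand to show the shape
// of what tools like Swashbuckle generate from your code. The path
// and descriptions are hypothetical.
const openApiDoc = {
  openapi: "3.0.0",
  info: { title: "Broadcast Mailer", version: "1.0.0" },
  paths: {
    "/broadcasts/validate": {
      post: {
        summary: "Validates a broadcast email before it is sent.",
        responses: {
          "200": { description: "The broadcast is valid." },
          "422": { description: "Validation failed." }
        }
      }
    }
  }
};
```

Swagger UI (or any OpenAPI viewer) takes a document shaped like this and renders the clickable endpoint listing shown in the screenshots.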

A FORMAL SOLUTION: DEDICATED APPS


Generating documentation or having it in your application repository
might not scale very well. For larger applications, your documentation
can end up bloating your repository and making things extremely
difficult. Jekyll, for instance, can break your build if you screw up one
of your pages.

If you would rather keep your documentation separate, I have good news for you! There are some ready-made solutions that are super simple to use.

Docusaurus

This open-source project generates a React application that you host yourself, which you can do for free using Vercel or Netlify. Or you can host it on your internal servers.

Using it is simple. You have to have Node installed, of course, and you can get things set up using a single command:

npx create-docusaurus@latest my-website classic

If you don’t know, npx is Node’s “just in time” binary installer, which
means you can avoid installing global tools you might not want. This
is setting up a project called my-website using the classic template,
which they recommend.

Once you do this, you run npm start from within the directory, and
you’re off and running:

At the end of the day, this is simply generating a React site for you that processes MDX (Markdown with embedded JSX), which allows you to do fun React-y things with Markdown.

Your documentation lives in the /docs directory, and you can set up
navigation however you like. If you’re a React person, this could be
fun for you.

Ready-Made Templates

At the beginning of the chapter, I showed you a documentation site that I put together for a project I’m working on:

This is built using Tailwind UI’s Syntax Template, which requires a membership that I gladly paid for because these templates are gorgeous.
gorgeous.

This site is also a React application, and it’s easy to use. I’m not a
React person, either, but the documentation for the template is
extremely thorough and simple to read. Same as Docusaurus, your
docs are in Markdown:

When I showed this site to my colleagues, they were extremely impressed. It says a lot about the thing you’re putting together when
it looks impressive, which I know might sound silly, but remember
The Laws! Mediocrity Kills, so make a big splash and go way, way over
the top to impress people in every way you can.

A beautiful docs site does just that.

ReadTheDocs

My Tailwind template costs money, and I think it’s worth it to streamline the documentation process and to make things look good.
If working with React, Markdown, and HTML doesn't sound like fun
to you, a hosted service might be worth your investment.

I used the free community version of ReadTheDocs for an open-source project I ran a few years back called Massive.js, which is now
maintained by my friend (and editor) Dian Faye.

The idea is simple: you create a /docs directory and hook up the
service, which will then cycle through your docs files and put together
a typical documentation site. You can extend it with themes, run rules,
and do all kinds of interesting things, which is great because if you’re
not doing an open-source project, the service costs $50/month.

Is this better than GitHub Pages? That depends on whether you want
to keep your docs on a separate service with a lot more control. I like
the simplicity of GitHub Pages, so I would probably go with that.

Gitbook

If you just want an internal knowledge base and need to keep things
private, Gitbook might interest you. It’s not free for businesses, but
the pricing is reasonable for what it does.

Ironically, years ago I used Gitbook as a starting point for The Imposter’s
Handbook. I loved the idea of writing a book in Markdown, using
GitHub for version control. The experience was… interesting.

The whole idea was that you wrote this book, and Gitbook would
generate a PDF and EPUB file for you. The problem was that it was ugly. I think others agreed with me because the service pivoted years back and is now concentrating on beautiful online documentation.

Gitbook’s main pitch is that you can integrate with other “knowledge
platforms” (Slack, Jira, Google Analytics, etc.) to build out a
knowledge base for your project. You can also focus on creating
documentation using their editor, which, I have to say, is fascinating.

It’s free to poke around, so if you’re interested, please go take a look. I’m not making any money here, so don’t worry about sliminess.

In summary form: you create a “space” in Gitbook where you and your
team can collaborate and build out a set of documents, like this one:

I generated this with a template, which is extremely helpful! You can integrate a space with GitHub as well, which will sync the documents
you create (which are Markdown in the background) with your
repository’s /docs folder, or with a given branch. This is what we’ve
been doing manually, but Gitbook wraps this all in a nice UI.

The editor will look familiar to you if you have previously used a
service like…

Notion

I discussed Notion as a project management tool in an earlier chapter of this book, and in that context, I mentioned that Notion is “a bit
unfocused”. You end up twiddling your template far more than
actually getting work done, which I don’t like.

When it comes to documentation, however, it rocks.

More and more companies are moving their knowledge management to Notion, and for good reason. Notion is amazingly easy to use. If you’ve
never used it before, it’s a super simple wiki on steroids.

As I mentioned before, everything in Notion is a page, and every page is editable in a million different ways. Pages can contain other pages, and you control the layout. You don’t have to make it up on your own, however, which can be terrifying. Notion comes with several prebuilt templates, like this API Reference:

It’s easy to add this to your space and then invite team members,
although that’s where Notion makes its money, and it’s worth every
penny if you ask me.

The best part is that you can publish your documentation to a static
website:

And it looks the same, but not editable, of course:



You can change the look and feel all you like, adding graphics and
video embeds and more. You can even set up your own domain, which
is wonderful.

Notion gives you a lot for free, but after that, you pay $8/mo/user for
smaller groups and $15/mo/user for larger businesses. You can do
internal knowledge management as well as external documentation
easily.

I’M OVERWHELMED, JUST TELL ME WHAT TO DO


I got you. I’m overwhelmed too just writing this chapter! If this was a
project I was starting today, I would create a monorepo with a /docs
directory in the root. I would go find a nice-looking HTML template,
read up on how Jekyll works (which takes an hour or so), and set it up
with instructions for everyone else on the team.

I know web apps, however, so this is easy for me. If I’m on a team, I
would delegate this to someone with web experience, making sure
that the entire team understands the importance of having something
that looks good because, well, mediocrity sucks, doesn’t it!

Part of every sprint would include documentation for the /docs that
we could discuss in the post. Personally, I wouldn’t focus too much on
the quality of the documentation as that will change as the project
takes shape. The last thing you want is for documentation quibbles to
hold up the sprint — but it’s a good idea to make sure your
documentation matures, and is versioned, along with your source.

Bloated docs are a good problem to have, too, and if you get to that
point, good for you! You should be able to hire a technical writer to
build out a Notion page for you — that’s what I would do. I like
Notion for docs, if you can’t tell!

So there it is: a super simple strategy. I like simple!


TWENTY-SIX
MONITORING
KNOWING WHAT’S HAPPENING WHEN SO
YOU CAN AVOID PROBLEMS

There used to be a natural boundary between programmers
and IT people, which is as it should be. IT people make rigid
rules about what can happen where, and programmers do
their best to bend or break those rules because rules don’t apply to
them.

But then DevOps happened, and things changed. Programmers started caring about servers and infrastructure, and IT people started writing code. I think this is a good thing because it pushed the idea of monitoring right into the codebase.

Programmers didn’t typically think about monitoring until after their site crashed, and they couldn’t figure out why. They resorted to SSH, logging directly into their server and viewing their /var/www logs using cat (that would be me, always).

I have more than a few ops friends, and every time I tell them some
horror story about my app crashing in production, they look at me
quizzically: what was your monitoring stack?

I’m sure they’re going to give me a hard time for putting this section
at the back of the book, too, but why break form?

YET ANOTHER CRITICAL THING YOU NEED


Aside from losing your production database, probably the biggest fear
that every tech lead has is that their site will crash, and they won’t be
able to do anything about it. Or, worse: they won’t even know.

Things crash. It’s a truth of the universe that you can’t change. What
you can change is how fast you can get it back up and running. That’s
where a good monitoring plan comes in, ideally one that doesn’t
involve waking you up at 3am!

Your Ops Team

As a lead, your job is to ship software. Keeping that software up and running (and tuned for performance) is also one of your jobs, but one that should be delegated to people who enjoy doing those things. Initially, this will likely be just you or a combination of you and one or two others on your team. As time goes by, however, and your application gains users, you’ll want to ensure that you have a team dedicated to the task.

It’s important to understand that monitoring isn’t solely “is my app still running?” That’s a part of it, to be sure, but there’s a lot more to it, such as:

Assessing application performance and bottlenecks.
Tracking errors.
System software and server health.
Database health and performance.
Log analysis, looking for things you didn’t think of.
Security.

That’s a lot to track, and if you’re a “noodler” (someone who loves getting lost in numbers) this can be overwhelming. You’ll see what I mean in the next section, as we dig into some popular services. You’re given these dashboards that are full of charts and graphs… it’s like entering another world altogether.

That’s my feeling, anyway. Let’s start from the outside and move our way inward. We’ll take a look at monitoring solutions that you can use for just about any platform, and then we’ll dig into each of the considerations from the list above.

POPULAR MONITORING SERVICES


The first service I think of when it comes to monitoring is New Relic.
They started as a Ruby on Rails monitoring service that you could
install with a single command, and have since branched out to handle
the majority of application platforms.

They have a generous free tier, but as you might imagine, the cost
goes up as your team and application grows. I think it’s worth every
penny! When I ran my business on Rails (Tekpub.com) many years
ago, New Relic saved my butt quite a few times.

It’s an all-in-one service, which means it tracks all the metrics that I laid out above (application, errors, server, etc.), and installing it is elementary, especially if you have a Rails application.

The dashboards are a thing of beauty, too:



These images are from New Relic’s blog, and highlight how you can
combine whatever information you need into your dashboard. I love
this… and it also scares me. This is the kind of thing I lose days on.
The numbers!

Another popular service is Datadog, which I haven’t used in the past:

The difference between the two services comes down to “deep” vs.
“wide”. New Relic’s focus is your application, and how it’s running in
your infrastructure. Datadog is the opposite: its focus is your
infrastructure, and how well it can run your application.

There are plenty of other services out there (Scout, Skylight, and so
on) but my experience is with New Relic, and my focus for this
chapter isn’t the service itself, it’s what the service is telling you. That’s
the important part.

It’s also worth noting that you don’t need to go with an all-in-one. There are individual open-source projects you can plug in to help you understand what’s going on. That said, having a hosted service is usually the smart choice, as the data it gathers can be used across all the metrics you need.

APPLICATION PERFORMANCE
Application Performance Monitoring, also known as APM, is typically
my focus when thinking about monitoring. It’s one of the major
reasons I went with New Relic many years ago — I think about things
from my application’s perspective.

What does APM even mean, however? For us, it means 4 things:

Response time.
Requests/minute.
Error rates.
Query performance.

Obviously, I’m coming at this from a web application perspective, serving HTML, JavaScript, and CSS to end users. I think it also applies to web APIs, but for those, we might not care that much about response times given that we’re likely sending JSON across the wire and there’s not much we can do to improve that.

Our service should show us these things in a single dashboard, like so:

This is New Relic’s APM dashboard and tells you everything you need
to know at a glance. I’m running this Rails application locally (it’s the
Rails version of my bigmachine.io site) so the traffic is slight, but
hopefully, you get the idea here.

Tracking Your Service Levels

A term that you’re going to get to know during your career is “Service Level Agreement”, or SLA. This can apply to end users, but mostly applies to applications that are themselves services. You want to be able to guarantee your customers a certain degree of uptime, which is usually expressed as a percentage, something like: “95% uptime guarantee”.

But how do you know you’ve hit that number? That’s what a good
monitoring solution will tell you:

I love this report. We’re breaking down the weeks of the year and
showing what % of our application’s performance is satisfactory,
tolerable, and frustrating. Looks like my app is doing pretty well.

For this report to make any sense, however, we need to have our
Service Level Objectives (SLOs) defined somewhere. You can define
these yourself, or New Relic can take a swing at it for you with
“typical” industry standards of 95% and 99% uptime.
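Doing the arithmetic on those percentages is sobering, and worth sketching out. Here's a small helper (plain Python, not tied to New Relic or any other service) that converts an uptime SLO into the downtime budget it actually allows:

```python
# Convert an uptime SLO percentage into an allowed-downtime "budget".
# This is generic arithmetic, not any particular vendor's calculation.

def downtime_budget_minutes(slo_percent: float, period_days: int = 30) -> float:
    """Minutes of allowed downtime for a given uptime SLO over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

# A 99% SLO allows about 7.2 hours of downtime in a 30-day month;
# 95% allows a day and a half.
print(round(downtime_budget_minutes(99.0), 1))   # 432.0 minutes
print(round(downtime_budget_minutes(95.0), 1))   # 2160.0 minutes
```

A 99% target sounds strict until you realize it permits over seven hours of downtime a month; 99.9% shrinks that to roughly 43 minutes, which is why those extra nines get expensive.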

Bottlenecks

There are just some things that happen slower than others, and this
can cause a traffic jam inside your application. Slow queries, poorly
maintained open-source packages, or just crappy code can cause this
— but how do you know where it is?

Your monitoring service should be able to tell you. Given New Relic’s
deep integration into your code, we can run reports like this:

This is from my live site, taken just now, and shows the pages with
the slowest (or “most time-consuming”) response time.

You can click on each of these transactions (or “pages”) to see a more
complete breakdown of what took so long:

This is the slowest transaction on my site, and I can see that my orders_controller’s index action might be causing problems. I’m still not exactly sure what’s going on, but we can dig in even deeper if we want:

Here, I can see a rollup of the entire stack trace over time, and decide
where I might want to improve things. Looking at this, I already know
what the issue is — I’m showing the last 50 orders on the index page
(it’s an admin page) which takes a while to render. Probably better if I
show just the last 10.

We’ll get into profiling your application more in the scaling chapter.
Watching someone who knows what they’re doing is truly an art, and
when they resolve the problem, they become a god.

In 1998, my company hired a person to fix our ailing ASP classic application. Within 3 hours he tracked the problem to a bad database query, then he implemented an index, and finally rolled everything to a stored procedure (this was SQL Server 6.5, where such a thing mattered). Immediately, our problems went away, and I wanted to cry.

That’s coming later. For now, let’s track our errors.

TRACKING ERRORS
The last thing you want is a potential customer emailing you or,
worse, tweeting out an error with your application:

That’s my friend Damian Edwards, and he tweeted that just this morning! When I first read his tweet, I thought “there has to be someone, somewhere, who knows this is happening… right?”

It’s happened to all of us, especially me. Errors are going to happen,
and they usually happen because:

Our customer adds data that we didn’t anticipate.
An external service is down, or has changed something.
Our server has issues, such as a full disk or a zombie service hogging RAM.

Each of these has happened to me, more than once. I had a customer from Japan use Kanji for their name, and my database wasn’t set up for UTF-8 (I was using SQL Server’s varchar when it should have been nvarchar. Silly error on my part). Shopify changed their API once without telling me, and all inbound orders began to fail.

I have had my disk fill up due to runaway log issues (from MySQL)
more times than I care to admit.

I should have known what was happening. Let’s take that one step
further: I should have known it was going to happen and prevented it.

Once again, a good monitoring service can spot these things and help
you out.

New Relic (and others) will monitor your infrastructure for you,
which could be your own server, a Docker container, a VM, or
Kubernetes. It examines CPU, RAM, disk activity and disk usage, and
will let you know if anything doesn’t “look right”:

We care about our infrastructure, but we also care about application errors and user experience (aka “things being slow”). That’s where alerts are critical, and New Relic sets up a default set for you:

Note: the Apdex score is an industry standard measurement: you count the satisfied and tolerating responses (a tolerating response counts as half), then divide by the total number of requests. Frustrated responses only show up in the total.
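New Relic computes this for you, but the standard Apdex formula is simple enough to sketch in plain Python. T is whatever response-time target you pick; the 500ms target below is just an example:

```python
# Apdex: classify responses against a target threshold T.
# satisfied: t <= T, tolerating: T < t <= 4T, frustrated: t > 4T.
# Score = (satisfied + tolerating / 2) / total.

def apdex(response_times_ms, target_ms):
    satisfied = sum(1 for t in response_times_ms if t <= target_ms)
    tolerating = sum(1 for t in response_times_ms if target_ms < t <= 4 * target_ms)
    total = len(response_times_ms)
    return (satisfied + tolerating / 2) / total

# 6 fast, 2 tolerable, and 2 painfully slow requests against a 500ms target:
times = [120, 200, 340, 400, 450, 480, 900, 1500, 3000, 5000]
print(apdex(times, 500))  # 0.7 -> (6 + 2/2) / 10
```

A score of 1.0 means every response satisfied the target; anything drifting below roughly 0.7 is where your users start grumbling.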

Setting Up Alerts

It’s good to get to know these conditions because figuring out who
gets notified how and when is paramount. A good service will help
you with this, making sure that the right people get notified at the
right time.

Let’s start with handling application errors, or “transaction” errors as New Relic likes to call them. I’ll click on the link above, and here I can see the alert settings:

The first thing to notice is that no one will be notified unless there are
more than 10 errors during a 5-minute period. I think that’s a bit
high, so let’s dial that down, shall we?

There’s an edit button just off-screen in the image above. I’ll click that
and tweak how this incident is triggered:

The nice thing about this screen is that the incident graph just above
will change as your values change, showing you what you would have
been notified about in the past.

I’ve set the delay to 0 minutes, as I want to know right away!

Now let’s head back to the alert screen, and I’ll select “Notifications”
from the tab at the top of the screen:

Here we can see a single notification setting, which is an email notification that goes to me. You can opt to have things go to Slack, PagerDuty (we’ll talk about them later), a Webhook, or other services. Email works for me, so I’ve set that up.

Let’s see if this works! I’m raising an error in my lessons#show controller to see what happens now.

I got an email letting me know that, yes, I have an error. An incident has been opened, and I need to acknowledge it or New Relic will keep bugging me, which is great!

Clicking on “Acknowledge” takes me to an incident summary screen that has a load of data, but the thing I care about most is the error itself. There’s a link on the incident screen that I click, and here we are:

This is where New Relic and services like it start paying for
themselves. I can see how many times this error has happened and if I
click on it, I get a full stack trace:

There’s even a place to discuss the issue and how best to solve it.

We’ll discuss triage in the chapter on Disaster Recovery, but that is the next logical step: how do you address incidents when they come in? You can’t just wing it, you have to have a process in place. Duplication of effort or, worse, assumption that someone else is addressing the problem — these are things we absolutely do not want.

For now, let’s move on to server health.

SOFTWARE AND SERVER HEALTH


Things can go horribly wrong with your application, and your code might not have anything to do with it. Bugs lead to small outages; network and server stuff lead to the big ones.

Consider the moving parts between your user and your application. As
I sit here and make a request to my main site, bigmachine.io, here’s
what’s happening:

My request goes through my home network and off to my ISP (Cox).
It then bounces around inside Cox for a bit, then is routed to the wilds of the internet.
Various name servers ping pong the request until a destination is found (Cloudflare).
Cloudflare proxies the request, bouncing and rerouting through its infrastructure.
Eventually, the request makes it to Digital Ocean, where my site is hosted, and is bounced around their infrastructure until my VM is found.
My VM is running Dokku, which is an automated Docker setup that works like Heroku. One of these services has Nginx running, which handles the inbound request, and that’s where my code is fired — right at the very end.

The internet is a modern miracle and truly one of the wonders of the
world. If you think about the ridiculous number of services that need
to be running and healthy just to get my request through to my code — it’s
mind-boggling.

I bring this up to highlight one thing: there is a lot of opportunity for something to go wrong between your user and your application that doesn’t involve your code. But how do you monitor such a thing?

Most services can do server or infrastructure monitoring. If your application is running in a cloud somewhere (Azure, AWS, GCP, etc.), they will likely have some type of monitoring that they offer as part of hosting your infrastructure. I think that’s a good first step, but I’ve had mixed results getting these monitoring services to work the way I expect.

Two years ago, I set up monitoring for a hosted Docker service with
one of the major cloud providers. I turned on Health and Monitoring,
added my email to receive alerts, and promptly heard nothing when
my container failed to start. It turns out I was using the wrong
monitoring, which… of course I was. I’m still not sure what the
“right” monitoring would have been.

Anyway. I tend to lean on third-party monitoring services when it comes to server health because they will let you know if your application is unreachable and, often, they’ll be able to tell you (almost) exactly where it is.

This is another service that New Relic provides, and why I love them:

It’s Midnight, Do You Know Where Your Services Are?

Let’s say it’s late at night, and you’re about to go to bed, when your
phone chimes (you have a rule to let your monitoring service through
your Do Not Disturb if it’s a critical issue). You mutter “boundaries”
as you wander to your phone, and there it is: your monitoring service
is alerting you to an outage.

This outage can be one of four things:

Network. People can’t get to your site for some reason.
Server. Web, database, VM, Docker — these are things that are yours that can go down for various reasons, such as a full disk.
Code. A 500 error has happened enough times that it warrants attention.
Security. A breach has been detected.

In an ideal world, you will have a team in place to respond to each of these incidents. Unfortunately, your team is still small, and you’re the one who gets woken up when things go wrong.

We discussed handling application errors above, and we’ll get to security issues in a minute. If it’s a network problem, it’s likely there’s nothing you can do but hope someone else’s ops team is on it.

If it’s one of your servers, sleep is going to wait for a while. Unless, of
course, you have a monitoring service that can tell you exactly where
the problem is:

At a glance, your infrastructure dashboard should tell you:

Server load. How many requests/responses are coming through over time.
CPU and RAM usage.
Disk utilization and storage space.

I think I mentioned above that I’ve had servers crash, and it was
extremely annoying. I was running my Rails app using Dokku, and the
MySQL logs filled up the disk — I still don’t know why. It took 6
months for this to happen, and thankfully I could up the size of the
VM to make the problem go away.

You know what would have been better? Having a monitoring service that warned me ahead of time. This is actually when I started using New Relic, and for this very reason. Just 3 months later, Redis was about to tip over because I was using it for shopping cart data and a few other things, and I forgot to set an expiration for the data. Oops. I got an alert from New Relic that told me my caching server (Redis) was at 80% of its RAM, and I should probably fix that.

This will happen to you, at some point, and my hope with this chapter
is that you see the value in having a monitoring service without me
needing to convince you. Handling problems before they become
problems is going to make you look like a champion — and that’s a
good thing because you are a champion, right?

Back to you, staring at your phone, wanting to go to bed. You received the same notice I did: your caching server is using 80% of its RAM and will reach 100% in 3 days.

What the hell is causing this?

LOG ANALYSIS
Redis is one of those wonderful services that you fall in love with the
very first time you use it. It’s so simple and so fast, and I know many
people who use it as their application data store without a second
thought.

If you don’t know what it is: Redis is an in-memory data store that
uses various data structures to store its data. With a relational system,
you have tables with rows and columns, which is the only data
structure you’re given. Redis gives you simple key/value structures,
hashes, lists, sets, and more. It really is a fascinating system.

That said, most people use it for caching and session management,
which makes good sense. The trick, however, is making sure you
understand your traffic and site usage well enough to manage Redis as
you need. The metric you really care about is concurrent active users,
who are people actively using your application to do whatever they do.

Most caching and session libraries will have default expirations built
in. Sessions, for instance, are typically 20 minutes long. Caching is
something you tune yourself, and we’ll get to that in a later chapter.

So, you take your best guess, set things up, and hope for the best. Or, maybe, since you’ve read this book, you’ll do the next best thing, which is to run log analysis to ensure your measurements are correct.

That’s why we have logs!

We’ve discussed logging throughout this book, but this is where the
payoff happens. As I mentioned before: logs tell you the story of your
application. Or, more accurately, the story of how your users use your
application.

There’s a lot in there, and hopefully, you’ve set your logs up to record
the things that need recording. We covered that in a previous chapter,
but here we are, after we’ve deployed, realizing that we need our logs
to tell us more about our caching strategy.

That’s OK! Logging is an ongoing thing, and we can adjust what we’re
doing on the fly. We can also ensure that we’re using our monitoring
solution to its full extent:

These are the logs for my Rails application, and as you can see, they’re pretty verbose, and that’s fine — I would rather log too much than too little. Well, sort of. It can be expensive if you log too much, but what we have here is fine.

At this point, your monitoring service should let you sift and query
your logs, even parse them according to some rules. If you don’t like
this query screen, you can export a subset of your logs to Excel or
some other spreadsheet, transforming the logs into something more
meaningful to your research.

The first thing we might want to know is our cache hit rate. How often are we using the thing? The next is cache expiration rate (also called cache eviction). We can know this exactly by analyzing our logs. We might find that we’re over-caching things, which is putting an undue strain on Redis, so we might decrease our session time to 10 minutes, and reduce the expiration time of our other caching efforts, or remove them entirely.
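As a sketch of what that analysis might look like: suppose, hypothetically, your app logs one line per cache lookup with a HIT or MISS marker. Computing the hit rate is then a one-pass scan. The log format below is invented, so adapt the parsing to whatever your logger actually emits:

```python
# Hypothetical log lines — your real format will differ.
log_lines = [
    "2024-01-10T10:00:01 cache HIT  key=session:42",
    "2024-01-10T10:00:02 cache MISS key=product:9",
    "2024-01-10T10:00:03 cache HIT  key=session:42",
    "2024-01-10T10:00:04 cache HIT  key=cart:7",
]

def cache_hit_rate(lines):
    """Fraction of cache lookups that were hits."""
    hits = sum(1 for line in lines if " HIT " in line)
    lookups = sum(1 for line in lines if " HIT " in line or " MISS " in line)
    return hits / lookups if lookups else 0.0

print(cache_hit_rate(log_lines))  # 0.75
```

Your monitoring service can usually run this kind of query for you, but it helps to know exactly what number you’re asking it for.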

At this point, we might want to make an entirely new dashboard so we can see the realtime load on our cache vs. the load on our server (CPU, RAM, disk activity). This isn’t something you do for an hour or so — you’ll want a few days, preferably a week, of data so you can get correct averages.

So, how did I resolve my issue with Redis? I immediately bumped the
size of the VM and I reduced the session time from 20 minutes to 10.
I also realized, as I was discussing above, that I didn’t need to do the
model caching (I was using Rails) that I was doing. My web and
database server will be able to handle the load just fine.

Oh, and I added expirations to my cart data. That was extremely silly.

SECURITY
I hesitated adding this section to the “monitoring” chapter as security
is a world unto itself. That said, it is absolutely something you’re
going to need to think about, at some level.

But what do we mean by “Security”? Like I said: it’s a gigantic topic, and you can write 10 books on any subtopic within that lofty label, so let’s narrow things down a bit.

When discussing security, in the realm of monitoring things, we need to think about:

Users being jerks. Some people just love to poke around at your site and see if they can do things like SQL Injection attacks, cookie hijacking, credential stuffing, etc. Perhaps the nicer way to think about this is that these people are actually doing you a favor, especially if they find something. There is a dark side to it, however, which I’ll get to.
Strangers being jerks. I won’t lie, it’s a rush when you realize you’ve figured out how to break into a system. When I was a kid, I used to go to the mall and slip into the service corridors behind the shops. I wasn’t supposed to be there, that was the point. I didn’t want to steal anything… just… wanted to be a jerk.
Bots being jerks. Denial-of-service attacks, DNS hijacks, port scanning, and platform vulnerability attacks. So, so annoying!

If a user is messing with your site, there’s not much you can do aside
from hoping they don’t find something and, if they do, that they’ll let
you know. This, believe it or not, has become an industry, and an
annoying one at that.

The Beg Bounty

There are scanning tools you can use (I’m purposely not going to link
to them) that will scan a website, DNS records, and a web server,
looking for vulnerabilities. Occasionally you find good ones, other
times you find ones that are, shall we say, “nice to haves”.

Sometimes, if you’re unlucky, some random person will scan your servers and DNS for you, sending an email like this one, to my friend Troy Hunt:

If you reply with a “Positive response”, you’ll likely get a request for
money so they can tell you how to fix a DNS record that isn’t filled out
“correctly” (usually an SPF record ending with ~all).
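For the curious, an SPF record is just a DNS TXT entry, and the ~all at the end is a softfail policy, which is the usual nit these reports pick at (a hardfail would end with -all). A typical record looks something like this, with placeholder domain names:

```
example.com.  IN  TXT  "v=spf1 include:_spf.example-mailer.com ~all"
```

Whether softfail vs. hardfail actually matters for your domain is a judgment call, not the five-alarm vulnerability the beg bounty email makes it out to be.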

If you don’t pay, they won’t tell you what they find. Often, it’s
meaningless, but occasionally, it isn’t, which is a shame. I do find,
however, that the people who find an actual problem will simply tell
you and not ask for money.

The Impending Data Breach

If you knew, beyond a doubt, that the data in your database was going
to be stolen, what would you do differently? Keep that in mind as we
imagine this at a deeper level.

You come in to work on a beautiful Tuesday morning and are excited to plan your team’s next sprint, which will be adding a much-requested feature that your customers love. You do your normal morning thing, except today there’s an email from Troy Hunt, the same person in the tweet above.

Hi, my name is Troy Hunt and I run a service called “Have I been pwned?” that tracks data breaches and compromised passwords. Unfortunately, you might be in for a very bad day.

I’m after your support in helping to verify whether a data breach I’ve been handed is legitimate or not. If you’re willing to assist, I’ll send you further information on the incident and include a small snippet of your (allegedly) breached record, enough for you to verify if it’s accurate. Is this something you’re willing to help with?

Note: Troy sends emails in the context of each breach, and what I wrote above is
just a representation.

At this point, your emotions will follow a typical flow: denial, anger,
depression, acceptance. I think most people who receive emails from Troy
do, indeed, have very bad days.

You look at his service, decide this could be a legit email, and reply,
letting him know you’ll help as needed. If you asked him to get
stuffed, however, he would reply with a cordial email letting you know
he’s going to go public with it anyway, and you better do the right
thing.

This isn’t Troy being a jerk. Just about every country (and in the US,
all 50 states), has legal requirements for disclosing data breaches, and
you could be in big trouble for ignoring these laws. If you do have a
breach, you need to notify your customers in a timely manner, notify
law enforcement, figure out what was breached and then notify people
affected before Troy does. Oh, and you should probably speak to a
lawyer or two.

Let’s assume you did almost all of this, aside from emailing your
customers, whose data is now in the hands of some hacker.

What are you going to write? Your lawyer will help, to be sure, but
when it comes to the data that was stolen, what data protection
measures will you describe? What data did you need to store?

This is far more important than monitoring and, like so many things
in this book, I could spend countless pages detailing “how to store
data securely”. Instead, I’ll offer this summary:

Don’t store it unless your business depends on it. IP addresses, for instance, can identify a person’s location and much more. If it’s not critical, the safest thing to do is keep it out of your system.
Know what your logs are tracking, and ensure that no
personal information is in there!
Don’t use passwords if you can help it. I prefer social logins
and magic link emails. If you have to use passwords, make
sure you’re using a strong hashing algorithm.
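To make that last point concrete, here’s a minimal sketch of salted password hashing using Ruby’s standard library. PBKDF2 is just one reasonable option (bcrypt and Argon2, via gems, are also popular choices), and the iteration count here is illustrative, not a recommendation:

```ruby
require "openssl"
require "securerandom"

# Illustrative only; tune to your hardware and current guidance.
ITERATIONS = 200_000

def hash_password(password, salt = SecureRandom.hex(16))
  # Derive a 32-byte key from the password and a per-user random salt
  digest = OpenSSL::KDF.pbkdf2_hmac(
    password,
    salt: salt,
    iterations: ITERATIONS,
    length: 32,
    hash: "SHA256"
  )
  "#{salt}$#{digest.unpack1("H*")}"
end

def verify_password(password, stored)
  salt, _ = stored.split("$")
  # Real code should use a constant-time comparison here
  hash_password(password, salt) == stored
end
```

Note that every call generates a fresh salt, so two users with the same password still end up with different stored values.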

The big thing here is how passwords are stored if you use them.
Encrypting all personal information and keeping the key safe (in your
cloud’s key vault, hopefully) will help, but if your entire database was
nabbed, stealing your key is likely a matter of echo $SECRET_KEY. If
you’re using a secure key vault, that’s great!

Your countermeasures should be in your disclosure email because if they’re missing, people will think the worst. Hopefully, your notice will detail the protections you offered, right down to log obfuscation.
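Log obfuscation doesn’t have to be fancy, either. Here’s a tiny, hypothetical Ruby scrubber that redacts email and IP addresses before a line ever reaches your log file. The patterns are deliberately simple illustrations, not exhaustive PII detection:

```ruby
# Redact obvious personal data from a log line before writing it.
EMAIL_PATTERN = /[\w.+-]+@[\w-]+\.[\w.-]+/
IPV4_PATTERN  = /\b(?:\d{1,3}\.){3}\d{1,3}\b/

def scrub(line)
  line.gsub(EMAIL_PATTERN, "[email]").gsub(IPV4_PATTERN, "[ip]")
end

puts scrub("login ok for rob@example.com from 10.0.0.5")
# => login ok for [email] from [ip]
```

Hook something like this into your logger’s formatter and the raw values never hit disk in the first place.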

Denial of Service

People have agendas, and occasionally, you become part of their story
and find yourself awake at 0200, wondering why your application is
being picked on. Once again, however, we need to move on from why
and get straight to dealing with it.

Distributed Denial of Service attacks, or DDoS attacks, are the result of a large swarm of bots targeting a server with a specific request format. Imagine your current infrastructure receiving a throughput of 2.3 terabits per second for days. This happened to AWS back in February 2020 and if you had an application in the affected zones, you had a very bad time of it.

Mitigating a DDoS attack typically involves having a service sit in front of your infrastructure, proxying requests and guarding against attacks. DDoS attacks are so common these days that services can automatically respond, shifting your DNS records on the fly and putting up request filters automatically.

Probably the most well-known service that does this is Cloudflare:

I’ve been using them for many years and I love the service. I’m on the
free tier (I haven’t needed their full suite of services), and they host all
of my DNS for me, and also provide my web apps with a free SSL
certificate. So easy to use, and if I’m ever DDoS’d, I can toggle a
switch:

They also have security dashboards, allowing you to scan your infrastructure, see suspect traffic, and even investigate IP addresses, domains, or URLs that you think are suspicious.

Most cloud providers have services that do a portion of what Cloudflare does, but I think it’s fair to say that Cloudflare’s presence is almost mandatory if you run anything online.

Why Is This Security Stuff In Your Monitoring Section?

Security is a big subject that deserves its own book, written by someone who understands the subject well. Troy Hunt and Scott Helme have online courses at Pluralsight, if you’re interested in finding out more, but to answer the question above: I had to put this section somewhere.

Threat detection is something you think about when thinking about monitoring, but to me, it’s far better to assume the worst is probably going to happen, so plan for it by not storing what you don’t need to store. No one wants to give up data, that’s a given, so of course do everything you possibly can to harden your security (start by watching those courses above, or hire a security specialist, etc.).

That’s why this section is here. Challenge yourself, your managers, and your team when storing information. Always assume it’s going to end up in Troy’s hands at some point.

BUSINESS REVIEWS
Monitoring isn’t just for the engineering team. There is a lot of
business insight to be gained from careful log reviews and traffic
analysis. Google Analytics is usually where people go for this kind of
information, but it’s not totally complete given the rise of script
blockers and secure browsers, like Brave.

A monitoring system like New Relic can be set up to handle all kinds
of analysis, even for use with marketing:

I didn’t do any of this by hand, it’s a dashboard that New Relic offers
you with a single click:

I don’t have any values in here because this is my test Rails application and I do all my sales stuff through ThriveCart, which is a service I also love, but hopefully you can see how you could set up and customize this report to suit your needs.

This is where log reports and analysis start overlapping with business
analytics and, if I’m honest, the two don’t really mix. We’ll get into
reporting in a few chapters, but for now, I will say that monitoring
reports for checkout are still quite valuable. This is a sales report
that’s more for engineers than anything, focused on page load times,
transaction elapsed time, etc.

SUMMARY
There are so many great monitoring suites out there, including open-
source projects (like Grafana) that allow you to configure things as
you need. I love open-source solutions, but if I’m honest, I’d rather
pay someone to run monitoring for me. This is my opinion, of course,
and I suppose it’s also informed by having used services like New
Relic that I already know. If your servers are in your data center, then
Grafana (and tools like it) is a great choice.

In the next chapter, we’ll get into Disaster Recovery plans, and what to do when your monitoring system wakes you up at 0200. Yes, it’s going to happen, so let’s have a plan in place for when it does.

The first task: appoint someone else to be woken up at 0200…


TWENTY-SEVEN
WE’RE GONNA NEED A
BIGGER BOAT
THE ART AND SCIENCE OF SCALING YOUR APPLICATION

Back in the mid-2000s, “Web 2.0” was emerging from the ashes of “Web 1.0”, otherwise known as the DotCom boom, where I got my start in the tech industry. Wild times, back then, but that’s for another book.

Web 2.0 was all about groovy scripted languages, like Ruby and
Python, and their equally groovy web frameworks, such as Rails,
Sinatra, Django and others. Building with these new frameworks was
fun because the development experience was dramatically better. You
could prop up a full commerce site (for example) in weeks instead of
months, and get paid that much sooner.

Some existing software communities, specifically Java and .NET, looked right down their noses at these “new” languages and frameworks, proclaiming that “there’s no way they can scale”.

That, friends, is the dismissive nonsense that has plagued the Ruby
and Python communities for years, including JavaScript and Node in
more recent years: they’re cute, but they don’t scale very well.

The comment, like the sentiment, is meaningless and ignorant. Let’s understand why.

WHAT SCALING IS AND ISN’T


When you “scale” something, you change its size for one reason or
another. With a picture, map, or a model, scaling simply means
making a larger (or smaller) version of the original. But what does this
mean for a software application?

Like so many things in the programming world, all context is lost due
to the abuse of the word itself. In short, scaling can mean increasing:

Capacity. More users can use the application at any given time.
Speed. The application responds faster because of more RAM
or CPU.
Availability. The application remains up, even if there’s some
kind of problem internally.
Maintainability. You will need to add things to your
application over time, which many programmers confusingly
refer to as “scaling” architecturally because you’re making the
application bigger… I guess.

Context is king, and you’ll need to gauge the topic of conversation when the word “scaling” is thrown around, but a good rule of thumb is that people are mostly discussing capacity, and how well an application behaves under load.

The Flat Line

When you talk about scaling for capacity, what you’re hoping for is the
flattest response line you can get:

This is a very typical graph that depicts “good” scaling. Applications with scaling measures (caching, proxies, etc.) don’t respond as quickly as applications without all the scaling bits in place. With only one user, every application is a rocket!

Those scaling measures kick in as the requests go up, which helps the
response times stay “flat”, which is what you want.

The Speed Lie

Many programmers will discuss how “fast” their programming language and platform are, highlighting one benchmark or another showing millions of requests per second. I do think these are interesting benchmarks to consider, but the raw speed of a language, platform, or framework does not mean it will scale well! At least if we’re discussing capacity.

I’m going to bold this because if you take one thing away from this
chapter, I want it to be this: most of the time, applications aren’t
slow because of the CPU. They’re slow because of I/O.

I/O (input/output) is calling out to the file system, database, mail client, logging service, or anything else that isn’t directly computable by your server’s CPU. Your .NET web service might be screamingly fast, but if it’s waiting for a SQLite query executing on a small free-tier cloud image, all that speed is meaningless.

Unless, of course, you take advantage of asynchronous programming.

The Async Promise

Ryan Dahl, the creator of Node, understood the I/O problem very
well:

If you’ve done traditional web programming, you’ve probably used activerecord and you access some record. You use a function to do the I/O, but what does your software do while it’s accessing the database. In many cases, nothing. It’s the year 2010, we’re using Rails, and when you access a database, it stops, the world stops for who knows how long, the database might be in LA, and it takes 2 seconds to respond… When you access stuff in the CPU, it’s very fast. You can assume any operation to take zero amount of time, until you access the disk or the network. It’s not appropriate to treat operations in the CPU in the same way as operations on disk or I/O. Abstracting I/O as a function doesn’t make sense when the time-frames are so different.

I remember when this presentation came out. It absolutely shocked the web community, especially the numbers that Ryan was discussing (thousands of concurrent requests being handled, which was a lot back then).

Web applications that were built back then can be thought of as a restaurant with exactly one person working there. They wait on your table, get you drinks, cook the food and then serve it. If it’s only you, yay! The service is nice and fast.

When more people come in, you get crappy scaling because our server/cook/bartender gets busier and busier. They may have been smart and prepared (cached) things beforehand and can keep service limping along, but not as well as a restaurant with two people.

This is asynchronous programming. There’s a server and a cook, and the server coordinates with the cook and can bounce around the tables, doing I/O with the kitchen as needed, when needed.

To be clear: you could do this with some web frameworks already, using threading and event-based libraries, but it was difficult because most languages at that time were synchronous, unlike JavaScript, which is asynchronous by default.

Fast-forward to today, and platforms like .NET and Java have async
facilities built right into the frameworks. Other languages (like Ruby
and Python), have done their best to build in optimizations where
they can as well.
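If you want to see the restaurant analogy in code, here’s a toy Ruby illustration. Each “I/O call” is just a sleep standing in for a database or network round trip, and because Ruby threads release the interpreter lock while sleeping, the three waits overlap:

```ruby
# Simulated slow I/O: three sequential calls cost ~0.3s of wall-clock
# time, while three overlapped calls cost roughly one call's worth.

def slow_io
  sleep 0.1 # stand-in for a database/network round trip
end

t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
3.times { slow_io }
sequential = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0

t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
3.times.map { Thread.new { slow_io } }.each(&:join)
overlapped = Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0

puts format("sequential: %.2fs, overlapped: %.2fs", sequential, overlapped)
```

The exact timings will wobble a bit from run to run, but the overlapped version always lands well under the sequential one.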

The State of Play

Asynchronous programming can help tremendously, but it’s not the only issue when it comes to scaling for capacity. Many times, you can achieve the same type of scaling if you use a load balancer in front of a set of containers (see the Kubernetes section). Is your Rails diner buckling under the rush of customers? Great! Open up a few more across the street.

That said, things have changed quite a lot when it comes to compute
power. If you don’t follow Kelly Sommers on Twitter, you should. She
can be a bit spicy at times, and quite challenging, but she definitely
has opinions:

This is a massive subject, obviously. Discovering what is slowing an application down, and then fixing it, is quite the career skill. People quite literally make millions per year to do just that. This line of work will also seriously flex your skills at diplomacy and your understanding of human interactions.

There is one thing, however, that goes beyond diplomacy and tact.

MEASURE, MEASURE, MEASURE


Aside from software architecture, I honestly don’t know of a subject in
Computer Science that will make otherwise kind people act like
dismissive jerks. Posturing, sarcasm, passive-aggressive attacks,
insults, and overall childish behavior seem to be the norm when a
scaling conversation starts.

There is only one way around this: numbers. You have to have them;
otherwise you have no idea what you’re talking about. By “numbers”,
I mean:

Exact numbers (requests/second, concurrent users, disk usage, RAM, CPU, etc.) at the exact time of any error or service failure.
Benchmarks of the current system, as deployed.
Production numbers from every part of your application over a
period that makes sense (a week, averaged, is typical).
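As a toy example of pulling one of those numbers out of raw data, here’s a sketch that derives a requests-per-second figure from access-log timestamps. The log format here is invented for illustration; your monitoring suite will do this for you, but it’s worth knowing the arithmetic is this simple:

```ruby
require "time"

# Four made-up access-log lines spanning two seconds
lines = [
  "2024-05-01T10:00:00Z GET /courses 200",
  "2024-05-01T10:00:00Z GET /about 200",
  "2024-05-01T10:00:01Z GET /courses 200",
  "2024-05-01T10:00:01Z GET /checkout 500",
]

stamps = lines.map { |line| Time.parse(line.split.first) }
window = (stamps.max - stamps.min) + 1 # seconds covered, inclusive
rps = lines.size / window.to_f

puts format("%.1f requests/sec over %d second(s)", rps, window)
# => 2.0 requests/sec over 2 second(s)
```

Averaged over a week, a number like this is the baseline you compare against when something starts misbehaving.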

What you’re trying to do is decide if there’s a problem and, if there is, what the best course of action is. You can guess, or you can measure. Thankfully, we set up comprehensive logging in a previous chapter, so we have some reasonable numbers.

Deciding If There’s a Problem

As programmers, we don’t have a good grasp of problems. This is an odd thing to assert, on my part, because our job is to solve problems, so it would seem that we should know a problem when we see one!

Unfortunately, that’s often not the case. I would offer that quite a few
problems exist between our ears, and not on the screen. We write
code “defensively”, and think about things that could happen — and
this is a great skill to have as a coder. This is figurative problem
assessment, not practical.

What do we mean by “practical” problems? When crossing the street, you know that you need to look for cars or bicyclists so you don’t step off the curb in front of them. You think about this well before crossing the street, running the scenario in your mind unconsciously.

What happens, however, if you didn’t see that large bus barreling
toward you? You somehow managed to step off the curb, right into the
path of a bus that’s now screeching to a stop…

What do you do then? Did you have a plan for getting out of the way of a speeding vehicle, or just a plan to look both ways? This is practical (jumping backwards) vs. figurative (looking both ways). In the short time you have before possible impact, you might want to answer some very pressing, very direct questions:

Does the bus have enough momentum to actually strike you?
Should you jump forward, possibly getting hit by another car, or backward, possibly being blocked by a person behind you?
What happens if you jump off the ground? Perhaps the bus is moving slow enough that you might glance off the front of it without injury.

The good news for you is that each of these questions is immediately answerable, provided you have the time to investigate. Brains are interesting during times of crisis, and your peripheral vision might tell you that no, there is no car coming from the other direction, and you’re very close to that side of the bus, so let’s leap that way.

Your amazing brain might also make a snap judgement about the forward speed of the bus, realizing that it will stop many feet away from you, so no action is necessary.

Bringing this back to something a bit less life-threatening: when you’re evaluating a scaling plan for your application, you need to know if you’ve just stepped off the curb into traffic, or if it’s safe to cross.

I just snapped a screenshot of my live Rails app that I’m using for my
production site at bigmachine.io:

I have a “good” user experience score according to New Relic, and my Apdex score is also good. That said, I have a single query that took 450ms to run, and I can tell you that (from looking at my logs) it was my posts_controller fetching a single post, which is alarming.
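If you haven’t run into Apdex before, the math behind the score is simple: pick a target threshold T, count responses at or under T as “satisfied”, responses under 4T as “tolerating” (at half weight), and everything slower as “frustrated”. Here’s a quick sketch with made-up sample times:

```ruby
# Apdex score: (satisfied + tolerating / 2) / total samples
def apdex(response_times_ms, t_ms)
  satisfied  = response_times_ms.count { |ms| ms <= t_ms }
  tolerating = response_times_ms.count { |ms| ms > t_ms && ms <= 4 * t_ms }
  (satisfied + tolerating / 2.0) / response_times_ms.size
end

# With T = 100ms: two satisfied, two tolerating, one frustrated
puts apdex([50, 80, 120, 300, 2500], 100) # => 0.6
```

Scores run from 0 to 1, and most tools call anything above roughly 0.85 “good”.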

A single row query like that should return in 1ms or less and, as a database person as well as a programmer, I’m offended! Did I just stop writing and create an index for that query?

Why yes, yes I did. Was this an actual problem for my users or my site?
Well… no, no it wasn’t. But it could turn into one!

This is premature optimization at its finest. It is correct to say that this query could be faster, and writing an index as I have is a reasonable thing to do to avoid potential issues. And this is where I could offer you the all-too-common anecdote about architecture gone wild, and a database encumbered by index updates that slowed everything down… but I think you get the point.

When you think about scaling, you need to have a problem that’s
clearly a problem (error, low Apdex, slow throughput, etc.). You need
to have numbers to isolate and understand what service is causing the
problem, and then you must propose solutions that are, themselves,
measurable.

The Lead’s New Clothes

I worked with a lead developer who loved to nitpick everything, and was an absolute stickler for how things were named. It’s understandable: naming is hard, and this person had to be sure that the codebase was clean and readable both now and in the future. That said, he also had his “particulars”, one of which was response times.

I can guarantee you that, if he were to read the graph above, I would
have an issue created for improving the response time of that
transaction. No crashes, no user complaints. Just a lead that doesn’t
like things being more than 10ms ever.

I mention this because sometimes things just aren’t measurable. The threshold for action can vary based on who is assessing the performance of the application. Marketing, for instance, would probably prefer 0ms load times on every page, and could turn themselves blue in the face demonstrating lost revenue per millisecond of load time.

When this happens, it’s up to you to dig into the logs and make sure
what they’re seeing is real. If it is, and your boss agrees, then you have
a problem that you need to fix.

Since I’m the lead, the boss, and the programmer for bigmachine.io,
the meetings are short and the action straightforward when it comes
to > 100ms transactions: fix it with an index!

Our average page load has dropped now to 60ms or so, and this is
entirely without caching of any kind. I’m using Rails, so it’s possible
to do highly targeted caching, and who knows? I might… but… do I
really need it?

I don’t think so. There’s no bus rumbling down the street, and I need
to focus on the business itself, not how well my site is working. If the
transaction time creeps up above 100ms, I might look into caching, vs.
increasing the capacity of my server.

But that’s only if there’s a real problem…

OK, SO THERE REALLY IS A PROBLEM.


Scaling problems, in my experience, don’t happen with a big bang. It’s
usually a series of repeated issues that are highly annoying. The
bangers are typically IT-related, such as bad router configuration,
service discovery problems, and yes, DNS (it’s always DNS).

True scaling problems creep up on you, and when you’re not looking,
will assert themselves in ridiculous ways.

This is almost always a software problem.

I mentioned in a previous chapter how I accidentally filled up a disk on my production machine because I set the logging level of MySQL too high. Coupled with Rails logging, the disk filled up over a period of 4 months. I had a Google for “production Rails configuration best practices” and, yeah, there were quite a few tweaks I needed to make.

I also bumped my VM size, adding resources (CPU, disk size, RAM, etc.) to mitigate the problem. You’ve probably heard of this: it’s called scaling up. You can also scale out by adding another node. If I was working with Kubernetes, for instance, I could mitigate spikes in requests/sec by adding nodes on the fly.

I’m getting ahead of myself. Scaling requires discipline: understand the problem, then take small, measurable steps to resolve it. My logs filled up, so the first step is to up my disk size. The second step is to delete the logs I don’t need. The final step is to dial down the aggressive logging practices of MySQL and Rails.

Did I address the problem by doing this? No! Scaling problems are
never resolved. They’re only put off for a while. By “scaling” I
mean both up and down. No use paying for expensive cloud resources
if you don’t have the traffic to justify it, right?

Performance tuning your application is like having a good gym routine together with a good diet. If you get the combination of factors correct, and if you’re disciplined in tracking things, you’ll look great and be healthy.

What factors are we talking about here? Let’s see.

Your Code

It’s entirely possible to write code that accidentally taxes the CPU or
blows up your RAM. Consider this:

This is from my Rails app, specifically the courses page, where I show
all the exciting and wonderful courses I’ve made. Doesn’t look like a
problem on its own, does it? But what is that partial doing, and do I
need to care?

Most of the partial is fine, but then we come to this:



I use a Content Management gem called Spina, which I really like. I’m
using it to handle the Course and Lesson content, with a course
having 0 to n lessons, or “children”. This association is handled in the
gem itself, which is great as it saves me time and code. Unfortunately,
if I want to show summary information (number of videos, sum of
their length), I have to use the Rails associations. Is this acceptable?

Let’s not guess — let’s take a look at the logs:



Note: You might notice that I’m summing on the ID field, which is silly, but I’m
showing it here to represent what I was doing before I fixed a few things

What you see here is referred to as the “select N + 1” problem, or just “N+1”. I’m executing a query in a loop — two queries actually — to get the count and the sum. If you’ve been creating applications for a while, you can usually spot this problem from a long way off. Tell me you’re using Rails and show me a rollup on a page, like I have, and I’ll be 99% certain there’s an N+1 issue.

The crazy thing is: you won’t know this is a problem until you ship
your application, and it comes under load. Even then, things like this
don’t crash, they just slow everything else down, like your database.

Typically, you catch these problems in your logs. Consider mine:

The top database operations, unsurprisingly, are from Spina. My CMS accounts for 70% of all the database calls — which is surprising, don’t you think?

I thought so, but in looking over my logs I realized that my utilization was very low, so it wasn’t a problem that needed fixing. I left it. For 24 hours.

I couldn’t stand the idea, so I added fields for duration and video
count to the courses and filled them in. These things don’t change
much, if at all, so hard-coding is fine with me. It wasn’t a problem in
the real world, only in my head.

Changing my code helped lighten the load on my server, and (hopefully) prevented a scaling issue later on. I probably would only have caught that issue if my database connection pool was used up and throwing connection errors, which is a very typical side effect of N+1 issues.

Increased load on your system (CPU and RAM) will also slow down response times and, in some cases, cause timeouts altogether if your web server (Nginx, Apache, IIS, etc.) decides it has waited long enough and returns a 502 error instead.

This is technical debt in action: the pendulum swing between actual, measurable problems as opposed to using your experience to head off the obvious problems waiting to happen. There is no hard and fast rule here, other than to do the obvious thing, which is an eager load on an N+1 query, or go around the issue like I did.
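If the loop-vs-eager tradeoff is still hazy, here’s a toy model of it in plain Ruby: a fake “database” that counts queries. The names and data are invented, but the shape is exactly what an ORM does under the hood:

```ruby
# course_id => lesson durations in minutes (our fake database)
LESSONS = {
  1 => [10, 12, 8],
  2 => [20, 5],
  3 => [7],
}

$queries = 0

def course_ids
  $queries += 1
  LESSONS.keys
end

def lessons_for(course_id)
  $queries += 1
  LESSONS[course_id]
end

def all_lessons
  $queries += 1
  LESSONS
end

# N+1 version: one extra query per course, inside the loop
$queries = 0
course_ids.each { |id| lessons_for(id).sum }
n_plus_one_queries = $queries # 1 + 3 courses

# Eager version: grab it all at once and roll up in memory
$queries = 0
course_ids
totals = all_lessons.transform_values(&:sum)
eager_queries = $queries # 2 total

puts "N+1: #{n_plus_one_queries} queries, eager: #{eager_queries} queries"
```

With three courses the difference is 4 queries versus 2; with three hundred, it’s 301 versus 2, which is exactly the kind of load your database connection pool will not thank you for.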

Resolving those issues is not fun, as we’ll see next.

Your Platform

I hate to pick on WordPress, but, unfortunately, scaling this platform is big business, and for good reason. WordPress is extremely flexible and capable, but that comes at the cost of having one of the worst relational database designs I’ve ever seen.

Everything on WordPress is a wp_post, which has a basic structure with things like title, status, content, etc. Then there’s the wp_post_meta table, which has a foreign key to wp_posts, and then a column called meta_key and another called meta_value.

The wp_post_meta table is a key/value store, which means it’s incredibly active. I’ll stop here because, as a data person, this makes me cringe. WordPress, itself, does an OK job keeping the calls down and, last time I checked, bare-bones WordPress needed 20 or so total queries to show a page.

If you install plugins, however, that’s when the fun starts. And who
uses WordPress without plugins? I used to use WooCommerce to sell
things but stopped because of what I’m about to show you:

This image comes from WordPress support forums, where someone was wondering just how it was possible that WooCommerce needed to execute 753 queries to show a single page.

I think that’s a good question, don’t you?

Imagine you’re tasked with taking over a WordPress application (don’t laugh, it happens) and you need to keep it up and running while you and your team rewrite the application in Python.

Inheriting an application that’s running on an old framework is no fun, but that’s often the first step when a rewrite is required. The new team is hired, tasked with keeping the old site alive while work begins.

Ruby on Rails used to have this reputation, mostly for issues I discussed above. There were plenty of things you could do to keep a Rails app up and make it quite performant, but it took some work, which we’ll discuss now.

Your Database

You can’t change the way WordPress does things because there are far too many plugins that rely on the way WordPress does things. In other words: they’re stuck with their weird database design, unless they want to upend the entire plugin ecosystem.

You can, however, tweak the database to improve things. Of course, you would rather not do this yourself! There’s a plugin for that.

This plugin, which I’ve used, does many things, including:

Removing dead and orphaned data. You see, despite using a relational database, WordPress is (almost) completely denormalized. This means dead data can, and often does, clog things up. Over time, this makes life hard for your processor and RAM. This plugin will remove that data sludge.
Rewrites “bad” (slow) queries with “good” (faster) ones. I
haven’t looked into how they do this, but I’m guessing they
add some indexes for common queries and then make sure
they’re using those indexes.
Adds an index to the highly used (and abused) wp_post_meta
table.

Those last two are very useful, so of course you need to pay for them on an annual subscription basis. I don’t mind this, most of the time, but subscribing to a database index encapsulates perfectly everything I dislike about the WordPress ecosystem.

Yeah. I know: I sound like a WordPress hater, which I suppose is the result of using the tool for many years and having to drop everything, twice, when it crashed. Hard.

My point here is simple: you will pay for crappy architecture when it
comes to scaling, especially if the database platform you use doesn’t fit
your needs or is difficult to optimize. This is a hard thing to discover,
especially early on because some database systems are just so simple
and easy to use, you’ll do anything to use them.

MongoDB, in the early days, was exactly like this: super simple JSON document storage. But after a year goes by and you run sales reports, when your boss asks “are you sure about these numbers?” it’s quite nice to have relational guarantees behind you.

Don’t get me wrong! There are plenty of great use cases for document
storage. My claim, here, is that any data that will be used to make
decisions needs to be in a relational system. Those guarantees, while
of course not perfect, offer a lot of reassurance.

I ran my business for years on Firebase (using the Firestore database) and I loved it. Super easy and, reportedly, scaled well. Unfortunately, when I extracted and transformed the order data into my relational system, I found the inevitable “crap” data that got stored.

Of course it was my fault. Missing product information due to a malformed SKU, bad order numbers which caused duplicate overwrites, and wrong total calculations due to my checkout provider returning a string to my JavaScript backend and me not parsing things properly.

You can do a lot with constraints and foreign keys that keep you from disabling future decision-making. Unique constraints help you avoid overwriting existing records, foreign key constraints ensure that data needs to exist in another table before a parent table write, and proper typing (especially when converting to pennies to avoid insanity) will keep your totals correct.
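To make that concrete, here’s a minimal sketch using SQLite from Python’s standard library (the table and column names are mine, purely for illustration; Postgres and MySQL enforce the same constraints):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this switched on
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        number      TEXT UNIQUE,                    -- blocks duplicate overwrites
        sku         TEXT REFERENCES products(sku),  -- must exist in products first
        total_cents INTEGER NOT NULL                -- pennies, to keep totals sane
    )
""")

conn.execute("INSERT INTO products (sku) VALUES ('BOOK-1')")
conn.execute("INSERT INTO orders VALUES ('A100', 'BOOK-1', 3000)")

def rejected(sql):
    """True if the database refuses the write."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

# A duplicate order number and an unknown SKU both bounce off the constraints
duplicate_blocked = rejected("INSERT INTO orders VALUES ('A100', 'BOOK-1', 3000)")
bad_sku_blocked = rejected("INSERT INTO orders VALUES ('A101', 'NO-SUCH-SKU', 1500)")
print(duplicate_blocked, bad_sku_blocked)  # True True
```

The database rejects both bad writes on its own, with no application code to remember or maintain.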

What does this have to do with scaling? Aside from performance tuning, picking the right platform(s) at the start has everything to do with scaling. Firestore would have worked fine, but if I had paired it with a relational database for storing rollup information, I might have avoided quite a few issues.

There are, of course, rules and indexes you can put on systems like
Firestore and MongoDB that will help you replicate the niceties of
relational guarantees. The only problem is that you need to remember
to write these, and then hope that your custom solution is a good one.

But what about optimizing a relational system? If you use MySQL, PostgreSQL, or some other system, they likely come with profiling tools of some kind. Your monitoring system can also help you in this regard, as I showed above, and it’s a great first step, but knowing what to do next can be difficult.

How Indexing Works

Indexing a table or collection works the same in every platform that supports indexing: it uses what’s called binary search, which is a “divide and conquer” algorithm that repeatedly splits a set of data in half, using a pre-sorted key to do the splitting, until it finds what you’re looking for.

If you’re hazy on this, pretend you have a blindfold on, and you have
to search for a given lego piece out of 100 different lego pieces. It’s
possible you could go through all 99 of them until you find what
you’re looking for (a rectangular brick with 12 studs, which is a term I
learned just now).

Now imagine that someone went through and sorted these in a particular order: shapes and stud count. You still have your blindfold on, but you’re a smart person and realize that you could start in the very middle, testing a piece on the left and then the right. If the right side leads to the direction you want to go based on the sorting rules, you aim for the middle of the remaining blocks. You keep splitting until you find what you’re looking for.

As programmers, we can think of this as order n time complexity — which is a math-y way of saying “if there’s n bricks, it could take me n tries to find something”. The second method, binary search, is order log n, due to splitting being logarithmic.

How much faster is log n? If we had 1,000,000 records, trying to find something could require up to 1,000,000 lookups if we didn’t have an index. If we use a logarithmic approach, however, it takes 20 operations:
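To see where that 20 comes from, here’s a small Python sketch of the splitting process (my own illustration, not code from the book):

```python
from math import log2

def binary_search(sorted_items, target):
    """Return (index, probes) for target in a pre-sorted list."""
    lo, hi, probes = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, probes
        elif sorted_items[mid] < target:
            lo = mid + 1  # discard the left half
        else:
            hi = mid - 1  # discard the right half
    return -1, probes

records = list(range(1_000_000))
_, probes = binary_search(records, 999_999)  # a worst-ish case: the last record
print(probes)                    # 20
print(int(log2(1_000_000)) + 1)  # 20 — log n, rounded up
```

Twenty probes against a million records, exactly as the math predicts.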

That’s excellent savings, and we’ll get more detail on this in the next
section. Before we get there, we need to discuss something essential.

The Pitfalls of Indexing Poorly

Every index requires system resources that run in the background. Some databases can optimize this, but it can still cause issues if you aren’t careful.

To use binary search, the data you’re indexing needs to be sorted first.
That sorting is what is going to take up the resources.

Here’s an example using a simple messages table. When data is written to it, the index must be updated, or “resorted”, which can take some time. This is where we get into optimized sorting algorithms, and if you’re curious, you can read my other book that explains them in detail.

If you have a high-write table, like our messages table here, the index
will be updating on a pretty constant basis, which adds load to your
database. You can spread that load out horizontally by sharding your
server, which I’ll get into in a minute, or you can up the specs on your
server so it can do things better, stronger, faster.

Slow indexing can result in another surprising side effect: dirty reads. This is a term you often hear in data circles, and here it refers to old data being returned in a query after new data has replaced it (strictly speaking, that’s a “stale” read, but the colloquial term stuck). If you’re a data person, this might surprise you. Transactional systems (those with ACID guarantees) ensure that when you receive acknowledgment for a transaction, it’s durable, which means it’s saved and ready to be used.

If you’ve never heard the term “ACID” before, it’s Atomicity, Consistency, Isolation, and Durability; an umbrella term for a guaranteed transaction within the system. So, given this guarantee, how can our data be dirty?

The transaction applies only to the table being written to, not its corresponding indexes. If the transaction had to wait for the indexes to update, that could end up taking quite a while, and even then, you have a chicken-and-egg problem. What would happen if the table write fails for some reason? The index might become corrupted, with everyone becoming sad.

This is what’s known as a tradeoff, and every system has them. You
have a guarantee of an atomic, consistent, isolated, and durable
transaction, but you’ll have to wait a few microseconds for the
associated indexes to update.

It’s Rarely Just the Database

In my experience, and that of many others, databases have become ridiculously fast over the years. This is due to constant tweaks and upgrades by the vendors, but also the availability of increasingly high-performance VMs.

My current Dokku VM has 8G of RAM and 4 CPUs. That would have cost me hundreds, maybe thousands of dollars in 2010! Today it costs me $54, but the amount of data I store and have to query hasn’t really changed all that much. It amounts to “residual scaling”, a term I just made up, meaning that if you wait long enough, CPU and RAM power will increase for you. I don’t know if that’s true anymore, but I like the term anyway.

Either way: databases are built to be fast, and when it comes to performance and scaling issues, you can often make tweaks to your code to improve things. If your application is “stumbling” and occasionally crashing due to timeouts, it’s probably due to your infrastructure in general.

Let’s learn more.

Your Infrastructure

Let’s recap:

Scaling your code can mean two things at once: making it more performant, but also making it easier to change (find and fix bugs, enhance, add features).

Scaling performance can mean caching, database indexing, or bumping up/out your server capacities.

Your code is just text that is compiled into a binary at some point in
the execution chain. What that code runs on, and how it moves from
disk out to the wild beyond, is considered your infrastructure, and
includes:

Real or virtual hardware, such as disks, CPUs, and RAM
Networking things, such as switches, load balancers, Virtual Private Networks, IP configurations, and overall network configuration.
Security protocols, who can access what, when, and how.
Hosting. Where does all this live, and can they handle the load?

I could add to this list, but hopefully you get the idea. When you go
through a scaling conversation, it’s a good idea to focus on the exact
problem, using measurements to come up with a scaling plan. That
plan, however, needs to be evaluated regarding the entire scheme —
your infrastructure.

Let’s ground this in the real world, shall we?

Hello Again, WordPress

In 2019, I was tasked with scaling WordPress on Azure. I worked at Microsoft at the time, and we wanted to see what factors would go into scaling the world’s most popular web platform.

I started out with a free tier offering and was quickly turned around.
WordPress, understandably, needs some RAM and CPU to run, and a
free tier is just not a feasible option.

So I bumped the service to the next level up and created a VM using a Bitnami image, which had the best reviews and, I figured, would be installed by the most people. Everything worked OK, but my random clicking through the site seemed to be a bit slow.

When WordPress “seems slow”, it’s anyone’s guess as to what’s going on. I didn’t have any plugins installed, so I figured it wasn’t the old Slow Plugin Problem. I was also on a 2-core VM with 4G RAM, which should be fast enough.

My plan was to start simply by writing my own script to ping the site.
I figured that I could run requests in a loop to see how fast I could get
to 1000, which I know is not the best way of doing it, but it’s the
simple start I wanted to play with.

There are better tools for this, like Apache Benchmark, but my plan
was to wait until I got to the bigger SKUs (machine sizes) so I didn’t
break anything accidentally.

The first test was, shall we say, disappointing. I think I got to 50 or so requests when I hit a timeout (502 response). That was unexpected.

And then I remembered that Azure, like many cloud providers, has a limit on virtual disk transactions, which are called IOPS (I/O operations per second). If you have a small application, your I/O allowance is also likely small. Given that WordPress is an I/O-hungry beast (see the discussion above regarding database work), it’s easy to see how we hit this cap.

So: how do we scale this? We can’t simply bump the SKU, even though that would also bump the I/O; we need to consider the entire infrastructure here.

What if we went with a hosted MySQL offering, that doesn’t have the
same I/O limit? Perhaps we could implement aggressive caching with
a WordPress plugin, and we could also go with a flavor of WordPress
(such as OpenLiteSpeed) that boasts some pretty impressive
numbers:

I’ve used OpenLiteSpeed in the past, and it really is quite fast. It seems like a great solution, doesn’t it?

Cloud Provider Whack-a-mole

Cloud providers make money by charging for infrastructure scaling. Understanding how they do that requires a Ph.D. in patience as well as mathematics.

For instance: you can speed up your network throughput from your
hosted VM by adding another virtual network interface card (NIC),
which bumps you 3 SKUs. This will also up your IOPS, but if you go 4
SKUs you can also double your RAM and up your CPU cores, but
you’ll be paying another $400/mo.
A virtual (aka “Cloud”) infrastructure is great in terms of savings on hardware and personnel to manage that hardware, and also the colocation and data center fees. That said, the bigger you get, the more attractive having your own hardware becomes.

We talked about this in a previous chapter, so no need to recap here. That said, you can’t have a scaling discussion without understanding what that means in terms of the Cloud and your pocketbook.

This goes all the way back to your choice of application platform. WordPress is a hard bit of tech to scale, given the raucous plugin architecture and wild database design. Scala, C# and .NET are blazing fast, and Elixir/Erlang are fascinating choices when it comes to uptime and using in-memory solutions as opposed to disk-based ones (etcd, for instance).

It’s when you’re sitting there, staring at your cloud bill a few years
after launch that you start to ponder the Great Rewrite. I discussed
this many, many chapters ago and, at that time, I asserted that one of
the major reasons that companies rewrite their applications is for
financial reasons:

A more efficient development experience means fewer developers.
A more efficient platform means lower scaling costs.

Those concerns aren’t necessarily orthogonal, but often are. Scaling Ruby on Rails or Django isn’t as hard as it used to be, but you’re simply not going to get the same scaling numbers as you are with .NET or Scala.

You’re also not going to get to market as fast with .NET or Scala, simply because they don’t have the generators and conventions that Rails and Django do, and .NET and Java developers tend to be more expensive.

PRACTICAL TIPS
Writing better code and learning how to tune whatever database
platform, web server, message broker, or DCOM server you’re running
is a product of experience. Every approach will be unique because it’s
the result of trial and error with your given system, or infrastructure.

What I’m trying to say here is I can’t go out and investigate “How Perf
Tune Stuff ” and come back here with a list of 10 things you can do to
speed up your app. What I can do, however, is to offer some general
tips that, like DNS, seem to always be in play when you’re trying to
squeeze out a few more RPS.

Faster Code

Writing “faster code” is almost always a myth. Eric Lippert, one of the
people who helped build C#, had the perfect answer to the question
Should LINQ be avoided because it’s slow?:

No. It should be avoided if it is not fast enough. Slow and not fast
enough are not at all the same thing!

Slow is irrelevant to your customers, your management and your stakeholders. Not fast enough is extremely relevant. Never measure how fast something is; that tells you nothing that you can use to base a business decision on. Measure how close to being acceptable to the customer it is. If it is acceptable then stop spending money on making it faster; it's already good enough.

Performance optimization is expensive. Writing code so that it can be read and maintained by others is expensive. Those goals are frequently in opposition to each other, so in order to spend your stakeholder's money responsibly you've got to ensure that you're only spending valuable time and effort doing performance optimizations on things that are not fast enough.

LINQ is “Language Integrated Query” and is one of the great things about C#. Using it requires the creation of an expression tree in memory, which is then used to handle list operations and other things at runtime. Given that extra step, many people wonder if the added cost of using LINQ is worth it.

Here’s the thing: as Lippert suggests, if what you’re doing is fast enough, who cares? It’s only when things become noticeably slow that you need to worry — and this, right here, is the essence of a scaling problem. It’s not a problem until it’s a problem (unless your experience demands otherwise).

The question we have to ask ourselves is this: how many traps do we want to set for ourselves? If you use LINQ (for example), and think you might run a billion operations for a given routine, well yeah, it might turn into a scaling problem at some point. Anything will turn into a scaling problem if you’re running a billion operations for a given request!

Think About IO

CPUs are very fast and compilers are fantastic these days at
optimizing code. What you write is rarely what’s actually running
under the hood. The one thing you can control, which is the thing that
will likely slow your code down, is IO — input/output, or “hitting the
disk or network”.

When your thread or process needs to wait for an IO-bound routine, your whole application suffers. I discussed this at length, above, so I won’t repeat it here.
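Still, the effect is easy to feel in a toy example of my own (five fake IO waits of 200ms each; the blocking loop pays for every wait in full, while overlapping them costs roughly one):

```python
import asyncio
import time

async def fake_io(_):
    await asyncio.sleep(0.2)  # stands in for a disk or network wait

async def overlapped():
    # The five waits run together instead of stacking up
    await asyncio.gather(*(fake_io(i) for i in range(5)))

start = time.perf_counter()
for i in range(5):
    time.sleep(0.2)  # blocking: each wait stalls the whole thread
blocking_time = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(overlapped())
overlapped_time = time.perf_counter() - start

print(f"blocking: {blocking_time:.2f}s, overlapped: {overlapped_time:.2f}s")
```

Roughly one second versus roughly a fifth of one, for the exact same amount of “work”.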

Database Tuning

Every database has its particulars when it comes to “tuning”, or speeding up queries. The only queries you should worry about speeding up are the ones that are called the most often, which is something your monitoring service should help you with.
If you don’t have a monitoring service, or yours doesn’t do this kind of thing, your database should have a way of tracking meta information that you can inspect as you need. Postgres, for instance, has an extension called pg_stat_statements that tracks how many times a query was called, and what the average response time was. MySQL has the same kind of idea, but you have to enable it based on the engine you’re using.

If you’re using SQL Server or Oracle, welcome to the land of 1000 perf
tools that aren’t free.

Let’s dig into the tool I’ve used before: pg_stat_statements. I’m a Postgres person, so this should come as no surprise! If you’re keen to learn more, my friend Craig Kerstiens has a great set of blog posts about how to use this extension, and also how to tune Postgres to your needs.

The Cache Hit

Your application will only access a small percentage of your data in your database. The most-accessed data (think user access information, for example) will be accessed far more than product information, and this typically falls into the 80/20 rule, where 20% of your data is accessed 80% of the time.

Most databases understand this, and will cache the data you query
most often. With Postgres, you can ask your database what it’s
caching by running a quick meta query:

Here, I ran my select statement 11 times and only the first query read
the data from the disk; the other 10 used the in-memory cache.

It’s for this very reason that you want to be certain your database
server has far more RAM than your actual data. This is a sneaky
bugger of a scaling problem!

Let’s say you’re running the next StackOverflow which is just like the
first StackOverflow, but people are actually nice and answer your
questions without making you feel like an absolute idiot (please make
this, someone). Your database server might start out with a solid
100G of RAM, which is great, and keeps things delightful and fast.

After the first year, your site becomes extremely popular because, as it
turns out, people don’t like being intellectually abused for simply
asking a question they need to know the answer to so they can feed
their children. Pretty soon, you’ve amassed 200G of data, mostly
because you’re allowing for versioning of questions and answers.

Your cache remains lean and mean, however, because it only retains
what people query the most, which is user data followed by a query
for a single question with its answers. The next biggest query is for
the home page, and then individual tag pages. The total size of the
cache is somewhere around 10G total.

Another year goes by, and your database is now into the terabytes. People love your site! But you’ve noticed things are starting to slow down, and you wonder what’s going on. Random timeouts happen, and it feels like your database server is starting to buckle under the weight of all that data.

What’s really happening is that your cache is now trying to hold over
100G of data and can’t do it because there’s not enough RAM (the
actual number would be less because the rest of the system needs
RAM but, for this example, let’s pretend you have 120G RAM or
something). That means it needs to hit the disk for some of the
queries that should be hitting the cache, which degrades performance
and makes everyone sad.

This is a data problem, not a server problem, which is to say that there’s no bug that was pushed to production causing problems and no chip that had Coke spilled on it that now doesn’t work. Data accumulated and blew up your cache!

The fix? Add more RAM, reboot.

RAM is the fuel of life for a database, especially Postgres, because that’s where your data is going to live for most of its life. Many people don’t know this, but now you do!
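Here’s a toy Python model of that failure mode: a hypothetical LRU cache standing in for the database’s buffer pool (this is not how Postgres actually implements its cache, just the shape of the problem):

```python
from collections import OrderedDict

class BufferPool:
    """A toy stand-in for a database page cache with a fixed RAM budget."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.disk_reads = 0

    def read(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # cache hit: cheap
            return
        self.disk_reads += 1                 # cache miss: hit the disk
        self.pages[page_id] = True
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict the least-recently-used page

# Hot set fits in "RAM": after the first pass, every read is a cache hit
small = BufferPool(capacity=100)
for _ in range(1000):
    for page in range(10):
        small.read(page)
print(small.disk_reads)  # 10 — only the initial loads touch the disk

# Hot set has grown past capacity: the cache thrashes and every read hits the disk
big = BufferPool(capacity=100)
for _ in range(1000):
    for page in range(120):
        big.read(page)
print(big.disk_reads)  # 120000
```

Note that the working set only grew 20% past the cache, yet disk reads went from 10 to 120,000. That cliff is what the slowdown in the story above feels like.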

That Index Seemed Like Such a Good Idea

Have you ever bought a pair of shoes that just called to you? Like, you
had to have them or your day would be ruined! Comfortable, maybe
edgy, perhaps on the nicer side, fun, probably a bit too expensive as
well.

Here’s that shoe for me:



These are the Manganni Costa Leather Low Top Sneakers and cost
$375. I hate the name “Sneakers”, by the way, especially for a shoe
that costs this much money.

I don’t buy shoes that cost over $100. I wear Vans, or flip-flops, and I
might splurge on a pair of Hokas if they’re on sale, which is precisely
why I splurged and bought these shoes 5.5 years ago. I wanted
something impressive for when I visited clients, went to conferences,
or for a nice night out.

These shoes are 5.5 years old, and exceptionally clean. Do you know
why that is? I’ve worn them exactly once. This makes me sad, and I really
should do something about that, but it’s become a thing in my
mind now.

What does this have to do with database indexes? Here’s the deal: you
might analyze, research, and come up with the best reasons to create
an index for your database, only to miss the mark completely and
never actually use it.

To see what I mean: here’s the DDL (Data Definition Language) for a
users table in one of my applications:

It’s reasonable to assume that you’ll query this table using the id field
most often, which is why it’s the primary key and has an index. But
what about email? We really should have an index here, and we
should also ensure that it’s unique:

Great, that should solve our query problems! Or will it? It turns out that one of the queries we’ve been struggling with is looking up users with the same email domain as the current user. One of our developers got lazy and thought this would be easier than doing joins, and they did a fuzzy match:

Where’s our index? A query plan will show you if it uses an index rather than a sequential scan, which means “loop over every row in our table”. A sequential scan is O(n), which isn’t good, as opposed to O(log n), which is binary search as we discussed above, and that’s what we want.

Indexes aren’t that demanding, honestly, but you have to keep in mind
that their power comes from their sorting. Our idx_users_email sorts
the email addresses and, using that sorting, can run a binary search. If
we do a fuzzy search, or a “contains”, if you will, that sorting doesn’t
matter because the value we seek is within the data, not at the start of
it, so the index is entirely ignored.
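You can watch an index get ignored right from Python’s standard library, using SQLite’s version of a query plan (a sketch of the idea; the book’s example is Postgres, and this users table is purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_users_email ON users (email)")

def plan(sql):
    """Ask SQLite for its query plan, flattened into one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql)
    return " ".join(row[3] for row in rows)

# An exact match can binary-search the sorted index...
exact = plan("SELECT * FROM users WHERE email = 'rob@example.com'")

# ...but a "contains" match can't use the sort order at all
fuzzy = plan("SELECT * FROM users WHERE email LIKE '%example.com%'")

print(exact)  # the plan mentions idx_users_email
print(fuzzy)  # the plan says SCAN: every row gets visited
```

Same table, same index; one query uses it, the other walks every row.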

Using explain analyze in Postgres (and most other systems) will show you the query plan as well as reasonably exact execution numbers. Here, it’s telling us “nice index, can’t use it”, and you would be surprised at how often people make an index to address a slow query, but never check to see if it’s actually working!

Seriously, it would blow your mind. So: check that index!

Choosing What To Index

I like how Craig describes this, so I’m going to steal his words from his blog:

A general rule of thumb is that most of your very common queries that return 1 or a small set of records should return in about 1 ms. In some cases there may be queries that regularly run in 4-5 ms, but in most cases about 1 ms or less is an ideal.

To pick where to begin I usually attempt to strike some balance between total time and long average time. In this case I’d start with the second probably, as on the first one I could likely shave an order of magnitude off, on the second I’m hopeful to shave two order of magnitudes off thus reducing the time spent on that query from a cumulative 220 minutes down to 2 minutes.

This is good advice, but how do you know which queries are taking
the most time? This is where the pg_stat_statements extension
comes in! Installing it is easy, and it’s free:

This extension will profile your database, so you want to be certain it’s
installed on your production server so it can profile actual usage. After
a few days, you should be good to go, and you can query it thus
(again, thanks to Craig for this query):

I know that working in SQL is not fun for many people, but it
becomes fun, really fast, when you can dig into your data and come up
with something actionable!

As an example, here’s a query I ran on the StackOverflow SciFi data dump a few years back:

If you don’t have any indexes, your top queries will jump out at you in
terms of total_minutes of execution. This usually falls inside the
80/20 distribution rule: 80% of your execution time will be from 20%
of your queries.

The great thing about this is that we can also see the SQL that’s being
run, which means we should be able to build a precise index for that
query.

In this example, I would follow Craig’s advice and only worry about
the average_time, making sure to index things that are taking more
than 5ms.

SUMMARY
Scaling your application is likely going to involve the following steps:

Tuning your database and ensuring you have enough RAM.
Handling IO as gracefully as possible in your code.
Scaling your infrastructure to meet the demand (requests/sec).

The only way you can do this properly is to measure as much as you
can using your monitoring solution, whatever tools your platform
gives you (like pg_stat_statements), and whatever profiler you’re
willing to pay for.

Scaling isn’t easy, and there’s no replacement for experience. You can’t
read this chapter and be prepared well enough to face a true challenge,
so please don’t get mad at me when you’re up at 2AM trying to figure
out why things are breaking.

It’s a good pain, and makes you stronger! Seriously, every scaling story
is a good story, as long as you don’t get fired. That’s my goal with this
chapter: preserving your job, and helping you get through a
challenging, yet extremely valuable, part of your career.
TWENTY-EIGHT
A LOUD BANG, THEN SILENCE
CREATING A DISASTER PLAN FOR WHEN
THINGS GO VERY BADLY

I think the normal thing to do, with a chapter like this, is to start off with horror stories. We’ve all had them; we’ve all heard them. Yes, I’ve dropped the production database. Yes, I have shut servers down with poorly written code. Yes, I have used the wrong ENV variables and not known it, causing production data to get corrupted.

If you haven’t had problems like these, you will. The point of this
chapter, however, is not to scare you; just prepare you.

Things will go upside down for you and your team. If you’re not prepared,
that’s a failure of leadership, which is either going to be yours, or the
person you’re working for. We should be extremely clear on this point:
you will be told to “deal with it later”, “yes, I’m sure it’s important,
but we have to get this feature out or there won’t be an application”,
and one of the classics: “you worry too much”.

The person who says these things to you will also forget they said
them, and you’ll be the one with a target on your head.

Blame someone else

This is why we draft up a Disaster Recovery Plan, or DRP, so everyone understands the stages, tasks, and assignments. You can start with a template, if you like, and go from there. It takes a few hours to throw together, then a meeting to ensure everyone knows what’s going on.

Picking the Lucky People

Tasks are great, but if they’re not assigned to anyone, they are
completely meaningless. It’s not difficult to think about your
application, what could go wrong, and, correspondingly, who gets
pulled out of bed.

As we discussed in the last chapter on monitoring, there is a “natural division” of responsibilities within your team, if you will:

Infrastructure
Application
Servers and Database
Networking

This is from my experience, but split these in whatever way makes sense for you and your team. If you’re at a startup, you’re probably each team; either way: set up your monitoring rules as if these teams will exist.

Oh, but there’s more! Everyone in your company is going to care about an outage, including:

Your bosses, who will see money evaporating.
The marketing team, who will want to spin the problem and let the public know (if your app is public) what’s going on in a timely manner.
The support team, who will need to know what’s happening so they can tell customers when they call.

And then there’s you. Small issues (errors, bugs, etc.) can probably be
handled without you, but a full disaster that causes an outage? Your
job is to take point and coordinate the triage while simultaneously
keeping the rest of your company informed. If your team is big
enough, you could delegate this to a “disaster management team”, but
your neck will still be on the line, so it’s better if you’re a part of
whatever happens.

We’re getting a bit ahead of ourselves. Let’s be a bit more rigorous and figure out what could go wrong, and how we can handle it.

ASKING THE HARD QUESTIONS

The first thing we need to do is understand what we mean by “disaster”. The term “outage” is far too broad, so let’s narrow it down.

A disaster for a startup or midsize company might be:

A network disruption, which is causing traffic not to reach the application.
The app server crashed and can’t be restarted (usually a data or environment issue).
The database is offline or is not writing data (could be RAM corruption, disk error, fire at the colo).
A container crashed and can’t be restarted, which is causing problems with the rest of the application.
An external service we rely on (hopefully not the logging and monitoring one!) just went bankrupt and turned off its servers.

We could add more, but let’s start with this simple list. Now the question becomes: how quickly do we want or expect things to come back online in each case? It would seem that the obvious answer is “AS SOON AS POSSIBLE!”, but that’s a given. We need to define what’s acceptable.

So: how long can an outage last? You’ll need to push on this one; nobody wants to think they’re giving you an excuse to be lazy. Managers and executives will look at this from a financial perspective, every minute you’re down increases the support calls from your customers, and the marketing team is going to run out of things to tell people on social. This is called your Maximum Allowable Outage, or MAO, in ops circles.

It will be important that you learn to speak “ops” when doing this
assessment because acronyms mean you know what you’re talking
about (this is humor, for my Dutch friends out there).

To figure out your MAO, you need to know your Recovery Time Objective
(RTO) as well as your Work Recovery Time (WRT). RTO + WRT =
MAO. Put in simpler terms: “recovery time” means hardware and
system stuff — how long will it take for a backup to load up or for a
backup system to be promoted to primary. WRT is the people stuff —
how long will it take for humans to tell the machines what to do. This
is a very mechanical way of thinking about things, and every stage will
have an RTO and WRT element to it, so I’ll be moving forward with
the idea of stages.

Anyway: the best option, when trying to figure this out, is to be proactive. Tell, don’t ask. What do you think is acceptable? Worst-case scenario is a good perspective on this one, meaning you and your team are up at 3AM, trying to think straight while being hounded for updates.

Here’s a rough plan:

Investigation. You need to know what went wrong and why, and this is where you thank your favorite deity that you have solid logging and monitoring. This should take 30 minutes or less (once you’re up and on the case).

Confirmation. This one is tough! It’s tempting to jump in and try to fix things, but it’s likely you’ll make things worse if you don’t take 30 minutes to confirm what you’re seeing. Wasting time on a non-solution is soul-destroying.
Triage. You know what’s happening, let’s stop the bleeding.
This is where you might reroute traffic to a static site, letting
people know you’re down, and pointing them to your status
page (we’ll talk about that later on). Twitter’s Fail Whale is a
great example of this. This should be relatively instant,
assuming you have a static error page (and you will, after this
chapter) and your DNS is hosted by a service like Cloudflare,
that can reroute your DNS traffic in seconds.
Fix. Now the fun starts. How do you resolve the problem and
ensure that everything is consistent with the pre-outage state
of your application? The data is the key at this point. You can
lose some of it (session logs, cart data) but other data, no way
(orders, products, etc.). This can take minutes if all you need
to do is scale up your containers or services, or it can take
hours to days if your database needs to be rebuilt from backup
and logs. This is a tough one to measure, and you’ll have to
adjust for the size of your operation, but for a small to midsize
company I would say 4 hours is a decent estimate.
Verification. No fix goes live until all tests pass and a
rudimentary consistency check is made. You’ll be getting a lot
of heat from people, but if you push more problems, or
compromise company data, things will become much, much
worse for you. Take the time you need, but give yourself at
least 20% of the fix time to ensure it’s all working properly: 1
hour.
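Summed, those phase estimates are where the MAO figure comes from. A quick sketch (the hours are the rough guesses from the plan above, nothing more):

```python
# Rough worst-case phase estimates from the plan above (in hours).
phases = {
    "investigation": 0.5,   # find out what went wrong, via logs/monitoring
    "confirmation": 0.5,    # verify the diagnosis before acting
    "triage": 0.0,          # near-instant with a static page and fast DNS
    "fix": 4.0,             # restore service and data consistency
    "verification": 1.0,    # roughly 20-25% of the fix time
}

mao_hours = sum(phases.values())
print(f"Maximum Acceptable Outage: {mao_hours:g} hours")  # → 6 hours
```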

If you walk into your boss’s office and say “our DRP has an MAO of 6 hours”, you might get some side eye. Just be ready to explain your reasoning, and to shave an hour or two to make them feel better, which means bringing this plan with you.

Speaking of, let’s get into it.

BACKUP STRATEGY
The first thing to worry about is your data. Code is the easy part: it’s versioned, and if you lose it, just pull the latest version, which should be what’s deployed.

Your infrastructure can be replaced. Sure, it will take time, but it’s
certainly not business critical. The only thing that is, is your data. I think
I might have mentioned this once or 1000 times before?

When deciding on your backup strategy, you need to have failure on your mind, and a vision of yourself awake at 3AM, wondering when your last restore point was taken. This is a better definition of a backup: a restore point.

You will have to be OK with losing some data; it’s the nature of things. Even if you have a replication scheme going (which we’ll discuss in a second), it won’t be perfectly consistent with the state of your database when things go pear-shaped. Even worse: if there was a network problem, it could be out of sync by hours, or even longer (though you should know about that from your monitoring system).

Simple Periodic Restore Points

The question we first need to answer is: what is my tolerance for losing
data? For many businesses, like mine, a nightly backup is all I really
need. If something were to happen, and I needed to restore the
database, I might lose some customer and order data, but it’s nothing
I couldn’t manufacture from logs and processor records.

That’s me, however, and I run a simple commerce app. If you ran a
system like BaseCamp, which handles chat and project records for
thousands of customers and is, I assume, churning millions of records
an hour, you would probably want something like an hourly backup
strategy.

The bigger your database, the more often you’ll want to back it up.
That seems to be the rule, but it also presents a problem: where do you
put this data? I put mine on Amazon S3, which doesn’t fill up (I hope),
but I do have to pay for capacity. That’s OK because I also have a
pruning plan.

This is a very common scenario:

- You back up your PostgreSQL database on a nightly (or more frequent) basis, and save that dump file to disk.
- You copy that dump file to AWS S3 (or some other offsite location).
- You have a pruning script that deletes the oldest backups, keeping the last n just in case.

Here’s a script that does just that. Every data person has their favorite
backup script, so have a Google and look around, see what you see.
The simplest thing you can do, as an administrator, is to nab one of these scripts and run it a few times, making sure it works as you need. You can then load it to your database server and crack open cron:
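To give you the shape of such a script, here’s a minimal sketch in Python. The directory, retention count, pg_dump flags, and the database and bucket names are all assumptions to adapt, and the offsite copy uses the AWS CLI rather than a library just to keep it self-contained:

```python
import subprocess
import time
from pathlib import Path

BACKUP_DIR = Path("/var/backups/pg")   # assumption: adjust for your server
KEEP_LAST = 7                          # retain only the newest n dumps


def make_dump(database: str) -> Path:
    """Dump the database to a timestamped file using pg_dump."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    out = BACKUP_DIR / f"{database}-{time.strftime('%Y%m%d-%H%M%S')}.dump"
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={out}", database],
        check=True,
    )
    return out


def copy_offsite(dump_file: Path, bucket: str) -> None:
    """Ship the dump to S3 (or any offsite target) via the AWS CLI."""
    subprocess.run(
        ["aws", "s3", "cp", str(dump_file), f"s3://{bucket}/"],
        check=True,
    )


def prune(backup_dir: Path, keep_last: int) -> list[Path]:
    """Delete the oldest dumps, keeping the newest keep_last; return what was removed."""
    dumps = sorted(backup_dir.glob("*.dump"), key=lambda p: p.stat().st_mtime)
    stale = dumps[:-keep_last] if keep_last else dumps
    for f in stale:
        f.unlink()
    return stale


# A nightly entry point might look like (names here are hypothetical):
#   dump = make_dump("myapp_production")
#   copy_offsite(dump, "myapp-backups")
#   prune(BACKUP_DIR, KEEP_LAST)
```

The cron line to run it nightly could then be something like `0 3 * * * /usr/bin/python3 /opt/scripts/pg_backup.py` (3AM, fittingly).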

This is what I used to do because I have a small company and I like simple. In fact, this is what most companies do (if they’re not using a managed system), if you can believe it.

I think a managed system makes more sense.

Better: Use a Managed Service

I currently use Supabase to manage my data, and I really like the service a lot. They have a very generous free tier, but if you want nightly backups, you’ll have to upgrade to Pro, which is (as of today) $24/mo. Most managed database systems do this for you, and you don’t need to think about it.

Cloud services, like AWS, Azure, GCP, and Digital Ocean (what I use)
do this as well. I should also point out that Supabase is a service that
uses AWS under the hood, like so many other managed services out
there.

The nice thing about these services is that they do nightly backups and point-in-time recovery (using the database logs) by default. They also offer automatic failover because they set up a cluster for you.

For a pretty decent price ($28/mo) you can get a premium SSD, 2G of
RAM and 40G of space. That’s usually all you need to get started, but
you can up that, of course, to suit what you need.

The nice thing about services like Digital Ocean is that they create a
cluster for you, which means databases with replication setup and
ready to go. If you don’t know what that is, you will in just a minute,
but in summary, it’s a Very Good Thing.

There are other services out there, including some startups like Neon that are taking database hosting to a whole new level.

It’s like having your very own DBA! I love the autoscaling feature and
the fact that they separate the charges for storage and CPU. This
means you can have a big database with many historical records that
you query only once in a while.

With Neon, you get 24 hours of point-in-time recovery for free, as well as 500MB of drive space. Your CPU is throttled a bit, but you do get 1GB of RAM. This should also be enough to get you started.

The good thing about services like Neon is that you can tweak your backup strategy with a slider.

This will cost you money, of course, but if your disaster plan calls for
it, it’s absolutely worth it.

The downside to companies like Neon (and Supabase) is that they’re startups, trying to disrupt the managed database space. They’re doing the VC thing and have impressive client lists, but at some point they’re going to need to demonstrate viability as companies (I don’t know if they’re profitable yet) or they’ll go away, like so many startups do.

This isn’t as dire as it sounds. Moving your database to a different service can be simple: take your app offline for an hour, run a backup, restore to a new service. I’ve done this often, but yeah, I know, it’s stressful.

If it were up to me, I would go with a service that I know and trust. I’ve used Digital Ocean for almost a decade now, and I love their service. For others, this might be Azure, AWS, or some other service.

Using Replication and Clusters

If you have a DBA, make sure they’re the ones doing this. If you feel
brave, or know databases already, go for it. Otherwise: I would leave
this to a managed service, every time.

You can set up most modern databases in what’s called a “cluster”, which is a simple word for a set of network-connected database servers. You designate one as your primary, the others as your standbys, or failovers. Your application reads and writes to your primary and the primary, as it can, replicates the data to the rest of the cluster.

This type of thing is great for hardware or OS problems. If something happens to the primary, you can set up an automatic failover, or “promotion”, of your standby. Or you can do it yourself. It doesn’t work so well if you accidentally delete a table, which means you’re headed into your backups for a restore point.

I’m not going to dig into replication for this book, but you can read
more about how to do it with Postgres right here.

Testing Your Backups

You would rather not find out that your restore file won’t work when
you need it to work. This can happen if your backup script is
incomplete (omitting users and roles, only public schema, etc.), the
file is corrupted, or the file has the wrong encoding. In short: you want
to make sure your backup script does what it’s supposed to.

The next thing that testing will do for you is to find out how long it will
take to restore a full backup. This is a number you’ll need for your
RPO, the machine stuff, and how long it will take to get that file from
storage and load it up, which is part of your WRT.

This isn’t something you want to automate, since part of testing your backups is making sure that you, and your team, know what’s involved with a database restore. How will you get the latest file, and who is going to actually execute the command?

These are things you want in your disaster plan, as well as a testing
schedule.

Recap

If you’re in the “I’m overwhelmed, help” state, no worries, this stuff is intense. Here’s what you need to do:

- Pick a managed service that you like and is within your cost range.
- Make sure they have nightly, point-in-time (for at least 24 hours), and failover recovery options.
- Test the recovery process.
- For extra credit, figure out what it would take to move your managed service entirely if your startup service goes out of business.

Once you’ve done this, you can then detail who’s going to do what,
when. It will probably be you in the beginning, so write that down, as
well as the worst-case scenario of losing data in a 24-hour window if
someone deletes or corrupts critical data.

All of that goes in your plan, under the heading of “Data Recovery”.
The RPO will be the time it takes for the recovery itself, and in a
worst-case scenario, moving to a different service. The WRT will be
how long it takes you to make it all happen.

SERVER CRASHES
We discussed this in the Kubernetes chapter, so I won’t retread too
many of those topics here, aside from reassuring you that yes, your
server will crash at some point. By “server” I mean one, or more, of
the following:

- The server hosting your application.
- The container running your application.
- The VM running your container(s).
- The load balancer in front of your servers and containers.

Any one of these services can crash at any moment, and you will need to detail how that can happen, the expected reboot period (RPO), and what it will take to get them back into service (WRT).

Mitigating a service crash is one reason people like using Kubernetes: if a service dies, another will be booted up to replace it. If a service is stressed, Kubernetes can autoscale for you, avoiding the service crash altogether.

That’s the theory, anyway. But what happens when Kubernetes crashes? Like, say, when you decide to upgrade it to a new version?

Upgrading from Kubernetes 1.23 to 1.24 on the particular cluster we were working on bit us in a new and subtle way we’d never seen before. It took us hours to decide that a rollback, a high-risk action on its own, was the best course of action.

This is a post from Jayme Howard, the platform lead at Reddit, discussing what happened during the infamous Pi Day Outage. Long story short: Reddit was down for nearly three hours as the team tried in vain to figure out what had happened. They thought they had it figured, but then things went sideways again. It’s a long read, but well worth it, especially for the eventual reveal of the problem:

In doing the research, we discovered that the way that the route
reflectors were configured was to set the control plane nodes as
the reflectors, and everything else to use them. Fairly
straightforward, and logical to do in an autoscaled cluster where
the control plane nodes are the only consistently available ones.
However, the way this was configured had an insidious flaw…

The nodeSelector and peerSelector for the route reflectors target the label node-role.kubernetes.io/master. In the 1.20 series, Kubernetes changed its terminology from “master” to “control-plane.” And in 1.24, they removed references to “master,” even from running clusters. This is the cause of our outage. Kubernetes node labels.

Kubernetes is a powerful bit of software, and I’m amazed that they were able to recover from this in just 3 hours (give or take). This is where your DRP might just… evaporate…

Everyone Has a Plan Until They Get Hit In the Face



You knew Mike Tyson was going to make an appearance here, didn’t
you? He was paraphrasing a much older quote, but I like his version
better.

A DRP is just a plan for who does what, when, and how. It has time
estimates, contingencies (or “Plan Bs”, like moving your database),
but you will, at some point, be flying by the seat of your pants.

The Reddit team followed their plan until the 30-minute mark, when
it was time to make a choice:

About 30 minutes in, we still hadn’t found clear leads. More people had joined the incident call. Roughly a half-dozen of us from various on-call rotations worked hands-on, trying to find the problem, while dozens of others observed and gave feedback. Another 30 minutes went by. We had some promising leads, but not a definite solution by this point, so it was time for contingency planning… we picked a subset of the Compute team to fork off to another call and prepare all the steps to restore from backup.

In parallel, several of us combed logs. We tried restarts of components, thinking perhaps some of them had gotten stuck in an infinite loop or a leaked connection from a pool that wasn’t recovering on its own.

This is the absolute worst possible case: when you just can’t find the
damned problem! Thankfully, Reddit had a competent team that knew
when it was time to abandon process and do whatever they needed to
do to figure out what was happening.

My point is simply this: the more complex your infrastructure is, the more time you need to build in for your investigation phase. I’m a big fan of simple, but I also don’t run Reddit, so there’s that. If I did, my first order of business would be to hire people like Jayme who can fix things when it all goes to hell, which it will.

Recap: Turn It Off and Back On

That’s really what server troubleshooting amounts to, I hate to say. For the overwhelmed, here you go:

- Only run Kubernetes if you absolutely have to, for scaling purposes. Keep your infrastructure as simple as it can possibly be.
- Make sure you cushion your investigation time in your WRT to account for the complexity of your infrastructure.
- You’re probably going to use containers somewhere, so make sure you have more than one container expert on the team.

If you do run Kubernetes, having a staging environment is critical. This will allow you to test things like upgrades, but will also help you avoid deployment errors, assuming you roll to staging first, and then production.

It all comes down to the ops expertise on your team, and your chosen infrastructure. I wish I could give a more detailed “do this, then that” but there are just too many variables, aside from “keep it as simple as possible”.

SECURITY
Can you hear it now? When you hand over your DRP and your boss
asks: what happens if we get hacked? You know it’s coming, so let’s add
that to our DRP too.

“Getting hacked” may or may not be a disaster. If they steal your data,
you probably won’t know for days or weeks (or longer) and you won’t
have an outage, so as far as you are concerned, it’s not really a
disaster. I mean it is for your company, but not a technical disaster.

If, however, they throw a DDoS attack at you, hijack your DNS, or
randomly change your data (always a fun one), you’ll want to have a
plan. That plan needs to start with steps for prevention.

Routine Automated Audits

I don’t know what I think of automated security scans and, before I go further, full disclosure: I’ve never used one. Yes, there are rudimentary scans that you can do for your code (like the ever-annoying dependabot at GitHub and the various tools that can scan your container setup), but that won’t quite do what you need if you’re looking to head off intrusions.

Monitoring services will do rudimentary checks for you that look something like this (once again, New Relic):

This is my Rails app, the real one, and as you can see, I have some
packages that need upgrading. This is great, but what about my
server’s ports, software upgrades, and bare-metal things like that?

There are open-source tools you can use if you’re running Linux. Vuls
is one such tool:

It’s written in Go and can run standalone, without the need for an
agent or other installed services. You can even run it remotely and tell
it to ping you on Slack if there are issues:

From the vuls.io website

Being open source, you, as the person running the scan, will need to
know a bit about Linux and how to fix/upgrade things when they
come up. You’ll probably also need to know how to RTFM if you have
any questions.

There are, of course, online services that will do this for you, and all
you need do is run a quick Google search to find one that works
for you.

I do hope that I don’t sound flippant or dismissive here. It’s important to keep your server (VMs or actual hardware) up to date, and most cloud services will do this for you, especially if you’re all-in on containers and not handling any VMs.

Routine Specialist Audits

When your project is small, and you’re just getting going, it’s likely that security is going to be an afterthought. I’m not saying that’s a good thing, but let’s be real: your focus will be on getting your MVP out the door and worrying about the details later.

Is this a good idea? Probably not. If your data is compromised in your first year of existence, you’re most likely finished as a company. Thankfully, this isn’t a decision you get to make alone, but it’s critical that you explain the details to your managers and clients, or even higher up than that.

Unless you’re a security expert, you won’t know what vulnerabilities your application and infrastructure might have. This is where the conversation should start, by the way: “I don’t know how we might be compromised”, which should be quickly followed by “and I think we should bring someone in for an audit.”

It’s not difficult to do, and there are many firms out there that
specialize in this kind of thing. In fact, they can do it remotely. They
will look for things like:

- Your data retention and encryption plan. Are you storing more than you should, and how is that information protected?
- Who in your company has access to your system?
- What does that access look like (SSH, login/password, etc.)?
- Who is the admin responsible for patching/managing the servers?
- Where are the server credentials stored?
- Which ports are open on your servers and why?

I’m not a security expert by any stretch, and there will be many more
questions, which is wonderful for you because that means you get to
have a conversation and learn things. Take good notes! If this is your
first time through one of these, well, that’s valuable experience.

Above all: make sure the final decision of hiring someone is not yours
and, if it is, do it. It’s hard enough coming up with an idea that will
make money — to have it explode because of a data breach, or worse,
is simply cruel.

SUMMARY
A Disaster Recovery Plan causes conversation, which, I think, is more
valuable than the plan itself. It’s good to know who will do what when
things explode, which I’m sure is obvious, but you might be surprised
at how many companies making 7 figures a year don’t do this.

I like thinking of a DRP as a dash cam, which is something you really don’t need until you do, and it becomes the most valuable thing you own in that moment. People tend to lose their minds when things explode. By having a plan and making sure everyone knows it and is ready to follow it, you can save your company thousands of dollars in lost revenue.

Don’t skip this exercise.


TWENTY-NINE
REPORTING, ONCE AGAIN
UNDERSTANDING HOW GOOD DATA CAN
SAVE YOUR JOB

We’ve touched on reporting throughout this book, and it’s fitting that we end on it, as I’m hoping this chapter will stick with you as the last thing you read, aside from me saying “thanks and good luck!” in the next chapter.

I wouldn’t be here, writing this book, or doing any of the things I’ve
done if it wasn’t for the strange quirk of circumstances that thrust me
into the world of data and data analysis. I think I mentioned already
that I was a Geologist when I started my tech career. That might seem
like an odd entry point, but it’s worth understanding what happened
to me because a variation of it will likely happen to you.

YOU KNOW COMPUTERS, RIGHT?


This is what my boss said to me back in 1995 during a project meeting at Levine-Fricke, an environmental company in Emeryville, CA. We were running a massive cleanup job out in Stockton, CA, where some dry cleaners at a commercial shopping center decided to dispose of their solvent waste by flushing it down the toilet in the back room.

They did this for years, until the 1990s when the owner tried to sell
the land and an environmental assessment was done, which found a
problem.

The solvents that were dumped down the toilets made their way into
the main septic system, which was made of clay. It should have been
replaced in the 70s and 80s, but it wasn’t, so it was cracked
throughout, which let the solvents from the dry cleaner seep into the
ground, and then into the water table 50 feet (ca. 15 m) below the
shopping center.

When it was found, everyone involved lawyered up and sued each other. The city sued the shopping center, which sued the dry cleaner businesses (there were three). The dry cleaners sued both of them right back, claiming their businesses had been impacted due to pipes that should have been replaced and a city which failed to enforce its own rules.

Millions of dollars were on the line, and we were in the middle of it all, trying to help get the chemicals out of the ground. We installed sampling wells throughout the 35-acre property, took soil samples, and wrote numerous reports. These reports had a lot of detailed information in them, and it needed to be correct.

Back in those days, we would receive the analysis reports on paper, and someone who could be trusted did the data entry, which (for us) was Lotus 1-2-3. The numbers would be spot-checked, and then the engineers and science people would request reports from the “quants”, who knew how to use Lotus, and could create charts and graphs.

1995 was the year that Windows 95 was released, along with Office
95, which made desktop computing a lot easier and more accessible.
Our laboratories started offering Excel spreadsheets for our results, in
addition to paper reports, which is when my boss popped into my
office and asked about me knowing computers.

“Yep. Why do you ask?” In response, he tossed a floppy disk on my desk, full of data from our analysis lab. He asked me to put it into a spreadsheet, sort it by date, and give him a printed report for well MH-232, which had an anomalous result in the previous round of sampling. He wanted to see the historical data — all 18 months of it — complete with averages and min/max calculations.

You might be wondering why he didn’t run this himself? The short
answer is that computers and their software were expensive back
then, so they were only given to people that would actually use them,
and one of those people was me. My boss was not, and he wasn’t sad
about it.

I created the report he asked for, though I did have to call a friend to
get some help on a few of the formulas I needed. Averages and
Min/Max are pretty easy, but I had a few inspirations I wanted to try
out, so I dug into formulas.

I had a blast, and my boss loved what he saw. As he reviewed the numbers, he asked a question I’ll always remember:

You sure this is right?

I was tempted to say “of course, I didn’t create the data, the lab did”
but that wasn’t what he was asking. He didn’t know how I was
calculating these numbers, and he also didn’t trust Excel to calculate
the sums correctly. He wasn’t wrong to think these things — precision
errors are notorious with computers, but I didn’t know that at the
time.

I cross-checked the calculations with a calculator, yes, I think they’re correct.

He nodded his head and, like Adam always did, whipped out his own
calculator because he trusted me as much as he trusted Excel, which is
not at all. Thankfully everything was in order, and Adam looked up at
me, smiled, and said:

Good work. You’re the data guy now.

Hello Access

I still got to do Geologist things, but more and more I found myself
sitting behind a computer, pumping numbers into one system or
another. I didn’t want to sit behind the computer at all, but the
position I now held was a critical one, and I loved it.

I taught myself Microsoft Access along with relational theory (meaning I bought some books at Barnes and Noble) and created one hell of a database application that could mine through analytical results and also create beautiful reports.

I taught myself Microsoft Project and created the Gantt chart for the
entire investigation, which we had to print out on a massive plotter
and then tape to the wall behind me. My boss loved the thing,
everyone else hated it, but it was always fun to have people drop by
and look over my shoulder at what was going to happen next in the
project.

After a year of this, I began to think of myself more as a tech person than as a Geologist. My boss had gone out and bought a copy of ESRI ArcGIS, one of the very first geospatial data programs you could buy. I read through the manual, plotted out the locations and elevations of our wells and their water levels, and within a week had produced a 3-dimensional graph that showed which way the groundwater was flowing, and where the contaminant plume was likely strongest.

My boss was speechless. He asked his favorite question at least 5 times: are you sure about these numbers? But this time I had attached an addendum with all the data and any assumptions for the graph.

By that time, the lawsuit was joined by other businesses that were
upset because leaked solvents were making their way through the
groundwater and underneath their businesses, which put them at risk
for a big cleanup bill from the state. A gigantic mess, as you can
imagine, but having this report helped solve numerous issues and
provided immense value to our clients.

Data Is Life

I’ve discussed this before, but it bears repeating at least 10 more times: your IP is cool, especially as an asset for your company. Your customers couldn’t care less about your code — what they care about is the experience using your application, and that experience is powered by data.

Without customers, your IP is worthless. Without data, your code is just that — code, sitting there on its own, offering no value to anyone.

Many programmers, mostly junior, don’t like this idea because they
like building applications and think that the applications themselves
are worth something. “Without my application, this data wouldn’t
exist!” That’s true. Unfortunately, that’s also a pretty good self own.

As time goes on, your application will change shape, be rebuilt, crash,
come back to life, become loved, hated, and everything in between.
Your data, however, will always be your data and if you lose it, you
might as well lock things up and go home.

Data Is Power

The data your application and monitoring systems collect will be used
to guide your company direction. As great as your code might be, I can
guarantee you it won’t be presented at a board meeting. Unless, of
course, it produces crappy data and your boss tries to deflect blame to
you, showing your code as the culprit (it happens, I’ve seen it).

If your application produces solid, actionable data, you will be a star. As I write these words, I’m thinking of Adam’s face when he read my report. It was like the last scene of Whiplash when Terence Fletcher finally sees the genius of Andrew Neyman (though I suppose we could argue about what the ending meant, but let’s not).

I moved from a simple “Staff Geologist” role to “Data Guy”, one of the
pivotal people on our project. I’m not suggesting you drop what you’re
doing and become a DBA, but I am suggesting that you understand the
critical nature of handling data properly, and understand the power it
has for you and your company.

We’ll get to practical things in just a bit, but they won’t mean a thing
if you haven’t bought in to this truth.

THE DATA ALWAYS TELLS A STORY


Look around the room you’re sitting in now. If you’re outside, do the
same. What is this place telling you? There are stories here that
combine to tell an even bigger one. Maybe someone left their shoes
on the floor by the door instead of putting them in the closet.
There’s an empty coffee cup on the desk, and some light music
playing.

Playing detective, we could theorize that someone just came back and was preoccupied with a work task. They kicked off their shoes, made some coffee, turned on some music and started their work — which was probably writing, given the Ulysses app is open on the screen with the cursor blinking next to a few words. If you’re starting to wonder if this person is me, coming home from the store with an itch to write on a Saturday evening, you would be correct.

Your database contains the same clues that people will sift through,
trying to read the story that they’re interested in. Marketing people
will want to see behavior and responses to their campaigns. Executive
types will want to see sales numbers and who is buying what, when,
where, and how.

But those are just first-step analyses, which lead to more questions, which is where you come in. For instance: your boss might spot a weird trend in sales coming from Sweden. Every Tuesday night at 3AM there’s a decent spike in sales, and they’re all from Sweden. More information is needed here, including:

- What pages did these people land on, and how long did they stay?
- What other pages did they visit?
- How many bought something, and how much did they pay?
- Who was the referrer?

What’s your answer going to be to these followup questions?

It’s worth noting that all of these questions can be answered with something like Google Analytics, or any other reporting service out there. Your marketing department has decided, however, that financial decisions can’t be made using these services due to script blockers, which are prevalent among your users (programmers).

Thankfully, you plugged in a first-party reporting package (meaning: not a hosted service) that records the information your boss is looking for. You’ve been careful to scrub the IP address from the database as it can be used to find location information, but everything else is there.

You run the report, hand over the data, and watch the boss smile as
they go to work trying to figure out why Swedes are digging your app
at 3AM.

The story is in your data, you just need to be certain you have the
clues you need.

Jobs and Joins

Let’s take this to a more practical level, shall we? To illustrate what I
mean by “the data tells the story”, consider these tables:

The simplest of job trackers, with people, jobs, and assignments. Now
let’s add some data:

Straightforward still, I hope. We have 4 people and 4 jobs, and 6 total assignments. What does this tell you? Hopefully, you’re thinking that some people are doubling up on jobs, which would be true by definition.

Let’s run a query and see who’s doing what:



Looks like Darth Vader and Apple Dumpling are feeling pretty alpha, but there’s more to this:

- Darth and Apple are taking on multiple assignments, including Bake Apple Pies, which they’re doing together. Two alphas doing the same job; is this a good thing?
- Totoro is doing the least work, sharing his job with Apple Dumpling.
- Something is weird about the assignment algorithm — either the people doing the work are influencing it, or there’s a bug somewhere, given the way the jobs have been divided up.

That last bit is critical, as it surfaces an assumption that could be core to the business: are jobs supposed to be done by one person? If not, how can we keep people from signing up for every job and getting burned out?

There’s another problem here, however, which I’m guessing many of
you haven’t noticed (bonus points if you did) — there are only 3 people
in our report, but there are 4 people in our system. That tells us someone is
being a slacker.

Let’s tweak our query and use a left outer join so that all the people
in the people table are shown:
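A sketch of that query with sqlite3, using a hypothetical reconstruction of the schema and seed data (job names beyond those mentioned in the text are invented):

```python
import sqlite3

# Hypothetical reconstruction of the schema and data; job names other than
# "Bake Apple Pies" and "Ruling Arrakis" are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE assignments (person_id INTEGER, job_id INTEGER);
  INSERT INTO people VALUES (1, 'Darth Vader'), (2, 'Apple Dumpling'),
                            (3, 'Totoro'), (4, 'Duke Leto');
  INSERT INTO jobs VALUES (1, 'Bake Apple Pies'), (2, 'Polish the Death Star'),
                          (3, 'Forest Patrol'), (4, 'Ruling Arrakis');
  INSERT INTO assignments VALUES (1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (3, 3);
""")

# LEFT JOIN keeps every row from people, even with no matching assignment;
# the missing job comes back as NULL (None in Python).
rows = conn.execute("""
  SELECT p.name AS person, j.name AS job
  FROM people p
  LEFT JOIN assignments a ON a.person_id = p.id
  LEFT JOIN jobs j ON j.id = a.job_id
  ORDER BY p.name
""").fetchall()

for person, job in rows:
    print(f"{person}: {job or 'no job!'}")
```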

Aha! Duke Leto is slacking off! Or is he? All we know, from running
this query, is that Duke Leto doesn’t have a job, which means he
either didn’t sign up for one, or wasn’t given one, or…?

Let’s dig in further and run a full outer join, which will show all job
data along with all the people data:
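Postgres supports FULL OUTER JOIN directly; sqlite3 only gained it recently, so this sketch (again using a hypothetical reconstruction of the schema, with invented job names) emulates it with two left joins and a UNION, which is a common workaround:

```python
import sqlite3

# Hypothetical reconstruction of the schema and data; job names other than
# "Bake Apple Pies" and "Ruling Arrakis" are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE assignments (person_id INTEGER, job_id INTEGER);
  INSERT INTO people VALUES (1, 'Darth Vader'), (2, 'Apple Dumpling'),
                            (3, 'Totoro'), (4, 'Duke Leto');
  INSERT INTO jobs VALUES (1, 'Bake Apple Pies'), (2, 'Polish the Death Star'),
                          (3, 'Forest Patrol'), (4, 'Ruling Arrakis');
  INSERT INTO assignments VALUES (1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (3, 3);
""")

# A full outer join emulated as: (people side) UNION (jobs side).
# In Postgres you'd write FROM people FULL OUTER JOIN ... instead.
rows = conn.execute("""
  SELECT p.name AS person, j.name AS job
  FROM people p
  LEFT JOIN assignments a ON a.person_id = p.id
  LEFT JOIN jobs j ON j.id = a.job_id
  UNION
  SELECT p.name, j.name
  FROM jobs j
  LEFT JOIN assignments a ON a.job_id = j.id
  LEFT JOIN people p ON p.id = a.person_id
""").fetchall()

for person, job in rows:
    print(person, "/", job)
```

Two unmatched rows surface: a person with no job, and a job with no person.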

Uh oh. This data is showing us something else, entirely! Ruling
Arrakis is a job that needs doing, and no one is doing it. Leto should be
doing a job, but he’s not. Using our data detective skills (which means
we hate coincidence), is it possible that Leto should be ruling Arrakis,
but was removed from the job somehow?

We should probably get ahold of Leto. Darth Vader wants to control
the galaxy, after all, and there could be foul play here!

Sure It’s Silly, But This is Normal

The more you dig, the more you find. That’s the way it is with data,
and while I will agree that the demo above was a bit contrived, it’s
also grounded in real experience trying to find a cause for a given
effect.

With the first report we ran, we had no idea there was someone
without a job. We then dug in a bit more and assumed Leto was being
a slacker, and then we dug up a galactic conspiracy thanks to being
able to write good SQL.

If this were a real investigation, we would want more data here, like
Leto’s journal entries, employment agreement, last login time and
more. This kind of thing is going to happen to you: someone at your
company is going to find a weird anomaly that could turn into money
or, worse, help stave off bankruptcy. They will turn to you for more
information, more data, something that will help them string together
the clues to figure out if their bold new plan is a good one, or if it’s
been tried already and failed. And everything in between.

Like so many things in these later chapters, appreciating the role of


data in your job is something you build over time through experience
and failure. Hopefully, the failures that you face won’t cost your
company, or your job.

CULTIVATING YOUR DATA SENSE


As you work with data, you cultivate what I like to call a “data sense”.
This comes from auditing data sets, sifting through records with a

goal in mind, and preparing reports. Scanning logs, for instance, to see
why a user couldn’t log in, builds your data sense.

Hi there, I’m trying to log in to your site and I can’t. Please let me
know when I can.

This is a typical support request from the worst possible source:
another developer. Whether intentional or not, you will always have to
dig for more information… or will you? We have their email, and we
also know the problem: they can’t log in. We also know that this is the
only user to report this problem, so it appears to be an isolated event.

This is where our data senses kick in! What can the logs tell us? It
turns out, quite a lot:

- We can see there are no 500 errors in the last 2 hours, which covers
  an hour before the email was sent.
- The email logs show success, aside from a few bounces that don’t
  involve the user who sent the email.
- Checking the database, our user doesn’t exist in the users table,
  based on their email address.

This narrows things down quite a bit; the last thing we need to do is
query the logs for our user’s email. The only reason I didn’t do this
first is because I wanted to rule out the simplest explanations.
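If your app writes structured (JSON-lines) logs, that last check is a one-liner. Here’s a sketch; the field names and addresses are invented for illustration:

```python
import json

# Invented sample log entries; in practice these come from a file or
# your log aggregator, and the field names will differ.
log_lines = [
    '{"level": "info", "event": "login_link_sent", "email": "pat@example.com"}',
    '{"level": "info", "event": "login_ok", "email": "sam@example.com"}',
]

def entries_for(email):
    """Return every parsed log entry mentioning the given email address."""
    return [entry for entry in map(json.loads, log_lines)
            if entry.get("email") == email]

# The address from the support request turns up nothing at all:
print(entries_for("grumpy.dev@example.com"))
```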

Doing a quick query on the logs, there are no entries found. This tells
us something important:

Sorry for the trouble; I don’t see your email address in our system,
nor do I see a request for a login link. Is this the email you used?

The reply comes shortly thereafter:

No, I used [email protected].

If you can’t tell, I’ve had this email exchange before. I’m not sure what
it is about programmers, but wow are they bad at asking for help.
Anyway: now that we have this email, we can quickly search for
our user in our logs and our users table.

Check your spam; it looks like an email was sent successfully to
you three times.

It happens.

Sleuthing Is a Skill

We know our data, so we can solve problems quickly and, as usual,
our logs rescue the situation. But let’s consider something I am
willing to bet a box of Tim Tams on: the request from marketing.

Marketing departments tend to keep their own datasets, which usually
involve third-party services like Google Analytics, Google Tag
Manager, and various other reporting tools. Occasionally, however,
you’ll get an email like this one:

We’re trying to prepare an analysis on the holiday sales campaign
for last December, but need accurate numbers. Can we get a sales
report, please? This would need to include daily sales and product
information. Demo would be great too.

What marketing is asking for here is a CSV dump of the sales data
and, before you read on, are there any alarms going off in your head or
weird tingling sensations happening in your body right now? If not,
here’s a simple truth for you: data that’s not in your database is not
protected by anything.

When you send CSVs out into the wild, consider them public domain.
The data will appear on laptop screens left open at Starbucks and
reports printed out and lost on the subway or airport lounge. Never
provide sensitive information unless you arrange for its safety ahead of time.
You’ll be the one who gets the blame, by the way, every time.

Hopefully, by now, you know what I mean by “sensitive information”,
which is anything that might violate your customer’s privacy (which
should never be in any report anyway) and/or anything that would
hurt your company or yourself.

Daily sales don’t fall into this category, but demographic (“Demo”)
information might if you’re not careful. Knowing this, we can run a
query for the month of December, ensuring there are no email and IP
addresses included. We can add city, region, and country if we want
because it won’t be tied to any personal information.
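As a sketch of what that might look like (the sales schema and column names here are entirely invented), the safest move is for the SELECT list to simply never mention the email or IP columns:

```python
import csv
import io
import sqlite3

# Invented sales schema for illustration; your column names will differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE sales (
    created_at TEXT, amount REAL, sku TEXT,
    email TEXT, ip TEXT, city TEXT, region TEXT, country TEXT
  );
  INSERT INTO sales VALUES
    ('2023-12-01', 49.00, 'book',  'a@example.com', '1.2.3.4', 'Oakland', 'CA',  'US'),
    ('2023-12-01', 49.00, 'book',  'b@example.com', '5.6.7.8', 'Malmo',   NULL,  'SE'),
    ('2023-12-02', 12.00, 'video', 'c@example.com', '9.9.9.9', 'Sydney',  'NSW', 'AU');
""")

# The rollup: daily sales by product and country. The email and ip
# columns are simply never selected, so they can't leak.
rows = conn.execute("""
  SELECT created_at AS day, sku, country,
         SUM(amount) AS total, COUNT(*) AS orders
  FROM sales
  WHERE created_at BETWEEN '2023-12-01' AND '2023-12-31'
  GROUP BY created_at, sku, country
  ORDER BY day, sku
""").fetchall()

# Write the CSV that goes to marketing.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["day", "sku", "country", "total", "orders"])
writer.writerows(rows)
print(out.getvalue())
```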

But there’s another problem here. One that you need to get stung by a
few times before you start appreciating just how messed up it is.

Within a few hours, you get a reply:

Are you sure these numbers are correct?

I blame texting. People dig being terse and don’t realize that leaving
out critical information requires more typing in the form of protracted
Q&A. Our marketing person doesn’t like our data, for some reason,
and after a few emails back and forth we find out that the numbers
don’t align with their expectations:

We did a full blast on New Year’s Eve and Google is telling us it
worked. Our traffic spiked and if our conversions are 3.2% like
they always are, we should see a lot more sales than this. Check
again.

Your initial reaction is a human one: I’m not deleting sales information, if
that’s what you’re asking. That was my reaction when my client said
these exact words to me back in 2001, when they did a full sales
push for the holiday. They fell short of expectations if they used the
sales data from my report, and were insistent I had done something
wrong.

I pushed back, told them the numbers were the numbers and no, we
didn’t lose any orders. That’s when my client asked me a question that
changed my life. Mark was a pretty technical guy and understood
databases well. With his perfectly nuanced Queen’s English, he asked:

What time zone are you using, my septic friend?

Interesting question that I had no idea how to answer. My first
inclination was to ask why that mattered, but Mark pressed on:

If you’re using GMT, then we’re missing 8 hours of sales data.
Give it a look.

It took me a few minutes to catch up to him, which was embarrassing,
and the only reply I could think of was “but… the servers are sitting
right here next to me…”. Our office was on 1st and Harrison in San
Francisco, so… shouldn’t that be the timezone on the order?

In short: no. Skipping ahead, I was, indeed, storing the date as UTC
(Coordinated Universal Time), which corresponds to the GMT time
zone. My client’s office was in Oakland, the sales time needed to
reflect that, and it’s something I never checked on the server.
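Here’s the bug in miniature (the dates are made up): a sale stored in UTC can land on a different calendar day than the one the customer, and your client, actually experienced.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# A sale recorded at 2:30 AM on Jan 1, UTC...
stored = datetime(2002, 1, 1, 2, 30, tzinfo=timezone.utc)

# ...actually happened the evening of New Year's Eve in California.
local = stored.astimezone(ZoneInfo("America/Los_Angeles"))

print("UTC day:  ", stored.date())   # 2002-01-01
print("Local day:", local.date())    # 2001-12-31
```

Group a month of sales by the UTC date and you can silently drop (or gain) hours of orders at either end of the reporting period.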

Ironically, this exact problem would happen to me again in 2010, but this time
with my own business.

This problem is so, so common, especially when working with CSV
files. Relational systems can work with timezones just fine, but when
you export the data, it becomes pure text and all of that relational
goodness is gone. That’s when people step in and do people things,
blowing up your data.

This is a spreadsheet I downloaded from NASA/JPL (yes, that one). I
tried to import the data into Postgres and got an error because of this:

Someone at NASA decided to add a column to the CSV, projecting the
epoch data (a big integer) as a human-readable date. Unfortunately,
there’s a leap year bug in Excel which produced this invalid date,
which is so fun to try to debug.

You Just Know It Will Happen



My point with these stories is this: people do weird things with
spreadsheets. Once you let the CSV out the door, it will be mangled.
It will be imported into Excel, somewhere, and people will make
decisions on it, which is fine; just be sure to never let it back into
your database.

So, in summary:

- Always “cleanse” your export data and ensure that privacy
  information and other sensitive things are never included in an
  export. That file will be lost or made public somehow.
- Know that a time zone problem will happen, so make sure you’re
  clear on time expectations before you send your data if it includes
  dates.
- You’re bound to have someone email you a CSV, saying they “fixed”
  some data for you. Nope. Never. Even if it’s your boss. Especially if
  it’s your boss.

Hard-won Experience

Whenever I download a CSV data dump, which is usually from a
service I use like Shopify, Stripe, or ConvertKit, I “scan” the
information and look for inconsistencies or things that cause my data
senses to flare up. This might include:

- Empty strings instead of 0.
- The use of the text “null” instead of an empty string.
- Date formatting and the time zone used.
- Casing issues with lookup information (this is so damned common).
- Random, empty cells.
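A quick sanity scan for the first few of these is easy to automate with the standard library. Here’s a sketch; the CSV content is invented, with a few of the problems above baked in:

```python
import csv
import io

# An invented CSV with an empty cell, a literal "null", and mixed
# casing in a lookup column.
raw = """sku,qty,price,status
book,3,29.00,Active
video,,null,ACTIVE
tshirt,0,12.50,active
"""

problems = []
for line_no, row in enumerate(csv.DictReader(io.StringIO(raw)), start=2):
    for col, val in row.items():
        if val == "":
            problems.append(f"line {line_no}: empty cell in '{col}'")
        elif val.strip().lower() == "null":
            problems.append(f"line {line_no}: literal 'null' in '{col}'")

# Casing check: a lookup column shouldn't shrink when case-folded.
statuses = {row["status"] for row in csv.DictReader(io.StringIO(raw))}
if len({s.lower() for s in statuses}) != len(statuses):
    problems.append("'status' column has inconsistent casing")

for problem in problems:
    print(problem)
```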

You’ve probably faced a lot of these issues yourself. It really is
spectacular what people will do with data. Here’s a CSV I
downloaded from Gumroad when I ragequit their service back in
2016:

A date without a time stamp or a time zone. Thanks for that. Now I
get to guess how to handle orders from Australia vs. Hawaii… yay!

Being positive: the more you develop these skills, the better you will
be at anticipating what will be needed from your export information.
The better, and more complete, your export is, the better you look.

TOOLS FOR DATA ANALYSIS


The last thing you want to be doing is slinging SQL queries in
response to random emails from around your company. And there is
absolutely no way you’re going to allow anyone apart from yourself and
a trusted few on your team to access your database. People wonder
why DBAs are such jerks, and hopefully, you’re starting to understand:
the data is the key to everything.

That said, you do have to let other people play with it occasionally —
but how do you do that without compromising things?

Thankfully, there are some tools that you can use to make life simpler.
These are referred to as “Business Intelligence” tools, or “BI” for
short, and you should get to know them and have one as your go-to
when you’re asked for your opinion.

To that end, let’s start with mine!

Metabase

Metabase is a self-hosted solution that is ridiculously good for an open
source (and free!) tool. You can roll this thing up with Docker quickly,
and turn it over to your data people (usually marketing or sales) to do
as they need.

I just love their pitch:

Metabase is a pretty hefty Java application, and setting it up using
Docker (which we’ll do in a second) takes a few minutes, depending
on the resources Docker has available to it. For me, this seems to
average about 2–3 minutes on my M2 Mac.

Once the setup is done, you’ll see this:



Metabase sets up its own dedicated database for itself, which is
important to remember, as you don’t want to mess up the Docker
Compose file and use your production settings, which I’ve done too
many times.

The setup wizard does the normal wizard things, adding you as an
administrator and setting up your credentials. Then comes this
important screen:

This will be our first connection. You can connect to multiple
databases to run analytics, but it’s a good idea to have one from the
start.

As long as you can access your production system (meaning a port
isn’t blocked), you can add those credentials here. If you want to play
around with your local Postgres, however, you need to be sure to enter
host.docker.internal as the host. Metabase is running in Docker,
remember, so it can’t see your machine unless you use that host
name.

Once connected, you’ll see your dashboard:



This is my development database for my Rails app that I’m using for
Big Machine. Metabase snooped around in there, and came up with
several “explorations” that might interest me.

This is where we enter the deep end of the analytics pool. Metabase is
freaky good at making guesses as to what you want to see.

For instance, here’s a rollup of my sales over time:

I can turn this into a quarterly chart, pie graph, and more.

I can also use ad-hoc SQL to build a dataset that I would like to dive
into. This is powerful, but it’s also scary because, yes, you can write
inserts, updates, and deletes, which is not a good idea.

Thankfully there’s a robust permissions system:

You can do simple things here, or you can do things at a granular
level, going table by table, in case you don’t want a certain group of
users to have access to a given table.

I could write for days, showing you all the cool things you can do with
Metabase. I use it for my own business; it sends me daily emails
with sales charts (seriously), and I can dig into anything I can think of
with pivot tables.

I honestly can’t recommend this tool enough. It’s ridiculous how good
it is! I used to work at an analytics company back in 2003, and I
remember we paid $12,000 for a system called “Cognos” that, well,
let’s just say it was nowhere near as good as Metabase. A long time
ago, sure, but I’m still blown away every time I open this tool up.

There is, of course (and thankfully), a paid tier, which isn’t so
expensive, honestly. I’m happy running this on Docker the way it is,
but if you need more control or have teams of people, the starter plan
is about $100/mo and worth it. They even host the solution for you!

Setting Up With Docker

The easiest thing to do is pop this Docker Compose file somewhere on
disk, then run docker-compose up:
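A minimal sketch of such a Compose file is below; the metabase/metabase image and the MB_DB_* environment variables are real, but the ports, credentials, and volume name here are placeholders you should change:

```yaml
services:
  metabase:
    image: metabase/metabase
    ports:
      - "3000:3000"
    environment:
      MB_DB_TYPE: postgres
      MB_DB_DBNAME: metabase
      MB_DB_HOST: db
      MB_DB_PORT: 5432
      MB_DB_USER: metabase
      MB_DB_PASS: change-me
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: metabase
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: metabase
    volumes:
      - metabase-data:/var/lib/postgresql/data

volumes:
  metabase-data:
```

The db service here is the dedicated application database Metabase keeps for itself; the databases you analyze are added later, through the UI.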

That’s obviously not the full file, you can get that from this gist.
There’s more information here as well if you run into any problems.

Excel. Always Excel.

It’s likely your client/boss/whatever will be using Excel, and will ask
you for an “Excel file” that contains whatever data they want to have.
I’m sure you’ve used Excel — it’s ubiquitous in the business world,
and for good reason: it’s excellent at playing with data.

This means you need to be good at working with CSVs. I made a video
for Postgres fans that you can watch on YouTube if you like. A few
tricks here and there can always be useful for you when it comes to
CSVs.

Should you learn Excel? If it works for you. I use it occasionally, but I
usually find what I need much faster using SQL. There is one thing,
however, that you could learn about that might solve many problems
pretty easily.

The Web Query

The data for your Excel spreadsheets does not have to be in a sheet —
there are data sources of all kinds, including my favorite: the Web
Query.

If you don’t know about this: you can give Excel a URL, and it will
scan the web page it lands on, looking for tables. It will ask you to
verify the selection visually, and then you can transform the data if
you want using an old tool called Power Query.

This is a list of lessons I offer on my site, bigmachine.io. As you can
see, Excel is smart enough to scan the page and scrape out the tables.

Once you’re done, close and import, and the next thing you’ll see is:

This is a sheet in Excel just like any other, and you can base equations,
charts, and rollups on this data as normal. You can even set options to
refresh on a given period so you don’t have to rerun this query
every day:

This lets the decision makers in your company do their thing without
worrying about bugging you. The best part? They’re using their
favorite tool!

Every Other BI Solution

Business Intelligence is a booming business, as you can imagine. The
rise of AI is making it even more intense, and the tool your company
chooses is likely to have little to do with you. The only reason I added
a section on tools is for the time you’re asked the dreaded question:
so, how are sales looking? It might not be that exact one, but it will be a
variation of it.

You do not want to be the data fairy. Your job is to ship software
and ensure that your clients and bosses know the value of what you
do. Your role, when it comes to data and reporting, is to ensure that
you’re writing what people need to know about and omitting what is
not theirs to see.

If you work in a larger company, it’s likely they’ll already have a
reporting and analysis tool, like Power BI or Tableau (among others),
and if that’s the case, you’re golden. Schedule some time to understand
the reports they run so you can be sure you are tracking all the data
they need (I know, I’m repeating myself, but that’s how important
this is).

SUMMARY
You know what I’m going to say, as I’ve been saying it throughout the
chapter and also throughout the book: data is the value of your
application. That data is worth good money, and you’re the one
producing it. Yes, the true value of your business is based on sales and
customer count, but all of that is determined by your data.

The best way to ensure you’re capturing the things you’ll need to
show your clients and bosses at the end of the month is to figure it
out, as best you can, from the beginning.

Here are some good questions:

- How will we know, as a business, that we’re offering our customers
  value?
- How do you calculate our growth trajectory, and is there a plan in
  place?
- When do we get to discuss performance bonuses?

Use your discretion on the last one. It’s important to know, but there’s
a time and a place. That said, it is a question you should have in the
back of your mind. Your salary and incentives are based on the
company bottom line, but how is that figured?

It’s wild to think that your paycheck is based on the quality of data
you’re storing, isn’t it? That’s a good perspective to have as you
consider reporting capabilities.
THIRTY
AND HERE WE ARE
GO MAKE SOME MAGIC HAPPEN

I started this book in September 2020, sitting in an apartment in
San Anselmo, CA, staring at an email from a person who had
just read one of my old posts. In summary, the note said:

I just finished your book and I loved it, but I wish it had more
information about the things you’re writing about here. A lot of
this goes right over my head, and I have no idea how I’ll be able to
ever lead a team. There’s just way too much to know.

The post was about mashing two design patterns together: Unit of
Work and Repository. In my opinion, it’s a ridiculous thing to do and
can result in transactions you don’t intend.

It occurred to me, at that point, that there is a lot a new software
engineer needs to know as they move up in the world. These things
are both personal and technical, and I thought it might be useful to
create a book that touches on both.

Mike Gunderloy’s book, Coder to Developer, changed my career, entirely.
I remember reading it and thinking “my goodness, I’m an actual
software developer now!” In those days, if you were eager to learn
something, you either signed up for in-person training, or you waited
for a book to show up at your local book store. There were blogs and
magazines, but online training had yet to become popular and if you
wanted to know something, you needed to find a book.

In 2024, things have changed, radically. It’s no longer good enough to
make sure you use source control — you have to structure how you
use it, and follow standard practices to get the full value from it. We
write software a bit differently as well — builds are cheaper and easier
than ever to create, and occasionally, we commit our code right into
the trunk.

We’ve also become pseudo-IT professionals, given the rise of Docker
and the ability to turn the data center into a series of YAML files.

If you have 2–3 years experience under your belt, and you’ve read this
book hoping to shortcut your career, be careful. It’s great to know
what’s coming, but it’s not so great to avoid living it. There’s no
replacing solid experience, so use this book as a way to get yourself
out there, and get paid what you’re worth.

A FINAL THOUGHT ON POWER


You will gain influence as your expertise grows. People will listen to
you and, maybe someday, buy your books and online courses. This is
power: the ability to influence others based on your reputation.

We’ve discussed this throughout the book, and I hope to leave off with
it as well: if the word “power” affects you, think about why. For many
people, they see visions of power-hungry freaks screwing the world
over so they can make another million.

That’s not what I’m talking about.



I’m talking about using your place in life, your privilege, education,
drive, charisma, and empathy to do your best work, which is
delivering software. You can’t do this alone; you have to be able to
influence your team to do their best work, and your boss to let them
do so.

Don’t shy away from the word, own it and do good with it. If you
don’t, someone else will, which would be a shame.

Thanks so much for going on this journey with me! Best of luck
to you.
