Builder Tools and the Development Process
The standard development process at Amazon uses both continuous integration and
continuous deployment as you proceed through the phases of development, as depicted
in the following graphic:
In most cases, each Builder Tool is used within a particular development phase.
However, you might also use features provided by a single Builder Tool across
multiple phases—for example, the Brazil Build System (Brazil). As its name
suggests, Brazil is primarily a build system used during the Build & Package phase.
However, it provides functionality that you will also use during the Develop phase,
including creating a new workspace.
This section provides an overview of the different Builder Tools and when and how
they apply during the development process. The content is organized by development
phase, with details about the activities commonly performed within that phase and
the Builder Tools that you will use to perform each activity. See Tool
Recommendations for Web Services for a list of recommended tools that are organized
by development phase and compute platform.
Note
This is an overview only; we don’t include step-by-step instructions for each
activity. For that level of detail, we provide links to the relevant documentation.
Setup & Create
The Setup & Create phase is the starting point for new development. The Builder
Tools used in this phase enable you to Set up a development environment and Create
applications and packages that you will take into the Develop phase.
In addition to installing Builder Tools, you may want to install other tools,
including a file-syncing solution, a Cloud Desktop GUI, an IDE, and/or a text
editor. You can use your preferred IDE and editor to access and edit packages in
your workspace (the most popular IDEs at Amazon are the JetBrains family (for
example, IntelliJ IDEA and PyCharm) and Visual Studio Code). Some IDEs provide more features
and plugin support for Builder Tools. One option provided by Builder Tools is Black
Caiman. Black Caiman is a JetBrains IDE plugin that enables you to perform common
source and build operations from within your IDE (without using a CLI).
PROD/CORP: You can create empty packages containing only a package specification
file, based on your language preference (for example, Java, Python), or you can use
built-in package templates. BuilderHub Create generates the selected package type
with the necessary package structure to begin development.
For PROD/CORP infrastructure as code packages, you can create LPT packages. Live
Pipeline Templates (LPT) is a system to model your service stack in source code. It
uses built-in pipeline templates to apply Amazon’s infrastructure best practices to
your service’s operations by default. You can also create your own pipeline
templates to model your team’s specific needs.
Native AWS: You can create complete example applications that include all of the
AWS resources and Builder Tools resources that you need to start your development,
including packages, workspaces, version sets, pipelines, and deployment targets.
For Native AWS infrastructure as code packages, BuilderHub Create uses the AWS
Cloud Development Kit (CDK). The CDK is a software development framework for
defining cloud infrastructure in code and provisioning it through AWS
CloudFormation. See Getting Started With CDK in the Native AWS Developer Guide.
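For orientation, the public CDK CLI workflow looks like the following sketch.
Inside Amazon, CDK apps typically live in Brazil packages and build through
brazil-build, so your team’s exact steps may differ.

    # Emit the CloudFormation template(s) for the app.
    $ cdk synth

    # Compare the synthesized template against the currently deployed stack.
    $ cdk diff

    # Provision the stack through AWS CloudFormation.
    $ cdk deploy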
Develop
The Develop phase is when you start developing the code in your packages and
perform continuous integration. The Builder Tools used in this phase enable you to
Create a workspace, Clone your package, Write your code, Build your code locally,
Test your code locally, and Submit a Code Review (CR).
Create a workspace
To develop packages on your development environment, you create a workspace using
the Brazil CLI. A workspace is a personal, predefined directory that includes a
default set of folders and files that are created and populated based on the
actions you take within the workspace.
Workspaces can be created for specific development tasks and deleted at any time
(in other words, they are not repositories). Typically, you will have multiple
workspaces for the different projects that you’re working on, or even a separate
workspace for each feature you are working on in a single application (provided the
features are independent of each other).
For more information about how to create and manage your workspaces, see Working
with workspaces in the Brazil CLI Guide.
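For example, creating and entering a new workspace might look like the following
sketch. The command syntax is illustrative and the workspace name is a placeholder;
confirm the exact flags in the Brazil CLI Guide.

    # Create a workspace for a feature (illustrative syntax; see the
    # Brazil CLI Guide for the exact flags). my-feature is a placeholder.
    $ brazil ws create --name my-feature
    $ cd my-feature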
To search and view Git repositories in GitFarm, you use Code Browser. Code Browser
is a web UI that you can use to search for packages and view detailed information
about your repository, including its permissions, branches, and commits (in other
words, it’s GitFarm’s web UI and includes functionality that’s similar to external
tools like GitHub).
To begin working with a package, you clone it into a workspace using the Brazil
CLI. After a package is added to your workspace, you can use Git to perform common
commands, including branching, checking the status of a repository, and pulling
the latest commits into your workspace.
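For example, adding a package to a workspace and working with it through Git might
look like this sketch. The brazil command syntax is illustrative and MyPackage is a
placeholder; see the Brazil CLI Guide for exact usage.

    # Clone a package into the current workspace (illustrative syntax;
    # MyPackage is a placeholder name).
    $ brazil ws use --package MyPackage
    $ cd src/MyPackage

    # Standard Git commands work inside the cloned package.
    $ git checkout -b my-feature
    $ git status
    $ git pull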
The brazil-build command builds your package using the build system(s) defined in
the package specification file. Brazil uses different build systems for each
programming language. One example of a build system used for mobile development is
Electric Company. Electric Company provides a Brazil-integrated build system for
iOS and Android development, as well as a web UI that you can use to sign native
artifacts.
When the brazil-build command runs, your local build systems will automatically
fetch the necessary components of the packages you depend on and place them in the
Brazil package cache. It will then produce local runtime environments for each
build system and store them in your workspace’s env folder. When the build
finishes, the command outputs the build status (pass/fail) and produces build
artifacts that are stored in your workspace’s build folder.
There are different brazil-build command options that you can use to verify that
your changes build successfully when using different build targets. For example, to
verify that your package builds successfully using the build system’s default
target, you can simply run brazil-build. To see if your package will build using a
release target, you can run the brazil-build release command. Depending on the
build system, the release target may contain more build tasks than the default
target (if they’re different) and detect different issues.
Note
The brazil-build command only builds the package that you’re currently working in
and it does not build all of the packages in your workspace. To build all of the
packages in your workspace, you can use the brazil-recursive-cmd command.
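Putting these targets together, a typical local build session might look like the
following sketch (the brazil-recursive-cmd usage is illustrative; check the Brazil
documentation for exact syntax):

    # Build the current package with the build system’s default target.
    $ brazil-build

    # Build the current package with the release target, which may run
    # additional tasks and surface different issues.
    $ brazil-build release

    # Build every package in the workspace (illustrative usage).
    $ brazil-recursive-cmd --allPackages brazil-build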
Beyond unit testing, you may want to verify that your code changes will not break
anything during the release process. To perform this type of testing, you use Rapid
Dev Environment (RDE). RDE is a personal staging environment that replicates a
complete Prod stack on your development environment, enabling you to verify your
changes through a simulated release process before entering the “official” release
stages. It includes a workflow execution engine that automates your build, test,
and deploy process (or any other scriptable action) locally.
Attention
Your code commits must never contain any live AWS credentials. For more
information, see the restrictions on pushing code to GitFarm.
Each package has its own CR requirements (in other words, CRUX rules) that are set
by the package owner. CRUX rules are used to ensure that certain requirements are
met before a CR can be merged. For example, teams can implement a rule to specify
thresholds for unit test coverage and block CR approvals if those thresholds are
not met. In this particular example, CRUX uses a tool called Coverlay to perform an
analysis and report both line and file level test coverage data.
After a CR has been approved by its reviewers and all CRUX rules have been
satisfied, the commits in the CR can be merged using the CRUX dashboard. For more
information about the CR process, see the CRUX code review lifecycle in the CRUX
User Guide.
Orchestrate
The Orchestrate phase is when you configure and enable tooling to automate your
continuous deployment process. The Builder Tools provided in this phase enable you
to Automate your build, test, and deploy stages.
You can create and manage a pipeline using the web UI and/or API provided by
Pipelines. However, many teams create and manage a pipeline using one of the
following infrastructure as code packages:
An LPT package. The package models your pipeline in source code using built-in or
custom pipeline templates. When the package is built using the lpt synthesize
command, the build system creates and/or updates the pipeline according to your
template (see the sketch after this list).
A TypeScript file in a CDK package. The file includes your pipeline configuration
written using a CDK pipeline construct. When the package is built, the build system
creates and/or updates the pipeline according to your declarations (for example,
stages and approval workflows) in your file.
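For example, with an LPT package, regenerating the pipeline after editing your
template might look like this sketch (illustrative; see the LPT documentation for
exact usage):

    # From within the LPT package in your workspace, build the package and
    # synthesize the pipeline (illustrative invocation).
    $ brazil-build
    $ lpt synthesize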
After creating and configuring a pipeline, your code changes can be built, tested,
and deployed according to your promotion configurations and approvals.
The following graphic shows an example build workflow starting from approval and
merging of a code review that was completed during the Develop phase.
To build your packages into a version set, you submit a build request to Brazil’s
Package Builder. Package Builder is the centralized system that processes build
requests and version set merges. Package Builder will build your requested packages
using the brazil-build release command on dedicated Brazil hosts. Additionally, any
packages that have a dependency (either direct or transitive) on the requested
package(s) will be added to the build request and built against the new version.
After a successful build request, a new version set revision is generated.
Package Builder can be invoked through multiple channels, including the Brazil CLI
brazil-packagebuilder command and its web UI (Build). In a continuous deployment
process, you will typically rely on Pipelines to automatically submit a build
request using the AutoBuild agent (which is invoked after a CR has been approved
and merged).
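For a manual build request, a CLI invocation might look like the following sketch.
The flags and names shown are hypothetical placeholders; consult the Package
Builder documentation for the real syntax.

    # Submit a build request from the CLI (flags and names below are
    # hypothetical placeholders, not documented syntax).
    $ brazil-packagebuilder build --versionSet MyTeam/live --package MyPackage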
By default, the build artifacts produced by Package Builder are not deployable
using Native AWS (NAWS) solutions. BATS is a packaging service that takes build
artifacts from Package Builder, transforms them into NAWS-compatible deployment
packages, and publishes the result to S3.
BATS uses fetchers, transformers, and publishers to create and publish the
deployment package. Collectively, this process is performed during the Packaging
stage in your pipeline. After Package Builder completes its process and a new
version set revision is produced, your pipeline will invoke BATS via a promotion.
Test
Note
The following sections describe the three primary test verticals that are supported
by Builder Tools. Your team may perform additional testing (for example, fuzz
testing, chaos testing) using different community tools (for example, Gremlin,
Weezer).
Integration testing
Integration testing verifies the points of integration between components and/or
systems work as expected. During integration testing, individual software modules
or parts of an application are combined and tested as one or more test cases.
Load testing
Load testing verifies your application’s response to increased traffic (for
example, transactions per second (TPS)). Load tests are used to determine your
application’s behavior under both normal and anticipated peak load conditions (for
example, we anticipate X TPS; can our application handle that load?).
Canary testing
Canary testing verifies that your deployed application is up and running by sending
it frequent requests. Canary tests act as an early warning system to detect when
there is a problem with your application. If a canary test fails, it notifies you
via an alarm, enabling you to identify the issue quickly, before your customers are
impacted.
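Conceptually, a canary is just a scheduled request against your application
endpoint plus an alarm on failure. A minimal sketch (the URL is a placeholder; real
canaries run on a schedule and emit metrics rather than printing):

    # Send a request to the application endpoint and fail loudly if it
    # does not return a successful HTTP status. The URL is a placeholder.
    $ curl --silent --fail https://my-service.example.com/ping > /dev/null \
        && echo "canary passed" \
        || echo "canary FAILED"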
Both integration and load tests are typically run as approval workflows in your
pipeline; this ensures that your code can be safely promoted to the next stage in
your continuous deployment process. Promotion of your code can also be gated using
CloudCover during the approval workflow. CloudCover evaluates statistics related to
runtime test coverage and provides real-time data for determining whether
integration tests are thoroughly covering the code base.
Canary tests should be deployed using their own pipeline (independent from your
application pipeline) and run on their own deployment target against your
application endpoints. For Native AWS applications, canary tests should also have
their own AWS resources and account. There are a few reasons why canary tests are
kept separate from your application resources:
If there are issues with a canary deployment, it does not block your application
pipeline (and vice versa).
For Native AWS testing, having different accounts helps avoid security and
availability problems (for example, account resource limits). Additionally, a
separate account better emulates customer traffic, because operations and calls
from the same account might be subject to different implicit rules (for example,
resource access). When you have multiple AWS accounts, you typically want to use a
dedicated pipeline for each account to manage deployments.
Depending on your team’s codebase, there are different Builder Tools that you can
use to execute your tests, including the following:
To configure and execute automated integration, load, and canary tests for Native
AWS applications, you use Hydra Test Platform (Hydra). Hydra is a serverless test
system that automatically configures your test run infrastructure, orchestrates
your test runs, and reports your test results.
To configure and execute load and canary tests for PROD/CORP deployment targets,
you use TPSGenerator. TPSGenerator generates synthetic traffic against a service
(or replays production traffic) to validate that the service can withstand peak
traffic and to ensure it degrades gradually at throughput higher than expected.
Deploy
The Deploy stage is the final stage in the release process, where your build
artifacts are deployed to your deployment target(s). The Builder Tools used in this
phase enable you to Deploy to Prod/Corp and Deploy to Native AWS.
The following graphic shows the high-level activities that are automated by
Pipelines to deploy to a PROD/CORP or Native AWS deployment target. No developer
activity is shown (all activity is driven by automation); the graphic is intended
to show what happens behind the scenes at a very high level.
Deploy to Prod/Corp
To manage deployments to the PROD/CORP fabrics and/or the PROD/CORP VPCs, you use
Apollo. Apollo is a deployment service that organizes software into environments.
Apollo environments are composed of one or more environment stages (for example,
Alpha, Beta, Prod) that represent the deployable portion of an environment,
including the version filter and package group(s) that get deployed, and the
host(s) and host class(es) that receive the deployment. To automate your
deployments, each environment stage usually maps to a stage in your pipeline.
When using Pipelines to orchestrate your continuous deployment process, you will
likely enable the AutoDeploy promotion agent to automatically invoke Apollo to
perform a minimal deployment.
Deploy to Native AWS
CloudFormation is a service that helps you model and set up your AWS resources. You
create a template that describes all the AWS resources that you want, and
CloudFormation takes care of provisioning and configuring those resources for you.
You don’t need to individually create and configure AWS resources and figure out
what’s dependent on what; CloudFormation handles that.
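Although Pipelines normally drives CloudFormation for you, the manual equivalent
can help make the model concrete. A minimal sketch using the public AWS CLI
(template.yaml and my-service-stack are placeholder names):

    # Create or update a stack from a local template.
    $ aws cloudformation deploy \
        --template-file template.yaml \
        --stack-name my-service-stack

    # Check the stack’s status after the deployment completes.
    $ aws cloudformation describe-stacks --stack-name my-service-stack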
When using Pipelines to orchestrate your continuous deployment process, you will
likely enable the AutoPromote promotion agent to automatically trigger a deployment
to your target(s). Before a deployment to Native AWS deployment targets can occur,
your code enters a Packaging stage in your pipeline, where the resources are
packaged by BATS.
Manage
The Manage phase applies throughout the continuous deployment process as your team
implements software maintenance and governance models. The Builder Tools used in
this phase enable you to Implement operational excellence, Maintain your software,
Track your artifacts, and Manage changes outside of CI/CD.
One way teams implement operational excellence procedures is by using Dogma. Dogma
is a service that you can use to apply operational excellence rules to your
pipeline. Rules are designed to flag deployment safety issues and report violations
as risks. When a risk is identified in your pipeline, Dogma may block your pipeline
from promoting any changes. Dogma provides a web UI that you can use to view your
pipeline risks and manage your rules, including requesting exemptions from rules
that should not have reported a risk.
To ensure teams keep their applications and infrastructure up-to-date, SAS uses
software campaigns. A campaign is an initiative to remove or update a software
version, either because that version has risks (security, availability) or because
it’s no longer supported by the software vendor (not getting recent patches).
Depending on the campaign, teams can address the identified issues easily and
efficiently by using Transmogrifier. Transmogrifier emulates hundreds or thousands
of software developers working for you to create software updates for other teams.
Campaign owners create write-once-run-many update tools and use Transmogrifier to
apply them again and again across the company.
Similar to Pipelines, MCM gives you the ability to model change processes as
workflows to help remove ambiguities and reveal opportunities for increased safety
and automation. The main difference is that MCM focuses on orchestrating changes
that do not fit the typical continuous deployment process (for example, scheduled
rack maintenance in a data center, database upgrades) by providing similar
mechanisms and patterns. Because of the similarities, some teams use MCM as a
bridge to implementing a fully automated CI/CD process with Pipelines.