Builder Tools and the Development Process

Development Phases

At Amazon, software is developed using continuous integration and continuous
deployment (CI/CD) methodologies. Continuous integration is the practice of
regularly merging your code changes into a shared, version-controlled repository.
Continuous deployment is the practice of releasing code to production through a
series of automated stages (e.g., build, test, deploy).

The standard development process at Amazon uses both of these methodologies as you
proceed through the phases of development, as depicted in this graphic:

[Image: The development process, showing each development phase.]


During each phase you’ll use internal development tools—known collectively as
Builder Tools—to perform various development activities, from setting up your
development environment to building, deploying, and managing your applications and
services. Many Builder Tools are comparable to external development tools, but they
provide unique functionality to support Amazon-specific development concepts.

In most cases, each Builder Tool is used within a particular development phase.
However, you might also use features provided by a single Builder Tool across
multiple phases—for example, the Brazil Build System (Brazil). As its name
suggests, Brazil is primarily a build system used during the Build & Package phase.
However, it provides functionality that you will also use during the Develop phase,
including creating a new workspace.

This section provides an overview of the different Builder Tools and when/how they
apply during the development process. The content is organized by development
phase, with details about the activities commonly performed within that phase and
the Builder Tools that you will use to perform each activity. See Tool
Recommendations for Web Services for a list of recommended tools that are organized
by development phase and compute platform.

Note
This is an overview only; we don’t include step-by-step instructions for each
activity. For that level of information, we provide links to relevant documentation
that provides more specific guidance.

Development phases and associated activities

Setup & Create
Develop
Orchestrate
Build & Package
Test
Deploy
Manage

The Setup & Create phase is the starting point for new development. The Builder
Tools used in this phase enable you to Set up a development environment and Create
applications and packages that you will take into the Develop phase.

Set up a development environment


A development environment is a machine that hosts the tools and resources you need
to write and edit your code. As a new builder, you'll receive a development laptop
for local development, and you also have the option to use a Cloud Desktop to
offload your compute-intensive builds to a more powerful, always-on host (one that
uses the same operating system and processor architecture as Package Builder).

A Cloud Desktop is a Linux instance running on EC2 that contains all of the tools
you need to develop software at Amazon; it is the common development environment
for many teams. While development may occur on your Cloud Desktop, you can interact
with its tools and resources and write your code from your laptop through an SSH
connection or an IDE plugin.

Setting up a development environment requires the use of Builder Toolbox to install
some Builder Tools. Builder Toolbox is a CLI that not only installs and updates
different Builder Tools, but also vends community tools that your team may use.

In addition to installing Builder Tools, you may want to install other tools,
including a file-syncing solution, a Cloud Desktop GUI, an IDE, and/or text editors.
You can use your preferred IDE and editor to access and edit packages in your
workspace (the most popular IDEs used at Amazon are the JetBrains family (for
example, IntelliJ IDEA, PyCharm) and Visual Studio Code). Some IDEs provide more
features and plugin support for Builder Tools. One option provided by Builder Tools
is Black Caiman, a JetBrains IDE plugin that enables you to perform common source
and build operations from within your IDE (without using a CLI).

To learn more about setting up a development environment, see the Developer
Environment Setup Guide.

Create applications and packages


The code that you write and edit resides in a package. A package is a repository
that contains code with similar and/or related functions that are intended to be
built and deployed together (e.g., a library, a software application, or release
infrastructure). Packages can vary in size, and your applications will often be
composed of multiple packages.

In addition to application packages, teams typically implement infrastructure as
code. Infrastructure as code is the creation and management of infrastructure (for
example, pipelines, deployment targets) using a declarative model. Like your
application code, your infrastructure code resides in packages and is developed
using the same processes and standards. When developing applications, you will
often have both "application packages" and an "infrastructure package" that are
collectively used to build and deploy your application to customers.

BuilderHub Create is used to create different types of packages depending on the
scope of your project.

PROD/CORP: You can create empty packages containing only a specification file,
based on your language preference (for example, Java, Python), or create packages
from built-in package templates. BuilderHub Create generates the selected package
type with the necessary package structure to begin development.

For PROD/CORP infrastructure as code packages, you can create LPT packages. Live
Pipeline Templates (LPT) is a system to model your service stack in source code. It
uses built-in pipeline templates to apply Amazon’s infrastructure best practices to
your service’s operations by default. You can also create your own pipeline
templates to model your team’s specific needs.

Native AWS: You can create complete example applications that include all of the
AWS resources and Builder Tools resources that you need to start your development,
including packages, workspaces, version sets, pipelines, and deployment targets.

For Native AWS infrastructure as code packages, BuilderHub Create uses the AWS
Cloud Development Kit (CDK). The CDK is a software development framework for
defining cloud infrastructure in code and provisioning it through AWS
CloudFormation. See Getting Started With CDK in the Native AWS Developer Guide.
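
To illustrate the model, here is a minimal CDK app sketch in Python (the CDK also
supports TypeScript and Java); the stack and bucket names are hypothetical, not
taken from this guide.

    #!/usr/bin/env python3
    # Minimal CDK app: defines one stack containing a single S3 bucket.
    # Stack and bucket names are illustrative only.
    from aws_cdk import App, Stack, aws_s3 as s3
    from constructs import Construct

    class ExampleServiceStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # CloudFormation provisions this bucket when the stack is deployed.
            s3.Bucket(self, "ArtifactBucket", versioned=True)

    app = App()
    ExampleServiceStack(app, "ExampleServiceStack")
    app.synth()  # emits a CloudFormation template into cdk.out/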

Develop
The Develop phase is when you start developing the code in your packages and
perform continuous integration. The Builder Tools used in this phase enable you to
Create a workspace, Clone your package, Write your code, Build your code locally,
Test your code locally, and Submit a Code Review (CR).

Create a workspace
To develop packages on your development environment, you create a workspace using
the Brazil CLI. A workspace is a personal, predefined directory that includes a
default set of folders and files that are created and populated based on the
actions you take within the workspace.

Workspaces can be created for specific development tasks and deleted at any time (in
other words, they are not repositories). Typically, you will have multiple
workspaces for the different projects that you're working on, or even different
workspaces for each feature you are working on for a single application (provided
they are independent of each other).

For more information about how to create and manage your workspaces, see Working
with workspaces in the Brazil CLI Guide.

Clone your package


After creating a workspace, you're ready to configure it by sourcing the package(s)
that you want to develop. Every package has its own Git repository that's stored in
and accessible through GitFarm. GitFarm is Amazon's Git hosting service, and it
supports any Git client that you install when you set up your development
environment.

To search and view Git repositories in GitFarm, you use Code Browser. Code Browser
is a web UI that you can use to search for packages and view detailed information
about your repository, including its permissions, branches, and commits (in other
words, it’s GitFarm’s web UI and includes functionality that’s similar to external
tools like GitHub).

To begin working with a package, you clone it into a workspace using the Brazil
CLI. After a package is added to your workspace, you can use Git to perform common
commands, including branching, checking the status of a repository, and pulling
the latest commits into your workspace.

Another way packages can be added to your workspace is by using the
BrazilThirdPartyTool. The BrazilThirdPartyTool is the approved way to import
third-party software packages (for example, open source packages) into Amazon's
code base. Using this tool, you can keep your dependencies on third-party software
up-to-date. It operates in your workspace, allowing you to test and build against
third-party code locally before you ever decide to push it out into Amazon at
large.

Write your code


After cloning your package(s) into your workspace, you’re ready to write your code!
The source code in your packages can be written in your preferred programming
language. Some of the more common languages used are Java, JavaScript, and Python.
Depending on the type of application you’re developing, Builder Tools provides two
service frameworks that you can use:
Coral is a service framework that allows clients and servers written in different
programming languages to reliably talk to each other while evolving compatibly. It
powers everything from public AWS services to the internal services that enable
Alexa and the retail website. Coral is typically used for RPC or REST services.

Smithy is an interface definition language and a set of tools that allows
developers to build clients and servers in multiple languages. Smithy models define
a service as a collection of resources, operations, and shapes. A Smithy model
enables API providers to generate clients and servers in various programming
languages, API documentation, test automation, and example code.

Build your code locally


After cloning your package and writing your code, you'll want to verify that it
builds successfully by running the brazil-build command in your workspace package
directories.

The brazil-build command builds your package using the build system(s) defined in
the package specification file. Brazil uses different build systems for each
programming language. One example of a build system used for mobile development is
Electric Company. Electric Company provides a Brazil-integrated build system for
iOS and Android development, as well as a web UI that you can use to sign native
artifacts.

When the brazil-build command runs, your local build systems will automatically
fetch the necessary components of the packages you depend on and place them in the
Brazil package cache. It will then produce local runtime environments for each
build system and store them in your workspace’s env folder. When the build
finishes, the command outputs the build status (pass/fail) and produces build
artifacts that are stored in your workspace’s build folder.

There are different brazil-build command options that you can use to verify that
your changes build successfully when using different build targets. For example, to
verify that your package builds successfully using the build system’s default
target, you can simply run brazil-build. To see if your package will build using a
release target, you can run the brazil-build release command. Depending on the
build system, the release target may contain more build tasks than the default
target (if they’re different) and detect different issues.

Note
The brazil-build command only builds the package that you’re currently working in
and it does not build all of the packages in your workspace. To build all of the
packages in your workspace, you can use the brazil-recursive-cmd command.

Test your code locally


When you build your code, you should also execute unit tests. A unit test exercises
the smallest testable part of an application, isolating an individual program
component (for example, a function, procedure, or class) to determine that it works
and/or is written correctly. Unit tests are typically located within your package
and are executed when you run the brazil-build release or brazil-build unit-tests
command. Unit test results are reported and gated using the Coverlay tool.
Likewise, consider using CloudCover if you want to gate a release based on runtime
test coverage data that is collected during integration testing.
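
As a simple illustration, a unit test like the following isolates one function and
asserts on its behavior; the function and test names are hypothetical, and in a
Brazil package the tests would be executed by the build system rather than invoked
directly.

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_applies_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()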

Beyond unit testing, you may want to verify that your code changes will not break
anything during the release process. To perform this type of testing, you use Rapid
Dev Environment (RDE). RDE is a personal staging environment that replicates a
complete Prod stack on your development environment, enabling you to verify your
changes through a simulated release process before entering the "official" release
stages. It includes a workflow execution engine that automates your build, test,
and deploy process locally, as well as any other scriptable action.

Submit a Code Review (CR)


After successfully building and testing your code locally, you're ready to submit
your commit(s) for a peer review before merging them into the package repository.

Attention
Your code commits must never contain any live AWS credentials. See more in
restrictions on pushing code to GitFarm.

This review process is referred to as a code review (CR) and is managed by a
Builder Tool called CRUX. CRUX provides a CLI and a web UI dashboard, integrated
into Code Browser, that you use to create, publish, comment on, update, approve,
and merge CRs.

Each package has its own CR requirements (in other words, CRUX rules) that are set
by the package owner. CRUX rules are used to ensure that certain requirements are
met before a CR can be merged. For example, teams can implement a rule to specify
thresholds for unit test coverage and block CR approvals if those thresholds are
not met. In this particular example, CRUX uses a tool called Coverlay to perform an
analysis and report both line and file level test coverage data.

After a CR has been approved by its reviewers and all CRUX rules have been
satisfied, the commits in the CR can be merged using the CRUX dashboard. For more
information about the CR process, see the CRUX code review lifecycle in the CRUX
User Guide.

Orchestrate
The Orchestrate phase is when you configure and enable tooling to automate your
continuous deployment process. The Builder Tools provided in this phase enable you
to Automate your build, test, and deploy stages.

Automate your build, test, and deploy stages


After merging your code changes, you’re ready to build, test, and deploy those
changes to your customers. Most of the time, teams automate this process by using
Pipelines. Pipelines is a continuous deployment service that you can use to model,
visualize, manage, and automate your continuous deployment process by using a
pipeline to release your software.

You can create and manage a pipeline using the web UI and/or API provided by
Pipelines. However, many teams create and manage a pipeline using one of the
following infrastructure as code packages:

An LPT package template that includes your pipeline configuration written in a
declarative model. When the package is built using the lpt synthesize command, the
build system creates and/or updates the pipeline according to the declarations (for
example, stages and approval workflows) in your LPT template.

A TypeScript file in a CDK package. The file includes your pipeline configuration
written using a CDK pipeline construct. When the package is built and the CDK app
is synthesized, the build system creates and/or updates the pipeline according to
the declarations (for example, stages and approval workflows) in your file.
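
The internal Pipelines CDK construct is not shown here; as an analogy for the
pipeline-as-code idea, the following is a minimal sketch using the public
aws_cdk.pipelines module in Python (the repository, connection ARN, and stage names
are made up).

    #!/usr/bin/env python3
    # Pipeline-as-code sketch using the public aws_cdk.pipelines module.
    # Illustrative analogy only; names and ARNs are hypothetical.
    from aws_cdk import App, Stack, Stage, aws_s3 as s3, pipelines
    from constructs import Construct

    class ExampleStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(self, "ExampleBucket")  # stand-in application resource

    class BetaStage(Stage):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            ExampleStack(self, "ExampleService")

    class PipelineStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            pipeline = pipelines.CodePipeline(
                self, "Pipeline",
                synth=pipelines.ShellStep(
                    "Synth",
                    input=pipelines.CodePipelineSource.connection(
                        "example-org/example-repo", "main",
                        connection_arn="arn:aws:codestar-connections:us-east-1:111111111111:connection/example",
                    ),
                    commands=["pip install -r requirements.txt", "npx cdk synth"],
                ),
            )
            # Each added stage becomes a deployment stage in the pipeline.
            pipeline.add_stage(BetaStage(self, "Beta"))

    app = App()
    PipelineStack(app, "PipelineStack")
    app.synth()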

After creating and configuring a pipeline, your code changes can be built, tested,
and deployed according to your promotion configurations and approvals.

Build & Package


The Build & Package phase is when you officially build your packages for release
after your changes have been merged. The Builder Tools used in this phase enable
you to Submit a build request and Prepare Native AWS packages.

The following graphic shows an example build workflow starting from approval and
merging of a code review that was completed during the Develop phase.

Submit a build request


Applications are typically developed using multiple packages with many dependencies
that must be collectively built in order for the application to function. To
facilitate this collective build, you use version sets. A version set represents a
collection of your application and dependency package major versions at specific
commits that have demonstrated they can collectively build successfully.

To build your packages into a version set, you submit a build request to Brazil’s
Package Builder. Package Builder is the centralized system that processes build
requests and version set merges. Package Builder will build your requested packages
using the brazil-build release command on dedicated Brazil hosts. Additionally, any
packages that have a dependency (either direct or transitive) on the requested
package(s) will be added to the build request and built against the new version.
After a successful build request, a new version set revision is generated.

In addition to executing an official build request, Package Builder supports
dry-run builds. A dry-run build is a "test" build that does not block other build
requests or release a new version of your package after the build completes. Dry
runs enable you to identify any issues that may result in a failed build before you
submit a build request. Packages often have a CRUX rule that automatically triggers
a dry-run build when a code review (CR) is published.

Package Builder can be invoked through multiple channels, including the Brazil CLI
brazil-packagebuilder command and its web UI (Build). In a continuous deployment
process, you will typically rely on Pipelines to automatically submit a build
request using the AutoBuild agent (which gets invoked after a CR has been approved
and merged).

Prepare Native AWS packages


If you’re deploying to a Native AWS deployment target, your packages must go
through a packaging process before they can be deployed.

By default, the build artifacts produced by Package Builder are not deployable
using Native AWS (NAWS) solutions. BATS is a packaging service that takes build
artifacts from Package Builder, transforms them into NAWS-compatible deployment
packages, and publishes the result to S3.

BATS uses fetchers, transformers, and publishers to create and publish the
deployment package. Collectively, this process is performed during the Packaging
stage in your pipeline. After Package Builder completes its process and a new
version set revision is produced, your pipeline will invoke BATS via a promotion.

Test

Verify your changes and protect your pipeline


To ensure your built code performs as expected, you perform tests using different
test verticals. In Builder Tools, a test vertical is a specific type of test you
perform after unit testing. The following are three test verticals that you will
likely use to test your applications:

Note
The following sections represent the three primary test verticals that are
supported by Builder Tools. Your team may perform additional testing (for example,
fuzz testing, chaos testing) using different community tools (for example, Gremlin,
Weezer).

Integration testing
Integration testing verifies the points of integration between components and/or
systems work as expected. During integration testing, individual software modules
or parts of an application are combined and tested as one or more test cases.

Load testing
Load testing verifies your application's response to increased traffic (for
example, transactions per second (TPS)). Load tests are used to determine your
application's behavior under both normal and anticipated peak load conditions (for
example, we anticipate X TPS; can our application handle that load?).

Canary testing
Canary testing verifies that your deployed application is up and running by sending
it frequent requests. Canary tests act as an early warning system to detect when
there is a problem with your application. If a canary test fails, it notifies you
via an alarm, enabling you to identify the issue quickly, before your customers are
impacted.
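
Conceptually, a canary is just a small probe that runs on a schedule and alarms on
failure. Here is a minimal sketch in Python; the endpoint URL and alarm hook are
hypothetical, and real canaries are configured through the test tooling described
later in this section rather than run as standalone scripts.

    import urllib.error
    import urllib.request

    ENDPOINT = "https://example.internal.example.com/ping"  # hypothetical health endpoint

    def run_canary(timeout_seconds: float = 5.0) -> bool:
        """Send one probe request; return True if the service looks healthy."""
        try:
            with urllib.request.urlopen(ENDPOINT, timeout=timeout_seconds) as response:
                return response.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False

    def emit_alarm(message: str) -> None:
        # Placeholder: a real canary would publish a metric or page the on-call.
        print(f"ALARM: {message}")

    if __name__ == "__main__":
        if not run_canary():
            emit_alarm(f"canary request to {ENDPOINT} failed")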

Both integration and load tests are typically run as approval workflows in your
pipeline; this ensures that your code can be safely promoted to the next stage in
your continuous deployment process. Promotion of your code can also be gated using
CloudCover during the approval workflow. CloudCover evaluates statistics related to
runtime test coverage and provides real-time data for determining whether
integration tests are thoroughly covering the code base.

Canary tests should be deployed using their own pipeline (independent from your
application pipeline) and are run on their respective deployment target against
your application endpoints. For Native AWS applications, canary tests should also
have their own AWS resources and account. There are a few reasons why canary tests
are kept separate from your application resources:

If there are issues with a canary deployment, it does not block your application
pipeline (and vice versa).

For Native AWS testing, having different accounts helps to avoid security and
availability problems (for example, account resource limits). Additionally, it’s a
better way to emulate customer traffic as operations and calls from the same
account might have different implicit rules (for example, resource access). When
you have multiple AWS accounts, you typically want to use a dedicated pipeline for
each account to manage deployments.

Depending on your team’s codebase, there are three Builder Tools that you will use
to execute your tests:

To configure and execute automated integration, load, and canary tests for Native
AWS applications, you use Hydra Test Platform (Hydra). Hydra is a serverless test
system that automatically configures your test run infrastructure, orchestrates
your test runs, and reports your test results.

To configure and execute automated integration tests for PROD/CORP deployment
targets, you use Test on Demand (ToD). Similar to Hydra, ToD can automatically
execute your tests, as well as execute tests on a schedule, provide notifications,
and provide run-over-run test results analysis.

To configure and execute load and canary tests for PROD/CORP deployment targets,
you use TPSGenerator. TPSGenerator generates synthetic traffic against a service
(or replays production traffic) to validate that it can withstand peak traffic and
to ensure that it degrades gradually at higher-than-expected throughput.

Deploy
The Deploy phase is the final phase in the release process and is where your build
artifacts are deployed to your deployment target(s). The Builder Tools used in this
phase enable you to Deploy to Prod/Corp and Deploy to Native AWS.

The following graphic shows the high-level activities that are automated by
Pipelines to deploy to PROD/CORP or Native AWS deployment targets. No developer
activity is shown (the activity is driven by automation); the graphic is intended
to show what happens "behind the scenes" at a very high level.

Deploy to Prod/Corp
To manage deployments to the PROD/CORP fabrics and/or the PROD/CORP VPCs, you use
Apollo. Apollo is a deployment service that organizes software into environments.
Apollo environments are composed of one or more environment stages (for example,
Alpha, Beta, Prod) that represent the deployable portion of an environment,
including the version filter and package group(s) that get deployed, and the
host(s) and host class(es) that receive the deployment. To automate your
deployments, each environment stage usually maps to a stage in your pipeline.

When using Pipelines to orchestrate your continuous deployment process, you will
likely enable the AutoDeploy promotion agent to automatically invoke Apollo to
perform a minimal deployment.

To maintain your Apollo environments, you use Quilt. Quilt is a continuous
deployment system that helps keep your Apollo hosts secure and up-to-date with
respect to operating system patches, while maintaining your application's health
and reducing operational burden.

Deploy to Native AWS


To manage and facilitate your Native AWS deployments, you can use CloudFormation
and CodeDeploy.

CloudFormation is a service that helps you model and set up your AWS resources. You
create a template that describes all the AWS resources that you want and
CloudFormation takes care of provisioning and configuring those resources for you.
You don’t need to individually create and configure AWS resources and figure out
what’s dependent on what; CloudFormation handles that.

Similar to Apollo, CodeDeploy is a deployment service that automates Native AWS
application deployments. Depending on your application, there are different compute
platforms that you can target.

When using Pipelines to orchestrate your continuous deployment process, you will
likely enable the AutoPromote promotion agent to automatically trigger a deployment
to your target(s). Before a deployment can occur to Native AWS deployment targets,
your code will enter a Packaging stage in your pipeline where the resources are
packaged by BATS.
Manage
The Manage phase applies throughout the continuous deployment process as your team
implements software maintenance and governance models. The Builder Tools used in
this phase enable you to Implement operational excellence, Maintain your software,
Track your artifacts, and Manage changes outside of CI/CD.

Implement operational excellence


To manage and maintain applications, many teams follow an operational excellence
model. Operational excellence is the implementation of process and tools to govern
and optimize your development operations. Following operational excellence models
helps you develop software correctly and deliver experiences that consistently
delight your customers.

One way teams implement operational excellence procedures is by using Dogma. Dogma
is a service that you can use to apply operational excellence rules to your
pipeline. Your rules are designed to flag deployment safety issues and report
violations as risks. When a risk is identified in your pipeline, Dogma may block
your pipeline from promoting any changes. Dogma provides a web UI that you can use
to view your pipeline risks and manage your rules, including requesting exemptions
from rules that should not have reported a risk.

Maintain your software


To help keep software up-to-date, teams use Software Assurance Services (SAS). SAS
provides tooling that enables you to update your application software, reduce
risks, and stay current with the latest security requirements. It includes a
Software Upgrade Service (SUS) that removes unwanted software versions. For
example, SUS helps identify all entities (for example, Apollo environments,
CodeDeploy deployment groups, version sets) that contain a specific artifact (for
example, a package major version or commit), and reports any removal actions you
must take.

To ensure teams keep their applications and infrastructure up-to-date, SAS uses
software campaigns. A campaign is an initiative to remove or update a software
version, either because that version has risks (security, availability) or because
it’s no longer supported by the software vendor (not getting recent patches).
Depending on the campaign, teams can easily and efficiently address the issues
identified in campaigns by using Transmogrifier. Transmogrifier emulates hundreds
or thousands of software developers working for you to create software updates for
other teams. Campaign owners create write-once-run-many update tools, and use
Transmogrifier to apply them again and again across the company.

Track your artifacts


As discussed in this guide, building applications requires the use of multiple
Builder Tools that produce different artifacts. To help keep track of these
artifacts, you can use Gated Garden. Gated Garden is an artifact provenance
tracking service that's integrated with GitFarm, Brazil, and Apollo. It's a graph
datastore that stores artifacts, relationships between these artifacts, and
artifact metadata. For example, with Gated Garden you can see which Apollo
environment contains a specific commit, or which packages went into building a
package version.

Manage changes outside of CI/CD


Not all projects and operations follow the standard continuous integration and
continuous deployment (CI/CD) process, so the use of Pipelines to orchestrate those
changes does not apply. For these types of projects, you can use Modeled Change
Management (MCM). MCM is a tool that enables you to define, review, schedule, and
execute manual and scripted changes to your customer-impacting systems.

Similar to Pipelines, MCM gives you the ability to model change processes as
workflows to help remove ambiguities and reveal opportunities for increased safety
and automation. The main difference is that MCM focuses on orchestrating changes
that do not fit the typical continuous deployment process (for example, scheduled
rack maintenance in a data center, database upgrades) by providing similar
mechanisms and patterns. Because of the similarities, some teams use MCM as a
bridge to implementing a fully automated CI/CD process with Pipelines.
