AWS CodePipeline
User Guide
API Version 2015-07-09
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What Is CodePipeline? ........................................................................................................................ 1
Video Introduction to AWS CodePipeline ....................................................................................... 1
What Can I Do with CodePipeline? .............................................................................................. 1
A Quick Look at CodePipeline ...................................................................................................... 2
A Quick Look at Input and Output Artifacts .......................................................................... 2
How Do I Get Started with CodePipeline? ..................................................................................... 3
We Want to Hear from You ......................................................................................................... 3
Concepts ................................................................................................................................... 4
Continuous Delivery and Integration ..................................................................................... 4
How CodePipeline Works .................................................................................................... 4
Getting Started .................................................................................................................................. 9
Step 1: Create an AWS Account .................................................................................................. 10
Step 2: Create or Use an IAM User ............................................................................................. 10
Step 3: Use an IAM Managed Policy to Assign CodePipeline Permissions to the IAM User .................... 10
Step 4: Install the AWS CLI ........................................................................................................ 11
Step 5: Open the Console for CodePipeline ................................................................................. 11
Next Steps ............................................................................................................................... 11
Product and Service Integrations ........................................................................................................ 12
Integrations with CodePipeline Action Types ................................................................................ 12
Source Action Integrations ................................................................................................. 12
Build Action Integrations ................................................................................................... 15
Test Action Integrations .................................................................................................... 16
Deploy Action Integrations ................................................................................................ 17
Approval Action Integrations ............................................................................................. 20
Invoke Action Integrations ................................................................................................. 20
General Integrations with CodePipeline ....................................................................................... 20
Examples from the Community .................................................................................................. 22
Blog Posts ....................................................................................................................... 22
Videos ............................................................................................................................. 24
Tutorials .......................................................................................................................................... 25
Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) .................................................................. 26
Create an Amazon S3 Bucket ............................................................................................. 27
Create Windows Server Amazon EC2 Instances and Install the CodeDeploy Agent ...................... 28
Create an Application in CodeDeploy .................................................................................. 29
Create Your First Pipeline .................................................................................................. 30
Add Another Stage ........................................................................................................... 34
Disable and Enable Transitions Between Stages .................................................................... 41
Clean Up Resources .......................................................................................................... 41
Tutorial: Create a Simple Pipeline (CodeCommit Repository) .......................................................... 42
Create a CodeCommit Repository ....................................................................................... 43
Download, Commit and Push Your Code ............................................................................. 43
Create an Amazon EC2 Linux Instance and Install the CodeDeploy Agent ................................. 44
Create an Application in CodeDeploy .................................................................................. 46
Create Your First Pipeline .................................................................................................. 47
Update Code in Your CodeCommit Repository ...................................................................... 51
Optional Stage Management Tasks ..................................................................................... 53
Clean Up Resources .......................................................................................................... 53
Tutorial: Create a Four-Stage Pipeline ......................................................................................... 54
Set Up Prerequisites ......................................................................................................... 54
Create a Pipeline .............................................................................................................. 57
Add More Stages .............................................................................................................. 58
Clean Up Resources .......................................................................................................... 60
Tutorial: Set Up a CloudWatch Events Rule to Receive Email Notifications for Pipeline State Changes .... 61
Set Up an Email Notification Using Amazon SNS .................................................................. 61
What Is CodePipeline?

Topics
• Video Introduction to AWS CodePipeline (p. 1)
• What Can I Do with CodePipeline? (p. 1)
• A Quick Look at CodePipeline (p. 2)
• How Do I Get Started with CodePipeline? (p. 3)
• We Want to Hear from You (p. 3)
• CodePipeline Concepts (p. 4)
What Can I Do with CodePipeline?

• Automate your release processes: CodePipeline fully automates your release process from end to end,
starting from your source repository through build, test, and deployment. You can prevent changes
from moving through a pipeline by including a manual approval action in any stage except a Source
stage. You can release when you want, in the way you want, on the systems of your choice, across one
instance or multiple instances.
• Establish a consistent release process: Define a consistent set of steps for every code change.
CodePipeline runs each stage of your release according to your criteria.
• Speed up delivery while improving quality: You can automate your release process to allow your
developers to test and release code incrementally and speed up the release of new features to your
customers.
• Use your favorite tools: You can incorporate your existing source, build, and deployment tools into
your pipeline. For a full list of AWS services and third-party tools currently supported by CodePipeline,
see Product and Service Integrations with CodePipeline (p. 12).
• View progress at a glance: You can review real-time status of your pipelines, check the details of any
alerts, retry failed actions, view details about the source revisions used in the latest pipeline execution
in each stage, and manually rerun any pipeline.
• View pipeline history details: You can view details about executions of a pipeline, including start and
end times, run duration, and execution IDs.
A Quick Look at CodePipeline

In this example, when developers commit changes to a source repository, CodePipeline automatically
detects the changes. Those changes are built, and if any tests are configured, those tests are run. After
the tests are complete, the built code is deployed to staging servers for testing. From the staging server,
CodePipeline runs additional tests, such as integration or load tests. Upon the successful completion of
those tests, and after a manual approval action that was added to the pipeline is approved, CodePipeline
deploys the tested and approved code to production instances.
CodePipeline can deploy applications to Amazon EC2 instances by using CodeDeploy, AWS Elastic
Beanstalk, or AWS OpsWorks Stacks. CodePipeline can also deploy container-based applications
to services by using Amazon ECS. Developers can also use the integration points provided with
CodePipeline to plug in other tools or services, including build services, test providers, or other
deployment targets or systems.
A Quick Look at Input and Output Artifacts

Stages use input and output artifacts that are stored in the artifact store for your pipeline. An artifact
store is an Amazon S3 bucket in the same AWS Region as the pipeline; it stores items for all pipelines in
that Region associated with your account. Every time you use the console to create another pipeline in
that Region, CodePipeline creates a folder for that pipeline in the bucket. It uses that folder to store
artifacts for your pipeline as the automated release process runs. When you create or edit a pipeline,
you must have an artifact bucket in the pipeline Region, and you must have one artifact bucket per
Region where you are executing an action.
CodePipeline zips and transfers the files for input or output artifacts as appropriate for the action type
in the stage. For example, at the start of a build action, CodePipeline retrieves the input artifact (any files
to be built) and provides the artifact to the build action. After the action is complete, CodePipeline takes
the output artifact (the built application) and saves it to the output artifact bucket for use in the next
stage.
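You can see where the artifact store appears in a pipeline's definition by retrieving the pipeline's JSON
structure with the AWS CLI. The following is a minimal sketch; the pipeline name MyFirstPipeline and
the bucket name are placeholder values, and only the relevant fragment of the output is shown:

aws codepipeline get-pipeline --name MyFirstPipeline

{
    "pipeline": {
        "name": "MyFirstPipeline",
        "artifactStore": {
            "type": "S3",
            "location": "codepipeline-us-east-2-1234567EXAMPLE"
        },
        ...
    }
}

A pipeline with cross-region actions uses an artifactStores mapping (one entry per Region) in place of
the single artifactStore field.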
When you use the Create Pipeline wizard to configure or choose stages:
1. CodePipeline triggers your pipeline to run when there is a commit to the source repository, providing
the output artifact from the Source stage.
2. The output artifact from the previous step is ingested as an input artifact to the Build stage. An
output artifact from the Build stage can be an updated application or an updated Docker image built
to a container.
3. The output artifact from the previous step is ingested as an input artifact to the Deploy stage, such as
staging or production environments in the AWS Cloud. You can deploy applications to a deployment
fleet, or you can deploy container-based applications to tasks running in Amazon ECS clusters.
The following diagram shows a high-level artifact workflow between stages in CodePipeline.
How Do I Get Started with CodePipeline?

1. Learn how CodePipeline works by reading the CodePipeline Concepts (p. 4) section.
2. Prepare to use CodePipeline by following the steps in Getting Started with CodePipeline (p. 9).
3. Experiment with CodePipeline by following the steps in the CodePipeline Tutorials (p. 25) tutorials.
4. Use CodePipeline for your new or existing projects by following the steps in Create a Pipeline in
CodePipeline (p. 187).
CodePipeline Concepts
You will find modeling and configuring your automated release process easier if you understand
the concepts and terms used in AWS CodePipeline and some of the underlying concepts of release
automation. Here are some concepts to know about as you use CodePipeline.
Topics
• Continuous Delivery and Integration with CodePipeline (p. 4)
• How CodePipeline Works (p. 4)
Continuous Delivery and Integration with CodePipeline

Continuous delivery is a software development methodology where the release process is automated.
Every software change is automatically built, tested, and deployed to production. Before the final push
to production, a person, an automated test, or a business rule decides when the final push should occur.
Although every successful software change can be immediately released to production with continuous
delivery, not all changes need to be released right away.
Continuous integration is a software development practice where members of a team use a version
control system and frequently integrate their work to the same location, such as a master branch. Each
change is built and verified to detect integration errors as quickly as possible. Continuous integration
is focused on automatically building and testing code, as compared to continuous delivery, which
automates the entire software release process up to production.
For more information, see Practicing Continuous Integration and Continuous Delivery on AWS:
Accelerating Software Delivery with DevOps.
How CodePipeline Works

The following diagram and accompanying descriptions introduce you to terms unique to CodePipeline
and how these concepts relate to each other:
• You can use the CodePipeline console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or
any combination of these to create and manage your pipelines.
When you use the console to create your first pipeline, CodePipeline creates an Amazon S3 bucket
in the same region as the pipeline to store items for all pipelines in that region associated with your
account. Every time you use the console to create another pipeline in that region, CodePipeline creates
a folder for that pipeline in the bucket. It uses that folder to store artifacts for your pipeline as the
automated release process runs. This bucket is named codepipeline-region-1234567EXAMPLE, where
region is the AWS region in which you created the pipeline, and 1234567EXAMPLE is a ten-digit
random number that ensures the bucket name is unique.
Note
CodePipeline truncates artifact names, and this can cause some bucket names to appear
similar. Even though the artifact name appears to be truncated, CodePipeline maps to the
artifact bucket in a way that is not affected by artifacts with truncated names. The pipeline
can function normally. This is not an issue with the folder or artifacts. There is a 100-character
limit to pipeline names. Although the artifact folder name might appear to be shortened, it is
still unique for your pipeline.
When you create or edit a pipeline, you must have an artifact bucket in the pipeline Region and
then you must have one artifact bucket per Region where you plan to execute an action. If you use
the console to create a pipeline or cross-region actions, default artifact buckets are configured by
CodePipeline in the Regions where you have actions.
If you use the AWS CLI to create a pipeline, you can store the artifacts for that pipeline in any Amazon
S3 bucket as long as that bucket is in the same AWS Region as the pipeline. You might do this if you
are concerned about exceeding the limits of Amazon S3 buckets allowed for your account. If you
use the AWS CLI to create or edit a pipeline, and you add a cross-region action (an action with an
AWS provider in a separate Region from your pipeline), you must provide an artifact bucket for each
additional region where you plan to execute an action.
• A revision is a change made to a source that is configured in a source action for CodePipeline, such
as a pushed commit to a GitHub repository or a CodeCommit repository, or an update to a file in a
versioned Amazon S3 bucket. Each revision is run separately through the pipeline. Multiple revisions
can be processed in the same pipeline, but each stage can process only one revision at a time.
Revisions are run through the pipeline as soon as a change is made in the location specified in the
source stage of the pipeline.
Note
If a pipeline contains multiple source actions, all of them run again, even if a change is
detected for one source action only.
• CodePipeline breaks up your workflow into a series of stages. For example, there might be a build
stage, where code is built and tests are run. There are also deployment stages, where code updates
are deployed to runtime environments. You can configure multiple parallel deployments to different
environments in the same deployment stage. You can label each stage in the release process for better
tracking, control, and reporting (for example "Source," "Build," and "Staging").
Each stage in a pipeline has a unique name, and contains a sequence of actions as part of its workflow.
A stage can process only one revision at a time. A revision must run through a stage before the next
revision can run through it. All actions configured for a stage must be completed successfully before
the stage is considered complete. After a stage is complete, the pipeline transitions the revision and
its artifacts created by the actions in that stage to the next stage in the pipeline. You can manually
disable and enable these transitions. For more information about stage requirements and structure,
see Pipeline and Stage Structure Requirements in CodePipeline (p. 394).
• Every stage contains at least one action, which is some kind of task performed on the artifact. Each
type of action has a valid set of providers. Valid action types in CodePipeline are shown in Valid Action
Types and Providers in CodePipeline (p. 393). Pipeline actions occur in a specified order, in sequence
or in parallel, as determined in the configuration of the stage. For example, a deployment stage might
contain a deploy action, which deploys code to one or more staging servers. You can configure a stage
with a single action to start, and then add actions to that stage if needed. For more information, see
Edit a Pipeline in CodePipeline (p. 196) and Action Structure Requirements in CodePipeline (p. 396).
After a revision starts running through a pipeline, CodePipeline copies files or changes that will be
worked on by the actions and stages in the pipeline to the Amazon S3 bucket. These objects are
referred to as artifacts, and might be the source for an action (input artifacts) or the output of an
action (output artifacts). An artifact can be worked on by more than one action.
• Every action has a type. Depending on the type, the action might have one or both of the following:
• An input artifact, which is the artifact it consumes or works on over the course of the action run.
• An output artifact, which is the output of the action.
Every output artifact in the pipeline must have a unique name. Every input artifact for an action must
match the output artifact of an action earlier in the pipeline, whether that action is immediately
before the action in a stage or runs in a stage several stages earlier. The following illustration
demonstrates how input artifacts and output artifacts are produced and consumed in a pipeline:
• A transition is the act of a revision in a pipeline continuing from one stage to the next in a workflow.
In the CodePipeline console, transition arrows connect stages together to show the order in which the
stages happen. When a stage is complete, by default the revision will transition to the next stage in
the pipeline. You can disable a transition from one stage to the next. When you do, your pipeline will
run all actions in the stages before that transition, but will not run any stages or actions after that
stage until you enable that transition. This is a simple way to prevent changes from running through
the entire pipeline. After you enable the transition, the most recent revision that ran successfully
through the previous stages will be run through the stages after that transition. If all transitions are
enabled, the pipeline runs continuously. Every revision is deployed as part of a successful run through
the pipeline (continuous deployment).
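You can disable and enable transitions from the console or the AWS CLI. The following is a sketch of
the CLI calls, with placeholder pipeline and stage names:

aws codepipeline disable-stage-transition --pipeline-name MyFirstPipeline --stage-name Staging --transition-type Inbound --reason "Pausing deployments during release review"
aws codepipeline enable-stage-transition --pipeline-name MyFirstPipeline --stage-name Staging --transition-type Inbound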
• Because only one revision can run through a stage at a time, CodePipeline batches any revisions that
have completed the previous stage until the next stage is available. If a more recent revision completes
running through the stage, the batched revision is replaced by the most current revision.
• An approval action prevents a pipeline from transitioning to the next action until permission is granted
(for example, by receiving manual approval from an authorized IAM user). You might use an approval
action when you want the pipeline to continue only after a successful code review, for example, or
you want to control the time at which a pipeline transitions to a final Production stage. In this case,
you can add a manual approval action to a stage just before Production, and approve it yourself when
you're ready to release changes to the public.
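When an approval action is pending, an authorized user can respond from the console or the AWS CLI.
The following is a sketch with placeholder names; the approval token comes from the
get-pipeline-state output for the pending action:

aws codepipeline put-approval-result --pipeline-name MyFirstPipeline --stage-name Production --action-name ManualApproval --result "summary=Approved after code review,status=Approved" --token token-value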
• A failure is an action in a stage that is not completed successfully. If one action fails in a stage, the
revision does not transition to the next action in the stage or the next stage in the pipeline. If a failure
occurs, no more transitions occur in the pipeline for that revision. CodePipeline pauses the pipeline
until one of the following occurs:
• You manually retry the stage that contains the failed actions.
• You start the pipeline again for that revision.
• Another revision is made in a source stage action.
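For example, to retry only the failed actions in a stage from the AWS CLI (the names and execution ID
here are placeholders):

aws codepipeline retry-stage-execution --pipeline-name MyFirstPipeline --stage-name Staging --pipeline-execution-id execution-id --retry-mode FAILED_ACTIONS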
• A pipeline starts automatically when a change is made in the source location (as defined in a source
action in a pipeline), or when you manually start the pipeline. You can also set up a rule in Amazon
CloudWatch to automatically start a pipeline when events you specify occur. After a pipeline starts,
the revision runs through every stage and action in the pipeline. You can view details of the last run of
each action in a pipeline on the pipeline view page.
Note
If you create or edit a pipeline in the console that has an AWS CodeCommit source repository,
CodePipeline uses Amazon CloudWatch Events to detect changes in your repository and start
your pipeline when a change occurs.
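A minimal sketch of starting a pipeline manually and then checking the state of its most recent run,
assuming a pipeline named MyFirstPipeline:

aws codepipeline start-pipeline-execution --name MyFirstPipeline
aws codepipeline get-pipeline-state --name MyFirstPipeline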
The following diagram shows two stages in an example pipeline in the CodePipeline console. It includes
an action in each stage and the enabled transition between the two stages.
The CodePipeline console includes helpful information in a collapsible panel that you can open from the
information icon or any Info link on the page. You can close this panel at any time.
The CodePipeline console also provides a way to quickly search for your resources, such as repositories,
build projects, deployment applications, and pipelines. Choose Go to resource or press the / key, and
then type the name of the resource. Any matches appear in the list. Searches are case insensitive. You
only see resources that you have permissions to view. For more information, see Viewing Resources in the
Console (p. 362).
Getting Started with CodePipeline

Before you can use AWS CodePipeline for the first time, you must complete the following steps.
Topics
• Step 1: Create an AWS Account (p. 10)
• Step 2: Create or Use an IAM User (p. 10)
• Step 3: Use an IAM Managed Policy to Assign CodePipeline Permissions to the IAM User (p. 10)
• Step 4: Install the AWS CLI (p. 11)
• Step 5: Open the Console for CodePipeline (p. 11)
Step 3: Use an IAM Managed Policy to Assign CodePipeline Permissions to the IAM User

When you use the console to create or manage pipelines, you might also need permissions to:

• Create a CodePipeline service role, or choose an existing one, and pass the role to CodePipeline.
• Optionally, create a CloudWatch Events rule for change detection and pass the CloudWatch Events
service role to CloudWatch Events.

For more information, see Granting a User Permissions to Pass a Role to an AWS Service.
1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
2. In the IAM console, in the navigation pane, choose Policies, and then choose the
AWSCodePipelineFullAccess managed policy from the list of policies.
3. On the Policy Details page, choose the Attached Entities tab, and then choose Attach.
4. On the Attach Policy page, select the check box next to the IAM users or groups, and then choose
Attach Policy.
Note
The AWSCodePipelineFullAccess policy provides access to all CodePipeline actions and
resources that the IAM user has access to, as well as all possible actions when creating
stages in a pipeline, such as creating stages that include CodeDeploy, Elastic Beanstalk,
or Amazon S3. As a best practice, you should grant individuals only the permissions they
need to perform their duties. For more information about how to restrict IAM users to a
limited set of CodePipeline actions and resources, see Remove Permissions for Unused AWS
Services (p. 369).
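If you prefer to attach the managed policy with the AWS CLI instead of the console, the call is a
one-liner. The user name below is a placeholder; the policy ARN matches the AWSCodePipelineFullAccess
policy named above:

aws iam attach-user-policy --user-name MyCodePipelineUser --policy-arn arn:aws:iam::aws:policy/AWSCodePipelineFullAccess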
Step 4: Install the AWS CLI

1. On your local machine, download and install the AWS CLI. This enables you to interact with
CodePipeline from the command line. For more information, see Getting Set Up with the AWS
Command Line Interface.
Note
CodePipeline works only with AWS CLI versions 1.7.38 and later. To determine which version
of the AWS CLI you have installed, run the command aws --version. To upgrade an older
version of the AWS CLI to the latest version, follow the instructions in Uninstalling the AWS
CLI, and then follow the instructions in Installing the AWS Command Line Interface.
2. Configure the AWS CLI with the configure command, as follows:
aws configure
When prompted, specify the AWS access key and AWS secret access key of the IAM user that you will
use with CodePipeline. When prompted for the default region name, specify the region where you
will create the pipeline, such as us-east-2. When prompted for the default output format, specify
json. For example:
AWS Access Key ID [None]: Type your target AWS access key ID here, and then press Enter
AWS Secret Access Key [None]: Type your target AWS secret access key here, and then press Enter
Default region name [None]: Type us-east-2 here, and then press Enter
Default output format [None]: Type json here, and then press Enter
Note
For more information about IAM, access keys, and secret keys, see Managing Access Keys for
IAM Users and How Do I Get Credentials?.
For more information about the regions and endpoints available for CodePipeline, see
Regions and Endpoints.
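As a quick check that the AWS CLI is installed and configured for the intended account and Region, you
can run the following commands; an empty pipeline list is the expected result before you create your
first pipeline:

aws --version
aws codepipeline list-pipelines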
Next Steps
You have completed the prerequisites. You can begin using CodePipeline. To start working with
CodePipeline, see the CodePipeline Tutorials (p. 25).
Product and Service Integrations with CodePipeline

Topics
• Integrations with CodePipeline Action Types (p. 12)
• General Integrations with CodePipeline (p. 20)
• Examples from the Community (p. 22)
Integrations with CodePipeline Action Types

Topics
• Source Action Integrations (p. 12)
• Build Action Integrations (p. 15)
• Test Action Integrations (p. 16)
• Deploy Action Integrations (p. 17)
• Approval Action Integrations (p. 20)
• Invoke Action Integrations (p. 20)
Source Action Integrations

Amazon Simple Storage Service (Amazon S3)
Amazon S3 is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data
at any time, from anywhere on the web.
You can configure CodePipeline to use a versioned Amazon S3 bucket
as the source stage for your code. You must first create the bucket and
then enable versioning on it before you can create a pipeline that uses the
bucket as part of a source action within a stage.
Note
Amazon S3 can also be included in a pipeline as a deploy action.
AWS CodeCommit
CodeCommit is a version control service hosted by AWS that you can use
to privately store and manage assets (such as documents, source code, and
binary files) in the cloud. You can configure CodePipeline to use a branch
in a CodeCommit repository as the source stage for your code. You must
first create the repository and associate it with a working directory on your
local machine before you can create a pipeline that uses the branch as
part of a source action within a stage. You can connect to the CodeCommit
repository by either creating a new pipeline or editing an existing one.
GitHub
You can configure CodePipeline to use a GitHub repository as the source
stage for your code. You must have previously created a GitHub account
and at least one GitHub repository. You can connect to the GitHub
repository by either creating a new pipeline or editing an existing one.
Note
CodePipeline integration with GitHub Enterprise is not supported.
The first time you add a GitHub repository to a pipeline, you will be asked
to authorize CodePipeline access to your repositories. To integrate with
GitHub, CodePipeline creates an OAuth application for your pipeline and,
if your pipeline is created or updated in the console, CodePipeline creates
a GitHub webhook that starts your pipeline when a change occurs in the
repository. The token and webhook require the following GitHub scopes:
• The repo scope, which is used for full control to read and pull artifacts
from public and private repositories into a pipeline.
• The admin:repo_hook scope, which is used for full control of
repository hooks.
For more information about GitHub scopes, see the GitHub Developer API
Reference.
Amazon ECR
Amazon ECR is an AWS Docker image repository service. You use Docker
push and pull commands to upload Docker images to your repository.
An Amazon ECR repository URI and image are used in Amazon ECS task
definitions to reference source image information.
Build Action Integrations

AWS CodeBuild
CodeBuild is a fully managed build service in the cloud. CodeBuild compiles
your source code, runs unit tests, and produces artifacts that are ready to
deploy.
You can add CodeBuild as a build action to the build stage of a pipeline.
You can use an existing build project or create one in the CodePipeline
console. The output of the build project can then be deployed as part of a
pipeline.
Note
CodeBuild can also be included in a pipeline as a test action, with
or without a build output.
Learn more:
• What Is CodeBuild?
• Add a CodeBuild Build Action to a Pipeline (in Use CodePipeline with
CodeBuild to Test Code and Run Builds)
• Working with Build Projects in CodeBuild
• CodeBuild – Fully Managed Build Service
CloudBees
You can configure CodePipeline to use CloudBees to build or test your code
in one or more actions in a pipeline.
Jenkins
You can configure CodePipeline to use Jenkins CI to build or test your code
in one or more actions in a pipeline. You must have previously created a
Jenkins project and installed and configured the CodePipeline Plugin for
Jenkins for that project. You can connect to the Jenkins project by either
creating a new pipeline or editing an existing one.
Access for Jenkins is configured on a per-project basis. You must install the
CodePipeline Plugin for Jenkins on every Jenkins instance you want to use
with CodePipeline. In addition, you must configure CodePipeline access to
the Jenkins project. You should secure your Jenkins project by configuring
it to accept HTTPS/SSL connections only. If your Jenkins project is installed
on an Amazon EC2 instance, consider providing your AWS credentials by
installing the AWS CLI on each instance and configuring an AWS profile on
those instances with the IAM user profile and AWS credentials you want to
use for connections between CodePipeline and Jenkins, rather than adding
them or storing them through the Jenkins web interface.
Learn more:
• Accessing Jenkins
• Tutorial: Create a Four-Stage Pipeline (p. 54)
TeamCity
You can configure CodePipeline to use TeamCity to build and test your
code in one or more actions in a pipeline.
Test Action Integrations

AWS CodeBuild
CodeBuild is a fully managed build service in the cloud. CodeBuild compiles
your source code, runs unit tests, and produces artifacts that are ready to
deploy.
You can add CodeBuild to a pipeline as a test action to run unit tests
against your code, with or without a build output artifact. If you generate
an output artifact for the test action, it can be deployed as part of
a pipeline. You can use an existing build project or create one in the
CodePipeline console.
Note
CodeBuild can also be included in a pipeline as a build action, with
a mandatory build output artifact.
Learn more:
• What Is CodeBuild?
• Add a CodeBuild Test Action to a Pipeline (in Use CodePipeline with
CodeBuild to Test Code and Run Builds)
AWS Device Farm
AWS Device Farm is an app testing service that you can use to test and
interact with your Android, iOS, and web applications on real, physical
phones and tablets that are hosted by Amazon Web Services (AWS). You
can configure CodePipeline to use AWS Device Farm to test your code in
one or more actions in a pipeline. AWS Device Farm allows you to upload
your own tests or use built-in, script-free compatibility tests. Because
testing is automatically performed in parallel, tests on multiple devices
begin in minutes. A test report containing high-level results, low-level
logs, pixel-to-pixel screenshots, and performance data is updated as tests
are completed. AWS Device Farm supports testing of native and hybrid
Android, iOS, and Fire OS apps, including those created with PhoneGap,
Titanium, Xamarin, Unity, and other frameworks. It supports remote access
of Android apps, which allows you to interact directly with test devices.
BlazeMeter
You can configure CodePipeline to use BlazeMeter to test your code in one
or more actions in a pipeline.
Ghost Inspector
You can configure CodePipeline to use Ghost Inspector to test your code in
one or more actions in a pipeline.
Micro Focus StormRunner Load
You can configure CodePipeline to use Micro Focus StormRunner Load in one or more actions in a
pipeline.
Nouvola
You can configure CodePipeline to use Nouvola to test your code in one or
more actions in a pipeline.
Runscope
You can configure CodePipeline to use Runscope to test your code in one or
more actions in a pipeline.
Deploy Action Integrations

Amazon Simple Storage Service (Amazon S3)
Amazon S3 is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data
at any time, from anywhere on the web.
You can add an action to a pipeline that uses Amazon S3 as a deployment
provider.
Note
Amazon S3 can also be included in a pipeline as a source action.
Amazon Elastic Container Service
Amazon ECS is a highly scalable, high performance container management service that allows you to
run container-based applications in the AWS
Cloud. When you create a pipeline, you can select Amazon ECS as a
deployment provider. A change to code in your source control repository
triggers your pipeline to build a new Docker image, push it to your
container registry, and then deploy the updated image to Amazon ECS. You
can also use the ECS (Blue/Green) provider action in CodePipeline to route
and deploy traffic to Amazon ECS with CodeDeploy.
AWS Elastic Beanstalk
Elastic Beanstalk is an easy-to-use service for deploying and scaling
web applications and services developed with Java, .NET, PHP, Node.js,
Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx,
Passenger, and IIS. You can configure CodePipeline to use Elastic Beanstalk
to deploy your code. You can create the Elastic Beanstalk application and
environment to use in a deploy action within a stage either before you
create the pipeline or when you use the Create Pipeline wizard.
AWS OpsWorks Stacks
AWS OpsWorks is a configuration management service that helps
you configure and operate applications of all shapes and sizes using
Chef. Using AWS OpsWorks Stacks, you can define the application’s
architecture and the specification of each component including package
installation, software configuration and resources such as storage. You
can configure CodePipeline to use AWS OpsWorks Stacks to deploy your
code in conjunction with custom Chef cookbooks and applications in AWS
OpsWorks.
You create the AWS OpsWorks stack and layer you want to use before you
create the pipeline. You can create the AWS OpsWorks application to use
in a deploy action within a stage either before you create the pipeline or
when you use the Create Pipeline wizard.
AWS Service Catalog
AWS Service Catalog enables organizations to create and manage catalogs
of products that are approved for use on AWS.
Learn more:
• Tutorial: Create a Pipeline That Deploys to AWS Service Catalog (p. 76)
• Create a Pipeline in CodePipeline (p. 187)
Alexa Skills Kit
Amazon Alexa Skills Kit lets you build and distribute cloud-based skills to
users of Alexa-enabled devices.
You can add an action to a pipeline that uses Alexa Skills Kit as a
deployment provider. Source changes are detected by your pipeline, and
then your pipeline deploys updates to your Alexa skill in the Alexa service.
Learn more:
• Tutorial: Create a Pipeline That Deploys an Amazon Alexa Skill (p. 108)
XebiaLabs
You can configure CodePipeline to use XebiaLabs to deploy your code in
one or more actions in a pipeline.
General Integrations with CodePipeline

AWS CloudTrail
CloudTrail captures AWS API calls and related events made by or on behalf
of an AWS account and delivers log files to an Amazon S3 bucket that
you specify. You can configure CloudTrail to capture API calls from the
CodePipeline console, CodePipeline commands from the AWS CLI, and
from the CodePipeline API.
Amazon CloudWatch Events
Amazon CloudWatch Events is a web service that detects changes in your AWS services based on rules
that you define and invokes an action in one or
more specified AWS services when a change occurs.
Learn more:
• What Is Amazon CloudWatch Events?
• Start a Pipeline Execution in CodePipeline (p. 137).
• Use CloudWatch Events to Start a Pipeline (CodeCommit
Source) (p. 140)
• Receive notifications when a pipeline state changes — You can set
up Amazon CloudWatch Events rules to detect and react to changes in
execution state for a pipeline, stage, or action.
Learn more:
• Detect and React to Changes in Pipeline State with Amazon
CloudWatch Events (p. 334)
• Tutorial: Set Up a CloudWatch Events Rule to Receive Email
Notifications for Pipeline State Changes (p. 61)
AWS Key Management Service
AWS KMS is a managed service that makes it easy for you to create and control the encryption keys
used to encrypt your data. By default,
CodePipeline uses AWS KMS to encrypt artifacts for pipelines stored in
Amazon S3 buckets.
Learn more:
• To create a pipeline that uses a source bucket, artifact bucket, and service
role from one AWS account and CodeDeploy resources from a different
AWS account, you must create a customer-managed KMS key, add the
key to the pipeline, and set up account policies and roles to enable
cross-account access. For more information, see Create a Pipeline in
CodePipeline That Uses Resources from Another AWS Account (p. 215).
• To create a pipeline from one AWS account that deploys an AWS
CloudFormation stack to another AWS account, you must create a
customer-managed KMS key, add the key to the pipeline, and set up
account policies and roles to deploy the stack to another AWS account.
For an AWS KMS key, you can use the key ID, the key ARN, or the alias ARN.
Note
Aliases are recognized only in the account that created the
customer master key (CMK). For cross-account actions, you can
only use the key ID or key ARN to identify the key.
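As a sketch of where the key identifier goes, a customer-managed key is referenced from the pipeline's
artifact store definition. The key ARN, account ID, and bucket name below are placeholders:

"artifactStore": {
    "type": "S3",
    "location": "codepipeline-us-west-2-1234567EXAMPLE",
    "encryptionKey": {
        "id": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        "type": "KMS"
    }
}

Within the account that owns the key, the id field can instead contain the key ID or the alias ARN.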
Examples from the Community

Topics
• Integration Examples: Blog Posts (p. 22)
• Integration Examples: Videos (p. 24)
Integration Examples: Blog Posts

Learn how to use a CI/CD pipeline in CodePipeline to automate preventive and detective security
controls. This post covers how to use a pipeline to create a simple security group and perform security
checks during the source, test, and production stages to improve the security posture of your AWS
accounts.
Learn how to create a continuous deployment pipeline to Amazon Elastic Container Service (Amazon
ECS). Applications are delivered as Docker containers using CodePipeline, CodeBuild, Amazon ECR, and
AWS CloudFormation.
• Download a sample AWS CloudFormation template and instructions for using it to create your own
continuous deployment pipeline from the ECS Reference Architecture: Continuous Deployment repo
on GitHub.
Learn how to use a collection of AWS services to create a continuous deployment pipeline for your
serverless applications. You'll use the Serverless Application Model (SAM) to define the application and
its resources and CodePipeline to orchestrate your application deployment.
• View a sample application written in Go with the Gin framework and an API Gateway proxy shim.
Learn how to integrate CodePipeline with Git servers that support webhooks functionality, such as
GitHub Enterprise, Bitbucket, and GitLab.
Learn how to use Dynatrace monitoring solutions to scale pipelines in CodePipeline, automatically
analyze test executions before code is committed, and maintain optimal lead times.
Learn how to implement continuous delivery in a CodePipeline pipeline for an application in AWS
Elastic Beanstalk. All AWS resources are provisioned automatically through the use of an AWS
CloudFormation template. This walkthrough also incorporates CodeCommit and AWS Identity and
Access Management (IAM).
Use AWS CloudFormation to automate the provisioning of AWS resources for a continuous delivery
pipeline that uses CodeCommit, CodePipeline, CodeDeploy, and AWS Identity and Access Management.
Learn how to automate the provisioning of cross-account access to pipelines in AWS CodePipeline by
using AWS Identity and Access Management. Includes examples in an AWS CloudFormation template.
Learn how to create a full continuous delivery system for an ASP.NET Core application using
CodeDeploy and AWS CodePipeline.
Learn how to use the AWS CodePipeline console to create a two-stage pipeline in a walkthrough based
on the AWS CodePipeline Tutorial: Create a Four-Stage Pipeline (p. 54).
Learn how to invoke a Lambda function that lets you visualize the actions and stages in a CodePipeline
software delivery process as you design it, before the pipeline is operational. As you design your
pipeline structure, you can use the Lambda function to test whether your pipeline will complete
successfully.
Learn how to create an AWS CloudFormation stack that provisions all the AWS resources used in the
user guide task Invoke an AWS Lambda Function in a Pipeline in CodePipeline (p. 294).
Learn how to provision a basic continuous delivery pipeline in CodePipeline using AWS
CloudFormation.
Learn how to use GitHub, CodePipeline, Jenkins, and Elastic Beanstalk to create a deployment pipeline
for a web application that is updated automatically every time you change your code.
Learn how to inject automated load tests at the right places in the CodePipeline delivery workflow
with BlazeMeter’s native CodePipeline integration.
Learn how to configure your pipeline and the AWS Lambda function to deploy to AWS OpsWorks using
CodePipeline.
Learn how to use CodePipeline, CloudWatch, and BlazeMeter to create a continuous delivery workflow
that reduces time to release and increases test coverage for developers during the release.
Integration Examples: Videos

Learn how to use the CodePipeline console to create a pipeline that uses CodeDeploy and Amazon S3.
Duration: 8:53
CodePipeline Tutorials
After you complete the steps in Getting Started with CodePipeline (p. 9), you can try one of the AWS
CodePipeline tutorials in this user guide:
I want to create a two-stage pipeline that uses CodeDeploy to deploy a sample application from an
Amazon S3 bucket to Amazon EC2 instances running Amazon Linux. After using the wizard to create my
pipeline, I want to add a third stage. See Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26).

I want to create a two-stage pipeline that uses CodeDeploy to deploy a sample application from a
CodeCommit repository to an Amazon EC2 instance running Amazon Linux. See Tutorial: Create a Simple
Pipeline (CodeCommit Repository) (p. 42).

I want to add a build stage to the three-stage pipeline I created in the first tutorial. The new stage uses
Jenkins to build my application. See Tutorial: Create a Four-Stage Pipeline (p. 54).

I want to set up a CloudWatch Events rule that sends notifications whenever there are changes to the
execution state of my pipeline, stage, or action. See Tutorial: Set Up a CloudWatch Events Rule to
Receive Email Notifications for Pipeline State Changes (p. 61).

I want to create a pipeline with a GitHub source that builds and tests an Android app with CodeBuild
and AWS Device Farm. See Tutorial: Create a Pipeline That Builds and Tests Your Android App When a
Commit Is Pushed to Your GitHub Repository (p. 64).

I want to create a pipeline with an Amazon S3 source that tests an iOS app with AWS Device Farm. See
Tutorial: Create a Pipeline That Tests Your iOS App After a Change in Your Amazon S3 Bucket (p. 70).

I want to create a pipeline that deploys my product template to AWS Service Catalog. See Tutorial:
Create a Pipeline That Deploys to AWS Service Catalog (p. 76).

I want to use sample templates to create a simple pipeline (with an Amazon S3, CodeCommit, or GitHub
source) using the AWS CloudFormation console. See Tutorial: Create a Pipeline with AWS
CloudFormation (p. 89).

I want to create a two-stage pipeline that uses CodeDeploy and Amazon ECS for blue/green deployment
of an image from an Amazon ECR repository to an Amazon ECS cluster and service. See Tutorial: Create
a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment (p. 95).

I want to create a pipeline that continuously publishes my serverless application to the AWS Serverless
Application Repository. See Tutorial: Create a Pipeline That Publishes Your Serverless Application to the
AWS Serverless Application Repository (p. 125).
Note
Tutorial: Create a Four-Stage Pipeline (p. 54) shows how to create a pipeline that gets source
code from a GitHub repository, uses Jenkins to build and test the source code, and then uses
CodeDeploy to deploy the built and tested source code to Amazon EC2 instances running
Amazon Linux or Microsoft Windows Server. Because this tutorial builds on concepts covered in
the walkthroughs, we recommend you complete at least one of them first.
The following tutorials and walkthroughs in other user guides provide guidance for integrating other
AWS services into your pipelines:
Tutorial: Create a Simple Pipeline (Amazon S3 Bucket)

In this walkthrough, you create a two-stage pipeline that uses a versioned Amazon S3 bucket and
CodeDeploy to release a sample application.
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you
need to create before you create the pipeline. AWS resources for your source actions must
always be created in the same AWS Region where you create your pipeline. For example, if you
create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the
US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
After you create this simple pipeline, you add another stage and then disable and enable the transition
between stages.
Not what you're looking for? To create a simple pipeline using a CodeCommit branch as a code
repository, see Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42).
Note
For pipelines with an Amazon S3 source, an Amazon CloudWatch Events rule detects source
changes and then starts your pipeline when changes occur. When you use the console to create
or change a pipeline, the rule and all associated resources are created for you. If you create or
change an Amazon S3 pipeline in the CLI or AWS CloudFormation, you must create the Amazon
CloudWatch Events rule, IAM role, and AWS CloudTrail trail manually.
Before you begin, you should complete the prerequisites in Getting Started with CodePipeline (p. 9).
Topics
• Step 1: Create an Amazon S3 Bucket for Your Application (p. 27)
• Step 2: Create Amazon EC2 Windows Instances and Install the CodeDeploy Agent (p. 28)
• Step 3: Create an Application in CodeDeploy (p. 29)
• Step 4: Create Your First Pipeline in CodePipeline (p. 30)
• Step 5: Add Another Stage to Your Pipeline (p. 34)
• Step 6: Disable and Enable Transitions Between Stages in CodePipeline (p. 41)
• Step 7: Clean Up Resources (p. 41)
Step 1: Create an Amazon S3 Bucket for Your Application

If you want to use an existing Amazon S3 bucket, see Enable Versioning for a Bucket, copy the sample
applications to that bucket, and skip ahead to Step 3: Create an Application in CodeDeploy (p. 29).
If you want to use a GitHub repository instead of an Amazon S3 bucket, copy the sample applications to
that repository, and skip ahead to Step 3: Create an Application in CodeDeploy (p. 29).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, type a name for your bucket (for example, awscodepipeline-demobucket-
example-date).
Note
Because all bucket names in Amazon S3 must be unique, use one of your own, not the name
shown in the example. You can change the example name just by adding the date to it.
Make a note of this name because you need it for the rest of this tutorial.
In Region, choose the region where you intend to create your pipeline, such as US West (Oregon),
and then choose Next.
4. On the Configure options tab, in Versioning, select Keep all versions of an object in the same
bucket, and then choose Next.
When versioning is enabled, Amazon S3 saves every version of every object in the bucket.
5. On the Set permissions tab, accept the default permissions to allow your account read/write access
on objects, and choose Next. For more information about Amazon S3 bucket and object permissions,
see Specifying Permissions in a Policy.
6. Choose Create bucket.
7. Next, download a sample from a GitHub repository and save it into a folder or directory on your
local computer.
Important
Do not use the Clone or download or Download ZIP buttons in the GitHub repositories.
This creates a nested folder structure that does not work with CodeDeploy.
• If you want to deploy to Amazon Linux instances using CodeDeploy, use the sample in
https://github.com/awslabs/aws-codepipeline-s3-aws-codedeploy_linux.
• If you want to deploy to Windows Server instances using CodeDeploy, use the sample in
https://github.com/awslabs/AWSCodePipeline-S3-AWSCodeDeploy_Windows.
b. Choose the dist folder.
c. Choose the file name.
Download the compressed (zipped) file. Do not unzip the file. For example, save the aws-
codepipeline-s3-aws-codedeploy_linux.zip file to your desktop and do not extract the
files.
8. In the Amazon S3 console for your bucket, upload the file:
a. Choose Upload.
b. Drag and drop the file or choose Add files and browse for the file. For example, choose the
aws-codepipeline-s3-aws-codedeploy_linux.zip file from your desktop.
c. Choose Upload.
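If you prefer the AWS CLI for this step, the following is a minimal sketch of creating the bucket,
enabling versioning, and uploading the sample file; substitute your own unique bucket name and Region:

aws s3 mb s3://awscodepipeline-demobucket-example-date --region us-west-2
aws s3api put-bucket-versioning --bucket awscodepipeline-demobucket-example-date --versioning-configuration Status=Enabled
aws s3 cp aws-codepipeline-s3-aws-codedeploy_linux.zip s3://awscodepipeline-demobucket-example-date/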
Step 2: Create Amazon EC2 Windows Instances and Install the CodeDeploy Agent

In this step, you create the Windows Server Amazon EC2 instances to which you will deploy a sample
application. As part of this process, you install the CodeDeploy agent on the instances. The CodeDeploy
agent is a software package that enables an instance to be used in CodeDeploy deployments.
To launch instances

The instance role you attach to the instances must allow read access to Amazon S3 so that the
CodeDeploy agent can be downloaded. A policy that grants this access looks like the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
6. On the Step 3: Configure Instance Details page, expand Advanced Details, and in User data, with
As text selected, type the following:
<powershell>
New-Item -Path c:\temp -ItemType "directory" -Force
powershell.exe -Command Read-S3Object -BucketName bucket-name/latest -Key codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
Start-Process -Wait -FilePath c:\temp\codedeploy-agent.msi -WindowStyle Hidden
</powershell>
bucket-name is the name of the Amazon S3 bucket that contains the CodeDeploy Resource Kit files
for your region. For example, for the US West (Oregon) Region, replace bucket-name with
aws-codedeploy-us-west-2. For a list of bucket names, see Resource Kit Bucket Names by Region.
This code installs the CodeDeploy agent on your instance as it is created. This script is written for
Windows instances only.
7. Leave the rest of the items on the Step 3: Configure Instance Details page unchanged. Choose
Next: Add Storage, leave the Step 4: Add Storage page unchanged, and then choose Next: Add
Tags.
8. On the Add Tags page, with Name displayed in the Key box, type MyCodePipelineDemo in the
Value box, and then choose Next: Configure Security Group.
Important
The Key and Value boxes are case-sensitive.
9. On the Configure Security Group page, allow port 80 communication so you can access the public
instance endpoint.
10. Choose Review and Launch.
11. On the Review Instance Launch page, choose Launch.
12. Choose View Instances to close the confirmation page and return to the console.
13. You can view the status of the launch on the Instances page. When you launch an instance, its initial
state is pending. After the instance starts, its state changes to running, and it receives a public
DNS name. (If the Public DNS column is not displayed, choose the Show/Hide icon, and then select
Public DNS.)
14. It can take a few minutes for the instance to be ready for you to connect to it. Check that your
instance has passed its status checks. You can view this information in the Status Checks column.
Step 3: Create an Application in CodeDeploy

To create a deployment group

1. On the page that displays your application, choose Create deployment group.
2. In Deployment group name, type MyDemoDeploymentGroup.
3. In Service Role, choose a service role that trusts AWS CodeDeploy with, at minimum, the trust and
permissions described in Create a Service Role for CodeDeploy. To get the service role ARN, see Get
the Service Role ARN (Console).
4. Under Deployment type, choose In-place.
5. Under Environment configuration, choose Amazon EC2 Instances. Choose Name in the Key box,
and in the Value box, type MyCodePipelineDemo.
Important
You must choose the same value for the Name key here that you assigned to your Amazon
EC2 instance when you created it. If you tagged your instance with something other than
MyCodePipelineDemo, be sure to use it here.
6. Under Deployment configuration, choose CodeDeployDefault.OneAtATime.
7. Under Load Balancer, clear Enable load balancing.
8. Expand the Advanced section. Under Alarms, choose Ignore alarm configuration.
9. Choose Create deployment group.
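The same deployment group can be created with the AWS CLI; the following is a sketch, assuming the
application and tag from this tutorial and a placeholder service role ARN:

aws deploy create-deployment-group --application-name CodePipelineDemoApplication --deployment-group-name MyDemoDeploymentGroup --deployment-config-name CodeDeployDefault.OneAtATime --ec2-tag-filters Key=Name,Value=MyCodePipelineDemo,Type=KEY_AND_VALUE --service-role-arn arn:aws:iam::111122223333:role/CodeDeployServiceRole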
Step 4: Create Your First Pipeline in CodePipeline

1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter MyFirstPipeline.
Note
If you choose another name for your pipeline, be sure to use that name instead of
MyFirstPipeline for the rest of this tutorial. After you create a pipeline, you cannot
change its name. Pipeline names are subject to some limitations. For more information, see
Limits in AWS CodePipeline (p. 412).
4. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for this tutorial: AWSCodePipelineServiceRole-eu-west-2-MyFirstPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support additional AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
5. In Artifact location, do one of the following:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the region you have selected for your pipeline.
b. Choose Custom location if you already have an existing artifact store you have created, such as
an Amazon S3 artifact bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
6. In Step 2: Add source stage, in Source provider, choose Amazon S3. In Bucket, enter the
name of the Amazon S3 bucket you created in Step 1: Create an Amazon S3 Bucket for Your
Application (p. 27). In S3 object key, enter the sample file you copied to that bucket,
either aws-codepipeline-s3-aws-codedeploy_linux.zip or AWSCodePipeline-S3-AWSCodeDeploy_Windows.zip. Choose Next. The full Amazon S3 object paths for the sample files look like this:
s3://awscodepipeline-demobucket-example-date/aws-codepipeline-s3-aws-codedeploy_linux.zip
s3://awscodepipeline-demobucket-example-date/AWSCodePipeline-S3-AWSCodeDeploy_Windows.zip
Note
If you copied the sample application to a GitHub repository instead of an Amazon S3
bucket, choose GitHub from the list of source providers, and then follow the instructions.
For more information, see Create a Pipeline (Console) (p. 187).
Under Change detection options, leave the defaults. This allows CodePipeline to use Amazon
CloudWatch Events to detect changes in your source bucket.
Choose Next.
7. In Step 3: Add build stage, choose Skip build stage, and then accept the warning message by
choosing Skip again. Choose Next.
Note
You can configure a build action with a provider such as CodeBuild, which is a fully
managed build service in the cloud. You can also configure a build action that uses a
provider with a build server or system, such as Jenkins. You can walk through the steps
for setting up build resources and creating a pipeline that uses those resources in the next
tutorial, Tutorial: Create a Four-Stage Pipeline (p. 54).
8. In Step 4: Add deploy stage, in Deploy provider, choose AWS CodeDeploy. The Region
field defaults to the same AWS Region as your pipeline. In Application name, enter
CodePipelineDemoApplication, or choose the Refresh button, and then choose the application
name from the list. In Deployment group, type CodePipelineDemoFleet, or choose it from the
list, and then choose Next.
Note
The name "Deploy" is the name given by default to the stage created in the Step 4: Add
deploy stage step, just as "Source" is the name given to the first stage of the pipeline.
9. In Step 5: Review, review the information, and then choose Create pipeline.
10. The pipeline starts to run. You can view progress and success and failure messages as the
CodePipeline sample deploys a webpage to each of the Amazon EC2 instances in the CodeDeploy
deployment.
Congratulations! You just created a simple pipeline in CodePipeline. The pipeline has two stages:
• A source stage named Source, which detects changes in the versioned sample application stored in the
Amazon S3 bucket and pulls those changes into the pipeline.
• A Deploy stage that deploys those changes to Amazon EC2 instances with CodeDeploy.
1. View the initial progress of the pipeline. The status of each stage changes from No executions yet
to In Progress, and then to either Succeeded or Failed. The pipeline should complete the first run
within a few minutes.
2. After Succeeded is displayed for the action status, in the status area for the Deploy stage, choose Details. This opens the AWS CodeDeploy console.
3. In the Deployment group tab, under Deployment lifecycle events, choose the instance ID. This
opens the EC2 console.
4. On the Description tab, in Public DNS, copy the address, and then paste it into the address bar of
your web browser. View the index page for the sample application you uploaded to your Amazon S3
bucket.
The following page is the sample application you uploaded to your Amazon S3 bucket.
For more information about stages, actions, and how pipelines work, see CodePipeline Concepts (p. 4).
Topics
• Create a Second Deployment Group in CodeDeploy (p. 34)
• Add the Deployment Group as Another Stage in Your Pipeline (p. 35)
Topics
• Create a Third Stage (Console) (p. 35)
• Create a Third Stage (CLI) (p. 38)
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. In Name, choose the name of the pipeline you created, MyFirstPipeline.
3. On the pipeline details page, choose Edit.
4. On the Edit page, choose + Add stage to add a stage immediately after the Deploy stage.
Alternatively, to use the AWS CLI to rerun the pipeline, from a terminal on your local Linux, macOS,
or Unix machine, or a command prompt on your local Windows machine, run the start-pipeline-
execution command, specifying the name of the pipeline. This runs the application in your source
bucket through the pipeline for a second time.
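For MyFirstPipeline, the command looks like this:
aws codepipeline start-pipeline-execution --name MyFirstPipeline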
The pipeline shows three stages and the state of the artifact running through those three stages.
It might take up to five minutes for the pipeline to run through all stages. You see the deployment
succeeds on the first two stages, just as before, but the Production stage shows the Deploy-Second-
Deployment action failed.
12. In the Deploy-Second-Deployment action, choose Details. You are redirected to the page for the
CodeDeploy deployment. In this case, the failure is the result of the first instance group deploying to
all of the Amazon EC2 instances, leaving no instances for the second deployment group.
Note
This failure is by design, to demonstrate what happens when there is a failure in a pipeline
stage.
1. Open a terminal session on your local Linux, macOS, or Unix machine, or a command prompt on
your local Windows machine, and run the get-pipeline command to display the structure of the
pipeline you just created. For MyFirstPipeline, you would type the following command:
aws codepipeline get-pipeline --name MyFirstPipeline
This command returns the structure of MyFirstPipeline. The first part of the output should look similar to the following:
{
    "pipeline": {
        "roleArn": "arn:aws:iam::80398EXAMPLE:role/AWS-CodePipeline-Service",
        "stages": [
        ...
The final part of the output includes the pipeline metadata and should look similar to the following:
        ...
        ],
        "artifactStore": {
            "type": "S3",
            "location": "codepipeline-us-east-2-250656481468"
        },
        "name": "MyFirstPipeline",
        "version": 4
    },
    "metadata": {
        "pipelineArn": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline",
        "updated": 1501626591.112,
        "created": 1501626591.112
    }
}
2. Copy and paste this structure into a plain-text editor, and save the file as pipeline.json. For
convenience, save this file in the same directory where you run the aws codepipeline commands.
Note
You can pipe the JSON directly into a file with the get-pipeline command as follows:
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json
3. Copy the Staging stage section and paste it after the first two stages. Because it is a deploy stage,
just like the Staging stage, you use it as a template for the third stage.
4. Change the name of the stage and the deployment group details.
The following example shows the JSON you add to the pipeline.json file after the Staging stage. Edit
the emphasized elements with new values. Remember to include a comma to separate the Staging
and Production stage definitions.
,
{
    "name": "Production",
    "actions": [
        {
            "inputArtifacts": [
                {
                    "name": "MyApp"
                }
            ],
            "name": "Deploy-Second-Deployment",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "version": "1",
                "provider": "CodeDeploy"
            },
            "outputArtifacts": [],
            "configuration": {
                "ApplicationName": "CodePipelineDemoApplication",
                "DeploymentGroupName": "CodePipelineProductionFleet"
            },
            "runOrder": 1
        }
    ]
}
5. If you are working with the pipeline structure retrieved using the get-pipeline command, you must remove the metadata lines from the JSON file. Otherwise, the update-pipeline command cannot use it. Remove the "metadata": { } lines and the "created", "pipelineArn", and "updated" fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
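After you remove the metadata section and save the file, apply your changes with the update-pipeline command:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json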
The pipeline shows three stages and the state of the artifact running through those three stages. It
might take up to five minutes for the pipeline to run through all stages. Although the deployment
succeeds on the first two stages, just as before, the Production stage shows that the Deploy-
Second-Deployment action failed.
9. In the Deploy-Second-Deployment action, choose Details to see details of the failure. You are
redirected to the details page for the CodeDeploy deployment. In this case, the failure is the result
of the first instance group deploying to all of the Amazon EC2 instances, leaving no instances for the
second deployment group.
Note
This failure is by design, to demonstrate what happens when there is a failure in a pipeline
stage.
1. Open the CodePipeline console and choose MyFirstPipeline from the list of pipelines.
2. On the details page for the pipeline, choose the Disable transition button between the second stage
(Staging) and the third stage that you added in the previous section (Production).
3. In Disable transition, enter a reason for disabling the transition between the stages, and then
choose Disable.
The arrow between the stages changes color and displays an icon, and the Enable transition button appears.
4. Upload your sample again to the Amazon S3 bucket. Because the bucket is versioned, this change
starts the pipeline. For information, see Upload the sample application (p. 28).
5. Return to the details page for your pipeline and watch the status of the stages. The pipeline view
changes to show progress and success on the first two stages, but no changes occur on the third
stage. This process might take a few minutes.
6. Enable the transition by choosing the Enable transition button between the two stages. In the
Enable transition dialog box, choose Enable. The stage starts running in a few minutes and
attempts to process the artifact that has already been run through the first two stages of the
pipeline.
Note
If you want this third stage to succeed, edit the CodePipelineProductionFleet deployment
group before you enable the transition, and specify a different set of Amazon EC2 instances
where the application is deployed. For more information about how to do this, see Change
Deployment Group Settings. If you create more Amazon EC2 instances, you may incur
additional costs.
After you have finished exploring your pipeline, delete the resources you used in this tutorial so that you are not charged for the continued use of those resources. First, delete the pipeline, then the CodeDeploy application and its associated Amazon EC2 instances, and finally, the Amazon S3 bucket. A CLI sketch of these cleanup steps follows the numbered list below.
1. To clean up your CodePipeline resources, follow the instructions in Delete a Pipeline in AWS
CodePipeline (p. 214).
2. To clean up your CodeDeploy resources, follow the instructions in Clean Up Deployment
Walkthrough Resources.
3. To delete the Amazon S3 bucket, follow the instructions in Deleting or Emptying an Amazon
S3 Bucket. If you do not intend to create more pipelines, delete the Amazon S3 bucket created
for storing your pipeline artifacts. For more information about this bucket, see CodePipeline
Concepts (p. 4).
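As referenced above, here is a hedged CLI sketch of the same cleanup. The resource names are the examples used in this tutorial; substitute your own if they differ. Note that for a versioned bucket you may also need to remove object versions before the bucket can be deleted (see the guidance linked in step 3).
aws codepipeline delete-pipeline --name MyFirstPipeline
aws deploy delete-application --application-name CodePipelineDemoApplication
aws s3 rb s3://awscodepipeline-demobucket-example-date --force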
In this tutorial, you use CodePipeline to deploy code that is maintained in a CodeCommit repository to a
single Amazon EC2 instance. You will use CodeDeploy as the deployment service.
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you
need to create before you create the pipeline. AWS resources for your source actions must
always be created in the same AWS Region where you create your pipeline. For example, if you
create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the
US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
Not what you're looking for? To create a simple pipeline using a versioned Amazon S3 bucket as a code
repository, see Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26).
After you complete this tutorial, you should have enough practice with CodeCommit concepts to use it as
a repository in your pipelines.
CodePipeline uses Amazon CloudWatch Events to detect changes in your CodeCommit source repository
and branch. Using Amazon CloudWatch Events to start your pipeline when changes occur is the default
for this source type. When you use the wizard in the console to create a pipeline, the rule is created for
you.
Before you begin, make sure you have completed the following tasks:
Note
If you have already completed the Tutorial: Create a Simple Pipeline (Amazon S3
Bucket) (p. 26) tutorial, but have not yet cleaned up its resources, you must create
different names for many of the resources you used in that tutorial. For example, instead of
MyFirstPipeline, you might name your pipeline MySecondPipeline.
Topics
• Step 1: Create a CodeCommit Repository and Local Repo (p. 43)
• Step 2: Add Sample Code to Your CodeCommit Repository (p. 43)
• Step 3: Create an Amazon EC2 Linux Instance and Install the CodeDeploy Agent (p. 44)
• Step 4: Create an Application in CodeDeploy (p. 46)
• Step 5: Create Your First Pipeline in CodePipeline (p. 47)
• Step 6: Modify Code in Your CodeCommit Repository (p. 51)
• Step 7: Optional Stage Management Tasks (p. 53)
• Step 8: Clean Up Resources (p. 53)
Follow the first two procedures in the Git with CodeCommit Tutorial in the CodeCommit User Guide:
For information about connecting to a local repo you create, see Connect to a CodeCommit Repository.
After you complete these two procedures, return to this page and continue to the next step. Do not
continue to the third step in the CodeCommit tutorial. You must complete different steps in this tutorial
instead.
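If you prefer the CLI for the repository setup described above, a minimal sketch looks like the following. The repository description is an arbitrary example, and the clone URL placeholder must be replaced with the cloneUrlHttp value returned by create-repository.
aws codecommit create-repository --repository-name MyDemoRepo \
  --repository-description "My demonstration repository"
git clone <cloneUrlHttp-from-create-repository-output> /tmp/my-demo-repo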
1. Download the sample application used in this tutorial: SampleApp_Linux.zip.
2. Unzip the files and push the files to the root of your test repository.
For this tutorial, if you created the /tmp/my-demo-repo directory, unzip the files from
SampleApp_Linux.zip into the local directory you created in the previous procedure (for example, /
tmp/my-demo-repo or c:\temp\my-demo-repo).
Be sure to place the files directly into your local repository. Do not include a SampleApp_Linux
folder. On your local Linux, macOS, or Unix machine, for example, your directory and file hierarchy
should look like this:
/tmp
`-- my-demo-repo
    |-- appspec.yml
    |-- index.html
    |-- LICENSE.txt
    `-- scripts
        |-- install_dependencies
        |-- start_server
        `-- stop_server
4. Run the following command to stage all of your files at once:
git add -A
5. Run the following command to commit the files with a commit message:
git commit -m "Add sample application files"
6. Run the following command to push the files from your local repo to your CodeCommit repository:
git push
7. The files you downloaded and added to your local repo have now been added to the master branch
in your CodeCommit MyDemoRepo repository and are ready to be included in a pipeline.
To launch an instance
Note
These basic configurations, called Amazon Machine Images (AMIs), serve as templates for
your instance. This tutorial can be completed with any of the free tier eligible AMIs. For
simplicity, we use the HVM edition of the Amazon Linux AMI.
4. On the Step 2: Choose an Instance Type page, choose the free tier eligible t2.micro type as the
hardware configuration for your instance, and then choose Next: Configure Instance Details.
5. On the Step 3: Configure Instance Details page, in IAM role, choose or create an instance role that allows the instance to read from Amazon S3 (the user data script in the next step downloads the CodeDeploy agent installer from an Amazon S3 bucket). A policy similar to the following is sufficient:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
6. On the Step 3: Configure Instance Details page, expand Advanced Details, and in the User data
field, enter the following:
#!/bin/bash
yum -y update
yum install -y ruby
yum install -y aws-cli
cd /home/ec2-user
aws s3 cp s3://aws-codedeploy-us-east-2/latest/install . --region us-east-2
chmod +x ./install
./install auto
This code installs the CodeDeploy agent on your instance as it is created. If you prefer, you can
connect to your Linux instance using SSH and install the CodeDeploy agent manually after the
instance is created.
7. Leave the rest of the items on the Step 3: Configure Instance Details page unchanged. Choose
Next: Add Storage, leave the Step 4: Add Storage page unchanged, and then choose Next: Add
Tags.
8. On the Add Tags page, with Name displayed in the Key box, type MyCodePipelineDemo in the
Value box, and then choose Next: Configure Security Group.
Important
The Key and Value boxes are case-sensitive.
9. On the Step 6: Configure Security Group page, allow port 80 communication so you can access the public instance endpoint, and then choose Review and Launch.
10. On the Review Instance Launch page, choose Launch. When prompted for a key pair, do one of the following:
• If you already have a key pair to use with Amazon EC2 instances, select Choose an existing key
pair, and then select your key pair.
• If you have not created a key pair yet, select Create a new key pair, enter a name for the key pair,
and then choose Download Key Pair. This is your only chance to save the private key file. Be sure
to download it. Save the private key file in a safe place. You must provide the name of your key
pair when you launch an instance. You must provide the corresponding private key each time you
connect to the instance. For more information, see Amazon EC2 Key Pairs.
Warning
Don't select the Proceed without a key pair option. If you launch your instance without
a key pair, you can't connect to it if you need to troubleshoot issues with the CodeDeploy
agent.
11. When you are ready, select the acknowledgement check box, and then choose Launch Instances.
12. Choose View Instances to close the confirmation page and return to the console.
13. You can view the status of the launch on the Instances page. When you launch an instance, its initial
state is pending. After the instance starts, its state changes to running, and it receives a public
DNS name. (If the Public DNS column is not displayed, choose the Show/Hide icon, and then select
Public DNS.)
14. It can take a few minutes for the instance to be ready for you to connect to it. Check that your
instance has passed its status checks. You can view this information in the Status Checks column.
If you want to confirm that the CodeDeploy agent is configured correctly, you can connect to your Linux
instance using SSH and then verify the CodeDeploy agent is running.
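For example, on Amazon Linux you can run the following command over your SSH session to confirm that the agent is running:
sudo service codedeploy-agent status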
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter MyFirstPipeline.
Note
If you choose another name for your pipeline, be sure to use it instead of
MyFirstPipeline in the remaining steps of this tutorial. After you create a pipeline,
you cannot change its name. Pipeline names are subject to some limitations. For more
information, see Limits in AWS CodePipeline (p. 412).
4. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for this tutorial: AWSCodePipelineServiceRole-eu-west-2-MyFirstPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support additional AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
5. In Artifact store, do one of the following:
a. Choose Default location. This uses the default artifact store, such as the Amazon S3 artifact
bucket designated as the default, for your pipeline in the region you have selected for your
pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
6. In Step 2: Add source stage, in Source provider, choose AWS CodeCommit. In Repository name,
choose the name of the CodeCommit repository you created in Step 1: Create a CodeCommit
Repository and Local Repo (p. 43). In Branch name, choose the name of the branch that contains
your latest code update. Unless you created a different branch on your own, only master is
available.
After you select the repository name and branch, a message is displayed showing the Amazon
CloudWatch Events rule to be created for this pipeline.
Under Change detection options, leave the defaults. This allows CodePipeline to use Amazon
CloudWatch Events to detect changes in your source repository.
Choose Next.
7. In Step 3: Add build stage, choose Skip build stage, and then accept the warning message by
choosing Skip again. Choose Next.
Note
In this tutorial, you are deploying code that requires no build service.
8. In Step 4: Add deploy stage, in Deploy provider, choose AWS CodeDeploy. The Region
field defaults to the same AWS Region as your pipeline. In Application name, enter
MyDemoApplication, or choose the Refresh button, and then choose the application name from
the list. In Deployment group, enter MyDemoDeploymentGroup, or choose it from the list, and then choose Next.
Note
The name "Deploy" is the name given by default to the stage created in the Step 4: Add deploy stage step, just as "Source" is the name given to the first stage of the pipeline.
9. In Step 5: Review, review the information, and then choose Create pipeline.
10. The pipeline starts to run. You can view progress and success and failure messages as the
CodePipeline sample deploys the webpage to the Amazon EC2 instance in the CodeDeploy
deployment.
Congratulations! You just created a simple pipeline in CodePipeline. The pipeline has two stages:
• A source stage (Source) that detects changes in the sample application stored in the CodeCommit
repository and pulls those changes into the pipeline.
• A deployment stage (Deploy) that deploys those changes to the Amazon EC2 instance using
CodeDeploy.
1. View the initial progress of the pipeline. The status of each stage changes from No executions yet
to In Progress, and then to either Succeeded or Failed. The pipeline should complete the first run
within a few minutes.
2. After Succeeded is displayed for the pipeline status, in the status area for the Deploy stage, choose Details. This opens the CodeDeploy console.
3. Choose your application in the list. On the Deployment group tab, under Deployment lifecycle
events, choose the instance ID. This opens the EC2 console.
4. On the Description tab, in Public DNS, copy the address, and then paste it into the address bar of
your web browser.
This is the sample application you downloaded and pushed to your CodeCommit repository.
For more information about stages, actions, and how pipelines work, see CodePipeline Concepts (p. 4).
3. Revise the contents of the index.html file to change the background color and some of the text on
the webpage, and then save the file.
<!DOCTYPE html>
<html>
<head>
<title>Updated Sample Deployment</title>
<style>
body {
color: #000000;
background-color: #CCFFCC;
font-family: Arial, sans-serif;
font-size:14px;
}
h1 {
font-size: 250%;
font-weight: normal;
margin-bottom: 0;
}
h2 {
font-size: 175%;
font-weight: normal;
margin-bottom: 0;
}
</style>
</head>
<body>
<div align="center"><h1>Updated Sample Deployment</h1></div>
<div align="center"><h2>This application was updated using CodePipeline, CodeCommit,
and CodeDeploy.</h2></div>
<div align="center">
<p>Learn more:</p>
<p><a href="https://docs.aws.amazon.com/codepipeline/latest/
userguide/">CodePipeline User Guide</a></p>
<p><a href="https://docs.aws.amazon.com/codecommit/latest/userguide/">CodeCommit
User Guide</a></p>
<p><a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/">CodeDeploy
User Guide</a></p>
</div>
</body>
</html>
4. Commit and push your changes to your CodeCommit repository by running the following commands, one at a time:
git add index.html
git commit -m "Updated sample application"
git push
Your pipeline is configured to run whenever code changes are made to your CodeCommit repository.
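You can also watch each stage's status from the CLI while the pipeline runs:
aws codepipeline get-pipeline-state --name MyFirstPipeline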
1. View the initial progress of the pipeline. The status of each stage changes from No executions yet
to In Progress, and then to either Succeeded or Failed. The pipeline should complete within a few
minutes.
2. After Succeeded is displayed for the action status, in the status area for the Deploy stage, choose Details. This opens the CodeDeploy console.
3. On the Deployment group tab, under Deployment lifecycle events, choose the instance ID. This
opens the EC2 console.
4. On the Description tab, in Public DNS, copy the address, and then paste it into the address bar of
your web browser.
For more information about stages, actions, and how pipelines work, see CodePipeline Concepts (p. 4).
1. To clean up your CodePipeline resources, follow the instructions in Delete a Pipeline in AWS
CodePipeline (p. 214).
2. To clean up your CodeDeploy resources, follow the instructions in Clean Up Deployment
Walkthrough Resources.
3. To delete the CodeCommit repository, follow the instructions in Delete a CodeCommit Repository.
Before you can create this pipeline, you must configure the required resources. For example, if you want
to use a GitHub repository for your source code, you must create the repository before you can add it to
a pipeline. As part of setting up, this tutorial walks you through setting up Jenkins on an Amazon EC2
instance for demonstration purposes.
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you
need to create before you create the pipeline. AWS resources for your source actions must
always be created in the same AWS Region where you create your pipeline. For example, if you
create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the
US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
Before you begin this tutorial, you should have already completed the general prerequisites in Getting
Started with CodePipeline (p. 9).
Topics
• Step 1: Set Up Prerequisites (p. 54)
• Step 2: Create a Pipeline in CodePipeline (p. 57)
• Step 3: Add Another Stage to Your Pipeline (p. 58)
• Step 4: Clean Up Resources (p. 60)
• You are familiar with installing and administering Jenkins and creating Jenkins projects.
• You have installed Rake and the Haml gem for Ruby on the same computer or instance that
hosts your Jenkins project.
• You have set the required system environment variables so that Rake commands can be run
from the terminal or command line (for example, on Windows systems, modifying the PATH
variable to include the directory where you installed Rake).
Topics
• Copy or Clone the Sample into a GitHub Repository (p. 55)
• Create an IAM Role to Use for Jenkins Integration (p. 55)
• Install and Configure Jenkins and the CodePipeline Plugin for Jenkins (p. 56)
1. Download the sample code from the GitHub repository, or clone the repositories to your local
computer. There are two sample packages:
• If you will be deploying your sample to Amazon Linux, RHEL, or Ubuntu Server instances, choose
aws-codepipeline-jenkins-aws-codedeploy_linux.zip.
• If you will be deploying your sample to Windows Server instances, choose AWSCodePipeline-
Jenkins-AWSCodeDeploy_Windows.zip.
2. From the repository, choose Fork to fork the sample repo into a repo in your GitHub account. For more information, see the GitHub documentation.
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the IAM console, in the navigation pane, choose Roles, and then choose Create new role.
3. On the Select role type page, with AWS Service Role selected, next to Amazon EC2, choose Select.
4. On the Attach Policy page, select the AWSCodePipelineCustomActionAccess managed policy, and
then choose Next Step.
5. On the Set role name and review page, in the Role name box, type the name of the role you will
create specifically for Jenkins integration (for example JenkinsAccess), and then choose Create
role.
When you create the Amazon EC2 instance where you will install Jenkins, in Step 3: Configure Instance
Details, make sure you choose the instance role (for example, JenkinsAccess).
For more information about instance roles and Amazon EC2, see IAM Roles for Amazon EC2, Using IAM
Roles to Grant Permissions to Applications Running on Amazon EC2 Instances, and Creating a Role to
Delegate Permissions to an AWS Service.
1. Create an Amazon EC2 instance where you will install Jenkins, and in Step 3: Configure Instance
Details, make sure you choose the instance role you created (for example, JenkinsAccess). For
more information about creating Amazon EC2 instances, see Launch an Amazon EC2 Instance.
Note
If you already have Jenkins resources you want to use, you can do so, but you must create
a special IAM user, apply the AWSCodePipelineCustomActionAccess managed policy to
that user, and then configure and use the access credentials for that user on your Jenkins
resource. If you want to use the Jenkins UI to supply the credentials, configure Jenkins to
only allow HTTPS. For more information, see Troubleshooting CodePipeline (p. 345).
2. Install Jenkins on the Amazon EC2 instance. For more information, see the Jenkins documentation
for installing Jenkins and starting and accessing Jenkins, as well as details of integration with
Jenkins (p. 15) in Product and Service Integrations with CodePipeline (p. 12).
3. Launch Jenkins, and on the home page, choose Manage Jenkins.
4. On the Manage Jenkins page, choose Manage Plugins.
5. Choose the Available tab, and in the Filter search box, type AWS CodePipeline. Choose
CodePipeline Plugin for Jenkins from the list and choose Download now and install after restart.
6. On the Installing Plugins/Upgrades page, select Restart Jenkins when installation is complete
and no jobs are running.
7. Choose Back to Dashboard.
8. On the main page, choose New Item.
9. In Item Name, type a name for the Jenkins project (for example, MyDemoProject). Choose
Freestyle project, and then choose OK.
Note
Make sure the name for your project meets the requirements for CodePipeline. For more
information, see Limits in AWS CodePipeline (p. 412).
10. On the configuration page for the project, select the Execute concurrent builds if necessary check
box. In Source Code Management, choose AWS CodePipeline. If you have installed Jenkins on an
Amazon EC2 instance and configured the AWS CLI with the profile for the IAM user you created for
integration between CodePipeline and Jenkins, leave all of the other fields empty.
11. Choose Advanced, and in Provider, type a name for the provider of the action as it will appear in
CodePipeline (for example, MyJenkinsProviderName). Make sure this name is unique and easy
to remember. You will use it when you add a build action to your pipeline later in this tutorial, and
again when you add a test action.
Note
This action name must meet the naming requirements for actions in CodePipeline. For more
information, see Limits in AWS CodePipeline (p. 412).
12. In Build Triggers, clear any check boxes, and then select Poll SCM. In Schedule, type five asterisks separated by spaces (this polls CodePipeline every minute), as follows:
* * * * *
13. In Build, choose Add build step. Choose Execute shell, and then enter the following command, which builds the project with Rake:
rake
Note
Make sure your environment is configured with the variables and settings required to run
rake; otherwise, the build will fail.
14. Choose Add post-build action, and then choose AWS CodePipeline Publisher. Choose Add, and in
Build Output Locations, leave the location blank. This configuration is the default. It will create a
compressed file at the end of the build process.
15. Choose Save to save your Jenkins project.
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. If necessary, use the region selector to change the region to the same region where your pipeline
resources are located. For example, if you created resources for the previous tutorial in us-east-2,
make sure the region selector is set to US East (Ohio).
For more information about the regions and endpoints available for CodePipeline, see Regions and
Endpoints.
3. On the Welcome page, Getting started page, or the Pipelines page, choose Create pipeline.
4. On the Step 1: Choose pipeline settings page, in Pipeline name, type the name for your pipeline.
5. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for a pipeline named MyPipeline: AWSCodePipelineServiceRole-eu-west-2-MyPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support additional AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
6. In Artifact location, do one of the following:
a. Choose Default location. This will use the default artifact store, such as the Amazon S3 artifact
bucket designated as the default, for your pipeline in the region you have selected for your
pipeline.
b. Choose Custom location if you already have an existing artifact store you have created, such as
an Amazon S3 artifact bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
7. Choose Next.
8. In Step 2: Add source stage, in Source provider, choose GitHub, and then choose Connect to
GitHub. This will open a new browser window that will connect you to GitHub. If prompted to sign
in, provide your GitHub credentials.
Important
Do not provide your AWS credentials on the GitHub website.
After you have selected GitHub, a message displays advising that CodePipeline will create a
webhook in GitHub for your pipeline.
After you have connected to GitHub, choose the repository and branch where you pushed the
sample you want to use for this tutorial (aws-codepipeline-jenkins-aws-codedeploy_linux.zip or
AWSCodePipeline-Jenkins-AWSCodeDeploy_Windows.zip), and then choose Next.
Note
In GitHub, there is a limit to the number of OAuth tokens you can use for an application,
such as CodePipeline. If you exceed this limit, retry the connection to allow CodePipeline to
reconnect by reusing existing tokens. For more information, see the related entry in Troubleshooting CodePipeline (p. 348).
9. In Step 3: Add build stage, choose Add Jenkins. In Provider name, type the name of the action
you provided in the CodePipeline Plugin for Jenkins (for example MyJenkinsProviderName). This
name must exactly match the name in the CodePipeline Plugin for Jenkins. In Server URL, type the
URL of the Amazon EC2 instance where Jenkins is installed. In Project name, type the name of the
project you created in Jenkins, such as MyDemoProject, and then choose Next.
10. In Step 4: Add deploy stage, reuse the CodeDeploy application and deployment group you created
in Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26). In Deploy provider, choose
CodeDeploy. In Application name, type CodePipelineDemoApplication, or choose the
refresh button, and then choose the application name from the list. In Deployment group, type
CodePipelineDemoFleet, or choose it from the list, and then choose Next.
Note
You can use your own CodeDeploy resources or create new ones, but you might incur
additional costs.
11. In Step 5: Review, review the information, and then choose Create pipeline.
12. The pipeline automatically starts and runs the sample through the pipeline. You can view progress and success and failure messages as the pipeline builds the Haml sample to HTML and deploys it as a web page to each of the Amazon EC2 instances in the CodeDeploy deployment.
1. After Succeeded is displayed for the pipeline status, in the status area for the Staging stage, choose
Details.
2. In the Deployment Details section, in Instance ID, choose the instance ID of one of the successfully
deployed instances.
3. Copy the IP address of the instance (for example, 192.168.0.4). You will use this IP address in your
Jenkins test.
1. On the instance where you installed Jenkins, open Jenkins and from the main page, choose New
Item.
2. In Item Name, type a name for the Jenkins project (for example, MyTestProject). Choose
Freestyle project, and then choose OK.
Note
Make sure the name for your project meets the CodePipeline requirements. For more
information, see Limits in AWS CodePipeline (p. 412).
3. On the configuration page for the project, select the Execute concurrent builds if necessary check
box. In Source Code Management, choose AWS CodePipeline. If you have installed Jenkins on an
Amazon EC2 instance and configured the AWS CLI with the profile for the IAM user you created for
integration between CodePipeline and Jenkins, leave all the other fields empty.
Important
If you are configuring a Jenkins project and it is not installed on an Amazon EC2 instance,
or it is installed on an Amazon EC2 instance that is running a Windows operating system,
complete the fields as required by your proxy host and port settings, and provide
the credentials of the IAM user you configured for integration between Jenkins and
CodePipeline.
4. Choose Advanced, and in Category, choose Test.
5. In Provider, type the same name you used for the build project (for example,
MyJenkinsProviderName). You will use this name when you add the test action to your pipeline
later in this tutorial.
Note
This name must meet the CodePipeline naming requirements for actions. For more
information, see Limits in AWS CodePipeline (p. 412).
6. In Build Triggers, clear any check boxes, and then select Poll SCM. In Schedule, type five asterisks
separated by spaces, as follows:
* * * * *
If you are deploying to Windows Server instances, choose Execute batch command, and then type
the following, where the IP address is the address of the Amazon EC2 instance you copied earlier:
Note
The test assumes a default port of 80. If you want to specify a different port, add a test port
statement, as follows:
8. Choose Add post-build action, and then choose AWS CodePipeline Publisher. Do not choose Add.
9. Choose Save to save your Jenkins project.
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. In Name, choose the name of the pipeline you created, MySecondPipeline.
3. On the pipeline details page, choose Edit.
4. On the Edit page, choose + Stage to add a stage immediately after the Staging stage.
5. In the name field for the new stage, type a name (for example, Testing), and then choose + Add
action group.
6. In Action name, type MyJenkinsTest-Action. In Test provider, choose the provider name you
specified in Jenkins (for example, MyJenkinsProviderName). In Project name, type the name of
the project you created in Jenkins (for example, MyTestProject). In Input artifacts, choose the
artifact from the Jenkins build whose default name is MyBuiltApp, and then choose Save.
For more information about input and output artifacts and the structure of pipelines, see
CodePipeline Pipeline Structure Reference (p. 393).
7. On the Edit page, choose Save pipeline changes. In the Save pipeline changes dialog box, choose
Save and continue.
8. Although the new stage has been added to your pipeline, a status of No executions yet is displayed
for that stage because no changes have triggered another run of the pipeline. To run the sample
through the revised pipeline, on the pipeline details page, choose Release change.
The pipeline view shows the stages and actions in your pipeline and the state of the revision running
through those four stages. The time it takes for the pipeline to run through all stages will depend on
the size of the artifacts, the complexity of your build and test actions, and other factors.
1. Open a terminal session on your local Linux, macOS, or Unix machine, or a command prompt on your local Windows machine, and run the delete-pipeline command to delete the pipeline you created. For MySecondPipeline, you would type the following command:
aws codepipeline delete-pipeline --name MySecondPipeline
In this tutorial, you configure a notification to send an email when a pipeline's state changes to FAILED.
This tutorial uses an input transformer method when creating the CloudWatch Events rule. It transforms
the message schema details to deliver the message in human-readable text.
Topics
• Step 1: Set Up an Email Notification Using Amazon SNS (p. 61)
• Step 2: Create a Rule and Add the SNS Topic as the Target (p. 63)
• Step 3: Clean Up Resources (p. 64)
Create or identify a topic in Amazon SNS. CodePipeline will use CloudWatch Events to send notifications
to this topic through Amazon SNS. To create a topic:
For more information, see Create a Topic in the Amazon SNS Developer Guide.
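As a sketch of the CLI equivalent (the topic name here is an arbitrary example), creating the topic returns the topic ARN you need in the next procedure:
aws sns create-topic --name PipelineNotificationTopic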
Subscribe one or more recipients to the topic to receive email notifications. To subscribe a recipient to a
topic:
1. In the Amazon SNS console, from the Topics list, select the check box next to your new topic.
Choose Actions, Subscribe to topic.
2. In the Create subscription dialog box, verify that an ARN appears in Topic ARN.
3. For Protocol, choose Email.
4. For Endpoint, type the recipient's full email address.
For more information, see Subscribe to a Topic in the Amazon SNS Developer Guide.
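The CLI equivalent of the subscription looks like the following; the topic ARN and email address are placeholders:
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-2:111122223333:PipelineNotificationTopic \
  --protocol email \
  --notification-endpoint [email protected]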
{ "pipeline" : "$.detail.pipeline" }
To stop using a rule to send notifications, in the CloudWatch console, choose the rule, and then choose Actions, Disable.
To delete a rule, in the CloudWatch console, choose the rule, and then choose Actions, Delete.
For information about how to clean up the SNS notification and delete the Amazon CloudWatch Events
rule, see Clean Up (Unsubscribe from an Amazon SNS Topic) and reference DeleteRule in the Amazon
CloudWatch Events API Reference.
You can try this out using your existing Android app and test definitions, or you can use the sample app
and test definitions provided by Device Farm.
The workflow: configure pipeline resources; add build and test definitions to your package; push a package to your repository; the app build and test run automatically, producing a build output artifact; view test results.
1. Add a build specification file named buildspec.yml, with contents like the following, to the root of your repository so CodeBuild can build your app package:
version: 0.2
phases:
build:
commands:
- chmod +x ./gradlew
- ./gradlew assembleDebug
artifacts:
files:
- './android/app/build/outputs/**/*.apk'
discard-paths: yes
2. (Optional) If you use Calabash or Appium to test your app, add the test definition file to your
repository. In a later step, you can configure Device Farm to use the definitions to carry out your test
suite.
If you use Device Farm built-in tests, you can skip this step.
3. To create your pipeline and add a source stage, do the following:
a. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
b. Choose Create pipeline. On the Step 1: Choose pipeline settings page, in Pipeline name, enter
the name for your pipeline.
c. In Service role, leave New service role selected, and leave Role name unchanged. You can also
choose to use an existing service role, if you have one.
Note
If you use a CodePipeline service role that was created before July 2018, you need to
add permissions for Device Farm. To do this, open the IAM console, find the role, and
then add the following permissions to the role's policy. For more information, see Add
Permissions for Other AWS Services (p. 366).
{
    "Effect": "Allow",
    "Action": [
        "devicefarm:ListProjects",
        "devicefarm:ListDevicePools",
        "devicefarm:GetRun",
        "devicefarm:GetUpload",
        "devicefarm:CreateUpload",
        "devicefarm:ScheduleRun"
    ],
    "Resource": "*"
}
d. In Artifact store, do one of the following:
i. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket designated as the default, for your pipeline in the region you have selected for your pipeline.
ii. Choose Custom location if you already have an existing artifact store you have created,
such as an Amazon S3 artifact bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your
pipeline. A separate artifact store, such as an Amazon S3 bucket, is required for each
pipeline. When you create or edit a pipeline, you must have an artifact bucket in the
pipeline Region, and then you must have one artifact bucket per AWS Region where
you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
e. Choose Next.
f. On the Step 2: Add source stage page, in Source provider, choose GitHub, and then choose
Connect to GitHub.
g. In the browser window, choose Authorize aws-codesuite. This allows your pipeline to make
your repository a source, and to use webhooks that detect when new code is pushed to the
repository.
h. In Repository, choose the source repository.
i. In Branch, choose the branch that you want to use.
j. Choose Next.
4. In Add build stage, add a build stage:
a. In Build provider, choose AWS CodeBuild. Allow Region to default to the pipeline Region.
b. Choose Create project.
c. In Project name, enter a name for this build project.
d. In Environment image, choose Managed image. For Operating system, choose Ubuntu.
e. For Runtime, choose Standard. For Image, choose aws/codebuild/standard:1.0.
CodeBuild uses this OS image, which has Android Studio installed, to build your app.
f. For Service role, choose your existing CodeBuild service role or create a new one.
g. For Build specifications, choose Use a buildspec file.
h. Choose Continue to CodePipeline. This returns to the CodePipeline console and creates a
CodeBuild project that uses the buildspec.yml in your repository for configuration. The build
project uses a service role to manage AWS service permissions. This step might take a couple of
minutes.
i. Choose Next.
5. On the Step 4: Add deploy stage page, choose Skip deploy stage, and then accept the warning
message by choosing Skip again. Choose Next.
6. On Step 5: Review, choose Create pipeline. You should see a diagram that shows the source and
build stages.
In the AWS CodePipeline console, you can find the name of the output artifact for each stage
by hovering over the information icon in the pipeline diagram. If your pipeline tests your app
directly from the Source stage, choose SourceArtifact. If the pipeline includes a Build stage,
choose BuildArtifact.
g. In ProjectId, enter your Device Farm project ID.
h. In DevicePoolArn, enter the ARN for the device pool. (If you need to look up these values, see the CLI sketch after this procedure.)
i. In AppType, enter Android.
j. In App, enter the path of the compiled app package. The path is relative to the root of the input
artifact for the test stage. Typically, this path is similar to app-release.apk.
k. In TestType, do one of the following:
• If you're using one of the built-in Device Farm tests, enter the type of test configured in your
Device Farm project, such as BUILTIN_FUZZ. In FuzzEventCount, enter a time in milliseconds,
such as 6000. In FuzzEventThrottle, enter a time in milliseconds, such as 50.
• If you aren't using one of the built-in Device Farm tests, enter your type of test, and then in
Test, enter the path of the test definition file. The path is relative to the root of the input
artifact for your test.
l. In the remaining fields, provide the configuration that is appropriate for your test and
application type.
m. (Optional) In Advanced, provide configuration information for your test run.
n. Choose Save.
o. On the stage you are editing, choose Done. In the AWS CodePipeline pane, choose Save, and
then choose Save on the warning message.
p. To submit your changes and start a pipeline build, choose Release change, and then choose
Release.
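As referenced in the procedure above, you can look up the Device Farm project ID and device pool ARN with the CLI. The Device Farm API is available only in us-west-2, and the project ARN below is a placeholder.
aws devicefarm list-projects --region us-west-2
aws devicefarm list-device-pools --region us-west-2 \
  --arn arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE-GUID-123-456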
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you need to create before you create the pipeline. AWS resources for your source actions must always be created in the same AWS Region where you create your pipeline. For example, if you create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
You can try this out using your existing iOS app, or you can use the sample iOS app.
The workflow: configure pipeline resources; add test definitions to your package; upload the .zip to your bucket; the test runs automatically, producing an output artifact; view test results.
a. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
b. Choose Create pipeline. On the Step 1: Choose pipeline settings page, in Pipeline name, enter
the name for your pipeline.
c. In Service role, leave New service role selected, and leave Role name unchanged. You can also
choose to use an existing service role, if you have one.
Note
If you use a CodePipeline service role that was created before July 2018, you must
add permissions for Device Farm. To do this, open the IAM console, find the role, and
then add the following permissions to the role's policy. For more information, see Add
Permissions for Other AWS Services (p. 366).
{
    "Effect": "Allow",
    "Action": [
        "devicefarm:ListProjects",
        "devicefarm:ListDevicePools",
        "devicefarm:GetRun",
        "devicefarm:GetUpload",
        "devicefarm:CreateUpload",
        "devicefarm:ScheduleRun"
    ],
    "Resource": "*"
}
d. In Artifact store, do one of the following:
i. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket designated as the default, for your pipeline in the region you have selected for your pipeline.
ii. Choose Custom location if you already have an existing artifact store you have created,
such as an Amazon S3 artifact bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your
pipeline. A separate artifact store, such as an Amazon S3 bucket, is required for each
pipeline. When you create or edit a pipeline, you must have an artifact bucket in the
pipeline Region, and then you must have one artifact bucket per AWS Region where
you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
e. Choose Next.
f. On the Step 2: Add source stage page, in Source provider, choose Amazon S3.
g. In Amazon S3 location, enter the bucket and object key for your .zip file.
h. Choose Next.
4. In Build, create a placeholder build stage for your pipeline. This allows you to create the pipeline in the wizard. After you use the wizard to create your two-stage pipeline, you no longer need this placeholder build stage. After the pipeline is created, you delete this second stage and add the new test stage in step 5.
a. In Build provider, choose Add Jenkins. This build selection is a placeholder. It is not used.
b. In Provider name, enter a name. The name is a placeholder. It is not used.
c. In Server URL, enter text. The text is a placeholder. It is not used.
d. In Project name, enter a name. The name is a placeholder. It is not used.
e. Choose Next.
f. On the Step 4: Add deploy stage page, choose Skip deploy stage, and then accept the warning
message by choosing Skip again.
g. On Step 5: Review, choose Create pipeline. You should see a diagram that shows the source
and build stages.
In the AWS CodePipeline console, you can find the name of the output artifact for each stage
by hovering over the information icon in the pipeline diagram. If your pipeline tests your app
directly from the Source stage, choose SourceArtifact. If the pipeline includes a Build stage,
choose BuildArtifact.
i. In ProjectId, choose your Device Farm project ID.
j. In DevicePoolArn, enter the ARN for the device pool.
k. In AppType, enter iOS.
l. In App, enter the path of the compiled app package. The path is relative to the root of the input
artifact for the test stage. Typically, this path is similar to ios-test.ipa.
m. In TestType, do one of the following:
• If you're using one of the built-in Device Farm tests, enter the type of test configured in your
Device Farm project, such as BUILTIN_FUZZ. In FuzzEventCount, enter a time in milliseconds,
such as 6000. In FuzzEventThrottle, enter a time in milliseconds, such as 50.
• If you aren't using one of the built-in Device Farm tests, enter your type of test, and then in
Test, enter the path of the test definition file. The path is relative to the root of the input
artifact for your test.
n. In the remaining fields, provide the configuration that is appropriate for your test and
application type.
o. (Optional) In Advanced, provide configuration information for your test run.
p. Choose Save.
q. On the stage you are editing, choose Done. In the AWS CodePipeline pane, choose Save, and
then choose Save on the warning message.
r. To submit your changes and start a pipeline execution, choose Release change, and then choose
Release.
First, you create a product in AWS Service Catalog, and then you create a pipeline in AWS CodePipeline.
This tutorial provides two options for setting up the deployment configuration:
• Create a product in AWS Service Catalog and upload a template file to your source repository.
Provide product version and deployment configuration in the CodePipeline console (without a
separate configuration file). See Option 1: Deploy to AWS Service Catalog Without a Configuration
File (p. 77).
Note
The template file can be created in YAML or JSON format.
• Create a product in AWS Service Catalog and upload a template file to your source repository. Provide
product version and deployment configuration in a separate configuration file. See Option 2: Deploy to
AWS Service Catalog Using a Configuration File (p. 83).
1. Create a sample template file named S3_template.json with the following contents:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a privately accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {}
        }
    },
    "Outputs": {
        "BucketName": {
            "Value": {
                "Ref": "S3Bucket"
            },
            "Description": "Name of Amazon S3 bucket to hold website content"
        }
    }
}
This template allows AWS CloudFormation to create an Amazon S3 bucket that can be used by AWS
Service Catalog.
2. Upload the S3_template.json file to your AWS CodeCommit repository.
a. In Product name, enter the name you want to use for your new product.
b. In Description, enter the product catalog description. This description is shown in the product
listing to help the user choose the correct product.
c. In Provided by, enter the name of your IT department or administrator.
d. Choose Next.
3. (Optional) In Enter support details, enter contact information for product support, and choose
Next.
4. In Version details, complete the following:
a. Choose Upload a template file. Browse for your S3_template.json file and upload it.
b. In Version title, enter the name of the product version (for example, "devops S3 v2").
c. In Description, enter details that distinguish this version from other versions.
d. Choose Next.
5. On the Review page, verify that the information is correct, and then choose Create.
6. On the Products page, in the browser, copy the URL of your new product. This contains the product
ID. Copy and retain this product ID. You use it when you create your pipeline in CodePipeline.
Here is the URL for a product named my-product. To extract the product ID, copy the value
between the equals sign (=) and the ampersand (&). In this example, the product ID is prod-
example123456.
https://<region-URL>/servicecatalog/home?region=<region>#/admin-products?
productCreated=prod-example123456&createdProductTitle=my-product
Note
Copy the URL for your product before you navigate away from the page. Once you navigate
away from this page, you must use the CLI to obtain your product ID.
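If you do need to retrieve the ID later, the following AWS CLI command is one way to list product names and IDs (a sketch; the query assumes the default search-products-as-admin output shape):
aws servicecatalog search-products-as-admin --query "ProductViewDetails[].ProductViewSummary.[Name,ProductId]"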
After a few seconds, your product appears on the Products page. You might need to refresh your
browser to see the product in the list.
a. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
b. Choose Getting started. Choose Create pipeline, and then enter a name for your pipeline.
c. In Service role, choose New service role. This creates a service role for CodePipeline to manage
permissions to other AWS services.
Note
If you use a CodePipeline service role that was created before October 16, 2018, you
need to add permissions for AWS Service Catalog. Open the IAM console, find the role,
and then add the following permissions to the role's policy. For more information, see
Add Permissions for Other AWS Services (p. 366).
"Statement": [
{
"Effect": "Allow",
"Action": [
"servicecatalog:ListProvisioningArtifacts",
"servicecatalog:CreateProvisioningArtifact",
"servicecatalog:DescribeProvisioningArtifact",
"servicecatalog:DeleteProvisioningArtifact”,
“servicecatalog:UpdateProduct”
],
"Resource": "*"
},
{
API Version 2015-07-09
78
CodePipeline User Guide
Option 1: Deploy to AWS Service
Catalog Without a Configuration File
"Effect": "Allow",
"Action": [
"cloudformation:ValidateTemplate"
],
"Resource": "*"
}
d. In Artifact store, choose Default location. This uses the default Amazon S3 artifact bucket for
this region.
e. Choose Next.
2. To add a source stage, do the following:
c. In Product ID, paste the product ID you copied from the AWS Service Catalog console.
d. In Template file path, enter the relative path where the template file is stored.
e. In Product type, choose AWS CloudFormation Template.
f. In Product version name, enter the name of the product version you specified in AWS Service
Catalog. If you want to have the template change deployed to a new product version, enter
a product version name that has not been used for any previous product version in the same
product.
g. For Input artifact, choose the source input artifact.
h. Choose Next.
5. In Review, review your pipeline settings, and then choose Create.
6. After your pipeline runs successfully, on the deployment stage, choose Details. This opens your
product in AWS Service Catalog.
7. Under your product information, choose your version name to open the product template. View the
template deployment.
2. Commit and push your change. Your pipeline starts after you push the change. When the run of the
pipeline is complete, on the deployment stage, choose Details to open your product in AWS Service
Catalog.
3. Under your product information, choose the new version name to open the product template. View
the deployed template change.
1. Create a template file named S3_template.json with the following contents:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation Sample Template S3_Bucket: Sample template showing how to create a privately accessible S3 bucket. **WARNING** This template creates an S3 bucket. You will be billed for the AWS resources used if you create a stack from this template.",
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {}
        }
    },
    "Outputs": {
        "BucketName": {
            "Value": {
                "Ref": "S3Bucket"
            },
            "Description": "Name of Amazon S3 bucket to hold website content"
        }
    }
}
This template allows AWS CloudFormation to create an Amazon S3 bucket that can be used by AWS
Service Catalog.
2. Upload the S3_template.json file to your AWS CodeCommit repository.
1. Create a file named sample_config.json with the following contents:
{
    "SchemaVersion": "1.0",
    "ProductVersionName": "devops S3 v2",
    "ProductVersionDescription": "MyProductVersionDescription",
    "ProductType": "CLOUD_FORMATION_TEMPLATE",
    "Properties": {
        "TemplateFilePath": "/S3_template.json"
    }
}
This file creates the product version information for you each time your pipeline runs.
2. Upload the sample_config.json file to your AWS CodeCommit repository. Make sure you upload
this file to your source repository.
a. In Product name, enter the name you want to use for your new product.
b. In Description, enter the product catalog description. This description appears in the product
listing to help the user choose the correct product.
c. In Provided by, enter the name of your IT department or administrator.
d. Choose Next.
3. (Optional) In Enter support details, enter product support contact information, and then choose
Next.
4. In Version details, complete the following:
a. Choose Upload a template file. Browse for your S3_template.json file and upload it.
b. In Version title, enter the name of the product version (for example, "devops S3 v2").
c. In Description, enter details that distinguish this version from other versions.
d. Choose Next.
5. On the Review page, verify that the information is correct, and then choose Confirm and upload.
6. On the Products page, in the browser, copy the URL of your new product. This contains the product
ID. Copy and retain this product ID. You use it when you create your pipeline in CodePipeline.
Here is the URL for a product named my-product. To extract the product ID, copy the value
between the equals sign (=) and the ampersand (&). In this example, the product ID is prod-
example123456.
https://<region-URL>/servicecatalog/home?region=<region>#/admin-products?
productCreated=prod-example123456&createdProductTitle=my-product
Note
Copy the URL for your product before you navigate away from the page. Once you navigate
away from this page, you must use the CLI to obtain your product ID.
After a few seconds, your product appears on the Products page. You might need to refresh your
browser to see the product in the list.
a. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
b. Choose Getting started. Choose Create pipeline, and then enter a name for your pipeline.
c. In Service role, choose New service role. This creates a service role for CodePipeline to manage
permissions to other AWS services.
Note
If you use a CodePipeline service role that was created before October 16, 2018, you
need to add permissions for AWS Service Catalog. Open the IAM console, find the role,
and then add the following permissions to the role's policy. For more information, see
Add Permissions for Other AWS Services (p. 366).
"Statement": [
{
"Effect": "Allow",
"Action": [
"servicecatalog:ListProvisioningArtifacts",
"servicecatalog:CreateProvisioningArtifact",
"servicecatalog:DescribeProvisioningArtifact",
"servicecatalog:DeleteProvisioningArtifact”,
“servicecatalog:UpdateProduct”
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cloudformation:ValidateTemplate"
],
"Resource": "*"
}
d. In Artifact store, choose Default location. This uses the default Amazon S3 artifact bucket for
this region.
e. Choose Next.
c. In Product ID, paste the product ID you copied from the AWS Service Catalog console.
d. In Configuration file path, enter the file path of the configuration file in your repository.
e. Choose Next.
5. In Review, review your pipeline settings, and then choose Create.
6. After your pipeline runs successfully, on your deployment stage, choose Details to open your
product in AWS Service Catalog.
7. Under your product information, choose your version name to open the product template. View the
template deployment.
2. Commit and push your change. Your pipeline starts after you push the change. When the run of the
pipeline is complete, on the deployment stage, choose Details to open your product in AWS Service
Catalog.
3. Under your product information, choose the new version name to open the product template. View
the deployed template change.
Topics
• Example 1: Create an AWS CodeCommit Pipeline with AWS CloudFormation (p. 89)
• Example 2: Create an Amazon S3 Pipeline with AWS CloudFormation (p. 91)
• Example 3: Create a GitHub Pipeline with AWS CloudFormation (p. 93)
Prerequisites:
You must have created the following resources to use with the AWS CloudFormation sample template:
• You must have created a source repository. You can use the AWS CodeCommit repository you created
in Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42).
• You must have created a CodeDeploy application and deployment group. You can use the CodeDeploy
resources you created in Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42).
• Download the sample AWS CloudFormation template file for creating a pipeline. You can download
the sample template in YAML or JSON format. Unzip the file and place it on your local computer.
• Download the SampleApp_Linux.zip sample application file.
1. Unzip the files from SampleApp_Linux.zip and upload the files to your AWS CodeCommit
repository. You must upload the unzipped files to the root directory of your repository. You can
follow the instructions in Step 2: Add Sample Code to Your CodeCommit Repository (p. 43) to
push the files to your repository.
2. Open the AWS CloudFormation console and choose Create Stack.
3. In Choose a template, choose Upload a template to Amazon S3. Choose Browse and then select
the template file from your local computer. Choose Next.
4. In Stack name, enter a name for your pipeline. Parameters specified by the sample template are
displayed. Enter the following parameters:
5. Choose Next. Accept the defaults on the following page, and then choose Next.
6. In Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources, and
then choose Create.
7. After your stack creation is complete, view the event list to check for any errors.
Troubleshooting
The IAM user who is creating the pipeline in AWS CloudFormation might require additional
permissions to create resources for the pipeline. The following permissions are required in the IAM
user's policy to allow AWS CloudFormation to create the required Amazon CloudWatch Events
resources for the CodeCommit pipeline:
{
    "Effect": "Allow",
    "Action": [
        "events:PutRule",
        "events:PutEvents",
        "events:PutTargets",
        "events:DeleteRule",
        "events:RemoveTargets",
        "events:DescribeRule"
    ],
    "Resource": "*"
}
8. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
Under Pipelines, choose your pipeline and choose View. The diagram shows your pipeline source
and deployment stages.
9. In your source repository, commit and push a change. Your change-detection resources pick up the
change, and your pipeline starts.
Prerequisites:
You must have created the following resources to use with the AWS CloudFormation sample template:
• You must have created a CodeDeploy application and deployment group. You can use the CodeDeploy
resources you created in Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42).
• Download the sample AWS CloudFormation template files for creating a pipeline with an Amazon S3
source:
• Download the sample template for your pipeline in YAML or JSON format.
• Download the sample template for your CloudTrail bucket and trail in YAML and JSON.
• Unzip the files and place them on your local computer.
• Download the sample application based on the instances you created in your deployment group:
• If you want to use CodeDeploy to deploy to Amazon Linux instances, use the sample available
from https://github.com/awslabs/aws-codepipeline-s3-aws-codedeploy_linux.
• If you want to use CodeDeploy to deploy to Windows Server instances, use the sample available
from https://github.com/awslabs/AWSCodePipeline-S3-AWSCodeDeploy_Windows.
Save the zip file on your local computer. You upload the zip file after the stack is created.
Troubleshooting
The IAM user who is creating the pipeline in AWS CloudFormation might require additional
permissions to create resources for the pipeline. The following permissions are required in the IAM
user's policy to allow AWS CloudFormation to create the required Amazon CloudWatch Events
resources for the Amazon S3 pipeline:
{
    "Effect": "Allow",
    "Action": [
        "events:PutRule",
        "events:PutEvents",
        "events:PutTargets",
        "events:DeleteRule",
        "events:RemoveTargets",
        "events:DescribeRule"
    ],
    "Resource": "*"
}
7. In the Amazon S3 console for your bucket, choose Upload, and follow the instructions to upload
your .zip file.
8. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
Under Pipelines, choose your pipeline, and then choose View. The diagram shows your pipeline
source and deployment stages.
9. Complete the steps in the following procedure to create your AWS CloudTrail resources.
1. Open the AWS CloudFormation console, and then choose Create Stack.
2. In Choose a template, choose Upload a template to Amazon S3. Choose Browse, and then select
the template file for the AWS CloudTrail resources from your local computer. Choose Next.
3. In Stack name, enter a name for your resource stack. Parameters specified by the sample template
are displayed. Enter the following parameters:
• In SourceObjectKey, accept the default for the sample application's zip file.
4. Choose Next. Accept the defaults on the following page, and then choose Next.
5. In Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources, and
then choose Create.
6. After your stack creation is complete, view the event list to check for any errors.
The following permissions are required in the IAM user's policy to allow AWS CloudFormation to
create the required CloudTrail resources for the Amazon S3 pipeline:
{
    "Effect": "Allow",
    "Action": [
        "cloudtrail:CreateTrail",
        "cloudtrail:DeleteTrail",
        "cloudtrail:StartLogging",
        "cloudtrail:StopLogging",
        "cloudtrail:PutEventSelectors"
    ],
    "Resource": "*"
}
7. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
Under Pipelines, choose your pipeline, and then choose View. The diagram shows your pipeline
source and deployment stages.
8. In your source bucket, commit and push a change. Your change-detection resources pick up the
change and your pipeline starts.
Prerequisites:
You must have created the following resources to use with the AWS CloudFormation sample template:
• A CodeDeploy application and deployment group. You can use the CodeDeploy resources you created
in Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42).
• Download the sample AWS CloudFormation template file for creating a pipeline. You can download
the sample template in YAML or JSON format. Unzip the file and place it on your local computer.
• Download the SampleApp_Linux.zip sample application file.
• The GitHub repository and branch you want to use for your source.
• A personal access token for your GitHub repository. This token is used as an OAuth token for the
connection to your repository.
1. Unzip the files from SampleApp_Linux.zip and upload the files to your GitHub repository. You must
upload the unzipped files to the root directory of your repository.
2. Open the AWS CloudFormation console and choose Create Stack.
3. In Choose a template, choose Upload a template to Amazon S3. Choose Browse, and then select
the template file from your local computer. Choose Next.
4. In Stack name, enter a name for your pipeline. Parameters specified by the sample template are
displayed. Enter the following parameters:
5. Choose Next. Accept the defaults on the following page, and then choose Next.
6. In Capabilities, select I acknowledge that AWS CloudFormation might create IAM resources, and
then choose Create.
7. After your stack creation is complete, view the event list to check for any errors.
8. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
Under Pipelines, choose your pipeline, and then choose View. The diagram shows your pipeline
source and deployment stages.
9. In your source repository, commit and push a change. Your change-detection resources pick up the
change and your pipeline starts.
The completed pipeline detects changes to your image, which is stored in the Amazon ECR image
repository, and uses CodeDeploy to route and deploy traffic to an Amazon ECS cluster and load balancer.
CodeDeploy uses a listener to reroute traffic to the port of the updated container specified in the
AppSpec file. The pipeline is also configured to use a CodeCommit source location where your Amazon
ECS task definition is stored. In this tutorial, you configure each of these AWS resources and then create
your pipeline with stages that contain actions for each resource.
Your continuous delivery pipeline will automatically build and deploy container images whenever source
code is changed or a new base image is uploaded to Amazon ECR.
• A Docker image file that specifies the container name and repository URI of your Amazon ECR image
repository.
• An Amazon ECS task definition that lists your Docker image name, container name, Amazon ECS
service name, and load balancer configuration.
• A CodeDeploy AppSpec file that specifies the name of the Amazon ECS task definition file, the name
of the updated application's container, and the container port where CodeDeploy reroutes production
traffic. It can also specify optional network configuration and Lambda functions you can run during
deployment lifecycle event hooks.
Note
When you commit a change to your Amazon ECR image repository, the pipeline source
action creates an imageDetail.json file for that commit. For information about the
imageDetail.json file, see imageDetail.json File for Amazon ECS Blue/Green Deployment
Actions (p. 410).
When you create or edit your pipeline and update or specify source artifacts for your deployment stage,
make sure to point to the source artifacts with the latest name and version you want to use. After you set
up your pipeline, as you make changes to your image or task definition, you might need to update your
source artifact files in your repositories and then edit the deployment stage in your pipeline.
Topics
• Prerequisites (p. 96)
• Step 1: Create Image and Push to Amazon ECR Repository (p. 96)
• Step 2: Create Task Definition and AppSpec Source Files and Push to CodeCommit
Repository (p. 97)
• Step 3: Create Your Application Load Balancer and Target Groups (p. 100)
• Step 4: Create Your Amazon ECS Cluster and Service (p. 101)
• Step 5: Create Your CodeDeploy Application and Deployment Group (ECS Compute
Platform) (p. 103)
• Step 6: Create Your Pipeline (p. 103)
• Step 7: Make a Change to Your Pipeline and Verify Deployment (p. 108)
Prerequisites
You must have already created the following resources:
• A CodeCommit repository. You can use the AWS CodeCommit repository you created in Tutorial: Create
a Simple Pipeline (CodeCommit Repository) (p. 42).
• Launch an Amazon EC2 Linux instance and install Docker to create an image as shown in this tutorial. If
you already have an image you want to use, you can skip this prerequisite.
To create an image
1. Pull down an image for nginx. This command pulls the nginx:latest image from Docker Hub:
docker pull nginx
2. Run docker images. You should see the image in the list.
docker images
1. Create an Amazon ECR repository to store your image. Make a note of the repositoryUri in the
output.
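For example, the following AWS CLI command creates a repository whose name matches the sample
output below (adjust the repository name and Region for your environment):
aws ecr create-repository --repository-name nginx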
Output:
{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "nginx",
        "repositoryArn": "arn:aws:ecr:us-east-1:aws_account_id:repository/nginx",
        "createdAt": 1505337806.0,
        "repositoryUri": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/nginx"
    }
}
2. Tag the image with the repositoryUri value from the previous step.
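For example, using the sample repositoryUri from the output above (substitute your own account
ID and Region):
docker tag nginx:latest aws_account_id.dkr.ecr.us-east-1.amazonaws.com/nginx:latest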
3. Run the aws ecr get-login --no-include-email command in parentheses to get the docker login
authentication command string for your registry.
The authorization token command is run in memory (not exposed in the output) and completes the
login.
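For example, wrapping the command in $( ) executes the returned docker login command directly:
$(aws ecr get-login --no-include-email)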
4. Push the image to Amazon ECR using the repositoryUri from the earlier step.
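For example, again using the sample repositoryUri (substitute your own):
docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/nginx:latest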
1. Create a file named taskdef.json with the following contents. For image, enter your image name,
such as nginx. This value is updated when your pipeline runs.
Note
Make sure that the execution role specified in the task definition contains the
AmazonECSTaskExecutionRolePolicy. For more information, see Amazon ECS Task Execution
IAM Role.
{
    "executionRoleArn": "arn:aws:iam::account_ID:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "sample-website",
            "image": "nginx",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": 80,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ]
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "family": "ecs-demo"
}
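2. Register your task definition with the taskdef.json file. A sketch of this step using the
standard AWS CLI command (run from the directory that contains the file):
aws ecs register-task-definition --cli-input-json file://taskdef.json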
3. After the task definition is registered, edit your file to remove the image name and include the
<IMAGE1_NAME> placeholder text in the image field.
{
    "executionRoleArn": "arn:aws:iam::account_ID:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "sample-website",
            "image": "<IMAGE1_NAME>",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": 80,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ]
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "family": "ecs-demo"
}
• The AppSpec file is used for CodeDeploy deployments. The file, which includes optional fields, uses
this format:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "task-definition-ARN"
        LoadBalancerInfo:
          ContainerName: "container-name"
          ContainerPort: container-port-number
        # Optional properties
        PlatformVersion: "LATEST"
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets: ["subnet-name-1", "subnet-name-2"]
            SecurityGroups: ["security-group"]
            AssignPublicIp: "ENABLED"
Hooks:
  - BeforeInstall: "BeforeInstallHookFunctionName"
  - AfterInstall: "AfterInstallHookFunctionName"
  - AfterAllowTestTraffic: "AfterAllowTestTrafficHookFunctionName"
  - BeforeAllowTraffic: "BeforeAllowTrafficHookFunctionName"
  - AfterAllowTraffic: "AfterAllowTrafficHookFunctionName"
For more information about the AppSpec file, including examples, see CodeDeploy AppSpec File
Reference.
Create a file named appspec.yaml with the following contents. For TaskDefinition, do not
change the <TASK_DEFINITION> placeholder text. This value is updated when your pipeline runs.
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "sample-website"
          ContainerPort: 80
1. Push or upload the files to your CodeCommit repository. These files are the source artifact created
by the Create pipeline wizard for your deployment action in CodePipeline. Your files should look like
this in your local directory:
/tmp
|my-demo-repo
|-- appspec.yaml
|-- taskdef.json
a. To use your git command line from a cloned repository on your local computer:
ii. Run the following command to stage all of your files at once:
git add -A
iii. Run the following command to commit the files with a commit message:
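For example (the commit message here is illustrative):
git commit -m "Add task definition and AppSpec files"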
iv. Run the following command to push the files from your local repo to your CodeCommit
repository:
git push
i. Open the CodeCommit console, and choose your repository from the Repositories list.
Step 3: Create Your Application Load Balancer and Target Groups
1. Sign in to the AWS Management Console and open the Amazon VPC console at https://
console.aws.amazon.com/vpc/.
2. Verify the default VPC to use. In the navigation pane, choose Your VPCs. Note which VPC shows Yes
in the Default VPC column. This is the default VPC. It contains default subnets for you to select.
3. Choose Subnets. Choose two subnets that show Yes in the Default subnet column.
Note
Make note of your subnet IDs for use later in this tutorial.
4. Choose the subnets, and then choose the Description tab. Verify that the subnets you want to use
are in different Availability Zones.
5. Choose the subnets, and then choose the Route Table tab. To verify that each subnet you want to
use is a public subnet, confirm that its route table includes a route to an internet gateway.
1. Sign in to the AWS Management Console and open the Amazon EC2 console at https://
console.aws.amazon.com/ec2/.
2. In the navigation pane, choose Load Balancers.
3. Choose Create Load Balancer.
4. Choose Application Load Balancer, and then choose Create.
5. In Name, enter the name of your load balancer.
6. In Scheme, choose internet-facing.
7. In IP address type, choose ipv4.
8. Configure two listener ports for your load balancer:
a. Under Load Balancer Protocol, choose HTTP. Under Load Balancer Port, enter 80.
b. Choose Add listener.
c. Under Load Balancer Protocol for the second listener, choose HTTP. Under Load Balancer
Port, enter 8080.
9. Under Availability Zones, in VPC, choose the default VPC. Next, choose the two default subnets you
want to use.
10. Choose Next: Configure Security Settings.
11. Choose Next: Configure Security Groups.
12. Choose Select an existing security group, and make a note of the security group ID.
13. Choose Next: Configure Routing.
14. In Target group, choose New target group and configure your first target group:
1. After your load balancer is provisioned, open the Amazon EC2 console. In the navigation pane,
choose Target Groups.
2. Choose Create target group.
3. In Name, enter a target group name (for example, target-group-2).
4. In Target type, choose IP.
5. In Protocol choose HTTP. In Port, enter 8080.
6. In VPC, choose the default VPC.
7. Choose Create.
Note
You must have two target groups created for your load balancer in order for your
deployment to run. You only need to make a note of the ARN of your first target group. This
ARN is used in the create-service JSON file in the next step.
1. Open the Amazon EC2 console. In the navigation pane, choose Load Balancers.
2. Choose your load balancer, and then choose the Listeners tab. Choose the listener with port 8080,
and then choose Edit.
3. Choose the pencil icon next to Forward to. Choose your second target group, and then choose the
check mark. Choose Update to save the updates.
1. Create a JSON file and name it create-service.json. Paste the following into the JSON file.
For the taskDefinition field, when you register a task definition in Amazon ECS, you give it a
family. This is similar to a name for multiple versions of the task definition, specified with a revision
number. In this example, use "ecs-demo:1" for the family and revision number in your file. Use the
subnet names, security group, and target group value you created with your load balancer in Step 3:
Create Your Application Load Balancer and Target Groups (p. 100).
Note
You need to include your target group ARN in this file. Open the Amazon EC2 console and
from the navigation pane, under LOAD BALANCING, choose Target Groups. Choose your
first target group. Copy your ARN from the Description tab.
{
    "taskDefinition": "family:revision-number",
    "cluster": "my-cluster",
    "loadBalancers": [
        {
            "targetGroupArn": "target-group-arn",
            "containerName": "sample-website",
            "containerPort": 80
        }
    ],
    "desiredCount": 1,
    "launchType": "FARGATE",
    "schedulingStrategy": "REPLICA",
    "deploymentController": {
        "type": "CODE_DEPLOY"
    },
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": [
                "subnet-1",
                "subnet-2"
            ],
            "securityGroups": [
                "security-group"
            ],
            "assignPublicIp": "ENABLED"
        }
    }
}
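You can then create the service from this file. A sketch of the call, assuming the AWS CLI and a
hypothetical service name of my-service:
aws ecs create-service --service-name my-service --cli-input-json file://create-service.json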
1. On your application page's Deployment groups tab, choose Create deployment group.
2. In Deployment group name, enter a name that describes the deployment group.
3. In Service role, choose a service role that grants CodeDeploy access to Amazon ECS.
4. In Environment configuration, choose your Amazon ECS cluster name and service name.
5. From Load balancers, choose the name of the load balancer that serves traffic to your Amazon ECS
service.
6. From Production listener port, choose the port and protocol for the listener that serves production
traffic to your Amazon ECS service.
7. From Target group 1 name and Target group 2 name, choose the target groups used to route
traffic during your deployment. Make sure that these are the target groups you created for your load
balancer.
8. Choose Reroute traffic immediately to reroute traffic to your updated Amazon ECS task as soon
as the deployment succeeds.
9. Choose Create deployment group.
• A source stage with a CodeCommit action where the source artifacts are the task definition and the
AppSpec file.
• A source stage with an Amazon ECR source action where the source artifact is the image file.
• A deployment stage with an Amazon ECS deploy action where the deployment runs with a CodeDeploy
application and deployment group.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter MyImagePipeline.
4. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for this tutorial: AWSCodePipelineServiceRole-eu-west-2-MyImagePipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support additional AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
5. In Artifact store:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the region you have selected for your pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
6. In Step 2: Add source stage, in Source provider, choose AWS CodeCommit. In Repository name,
choose the name of the CodeCommit repository you created in Step 1: Create a CodeCommit
Repository and Local Repo (p. 43). In Branch name, choose the name of the branch that contains
your latest code update. Unless you created a different branch on your own, only master is
available.
Choose Next.
7. In Step 3: Add build stage, choose Skip build stage, and then accept the warning message by
choosing Skip again. Choose Next.
8. In Step 4: Add deploy stage:
a. In Deploy provider, choose Amazon ECS (Blue/Green). In Application name, enter or choose
the application name from the list. In Deployment group, enter or choose the deployment
group name from the list.
Note
The name "Deploy" is the name given by default to the stage created in the Step 4:
Deploy step, just as "Source" is the name given to the first stage of the pipeline.
b. Under Amazon ECS task definition, choose SourceArtifact.
c. Under AWS CodeDeploy AppSpec file, choose SourceArtifact.
Note
At this point, do not fill in any information under Dynamically update task definition
image.
d. Choose Next.
9. In Step 5: Review, review the information, and then choose Create pipeline.
View your pipeline and add an Amazon ECR source action to your pipeline.
9. Choose Save on the action screen. Choose Done on the stage screen. Choose Save on the pipeline.
A message shows the Amazon CloudWatch Events rule to be created for the Amazon ECR source
action.
1. Choose Edit on your Deploy stage and choose the icon to edit the Amazon ECS (Blue/Green) action.
2. Scroll to the bottom of the pane. In Input artifacts, choose Add. Add the source artifact from your
new Amazon ECR repository (for example, MyImage).
3. In Task Definition, choose SourceArtifact, and then enter taskdef.json.
4. In AWS CodeDeploy AppSpec File, choose SourceArtifact and enter appspec.yaml.
5. In Dynamically update task definition image, in Input Artifact with Image URI, choose MyImage,
and then enter the placeholder text that is used in the taskdef.json file: "IMAGE1_NAME". Choose
Save.
6. In the AWS CodePipeline pane, choose Save pipeline change, and then choose Save change. View
your updated pipeline.
7. To submit your changes and start a pipeline build, choose Release change, and then choose Release.
The pipeline includes a third build stage with a CodeBuild action and an action in the Deploy stage
for AWS CloudFormation.
Prerequisites
You must already have the following:
• A CodeCommit repository. You can use the AWS CodeCommit repository you created in Tutorial: Create
a Simple Pipeline (CodeCommit Repository) (p. 42).
• An Amazon developer account. This is the account that owns your Alexa skills. You can create an
account for free at Alexa Skills Kit.
• An Alexa skill. You can create a sample skill using the Get Custom Skill Sample Code tutorial.
• Install the ASK CLI and configure it using ask init with your AWS credentials. See
https://developer.amazon.com/docs/smapi/quick-start-alexa-skills-kit-command-line-interface.html#install-initialize.
• A skill.json file.
• An interactionModel folder.
1. Retrieve your skill ID from the Alexa Skills Kit developer console. Use this command:
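A sketch of the command, assuming ASK CLI v1:
ask api list-skills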
Locate your skill by name and then copy the associated ID in the skillId field.
2. Generate a skill.json file that contains your skill details. Use this command:
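A sketch of the command, assuming ASK CLI v1 (replace skill-ID with the ID you copied):
ask api get-skill -s skill-ID > skill.json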
Use this command to generate the interaction model file within the folder. For locale, this tutorial
uses en-US as the locale in the filename.
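A sketch of the command, assuming ASK CLI v1 (writes the en-US model file inside the
interactionModel folder):
ask api get-model -s skill-ID -l en-US > ./interactionModel/en-US.json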
1. Push or upload the files to your CodeCommit repository. These files are the source artifact created
by the Create Pipeline wizard for your deployment action in AWS CodePipeline. Your files should
look like this in your local directory:
skill.json
/interactionModel
|en-US.json
a. To use the Git command line from a cloned repository on your local computer:
i. Run the following command to stage all of your files at once:
git add -A
ii. Run the following command to commit the files with a commit message:
iii. Run the following command to push the files from your local repo to your CodeCommit
repository:
git push
i. Open the CodeCommit console, and choose your repository from the Repositories list.
ii. Choose Add file, and then choose Upload file.
iii. Choose Choose file, and then browse for your file. Commit the change by entering your
user name and email address. Choose Commit changes.
iv. Repeat this step for each file you want to upload.
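1. Generate a refresh token with the ASK CLI. A sketch of the command, assuming ASK CLI v1
(verify against your installed version):
ask util generate-lwa-tokens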
2. When prompted, enter your client ID and secret as shown in this example:
3. The sign-in page opens in your browser. Sign in with your Amazon developer account credentials.
4. Return to the command line screen. The access token and refresh token are generated in the
output. Copy the refresh token returned in the output.
• A source stage with a CodeCommit action where the source artifacts are the Alexa skill files that
support your skill.
• A deployment stage with an Alexa Skills Kit deploy action.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
2. Choose the AWS Region where you want to create the project and its resources. The Alexa skill
runtime is available only in the following regions:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for this tutorial: AWSCodePipelineServiceRole-eu-west-2-MyAlexaPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support other AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
6. In Artifact store:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the region you have selected for your pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
7. In Step 2: Add source stage, in Source provider, choose AWS CodeCommit. In Repository name,
choose the name of the CodeCommit repository you created in Step 1: Create a CodeCommit
Repository and Local Repo (p. 43). In Branch name, choose the name of the branch that contains
your latest code update. Unless you created a different branch on your own, only master is
available.
After you select the repository name and branch, a message shows the Amazon CloudWatch Events
rule to be created for this pipeline.
Choose Next.
8. In Step 3: Add build stage, choose Skip build stage, and then accept the warning message by
choosing Skip again.
Choose Next.
9. In Step 4: Add deploy stage:
f. Choose Next.
10. In Step 5: Review, review the information, and then choose Create pipeline.
• Create a pipeline that deploys a static website to your Amazon S3 public bucket. This example creates
a pipeline with an AWS CodeCommit source action and an Amazon S3 deployment action. See Option
1: Deploy Static Website Files to Amazon S3 (p. 115).
• Create a pipeline that compiles sample TypeScript code into JavaScript and deploys the CodeBuild
output artifact to your Amazon S3 bucket for archive. This example creates a pipeline with an Amazon
S3 source action, a CodeBuild build action, and an Amazon S3 deployment action. See Option 2:
Deploy Built Archive Files to Amazon S3 from an Amazon S3 Source Bucket (p. 120).
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you
need to create before you create the pipeline. AWS resources for your source actions must
always be created in the same AWS Region where you create your pipeline. For example, if you
create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the
US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
Prerequisites
You must already have the following:
• A CodeCommit repository. You can use the AWS CodeCommit repository you created in Tutorial: Create
a Simple Pipeline (CodeCommit Repository) (p. 42).
• Source files for your static website. Download the sample static website file, sample-website.zip.
The download produces the following files:
• An index.html file
• A main.css file
• A graphic.jpg file
• An Amazon S3 bucket configured for website hosting. See Hosting a Static Website on Amazon S3.
Make sure you create your bucket in the same region as the pipeline.
Note
To host a website, your bucket must have public read access, which gives everyone read
access. With the exception of website hosting, you should keep the default access settings
that block public access to Amazon S3 buckets.
1. Extract the downloaded sample files. Do not upload the ZIP file to your repository.
2. Push or upload the files to your CodeCommit repository. These files are the source artifact created
by the Create Pipeline wizard for your deployment action in CodePipeline. Your files should look like
this in your local directory:
index.html
main.css
graphic.jpg
3. You can use Git or the CodeCommit console to upload your files:
a. To use the Git command line from a cloned repository on your local computer:
i. Run the following command to stage all of your files at once:
git add -A
ii. Run the following command to commit the files with a commit message:
iii. Run the following command to push the files from your local repo to your CodeCommit
repository:
git push
i. Open the CodeCommit console, and choose your repository from the Repositories list.
ii. Choose Add file, and then choose Upload file.
iii. Select Choose file, and then browse for your file. Commit the change by entering your user
name and email address. Choose Commit changes.
iv. Repeat this step for each file you want to upload.
• A source stage with a CodeCommit action where the source artifacts are the files for your website.
• A deployment stage with an Amazon S3 deployment action.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or Pipelines page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter MyS3DeployPipeline.
4. In Service role, do one of the following:
Note
Depending on when your service role was created, you might need to update its permissions
to support other AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
5. In Artifact store:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the region you have selected for your pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
6. In Step 2: Add source stage, in Source provider, choose AWS CodeCommit. In Repository name,
choose the name of the CodeCommit repository you created in Step 1: Create a CodeCommit
Repository and Local Repo (p. 43). In Branch name, choose the name of the branch that contains
your latest code update. Unless you created a different branch on your own, only master is
available.
After you select the repository name and branch, the Amazon CloudWatch Events rule to be created
for this pipeline is displayed.
Choose Next.
7. In Step 3: Add build stage, choose Skip build stage, and then accept the warning message by
choosing Skip again.
Choose Next.
8. In Step 4: Add deploy stage:
When Extract file before deploy is selected, Deployment path is displayed. Enter the name of
the path you want to use. This creates a folder structure in Amazon S3 to which the files are
extracted. For this tutorial, leave this field blank.
d. (Optional) In Canned ACL, you can apply a set of predefined grants, known as a canned ACL, to
the uploaded artifacts.
e. (Optional) In Cache control, enter the caching parameters. You can set this to control caching
behavior for requests/responses. For valid values, see the Cache-Control header field for
HTTP operations.
f. Choose Next.
9. In Step 5: Review, review the information, and then choose Create pipeline.
10. After your pipeline runs successfully, open the Amazon S3 console and verify that your files appear
in your public bucket as shown:
index.html
main.css
graphic.jpg
11. Access your endpoint to test the website. Your endpoint follows this format: http://bucket-
name.s3-website-region.amazonaws.com/.
Prerequisites
You must already have the following:
• An Amazon S3 source bucket. You can use the bucket you created in Tutorial: Create a Simple Pipeline
(Amazon S3 Bucket) (p. 26).
• An Amazon S3 target bucket. See Hosting a Static Website on Amazon S3. Make sure you create your
bucket in the same AWS Region as the pipeline you want to create.
Note
This example demonstrates deploying files to a private bucket. Do not enable your target
bucket for website hosting or attach any policies that make the bucket public.
• Create a file named buildspec.yml with the following contents. These build commands install
TypeScript and use the TypeScript compiler to rewrite the code in index.ts to JavaScript code.
version: 0.2
phases:
  install:
    commands:
      - npm install -g typescript
  build:
    commands:
      - tsc index.ts
artifacts:
  files:
    - index.js
• Create a file named index.ts. The following contents are a reconstruction consistent with the
compiled index.js output shown later in this tutorial; the greeting text is illustrative:
interface Greeting {
    message: string;
}
class HelloGreeting implements Greeting {
    message = "Hello, world!"; // illustrative greeting text
}
function greet(greeting: Greeting) {
    console.log(greeting.message);
}
const greeting = new HelloGreeting();
greet(greeting);
• Compress the buildspec.yml and index.ts files into a file named source.zip and upload it to your
Amazon S3 source bucket. Your files should look like this:
buildspec.yml
index.ts
source.zip
• A source stage with an Amazon S3 action where the source artifacts are the files for your
downloadable application.
• A deployment stage with an Amazon S3 deployment action.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or Pipelines page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter MyS3DeployPipeline.
4. In Service role, do one of the following:
Note
Depending on when your service role was created, you might need to update its permissions
to support other AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
5. In Artifact store:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the region you have selected for your pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
Choose Next.
6. In Step 2: Add source stage, in Source provider, choose Amazon S3. In Bucket, choose the name of
your source bucket. In S3 object key, enter the name of your source ZIP file.
Choose Next.
7. In Step 3: Add build stage:
When Extract file before deploy is cleared, S3 object key is displayed. Enter the name of the
path you want to use and the file name for your output file as follows: js-application/
{datetime}.zip.
This creates a js-application folder in Amazon S3 to which the files are extracted. In this
folder, the {datetime} variable creates a timestamp on each output file when your pipeline
runs.
d. (Optional) In Canned ACL, you can apply a set of predefined grants, known as a canned ACL, to
the uploaded artifacts.
e. (Optional) In Cache control, enter the caching parameters. You can set this to control caching
behavior for requests/responses. For valid values, see the Cache-Control header field for
HTTP operations.
f. Choose Next.
9. In Step 5: Review, review the information, and then choose Create pipeline.
10. After your pipeline runs successfully, view your bucket in the Amazon S3 console. Verify that
your deployed ZIP file is displayed in your target bucket under the js-application folder. The
JavaScript file contained in the ZIP file should be index.js. The index.js file contains output
similar to the following (reconstructed here to match the TypeScript sample; the greeting text is
illustrative):
var HelloGreeting = /** @class */ (function () {
    function HelloGreeting() {
        this.message = "Hello, world!";
    }
    return HelloGreeting;
}());
function greet(greeting) {
    console.log(greeting.message);
}
var greeting = new HelloGreeting();
greet(greeting);
This tutorial shows how to create and configure a pipeline to build your serverless application that is
hosted in GitHub and publish it to the AWS Serverless Application Repository automatically. The pipeline
uses GitHub as the source provider and CodeBuild as the build provider. To publish your serverless
application to the AWS Serverless Application Repository, you deploy an application (from the AWS
Serverless Application Repository) and associate the Lambda function created by that application as an
Invoke action provider in your pipeline. Then you can continuously deliver application updates to the
AWS Serverless Application Repository, without writing any code.
Important
Many of the actions you add to your pipeline in this procedure involve AWS resources that you
need to create before you create the pipeline. AWS resources for your source actions must
always be created in the same AWS Region where you create your pipeline. For example, if you
create your pipeline in the US East (Ohio) Region, your CodeCommit repository must be in the
US East (Ohio) Region.
You can add cross-region actions when you create your pipeline. AWS resources for cross-
region actions must be in the same AWS Region where you plan to execute the action.
For more information about cross-region actions, see Add a Cross-Region Action in
CodePipeline (p. 322).
• You are familiar with AWS Serverless Application Model (AWS SAM) and the AWS Serverless
Application Repository.
• You have a serverless application hosted in GitHub that you have published to the AWS Serverless
Application Repository using the AWS SAM CLI. To publish an example application to the AWS
Serverless Application Repository, see Quick Start: Publishing Applications in the AWS Serverless
Application Repository Developer Guide. To publish your own application to the AWS Serverless
Application Repository, see Publishing Applications Using the AWS SAM CLI in the AWS Serverless
Application Model Developer Guide.
version: 0.2
phases:
  build:
    commands:
      - pip install --upgrade pip
      - pip install pipenv --user
      - pipenv install awscli aws-sam-cli
      - pipenv run sam package --template-file template.yml --s3-bucket bucketname --output-template-file packaged-template.yml
artifacts:
  files:
    - packaged-template.yml
1. Sign in to the AWS Management Console and open the CodePipeline console at https://
console.aws.amazon.com/codepipeline/.
2. If necessary, switch to the AWS Region where you want to publish your serverless application.
3. Choose Create pipeline. On the Choose pipeline settings page, in Pipeline name, enter the name
for your pipeline.
4. In Service role, leave New service role selected, and leave Role name unchanged.
5. In Artifact store, choose Default location. The default artifact store, such as the Amazon S3 artifact
bucket designated as the default, is used for your pipeline in the AWS Region you have selected.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
6. Choose Next.
7. On the Add source stage page, in Source provider, choose GitHub, and then choose Connect to
GitHub.
8. In the browser window, choose Authorize aws-codesuite. This allows your pipeline to make your
repository a source, and to use webhooks that detect when new code is pushed to the repository.
9. In Repository, choose your GitHub source repository.
10. In Branch, choose your GitHub branch.
11. Choose Next.
12. On the Add build stage page, add a build stage:
a. In Build provider, choose AWS CodeBuild. For Region, use the pipeline Region.
b. Choose Create project.
c. In Project name, enter a name for this build project.
d. In Environment image, choose Managed image. For Operating system, choose Ubuntu.
e. For Runtime and Runtime version, choose the runtime and version required for your serverless
application.
f. For Service role, choose New service role.
g. For Build specifications, choose Use a buildspec file.
h. Choose Continue to CodePipeline. This opens the CodePipeline console and creates a
CodeBuild project that uses the buildspec.yml in your repository for configuration. The build
project uses a service role to manage AWS service permissions. This step might take a couple of
minutes.
i. Choose Next.
13. On the Add deploy stage page, choose Skip deploy stage, and then accept the warning message by
choosing Skip again. Choose Next.
14. Choose Create pipeline. You should see a diagram that shows the source and build stages.
15. Grant the CodeBuild service role permission to access the Amazon S3 bucket where your packaged
application is stored.
{
    "Effect": "Allow",
    "Resource": [
        "arn:aws:s3:::bucketname/*"
    ],
    "Action": [
        "s3:PutObject"
    ]
}
After the pipeline run is complete, check that your application has been updated with your change in
the AWS Serverless Application Repository.
A simple business use case for CodePipeline can help you understand ways you might implement the
service and control user access. The use cases are described in general terms. They do not prescribe the
APIs to use to achieve the results you want.
Topics
• Best Practices (p. 130)
• Use Cases for CodePipeline (p. 131)
Best Practices
Use the best practices outlined in these sections when using CodePipeline.
• If you create a pipeline that uses an Amazon S3 source bucket, configure server-side encryption
for artifacts stored in Amazon S3 for CodePipeline by using AWS KMS-managed keys (SSE-
KMS), as described in Configure Server-Side Encryption for Artifacts Stored in Amazon S3 for
CodePipeline (p. 384).
• If you create a pipeline that uses a GitHub source repository, configure GitHub authentication. You
can use an AWS-managed OAuth token or a customer-managed personal access token, as described in
Configure GitHub Authentication (p. 386).
• AWS CloudTrail can be used to log AWS API calls and related events made by or on behalf of an AWS
account. For more information, see Logging CodePipeline API Calls with AWS CloudTrail (p. 342).
• Amazon CloudWatch Events can be used to monitor your AWS Cloud resources and the applications
you run on AWS. You can create alerts in Amazon CloudWatch Events based on metrics that you
define. For more information, see Detect and React to Changes in Pipeline State with Amazon
CloudWatch Events (p. 334).
The instance profile provides applications running on an Amazon EC2 instance with the credentials to
access other AWS services. As a result, you do not need to configure AWS credentials (AWS access key
and secret key).
To learn how to create the role for your Jenkins instance profile, see the steps in Create an IAM Role to
Use for Jenkins Integration (p. 55).
Topics
• Use CodePipeline with Amazon S3, AWS CodeCommit, and AWS CodeDeploy (p. 131)
• Use CodePipeline with Third-party Action Providers (GitHub and Jenkins) (p. 132)
• Use CodePipeline with AWS CodeStar to Build a Pipeline in a Code Project (p. 132)
• Use CodePipeline to Compile, Build, and Test Code with CodeBuild (p. 133)
• Use CodePipeline with Amazon ECS for Continuous Delivery of Container-Based Applications to the
Cloud (p. 133)
• Use CodePipeline with Elastic Beanstalk for Continuous Delivery of Web Applications to the
Cloud (p. 133)
• Use CodePipeline with AWS Lambda for Continuous Delivery of Lambda-Based and Serverless
Applications (p. 133)
• Use CodePipeline with AWS CloudFormation Templates for Continuous Delivery to the
Cloud (p. 133)
When you use the pipeline creation wizard in the console, your pipeline must include a source stage and at least a build or deploy stage. The wizard creates the stages for you with default names that cannot be changed.
You can use the tutorials in this guide to create pipelines and specify stages:
• The steps in Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26) help you use the wizard to create a pipeline with two default stages: “Source” and “Staging”, where your Amazon S3 bucket is the source provider. This tutorial creates a pipeline that uses AWS CodeDeploy to deploy a sample application from an Amazon S3 bucket to Amazon EC2 instances running Amazon Linux.
• The steps in Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42) help you use the
wizard to create a pipeline with a “Source” stage that uses your AWS CodeCommit repository as
the source provider. This tutorial creates a pipeline that uses AWS CodeDeploy to deploy a sample
application from an AWS CodeCommit repository to an Amazon EC2 instance running Amazon Linux.
To create your AWS CodeStar project, you choose your coding language and the type of application you
want to deploy. You can create the following project types: a web application, a web service, or an Alexa
skill.
At any time, you can integrate your preferred IDE into your AWS CodeStar dashboard. You can also add
and remove team members and manage permissions for team members on your project. For a tutorial
that shows you how to use AWS CodeStar to create a sample pipeline for a serverless application, see
Tutorial: Creating and Managing a Serverless Project in AWS CodeStar.
Tagging Resources
A tag is a custom attribute label that you or AWS assigns to an AWS resource. Each AWS tag has two
parts:
• A tag key (for example, CostCenter, Environment, Project, or Secret). Tag keys are case
sensitive.
• An optional field known as a tag value (for example, 111122223333, Production, or a team name).
Omitting the tag value is the same as using an empty string. Like tag keys, tag values are case
sensitive.
Tags help you identify and organize your AWS resources. Many AWS services support tagging, so you can
assign the same tag to resources from different services to indicate that the resources are related. For
example, you can assign the same tag to a pipeline that you assign to an Amazon S3 source bucket.
For tips on using tags, see the AWS Tagging Strategies post on the AWS Answers blog.
You can use the AWS CLI, CodePipeline APIs, or AWS SDKs to:
• Add tags to a pipeline, custom action, or webhook when you create it.
• Add, manage, and remove tags for a pipeline, custom action, or webhook.
You can also use the console to add, manage, and remove tags for a pipeline.
In addition to identifying, organizing, and tracking your resources with tags, you can use tags in IAM policies to help control who can view and interact with your resources. For examples of tag-based access policies, see Using Tags to Control Access to CodePipeline Resources (p. 370).
Amazon VPC is an AWS service that you can use to launch AWS resources in a virtual network that you
define. With a VPC, you have control over your network settings, such as:
• IP address range
• Subnets
• Route tables
• Network gateways
Interface VPC endpoints are powered by AWS PrivateLink, an AWS technology that facilitates private
communication between AWS services using an elastic network interface with private IP addresses. To
connect your VPC to CodePipeline, you define an interface VPC endpoint for CodePipeline. This type of
endpoint makes it possible for you to connect your VPC to AWS services. The endpoint provides reliable,
scalable connectivity to CodePipeline without requiring an internet gateway, network address translation
(NAT) instance, or VPN connection. For information about setting up a VPC, see the VPC User Guide.
The endpoint is prepopulated with the region you specified when you signed in to AWS. If you sign in to
another region, the VPC endpoint is updated with the new region.
The following services integrate with CodePipeline, but they must communicate with the internet:
The following services are enabled for VPC support, but they must communicate with the internet, and
cannot be connected to CodePipeline using VPC endpoints:
Before you can create a pipeline, you must first complete the steps in Getting Started with
CodePipeline (p. 9).
For more information about pipelines, see CodePipeline Concepts (p. 4), CodePipeline
Tutorials (p. 25), and, if you want to use the AWS CLI to create a pipeline, CodePipeline Pipeline
Structure Reference (p. 393). To view a list of pipelines, see View Pipeline Details and History in
CodePipeline (p. 202).
Topics
• Start a Pipeline Execution in CodePipeline (p. 137)
• Create a Pipeline in CodePipeline (p. 187)
• Edit a Pipeline in CodePipeline (p. 196)
• View Pipeline Details and History in CodePipeline (p. 202)
• Delete a Pipeline in CodePipeline (p. 214)
• Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account (p. 215)
• Edit Pipelines to Use Push Events (p. 224)
• Create the CodePipeline Service Role (p. 276)
• Tag a Pipeline in CodePipeline (p. 277)
• Automatically: Using change detection methods that you specify, you can make your pipeline start
when a change is made to a repository. You can also make your pipeline start on a schedule. The
following are the automatic change detection methods:
• When you use the console to create a pipeline that has a CodeCommit source repository or Amazon
S3 source bucket, CodePipeline creates an Amazon CloudWatch Events rule that starts your pipeline
when the source changes. This is the recommended change detection method. If you use the
AWS CLI to create the pipeline, the change detection method defaults to starting the pipeline by
periodically checking the source (CodeCommit, Amazon S3, and GitHub source providers only).
We recommend that you disable periodic checks and create the change detection resources
manually. For more information, see Use CloudWatch Events to Start a Pipeline (CodeCommit
Source) (p. 140).
• When you use the console to create a pipeline that has a GitHub repository, CodePipeline creates
a webhook that starts your pipeline when the source changes. This is the recommended change
detection method. If you use the AWS CLI to create the pipeline, the change detection method
defaults to starting the pipeline by periodically checking the source. We recommend that you
disable periodic checks and create the additional resources manually. For more information, see Use
Webhooks to Start a Pipeline (GitHub Source) (p. 166).
• Manually: You can use the console or the AWS CLI to start a pipeline manually. For information, see
Start a Pipeline Manually in AWS CodePipeline (p. 184).
By default, pipelines are configured to start automatically using change detection methods.
Note
Your pipeline runs only when something changes in the source repository and branch that you
have defined.
Topics
• Change Detection Methods Used to Start Pipelines Automatically (p. 138)
• Use CloudWatch Events to Start a Pipeline (CodeCommit Source) (p. 140)
• Use CloudWatch Events to Start a Pipeline (Amazon S3 Source) (p. 151)
• Use Webhooks to Start a Pipeline (GitHub Source) (p. 166)
• Use CloudWatch Events to Start a Pipeline (Amazon ECR Source) (p. 177)
• Use Periodic Checks to Start a Pipeline (p. 184)
• Start a Pipeline Manually in AWS CodePipeline (p. 184)
• Use Amazon CloudWatch Events to Start a Pipeline on a Schedule (p. 185)
You can use the console (p. 196), CLI (p. 198), or AWS CloudFormation to specify your change
detection method.
Amazon ECR source:
• Change detection method: Amazon CloudWatch Events. This rule is created by the wizard for pipelines with an Amazon ECR source created or edited in the console.
• Your pipeline is triggered as soon as a change is made to the repository.
• Periodic checks are not applicable for this source provider.
• Recommended: Yes. When you use the CLI or AWS CloudFormation to create a pipeline, make sure you create your CloudWatch Events rule for change detection. To create your rule, see Use CloudWatch Events to Start a Pipeline (Amazon ECR Source) (p. 177).
In Amazon CloudWatch Events, you create a rule to detect and react to changes in the state of the
pipeline's defined source.
1. Create an Amazon CloudWatch Events rule that uses the pipeline's source repository as the event
source.
2. Add CodePipeline as the target.
3. Grant Amazon CloudWatch Events permissions to start the pipeline.
As you build your rule, the Event Pattern Preview pane in the console (or the --event-pattern output
in the AWS CLI) displays the event fields, in JSON format. The following sample CodeCommit event
pattern uses this JSON structure:
{
"source": [ "aws.codecommit" ],
"detail-type": [ "CodeCommit Repository State Change" ],
"resources": [ "CodeCommitRepo_ARN" ],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"],
"referenceType":["branch"],
"referenceName": ["branch_name"]
}
}
Topics
• Create a CloudWatch Events Rule for a CodeCommit Source (Console) (p. 141)
• Create a CloudWatch Events Rule for a CodeCommit Source (CLI) (p. 143)
• Create a CloudWatch Events Rule for a CodeCommit Source (AWS CloudFormation Template)
(p. 146)
• Configure Your Pipelines to Use Amazon CloudWatch Events for Change Detection (CodeCommit
Source) (p. 151)
The service name that you choose owns the event resource. For example, choose CodeCommit to
trigger a pipeline when there are changes to the CodeCommit repository associated with a pipeline.
4. From Event Type, choose CodeCommit Repository State Change.
To make a rule that applies to one or more repositories, choose Specific resource(s) by ARN, and
then enter the ARN.
Note
You can find the ARN for a CodeCommit repository on the Settings page in the
CodeCommit console.
To specify the branch to associate with the repository, choose Edit, and enter the resource type
branch and branch name. Use the event pattern options for detail. The preceding example shows
the detail options for a CodeCommit repository branch named master.
The following is a sample CodeCommit event pattern in the Event window for a MyTestRepo
repository with a branch named master:
{
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit Repository State Change"
],
"resources": [
"arn:aws:codecommit:us-west-2:80398EXAMPLE:MyTestRepo"
],
"detail": {
"referenceType": [
"branch"
],
"referenceName": [
"master"
]
}
}
Choose Save.
• Choose Create a new role for this specific resource to create a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
• Choose Use existing role to enter a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
9. Review your rule setup to make sure it meets your requirements.
10. Choose Configure details.
11. On the Configure rule details page, enter a name and description for the rule, and then choose
State to enable the rule.
12. If you're satisfied with the rule, choose Create rule.
To use the AWS CLI to create the rule, call the put-rule command, specifying:
• A name that uniquely identifies the rule you are creating. This name must be unique across all of the pipelines you create with CodePipeline associated with your AWS account.
• The event pattern for the source and detail fields used by the rule. For more information, see Amazon CloudWatch Events and Event Patterns.
To create a CloudWatch Events rule with CodeCommit as the event source and CodePipeline
as the target
1. Add permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy that allows CloudWatch Events to assume
the service role. Name the trust policy trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
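For example, a command of this form creates the role and attaches the trust policy file from the previous step:

aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json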
c. Create the permissions policy JSON, as shown in this sample, for the pipeline named
MyFirstPipeline. Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
Why am I making this change? Adding this policy to the role creates permissions for
CloudWatch Events.
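d. Call the put-role-policy command to attach the permissions policy to the role. The policy name used here, CodePipeline-Permissions-Policy-for-CWE, is a representative name; you can choose another.

aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-for-CWE --policy-document file://permissionspolicyforCWE.json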
2. Call the put-rule command and include the --name and --event-pattern parameters.
Why am I making this change? This command creates the CloudWatch Events rule that detects changes in your source repository.
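For example, a command of this form creates a rule named MyCodeCommitRepoRule with an event pattern that mirrors the sample shown earlier:

aws events put-rule --name MyCodeCommitRepoRule --event-pattern '{"source":["aws.codecommit"],"detail-type":["CodeCommit Repository State Change"],"resources":["arn:aws:codecommit:us-west-2:80398EXAMPLE:MyTestRepo"],"detail":{"referenceType":["branch"],"referenceName":["master"]}}'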
3. To add CodePipeline as a target, call the put-targets command and include the following
parameters:
• The --rule parameter is used with the rule_name you created by using put-rule.
• The --targets parameter is used with the list Id of the target in the list of targets and the ARN
of the target pipeline.
The following sample command specifies that for the rule called MyCodeCommitRepoRule, the
target Id is composed of the number one, indicating that in a list of targets for the rule, this is
target 1. The sample command also specifies an example ARN for the pipeline. The pipeline starts
when something changes in the repository.
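A command matching that description might look like this:

aws events put-targets --rule MyCodeCommitRepoRule --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline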
Important
When you create a pipeline with this method, the PollForSourceChanges parameter
defaults to true if it is not explicitly set to false. When you add event-based change detection,
you must add the parameter to your output and set it to false to disable polling. Otherwise,
your pipeline starts twice for a single source change. For details, see Default Settings for the
PollForSourceChanges Parameter (p. 403).
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, run the following command:
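For example:

aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json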
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing the
PollForSourceChanges parameter to false, as shown in this example.
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "MyTestRepo"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, remove
the metadata lines from the JSON file. Otherwise, the update-pipeline command cannot use it.
Remove the "metadata": { } lines and the "created", "pipelineARN", and "updated" fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
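4. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file. For example, if you saved the edited structure as pipeline.json:

aws codepipeline update-pipeline --cli-input-json file://pipeline.json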
To update your pipeline AWS CloudFormation template and create a CloudWatch Events rule
1. In the template, under Resources, use the AWS::IAM::Role AWS CloudFormation resource to configure the IAM role that allows your event to start your pipeline. This entry creates a role that uses two policies: a trust policy that allows CloudWatch Events to assume the role, and a permissions policy that allows the rule to start your pipeline.
Why am I making this change? Adding the AWS::IAM::Role resource enables AWS
CloudFormation to create permissions for CloudWatch Events. This resource is added to your AWS
CloudFormation stack.
YAML
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref
'AWS::Region', ':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
JSON
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
...
2. In the template, under Resources, use the AWS::Events::Rule AWS CloudFormation resource to add a CloudWatch Events rule. This event pattern creates an event that monitors push changes to your repository. When CloudWatch Events detects a repository state change, the rule invokes StartPipelineExecution on your target pipeline.
Why am I making this change? Adding the AWS::Events::Rule resource enables AWS
CloudFormation to create the event. This resource is added to your AWS CloudFormation stack.
YAML
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.codecommit
detail-type:
- 'CodeCommit Repository State Change'
resources:
- !Join [ '', [ 'arn:aws:codecommit:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref RepositoryName ] ]
detail:
event:
- referenceCreated
- referenceUpdated
referenceType:
- branch
referenceName:
- master
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
JSON
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit Repository State Change"
],
"resources": [
{
"Fn::Join": [
"",
[
"arn:aws:codecommit:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "RepositoryName"
}
]
]
}
],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"
],
"referenceType": [
"branch"
],
"referenceName": [
"master"
]
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
},
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
Important
When you create a pipeline with this method, the PollForSourceChanges parameter defaults to true if it is not explicitly set to false. When you add event-based change detection, you must add the parameter to your output and set it to false to disable polling. Otherwise, your pipeline starts twice for a single source change. For details, see Default Settings for the PollForSourceChanges Parameter (p. 403).
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: CodeCommit
OutputArtifacts:
- Name: SourceOutput
Configuration:
BranchName: !Ref BranchName
RepositoryName: !Ref RepositoryName
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
]
},
Configuring pipelines in the console:
1. Open your pipeline in the console.
2. Choose Edit.
3. Choose the pencil icon next to the CodeCommit source action.
4. Choose Update.
5. Choose Save pipeline changes.
The Amazon CloudWatch Events rule is created for you. No further action is required.
Configuring pipelines in the CLI:
Use the update-pipeline command to set the PollForSourceChanges parameter to false. See Edit a Pipeline (AWS CLI) (p. 198).
You must create the Amazon CloudWatch Events rule. See Use CloudWatch Events to Start a Pipeline (CodeCommit Source) (p. 140).
Configuring pipelines in AWS CloudFormation:
Update the AWS CloudFormation resource stack. See What Is Amazon CloudWatch Events.
You must create the Amazon CloudWatch Events rule. See Use CloudWatch Events to Start a Pipeline (CodeCommit Source) (p. 140).
AWS CloudTrail is a service that logs and filters events on your Amazon S3 source bucket. The trail sends
the filtered source changes to the Amazon CloudWatch Events rule. The Amazon CloudWatch Events rule
detects the source change and then starts your pipeline.
Note
For pipelines with an Amazon S3 source, an Amazon CloudWatch Events rule detects source
changes and then starts your pipeline when changes occur. When you use the console to create
or change a pipeline, the rule and all associated resources are created for you. If you create or
change a pipeline with an Amazon S3 source in the CLI or AWS CloudFormation, you must create
the Amazon CloudWatch Events rule, IAM role, and AWS CloudTrail trail manually.
Requirements:
• If you are not creating a trail, use an existing AWS CloudTrail trail for logging events in your Amazon
S3 source bucket and sending filtered events to the Amazon CloudWatch Events rule.
• Create or use an existing S3 bucket where AWS CloudTrail can store its log files. AWS CloudTrail must
have the permissions required to deliver log files to an Amazon S3 bucket. The bucket cannot be
configured as a Requester Pays bucket. When you create an Amazon S3 bucket as part of creating or
updating a trail in the console, AWS CloudTrail attaches the required permissions to a bucket for you.
For more information, see Amazon S3 Bucket Policy for CloudTrail.
To create a trail
To create a CloudWatch Events rule that targets your pipeline with an S3 source
Above the Event Pattern Preview pane, choose Edit. Edit the event pattern to add the bucket name and S3 object key as requestParameters, as shown in this example for a bucket named my-bucket. When you use the Edit window to specify resources, your rule is updated to use a custom event pattern.
{
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [
"CopyObject",
"CompleteMultiPartUpload",
"PutObject"
],
"requestParameters": {
"bucketName": [
"my-bucket"
],
"key": [
"my-key"
]
}
}
}
• Choose Create a new role for this specific resource to create a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
• Choose Use existing role to enter a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
10. Review your rule to make sure it meets your requirements, and then choose Configure details.
11. On the Configure rule details page, enter a name and description for the rule, and then choose
State to enable the rule.
12. If you're satisfied with the rule, choose Create rule.
To use the AWS CLI to create a trail, call the create-trail command, specifying the trail name and the bucket where CloudTrail delivers its log files. For more information, see Creating a Trail with the AWS Command Line Interface.
1. Call the create-trail command and include the --name and --s3-bucket-name parameters.
Why am I making this change? This creates the CloudTrail trail required for your S3 source bucket.
The following command uses --name and --s3-bucket-name to create a trail named my-trail
and a bucket named myBucket.
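For example:

aws cloudtrail create-trail --name my-trail --s3-bucket-name myBucket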
2. Call the start-logging command and include the --name parameter.
Why am I making this change? This command starts the CloudTrail logging for your source bucket and sends events to CloudWatch Events.
Example:
The following command uses --name to start logging on a trail named my-trail.
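For example:

aws cloudtrail start-logging --name my-trail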
3. Call the put-event-selectors command and include the --trail-name and --event-selectors
parameters. Use event selectors to specify that you want your trail to log data events for your source
bucket and send the events to the Amazon CloudWatch Events rule.
Example:
The following command uses --trail-name and --event-selectors to specify data events for
a source bucket and prefix named myBucket/myFolder.
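A command of this form registers the data events; the object path under the prefix is representative:

aws cloudtrail put-event-selectors --trail-name my-trail --event-selectors '[{"ReadWriteType":"WriteOnly","IncludeManagementEvents":false,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::myBucket/myFolder/"]}]}]'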
To create a CloudWatch Events rule with Amazon S3 as the event source and CodePipeline as
the target and apply the permissions policy
1. Grant permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy to allow CloudWatch Events to assume the
service role. Name it trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
Why am I making this change? Adding this trust policy to the role creates permissions for
CloudWatch Events.
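For example:

aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json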
c. Create the permissions policy JSON, as shown here for the pipeline named MyFirstPipeline.
Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
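d. Call the put-role-policy command to attach the permissions policy to the role (the policy name here is representative):

aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-for-CWE --policy-document file://permissionspolicyforCWE.json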
2. Call the put-rule command and include the --name and --event-pattern parameters.
The following sample command uses --event-pattern to create a rule named MyS3SourceRule.
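A command of this form creates the rule, using an event pattern that mirrors the S3 sample shown earlier:

aws events put-rule --name MyS3SourceRule --event-pattern '{"source":["aws.s3"],"detail-type":["AWS API Call via CloudTrail"],"detail":{"eventSource":["s3.amazonaws.com"],"eventName":["CopyObject","CompleteMultipartUpload","PutObject"],"requestParameters":{"bucketName":["my-bucket"],"key":["my-key"]}}}'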
3. To add CodePipeline as a target, call the put-targets command and include the --rule and --
targets parameters.
The following command specifies that for the rule named MyS3SourceRule, the target Id is
composed of the number one, indicating that in a list of targets for the rule, this is target 1. The
command also specifies an example ARN for the pipeline. The pipeline starts when something
changes in the repository.
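For example:

aws events put-targets --rule MyS3SourceRule --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline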
Important
When you create a pipeline with this method, the PollForSourceChanges parameter
defaults to true if it is not explicitly set to false. When you add event-based change detection,
you must add the parameter to your output and set it to false to disable polling. Otherwise,
your pipeline starts twice for a single source change. For details, see Default Settings for the
PollForSourceChanges Parameter (p. 403).
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, run the following command:
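For example:

aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json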
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing the
PollForSourceChanges parameter for a bucket named storage-bucket to false, as shown in
this example.
Why am I making this change? Setting this parameter to false turns off periodic checks so you can
use event-based change detection only.
"configuration": {
"S3Bucket": "storage-bucket",
"PollForSourceChanges": "false",
"S3ObjectKey": "index.zip"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, you must
remove the metadata lines from the JSON file. Otherwise, the update-pipeline command cannot
use it. Remove the "metadata": { } lines and the "created", "pipelineARN", and "updated"
fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
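4. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file. For example:

aws codepipeline update-pipeline --cli-input-json file://pipeline.json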
To update your pipeline AWS CloudFormation template and create a CloudWatch Events rule
1. In the template, under Resources, use the AWS::IAM::Role AWS CloudFormation resource to configure the IAM role that allows your event to start your pipeline. This entry creates a role that uses two policies: a trust policy that allows CloudWatch Events to assume the role, and a permissions policy that allows the rule to start your pipeline.
Why am I making this change? Adding the AWS::IAM::Role resource enables AWS CloudFormation to create permissions for Amazon CloudWatch Events. This resource is added to your AWS CloudFormation stack.
YAML
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref
'AWS::Region', ':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
...
JSON
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
...
Why am I making this change? Adding the AWS::Events::Rule resource enables AWS
CloudFormation to create the event. This resource is added to your AWS CloudFormation stack.
YAML
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.s3
detail-type:
- 'AWS API Call via CloudTrail'
detail:
eventSource:
- s3.amazonaws.com
eventName:
- CopyObject
- PutObject
- CompleteMultipartUpload
requestParameters:
bucketName:
- !Ref SourceBucket
key:
- !Ref SourceObjectKey
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
...
JSON
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [
"CopyObject",
"PutObject",
"CompleteMultipartUpload"
],
"requestParameters": {
"bucketName": [
{
"Ref": "SourceBucket"
}
],
"key": [
{
"Ref": "SourceObjectKey"
}
]
}
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
}
},
...
YAML
Outputs:
SourceBucketARN:
Description: "S3 bucket ARN that Cloudtrail will use"
Value: !GetAtt SourceBucket.Arn
Export:
Name: SourceBucketARN
JSON
"Outputs" : {
"SourceBucketARN" : {
"Description" : "S3 bucket ARN that Cloudtrail will use",
"Value" : { "Fn::GetAtt": ["SourceBucket", "Arn"] },
"Export" : {
"Name" : "SourceBucketARN"
}
}
...
4. Save your updated template to your local computer, and open the AWS CloudFormation console.
5. Choose your stack, and then choose Create Change Set for Current Stack.
6. Upload your updated template, and then view the changes listed in AWS CloudFormation. These are
the changes that will be made to the stack. You should see your new resources in the list.
7. Choose Execute.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: S3
OutputArtifacts:
- Name: SourceOutput
Configuration:
S3Bucket: !Ref SourceBucket
S3ObjectKey: !Ref SourceObjectKey
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "S3"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"S3Bucket": {
"Ref": "SourceBucket"
},
"S3ObjectKey": {
"Ref": "SourceObjectKey"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
Why am I making this change? Given the current limit of five trails per account, the CloudTrail trail
must be created and managed separately. (See Limits in AWS CloudTrail.) However, you can include
many Amazon S3 buckets on a single trail, so you can create the trail once and then add Amazon S3
buckets for other pipelines as necessary. Paste the following into your second sample template file.
YAML
###################################################################################
# Prerequisites:
# - S3 SourceBucket and SourceObjectKey must exist
###################################################################################
Parameters:
SourceObjectKey:
Description: 'S3 source artifact'
Type: String
Default: SampleApp_Linux.zip
Resources:
AWSCloudTrailBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket: !Ref AWSCloudTrailBucket
PolicyDocument:
Version: 2012-10-17
Statement:
-
Sid: AWSCloudTrailAclCheck
Effect: Allow
Principal:
Service:
- cloudtrail.amazonaws.com
Action: s3:GetBucketAcl
Resource: !GetAtt AWSCloudTrailBucket.Arn
-
Sid: AWSCloudTrailWrite
Effect: Allow
Principal:
Service:
- cloudtrail.amazonaws.com
Action: s3:PutObject
Resource: !Join [ '', [ !GetAtt AWSCloudTrailBucket.Arn, '/AWSLogs/', !Ref 'AWS::AccountId', '/*' ] ]
Condition:
StringEquals:
s3:x-amz-acl: bucket-owner-full-control
AWSCloudTrailBucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
AwsCloudTrail:
DependsOn:
- AWSCloudTrailBucketPolicy
Type: AWS::CloudTrail::Trail
Properties:
S3BucketName: !Ref AWSCloudTrailBucket
EventSelectors:
-
DataResources:
-
Type: AWS::S3::Object
Values:
- !Join [ '', [ !ImportValue SourceBucketARN, '/', !Ref
SourceObjectKey ] ]
ReadWriteType: WriteOnly
IncludeGlobalServiceEvents: true
IsLogging: true
IsMultiRegionTrail: true
...
JSON
{
"Parameters": {
"SourceObjectKey": {
"Description": "S3 source artifact",
"Type": "String",
"Default": "SampleApp_Linux.zip"
}
},
"Resources": {
"AWSCloudTrailBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain"
},
"AWSCloudTrailBucketPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "AWSCloudTrailBucket"
},
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AWSCloudTrailAclCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:GetBucketAcl",
"Resource": {
"Fn::GetAtt": [
"AWSCloudTrailBucket",
"Arn"
]
}
},
{
"Sid": "AWSCloudTrailWrite",
"Effect": "Allow",
"Principal": {
"Service": [
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:PutObject",
"Resource": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"AWSCloudTrailBucket",
"Arn"
]
},
"/AWSLogs/",
{
"Ref": "AWS::AccountId"
},
"/*"
]
]
},
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
}
},
"AwsCloudTrail": {
"DependsOn": [
"AWSCloudTrailBucketPolicy"
],
"Type": "AWS::CloudTrail::Trail",
"Properties": {
"S3BucketName": {
"Ref": "AWSCloudTrailBucket"
},
"EventSelectors": [
{
"DataResources": [
{
"Type": "AWS::S3::Object",
"Values": [
{
"Fn::Join": [
"",
[
{
"Fn::ImportValue": "SourceBucketARN"
},
"/",
{
"Ref": "SourceObjectKey"
}
]
]
}
]
}
],
"ReadWriteType": "WriteOnly"
}
],
"IncludeGlobalServiceEvents": true,
"IsLogging": true,
"IsMultiRegionTrail": true
}
}
}
}
...
Configuring pipelines in the console:
1. Open your pipeline in the console.
2. Choose Edit.
The Amazon CloudWatch Events rule and AWS CloudTrail trail are created for you.
Configuring pipelines in the CLI:
Use the update-pipeline command to set the PollForSourceChanges parameter to false. See Edit a Pipeline (AWS CLI) (p. 198).
You must create:
• The AWS CloudTrail trail.
• The Amazon CloudWatch Events rule.
When you use the console to create or edit a pipeline that has a GitHub source, CodePipeline creates
a webhook. CodePipeline deletes your webhook when you delete your pipeline. You do not need to
manage it in GitHub. If you use the AWS CLI or AWS CloudFormation to create or edit a pipeline that has
a GitHub source, you must use the information in these sections to manage webhooks yourself.
Topics
• Create a Webhook for a GitHub Source (p. 167)
• List Webhooks in Your Account (p. 169)
• Edit the Webhook for Your GitHub Source (p. 170)
• Delete the Webhook for Your GitHub Source (p. 171)
• Tag a Webhook in CodePipeline (p. 171)
• Create a Webhook for a GitHub Source (AWS CloudFormation Template) (p. 173)
• Configure Your Pipelines to Use Webhooks for Change Detection (GitHub Source) (p. 176)
To use the AWS CLI to create a webhook, call the put-webhook command and supply the following:
• A name that uniquely identifies the webhook. This name must be unique within the region of the
account for the pipeline.
• A secret in the JSON file to be used for GitHub authorization.
1. In a text editor, create and save a JSON file for the webhook you want to create. Use this sample file
for a webhook named my-webhook:
{"webhook":
{"name": "my-webhook",
"targetPipeline": "pipeline_name",
"targetAction": "source_action_name",
"filters": [
{
"jsonPath": "$.ref",
"matchEquals": "refs/heads/{Branch}"
}
],
"authentication": "GITHUB_HMAC",
"authenticationConfiguration": {"SecretToken":"secret"}
}
}
2. Call the put-webhook command and include the --cli-input-json and --region parameters.
The following sample command creates a webhook with the webhook_json JSON file.
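For example (assuming the JSON above is saved as webhook_json.json):

aws codepipeline put-webhook --cli-input-json file://webhook_json.json --region eu-central-1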
3. In the output shown in this example, the URL and ARN are returned for a webhook named my-
webhook.
{
"webhook": {
"url": "https://webhooks.domain.com/trigger111111111EXAMPLE11111111111111111",
"definition": {
"authenticationConfiguration": {
"SecretToken": "secret"
},
"name": "my-webhook",
"authentication": "GITHUB_HMAC",
"targetPipeline": "pipeline_name",
"targetAction": "Source",
"filters": [
{
"jsonPath": "$.ref",
"matchEquals": "refs/heads/{Branch}"
}
]
},
"arn": "arn:aws:codepipeline:eu-central-1:ACCOUNT_ID:webhook:my-webhook"
},
"tags": [{
"key": "Project",
"value": "ProjectA"
}]
}
This example adds tagging to the webhook by including the Project tag key and ProjectA
value on the webhook. For more information about tagging resources in CodePipeline, see Tagging
Resources (p. 134).
4. Call the register-webhook-with-third-party command and include the --webhook-name
parameter.
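For example:

aws codepipeline register-webhook-with-third-party --webhook-name my-webhook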
If you are updating a pipeline to use webhooks, you must also use the following procedure to turn off
periodic checks.
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, you would type the following command:
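For example:

aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json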
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing or adding the PollForSourceChanges parameter. In this example, for a repository named UserGitHubRepo, the parameter is set to false.
Why am I making this change? Changing this parameter turns off periodic checks so you can use
event-based change detection only.
"configuration": {
"Owner": "darlaker",
"Repo": "UserGitHubRepo",
"PollForSourceChanges": "false",
"Branch": "master",
"OAuthToken": "****"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, you
must edit the structure in the JSON file by removing the metadata lines from the file. Otherwise,
the update-pipeline command cannot use it. Remove the "metadata" section from the pipeline
structure in the JSON file, including the : { } and the "created", "pipelineARN", and
"updated" fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
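4. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file. For example:

aws codepipeline update-pipeline --cli-input-json file://pipeline.json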
1. To list your webhooks, call the list-webhooks command and include the --endpoint-url and --
region parameters.
The following sample command lists webhooks for the "eu-central-1" endpoint URL.
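The command might look like this:

aws codepipeline list-webhooks --endpoint-url "https://codepipeline.eu-central-1.amazonaws.com" --region eu-central-1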
2. Webhooks are listed, including the name and ARN for each webhook.
{
    "webhooks": [
        {
            "url": "https://webhooks.domain.com/trigger111111111EXAMPLE11111111111111111",
            "definition": {
                "authenticationConfiguration": {
                    "SecretToken": "Secret"
                },
                "name": "my-webhook",
                "authentication": "GITHUB_HMAC",
                "targetPipeline": "my-Pipeline",
                "targetAction": "Source",
                "filters": [
                    {
                        "jsonPath": "$.ref",
                        "matchEquals": "refs/heads/{Branch}"
                    }
                ]
            },
            "arn": "arn:aws:codepipeline:eu-central-1:ACCOUNT_ID:webhook:my-webhook"
        }
    ]
}
• If you use the console to edit the GitHub source action for your pipeline, the webhook is updated for
you (and re-registered, if appropriate).
• If you are not updating the webhook name, and you are not changing the GitHub repository, you can
use the AWS CLI to update the webhook. See Example 1.
• If you are changing the webhook name or GitHub repository name, you must edit the source action
in the console or delete and recreate the webhook in the CLI. After you create the webhook, you also
register it. See Example 2.
1. In a text editor, edit the JSON file for the webhook you want to update. This example modifies the
sample file that was used to create the webhook in Create a Webhook for a GitHub Source (p. 167).
This sample changes the secret token of the webhook named "my-webhook".
{"webhook":
{"name": "my-webhook",
"targetPipeline": "pipeline_name",
"targetAction": "source_action_name",
"filters": [
{
"jsonPath": "$.ref",
"matchEquals": "refs/heads/{Branch}"
}
],
"authentication": "GITHUB_HMAC",
"authenticationConfiguration": {"SecretToken":"new_secret"}
}
}
2. Call the put-webhook command and include the --cli-input-json and --region parameters.
The following sample command updates a webhook with the modified "webhook_json" JSON file.
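For example (assuming the modified JSON is saved as webhook_json.json):

aws codepipeline put-webhook --cli-input-json file://webhook_json.json --region eu-central-1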
3. The output returns the webhook details and the new secret.
Note
You can edit the GitHub source action in the console. This allows CodePipeline to manage
webhooks for you.
1. Use the steps in Delete the Webhook for Your GitHub Source (p. 171) to deregister and delete the
existing webhook that is associated with the old webhook name or GitHub repository.
2. Use the steps in Create a Webhook for a GitHub Source (p. 167) to recreate the webhook.
Note
You can edit the GitHub source action in the console. This allows CodePipeline to manage
webhooks for you.
1. You must deregister the webhook before you delete it. Call the deregister-webhook-with-third-
party command and include the --webhook-name parameter.
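For example:

aws codepipeline deregister-webhook-with-third-party --webhook-name my-webhook

2. Call the delete-webhook command and include the --name parameter. For example:

aws codepipeline delete-webhook --name my-webhook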
You can specify tags when you create a webhook. You can add, remove, and update the values of tags in
a webhook. You can add up to 50 tags to each webhook.
Topics
• Add Tags to an Existing Webhook (p. 171)
• View Tags for a Webhook (p. 172)
• Edit Tags for a Webhook (p. 172)
• Remove Tags for a Webhook (p. 172)
In these steps, we assume that you have already installed a recent version of the AWS CLI or updated to
the current version. For more information, see Installing the AWS Command Line Interface.
At the terminal or command line, run the tag-resource command, specifying the Amazon Resource
Name (ARN) of the webhook where you want to add tags and the key and value of the tag you want to
add. You can add more than one tag to a webhook. For example, to tag a webhook named MyWebhook
with two tags, a tag key named Project with the tag value of NewProject, and a tag key named
ApplicationName with the tag value of MyApplication:
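The command might look like this:

aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:webhook:MyWebhook --tags key=Project,value=NewProject key=ApplicationName,value=MyApplication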
At the terminal or command line, run the list-tags-for-resource command. For example, to view a list of
tag keys and tag values for a webhook named MyWebhook with the ARN arn:aws:codepipeline:us-
west-2:account-id:webhook:MyWebhook:
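The command might look like this; if successful, it returns output like the following:

aws codepipeline list-tags-for-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:webhook:MyWebhook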
{
"tags": {
"Project": "NewProject",
"ApplicationName": "MyApplication"
}
}
At the terminal or command line, run the tag-resource command, specifying the ARN of the webhook
where you want to update a tag and specify the tag key and tag value:
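For example, to change the value of the Project tag (the new value here is representative):

aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:webhook:MyWebhook --tags key=Project,value=UpdatedProject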
At the terminal or command line, run the untag-resource command, specifying the ARN of the webhook
where you want to remove tags and the tag key of the tag you want to remove. For example, to remove a
tag on a webhook named MyWebhook with the tag key Project:
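The command might look like this:

aws codepipeline untag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:webhook:MyWebhook --tag-keys Project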
If successful, this command returns nothing. To verify the tags associated with the webhook, run the list-
tags-for-resource command.
YAML
Parameters:
GitHubOwner:
Type: String
GitHubSecret:
Type: String
NoEcho: true
GitHubOAuthToken:
Type: String
NoEcho: true
...
JSON
{
"Parameters": {
"BranchName": {
"Description": "GitHub branch name",
"Type": "String",
"Default": "master"
},
"GitHubOwner": {
"Type": "String"
},
"GitHubSecret": {
"Type": "String",
"NoEcho": true
},
"GitHubOAuthToken": {
"Type": "String",
"NoEcho": true
},
...
If RegisterWithThirdParty is set to true, make sure the user associated with the OAuthToken can set the required scopes in GitHub. The token and webhook require the following GitHub scopes:
• repo - used for full control to read and pull artifacts from public and private repositories into a
pipeline.
• admin:repo_hook - used for full control of repository hooks.
Otherwise, GitHub returns a 404. For more information about the 404 returned, see https://help.github.com/articles/about-webhooks.
YAML
AppPipelineWebhook:
Type: AWS::CodePipeline::Webhook
Properties:
Authentication: GITHUB_HMAC
AuthenticationConfiguration:
SecretToken: !Ref GitHubSecret
Filters:
-
JsonPath: "$.ref"
MatchEquals: refs/heads/{Branch}
TargetPipeline: !Ref AppPipeline
TargetAction: SourceAction
Name: AppPipelineWebhook
TargetPipelineVersion: !GetAtt AppPipeline.Version
RegisterWithThirdParty: true
...
JSON
"AppPipelineWebhook": {
"Type": "AWS::CodePipeline::Webhook",
"Properties": {
"Authentication": "GITHUB_HMAC",
"AuthenticationConfiguration": {
"SecretToken": {
"Ref": "GitHubSecret"
}
},
"Filters": [
{
"JsonPath": "$.ref",
"MatchEquals": "refs/heads/{Branch}"
}
],
"TargetPipeline": {
"Ref": "AppPipeline"
},
"TargetAction": "SourceAction",
"Name": "AppPipelineWebhook",
"TargetPipelineVersion": {
"Fn::GetAtt": [
"AppPipeline",
"Version"
]
},
"RegisterWithThirdParty": true
}
},
...
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
OutputArtifacts:
- Name: SourceOutput
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref RepositoryName
Branch: !Ref BranchName
OAuthToken: !Ref GitHubOAuthToken
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "ThirdParty",
"Version": 1,
"Provider": "GitHub"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"Owner": {
"Ref": "GitHubOwner"
},
"Repo": {
"Ref": "RepositoryName"
},
"Branch": {
"Ref": "BranchName"
},
"OAuthToken": {
"Ref": "GitHubOAuthToken"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
This table includes procedures for configuring pipelines with a GitHub source to use webhooks.
Configuring pipelines in the console:
1. Open your pipeline in the console.
2. Choose Edit.
3. Choose the pencil icon next to the GitHub source action.
4. Choose Connect to repository, and then choose Update.
5. Choose Save pipeline changes.
The webhook is created for you and registered with GitHub. No further action is required.
Configuring pipelines in the CLI:
Use the update-pipeline command to set the PollForSourceChanges parameter to false.
You must create the webhook and register it with GitHub.
Configuring pipelines in AWS CloudFormation:
Retain periodic checking for your GitHub source repository. For information about the webhook resource in AWS CloudFormation, see AWS::CodePipeline::Webhook.
In Amazon CloudWatch Events, you create a rule to detect and react to changes in the state of the
pipeline's defined source.
Topics
• Create a CloudWatch Events Rule for an Amazon ECR Source (Console) (p. 177)
• Create a CloudWatch Events Rule for an Amazon ECR Source (CLI) (p. 179)
• Create a CloudWatch Events Rule for an Amazon ECR Source (AWS CloudFormation Template)
(p. 180)
Choose Edit, and then paste the following event pattern in the Event Source window for a my-image-repo repository with an image tag of latest:
{
"source": [
"aws.ecr"
],
"detail": {
"eventName": [
"PutImage"
],
"requestParameters": {
"repositoryName": [
"my-image-repo"
],
"imageTag": [
"latest"
]
}
}
}
5. Choose Save.
• Choose Create a new role for this specific resource to create a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
• Choose Use existing role to enter a service role that gives Amazon CloudWatch Events permissions to start your pipeline executions when triggered.
9. Review your rule setup to make sure it meets your requirements.
10. Choose Configure details.
11. On the Configure rule details page, enter a name and description for the rule, and then choose
State to enable the rule.
12. If you're satisfied with the rule, choose Create rule.
To use the AWS CLI to create the rule, call the put-rule command, specifying:
• A name that uniquely identifies the rule you are creating. This name must be unique across all of the pipelines you create with CodePipeline associated with your AWS account.
• The event pattern for the source and detail fields used by the rule. For more information, see Amazon CloudWatch Events and Event Patterns.
To create a CloudWatch Events rule with Amazon ECR as the event source and CodePipeline
as the target
1. Add permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy that allows CloudWatch Events to assume
the service role. Name the trust policy trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
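For example:

aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json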
c. Create the permissions policy JSON, as shown in this sample, for the pipeline named
MyFirstPipeline. Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
Why am I making this change? Adding this policy to the role creates permissions for
CloudWatch Events.
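d. Call the put-role-policy command to attach the permissions policy to the role (the policy name here is representative):

aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-for-CWE --policy-document file://permissionspolicyforCWE.json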
2. Call the put-rule command and include the --name and --event-pattern parameters.
Why am I making this change? This command creates the CloudWatch Events rule that detects changes in your source repository.
The following sample command uses --event-pattern to create a rule called MyECRRepoRule.
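A command of this form creates the rule, using the repository and image tag from the console example:

aws events put-rule --name MyECRRepoRule --event-pattern '{"source":["aws.ecr"],"detail":{"eventName":["PutImage"],"requestParameters":{"repositoryName":["my-image-repo"],"imageTag":["latest"]}}}'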
Note
The put-rule command above can use either the aws.ecr or ecr.amazonaws.com service
name in the source field.
3. To add CodePipeline as a target, call the put-targets command and include the following
parameters:
• The --rule parameter is used with the rule_name you created by using put-rule.
• The --targets parameter is used with the list Id of the target in the list of targets and the ARN
of the target pipeline.
The following sample command specifies that for the rule called MyECRRepoRule, the target Id
is composed of the number one, indicating that in a list of targets for the rule, this is target 1.
The sample command also specifies an example ARN for the pipeline. The pipeline starts when
something changes in the repository.
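For example:

aws events put-targets --rule MyECRRepoRule --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline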
To update your pipeline AWS CloudFormation template and create a CloudWatch Events rule
1. In the template, under Resources, use the AWS::IAM::Role AWS CloudFormation resource to configure the IAM role that allows your event to start your pipeline. This entry creates a role that uses two policies: a trust policy that allows CloudWatch Events to assume the role, and a permissions policy that allows the rule to start your pipeline.
Why am I making this change? Adding the AWS::IAM::Role resource enables AWS
CloudFormation to create permissions for CloudWatch Events. This resource is added to your AWS
CloudFormation stack.
YAML
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref
'AWS::Region', ':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
JSON
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action":
"codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
...
2. In the template, under Resources, use the AWS::Events::Rule AWS CloudFormation resource to add a CloudWatch Events rule for the Amazon ECR source. This event pattern creates an event that monitors push changes to your repository. When CloudWatch Events detects a repository state change, the rule invokes StartPipelineExecution on your target pipeline.
Why am I making this change? Adding the AWS::Events::Rule resource enables AWS
CloudFormation to create the event. This resource is added to your AWS CloudFormation stack. This
snippet uses an image named my-image-repo with a tag of latest.
YAML
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.ecr
detail:
eventName:
- PutImage
requestParameters:
repositoryName:
- my-image-repo
imageTag:
- latest
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
JSON
{
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.ecr"
],
"detail": {
"eventName": [
"PutImage"
],
"requestParameters": {
"repositoryName": [ "my-image-repo" ],
"imageTag": [ "latest" ]
}
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
}
},
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
For more information about creating a pipeline with the recommended configuration, see Create a
Pipeline (Console) (p. 187) and Create a Pipeline (CLI) (p. 193). For more information about updating
an action or pipeline with the recommended configuration, see Edit a Pipeline (Console) (p. 196) and
Edit a Pipeline (CLI) (p. 198).
For more information, see Change Detection Methods Used to Start Pipelines Automatically (p. 138).
Topics
• Start a Pipeline Manually (Console) (p. 184)
• Start a Pipeline Manually (CLI) (p. 184)
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. In Name, choose the name of the pipeline you want to start.
3. On the pipeline details page, choose Release change. This starts the most recent revision available
in each source location specified in a source action through the pipeline.
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI to run
the start-pipeline-execution command, specifying the name of the pipeline you want to start. For
example, to start running the last change through a pipeline named MyFirstPipeline:
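For example:

aws codepipeline start-pipeline-execution --name MyFirstPipeline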
2. To verify success, view the returned object. This command returns an execution ID, similar to the
following:
{
"pipelineExecutionId": "c53dbd42-This-Is-An-Example"
}
Note
After you have started the pipeline, you can monitor its progress in the CodePipeline
console or by running the get-pipeline-state command. For more information, see View
Pipeline Details and History (Console) (p. 202) and View Pipeline Details and History
(CLI) (p. 208).
• Choose Create a new role for this specific resource to create a service role that grants Amazon
CloudWatch Events permissions to start your pipeline executions when triggered.
• Choose Use existing role to enter a service role that grants Amazon CloudWatch Events
permissions to start your pipeline executions when triggered.
8. Choose Configure details.
9. On the Configure rule details page, enter a name and description for the rule, and then choose
State to enable the rule.
10. If you're satisfied with the rule, choose Create rule.
• A name that uniquely identifies the rule you are creating. This name must be unique across all of the
rules you create with CloudWatch Events in your AWS account.
• The schedule expression for the rule.
1. Call the put-rule command and include the --name and --schedule-expression parameters.
Examples:
The following sample command uses --schedule-expression to create a rule called MyRule2 that
filters CloudWatch Events on a schedule.
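aws events put-rule --name "MyRule2" --schedule-expression "rate(15 minutes)"
(The rate expression shown here is only an example; substitute the schedule or cron expression you
want the rule to use.)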
2. Grant permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy to allow Amazon CloudWatch Events to
assume the service role. Name it trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
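aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json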
c. Create the permissions policy JSON as shown in this sample for the pipeline named
MyFirstPipeline. Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
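d. Use the following command to attach the permissions policy to the role. (The policy name
CodePipeline-Permissions-Policy-For-CWE is only an example; you can use any name.)
aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-For-CWE --policy-document file://permissionspolicyforCWE.json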
You can add actions to your pipeline that are in an AWS Region different from your pipeline. When
an AWS service is the provider for an action, and this action type/provider type are in a different AWS
Region from your pipeline, this is a cross-region action. For more information about cross-region actions,
see Add a Cross-Region Action in CodePipeline (p. 322).
You can also create pipelines that build and deploy container-based applications by using Amazon ECS
as the deployment provider. Before you create a pipeline that deploys container-based applications
with Amazon ECS, you must create an image definitions file as described in Image Definitions File
Reference (p. 408).
CodePipeline uses change detection methods to start your pipeline when a source code change is
pushed. These detection methods are based on source type:
• CodePipeline uses Amazon CloudWatch Events to detect changes in your CodeCommit source
repository and branch or your Amazon S3 source bucket.
• CodePipeline uses webhooks to detect changes in your GitHub source repository and branch. A
webhook is an HTTP notification that detects events in an external tool and connects those external
events to a pipeline.
Note
When you use the console to create or edit a pipeline, the change detection resources are
created for you. If you use the AWS CLI to create the pipeline, you must create the additional
resources yourself. For more information, see Use CloudWatch Events to Start a Pipeline
(CodeCommit Source) (p. 140).
Topics
• Create a Pipeline (Console) (p. 187)
• Create a Pipeline (CLI) (p. 193)
When you use the console to create a pipeline, you must include a source stage and one or both of the
following:
• A build stage.
• A deployment stage.
When you use the pipeline wizard, CodePipeline creates the names of stages (source, build, staging).
These names cannot be changed. You can give more specific names (for example, BuildToGamma or
DeployToProd) to stages you add later.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline name, enter the name for your pipeline.
In a single AWS account, each pipeline you create in an AWS Region must have a unique name.
Names can be reused for pipelines in different Regions.
Note
After you create a pipeline, you cannot change its name. For information about other
limitations, see Limits in AWS CodePipeline (p. 412).
4. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for a pipeline named MyPipeline: AWSCodePipelineServiceRole-eu-west-2-MyPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
Depending on when your service role was created, you might need to update its permissions
to support additional AWS services. For information, see Add Permissions for Other AWS
Services (p. 366).
For more information about the service role and its policy statement, see Manage the CodePipeline
Service Role (p. 363).
5. In Artifact store, do one of the following:
a. Choose Default location to use the default artifact store, such as the Amazon S3 artifact bucket
designated as the default, for your pipeline in the AWS Region you have selected for your
pipeline.
b. Choose Custom location if you already have an artifact store, such as an Amazon S3 artifact
bucket, in the same Region as your pipeline.
Note
This is not the source bucket for your source code. This is the artifact store for your pipeline.
A separate artifact store, such as an Amazon S3 bucket, is required for each pipeline. When
you create or edit a pipeline, you must have an artifact bucket in the pipeline Region, and
then you must have one artifact bucket per AWS Region where you are running an action.
For more information, see A Quick Look at Input and Output Artifacts (p. 2) and
CodePipeline Pipeline Structure Reference (p. 393).
6. Choose Next.
• On the Step 2: Add source stage page, in Source provider, choose the type of repository where
your source code is stored, specify its required options, and then choose Next step.
• For GitHub:
1. Choose Connect to GitHub. If you are prompted to sign in, provide your GitHub credentials.
Important
Do not provide your AWS credentials.
2. If this is your first time connecting to GitHub from CodePipeline for this Region, you are
asked to authorize application access to your account. Review the permissions required for
integration, and then, if you want to continue, choose Authorize application. When you
connect to GitHub in the console, the following resources are created for you:
• CodePipeline uses an OAuth token to create an authorized application that is managed by
CodePipeline.
Note
In GitHub, there is a limit to the number of OAuth tokens you can use for an
application, such as CodePipeline. If you exceed this limit, retry the connection to
allow CodePipeline to reconnect by reusing existing tokens. For more information,
see Pipeline Error: I receive a pipeline error that says: "Could not access the GitHub
repository" or "Unable to connect to the GitHub repository" (p. 348).
• CodePipeline creates a webhook in GitHub to detect source changes and then start your
pipeline when a change occurs. In addition to the webhook, CodePipeline:
• Randomly generates a secret and uses it to authorize the connection to GitHub.
• Generates the webhook URL using the public endpoint for the Region and registers it with
GitHub. This subscribes the URL to receive repository events.
3. Choose the GitHub repository you want to use as the source location for your pipeline. In
Branch, from the drop-down list, choose the branch you want to use.
• For Amazon S3:
1. In Amazon S3 location, provide the Amazon S3 bucket name and path to the object in a bucket
with versioning enabled. The format of the bucket name and path looks like this:
s3://bucketName/folderName/objectName
2. After you choose the Amazon S3 source bucket, a message shows the Amazon CloudWatch
Events rule and the AWS CloudTrail trail to be created for this pipeline. Accept the defaults
under Change detection options. This allows CodePipeline to use Amazon CloudWatch Events
and AWS CloudTrail to detect changes for your new pipeline. Choose Next.
• For AWS CodeCommit:
• In Repository name, choose the name of the CodeCommit repository you want to use as the
source location for your pipeline. In Branch name, from the drop-down list, choose the branch
you want to use.
• After you choose the CodeCommit repository name and branch, a message is displayed in
Change detection options showing the Amazon CloudWatch Events rule to be created for this
pipeline. Accept the defaults under Change detection options. This allows CodePipeline to use
Amazon CloudWatch Events to detect changes for your new pipeline.
• For Amazon ECR:
• In Repository name, choose the name of your Amazon ECR repository.
• In Image tag, specify the image name and version, if different from LATEST.
• In Output artifacts, choose the output artifact default, such as MyApp, that contains the image
name and repository URI information you want the next stage to use.
For a tutorial about creating a pipeline for Amazon ECS with CodeDeploy blue-green
deployments that includes an Amazon ECR source stage, see Tutorial: Create a Pipeline with an
Amazon ECR Source and ECS-to-CodeDeploy Deployment.
When you include an Amazon ECR source stage in your pipeline, the source action generates an
imageDetail.json file as an output artifact when you commit a change. For information about
the imageDetail.json file, see imageDetail.json File for Amazon ECS Blue/Green Deployment
Actions (p. 410).
Note
The object and file type must be compatible with the deployment system you plan to use
(for example, Elastic Beanstalk or CodeDeploy). Supported file types might include .zip, .tar,
and .tgz files. For more information about the supported container types for Elastic
Beanstalk, see Customizing and Configuring Elastic Beanstalk Environments and Supported
Platforms. For more information about deploying revisions with CodeDeploy, see Uploading
Your Application Revision and Prepare a Revision.
• On the Step 3: Add build stage page, do one of the following, and then choose Next:
• Choose Skip build stage if you plan to create a deployment stage.
• In Build provider, choose the provider of build services (for example, AWS CodeBuild), and then
provide the configuration details for that provider:
In Region, choose the AWS Region where the resource is created or where you plan to create it.
The Region field designates where the AWS resources are created for this action type and provider
type. This field only displays for actions where the action provider is an AWS service. The Region
field defaults to the same AWS Region as your pipeline.
In Project name, choose your build project. If you have already created a build project in
CodeBuild, choose it. Or you can create a build project in CodeBuild and then return to this task.
Follow the instructions in Create a Pipeline That Uses CodeBuild in the CodeBuild User Guide.
• On the Step 4: Add deploy stage page, do one of the following, and then choose Next:
• Choose Skip deploy stage if you created a build stage in the previous step.
Note
This option does not appear if you have already skipped the build stage.
• In Deploy provider, choose a custom action that you have created for a deployment provider.
In Region, for cross-region actions only, choose the AWS Region where the resource is created.
The Region field designates where the AWS resources are created for this action type and provider
type. This field only displays for actions where the action provider is an AWS service. The Region
field defaults to the same AWS Region as your pipeline.
• In Deploy provider, fields are available for default providers as follows:
• CodeDeploy
In Application name, enter or choose the name of an existing CodeDeploy application. In
Deployment group, enter the name of a deployment group for the application. Choose Next.
You can also create an application, deployment group, or both in the CodeDeploy console.
• AWS Elastic Beanstalk
In Application name, enter or choose the name of an existing Elastic Beanstalk application. In
Environment name, enter an environment for the application. Choose Next. You can also create
an application, environment, or both in the Elastic Beanstalk console.
• AWS OpsWorks Stacks
In Stack, enter or choose the name of the stack you want to use. In Layer, choose the layer that
your target instances belong to. In App, choose the application that you want to update and
deploy. If you need to create an app, choose Create a new one in AWS OpsWorks.
For information about adding an application to a stack and layer in AWS OpsWorks, see Adding
Apps in the AWS OpsWorks User Guide.
For an end-to-end example of how to use a simple pipeline in CodePipeline as the source for
code that you run on AWS OpsWorks layers, see Using CodePipeline with AWS OpsWorks Stacks.
• AWS CloudFormation
Enter the action mode, the stack name, and the template information, and then choose Next.
• Amazon ECS
In Cluster name, enter or choose the name of an existing Amazon ECS cluster. In Service name,
enter or choose the name of the service running on the cluster. You can also create a cluster
and service. In Image filename, enter the name of the image definitions file that describes your
service's container and image.
Note
The Amazon ECS deployment action requires an imagedefinitions.json
file as an input to the deployment action. The default file name for the file is
imagedefinitions.json. If you choose to use a different file name, you must provide
it when you create the pipeline deployment stage. For more information, see
imagedefinitions.json File for Amazon ECS Standard Deployment Actions (p. 408).
Choose Next.
Note
Make sure your Amazon ECS cluster is configured with two or more instances. Amazon
ECS clusters must contain at least two instances so that one is maintained as the
primary instance and another is used to accommodate new deployments.
For a tutorial about deploying container-based applications with your pipeline, see Tutorial:
Continuous Deployment with CodePipeline.
• Amazon ECS (Blue/Green)
Enter the CodeDeploy application and deployment group, Amazon ECS task definition, and
AppSpec file information, and then choose Next.
Note
The Amazon ECS (Blue/Green) action requires an imageDetail.json file as an
input artifact to the deploy action. Because the Amazon ECR source action creates
this file, pipelines with an Amazon ECR source action do not need to provide an
imageDetail.json file. For more information, see imageDetail.json File for Amazon
ECS Blue/Green Deployment Actions (p. 410).
For a tutorial about creating a pipeline for blue-green deployments to an Amazon ECS cluster
with CodeDeploy, see Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-
CodeDeploy Deployment (p. 95).
• AWS Service Catalog
Choose Enter deployment configuration if you want to use fields in the console to specify
your configuration, or choose Configuration file if you have a separate configuration file. Enter
product and configuration information, and then choose Next.
For a tutorial about deploying product changes to AWS Service Catalog with your pipeline, see
Tutorial: Create a Pipeline That Deploys to AWS Service Catalog (p. 76).
• Alexa Skills Kit
In Alexa Skill ID, enter the skill ID for your Alexa skill. In Client ID and Client secret, enter the
credentials generated using a Login with Amazon (LWA) security profile. In Refresh token, enter
the refresh token you generated using the ASK CLI command for retrieving a refresh token.
Choose Next.
For a tutorial about deploying Alexa skills with your pipeline and generating the LWA
credentials, see Tutorial: Create a Pipeline That Deploys an Amazon Alexa Skill (p. 108).
• Amazon S3
In Bucket, enter the name of the Amazon S3 bucket you want to use. Choose Extract file before
deploy if the input artifact to your deploy stage is a ZIP file. If Extract file before deploy is
selected, you may optionally enter a value for Deployment path to which your ZIP file will be
unzipped. If it is not selected, you are required to enter a value in S3 object key.
Note
Most source and build stage output artifacts will be zipped. All pipeline source
providers except Amazon S3 will zip your source files before providing them as the
input artifact to the next action.
(Optional) In Canned ACL, enter the canned ACL to apply to the object deployed to Amazon S3.
Note
Applying a canned ACL overwrites any existing ACL applied to the object.
(Optional) In Cache control, specify the cache control parameters for requests to download
objects from the bucket. For a list of valid values, see the Cache-Control header field for
HTTP operations.
Choose Next.
For a tutorial about creating a pipeline with an Amazon S3 deployment action provider, see
Tutorial: Create a Pipeline That Uses Amazon S3 as a Deployment Provider (p. 114).
• On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline to
create the pipeline or Previous to go back and edit your choices. To exit the wizard without creating
a pipeline, choose Cancel.
Now that you've created your pipeline, you can view it in the console. The pipeline starts to run after you
create it. For more information, see View Pipeline Details and History in CodePipeline (p. 202). For more
information about making changes to your pipeline, see Edit a Pipeline in CodePipeline (p. 196).
For more information about pipeline structure, see CodePipeline Pipeline Structure Reference (p. 393)
and create-pipeline in the CodePipeline API Reference.
To create a JSON file, use the sample pipeline JSON file, edit it, and then call that file when you run the
create-pipeline command.
Prerequisites:
You need the ARN of the service role you created for CodePipeline in Getting Started with
CodePipeline (p. 9). You use the CodePipeline service role ARN in the pipeline JSON file when you
run the create-pipeline command. For more information about creating a service role, see Create the
CodePipeline Service Role (p. 276). Unlike the console, running the create-pipeline command in the
AWS CLI does not have the option to create the CodePipeline service role for you. The service role must
already exist.
You need the name of an Amazon S3 bucket where artifacts for the pipeline are stored. This bucket must
be in the same Region as the pipeline. You use the bucket name in the pipeline JSON file when you run
the create-pipeline command. Unlike the console, running the create-pipeline command in the AWS CLI
does not create an Amazon S3 bucket for storing artifacts. The bucket must already exist.
Note
You can also use the get-pipeline command to get a copy of the JSON structure of that
pipeline, and then modify that structure in a plain-text editor.
1. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), create a new text file in a
local directory.
2. Open the file in a plain-text editor and edit the values to reflect the structure you want to create. At
a minimum, you must change the name of the pipeline. You should also consider whether you want
to change:
• The Amazon S3 bucket where artifacts for this pipeline are stored.
• The source location for your code.
• The deployment provider.
• How you want your code deployed.
• The tags for your pipeline.
The following two-stage sample pipeline structure highlights the values you should consider
changing for your pipeline. Your pipeline likely contains more than two stages:
"pipeline": {
"roleArn": "arn:aws:iam::80398EXAMPLE::role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "S3"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"S3Bucket": "awscodepipeline-demobucket-example-date",
"S3ObjectKey": "ExampleCodePipelineSampleBundle.zip",
"PollForSourceChanges": "false"
},
"runOrder": 1
}
]
},
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "Deploy-CodeDeploy-Application",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-2-250656481468"
},
"name": "MyFirstPipeline",
"version": 1
},
"metadata": {
"pipelineArn": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline",
"updated": 1501626591.112,
"created": 1501626591.112
},
"tags": [{
"key": "Project",
"value": "ProjectA"
}]
}
This example adds tagging to the pipeline by including the Project tag key and ProjectA
value on the pipeline. For more information about tagging resources in CodePipeline, see Tagging
Resources (p. 134).
Make sure the PollForSourceChanges parameter in your JSON file is set as follows:
"PollForSourceChanges": "false",
CodePipeline uses Amazon CloudWatch Events to detect changes in your CodeCommit source
repository and branch or your Amazon S3 source bucket. CodePipeline uses webhooks to detect
changes in your GitHub source repository and branch. The next step includes instructions to
manually create these resources for your pipeline. Setting the flag to false disables periodic
checks, which are not necessary when you are using the recommended change detection methods.
3. To create a build, test, or deploy action in a Region different from your pipeline, you must add
the following to your pipeline structure. For instructions, see Add a Cross-Region Action in
CodePipeline (p. 322).
To create a pipeline
1. Run the create-pipeline command and use the --cli-input-json parameter to specify the JSON
file you created previously.
To create a pipeline named MySecondPipeline with a JSON file named pipeline.json that includes
the name "MySecondPipeline" as the value for name in the JSON, your command would look like
the following:
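aws codepipeline create-pipeline --cli-input-json file://pipeline.json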
Important
Be sure to include file:// before the file name. It is required in this command.
This command returns the structure of the entire pipeline you created.
2. To view the pipeline, either open the CodePipeline console and choose it from the list of pipelines, or
use the get-pipeline-state command. For more information, see View Pipeline Details and History in
CodePipeline (p. 202).
3. If you use the CLI to create a pipeline, you must manually create the recommended change
detection resources for your pipeline:
• For a pipeline with a CodeCommit repository, you must manually create the CloudWatch Events
rule, as described in Create a CloudWatch Events Rule for a CodeCommit Source (CLI) (p. 143).
• For a pipeline with an Amazon S3 source, you must manually create the CloudWatch Events rule
and AWS CloudTrail trail, as described in Use CloudWatch Events to Start a Pipeline (Amazon S3
Source) (p. 151).
• For a pipeline with a GitHub source, you must manually create the webhook, as described in Use
Webhooks to Start a Pipeline (GitHub Source) (p. 166).
Unlike creating a pipeline, editing a pipeline does not rerun the most recent revision through the
pipeline. If you want to run the most recent revision through a pipeline you've just edited, you
must manually rerun it. Otherwise, the edited pipeline runs the next time you make a change to a
source location configured in the source stage. For information, see Start a Pipeline Manually in AWS
CodePipeline (p. 184).
You can add actions to your pipeline that are in an AWS Region different from your pipeline. When
an AWS service is the provider for an action, and this action type/provider type are in a different AWS
Region from your pipeline, this is a cross-region action. For more information about cross-region actions,
see Add a Cross-Region Action in CodePipeline (p. 322).
CodePipeline uses change detection methods to start your pipeline when a source code change is
pushed. These detection methods are based on source type:
• CodePipeline uses Amazon CloudWatch Events to detect changes in your CodeCommit source
repository or your Amazon S3 source bucket.
• CodePipeline uses webhooks to detect changes in your GitHub source repository and branch.
Note
When you use the console to create or edit a pipeline, the change detection resources are
created for you. If you
use the AWS CLI to create the pipeline, you must create the additional resources yourself. For
more information about creating or updating a CodeCommit pipeline, see Create a CloudWatch
Events Rule for a CodeCommit Source (CLI) (p. 143). For more information about using the CLI
to create or update an Amazon S3 pipeline, see Create a CloudWatch Events Rule for an Amazon
S3 Source (CLI) (p. 154). For more information about creating or updating a GitHub pipeline,
see Use Webhooks to Start a Pipeline (GitHub Source) (p. 166).
Topics
• Edit a Pipeline (Console) (p. 196)
• Edit a Pipeline (AWS CLI) (p. 198)
To edit a pipeline
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline you want to edit. This opens a detailed view of the
pipeline, including the state of each of the actions in each stage of the pipeline.
3. On the pipeline details page, choose Edit.
4. On the Edit page, do one of the following:
• To edit a stage, choose Edit stage. You can add actions in serial and parallel with existing actions:
You can also edit actions in this view by choosing the edit icon for those actions. To delete an
action, choose the delete icon on that action.
• To edit an action, choose the edit icon for that action, and then on Edit action, change the values.
Items marked with an asterisk (*) are required.
• For a CodeCommit repository name and branch, a message appears showing the Amazon
CloudWatch Events rule to be created for this pipeline. If you remove the CodeCommit source, a
message appears showing the Amazon CloudWatch Events rule to be deleted.
• For an Amazon S3 source bucket, a message appears showing the Amazon CloudWatch Events
rule and AWS CloudTrail trail to be created for this pipeline. If you remove the Amazon S3
source, a message appears showing the Amazon CloudWatch Events rule and AWS CloudTrail
trail to be deleted. If the AWS CloudTrail trail is in use by other pipelines, the trail is not
removed and the data event is deleted.
• For a GitHub source, the following are added for the pipeline:
• CodePipeline uses an OAuth token to create an authorized application that is managed by
CodePipeline.
Note
In GitHub, there is a limit to the number of OAuth tokens you can use for an
application, such as CodePipeline. If you exceed this limit, retry the connection to
allow CodePipeline to reconnect by reusing existing tokens. For more information,
see Pipeline Error: I receive a pipeline error that says: "Could not access the GitHub
repository" or "Unable to connect to the GitHub repository" (p. 348).
• CodePipeline creates a webhook in GitHub to detect source changes and then start your
pipeline when a change occurs. CodePipeline creates the following along with the webhook:
• A secret is randomly generated and used to authorize the connection to GitHub.
• The webhook URL is generated using the public endpoint for the Region.
• The webhook is registered with GitHub. This subscribes the URL to receive repository
events.
If you delete a GitHub source action, the webhook is deregistered and deleted for you.
• To add a stage, choose + Add stage at the point in the pipeline where you want to add a stage.
Provide a name for the stage, and then add at least one action to it. Items marked with an asterisk
(*) are required.
• To delete a stage, choose the delete icon on that stage. The stage and all of its actions are deleted.
1. In the stage where you want to add your action, choose Edit stage, and then choose + Add action
group.
2. In Edit action, in Action name, enter the name of your action. The Action provider list displays
provider options by category. Look for the category (for example, Deploy). Under the category,
choose the provider (for example, AWS CodeDeploy). In Region, choose the AWS Region where
the resource is created or where you plan to create it. The Region field designates where the AWS
resources are created for this action type and provider type. This field only displays for actions
where the action provider is an AWS service. The Region field defaults to the same AWS Region as
your pipeline.
For more information about the requirements for actions in CodePipeline, including names
for input and output artifacts and how they are used, see Action Structure Requirements in
CodePipeline (p. 396). For examples of adding action providers and using the default fields for
each provider, see Create a Pipeline (Console) (p. 187).
To add CodeBuild as a build action or test action to a stage, see Use CodePipeline with CodeBuild
to Test Code and Run Builds in the CodeBuild User Guide.
Note
Some action providers, such as GitHub, require you to connect to the provider's website
before you can complete the configuration of the action. When you connect to a
provider's website, make sure you use the credentials for that website. Do not use your
AWS credentials.
3. When you have finished configuring your action, choose Save.
Note
You cannot rename a stage in the console view. Instead, add a stage with the name you
want, and then delete the old one. Make sure you have added all the actions you
want in that stage before you delete the old one.
5. When you have finished editing your pipeline, choose Save to return to the summary page.
Important
After you save your changes, you cannot undo them. You must edit the pipeline again.
If a revision is running through your pipeline when you save your changes, the run is not
completed. If you want a specific commit or change to run through the edited pipeline,
you must manually run it through the pipeline. Otherwise, the next commit or change runs
automatically through the pipeline.
6. To test your action, choose Release change to process that commit through the pipeline and commit
a change to the source specified in the source stage of the pipeline. Or follow the steps in Start a
Pipeline Manually in AWS CodePipeline (p. 184) to use the AWS CLI to manually release a change.
To edit a pipeline
1. Open a terminal session (Linux, macOS, or Unix) or command prompt (Windows) and run the get-
pipeline command to copy the pipeline structure into a JSON file. For example, for a pipeline named
MyFirstPipeline, enter the following command:
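aws codepipeline get-pipeline --name MyFirstPipeline > pipeline.json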
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and modify the structure of the file to reflect the
changes you want to make to the pipeline. For example, you can add or remove stages, or add
another action to an existing stage.
The following example shows how you would add another deployment stage in the pipeline.json file.
This stage runs after the first deployment stage named Staging.
Note
This is just a portion of the file, not the entire structure. For more information, see
CodePipeline Pipeline Structure Reference (p. 393).
,
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "Deploy-CodeDeploy-Application",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
},
{
"name": "Production",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "Deploy-Second-Deployment",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineProductionFleet"
},
"runOrder": 1
}
]
}
]
}
The following example shows how you would add a source stage that uses a GitHub repository as
its source action. For more information about how CodePipeline integrates with GitHub, see Source
Action Integrations (p. 12).
Note
This is just a portion of the file, not the entire structure. For more information, see
CodePipeline Pipeline Structure Reference (p. 393).
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"provider": "GitHub",
"version": "1"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"Owner": "MyGitHubAccountName",
"Repo": "MyGitHubRepositoryName",
"PollForSourceChanges": "false",
"Branch": "master",
"OAuthToken": "****"
},
"runOrder": 1
}
]
},
The value for OAuthToken remains masked because CodePipeline uses it to access the GitHub
repository. You can use a personal access token for this value. To create a personal access token,
see Pipeline Error: I receive a pipeline error that says: "Could not access the GitHub repository" or
"Unable to connect to the GitHub repository" (p. 348).
Note
Some edits, such as moving an action from one stage to another stage, delete the last
known state history for the action. If a pipeline contains one or more secret parameters,
such as an OAuth token for an action, that token is masked by a series of asterisks (****).
These secret parameters are left unchanged unless you edit that portion of the pipeline
(for example, if you change the name of the action that includes the OAuth token or the
name of the stage that contains an action that uses an OAuth token). If you make a change
that affects an action that includes an OAuth token, you must include the value of the
token in the edited JSON. For more information, see CodePipeline Pipeline Structure
Reference (p. 393). It is a security best practice to rotate your personal access token on
a regular basis. For more information, see Rotate Your GitHub Personal Access Token on a
Regular Basis (GitHub and CLI) (p. 388).
For information about using the CLI to add an approval action to a pipeline, see Add a Manual
Approval Action to a Pipeline in CodePipeline (p. 316).
Make sure the PollForSourceChanges parameter in your JSON file is set as follows:
"PollForSourceChanges": "false",
CodePipeline uses Amazon CloudWatch Events to detect changes in your CodeCommit source
repository and branch or your Amazon S3 source bucket. CodePipeline uses webhooks to detect
changes in your GitHub source repository and branch. The next step includes instructions for
creating these resources manually. Setting the flag to false disables periodic checks, which are not
required when you use the recommended change detection methods.
3. To add a build, test, or deploy action in a Region different from your pipeline, you must add the
following to your pipeline structure. For detailed instructions, see Add a Cross-Region Action in
CodePipeline (p. 322).
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
5. If you use the CLI to edit the pipeline, you must manually manage the recommended change
detection resources for your pipeline:
• For a CodeCommit repository, you must create the CloudWatch Events rule, as described in Create
a CloudWatch Events Rule for a CodeCommit Source (CLI) (p. 143).
• For an Amazon S3 source, you must create the CloudWatch Events rule and AWS CloudTrail trail,
as described in Use CloudWatch Events to Start a Pipeline (Amazon S3 Source) (p. 151).
• For a GitHub source, you must create the webhook, as described in Use Webhooks to Start a
Pipeline (GitHub Source) (p. 166).
6. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file:
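aws codepipeline update-pipeline --cli-input-json file://pipeline.json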
Important
Be sure to include file:// before the file name. It is required in this command.
7. Open the CodePipeline console and choose the pipeline you just edited from the list.
The pipeline shows your changes. The next time you make a change to the source location, the
pipeline runs that revision through the revised structure of the pipeline.
8. To manually run the last revision through the revised structure of the pipeline, run the start-
pipeline-execution command. For more information, see Start a Pipeline Manually in AWS
CodePipeline (p. 184).
For more information about the structure of a pipeline and expected values, see CodePipeline Pipeline
Structure Reference (p. 393) and AWS CodePipeline API Reference.
Topics
• View Pipeline Details and History (Console) (p. 202)
• View Pipeline Details and History (CLI) (p. 208)
Topics
• View Pipeline (Console) (p. 202)
• View Pipeline Execution History (Console) (p. 204)
• View Execution Status (Console) (p. 204)
• View Pipeline Execution Source Revisions (Console) (p. 205)
• View Action Executions (Console) (p. 207)
• View Action Artifacts and Artifact Store Information (Console) (p. 207)
To view a pipeline
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names and creation date of all pipelines associated with your AWS account are displayed, along
with links to view execution history.
2. To see details for a single pipeline, in Name, choose the pipeline. You can also select the pipeline,
and then choose View pipeline. A detailed view of the pipeline, including the state of each action in
each stage and the state of the transitions, is displayed.
The graphical view displays the following information for each stage: the stage name, every action
configured for the stage, and the state of the transitions between stages (enabled or disabled). For
the actions in each stage, it displays the action name, the provider configured for the action, and
whether the most recent execution of the action succeeded or failed.
5. To see the progress details for an action in a stage, choose Details when it is displayed next to an
action in progress (indicated by an In Progress message). If the action is in progress, you see the
incremental progress and the steps or actions as they occur.
Note
Details are available for source actions that retrieve content from GitHub repositories, but
not those that retrieve content from Amazon S3 buckets or CodeCommit repositories.
6. To approve or reject actions that have been configured for manual approval, choose Review.
7. To retry actions in a stage that were not completed successfully, choose Retry.
8. To get more information about errors or failures for a completed action in a stage, choose Details.
Details from the last time the action ran, including the results of that action (Succeeded or Failed)
are displayed.
9. To view details about source artifacts (output artifacts that originated in the first stage of a pipeline)
that are used in the latest pipeline execution for a stage, click in the details information area at the
bottom of the stage. You can view details about identifiers, such as commit IDs, check-in comments,
and the time since the artifact was created or updated.
10. To view details about the most recent executions for the pipeline, choose View history. For past
executions, you can view revision details associated with source artifacts, such as execution IDs,
status, start and end times, duration, and commit IDs and messages.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed, along with their status.
2. In Name, choose the name of the pipeline.
3. Choose View history.
4. View the status, source revisions, and change details related to each execution for your pipeline.
The following are valid states for pipelines, stages, and actions:
Pipeline-level states
SUPERSEDED: While this pipeline execution was waiting for the next stage to be completed, a newer
pipeline execution advanced and continued through the pipeline instead.
Stage-level states
Action-level states
FAILED: For Approval actions, the FAILED state means the action was either rejected by the reviewer
or failed due to an incorrect action configuration.
• Summary: Summary information about the most recent revision of the artifact. For GitHub and AWS
CodeCommit repositories, the commit message. For Amazon S3 buckets or actions, the user-provided
content of a codepipeline-artifact-revision-summary key specified in the object metadata.
• revisionUrl: The revision URL for the artifact revision (for example, the external repository URL).
• revisionId: The revision ID for the artifact revision. For example, for a source change in a CodeCommit
or GitHub repository, this is the commit ID. For artifacts stored in GitHub or CodeCommit repositories,
the commit ID is linked to a commit details page.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. Choose the name of the pipeline for which you want to view source revision details, and then do
one of the following:
• Choose View history. In Source revisions, the source change for each execution is listed.
• Locate an action for which you want to view source revision details, and then find the revision
information at the bottom of its stage:
Click in the details area to view more information about the artifact, including the length of time
since the artifact was committed. With the exception of artifacts stored in Amazon S3 buckets,
identifiers such as commit IDs in this information detail view are linked to source information
pages for the artifacts.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. Choose the name of the pipeline for which you want to view action details, and then choose View
history.
3. In Execution ID, choose the execution ID for which you want to view action execution details.
4. You can view the following information on the Timeline tab:
a. In Action name, choose the link to open a details page for the action where you can view status,
stage name, action name, configuration data, and artifact information.
b. In Provider, choose the link to view the action provider details. For example, in the preceding
example pipeline, if you choose CodeDeploy in either the Staging or Production stages, the
CodeDeploy console page for the CodeDeploy application configured for that stage is displayed.
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. Choose the name of the pipeline for which you want to view action details, and then choose View
history.
3. In Execution ID, choose the execution ID for which you want to view action details.
4. On the Timeline tab, in Action name, choose the link to open a details page for the action.
5. On the details page, in Execution summary, view the status and timing of the action execution.
6. In Action details, view the action provider and AWS Region where the execution runs. In Action
configuration, view the resource configuration for the action (for example, the CodeBuild build
project name).
7. In Artifacts, view the artifact details in Artifact type and Artifact provider. Choose the link under
Artifact name to view the artifacts in the artifact store.
• list-pipelines command to view a summary of all of the pipelines associated with your AWS account.
• get-pipeline command to review details of a single pipeline.
• list-pipeline-executions to view summaries of the most recent executions for a pipeline.
• get-pipeline-execution to view information about an execution of a pipeline, including details about
artifacts, the pipeline execution ID, and the name, version, and status of the pipeline.
• get-pipeline-state command to view pipeline, stage, and action status.
• list-action-executions to view action execution details for a pipeline.
Topics
• View Pipeline (CLI) (p. 208)
• View Execution History (CLI) (p. 209)
• View Execution Status (CLI) (p. 210)
• View Source Revisions (CLI) (p. 211)
• View Action Executions (CLI) (p. 213)
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI to run
the list-pipelines command:
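aws codepipeline list-pipelines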
This command returns a list of all of the pipelines associated with your AWS account.
2. To view details about a pipeline, run the get-pipeline command, specifying the unique name of
the pipeline. For example, to view details about a pipeline named MyFirstPipeline, enter the
following:
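aws codepipeline get-pipeline --name MyFirstPipeline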
• To view details about past executions of a pipeline, run the list-pipeline-executions command,
specifying the unique name of the pipeline. For example, to view details about past executions of a
pipeline named MyFirstPipeline, enter the following:
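aws codepipeline list-pipeline-executions --pipeline-name MyFirstPipeline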
This command returns summary information about all pipeline executions for which history has
been recorded. The summary includes start and end times, duration, and status.
The following example shows the returned data for a pipeline named MyFirstPipeline that has
had three executions:
{
"pipelineExecutionSummaries": [
{
"lastUpdateTime": 1496380678.648,
"pipelineExecutionId": "7cf7f7cb-3137-539g-j458-d7eu3EXAMPLE",
"startTime": 1496380258.243,
"status": "Succeeded"
},
{
"lastUpdateTime": 1496591045.634,
"pipelineExecutionId": "3137f7cb-8d494hj4-039j-d84l-d7eu3EXAMPLE",
"startTime": 1496590401.222,
"status": "Succeeded"
},
{
"lastUpdateTime": 1496946071.6456,
"pipelineExecutionId": "4992f7jf-7cf7-913k-k334-d7eu3EXAMPLE",
"startTime": 1496945471.5645,
"status": "Succeeded"
}
]
}
To view more details about a pipeline execution, run the get-pipeline-execution command,
specifying the unique ID of the pipeline execution. For example, to view more details about the first
execution in the previous example, enter the following:
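aws codepipeline get-pipeline-execution --pipeline-name MyFirstPipeline --pipeline-execution-id 7cf7f7cb-3137-539g-j458-d7eu3EXAMPLE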
This command returns summary information about an execution of a pipeline, including details
about artifacts, the pipeline execution ID, and the name, version, and status of the pipeline.
The following example shows the returned data for a pipeline named MyFirstPipeline:
{
"pipelineExecution": {
"pipelineExecutionId": "3137f7cb-7cf7-039j-s83l-d7eu3EXAMPLE",
"pipelineVersion": 2,
"pipelineName": "MyFirstPipeline",
"status": "Succeeded",
"artifactRevisions": [
{
"created": 1496380678.648,
"revisionChangeIdentifier": "1496380258.243",
"revisionId": "7636d59f3c461cEXAMPLE8417dbc6371",
"name": "MyApp",
"revisionSummary": "Updating the application for feature 12-4820"
}
]
}
}
• To view details about the current state of a pipeline, run the get-pipeline-state command,
specifying the unique name of the pipeline. For example, to view details about the current state of a
pipeline named MyFirstPipeline, enter the following:
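aws codepipeline get-pipeline-state --name MyFirstPipeline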
This command returns the current status of all stages of the pipeline and the status of the actions in
those stages.
The following example shows the returned data for a three-stage pipeline named
MyFirstPipeline, where the first two stages and actions show success, the third shows failure,
and the transition between the second and third stages is disabled:
{
"updated": 1427245911.525,
"created": 1427245911.525,
"pipelineVersion": 1,
"pipelineName": "MyFirstPipeline",
"stageStates": [
{
"actionStates": [
{
"actionName": "Source",
"entityUrl": "https://console.aws.amazon.com/s3/home?#",
"latestExecution": {
"status": "Succeeded",
"lastStatusChange": 1427298837.768
}
}
],
"stageName": "Source"
},
{
"actionStates": [
{
"actionName": "Deploy-CodeDeploy-Application",
"entityUrl": "https://console.aws.amazon.com/codedeploy/home?#",
"latestExecution": {
"status": "Succeeded",
"lastStatusChange": 1427298939.456,
"externalExecutionUrl": "https://console.aws.amazon.com/?#",
"externalExecutionId": ""c53dbd42-This-Is-An-Example"",
"summary": "Deployment Succeeded"
}
}
],
"inboundTransitionState": {
"enabled": true
},
"stageName": "Staging"
},
{
"actionStates": [
{
"actionName": "Deploy-Second-Deployment",
"entityUrl": "https://console.aws.amazon.com/codedeploy/home?#",
"latestExecution": {
"status": "Failed",
"errorDetails": {
"message": "Deployment Group is already deploying
deployment ...",
"code": "JobFailed"
},
"lastStatusChange": 1427246155.648
}
}
],
"inboundTransitionState": {
"disabledReason": "Disabled while I investigate the failure",
"enabled": false,
"lastChangedAt": 1427246517.847,
"lastChangedBy": "arn:aws:iam::80398EXAMPLE:user/CodePipelineUser"
},
"stageName": "Production"
}
]
}
• Summary: Summary information about the most recent revision of the artifact. For GitHub and AWS
CodeCommit repositories, the commit message. For Amazon S3 buckets or actions, the user-provided
content of a codepipeline-artifact-revision-summary key specified in the object metadata.
• revisionId: The commit ID for the artifact revision. For artifacts stored in GitHub or AWS CodeCommit
repositories, the commit ID is linked to a commit details page.
You can run the get-pipeline-execution command to view information about the most recent source
revisions that were included in a pipeline execution. After you first run the get-pipeline-state command
to get details about all stages in a pipeline, you identify the execution ID that applies to a stage for which
you want source revision details. Then you use the execution ID in the get-pipeline-execution command.
(Because stages in a pipeline might have been last successfully completed during different pipeline runs,
they can have different execution IDs.)
In other words, if you want to view details about artifacts currently in the Staging stage, run the get-
pipeline-state command, identify the current execution ID of the Staging stage, and then run the get-
pipeline-execution command using that execution ID.
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI to run
the get-pipeline-state command. For a pipeline named MyFirstPipeline, you would enter:
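aws codepipeline get-pipeline-state --name MyFirstPipeline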
This command returns the most recent state of a pipeline, including the latest pipeline execution ID
for each stage.
2. To view details about a pipeline execution, run the get-pipeline-execution command, specifying
the unique name of the pipeline and the pipeline execution ID of the execution for which you
want to view artifact details. For example, to view details about the execution of a pipeline named
MyFirstPipeline, with the execution ID 3137f7cb-7cf7-039j-s83l-d7eu3EXAMPLE, you would
enter the following:
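aws codepipeline get-pipeline-execution --pipeline-name MyFirstPipeline --pipeline-execution-id 3137f7cb-7cf7-039j-s83l-d7eu3EXAMPLE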
This command returns information about each source revision that is part of the pipeline execution
and identifying information about the pipeline. Only information about pipeline stages that were
included in that execution are included. There might be other stages in the pipeline that were not
part of that pipeline execution.
The following example shows a portion of the returned data for a pipeline named
MyFirstPipeline, where an artifact named "MyApp" is stored in a GitHub repository:
{
"pipelineExecution": {
"artifactRevisions": [
{
"created": 1427298837.7689769,
"name": "MyApp",
"revisionChangeIdentifier": "1427298921.3976923",
"revisionId": "7636d59f3c461cEXAMPLE8417dbc6371",
"revisionSummary": "Updating the application for feature 12-4820",
"revisionUrl": "https://api.github.com/repos/anycompany/MyApp/git/
commits/7636d59f3c461cEXAMPLE8417dbc6371"
}
//More revisions might be listed here
],
"pipelineExecutionId": "3137f7cb-7cf7-039j-s83l-d7eu3EXAMPLE",
"pipelineName": "MyFirstPipeline",
"pipelineVersion": 2,
"status": "Succeeded"
}
}
• To view details for all action executions in a pipeline, run the list-action-executions command,
specifying the unique name of the pipeline. For example, to view action executions in a pipeline
named MyFirstPipeline, enter the following:
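aws codepipeline list-action-executions --pipeline-name MyFirstPipeline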
{
"actionExecutionDetails": [
{
"actionExecutionId": "ID",
"lastUpdateTime": 1552958312.034,
"startTime": 1552958246.542,
"pipelineExecutionId": "Execution_ID",
"actionName": "Build",
"status": "Failed",
"output": {
"executionResult": {
"externalExecutionUrl": "Project_ID",
"externalExecutionSummary": "Build terminated with state:
FAILED",
"externalExecutionId": "ID"
},
"outputArtifacts": []
},
"stageName": "Beta",
"pipelineVersion": 8,
"input": {
"configuration": {
"ProjectName": "java-project"
},
"region": "us-east-1",
"inputArtifacts": [
{
"s3location": {
"bucket": "codepipeline-us-east-1-ID",
"key": "MyFirstPipeline/MyApp/Object.zip"
},
"name": "MyApp"
}
],
"actionTypeId": {
"version": "1",
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild"
}
}
},
. . .
• To view all action executions in a pipeline execution, run the list-action-executions command,
specifying the unique name of the pipeline and the execution ID. For example, to view action
executions for an Execution_ID, enter the following:
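aws codepipeline list-action-executions --pipeline-name MyFirstPipeline --filter pipelineExecutionId=Execution_ID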
{
"actionExecutionDetails": [
{
"stageName": "Beta",
"pipelineVersion": 8,
"actionName": "Build",
"status": "Failed",
"lastUpdateTime": 1552958312.034,
"input": {
"configuration": {
"ProjectName": "java-project"
},
"region": "us-east-1",
"actionTypeId": {
"owner": "AWS",
"category": "Build",
"provider": "CodeBuild",
"version": "1"
},
"inputArtifacts": [
{
"s3location": {
"bucket": "codepipeline-us-east-1-ID",
"key": "MyFirstPipeline/MyApp/Object.zip"
},
"name": "MyApp"
}
]
},
. . .
Topics
• Delete a Pipeline (Console) (p. 215)
• Delete a Pipeline (CLI) (p. 215)
1. Sign in to the AWS Management Console and open the CodePipeline console at http://
console.aws.amazon.com/codesuite/codepipeline/home.
The names and status of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline you want to delete.
3. On the pipeline details page, choose Edit.
4. On the Edit page, choose Delete.
5. Type delete in the field to confirm, and then choose Delete.
Important
This action cannot be undone.
To delete a pipeline
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI to run
the delete-pipeline command, specifying the name of the pipeline you want to delete. For example,
to delete a pipeline named MyFirstPipeline:
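aws codepipeline delete-pipeline --name MyFirstPipeline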
Note
When you create a pipeline with actions from multiple accounts, you must configure your
actions so that they can still access artifacts within the limitations of cross-account pipelines.
The following limitations apply to cross-account actions:
• In general, an action can consume an artifact only if the action is in the same account as the
pipeline or the artifact was created in the pipeline account.
In other words, you cannot pass an artifact from one account to another if neither account is
the pipeline account.
• Cross-account actions are not supported for the following action types:
• Jenkins build actions
• CodeBuild build or test actions
For this example, you must create an AWS Key Management Service (AWS KMS) key to use, add the key
to the pipeline, and set up account policies and roles to enable cross-account access. For an AWS KMS
key, you can use the key ID, the key ARN, or the alias ARN.
Note
Aliases are recognized only in the account that created the customer master key (CMK). For
cross-account actions, you can only use the key ID or key ARN to identify the key.
In this walkthrough and its examples, AccountA is the account originally used to create the pipeline.
It has access to the Amazon S3 bucket used to store pipeline artifacts and the service role used by
AWS CodePipeline. AccountB is the account originally used to create the CodeDeploy application,
deployment group, and service role used by CodeDeploy.
For AccountA to edit a pipeline to use the CodeDeploy application created by AccountB, AccountA
must:
• Request the ARN or account ID of AccountB (in this walkthrough, the AccountB ID is
012ID_ACCOUNT_B).
• Create or use an AWS KMS customer-managed key in the region for the pipeline, and grant
permissions to use that key to the service role (AWS-CodePipeline-Service) and AccountB.
• Create an Amazon S3 bucket policy that grants AccountB access to the Amazon S3 bucket (for
example, codepipeline-us-east-2-1234567890).
• Create a policy that allows AccountA to assume a role configured by AccountB, and attach that
policy to the service role (AWS-CodePipeline-Service).
• Edit the pipeline to use the customer-managed AWS KMS key instead of the default key.
For AccountB to allow access to its resources to a pipeline created in AccountA, AccountB must:
• Request the ARN or account ID of AccountA (in this walkthrough, the AccountA ID is
012ID_ACCOUNT_A).
• Create a policy applied to the Amazon EC2 instance role configured for CodeDeploy that allows access
to the Amazon S3 bucket (codepipeline-us-east-2-1234567890).
• Create a policy applied to the Amazon EC2 instance role configured for CodeDeploy that allows access
to the AWS KMS customer-managed key used to encrypt the pipeline artifacts in AccountA.
• Configure and attach an IAM role (CrossAccount_Role) with a trust relationship policy that allows
AccountA to assume the role.
• Create a policy that allows access to the deployment resources required by the pipeline and attach it to
CrossAccount_Role.
Topics
• Prerequisite: Create an AWS KMS Encryption Key (p. 217)
• Step 1: Set Up Account Policies and Roles (p. 217)
• Step 2: Edit the Pipeline (p. 223)
1. Sign in to the AWS Management Console with AccountA and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In Dashboard, choose Encryption keys.
3. In Encryption keys, in Filter, make sure the region selected is the same as the region where the
pipeline was created, and then choose Create key.
For example, if the pipeline was created in us-east-2, make sure the filter is set to US East (Ohio).
4. In Alias, type an alias to use for this key (for example, PipelineName-Key). Optionally, provide a
description for this key, and then choose Next Step.
5. In Define Key Administrative Permissions, choose your IAM user and any other users or groups you
want to act as administrators for this key, and then choose Next Step.
6. In Define Key Usage Permissions, under This Account, select the name of the service role for
the pipeline (for example, AWS-CodePipeline-Service). Under External Accounts, choose Add an
External Account. Type the account ID for AccountB to complete the ARN, and then choose Next
Step.
7. In Preview Key Policy, review the policy, and then choose Finish.
8. From the list of keys, choose the alias of your key and copy its ARN (for example, arn:aws:kms:us-
east-2:012ID_ACCOUNT_A:key/2222222-3333333-4444-556677EXAMPLE). You will need this
when you edit your pipeline and configure policies.
Topics
• Configure Policies and Roles in the Account That Will Create the Pipeline (AccountA) (p. 218)
• Configure Policies and Roles in the Account That Owns the AWS Resource (AccountB) (p. 220)
Configure Policies and Roles in the Account That Will Create the
Pipeline (AccountA)
To create a pipeline that uses CodeDeploy resources associated with another AWS account, AccountA
must configure policies for both the Amazon S3 bucket used to store artifacts and the service role for
CodePipeline.
To create a policy for the Amazon S3 bucket that grants access to AccountB (console)
1. Sign in to the AWS Management Console with AccountA and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. In the list of Amazon S3 buckets, choose the Amazon S3 bucket where artifacts for your pipelines
are stored. This bucket is named codepipeline-region-1234567EXAMPLE, where region is the AWS
region in which you created the pipeline and 1234567EXAMPLE is a ten-digit random number that
ensures the bucket name is unique (for example, codepipeline-us-east-2-1234567890).
3. On the detail page for the Amazon S3 bucket, choose Properties.
4. In the properties pane, expand Permissions, and then choose Add bucket policy.
Note
If a policy is already attached to your Amazon S3 bucket, choose Edit bucket policy. You
can then add the statements in the following example to the existing policy. To add a new
policy, choose the link, and follow the instructions in the AWS Policy Generator. For more
information, see Overview of IAM Policies.
5. In the Bucket Policy Editor window, type the following policy. This will allow AccountB access to
the pipeline artifacts, and will give AccountB the ability to add output artifacts if an action, such as
a custom source or build action, creates them.
In the following example, the account ID for AccountB is 012ID_ACCOUNT_B. The ARN for the Amazon
S3 bucket is codepipeline-us-east-2-1234567890. Replace these values with the account ID for the
account you want to allow access and the ARN for your Amazon S3 bucket:
{
"Version": "2012-10-17",
"Id": "SSEAndSSLPolicy",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"Bool": {
"aws:SecureTransport": false
}
}
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::012ID_ACCOUNT_B:root"
},
"Action": [
"s3:Get*",
"s3:Put*"
],
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::012ID_ACCOUNT_B:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890"
}
]
}
To create a policy for the service role for CodePipeline (console)
1. Sign in to the AWS Management Console with AccountA and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In Dashboard, choose Roles.
3. In the list of roles, under Role Name, choose the name of the service role for CodePipeline. By
default, this is AWS-CodePipeline-Service. If you used a different name for your service role, be sure
to choose it from the list.
4. On the Summary page, on the Permissions tab, expand Inline Policies, and then choose Create
Role Policy.
Note
If you have not previously created any role policies, Create Role Policy will not appear.
Choose the link to create a new policy instead.
5. In Set Permissions, choose Custom Policy, and then choose Select.
6. On the Review Policy page, type a name for the policy in Policy Name. In Policy Document,
type the following policy to allow the service role to assume the cross-account role configured by
AccountB. In the following example, 012ID_ACCOUNT_B is the account ID for AccountB:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::012ID_ACCOUNT_B:role/*"
]
}
}
Configure Policies and Roles in the Account That Owns the AWS
Resource (AccountB)
When you create an application, deployment, and deployment group in CodeDeploy, you also create
an Amazon EC2 instance role. (This role is created for you if you use the Run Deployment Walkthrough
wizard, but you can also create it manually.) For a pipeline created in AccountA to use CodeDeploy
resources created in AccountB, you must:
• Configure a policy for the instance role that allows it to access the Amazon S3 bucket where pipeline
artifacts are stored.
• Create a second role in AccountB configured for cross-account access.
This second role must not only have access to the Amazon S3 bucket in AccountA, it must also contain
a policy that allows access to the CodeDeploy resources and a trust relationship policy that allows
AccountA to assume the role.
Note
These policies are specific to setting up CodeDeploy resources to be used in a pipeline created
using a different AWS account. Other AWS resources will require policies specific to their
resource requirements.
To create a policy for the Amazon EC2 instance role configured for CodeDeploy (console)
1. Sign in to the AWS Management Console with AccountB and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In Dashboard, choose Roles.
3. In the list of roles, under Role Name, choose the name of the service role used as the Amazon EC2
instance role for the CodeDeploy application. This role name can vary, and more than one instance
role can be used by a deployment group. For more information, see Create an IAM Instance Profile
for your Amazon EC2 Instances.
4. On the Summary page, on the Permissions tab, expand Inline Policies, and then choose Create
Role Policy.
5. In Set Permissions, choose Custom Policy, and then choose Select.
6. On the Review Policy page, type a name for the policy in Policy Name. In Policy Document, type
the following policy to grant access to the Amazon S3 bucket used by AccountA to store artifacts
for pipelines (in this example, codepipeline-us-east-2-1234567890):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890"
]
}
]
}
7. Choose Validate Policy.
8. After the policy is validated, choose Apply Policy.
9. Create a second inline policy on the same role. On the Review Policy page, type a name for the
policy in Policy Name. In Policy Document, type the following policy to allow this role to use the
AWS KMS key created in AccountA:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:DescribeKey",
"kms:GenerateDataKey*",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:us-east-2:012ID_ACCOUNT_A:key/2222222-3333333-4444-556677EXAMPLE"
]
}
]
}
Important
You must use the account ID of AccountA in this policy as part of the resource ARN for the
AWS KMS key, as shown here, or the policy will not work.
10. Choose Validate Policy.
11. After the policy is validated, choose Apply Policy.
Now create an IAM role to use for cross-account access, and configure it so that AccountA can assume
the role. This role must contain policies that allow access to the CodeDeploy resources and the Amazon
S3 bucket used to store artifacts in AccountA.
1. Sign in to the AWS Management Console with AccountB and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In Dashboard, choose Roles, and then choose Create New Role.
3. On the Set New Role page, type a name for this role in Role Name (for example,
CrossAccount_Role). You can name this role anything you want as long as it follows the naming
conventions in IAM. Consider giving the role a name that clearly states its purpose.
4. On the Select Role Type page, choose Role for Cross-Account Access. Next to Provide access
between AWS accounts you own, choose Select.
5. Type the AWS account ID for the account that will create the pipeline in CodePipeline (AccountA),
and then choose Next Step.
Note
This step creates the trust relationship policy between AccountB and AccountA.
6. In Attach Policy, choose AmazonS3ReadOnlyAccess, and then choose Next Step.
Note
This is not the policy you will use. You must choose a policy to complete the wizard.
7. On the Review page, choose Create Role.
8. From the list of roles, choose the policy you just created (for example, CrossAccount_Role) to
open the Summary page for that role.
9. Expand Permissions, and then expand Inline Policies. Choose the link to create an inline policy.
10. In Set Permissions, choose Custom Policy, and then choose Select.
11. On the Review Policy page, type a name for the policy in Policy Name. In Policy Document, type
the following policy to allow access to CodeDeploy resources:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:GetApplicationRevision",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*"
}
]
}
12. Choose Validate Policy.
13. After the policy is validated, choose Apply Policy.
14. Create a second inline policy on the same role. On the Review Policy page, type a name for
the policy in Policy Name. In Policy Document, type the following policy to allow access to the
Amazon S3 bucket used to store pipeline artifacts in AccountA:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject*",
"s3:PutObject",
"s3:PutObjectAcl",
"codecommit:ListBranches",
"codecommit:ListRepositories"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
]
}
]
}
To add the resources associated with another AWS account (AWS CLI)
1. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline
command on the pipeline to which you want to add resources. Copy the command output to a JSON
file. For example, for a pipeline named MyFirstPipeline, you would type something similar to the
following:
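A representative command, writing the output to a file named pipeline.json (the file name is arbitrary), is:
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json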
2. Open the JSON file in any plain-text editor and edit the artifactStore section to add the
encryptionKey field for the AWS KMS key, as shown in this example:
{
"artifactStore": {
"location": "codepipeline-us-east-2-1234567890",
"type": "S3",
"encryptionKey": {
"id": "arn:aws:kms:us-
east-1:012ID_ACCOUNT_A:key/2222222-3333333-4444-556677EXAMPLE",
"type": "KMS"
}
},
3. Add a deploy action in a stage to use the CodeDeploy resources associated with AccountB,
including the roleArn values for the cross-account role you created (CrossAccount_Role).
The following example shows JSON that adds a deploy action named ExternalDeploy. It uses the
CodeDeploy resources created in AccountB in a stage named Staging. In the following example,
the account ID for AccountB is 012ID_ACCOUNT_B:
,
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "ExternalDeploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "AccountBApplicationName",
"DeploymentGroupName": "AccountBApplicationGroupName"
},
"runOrder": 1,
"roleArn":
"arn:aws:iam::012ID_ACCOUNT_B:role/CrossAccount_Role"
}
]
}
Note
This is not the JSON for the entire pipeline, just the structure for the action in a stage.
4. Save the file.
5. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file, similar
to the following:
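Assuming the edited structure was saved as pipeline.json, a representative command is:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json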
Important
Be sure to include file:// before the file name. It is required in this command.
To test the pipeline that uses resources associated with another AWS account
1. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the start-pipeline-
execution command, specifying the name of the pipeline, similar to the following:
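A representative command, using the example pipeline name, is:
aws codepipeline start-pipeline-execution --name MyFirstPipeline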
For more information, see Start a Pipeline Manually in AWS CodePipeline (p. 184).
2. Sign in to the AWS Management Console with AccountA and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
3. In Name, choose the name of the pipeline you just edited. This opens a detailed view of the pipeline,
including the state of each action in each stage of the pipeline.
4. Watch the progress through the pipeline. Wait for a success message on the action that uses the
resource associated with another AWS account.
Note
You will receive an error if you try to view details for the action while signed in with
AccountA. Sign out, and then sign in with AccountB to view the deployment details in
CodeDeploy.
Initially, only polling was supported. Events are now the default and recommended way to start your
pipeline when there’s a code change.
Important
You must explicitly set the PollForSourceChanges parameter to false within your Source
action’s configuration to stop a pipeline from polling. As a result, it is possible to erroneously
configure a pipeline with both event-based change detection and polling by, for example,
configuring a CloudWatch Events rule and also omitting the PollForSourceChanges
parameter. This results in duplicate pipeline executions, and the pipeline is counted toward the
limit on total number of polling pipelines, which by default is much lower than event-based
pipelines.
There are some important advantages to using push events instead of polling:
• On average, events are significantly faster. Events should start your pipeline almost immediately, as
opposed to polling, which requires waiting for the next periodic check.
• Higher limits. Compared to pipelines that poll for changes, CodePipeline can support far more event-
based pipelines.
• Better experience with many pipelines. Some customers might experience throttling or higher costs
by having many pipelines continuously polling their repository for code changes. You can avoid this by
using events.
When you use the CodePipeline console or AWS CodeStar to create a pipeline, events are enabled
by default. For backward compatibility, new pipelines created through the API, AWS CLI, or AWS
CloudFormation use the original polling functionality. We strongly recommend that you use events
instead. To opt in, use the AWS CLI or AWS CloudFormation to create the CloudWatch event or webhook
and disable polling. Use the instructions in the following table.
You should also use events on pipelines that were created before the new console was launched. To opt
in, use the CodePipeline console to create the CloudWatch event or webhook and disable polling. Use the
instructions in the following table.
• If you create and manage your pipeline with the AWS CLI (CodeCommit source): Use the AWS CLI to
disable periodic checks and create your Amazon CloudWatch Events resources. See Update Pipelines for
Push Events (CodeCommit Source) (CLI) (p. 231).
• If you create and manage your pipeline with AWS CloudFormation (CodeCommit source): Use AWS
CloudFormation to execute a change set that disables periodic checks and creates your Amazon
CloudWatch Events resources. See Update Pipelines for Push Events (CodeCommit Source) (AWS
CloudFormation Template) (p. 239).
• If you created your pipeline in the console before October 11, 2017: Use the CodePipeline console to
let CodePipeline disable periodic checks and create your Amazon CloudWatch Events resources. See
Update Pipelines for Push Events (CodeCommit or Amazon S3 Source) (Console) (p. 227).
• If you create and manage your pipeline with the AWS CLI (Amazon S3 source): Use the AWS CLI to
disable periodic checks and create your Amazon CloudWatch Events and CloudTrail resources. See
Update Pipelines for Push Events (Amazon S3 Source) (CLI) (p. 233).
• If you create and manage your pipeline with AWS CloudFormation (Amazon S3 source): Use AWS
CloudFormation to execute a change set that disables periodic checks and creates your Amazon
CloudWatch Events and AWS CloudTrail resources. See Update Pipelines for Push Events (Amazon S3
Source) (AWS CloudFormation Template) (p. 249).
• If you created your pipeline in the console before March 22, 2018: Use the CodePipeline console to let
CodePipeline disable periodic checks and create your Amazon CloudWatch Events and AWS CloudTrail
resources. See Update Pipelines for Push Events (CodeCommit or Amazon S3 Source) (Console) (p. 227).
• If you create and manage your pipeline with the AWS CLI (GitHub source): Use the AWS CLI to disable
periodic checks and create and register your webhook. See Update Pipelines for Push Events (GitHub
Source) (CLI) (p. 236).
• If you create and manage your pipeline with AWS CloudFormation (GitHub source): Use AWS
CloudFormation to execute a change set that disables periodic checks and creates and registers your
webhook. See Update Pipelines for Push Events (GitHub Source) (AWS CloudFormation
Template) (p. 268).
• If you created your pipeline in the console before May 1, 2018 (GitHub source): Use the CodePipeline
console to let CodePipeline disable periodic checks and create and register your webhook. See Update
Pipelines for Push Events (GitHub Source) (Console) (p. 229).
Use these steps to edit a pipeline that is using periodic checks. If you want to create a pipeline, see
Create a Pipeline in CodePipeline (p. 187).
1. Sign in to the AWS Management Console and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline you want to edit. This opens a detailed view of the
pipeline, including the state of each of the actions in each stage of the pipeline.
3. On the pipeline details page, choose Edit.
4. In Edit stage, choose the edit icon on the source action.
5. Expand Change Detection Options and choose Use CloudWatch Events to automatically start my
pipeline when a change occurs (recommended).
A message appears showing the Amazon CloudWatch Events rule to be created for this pipeline.
Choose Update.
If you are updating a pipeline that has an Amazon S3 source, you see the following message. Choose
Update.
6. When you have finished editing your pipeline, choose Save pipeline changes to return to the
summary page.
A message displays the name of the Amazon CloudWatch Events rule to be created for your pipeline.
Choose Save and continue.
7. To test your action, release a change by using the AWS CLI to commit a change to the source
specified in the source stage of the pipeline.
Follow these steps to edit a pipeline that is using polling (periodic checks) to use Amazon CloudWatch
Events instead. If you want to create a pipeline, see Create a Pipeline in CodePipeline (p. 187).
When you use the console, the PollForSourceChanges parameter for your pipeline is changed for
you. The GitHub webhook is created and registered for you.
1. Sign in to the AWS Management Console and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline you want to edit. This opens a detailed view of the
pipeline, including the state of each of the actions in each stage of the pipeline.
3. On the pipeline details page, choose Edit.
4. In Edit stage, choose the edit icon on the source action.
5. Expand Change detection options and choose Use Amazon CloudWatch Events to automatically
start my pipeline when a change occurs (recommended).
A message is displayed to advise that CodePipeline creates a webhook in GitHub to detect source
changes. Choose Update. In addition to creating the webhook, CodePipeline registers the webhook
with GitHub. This subscribes the URL to receive repository events.
6. When you have finished editing your pipeline, choose Save pipeline changes to return to the
summary page.
A message displays the name of the webhook to be created for your pipeline. Choose Save and
continue.
7. To test your action, release a change by using the AWS CLI to commit a change to the source
specified in the source stage of the pipeline.
To build an event-driven pipeline with CodeCommit, you edit the PollForSourceChanges parameter
of your pipeline and then create the following resources:
• Amazon CloudWatch Events rule
• IAM role to allow the CloudWatch event to start your pipeline
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, run the following command:
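A representative command, writing the output to a file named pipeline.json (the file name is arbitrary), is:
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json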
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing the
PollForSourceChanges parameter to false, as shown in this example.
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "MyTestRepo"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, remove
the metadata lines from the JSON file. Otherwise, the update-pipeline command cannot use it.
Remove the "metadata": { } lines and the "created", "pipelineARN", and "updated" fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
To create a CloudWatch Events rule with CodeCommit as the event source and CodePipeline
as the target
1. Add permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy that allows CloudWatch Events to assume
the service role. Name the trust policy trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
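A representative command, using the trust policy file created in the previous step, is:
aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json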
c. Create the permissions policy JSON, as shown in this sample, for the pipeline named
MyFirstPipeline. Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
Why am I making this change? Adding this policy to the role creates permissions for
CloudWatch Events.
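d. Attach the permissions policy to the role by calling the put-role-policy command. The policy
name shown here, CodePipeline-Permissions-Policy-For-CWE, is illustrative:
aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-For-CWE --policy-document file://permissionspolicyforCWE.json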
2. Call the put-rule command and include the --name and --event-pattern parameters.
Why am I making this change? This command creates the CloudWatch Events rule that watches your repository for changes.
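A representative command, using the example repository, region, and account from this walkthrough
(quoting shown for Linux or macOS shells; the detail filter mirrors the CloudWatch Events rule shown in
the AWS CloudFormation section later in this guide), might look like this:
aws events put-rule --name MyCodeCommitRepoRule --event-pattern '{"source":["aws.codecommit"],"detail-type":["CodeCommit Repository State Change"],"resources":["arn:aws:codecommit:us-west-2:80398EXAMPLE:MyTestRepo"],"detail":{"event":["referenceCreated","referenceUpdated"],"referenceType":["branch"],"referenceName":["master"]}}'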
3. To add CodePipeline as a target, call the put-targets command and include the following
parameters:
• The --rule parameter is used with the rule_name you created by using put-rule.
• The --targets parameter is used with the Id of the target in the list of targets and the ARN
of the target pipeline.
The following sample command specifies that for the rule called MyCodeCommitRepoRule, the
target Id is composed of the number one, indicating that in a list of targets for the rule, this is
target 1. The sample command also specifies an example ARN for the pipeline. The pipeline starts
when something changes in the repository.
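Such a command might look like this:
aws events put-targets --rule MyCodeCommitRepoRule --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline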
To build an event-driven pipeline with Amazon S3, you edit the PollForSourceChanges parameter of
your pipeline and then create the following resources:
• AWS CloudTrail trail, bucket, and bucket policy that Amazon S3 can use to log the events.
• Amazon CloudWatch Events rule
• IAM role to allow the CloudWatch event to start your pipeline
To use the AWS CLI to create a trail, call the create-trail command, specifying the trail name and the
bucket to which CloudTrail delivers your log files. For more information, see Creating a Trail with the
AWS Command Line Interface.
1. Call the create-trail command and include the --name and --s3-bucket-name parameters.
Why am I making this change? This creates the CloudTrail trail required for your S3 source bucket.
The following command uses --name and --s3-bucket-name to create a trail named my-trail
and a bucket named myBucket.
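A representative command is:
aws cloudtrail create-trail --name my-trail --s3-bucket-name myBucket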
2. Call the start-logging command and include the --name parameter.
Why am I making this change? This command starts the CloudTrail logging for your source bucket
and sends events to CloudWatch Events.
Example:
The following command uses --name to start logging on a trail named my-trail.
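A representative command is:
aws cloudtrail start-logging --name my-trail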
3. Call the put-event-selectors command and include the --trail-name and --event-selectors
parameters. Use event selectors to specify that you want your trail to log data events for your source
bucket and send the events to the Amazon CloudWatch Events rule.
Example:
The following command uses --trail-name and --event-selectors to specify data events for
a source bucket and prefix named myBucket/myFolder.
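A representative command (quoting shown for Linux or macOS shells; the selector JSON is a sketch of a
write-only data-event configuration for that bucket and prefix) might look like this:
aws cloudtrail put-event-selectors --trail-name my-trail --event-selectors '[{"ReadWriteType": "WriteOnly", "IncludeManagementEvents": false, "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::myBucket/myFolder/"]}]}]'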
To create a CloudWatch Events rule with Amazon S3 as the event source and CodePipeline as
the target and apply the permissions policy
1. Grant permissions for Amazon CloudWatch Events to use CodePipeline to invoke the rule. For more
information, see Using Resource-Based Policies for Amazon CloudWatch Events.
a. Use the following sample to create the trust policy to allow CloudWatch Events to assume the
service role. Name it trustpolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
b. Use the following command to create the Role-for-MyRule role and attach the trust policy.
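A representative command, using the trust policy file created in the previous step, is:
aws iam create-role --role-name Role-for-MyRule --assume-role-policy-document file://trustpolicyforCWE.json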
Why am I making this change? Adding this trust policy to the role creates permissions for
CloudWatch Events.
c. Create the permissions policy JSON, as shown here for the pipeline named MyFirstPipeline.
Name the permissions policy permissionspolicyforCWE.json.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:StartPipelineExecution"
],
"Resource": [
"arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline"
]
}
]
}
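d. Attach the permissions policy to the role by calling the put-role-policy command. The policy
name shown here, CodePipeline-Permissions-Policy-For-CWE, is illustrative:
aws iam put-role-policy --role-name Role-for-MyRule --policy-name CodePipeline-Permissions-Policy-For-CWE --policy-document file://permissionspolicyforCWE.json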
2. Call the put-rule command and include the --name and --event-pattern parameters.
The following sample command uses --event-pattern to create a rule named MyS3SourceRule.
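Such a command, assuming the storage-bucket bucket and index.zip object key used elsewhere in this
example (quoting shown for Linux or macOS shells), might look like this:
aws events put-rule --name MyS3SourceRule --event-pattern '{"source":["aws.s3"],"detail-type":["AWS API Call via CloudTrail"],"detail":{"eventSource":["s3.amazonaws.com"],"eventName":["CopyObject","PutObject","CompleteMultipartUpload"],"requestParameters":{"bucketName":["storage-bucket"],"key":["index.zip"]}}}'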
3. To add CodePipeline as a target, call the put-targets command and include the --rule and --
targets parameters.
The following command specifies that for the rule named MyS3SourceRule, the target Id is
composed of the number one, indicating that in a list of targets for the rule, this is target 1. The
command also specifies an example ARN for the pipeline. The pipeline starts when something
changes in the bucket.
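Such a command might look like this:
aws events put-targets --rule MyS3SourceRule --targets Id=1,Arn=arn:aws:codepipeline:us-west-2:80398EXAMPLE:MyFirstPipeline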
Important
When you create a pipeline with this method, the PollForSourceChanges parameter
defaults to true if it is not explicitly set to false. When you add event-based change detection,
you must add the parameter to your output and set it to false to disable polling. Otherwise,
your pipeline starts twice for a single source change. For details, see Default Settings for the
PollForSourceChanges Parameter (p. 403).
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, run the following command:
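A representative command, writing the output to a file named pipeline.json (the file name is arbitrary), is:
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json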
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing the
PollForSourceChanges parameter for a bucket named storage-bucket to false, as shown in
this example.
Why am I making this change? Setting this parameter to false turns off periodic checks so you can
use event-based change detection only.
"configuration": {
"S3Bucket": "storage-bucket",
"PollForSourceChanges": "false",
"S3ObjectKey": "index.zip"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, you must
remove the metadata lines from the JSON file. Otherwise, the update-pipeline command cannot
use it. Remove the "metadata": { } lines and the "created", "pipelineARN", and "updated"
fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
To build an event-driven pipeline with a GitHub source, you edit the PollForSourceChanges
parameter of your pipeline and then create the following resource manually:
• A webhook that detects changes in your GitHub repository and starts your pipeline
1. In a text editor, create and save a JSON file for the webhook you want to create. Use this sample file
for a webhook named my-webhook:
{"webhook":
{"name": "my-webhook",
"targetPipeline": "pipeline_name",
"targetAction": "source_action_name",
"filters": [
{
"jsonPath": "$.ref",
"matchEquals": "refs/heads/{Branch}"
}
],
"authentication": "GITHUB_HMAC",
"authenticationConfiguration": {"SecretToken":"secret"}
}
}
2. Call the put-webhook command and include the --cli-input-json and --region parameters.
The following sample command creates a webhook with the webhook_json JSON file.
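Assuming the definition was saved as webhook_json.json (the file name is illustrative) and the region
shown in the sample output below, a representative command is:
aws codepipeline put-webhook --cli-input-json file://webhook_json.json --region eu-central-1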
3. In the output shown in this example, the URL and ARN are returned for a webhook named my-
webhook.
{
"webhook": {
"url": "https://webhooks.domain.com/trigger111111111EXAMPLE11111111111111111",
"definition": {
"authenticationConfiguration": {
"SecretToken": "secret"
},
"name": "my-webhook",
"authentication": "GITHUB_HMAC",
"targetPipeline": "pipeline_name",
"targetAction": "Source",
"filters": [
{
"jsonPath": "$.ref",
"matchEquals": "refs/heads/{Branch}"
}
]
},
"arn": "arn:aws:codepipeline:eu-central-1:ACCOUNT_ID:webhook:my-webhook"
},
"tags": [{
"key": "Project",
"value": "ProjectA"
}]
}
This example adds tagging to the webhook by including the Project tag key and ProjectA
value on the webhook. For more information about tagging resources in CodePipeline, see Tagging
Resources (p. 134).
4. Call the register-webhook-with-third-party command and include the --webhook-name
parameter.
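Using the webhook name from this example, a representative command is:
aws codepipeline register-webhook-with-third-party --webhook-name my-webhook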
1. Run the get-pipeline command to copy the pipeline structure into a JSON file. For example, for a
pipeline named MyFirstPipeline, you would type the following command:
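A representative command, writing the output to a file named pipeline.json (the file name is arbitrary), is:
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json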
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any plain-text editor and edit the source stage by changing or adding the
PollForSourceChanges parameter. In this example, for a repository named UserGitHubRepo,
the parameter is set to false.
Why am I making this change? Changing this parameter turns off periodic checks so you can use
event-based change detection only.
"configuration": {
"Owner": "darlaker",
"Repo": "UserGitHubRepo",
"PollForSourceChanges": "false",
"Branch": "master",
"OAuthToken": "****"
},
3. If you are working with the pipeline structure retrieved using the get-pipeline command, you
must edit the structure in the JSON file by removing the metadata lines from the file. Otherwise,
the update-pipeline command cannot use it. Remove the "metadata": { } section from the
pipeline structure in the JSON file, including the "created", "pipelineARN", and "updated"
fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
Follow these steps to use AWS CloudFormation to edit a pipeline that is using periodic checks. If you
want to create a pipeline, see Continuous Delivery with CodePipeline.
If you use AWS CloudFormation to create and manage your pipelines, your template includes content like
the following.
Note
The Configuration property in the source stage includes a parameter called
PollForSourceChanges. If that parameter isn't included in your template, then
PollForSourceChanges is set to true by default.
YAML
Resources:
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: codecommit-polling-pipeline
RoleArn:
!GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: CodeCommit
OutputArtifacts:
- Name: SourceOutput
Configuration:
BranchName: !Ref BranchName
RepositoryName: !Ref RepositoryName
PollForSourceChanges: true
RunOrder: 1
JSON
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": true
},
"RunOrder": 1
}
]
},
To update your pipeline AWS CloudFormation template and create a CloudWatch Events rule
1. In the template, under Resources, use the AWS::IAM::Role AWS CloudFormation resource to
configure the IAM role that allows your event to start your pipeline. This entry creates a role that
uses two policies:
• A trust policy that allows CloudWatch Events to assume the role.
• A permissions policy that grants the role permission to start your pipeline.
Why am I making this change? Adding the AWS::IAM::Role resource enables AWS
CloudFormation to create permissions for CloudWatch Events. This resource is added to your AWS
CloudFormation stack.
YAML
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref
'AWS::Region', ':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
JSON
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
...
2. In the template, under Resources, use the AWS::Events::Rule AWS CloudFormation resource
to add a CloudWatch Events rule. This event pattern creates an event that monitors push changes
to your repository. When CloudWatch Events detects a repository state change, the rule invokes
StartPipelineExecution on your target pipeline.
Why am I making this change? Adding the AWS::Events::Rule resource enables AWS
CloudFormation to create the event. This resource is added to your AWS CloudFormation stack.
YAML
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.codecommit
detail-type:
- 'CodeCommit Repository State Change'
resources:
- !Join [ '', [ 'arn:aws:codecommit:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref RepositoryName ] ]
detail:
event:
- referenceCreated
- referenceUpdated
referenceType:
- branch
referenceName:
- master
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
JSON
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit Repository State Change"
],
"resources": [
{
"Fn::Join": [
"",
[
"arn:aws:codecommit:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "RepositoryName"
}
]
]
}
],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"
],
"referenceType": [
"branch"
],
"referenceName": [
"master"
]
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
},
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
7. In the template, update the PollForSourceChanges parameter in your pipeline's source action
configuration to false, as shown in the following snippets.
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: CodeCommit
OutputArtifacts:
- Name: SourceOutput
Configuration:
BranchName: !Ref BranchName
RepositoryName: !Ref RepositoryName
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
]
},
Example
When you create these resources with AWS CloudFormation, your pipeline is triggered when files in your
repository are created or updated. Here is the final template snippet:
YAML
Resources:
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region',
':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.codecommit
detail-type:
- 'CodeCommit Repository State Change'
resources:
- !Join [ '', [ 'arn:aws:codecommit:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref RepositoryName ] ]
detail:
event:
- referenceCreated
- referenceUpdated
referenceType:
- branch
referenceName:
- master
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: codecommit-events-pipeline
RoleArn:
!GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: CodeCommit
OutputArtifacts:
- Name: SourceOutput
Configuration:
BranchName: !Ref BranchName
RepositoryName: !Ref RepositoryName
PollForSourceChanges: false
RunOrder: 1
...
JSON
"Resources": {
...
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
}
}
]
}
}
]
}
},
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.codecommit"
],
"detail-type": [
"CodeCommit Repository State Change"
],
"resources": [
{
"Fn::Join": [
"",
[
"arn:aws:codecommit:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "RepositoryName"
}
]
]
}
],
"detail": {
"event": [
"referenceCreated",
"referenceUpdated"
],
"referenceType": [
"branch"
],
"referenceName": [
"master"
]
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
},
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": "codecommit-events-pipeline",
"RoleArn": {
"Fn::GetAtt": [
"CodePipelineServiceRole",
"Arn"
]
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeCommit"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"BranchName": {
"Ref": "BranchName"
},
"RepositoryName": {
"Ref": "RepositoryName"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
]
},
...
To build an event-driven pipeline with Amazon S3, you edit the PollForSourceChanges parameter of
your pipeline and then add the following resources to your template:
• Amazon CloudWatch Events requires that all Amazon S3 events must be logged. You must create an
AWS CloudTrail trail, bucket, and bucket policy that Amazon S3 can use to log the events that occur.
For more information, see Logging Management and Data Events with AWS CloudTrail.
• Amazon CloudWatch Events rule and IAM role to allow this event to start your pipeline.
If you use AWS CloudFormation to create and manage your pipelines, your template includes content like
the following.
Note
The Configuration property in the source stage includes a parameter called
PollForSourceChanges. If your template doesn't include that parameter, then
PollForSourceChanges is set to true by default.
YAML
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
RoleArn: !GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: S3
OutputArtifacts:
-
Name: SourceOutput
Configuration:
S3Bucket: !Ref SourceBucket
S3ObjectKey: !Ref S3SourceObjectKey
PollForSourceChanges: true
RunOrder: 1
...
JSON
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"RoleArn": {
"Fn::GetAtt": ["CodePipelineServiceRole", "Arn"]
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "S3"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"S3Bucket": {
"Ref": "SourceBucket"
},
"S3ObjectKey": {
"Ref": "SourceObjectKey"
},
"PollForSourceChanges": true
},
"RunOrder": 1
}
]
},
...
To create a CloudWatch Events rule with Amazon S3 as the event source and CodePipeline as
the target and apply the permissions policy
1. In the template, under Resources, use the AWS::IAM::Role AWS CloudFormation resource to
configure the IAM role that allows your event to start your pipeline. This entry creates a role that
uses two policies:
• A trust policy that allows CloudWatch Events to assume the role.
• A permissions policy that grants the role permission to start your pipeline.
Why am I making this change? Adding the AWS::IAM::Role resource enables AWS CloudFormation
to create permissions for Amazon CloudWatch Events. This resource is added to your AWS
CloudFormation stack.
YAML
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref
'AWS::Region', ':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
...
JSON
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
...
2. In the template, under Resources, use the AWS::Events::Rule AWS CloudFormation resource to
add a CloudWatch Events rule that watches your source bucket.
Why am I making this change? Adding the AWS::Events::Rule resource enables AWS
CloudFormation to create the event. This resource is added to your AWS CloudFormation stack.
YAML
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.s3
detail-type:
- 'AWS API Call via CloudTrail'
detail:
eventSource:
- s3.amazonaws.com
eventName:
- CopyObject
- PutObject
- CompleteMultipartUpload
requestParameters:
bucketName:
- !Ref SourceBucket
key:
- !Ref SourceObjectKey
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
...
JSON
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [
"CopyObject",
"PutObject",
"CompleteMultipartUpload"
],
"requestParameters": {
"bucketName": [
{
"Ref": "SourceBucket"
}
],
"key": [
{
"Ref": "SourceObjectKey"
}
]
}
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
}
},
...
3. In the template, add an Outputs section to export the ARN of the source bucket. The second
template, which creates the AWS CloudTrail resources, imports this value:
YAML
Outputs:
SourceBucketARN:
Description: "S3 bucket ARN that Cloudtrail will use"
Value: !GetAtt SourceBucket.Arn
Export:
Name: SourceBucketARN
JSON
"Outputs" : {
"SourceBucketARN" : {
"Description" : "S3 bucket ARN that Cloudtrail will use",
"Value" : { "Fn::GetAtt": ["SourceBucket", "Arn"] },
"Export" : {
"Name" : "SourceBucketARN"
}
}
...
4. Save your updated template to your local computer, and open the AWS CloudFormation console.
5. Choose your stack, and then choose Create Change Set for Current Stack.
6. Upload your updated template, and then view the changes listed in AWS CloudFormation. These are
the changes that will be made to the stack. You should see your new resources in the list.
7. Choose Execute.
8. In the template, update the PollForSourceChanges parameter in your pipeline's source action
configuration to false, as shown in the following snippets.
Why am I making this change? Setting this parameter to false turns off periodic checks so you can
use event-based change detection only.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: S3
OutputArtifacts:
- Name: SourceOutput
Configuration:
S3Bucket: !Ref SourceBucket
S3ObjectKey: !Ref SourceObjectKey
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "S3"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"S3Bucket": {
"Ref": "SourceBucket"
},
"S3ObjectKey": {
"Ref": "SourceObjectKey"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
Why am I making this change? Given the current limit of five trails per account, the CloudTrail trail
must be created and managed separately. (See Limits in AWS CloudTrail.) However, you can include
many Amazon S3 buckets on a single trail, so you can create the trail once and then add Amazon S3
buckets for other pipelines as necessary. Paste the following into your second sample template file.
YAML
###################################################################################
# Prerequisites:
# - S3 SourceBucket and SourceObjectKey must exist
###################################################################################
Parameters:
SourceObjectKey:
Description: 'S3 source artifact'
Type: String
Default: SampleApp_Linux.zip
Resources:
AWSCloudTrailBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket: !Ref AWSCloudTrailBucket
PolicyDocument:
Version: 2012-10-17
Statement:
-
Sid: AWSCloudTrailAclCheck
Effect: Allow
Principal:
Service:
- cloudtrail.amazonaws.com
Action: s3:GetBucketAcl
Resource: !GetAtt AWSCloudTrailBucket.Arn
-
Sid: AWSCloudTrailWrite
Effect: Allow
Principal:
Service:
- cloudtrail.amazonaws.com
Action: s3:PutObject
Resource: !Join [ '', [ !GetAtt AWSCloudTrailBucket.Arn, '/AWSLogs/', !
Ref 'AWS::AccountId', '/*' ] ]
Condition:
StringEquals:
s3:x-amz-acl: bucket-owner-full-control
AWSCloudTrailBucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
AwsCloudTrail:
DependsOn:
- AWSCloudTrailBucketPolicy
Type: AWS::CloudTrail::Trail
Properties:
S3BucketName: !Ref AWSCloudTrailBucket
EventSelectors:
-
DataResources:
-
Type: AWS::S3::Object
Values:
- !Join [ '', [ !ImportValue SourceBucketARN, '/', !Ref
SourceObjectKey ] ]
ReadWriteType: WriteOnly
IncludeGlobalServiceEvents: true
IsLogging: true
IsMultiRegionTrail: true
...
JSON
{
"Parameters": {
"SourceObjectKey": {
"Description": "S3 source artifact",
"Type": "String",
"Default": "SampleApp_Linux.zip"
}
},
"Resources": {
"AWSCloudTrailBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain"
},
"AWSCloudTrailBucketPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "AWSCloudTrailBucket"
},
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AWSCloudTrailAclCheck",
"Effect": "Allow",
"Principal": {
"Service": [
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:GetBucketAcl",
"Resource": {
"Fn::GetAtt": [
"AWSCloudTrailBucket",
"Arn"
]
}
},
{
"Sid": "AWSCloudTrailWrite",
"Effect": "Allow",
"Principal": {
"Service": [
"cloudtrail.amazonaws.com"
]
},
"Action": "s3:PutObject",
"Resource": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"AWSCloudTrailBucket",
"Arn"
]
},
"/AWSLogs/",
{
"Ref": "AWS::AccountId"
},
"/*"
]
]
},
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
}
},
"AwsCloudTrail": {
"DependsOn": [
"AWSCloudTrailBucketPolicy"
],
"Type": "AWS::CloudTrail::Trail",
"Properties": {
"S3BucketName": {
"Ref": "AWSCloudTrailBucket"
},
"EventSelectors": [
{
"DataResources": [
{
"Type": "AWS::S3::Object",
"Values": [
{
"Fn::Join": [
"",
[
{
"Fn::ImportValue": "SourceBucketARN"
},
"/",
{
"Ref": "SourceObjectKey"
}
]
]
}
]
}
],
"ReadWriteType": "WriteOnly"
}
],
"IncludeGlobalServiceEvents": true,
"IsLogging": true,
"IsMultiRegionTrail": true
}
}
}
}
...
Example
When you use AWS CloudFormation to create these resources, your pipeline is triggered when files in
your Amazon S3 source bucket are created or updated.
Note
Do not stop here. Although your pipeline is created, you must create a second AWS
CloudFormation template for your Amazon S3 pipeline. If you do not create the second
template, your pipeline does not have any change detection functionality.
YAML
Resources:
SourceBucket:
Type: AWS::S3::Bucket
Properties:
VersioningConfiguration:
Status: Enabled
CodePipelineArtifactStoreBucket:
Type: AWS::S3::Bucket
CodePipelineArtifactStoreBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket: !Ref CodePipelineArtifactStoreBucket
PolicyDocument:
Version: 2012-10-17
Statement:
-
Sid: DenyUnEncryptedObjectUploads
Effect: Deny
Principal: '*'
Action: s3:PutObject
Resource: !Join [ '', [ !GetAtt CodePipelineArtifactStoreBucket.Arn, '/
*' ] ]
Condition:
StringNotEquals:
s3:x-amz-server-side-encryption: aws:kms
-
Sid: DenyInsecureConnections
Effect: Deny
Principal: '*'
Action: s3:*
Resource: !Join [ '', [ !GetAtt CodePipelineArtifactStoreBucket.Arn, '/
*' ] ]
Condition:
Bool:
aws:SecureTransport: false
CodePipelineServiceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- codepipeline.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: AWS-CodePipeline-Service-3
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action:
- codecommit:CancelUploadArchive
- codecommit:GetBranch
- codecommit:GetCommit
- codecommit:GetUploadArchiveStatus
- codecommit:UploadArchive
Resource: '*'
-
Effect: Allow
Action:
- codedeploy:CreateDeployment
- codedeploy:GetApplicationRevision
- codedeploy:GetDeployment
- codedeploy:GetDeploymentConfig
- codedeploy:RegisterApplicationRevision
Resource: '*'
-
Effect: Allow
Action:
- codebuild:BatchGetBuilds
- codebuild:StartBuild
Resource: '*'
-
Effect: Allow
Action:
- devicefarm:ListProjects
- devicefarm:ListDevicePools
- devicefarm:GetRun
- devicefarm:GetUpload
- devicefarm:CreateUpload
- devicefarm:ScheduleRun
Resource: '*'
-
Effect: Allow
Action:
- lambda:InvokeFunction
- lambda:ListFunctions
Resource: '*'
-
Effect: Allow
Action:
- iam:PassRole
Resource: '*'
-
Effect: Allow
Action:
- elasticbeanstalk:*
- ec2:*
- elasticloadbalancing:*
- autoscaling:*
- cloudwatch:*
- s3:*
- sns:*
- cloudformation:*
- rds:*
- sqs:*
- ecs:*
Resource: '*'
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: s3-events-pipeline
RoleArn:
!GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: AWS
Version: 1
Provider: S3
OutputArtifacts:
- Name: SourceOutput
Configuration:
S3Bucket: !Ref SourceBucket
S3ObjectKey: !Ref SourceObjectKey
PollForSourceChanges: false
RunOrder: 1
-
Name: Beta
Actions:
-
Name: BetaAction
InputArtifacts:
- Name: SourceOutput
ActionTypeId:
Category: Deploy
Owner: AWS
Version: 1
Provider: CodeDeploy
Configuration:
ApplicationName: !Ref ApplicationName
DeploymentGroupName: !Ref BetaFleet
RunOrder: 1
ArtifactStore:
Type: S3
Location: !Ref CodePipelineArtifactStoreBucket
AmazonCloudWatchEventRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Principal:
Service:
- events.amazonaws.com
Action: sts:AssumeRole
Path: /
Policies:
-
PolicyName: cwe-pipeline-execution
PolicyDocument:
Version: 2012-10-17
Statement:
-
Effect: Allow
Action: codepipeline:StartPipelineExecution
Resource: !Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region',
':', !Ref 'AWS::AccountId', ':', !Ref AppPipeline ] ]
AmazonCloudWatchEventRule:
Type: AWS::Events::Rule
Properties:
EventPattern:
source:
- aws.s3
detail-type:
- 'AWS API Call via CloudTrail'
detail:
eventSource:
- s3.amazonaws.com
eventName:
- PutObject
- CompleteMultipartUpload
resources:
ARN:
- !Join [ '', [ !GetAtt SourceBucket.Arn, '/', !Ref SourceObjectKey ] ]
Targets:
-
Arn:
!Join [ '', [ 'arn:aws:codepipeline:', !Ref 'AWS::Region', ':', !Ref
'AWS::AccountId', ':', !Ref AppPipeline ] ]
RoleArn: !GetAtt AmazonCloudWatchEventRole.Arn
Id: codepipeline-AppPipeline
Outputs:
SourceBucketARN:
Description: "S3 bucket ARN that Cloudtrail will use"
Value: !GetAtt SourceBucket.Arn
Export:
Name: SourceBucketARN
JSON
"Resources": {
"SourceBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"VersioningConfiguration": {
"Status": "Enabled"
}
}
},
"CodePipelineArtifactStoreBucket": {
"Type": "AWS::S3::Bucket"
},
"CodePipelineArtifactStoreBucketPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "CodePipelineArtifactStoreBucket"
},
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"CodePipelineArtifactStoreBucket",
"Arn"
]
},
"/*"
]
]
},
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"CodePipelineArtifactStoreBucket",
"Arn"
]
},
"/*"
]
]
},
"Condition": {
"Bool": {
"aws:SecureTransport": false
}
}
}
]
}
}
},
"CodePipelineServiceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"codepipeline.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "AWS-CodePipeline-Service-3",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codecommit:CancelUploadArchive",
"codecommit:GetBranch",
"codecommit:GetCommit",
"codecommit:GetUploadArchiveStatus",
"codecommit:UploadArchive"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetApplicationRevision",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"devicefarm:ListProjects",
"devicefarm:ListDevicePools",
"devicefarm:GetRun",
"devicefarm:GetUpload",
"devicefarm:CreateUpload",
"devicefarm:ScheduleRun"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:ListFunctions"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*"
],
"Resource": "*"
}
]
}
}
]
}
},
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": "s3-events-pipeline",
"RoleArn": {
"Fn::GetAtt": [
"CodePipelineServiceRole",
"Arn"
]
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "AWS",
"Version": 1,
"Provider": "S3"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"S3Bucket": {
"Ref": "SourceBucket"
},
"S3ObjectKey": {
"Ref": "SourceObjectKey"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
]
},
{
"Name": "Beta",
"Actions": [
{
"Name": "BetaAction",
"InputArtifacts": [
{
"Name": "SourceOutput"
}
],
"ActionTypeId": {
"Category": "Deploy",
"Owner": "AWS",
"Version": 1,
"Provider": "CodeDeploy"
},
"Configuration": {
"ApplicationName": {
"Ref": "ApplicationName"
},
"DeploymentGroupName": {
"Ref": "BetaFleet"
}
},
"RunOrder": 1
}
]
}
],
"ArtifactStore": {
"Type": "S3",
"Location": {
"Ref": "CodePipelineArtifactStoreBucket"
}
}
}
},
"AmazonCloudWatchEventRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "cwe-pipeline-execution",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "codepipeline:StartPipelineExecution",
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
}
}
]
}
}
]
}
},
"AmazonCloudWatchEventRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"EventPattern": {
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [
"PutObject",
"CompleteMultipartUpload"
],
"resources": {
"ARN": [
{
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"SourceBucket",
"Arn"
]
},
"/",
{
"Ref": "SourceObjectKey"
}
]
]
}
]
}
}
},
"Targets": [
{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:codepipeline:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "AppPipeline"
}
]
]
},
"RoleArn": {
"Fn::GetAtt": [
"AmazonCloudWatchEventRole",
"Arn"
]
},
"Id": "codepipeline-AppPipeline"
}
]
}
}
},
"Outputs" : {
"SourceBucketARN" : {
"Description" : "S3 bucket ARN that Cloudtrail will use",
"Value" : { "Fn::GetAtt": ["SourceBucket", "Arn"] },
"Export" : {
"Name" : "SourceBucketARN"
}
}
}
}
...
To build an event-driven pipeline with GitHub, you edit the PollForSourceChanges
parameter of your pipeline and then add the following resources to your template:
• A GitHub webhook
If you use AWS CloudFormation to create and manage your pipelines, your template has content like the
following.
Note
Note the PollForSourceChanges configuration property in the source stage. If your template
doesn't include that property, then PollForSourceChanges is set to true by default.
YAML
Resources:
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: github-polling-pipeline
RoleArn:
!GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
OutputArtifacts:
- Name: SourceOutput
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref RepositoryName
Branch: !Ref BranchName
OAuthToken: !Ref GitHubOAuthToken
PollForSourceChanges: true
RunOrder: 1
...
JSON
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": "github-polling-pipeline",
"RoleArn": {
"Fn::GetAtt": [
"CodePipelineServiceRole",
"Arn"
]
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "ThirdParty",
"Version": 1,
"Provider": "GitHub"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"Owner": {
"Ref": "GitHubOwner"
},
"Repo": {
"Ref": "RepositoryName"
},
"Branch": {
"Ref": "BranchName"
},
"OAuthToken": {
"Ref": "GitHubOAuthToken"
},
"PollForSourceChanges": true
},
"RunOrder": 1
}
]
},
...
Note
When you use the CLI or AWS CloudFormation to create a pipeline and add a webhook,
you must disable periodic checks. To disable periodic checks, you must explicitly add
the PollForSourceChanges parameter and set it to false, as detailed in the final
procedure below. Otherwise, PollForSourceChanges defaults to true for a pipeline
created with the CLI or AWS CloudFormation, and the parameter does not display in the
pipeline structure output. For more information about PollForSourceChanges defaults,
see Default Settings for the PollForSourceChanges Parameter (p. 403).
YAML
Parameters:
GitHubOwner:
Type: String
GitHubSecret:
Type: String
NoEcho: true
GitHubOAuthToken:
Type: String
NoEcho: true
...
JSON
{
"Parameters": {
"BranchName": {
"Description": "GitHub branch name",
"Type": "String",
"Default": "master"
},
"GitHubOwner": {
"Type": "String"
},
"GitHubSecret": {
"Type": "String",
"NoEcho": true
},
"GitHubOAuthToken": {
"Type": "String",
"NoEcho": true
},
...
Note
The TargetAction you specify must match the Name property of the source action
defined in the pipeline.
If RegisterWithThirdParty is set to true, make sure the user associated with the OAuthToken
can set the required scopes in GitHub. The token and webhook require the following GitHub scopes:
• repo - used for full control to read and pull artifacts from public and private repositories into a
pipeline.
• admin:repo_hook - used for full control of repository hooks.
Otherwise, GitHub returns a 404. For more information about the 404 returned, see
https://help.github.com/articles/about-webhooks.
YAML
AppPipelineWebhook:
Type: AWS::CodePipeline::Webhook
Properties:
Authentication: GITHUB_HMAC
AuthenticationConfiguration:
SecretToken: !Ref GitHubSecret
Filters:
-
JsonPath: "$.ref"
MatchEquals: refs/heads/{Branch}
TargetPipeline: !Ref AppPipeline
TargetAction: SourceAction
Name: AppPipelineWebhook
TargetPipelineVersion: !GetAtt AppPipeline.Version
RegisterWithThirdParty: true
...
JSON
"AppPipelineWebhook": {
"Type": "AWS::CodePipeline::Webhook",
"Properties": {
"Authentication": "GITHUB_HMAC",
"AuthenticationConfiguration": {
"SecretToken": {
"Ref": "GitHubSecret"
}
},
"Filters": [
{
"JsonPath": "$.ref",
"MatchEquals": "refs/heads/{Branch}"
}
],
"TargetPipeline": {
"Ref": "AppPipeline"
},
"TargetAction": "SourceAction",
"Name": "AppPipelineWebhook",
"TargetPipelineVersion": {
"Fn::GetAtt": [
"AppPipeline",
"Version"
]
},
"RegisterWithThirdParty": true
}
},
...
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
Why am I making this change? Changing this parameter to false turns off periodic checks so you
can use event-based change detection only.
YAML
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
OutputArtifacts:
- Name: SourceOutput
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref RepositoryName
Branch: !Ref BranchName
OAuthToken: !Ref GitHubOAuthToken
PollForSourceChanges: false
RunOrder: 1
JSON
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "ThirdParty",
"Version": 1,
"Provider": "GitHub"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"Owner": {
"Ref": "GitHubOwner"
},
"Repo": {
"Ref": "RepositoryName"
},
"Branch": {
"Ref": "BranchName"
},
"OAuthToken": {
"Ref": "GitHubOAuthToken"
},
"PollForSourceChanges": false
},
"RunOrder": 1
}
Example
When you create these resources with AWS CloudFormation, the defined webhook is created in the
specified GitHub repository, and your pipeline is triggered on commit.
YAML
Parameters:
GitHubOwner:
Type: String
GitHubSecret:
Type: String
NoEcho: true
GitHubOAuthToken:
Type: String
NoEcho: true
Resources:
AppPipelineWebhook:
Type: AWS::CodePipeline::Webhook
Properties:
Authentication: GITHUB_HMAC
AuthenticationConfiguration:
SecretToken: !Ref GitHubSecret
Filters:
-
JsonPath: "$.ref"
MatchEquals: refs/heads/{Branch}
TargetPipeline: !Ref AppPipeline
TargetAction: SourceAction
Name: AppPipelineWebhook
TargetPipelineVersion: !GetAtt AppPipeline.Version
RegisterWithThirdParty: true
AppPipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: github-events-pipeline
RoleArn:
!GetAtt CodePipelineServiceRole.Arn
Stages:
-
Name: Source
Actions:
-
Name: SourceAction
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
OutputArtifacts:
- Name: SourceOutput
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref RepositoryName
Branch: !Ref BranchName
OAuthToken: !Ref GitHubOAuthToken
PollForSourceChanges: false
RunOrder: 1
...
JSON
{
"Parameters": {
"BranchName": {
"Description": "GitHub branch name",
"Type": "String",
"Default": "master"
},
"RepositoryName": {
"Description": "GitHub repository name",
"Type": "String",
"Default": "test"
},
"GitHubOwner": {
"Type": "String"
},
"GitHubSecret": {
"Type": "String",
"NoEcho": true
},
"GitHubOAuthToken": {
"Type": "String",
"NoEcho": true
},
"ApplicationName": {
"Description": "CodeDeploy application name",
"Type": "String",
"Default": "DemoApplication"
},
"BetaFleet": {
"Description": "Fleet configured in CodeDeploy",
"Type": "String",
"Default": "DemoFleet"
}
},
"Resources": {
...
},
"AppPipelineWebhook": {
"Type": "AWS::CodePipeline::Webhook",
"Properties": {
"Authentication": "GITHUB_HMAC",
"AuthenticationConfiguration": {
"SecretToken": {
"Ref": "GitHubSecret"
}
},
"Filters": [
{
"JsonPath": "$.ref",
"MatchEquals": "refs/heads/{Branch}"
}
],
"TargetPipeline": {
"Ref": "AppPipeline"
},
"TargetAction": "SourceAction",
"Name": "AppPipelineWebhook",
"TargetPipelineVersion": {
"Fn::GetAtt": [
"AppPipeline",
"Version"
]
},
"RegisterWithThirdParty": true
}
},
"AppPipeline": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Name": "github-events-pipeline",
"RoleArn": {
"Fn::GetAtt": [
"CodePipelineServiceRole",
"Arn"
]
},
"Stages": [
{
"Name": "Source",
"Actions": [
{
"Name": "SourceAction",
"ActionTypeId": {
"Category": "Source",
"Owner": "ThirdParty",
"Version": 1,
"Provider": "GitHub"
},
"OutputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Configuration": {
"Owner": {
"Ref": "GitHubOwner"
},
"Repo": {
"Ref": "RepositoryName"
},
"Branch": {
"Ref": "BranchName"
},
"OAuthToken": {
"Ref": "GitHubOAuthToken"
},
"PollForSourceChanges": false
},
"RunOrder": 1
...
You can use the CodePipeline console or the AWS CLI to create a CodePipeline service role. A service
role is required to create a pipeline, and the pipeline is always associated with that service role.
The service role is not an AWS managed role. It is created initially for pipeline creation, and as
new permissions are added to the service role policy, you may need to update the service role for your
pipeline. After your pipeline is created with a service role, you cannot apply a different service role
to that pipeline. Attach the recommended policy to the service role as detailed in Review the Default
CodePipeline Service Role Policy (p. 363).
For more information about the service role and its policy statement, see Manage the CodePipeline
Service Role (p. 363).
1. Choose Create pipeline and complete the Step 1: Choose pipeline settings page in the pipeline
creation wizard.
Note
After you create a pipeline, you cannot change its name. For information about other
limitations, see Limits in AWS CodePipeline (p. 412).
2. In Service role, do one of the following:
• Choose New service role to allow CodePipeline to create a new service role
in IAM. In Role name, the role and policy name both default to this format:
AWSCodePipelineServiceRole-region-pipeline_name. For example, this is the service role
created for this tutorial: AWSCodePipelineServiceRole-eu-west-2-MyFirstPipeline.
• Choose Existing service role to use a service role already created in IAM. In Role name, choose
your service role from the list.
Note
In the console, service roles created before September 2018 are created with the name
"oneClick_AWS-CodePipeline-Service_ID-Number".
Service roles created after September 2018 use the service role name format
"AWSCodePipelineServiceRole-Region-Pipeline_Name". For example, for a pipeline
named MyFirstPipeline created in the console in eu-west-2, the service role named
"AWSCodePipelineServiceRole-eu-west-2-MyFirstPipeline" is created. The policy name
format is the same as the role name format.
3. Complete the pipeline creation. Your pipeline service role is available to view in your list of IAM
roles, and you can view the service role ARN associated with a pipeline by running the get-pipeline
command with the AWS CLI.
1. Use the IAM console or the AWS CLI to create a role with the policy detailed in Review the Default
CodePipeline Service Role Policy (p. 363). The policy name format is normally the same as the role
name format.
2. Use the service role ARN when you create your pipeline with the AWS CLI or AWS CloudFormation.
3. After you create it, your pipeline service role is available to view in your list of IAM roles, and you can
view the service role ARN associated with a pipeline by running the get-pipeline command with the
AWS CLI.
You can use the CLI to specify tags when you create a pipeline. You can use the console or the CLI to
add or remove tags and to update the values of tags in a pipeline. You can add up to 50 tags to each
pipeline.
Topics
• Tag Pipelines (Console) (p. 277)
• Tag Pipelines (CLI) (p. 279)
Topics
• Add Tags to a Pipeline (Console) (p. 277)
• View Tags for a Pipeline (Console) (p. 278)
• Edit Tags for a Pipeline (Console) (p. 279)
• Remove Tags from a Pipeline (Console) (p. 279)
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Pipelines page, choose the pipeline where you want to add tags.
3. From the navigation pane, choose Settings.
4. Under Pipeline tags, choose Edit.
5. In the Key and Value fields, enter a key and value for the tag you want to add.
6. (Optional) Choose Add tag to add more rows and enter more tags.
7. Choose Submit. The tags are listed under pipeline settings.
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Pipelines page, choose the pipeline where you want to view tags.
3. From the navigation pane, choose Settings.
4. Under Pipeline tags, view the tags for the pipeline under the Key and Value columns.
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Pipelines page, choose the pipeline where you want to update tags.
3. From the navigation pane, choose Settings.
4. Under Pipeline tags, choose Edit.
5. In the Key and Value fields, update the values in each field as needed. For example, for the Project
key, in Value, change ProjectA to ProjectB.
6. Choose Submit.
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Pipelines page, choose the pipeline where you want to remove tags.
3. From the navigation pane, choose Settings.
4. Under Pipeline tags, choose Edit.
5. Next to the key and value for each tag you want to delete, choose Remove tag.
6. Choose Submit.
Topics
• Add Tags to a Pipeline (CLI) (p. 279)
• View Tags for a Pipeline (CLI) (p. 280)
• Edit Tags for a Pipeline (CLI) (p. 280)
• Remove Tags from a Pipeline (CLI) (p. 280)
To add a tag to a pipeline when you create it, see Create a Pipeline in CodePipeline (p. 187).
In these steps, we assume that you have already installed a recent version of the AWS CLI or updated to
the current version. For more information, see Installing the AWS Command Line Interface.
At the terminal or command line, run the tag-resource command, specifying the Amazon Resource
Name (ARN) of the pipeline where you want to add tags and the key and value of the tag you want to
add. You can add more than one tag to a pipeline. For example, to tag a pipeline named MyPipeline
with two tags, a tag key named DeploymentEnvironment with the tag value of Test, and a tag key
named IscontainerBased with the tag value of true:
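The command might look similar to the following, where account-id is a placeholder for your AWS
account ID:

aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:MyPipeline --tags key=DeploymentEnvironment,value=Test key=IscontainerBased,value=true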
At the terminal or command line, run the list-tags-for-resource command. For example, to view a list
of tag keys and tag values for a pipeline named MyPipeline with the ARN value
arn:aws:codepipeline:us-west-2:account-id:MyPipeline:
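aws codepipeline list-tags-for-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:MyPipeline

If successful, this command returns output similar to the following: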
{
"tags": {
"Project": "ProjectA",
"IscontainerBased": "true"
}
}
At the terminal or command line, run the tag-resource command, specifying the ARN of the pipeline
where you want to update a tag and specify the tag key and tag value:
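For example, to change the value of the Project tag to ProjectB (account-id is a placeholder for your
AWS account ID):

aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:MyPipeline --tags key=Project,value=ProjectB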
At the terminal or command line, run the untag-resource command, specifying the ARN of the pipeline
where you want to remove tags and the tag key of the tag you want to remove. For example, to remove
multiple tags on a pipeline named MyPipeline with the tag keys Project and IscontainerBased:
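The command might look similar to the following (account-id is a placeholder for your AWS account ID):

aws codepipeline untag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:MyPipeline --tag-keys Project IscontainerBased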
If successful, this command returns nothing. To verify the tags associated with the pipeline, run the list-
tags-for-resource command.
CodePipeline provides support for six types of actions:
• Source
• Build
• Test
• Deploy
• Approval
• Invoke
For information about the AWS services and partner products and services you can integrate into your
pipeline based on action type, see Integrations with CodePipeline Action Types (p. 12).
Topics
• Create and Add a Custom Action in CodePipeline (p. 282)
• Tag a Custom Action in CodePipeline (p. 292)
• Invoke an AWS Lambda Function in a Pipeline in CodePipeline (p. 294)
• Retry a Failed Action in CodePipeline (p. 310)
• Manage Approval Actions in CodePipeline (p. 312)
• Add a Cross-Region Action in CodePipeline (p. 322)
You can create custom actions for the following AWS CodePipeline action categories:
• A custom build action that builds or transforms items
• A custom deploy action that deploys items to one or more servers, websites, or repositories
• A custom test action that configures and runs automated tests
• A custom invoke action that runs functions
When you create a custom action, you must also create a job worker that will poll CodePipeline for job
requests for this custom action, execute the job, and return the status result to CodePipeline. This job
worker can be located on any computer or resource as long as it has access to the public endpoint for
CodePipeline. To easily manage access and security, consider hosting your job worker on an Amazon EC2
instance.
The following diagram shows a high-level view of a pipeline that includes a custom build action:
When a pipeline includes a custom action as part of a stage, the pipeline will create a job request. A
custom job worker detects that request and performs that job (in this example, a custom process using
third-party build software). When the action is complete, the job worker returns either a success result or
a failure result. If a success result is received, the pipeline will transition the revision and its artifacts to
the next action. If a failure is returned, the pipeline will not transition the revision to the next action in
the pipeline.
Note
These instructions assume that you have already completed the steps in Getting Started with
CodePipeline (p. 9).
Topics
• Create a Custom Action (p. 283)
• Create a Job Worker for Your Custom Action (p. 286)
• Add a Custom Action to a Pipeline (p. 290)
1. Open a text editor and create a JSON file for your custom action that includes the action category,
the action provider, and any settings required by your custom action. For example, to create a
custom build action that requires only one property, your JSON file might look like this:
{
"category": "Build",
"provider": "My-Build-Provider-Name",
"version": "1",
"settings": {
"entityUrlTemplate": "https://my-build-instance/job/{Config:ProjectName}/",
"executionUrlTemplate": "https://my-build-instance/job/{Config:ProjectName}/
lastSuccessfulBuild/{ExternalExecutionId}/"
},
"configurationProperties": [{
"name": "ProjectName",
"required": true,
"key": true,
"secret": false,
"queryable": false,
"description": "The name of the build project must be provided when this action
is added to the pipeline.",
"type": "String"
}],
"inputArtifactDetails": {
"maximumCount": integer,
"minimumCount": integer
},
"outputArtifactDetails": {
"maximumCount": integer,
"minimumCount": integer
},
"tags": [{
"key": "Project",
"value": "ProjectA"
}]
}
This example adds tagging to the custom action by including the Project tag key and ProjectA
value on the custom action. For more information about tagging resources in CodePipeline, see
Tagging Resources (p. 134).
There are two properties included in the JSON file, entityUrlTemplate and
executionUrlTemplate. You can refer to a name in the configuration properties of the custom
action within the URL templates by following the format of {Config:name}, as long as the
configuration property is both required and not secret. For example, in the sample above, the
entityUrlTemplate value refers to the configuration property ProjectName.
• entityUrlTemplate: the static link that provides information about the service provider for the
action. In the example, the build system includes a static link to each build project. The link format
will vary, depending on your build provider (or, if you are creating a different action type, such as
test, other service provider). You must provide this link format so that when the custom action is
added, the user can choose this link to open a browser to a page on your website that provides the
specifics for the build project (or test environment).
• executionUrlTemplate: the dynamic link that will be updated with information about the
current or most recent run of the action. When your custom job worker updates the status of a
job (for example, success, failure, or in progress), it will also provide an externalExecutionId
that will be used to complete the link. This link can be used to provide details about the run of an
action.
For example, when you view the action in the pipeline, you see the following two links:
This static link appears after you add your custom action and points to the address in
entityUrlTemplate, which you specify when you create your custom action.
This dynamic link is updated after every run of the action and points to the address in
executionUrlTemplate, which you specify when you create your custom action.
For more information about these link types, as well as RevisionURLTemplate and
ThirdPartyURL, see ActionTypeSettings and CreateCustomActionType in the CodePipeline API
Reference. For more information about action structure requirements and how to create an action,
see CodePipeline Pipeline Structure Reference (p. 393).
2. Save the JSON file and give it a name you can easily remember (for example,
MyCustomAction.json).
3. Open a terminal session (Linux, OS X, Unix) or command prompt (Windows) on a computer where
you have installed the AWS CLI.
4. Use the AWS CLI to run the aws codepipeline create-custom-action-type command, specifying the
name of the JSON file you just created.
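For example, if you saved the file from step 2 as MyCustomAction.json:

aws codepipeline create-custom-action-type --cli-input-json file://MyCustomAction.json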
5. This command returns the entire structure of the custom action you created, as well as the JobList
action configuration property, which is added for you. When you add the custom action to a
pipeline, you can use JobList to specify which projects from the provider you can poll for jobs. If
you do not configure this, all available jobs will be returned when your custom job worker polls for
jobs.
For example, the preceding command might return a structure similar to the following:
{
"actionType": {
"inputArtifactDetails": {
"maximumCount": 1,
"minimumCount": 1
},
"actionConfigurationProperties": [
{
"secret": false,
"required": true,
"name": "ProjectName",
"key": true,
"description": "The name of the build project must be provided when
this action is added to the pipeline."
}
],
"outputArtifactDetails": {
"maximumCount": 0,
"minimumCount": 0
},
"id": {
"category": "Build",
"owner": "Custom",
"version": "1",
"provider": "My-Build-Provider-Name"
},
"settings": {
"entityUrlTemplate": "https://my-build-instance/job/{Config:ProjectName}/",
"executionUrlTemplate": "https://my-build-instance/job/mybuildjob/
lastSuccessfulBuild/{ExternalExecutionId}/"
}
}
}
Note
As part of the output of the create-custom-action-type command, the id section includes
"owner": "Custom". CodePipeline automatically assigns Custom as the owner of custom
action types. This value can't be assigned or changed when you use the create-custom-
action-type command or the update-pipeline command.
There are many ways to design your job worker. The following sections provide some practical guidance
for developing your custom job worker for CodePipeline.
Topics
• Choose and Configure a Permissions Management Strategy for Your Job Worker (p. 286)
• Develop a Job Worker for Your Custom Action (p. 288)
• Custom Job Worker Architecture and Examples (p. 289)
The simplest strategy is to add the infrastructure you need for your custom job worker by creating
Amazon EC2 instances with an IAM instance role, which allows you to easily scale up the resources you
need for your integration. You can use the built-in integration with AWS to simplify the interaction
between your custom job worker and CodePipeline.
1. Learn more about Amazon EC2 and determine whether it is the right choice for your integration. For
information, see Amazon EC2 - Virtual Server Hosting.
2. Get started creating your Amazon EC2 instances. For information, see Getting Started with Amazon
EC2 Linux Instances.
Another strategy to consider is using identity federation with IAM to integrate your existing identity
provider system and resources. This strategy is particularly useful if you already have a corporate identity
provider or are already configured to support users using web identity providers. Identity federation
allows you to grant secure access to AWS resources, including CodePipeline, without having to create
or manage IAM users. You can leverage features and policies for password security requirements and
credential rotation. You can use sample applications as templates for your own design.
1. Learn more about IAM identity federation. For information, see Manage Federation.
2. Review the examples in Scenarios for Granting Temporary Access to identify the scenario for
temporary access that best fits the needs of your custom action.
3. Review code examples of identity federation relevant to your infrastructure, such as:
A third strategy to consider is to create an IAM user to use under your AWS account when running your
custom action and job worker.
1. Learn more about IAM best practices and use cases in IAM Best Practices and Use Cases.
2. Get started creating IAM users by following the steps in Creating an IAM User in Your AWS Account.
The following is an example policy you might create for use with your custom job worker. This policy is
meant as an example only and is provided as-is.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:PollForJobs",
"codepipeline:AcknowledgeJob",
"codepipeline:GetJobDetails",
"codepipeline:PutJobSuccessResult",
"codepipeline:PutJobFailureResult"
],
"Resource": [
"arn:aws:codepipeline:us-east-2::actionType:custom/Build/MyBuildProject/1/"
]
}
]
Note
Consider using the AWSCodePipelineCustomActionAccess managed policy for the IAM user.
5. While the action is running, the job worker can call PutJobSuccessResult with a continuation
token (the serialization of the state of the job generated by the job worker, for example a build
identifier in JSON format, or an Amazon S3 object key), as well as the ExternalExecutionId
information that will be used to populate the link in executionUrlTemplate. This will update
the console view of the pipeline with a working link to specific action details while it is in progress.
Although not required, it is a best practice because it enables users to view the status of your custom
action while it runs.
Once PutJobSuccessResult is called, the job is considered complete. A new job is created in
CodePipeline that includes the continuation token. This job will appear if your job worker calls
PollForJobs again. This new job can be used to check on the state of the action, and either returns
with a continuation token, or returns without a continuation token once the action is complete.
Note
If your job worker performs all the work for a custom action, you should consider breaking
your job worker processing into at least two steps. The first step establishes the details page
for your action. Once you have created the details page, you can serialize the state of the
job worker and return it as a continuation token, subject to size limits (see Limits in AWS
CodePipeline (p. 412)). For example, you could write the state of the action into the string
you use as the continuation token. The second step (and subsequent steps) of your job worker
processing perform the actual work of the action. The final step returns success or failure to
CodePipeline, with no continuation token on the final step.
For more information about using the continuation token, see the specifications for
PutJobSuccessResult in the CodePipeline API Reference.
6. Once the custom action completes, the job worker returns the result of the custom action to
CodePipeline by calling one of two APIs:
• PutJobSuccessResult without a continuation token, which indicates the custom action ran
successfully
• PutJobFailureResult, which indicates the custom action did not run successfully
Depending on the result, the pipeline will either continue on to the next action (success) or stop
(failure).
To upload artifacts to the Amazon S3 bucket, you must additionally configure the Amazon S3
PutObject request to use encryption. Currently only SSE-KMS is supported for encryption. In order to
know whether to use the default key or a customer-managed key to upload artifacts, your custom job
worker must look at the job data and check the encryption key property. If the encryption key property
is set, you should use that encryption key ID when configuring SSE-KMS. If the key is null, you use the
default master key. CodePipeline uses the default Amazon S3 master key unless otherwise configured.
The following sample shows how to create the KMS parameters in Java:
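The following is a minimal sketch using the AWS SDK for Java (version 1); the kmsKeyId, bucketName,
objectKey, file, and s3Client variables are assumed to be supplied by your job worker:

import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

// Choose the key: the encryption key ID from the CodePipeline job data if one
// is set, or the default master key if the job data's encryption key is null.
SSEAwsKeyManagementParams kmsParams = (kmsKeyId != null)
        ? new SSEAwsKeyManagementParams(kmsKeyId)   // customer-managed key from job data
        : new SSEAwsKeyManagementParams();          // default master key

// Configure the PutObject request to use SSE-KMS and upload the artifact.
PutObjectRequest request = new PutObjectRequest(bucketName, objectKey, file)
        .withSSEAwsKeyManagementParams(kmsParams);
s3Client.putObject(request);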
For more samples, see Specifying the AWS Key Management Service in Amazon S3 Using the AWS
SDKs. For more information about the Amazon S3 bucket for CodePipeline, see CodePipeline
Concepts (p. 4).
A more complex example of a custom job worker is available on GitHub. This sample is open source and
provided as-is.
• Sample Job Worker for CodePipeline: Download the sample from the GitHub repository.
Topics
• Add a Custom Action to a Pipeline (Console) (p. 290)
• Add a Custom Action to an Existing Pipeline (CLI) (p. 290)
1. Open a terminal session (Linux, macOS, or Unix) or command prompt (Windows) and run the get-
pipeline command to copy the pipeline structure you want to edit into a JSON file. For example, for
a pipeline named MyFirstPipeline, you would type the following command:
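aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json

Here, pipeline.json is an example output file name; you can use a different name.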
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Open the JSON file in any text editor and modify the structure of the file to add your custom action
to an existing stage.
Note
If you want your action to run in parallel with another action in that stage, make sure you
assign it the same runOrder value as that action.
For example, to modify the structure of a pipeline to add a stage named Build and to add a build
custom action to that stage, you might modify the JSON to add the Build stage before a deployment
stage as follows:
,
{
"name": "MyBuildStage",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "MyBuildCustomAction",
"actionTypeId": {
"category": "Build",
"owner": "Custom",
"version": "1",
"provider": "My-Build-Provider-Name"
},
"outputArtifacts": [
{
"name": "MyBuiltApp"
}
],
"configuration": {
"ProjectName": "MyBuildProject"
},
"runOrder": 1
}
]
},
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyBuiltApp"
}
],
"name": "Deploy-CodeDeploy-Application",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "CodePipelineDemoApplication",
"DeploymentGroupName": "CodePipelineDemoFleet"
},
"runOrder": 1
}
]
}
]
}
3. To apply your changes, run the update-pipeline command, specifying the pipeline JSON file, similar
to the following:
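For example, if you saved the pipeline structure as pipeline.json:

aws codepipeline update-pipeline --cli-input-json file://pipeline.json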
Important
Be sure to include file:// before the file name. It is required in this command.
The pipeline shows your changes. The next time you make a change to the source location, the
pipeline will run that revision through the revised structure of the pipeline.
You can add, remove, and update the values of tags in a custom action. You can add up to 50 tags to
each custom action.
Topics
• Add Tags to a Custom Action (p. 292)
• View Tags for a Custom Action (p. 293)
• Edit Tags for a Custom Action (p. 293)
• Remove Tags from a Custom Action (p. 293)
In these steps, we assume that you have already installed a recent version of the AWS CLI or updated to
the current version. For more information, see Installing the AWS Command Line Interface.
At the terminal or command line, run the tag-resource command, specifying the Amazon Resource
Name (ARN) of the custom action where you want to add tags and the key and value of the tag you want
to add. You can add more than one tag to a custom action. For example, to tag a custom action with
two tags, a tag key named TestActionType with the tag value of UnitTest, and a tag key named
ApplicationName with the tag value of MyApplication:
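aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:actiontype:Owner/Category/Provider/Version --tags key=TestActionType,value=UnitTest key=ApplicationName,value=MyApplication

In this example command, account-id and the Owner/Category/Provider/Version segments are
placeholders for your AWS account ID and your custom action's identifiers.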
At the terminal or command line, run the list-tags-for-resource command. For example, to view
a list of tag keys and tag values for a custom action with the ARN
arn:aws:codepipeline:us-west-2:account-id:actiontype:Owner/Category/Provider/Version:
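aws codepipeline list-tags-for-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:actiontype:Owner/Category/Provider/Version

If successful, this command returns output similar to the following: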
{
"tags": {
"TestActionType": "UnitTest",
"ApplicationName": "MyApplication"
}
}
At the terminal or command line, run the tag-resource command, specifying the Amazon Resource
Name (ARN) of the custom action where you want to update a tag and specify the tag key and tag value:
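For example, to change the value of the TestActionType tag to IntegrationTest (the placeholders in the
ARN identify your account and custom action):

aws codepipeline tag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:actiontype:Owner/Category/Provider/Version --tags key=TestActionType,value=IntegrationTest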
At the terminal or command line, run the untag-resource command, specifying the ARN of the custom
action where you want to remove tags and the tag key of the tag you want to remove. For example, to
remove a tag on a custom action with the tag key TestActionType:
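aws codepipeline untag-resource --resource-arn arn:aws:codepipeline:us-west-2:account-id:actiontype:Owner/Category/Provider/Version --tag-keys TestActionType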
If successful, this command returns nothing. To verify the tags associated with the custom action, run the
list-tags-for-resource command.
• To roll out changes to your environment by applying or updating an AWS CloudFormation template.
• To create resources on demand in one stage of a pipeline using AWS CloudFormation and delete them
in another stage.
• To deploy application versions with zero downtime in AWS Elastic Beanstalk with a Lambda function
that swaps CNAME values.
• To deploy to Amazon ECS Docker instances.
• To back up resources before building or deploying by creating an AMI snapshot.
• To add integration with third-party products to your pipeline, such as posting messages to an IRC
client.
This topic assumes you are familiar with AWS CodePipeline and AWS Lambda and know how to create
pipelines, functions, and the IAM policies and roles on which they depend. This topic shows you how to:
• Create a Lambda function that tests whether a webpage was deployed successfully.
• Configure the CodePipeline and Lambda execution roles and the permissions required to run the
function as part of the pipeline.
• Edit a pipeline to add the Lambda function as an action.
• Test the action by manually releasing a change.
This topic includes sample functions that demonstrate the flexibility of working with Lambda functions
in CodePipeline.
Each sample function includes information about the permissions you must add to the role. For
information about limits in AWS Lambda, see Limits in the AWS Lambda Developer Guide.
Important
The sample code, roles, and policies included in this topic are examples only, and are provided
as-is.
Topics
• Step 1: Create a Pipeline (p. 295)
• Step 2: Create the Lambda Function (p. 295)
• Step 3: Add the Lambda Function to a Pipeline in the CodePipeline Console (p. 298)
• Step 4: Test the Pipeline with the Lambda function (p. 299)
• Step 5: Next Steps (p. 300)
• Example JSON Event (p. 300)
• Additional Sample Functions (p. 302)
1. Follow the first three steps in Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26) to create
an Amazon S3 bucket, CodeDeploy resources, and a two-stage pipeline. Choose the Amazon Linux
option for your instance types. You can use any name you want for the pipeline, but the steps in this
topic use MyLambdaTestPipeline.
2. On the status page for your pipeline, in the CodeDeploy action, choose Details. On the deployment
details page for the deployment group, choose an instance ID from the list.
3. In the Amazon EC2 console, on the Description tab for the instance, copy the IP address in Public IP
(for example, 192.0.2.4). You use this address as the target of the function in AWS Lambda.
Note
The default service role for CodePipeline, AWS-CodePipeline-Service, includes the Lambda
permissions required to invoke the function, so you do not have to create an additional
invocation policy or role. However, if you have modified the default service role or selected
a different one, make sure the policy for the role allows the lambda:InvokeFunction and
lambda:ListFunctions permissions. Otherwise, pipelines that include Lambda actions fail.
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. Choose Policies, and then choose Create Policy. Choose the JSON tab, and then paste the following
policy into the field.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:*"
],
"Effect": "Allow",
"Resource": "arn:aws:logs:*:*:*"
},
{
"Action": [
"codepipeline:PutJobSuccessResult",
"codepipeline:PutJobFailureResult"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
1. Sign in to the AWS Management Console and open the AWS Lambda console at
https://console.aws.amazon.com/lambda/.
2. On the Functions page, choose Create function.
Note
If you see a Welcome page instead of the Lambda page, choose Get Started Now.
3. On the Create function page, choose Author from scratch. In Name, enter a name for your Lambda
function (for example, MyLambdaFunctionForAWSCodePipeline). In Description, enter an
optional description for the function (for example, A sample test to check whether the
website responds with a 200 (OK) and contains a specific word on the page). In
Runtime, choose Node.js 6.10, and then copy the following code into the Function code box:
Note
The event object, under the CodePipeline.job key, contains the job details. For a full
example of the JSON event CodePipeline returns to Lambda, see Example JSON
Event (p. 300).
var assert = require('assert');
var AWS = require('aws-sdk');
var http = require('http');

exports.handler = function(event, context) {
    var codepipeline = new AWS.CodePipeline();
    var jobId = event["CodePipeline.job"].id;
    // Retrieve the value of UserParameters from the Lambda action configuration in AWS
    // CodePipeline, in this case a URL which will be health checked by this function.
    var url = event["CodePipeline.job"].data.actionConfiguration.configuration.UserParameters;
    // Notify AWS CodePipeline of a successful job
    var putJobSuccess = function(message) {
        codepipeline.putJobSuccessResult({ jobId: jobId }, function(err) {
            if (err) { context.fail(err); } else { context.succeed(message); }
        });
    };
    // Notify AWS CodePipeline of a failed job
    var putJobFailure = function(message) {
        codepipeline.putJobFailureResult({
            jobId: jobId,
            failureDetails: { message: JSON.stringify(message), type: 'JobFailed',
                              externalExecutionId: context.invokeid }
        }, function() { context.fail(message); });
    };
    // Make an HTTP GET request to the page and collect the body and status code
    var getPage = function(url, callback) {
        var pageObject = {
            body: '',
            statusCode: 0,
            contains: function(search) { return this.body.indexOf(search) > -1; }
        };
        http.get(url, function(response) {
            pageObject.statusCode = response.statusCode;
            response.on('data', function(chunk) { pageObject.body += chunk; });
            response.on('end', function () { callback(pageObject); });
            response.resume();
        }).on('error', function(error) {
            // Fail the job if our request failed
            putJobFailure(error);
        });
    };
    getPage(url, function(returnedPage) {
        try {
            // Check if the HTTP response has a 200 status
            assert(returnedPage.statusCode === 200);
            // Check if the page contains the text "Congratulations"
            // You can change this to check for different text, or add other tests as required
            assert(returnedPage.contains('Congratulations'));
            putJobSuccess('Tests passed.');
        } catch (ex) {
            putJobFailure(ex);
        }
    });
};
4. Under Role, select Choose an existing role. In Existing role, choose your role, and then choose
Create function.
5. Leave Handler at the default value, and leave Role at the default,
CodePipelineLambdaExecRole.
6. In Basic settings, for Timeout, choose 20.
7. Choose Save.
To add a stage
1. Sign in to the AWS Management Console and open the CodePipeline console at
http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, choose the pipeline you created.
3. On the pipeline view page, choose Edit.
4. On the Edit page, choose + Add stage to add a stage after the deployment stage with the
CodeDeploy action. Enter a name for the stage (for example, LambdaStage), and choose Add stage.
Note
You can also choose to add your Lambda action to an existing stage. For demonstration
purposes, we are adding the Lambda function as the only action in a stage to allow you to
easily view its progress as artifacts progress through a pipeline.
5. Choose + Add action group. In Edit action, in Action name, enter a name for your
Lambda action (for example, MyLambdaAction). In Provider, choose AWS Lambda.
In Function name, choose or enter the name of your Lambda function (for example,
MyLambdaFunctionForAWSCodePipeline). In User parameters, specify the IP address for the
Amazon EC2 instance you copied earlier (for example, http://192.0.2.4), and then choose Done.
Note
This topic uses an IP address, but in a real-world scenario, you could provide your registered
website name instead (for example, http://www.example.com). For more information
about event data and handlers in AWS Lambda, see Programming Model in the AWS
Lambda Developer Guide.
6. On the Edit action page, choose Save.
To use the console to run the most recent version of an artifact through a pipeline
1. On the pipeline details page, choose Release change. This runs the most recent revision available in
each source location specified in a source action through the pipeline.
2. When the Lambda action is complete, choose the Details link to view the log stream for the function
in Amazon CloudWatch, including the billed duration of the event. If the function failed, the
CloudWatch log provides information about the cause.
After you have finished experimenting with the Lambda function, consider removing it from your
pipeline, deleting it from AWS Lambda, and deleting the role from IAM to avoid possible charges. For
more information, see Edit a Pipeline in CodePipeline (p. 196), Delete a Pipeline in CodePipeline (p. 214),
and Deleting Roles or Instance Profiles.
The JSON event includes the job details under the CodePipeline.job key, along with the action
configuration and pipelineContext data types. Two action configuration details, FunctionName and
UserParameters, are included in both the JSON event and the response to the GetJobDetails API.
The values in red italic text are examples or explanations, not real values.
{
"CodePipeline.job": {
"id": "11111111-abcd-1111-abcd-111111abcdef",
"accountId": "111111111111",
"data": {
"actionConfiguration": {
"configuration": {
"FunctionName": "MyLambdaFunctionForAWSCodePipeline",
"UserParameters": "some-input-such-as-a-URL"
}
},
"inputArtifacts": [
{
"location": {
"s3Location": {
"bucketName": "the name of the bucket configured as the
pipeline artifact store in Amazon S3, for example codepipeline-us-east-2-1234567890",
"objectKey": "the name of the application, for example
CodePipelineDemoApplication.zip"
},
"type": "S3"
},
"revision": null,
"name": "ArtifactName"
}
],
"outputArtifacts": [],
"artifactCredentials": {
"secretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"sessionToken": "MIICiTCCAfICCQD6m7oRw0uXOjANBgkqhkiG9w
0BAQUFADCBiDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZ
WF0dGxlMQ8wDQYDVQQKEwZBbWF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIw
EAYDVQQDEwlUZXN0Q2lsYWMxHzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5
jb20wHhcNMTEwNDI1MjA0NTIxWhcNMTIwNDI0MjA0NTIxWjCBiDELMAkGA1UEBh
MCVVMxCzAJBgNVBAgTAldBMRAwDgYDVQQHEwdTZWF0dGxlMQ8wDQYDVQQKEwZBb
WF6b24xFDASBgNVBAsTC0lBTSBDb25zb2xlMRIwEAYDVQQDEwlUZXN0Q2lsYWMx
HzAdBgkqhkiG9w0BCQEWEG5vb25lQGFtYXpvbi5jb20wgZ8wDQYJKoZIhvcNAQE
BBQADgY0AMIGJAoGBAMaK0dn+a4GmWIWJ21uUSfwfEvySWtC2XADZ4nB+BLYgVI
k60CpiwsZ3G93vUEIO3IyNoH/f0wYK8m9TrDHudUZg3qX4waLG5M43q7Wgc/MbQ
ITxOUSQv7c7ugFFDzQGBzZswY6786m86gpEIbb3OhjZnzcvQAaRHhdlQWIMm2nr
AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAtCu4nUhVVxYUntneD9+h8Mg9q6q+auN
KyExzyLwaxlAoo7TJHidbtS4J5iNmZgXL0FkbFFBjvSfpJIlJ00zbhNYS5f6Guo
EDmFJl0ZxBHjJnyp378OD8uTs7fLvjx79LjSTbNYiytVbZPQUQ5Yaxu2jXnimvw
3rrszlaEXAMPLE=",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE"
},
"continuationToken": "A continuation token if continuing job",
"encryptionKey": {
"id": "arn:aws:kms:us-
west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
"type": "KMS"
}
}
}
}
Topics
• Sample Python Function That Uses an AWS CloudFormation Template (p. 302)
This Python sample assumes you have a pipeline that uses an Amazon S3 bucket as a source action, or
that you have access to a versioned Amazon S3 bucket you can use with the pipeline. You create the AWS
CloudFormation template, compress it, and upload it to that bucket as a .zip file. You must then add a
source action to your pipeline that retrieves this .zip file from the bucket.
The sample demonstrates the following:
• The use of JSON-encoded user parameters to pass multiple configuration values to the function
(get_user_params).
• The interaction with .zip artifacts in an artifact bucket (get_template).
• The use of a continuation token to monitor a long-running asynchronous process
(continue_job_later). This allows the action to continue and the function to succeed even if it
exceeds a fifteen-minute runtime (a limit in Lambda).
To use this sample Lambda function, the policy for the Lambda execution role must have Allow
permissions in AWS CloudFormation, Amazon S3, and CodePipeline, as shown in this sample policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:*"
],
"Effect": "Allow",
"Resource": "arn:aws:logs:*:*:*"
},
{
"Action": [
"codepipeline:PutJobSuccessResult",
"codepipeline:PutJobFailureResult"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"cloudformation:DescribeStacks",
"cloudformation:CreateStack",
"cloudformation:UpdateStack"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
To create the AWS CloudFormation template, open any plain-text editor and copy and paste the
following code:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "AWS CloudFormation template which creates an S3 bucket",
"Resources" : {
"MySampleBucket" : {
"Type" : "AWS::S3::Bucket",
"Properties" : {
}
}
},
"Outputs" : {
"BucketName" : {
"Value" : { "Ref" : "MySampleBucket" },
"Description" : "The name of the S3 bucket"
}
}
}
Save this as a JSON file with the name template.json in a directory named template-package.
Create a compressed (.zip) file of this directory and file named template-package.zip, and upload
the compressed file to a versioned Amazon S3 bucket. If you already have a bucket configured for
your pipeline, you can use it. Next, edit your pipeline to add a source action that retrieves the .zip
file. Name the output for this action MyTemplate. For more information, see Edit a Pipeline in
CodePipeline (p. 196).
Note
The sample Lambda function expects these file names and compressed structure. However,
you can substitute your own AWS CloudFormation template for this sample. If you use your
own template, make sure you modify the policy for the Lambda execution role to allow any
additional functionality required by your AWS CloudFormation template.
6. In Advanced settings, for Timeout (s), replace the default of 3 seconds with 20.
7. Copy the following code into Lambda function code:
import json
import urllib
import boto3
import zipfile
import tempfile
import botocore
import traceback
from boto3.session import Session
print('Loading function')
cf = boto3.client('cloudformation')
code_pipeline = boto3.client('codepipeline')
def find_artifact(artifacts, name):
    """Finds the artifact 'name' among the 'artifacts'.
    Args:
        artifacts: The list of artifacts available to the function
        name: The artifact we wish to use
    Returns:
        The artifact dictionary found
    Raises:
        Exception: If no matching artifact is found
    """
    for artifact in artifacts:
        if artifact['name'] == name:
            return artifact
    raise Exception('Input artifact named "{0}" not found in event'.format(name))
def get_template(s3, artifact, file_in_zip):
    """Gets the CloudFormation template out of a zipped artifact.
    Args:
        artifact: The artifact to download
        file_in_zip: The path to the file within the zip containing the template
    Returns:
        The CloudFormation template as a string
    Raises:
        Exception: Any exception thrown while downloading the artifact or unzipping it
    """
    bucket = artifact['location']['s3Location']['bucketName']
    key = artifact['location']['s3Location']['objectKey']
    with tempfile.NamedTemporaryFile() as tmp_file:
        s3.download_file(bucket, key, tmp_file.name)
        with zipfile.ZipFile(tmp_file.name, 'r') as zip:
            return zip.read(file_in_zip)
def update_stack(stack, template):
    """Start a CloudFormation stack update.
    Args:
        stack: The stack to update
        template: The template to apply
    Returns:
        True if an update was started, False if there were no changes
        to the template since the last update.
    Raises:
        Exception: Any exception besides "No updates are to be performed."
    """
    try:
        cf.update_stack(StackName=stack, TemplateBody=template)
        return True
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Message'] == 'No updates are to be performed.':
            return False
        else:
            raise Exception('Error updating CloudFormation stack "{0}"'.format(stack), e)
def stack_exists(stack):
"""Check if a stack exists or not
Args:
stack: The stack to check
Returns:
True or False depending on whether the stack exists
    Raises:
        Exception: Any exception raised by .describe_stacks() other than
        "stack does not exist".
"""
try:
cf.describe_stacks(StackName=stack)
return True
except botocore.exceptions.ClientError as e:
if "does not exist" in e.response['Error']['Message']:
return False
else:
raise e
def create_stack(stack, template):
    """Starts a new CloudFormation stack creation.
    Args:
        stack: The stack to be created
        template: The template for the stack to be created with
    Throws:
        Exception: Any exception thrown by .create_stack()
    """
    cf.create_stack(StackName=stack, TemplateBody=template)
def get_stack_status(stack):
"""Get the status of an existing CloudFormation stack
Args:
stack: The name of the stack to check
Returns:
The CloudFormation status string of the stack such as CREATE_COMPLETE
Raises:
Exception: Any exception thrown by .describe_stacks()
"""
stack_description = cf.describe_stacks(StackName=stack)
return stack_description['Stacks'][0]['StackStatus']
def put_job_success(job, message):
    """Notify CodePipeline of a successful job.
    Args:
        job: The CodePipeline job ID
        message: A message to be logged relating to the job status
    Raises:
        Exception: Any exception thrown by .put_job_success_result()
    """
    print('Putting job success')
    print(message)
    code_pipeline.put_job_success_result(jobId=job)
def put_job_failure(job, message):
    """Notify CodePipeline of a failed job.
    Args:
        job: The CodePipeline job ID
        message: A message to be logged relating to the job status
    Raises:
        Exception: Any exception thrown by .put_job_failure_result()
    """
    print('Putting job failure')
    print(message)
    code_pipeline.put_job_failure_result(jobId=job, failureDetails={'message': message, 'type': 'JobFailed'})
def continue_job_later(job, message):
    """Notify CodePipeline of a continuing job.
    This will cause CodePipeline to invoke the function again with the
    supplied continuation token.
    Args:
        job: The JobID
        message: A message to be logged relating to the job status
    Raises:
        Exception: Any exception thrown by .put_job_success_result()
    """
    # Use the continuation token to keep track of any job execution state.
    # This data will be available when a new job is scheduled to continue
    # the current execution.
    continuation_token = json.dumps({'previous_job_id': job})
    print('Putting job continuation')
    print(message)
    code_pipeline.put_job_success_result(jobId=job, continuationToken=continuation_token)
def start_update_or_create(job_id, stack, template):
    """Starts the stack update or create process.
    If the stack exists then update it, otherwise create it.
    Args:
        job_id: The ID of the CodePipeline job
        stack: The stack to create or update
        template: The template to create/update the stack with
    """
    if stack_exists(stack):
        status = get_stack_status(stack)
        if status not in ['CREATE_COMPLETE', 'ROLLBACK_COMPLETE', 'UPDATE_COMPLETE']:
            # If the CloudFormation stack is not in a state where
            # it can be updated again then fail the job right away.
            put_job_failure(job_id, 'Stack cannot be updated when status is: ' + status)
            return

        were_updates = update_stack(stack, template)

        if were_updates:
            # If there were updates then continue the job so it can monitor
            # the progress of the update.
            continue_job_later(job_id, 'Stack update started')
        else:
            # If there were no updates then succeed the job immediately
            put_job_success(job_id, 'There were no stack updates')
    else:
        # If the stack doesn't already exist then create it instead
        # of updating it.
        create_stack(stack, template)
        # Continue the job so the pipeline will wait for the CloudFormation
        # stack to be created.
        continue_job_later(job_id, 'Stack create started')
def check_stack_update_status(job_id, stack):
    """Monitor an already-running CloudFormation update or create.
    Args:
        job_id: The CodePipeline job ID
        stack: The stack to monitor
    """
    status = get_stack_status(stack)
    if status in ['UPDATE_COMPLETE', 'CREATE_COMPLETE']:
        # If the update/create finished successfully then
        # succeed the job and don't continue.
        put_job_success(job_id, 'Stack update complete')
    elif status in ['CREATE_IN_PROGRESS', 'UPDATE_IN_PROGRESS',
                    'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS']:
        # If the job isn't finished yet then continue it
        continue_job_later(job_id, 'Stack update still in progress')
    else:
        # If the Stack is a state which isn't "in progress" or "complete"
        # then the stack update/create has failed so end the job with
        # a failed result.
        put_job_failure(job_id, 'Update failed: ' + status)
def get_user_params(job_data):
"""Decodes the JSON user parameters and validates the required properties.
Args:
job_data: The job data structure containing the UserParameters string which
should be a valid JSON structure
Returns:
The JSON parameters decoded as a dictionary.
Raises:
Exception: The JSON can't be decoded or a property is missing.
"""
    try:
        # Get the user parameters which contain the stack, artifact, and file settings
        user_parameters = job_data['actionConfiguration']['configuration']['UserParameters']
        decoded_parameters = json.loads(user_parameters)
    except Exception as e:
        # We're expecting the user parameters to be encoded as JSON
        # so we can pass multiple values. If the JSON can't be decoded
        # then fail the job with a helpful message.
        raise Exception('UserParameters could not be decoded as JSON')
    if 'stack' not in decoded_parameters:
        # Validate that the stack name is provided, otherwise fail the job.
        raise Exception('Your UserParameters JSON must include the stack name')
    if 'artifact' not in decoded_parameters:
        # Validate that the artifact name is provided, otherwise fail the job.
        raise Exception('Your UserParameters JSON must include the artifact name')
    if 'file' not in decoded_parameters:
        # Validate that the template file is provided, otherwise fail the job.
        raise Exception('Your UserParameters JSON must include the template file name')
    return decoded_parameters
def setup_s3_client(job_data):
"""Creates an S3 client
Args:
job_data: The job data structure
Returns:
An S3 client with the appropriate credentials
"""
key_id = job_data['artifactCredentials']['accessKeyId']
key_secret = job_data['artifactCredentials']['secretAccessKey']
session_token = job_data['artifactCredentials']['sessionToken']
session = Session(aws_access_key_id=key_id,
aws_secret_access_key=key_secret,
aws_session_token=session_token)
return session.client('s3',
config=botocore.client.Config(signature_version='s3v4'))
def lambda_handler(event, context):
    """The Lambda function handler.
    If a continuing job, checks the CloudFormation stack status and updates
    the job accordingly. If a new job, kicks off an update or creation of
    the target CloudFormation stack.
    Args:
        event: The event passed by Lambda
        context: The context passed by Lambda
    """
    try:
        # Extract the Job ID
        job_id = event['CodePipeline.job']['id']
        # Extract the job data and the user parameters configured on the action
        job_data = event['CodePipeline.job']['data']
        params = get_user_params(job_data)
        # Get the list of artifacts passed to the function
        artifacts = job_data['inputArtifacts']
        stack = params['stack']
        artifact = params['artifact']
        template_file = params['file']
if 'continuationToken' in job_data:
# If we're continuing then the create/update has already been triggered
# we just need to check if it has finished.
check_stack_update_status(job_id, stack)
else:
# Get the artifact details
artifact_data = find_artifact(artifacts, artifact)
# Get S3 client to access artifact with
s3 = setup_s3_client(job_data)
# Get the JSON template file out of the artifact
template = get_template(s3, artifact_data, template_file)
# Kick off a stack update or create
start_update_or_create(job_id, stack, template)
except Exception as e:
# If any other exceptions which we didn't expect are raised
# then fail the job and log the exception message.
print('Function failed due to exception.')
print(e)
traceback.print_exc()
put_job_failure(job_id, 'Function exception: ' + str(e))
print('Function complete.')
return "Complete."
9. In UserParameters, provide a JSON string with three parameters:
• Stack name
• Template file name
• Input artifact name
Use curly brackets ({ }) and separate the parameters with commas. For example, to create a stack
named MyTestStack, for a pipeline with the input artifact MyTemplate, in UserParameters, enter:
{"stack":"MyTestStack","file":"template-package/template.json", "artifact":"MyTemplate"}.
Note
Even though you have specified the input artifact in UserParameters, you must also specify
this input artifact for the action in Input artifacts.
10. Save your changes to the pipeline, and then manually release a change to test the action and
Lambda function.
You can retry the latest failed actions in a stage without having to run a pipeline again from the
beginning. If you are using the console to view a pipeline, a Retry button will appear on the stage where
the failed actions can be retried.
If you are using the AWS CLI, you can use the get-pipeline-state command to determine whether any
actions have failed.
Note
In some cases, you might not be able to retry actions. For example, you cannot retry
actions if the pipeline structure changed after the actions failed, or if another retry
attempt for the stage is already in progress.
Topics
• Retry Failed Actions (Console) (p. 311)
• Retry Failed Actions (CLI) (p. 311)
1. Sign in to the AWS Management Console and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline.
3. Locate the stage with the failed action, and then choose Retry.
Note
To identify which actions in the stage can be retried, hover over the Retry button.
If all retried actions in the stage are completed successfully, the pipeline continues to run.
1. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline-
state command on a pipeline. For example, for a pipeline named MyFirstPipeline, you would type
something similar to the following:
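aws codepipeline get-pipeline-state --name MyFirstPipeline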
The response to the command includes pipeline state information for each stage. In the following
example, the response indicates that one or more actions failed in the Staging stage:
{
"updated": 1427245911.525,
"created": 1427245911.525,
"pipelineVersion": 1,
"pipelineName": "MyFirstPipeline",
"stageStates": [
{
"actionStates": [...],
"stageName": "Source",
"latestExecution": {
"pipelineExecutionId": "9811f7cb-7cf7-SUCCESS",
"status": "Succeeded"
}
},
{
"actionStates": [...],
"stageName": "Staging",
"latestExecution": {
"pipelineExecutionId": "3137f7cb-7cf7-EXAMPLE",
"status": "Failed"
}
}
]
}
2. In a plain-text editor, create a JSON file that records the pipeline name, the name of the stage
that contains the failed actions, the latest pipeline execution ID, and the retry mode
(FAILED_ACTIONS).
For the preceding MyFirstPipeline example, your file would look something like this:
{
"pipelineName": "MyFirstPipeline",
"stageName": "Staging",
"pipelineExecutionId": "3137f7cb-7cf7-EXAMPLE",
"retryMode": "FAILED_ACTIONS"
}
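3. Save the file with a name such as retry-failed-actions.json. (The file name here is an example;
use any name you prefer.)
4. Call the retry-stage-execution command and pass the file you created, as in the following example:
aws codepipeline retry-stage-execution --cli-input-json file://retry-failed-actions.json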
5. To view the results of the retry attempt, either open the CodePipeline console and choose the
pipeline that contains the actions that failed, or use the get-pipeline-state command again. For
more information, see View Pipeline Details and History in CodePipeline (p. 202).
If the action is approved, the pipeline execution resumes. If the action is rejected—or if no one approves
or rejects the action within seven days of the pipeline reaching the action and stopping—the result is the
same as an action failing, and the pipeline execution does not continue.
Common use cases for manual approval actions include the following:
• You want someone to perform a code review or change management review before a revision is
allowed into the next stage of a pipeline.
• You want someone to perform manual quality assurance testing on the latest version of an
application, or to confirm the integrity of a build artifact, before it is released.
• You want someone to review new or updated text before it is published to a company website.
Publish Approval Notifications
You can configure an approval action to publish a message to an Amazon Simple Notification Service
topic when the pipeline stops at the action. Amazon SNS delivers the message to every endpoint
subscribed to the topic. You must use a topic created in the same AWS Region as the pipeline that will
include the approval action. When you create a topic, we recommend you give it a name that identifies
its purpose, such as MyFirstPipeline-us-east-2-approval.
When you publish approval notifications to Amazon SNS topics, you can deliver them to endpoints
such as email or SMS recipients, Amazon SQS queues, HTTP/HTTPS endpoints, or AWS Lambda
functions that you invoke using Amazon SNS. For more information about Amazon SNS topic
notifications, see the Amazon SNS documentation.
For the structure of the JSON data generated for an approval action notification, see JSON Data Format
for Manual Approval Notifications in CodePipeline (p. 322).
Specify a URL for Review
As part of the configuration of the approval action, you can specify a URL to be reviewed. The URL
might be a link to a web application you want approvers to test or a page with more information about
your approval request. The URL is included in the notification that is published to the Amazon SNS
topic. Approvers can use the console or CLI to view it.
Enter Comments for Approvers
When you create an approval action, you can also add comments that are displayed to those who
receive the notifications or those who view the action in the console or CLI response.
No Configuration Options
You can also choose not to configure any of these three options. You might not need them if, for
example, you can notify someone directly that the action is ready for their review, or you simply want
the pipeline to stop until you decide to approve the action yourself.
1. You grant the IAM permissions required for approving or rejecting approval actions to one or more
IAM users in your organization.
2. (Optional) If you are using Amazon SNS notifications, you ensure that the service role you use in your
CodePipeline operations has permission to access Amazon SNS resources.
3. (Optional) If you are using Amazon SNS notifications, you create an Amazon SNS topic and add one or
more subscribers or endpoints to it.
4. You add an approval action to a stage in the pipeline. You can add it when you use the AWS CLI to
create the pipeline, or by editing the pipeline after you have created it with the CLI or console.
If you are using notifications, you include the Amazon Resource Name (ARN) of the
Amazon SNS topic in the configuration of the action. (An ARN is a unique identifier for an
Amazon resource. ARNs for Amazon SNS topics are structured like arn:aws:sns:us-
east-2:80398EXAMPLE:MyApprovalTopic. For more information, see Amazon Resource Names
(ARNs) and AWS Service Namespaces in the Amazon Web Services General Reference.)
5. The pipeline stops when it reaches the approval action. If an Amazon SNS topic ARN was included in
the configuration of the action, a notification is published to the Amazon SNS topic, and a message
is delivered to any subscribers to the topic or subscribed endpoints, with a link to review the approval
action in the console.
• If you chose Groups, choose the Permissions tab, and expand Inline Policies. If no inline policies
have been created yet, choose click here.
In Policy Name, enter a name for this policy. Continue to the next step to paste the policy in the
Policy Document box.
• If you chose Users, choose the Permissions tab, and choose Add inline policy. Choose the JSON
tab. Continue to the next step to paste the policy.
5. Paste the policy into the policy box. Specify the individual resources an IAM user can access. For
example, the following policy grants users the authority to approve or reject only the action named
MyApprovalAction in the MyFirstPipeline pipeline in the US East (Ohio) Region (us-east-2):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"codepipeline:ListPipelines"
],
"Resource": [
"*"
],
"Effect": "Allow"
},
{
"Action": [
"codepipeline:GetPipeline",
"codepipeline:GetPipelineState",
"codepipeline:GetPipelineExecution"
],
"Effect": "Allow",
"Resource": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline"
},
{
"Action": [
"codepipeline:PutApprovalResult"
],
"Effect": "Allow",
"Resource": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline/
MyApprovalStage/MyApprovalAction"
}
]
}
Note
The codepipeline:ListPipelines permission is required only if IAM users need to
access the CodePipeline dashboard to view this list of pipelines. If console access is not
required, you can omit codepipeline:ListPipelines.
6. Do one of the following:
• If you chose Groups, choose Validate Policy. Correct any errors displayed in a red box at the top
of the page. Choose Apply Policy.
• If you chose Users, choose Review policy.
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
Important
Make sure you are signed in to the AWS Management Console with the same account
information you used in Getting Started with CodePipeline (p. 9).
2. In the IAM console, in the navigation pane, choose Roles.
3. Choose the name of the service role you use in your CodePipeline operations.
4. On the Permissions tab, in the Inline Policies area, choose Create Role Policy.
–or–
If the Create Role Policy button is not available, expand the Inline Policies area, and then choose
click here.
5. On the Set Permissions page, choose Custom Policy, and then choose Select.
6. On the Review Policy page, in the Policy Name field, type a name to identify this policy, such as
SNSPublish.
7. Paste the following into the Policy Document field:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sns:Publish",
"Resource": "*"
}
]
}
If you want to use Amazon SNS to send notifications when an approval action is ready for review, you
must first complete the following prerequisites:
• Grant permission to your CodePipeline service role to access Amazon SNS resources. For information,
see Grant Amazon SNS Permissions to a CodePipeline Service Role (p. 316).
• Grant permission to one or more IAM users in your organization to update the status of an approval
action. For information, see Grant Approval Permissions to an IAM User in CodePipeline (p. 314).
If you want to add an approval action to an existing stage, choose Edit stage.
5. Choose + Add action group.
6. On the Edit action page, enter a name for the action, and then configure any of the optional
settings for the approval action (an Amazon SNS topic ARN, a URL for review, and comments for
approvers). Choose Done.
7. Choose Save.
For more information about creating and editing pipelines, see Create a Pipeline in CodePipeline (p. 187)
and Edit a Pipeline in CodePipeline (p. 196).
To add a stage to a pipeline that includes only an approval action, you would include something similar
to the following example when you create or update the pipeline.
Note
The configuration section is optional. This is just a portion, not the entire structure, of the
file. For more information, see CodePipeline Pipeline Structure Reference (p. 393).
{
"name": "MyApprovalStage",
"actions": [
{
"name": "MyApprovalAction",
"actionTypeId": {
"category": "Approval",
"owner": "AWS",
"version": "1",
"provider": "Manual"
},
"inputArtifacts": [],
"outputArtifacts": [],
"configuration": {
"NotificationArn": "arn:aws:sns:us-east-2:80398EXAMPLE:MyApprovalTopic",
"ExternalEntityLink": "http://example.com",
"CustomData": "The latest changes include feedback from Bob."},
"runOrder": 1
}
]
}
If the approval action is in a stage with other actions, the section of your JSON file that contains the
stage might look similar instead to the following example.
Note
The configuration section is optional. This is just a portion, not the entire structure, of the
file. For more information, see CodePipeline Pipeline Structure Reference (p. 393).
,
{
"name": "Production",
"actions": [
{
"inputArtifacts": [],
"name": "MyApprovalStage",
"actionTypeId": {
"category": "Approval",
"owner": "AWS",
"version": "1",
"provider": "Manual"
},
"outputArtifacts": [],
"configuration": {
"NotificationArn": "arn:aws:sns:us-east-2:80398EXAMPLE:MyApprovalTopic",
"ExternalEntityLink": "http://example.com",
"CustomData": "The latest changes include feedback from Bob."
},
"runOrder": 1
},
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "MyDeploymentStage",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "MyDemoApplication",
"DeploymentGroupName": "MyProductionFleet"
},
"runOrder": 2
}
]
}
If the person who added the approval action to the pipeline configured notifications, you might receive
an email that contains a link to review the approval action in the console.
7. In the Approve or reject the revision window, enter review comments, such as why you are
approving or rejecting the action, and then choose Approve or Reject.
1. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline-state
command on the pipeline that contains the approval action. For example, for a pipeline named
MyFirstPipeline, enter the following:
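aws codepipeline get-pipeline-state --name MyFirstPipeline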
2. In the response to the command, locate the token value, which appears in latestExecution in
the actionStates section for the approval action, as shown here:
{
"created": 1467929497.204,
"pipelineName": "MyFirstPipeline",
"pipelineVersion": 1,
"stageStates": [
{
"actionStates": [
{
"actionName": "MyApprovalAction",
"currentRevision": {
"created": 1467929497.204,
"revisionChangeId": "CEM7d6Tp7zfelUSLCPPwo234xEXAMPLE",
"revisionId": "HYGp7zmwbCPPwo23xCMdTeqIlEXAMPLE"
},
"latestExecution": {
"lastUpdatedBy": "arn:aws:iam::123456789012:user/Bob",
"summary": "The new design needs to be reviewed before
release.",
"token": "1a2b3c4d-573f-4ea7-a67E-XAMPLETOKEN"
}
}
//More content might appear here
}
3. In a plain-text editor, create a JSON file that records the pipeline name, the approval stage and
action names, the token from the previous step, and the result (a status and a summary).
For the preceding MyFirstPipeline example, your file should look like this:
{
"pipelineName": "MyFirstPipeline",
"stageName": "MyApprovalStage",
"actionName": "MyApprovalAction",
"token": "1a2b3c4d-573f-4ea7-a67E-XAMPLETOKEN",
"result": {
"status": "Approved",
"summary": "The new design looks good. Ready to release to customers."
}
}
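4. Save the file with a name such as approvalstage-approved.json (the file name here is an example),
and then call the put-approval-result command and pass the file you created, as in the following
example:
aws codepipeline put-approval-result --cli-input-json file://approvalstage-approved.json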
The following example shows the structure of the JSON output available with CodePipeline approvals.
{
"region": "us-east-2",
"consoleLink": "https://console.aws.amazon.com/codepipeline/home?region=us-east-2#/
view/MyFirstPipeline",
"approval": {
"pipelineName": "MyFirstPipeline",
"stageName": "MyApprovalStage",
"actionName": "MyApprovalAction",
"token": "1a2b3c4d-573f-4ea7-a67E-XAMPLETOKEN",
"expires": "2016-07-07T20:22Z",
"externalEntityLink": "http://example.com",
"approvalReviewLink": "https://console.aws.amazon.com/codepipeline/home?region=us-
east-2#/view/MyFirstPipeline/MyApprovalStage/MyApprovalAction/approve/1a2b3c4d-573f-4ea7-
a67E-XAMPLETOKEN",
"customData": "Review the latest changes and approve or reject within seven days."
}
}
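As an illustration only, the following minimal Lambda function sketch (not one of the samples in this
guide; the handler and its names are hypothetical) shows one way to read this JSON data when the
approval notification is delivered through Amazon SNS to a subscribed Lambda function:
import json

def lambda_handler(event, context):
    # Amazon SNS wraps the approval notification JSON in the Message
    # field of each delivered record.
    for record in event['Records']:
        notification = json.loads(record['Sns']['Message'])
        approval = notification['approval']
        # These are the values needed to respond with put-approval-result.
        print('Pipeline: ' + approval['pipelineName'])
        print('Token: ' + approval['token'])
        print('Review link: ' + approval['approvalReviewLink'])
    return 'Done'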
Note
Certain action types in CodePipeline may be available only in certain AWS Regions. Also note
that there may be AWS Regions where an action type is available, but a specific AWS provider for
that action type is not available.
You can use the console, AWS CLI, or AWS CloudFormation to add cross-region actions in pipelines.
If you use the console to create a pipeline or cross-region actions, default artifact buckets are
configured by CodePipeline in the Regions where you have actions. When you use the AWS CLI, AWS
CloudFormation, or an SDK to create a pipeline or cross-region actions, you provide the artifact bucket
for each Region where you have actions.
You cannot create cross-region actions for the following action types:
• Source actions
• Third-party actions
• Custom actions
When a pipeline includes a cross-region action as part of a stage, CodePipeline replicates only the input
artifacts of the cross-region action from the pipeline Region to the action's Region.
Note
The pipeline Region and the Region where your CloudWatch Events change detection resources
are maintained remain the same. The Region where your pipeline is hosted does not change.
In the console, you create a cross-region action in a pipeline stage by choosing the action provider
and, in the Region field, the Region where the resources for that provider were created. When you
add a cross-region action, CodePipeline uses a separate artifact bucket in the action's Region. For more
information about cross-region artifact buckets, see CodePipeline Pipeline Structure Reference (p. 393).
The Region field designates where the AWS resources are created for this action
type and provider type. This field displays only for actions where the action provider is an AWS
service. The Region field defaults to the same AWS Region as your pipeline.
d. In Input artifacts, choose the appropriate input from the previous stage. For example, if the
previous stage is a source stage, choose SourceArtifact.
e. Complete all the required fields for the action provider you are configuring.
f. In Output artifacts, choose the appropriate output to the next stage. For example, if the next
stage is a deployment stage, choose BuildArtifact.
g. Choose Save.
6. On Edit: <Stage>, choose Done.
7. Choose Save.
To create a cross-region action in a pipeline stage with the AWS CLI, you add the action configuration
along with the optional region field. You must also have already created an artifact bucket in the
action's Region. Instead of providing the artifactStore parameter of the single-region pipeline, you
use the artifactStores parameter to include a listing of each Region's artifact bucket.
Note
In this walkthrough and its examples, RegionA is the Region where the pipeline is created.
It has access to the RegionA Amazon S3 bucket used to store pipeline artifacts and the
service role used by CodePipeline. RegionB is the region where the CodeDeploy application,
deployment group, and service role used by CodeDeploy are created.
Prerequisites
You must have created the following:
• A pipeline in RegionA.
• An Amazon S3 artifact bucket in RegionB.
• The resources for your action, such as your CodeDeploy application and deployment group for a cross-
region deploy action, in RegionB.
1. For a pipeline in RegionA, run the get-pipeline command to copy the pipeline structure into a JSON
file. For example, for a pipeline named MyFirstPipeline, run the following command:
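aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json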
This command returns nothing, but the file you created should appear in the directory where you
ran the command.
2. Add the region field to add a new stage with your cross-region action that includes the Region
and resources for your action. The following JSON sample adds a Deploy stage with a cross-region
deploy action where the provider is CodeDeploy, in the new Region, shown here as RegionB.
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"name": "Deploy",
"region": "RegionB",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "name",
"DeploymentGroupName": "name"
},
"runOrder": 1
}
3. In the pipeline structure, remove the artifactStore field and add the artifactStores map for
your new cross-region action. The mapping must include an entry for each AWS Region in which you
have actions. For each entry in the mapping, the resources must be in the respective AWS Region. In
the example below, ID-A is the encryption key ID for RegionA, and ID-B is the encryption key ID
for RegionB.
"artifactStores":{
"RegionA":{
"encryptionKey":{
"id":"ID-A",
"type":"KMS"
},
"location":"Location1",
"type":"S3"
},
"RegionB":{
"encryptionKey":{
"id":"ID-B",
"type":"KMS"
},
"location":"Location2",
"type":"S3"
}
}
The following JSON example shows the us-west-2 bucket as my-storage-bucket and adds the
new us-east-1 bucket named my-storage-bucket-us-east-1.
"artifactStores": {
"us-west-2": {
"type": "S3",
"location": "my-storage-bucket"
},
"us-east-1": {
"type": "S3",
"location": "my-storage-bucket-us-east-1"
}
},
4. If you are working with the pipeline structure retrieved using the get-pipeline command, remove
the metadata section from the JSON file; otherwise, the update-pipeline command cannot use it.
Remove the "metadata": { } lines and the "created", "pipelineArn", and "updated" fields.
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
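5. Apply your changes by running the update-pipeline command and passing the pipeline JSON file.
The file name pipeline.json is an example; use the name you saved the structure under:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json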
This command returns the entire structure of the edited pipeline. The output is similar to the
following.
{
"pipeline": {
"version": 4,
"roleArn": "ARN",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "CodeCommit"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "MyTestRepo"
},
"runOrder": 1
}
]
},
{
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"name": "Deploy",
"region": "us-east-1",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CodeDeploy"
},
"outputArtifacts": [],
"configuration": {
"ApplicationName": "name",
"DeploymentGroupName": "name"
},
"runOrder": 1
}
]
}
],
"name": "AnyCompanyPipeline",
"artifactStores": {
"us-west-2": {
"type": "S3",
"location": "my-storage-bucket"
},
"us-east-1": {
"type": "S3",
"location": "my-storage-bucket-us-east-1"
}
}
}
}
Note
The update-pipeline command stops the pipeline. If a revision is being run through the
pipeline when you run the update-pipeline command, that run is stopped. You must
manually start the pipeline to run that revision through the updated pipeline. Use the
start-pipeline-execution command to manually start your pipeline.
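For example, to manually start the pipeline in this walkthrough, you would run a command
similar to the following:
aws codepipeline start-pipeline-execution --name MyFirstPipeline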
6. After you update your pipeline, the cross-region action is displayed in the console.
1. Add the Region parameter to the ActionDeclaration resource in your template, as shown in this
example:
ActionDeclaration:
Type: Object
Properties:
ActionTypeId:
Type: ActionTypeId
Required: true
Configuration:
Type: Map
InputArtifacts:
Type: Array
ItemType:
Type: InputArtifact
Name:
Type: String
Required: true
OutputArtifacts:
Type: Array
ItemType:
Type: OutputArtifact
RoleArn:
Type: String
RunOrder:
Type: Integer
Region:
Type: String
2. Under Mappings, add the Region mapping. Under the Pipeline resource, replace the ArtifactStore
field with the ArtifactStores map for your new cross-region action, as follows:
Mappings:
SecondRegionMap:
RegionA:
SecondRegion: "RegionB"
RegionB:
SecondRegion: "RegionA"
...
Properties:
ArtifactStores:
-
Region: RegionB
ArtifactStore:
Type: "S3"
Location: test-cross-region-artifact-store-bucket-RegionB
-
Region: RegionA
ArtifactStore:
Type: "S3"
Location: test-cross-region-artifact-store-bucket-RegionA
The following YAML example shows RegionA as us-west-2 and RegionB as eu-central-1:
Mappings:
SecondRegionMap:
us-west-2:
SecondRegion: "eu-central-1"
eu-central-1:
SecondRegion: "us-west-2"
...
Properties:
ArtifactStores:
-
Region: eu-central-1
ArtifactStore:
Type: "S3"
Location: test-cross-region-artifact-store-bucket-eu-central-1
-
Region: us-west-2
ArtifactStore:
Type: "S3"
Location: test-cross-region-artifact-store-bucket-us-west-2
3. Save the updated template to your local computer, and then open the AWS CloudFormation console.
4. Choose your stack, and then choose Create Change Set for Current Stack.
5. Upload the template, and then view the changes listed in AWS CloudFormation. These are the
changes to be made to the stack. You should see your new resources in the list.
6. Choose Execute.
You can use the AWS CodePipeline console or the AWS CLI to disable or enable transitions between
stages in a pipeline.
Note
You can use an approval action to pause the run of a pipeline until it is manually approved to
continue. For more information, see Manage Approval Actions in CodePipeline (p. 312).
Topics
• Disable or Enable Transitions (Console) (p. 331)
• Disable or Enable Transitions (CLI) (p. 333)
1. Sign in to the AWS Management Console and open the CodePipeline console at
https://console.aws.amazon.com/codesuite/codepipeline/home.
The names of all pipelines associated with your AWS account are displayed.
2. In Name, choose the name of the pipeline for which you want to enable or disable transitions. This
opens a detailed view of the pipeline, including the transitions between the stages of the pipeline.
3. Find the arrow after the last stage that you want to run, and then choose the button next to it. For
example, in the following pipeline, if you want the actions in the Staging stage to run, but not the
actions in the stage named Production, choose the Disable transition button between those two
stages:
4. In the Disable transition dialog box, enter a reason for disabling the transition, and then choose
Disable.
The button changes to show that transitions are disabled between the stage preceding the arrow
and the stage following the arrow. Any revisions that were already running in the stages that come
after the disabled transition continue through the pipeline, but any subsequent revisions do not
continue past the disabled transition.
5. Choose the Enable transition button next to the arrow. In the Enable transition dialog box, choose
Enable. The pipeline immediately enables the transition between the two stages. If any revisions
have been run through the earlier stages after the transition was disabled, in a few moments, the
pipeline starts running the latest revision through the stages after the formerly disabled transition.
The pipeline runs the revision through all remaining stages in the pipeline.
Note
It might take a few seconds for changes to appear in the CodePipeline console after you
enable the transition.
To disable a transition
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI
to run the disable-stage-transition command, specifying the name of the pipeline, the name of
the stage to which you want to disable transitions, the transition type, and the reason you are
disabling transitions to that stage. Unlike using the console, you must also specify whether you
are disabling transitions into the stage (inbound) or transitions out of the stage after all actions
complete (outbound).
For example, to disable the transition to a stage named Staging in a pipeline named
MyFirstPipeline, you would type a command similar to the following:
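aws codepipeline disable-stage-transition --pipeline-name MyFirstPipeline --stage-name Staging --transition-type Inbound --reason "Disabling transition until approval"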
To enable a transition
1. Open a terminal (Linux, macOS, or Unix) or command prompt (Windows) and use the AWS CLI to run
the enable-stage-transition command, specifying the name of the pipeline, the name of the stage to
which you want to enable transitions, and the transition type.
For example, to enable the transition to a stage named Staging in a pipeline named
MyFirstPipeline, you would type a command similar to the following:
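aws codepipeline enable-stage-transition --pipeline-name MyFirstPipeline --stage-name Staging --transition-type Inbound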
You can use the following tools to monitor your CodePipeline pipelines and their resources:
• Amazon CloudWatch Events — Use Amazon CloudWatch Events to detect and react to pipeline
execution state changes (for example, send an Amazon SNS notification or invoke a Lambda function).
• AWS CloudTrail — Use CloudTrail to capture API calls made by or on behalf of CodePipeline in your
AWS account and deliver the log files to an Amazon S3 bucket. You can choose to have CloudTrail
publish Amazon SNS notifications when new log files are delivered so you can take quick action.
• Console and CLI — You can use the CodePipeline console and CLI to view details about the status of a
pipeline or a particular pipeline execution.
Topics
• Detect and React to Changes in Pipeline State with Amazon CloudWatch Events (p. 334)
• Logging CodePipeline API Calls with AWS CloudTrail (p. 342)
• Rules. An event in Amazon CloudWatch Events is configured by first creating a rule with a selected
service as the event source.
• Targets. The new rule receives a selected service as the event target. For a list of services available as
Amazon CloudWatch Events targets, see What Is Amazon CloudWatch Events.
For example, you can create:
• A rule that sends a notification when the instance state changes, where an EC2 instance is the event
source and Amazon SNS is the event target.
• A rule that sends a notification when the build phase changes, where a CodeBuild configuration is the
event source and Amazon SNS is the event target.
• A rule that detects pipeline changes and invokes an AWS Lambda function.
To set up event notifications for CodePipeline, you do the following:
1. Create an Amazon CloudWatch Events rule that uses CodePipeline as an event source.
2. Create a target for your rule that uses one of the services available as targets in Amazon CloudWatch
Events, such as AWS Lambda or Amazon SNS.
3. Grant permissions to Amazon CloudWatch Events to allow it to invoke the selected target service.
You can configure notifications to be sent when the state changes for:
• Specified pipelines or all your pipelines. You control this by using "detail-type": "CodePipeline
Pipeline Execution State Change".
• Specified stages or all your stages, within a specified pipeline or all your pipelines. You control this by
using "detail-type": "CodePipeline Stage Execution State Change".
• Specified actions or all actions, within a specified stage or all stages, within a specified pipeline or all
your pipelines. You control this by using "detail-type": "CodePipeline Action Execution
State Change".
Each type of execution state change event emits notifications with specific message content, where:
• The initial version entry shows the version number for the CloudWatch event.
• The version entry under pipeline detail shows the pipeline structure version number.
• The execution-id entry under pipeline detail shows the execution ID for the pipeline execution
that caused the state change. Refer to the GetPipelineExecution API call in the AWS CodePipeline API
Reference.
Pipeline execution state change message content: When a pipeline execution starts, it emits an
event that sends notifications with the following content. This example is for the pipeline named
"myPipeline" in the us-east-1 region.
{
"version": "0",
"id": event_Id,
"detail-type": "CodePipeline Pipeline Execution State Change",
"source": "aws.codepipeline",
"account": Pipeline_Account,
"time": TimeStamp,
"region": "us-east-1",
"resources": [
"arn:aws:codepipeline:us-east-1:account_ID:myPipeline"
],
"detail": {
"pipeline": "myPipeline",
"version": "1",
"state": "STARTED",
"execution-id": execution_Id
}
}
Stage execution state change message content: When a stage execution starts, it emits an event that
sends notifications with the following content. This example is for the pipeline named "myPipeline" in
the us-east-1 region, for the stage "Prod".
{
"version": "0",
"id": event_Id,
"detail-type": "CodePipeline Stage Execution State Change",
"source": "aws.codepipeline",
"account": Pipeline_Account,
"time": TimeStamp,
"region": "us-east-1",
"resources": [
"arn:aws:codepipeline:us-east-1:account_ID:myPipeline"
],
"detail": {
"pipeline": "myPipeline",
"version": "1",
"execution-id": execution_Id,
"stage": "Prod",
"state": "STARTED"
}
}
Action execution state change message content: When an action execution starts, it emits an
event that sends notifications with the following content. This example is for the pipeline named
"myPipeline" in the us-east-1 region, for the action "myAction".
{
"version": "0",
"id": event_Id,
"detail-type": "CodePipeline Action Execution State Change",
"source": "aws.codepipeline",
"account": Pipeline_Account,
"time": TimeStamp,
"region": "us-east-1",
"resources": [
"arn:aws:codepipeline:us-east-1:account_ID:myPipeline"
],
"detail": {
"pipeline": "myPipeline",
"version": "1",
"execution-id": execution_Id,
"stage": "Prod",
"action": "myAction",
"state": "STARTED",
"type": {
"owner": "AWS",
"category": "Deploy",
"provider": "CodeDeploy",
"version": 1
}
}
}
Pipeline-level states
CANCELED: The pipeline execution was canceled because the pipeline structure was updated.
SUPERSEDED: While this pipeline execution was waiting for the next stage to be completed, a newer
pipeline execution advanced and continued through the pipeline instead.
Stage-level states
CANCELED: The stage was canceled because the pipeline structure was updated.
Action-level states
FAILED: For Approval actions, the FAILED state means the action was either rejected by the reviewer
or failed due to an incorrect action configuration.
CANCELED: The action was canceled because the pipeline structure was updated.
Prerequisites
Before you create event rules for use in your CodePipeline operations, you should do the following:
• Complete the CloudWatch Events prerequisites. For this information, see Regional Endpoints.
• Familiarize yourself with events, rules, and targets in CloudWatch Events. For more information, see
What Is Amazon CloudWatch Events.
• Create the target or targets you will use in your event rules, such as an Amazon SNS notification topic.
4. Specify the event type that the rule applies to:
• For a rule that applies to pipeline-level events, choose CodePipeline Pipeline Execution State
Change.
• For a rule that applies to stage-level events, choose CodePipeline Stage Execution State Change.
• For a rule that applies to action-level events, choose CodePipeline Action Execution State
Change.
5. Specify the state changes the rule applies to:
• For a rule that applies to all state changes, choose Any state.
• For a rule that applies to some state changes only, choose Specific state(s), and then choose one
or more state values from the list.
6. For event patterns that are more detailed than the selectors allow, you can also use the Edit option
in the Event Pattern Preview window to designate an event pattern in JSON format. The following
example shows the JSON structure edited manually to specify a pipeline named "myPipeline."
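{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Pipeline Execution State Change"
],
"detail": {
"pipeline": ["myPipeline"]
}
}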
Note
If not otherwise specified, then the event pattern is created for all pipelines/stages/actions
and states.
For more detailed event patterns, you can copy and paste the following example event patterns into
the Edit window.
• Example
Use this sample event pattern to capture failed deploy and build actions across all the pipelines.
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Action Execution State Change"
],
"detail": {
"state": [
"FAILED"
],
"type": {
"category": ["Deploy", "Build"]
}
}
}
• Example
Use this sample event pattern to capture all rejected or failed approval actions across all the
pipelines.
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Action Execution State Change"
],
"detail": {
"state": [
"FAILED"
],
"type": {
"category": ["Approval"]
}
}
}
• Example
Use this sample event pattern to capture all the events from the specified pipelines.
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Pipeline Execution State Change",
"CodePipeline Action Execution State Change",
"CodePipeline Stage Execution State Change"
],
"detail": {
"pipeline": ["myPipeline", "my2ndPipeline"]
}
}
To use the AWS CLI to create a rule, call the put-rule command, specifying:
• A name that uniquely identifies the rule you are creating. This name must be unique across all of the
CloudWatch Events rules associated with your AWS account.
• The event pattern for the source and detail fields used by the rule. For more information, see Amazon
CloudWatch Events and Event Patterns.
1. Call the put-rule command to create a rule specifying the event pattern. (See the preceding tables
for valid states.)
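For example, you could run a command similar to the following; the rule name
MyPipelineStateRule is a placeholder:
aws events put-rule --name MyPipelineStateRule --event-pattern "{\"source\":[\"aws.codepipeline\"],\"detail-type\":[\"CodePipeline Pipeline Execution State Change\"]}"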
2. Call the put-targets command to add a target to your new rule, as shown in this example for an
Amazon SNS topic:
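The rule name and topic ARN below are example placeholders:
aws events put-targets --rule MyPipelineStateRule --targets Id=1,Arn=arn:aws:sns:us-east-2:80398EXAMPLE:MyNotificationTopic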
3. Add permissions for Amazon CloudWatch Events to use the designated target service to invoke
the notification. For more information, see Using Resource-Based Policies for Amazon CloudWatch
Events.
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
For an ongoing record of events in your AWS account, including events for CodePipeline, create a trail.
A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following:
All CodePipeline actions are logged by CloudTrail and are documented in the CodePipeline
API Reference. For example, calls to the CreatePipeline, GetPipelineExecution and
UpdatePipeline actions generate entries in the CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
The following example shows a CloudTrail log entry for an update pipeline event, where a pipeline
named MyFirstPipeline has been edited by the user named JaneDoe-CodePipeline with the account
ID 80398EXAMPLE. The user changed the name of the source stage of a pipeline from Source to
MySourceStage. Because both the requestParameters and the responseElements elements in the
CloudTrail log contain the entire structure of the edited pipeline, those elements have been abbreviated
in the following example. Emphasis has been added to the requestParameters portion of the pipeline
where the change occurred, the previous version number of the pipeline, and the responseElements
portion, which shows the version number incremented by 1. Edited portions are marked with ellipses (...)
to illustrate where more data appears in a real log entry.
{
"eventVersion":"1.03",
"userIdentity": {
"type":"IAMUser",
"principalId":"AKIAI44QH8DHBEXAMPLE",
"arn":"arn:aws:iam::80398EXAMPLE:user/JaneDoe-CodePipeline",
"accountId":"80398EXAMPLE",
"accessKeyId":"AKIAIOSFODNN7EXAMPLE",
"userName":"JaneDoe-CodePipeline",
"sessionContext": {
"attributes":{
"mfaAuthenticated":"false",
"creationDate":"2015-06-17T14:44:03Z"
}
},
"invokedBy":"signin.amazonaws.com"},
"eventTime":"2015-06-17T19:12:20Z",
"eventSource":"codepipeline.amazonaws.com",
"eventName":"UpdatePipeline",
"awsRegion":"us-east-2",
"sourceIPAddress":"192.0.2.64",
"userAgent":"signin.amazonaws.com",
"requestParameters":{
"pipeline":{
"version":1,
"roleArn":"arn:aws:iam::80398EXAMPLE:role/AWS-CodePipeline-Service",
"name":"MyFirstPipeline",
"stages":[
{
"actions":[
{
"name":"MySourceStage",
"actionType":{
"owner":"AWS",
"version":"1",
"category":"Source",
"provider":"S3"
},
"inputArtifacts":[],
"outputArtifacts":[
{"name":"MyApp"}
],
"runOrder":1,
"configuration":{
"S3Bucket":"awscodepipeline-demobucket-example-date",
"S3ObjectKey":"sampleapp_linux.zip"
}
}
],
"name":"Source"
},
(...)
},
"responseElements":{
"pipeline":{
"version":2,
(...)
},
"requestID":"2c4af5c9-7ce8-EXAMPLE",
"eventID":""c53dbd42-This-Is-An-Example"",
"eventType":"AwsApiCall",
"recipientAccountId":"80398EXAMPLE"
}
]
}
Troubleshooting CodePipeline
The following information might help you troubleshoot common issues in AWS CodePipeline.
Topics
• Pipeline Error: A pipeline configured with AWS Elastic Beanstalk returns an error
message: "Deployment failed. The provided role does not have sufficient permissions:
Service:AmazonElasticLoadBalancing" (p. 345)
• Deployment Error: A pipeline configured with an AWS Elastic Beanstalk deploy action hangs instead
of failing if the "DescribeEvents" permission is missing (p. 346)
• Pipeline Error: A source action returns the insufficient permissions message: "Could not access
the CodeCommit repository repository-name. Make sure that the pipeline IAM role has sufficient
permissions to access the repository." (p. 347)
• Pipeline Error: A Jenkins build or test action runs for a long time and then fails due to lack of
credentials or permissions (p. 347)
• Pipeline Error: My GitHub source stage contains Git submodules, but CodePipeline doesn't initialize
them (p. 347)
• Pipeline Error: I receive a pipeline error that says: "Could not access the GitHub repository" or "Unable
to connect to the GitHub repository" (p. 348)
• Pipeline Error: A pipeline created in one AWS region using a bucket created in another AWS region
returns an "InternalError" with the code "JobFailed" (p. 349)
• Deployment Error: A ZIP file that contains a WAR file is deployed successfully to AWS Elastic
Beanstalk, but the application URL reports a 404 Not Found error (p. 346)
• File permissions are not retained on source files from GitHub when ZIP does not preserve external
attributes (p. 350)
• Pipeline artifact folder names appear to be truncated (p. 351)
• Need Help with a Different Issue? (p. 351)
Possible fixes: The service role used by CodePipeline might be missing permissions that Elastic
Beanstalk requires. Edit the service role policy so that it includes a statement similar to the following:
{
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*",
"iam:PassRole"
],
"Resource": "*",
"Effect": "Allow"
},
After you apply the edited policy, follow the steps in Start a Pipeline Manually in AWS
CodePipeline (p. 184) to manually rerun any pipelines that use Elastic Beanstalk.
Depending on your security needs, you can modify the permissions in other ways, too.
For more information about the default service role, see Review the Default CodePipeline Service Role
Policy (p. 363).
Possible fixes: Add the required permissions for CodeCommit to your CodePipeline service role's policy.
For more information, see Add Permissions for Other AWS Services (p. 366).
Possible fixes: Make sure that the Amazon EC2 instance role or IAM user is configured with the
AWSCodePipelineCustomActionAccess managed policy or with the equivalent permissions. For
more information, see AWS Managed (Predefined) Policies for CodePipeline (p. 363).
If you are using an IAM user, make sure the AWS profile configured on the instance uses the IAM user
configured with the correct permissions. You might have to provide the IAM user credentials you
configured for integration between Jenkins and CodePipeline directly into the Jenkins UI. This is not a
recommended best practice. If you must do so, be sure the Jenkins server is secured and uses HTTPS
instead of HTTP.
Possible fixes: Consider cloning the GitHub repository directly as part of a separate script. For example,
you could include a clone action in a Jenkins script.
If these permissions have been revoked or otherwise disabled, then the pipeline fails when it cannot use
the GitHub token to connect to the repository.
It is a security best practice to rotate your personal access token on a regular basis. For more information,
see Rotate Your GitHub Personal Access Token on a Regular Basis (GitHub and CLI) (p. 388).
Possible fixes:
If CodePipeline is unable to connect to the GitHub repository, there are two troubleshooting options:
• You might simply need to reconnect your pipeline to the repository manually. You might have revoked
the permissions of the OAuth token for CodePipeline and they just need to be restored. To do this, see
the steps below.
• You might need to change your default OAuth token to a personal access token. The number of OAuth
tokens is limited. For more information, see the GitHub documentation. If CodePipeline reaches that
limit, older tokens stop working, and actions in pipelines that rely on that token fail.
1. Check to see if the permissions for CodePipeline have been revoked. For the steps to check the
Authorized OAuth Apps list in GitHub, see View Your Authorized OAuth Apps (p. 386). If you do
not see CodePipeline in the list, you must use the console to reconnect your pipeline to GitHub.
a. Open your pipeline in the console and choose Edit. On the source stage that contains your
GitHub source action, choose Edit stage.
b. On the GitHub source action, choose the edit icon.
c. On the Edit action page, choose Connect to GitHub to restore the authorization.
If prompted, you might need to re-enter the Repository and Branch for your action. Choose
Done. Choose Done on the stage editing page, and then choose Save on the pipeline editing
page. Run the pipeline.
2. If this does not correct the error but you can see CodePipeline in the Authorized OAuth Apps list in
GitHub, you might have exceeded the allowed number of tokens. To fix this issue, you can manually
configure one OAuth token as a personal access token, and then configure all pipelines in your AWS
account to use that token. For more information, see Configure Your Pipeline to Use a Personal
Access Token (GitHub and CLI) (p. 387).
Possible fixes: Make sure the Amazon S3 bucket where your artifact is stored is in the same AWS region
as the pipeline you have created.
Possible fixes: AWS Elastic Beanstalk can unpack a ZIP file, but not a WAR file contained in a ZIP file.
Instead of specifying a WAR file in your buildspec.yml file, specify a folder that contains the content
to be deployed. For example:
version: 0.1
phases:
  post_build:
    commands:
      - mvn package
      - mv target/my-web-app ./
artifacts:
  files:
    - my-web-app/**/*
  discard-paths: yes
Possible fixes: You must modify the file permissions via a chmod command from where the artifacts are
consumed. Update the build spec file in the build provider, such as CodeBuild, to restore file permissions
each time the build stage is run. The following example shows a build spec for CodeBuild with a chmod
command under the build: section:
version: 0.2
phases:
  build:
    commands:
      - dotnet restore
      - dotnet build
      - chmod a+x bin/Debug/myTests
      - bin/Debug/myTests
artifacts:
  files:
    - bin/Debug/myApplication
Note
To use the CodePipeline console to confirm the name of the build input artifact, display the
pipeline and then, in the Build action, rest your mouse pointer on the tooltip. Make a note of
the value for Input artifact (for example, MyApp). To use the AWS CLI to get the name of the S3
artifact bucket, run the AWS CodePipeline get-pipeline command. The output contains an
artifactStore object with a location field that displays the name of the bucket.
Explanation: CodePipeline truncates artifact names to ensure that the full Amazon S3 path does not
exceed policy size limits when CodePipeline generates temporary credentials for job workers.
Even though the artifact name appears to be truncated, CodePipeline maps to the artifact bucket in a
way that is not affected by artifacts with truncated names. The pipeline can function normally. This is
not an issue with the folder or artifacts. There is a 100-character limit to pipeline names. Although the
artifact folder name might appear to be shortened, it is still unique for your pipeline.
Authentication
You can access AWS as any of the following types of identities:
• AWS account root user – When you sign up for AWS, you provide an email address and password that
is associated with your AWS account. These are your root credentials and they provide complete access
to all of your AWS resources.
Important
For security reasons, we recommend that you use the root credentials only to create an
administrator user, which is an IAM user with full permissions to your AWS account. Then, you
can use this administrator user to create other IAM users and roles with limited permissions.
For more information, see IAM Best Practices and Creating an Admin User and Group in the
IAM User Guide.
• IAM user – An IAM user is simply an identity within your AWS account that has specific custom
permissions (for example, permissions to send event data to a target in CodePipeline). You can use an
IAM user name and password to sign in to secure AWS webpages like the AWS Management Console,
AWS Discussion Forums, or the AWS Support Center.
In addition to a user name and password, you can also generate access keys for each user. You can use
these keys when you access AWS services programmatically, either through one of the several SDKs
or by using the AWS Command Line Interface (AWS CLI). The SDK and CLI tools use the access keys to
cryptographically sign your request. If you don’t use the AWS tools, you must sign the request yourself.
CodePipeline supports Signature Version 4, a protocol for authenticating inbound API requests. For
more information about authenticating requests, see Signature Version 4 Signing Process in the AWS
General Reference.
• IAM role – An IAM role is another IAM identity you can create in your account that has specific
permissions. It is similar to an IAM user, but it is not associated with a specific person. An IAM role
enables you to obtain temporary access keys that can be used to access AWS services and resources.
IAM roles with temporary credentials are useful in the following situations:
• Federated user access – Instead of creating an IAM user, you can use preexisting user identities from
AWS Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated Users and Roles in the IAM User
Guide.
• Cross-account access – You can use an IAM role in your account to grant another AWS account
permissions to access your account’s resources. For an example, see Tutorial: Delegate Access Across
AWS Accounts Using IAM Roles in the IAM User Guide.
• AWS service access – You can use an IAM role in your account to grant an AWS service permissions
to access your account’s resources. For example, you can create a role that allows Amazon Redshift
to access an Amazon S3 bucket on your behalf and then load data stored in the bucket into an
Amazon Redshift cluster. For more information, see Creating a Role to Delegate Permissions to an
AWS Service in the IAM User Guide.
• Applications running on Amazon EC2 – Instead of storing access keys within the EC2 instance for
use by applications running on the instance and making AWS API requests, you can use an IAM role
to manage temporary credentials for these applications. To assign an AWS role to an EC2 instance
and make it available to all of its applications, you can create an instance profile that is attached
to the instance. An instance profile contains the role and enables programs running on the EC2
instance to get temporary credentials. For more information, see Using Roles for Applications on
Amazon EC2 in the IAM User Guide.
The following sections describe how to manage permissions for CodePipeline. We recommend that you
read the overview first.
When granting permissions, you decide who is getting the permissions, the resources they get
permissions for, and the specific actions that you want to allow on those resources.
Topics
• CodePipeline Resources and Operations (p. 354)
• Supported Resource-Level Permissions for CodePipeline API Calls (p. 355)
• Understanding Resource Ownership (p. 357)
• Managing Access to Resources (p. 358)
• Specifying Policy Elements: Actions, Effects, and Principals (p. 360)
• Specifying Conditions in a Policy (p. 361)
Pipeline: arn:aws:codepipeline:region:account:pipeline-name
Stage: arn:aws:codepipeline:region:account:pipeline-name/stage-name
Action: arn:aws:codepipeline:region:account:pipeline-name/stage-name/action-name
Note
Most services in AWS treat a colon (:) or a forward slash (/) as the same character in ARNs.
However, CodePipeline uses an exact match in event patterns and rules. Be sure to use the
correct ARN characters when creating event patterns so that they match the ARN syntax in the
pipeline you want to match.
In CodePipeline, there are API calls that support resource-level permissions. Resource-level permissions
indicate whether an API call can specify a resource ARN, or whether the API call can only specify
all resources using the wildcard. See Supported Resource-Level Permissions for CodePipeline API
Calls (p. 355) for a detailed description of resource-level permissions and a listing of the CodePipeline
API calls that support resource-level permissions.
For example, you can indicate a specific pipeline (myPipeline) in your statement using its ARN as
follows:
"Resource": "arn:aws:codepipeline:us-east-2:111222333444:myPipeline"
You can also specify all pipelines that belong to a specific account by using the (*) wildcard character as
follows:
"Resource": "arn:aws:codepipeline:us-east-2:111222333444:*"
To specify all resources, or if a specific API action does not support ARNs, use the (*) wildcard character in
the Resource element as follows:
"Resource": "*"
Note
When you create IAM policies, follow the standard security advice of granting least privilege
—that is, granting only the permissions required to perform a task. If an API call supports
ARNs, then it supports resource-level permissions, and you do not need to use the (*) wildcard
character.
Some CodePipeline API calls accept multiple resources (for example, GetPipeline). To specify multiple
resources in a single statement, separate their ARNs with commas, as follows:
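"Resource": [
"arn:aws:codepipeline:us-east-2:111222333444:MyFirstPipeline",
"arn:aws:codepipeline:us-east-2:111222333444:MySecondPipeline"
]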
CodePipeline provides a set of operations to work with the CodePipeline resources. For a list of available
operations, see CodePipeline Permissions Reference (p. 380).
The following table describes the AWS CodePipeline API calls that currently support resource-level
permissions, as well as the supported resources, resource ARNs, and condition keys for each action.
Note
AWS CodePipeline API calls that are not listed in this table do not support resource-level
permissions. If an AWS CodePipeline API call does not support resource-level permissions, you
can grant users permission to use it, but you have to specify a * for the resource element of your
policy statement.
PutActionRevision Pipeline
arn:aws:codepipeline:region:account:pipeline-name
arn:aws:codepipeline:region:account:actionType:owner/category/provider/version
CreatePipeline Pipeline
arn:aws:codepipeline:region:account:pipeline-name
ListPipelines Pipeline
arn:aws:codepipeline:region:account:pipeline-name
arn:aws:codepipeline:region:account:actionType:owner/category/provider/version
DisableStageTransition Pipeline
arn:aws:codepipeline:region:account:pipeline-name/stage-name
StartPipelineExecution Pipeline
arn:aws:codepipeline:region:account:pipeline-name
UpdatePipeline Pipeline
arn:aws:codepipeline:region:account:pipeline-name
GetPipelineState Pipeline
arn:aws:codepipeline:region:account:pipeline-name
RetryStageExecution Pipeline
arn:aws:codepipeline:region:account:pipeline-name
arn:aws:codepipeline:region:account:actionType:owner/category/provider/version
arn:aws:codepipeline:region:account:actionType:owner/category/provider/version
GetPipelineExecution Pipeline
arn:aws:codepipeline:region:account:pipeline-name
GetPipeline Pipeline
arn:aws:codepipeline:region:account:pipeline-name
ListPipelineExecutions Pipeline
arn:aws:codepipeline:region:account:pipeline-name
DeletePipeline Pipeline
arn:aws:codepipeline:region:account:pipeline-name
EnableStageTransition Pipeline
arn:aws:codepipeline:region:account:pipeline-name/stage-name
PutApprovalResult Action
arn:aws:codepipeline:region:account:pipeline-name/stage-name/action-name
Note
This API call supports resource-level permissions.
However, you may encounter an error if you use the
IAM console or Policy Generator to create policies with
"codepipeline:PutApprovalResult" that specify a resource
ARN. If you encounter an error, you can use the JSON tab in the
IAM console or the CLI to create a policy.
DeleteWebhook Webhook
arn:aws:codepipeline:region:account:webhook:webhook-name
DeregisterWebhookWithThirdParty Webhook
arn:aws:codepipeline:region:account:webhook:webhook-name
PutWebhook Pipeline
arn:aws:codepipeline:region:account:pipeline-name
Webhook
arn:aws:codepipeline:region:account:webhook:webhook-name
RegisterWebhookWithThirdParty Webhook
arn:aws:codepipeline:region:account:webhook:webhook-name
• If you use the root account credentials of your AWS account to create a rule, your AWS account is the
owner of the CodePipeline resource.
• If you create an IAM user in your AWS account and grant permissions to create CodePipeline resources
to that user, the user can create CodePipeline resources. However, your AWS account, to which the user
belongs, owns the CodePipeline resources.
• If you create an IAM role in your AWS account with permissions to create CodePipeline resources,
anyone who can assume the role can create CodePipeline resources. Your AWS account, to which the
role belongs, owns the CodePipeline resources.
Policies attached to an IAM identity are referred to as identity-based policies (IAM policies), and policies
attached to a resource are referred to as resource-based policies. CodePipeline supports only identity-
based policies (IAM policies).
Topics
• Identity-Based Policies (IAM Policies) (p. 358)
• Resource-Based Policies (p. 360)
• Attach a permissions policy to a user or a group in your account – To grant a user permissions to
view pipelines in the CodePipeline console, you can attach a permissions policy to a user or group that
the user belongs to.
• Attach a permissions policy to a role (grant cross-account permissions) – You can attach an
identity-based permissions policy to an IAM role to grant cross-account permissions. For example,
the administrator in Account A can create a role to grant cross-account permissions to another AWS
account (for example, Account B) or an AWS service as follows:
1. Account A administrator creates an IAM role and attaches a permissions policy to the role that
grants permissions on resources in Account A.
2. Account A administrator attaches a trust policy to the role identifying Account B as the principal
who can assume the role.
3. Account B administrator can then delegate permissions to assume the role to any users in Account
B. Doing this allows users in Account B to create or access resources in Account A. The principal in
the trust policy can also be an AWS service principal if you want to grant an AWS service permissions
to assume the role.
For more information about using IAM to delegate permissions, see Access Management in the IAM
User Guide.
The following example shows a policy in the 111222333444 account that allows users to view, but not
change, the pipeline named MyFirstPipeline in the CodePipeline console. This policy is based on the
AWSCodePipelineReadOnlyAccess managed policy, but because it is specific to the MyFirstPipeline
pipeline, it cannot use the managed policy directly. If you do not want to restrict the policy to a specific
pipeline, strongly consider using one of the managed policies created and maintained by CodePipeline.
For more information, see Working with Managed Policies. You must attach this policy to an IAM role you
create for access, for example a role named CrossAccountPipelineViewers:
{
"Statement": [
{
"Action": [
"codepipeline:GetPipeline",
"codepipeline:GetPipelineState",
"codepipeline:GetPipelineExecution",
"codepipeline:ListPipelineExecutions",
"codepipeline:ListActionTypes",
"codepipeline:ListPipelines",
"iam:ListRoles",
"s3:GetBucketPolicy",
"s3:GetObject",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"codecommit:ListBranches",
"codecommit:ListRepositories",
"codedeploy:GetApplication",
"codedeploy:GetDeploymentGroup",
"codedeploy:ListApplications",
"codedeploy:ListDeploymentGroups",
"elasticbeanstalk:DescribeApplications",
"elasticbeanstalk:DescribeEnvironments",
"lambda:GetFunctionConfiguration",
"lambda:ListFunctions",
"opsworks:DescribeApps",
"opsworks:DescribeLayers",
"opsworks:DescribeStacks"
],
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "2012-10-17"
}
After you create this policy, create the IAM role in the 111222333444 account and attach the policy to
that role. In the role's trust relationships, you must add the AWS account that will assume this role. The
following example shows a policy that allows users from the 111111111111 AWS account to assume
roles defined in the 111222333444 account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole"
}
]
}
The following example shows a policy created in the 111111111111 AWS account that allows users to
assume the role named CrossAccountPipelineViewers in the 111222333444 account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::111222333444:role/CrossAccountPipelineViewers"
}
]
}
You can create specific IAM policies to restrict the calls and resources that users in your account have
access to, and then attach those policies to IAM users. For more information about how to create IAM
roles and to explore example IAM policy statements for CodePipeline, see Overview of Managing Access
Permissions to Your CodePipeline Resources (p. 353).
Resource-Based Policies
Other services, such as Amazon S3, also support resource-based permissions policies. For example, you
can attach a policy to an S3 bucket to manage access permissions to that bucket. Although CodePipeline
doesn't support resource-based policies, it does store artifacts to be used in pipelines in versioned S3
buckets.
Example To create a policy for an Amazon S3 bucket to use as the artifact store for
CodePipeline
You can use any versioned Amazon S3 bucket as the artifact store for CodePipeline. If you use the Create
Pipeline wizard to create your first pipeline, this Amazon S3 bucket is created for you automatically to
ensure that all objects uploaded to the artifact store are encrypted and connections to the bucket are
secure. As a best practice, if you create your own Amazon S3 bucket, consider adding the following policy
or its elements to the bucket. In this policy, the Amazon S3 bucket is named codepipeline-us-east-2-1234567890. Replace this bucket name with the name of your Amazon S3 bucket:
{
"Version": "2012-10-17",
"Id": "SSEAndSSLPolicy",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"Bool": {
"aws:SecureTransport": false
}
}
}
]
}
• Resource – You use an Amazon Resource Name (ARN) to identify the resource that the policy applies
to. For more information, see CodePipeline Resources and Operations (p. 354).
• Action – You use action keywords to identify resource operations that you want to allow or deny. For
example, the codepipeline:GetPipeline permission allows the user permissions to perform the
GetPipeline operation.
• Effect – You specify the effect, either allow or deny, when the user requests the specific action. If you
don't explicitly grant access to (allow) a resource, access is implicitly denied. You can also explicitly
deny access to a resource, which you might do to make sure that a user cannot access it, even if a
different policy grants access.
• Principal – In identity-based policies (IAM policies), the user that the policy is attached to is the
implicit principal. For resource-based policies, you specify the user, account, service, or other entity
that you want to receive permissions (applies to resource-based policies only).
To learn more about IAM policy syntax and descriptions, see AWS IAM Policy Reference in the IAM User
Guide.
For a table showing all of the CodePipeline API actions and the resources that they apply to, see
CodePipeline Permissions Reference (p. 380).
To express conditions, you use predefined condition keys. There are no condition keys specific to
CodePipeline. However, there are AWS-wide condition keys that you can use as appropriate. For a
complete list of AWS-wide keys, see Available Keys for Conditions in the IAM User Guide.
The following sections provide instructions for working with IAM policies specific to CodePipeline.
The following shows an example of a permissions policy that allows a user to enable and disable all
stage transitions in the pipeline named MyFirstPipeline in the us-west-2 region:
{
"Version": "2012-10-17",
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"codepipeline:EnableStageTransition",
"codepipeline:DisableStageTransition"
],
"Resource" : [
"arn:aws:codepipeline:us-west-2:111222333444:MyFirstPipeline"
]
}
]
}
Depending on the other services you incorporate into your pipelines, you may need permissions from
one or more of the following:
• AWS CodeCommit
• AWS CodeBuild
• AWS CloudFormation
• AWS CodeDeploy
• AWS Elastic Beanstalk
• AWS Lambda
• AWS OpsWorks
If you create an IAM policy that is more restrictive than the minimum required permissions, the console
won't function as intended for users with that IAM policy. To ensure that those users can still use the
CodePipeline console, also attach the AWSCodePipelineReadOnlyAccess managed policy to the user,
as described in AWS Managed (Predefined) Policies for CodePipeline (p. 363).
You don't need to allow minimum console permissions for users that are making calls only to the AWS
CLI or the CodePipeline API.
To perform this search across resources in all services, you must have the following permissions:
• CodeCommit: ListRepositories
• CodeDeploy: ListApplications
• CodePipeline: ListPipelines
Results are not returned for a service's resources if you do not have permissions for that service. Even if
you have permissions for viewing resources, specific resources will not be returned if there is an explicit
Deny to view those resources.
The following AWS managed policies, which you can attach to users in your account, are specific to
CodePipeline:
• AWSCodePipelineFullAccess: Grants full access to CodePipeline.
• AWSCodePipelineCustomActionAccess: Grants permission to create custom actions in CodePipeline or
integrate Jenkins resources for build or test actions.
• AWSCodePipelineReadOnlyAccess: Grants read-only access to CodePipeline.
• AWSCodePipelineApproverAccess: Grants permission to approve or reject a manual approval action.
CodePipeline uses this service role when processing revisions through the stages and actions in a
pipeline. That role is configured with one or more policies that control access to the AWS resources used
by the pipeline. You might want to attach additional policies to this role, edit the policy attached to the
role, or configure policies for other service roles in AWS. You might also want to attach a policy to a role
when configuring cross-account access to your pipeline.
Important
Modifying a policy statement or attaching another policy to the role can prevent your pipelines
from functioning. Be sure that you understand the implications before you modify the service
role for CodePipeline in any way. Make sure you test your pipelines after making any changes to
the service role.
Topics
• Review the Default CodePipeline Service Role Policy (p. 363)
• Add Permissions for Other AWS Services (p. 366)
• Remove Permissions for Unused AWS Services (p. 369)
Note
In the console, service roles created before September 2018 are created with the name
"oneClick_AWS-CodePipeline-Service_ID-Number".
Service roles created after September 2018 use the service role name format
"AWSCodePipelineServiceRole-Region-Pipeline_Name". For example, for a pipeline
named MyFirstPipeline created in the console in eu-west-2, the service role named
"AWSCodePipelineServiceRole-eu-west-2-MyFirstPipeline" is created.
{
"Statement": [
{
"Action": [
"iam:PassRole"
],
"Resource": "*",
"Effect": "Allow",
"Condition": {
"StringEqualsIfExists": {
"iam:PassedToService": [
"cloudformation.amazonaws.com",
"elasticbeanstalk.amazonaws.com",
"ec2.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
}
}
},
{
"Action": [
"codecommit:CancelUploadArchive",
"codecommit:GetBranch",
"codecommit:GetCommit",
"codecommit:GetUploadArchiveStatus",
"codecommit:UploadArchive"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetApplication",
"codedeploy:GetApplicationRevision",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"lambda:InvokeFunction",
"lambda:ListFunctions"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"opsworks:CreateDeployment",
"opsworks:DescribeApps",
"opsworks:DescribeCommands",
"opsworks:DescribeDeployments",
"opsworks:DescribeInstances",
"opsworks:DescribeStacks",
"opsworks:UpdateApp",
"opsworks:UpdateStack"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStacks",
"cloudformation:UpdateStack",
"cloudformation:CreateChangeSet",
"cloudformation:DeleteChangeSet",
"cloudformation:DescribeChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:SetStackPolicy",
"cloudformation:ValidateTemplate"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Effect": "Allow",
"Action": [
"devicefarm:ListProjects",
"devicefarm:ListDevicePools",
"devicefarm:GetRun",
"devicefarm:GetUpload",
"devicefarm:CreateUpload",
"devicefarm:ScheduleRun"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"servicecatalog:ListProvisioningArtifacts",
"servicecatalog:CreateProvisioningArtifact",
"servicecatalog:DescribeProvisioningArtifact",
"servicecatalog:DeleteProvisioningArtifact",
"servicecatalog:UpdateProduct"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cloudformation:ValidateTemplate"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ecr:DescribeImages"
],
"Resource": "*"
}
],
"Version": "2012-10-17"
}
Note
Make sure your service role for CodePipeline includes the
"elasticbeanstalk:DescribeEvents" action for any pipelines that use AWS Elastic
Beanstalk. Without this permission, AWS Elastic Beanstalk deploy actions hang without failing or
indicating an error.
This is especially important if the service role you use for your pipelines was created before support was
added to CodePipeline for an AWS service.
The following table shows when support was added for other AWS services.
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the IAM console, in the navigation pane, choose Roles, and then choose your AWS-CodePipeline-
Service role from the list of roles.
3. On the Permissions tab, in Inline Policies, in the row for your service role policy, choose Edit Policy.
Note
Your service role has a name in a format similar to oneClick_AWS-
CodePipeline-1111222233334.
4. Add the required permissions in the Policy Document box. For example, for CodeCommit support, add
the following to your policy statement:
{
"Action": [
"codecommit:GetBranch",
"codecommit:GetCommit",
"codecommit:UploadArchive",
"codecommit:GetUploadArchiveStatus",
"codecommit:CancelUploadArchive"
],
"Resource": "*",
"Effect": "Allow"
},
For AWS OpsWorks support, add the following to your policy statement:
{
"Action": [
"opsworks:CreateDeployment",
"opsworks:DescribeApps",
"opsworks:DescribeCommands",
"opsworks:DescribeDeployments",
"opsworks:DescribeInstances",
"opsworks:DescribeStacks",
"opsworks:UpdateApp",
"opsworks:UpdateStack"
],
"Resource": "*",
"Effect": "Allow"
},
For AWS CloudFormation support, add the following to your policy statement:
{
"Action": [
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStacks",
"cloudformation:UpdateStack",
"cloudformation:CreateChangeSet",
"cloudformation:DeleteChangeSet",
"cloudformation:DescribeChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:SetStackPolicy",
"cloudformation:ValidateTemplate",
"iam:PassRole"
],
"Resource": "*",
"Effect": "Allow"
},
For AWS CodeBuild support, add the following to your policy statement:
{
"Action": [
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "*",
"Effect": "Allow"
},
For AWS Device Farm support, add the following to your policy statement:
{
"Action": [
"devicefarm:ListProjects",
"devicefarm:ListDevicePools",
"devicefarm:GetRun",
"devicefarm:GetUpload",
"devicefarm:CreateUpload",
"devicefarm:ScheduleRun"
],
"Resource": "*",
"Effect": "Allow"
},
For AWS Service Catalog support, add the following to your policy statement:
{
"Effect": "Allow",
"Action": [
"servicecatalog:ListProvisioningArtifacts",
"servicecatalog:CreateProvisioningArtifact",
"servicecatalog:DescribeProvisioningArtifact",
"servicecatalog:DeleteProvisioningArtifact",
"servicecatalog:UpdateProduct"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cloudformation:ValidateTemplate"
],
"Resource": "*"
}
5. For Amazon ECR support, add the following to your policy statement:
{
"Action": [
"ecr:DescribeImages"
],
"Resource": "*",
"Effect": "Allow"
},
6. For Amazon ECS, the following are the minimum permissions needed to create pipelines with an
Amazon ECS deploy action.
{
"Action": [
"ecs:DescribeServices",
"ecs:DescribeTaskDefinition",
"ecs:DescribeTasks",
"ecs:ListTasks",
"ecs:RegisterTaskDefinition",
"ecs:UpdateService"
],
"Resource": "*",
"Effect": "Allow"
},
Note
When you create IAM policies, follow the standard security advice of granting least privilege
—that is, granting only the permissions required to perform a task. Certain API calls support
resource-based permissions and allow access to be limited. For example, in this case, to limit
permissions when calling DescribeTasks and ListTasks, you can replace the wildcard
character (*) in the Resource element with a specific resource ARN, or with an ARN that includes a
wildcard character (*) to match a group of resources.
7. Choose Validate Policy to ensure the policy contains no errors. When the policy is error-free, choose
Apply Policy.
For example, if none of your pipelines includes Elastic Beanstalk, you can edit the policy statement to
remove the section that grants access to Elastic Beanstalk and related resources:
{
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*",
"iam:PassRole"
],
"Resource": "*",
"Effect": "Allow"
},
Similarly, if none of your pipelines includes CodeDeploy, you can edit the policy statement to remove the
section that grants CodeDeploy resources:
{
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetApplicationRevision",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*",
"Effect": "Allow"
},
When you design IAM policies, you might be setting granular permissions by granting access to specific
resources. As the number of resources that you manage grows, this task becomes more difficult. Tagging
resources and using tags in policy statement conditions can make this task easier. You grant access in
bulk to any resource with a certain tag. Then you repeatedly apply this tag to relevant resources, during
creation or later.
Tags can be attached to the resource or passed in the request to services that support tagging. In
CodePipeline, resources can have tags, and some actions can include tags. When you create an IAM
policy, you can use tag condition keys to control:
• Which users can perform actions on a pipeline resource, based on tags that it already has.
• Which tags can be passed in an action's request.
• Whether specific tag keys can be used in a request.
For the complete syntax and semantics of tag condition keys, see Controlling Access Using Tags in the
IAM User Guide.
The following examples demonstrate how to specify tag conditions in policies for CodePipeline users.
Note
As you review the following examples, be aware that tag-based access control for
iam:PassRole is not currently supported. You cannot limit permissions to pass a role based on
tags attached to that role using the ResourceTag/key-name condition key. For more information,
see Controlling Access to Resources.
The AWSCodePipelineFullAccess managed user policy gives users unlimited permission to perform any
CodePipeline action on any resource. The following policy limits this power and denies unauthorized
users permission to create pipelines
for specific projects. To do that, it denies the CreatePipeline action if the request specifies a tag
named Project with one of the values ProjectA or ProjectB. (The aws:RequestTag condition key
is used to control which tags can be passed in an IAM request.) In addition, the policy prevents these
unauthorized users from tampering with the resources by using the aws:TagKeys condition key to not
allow tag modification actions to include these same tag values or to completely remove the Project
tag. A customer's administrator must attach this IAM policy to unauthorized IAM users, in addition to the
managed user policy.
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": [
"codepipeline:CreatePipeline",
"codepipeline:TagResource"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestTag/Project": ["ProjectA", "ProjectB"]
}
}
},
{
"Effect": "Deny",
"Action": [
"codepipeline:UntagResource"
],
"Resource": "*",
"Condition": {
"ForAllValues:StringEquals": {
"aws:TagKeys": ["Project"]
}
}
}
]
}
The AWSCodePipelineFullAccess managed user policy gives users unlimited permission to
perform any CodePipeline action on any resource.
The following policy limits this power and denies unauthorized users permission to perform actions on
specific project pipelines. To do that, it denies specific actions if the resource has a tag named Project
with one of the values ProjectA or ProjectB. (The aws:ResourceTag condition key is used to control
access to the resources based on the tags on those resources.) A customer's administrator must attach
this IAM policy to unauthorized IAM users, in addition to the managed user policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": [
"codepipeline:TagResource",
"codepipeline:UntagResource",
"codepipeline:UpdatePipeline",
"codepipeline:DeletePipeline",
"codepipeline:ListTagsForResource"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/Project": ["ProjectA", "ProjectB"]
}
}
}
]
}
The following policy grants users permission to create development pipelines in CodePipeline.
To do that, it allows the CreatePipeline and TagResource actions if the request specifies a tag
named Project with the value ProjectA. (The aws:RequestTag condition key is used to control
which tags can be passed in an IAM request.) The aws:TagKeys condition ensures tag key case
sensitivity. This policy is useful for IAM users who don't have the AWSCodePipelineFullAccess
managed user policy attached. The managed policy gives users unlimited permission to perform any
CodePipeline action on any resource.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:CreatePipeline",
"codepipeline:TagResource"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestTag/Project": "ProjectA"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": ["Project"]
}
}
}
]
}
The following policy grants users permission to perform actions on, and get information about, specific
project pipelines in CodePipeline.
To do that, it allows specific actions if the pipeline has a tag named Project with the value ProjectA.
(The aws:ResourceTag condition key is used to control access to resources based on the tags attached
to those resources.) The aws:TagKeys condition ensures tag key case sensitivity. This policy is useful
for IAM users who don't have the AWSCodePipelineFullAccess managed user policy attached. The managed policy gives
users unlimited permission to perform any CodePipeline action on any resource.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:UpdatePipeline",
"codepipeline:DeletePipeline",
"codepipeline:ListPipelines"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/Project": "ProjectA"
},
"ForAllValues:StringEquals": {
"aws:TagKeys": ["Project"]
}
}
}
]
}
Examples
The following example grants permission to get the state of the pipeline named MyFirstPipeline:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:GetPipelineState"
],
"Resource": "arn:aws:codepipeline:us-west-2:111222333444:MyFirstPipeline"
}
]
}
The following example grants permission to disable and enable transitions for all stages in the pipeline
named MyFirstPipeline:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:DisableStageTransition",
"codepipeline:EnableStageTransition"
],
"Resource": "arn:aws:codepipeline:us-west-2:111222333444:MyFirstPipeline/*"
}
]
}
To allow the user to disable and enable transitions for a single stage in a pipeline, you must specify the
stage. For example, to allow the user to enable and disable transitions for a stage named Staging in a
pipeline named MyFirstPipeline:
"Resource": "arn:aws:codepipeline:us-west-2:111222333444:MyFirstPipeline/Staging"
The following example grants permission to get a list of all available action types:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:ListActionTypes"
],
"Resource": "arn:aws:codepipeline:us-west-2:111222333444:actiontype:*"
}
]
}
The following example grants permission to approve or reject manual approval actions in the stage
named Staging in the pipeline named MyFirstPipeline:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:PutApprovalResult"
],
"Resource": "arn:aws:codepipeline:us-west-2:111222333444:MyFirstPipeline/
Staging/*"
}
]
}
The following example grants permission to poll for jobs for the custom action type named TestProvider
(a Test action, version 1):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codepipeline:PollForJobs"
],
"Resource": [
"arn:aws:codepipeline:us-
west-2:111222333444:actionType:Custom/Test/TestProvider/1"
]
}
]
}
The following example shows a policy that grants a job worker for a custom action the permissions it
requires:
{
"Statement": [
{
"Action": [
"codepipeline:AcknowledgeJob",
"codepipeline:GetJobDetails",
"codepipeline:PollForJobs",
"codepipeline:PutJobFailureResult",
"codepipeline:PutJobSuccessResult"
],
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "2012-10-17"
}
The following example shows a policy in the 80398EXAMPLE account that allows users to view, but not
change, the pipeline named MyFirstPipeline in the CodePipeline console. This policy is based on the
AWSCodePipelineReadOnlyAccess managed policy, but because it is specific to the MyFirstPipeline
pipeline, it cannot use the managed policy directly. If you do not want to restrict the policy to a specific
pipeline, strongly consider using one of the managed policies created and maintained by CodePipeline.
For more information, see Working with Managed Policies. You must attach this policy to an IAM role you
create for access, for example a role named CrossAccountPipelineViewers:
{
"Statement": [
{
"Action": [
"codepipeline:GetPipeline",
"codepipeline:GetPipelineState",
"codepipeline:ListActionTypes",
"codepipeline:ListPipelines",
"iam:ListRoles",
"s3:GetBucketPolicy",
"s3:GetObject",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"codedeploy:GetApplication",
"codedeploy:GetDeploymentGroup",
"codedeploy:ListApplications",
"codedeploy:ListDeploymentGroups",
"elasticbeanstalk:DescribeApplications",
"elasticbeanstalk:DescribeEnvironments",
"lambda:GetFunctionConfiguration",
"lambda:ListFunctions"
],
"Effect": "Allow",
"Resource": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline"
}
],
"Version": "2012-10-17"
}
After you create this policy, create the IAM role in the 80398EXAMPLE account and attach the policy to
that role. In the role's trust relationships, you must add the AWS account that will assume this role. The
following example shows a policy that allows users from the 111111111111 AWS account to assume
roles defined in the 80398EXAMPLE account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole"
}
]
}
The following example shows a policy created in the 111111111111 AWS account that allows users to
assume the role named CrossAccountPipelineViewers in the 80398EXAMPLE account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::80398EXAMPLE:role/CrossAccountPipelineViewers"
}
]
}
The following example shows a policy configured by AccountA for the Amazon S3 bucket used to store
pipeline artifacts. The policy grants access to AccountB. In the following example, the account ID for
AccountB is 012ID_ACCOUNT_B, and the Amazon S3 bucket is named codepipeline-us-east-2-1234567890.
Replace these values with the ID of the account you want to allow access and with the name of your
Amazon S3 bucket:
{
"Version": "2012-10-17",
"Id": "SSEAndSSLPolicy",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*",
"Condition": {
"Bool": {
"aws:SecureTransport": false
}
}
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::012ID_ACCOUNT_B:root"
},
"Action": [
"s3:Get*",
"s3:Put*"
],
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::012ID_ACCOUNT_B:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::codepipeline-us-east-2-1234567890"
}
]
}
The following example shows a policy configured by AccountA that allows AccountB to assume a role.
This policy must be applied to the service role for CodePipeline (AWS-CodePipeline-Service). For
more information about how to apply policies to roles in IAM, see Modifying a Role. In the following
example, 012ID_ACCOUNT_B is the account ID for AccountB:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": [
"arn:aws:iam::012ID_ACCOUNT_B:role/*"
]
}
}
The following example shows a policy configured by AccountB and applied to the Amazon EC2 instance
role for CodeDeploy. This policy grants access to the Amazon S3 bucket used by AccountA to store
pipeline artifacts (codepipeline-us-east-2-1234567890):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890"
]
}
]
}
The following example shows a policy for AWS KMS where arn:aws:kms:us-
east-1:012ID_ACCOUNT_A:key/2222222-3333333-4444-556677EXAMPLE is the ARN of the
customer-managed key created in AccountA and configured to allow AccountB to use it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:DescribeKey",
"kms:GenerateDataKey*",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:us-
east-1:012ID_ACCOUNT_A:key/2222222-3333333-4444-556677EXAMPLE"
]
}
]
}
The following example shows an inline policy for an IAM role (CrossAccount_Role) created by
AccountB that allows access to CodeDeploy actions required by the pipeline in AccountA.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:GetApplicationRevision",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*"
}
]
}
The following example shows an inline policy for an IAM role (CrossAccount_Role) created by
AccountB that allows access to the Amazon S3 bucket in order to download input artifacts and upload
output artifacts:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject*",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::codepipeline-us-east-2-1234567890/*"
]
}
]
}
For more information about how to edit a pipeline for cross-account access to resources after you have
created the necessary policies, roles, and AWS Key Management Service customer-managed key, see Step
2: Edit the Pipeline (p. 223).
You can use AWS-wide condition keys in your CodePipeline policies to express conditions. For a complete
list of AWS-wide keys, see Available Keys in the IAM User Guide.
Note
To specify an action, use the codepipeline: prefix followed by the API operation name.
For example: codepipeline:GetPipeline, codepipeline:CreatePipeline, or
codepipeline:* (for all CodePipeline actions).
To specify multiple actions in a single statement, separate them with commas as follows:
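"Action": ["codepipeline:action1", "codepipeline:action2"]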
You can also specify multiple actions using wildcards. For example, you can specify all actions whose
name begins with the word "Get" as follows:
"Action": "codepipeline:Get*"
"Action": "codepipeline:*"
The actions you can specify in an IAM policy for use with CodePipeline are listed below.
AcknowledgeJob
Action(s): codepipeline:AcknowledgeJob
Required to view information about a specified job and whether that job has been received by the
job worker. Only used for custom actions.
AcknowledgeThirdPartyJob
Action(s): codepipeline:AcknowledgeThirdPartyJob
Required to confirm that a job worker has received the specified job. Only used for partner actions.
CreateCustomActionType
Action(s): codepipeline:CreateCustomActionType
Required to create a new custom action that can be used in all pipelines associated with the AWS
account. Only used for custom actions.
CreatePipeline
Action(s): codepipeline:CreatePipeline
Required to create a pipeline.
DeleteCustomActionType
Action(s): codepipeline:DeleteCustomActionType
Required to mark a custom action as deleted. PollForJobs for the custom action will fail after the
action is marked for deletion. Only used for custom actions.
DeletePipeline
Action(s): codepipeline:DeletePipeline
Required to delete a pipeline.
DeleteWebhook
Action(s): codepipeline:DeleteWebhook
Required to delete a webhook.
DeregisterWebhookWithThirdParty
Action(s): codepipeline:DeregisterWebhookWithThirdParty
Before a webhook is deleted, required to remove the connection between the webhook that was
created by CodePipeline and the external tool with events to be detected. Currently only supported
for webhooks that target an action type of GitHub.
DisableStageTransition
Action(s): codepipeline:DisableStageTransition
Required to prevent artifacts in a pipeline from transitioning to the next stage in the pipeline.
EnableStageTransition
Action(s): codepipeline:EnableStageTransition
Required to enable artifacts in a pipeline to transition to the next stage in the pipeline.
GetJobDetails
Action(s): codepipeline:GetJobDetails
Required to retrieve information about a job. Only used for custom actions.
GetPipeline
Action(s): codepipeline:GetPipeline
Required to retrieve the structure, stages, actions, and metadata of a pipeline, including the pipeline
ARN.
GetPipelineExecution
Action(s): codepipeline:GetPipelineExecution
Required to retrieve information about an execution of a pipeline, including details about artifacts,
the pipeline execution ID, and the name, version, and status of the pipeline.
GetPipelineState
Action(s): codepipeline:GetPipelineState
Required to retrieve information about the state of a pipeline, including the stages and actions.
GetThirdPartyJobDetails
Action(s): codepipeline:GetThirdPartyJobDetails
Required to request the details of a job for a third party action. Only used for partner actions.
ListActionTypes
Action(s): codepipeline:ListActionTypes
Required to generate a summary of all AWS CodePipeline action types associated with your account.
ListPipelineExecutions
Action(s): codepipeline:ListPipelineExecutions
Required to generate a summary of the most recent executions for a pipeline.
ListPipelines
Action(s): codepipeline:ListPipelines
Required to generate a summary of all of the pipelines associated with your account.
ListTagsForResource
Action(s): codepipeline:ListTagsForResource
Required to list the tags for a specified resource.
ListWebhooks
Action(s): codepipeline:ListWebhooks
Required to list all of the webhooks in the account for that region.
PollForJobs
Action(s): codepipeline:PollForJobs
Required to retrieve information about any jobs for AWS CodePipeline to act upon.
PollForThirdPartyJobs
Action(s): codepipeline:PollForThirdPartyJobs
Required to determine whether there are any third party jobs for a job worker to act on. Only used
for partner actions.
PutActionRevision
Action(s): codepipeline:PutActionRevision
Required to report information to CodePipeline about new revisions to a source.
PutApprovalResult
Action(s): codepipeline:PutApprovalResult
Required to report the response to a manual approval request to AWS CodePipeline. Valid responses
include Approved and Rejected.
PutJobFailureResult
Action(s): codepipeline:PutJobFailureResult
Required to report the failure of a job as returned to the pipeline by a job worker. Only used for
custom actions.
PutJobSuccessResult
Action(s): codepipeline:PutJobSuccessResult
Required to report the success of a job as returned to the pipeline by a job worker. Only used for
custom actions.
PutThirdPartyJobFailureResult
Action(s): codepipeline:PutThirdPartyJobFailureResult
Required to report the failure of a third party job as returned to the pipeline by a job worker. Only
used for partner actions.
PutThirdPartyJobSuccessResult
Action(s): codepipeline:PutThirdPartyJobSuccessResult
Required to report the success of a third party job as returned to the pipeline by a job worker. Only
used for partner actions.
PutWebhook
Action(s): codepipeline:PutWebhook
Required to create a webhook.
RegisterWebhookWithThirdParty
Action(s): codepipeline:RegisterWebhookWithThirdParty
After a webhook is created, required to configure supported third parties to call the generated
webhook URL.
RetryStageExecution
Action(s): codepipeline:RetryStageExecution
Required to resume the pipeline execution by retrying the last failed actions in a stage.
StartPipelineExecution
Action(s): codepipeline:StartPipelineExecution
Required to start the specified pipeline. Specifically, it begins processing the latest commit to the
source location specified as part of the pipeline.
TagResource
Action(s): codepipeline:TagResource
Required to add tags to the specified resource.
UntagResource
Action(s): codepipeline:UntagResource
Required to remove tags from the specified resource.
UpdatePipeline
Action(s): codepipeline:UpdatePipeline
Required to update the structure of a pipeline.
Security Configuration
This section describes security configuration based on best practices for the following:
Topics
• Configure Server-Side Encryption for Artifacts Stored in Amazon S3 for CodePipeline (p. 384)
• Configure GitHub Authentication (p. 386)
• Use Parameter Store to Track Database Passwords or Third-Party API Keys (p. 391)
• CodePipeline creates an Amazon S3 artifact bucket and default AWS-managed SSE-KMS encryption
keys when you create a pipeline using the Create Pipeline wizard. The master key is encrypted along
with object data and managed by AWS.
• You can create and manage your own customer-managed SSE-KMS keys.
If you are using the default Amazon S3 key, you cannot change or delete this AWS-managed key. If you
are using a customer-managed key in AWS KMS to encrypt or decrypt artifacts in the Amazon S3 bucket,
you can change or rotate this key as necessary.
Amazon S3 supports bucket policies that you can use if you require server-side encryption for all
objects that are stored in your bucket. For example, the following bucket policy denies upload object
(s3:PutObject) permission to everyone if the request does not include the x-amz-server-side-
encryption header requesting server-side encryption with SSE-KMS.
{
"Version": "2012-10-17",
"Id": "SSEAndSSLPolicy",
"Statement": [
{
"Sid": "DenyUnEncryptedObjectUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::codepipeline-us-west-2-890506445442/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
},
{
"Sid": "DenyInsecureConnections",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::codepipeline-us-west-2-890506445442/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
For more information about server-side encryption and AWS KMS, see Protecting Data Using Server-Side
Encryption and https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html. For more
information about AWS KMS, see the AWS Key Management Service Developer Guide.
Topics
• View Your Default Amazon S3 SSE-KMS Encryption Keys (p. 385)
• Configure Server-Side Encryption for S3 Buckets When Using AWS CloudFormation or the
CLI (p. 385)
To view information about your default AWS KMS key, do the following:
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the service navigation pane, choose Encryption Keys. (If a welcome page appears, choose Get
Started Now.)
3. In Filter, choose the region for your pipeline. For example, if the pipeline was created in us-east-2,
make sure the filter is set to US East (Ohio).
For more information about the regions and endpoints available for CodePipeline, see Regions and
Endpoints.
4. In the list of encryption keys, choose the key with the alias used for your pipeline (by default, aws/
s3). Basic information about the key will be displayed.
You might want to create and manage your own customer-managed key instead of using the default key in
the following cases:
• You want to rotate the key on a schedule to meet business or security requirements for your
organization.
• You want to create a pipeline that uses resources associated with another AWS account. This requires
the use of a customer-managed key. For more information, see Create a Pipeline in CodePipeline That
Uses Resources from Another AWS Account (p. 215).
Cryptographic best practices discourage extensive reuse of encryption keys. As a best practice, rotate
your key on a regular basis. To create new cryptographic material for your AWS Key Management Service
(AWS KMS) customer master keys (CMKs), you can create new CMKs, and then change your applications
or aliases to use the new CMKs. Or, you can enable automatic key rotation for an existing CMK.
To rotate your SSE-KMS customer master key, see Rotating Customer Master Keys.
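For example, a minimal AWS CLI sketch for enabling automatic rotation on an existing customer-managed
CMK (the key ID shown is a placeholder):

aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab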
• AWS creates a default AWS-managed OAuth token when you use the console to create or update
pipelines.
• You can create and manage your own customer-generated personal access tokens. You need personal
access tokens when you use the CLI, SDK, or AWS CloudFormation to create or update your pipeline.
Topics
• View Your Authorized OAuth Apps (p. 386)
• Configure Your Pipeline to Use a Personal Access Token (GitHub and CLI) (p. 387)
• Rotate Your GitHub Personal Access Token on a Regular Basis (GitHub and CLI) (p. 388)
When you use the console to create or edit a pipeline for a GitHub action, you choose Connect to GitHub
in the console. When you use the console to connect to GitHub, AWS creates a default AWS-managed
OAuth token for you. For the steps to connect to GitHub, see Pipeline Error: I receive a pipeline error that
says: "Could not access the GitHub repository" or "Unable to connect to the GitHub repository" (p. 348).
When your pipeline connects to the repository, it uses GitHub credentials to connect to GitHub. You
do not manage the token in any way, but you can view your connection information to verify your
authorized OAuth applications in GitHub.
1. In GitHub, from the drop-down option on your profile photo, choose Settings.
2. Choose Applications and then choose Authorized OAuth Apps.
3. Review your authorized apps.
Use these steps to create a GitHub personal access token and then update the pipeline structure with the
new token.
1. In GitHub, from the drop-down option on your profile photo, choose Settings.
2. Choose Developer settings, and then choose Personal access tokens.
3. Choose Generate new token.
4. Under Select scopes, select admin:repo_hook and repo.
5. Choose Generate token.
6. Copy the generated token. (Make sure you copy your generated token at this time. You cannot view
the token after you close this page.)
7. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline
command on the pipeline where you want to change the OAuth token, and then copy the output
of the command to a JSON file. For example, for a pipeline named MyFirstPipeline, you would type
something similar to the following:
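aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json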
"configuration": {
"Owner": "MyGitHubUserName",
"Repo": "test-repo",
"Branch": "master",
"OAuthToken": "Replace the **** with your personal access token"
}
8. In the JSON file, locate the source action's configuration section and replace the masked OAuthToken
value with your new personal access token.
9. If you are working with the pipeline structure retrieved using the get-pipeline command, you
must modify the structure in the JSON file by removing the metadata lines from the file, or the
update-pipeline command will not be able to use it. Remove the section from the pipeline
structure in the JSON file (the "metadata": { } lines and the "created," "pipelineARN," and "updated"
fields within).
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
10. Save the file, and then run the update-pipeline command with the --cli-input-json parameter to specify
the JSON file you just edited.
For example, to update a pipeline named MyFirstPipeline, you would type something similar to the
following:
Important
Be sure to include file:// before the file name. It is required in this command.
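aws codepipeline update-pipeline --cli-input-json file://pipeline.json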
11. Repeat steps 7 through 10 for every pipeline that contains a GitHub action.
12. When you are finished, delete the JSON files used to update those pipelines.
For more information, see Creating a personal access token for the command line on the GitHub website.
After you have regenerated a new personal access token, you can use the CLI or API to rotate it. You can
also use AWS CloudFormation and call UpdatePipeline.
Note
You might have to update other applications if they are using the same personal access token.
As a security best practice, do not share a single token across multiple applications. Create a new
personal access token for each application.
Use these steps to rotate your GitHub personal access token and then update the pipeline structure with
the new token.
Note
After you rotate your personal access token, remember to update any CLI scripts or AWS
CloudFormation templates that contain the old token information.
1. In GitHub, from the drop-down option on your profile photo, choose Settings.
2. Choose Developer settings and then choose Personal access tokens.
3. Next to your GitHub personal access token, choose Edit.
4. Choose Regenerate token.
5. Copy the regenerated token.
Note
Make sure you copy your generated token at this time. You cannot view the token after you
close this page.
6. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline
command on the pipeline where you want to change the personal access token, and then copy the
output of the command to a JSON file. For example, for a pipeline named MyFirstPipeline, you
would type something similar to the following:
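aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json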
{
"configuration": {
"Owner": "MyGitHubUserName",
"Repo": "test-repo",
"Branch": "master",
"OAuthToken": "Replace the **** with your personal access token"
},
7. In the JSON file, replace the masked OAuthToken value with the regenerated personal access token.
8. If you are working with the pipeline structure retrieved using the get-pipeline command, you
must modify the structure in the JSON file by removing the metadata lines from the file, or the
update-pipeline command will not be able to use it. Remove the section from the pipeline
structure in the JSON file (the "metadata": { } lines and the "created," "pipelineARN," and "updated"
fields within).
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
9. Save the file, and then run update-pipeline with the --cli-input-json parameter to specify the
JSON file you just edited. For example, to update a pipeline named MyFirstPipeline, you would type
something similar to the following:
Important
Be sure to include file:// before the file name. It is required in this command.
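aws codepipeline update-pipeline --cli-input-json file://pipeline.json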
10. When you have finished updating your pipelines, delete the JSON files.
For more information, see Pipeline Error: I receive a pipeline error that says: "Could not access the GitHub
repository" or "Unable to connect to the GitHub repository" (p. 348).
Before you use the AWS CLI, make sure you complete the prerequisites in Getting Started with
CodePipeline (p. 9).
To view a list of all available CodePipeline commands, run the following command:
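aws codepipeline help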
To view information about a specific CodePipeline command, run the following command, where
command-name is the name of one of the commands listed below (for example, create-pipeline):
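aws codepipeline command-name help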
To begin learning how to use the commands in the CodePipeline extension to the AWS CLI, go to one or
more of the following sections:
You can also view examples of how to use most of these commands in CodePipeline Tutorials (p. 25).
Topics
• Valid Action Types and Providers in CodePipeline (p. 393)
• Pipeline and Stage Structure Requirements in CodePipeline (p. 394)
• Action Structure Requirements in CodePipeline (p. 396)
• Source
• Build
• Test
• Deploy
• Approval
• Invoke
Each action category has a designated set of providers. This table lists valid providers by action type.
• Source: Amazon S3, CodeCommit, GitHub, Amazon ECR
• Build: CodeBuild, CloudBees (Custom), Jenkins (Custom), TeamCity (Custom)
• Test: CodeBuild, BlazeMeter (Custom), GhostInspector (ThirdParty), Jenkins (Custom), Nouvola
(ThirdParty), Runscope (ThirdParty)
• Deploy: Amazon S3, AWS CloudFormation, CodeDeploy, Amazon ECS, Elastic Beanstalk, AWS OpsWorks,
Amazon Alexa, XebiaLabs (Custom)
• Approval: Manual
Some action types in CodePipeline are available in select AWS Regions only. It is possible that an action
type is available in an AWS Region, but an AWS provider for that action type is not available.
For more information about each action provider, see Integrations with CodePipeline Action
Types (p. 12).
The following sections provide examples for provider information and configuration properties for each
action type.
The following example shows the basic structure for a pipeline:
{
"roleArn": "An IAM ARN for a service role, such as arn:aws:iam::80398EXAMPLE:role/AWS-
CodePipeline-Service",
"stages": [
{
"name": "SourceStageName",
"actions": [
... See Action Structure Requirements in CodePipeline ...
]
},
{
"name": "NextStageName",
"actions": [
... See Action Structure Requirements in CodePipeline ...
]
}
],
"artifactStore": {
"type": "S3",
"location": "The name of the Amazon S3 bucket automatically generated for you the
first time you create a pipeline
using the console, such as codepipeline-us-east-2-1234567890, or any Amazon S3
bucket you provision for this purpose"
},
"name": "YourPipelineName",
"version": 1
}
The following example shows the basic structure for a pipeline with cross-region actions that uses the
artifactStores parameter:
"pipeline": {
"name": "YourPipelineName",
"roleArn": "ServiceRoleARN",
"artifactStores": {
"us-east-1": {
"type": "S3",
"location": "The name of the Amazon S3 bucket automatically generated as
the default when you use the console, such as codepipeline-us-east-2-1234567890, or any
Amazon S3 bucket you provision for this purpose"
},
"us-west-2": {
"type": "S3",
"location": "The name of the Amazon S3 bucket automatically generated as
the default when you use the console, such as codepipeline-us-east-2-1234567890, or any
Amazon S3 bucket you provision for this purpose"
}
},
"stages": [
{
...
• The pipeline metadata fields are distinct from the pipeline structure and cannot be edited. When you
update a pipeline, the date in the updated metadata field changes automatically.
• When you edit or update a pipeline, the pipeline name cannot be changed.
Note
If you want to rename an existing pipeline, you can use the CLI get-pipeline command
to build a JSON file containing your pipeline's structure. Then you can use the CLI create-
pipeline command to create a new pipeline with that structure and give it a new name.
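For example, a minimal AWS CLI sketch of that rename flow (the pipeline and file names below are
placeholders):

aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json
# Edit pipeline.json: change the "name" value and remove the "metadata" section.
aws codepipeline create-pipeline --cli-input-json file://pipeline.json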
The version number of a pipeline is automatically generated and updated every time you update the
pipeline.
[
{
"inputArtifacts": [
An input artifact structure, if supported for the action category
],
"name": "ActionName",
"region": "Region",
"actionTypeId": {
"category": "An action category",
"owner": "AWS",
"version": "1"
"provider": "A provider type for the action category",
},
"outputArtifacts": [
An output artifact structure, if supported for the action category
],
"configuration": {
Configuration details appropriate to the provider type
},
"runOrder": A positive integer that indicates the run order within the
stage,
}
]
For a list of example configuration details appropriate to the provider type, see Configuration Details
by Provider Type (p. 405).
"outputArtifacts": [
{
"MyApp"
}
],
and there are no other output artifacts, then the input artifact of a following action must be:
"inputArtifacts": [
{
"MyApp"
}
],
This is true for all actions, whether they are in the same stage or in following stages, but the action
that consumes the input artifact does not have to be the next action in strict sequence after the action
that provides the output artifact. Actions in parallel can declare different output artifact bundles,
which are in turn consumed by different following actions.
The following illustration provides an example of input and output artifacts in actions in a pipeline:
• Output artifact names must be unique within a pipeline. For example, a pipeline can include one action
that has an output artifact named "MyApp" and another action that has an output artifact named
"MyBuiltApp". However, a pipeline cannot include two actions that both have an output artifact
named "MyApp".
• Cross-region actions use the Region field to designate the AWS Region where the action is to be
created. The AWS resources created for this action must be created in the same AWS Region specified in
the Region field. You cannot create cross-region actions for the following action types:
• Source actions
• Actions by third-party providers
• Actions by custom providers
• If an action contains a parameter whose value is secret, such as the OAuth token for a GitHub source
action, the value of that parameter is masked in the JSON by a series of four asterisks (****). The actual
value is stored, and as long as you do not edit that value, or change the name of the action or the
name of the stage where that action runs, you do not have to supply that value when editing the
JSON using the AWS CLI or CodePipeline API. However, if you do change the name of the action, or the
name of the stage in which the action runs, the value of any secret parameters will be lost. You must
manually type the values for any secret parameters in the JSON, or the action will fail the next time
the pipeline runs.
• For all currently supported action types, the only valid version string is "1".
• For all currently supported action types, the only valid owner string is "AWS", "ThirdParty", or
"Custom". For more information, see the CodePipeline API Reference.
• The default runOrder value for an action is 1. The value must be a positive integer (natural number).
You cannot use fractions, decimals, negative numbers, or zero. To specify a serial sequence of actions,
use the smallest number for the first action and larger numbers for each of the rest of the actions in
sequence. To specify parallel actions, use the same integer for each action you want to run in parallel.
For example, if you want three actions to run in sequence in a stage, you would give the first action
the runOrder value of 1, the second action the runOrder value of 2, and the third the runOrder
value of 3. However, if you want the second and third actions to run in parallel, you would give the first
action the runOrder value of 1 and both the second and third actions the runOrder value of 2.
Note
The numbering of serial actions does not have to be in strict sequence. For example, if you
have three actions in a sequence and decide to remove the second action, you do not need to
renumber the runOrder value of the third action. Because the runOrder value of that action
(3) is higher than the runOrder value of the first action (1), it will run serially after the first
action in the stage.
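For example, a sketch of a stage in which the second and third actions run in parallel after the first
(the action names are hypothetical and other required fields are elided):

"actions": [
{ "name": "FirstAction", "runOrder": 1, ... },
{ "name": "SecondAction", "runOrder": 2, ... },
{ "name": "ThirdAction", "runOrder": 2, ... }
]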
• When you use an Amazon S3 bucket as a deployment location, you also specify an object key. An
object key can be a file name (object) or a combination of a prefix (folder path) and file name. You
can use variables to specify the location name you want the pipeline to use. Amazon S3 deployment
actions support the use of the following variables in Amazon S3 object keys:
• {datetime}: The timestamp of the pipeline execution, in the format yyyy-MM-dd_HH-mm-ss.
Example: js-application/2019-01-10_07-39-57.zip
• {uuid}: A unique ID for the pipeline execution. Example: js-application/54a60075-
b96a-4bf3-9013-db3a9EXAMPLE.zip
• Depending on the action type, an action can have a limited number of input and output artifacts.
• Valid provider types for an action category depend on the category. For example, for a source action
type, a valid provider type is S3, GitHub, CodeCommit, or Amazon ECR. This example shows the
structure for a source action with an S3 provider:
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "S3"},
• Every action must have a valid action configuration, which depends on the provider type for that
action. The following table lists the required action configuration elements for each valid provider
type:
Amazon S3 (source and deploy):
• KMSEncryptionKeyARN² (Optional)
• CannedACL³ (Optional)
• CacheControl⁴ (Optional)
CodeCommit (source):
• RepositoryName (Required)
• BranchName (Required)
• PollForSourceChanges¹ (Optional)
GitHub (source):
• Repo (Required)
• Branch (Required)
• OAuthToken (Required)
• PollForSourceChanges¹ (Optional)
Amazon ECR (source):
• RepositoryName (Required)
CodeDeploy (deploy):
• DeploymentGroupName (Required)
AWS CloudFormation (deploy):
• ActionMode⁵ (Required)
• StackName (Required)
CodeBuild (build or test):
• ProjectName (Required)
• PrimarySource⁶ (Optional)
AWS Device Farm (test):
• ProjectId (Required)
• App (Required)
• Test (Required)
• DevicePoolArn (Required)
• TestType (Required)
• AppiumVersion (Required)
• RadioBluetoothEnabled (Default=true)
• RecordVideo (Default=true)
• RadioWifiEnabled (Default=true)
• RadioNfcEnabled (Default=true)
• RadioGpsEnabled (Default=true)
AWS Lambda (invoke):
• UserParameters (Optional)
Amazon ECS (deploy):
• ServiceName (Required)
• FileName (Optional)
Amazon ECS Blue/Green (deploy):
• TaskDefinitionTemplateArtifact (Required)
• AppSpecTemplateArtifact (Required)
• AppSpecTemplatePath (Required)
• TaskDefinitionTemplatePath (Optional)
• Image1ArtifactName (Optional)
• Image1ContainerName (Optional)
AWS Service Catalog (deploy):
• ProductType (Required)
• ProductId (Required)
• ProductVersionDescription (Optional)
Alexa Skills Kit (deploy):
• RefreshToken (Required)
• SkillId (Required)
Manual approval:
• NotificationArn (Optional)
¹In many cases, the PollForSourceChanges parameter defaults to true when you create a
pipeline. When you add event-based change detection, you must add the parameter to your output
and set it to false to disable polling. Otherwise, your pipeline starts twice for a single source change.
For more information, see Default Settings for the PollForSourceChanges Parameter (p. 403).
For pipelines or source actions created or updated in the console after the following dates, the
PollForSourceChanges parameter defaults to false:
• CodeCommit pipelines created or updated in the console after October 11, 2017.
• Amazon S3 pipelines created or updated in the console after March 22, 2018.
• GitHub pipelines created or updated in the console after May 1, 2018.
²The KMSEncryptionKeyARN parameter encrypts uploaded artifacts with the provided KMS key.
For an AWS KMS key, you can use the key ID, the key ARN, or the alias ARN.
Note
Aliases are recognized only in the account that created the customer master key (CMK). For
cross-account actions, you can only use the key ID or key ARN to identify the key.
³The CannedACL parameter applies the specified canned ACL to objects deployed to Amazon S3.
This overwrites any existing ACL that was applied to the object.
⁴The CacheControl parameter controls caching behavior for requests/responses for objects in the
bucket. For a list of valid values, see the Cache-Control header field for HTTP operations.
5
The third and fourth required properties for AWS CloudFormation depend on the selected
ActionMode property. For a list of all required and optional properties, see Configuration Properties
Reference in the AWS CloudFormation User Guide. For template snippets with examples for
configuration properties, see Using Parameter Override Functions with CodePipeline Pipelines in the
AWS CloudFormation User Guide.
6
For a build project with multiple input sources, PrimarySource designates the directory where
CodeBuild looks for and runs your buildspec file. To learn more, see CodePipeline Integration with
CodeBuild and Multiple Input Sources and Output Artifacts Sample.
Topics
• Default Settings for the PollForSourceChanges Parameter (p. 403)
• Configuration Details by Provider Type (p. 405)
To use the recommended change detection resources with your pipeline, you must do the following:
• Add the PollForSourceChanges parameter to the JSON file or AWS CloudFormation template.
• Create change detection resources (CloudWatch Events rule or webhook, as applicable).
• Set the PollForSourceChanges parameter to false.
Note
If you create a CloudWatch Events rule or webhook, you must set the parameter to false to
avoid triggering the pipeline more than once.
Note
The PollForSourceChanges parameter is not applicable for Amazon ECR source actions.
"PollForSourceChanges": "true",
³ For details about the change detection resources that apply to each source
provider, see Change Detection Methods (p. 138).
The following example shows a valid action configuration for a source action that uses Amazon S3:
"configuration": {
"S3Bucket": "awscodepipeline-demobucket-example-date",
"S3ObjectKey": "CodePipelineDemoApplication.zip",
"PollForSourceChanges": "false"
}
The following example shows the action configuration returned for a source action that uses GitHub:
"configuration": {
"Owner": "MyGitHubAccountName",
"Repo": "MyGitHubRepositoryName",
"PollForSourceChanges": "false",
"Branch": "master",
"OAuthToken": "****"
},
The following example shows a valid configuration for a build action that uses CodeBuild:
"configuration": {
"ProjectName": "Name" }
The following example shows a valid configuration for a deploy action that uses AWS CloudFormation:
"configuration": {
"StackName": "Name",
"ActionMode": "Name",
"RoleArn": "ARN",
"TemplateConfiguration": "Name",
"TemplatePath": "Path" }
The following example shows the action configuration returned for a source action that uses Amazon
ECR:
"configuration": {
"ImageTag": "latest",
"RepositoryName": "my-image-repo"
},
The following example shows a valid configuration for a deploy action that uses Amazon ECS:
"configuration": {
"ClusterName": "my-ecs-cluster",
"ServiceName": "sample-app-service",
"FileName": "imagedefinitions.json",
}
The following example shows a valid configuration for a test action that uses AWS Device Farm:
"configuration": {
"RecordAppPerformanceData": "true",
"AppType": "Android",
"ProjectId": "Project_ID",
"App": "app-release.apk",
"RadioBluetoothEnabled": "true",
"RecordVideo": "true",
"RadioWifiEnabled": "true",
"RadioNfcEnabled": "true",
"RadioGpsEnabled": "true",
"Test": "tests.zip",
"DevicePoolArn": "ARN",
"TestType": "Calabash",
"AppiumVersion": "1.7.2"
}
The following example shows a valid configuration for a deploy action that uses AWS Service Catalog, for
a pipeline that was created in the console without a separate configuration file:
"configuration": {
"TemplateFilePath": "S3_template.json",
"ProductVersionName": "devops S3 v2",
"ProductType": "CLOUD_FORMATION_TEMPLATE",
"ProductVersionDescription": "Product version description",
"ProductId": "prod-example123456"
}
The following example shows a valid configuration for a deploy action that uses AWS Service Catalog, for
a pipeline that was created in the console with a separate sample_config.json configuration file:
"configuration": {
"ConfigurationFilePath": "sample_config.json",
"ProductId": "prod-example123456"
}
The following example shows a valid configuration for a deploy action that uses Alexa Skills Kit:
"configuration": {
"ClientId": "amzn1.application-oa2-client.aadEXAMPLE",
"ClientSecret": "****",
"RefreshToken": "****",
"SkillId": "amzn1.ask.skill.22649d8f-0451-4b4b-9ed9-bfb6cEXAMPLE"
}
The following example shows a valid configuration for a deploy action that uses Amazon S3:
"configuration": {
"BucketName": "website-bucket",
"Extract": "true",
"ObjectKey": "MyWebsite"
}
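The example above shows only the basic parameters. As a sketch only, assuming you also set the optional parameters from the configuration table (the key ARN and values shown here are illustrative, not from this guide), the configuration might look like the following:
"configuration": {
"BucketName": "website-bucket",
"Extract": "true",
"ObjectKey": "MyWebsite",
"KMSEncryptionKeyARN": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-EXAMPLE",
"CannedACL": "public-read",
"CacheControl": "max-age=86400"
}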
The following example shows a valid configuration for an Amazon ECS and CodeDeploy Blue/Green
deployment:
"configuration": {
"ApplicationName": "codedeploy-ecs-application",
"DeploymentGroupName": "ecs-codedeploy-deplgroup",
"Image1ArtifactName": "MyImage",
"TaskDefinitionTemplateArtifact": "SourceArtifact",
"Image1ContainerName": "IMAGE1_NAME",
"": "taskdef.json",
"AppSpecTemplateArtifact": "SourceArtifact",
"AppSpecTemplatePath": "appspec.yaml",
"TaskDefinitionTemplatePath": "pathname"
}
"configuration": {
"CustomData": "Comments on the manual approval",
"ExternalEntityLink": "http://my-url.com",
"NotificationArn": "arn:aws:sns:us-west-2:12345EXAMPLE:Notification"
}
AWS CodePipeline job workers for container actions, such as an Amazon ECR source action or Amazon
ECS deploy actions, use definitions files to map the image URI and container name to the task definition.
Each definitions file is a JSON-formatted file used by the action provider as follows:
• Amazon ECS standard deployment actions require an imagedefinitions.json file as an input artifact to the deploy action.
• Amazon ECS blue/green deployment actions require an imageDetail.json file as an input artifact to the deploy action.
Topics
• imagedefinitions.json File for Amazon ECS Standard Deployment Actions (p. 408)
• imageDetail.json File for Amazon ECS Blue/Green Deployment Actions (p. 410)
• The maximum file size limit for the image definitions file is 100 KB.
• You must create the file as a source or build artifact so that it is an input artifact for the deploy action.
The imagedefinitions.json file provides the container name and image URI. It must be constructed
with the following set of key-value pairs.
Key Value
name container_name
imageUri image_URI
Here is the JSON structure, where the container name is sample-app and the image URI points to the
ecs-repo repository with the latest tag:
[
{
"name": "sample-app",
"imageUri": "11111EXAMPLE.dkr.ecr.us-west-2.amazonaws.com/ecs-repo:latest"
}
]
You can also construct the file to list multiple container-image pairs.
JSON structure:
[
{
"name": "simple-app",
"imageUri": "httpd:2.4"
},
{
"name": "simple-app-1",
"imageUri": "mysql"
},
{
"name": "simple-app-2",
"imageUri": "java1.8"
}
]
Before you create your pipeline, use the following steps to set up the imagedefinitions.json file.
1. As part of planning the container-based application deployment for your pipeline, plan the source
stage and the build stage, if applicable.
2. Choose one of the following:
a. If your pipeline has skipped the build stage, you must manually create the JSON file and upload
it to your source repository so the source action can provide the artifact. Create the file using a
text editor, and name the file or use the default imagedefinitions.json file name. Push the
image definitions file to your source repository.
Note
If your source repository is an Amazon S3 bucket, remember to zip the JSON file.
b. If your pipeline has a build stage, add a command to your build spec file that creates the image
definitions file during the build phase. The following example uses the printf command to
create an imagedefinitions.json file. List this command in the post_build section of the
buildspec.yml file:
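A minimal sketch of the buildspec.yml fragment, reusing the container name and image URI from the earlier example (your values will differ):
post_build:
  commands:
    # Write the container name and image URI into the image definitions file
    - printf '[{"name":"sample-app","imageUri":"11111EXAMPLE.dkr.ecr.us-west-2.amazonaws.com/ecs-repo:latest"}]' > imagedefinitions.json
artifacts:
  files:
    # Export the file so it is available to the deploy action
    - imagedefinitions.json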
You must include the image definitions file as an output artifact in the buildspec.yml file.
3. When you create your pipeline in the console, on the Deploy page of the Create Pipeline wizard, in
Image Filename, enter the image definitions file name.
For a step-by-step tutorial for creating a pipeline that uses Amazon ECS as the deployment provider, see
Tutorial: Continuous Deployment with CodePipeline.
You must create the imageDetail.json file as a source or build artifact so that it is an input artifact
for the deploy action.
Note
Amazon ECR source actions automatically generate an imageDetail.json file as an input
artifact to the next action.
Because the Amazon ECR source action creates this file, pipelines with an Amazon ECR source
action do not need to provide an imageDetail.json file.
The imageDetail.json file provides the image URI. It must be constructed with the following key-
value pair.
Key Value
ImageURI image_URI
imageDetail.json
{
"ImageURI": "ACCOUNTID.dkr.ecr.us-west-2.amazonaws.com/dk-image-repo@sha256:example3"
}
An imageDetail.json file is generated automatically by the Amazon ECR source action each time
a change is pushed to the image repository. The imageDetail.json generated by Amazon ECR
source actions is provided as an output artifact from the source action to the next action in the
pipeline.
Here is the JSON structure, where the repository name is dk-image-repo and the image tag is
latest:
{
"ImageSizeInBytes": "44728918",
"ImageDigest":
"sha256:EXAMPLE11223344556677889900bfea42ea2d3b8a1ee8329ba7e68694950afd3",
"Version": "1.0",
"ImagePushedAt": "Mon Jan 21 20:04:00 UTC 2019",
"RegistryId": "EXAMPLE12233",
"RepositoryName": "dk-image-repo",
"ImageURI": "ACCOUNTID.dkr.ecr.us-west-2.amazonaws.com/dk-image-
repo@sha256:example3",
"ImageTags": [
"latest"
]
}
During the blue/green deployment, the image URI from the imageDetail.json file is mapped to the
Amazon ECS task definition.
Before you create your pipeline, use the following steps to set up the imageDetail.json file.
1. As part of planning the container-based application blue/green deployment for your pipeline, plan
the source stage and the build stage, if applicable.
2. Choose one of the following:
a. If your pipeline has skipped the build stage, you must manually create the JSON file and upload
it to your source repository, such as CodeCommit, so the source action can provide the artifact.
Create the file using a text editor, and name the file or use the default imageDetail.json file
name. Push the imageDetail.json file to your source repository.
Note
If your source repository is an Amazon S3 bucket, remember to zip the JSON file.
b. If your pipeline has a build stage, perform the following:
i. Add a command to your build spec file that creates the imageDetail.json file during the
build phase. The following example uses the printf command to create an imageDetail.json
file. List this command in the post_build section of the buildspec.yml file:
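A minimal sketch, assuming your build sets REPOSITORY_URI and IMAGE_TAG environment variables in an earlier phase (these variable names are illustrative):
post_build:
  commands:
    # Write the URI of the image pushed by this build into imageDetail.json
    - printf '{"ImageURI":"%s"}' "$REPOSITORY_URI:$IMAGE_TAG" > imageDetail.json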
You must include the imageDetail.json file as an output artifact in the buildspec.yml
file.
ii. Add the imageDetail.json as an artifact file in the buildspec.yml file.
artifacts:
files:
- imageDetail.json
Limits in AWS CodePipeline
AWS Regions where you can create a pipeline:
US East (Ohio) (us-east-2)
EU (Ireland) (eu-west-1)
EU (London) (eu-west-2)
EU (Paris) (eu-west-3)
EU (Frankfurt) (eu-central-1)
Maximum size of input artifacts for AWS CloudFormation actions: If you are using AWS CloudFormation to deploy Lambda functions, the Lambda code archive size should not exceed 256 MB. See AWS Lambda Limits.
Characters allowed in pipeline name: Pipeline names cannot exceed 100 characters. Allowed characters include uppercase letters A through Z, lowercase letters a through z, numbers 0 through 9, and the special characters . (period), @ (at sign), - (minus sign), and _ (underscore).
Characters allowed in stage name: Stage names cannot exceed 100 characters. Allowed characters include uppercase letters A through Z, lowercase letters a through z, numbers 0 through 9, and the special characters . (period), @ (at sign), - (minus sign), and _ (underscore).
Characters allowed in action name: Action names cannot exceed 100 characters. Allowed characters include uppercase letters A through Z, lowercase letters a through z, numbers 0 through 9, and the special characters . (period), @ (at sign), - (minus sign), and _ (underscore).
Characters allowed in action types: Action type names cannot exceed 25 characters. Allowed characters include uppercase letters A through Z, lowercase letters a through z, numbers 0 through 9, and the special characters . (period), @ (at sign), - (minus sign), and _ (underscore).
Characters allowed in partner action names: Partner action names must follow the same naming conventions and restrictions as other action names in CodePipeline. Specifically, they cannot exceed 100 characters. Allowed characters include uppercase letters A through Z, lowercase letters a through z, numbers 0 through 9, and the special characters . (period), @ (at sign), - (minus sign), and _ (underscore).
Maximum size of the JSON object that can be stored in the ParameterOverrides property: For a CodePipeline deploy action with AWS CloudFormation as the provider, the ParameterOverrides property is used to store a JSON object that specifies values for the AWS CloudFormation template configuration file. There is a maximum size limit of 1 kilobyte for the JSON object that can be stored in the ParameterOverrides property.
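As an illustrative sketch (the stack name, template path, and parameter values are hypothetical), ParameterOverrides holds a serialized JSON object inside the AWS CloudFormation action configuration and must stay under the 1 kilobyte limit:
"configuration": {
"ActionMode": "CREATE_UPDATE",
"StackName": "MyStack",
"RoleArn": "ARN",
"TemplatePath": "BuildOutput::template.yaml",
"ParameterOverrides": "{\"InstanceType\": \"t2.micro\"}"
}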
Tags: A key can have only one value, but many keys can have the same value.
Document History

Specify canned ACLs and cache control for Amazon S3 deployment actions (p. 417) (June 27, 2019)
You can now specify canned ACL and cache control options when you create an Amazon S3 deployment action in CodePipeline. The following topics have been updated: Create a Pipeline (Console), CodePipeline Pipeline Structure Reference, and Tutorial: Create a Pipeline That Uses Amazon S3 as a Deployment Provider.

You can now add tags to resources in AWS CodePipeline (p. 417) (May 15, 2019)
You can now use tagging to track and manage AWS CodePipeline resources such as pipelines, custom actions, and webhooks. The following new topics have been added: Tagging Resources, Using Tags to Control Access to CodePipeline Resources, Tag a Pipeline in CodePipeline, Tag a Custom Action in CodePipeline, and Tag a Webhook in CodePipeline. The following topics have been updated to show how to use the CLI to tag resources: Create a Pipeline (CLI), Create a Custom Action (CLI), and Create a Webhook for a GitHub Source.

You can now view action execution history in AWS CodePipeline (p. 417) (March 20, 2019)
You can now view details about past executions of all actions in a pipeline. These details include start and end times, duration, action execution ID, status, input and output artifact location details, and external resource details. The View Pipeline Details and History topic has been updated to reflect this support.

AWS CodePipeline now supports publishing applications to the AWS Serverless Application Repository (p. 417) (March 8, 2019)
You can now create a pipeline in CodePipeline that publishes your serverless application to the AWS Serverless Application Repository. A new tutorial, Tutorial: Publish Applications to the AWS Serverless Application Repository, provides steps for creating and configuring a pipeline to continuously deliver your serverless application to the AWS Serverless Application Repository.

AWS CodePipeline now supports cross-region actions in the console (p. 417) (February 14, 2019)
You can now manage cross-region actions in the AWS CodePipeline console. Add a Cross-region Action has been updated with the steps to add, edit, or delete an action that is in a different AWS Region from your pipeline. The Create a Pipeline, Edit a Pipeline, and CodePipeline Pipeline Structure Reference topics have been updated.

AWS CodePipeline now supports Amazon S3 Deployments (p. 417) (January 16, 2019)
You can now create a pipeline in CodePipeline that uses Amazon S3 as the deployment action provider. A new tutorial, Tutorial: Create a Pipeline That Uses Amazon S3 as a Deployment Provider, provides steps for deploying sample files to your Amazon S3 bucket with CodePipeline. The CodePipeline Pipeline Structure Reference topic has also been updated.

AWS CodePipeline now supports Alexa Skills Kit Deployments (p. 417) (December 19, 2018)
You can now use CodePipeline and Alexa Skills Kit for continuous deployment of Alexa skills. A new tutorial, Tutorial: Create a Pipeline that Deploys an Amazon Alexa Skill, contains steps for creating credentials that allow AWS CodePipeline to connect to your Alexa Skills Kit developer account and then creating a pipeline that deploys a sample skill. The CodePipeline Pipeline Structure Reference topic has been updated.

AWS CodePipeline now supports Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink (p. 417) (December 6, 2018)
You can now connect directly to AWS CodePipeline through a private endpoint in your VPC, keeping all traffic inside your VPC and the AWS network. For more information, see Use CodePipeline with Amazon Virtual Private Cloud.

AWS CodePipeline now supports Amazon ECR source actions and ECS-to-CodeDeploy deployment actions (p. 417) (November 27, 2018)
You can now use CodePipeline and CodeDeploy with Amazon ECR and Amazon ECS for continuous deployment of container-based applications. A new tutorial, Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment, contains steps for using the console to create a pipeline that deploys container applications stored in an image repository to an Amazon ECS cluster with CodeDeploy traffic routing. The Create a Pipeline and CodePipeline Pipeline Structure Reference topics have been updated.

AWS CodePipeline now supports cross-region actions in a pipeline (p. 417) (November 12, 2018)
A new topic, Add a Cross-region Action, contains steps for using the AWS CLI or AWS CloudFormation to add an action that is in a different region from your pipeline. The Create a Pipeline, Edit a Pipeline, and CodePipeline Pipeline Structure Reference topics have been updated.

AWS CodePipeline now integrates with AWS Service Catalog (p. 417) (October 16, 2018)
You can now add AWS Service Catalog as a deployment action to your pipeline. This allows you to set up a pipeline to publish product updates to AWS Service Catalog when you make a change in your source repository. The Integrations topic has been updated to reflect this support for AWS Service Catalog. Two AWS Service Catalog tutorials have been added to the AWS CodePipeline Tutorials section.

AWS CodePipeline now integrates with AWS Device Farm (p. 417) (July 19, 2018)
You can now add AWS Device Farm as a test action to your pipeline. This allows you to set up a pipeline to test mobile applications. The Integrations topic has been updated to reflect this support for AWS Device Farm. Two AWS Device Farm tutorials have been added to the AWS CodePipeline Tutorials section.

AWS CodePipeline User Guide update notifications now available through RSS (p. 417) (June 30, 2018)
The HTML version of the CodePipeline User Guide now supports an RSS feed of updates that are documented in the Documentation Update History page. The RSS feed includes updates made on June 30, 2018 and later. Previously announced updates are still available in the Documentation Update History page. Use the RSS button in the top menu panel to subscribe to the feed.
Earlier Updates
The following entries describe important changes in each release of the CodePipeline User Guide on
June 30, 2018 and earlier.
Use webhooks to detect source changes in GitHub pipelines (May 1, 2018): When you create or edit a pipeline in the console, CodePipeline now creates a webhook that detects changes to your GitHub source repository and then starts your pipeline. For information about migrating your pipeline, see Configure Your GitHub Pipelines to Use Webhooks for Change Detection. For more information, see Start a Pipeline Execution in CodePipeline.

Updated topics (March 22, 2018): When you create or edit a pipeline in the console, CodePipeline now creates an Amazon CloudWatch Events rule and an AWS CloudTrail trail that detect changes to your Amazon S3 source bucket and then start your pipeline. For information about migrating your pipeline, see Configure Your Pipelines to Use Amazon CloudWatch Events for Change Detection (Amazon S3 Source) (p. 165).

Updated topic (February 21, 2018): CodePipeline is now available in EU (Paris). The Limits in AWS CodePipeline (p. 412) topic has been updated.

Updated topics (December 12, 2017): You can now use CodePipeline and Amazon ECS for continuous deployment of container-based applications. When you create a pipeline, you can select Amazon ECS as a deployment provider. A change to code in your source control repository triggers your pipeline to build a new Docker image, push it to your container registry, and then deploy the updated image to an Amazon ECS service.

Updated topics (October 11, 2017): When you create or edit a pipeline in the console, CodePipeline now creates an Amazon CloudWatch Events rule that detects changes to your CodeCommit repository and then automatically starts your pipeline. For information about migrating your existing pipeline, see Configure Your Pipelines to Use Amazon CloudWatch Events for Change Detection (CodeCommit Source) (p. 151).

New and updated topics (September 8, 2017): CodePipeline now provides built-in support for pipeline state change notifications through Amazon CloudWatch Events and Amazon Simple Notification Service (Amazon SNS). A new tutorial, Tutorial: Set Up a CloudWatch Events Rule to Receive Email Notifications for Pipeline State Changes (p. 61), has been added. For more information, see Detect and React to Changes in Pipeline State with Amazon CloudWatch Events (p. 334).

New and updated topics (September 5, 2017): You can now add CodePipeline as a target for Amazon CloudWatch Events actions. Amazon CloudWatch Events rules can be set up to detect source changes so that the pipeline starts as soon as those changes occur, or they can be set up to run scheduled pipeline executions. Information has been added for the PollForSourceChanges source action configuration option. For more information, see Start a Pipeline Execution in CodePipeline (p. 137).

New regions (July 27, 2017): CodePipeline is now available in Asia Pacific (Seoul) and Asia Pacific (Mumbai). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New regions (June 29, 2017): CodePipeline is now available in US West (N. California), Canada (Central), and EU (London). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

Updated topics (June 22, 2017): You can now view details about past executions of a pipeline, not just the most recent execution. These details include start and end times, duration, and execution ID. Details are available for a maximum of 100 pipeline executions during the most recent 12-month period. The topics View Pipeline Details and History in CodePipeline (p. 202), CodePipeline Permissions Reference (p. 380), and Limits in AWS CodePipeline (p. 412) have been updated to reflect this support.

Updated topic (May 18, 2017): Nouvola has been added to the list of available actions in Test Action Integrations (p. 16).

Updated topics (April 7, 2017): In the AWS CodePipeline wizard, the page Step 4: Beta has been renamed Step 4: Deploy. The default name of the stage created by this step has been changed from "Beta" to "Staging". Numerous topics and screenshots have been updated to reflect these changes.

Updated topics (March 8, 2017): You can now add AWS CodeBuild as a test action to any stage of a pipeline. This allows you to more easily use AWS CodeBuild to run unit tests against your code. Prior to this release, you could use AWS CodeBuild to run unit tests only as part of a build action. A build action requires a build output artifact, which unit tests typically do not produce.

New and updated topics (February 8, 2017): The table of contents has been reorganized to include sections for pipelines, actions, and stage transitions. A new section has been added for CodePipeline tutorials. For better usability, Product and Service Integrations with CodePipeline (p. 12) has been divided into shorter topics.

New region (December 14, 2016): CodePipeline is now available in Asia Pacific (Tokyo). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New region (December 7, 2016): CodePipeline is now available in South America (São Paulo). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

Updated topics (December 1, 2016): You can now add AWS CodeBuild as a build action to any stage of a pipeline. AWS CodeBuild is a fully managed build service in the cloud that compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. You can use an existing build project or create one in the CodePipeline console. The output of the build project can then be deployed as part of a pipeline.

New region (November 16, 2016): CodePipeline is now available in EU (Frankfurt). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New region (October 26, 2016): CodePipeline is now available in the Asia Pacific (Sydney) Region. The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New region (October 20, 2016): CodePipeline is now available in Asia Pacific (Singapore). The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New region (October 17, 2016): CodePipeline is now available in the US East (Ohio) Region. The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

Updated topic (September 22, 2016): Create a Pipeline in CodePipeline (p. 187) has been updated to reflect support for displaying version identifiers of custom actions in the Source provider and Build provider lists.

Updated topic (September 14, 2016): The Manage Approval Actions in CodePipeline (p. 312) section has been updated to reflect an enhancement that lets Approval action reviewers open the Approve or reject the revision form directly from an email notification.

New and updated topics (September 08, 2016): A new topic, View Pipeline Execution Source Revisions (Console) (p. 205), describes how to view details about code changes currently flowing through your software release pipeline. Quick access to this information can be useful when reviewing manual approval actions or troubleshooting failures in your pipeline.

New and updated topics (July 06, 2016): A new section, Manage Approval Actions in CodePipeline (p. 312), provides information about configuring and using manual approval actions in pipelines. Topics in this section provide conceptual information about the approval process; instructions for setting up required IAM permissions, creating approval actions, and approving or rejecting approval actions; and samples of the JSON data generated when an approval action is reached in a pipeline.

New region (June 23, 2016): CodePipeline is now available in the EU (Ireland) Region. The Limits in AWS CodePipeline (p. 412) topic and Regions and Endpoints topic have been updated.

New topic (June 22, 2016): A new topic, Retry a Failed Action in CodePipeline (p. 310), has been added to describe how to retry a failed action or a group of parallel failed actions in a stage.

New and updated topics (April 18, 2016): A new topic, Tutorial: Create a Simple Pipeline (CodeCommit Repository) (p. 42), has been added. This topic provides a sample walkthrough showing how to use a CodeCommit repository and branch as the source location for a source action in a pipeline. Several other topics have been updated to reflect this integration with CodeCommit, including Authentication, Access Control, and Security Configuration for AWS CodePipeline (p. 352), Product and Service Integrations with CodePipeline (p. 12), Tutorial: Create a Four-Stage Pipeline (p. 54), and Troubleshooting CodePipeline (p. 345).

New topic (January 27, 2016): A new topic, Invoke an AWS Lambda Function in a Pipeline in CodePipeline (p. 294), has been added. This topic contains sample AWS Lambda functions and steps for adding Lambda functions to pipelines.

Updated topic (January 22, 2016): A new section, Resource-Based Policies (p. 360), has been added to Authentication, Access Control, and Security Configuration for AWS CodePipeline (p. 352).

New topic (December 17, 2015): A new topic, Product and Service Integrations with CodePipeline (p. 12), has been added. Information about integrations with partners and other AWS services has been moved to this topic. Links to blogs and videos have also been added.

Updated topic (November 17, 2015): Details of integration with Solano CI have been added to Product and Service Integrations with CodePipeline (p. 12).

Updated topic (November 9, 2015): The CodePipeline Plugin for Jenkins is now available through the Jenkins Plugin Manager as part of the library of plugins for Jenkins. The steps for installing the plugin have been updated in Tutorial: Create a Four-Stage Pipeline (p. 54).

New region (October 22, 2015): CodePipeline is now available in the US West (Oregon) Region. The Limits in AWS CodePipeline (p. 412) topic has been updated. Links have been added to Regions and Endpoints.

New topic (August 25, 2015): Two new topics, Configure Server-Side Encryption for Artifacts Stored in Amazon S3 for CodePipeline (p. 384) and Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account (p. 215), have been added. A new section, Example 8: Use AWS Resources Associated with Another Account in a Pipeline (p. 377), has been added to Authentication, Access Control, and Security Configuration for AWS CodePipeline (p. 352).

Updated topic (August 17, 2015): The Create and Add a Custom Action in CodePipeline (p. 282) topic has been updated to reflect changes in the structure, including inputArtifactDetails and outputArtifactDetails.

Updated topic (August 11, 2015): The Troubleshooting CodePipeline (p. 345) topic has been updated with revised steps for troubleshooting problems with the service role and Elastic Beanstalk.

New topic (July 24, 2015): A Troubleshooting CodePipeline (p. 345) topic has been added. Updated steps have been added for IAM roles and Jenkins in Tutorial: Create a Four-Stage Pipeline (p. 54).

Topic update (July 22, 2015): Updated steps have been added for downloading the sample files in Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26) and Tutorial: Create a Four-Stage Pipeline (p. 54).

Topic update (July 17, 2015): A temporary workaround for download issues with the sample files was added in Tutorial: Create a Simple Pipeline (Amazon S3 Bucket) (p. 26).

Topic update (July 15, 2015): A link was added in Limits in AWS CodePipeline (p. 412) to point to information about which limits can be changed.

Topic update (July 10, 2015): The managed policies section in Authentication, Access Control, and Security Configuration for AWS CodePipeline (p. 352) was updated.

Initial Public Release (July 9, 2015): This is the initial public release of the CodePipeline User Guide.
AWS Glossary
For the latest AWS terminology, see the AWS Glossary in the AWS General Reference.