Amazon ECS Developer Guide
Developer Guide
API Version 2014-11-13
Amazon Elastic Container Service Developer Guide
Table of Contents
What is Amazon ECS? ......................................................................................................................... 1
Features of Amazon ECS ............................................................................................................. 1
Containers and Images ....................................................................................................... 3
Task Definitions ................................................................................................................. 3
Tasks and Scheduling ......................................................................................................... 4
Clusters ............................................................................................................................. 4
Container Agent ................................................................................................................. 4
How to Get Started with Amazon ECS .......................................................................................... 5
Related Services ......................................................................................................................... 5
Accessing Amazon ECS ................................................................................................................ 6
Setting Up ........................................................................................................................................ 8
Sign Up for AWS ........................................................................................................................ 8
Create an IAM User .................................................................................................................... 8
Create an IAM Role for your Container Instances and Services ........................................................ 10
Create a Key Pair ..................................................................................................................... 10
Create a Virtual Private Cloud .................................................................................................... 11
Create a Security Group ............................................................................................................ 12
Install the AWS CLI ................................................................................................................... 13
Docker Basics ................................................................................................................................... 14
Installing Docker ...................................................................................................................... 14
Create a Docker Image .............................................................................................................. 15
(Optional) Push your image to Amazon Elastic Container Registry ................................................... 16
Next Steps ............................................................................................................................... 17
Getting Started ................................................................................................................................ 19
Cleaning Up ..................................................................................................................................... 22
Scale Down Services ................................................................................................................. 22
Delete Services ........................................................................................................................ 22
Deregister Container Instances ................................................................................................... 22
Delete a Cluster ....................................................................................................................... 23
Delete the AWS CloudFormation Stack ........................................................................................ 23
Clusters ........................................................................................................................................... 25
Cluster Concepts ...................................................................................................................... 25
Creating a Cluster .................................................................................................................... 25
Scaling a Cluster ...................................................................................................................... 27
Deleting a Cluster .................................................................................................................... 29
Container Instances .......................................................................................................................... 30
Container Instance Concepts ...................................................................................................... 30
Container Instance Lifecycle ....................................................................................................... 31
Check the Instance Role for Your Account ................................................................................... 31
Container Instance AMIs ............................................................................................................ 32
Amazon ECS-Optimized AMI .............................................................................................. 32
Subscribing to Amazon ECS–Optimized AMI Update Notifications ................................................... 40
Amazon SNS Message Format ............................................................................................ 42
Launching a Container Instance .................................................................................................. 43
Bootstrap Container Instances .................................................................................................... 46
Amazon ECS Container Agent ............................................................................................ 46
Docker Daemon ............................................................................................................... 47
cloud-init-per Utility ........................................................................................................ 47
MIME Multi Part Archive .................................................................................................... 48
Example User Data Scripts ................................................................................................. 49
Connect to Your Container Instance ............................................................................................ 52
CloudWatch Logs ..................................................................................................................... 53
CloudWatch Logs IAM Policy .............................................................................................. 53
Installing the CloudWatch Logs Agent ................................................................................. 54
Step 6: Add Content to the Amazon EFS File System .................................................................. 349
Step 7: Run a Task and View the Results ................................................................................... 350
Tutorial: Continuous Deployment with AWS CodePipeline .................................................................... 351
Prerequisites .......................................................................................................................... 351
Step 1: Add a Build Specification File to Your Source Repository ................................................... 351
Step 2: Creating Your Continuous Deployment Pipeline ............................................................... 353
Step 3: Add Amazon ECR Permissions to the AWS CodeBuild Role ................................................. 354
Step 4: Test Your Pipeline ........................................................................................................ 354
Service Limits ................................................................................................................................. 356
CloudTrail Logging .......................................................................................................................... 357
Amazon ECS Information in CloudTrail ...................................................................................... 357
Understanding Amazon ECS Log File Entries .............................................................................. 357
Troubleshooting ............................................................................................................................. 358
Invalid CPU or memory value specified ..................................................................................... 358
Checking Stopped Tasks for Errors ............................................................................................ 358
Service Event Messages ........................................................................................................... 360
Service Event Messages ................................................................................................... 361
CannotCreateContainerError: API error (500): devmapper ........................................ 363
Troubleshooting Service Load Balancers .................................................................................... 364
Enabling Docker Debug Output ................................................................................................ 365
Amazon ECS Log File Locations ................................................................................................ 366
Amazon ECS Container Agent Log .................................................................................... 367
Amazon ECS ecs-init Log ............................................................................................ 367
IAM Roles for Tasks Credential Audit Log ........................................................................... 367
Amazon ECS Logs Collector ..................................................................................................... 368
Agent Introspection Diagnostics ............................................................................................... 369
Docker Diagnostics ................................................................................................................. 370
List Docker Containers .................................................................................................... 370
View Docker Logs ........................................................................................................... 371
Inspect Docker Containers ............................................................................................... 371
API failures Error Messages ................................................................................................. 372
Troubleshooting IAM Roles for Tasks ......................................................................................... 373
Windows Containers ....................................................................................................................... 376
Windows Container Caveats ..................................................................................................... 376
Getting Started with Windows Containers ................................................................................. 377
Step 1: Create a Windows Cluster ..................................................................................... 377
Step 2: Launching a Windows Container Instance into your Cluster ........................................ 377
Step 3: Register a Windows Task Definition ........................................................................ 380
Step 4: Create a Service with Your Task Definition .............................................................. 381
Step 5: View Your Service ................................................................................................ 381
Windows Task Definitions ........................................................................................................ 382
Windows Task Definition Parameters ................................................................................. 382
Windows Sample Task Definitions ..................................................................................... 384
Windows IAM Roles for Tasks ................................................................................................... 385
IAM Roles for Task Container Bootstrap Script .................................................................... 385
Pushing Windows Images to Amazon ECR .................................................................................. 386
AWS Glossary ................................................................................................................................. 388
Amazon ECS lets you launch and stop container-based applications with simple API calls, allows you to
get the state of your cluster from a centralized service, and gives you access to many familiar Amazon
EC2 features.
You can use Amazon ECS to schedule the placement of containers across your cluster based on your
resource needs, isolation policies, and availability requirements. Amazon ECS eliminates the need for
you to operate your own cluster management and configuration management systems or worry about
scaling your management infrastructure.
Amazon ECS can be used to create a consistent deployment and build experience, to manage and scale
batch and Extract-Transform-Load (ETL) workloads, and to build sophisticated application architectures on
a microservices model. For more information about Amazon ECS use cases and scenarios, see Container
Use Cases.
The following diagram shows the architecture of an Amazon ECS environment using the Fargate launch
type:
The following sections dive into these individual elements of the Amazon ECS architecture in more
detail.
Containers and Images
Images are typically built from a Dockerfile, a plain text file that specifies all of the components that are
included in the container. These images are then stored in a registry from which they can be downloaded
and run on your cluster. For more information about container technology, see Docker Basics (p. 14).
Note
The Fargate launch type only supports using container images hosted in Amazon ECR or publicly
on Docker Hub. Private repositories are currently only supported using the EC2 launch type.
Task Definitions
To prepare your application to run on Amazon ECS, you create a task definition. The task definition is
a text file, in JSON format, that describes one or more containers, up to a maximum of ten, that form
your application. It can be thought of as a blueprint for your application. Task definitions specify various
parameters for your application. Examples of task definition parameters are which containers to use and
the repositories in which they are located, which ports should be opened on the container instance for
your application, and what data volumes should be used with the containers in the task. The specific
parameters available for the task definition depend on which launch type you are using. For more
information about creating task definitions, see Amazon ECS Task Definitions (p. 100).
The following is an example of a simple task definition containing a single container that runs an NGINX
web server using the Fargate launch type. For a more extended example demonstrating the use of
multiple containers in a task definition, see Example Task Definitions (p. 143).
{
  "family": "webserver",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": "100",
      "cpu": "99"
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "memory": "512",
  "cpu": "256"
}
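Saved to a file, a task definition like this can be validated locally and then registered with the AWS CLI. The file name webserver.json is only an example, and the task-level cpu and memory values shown are the smallest combination that Fargate accepts (256 CPU units and 512 MiB):

```shell
# Write the example task definition to a file (the name webserver.json is
# chosen here for illustration only).
cat > webserver.json <<'EOF'
{
  "family": "webserver",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": "100",
      "cpu": "99"
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "memory": "512",
  "cpu": "256"
}
EOF

# Confirm the file is well-formed JSON before registering it.
python3 -m json.tool webserver.json > /dev/null && echo "JSON OK"

# With AWS credentials configured, register the task definition:
#   aws ecs register-task-definition --cli-input-json file://webserver.json
```

Registering the same family again creates a new revision of the task definition rather than overwriting the previous one.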
Tasks and Scheduling
The Amazon ECS task scheduler is responsible for placing tasks within your cluster. There are several
different scheduling options available. For example, you can define a service that runs and maintains a
specified number of tasks simultaneously. For more information about the different scheduling options
available, see Scheduling Amazon ECS Tasks (p. 147).
Clusters
When you run tasks using Amazon ECS, you place them on a cluster, which is a logical grouping of
resources. If you use the Fargate launch type with tasks within your cluster, Amazon ECS manages your
cluster resources. If you use the EC2 launch type, then your clusters are groups of container instances
that you manage. Amazon ECS downloads your container images from a registry that you specify, and runs
those images within your cluster.
For more information about creating clusters, see Amazon ECS Clusters (p. 25). If you are using
the EC2 launch type, you can read about creating container instances at Amazon ECS Container
Instances (p. 30).
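For example, creating and listing clusters from the AWS CLI takes a single command each (MyCluster is a placeholder name, and both commands assume the AWS CLI is installed and configured with credentials):

```shell
# Create an empty cluster named MyCluster (a placeholder name).
aws ecs create-cluster --cluster-name MyCluster

# List the cluster ARNs in your account and region.
aws ecs list-clusters
```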
Container Agent
The container agent runs on each infrastructure resource within an Amazon ECS cluster. It sends
information about the resource's current running tasks and resource utilization to Amazon ECS, and
starts and stops tasks whenever it receives a request from Amazon ECS. For more information, see
Amazon ECS Container Agent (p. 69).
Alternatively, you can install the AWS Command Line Interface (AWS CLI) to use Amazon ECS. For more
information, see Setting Up with Amazon ECS (p. 8).
Related Services
Amazon ECS can be used along with the following AWS services:
AWS Identity and Access Management (IAM)
IAM is a web service that helps you securely control access to AWS resources for your users. Use
IAM to control who can use your AWS resources (authentication) and what resources they can use
in which ways (authorization). In Amazon ECS, IAM can be used to control access at the container
instance level using IAM roles, and at the task level using IAM task roles. For more information, see
Amazon ECS IAM Policies, Roles, and Permissions (p. 224).
Auto Scaling
Auto Scaling is a web service that enables you to automatically scale out or in your tasks based
on user-defined policies, health status checks, and schedules. You can use Auto Scaling with a
Fargate task within a service to scale in response to a number of metrics or with an EC2 task to scale
the container instances within your cluster. For more information, see Tutorial: Scaling Container
Instances with CloudWatch Alarms (p. 208).
Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple
EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your
applications, seamlessly providing the required amount of load balancing capacity needed to
distribute application traffic. You can use Elastic Load Balancing to create an endpoint that balances
traffic across services in a cluster. For more information, see Service Load Balancing (p. 165).
Amazon Elastic Container Registry
Amazon ECR is a managed AWS Docker registry service that is secure, scalable, and reliable. Amazon
ECR supports private Docker repositories with resource-based permissions using IAM so that specific
users or EC2 instances can access repositories and images. Developers can use the Docker CLI to
push, pull, and manage images. For more information, see the Amazon Elastic Container Registry
User Guide.
AWS CloudFormation
AWS CloudFormation gives developers and systems administrators an easy way to create and
manage a collection of related AWS resources, provisioning and updating them in an orderly and
predictable fashion. You can define clusters, task definitions, and services as entities in an AWS
CloudFormation script. For more information, see AWS CloudFormation Template Reference.
AWS Management Console
The console is a browser-based interface to manage Amazon ECS resources. For a tutorial that
guides you through the console, see Getting Started with Amazon ECS using Fargate (p. 19).
AWS command line tools
You can use the AWS command line tools to issue commands at your system's command line to
perform Amazon ECS and AWS tasks; this can be faster and more convenient than using the console.
The command line tools are also useful for building scripts that perform AWS tasks.
AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the
AWS Tools for Windows PowerShell. For more information, see the AWS Command Line Interface
User Guide and the AWS Tools for Windows PowerShell User Guide.
Amazon ECS CLI
In addition to using the AWS CLI to access Amazon ECS resources, you can use the Amazon ECS CLI,
which provides high-level commands to simplify creating, updating, and monitoring clusters and
tasks from a local development environment using Docker Compose. For more information, see
Using the Amazon ECS Command Line Interface (p. 264).
AWS SDKs
We also provide SDKs that enable you to access Amazon ECS from a variety of programming
languages. The SDKs automatically take care of tasks such as:
• Cryptographically signing your service requests
• Retrying requests
• Handling error responses
For more information about available SDKs, see Tools for Amazon Web Services.
Complete the following tasks to get set up for Amazon ECS. If you have already completed any of these
steps, you may skip them and move on to installing the AWS CLI.
If you have an AWS account already, skip to the next task. If you don't have an AWS account, use the
following procedure to create one.
Part of the sign-up procedure involves receiving a phone call and entering a PIN using the phone
keypad.
Note your AWS account number, because you'll need it for the next task.
The console requires your password. You can create access keys for your AWS account to access the command
line interface or API. However, we don't recommend that you access AWS using the credentials for your
AWS account; we recommend that you use AWS Identity and Access Management (IAM) instead. Create
an IAM user, and then add the user to an IAM group with administrative permissions, or grant this
user administrative permissions. You can then access AWS using a special URL and the credentials for the
IAM user.
If you signed up for AWS but have not created an IAM user for yourself, you can create one using the IAM
console.
To create an IAM user for yourself and add the user to an Administrators group
1. Use your AWS account email address and password to sign in to the AWS Management Console as
the AWS account root user.
2. In the navigation pane of the console, choose Users, and then choose Add user.
3. For User name, type Administrator.
4. Select the check box next to AWS Management Console access, select Custom password, and then
type the new user's password in the text box. You can optionally select Require password reset to
force the user to select a new password the next time the user signs in.
5. Choose Next: Permissions.
6. On the Set permissions for user page, choose Add user to group.
7. Choose Create group.
8. In the Create group dialog box, type Administrators.
9. For Filter, choose Job function.
10. In the policy list, select the check box for AdministratorAccess. Then choose Create group.
11. Back in the list of groups, select the check box for your new group. Choose Refresh if necessary to
see the group in the list.
12. Choose Next: Review to see the list of group memberships to be added to the new user. When you
are ready to proceed, choose Create user.
You can use this same process to create more groups and users, and to give your users access to your
AWS account resources. To learn about using policies to restrict users' permissions to specific AWS
resources, go to Access Management and Example Policies.
To sign in as this new IAM user, sign out of the AWS console, then use the following URL, where
your_aws_account_id is your AWS account number without the hyphens (for example, if your AWS
account number is 1234-5678-9012, your AWS account ID is 123456789012):
https://your_aws_account_id.signin.aws.amazon.com/console/
Enter the IAM user name and password that you just created. When you're signed in, the navigation bar
displays "your_user_name @ your_aws_account_id".
If you don't want the URL for your sign-in page to contain your AWS account ID, you can create an
account alias. From the IAM dashboard, choose Create Account Alias and enter an alias, such as your
company name. To sign in after you create an account alias, use the following URL:
https://your_account_alias.signin.aws.amazon.com/console/
To verify the sign-in link for IAM users for your account, open the IAM console and check under IAM
users sign-in link on the dashboard.
For more information about IAM, see the AWS Identity and Access Management User Guide.
The Amazon ECS container agent also makes calls to the Amazon EC2 and Elastic Load Balancing APIs on
your behalf, so container instances can be registered and deregistered with load balancers. Before you
can attach a load balancer to an Amazon ECS service, you must create an IAM role for your services to
use before you start them. This requirement applies to any Amazon ECS service that you plan to use with
a load balancer.
Note
The Amazon ECS instance and service roles are automatically created for you in the console first
run experience, so if you intend to use the Amazon ECS console, you can move ahead to the next
section. If you do not intend to use the Amazon ECS console, and instead plan to use the AWS
CLI, complete the procedures in Amazon ECS Container Instance IAM Role (p. 238) and Amazon
ECS Service Scheduler IAM Role (p. 247) before launching container instances or using Elastic
Load Balancing load balancers with services.
AWS uses public-key cryptography to secure the login information for your instance. A Linux instance,
such as an Amazon ECS container instance, has no password to use for SSH access; you use a key pair to
log in to your instance securely. You specify the name of the key pair when you launch your container
instance, then provide the private key when you log in using SSH.
If you haven't created a key pair already, you can create one using the Amazon EC2 console. Note that if
you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more
information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide for Linux
Instances.
Important
This is the only chance for you to save the private key file. You'll need to provide the name
of your key pair when you launch an instance and the corresponding private key each time
you connect to the instance.
7. If you will use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the
following command to set the permissions of your private key file so that only you can read it.
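For example, for a private key saved as my-key-pair.pem (a hypothetical file name — substitute the name of your own key pair), the command would be:

```shell
# Create a placeholder file for illustration; in practice this is the .pem
# file you downloaded when you created the key pair.
touch my-key-pair.pem

# Restrict the private key so that only your user can read it; SSH refuses
# to use a key file whose permissions are too open.
chmod 400 my-key-pair.pem
```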
For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.
To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your
SSH client with the -i option and the path to your private key. To connect to your Linux instance from a
computer running Windows, you can use either MindTerm or PuTTY. If you plan to use PuTTY, you'll need
to install it and use the following procedure to convert the .pem file to a .ppk file.
4. Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem
file, choose the option to display files of all types.
5. Select the private key file that you created in the previous procedure and choose Open. Choose OK
to dismiss the confirmation dialog box.
6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase.
Choose Yes.
7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the
.ppk file extension.
If you have a default VPC, you also can skip this section and move to the next task, Create a Security
Group (p. 12). To determine whether you have a default VPC, see Supported Platforms in the Amazon
EC2 Console in the Amazon EC2 User Guide for Linux Instances. Otherwise, you can create a nondefault
VPC in your account using the steps below.
Important
If your account supports Amazon EC2 Classic in a region, then you do not have a default VPC in
that region.
For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide.
Note that if you plan to launch container instances in multiple regions, you need to create a security
group in each region. For more information about regions, see Regions and Availability Zones in the
Amazon EC2 User Guide for Linux Instances.
Tip
You need the public IP address of your local computer, which you can get using a service. For
example, we provide the following service: http://checkip.amazonaws.com/. To locate another
service that provides your IP address, use the search phrase "what is my IP address." If you are
connecting through an Internet service provider (ISP) or from behind a firewall without a static
IP address, you need to find out the range of IP addresses used by client computers.
Note
If your account supports Amazon EC2 Classic, select the VPC that you created in the
previous task.
7. Amazon ECS container instances do not require any inbound ports to be open. However, you might
want to add an SSH rule so you can log into the container instance and examine the tasks with
Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance
to host a task that runs a web server. Complete the following steps to add these optional security
group rules.
On the Inbound tab, create the following rules (choose Add Rule for each new rule), and then
choose Create:
• Choose HTTP from the Type list, and make sure that Source is set to Anywhere (0.0.0.0/0).
• Choose HTTPS from the Type list, and make sure that Source is set to Anywhere (0.0.0.0/0).
• Choose SSH from the Type list. In the Source field, ensure that Custom IP is selected, and specify
the public IP address of your computer or network in CIDR notation. To specify an individual
IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is
203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range,
specify the entire range, such as 203.0.113.0/24.
Important
For security reasons, we don't recommend that you allow SSH access from all IP addresses
(0.0.0.0/0) to your instance, except for testing purposes and only for a short time.
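As a quick sanity check of the CIDR prefixes above, Python's standard ipaddress module (called from the shell here) reports how many addresses each one covers:

```shell
# A /32 prefix covers exactly one address.
python3 -c "import ipaddress; print(ipaddress.ip_network('203.0.113.25/32').num_addresses)"

# A /24 prefix covers a range of 256 addresses.
python3 -c "import ipaddress; print(ipaddress.ip_network('203.0.113.0/24').num_addresses)"
```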
To use the AWS CLI with Amazon ECS, install the latest version of the AWS CLI. For information about installing
the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the
AWS Command Line Interface User Guide.
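After installation, two quick commands confirm that the CLI works; the second assumes that credentials have been configured with aws configure:

```shell
# Print the installed AWS CLI version.
aws --version

# Make a simple Amazon ECS API call to verify credentials and connectivity.
aws ecs list-clusters
```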
Docker Basics
Docker is a technology that allows you to build, run, test, and deploy distributed applications that are
based on Linux containers. Amazon ECS uses Docker images in task definitions to launch containers on
EC2 instances in your clusters. For Amazon ECS product details, featured customer case studies, and
FAQs, see the Amazon Elastic Container Service product detail pages.
The documentation in this guide assumes that readers possess a basic understanding of what Docker is
and how it works. For more information about Docker, see What is Docker? and the Docker User Guide.
Topics
• Installing Docker (p. 14)
• Create a Docker Image (p. 15)
• (Optional) Push your image to Amazon Elastic Container Registry (p. 16)
• Next Steps (p. 17)
Installing Docker
Note
If you already have Docker installed, skip to Create a Docker Image (p. 15).
Docker is available on many different operating systems, including most modern Linux distributions, like
Ubuntu, and even Mac OS X and Windows. For more information about how to install Docker on your
particular operating system, go to the Docker installation guide.
You don't even need a local development system to use Docker. If you are using Amazon EC2 already, you
can launch an Amazon Linux instance and install Docker to get started.
1. Launch an instance with the Amazon Linux AMI. For more information, see Launching an Instance in
the Amazon EC2 User Guide for Linux Instances.
2. Connect to your instance. For more information, see Connect to Your Linux Instance in the Amazon
EC2 User Guide for Linux Instances.
3. Update the installed packages and package cache on your instance.
4. Install the most recent Docker package.
5. Start the Docker service.
6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo.
7. Log out and log back in again to pick up the new docker group permissions.
8. Verify that the ec2-user can run Docker commands without sudo.
docker info
Note
In some cases, you may need to reboot your instance to provide permissions for the ec2-
user to access the Docker daemon. Try rebooting your instance if you see the following
error:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
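The commands for steps 3 through 8 were lost in this extract; on an Amazon Linux instance they correspond to commands along these lines (a sketch; package names can differ on other distributions):

```shell
# Step 3: update the installed packages and package cache
sudo yum update -y

# Steps 4-5: install Docker and start the Docker service
sudo yum install -y docker
sudo service docker start

# Step 6: allow ec2-user to run Docker commands without sudo
sudo usermod -a -G docker ec2-user

# Step 8: after logging out and back in (step 7), verify daemon access
docker info
```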
1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use
for your Docker image and what you want installed and running on it. For more information about
Dockerfiles, go to the Dockerfile Reference.
touch Dockerfile
2. Edit the Dockerfile you just created and add the following content.
FROM ubuntu:12.04

# Install dependencies
RUN apt-get update -y
RUN apt-get install -y apache2

# Write the hello world page to the web server's document root
RUN echo "Hello World!" > /var/www/index.html

# Configure apache
RUN a2enmod rewrite
RUN chown -R www-data:www-data /var/www
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
This Dockerfile uses the Ubuntu 12.04 image. The RUN instructions update the package caches,
install some software packages for the web server, and then write the "Hello World!" content to the
web server's document root. The EXPOSE instruction exposes port 80 on the container, and the CMD
instruction starts the web server.
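The build step (step 3) is missing from this extract; with Docker installed, it is the standard build command, run from the directory containing the Dockerfile:

```shell
# Build the image and tag it hello-world
docker build -t hello-world .
```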
4. Run docker images to verify that the image was created correctly.
Output:
5. Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to
port 80 on the host system. For more information about docker run, go to the Docker run reference.
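The command for this step, using the port mapping described above:

```shell
# Map port 80 in the container to port 80 on the host
docker run -p 80:80 hello-world
```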
Note
Output from the Apache web server is displayed in the terminal window. You can ignore
the "Could not reliably determine the server's fully qualified domain
name" message.
6. Open a browser and point to the server that is running Docker and hosting your container.
• If you are using an EC2 instance, this is the Public DNS value for the server, which is the same
address you use to connect to the instance with SSH. Make sure that the security group for your
instance allows inbound traffic on port 80.
• If you are running Docker locally, point your browser to http://localhost/.
• If you are using docker-machine on a Windows or Mac computer, find the IP address of the
VirtualBox VM that is hosting Docker with the docker-machine ip command, substituting
machine-name with the name of the docker machine you are using.
docker-machine ip machine-name
You should see a web page with your "Hello World!" statement.
7. Stop the Docker container by typing Ctrl+C.
1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in
the output.
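The repository can be created with the AWS CLI create-repository command (the CLI uses your configured default region):

```shell
aws ecr create-repository --repository-name hello-world
```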
Output:
{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "hello-world",
        "repositoryArn": "arn:aws:ecr:us-east-1:aws_account_id:repository/hello-world",
        "createdAt": 1505337806.0,
        "repositoryUri": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world"
    }
}
2. Tag the hello-world image with the repositoryUri value from the previous step.
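Using the repositoryUri shown in the earlier output (substitute your own):

```shell
# Tag the local image with the full Amazon ECR repository URI
docker tag hello-world aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world
```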
3. Run the aws ecr get-login --no-include-email command to get the docker login authentication
command string for your registry.
Note
The get-login command is available in the AWS CLI starting with version 1.9.15; however,
we recommend version 1.11.91 or later for recent versions of Docker (17.06 or later). You
can check your AWS CLI version with the aws --version command. If you are using Docker
version 17.06 or later, include the --no-include-email option after get-login. If you
receive an Unknown options: --no-include-email error, install the latest version of
the AWS CLI. For more information, see Installing the AWS Command Line Interface in the
AWS Command Line Interface User Guide.
4. Run the docker login command that was returned in the previous step. This command provides an
authorization token that is valid for 12 hours.
Important
When you execute this docker login command, the command string may be visible to other
users on your system in a process list (ps -e) display. Because the docker login command
contains authentication credentials, there is a risk that other users on your system could
view them this way and use them to gain push and pull access to your repositories. If you
are not on a secure system, you should consider this risk and log in interactively by omitting
the -p password option, and then entering the password when prompted.
5. Push the image to Amazon ECR with the repositoryUri value from the earlier step.
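With the repositoryUri from the earlier output (substitute your own):

```shell
# Push the tagged image to Amazon ECR
docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world
```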
Next Steps
After the image push is finished, you can use the image in your Amazon ECS task definitions to run
tasks.
Note
This section requires the AWS CLI. If you do not have the AWS CLI installed on your system, see
Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
1. Create a file called hello-world-task-def.json with the following contents, substituting the
repositoryUri from the previous section for the image field.
{
    "family": "hello-world",
    "containerDefinitions": [
        {
            "name": "hello-world",
            "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world",
            "cpu": 10,
            "memory": 500,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                }
            ],
            "entryPoint": [
                "/usr/sbin/apache2",
                "-D",
                "FOREGROUND"
            ],
            "essential": true
        }
    ]
}
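The file is then registered with the standard register-task-definition command:

```shell
aws ecs register-task-definition --cli-input-json file://hello-world-task-def.json
```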
The task definition is registered in the hello-world family as defined in the JSON file.
• Use the following AWS CLI command to run a task with the hello-world task definition.
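The corresponding run-task command (the task lands on your default cluster unless you specify one):

```shell
aws ecs run-task --task-definition hello-world
```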
The Amazon ECS first run wizard guides you through the process of getting started. The wizard gives
you the option of creating a cluster and launching a sample web application. If you already have a
Docker image that you would like to launch in Amazon ECS, you can instead create a task definition
with that image and use it for your cluster.
Important
Before you begin, be sure that you've completed the steps in Setting Up with Amazon ECS (p. 8)
and that your AWS user has the required permissions specified in the Amazon ECS First Run
Wizard (p. 256) IAM policy example.
For Container definition, the first run wizard comes preloaded with the sample-app and nginx
container definitions in the console. You can optionally rename the container or review and edit the
resources used by the container (such as CPU units and memory limits) by choosing Edit and editing
the values shown.
For more information on what each of these container definition parameters does, see Container
Definitions (p. 108).
Note
If you are using an Amazon ECR image in your container definition, be sure to use the
full registry/repository:tag naming for your Amazon ECR images. For example,
aws_account_id.dkr.ecr.region.amazonaws.com/my-web-app:latest.
3. For Task definition, the first run wizard defines a task definition to use with the preloaded container
definitions. You can optionally rename the task definition and edit the resources used by the task
(such as the Task memory and Task CPU values) by choosing Edit and editing the values shown.
Task definitions created in the first run wizard are limited to a single container for simplicity's sake.
You can create multi-container task definitions later in the Amazon ECS console.
For more information on what each of these task definition parameters does, see Task Definition
Parameters (p. 107).
4. Choose Next to continue.
style application that is meant to run indefinitely, so by running it as a service, it will restart if the task
becomes unhealthy or unexpectedly stops.
The first run wizard comes preloaded with a service definition, and you can see the sample-app-
service service defined in the console. You can optionally rename the service or review and edit the
details by choosing Edit and doing the following:
Complete the following steps to use a load balancer with your service.
a. In the Application load balancing section, choose the Load balancer listener port. The default
values here are set up for the sample application, but you can configure different listener options
for the load balancer. For more information, see Service Load Balancing (p. 165).
b. In the Application Load Balancer target group field, specify a name for the target group.
5. Review your service settings and choose Save, Next.
In this section of the wizard, you name your cluster, and then Amazon ECS takes care of the networking
and IAM configuration for you.
Step 4: Review
1. Review your task definition, task configuration, and cluster configurations and choose Create to
finish. You are directed to a Launch Status page that shows the status of your launch and describes
each step of the process (this can take a few minutes to complete while your Auto Scaling group is
created and populated).
2. After the launch is complete, choose View service to view your service in the Amazon ECS console.
If your service is a web-based application, such as the Amazon ECS sample application, you can view its
containers with a web browser.
4. Enter the IPv4 Public IP address in your web browser and you should see a web page that displays
the Amazon ECS sample application.
Some Amazon ECS resources, such as tasks, services, clusters, and container instances, are cleaned up
using the Amazon ECS console. Other resources, such as Amazon EC2 instances, Elastic Load Balancing
load balancers, and Auto Scaling groups, must be cleaned up manually in the Amazon EC2 console or by
deleting the AWS CloudFormation stack that created them.
Topics
• Scale Down Services (p. 22)
• Delete Services (p. 22)
• Deregister Container Instances (p. 22)
• Delete a Cluster (p. 23)
• Delete the AWS CloudFormation Stack (p. 23)
Alternatively, you can use the following AWS CLI command to scale down your service. Be sure to
substitute the region name, cluster name, and service name for each service that you are scaling down.
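A sketch of the command (the cluster name default and service name sample-webapp are placeholders; substitute your own):

```shell
aws ecs update-service --cluster default --service sample-webapp \
    --desired-count 0 --region us-east-1
```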
Delete Services
Before you can delete a cluster, you must delete the services inside that cluster. After your service has
scaled down to 0 tasks, you can delete it. For each service inside your cluster, follow the procedures in
Deleting a Service (p. 196) to delete it.
Alternatively, you can use the following AWS CLI command to delete your services. Be sure to substitute
the region name, cluster name, and service name for each service that you are deleting.
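A sketch of the command (placeholder names; the service must already be scaled down to 0 tasks):

```shell
aws ecs delete-service --cluster default --service sample-webapp --region us-east-1
```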
Alternatively, you can use the following AWS CLI command to deregister your container instances. Be
sure to substitute the region name, cluster name, and container instance ID for each container instance
that you are deregistering.
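A sketch of the command (placeholder names; --force deregisters the instance even if it is running tasks):

```shell
aws ecs deregister-container-instance --cluster default \
    --container-instance container_instance_id --region us-east-1 --force
```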
Delete a Cluster
After you have removed the active resources from your Amazon ECS cluster, you can delete it. Use the
following procedure to delete your cluster.
To delete a cluster
Alternatively, you can use the following AWS CLI command to delete your cluster. Be sure to substitute
the region name and cluster name for each cluster that you are deleting.
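A sketch of the command (placeholder names):

```shell
aws ecs delete-cluster --cluster default --region us-east-1
```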
Topics
• Cluster Concepts (p. 25)
• Creating a Cluster (p. 25)
• Scaling a Cluster (p. 27)
• Deleting a Cluster (p. 29)
Cluster Concepts
• Clusters are region-specific.
• Clusters can contain tasks using both the Fargate and EC2 launch types. For more information on
launch types, see Amazon ECS Launch Types (p. 132).
• For tasks using the EC2 launch type, clusters can contain multiple different container instance types,
but each container instance may only be part of one cluster at a time.
• You can create custom IAM policies for your clusters to allow or restrict users' access to specific
clusters. For more information, see the Clusters (p. 258) section in Amazon ECS IAM Policy
Examples (p. 256).
Creating a Cluster
You can create an Amazon ECS cluster using the AWS Management Console, as described in this topic.
Before you begin, be sure that you've completed the steps in Setting Up with Amazon ECS (p. 8). If you
are launching tasks with the EC2 launch type, you can register container instances into the cluster after
creating it.
Note
This cluster creation wizard provides a simple way to create the resources that are needed by an
Amazon ECS cluster, and it lets you customize several common cluster configuration options.
However, this wizard does not allow you to customize every resource option (for example,
the container instance AMI ID). If your requirements extend beyond what is supported in this
wizard, consider using our reference architecture at https://github.com/awslabs/ecs-refarch-
cloudformation.
Do not attempt to modify the underlying resources directly after they are created by the wizard.
To create a cluster
• Networking only–This choice takes you through the options to launch a cluster of tasks using the
Fargate launch type. The Fargate launch type allows you to run your containerized applications
without the need to provision and manage the backend infrastructure. Register your task
definition and Fargate launches the container for you.
• EC2 Linux + Networking–This choice takes you through the choices to launch a cluster of tasks
using the EC2 launch type using Linux containers. The EC2 launch type allows you to run your
containerized applications on a cluster of Amazon EC2 instances that you manage.
• EC2 Windows + Networking – This choice takes you through the choices to launch a cluster of
tasks using the EC2 launch type using Windows containers. The EC2 launch type allows you to run
your containerized applications on a cluster of Amazon EC2 instances that you manage. For more
information, see Windows Containers (p. 376).
If you chose the Networking only cluster template, do the following, otherwise skip to the next section:
1. On the Configure cluster page, choose a Cluster name. Up to 255 letters (uppercase and lowercase),
numbers, hyphens, and underscores are allowed.
2. In the Networking section, configure the VPC for your cluster. You can leave the default settings in
or you can modify these settings by following the substeps below.
a. (Optional) If you choose to create a new VPC, for CIDR Block, select a CIDR block for your VPC.
For more information, see Your VPC and Subnets in the Amazon VPC User Guide.
b. For Subnets, select the subnets to use for your VPC. You can keep the default settings or you
can modify them to meet your needs.
3. Choose Create.
If you chose the EC2 Linux + Networking or EC2 Windows + Networking templates, do the following:
Using the EC2 Linux + Networking or EC2 Windows + Networking cluster template
1. For Cluster name, enter a name for your cluster. Up to 255 letters (uppercase and lowercase),
numbers, hyphens, and underscores are allowed.
2. (Optional) If you wish to create a cluster with no resources, choose Create an empty cluster, Create.
3. For Provisioning model, choose one of the following:
• On-Demand Instance–With On-Demand Instances, you pay for compute capacity by the hour with
no long-term commitments or upfront payments.
• Spot–Spot Instances allow you to bid on spare Amazon EC2 computing capacity for up to 90% off
the On-Demand price. For more information, see Spot Instances.
Note
Spot Instances are subject to possible interruptions. We recommend that you avoid Spot
Instances for applications that can't be interrupted. For more information, see Spot
Instance Interruptions.
4. For Spot Instances, do the following; otherwise, skip to the next step.
a. For Spot Instance allocation strategy, choose the strategy that meets your needs. For more
information, see Spot Fleet Allocation Strategy.
b. For Maximum bid price (per instance/hour), specify a bid price. Your Spot Instances are not
launched if your bid price is lower than the Spot price for the instance types that you selected.
5. For EC2 instance types, choose the EC2 instance type for your container instances. The instance type
that you select determines the resources available for your tasks.
6. For Number of instances, choose the number of EC2 instances to launch into your cluster. These
instances are launched using the latest Amazon ECS–optimized AMI. For more information, see
Amazon ECS-Optimized AMI (p. 32).
7. For EBS storage (GiB), choose the size of the Amazon EBS volume to use for data storage on your
container instances. By default, the Amazon ECS–optimized AMI launches with an 8-GiB root volume
and a 22-GiB data volume. You can increase the size of the data volume to allow for greater image
and container storage.
8. For Key pair, choose an Amazon EC2 key pair to use with your container instances for SSH access.
If you do not specify a key pair, you cannot connect to your container instances with SSH. For more
information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.
9. In the Networking section, configure the VPC to launch your container instances into. By default,
the cluster creation wizard creates a new VPC with two subnets in different Availability Zones, and
a security group open to the internet on port 80. This is a basic setup that works well for an HTTP
service. However, you can modify these settings by following the substeps below.
Scaling a Cluster
If you have a cluster that contains Amazon EC2 container instances, the following will help you scale the
number of Amazon EC2 instances in your cluster.
If your cluster was created with the console first-run experience described in Getting Started with
Amazon ECS using Fargate (p. 19) after November 24th, 2015, then the Auto Scaling group associated
with the AWS CloudFormation stack created for your cluster can be scaled up or down to add or remove
container instances. You can perform this scaling operation from within the Amazon ECS console.
If your cluster was not created with the console first-run experience described in Getting Started with
Amazon ECS using Fargate (p. 19) after November 24th, 2015, then you cannot scale your cluster from
the Amazon ECS console. However, you can still modify existing Auto Scaling groups associated with
your cluster in the Auto Scaling console. If you do not have an Auto Scaling group associated with your
cluster, you can create one from an existing container instance. For more information, see Creating an
Auto Scaling Group Using an EC2 Instance in the Auto Scaling User Guide. You can also manually launch
or terminate container instances from the Amazon EC2 console; for more information see Launching an
Amazon ECS Container Instance (p. 43).
To scale a cluster
If a Scale ECS Instances button appears, then you can scale your cluster in the next step. If not,
you must manually adjust your Auto Scaling group to scale up or down your instances, or you can
manually launch or terminate your container instances in the Amazon EC2 console.
6. Choose Scale ECS Instances.
7. For Desired number of instances, enter the number of instances to scale your cluster to, and then
choose Scale.
Note
If you reduce the number of container instances in your cluster, randomly selected container
instances are terminated until the desired count is achieved, and any tasks that are running
on terminated instances are stopped.
Deleting a Cluster
If you are finished using a cluster, you can delete it. When you delete a cluster in the Amazon ECS
console, the associated resources that are deleted with it vary depending on how the cluster was created.
Step 5 (p. 29) of the following procedure changes based on that condition.
If your cluster was created with the console first-run experience described in Getting Started with
Amazon ECS using Fargate (p. 19) after November 24th, 2015, or the cluster creation wizard described in
Creating a Cluster (p. 25), then the AWS CloudFormation stack that was created for your cluster is also
deleted when you delete your cluster.
If your cluster was created manually (without the cluster creation wizard) or with the console first
run experience prior to November 24th, 2015, then you must deregister (or terminate) any container
instances associated with the cluster before you can delete it. For more information, see Deregister
a Container Instance (p. 67). In this case, after the cluster is deleted, you should delete any
remaining AWS CloudFormation stack resources or Auto Scaling groups associated with the cluster
to avoid incurring any future charges for those resources. For more information, see Delete the AWS
CloudFormation Stack (p. 23).
To delete a cluster
Topics
• Container Instance Concepts (p. 30)
• Container Instance Lifecycle (p. 31)
• Check the Instance Role for Your Account (p. 31)
• Container Instance AMIs (p. 32)
• Subscribing to Amazon ECS–Optimized AMI Update Notifications (p. 40)
• Launching an Amazon ECS Container Instance (p. 43)
• Bootstrapping Container Instances with Amazon EC2 User Data (p. 46)
• Connect to Your Container Instance (p. 52)
• Using CloudWatch Logs with Container Instances (p. 53)
• Container Instance Draining (p. 60)
• Managing Container Instances Remotely (p. 61)
• Starting a Task at Container Instance Launch Time (p. 64)
• Deregister a Container Instance (p. 67)
Amazon VPC User Guide and HTTP Proxy Configuration (p. 97) in this guide. For help creating a VPC,
see Tutorial: Creating a VPC with Public and Private Subnets for Your Clusters (p. 342).
• The type of EC2 instance that you choose for your container instances determines the resources
available in your cluster. Amazon EC2 provides different instance types, each with different CPU,
memory, storage, and networking capacity that you can use to run your tasks. For more information,
see Amazon EC2 Instances.
• Because each container instance has unique state information that is stored locally on the container
instance and within Amazon ECS:
• You should not deregister an instance from one cluster and re-register it into another. To relocate
container instance resources, we recommend that you terminate container instances from one
cluster and launch new container instances with the latest Amazon ECS-optimized AMI in the new
cluster. For more information, see Terminate Your Instance in the Amazon EC2 User Guide for Linux
Instances and Launching an Amazon ECS Container Instance (p. 43).
• You cannot stop a container instance and change its instance type. Instead, we recommend that
you terminate the container instance and launch a new container instance with the desired instance
size and the latest Amazon ECS-optimized AMI in your desired cluster. For more information, see
Terminate Your Instance in the Amazon EC2 User Guide for Linux Instances and Launching an Amazon
ECS Container Instance (p. 43) in this guide.
If you stop (not terminate) an Amazon ECS container instance, the status remains ACTIVE, but the
agent connection status transitions to FALSE within a few minutes. Any tasks that were running on the
container instance stop. If you start the container instance again, the container agent reconnects with
the Amazon ECS service, and you are able to run tasks on the instance again.
Important
If you stop and start a container instance, or reboot that instance, some older versions of the
Amazon ECS container agent register the instance again without deregistering the original
container instance ID. In this case, Amazon ECS lists more container instances in your cluster
than you actually have. (If you have duplicate container instance IDs for the same Amazon
EC2 instance ID, you can safely deregister the duplicates that are listed as ACTIVE with an
agent connection status of FALSE.) This issue is fixed in the current version of the Amazon ECS
container agent. To update to the current version, see Updating the Amazon ECS Container
Agent (p. 74).
If you change the status of a container instance to DRAINING, new tasks are not placed on the container
instance. Any service tasks running on the container instance are removed, if possible, so that you can
perform system updates. For more information, see Container Instance Draining (p. 60).
If you deregister or terminate a container instance, the container instance status changes to INACTIVE
immediately, and the container instance is no longer reported when you list your container instances.
However, you can still describe the container instance for one hour following termination. After one hour,
the instance description is no longer available.
In most cases, the Amazon ECS instance role is automatically created for you in the console first-run
experience. You can use the following procedure to check whether your account already has the Amazon
ECS container instance role.
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the navigation pane, choose Roles.
3. Search the list of roles for ecsInstanceRole. If the role exists, you do not need to create it. If the
role does not exist, follow the procedures in Amazon ECS Container Instance IAM Role (p. 238) to
create the role.
Required
• A modern Linux distribution running at least version 3.10 of the Linux kernel.
• The Amazon ECS container agent (preferably the latest version). For more information, see Amazon
ECS Container Agent (p. 69).
• A Docker daemon running at least version 1.5.0, and any Docker runtime dependencies. For more
information, see Check runtime dependencies in the Docker documentation.
Note
For the best experience, we recommend the Docker version that ships with and is tested with
the corresponding Amazon ECS agent version that you are using. For more information, see
Amazon ECS-Optimized AMI Container Agent Versions (p. 73).
Recommended
• An initialization and nanny process to run and monitor the Amazon ECS agent. The Amazon ECS-
optimized AMI uses the ecs-init upstart process. For more information, see the ecs-init project
on GitHub.
The Amazon ECS-optimized AMI is preconfigured with these requirements and recommendations. We
recommend that you use the Amazon ECS-optimized AMI for your container instances unless your
application requires a specific operating system or a Docker version that is not yet available in that AMI.
For more information, see Amazon ECS-Optimized AMI (p. 32).
Topics
• How to Launch the Latest Amazon ECS-Optimized AMI (p. 34)
• Amazon ECS-Optimized AMI Versions (p. 35)
• Storage Configuration (p. 36)
• The Amazon ECS console first-run wizard launches your container instances with the latest Amazon
ECS-optimized AMI. For more information, see Getting Started with Amazon ECS using Fargate (p. 19).
• You can launch your container instances manually in the Amazon EC2 console by following the
procedures in Launching an Amazon ECS Container Instance (p. 43). You could also choose the EC2
console link in the table below that corresponds to your cluster's region.
• Use an AMI ID from the table below that corresponds to your cluster's region with the AWS CLI, the
AWS SDKs, or an AWS CloudFormation template to launch your instances.
The current Amazon ECS-optimized Linux AMI IDs by region are listed below for reference.
For previous versions of the Amazon ECS-optimized AMI and its corresponding Docker and Amazon ECS
container agent versions, see Amazon ECS-Optimized AMI Container Agent Versions (p. 73).
We always recommend using the latest version of the Amazon ECS-optimized AMI. For more information,
see How to Launch the Latest Amazon ECS-Optimized AMI (p. 34).
Storage Configuration
By default, the Amazon ECS-optimized AMI ships with 30 GiB of total storage. You can modify this value
at launch time to increase or decrease the available storage on your container instance. This storage is
used for the operating system and for Docker images and metadata. The sections below describe the
storage configuration of the Amazon ECS-optimized AMI, based on the AMI version.
Note
You can increase these default volume sizes by changing the block device mapping settings for
your instances when you launch them; however, you cannot specify a smaller volume size than
the default. For more information, see Block Device Mapping in the Amazon EC2 User Guide for
Linux Instances.
The docker-storage-setup utility configures the LVM volume group and logical volume for Docker when
the instance launches. By default, docker-storage-setup creates a volume group called docker and adds
/dev/xvdcz as a physical volume to that group. It then creates a logical volume called docker-pool
that uses 99% of the available storage in the volume group. The remaining 1% of the available storage is
reserved for metadata.
Note
Earlier Amazon ECS-optimized AMI versions (2015.09.d to 2016.03.a) create a logical volume
that uses 40% of the available storage in the volume group. When the logical volume becomes
60% full, the logical volume is increased in size by 20%.
• You can use the LVM commands, vgs and lvs, or the docker info command to view available storage
for Docker.
Note
The LVM command output displays storage values in GiB (2^30 bytes), and docker info
displays storage values in GB (10^9 bytes).
a. You can view the available storage in the volume group with the vgs command. This command
shows the total size of the volume group and the available space in the volume group that can
be used to grow the logical volume. The example below shows a 22-GiB volume with 204 MiB of
free space.
Output:
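The captured output is missing from this extract; based on the sizes described (roughly 22 GiB total, 204 MiB free), it would look approximately like this illustrative example:

```
  VG     #PV #LV #SN Attr   VSize  VFree
  docker   1   1   0 wz--n- 22.10g 204.00m
```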
b. You can view the available space in the logical volume with the lvs command. The example
below shows a logical volume that is 21.75 GiB in size, and it is 7.63% full. This logical volume
can grow until there is no more free space in the volume group.
Output:
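The captured output is missing from this extract; based on the figures described (21.75 GiB, 7.63% full), it would look approximately like this illustrative example:

```
  LV          VG     Attr       LSize  Data%
  docker-pool docker twi-aot--- 21.75g 7.63
```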
c. The docker info command also provides information about how much data space it is using,
and how much data space is available. However, its available space value is based on the logical
volume size that it is using.
Note
Because docker info displays storage values as GB (10^9 bytes), instead of GiB (2^30
bytes), the values displayed here look larger for the same amount of storage displayed
with lvs. However, the values are equal (23.35 GB = 21.75 GiB).
Output:
The easiest way to add storage to your container instances is to terminate the existing instances and
launch new ones with larger data storage volumes. However, if you are unable to do this, you can add
storage to the volume group that Docker uses and extend its logical volume by following these steps.
Note
If your container instance storage is filling up too quickly, there are a few actions that you can
take to reduce this effect:
• (Amazon ECS container agent 1.8.0 and later) Reduce the amount of time
that stopped or exited containers remain on your container instances. The
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION agent configuration variable sets the time
duration to wait from when a task is stopped until the Docker container is removed (by
default, this value is 3 hours). This removes the Docker container data. If this value is set too
low, you may not be able to inspect your stopped containers or view the logs before they are
removed. For more information, see Amazon ECS Container Agent Configuration (p. 81).
• Remove non-running containers and unused images from your container instances. You can
use the following example commands to manually remove stopped containers and unused
images. Deleted containers cannot be inspected later, and deleted images must be pulled
again before starting new containers from them.
To remove unused images, execute the following command on your container instance:
• Remove unused data blocks within containers. You can use the following command to run
fstrim on any running container and discard any data blocks that are unused by the container
file system.
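The cleanup commands referenced in the bullets above follow these commonly used forms (a sketch; review what each command removes before running it on your instances):

```shell
# Remove all stopped containers (they can no longer be inspected afterward)
docker rm $(docker ps -aq)

# Remove images; images that are in use by running containers are not removed,
# but deleted images must be pulled again before reuse
docker rmi $(docker images -q)

# Run fstrim against the root file system of each running container
# to discard any data blocks that are unused by the container file system
docker ps -q | xargs docker inspect --format='{{ .State.Pid }}' \
  | xargs -IZ fstrim /proc/Z/root/
```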
1. Create a new Amazon EBS volume in the same Availability Zone as your container instance. For more
information, see Creating an Amazon EBS Volume in the Amazon EC2 User Guide for Linux Instances.
2. Attach the volume to your container instance. The default location for the Docker data volume is
/dev/xvdcz. For consistency, attach additional volumes in reverse alphabetical order from that
device name (for example, /dev/xvdcy). For more information, see Attaching an Amazon EBS
Volume to an Instance in the Amazon EC2 User Guide for Linux Instances.
3. Connect to your container instance using SSH. For more information, see Connect to Your Container
Instance (p. 52).
4. Check the size of your docker-pool logical volume. The example below shows a logical volume of
409.19 GiB.
Output:
5. Check the current available space in your volume group. The example below shows 612.75 GiB in the
VFree column.
Output:
6. Add the new volume to the docker volume group, substituting the device name to which you
attached the new volume. In this example, a 1-TiB volume was previously added and attached to /dev/xvdcy.
7. Verify that your volume group size has increased with the vgs command. The VFree column should
show the increased storage size. The example below now has 1.6 TiB in the VFree column, which
is 1 TiB larger than it was previously. Your VFree column should be the sum of the original VFree
value and the size of the volume you attached.
Output:
8. Extend the docker-pool logical volume with the size of the volume you added earlier. The
command below adds 1024 GiB to the logical volume, which is entered as 1024G.
Output:
9. Verify that the docker-pool logical volume size has increased, using the lvs command.
Output:
10. (Optional) Verify that docker info also recognizes the added storage space.
Note
Because docker info displays storage values as GB (10^9 bytes), instead of GiB (2^30 bytes),
the values displayed here look larger for the same amount of storage displayed with lvs.
However, the values are equal (1.539 TB = 1.40 TiB).
Output:
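The commands for steps 6 through 9 above follow this pattern (a sketch; substitute your own device name and volume size):

```shell
# Step 6: add the attached volume to the docker volume group
vgextend docker /dev/xvdcy

# Step 7: verify the increased VFree value for the volume group
vgs

# Step 8: extend the docker-pool logical volume by the size of the new volume
lvextend -L+1024G /dev/docker/docker-pool

# Step 9: verify the new logical volume size
lvs
```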
There is no practical way to add storage (that Docker can use) to instances launched from these AMIs
without stopping them. If you find that your container instances need more storage than the default 30
GiB, you should terminate each instance. Then, launch another in its place with the latest Amazon ECS-
optimized AMI and a large enough data storage volume.
You can subscribe an Amazon SQS queue to this notification topic, but you must use a topic ARN that is
in the same region. For more information, see Tutorial: Subscribing an Amazon SQS Queue to an Amazon
SNS Topic in the Amazon Simple Queue Service Developer Guide.
You can also use an AWS Lambda function to trigger events when notifications are received. For more
information, see Invoking Lambda functions using Amazon SNS notifications in the Amazon Simple
Notification Service Developer Guide.
The Amazon SNS topic ARNs for each region are shown below.
Region          Topic ARN
us-east-1       arn:aws:sns:us-east-1:177427601217:ecs-optimized-amazon-ami-update
us-east-2       arn:aws:sns:us-east-2:177427601217:ecs-optimized-amazon-ami-update
us-west-1       arn:aws:sns:us-west-1:177427601217:ecs-optimized-amazon-ami-update
us-west-2       arn:aws:sns:us-west-2:177427601217:ecs-optimized-amazon-ami-update
eu-west-1       arn:aws:sns:eu-west-1:177427601217:ecs-optimized-amazon-ami-update
eu-west-2       arn:aws:sns:eu-west-2:177427601217:ecs-optimized-amazon-ami-update
eu-central-1    arn:aws:sns:eu-central-1:177427601217:ecs-optimized-amazon-ami-update
ap-northeast-1  arn:aws:sns:ap-northeast-1:177427601217:ecs-optimized-amazon-ami-update
ap-southeast-1  arn:aws:sns:ap-southeast-1:177427601217:ecs-optimized-amazon-ami-update
ap-southeast-2  arn:aws:sns:ap-southeast-2:177427601217:ecs-optimized-amazon-ami-update
ca-central-1    arn:aws:sns:ca-central-1:177427601217:ecs-optimized-amazon-ami-update
5. For Protocol, choose Email. For Endpoint, type an email address you can use to receive the
notification.
6. Choose Create subscription.
7. In your email application, open the message from AWS Notifications and open the link to confirm
your subscription.
{
  "ECSAgent": {
    "ReleaseVersion": "1.14.1"
  },
  "ECSAmis": [
    {
      "ReleaseVersion": "2016.09.g",
      "AgentVersion": "1.14.1",
      "ReleaseNotes": "This AMI includes the latest ECS agent 2016.09.g",
      "OsType": "linux",
      "OperatingSystemName": "Amazon Linux",
      "Regions": {
        "ap-northeast-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-f63f6f91"
        },
        "ap-southeast-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-b4ae1dd7"
        },
        "ap-southeast-2": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-fbe9eb98"
        },
        "ca-central-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-ee58e58a"
        },
        "eu-central-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-085e8a67"
        },
        "eu-west-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-95f8d2f3"
        },
        "eu-west-2": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-bf9481db"
        },
        "us-east-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-275ffe31"
        },
        "us-east-2": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-62745007"
        },
        "us-west-1": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-689bc208"
        },
        "us-west-2": {
          "Name": "amzn-ami-2016.09.g-amazon-ecs-optimized",
          "ImageId": "ami-62d35c02"
        }
      }
    }
  ]
}
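If you save this manifest locally, you can query it with jq to look up the image ID for a region. In this sketch, the manifest is written to a hypothetical file, truncated to one region for brevity:

```shell
# Save a minimal copy of the manifest (truncated to one region for brevity)
cat > /tmp/ecs-amis.json <<'EOF'
{"ECSAmis":[{"Regions":{"us-east-1":{"Name":"amzn-ami-2016.09.g-amazon-ecs-optimized","ImageId":"ami-275ffe31"}}}]}
EOF

# Look up the ECS-optimized AMI ID for us-east-1
jq -r '.ECSAmis[0].Regions["us-east-1"].ImageId' /tmp/ecs-amis.json
# prints ami-275ffe31
```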
To use the Amazon ECS-optimized AMI, type amazon-ecs-optimized in the Search community
AMIs field and press the Enter key. Choose Select next to the amzn-ami-2017.09.d-amazon-ecs-
optimized AMI.
The current Amazon ECS–optimized AMI IDs by region are listed below for reference.
Region AMI ID
us-east-2 ami-58f5db3d
us-east-1 ami-fad25980
us-west-2 ami-7114c909
us-west-1 ami-62e0d802
eu-west-2 ami-dbfee1bf
eu-west-1 ami-4cbe0935
eu-central-1 ami-05991b6a
ap-northeast-2 ami-7267c01c
ap-northeast-1 ami-56bd0030
ap-southeast-2 ami-14b55f76
ap-southeast-1 ami-1bdc8b78
ca-central-1 ami-918b30f5
ap-south-1 ami-e4d29c8b
sa-east-1 ami-d596d2b9
6. On the Choose an Instance Type page, you can select the hardware configuration of your instance.
The t2.micro instance type is selected by default. The instance type that you select determines the
resources available for your tasks to run on.
7. Choose Next: Configure Instance Details.
8. On the Configure Instance Details page, configure the following fields accordingly.
a. Set the Number of instances field depending on how many container instances you want to add
to your cluster.
b. (Optional) If you want to use Spot Instances, set the Purchasing option field by selecting the
checkbox next to Request Spot Instances. You will also need to set the other fields related to
Spot Instances. See Spot Instance Requests for more details.
Note
If you are using Spot Instances and see a Not available message, you may need to
choose a different instance type.
c. For Network, choose the VPC to launch your container instance into.
d. For Subnet, choose a subnet to use, or keep the default option to choose the default subnet in
any Availability Zone.
e. Set the Auto-assign Public IP field depending on whether you want your instance to be
accessible from the public Internet. If your instance should be accessible from the Internet,
verify that the Auto-assign Public IP field is set to Enable. If your instance should not be
accessible from the Internet, set this field to Disable.
Note
Container instances need external network access to communicate with the Amazon
ECS service endpoint, so if your container instances do not have public IP addresses,
then they must use network address translation (NAT) to provide this access. For
more information, see NAT Gateways in the Amazon VPC User Guide and HTTP Proxy
Configuration (p. 97) in this guide. For help creating a VPC, see Tutorial: Creating a
VPC with Public and Private Subnets for Your Clusters (p. 342).
f. Select the ecsInstanceRole IAM role value that you created for your container instances in
Setting Up with Amazon ECS (p. 8).
Important
If you do not launch your container instance with the proper IAM permissions, your
Amazon ECS agent cannot connect to your cluster. For more information, see Amazon
ECS Container Instance IAM Role (p. 238).
g. (Optional) Configure your Amazon ECS container instance with user data, such as the agent
environment variables from Amazon ECS Container Agent Configuration (p. 81). Amazon EC2
user data scripts are executed only one time, when the instance is first launched.
By default, your container instance launches into your default cluster. To launch into a non-
default cluster, choose the Advanced Details list. Then, paste the following script into the User
data field, replacing your_cluster_name with the name of your cluster.
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Or, if you have an ecs.config file in Amazon S3 and have enabled Amazon S3 read-only
access to your container instance role, choose the Advanced Details list. Then, paste the
following script into the User data field, replacing your_bucket_name with the name of your
bucket to install the AWS CLI and write your configuration file at launch time.
Note
For more information about this configuration, see Storing Container Instance
Configuration in Amazon S3 (p. 87).
#!/bin/bash
yum install -y aws-cli
aws s3 cp s3://your_bucket_name/ecs.config /etc/ecs/ecs.config
For more information, see Bootstrapping Container Instances with Amazon EC2 User
Data (p. 46).
9. Choose Next: Add Storage.
10. On the Add Storage page, configure the storage for your container instance.
If you are using an Amazon ECS-optimized AMI before the 2015.09.d version, your instance has a
single volume that is shared by the operating system and Docker.
If you are using the 2015.09.d or later Amazon ECS-optimized AMI, your instance has two volumes
configured. The Root volume is for the operating system's use, and the second Amazon EBS volume
(attached to /dev/xvdcz) is for Docker's use.
You can optionally increase or decrease the volume sizes for your instance to meet your application
needs.
11. Choose Review and Launch.
12. On the Review Instance Launch page, under Security Groups, you see that the wizard created and
selected a security group for you. Instead, select the security group that you created in Setting Up
with Amazon ECS (p. 8).
14. In the Select an existing key pair or create a new key pair dialog box, choose Choose an existing
key pair, then select the key pair that you created when getting set up.
When you are ready, select the acknowledgment field, and then choose Launch Instances.
15. A confirmation page lets you know that your instance is launching. Choose View Instances to close
the confirmation page and return to the console.
16. On the Instances screen, you can view the status of your instance. It takes a short time for an
instance to launch. When you launch an instance, its initial state is pending. After the instance
starts, its state changes to running, and it receives a public DNS name. If the Public DNS column is
hidden, choose Show/Hide, Public DNS.
You can pass multiple types of user data to Amazon EC2, including cloud boothooks, shell scripts, and
cloud-init directives. For more information about these and other format types, see the Cloud-Init
documentation.
You can pass this user data into the Amazon EC2 launch wizard in Step 8.g (p. 45) of Launching an
Amazon ECS Container Instance (p. 43).
Topics
• Amazon ECS Container Agent (p. 46)
• Docker Daemon (p. 47)
• cloud-init-per Utility (p. 47)
• MIME Multi Part Archive (p. 48)
• Example Container Instance User Data Configuration Scripts (p. 49)
To set only a single agent configuration variable, such as the cluster name, use echo to copy the variable
to the configuration file:
#!/bin/bash
echo "ECS_CLUSTER=MyCluster" >> /etc/ecs/ecs.config
If you have multiple variables to write to /etc/ecs/ecs.config, use the following heredoc format.
This format writes everything between the lines beginning with cat and EOF to the configuration file.
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"[email protected]"}}
ECS_LOGLEVEL=debug
EOF
Docker Daemon
You can specify Docker daemon configuration information with Amazon EC2 user data, but this
configuration data must be written before the Docker daemon starts. The cloud-boothook user data
format executes earlier in the boot process than a user data shell script. For a complete list of Docker
daemon configuration options, see the Docker daemon documentation.
By default, cloud-boothook user data is run at every instance boot, so you must create a mechanism
to prevent the boothook from running multiple times. The cloud-init-per utility is provided to control
boothook frequency in this manner. For more information, see cloud-init-per Utility (p. 47).
In the example below, the --storage-opt dm.basesize=20G option is appended to any existing
options in the Docker daemon configuration file, /etc/sysconfig/docker.
#cloud-boothook
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
To write multiple lines to a file, use the following heredoc format instead:
#cloud-boothook
cloud-init-per instance docker_options cat <<'EOF' >> /etc/sysconfig/docker
OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"
HTTP_PROXY=http://proxy.example.com:80/
EOF
cloud-init-per Utility
The cloud-init-per utility is provided by the cloud-init package to help you create boothook
commands for instances that run at a specified frequency. Its syntax is cloud-init-per frequency
name cmd [ arg1 ... ].
frequency
How frequently the boothook should run: once (run only a single time), instance (run once per
instance), or always (run at every boot).
name
The name to include in the semaphore file path that is written when the boothook runs.
The semaphore file is written to /var/lib/cloud/instances/instance_id/sem/
bootper.name.instance.
cmd
The command that the boothook should run, followed by any arguments.
#cloud-boothook
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
The semaphore file records the exit code of the command and a UNIX timestamp for when it was
executed.
Output:
0 1488410363
This example MIME multi-part file configures the Docker base device size to 20 GiB and configures the
Amazon ECS container agent to register the instance into the cluster named my-ecs-cluster.
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

# Set Docker daemon options
cloud-init-per once docker_options echo 'OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"' >> /etc/sysconfig/docker
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Set any ECS agent configuration options
echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config
--==BOUNDARY==--
You can use this script for your own container instances, provided that they are launched from an
Amazon ECS-optimized AMI. Be sure to replace the ECS_CLUSTER=default line in the configuration file
to specify your own cluster name, if you are not using the default cluster. For more information about
launching container instances, see Launching an Amazon ECS Container Instance (p. 43).
--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"
# Install nfs-utils
cloud-init-per once yum_update yum update -y
cloud-init-per once install_nfs_utils yum install -y nfs-utils
# Create the mount point and mount /efs
cloud-init-per once mkdir_efs mkdir /efs
cloud-init-per once mount_efs echo -e 'fs-abcd1234.efs.us-east-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' >> /etc/fstab
mount -a
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Set any ECS agent configuration options
echo "ECS_CLUSTER=default" >> /etc/ecs/ecs.config
--==BOUNDARY==--
• Install Docker
• Create the required iptables rules for IAM roles for tasks
• Create the required directories for the Amazon ECS container agent
• Write the Amazon ECS container agent configuration file
• Write the systemd unit file to monitor the agent
• Enable and start the systemd unit
You can use this script for your own container instances, provided that they are launched from an Ubuntu
16.04 AMI. Be sure to replace the ECS_CLUSTER=default line in the configuration file to specify your
own cluster name, if you are not using the default cluster. For more information about launching
container instances, see Launching an Amazon ECS Container Instance (p. 43).
#!/bin/bash
# Install Docker
apt-get update -y && apt-get install -y docker.io
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f %i
ExecStart=/usr/bin/docker run --name %i \
--restart=on-failure:10 \
--volume=/var/run:/var/run \
--volume=/var/log/ecs/:/log \
--volume=/var/lib/ecs/data:/data \
--volume=/etc/ecs:/etc/ecs \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
ExecStop=/usr/bin/docker stop %i
[Install]
WantedBy=default.target
EOF
• Install Docker
• Create the required iptables rules for IAM roles for tasks
• Create the required directories for the Amazon ECS container agent
• Write the Amazon ECS container agent configuration file
• Write the systemd unit file to monitor the agent
• Enable and start the systemd unit
Note
The docker run command in the systemd unit file below contains the required modifications for
SELinux, including the --privileged flag, and the :Z suffixes to the volume mounts.
You can use this script for your own container instances, provided that they are launched from a
CentOS 7 AMI. Be sure to replace the ECS_CLUSTER=default line in the configuration file to
specify your own cluster name, if you are not using the default cluster. For more information about
launching container instances, see Launching an Amazon ECS Container Instance (p. 43).
#!/bin/bash
# Install Docker
yum install -y docker
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f %i
ExecStart=/usr/bin/docker run --name %i \
--privileged \
--restart=on-failure:10 \
--volume=/var/run:/var/run \
--volume=/var/log/ecs/:/log:Z \
--volume=/var/lib/ecs/data:/data:Z \
--volume=/etc/ecs:/etc/ecs \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
ExecStop=/usr/bin/docker stop %i
[Install]
WantedBy=default.target
EOF
• Your container instances need external network access to connect using SSH. If your container
instances are running in a private VPC, they need an SSH bastion instance to provide this access. For
more information, see the Securely connect to Linux instances running in a private Amazon VPC blog
post.
• Your container instances must have been launched with a valid Amazon EC2 key pair. Amazon ECS
container instances have no password, and you use a key pair to log in using SSH. If you did not specify
a key pair when you launched your instance, there is no way to connect to the instance.
• SSH uses port 22 for communication. Port 22 must be open in your container instance security group
for you to connect to your instance using SSH.
Note
The Amazon ECS console first-run experience creates a security group for your container
instances without inbound access on port 22. If your container instances were launched from
the console first-run experience, add inbound access to port 22 on the security group used for
those instances. For more information, see Authorizing Network Access to Your Instances in
the Amazon EC2 User Guide for Linux Instances.
If you are using a Windows computer, see Connecting to Your Linux Instance from Windows Using
PuTTY in the Amazon EC2 User Guide for Linux Instances.
Important
If you experience any issues connecting to your instance, see Troubleshooting Connecting to
Your Instance in the Amazon EC2 User Guide for Linux Instances.
To send container logs from your tasks to CloudWatch Logs, see Using the awslogs Log Driver (p. 137).
For more information on CloudWatch Logs, see Monitoring Log Files in the Amazon CloudWatch User
Guide.
Topics
• CloudWatch Logs IAM Policy (p. 53)
• Installing the CloudWatch Logs Agent (p. 54)
• Configuring and Starting the CloudWatch Logs Agent (p. 55)
• Viewing CloudWatch Logs (p. 57)
• Configuring CloudWatch Logs at Launch with User Data (p. 58)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
After you have installed the agent, proceed to the next section to configure the agent.
The example configuration file below is configured for the Amazon ECS-optimized AMI, and it provides
log streams for several common log files:
/var/log/dmesg, /var/log/messages, /var/log/docker, /var/log/ecs/ecs-init.log,
/var/log/ecs/ecs-agent.log, and /var/log/ecs/audit.log (which contains log messages from the
IAM roles for tasks credential provider).
You can use the example file below for your Amazon ECS container instances, but you must substitute
the {cluster} and {container_instance_id} entries with the cluster name and container instance
ID for each container instance so that the log streams are grouped by cluster name and separate for each
individual container instance. The procedure that follows the example configuration file has steps to
replace the cluster name and container instance ID placeholders.
[general]
state_file = /var/lib/awslogs/agent-state
[/var/log/dmesg]
file = /var/log/dmesg
log_group_name = /var/log/dmesg
log_stream_name = {cluster}/{container_instance_id}
[/var/log/messages]
file = /var/log/messages
log_group_name = /var/log/messages
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %b %d %H:%M:%S
[/var/log/docker]
file = /var/log/docker
log_group_name = /var/log/docker
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%S.%f
[/var/log/ecs/ecs-init.log]
file = /var/log/ecs/ecs-init.log
log_group_name = /var/log/ecs/ecs-init.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
[/var/log/ecs/ecs-agent.log]
file = /var/log/ecs/ecs-agent.log.*
log_group_name = /var/log/ecs/ecs-agent.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
[/var/log/ecs/audit.log]
file = /var/log/ecs/audit.log.*
log_group_name = /var/log/ecs/audit.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
3. Open the /etc/awslogs/awslogs.conf file with a text editor, and copy the example file above
into it.
4. Install the jq JSON query utility.
5. Query the Amazon ECS introspection API to find the cluster name and set it to an environment
variable.
6. Replace the {cluster} placeholders in the file with the value of the environment variable you set
in the previous step.
7. Query the Amazon ECS introspection API to find the container instance ID and set it to an
environment variable.
8. Replace the {container_instance_id} placeholders in the file with the value of the
environment variable you set in the previous step.
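Taken together, steps 4 through 8 can be performed with the following commands on the container instance. This is a sketch; it assumes the Amazon ECS container agent is running, so that the introspection API at localhost:51678 responds:

```shell
# Step 4: install the jq JSON query utility
sudo yum install -y jq

# Steps 5-6: get the cluster name from the introspection API and
# substitute it for the {cluster} placeholders
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster')
sudo sed -i -e "s/{cluster}/$cluster/g" /etc/awslogs/awslogs.conf

# Steps 7-8: get the container instance ID (the final segment of the ARN) and
# substitute it for the {container_instance_id} placeholders
container_instance_id=$(curl -s http://localhost:51678/v1/metadata \
  | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $2}')
sudo sed -i -e "s/{container_instance_id}/$container_instance_id/g" /etc/awslogs/awslogs.conf
```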
By default, the CloudWatch Logs agent sends data to the us-east-1 region. If you would like to send
your data to a different region, such as the region that your cluster is located in, you can set the region in
the /etc/awslogs/awscli.conf file.
1. Start the CloudWatch Logs agent with the sudo service awslogs start command.
Output:
Starting awslogs: [ OK ]
2. Use the chkconfig command (sudo chkconfig awslogs on) to ensure that the CloudWatch Logs agent starts at every system boot.
The example user data block below performs the following tasks:
• Installs the awslogs package, which contains the CloudWatch Logs agent
• Installs the jq JSON query utility
• Writes the configuration file for the CloudWatch Logs agent and configures the region to send data to
(the region that the container instance is located)
• Gets the cluster name and container instance ID after the Amazon ECS container agent starts and then
writes those values to the CloudWatch Logs agent configuration file log streams
• Starts the CloudWatch Logs agent
• Configures the CloudWatch Logs agent to start at every system boot
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Install the awslogs package and the jq JSON parser
yum install -y awslogs jq

# Write the CloudWatch Logs agent configuration file
cat > /etc/awslogs/awslogs.conf <<'EOF'
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/dmesg]
file = /var/log/dmesg
log_group_name = /var/log/dmesg
log_stream_name = {cluster}/{container_instance_id}
[/var/log/messages]
file = /var/log/messages
log_group_name = /var/log/messages
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %b %d %H:%M:%S
[/var/log/docker]
file = /var/log/docker
log_group_name = /var/log/docker
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%S.%f
[/var/log/ecs/ecs-init.log]
file = /var/log/ecs/ecs-init.log
log_group_name = /var/log/ecs/ecs-init.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
[/var/log/ecs/ecs-agent.log]
file = /var/log/ecs/ecs-agent.log.*
log_group_name = /var/log/ecs/ecs-agent.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
[/var/log/ecs/audit.log]
file = /var/log/ecs/audit.log.*
log_group_name = /var/log/ecs/audit.log
log_stream_name = {cluster}/{container_instance_id}
datetime_format = %Y-%m-%dT%H:%M:%SZ
EOF
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Set the region to send CloudWatch Logs data to (the region where the container instance is located)
region=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
sed -i -e "s/region = us-east-1/region = $region/g" /etc/awslogs/awscli.conf
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Configure and start CloudWatch Logs agent on Amazon ECS container instance"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/cloudwatch-logs-start.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the cluster and container instance ARN from instance metadata
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster')
container_instance_id=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $2}' )
# Replace the cluster name and container instance ID placeholders with the actual values
sed -i -e "s/{cluster}/$cluster/g" /etc/awslogs/awslogs.conf
sed -i -e "s/{container_instance_id}/$container_instance_id/g" /etc/awslogs/awslogs.conf
# Start the awslogs service and register it to start at every system boot
service awslogs start
chkconfig awslogs on
end script
--==BOUNDARY==--
If you have created the ECS-CloudWatchLogs policy and attached it to your ecsInstanceRole as
described in CloudWatch Logs IAM Policy (p. 53), you can add the above user data block to any
container instances that you launch manually, or to an Auto Scaling launch configuration. Container
instances launched with this user data begin sending their log data to CloudWatch Logs as soon as
they launch. For more information, see Launching an Amazon ECS Container Instance (p. 43).
When you set a container instance to DRAINING, Amazon ECS prevents new tasks from being scheduled
for placement on the container instance. If the resources are available, replacement service tasks are
started on other container instances in the cluster. Service tasks on the container instance that are in the
PENDING state are stopped immediately.
Service tasks on the container instance that are in the RUNNING state are stopped and replaced
according to the service's deployment configuration parameters, minimumHealthyPercent and
maximumPercent.
Any PENDING or RUNNING tasks that do not belong to a service are unaffected; you must wait for them
to finish or stop them manually.
A container instance has completed draining when there are no more RUNNING tasks (although the state
remains as DRAINING). You can verify this using the ListTasks operation with the containerInstance
parameter.
When you change the status of a container instance from DRAINING to ACTIVE, the Amazon ECS
scheduler can schedule tasks on the instance again.
Draining Instances
You can use the UpdateContainerInstancesState API action or the update-container-instances-state
command to change the status of a container instance to DRAINING.
The following procedure demonstrates how to set your instance to DRAINING using the AWS
Management Console.
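With the AWS CLI, draining an instance and then checking its remaining tasks might look like the following sketch; your_cluster_name and container_instance_id are placeholders for your own values:

```shell
# Set a container instance to DRAINING so that no new tasks are placed on it
aws ecs update-container-instances-state --cluster your_cluster_name \
  --container-instances container_instance_id --status DRAINING

# Verify that no RUNNING tasks remain on the instance
aws ecs list-tasks --cluster your_cluster_name \
  --container-instance container_instance_id
```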
You can use Run Command to perform tasks on your container instances remotely, such as
updating installed software across your entire fleet of container instances at once.
This topic covers basic installation of Run Command on the Amazon ECS-optimized AMI and a few simple
use cases, but it is by no means exhaustive. For more information about Run Command, see Manage
Amazon EC2 Instances Remotely in the Amazon EC2 User Guide for Linux Instances.
Topics
• Run Command IAM Policy (p. 62)
• Installing the SSM Agent on the Amazon ECS-optimized AMI (p. 62)
• Using Run Command (p. 63)
To manually install the SSM agent on existing Amazon ECS-optimized AMI container
instances
To install the SSM agent on new instance launches with Amazon EC2 user data
• Launch one or more container instances by following the procedure in Launching an Amazon ECS
Container Instance (p. 43), but in Step 8.g (p. 45), copy and paste the user data script below
into the User data field. You can also add the commands from this user data script to another
existing script that you may have to perform other tasks, such as setting the cluster name for the
instance to register into.
Note
The user data script below installs the jq JSON parser and uses that to determine the region
of the container instance. Then it downloads and installs the SSM agent.
#!/bin/bash
For more information about Run Command, see Manage Amazon EC2 Instances Remotely in the Amazon
EC2 User Guide for Linux Instances.
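The body of the user data script described in the note above was not reproduced in full; a sketch consistent with that note follows. The download URL pattern for the SSM agent RPM is an assumption and should be verified against the current SSM documentation:

```shell
#!/bin/bash
# Install the jq JSON parser and use it to determine the instance's region
yum install -y jq
region=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)

# Download and install the SSM agent RPM for that region (URL pattern is an assumption)
yum install -y https://amazon-ssm-$region.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm
```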
One of the most common use cases for Run Command on Amazon ECS container instances is to update
the instance software on your entire fleet of container instances at once.
$ yum update -y
12. (Optional) Choose the Output tab, and then choose View Output to see the container instance
output for the yum update command.
Note
Unless you configure a command to save the output to an Amazon S3 bucket, the command
output is truncated at 2500 characters.
To do this, you can configure your container instances to call the docker run command with the user
data script at launch, or in some init system such as Upstart or systemd. While this method works, it has
some disadvantages because Amazon ECS has no knowledge of the container and cannot monitor the
CPU, memory, ports, or any other resources used. To ensure that Amazon ECS can properly account for
all task resources, create a task definition for the container to run on your container instances. Then, use
Amazon ECS to place the task at launch time with Amazon EC2 user data.
The Amazon EC2 user data script in the following procedure uses the Amazon ECS introspection API to
identify the container instance. Then, it uses the AWS CLI and the start-task command to run a specified
task on itself during startup.
1. If you have not done so already, create a task definition with the container you want to run on your
container instance at launch by following the procedures in Creating a Task Definition (p. 102).
2. Modify your ecsInstanceRole IAM role to add permissions for the StartTask API operation. For
more information, see Amazon ECS Container Instance IAM Role (p. 238).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:StartTask"
      ],
      "Resource": "*"
    }
  ]
}
3. Launch one or more container instances by following the procedure in Launching an Amazon ECS
Container Instance (p. 43), but in Step 8.g (p. 45), copy and paste the MIME multi-part user data
script below into the User data field. Substitute your_cluster_name with the cluster for the
container instance to register into and my_task_def with the task definition to run on the
instance at launch.
Note
The MIME multi-part content below uses a shell script to set configuration values and install
packages. It also uses an Upstart job to start the task after the ecs service is running and
the introspection API is available.
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name

# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config

# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq

--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"

#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs

script
    exec 2>>/var/log/ecs/ecs-start-task.log
    set -x
    until curl -s http://localhost:51678/v1/metadata
    do
        sleep 1
    done

    # Grab the container instance ARN and AWS region from instance metadata
    instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
    cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
    region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')

    # Specify the task definition to run at launch
    task_definition=my_task_def

    # Run the AWS CLI start-task command to start your task on this container instance
    aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
4. Verify that your container instances launch into the correct cluster and that your tasks have started.
Each container instance you launched should have your task running on it, and the container
instance ARN should be in the Started By column.
If you do not see your tasks, you can log in to your container instances with SSH and check the /
var/log/ecs/ecs-start-task.log file for debugging information.
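The ARN parsing in the Upstart job above can be exercised on its own. The following sketch uses a made-up container instance ARN (real values come from the agent introspection API at http://localhost:51678/v1/metadata) to show how the awk field separators pull out the region and the container instance ID:

```shell
#!/bin/sh
# Sketch: parse a container instance ARN the way the Upstart job does.
# This ARN is a fabricated example for illustration only.
arn="arn:aws:ecs:us-west-2:123456789012:container-instance/4d3910c1-27c8-410c-b1df-f5d06fab4305"

# The region is the fourth colon-separated field of the ARN.
region=$(echo "$arn" | awk -F: '{print $4}')
echo "$region"       # us-west-2

# The container instance ID is the last slash-separated field.
instance_id=$(echo "$arn" | awk -F/ '{print $NF}')
echo "$instance_id"  # 4d3910c1-27c8-410c-b1df-f5d06fab4305
```

The same two pipelines work for any ECS resource ARN of this shape, which is why the Upstart job can derive everything it needs for start-task from a single metadata query.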
Following deregistration, the container instance is no longer able to accept new tasks. If you have tasks
running on the container instance when you deregister it, these tasks remain running until you terminate
the instance or the tasks stop through some other means. However, these tasks are orphaned (no longer
monitored or accounted for by Amazon ECS). If an orphaned task on your container instance is part of an
Amazon ECS service, then the service scheduler starts another copy of that task, on a different container
instance, if possible. Any containers in orphaned service tasks that are registered with a Classic Load
Balancer or an Application Load Balancer target group are deregistered. They begin connection draining
according to the settings on the load balancer or target group.
If you intend to use the container instance for some other purpose after deregistration, you should stop
all of the tasks running on the container instance before deregistration. This stops any orphaned tasks
from consuming resources.
Important
Because each container instance has unique state information, they should not be deregistered
from one cluster and re-registered into another. To relocate container instance resources, we
recommend that you terminate container instances from one cluster and launch new container
instances with the latest Amazon ECS-optimized AMI in the new cluster. For more information,
see Terminate Your Instance in the Amazon EC2 User Guide for Linux Instances and Launching an
Amazon ECS Container Instance (p. 43).
Deregistering a container instance removes the instance from a cluster, but it does not terminate the
EC2 instance. If you are finished using the instance, be sure to terminate it in the Amazon EC2 console to
stop billing. For more information, see Terminate Your Instance in the Amazon EC2 User Guide for Linux
Instances.
Note
If you terminate a running container instance with a connected Amazon ECS container agent,
the agent automatically deregisters the instance from your cluster. Stopped container instances
or instances with disconnected agents are not automatically deregistered when terminated.
The source code for the Amazon ECS container agent is available on GitHub. We encourage you to submit
pull requests for changes that you would like to have included. However, Amazon Web Services does not
currently support running modified copies of this software.
Note
The Amazon ECS container agent is installed on the AWS-managed infrastructure used for tasks
using the Fargate launch type. No additional configuration is needed, so this topic does not
apply if you are only using tasks with the Fargate launch type.
Topics
• Installing the Amazon ECS Container Agent (p. 69)
• Amazon ECS Container Agent Versions (p. 72)
• Updating the Amazon ECS Container Agent (p. 74)
• Amazon ECS Container Agent Configuration (p. 81)
• Automated Task and Image Cleanup (p. 88)
• Private Registry Authentication (p. 89)
• Amazon ECS Container Metadata (p. 92)
• Amazon ECS Container Agent Introspection (p. 95)
• HTTP Proxy Configuration (p. 97)
To install the Amazon ECS container agent on an Amazon Linux EC2 instance
1. Launch an Amazon Linux instance with an IAM role that allows access to Amazon ECS. For more
information, see Amazon ECS Container Instance IAM Role (p. 238).
2. Connect to your instance.
3. Install the ecs-init package. For more information about ecs-init, see the source code on
GitHub.
sudo yum install -y ecs-init
4. Start the Docker daemon.
sudo service docker start
Output:
Starting cgconfig service:  [  OK  ]
Starting docker:            [  OK  ]
5. Start the ecs upstart job.
sudo start ecs
Output:
ecs start/running, process 2804
6. (Optional) You can verify that the agent is running and see some information about your new
container instance with the agent introspection API. For more information, see the section called
“Amazon ECS Container Agent Introspection” (p. 95).
curl http://localhost:51678/v1/metadata
Output:
{
"Cluster": "default",
"ContainerInstanceArn": "<container_instance_ARN>",
"Version": "Amazon ECS Agent - v1.16.0 (1ca656c)"
}
To install the Amazon ECS container agent on a non-Amazon Linux EC2 instance
1. Launch an EC2 instance with an IAM role that allows access to Amazon ECS. For more information,
see Amazon ECS Container Instance IAM Role (p. 238).
2. Connect to your instance.
3. Install Docker on your instance. Amazon ECS requires a minimum Docker version of 1.5.0 (version
17.06.2-ce is recommended), and the default Docker versions in many system package managers,
such as yum or apt-get, do not meet this minimum requirement. For information about installing
the latest Docker version on your particular Linux distribution, see https://docs.docker.com/engine/
installation/.
Note
The Amazon Linux AMI always includes the recommended version of Docker for use with
Amazon ECS. You can install Docker on Amazon Linux with the sudo yum install docker -y
command.
4. Check your Docker version to verify that your system meets the minimum version requirement.
sudo docker --version
Output:
In this example, the Docker version is 1.4.1, which is below the minimum version of 1.5.0. This
instance needs to upgrade its Docker version before proceeding. For information about installing the
latest Docker version on your particular Linux distribution, go to https://docs.docker.com/engine/
installation/.
5. Run the following command on your container instance to allow the port proxy to route traffic
using loopback addresses.
sudo sysctl -w net.ipv4.conf.all.route_localnet=1
6. Run the following commands on your container instance to enable IAM roles for tasks. For more
information, see IAM Roles for Tasks (p. 251).
sudo iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 --dport 80 -j DNAT --to-destination 127.0.0.1:51679
sudo iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
7. Save the iptables rules so that they survive a reboot.
• For Debian/Ubuntu:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
• For CentOS/RHEL:
sudo service iptables save
8. Create the /etc/ecs directory and create the Amazon ECS container agent configuration file.
sudo mkdir -p /etc/ecs && sudo touch /etc/ecs/ecs.config
9. Edit the /etc/ecs/ecs.config file and add the following contents. If you do not want your
container instance to register with the default cluster, specify your cluster name as the value for
ECS_CLUSTER.
ECS_DATADIR=/data
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
ECS_LOGFILE=/log/ecs-agent.log
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
ECS_LOGLEVEL=info
ECS_CLUSTER=default
For more information about these and other agent runtime options, see Amazon ECS Container
Agent Configuration (p. 81).
Note
You can optionally store your agent environment variables in Amazon S3 (which can be
downloaded to your container instances at launch time using Amazon EC2 user data). This
is recommended for sensitive information such as authentication credentials for private
repositories. For more information, see Storing Container Instance Configuration in Amazon
S3 (p. 87) and Private Registry Authentication (p. 89).
10. Pull and run the latest Amazon ECS container agent on your container instance.
Note
You should use Docker restart policies or a process manager (such as upstart or systemd)
to treat the container agent as a service or a daemon and ensure that it is restarted after
exiting. For more information, see Automatically start containers and Restart policies in
the Docker documentation. The Amazon ECS-optimized AMI uses the ecs-init RPM
for this purpose, and you can view the source code for this RPM on GitHub. For example
systemd unit files for Ubuntu 16.04 and CentOS 7, see Example Container Instance User
Data Configuration Scripts (p. 49).
The following example agent run command is broken into separate lines to show each option. For
more information about these and other agent runtime options, see Amazon ECS Container Agent
Configuration (p. 81).
Important
Operating systems with SELinux enabled require the --privileged option in your docker
run command. In addition, for SELinux-enabled container instances, we recommend that
you add the :Z option to the /log and /data volume mounts. However, the host mounts
for these volumes must exist before you run the command or you receive a no such file
or directory error. Take the following action if you experience difficulty running the
Amazon ECS agent on an SELinux-enabled container instance:
• Append the :Z option to the /log and /data container volume mounts (for example, --
volume=/var/log/ecs/:/log:Z) to the docker run command below.
sudo docker run --name ecs-agent \
--detach=true \
--restart=on-failure:10 \
--volume=/var/run/docker.sock:/var/run/docker.sock \
--volume=/var/log/ecs/:/log \
--volume=/var/lib/ecs/data:/data \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
Note
If you receive an Error response from daemon: Cannot start container
message, you can delete the failed container with the sudo docker rm ecs-agent command
and try running the agent again.
Launching your container instances from the most recent Amazon ECS-optimized AMI ensures that you
receive the current container agent version. To launch a container instance with the latest Amazon ECS-
optimized AMI, see Launching an Amazon ECS Container Instance (p. 43).
To install the latest version of the Amazon ECS container agent on another operating system, see
Installing the Amazon ECS Container Agent (p. 69). The table in Amazon ECS-Optimized AMI
Container Agent Versions (p. 73) shows the Docker version that is tested on Amazon Linux for each
agent version.
To see which features and enhancements are included with each agent release, see https://github.com/
aws/amazon-ecs-agent/releases.
For more information about the Amazon ECS-optimized AMI, including AMI IDs for the latest version in
each region, see Amazon ECS-Optimized AMI (p. 32).
Note
Agent updates do not apply to Windows container instances. We recommend that you launch
new container instances to update the agent version in your Windows clusters.
Topics
• Checking Your Amazon ECS Container Agent Version (p. 75)
• Updating the Amazon ECS Container Agent on the Amazon ECS-Optimized AMI (p. 76)
• Manually Updating the Amazon ECS Container Agent (for Non-Amazon ECS-optimized
AMIs) (p. 79)
To check if your Amazon ECS container agent is running the latest version in the console
1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
2. Choose the cluster that hosts your container instances.
3. On the ECS Instances tab, view the Agent version column for your container instances.
If your agent version is 1.16.0, you are running the latest container agent. If your agent version is
below 1.16.0, you can update your container agent with the following procedures:
• If your container instance is running the Amazon ECS-optimized AMI, see Updating the Amazon
ECS Container Agent on the Amazon ECS-Optimized AMI (p. 76).
• If your container instance is not running the Amazon ECS-optimized AMI, see Manually Updating
the Amazon ECS Container Agent (for Non-Amazon ECS-optimized AMIs) (p. 79).
Important
To update the Amazon ECS agent version from versions before v1.0.0 on your Amazon
ECS-optimized AMI, we recommend that you terminate your current container instance
and launch a new instance with the most recent AMI version. Any container instances that
use a preview version should be retired and replaced with the most recent AMI. For more
information, see Launching an Amazon ECS Container Instance (p. 43).
You can also use the Amazon ECS container agent introspection API to check the agent version
from the container instance itself. For more information, see Amazon ECS Container Agent
Introspection (p. 95).
To check if your Amazon ECS container agent is running the latest version with the
introspection API
1. Log in to your container instance via SSH.
2. Query the introspection API.
curl -s http://localhost:51678/v1/metadata
Output:
{
"Cluster": "default",
"ContainerInstanceArn": "arn:aws:ecs:us-west-2:<aws_account_id>:container-
instance/4d3910c1-27c8-410c-b1df-f5d06fab4305",
"Version": "Amazon ECS Agent - v1.16.0 (1ca656c)"
}
Note
The introspection API added Version information in the version v1.0.0 of the Amazon
ECS container agent. If Version is not present when querying the introspection API, or
the introspection API is not present in your agent at all, then the version you are running is
v0.0.3 or earlier. You should update your version.
• Terminate your current container instances and launch the latest version of the Amazon ECS-optimized
AMI (either manually or by updating your Auto Scaling launch configuration with the latest AMI). This
provides a fresh container instance with the most current tested and validated versions of Amazon
Linux, Docker, ecs-init, and the Amazon ECS container agent. For more information, see Amazon
ECS-Optimized AMI (p. 32).
• Connect to the instance with SSH and update the ecs-init package (and its dependencies) to the
latest version. This operation provides the most current tested and validated versions of Docker and
ecs-init that are available in the Amazon Linux repositories and the latest version of the Amazon
ECS container agent. For more information, see To update the ecs-init package on the Amazon ECS-
optimized AMI (p. 77).
• Update the container agent with the UpdateContainerAgent API operation, either through the
console or with the AWS CLI or AWS SDKs. For more information, see Updating the Amazon ECS
Container Agent with the UpdateContainerAgent API Operation (p. 77).
Note
Agent updates do not apply to Windows container instances. We recommend that you launch
new container instances to update the agent version in your Windows clusters.
1. Log in to your container instance via SSH. For more information, see Connect to Your Container
Instance (p. 52).
2. Update the ecs-init package with the following command.
sudo yum update -y ecs-init
Note
The ecs-init package and the Amazon ECS container agent are updated immediately.
However, newer versions of Docker are not loaded until the Docker daemon is restarted.
Restart either by rebooting the instance, or by running sudo service docker restart to
restart Docker and then sudo start ecs to restart the container agent.
The update process begins when you request an agent update, either through the console or with the
AWS CLI or AWS SDKs. Amazon ECS checks your current agent version against the latest available agent
version, and if an update is possible, the update process progresses as shown in the flow chart below. If
an update is not available, for example, if the agent is already running the most recent version, then a
NoUpdateAvailableException is returned.
PENDING
An update is available, and the update process has started.
STAGING
The agent has begun downloading the agent update. If the agent cannot download the update, or
if the contents of the update are incorrect or corrupted, then the agent sends a notification of the
failure and the update transitions to the FAILED state.
STAGED
The agent download has completed and the agent contents have been verified.
UPDATING
The ecs-init service is restarted and it picks up the new agent version. If the agent is for some
reason unable to restart, the update transitions to the FAILED state; otherwise, the agent signals
Amazon ECS that the update is complete.
To update the Amazon ECS container agent on the Amazon ECS-optimized AMI in the console
1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
2. Choose the cluster that hosts the container instance.
3. On the ECS Instances tab, choose the container instance to update.
4. On the container instance detail page, choose Update agent.
Note
Agent updates do not apply to Windows container instances. We recommend that you launch
new container instances to update the agent version in your Windows clusters.
To update the Amazon ECS container agent on the Amazon ECS-optimized AMI with the AWS
CLI
Note
Agent updates with the UpdateContainerAgent API operation do not apply to Windows
container instances. We recommend that you launch new container instances to update the
agent version in your Windows clusters.
• Use the following command to update the Amazon ECS container agent on your container instance:
aws ecs update-container-agent --cluster cluster_name --container-instance container_instance_id
1. Log in to your container instance via SSH.
2. Check to see if your agent uses the ECS_DATADIR environment variable to save its state.
sudo docker inspect ecs-agent | grep ECS_DATADIR
Output:
"ECS_DATADIR=/data",
Important
If the previous command does not return the ECS_DATADIR environment variable, you
must stop any tasks running on this container instance before updating your agent. Newer
agents with the ECS_DATADIR environment variable save their state and you can update
them while tasks are running without issues.
3. Stop the Amazon ECS container agent.
sudo docker stop ecs-agent
4. Delete the current agent container.
sudo docker rm ecs-agent
5. Ensure that the /etc/ecs directory and the Amazon ECS container agent configuration file exist at
/etc/ecs/ecs.config.
sudo mkdir -p /etc/ecs && sudo touch /etc/ecs/ecs.config
6. Edit the /etc/ecs/ecs.config file and ensure that it contains at least the following variable
declarations. If you do not want your container instance to register with the default cluster, specify
your cluster name as the value for ECS_CLUSTER.
ECS_DATADIR=/data
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
ECS_LOGFILE=/log/ecs-agent.log
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
ECS_LOGLEVEL=info
ECS_CLUSTER=default
For more information about these and other agent runtime options, see Amazon ECS Container
Agent Configuration (p. 81).
Note
You can optionally store your agent environment variables in Amazon S3 (which can be
downloaded to your container instances at launch time using Amazon EC2 user data). This
is recommended for sensitive information such as authentication credentials for private
repositories. For more information, see Storing Container Instance Configuration in Amazon
S3 (p. 87) and Private Registry Authentication (p. 89).
7. Pull the latest Amazon ECS container agent image from Docker Hub.
sudo docker pull amazon/amazon-ecs-agent:latest
Output:
8. Run the latest Amazon ECS container agent on your container instance.
Note
You should use Docker restart policies or a process manager (such as upstart or systemd)
to treat the container agent as a service or a daemon and ensure that it is restarted after
exiting. For more information, see Automatically start containers and Restart policies in
the Docker documentation. The Amazon ECS-optimized AMI uses the ecs-init RPM
for this purpose, and you can view the source code for this RPM on GitHub. For example
systemd unit files for Ubuntu 16.04 and CentOS 7, see Example Container Instance User
Data Configuration Scripts (p. 49).
The following example agent run command is broken into separate lines to show each option. For
more information about these and other agent runtime options, see Amazon ECS Container Agent
Configuration (p. 81).
Important
Operating systems with SELinux enabled require the --privileged option in your docker
run command. In addition, for SELinux-enabled container instances, we recommend that
you add the :Z option to the /log and /data volume mounts. However, the host mounts
for these volumes must exist before you run the command or you receive a no such file
or directory error. Take the following action if you experience difficulty running the
Amazon ECS agent on an SELinux-enabled container instance:
• Append the :Z option to the /log and /data container volume mounts (for example, --
volume=/var/log/ecs/:/log:Z) to the docker run command below.
sudo docker run --name ecs-agent \
--detach=true \
--restart=on-failure:10 \
--volume=/var/run/docker.sock:/var/run/docker.sock \
--volume=/var/log/ecs/:/log \
--volume=/var/lib/ecs/data:/data \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
Note
If you receive an Error response from daemon: Cannot start container
message, you can delete the failed container with the sudo docker rm ecs-agent command
and try running the agent again.
If your container instance was launched with the Amazon ECS-optimized AMI, you can set these
environment variables in the /etc/ecs/ecs.config file and then restart the agent. You can also write
these configuration variables to your container instances with Amazon EC2 user data at launch time. For
more information, see Bootstrapping Container Instances with Amazon EC2 User Data (p. 46).
If you are manually starting the Amazon ECS container agent (for non-Amazon ECS-optimized AMIs), you
can use these environment variables in the docker run command that you use to start the agent with the
syntax --env=VARIABLE_NAME=VARIABLE_VALUE. For sensitive information, such as authentication
credentials for private repositories, you should store your agent environment variables in a file and pass
them all at one time with the --env-file path_to_env_file option.
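As a sketch (the env file path here is hypothetical), the --env-file approach amounts to writing plain KEY=VALUE lines to a file and handing that one file to docker run instead of passing many --env flags:

```shell
#!/bin/sh
# Hypothetical agent env file; docker run --env-file reads plain
# KEY=VALUE lines exactly like these.
env_file=/tmp/ecs-agent.env
cat > "$env_file" <<'EOF'
ECS_CLUSTER=default
ECS_LOGLEVEL=info
EOF

# The agent would then be started with something like:
#   sudo docker run --env-file "$env_file" ... amazon/amazon-ecs-agent:latest
# Here we just confirm the first value round-trips through the file.
head -n 1 "$env_file" | cut -d= -f2   # default
```

Keeping credentials in a file read by --env-file also keeps them out of the process list, which is why the guide recommends this form for sensitive values.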
Topics
• Available Parameters (p. 81)
• Storing Container Instance Configuration in Amazon S3 (p. 87)
Available Parameters
The following are the available environment keys:
ECS_CLUSTER
The cluster that this agent should check into. If this value is undefined, then the default cluster is
assumed. If the default cluster does not exist, the Amazon ECS container agent attempts to create
it. If a non-default cluster is specified and it does not exist, registration fails.
ECS_RESERVED_PORTS
An array of ports that should be marked as unavailable for scheduling on this container instance.
ECS_RESERVED_PORTS_UDP
Default Value: []
An array of UDP ports that should be marked as unavailable for scheduling on this container
instance.
ECS_ENGINE_AUTH_TYPE
Required for private registry authentication. This is the type of authentication data in
ECS_ENGINE_AUTH_DATA. For more information, see Authentication Formats (p. 89).
ECS_ENGINE_AUTH_DATA
Example Values:
• ECS_ENGINE_AUTH_TYPE=dockercfg: {"https://index.docker.io/v1/":
{"auth":"zq212MzEXAMPLE7o6T25Dk0i","email":"[email protected]"}}
• ECS_ENGINE_AUTH_TYPE=docker: {"https://index.docker.io/v1/":
{"username":"my_name","password":"my_password","email":"[email protected]"}}
AWS_DEFAULT_REGION
The region to be used in API requests as well as to infer the correct back end host.
AWS_ACCESS_KEY_ID
The access key used by the agent for all calls.
AWS_SECRET_ACCESS_KEY
The secret key used by the agent for all calls.
DOCKER_HOST
Used to create a connection to the Docker daemon; behaves similarly to the environment variable as
used by the Docker client.
ECS_LOGLEVEL
Example Values: crit | error | warn | info | debug
Default Value: info
The level of detail to log.
ECS_LOGFILE
The path to output full debugging information to. If blank, no logs are recorded. If this value is set,
logs at the debug level (regardless of ECS_LOGLEVEL) are written to that file.
ECS_CHECKPOINT
Default Value: If ECS_DATADIR is explicitly set to a non-empty value, then ECS_CHECKPOINT is set
to true; otherwise, it is set to false.
Whether to save the checkpoint state to the location specified with ECS_DATADIR.
ECS_DATADIR
The name of the persistent data directory on the container that is running the Amazon ECS
container agent. The directory is used to save information about the cluster and the agent state.
ECS_UPDATES_ENABLED
Whether to exit for ECS agent updates when they are requested.
ECS_UPDATE_DOWNLOAD_DIR
The filesystem location to place update tarballs within the container when they are downloaded.
ECS_DISABLE_METRICS
Whether to disable CloudWatch metrics for Amazon ECS. If this value is set to true, CloudWatch
metrics are not collected.
ECS_RESERVED_MEMORY
Example Values: 32
Default Value: 0
The amount of memory, in MiB, to reserve for processes that are not managed by ECS.
ECS_AVAILABLE_LOGGING_DRIVERS
The logging drivers available on the container instance. The Amazon ECS container agent running
on a container instance must register the logging drivers available on that instance with the
ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that
instance can use log configuration options for those drivers in tasks. For information about how to
use the awslogs log driver, see Using the awslogs Log Driver (p. 137). For more information about
the different log drivers available for your Docker version and how to configure them, see Configure
logging drivers in the Docker documentation.
ECS_DISABLE_PRIVILEGED
Whether launching privileged containers is disabled on the container instance. If this value is set to
true, privileged containers are not permitted.
ECS_SELINUX_CAPABLE
Whether SELinux is available on the container instance.
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
Example Values: 1h (Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".)
Default Value: 3h
Time duration to wait from when a task is stopped until the Docker container is removed. As this
removes the Docker container data, be aware that if this value is set too low, you may not be able to
inspect your stopped containers or view the logs before they are removed. The minimum duration is
1m; any value shorter than 1 minute is ignored.
ECS_CONTAINER_STOP_TIMEOUT
Example Values: 10m (Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", and "h".)
Time duration to wait from when a task is stopped before its containers are forcefully killed if they
do not exit normally on their own.
HTTP_PROXY
The hostname (or IP address) and port number of an HTTP proxy to use for the ECS agent to connect
to the internet (for example, if your container instances do not have external network access through
an Amazon VPC internet gateway or NAT gateway or instance). If this variable is set, you must also
set the NO_PROXY variable to filter EC2 instance metadata and Docker daemon traffic from the
proxy. For more information, see HTTP Proxy Configuration (p. 97).
NO_PROXY
Example Values:
• Linux: 169.254.169.254,169.254.170.2,/var/run/docker.sock
• Windows: 169.254.169.254,169.254.170.2,\\.\pipe\docker_engine
The HTTP traffic that should not be forwarded to the specified HTTP_PROXY. You must specify
169.254.169.254,/var/run/docker.sock to filter EC2 instance metadata and Docker daemon
traffic from the proxy. For more information, see HTTP Proxy Configuration (p. 97).
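For illustration, a Linux container instance behind a hypothetical proxy at 10.0.0.131:3128 might carry lines like the following in its agent configuration (the proxy address is a placeholder, not a value from this guide):

```shell
# Placeholder proxy endpoint; substitute your own proxy host and port.
HTTP_PROXY=10.0.0.131:3128
# Exempt instance metadata, the task credentials endpoint, and the
# Docker socket from the proxy, as described above.
NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock
```

Both variables must be set together; omitting NO_PROXY sends instance metadata and Docker daemon traffic through the proxy and breaks the agent.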
ECS_ENABLE_TASK_IAM_ROLE
Whether IAM roles for tasks should be enabled on the container instance for task containers with the
bridge or default network modes. For more information, see IAM Roles for Tasks (p. 251).
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST
Whether IAM roles for tasks should be enabled on the container instance for task containers with the
host network mode. This variable is only supported on agent versions 1.12.0 and later. For more
information, see IAM Roles for Tasks (p. 251).
ECS_DISABLE_IMAGE_CLEANUP
Whether to disable automated image cleanup for the Amazon ECS agent. For more information, see
Automated Task and Image Cleanup (p. 88).
ECS_IMAGE_CLEANUP_INTERVAL
The time interval between automated image cleanup cycles. If set to less than 10 minutes, the value
is ignored.
ECS_IMAGE_MINIMUM_CLEANUP_AGE
Default Value: 1h
The minimum time interval between when an image is pulled and when it can be considered for
automated image cleanup.
ECS_NUM_IMAGES_DELETE_PER_CYCLE
Example Values: 5
Default Value: 5
The maximum number of images to delete in a single automated image cleanup cycle. If set to less
than 1, the value is ignored.
ECS_INSTANCE_ATTRIBUTES
A list of custom attributes, in JSON form, to apply to your container instances. Using this attribute at
instance registration adds the custom attributes, allowing you to skip the manual method of adding
custom attributes via the AWS Management Console.
Note
Attributes added will not apply to container instances that are already registered.
To add custom attributes to already registered container instances, see Adding an
Attribute (p. 153).
For information about custom attributes to use, see Attributes (p. 152).
An invalid JSON value for this variable causes the agent to exit with a code of 5. A message appears
in the agent logs. If the JSON value is valid but there is an issue detected when validating the
attribute (for example if the value is too long or contains invalid characters), then the container
instance registration happens but the agent exits with a code of 5 and a message is written to the
agent logs. For information about how to locate the agent logs, see Amazon ECS Container Agent
Log (p. 367).
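For illustration, a single custom attribute set through this variable might look like the following line in ecs.config (the attribute name and value here are placeholders, not ones this guide defines):

```
ECS_INSTANCE_ATTRIBUTES={"envtype":"prod"}
```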
ECS_ENABLE_CONTAINER_METADATA
When true, the agent creates a file describing the container's metadata. The file can be located and
consumed by using the container environment variable $ECS_CONTAINER_METADATA_FILE.
ECS_HOST_DATA_DIR
The source directory on the host from which ECS_DATADIR is mounted. We use this to determine
the source mount path for container metadata files in the case the ECS agent is running as a
container. We do not use this value in Windows because the ECS agent does not run as a container.
Storing configuration information in a private bucket in Amazon S3 and granting read-only access to
your container instance IAM role is a secure and convenient way to allow container instance configuration
at launch time. You can store a copy of your ecs.config file in a private bucket, and then use
Amazon EC2 user data to install the AWS CLI and copy your configuration information to /etc/ecs/
ecs.config when the instance launches.
1. Create an ecs.config file with valid environment variables and values from Amazon ECS Container
Agent Configuration (p. 81) using the following format. This example configures private registry
authentication. For more information, see Private Registry Authentication (p. 89).
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":
{"auth":"zq212MzEXAMPLE7o6T25Dk0i","email":"[email protected]"}}
2. To store your configuration file, create a private bucket in Amazon S3. For more information, see
Create a Bucket in the Amazon Simple Storage Service Getting Started Guide.
3. Upload the ecs.config file to your Amazon S3 bucket. For more information, see Add an Object to
a Bucket in the Amazon Simple Storage Service Getting Started Guide.
1. Complete the above procedures in this section to allow read-only Amazon S3 access to your
container instances and store an ecs.config file in a private Amazon S3 bucket.
2. Launch new container instances by following the steps in Launching an Amazon ECS Container
Instance (p. 43). In Step 8.g (p. 45), use the following example script that installs the AWS CLI and
copies your configuration file to /etc/ecs/ecs.config.
#!/bin/bash
yum install -y aws-cli
aws s3 cp s3://your_bucket_name/ecs.config /etc/ecs/ecs.config
Likewise, containers that belong to stopped tasks can also consume container instance storage with log
information, data volumes, and other artifacts. These artifacts are useful for debugging containers that
have stopped unexpectedly, but most of this storage can be safely freed up after a period of time.
By default, the Amazon ECS container agent automatically cleans up stopped tasks and Docker images
that are not being used by any tasks on your container instances.
Note
The automated image cleanup feature requires at least version 1.13.0 of the Amazon ECS
container agent. To update your agent to the latest version, see Updating the Amazon ECS
Container Agent (p. 74).
Tunable Parameters
The following agent configuration variables are available to tune your automated task and image
cleanup experience. For more information about how to set these variables on your container instances,
see Amazon ECS Container Agent Configuration (p. 81).
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
This variable specifies the time to wait before removing any containers that belong to stopped tasks.
The image cleanup process cannot delete an image as long as a container still references it. After an
image is no longer referenced by any containers (either stopped or running), it becomes a candidate
for cleanup. By default, this parameter is set to 3 hours, but you can reduce this period to as low as
1 minute if your application requires it.
ECS_DISABLE_IMAGE_CLEANUP
If you set this variable to true, then automated image cleanup is disabled on your container
instance and no images are automatically removed.
ECS_IMAGE_CLEANUP_INTERVAL
This variable specifies how frequently the automated image cleanup process should check for
images to delete. The default is every 30 minutes but you can reduce this period to as low as 10
minutes to remove images more frequently.
ECS_IMAGE_MINIMUM_CLEANUP_AGE
This variable specifies the minimum amount of time between when an image was pulled and when it
may become a candidate for removal. This is used to prevent cleaning up images that have just been
pulled. The default is 1 hour.
ECS_NUM_IMAGES_DELETE_PER_CYCLE
This variable specifies how many images may be removed during a single cleanup cycle. The default
is 5 and the minimum is 1.
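For example, the following /etc/ecs/ecs.config lines tune all four parameters. The values shown are illustrative choices, not recommendations:

```shell
# Illustrative agent cleanup settings for /etc/ecs/ecs.config:
# remove stopped-task containers after 1 hour instead of the 3-hour default,
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=1h
# check for unused images every 15 minutes instead of every 30,
ECS_IMAGE_CLEANUP_INTERVAL=15m
# only consider images pulled more than 30 minutes ago,
ECS_IMAGE_MINIMUM_CLEANUP_AGE=30m
# and delete at most 5 images per cleanup cycle (the default).
ECS_NUM_IMAGES_DELETE_PER_CYCLE=5
```

Durations accept the same unit-suffixed form (for example, 1m, 15m, 3h) as the defaults quoted above.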
Cleanup Workflow
When the Amazon ECS container agent is running and automated image cleanup is not disabled, the
agent checks for Docker images that are not referenced by running or stopped containers at a frequency
determined by the ECS_IMAGE_CLEANUP_INTERVAL variable. If unused images are found and they
are older than the minimum cleanup time specified by the ECS_IMAGE_MINIMUM_CLEANUP_AGE
variable, the agent removes up to the maximum number of images that are specified with the
ECS_NUM_IMAGES_DELETE_PER_CYCLE variable. The least-recently referenced images are deleted first.
After the images are removed, the agent waits until the next interval and repeats the process again.
The agent looks for two environment variables when it launches: ECS_ENGINE_AUTH_TYPE, which
specifies the type of authentication data that is being sent, and ECS_ENGINE_AUTH_DATA, which
contains the actual authentication credentials.
The Amazon ECS-optimized AMI scans the /etc/ecs/ecs.config file for these variables when the
container instance launches, and each time the service is started (with the sudo start ecs command).
AMIs that are not Amazon ECS-optimized should store these environment variables in a file and pass
them with the --env-file path_to_env_file option to the docker run command that starts the
container agent.
Important
We do not recommend that you inject these authentication environment variables at instance
launch time with Amazon EC2 user data or pass them with the --env option to the docker run
command. These methods are not appropriate for sensitive data like authentication credentials.
To safely add authentication credentials to your container instances, see Storing Container
Instance Configuration in Amazon S3 (p. 87).
Authentication Formats
There are two available formats for private registry authentication, dockercfg and docker.
The dockercfg format uses the authentication information stored in the configuration file that is
created when you run the docker login command. You can create this file by running docker login on
your local system and entering your registry user name, password, and email address. You can also log in
to a container instance and run the command there. Depending on your Docker version, this file is saved
as either ~/.dockercfg or ~/.docker/config.json.
cat ~/.docker/config.json
Output:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "zq212MzEXAMPLE7o6T25Dk0i"
    }
  }
}
Important
Newer versions of Docker create a configuration file as shown above with an outer auths object.
The Amazon ECS agent only supports dockercfg authentication data that is in the below
format, without the auths object. If you have the jq utility installed, you can extract this data
with the following command: cat ~/.docker/config.json | jq .auths
Output:
{
  "https://index.docker.io/v1/": {
    "auth": "zq212MzEXAMPLE7o6T25Dk0i",
    "email": "email@example.com"
  }
}
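Putting this together, you can derive the agent-compatible auth data from an existing config.json with a short script. This is an illustrative sketch, not part of the agent: it assumes the jq utility is installed, and it uses a sample file in place of ~/.docker/config.json.

```shell
# Write a sample config.json like the one newer Docker versions create
# (a stand-in for ~/.docker/config.json).
cat > /tmp/config.json <<'EOF'
{"auths":{"https://index.docker.io/v1/":{"auth":"zq212MzEXAMPLE7o6T25Dk0i"}}}
EOF

# Strip the outer "auths" object so the value matches the dockercfg format
# that the agent expects, then print the two ecs.config lines to add.
AUTH_DATA=$(jq -c .auths /tmp/config.json)
echo "ECS_ENGINE_AUTH_TYPE=dockercfg"
echo "ECS_ENGINE_AUTH_DATA=${AUTH_DATA}"
```

You could append the two echoed lines directly to /etc/ecs/ecs.config on the container instance.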
In the above example, the following environment variables should be added to the environment variable
file (/etc/ecs/ecs.config for the Amazon ECS-optimized AMI) that the Amazon ECS container
agent loads at run time. If you are not using the Amazon ECS-optimized AMI and you are starting
the agent manually with docker run, specify the environment variable file with the --env-file
path_to_env_file option when you start the agent.
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"zq212MzEXAMPLE7o6T25Dk0i","email":"email@example.com"}}
You can configure multiple private registries with the following syntax.
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"repo.example-01.com":{"auth":"zq212MzEXAMPLE7o6T25Dk0i","email":"email_1@example.com"},"repo.example-02.com":{"auth":"fQ172MzEXAMPLEoF7225DU0j","email":"email_2@example.com"}}
The docker format uses a JSON representation of the registry server that the agent should authenticate
with, as well as the authentication parameters required by that registry (such as user name, password,
and the email address for that account). For a Docker Hub account, the JSON representation looks like
this:
{
  "https://index.docker.io/v1/": {
    "username": "my_name",
    "password": "my_password",
    "email": "email@example.com"
  }
}
In this example, the following environment variables should be added to the environment variable file
(/etc/ecs/ecs.config for the Amazon ECS-optimized AMI) that the Amazon ECS container agent loads
at run time. If you are not using the Amazon ECS-optimized AMI and you are starting the agent manually
with docker run, specify the environment variable file with the --env-file path_to_env_file
option when you start the agent.
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"email@example.com"}}
You can configure multiple private registries with the following syntax.
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"repo.example-01.com":{"username":"my_name","password":"my_password","email":"email_1@example.com"},"repo.example-02.com":{"username":"another_name","password":"another_password","email":"email_2@example.com"}}
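Before restarting the agent, you may want to confirm that the ECS_ENGINE_AUTH_DATA value parses as JSON. The check below is an illustrative sketch, not part of the agent; it assumes python3 is available on the instance.

```shell
# Sample auth data (placeholder credentials) to validate before writing it
# into /etc/ecs/ecs.config.
ECS_ENGINE_AUTH_DATA='{"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"email@example.com"}}'

# python3 -m json.tool exits nonzero on malformed JSON, so it doubles as a
# simple validator.
if printf '%s' "$ECS_ENGINE_AUTH_DATA" | python3 -m json.tool > /dev/null 2>&1; then
  AUTH_DATA_STATUS=valid
else
  AUTH_DATA_STATUS=invalid
fi
echo "$AUTH_DATA_STATUS"
```

A stray quote or unbalanced brace in the value would print invalid instead.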
sudo vi /etc/ecs/ecs.config
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"email@example.com"}}
3. Check to see if your agent uses the ECS_DATADIR environment variable to save its state:
docker inspect ecs-agent | grep ECS_DATADIR
Output:
"ECS_DATADIR=/data",
Important
If the previous command does not return the ECS_DATADIR environment variable, you
must stop any tasks running on this container instance before stopping the agent. Newer
agents with the ECS_DATADIR environment variable save their state and you can stop and
start them while tasks are running without issues. For more information, see Updating the
Amazon ECS Container Agent (p. 74).
4. Stop the ecs service:
sudo stop ecs
Output:
ecs stop/waiting
6. (Optional) You can verify that the agent is running and see some information about your new
container instance by querying the agent introspection API operation. For more information, see the
section called “Amazon ECS Container Agent Introspection” (p. 95).
curl http://localhost:51678/v1/metadata
Output:
{
  "Cluster": "default",
  "ContainerInstanceArn": "<container_instance_ARN>",
  "Version": "Amazon ECS Agent - v1.16.0 (1ca656c)"
}
cat $ECS_CONTAINER_METADATA_FILE
The container metadata file is cleaned up on the host instance when the container is cleaned up. You
can adjust when this happens with the ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION container agent
variable. For more information, see Automated Task and Image Cleanup (p. 88).
Topics
• Enabling Container Metadata (p. 92)
• Container Metadata File Locations (p. 93)
• Container Metadata File Format (p. 93)
To enable container metadata, set the ECS_ENABLE_CONTAINER_METADATA variable to true in
the /etc/ecs/ecs.config configuration file and restart the agent. You can also set it as a
Docker environment variable at run time when the agent container is started. For more information, see
Amazon ECS Container Agent Configuration (p. 81).
Note
The minimum Amazon ECS container agent version to support this feature is 1.15.0.
If ECS_ENABLE_CONTAINER_METADATA is set to true when the agent starts, metadata files are
created for any containers that are already running and for any future containers started by Amazon ECS.
For easy access, the container metadata file location is set in the
ECS_CONTAINER_METADATA_FILE environment variable inside the container. You can read the file
contents from inside the container with the following command:
cat $ECS_CONTAINER_METADATA_FILE
ContainerInstanceARN
The full Amazon Resource Name (ARN) of the host container instance.
TaskARN
The full Amazon Resource Name (ARN) of the task that the container belongs to.
ContainerID
The Docker container ID (and not the Amazon ECS container ID) for the container.
ContainerName
The container name from the Amazon ECS task definition for the container.
DockerContainerName
The container name that the Docker daemon uses for the container (for example, the name that
shows up in docker ps command output).
ImageID
The SHA digest for the Docker image used to start the container.
ImageName
The image name and tag for the Docker image used to start the container.
PortMappings
Any port mappings associated with the container.
ContainerPort
The port on the container that is exposed.
HostPort
The port on the host container instance that is exposed.
BindIp
The bind IP address that is assigned to the container by Docker. This IP address is only applied
with the bridge network mode, and it is only accessible from the container instance.
Protocol
The network protocol used for the port mapping.
Networks
The network mode and IP address for the container.
NetworkMode
The network mode for the task to which the container belongs.
IPv4Addresses
The IP addresses that are associated with the container.
MetadataFileStatus
The status of the metadata file. When the status is READY, the metadata file is current and complete.
If the file is not ready yet (for example, the moment the task is started), a truncated version of the
file format is available. To avoid a likely race condition where the container has started, but the
metadata has not yet been written, you can parse the metadata file and wait for this parameter to
be set to READY before depending on the metadata. This is usually available in less than 1 second
from when the container starts.
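The wait-for-READY step can be sketched in a few lines of shell. This is an illustrative sketch only: the stand-in file below takes the place of the real metadata file inside a container, and the 10-try, 1-second polling loop is an arbitrary choice.

```shell
# Stand-in for the path the agent exports in ECS_CONTAINER_METADATA_FILE;
# inside a real container you would use the existing variable instead.
ECS_CONTAINER_METADATA_FILE=/tmp/ecs-metadata.json
printf '{"ContainerName": "metadata", "MetadataFileStatus": "READY"}\n' > "$ECS_CONTAINER_METADATA_FILE"

# Poll until the file reports READY, giving up after 10 tries.
tries=0
until grep -q '"MetadataFileStatus": *"READY"' "$ECS_CONTAINER_METADATA_FILE"; do
  tries=$((tries + 1))
  if [ "$tries" -ge 10 ]; then
    echo "metadata not ready" >&2
    exit 1
  fi
  sleep 1
done
echo "metadata file is ready"
```

Only after the loop exits should a startup script read fields such as ContainerID or PortMappings from the file.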
The following example shows a container metadata file in the READY status.
{
  "ContainerInstanceARN": "arn:aws:ecs:us-west-2:012345678910:container-instance/1f73d099-b914-411c-a9ff-81633b7741dd",
  "TaskARN": "arn:aws:ecs:us-west-2:012345678910:task/2b88376d-aba3-4950-9ddf-bcb0f388a40c",
  "ContainerID": "98e44444008169587b826b4cd76c6732e5899747e753af1e19a35db64f9e9c32",
  "ContainerName": "metadata",
  "DockerContainerName": "/ecs-metadata-7-metadata-f0edfbd6d09fdef20800",
  "ImageID": "sha256:c24f66af34b4d76558f7743109e2476b6325fcf6cc167c6e1e07cd121a22b341",
  "ImageName": "httpd:2.4",
  "PortMappings": [
    {
      "ContainerPort": 80,
      "HostPort": 80,
      "BindIp": "",
      "Protocol": "tcp"
    }
  ],
  "Networks": [
    {
      "NetworkMode": "bridge",
      "IPv4Addresses": [
        "172.17.0.2"
      ]
    }
  ],
  "MetadataFileStatus": "READY"
}
Example Incomplete Amazon ECS container metadata file (not yet READY)
The following example shows a container metadata file that has not yet reached the READY status.
The information in the file is limited to a few parameters that are known from the task definition. The
container metadata file should be ready within 1 second after the container starts.
{
  "ContainerInstanceARN": "arn:aws:ecs:us-west-2:012345678910:container-instance/1f73d099-b914-411c-a9ff-81633b7741dd",
  "TaskARN": "arn:aws:ecs:us-west-2:012345678910:task/d90675f8-1a98-444b-805b-3d9cabb6fcd4",
  "ContainerName": "metadata"
}
To view container instance metadata, log in to your container instance via SSH and run the following
command. Metadata includes the container instance ID, the Amazon ECS cluster in which the container
instance is registered, and the Amazon ECS container agent version information.
curl http://localhost:51678/v1/metadata
Output:
{
  "Cluster": "default",
  "ContainerInstanceArn": "<container_instance_ARN>",
  "Version": "Amazon ECS Agent - v1.16.0 (1ca656c)"
}
To view information about all of the tasks that are running on a container instance, log in to your
container instance via SSH and run the following command:
curl http://localhost:51678/v1/tasks
Output:
{
  "Tasks": [
    {
      "Arn": "arn:aws:ecs:us-east-1:<aws_account_id>:task/example5-58ff-46c9-ae05-543f8example",
      "DesiredStatus": "RUNNING",
      "KnownStatus": "RUNNING",
      "Family": "hello_world",
      "Version": "8",
      "Containers": [
        {
          "DockerId": "9581a69a761a557fbfce1d0f6745e4af5b9dbfb86b6b2c5c4df156f1a5932ff1",
          "DockerName": "ecs-hello_world-8-mysql-fcae8ac8f9f1d89d8301",
          "Name": "mysql"
        },
        {
          "DockerId": "bf25c5c5b2d4dba68846c7236e75b6915e1e778d31611e3c6a06831e39814a15",
          "DockerName": "ecs-hello_world-8-wordpress-e8bfddf9b488dff36c00",
          "Name": "wordpress"
        }
      ]
    }
  ]
}
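If you need pieces of this response in a script, you can parse a saved copy of it. The sketch below is illustrative only: the sample JSON stands in for the live curl output, and python3 is assumed to be present on the instance.

```shell
# Sample standing in for: curl http://localhost:51678/v1/tasks > /tmp/tasks.json
cat > /tmp/tasks.json <<'EOF'
{"Tasks":[{"Family":"hello_world","Containers":[{"Name":"mysql"},{"Name":"wordpress"}]}]}
EOF

# Collect the container names across all tasks into a comma-separated list.
NAMES=$(python3 -c "import json; d=json.load(open('/tmp/tasks.json')); print(','.join(c['Name'] for t in d['Tasks'] for c in t['Containers']))")
echo "$NAMES"
```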
You can view information for a particular task that is running on a container instance. To specify a
specific task or container, append one of the following to the request: ?taskarn=task_arn to select a
task by its ARN, or ?dockerid=docker_id to select a task by one of its containers' Docker IDs.
To get task information with a container's Docker ID, log in to your container instance via SSH and run
the following command.
Note
Amazon ECS container agents before version 1.14.2 require full Docker container IDs for the
introspection API, not the short version that is shown with docker ps. You can get the full
Docker ID for a container by running the docker ps --no-trunc command on the container
instance.
curl http://localhost:51678/v1/tasks?dockerid=79c796ed2a7f864f485c76f83f3165488097279d296a7c05bd5201a1c69b2920
Output:
{
  "Arn": "arn:aws:ecs:us-east-1:<aws_account_id>:task/e01d58a8-151b-40e8-bc01-22647b9ecfec",
  "Containers": [
    {
      "DockerId": "79c796ed2a7f864f485c76f83f3165488097279d296a7c05bd5201a1c69b2920",
      "DockerName": "ecs-nginx-efs-2-nginx-9ac0808dd0afa495f001",
      "Name": "nginx"
    }
  ],
  "DesiredStatus": "RUNNING",
  "Family": "nginx-efs",
  "KnownStatus": "RUNNING",
  "Version": "2"
}
/etc/ecs/ecs.config
HTTP_PROXY=10.0.0.131:3128
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for
the Amazon ECS agent to connect to the internet (for example, if your container instances do not
have external network access through an Amazon VPC internet gateway, NAT gateway, or instance).
NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock
Set this value to 169.254.169.254,169.254.170.2,/var/run/docker.sock to filter EC2
instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy.
/etc/init/ecs.override
env HTTP_PROXY=10.0.0.131:3128
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for
ecs-init to connect to the internet (for example, if your container instances do not have
external network access through an Amazon VPC internet gateway, NAT gateway, or instance).
env NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock
Set this value to 169.254.169.254,169.254.170.2,/var/run/docker.sock to filter EC2
instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy.
/etc/sysconfig/docker
export HTTP_PROXY=10.0.0.131:3128
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for the
Docker daemon to connect to the internet (for example, if your container instances do not have
external network access through an Amazon VPC internet gateway, NAT gateway, or instance).
export NO_PROXY=169.254.169.254
Set this value to 169.254.169.254 to filter EC2 instance metadata from the proxy.
Setting these environment variables in the above files only affects the Amazon ECS container agent,
ecs-init, and the Docker daemon. They do not configure any other services (such as yum) to use the
proxy.
The example user data cloud-boothook script below configures the Amazon ECS container agent,
ecs-init, the Docker daemon, and yum to use an HTTP proxy that you specify. You can also specify a
cluster into which the container instance registers itself.
To use this script when you launch a container instance, follow the steps in Launching an Amazon ECS
Container Instance (p. 43), and in Step 8.g (p. 45). Then, copy and paste the cloud-boothook script
below into the User data field (be sure to substitute the example values with your own proxy and
cluster information).
#cloud-boothook
# Configure Yum, the Docker daemon, and the ECS agent to use an HTTP proxy
# Specify proxy host, port number, and ECS cluster name to use
PROXY_HOST=10.0.0.131
PROXY_PORT=3128
CLUSTER_NAME=proxy-test
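The variable definitions above can then be rendered into the configuration files discussed in this section. The sketch below is an assumed illustration, not the guide's complete boothook: it writes to a demonstration path (/tmp/ecs.config) so it is safe to run, where a real boothook would target /etc/ecs/ecs.config.

```shell
# Same example values as the boothook above.
PROXY_HOST=10.0.0.131
PROXY_PORT=3128
CLUSTER_NAME=proxy-test

# Demonstration path; a real boothook would use /etc/ecs/ecs.config.
ECS_CONFIG=/tmp/ecs.config

# Render the cluster name and proxy settings into an ecs.config-style file.
cat > "$ECS_CONFIG" <<EOF
ECS_CLUSTER=${CLUSTER_NAME}
HTTP_PROXY=${PROXY_HOST}:${PROXY_PORT}
NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock
EOF
```

The same pattern extends to /etc/init/ecs.override and /etc/sysconfig/docker with the env and export variable forms shown earlier.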
You can define multiple containers in a task definition. The parameters that you use depend on the
launch type you choose for the task. Not all parameters are valid. For more information about the
parameters available and which launch types they are valid for in a task definition, see Task Definition
Parameters (p. 107).
Your entire application stack does not need to exist on a single task definition, and in most cases it
should not. Your application can span multiple task definitions by combining related containers into
their own task definitions, each representing a single component. For more information, see Application
Architecture (p. 100).
Topics
• Application Architecture (p. 100)
• Creating a Task Definition (p. 102)
• Task Definition Parameters (p. 107)
• Using Data Volumes in Tasks (p. 126)
• Task Networking with the awsvpc Network Mode (p. 131)
• Amazon ECS Launch Types (p. 132)
• Using the awslogs Log Driver (p. 137)
• Example Task Definitions (p. 143)
• Updating a Task Definition (p. 145)
• Deregistering Task Definitions (p. 146)
Application Architecture
How you architect your application on Amazon ECS depends on several factors, with the launch type you
use being a key differentiator. The following guidance, broken down by launch type, can help you choose
an approach.
You should put multiple containers in the same task definition if:
• Containers share a common lifecycle (that is, they should be launched and terminated together)
• Containers are required to run on the same underlying host (that is, one container references the
other on a localhost port)
• You want your containers to share resources
• Your containers share volumes
Otherwise, you should define your containers in separate task definitions so that you can scale,
provision, and deprovision them separately.
In your development environment, you probably run all three containers together on your Docker host.
You might be tempted to use the same approach for your production environment, but this approach has
several drawbacks:
• Changes to one component can impact all three components, which may be a larger scope for the
change than anticipated
• Each component is more difficult to scale because you have to scale every container proportionally
• Task definitions can only have 10 container definitions and your application stack might require more,
either now or in the future
• Every container in a task definition must land on the same container instance, which may limit your
instance choices to the largest sizes
Instead, you should create task definitions that group the containers that are used for a common
purpose, and separate the different components into multiple task definitions. In this example, three
task definitions each specify one container. The example cluster below has three container instances
registered with three front-end service containers, two backend service containers, and one data store
service container.
You can group related containers in a task definition, such as linked containers that must be run
together. For example, you could add a log streaming container to your front-end service and include
that in the same task definition.
After you have your task definitions, you can create services from them to maintain the availability of
your desired tasks. For more information, see Creating a Service (p. 188). In your services, you can
associate containers with Elastic Load Balancing load balancers. For more information, see Service Load
Balancing (p. 165). When your application requirements change, you can update your services to scale
the number of desired tasks up or down, or to deploy newer versions of the containers in your tasks. For
more information, see Updating a Service (p. 194).
You can define multiple containers and data volumes in a task definition. For more information about the
parameters available in a task definition, see Task Definition Parameters (p. 107).
a. On the Configure task and container definitions page, scroll to the bottom of the page and
choose Configure via JSON.
b. Paste your task definition JSON into the text area and choose Save.
c. Verify your information and choose Create.
6. If you chose Fargate, complete the following steps. If you chose EC2, skip to the next section.
1. For Task Definition Name, type a name for your task definition. Up to 255 letters (uppercase and
lowercase), numbers, hyphens, and underscores are allowed.
2. (Optional) For Task Role, choose an IAM role that provides permissions for containers in your task to
make calls to AWS APIs on your behalf. For more information, see IAM Roles for Tasks (p. 251).
Note
Only roles that have the Amazon EC2 Container Service Task Role trust relationship are
shown here. For help creating an IAM role for your tasks, see Creating an IAM Role and
Policy for your Tasks (p. 254).
3. For Task execution IAM role, either select your task execution role or select Create new role so the
console can create one for you.
4. For Task size, choose a value for Task memory (GB) and Task CPU (vCPU).
5. For each container in your task definition, choose Add container and complete the container details.
6. (Optional) To define data volumes for your task, choose Add volume and complete the following step.
• For Name, type a name for your volume. Up to 255 letters (uppercase and lowercase), numbers,
hyphens, and underscores are allowed.
7. Choose Create.
1. For Task Definition Name, type a name for your task definition. Up to 255 letters (uppercase and
lowercase), numbers, hyphens, and underscores are allowed.
2. (Optional) For Task Role, choose an IAM role that provides permissions for containers in your task to
make calls to AWS APIs on your behalf. For more information, see IAM Roles for Tasks (p. 251).
Note
Only roles that have the Amazon EC2 Container Service Task Role trust relationship are
shown here. For more information about creating an IAM role for your tasks, see Creating an
IAM Role and Policy for your Tasks (p. 254).
3. (Optional) For Network Mode, choose the Docker network mode to use for the containers in your
task. The available network modes correspond to those described in Network settings in the Docker
run reference.
The default Docker network mode is bridge. The awsvpc network mode is required if your task
definition uses the Fargate launch type. If the network mode is set to none, you can't specify port
mappings in your container definitions, and the task's containers do not have external connectivity.
If the network mode is awsvpc, the task is allocated an elastic network interface. The host and
awsvpc network modes offer the highest networking performance for containers because they use
the Amazon EC2 network stack instead of the virtualized network stack provided by the bridge
mode; however, exposed container ports are mapped directly to the corresponding host port, so you
cannot take advantage of dynamic host port mappings or run multiple instantiations of the same
task on a single container instance if port mappings are used.
4. (Optional) For Task size, choose a value for Task memory (GB) and Task CPU (vCPU). The table
below shows the valid combinations for task-level CPU and memory.
Note
Task-level CPU and memory parameters are ignored for Windows containers. We
recommend specifying container-level resources for Windows containers.
5. (Optional) For Constraint, define how tasks that are created from this task definition are placed in
your cluster. For tasks that use the Fargate launch type, you can use constraints to place tasks based
on Availability Zone or by task group. For tasks that use the EC2 launch type, you can use constraints
to place tasks based on Availability Zone, instance type, or custom attributes. For more information,
see Amazon ECS Task Placement Constraints (p. 152).
6. For each container in your task definition, choose Add container and complete the container details.
7. (Optional) To define data volumes for your task, choose Add volume and complete the following steps.
a. For Name, type a name for your volume. Up to 255 letters (uppercase and lowercase), numbers,
hyphens, and underscores are allowed.
b. (Optional) For Source Path, type the path on the host container instance to present to the
container. If you leave this field empty, the Docker daemon assigns a host path for you. If you
specify a source path, the data volume persists at the specified location on the host container
instance until you delete it manually. If the source path does not exist on the host container
instance, the Docker daemon creates it. If the location does exist, the contents of the source
path folder are exported to the container.
8. Choose Create.
{
  "family": "",
  "taskRoleArn": "",
  "executionRoleArn": "",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "",
      "image": "",
      "cpu": 0,
      "memory": 0,
      "memoryReservation": 0,
      "links": [
        ""
      ],
      "portMappings": [
        {
          "containerPort": 0,
          "hostPort": 0,
          "protocol": "udp"
        }
      ],
      "essential": true,
      "entryPoint": [
        ""
      ],
      "command": [
        ""
      ],
      "environment": [
        {
          "name": "",
          "value": ""
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "",
          "containerPath": "",
          "readOnly": true
        }
      ],
      "volumesFrom": [
        {
          "sourceContainer": "",
          "readOnly": true
        }
      ],
      "linuxParameters": {
        "capabilities": {
          "add": [
            ""
          ],
          "drop": [
            ""
          ]
        },
        "devices": [
          {
            "hostPath": "",
            "containerPath": "",
            "permissions": [
              "read"
            ]
          }
        ],
        "initProcessEnabled": true
      },
      "hostname": "",
      "user": "",
      "workingDirectory": "",
      "disableNetworking": true,
      "privileged": true,
      "readonlyRootFilesystem": true,
      "dnsServers": [
        ""
      ],
      "dnsSearchDomains": [
        ""
      ],
      "extraHosts": [
        {
          "hostname": "",
          "ipAddress": ""
        }
      ],
      "dockerSecurityOptions": [
        ""
      ],
      "dockerLabels": {
        "KeyName": ""
      },
      "ulimits": [
        {
          "name": "rss",
          "softLimit": 0,
          "hardLimit": 0
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "KeyName": ""
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "",
      "host": {
        "sourcePath": ""
      }
    }
  ],
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": ""
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "cpu": "",
  "memory": ""
}
You can generate this task definition template with the following AWS CLI command:
aws ecs register-task-definition --generate-cli-skeleton
The family and container definitions are required in a task definition, while task role, network mode,
volumes, task placement constraints, and launch type are optional.
Parts
• Family (p. 107)
• Task Role (p. 107)
• Network Mode (p. 108)
• Container Definitions (p. 108)
• Volumes (p. 123)
• Task Placement Constraints (p. 124)
• Launch Types (p. 124)
• Task Size (p. 125)
Family
family
Type: string
Required: yes
When you register a task definition, you give it a family, which is similar to a name for multiple
versions of the task definition, specified with a revision number. The first task definition that is
registered into a particular family is given a revision of 1, and any task definitions registered after
that are given a sequential revision number.
Task Role
taskRoleArn
Type: string
Required: no
When you register a task definition, you can provide a task role for an IAM role that allows the
containers in the task permission to call the AWS APIs that are specified in its associated policies on
your behalf. For more information, see IAM Roles for Tasks (p. 251).
IAM roles for tasks on Windows require that the -EnableTaskIAMRole option is set when you
launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration
code in order to take advantage of the feature. For more information, see Windows IAM Roles for
Tasks (p. 385).
Network Mode
networkMode
Type: string
Required: no
The Docker networking mode to use for the containers in the task. The valid values are none,
bridge, awsvpc, and host. The default Docker network mode is bridge. If using the Fargate
launch type, the awsvpc network mode is required. If using the EC2 launch type, any network mode
can be used. If the network mode is set to none, you can't specify port mappings in your container
definitions, and the task's containers do not have external connectivity. The host and awsvpc
network modes offer the highest networking performance for containers because they use the
Amazon EC2 network stack instead of the virtualized network stack provided by the bridge mode.
With the host and awsvpc network modes, exposed container ports are mapped directly to the
corresponding host port (for the host network mode) or the attached elastic network interface port
(for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.
If the network mode is awsvpc, the task is allocated an Elastic Network Interface, and you must
specify a NetworkConfiguration when you create a service or run a task with the task definition.
For more information, see Task Networking with the awsvpc Network Mode (p. 131).
Note
Currently, only the Amazon ECS-optimized AMI, other Amazon Linux variants with the ecs-
init package, or AWS Fargate infrastructure support the awsvpc network mode.
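For illustration, the NetworkConfiguration supplied to create-service or run-task for an awsvpc task might look like the following sketch; the subnet and security group IDs are placeholders, and assignPublicIp applies when launching tasks into a public subnet:

```json
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "DISABLED"
  }
}
```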
If the network mode is host, you can't run multiple instantiations of the same task on a single
container instance when port mappings are used.
Docker for Windows uses different network modes than Docker for Linux. When you register a task
definition with Windows containers, you must not specify a network mode. If you use the console to
register a task definition with Windows containers, you must choose the <default> network mode
object.
Container Definitions
When you register a task definition, you must specify a list of container definitions that are passed to the
Docker daemon on a container instance. The following parameters are allowed in a container definition.
Topics
• Standard Container Definition Parameters (p. 109)
• Advanced Container Definition Parameters (p. 112)
name
Type: string
Required: yes
The name of a container. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and
underscores are allowed. If you are linking multiple containers together in a task definition, the name
of one container can be entered in the links of another container to connect the containers. This
parameter maps to name in the Create a container section of the Docker Remote API and the --
name option to docker run.
image
Type: string
Required: yes
The image used to start a container. This string is passed directly to the Docker daemon. Images in
the Docker Hub registry are available by default. You can also specify other repositories with either
repository-url/image:tag or repository-url/image@digest. Up to 255 letters (uppercase
and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs
are allowed. This parameter maps to Image in the Create a container section of the Docker Remote
API and the IMAGE parameter of docker run.
• The Fargate launch type only supports images in Amazon ECR or public repositories in Docker
Hub.
• Images in Amazon ECR repositories can be specified by using either the full registry/
repository:tag or registry/repository@digest naming convention. For
example, aws_account_id.dkr.ecr.region.amazonaws.com/my-web-
app:latest or aws_account_id.dkr.ecr.region.amazonaws.com/my-web-
app@sha256:94afd1f2e64d908bc90dbca0035a5b567EXAMPLE
• Images in official repositories on Docker Hub use a single name (for example, ubuntu or mongo).
• Images in other repositories on Docker Hub are qualified with an organization name (for example,
amazon/amazon-ecs-agent).
• Images in other online repositories are qualified further by a domain name (for example,
quay.io/assemblyline/ubuntu).
memory
Type: integer
Required: no
The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed
the memory specified here, the container is killed. This parameter maps to Memory in the Create a
container section of the Docker Remote API and the --memory option to docker run.
If your containers will be part of a task using the Fargate launch type, this field is optional and the
only requirement is that the total amount of memory reserved for all containers within a task be
lower than the task memory value.
For containers that will be part of a task using the EC2 launch type, you must specify a non-zero
integer for one or both of memory or memoryReservation in container definitions. If you specify
both, memory must be greater than memoryReservation. If you specify memoryReservation,
then that value is subtracted from the available memory resources for the container instance on
which the container is placed; otherwise, the value of memory is used.
The Docker daemon reserves a minimum of 4 MiB of memory for a container, so you should not
specify fewer than 4 MiB of memory for your containers.
memoryReservation
Type: integer
Required: no
The soft limit (in MiB) of memory to reserve for the container. When system memory is under
contention, Docker attempts to keep the container memory to this soft limit; however, your
container can consume more memory when it needs to, up to either the hard limit specified with
the memory parameter (if applicable), or all of the available memory on the container instance,
whichever comes first. This parameter maps to MemoryReservation in the Create a container
section of the Docker Remote API and the --memory-reservation option to docker run.
You must specify a non-zero integer for one or both of memory or memoryReservation in
container definitions. If you specify both, memory must be greater than memoryReservation.
If you specify memoryReservation, then that value is subtracted from the available memory
resources for the container instance on which the container is placed; otherwise, the value of
memory is used.
For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256
MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a
memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB
of memory from the remaining resources on the container instance, but also allow the container to
consume more memory resources when needed.
The Docker daemon reserves a minimum of 4 MiB of memory for a container, so you should not
specify fewer than 4 MiB of memory for your containers.
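The 128 MiB reservation and 300 MiB hard limit described in the example above could be expressed in a container definition as follows (the container name and image are illustrative):

```json
"containerDefinitions": [
  {
    "name": "bursty-app",
    "image": "my-repo/my-web-app",
    "memoryReservation": 128,
    "memory": 300,
    "essential": true
  }
]
```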
portMappings
Type: object array
Required: no
Port mappings allow containers to access ports on the host container instance to send or receive
traffic.
For task definitions that use the awsvpc network mode, you should only specify the
containerPort. The hostPort can be left blank or it must be the same value as the
containerPort.
Port mappings on Windows use the NetNAT gateway address rather than localhost. There is no
loopback for port mappings on Windows, so you cannot access a container's mapped port from the
host itself.
This parameter maps to PortBindings in the Create a container section of the Docker Remote
API and the --publish option to docker run. If the network mode of a task definition is set to
host, then host ports must either be undefined or they must match the container port in the port
mapping.
Note
After a task reaches the RUNNING status, manual and automatic host and container port
assignments are visible in the Network Bindings section of a container description of a
selected task in the Amazon ECS console, or the networkBindings section of describe-
tasks AWS CLI command output or DescribeTasks API responses.
containerPort
Type: integer
The port number on the container that is bound to the user-specified or automatically assigned
host port.
If using containers in a task with the Fargate launch type, exposed ports should be specified
using containerPort.
If using containers in a task with the EC2 launch type and you specify a container port and not
a host port, your container automatically receives a host port in the ephemeral port range (for
more information, see hostPort). Port mappings that are automatically assigned in this way do
not count toward the 100 reserved ports limit of a container instance.
hostPort
Type: integer
Required: no
The port number on the container instance to reserve for your container.
If using containers in a task with the Fargate launch type, the hostPort can either be left blank
or needs to be the same value as the containerPort.
If using containers in a task with the EC2 launch type, you can specify a non-reserved host port
for your container port mapping (this is referred to as static host port mapping), or you can
omit the hostPort (or set it to 0) while specifying a containerPort and your container will
automatically receive a port (this is referred to as dynamic host port mapping) in the ephemeral
port range for your container instance operating system and Docker version.
The default ephemeral port range is 49153 to 65535, and this range is used for Docker versions
prior to 1.6.0. For Docker version 1.6.0 and later, the Docker daemon tries to read the ephemeral
port range from /proc/sys/net/ipv4/ip_local_port_range (which is 32768 to 61000
on the latest Amazon ECS-optimized AMI); if this kernel parameter is unavailable, the default
ephemeral port range is used. You should not attempt to specify a host port in the ephemeral
port range, since these are reserved for automatic assignment. In general, ports below 32768
are outside of the ephemeral port range.
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon
ECS container agent port 51678. Any host port that was previously user-specified for a running
task is also reserved while the task is running (after a task stops, the host port is released). The
current reserved ports are displayed in the remainingResources of describe-container-
instances output, and a container instance may have up to 100 reserved ports at a time,
including the default reserved ports (automatically assigned ports do not count toward the 100
reserved ports limit).
protocol
Type: string
Required: no
The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp.
Important
UDP support is only available on container instances that were launched with version
1.2.0 of the Amazon ECS container agent (such as the amzn-ami-2015.03.c-
amazon-ecs-optimized AMI) or later, or with container agents that have been
updated to version 1.3.0 or later. To update your container agent to the latest version,
see Updating the Amazon ECS Container Agent (p. 74).
"portMappings": [
{
"containerPort": integer,
"hostPort": integer
}
...
]
If you want an automatically assigned host port, use the following syntax:
"portMappings": [
{
"containerPort": integer
}
...
]
Topics
• Environment (p. 112)
• Network Settings (p. 114)
• Storage and Logging (p. 116)
• Security (p. 119)
• Resource Limits (p. 122)
• Docker Labels (p. 123)
Environment
cpu
Type: integer
Required: no
The number of cpu units to reserve for the container. This parameter maps to CpuShares in the
Create a container section of the Docker Remote API and the --cpu-shares option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the
total amount of CPU reserved for all containers within a task be lower than the task-level cpu value.
Note
You can determine the number of CPU units that are available per Amazon EC2 instance
type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances
detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with
the same ratio as their allocated amount. For example, if you run a single-container task on a single-
core instance type with 512 CPU units specified for that container, and that is the only task running
on the container instance, that container could use the full 1,024 CPU unit share at any given time.
However, if you launched another copy of the same task on that container instance, each task would
be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher
CPU usage if the other container was not using it, but if both tasks were 100% active all of the time,
they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to
calculate the relative CPU share ratios for running containers. For more information, see CPU share
constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel
will allow is 2; however, the CPU parameter is not required, and you can use CPU values below 2 in
your container definitions. For CPU values below 2 (including null), the behavior varies based on your
Amazon ECS container agent version:
• Agent versions <= 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then
converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel
converts to 2 CPU shares.
• Agent versions >= 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows
containers only have access to the specified amount of CPU that is described in the task definition.
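As an illustration of the single-core scenario described above, the following container definition reserves 512 of the instance's 1,024 CPU units (the container name and image are illustrative):

```json
"containerDefinitions": [
  {
    "name": "worker",
    "image": "my-repo/worker",
    "cpu": 512,
    "memory": 512,
    "essential": true
  }
]
```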
essential
Type: Boolean
Required: no
If the essential parameter of a container is marked as true, and that container fails or stops for
any reason, all other containers that are part of the task are stopped. If the essential parameter
of a container is marked as false, then its failure does not affect the rest of the containers in a task.
If this parameter is omitted, a container is assumed to be essential.
All tasks must have at least one essential container. If you have an application that is composed
of multiple containers, you should group containers that are used for a common purpose into
components, and separate the different components into multiple task definitions. For more
information, see Application Architecture (p. 100).
"essential": true|false
entryPoint
Important
Early versions of the Amazon ECS container agent do not properly handle entryPoint
parameters. If you have problems using entryPoint, update your container agent or enter
your commands and arguments as command array items instead.
Type: string array
Required: no
The entry point that is passed to the container. This parameter maps to Entrypoint in the Create
a container section of the Docker Remote API and the --entrypoint option to docker run. For
more information about the Docker ENTRYPOINT parameter, go to https://docs.docker.com/engine/
reference/builder/#entrypoint.
"entryPoint": ["string", ...]
command
Type: string array
Required: no
The command that is passed to the container. This parameter maps to Cmd in the Create a container
section of the Docker Remote API and the COMMAND parameter to docker run. For more information
about the Docker CMD parameter, go to https://docs.docker.com/engine/reference/builder/#cmd.
"command": ["string", ...]
workingDirectory
Type: string
Required: no
The working directory in which to run commands inside the container. This parameter maps to
WorkingDir in the Create a container section of the Docker Remote API and the --workdir option
to docker run.
"workingDirectory": "string"
environment
Type: object array
Required: no
The environment variables to pass to a container. This parameter maps to Env in the Create a
container section of the Docker Remote API and the --env option to docker run.
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
name
Type: string
The name of the environment variable.
value
Type: string
The value of the environment variable.
"environment" : [
{ "name" : "string", "value" : "string" },
{ "name" : "string", "value" : "string" }
]
Network Settings
disableNetworking
Type: Boolean
Required: no
When this parameter is true, networking is disabled within the container. This parameter maps to
NetworkDisabled in the Create a container section of the Docker Remote API.
Note
This parameter is not supported for Windows containers.
"disableNetworking": true|false
links
Type: string array
Required: no
The link parameter allows containers to communicate with each other without the need for
port mappings. Only supported if the network mode of a task definition is set to bridge. The
name:internalName construct is analogous to name:alias in Docker links. Up to 255 letters
(uppercase and lowercase), numbers, hyphens, and underscores are allowed. For more information
about linking Docker containers, go to https://docs.docker.com/engine/userguide/networking/
default_network/dockerlinks/. This parameter maps to Links in the Create a container section of
the Docker Remote API and the --link option to docker run.
Note
This parameter is not supported for Windows containers.
Important
Containers that are collocated on a single container instance may be able to communicate
with each other without requiring links or host port mappings. Network isolation is achieved
on the container instance using security groups and VPC settings.
hostname
Type: string
Required: no
The hostname to use for your container. This parameter maps to Hostname in the Create a container
section of the Docker Remote API and the --hostname option to docker run.
"hostname": "string"
dnsServers
Type: string array
Required: no
A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a
container section of the Docker Remote API and the --dns option to docker run.
Note
This parameter is not supported for Windows containers.
dnsSearchDomains
Type: string array
Required: no
A list of DNS search domains that are presented to the container. This parameter maps to
DnsSearch in the Create a container section of the Docker Remote API and the --dns-search
option to docker run.
Note
This parameter is not supported for Windows containers.
extraHosts
Type: object array
Required: no
A list of hostnames and IP address mappings to append to the /etc/hosts file on the container.
This parameter maps to ExtraHosts in the Create a container section of the Docker Remote API
and the --add-host option to docker run.
Note
This parameter is not supported for Windows containers.
"extraHosts": [
{
"hostname": "string",
"ipAddress": "string"
}
...
]
hostname
Type: string
The hostname to use in the /etc/hosts entry.
ipAddress
Type: string
The IP address to use in the /etc/hosts entry.
Storage and Logging
readonlyRootFilesystem
Type: Boolean
Required: no
When this parameter is true, the container is given read-only access to its root file system. This
parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API
and the --read-only option to docker run.
Note
This parameter is not supported for Windows containers.
"readonlyRootFilesystem": true|false
mountPoints
Type: object array
Required: no
The mount points for data volumes in your container. This parameter maps to Volumes in the Create
a container section of the Docker Remote API and the --volume option to docker run.
Windows containers can mount whole directories on the same drive as $env:ProgramData.
Windows containers cannot mount directories on a different drive, and mount points cannot span
drives.
sourceVolume
Type: string
The name of the volume to mount.
containerPath
Type: string
The path on the container at which to mount the host volume.
readOnly
Type: boolean
Required: no
If this value is true, the container has read-only access to the volume. If this value is false,
then the container can write to the volume. The default value is false.
"mountPoints": [
{
"sourceVolume": "string",
"containerPath": "string",
"readOnly": true|false
}
]
volumesFrom
Type: object array
Required: no
Data volumes to mount from another container. This parameter maps to VolumesFrom in the
Create a container section of the Docker Remote API and the --volumes-from option to docker
run.
sourceContainer
Type: string
The name of the container to mount volumes from.
readOnly
Type: Boolean
Required: no
If this value is true, the container has read-only access to the volume. If this value is false,
then the container can write to the volume. The default value is false.
"volumesFrom": [
{
"sourceContainer": "string",
"readOnly": true|false
}
]
logConfiguration
Required: no
The log configuration specification for the container.
If using the Fargate launch type, the only supported value is awslogs. For more information on
using the awslogs log driver in task definitions to send your container logs to CloudWatch Logs, see
Using the awslogs Log Driver (p. 137).
This parameter maps to LogConfig in the Create a container section of the Docker Remote API and
the --log-driver option to docker run. By default, containers use the same logging driver that
the Docker daemon uses; however the container may use a different logging driver than the Docker
daemon by specifying a log driver with this parameter in the container definition. To use a different
logging driver for a container, the log system must be configured properly on the container instance
(or on a different log server for remote logging options). For more information on the options for
different supported log drivers, see Configure logging drivers in the Docker documentation.
Note
Amazon ECS currently supports a subset of the logging drivers available to the Docker
daemon (shown in the valid values below). Additional log drivers may be available in future
releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container
instance.
Note
The Amazon ECS container agent running on a container instance must register the
logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log
configuration options. For more information, see Amazon ECS Container Agent
Configuration (p. 81).
"logConfiguration": {
"logDriver": "json-file"|"syslog"|"journald"|"gelf"|"fluentd"|"awslogs"|"splunk",
"options": {"string": "string"
...}
logDriver
Type: string
The log driver to use for the container. The valid values listed earlier are log drivers that the
Amazon ECS container agent can communicate with by default.
If using the Fargate launch type, the only supported value is awslogs.
Note
If you have a custom driver that is not listed earlier that you would like to work with
the Amazon ECS container agent, you can fork the Amazon ECS container agent project
that is available on GitHub and customize it to work with that driver. We encourage
you to submit pull requests for changes that you would like to have included. However,
Amazon Web Services does not currently provide support for running modified copies
of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container
instance.
options
Type: string to string map
Required: no
The configuration options to send to the log driver.
This parameter requires version 1.19 of the Docker Remote API or greater on your container
instance.
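As an illustration, a container that sends its logs to CloudWatch Logs might use an awslogs configuration similar to the following (the log group name, region, and stream prefix values are examples):

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "my-log-group",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "web"
  }
}
```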
Security
privileged
Type: Boolean
Required: no
When this parameter is true, the container is given elevated privileges on the host container instance
(similar to the root user).
This parameter maps to Privileged in the Create a container section of the Docker Remote API
and the --privileged option to docker run.
Note
This parameter is not supported for Windows containers or tasks using the Fargate launch
type.
"privileged": true|false
user
Type: string
Required: no
The user name to use inside the container. This parameter maps to User in the Create a container
section of the Docker Remote API and the --user option to docker run.
Note
This parameter is not supported for Windows containers.
"user": "string"
dockerSecurityOptions
Type: string array
Required: no
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems.
This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API
and the --security-opt option to docker run.
Note
This parameter is not supported for Windows containers or tasks using the Fargate launch
type.
Note
The Amazon ECS container agent running on a container instance must register with
the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment
variables before containers placed on that instance can use these security options. For more
information, see Amazon ECS Container Agent Configuration (p. 81).
linuxParameters
Required: no
Linux-specific modifications that are applied to the container, such as Linux kernel capabilities.
"linuxParameters": {
"capabilities": {
"add": ["string", ...],
"drop": ["string", ...]
}
}
capabilities
Required: no
The Linux capabilities for the container that are added to or dropped from the default
configuration provided by Docker. For more information about the default capabilities and the
non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run
reference. For more detailed information about these Linux capabilities, see the capabilities(7)
Linux manual page.
add
Required: no
The Linux capabilities for the container to add to the default configuration provided by
Docker. This parameter maps to CapAdd in the Create a container section of the Docker
Remote API and the --cap-add option to docker run.
drop
Required: no
The Linux capabilities for the container to remove from the default configuration provided
by Docker. This parameter maps to CapDrop in the Create a container section of the Docker
Remote API and the --cap-drop option to docker run.
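For example, a container definition might add the SYS_PTRACE capability while dropping NET_RAW (the capability choices here are illustrative):

```json
"linuxParameters": {
  "capabilities": {
    "add": ["SYS_PTRACE"],
    "drop": ["NET_RAW"]
  }
}
```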
devices
Any host devices to expose to the container. This parameter maps to Devices in the Create a
container section of the Docker Remote API and the --device option to docker run.
Type: Array of devices
Required: No
hostPath
The path for the device on the host container instance.
Type: String
Required: Yes
containerPath
The path inside the container at which to expose the host device.
Type: String
Required: No
permissions
The explicit permissions to provide to the container for the device. By default, the container
has read, write, and mknod permissions on the device.
Type: Array of strings
Required: No
initProcessEnabled
Run an init process inside the container that forwards signals and reaps processes. This
parameter maps to the --init option to docker run.
This parameter requires version 1.25 of the Docker Remote API or greater on your container
instance.
Resource Limits
ulimits
Type: object array
Required: no
A list of ulimits to set in the container. This parameter maps to Ulimits in the Create a container
section of the Docker Remote API and the --ulimit option to docker run.
This parameter requires version 1.18 of the Docker Remote API or greater on your container
instance.
Note
This parameter is not supported for Windows containers.
"ulimits": [
{
"name":
"core"|"cpu"|"data"|"fsize"|"locks"|"memlock"|"msgqueue"|"nice"|"nofile"|"nproc"|"rss"|"rtprio"|"r
"softLimit": integer,
"hardLimit": integer
}
...
]
name
Type: string
The type of the ulimit.
softLimit
Type: integer
The soft limit for the ulimit type.
hardLimit
Type: integer
The hard limit for the ulimit type.
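For example, to raise the open file limit for a container (the limit values shown are illustrative):

```json
"ulimits": [
  {
    "name": "nofile",
    "softLimit": 4096,
    "hardLimit": 8192
  }
]
```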
Docker Labels
dockerLabels
Type: string to string map
Required: no
A key/value map of labels to add to the container. This parameter maps to Labels in the Create a
container section of the Docker Remote API and the --label option to docker run.
This parameter requires version 1.18 of the Docker Remote API or greater on your container
instance.
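For example (the label keys and values are illustrative):

```json
"dockerLabels": {
  "com.example.environment": "test",
  "com.example.team": "web"
}
```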
Volumes
When you register a task definition, you can optionally specify a list of volumes that will be passed to the
Docker daemon on a container instance and become available for other containers on the same container
instance to access.
If you are using the Fargate launch type, the host and sourcePath parameters are not supported.
For more information, see Using Data Volumes in Tasks (p. 126).
name
Type: string
Required: yes
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and
underscores are allowed. This name is referenced in the sourceVolume parameter of container
definition mountPoints.
host
Type: object
Required: no
The contents of the host parameter determine whether your data volume persists on the host
container instance and where it is stored. If the host parameter is empty, then the Docker daemon
assigns a host path for your data volume, but the data is not guaranteed to persist after the
containers associated with it stop running.
Windows containers can mount whole directories on the same drive as $env:ProgramData.
Windows containers cannot mount directories on a different drive, and mount points cannot span
drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my
\path:C:\my\path or D:\:C:\my\path.
sourcePath
Type: string
Required: no
The path on the host container instance that is presented to the container. If this parameter is
empty, then the Docker daemon assigns a host path for you.
If you are using the Fargate launch type, the sourcePath parameter is not supported.
If the host parameter contains a sourcePath file location, then the data volume persists
at the specified location on the host container instance until you delete it manually. If the
sourcePath value does not exist on the host container instance, the Docker daemon creates it.
If the location does exist, the contents of the source path folder are exported.
[
{
"name": "string",
"host": {
"sourcePath": "string"
}
}
]
Task Placement Constraints
If you are using the Fargate launch type, task placement constraints are not supported.
For tasks that use the EC2 launch type, you can use constraints to place tasks based on Availability
Zone, instance type, or custom attributes. For more information, see Amazon ECS Task Placement
Constraints (p. 152).
expression
Type: string
Required: no
A cluster query language expression to apply to the constraint. For more information, see Cluster
Query Language (p. 155).
type
Type: string
Required: yes
The type of constraint. Use memberOf to restrict selection to a group of valid candidates.
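For example, the following constraint restricts task placement to T2 instances; the expression follows the cluster query language:

```json
"placementConstraints": [
  {
    "type": "memberOf",
    "expression": "attribute:ecs.instance-type =~ t2.*"
  }
]
```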
Launch Types
When you register a task definition, you specify the launch type that you will be using for your task. For
more details about launch types, see Amazon ECS Launch Types (p. 132).
requiresCompatibilities
Type: string
Required: no
The launch type the task is using. This will enable a check to ensure that all of the parameters used
in the task definition meet the requirements of the launch type.
Valid values are FARGATE and EC2. For more information about launch types, see Amazon ECS
Launch Types (p. 132).
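For example, to validate a task definition against the requirements of the Fargate launch type:

```json
"requiresCompatibilities": ["FARGATE"]
```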
Task Size
When you register a task definition, you can specify the total cpu and memory used for the task. This is
separate from the cpu and memory values at the container definition level. If using the EC2 launch type,
this field is optional and any value can be used. If using the Fargate launch type, this field is required and
you must use one of the following values, which determines your range of valid values for the memory
parameter.
Note
Task-level CPU and memory parameters are ignored for Windows containers. We recommend
specifying container-level resources for Windows containers.
cpu
Type: string
Required: no
Note
This parameter is not supported for Windows containers.
The number of cpu units used by the task. If using the EC2 launch type, this field is optional and any
value can be used. If using the Fargate launch type, this field is required and you must use one of the
following values, which determines your range of valid values for the memory parameter:
• 256 (.25 vCPU) - Available memory values: 512 (0.5GB), 1024 (1GB), 2048 (2GB)
• 512 (.5 vCPU) - Available memory values: 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB)
• 1024 (1 vCPU) - Available memory values: 2048 (2GB), 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), 8192 (8GB)
• 2048 (2 vCPU) - Available memory values: Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB)
• 4096 (4 vCPU) - Available memory values: Between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB)
memory
Type: string
Required: no
Note
This parameter is not supported for Windows containers.
The amount (in MiB) of memory used by the task. If using the EC2 launch type, this field is optional
and any value can be used. If using the Fargate launch type, this field is required and you must use
one of the following values, which determines your range of valid values for the cpu parameter:
• 512 (0.5GB), 1024 (1GB), 2048 (2GB) - Available cpu values: 256 (.25 vCPU)
• 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB) - Available cpu values: 512 (.5 vCPU)
• 2048 (2GB) through 8192 (8GB) in increments of 1024 (1GB) - Available cpu values: 1024 (1 vCPU)
• Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB) - Available cpu values: 2048 (2 vCPU)
• Between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB) - Available cpu values: 4096 (4 vCPU)
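For example, a small Fargate task might specify the following task-level values (chosen here for illustration):

```json
"cpu": "256",
"memory": "512"
```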
1. In the task definition volumes section, define a data volume with name and sourcePath values.
"volumes": [
{
"name": "webdata",
"host": {
"sourcePath": "/ecs/webdata"
}
}
]
"containerDefinitions": [
{
"name": "web",
"image": "nginx",
"cpu": 99,
"memory": 100,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webdata",
"containerPath": "/usr/share/nginx/html"
}
]
}
]
In some cases, you want containers to share the same empty data volume, but you aren't interested in
keeping the data after the task has finished. For example, you may have two database containers that
need to access the same scratch file storage location during a task.
1. In the task definition volumes section, define a data volume with the name database_scratch.
Note
Because the database_scratch volume does not specify a source path, the Docker
daemon manages the volume for you. When no containers reference this volume, the
Amazon ECS container agent task cleanup service eventually deletes it (by default, this
happens 3 hours after the container exits, but you can configure this duration with the
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION agent variable). For more information, see
Amazon ECS Container Agent Configuration (p. 81). If you need this data to persist, specify
a sourcePath value for the volume.
"volumes": [
{
"name": "database_scratch",
"host": {}
}
]
2. In the containerDefinitions section, create the database container definitions so they mount
the nonpersistent data volumes.
"containerDefinitions": [
{
"name": "database1",
"image": "my-repo/database",
"cpu": 100,
"memory": 100,
"essential": true,
"mountPoints": [
{
"sourceVolume": "database_scratch",
"containerPath": "/var/scratch"
}
]
},
{
"name": "database2",
"image": "my-repo/database",
"cpu": 100,
"memory": 100,
"essential": true,
"mountPoints": [
{
"sourceVolume": "database_scratch",
"containerPath": "/var/scratch"
}
]
}
]
1. In the task definition volumes section, define a data volume with the name webroot and the
source path /data/webroot.
"volumes": [
{
"name": "webroot",
"host": {
"sourcePath": "/data/webroot"
}
}
]
2. In the containerDefinitions section, define a container for each web server with mountPoints
values that associate the webroot volume with the containerPath value pointing to the
document root for that container.
"containerDefinitions": [
{
"name": "web-server-1",
"image": "my-repo/ubuntu-apache",
"cpu": 100,
"memory": 100,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webroot",
"containerPath": "/var/www/html",
"readOnly": true
}
]
},
{
"name": "web-server-2",
"image": "my-repo/sles11-apache",
"cpu": 100,
"memory": 100,
"portMappings": [
{
"containerPort": 8080,
"hostPort": 8080
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webroot",
"containerPath": "/srv/www/htdocs",
"readOnly": true
}
]
}
]
You can define one or more volumes on a container, and then use the volumesFrom parameter
in a different container definition (within the same task) to mount all of the volumes from the
sourceContainer at their originally defined mount points. The volumesFrom parameter applies to
volumes defined in the task definition, and those that are built into the image with a Dockerfile.
1. (Optional) To share a volume that is built into an image, you need to build the image with the
volume declared in a VOLUME instruction. The following example Dockerfile uses an httpd image
and then adds a volume and mounts it at dockerfile_volume in the Apache document root
(which is the folder used by the httpd web server):
FROM httpd
VOLUME ["/usr/local/apache2/htdocs/dockerfile_volume"]
You can build an image with this Dockerfile and push it to a repository, such as Docker Hub, and use
it in your task definition. The example my-repo/httpd_dockerfile_volume image used in the
following steps was built with the above Dockerfile.
2. Create a task definition that defines your other volumes and mount points for the containers. In this
example volumes section, you create an empty volume called empty, which the Docker daemon will
manage. There is also a host volume defined called host_etc, which exports the /etc folder on the
host container instance.
{
"family": "test-volumes-from",
"volumes": [
{
"name": "empty",
"host": {}
},
{
"name": "host_etc",
"host": {
"sourcePath": "/etc"
}
}
],
In the container definitions section, create a container that mounts the volumes defined earlier. In
this example, the web container (which uses the image built with a volume in the Dockerfile) mounts
the empty and host_etc volumes.
"containerDefinitions": [
{
"name": "web",
"image": "my-repo/httpd_dockerfile_volume",
"cpu": 100,
"memory": 500,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"mountPoints": [
{
"sourceVolume": "empty",
"containerPath": "/usr/local/apache2/htdocs/empty_volume"
},
{
"sourceVolume": "host_etc",
"containerPath": "/usr/local/apache2/htdocs/host_etc"
}
],
"essential": true
},
Create another container that uses volumesFrom to mount all of the volumes that are associated
with the web container. All of the volumes on the web container will likewise be mounted on the
busybox container (including the volume specified in the Dockerfile that was used to build the my-
repo/httpd_dockerfile_volume image).
  {
    "name": "busybox",
    "image": "busybox",
    "volumesFrom": [
      {
        "sourceContainer": "web"
      }
    ],
    "cpu": 100,
    "memory": 500,
    "entryPoint": [
      "sh",
      "-c"
    ],
    "command": [
      "echo $(date) > /usr/local/apache2/htdocs/empty_volume/date && echo $(date) > /usr/local/apache2/htdocs/host_etc/date && echo $(date) > /usr/local/apache2/htdocs/dockerfile_volume/date"
    ],
    "essential": false
  }
]
}
When this task is run, the two containers mount the volumes, and the command in the busybox
container writes the date and time to a file called date in each of the volume folders, which are
then visible at the website displayed by the web container.
Note
Because the busybox container runs a quick command and then exits, it needs to be set as
"essential": false in the container definition to prevent it from stopping the entire
task when it exits.
Task networking also provides greater security for your containers by allowing you to use security groups
and network monitoring tools at a more granular level within ECS tasks. Because each task gets its
own elastic network interface, you can also take advantage of other Amazon EC2 networking features
like VPC Flow Logs so that you can monitor traffic to and from your tasks. Additionally, containers that
belong to the same task can communicate over the localhost interface. A task can only have one
elastic network interface associated with it at a given time.
To use task networking, specify the awsvpc network mode in your task definition. Then, when you run a
task or create a service, specify a network configuration that includes the subnets in which to place your
tasks and the security groups to attach to its associated elastic network interface. The tasks are placed
on valid container instances in those subnets and the specified security groups are associated with the
elastic network interface that is provisioned for the task.
The elastic network interface that is created for your task is fully managed by Amazon ECS. Amazon ECS
creates the elastic network interface and attaches it to the container instance with the specified security
group. The task sends and receives network traffic on the elastic network interface in the same way that
Amazon EC2 instances do with their primary network interfaces. These elastic network interfaces are
visible in the Amazon EC2 console for your account, but they cannot be detached manually or modified
by your account. This is to prevent accidental deletion of an elastic network interface that is associated
with a running task. You can view the elastic network interface attachment information for tasks in the
Amazon ECS console or with the DescribeTasks API operation. When the task stops or if the service is
scaled down, the elastic network interface is released.
To use task networking, your task definitions must specify the awsvpc network mode. For more
information, see Network Mode (p. 108). When you run tasks or create services using a task definition
that specifies the awsvpc network mode, you specify a network configuration that contains the VPC
subnets to be considered for placement and the security groups to attach to the task's elastic network
interface.
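When you call RunTask or CreateService for an awsvpc task, the network configuration takes the following general shape; the subnet and security group IDs below are placeholders:

```json
"networkConfiguration": {
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroups": ["sg-0123456789abcdef0"]
  }
}
```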
Tasks and services that use the awsvpc network mode require the Amazon ECS service-linked role to
provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role
is created for you automatically when you create a cluster, or if you create or update a service in the AWS
Management Console. For more information, see Using Service-Linked Roles for Amazon ECS (p. 243).
You can also create the service-linked role manually with the AWS CLI.
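As a sketch, the service-linked role can be created from the AWS CLI with the IAM create-service-linked-role command:

```shell
# Creates the Amazon ECS service-linked role if it does not already exist
aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
```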
• The awsvpc network mode does not provide task elastic network interfaces with public IP addresses.
To access the internet, tasks must be launched in a private subnet that is configured to use a NAT
gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. Inbound network
access must be from within the VPC using the private IP address or DNS hostname, or routed through
a load balancer from within the VPC. Tasks launched within public subnets do not have outbound
network access.
• Currently, only the Amazon ECS-optimized AMI, or other Amazon Linux variants with the ecs-init
package, support task networking. Your Amazon ECS container instances require at least version 1.15.0
of the container agent to enable task networking. We recommend using the latest container agent
version.
• Each task that uses the awsvpc network mode receives its own elastic network interface, which is
attached to the container instance that hosts it. EC2 instances have a limit to the number of elastic
network interfaces that can be attached to them, and the primary network interface counts as one.
For example, a c4.large instance may have up to three elastic network interfaces attached to it. The
primary network adapter for the instance counts as one, so you can attach two more elastic network
interfaces to the instance. Because each awsvpc task requires an elastic network interface, you can
only run two such tasks on this instance type. For more information about how many elastic network
interfaces are supported per instance type, see IP Addresses Per Network Interface Per Instance Type in
the Amazon EC2 User Guide for Linux Instances.
• Amazon ECS only accounts for the elastic network interfaces that it attaches to your container
instances for you. If you have attached elastic network interfaces to your container instances manually,
then Amazon ECS could try to place a task on an instance without sufficient available network adapter
attachments. In this case, the task would time out, move from PROVISIONING to DEPROVISIONING,
and then to STOPPED. We recommend that you do not attach elastic network interfaces to your
container instances manually.
• Container instances must be registered with the ecs.capability.task-eni attribute to be considered for
placement of tasks with the awsvpc network mode. Container instances running version 1.15.0-4 or
later of ecs-init are registered with this attribute.
• The elastic network interfaces that are created and attached to your container instances cannot be
detached manually or modified by your account. This is to prevent the accidental deletion of an elastic
network interface that is associated with a running task. To release the elastic network interfaces for a
task, stop the task.
If you use the Fargate launch type, the following task parameters are not valid:
• dockerSecurityOptions
• links
• linuxParameters
• placementConstraints
• privileged
If you use the Fargate launch type, the following task parameters can be used but with limitations:
• networkMode ‐ The only valid value is awsvpc. For more information, see Network Mode (p. 108).
• portMappings ‐ You should specify any exposed ports as containerPort. The hostPort can be
left blank.
• logConfiguration ‐ The only valid value is awslogs. For more information, see Using the awslogs
Log Driver (p. 137).
• volumes ‐ The host and sourcePath values are not valid. There are also specific service limits
related to volumes for tasks using the Fargate launch type. For more information, see Amazon ECS
Service Limits (p. 356).
• There are separate task definition parameters for container and task size. The container size
parameters are optional. The task size parameters are required and have specific values that must be
used. For more information, see Task Size (p. 125).
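As an illustration of the required task-level size parameters with the Fargate launch type, a task definition skeleton might include the following fields; the specific cpu and memory values shown are just one of the supported combinations:

```json
"requiresCompatibilities": ["FARGATE"],
"networkMode": "awsvpc",
"cpu": "256",
"memory": "512"
```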
To send system logs from your Amazon ECS container instances to CloudWatch Logs, see Using
CloudWatch Logs with Container Instances (p. 53). For more information about CloudWatch Logs, see
Monitoring Log Files in the Amazon CloudWatch User Guide.
Topics
• Enabling the awslogs Log Driver for Your Containers (p. 137)
• Creating Your Log Groups (p. 137)
• Available awslogs Log Driver Options (p. 139)
• Specifying a Log Configuration in your Task Definition (p. 139)
• Viewing awslogs Container Logs in CloudWatch Logs (p. 141)
If you are using the EC2 launch type for your tasks and want to enable the awslogs log driver, your
Amazon ECS container instances require at least version 1.9.0 of the container agent. For information
about checking your agent version and updating to the latest version, see Updating the Amazon ECS
Container Agent (p. 74).
Note
If you are not using the Amazon ECS-optimized AMI (with at least version 1.9.0-1 of the ecs-
init package) for your container instances, you also need to specify that the awslogs logging
driver is available on the container instance when you start the agent by using the following
environment variable in your docker run statement or environment variable file. For more
information, see Installing the Amazon ECS Container Agent (p. 69).
ECS_AVAILABLE_LOGGING_DRIVERS='["json-file","awslogs"]'
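For example, you could set the variable in the agent's environment variable file before starting the agent; the file path below (/etc/ecs/ecs.config) is the conventional location and is an assumption for your setup:

```
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
```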
Your Amazon ECS container instances also require logs:CreateLogStream and logs:PutLogEvents
permission on the IAM role with which you launch your container instances. If you created your Amazon
ECS container instance role before awslogs log driver support was enabled in Amazon ECS, then you
might need to add this permission. If your container instances use the managed IAM policy for container
instances, then your container instances should have the correct permissions. For information about
checking your Amazon ECS container instance role and attaching the managed IAM policy for container
instances, see To check for the ecsInstanceRole in the IAM console (p. 240).
If you register your task definitions in the console and choose the Auto-configure CloudWatch
Logs option, your log groups are created for you. Alternatively, you can manually create your log
groups using the following steps.
As an example, you could have a task with a WordPress container (which uses the awslogs-wordpress
log group) that is linked to a MySQL container (which uses the awslogs-mysql log group). The sections
below show how to create these log groups with the AWS CLI and with the CloudWatch console.
If you have a working installation of the AWS CLI, you can use it to create your log groups. The command
below creates a log group called awslogs-wordpress in the us-west-2 region. Run this command for
each log group to create, replacing the log group name and region with your own values.
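A sketch of that command, assuming the AWS CLI is configured with permission to create log groups:

```shell
# Creates the awslogs-wordpress log group in us-west-2
aws logs create-log-group --log-group-name awslogs-wordpress --region us-west-2
```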
awslogs-region
Required: Yes
Specify the region to which the awslogs log driver should send your Docker logs. You can choose to
send all of your logs from clusters in different regions to a single region in CloudWatch Logs so that
they are all visible in one location, or you can separate them by region for more granularity. Be sure
that the specified log group exists in the region that you specify with this option.
awslogs-group
Required: Yes
You must specify a log group to which the awslogs log driver will send its log streams. For more
information, see Creating Your Log Groups (p. 137).
awslogs-stream-prefix
Required: No, unless you are using the Fargate launch type, in which case it is required.
The awslogs-stream-prefix option allows you to associate a log stream with the specified
prefix, the container name, and the ID of the Amazon ECS task to which the container belongs. If you
specify a prefix with this option, then the log stream takes the following format:
prefix-name/container-name/ecs-task-id
If you do not specify a prefix with this option, then the log stream is named after the container ID
that is assigned by the Docker daemon on the container instance. Because it is difficult to trace logs
back to the container that sent them with just the Docker container ID (which is only available on the
container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you could use the service name as the prefix, which would allow you to
trace log streams to the service that the container belongs to, the name of the container that sent
them, and the ID of the task to which the container belongs.
You must specify a stream-prefix for your logs in order to have your logs appear in the Log pane
when using the Amazon ECS console.
The task definition JSON shown below has a logConfiguration object specified for each container;
one for the WordPress container that sends logs to a log group called awslogs-wordpress, and one
for a MySQL container that sends logs to a log group called awslogs-mysql. Both containers use the
awslogs-example log stream prefix.
{
  "containerDefinitions": [
    {
      "name": "wordpress",
      "links": [
        "mysql"
      ],
      "image": "wordpress",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "awslogs-wordpress",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "awslogs-example"
        }
      },
      "memory": 500,
      "cpu": 10
    },
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "password"
        }
      ],
      "name": "mysql",
      "image": "mysql",
      "cpu": 10,
      "memory": 500,
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "awslogs-mysql",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "awslogs-example"
        }
      }
    }
  ],
  "family": "awslogs-example"
}
In the Amazon ECS console, the log configuration for the wordpress container is specified as shown in
the image below.
After you have registered a task definition with the awslogs log driver in a container definition log
configuration, you can run a task or create a service with that task definition to start sending logs to
CloudWatch Logs. For more information, see Running Tasks (p. 148) and Creating a Service (p. 188).
To view your CloudWatch Logs data for a container from the Amazon ECS console
The following example specifies a WordPress container and a MySQL container that are linked together.
The WordPress container exposes container port 80 on host port 80. The security group on the
container instance must allow inbound traffic on port 80 in order for this WordPress installation to be
accessible from a web browser.
For more information about the WordPress container, go to the official WordPress Docker Hub repository
at https://registry.hub.docker.com/_/wordpress/. For more information about the MySQL container, go
to the official MySQL Docker Hub repository at https://registry.hub.docker.com/_/mysql/.
{
  "containerDefinitions": [
    {
      "name": "wordpress",
      "links": [
        "mysql"
      ],
      "image": "wordpress",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10
    },
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "password"
        }
      ],
      "name": "mysql",
      "image": "mysql",
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ],
  "family": "hello_world"
}
Important
If you use this task definition with a load balancer, you need to complete the WordPress
setup installation through the web interface on the container instance immediately after the
container starts. The load balancer health check ping expects a 200 response from the server,
but WordPress returns a 301 until the installation is completed. If the load balancer health
check fails, the load balancer deregisters the instance.
The following example demonstrates how to use the awslogs log driver in a task definition. The nginx
container will send its logs to the ecs-log-streaming log group in the us-west-2 region. For more
information, see Using the awslogs Log Driver (p. 137).
{
  "containerDefinitions": [
    {
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "name": "nginx-container",
      "image": "nginx",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "ecs-log-streaming",
          "awslogs-region": "us-west-2"
        }
      },
      "cpu": 0
    }
  ],
  "family": "example_task_1"
}
Example: Amazon ECR Image and Task Definition IAM Role
The following example uses an Amazon ECR image called aws-nodejs-sample with the v1
tag from the 123456789012.dkr.ecr.us-west-2.amazonaws.com registry. The container
in this task will inherit IAM permissions from the arn:aws:iam::123456789012:role/
AmazonECSTaskS3BucketRole role. For more information, see IAM Roles for Tasks (p. 251).
{
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }
  ],
  "family": "example_task_3",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
The following example demonstrates the syntax for a container that uses an entry point and a command
argument. This container pings google.com four times and then exits.
{
  "containerDefinitions": [
    {
      "memory": 32,
      "essential": true,
      "entryPoint": [
        "ping"
      ],
      "name": "alpine_ping",
      "readonlyRootFilesystem": true,
      "image": "alpine:3.4",
      "command": [
        "-c",
        "4",
        "google.com"
      ],
      "cpu": 16
    }
  ],
  "family": "example_task_2"
}
2. From the navigation bar, choose the region that contains your task definition.
3. In the navigation pane, choose Task Definitions.
4. On the Task Definitions page, select the box to the left of the task definition to revise and choose
Create new revision.
5. On the Create new revision of Task Definition page, make changes. For example, to change the
existing container definitions (such as the container image, memory limits, or port mappings), select
the container, make the changes, and then choose Update.
6. Verify the information and choose Create.
7. If your task definition is used in a service, update your service with the updated task definition. For
more information, see Updating a Service (p. 194).
When you deregister a task definition, it is immediately marked as INACTIVE. Existing tasks and services
that reference an INACTIVE task definition continue to run without disruption, and existing services
that reference an INACTIVE task definition can still scale up or down by modifying the service's desired
count.
You cannot use an INACTIVE task definition to run new tasks or create new services, and you cannot
update an existing service to reference an INACTIVE task definition (although there may be up to a 10-
minute window following deregistration where these restrictions have not yet taken effect).
Note
At this time, INACTIVE task definitions remain discoverable in your account indefinitely;
however, this behavior is subject to change in the future, so you should not rely on INACTIVE
task definitions persisting beyond the lifecycle of any associated tasks and services.
Amazon ECS provides a service scheduler (for long-running tasks and applications) as well as the ability
to run tasks manually (for batch jobs or single-run tasks); in both cases, Amazon ECS places tasks on
your cluster for you. You can specify task placement strategies and constraints that allow you to run
tasks in the configuration you choose, such as spread out across Availability Zones. It is also possible to
integrate with custom or third-party schedulers.
Service Scheduler
The service scheduler is ideally suited for long-running stateless services and applications. The service
scheduler ensures that the specified number of tasks are constantly running and reschedules tasks when
a task fails (for example, if the underlying infrastructure fails for some reason). The service scheduler
can also optionally make sure that tasks are registered against an Elastic Load Balancing load balancer.
You can update your services that are maintained by the service scheduler, such as deploying a new task
definition or changing the desired number of running tasks. By default, the service scheduler spreads
tasks across Availability Zones, but you can use task placement strategies and constraints to customize
task placement decisions. For more information, see Services (p. 161).
The RunTask action is ideally suited for processes such as batch jobs that perform work and then stop.
For example, you could have a process call RunTask when work comes into a queue. The task pulls
work from the queue, performs the work, and then exits. Using RunTask, you can allow the default
task placement strategy to distribute tasks randomly across your cluster, which minimizes the chances
that a single instance gets a disproportionate number of tasks. Alternatively, you can use RunTask to
customize how the scheduler places tasks using task placement strategies and constraints. For more
information, see Running Tasks (p. 148) and RunTask in the Amazon Elastic Container Service API
Reference.
If you have tasks to run at set intervals in your cluster, such as a backup operation or a log scan, you can
use the Amazon ECS console to create a CloudWatch Events rule that runs one or more tasks in your
cluster at specified times. Your scheduled event rule can be set to either a specific interval (run every N
minutes, hours, or days), or for more complicated scheduling, you can use a cron expression. For more
information, see Scheduled Tasks (cron) (p. 158).
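As an illustration, a CloudWatch Events rule accepts schedule expressions in two forms; the specific values shown here are arbitrary examples:

```
rate(15 minutes)      run every 15 minutes
cron(0 12 * * ? *)    run at 12:00 UTC every day
```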
Custom Schedulers
Amazon ECS allows you to create your own schedulers that meet the needs of your business, or to
integrate with third-party schedulers. Blox is an open-source project that gives you more control over how your
containerized applications run on Amazon ECS. It enables you to build schedulers and integrate third-
party schedulers with Amazon ECS while leveraging Amazon ECS to fully manage and scale your clusters.
Custom schedulers use the StartTask API operation to place tasks on specific container instances within
your cluster. For more information, see StartTask in the Amazon Elastic Container Service API Reference.
Note
Custom schedulers are only compatible with tasks using the EC2 launch type. If you are using
the Fargate launch type for your tasks then the StartTask API will not work.
Task Placement
The RunTask and CreateService actions enable you to specify task placement constraints and
task placement strategies to customize how Amazon ECS places your tasks. For more information, see
Amazon ECS Task Placement (p. 150).
Contents
• Running Tasks (p. 148)
• Amazon ECS Task Placement (p. 150)
• Scheduled Tasks (cron) (p. 158)
• Task Life Cycle (p. 159)
• Task Retirement (p. 160)
Running Tasks
Running tasks manually is ideal in certain situations. For example, suppose that you are developing a
task but you are not ready to deploy this task with the service scheduler. Perhaps your task is a one-time
or periodic batch job that does not make sense to keep running or restart when it finishes.
To keep a specified number of tasks running or to place your tasks behind a load balancer, use the
Amazon ECS service scheduler instead. For more information, see Services (p. 161).
To run a task
• To run the latest revision of a task definition shown here, select the box to the left of the task
definition to run.
• To run an earlier revision of a task definition shown here, select the task definition to view all
active revisions, then select the revision to run.
3. Choose Actions, Run Task.
4. For Launch Type, choose your desired launch type. For more information about launch types, see
Amazon ECS Launch Types (p. 132).
5. For Cluster, choose the cluster to use. For Number of tasks, type the number of tasks to launch with
this task definition. For Task Group, type the name of the task group.
6. If your task definition uses the awsvpc network mode, complete these substeps. Otherwise,
continue to the next step.
a. For Cluster VPC, choose the VPC that your container instances reside in.
b. For Subnets, choose the available subnets for your task.
Important
Only private subnets are supported for the awsvpc network mode. Because tasks do
not receive public IP addresses, a NAT gateway is required for outbound internet access,
and inbound internet traffic should be routed through a load balancer.
c. For Security groups, a security group has been created for your task that allows HTTP traffic
from the internet (0.0.0.0/0). To edit the name or the rules of this security group, or to choose
an existing security group, choose Edit and then modify your security group settings.
7. (Optional) For Task Placement, you can specify how tasks are placed using task placement strategies
and constraints. Choose from the following options:
• AZ Balanced Spread - distribute tasks across Availability Zones and across container instances in
the Availability Zone.
• AZ Balanced BinPack - distribute tasks across Availability Zones and across container instances
with the least available memory.
• BinPack - distribute tasks based on the least available amount of CPU or memory.
• One Task Per Host - place, at most, one task from the service on each container instance.
• Custom - define your own task placement strategy. See Amazon ECS Task Placement (p. 150) for
examples.
For more information, see Amazon ECS Task Placement (p. 150).
8. (Optional) To send command or environment variable overrides to one or more containers in your
task definition, or to specify an IAM role task override, choose Advanced Options and complete the
following steps:
a. For Task Role Override, choose an IAM role that provides permissions for containers in your
task to make calls to AWS APIs on your behalf. For more information, see IAM Roles for
Tasks (p. 251).
Note that only roles with the Amazon EC2 Container Service Task Role trust relationship are
shown here. For more information about creating an IAM role for your tasks, see Creating an
IAM Role and Policy for your Tasks (p. 254).
b. For Task Execution Role Override, choose an IAM role that provides permissions to pull your
container images and send container logs to CloudWatch Logs on your behalf. For more information, see IAM Roles for
Tasks (p. 251).
Note that only roles with the Amazon EC2 Container Service Task Execution Role trust
relationship are shown here. For more information about creating an IAM role for your tasks, see
Creating an IAM Role and Policy for your Tasks (p. 254).
c. For Container Overrides, choose a container to which to send a command or environment
variable override.
• For a command override: For Command override, type the command override to send. If
your container definition does not specify an ENTRYPOINT, the format should be a comma-
separated list of non-quoted strings. For example:
/bin/sh,-c,echo,$DATE
If your container definition does specify an ENTRYPOINT (such as sh,-c), the format should be
an unquoted string, which is surrounded with double quotes and passed as an argument to
the ENTRYPOINT command. For example:
• For environment variable overrides: Choose Add Environment Variable. For Key, type the
name of your environment variable. For Value, type a string value for your environment value
(without surrounding quotes).
A task placement strategy is an algorithm for selecting instances for task placement or tasks for
termination. For example, Amazon ECS can select instances at random or it can select instances such
that tasks are distributed evenly across a group of instances. A task placement constraint is a rule that
is considered during task placement. For example, you can use constraints to place tasks based on
Availability Zone or instance type. You can associate attributes, which are name/value pairs, with your
container instances and then use a constraint to place tasks based on attribute.
Note
Task placement strategies are best effort. Amazon ECS still attempts to place tasks even when
the most optimal placement option is unavailable. However, task placement constraints are
binding, and they can prevent task placement.
You can use strategies and constraints together. For example, you can distribute tasks across Availability
Zones and bin pack tasks based on memory within each Availability Zone, but only for G2 instances.
When Amazon ECS places tasks, it uses the following process to select container instances:
1. Identify the instances that satisfy the CPU, memory, and port requirements in the task definition.
2. Identify the instances that satisfy the task placement constraints.
3. Identify the instances that satisfy the task placement strategies.
4. Select the instances for task placement.
Contents
• Amazon ECS Task Placement Strategies (p. 150)
• Amazon ECS Task Placement Constraints (p. 152)
• Cluster Query Language (p. 155)
Strategy Types
Amazon ECS supports the following task placement strategies:
binpack
Place tasks based on the least available amount of CPU or memory. This minimizes the number of
instances in use.
random
Place tasks randomly.
spread
Place tasks evenly based on the specified value. Accepted values are attribute key:value pairs,
instanceId, or host. Service tasks are spread based on the tasks from that service.
Example Strategies
You can specify task placement strategies with the following actions: CreateService and RunTask.
The following strategy distributes tasks evenly across Availability Zones.
"placementStrategy": [
  {
    "field": "attribute:ecs.availability-zone",
    "type": "spread"
  }
]
The following strategy distributes tasks evenly across all instances.
"placementStrategy": [
  {
    "field": "instanceId",
    "type": "spread"
  }
]
The following strategy bin packs tasks based on memory.
"placementStrategy": [
  {
    "field": "memory",
    "type": "binpack"
  }
]
The following strategy places tasks randomly.
"placementStrategy": [
  {
    "type": "random"
  }
]
The following strategy distributes tasks evenly across Availability Zones and then distributes tasks evenly
across the instances within each Availability Zone.
"placementStrategy": [
  {
    "field": "attribute:ecs.availability-zone",
    "type": "spread"
  },
  {
    "field": "instanceId",
    "type": "spread"
  }
]
The following strategy distributes tasks evenly across Availability Zones and then bin packs tasks based
on memory within each Availability Zone.
"placementStrategy": [
  {
    "field": "attribute:ecs.availability-zone",
    "type": "spread"
  },
  {
    "field": "memory",
    "type": "binpack"
  }
]
Constraint Types
Amazon ECS supports the following types of task placement constraints:
distinctInstance
Place each task on a different container instance.
memberOf
Place tasks on container instances that satisfy an expression. For more information about expression syntax, see Cluster Query Language (p. 155).
Attributes
You can add custom metadata to your container instances, known as attributes. Each attribute has a
name and an optional string value. You can use the built-in attributes provided by Amazon ECS or define
custom attributes.
Built-in Attributes
Amazon ECS automatically applies the following attributes to your container instances.
ecs.ami-id
The ID of the AMI used to launch the instance. An example value for this attribute is "ami-eca289fb".
ecs.availability-zone
The Availability Zone for the instance. An example value for this attribute is "us-east-1a".
ecs.instance-type
The instance type for the instance. An example value for this attribute is "g2.2xlarge".
ecs.os-type
The operating system for the instance. The possible values for this attribute are "linux" and
"windows".
Custom Attributes
You can apply custom attributes to your container instances. For example, you can define an attribute
with the name "stack" and a value of "prod".
Adding an Attribute
You can add custom attributes at instance registration time using the container agent or manually, using
the AWS Management Console. For more information about using the container agent, see Amazon ECS
Container Agent Configuration Parameters (p. 86).
The following examples demonstrate how to add custom attributes using the put-attributes command.
The following example adds the custom attribute "stack=prod" to the specified container instance in the
default cluster.
The following example adds the custom attributes "stack=prod" and "project=a" to the specified
container instance in the default cluster.
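Sketches of those two commands follow; the container instance ARN shown is a placeholder, and omitting --cluster targets the default cluster:

```shell
# Adds the custom attribute "stack=prod" to the specified container instance
aws ecs put-attributes \
    --attributes name=stack,value=prod,targetId=arn:aws:ecs:us-west-2:123456789012:container-instance/1c3be8ed-df30-47b4-8f1e-6e68ebd01f34

# Adds the custom attributes "stack=prod" and "project=a"
aws ecs put-attributes \
    --attributes name=stack,value=prod,targetId=arn:aws:ecs:us-west-2:123456789012:container-instance/1c3be8ed-df30-47b4-8f1e-6e68ebd01f34 name=project,value=a,targetId=arn:aws:ecs:us-west-2:123456789012:container-instance/1c3be8ed-df30-47b4-8f1e-6e68ebd01f34
```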
Filtering by Attribute
You can apply a filter to your container instances, allowing you to see only the instances that have a specific attribute.
For Filter by attributes, type or select the attributes by which to filter. After you select the attribute
name, you are prompted for the attribute value.
6. Add additional attributes to the filter as needed. Remove an attribute by choosing the X next to it.
The following examples demonstrate how to filter container instances by attribute using the list-
container-instances command. For more information about the filter syntax, see Cluster Query
Language (p. 155).
The following example uses built-in attributes to list the g2.2xlarge instances.
The following example lists the instances with the custom attribute "stack=prod".
The following example lists the instances with the custom attribute "stack" unless the attribute value is
"prod".
The following example uses built-in attributes to list the instances of type t2.small or t2.medium.
The following example uses built-in attributes to list the T2 instances in Availability Zone us-east-1a.
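Sketches of those filters with the list-container-instances command follow (the default cluster is assumed):

```shell
# List the g2.2xlarge instances
aws ecs list-container-instances --filter "attribute:ecs.instance-type == g2.2xlarge"

# List the instances with the custom attribute "stack=prod"
aws ecs list-container-instances --filter "attribute:stack == prod"

# List the instances with the "stack" attribute, unless its value is "prod"
aws ecs list-container-instances --filter "attribute:stack != prod"

# List the instances of type t2.small or t2.medium
aws ecs list-container-instances --filter "attribute:ecs.instance-type in [t2.small, t2.medium]"

# List the T2 instances in Availability Zone us-east-1a
aws ecs list-container-instances --filter "attribute:ecs.instance-type =~ t2.* and attribute:ecs.availability-zone == us-east-1a"
```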
Task Groups
You can identify a set of related tasks as a task group. All tasks with the same task group name are
considered as a set when performing spread placement. For example, suppose that you are running
different applications in one cluster, such as databases and web servers. To ensure that your databases
are balanced across Availability Zones, add them to a task group named "databases" and then use this
task group as a constraint for task placement.
When you launch a task using the RunTask or StartTask action, you can specify the name of the task
group for the task. If you don't specify a task group for the task, the default name is the family name of
the task definition (for example, family:my-task-definition).
For tasks launched by the service scheduler, the task group name is the name of the service (for example,
service:my-service-name).
Limits
Example Constraints
You can specify task placement constraints with the following actions: CreateService,
RegisterTaskDefinition, and RunTask.
"placementConstraints": [
{
"expression": "attribute:ecs.instance-type =~ t2.*",
"type": "memberOf"
}
]
The following constraint places tasks on instances in the databases task group.
"placementConstraints": [
{
"expression": "task:group == databases",
"type": "memberOf"
}
]
The following constraint places each task in the group on a different instance.
"placementConstraints": [
{
"type": "distinctInstance"
}
]
After you have defined a group of container instances, you can customize Amazon ECS to place tasks on
container instances based on group. For more information, see Running Tasks (p. 148) and Creating
a Service (p. 188). You can also apply a group filter when listing container instances. For more
information, see Filtering by Attribute (p. 153).
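As a sketch, the T2 constraint above can also be supplied at run time with the run-task command (the task definition name is hypothetical):

```shell
aws ecs run-task \
    --cluster default \
    --task-definition my-task-definition \
    --placement-constraints "type=memberOf,expression=attribute:ecs.instance-type =~ t2.*"
```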
Expression Syntax
Expressions have the following syntax:
subject operator [argument]
Subject
attribute:attribute-name
Note
For more details about attributes, see Attributes (p. 152).
You can also select container instances by task group. Specify task groups as follows:
task:group
Note
For more details about task groups, see Task Groups (p. 155).
Operator
Operator                 Description
==, equals               String equality
!=, not_equals           String inequality
exists                   Subject exists
!exists, not_exists      Subject does not exist
in                       Value in argument list
!in, not_in              Value not in argument list
=~, matches              Pattern match
!~, not_matches          Pattern does not match
Argument
The in and not_in operators expect an argument list as the argument. You specify an argument list as follows:
[argument1, argument2, ..., argumentN]
The matches and not_matches operators expect an argument that conforms to the Java regular
expression syntax. For more information, see java.util.regex.Pattern.
Compound Expressions
You can combine expressions using the following Boolean operators:
• &&, and
• ||, or
• !, not
Example Expressions
The following are example expressions.
The following expression selects instances with the specified instance type.
attribute:ecs.instance-type == t2.small
The following expression selects instances in the us-east-1a or us-east-1b Availability Zone.
attribute:ecs.availability-zone in [us-east-1a, us-east-1b]
The following expression selects G2 instances that are not in the us-east-1d Availability Zone.
attribute:ecs.instance-type =~ g2.* and not(attribute:ecs.availability-zone == us-east-1d)
The following expression selects instances that are hosting tasks in the service:production group.
task:group == service:production
The following expression selects instances that are not hosting tasks in the database group.
not(task:group == database)
Scheduled Tasks (cron)
If you have tasks to run at set intervals in your cluster, such as a backup operation or a log scan, you can use the Amazon ECS console to create a CloudWatch Events rule that runs one or more tasks in your cluster at the specified times. Your scheduled event rule can be set either to a specific interval (run every N minutes, hours, or days), or, for more complicated scheduling, you can use a cron expression. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
• For Run at fixed interval, enter the interval and unit for your schedule.
• For Cron expression, enter the cron expression for your task schedule. These expressions have six
required fields, and fields are separated by white space. For more information, and examples of
cron expressions, see Cron Expressions in the Amazon CloudWatch Events User Guide.
7. Create a target for your schedule rule.
a. For Target ID, enter a unique identifier for your target. Up to 64 letters, numbers, periods,
hyphens, and underscores are allowed.
b. For Task definition, choose the family and revision (family:revision) of the task definition to run
for this target.
c. For Number of tasks, enter the number of instantiations of the specified task definition to run
on your cluster when the rule executes.
d. (Optional) For Task role override, choose the IAM role to use for the task in your target, instead
of the task definition default. For more information, see IAM Roles for Tasks (p. 251). Only
roles with the Amazon EC2 Container Service Task Role trust relationship are shown here. For
more information about creating an IAM role for your tasks, see Creating an IAM Role and Policy
for your Tasks (p. 254).
e. For CloudWatch Events IAM role for this target, choose an existing CloudWatch Events service
role (ecsEventsRole) that you may have already created. Or, choose Create new role to create
the required IAM role that allows CloudWatch Events to make calls to Amazon ECS to run tasks
on your behalf. For more information, see CloudWatch Events IAM Role (p. 251).
f. (Optional) In the Container overrides section, you can expand individual containers and
override the command and/or environment variables for that container that are defined in the
task definition.
8. (Optional) To add additional targets (other tasks to run when this rule is executed), choose Add
targets and repeat the previous substeps for each additional target.
9. Choose Create.
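The schedule rule created by this procedure can also be sketched with the AWS CLI; the rule name is hypothetical, and the six-field cron expression runs the rule at 12:00 UTC every day:

```shell
# Fixed-interval alternative: --schedule-expression "rate(5 minutes)"
aws events put-rule \
    --name nightly-log-scan \
    --schedule-expression "cron(0 12 * * ? *)"
```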
Task Lifecycle
When task status changes are requested, such as stopping a task or updating the desired count of a
service to scale it up or down, the Amazon ECS container agent tracks these changes as the last known
status of the task and the desired status of the task. The flow chart below shows the different paths that
task status can take, based on the action that causes the status change.
The center path shows the natural progression of a batch job that stops on its own. A persistent task
that is not meant to finish would also be on the center path, but it would stop at the RUNNING:RUNNING
stage. The paths to the right show what happens at a given state if an API call reaches the agent to stop
the task or a container instance. The paths to the left show what happens if the container instance a task
is running on is removed, whether by forcefully deregistering it or by terminating the instance.
Task Retirement
A task is scheduled to be retired when AWS detects irreparable failure of the underlying hardware
hosting the task. When a task reaches its scheduled retirement date, it is stopped or terminated by AWS.
If the task is part of a service, then the task is automatically stopped and the service scheduler starts a new one to replace it. If you are using standalone tasks, then you receive a task retirement notification, as described below.
Services
Amazon ECS allows you to run and maintain a specified number (the "desired count") of instances of
a task definition simultaneously in an Amazon ECS cluster. This is called a service. If any of your tasks
should fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your
task definition to replace it and maintain the desired count of tasks in the service.
In addition to maintaining the desired count of tasks in your service, you can optionally run your service
behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the
service.
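As a minimal sketch (service and task definition names are hypothetical), a service that maintains two copies of a task can be created as follows:

```shell
aws ecs create-service \
    --cluster default \
    --service-name my-web-service \
    --task-definition my-task-definition:1 \
    --desired-count 2
```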
Topics
• Service Concepts (p. 161)
• Service Definition Parameters (p. 162)
• Service Load Balancing (p. 165)
• Service Auto Scaling (p. 179)
• Creating a Service (p. 188)
• Updating a Service (p. 194)
• Deleting a Service (p. 196)
Service Concepts
• If a task in a service stops, the service scheduler launches a replacement task. This process continues until your service reaches the number of desired running tasks.
• If a service is unable to reach the desired number of running tasks because a task repeatedly fails to start, the scheduler continues attempting to start it. If the task continues to fail after multiple attempts, a service event message is displayed. For more information, see Service Event Messages (p. 360).
• You can optionally run your service behind a load balancer. For more information, see Service Load
Balancing (p. 165).
• You can optionally specify a deployment configuration for your service. During a deployment (which is
triggered by updating the task definition or desired count of a service), the service scheduler uses the
minimum healthy percent and maximum percent parameters to determine the deployment strategy.
For more information, see Service Definition Parameters (p. 162).
• When the service scheduler launches new tasks or stops running tasks that use the Fargate launch
type, it attempts to maintain balance across the Availability Zones in your service.
• When the service scheduler launches new tasks using the EC2 launch type, the scheduler uses the
following logic:
• Determine which of the container instances in your cluster can support your service's task definition
(for example, they have the required CPU, memory, ports, and container instance attributes).
• Determine which container instances satisfy any placement constraints that are defined for the
service.
• If there is a placement strategy defined, use that strategy to select an instance from the remaining
candidates.
• If there is no placement strategy defined, balance tasks across the Availability Zones in your cluster
with the following logic:
• Sort the valid container instances by the fewest number of running tasks for this service in the
same Availability Zone as the instance. For example, if zone A has one running service task and
zones B and C each have zero, valid container instances in either zone B or C are considered
optimal for placement.
• Place the new service task on a valid container instance in an optimal Availability Zone (based on
the previous steps), favoring container instances with the fewest number of running tasks for this
service.
• When the service scheduler stops running tasks, it attempts to maintain balance across the Availability
Zones in your cluster. For tasks using the EC2 launch type, the scheduler uses the following logic:
• If a placement strategy is defined, use that strategy to select which tasks to terminate. For example,
if a service has an Availability Zone spread strategy defined, then a task will be selected which leaves
the remaining tasks with the best spread.
• If no placement strategy is defined, maintain balance across the Availability Zones in your cluster
with the following logic:
• Sort the container instances by the largest number of running tasks for this service in the same
Availability Zone as the instance. For example, if zone A has one running service task and zones
B and C each have two, container instances in either zone B or C are considered optimal for
termination.
• Stop the task on a container instance in an optimal Availability Zone (based on the previous steps),
favoring container instances with the largest number of running tasks for this service.
{
    "cluster": "",
    "serviceName": "",
    "taskDefinition": "",
    "loadBalancers": [
        {
            "targetGroupArn": "",
            "loadBalancerName": "",
            "containerName": "",
            "containerPort": 0
        }
    ],
    "desiredCount": 0,
    "clientToken": "",
    "launchType": "FARGATE",
    "platformVersion": "",
    "role": "",
    "deploymentConfiguration": {
        "maximumPercent": 0,
        "minimumHealthyPercent": 0
    },
    "placementConstraints": [
        {
            "type": "distinctInstance",
            "expression": ""
        }
    ],
    "placementStrategy": [
        {
            "type": "binpack",
            "field": ""
        }
    ],
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": [
                ""
            ],
            "securityGroups": [
                ""
            ],
            "assignPublicIp": "ENABLED"
        }
    },
    "healthCheckGracePeriodSeconds": 0
}
Note
You can create the above service definition template with the following AWS CLI command.
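Assuming a current version of the AWS CLI, the skeleton is produced by the --generate-cli-skeleton option:

```shell
aws ecs create-service --generate-cli-skeleton
```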
cluster
The short name or full Amazon Resource Name (ARN) of the cluster on which to run your service. If
you do not specify a cluster, the default cluster is assumed.
serviceName
The name of your service. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and
underscores are allowed. Service names must be unique within a cluster, but you can have similarly
named services in multiple clusters within a region or across multiple regions.
taskDefinition
The family and revision (family:revision) or full ARN of the task definition to run in your
service. If a revision is not specified, the latest ACTIVE revision is used.
loadBalancers
A load balancer object representing the load balancer to use with your service. Currently, you are
limited to one load balancer or target group per service. After you create a service, the load balancer
name or target group ARN, container name, and container port specified in the service definition are
immutable.
For Classic Load Balancers, this object must contain the load balancer name, the container name (as
it appears in a container definition), and the container port to access from the load balancer. When a
task from this service is placed on a container instance, the container instance is registered with the
load balancer specified here.
For Application Load Balancers and Network Load Balancers, this object must contain the load
balancer target group ARN, the container name (as it appears in a container definition), and the
container port to access from the load balancer. When a task from this service is placed on a
container instance, the container instance and port combination is registered as a target in the
target group specified here.
targetGroupArn
The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group associated
with a service.
loadBalancerName
The name of the load balancer to associate with the service.
containerName
The name of the container (as it appears in a container definition) to associate with the load
balancer.
containerPort
The port on the container to associate with the load balancer. This port must correspond to a
containerPort in the service's task definition. Your container instances must allow ingress
traffic on the hostPort of the port mapping.
desiredCount
The number of instantiations of the specified task definition to place and keep running on your
cluster.
clientToken
Unique, case-sensitive identifier you provide to ensure the idempotency of the request. Up to 32
ASCII characters are allowed.
launchType
The launch type on which to run your service. If a launch type is not specified, EC2 is used by default. For more information, see Amazon ECS Launch Types (p. 132).
platformVersion
The platform version on which to run your service. If one is not specified, the latest version will be
used by default.
AWS Fargate platform versions refer to a specific runtime environment for Fargate task infrastructure. If you specify the LATEST platform version when running a task or creating a service, you get the most current platform version available for your tasks. When you scale up your service, those tasks receive the platform version that was specified on the service's current deployment. For more information, see AWS Fargate Platform Versions (p. 99).
role
The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make
calls to your load balancer on your behalf. This parameter is required if you are using a load balancer
with your service. If you specify the role parameter, you must also specify a load balancer object
with the loadBalancers parameter.
If your specified role has a path other than /, then you must either specify the full role ARN (this
is recommended) or prefix the role name with the path. For example, if a role with the name bar
has a path of /foo/ then you would specify /foo/bar as the role name. For more information, see
Friendly Names and Paths in the IAM User Guide.
deploymentConfiguration
Optional deployment parameters that control how many tasks run during the deployment and the
ordering of stopping and starting tasks.
maximumPercent
The maximumPercent parameter represents an upper limit on the number of your service's
tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage
of the desiredCount (rounded down to the nearest integer). This parameter enables you
to define the deployment batch size. For example, if your service has a desiredCount of
four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks
before stopping the four older tasks (provided that the cluster resources required to do this are
available). The default value for maximumPercent is 200%.
The maximum number of tasks during a deployment is the desiredCount multiplied by the
maximumPercent/100, rounded down to the nearest integer value.
minimumHealthyPercent
The minimum number of healthy tasks during a deployment is the desiredCount multiplied
by the minimumHealthyPercent/100, rounded up to the nearest integer value.
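A sketch of that arithmetic (floor for the maximum, ceiling for the minimum), using a hypothetical desiredCount of four:

```shell
desired_count=4
maximum_percent=200
minimum_healthy_percent=50

# Maximum tasks during a deployment: floor(desiredCount * maximumPercent / 100).
# Shell integer division truncates, which floors positive values.
max_tasks=$(( desired_count * maximum_percent / 100 ))

# Minimum healthy tasks: ceil(desiredCount * minimumHealthyPercent / 100).
min_tasks=$(( (desired_count * minimum_healthy_percent + 99) / 100 ))

echo "max=$max_tasks min=$min_tasks"   # max=8 min=2
```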
placementConstraints
An array of placement constraint objects to use for tasks in your service. You can specify a maximum
of 10 constraints per task (this limit includes constraints in the task definition and those specified at
run time). If you are using the Fargate launch type, task placement constraints are not supported.
placementStrategy
The placement strategy objects to use for tasks in your service. You can specify a maximum of four
strategy rules per service.
networkConfiguration
The network configuration for the service. This parameter is required for task definitions that use
the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for
other network modes. If using the Fargate launch type, the awsvpc network mode is required. For
more information, see Task Networking with the awsvpc Network Mode (p. 131).
awsvpcConfiguration
An object representing the subnets and security groups for a task or service.
subnets
The subnets associated with the task or service.
securityGroups
The security groups associated with the task or service. If you do not specify a security group, the default security group for the VPC is used.
healthCheckGracePeriodSeconds
The period of time, in seconds, that the Amazon ECS service scheduler should ignore unhealthy
Elastic Load Balancing target health checks after a task has first started. This is only valid if your
service is configured to use a load balancer. If your service's tasks take a while to start and respond
to ELB health checks, you can specify a health check grace period of up to 1,800 seconds during
which the ECS service scheduler will ignore ELB health check status. This grace period can prevent
the ECS service scheduler from marking tasks as unhealthy and stopping them before they have time
to come up.
Elastic Load Balancing provides three types of load balancers: Application Load Balancers, Network Load
Balancers, and Classic Load Balancers.
An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports
path-based routing, and can route requests to one or more ports on each container instance in your
cluster. Application Load Balancers support dynamic host port mapping. For example, if your task's
container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the
host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768
to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX container
is registered with the Application Load Balancer as an instance ID and port combination, and traffic is
distributed to the instance ID and port corresponding to that container. This dynamic mapping allows
you to have multiple tasks from a single service on the same container instance. For more information,
see the User Guide for Application Load Balancers.
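For example, the dynamic mapping described above corresponds to a container definition fragment like the following (only the port mapping is shown; a hostPort of 0 requests a dynamically assigned host port):

```json
"portMappings": [
    {
        "containerPort": 80,
        "hostPort": 0
    }
]
```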
A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It can handle millions
of requests per second. After the load balancer receives a connection, it selects a target from the target
group for the default rule using a flow hash routing algorithm. It attempts to open a TCP connection to
the selected target on the port specified in the listener configuration. It forwards the request without
modifying the headers. Network Load Balancers support dynamic host port mapping. For example, if
your task's container definition specifies port 80 for an NGINX container port, and port 0 for the host
port, then the host port is dynamically chosen from the ephemeral port range of the container instance
(such as 32768 to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the
NGINX container is registered with the Network Load Balancer as an instance ID and port combination,
and traffic is distributed to the instance ID and port corresponding to that container. This dynamic
mapping allows you to have multiple tasks from a single service on the same container instance. For
more information, see the User Guide for Network Load Balancers.
A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL) or the application
layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed relationship between the load
balancer port and the container instance port. For example, it is possible to map the load balancer
port 80 to the container instance port 3030 and the load balancer port 4040 to the container instance
port 4040. However, it is not possible to map the load balancer port 80 to port 3030 on one container
instance and port 4040 on another container instance. This static mapping requires that your cluster
has at least as many container instances as the desired count of a single service that uses a Classic Load
Balancer. For more information, see the User Guide for Classic Load Balancers.
Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers, and Amazon ECS services can use any of these load balancer types. Application Load Balancers are used to route HTTP/HTTPS traffic. Network Load Balancers and Classic Load Balancers are used to route TCP (Layer 4) traffic.
Application Load Balancers offer several features that make them particularly attractive for use with
Amazon ECS services:
• Application Load Balancers allow containers to use dynamic host port mapping (so that multiple tasks
from the same service are allowed per container instance).
• Application Load Balancers support path-based routing and priority rules (so that multiple services can
use the same listener port on a single Application Load Balancer).
We recommend that you use Application Load Balancers for your Amazon ECS services so that you can
take advantage of these latest features. For more information about Elastic Load Balancing and the
differences between the load balancer types, see the Elastic Load Balancing User Guide.
Note
Currently, Amazon ECS services can only specify a single load balancer or target group. If your
service requires access to multiple load balanced ports (for example, port 80 and port 443 for
an HTTP/HTTPS service), you must use a Classic Load Balancer with multiple listeners. To use
an Application Load Balancer, separate the single HTTP/HTTPS service into two services, where
each handles requests for different ports. Then, each service could use a different target group
behind a single Application Load Balancer.
Topics
• Load Balancing Concepts (p. 169)
• Check the Service Role for Your Account (p. 169)
In most cases, the Amazon ECS service role is automatically created for you in the Amazon ECS console
first run experience. You can use the following procedure to check and see if your account already has an
Amazon ECS service role.
5. In the Managed Policies section, ensure that the AmazonEC2ContainerServiceRole managed policy
is attached to the role. If the policy is attached, your Amazon ECS service role is properly configured.
If not, follow the substeps below to attach the policy.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
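If the managed policy is missing, it can also be attached from the AWS CLI; this sketch assumes the service role is named ecsServiceRole:

```shell
# Assumes the Amazon ECS service role is named ecsServiceRole.
aws iam attach-role-policy \
    --role-name ecsServiceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
```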
Topics
• Creating an Application Load Balancer (p. 171)
• Creating a Network Load Balancer (p. 174)
• Creating a Classic Load Balancer (p. 175)
A listener is a process that checks for connection requests. It is configured with a protocol and a port for the frontend (client to load balancer) connections, and a protocol and a port for the backend (load balancer to backend instance) connections. In this example, you configure a listener that accepts HTTP requests on port 80 and sends them to the containers in your tasks on port 80 using HTTP.
1. If you have a certificate from AWS Certificate Manager, choose Choose an existing certificate from
AWS Certificate Manager (ACM), and then choose the certificate from Certificate name.
2. If you have already uploaded a certificate using IAM, choose Choose an existing certificate from
AWS Identity and Access Management (IAM), and then choose your certificate from Certificate
name.
3. If you have a certificate ready to upload, choose Upload a new SSL Certificate to AWS Identity and
Access Management (IAM). For Certificate name, type a name for the certificate. For Private Key,
copy and paste the contents of the private key file (PEM-encoded). In Public Key Certificate, copy
and paste the contents of the public key certificate file (PEM-encoded). In Certificate Chain, copy
and paste the contents of the certificate chain file (PEM-encoded), unless you are using a self-signed
certificate and it's not important that browsers implicitly accept the certificate.
4. For Select policy, choose a predefined security policy. For details on the security policies, see
Security Policies.
5. Choose Next: Configure Security Groups.
1. On the Assign Security Groups page, choose Create a new security group.
2. Enter a name and description for your security group, or leave the default name and description.
This new security group contains a rule that allows traffic to the port that you configured your
listener to use.
Note
Later in this topic, you create a security group rule for your container instances that allows
traffic on all ports coming from the security group created here, so that the Application
Load Balancer can route traffic to dynamically assigned host ports on your container
instances.
Configure Routing
In this section, you create a target group for your load balancer and the health check criteria for targets
that are registered within that group.
Register Targets
Your load balancer distributes traffic between the targets that are registered to its target groups.
When you associate a target group to an Amazon ECS service, Amazon ECS automatically registers and
deregisters containers with your target group. Because Amazon ECS handles target registration, you do
not add targets to your target group at this time.
1. In the Registered instances section, ensure that no instances are selected for registration.
2. Choose Next: Review to go to the next page in the wizard.
To allow inbound traffic from your load balancer to your container instances
A listener is a process that checks for connection requests. It is configured with a protocol and port
for the frontend (client to load balancer) connections, and a protocol and port for the backend (load
balancer to backend instance) connections. In this example, you configure a listener that accepts HTTP
requests on port 80 and sends them to the containers in your tasks on port 80 using HTTP.
Note
If you plan on routing traffic to more than one target group, see Listener Rules for details on how to add host-based or path-based rules.
d. For Availability Zones, select the VPC that you used for your EC2 instances. For each Availability Zone that you used to launch your EC2 instances, select an Availability Zone and then select the public subnet for that Availability Zone. To associate an Elastic IP address with the subnet, select it from Elastic IP.
e. Choose Next: Configure Routing.
Configure Routing
You register targets, such as Amazon EC2 instances, with a target group. The target group that you
configure in this step is used as the target group in the listener rule, which forwards requests to the
target group. For more information, see Target Groups for Your Network Load Balancers.
1. In the Registered instances section, ensure that no instances are selected for registration.
2. Choose Next: Review to go to the next page in the wizard.
Note that you can create your Classic Load Balancer for use with EC2-Classic or a VPC. Some of the tasks
described in these procedures apply only to load balancers in a VPC.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for the frontend (client to load balancer) connections, and a protocol and a port for the backend (load balancer to backend instance) connections. In this example, you configure a listener that accepts HTTP requests on port 80 and sends them to the backend instances on port 80 using HTTP.
The load balancer name you choose must be unique within your set of load balancers, must have a
maximum of 32 characters, and must only contain alphanumeric characters or hyphens.
7. For Create LB inside, select the same network that your container instances are located in: EC2-
Classic or a specific VPC.
8. The default values configure an HTTP load balancer that forwards traffic from port 80 at the
load balancer to port 80 of your container instances, but you can modify these values for your
application. For more information, see Listeners for Your Classic Load Balancer in the User Guide for
Classic Load Balancers.
9. [EC2-VPC] To improve the availability of your load balancer, select at least two subnets in different
Availability Zones. Your load balancer subnet configuration must include all Availability Zones that
your container instances reside in. In the Select Subnets section, under Available Subnets, select the
subnets. The subnets that you select are moved under Selected Subnets.
Note
If you selected EC2-Classic as your network, or you have a default VPC but did not choose
Enable advanced VPC configuration, you do not see Select Subnets.
10. Choose Next: Assign Security Groups to go to the next page in the wizard.
Amazon ECS does not automatically update the security groups associated with Elastic Load Balancing
load balancers or Amazon ECS container instances.
Note
If you selected EC2-Classic as your network, you do not see this page in the wizard and you can
go to the next step. Elastic Load Balancing provides a security group that is assigned to your
load balancer for EC2-Classic automatically.
1. On the Assign Security Groups page, choose Create a new security group.
2. Enter a name and description for your security group, or leave the default name and description.
This new security group contains a rule that allows traffic to the port that you configured your load
balancer to use. If you specified a different port for the health checks, you must choose Add Rule to
add a rule that allows inbound traffic to that port as well.
Note
You should also assign this security group to container instances in your service, or another
security group with the same rules.
3. Choose Next: Configure Security Settings to go to the next page in the wizard.
2. Choose Next: Add EC2 Instances to go to the next page in the wizard.
1. On the Add EC2 Instances page, for Add Instances to Load Balancer, ensure that no instances are
selected for registration.
2. Leave the other fields at their default values.
3. Choose Next: Add Tags to go to the next page in the wizard.
1. On the Add Tags page, specify a key and a value for the tag.
2. To add another tag, choose Create Tag and specify a key and a value for the tag.
3. After you are finished adding tags, choose Review and Create.
1. On the Review page, check your settings. If you need to make changes to the initial settings, choose
the corresponding edit link.
2. Choose Create to create your load balancer.
3. After you are notified that your load balancer was created, choose Close.
Amazon ECS publishes CloudWatch metrics with your service’s average CPU and memory usage. You can
use these service utilization metrics to scale your service up to deal with high demand at peak times, and
to scale your service down to reduce costs during periods of low utilization. For more information, see
Service Utilization (p. 205).
You can also use CloudWatch metrics published by other services, or custom metrics that are specific to
your application. For example, a web service could increase the number of tasks based on Elastic Load
Balancing metrics such as SurgeQueueLength, and a batch job could increase the number of tasks
based on Amazon SQS metrics like ApproximateNumberOfMessagesVisible.
You can also use Service Auto Scaling in conjunction with Auto Scaling for Amazon EC2 on your Amazon
ECS cluster to scale your cluster and your service together in response to demand. For more information, see
Tutorial: Scaling Container Instances with CloudWatch Alarms (p. 208).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "application-autoscaling:*",
        "cloudwatch:DescribeAlarms",
        "cloudwatch:PutMetricAlarm"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
The Create Services (p. 262) and Update Services (p. 263) IAM policy examples show the permissions
that are required for IAM users to use Service Auto Scaling in the AWS Management Console.
The Application Auto Scaling service needs permission to describe your ECS services and CloudWatch
alarms, as well as permissions to modify your service's desired count on your behalf. You must create an
IAM role (ecsAutoscaleRole) for your ECS services to provide these permissions and then associate
that role with your service before it can use Application Auto Scaling. If an IAM user has the required
permissions to use Service Auto Scaling in the Amazon ECS console, create IAM roles, and attach IAM
role policies to them, then that user can create this role automatically as part of the Amazon ECS
console create service or update service (p. 194) workflows, and then use the role for any
other service later (in the console or with the CLI/SDKs). You can also create the role by following the
procedures in Amazon ECS Service Auto Scaling IAM Role (p. 249).
When you configure a service to use Service Auto Scaling in the console, your service is automatically
registered as a scalable target with Application Auto Scaling so that you can configure scaling policies
that scale your service up and down. You can also create and update the scaling policies and CloudWatch
alarms that trigger them in the Amazon ECS console.
To create a new ECS service that uses Service Auto Scaling, see Creating a Service (p. 188).
To update an existing service to use Service Auto Scaling, see Updating a Service (p. 194).
• Service Auto Scaling is made possible by a combination of the Amazon ECS, CloudWatch, and
Application Auto Scaling APIs. Services are created and updated with Amazon ECS, alarms are created
with CloudWatch, and scaling policies are created with Application Auto Scaling. For more information
about these specific API operations, see the Amazon Elastic Container Service API Reference, the
Amazon CloudWatch API Reference, and the Application Auto Scaling API Reference. For more
information about the AWS CLI commands for these services, see the ecs, cloudwatch, and application-
autoscaling sections of the AWS Command Line Interface Reference.
• Before your service can use Service Auto Scaling, you must register it as a scalable target with the
Application Auto Scaling RegisterScalableTarget API operation.
• After your ECS service is registered as a scalable target, you can create scaling policies with the
Application Auto Scaling PutScalingPolicy API operation to specify what should happen when your
CloudWatch alarms are triggered.
• After you create the scaling policies for your service, you can create the CloudWatch alarms that
trigger the scaling events for your service with the CloudWatch PutMetricAlarm API operation.
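The three API operations above can be sketched as the request parameters you would pass to boto3 or the AWS CLI. The cluster name, service name, account ID, and alarm name below are placeholders, and the values mirror this guide's tutorial (1–2 tasks, 60-second cooldown, 75% CPU threshold); no AWS call is made:

```python
# RegisterScalableTarget: make the ECS service's DesiredCount scalable.
register_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/default/sample-webapp",   # placeholder cluster/service
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 1,
    "MaxCapacity": 2,
    "RoleARN": "arn:aws:iam::123456789012:role/ecsAutoscaleRole",  # placeholder
}

# PutScalingPolicy: add one task when the alarm breaches.
put_scaling_policy = {
    "PolicyName": "ScaleOutPolicy",
    "ServiceNamespace": "ecs",
    "ResourceId": register_target["ResourceId"],
    "ScalableDimension": register_target["ScalableDimension"],
    "PolicyType": "StepScaling",
    "StepScalingPolicyConfiguration": {
        "AdjustmentType": "ChangeInCapacity",
        "StepAdjustments": [
            {"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}
        ],
        "Cooldown": 60,
    },
}

# PutMetricAlarm: fire when average service CPU utilization exceeds 75%.
put_metric_alarm = {
    "AlarmName": "sample-webapp-cpu-gt-75",          # placeholder name
    "Namespace": "AWS/ECS",
    "MetricName": "CPUUtilization",
    "Dimensions": [
        {"Name": "ClusterName", "Value": "default"},
        {"Name": "ServiceName", "Value": "sample-webapp"},
    ],
    "Statistic": "Average",
    "ComparisonOperator": "GreaterThanThreshold",
    "Threshold": 75.0,
    "Period": 60,
    "EvaluationPeriods": 1,
    # "AlarmActions" would carry the policy ARN returned by PutScalingPolicy.
}
```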
Amazon ECS publishes CloudWatch metrics with your service’s average CPU and memory usage. You can
use these service utilization metrics to scale your service up to deal with high demand at peak times, and
to scale your service down to reduce costs during periods of low utilization. For more information, see
Service Utilization (p. 205).
In this tutorial, you create a cluster and a service (that runs behind an Elastic Load Balancing load
balancer) using the Amazon ECS first run wizard. Then you configure Service Auto Scaling on the service
with CloudWatch alarms that use the CPUUtilization metric to scale your service up or down,
depending on the current application load.
When the CPU utilization of your service rises above 75% (meaning that more than 75% of the CPU that
is reserved for the service is being used), the scale out alarm triggers Service Auto Scaling to add another
task to your service to help out with the increased load. Conversely, when the CPU utilization of your
service drops below 25%, the scale in alarm triggers a decrease in the service's desired count to free up
those cluster resources for other tasks and services.
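The scale-out and scale-in thresholds described above amount to a simple decision rule, which can be sketched as follows (the function name is hypothetical; in practice CloudWatch alarms make this decision, not your code):

```python
def scaling_decision(cpu_percent, scale_out_at=75.0, scale_in_at=25.0):
    """Mirror the tutorial's alarm thresholds: scale out above 75% CPU
    utilization, scale in below 25%, otherwise leave the desired count alone."""
    if cpu_percent > scale_out_at:
        return "scale out"   # add a task to the service
    if cpu_percent < scale_in_at:
        return "scale in"    # remove a task to free cluster resources
    return "no change"

print(scaling_decision(90.0))  # scale out
print(scaling_decision(10.0))  # scale in
print(scaling_decision(50.0))  # no change
```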
Prerequisites
This tutorial assumes that you have an AWS account and an IAM administrative user with permissions to
perform all of the actions described within, and an Amazon EC2 key pair in the current region. If you do
not have these resources, or you are not sure, you can create them by following the steps in Setting Up
with Amazon ECS (p. 8).
For this tutorial, you create a cluster called service-autoscaling and a service called sample-
webapp.
For this tutorial, you will not use Amazon ECR, so be sure to clear the lower option. Choose Continue
to proceed.
3. On the Create a task definition page, leave all of the default options and choose Next step.
4. On the Configure service page, for Container name: host port, choose simple-app:80.
Important
Elastic Load Balancing load balancers incur costs while they exist in your AWS account.
For more information, see Elastic Load Balancing Pricing.
5. For Select IAM role for service, choose an existing Amazon ECS service role (ecsServiceRole) that
you have already created, or choose Create new role to create the required IAM role for your service.
6. The remaining default values here are set up for the sample application, so leave them as they are
and choose Next step.
7. On the Configure cluster page, enter the following information:
You are directed to a Launch Status page that shows the status of your launch and describes each
step of the process (this can take a few minutes to complete while your Auto Scaling group is
created and populated).
8. When your cluster and service are created, choose View service to view your new service.
1. On the Service: sample-webapp page, your service configuration should look similar to the image
below (although the task definition revision and load balancer name will likely be different). Choose
Update to update your new service.
3. For Service Auto Scaling, choose Configure Service Auto Scaling to adjust your service’s desired
count.
4. For Minimum number of tasks, enter 1 for the lower limit of the number of tasks for Service Auto
Scaling to use. Your service's desired count will not be automatically adjusted below this amount.
5. For Desired number of tasks, this field is pre-populated with the value you entered earlier. This
value must be between the minimum and maximum number of tasks specified on this page. Leave
this value at 1.
6. For Maximum number of tasks, enter 2 for the upper limit of the number of tasks for Service Auto
Scaling to use. Your service's desired count will not be automatically adjusted above this amount.
7. For IAM role for Service Auto Scaling, choose an IAM role to authorize the Application Auto Scaling
service to adjust your service's desired count on your behalf. If you have not previously created such
a role, choose Create new role and the role is created for you. For future reference, the role that is
created for you is called ecsAutoscaleRole. For more information, see Amazon ECS Service Auto
Scaling IAM Role (p. 249).
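The minimum, desired, and maximum values entered in the steps above bound every automatic adjustment: Service Auto Scaling never moves the desired count outside [minimum, maximum]. A minimal sketch of that clamping, using this tutorial's values of 1 and 2:

```python
def clamp_desired_count(desired, minimum=1, maximum=2):
    """Service Auto Scaling never adjusts the desired count below the
    minimum or above the maximum configured above (1 and 2 in this tutorial)."""
    return max(minimum, min(maximum, desired))

print(clamp_desired_count(0))  # 1 (raised to the minimum)
print(clamp_desired_count(5))  # 2 (capped at the maximum)
print(clamp_desired_count(2))  # 2 (already within bounds)
```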
These steps will help you create scaling policies and CloudWatch alarms that can be used to trigger
scaling activities for your service. You can create a scale out alarm to increase the desired count of your
service, and a scale in alarm to decrease the desired count of your service.
1. On the Service Auto Scaling (optional) page, choose Add scaling policy to configure your
ScaleOutPolicy.
2. For Policy name, enter ScaleOutPolicy
3. For Execute policy when, choose Create new alarm.
5. For Cooldown period, enter 60 for the number of seconds between scaling actions and choose Save
to save your ScaleOutPolicy.
6. After you return to the Service Auto Scaling (optional) page, choose Add scaling policy to
configure your ScaleInPolicy.
7. For Policy name, enter ScaleInPolicy
8. For Execute policy when, choose Create new alarm.
10. For Cooldown period, enter 60 for the number of seconds between scaling actions and choose Save
to save your ScaleInPolicy.
11. After you return to the Service Auto Scaling (optional) page, choose Save to finish your Service
Auto Scaling configuration.
12. On the Update Service page, choose Update Service.
13. When your service status is finished updating, choose View Service.
After the ApacheBench utility finishes the requests, the service CPU utilization should drop below your
25% threshold, triggering a scale in activity that returns the service's desired count to 1.
1. From your service's main view page in the console, choose the load balancer name to view its details
in the Amazon EC2 console. You need the load balancer's DNS name, which should look something
like this: EC2Contai-EcsElast-SMAKV74U23PH-96652279.us-east-1.elb.amazonaws.com.
2. Use the ApacheBench (ab) utility to make thousands of HTTP requests to your load balancer in a
short period of time.
Note
This command is installed by default on macOS, and it is available for many Linux
distributions, as well. For example, you can install ab on Amazon Linux with the following
command:
Run the following command, substituting your load balancer's DNS name.
Step 4: Cleaning Up
When you have completed this tutorial, you may choose to keep your cluster, Auto Scaling group, load
balancer, and EC2 instances. However, if you are not actively using these resources, you should consider
cleaning them up so that your account does not incur unnecessary charges.
1. In the Amazon ECS console, switch to Clusters in the left navigation pane.
2. On the Clusters page, choose the x in the upper right hand corner of the service-autoscaling cluster
to delete the cluster.
3. Review and choose Delete to confirm your cluster deletion. It may take a few minutes for the cluster
AWS CloudFormation stack to finish cleaning up.
4. In the CloudWatch console Alarms view, select the alarms that begin with sample-webapp-cpu- and
then choose Delete to delete the alarms.
5. Choose Yes, Delete to confirm your alarm deletion.
In this tutorial, you configure Service Auto Scaling on the service with a custom CloudWatch alarm that
uses the HealthyHostCount metric to scale your service up or down, depending on the number of
healthy hosts behind your Application Load Balancer.
Prerequisites
This tutorial assumes that you have an AWS account and an IAM administrative user with permissions
to perform all of the actions described within. This tutorial also assumes you have already created your
cluster and service which includes an Application Load Balancer. If you do not have these resources, or
you are not sure, you can create them by following the steps in Setting Up with Amazon ECS (p. 8).
To create an SNS topic, choose New list. For Send notification to, type a name for the SNS topic
(for example, HealthyHostCount), and for Email list, type a comma-separated list of email
addresses to be notified when the alarm changes to the ALARM state. Each email address is sent a
topic subscription confirmation email. You must confirm the subscription before notifications can be
sent.
12. Choose Create Alarm.
2. On the navigation bar, select the region that your cluster is in.
3. In the navigation pane, choose Clusters.
4. On the Clusters page, select the name of the cluster that your service resides in.
5. On the Cluster: name page, choose Services.
6. Check the box to the left of the service to update and choose Update.
7. On the Update Service page, choose Configure Service Auto Scaling.
8. On the Service Auto Scaling page, do the following:
a. Select Configure Service Auto Scaling to adjust your service’s desired count.
b. For Minimum number of tasks, enter 1.
c. For Desired number of tasks, enter 2.
d. For Maximum number of tasks, enter 3.
e. For IAM role for Service Auto Scaling, choose an IAM role to authorize the Application Auto
Scaling service to adjust your service's desired count on your behalf. If you have not previously
created such a role, choose Create new role and the role will be created for you. For future
reference, the role that is created for you is called ecsAutoscaleRole. For more information,
see Amazon ECS Service Auto Scaling IAM Role (p. 249).
9. Under the Automatic task scaling policies section, choose Add scaling policy.
10. On the Add policy page, do the following:
a. For Policy name, enter a descriptive name for your policy (for example, HealthyHostCount).
b. For Execute policy when, select Use an existing Alarm and choose the alarm you created in the
previous section.
c. For Scaling action, select Add and then enter 1 for the number of tasks when 0 >
HealthyHostCount > -infinity.
d. (Optional) You can repeat Step 10.c (p. 188) to configure multiple scaling actions for a single
alarm (for example, to remove 1 task if HealthyHostCount is above 3).
e. For Cooldown period, enter 300 as the number of seconds between scaling actions.
f. Choose Save.
11. On the Service Auto Scaling page, choose Save to complete the update of your service.
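The step-scaling action configured above (add 1 task when HealthyHostCount falls into the alarm's interval) can be sketched as the bound-matching that Application Auto Scaling performs. The interval semantics and the threshold value of 1 are assumptions for illustration:

```python
def step_adjustment(metric_value, threshold, steps):
    """Pick the scaling adjustment whose interval contains the metric.
    Each step is (lower_bound, upper_bound, adjustment), with bounds
    relative to the alarm threshold; None means unbounded (infinity).
    This mirrors an action like "Add 1 task when 0 > HealthyHostCount"."""
    delta = metric_value - threshold
    for lower, upper, adjustment in steps:
        if (lower is None or delta >= lower) and (upper is None or delta < upper):
            return adjustment
    return 0  # metric outside every configured interval: no change

# One action: add 1 task whenever HealthyHostCount drops below the
# (assumed) alarm threshold of 1 healthy host.
steps = [(None, 0, +1)]
print(step_adjustment(0, 1, steps))  # 1: no healthy hosts, add a task
print(step_adjustment(2, 1, steps))  # 0: hosts healthy, no change
```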
Creating a Service
When you create an Amazon ECS service, you specify the basic parameters that define what makes up
your service and how it should behave. These parameters create a service definition.
You can optionally configure additional features, such as an Elastic Load Balancing load balancer
to distribute traffic across the containers in your service. For more information, see Service Load
Balancing (p. 165). You must verify that your container instances can receive traffic from your load
balancers. You can allow traffic to all ports on your container instances from your load balancer's security
group to ensure that traffic can reach any containers that use dynamically assigned ports.
This procedure covers creating a service with the basic service definition parameters that are required.
After you have configured these parameters, you can create your service or move on to the procedures
for optional service definition configuration, such as configuring your service to use a load balancer.
• Launch type: Choose whether your service should run tasks on Fargate infrastructure, or Amazon
EC2 container instances that you maintain.
• Cluster: Select the cluster in which to create your service.
• Service name: Type a unique name for your service.
• Number of tasks: Type the number of tasks to launch and maintain on your cluster.
Note
If your launch type is EC2, and your task definition uses static host port mappings on your
container instances, then you need at least one container instance with the specified port
available in your cluster for each task in your service. This restriction does not apply if
your task definition uses dynamic host port mappings with the bridge network mode.
For more information, see portMappings (p. 110).
• Minimum healthy percent: Specify a lower limit on the number of your service's tasks that
must remain in the RUNNING state during a deployment, as a percentage of the service's desired
number of tasks (rounded up to the nearest integer). For example, if your service has a desired
number of four tasks and a minimum healthy percent of 50%, the scheduler may stop two
existing tasks to free up cluster capacity before starting two new tasks. Tasks for services that
do not use a load balancer are considered healthy if they are in the RUNNING state; tasks for
services that do use a load balancer are considered healthy if they are in the RUNNING state and
the container instances they are hosted on are reported as healthy by the load balancer. The default value
for minimum healthy percent is 50% in the console, and 100% with the AWS CLI or SDKs.
• Maximum percent: Specify an upper limit on the number of your service's tasks that are allowed
in the RUNNING or PENDING state during a deployment, as a percentage of the service's desired
number of tasks (rounded down to the nearest integer). For example, if your service has a desired
number of four tasks and a maximum percent value of 200%, the scheduler may start four new
tasks before stopping the four older tasks (provided that the cluster resources required to do this
are available). The default value for maximum percent is 200%.
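The rounding rules in the two bullets above determine the task-count limits the scheduler enforces during a deployment: the healthy floor rounds up and the running-plus-pending ceiling rounds down. A minimal sketch (the function name is hypothetical):

```python
import math

def deployment_bounds(desired, minimum_healthy_percent, maximum_percent):
    """Task-count limits during a deployment, per the rules above:
    the minimum-healthy floor rounds up to the nearest integer,
    the maximum ceiling rounds down."""
    floor = math.ceil(desired * minimum_healthy_percent / 100)
    ceiling = math.floor(desired * maximum_percent / 100)
    return floor, ceiling

# The example from the text: 4 desired tasks, 50% minimum, 200% maximum.
# At least 2 tasks must stay RUNNING; up to 8 may run during the rollout.
print(deployment_bounds(4, 50, 200))  # (2, 8)
```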
7. (Optional) For Task Placement, you can specify how tasks are placed using task placement strategies
and constraints. Choose from the following options:
• AZ Balanced Spread - distribute tasks across Availability Zones and across container instances in
the Availability Zone.
• AZ Balanced BinPack - distribute tasks across Availability Zones and across container instances
with the least available memory.
• BinPack - distribute tasks based on the least available amount of CPU or memory.
• One Task Per Host - place, at most, one task from the service on each container instance.
• Custom - define your own task placement strategy. See Amazon ECS Task Placement (p. 150) for
examples.
For more information, see Amazon ECS Task Placement (p. 150).
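As a sketch, the console presets above could map onto the placementStrategy and placementConstraints parameters of the CreateService API roughly as follows. This mapping is an assumption drawn from the documented strategy types (spread, binpack) and the distinctInstance constraint, not a confirmed console implementation detail:

```python
# Assumed mapping from console preset names to API placement parameters.
PLACEMENT_PRESETS = {
    "AZ Balanced Spread": [
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "spread", "field": "instanceId"},
    ],
    "AZ Balanced BinPack": [
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
    "BinPack": [
        {"type": "binpack", "field": "memory"},
    ],
    "One Task Per Host": [
        # Expressed as a placement *constraint* rather than a strategy.
        {"type": "distinctInstance"},
    ],
}

print(PLACEMENT_PRESETS["BinPack"][0]["type"])  # binpack
```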
8. Choose Next step to proceed.
Configure Network
VPC and Security Groups
If your service's task definition uses the awsvpc network mode, you must configure VPC, subnet,
and security group settings for your service. For more information, see the section called “Task
Networking” (p. 131).
1. For Cluster VPC, choose the VPC that your container instances reside in.
2. For Subnets, choose the available subnets for your service task placement.
Important
Only private subnets are supported for the awsvpc network mode. Because tasks do not
receive public IP addresses, a NAT gateway is required for outbound internet access, and
inbound internet traffic should be routed through a load balancer.
3. For Security groups, a security group has been created for your service's tasks, which allows HTTP
traffic from the internet (0.0.0.0/0). To edit the name or the rules of this security group, or to choose
an existing security group, choose Edit and then modify your security group settings.
• Health check grace period: Enter the period of time, in seconds, that the Amazon ECS service
scheduler should ignore unhealthy Elastic Load Balancing target health checks after a task has first
started.
If you have an available Elastic Load Balancing load balancer configured, you can attach it to your service
with the following procedures, or you can configure a new load balancer. For more information, see
Creating a Load Balancer (p. 170).
Note
You must create your Elastic Load Balancing load balancer resources before following these
procedures.
First, you must choose the load balancer type to use with your service. Then you can configure your
service to work with the load balancer.
1. If you have not done so already, follow the basic service creation procedures in Configuring Basic
Service Parameters (p. 188).
2. On the Create Service page, choose Configure ELB.
Application Load Balancer
Allows containers to use dynamic host port mapping, which enables you to place multiple tasks
using the same port on a single container instance. Multiple services can use the same listener
port on a single load balancer with rule-based routing and paths.
Network Load Balancer
Allows containers to use dynamic host port mapping, which enables you to place multiple tasks
using the same port on a single container instance. Multiple services can use the same listener
port on a single load balancer with rule-based routing.
Classic Load Balancer
Requires static host port mappings (only one task allowed per container instance); rule-based
routing and paths are not supported.
We recommend that you use Application Load Balancers for your Amazon ECS services so that you
can take advantage of the advanced features available to them.
4. For Select IAM role for service, choose Create new role to create a new role for your service, or
select an existing IAM role to use for your service (by default, this is ecsServiceRole).
Important
If you choose to use an existing ecsServiceRole IAM role, you must verify that the role
has the proper permissions to use Application Load Balancers and Classic Load Balancers.
For more information, see Amazon ECS Service Scheduler IAM Role (p. 247).
5. For ELB Name, choose the name of the load balancer to use with your service. Only load balancers
that correspond to the load balancer type you selected earlier are visible here.
6. The next step depends on the load balancer type for your service. If you've chosen an Application
Load Balancer, follow the steps in To configure an Application Load Balancer (p. 191). If
you've chosen a Network Load Balancer, follow the steps in To configure a Network Load
Balancer (p. 192). If you've chosen a Classic Load Balancer, follow the steps in To configure a Classic
Load Balancer (p. 192).
1. For Select a Container, choose the container and port combination from your task definition that
your load balancer should distribute traffic to, and choose Add to ELB.
2. For Listener port, choose the listener port and protocol of the listener that you created in Creating
an Application Load Balancer (p. 171) (if applicable), or choose create new to create a new listener
and then enter a port number and choose a port protocol in Listener protocol.
3. For Target group name, choose the target group that you created in Creating an Application Load
Balancer (p. 171) (if applicable), or choose create new to create a new target group.
4. (Optional) If you chose to create a new target group, complete the following fields as follows:
• For Target group name, enter a name for your target group.
• For Target group protocol, enter the protocol to use for routing traffic to your tasks.
• For Path pattern, if your listener does not have any existing rules, the default path pattern (/) is
used. If your listener already has a default rule, then you must enter a path pattern that matches
traffic that you want to have sent to your service's target group. For example, if your service is a
web application called web-app, and you want traffic that matches http://my-elb-url/web-
app to route to your service, then you would enter /web-app* as your path pattern. For more
information, see Listener Rules in the User Guide for Application Load Balancers.
• For Health check path, enter the path to which the load balancer should send health check pings.
5. When you are finished configuring your Application Load Balancer, choose Save to save your
configuration and proceed to Review and Create Your Service (p. 194).
1. For Select a Container, choose the container and port combination from your task definition that
your load balancer should distribute traffic to, and choose Add to ELB.
2. For Listener port, choose the listener port and protocol of the listener that you created in Creating
an Application Load Balancer (p. 171) (if applicable), or choose create new to create a new listener
and then enter a port number and choose a port protocol in Listener protocol.
3. For Target group name, choose the target group that you created in Creating an Application Load
Balancer (p. 171) (if applicable), or choose create new to create a new target group.
4. (Optional) If you chose to create a new target group, complete the following fields as follows:
• For Target group name, enter a name for your target group.
• For Target group protocol, enter the protocol to use for routing traffic to your tasks.
• For Path pattern, if your listener does not have any existing rules, the default path pattern (/) is
used. If your listener already has a default rule, then you must enter a path pattern that matches
traffic that you want to have sent to your service's target group. For example, if your service is a
web application called web-app, and you want traffic that matches http://my-elb-url/web-
app to route to your service, then you would enter /web-app* as your path pattern. For more
information, see Listener Rules in the User Guide for Application Load Balancers.
• For Health check path, enter the path to which the load balancer should send health check pings.
5. When you are finished configuring your Network Load Balancer, choose Save to save your
configuration and proceed to Review and Create Your Service (p. 194).
1. The Health check port, Health check protocol, and Health check path fields are all pre-populated
with the values you configured in Creating a Classic Load Balancer (p. 175) (if applicable). You can
update these settings in the Amazon EC2 console.
2. For Container for ELB health check, choose the container to send health checks.
3. When you are finished configuring your Classic Load Balancer, choose Save to save your
configuration and proceed to Review and Create Your Service (p. 194).
1. If you have not done so already, follow the basic service creation procedures in Configuring Basic
Service Parameters (p. 188).
2. On the Create Service page, choose Configure Service Auto Scaling.
3. On the Service Auto Scaling page, select Configure Service Auto Scaling to adjust your service’s
desired count.
4. For Minimum number of tasks, enter the lower limit of the number of tasks for Service Auto Scaling
to use. Your service's desired count is not automatically adjusted below this amount.
5. For Desired number of tasks, this field is pre-populated with the value you entered earlier. You can
change your service's desired count at this time, but this value must be between the minimum and
maximum number of tasks specified on this page.
6. For Maximum number of tasks, enter the upper limit of the number of tasks for Service Auto
Scaling to use. Your service's desired count is not automatically adjusted above this amount.
7. For IAM role for Service Auto Scaling, choose an IAM role to authorize the Application Auto Scaling
service to adjust your service's desired count on your behalf. If you have not previously created such
a role, choose Create new role and the role is created for you. For future reference, the role that is
created for you is called ecsAutoscaleRole. For more information, see Amazon ECS Service Auto
Scaling IAM Role (p. 249).
These steps help you create scaling policies and CloudWatch alarms that can be used to trigger scaling
activities for your service. You can create a Scale out alarm to increase the desired count of your service,
and a Scale in alarm to decrease the desired count of your service.
1. For Policy name, enter a descriptive name for your policy, or use the default policy name that is
already entered.
2. For Execute policy when, select the CloudWatch alarm that you want to use to scale your service up
or down.
You can use an existing CloudWatch alarm that you have previously created, or you can choose to
create a new alarm. The Create new alarm workflow allows you to create CloudWatch alarms that
are based on the CPUUtilization and MemoryUtilization of the service that you are creating.
To use other metrics, you can create your alarm in the CloudWatch console and then return to this
wizard to choose that alarm.
3. (Optional) If you've chosen to create a new alarm, complete the following steps.
a. For Alarm name, enter a descriptive name for your alarm. For example, if your alarm
should trigger when your service CPU utilization exceeds 75%, you could call the alarm
service_name-cpu-gt-75.
b. For ECS service metric, choose the service metric to use for your alarm. For more information
about these service utilization metrics, see Service Utilization (p. 205).
c. For Alarm threshold, enter the following information to configure your alarm:
• Choose the CloudWatch statistic for your alarm (the default value of Average works in many
cases). For more information, see Statistics in the Amazon CloudWatch User Guide.
• Choose the comparison operator for your alarm and enter the value that the comparison
operator checks against (for example, > and 75).
• Enter the number of consecutive periods before the alarm is triggered and the period length.
For example, two consecutive periods of 5 minutes would take 10 minutes before the alarm
triggered. Because your Amazon ECS tasks can scale up and down quickly, you should consider
using a low number of consecutive periods and a short period duration to react to alarms as
soon as possible.
d. Choose Save to save your alarm.
4. For Scaling action, enter the following information to configure how your service responds to the
alarm:
• Choose whether to add to, subtract from, or set a specific desired count for your service.
• If you chose to add or subtract tasks, enter the number of tasks (or percent of existing tasks) to
add or subtract when the scaling action is triggered. If you chose to set the desired count, enter
the desired count that your service should be set to when the scaling action is triggered.
• (Optional) If you chose to add or subtract tasks, choose whether the previous value is used as an
integer or a percent value of the existing desired count.
• Enter the lower boundary of your step scaling adjustment. By default, for your first scaling action,
this value is the metric amount where your alarm is triggered. For example, the following scaling
action adds 100% of the existing desired count when the CPU utilization is greater than 75%.
5. (Optional) You can repeat Step 4 (p. 193) to configure multiple scaling actions for a single alarm
(for example, to add one task if CPU utilization is between 75-85%, and to add two tasks if CPU
utilization is greater than 85%).
6. (Optional) If you chose to add or subtract a percentage of the existing desired count, enter a
minimum increment value for Add tasks in increments of N task(s).
7. For Cooldown period, enter the number of seconds between scaling actions.
8. Repeat Step 1 (p. 193) through Step 7 (p. 194) for the Scale in policy and choose Save to save
your Service Auto Scaling configuration.
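The step scaling behavior described above can be sketched as follows. This is an illustrative model of how a step scaling policy selects an adjustment, not the actual Service Auto Scaling implementation; the function and step layout are assumptions for the example.

```python
# Sketch (not AWS code): how a step scaling policy picks an adjustment.
# Each step is (lower_bound, upper_bound, adjustment), where bounds are
# offsets from the alarm threshold and None means unbounded.

def pick_adjustment(metric_value, threshold, steps):
    """Return the adjustment whose [lower, upper) interval contains
    metric_value - threshold, or 0 if no step matches."""
    delta = metric_value - threshold
    for lower, upper, adjustment in steps:
        low_ok = lower is None or delta >= lower
        high_ok = upper is None or delta < upper
        if low_ok and high_ok:
            return adjustment
    return 0  # metric not beyond the threshold: no scaling action

# Alarm threshold of 75% CPU; add 1 task for 75-85%, 2 tasks above 85%.
steps = [(0, 10, 1), (10, None, 2)]
print(pick_adjustment(80, 75, steps))  # 80% falls in 75-85%: add 1 task
print(pick_adjustment(90, 75, steps))  # 90% is above 85%: add 2 tasks
```

This mirrors the multiple-scaling-actions example in Step 5, where the alarm threshold anchors the lower boundary of the first step.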
Updating a Service
You can update a running service to change the number of tasks that are maintained by a service or
which task definition is used by the tasks. If you have an application that needs more capacity, you can
scale up your service to use more of your container instances (as long as they are available). If you have
unused capacity that you would like to scale down, you can reduce the number of desired tasks in your
service and free up resources.
If you have updated the Docker image of your application, you can create a new task definition with
that image and deploy it to your service. The service scheduler uses the minimum healthy percent and
maximum percent parameters (in the service's deployment configuration) to determine the deployment
strategy.
The minimum healthy percent represents a lower limit on the number of your service's tasks that must
remain in the RUNNING state during a deployment, as a percentage of the desired number of tasks
(rounded up to the nearest integer). This parameter enables you to deploy without using additional
cluster capacity. For example, if your service has a desired number of four tasks and a minimum healthy
percent of 50%, the scheduler may stop two existing tasks to free up cluster capacity before starting
two new tasks. Tasks for services that do not use a load balancer are considered healthy if they are in
the RUNNING state; tasks for services that do use a load balancer are considered healthy if they are in
the RUNNING state and the container instance on which it is hosted is reported as healthy by the load
balancer. The default value for minimum healthy percent is 50% in the console and 100% for the AWS
CLI, the AWS SDKs, and the APIs.
The maximum percent parameter represents an upper limit on the number of your service's tasks that
are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desired
number of tasks (rounded down to the nearest integer). This parameter enables you to define the
deployment batch size. For example, if your service has a desired number of four tasks and a maximum
percent value of 200%, the scheduler may start four new tasks before stopping the four older tasks
(provided that the cluster resources required to do this are available). The default value for maximum
percent is 200%.
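The rounding rules above determine the scheduler's task-count bounds during a deployment. A minimal sketch of that arithmetic (the function name is illustrative, not an ECS API):

```python
import math

# How minimum healthy percent and maximum percent bound the number of
# tasks during a deployment: the lower bound rounds up, the upper bound
# rounds down, per the deployment configuration rules.

def deployment_bounds(desired, min_healthy_pct, max_pct):
    lower = math.ceil(desired * min_healthy_pct / 100)   # must stay RUNNING
    upper = math.floor(desired * max_pct / 100)          # RUNNING or PENDING cap
    return lower, upper

# Desired count of 4, minimum healthy 50%, maximum 200%:
print(deployment_bounds(4, 50, 200))  # (2, 8)
```

With these values the scheduler may stop two tasks before starting replacements, or run up to eight tasks at once, matching the examples in the text.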
When the service scheduler replaces a task during an update, if a load balancer is used by the service,
the service first removes the task from the load balancer and waits for the connections to drain. Then
the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM
signal and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If
the container handles the SIGTERM signal gracefully and exits within 30 seconds from receiving it, no
SIGKILL signal is sent. The service scheduler starts and stops tasks as defined by your minimum healthy
percent and maximum percent settings.
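To take advantage of the 30-second grace period, your container's main process should trap SIGTERM and exit cleanly. A minimal sketch of such a handler (the flag and handler names are illustrative):

```python
import os
import signal

# Minimal sketch of a container entrypoint that handles the SIGTERM
# sent by the equivalent of `docker stop`, so it can exit cleanly
# within the 30-second grace period instead of being sent SIGKILL.

shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # real code: stop accepting work, flush, close

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the scheduler stopping the task by signaling this process:
os.kill(os.getpid(), signal.SIGTERM)
print("graceful shutdown:", shutting_down)
```

If the process exits within the grace period after receiving SIGTERM, no SIGKILL is sent.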
Important
If you are changing the ports used by containers in a task definition, you may need to update
your container instance security groups to work with the updated ports.
If your service uses a load balancer, the load balancer configuration defined for your service
when it was created cannot be changed. If you update the task definition for the service, the
container name and container port that were specified when the service was created must
remain in the task definition.
To change the load balancer name, the container name, or the container port associated with a
service load balancer configuration, you must create a new service.
Amazon ECS does not automatically update the security groups associated with Elastic Load
Balancing load balancers or Amazon ECS container instances.
Deleting a Service
You can delete a service if you have no running tasks in it and the desired task count is zero. If the service
is actively maintaining tasks, you cannot delete it, and you must update the service to a desired task
count of zero. For more information, see Updating a Service (p. 194).
Note
When you delete a service, if there are still running tasks that require cleanup, the service
status moves from ACTIVE to DRAINING, and the service is no longer visible in the console
or in ListServices API operations. After the tasks have stopped, then the service status
moves from DRAINING to INACTIVE. Services in the DRAINING or INACTIVE status can still
be viewed with DescribeServices API operations; however, in the future, INACTIVE services
may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices API
operations on those services will return a ServiceNotFoundException error.
For more information on how to create repositories, push and pull images from Amazon ECR, and set
access controls on your repositories, see the Amazon Elastic Container Registry User Guide.
• Your container instances must be using at least version 1.7.0 of the Amazon ECS container agent. The
latest version of the Amazon ECS–optimized AMI supports ECR images in task definitions. For more
information, including the latest Amazon ECS–optimized AMI IDs, see Amazon ECS Container Agent
Versions (p. 72).
• The Amazon ECS container instance role (ecsInstanceRole) that you use with your container
instances must possess the following IAM policy permissions for Amazon ECR.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        }
    ]
}
Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon
ECS and your AWS solutions. You should collect monitoring data from all of the parts of your AWS
solution so that you can more easily debug a multi-point failure if one occurs. Before you start
monitoring Amazon ECS, however, you should create a monitoring plan that includes answers to the
following questions:
The metrics made available depend on the launch type of the tasks and services in your clusters. If
you are using the Fargate launch type for your services, then CPU and memory utilization metrics are
provided to assist in the monitoring of your services. For the EC2 launch type, you own and need to
monitor the EC2 instances that make up your underlying infrastructure, so additional CPU and
memory reservation and utilization metrics are made available at the cluster, service, and task level.
The next step is to establish a baseline for normal Amazon ECS performance in your environment, by
measuring performance at various times and under different load conditions. As you monitor Amazon
ECS, store historical monitoring data so that you can compare it with current performance data, identify
normal performance patterns and performance anomalies, and devise methods to address issues.
• The CPU and memory and reservation utilization metrics for your Amazon ECS clusters
• The CPU and memory utilization metrics for your Amazon ECS services
Topics
• Monitoring Tools (p. 198)
• Amazon ECS CloudWatch Metrics (p. 200)
• Amazon ECS Event Stream for CloudWatch Events (p. 213)
Monitoring Tools
AWS provides various tools that you can use to monitor Amazon ECS. You can configure some of these
tools to do the monitoring for you, while some of the tools require manual intervention. We recommend
that you automate monitoring tasks as much as possible.
• Amazon CloudWatch alarms – Watch a single metric over a time period that you specify, and perform
one or more actions based on the value of the metric relative to a given threshold over a number of
time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS)
topic or Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a
particular state; the state must have changed and been maintained for a specified number of periods.
For more information, see Amazon ECS CloudWatch Metrics (p. 200).
You can use CloudWatch alarms to scale in and scale out the container instances based on CloudWatch
metrics, such as cluster memory reservation. For more information, see Tutorial: Scaling Container
Instances with CloudWatch Alarms (p. 208).
• Amazon CloudWatch Logs – Monitor, store, and access the log files from the containers in your
Amazon ECS tasks by specifying the awslogs log driver in your task definitions. This method for
accessing logs must be used for tasks using the Fargate launch type, but also works with tasks using
the EC2 launch type. For more information, see Using the awslogs Log Driver (p. 137).
You can also monitor, store, and access the operating system and Amazon ECS container agent
log files from your Amazon ECS container instances. This method for accessing logs can be used
for containers using the EC2 launch type. For more information, see Using CloudWatch Logs with
Container Instances (p. 53).
• Amazon CloudWatch Events – Match events and route them to one or more target functions or
streams to make changes, capture state information, and take corrective action. For more information,
see Amazon ECS Event Stream for CloudWatch Events (p. 213) in this guide and Using Events in the
Amazon CloudWatch User Guide.
• AWS CloudTrail log monitoring – Share log files between accounts, monitor CloudTrail log files in real
time by sending them to CloudWatch Logs, write log processing applications in Java, and validate
that your log files have not changed after delivery by CloudTrail. For more information, see Logging
Amazon ECS API Calls By Using AWS CloudTrail (p. 357) in this guide, and Working with CloudTrail
Log Files in the AWS CloudTrail User Guide.
• AWS Trusted Advisor – Trusted Advisor checks are available to users with a Business or Enterprise
support plan. For more information, see AWS Trusted Advisor.
Topics
• Enabling CloudWatch Metrics (p. 200)
• Available Metrics and Dimensions (p. 200)
• Cluster Reservation (p. 203)
• Cluster Utilization (p. 204)
• Service Utilization (p. 205)
• Service RUNNING Task Count (p. 206)
• Viewing Amazon ECS Metrics (p. 207)
• Tutorial: Scaling Container Instances with CloudWatch Alarms (p. 208)
For any task or service using the EC2 launch type, your Amazon ECS container instances require at
least version 1.4.0 of the container agent to enable CloudWatch metrics; however, we recommend using
the latest container agent version. For information about checking your agent version and updating to
the latest version, see Updating the Amazon ECS Container Agent (p. 74).
If you are starting your agent manually (for example, if you are not using the Amazon ECS-optimized AMI
for your container instances), see Manually Updating the Amazon ECS Container Agent (for Non-Amazon
ECS-optimized AMIs) (p. 79).
The metrics made available depend on the launch type of the tasks and services in your clusters. If
you are using the Fargate launch type for your services, then CPU and memory utilization metrics are
provided to assist in the monitoring of your services. For the EC2 launch type, you own and need to
monitor the EC2 instances that make up your underlying infrastructure, so additional CPU and
memory reservation and utilization metrics are made available at the cluster, service, and task level.
Amazon ECS sends the following metrics to CloudWatch every minute. When Amazon ECS collects
metrics, it collects multiple data points every minute. It then aggregates them to one data point before
sending the data to CloudWatch. So in CloudWatch, one sample count is actually the aggregate of
multiple data points during one minute.
Metric: CPUReservation
Description: The percentage of CPU units that are reserved by running tasks in the cluster.
Valid Statistics: Average, Minimum, Maximum, Sum, Data Samples.
Unit: Percent

Metric: CPUUtilization
Description: The percentage of CPU units that are used in the cluster or service.
Valid Statistics: Average, Minimum, Maximum, Sum, Data Samples.
Unit: Percent

Metric: MemoryReservation
Description: The percentage of memory that is reserved by running tasks in the cluster.
Valid Statistics: Average, Minimum, Maximum, Sum, Data Samples.
Unit: Percent

Metric: MemoryUtilization
Description: The percentage of memory that is used in the cluster or service.
Valid Statistics: Average, Minimum, Maximum, Sum, Data Samples.
Unit: Percent
Note
If you are using tasks with the EC2 launch type and have Linux container instances, the Amazon
ECS container agent relies on Docker stats metrics to gather CPU and memory data for each
container running on the instance. If you are using an Amazon ECS agent prior to version 1.14.0,
ECS includes filesystem cache usage when reporting memory utilization to CloudWatch so your
CloudWatch graphs show a higher than actual memory utilization for tasks. To remediate this,
starting with Amazon ECS agent version 1.14.0, the Amazon ECS container agent excludes the
filesystem cache usage from the memory utilization metric. This change does not impact the
out-of-memory behavior of containers.
Dimension Description
ClusterName This dimension filters the data you request for all
resources in a specified cluster. All Amazon ECS metrics
are filtered by ClusterName.
ServiceName This dimension filters the data you request for all
resources in a specified service within a specified cluster.
Cluster Reservation
Cluster reservation metrics are measured as the percentage of CPU and memory that is reserved by all
Amazon ECS tasks on a cluster when compared to the aggregate CPU and memory that was registered
for each active container instance in the cluster. These metrics are only available for clusters with tasks
or services using the EC2 launch type; they are not supported for the Fargate launch type.
When you run a task in a cluster, Amazon ECS parses its task definition and reserves the aggregate
CPU units and MiB of memory that is specified in its container definitions. Each minute, Amazon ECS
calculates the number of CPU units and MiB of memory that are currently reserved for each task that
is running in the cluster. The total amount of CPU and memory reserved for all tasks running on the
cluster is calculated, and those numbers are reported to CloudWatch as a percentage of the total
registered resources for the cluster. If you specify a soft limit (memoryReservation), then it will be
used to calculate the amount of reserved memory. Otherwise, the hard limit (memory) is used. For more
information about hard and soft limits, see Task Definition Parameters.
For example, a cluster has two active container instances registered, a c4.4xlarge instance and a
c4.large instance. The c4.4xlarge instance registers into the cluster with 16,384 CPU units and
30,158 MiB of memory. The c4.large instance registers with 2,048 CPU units and 3,768 MiB of
memory. The aggregate resources of this cluster are 18,432 CPU units and 33,926 MiB of memory.
If a task definition reserves 1,024 CPU units and 2,048 MiB of memory, and ten tasks are started with
this task definition on this cluster (and no other tasks are currently running), a total of 10,240 CPU units
and 20,480 MiB of memory are reserved, which is reported to CloudWatch as 55% CPU reservation and
60% memory reservation for the cluster.
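The reservation arithmetic in the example above can be reproduced directly (values taken from the text; this is a plain calculation, not an AWS API call):

```python
# Cluster reservation example: two instances, ten tasks of 1,024 CPU
# units and 2,048 MiB each. CloudWatch reports the reserved share of
# the registered resources as a percentage.

registered_cpu = 16384 + 2048    # c4.4xlarge + c4.large CPU units
registered_mem = 30158 + 3768    # MiB of memory

tasks = 10
reserved_cpu = tasks * 1024
reserved_mem = tasks * 2048

cpu_reservation = 100 * reserved_cpu / registered_cpu
mem_reservation = 100 * reserved_mem / registered_mem
print(int(cpu_reservation), int(mem_reservation))  # 55 60
```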
The illustration below shows the total registered CPU units in a cluster and what their reservation and
utilization means to existing tasks and new task placement. The lower (Reserved, utilized) and center
(Reserved, not utilized) blocks represent the total CPU units that are reserved for the existing tasks that
are running on the cluster, or the CPUReservation CloudWatch metric. The lower block represents the
reserved CPU units that the running tasks are actually using on the cluster, or the CPUUtilization
CloudWatch metric. The upper block represents CPU units that are not reserved by existing tasks; these
CPU units are available for new task placement. Existing tasks can utilize these unreserved CPU units as
well, if their need for CPU resources increases. For more information, see the cpu (p. 112) task definition
parameter documentation.
Cluster Utilization
Cluster utilization is measured as the percentage of CPU and memory that is used by all Amazon ECS
tasks on a cluster when compared to the aggregate CPU and memory that was registered for each active
container instance in the cluster. These metrics are only available for clusters with tasks or services
using the EC2 launch type; they are not supported for the Fargate launch type.
Each minute, the Amazon ECS container agent on each container instance calculates the number of CPU
units and MiB of memory that are currently being used for each task that is running on that container
instance, and this information is reported back to Amazon ECS. The total amount of CPU and memory
used for all tasks running on the cluster is calculated, and those numbers are reported to CloudWatch as
a percentage of the total registered resources for the cluster.
For example, a cluster has two active container instances registered, a c4.4xlarge instance and a
c4.large instance. The c4.4xlarge instance registers into the cluster with 16,384 CPU units and
30,158 MiB of memory. The c4.large instance registers with 2,048 CPU units and 3,768 MiB of
memory. The aggregate resources of this cluster are 18,432 CPU units and 33,926 MiB of memory.
If ten tasks are running on this cluster that each consume 1,024 CPU units and 2,048 MiB of memory,
a total of 10,240 CPU units and 20,480 MiB of memory are utilized on the cluster, which is reported to
CloudWatch as 55% CPU utilization and 60% memory utilization for the cluster.
Service Utilization
Service utilization is measured as the percentage of CPU and memory that is used by the Amazon ECS
tasks that belong to a service on a cluster when compared to the CPU and memory that is defined in the
service's task definition. These metrics are available for services with tasks using both the EC2 and
Fargate launch types.
Each minute, the Amazon ECS container agent on each container instance calculates the number of CPU
units and MiB of memory that are currently being used for each task owned by the service that is running
on that container instance, and this information is reported back to Amazon ECS. The total amount of
CPU and memory used for all tasks owned by the service that are running on the cluster is calculated,
and those numbers are reported to CloudWatch as a percentage of the total resources that are specified
for the service in the service's task definition. If you specify a soft limit (memoryReservation), then it
will be used to calculate the amount of reserved memory. Otherwise, the hard limit (memory) is used. For
more information about hard and soft limits, see Task Definition Parameters.
For example, the task definition for a service specifies a total of 512 CPU units and 1,024 MiB of memory
(with the hard limit memory parameter) for all of its containers. The service has a desired count of 1
running task and is running on a cluster with one c4.large container instance (with 2,048 CPU
units and 3,768 MiB of total memory), and no other tasks are running on the cluster. Although
the task specifies 512 CPU units, because it is the only running task on a container instance with 2,048
CPU units, it has the ability to use up to four times the specified amount (2,048 / 512); however, the
specified memory of 1,024 MiB is a hard limit and it cannot be exceeded, so in this case, service memory
utilization cannot exceed 100%.
If the previous example used the soft limit memoryReservation instead of the hard limit memory
parameter, the service's tasks could use more than the specified 1,024 MiB of memory if they needed to.
In this case, the service's memory utilization could exceed 100%.
If this task is performing CPU-intensive work during a period and using all 2,048 of the available
CPU units and 512 MiB of memory, then the service reports 400% CPU utilization and 50% memory
utilization. If the task is idle and using 128 CPU units and 128 MiB of memory, then the service reports
25% CPU utilization and 12.5% memory utilization.
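The service utilization figures in the example above follow from dividing actual usage by the task definition's values (values taken from the text; this is a plain calculation, not an AWS API call):

```python
# Service utilization example: utilization is measured against the task
# definition's CPU and memory, so CPU can exceed 100% when the task uses
# spare CPU on its container instance.

task_def_cpu = 512     # CPU units in the task definition
task_def_mem = 1024    # MiB, hard limit

busy_cpu_pct = 100 * 2048 / task_def_cpu   # using all instance CPU
busy_mem_pct = 100 * 512 / task_def_mem
idle_cpu_pct = 100 * 128 / task_def_cpu
idle_mem_pct = 100 * 128 / task_def_mem
print(busy_cpu_pct, busy_mem_pct)   # 400.0 50.0
print(idle_cpu_pct, idle_mem_pct)   # 25.0 12.5
```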
Topics
• Viewing Cluster Metrics in the Amazon ECS Console (p. 207)
• Viewing Service Metrics in the Amazon ECS Console (p. 207)
• Viewing Amazon ECS Metrics in the CloudWatch Console (p. 207)
Depending on the Amazon EC2 instance types that you use in your clusters, and the quantity of container
instances you have in a cluster, your tasks have a limited amount of resources that they can use when
they run. Amazon ECS monitors the resources available in the cluster to work with the schedulers to
place tasks. If your cluster runs low on any of these resources, such as memory, you eventually become
unable to launch more tasks until you add more container instances, reduce the number of desired tasks
in a service, or stop some of the running tasks in your cluster to free up the constrained resource.
In this tutorial, you create a CloudWatch alarm using the MemoryReservation metric for your cluster.
When the memory reservation of your cluster rises above 75% (meaning that only 25% of the memory
in your cluster is available for new tasks to reserve), the alarm triggers the Auto Scaling group to add
another instance and provide more resources for your tasks and services.
Prerequisites
This tutorial assumes that you have enabled CloudWatch metrics for your clusters and services. Metrics
are not available until the clusters and services send the metrics to CloudWatch, and you cannot create
CloudWatch alarms for metrics that do not exist yet.
Your Amazon ECS container instances require at least version 1.4.0 of the container agent to enable
CloudWatch metrics. For information about checking your agent version and updating to the latest
version, see Updating the Amazon ECS Container Agent (p. 74).
For this tutorial, you create an alarm on the cluster MemoryReservation metric to alert when the
cluster's memory reservation is above 75%.
• Name: memory-above-75-pct
• Description: Cluster memory reservation above 75%
7. Set the threshold and time period requirement to MemoryReservation greater than 75% for 1
period.
8. (Optional) Configure a notification to send when the alarm is triggered. You can also choose to
delete the notification if you don't want to configure one now.
9. Choose Create Alarm. Now you can use this alarm to trigger your Auto Scaling group to add a
container instance when the memory reservation is above 75%.
10. (Optional) You can also create another alarm that triggers when the memory reservation is below
25%, which you can use to remove a container instance from your Auto Scaling group.
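The console steps above correspond roughly to the parameters of the CloudWatch PutMetricAlarm API. The following is a hedged sketch of that request as a plain dictionary (the cluster name and the commented-out SNS topic ARN are placeholders; building the dictionary does not call AWS):

```python
# Parameters the tutorial's alarm would send to CloudWatch's
# PutMetricAlarm API. "default" is a placeholder cluster name.

alarm_params = {
    "AlarmName": "memory-above-75-pct",
    "AlarmDescription": "Cluster memory reservation above 75%",
    "Namespace": "AWS/ECS",
    "MetricName": "MemoryReservation",
    "Dimensions": [{"Name": "ClusterName", "Value": "default"}],
    "Statistic": "Average",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 75,
    "ComparisonOperator": "GreaterThanThreshold",
    # "AlarmActions": ["arn:aws:sns:..."],  # optional notification topic
}
print(alarm_params["AlarmName"])
```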
To use the Amazon ECS-optimized AMI, type amazon-ecs-optimized in the Search community
AMIs field and press the Enter key. Choose Select next to the amzn-ami-2017.09.d-amazon-ecs-
optimized AMI.
The current Amazon ECS-optimized Linux AMI IDs by region are listed below for reference.
7. On the Choose Instance Type step of the Create Auto Scaling Group wizard, choose an instance
type for your Auto Scaling group and choose Next: Configure details.
8. On the Configure details step of the Create Auto Scaling Group wizard, enter the following
information. The other fields are optional. For more information, see Creating Launch Configurations
in the Auto Scaling User Guide.
1. On the Configure Auto Scaling group details step of the Create Auto Scaling Group wizard, enter
the following information and choose Next: Configure scaling policies.
• Execute policy when: Choose the memory-above-75-pct CloudWatch alarm you configured
earlier.
• Take the action: Enter the number of instances you would like to add to your cluster when the
alarm is triggered.
5. If you configured an alarm to trigger a group size reduction, set that alarm in the Decrease Group
Size section and specify how many instances to remove if that alarm is triggered. Otherwise,
collapse the Decrease Group Size section by clicking the X in the upper-right-hand corner of the
section.
Note
If you configure your Auto Scaling group to remove container instances, any tasks running
on the removed container instances are killed. If your tasks are running as part of a service,
Amazon ECS restarts those tasks on another instance if the required resources are available
(CPU, memory, ports); however, tasks that were started manually are not restarted
automatically.
6. Choose Review to review your Auto Scaling group and then choose Create Auto Scaling Group to
finish.
To test that your Auto Scaling group is configured properly, you can create some tasks that consume a
considerable amount of memory and start launching them into your cluster. After your cluster exceeds
the 75% memory reservation from the CloudWatch alarm for the specified number of periods, you
should see a new instance launch in the EC2 console.
Step 5: Cleaning Up
When you have completed this tutorial, you may choose to keep your Auto Scaling group and Amazon
EC2 instances in service for your cluster. However, if you are not actively using these resources, you
should consider cleaning them up so your account does not incur unnecessary charges. You can delete
your Auto Scaling group to terminate the Amazon EC2 instances within it, but your launch configuration
remains intact and you can create a new Auto Scaling group with it later if you choose.
Using CloudWatch Events, you can build custom schedulers on top of Amazon ECS that are responsible
for orchestrating tasks across clusters, and you can monitor the state of clusters in near real time. You can
eliminate scheduling and monitoring code that continuously polls the Amazon ECS service for status
changes, and instead handle Amazon ECS state changes asynchronously using any CloudWatch Events
target, such as AWS Lambda, Amazon Simple Queue Service, Amazon Simple Notification Service, and
Amazon Kinesis Data Streams.
Events from the Amazon ECS event stream are guaranteed to be delivered at least one time. In the event
that duplicate events are sent, the event provides enough information to identify duplicates. For more
information, see Handling Events (p. 218).
Events are relatively ordered, so that you can easily tell when an event occurred in relation to other
events.
Topics
• Amazon ECS Events (p. 213)
• Handling Events (p. 218)
• Tutorial: Listening for Amazon ECS CloudWatch Events (p. 220)
• Tutorial: Sending Amazon Simple Notification Service Alerts for Task Stopped Events (p. 222)
In some cases, multiple events are triggered for the same activity. For example, when a task is started
on a container instance, a task state change event is triggered for the new task, and a container instance
state change event is triggered to account for the change in available resources (such as CPU, memory,
and available ports) on the container instance. Likewise, if a container instance is terminated, events
are triggered for the container instance, the container agent connection status, and every task that was
running on the container instance.
Events contain two version fields; one in the main body of the event, and one in the detail object of
the event.
• The version in the main body of the event is set to 0 on all events. For more information about
CloudWatch Events parameters, see Events and Event Patterns in the Amazon CloudWatch Events User
Guide.
• The version in the detail object of the event describes the version of the associated resource. Each
time a resource changes state, this version is incremented. Because events can be sent multiple times,
this field allows you to identify duplicate events (they will have the same version in the detail
object). If you are replicating your Amazon ECS container instance and task state with CloudWatch
events, you can compare the version of a resource reported by the Amazon ECS APIs with the version
reported in CloudWatch events for the resource (inside the detail object) to verify that the version in
your event stream is current.
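The duplicate-detection approach described above can be sketched as follows; the function and event shapes are illustrative, but the detail.version comparison follows the behavior described in the text:

```python
# Sketch of using the detail.version field to drop duplicate or stale
# events when mirroring Amazon ECS state from the event stream.

latest_versions = {}  # resource ARN -> highest version seen so far

def should_apply(event):
    detail = event["detail"]
    arn = detail.get("containerInstanceArn") or detail.get("taskArn")
    if detail["version"] <= latest_versions.get(arn, -1):
        return False  # duplicate delivery or out-of-date event
    latest_versions[arn] = detail["version"]
    return True

e1 = {"detail": {"containerInstanceArn": "arn:example-instance", "version": 14801}}
print(should_apply(e1))  # True: first time this version is seen
print(should_apply(e1))  # False: duplicate delivery of the same version
```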
Topics
• Container Instance State Change Events (p. 214)
• Task State Change Events (p. 217)
You call the StartTask, RunTask, or StopTask API operations (either directly, or with the AWS
Management Console or SDKs)
Placing or stopping tasks on a container instance modifies the available resources on the container
instance (such as CPU, memory, and available ports).
The Amazon ECS service scheduler starts or stops a task
Placing or stopping tasks on a container instance modifies the available resources on the container
instance (such as CPU, memory, and available ports).
The Amazon ECS container agent calls the SubmitTaskStateChange API operation with a STOPPED
status for a task with a desired status of RUNNING
The Amazon ECS container agent monitors the state of tasks on your container instances, and it
reports any state changes. If a task that is supposed to be RUNNING is transitioned to STOPPED, the
agent releases the resources that were allocated to the stopped task (such as CPU, memory, and
available ports).
You deregister the container instance with the DeregisterContainerInstance API operation (either
directly, or with the AWS Management Console or SDKs)
Deregistering a container instance changes the status of the container instance and the connection
status of the Amazon ECS container agent.
A task was stopped when its EC2 instance was stopped
When you stop a container instance, the tasks that are running on it are transitioned to the STOPPED
status.
The Amazon ECS container agent registers a container instance for the first time
The first time the Amazon ECS container agent registers a container instance (at launch or when first
run manually), this creates a state change event for the instance.
The Amazon ECS container agent connects or disconnects from Amazon ECS
When the Amazon ECS container agent connects or disconnects from the Amazon ECS back end, it
changes the agentConnected status of the container instance.
Note
The Amazon ECS container agent periodically disconnects and reconnects (several times per
hour) as a part of its normal operation, so agent connection events should be expected and
they are not an indication that there is an issue with the container agent or your container
instance.
You upgrade the Amazon ECS container agent on an instance
The container instance detail contains an object for the container agent version. If you upgrade the
agent, this version information changes and triggers an event.
Container instance state change events are delivered in the following format (the detail section
below resembles the ContainerInstance object that is returned from a DescribeContainerInstances
API operation in the Amazon Elastic Container Service API Reference). For more information about
CloudWatch Events parameters, see Events and Event Patterns in the Amazon CloudWatch Events User
Guide.
{
    "version": "0",
    "id": "8952ba83-7be2-4ab5-9c32-6687532d15a2",
    "detail-type": "ECS Container Instance State Change",
    "source": "aws.ecs",
    "account": "111122223333",
    "time": "2016-12-06T16:41:06Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws:ecs:us-east-1:111122223333:container-instance/b54a2a04-046f-4331-9d74-3f6d7f6ca315"
    ],
    "detail": {
        "agentConnected": true,
        "attributes": [
            {
                "name": "com.amazonaws.ecs.capability.logging-driver.syslog"
            },
            {
                "name": "com.amazonaws.ecs.capability.task-iam-role-network-host"
            },
            {
                "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
            },
            {
                "name": "com.amazonaws.ecs.capability.logging-driver.json-file"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
            },
            {
                "name": "com.amazonaws.ecs.capability.privileged-container"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
            },
            {
                "name": "com.amazonaws.ecs.capability.ecr-auth"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.20"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.22"
            },
            {
                "name": "com.amazonaws.ecs.capability.docker-remote-api.1.23"
            },
            {
                "name": "com.amazonaws.ecs.capability.task-iam-role"
            }
        ],
        "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/default",
        "containerInstanceArn": "arn:aws:ecs:us-east-1:111122223333:container-instance/b54a2a04-046f-4331-9d74-3f6d7f6ca315",
        "ec2InstanceId": "i-f3a8506b",
        "registeredResources": [
            {
                "name": "CPU",
                "type": "INTEGER",
                "integerValue": 2048
            },
            {
                "name": "MEMORY",
                "type": "INTEGER",
                "integerValue": 3767
            },
            {
                "name": "PORTS",
                "type": "STRINGSET",
                "stringSetValue": [
                    "22",
                    "2376",
                    "2375",
                    "51678",
                    "51679"
                ]
            },
            {
                "name": "PORTS_UDP",
                "type": "STRINGSET",
                "stringSetValue": []
            }
        ],
        "remainingResources": [
            {
                "name": "CPU",
                "type": "INTEGER",
                "integerValue": 1988
            },
            {
                "name": "MEMORY",
                "type": "INTEGER",
                "integerValue": 767
            },
            {
                "name": "PORTS",
                "type": "STRINGSET",
                "stringSetValue": [
                    "22",
                    "2376",
                    "2375",
                    "51678",
                    "51679"
                ]
            },
            {
                "name": "PORTS_UDP",
                "type": "STRINGSET",
                "stringSetValue": []
            }
        ],
        "status": "ACTIVE",
        "version": 14801,
        "versionInfo": {
            "agentHash": "aebcbca",
            "agentVersion": "1.13.0",
            "dockerVersion": "DockerVersion: 1.11.2"
        },
        "updatedAt": "2016-12-06T16:41:06.991Z"
    }
}
You call the StartTask, RunTask, or StopTask API operations (either directly, or with the AWS
Management Console or SDKs)
Starting or stopping tasks creates new task resources or modifies the state of existing task resources.
The Amazon ECS service scheduler starts or stops a task
Starting or stopping tasks creates new task resources or modifies the state of existing task resources.
The Amazon ECS container agent calls the SubmitTaskStateChange API operation
The Amazon ECS container agent monitors the state of tasks on your container instances, and it
reports any state changes (for example, from PENDING to RUNNING, or from RUNNING to STOPPED).
You call the DeregisterContainerInstance API operation
Deregistering a container instance changes the status of the container instance and the connection
status of the Amazon ECS container agent. If tasks are running on the container instance, the force
flag must be set to allow deregistration. This stops all tasks on the instance.
The underlying container instance is stopped or terminated
When you stop or terminate a container instance, the tasks that are running on it are transitioned to
the STOPPED status.
A container in the task changes state
The Amazon ECS container agent monitors the state of containers within tasks. For example, if a
container that is running within a task stops, this container state change triggers an event.
Task state change events are delivered in the following format (the detail section below resembles
the Task object that is returned from a DescribeTasks API operation in the Amazon Elastic Container
Service API Reference). For more information about CloudWatch Events parameters, see Events and Event
Patterns in the Amazon CloudWatch Events User Guide.
{
"version": "0",
"id": "9bcdac79-b31f-4d3d-9410-fbd727c29fab",
"detail-type": "ECS Task State Change",
"source": "aws.ecs",
"account": "111122223333",
"time": "2016-12-06T16:41:06Z",
"region": "us-east-1",
"resources": [
"arn:aws:ecs:us-east-1:111122223333:task/b99d40b3-5176-4f71-9a52-9dbd6f1cebef"
],
"detail": {
"clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/default",
"containerInstanceArn": "arn:aws:ecs:us-east-1:111122223333:container-instance/
b54a2a04-046f-4331-9d74-3f6d7f6ca315",
"containers": [
{
"containerArn": "arn:aws:ecs:us-east-1:111122223333:container/3305bea1-
bd16-4217-803d-3e0482170a17",
"exitCode": 0,
"lastStatus": "STOPPED",
"name": "xray",
"taskArn": "arn:aws:ecs:us-east-1:111122223333:task/
b99d40b3-5176-4f71-9a52-9dbd6f1cebef"
}
],
"createdAt": "2016-12-06T16:41:05.702Z",
"desiredStatus": "RUNNING",
"group": "task-group",
"lastStatus": "RUNNING",
"overrides": {
"containerOverrides": [
{
"name": "xray"
}
]
},
"startedAt": "2016-12-06T16:41:06.8Z",
"startedBy": "ecs-svc/9223370556150183303",
"updatedAt": "2016-12-06T16:41:06.975Z",
"taskArn": "arn:aws:ecs:us-east-1:111122223333:task/
b99d40b3-5176-4f71-9a52-9dbd6f1cebef",
"taskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/xray:2",
"version": 4
}
}
Handling Events
Amazon ECS sends events on an "at least once" basis. This means you may receive more than a single
copy of a given event. Additionally, events may not be delivered to your event listeners in the order in
which the events occurred.
To enable proper ordering of events, the detail section of each event contains a version property.
Events with a higher version property number should be treated as occurring later than events with
lower version numbers. Events with matching version numbers can be treated as duplicates.
For example, you could capture the latest state of each resource in Amazon DynamoDB tables such as the following:
• ECSCtrInstanceState: Stores the latest state for a container instance. The table ID is the
containerInstanceArn value of the container instance.
• ECSTaskState: Stores the latest state for a task. The table ID is the taskArn value of the task.
import json
import boto3

def lambda_handler(event, context):
    if event["source"] != "aws.ecs":
        raise ValueError("Function only supports input from events with a source type of: aws.ecs")

    # Determine the DynamoDB table and primary key from the event type.
    if event["detail-type"] == "ECS Task State Change":
        table_name = "ECSTaskState"
        id_name = "taskArn"
        event_id = event["detail"]["taskArn"]
    elif event["detail-type"] == "ECS Container Instance State Change":
        table_name = "ECSCtrInstanceState"
        id_name = "containerInstanceArn"
        event_id = event["detail"]["containerInstanceArn"]
    else:
        raise ValueError("detail-type for event is not a supported type. Exiting without saving event.")

    new_record = {}
    new_record["cw_version"] = event["version"]
    new_record.update(event["detail"])

    # Look first to see if you have received a newer version of an event ID.
    # If the version is OLDER than what you have on file, do not process it.
    # Otherwise, update the associated record with this latest information.
    print("Looking for recent event with same ID...")
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table(table_name)
    saved_event = table.get_item(
        Key={
            id_name: event_id
        }
    )
    if "Item" in saved_event:
        # Compare events and reconcile.
        print("EXISTING EVENT DETECTED: Id " + event_id + " - reconciling")
        if saved_event["Item"]["version"] < event["detail"]["version"]:
            print("Received event is a more recent version than the stored event - updating")
            table.put_item(
                Item=new_record
            )
        else:
            print("Received event is an older version than the stored event - ignoring")
    else:
        print("Saving new event - ID " + event_id)
        table.put_item(
            Item=new_record
        )
import json

def lambda_handler(event, context):
    print(json.dumps(event))
This is a simple Python 2.7 function that prints the event sent by Amazon ECS. If everything is
configured correctly, at the end of this tutorial, you see the event details appear in the CloudWatch
Logs log stream associated with this Lambda function.
6. In the Function code section, edit the value of Handler to be eventstream-handler.
7. Choose Save.
{
"source": [
"aws.ecs"
],
"detail-type": [
"ECS Task State Change"
],
"detail": {
"lastStatus": [
"STOPPED"
],
"stoppedReason" : [
"Essential container in task exited"
]
}
}
This code defines a CloudWatch Events event rule that matches any event where the lastStatus
and stoppedReason fields match the indicated values. For more information about event patterns,
see Events and Event Patterns in the Amazon CloudWatch Events User Guide.
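The matching behavior this rule relies on can be sketched in a few lines of Python. This is an illustration of the pattern semantics only, not CloudWatch's implementation: every field named in the pattern must be present in the event, nested objects recurse, and a leaf list enumerates the acceptable values for that field.

```python
# Minimal sketch (not CloudWatch's implementation) of event pattern
# matching: nested objects recurse, and a leaf list enumerates the
# acceptable values for that field.
def pattern_matches(pattern, event):
    for key, expected in pattern.items():
        if isinstance(expected, dict):
            if not isinstance(event.get(key), dict) or not pattern_matches(expected, event[key]):
                return False
        elif event.get(key) not in expected:
            return False
    return True

rule = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
        "lastStatus": ["STOPPED"],
        "stoppedReason": ["Essential container in task exited"],
    },
}
stopped_event = {
    "source": "aws.ecs",
    "detail-type": "ECS Task State Change",
    "detail": {
        "lastStatus": "STOPPED",
        "stoppedReason": "Essential container in task exited",
    },
}
pattern_matches(rule, stopped_event)   # True
```

Fields in the event that the pattern does not mention (such as taskArn) are ignored, which is also how the real rule behaves.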
5. For Targets, choose Add target. For Target type, choose SNS topic, and then choose
TaskStoppedAlert.
6. Choose Configure details.
7. For Rule definition, type a name and description for your rule and then choose Create rule.
To test a rule
When you attach a policy to a user or group of users, it allows or denies the users permission to perform
the specified tasks on the specified resources. For more general information about IAM policies, see
Permissions and Policies in the IAM User Guide. For more information about managing and creating
custom IAM policies, see Managing IAM Policies.
Likewise, Amazon ECS container instances make calls to the Amazon ECS and Amazon EC2 APIs on
your behalf, so they need to authenticate with your credentials. This authentication is accomplished by
creating an IAM role for your container instances and associating that role with your container instances
when you launch them. For more information, see Amazon ECS Container Instance IAM Role (p. 238). If
you use an Elastic Load Balancing load balancer with your Amazon ECS services, calls to the Amazon EC2
and Elastic Load Balancing APIs are made on your behalf to register and deregister container instances
with your load balancers. For more information, see Amazon ECS Service Scheduler IAM Role (p. 247).
For more general information about IAM roles, see IAM Roles in the IAM User Guide.
Getting Started
An IAM policy must grant or deny permission to use one or more Amazon ECS actions. It must also
specify the resources that can be used with the action, which can be all resources, or in some cases,
specific resources. The policy can also include conditions that you apply to the resource.
Amazon ECS partially supports resource-level permissions. This means that for some Amazon ECS API
actions, you cannot specify which resource a user is allowed to work with for that action; instead, you
have to allow users to work with all resources for that action.
Topics
• Policy Structure (p. 225)
• Supported Resource-Level Permissions for Amazon ECS API Actions (p. 228)
• Creating Amazon ECS IAM Policies (p. 231)
• Managed Policies and Trust Relationships (p. 231)
• Amazon ECS Container Instance IAM Role (p. 238)
• Amazon ECS Task Execution IAM Role (p. 241)
• Using Service-Linked Roles for Amazon ECS (p. 243)
• Amazon ECS Service Scheduler IAM Role (p. 247)
• Amazon ECS Service Auto Scaling IAM Role (p. 249)
• Amazon EC2 Container Service Task Role (p. 250)
• CloudWatch Events IAM Role (p. 251)
• IAM Roles for Tasks (p. 251)
Policy Structure
The following topics explain the structure of an IAM policy.
Topics
• Policy Syntax (p. 225)
• Actions for Amazon ECS (p. 226)
• Amazon Resource Names for Amazon ECS (p. 226)
• Condition Keys for Amazon ECS (p. 227)
• Checking that Users Have the Required Permissions (p. 228)
Policy Syntax
An IAM policy is a JSON document that consists of one or more statements. Each statement is structured
as follows:
{
"Statement":[{
"Effect":"effect",
"Action":"action",
"Resource":"arn",
"Condition":{
"condition":{
"key":"value"
}
}
}
]
}
• Effect: The effect can be Allow or Deny. By default, IAM users don't have permission to use resources
and API actions, so all requests are denied. An explicit allow overrides the default. An explicit deny
overrides any allows.
• Action: The action is the specific API action for which you are granting or denying permission. To learn
about specifying action, see Actions for Amazon ECS (p. 226).
• Resource: The resource that's affected by the action. Some Amazon ECS API actions allow you to
include specific resources in your policy that can be created or modified by the action. To specify a
resource in the statement, you need to use its Amazon Resource Name (ARN). For more information
about specifying the arn value, see Amazon Resource Names for Amazon ECS (p. 226). For more
information about which API actions support which ARNs, see Supported Resource-Level Permissions
for Amazon ECS API Actions (p. 228). If the API action does not support ARNs, use the * wildcard to
specify that all resources can be affected by the action.
• Condition: Conditions are optional. They can be used to control when your policy will be in effect.
For more information about specifying conditions for Amazon ECS, see Condition Keys for Amazon
ECS (p. 227).
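Putting these elements together, a hypothetical statement that allows describing tasks only within a specific cluster might look like the following (the account ID and cluster ARN are illustrative placeholders):

```json
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecs:DescribeTasks",
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ecs:cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/default"
                }
            }
        }
    ]
}
```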
For more information about example IAM policy statements for Amazon ECS, see Creating Amazon ECS
IAM Policies (p. 231).
To specify multiple actions in a single statement, separate them with commas as follows:
"Action": ["ecs:action1", "ecs:action2"]
You can also specify multiple actions using wildcards. For example, you can specify all actions whose
name begins with the word "Describe" as follows:
"Action": "ecs:Describe*"
To specify all Amazon ECS API actions, use the * wildcard as follows:
"Action": "ecs:*"
For a list of Amazon ECS actions, see Actions in the Amazon Elastic Container Service API Reference.
arn:aws:[service]:[region]:[account]:resourceType/resourcePath
service
The service (for example, ecs).
region
The region for the resource (for example, us-east-1).
account
The AWS account ID, with no hyphens (for example, 123456789012).
resourceType
The type of resource (for example, cluster).
resourcePath
A path that identifies the resource. You can use the * wildcard in your paths.
For example, you can indicate a specific cluster (default) in your statement using its ARN as follows:
"Resource": "arn:aws:ecs:us-east-1:123456789012:cluster/default"
You can also specify all clusters that belong to a specific account by using the * wildcard as follows:
"Resource": "arn:aws:ecs:us-east-1:123456789012:cluster/*"
To specify all resources, or if a specific API action does not support ARNs, use the * wildcard in the
Resource element as follows:
"Resource": "*"
The following table describes the ARNs for each type of resource used by the Amazon ECS API actions.
Cluster arn:aws:ecs:region:account:cluster/cluster-name
Service arn:aws:ecs:region:account:service/service-name
Task arn:aws:ecs:region:account:task/task-id
Container arn:aws:ecs:region:account:container/container-id
Many Amazon ECS API actions accept multiple resources. To specify multiple resources in a single
statement, separate their ARNs with commas, as follows:
"Resource": ["arn1", "arn2"]
For more general information about ARNs, see Amazon Resource Names (ARN) and AWS Service
Namespaces in the Amazon Web Services General Reference.
If you specify multiple conditions, or multiple keys in a single condition, we evaluate them using a
logical AND operation. If you specify a single condition with multiple values for one key, we evaluate the
condition using a logical OR operation. For permission to be granted, all conditions must be met.
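That evaluation rule can be sketched as follows. This is an illustration of the AND/OR semantics only, using a hypothetical request shape; IAM performs the real evaluation:

```python
# Sketch of the rule described above: multiple condition keys are
# combined with logical AND, while multiple values listed for a single
# key are combined with logical OR.
def conditions_met(conditions, request):
    return all(request.get(key) in values
               for key, values in conditions.items())

conditions = {
    "ecs:cluster": [
        "arn:aws:ecs:us-east-1:123456789012:cluster/default",
        "arn:aws:ecs:us-east-1:123456789012:cluster/staging",
    ]
}
conditions_met(conditions, {"ecs:cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/staging"})  # True
conditions_met(conditions, {"ecs:cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/prod"})     # False
```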
You can also use placeholders when you specify conditions. For more information, see Policy Variables in
the IAM User Guide.
Amazon ECS implements the AWS-wide condition keys (see Available Keys), plus the following service-
specific condition keys. (We'll add support for additional service-specific condition keys for Amazon ECS
later.)
For information about which condition keys you can use with which Amazon ECS resources, on an action-
by-action basis, see Supported Resource-Level Permissions for Amazon ECS API Actions (p. 228). For
example policy statements for Amazon ECS, see Creating Amazon ECS IAM Policies (p. 231).
First, create an IAM user for testing purposes, and then attach the IAM policy that you created to the test
user. Then, make a request as the test user. You can make test requests in the console or with the AWS
CLI.
Note
You can also test your policies with the IAM Policy Simulator. For more information on the policy
simulator, see Working with the IAM Policy Simulator in the IAM User Guide.
If the action that you are testing creates or modifies a resource, you should make the request using
the DryRun parameter (or run the AWS CLI command with the --dry-run option). In this case, the
call completes the authorization check, but does not complete the operation. For example, you can
check whether the user can terminate a particular instance without actually terminating it. If the
test user has the required permissions, the request returns DryRunOperation; otherwise, it returns
UnauthorizedOperation.
If the policy doesn't grant the user the permissions that you expected, or is overly permissive, you can
adjust the policy as needed and retest until you get the desired results.
Important
It can take several minutes for policy changes to propagate before they take effect. Therefore,
we recommend that you allow five minutes to pass before you test your policy updates.
If an authorization check fails, the request returns an encoded message with diagnostic information. You
can decode the message using the DecodeAuthorizationMessage action. For more information, see
DecodeAuthorizationMessage in the AWS Security Token Service API Reference, and decode-authorization-message in the AWS Command Line Interface Reference.
The following table describes the Amazon ECS API actions that currently support resource-level
permissions, as well as the supported resources, resource ARNs, and condition keys for each action.
Important
If an Amazon ECS API action is not listed in this table, then it does not support resource-level
permissions. If an Amazon ECS API action does not support resource-level permissions, you can
grant users permission to use the action, but you have to specify a * for the resource element of
your policy statement.
arn:aws:ecs:region:account:container-instance/container-instance-id
arn:aws:ecs:region:account:cluster/my-cluster
DeregisterContainerInstance
Cluster N/A
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:cluster/my-cluster1,
arn:aws:ecs:region:account:cluster/my-cluster2
arn:aws:ecs:region:account:container-instance/container-instance-id1,
arn:aws:ecs:region:account:container-instance/container-instance-id2
arn:aws:ecs:region:account:task/1abf0f6d-a411-4033-b8eb-a4eed3ad252a,
arn:aws:ecs:region:account:task/1abf0f6d-a411-4033-b8eb-a4eed3ad252b
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:container-instance/container-instance-id
arn:aws:ecs:region:account:container-instance/container-instance-id
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:task-definition/hello_world:8
arn:aws:ecs:region:account:task-definition/hello_world:8
ecs:container-instances
arn:aws:ecs:region:account:container-instance/container-instance-id
arn:aws:ecs:region:account:task/1abf0f6d-a411-4033-b8eb-a4eed3ad252a
SubmitContainerStateChange
Cluster N/A
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:cluster/my-cluster
arn:aws:ecs:region:account:container-instance/container-instance-id
UpdateContainerInstancesState
Container instance ecs:cluster
arn:aws:ecs:region:account:container-instance/container-instance-id
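As an illustration of resource-level permissions, a hypothetical statement could restrict UpdateContainerInstancesState to container instances in a single cluster by using the ecs:cluster condition key (the account ID and ARNs are placeholders):

```json
{
    "Effect": "Allow",
    "Action": "ecs:UpdateContainerInstancesState",
    "Resource": "arn:aws:ecs:us-east-1:123456789012:container-instance/*",
    "Condition": {
        "ArnEquals": {
            "ecs:cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/default"
        }
    }
}
```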
When you attach a policy to a user or group of users, it allows or denies the users permission to perform
the specified tasks on the specified resources. For more general information about IAM policies, see
Permissions and Policies in the IAM User Guide. For more information about managing and creating
custom IAM policies, see Managing IAM Policies.
Topics
• Amazon ECS Managed Policies and Trust Relationships (p. 232)
• Amazon ECR Managed Policies (p. 237)
Topics
• AmazonECS_FullAccess (p. 232)
• AmazonEC2ContainerServiceFullAccess (p. 234)
• AmazonEC2ContainerServiceforEC2Role (p. 235)
• AmazonEC2ContainerServiceRole (p. 235)
• AmazonEC2ContainerServiceAutoscaleRole (p. 236)
• AmazonEC2ContainerServiceTaskRole (p. 236)
• AmazonEC2ContainerServiceEventsRole (p. 237)
AmazonECS_FullAccess
This managed policy provides administrative access to Amazon ECS resources and enables ECS
features through access to other AWS service resources, including VPCs, Auto Scaling groups, and AWS
CloudFormation stacks.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"application-autoscaling:DeleteScalingPolicy",
"application-autoscaling:DeregisterScalableTarget",
"application-autoscaling:DescribeScalableTargets",
"application-autoscaling:DescribeScalingActivities",
"application-autoscaling:DescribeScalingPolicies",
"application-autoscaling:PutScalingPolicy",
"application-autoscaling:RegisterScalableTarget",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:CreateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteLaunchConfiguration",
"autoscaling:Describe*",
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStack*",
"cloudformation:UpdateStack",
"cloudwatch:DescribeAlarms",
"cloudwatch:GetMetricStatistics",
"cloudwatch:PutMetricAlarm",
"ec2:AssociateRouteTable",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CancelSpotFleetRequests",
"ec2:CreateInternetGateway",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateVpc",
"ec2:DeleteSubnet",
"ec2:DeleteVpc",
"ec2:Describe*",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:ModifySubnetAttribute",
"ec2:ModifyVpcAttribute",
"ec2:RequestSpotFleet",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateRule",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteRule",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeRules",
"elasticloadbalancing:DescribeTargetGroups",
"ecs:*",
"events:DescribeRule",
"events:DeleteRule",
"events:ListRuleNamesByTarget",
"events:ListTargetsByRule",
"events:PutRule",
"events:PutTargets",
"events:RemoveTargets",
"iam:ListAttachedRolePolicies",
"iam:ListInstanceProfiles",
"iam:ListRoles"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DeleteInternetGateway",
"ec2:DeleteRoute",
"ec2:DeleteRouteTable",
"ec2:DeleteSecurityGroup"
],
"Resource": [
"*"
],
"Condition": {
"StringLike": {
"ec2:ResourceTag/aws:cloudformation:stack-name": "EC2ContainerService-
*"
}
}
},
{
"Effect": "Allow",
"Action": "iam:CreateServiceLinkedRole",
"Resource": [
"arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/
AWSServiceRoleForECS*"
],
"Condition": {
"StringLike": {
"iam:AWSServiceName": "ecs.amazonaws.com"
}
}
},
{
"Action": "iam:PassRole",
"Effect": "Allow",
"Resource": [
"*"
],
"Condition": {
"StringLike": {
"iam:PassedToService": "ecs-tasks.amazonaws.com"
}
}
},
{
"Action": "iam:PassRole",
"Effect": "Allow",
"Resource": [
"arn:aws:iam::*:role/ecsInstanceRole*"
],
"Condition": {
"StringLike": {
"iam:PassedToService": ["ec2.amazonaws.com", "ec2.amazonaws.com.cn"]
}
}
},
{
"Action": "iam:PassRole",
"Effect": "Allow",
"Resource": [
"arn:aws:iam::*:role/ecsAutoscaleRole*"
],
"Condition": {
"StringLike": {
"iam:PassedToService": ["application-autoscaling.amazonaws.com",
"application-autoscaling.amazonaws.com.cn"]
}
}
}
]
}
AmazonEC2ContainerServiceFullAccess
This managed policy allows full administrator access to Amazon ECS.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:Describe*",
"autoscaling:UpdateAutoScalingGroup",
"cloudformation:CreateStack",
"cloudformation:DeleteStack",
"cloudformation:DescribeStack*",
"cloudformation:UpdateStack",
"cloudwatch:GetMetricStatistics",
"ec2:Describe*",
"elasticloadbalancing:*",
"ecs:*",
"events:DescribeRule",
"events:DeleteRule",
"events:ListRuleNamesByTarget",
"events:ListTargetsByRule",
"events:PutRule",
"events:PutTargets",
"events:RemoveTargets",
"iam:ListInstanceProfiles",
"iam:ListRoles",
"iam:PassRole"
],
"Resource": "*"
}
]
}
AmazonEC2ContainerServiceforEC2Role
This managed policy allows Amazon ECS container instances to make calls to AWS on your behalf. For
more information, see Amazon ECS Container Instance IAM Role (p. 238).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
AmazonEC2ContainerServiceRole
This managed policy allows Elastic Load Balancing load balancers to register and deregister Amazon
ECS container instances on your behalf. For more information, see Amazon ECS Service Scheduler IAM
Role (p. 247).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:Describe*",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:Describe*",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:RegisterTargets"
],
"Resource": "*"
}
]
}
AmazonEC2ContainerServiceAutoscaleRole
This managed policy allows Application Auto Scaling to scale your Amazon ECS service's desired count
up and down in response to CloudWatch alarms on your behalf. For more information, see Amazon ECS
Service Auto Scaling IAM Role (p. 249).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1456535218000",
"Effect": "Allow",
"Action": [
"ecs:DescribeServices",
"ecs:UpdateService"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1456535243000",
"Effect": "Allow",
"Action": [
"cloudwatch:DescribeAlarms"
],
"Resource": [
"*"
]
}
]
}
AmazonEC2ContainerServiceTaskRole
This IAM trust relationship policy allows containers in your Amazon ECS tasks to make calls to the AWS
APIs on your behalf. For more information, see Amazon EC2 Container Service Task Role (p. 250).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
AmazonEC2ContainerServiceEventsRole
This policy allows CloudWatch Events to run tasks on your behalf. For more information, see Scheduled
Tasks (cron) (p. 158).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:RunTask"
],
"Resource": [
"*"
]
}
]
}
Topics
• AmazonEC2ContainerRegistryFullAccess (p. 237)
• AmazonEC2ContainerRegistryPowerUser (p. 237)
• AmazonEC2ContainerRegistryReadOnly (p. 238)
AmazonEC2ContainerRegistryFullAccess
This managed policy allows full administrator access to Amazon ECR.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:*"
],
"Resource": "*"
}
]
}
AmazonEC2ContainerRegistryPowerUser
This managed policy allows power user access to Amazon ECR, which allows read and write access to
repositories, but does not allow users to delete repositories or change the policy documents applied to
them.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
],
"Resource": "*"
}]
}
AmazonEC2ContainerRegistryReadOnly
This managed policy allows read-only access to Amazon ECR, such as the ability to list repositories and
the images within the repositories, and also to pull images from Amazon ECR with the Docker CLI.
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:DescribeImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}]
}
We recommend that you limit the permissions in your container instance role to the minimal list of
permissions provided in the managed AmazonEC2ContainerServiceforEC2Role policy shown below.
If the containers in your tasks need extra permissions that are not listed here, we recommend
providing those tasks with their own IAM roles. For more information, see IAM Roles for
Tasks (p. 251).
You can prevent containers on the docker0 bridge from accessing the permissions supplied to
the container instance role (while still allowing the permissions that are provided by IAM Roles
for Tasks (p. 251)) by running the following iptables command on your container instances;
however, containers will not be able to query instance metadata with this rule in effect. Note
that this command assumes the default Docker bridge configuration and it will not work for
containers that use the host network mode. For more information, see Network Mode (p. 108).
You must save this iptables rule on your container instance for it to survive a reboot. For the
Amazon ECS-optimized AMI, use the following command. For other operating systems, consult
the documentation for that OS.
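On the Amazon ECS-optimized AMI, the rule and the command that persists it across reboots are typically of the following form. Treat this as a sketch and verify the exact commands against the current documentation before use:

```shell
# Drop traffic from containers on the default Docker bridge to the EC2
# instance metadata endpoint (does not affect host network mode).
sudo iptables --insert FORWARD 1 --in-interface docker+ \
    --destination 169.254.169.254/32 --jump DROP

# Persist the rule across reboots on the Amazon ECS-optimized AMI.
sudo service iptables save
```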
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
Note
The ecs:CreateCluster line in the above policy is optional, provided that the cluster you
intend to register your container instance into already exists. If the cluster does not already
exist, the agent must have permission to create it, or you can create the cluster with the create-cluster command prior to launching your container instance.
If you omit the ecs:CreateCluster line, the Amazon ECS container agent will not be able to
create clusters, including the default cluster.
The ecs:Poll line in the above policy is used to grant the agent permission to connect with the Amazon
ECS service to report status and get commands.
The Amazon ECS instance role is automatically created for you in the console first-run experience;
however, you should manually attach the managed IAM policy for container instances to allow Amazon
ECS to add permissions for future features and enhancements as they are introduced. You can use the
following procedure to check and see if your account already has the Amazon ECS instance role and to
attach the managed IAM policy if needed.
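For example, with the AWS CLI you could check the role's attached policies and attach the managed policy if it is missing. This assumes the role is named ecsInstanceRole, as the console first-run experience creates it:

```shell
# List the policies already attached to the instance role.
aws iam list-attached-role-policies --role-name ecsInstanceRole

# Attach the managed policy if it is not present.
aws iam attach-role-policy --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
```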
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
For more information about creating an ecs.config file, storing it in Amazon S3, and launching
instances with this configuration, see Storing Container Instance Configuration in Amazon S3 (p. 87).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
The Amazon ECS task execution role is automatically created for you in the console first-run experience;
however, you should manually attach the managed IAM policy for tasks to allow Amazon ECS to add
permissions for future features and enhancements as they are introduced. You can use the following
procedure to check and see if your account already has the Amazon ECS task execution role and to attach
the managed IAM policy if needed.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
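As a sketch, the task execution role could also be created from the command line using the trust policy above (saved locally as ecs-tasks-trust.json; the role name ecsTaskExecutionRole is the conventional one):

```shell
# Create the role with the ecs-tasks.amazonaws.com trust policy.
aws iam create-role --role-name ecsTaskExecutionRole \
    --assume-role-policy-document file://ecs-tasks-trust.json

# Attach the managed task execution policy.
aws iam attach-role-policy --role-name ecsTaskExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```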
A service-linked role makes setting up Amazon ECS easier because you don’t have to manually add
the necessary permissions. Amazon ECS defines the permissions of its service-linked roles, and unless
defined otherwise, only Amazon ECS can assume its roles. The defined permissions include the trust
policy and the permissions policy, and that permissions policy cannot be attached to any other IAM
entity.
You can delete the roles only after first deleting their related resources. This protects your Amazon ECS
resources because you can't inadvertently remove permission to access the resources.
For information about other services that support service-linked roles, see AWS Services That Work with
IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link
to view the service-linked role documentation for that service.
The AWSServiceRoleForECS service-linked role trusts the following services to assume the role:
• ecs.amazonaws.com
The role permissions policy allows Amazon ECS to complete the following actions on the specified
resources:
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or
delete a service-linked role.
Add the following statement to the permissions policy for the IAM entity that needs to create the
service-linked role:
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:PutRolePolicy"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS*",
"Condition": {"StringLike": {"iam:AWSServiceName": "ecs.amazonaws.com"}}
}
To allow an IAM entity to edit the description of the AWSServiceRoleForECS service-linked role
Add the following statement to the permissions policy for the IAM entity that needs to edit the
description of a service-linked role:
{
"Effect": "Allow",
"Action": [
"iam:UpdateRoleDescription"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS*",
"Condition": {"StringLike": {"iam:AWSServiceName": "ecs.amazonaws.com"}}
}
Add the following statement to the permissions policy for the IAM entity that needs to delete a service-
linked role:
{
"Effect": "Allow",
"Action": [
"iam:DeleteServiceLinkedRole",
"iam:GetServiceLinkedRoleDeletionStatus"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS*",
"Condition": {"StringLike": {"iam:AWSServiceName": "ecs.amazonaws.com"}}
}
You must delete all Amazon ECS clusters in all AWS Regions before you can delete the
AWSServiceRoleForECS role.
1. Scale all Amazon ECS services down to a desired count of 0 in all regions, and then delete the
services. For more information, see Updating a Service (p. 194) and Deleting a Service (p. 196).
2. Force deregister all container instances from all clusters in all regions. For more information, see
Deregister a Container Instance (p. 67).
3. Delete all Amazon ECS clusters in all regions. For more information, see Deleting a Cluster (p. 29).
1. Sign in to the AWS Management Console and open the IAM console at https://
console.aws.amazon.com/iam/.
2. In the navigation pane of the IAM console, choose Roles. Then select the check box next to
AWSServiceRoleForECS, not the name or row itself.
3. For Role actions at the top of the page, choose Delete role.
4. In the confirmation dialog box, review the service last accessed data, which shows when each of the
selected roles last accessed an AWS service. This helps you to confirm whether the role is currently
active. If you want to proceed, choose Yes, Delete to submit the service-linked role for deletion.
5. Watch the IAM console notifications to monitor the progress of the service-linked role deletion.
Because the IAM service-linked role deletion is asynchronous, after you submit the role for deletion,
the deletion task can succeed or fail.
• If the task succeeds, then the role is removed from the list and a notification of success appears at
the top of the page.
• If the task fails, you can choose View details or View Resources from the notifications to learn
why the deletion failed. If the deletion fails because the role is using the service's resources, then
the notification includes a list of resources, if the service returns that information. You can then
clean up the resources and submit the deletion again.
Note
You might have to repeat this process several times, depending on the information that
the service returns. For example, your service-linked role might use six resources and your
service might return information about five of them. If you clean up the five resources
and submit the role for deletion again, the deletion fails and the service reports the one
remaining resource. A service might return all of the resources, a few of them, or it might
not report any resources.
• If the task fails and the notification does not include a list of resources, then the service might not
return that information. To learn how to clean up the resources for that service, see AWS Services
That Work with IAM. Find your service in the table, and choose the Yes link to view the service-
linked role documentation for that service.
1. Because a service-linked role cannot be deleted if it is being used or has associated resources, you
must submit a deletion request. That request can be denied if these conditions are not met. You
must capture the deletion-task-id from the response to check the status of the deletion task.
Type the following command to submit a service-linked role deletion request:
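The command itself did not survive extraction; a sketch with the AWS CLI follows (the role name is fixed as AWSServiceRoleForECS):

```shell
# Submit the deletion request. Capture the DeletionTaskId value
# from the response to check the status of the deletion task.
aws iam delete-service-linked-role --role-name AWSServiceRoleForECS
```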
2. Type the following command to check the status of the deletion task:
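A sketch of the status check, substituting the deletion-task-id value returned by the previous command:

```shell
aws iam get-service-linked-role-deletion-status \
    --deletion-task-id deletion-task-id
```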
The status of the deletion task can be NOT_STARTED, IN_PROGRESS, SUCCEEDED, or FAILED.
If the deletion fails, the call returns the reason that it failed so that you can troubleshoot. If the
deletion fails because the role is using the service's resources, then the notification includes a list of
resources, if the service returns that information. You can then clean up the resources and submit
the deletion again.
Note
You might have to repeat this process several times, depending on the information that
the service returns. For example, your service-linked role might use six resources and your
service might return information about five of them. If you clean up the five resources
and submit the role for deletion again, the deletion fails and the service reports the one
remaining resource. A service might return all of the resources, a few of them, or it might
not report any resources. To learn how to clean up the resources for a service that does not
report any resources, see AWS Services That Work with IAM. Find your service in the table,
and choose the Yes link to view the service-linked role documentation for that service.
1. To submit a deletion request for a service-linked role, call DeleteServiceLinkedRole. In the request,
specify the AWSServiceRoleForECS role name.
Because a service-linked role cannot be deleted if it is being used or has associated resources, you
must submit a deletion request. That request can be denied if these conditions are not met. You
must capture the DeletionTaskId from the response to check the status of the deletion task.
2. To check the status of the deletion, call GetServiceLinkedRoleDeletionStatus. In the request, specify
the DeletionTaskId.
The status of the deletion task can be NOT_STARTED, IN_PROGRESS, SUCCEEDED, or FAILED.
If the deletion fails, the call returns the reason that it failed so that you can troubleshoot. If the
deletion fails because the role is using the service's resources, then the notification includes a list of
resources, if the service returns that information. You can then clean up the resources and submit
the deletion again.
Note
You might have to repeat this process several times, depending on the information that
the service returns. For example, your service-linked role might use six resources and your
service might return information about five of them. If you clean up the five resources
and submit the role for deletion again, the deletion fails and the service reports the one
remaining resource. A service might return all of the resources, a few of them, or it might
not report any resources. To learn how to clean up the resources for a service that does not
report any resources, see AWS Services That Work with IAM. Find your service in the table,
and choose the Yes link to view the service-linked role documentation for that service.
In most cases, the Amazon ECS service role is created for you automatically in the console first-run
experience. You can use the following procedure to check if your account already has the Amazon ECS
service role.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:Describe*",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:Describe*",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:RegisterTargets"
],
"Resource": "*"
}
]
}
Note
The ec2:AuthorizeSecurityGroupIngress rule is reserved for future use. Amazon ECS
does not automatically update the security groups associated with Elastic Load Balancing load
balancers or Amazon ECS container instances.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
You can use the following procedure to check whether your account already has the Service Auto Scaling role.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1456535218000",
"Effect": "Allow",
"Action": [
"ecs:DescribeServices",
"ecs:UpdateService"
],
"Resource": [
"*"
]
},
{
"Sid": "Stmt1456535243000",
"Effect": "Allow",
"Action": [
"cloudwatch:DescribeAlarms"
],
"Resource": [
"*"
]
}
]
}
To check for the Service Auto Scaling role in the IAM console
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "application-autoscaling.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
You can create a task IAM role for each task definition that needs permission to call AWS APIs. You
simply create an IAM policy that defines which permissions your task should have, and then attach that
policy to a role that uses the Amazon EC2 Container Service Task Role trust relationship policy. For more
information, see Creating an IAM Role and Policy for your Tasks (p. 254).
The Amazon EC2 Container Service Task Role trust relationship is shown below.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The CloudWatch Events role is created for you in the AWS Management Console when you configure a
scheduled task. For more information, see Scheduled Tasks (cron) (p. 158).
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:RunTask"
],
"Resource": [
"*"
]
}
]
}
Important
Containers that are running on your container instances are not prevented from accessing the
credentials that are supplied to the container instance profile (through the Amazon EC2 instance
metadata server). We recommend that you limit the permissions in your container instance role
to the minimal list of permissions shown in Amazon ECS Container Instance IAM Role (p. 238).
You can prevent containers on the docker0 bridge from accessing the credential information
supplied to the container instance profile (while still allowing the permissions that are provided
by the task role) by running the following iptables command on your container instances.
However, containers will no longer be able to query instance metadata with this rule in effect.
Note that this command assumes the default Docker bridge configuration and it will not
work for containers that use the host network mode. For more information, see Network
Mode (p. 108).
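A sketch of such an iptables rule, assuming the default docker0 bridge configuration:

```shell
# Drop traffic from containers on the Docker bridge to the EC2
# instance metadata service. The task credential endpoint
# (169.254.170.2) remains reachable.
sudo iptables --insert FORWARD 1 --in-interface docker+ \
    --destination 169.254.169.254/32 --jump DROP
```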
You must save this iptables rule on your container instance for it to survive a reboot. For the
Amazon ECS-optimized AMI, use the following command. For other operating systems, consult
the documentation for that OS.
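On the Amazon ECS-optimized AMI, the rule can be persisted as follows (this assumes the iptables service is available on the instance):

```shell
# Save the current iptables rules so they are restored at boot.
sudo service iptables save
```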
You define the IAM role to use in your task definitions, or you can use a taskRoleArn override when
running a task manually with the RunTask API operation. The Amazon ECS agent receives a payload
message for starting the task with additional fields that contain the role credentials. The Amazon ECS
agent sets the task’s UUID as an identification token and updates its internal credential cache so that
the identification token for the task points to the role credentials that are received in the payload.
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment
variable in the Env object (available with the docker inspect container_id command) for all
containers that belong to this task with the following relative URI: /credential_provider_version/credentials?id=task_UUID.
Note
When you specify an IAM role for a task, the AWS CLI or other SDKs in the containers for that
task use the AWS credentials provided by the task role exclusively and they no longer inherit any
IAM permissions from the container instance.
From inside the container, you can query the credentials with the following command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Output:
{
"AccessKeyId": "ACCESS_KEY_ID",
"Expiration": "EXPIRATION_DATE",
"RoleArn": "TASK_ROLE_ARN",
"SecretAccessKey": "SECRET_ACCESS_KEY",
"Token": "SECURITY_TOKEN_STRING"
}
If your container instance is using at least version 1.11.0 of the container agent and
a supported version of the AWS CLI or SDKs, then the SDK client will see that the
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable is available, and it will use the provided
credentials to make calls to the AWS APIs. For more information, see Enabling Task IAM Roles on your
Container Instances (p. 253) and Using a Supported AWS SDK (p. 255).
Each time the credential provider is used, the request is logged locally on the host container instance at /var/log/ecs/audit.log.YYYY-MM-DD-HH. For more information, see IAM Roles for Tasks Credential
Audit Log (p. 367).
Topics
• Benefits of Using IAM Roles for Tasks (p. 253)
• Enabling Task IAM Roles on your Container Instances (p. 253)
• Creating an IAM Role and Policy for your Tasks (p. 254)
• Using a Supported AWS SDK (p. 255)
• Specifying an IAM Role for your Tasks (p. 255)
If you are not using the Amazon ECS-optimized AMI for your container instances, be sure to add the
--net=host option to your docker run command that starts the agent and the appropriate agent
configuration variables for your desired configuration (for more information, see Amazon ECS Container
Agent Configuration (p. 81)):
ECS_ENABLE_TASK_IAM_ROLE=true
Enables IAM roles for tasks for containers with the bridge and default network modes.
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
Enables IAM roles for tasks for containers with the host network mode. This variable is only
supported on agent versions 1.12.0 and later.
For an example run command, see Manually Updating the Amazon ECS Container Agent (for Non-
Amazon ECS-optimized AMIs) (p. 79). You will also need to set the following networking commands on
your container instance so that the containers in your tasks can retrieve their AWS credentials:
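The networking commands referenced above can be sketched as follows; these route task credential requests for 169.254.170.2 to the agent's credential provider on the instance:

```shell
# Allow traffic to the link-local credential endpoint to be
# routed to localhost.
sudo sysctl -w net.ipv4.conf.all.route_localnet=1

# Redirect credential requests from containers to the Amazon ECS
# agent's credential provider listening on port 51679.
sudo iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 \
    --dport 80 -j DNAT --to-destination 127.0.0.1:51679
sudo iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp \
    -m tcp --dport 80 -j REDIRECT --to-ports 51679
```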
You must save these iptables rules on your container instance for them to survive a reboot. You can use
the iptables-save and iptables-restore commands to save your iptables rules and restore them at boot.
For more information, consult your specific operating system documentation.
You must also create a role for your tasks to use before you can specify it in your task definitions. You
can create the role using the Amazon EC2 Container Service Task Role service role in the IAM console.
Then you can attach your specific IAM policy to the role that gives the containers in your task the
permissions you desire. The procedures below describe how to do this.
If you have multiple task definitions or services that require IAM permissions, you should consider
creating a role for each specific task definition or service with the minimum required permissions for the
tasks to operate so that you can minimize the access that you provide for each task.
In this example, we create a policy to allow read-only access to an Amazon S3 bucket. You could
store database credentials or other secrets in this bucket, and the containers in your task can read the
credentials from the bucket and load them into your application.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1465589882000",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-task-secrets-bucket/*"
]
}
]
}
3. In the Select Role Type section, choose Select next to the Amazon EC2 Container Service Task Role
service role.
4. In the Attach Policy section, select the policy you want to use for your tasks (in this example,
AmazonECSTaskS3BucketPolicy), and then choose Next Step.
5. In the Role Name field, enter a name for your role. For this example, type
AmazonECSTaskS3BucketRole to name the role, and then choose Create Role to finish.
To ensure that you are using a supported SDK, follow the installation instructions for your preferred SDK
at Tools for Amazon Web Services when you are building your containers.
The following AWS SDK versions and above support IAM roles for tasks:
• Specify an IAM role for your tasks in the task definition. You can create a new task definition or a
new revision of an existing task definition and specify the role you created previously. If you use the
console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS
CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see
Creating a Task Definition (p. 102).
Note
This option is required if you want to use IAM task roles in an Amazon ECS service.
• Specify an IAM task role override when running a task. You can specify an IAM task role override when
running a task. If you use the console to run your task, choose Advanced Options and then choose
your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using
the taskRoleArn parameter in the overrides JSON object. For more information, see Running
Tasks (p. 148).
Note
In addition to the standard Amazon ECS permissions required to run tasks and services, IAM
users also require iam:PassRole permissions to use IAM roles for tasks.
Topics
• Amazon ECS First Run Wizard (p. 256)
• Clusters (p. 258)
• Container Instances (p. 259)
• Task Definitions (p. 260)
• Run Tasks (p. 260)
• Start Tasks (p. 261)
• List and Describe Tasks (p. 261)
• Create Services (p. 262)
• Update Services (p. 263)
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:CreateAutoScalingGroup",
"autoscaling:CreateLaunchConfiguration",
"autoscaling:CreateOrUpdateTags",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:DeleteLaunchConfiguration",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeAutoScalingNotificationTypes",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeScalingActivities",
"autoscaling:DescribeTags",
"autoscaling:DescribeTriggers",
"autoscaling:UpdateAutoScalingGroup",
"cloudformation:CreateStack",
"cloudformation:DescribeStack*",
"cloudformation:DeleteStack",
"cloudformation:UpdateStack",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListMetrics",
"ec2:AssociateRouteTable",
"ec2:AttachInternetGateway",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateInternetGateway",
"ec2:CreateKeyPair",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSecurityGroup",
"ec2:CreateSubnet",
"ec2:CreateTags",
"ec2:CreateVpc",
"ec2:DeleteInternetGateway",
"ec2:DeleteRoute",
"ec2:DeleteRouteTable",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSubnet",
"ec2:DeleteTags",
"ec2:DeleteVpc",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",
"ec2:DescribeInternetGateways",
"ec2:DescribeKeyPairs",
"ec2:DescribeNetworkInterface",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcs",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:ModifyVpcAttribute",
"ec2:RunInstances",
"ec2:TerminateInstances",
"ecr:*",
"ecs:*",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DeleteLoadBalancerPolicy",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DescribeInstanceHealth",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeLoadBalancerPolicyTypes",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
"iam:AttachRolePolicy",
"iam:CreateRole",
"iam:GetPolicy",
"iam:GetPolicyVersion",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListInstanceProfiles",
"iam:ListRoles",
"iam:ListGroups",
"iam:ListUsers",
"iam:CreateInstanceProfile",
"iam:AddRoleToInstanceProfile",
"iam:ListInstanceProfilesForRole"
],
"Resource": "*"
}
]
}
Clusters
The following IAM policy allows permission to create and list clusters. The CreateCluster and
ListClusters actions do not accept any resources, so the resource definition is set to * for all
resources.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:ListClusters"
],
"Resource": [
"*"
]
}
]
}
The following IAM policy allows permission to describe and delete a specific cluster. The
DescribeClusters and DeleteCluster actions accept cluster ARNs as resources.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:DescribeClusters",
"ecs:DeleteCluster"
],
"Resource": [
"arn:aws:ecs:us-east-1:<aws_account_id>:cluster/<cluster_name>"
]
}
]
}
The following IAM policy can be attached to a user or group that would only allow that user or group to
perform operations on a specific cluster.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecs:Describe*",
"ecs:List*"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"ecs:DeleteCluster",
"ecs:DeregisterContainerInstance",
"ecs:ListContainerInstances",
"ecs:RegisterContainerInstance",
"ecs:SubmitContainerStateChange",
"ecs:SubmitTaskStateChange"
],
"Effect": "Allow",
"Resource": "arn:aws:ecs:us-east-1:<aws_account_id>:cluster/default"
},
{
"Action": [
"ecs:DescribeContainerInstances",
"ecs:DescribeTasks",
"ecs:ListTasks",
"ecs:UpdateContainerAgent",
"ecs:StartTask",
"ecs:StopTask",
"ecs:RunTask"
],
"Effect": "Allow",
"Resource": "*",
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:us-east-1:<aws_account_id>:cluster/default"
}
}
}
]
}
Container Instances
Container instance registration is handled by the Amazon ECS agent, but there may be times when you
want to allow a user to deregister an instance manually from a cluster. Perhaps the container instance
was accidentally registered to the wrong cluster, or the instance was terminated with tasks still running
on it.
The following IAM policy allows a user to list and deregister container instances in a specified cluster:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:DeregisterContainerInstance",
"ecs:ListContainerInstances"
],
"Resource": [
"arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
]
}
]
}
The following IAM policy allows a user to describe a specified container instance in a specified cluster.
To open this permission up to all container instances in a cluster, you can replace the container instance
UUID with *.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:DescribeContainerInstances"
],
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
}
},
"Resource": [
"arn:aws:ecs:<region>:<aws_account_id>:container-instance/<container_instance_UUID>"
]
}
]
}
Task Definitions
Task definition IAM policies do not support resource-level permissions, but the following IAM policy
allows a user to register, list, and describe task definitions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:RegisterTaskDefinition",
"ecs:ListTaskDefinitions",
"ecs:DescribeTaskDefinition"
],
"Resource": [
"*"
]
}
]
}
Run Tasks
The resources for RunTask are task definitions. To limit which clusters a user can run task definitions
on, you can specify them in the Condition block. The advantage is that you don't have to list both task
definitions and clusters in your resources to allow appropriate access. You can apply one, the other, or
both.
The following IAM policy allows permission to run any revision of a specific task definition on a specific
cluster:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:RunTask"
],
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
}
},
"Resource": [
"arn:aws:ecs:<region>:<aws_account_id>:task-definition/<task_family>:*"
]
}
]
}
Start Tasks
The resources for StartTask are task definitions. To limit which clusters and container instances a user
can start task definitions on, you can specify them in the Condition block. The advantage is that you
don't have to list both task definitions and clusters in your resources to allow appropriate access. You can
apply one, the other, or both.
The following IAM policy allows permission to start any revision of a specific task definition on a specific
cluster and specific container instance:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:StartTask"
],
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>",
"ecs:container-instances" : [
"arn:aws:ecs:<region>:<aws_account_id>:container-instance/<container_instance_UUID>"
]
}
},
"Resource": [
"arn:aws:ecs:<region>:<aws_account_id>:task-definition/<task_family>:*"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:ListTasks"
],
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
}
},
"Resource": [
"*"
]
}
]
}
The following IAM policy allows a user to describe a specified task in a specified cluster:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:DescribeTasks"
],
"Condition": {
"ArnEquals": {
"ecs:cluster": "arn:aws:ecs:<region>:<aws_account_id>:cluster/<cluster_name>"
}
},
"Resource": [
"arn:aws:ecs:<region>:<aws_account_id>:task/<task_UUID>"
]
}
]
}
Create Services
The following IAM policy allows a user to create Amazon ECS services in the AWS Management Console:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"application-autoscaling:Describe*",
"application-autoscaling:PutScalingPolicy",
"application-autoscaling:RegisterScalableTarget",
"cloudwatch:DescribeAlarms",
"cloudwatch:PutMetricAlarm",
"ecs:List*",
"ecs:Describe*",
"ecs:CreateService",
"elasticloadbalancing:Describe*",
"iam:AttachRolePolicy",
"iam:CreateRole",
"iam:GetPolicy",
"iam:GetPolicyVersion",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"iam:ListGroups",
"iam:ListUsers"
],
"Resource": [
"*"
]
}
]
}
Update Services
The following IAM policy allows a user to update Amazon ECS services in the AWS Management Console:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"application-autoscaling:Describe*",
"application-autoscaling:PutScalingPolicy",
"application-autoscaling:DeleteScalingPolicy",
"application-autoscaling:RegisterScalableTarget",
"cloudwatch:DescribeAlarms",
"cloudwatch:PutMetricAlarm",
"ecs:List*",
"ecs:Describe*",
"ecs:UpdateService",
"iam:AttachRolePolicy",
"iam:CreateRole",
"iam:GetPolicy",
"iam:GetPolicyVersion",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListRoles",
"iam:ListGroups",
"iam:ListUsers"
],
"Resource": [
"*"
]
}
]
}
Topics
• Installing the Amazon ECS CLI (p. 264)
• Configuring the Amazon ECS CLI (p. 265)
• Migrating Configuration Files (p. 267)
• Amazon ECS CLI Tutorial (p. 268)
• Amazon ECS Command Line Reference (p. 276)
• For macOS:
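The download command did not survive extraction; a sketch, assuming the standard Amazon S3 download URL for the latest macOS binary:

```shell
# Download the ECS CLI binary and make it executable.
sudo curl -o /usr/local/bin/ecs-cli \
    https://amazon-ecs-cli.s3.amazonaws.com/ecs-cli-darwin-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
```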
Note
If you encounter permission issues, ensure you are running PowerShell as Administrator.
2. (Optional) Verify the downloaded binary with the MD5 sum provided.
• For macOS (compare the two output strings to verify that they match):
Open Windows PowerShell and find the md5 hash of the executable you downloaded:
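A sketch of the PowerShell command, assuming the executable was saved as ecs-cli.exe in the current directory:

```powershell
# Compute the MD5 hash of the downloaded binary and compare it
# to the published MD5 sum.
Get-FileHash ecs-cli.exe -Algorithm MD5
```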
Edit the environment variables and add C:\Program Files\Amazon\ECSCLI to the PATH
variable field, separated from existing entries by using a semicolon. For example:
C:\existing\path;C:\Program Files\Amazon\ECSCLI
Restart PowerShell (or the command prompt) so that the changes take effect.
Note
Once the PATH variable is set, the ECS CLI can be used from either Windows PowerShell
or the command prompt.
4. Verify that the CLI is working properly.
ecs-cli --version
The Amazon ECS CLI requires some basic configuration information before you can use it, such as your
AWS credentials, the AWS region in which to create your cluster, and the name of the Amazon ECS
cluster to use. Configuration information is stored in the ~/.ecs directory on macOS and Linux systems
and in C:\Users\<username>\AppData\local\ecs on Windows systems.
1. Set up a CLI profile with the following command, substituting profile_name with your desired
profile name, $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables with
your AWS credentials.
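A sketch of the profile command:

```shell
# Store a named set of AWS credentials for the ECS CLI.
ecs-cli configure profile --profile-name profile_name \
    --access-key $AWS_ACCESS_KEY_ID \
    --secret-key $AWS_SECRET_ACCESS_KEY
```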
2. Complete the configuration with the following command, substituting launch type with
the launch type you want to use by default, region_name with your desired AWS region,
cluster_name with the name of an existing Amazon ECS cluster or a new cluster to use, and
configuration_name for the name you'd like to give this configuration.
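A sketch of the configure command:

```shell
# Create a named cluster configuration with a default launch type.
ecs-cli configure --cluster cluster_name \
    --default-launch-type launch_type \
    --region region_name \
    --config-name configuration_name
```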
After you have installed and configured the CLI, you can try the Amazon ECS CLI Tutorial (p. 268). For
more information, see the Amazon ECS Command Line Reference (p. 276).
Profiles
The Amazon ECS CLI supports configuring multiple sets of AWS credentials as named profiles
using the ecs-cli configure profile command. A default profile can be set by using the ecs-
cli configure profile default command. These profiles can then be referenced when you run
Amazon ECS CLI commands that require credentials by using the --ecs-profile flag; otherwise, the
default profile is used.
For more information, see ecs-cli configure profile (p. 281) and ecs-cli configure profile
default (p. 283).
Cluster Configurations
A cluster configuration is a set of fields that describes an Amazon ECS cluster, including the name of the
cluster and the region. A default cluster configuration can be set by using the ecs-cli configure
default command. The Amazon ECS CLI supports configuring multiple named cluster
configurations using the --config-name option.
For more information, see ecs-cli configure (p. 279) and ecs-cli configure default (p. 281).
Order of Precedence
There are multiple methods for passing both the credentials and the region in an Amazon ECS CLI
command. The following is the order of precedence for each of these.
b. AWS_PROFILE
c. AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
3. ECS config: attempts to fetch credentials from the default ECS profile
4. Default AWS profile: attempts to use credentials (aws_access_key_id, aws_secret_access_key)
or assume_role (role_arn, source_profile) from the AWS profile name
a. AWS_DEFAULT_PROFILE environment variable (defaults to default)
5. EC2 Instance role
• Splitting up of credential and cluster-related configuration information into two separate files.
Credential information is stored in ~/.ecs/credentials and cluster configuration information is
stored in ~/.ecs/config.
• The configuration files are formatted in YAML.
• Support for storing multiple named configurations.
• Deprecation of the field compose-service-name-prefix (name used for creating a service
<compose_service_name_prefix> + <project_name>). This field can still be configured.
However, if it is not configured, there is no longer a default value assigned. For ECS CLI v0.6.6 and
earlier, the default was ecscompose-service-.
• Removal of the field compose-project-name-prefix (name used for creating a task definition
<compose_project_name_prefix> + <project_name>). Amazon ECS CLI v1.0.0 and later
can still read old configuration files; so if this field is present then it is still read and used. However,
configuring this field is not supported in v1.0.0+ with the ecs-cli configure command, and if the
field is manually added to a v1.0.0+ configuration file it causes the Amazon ECS CLI to throw an error.
• The field cfn-stack-name-prefix (name used for creating CloudFormation stacks
<cfn_stack_name_prefix> + <cluster_name>) has been changed to cfn-stack-name. Instead
of specifying a prefix, the exact name of the CloudFormation stack can be configured.
• Amazon ECS CLI v0.6.6 and earlier allowed configuring credentials using a named AWS profile from
the ~/.aws/credentials file on your system. This functionality has been removed. However, a new
flag, --aws-profile, has been added which allows the referencing of an AWS profile inline in all
commands that require credentials.
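Based on the format that the migrated files use, a minimal ~/.ecs/credentials file might look like the following sketch (the profile name and key values are placeholders):

```yaml
version: v1
default: default
ecs_profiles:
  default:
    aws_access_key_id: AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

The cluster-related settings (cluster name, region, launch type) live separately in ~/.ecs/config, in the format shown in the ecs-cli configure examples later in this chapter.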
Note
The --project-name flag can be used to set the Project name.
When running the ecs-cli configure migrate command there is a warning message displayed
with the old configuration file, and a preview of the new configuration files. User confirmation is
required before the migration proceeds. If the --force flag is used, then the warning message is not
displayed, and the migration proceeds without any confirmation. If cfn-stack-name-prefix is used
in the legacy file, then cfn-stack-name is stored in the new file as <cfn_stack_name_prefix> +
<cluster_name>.
Topics
• Amazon ECS CLI Tutorial using Fargate Launch Type (p. 268)
• Amazon ECS CLI Tutorial using EC2 Launch Type (p. 272)
Step 1: Prerequisites
Amazon ECS needs permissions so that your Fargate task can store logs in CloudWatch Logs. Create
a task execution role ahead of time so that it can be referenced later. The role's trust policy must allow
Amazon ECS tasks to assume it:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Using the AWS CLI, attach the task execution role policy:
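Assuming the role is named ecsExecutionRole (the name referenced later in ecs-params.yml), the attachment might look like the following; AmazonECSTaskExecutionRolePolicy is the AWS managed policy for task execution:

```shell
aws iam attach-role-policy \
  --role-name ecsExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```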
Note
If this is the first time you are configuring the ECS CLI these configurations will be marked
as default. If this is not your first time configuring the ECS CLI, see ecs-cli configure
default (p. 281) and ecs-cli configure profile default (p. 283) to set this as the default
configuration and profile.
ecs-cli up
Note
This command may take a few minutes to complete as your resources are created. Take note of
the VPC and subnet IDs that are created as they will be used later.
Using the AWS CLI, create a security group using the VPC ID from the previous output:
aws ec2 create-security-group --group-name "my-sg" --description "My security group" --vpc-id "VPC_ID"
Using the AWS CLI, add a security group rule to allow inbound access on port 80:
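Using the security group ID returned by the previous command, the rule might look like:

```shell
aws ec2 authorize-security-group-ingress --group-id "security_group_id" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```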
Here is the compose file, which you can call docker-compose.yml. The wordpress container exposes
port 80 for inbound traffic to the web server. It also configures container logs to go to the CloudWatch
log group created earlier. This is the recommended best practice for Fargate tasks.
version: '2'
services:
wordpress:
image: wordpress
ports:
- "80:80"
logging:
driver: awslogs
options:
awslogs-group: tutorial
awslogs-region: us-east-1
awslogs-stream-prefix: wordpress
In addition to the Docker compose information, there are some Amazon ECS specific parameters you
need to specify for the service. Using the VPC, subnet, and security group IDs from the previous step,
create a file named ecs-params.yml with the following content:
version: 1
task_definition:
task_execution_role: ecsExecutionRole
ecs_network_mode: awsvpc
task_size:
mem_limit: 0.5GB
cpu_limit: 256
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- "subnet ID 1"
- "subnet ID 2"
security_groups:
- "security group ID"
assign_public_ip: ENABLED
Note
The assign_public_ip and task_size parameters are only valid for a Fargate task. This task
definition will fail if the launch type is changed to EC2.
Output:
In the above example, you can see the wordpress container from your compose file, and also the IP
address and port of the web server. If you point your web browser at that address, you should see the
WordPress installation wizard. Also in the output is the task-id of the container. Copy the task ID; you
will use it in the next step.
Note
The --follow option tells the ECS CLI to continuously poll for logs.
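For example, using the task ID copied in the previous step:

```shell
ecs-cli logs --task-id task_id --follow
```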
Output:
Step 9: Clean Up
When you are done with this tutorial, you should clean up your resources so they do not incur any more
charges. First, delete the service so that it stops the existing containers and does not try to run any more
tasks.
Now, take down your cluster, which cleans up the resources that you created earlier with ecs-cli up.
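Assuming the project name tutorial used for this walkthrough, the cleanup might look like the following (the service deletion first, then the cluster teardown):

```shell
ecs-cli compose --project-name tutorial service down
ecs-cli down --force
```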
Note
If this is the first time you are configuring the ECS CLI these configurations will be marked
as default. If this is not your first time configuring the ECS CLI, see ecs-cli configure
default (p. 281) and ecs-cli configure profile default (p. 283) to set this as the default
configuration and profile.
By default, the security group created for your container instances opens port 80 for inbound traffic. You
can use the --port option to specify a different port to open, or if you have more complicated security
group requirements, you can specify an existing security group to use with the --security-group
option.
This command may take a few minutes to complete as your resources are created. Now that you have a
cluster, you can create a Docker compose file and deploy it.
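The cluster creation step described above might look like the following sketch; the key pair name, size, and instance type are illustrative:

```shell
ecs-cli up --keypair id_rsa --capability-iam --size 2 --instance-type t2.medium
```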
The following parameters are supported in compose files for the Amazon ECS CLI:
• cap_add (Not valid for tasks using the Fargate launch type)
• cap_drop (Not valid for tasks using the Fargate launch type)
• command
• cpu_shares
• dns
• dns_search
• entrypoint
• environment: If an environment variable value is not specified in the compose file, but it exists in the
shell environment, the shell environment variable value is passed to the task definition that is created
for any associated tasks or services.
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• env_file
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• extra_hosts
• hostname
• image
• labels
• links (Not valid for tasks using the Fargate launch type)
• log_driver (Compose file version 1 only)
• log_opt (Compose file version 1 only)
• logging (Compose file version 2 only)
• driver
• options
• mem_limit (in bytes)
• mem_reservation (in bytes)
• ports
• privileged (Not valid for tasks using the Fargate launch type)
• read_only
• security_opt
• ulimits
• user
• volumes
• volumes_from
• working_dir
Important
The build directive is not supported at this time.
For more information about Docker compose file syntax, see the Compose file reference in the Docker
documentation.
Here is the compose file, which you can call hello-world.yml. Each container has 100 CPU units and
500 MiB of memory. The wordpress container exposes port 80 to the container instance for inbound
traffic to the web server. A logging configuration for the containers is also defined.
version: '2'
services:
wordpress:
image: wordpress
cpu_shares: 100
mem_limit: 524288000
ports:
- "80:80"
links:
- mysql
logging:
driver: awslogs
options:
awslogs-group: tutorial-wordpress
awslogs-region: us-east-1
awslogs-stream-prefix: wordpress
mysql:
image: mysql
cpu_shares: 100
mem_limit: 524288000
environment:
MYSQL_ROOT_PASSWORD: password
logging:
driver: awslogs
options:
awslogs-group: tutorial-mysql
awslogs-region: us-east-1
awslogs-stream-prefix: mysql
ecs-cli ps
In the above example, you can see the wordpress and mysql containers from your compose file, and
also the IP address and port of the web server. If you point a web browser to that address, you should
see the WordPress installation wizard.
ecs-cli ps
Output:
Before starting your service, stop the containers from your compose file with the ecs-cli compose down
command so that you have an empty cluster to work with.
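Assuming the compose file is named hello-world.yml as above, this might look like:

```shell
ecs-cli compose --file hello-world.yml down
ecs-cli compose --file hello-world.yml service up
```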
Output:
Step 8: Clean Up
When you are done with this tutorial, you should clean up your resources so they do not incur any more
charges. First, delete the service so that it stops the existing containers and does not try to run any more
tasks.
Output:
Now, take down your cluster, which cleans up the resources that you created earlier with ecs-cli up.
Output:
ecs-cli --help
Available Commands
• ecs-cli (p. 277)
• ecs-cli configure (p. 279)
• ecs-cli configure default (p. 281)
• ecs-cli configure profile (p. 281)
• ecs-cli configure profile default (p. 283)
• ecs-cli configure migrate (p. 284)
• ecs-cli up (p. 285)
• ecs-cli down (p. 290)
• ecs-cli scale (p. 291)
• ecs-cli logs (p. 293)
• ecs-cli ps (p. 295)
• ecs-cli push (p. 297)
• ecs-cli pull (p. 298)
• ecs-cli images (p. 300)
• ecs-cli license (p. 302)
• ecs-cli compose (p. 303)
• ecs-cli compose create (p. 308)
• ecs-cli compose start (p. 312)
• ecs-cli compose up (p. 314)
• ecs-cli compose service (p. 317)
ecs-cli
Description
The Amazon ECS command line interface (CLI) provides high-level commands to simplify creating,
updating, and monitoring clusters and tasks from a local development environment. The Amazon ECS
CLI supports Docker Compose, a popular open-source tool for defining and running multi-container
applications.
For a quick walkthrough of the Amazon ECS CLI, see the Amazon ECS CLI Tutorial (p. 268).
Help text is available for each individual subcommand with ecs-cli subcommand --help.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli [--version] [subcommand] [--help]
Options
Name Description
--version, -v Prints the version information for the Amazon ECS CLI.
Required: No
--help Shows the help text for the specified command.
Required: No
Available Subcommands
The ecs-cli command supports the following subcommands:
configure
Configures your AWS credentials, the region to use, and the ECS cluster name to use with the
Amazon ECS CLI. For more information, see ecs-cli configure (p. 279).
migrate
Migrates a legacy configuration file (ECS CLI v0.6.6 and older) to the new configuration file format
(ECS CLI v1.0.0 and later). The command prints a summary of the changes to be made and then asks
for confirmation to proceed. For more information, see ecs-cli configure migrate (p. 284).
up
Creates the ECS cluster (if it does not already exist) and the AWS resources required to set up the
cluster. For more information, see ecs-cli up (p. 285).
down
Deletes the AWS CloudFormation stack that was created by ecs-cli up and the associated resources.
For more information, see ecs-cli down (p. 290).
scale
Modifies the number of container instances in an ECS cluster. For more information, see ecs-cli
scale (p. 291).
logs
Retrieves container logs from CloudWatch Logs. Only valid for tasks that use the awslogs driver
and have a log stream prefix specified. For more information, see ecs-cli logs (p. 293).
ps
Lists all of the running containers in an ECS cluster. For more information, see ecs-cli ps (p. 295).
push
Pushes an image to an Amazon ECR repository. For more information, see ecs-cli push (p. 297).
pull
Pulls an image from an ECR repository. For more information, see ecs-cli pull (p. 298).
images
Lists the images in an Amazon ECR registry or repository. For more information, see ecs-cli
images (p. 300).
license
Prints the LICENSE files for the Amazon ECS CLI and its dependencies. For more information, see
ecs-cli license (p. 302).
compose
Executes docker-compose–style commands on an ECS cluster. For more information, see ecs-cli
compose (p. 303).
help
Shows the help text for the specified command.
ecs-cli configure
Description
Configures the AWS region to use, resource creation prefixes, and the Amazon ECS cluster name to use
with the Amazon ECS CLI. Stores a single named cluster configuration in the ~/.ecs/config file. The
first cluster configuration that is created is set as the default.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
• Multiple cluster configurations may be stored, but one is always the default.
• The first cluster configuration that is stored is set as the default.
• Use the ecs-cli configure default command to change which cluster configuration is set as the
default. For more information, see ecs-cli configure default (p. 281)
• A non-default cluster configuration can be referenced in a command by using the --cluster-
config flag.
Syntax
ecs-cli configure --cluster cluster_name --region region [--default-launch-type
launch_type] [--config-name config_name] [--cfn-stack-name stack_name] [--help]
Options
Name Description
--cluster, -c cluster_name Specifies the ECS cluster name to use.
Type: String
Required: Yes
--region, -r region Specifies the AWS region to use.
Type: String
Required: Yes
--config-name config_name Specifies the name of this cluster configuration. This is the
name that can be referenced in commands using the --cluster-config flag. If this option
is omitted, then the name is set to default.
Type: String
Required: No
--cfn-stack-name stack_name Specifies the name to use for the AWS CloudFormation
stack that is created on ecs-cli up.
Important
It is not recommended to use this parameter. It is
included to ensure backwards compatibility with
previous versions of the ECS CLI.
Type: String
Default: amazon-ecs-cli-setup-<cluster_name>
Required: No
--default-launch-type launch_type Specifies the default launch type to use. Valid values
are FARGATE or EC2. If not specified, no default launch type is used. For more information
about launch types, see Amazon ECS Launch Types (p. 132).
Type: String
Required: No
Required: No
Examples
Example
This example configures the Amazon ECS CLI to create a cluster configuration named ecs-cli-demo,
which uses FARGATE as the default launch type for cluster ecs-cli-demo in the us-west-2 region.
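The command for this example uses the flags from the Syntax section above:

```shell
ecs-cli configure --region us-west-2 --cluster ecs-cli-demo \
  --default-launch-type FARGATE --config-name ecs-cli-demo
```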
Output:
version: v1
default: fargate
clusters:
ecs-cli-demo:
cluster: ecs-cli-demo
region: us-west-2
default_launch_type: "FARGATE"
Syntax
ecs-cli configure default --config-name config_name
Options
Name Description
--config-name config_name Specifies the name of the cluster configuration to set as the
default.
Type: String
Required: Yes
Examples
Example
This example configures the Amazon ECS CLI to set the ecs-cli-demo cluster configuration as the
default.
ecs-cli configure profile
Description
Configures the AWS credentials to use with the Amazon ECS CLI. To change the default profile, use
the ecs-cli configure profile default command. For more information, see ecs-cli configure profile
default (p. 283).
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
• You can set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. When
you run ecs-cli configure profile, the values of those variables are stored in the Amazon ECS CLI
configuration file.
• You can pass credentials directly on the command line with the --access-key and --secret-key
options.
• You can provide the name of a new profile with the --profile-name flag. If a profile name is not
provided, then the profile is named default.
• The first profile configured is set as the default profile. The Amazon ECS CLI uses credentials specified
in this profile unless the --ecs-profile flag is used.
• Multiple profiles may be configured, but one is always the default. This profile is used when an
Amazon ECS CLI command is run that requires credentials.
• The first profile that is created is set as the default profile.
• To change the default profile, use the ecs-cli configure profile default command. For more
information, see ecs-cli configure profile default (p. 283).
• A non-default profile can be referenced in a command using the --ecs-profile flag.
Syntax
ecs-cli configure profile --profile-name profile_name --access-key
aws_access_key_id --secret-key aws_secret_access_key
Options
Name Description
--profile-name profile_name Specifies the name of this ECS profile. This is the name
that can be referenced in commands using the --ecs-
profile flag. If this option is omitted, then the name is set
to default.
Type: String
Required: Yes
--access-key aws_access_key_id Specifies the AWS access key to use.
Type: String
Required: Yes
--secret-key aws_secret_access_key Specifies the AWS secret key to use.
Type: String
Required: Yes
Examples
Example
This example configures the Amazon ECS CLI to create and use a profile named default with a set of
access keys.
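The command for this example passes the access keys from environment variables:

```shell
ecs-cli configure profile --profile-name default \
  --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY
```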
Output:
Syntax
ecs-cli configure profile default --profile-name profile_name
Options
Name Description
--profile-name profile_name Specifies the name of the ECS profile to set as the default.
Type: String
Required: Yes
Examples
Example
This example configures the Amazon ECS CLI to set the default profile as the default profile to be used.
Syntax
ecs-cli configure migrate [--force]
Options
Name Description
--force Migrates the legacy configuration file without displaying the warning message or
requiring confirmation.
Required: No
Examples
Example
This example migrates the legacy Amazon ECS CLI configuration file to the new YAML format.
ecs-cli up
Description
Creates the ECS cluster (if it does not already exist) and the AWS resources required to set up the cluster.
Syntax
ecs-cli up [--verbose] [--capability-iam | --instance-role
instance-profile-name] [--keypair keypair_name] [--size n] [--
azs availability_zone_1,availability_zone_2] [--security-group
security_group_id[,security_group_id[,...]]] [--cidr ip_range] [--port
port_number] [--subnets subnet_1,subnet_2] [--vpc vpc_id] [--instance-type
instance_type] [--image-id ami_id] [--launch-type launch_type] [--no-associate-
public-ip-address] [--force] [--cluster cluster_name] [--region region] [--
help]
Options
Name Description
--verbose, --debug Turns on debug logging.
Required: No
--capability-iam Acknowledges that this command may create IAM resources. This option
is only used if using tasks with the EC2 launch type. Cannot be used with --instance-role.
Required: No
--keypair keypair_name Specifies the name of an existing Amazon EC2 key pair to
enable SSH access to the EC2 instances in your cluster. This
option is only used if using tasks with the EC2 launch type.
Type: String
Required: No
--size n Specifies the number of instances to launch and register to the cluster. This option
is only used if using tasks with the EC2 launch type.
Type: Integer
Default: 1
Required: No
--azs availability_zone_1,availability_zone_2 Specifies a comma-separated list of two VPC
availability zones in which to create subnets.
Type: String
Required: No
--security-group security_group_id[,security_group_id[,...]] Specifies a comma-separated
list of existing security group IDs to associate with your container instances.
Required: No
--cidr ip_range Specifies a CIDR/IP range for the security group to use for
container instances in your cluster.
Note
This parameter is ignored if an existing security
group is specified with the --security-group
option.
Default: 0.0.0.0/0
Required: No
--port port_number Specifies a port to open on the security group to use for
container instances in your cluster.
Note
This parameter is ignored if an existing security
group is specified with the --security-group
option.
Type: Integer
Default: 80
Required: No
--subnets subnet_1,subnet_2 Specifies a comma-separated list of existing VPC subnet IDs in
which to launch your container instances.
Type: String
Required: No
--vpc vpc_id Specifies the ID of an existing VPC to attach your container instances to.
Type: String
Required: No
--instance-type instance_type Specifies the EC2 instance type for your container instances.
This option is only used if using tasks with the EC2 launch
type.
Type: String
Default: t2.micro
Required: No
--image-id ami_id Specifies the Amazon EC2 AMI ID to use for your container
instances. This option is only used if using tasks with the EC2
launch type.
Type: String
Required: No
--no-associate-public-ip-address Does not assign public IP addresses to new instances in
this cluster.
Required: No
--force, -f Forces the recreation of any existing resources that match your current
configuration.
Required: No
--instance-role, -f instance-profile-name Specifies a custom IAM instance profile name for
instances in your cluster. This option is only used if using tasks with the EC2 launch type.
Required: No
--launch-type launch_type Specifies the launch type to use. Available options are
FARGATE or EC2. For more information about launch types,
see Amazon ECS Launch Types (p. 132).
Type: String
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Required: No
Examples
Creating a Cluster to Use with Tasks That Will Use the EC2 Launch Type
This example brings up a cluster of four c4.large instances and configures them to use the EC2 key pair
called id_rsa.
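The command for this example:

```shell
ecs-cli up --keypair id_rsa --capability-iam --size 4 --instance-type c4.large
```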
Output:
Creating a Cluster to Use with Tasks That Will Use the Fargate Launch Type
This example brings up a cluster for your Fargate tasks and creates a new VPC with two subnets.
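The command for this example can be as simple as the following; the VPC and subnets are created for you, and the cluster name and region come from your default cluster configuration:

```shell
ecs-cli up --launch-type FARGATE
```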
Output:
ecs-cli down
Description
Deletes the AWS CloudFormation stack that was created by ecs-cli up and the associated resources. The
--force option is required.
Note
The Amazon ECS CLI can only manage tasks, services, and container instances that were created
with the CLI. To manage tasks, services, and container instances that were not created by the
Amazon ECS CLI, use the AWS Command Line Interface or the AWS Management Console.
The ecs-cli down command attempts to delete the cluster specified in ~/.ecs/config. However, if
there are any active services (even with a desired count of 0) or registered container instances in your
cluster that were not created by ecs-cli up, the cluster is not deleted and the services and pre-existing
container instances remain active. This might happen, for example, if you used an existing ECS cluster
with registered container instances, such as the default cluster.
If you have remaining services or container instances in your cluster that you would like to remove, you
can follow the procedures in Cleaning Up your Amazon ECS Resources (p. 22) to remove them and then
delete your cluster.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli down --force [--cluster cluster_name] [--region region] [--help]
Options
Name Description
--force, -f Acknowledges that this command permanently deletes resources.
Required: Yes
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Required: No
Examples
Example
This example deletes a cluster.
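The command for this example, run against the default cluster configuration:

```shell
ecs-cli down --force
```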
Output:
ecs-cli scale
Description
Modifies the number of container instances in your cluster. This command changes the desired and
maximum instance count in the Auto Scaling group created by the ecs-cli up command. You can use this
command to scale out (increase the number of instances) or scale in (decrease the number of instances)
your cluster.
Note
The Amazon ECS CLI can only manage tasks, services, and container instances that were created
with the CLI. To manage tasks, services, and container instances that were not created by the
Amazon ECS CLI, use the AWS Command Line Interface or the AWS Management Console.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli scale --capability-iam --size n [--cluster cluster_name] [--region
region] [--help]
Options
Name Description
--capability-iam Acknowledges that this command may create IAM resources.
Required: Yes
--size n Specifies the number of instances to maintain in your cluster.
Type: Integer
Required: Yes
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Required: No
Examples
Example
This example scales the current cluster to two container instances.
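The command for this example:

```shell
ecs-cli scale --capability-iam --size 2
```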
Output:
ecs-cli logs
Description
Retrieves container logs from CloudWatch Logs. Only valid for tasks that use the awslogs driver and
have a log stream prefix specified.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli logs --task-id task_id [--task-def task_definition] [--follow]
[--filter-pattern search_string] [--since n_minutes] [--start-time
2006-01-02T15:04:05+07:00] [--end-time 2006-01-02T15:04:05+07:00] [--
timestamps] [--help]
Options
Name Description
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--task-id task_id Specifies the ECS task ID to retrieve logs from.
Type: String
Required: Yes
--task-def task_definition Specifies the name or full Amazon Resource Name (ARN) of
the ECS task definition associated with the task ID. This is
only needed if the task has been stopped.
Type: String
Required: No
--follow Specifies if the logs should be streamed.
Required: No
--filter-pattern search_string Specifies the substring to search for within the logs.
Type: String
Required: No
--since n_minutes Returns logs newer than a relative duration in minutes. Cannot be used
with --start-time.
Type: Integer
Required: No
--start-time timestamp Returns logs after a specific date (format: RFC 3339.
Example: 2006-01-02T15:04:05+07:00). Cannot be used
with --since flag.
Required: No
--end-time timestamp Returns logs before a specific date (format: RFC 3339.
Example: 2006-01-02T15:04:05+07:00). Cannot be used
with --follow.
Required: No
--timestamps Specifies if time stamps are shown on each line in the log
output.
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Required: No
Examples
Example
This example prints the log for a task.
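The command for this example, where task_id is the ID of the task whose logs you want to print:

```shell
ecs-cli logs --task-id task_id
```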
ecs-cli ps
Description
Lists all running containers in your ECS cluster.
The IP address displayed by the Amazon ECS CLI depends heavily upon how you have configured your
task and cluster:
• For tasks using the EC2 launch type without task networking, the IP address shown is the public IP
address of the Amazon EC2 instance running your task, or the instance's private IP address if it lacks a
public IP address.
• For tasks using the EC2 launch type with task networking, the ECS CLI only shows a private IP address
obtained from the network interfaces section of the Describe Task output for the task.
• For tasks using the Fargate launch type, the Amazon ECS CLI returns the public IP address assigned
to the elastic network interface attached to the Fargate task. If the elastic network interface lacks a
public IP address, then the Amazon ECS CLI falls back to the private IP address obtained from the
network interfaces section of the Describe Task output.
Syntax
ecs-cli ps [--cluster cluster_name] [--region region] [--help]
Options
Name Description
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Required: No
Examples
Example
This example shows the containers that are running in the cluster.
ecs-cli ps
Output:
ecs-cli push
Description
Pushes an image to an Amazon ECR repository.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli push [--registry-id registry_id] [--region region] ECR_REPOSITORY[:TAG]
[--help]
Options
Name Description
--registry-id registry_id Specifies the ECR registry ID to which to push the image. By
default, images are pushed to the current AWS account.
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
Required: No
Examples
Example 1
This example pushes a local image called ubuntu to an ECR repository with the same name.
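The command for this example:

```shell
ecs-cli push ubuntu
```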
Output:
ecs-cli pull
Description
Pulls an image from an Amazon ECR repository.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli pull [--registry-id registry_id] [--region region] ECR_REPOSITORY[:TAG|
@DIGEST] [--help]
Options
Name Description
--registry-id registry_id Specifies the ECR registry ID from which to pull the image.
By default, images are pulled from the current AWS account.
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
Required: No
Examples
Example 1
This example pulls an image called amazonlinux from an ECR repository with the same name.
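The command for this example:

```shell
ecs-cli pull amazonlinux
```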
Output:
ecs-cli images
Description
Lists images in an Amazon ECR registry or repository.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli images [--registry-id registry_id] [--tagged|--untagged] [--region
region] [ECR_REPOSITORY] [--help]
Options
Name Description
--registry-id registry_id Specifies the ECR registry with which to list images. By
default, images are listed for the current AWS account.
Required: No
--tagged Filters the results to show only tagged images.
Required: No
--untagged Filters the results to show only untagged images.
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--cluster-config cluster_config_name Specifies the name of the ECS cluster configuration
to use. Defaults to the default cluster configuration.
Type: String
Required: No
Required: No
Examples
Example 1
This example lists all of the images in an ECR registry.
ecs-cli images
Output:
Example 2
This example lists all of the images in a specific ECR repository.
Output:
Example 3
This example lists all of the untagged images in an ECR registry.
Output:
ecs-cli license
Description
Prints the LICENSE files for the Amazon ECS CLI and its dependencies.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli license [--help]
Options
Name Description
Required: No
Examples
Example
This example prints the license files.
ecs-cli license
Output:
Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
except in compliance with the
License. A copy of the License is located at
http://aws.amazon.com/apache2.0/
or in the "license" file accompanying this file. This file is distributed on an "AS IS"
BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions
and limitations under the License.
...
ecs-cli compose
Description
Manage Amazon ECS tasks with docker-compose-style commands on an ECS cluster.
Note
To create Amazon ECS services with the Amazon ECS CLI, see ecs-cli compose service (p. 317).
The ecs-cli compose command works with a Docker compose file to create task definitions and manage
tasks. At this time, the latest version of the Amazon ECS CLI supports Docker compose file syntax
versions 1 and 2. By default, the command looks for a compose file in the current directory, called
docker-compose.yml. However, you can also specify a different file name or path to a compose file
with the --file option. This is especially useful for managing tasks and services from multiple compose
files at a time with the Amazon ECS CLI.
The ecs-cli compose command uses a project name with the task definitions and services it
creates. When the CLI creates a task definition from a compose file, the task definition is called
ecscompose-project-name. When the CLI creates a service from a compose file, the service is called
ecscompose-service-project-name. By default, the project name is the name of the current
working directory. However, you can also specify your own project name with the --project-name
option.
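For example, the --file and --project-name options described above might be combined as follows (the file and project names here are illustrative):

```shell
ecs-cli compose --file my-compose.yml --project-name my-project create
```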
Note
The Amazon ECS CLI can only manage tasks, services, and container instances that were created
with the CLI. To manage tasks, services, and container instances that were not created by the
Amazon ECS CLI, use the AWS Command Line Interface or the AWS Management Console.
The following parameters are supported in compose files for the Amazon ECS CLI:
• cap_add (Not valid for tasks using the Fargate launch type)
• cap_drop (Not valid for tasks using the Fargate launch type)
• command
• cpu_shares
• dns
• dns_search
• entrypoint
• environment: If an environment variable value is not specified in the compose file, but it exists in the
shell environment, the shell environment variable value is passed to the task definition that is created
for any associated tasks or services.
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• env_file
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• extra_hosts
• hostname
• image
• labels
• links (Not valid for tasks using the Fargate launch type)
• log_driver (Compose file version 1 only)
• log_opt (Compose file version 1 only)
• logging (Compose file version 2 only)
• driver
• options
• mem_limit (in bytes)
• mem_reservation (in bytes)
• ports
• privileged (Not valid for tasks using the Fargate launch type)
• read_only
• security_opt
• ulimits
• user
• volumes
• volumes_from
• working_dir
Important
The build directive is not supported at this time.
For more information about Docker compose file syntax, see the Compose file reference in the Docker
documentation.
Note
Ensure you are using the latest version of the Amazon ECS CLI to use all configuration options.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
The Amazon ECS parameters file uses the following schema:
version: 1
task_definition:
ecs_network_mode: string
task_role_arn: string
task_execution_role: string
task_size:
cpu_limit: string
mem_limit: string
services:
<service_name>:
essential: boolean
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- subnet_id1
- subnet_id2
security_groups:
- secgroup_id1
- secgroup_id2
assign_public_ip: ENABLED
The fields listed under task_definition correspond to fields to be included in your Amazon ECS task
definition. The following are descriptions for each:
• services ‐ corresponds to the services listed in your Docker compose file, with service_name
matching the name of the container to run. Its fields are merged into a container definition. The only
field you can specify on it is essential. If not specified, the value for essential defaults to true.
The fields listed under run_params are for values needed as options to API calls not specifically related
to a task definition, such as compose up (RunTask) and compose service up (CreateService).
Currently, the only supported parameter under run_params is network_configuration, which is a
required parameter to use task networking. It is required when using tasks with the Fargate launch type.
Syntax
ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name]
[--task-role-arn role_value] [--cluster cluster_name] [--region region] [--ecs-
params ecs-params.yml] [subcommand] [arguments] [--help]
Options
Name Description
--file, -f compose-file Specifies the Docker compose file to use. At this time, the
latest version of the Amazon ECS CLI supports Docker
compose file syntax versions 1 and 2. If the COMPOSE_FILE
environment variable is set when ecs-cli compose is run,
then the Docker compose file is set to the value of that
environment variable.
Type: String
Default: ./docker-compose.yml
Required: No
--project-name project-name Specifies the project name to use.
Type: String
Default: The current directory name.
Required: No
--task-role-arn role_value Specifies the short name or full Amazon Resource Name
(ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that
are specified in this role.
Type: String
Required: No
--ecs-params Specifies the ECS parameters that are not native to Docker
compose files. For more information, see Using Amazon ECS
Parameters (p. 305).
Default: ./ecs-params.yml
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Available Subcommands
The ecs-cli compose command supports the following subcommands. Each of these subcommands has its own associated flags, which can be displayed using the --help flag.
create
Creates an Amazon ECS task definition from your compose file. For more information, see ecs-cli
compose create (p. 308).
ps, list
Lists all the containers in your cluster that were started by the compose project.
run [containerName] ["command ..."] ...
Starts all containers overriding commands with the supplied one-off commands for the containers.
scale n
Scales the number of running tasks from your compose project to the specified count.
start
Starts a single task from the task definition created from your compose file. For more information,
see ecs-cli compose start (p. 312).
stop, down
Stops all the running tasks that were started by the compose project.
up
Creates an ECS task definition from your compose file (if it does not already exist) and runs one
instance of that task on your cluster (a combination of create and start). For more information, see
ecs-cli compose up (p. 314).
service [subcommand]
Creates an ECS service from your compose file. For more information, see ecs-cli compose
service (p. 317).
help
ecs-cli compose create
Description
Creates an Amazon ECS task definition from your compose file.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name]
create [arguments] [--help]
Options
Name Description
--file, -f compose-file Specifies the Docker compose file to use. At this time, the
latest version of the Amazon ECS CLI supports Docker
compose file syntax versions 1 and 2. If the COMPOSE_FILE
environment variable is set when ecs-cli compose is run,
then the Docker compose file is set to the value of that
environment variable.
Type: String
Default: ./docker-compose.yml
Required: No
--project-name project-name Specifies the project name to use.
Type: String
Default: The current directory name.
Required: No
--task-role-arn role_value Specifies the short name or full Amazon Resource Name
(ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that
are specified in this role.
Type: String
Required: No
--ecs-params Specifies the ECS parameters that are not native to Docker
compose files. For more information, see Using Amazon ECS
Parameters (p. 305).
Default: ./ecs-params.yml
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--launch-type launch_type Specifies the launch type to use. Available options are
FARGATE or EC2. For more information about launch types,
see Amazon ECS Launch Types (p. 132).
Type: String
Required: No
Examples
Register a Task Definition
This example creates a task definition with the project name hello-world from the hello-world.yml
compose file.
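Given the project and file names in the description, the command is likely:

```shell
ecs-cli compose --project-name hello-world --file hello-world.yml create
```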
Output:
Register a Task Definition Using the EC2 Launch Type Without Task Networking
This example creates a task definition with the project name hello-world from the hello-world.yml
compose file with additional ECS parameters specified.
version: 1
task_definition:
ecs_network_mode: host
task_role_arn: myCustomRole
services:
my_service:
essential: false
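Assuming the ECS parameters above are saved as ecs-params.yml (the default file name), the command is likely:

```shell
ecs-cli compose --project-name hello-world --file hello-world.yml --ecs-params ecs-params.yml create
```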
Output:
Register a Task Definition Using the EC2 Launch Type With Task Networking
This example creates a task definition with the project name hello-world from the hello-world.yml
compose file. Additional ECS parameters are specified for task and network configuration for the EC2
launch type. Then one instance of the task is run using the EC2 launch type.
version: 1
task_definition:
ecs_network_mode: awsvpc
services:
my_service:
essential: false
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- subnet-abcd1234
- subnet-dbca4321
security_groups:
- sg-abcd1234
- sg-dbca4321
Command:
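A command along these lines (assuming the parameters above are saved as ecs-params.yml) creates the task definition and runs one task with the EC2 launch type:

```shell
ecs-cli compose --project-name hello-world --file hello-world.yml --ecs-params ecs-params.yml up --launch-type EC2
```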
Output:
ecs-cli compose start
Description
Starts a single task from the task definition created from your compose file.
Syntax
ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name]
start [arguments] [--help]
Options
Name Description
--file, -f compose-file Specifies the Docker compose file to use. At this time, the
latest version of the Amazon ECS CLI supports Docker
compose file syntax versions 1 and 2. If the COMPOSE_FILE
environment variable is set when ecs-cli compose is run,
then the Docker compose file is set to the value of that
environment variable.
Type: String
Default: ./docker-compose.yml
Required: No
--project-name project-name Specifies the project name to use.
Type: String
Default: The current directory name.
Required: No
--task-role-arn role_value Specifies the short name or full Amazon Resource Name
(ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that
are specified in this role.
Type: String
Required: No
--ecs-params Specifies the ECS parameters that are not native to Docker
compose files. For more information, see Using Amazon ECS
Parameters (p. 305).
Default: ./ecs-params.yml
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--launch-type launch_type Specifies the launch type to use. Available options are
FARGATE or EC2. For more information about launch types,
see Amazon ECS Launch Types (p. 132).
Type: String
Required: No
Examples
Run a Task
This example creates a task definition from the hello-world.yml compose file and then runs a single
task using that task definition.
version: 1
task_definition:
ecs_network_mode: host
task_role_arn: myCustomRole
services:
my_service:
essential: false
Command:
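A command along these lines (assuming the parameters above are saved as ecs-params.yml) creates the task definition and starts a single task:

```shell
ecs-cli compose --file hello-world.yml --ecs-params ecs-params.yml start
```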
Output:
ecs-cli compose up
Description
Creates an Amazon ECS task definition from your compose file, if one does not already exist, and runs
one instance of that task on your cluster.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name]
up [arguments] [--help]
Options
Name Description
--file, -f compose-file Specifies the Docker compose file to use. At this time, the
latest version of the Amazon ECS CLI supports Docker
compose file syntax versions 1 and 2. If the COMPOSE_FILE
environment variable is set when ecs-cli compose is run,
then the Docker compose file is set to the value of that
environment variable.
Type: String
Default: ./docker-compose.yml
Required: No
--project-name project-name Specifies the project name to use.
Type: String
Default: The current directory name.
Required: No
--task-role-arn role_value Specifies the short name or full Amazon Resource Name
(ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that
are specified in this role.
Type: String
Required: No
--ecs-params Specifies the ECS parameters that are not native to Docker
compose files. For more information, see Using Amazon ECS
Parameters (p. 305).
Default: ./ecs-params.yml
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
--launch-type launch_type Specifies the launch type to use. Available options are
FARGATE or EC2. For more information about launch types,
see Amazon ECS Launch Types (p. 132).
Type: String
Required: No
Examples
Register a Task Definition Using the AWS Fargate Launch Type with Task
Networking
This example creates a task definition with the project name hello-world from the hello-world.yml
compose file. Additional ECS parameters are specified for task and network configuration for the Fargate
launch type. Then one instance of the task is run using the Fargate launch type.
version: 1
task_definition:
ecs_network_mode: awsvpc
task_execution_role: ecsTaskExecutionRole
task_size:
cpu_limit: 512
mem_limit: 2GB
services:
my_service:
essential: false
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- subnet-abcd1234
- subnet-dcba4321
security_groups:
- sg-abcd1234
- sg-dcba4321
assign_public_ip: ENABLED
Command:
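A command along these lines (assuming the parameters above are saved as ecs-params.yml) creates the task definition and runs one task with the Fargate launch type:

```shell
ecs-cli compose --project-name hello-world --file hello-world.yml --ecs-params ecs-params.yml up --launch-type FARGATE
```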
Output:
ecs-cli compose service
Description
Manage Amazon ECS services with docker-compose-style commands on an ECS cluster.
The ecs-cli compose service command works with a Docker compose file to create task definitions
and manage services. At this time, the Amazon ECS CLI supports Docker compose file syntax versions
1 and 2. By default, the command looks for a compose file in the current directory, called docker-
compose.yml. However, you can also specify a different file name or path to a compose file with the --
file option. This is especially useful for managing tasks and services from multiple compose files at a
time with the Amazon ECS CLI.
The ecs-cli compose service command uses a project name with the task definitions and services that
it creates. When the CLI creates a task definition and service from a compose file, the task definition
and service are called project-name. By default, the project name is the name of the current working
directory. However, you can also specify your own project name with the --project-name option.
Note
The Amazon ECS CLI can only manage tasks, services, and container instances that were created
with the CLI. To manage tasks, services, and container instances that were not created by the
Amazon ECS CLI, use the AWS Command Line Interface or the AWS Management Console.
The following parameters are supported in compose files for the Amazon ECS CLI:
• cap_add (Not valid for tasks using the Fargate launch type)
• cap_drop (Not valid for tasks using the Fargate launch type)
• command
• cpu_shares
• dns
• dns_search
• entrypoint
• environment: If an environment variable value is not specified in the compose file, but it exists in the
shell environment, the shell environment variable value is passed to the task definition that is created
for any associated tasks or services.
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• env_file
Important
We do not recommend using plaintext environment variables for sensitive information, such
as credential data.
• extra_hosts
• hostname
• image
• labels
• links (Not valid for tasks using the Fargate launch type)
• log_driver (Compose file version 1 only)
• log_opt (Compose file version 1 only)
• logging (Compose file version 2 only)
• driver
• options
• mem_limit (in bytes)
• mem_reservation (in bytes)
• ports
• privileged (Not valid for tasks using the Fargate launch type)
• read_only
• security_opt
• ulimits
• user
• volumes
• volumes_from
• working_dir
Important
The build directive is not supported at this time.
For more information about Docker compose file syntax, see the Compose file reference in the Docker
documentation.
Important
Some features described may only be available with the latest version of the ECS CLI. To obtain
the latest version, see Installing the Amazon ECS CLI (p. 264).
Syntax
ecs-cli compose [--verbose] [--file compose-file] [--project-name project-name]
service [subcommand] [arguments] [--help]
Options
Name Description
--file, -f compose-file Specifies the Docker compose file to use. At this time, the
latest version of the Amazon ECS CLI supports Docker
compose file syntax versions 1 and 2. If the COMPOSE_FILE
environment variable is set when ecs-cli compose is run,
then the Docker compose file is set to the value of that
environment variable.
Type: String
Default: ./docker-compose.yml
Required: No
--project-name project-name Specifies the project name to use.
Type: String
Default: The current directory name.
Required: No
--task-role-arn role_value Specifies the short name or full Amazon Resource Name
(ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that
are specified in this role.
Type: String
Required: No
--ecs-params Specifies the ECS parameters that are not native to Docker
compose files. For more information, see Using Amazon ECS
Parameters (p. 305).
Default: ./ecs-params.yml
Required: No
--cluster, -c cluster_name Specifies the ECS cluster name to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--region, -r region Specifies the AWS region to use. Defaults to the cluster
configured using the configure command.
Type: String
Required: No
--ecs-profile ecs_profile Specifies the name of the ECS profile configuration to use.
Defaults to the profile configured using the configure
profile command.
Type: String
Required: No
--aws-profile aws_profile Specifies the AWS profile to use. Enables you to use the
AWS credentials from an existing named profile in ~/.aws/
credentials.
Type: String
Required: No
Available Subcommands
The ecs-cli compose service command supports the following subcommands and arguments:
create
Creates an ECS service from your compose file. The service is created with a desired count of 0, so no
containers are started by this command.
The --deployment-max-percent option specifies the upper limit (as a percentage of the
service's desiredCount) of the number of running tasks that can be running in a service during
a deployment (the default value is 200). The --deployment-min-healthy-percent option
specifies the lower limit (as a percentage of the service's desiredCount) of the number of running
tasks that must remain running and healthy in a service during a deployment (the default value is
100). For more information, see maximumPercent (p. 164) and minimumHealthyPercent (p. 165).
You can optionally run your service behind a load balancer. The load balancer distributes traffic
across the tasks that are associated with the service. For more information, see Service Load
Balancing (p. 165). After you create a service, the load balancer name or target group ARN, container
name, and container port specified in the service definition are immutable.
Note
You must create your load balancer resources before you can configure a service to
use them. Your load balancer resources should reside in the same VPC as your container
instances and they should be configured to use the same subnets. You must also add a
security group rule to your container instance security group that allows inbound traffic
from your load balancer. For more information, see Creating a Load Balancer (p. 170).
• To configure your service to use an existing Elastic Load Balancing Classic Load Balancer, you must
specify the load balancer name, the container name (as it appears in a container definition), and
the container port to access from the load balancer. When a task from this service is placed on a
container instance, the container instance is registered with the load balancer specified here.
• To configure your service to use an existing Elastic Load Balancing Application Load Balancer, you
must specify the load balancer target group ARN, the container name (as it appears in a container
definition), and the container port to access from the load balancer. When a task from this service
is placed on a container instance, the container instance and port combination is registered as a
target in the target group specified here.
start [--create-log-groups]
Starts one copy of each of the containers on the created ECS service. This command updates the
desired count of the service to 1.
up [--deployment-max-percent n] [--deployment-min-healthy-percent n] [--load-balancer-name
value|--target-group-arn value] [--container-name value] [--container-port value] [--role value]
[--timeout value] [--launch-type launch_type] [--create-log-groups]
Creates an ECS service from your compose file (if it does not already exist) and runs one instance
of that task on your cluster (a combination of create and start). This command updates the desired
count of the service to 1.
The --deployment-max-percent option specifies the upper limit (as a percentage of the
service's desiredCount) of the number of running tasks that can be running in a service during
a deployment (the default value is 200). The --deployment-min-healthy-percent option
specifies the lower limit (as a percentage of the service's desiredCount) of the number of running
tasks that must remain running and healthy in a service during a deployment (the default value is
100). For more information, see maximumPercent (p. 164) and minimumHealthyPercent (p. 165).
The --timeout option specifies the timeout value in minutes (decimals supported) to wait for the
running task count to change. If the running task count has not changed for the specified period of
time, then the CLI times out and returns an error. Setting the timeout to 0 will cause the command
to return without checking for success. The default timeout value is 5 minutes.
You can optionally run your service behind a load balancer. The load balancer distributes traffic
across the tasks that are associated with the service. For more information, see Service Load
Balancing (p. 165). After you create a service, the load balancer name or target group ARN, container
name, and container port specified in the service definition are immutable.
Note
You must create your load balancer resources before you can configure a service to
use them. Your load balancer resources should reside in the same VPC as your container
instances and they should be configured to use the same subnets. You must also add a
security group rule to your container instance security group that allows inbound traffic
from your load balancer. For more information, see Creating a Load Balancer (p. 170).
• To configure your service to use an existing Elastic Load Balancing Classic Load Balancer, you must
specify the load balancer name, the container name (as it appears in a container definition), and
the container port to access from the load balancer. When a task from this service is placed on a
container instance, the container instance is registered with the load balancer specified here.
• To configure your service to use an existing Elastic Load Balancing Application Load Balancer, you
must specify the load balancer target group ARN, the container name (as it appears in a container
definition), and the container port to access from the load balancer. When a task from this service
is placed on a container instance, the container instance and port combination is registered as a
target in the target group specified here.
ps, list
Lists all the containers in your cluster that belong to the service created with the compose project.
scale [--deployment-max-percent n] [--deployment-min-healthy-percent n] [--timeout value] n
Scales the desired count of running tasks for the service to the specified count n.
The --deployment-max-percent option specifies the upper limit (as a percentage of the
service's desiredCount) of the number of running tasks that can be running in a service during
a deployment (the default value is 200). The --deployment-min-healthy-percent option
specifies the lower limit (as a percentage of the service's desiredCount) of the number of running
tasks that must remain running and healthy in a service during a deployment (the default value is
100). For more information, see maximumPercent (p. 164) and minimumHealthyPercent (p. 165).
The --timeout option specifies the timeout value in minutes (decimals supported) to wait for the
running task count to change. If the running task count has not changed for the specified period of
time, then the CLI times out and returns an error. Setting the timeout to 0 will cause the command
to return without checking for success. The default timeout value is 5 minutes.
stop [--timeout value]
Stops the running tasks that belong to the service created with the compose project. This command
updates the desired count of the service to 0.
The --timeout option specifies the timeout value in minutes (decimals supported) to wait for the
running task count to change. If the running task count has not changed for the specified period of
time, then the CLI times out and returns an error. Setting the timeout to 0 will cause the command
to return without checking for success. The default timeout value is 5 minutes.
rm, delete, down [--timeout value]
Updates the desired count of the service to 0 and then deletes the service.
The --timeout option specifies the timeout value in minutes (decimals supported) to wait for the
running task count to change. If the running task count has not changed for the specified period of
time, then the CLI times out and returns an error. Setting the timeout to 0 will cause the command
to return without checking for success. The default timeout value is 5 minutes.
help
Examples
Example 1
This example brings up an ECS service with the project name hello-world from the hello-
world.yml compose file.
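Given the project and file names in the description, the command is likely:

```shell
ecs-cli compose --project-name hello-world --file hello-world.yml service up
```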
Output:
Example 2
This example scales the service created by the hello-world project to a desired count of 2.
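Using the scale subcommand described above, the command is likely:

```shell
ecs-cli compose --project-name hello-world service scale 2
```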
Output:
Example 3
This example scales the service created by the hello-world project to a desired count of 0 and then
deletes the service.
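The rm subcommand described above performs both steps, so the command is likely:

```shell
ecs-cli compose --project-name hello-world service rm
```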
Output:
Example 4
This example creates a service from the nginx-compose.yml compose file and configures it to use an
existing Application Load Balancer.
For more information on the other tools available for managing your AWS resources, including the
different AWS SDKs, IDE toolkits, and the Windows PowerShell command line tools, see http://
aws.amazon.com/tools/.
The following walkthroughs help you set up an Amazon ECS cluster and run a task using either the Fargate or EC2 launch type:
Topics
• AWS CLI Walkthrough with a Fargate Task (p. 325)
• AWS CLI Walkthrough with an EC2 Task (p. 331)
Create your own cluster with a unique name with the following command:
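For example, using the cluster name shown in the output below:

```shell
aws ecs create-cluster --cluster-name fargate-cluster
```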
Output:
{
"cluster": {
"status": "ACTIVE",
"statistics": [],
"clusterName": "fargate-cluster",
"registeredContainerInstancesCount": 0,
"pendingTasksCount": 0,
"runningTasksCount": 0,
"activeServicesCount": 0,
"clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster"
}
}
Before you can run a task on your cluster, you must register a task definition. The following is an example task definition that runs a simple web server using an httpd container:
{
"family": "sample-fargate",
"networkMode": "awsvpc",
"containerDefinitions": [
{
"name": "fargate-app",
"image": "httpd:2.4",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [
"sh",
"-c"
],
"command": [
"/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title>
<style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div
style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!
</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></
html>' > /usr/local/apache2/htdocs/index.html && httpd-foreground\""
]
}
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "256",
"memory": "512"
}
The above example JSON can be passed to the AWS CLI in two ways: you can save the task definition
JSON as a file and pass it with the --cli-input-json file://path_to_file.json option, or you
can escape the quotation marks in the JSON and pass the JSON container definitions on the command
line. If you choose to pass the container definitions on the command line, your
command additionally requires a --family parameter that is used to keep multiple versions of your
task definition associated with each other.
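If you choose the command-line form, the escaped single-line JSON can be generated programmatically. The following sketch uses Python's standard json module; the container definition is trimmed from the example above, and the printed command is illustrative:

```python
import json

# Container definitions taken from the sample-fargate task definition above
# (trimmed to a few fields for brevity).
container_definitions = [
    {
        "name": "fargate-app",
        "image": "httpd:2.4",
        "portMappings": [
            {"containerPort": 80, "hostPort": 80, "protocol": "tcp"}
        ],
        "essential": True,
    }
]

# json.dumps emits a single-line JSON string; wrapping it in single quotes
# is one way to pass it safely on a POSIX shell command line.
arg = json.dumps(container_definitions)
command = (
    "aws ecs register-task-definition --family sample-fargate "
    "--container-definitions '" + arg + "'"
)
print(command)
```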
The register-task-definition command returns a description of the task definition after it completes its
registration.
{
"taskDefinition": {
"status": "ACTIVE",
"networkMode": "awsvpc",
"family": "sample-fargate",
"placementConstraints": [],
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"cpu": "256",
"compatibilities": [
"EC2",
"FARGATE"
],
"volumes": [],
"memory": "512",
"requiresCompatibilities": [
"FARGATE"
],
"taskDefinitionArn": "arn:aws:ecs:region:aws_account_id:task-definition/sample-
fargate:2",
"containerDefinitions": [
{
"environment": [],
"name": "fargate-app",
"mountPoints": [],
"image": "httpd:2.4",
"cpu": 0,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 80,
"hostPort": 80
}
],
"entryPoint": [
"sh",
"-c"
],
"command": [
"/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title>
<style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div
style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!
</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></
html>' > /usr/local/apache2/htdocs/index.html && httpd-foreground\""
],
"essential": true,
"volumesFrom": []
}
],
"revision": 2
}
}
Output:
{
"taskDefinitionArns": [
"arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:1",
"arn:aws:ecs:region:aws_account_id:task-definition/sample-fargate:2"
]
}
Output:
{
"service": {
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-
fargate:1",
"pendingCount": 0,
"launchType": "FARGATE",
"loadBalancers": [],
"roleArn": "arn:aws:iam::aws_account_id:role/aws-service-role/ecs.amazonaws.com/
AWSServiceRoleForECS",
"placementConstraints": [],
"createdAt": 1510811361.128,
"desiredCount": 2,
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-abcd1234"
],
"securityGroups": [
"sg-abcd1234"
],
"assignPublicIp": "DISABLED"
}
},
"platformVersion": "LATEST",
"serviceName": "fargate-service",
"clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster",
"serviceArn": "arn:aws:ecs:region:aws_account_id:service/fargate-service",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"status": "PRIMARY",
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-abcd1234"
],
"securityGroups": [
"sg-abcd1234"
],
"assignPublicIp": "DISABLED"
}
},
"pendingCount": 0,
"launchType": "FARGATE",
"createdAt": 1510811361.128,
"desiredCount": 2,
"taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/
sample-fargate:1",
"updatedAt": 1510811361.128,
"platformVersion": "0.0.1",
"id": "ecs-svc/9223370526043414679",
"runningCount": 0
}
],
"events": [],
"runningCount": 0,
"placementStrategy": []
}
}
Output:
{
"serviceArns": [
"arn:aws:ecs:region:aws_account_id:service/fargate-service"
]
}
Output:
{
"services": [
{
"status": "ACTIVE",
"taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/sample-
fargate:1",
"pendingCount": 2,
"launchType": "FARGATE",
"loadBalancers": [],
"roleArn": "arn:aws:iam::aws_account_id:role/aws-service-role/
ecs.amazonaws.com/AWSServiceRoleForECS",
"placementConstraints": [],
"createdAt": 1510811361.128,
"desiredCount": 2,
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-abcd1234"
],
"securityGroups": [
"sg-abcd1234"
],
"assignPublicIp": "DISABLED"
}
},
"platformVersion": "LATEST",
"serviceName": "fargate-service",
"clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/fargate-cluster",
"serviceArn": "arn:aws:ecs:region:aws_account_id:service/fargate-service",
"deploymentConfiguration": {
"maximumPercent": 200,
"minimumHealthyPercent": 100
},
"deployments": [
{
"status": "PRIMARY",
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-abcd1234"
],
"securityGroups": [
"sg-abcd1234"
],
"assignPublicIp": "DISABLED"
}
},
"pendingCount": 2,
"launchType": "FARGATE",
"createdAt": 1510811361.128,
"desiredCount": 2,
"taskDefinition": "arn:aws:ecs:region:aws_account_id:task-definition/
sample-fargate:1",
"updatedAt": 1510811361.128,
"platformVersion": "0.0.1",
"id": "ecs-svc/9223370526043414679",
"runningCount": 0
}
],
"events": [
{
"message": "(service fargate-service) has started 2 tasks: (task
53c0de40-ea3b-489f-a352-623bf1235f08) (task d0aec985-901b-488f-9fb4-61b991b332a3).",
"id": "92b8443e-67fb-4886-880c-07e73383ea83",
"createdAt": 1510811841.408
},
{
"message": "(service fargate-service) has started 2 tasks: (task
b4911bee-7203-4113-99d4-e89ba457c626) (task cc5853e3-6e2d-4678-8312-74f8a7d76474).",
"id": "d85c6ec6-a693-43b3-904a-a997e1fc844d",
"createdAt": 1510811601.938
},
{
"message": "(service fargate-service) has started 2 tasks: (task
cba86182-52bf-42d7-9df8-b744699e6cfc) (task f4c1ad74-a5c6-4620-90cf-2aff118df5fc).",
"id": "095703e1-0ca3-4379-a7c8-c0f1b8b95ace",
"createdAt": 1510811364.691
}
],
"runningCount": 0,
"placementStrategy": []
}
],
"failures": []
}
Step 2: Launch an Instance with the Amazon ECS AMI (p. 332)
Note
The benefit of using the default cluster that is provided for you is that you don't have to
specify the --cluster cluster_name option in the subsequent commands. If you do create
your own nondefault cluster, you must specify --cluster cluster_name for each
command that you intend to use with that cluster.
Create your own cluster with a unique name with the following command:
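The command itself is not preserved in this excerpt; based on the output that follows, it would be along these lines:

```shell
# Create a cluster named MyCluster (name inferred from the output below).
aws ecs create-cluster --cluster-name MyCluster
```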
Output:
{
"cluster": {
"clusterName": "MyCluster",
"status": "ACTIVE",
"clusterArn": "arn:aws:ecs:region:aws_account_id:cluster/MyCluster"
}
}
The current Amazon ECS-optimized Linux AMI IDs by region are listed below for reference.
Output:
{
"containerInstanceArns": [
"arn:aws:ecs:us-east-1:aws_account_id:container-instance/container_instance_ID"
]
}
Output:
{
"failures": [],
"containerInstances": [
{
"status": "ACTIVE",
"registeredResources": [
{
"integerValue": 1024,
"longValue": 0,
"type": "INTEGER",
"name": "CPU",
"doubleValue": 0.0
},
{
"integerValue": 995,
"longValue": 0,
"type": "INTEGER",
"name": "MEMORY",
"doubleValue": 0.0
},
{
"name": "PORTS",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [
"22",
"2376",
"2375",
"51678"
],
"type": "STRINGSET",
"integerValue": 0
},
{
"name": "PORTS_UDP",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [],
"type": "STRINGSET",
"integerValue": 0
}
],
"ec2InstanceId": "instance_id",
"agentConnected": true,
"containerInstanceArn": "arn:aws:ecs:us-west-2:aws_account_id:container-
instance/container_instance_ID",
"pendingTasksCount": 0,
"remainingResources": [
{
"integerValue": 1024,
"longValue": 0,
"type": "INTEGER",
"name": "CPU",
"doubleValue": 0.0
},
{
"integerValue": 995,
"longValue": 0,
"type": "INTEGER",
"name": "MEMORY",
"doubleValue": 0.0
},
{
"name": "PORTS",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [
"22",
"2376",
"2375",
"51678"
],
"type": "STRINGSET",
"integerValue": 0
},
{
"name": "PORTS_UDP",
"longValue": 0,
"doubleValue": 0.0,
"stringSetValue": [],
"type": "STRINGSET",
"integerValue": 0
}
],
"runningTasksCount": 0,
"attributes": [
{
"name": "com.amazonaws.ecs.capability.privileged-container"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.logging-driver.json-file"
},
{
"name": "com.amazonaws.ecs.capability.logging-driver.syslog"
}
],
"versionInfo": {
"agentVersion": "1.5.0",
"agentHash": "b197edd",
"dockerVersion": "DockerVersion: 1.7.1"
}
}
]
}
You can also find the Amazon EC2 instance ID, which you can use to monitor the instance in the Amazon
EC2 console or with the aws ec2 describe-instances --instance-ids instance_id command.
{
"containerDefinitions": [
{
"name": "sleep",
"image": "busybox",
"cpu": 10,
"command": [
"sleep",
"360"
],
"memory": 10,
"essential": true
}
],
"family": "sleep360"
}
You can pass the preceding example JSON to the AWS CLI in two ways: save the task definition
JSON as a file and pass it with the --cli-input-json file://path_to_file.json option, or
escape the quotation marks in the JSON and pass the container definitions on the command
line, as in the following example. If you pass the container definitions on the command line, your
command also requires a --family parameter, which is used to keep multiple revisions of your
task definition associated with each other.
The register-task-definition command returns a description of the task definition after it completes
the registration.
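As a sketch of the command-line variant, with the quotation marks escaped and the --family parameter supplied explicitly:

```shell
# Register the sleep360 task definition entirely on the command line.
aws ecs register-task-definition --family sleep360 \
  --container-definitions "[{\"name\":\"sleep\",\"image\":\"busybox\",\"cpu\":10,\"command\":[\"sleep\",\"360\"],\"memory\":10,\"essential\":true}]"
```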
{
"taskDefinition": {
"volumes": [],
"taskDefinitionArn": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:1",
"containerDefinitions": [
{
"environment": [],
"name": "sleep",
"mountPoints": [],
"image": "busybox",
"cpu": 10,
"portMappings": [],
"command": [
"sleep",
"360"
],
"memory": 10,
"essential": true,
"volumesFrom": []
}
],
"family": "sleep360",
"revision": 1
}
}
Output:
{
"taskDefinitionArns": [
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep300:1",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep300:2",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/sleep360:1",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:3",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:4",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:5",
"arn:aws:ecs:us-east-1:aws_account_id:task-definition/wordpress:6"
]
}
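The output below comes from a run-task call whose invocation is not preserved in this excerpt; it presumably resembled the following sketch:

```shell
# Run one copy of the sleep360 task on the default cluster.
aws ecs run-task --cluster default --task-definition sleep360:1 --count 1
```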
Output:
{
"tasks": [
{
"taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/task_ID",
"overrides": {
"containerOverrides": [
{
"name": "sleep"
}
]
},
"lastStatus": "PENDING",
"containerInstanceArn": "arn:aws:ecs:us-east-1:aws_account_id:container-
instance/container_instance_ID",
"clusterArn": "arn:aws:ecs:us-east-1:aws_account_id:cluster/default",
"desiredStatus": "RUNNING",
"taskDefinitionArn": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/
sleep360:1",
"containers": [
{
"containerArn": "arn:aws:ecs:us-
east-1:aws_account_id:container/container_ID",
"taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/task_ID",
"lastStatus": "PENDING",
"name": "sleep"
}
]
}
]
}
Output:
{
"taskArns": [
"arn:aws:ecs:us-east-1:aws_account_id:task/task_ID"
]
}
Output:
{
"failures": [],
"tasks": [
{
"taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/task_ID",
"overrides": {
"containerOverrides": [
{
"name": "sleep"
}
]
},
"lastStatus": "RUNNING",
"containerInstanceArn": "arn:aws:ecs:us-east-1:aws_account_id:container-
instance/container_instance_ID",
"clusterArn": "arn:aws:ecs:us-east-1:aws_account_id:cluster/default",
"desiredStatus": "RUNNING",
"taskDefinitionArn": "arn:aws:ecs:us-east-1:aws_account_id:task-definition/
sleep360:1",
"containers": [
{
"containerArn": "arn:aws:ecs:us-
east-1:aws_account_id:container/container_ID",
"taskArn": "arn:aws:ecs:us-east-1:aws_account_id:task/task_ID",
"lastStatus": "RUNNING",
"name": "sleep",
"networkBindings": []
}
]
}
]
}
Topics
• Microservices (p. 339)
• Batch Jobs (p. 341)
Microservices
Microservices are an architectural approach that decomposes complex applications into
smaller, independent services. Containers are well suited to running small, decoupled services, and they
offer the following advantages:
• Containers make services easy to model as an immutable image that includes all of your dependencies.
• Containers can run applications written in any programming language.
• A container image is a versioned artifact, so you can trace each image back to the source it
was built from.
• You can test your containers locally, and deploy the same artifact at scale.
The following sections cover some of the aspects and challenges that you must consider when designing
a microservices architecture to run on Amazon ECS. You can also view the microservices reference
architecture on GitHub. For more information, see Deploying Microservices with Amazon ECS, AWS
CloudFormation, and an Application Load Balancer.
Topics
• Auto Scaling (p. 339)
• Service Discovery (p. 340)
• Authorization and Secrets Management (p. 340)
• Logging (p. 340)
• Continuous Integration and Continuous Deployment (p. 340)
Auto Scaling
The application load on your microservices architecture can change over time. A responsive application
scales out or in, depending on actual or anticipated load. Amazon ECS provides several tools
to scale not only the services running in your clusters, but also the clusters themselves.
For example, Amazon ECS provides CloudWatch metrics for your clusters and services. For more
information, see Amazon ECS CloudWatch Metrics (p. 200). You can monitor the memory and CPU
utilization for your clusters and services. Then, use those metrics to trigger CloudWatch alarms that can
automatically scale out your cluster when its resources are running low, and scale them back in when
you don't need as many resources. For more information, see Tutorial: Scaling Container Instances with
CloudWatch Alarms (p. 208).
In addition to scaling your cluster size, your Amazon ECS service can optionally be configured to use
Service Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms. Service
Auto Scaling is available in all regions that support Amazon ECS. For more information, see Service Auto
Scaling (p. 179).
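As a sketch, Service Auto Scaling is configured through the Application Auto Scaling API by registering the service's desired count as a scalable target. The cluster name, service name, and capacity bounds here are assumptions:

```shell
# Register an ECS service's DesiredCount as a scalable target
# (hypothetical cluster/service names and capacity limits).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/default/fargate-service \
  --min-capacity 1 \
  --max-capacity 4
```

A scaling policy tied to a CloudWatch alarm can then adjust the desired count within these bounds.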
Service Discovery
Service discovery is a key component of most distributed systems and service-oriented architectures.
With service discovery, your microservice components are automatically discovered as they get created
and terminated on a given infrastructure. There are several approaches that you can use to make your
services discoverable. The following resources describe a few examples:
• Run Containerized Microservices with Amazon EC2 Container Service and Application Load Balancer:
This post describes how to use the dynamic port mapping and path-based routing features of Elastic
Load Balancing Application Load Balancers to provide service discovery for a microservice architecture.
• Amazon Elastic Container Service - Reference Architecture: Service Discovery: This Amazon ECS
reference architecture provides service discovery to containers using CloudWatch Events, Lambda, and
Route 53 private hosted zones.
• Service Discovery via Consul with Amazon ECS: This post shows how a third party tool called Consul
by HashiCorp can augment the capabilities of Amazon ECS by providing service discovery for an ECS
cluster (complete with an example application).
Logging
You can configure your container instances to send log information to CloudWatch Logs. This
enables you to view different logs from your container instances in one convenient location. For
more information about getting started using CloudWatch Logs on your container instances that
were launched with the Amazon ECS-optimized AMI, see Using CloudWatch Logs with Container
Instances (p. 53).
You can configure the containers in your tasks to send log information to CloudWatch Logs. This enables
you to view different logs from your containers in one convenient location, and it prevents your container
logs from taking up disk space on your container instances. For more information about getting started
using the awslogs log driver in your task definitions, see Using the awslogs Log Driver (p. 137).
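For illustration, the awslogs driver is enabled per container with a logConfiguration block in the container definition; the log group name here is an assumption:

```json
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "awslogs-example",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs"
    }
}
```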
• Updates your Amazon ECS services to use the new image in your application
• ECS Reference Architecture: Continuous Deployment: This reference architecture demonstrates how
to achieve continuous deployment of an application to Amazon ECS using AWS CodePipeline, AWS
CodeBuild, and AWS CloudFormation.
• Continuous Delivery Pipeline for Amazon ECS Using Jenkins, GitHub, and Amazon ECR: This AWS
labs repository helps you set up and configure a continuous delivery pipeline for Amazon ECS using
Jenkins, GitHub, and Amazon ECR.
Batch Jobs
Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and
embarrassingly parallel. You can package your batch processing application into a Docker image so that
you can deploy it anywhere, such as in an Amazon ECS task. If you are interested in running batch job
workloads, consider the following resources:
• AWS Batch: For fully managed batch processing at any scale, you should consider using AWS Batch.
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of
thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity
and type of compute resources (for example, CPU or memory optimized instances) based on the
volume and specific resource requirements of the batch jobs submitted. For more information, see the
AWS Batch product detail pages.
• Amazon ECS Reference Architecture: Batch Processing: This reference architecture illustrates how
to use AWS CloudFormation, Amazon S3, Amazon SQS, and CloudWatch alarms to handle batch
processing on Amazon ECS.
This tutorial guides you through creating a VPC with two public subnets and two private subnets, which
are provided with internet access through a NAT gateway.
Topics
• Step 1: Create an Elastic IP Address for Your NAT Gateway (p. 342)
• Step 2: Run the VPC Wizard (p. 342)
• Step 3: Create Additional Subnets (p. 343)
• Next Steps (p. 343)
1. In the left navigation pane, choose Subnets and then Create Subnet.
2. For Name tag, enter a name for your subnet, such as Public subnet.
3. For VPC, choose the VPC that you created earlier.
4. For Availability Zone, choose the same Availability Zone as the additional private subnet that you
created in the previous procedure.
5. For IPv4 CIDR block, enter a valid CIDR block. For example, the wizard creates CIDR blocks in
10.0.0.0/24 and 10.0.1.0/24 by default. You could use 10.0.2.0/24 for your second public subnet.
6. Choose Yes, Create.
7. Select the public subnet that you just created and choose Route Table, Edit.
8. By default, the private route table is selected. Choose the other available route table so that the
0.0.0.0/0 destination is routed to the internet gateway (igw-xxxxxxxx) and choose Save.
9. With your second public subnet still selected, choose Subnet Actions, Modify auto-assign IP
settings.
10. Select Enable auto-assign public IPv4 address and choose Save, Close.
Next Steps
After you have created your VPC, you should consider the following next steps:
• Create security groups for your public and private resources if they require inbound network access.
For more information, see Working with Security Groups in the Amazon VPC User Guide.
• Create Amazon ECS clusters in your private or public subnets. For more information, see Creating
a Cluster (p. 25). If you use the cluster creation wizard in the Amazon ECS console, you can specify
the VPC that you just created and the public or private subnets in which to launch your instances,
depending on your use case.
• To make your containers directly accessible from the internet, launch instances into your public
subnets. Be sure to configure your container instance security groups appropriately.
• To avoid making containers directly accessible from the internet, launch instances into your private
subnets.
• Create a load balancer in your public subnets that can route traffic to services in your public or private
subnets. For more information, see Service Load Balancing (p. 165).
You can use Amazon EFS file systems with Amazon ECS to export file system data across your fleet of
container instances. That way, your tasks have access to the same persistent storage, no matter the
instance on which they land. However, you must configure your container instance AMI to mount the
Amazon EFS file system before the Docker daemon starts. Also, your task definitions must reference
volume mounts on the container instance to use the file system. The following sections help you get
started using Amazon EFS with Amazon ECS.
Topics
• Step 1: Gather Cluster Information (p. 345)
• Step 2: Create a Security Group for an Amazon EFS File System (p. 345)
• Step 3: Create an Amazon EFS File System (p. 346)
• Step 4: Configure Container Instances (p. 346)
• Step 5: Create a Task Definition to Use the Amazon EFS File System (p. 348)
• Step 6: Add Content to the Amazon EFS File System (p. 349)
• Step 7: Run a Task and View the Results (p. 350)
To create an Amazon EFS file system for Amazon ECS container instances
1. Log in to the container instance via SSH. For more information, see Connect to Your Container
Instance (p. 52).
2. Create a mount point for your Amazon EFS file system. For example, /efs.
4. Mount your file system with the following command. Be sure to replace the file system ID and region
with your own.
5. Validate that the file system is mounted correctly with the following command. You should see a file
system entry that matches your Amazon EFS file system. If not, see Troubleshooting Amazon EFS in
the Amazon Elastic File System User Guide.
7. Update the /etc/fstab file to automatically mount the file system at boot.
8. Reload the file system table to verify that your mounts are working properly.
sudo mount -a
Note
If you receive an error while running the above command, examine your /etc/fstab file
for problems. If necessary, restore it with the backup that you created earlier.
9. Restart Docker so that it can see the new file system. The following commands apply to the Amazon
ECS–optimized AMI. If you are using a different operating system, adjust the commands accordingly.
Note
These commands stop all containers that are running on the container instance.
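The individual commands are elided from this excerpt; the following is a consolidated sketch of steps 2 through 9 for the Amazon ECS-optimized AMI, using the placeholder file system ID fs-abcd1234 and region us-east-1 that also appear in the user data example later in this section. Adjust all of these values for your setup.

```shell
# Sketch only: placeholder file system ID and region.
sudo mkdir -p /efs                                     # step 2: create the mount point
sudo yum install -y nfs-utils                          # install the NFS client
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-abcd1234.efs.us-east-1.amazonaws.com:/ /efs       # step 4: mount the file system
mount | grep /efs                                      # step 5: validate the mount
sudo cp /etc/fstab /etc/fstab.bak                      # back up before editing
echo 'fs-abcd1234.efs.us-east-1.amazonaws.com:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' \
  | sudo tee -a /etc/fstab                             # step 7: mount at boot
sudo mount -a                                          # step 8: reload the file system table
sudo stop ecs && sudo service docker restart && sudo start ecs   # step 9: restart Docker and the agent
```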
You can use an Amazon EC2 user data script to bootstrap an Amazon ECS–optimized AMI at boot. For
more information, see Bootstrapping Container Instances with Amazon EC2 User Data (p. 46).
1. Follow the container instance launch instructions at Launching an Amazon ECS Container
Instance (p. 43).
2. On Step 8.g (p. 45), pass the following user data to configure your instance. If you are not using the
default cluster, be sure to replace the ECS_CLUSTER=default line in the configuration file with
your own cluster name.
--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"
# Install nfs-utils
cloud-init-per once yum_update yum update -y
cloud-init-per once install_nfs_utils yum install -y nfs-utils
# Mount /efs
cloud-init-per once mount_efs echo -e 'fs-abcd1234.efs.us-east-1.amazonaws.com:/ /efs
nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0' >> /etc/
fstab
mount -a
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Set any ECS agent configuration options
echo "ECS_CLUSTER=default" >> /etc/ecs/ecs.config
--==BOUNDARY==--
The following task definition creates a data volume called efs-html at /efs/html on the host
container instance Amazon EFS file system. The nginx container mounts the host data volume at the
NGINX root, /usr/share/nginx/html.
{
"containerDefinitions": [
{
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"mountPoints": [
{
"containerPath": "/usr/share/nginx/html",
"sourceVolume": "efs-html"
}
],
"name": "nginx",
"image": "nginx"
}
],
"volumes": [
{
"host": {
"sourcePath": "/efs/html"
},
"name": "efs-html"
}
],
"family": "nginx-efs"
}
You can save this task definition to a file called nginx-efs.json and register it to use in your own
clusters with the following AWS CLI command. For more information, see Installing the AWS Command
Line Interface in the AWS Command Line Interface User Guide.
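A sketch of that registration command, assuming the file name given above:

```shell
# Register the nginx-efs task definition from the saved file.
aws ecs register-task-definition --cli-input-json file://nginx-efs.json
```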
1. Connect using SSH to one of your container instances that is using the Amazon EFS file system. For
more information, see Connect to Your Container Instance (p. 52).
2. Write a simple HTML file by copying and pasting the following block of text into a terminal.
</body>
</html>
EOF
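The beginning of that heredoc is missing from this excerpt; the original file contents are not preserved here. A hypothetical equivalent that writes a placeholder page to the EFS-backed web root from the task definition above:

```shell
# Hypothetical reconstruction; substitute your own page content.
sudo tee /efs/html/index.html > /dev/null <<'EOF'
<html>
  <body>
    <h1>It works!</h1>
    <p>This page is served from an Amazon EFS file system.</p>
  </body>
</html>
EOF
```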
Note
If you do not see the message, make sure that the security group for your container
instances allows inbound network traffic on port 80.
Prerequisites
There are a few resources that you must have in place before you can use this tutorial to create your CD
pipeline. Here are the things you need to get started:
Note
All of these resources should be created within the same AWS Region.
• A source control repository (this tutorial uses AWS CodeCommit) with your Dockerfile and application
source. For more information, see Create an AWS CodeCommit Repository in the AWS CodeCommit
User Guide.
• A Docker image repository (this tutorial uses Amazon ECR) that contains an image you have built from
your Dockerfile and application source. For more information, see Creating a Repository and Pushing
an Image in the Amazon Elastic Container Registry User Guide.
• An Amazon ECS task definition that references the Docker image hosted in your image repository. For
more information, see Creating a Task Definition in the Amazon Elastic Container Service Developer
Guide.
• An Amazon ECS cluster that is running a service that uses your previously mentioned task definition.
For more information, see Creating a Cluster and Creating a Service in the Amazon Elastic Container
Service Developer Guide.
After you have satisfied these prerequisites, you can proceed with the tutorial and create your CD
pipeline.
• Pre-build stage:
• Log in to Amazon ECR.
• Set the repository URI to your ECR image and add an image tag with the first seven characters of the
Git commit ID of the source.
• Build stage:
• Build the Docker image and tag the image both as latest and with the Git commit ID.
• Post-build stage:
• Push the image to your ECR repository with both tags.
• Write a file called imagedefinitions.json in the build root that has your Amazon ECS service's
container name and the image and tag. The deployment stage of your CD pipeline uses this
information to create a new revision of your service's task definition, and then it updates the
service to use the new task definition. The imagedefinitions.json file is required for the AWS
CodePipeline ECS job worker.
version: 0.2
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- aws --version
- $(aws ecr get-login --region $AWS_DEFAULT_REGION)
- REPOSITORY_URI=012345678910.dkr.ecr.us-west-2.amazonaws.com/hello-world
- IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $REPOSITORY_URI:latest .
- docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker images...
- docker push $REPOSITORY_URI:latest
- docker push $REPOSITORY_URI:$IMAGE_TAG
- echo Writing image definitions file...
- printf '[{"name":"hello-world","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG >
imagedefinitions.json
artifacts:
files: imagedefinitions.json
The build specification was written for the following task definition, used by the Amazon ECS service for
this tutorial. The REPOSITORY_URI value corresponds to the image repository (without any image tag),
and the hello-world value near the end of the file corresponds to the container name in the service's
task definition.
{
"taskDefinition": {
"family": "hello-world",
"containerDefinitions": [
{
"name": "hello-world",
"image": "012345678910.dkr.ecr.us-west-2.amazonaws.com/hello-world:6a57b99",
"cpu": 100,
"portMappings": [
{
"protocol": "tcp",
"containerPort": 80,
"hostPort": 80
}
],
"memory": 128,
"essential": true
}
]
}
}
1. Open a text editor and then copy and paste the build specification above into a new file.
2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-west-2.amazonaws.com/
hello-world) with your Amazon ECR repository URI (without any image tag) for your Docker
image. Replace hello-world with the container name in your service's task definition that
references your Docker image.
3. Commit and push your buildspec.yml file to your source repository.
git add .
git commit -m "Add build specification"
git push
If this is your first time using AWS CodePipeline, an introductory page appears instead of Welcome.
Choose Get Started Now.
3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline and choose Next
step. For this tutorial, the pipeline name is hello-world.
4. On the Step 2: Source page, for Source provider, choose AWS CodeCommit.
a. For Repository name, choose the name of the AWS CodeCommit repository to use as the source
location for your pipeline.
b. For Branch name, choose the branch to use and choose Next step.
5. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create a new build project.
a. For Project name, choose a unique name for your build project. For this tutorial, the project
name is hello-world.
b. For Operating system, choose Ubuntu.
c. For Runtime, choose Docker. Choose Save build project.
a. For Cluster name, choose the Amazon ECS cluster in which your service is running. For this
tutorial, the cluster is default.
b. For Service name, choose the service to update and choose Next step. For this tutorial, the
service name is hello-world.
7. On the Step 5: Service Role page, choose Create role. On the IAM console page that describes the
role to be created for you, choose Allow.
8. Choose Next step.
9. On the Step 6: Review page, review your pipeline configuration and choose Create pipeline to
create the pipeline.
Note
Now that the pipeline has been created, it attempts to run through the different pipeline
stages. However, the default AWS CodeBuild role created by the wizard does not have
permissions to execute all of the commands contained in the buildspec.yml file, so the
build stage fails. The next section adds the permissions for the build stage.
1. Make a code change to your configured source repository, commit, and push the change.
The following table provides other limits for Amazon ECS that cannot be changed.
Resource                                           Limit
Throttle on container instance registration rate   1 per second / 60 max per minute
Throttle on task definition registration rate      1 per second / 60 max per minute
All of the Amazon ECS actions are logged and are documented in the Amazon Elastic Container Service
API Reference. For example, calls to the CreateService, RunTask, and RegisterContainerInstance actions
generate entries in the CloudTrail log files.
Every log entry contains information about who generated the request. The user identity information
in the log helps you determine whether the request was made with root or IAM user credentials,
with temporary security credentials for a role or federated user, or by another AWS service. For more
information, see the userIdentity field in the CloudTrail Event Reference.
You can store your log files in your bucket for as long as you want, but you can also define Amazon S3
life cycle rules to archive or delete log files automatically. By default, your log files are encrypted by
using Amazon S3 server-side encryption (SSE).
You can choose to have CloudTrail publish Amazon SNS notifications when new log files are delivered if
you want to take quick action upon log file delivery. For more information, see Configuring Amazon SNS
Notifications.
You can also aggregate Amazon ECS log files from multiple AWS regions and multiple AWS accounts into
a single S3 bucket. For more information, see Aggregating CloudTrail Log Files to a Single Amazon S3
Bucket.
Topics
• Invalid CPU or memory value specified (p. 358)
• Checking Stopped Tasks for Errors (p. 358)
• Service Event Messages (p. 360)
• CannotCreateContainerError: API error (500): devmapper (p. 363)
• Troubleshooting Service Load Balancers (p. 364)
• Enabling Docker Debug Output (p. 365)
• Amazon ECS Log File Locations (p. 366)
• Amazon ECS Logs Collector (p. 368)
• Agent Introspection Diagnostics (p. 369)
• Docker Diagnostics (p. 370)
• API failures Error Messages (p. 372)
• Troubleshooting IAM Roles for Tasks (p. 373)
To resolve this issue, you must specify one of the following valid combinations of CPU and memory:
CPU Memory
The current task failed the ELB health check for the load balancer that is associated with the
task's service. For more information, see Troubleshooting Service Load Balancers (p. 364).
Scaling activity initiated by (deployment deployment-id)
When you reduce the desired count of a stable service, some tasks need to be stopped in order
to reach the desired number. Tasks that are stopped by downscaling services have this stopped
reason.
Host EC2 (instance id) stopped/terminated
If you stop or terminate a container instance with running tasks, then the tasks are given this
stopped reason.
If you force the deregistration of a container instance with running tasks, then the tasks are
given this stopped reason.
Essential container in task exited
Containers marked as essential in task definitions cause a task to stop if they exit or die.
When an essential container exiting is the cause of a stopped task, Step 6 (p. 360) can
provide more diagnostic information about why the container stopped.
6. If you have a container that has stopped, expand the container and inspect the Status reason row to
see what caused the task state to change.
In the previous example, the container image name cannot be found. This can happen if you misspell
the image name.
If this inspection does not provide enough information, you can connect to the container instance
with SSH and inspect the Docker container locally. For more information, see Inspect Docker
Containers (p. 371).
• (service service-name) was unable to place a task because the resources could not be
found. (p. 362)
• (service service-name) was unable to place a task because no container instance met all of its
requirements. The closest matching container-instance container-instance-id encountered error
"AGENT". (p. 363)
• (service service-name) (instance instance-id) is unhealthy in (elb elb-name) due to (reason
Instance has failed at least the UnhealthyThreshold number of health checks consecutively.) (p. 363)
• (service service-name) is unable to consistently start tasks successfully. (p. 363)
If your task uses fixed host port mapping (for example, your task uses port 80 on the host for a web
server), you must have at least one container instance per task, because only one container can use a
single host port at a time. You should add container instances to your cluster or reduce your number
of desired tasks.
Not enough memory
If your task definition specifies 1000 MiB of memory, and the container instances in your cluster
each have 1024 MiB of memory, you can run only one copy of this task per container instance. You
can experiment with less memory in your task definition so that you can launch more than one
task per container instance, or launch more container instances into your cluster.
Not enough CPU
A container instance has 1,024 CPU units for every CPU core. If your task definition specifies 1,000
CPU units, and the container instances in your cluster each have 1,024 CPU units, you can
run only one copy of this task per container instance. You can experiment with fewer CPU units in
your task definition so that you can launch more than one task per container instance, or launch
more container instances into your cluster.
Not enough available ENI attachment points
Tasks that use the awsvpc network mode each receive their own Elastic Network Interface (ENI),
which is attached to the container instance that hosts it. Amazon EC2 instances have a limit to the
number of ENIs that can be attached to them, and the primary network interface counts as one. For
example, a c4.large instance may have 3 network interfaces attached to it. The primary network
adapter for the instance counts as one, so you can attach 2 more ENIs to the instance. Because
each awsvpc task requires an ENI, you can only run 2 such tasks on this instance type. For more
information about how many ENIs are supported per instance type, see IP Addresses Per Network
Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances. You can add container
instances to your cluster to provide more available network adapters.
Container instance missing required attribute
Some task definition parameters require a specific Docker remote API version to be installed on
the container instance. Others, such as the logging driver options, require the container instances
to register those log drivers with the ECS_AVAILABLE_LOGGING_DRIVERS agent configuration
variable. If your task definition contains a parameter that requires a specific container instance
attribute, and you do not have any available container instances that can satisfy this requirement,
the task cannot be placed. For more information on which attributes are required for specific task
definition parameters and agent configuration variables, see Task Definition Parameters (p. 107) and
Amazon ECS Container Agent Configuration (p. 81).
CannotCreateContainerError: API error (500): devmapper: Thin Pool has 4350 free data blocks
which is less than minimum required 4454 free data blocks. Create more free space in thin
pool or use dm.min_free_space option to change behavior
By default, Amazon ECS-optimized AMIs from version 2015.09.d and later launch with an 8-GiB volume
for the operating system that is attached at /dev/xvda and mounted as the root of the file system.
There is an additional 22-GiB volume that is attached at /dev/xvdcz that Docker uses for image and
metadata storage. If this storage space fills up, the Docker daemon cannot create new containers.
The easiest way to add storage to your container instances is to terminate the existing instances and
launch new ones with larger data storage volumes. However, if you are unable to do this, you can add
storage to the volume group that Docker uses and extend its logical volume by following the procedures
in Storage Configuration (p. 36).
If your container instance storage is filling up too quickly, there are a few actions that you can take to
reduce this effect:
• (Amazon ECS container agent 1.8.0 and later) Reduce the amount of time that stopped or exited
containers remain on your container instances. The ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
agent configuration variable sets the time duration to wait from when a task is stopped until
the Docker container is removed (by default, this value is 3 hours). This removes the Docker
container data. If this value is set too low, you may not be able to inspect your stopped containers
or view the logs before they are removed. For more information, see Amazon ECS Container Agent
Configuration (p. 81).
• Remove non-running containers and unused images from your container instances. You can use the
following example commands to manually remove stopped containers and unused images. Deleted
containers cannot be inspected later, and deleted images must be pulled again before starting new
containers from them.
To remove non-running containers, execute the following command on your container instance:
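The command for this step was lost in extraction. A likely form, which deletes all stopped containers on the instance (any container ID that docker rm cannot remove, such as a running container, produces an error and is skipped):

```
docker rm $(docker ps -aq)
```

Remember that deleted containers cannot be inspected later.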
To remove unused images, execute the following command on your container instance:
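The command for this step was lost in extraction. A likely form, which deletes every local image that is not in use by a running container (in-use images fail to delete with an error and are left in place):

```
docker rmi $(docker images -q)
```

Deleted images must be pulled again before you can start new containers from them.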
• Remove unused data blocks within containers. You can use the following command to run fstrim on
any running container and discard any data blocks that are unused by the container file system.
sudo sh -c "docker ps -q | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/"
The ecsServiceRole allows Amazon ECS services to register container instances with Elastic
Load Balancing load balancers. You must have the proper permissions set for this role. For more
information, see Amazon ECS Service Scheduler IAM Role (p. 247).
Container instance security group
If your container is mapped to port 80 on your container instance, your container instance security
group must allow inbound traffic on port 80 for the load balancer health checks to pass.
Elastic Load Balancing load balancer not configured for all Availability Zones
Your load balancer should be configured to use all of the Availability Zones in a region, or at least
all of the Availability Zones in which your container instances reside. If a service uses a load balancer
and starts a task on a container instance that resides in an Availability Zone that the load balancer is
not configured to use, the task never passes the health check and it is killed.
The load balancer health check parameters can be overly restrictive or point to resources that do not
exist. If a container instance is determined to be unhealthy, it is removed from the load balancer. Be
sure to verify that the following parameters are configured correctly for your service load balancer.
Ping Port
The Ping Port value for a load balancer health check is the port on the container instances
that the load balancer checks to determine whether it is healthy. If this port is misconfigured, the
load balancer will likely deregister your container instance. This port should be configured
to use the hostPort value for the container in your service's task definition that you are using
with the health check.
Ping Path
This value is often set to index.html, but if your service does not respond to that request, then
the health check fails. If your container does not have an index.html file, you can set this to /
to target the base URL for the container instance.
Response Timeout
This is the amount of time that your container has to return a response to the health check ping.
If this value is lower than the amount of time required for a response, the health check fails.
Health Check Interval
This is the amount of time between health check pings. The shorter your health check intervals
are, the faster your container instance can reach the Unhealthy Threshold.
Unhealthy Threshold
This is the number of times your health check can fail before your container instance is
considered unhealthy. If you have an unhealthy threshold of 2, and a health check interval of 30
seconds, then your task has 60 seconds to respond to the health check ping before it is assumed
unhealthy. You can raise the unhealthy threshold or the health check interval to give your tasks
more time to respond.
Unable to update the service servicename: Load balancer container name or port changed in task
definition
If your service uses a load balancer, the load balancer configuration defined for your service when
it was created cannot be changed. If you update the task definition for the service, the container
name and container port that were specified when the service was created must remain in the task
definition.
To change the load balancer name, the container name, or the container port associated with a
service load balancer configuration, you must create a new service.
Enabling Docker debug mode can be especially useful in retrieving error messages that are sent from
container registries, such as Amazon ECR, and, in many circumstances, enabling debug mode is the only
way to see these error messages.
Important
This procedure is written for the Amazon ECS-optimized AMI. For other operating systems,
see Enable debugging and Control and configure Docker with systemd in the Docker
documentation.
1. Connect to your container instance. For more information, see Connect to Your Container
Instance (p. 52).
2. Open the Docker options file with a text editor, such as vi. For the Amazon ECS-optimized AMI, the
Docker options file is at /etc/sysconfig/docker.
3. Find the Docker options statement and add the -D option to the string, inside the quotes.
Note
If the Docker options statement begins with a #, you need to remove that character to
uncomment the statement and enable the options.
For the Amazon ECS-optimized AMI, the Docker options statement is called OPTIONS. For example:
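The example line and the restart step that originally followed were lost in extraction. A likely form of the edited statement with debugging enabled (the default ulimit value shown is an assumption), followed by the daemon restart that produces the output below:

```
# /etc/sysconfig/docker -- add -D inside the quotes of the OPTIONS statement:
OPTIONS="-D --default-ulimit nofile=1024:4096"

# Then restart the Docker daemon to apply the change:
sudo service docker restart
```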
Output:
Stopping docker: [ OK ]
Starting docker: . [ OK ]
Your Docker logs should now show more verbose output. For example:
cat /var/log/ecs/ecs-agent.log.2016-08-15-15
Output:
cat /var/log/ecs/ecs-init.log.2015-04-22-20
Output:
• Timestamp
• HTTP response code
• IP address and port number of request origin
• Relative URI of the credential provider
• The user agent that made the request
• The task ARN that the requesting container belongs to
• The GetCredentials API name and version number
• The Amazon ECS cluster name that the container instance is registered to
• The container instance ARN
cat /var/log/ecs/audit.log.2016-07-13-16
Output:
• Amazon Linux
• Red Hat Enterprise Linux 7
• Debian 8
Note
The source code for the Amazon ECS logs collector is available on GitHub. We encourage you to
submit pull requests for changes that you would like to have included. However, Amazon Web
Services does not currently provide support for running modified copies of this software.
1. Connect to your container instance. For more information, see Connect to Your Container
Instance (p. 52).
2. Download the Amazon ECS logs collector script.
curl -O https://raw.githubusercontent.com/awslabs/ecs-logs-collector/master/ecs-logs-collector.sh
3. Run the script to collect the logs and create the archive.
Note
To enable debug mode for the Docker daemon and the Amazon ECS container agent,
add the --mode=debug option to the command below. Note that this may restart the
Docker daemon, which kills all containers that are running on the instance. You should
consider draining the container instance and moving any important tasks to other container
instances before enabling debug mode. For more information, see Container Instance
Draining (p. 60).
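The command for this step was lost in extraction; a likely form, run from the directory where you downloaded the script:

```
sudo bash ./ecs-logs-collector.sh
```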
After you have run the script, you can examine the collected logs in the collect folder that the script
created. The collect.tgz file is a compressed archive of all of the logs, which you can share with AWS
Support for diagnostic help.
The following example shows two tasks: one that is currently running and one that was stopped.
Note
The command below is piped through python -mjson.tool for greater readability.
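The introspection command itself was lost in extraction; a likely form, queried against the Amazon ECS container agent's introspection endpoint on the container instance:

```
curl http://localhost:51678/v1/tasks | python -mjson.tool
```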
Output:
                "DockerId": "096d685fb85a1ff3e021c8254672ab8497e3c13986b9cf005cbae9460b7b901e",
                "DockerName": "ecs-console-sample-app-static-6-busybox-92e4b8d0ecd0cce69a01",
                "Name": "busybox"
            }
        ],
        "DesiredStatus": "RUNNING",
        "Family": "console-sample-app-static",
        "KnownStatus": "RUNNING",
        "Version": "6"
    }
  ]
}
Docker Diagnostics
Docker provides several diagnostic tools that can help you troubleshoot problems with your containers
and tasks. For more information about all of the available Docker command line utilities, go to the
Docker Command Line topic in the Docker documentation. You can access the Docker command line
utilities by connecting to a container instance using SSH. For more information, see Connect to Your
Container Instance (p. 52).
The exit codes that Docker containers report can also provide some diagnostic information (for example,
exit code 137 means that the container received a SIGKILL signal). For more information, see Exit Status
in the Docker documentation.
docker ps
Output:
You can use the docker ps -a command to see all containers (even stopped or killed containers). This
is helpful for listing containers that are unexpectedly stopping. In the following example, container
f7f1f8a7a245 exited 9 seconds ago, so it would not show up in a docker ps output without the -a flag.
docker ps -a
Output:
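The container listing for the command above, and the docker logs discussion that introduced the httpd log output below, were lost in extraction. You can view the STDOUT and STDERR streams for a container with the docker logs command; a likely form, where container_id is a placeholder for your container's ID:

```
docker logs container_id
```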
Output:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name,
using 172.17.0.11. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name,
using 172.17.0.11. Set the 'ServerName' directive globally to suppress this message
[Thu Apr 23 19:48:36.956682 2015] [mpm_event:notice] [pid 1:tid 140327115417472] AH00489:
Apache/2.4.12 (Unix) configured -- resuming normal operations
[Thu Apr 23 19:48:36.956827 2015] [core:notice] [pid 1:tid 140327115417472] AH00094:
Command line: 'httpd -D FOREGROUND'
10.0.1.86 - - [23/Apr/2015:19:48:59 +0000] "GET / HTTP/1.1" 200 348
10.0.0.154 - - [23/Apr/2015:19:48:59 +0000] "GET / HTTP/1.1" 200 348
10.0.1.86 - - [23/Apr/2015:19:49:28 +0000] "GET / HTTP/1.1" 200 348
10.0.0.154 - - [23/Apr/2015:19:49:29 +0000] "GET / HTTP/1.1" 200 348
10.0.1.86 - - [23/Apr/2015:19:49:50 +0000] "-" 408 -
10.0.0.154 - - [23/Apr/2015:19:49:50 +0000] "-" 408 -
10.0.1.86 - - [23/Apr/2015:19:49:58 +0000] "GET / HTTP/1.1" 200 348
10.0.0.154 - - [23/Apr/2015:19:49:59 +0000] "GET / HTTP/1.1" 200 348
10.0.1.86 - - [23/Apr/2015:19:50:28 +0000] "GET / HTTP/1.1" 200 348
10.0.0.154 - - [23/Apr/2015:19:50:29 +0000] "GET / HTTP/1.1" 200 348
time="2015-04-23T20:11:20Z" level="fatal" msg="write /dev/stdout: broken pipe"
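The JSON that follows is docker inspect output; the command that produced it was lost in extraction. A likely form, where container_id is a placeholder for your container's ID:

```
docker inspect container_id
```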
Output:
[{
    "AppArmorProfile": "",
    "Args": [],
    "Config": {
        "AttachStderr": false,
        "AttachStdin": false,
        "AttachStdout": false,
        "Cmd": [
            "httpd-foreground"
        ],
        "CpuShares": 10,
        "Cpuset": "",
        "Domainname": "",
        "Entrypoint": null,
        "Env": [
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/apache2/bin",
            "HTTPD_PREFIX=/usr/local/apache2",
            "HTTPD_VERSION=2.4.12",
            "HTTPD_BZ2_URL=https://www.apache.org/dist/httpd/httpd-2.4.12.tar.bz2"
        ],
        "ExposedPorts": {
            "80/tcp": {}
        },
        "Hostname": "dc7240fe892a",
...
Many resources are region-specific, so make sure the console is set to the correct region for your
resources, or that your AWS CLI commands are being sent to the correct region with the --region
region option.
• DescribeClusters
MISSING (cluster ID)
Your cluster was not found. The cluster name may not have been spelled correctly or the wrong
region may be specified.
• DescribeInstances
MISSING (container instance ID)
The container instance you are attempting to describe does not exist. Perhaps the wrong cluster or
region has been specified, or the container instance ARN or ID is misspelled.
• DescribeServices
MISSING (service ID)
The service you are attempting to describe does not exist. Perhaps the wrong cluster or region has
been specified, or the container instance ARN or ID is misspelled.
• DescribeTasks
MISSING (task ID)
The task you are trying to describe does not exist. Perhaps the wrong cluster or region has been
specified, or the task ARN or ID is misspelled.
• RunTask or StartTask
RESOURCE:* (container instance ID)
The resource or resources requested by the task are unavailable on the given container instance.
If the resource is CPU, memory, ports, or ENIs, you may need to add container instances to your
cluster. For RESOURCE:ENI errors, your cluster does not have any available Elastic Network
Interface attachment points, which are required for tasks that use the awsvpc network mode.
Amazon EC2 instances have a limit to the number of ENIs that can be attached to them, and
the primary network interface counts as one. For more information about how many ENIs are
supported per instance type, see IP Addresses Per Network Interface Per Instance Type in the
Amazon EC2 User Guide for Linux Instances.
AGENT (container instance ID)
The container instance that you attempted to launch a task onto has an agent that is currently
disconnected. To prevent extended wait times for task placement, the request was rejected.
ATTRIBUTE (container instance ID)
Your task definition contains a parameter that requires a specific container instance attribute that
is not available on your container instances. For example, your task might use the awsvpc network
mode, but no instances in your specified subnets have the ecs.capability.task-eni
attribute. For more information on which attributes are required for specific task definition
parameters and agent configuration variables, see Task Definition Parameters (p. 107) and
Amazon ECS Container Agent Configuration (p. 81).
• StartTask
MISSING (container instance ID)
The container instance you attempted to launch the task onto does not exist. Perhaps the wrong
cluster or region has been specified, or the container instance ARN or ID is misspelled.
INACTIVE (container instance ID)
The container instance that you attempted to launch a task onto was previously deregistered with
Amazon ECS and cannot be used.
{
    "taskRoleArn": "ECS-task-full-access",
    "containerDefinitions": [
        {
            "memory": 128,
            "essential": true,
            "name": "amazonlinux",
            "image": "amazonlinux",
            "entryPoint": [
                "/bin/bash",
                "-c"
            ],
            "command": [
                "yum install -y aws-cli; aws ecs list-tasks --region us-west-2"
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "ecs-tasks",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "iam-role-test"
                }
            }
        }
    ],
    "family": "iam-role-test",
    "requiresCompatibilities": [
        "EC2"
    ],
    "volumes": [],
    "placementConstraints": [],
    "networkMode": null,
    "memory": null,
    "cpu": null
}
a. On the Task Definition: iam-role-test registration confirmation page, choose Actions, Run
Task.
b. On the Run Task page, choose the EC2 launch type, a cluster, and then choose Run Task to run
your task.
5. View the container logs in the CloudWatch Logs console.
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:aws_account_id:task/d48feb62-46e2-4cbc-a36b-e0400b993d1d"
    ]
}
Note
If you receive an "Unable to locate credentials" error, then IAM roles for tasks
is not enabled on your container instances. For more information, see Enabling Task
IAM Roles on your Container Instances (p. 253).
Windows Containers
Amazon ECS now supports Windows containers on container instances that are launched with the
Amazon ECS-optimized Windows AMI.
Windows container instances use their own version of the Amazon ECS container agent. On the Amazon
ECS-optimized Windows AMI, the Amazon ECS container agent runs as a service on the host. Unlike the
Linux platform, the agent does not run inside a container because it uses the host's registry and the
named pipe at \\.\pipe\docker_engine to communicate with the Docker daemon.
The source code for the Amazon ECS container agent is available on GitHub. We encourage you to submit
pull requests for changes that you would like to have included. However, Amazon Web Services does
not currently provide support for running modified copies of this software. You can view open issues for
Amazon ECS and Windows on our GitHub issues page.
Topics
• Windows Container Caveats (p. 376)
• Getting Started with Windows Containers (p. 377)
• Windows Task Definitions (p. 382)
• Windows IAM Roles for Tasks (p. 385)
• Pushing Windows Images to Amazon ECR (p. 386)
• Windows containers cannot run on Linux container instances and vice versa. To ensure proper task
placement for Windows and Linux tasks, you should keep Windows and Linux container instances in
separate clusters, and only place Windows tasks on Windows clusters. You can ensure that Windows
task definitions are only placed on Windows instances by setting the following placement constraint:
memberOf(ecs.os-type=='windows').
• Windows containers and container instances do not support all of the task definition parameters
that are available for Linux containers and container instances. Some parameters are not supported
at all, and others behave differently on Windows than they do on Linux. For more information, see
Windows Task Definitions (p. 382).
• The IAM roles for tasks feature requires that you configure your Windows container instances to allow
the feature at launch, and your containers must run some provided PowerShell code when they use the
feature. For more information, see Windows IAM Roles for Tasks (p. 385).
• The IAM roles for tasks feature uses a credential proxy to provide credentials to the containers. This
credential proxy occupies port 80 on the container instance, so if you use IAM roles for tasks, port 80
is not available for tasks. For web service containers, you can use an Application Load Balancer and
dynamic port mapping to provide standard HTTP port 80 connections to your containers. For more
information, see Service Load Balancing (p. 165).
• The Windows server Docker images are large (9 GiB), so your container instances require more storage
space than Linux container instances, which typically have smaller image sizes.
• Container instances can take up to 15 minutes to download and extract the Windows server Docker
images the first time they use them. This time can be doubled if you enable IAM roles for tasks.
Topics
• Step 1: Create a Windows Cluster (p. 377)
• Step 2: Launching a Windows Container Instance into your Cluster (p. 377)
• Step 3: Register a Windows Task Definition (p. 380)
• Step 4: Create a Service with Your Task Definition (p. 381)
• Step 5: View Your Service (p. 381)
• You can create a cluster using the AWS CLI with the following command:
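The command itself was lost in extraction; a likely form, using the windows cluster name that the rest of this walkthrough assumes:

```
aws ecs create-cluster --cluster-name windows
```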
The current Amazon ECS-optimized Windows AMI IDs by region are listed below for reference.
Region AMI ID
us-east-2 ami-b19fb1d4
us-east-1 ami-9f1182e5
us-west-2 ami-0b60bc73
us-west-1 ami-a8003bc8
eu-west-2 ami-3da4bb59
eu-west-1 ami-94d360ed
eu-central-1 ami-b4ed61db
ap-northeast-2 ami-bb3691d5
ap-northeast-1 ami-5ed66f38
ap-southeast-2 ami-918075f3
ap-southeast-1 ami-ec32618f
ca-central-1 ami-2859e24c
ap-south-1 ami-25f3ba4a
sa-east-1 ami-05cf8869
6. On the Choose an Instance Type page, you can select the hardware configuration of your instance.
The t2.micro instance type is selected by default. The instance type that you select determines the
resources available for your tasks to run on.
7. Choose Next: Configure Instance Details.
8. On the Configure Instance Details page, set the Auto-assign Public IP field according to
whether you want your instance to be accessible from the public Internet. If your instance should
be accessible from the Internet, verify that Auto-assign Public IP is set to Enable. If not,
choose Disable.
Note
Container instances need external network access to communicate with the Amazon ECS
service endpoint, so if your container instances do not have public IP addresses, then they
must use network address translation (NAT) to provide this access. For more information,
see NAT Gateways in the Amazon VPC User Guide and HTTP Proxy Configuration (p. 97)
in this guide. For help creating a VPC, see Tutorial: Creating a VPC with Public and Private
Subnets for Your Clusters (p. 342)
9. On the Configure Instance Details page, select the ecsInstanceRole IAM role value that you
created for your container instances in Setting Up with Amazon ECS (p. 8).
Important
If you do not launch your container instance with the proper IAM permissions, your
Amazon ECS agent will not connect to your cluster. For more information, see Amazon ECS
Container Instance IAM Role (p. 238).
10. Expand the Advanced Details section and paste the provided user data PowerShell script into the
User data field. By default, this script registers your container instance into the windows cluster that
you created earlier. To launch into another cluster instead of windows, replace windows in the
script below with the name of your cluster.
Note
The -EnableTaskIAMRole option is required to enable IAM roles for tasks. For more
information, see Windows IAM Roles for Tasks (p. 385).
<powershell>
Import-Module ECSTools
Initialize-ECSAgent -Cluster 'windows' -EnableTaskIAMRole
</powershell>
You can optionally increase or decrease the volume size for your instance to meet your application
needs.
13. Choose Review and Launch.
14. On the Review Instance Launch page, under Security Groups, you can see that the wizard created
and selected a security group for you. By default, the security group allows inbound traffic on port
3389 for RDP connectivity. If you want your containers to receive inbound traffic from the Internet,
you must also open the corresponding ports.
When you are ready, select the acknowledgment field, and then choose Launch Instances.
17. A confirmation page lets you know that your instance is launching. Choose View Instances to close
the confirmation page and return to the console.
18. On the Instances screen, you can view the status of your instance. It takes a short time for an
instance to launch. When you launch an instance, its initial state is pending. After the instance
starts, its state changes to running, and it receives a public DNS name. (If the Public DNS column is
hidden, choose the Show/Hide icon and choose Public DNS.)
19. After your instance has launched, you can view your cluster in the Amazon ECS console to see that
your container instance has registered with it.
Note
It can take up to 15 minutes for your Windows container instance to register with your
cluster.
To register the sample task definition with the AWS Management Console
{
    "family": "windows-simple-iis",
    "containerDefinitions": [
        {
            "name": "windows_sample_app",
            "image": "microsoft/iis",
            "cpu": 100,
            "entryPoint": ["powershell", "-Command"],
            "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "portMappings": [
                {
                    "protocol": "tcp",
                    "containerPort": 80,
                    "hostPort": 8080
                }
            ],
            "memory": 500,
            "essential": true
        }
    ]
}
To create a service from your task definition with the AWS Management Console
1. On the Task Definition: windows-simple-iis registration confirmation page, choose Actions, Create
Service.
2. On the Create Service page, enter the following information and then choose Create service.
• Cluster: windows
• Number of tasks: 1
• Service name: windows-simple-iis
To create a service from your task definition with the AWS CLI
• Using the AWS CLI, run the following command to create your service.
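The command was lost in extraction; a likely form, matching the console values listed above (cluster windows, one task, service name windows-simple-iis):

```
aws ecs create-service --cluster windows --service-name windows-simple-iis --task-definition windows-simple-iis --desired-count 1
```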
taskRoleArn
Supported: Yes
IAM roles for tasks on Windows require that the -EnableTaskIAMRole option is set when you
launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration
code in order to take advantage of the feature. For more information, see Windows IAM Roles for
Tasks (p. 385).
networkMode
Supported: No
Docker for Windows uses different network modes than Docker for Linux. When you register a task
definition with Windows containers, you must not specify a network mode. If you use the console to
register a task definition with Windows containers, you must choose the <default> network mode
object.
containerDefinitions
Supported: Yes
Additional notes: Not all container definition parameters are supported. Review the list below for
individual parameter support.
portMappings
Supported: Limited
Port mappings on Windows use the NetNAT gateway address rather than localhost. There is
no loopback for port mappings on Windows, so you cannot access a container's mapped port
from the host itself.
cpu
Supported: Yes
Amazon ECS treats this parameter in the same manner that it does for Linux containers: if you
provide 500 CPU shares to a container, that number of CPU shares is removed from the available
resources on the container instance when the task is placed. However, on a Windows container
instance, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only
have access to the specified amount of CPU that is described in the task definition.
disableNetworking
Supported: No
dnsServers
Supported: No
dnsSearchDomains
Supported: No
dockerSecurityOptions
Supported: No
extraHosts
Supported: No
links
Supported: No
mountPoints
Supported: Limited
Windows containers can mount whole directories on the same drive as $env:ProgramData.
Windows containers cannot mount directories on a different drive, and mount points cannot
span drives.
linuxParameters
Supported: No
privileged
Supported: No
readonlyRootFilesystem
Supported: No
user
Supported: No
ulimits
Supported: No
volumes
Supported: Yes
name
Supported: Yes
host
Supported: Limited
Windows containers can mount whole directories on the same drive as $env:ProgramData.
Windows containers cannot mount directories on a different drive, and mount points cannot
span drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not
D:\my\path:C:\my\path or D:\:C:\my\path.
cpu
Supported: No
Task-level CPU is ignored for Windows containers. We recommend specifying container-level CPU for
Windows containers.
memory
Supported: No
The following task definition is the Amazon ECS console sample application that is produced in the
first-run wizard for Amazon ECS; it has been ported to use the microsoft/iis Windows container image.
{
    "family": "windows-simple-iis",
    "containerDefinitions": [
        {
            "name": "windows_sample_app",
            "image": "microsoft/iis",
            "cpu": 100,
            "entryPoint": ["powershell", "-Command"],
            "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "portMappings": [
                {
                    "protocol": "tcp",
                    "containerPort": 80,
                    "hostPort": 8080
                }
            ],
            "memory": 500,
            "essential": true
        }
    ]
}
• When you launch your container instances, you must enable the feature by setting the
-EnableTaskIAMRole option in the container instance user data script. For example:
<powershell>
Import-Module ECSTools
Initialize-ECSAgent -Cluster 'windows' -EnableTaskIAMRole
</powershell>
• You must bootstrap your container with the networking commands that are provided in IAM Roles for
Task Container Bootstrap Script (p. 385).
• You must create an IAM role and policy for your tasks. For more information, see Creating an IAM Role
and Policy for your Tasks (p. 254).
• Your container must use an AWS SDK that supports IAM roles for tasks. For more information, see
Using a Supported AWS SDK (p. 255).
• You must specify the IAM role you created for your tasks when you register the task definition, or
as an override when you run the task. For more information, see Specifying an IAM Role for your
Tasks (p. 255).
• The IAM roles for the task credential provider use port 80 on the container instance, so if you enable
IAM roles for tasks on your container instance, your containers cannot use port 80 for the host port
in any port mappings. To expose your containers on port 80, we recommend configuring a service
for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be
routed to another host port on your container instances. For more information, see Service Load
Balancing (p. 165).
1. Pull a Windows Docker image locally. This example uses the microsoft/iis image.
3. Tag the image with the repositoryUri that was returned from the previous command.
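The commands for these steps, including the repository-creation step that step 3 refers to, were lost in extraction. A likely sequence, where aws_account_id and region are placeholders you must replace; note that docker push also requires that you first authenticate your Docker client to Amazon ECR:

```
# Step 1: pull the Windows Docker image locally
docker pull microsoft/iis

# Step 2 (likely): create an Amazon ECR repository; note the repositoryUri in the output
aws ecr create-repository --repository-name iis

# Step 3: tag the image with the repositoryUri returned by the previous command
docker tag microsoft/iis aws_account_id.dkr.ecr.region.amazonaws.com/iis

# Step 4 (likely): push the tagged image to the repository
docker push aws_account_id.dkr.ecr.region.amazonaws.com/iis
```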
AWS Glossary
For the latest AWS terminology, see the AWS Glossary in the AWS General Reference.