Ansible Documentation
Release 1.7
CHAPTER ONE: ABOUT ANSIBLE
1.1 Introduction
Before we dive into the really fun parts (playbooks, configuration management, deployment, and orchestration), we'll
learn how to get Ansible installed and cover some basic concepts. We'll go over how to execute ad-hoc commands in parallel
across your nodes using /usr/bin/ansible, and we'll see what sort of modules are available in Ansible's core (though
you can also write your own, which is covered later).
1.1.1 Installation
Topics
• Installation
– Getting Ansible
– Basics / What Will Be Installed
– What Version To Pick?
– Control Machine Requirements
– Managed Node Requirements
– Installing the Control Machine
* Running From Source
* Latest Release Via Yum
* Latest Releases Via Apt (Ubuntu)
* Latest Releases Via pkg (FreeBSD)
* Latest Releases Via Homebrew (Mac OSX)
* Latest Releases Via Pip
* Tarballs of Tagged Releases
Getting Ansible
You may also wish to follow the Github project if you have a github account. This is also where we keep the issue
tracker for sharing bugs and feature ideas.
What Version To Pick?

Because it runs so easily from source and does not require any installation of software on remote machines, many
users will actually track the development version.
Ansible’s release cycles are usually about two months long. Due to this short release cycle, minor bugs will generally
be fixed in the next release versus maintaining backports on the stable branch. Major bugs will still have maintenance
releases when needed, though these are infrequent.
If you want to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM),
CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.
For other installation options, we recommend installing via “pip”, which is the Python package manager, though other
options are also available.
If you wish to track the development release to use and test the latest features, we will share information about running
from source. It’s not necessary to install the program to run from source.
Control Machine Requirements

Currently Ansible can be run from any machine with Python 2.6 installed (Windows isn't supported for the control
machine).
This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
Managed Node Requirements

On the managed nodes, you only need Python 2.4 or later, but if you are running less than Python 2.5 on the remotes,
you will also need:
• python-simplejson
Note: Ansible’s “raw” module (for executing commands in a quick and dirty way) and the script module don’t even
need that. So technically, you can use Ansible to install python-simplejson using the raw module, which then allows
you to use everything else. (That’s jumping ahead though.)
Note: If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before
using any copy/file/template related functions in Ansible. You can of course still use the yum module in Ansible to
install this package on remote systems that do not have it.
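For example, an ad-hoc bootstrap of that dependency might look like the following (the 'webservers' group name is illustrative, and sudo access on the remotes is assumed):

$ ansible webservers -m yum -a "name=libselinux-python state=installed" --sudo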
Note: Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not
switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed
by default. On those systems, you should install one, and set the ‘ansible_python_interpreter’ variable in inventory
(see Inventory) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu
all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly
all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the ‘raw’ module will
be able to do it remotely.
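As a minimal sketch, assuming a hypothetical Arch Linux host named 'archbox' in your inventory, you could bootstrap Python 2 with the raw module and then point Ansible at the right interpreter:

# install Python 2 without needing Python on the remote (package name assumed)
$ ansible archbox -m raw -a "pacman -S --noconfirm python2"

Then, in inventory:

archbox ansible_python_interpreter=/usr/bin/python2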
Installing the Control Machine

Running From Source

Ansible is trivially easy to run from a checkout; root permissions are not required to use it and there is no software
to actually install for Ansible itself. No daemons or database setup are required. Because of this, many users in our
community use the development version of Ansible all of the time, so they can take advantage of new features when
they are implemented, and also easily contribute to the project. Because there is nothing to install, following the
development version is significantly easier than most open source projects.
To install from source:
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
If you don’t have pip installed in your version of Python, install pip:
$ sudo easy_install pip
Ansible also uses the following Python modules that need to be installed:
$ sudo pip install paramiko PyYAML jinja2 httplib2
Once you have run the env-setup script you'll be running from the checkout, and the default inventory file will be
/etc/ansible/hosts. You can optionally specify an inventory file (see Inventory) other than /etc/ansible/hosts:

$ echo "127.0.0.1" > ~/ansible_hosts
$ export ANSIBLE_HOSTS=~/ansible_hosts
You can read more about the inventory file in later parts of the manual.
Now let’s test things with a ping command:
$ ansible all -m ping --ask-pass
Latest Release Via Yum

RPMs are available from yum for EPEL 6, 7, and currently supported Fedora distributions.
Ansible itself can manage earlier operating systems that contain Python 2.4 or higher (so also EL5).
Fedora users can install Ansible directly. If you are using RHEL or CentOS and have not already done so, configure
EPEL first:
# install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux
$ sudo yum install ansible
You can also build an RPM yourself. From the root of a checkout or tarball, use the make rpm command to build an
RPM you can distribute and install. Make sure you have rpm-build, make, and python2-devel installed.
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ~/rpmbuild/ansible-*.noarch.rpm
Debian/Ubuntu packages can also be built from the source checkout; run:
$ make deb
You may also wish to run from source to get the latest, which is covered above.
Latest Releases Via Pip

Ansible can be installed via "pip", the Python package manager. If 'pip' isn't already available in your version of
Python, you can get pip by:

$ sudo easy_install pip

Then install Ansible with:

$ sudo pip install ansible
If you are installing on OS X Mavericks, you may encounter some noise from your compiler. A workaround is to do
the following:
$ sudo CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install ansible
Readers that use virtualenv can also install Ansible under virtualenv, though we'd recommend not worrying about it
and just installing Ansible globally. Do not use easy_install to install ansible directly.
Tarballs of Tagged Releases

Packaging Ansible, or wanting to build a local package yourself but not wanting to do a git checkout? Tarballs of
releases are available on the Ansible downloads page.
These releases are also tagged in the git repository with the release version.
See also:
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.2 Getting Started

Topics
• Getting Started
– Foreword
– Remote Connection Information
– Your first commands
– Host Key Checking
Foreword
Now that you’ve read Installation and installed Ansible, it’s time to dig in and get started with some commands.
What we are showing first are not the powerful configuration/deployment/orchestration of Ansible, called playbooks.
Playbooks are covered in a separate section.
This section is about how to get going initially. Once you have these concepts down, read Introduction To Ad-Hoc
Commands for some more detail, and then you’ll be ready to dive into playbooks and explore the most interesting
parts!
Remote Connection Information

Before we get started, it's important to understand how Ansible communicates with remote machines over SSH.
By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This
enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as Jump Host setup.
When using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives
such as CentOS), however, the version of OpenSSH may be too old to support ControlPersist. On these operating
systems, Ansible will fall back to using a high-quality Python implementation of OpenSSH called 'paramiko'. If
you wish to use features like Kerberized SSH and more, consider using Fedora, OS X, or Ubuntu as your control
machine until a newer version of OpenSSH is available for your platform – or engage ‘accelerated mode’ in Ansible.
See Accelerated Mode.
In Ansible 1.2 and before, the default was strictly paramiko and native SSH had to be explicitly selected with -c ssh or
set in the configuration file.
Occasionally you’ll encounter a device that doesn’t do SFTP. This is rare, but if talking with some remote devices that
don’t support SFTP, you can switch to SCP mode in The Ansible Configuration File.
When speaking with remote machines, Ansible will by default assume you are using SSH keys – which we encourage
– but passwords are fine too. To enable password auth, supply the option --ask-pass where needed. If using sudo
features and when sudo requires a password, also supply --ask-sudo-pass as appropriate.
While it may be common sense, it is worth sharing: any management system benefits from being run near the
machines being managed. If running in a cloud, consider running Ansible from a machine inside that cloud. It will work
better than on the open internet in most cases.
As an advanced topic, Ansible doesn't just have to connect remotely over SSH. The transports are pluggable, and there
are options for managing things locally, as well as managing chroot, lxc, and jail containers. A mode called
'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration
directives from a central repository.
Your first commands

Now that you've installed Ansible, it's time to get started with some basics.
Edit (or create) /etc/ansible/hosts and put one or more remote systems in it, for which you have your SSH key in
authorized_keys:
192.168.1.50
aserver.example.org
bserver.example.org
This is an inventory file, which is also explained in greater depth here: Inventory.
We’ll assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can
do:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
(Depending on your setup, you may wish to use Ansible’s --private-key option to specify a pem file instead)
Now ping all your nodes:
$ ansible all -m ping
Ansible will attempt to connect to the remote machines using your current user name, just like SSH would. To override
the remote user name, just use the '-u' parameter.
If you would like to access sudo mode, there are also flags to do that:
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root
$ ansible all -m ping -u bruce --sudo
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman
(The sudo implementation is changeable in Ansible's configuration file if you happen to want to use a sudo
replacement. Flags passed to sudo (like -H) can also be set there.)
Now run a live command on all of your nodes:
$ ansible all -a "/bin/echo hello"
Congratulations. You’ve just contacted your nodes with Ansible. It’s soon going to be time to read some of the
more real-world Introduction To Ad-Hoc Commands, and explore what you can do with different modules, as well
as the Ansible Playbooks language. Ansible is not just about running commands, it also has powerful configuration
management and deployment features. There’s more to explore, but you already have a fully working infrastructure!
Host Key Checking

Ansible 1.2.1 and later have host key checking enabled by default.
If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If
a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which makes for an
interactive experience if using Ansible from, say, cron. You might not want this.
If you wish to disable this behavior and understand the implications, you can do so by editing /etc/ansible/ansible.cfg
or ~/.ansible.cfg:
[defaults]
host_key_checking = False
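If you'd rather not edit a configuration file, the same behavior can be toggled for a single session with an environment variable (shown here for a bash-like shell):

$ export ANSIBLE_HOST_KEY_CHECKING=False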
Also note that host key checking in paramiko mode is reasonably slow, so switching to 'ssh' is also recommended
when using this feature.

Ansible will log some information about module arguments on the remote system in the remote syslog. To enable
basic logging on the control machine see The Ansible Configuration File document and set the 'log_path' configuration
file setting. Enterprise users may also be interested in Ansible Tower. Tower provides a very robust database logging
feature where it is possible to drill down and see history based on hosts, projects, and particular inventories over time,
explorable both graphically and through a REST API.
See also:
Inventory All about static inventory files
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible's configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.3 Inventory
Topics
• Inventory
– Hosts and Groups
– Host Variables
– Group Variables
– Groups of Groups, and Group Variables
– Splitting Out Host and Group Specific Data
– List of Behavioral Inventory Parameters
Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of
systems listed in Ansible’s inventory file, which defaults to being saved in the location /etc/ansible/hosts.
Not only is this inventory configurable, but you can also use multiple inventory files at the same time (explained below)
and also pull inventory from dynamic or cloud sources, as described in Dynamic Inventory.
Hosts and Groups

The format for /etc/ansible/hosts is an INI format and looks like this:
mail.example.com
[webservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
The things in brackets are group names, which are used in classifying systems and deciding what systems you are
controlling at what times and for what purpose.
It is ok to put systems in more than one group; for instance, a server could be both a webserver and a dbserver. If you
do, note that variables will come from all of the groups they are a member of. Variable precedence is detailed in a
later chapter.
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon.
Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh
connection.
To make things explicit, it is suggested that you set them if things are not running on the default port:
badwolf.example.com:5309
Suppose you have just static IPs and want to set up some aliases that don’t live in your host file, or you are connecting
through tunnels. You can do things like this:
jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50
In the above example, trying to ansible against the host alias “jumper” (which may not even be a real hostname)
will contact 192.168.1.50 on port 5555. Note that this is using a feature of the inventory file to define some special
variables. Generally speaking this is not the best way to define variables that describe your system policy, but we’ll
share suggestions on doing this later. We’re just getting started.
Adding a lot of hosts? If you have a lot of hosts following similar patterns you can do this rather than listing each
hostname:
[webservers]
www[01:50].example.com
For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define
alphabetic ranges:
[databases]
db-[a:f].example.com
You can also select the connection type and user on a per host basis:
[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan
As mentioned above, setting these in the inventory file is only a shorthand, and we’ll discuss how to store them in
individual files in the ‘host_vars’ directory a bit later on.
Host Variables
As alluded to above, it is easy to assign variables to hosts that will be used later in playbooks:
[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909
Group Variables

Variables can also be applied to an entire group at once:
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
Groups of Groups, and Group Variables

It is also possible to make groups of groups and assign variables to groups. These variables can be used by
/usr/bin/ansible-playbook, but not /usr/bin/ansible:
[atlanta]
host1
host2
[raleigh]
host2
host3
[southeast:children]
atlanta
raleigh
[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2
[usa:children]
southeast
northeast
southwest
northwest
If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory
file, see the next section.
Splitting Out Host and Group Specific Data

The preferred practice in Ansible is actually not to store variables in the main inventory file.
In addition to storing variables directly in the INI file, host and group variables can be stored in individual files relative
to the inventory file.
These variable files are in YAML format. See YAML Syntax if you are new to YAML.
Assuming the inventory file path is:
/etc/ansible/hosts
If the host is named ‘foosball’, and in groups ‘raleigh’ and ‘webservers’, variables in YAML files at the following
locations will be made available to the host:
/etc/ansible/group_vars/raleigh
/etc/ansible/group_vars/webservers
/etc/ansible/host_vars/foosball
For instance, suppose you have hosts grouped by datacenter, and each datacenter uses some different servers. The data
in the groupfile ‘/etc/ansible/group_vars/raleigh’ for the ‘raleigh’ group might look like:
---
ntp_server: acme.example.org
database_server: storage.example.org
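Host-specific data works the same way; for the host above, a YAML file at /etc/ansible/host_vars/foosball (contents purely illustrative) might contain:

---
ntp_server: boulder.example.org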
List of Behavioral Inventory Parameters

As alluded to above, setting the following variables controls how Ansible interacts with remote hosts. Some we have
already mentioned:
ansible_ssh_host
The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port
The ssh port number, if not 22
ansible_ssh_user
The default ssh user name to use.
ansible_ssh_pass
The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)
ansible_sudo_pass
The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass)
ansible_connection
Connection type of the host. Candidates are local, ssh, or paramiko. The default is paramiko before Ansible 1.2, and 'smart' afterwards, which detects whether use of 'ssh' is feasible based on ControlPersist support.
ansible_ssh_private_key_file
Private key file used by ssh. Useful if using multiple keys and you don’t want to use SSH agent.
ansible_shell_type
The shell type of the target system. By default, commands are formatted using 'sh'-style syntax. Setting this to 'csh' or 'fish' will cause commands executed on target systems to follow that shell's syntax instead.
ansible_python_interpreter
The target host Python path. This is useful for systems with more than one Python, systems where Python
is not located at "/usr/bin/python" (such as *BSD), or systems where /usr/bin/python is not a 2.X series
Python. We do not use the "/usr/bin/env" mechanism because it requires the remote user's path to be set
right and also assumes the Python executable is named "python", whereas the executable might be named
something like "python26".
ansible_*_interpreter
Works for anything such as ruby or perl and works just like ansible_python_interpreter.
This replaces the shebang of modules which will run on that host.
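Putting a few of these together, a single inventory entry for a host reached on a non-standard port with a dedicated key might look like this (the host name and values are purely illustrative):

appserver ansible_ssh_host=10.0.0.5 ansible_ssh_port=2222 ansible_ssh_user=deploy ansible_ssh_private_key_file=/home/deploy/.ssh/deploy.pem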
See also:
Dynamic Inventory Pulling inventory from dynamic sources, such as cloud providers
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.4 Dynamic Inventory

Topics
• Dynamic Inventory
– Example: The Cobbler External Inventory Script
– Example: AWS EC2 External Inventory Script
– Other inventory scripts
– Using Multiple Inventory Sources
Often a user of a configuration management system will want to keep inventory in a different software system. Ansible
provides a basic text-based system as described in Inventory but what if you want to use something else?
Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey
CMDB software.
Ansible easily supports all of these options via an external inventory system. The plugins directory contains some of
these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack, examples of some of which
will be detailed below.
Ansible Tower also provides a database to store inventory results that is both web- and REST-accessible. Tower syncs
with all Ansible dynamic inventory sources you might be using, and also includes a graphical inventory editor. By
having a database record of all of your hosts, it’s easy to correlate past event history and see which ones have had
failures on their last playbook runs.
For information about writing your own dynamic inventory source, see Developing Dynamic Inventory Sources.
Example: The Cobbler External Inventory Script

It is expected that many Ansible users with a reasonable amount of physical hardware may also be Cobbler users.
(Note: Cobbler was originally written by Michael DeHaan and is now led by James Cammarata, who also works for
Ansible, Inc.)
While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that allows
it to represent data for multiple configuration management systems (even at the same time), and has been referred to
as a 'lightweight CMDB' by some admins.
To tie Ansible’s inventory to Cobbler (optional), copy this script to /etc/ansible and chmod +x the file. cobblerd will
now need to be running when you are using Ansible and you’ll need to use Ansible’s -i command line option (e.g. -i
/etc/ansible/cobbler.py). This particular script will communicate with Cobbler using Cobbler’s XMLRPC
API.
First test the script by running /etc/ansible/cobbler.py directly. You should see some JSON data output,
but it may not have anything in it just yet.
Let’s explore what this does. In cobbler, assume a scenario somewhat like the following:
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
In the example above, the system 'foo.example.com' will be addressable by ansible directly, but will also be
addressable when using the group names 'webserver' or 'atlanta'. Since Ansible uses SSH, we'll try to contact system foo
over 'foo.example.com', only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but
"ansible 'foo*'" would, because the system DNS name starts with 'foo'.
The script doesn’t just provide host and group info. In addition, as a bonus, when the ‘setup’ module is run (which
happens automatically when using playbooks), the variables ‘a’, ‘b’, and ‘c’ will all be auto-populated in the templates:
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
Note: The name ‘webserver’ came from cobbler, as did the variables for the config file. You can still pass in your
own variables like normal in Ansible, but variables from the external inventory script will override any that have the
same name.
So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system
‘foo’:
Welcome, I am templated with a value of a=2, b=3, and c=4
And technically, though there is no major good reason to do it, this also works too:
ansible webserver -m shell -a "echo {{ a }}"
Example: AWS EC2 External Inventory Script

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts
may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For
this reason, you can use the EC2 external inventory script.
You can use this script in one of two ways. The easiest is to use Ansible’s -i command line option and specify the
path to the script after marking it executable:
ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini
file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.
To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are
a variety of methods available, but the simplest is just to export two environment variables:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
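Alternatively, Boto can read credentials from its own configuration file, such as ~/.boto (a standard Boto mechanism, not specific to this inventory script; the values below are placeholders):

[Credentials]
aws_access_key_id = AK123
aws_secret_access_key = abc123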
You can test the script by itself to make sure your config is correct:
cd plugins/inventory
./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ec2.ini
and list only the regions you are interested in. There are other config options in ec2.ini including cache control,
and destination variables.
At their heart, inventory files are simply a mapping from some name to a destination address. The default ec2.ini
settings are configured for running Ansible from outside EC2 (from your laptop for example) – and this is not the most
efficient way to manage EC2.
If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public
DNS names. In this case, you can modify the destination_variable in ec2.ini to be the private DNS name
of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the
only way to access an instance is via its private IP address. For VPC instances, vpc_destination_variable in ec2.ini
provides a means of using whichever boto.ec2.instance variable makes the most sense for your use case.
The EC2 external inventory provides mappings to instances from several groups:
Global All instances are in group ec2.
Instance ID These are groups of one since instance IDs are unique. e.g. i-00112233 i-a1b1c1d1
Region A group of all instances in an AWS region. e.g. us-east-1 us-west-2
Availability Zone A group of all instances in an availability zone. e.g. us-east-1a us-east-1b
Security Group Instances belong to one or more security groups. A group is created for each security group,
with all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is
prefixed by security_group_ e.g. security_group_default security_group_webservers
security_group_Pete_s_Fancy_Group
Tags Each instance can have a variety of key/value pairs associated with it called Tags. The most common
tag key is ‘Name’, though anything is possible. Each key/value pair is its own group of instances, again
with special characters converted to underscores, in the format tag_KEY_VALUE e.g. tag_Name_Web
tag_Name_redis-master-001 tag_aws_cloudformation_logical-id_WebServerGroup
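These generated groups can be used as patterns like any other group. For example, assuming instances tagged with Name=Web exist, pinging just those instances would look like:

$ ansible -i ec2.py tag_Name_Web -m ping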
When Ansible is interacting with a specific server, the EC2 inventory script is called again with the --host
HOST option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS
to get information about that specific instance. It then makes information about that instance available as variables to
your playbooks. Each variable is prefixed by ec2_. Here are some of the variables available:
• ec2_architecture
• ec2_description
• ec2_dns_name
• ec2_id
• ec2_image_id
• ec2_instance_type
• ec2_ip_address
• ec2_kernel
• ec2_key_name
• ec2_launch_time
• ec2_monitored
• ec2_ownerId
• ec2_placement
• ec2_platform
• ec2_previous_state
• ec2_private_dns_name
• ec2_private_ip_address
• ec2_public_dns_name
• ec2_ramdisk
• ec2_region
• ec2_root_device_name
• ec2_root_device_type
• ec2_security_group_ids
• ec2_security_group_names
• ec2_spot_instance_request_id
• ec2_state
• ec2_state_code
• ec2_state_reason
• ec2_status
• ec2_subnet_id
• ec2_tag_Name
• ec2_tenancy
• ec2_virtualization_type
• ec2_vpc_id
Both ec2_security_group_ids and ec2_security_group_names are comma-separated lists of all secu-
rity groups. Each EC2 tag is a variable in the format ec2_tag_KEY.
To see the complete list of variables available for an instance, run the script by itself:
cd plugins/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable
in ec2.ini. To explicitly clear the cache, you can run the ec2.py script with the --refresh-cache parameter.
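For example, from the directory containing the script:

./ec2.py --refresh-cache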
Other inventory scripts

In addition to Cobbler and EC2, inventory scripts are also available for:
BSD Jails
Digital Ocean
Google Compute Engine
Linode
OpenShift
OpenStack Nova
Red Hat’s SpaceWalk
Vagrant (not to be confused with the provisioner in vagrant, which is preferred)
Zabbix
Sections on how to use these in more detail will be added over time, but by looking at the “plugins/” directory of the
Ansible checkout it should be very obvious how to use them. The process for the AWS inventory script is the same.
If you develop an interesting inventory script that might be general purpose, please submit a pull request – we’d likely
be glad to include it in the project.
Using Multiple Inventory Sources

If the location given to -i in Ansible is a directory (or as so configured in ansible.cfg), Ansible can use multiple
inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory
sources in the same ansible run. Instant hybrid cloud!
See also:
Inventory All about static inventory files
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.5 Patterns
Topics
• Patterns
Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in
terms of Playbooks it actually means what hosts to apply a particular configuration or IT process to.
We’ll go over how to use the command line in Introduction To Ad-Hoc Commands section, however, basically it looks
like this:
ansible <pattern_goes_here> -m <module_name> -a <arguments>
Such as:
ansible webservers -m service -a "name=httpd state=restarted"
A pattern usually refers to a set of groups (which are sets of hosts) – in the above case, machines in the “webservers”
group.
Anyway, to use Ansible, you’ll first need to know how to tell Ansible which hosts in your inventory to talk to. This is
done by designating particular host names or groups of hosts.
The following patterns are equivalent and target all hosts in the inventory:
all
*
The following patterns address one or more groups. Groups separated by a colon indicate an “OR” configuration. This
means the host may be in either one group or the other:
webservers
webservers:dbservers
You can exclude groups as well, for instance, all machines must be in the group webservers but not in the group
phoenix:
webservers:!phoenix
You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and
the host must also be in the group staging:
webservers:&staging
You can do combinations:

webservers:dbservers:&staging:!phoenix

The above configuration means "all machines in the groups 'webservers' and 'dbservers' are to be managed if they are
in the group 'staging' also, but the machines are not to be managed if they are in the group 'phoenix' ... whew!"
You can also use variables if you want to pass some group specifiers via the “-e” argument to ansible-playbook, but
this is uncommonly used:
webservers:!{{excluded}}:&{{required}}
You also don't have to manage by strictly defined groups. Individual host names, IPs, and groups can also be referenced
using wildcards:
*.example.com
*.com
It’s also ok to mix wildcard patterns and groups at the same time:
one*.com:dbservers
Most people don’t specify patterns as regular expressions, but you can. Just start the pattern with a ‘~’:
~(web|db).*\.example\.com
While we’re jumping a bit ahead, additionally, you can add an exclusion criteria just by supplying the --limit flag
to /usr/bin/ansible or /usr/bin/ansible-playbook:
ansible-playbook site.yml --limit datacenter2
Easy enough. See Introduction To Ad-Hoc Commands and then Playbooks for how to apply this knowledge.
See also:
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.6 Introduction To Ad-Hoc Commands

Topics
• Introduction To Ad-Hoc Commands
– Parallelism and Shell Commands
– File Transfer
– Managing Packages
– Users and Groups
– Deploying From Source Control
– Managing Services
– Time Limited Background Operations
– Gathering Facts
The following examples show how to use /usr/bin/ansible for running ad hoc tasks.
What’s an ad-hoc command?
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language
– ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook
for.
Generally speaking, the true power of Ansible lies in playbooks. Why would you use ad-hoc tasks versus playbooks?
For instance, if you wanted to power off all of your lab for Christmas vacation, you could execute a quick one-liner in
Ansible without writing a playbook.
For configuration management and deployments, though, you’ll want to pick up on using ‘/usr/bin/ansible-playbook’
– the concepts you will learn here will port over directly to the playbook language.
(See Playbooks for more information about those)
If you haven’t read Inventory already, please look that over a bit first and then we’ll get going.
Parallelism and Shell Commands

Here is an arbitrary example: let's use Ansible's command line tool to reboot all web servers in Atlanta, 10 at a time.
First, let's set up SSH-agent so it can remember our credentials:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
If you don't want to use ssh-agent and want to SSH with a password instead of keys, you can with
--ask-pass (-k), but it's much better to just use ssh-agent.
Now to run the command on all servers in a group, in this case, atlanta, in 10 parallel forks:
$ ansible atlanta -a "/sbin/reboot" -f 10
/usr/bin/ansible will default to running from your user account. If you do not like this behavior, pass in “-u username”.
If you want to run commands as a different user, it looks like this:
$ ansible atlanta -a "/usr/bin/foo" -u username
Often you'll not want to just do things from your user account. If you want to run commands through sudo:

$ ansible atlanta -a "/usr/bin/foo" -u username --sudo [--ask-sudo-pass]
Use --ask-sudo-pass (-K) if you are not using passwordless sudo. This will interactively prompt you for the
password to use. Use of passwordless sudo makes things easier to automate, but it’s not required.
It is also possible to sudo to a user other than root using --sudo-user (-U):
$ ansible atlanta -a "/usr/bin/foo" -u username -U otheruser [--ask-sudo-pass]
Note: Rarely, some users have security rules where they constrain their sudo environment to running specific
command paths only. This does not work with Ansible's no-bootstrapping philosophy and hundreds of different modules. If
doing this, use Ansible from a special account that does not have this constraint. One way of doing this without sharing
access to unauthorized users would be gating Ansible with Ansible Tower, which can hold on to an SSH credential and
let members of certain organizations use it on their behalf without having direct access.
Ok, so those are basics. If you didn’t read about patterns and groups yet, go back and read Patterns.
The -f 10 in the above specifies the use of 10 simultaneous processes. You can also set this in The Ansible
Configuration File to avoid setting it again. The default is actually 5, which is really small and conservative. You are
probably going to want to talk to a lot more simultaneous hosts so feel free to crank this up. If you have more hosts
than the value set for the fork count, Ansible will talk to them, but it will take a little longer. Feel free to push this
value as high as your system can handle!
You can also select what Ansible "module" you want to run. Normally commands also take a -m for module name,
but the default module name is 'command', so we didn't need to specify that all of the time. We'll use -m in later
examples to run some other modules (see About Modules).
Note: The command module does not support shell variables and things like piping. If we want to execute a module
using a shell, use the 'shell' module instead. Read more about the differences on the About Modules page.

Using the shell module looks like this:
$ ansible raleigh -m shell -a 'echo $TERM'
When running any command with the Ansible ad hoc CLI (as opposed to Playbooks), pay particular attention to shell
quoting rules, so the local shell doesn’t eat a variable before it gets passed to Ansible. For example, using double vs
single quotes in the above example would evaluate the variable on the box you were on.
So far we’ve been demoing simple command execution, but most Ansible modules usually do not work like simple
scripts. They make the remote system look like you state, and run the commands necessary to get it there. This is
commonly referred to as ‘idempotence’, and is a core design goal of Ansible. However, we also recognize that running
arbitrary commands is equally important, so Ansible easily supports both.
File Transfer
Here’s another use case for the /usr/bin/ansible command line. Ansible can SCP lots of files to multiple machines in
parallel.
To transfer a file directly to many servers:
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
If you use playbooks, you can also take advantage of the template module, which takes this another step further.
(See module and playbook documentation).
The file module allows changing ownership and permissions on files. These same options can be passed directly to
the copy module as well:
$ ansible webservers -m file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
The file module can also create directories, similar to mkdir -p:
$ ansible webservers -m file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
Managing Packages
There are modules available for yum and apt. Here are some examples with yum.
Ensure a package is installed, but don’t update it:
$ ansible webservers -m yum -a "name=acme state=installed"
Ansible has modules for managing packages under many platforms. If your package manager does not have a module
available for it, you can install packages using the command module or (better!) contribute a module for
other package managers. Stop by the mailing list for info/details.
Users and Groups

The 'user' module allows easy creation and manipulation of existing user accounts, as well as removal of user accounts
that may exist:
$ ansible all -m user -a "name=foo password=<crypted password here>"
See the About Modules section for details on all of the available options, including how to manipulate groups and
group membership.
Deploying From Source Control

Since Ansible modules can notify change handlers, it is possible to tell Ansible to run specific tasks when the code is
updated, such as deploying Perl/Python/PHP/Ruby directly from git and then restarting apache.
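For example, a deploy from a hypothetical repository URL to a hypothetical destination path might look like this:

$ ansible webservers -m git -a "repo=git://foo.example.org/repo.git dest=/srv/myapp version=HEAD"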
Managing Services

Ensure a service is started on all webservers:

$ ansible webservers -m service -a "name=httpd state=started"

Alternatively, restart a service on all webservers:

$ ansible webservers -m service -a "name=httpd state=restarted"

Ensure a service is stopped:

$ ansible webservers -m service -a "name=httpd state=stopped"

Time Limited Background Operations
Long running operations can be backgrounded, and their status can be checked on later. The same job ID is given to
the same task on all hosts, so you won’t lose track. If you kick hosts and don’t want to poll, it looks like this:
$ ansible all -B 3600 -a "/usr/bin/long_running_operation --do-stuff"
If you do decide you want to check on the job status later, you can:
$ ansible all -m async_status -a "jid=123456789"
Polling is built-in and looks like this:

$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"

The above example says "run for 30 minutes max (-B: 30*60=1800), poll for status (-P) every 60 seconds".
Poll mode is smart so all jobs will be started before polling will begin on any machine. Be sure to use a high enough
--forks value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (-B),
the process on the remote nodes will be terminated.
Typically you'll only be backgrounding long-running shell commands or software upgrades. Backgrounding the
copy module does not do a background file transfer. Playbooks also support polling, and have a simplified syntax for
this.
Gathering Facts
Facts are described in the playbooks section and represent discovered variables about a system. These can be used to
implement conditional execution of tasks but also just to get ad-hoc information about your system. You can see all
facts via:
$ ansible all -m setup
It's also possible to filter this output to just export certain facts; see the "setup" module documentation for details.
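For example, to show only facts whose names match a pattern (the pattern here is illustrative):

$ ansible all -m setup -a "filter=ansible_eth*"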
Read more about facts at Variables once you’re ready to read up on Playbooks.
See also:
The Ansible Configuration File All about the Ansible config file
About Modules A list of available modules
Playbooks Using Ansible for configuration management & deployment
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
1.1.7 The Ansible Configuration File

Topics
• The Ansible Configuration File
– Getting the latest configuration
– Environmental configuration
– Explanation of values by section
* General defaults
· action_plugins
· ansible_managed
· ask_pass
· ask_sudo_pass
· callback_plugins
· connection_plugins
· deprecation_warnings
· display_skipped_hosts
· error_on_undefined_vars
· executable
· filter_plugins
· forks
· gathering
· hash_behaviour
· hostfile
· host_key_checking
· jinja2_extensions
· legacy_playbook_variables
· library
· log_path
· lookup_plugins
· module_lang
· module_name
· nocolor
· nocows
· pattern
· poll_interval
· private_key_file
· remote_port
· remote_tmp
· remote_user
· roles_path
· sudo_exe
· sudo_flags
· sudo_user
· system_warnings
· timeout
· transport
· vars_plugins
* Paramiko Specific Settings
· record_host_keys
* OpenSSH Specific Settings
· ssh_args
· control_path
· scp_if_ssh
· pipelining
* Accelerate Mode Settings
· accelerate_port
· accelerate_timeout
· accelerate_connect_timeout
· accelerate_daemon_timeout
· accelerate_multi_key
Certain settings in Ansible are adjustable via a configuration file. The stock configuration should be sufficient for most
users, but there may be reasons you would want to change them.
Changes can be made and used in a configuration file which will be processed in the following order:
* ANSIBLE_CONFIG (an environment variable)
* ansible.cfg (in the current directory)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg
Ansible will process the above list and use the first file found. Settings in files are not merged.
Getting the latest configuration

If installing Ansible from a package manager, the latest ansible.cfg should be present in /etc/ansible, possibly as a
".rpmnew" file (or other) as appropriate in the case of updates.
If you have installed from pip or from source, however, you may want to create this file in order to override default
settings in Ansible.
You may wish to consult the ansible.cfg in source control for all of the possible latest values.
Environmental configuration
Ansible also allows configuration of settings via environment variables. If these environment variables are set, they
will override any setting loaded from the configuration file. These variables are for brevity not defined here, but look
in ‘constants.py’ in the source tree if you want to use these. They are mostly considered to be a legacy system as
compared to the config file, but are equally valid.
Explanation of values by section

The configuration file is broken up into sections. Most options are in the "general" section but some sections of the
file are specific to certain connection types.
General defaults
action_plugins Actions are pieces of code in ansible that enable things like module execution, templating, and so
forth.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different
locations:
action_plugins = /usr/share/ansible_plugins/action_plugins
Most users will not need to use this feature. See Developing Plugins for more details.
ansible_managed Ansible-managed is a string that can be inserted into files written by Ansible’s config templating
system, if you use a string like:
{{ ansible_managed }}
This is useful to tell users that a file has been placed by Ansible and manual changes are likely to be overwritten.
Note that if using this feature, and there is a date in the string, the template will be reported changed each time as the
date is updated.
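For reference, the stock string resembles the following; check the ansible.cfg shipped with your version for the exact default:

ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}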
ask_pass This controls whether an Ansible playbook should prompt for a password by default. The default behavior
is no:
#ask_pass=True
If using SSH keys for authentication, it’s probably not needed to change this setting.
ask_sudo_pass Similar to ask_pass, this controls whether an Ansible playbook should prompt for a sudo password
by default when sudoing. The default behavior is also no:
#ask_sudo_pass=True
Users on platforms where sudo passwords are enabled should consider changing this setting.
callback_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
callback_plugins = /usr/share/ansible_plugins/callback_plugins
Most users will not need to use this feature. See Developing Plugins for more details
connection_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
connection_plugins = /usr/share/ansible_plugins/connection_plugins
Most users will not need to use this feature. See Developing Plugins for more details
deprecation_warnings Allows disabling of deprecation warnings in ansible-playbook output:

deprecation_warnings = True

Deprecation warnings indicate usage of legacy features that are slated for removal in a future release of Ansible.
display_skipped_hosts If set to False, ansible will not display any status for a task that is skipped. The default
behavior is to display skipped tasks:
#display_skipped_hosts=True
Note that Ansible will always show the task header for any task, regardless of whether or not the task is skipped.
error_on_undefined_vars On by default since Ansible 1.3, this causes ansible to fail steps that reference variable
names that are likely typoed:
#error_on_undefined_vars=True
If set to False, any ‘{{ template_expression }}’ that contains undefined variables will be rendered in a template or
ansible action line exactly as written.
executable This indicates the command to use to spawn a shell under a sudo environment. Users may need to change
this to /bin/bash in rare instances when sudo is constrained, but in most cases it may be left as is:
#executable = /bin/bash
filter_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations:
filter_plugins = /usr/share/ansible_plugins/filter_plugins
Most users will not need to use this feature. See Developing Plugins for more details
forks This is the default number of parallel processes to spawn when communicating with remote hosts. Since
Ansible 1.3, the fork number is automatically limited to the number of possible hosts, so this is really a limit of how
much network and CPU load you think you can handle. Many users may set this to 50, some set it to 500 or more.
If you have a large number of hosts, higher values will make actions across all of those hosts complete faster. The
default is very very conservative:
forks=5
gathering New in 1.6, the ‘gathering’ setting controls the default policy of facts gathering (variables discovered
about remote systems).
The value ‘implicit’ is the default, meaning facts will be gathered per play unless ‘gather_facts: False’ is set in the
play. The value ‘explicit’ is the inverse, facts will not be gathered unless directly requested in the play.
The value ‘smart’ means each new host that has no facts discovered will be scanned, but if the same host is addressed
in multiple plays it will not be contacted again in the playbook run. This option can be useful for those wishing to save
fact gathering time.
hash_behaviour Ansible by default will override variables in specific precedence orders, as described in Variables.
When a variable of higher precedence wins, it will replace the other value.
Some users prefer that variables that are hashes (aka ‘dictionaries’ in Python terms) are merged. This setting is called
‘merge’. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings)
or arrays. We generally recommend not using this setting unless you think you have an absolute need for it, and
playbooks in the official examples repos do not use this setting:
#hash_behaviour=replace
hostfile This is the default location of the inventory file, script, or directory that Ansible will use to determine what
hosts it has available to talk to:
hostfile = /etc/ansible/hosts
host_key_checking As described in Getting Started, host key checking is on by default in Ansible 1.3 and later. If
you understand the implications and wish to disable it, you may do so here by setting the value to False:
host_key_checking=False
jinja2_extensions This is a developer-specific feature that allows enabling additional Jinja2 extensions:
jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
If you do not know what these do, you probably don’t need to change this setting :)
legacy_playbook_variables Ansible prefers to use Jinja2 syntax ‘{{ like_this }}’ to indicate a variable should be
substituted in a particular string. However, older versions of playbooks used a more Perl-style syntax. This syntax was
undesirable as it frequently conflicted with bash and was hard to explain to new users when referencing complicated
variable hierarchies, so we have standardized on the ‘{{ jinja2 }}’ way.
To ensure a string like ‘$foo’ is not inadvertently replaced in a Perl or Bash script template, the old form of templating
(which is still enabled as of Ansible 1.4) can be disabled like so:
legacy_playbook_variables = no
library This is the default location Ansible looks to find modules:

library = /usr/share/ansible

Ansible knows how to look in multiple locations if you feed it a colon separated path, and it also will look for modules
in the "./library" directory alongside a playbook.
log_path If present and configured in ansible.cfg, Ansible will log information about executions at the designated
location. Be sure the user running Ansible has permissions on the logfile:
log_path=/var/log/ansible.log
This behavior is not on by default. Note that ansible will, without this setting, still record module arguments in the
syslog of managed machines. Password arguments are excluded.
For Enterprise users seeking more detailed logging history, you may be interested in Ansible Tower.
lookup_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
Most users will not need to use this feature. See Developing Plugins for more details
module_lang This is to set the default language to communicate between the module and the system. By default,
the value is ‘C’.
module_name This is the default module name (-m) value for /usr/bin/ansible. The default is the 'command'
module. Remember the command module doesn't support shell variables, pipes, or quotes, so you might wish to change it
to 'shell':
module_name = command
nocolor By default ansible will try to colorize output to give a better indication of failure and status information. If
you dislike this behavior you can turn it off by setting ‘nocolor’ to 1:
nocolor=0
nocows By default ansible will take advantage of cowsay if installed to make /usr/bin/ansible-playbook runs more
exciting. Why? We believe systems management should be a happy experience. If you do not like the cows, you can
disable them by setting ‘nocows’ to 1:
nocows=0
pattern This is the default group of hosts to talk to in a playbook if no “hosts:” stanza is supplied. The default is to
talk to all hosts. You may wish to change this to protect yourself from surprises:
pattern=*
Note that /usr/bin/ansible always requires a host pattern and does not use this setting, only /usr/bin/ansible-playbook.
poll_interval For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often
to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably
moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when
something may have completed:
poll_interval=15
private_key_file If you are using a pem file to authenticate with machines rather than SSH agent or passwords, you
can set the default value here to avoid re-specifying --private-key with every invocation:
private_key_file=/path/to/file.pem
remote_port This sets the default SSH port on all of your systems, for systems that didn’t specify an alternative
value in inventory. The default is the standard 22:
remote_port = 22
remote_tmp Ansible works by transferring modules to your remote machines, running them, and then cleaning up
after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do
so by altering this setting:
remote_tmp = $HOME/.ansible/tmp
The default is to use a subdirectory of the user’s home directory. Ansible will then choose a random directory name
inside this location.
remote_user This is the default username ansible will connect as for /usr/bin/ansible-playbook. Note that
/usr/bin/ansible will always default to the current user if this is not defined:
remote_user = root
roles_path The roles path indicates additional directories beyond the 'roles/' subdirectory of a playbook project to
search to find Ansible roles. For instance, if there was a source control repository of common roles and a different
repository of playbooks, you might choose to establish a convention to checkout roles in /opt/mysite/roles like so:
roles_path = /opt/mysite/roles
Additional paths can be provided separated by colon characters, in the same way as other pathstrings:
roles_path = /opt/mysite/roles:/opt/othersite/roles
Roles will first be searched for in the playbook directory. Should a role not be found, Ansible will indicate all the
possible paths that were searched.
sudo_exe If using an alternative sudo implementation on remote machines, the path to sudo can be replaced here
provided the sudo implementation is matching CLI flags with the standard sudo:
sudo_exe=sudo
sudo_flags Additional flags to pass to sudo when engaging sudo support. The default is '-H', which preserves the
environment of the original user. In some situations you may wish to add or remove flags, but in general most users
will not need to change this setting:
sudo_flags=-H
sudo_user This is the default user to sudo to if --sudo-user is not specified or ‘sudo_user’ is not specified in an
Ansible playbook. The default is the most logical: ‘root’:
sudo_user=root
system_warnings Allows disabling of warnings related to potential issues on the system running Ansible itself (not host-related):

system_warnings = True

These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
transport This is the default transport to use if “-c <transport_name>” is not specified to /usr/bin/ansible or
/usr/bin/ansible-playbook. The default is ‘smart’, which will use ‘ssh’ (OpenSSH based) if the local operating system
is new enough to support ControlPersist technology, and then will otherwise use ‘paramiko’. Other transport options
include ‘local’, ‘chroot’, ‘jail’, and so on.
Users should usually leave this setting as ‘smart’ and let their playbooks choose an alternate setting when needed with
the ‘connection:’ play parameter.
vars_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations:
vars_plugins = /usr/share/ansible_plugins/vars_plugins
Most users will not need to use this feature. See Developing Plugins for more details
Paramiko Specific Settings

Paramiko is the default SSH connection implementation on Enterprise Linux 6 or earlier, and is not used by default on
other platforms. Settings live under the [paramiko] header.
record_host_keys The default setting of yes will record newly discovered and approved (if host key checking is
enabled) hosts in the user’s known hosts file. This setting may be inefficient for large numbers of hosts, and in those
situations, using the ssh transport is definitely recommended instead. Setting it to False will improve performance and
is recommended when host key checking is disabled:
record_host_keys=True
Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default
connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating
systems except Enterprise Linux 6 or earlier).
ssh_args If set, this will pass a specific set of options to Ansible rather than Ansible’s usual defaults:
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
In particular, users may wish to raise the ControlPersist time to improve performance. A value of 30 minutes may
be appropriate.
control_path This is the location to save ControlPath sockets. This defaults to:
control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r
On some systems with very long hostnames or very long path names (caused by long user names or deeply nested
home directories) this can exceed the character limit on file socket names (108 characters for most platforms). In that
case, you may wish to shorten the string to something like the below:
control_path = %(directory)s/%%h-%%r
Ansible 1.4 and later will instruct users to run with “-vvvv” in situations where it hits this problem; with that level of
verbosity it is easy to tell when the ControlPath filename is too long. This may be frequently encountered on EC2.
scp_if_ssh Occasionally users may be managing a remote system that doesn’t have SFTP enabled. If set to True, we
can cause scp to be used to transfer remote files instead:
scp_if_ssh=False
There’s really no reason to change this unless problems are encountered, and in that case there’s no real drawback to
flipping the switch. Most environments support SFTP by default and this doesn’t usually need to be changed.
pipelining Enabling pipelining reduces the number of SSH operations required to execute a module on the remote
server by executing many Ansible modules without an actual file transfer. This can result in a very significant
performance improvement when enabled. However, when using “sudo:” operations you must first disable ‘requiretty’
in /etc/sudoers on all managed hosts.
By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default
on many distros), but is highly recommended if you can enable it, eliminating the need for Accelerated Mode:
pipelining=False
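To turn it on, an ansible.cfg sketch like the following would work (assuming ‘requiretty’ has already been disabled
in /etc/sudoers on the managed hosts):
[ssh_connection]
pipelining = True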
Under the [accelerate] header, the following settings are tunable for Accelerated Mode. Acceleration is a useful
performance feature to use if you cannot enable pipelining in your environment, but is probably not needed if you can.
accelerate_connect_timeout This setting controls the timeout for the socket connect call, and should be kept
relatively low. The default is 1.0 seconds:
accelerate_connect_timeout = 1.0
Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you’re on
a very fast and reliable LAN. If you’re connecting to systems over the internet, it may be necessary to increase this
timeout.
accelerate_daemon_timeout This setting controls the timeout for the accelerated daemon, as measured in minutes.
The default daemon timeout is 30 minutes:
accelerate_daemon_timeout = 30
Note, prior to 1.6, the timeout was hard-coded from the time of the daemon’s launch. For version 1.6+, the timeout is
now based on the last activity to the daemon and is configurable via this option.
accelerate_multi_key If enabled, this setting allows multiple private keys to be uploaded to the daemon. Any clients
connecting to the daemon must also enable this option:
accelerate_multi_key = yes
New clients first connect to the target node over SSH to upload the key, which is done via a local socket file, so they
must have the same access as the user that launched the daemon originally.
Topics
• Windows Support
– Windows: How Does It Work
– Installing on the Control Machine
– Inventory
– Windows System Prep
– Getting to Powershell 3.0 or higher
– What modules are available
– Developers: Supported modules and how it works
– Reminder: You Must Have a Linux Control Machine
– Windows Facts
– Windows Playbook Examples
– Windows Contributions
As you may have already read, Ansible manages Linux/Unix machines using SSH by default.
Starting in version 1.7, Ansible also contains support for managing Windows machines. This uses native powershell
remoting, rather than SSH.
Ansible will still be run from a Linux control machine, and uses the “winrm” Python module to talk to remote hosts.
No additional software needs to be installed on the remote machines for Ansible to manage them, it still maintains the
agentless properties that make it popular on Linux/Unix.
Note that it is expected you have a basic understanding of Ansible prior to jumping into this section, so if you haven’t
written a Linux playbook yet, it might be worthwhile to dig in there first.
Inventory
Ansible’s windows support relies on a few standard variables to indicate the username, password, and connection type
(windows) of the remote hosts. These variables are most easily set up in inventory, and are used instead of the SSH
keys or passwords normally fed into Ansible:
[windows]
winserver1.example.com
winserver2.example.com
In group_vars/windows.yml, define the following inventory variables. It is suggested that these be encrypted with
ansible-vault, for instance via “ansible-vault edit group_vars/windows.yml”:
ansible_ssh_user: Administrator
ansible_ssh_pass: SekritPasswordGoesHere
ansible_ssh_port: 5986
ansible_connection: winrm
Notice that ansible_ssh_port is not actually used for SSH; the variable name is a holdover from Ansible being a mostly
SSH-oriented system. Again, Windows management will not happen over SSH.
When running your playbook, don’t forget to specify --ask-vault-pass to provide the password to unlock the file.
Test your configuration like so, by trying to contact your Windows nodes. Note this is not an ICMP ping, but tests the
Ansible communication channel that leverages Windows remoting:
ansible windows [-i inventory] -m win_ping --ask-vault-pass
If you haven’t done anything to prep your systems yet, this won’t work. That is covered in a later section about
how to enable powershell remoting - and if necessary - how to upgrade powershell to a version that is 3 or higher.
You’ll run this command again later though, to make sure everything is working.
In order for Ansible to manage your windows machines, you will have to enable Powershell remoting first, which also
enables WinRM.
From the Windows host, launch the Powershell Client. For information on Powershell, visit Microsoft’s Using Pow-
ershell article.
In the powershell session, run the following to enable PS Remoting and set the execution policy:
$ Enable-PSRemoting -Force
$ Set-ExecutionPolicy RemoteSigned
If your Windows firewall is enabled, you must also run the following command to allow firewall access to the public
firewall profile:
# Windows 2012 / 2012R2
$ Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP-PUBLIC" -RemoteAddress Any
By default, Powershell remoting enables an HTTP listener. The following commands enable an HTTPS listener, which
secures communication between the Control Machine and windows.
An SSL certificate for server authentication is required to create the HTTPS listener. The presence of a suitable
certificate in the computer account can be verified by using the Certificates MMC snap-in.
A best practice for SSL certificates is generating them from an internal or external certificate authority; an existing
certificate can be located in the computer account certificate store using the MMC snap-in as well.
Alternatively, a self-signed SSL certificate can be generated in powershell (see Microsoft’s TechNet documentation).
At a minimum, the subject name should match the hostname, and Server Authentication is required. Once the
self-signed certificate is obtained, the certificate thumbprint can be identified using Microsoft’s “How to: Retrieve the
Thumbprint of a Certificate” article.
# Create the https listener
$ winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="host_name";CertificateThumbprint="certificate_thumbprint"}
Again, if your Windows firewall is enabled, run the following command to allow firewall access to the HTTPS listener:
# Windows 2008 / 2008R2 / 2012 / 2012R2
$ netsh advfirewall firewall add rule Profile=public name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
However, if you are still running Powershell 2.0 on remote systems, it’s time to use Ansible to upgrade powershell
before proceeding further, as some of the Ansible modules will require Powershell 3.0.
In the future, Ansible may provide a shortcut installer that automates these steps for prepping a Windows machine.
Powershell 3.0 or higher is needed for most provided Ansible modules for Windows.
From an Ansible checkout, copy the examples/scripts/upgrade_to_ps3.ps1 script onto the remote host, then run it from
a powershell console as an administrator. You will then be running Powershell 3 and can try connectivity again using
the win_ping technique referenced above.
Most of the Ansible modules in core Ansible are written for a combination of Linux/Unix machines and arbitrary web
services, though there are various Windows modules as listed in the “windows” subcategory of the Ansible module
index.
Browse this index to see what is available.
In many cases, it may not even be necessary to write or use an Ansible module.
In particular, the “script” module can be used to run arbitrary powershell scripts, allowing Windows administrators
familiar with powershell a very native way to do things, as in the following playbook:
- hosts: windows
  tasks:
    - script: foo.ps1 --argument --other-argument
Note there are a few other Ansible modules that don’t start with “win” that also function, including “slurp”, “raw”,
and “setup” (which is how fact gathering works).
Developing ansible modules is covered in a later section of the documentation, with a focus on Linux/Unix. What if
you want to write Windows modules for ansible though?
For Windows, ansible modules are implemented in Powershell. Skim those Linux/Unix module development chapters
before proceeding.
Windows modules live in a “windows/” subfolder in the Ansible “library/” subtree. For example, if a module is named
“library/windows/win_ping”, there will be embedded documentation in the “win_ping” file, and the actual powershell
code will live in a “win_ping.ps1” file. Take a look at the sources and this will make more sense.
Modules (ps1 files) should start as follows:
#!powershell
# <license>
# WANT_JSON
# POWERSHELL_COMMON
The above magic is necessary to tell Ansible to mix in some common code and also know how to push modules out.
The common code contains some nice wrappers around working with hash data structures and emitting JSON results,
and possibly a few more useful things. Regular Ansible has this same concept for reusing Python code - this is just
the windows equivalent.
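For illustration, a minimal module sketch might look like the following (this is not a core module; the Parse-Args and
Exit-Json helpers come from the common code mixed in above):
#!powershell
# <license>
# WANT_JSON
# POWERSHELL_COMMON

# parse the incoming module arguments (helper from the common code)
$params = Parse-Args $args;

# build a result object and emit it as JSON (helper from the common code)
$result = New-Object psobject @{
    changed = $false
};

Exit-Json $result;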
What modules you see in windows/ are just a start. Additional modules may be submitted as pull requests to github.
Note running Ansible from a Windows control machine is NOT a goal of the project. Refrain from asking for this
feature, as it limits what technologies, features, and code we can use in the main project in the future. A Linux control
machine will be required to manage Windows hosts.
Cygwin is not supported, so please do not ask questions about Ansible running from Cygwin.
Windows Facts
Just as with Linux/Unix, facts can be gathered for windows hosts, which will return things such as the operating system
version. To see what variables are available about a windows host, run the following:
ansible winhost.example.com -m setup
Note that this command invocation is exactly the same as the Linux/Unix equivalent.
Look to the list of windows modules for most of what is possible, though modules like “raw” and “script” also work
on Windows, as do “fetch” and “slurp”.
Here is an example of pushing and running a powershell script:
- name: test script module
  hosts: windows
  tasks:
    - name: run test script
      script: files/test_script.ps1
Running individual commands uses the ‘raw’ module, as opposed to the shell or command modules that are common
on Linux/Unix operating systems:
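# an illustrative sketch; any command could be substituted for ipconfig
- name: run ipconfig
  raw: ipconfig
  register: ipconfig

- debug: var=ipconfig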
And for a final example, here’s how to use the win_stat module to test for file existence. Note that the data returned
by the win_stat module is slightly different than what is provided by the Linux equivalent:
- name: test stat module
  hosts: windows
  tasks:
    - name: test stat module on file
      win_stat: path="C:/Windows/win.ini"
      register: stat_file

    - debug: var=stat_file
Again, recall that the Windows modules are all listed in the Windows category of modules, with the exception that the
“raw”, “script”, and “fetch” modules are also available. These modules do not start with a “win” prefix.
Windows Contributions
Windows support in Ansible is still very new, and contributions are quite welcome, whether this is in the form of new
modules, tweaks to existing modules, documentation, or something else. Please stop by the ansible-devel mailing list
if you would like to get involved and say hi.
See also:
Developing Modules How to write modules
Playbooks Learning ansible’s configuration management language
List of Windows Modules Windows specific module list, all implemented in powershell
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.2 Quickstart Video
We’ve recorded a short video that shows how to get started with Ansible that you may like to use alongside the
documentation.
The quickstart video is about 20 minutes long and will show you some of the basics about your first steps with Ansible.
Enjoy, and be sure to visit the rest of the documentation to learn more.
1.3 Playbooks
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want
your remote systems to enforce, or a set of steps in a general IT process.
If Ansible modules are the tools in your workshop, playbooks are your design plans.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more
advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts,
interacting with monitoring servers and load balancers along the way.
While there’s a lot of information here, there’s no need to learn everything at once. You can start small and pick up
more features over time as you need them.
Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to
organize playbooks and the files they include, and we’ll offer up some suggestions on that and making the most out of
Ansible.
It is recommended to look at Example Playbooks while reading along with the playbook documentation. These
illustrate best practices as well as how to put many of the various concepts together.
About Playbooks
Playbooks are a completely different way to use ansible than in ad-hoc task execution mode, and are particularly
powerful.
Simply put, playbooks are the basis for a really simple configuration management and multi-machine deployment
system, unlike any that already exist, and one that is very well suited to deploying complex applications.
Playbooks can declare configurations, but they can also orchestrate steps of any manual ordered process, even as
different steps must bounce back and forth between sets of machines in particular orders. They can launch tasks
synchronously or asynchronously.
While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source
control and used to push out your configuration or assure the configurations of your remote systems are in spec.
There are also some full sets of playbooks illustrating a lot of these techniques in the ansible-examples repository.
We’d recommend looking at these in another tab as you go along.
There are also many jumping off points after you learn playbooks, so hop back to the documentation index after you’re
done with this section.
Playbooks are expressed in YAML format (see YAML Syntax) and have a minimum of syntax, which intentionally tries
to not be a programming language or script, but rather a model of a configuration or a process.
Each playbook is composed of one or more ‘plays’ in a list.
The goal of a play is to map a group of hosts to some well defined roles, represented by things ansible calls tasks. At
a basic level, a task is nothing more than a call to an ansible module, which you should have learned about in earlier
chapters.
By composing a playbook of multiple ‘plays’, it is possible to orchestrate multi-machine deployments, running certain
steps on all machines in the webservers group, then certain steps on the database server group, then more commands
back on the webservers group, etc.
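As an illustrative sketch (the group names and commands are placeholders), such a playbook is simply several plays
in sequence:
---
- hosts: webservers
  tasks:
    - name: take the web tier out of rotation
      command: /bin/true

- hosts: dbservers
  tasks:
    - name: update the database tier
      command: /bin/true

- hosts: webservers
  tasks:
    - name: put the web tier back in rotation
      command: /bin/true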
“plays” are more or less a sports analogy. You can have quite a lot of plays that affect your systems to do different
things. It’s not as if you were just defining one particular state or model, and you can run different plays at different
times.
For starters, here’s a playbook that contains just one play:
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: pkg=httpd state=latest
    - name: write the apache config file
      template: src=/srv/httpd.j2 dest=/etc/httpd.conf
      notify:
        - restart apache
    - name: ensure apache is running
      service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
Below, we’ll break down what the various features of the playbook language are.
Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target and what remote user
to complete the steps (called tasks) as.
The hosts line is a list of one or more groups or host patterns, separated by colons, as described in the Patterns
documentation. The remote_user is just the name of the user account:
---
- hosts: webservers
  remote_user: root
Note: The remote_user parameter was formerly called just user. It was renamed in Ansible 1.4 to make it more
distinguishable from the user module (used to create users on remote systems).
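The hosts line can also target several groups at once; for instance (group names are placeholders):

- hosts: webservers:dbservers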
Support for running things from sudo is also available:
---
- hosts: webservers
  remote_user: yourname
  sudo: yes
You can also use sudo on a particular task instead of the whole play:
---
- hosts: webservers
  remote_user: yourname
  tasks:
    - service: name=nginx state=started
      sudo: yes
You can also login as you, and then sudo to users other than root:
---
- hosts: webservers
  remote_user: yourname
  sudo: yes
  sudo_user: postgres
If you need to specify a password to sudo, run ansible-playbook with --ask-sudo-pass (-K). If you run a sudo
playbook and the playbook seems to hang, it’s probably stuck at the sudo prompt. Just Control-C to kill it and run it
again with -K.
Important: When using sudo_user to a user other than root, the module arguments are briefly written into a random
tempfile in /tmp. These are deleted immediately after the command is executed. This only occurs when sudoing from
a user like ‘bob’ to ‘timmy’, not when going from ‘bob’ to ‘root’, or logging in directly as ‘bob’ or ‘root’. If it
concerns you that this data is briefly readable (not writable), avoid transferring unencrypted passwords with sudo_user
set. In other cases, ‘/tmp’ is not used and this does not come into play. Ansible also takes care to not log password
parameters.
Tasks list
Each play contains a list of tasks. Tasks are executed in order, one at a time, against all machines matched by the host
pattern, before moving on to the next task. It is important to understand that, within a play, all hosts are going to get
the same task directives. It is the purpose of a play to map a selection of hosts to tasks.
When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the
entire playbook. If things fail, simply correct the playbook file and rerun.
The goal of each task is to execute a module, with very specific arguments. Variables, as mentioned above, can be
used in arguments to modules.
Modules are ‘idempotent’, meaning if you run them again, they will make only the changes they must in order to bring
the system to the desired state. This makes it very safe to rerun the same playbook multiple times. They won’t change
things unless they have to change things.
The command and shell modules will typically rerun the same command again, which is totally ok if the command is
something like ‘chmod’ or ‘setsebool’, etc. Though there is a ‘creates’ flag available which can be used to make these
modules also idempotent.
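For instance (a sketch; the script and path are placeholders), a command guarded by ‘creates’ runs only when its
output file is absent:
tasks:
  - name: create a database, but only the first time
    command: /usr/bin/make_database.sh arg1 arg2 creates=/path/to/database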
Every task should have a name, which is included in the output from running the playbook. This is output for humans,
so it is nice to have reasonably good descriptions of each task step. If the name is not provided though, the string fed
to ‘action’ will be used for output.
Tasks can be declared using the legacy “action: module options” format, but it is recommended that you use the more
conventional “module: options” format. This recommended format is used throughout the documentation, but you
may encounter the older format in some playbooks.
Here is what a basic task looks like. As with most modules, the service module takes key=value arguments:
tasks:
  - name: make sure apache is running
    service: name=httpd state=running
The command and shell modules are the only modules that just take a list of arguments and don’t use the key=value
form. This makes them work as simply as you would expect:
tasks:
  - name: disable selinux
    command: /sbin/setenforce 0
The command and shell modules care about return codes, so if you have a command whose successful exit code is not
zero, you may wish to do this:
tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand || /bin/true
Or this:
tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand
    ignore_errors: True
If the action line is getting too long for comfort you can break it on a space and indent any continuation lines:
tasks:
  - name: Copy ansible inventory file to client
    copy: src=/https/www.scribd.com/etc/ansible/hosts dest=/etc/ansible/hosts
          owner=root group=root mode=0644
Variables can be used in action lines. Suppose you defined a variable called ‘vhost’ in the ‘vars’ section. You could
do this:
tasks:
  - name: create a virtual host file for {{ vhost }}
    template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}
Those same variables are usable in templates, which we’ll get to later.
Now in a very basic playbook all the tasks will be listed directly in that play, though it will usually make more sense
to break up tasks using the ‘include:’ directive. We’ll show that a bit later.
Action Shorthand
Ansible prefers listing modules like this:
template: src=templates/foo.j2 dest=/etc/foo.conf
You will notice that in earlier versions, this was only available as:
action: template src=templates/foo.j2 dest=/etc/foo.conf
The old form continues to work in newer versions without any plan of deprecation.
Handlers: Running Operations On Change
As we’ve mentioned, modules are written to be ‘idempotent’ and can relay when they have made a change on the
remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once
even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config
file, but apache will only be bounced once to avoid unnecessary restarts.
Here’s an example of restarting two services when the contents of a file change, but only if the file changes:
- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
    - restart memcached
    - restart apache
The things listed in the ‘notify’ section of a task are called handlers.
Handlers are lists of tasks, not really any different from regular tasks, that are referenced by name. Handlers are what
notifiers notify. If nothing notifies a handler, it will not run. Regardless of how many things notify a handler, it will
run only once, after all of the tasks complete in a particular play.
Here’s an example handlers section:
handlers:
  - name: restart memcached
    service: name=memcached state=restarted
  - name: restart apache
    service: name=apache state=restarted
Handlers are best used to restart services and trigger reboots. You probably won’t need them for much else.
Roles are described later on. It’s worthwhile to point out that handlers are automatically processed between ‘pre_tasks’,
‘roles’, ‘tasks’, and ‘post_tasks’ sections. If you ever want to flush all the handler commands immediately though, in
1.2 and later, you can:
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
  - shell: some other tasks
In the above example any queued up handlers would be processed early when the ‘meta’ statement was reached. This
is a bit of a niche case but can come in handy from time to time.
Executing A Playbook
Now that you’ve learned playbook syntax, how do you run a playbook? It’s simple. Let’s run a playbook using a
parallelism level of 10:
ansible-playbook playbook.yml -f 10
Ansible-Pull
Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing
configuration out to them, you can.
Ansible-pull is a small script that will check out a repo of configuration instructions from git, and then run ansible-
playbook against that content.
Assuming you load balance your checkout location, ansible-pull scales essentially infinitely.
Run ansible-pull --help for details.
There’s also a clever playbook available to configure ansible-pull via a crontab from push mode.
Tips and Tricks
Look at the bottom of the playbook execution for a summary of the nodes that were targeted and how they performed.
General failures and fatal “unreachable” communication attempts are kept separate in the counts.
If you ever want to see detailed output from successful modules as well as unsuccessful ones, use the --verbose
flag. This is available in Ansible 0.5 and later.
Ansible playbook output is vastly upgraded if the cowsay package is installed. Try it!
To see what hosts would be affected by a playbook before you run it, you can do this:
ansible-playbook playbook.yml --list-hosts
See also:
YAML Syntax Learn about YAML syntax
Best Practices Various tips about managing playbooks in the real world
Ansible Documentation Hop back to the documentation index for a lot of special topics about playbooks
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
Patterns Learn about how to select hosts
Github examples directory Complete end-to-end playbook examples
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
Topics
• Playbook Roles and Include Statements
– Introduction
– Task Include Files And Encouraging Reuse
– Roles
– Role Default Variables
– Role Dependencies
– Embedding Modules In Roles
– Ansible Galaxy
Introduction
While it is possible to write a playbook in one very large file (and you might start out learning playbooks this way),
eventually you’ll want to reuse files and start to organize things.
At a basic level, including task files allows you to break up bits of configuration policy into smaller files. Task includes
pull in tasks from other files. Since handlers are tasks too, you can also include handler files from the ‘handlers:’
section.
See Playbooks if you need a review of these concepts.
Playbooks can also include plays from other playbook files. When that is done, the plays will be inserted into the
playbook to form a longer list of plays.
When you start to think about it – tasks, handlers, variables, and so on – begin to form larger concepts. You start
to think about modeling what something is, rather than how to make something look like something. It’s no longer
“apply this handful of THINGS to these hosts”, you say “these hosts are dbservers” or “these hosts are webservers”. In
programming, we might call that “encapsulating” how things work. For instance, you can drive a car without knowing
how the engine works.
Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow
you to focus more on the big picture and only dive down into the details when needed.
We’ll start with understanding includes so roles make more sense, but our ultimate goal should be understanding roles
– roles are great and you should use them every time you write playbooks.
See the ansible-examples repository on GitHub for lots of examples of all of this put together. You may wish to have
this open in a separate tab as you dive in.
Suppose you want to reuse lists of tasks between plays or playbooks. You can use include files to do this. Use of
included task lists is a great way to define a role that a system is going to fulfill. Remember, the goal of a play in a
playbook is to map a group of systems into multiple roles. Let’s see what this looks like...
A task include file simply contains a flat list of tasks, like so:
---
# possibly saved as tasks/foo.yml

- name: placeholder foo
  command: /bin/foo

- name: placeholder bar
  command: /bin/bar
Include directives look like this, and can be mixed in with regular tasks in a playbook:
tasks:
  - include: tasks/foo.yml
You can also pass variables into includes. We call this a ‘parameterized include’.
For instance, if deploying multiple wordpress instances, I could contain all of my wordpress tasks in a single
wordpress.yml file, and use it like so:
tasks:
  - include: wordpress.yml user=timmy
  - include: wordpress.yml user=alice
  - include: wordpress.yml user=bob
If you are running Ansible 1.4 and later, include syntax is streamlined to match roles, and also allows passing list and
dictionary parameters:
tasks:
  - { include: wordpress.yml, user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] }
Using either syntax, variables passed in can then be used in the included files. We’ve already covered them a bit in
Variables. You can reference them like this:
{{ user }}
(In addition to the explicitly passed-in parameters, all variables from the vars section are also available for use here as
well.)
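For instance, the included wordpress.yml might make use of the passed-in variable like this (a hypothetical sketch;
the task body is illustrative, not part of the example above):
---
# wordpress.yml
- name: ensure the {{ user }} account exists
  user: name={{ user }} state=present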
Starting in 1.0, variables can also be passed to include files using an alternative syntax, which also supports structured
variables:
tasks:
  - include: wordpress.yml
    vars:
      remote_user: timmy
      some_list_variable:
        - alpha
        - beta
        - gamma
Playbooks can include other playbooks too, but that’s mentioned in a later section.
Note: As of 1.0, task include statements can be used at arbitrary depth. They were previously limited to a single
level, so task includes could not include other files containing task includes.
Includes can also be used in the ‘handlers’ section, for instance, if you want to define how to restart apache, you only
have to do that once for all of your playbooks. You might make a handlers.yml that looks like:
---
# this might be in a file like handlers/handlers.yml
- name: restart apache
  service: name=apache state=restarted
And in your main playbook file, just include it like so, at the bottom of a play:
handlers:
  - include: handlers/handlers.yml
You can mix in includes along with your regular non-included tasks and handlers.
Includes can also be used to import one playbook file into another. This allows you to define a top-level playbook that
is composed of other playbooks.
For example:
- name: this is a play at the top level of a file
  hosts: all
  remote_user: root
  tasks:
    - name: say hi
      tags: foo
      shell: echo "hi..."

- include: load_balancers.yml
- include: webservers.yml
- include: dbservers.yml
Note that you cannot do variable substitution when including one playbook inside another.
Note: You cannot conditionally set the path to an include file, like you can with ‘vars_files’. If you find yourself
needing to do this, consider how you can restructure your playbook to be more class/role oriented. This is to say you
cannot use a ‘fact’ to decide what include file to use. All hosts contained within the play are going to get the same
tasks. (‘when‘ provides some ability for hosts to conditionally skip tasks).
Roles
Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure.
Grouping content by roles also allows easy sharing of roles with other users. A role expects its files to live in a known
directory layout, like so:
roles/
  common/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
If any files are not present, they are just ignored. So it’s ok to not have a ‘vars/’ subdirectory for the role, for instance.
Note, you are still allowed to list tasks, vars_files, and handlers “loose” in playbooks without using roles, but roles
are a good organizational feature and are highly recommended. If there are loose things in the playbook, the roles are
evaluated first.
Also, should you wish to parameterize roles by adding variables, you can do so like this:
---
- hosts: webservers
  roles:
    - common
    - { role: foo_app_instance, dir: '/opt/a', port: 5000 }
    - { role: foo_app_instance, dir: '/opt/b', port: 5001 }
While it’s probably not something you should do often, you can also conditionally apply roles like so:
---
- hosts: webservers
  roles:
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }
This works by applying the conditional to every task in the role. Conditionals are covered later on in the documentation.
Finally, you may wish to assign tags to the roles you specify. You can do so inline:
---
- hosts: webservers
  roles:
    - { role: foo, tags: ["bar", "baz"] }
If the play still has a ‘tasks’ section, those tasks are executed after roles are applied.
If you want to define certain tasks to happen before AND after roles are applied, you can do this:
---
- hosts: webservers
  pre_tasks:
    - shell: echo 'hello'
  roles:
    - { role: some_role }
  tasks:
    - shell: echo 'still busy'
  post_tasks:
    - shell: echo 'goodbye'
Note: If using tags with tasks (described later as a means of only running part of a playbook), be sure to also tag
your pre_tasks and post_tasks and pass those along as well, especially if the pre and post tasks are used for monitoring
outage window control or load balancing.
Role Dependencies
Role dependencies allow you to automatically pull in other roles when using a role. Role dependencies are stored in
the meta/main.yml file contained within the role directory. They can also be specified as a full path, just like top level
roles:
---
dependencies:
  - { role: '/path/to/common/roles/foo', x: 1 }
Role dependencies are always executed before the role that includes them, and are recursive. By default, roles can
also only be added as a dependency once - if another role also lists it as a dependency it will not be run again. This
behavior can be overridden by adding allow_duplicates: yes to the meta/main.yml file. For example, a role named
‘car’ could add a role named ‘wheel’ to its dependencies as follows:
---
dependencies:
  - { role: wheel, n: 1 }
  - { role: wheel, n: 2 }
  - { role: wheel, n: 3 }
  - { role: wheel, n: 4 }
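For the wheel role to actually run four times, its own meta file would need to opt in; a minimal sketch (the path is the
conventional location):
---
# roles/wheel/meta/main.yml
allow_duplicates: yes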
Embedding Modules In Roles
This is an advanced topic that should not be relevant for most users.
If you write a custom module (see Developing Modules) you may wish to distribute it as part of a role. Generally
speaking, Ansible as a project is very interested in taking high-quality modules into ansible core for inclusion, so this
shouldn’t be the norm, but it’s quite easy to do.
A good example for this is if you worked at a company called AcmeWidgets, and wrote an internal module that helped
configure your internal software, and you wanted other people in your organization to easily use this module – but you
didn’t want to tell everyone how to configure their Ansible library path.
Alongside the ‘tasks’ and ‘handlers’ structure of a role, add a directory named ‘library’, and place the module
directly inside it.
Assuming you had this:
roles/
  my_custom_modules/
    library/
      module1
      module2
The module will be usable in the role itself, as well as any roles that are called after this role, as follows:
- hosts: webservers
roles:
- my_custom_modules
- some_other_role_using_my_custom_modules
- yet_another_role_using_my_custom_modules
This can also be used, with some limitations, to modify modules in Ansible’s core distribution, such as to use
development versions of modules before they are released in production releases. This is not always advisable, as
API signatures may change in core components, and it is not guaranteed to work. It can, however, be a handy way of
carrying a patch against a core module, should you have good reason for this. Naturally the project prefers that
contributions be directed back to github whenever possible via a pull request.
Ansible Galaxy
Ansible Galaxy is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible
roles and can be a great way to get a jumpstart on your automation projects.
You can sign up with social auth, and the download client ‘ansible-galaxy’ is included in Ansible 1.4.2 and later.
Read the “About” page on the Galaxy site for more information.
See also:
YAML Syntax Learn about YAML syntax
Playbooks Review the basic Playbook language features
Best Practices Various tips about managing playbooks in the real world
Variables All about variables in playbooks
Conditionals Conditionals in playbooks
Loops Loops in playbooks
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
GitHub Ansible examples Complete playbook files from the GitHub project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
1.3.3 Variables
Topics
• Variables
– What Makes A Valid Variable Name
– Variables Defined in Inventory
– Variables Defined in a Playbook
– Variables defined from included files and roles
– Using Variables: About Jinja2
– Jinja2 Filters
* Filters For Formatting Data
* Filters Often Used With Conditionals
* Forcing Variables To Be Defined
* Defaulting Undefined Variables
* Set Theory Filters
* Version Comparison Filters
* Random Number Filter
* Other Useful Filters
– Hey Wait, A YAML Gotcha
– Information discovered from systems: Facts
– Turning Off Facts
– Local Facts (Facts.d)
– Registered Variables
– Accessing Complex Variable Data
– Magic Variables, and How To Access Information About Other Hosts
– Variable File Separation
– Passing Variables On The Command Line
– Conditional Imports
– Variable Precedence: Where Should I Put A Variable?
While automation exists to make it easier to make things repeatable, all of your systems are likely not exactly alike.
On some systems you may want to set some behavior or configuration that is slightly different from others.
Also, some of the observed behavior or state of remote systems might need to influence how you configure those
systems. (Such as you might need to find out the IP address of a system and even use it as a configuration value on
another system).
You might have some templates for configuration files that are mostly the same, but slightly different based on those
variables.
Variables in Ansible are how we deal with differences between systems.
Once you understand variables you’ll also want to dig into Conditionals and Loops. Useful things like the “group_by”
module and the “when” conditional can also be used with variables, and help manage differences between systems.
It’s highly recommended that you consult the ansible-examples github repository to see a lot of examples of variables
put to use.
What Makes A Valid Variable Name
Before we start using variables, it’s important to know what valid variable names look like.
Variable names should be letters, numbers, and underscores. Variables should always start with a letter.
“foo_port” is a great variable. “foo5” is fine too.
“foo-port”, “foo port”, “foo.port” and “12” are not valid variable names.
Variables Defined in Inventory
We’ve actually already covered a lot about variables in another section, so this shouldn’t be terribly new; consider
this a refresher.
Often you’ll want to set variables based on what groups a machine is in. For instance, maybe machines in Boston want
to use ‘boston.ntp.example.com’ as an NTP server.
See the Inventory document for multiple ways on how to define variables in inventory.
Variables Defined in a Playbook
In a playbook, it’s possible to define variables directly inline like so:
- hosts: webservers
  vars:
    http_port: 80
This can be nice as it’s right there when you are reading the playbook.
Variables defined from included files and roles
It turns out we’ve already talked about variables in another place too.
As described in Playbook Roles and Include Statements, variables can also be included in the playbook via include
files, which may or may not be part of an “Ansible Role”. Usage of roles is preferred as it provides a nice organizational
system.
Using Variables: About Jinja2
It’s nice enough to know about how to define variables, but how do you use them?
Ansible allows you to reference variables in your playbooks using the Jinja2 templating system. While you can do a
lot of complex things in Jinja, only the basics are things you really need to learn at first.
For instance, in a simple template, you can do something like:
My amp goes to {{ max_amp_value }}
And that will provide the most basic form of variable substitution.
This is also valid directly in playbooks, and you’ll occasionally want to do things like:
template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg
In the above example, we used a variable to help decide where to place a file.
Inside a template you automatically have access to all of the variables that are in scope for a host. Actually it’s more
than that – you can also read variables about other hosts. We’ll show how to do that in a bit.
Note: ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible
playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-
generate pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can
unlock possibilities.
Jinja2 Filters
Note: These are infrequently utilized features. Use them if they fit a use case you have, but this is optional knowledge.
Filters in Jinja2 are a way of transforming template expressions from one kind of data into another. Jinja2 ships with
many of these. See builtin filters in the official Jinja2 template documentation.
In addition to those, Ansible supplies many more.
Filters For Formatting Data
The following filters will take a data structure in a template and render it in a slightly different format. These are
occasionally useful for debugging:
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
Filters Often Used With Conditionals
The following tasks are illustrative of how filters can be used with conditionals:
tasks:
  - shell: /usr/bin/foo
    register: result
    ignore_errors: True

  # in most cases you'll want a handler, but if you want to do something right now, this is nice
  - debug: msg="it changed"
    when: result|changed
Forcing Variables To Be Defined
The default behavior from ansible and ansible.cfg is to fail if variables are undefined, but you can turn this off.
This allows an explicit check with this feature off:
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
Defaulting Undefined Variables
Jinja2 provides a useful ‘default’ filter that is often a better approach than failing if a variable is not defined:
{{ some_variable | default(5) }}
In the above example, if the variable ‘some_variable’ is not defined, the value used will be 5, rather than an error being
raised.
Version Comparison Filters
To compare a version number, such as checking whether ansible_distribution_version is greater than or
equal to ‘12.04’, you can use the version_compare filter:
{{ ansible_distribution_version | version_compare('12.04', '>=') }}
If ansible_distribution_version is greater than or equal to 12.04, this filter will return True, otherwise it
will return False.
The version_compare filter accepts the following operators:
<, lt, <=, le, >, gt, >=, ge, ==, =, eq, !=, <>, ne
This filter also accepts a 3rd parameter, strict which defines if strict version parsing should be used. The default is
False, and if set as True will use more strict version parsing:
{{ sample_version_var | version_compare(’1.0’, operator=’lt’, strict=True) }}
Other Useful Filters
To get the last name of a file path, like ‘foo.txt’ out of ‘/etc/asdf/foo.txt’:
{{ path | basename }}
To cast values as certain types, such as when you input a string as “True” from a vars_prompt and the system doesn’t
know it is a boolean value:
- debug: msg=test
  when: some_string_value | bool
To match strings against a substring or a regex, use the "match" or "search" filters:
vars:
  url: "http://example.com/users/foo/resources/bar"
tasks:
  - shell: "msg='matched pattern 1'"
    when: url | match("http://example.com/users/.*/resources/.*")
‘match’ will require a complete match in the string, while ‘search’ will require a match inside of the string.
To replace text in a string with regex, use the “regex_replace” filter:
# convert "ansible" to "able"
{{ ’ansible’ | regex_replace(’^a.*i(.*)$’, ’a\\1’) }}
A few useful filters are typically added with each new Ansible release. The development documentation shows how to
extend Ansible filters by writing your own as plugins, though in general, we encourage new ones to be added to core
so everyone can make use of them.
Hey Wait, A YAML Gotcha
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you
aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax page.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
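Do it like this and you’ll be fine (the quoting is the point):

- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"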
Information discovered from systems: Facts
There are other places where variables can come from, but these are a type of variable that are discovered, not set by
the user.
Facts are information derived from speaking with your remote systems. An example of this might be the IP address
of the remote host, or what the operating system is.
To see what information is available, try the following:
ansible hostname -m setup
This will return a ginormous amount of variable data, which may look like this, as taken from Ansible 1.4 on an
Ubuntu 12.04 system:
"ansible_all_ipv4_addresses": [
"REDACTED IP ADDRESS"
],
"ansible_all_ipv6_addresses": [
"REDACTED IPV6 ADDRESS"
],
"ansible_architecture": "x86_64",
"ansible_bios_date": "09/20/2012",
"ansible_bios_version": "6.00",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.5.0-23-generic",
"quiet": true,
"ro": true,
"root": "UUID=4195bff4-e157-4e41-8701-e93f0aec9e22",
"splash": true
},
"ansible_date_time": {
"date": "2013-10-02",
"day": "02",
"epoch": "1380756810",
"hour": "19",
"iso8601": "2013-10-02T23:33:30Z",
"iso8601_micro": "2013-10-02T23:33:30.036070Z",
"minute": "33",
"month": "10",
"second": "30",
"time": "19:33:30",
"tz": "EDT",
"year": "2013"
},
"ansible_default_ipv4": {
"address": "REDACTED",
"alias": "eth0",
"gateway": "REDACTED",
"interface": "eth0",
"macaddress": "REDACTED",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "REDACTED",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_devices": {
"fd0": {
"holders": [],
"host": "",
"model": null,
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "0",
"sectorsize": "512",
"size": "0.00 Bytes",
"support_discard": "0",
"vendor": null
},
"sda": {
"holders": [],
"host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ult
"model": "VMware Virtual S",
"partitions": {
"sda1": {
"sectors": "39843840",
"sectorsize": 512,
"size": "19.00 GB",
"start": "2048"
},
"sda2": {
"sectors": "2",
"sectorsize": 512,
"size": "1.00 KB",
"start": "39847934"
},
"sda5": {
"sectors": "2093056",
"sectorsize": 512,
"size": "1022.00 MB",
"start": "39847936"
}
},
"removable": "0",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "41943040",
"sectorsize": "512",
"size": "20.00 GB",
"support_discard": "0",
"vendor": "VMware,"
},
"sr0": {
"holders": [],
"host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
"model": "VMware IDE CDR10",
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "2097151",
"sectorsize": "512",
"size": "1024.00 MB",
"support_discard": "0",
"vendor": "NECVMWar"
}
},
"ansible_distribution": "Ubuntu",
"ansible_distribution_release": "precise",
"ansible_distribution_version": "12.04",
"ansible_domain": "",
"ansible_env": {
"COLORTERM": "gnome-terminal",
"DISPLAY": ":0",
"HOME": "/home/mdehaan",
"LANG": "C",
"LESSCLOSE": "/usr/bin/lesspipe %s %s",
"LESSOPEN": "| /usr/bin/lesspipe %s",
"LOGNAME": "root",
"LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=
"MAIL": "/var/mail/root",
"OLDPWD": "/root/ansible/docsite",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PWD": "/root/ansible",
"SHELL": "/bin/bash",
"SHLVL": "1",
"SUDO_COMMAND": "/bin/bash",
"SUDO_GID": "1000",
"SUDO_UID": "1000",
"SUDO_USER": "mdehaan",
"TERM": "xterm",
"USER": "root",
"USERNAME": "root",
"XAUTHORITY": "/home/mdehaan/.Xauthority",
"_": "/usr/local/bin/ansible"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "REDACTED",
"netmask": "255.255.255.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "e1000",
"mtu": 1500,
"type": "ether"
},
"ansible_form_factor": "Other",
"ansible_fqdn": "ubuntu2",
"ansible_hostname": "ubuntu2",
"ansible_interfaces": [
"lo",
"eth0"
],
"ansible_kernel": "3.5.0-23-generic",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 16436,
"type": "loopback"
},
"ansible_lsb": {
"codename": "precise",
"description": "Ubuntu 12.04.2 LTS",
"id": "Ubuntu",
"major_release": "12",
"release": "12.04"
},
"ansible_machine": "x86_64",
"ansible_memfree_mb": 74,
"ansible_memtotal_mb": 991,
"ansible_mounts": [
{
"device": "/dev/sda1",
"fstype": "ext4",
"mount": "/",
"options": "rw,errors=remount-ro",
"size_available": 15032406016,
"size_total": 20079898624
}
],
"ansible_os_family": "Debian",
"ansible_pkg_mgr": "apt",
"ansible_processor": [
"Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz"
],
"ansible_processor_cores": 1,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 1,
"ansible_product_name": "VMware Virtual Platform",
"ansible_product_serial": "REDACTED",
"ansible_product_uuid": "REDACTED",
"ansible_product_version": "None",
"ansible_python_version": "2.7.3",
"ansible_selinux": false,
"ansible_ssh_host_key_dsa_public": "REDACTED KEY VALUE"
"ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE"
"ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE"
"ansible_swapfree_mb": 665,
"ansible_swaptotal_mb": 1021,
"ansible_system": "Linux",
"ansible_system_vendor": "VMware, Inc.",
"ansible_user_id": "root",
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "VMware"
In the above, the model of the first hard drive may be referenced in a template or playbook as:
{{ ansible_devices.sda.model }}
Similarly, the hostname as the system reports it is:
{{ ansible_hostname }}
Facts are frequently used in conditionals (see Conditionals) and also in templates.
Facts can be also used to create dynamic groups of hosts that match particular criteria, see the About Modules docu-
mentation on ‘group_by’ for details, as well as in generalized conditional statements as discussed in the Conditionals
chapter.
Turning Off Facts
If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you
can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems,
mainly, or if you are using Ansible on experimental platforms. In any play, just do this:
- hosts: whatever
gather_facts: no
Local Facts (Facts.d)
Note: Perhaps “local facts” is a bit of a misnomer, it means “locally supplied user values” as opposed to “centrally
supplied user values”, or what facts are – “locally dynamically determined values”.
If a remotely managed system has an “/etc/ansible/facts.d” directory, any files in this directory ending in “.fact” can
be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible.
For instance, assume a /etc/ansible/facts.d/preferences.fact:
[general]
asdf=1
bar=2
This will produce a hash variable fact named “general” with ‘asdf’ and ‘bar’ as members. To validate this, run the
following:
ansible <hostname> -m setup -a "filter=ansible_local"
And you will see a fact like the following added:
"ansible_local": {
    "preferences": {
        "general": {
            "asdf" : "1",
            "bar" : "2"
        }
    }
}
And this data can be accessed in a template/playbook as:
{{ ansible_local.preferences.general.asdf }}
The local namespace prevents any user supplied fact from overriding system facts or variables defined elsewhere in
the playbook.
If you have a playbook that is copying over a custom fact and then running it, making an explicit call to re-run the
setup module can allow that fact to be used during that particular play. Otherwise, it will be available in the next play
that gathers fact information. Here is an example of what that might look like:
- hosts: webservers
  tasks:
    - name: create directory for ansible custom facts
      file: state=directory recurse=yes path=/etc/ansible/facts.d
    - name: install custom ipmi fact
      copy: src=ipmi.fact dest=/etc/ansible/facts.d
    - name: re-read facts after adding custom fact
      setup: filter=ansible_local
In this pattern, however, you could also write a fact module, and may wish to consider this as an option.
Registered Variables
Another major use of variables is running a command and saving its result into a variable. Results will vary from
module to module. Use of -v when executing playbooks will show possible values for the results.
The value of a task being executed in ansible can be saved in a variable and used later. See some examples of this in
the Conditionals chapter.
While it’s mentioned elsewhere in that document too, here’s a quick syntax example:
- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True

    - shell: /usr/bin/bar
      when: foo_result.rc == 5
Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of
“facts” in Ansible. Effectively registered variables are just like facts.
Accessing Complex Variable Data
Some provided facts, like networking information, are made available as nested data structures. To access them, a
simple {{ foo }} is not sufficient, but it is still easy to do. Here’s how we get an IP address:
{{ ansible_eth0["ipv4"]["address"] }}
OR alternatively:
{{ ansible_eth0.ipv4.address }}
Magic Variables, and How To Access Information About Other Hosts
Even if you didn’t define them yourself, Ansible provides a few variables for you automatically. The most important of
these are ‘hostvars’, ‘group_names’, and ‘groups’. Users should not use these names themselves as they are reserved.
‘environment’ is also reserved.
Hostvars lets you ask about the variables of another host, including facts that have been gathered about that host. If,
at this point, you haven’t talked to that host yet in any play in the playbook or set of playbooks, you can get at the
variables, but you will not be able to see the facts.
If your database server wants to use the value of a ‘fact’ from another node, or an inventory variable assigned to
another node, it’s easy to do so within a template or even an action line:
{{ hostvars[’test.example.com’][’ansible_distribution’] }}
Additionally, group_names is a list (array) of all the groups the current host is in. This can be used in templates using
Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host:
{% if ’webserver’ in group_names %}
# some part of a configuration file that only applies to webservers
{% endif %}
groups is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group.
For example:
{% for host in groups[’app_servers’] %}
# something that applies to all app servers.
{% endfor %}
A frequently used idiom is walking a group to find all IP addresses in that group:
{% for host in groups[’app_servers’] %}
{{ hostvars[host][’ansible_eth0’][’ipv4’][’address’] }}
{% endfor %}
An example of this could include pointing a frontend proxy server to all of the app servers, setting up the correct
firewall rules between servers, etc.
Additionally, inventory_hostname is the name of the host as configured in Ansible’s inventory host file. This can be
useful when you don’t want to rely on the discovered hostname ansible_hostname, or for other mysterious reasons. If
you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the
domain.
play_hosts is available as a list of hostnames that are in scope for the current play. This may be useful for filling out
templates with multiple hostnames or for injecting the list into the rules for a load balancer.
Don’t worry about any of this unless you think you need it. You’ll know when you do.
Also available, inventory_dir is the pathname of the directory holding Ansible’s inventory host file, and inventory_file
is the pathname and the filename pointing to Ansible’s inventory host file.
Variable File Separation
It’s a great idea to keep your playbooks under source control, but you may wish to make the playbook source public
while keeping certain important variables private. Similarly, sometimes you may just want to keep certain information
in different files, away from the main playbook.
You can do this by using an external variables file, or files, just like this:
---
- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml

  tasks:
  - name: this is just a placeholder
    command: /bin/echo foo
This removes the risk of sharing sensitive data with others when sharing your playbook source with them.
The contents of each variables file is a simple YAML dictionary, like this:
---
# in the above example, this would be vars/external_vars.yml
somevar: somevalue
password: magic
Note: It's also possible to keep per-host and per-group variables in very similar files; this is covered in Patterns.
In addition to vars_prompt and vars_files, it is possible to send variables over the Ansible command line. This is par-
ticularly useful when writing a generic release playbook where you may want to pass in the version of the application
to deploy:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
This is useful for, among other things, setting the hosts group or the user for the playbook.
Example:

---
- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'

  tasks:
     - ...
As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so:
--extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'
The key=value form is obviously simpler, but it’s there if you need it!
As of Ansible 1.3, extra vars can be loaded from a JSON file with the “@” syntax:
--extra-vars "@some_file.json"
Also as of Ansible 1.3, extra vars can be formatted as YAML, either on the command line or in a file as above.
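For instance, the same variables could be passed as inline YAML or loaded from a YAML file (the file name here is
illustrative):

ansible-playbook release.yml --extra-vars "{version: 1.23.45, other_variable: foo}"
ansible-playbook release.yml --extra-vars "@some_vars.yml"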
Conditional Imports
Note: This behavior is infrequently used in Ansible. You may wish to skip this section. The ‘group_by’ module as
described in the module documentation is a better way to achieve this behavior in most cases.
Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook
that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled
with a minimum of syntax in an Ansible Playbook:
---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]

  tasks:
  - name: make sure apache is running
    service: name={{ apache }} state=running
Note: The variable ‘ansible_os_family’ is being interpolated into the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values:
---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42
How does this work? If the operating system is 'CentOS', the first file Ansible will try to import is
'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file does not exist. If no files in the list are found, an
error is raised. On Debian, it would first look for 'vars/Debian.yml' instead of 'vars/CentOS.yml',
before falling back on 'vars/os_defaults.yml'. Pretty simple.
To use this conditional import feature, you’ll need facter or ohai installed prior to running the playbook, but you can
of course push this out with Ansible if you like:
# for facter
ansible all -m yum -a "pkg=facter state=installed"
ansible all -m yum -a "pkg=ruby-json state=installed"

# for ohai
ansible all -m yum -a "pkg=ohai state=installed"
Ansible's approach to configuration – separating variables from tasks – keeps your playbooks from turning into arbitrary
code with ugly nested ifs and conditionals, and results in more streamlined and auditable configuration rules, especially
because there are a minimum of decision points to track.
A lot of folks may ask about how variables override one another. Ultimately it's Ansible's philosophy that it's better you
know where to put a variable, and then you have to think about it a lot less.
Avoid defining the variable "x" in 47 places and then asking the question "which x gets used?". Why? Because that's not
Ansible's Zen philosophy of doing things.
There is only one Empire State Building. One Mona Lisa, etc. Figure out where to define a variable, and don’t make
it complicated.
However, let’s go ahead and get precedence out of the way! It exists. It’s a real thing, and you might have a use for it.
If multiple variables of the same name are defined in different places, they win in a certain order, which is:
* -e variables always win
* then comes "most everything else"
* then comes variables defined in inventory
* then comes facts discovered about a system
* then "role defaults", which are the most "defaulty" and lose in priority to everything.
Note: In versions prior to 1.5.4, facts discovered about a system were in the “most everything else” category above.
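To make that concrete, here is a small sketch (the variable name and values are illustrative) of the same variable
defined at three levels:

# roles/x/defaults/main.yml defines:   http_port: 80     (role default, loses to everything)
# group_vars/webservers defines:       http_port: 8080   (inventory, beats the role default)
# -e on the command line always wins:
ansible-playbook site.yml -e "http_port=9090"
# tasks in the play will see http_port == 9090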
That seems a little theoretical. Let’s show some examples and where you would choose to put what based on the kind
of control you might want over values.
First off, group variables are super powerful.
Site wide defaults should be defined as a ‘group_vars/all’ setting. Group variables are generally placed alongside your
inventory file. They can also be returned by a dynamic inventory script (see Dynamic Inventory) or defined in things
like Ansible Tower from the UI or API:
---
# file: /etc/ansible/group_vars/all
# this is the site wide default
ntp_server: default-time.example.com
Regional information might be defined in a ‘group_vars/region’ variable. If this group is a child of the ‘all’ group
(which it is, because all groups are), it will override the group that is higher up and more general:
---
# file: /etc/ansible/group_vars/boston
ntp_server: boston-time.example.com
If for some crazy reason we wanted to tell just a specific host to use a specific NTP server, it would then override the
group variable!:
---
# file: /etc/ansible/host_vars/xyz.boston.example.com
ntp_server: override.example.com
So that covers inventory and what you would normally set there. It’s a great place for things that deal with geography
or behavior. Since groups are frequently the entity that maps roles onto hosts, it is sometimes a shortcut to set variables
on the group instead of defining them on a role. You could go either way.
Remember: Child groups override parent groups, and hosts always override their groups.
Next up: learning about role variable precedence.
We’ll pretty much assume you are using roles at this point. You should be using roles for sure. Roles are great. You
are using roles aren’t you? Hint hint.
Ok, so if you are writing a redistributable role with reasonable defaults, put those in the ‘roles/x/defaults/main.yml’
file. This means the role will bring along a default value but ANYTHING in Ansible will override it. It’s just a default.
That’s why it says “defaults” :) See Playbook Roles and Include Statements for more info about this:
---
# file: roles/x/defaults/main.yml
# if not overridden in inventory or as a parameter, this is the value that will be used
http_port: 80
If you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be
overridden by inventory, put it in roles/x/vars/main.yml like so. Inventory values cannot override it; -e,
however, still will:
---
# file: roles/x/vars/main.yml
# this will absolutely be used in this role
http_port: 80
So the above is a great way to plug in constants about the role that are always true. If you are not sharing your role
with others, app-specific behaviors like ports are fine to put in here. But if you are sharing roles with others, putting
variables in here might be bad: nobody will be able to override them with inventory, though they can still be overridden
by passing a parameter to the role.
Parameterized roles are useful.
If you are using a role and want to override a default, pass it as a parameter to the role like so:
roles:
   - { role: apache, http_port: 8080 }
This makes it clear to the playbook reader that you’ve made a conscious choice to override some default in the role, or
pass in some configuration that the role can’t assume by itself. It also allows you to pass something site-specific that
isn’t really part of the role you are sharing with others.
This can often be used for things that might apply to some hosts multiple times, like so:
roles:
   - { role: app_user, name: Ian }
   - { role: app_user, name: Terry }
   - { role: app_user, name: Graham }
   - { role: app_user, name: John }
That's a bit arbitrary, but you can see how the same role was invoked multiple times. In that example it's quite likely
there was no default for 'name' supplied at all. Ansible can yell at you when variables aren't defined – it's the default
behavior in fact.
So that’s a bit about roles.
There are a few bonus things that go on with roles.
Generally speaking, variables set in one role are available to others. This means if you have a
“roles/common/vars/main.yml” you can set variables in there and make use of them in other roles and elsewhere
in your playbook:
roles:
   - { role: common_settings }
   - { role: something, foo: 12 }
   - { role: something_else }
Note: There are some protections in place to avoid the need to namespace variables. In the above, variables defined
in common_settings are most definitely available to the 'something' and 'something_else' tasks, but 'something' is
guaranteed to have foo set to 12, even if somewhere deep in common_settings it set foo to 20.
So, that’s precedence, explained in a more direct way. Don’t worry about precedence, just think about if your role is
defining a variable that is a default, or a “live” variable you definitely want to use. Inventory lies in precedence right
in the middle, and if you want to forcibly override something, use -e.
If you found that a little hard to understand, take a look at the ansible-examples repo on our github for a bit more about
how all of these things can work together.
See also:
Playbooks An introduction to playbooks
Conditionals Conditional statements in playbooks
Loops Looping in playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.3.4 Conditionals
Topics
• Conditionals
– The When Statement
– Loading in Custom Facts
– Applying ‘when’ to roles and includes
– Conditional Imports
– Selecting Files And Templates Based On Variables
– Register Variables
Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or
previous task result. In some cases, the values of variables may depend on other variables. Further, additional groups
can be created to manage hosts based on whether the hosts match other criteria. There are many options to control
execution flow in Ansible.
Let’s dig into what they are.
Sometimes you will want to skip a particular step on a particular host. This could be something as simple as not
installing a certain package if the operating system is a particular version, or it could be something like performing
some cleanup steps if a filesystem is getting full.
This is easy to do in Ansible, with the when clause, which contains a Jinja2 expression (see Variables). It’s actually
pretty simple:
tasks:
  - name: "shutdown Debian flavored systems"
    command: /sbin/shutdown -t now
    when: ansible_os_family == "Debian"
A number of Jinja2 “filters” can also be used in when statements, some of which are unique and provided by Ansible.
Suppose we want to ignore the error of one statement and then decide to do something conditionally based on success
or failure:
tasks:
  - command: /bin/false
    register: result
    ignore_errors: True

  - command: /bin/something
    when: result|failed

  - command: /bin/something_else
    when: result|success

  - command: /bin/still/something_else
    when: result|skipped
Note that was a little bit of foreshadowing on the ‘register’ statement. We’ll get to it a bit later in this chapter.
As a reminder, to see what facts are available on a particular system, you can do:
ansible hostname.example.com -m setup
Tip: Sometimes you'll get back a variable that's a string and you'll want to do a numeric comparison on it. You
can do this like so:

tasks:
  - shell: echo "only on Red Hat 6, derivatives, and later"
    when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6
Note: the above example requires the lsb_release package on the target host in order to return the ansi-
ble_lsb.major_release fact.
Variables defined in the playbooks or inventory can also be used. An example may be the execution of a task based on
a variable’s boolean value:
vars:
  epic: true

tasks:
  - shell: echo "This certainly is epic!"
    when: epic

or:

tasks:
  - shell: echo "This certainly isn't epic!"
    when: not epic
If a required variable has not been set, you can skip or fail using Jinja2’s defined test. For example:
tasks:
  - shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
    when: foo is defined
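The reverse also works; here is a quick sketch that bails out when a required variable 'bar' was never set:

tasks:
  - fail: msg="Bailing out: this play requires 'bar'"
    when: bar is not defined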
This is especially useful in combination with the conditional import of vars files (see below).
Note that when combining when with with_items (see Loops), be aware that the when statement is processed separately
for each item. This is by design:
tasks:
  - command: echo {{ item }}
    with_items: [ 0, 2, 4, 6, 8, 10 ]
    when: item > 5
It’s also easy to provide your own facts if you want, which is covered in Developing Modules. To run them, just make
a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be
accessible to future tasks:
tasks:
  - name: gather site specific fact data
    action: site_facts

  - command: /usr/bin/thingy
    when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
Note that if you have several tasks that all share the same conditional statement, you can affix the conditional to a
task include statement as below. Note this does not work with playbook includes, just task includes. All the tasks get
evaluated, but the conditional is applied to each and every task:
- include: tasks/sometasks.yml
  when: "'reticulating splines' in output"
Or with a role:
- hosts: webservers
  roles:
     - { role: debian_stock_config, when: ansible_os_family == 'Debian' }
You will note a lot of ‘skipped’ output by default in Ansible when using this approach on systems that don’t match the
criteria. Read up on the ‘group_by’ module in the About Modules docs for a more streamlined way to accomplish the
same thing.
Conditional Imports
Note: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook
that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled
with a minimum of syntax in an Ansible Playbook:
---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]

  tasks:
  - name: make sure apache is running
    service: name={{ apache }} state=running
Note: The variable ‘ansible_os_family’ is being interpolated into the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values:
---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42
How does this work? If the operating system is 'CentOS', the first file Ansible will try to import is
'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file does not exist. If no files in the list are found, an
error is raised. On Debian, it would first look for 'vars/Debian.yml' instead of 'vars/CentOS.yml',
before falling back on 'vars/os_defaults.yml'. Pretty simple.
To use this conditional import feature, you’ll need facter or ohai installed prior to running the playbook, but you can
of course push this out with Ansible if you like:
# for facter
ansible all -m yum -a "pkg=facter state=installed"
ansible all -m yum -a "pkg=ruby-json state=installed"

# for ohai
ansible all -m yum -a "pkg=ohai state=installed"
Ansible's approach to configuration – separating variables from tasks – keeps your playbooks from turning into arbitrary
code with ugly nested ifs and conditionals, and results in more streamlined and auditable configuration rules, especially
because there are a minimum of decision points to track.
Selecting Files And Templates Based On Variables

Note: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes a configuration file you want to copy, or a template you will use may depend on a variable. The following
construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than
putting a lot of if conditionals in a template.
The following example shows how to template out a configuration file that was very different between, say, CentOS
and Debian:
- name: template a file
  template: src={{ item }} dest=/etc/myapp/foo.conf
  with_first_found:
    - files:
        - "{{ ansible_distribution }}.conf"
        - default.conf
      paths:
        - search_location_one/somedir/
        - /opt/other_location/somedir/
Register Variables
Often in a playbook it may be useful to store the result of a given command in a variable and access it later. Use of the
command module in this way can in many ways eliminate the need to write site-specific facts; for instance, you could
test for the existence of a particular program.
The ‘register’ keyword decides what variable to save a result in. The resulting variables can be used in templates,
action lines, or when statements. It looks like this (in an obviously trivial example):
- name: test play
  hosts: all

  tasks:
      - shell: cat /etc/motd
        register: motd_contents

      - shell: echo "motd contains the word hi"
        when: motd_contents.stdout.find('hi') != -1
As shown previously, the registered variable’s string contents are accessible with the ‘stdout’ value. The registered
result can be used in the “with_items” of a task if it is converted into a list (or already is a list) as shown below.
“stdout_lines” is already available on the object as well though you could also call “home_dirs.stdout.split()” if you
wanted, and could split by other fields:
- name: registered variable usage as a with_items list
  hosts: all

  tasks:
      - name: retrieve the list of home directories
        command: ls /home
        register: home_dirs

      - name: add home dirs to the backup spooler
        file: path=/mnt/bkspool/{{ item }} src=/home/{{ item }} state=link
        with_items: home_dirs.stdout_lines
        # same as with_items: home_dirs.stdout.split()
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.3.5 Loops
Often you’ll want to do many things in one task, such as create a lot of users, install a lot of packages, or repeat a
polling step until a certain result is reached.
This chapter is all about how to use loops in playbooks.
Topics
• Loops
– Standard Loops
– Nested Loops
– Looping over Hashes
– Looping over Fileglobs
– Looping over Parallel Sets of Data
– Looping over Subelements
– Looping over Integer Sequences
– Random Choices
– Do-Until Loops
– Finding First Matched Files
– Iterating Over The Results of a Program Execution
– Looping Over A List With An Index
– Flattening A List
– Using register with a loop
– Writing Your Own Iterators
Standard Loops
To save some typing, repeated tasks can be written in short-hand like so:
- name: add several users
  user: name={{ item }} state=present groups=wheel
  with_items:
     - testuser1
     - testuser2
If you have defined a YAML list in a variables file, or the ‘vars’ section, you can also do:
with_items: somelist
The yum and apt modules use with_items to execute fewer package manager transactions.
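For example, here is a short sketch where the whole package list is handed to yum as one transaction:

- name: install several packages in a single yum transaction
  yum: name={{ item }} state=present
  with_items:
     - httpd
     - mod_ssl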
Note that the types of items you iterate over with ‘with_items’ do not have to be simple lists of strings. If you have a
list of hashes, you can reference subkeys using things like:
- name: add several users
  user: name={{ item.name }} state=present groups={{ item.groups }}
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }
Nested Loops
As with the case of ‘with_items’ above, you can use previously defined variables. Just specify the variable’s name
without templating it with ‘{{ }}’:
- name: here, 'users' contains the above list of employees
  mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - users
    - [ 'clientdb', 'employeedb', 'providerdb' ]
Looping over Hashes

Suppose you have the following variable (the records here are illustrative, reconstructed to match the task below):

---
users:
  alice:
    name: Alice Appleworth
    telephone: 123-456-7890
  bob:
    name: Bob Bananarama
    telephone: 987-654-3210

And you want to print every user's name and phone number. You can loop through the elements of a hash using
with_dict like this:

tasks:
  - name: Print phone records
    debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
    with_dict: users
Looping over Fileglobs

with_fileglob matches all files in a single directory, non-recursively, that match a pattern. It can be used like
this:

---
- hosts: all
  tasks:
    # first ensure our target directory exists
    - file: dest=/etc/fooapp state=directory

    # copy each file over that matches the given pattern
    - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
      with_fileglob:
        - /playbooks/files/fooapp/*
Note: When using a relative path with with_fileglob in a role, Ansible resolves the path relative to the
roles/<rolename>/files directory.
Looping over Parallel Sets of Data

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be
reaching for this one often.

Suppose you have the following variable data loaded in from somewhere:
---
alpha: [ 'a', 'b', 'c', 'd' ]
numbers: [ 1, 2, 3, 4 ]
And you want the set of '(a, 1)' and '(b, 2)' and so on. Use 'with_together' to get this:

tasks:
  - debug: msg="{{ item.0 }} and {{ item.1 }}"
    with_together:
      - alpha
      - numbers
Looping over Subelements

Suppose you want to do something like loop over a list of users, creating them, and allowing them to login by a certain
set of SSH keys.
How might that be accomplished? Let’s assume you had the following defined and loaded in via “vars_files” or maybe
a “group_vars/all” file:
---
users:
  - name: alice
    authorized:
      - /tmp/alice/onekey.pub
      - /tmp/alice/twokey.pub
  - name: bob
    authorized:
      - /tmp/bob/id_rsa.pub
It might happen like so:

- user: name={{ item.name }} state=present generate_ssh_key=yes
  with_items: users

- authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
  with_subelements:
     - users
     - authorized

Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given key inside of those records.
The authorized_key pattern is exactly where it comes up most.
Looping over Integer Sequences

with_sequence generates a sequence of items in ascending numerical order. You can specify a start, end, and an
optional step value.
Arguments should be specified in key=value pairs. If supplied, the ‘format’ is a printf style string.
Numerical values can be specified in decimal, hexadecimal (0x3f8) or octal (0600). Negative numbers are not sup-
ported. This works as follows:
---
- hosts: all
  tasks:
    # create groups
    - group: name=evens state=present
    - group: name=odds state=present

    # create some test users
    - user: name={{ item }} state=present groups=evens
      with_sequence: start=0 end=32 format=testuser%02x

    # create a series of directories with even numbers for some reason
    - file: dest=/var/stuff/{{ item }} state=directory
      with_sequence: start=4 end=16 stride=2
Random Choices
The 'random_choice' feature can be used to pick something at random. While it's not a load balancer (there are
modules for those), it can somewhat be used as a poor man's load balancer in a MacGyver-like situation:

- debug: msg={{ item }}
  with_random_choice:
     - "go through the door"
     - "drink from the goblet"
     - "press the red button"
     - "do nothing"
Do-Until Loops
Sometimes you would want to retry a task until a certain condition is met. Here's an example:

- action: shell /usr/bin/foo
  register: result
  until: result.stdout.find("all systems go") != -1
  retries: 5
  delay: 10
The above example runs the shell module repeatedly until the module's result has "all systems go" in its stdout, or until
the task has been retried 5 times with a delay of 10 seconds between attempts. The default value for "retries" is 3 and
for "delay" is 5.
The task returns the results of the final run. The results of individual retries can be viewed with the -vv option.
The registered variable will also contain a new key, "attempts", which holds the number of retries for the task.
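A quick sketch of reading that key afterwards (assuming the 'result' variable registered in the example above):

- debug: msg="succeeded after {{ result.attempts }} attempts"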
Finding First Matched Files

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be
reaching for this one often.
This isn’t exactly a loop, but it’s close. What if you want to use a reference to a file based on the first file found that
matches a given criteria, and some of the filenames are determined by variable names? Yes, you can do that as follows:
- name: INTERFACES | Create Ansible header for /etc/network/interfaces
  template: src={{ item }} dest=/etc/foo.conf
  with_first_found:
    - "{{ ansible_virtualization_type }}_foo.conf"
    - "default_foo.conf"
This tool also has a long form version that allows for configurable search paths. Here’s an example:
- name: some configuration template
  template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
  with_first_found:
    - files:
        - "{{ inventory_hostname }}/etc/file.cfg"
      paths:
        - ../../../templates.overwrites
        - ../../../templates
    - files:
        - etc/file.cfg
      paths:
        - templates
Iterating Over The Results of a Program Execution

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be
reaching for this one often.
Sometimes you might want to execute a program, and based on the output of that program, loop over the results
line by line. Ansible provides a neat way to do that, though you should remember that this is always executed on the
control machine, not the remote machine:
- name: Example of looping over a command result
  shell: /usr/bin/frobnicate {{ item }}
  with_lines: /usr/bin/frobnications_per_host --param {{ inventory_hostname }}
Ok, that was a bit arbitrary. In fact, if you’re doing something that is inventory related you might just want to write
a dynamic inventory source instead (see Dynamic Inventory), but this can be occasionally useful in quick-and-dirty
implementations.
Should you ever need to execute a command remotely, you would not use the above method. Instead do this:
- name: Example of looping over a REMOTE command result
  shell: /usr/bin/something
  register: command_result

- name: Do something with each result
  shell: /usr/bin/something_else --param {{ item }}
  with_items: command_result.stdout_lines
Looping Over A List With An Index

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be
reaching for this one often.
If you want to loop over an array and also get the numeric index of where you are in the array as you go, you can also
do that. It’s uncommonly used:
- name: indexed loop demo
  debug: msg="at array position {{ item.0 }} there is a value {{ item.1 }}"
  with_indexed_items: some_list
Flattening A List
Note: This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be
reaching for this one often.
In rare instances you might have several lists of lists, and you just want to iterate over every item in all of those lists.
Assume a really crazy hypothetical datastructure:
---
# file: roles/foo/vars/main.yml
packages_base:
  - [ 'foo-package', 'bar-package' ]
packages_apps:
  - [ ['one-package', 'two-package'] ]
  - [ ['red-package'], ['blue-package'] ]
As you can see the formatting of packages in these lists is all over the place. How can we install all of the packages in
both lists?:
- name: flattened loop demo
  yum: name={{ item }} state=installed
  with_flattened:
     - packages_base
     - packages_apps
That’s how!
Using register with a loop

When using register with a loop, the data structure placed in the variable will contain a results attribute that is
a list of all responses from the module.
Here is an example of using register with with_items:
- shell: echo "{{ item }}"
  with_items:
    - one
    - two
  register: echo
This differs from the data structure returned when using register without a loop:
{
    "changed": true,
    "msg": "All items completed",
    "results": [
        {
            "changed": true,
            "cmd": "echo \"one\" ",
            "delta": "0:00:00.003110",
            "end": "2013-12-19 12:00:05.187153",
            "invocation": {
                "module_args": "echo \"one\"",
                "module_name": "shell"
            },
            "item": "one",
            "rc": 0,
            "start": "2013-12-19 12:00:05.184043",
            "stderr": "",
            "stdout": "one"
        },
        {
            "changed": true,
            "cmd": "echo \"two\" ",
            "delta": "0:00:00.002920",
            "end": "2013-12-19 12:00:05.245502",
            "invocation": {
                "module_args": "echo \"two\"",
                "module_name": "shell"
            },
            "item": "two",
            "rc": 0,
            "start": "2013-12-19 12:00:05.242582",
            "stderr": "",
            "stdout": "two"
        }
    ]
}
Subsequent loops over the registered variable to inspect the results may look like:
- name: Fail if return code is not 0
  fail:
    msg: "The command ({{ item.cmd }}) did not have a 0 return code"
  when: item.rc != 0
  with_items: echo.results
Writing Your Own Iterators

While you ordinarily shouldn't have to, should you wish to write your own ways to loop over arbitrary data structures,
you can read Developing Plugins for some starter information. Each of the above features is implemented as a plugin
in Ansible, so there are many implementations to reference.
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.3.6 Best Practices

Here are some tips for making the most of Ansible playbooks.
You can find some example playbooks illustrating these best practices in our ansible-examples repository. (NOTE:
These may not use all of the features in the latest release, but are still an excellent reference!)
Topics
• Best Practices
– Content Organization
* Directory Layout
* How to Arrange Inventory, Stage vs Production
* Group And Host Variables
* Top Level Playbooks Are Separated By Role
* Task And Handler Organization For A Role
* What This Organization Enables (Examples)
* Deployment vs Configuration Organization
– Stage vs Production
– Rolling Updates
– Always Mention The State
– Group By Roles
– Operating System and Distribution Variance
– Bundling Ansible Modules With Playbooks
– Whitespace and Comments
– Always Name Tasks
– Keep It Simple
– Version Control
Content Organization
The following section shows one of many possible ways to organize playbook content. Your usage of Ansible should
fit your needs, however, not ours, so feel free to modify this approach and organize as you see fit.
(One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part
of the main playbooks page. See Playbook Roles and Include Statements).
Directory Layout
The top level of the directory would contain files and directories like so:
production              # inventory file for production servers
stage                   # inventory file for stage environment

group_vars/
   group1               # here we assign variables to particular groups
   group2               # ""
host_vars/
   hostname1            # if systems need specific variables, put them here
   hostname2            # ""

roles/
    common/             # this hierarchy represents a "role"
        tasks/          #
            main.yml    # <-- tasks file can include smaller files if warranted
        handlers/       #
            main.yml    # <-- handlers file
        templates/      # <-- files for use with the template resource
            ntp.conf.j2 # <------- templates end in .j2
        files/          #
            bar.txt     # <-- files for use with the copy resource
            foo.sh      # <-- script files for use with the script resource
        vars/           #
            main.yml    # <-- variables associated with this role
        meta/           #
            main.yml    # <-- role dependencies

    webtier/            # same kind of structure as "common" was above, done for the webtier role
    monitoring/         # ""
    fooapp/             # ""
In the example below, the production file contains the inventory of all of your production hosts. Of course you can
pull inventory from an external data source as well, but this is just a basic example.
It is suggested that you define groups based on purpose of the host (roles) and also geography or datacenter location
(if applicable):
# file: production
[atlanta-webservers]
www-atl-1.example.com
www-atl-2.example.com
[boston-webservers]
www-bos-1.example.com
www-bos-2.example.com
[atlanta-dbservers]
db-atl-1.example.com
db-atl-2.example.com
[boston-dbservers]
db-bos-1.example.com
Now, groups are nice for organization, but that’s not all groups are good for. You can also assign variables to them!
For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let’s set those now:
---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com
Variables aren’t just for geographic information either! Maybe the webservers have some configuration that doesn’t
make sense for the database servers:
---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900
If we had any default values, or values that were universally true, we would put them in a file called group_vars/all:
---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
We can define specific hardware variance in systems in a host_vars file, but avoid doing this unless you need to:
---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99
In site.yml, we include a playbook that defines our entire infrastructure. Note this is SUPER short, because it’s just
including some other playbooks. Remember, playbooks are nothing more than lists of plays:
---
# file: site.yml
- include: webservers.yml
- include: dbservers.yml
In a file like webservers.yml (also at the top level), we simply map the configuration of the webservers group to the
roles performed by the webservers group. Also notice this is incredibly short. For example:
---
# file: webservers.yml
- hosts: webservers
roles:
- common
- webtier
Below is an example tasks file that explains how a role works. Our common role here just sets up NTP, but it could do
more if we wanted:

---
# file: roles/common/tasks/main.yml

- name: be sure ntp is installed
  yum: pkg=ntp state=installed
  tags: ntp

- name: be sure ntp is configured
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify:
    - restart ntpd
  tags: ntp

- name: be sure ntpd is running and enabled
  service: name=ntpd state=running enabled=yes
  tags: ntp
Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at
the end of each play:
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
service: name=ntpd state=restarted
This layout makes it easy to target slices of the infrastructure; for example, 'ansible-playbook -i production
webservers.yml --limit boston' reconfigures only the Boston webservers. What about just the first 10, and then
the next 10?:
ansible-playbook -i production webservers.yml --limit boston[0-10]
ansible-playbook -i production webservers.yml --limit boston[10-20]
And there are some useful commands to know (at least in 1.1 and higher):
# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks
The above setup models a typical configuration topology. When doing multi-tier deployments, there are going to be
some additional playbooks that hop between tiers to roll out an application. In this case, ‘site.yml’ may be augmented
by playbooks like ‘deploy_exampledotcom.yml’ but the general concepts can still apply.
Consider “playbooks” as a sports metaphor – you don’t have to just have one set of plays to use against your infras-
tructure all the time – you can have situational plays that you use at different times and for different purposes.
Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and just keep the
OS configuration in separate playbooks from the app deployment.
Stage vs Production
As also mentioned above, a good way to keep your stage (or testing) and production environments separate is to use a
separate inventory file for stage and production. This way you pick with -i what you are targeting. Keeping them all
in one file can lead to surprises!
Testing things in a stage environment before trying in production is always a great idea. Your environments need not
be the same size and you can use group variables to control the differences between those environments.
Rolling Updates
Understand the ‘serial’ keyword. If updating a webserver farm you really want to use it to control how many machines
you are updating at once in the batch.
See Delegation, Rolling Updates, and Local Actions.
Always Mention The State

The 'state' parameter is optional to a lot of modules. Whether 'state=present' or 'state=absent', it's always best to
leave that parameter in your playbooks to make the intent clear, especially as some modules support additional states.
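A small sketch of what that looks like in practice:

# 'present' is the default, but stating it makes the intent unambiguous
- name: be sure ntp is installed
  yum: name=ntp state=present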
Group By Roles
A system can be in multiple groups. See Inventory and Patterns. Having groups named after things like webservers
and dbservers is repeated in the examples because it’s a very powerful concept.
This allows playbooks to target machines based on role, as well as to assign role specific variables using the group
variable system.
See Playbook Roles and Include Statements.
Operating System and Distribution Variance

When dealing with a parameter that is different between two different operating systems, the best way to handle this
is by using the group_by module.
This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file:
---
- hosts: all
  tasks:
    - group_by: key={{ ansible_distribution }}

- hosts: CentOS
  gather_facts: False
  tasks:
    - # tasks that only happen on CentOS go here
If group-specific settings are needed, this can also be done. For example:
---
# file: group_vars/all
asdf: 10
---
# file: group_vars/CentOS
asdf: 42
In the above example, CentOS machines get the value of ‘42’ for asdf, but other machines get ‘10’.
Whitespace and Comments

Generous use of whitespace to break things up, and use of comments (which start with '#'), is encouraged.
Always Name Tasks

It is possible to leave off the 'name' for a given task, though it is recommended to provide a description of why
something is being done instead. This name is shown when the playbook is run.
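For instance, a named task reads much better in the run output than a bare module call:

# appears in output as: TASK: [ensure ntpd is running]
- name: ensure ntpd is running
  service: name=ntpd state=started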
Keep It Simple
When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all
at once. Use what works for you. For example, you will probably not need vars, vars_files, vars_prompt
and --extra-vars all at once, while also using an external inventory file.
Version Control
Use version control. Keep your playbooks and inventory file in git (or another version control system), and commit
when you make changes to them. This way you have an audit trail describing when and why you changed the rules
that are automating your infrastructure.
See also:
YAML Syntax Learn about YAML syntax
Playbooks Review the basic playbook features
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
Patterns Learn about how to select hosts
Github examples directory Complete playbook files from the github project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
Here are some playbook features that not everyone may need to learn, but can be quite useful for particular applications.
Browsing these topics is recommended as you may find some useful tips here, but feel free to learn the basics of Ansible
first and adopt these only if they seem relevant or useful to your environment.
1.4.1 Accelerated Mode

Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH
pipelining" and should read the pipelining section of the documentation.
For users on 1.5 and later, accelerate mode only makes sense if you (A) are managing from an Enterprise Linux 6 or
earlier host and are still on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs.
If you can use pipelining, Ansible will reduce the amount of files transferred over the wire, making everything much
more efficient, and performance will be on par with accelerate mode in nearly all cases, possibly excluding very large
file transfers. Because fewer moving parts are involved, pipelining is better than accelerate mode for nearly all use cases.
Accelerate mode remains around in support of EL6 control machines and other constrained environments.
While OpenSSH using the ControlPersist feature is quite fast and scalable, there is a certain small amount of overhead
involved in using SSH connections. While many people will not encounter a need, if you are running on a platform
that doesn’t have ControlPersist support (such as an EL6 control machine), you’ll probably be even more interested in
tuning options.
Accelerate mode is there to help connections work faster, but still uses SSH for initial secure key exchange. There is
no additional public key infrastructure to manage, and this does not require things like NTP or even DNS.
Accelerated mode can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than
paramiko.
Accelerated mode works by launching a temporary daemon over SSH. Once the daemon is running, Ansible will
connect directly to it via a socket connection. Ansible secures this communication by using a temporary AES key that
is exchanged during the SSH connection (this key is different for every host, and is also regenerated periodically).
By default, Ansible will use port 5099 for the accelerated connection, though this is configurable. Once running, the
daemon will accept connections for 30 minutes, after which time it will terminate itself and need to be restarted over
SSH.
Accelerated mode offers several improvements over the (deprecated) original fireball mode from which it was based:
• No bootstrapping is required; only a single line needs to be added to each play you wish to run in accelerated
mode.
• Support for sudo commands (see below for more details and caveats) is available.
• There are fewer requirements. ZeroMQ is no longer required, nor are there any special packages beyond
python-keyczar.
• Python 2.5 or higher is required.
In order to use accelerated mode, simply add accelerate: true to your play:
---
- hosts: all
  accelerate: true

  tasks:
    - name: some task
      command: echo {{ item }}
      with_items:
        - foo
        - bar
        - baz
If you wish to change the port Ansible will use for the accelerated connection, just add the accelerate_port option:
---
- hosts: all
  accelerate: true
  # default port is 5099
  accelerate_port: 10000
The accelerate_port option can also be specified in the environment variable ACCELERATE_PORT, or in your
ansible.cfg configuration:
[accelerate]
accelerate_port = 5099
As noted above, accelerated mode also supports running tasks via sudo, however there are two important caveats:
• You must remove requiretty from your sudoers options.
• Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo’ed
commands.
As of Ansible version 1.6, you can also allow the use of multiple keys for connections from multiple Ansible manage-
ment nodes. To do so, add the following option to your ansible.cfg configuration:
accelerate_multi_key = yes
When enabled, the daemon will open a UNIX socket file (by default $ANSIBLE_REMOTE_TEMP/.ansible-
accelerate/.local.socket). New connections over SSH can use this socket file to upload new keys to the daemon.
1.4.2 Asynchronous Actions and Polling

By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may
not always be desirable, or you may be running operations that take longer than the SSH timeout.
The easiest way to avoid blocking is to kick the tasks off all at once and then poll until they are done.
You will also want to use asynchronous mode on very long running operations that might be subject to timeout.
To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status.
The default poll value is 10 seconds if you do not specify a value for poll:
---
- hosts: all
  remote_user: root

  tasks:
    - name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
      command: /bin/sleep 15
      async: 45
      poll: 5
Note: There is no default for the async time limit. If you leave off the ‘async’ keyword, the task runs synchronously,
which is Ansible’s default.
Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value
of 0:
---
- hosts: all
  remote_user: root

  tasks:
    - name: simulate long running op, allow to run for 45 sec, fire and forget
      command: /bin/sleep 15
      async: 45
      poll: 0
Note: You shouldn’t “fire and forget” with operations that require exclusive locks, such as yum transactions, if you
expect to run other commands later in the playbook against those same resources.
Note: Using a higher value for --forks will result in kicking off asynchronous tasks even faster. This also increases
the efficiency of polling.
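If you want to fire and forget but check on the task later, a sketch along these lines works; the command and the
3600-second limit are illustrative, and async_status is the module that polls a previously started job:

- name: run a long operation, don't wait for it
  command: /usr/bin/long_running_operation --do-stuff
  async: 3600
  poll: 0
  register: job_sleeper

- name: check on the fire-and-forget task later
  async_status: jid={{ job_sleeper.ansible_job_id }}
  register: job_result
  until: job_result.finished
  retries: 30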
See also:
Playbooks An introduction to playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.3 Check Mode ("Dry Run")

Topics
• Check Mode (“Dry Run”)
– Running a task in check mode
– Showing Differences with --diff
When ansible-playbook is executed with --check it will not make any changes on remote systems. Instead, any
module instrumented to support 'check mode' (which includes most of the primary core modules, though it is not
required that all modules do this) will report what changes they would have made rather than making them. Modules
that do not support check mode will take no action and will not report what changes they might have made.
Check mode is just a simulation, and if you have steps that use conditionals that depend on the results of prior
commands, it may be less useful for you. However it is great for one-node-at-a-time basic configuration management
use cases.
Example:

ansible-playbook foo.yml --check

Running a task in check mode

Sometimes you may want a single task to run even in check mode; to achieve this, add the always_run clause to the
task, as sketched below. As a reminder, a task with a when clause that evaluates to false will still be skipped even if it
has an always_run clause that evaluates to true.
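A sketch of such a task (the command itself is illustrative):

tasks:
  - name: this task runs even in check mode
    command: /bin/something/to/run
    always_run: yes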
1.4.4 Delegation, Rolling Updates, and Local Actions

Topics
• Delegation, Rolling Updates, and Local Actions
– Rolling Update Batch Size
– Maximum Failure Percentage
– Delegation
– Local Playbooks
Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf
of another, or doing local steps with reference to some remote hosts.
This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling
updates, where you might be talking with load balancers or monitoring systems.
Additional features allow for tuning the orders in which things complete, and assigning a batch window size for how
many machines to process at once during a rolling update.
This section covers all of these features. For examples of these items in use, please see the ansible-examples repository.
There are quite a few examples of zero-downtime update procedures for different kinds of applications.
You should also consult the About Modules section; modules like 'ec2_elb', 'nagios', 'bigip_pool', and
'netscaler' dovetail neatly with the concepts mentioned here.
You'll also want to read up on Playbook Roles and Include Statements, as the 'pre_tasks' and 'post_tasks' concepts are
the places where you would typically call these modules.
Rolling Update Batch Size

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use
case, you can define how many hosts Ansible should manage at a single time by using the 'serial' keyword:

- name: test play
  hosts: webservers
  serial: 3

In the above example, if we had 100 hosts, 3 hosts in the group 'webservers' would complete the play completely
before moving on to the next 3 hosts.
Maximum Failure Percentage

By default, Ansible will continue executing actions as long as there are hosts in the group that have not yet failed.
New in version 1.3, if you want the play to abort once a certain threshold of failures has been reached, you can set a
maximum failure percentage on a play:

- hosts: webservers
  max_fail_percentage: 30
  serial: 10

In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.
Note: The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task
to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.
Delegation
If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. This
is ideal for placing nodes in a load balanced pool, or removing them:

---
- hosts: webservers
  serial: 5

  tasks:
  - name: take out of load balancer pool
    command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1

  - name: actual steps would go here
    yum: name=acme-web-stack state=latest

  - name: add back to load balancer pool
    command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
    delegate_to: 127.0.0.1
These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that
you can use on a per-task basis: ‘local_action’. Here is the same playbook as above, but using the shorthand syntax
for delegating to 127.0.0.1:
---
# ...

  tasks:
  - name: take out of load balancer pool
    local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

  # ...

  - name: add back to load balancer pool
    local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}
A common pattern is to use a local action to call ‘rsync’ to recursively copy files to the managed servers. Here is an
example:
---
# ...

  tasks:
  - name: recursively copy files from management server to target
    local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will
need to ask for a passphrase.
Local Playbooks
It may be useful to use a playbook locally, rather than by connecting over SSH. This can be useful for assuring the
configuration of a system by putting a playbook on a crontab. This may also be used to run a playbook inside an OS
installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so:
ansible-playbook playbook.yml --connection=local
Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the
default remote connection type:
- hosts: 127.0.0.1
  connection: local
See also:
Playbooks An introduction to playbooks
1.4.5 Setting the Environment (and Working With Proxies)

It is quite possible that you may need to get package updates through a proxy, or even get some package updates
through a proxy and access other packages not through a proxy. Ansible makes it easy for you to configure your
environment by using the 'environment' keyword. Here is an example:

- hosts: all
  remote_user: root

  tasks:
    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080
The environment can also be stored in a variable, and accessed like so:

- hosts: all
  remote_user: root

  # here we make a variable named "proxy_env" that is a dictionary
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080

  tasks:
    - apt: name=cobbler state=installed
      environment: proxy_env
While just proxy settings were shown above, any number of settings can be supplied. The most logical place to define
an environment hash might be a group_vars file, like so:
---
# file: group_vars/boston
ntp_server: ntp.bos.example.com
backup: bak.bos.example.com
proxy_env:
http_proxy: http://proxy.bos.example.com:8080
https_proxy: http://proxy.bos.example.com:8080
See also:
Playbooks An introduction to playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.6 Error Handling In Playbooks

Topics
• Error Handling In Playbooks
– Ignoring Failed Commands
– Controlling What Defines Failure
– Overriding The Changed Result
Ansible normally has defaults that make sure to check the return codes of commands and modules and it fails fast –
forcing an error to be dealt with unless you decide otherwise.
Sometimes a command that returns a code other than 0 isn't an error. Sometimes a command might not always need
to report that it 'changed' the remote system. This section describes how to change the default behavior of Ansible
for certain tasks so output and error handling behavior is as desired.
Ignoring Failed Commands

Generally playbooks will stop executing any more steps on a host that has a failure. Sometimes, though, you want to
continue on. To do so, write a task that looks like this:

- name: this will not be counted as a failure
  command: /bin/false
  ignore_errors: yes

Note that the above system only governs the failure of the particular task, so if you have an undefined variable used, it
will still raise an error that users will need to address.

Controlling What Defines Failure

Suppose the error code of a command is meaningless and what really matters is the output of the command. You can
register the result and inspect its output:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  ignore_errors: True

- name: fail the play if the previous command did not succeed
  fail: msg="the command failed"
  when: "'FAILED' in command_result.stderr"

Overriding The Changed Result

When a shell, command, or other module runs, it will typically report "changed" status based on whether it thinks it
affected machine state. Sometimes you will know, based on the return code or output, that it did not make any changes,
and will wish to override the "changed" result so that it does not appear in report output or cause handlers to fire:

tasks:
  - shell: /usr/bin/billybass --mode="take me to the river"
    register: bass_result
    changed_when: "bass_result.rc != 2"

  # this will never report 'changed' status
  - shell: wall 'beep'
    changed_when: False
See also:
Playbooks An introduction to playbooks
Best Practices Best practices in playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.7 Using Lookups

Lookup plugins allow access of data in Ansible from outside sources. These plugins are evaluated on the Ansible
control machine, and can include reading the filesystem but also contacting external datastores and services. These
values are then made available using the standard templating system in Ansible, and are typically used to load variables
or templates with information from those systems.
Note: This is considered an advanced feature, and many users will probably not rely on these features.
Note: Lookups occur on the local computer, not on the remote computer.
Topics
• Using Lookups
– Intro to Lookups: Getting File Contents
– The Password Lookup
– More Lookups
Intro to Lookups: Getting File Contents

The file lookup is the most basic lookup type. Contents can be read off the filesystem as follows:

---
- hosts: all
  vars:
    contents: "{{ lookup('file', '/etc/foo.txt') }}"

  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"
The Password Lookup

Note: A great alternative to the password lookup plugin, if you don't need to generate random passwords on a per-
host basis, would be to use Vault. Read the documentation there and consider using it first; it will be more desirable
for most applications.
password generates a random plaintext password and stores it in a file at a given filepath.
(Docs about crypted save modes are pending)
If the file exists previously, it will retrieve its contents, behaving just like with_file. Usage of variables like "{{
inventory_hostname }}" in the filepath can be used to set up random passwords per host (which simplifies password
management in 'host_vars' variables).
Generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9 and punctuation
(”. , : - _”). The default length of a generated password is 20 characters. This length can be changed by passing an
extra parameter:
---
- hosts: all
  tasks:
    # create a mysql user with a random password (15 characters long):
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile length=15') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

(...)
Note: If the file already exists, no data will be written to it. If the file has contents, those contents will be read in as
the password. Empty files cause the password to return as an empty string.
Starting in version 1.4, password accepts a "chars" parameter to allow defining a custom character set in the generated
passwords. It accepts a comma-separated list of names that are either string module attributes (ascii_letters, digits,
etc.) or are used literally:
---
- hosts: all
  tasks:
    # create a mysql user with a random password using only ascii letters:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii_letters') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    # create a mysql user with a random password using many different char sets:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii_letters,digits,hexdigits,punctuation') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

(...)
To enter a comma, use two commas ',,' somewhere – preferably at the end. Quotes and double quotes are not supported.
More Lookups

Note: This feature is very infrequently used in Ansible. You may wish to skip this section.

Various lookup plugins allow additional ways to iterate over data; they can also pull in data from external sources
such as shell commands or environment variables. For instance:

---
- hosts: all
  tasks:
    - debug: msg="{{ lookup('env', 'HOME') }} is an environment variable"

    - debug: msg="{{ item }} is a line from the result of this command"
      with_lines:
        - cat /etc/motd
As an alternative you can also assign lookup plugins to variables or use them elsewhere. These macros are evaluated
each time they are used in a task (or template):

vars:
  motd_value: "{{ lookup('file', '/etc/motd') }}"

tasks:
  - debug: msg="motd value is {{ motd_value }}"
See also:
Playbooks An introduction to playbooks
1.4.8 Prompts
When running a playbook, you may wish to prompt the user for certain input, and can do so with the ‘vars_prompt’
section.
A common use for this might be for asking for sensitive data that you do not want to record.
This has uses beyond security, for instance, you may use the same playbook for all software releases and would prompt
for a particular release version in a push-script.
Here is a most basic example:
---
- hosts: all
  remote_user: root

  vars:
    from: "camelot"

  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"
    favcolor: "what is your favorite color?"
If you have a variable that changes infrequently, it might make sense to provide a default value that can be overridden.
This can be accomplished using the default argument:
vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"
An alternative form of vars_prompt allows for hiding input from the user, and may later support some other options,
but otherwise works equivalently:
vars_prompt:
  - name: "some_password"
    prompt: "Enter password"
    private: yes

  - name: "release_version"
    prompt: "Product release version"
    private: no
If Passlib is installed, vars_prompt can also crypt the entered value so you can use it, for instance, with the user module
to define a password:
vars_prompt:
  - name: "my_password2"
    prompt: "Enter password2"
    private: yes
    encrypt: "md5_crypt"
    confirm: yes
    salt_size: 7
1.4.9 Tags
If you have a large playbook it may become useful to be able to run a specific part of the configuration without running
the whole playbook.
Both plays and tasks support a “tags:” attribute for this reason.
Example:

tasks:
  - yum: name={{ item }} state=installed
    with_items:
      - httpd
      - memcached
    tags:
      - packages

  - template: src=templates/src.j2 dest=/etc/foo.conf
    tags:
      - configuration
If you wanted to just run the “configuration” and “packages” part of a very long playbook, you could do this:
ansible-playbook example.yml --tags "configuration,packages"
On the other hand, if you want to run a playbook without certain tasks, you could do this:

ansible-playbook example.yml --skip-tags "notification"

You may also apply tags to roles:

roles:
  - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }

And to include statements:

- include: foo.yml tags=web,foo

Both of these have the function of tagging every single task inside the include statement.
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.10 Vault
Topics
• Vault
– What Can Be Encrypted With Vault
– Creating Encrypted Files
– Editing Encrypted Files
– Rekeying Encrypted Files
– Encrypting Unencrypted Files
– Decrypting Encrypted Files
– Running a Playbook With Vault
New in Ansible 1.5, “Vault” is a feature of ansible that allows keeping encrypted data in source control.
To enable this feature, a command line tool, ansible-vault, is used to edit files, and a command line flag
--ask-vault-pass or --vault-password-file is used.
The vault feature can encrypt any structured data file used by Ansible. This can include “group_vars/” or “host_vars/”
inventory variables, variables loaded by “include_vars” or “vars_files”, or variable files passed on the ansible-playbook
command line with “-e @file.yml” or “-e @file.json”. Role variables and defaults are also included!
Because Ansible tasks, handlers, and so on are also data, these can also be encrypted with vault. If you'd like to not
betray even what variables you are using, you can go as far as to keep an individual task file entirely encrypted. However,
that might be a little much and could annoy your coworkers :)
Creating Encrypted Files

To create a new encrypted data file, run the following command:

ansible-vault create foo.yml

First you will be prompted for a password. The password used with vault currently must be the same for all files you
wish to use together at the same time.
After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim.
Once you are done with the editor session, the file will be saved as encrypted data.
The default cipher is AES (which is shared-secret based).
Editing Encrypted Files

To edit an encrypted file in place, use the ansible-vault edit command. This command will decrypt the file to a
temporary file and allow you to edit the file, saving it back when done and removing the temporary file:
ansible-vault edit foo.yml
Rekeying Encrypted Files

Should you wish to change your password on a vault-encrypted file or files, you can do so with the rekey command:
ansible-vault rekey foo.yml bar.yml baz.yml
This command can rekey multiple data files at once and will ask for the original password and also the new password.
If you have existing files that you wish to encrypt, use the ansible-vault encrypt command. This command can operate
on multiple files at once:
ansible-vault encrypt foo.yml bar.yml baz.yml
If you have existing files that you no longer want to keep encrypted, you can permanently decrypt them by running the
ansible-vault decrypt command. This command will save them unencrypted to the disk, so be sure you do not want
ansible-vault edit instead:
ansible-vault decrypt foo.yml bar.yml baz.yml
To run a playbook that contains vault-encrypted data files, you must pass one of two flags. To specify the vault-
password interactively:
ansible-playbook site.yml --ask-vault-pass
This prompt will then be used to decrypt (in memory only) any vault encrypted files that are accessed. Currently this
requires that all files be encrypted with the same password.
Alternatively, passwords can be specified with a file. If this is done, be careful to ensure permissions on the file are
such that no one else can access your key, and do not add your key to source control:
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt
1.5.1 Introduction
Ansible ships with a number of modules (called the ‘module library’) that can be executed directly on remote hosts or
through Playbooks.
Users can also write their own modules. These modules can control system resources, like services, packages, or files
(anything really), or handle executing system commands.
Let’s review how we execute three different modules from the command line:
ansible webservers -m service -a "name=httpd state=started"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"
Each module supports taking arguments. Nearly all modules take key=value arguments, space delimited. Some
modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
From playbooks, Ansible modules are executed in a very similar way:
- name: reboot the servers
action: command /sbin/reboot -t now
All modules technically return JSON format data, though if you are using the command line or playbooks, you don’t
really need to know much about that. If you’re writing your own module, you care, and this means you do not have to
write modules in any particular language – you get to choose.
Modules are idempotent, meaning they will seek to avoid changes to the system unless a change needs to be made.
When using Ansible playbooks, these modules can trigger ‘change events’ in the form of notifying ‘handlers’ to run
additional tasks.
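For example, a task that reports a change can notify a handler by name, and the handler runs at the end of the play; a minimal sketch (the template source, destination, and service name here are placeholders):
tasks:
  - name: write the apache config file
    template: src=httpd.j2 dest=/etc/httpd.conf
    notify:
      - restart apache
handlers:
  - name: restart apache
    service: name=httpd state=restarted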
Documentation for each module can be accessed from the command line with the ansible-doc tool:
ansible-doc yum
See also:
Introduction To Ad-Hoc Commands Examples of using modules in /usr/bin/ansible
Playbooks Examples of using modules with /usr/bin/ansible-playbook
Developing Modules How to write your own modules
Python API Examples of using modules with the Python API
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# To use accelerate mode, simply add "accelerate: true" to your play. The initial
# key exchange and starting up of the daemon will occur over SSH, but all commands and
# subsequent actions will be conducted over the raw socket connection using AES encryption
- hosts: devservers
  accelerate: true
  tasks:
    - command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using accelerated mode.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The “acl” module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries
are installed.
add_host - add a host (and alternatively a group) to the ansible-playbook in-memory inventory
• Synopsis
• Options
• Examples
Synopsis
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables
so you can define the new hosts more fully.
Options
Examples
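A minimal sketch (the host variable and group name are placeholders):
- name: add a freshly provisioned host to the in-memory inventory
  add_host: name={{ new_host_ip }} groups=just_created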
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- airbrake_deployment: token=AAAAAA
                       environment='staging'
                       user='ansible'
                       revision=4.2
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the repository cache and update package "nginx" to latest version using default release squeeze-backports
- apt: name=nginx state=latest default_release=squeeze-backports update_cache=yes
# Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600
Note: Three of the upgrade modes (full, safe and its alias yes) require aptitude, otherwise apt-get
suffices.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: As a sanity check, the downloaded key id must match the one specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo=’ppa:nginx/stable’
Note: This module works on Debian and Ubuntu and requires python-apt.
Note: This module supports Debian Squeeze (version 6) as well as its successors.
Note: This module treats Debian and Ubuntu distributions separately, so PPAs can be installed only on Ubuntu
machines.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: enable interface Ethernet 1
action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create switchport ethernet1 access port
action: arista_l2interface interface_id=Ethernet1 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create lag interface
action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create vlan 999
action: arista_vlan vlan_id=999 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Assembles a configuration file from fragments. Often a particular program takes a single configuration file and does
not support a conf.d style structure where it is easy to build up the configuration from multiple sources. assemble
will take a directory of files that can be local or have already been transferred to the system, and concatenate them
together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea fragments.
Options
Examples
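A minimal sketch (both paths are placeholders); the fragments in src are concatenated, in string sorting order, into dest:
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf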
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- assert:
    that:
      - "'foo' in some_command_result.stdout"
      - "number_of_the_counting == 3"
• Synopsis
• Options
Synopsis
Options
• Synopsis
• Options
• Examples
Synopsis
Options
Note: Requires the at command on the remote host.
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
# Using with_file
- name: Set up authorized_keys for the deploy user
  authorized_key: user=deploy
                  key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john
# Using key_options:
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  key_options='no-port-forwarding,host="10.0.1.1"'
• Synopsis
• Options
• Examples
Synopsis
Creates or terminates azure instances. When created, it can optionally wait for the instance to be 'running'. This module
has a dependency on python-azure >= 0.7.1.
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
    - name: Collect BIG-IP facts
      local_action: >
        bigip_facts
        server=lb.mydomain.com
        user=admin
        password=mysecret
        include=interface,vlan
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
    - name: Add node
      local_action: >
        bigip_node
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        name="{{ ansible_default_ipv4["address"] }}"
# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play's host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: localhost
  tasks:
    - name: Create pool
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        name=matthite-pool
        partition=matthite
        lb_method=least_connection_member
        slow_ramp_time=120
- hosts: bigip-test
  tasks:
    - name: Add pool member
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        name=matthite-pool
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        port=80
- hosts: localhost
  tasks:
    - name: Delete pool
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=absent
        name=matthite-pool
        partition=matthite
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
    - name: Add pool member
      local_action: >
        bigip_pool_member
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        pool=matthite-pool
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        port=80
        description="web server"
        connection_limit=100
        rate_limit=50
        ratio=2
Author [email protected]
• Synopsis
• Options
• Examples
Synopsis
Options
Note: bprobe is required to send data, but not to register a meter.
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The capabilities system will automatically transform operators and flags into the effective set, so (for example,
cap_foo=ep will probably become cap_foo+ep). This module does not attempt to determine the final operator and
flags to compare, so you will want to ensure that your capabilities argument matches the final capabilities.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The command module takes the command name followed by a list of space-delimited arguments. The given command
will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and
operations like "<", ">", "|", and "&" will not work (use the shell module if you need these features).
Options
Examples
# You can also use the 'args' form to provide the options. This command
# will change the working directory to somedir/ and will only run when
# /path/to/database doesn't exist.
- command: /usr/bin/make_database.sh arg1 arg2
  args:
    chdir: somedir/
    creates: /path/to/database
Note: If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell
module instead. The command module is much more secure as it’s not affected by the user’s environment.
Note: creates, removes, and chdir can be specified after the command. For instance, if you only want to run
a command if a certain file does not exist, use this.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Downloads and installs all the libs and dependencies outlined in the /path/to/project/composer.lock
- composer: working_dir=/path/to/project
Note: Default options that are always appended in each execution are –no-ansi, –no-progress, and –no-interaction
• Synopsis
• Options
• Examples
Synopsis
The copy module copies a file on the local box to remote locations.
Options
Examples
# Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
- copy: src=/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes
# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Note: The "copy" module's recursive copy facility does not scale to lots (>hundreds) of files. As an alternative, see the
synchronize module, which is a wrapper around rsync.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that cpanm (http://search.cpan.org/dist/App-cpanminus/bin/cpanm) must be installed on the remote
host.
• Synopsis
• Options
• Examples
Synopsis
Use this module to manage crontab entries. This module allows you to create, update, or delete named crontab
entries. The module includes one line with the description of the crontab entry "#Ansible: <name>"
corresponding to the "name" passed to the module, which is used by future ansible/module calls to find/check the
state.
Options
Examples
# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent
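For comparison, a minimal sketch that ensures a named entry exists (the schedule and job command are placeholders):
- cron: name="check dirs" minute="0" hour="5,2" job="ls -alh > /dev/null"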
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# By specifying a package, you can register/return the list of questions and current values
- debconf: name='tzdata'
Note: A number of questions have to be answered (depending on the package). Use ‘debconf-show <package>’ on
any Debian or derivative with the package installed to see questions/settings available.
• Synopsis
• Options
• Examples
Synopsis
This module prints statements during execution and can be useful for debugging variables or expressions without
necessarily halting the playbook. Useful for debugging together with the ‘when:’ directive.
Options
Examples
# Example that prints the loopback address and gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"
- shell: /usr/bin/uptime
register: result
- debug: var=result
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean: >
    state=present
    command=ssh
    name=my_ssh_key
    ssh_pub_key='ssh-rsa AAAA...'
    client_id=XXX
    api_key=XXX
- digital_ocean: >
    state=present
    command=droplet
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500
  register: my_droplet
- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"
- digital_ocean: >
    state=present
    command=droplet
    id=123
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500
- digital_ocean: >
    state=present
    ssh_key_ids=id1,id2
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean_domain: >
    state=present
    name=my.digitalocean.domain
    ip=127.0.0.1
- digital_ocean: >
    state=present
    name=test_droplet
    size_id=1
    region_id=2
    image_id=3
  register: test_droplet
- digital_ocean_domain: >
    state=present
    name={{ test_droplet.droplet.name }}.my.domain
    ip={{ test_droplet.droplet.ip_address }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean_sshkey: >
    state=present
    name=my_ssh_key
    ssh_pub_key='ssh-rsa AAAA...'
    client_id=XXX
    api_key=XXX
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path=django_dir apps=main.SmokeTest
Note: virtualenv (http://www.virtualenv.org) must be installed on the remote host if the virtualenv parameter is
specified.
Note: This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already
exist at the given location.
Note: This module assumes English error messages for the ‘createcachetable’ command to detect table existence,
unfortunately.
Note: To be able to use the migrate command, you must have south installed and added as an app in your settings
Note: To be able to use the collectstatic command, you must have enabled staticfiles in your settings
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# delete a domain
- local_action: dnsimple domain=my.com state=absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone
set. Be sure you are within a few seconds of actual time by using NTP.
Note: This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be
registered and used in your playbooks.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to port 8080
on the host:
- hosts: web
  sudo: yes
  tasks:
    - name: run tomcat servers
      docker: image=centos command="service tomcat6 start" ports=8080
The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was
mapped to using docker_containers:
- hosts: web
  sudo: yes
  tasks:
    - name: run tomcat servers
      docker: image=centos command="service tomcat6 start" ports=8080 count=5
    - name: Display IP address and port mappings for containers
      debug: msg={{inventory_hostname}}:{{item['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}
      with_items: docker_containers
Just as in the previous example, but iterates over the list of docker containers with a sequence:
- hosts: web
  sudo: yes
  vars:
    start_containers_count: 5
  tasks:
    - name: run tomcat servers
      docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
    - name: Display IP address and port mappings for containers
      debug: msg="{{inventory_hostname}}:{{docker_containers[item|int]['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}"
      with_sequence: start=0 end={{start_containers_count - 1}}
Stop, remove all of the running tomcat containers and list the exit code from the stopped containers:
- hosts: web
  sudo: yes
  tasks:
    - name: stop tomcat servers
      docker: image=centos command="service tomcat6 start" state=absent
    - name: Display return codes from stopped containers
      debug: msg="Returned {{inventory_hostname}}:{{item}}"
      with_items: docker_containers
- hosts: web
  sudo: yes
  tasks:
    - name: run tomcat server
      docker: image=centos name=tomcat command="service tomcat6 start" ports=8080
- hosts: web
  sudo: yes
  tasks:
    - name: run tomcat servers
      docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
      with_items:
        - crookshank
        - snowbell
        - heathcliff
        - felix
        - sylvester
- hosts: web
  sudo: yes
  tasks:
    - name: run tomcat servers
      docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
      with_sequence: start=1 end=5 format=tomcat_%d.example.com
- hosts: web
  sudo: yes
  tasks:
    - name: ensure redis container is running
      docker: image=crosbymichael/redis name=redis
- hosts: web
  sudo: yes
  tasks:
    - docker:
        image: namespace/image_name
        links:
          - postgresql:db
          - redis:redis
Create containers with options specified as strings and lists as comma-separated strings:
- hosts: web
  sudo: yes
  tasks:
    - docker: image=namespace/image_name links=postgresql:db,redis:redis
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Build a docker image if required. The path should contain a Dockerfile to build the image:
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=present
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=build
- hosts: web
sudo: yes
tasks:
- name: remove image
docker_image: name="my/app" state=absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that the easy_install module can only install Python libraries. Thus this module is not
able to remove libraries. It is generally recommended to use the pip module which you can first install using
easy_install.
Note: Also note that virtualenv must be installed on the remote host if the virtualenv parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Creates or terminates ec2 instances. When created, it can optionally wait for the instance to be 'running'. This module
has a dependency on python-boto >= 2.5.
Options
Examples
# Single instance with additional IOPS volume from snapshot and volume delete on termination
local_action:
  module: ec2
  key_name: mykey
  group: webserver
  instance_type: m1.large
  image: ami-6e649707
  wait: yes
  wait_timeout: 500
  volumes:
    - device_name: /dev/sdb
      snapshot: snap-abcdef12
      device_type: io1
      iops: 1000
      volume_size: 100
      delete_on_termination: true
  monitoring: yes
# VPC example
- local_action:
    module: ec2
    key_name: mykey
    group_id: sg-1dc53f72
    instance_type: m1.small
    image: ami-6e649707
    wait: yes
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
# Launch instances from a play and run some tasks on the new instances
- name: Create a sandbox instance
  hosts: localhost
  gather_facts: False
  vars:
    key_name: my_keypair
    instance_type: m1.small
    security_group: my_securitygroup
    image: my_ami_id
    region: us-east-1
  tasks:
    - name: Launch instance
      local_action: ec2 key_name={{ key_name }} group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }}
      register: ec2
    - name: Add new instance to host group
      local_action: add_host hostname={{ item.public_ip }} groupname=launched
      with_items: ec2.instances
    - name: Wait for SSH to come up
      local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances
#
# Enforce that 5 instances with a tag "foo" are running
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
foo: bar
exact_count: 5
count_tag: foo
#
# Enforce that 5 running instances named "database" with a "dbtype" of "postgres" exist
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
Name: database
dbtype: postgres
exact_count: 5
count_tag:
Name: database
dbtype: postgres
#
# count_tag complex argument examples
#
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Deregister/Delete AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: True
    state: absent
# Deregister AMI
- local_action:
    module: ec2_ami
    aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
    aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    region: xxxxxx
    image_id: ${instance.image_id}
    delete_snapshot: False
    state: absent
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_asg:
    name: special
    load_balancers: 'lb1,lb2'
    availability_zones: 'eu-west-1a,eu-west-1b'
    launch_config_name: 'lc-1'
    min_size: 1
    max_size: 10
    desired_capacity: 5
    vpc_zone_identifier: 'subnet-abcd1234,subnet-1a2b3c4d'
    tags:
      - key: environment
        value: production
        propagate_at_launch: no
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module will return public_ip on success, which will contain the public IP address associated with the
instance.
Note: There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable
via the new address. Use wait_for and pause to delay further playbook execution until the instance is reachable, if
necessary.
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
ec2_elb_lb - Creates or destroys an Amazon ELB. Returns information about the load balancer. Will
be marked changed, when called, only if the state of the ELB changes.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Normally, this module will purge any listeners that exist on the ELB
# but aren't specified in the listeners parameter. If purge_listeners is
# false it leaves them alone
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
      - us-east-1d
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    purge_listeners: no
# Normally, this module will leave availability zones that are enabled
# on the ELB alone. If purge_zones is true, then any extraneous zones
# will be removed
- local_action:
    module: ec2_elb_lb
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
      - us-east-1d
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    purge_zones: yes
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Conditional example
- name: Gather facts
action: ec2_facts
- name: Conditional
action: debug msg="This instance is a t1.micro"
when: ansible_ec2_instance_type == "t1.micro"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
aws_access_key: ACCESS
rules:
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 10.0.0.0/8
- proto: udp
from_port: 10050
to_port: 10050
cidr_ip: 10.0.0.0/8
- proto: udp
from_port: 10051
to_port: 10051
group_id: sg-12345678
- proto: all
# the containing group name may be specified here
group_name: example
rules_egress:
- proto: tcp
from_port: 80
to_port: 80
group_name: example-other
# description to use if example-other needs to be created
group_desc: other example EC2 group
Note: If a rule declares a group_name and that group doesn’t exist, it will be automatically created. In that case,
group_desc should be provided as well. The module will refuse to create a depended-on group without a description.
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new ec2 key pair named 'example' if not present, returns generated
# private key
- name: example ec2 key
  local_action:
    module: ec2_key
    name: example
# Creates a new ec2 key pair named 'example2' if not present using provided key
# material
- name: example2 ec2 key
  local_action:
    module: ec2_key
    name: example2
    key_material: 'ssh-rsa AAAAxyz...== [email protected]'
    state: present
# Creates a new ec2 key pair named 'example3' if not present using provided key
# material
- name: example3 ec2 key
  local_action:
    module: ec2_key
    name: example3
    key_material: "{{ item }}"
  with_file: /path/to/public_key.id_rsa.pub
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_lc:
    name: special
    image_id: ami-XXX
    key_name: default
    security_groups: 'group,group2'
    instance_type: t1.micro
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_scaling_policy:
    state: present
    region: US-XXX
    name: "scaledown-policy"
    adjustment_type: "ChangeInCapacity"
    asg_name: "slave-pool"
    scaling_adjustment: -1
    min_adjustment_step: 1
    cooldown: 300
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
ec2_vol - create and attach a volume, return volume id and device map
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example: Launch an instance and then add a volume if not already present
#   * Nothing will happen if the volume is already attached.
#   * Volume must exist in the same zone.
- local_action:
    module: ec2
    keypair: "{{ keypair }}"
    image: "{{ image }}"
    zone: YYYYYY
    id: my_instance
    wait: yes
    count: 1
  register: ec2
- local_action:
    module: ec2_vol
    instance: "{{ item.id }}"
    name: my_existing_volume_Name_tag
    device_name: /dev/xvdf
  with_items: ec2.instances
  register: ec2_vol
# Remove a volume
- local_action:
    module: ec2_vol
    id: vol-XXXXXXXX
    state: absent
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- 172.22.1.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
region: us-west-2
register: vpc
# Removal of a VPC by id
local_action:
module: ec2_vpc
state: absent
vpc_id: vpc-aaaaaaa
region: us-west-2
If you have added elements not managed by this module (e.g. instances, NATs, etc.), then
the delete will fail until those dependencies are removed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Example playbook entries using the ejabberd_user module to manage user state.
tasks:
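  # A minimal sketch, assuming the module's documented username/host/password/state parameters:
  - name: create a user if it does not exist
    action: ejabberd_user username=test host=server password=password
  - name: delete a user if it exists
    action: ejabberd_user username=test host=server state=absent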
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Basic example
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: present
    engine: memcached
    cache_engine_version: 1.4.14
    node_type: cache.m1.small
    num_nodes: 1
    cache_port: 11211
    cache_security_groups:
      - default
    zone: us-east-1d
- local_action:
    module: elasticache
    name: "test-please-delete"
    state: rebooted
• Synopsis
• Examples
Synopsis
Runs the facter discovery program (https://github.com/puppetlabs/facter) on the remote system, returning JSON data
that can be useful for inventory purposes.
Examples
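A minimal ad-hoc sketch (the host pattern is a placeholder):
ansible www.example.net -m facter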
• Synopsis
• Options
• Examples
Synopsis
This module fails the progress with a custom message. It can be useful for bailing out when a certain condition is met
using when.
Options
Examples
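A minimal sketch (the variable and message are placeholders):
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"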
• Synopsis
• Options
• Examples
Synopsis
This module works like copy, but in reverse. It is used for fetching files from remote machines and storing them
locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be
present, so a missing remote file won’t be an error unless fail_on_missing is set to ‘yes’.
Options
Examples
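A minimal sketch (paths are placeholders); the file is stored under the dest tree keyed by hostname, e.g. /tmp/fetched/host.example.com/tmp/somefile:
- fetch: src=/tmp/somefile dest=/tmp/fetched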
• Synopsis
• Options
• Examples
Synopsis
Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support
the same options as the file module - including copy, template, and assemble.
Options
Examples
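A minimal sketch (paths, owner, group, and mode are placeholders):
- file: path=/etc/foo.conf owner=foo group=foo mode=0644
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link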
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module launches an ephemeral fireball ZeroMQ message bus daemon on the remote node which Ansible can
use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of
time. Starting a new fireball as a given user terminates any existing user fireballs. Fireball mode is AES encrypted.
Options
Examples
# This example playbook has two plays: the first launches ’fireball’ mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
  gather_facts: false
  connection: ssh
  sudo: yes
  tasks:
    - action: fireball
- hosts: devservers
  connection: fireball
  tasks:
    - command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using fireball mode.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- flowdock: type=inbox
            token=AAAAAA
            [email protected]
            source='my cool app'
            msg='test from ansible'
            subject='test subject'
- flowdock: type=chat
            token=AAAAAA
            external_user_name=testuser
            msg='test from ansible'
            tags=tag1,tag2,tag3
Author: [email protected]. Note: Most of the code has been taken from the S3 module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using defaults and with metadata to create a single 'foo' instance
- local_action:
    module: gce
    name: foo
    metadata: '{"db":"postgres", "group":"qa", "id":500}'
# Launch instances from a control node, run some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: foo,bar
    machine_type: n1-standard-1
    image: debian-6
    zone: us-central1-a
    service_account_email: [email protected]
    pem_file: /path/to/pem_file
    project_id: project-id
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
    - name: Wait for SSH to come up
      local_action: wait_for host={{item.public_ip}} port=22 delay=10
                    timeout=60 state=started
      with_items: gce.instance_data
• Synopsis
• Options
• Examples
Synopsis
This module can create and destroy Google Compute Engine load balancer and httphealthcheck resources; the health
check parameters are all prefixed with httphealthcheck. The full documentation for Google Compute Engine load balancing is at
https://developers.google.com/compute/docs/load-balancing/. However, the ansible module simplifies the configu-
ration by following the libcloud model. Full install/configuration instructions for the gce* modules can be found in
the comments of ansible/test/gce_tests.py.
Options
Examples
# Simple example of creating a new LB, adding members, and a health check
- local_action:
    module: gce_lb
    name: testlb
    region: us-central1
    members: ["us-central1-a/www-a", "us-central1-b/www-b"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/up"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server must have direct access to the
remote resource. By default, if an environment variable <protocol>_proxy is set on the target host, requests
will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see setting the
environment), or by using the use_proxy option.
Options
Examples
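A minimal sketch (the URL, destination, and mode are placeholders):
- name: download foo.conf
  get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440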
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public key
in /etc/ssh/ssh_known_hosts before calling the git module, with the following command: ssh-keyscan -H
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Cleaning all hooks for this repo that had an error on the last update. Since this works for all hooks in the repo it is probably best called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- glance_image: >
    login_username=admin
    login_tenant_name=admin
    name=cirros
    container_format=bare
    disk_format=qcow2
    state=present
    copy_from=http://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Use facts to create ad-hoc groups that can be used later in a playbook.
Options
Examples
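A minimal sketch that groups hosts by an Ansible fact (the key prefix is a placeholder):
- group_by: key=machine_{{ ansible_machine }}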
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- grove: >
    channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
    service=my-app
    message=deployed {{ target }}
• Synopsis
• Options
• Examples
Synopsis
Manages Mercurial (hg) repositories. Supports SSH, HTTP/S and local address.
Options
Examples
# Ensure the current working copy is inside the stable branch and delete untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the hg module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- hostname: name=web01
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module depends on the passlib Python library, which needs to be installed on all target systems.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole
with, say, template or assemble. Adds missing sections if they don’t exist. Comments are discarded when the
source file is read, and therefore will not show up in the destination file.
Options
Examples
- ini_file: dest=/etc/anotherconf
            section=drinks
            option=temperature
            value=cold
            backup=yes
Note: While it is possible to add an option without specifying a value, this makes no sense.
Note: A section named default cannot be added by the module, but if it exists, individual options within the
section can be updated. (This is a limitation of Python’s ConfigParser.) Either use template to create a base INI
file with a [default] section, or use lineinfile to add the missing line.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ensure no identically named application is deployed through the JBoss CLI
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
assignee=ssmith
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"
# Create a user
- keystone_user: user=john tenant=demo password=secrete
# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
lineinfile - Ensure a particular line is in a file, or replace an existing line using a back-referenced
regular expression.
• Synopsis
• Options
• Examples
Synopsis
This module will search a file for a line, and ensure that it is present or absent. This is primarily useful when you want
to change a single line in a file only. For other cases, see the copy or template modules.
Options
Examples
# Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    plan: 1
    datacenter: 2
    distribution: 99
    password: 'superSecureRootPassword'
# Delete a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: absent
# Stop a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: stopped
# Reboot a server
- local_action:
    module: linode
    api_key: 'longStringFromLinodeApi'
    name: linode-test1
    linode_id: 12345678
    state: restarted
• Synopsis
• Examples
Synopsis
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Requires the LogEntries agent which can be installed following the instructions at logentries.com
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg: vg=vg.services pvs=/dev/sda1 pesize=32
Note: The module does not modify the PE size of an already present volume group.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a logical volume the size of all remaining space in the volume group
- lvol: vg=firefly lv=test size=100%FREE
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module is useful for sending emails from playbooks. One may wonder why automate sending emails? In complex
environments there are from time to time processes that cannot be automated, either because you lack the authority
to make it so, or because not everyone agrees to a common approach. If you cannot automate a specific step, but the
step is non-blocking, sending out an email to the responsible party to make him perform his part of the bargain is an
elegant way to put the responsibility in someone else’s lap. Of course sending out a mail can be equally useful as a
way to notify one or more people in a team that a specific action has been (successfully) taken.
Options
Examples
- local_action: mail
    host='127.0.0.1'
    port=2025
    subject="Ansible-report"
    body="Hello, this is an e-mail. I hope you like it ;-)"
    from="[email protected] (Jane Jolie)"
    to="John Doe <[email protected]>, Suzie Something <[email protected]>"
    cc="Charlie Root <root@localhost>"
    attach="/etc/group /tmp/pavatar2.png"
    [email protected]|X-Special="Something or other"
    charset=utf8
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create ’burgers’ database user with name ’bob’ and password ’12345’.
- mongodb_user: database=burgers name=bob password=12345 state=present
# Define more users with various specific roles (if not defined, no roles are assigned, and the user will be added via pure authentication if active)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present
# add a user to database in a replica set, the primary server is automatically discovered and written to
- mongodb_user: database=burgers name=bob replica_set=blecher password=12345 roles='readWriteAnyDatabase' state=present
Note: Requires the pymongo Python package on the remote host, version 2.4.2+. This can be installed using pip or
the OS package manager. @see http://api.mongodb.org/python/current/installation.html
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- local_action: mqtt
    topic=service/ansible/{{ ansible_hostname }}
    payload="Hello at {{ ansible_date_time.iso8601 }}"
    qos=0
    retain=false
    client_id=ans001
Note: This module requires a connection to an MQTT broker such as Mosquitto http://mosquitto.org and the Paho
mqtt Python client (https://pypi.python.org/pypi/paho-mqtt).
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Copy database dump file to remote host and restore it to database ’my_db’
- copy: src=dump.sql.bz2 dest=/tmp
- mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb. (See apt.)
Note: Both login_password and login_user are required when you are passing credentials. If none are present, the
module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login
of root with no password.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create database user with name ’bob’ and password ’12345’ with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present
# Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION'
- mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present
# Ensure no user named ’sally’ exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent
[client]
user=root
password=n<_665{vS43y
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb.
Note: Both login_password and login_username are required when you are passing credentials. If none are
present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL
default login of ‘root’ with no password.
Note: MySQL server installs with default login_user of ‘root’ and no password. To secure this user as part of an
idempotent playbook, you must create at least two tasks: the first must change the root user’s password, without
providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root
credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The nagios module has two basic functions: scheduling downtime and toggling alerts for services or hosts. All
actions require the host parameter to be given explicitly. In playbooks you can use the {{inventory_hostname}}
variable to refer to the host the playbook is currently running on. You can specify multiple services at once by
separating them with commas, e.g., services=httpd,nfs,puppet. When specifying what service to handle
there is a special service value, host, which will handle alerts/downtime for the host itself, e.g., service=host.
This keyword may not be given with other services at the same time. Setting alerts/downtime for a host does not affect
alerts/downtime for any of the services running on it. To schedule downtime for all services on particular host use
keyword “all”, e.g., service=all. When using the nagios module you will need to specify your Nagios server
using the delegate_to parameter.
Options
Examples
# SHUT UP NAGIOS
- nagios: action=silence_nagios
# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios
# command something
- nagios: action=command command=’DISABLE_FAILURE_PREDICTION’
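The downtime half of the module works along the same lines; a minimal sketch (the Nagios server name, service, and duration are placeholders):
# schedule 30 minutes of downtime for the httpd service on the current host
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}
  delegate_to: nagios.example.com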
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- newrelic_deployment: token=AAAAAA
                       app_name=myapp
                       user='ansible deployment'
                       revision=1.0
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
    state: present
    login_username: admin
    login_password: admin
    login_tenant_name: admin
    name: vm1
    image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
    key_name: ansible_key
    wait_for: 200
    flavor_id: 4
    nics:
      - net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
    meta:
      hostname: test1
      group: uge_master
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin
login_tenant_name=admin name=ansible_key
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install packages based on package.json using the npm installed with nvm v0.10.1
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
• Synopsis
• Examples
Synopsis
Similar to the facter module, this runs the Ohai discovery program (http://wiki.opscode.com/display/chef/Ohai) on
the remote host and returns JSON inventory data. Ohai data is a bit more verbose and nested than facter.
Examples
# Retrieve (ohai) data from all Web servers and store in one-file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If you like this module, you may also be interested in the osx_say callback in the plugins/ directory of the
source checkout.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# stopping an instance
action: ovirt >
    instance_name=testansible
    state=stopped
    user=admin@internal
    password=secret
    url=https://ovirt.example.com
# starting an instance
action: ovirt >
    instance_name=testansible
    state=started
    user=admin@internal
    password=secret
    url=https://ovirt.example.com
Author Afterburn
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc
             [email protected]
             passwd=password123
             state=running
             service=FOO123
             hours=4
             desc=deployment
Note: This module does not yet have support to end maintenance windows.
• Synopsis
• Options
• Examples
Synopsis
Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional.
The default behavior is to pause with a prompt. You can use ctrl+c if you wish to advance a pause earlier than it is
set to expire or if you need to abort a playbook run entirely. To continue early: press ctrl+c and then c. To abort
a playbook: press ctrl+c and then a. The pause module integrates into async/parallelized playbooks without any
special considerations (see also: Rolling Updates). When using pauses with the serial playbook parameter (as in
rolling updates) you are only prompted once for the current group of hosts.
Options
Examples
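A minimal sketch of both behaviors (the duration and prompt text are placeholders):
# Pause for 5 minutes to build app cache.
- pause: minutes=5
# A helpful reminder of what to look out for post-update.
- pause: prompt="Make sure org.foo.FooOverload exception is not present"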
• Synopsis
• Examples
Synopsis
A trivial test module, this module always returns pong on successful contact. It does not make sense in playbooks,
but it is useful from /usr/bin/ansible
Examples
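A minimal ad-hoc sketch (the host pattern is a placeholder):
ansible webservers -m ping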
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module does not yet have support to add/remove checks.
• Synopsis
• Options
• Examples
Synopsis
Manage Python library dependencies. To use this module, one of the following keys is required: name or
requirements.
Options
Examples
# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'
# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv
# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes
Note: Please note that virtualenv (http://www.virtualenv.org/) must be installed on the remote host if the virtualenv
parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author bleader
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: When using pkgsite, be careful that packages already in the cache will not be downloaded again.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install a package
pkgutil: name=CSWcommon state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author berenddeboer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme
                 encoding='UTF-8'
                 lc_collate='de_DE.UTF-8'
                 lc_ctype='de_DE.UTF-8'
                 template='template0'
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on
the host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is
installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case),
then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql,
libpq-dev, and python-psycopg2 packages on the remote host before using this module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
database=library
state=present
privs=SELECT,INSERT,UPDATE
type=table
objs=books,authors
schema=public
roles=librarian,reader
grant_option=yes
Note: Default authentication assumes that postgresql_privs is run by the postgres user on the remote host (i.e.
Ansible’s user or sudo user).
Note: This module requires the Python package psycopg2 to be installed on the remote host. In the default case of
the remote host also being the PostgreSQL server, PostgreSQL has to be installed there as well. For
Debian/Ubuntu-based systems, install the packages postgresql and python-psycopg2.
Note: Parameters that accept comma separated lists (privs, objs, roles) have singular alias names (priv, obj, role).
Note: To revoke only GRANT OPTION for a specific object, set state to present and grant_option to no (see
examples).
Note: When revoking privileges from a role R, that role may still have access via privileges granted to any
role R is a member of, including PUBLIC.
Note: When revoking privileges from a role R, you do so as the user specified via login. If R has also been
granted the same privileges by another user, R can still access database objects via those privileges.
• Synopsis
• Options
• Examples
Synopsis
Add or remove PostgreSQL users (roles) from a remote host and, optionally, grant the users access to an existing
database or tables. The fundamental function of the module is to create, or delete, roles from a PostgreSQL cluster.
Privilege assignment, or removal, is an optional step, which works on one database at a time. This allows the
module to be called several times in the same play to modify the permissions on different databases, or to grant
permissions to already existing users. A user cannot be removed until all the privileges have been stripped from the
user. In such a situation, if the module tries to remove the user it will fail. To prevent this, the fail_on_user
option signals the module to try to remove the user, but if that is not possible, keep going; the module will report
whether changes happened and, separately, whether the user was removed.
Options
Examples
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL
# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on the
host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed
on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then
PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev,
and python-psycopg2 packages on the remote host before using this module.
Note: If you specify PUBLIC as the user, then the privilege changes will apply to all users. You may not specify
password or role_attr_flags when the PUBLIC user is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
quantum_router_gateway - set/unset a gateway interface for the router with the specified external
network
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: ensure the default vhost contains the HA policy via a dict
rabbitmq_policy: name=HA pattern=’.*’
args:
tags:
"ha-mode": all
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Add user to server and assign full access control
- rabbitmq_user: user=joe
password=changeme
vhost=/
configure_priv=.*
read_priv=.*
write_priv=.*
state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Executes a low-down and dirty SSH command, not going through the module subsystem. This is useful and should
only be done in two cases. The first case is installing python-simplejson on older (Python 2.4 and before)
hosts that need it as a dependency to run modules, since nearly all core modules require it. Another is speaking to any
devices such as routers that do not have any Python installed. In any other case, using the shell or command module
is much more appropriate. Arguments given to raw are run directly through the configured remote shell. Standard
output, error output and return code are returned when available. There is no change handler support for this module.
This module does not require python on the remote system, much like the script module.
Options
Examples
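A minimal sketch of the bootstrap case described above (the host pattern and package manager are arbitrary examples):
# Bootstrap a Python-2.4 host so that the other modules can run
ansible legacyhost -m raw -a "yum -y install python-simplejson"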
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- public
register: rax
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
rax_clb_nodes - add, modify and remove nodes from a Rackspace Cloud Load Balancer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: It is recommended that plays utilizing this module be run with serial: 1 to avoid exceeding the API
request limit imposed by the Rackspace CloudDNS API.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: It is recommended that plays utilizing this module be run with serial: 1 to avoid exceeding the API
request limit imposed by the Rackspace CloudDNS API.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: "Get mycontainer2 metadata"
rax_files:
container: mycontainer2
type: meta
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Keypairs cannot be manipulated, only created and deleted. To “update” a keypair you must first delete it and
then recreate it.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: Network create request
local_action:
module: rax_network
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- rax_scaling_group:
credentials: ~/.raxpub
region: ORD
cooldown: 300
flavor: performance1-1
image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
min_entities: 5
max_entities: 10
name: ASG Test
server_name: asgtest
loadbalancers:
- id: 228385
port: 80
register: asg
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
at: ’2013-05-19T08:07:08Z’
change: 25
cooldown: 300
is_percent: true
name: ASG Test Policy - at
policy_type: schedule
scaling_group: ASG Test
register: asps_at
- rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
cron: ’1 0 * * *’
change: 25
cooldown: 300
is_percent: true
name: ASG Test Policy - cron
policy_type: schedule
scaling_group: ASG Test
register: asp_cron
- rax_scaling_policy:
credentials: ~/.raxpub
region: ORD
cooldown: 300
desired_capacity: 5
name: ASG Test Policy - webhook
policy_type: webhook
scaling_group: ASG Test
register: asp_webhook
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- subnet-bbbbbbbb
redhat_subscription - Manage Red Hat Network registration and subscriptions using the
subscription-manager command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true
Note: In order to register a system, subscription-manager requires either a username and password, or an activation key.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
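A minimal sketch (the master host and port are placeholders):
# Make the local redis instance a slave of another instance
- redis: command=slave master_host=master.example.com master_port=6379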
Note: Requires the redis-py Python package on the remote host. You can install it with pip (pip install redis) or with
a package manager. https://github.com/andymccurdy/redis-py
Note: If the redis master instance that we are making a slave of is password protected, the password needs to be set
in redis.conf via the masterauth variable.
replace - Replace all instances of a particular string in a file using a back-referenced regular expression.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
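A minimal sketch (the path and patterns are placeholders):
# Rewrite every occurrence of an old hostname in /etc/hosts, keeping a backup
- replace: dest=/etc/hosts regexp=’(\s+)old\.host\.name(\s+.*)?$’ replace=’\1new.host.name\2’ backup=yes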
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
rhn_register - Manage Red Hat Network registration using the rhnreg_ks command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass
Note: In order to register a system, rhnreg_ks requires either a username and password, or an activationkey.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- rollbar_deployment: token=AAAAAA
environment=’staging’
user=’ansible’
revision=4.2
rollbar_user=’admin’
comment=’Test Deploy’
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Delete new.foo.com A record using the results from the get command
- route53: >
command=delete
zone=foo.com
record={{ rec.set.record }}
type={{ rec.set.type }}
value={{ rec.set.value }}
# Add an AAAA record. Note that because there are colons in the value
# that the entire parameter list must be quoted:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=AAAA
ttl=7200
value="::1"
# Add a TXT record. Note that TXT and SPF records must be surrounded
# by quotes when sent to Route 53:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=TXT
ttl=7200
value=""bar""
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The script module takes the script name followed by a list of space-delimited arguments. The local script at
path will be transferred to the remote node and then executed. The given script will be processed through the shell
environment on the remote node. This module does not require python on the remote system, much like the raw
module.
Options
Examples
# Run a script that creates a file, but only if the file is not yet created
- script: /some/local/create_file.sh --some-arguments 1234 creates=/the/created/file.txt
# Run a script that removes a file, but only if the file is not yet removed
- script: /some/local/remove_file.sh --some-arguments 1234 removes=/the/removed/file.txt
Note: It is usually preferable to write Ansible modules rather than pushing scripts. Convert your script to an Ansible
module for bonus points!
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but
will let you know when it is required.
Options
Examples
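A minimal sketch, assuming the stock targeted policy:
# Put SELinux in enforcing mode with the targeted policy
- selinux: policy=targeted state=enforcing
# Disable SELinux entirely (a required reboot will be flagged)
- selinux: state=disabled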
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in
playbooks. It can also be executed directly by /usr/bin/ansible to check what variables are available to a host.
Ansible provides many facts about the system, automatically.
Options
Examples
# Display facts from all hosts and store them indexed by hostname at /tmp/facts.
ansible all -m setup --tree /tmp/facts
# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a ’filter=ansible_*_mb’
Note: More ansible facts will be added with successive releases. If facter or ohai are installed, variables from
these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with
facter_ and ohai_ so it’s easy to tell their source. All variables are bubbled up to the caller. Using the ansible
facts and choosing not to install facter and ohai means you can avoid Ruby dependencies on your remote systems.
(See also facter and ohai.)
Note: The filter option filters only the first level subkey below ansible_facts.
Note: If the target host is Windows, you will not currently have the ability to use fact_path or filter as this is
provided by a simpler implementation of the module. Different facts are returned for Windows hosts.
• Synopsis
• Options
• Examples
Synopsis
The shell module takes the command name followed by a list of space-delimited arguments. It is almost exactly
like the command module but runs the command through a shell (/bin/sh) on the remote node.
Options
Examples
# You can also use the ’args’ form to provide the options. This command
# will change the working directory to somedir/ and will only run when
# somedir/somelog.txt doesn’t exist.
- shell: somescript.sh >> somelog.txt
args:
chdir: somedir/
creates: somelog.txt
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: Send notification message via Slack
local_action:
module: slack
domain: future500.slack.com
token: thetokengeneratedbyslack
msg: "{{ inventory_hostname }} completed"
channel: "#ansible"
username: "Ansible on {{ inventory_hostname }}"
icon_url: "http://www.example.com/some-image-file.png"
link_names: 0
parse: ’none’
• Synopsis
• Options
• Examples
Synopsis
This module works like fetch. It is used for fetching a base64-encoded blob containing the data in a remote file.
Options
Examples
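A minimal ad-hoc sketch (the source path is arbitrary); the returned content field is base64-encoded and must be decoded by the caller:
ansible host -m slurp -a ’src=/etc/motd’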
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: Send notification messages via SNS with short message for SMS
local_action:
module: sns
msg: "{{ inventory_hostname }} has completed the play."
sms: "deployed!"
subject: "Deploy complete!"
topic: "deploy"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to ’root’. Fail otherwise.
- stat: path=/etc/foo.conf
register: st
- fail: msg="Whoops! file ownership has changed"
when: st.stat.pw_name != ’root’
• Synopsis
• Options
• Examples
Synopsis
Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a
checkout.
Options
Examples
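A minimal sketch (the repository URL and destination are placeholders):
# Check out a repository, or update an existing working copy in dest
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout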
supervisorctl - Manage the state of a program or group of programs running via supervisord
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
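A minimal sketch (the program name is a placeholder):
# Ensure the program my_app is started under supervisord
- supervisorctl: name=my_app state=started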
Note: When state = present, the module will call supervisorctl reread then supervisorctl add if
the program/group does not exist.
Note: When state = restarted, the module will call supervisorctl update then call supervisorctl
restart.
• Synopsis
• Options
• Examples
Synopsis
Manages SVR4 packages on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available
as a legacy feature in Solaris 11. Note that this is a very basic packaging system. It will not enforce dependencies on
install or remove.
Options
Examples
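A minimal sketch (the package name and source path are placeholders):
# Install a package from a package stream file already on the remote host
- svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present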
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
synchronize - Uses rsync to make synchronizing file paths in your playbooks quick and easy.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Synchronization with --archive options enabled except for --times, with --checksum option enabled
synchronize: src=some/relative/path dest=/some/absolute/path checksum=yes times=no
# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes
Note: Inspect the verbose output to validate the destination user/host/path are what was expected.
Note: The remote user for the dest path will always be the remote_user, not the sudo_user.
Note: To exclude files and directories from being synchronized, you may add .rsync-filter files to the source
directory.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
• Synopsis
• Options
• Examples
Synopsis
Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the tem-
plate formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/). Six
additional variables can be used in templates: ansible_managed (configurable via the defaults section of
ansible.cfg) contains a string which can be used to describe the template name, host, modification time of the tem-
plate file and the owner uid, template_host contains the node name of the template’s machine, template_uid
the owner, template_path the path of the template as given, template_fullpath the absolute path of the
template, and template_run_date the date that the template was rendered. Note that including a string that
uses a date in the template will result in the template being marked ‘changed’ each time.
Options
Examples
# Copy a new "sudoers" file into place, after passing validation with visudo
- template: src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate=’visudo -cf %s’
Note: Since Ansible version 0.9, templates are loaded with trim_blocks=True.
Note: Also, you can override jinja2 settings by adding a special header to the template file, i.e.
#jinja2:variable_start_string:’[%’ , variable_end_string:’%]’ which changes the vari-
able interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things
that look like, but should not be Jinja2. raw/endraw in Jinja2 will not work as you expect because templates in Ansible
are recursively evaluated.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# send a text message from the local server about the build status to (555) 303 5681
# note: you have to have purchased the ’from_number’ on your Twilio account
- local_action: text msg="All servers with webserver role are now configured."
account_sid={{ twilio_account_sid }}
auth_token={{ twilio_auth_token }}
from_number=+15552014545 to_number=+15553035681
Note: Like the other notification modules, this one requires an external dependency to work. In this case, you’ll need
a Twilio account with a purchased or verified phone number to send the text message.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Set logging
ufw: logging=on
# Allow OpenSSH
ufw: rule=allow name=OpenSSH
# Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469
ufw: rule=allow interface=eth0 direction=in proto=udp src=1.2.3.5 from_port=5469 dest=1.2.3.4 to_port=5469
# Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host.
# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work.
ufw: rule=deny proto=tcp src=2001:db8::/32 port=25
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
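A minimal sketch (the paths are placeholders); by default the archive is copied from the control machine before being unpacked:
# Copy foo.tgz to the remote host and unpack it under /var/lib/foo
- unarchive: src=foo.tgz dest=/var/lib/foo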
Note: Can handle gzip, bzip2 and xz compressed as well as uncompressed tar files.
Note: Uses tar’s --diff argument to calculate whether anything changed. If this argument is not supported, it will
always unpack the archive.
Note: Does not detect if a .zip file is different from the destination - it always unzips.
Note: Existing files/directories in the destination which are not in the archive are not touched. This is the same
behavior as a normal archive extraction.
Note: Existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if
the archive should be unpacked or not.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com
# Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents.
- action: uri url=http://www.example.com return_content=yes
register: webpage
- action: fail
when: ’AWESOME’ not in "{{ webpage.content }}"
- uri: url=https://your.jira.example.com/rest/api/2/issue/
method=POST user=your_username password=your_pass
body="{{ lookup(’file’,’issue.json’) }}" force_basic_auth=yes
status_code=201 HEADER_Content-Type="application/json"
- uri: url=https://your.form.based.auth.example.com/index.php
method=POST body="name=your_username&password=your_password&enter=Sign%20in"
status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
register: login
- uri: url=https://your.form.based.auth.example.com/dashboard.php
method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}"
- uri: url=http://{{jenkins.host}}/job/{{jenkins.job}}/build?token={{jenkins.token}}
method=GET user={{jenkins.user}} password={{jenkins.password}} force_basic_auth=yes status_code=201
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Add the user ’johnd’ with a specific uid and a primary group of ’admin’
- user: name=johnd comment="John Doe" uid=1040 group=admin
# Add the user ’james’ with a bash shell, appending the group ’admins’ and ’developers’ to the user’s groups
- user: name=james shell=/bin/bash groups=admins,developers append=yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
ansible host -m virt -a "name=alpha command=get_xml"
ansible host -m virt -a "name=alpha command=create uri=lxc:///"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- vsphere_guest:
vcenter_hostname: vcenter.mydomain.local
username: myuser
password: mypass
guest: newvm001
state: powered_on
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: This is a test VM
vm_disk:
disk1:
size_gb: 10
type: thin
datastore: storage001
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
vm_hardware:
memory_mb: 2048
num_cpus: 2
osid: centos64Guest
scsi: paravirtual
esxi:
datacenter: MyDatacenter
hostname: esx001.mydomain.local
- vsphere_guest:
vcenter_hostname: vcenter.mydomain.local
username: myuser
password: mypass
guest: newvm001
state: reconfigured
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: This is a test VM
vm_disk:
disk1:
size_gb: 10
type: thin
datastore: storage001
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
vm_hardware:
memory_mb: 4096
num_cpus: 4
osid: centos64Guest
scsi: paravirtual
esxi:
datacenter: MyDatacenter
hostname: esx001.mydomain.local
# Task to gather facts from a vSphere cluster only if the system is a VMware guest
- vsphere_guest:
vcenter_hostname: vcenter.mydomain.local
username: myuser
password: mypass
guest: newvm001
vmware_guest_facts: yes
- hw_eth0:
- addresstype: "assigned"
label: "Network adapter 1"
macaddress: "00:22:33:33:44:55"
macaddress_dash: "00-22-33-33-44-55"
summary: "VM Network"
hw_guest_full_name: "newvm001"
hw_guest_id: "rhel6_64Guest"
hw_memtotal_mb: 2048
hw_name: "centos64Guest"
hw_processor_count: 2
hw_product_uuid: "ef50bac8-2845-40ff-81d9-675315501dac"
- vsphere_guest:
vcenter_hostname: vcenter.mydomain.local
username: myuser
password: mypass
guest: newvm001
state: absent
force: yes
Note: This module should run from a system that can access vSphere directly. Either by using local_action, or using
delegate_to.
• Synopsis
• Options
• Examples
Synopsis
Waiting for a port to become available is useful for when services are not immediately available after their init scripts
return - which is true of certain Java application servers. It is also useful when starting guests with the virt module
and needing to pause until they are ready. This module can also be used to wait for a regex to match a string present
in a file. In 1.6 and later, this module can also be used to wait for a file to be available or absent on the filesystem.
Options
Examples
# wait 300 seconds for port 8000 to become open on the host, don’t start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Playbook example
---
- name: Install IIS
hosts: all
gather_facts: false
tasks:
- name: Install IIS
win_feature:
name: "Web-Server"
state: present
restart: yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Playbook example
- name: Download earthrise.jpg to ’C:\Users\RandomUser\earthrise.jpg’
win_get_url:
url: ’http://www.example.com/earthrise.jpg’
dest: ’C:\Users\RandomUser\earthrise.jpg’
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Remove a group
win_group:
name: deploy
state: absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Restart a service
win_service:
name: spooler
state: restarted
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- win_stat: path=C:\foo.ini
register: file_info
- debug: var=file_info
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Ad-hoc example
$ ansible -i hosts -m win_user -a "name=bob password=Password12345" all
$ ansible -i hosts -m win_user -a "name=bob password=Password12345 state=absent" all
# Playbook example
---
- name: Add a user
hosts: all
gather_facts: false
tasks:
- name: Add User
win_user:
name: ansible
password: "@ns1bl3"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Installs, upgrades, removes, and lists packages and groups with the yum package manager.
Options
Examples
- name: install the latest version of Apache from the testing repo
yum: name=httpd enablerepo=testing state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install "nmap"
- zypper: name=nmap state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
ssh_cert_path: /path/to/azure_x509_cert.pem
storage_account: my-storage-account
wait: yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean: >
state=present
command=ssh
name=my_ssh_key
ssh_pub_key=’ssh-rsa AAAA...’
client_id=XXX
api_key=XXX
- digital_ocean: >
state=present
command=droplet
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
wait_timeout=500
register: my_droplet
- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"
- digital_ocean: >
state=present
command=droplet
id=123
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
wait_timeout=500
- digital_ocean: >
state=present
ssh_key_ids=id1,id2
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean_domain: >
state=present
name=my.digitalocean.domain
ip=127.0.0.1
- digital_ocean: >
state=present
name=test_droplet
size_id=1
region_id=2
image_id=3
register: test_droplet
- digital_ocean_domain: >
state=present
name={{ test_droplet.droplet.name }}.my.domain
ip={{ test_droplet.droplet.ip_address }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean_sshkey: >
state=present
name=my_ssh_key
ssh_pub_key=’ssh-rsa AAAA...’
client_id=XXX
api_key=XXX
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Start one docker container running tomcat in each host of the web group and bind tomcat’s listening port to 8080
on the host:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080
The tomcat server’s port is NAT’ed to a dynamic port on the host, but you can determine which port the server was
mapped to using docker_containers:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count=5
- name: Display IP address and port mappings for containers
debug: msg={{inventory_hostname}}:{{item[’HostConfig’][’PortBindings’][’8080/tcp’][0][’HostPort’]}}
with_items: docker_containers
Just as in the previous example, but iterates over the list of docker containers with a sequence:
- hosts: web
sudo: yes
vars:
start_containers_count: 5
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
- name: Display IP address and port mappings for containers
debug: msg="{{inventory_hostname}}:{{docker_containers[{{item}}][’HostConfig’][’PortBindings’][’8
Stop and remove all of the running tomcat containers and list the exit codes from the stopped containers:
- hosts: web
sudo: yes
tasks:
- name: stop tomcat servers
docker: image=centos command="service tomcat6 start" state=absent
- name: Display return codes from stopped containers
debug: msg="Returned {{inventory_hostname}}:{{item}}"
with_items: docker_containers
- hosts: web
sudo: yes
tasks:
- name: run tomcat server
docker: image=centos name=tomcat command="service tomcat6 start" ports=8080
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_items:
- crookshank
- snowbell
- heathcliff
- felix
- sylvester
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_sequence: start=1 end=5 format=tomcat_%d.example.com
- hosts: web
sudo: yes
tasks:
- name: ensure redis container is running
docker: image=crosbymichael/redis name=redis
- hosts: web
sudo: yes
tasks:
- docker:
image: namespace/image_name
links:
- postgresql:db
- redis:redis
Create containers with options specified as strings and lists as comma-separated strings:
- hosts: web
sudo: yes
tasks:
- docker: image=namespace/image_name links=postgresql:db,redis:redis
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Build the docker image if required. The path should contain a Dockerfile to build the image:
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=present
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=build
- hosts: web
sudo: yes
tasks:
- name: remove image
docker_image: name="my/app" state=absent
• Synopsis
• Options
• Examples
Synopsis
Creates or terminates ec2 instances. When created, optionally waits for the instance to be ‘running’. This module has
a dependency on python-boto >= 2.5.
Options
Examples
wait: yes
wait_timeout: 500
count: 5
instance_tags:
db: postgres
monitoring: yes
# Single instance with additional IOPS volume from snapshot and volume delete on termination
local_action:
module: ec2
key_name: mykey
group: webserver
instance_type: m1.large
image: ami-6e649707
wait: yes
wait_timeout: 500
volumes:
- device_name: /dev/sdb
snapshot: snap-abcdef12
device_type: io1
iops: 1000
volume_size: 100
delete_on_termination: true
monitoring: yes
# VPC example
- local_action:
module: ec2
key_name: mykey
group_id: sg-1dc53f72
instance_type: m1.small
image: ami-6e649707
wait: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
local_action:
module: ec2
state: ’absent’
instance_ids: ’{{ ec2.instance_ids }}’
#
# Enforce that 5 instances with a tag "foo" are running
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
foo: bar
exact_count: 5
count_tag: foo
#
# Enforce that 5 running instances named "database" with a "dbtype" of "postgres"
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
Name: database
dbtype: postgres
exact_count: 5
count_tag:
Name: database
dbtype: postgres
#
# count_tag complex argument examples
#
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Deregister/Delete AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: True
state: absent
# Deregister AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: False
state: absent
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_asg:
name: special
load_balancers: ’lb1,lb2’
availability_zones: ’eu-west-1a,eu-west-1b’
launch_config_name: ’lc-1’
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: ’subnet-abcd1234,subnet-1a2b3c4d’
tags:
- key: environment
value: production
propagate_at_launch: no
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module will return public_ip on success, which will contain the public IP address associated with the
instance.
Note: There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable
via the new address. Use wait_for and pause to delay further playbook execution until the instance is reachable, if
necessary.
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
state: ’absent’
roles:
- myrole
post_tasks:
- name: Instance Register
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
state: ’present’
with_items: ec2_elbs
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
ec2_elb_lb - Creates or destroys Amazon ELB. Returns information about the load balancer. Will be marked
changed when called only if state is changed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
response_timeout: 5 # seconds
interval: 30 # seconds
unhealthy_threshold: 2
healthy_threshold: 10
# Normally, this module will purge any listeners that exist on the ELB
# but aren’t specified in the listeners parameter. If purge_listeners is
# false it leaves them alone
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_listeners: no
# Normally, this module will leave availability zones that are enabled
# on the ELB alone. If purge_zones is true, then any extraneous zones
# will be removed
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_zones: yes
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Conditional example
- name: Gather facts
action: ec2_facts
- name: Conditional
action: debug msg="This instance is a t1.micro"
when: ansible_ec2_instance_type == "t1.micro"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If a rule declares a group_name and that group doesn’t exist, it will be automatically created. In that case,
group_desc should be provided as well. The module will refuse to create a depended-on group without a description.
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new ec2 key pair named ‘example‘ if not present, returns generated
# private key
- name: example ec2 key
local_action:
module: ec2_key
name: example
# Creates a new ec2 key pair named ‘example‘ if not present using provided key
# material
- name: example2 ec2 key
local_action:
module: ec2_key
name: example2
key_material: ’ssh-rsa AAAAxyz...== [email protected]’
state: present
# Creates a new ec2 key pair named ‘example‘ if not present using provided key
# material
- name: example3 ec2 key
local_action:
module: ec2_key
name: example3
key_material: "{{ item }}"
with_file: /path/to/public_key.id_rsa.pub
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_lc:
name: special
image_id: ami-XXX
key_name: default
security_groups: ’group,group2’
instance_type: t1.micro
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
statistic: Average
comparison: "<="
threshold: 5.0
period: 300
evaluation_periods: 3
unit: "Percent"
description: "This will alarm when a bamboo slave’s cpu usage average is lower than 5% for 15 min
dimensions: {’InstanceId’:’i-XXX’}
alarm_actions: ["action1","action2"]
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- ec2_scaling_policy:
state: present
region: US-XXX
name: "scaledown-policy"
adjustment_type: "ChangeInCapacity"
asg_name: "slave-pool"
scaling_adjustment: -1
min_adjustment_step: 1
cooldown: 300
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- local_action:
module: ec2_snapshot
instance_id: i-12345678
device_name: /dev/sdb1
snapshot_tags:
frequency: hourly
source: /data
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
env: prod
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
ec2_vol - create and attach a volume, return volume id and device map
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example: Launch an instance and then add a volume if not already present
# * Nothing will happen if the volume is already attached.
# * Volume must exist in the same zone.
- local_action:
module: ec2
keypair: "{{ keypair }}"
image: "{{ image }}"
zone: YYYYYY
id: my_instance
wait: yes
count: 1
register: ec2
- local_action:
module: ec2_vol
instance: "{{ item.id }}"
name: my_existing_volume_Name_tag
device_name: /dev/xvdf
with_items: ec2.instances
register: ec2_vol
# Remove a volume
- local_action:
module: ec2_vol
id: vol-XXXXXXXX
state: absent
Note: Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See
http://boto.readthedocs.org/en/latest/boto_config_tut.html
Note: AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this
can also be configured in the boto config file
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
region: us-west-2
# Full creation example with subnets and optional availability zones.
# The absence or presence of subnets deletes or creates them respectively.
local_action:
module: ec2_vpc
state: present
cidr_block: 172.22.0.0/16
resource_tags: { "Environment":"Development" }
subnets:
- cidr: 172.22.1.0/24
az: us-west-2c
resource_tags: { "Environment":"Dev", "Tier" : "Web" }
- cidr: 172.22.2.0/24
az: us-west-2b
resource_tags: { "Environment":"Dev", "Tier" : "App" }
- cidr: 172.22.3.0/24
az: us-west-2a
resource_tags: { "Environment":"Dev", "Tier" : "DB" }
internet_gateway: True
route_tables:
- subnets:
- 172.22.2.0/24
- 172.22.3.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
- subnets:
- 172.22.1.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
region: us-west-2
register: vpc
# Removal of a VPC by id
local_action:
module: ec2_vpc
state: absent
vpc_id: vpc-aaaaaaa
region: us-west-2
If you have added elements not managed by this module, e.g. instances, NATs, etc., then
the delete will fail until those dependencies are removed.
• Synopsis
• Options
• Examples
Synopsis
Manage cache clusters in Amazon Elasticache. Returns information about the specified cache cluster.
Options
Examples
# Basic example
- local_action:
module: elasticache
name: "test-please-delete"
state: present
engine: memcached
cache_engine_version: 1.4.14
node_type: cache.m1.small
num_nodes: 1
cache_port: 11211
cache_security_groups:
- default
zone: us-east-1d
Author [email protected]
Note: Most of the code has been taken from the S3 module.
• Synopsis
• Options
• Examples
Synopsis
This module allows users to manage their objects/buckets in Google Cloud Storage. It allows upload and download
operations and can set some canned permissions. It also allows retrieval of URLs for objects for use in playbooks,
and retrieval of string contents of objects. This module requires setting the default project in GCS prior to playbook
usage. See https://developers.google.com/storage/docs/reference/v1/apiversion1 for information about setting the de-
fault project.
Options
Examples
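A minimal sketch (the bucket, object and file names are placeholders):
# Upload a local file to a bucket as an object with a canned permission
- gc_storage: bucket=mybucket object=key.txt src=/usr/local/myfile.txt mode=put permission=public-read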
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using defaults and with metadata to create a single ’foo’ instance
- local_action:
module: gce
name: foo
metadata: ’{"db":"postgres", "group":"qa", "id":500}’
# Launch instances from a control node, runs some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
hosts: localhost
vars:
names: foo,bar
machine_type: n1-standard-1
image: debian-6
zone: us-central1-a
service_account_email: [email protected]
pem_file: /path/to/pem_file
project_id: project-id
tasks:
- name: Launch instances
local_action: gce instance_names={{names}} machine_type={{machine_type}}
image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
pem_file={{ pem_file }} project_id={{ project_id }}
register: gce
- name: Wait for SSH to come up
local_action: wait_for host={{item.public_ip}} port=22 delay=10
timeout=60 state=started
with_items: {{gce.instance_data}}
- name: Terminate instances that were previously launched
local_action:
module: gce
state: ’absent’
instance_names: {{gce.instance_names}}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Simple example of creating a new LB, adding members, and a health check
- local_action:
module: gce_lb
name: testlb
region: us-central1
members: ["us-central1-a/www-a", "us-central1-b/www-b"]
httphealthcheck_name: hc
httphealthcheck_port: 80
httphealthcheck_path: "/up"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
images or snapshots. The ‘gce’ module supports creating instances with boot disks. Full install/configuration
instructions for the gce* modules can be found in the comments of ansible/test/gce_tests.py.
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"
# Create a user
- keystone_user: user=john tenant=demo password=secrete
# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
plan: 1
datacenter: 2
distribution: 99
password: ’superSecureRootPassword’
ssh_pub_key: ’ssh-rsa qwerty’
swap: 768
wait: yes
wait_timeout: 600
state: present
# Delete a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: absent
# Stop a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: stopped
# Reboot a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: restarted
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
state: present
login_username: admin
login_password: admin
login_tenant_name: admin
name: vm1
image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
key_name: ansible_key
wait_for: 200
flavor_id: 4
nics:
- net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
meta:
hostname: test1
group: uge_master
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin
login_tenant_name=admin name=ansible_key
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# stopping an instance
action: ovirt >
instance_name=testansible
state=stopped
user=admin@internal
password=secret
url=https://ovirt.example.com
# starting an instance
action: ovirt >
instance_name=testansible
state=started
user=admin@internal
password=secret
url=https://ovirt.example.com
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
provider_network_type=local router_external=yes
login_username=admin login_password=admin login_tenant_name=admin
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
quantum_router_gateway - set/unset a gateway interface for the router with the specified external
network
• Synopsis
• Options
• Examples
Synopsis
Creates/Removes a gateway interface from the router, used to associate an external network with a router to route
external traffic.
Options
Examples
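A minimal sketch (the router_name and network_name values are illustrative, not from this document):
# Attach the external network to the router so external traffic can be routed
- quantum_router_gateway: state=present login_username=admin login_password=admin
                          login_tenant_name=admin router_name=external_router
                          network_name=external_network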
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
router_name=external_route
subnet_name=t1subnet
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
rax_clb_nodes - add, modify and remove nodes from a Rackspace Cloud Load Balancer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: Domain create request
  local_action:
    module: rax_dns
    credentials: ~/.raxpub
    name: example.org
    email: [email protected]
  register: rax_dns
Note: It is recommended that plays utilizing this module be run with serial: 1 to avoid exceeding the API
request limit imposed by the Rackspace CloudDNS API.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: Record create request
  local_action:
    module: rax_dns_record
    credentials: ~/.raxpub
    domain: example.org
    name: www.example.org
    data: 127.0.0.1
    type: A
  register: rax_dns_record
Note: It is recommended that plays utilizing this module be run with serial: 1 to avoid exceeding the API
request limit imposed by the Rackspace CloudDNS API.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: Gather info about servers
  local_action:
    module: rax_facts
    credentials: ~/.raxpub
    name: "{{ inventory_hostname }}"
    region: DFW
- name: Map some facts
  set_fact:
    ansible_ssh_host: "{{ rax_accessipv4 }}"
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Keypairs cannot be manipulated, only created and deleted. To "update" a keypair you must first delete it and
then recreate it.
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: Network create request
local_action:
module: rax_network
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - rax_scaling_group:
        credentials: ~/.raxpub
        region: ORD
        cooldown: 300
        flavor: performance1-1
        image: bb02b1a3-bc77-4d17-ab5b-421d89850fca
        min_entities: 5
        max_entities: 10
        name: ASG Test
        server_name: asgtest
        loadbalancers:
          - id: 228385
            port: 80
      register: asg
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - rax_scaling_policy:
        credentials: ~/.raxpub
        region: ORD
        at: '2013-05-19T08:07:08Z'
        change: 25
        cooldown: 300
        is_percent: true
        name: ASG Test Policy - at
        policy_type: schedule
        scaling_group: ASG Test
      register: asps_at
    - rax_scaling_policy:
        credentials: ~/.raxpub
        region: ORD
        cron: '1 0 * * *'
        change: 25
        cooldown: 300
        is_percent: true
        name: ASG Test Policy - cron
        policy_type: schedule
        scaling_group: ASG Test
      register: asp_cron
    - rax_scaling_policy:
        credentials: ~/.raxpub
        region: ORD
        cooldown: 300
        desired_capacity: 5
        name: ASG Test Policy - webhook
        policy_type: webhook
        scaling_group: ASG Test
      register: asp_webhook
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Delete new.foo.com A record using the results from the get command
- route53: >
command=delete
zone=foo.com
record={{ rec.set.record }}
type={{ rec.set.type }}
value={{ rec.set.value }}
# Add an AAAA record. Note that because there are colons in the value
# that the entire parameter list must be quoted:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=AAAA
ttl=7200
value="::1"
# Add a TXT record. Note that TXT and SPF records must be surrounded
# by quotes when sent to Route 53:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=TXT
ttl=7200
value='"bar"'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
ansible host -m virt -a "name=alpha command=get_xml"
ansible host -m virt -a "name=alpha command=create uri=lxc:///"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- vsphere_guest:
    vcenter_hostname: vcenter.mydomain.local
    username: myuser
    password: mypass
    guest: newvm001
    state: powered_on
    vm_extra_config:
      vcpu.hotadd: yes
      mem.hotadd: yes
      notes: This is a test VM
    vm_disk:
      disk1:
        size_gb: 10
        type: thin
        datastore: storage001
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
    vm_hardware:
      memory_mb: 2048
      num_cpus: 2
      osid: centos64Guest
      scsi: paravirtual
    esxi:
      datacenter: MyDatacenter
      hostname: esx001.mydomain.local

- vsphere_guest:
    vcenter_hostname: vcenter.mydomain.local
    username: myuser
    password: mypass
    guest: newvm001
    state: reconfigured
    vm_extra_config:
      vcpu.hotadd: yes
      mem.hotadd: yes
      notes: This is a test VM
    vm_disk:
      disk1:
        size_gb: 10
        type: thin
        datastore: storage001
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
    vm_hardware:
      memory_mb: 4096
      num_cpus: 4
      osid: centos64Guest
      scsi: paravirtual
    esxi:
      datacenter: MyDatacenter
      hostname: esx001.mydomain.local

# Task to gather facts from a vSphere cluster only if the system is a VMWare guest
- vsphere_guest:
    vcenter_hostname: vcenter.mydomain.local
    username: myuser
    password: mypass
    guest: newvm001
    vmware_guest_facts: yes

- hw_eth0:
  - addresstype: "assigned"
    label: "Network adapter 1"
    macaddress: "00:22:33:33:44:55"
    macaddress_dash: "00-22-33-33-44-55"
    summary: "VM Network"
  hw_guest_full_name: "newvm001"
  hw_guest_id: "rhel6_64Guest"
  hw_memtotal_mb: 2048
  hw_name: "centos64Guest"
  hw_processor_count: 2
  hw_product_uuid: "ef50bac8-2845-40ff-81d9-675315501dac"

- vsphere_guest:
    vcenter_hostname: vcenter.mydomain.local
    username: myuser
    password: mypass
    guest: newvm001
    state: absent
    force: yes
Note: This module should run from a system that can access vSphere directly, either by using local_action or by
using delegate_to.
• Synopsis
• Options
• Examples
Synopsis
The command module takes the command name followed by a list of space-delimited arguments. The given command
will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and
operations like "<", ">", "|", and "&" will not work (use the shell module if you need these features).
Options
Examples
# You can also use the 'args' form to provide the options. This command
# will change the working directory to somedir/ and will only run when
# /path/to/database doesn’t exist.
- command: /usr/bin/make_database.sh arg1 arg2
args:
chdir: somedir/
creates: /path/to/database
Note: If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell
module instead. The command module is much more secure as it’s not affected by the user’s environment.
Note: creates, removes, and chdir can be specified after the command. For instance, if you only want to run
a command if a certain file does not exist, use this.
• Synopsis
• Options
• Examples
Synopsis
Executes a low-down and dirty SSH command, not going through the module subsystem. This is useful and should
only be done in two cases. The first case is installing python-simplejson on older (Python 2.4 and before)
hosts that need it as a dependency to run modules, since nearly all core modules require it. Another is speaking to any
devices such as routers that do not have any Python installed. In any other case, using the shell or command module
is much more appropriate. Arguments given to raw are run directly through the configured remote shell. Standard
output, error output and return code are returned when available. There is no change handler support for this module.
This module does not require python on the remote system, much like the script module.
Options
Examples
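A minimal sketch of the bootstrap case described in the synopsis, assuming a yum-based host:
# Bootstrap a legacy Python 2.4 host so that regular modules can run
- raw: yum -y install python-simplejson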
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
The script module takes the script name followed by a list of space-delimited arguments. The local script at
path will be transferred to the remote node and then executed. The given script will be processed through the shell
environment on the remote node. This module does not require python on the remote system, much like the raw
module.
Options
Examples
# Run a script that creates a file, but only if the file is not yet created
- script: /some/local/create_file.sh --some-arguments 1234 creates=/the/created/file.txt
# Run a script that removes a file, but only if the file is not yet removed
- script: /some/local/remove_file.sh --some-arguments 1234 removes=/the/removed/file.txt
Note: It is usually preferable to write Ansible modules than pushing scripts. Convert your script to an Ansible module
for bonus points!
• Synopsis
• Options
• Examples
Synopsis
The shell module takes the command name followed by a list of space-delimited arguments. It is almost exactly
like the command module but runs the command through a shell (/bin/sh) on the remote node.
Options
Examples
# You can also use the 'args' form to provide the options. This command
# will change the working directory to somedir/ and will only run when
# somedir/somelog.txt doesn’t exist.
- shell: somescript.sh >> somelog.txt
args:
chdir: somedir/
creates: somelog.txt
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user: database=burgers name=bob password=12345 state=present
# Define more users with various specific roles (if not defined, no roles are assigned)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present
# Add a user to a database in a replica set; the primary server is automatically discovered and written to
- mongodb_user: database=burgers name=bob replica_set=blecher password=12345 roles='readWriteAnyDatabase' state=present
Note: Requires the pymongo Python package on the remote host, version 2.4.2+. This can be installed using pip or
the OS package manager. @see http://api.mongodb.org/python/current/installation.html
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Copy database dump file to remote host and restore it to database ’my_db’
- copy: src=dump.sql.bz2 dest=/tmp
- mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb. (See apt.)
Note: Both login_password and login_user are required when you are passing credentials. If none are present, the
module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login
of root with no password.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create database user with name 'bob' and password '12345' with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present
# Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION'
- mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present
# Ensure no user named 'sally' exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent
[client]
user=root
password=n<_665{vS43y
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb.
Note: Both login_password and login_username are required when you are passing credentials. If none are
present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL
default login of ‘root’ with no password.
Note: MySQL server installs with default login_user of ‘root’ and no password. To secure this user as part of an
idempotent playbook, you must create at least two tasks: the first must change the root user’s password, without
providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root
credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme
                 encoding='UTF-8'
                 lc_collate='de_DE.UTF-8'
                 lc_ctype='de_DE.UTF-8'
                 template='template0'
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on
the host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is
installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case),
then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql,
libpq-dev, and python-psycopg2 packages on the remote host before using this module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
database=library
state=present
privs=SELECT,INSERT,UPDATE
type=table
objs=books,authors
schema=public
roles=librarian,reader
grant_option=yes
Note: Default authentication assumes that postgresql_privs is run by the postgres user on the remote host (Ansible's
user or sudo-user).
Note: This module requires Python package psycopg2 to be installed on the remote host. In the default case of
the remote host also being the PostgreSQL server, PostgreSQL has to be installed there as well, obviously. For
Debian/Ubuntu-based systems, install packages postgresql and python-psycopg2.
Note: Parameters that accept comma separated lists (privs, objs, roles) have singular alias names (priv, obj, role).
Note: To revoke only GRANT OPTION for a specific object, set state to present and grant_option to no (see
examples).
Note: Note that when revoking privileges from a role R, this role may still have access via privileges granted to any
role R is a member of, including PUBLIC.
Note: Note that when revoking privileges from a role R, you do so as the user specified via login. If R has been
granted the same privileges by another user also, R can still access database objects via these privileges.
• Synopsis
• Options
• Examples
Synopsis
Add or remove PostgreSQL users (roles) from a remote host and, optionally, grant the users access to an existing
database or tables. The fundamental function of the module is to create, or delete, roles from a PostgreSQL cluster.
Privilege assignment, or removal, is an optional step, which works on one database at a time. This allows the
module to be called several times in the same playbook to modify the permissions on different databases, or to grant
permissions to already existing users. A user cannot be removed until all the privileges have been stripped from the
user. In such a situation, if the module tries to remove the user it will fail. To avoid this, the fail_on_user
option signals the module to try to remove the user, but if that is not possible, to keep going; the module will report
whether changes happened and, separately, whether the user was removed or not.
Options
Examples
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL
# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on the
host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed
on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then
PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev,
and python-psycopg2 packages on the remote host before using this module.
Note: If you specify PUBLIC as the user, then the privilege changes will apply to all users. You may not specify
password or role_attr_flags when the PUBLIC user is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Requires the redis-py Python package on the remote host. You can install it with pip (pip install redis) or with
a package manager. https://github.com/andymccurdy/redis-py
Note: If the redis master instance that we are making a slave of is password protected, this needs to be set in
redis.conf via the masterauth variable.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The “acl” module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries
are installed.
• Synopsis
• Options
• Examples
Synopsis
Assembles a configuration file from fragments. Often a particular program takes a single configuration file and does
not support a conf.d style structure where it is easy to build up the configuration from multiple sources. assemble
will take a directory of files that can be local or have already been transferred to the system, and concatenate them
together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea fragments.
Options
Examples
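A minimal sketch, assuming fragment files were already placed in a conf.d-style directory on the target (paths are illustrative):
# Concatenate all fragments in /etc/someapp/conf.d into one destination file
- assemble: src=/etc/someapp/conf.d dest=/etc/someapp/someapp.conf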
• Synopsis
• Options
• Examples
Synopsis
The copy module copies a file on the local box to remote locations.
Options
Examples
# Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
- copy: src=/https/www.scribd.com/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes
# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Note: The "copy" module's recursive copy facility does not scale to lots (>hundreds) of files. For an alternative, see
the synchronize module, which is a wrapper around rsync.
• Synopsis
• Options
• Examples
Synopsis
This module works like copy, but in reverse. It is used for fetching files from remote machines and storing them
locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be
present, so a missing remote file won’t be an error unless fail_on_missing is set to ‘yes’.
Options
Examples
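A minimal sketch (paths are illustrative); the file lands in a per-hostname tree on the control machine:
# Store the file into /tmp/fetched/<hostname>/tmp/somefile
- fetch: src=/tmp/somefile dest=/tmp/fetched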
• Synopsis
• Options
• Examples
Synopsis
Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support
the same options as the file module - including copy, template, and assemble.
Options
Examples
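A minimal sketch (paths, owners, and modes are illustrative):
# Set ownership, group, and mode on an existing file
- file: path=/etc/foo.conf owner=foo group=foo mode=0644
# Create a symlink
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link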
• Synopsis
• Options
• Examples
Synopsis
Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole
with, say, template or assemble. Adds missing sections if they don’t exist. Comments are discarded when the
source file is read, and therefore will not show up in the destination file.
Options
Examples
- ini_file: dest=/etc/anotherconf
section=drinks
option=temperature
value=cold
backup=yes
Note: While it is possible to add an option without specifying a value, this makes no sense.
Note: A section named default cannot be added by the module, but if it exists, individual options within the
section can be updated. (This is a limitation of Python’s ConfigParser.) Either use template to create a base INI
file with a [default] section, or use lineinfile to add the missing line.
lineinfile - Ensure a particular line is in a file, or replace an existing line using a back-referenced
regular expression.
• Synopsis
• Options
• Examples
Synopsis
This module will search a file for a line, and ensure that it is present or absent. This is primarily useful when you want
to change a single line in a file only. For other cases, see the copy or template modules.
Options
Examples
# Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"
replace - Replace all instances of a particular string in a file using a back-referenced regular expres-
sion.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to ’root’. Fail otherwise.
- stat: path=/etc/foo.conf
register: st
- fail: msg="Whoops! file ownership has changed"
when: st.stat.pw_name != ’root’
synchronize - Uses rsync to make synchronizing file paths in your playbooks quick and easy.
• Synopsis
• Options
• Examples
Synopsis
This module is a wrapper around rsync. You may still need to call rsync directly via command or shell depending
on your use case. The synchronize action is meant to do common things with rsync easily. It does not provide
access to the full power of rsync, but does make most invocations easier to follow.
Options
Examples
# Synchronization with --archive options enabled except for --times, with --checksum option enabled
synchronize: src=some/relative/path dest=/some/absolute/path checksum=yes times=no
# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes
Note: Inspect the verbose output to validate the destination user/host/path are what was expected.
Note: The remote user for the dest path will always be the remote_user, not the sudo_user.
Note: To exclude files and directories from being synchronized, you may add .rsync-filter files to the source
directory.
• Synopsis
• Options
• Examples
Synopsis
Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the tem-
plate formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/). Six
additional variables can be used in templates: ansible_managed (configurable via the defaults section of
ansible.cfg) contains a string which can be used to describe the template name, host, modification time of the tem-
plate file and the owner uid, template_host contains the node name of the template’s machine, template_uid
the owner, template_path the absolute path of the template, template_fullpath is the absolute path of the
template, and template_run_date is the date that the template was rendered. Note that including a string that
uses a date in the template will result in the template being marked ‘changed’ each time.
Options
Examples
# Copy a new "sudoers" file into place, after passing validation with visudo
- template: src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Note: Since Ansible version 0.9, templates are loaded with trim_blocks=True.
Note: Also, you can override jinja2 settings by adding a special header to the template file, e.g.
#jinja2:variable_start_string:'[%', variable_end_string:'%]', which changes the variable
interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things that
look like, but should not be, Jinja2. raw/endraw in Jinja2 will not work as you expect because templates in Ansible
are recursively evaluated.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: can handle gzip, bzip2 and xz compressed as well as uncompressed tar files
Note: uses tar’s --diff arg to calculate if changed or not. If this arg is not supported, it will always unpack the
archive
Note: does not detect if a .zip file is different from destination - always unzips
Note: existing files/directories in the destination which are not in the archive are not touched. This is the same
behavior as a normal archive extraction
Note: existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if
the archive should be unpacked or not
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
Synopsis
Options
add_host - add a host (and alternatively a group) to the ansible-playbook in-memory inventory
• Synopsis
• Options
• Examples
Synopsis
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables
so you can define the new hosts more fully.
Options
Examples
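A minimal sketch; new_instance_ip is a hypothetical registered variable holding the address of a just-created machine:
# Add a host to the in-memory group 'just_created', with an extra variable
- add_host: name={{ new_instance_ip }} groups=just_created foo=42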
• Synopsis
• Options
• Examples
Synopsis
Use facts to create ad-hoc groups that can be used later in a playbook.
Options
Examples
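A minimal sketch using a standard fact:
# Create groups based on the machine architecture
- group_by: key=machine_{{ ansible_machine }}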
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: ensure the default vhost contains the HA policy via a dict
  rabbitmq_policy: name=HA pattern='.*'
  args:
    tags:
      "ha-mode": all
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- airbrake_deployment: token=AAAAAA
                       environment='staging'
                       user='ansible'
                       revision=4.2
Author [email protected]
• Synopsis
• Options
• Examples
Synopsis
Options
Note: bprobe is required to send data, but not to register a meter
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Requires the LogEntries agent which can be installed following the instructions at logentries.com
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The nagios module has two basic functions: scheduling downtime and toggling alerts for services or hosts. All
actions require the host parameter to be given explicitly. In playbooks you can use the {{inventory_hostname}}
variable to refer to the host the playbook is currently running on. You can specify multiple services at once by
separating them with commas, e.g., services=httpd,nfs,puppet. When specifying what service to handle
there is a special service value, host, which will handle alerts/downtime for the host itself, e.g., service=host.
This keyword may not be given with other services at the same time. Setting alerts/downtime for a host does not affect
alerts/downtime for any of the services running on it. To schedule downtime for all services on a particular host, use
the keyword "all", e.g., service=all. When using the nagios module you will need to specify your Nagios server
using the delegate_to parameter.
Options
Examples
# SHUT UP NAGIOS
- nagios: action=silence_nagios
# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios
# command something
- nagios: action=command command=’DISABLE_FAILURE_PREDICTION’
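The downtime function described in the synopsis could look like the following minimal sketch (service name and duration are illustrative):
# Set 30 minutes of downtime for the httpd service
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}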
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- newrelic_deployment: token=AAAAAA
                       app_name=myapp
                       user='ansible deployment'
                       revision=1.0
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc
[email protected]
passwd=password123
state=running
service=FOO123
hours=4
desc=deployment
Note: This module does not yet have support to end maintenance windows.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module does not yet have support to add/remove checks.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- rollbar_deployment: token=AAAAAA
                      environment='staging'
                      user='ansible'
                      revision=4.2
                      rollbar_user='admin'
                      comment='Test Deploy'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: enable interface Ethernet 1
action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create switchport ethernet1 access port
action: arista_l2interface interface_id=Ethernet1 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create lag interface
action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create vlan 999
action: arista_vlan vlan_id=999 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
tasks:
- name: Collect BIG-IP facts
local_action: >
bigip_facts
server=lb.mydomain.com
user=admin
password=mysecret
include=interface,vlan
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
        state: present
        server: "{{ f5server }}"
        user: "{{ f5user }}"
        password: "{{ f5password }}"
        name: "{{ item.monitorname }}"
        type: tcp
        send: "{{ item.send }}"
        receive: "{{ item.receive }}"
      with_items: f5monitors-halftcp
    - name: BIGIP F5 | Remove TCP Monitor
      local_action:
        module: bigip_monitor_tcp
        state: absent
        server: "{{ f5server }}"
        user: "{{ f5user }}"
        password: "{{ f5password }}"
        name: "{{ monitorname }}"
      with_flattened:
        - f5monitors-tcp
        - f5monitors-halftcp
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
tasks:
- name: Add node
local_action: >
bigip_node
server=lb.mydomain.com
user=admin
password=mysecret
state=present
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
name="{{ ansible_default_ipv4["address"] }}"
# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play’s host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: localhost
tasks:
- name: Create pool
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=present
name=matthite-pool
partition=matthite
lb_method=least_connection_member
slow_ramp_time=120
- hosts: bigip-test
tasks:
- name: Add pool member
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=present
name=matthite-pool
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
port=80
- hosts: localhost
tasks:
- name: Delete pool
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=absent
name=matthite-pool
partition=matthite
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
tasks:
- name: Add pool member
local_action: >
bigip_pool_member
server=lb.mydomain.com
user=admin
password=mysecret
state=present
pool=matthite-pool
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
port=80
description="web server"
connection_limit=100
rate_limit=50
ratio=2
password=mysecret
state=absent
pool=matthite-pool
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
port=80
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# delete a domain
- local_action: dnsimple domain=my.com state=absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone
set. Be sure you are within a few seconds of actual time by using NTP.
Note: This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be
registered and used in your playbooks.
• Synopsis
• Examples
Synopsis
Examples
# ok: [10.13.0.22] => (item=eth1) => {"item": "eth1", "msg": "switch2.example.com / Gi0/3"}
# ok: [10.13.0.22] => (item=eth0) => {"item": "eth0", "msg": "switch3.example.com / Gi0/3"}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server must have direct access to the
remote resource. By default, if an environment variable <protocol>_proxy is set on the target host, requests
will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see setting the
environment), or by using the use_proxy option.
Options
Examples
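A minimal sketch (URL, destination, and mode are illustrative):
# Download a file to the remote host
- get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440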
• Synopsis
• Options
• Examples
Synopsis
This module works like fetch. It is used for fetching a base64-encoded blob containing the data in a remote file.
Options
Examples
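A minimal sketch; decoding the fetched content with the b64decode filter is an assumption about how you would consume it:
- slurp: src=/etc/mtab
  register: mtab
- debug: msg="{{ mtab.content | b64decode }}"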
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com
# Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents.
- action: uri url=http://www.example.com return_content=yes
  register: webpage
- action: fail
  when: 'AWESOME' not in "{{ webpage.content }}"
- uri: url=https://your.jira.example.com/rest/api/2/issue/
       method=POST user=your_username password=your_pass
       body="{{ lookup('file','issue.json') }}" force_basic_auth=yes
       status_code=201 HEADER_Content-Type="application/json"
- uri: url=https://your.form.based.auth.example.com/index.php
       method=POST body="name=your_username&password=your_password&enter=Sign%20in"
       status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
register: login
- uri: url=https://your.form.based.auth.example.com/dashboard.php
method=GET return_content=yes HEADER_Cookie="{{login.set_cookie}}"
- uri: url=http://{{jenkins.host}}/job/{{jenkins.job}}/build?token={{jenkins.token}}
       method=GET user={{jenkins.user}} password={{jenkins.password}} force_basic_auth=yes status_code=201
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- flowdock: type=inbox
            token=AAAAAA
            [email protected]
            source='my cool app'
            msg='test from ansible'
            subject='test subject'
- flowdock: type=chat
            token=AAAAAA
            external_user_name=testuser
            msg='test from ansible'
            tags=tag1,tag2,tag3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- grove: >
channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
service=my-app
message=deployed {{ target }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module is useful for sending emails from playbooks. One may wonder why automate sending emails? In complex
environments there are from time to time processes that cannot be automated, either because you lack the authority
to make it so, or because not everyone agrees to a common approach. If you cannot automate a specific step, but the
step is non-blocking, sending out an email to the responsible party to make him perform his part of the bargain is an
elegant way to put the responsibility in someone else’s lap. Of course sending out a mail can be equally useful as a
way to notify one or more people in a team that a specific action has been (successfully) taken.
Options
Examples
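A minimal sketch of the notification use case above, run on the control machine via local_action (recipient and subject are illustrative):
# Notify an operator that provisioning finished
- local_action: mail to='ops@example.com' subject='Provisioning of {{ ansible_hostname }} complete'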
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- local_action: mqtt
topic=service/ansible/{{ ansible_hostname }}
payload="Hello at {{ ansible_date_time.iso8601 }}"
qos=0
retain=false
client_id=ans001
Note: This module requires a connection to an MQTT broker such as Mosquitto http://mosquitto.org and the Paho
mqtt Python client (https://pypi.python.org/pypi/paho-mqtt).
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If you like this module, you may also be interested in the osx_say callback in the plugins/ directory of the
source checkout.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The sns module sends notifications to a topic on your Amazon SNS account.
Options
Examples
- name: Send notification messages via SNS with short message for SMS
  local_action:
    module: sns
    msg: "{{ inventory_hostname }} has completed the play."
    sms: "deployed!"
    subject: "Deploy complete!"
    topic: "deploy"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# send a text message from the local server about the build status to (555) 303 5681
# note: you have to have purchased the ’from_number’ on your Twilio account
- local_action: text msg="All servers with webserver role are now configured."
account_sid={{ twilio_account_sid }}
auth_token={{ twilio_auth_token }}
from_number=+15552014545 to_number=+15553035681
Note: Like the other notification modules, this one requires an external dependency to work. In this case, you’ll need
a Twilio account with a purchased or verified phone number to send the text message.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the repository cache and update package "nginx" to latest version using default release squeeze-backports
- apt: name=nginx state=latest default_release=squeeze-backports update_cache=yes
# Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600
Note: Three of the upgrade modes (full, safe and its alias yes) require aptitude, otherwise apt-get
suffices.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: As a sanity check, the downloaded key id must match the one specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'
Note: This module works on Debian and Ubuntu and requires python-apt.
Note: This module supports Debian Squeeze (version 6) as well as its successors.
Note: This module treats Debian and Ubuntu distributions separately, so PPAs can be installed only on Ubuntu
machines.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Downloads and installs all the libs and dependencies outlined in the /path/to/project/composer.lock
- composer: working_dir=/path/to/project
Note: Default options that are always appended in each execution are --no-ansi, --no-progress, and --no-interaction
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that cpanm (http://search.cpan.org/dist/App-cpanminus/bin/cpanm) must be installed on the remote
host.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that the easy_install module can only install Python libraries. Thus this module is not
able to remove libraries. It is generally recommended to use the pip module which you can first install using
easy_install.
Note: Also note that virtualenv must be installed on the remote host if the virtualenv parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install packages based on package.json using the npm installed with nvm v0.10.1.
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author Afterburn
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Manage Python library dependencies. To use this module, one of the following keys is required: name or
requirements.
Options
Examples
# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args.
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'
# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv
# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes
Note: Please note that virtualenv (http://www.virtualenv.org/) must be installed on the remote host if the virtualenv
parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author bleader
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: When using pkgsite, be aware that packages already in the cache won't be downloaded again.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install a package
pkgutil: name=CSWcommon state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author berenddeboer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
redhat_subscription - Manage Red Hat Network registration and subscriptions using the
subscription-manager command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true
Note: In order to register a system, subscription-manager requires either a username and password, or an
activationkey.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
rhn_register - Manage Red Hat Network registration using the rhnreg_ks command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass
Note: In order to register a system, rhnreg_ks requires either a username and password, or an activationkey.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Manages SVR4 packages on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available
as a legacy feature in Solaris 11. Note that this is a very basic packaging system. It will not enforce dependencies on
install or remove.
Options
Examples
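A minimal sketch (package name and source path are illustrative):
# Install a package from an already copied datastream file
- svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present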
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Installs, upgrades, removes, and lists packages and groups with the yum package manager.
Options
Examples
- name: install the latest version of Apache from the testing repo
yum: name=httpd enablerepo=testing state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install "nmap"
- zypper: name=nmap state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public key
in /etc/ssh/ssh_known_hosts before calling the git module, with the following command: ssh-keyscan -H
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Clean all hooks for this repo that had an error on the last update. Since this works for all hooks in a repo it is
# probably best that this would be called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Ensure the current working copy is inside the stable branch and deletes untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the hg module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a
checkout.
Options
Examples
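A minimal sketch (repository URL and destination are illustrative):
# Checkout a repository, updating it if it already exists at dest
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout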
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Note: Requires at
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
# Using with_file
- name: Set up authorized_keys for the deploy user
  authorized_key: user=deploy
                  key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john
# Using key_options:
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  key_options='no-port-forwarding,host="10.0.1.1"'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The capabilities system will automatically transform operators and flags into the effective set, so (for example,
cap_foo=ep will probably become cap_foo+ep). This module does not attempt to determine the final operator and
flags to compare, so you will want to ensure that your capabilities argument matches the final capabilities.
• Synopsis
• Options
• Examples
Synopsis
Use this module to manage crontab entries. This module allows you to create named crontab entries, update, or
delete them. The module includes one line with the description of the crontab entry "#Ansible: <name>"
corresponding to the “name” passed to the module, which is used by future ansible/module calls to find/check the
state.
Options
Examples
# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# By specifying a package, you can register/return the list of questions and current values
debconf: name='tzdata'
Note: A number of questions have to be answered (depending on the package). Use ‘debconf-show <package>’ on
any Debian or derivative with the package installed to see questions/settings available.
• Synopsis
• Examples
Synopsis
Runs the facter discovery program (https://github.com/puppetlabs/facter) on the remote system, returning JSON data
that can be useful for inventory purposes.
Examples
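A minimal ad-hoc sketch (hostname illustrative):
ansible www.example.net -m facter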
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- hostname: name=web01
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg: vg=vg.services pvs=/dev/sda1 pesize=32
Note: The module does not modify the PE size for an already present volume group.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a logical volume the size of all remaining space in the volume group
- lvol: vg=firefly lv=test size=100%FREE
• Synopsis
• Examples
Synopsis
Similar to the facter module, this runs the Ohai discovery program (http://wiki.opscode.com/display/chef/Ohai) on
the remote host and returns JSON inventory data. Ohai data is a bit more verbose and nested than facter.
Examples
# Retrieve (ohai) data from all Web servers and store in one file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
• Synopsis
• Examples
Synopsis
A trivial test module; this module always returns pong on successful contact. It does not make sense in playbooks,
but it is useful from /usr/bin/ansible.
Examples
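For instance, a quick ad-hoc connectivity check against a group of hosts (the group name is illustrative):
# Test ability to log in to all webservers
ansible webservers -m ping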
• Synopsis
• Options
• Examples
Synopsis
Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but
will let you know when it is required.
Options
Examples
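A minimal sketch of usage (the policy and state values are illustrative):
- selinux: policy=targeted state=enforcing
- selinux: state=disabled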
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes
• Synopsis
• Options
• Examples
Synopsis
This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in
playbooks. It can also be executed directly by /usr/bin/ansible to check what variables are available to a host.
Ansible provides many facts about the system, automatically.
Options
Examples
# Display facts from all hosts and store them indexed by hostname at /tmp/facts.
ansible all -m setup --tree /tmp/facts
# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a 'filter=ansible_*_mb'
Note: More ansible facts will be added with successive releases. If facter or ohai are installed, variables from
these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with
facter_ and ohai_ so it’s easy to tell their source. All variables are bubbled up to the caller. Using the ansible
facts and choosing to not install facter and ohai means you can avoid Ruby-dependencies on your remote systems.
(See also facter and ohai.)
Note: The filter option filters only the first level subkey below ansible_facts.
Note: If the target host is Windows, you will not currently have the ability to use fact_path or filter as this is
provided by a simpler implementation of the module. Different facts are returned for Windows hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Set logging
ufw: logging=on
# Allow OpenSSH
ufw: rule=allow name=OpenSSH

# Allow all access from RFC1918 networks to this host
ufw: rule=allow src={{ item }}
with_items:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
# Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469
ufw: rule=allow interface=eth0 direction=in proto=udp src=1.2.3.5 from_port=5469 dest=1.2.3.4 to_port=5469
# Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host.
# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work.
ufw: rule=deny proto=tcp src=2001:db8::/32 port=25
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Add the user 'johnd' with a specific uid and a primary group of 'admin'
- user: name=johnd comment="John Doe" uid=1040 group=admin

# Add the user 'james' with a bash shell, appending the groups 'admins' and 'developers' to the user's groups
- user: name=james shell=/bin/bash groups=admins,developers append=yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# To use accelerate mode, simply add "accelerate: true" to your play. The initial
# key exchange and starting up of the daemon will occur over SSH, but all commands and
# subsequent actions will be conducted over the raw socket connection using AES encryption
- hosts: devservers
  accelerate: true
  tasks:
    - command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using accelerated mode.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- assert:
    that:
      - "'foo' in some_command_result.stdout"
      - "number_of_the_counting == 3"
• Synopsis
• Options
• Examples
Synopsis
This module prints statements during execution and can be useful for debugging variables or expressions without
necessarily halting the playbook. Useful for debugging together with the ‘when:’ directive.
Options
Examples
# Example that prints the loopback address and gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"
- shell: /usr/bin/uptime
register: result
- debug: var=result
• Synopsis
• Options
• Examples
Synopsis
This module fails the progress with a custom message. It can be useful for bailing out when a certain condition is met
using when.
Options
Examples
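A small sketch pairing fail with a when condition (the variable and message are illustrative):
# Example playbook using fail and when together
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"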
• Synopsis
• Options
• Examples
Synopsis
This modules launches an ephemeral fireball ZeroMQ message bus daemon on the remote node which Ansible can
use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of
time. Starting a new fireball as a given user terminates any existing user fireballs. Fireball mode is AES encrypted
Options
Examples
# This example playbook has two plays: the first launches ’fireball’ mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
  gather_facts: false
  connection: ssh
  sudo: yes
  tasks:
    - action: fireball

- hosts: devservers
  connection: fireball
  tasks:
    - command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using fireball mode.
• Synopsis
• Options
• Examples
Synopsis
Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional.
The default behavior is to pause with a prompt. You can use ctrl+c if you wish to advance a pause earlier than it is
set to expire or if you need to abort a playbook run entirely. To continue early: press ctrl+c and then c. To abort
a playbook: press ctrl+c and then a. The pause module integrates into async/parallelized playbooks without any
special considerations (see also: Rolling Updates). When using pauses with the serial playbook parameter (as in
rolling updates) you are only prompted once for the current group of hosts.
Options
Examples
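Some representative uses (the durations and prompt text are illustrative):
# Pause for 5 minutes to build app cache.
- pause: minutes=5

# Pause until you can verify updates to an application were successful.
- pause:

# A helpful reminder of what to look out for post-update.
- pause: prompt="Make sure org.foo.FooOverload exception is not present"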
• Synopsis
• Options
• Examples
Synopsis
Waiting for a port to become available is useful for when services are not immediately available after their init scripts
return - which is true of certain Java application servers. It is also useful when starting guests with the virt module
and needing to pause until they are ready. This module can also be used to wait for a string matching a regex to be present
in a file. In 1.6 and later, this module can also be used to wait for a file to be available or absent on the filesystem.
Options
Examples
# wait 300 seconds for port 8000 to become open on the host, don’t start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path=django_dir apps=main.SmokeTest
Note: virtualenv (http://www.virtualenv.org) must be installed on the remote host if the virtualenv parameter is
specified.
Note: This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already
exist at the given location.
Note: This module assumes English error messages for the ‘createcachetable’ command to detect table existence,
unfortunately.
Note: To be able to use the migrate command, you must have south installed and added as an app in your settings.
Note: To be able to use the collectstatic command, you must have enabled staticfiles in your settings.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Example playbook entries using the ejabberd_user module to manage user state.
tasks:
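  # A minimal sketch; the username, host, and password shown are illustrative.
  - name: create a user if it does not exist
    ejabberd_user: username=test host=server password=password

  - name: delete a user if it exists
    ejabberd_user: username=test host=server state=absent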
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module depends on the passlib Python library, which needs to be installed on all target systems.
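A minimal sketch (the path, user, and password are illustrative):
# Add a user to a password file and ensure permissions are set
- htpasswd: path=/etc/nginx/passwdfile name=janedoe password=9s36?;fyNp owner=root group=www-data mode=0640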
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ensure no identically named application is deployed through the JBoss CLI.
supervisorctl - Manage the state of a program or group of programs running via supervisord
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: When state = present, the module will call supervisorctl reread then supervisorctl add if
the program/group does not exist.
Note: When state = restarted, the module will call supervisorctl update then call supervisorctl
restart.
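A minimal sketch (the program name is illustrative):
# Manage the state of a program to be in 'started' state.
- supervisorctl: name=my_app state=started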
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Playbook example
---
- name: Install IIS
  hosts: all
  gather_facts: false
  tasks:
    - name: Install IIS
      win_feature:
        name: "Web-Server"
        state: present
        restart: yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Playbook example
- name: Download earthrise.jpg to 'C:\Users\RandomUser\earthrise.jpg'
  win_get_url:
    url: 'http://www.example.com/earthrise.jpg'
    dest: 'C:\Users\RandomUser\earthrise.jpg'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Remove a group
win_group:
  name: deploy
  state: absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Restart a service
win_service:
  name: spooler
  state: restarted
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- win_stat: path=C:\foo.ini
  register: file_info

- debug: var=file_info
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Ad-hoc example
$ ansible -i hosts -m win_user -a "name=bob password=Password12345" all
$ ansible -i hosts -m win_user -a "name=bob password=Password12345 state=absent" all
# Playbook example
---
- name: Add a user
  hosts: all
  gather_facts: false
  tasks:
    - name: Add User
      win_user:
        name: ansible
        password: "@ns1bl3"
This section is new and evolving. The idea here is to explore particular use cases in greater depth and provide a more
“top down” explanation of some basic features.
Introduction
Note: This section of the documentation is under construction. We are in the process of adding more examples about
all of the EC2 modules and how they work together. There’s also an ec2 example in the language_features directory of
the ansible-examples github repository that you may wish to consult. Once complete, there will also be new examples
of ec2 in ansible-examples.
Ansible contains a number of core modules for interacting with Amazon Web Services (AWS). These also work
with Eucalyptus, which is an AWS compatible private cloud solution. There are other supported cloud types, but this
documentation chapter is about AWS API clouds. The purpose of this section is to explain how to put Ansible modules
together (and use inventory scripts) to use Ansible in an AWS context.
Requirements for the AWS modules are minimal. All of the modules require and are tested against boto 2.5 or higher.
You’ll need this Python module installed on the execution host. If you are using Red Hat Enterprise Linux or CentOS,
install boto from EPEL:
$ yum install python-boto
In your playbooks, we’ll typically be using the following pattern for provisioning steps:
- hosts: localhost
  connection: local
  gather_facts: False
Provisioning
The ec2 module provides the ability to provision instances within EC2. Typically the provisioning task will be per-
formed against your Ansible master server in a play that operates on localhost using the local connection type. If
you are doing an EC2 operation mid-stream inside a regular play operating on remote hosts, you may want to use the
local_action keyword for that particular task. Read Delegation, Rolling Updates, and Local Actions for more
about local actions.
Note: Authentication with the AWS-related modules is handled by either specifying your access and secret key as
ENV variables or passing them as module arguments.
Note: To talk to specific endpoints, the environment variable EC2_URL can be set. This is useful if using a private
cloud like Eucalyptus, exporting the variable as EC2_URL=https://myhost:8773/services/Eucalyptus. This can be set
using the ‘environment’ keyword in Ansible if you like.
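As a sketch, the keyword could be attached to a provisioning task like this (the endpoint and parameters are illustrative):
- name: Provision instances against a private cloud endpoint
  ec2: keypair={{ mykeypair }} image={{ image }} wait=true
  environment:
    EC2_URL: https://myhost:8773/services/Eucalyptus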
In a play, this might look like (assuming the parameters are held as vars):
tasks:
  - name: Provision a set of instances
    ec2: >
      keypair={{mykeypair}}
      group={{security_group}}
      instance_type={{instance_type}}
      image={{image}}
      wait=true
      count={{number}}
    register: ec2
By registering the return value, it’s then possible to dynamically create a host group consisting of these new instances. This
facilitates performing configuration actions on the hosts immediately in a subsequent task:
- name: Add all instance public IPs to host group
  add_host: hostname={{ item.public_ip }} groupname=ec2hosts
  with_items: ec2.instances
With the host group now created, a second play in your provision playbook might now have some configuration steps:
- name: Configuration play
  hosts: ec2hosts
  user: ec2-user
  gather_facts: true
  tasks:
    - name: Check NTP service
      service: name=ntpd state=started
Rather than include configuration inline, you may also choose to just do it as a task include or a role.
The method above ties the configuration of a host with the provisioning step. This isn’t always ideal and leads us onto
the next section.
Advanced Usage
Host Inventory
Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the ec2
inventory plugin.
Even for larger environments, you might have nodes spun up from Cloud Formations or other tooling. You don’t have
to use Ansible to spin up guests. Once these are created and you wish to configure them, the EC2 API can be used
to return system grouping with the help of the EC2 inventory script. This script can be used to group resources by
their security group or tags. Tagging is highly recommended in EC2 and can provide an easy way to sort between host
groups and roles. The inventory script is documented in the API section.
You may wish to schedule a regular refresh of the inventory cache to accommodate frequent changes in resources:
# ./ec2.py --refresh-cache
Put this into a crontab as appropriate to make calls from your Ansible master server to the EC2 API endpoints and
gather host information. The aim is to keep the view of hosts as up-to-date as possible, so schedule accordingly.
Playbook calls could then also be scheduled to act on the refreshed hosts inventory after each refresh. This approach
means that machine images can remain “raw”, containing no payload and OS-only. Configuration of the workload is
handled entirely by Ansible.
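As a sketch, a crontab entry along these lines would keep the cache fresh (the path and interval are illustrative):
# Refresh the EC2 inventory cache every 15 minutes
*/15 * * * * /path/to/inventory/ec2.py --refresh-cache > /dev/null 2>&1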
Tags
There’s a feature in the ec2 inventory script where hosts tagged with certain keys and values automatically appear in
certain groups.
For instance, if a host is given the “class” tag with the value of “webserver”, it will be automatically discoverable via
a dynamic group like so:
- hosts: tag_class_webserver
  tasks:
    - ping:
Using this philosophy can be a great way to manage groups dynamically, without having to maintain separate inventory.
Pull Configuration
For some the delay between refreshing host information and acting on that host information (i.e. running Ansible
tasks against the hosts) may be too long. This may be the case in such scenarios where EC2 AutoScaling is being
used to scale the number of instances as a result of a particular event. Such an event may require that hosts come
online and are configured as soon as possible (even a 1 minute delay may be undesirable). It’s possible to pre-bake
machine images which contain the necessary ansible-pull script and components to pull and run a playbook via git.
The machine images could be configured to run ansible-pull upon boot as part of the bootstrapping procedure.
Read Ansible-Pull for more information on pull-mode playbooks.
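As a sketch, a boot-time hook (rc.local, an init script, or cloud-init user data) might invoke something like the
following; the repository URL and playbook name are hypothetical:
# Fetch the latest playbooks from git and apply them to the local machine
ansible-pull -U https://github.com/example/ansible-config.git local.yml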
(Various developments around Ansible are also going to make this easier in the near future. Stay tuned!)
Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a
defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be
a great way to reconfigure ephemeral nodes. See the Tower documentation for more details. Click on the Tower link
in the sidebar for details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less informa-
tion has to be shared with remote hosts.
Use Cases
This section covers some usage examples built around a specific use case.
Example 1
Example 1: I’m using CloudFormation to deploy a specific infrastructure stack. I’d like to manage con-
figuration of the instances with Ansible.
Provision instances with your tool of choice and consider using the inventory plugin to group hosts based on particular
tags or security group. Consider tagging instances you wish to manage with Ansible with a suitably unique key=value
tag.
Note: Ansible also has a cloudformation module you may wish to explore.
Example 2
Example 2: I’m using AutoScaling to dynamically scale up and scale down the number of instances.
This means the number of hosts is constantly fluctuating but I’m letting EC2 automatically handle the
provisioning of these instances. I don’t want to fully bake a machine image, I’d like to use Ansible to
configure the hosts.
There are several approaches to this use case. The first is to use the inventory plugin to regularly refresh host informa-
tion and then target hosts based on the latest inventory data. The second is to use ansible-pull triggered by a user-data
script (specified in the launch configuration) which would then mean that each instance would fetch Ansible and the
latest playbook from a git repository and run locally to configure itself. You could also use the Tower callback feature.
Example 3
Example 3: I don’t want to use Ansible to manage my instances but I’d like to consider using Ansible to
build my fully-baked machine images.
There’s nothing to stop you doing this. If you like working with Ansible’s playbook format, you can write a playbook
to create an image: create an image file with dd, give it a filesystem, and then install packages and finally chroot into
it for further configuration. Ansible has the ‘chroot’ plugin for this purpose; just add the following to your inventory
file:
/chroot/path ansible_connection=chroot
Example 4
How would I create a new ec2 instance, provision it and then destroy it all in the same play?
# Use the ec2 module to create a new host and then add
# it to a special "ec2hosts" group.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    ec2_access_key: "--REMOVED--"
    ec2_secret_key: "--REMOVED--"
    keypair: "mykeyname"
    instance_type: "t1.micro"
    image: "ami-d03ea1e0"
    group: "mysecuritygroup"
    region: "us-west-2"
    zone: "us-west-2c"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           aws_access_key={{ ec2_access_key }}
           aws_secret_key={{ ec2_secret_key }}
           keypair={{ keypair }}
           instance_tags='{"foo":"bar"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info

    - debug: var=ec2_info
    - debug: var=item
      with_items: ec2_info.instance_ids

    - add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances

- hosts: ec2hosts
  gather_facts: True
  user: ec2-user
  sudo: True
  tasks:
    # ... configuration tasks for the new instances would go here ...

- hosts: ec2hosts
  gather_facts: True
  connection: local
  vars:
    ec2_access_key: "--REMOVED--"
    ec2_secret_key: "--REMOVED--"
    region: "us-west-2"
  tasks:
    - name: destroy all instances
      ec2: state='absent'
           aws_access_key={{ ec2_access_key }}
           aws_secret_key={{ ec2_secret_key }}
           region={{ region }}
           instance_ids={{ item }}
           wait=true
      with_items: hostvars[inventory_hostname]['ansible_ec2_instance_id']
Note: more examples of this are pending. You may also be interested in the ec2_ami module for taking AMIs of
running instances.
Pending Information
Introduction
Note: This section of the documentation is under construction. We are in the process of adding more examples about
the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud
in ansible-examples.
Ansible contains a number of core modules for interacting with Rackspace Cloud.
The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible
in a Rackspace Cloud context.
Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are
tested against pyrax 1.5 or higher. You’ll need this Python module installed on the execution host.
pyrax is not currently available in many operating system package repositories, so you will likely need to install it via
pip:
$ pip install pyrax
The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense
to add localhost to the inventory file. (Ansible may not require this manual step in the future):
[localhost]
localhost ansible_connection=local
Credentials File
The rax.py inventory script and all rax modules support a standard pyrax credentials file that looks like:
[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e
Setting the environment variable RAX_CREDS_FILE to the path of this file will tell Ansible where to load this
information from.
More information about this credentials file can be found at https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#auth
Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing
at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python.
This is done via the interpreter line in modules, however when instructed by setting the inventory variable ‘ansi-
ble_python_interpreter’, Ansible will use this specified path instead to find Python. This can be a cause of confusion
as one may assume that modules running on ‘localhost’, or perhaps running via ‘local_action’, are using the virtualenv
Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and
have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost
inventory definition to find this location as follows:
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
Note: pyrax may be installed in the global Python package scope or in a virtual environment. There are no special
considerations to keep in mind when installing pyrax.
Provisioning
Note: Authentication with the Rackspace-related modules is handled by either specifying your username and API
key as environment variables or passing them as module arguments, or by specifying the location of a credentials file.
Here’s what it would look like in a playbook, assuming the parameters were defined in variables:
tasks:
  - name: Provision a set of instances
    local_action:
      module: rax
      name: "{{ rax_name }}"
      flavor: "{{ rax_flavor }}"
      image: "{{ rax_image }}"
      count: "{{ rax_count }}"
      group: "{{ group }}"
      wait: yes
    register: rax
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By
registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory
(temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the
following example, the servers that were successfully created using the above task are dynamically added to a group
called “raxhosts”, with each node’s hostname, IP address, and root password being added to the inventory.
- name: Add the instances we created (by public IP) to the group 'raxhosts'
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax_accessipv4 }}"
    ansible_ssh_pass: "{{ item.rax_adminpass }}"
    groupname: raxhosts
  with_items: rax.success
  when: rax.action == 'create'
With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts
group.
- name: Configuration play
  hosts: raxhosts
  user: root
  roles:
    - ntp
    - webserver
The method above ties the configuration of a host with the provisioning step. This isn’t always what you want, and
leads us to the next section.
Host Inventory
Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the “rax”
inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage.
You might want to use this even if you are spinning up instances via other tools, including the Rackspace Cloud user
interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is
highly recommended in “rax” and can provide an easy way to sort between host groups and roles. If you don’t want
to use the rax.py dynamic inventory script, you could also still choose to manually manage your INI inventory file,
though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a
common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
rax.py
To use the rackspace dynamic inventory script, copy rax.py into your inventory directory and make it executable.
You can specify a credentials file for rax.py utilizing the RAX_CREDS_FILE environment variable:
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
Note: Users of Ansible Tower will note that dynamic inventory is natively supported by Tower, and all you have to
do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through
these steps.
rax.py also accepts a RAX_REGION environment variable, which can contain an individual region, or a comma
separated list of regions.
When using rax.py, you will not have a ‘localhost’ defined in the inventory.
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need
‘localhost’ defined. The recommended way to do this, would be to create an inventory directory, and place both
the rax.py script and a file containing localhost in it.
Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual
file, will cause ansible to evaluate each file in that directory for inventory.
Let’s test our inventory script to see if it can talk to Rackspace Cloud.
$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
Assuming things are properly configured, the rax.py inventory script will output information similar to the
following, which will be utilized for inventory and variables.
{
"ORD": [
"test"
],
"_meta": {
"hostvars": {
"test": {
"ansible_ssh_host": "1.1.1.1",
"rax_accessipv4": "1.1.1.1",
"rax_accessipv6": "2607:f0d0:1002:51::4",
"rax_addresses": {
"private": [
{
"addr": "2.2.2.2",
"version": 4
}
],
"public": [
{
"addr": "1.1.1.1",
"version": 4
},
{
"addr": "2607:f0d0:1002:51::4",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/perfor
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7b
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"2.2.2.2"
],
"public": [
"1.1.1.1",
"2607:f0d0:1002:51::4"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
}
}
}
}
Standard Inventory
When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous
to retrieve discoverable hostvar information from the Rackspace API.
This can be achieved with the rax_facts module and an inventory file similar to the following:
[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD
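A sketch of the corresponding play might look like this (the credentials path is illustrative):
- name: Gather info about servers
  hosts: test_servers
  gather_facts: False
  tasks:
    - name: Get facts about servers
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        name: "{{ inventory_hostname }}"
        region: "{{ rax_region }}"
    - name: Map some facts
      set_fact: ansible_ssh_host="{{ rax_accessipv4 }}"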
While you don’t need to know how it works, it may be interesting to know what kind of variables are returned.
The rax_facts module provides facts as follows, which match the rax.py inventory script:
{
"ansible_facts": {
"rax_accessipv4": "1.1.1.1",
"rax_accessipv6": "2607:f0d0:1002:51::4",
"rax_addresses": {
"private": [
{
"addr": "2.2.2.2",
"version": 4
}
],
"public": [
{
"addr": "1.1.1.1",
"version": 4
},
{
"addr": "2607:f0d0:1002:51::4",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-4
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"2.2.2.2"
],
"public": [
"1.1.1.1",
"2607:f0d0:1002:51::4"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
},
"changed": false
}
Use Cases
This section covers some additional usage examples built around a specific use case.
Example 1
Create an isolated cloud network, using the rax_network module:
- name: Network create request
  local_action:
    module: rax_network
    credentials: ~/.raxpub
    label: my-net
    cidr: 192.168.3.0/24
    region: IAD
    state: present
Example 2
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a
custom index.html
---
- name: Build environment
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Load Balancer create request
      local_action:
        module: rax_clb
        credentials: ~/.raxpub
        name: my-lb
        port: 80
        protocol: HTTP
        algorithm: ROUND_ROBIN
        type: PUBLIC
        timeout: 30
        region: IAD
        wait: yes
        state: present
        meta:
          app: my-cool-app
      register: clb

    - name: Network create request
      local_action:
        module: rax_network
        credentials: ~/.raxpub
        label: my-net
        cidr: 192.168.3.0/24
        state: present
        region: IAD
      register: network

- name: Configure servers
  hosts: raxhosts
  tasks:
    - name: Install nginx
      apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
      notify:
        - restart nginx
Advanced Usage
Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a
defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be
a great way to reconfigure ephemeral nodes. See the Tower documentation for more details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less informa-
tion has to be shared with remote hosts.
Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks,
deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of
software in an environment. Complex deployments might have previously required manual manipulation of load
balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the
deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered
application dependent on the number of nodes with common metadata. One could automate the following scenarios,
for example:
• Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load
balancer pool
• Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and soft-
ware installed
• A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommis-
sioned
• Servers and load balancers that have DNS records created and destroyed on creation and decommissioning,
respectively
Introduction
Note: This section of the documentation is under construction. We are in the process of adding more examples about
all of the GCE modules and how they work together. Upgrades via github pull requests are welcomed!
Ansible contains modules for managing Google Compute Engine resources, including creating instances, controlling
network access, working with persistent disks, and managing load balancers. Additionally, there is an inventory plugin
that can automatically suck down all of your GCE instances into Ansible dynamic inventory, and create groups by tag
and other properties.
The GCE modules all require the apache-libcloud module, which you can install from pip:
$ pip install apache-libcloud
Note: If you’re using Ansible on Mac OS X, libcloud also needs to access a CA cert chain. You’ll need to download
one (you can get one here.)
Credentials
To work with the GCE modules, you’ll first need to get some credentials. You can create a new one from the console by
going to the “APIs and Auth” section. Once you’ve created a new client ID and downloaded the generated private key
(in the pkcs12 format), you’ll need to convert the key by running the following command:
$ openssl pkcs12 -in pkey.pkcs12 -passin pass:notasecret -nodes -nocerts | openssl rsa -out pkey.pem
There are two different ways to provide credentials to Ansible so that it can talk with Google Cloud for provisioning
and configuration actions:
• by providing them to the modules directly
• by populating a secrets.py file
For the GCE modules you can specify the credentials as arguments:
• service_account_email: email associated with the project
• pem_file: path to the pem file
• project_id: id of the project
For example, to create a new instance using the cloud module, you can use the following configuration:
- name: Create instance(s)
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    service_account_email: [email protected]
    pem_file: /path/to/project.pem
    project_id: project-id
    machine_type: n1-standard-1
    image: debian-7
  tasks:
    - name: Launch instances
      gce:
        instance_names: dev
        machine_type: "{{ machine_type }}"
        image: "{{ image }}"
        service_account_email: "{{ service_account_email }}"
        pem_file: "{{ pem_file }}"
        project_id: "{{ project_id }}"
Create a file secrets.py looking like the following, and put it in some folder which is in your $PYTHONPATH:
GCE_PARAMS = ('[email protected]', '/path/to/project.pem')
GCE_KEYWORD_PARAMS = {'project': 'project-name'}
Now the modules can be used as above, but the account information can be omitted.
The best way to interact with your hosts is to use the gce inventory plugin, which dynamically queries GCE and tells
Ansible what nodes can be managed.
Note that when using the inventory script gce.py, you also need to populate the gce.ini file that you can find in
the plugins/inventory directory of the ansible checkout.
To use the GCE dynamic inventory script, copy gce.py from plugins/inventory into your inventory directory
and make it executable. You can specify credentials for gce.py using the GCE_INI_PATH environment variable –
the default is to look for gce.ini in the same directory as the inventory script.
Let’s see if inventory is working:
$ ./gce.py --list
You should see output describing the hosts you have, if any, running in Google Compute Engine.
Now let’s see if we can use the inventory script to talk to Google.
$ GCE_INI_PATH=~/.gce.ini ansible all -i gce.py -m setup
hostname | success >> {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"x.x.x.x"
],
As with all dynamic inventory plugins in Ansible, you can configure the inventory path in ansible.cfg. The recom-
mended way to use the inventory is to create an inventory directory, and place both the gce.py script and a file
containing localhost in it. This can allow for cloud inventory to be used alongside local inventory (such as a
physical datacenter) or machines running in different providers.
Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual
file will cause ansible to evaluate each file in that directory for inventory.
Let’s once again use our inventory script to see if it can talk to Google Cloud:
$ ansible all -i inventory/ -m setup
hostname | success >> {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"x.x.x.x"
],
The output should be similar to the previous command. If you’re wanting less output and just want to check for SSH
connectivity, use “-m ping” instead.
Use Cases
For the following use case, let’s use this small shell script as a wrapper.
#!/bin/bash
PLAYBOOK="$1"

if [ -z "$PLAYBOOK" ]; then
    echo "You need to pass a playbook as argument to this script."
    exit 1
fi

export SSL_CERT_FILE=$(pwd)/cacert.pem
export ANSIBLE_HOST_KEY_CHECKING=False

if [ ! -f "$SSL_CERT_FILE" ]; then
    curl -O http://curl.haxx.se/ca/cacert.pem
fi

ansible-playbook -v "$PLAYBOOK"
Create an instance
The GCE module provides the ability to provision instances within Google Compute Engine. The provisioning task is
typically performed from your Ansible control server against Google Cloud’s API.
A playbook would look like this:
- name: Create instance(s)
  hosts: localhost
  gather_facts: no
  connection: local
  vars:
    machine_type: n1-standard-1 # default
    image: debian-7
    service_account_email: [email protected]
    pem_file: /path/to/project.pem
    project_id: project-id
  tasks:
    - name: Launch instances
      gce:
        instance_names: dev
        machine_type: "{{ machine_type }}"
        image: "{{ image }}"
        service_account_email: "{{ service_account_email }}"
        pem_file: "{{ pem_file }}"
        project_id: "{{ project_id }}"
        tags: webserver
      register: gce

    - name: Add the created instances to a temporary group
      add_host: hostname={{ item.public_ip }} groupname=new_instances
      with_items: gce.instance_data
Note that use of the “add_host” module above creates a temporary, in-memory group. This means that a play in
the same playbook can then manage machines in the ‘new_instances’ group, if so desired. Any sort of arbitrary
configuration is possible at this point.
All of the created instances in GCE are grouped by tag. Since this is a cloud, it’s probably best to ignore hostnames
and just focus on group management.
Normally we’d also use roles here, but the following example is a simple one. Here we will also use the “gce_net”
module to open up access to port 80 on these nodes.
The variables in the ‘vars’ section could also be kept in a ‘vars_files’ file or something encrypted with Ansible-vault,
if you so choose. This is just a basic example of what is possible:
- name: Setup web servers
  hosts: tag_webserver
  gather_facts: no
  vars:
    machine_type: n1-standard-1 # default
    image: debian-7
    service_account_email: [email protected]
    pem_file: /path/to/project.pem
    project_id: project-id
  tasks:
    - name: Allow port 80
      local_action:
        module: gce_net
        fwname: "all-http"
        name: "default"
        allowed: "tcp:80"
        state: "present"
        service_account_email: "{{ service_account_email }}"
        pem_file: "{{ pem_file }}"
        project_id: "{{ project_id }}"
By pointing your browser to the IP of the server, you should see a page welcoming you.
Upgrades to this documentation are welcome, hit the github link at the top right of this page if you would like to make
additions!
Introduction
Vagrant is a tool to manage virtual machine environments, and allows you to configure and use reproducible work
environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner
for these virtual machines, and the two tools work together well.
This guide will describe how to use Vagrant and Ansible together.
If you’re not familiar with Vagrant, you should visit the documentation.
This guide assumes that you already have Ansible installed and working. Running from a Git checkout is fine. Follow
the Installation guide for more information.
Vagrant Setup
The first step once you’ve installed Vagrant is to create a Vagrantfile and customize it to suit your needs. This is
covered in detail in the Vagrant documentation, but here is a quick example:
$ mkdir vagrant-test
$ cd vagrant-test
$ vagrant init precise32 http://files.vagrantup.com/precise32.box
This will create a file called Vagrantfile that you can edit to suit your needs. The default Vagrantfile has a lot of
comments. Here is a simplified example that includes a section to use the Ansible provisioner:
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :public_network

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
The Vagrantfile has a lot of options, but these are the most important ones. Notice the config.vm.provision
section that refers to an Ansible playbook called playbook.yml in the same directory as the Vagrantfile. Vagrant
runs the provisioner once the virtual machine has booted and is ready for SSH access.
$ vagrant up
Sometimes you may want to run Ansible manually against the machines. This is pretty easy to do.
Vagrant automatically creates an inventory file for each Vagrant machine in the same directory called
vagrant_ansible_inventory_machinename. It configures the inventory file according to the SSH tun-
nel that Vagrant automatically creates, and executes ansible-playbook with the correct username and SSH key
options to allow access. A typical automatically-created inventory file may look something like this:
# Generated by Vagrant

machine ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
If you want to run Ansible manually, you will want to make sure to pass ansible or ansible-playbook
commands the correct arguments for the username (usually vagrant) and the SSH key (usually
~/.vagrant.d/insecure_private_key), and the autogenerated inventory file.
Here is an example:
$ ansible-playbook -i vagrant_ansible_inventory_machinename --private-key=~/.vagrant.d/insecure_private_key -u vagrant playbook.yml
See also:
Vagrant Home The Vagrant homepage with downloads
Vagrant Documentation Vagrant Documentation
Ansible Provisioner The Vagrant documentation for the Ansible provisioner
Playbooks An introduction to playbooks
Introduction
Continuous Delivery is the concept of frequently delivering updates to your software application.
The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization gets
better at the process of responding to change.
Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis – sometimes
every time there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates
in a zero-downtime way.
This document describes in detail how to achieve this goal, using one of Ansible’s most complete example playbooks
as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates, and group variables, and
it also comes with an orchestration playbook that can do zero-downtime rolling upgrades of the web application stack.
Note: Click here for the latest playbooks for this example.
The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.
We’re not going to cover how to run these playbooks here. Read the included README in the github project along
with the example for that information. Instead, we’re going to take a close look at every part of the playbook and
describe what it does.
Site Deployment
Let’s start with site.yml. This is our site-wide deployment playbook. It can be used to initially deploy the site, as
well as push updates to all of the servers:
---
# This playbook deploys the whole application stack in this site.

# Apply common configuration to all hosts
- hosts: all
  roles:
    - common

# Configure and deploy database servers.
- hosts: dbservers
  roles:
    - db

# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.
- hosts: webservers
  roles:
    - base-apache
    - web

# Configure and deploy the load balancer(s).
- hosts: lbservers
  roles:
    - haproxy

# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring
  roles:
    - base-apache
    - nagios
Note: If you’re not familiar with terms like playbooks and plays, you should review Playbooks.
In this playbook we have 5 plays. The first one targets all hosts and applies the common role to all of the hosts. This
is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply
to all of the servers.
The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for
Nagios monitoring, the database, and the web application, we’ve implemented a base-apache role that installs and
configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts.
By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize
content: tasks, handlers, templates, and files, into reusable components.
This example has six roles: common, base-apache, db, haproxy, nagios, and web. How you organize your
roles is up to you and your application, but most sites will have one or more common roles that are applied to all
systems, and then a series of application-specific roles that install and configure particular parts of the site.
Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior. You can
read more about roles in the Playbook Roles and Include Statements section.
Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks
to customize behavior and to provide easily-changed settings and parameters. They are stored in a directory called
group_vars in the same location as your inventory. Here is lamp_haproxy’s group_vars/all file. As you
might expect, these variables are applied to all of the machines in your inventory:
---
httpd_port: 80
ntpserver: 192.168.1.2
This is a YAML file, and you can create lists and dictionaries for more complex variable structures. In this case, we
are just setting two variables, one for the port for the web server, and one for the NTP server that our machines should
use for time synchronization.
Here’s another group variables file. This is group_vars/dbservers which applies to the hosts in the
dbservers group:
---
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: usersecret
If you look in the example, there are group variables for the webservers group and the lbservers group, simi-
larly.
These variables are used in a variety of places. You can use them in playbooks, like this, in
roles/db/tasks/main.yml:
- name: Create Application Database
  mysql_db: name={{ dbname }} state=present
You can also use these variables in templates, like this, in roles/common/templates/ntp.conf.j2:
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The
syntax inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the data
inside. In templates, you can also use for loops and if statements to handle more complex situations, like this, in
roles/common/templates/iptables.j2:
{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
This is testing to see if the inventory name of the machine we’re currently operating on (inventory_hostname)
exists in the inventory group dbservers. If so, that machine will get an iptables ACCEPT line for port 3306.
Here’s another example, from the same template:
{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}
This loops over all of the hosts in the group called monitoring, and adds an ACCEPT line for each monitoring
host’s default IPv4 address to the current machine’s iptables configuration, so that Nagios can monitor those hosts.
You can learn a lot more about Jinja2 and its capabilities here, and you can read more about Ansible variables in
general in the Variables section.
Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is
where Ansible’s orchestration features come into play. While some applications use the term ‘orchestration’ to mean
basic ordering or command-blasting, Ansible refers to orchestration as ‘conducting machines like an orchestra’, and
has a pretty sophisticated engine for it.
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate
a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook,
called rolling_upgrade.yml.
Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:
- hosts: monitoring
  tasks: []
What’s going on here, and why are there no tasks? You might know that Ansible gathers “facts” from the servers
before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution
versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before
we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this
pattern sometimes, and it’s a useful trick to know.
The next part is the update play. The first part looks like this:
- hosts: webservers
  user: root
  serial: 1
This is just a normal play definition, operating on the webservers group. The serial keyword tells Ansible how
many servers to operate on at once. If it’s not specified, Ansible will parallelize these operations up to the default
“forks” limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate
on that many hosts at once. If you had just a handful of webservers, you may want to set serial to 1, for one host at
a time. If you have 100, maybe you could set serial to 10, for ten at a time.
Here is the next part of the update play:
pre_tasks:
- name: disable nagios alerts for this host webserver service
  nagios: action=disable_alerts host={{ ansible_hostname }} services=webserver
  delegate_to: "{{ item }}"
  with_items: groups.monitoring

- name: disable the server in haproxy
  shell: echo "disable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats
  delegate_to: "{{ item }}"
  with_items: groups.lbservers
The pre_tasks keyword just lets you list tasks to run before the roles are called. This will make more sense in a
minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the
webserver that we are currently updating from the HAProxy load balancing pool.
The delegate_to and with_items arguments, used together, cause Ansible to loop over each monitoring server
and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, “on
behalf” of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list
of monitoring servers.
Note that the HAProxy step looks a little complicated. We’re using HAProxy in this example because it’s freely
available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS
Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also
wish to use other monitoring modules instead of nagios, but this just shows the main goal of the ‘pre tasks’ section –
take the server out of monitoring, and take it out of rotation.
The next step simply re-applies the proper roles to the web servers. This will cause any configuration management
declarations in web and base-apache roles to be applied to the web servers, including an update of the web
application code itself. We don’t have to do it this way–we could instead just purely update the web application, but
this is a good example of how roles can be used to reuse tasks:
roles:
- common
- base-apache
- web
Finally, in the post_tasks section, we reverse the changes to the Nagios configuration and put the web server back
in the load balancing pool:
post_tasks:
- name: Enable the server in haproxy
  shell: echo "enable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats
  delegate_to: "{{ item }}"
  with_items: groups.lbservers
Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate
modules instead.
In this example, we use the simple HAProxy load balancer to front-end the web servers. It’s easy to configure and
easy to manage. As we have mentioned, Ansible has built-in support for a variety of other load balancers like Citrix
NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the About Modules documentation for more
information.
For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an
API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run
them as a local_action if they contact an API. You can read more about local actions in the Delegation, Rolling
Updates, and Local Actions section. Should you develop anything interesting for some hardware where there is not a
core module, it might make for a good module for core inclusion!
Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of
organizations use a continuous integration tool like Jenkins or Atlassian Bamboo to tie the development, test, release,
and deploy steps together. You may also want to use a tool like Gerrit to add a code review step to commits to either
the application code itself, or to your Ansible playbooks, or both.
Depending on your environment, you might be deploying continuously to a test environment, running an integration
test battery against that environment, and then deploying automatically into production. Or you could keep it simple
and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.
For integration with Continuous Integration systems, you can easily trigger playbook runs using the
ansible-playbook command line tool, or, if you’re using Ansible Tower, the tower-cli or the built-in REST
API. (The tower-cli command ‘joblaunch’ will spawn a remote job over the REST API and is pretty slick).
This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations
upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the
rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers,
for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to
easily manage complicated environments and automate common operations.
See also:
lamp_haproxy example The lamp_haproxy example discussed here.
Playbooks An introduction to playbooks
Playbook Roles and Include Statements An introduction to playbook roles
Variables An introduction to Ansible variables
Ansible.com: Continuous Delivery An introduction to Continuous Delivery with Ansible
Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deploy-
ment, and more.
Learn how to build modules of your own in any language, and also how to extend Ansible through several kinds of
plugins. Explore Ansible’s Python API and write Python plugins to integrate with other solutions in your environment.
Topics
• Python API
– Python API
* Detailed API Example
There are several interesting ways to use Ansible from an API perspective. You can use the Ansible python API to
control nodes, you can extend Ansible to respond to various python events, you can write various plugins, and you
can plug in inventory data from external data sources. This document covers the Runner and Playbook API at a basic
level.
If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously,
or have access control and logging demands, take a look at Ansible Tower as it has a very nice REST API that provides
all of these things at a higher level.
Ansible is implemented on top of its own API, so you have a considerable amount of power across the board. This chapter discusses
the Python API.
Python API
The Python API is very powerful, and is how the ansible CLI and ansible-playbook are implemented.
It’s pretty simple:
import ansible.runner

runner = ansible.runner.Runner(
    module_name='ping',
    module_args='',
    pattern='web*',
    forks=10
)
datastructure = runner.run()
The run method returns results per host, grouped by whether they could be contacted or not. Return types are module
specific, as expressed in the About Modules documentation:
{
"dark" : {
"web1.example.com" : "failure message"
},
"contacted" : {
"web2.example.com" : 1
}
}
A module can return any type of JSON data it wants, so Ansible can be used as a framework to rapidly build powerful
applications and scripts.
The following script prints out the uptime information for all hosts:
#!/usr/bin/python

import ansible.runner
import sys

# construct the ansible runner and execute the uptime command on all hosts
results = ansible.runner.Runner(
    pattern='*', forks=10,
    module_name='command', module_args='/usr/bin/uptime',
).run()

if results is None:
    print "No hosts found"
    sys.exit(1)
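From there, a loop over the grouped results prints each host's uptime; a sketch consistent with the contacted/dark return structure shown earlier:
print "UP ***********"
for (hostname, result) in results['contacted'].items():
    if not 'failed' in result:
        print "%s >>> %s" % (hostname, result['stdout'])

print "FAILED *******"
for (hostname, result) in results['contacted'].items():
    if 'failed' in result:
        print "%s >>> %s" % (hostname, result['msg'])

print "DOWN *********"
for hostname in results['dark'].keys():
    print "%s >>> %s" % (hostname, results['dark'][hostname])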
Advanced programmers may also wish to read the source to ansible itself, for it uses the Runner() API (with all
available options) to implement the command line tools ansible and ansible-playbook.
See also:
Developing Dynamic Inventory Sources Developing dynamic inventory integrations
Developing Modules How to develop modules
Developing Plugins How to develop plugins
Development Mailing List Mailing list for development topics
irc.freenode.net #ansible IRC chat channel
Topics
• Script Conventions
• Tuning the External Inventory Script
As described in Dynamic Inventory, ansible can pull inventory information from dynamic sources, including cloud
sources.
How do we write a new one?
Simple! We just create a script or program that can return JSON in the right format when fed the proper arguments.
You can do this in any language.
Script Conventions
When the external node script is called with the single argument --list, the script must return a JSON
hash/dictionary of all the groups to be managed. Each group’s value should be either a hash/dictionary containing
a list of each host/IP, potential child groups, and potential group variables, or simply a list of host/IP addresses, like
so:
{
"databases" : {
"hosts" : [ "host1.example.com", "host2.example.com" ],
"vars" : {
"a" : true
}
},
"webservers" : [ "host2.example.com", "host3.example.com" ],
"atlanta" : {
"hosts" : [ "host1.example.com", "host4.example.com", "host5.example.com" ],
"vars" : {
"b" : false
},
"children": [ "marietta", "5points" ]
},
"marietta" : [ "host6.example.com" ],
"5points" : [ "host7.example.com" ]
}
When called with the single argument --host <hostname> (where <hostname> is a host from the output above), the script must print either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to that host.
Tuning the External Inventory Script
Calling the script separately with --host for every single host can be slow for large inventories. To avoid this, the --list output may also include a top-level element called "_meta" containing a "hostvars" hash with the variables for every host; when "_meta" is present, Ansible skips the per-host --host calls entirely:
{
    "_meta" : {
        "hostvars" : {
            "moocow.example.com" : { "asdf" : 1234 },
            "llama.example.com" : { "asdf" : 5678 }
        }
    }
}
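As an illustration, a minimal dynamic inventory script following these conventions might look like this (the groups, hosts, and variables are made up):
#!/usr/bin/python
# minimal dynamic inventory sketch; all data below is illustrative
import json
import sys

INVENTORY = {
    "webservers" : [ "host2.example.com", "host3.example.com" ],
    "_meta" : {
        "hostvars" : {
            "host2.example.com" : { "a" : True }
        }
    }
}

if len(sys.argv) == 2 and sys.argv[1] == "--list":
    print json.dumps(INVENTORY)
elif len(sys.argv) == 3 and sys.argv[1] == "--host":
    # per-host variables are already supplied via _meta, so return an empty hash
    print json.dumps({})
else:
    print >> sys.stderr, "usage: %s --list | --host <hostname>" % sys.argv[0]
    sys.exit(1)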
See also:
Python API Python API to Playbooks and Ad Hoc Task Execution
Developing Modules How to develop modules
Developing Plugins How to develop plugins
Ansible Tower REST API endpoint and GUI for Ansible, syncs with dynamic inventory
Development Mailing List Mailing list for development topics
irc.freenode.net #ansible IRC chat channel
Topics
• Developing Modules
– Tutorial
– Testing Modules
– Reading Input
– Module Provided ‘Facts’
– Common Module Boilerplate
– Check Mode
– Common Pitfalls
– Conventions/Recommendations
– Shorthand Vs JSON
– Documenting Your Module
* Example
* Building & Testing
– Getting Your Module Into Core
Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook
programs.
See About Modules for a list of various ones developed in core.
Modules can be written in any language and are found in the path specified by ANSIBLE_LIBRARY or the
--module-path command line option.
Should you develop an interesting Ansible module, consider sending a pull request to the github project to see about
getting your module included in the core project.
Tutorial
Let's build a very basic module to get and set the system time. For starters, let's build a module that just outputs the
current time.
We are going to use Python here but any language is possible. Only File I/O and outputting to standard out are required.
So, bash, C++, clojure, Python, Ruby, whatever you want is fine.
Now Python Ansible modules contain some extremely powerful shortcuts (that all the core modules use) but first we
are going to build a module the very hard way. The reason we do this is because modules written in any language
OTHER than Python are going to have to do exactly this. We’ll show the easy way later.
So, here's an example. You would never really need to build a module to set the system time; the 'command' module
could already be used to do this. But we're going to make one anyway.
Reading the modules that come with ansible (linked above) is a great way to learn how to write modules. Keep in
mind, though, that some modules in ansible’s source tree are internalisms, so look at service or yum, and don’t stare
too close into things like async_wrapper or you’ll turn to stone. Nobody ever executes async_wrapper directly.
Ok, let’s get going with an example. We’ll use Python. For starters, save this as a file named time:
#!/usr/bin/python
import datetime
import json
date = str(datetime.datetime.now())
print json.dumps({
"time" : date
})
Testing Modules
There is a useful test script in the ansible source checkout: source hacking/env-setup, then run your module with: hacking/test-module -m ./time. You should see JSON output containing the current time.
If you did not, you might have a typo in your module, so recheck it and try again.
Reading Input
Let’s modify the module to allow setting the current time. We’ll do this by seeing if a key value pair in the form
time=<string> is passed in to the module.
Ansible internally saves arguments to an arguments file. So we must read the file and parse it. The arguments file is
just a string, so any form of arguments is legal. Here we'll do some basic parsing to treat the input as key=value.
The example usage we are trying to achieve to set the time is:
time time="March 14 22:10"
If no time parameter is set, we’ll just leave the time as is and return the current time.
Note: This is obviously an unrealistic idea for a module. You’d most likely just use the shell module. However, it
probably makes a decent tutorial.
Let’s look at the code. Read the comments as we’ll explain as we go. Note that this is highly verbose because it’s
intended as an educational example. You can write modules a lot shorter than this:
#!/usr/bin/python

# import some python modules that we'll use. These are all
# available in Python's core
import datetime
import sys
import json
import os
import shlex

# read the argument string from the arguments file that
# ansible writes out for the module
args_file = sys.argv[1]
args_data = file(args_file).read()

# for this module, we're going to do key=value style arguments.
# this is up to each module to decide, but all core modules
# besides 'command' and 'shell' take key=value, so this is recommended
arguments = shlex.split(args_data)
for arg in arguments:
    # ignore any arguments without an equals in them
    if "=" in arg:
        (key, value) = arg.split("=")
        # if setting the time, the key 'time' will contain the value
        if key == "time":
            # now we'll affect the change. Many modules strive to be
            # 'idempotent', meaning they only make changes when the
            # desired state differs from the current state.
            rc = os.system("date -s \"%s\"" % value)
            # always handle all possible errors: when returning a
            # failure, include 'failed' and a descriptive 'msg'
            if rc != 0:
                print json.dumps({
                    "failed" : True,
                    "msg"    : "failed setting the time"
                })
                sys.exit(1)
            date = str(datetime.datetime.now())
            print json.dumps({
                "time"    : date,
                "changed" : True
            })
            sys.exit(0)

# if no time parameter was sent, the module may or may not choose
# to fail. In this case, we just return the current time.
date = str(datetime.datetime.now())
print json.dumps({
    "time" : date
})
The 'setup' module that ships with Ansible provides many variables about a system that can be used in playbooks and
templates. However, it's possible to also add your own facts without modifying the system module. To do this, just
have the module return an ansible_facts key, like so, along with other return data:
{
"changed" : True,
"rc" : 5,
"ansible_facts" : {
"leptons" : 5000,
"colors" : {
"red" : "FF0000",
"white" : "FFFFFF"
}
}
}
These 'facts' will be available to all statements called after that module (but not before) in the playbook. A good idea
might be to make a module called 'site_facts' and always call it at the top of each playbook, though we're always open
to improving the selection of core facts in Ansible as well.
As mentioned, if you are writing a module in Python, there are some very powerful shortcuts you can use. Modules
are still transferred as one file, but an arguments file is no longer needed, so these are not only shorter in terms of code,
they are actually FASTER in terms of execution time.
Rather than mention these here, the best way to learn is to read some of the source of the modules that come with
Ansible.
The ‘group’ and ‘user’ modules are reasonably non-trivial and showcase what this looks like.
Key parts include always ending the module file with:
from ansible.module_utils.basic import *
main()
And instantiating the module class inside a main() function, with an argument_spec describing the accepted parameters, like:
def main():
    module = AnsibleModule(
        argument_spec = dict(
            name      = dict(required=True),
            enabled   = dict(required=True, choices=BOOLEANS),
            something = dict(aliases=['whatever'])
        )
    )
The AnsibleModule provides lots of common code for handling returns, parses your arguments for you, and allows
you to check inputs.
Successful returns are made like this:
module.exit_json(changed=True, something_else=12345)
And failures are just as simple (where ‘msg’ is a required parameter to explain the error):
module.fail_json(msg="Something fatal happened")
There are also other useful functions in the module class, such as module.md5(path). See
lib/ansible/module_common.py in the source checkout for implementation details.
Again, modules developed this way are best tested with the hacking/test-module script in the git source checkout.
Because of the magic involved, this is really the only way the scripts can function outside of Ansible.
If submitting a module to ansible’s core code, which we encourage, use of the AnsibleModule class is required.
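Putting these pieces together, a minimal sketch of an AnsibleModule-based module might look like the following (a hypothetical 'greet' module; all names are illustrative):
#!/usr/bin/python

def main():
    module = AnsibleModule(
        argument_spec = dict(
            name = dict(required=True),
        )
    )
    name = module.params['name']
    # nothing on the system is changed, so report changed=False
    module.exit_json(changed=False, greeting="Hello %s" % name)

# the module_utils import and the main() call go at the bottom, as noted above
from ansible.module_utils.basic import *
main()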
Check Mode
Modules can optionally support check mode. If the user runs Ansible in check mode, a module should try to predict and report whether changes would occur, without actually making them. To support check mode, pass supports_check_mode=True when instantiating the AnsibleModule object; the module.check_mode attribute will then evaluate to True when Ansible runs with --check:
if module.check_mode:
    # check if any changes would be made, but don't actually make those changes
    module.exit_json(changed=check_if_system_state_would_be_changed())
Remember that, as a module developer, you are responsible for ensuring that no system state is altered when the user
enables check mode.
If your module does not support check mode, when the user runs Ansible in check mode, your module will simply be
skipped.
Common Pitfalls
A module should never print status messages directly to standard output, because the output is supposed to be valid JSON. (Strictly speaking, key=value output is also accepted; we'll get to that later.)
Modules must not output anything on standard error, because the system will merge standard out with standard error
and prevent the JSON from parsing. Capturing standard error and returning it as a variable in the JSON on standard
out is fine, and is, in fact, how the command module is implemented.
If a module returns stderr or otherwise fails to produce valid JSON, the actual output will still be shown in Ansible,
but the command will not succeed.
Always use the hacking/test-module script when developing modules; it will warn you about these kinds of things.
Conventions/Recommendations
As a reminder from the example code above, here are some basic conventions and guidelines:
• If the module is addressing an object, the parameter for that object should be called ‘name’ whenever possible,
or accept ‘name’ as an alias.
• If you have a company module that returns facts specific to your installations, a good name for this module is
site_facts.
• Modules accepting boolean status should generally accept ‘yes’, ‘no’, ‘true’, ‘false’, or anything else a user
may likely throw at them. The AnsibleModule common code supports this with “choices=BOOLEANS” and a
module.boolean(value) casting function.
• Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the
module file, and have the module raise JSON error messages when the import fails.
• Modules must be self-contained in one file to be auto-transferred by ansible.
• If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped
into /usr/share/ansible. This is entirely optional and up to you.
• Modules should return JSON or key=value results all on one line. JSON is best if you can do JSON. All return
types must be hashes (dictionaries) although they can be nested. Lists or simple scalar values are not supported,
though they can be trivially contained inside a dictionary.
• In the event of failure, a key of ‘failed’ should be included, along with a string explanation in ‘msg’. Mod-
ules that raise tracebacks (stacktraces) are generally considered ‘poor’ modules, though Ansible can deal with
these returns and will automatically convert anything unparseable into a failed result. If you are using the An-
sibleModule common Python code, the ‘failed’ element will be included for you automatically when you call
‘fail_json’.
• Return codes from modules are not actually significant, but continue on with 0=success and non-zero=failure
for reasons of future proofing.
• As results from many hosts will be aggregated at once, modules should return only relevant output. Returning
the entire contents of a log file is generally bad form.
Shorthand Vs JSON
To make it easier to write modules in bash and in cases where a JSON module might not be available, it is acceptable
for a module to return key=value output all on one line, like this. The Ansible parser will know what to do:
somekey=1 somevalue=2 rc=3 favcolor=red
If you’re writing a module in Python or Ruby or whatever, though, returning JSON is probably the simplest way to go.
All modules included in the CORE distribution must have a DOCUMENTATION string. This string MUST be a
valid YAML document which conforms to the schema defined below. You may find it easier to start writing your
DOCUMENTATION string in an editor with YAML syntax highlighting before you include it in your Python file.
Example
DOCUMENTATION = '''
---
module: modulename
short_description: This is a sentence describing the module
# ... snip ...
'''
The description and notes fields support formatting with some special macros.
These formatting functions are U(), M(), I(), and C() for URL, module, italic, and constant-width respectively. It
is suggested to use C() for file and option names, and I() when referencing parameters; module names should be
specified as M(module).
Examples (which typically contain colons, quotes, etc.) are difficult to format with YAML, so these must be written
in plain text in an EXAMPLES string within the module like this:
EXAMPLES = '''
- action: modulename opt1=arg1 opt2=arg2
'''
The EXAMPLES section, just like the documentation section, is required in all module pull requests for new modules.
Put your completed module file into the ‘library’ directory and then run the command: make webdocs. The new
‘modules.html’ file will be built and appear in the ‘docsite/’ directory.
Tip: If you’re having a problem with the syntax of your YAML you can validate it on the YAML Lint website.
Tip: You can use ANSIBLE_KEEP_REMOTE_FILES=1 to prevent ansible from deleting the remote files so you
can debug your module.
High-quality modules with minimal dependencies can be included in the core, but core modules (just due to the pro-
gramming preferences of the developers) will need to be implemented in Python and use the AnsibleModule common
code, and should generally use consistent arguments with the rest of the program. Stop by the mailing list to inquire
about requirements if you like, and submit a github pull request to the main project.
See also:
Topics
• Developing Plugins
– Connection Type Plugins
– Lookup Plugins
– Vars Plugins
– Filter Plugins
– Callbacks
* Examples
* Configuring
* Development
– Distributing Plugins
Ansible is pluggable in a lot of other ways separate from inventory scripts and callbacks. Many of these features are
there to cover fringe use cases and are infrequently needed, and others are pluggable simply because pluggability was
the most convenient way to implement core features in ansible.
This section will explore these features, though they are generally not things people would look to extend quite as often.
By default, ansible ships with 'paramiko' SSH, native SSH (just called 'ssh'), and 'local' connection types, plus some
minor players like 'chroot' and 'jail'. All of these can be used in playbooks and with /usr/bin/ansible to de-
cide how you want to talk to remote machines. The basics of these connection types are covered in the Getting Started
section. Should you want to extend Ansible to support other transports (SNMP? Message bus? Carrier Pigeon?) it's as
simple as copying the format of one of the existing modules and dropping it into the connection plugins directory. The
value of 'smart' for a connection allows selection of paramiko or openssh based on system capabilities, and chooses
'ssh' if OpenSSH supports ControlPersist, in Ansible 1.2.1 and later. Previous versions did not support 'smart'.
More documentation on writing connection plugins is pending, though you can jump into
lib/ansible/runner/connection_plugins and figure things out pretty easily.
Lookup Plugins
Language constructs like “with_fileglob” and “with_items” are implemented via lookup plugins. Just like other plugin
types, you can write your own.
More documentation on writing lookup plugins is pending, though you can jump into
lib/ansible/runner/lookup_plugins and figure things out pretty easily.
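As a rough sketch of the shape of a lookup plugin in the 1.x codebase (the constructor and run() signature here follow the plugins in that directory and may vary between versions), a plugin that upper-cases its terms might look like:
# saved as lookup_plugins/upper.py (a hypothetical plugin name)
class LookupModule(object):

    def __init__(self, basedir=None, **kwargs):
        self.basedir = basedir

    def run(self, terms, inject=None, **kwargs):
        # terms is the data given to with_upper / lookup('upper', ...)
        if isinstance(terms, basestring):
            terms = [ terms ]
        return [ str(term).upper() for term in terms ]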
Vars Plugins
Playbook constructs like ‘host_vars’ and ‘group_vars’ work via ‘vars’ plugins. They inject additional variable data
into ansible runs that did not come from an inventory, playbook, or command line. Note that variables can also be
returned from inventory, so in most cases, you won’t need to write or understand vars_plugins.
More documentation on writing vars plugins is pending, though you can jump into
lib/ansible/inventory/vars_plugins and figure things out pretty easily.
If you find yourself wanting to write a vars_plugin, it’s more likely you should write an inventory script instead.
Filter Plugins
If you want more Jinja2 filters available in a Jinja2 template (filters like to_yaml and to_json are provided by default),
you can add them by writing a filter plugin. Most of the time, when someone comes up with an idea for a new filter
they would like to make available in a playbook, we'll just include them in 'core.py' instead.
Jump into lib/ansible/runner/filter_plugins/ for details.
Callbacks
Callbacks are one of the more interesting plugin types. Adding additional callback plugins to Ansible allows for
adding new behaviors when responding to events.
Examples
Configuring
Development
More information will come later, though see the source of any of the existing callbacks and you should be able to get
started quickly. They should be reasonably self-explanatory.
Distributing Plugins
Plugins are loaded from both Python’s site_packages (those that ship with ansible) and a configured plugins directory,
which defaults to /usr/share/ansible/plugins, in a subfolder for each plugin type:
* action_plugins
* lookup_plugins
* callback_plugins
* connection_plugins
* filter_plugins
* vars_plugins
Ansible Tower (formerly 'AWX') is a web-based solution that makes Ansible even easier to use for IT teams of
all kinds. It's designed to be the hub for all of your automation tasks.
Tower allows you to control who can access what, even allowing sharing of SSH credentials without someone
being able to transfer those credentials. Inventory can be graphically managed or synced with a wide variety of cloud
sources. It logs all of your jobs, integrates well with LDAP, and has an amazing browsable REST API. Command
line tools are available for easy integration with Jenkins as well. Provisioning callbacks provide great support for
autoscaling topologies.
Find out more about Tower features and how to download it on the Ansible Tower webpage. Tower is free for usage for
up to 10 nodes, and comes bundled with amazing support from Ansible, Inc. As you would expect, Tower is installed
using Ansible playbooks!
Ansible is an open source project designed to bring together developers and administrators of all kinds to collaborate
on building IT automation solutions that work well for them. Should you wish to get more involved – whether in
terms of just asking a question, helping other users, introducing new people to Ansible, or helping with the software
or documentation, we welcome your contributions to the project.
Ways to interact
Ansible Galaxy is a free site for finding, downloading, rating, and reviewing all kinds of community-developed
Ansible roles and can be a great way to get a jumpstart on your automation projects.
You can sign up with social auth, and the download client ‘ansible-galaxy’ is included in Ansible 1.4.2 and later.
Read the “About” page on the Galaxy site for more information.
Many times, people ask, "how can I best integrate testing with Ansible playbooks?" There are many options. Ansible
is actually designed to be a "fail-fast" and ordered system, which makes it easy to embed testing directly in
Ansible playbooks. In this chapter, we'll go into some patterns for integrating tests of infrastructure and discuss the
right level of testing that may be appropriate.
Note: This is a chapter about testing the application you are deploying, not the chapter on how to test ansible modules
during development. For that content, please hop over to the Development section.
By incorporating a degree of testing into your deployment workflow, there will be fewer surprises when code hits
production, and in many cases, tests can be leveraged in production to prevent failed updates from migrating across
an entire installation. Since Ansible is push-based and also makes it very easy to run steps on the localhost or testing
servers, it lets you insert as many checks and balances into your upgrade workflow as you would like.
Ansible resources are models of desired state. As such, it should not be necessary to test that services are running,
packages are installed, or other such things. Ansible is the system that will ensure these things are declaratively true.
Instead, assert these things in your playbooks:
tasks:
- service: name=foo state=running enabled=yes
If you think the service may not be running, the best thing to do is request it to be running. If the service fails to start,
Ansible will yell appropriately. (This should not be confused with whether the service is doing something functional,
which we’ll show more about how to do later).
In the above setup, --check mode in Ansible can be used as a layer of testing as well. If running a deployment playbook
against an existing system, using the --check flag to the ansible command will report whether Ansible thinks it would
have made any changes to bring the system into the desired state.
This can let you know up front if there is any need to deploy onto the given system. Ordinarily scripts and commands
don’t run in check mode, so if you want certain steps to always execute in check mode, such as calls to the script
module, add the ‘always_run’ flag:
roles:
- webserver
tasks:
- script: verify.sh
always_run: True
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open, using the wait_for module:
tasks:
  - wait_for: host={{ inventory_hostname }} port=22
Here's an example of using the URI module to make sure a web service returns (the URL and content check are illustrative):
tasks:
  - action: uri url=http://www.example.com return_content=yes
    register: webpage
  - fail: msg='service is not happy'
    when: "'AWESOME' not in webpage.content"
It's easy to push an arbitrary script (in any language) to a remote host, and the script will automatically fail the task if it has a
non-zero return code:
tasks:
- script: test_script1
- script: test_script2 --parameter value --parameter2 value
If using roles (you should be, roles are great!), scripts pushed by the script module can live in the ‘files/’ directory of
a role.
And the assert module makes it very easy to validate various kinds of truth:
tasks:
- assert:
that:
- "’not ready’ not in cmd_result.stderr"
- "’gizmo enabled’ in cmd_result.stdout"
Should you feel the need to test for the existence of files that are not declaratively set by your ansible configuration, the
'stat' module is a great choice:
tasks:
- stat: path=/path/to/something
register: p
- assert:
that:
- p.stat.exists and p.stat.isdir
As mentioned above, there’s no need to check things like the return codes of commands. Ansible is checking them
automatically. Rather than checking for a user to exist, consider using the user module to make it exist.
Ansible is a fail-fast system, so when there is an error creating that user, it will stop the playbook run. You do not have
to check up behind it.
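For instance, rather than writing a check that a user exists, simply declare the user (the username is illustrative):
tasks:
  - user: name=deploy state=present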
Testing Lifecycle
If writing some degree of basic validation of your application into your playbooks, they will run every time you deploy.
As such, deploying into a local development VM and a stage environment will both validate that things are according
to plan ahead of your production deploy.
Your workflow may be something like this:
- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a stage environment (with the same playbooks) that simulates production
- Run an integration test battery written by your QA team against stage
- Deploy to production, with the same integrated tests.
An integration test battery should be written by your QA team if you are running a production webservice. This
would include things like Selenium tests or automated API tests and would usually not be something embedded into
your ansible playbooks.
However, it does make sense to include some basic health checks into your playbooks, and in some cases it may be
possible to run a subset of the QA battery against remote nodes. This is what the next section covers.
If you have read into Delegation, Rolling Updates, and Local Actions it may quickly become apparent that the rolling
update pattern can be extended, and you can use the success or failure of the playbook run to decide whether to add a
machine into a load balancer or not.
This is the great culmination of embedded tests:
---
- hosts: webservers
serial: 5
pre_tasks:
roles:
- common
- webserver
- apply_testing_checks
post_tasks:
Of course, in the above, the "take out of the pool" and "add back" steps would be replaced with a call to an Ansible load
balancer module or an appropriate shell command. You might also have steps that use a monitoring module to start and
end an outage window for the machine.
However, what you can see from the above is that tests are used as a gate – if the “apply_testing_checks” step is not
performed, the machine will not go back into the pool.
Read the delegation chapter about “max_fail_percentage” and you can also control how many failing tests will stop a
rolling update from proceeding.
The above approach can also be modified to run a step from a testing machine remotely against a machine:
---
- hosts: webservers
  serial: 5

  pre_tasks:
    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

  roles:
    - common
    - webserver

  tasks:
    - script: /srv/qa_team/app_testing_script.sh --server {{ inventory_hostname }}
      delegate_to: testing_server

  post_tasks:
    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1
In the above example, a script is run from the testing server against a remote node prior to bringing it back into the
pool.
In the event of a problem, fix the few servers that fail using Ansible’s automatically generated retry file to repeat the
deploy on just those servers.
If desired, the above techniques may be extended to enable continuous deployment practices.
The workflow may look like this:
- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a stage environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory
Some Ansible users use the above approach to deploy a half-dozen or dozen times an hour without taking all of their
infrastructure offline. A culture of automated QA is vital if you wish to get to this level.
If you are still doing a large amount of manual QA, you may decide to trigger deploys manually as well, but it can
still help to work in the rolling update patterns of the previous section and incorporate some basic health checks using
modules like 'script', 'stat', 'uri', and 'assert'.
1.12.8 Conclusion
Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. This is
the case because ansible is an order-based system that will fail immediately on unhandled errors for a host, and prevent
further configuration of that host. This forces errors to the top and shows them in a summary at the end of the Ansible
run.
However, as Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into
the end of a playbook run, either using loose tasks or roles. When used with rolling updates, testing steps can decide
whether to put a machine back into a load balanced pool or not.
Finally, because Ansible errors propagate all the way up to the return code of the ansible program itself, and Ansible
by default runs in an easy push-based mode, ansible is a great step to put into a build environment if you wish to use
it to roll out systems as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.
The focus should not be on infrastructure testing, but on application testing, so we strongly encourage getting together
with your QA team and asking what sort of tests would make sense to run every time you deploy development VMs, and
which sort of tests they would like to run against the stage environment on every deploy. Obviously at the development
stage, unit tests are great too. But don't unit test your playbook. Ansible describes states of resources declaratively,
so you don't have to. If there are cases where you want to be sure of something though, that's great, and things like
stat/assert are great go-to modules for that purpose.
In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most
sense for your environment will vary with what you are deploying and who is using it – but everyone benefits from a
more robust and reliable deployment system.
See also:
About Modules All the documentation for Ansible modules
Playbooks An introduction to playbooks
Delegation, Rolling Updates, and Local Actions Delegation, useful for working with load balancers, clouds, and lo-
cally executed steps.
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.13.1 How do I handle different machines needing different user accounts or ports
to log in with?
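The most common approach is to set connection details as variables in inventory. A minimal sketch (the hostnames are illustrative; ansible_ssh_user and ansible_ssh_port are the variable names supported in this release):
[webservers]
asdf.example.com ansible_ssh_port=5000 ansible_ssh_user=alice
jkl.example.com ansible_ssh_port=5001 ansible_ssh_user=bob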
You can also dictate the connection type to be used, if you want:
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com
bar.example.com
You may also wish to keep these in group variables instead, or in a group_vars/<groupname> file. See the
rest of the documentation for more information about how to organize variables.
1.13.2 How do I get ansible to reuse connections, enable Kerberized SSH, or have
Ansible pay attention to my local SSH config file?
Switch your default connection type in the configuration file to ‘ssh’, or use ‘-c ssh’ to use Native OpenSSH for
connections instead of the python paramiko library. In Ansible 1.2.1 and later, ‘ssh’ will be used by default if OpenSSH
is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old,
so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use
paramiko.
We keep paramiko as the default because, if you are first installing Ansible on an EL box, it offers a better experience for
new users.
1.13.3 How do I speed up management inside EC2?
Don't try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first and
run Ansible from there.
1.13.4 How do I handle python pathing not having a Python 2.X in /usr/bin/python
on a remote machine?
While you can write ansible modules in any language, most ansible modules are written in Python, and some of these
are important core ones.
By default Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python,
specifically 2.4 or higher.
Setting of an inventory variable ‘ansible_python_interpreter’ on any host will allow Ansible to auto-replace the in-
terpreter used when executing python modules. Thus, you can point to any python you want on the system if
/usr/bin/python on your system does not point to a Python 2.X interpreter.
Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and
you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python
2. Ansible modules currently need to support older Pythons for users that still have Enterprise Linux 5 deployed, so
they are not yet ported to run under Python 3.0. This is not a problem, though, as you can simply install Python 2 on
a managed host as well.
Python 3.0 support will likely be addressed at a later point in time when usage becomes more mainstream.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
1.13.5 What is the best way to make content reusable/redistributable?
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook
content self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can
be extended.
1.13.6 Where does the configuration file live and what can I configure in it?
See the Configuration file section of the introductory documentation for where ansible.cfg lives and the settings it supports.
1.13.7 How do I disable cowsay?
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that
you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set an environment
variable:
export ANSIBLE_NOCOWS=1
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as
an ad-hoc action:
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host.
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template
configuration file with a list of servers. To do this, you can just access the "groups" dictionary in your template, like
this:
{% for host in groups['db_servers'] %}
   {{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that
the facts have been populated. For example, make sure you have a play that talks to db_servers:
- hosts: db_servers
tasks:
- # doesn’t matter what you do, just that they were talked to previously.
Then you can use the facts inside your template, like this:
{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be
used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like
so:
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it’s a dictionary of the entire namespace of variables.
‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop.
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note
that if we are using dynamic inventory, which host is the ‘first’ may not be consistent, so you wouldn’t want to do this
unless your inventory was static and predictable. (If you are using Ansible Tower, it will use database order, so this
isn’t a problem even if you are using cloud based inventory scripts).
Anyway, here’s the trick:
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a
template, you could use the Jinja2 '{% set %}' directive to simplify this, or in a playbook, you could also use set_fact:
• set_fact: headnode={{ groups['webservers'][0] }}
• debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots – that can be done anywhere.
The “copy” module has a recursive parameter, though if you want to do something more efficient for a large number
of files, take a look at the “synchronize” module instead, which wraps rsync. See the module index for info on both of
these modules.
If you just need to access existing variables, use the 'env' lookup plugin. For example, to access the value of the
HOME environment variable on the management machine:
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
If you need to set environment variables, see the Advanced Playbooks section about environments.
Ansible 1.4 will also make remote environment variables available via facts in the ‘ansible_env’ variable:
{{ ansible_env.SOME_VARIABLE }}
The mkpasswd utility that is available on most Linux systems is a great option:
mkpasswd --method=SHA-512
If this utility is not installed on your system (e.g., you are using OS X), then you can still easily generate these passwords
using Python. First, ensure that the Passlib password hashing library is installed:
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
python -c "from passlib.hash import sha512_crypt; print sha512_crypt.encrypt(’<password>’)"
Yes! See our Guru offering for online support, and support is also included with Ansible Tower. You can also read our
service page and email [email protected] for further details.
Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.
Great question! Documentation for Ansible is kept in the main project git repository, and complete instructions for
contributing can be found in the docs README viewable on GitHub. Thanks!
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control,
see Vault.
Please see the section below for a link to IRC and the Google Group, where you can ask your question.
See also:
Ansible Documentation The documentation index
Playbooks An introduction to playbooks
Best Practices Best practices advice
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.14 Glossary
The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation.
Consult the documentation home page for the full documentation and to see the terms in context, but this should be a
good resource to check your knowledge of Ansible’s components and understand how they fit together. It’s something
you might wish to read for review or when a term comes up on the mailing list.
1.14.1 Action
An action is a part of a task that specifies which of the modules to run and the arguments to pass to that module. Each
task can have only one action, but it may also have other parameters.
1.14.2 Ad Hoc
Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration lan-
guage, which is /usr/bin/ansible-playbook. An example of an ad-hoc command might be rebooting 50 machines in
your infrastructure. Anything you can do ad-hoc can be accomplished by writing a playbook, and playbooks can also
glue lots of other operations together.
1.14.3 Async
Refers to a task that is configured to run in the background rather than waiting for completion. If you have a long
process that would run longer than the SSH timeout, it would make sense to launch that task in async mode. Async
modes can poll for completion every so many seconds, or can be configured to “fire and forget” in which case Ansible
will not even check on the task again, it will just kick it off and proceed to future steps. Async modes work with both
/usr/bin/ansible and /usr/bin/ansible-playbook.
1.14.4 Callback Plugin
Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied
examples in the GitHub project perform custom logging, send email, or even play sound effects.
1.14.5 Check Mode
Refers to running Ansible with the --check option, which does not make any changes on the remote systems,
but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called
“dry run” modes in other systems, though the user should be warned that this does not take into account unexpected
command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what
might happen, but it is not a substitute for a good staging environment.
1.14.6 Connection Type, Connection
By default, Ansible talks to remote machines through pluggable libraries. Ansible supports native OpenSSH ('ssh'),
or a Python implementation called ‘paramiko’. OpenSSH is preferred if you are using a recent version, and also
enables some features like Kerberos and jump hosts. This is covered in the getting started section. There are also other
connection types like ‘accelerate’ mode, which must be bootstrapped over one of the SSH-based connection types but
is very fast, and local mode, which acts on the local system. Users can also write their own connection plugins.
1.14.7 Conditionals
A conditional is an expression that evaluates to true or false that decides whether a given task will be executed on a
given machine or not. Ansible’s conditionals are powered by the ‘when’ statement, and are discussed in the playbook
documentation.
1.14.8 Diff Mode
A --diff flag can be passed to Ansible to show how template files change when they are overwritten, or how they
might change when used with --check mode. These diffs come out in unified diff format.
1.14.9 Facts
Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates
just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered by Ansible
when running plays by executing the internal ‘setup’ module on the remote nodes. You never have to call the setup
module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who
are switching from other configuration management systems, the fact module will also pull in facts from the ‘ohai’
and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
1.14.10 Filter Plugin
A filter plugin is something that most users will never need to understand. These allow for the creation of new Jinja2
filters, which are more or less only of use to people who know what Jinja2 filters are. If you need them, you can learn
how to write them in the API docs section.
1.14.11 Forks
Ansible talks to remote nodes in parallel and the level of parallelism can be set either by passing --forks, or editing
the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can
easily set this to a value like 50 for increased parallelism.
1.14.12 Gather Facts (Boolean)
Facts are mentioned above. Sometimes when running a multi-play playbook, it is desirable to have some plays that
don’t bother with fact computation if they aren’t going to need to utilize any of these values. Setting gather_facts:
False on a playbook allows this implicit fact gathering to be skipped.
1.14.13 Globbing
Globbing is a way to select lots of hosts based on wildcards, rather than the name of the host specifically, or the name
of the group they are in. For instance, it is possible to select “www*” to match all hosts starting with “www”. This
concept is pulled directly from Func, one of Michael’s earlier projects. In addition to basic globbing, various set
operations are also possible, such as ‘hosts in this group and not in another group’, and so on.
1.14.14 Group
A group consists of several hosts assigned to a pool that can be conveniently targeted together, and also given variables
that they share in common.
1.14.15 Group Vars
The "group_vars/" files are files that live in a directory alongside an inventory file, with an optional filename named
after each group. This is a convenient place to put variables that will be provided to a given group, especially complex
data structures, so that these variables do not have to be embedded in the inventory file or playbook.
1.14.16 Handlers
Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a “notify”
directive and also indicates that it changed something. For example, if a config file is changed then the task referencing
the config file templating operation may notify a service restart handler. This means services can be bounced only if
they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most
common usage.
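For example, a sketch of the notify pattern (file, service, and handler names are illustrative):
tasks:
  - template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
    notify: restart apache

handlers:
  - name: restart apache
    service: name=httpd state=restarted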
1.14.17 Host
A host is simply a remote machine that Ansible manages. They can have individual variables assigned to them, and
can also be organized in groups. All hosts have a name they can be reached at (which is either an IP address or a
domain name) and optionally a port number if they are not to be accessed on the default SSH port.
1.14.18 Host Specifier
Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems.
This “hosts:” directive in each play is often called the hosts specifier.
It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly
not in another.
1.14.19 Host Vars
Just like "Group Vars", a directory alongside the inventory file named "host_vars/" can contain a file named after
each hostname in the inventory file, in YAML format. This provides a convenient place to assign variables to the
host without having to embed them in the inventory file. The Host Vars file can also be used to define complex data
structures that can’t be represented in the inventory file.
1.14.20 Lazy Evaluation
In general, Ansible evaluates any variables in playbook content at the last possible second, which means that if you
define a data structure that data structure itself can define variable values within it, and everything “just works” as you
would expect. This also means variable strings can include other variables inside of those strings.
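As a sketch, a variables file can reference one variable inside another, and both resolve when used; the names and
paths are illustrative:
---
app_root: /srv/app
app_config: "{{ app_root }}/config.yml"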
1.14.21 Lookup Plugin
A lookup plugin is a way to get data into Ansible from the outside world. Lookup plugins are how constructs such as
“with_items”, a basic looping plugin, are implemented, but there are also lookup plugins like “with_file” which loads
data from a file, and even ones for querying environment variables, DNS text records, or key/value stores. Lookup
plugins can also be accessed in templates, e.g., {{ lookup('file', '/path/to/file') }}.
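For example, a task could loop over file contents with the with_file lookup; the user name and key path are
illustrative:
- name: add an authorized key read from a local file
  authorized_key: user=deploy key="{{ item }}"
  with_file:
    - /home/deploy/.ssh/id_rsa.pub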
1.14.22 Multi-Tier
The concept that IT systems are not managed one system at a time, but by interactions between multiple systems, and
groups of systems, in well defined orders. For instance, a web server may need to be updated before a database server,
and pieces on the web server may need to be updated after THAT database server, and various load balancers and
monitoring servers may need to be contacted. Ansible models entire IT topologies and workflows rather than looking
at configuration from a “one system at a time” perspective.
1.14.23 Idempotency
The concept that change commands should only be applied when they need to be applied, and that it is better to
describe the desired state of a system than the process of how to get to that state. As an analogy, the path from North
Carolina in the United States to California involves driving a very long way West, but if I were instead in Anchorage,
Alaska, driving a long way west is no longer the right way to get to California. Ansible’s resources let you say “put
me in California” and leave it to Ansible to decide how to get there. If you were already in California, nothing needs
to happen, and Ansible will let you know it didn’t need to change anything.
1.14.24 Includes
The idea that playbook files (which are nothing more than lists of plays) can include other lists of plays, and task lists
can externalize lists of tasks in other files, and similarly with handlers. Includes can be parameterized, which means
the including statement can pass variables into the loaded file. For instance, an included play for setting up a
WordPress blog may take a parameter called “user”, and that play could be included more than once to create a blog
for both “alice” and “bob”.
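A sketch of a parameterized task include using the key=value form supported in this release; the filename and
variable name are illustrative:
tasks:
  - include: wordpress.yml user=alice
  - include: wordpress.yml user=bob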
1.14.25 Inventory
A file (by default, Ansible uses a simple INI format) that describes Hosts and Groups in Ansible. Inventory can also
be provided via an “Inventory Script” (sometimes called an “External Inventory Script”).
1.14.26 Inventory Script
A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable
information from an external resource, whether that be a SQL database, a CMDB solution, or something like LDAP.
This concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works more or less
exactly the same way.
1.14.27 Jinja2
Jinja2 is the preferred templating language of Ansible’s template module. It is a very simple Python template language
that is generally readable and easy to write.
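As an illustrative sketch, a small template file (say, motd.j2) can mix literal text with variables and facts:
Welcome to {{ ansible_hostname }}.
This file is managed by Ansible; local changes will be overwritten.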
1.14.28 JSON
Ansible uses JSON for return data from remote modules. This allows modules to be written in any language, not just
Python.
1.14.29 Library
A collection of modules made available to /usr/bin/ansible or an Ansible playbook.
1.14.30 Limit Groups
By passing --limit somegroup to ansible or ansible-playbook, the commands can be limited to a subset of hosts. For
instance, this can be used to run a playbook that normally targets an entire set of servers against one particular server.
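For example, either of the following limits a run to a group or to a single host; the playbook and names are
illustrative:
ansible-playbook site.yml --limit webservers
ansible-playbook site.yml --limit www1.example.com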
1.14.31 Local Connection
By using “connection: local” in a playbook, or passing “-c local” to /usr/bin/ansible, this indicates that we are
managing the local host and not a remote machine.
1.14.32 Local Action
A local_action directive in a playbook targeting remote machines means that the given step will actually occur on the
local machine, but that variables like ‘{{ ansible_hostname }}’ can still be used to reference the remote hostname
being operated on in that step. This can be used to trigger, for example, an rsync operation.
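A sketch of a local_action step; the script path is hypothetical:
- name: take the host out of the load balancer pool
  local_action: command /usr/local/bin/take_out_of_pool {{ inventory_hostname }}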
1.14.33 Loops
Generally, Ansible is not a programming language. It prefers to be more declarative, though various constructs like
“with_items” allow a particular task to be repeated for multiple items in a list. Certain modules, like yum and apt, are
actually optimized for this, and can install all packages given in those lists within a single transaction, dramatically
speeding up total time to configuration.
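For instance, the following task installs several packages, and the yum module can batch them into a single
transaction; the package names are illustrative:
- name: install web server packages
  yum: name={{ item }} state=present
  with_items:
    - httpd
    - mod_ssl
    - memcached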
1.14.34 Modules
Modules are the units of work that Ansible ships out to remote machines. Modules are kicked off by either
/usr/bin/ansible or /usr/bin/ansible-playbook (where multiple tasks use lots of different modules in conjunction). Mod-
ules can be implemented in any language, including Perl, Bash, or Ruby – but can leverage some useful communal
library code if written in Python. Modules just have to return JSON or simple key=value pairs. Once modules are
executed on remote machines, they are removed, so no long running daemons are used. Ansible refers to the collection
of available modules as a ‘library’.
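For example, a module can be invoked directly in an ad-hoc command; the group and arguments shown are
illustrative:
ansible webservers -m service -a "name=httpd state=restarted"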
1.14.35 Notify
The act of a task registering a change event and informing a handler task that another action needs to be run at the end
of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they
are listed, not in the order that they are notified.
1.14.36 Orchestration
Many software automation systems use this word to mean different things. Ansible uses it in the sense of a conductor
conducting an orchestra. A datacenter or cloud architecture is full of many systems, playing many parts: web servers,
database servers, maybe load balancers, monitoring systems, continuous integration systems, etc. In performing any
process, it is necessary to touch systems in particular orders, often to perform rolling updates or to deploy software
correctly. Some systems may perform some steps, then others, then previously processed systems may need to
perform more steps. Along the way, emails may need to be sent or web services contacted. Ansible orchestration is
all about modeling that kind of process.
1.14.37 paramiko
By default, Ansible manages machines over SSH. The library that Ansible uses by default to do this is a Python-
powered library called paramiko. The paramiko library is generally fast and easy to manage, though users desiring
Kerberos or Jump Host support may wish to switch to a native SSH binary such as OpenSSH by specifying the
connection type in their playbook, or using the “-c ssh” flag.
1.14.38 Playbooks
Playbooks are the language by which Ansible orchestrates, configures, administers, or deploys systems. They are
called playbooks partially because it’s a sports analogy, and it’s supposed to be fun using them. They aren’t workbooks
:)
1.14.39 Plays
A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually
chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that
those systems will perform. There can be one or many plays in a playbook.
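A minimal playbook containing a single play might look like this; the group, task, and module arguments are
illustrative:
---
- hosts: webservers
  tasks:
    - name: make sure apache is running
      service: name=httpd state=started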
1.14.40 Pull Mode
By default, Ansible runs in push mode, which allows it very fine-grained control over when it talks to each system.
Pull mode is provided for when you would rather have nodes check in every N minutes on a particular schedule. It
uses a program called ansible-pull and can also be set up (or reconfigured) using a push-mode playbook. Most Ansible
users use push mode, but pull mode is included for variety and the sake of having choices.
ansible-pull works by checking configuration orders out of git on a crontab and then managing the machine locally,
using the local connection plugin.
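As a sketch, a cron job on each node could invoke something like the following; the repository URL is hypothetical:
ansible-pull -U git://git.example.com/ansible-config.git local.yml
ansible-pull checks the repository out and then runs the named playbook against the local machine.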
1.14.41 Push Mode
Push mode is the default mode of Ansible. In fact, it’s not really a mode at all – it’s just how Ansible works when you
aren’t thinking about it. Push mode allows Ansible to be fine-grained and conduct nodes through complex
orchestration processes without waiting for them to check in.
1.14.42 Register Variable
The result of running any task in Ansible can be stored in a variable for use in a template or a conditional statement.
The keyword used to define the variable is called ‘register’, taking its name from the idea of registers in assembly
programming (though Ansible will never feel like assembly programming). There are an infinite number of variable
names you can use for registration.
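For example (the command and variable name are illustrative):
- name: check the system uptime
  command: /usr/bin/uptime
  register: result
- name: show what was captured
  debug: var=result.stdout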
1.14.43 Resource Model
Ansible modules work in terms of resources. For instance, the file module will select a particular file and ensure
that the attributes of that resource match a particular model. As an example, we might wish to change the owner of
/etc/motd to ‘root’ if it is not already set to ‘root’, or set its mode to ‘0644’ if it is not already set to ‘0644’. The
resource models are ‘idempotent’, meaning change commands are not run unless needed, and Ansible will bring the
system back to a desired state regardless of the actual state, rather than you having to tell it how to get to the state.
1.14.44 Roles
Roles are units of organization in Ansible. Assigning a role to a group of hosts (or a set of groups, or host patterns,
etc.) implies that they should implement a specific behavior. A role may include applying certain variable values,
certain tasks, and certain handlers – or just one or more of these things. Because of the file structure associated with a
role, roles become redistributable units that allow you to share behavior among playbooks – or even with other users.
1.14.45 Rolling Update
The act of addressing a number of nodes in a group N at a time to avoid updating them all at once and bringing the
system offline. For instance, in a web topology of 500 nodes handling very large volumes of traffic, it may be
reasonable to update 10 or 20 machines at a time, moving on to the next 10 or 20 when done. The “serial:” keyword
in an Ansible playbook controls the size of the rolling update pool. The default is to address all of the hosts in a batch
at once, so rolling updates are something you must opt in to. OS configuration (such as making sure config files are
correct) does not typically have to use the rolling update model, but can do so if desired.
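A sketch of a play that updates ten hosts at a time; the group is illustrative and my-app is a hypothetical package
name:
---
- hosts: webservers
  serial: 10
  tasks:
    - name: update the application package
      yum: name=my-app state=latest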
1.14.46 Runner
A core software component of Ansible that is the power behind /usr/bin/ansible directly – and corresponds to the
invocation of each task in a playbook. The Runner is something Ansible developers may talk about, but it’s not really
user land vocabulary.
1.14.47 Serial
See Rolling Update.
1.14.48 Sudo
Ansible does not require root logins, and since it’s daemonless, definitely does not require root level daemons (which
can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a
sudo command, and can work with both password-less and password-based sudo. Some operations that don’t normally
work with sudo (like scp file transfer) can be achieved with Ansible’s copy, template, and fetch modules while running
in sudo mode.
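For instance, a play can log in as an unprivileged user and escalate with sudo; the user name and package are
illustrative:
---
- hosts: webservers
  remote_user: bob
  sudo: yes
  tasks:
    - name: install a package as root
      yum: name=httpd state=present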
1.14.49 SSH (Native)
Native OpenSSH as an Ansible transport is specified with “-c ssh” (or a config file, or a directive in the playbook) and
can be useful when you want to log in via Kerberized SSH or use SSH jump hosts, etc. In 1.2.1, ‘ssh’ will be used by
default if the OpenSSH binary on the control machine is sufficiently new. Previously, Ansible selected ‘paramiko’ as
the default. Using a client that supports ControlMaster and ControlPersist is recommended for maximum
performance; if you don’t have that and don’t need Kerberos, jump hosts, or other features, paramiko is a good choice.
Ansible will warn you if it doesn’t detect ControlMaster/ControlPersist capability.
1.14.50 Tags
Ansible allows tagging resources in a playbook with arbitrary keywords, and then running only the parts of the play-
book that correspond to those keywords. For instance, it is possible to have an entire OS configuration, and have
certain steps labeled “ntp”, and then run just the “ntp” steps to reconfigure the time server information on a remote
host.
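As a sketch, a task can be tagged and then selected at run time; the file names are illustrative:
- name: configure the time server
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  tags:
    - ntp
Then only the tagged steps are run with:
ansible-playbook site.yml --tags ntp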
1.14.51 Tasks
Playbooks exist to run tasks. Tasks combine an action (a module and its arguments) with a name and optionally some
other keywords (like looping directives). Handlers are also tasks, but they are a special kind of task that do not run
unless they are notified by name when a task reports an underlying change on a remote system.
1.14.52 Templates
Ansible can easily transfer files to remote systems, but often it is desirable to substitute variables in other files. Vari-
ables may come from the inventory file, Host Vars, Group Vars, or Facts. Templates use the Jinja2 template engine
and can also include logical constructs like loops and if statements.
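For example, a task using the template module; the paths are illustrative:
- name: render the configuration from a template
  template: src=myapp.conf.j2 dest=/etc/myapp.conf
Variables referenced inside myapp.conf.j2 are substituted when the file is rendered.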
1.14.53 Transport
Ansible uses “Connection Plugins” to define types of available transports. These are simply how Ansible will reach
out to managed systems. Transports included are paramiko, SSH (using OpenSSH), and local.
1.14.54 When
An optional conditional statement attached to a task that is used to determine if the task should run or not. If the
expression following the “when:” keyword evaluates to false, the task will be ignored.
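For instance (the fact and command are illustrative):
- name: shut down Debian-family systems
  command: /sbin/shutdown -t now
  when: ansible_os_family == "Debian"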
1.14.55 Van Halen
For no particular reason, other than the fact that Michael really likes them, all Ansible releases are codenamed after
Van Halen songs. There is no preference given to David Lee Roth vs. Sammy Hagar-era songs, and instrumentals are
also allowed. It is unlikely that there will ever be a Jump release, but a Van Halen III codename release is possible.
You never know.
1.14.56 Vars (Variables)
As opposed to Facts, variables are names associated with values. The values can be simple scalars (integers,
booleans, strings) or complex structures (dictionaries/hashes, lists), and can be used in templates and playbooks. They
are declared things, not things that are inferred from the remote system’s current state or nature (which is what Facts
are).
1.14.57 YAML
Ansible does not want to force people to write programming language code to automate infrastructure, so Ansible
uses YAML to define its playbook language and also its variable files. YAML is nice because it has a minimum of
syntax and is very clean and easy for people to skim. It is a good data format for configuration files and for humans,
but is also machine readable. Ansible’s usage of YAML stemmed from Michael’s first use of it inside of Cobbler
around 2006. YAML is fairly popular in the dynamic language community and the format has libraries available for
serialization in many languages (Python, Perl, Ruby, etc.).
See also:
Frequently Asked Questions Frequently asked questions
Playbooks An introduction to playbooks
Best Practices Best practices advice
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.15 YAML Syntax
This page provides a basic overview of correct YAML syntax, which is how Ansible playbooks (our configuration
management language) are expressed.
We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON.
Further, there are libraries available in most programming languages for working with YAML.
You may also wish to read Playbooks at the same time to see how this is used in practice.
1.15.1 YAML Basics
For Ansible, nearly every YAML file starts with a list. Each item in the list is a dictionary of key/value pairs,
commonly called a “hash”. So, we need to know how to write lists and dictionaries in YAML.
There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) should
begin with ---. This is part of the YAML format and indicates the start of a document.
All members of a list are lines beginning at the same indentation level starting with a - (dash) character:
---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango
A dictionary is represented in a simple key: value form (the colon must be followed by a space):
---
# An employee record
name: Example Developer
job: Developer
skill: Elite
Dictionaries can also be represented in an abbreviated form if you really want to:
---
# An employee record
{name: Example Developer, job: Developer, skill: Elite}
Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:
---
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
Let’s combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but
will give you a feel for the format:
---
# An employee record
name: Example Developer
job: Developer
skill: Elite
employed: True
foods:
    - Apple
    - Orange
    - Strawberry
    - Mango
languages:
    ruby: Elite
    python: Elite
    dotnet: Lame
That’s all you really need to know about YAML to start writing Ansible playbooks.
1.15.2 Gotchas
While YAML is generally friendly, the following is going to result in a YAML syntax error:
foo: somebody said I should put a colon here: so I did
You will want to quote any hash values that contain colons, like so:
foo: "somebody said I should put a colon here: so I did"
And then the colon will be preserved.
Further, Ansible uses “{{ var }}” for variables. If a value after a colon starts with a “{”, YAML will think it is a
dictionary, so you must quote it, like so:
foo: "{{ variable }}"
See also:
Playbooks Learn what playbooks can do and how to write/run them.
YAMLLint YAML Lint (online) helps you debug YAML syntax if you are having problems
Github examples directory Complete playbook files from the github project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a
bit more.
Ansible Guru is an offering from Ansible, Inc. that helps users who would like more dedicated help with Ansible,
including building playbooks, best practices, architecture suggestions, and more, all from our awesome support and
services team. It also includes some useful discounts and some free T-shirts, though you shouldn’t get it just for the
free shirts! It’s a great way to train up to becoming an Ansible expert.
For those interested, click through the link above. You can sign up in minutes!
For users looking for more hands-on help, we also have some more information on our Services page, and support is
also included with Ansible Tower.