Automation Controller User Guide
CONTENTS

1 Overview
1.1 Real-time Playbook Output and Exploration
1.2 “Push Button” Automation
1.3 Enhanced and Simplified Role-Based Access Control and Auditing
1.4 Cloud & Autoscaling Flexibility
1.5 The Ideal RESTful API
1.6 Backup and Restore
1.7 Ansible Galaxy Integration
1.8 Inventory Support for OpenStack
1.9 Remote Command Execution
1.10 System Tracking
1.11 Integrated Notifications
1.12 Satellite Integration
1.13 Run-time Job Customization
1.14 Red Hat Insights Integration
1.15 Enhanced User Interface
1.16 Custom Virtual Environments
1.17 Authentication Enhancements
1.18 Cluster Management
1.19 Container Platform Support
1.20 Workflow Enhancements
1.21 Job Distribution
1.22 Support for deployment in a FIPS-enabled environment
1.23 Limit the number of hosts per organization
1.24 Inventory Plugins
1.25 Secret Management System
1.26 Automation Hub Integration
2 Red Hat Ansible Automation Platform Controller Licensing, Updates, and Support
2.1 Support
2.2 Trial / Evaluation
2.3 Subscription Types
2.4 Node Counting in Licenses
2.5 Attaching Subscriptions
2.6 Ansible Automation Platform Component Licenses
3 Logging In
4 Import a Subscription
4.1 Obtaining a subscriptions manifest
4.2 Adding a subscription manually
5 The User Interface
6 Search
6.1 Searching Tips
6.2 Sort
7 Organizations
7.1 Creating a New Organization
7.2 Work with Access
7.3 Work with Notifications
8 Users
8.1 Create a User
8.2 Delete a User
8.3 Users - Organizations
8.4 Users - Teams
8.5 Users - Permissions
8.6 Users - Tokens
9 Teams
9.1 Create a Team
10 Credentials
10.1 Understanding How Credentials Work
10.2 Getting Started with Credentials
10.3 Add a New Credential
10.4 Credential Types
13 Applications
13.1 Getting Started with Applications
13.2 Create a new application
15.3 Collection-level metadata
16 Projects
16.1 Add a new project
16.2 Updating projects from source control
16.3 Work with Permissions
16.4 Work with Notifications
16.5 Work with Job Templates
16.6 Work with Schedules
16.7 Ansible Galaxy Support
16.8 Collections Support
17 Inventories
17.1 Smart Inventories
17.2 Inventory Plugins
17.3 Add a new inventory
17.4 Running Ad Hoc Commands
21 Workflows
21.1 Workflow scenarios and considerations
21.2 Extra Variables
21.3 Workflow States
21.4 Role-Based Access Controls
22.1 Create a Workflow Template
22.2 Work with Permissions
22.3 Work with Notifications
22.4 View Completed Jobs
22.5 Work with Schedules
22.6 Surveys
22.7 Workflow Visualizer
22.8 Launch a Workflow Template
22.9 Copy a Workflow Template
22.10 Extra Variables
24 Jobs
24.1 Inventory Sync Jobs
24.2 SCM Inventory Jobs
24.3 Playbook Run Jobs
24.4 Automation controller Capacity Determination and Job Impact
24.5 Job branch overriding
26 Notifications
26.1 Notification Hierarchy
26.2 Workflow
26.3 Create a Notification Template
26.4 Notification Types
26.5 Create custom notifications
26.6 Enable and Disable Notifications
26.7 Configure the host hostname for notifications
26.8 Notifications API
28 Schedules
28.1 Add a new schedule
30.8 LDAP authentication performance tips
31 Security
31.1 Playbook Access and Information Sharing
31.2 Role-Based Access Controls
31.3 Function of roles: editing and creating
32 Glossary
33 Index
Thank you for your interest in Red Hat Ansible Automation Platform controller. Automation controller helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

The Automation Controller User Guide discusses all of the functionality available in automation controller and assumes moderate familiarity with Ansible, including concepts such as Playbooks, Variables, and Tags. For more information on these and other Ansible concepts, please see the Ansible documentation at http://docs.ansible.com/.
This document has been updated to include information for the latest release of Automation Controller v4.2.1.
We Need Feedback!
If you spot a typo in this documentation, or if you have thought of a way to make this manual better, we would love to
hear from you! Please send an email to: [email protected]
If you have a suggestion, try to be as specific as possible when describing it. If you have found an error, please include
the manual’s title, chapter number/section number, and some of the surrounding text so we can find it easily. We may
not be able to respond to every message sent to us, but you can be sure that we will be reading them all!
Automation Controller Version 4.2.1; September 12, 2022; https://access.redhat.com/
CHAPTER ONE

OVERVIEW
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform makes it possible
for users across an organization to share, vet, and manage automation content by means of a simple, powerful, and
agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual
teams. Meanwhile, automation developers retain the freedom to write tasks that use existing knowledge, without the
operational overhead of conforming to complex tools and frameworks. It is a more secure and stable foundation for
deploying end-to-end automation solutions, from hybrid cloud to the edge.
Ansible Automation Platform includes automation controller, which allows users to define, operate, scale, and delegate
automation across their enterprise.
1.1 Real-time Playbook Output and Exploration

Watch playbooks run in real time, seeing each host as it checks in. Easily go back and explore the results for specific tasks and hosts in great detail. Search for specific plays or hosts and see just those results, or quickly zero in on errors that need to be corrected.
1.2 “Push Button” Automation

Access your favorite projects and re-trigger execution from the web interface with a minimum of clicking. Automation controller will ask for input variables, prompt for your credentials, kick off and monitor the job, and display results and host history over time.
1.3 Enhanced and Simplified Role-Based Access Control and Auditing

Automation controller allows for the granting of permissions to perform a specific task (such as to view, create, or modify a file) to different teams or explicit users through role-based access control (RBAC).

Keep some projects private, while allowing some users to edit inventory and others to run playbooks against only certain systems, either in check (dry run) or live mode. You can also allow certain users to use credentials without exposing the credentials to them. Regardless of what you do, automation controller records the history of operations and who made them, including objects edited and jobs launched.

Based on user feedback, automation controller both expands and simplifies its role-based access control. Job template visibility is no longer configured via a combination of permissions on inventory, projects, and credentials. If you want to give any user or team permissions to use a job template, just assign permissions directly on the job template.
Similarly, credentials are now full objects in automation controller’s RBAC system, and can be assigned to multiple users and/or teams for use.

Automation controller includes an ‘Auditor’ type, who can see all aspects of the system’s automation but has no permission to run or change automation, for those that need a system-level auditor. (This may also be useful for a service account that scrapes automation information from the REST API.) Refer to Role-Based Access Controls for more information.

Subsequent releases of automation controller provide more granular permissions, making it easier to delegate inside your organizations and remove automation bottlenecks.
1.4 Cloud & Autoscaling Flexibility

Automation controller features a powerful provisioning callback feature that allows nodes to request configuration on demand. While optional, this is an ideal solution for a cloud auto-scaling scenario, integrating with provisioning servers like Cobbler, or when dealing with managed systems with unpredictable uptimes. Requiring no management software to be installed on remote nodes, the callback solution can be triggered via a simple call to ‘curl’ or ‘wget’, and is easily embeddable in init scripts, kickstarts, or preseeds. Access is controlled such that only machines in inventory can request configuration.
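For illustration only, here is a minimal sketch of a provisioning callback request expressed as an Ansible task; the controller hostname, job template ID (42), and host config key below are placeholder assumptions, not values from this guide:

---
# Sketch: a newly provisioned node asks the controller to configure it.
# All names, IDs, and keys below are placeholders.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Request configuration via the provisioning callback
      ansible.builtin.uri:
        url: https://controller.example.com/api/v2/job_templates/42/callback/
        method: POST
        body_format: json
        body:
          host_config_key: EXAMPLE_CONFIG_KEY
        status_code: 201   # the callback typically returns 201 when the job is queued

The same request can be made with curl or wget from an init script, which is how auto-scaling nodes usually trigger it.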
1.5 The Ideal RESTful API

The automation controller REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser allows API exploration from the API root at http://<server name>/api/, showing off every resource and relation. Everything that can be done in the user interface can be done in the API - and more.
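As a sketch of what API exploration looks like outside the browser (hostname and credentials are placeholders), the API root can be queried from an Ansible task:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Discover the resources available at the API root
      ansible.builtin.uri:
        url: https://controller.example.com/api/v2/
        user: admin
        password: "{{ controller_password }}"
        force_basic_auth: true
        return_content: true
      register: api_root

    - name: Show the discovered resource endpoints
      ansible.builtin.debug:
        var: api_root.json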
1.6 Backup and Restore

The ability to back up and restore your system(s) has been integrated into the Ansible Automation Platform setup playbook, making it easy for you to back up and replicate your instance as needed.
1.7 Ansible Galaxy Integration

When it comes to describing your automation, everyone repeats the DRY mantra: “Don’t Repeat Yourself.” Using centralized copies of Ansible roles, such as in Ansible Galaxy, allows you to bring that philosophy to your playbooks. By including an Ansible Galaxy requirements.yml file in your project directory, automation controller automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. Refer to Ansible Galaxy Support for more information.
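A minimal roles/requirements.yml sketch; the role name and repository URL are placeholders for illustration:

---
# roles/requirements.yml - roles fetched automatically before the playbook runs
- src: geerlingguy.apache            # a role published on Ansible Galaxy
- name: example-role                 # a role pulled from your own source control
  src: https://github.com/exampleorg/example-role.git
  scm: git
  version: main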
1.8 Inventory Support for OpenStack

Ansible is committed to making OpenStack simple for everyone to use. As part of that, dynamic inventory support has been added for OpenStack. This allows you to easily target any of the virtual machines or images that you’re running in your OpenStack cloud.
1.9 Remote Command Execution

Often, you just need to do a simple task on a few hosts, whether it’s adding a single user, updating a single security vulnerability, or restarting a misbehaving service. Automation controller includes remote command execution: any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory, allowing you to start managing your systems quickly and easily. Plus, it is all backed by an RBAC engine and detailed audit logging, removing any questions regarding who has done what to what machines.
1.10 System Tracking

You can collect facts by using the fact caching feature. Refer to Fact Caching for more detail.
1.11 Integrated Notifications

Automation controller allows you to easily keep track of the status of your automation. You can configure stackable notifications for job templates, projects, or entire organizations, and configure different notifications for job start, job success, job failure, and job approval (for workflow nodes). The following notification sources are supported:
• Email
• Grafana
• IRC
• Mattermost
• PagerDuty
• Rocket.Chat
• Slack
• Twilio
• Webhook (post to an arbitrary webhook, for integration into other tools)
Additionally, you can customize notification messages for each of the above notification types.
1.13 Run-time Job Customization

Bringing the flexibility of the Ansible command line, you can now prompt for any of the following (a configuration sketch follows the list):
• inventory
• credential
• job tags
• limits
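These prompts correspond to the ask_*_on_launch flags on a job template. A minimal sketch using the ansible.controller.job_template module, assuming placeholder resource names and controller connection details supplied via CONTROLLER_* environment variables:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Job template that prompts for inventory, credential, tags, and limit
      ansible.controller.job_template:
        name: Deploy application          # placeholder
        job_type: run
        project: Ops playbooks            # placeholder
        playbook: deploy.yml              # placeholder
        ask_inventory_on_launch: true
        ask_credential_on_launch: true
        ask_tags_on_launch: true
        ask_limit_on_launch: true
        state: present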
1.14 Red Hat Insights Integration

Automation controller supports integration with Red Hat Insights, which allows Insights playbooks to be used as an Ansible Automation Platform Project.
1.15 Enhanced User Interface

The layout of the user interface is organized with intuitive navigational elements. With information displayed at-a-glance, it is intuitive to find and use the automation you need. Compact and expanded viewing modes show and hide information as needed, and various built-in attributes make it easy to sort.
1.16 Custom Virtual Environments

Custom Ansible environment support allows you to have different Ansible environments and specify custom paths for different teams and jobs.
1.17 Authentication Enhancements

Automation controller supports LDAP, SAML, and token-based authentication. Enhanced LDAP and SAML support allows you to integrate your enterprise account information in a more flexible manner. Token-based authentication allows for easy authentication of third-party tools and services with automation controller via integrated OAuth 2 token support.
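Once a token has been issued (see Users - Tokens), a third-party tool authenticates by presenting it as a bearer token. A sketch with placeholder host and token values:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Call the controller API with an OAuth 2 bearer token
      ansible.builtin.uri:
        url: https://controller.example.com/api/v2/me/
        headers:
          Authorization: "Bearer {{ my_oauth2_token }}"
        return_content: true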
1.19 Container Platform Support

Ansible Automation Platform is available as a containerized pod service for Red Hat OpenShift Container Platform that can be scaled up and down easily as needed.
1.20 Workflow Enhancements

In order to better model your complex provisioning, deployment, and orchestration workflows, automation controller expanded workflows in a number of ways:

• Inventory overrides for Workflows. You can now override an inventory across a workflow at workflow definition time, or even at launch time. Define your application deployment workflow once, and then easily re-use it in multiple environments.

• Convergence nodes for Workflows. When modeling complex processes, you sometimes need to wait for multiple steps to finish before proceeding. Automation controller workflows can easily replicate this; workflow steps can now wait for any number of prior workflow steps to complete properly before proceeding.

• Workflow Nesting. Re-use individual workflows as components of a larger workflow. Examples include combining provisioning and application deployment workflows into a single master workflow.

• Workflow Pause and Approval. You can build workflows containing approval nodes that require user intervention. This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow.
1.21 Job Distribution

As automation moves enterprise-wide, the need to automate at scale grows. Automation controller offers the ability to take a fact-gathering or configuration job running across thousands of machines and slice it into individual job slices that can be distributed across your automation controller cluster for increased reliability, faster job completion, and better cluster utilization. If you need to change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate, you can now do so easily.
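Slicing is configured per job template through its job slice count. A minimal sketch using the ansible.controller.job_template module; the names and slice count are placeholders, and controller connection details are assumed to come from CONTROLLER_* environment variables:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Fact-gathering template split into ten parallel job slices
      ansible.controller.job_template:
        name: Gather facts estate-wide    # placeholder
        job_type: run
        inventory: RHEL estate            # placeholder
        project: Ops playbooks            # placeholder
        playbook: gather_facts.yml        # placeholder
        job_slice_count: 10
        state: present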
1.22 Support for deployment in a FIPS-enabled environment

If you require running your environment in restricted modes such as FIPS, automation controller deploys and runs in such environments.
1.23 Limit the number of hosts per organization

Many large enterprises have instances shared among several organizations, and they do not want any one organization to be able to use all of the licensed hosts. This feature allows superusers to set a specified upper limit on how many licensed hosts may be allocated to each organization. The automation controller algorithm factors changes in the limit for an organization and the number of total hosts across all organizations. Any inventory updates will fail if an inventory sync brings an organization out of compliance with the policy. Additionally, superusers are able to ‘over-allocate’ their licenses, with a warning.
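The per-organization limit corresponds to the organization's max_hosts setting. A minimal sketch, assuming the ansible.controller.organization module and placeholder values:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Cap an organization at 100 licensed hosts
      ansible.controller.organization:
        name: Engineering                 # placeholder
        max_hosts: 100
        state: present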
1.24 Inventory Plugins

Automation controller has been updated to use the following inventory plugins from upstream collections if inventory updates are run with Ansible 2.9 (a sample plugin configuration follows the list):
• amazon.aws.aws_ec2
• community.vmware.vmware_vm_inventory
• azure.azcollection.azure_rm
• google.cloud.gcp_compute
• theforeman.foreman.foreman
• openstack.cloud.openstack
• ovirt.ovirt.ovirt
• awx.awx.tower
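As a sketch, a minimal inventory source file for the amazon.aws.aws_ec2 plugin; the region and tag key are placeholders:

# aws_ec2.yml - minimal inventory plugin configuration
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.environment    # group hosts by the value of their "environment" tag
    prefix: env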
1.25 Secret Management System

With a secret management system, external credentials are stored and supplied for use in automation controller so you don’t have to provide them directly.
1.26 Automation Hub Integration

Automation Hub acts as a content provider for automation controller, which requires both an automation controller deployment and an Automation Hub deployment running alongside each other.
CHAPTER TWO

RED HAT ANSIBLE AUTOMATION PLATFORM CONTROLLER LICENSING, UPDATES, AND SUPPORT
Red Hat Ansible Automation Platform controller (“Automation Controller”) is a software product provided as part
of an annual Red Hat Ansible Automation Platform subscription entered into between you and Red Hat, Inc. (“Red
Hat”).
Ansible is an open source software project and is licensed under the GNU General Public License version 3, as detailed
in the Ansible source code: https://github.com/ansible/ansible/blob/devel/COPYING
You must have valid subscriptions attached before installing the Ansible Automation Platform. See Attaching Sub-
scriptions for detail.
2.1 Support
Red Hat offers support to paid Red Hat Ansible Automation Platform customers.

If you or your company has purchased a subscription for Ansible Automation Platform, you can contact the support team at https://access.redhat.com. To better understand the levels of support which match your Ansible Automation Platform subscription, refer to Subscription Types. For details of what is covered under an Ansible Automation Platform subscription, please see the Scopes of Support at https://access.redhat.com/support/policy/updates/ansible-tower#scope-of-coverage-4 and https://access.redhat.com/support/policy/updates/ansible-engine.
2.2 Trial / Evaluation

While a license is required for automation controller to run, there is no fee for a trial license.
• Trial licenses for Red Hat Ansible Automation are available at: http://ansible.com/license
• Support is not included in a trial license or during an evaluation of the automation controller software.
2.3 Subscription Types

Red Hat Ansible Automation Platform is provided at various levels of support and numbers of machines as an annual subscription.
• Standard
– Manage any size environment
– Enterprise 8x5 support and SLA
2.4 Node Counting in Licenses

The Red Hat Ansible Automation Platform controller license defines the number of Managed Nodes that can be managed as part of a Red Hat Ansible Automation Platform subscription. A typical license will say ‘License Count: 500’, which sets the maximum number of Managed Nodes at 500.

For more information on managed node requirements for licensing, please see https://access.redhat.com/articles/3331481.
2.5 Attaching Subscriptions

You must have valid subscriptions attached before installing the Ansible Automation Platform. Attaching an Ansible Automation Platform subscription enables Automation Hub repositories. A valid subscription needs to be attached to the Automation Hub node only. Other nodes do not need to have a valid subscription/pool attached, even if the [automationhub] group is blank, given that this is done at the repos_el role level and that this role is run on both [default] and [automationhub] hosts.
Note: Attaching subscriptions is unnecessary if your Red Hat account has Simple Content Access Mode enabled. However, you still need to register with RHSM or Satellite before installing the Ansible Automation Platform.
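Registration and attachment can also be automated. A minimal sketch using the community.general.redhat_subscription module, with placeholder credentials and pool ID:

- hosts: all
  become: true
  tasks:
    - name: Register the node and attach an Ansible Automation Platform pool
      community.general.redhat_subscription:
        state: present
        username: "{{ rhsm_username }}"
        password: "{{ rhsm_password }}"
        pool_ids:
          - 0123456789abcdef0123456789abcdef   # placeholder pool ID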
An Ansible Automation Platform subscription pool entry looks similar to the following:

Subscription Name: Red Hat Ansible Automation Platform, Premium (5000 Managed Nodes)
Provides: Red Hat Ansible Engine
Red Hat Single Sign-On
Red Hat Ansible Automation Platform
SKU: MCT3695
Contract: ********
Pool ID: ********************
Provides Management: No
Available: 4999
Suggested: 1
If this is done properly, and all nodes have Red Hat Ansible Automation Platform attached, then the installer will find the Automation Hub repositories correctly.

To check whether the subscription was successfully attached:
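For example, you can list the subscriptions consumed on each node:

subscription-manager list --consumed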
2.6 Ansible Automation Platform Component Licenses

To view the license information for the components included within automation controller, refer to /usr/share/doc/automation-controller-<version>/README, where <version> refers to the version of automation controller you have installed.

To view a specific license, refer to /usr/share/doc/automation-controller-<version>/*.txt, where * is replaced by the license file name to which you are referring.
CHAPTER THREE

LOGGING IN
CHAPTER FOUR

IMPORT A SUBSCRIPTION
Available subscriptions or a subscription manifest authorize the use of the automation controller. To obtain your
automation controller subscription, you can either:
1. Provide your Red Hat or Satellite username and password on the license page.
2. Obtain a subscriptions manifest from your Subscription Allocations page on the customer portal. See Obtaining
a subscriptions manifest in the Automation Controller User Guide for more detail.
If you have a Red Hat Ansible Automation Platform subscription, use your Red Hat customer credentials when you
launch the controller to access your subscription information (see instructions below).
If you do not have a Red Hat Ansible Automation Platform subscription, you can click Request Subscription and follow the instructions to request a trial subscription.
Disconnected environments with Satellite will be able to use the login flow on VM-based installations if they have configured subscription manager on the controller instance to connect to their Satellite instance. Recommended workarounds for disconnected environments without Satellite include [1] downloading a manifest from access.redhat.com in a connected environment, then uploading it to the disconnected controller instance, or [2] connecting to the Internet through a proxy server.
Note: In order to use a disconnected environment, it is necessary to have a valid automation controller entitlement attached to your Satellite organization’s manifest. This can be confirmed by using hammer subscription list --organization <org_name>.
If you have issues with the subscription you have received, please contact your Sales Account Manager or Red Hat
Customer Service at https://access.redhat.com/support/contact/customerService/.
When the controller launches for the first time, the Subscription screen automatically displays.
1. By default, the option to retrieve and import your subscription is to upload a subscription manifest you generate from https://access.redhat.com/management/subscription_allocations. See Obtaining a subscriptions manifest for more detail. Once you have a subscription manifest, you can upload it by browsing to the location where the file is saved (the subscription manifest is the complete .zip file, not its component parts).
Note: If the Browse button in the subscription manifest option is grayed-out, clear the username and password fields
to enable the Browse button.
Alternatively, you can choose the option to enter your Red Hat customer credentials using your username and password. Use your Satellite username/password if your controller cluster nodes are registered to Satellite via Subscription Manager. Once you have entered your credentials, click Get Subscriptions.
2. The subscription metadata is then retrieved from the RHSM/Satellite API, or from the manifest provided.
• If it is a subscription manifest, and multiple subscription counts were applied in a single installation, the controller will combine the counts but use the earliest expiration date as the expiry (at which point you will need to refresh your subscription).

• If you entered your credential information (username/password), the controller retrieves your configured subscription service. Then it prompts you to choose the subscription you want to run and entitles the controller with that metadata. You can log in over time and retrieve new subscriptions if you have renewed.
Note: When your subscription expires (you can check this in the Subscription details of the Subscription settings
window), you will need to renew it in the controller by one of these two methods.
If you encounter an error message when applying a subscription, the Satellite user that the controller admin uses to apply a subscription needs to be granted the proper permissions.

The Satellite username/password is used to query the Satellite API for existing subscriptions. From the Satellite API, the automation controller gets back some metadata about those subscriptions, then filters through to find valid subscriptions that you could apply, which are then displayed as valid subscription options in the UI.
The following Satellite roles grant proper access:
• Custom with view_subscriptions and view_organizations filter
• Viewer
• Administrator
• Organization Admin
• Manager
As the Custom role is the most restrictive of these, it is the recommended role to use for your controller integration. Refer to the Satellite documentation on managing users and roles for more detail.
Note: The System Administrator role is not equivalent to the Administrator user checkbox, and will not provide
sufficient permissions to access the subscriptions API page.
3. Click Next to proceed to Tracking and Insights. Tracking and insights collect data to help Red Hat improve
the product by delivering you a much better user experience. For more information about data collection, refer
to Usability Analytics and Data Collection. This option is checked by default, but you may opt out of any of the
following:
• User analytics collects data from the controller User Interface.
• Insights Analytics provides a high level analysis of your automation with automation controller, which is used
to help you identify trends and anomalous use of the controller. For opt-in of Automation Analytics to have any
effect, your instance of automation controller must be running on Red Hat Enterprise Linux. See instructions
described in the Automation Analytics section. If you select to opt-in for this option, the screen expands and
prompts for a username and password to enable Insights, if applicable.
Note: You may change your analytics data collection preferences at any time, as described in the Usability Analytics
and Data Collection section.
4. After you have specified your tracking and insights preferences, click Next to proceed to the End User Agreement.
5. Review and check the I agree to the End User License Agreement checkbox and click Submit.
Once your subscription has been accepted, the controller briefly displays the subscription details and navigates you
to the Dashboard of the automation controller interface. For later reference, you can return to this screen by clicking
Settings from the left navigation bar and select Subscription settings from the Subscription option.
4.1 Obtaining a subscriptions manifest

1. Go to the Subscription Allocations page of the Red Hat Customer Portal at https://access.redhat.com/management/subscription_allocations.

2. Click the Create New subscription allocation button to create a new subscription allocation.

Note: If this button is not present or disabled, you do not have the proper permissions to create subscription allocations. To create a subscription allocation, you must either be an Administrator on the Customer Portal, or have the Manage Your Subscriptions role. Contact an access.redhat.com administrator or organization administrator who will be able to grant you permission to manage subscriptions.
3. Enter a name for your subscription and select Satellite 6.8 from the Type drop-down menu.
4. Click Create.
5. Once your subscriptions manifest is successfully created, it displays various information including subscription
information at the bottom of the Details tab. The number indicated next to Entitlements indicates the number of
entitlements associated with your subscription.
In order to obtain a subscriptions manifest, you must add an entitlement to your subscriptions through the Subscriptions tab.
6. Click the Subscriptions tab.
7. In the Subscriptions tab, there are initially no subscriptions to display; click the Add Subscriptions button.
The next screen allows you to select and add entitlements to put in the manifest file. You may select multiple Ansible Automation Platform subscriptions (with the same SKU) in your subscription allocation. Valid Ansible Automation Platform subscriptions commonly go by the name “Red Hat Ansible Automation…”.
8. Specify the number of entitlements/managed nodes to put in the manifest file. This allows you to split up a
subscription (for example: 400 nodes on a development cluster and 600 nodes for the production cluster, out of
a 1000 node subscription).
Note: You can apply multiple subscriptions to a single installation by adding multiple subscriptions of the same type
to a manifest file and uploading them. Similarly, a subset of a subscription can be applied by only allocating a portion
of the subscription when creating the manifest.
9. Click Submit.
The allocations you specified, once successfully added, are displayed in the Subscriptions tab.
10. Click the Details tab to access the subscription manifest file.
11. At the bottom of the details window under Entitlements, click the Export Manifest button to export the manifest
file for this subscription.
A file with manifest_ prepended to its name is downloaded to your local drive. Multiple subscriptions with the same SKU will be aggregated.
12. Now that you have a subscription manifest, proceed to the Subscription screen. Upload the entire manifest file
(.zip) by clicking Browse and navigate to the location where the file is saved. Do not open it or upload individual
parts of it.
4.2 Adding a subscription manually

If you are unable to apply or update the subscription info using the user interface, you can upload the subscriptions manifest manually in an Ansible playbook using the license module in the ansible.controller collection:
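A minimal sketch of such a playbook; the manifest path, hostname, and credentials are placeholders:

---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Upload the subscription manifest to the controller
      ansible.controller.license:
        manifest: /tmp/manifest_controller.zip       # placeholder path
        controller_host: https://controller.example.com
        controller_username: admin
        controller_password: "{{ admin_password }}"
        validate_certs: false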
CHAPTER FIVE

THE USER INTERFACE
The User Interface offers a friendly graphical framework for your IT orchestration needs. The left navigation bar
provides quick access to resources, such as Projects, Inventories, Job Templates, and Jobs.
Across the top-right side of the interface, you can access your user profile, the About page, view related documentation,
and log out. Right below these options, you can view the activity stream for that user by clicking on the Activity Stream
button.
5.1 Activity Streams

Most screens have an Activity Stream button. Clicking this brings up the Activity Stream for this object.
An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action. The information displayed varies depending on the type of event. Clicking on the Examine button shows the event log for the change.
The Activity Stream can be filtered by the initiating user (or the system, if it was system initiated), and by any related
object, such as a particular credential, job template, or schedule.
The Activity Stream on the main Dashboard shows the Activity Stream for the entire instance. Most pages allow
viewing an activity stream filtered for that specific object.
5.2 Views
• Dashboard view
• Jobs view
• Schedules view
The Dashboard view begins with a summary of your hosts, inventories, and projects. Each of these is linked to the
corresponding objects for easy access.
On the main Dashboard screen, a summary appears listing your current Job Status. The Job Status graph displays
the number of successful and failed jobs over a specified time period. You can choose to limit the job types that are
viewed, and to change the time horizon of the graph.
Also available for view are summaries of Recent Jobs and Recent Templates on their respective tabs.
The Recent Jobs section displays which jobs were most recently run, their status, and the time at which they were run.
The Recent Templates section of this display shows a summary of the most recently used templates. You can also
access this summary by clicking Templates from the left navigation bar.
Note: Clicking on Dashboard from the left navigation bar or the Ansible Automation Platform logo at any time
returns you to the Dashboard.
Access the Jobs view by clicking Jobs from the left navigation bar. This view shows all the jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.
Access the Schedules view by clicking Schedules from the left navigation bar. This view shows all the scheduled jobs
that are configured.
The Resources and Access menus provide you access to the various components of automation controller and allow you to configure who has permissions for which of those resources.

From here, you can create, view, and edit custom credential types, notifications, management jobs, tokens and applications, and configure Execution Environments.
Configuring global and system-level settings is accomplished through the Settings menu, which is described in further detail in the following section. The Settings menu offers access to administrative configuration options.
To enter the Settings window for automation controller, click Settings from the left navigation bar. This page allows
you to modify your controller’s configuration, such as settings associated with authentication, jobs, system, user
interface, and view or retrieve your subscription information.
For more information on configuring these settings, refer to Controller Configuration section of the Automation Con-
troller Administration Guide.
CHAPTER SIX

SEARCH
The automation controller has a powerful search tool that provides both search and filter capabilities that span across multiple functions. Acceptable search criteria are provided in an expandable “cheat sheet” accessible from the Advanced option in the Name drop-down menu in the search field. From there, use the combination of Set Type, Key, and Lookup type to filter.
6.1 Searching Tips

These searching tips assume that you are not searching hosts. Most of this section still applies to hosts, but with some subtle differences. The typical syntax of a search consists of a field (left-hand side) and a value (right-hand side). A colon is used to separate the field that you want to search from the value. If a search doesn’t have a colon (see example 3), it is treated as a simple string search where ?search=foobar is sent. Here are examples of the syntax used for searching:
1. name:localhost In this example, the string before the colon represents the field that you want to search on.
If that string does not match something from Fields or Related Fields then it’s treated the same way Example 3
is (string search). The string after the colon is the string that you want to search for within the name attribute.
2. organization.name:Default This example shows a Related Field Search. The period in the left-hand
portion separates the model from the field in this case. Depending on how deep/complex the search is, you could
have multiple periods in that left-hand portion.
3. foobar Simple string (key term) search that will find all instances of that term using an icontains search
against the name and description fields. If a space is used between terms (e.g. foo bar), then any results that
contain both terms will be returned. If the terms are wrapped in quotes (e.g. “foo bar”), the controller will search
for the entire string with the terms appearing together. Specific name searches will search against the API name.
For example, Management job in the user interface is system_job in the API.
4. organization:Default This example shows a Related Field search but without specifying a field to go
along with the organization. This is supported by the API and is analogous to a simple string search but done
against the organization (will do an icontains search against both the name and description).
To find values for certain fields, refer to the API endpoint for extensive options and their valid values. For example, if you want to search against the type field of /api/v2/jobs, you can find the values by performing an OPTIONS request to /api/v2/jobs and looking for entries in the API for "type". Additionally, you can view the related searches by scrolling to the bottom of each screen. In the example for /api/v2/jobs, the related search shows:
"related_search_fields": [
"modified_by__search",
"project__search",
"project_update__search",
"credentials__search",
"unified_job_template__search",
"created_by__search",
"inventory__search",
"labels__search",
"schedule__search",
"webhook_credential__search",
"job_template__search",
"job_events__search",
"dependent_jobs__search",
"launch_config__search",
"unifiedjob_ptr__search",
"notifications__search",
"unified_job_node__search",
"instance_group__search",
"hosts__search",
"job_host_summaries__search"
The values for Fields come from the keys in a GET request. url, related, and summary_fields are not used.
The values for Related Fields also come from the OPTIONS response, but from a different attribute. Related Fields
is populated by taking all the values from related_search_fields and stripping off the __search from the
end.
Any search that does not start with a value from Fields or a value from the Related Fields will be treated as a generic string search. Searching for something like localhost will result in the UI sending ?search=localhost as a query parameter to the API endpoint. This is a shortcut for an icontains search on the name and description fields.
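As a sketch, the same shortcut can be exercised directly against the API; the hostname and credentials are placeholders:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Generic string search against the jobs endpoint
      ansible.builtin.uri:
        url: "https://controller.example.com/api/v2/jobs/?search=localhost"
        user: admin
        password: "{{ controller_password }}"
        force_basic_auth: true
      register: search_results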
Searching a Related Field requires you to start the search string with the Related Field. This example describes how
to search using values from the Related Field, organization.
The left-hand side of the search string must start with organization (ex: organization:Default). Depending
on the related field, you might want to provide more specific direction for the search by providing secondary/tertiary
fields. An example of this would be to specify that you want to search for all job templates that use a project matching
a certain name. The syntax on this would look like: job_template.project.name:"A Project".
Note: This query would execute against the unified_job_templates endpoint, which is why it starts with job_template. If we were searching against the job_templates endpoint, then you wouldn’t need the job_template portion of the query.
• There’s currently no supported syntax for OR queries. All search terms get AND’d in the query parameters.
• The left-hand portion of a search parameter can be wrapped in quotes to support searching for strings with
spaces.
• Currently, the values in the Fields are direct attributes expected to be returned in a GET request. Whenever you search against one of the values, the controller essentially does an __icontains search. So, for example, name:localhost would send back ?name__icontains=localhost. The controller currently performs this search for every Field value, even id, which is not ideal.
6.2 Sort
Where applicable, use the arrows in each column to sort by ascending or descending order. The direction of the arrow indicates the sort order of the column.
CHAPTER SEVEN

ORGANIZATIONS
An Organization is a logical collection of Users, Teams, Projects, and Inventories, and is the highest level in the
automation controller object hierarchy.
Access the Organizations page by clicking Organizations from the left navigation bar. The Organizations page dis-
plays all of the existing organizations for your installation. Organizations can be searched by Name or Description.
Modify and remove organizations using the Edit and Delete buttons.
From this list view, you can edit the details of an organization from the Actions menu. From the Details tab, you can edit or delete the organization.

If an organization exceeds its host limit, the inventory sync output view also shows the host limit error; click Details for additional detail about the error.
Note: If deleting items that are used by other work items, a message opens listing the items that are affected by the deletion and prompts you to confirm the deletion. Some screens will contain items that are invalid or were previously deleted, and these will fail to run.
7.2 Work with Access

Clicking on Access (beside Details when viewing your organization) displays all the Users associated with this Organization and their roles.
While you can manage user membership for this Organization here, you can also manage it on a per-user basis from the Users page by clicking Users from the left navigation bar. Organizations have a unique set of roles not described here. You can assign specific users certain levels of permissions within your organization, or allow them to act as an admin for a particular resource. Refer to Role-Based Access Controls for more information.
Clicking on a user brings up that user’s details, allowing you to review, grant, edit, and remove associated permissions
for that user. For more information, refer to Users.
In order to add a user or team to an organization, the user or team must already be created. See Create a User and
Create a Team for additional detail. To add existing users or team to the Organization:
5. Click the Save button to apply the roles to the selected user(s) or team(s) and to add them as members.
The Add Users/Teams window closes to display the updated roles assigned for each user and team.
To remove roles for a particular user, click the disassociate (x) button next to its resource.
Note: A user or team with roles associated will retain them even after they have been reassigned to another organiza-
tion.
7.3 Work with Notifications

Clicking the Notifications tab allows you to review any notification integrations you have set up.
Use the toggles to enable or disable the notifications to use with your particular organization. For more detail, see
Enable and Disable Notifications.
If no notifications have been set up, you must create them from the Notifications option on the left navigation bar.
Refer to Notification Types for additional details on configuring various notification types.
CHAPTER EIGHT

USERS
A User is someone who has access to automation controller with associated permissions and credentials. Access the Users page by clicking Users from the left navigation bar. The User list may be sorted and searched by Username, First Name, or Last Name; click the headers to toggle your sorting preference.
You can easily view permissions and user type information by looking beside their user name in the User overview
screen.
Note: When modifying your own password, log out and log back in again in order for it to take effect.
• System Administrator: A System Administrator (also known as Superuser) has full system administration
privileges – with full read and write privileges over the entire installation. A System Administrator is typically
responsible for managing all aspects of automation controller and delegating responsibilities for day-to-day
work to various Users. Assign with caution!
Note: The initial user (usually “admin”) created by the installation process is a Superuser. One Superuser must
always exist. To delete the “admin” user account, you must first create another Superuser account.
You may delete the user from its Details screen by clicking Delete, or once you exit the details screen, you can delete
users from a list of current users. See Delete a User for more detail.
The count for the number of users has also been updated, and a new entry for the new user is added to the list of users below the edit form. The same window opens whether you click on the user’s name or the Edit button beside the user. Here, the User’s Organizations, Teams, and Permissions, as well as other user membership details, may be reviewed and modified.
Note: If the user is not a newly-created user, the user’s details screen displays the last login activity of that user.
When you log in as yourself, and view the details of your own user profile, you can manage tokens from your user
profile. See Users - Tokens for more detail.
8.2 Delete a User

Before you can delete a user, you must have user permissions. When you delete a user account, the name and email of the user are permanently removed from automation controller.
1. Expand the Access menu from the left navigation bar, and click Users to display a list of the current users.
2. Select the check box(es) for the user(s) that you want to remove and click Delete.
3. Click Delete in the confirmation warning message to permanently delete the user.
8.3 Users - Organizations

This displays the list of organizations of which that user is a member. This list may be searched by Organization Name or Description. Organization membership cannot be modified from this display panel.
8.4 Users - Teams

This displays the list of teams of which that user is a member. This list may be searched by Team Name or Description. Team membership cannot be modified from this display panel. For more information, refer to Teams.
Until a Team has been created and the user has been assigned to that team, the assigned Teams Details for the User
appears blank.
8.5 Users - Permissions

The set of Permissions assigned to this user (role-based access controls) that provide the ability to read, modify, and administer projects, inventories, job templates, and other automation controller elements are Privileges.
Note: It is important to note that the job template administrator may not have access to any inventory, project, or
credentials associated with the template. Without access to these, certain fields in the job template aren’t editable.
This screen displays a list of the roles that are currently assigned to the selected User and can be sorted and searched
by Name, Type, or Role.
2. Click to select the object for which the user will have access and click Next.
3. Click to select the resource to assign user roles and click Next.
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have
different options available.
5. Click Save when done, and the Add Permissions Wizard closes to display the updated profile for the user with
the roles assigned for each selected resource.
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a
confirmation dialog, asking you to confirm the disassociation.
Note: You can also add teams, individual, or multiple users and assign them permissions at the object level (projects, inventories, job templates, and workflow templates) as well. This feature reduces the time for an organization to onboard many users at one time.
8.6 Users - Tokens

The Tokens tab will only be present for your user (yourself). Before you add a token for your user, you may want to create an application if you want to associate your token with it. You may also create a personal access token (PAT) without associating it with any application. To create a token for your user:
1. If not already selected, click on your user from the Users list view to configure your OAuth 2 tokens.
2. Click the Tokens tab from your user’s profile.
When no tokens are present, the Tokens screen prompts you to add them:
3. Click the Add button, which opens the Create Token window.
4. Enter the following details in Create Token window:
• Application: enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking the search button. This opens a separate window that allows you to choose from the available options. Use the Search bar to filter by name if the list is extensive. Leave this field blank if you want to create a Personal Access Token (PAT) that is not linked to any application.
• Description: optionally provide a short description for your token.
• Scope (required): specify the level of access you want this token to have.
5. When done, click Save or Cancel to abandon your changes.
After the token is saved, the newly created token for the user displays with the token information and when it expires.
Note: This is the only time the token value and associated refresh token value will ever be shown.
In the user’s profile, the application for which it is assigned to and its expiration displays in the token list view.
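Tokens can also be created programmatically. A minimal sketch using the ansible.controller.token module; the description is a placeholder, and the token is created for the authenticating user:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a personal access token for the current user
      ansible.controller.token:
        description: CI pipeline token    # placeholder
        scope: write
        state: present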
CHAPTER NINE

TEAMS
A Team is a subdivision of an organization with associated users, projects, credentials, and permissions. Teams
provide a means to implement role-based access control schemes and delegate responsibilities across organizations.
For instance, permissions may be granted to a whole Team rather than each user on the Team.
You can create as many Teams of users as make sense for your Organization. Each Team can be assigned permissions, just as with Users. Teams can also scalably assign ownership for Credentials, preventing multiple interface click-throughs to assign the same Credentials to the same user.
Access the Teams page by clicking Teams from the left navigation bar. The team list may be sorted and searched by
Name or Organization.
Clicking the Edit button next to the list of Teams allows you to edit details about the team. You can also review Users and Permissions associated with this Team.
This tab displays the list of Users that are members of this Team. This list may be searched by Username, First
Name, or Last Name. For more information, refer to Users.
Add a User
In order to add a user to a team, the user must already be created. Refer to Create a User to create a user. Adding a
user to a team adds them as a member only; specifying a role for the user on different resources can be done in the
Access tab. To add existing users to the Team:
1. In the Access tab, click the Add button.
2. Follow the prompts to add user(s) and assign them to roles.
3. Click Save when done.
To remove roles for a particular user, click the disassociate (x) button next to its resource.
Selecting the Roles view displays a list of the permissions that are currently available for this Team. The permissions
list may be sorted and searched by Resource Name, Type, or Role.
Permissions are the set of privileges assigned to Teams that provide the ability to read, modify, and administer projects,
inventories, and other automation controller elements. By default, the Team is given the “read” permission (also
called a role).
Permissions must be set explicitly via an Inventory, Project, Job Template, or within the Organization view.
1. Click the Add button to open the Add Permissions wizard.
2. Click to select the object for which the team will have access and click Next.
3. Click to select the resource to assign team roles and click Next.
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have
different options available.
5. Click Save when done, and the Add Permissions Wizard closes to display the updated profile for the team with
the roles assigned for each selected resource.
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a
confirmation dialog, asking you to confirm the disassociation.
Note: You can also add teams, individual users, or multiple users and assign them permissions at the object level (projects,
inventories, job templates, and workflow templates) as well. This feature reduces the time for an organization to onboard many users at one time.
TEN
CREDENTIALS
Credentials are utilized for authentication when launching Jobs against machines, synchronizing with inventory
sources, and importing project content from a version control system.
You can grant users and teams the ability to use these credentials, without actually exposing the credential to the user.
If you have a user move to a different team or leave the organization, you don’t have to re-key all of your systems just
because that credential was available in the automation controller.
Note: The automation controller encrypts passwords and key information in the database and never makes secret
information visible via the API. See Automation Controller Administration Guide for details.
The automation controller uses SSH to connect to remote hosts (or the Windows equivalent). In order to pass the
key from the automation controller to SSH, the key must be decrypted before it can be written to a named pipe. The
automation controller then uses that pipe to send the key to SSH (so that it is never written to disk).
If passwords are used, the automation controller handles those by responding directly to the password prompt and
decrypting the password before writing it to the prompt.
Click Credentials from the left navigation bar to access the Credentials page. The Credentials page displays a
searchable list of all available Credentials and can be sorted by Name.
Credentials added to a Team are made available to all members of the Team, whereas credentials added to a User are
only available to that specific User by default.
Note: If deleting items that are used by other work items, a message opens listing the items that are affected by the
deletion and prompts you to confirm the deletion. Some screens will contain items that are invalid or previously
deleted, and these will fail to run.
To help you get started, a Demo Credential has been created for your use.
Clicking on the link for the Demo Credential takes you to the Details view of this Credential.
Clicking the Access tab shows you users and teams associated with this Credential and their granted roles (owner,
admin, auditor, etc.)
Note: A credential with roles associated will retain them even after the credential has been reassigned to another
organization.
You can click the Add button to assign this Demo Credential to additional users. If no users exist, add them from the
Users menu and refer to the Users section for further detail.
To create a new credential:
1. Click the Add button from the Credentials screen.
2. Enter the name for your new credential in the Name field.
3. Optionally enter a description and enter or select the name of the organization with which the credential is
associated.
Note: A credential with a set of permissions associated with one organization will remain even after the credential is
reassigned to another organization.
4. Enter or select the credential type you want to create.
5. Enter the appropriate details depending on the type of credential selected, as described in the next section,
Credential Types.
6. Click Save when done.
The following credential types are supported with the automation controller:
• Amazon Web Services
• Ansible Galaxy/Automation Hub API Token
• Centrify Vault Credential Provider Lookup
• Container Registry
• CyberArk AIM Credential Provider Lookup
• CyberArk Conjur Secret Lookup
• GitHub Personal Access Token
• GitLab Personal Access Token
• Google Compute Engine
• HashiCorp Vault Secret Lookup
• HashiCorp Vault Signed SSH
• Insights
• Machine
• Microsoft Azure Key Vault
• Microsoft Azure Resource Manager
• Network
• OpenShift or Kubernetes API Bearer Token
• OpenStack
• Red Hat Ansible Automation Platform
• Red Hat Satellite 6
• Red Hat Virtualization
• Source Control
• Thycotic DevOps Secrets Vault
• Thycotic Secret Server
• Vault
• VMware vCenter
The credential types associated with Centrify, CyberArk, HashiCorp Vault, Microsoft Azure Key Management System
(KMS), and Thycotic are part of the credential plugins capability that allows an external system to lookup your secrets
information. See the Secret Management System section for further detail.
10.4.1 Amazon Web Services
Selecting this credential type enables synchronization of cloud inventory with Amazon Web Services.
The automation controller uses the following environment variables for AWS credentials, which correspond to fields
prompted in the user interface:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SECURITY_TOKEN
Traditional Amazon Web Services credentials consist of the AWS Access Key and Secret Key.
The automation controller provides support for EC2 STS tokens (sometimes referred to as IAM STS credentials).
Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials
for AWS Identity and Access Management (IAM) users. To learn more about the IAM/EC2 STS Token, refer to:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Note: If the value of your tags in EC2 contains booleans (yes/no/true/false), you must remember to quote them.
Warning: To use implicit IAM role credentials, do not attach AWS cloud credentials in the automation controller
when relying on IAM roles to access the AWS API. While it may seem to make sense to attach your AWS cloud
credential to your job template, doing so will force the use of your AWS credentials and will not “fall through” to
use your IAM role credentials (this is due to the use of the boto library.)
10.4.2 Ansible Galaxy/Automation Hub API Token
Selecting this credential allows the automation controller to access Galaxy or use a collection published on a local
Automation Hub. See Using Collections via Hub for detail. Entering the Galaxy server URL is the only required value
on this screen.
10.4.3 Centrify Vault Credential Provider Lookup
This is considered part of the secret management capability. See Centrify Vault Credential Provider Lookup for more
detail.
10.4.4 Container Registry
Selecting this credential allows the automation controller to access a collection of container images. See What is a
container registry? for more information.
Aside from specifying a name, the Authentication URL is the only required field on this screen, and it is already pre-
populated with a default value. You may change this default by specifying the authentication endpoint for a different
container registry.
10.4.5 CyberArk AIM Credential Provider Lookup
This is considered part of the secret management capability. See CyberArk AIM Credential Provider Lookup for more
detail.
10.4.6 CyberArk Conjur Secret Lookup
This is considered part of the secret management capability. See CyberArk Conjur Secret Lookup for more detail.
10.4.7 GitHub Personal Access Token
Selecting this credential allows you to access GitHub using a Personal Access Token (PAT), which is obtained through
GitHub. See Working with Webhooks for detail. Entering the provided token is the only required value in this screen.
GitHub PAT credentials require a value in the Token field, which is provided in your GitHub profile settings.
This credential can be used for establishing an API connection to GitHub for use in webhook listener jobs, to post
status updates.
10.4.8 GitLab Personal Access Token
Selecting this credential allows you to access GitLab using a Personal Access Token (PAT), which is obtained through
GitLab. See Working with Webhooks for detail. Entering the provided token is the only required value in this screen.
GitLab PAT credentials require a value in the Token field, which is provided in your GitLab profile settings.
This credential can be used for establishing an API connection to GitLab for use in webhook listener jobs, to post
status updates.
10.4.9 Google Compute Engine
Selecting this credential type enables synchronization of cloud inventory with Google Compute Engine (GCE).
The automation controller uses the following environment variables for GCE credentials, which correspond to fields
prompted in the user interface:
GCE_EMAIL
GCE_PROJECT
GCE_CREDENTIALS_FILE_PATH
• Service Account JSON File: Optionally upload a GCE service account file. Use the folder icon to
browse for the file that contains the special account information that can be used by services and applications
running on your GCE instance to interact with other Google Cloud Platform APIs. This grants permissions to
the service account and virtual machine instances.
• RSA Private Key: The PEM file associated with the service account email.
10.4.10 HashiCorp Vault Secret Lookup
This is considered part of the secret management capability. See HashiCorp Vault Secret Lookup for more detail.
10.4.11 HashiCorp Vault Signed SSH
This is considered part of the secret management capability. See HashiCorp Vault Signed SSH for more detail.
10.4.12 Insights
Selecting this credential type enables synchronization of cloud inventory with Red Hat Insights.
Insights credentials consist of the Insights Username and Password, which is the user’s Red Hat Customer Portal
Account username and password.
10.4.13 Machine
Machine credentials enable the automation controller to invoke Ansible on hosts under your management. Just like
using Ansible on the command line, you can specify the SSH username, optionally provide a password, an SSH key,
a key password, or even have the automation controller prompt the user for their password at deployment time. They
define ssh and user-level privilege escalation access for playbooks, and are used when submitting jobs to run playbooks
on a remote host. Network connections (httpapi, netconf, and network_cli) use Machine for the credential
type.
Machine/SSH credentials do not use environment variables. Instead, they pass the username via the ansible -u
flag, and interactively write the SSH password when the underlying SSH client prompts for it.
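For illustration only, this behavior is roughly equivalent to invoking Ansible manually with an explicit username and an interactive SSH password prompt; the inventory group and username below are hypothetical placeholders:
$ ansible webservers -i inventory -u deploy --ask-pass -m ping   # 'webservers' and 'deploy' are placeholder values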
The Privilege Escalation Method field specifies the type of escalation privilege to assign to specific users (this is equivalent to Ansible’s --become-method option); the options are:
• empty selection: If a task/play has become set to yes and is used with an empty selection, then it will default
to sudo
• sudo: Performs single commands with super user (root user) privileges
• su: Switches to the super user (root user) account (or to other user accounts)
• pbrun: Requests that an application or command be run in a controlled account and provides for advanced root
privilege delegation and keylogging
• pfexec: Executes commands with predefined process attributes, such as specific user or group IDs
• dzdo: An enhanced version of sudo that uses RBAC information in Centrify’s Active Directory service (see
Centrify’s site on DZDO)
• pmrun: Requests that an application is run in a controlled account (refer to Privilege Manager for Unix 6.0)
• runas: Allows you to run as the current user
• enable: Switches to elevated permissions on a network device
• doas: Allows your remote/login user to execute commands as another user via the doas (“Do as user”) utility
• ksu: Allows your remote/login user to execute commands as another user via Kerberos access
• machinectl: Allows you to manage containers via the systemd machine manager
• sesu: Allows your remote/login user to execute commands as another user via the CA Privileged Access Man-
ager
Note: Custom become plugins are available only starting with Ansible 2.8. For more detail on this concept, refer
to Understanding Privilege Escalation https://docs.ansible.com/ansible/latest/user_guide/become.html and the list of
become plugins https://docs.ansible.com/ansible/latest/plugins/become.html#plugin-list.
• Privilege Escalation Username field is only seen if an option for privilege escalation is selected. Enter the
username to use with escalation privileges on the remote system.
• Privilege Escalation Password: field is only seen if an option for privilege escalation is selected. Enter the
actual password to be used to authenticate the user via the selected privilege escalation type on the remote
system. This password will be stored encrypted in the database, if entered. Alternatively, you may configure
the automation controller to ask the user for the password at launch time by selecting Prompt on launch. In
these cases, a dialog opens when the job is launched, prompting the user to enter the password and password
confirmation.
Note: Sudo Password must be used in combination with SSH passwords or SSH Private Keys, since the automation
controller must first establish an authenticated SSH connection with the host prior to invoking sudo to change to the
sudo user.
Warning: Credentials which are used in Scheduled Jobs must not be configured as “Prompt on launch”.
10.4.14 Microsoft Azure Key Vault
This is considered part of the secret management capability. See Microsoft Azure Key Vault for more detail.
10.4.15 Microsoft Azure Resource Manager
Selecting this credential type enables synchronization of cloud inventory with Microsoft Azure Resource Manager.
Microsoft Azure Resource Manager credentials have several attributes that may be configured:
• Subscription ID: The Subscription UUID for the Microsoft Azure account (required).
• Username: The username to use to connect to the Microsoft Azure account.
• Password: The password to use to connect to the Microsoft Azure account.
• Client ID: The Client ID for the Microsoft Azure account.
• Client Secret: The Client Secret for the Microsoft Azure account.
• Tenant ID: The Tenant ID for the Microsoft Azure account.
• Azure Cloud Environment: The variable associated with Azure cloud or Azure stack environments.
These fields are equivalent to the variables in the API. To pass service principal credentials, define the following
variables:
AZURE_CLIENT_ID
AZURE_SECRET
AZURE_SUBSCRIPTION_ID
AZURE_TENANT
AZURE_CLOUD_ENVIRONMENT
To pass an Active Directory username/password pair, define the following variables:
AZURE_AD_USER
AZURE_PASSWORD
AZURE_SUBSCRIPTION_ID
You can also pass credentials as parameters to a task within a playbook. The order of precedence is parameters, then
environment variables, and finally a file found in your home directory.
To pass credentials as parameters to a task, use the following parameters for service principal credentials:
client_id
secret
subscription_id
tenant
azure_cloud_environment
To pass an Active Directory username/password pair as parameters to a task, use:
ad_user
password
subscription_id
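A minimal sketch of passing service principal credentials directly to a task; the resource group name and the variables holding the credential values are hypothetical:
- name: Create a resource group using explicit service principal credentials
  azure.azcollection.azure_rm_resourcegroup:
    name: demo-rg                                  # hypothetical resource group name
    location: eastus
    client_id: "{{ my_client_id }}"                # placeholder variables holding
    secret: "{{ my_client_secret }}"               # the service principal values
    subscription_id: "{{ my_subscription_id }}"
    tenant: "{{ my_tenant_id }}"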
10.4.16 Network
Select the Network credential type only if you are using a local connection with provider to use Ansible networking
modules to connect to and manage networking devices. When connecting to network devices, the credential type must
match the connection type:
• For local connections using provider, credential type should be Network
• For all other network connections (httpapi, netconf, and network_cli), credential type should be
Machine
For an overview of connection types available for network devices, refer to Multiple Communication Protocols.
The automation controller uses the following environment variables for Network credentials, which correspond to
fields prompted in the user interface:
ANSIBLE_NET_USERNAME
ANSIBLE_NET_PASSWORD
10.4.17 OpenShift or Kubernetes API Bearer Token
Selecting this credential type allows you to create instance groups that point to a Kubernetes or OpenShift container.
For more information about this concept, refer to Container and Instance Groups.
This credential type requires the following values from the cluster:
• A token associated with that service account (OpenShift or Kubernetes Bearer Token)
• A CA certificate associated with the cluster
This section describes creating a Service Account in an OpenShift cluster (or K8s) in order to be used to run jobs in
a container group via automation controller. After the Service Account is created, its credentials are provided to the
controller in the form of an OpenShift or Kubernetes API bearer token credential. Below describes how to create a
service account and collect the needed information for configuring automation controller.
To configure the controller:
1. To create a service account, you may download and use this sample service account, containergroup-sa,
and modify it as needed to obtain the above credentials.
2. Apply the configuration from containergroup-sa.yml:
oc apply -f containergroup-sa.yml
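As a hedged sketch of collecting the bearer token afterward — the service account name below is an assumption (check the sample file for the actual name), and newer oc clients replace this subcommand with oc create token:
$ oc serviceaccounts get-token containergroup-service-account   # prints the bearer token for the credential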
10.4.18 OpenStack
Selecting this credential type enables synchronization of cloud inventory with OpenStack.
10.4.19 Red Hat Ansible Automation Platform
Selecting this credential allows you to access another automation controller instance.
Automation controller credentials have the following inputs that are required:
• Controller Hostname: The base URL or IP address of the other instance to connect to.
• Username: The username to use to connect to it.
• Password: The password to use to connect to it.
• OAuth Token: If a username and password are not used, provide an OAuth token to use to authenticate.
10.4.20 Red Hat Satellite 6
Selecting this credential type enables synchronization of cloud inventory with Red Hat Satellite 6.
The automation controller writes a Satellite configuration file based on fields prompted in the user interface. The
absolute path to the file is set in the following environment variable:
FOREMAN_INI_PATH
10.4.21 Red Hat Virtualization
This credential allows the automation controller to access Ansible’s oVirt4.py dynamic inventory plugin, which is
managed by Red Hat Virtualization (RHV).
The automation controller uses the following environment variables for Red Hat Virtualization credentials, which
correspond to fields in the user interface:
OVIRT_URL
OVIRT_USERNAME
OVIRT_PASSWORD
10.4.22 Source Control
SCM (source control) credentials are used with Projects to clone and update local source code repositories from a
remote revision control system such as Git or Subversion.
Note: Source Control credentials cannot be configured as “Prompt on launch”. If you are using a GitHub account
for a Source Control credential and you have 2FA (Two Factor Authentication) enabled on your account, you will need
to use your Personal Access Token in the password field rather than your account password.
10.4.23 Thycotic DevOps Secrets Vault
This is considered part of the secret management capability. See Thycotic DevOps Secrets Vault for more detail.
10.4.24 Thycotic Secret Server
This is considered part of the secret management capability. See Thycotic Secret Server for more detail.
10.4.25 Vault
Selecting this credential type enables synchronization of inventory with Ansible Vault.
Vault credentials require the Vault Password and an optional Vault Identifier if applying multi-Vault credentialing.
For more information on the automation controller Multi-Vault support, refer to the Multi-Vault Credentials section of
the Automation Controller Administration Guide.
You may configure the automation controller to ask the user for the password at launch time by selecting Prompt
on launch. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and
password confirmation.
Warning: Credentials which are used in Scheduled Jobs must not be configured as “Prompt on launch”.
10.4.26 VMware vCenter
Selecting this credential type enables synchronization of inventory with VMware vCenter.
The automation controller uses the following environment variables for VMware vCenter credentials, which
correspond to fields prompted in the user interface:
VMWARE_HOST
VMWARE_USER
VMWARE_PASSWORD
VMWARE_VALIDATE_CERTS
Note: If the VMware guest tools are not running on the instance, VMware inventory sync may not return an IP
address for that instance.
ELEVEN
CUSTOM CREDENTIAL TYPES
As an administrator with superuser access, you can define a custom credential type in a standard format using a
YAML/JSON-like definition, allowing the assignment of new credential types to jobs and inventory updates. This
allows you to define a custom credential type that works in ways similar to existing credential types. For example,
you could create a custom credential type that injects an API token for a third-party web service into an environment
variable, which your playbook or custom inventory script could consume.
Custom credentials support the following ways of injecting their authentication information:
• Environment variables
• Ansible extra variables
• File-based templating (i.e., generating .ini or .conf files that contain credential values)
You can attach one SSH and multiple cloud credentials to a Job Template. Each cloud credential must be of a different
type. In other words, only one AWS credential, one GCE credential, etc., are allowed. Vault credentials and machine
credentials are separate entities.
Note: When creating a new credential type, you are responsible for avoiding collisions in the extra_vars,
env, and file namespaces. Also, avoid environment variable or extra variable names that start with ANSIBLE_
because they are reserved. You must have Superuser permissions to be able to create and edit a credential type
(CredentialType) and to be able to view the CredentialType.injection field.
A “managed” credential type of kind=galaxy represents a content source for fetching collections defined in
requirements.yml when project updates are run (e.g., galaxy.ansible.com, cloud.redhat.com, on-premise Au-
tomation Hub). This new type will represent a URL and (optional) authentication details necessary to construct the
environment variables when a project update runs ansible-galaxy collection install as described in
the Ansible documentation, Configuring the ansible-galaxy client. It has fields which map directly to the configuration
options exposed to the Ansible Galaxy CLI, e.g., per-server. An endpoint in the API reflects an ordered list of these
credentials at the Organization level:
/api/v2/organizations/N/galaxy_credentials/
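A hedged example of listing these ordered credentials through the API; the hostname, organization ID, and basic-auth credentials below are placeholders:
$ curl -u admin:password "https://controller.example.org/api/v2/organizations/1/galaxy_credentials/"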
Installations of the automation controller migrate existing Galaxy-oriented setting values in such a way that, post-
upgrade, proper credentials are created and attached to every Organization. After upgrading to the latest version,
every organization that existed prior to the upgrade now has a list of (one or more) “Galaxy” credentials associated with
it.
Additionally, post-upgrade, these settings are no longer visible (or editable) from the /api/v2/settings/jobs/
endpoint.
The automation controller should still continue to fetch roles directly from public Galaxy even if galaxy.ansible.com
is not the first credential in the list for the Organization. The global “Galaxy” settings are no longer configured at
the jobs level, but at the Organization level in the User Interface. The Organization’s Add and Edit windows have an
optional Credential lookup field for credentials of kind=galaxy.
It is very important to specify the order of these credentials as order sets precedence for the sync and lookup of
the content. For more information, see Creating a New Organization. For detail on how to set up a project using
collections, see Using Collections via Hub.
Support for version 2 of the API (api/v2/) means a one-to-many relationship for Job Templates to credentials
(including multi-cloud support). Credentials can be filtered using the v2 API:
$ curl "https://controller.example.org/api/v2/credentials/?credential_type__
,→namespace=aws"
The credential kinds and what they cover are summarized below:
• Machine: SSH
• Vault: Vault
• Network: Sets environment variables (e.g., ANSIBLE_NET_AUTHORIZE)
• SCM: Source Control
• Cloud: EC2, AWS, and lots of others
• Insights: Insights
• Galaxy: galaxy.ansible.com, cloud.redhat.com, on-premise Automation Hub
Access credential types by clicking Credential Types from the left navigation bar. If no custom credential types
have been created, the Credential Types view will not have any to display and will prompt you to add one:
If credential types have been created, this page displays a list of all existing and available Credential Types.
To view more information about a credential type, click on its name or the Edit button from the Actions
column.
Each credential type displays its own unique configurations in the Input Configuration field and the Injector Con-
figuration field, if applicable. Both YAML and JSON formats are supported in the configuration fields.
Note: When creating a new credential type, do not use reserved variable names that start with ANSIBLE_ for the
INPUT and INJECTOR names and IDs, as they are invalid for custom credential types.
To create a new credential type:
1. Click the Add button in the Credential Types screen.
2. Enter the appropriate details in the Name and Description fields.
3. In the Input Configuration field, specify an input schema which defines a set of ordered fields for that type.
The format can be in YAML or JSON, as shown:
YAML
fields:
  - type: string
    id: username
    label: Username
  - type: string
    id: password
    label: Password
    secret: true
required:
  - username
  - password
JSON
{
  "fields": [
    {
      "type": "string",
      "id": "username",
      "label": "Username"
    },
    {
      "secret": true,
      "type": "string",
      "id": "password",
      "label": "Password"
    }
  ],
  "required": ["username", "password"]
}
An annotated fragment of an input schema field (the remaining attributes are elided):
{
  "fields": [{
    "id": "api_token",   # required - a unique name used to reference the field value
    ...
  }]
}
4. In the Injector Configuration field, enter environment variables or extra variables that specify the values a
credential type can inject. The format can be in YAML or JSON (see examples in the previous step). The
configuration in JSON format below shows each field and how it is used:
{
"file": {
"template": "[mycloud]\ntoken={{ api_token }}"
},
"env": {
"THIRD_PARTY_CLOUD_API_TOKEN": "{{ api_token }}"
},
"extra_vars": {
"some_extra_var": "{{ username }}:{{ password }}"
}
}
Credential Types can also generate temporary files to support .ini files or certificate/key data:
{
"file": {
"template": "[mycloud]\ntoken={{ api_token }}"
},
"env": {
"MY_CLOUD_INI_FILE": "{{ tower.filename }}"
}
}
In this example, the automation controller will write a temporary file that contains:
[mycloud]
token=SOME_TOKEN_VALUE
The absolute file path to the generated file will be stored in an environment variable named MY_CLOUD_INI_FILE.
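As a hedged illustration, a playbook task could then read the generated path from that environment variable (the variable name matches the injector example above):
- name: Show where the generated INI file landed
  ansible.builtin.debug:
    msg: "{{ lookup('env', 'MY_CLOUD_INI_FILE') }}"   # path injected by the credential type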
An example of referencing multiple files in a custom credential template is as follows:
Inputs
{
"fields": [{
"id": "cert",
"label": "Certificate",
"type": "string"
},{
"id": "key",
"label": "Key",
"type": "string"
}]
}
Injectors
{
"file": {
"template.cert_file": "[mycert]\n{{ cert }}",
"template.key_file": "[mykey]\n{{ key }}"
},
"env": {
"MY_CERT_INI_FILE": "{{ tower.filename.cert_file }}",
"MY_KEY_INI_FILE": "{{ tower.filename.key_file }}"
}
}
Click the Edit button in the Actions column to modify the options of a credential type.
Note: In the Edit screen, you can modify the details or delete the credential type. If the Delete button is grayed out,
the credential type is being used by a credential, and you must remove the credential type from all the
credentials that use it before you can delete it.
7. Verify that the newly created credential type can be selected from the Credential Type selection window when
creating a new credential:
TWELVE
SECRET MANAGEMENT SYSTEM
Users and admins upload machine and cloud credentials so that automation can access machines and external services
on their behalf. By default, sensitive credential values (such as SSH passwords, SSH private keys, API tokens for cloud
services) are stored in the database after being encrypted. With external credentials backed by credential plugins, you
can map credential fields (like a password or an SSH Private key) to values stored in a secret management system
instead of providing them to the controller directly. The automation controller provides a secret management system that
includes integrations for:
• Centrify Vault Credential Provider Lookup
• CyberArk Application Identity Manager (AIM)
• CyberArk Conjur
• HashiCorp Vault Key-Value Store (KV)
• HashiCorp Vault SSH Secrets Engine
• Microsoft Azure Key Management System (KMS)
• Thycotic DevOps Secrets Vault
• Thycotic Secret Server
These external secret values will be fetched prior to running a playbook that needs them. For more information on
specifying these credentials in the User Interface, see Credentials.
When configuring automation controller to pull a secret from a 3rd-party system, it is in essence linking credential
fields to external systems. To link a credential field to a value stored in an external system, select the external credential
corresponding to that system and provide metadata to look up the desired value. The metadata input fields are part of
the external credential type definition of the source credential.
Automation controller provides a credential plugin interface that gives developers, integrators, admins, and power-users
the ability to add new external credential types to extend it to support other secret management systems. For more
detail, see the development docs for credential plugins.
Use the automation controller User Interface to configure and use each of the supported third-party secret management
systems.
1. First, create an external credential for authenticating with the secret management system. At minimum, provide
a name for the external credential and select one of the following for the Credential Type:
2. Navigate to the credential form of the target credential and link one or more input fields to the external credential
along with metadata for locating the secret in the external system. In this example, the Demo Credential is the
target credential.
3. For any of the fields below the Type Details area that you want to link to the external credential, click the
button of the input field. You are prompted to set the input source to use to retrieve your secret information.
4. Select the credential you want to link to, and click Next. This takes you to the Metadata tab of the input source.
This example shows the Metadata prompt for HashiVault Secret Lookup. Metadata is specific to the input source
you select. See the Metadata for credential input sources table for details.
5. Click Test to verify connection to the secret management system. If the lookup is unsuccessful, an error message
like this one displays:
6. When done, click OK. This closes the prompt window and returns you to the Details screen of your target
credential. Repeat these steps, starting with step 3 above to complete the remaining input fields for the target
credential. By linking the information in this manner, automation controller retrieves sensitive information, such
as username, password, keys, certificates, and tokens from the 3rd-party management systems and populates that
data into the remaining fields of the target credential form.
7. If necessary, supply any information manually for those fields that do not use linking as a way of retrieving
sensitive information. Refer to the appropriate Credential Types for more detail about each of the fields.
8. Click Save when done.
Metadata for credential input sources

Centrify Vault Credential Provider Lookup
• Account Name (required): Name of the system account or domain associated with Centrify Vault.
• System Name: Specify the name used by the Centrify portal.

CyberArk AIM
• Object Query (required): Lookup query for the object.
• Object Query Format: Select Exact for a specific secret name, or Regexp for a secret that has a dynamically generated name.
• Reason: If required per the object’s policy, supply a reason for checking out the secret, as CyberArk logs those.

CyberArk Conjur
• Secret Identifier: The identifier for the secret.
• Secret Version: Specify a version of the secret, if necessary; otherwise, leave it empty to use the latest version.

HashiCorp Vault Secret Lookup
• Name of Secret Backend: Specify the name of the KV backend to use. Leave it blank to use the first path segment of the Path to Secret field instead.
• Path to Secret (required): Specify the path to where the secret information is stored; for example, /path/username.
• Key Name (required): Specify the name of the key to look up the secret information.
• Secret Version (V2 Only): Specify a version if necessary; otherwise, leave it empty to use the latest version.

HashiCorp Vault Signed SSH
• Unsigned Public Key (required): Specify the public key of the cert you want to get signed. It needs to be present in the authorized keys file of the target host(s).
• Path to Secret (required): Specify the path to where the secret information is stored; for example, /path/username.
• Role Name (required): A role is a collection of SSH settings and parameters that are stored in Hashi vault. Typically, you can specify a couple of them with different privileges, timeouts, etc. So you could have a role that is allowed to get a cert signed for root, and other less privileged ones, for example.
• Valid Principals: Specify a user (or users), other than the default, that you are requesting vault to authorize the cert for the stored key. Hashi vault has a default user for whom it signs (e.g., ec2-user).

Azure KMS
• Secret Name (required): The actual name of the secret as it is referenced in Azure’s Key vault app.
• Secret Version: Specify a version of the secret, if necessary; otherwise, leave it empty to use the latest version.

Thycotic DevOps Secrets Vault
• Secret Path (required): Specify the path to where the secret information is stored (e.g., /path/username).

Thycotic Secret Server
• Secret ID (required): The identifier for the secret.
• Secret Field: Specify the field to be used from the secret.
You need the Centrify Vault web service running to store secrets in order for this integration to work. When Centrify
Vault Credential Provider Lookup is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Centrify Tenant URL (required): provide the URL used for communicating with Centrify’s secret management
system
• Centrify API User (required): provide the username
• Centrify API Password (required): provide the password
• OAuth2 Application ID: specify the identifier associated with the OAuth2 client
• OAuth2 Scope: specify the scope of the OAuth2 client
Below shows an example of a configured Centrify Vault lookup credential.
You need the CyberArk Central Credential Provider web service running to store secrets in order for this integration to
work. When CyberArk AIM Credential Provider Lookup is selected for Credential Type, provide the following
metadata to properly configure your lookup:
• CyberArk AIM URL (required): provide the URL used for communicating with CyberArk AIM’s secret man-
agement system
• Application ID (required): specify the identifier given by CyberArk AIM services
• Client Key: paste the client key if provided by CyberArk
• Client Certificate: include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the
certificate, if provided by CyberArk
• Verify SSL Certificates: this option is only available when the URL uses HTTPS. Check this option to verify
the server’s SSL certificate is valid and trusted. Environments that use internal or private CAs should leave this
option unchecked to disable verification.
Below shows an example of a configured CyberArk AIM credential.
When CyberArk Conjur Secret Lookup is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Conjur URL (required): provide the URL used for communicating with CyberArk Conjur’s secret management
system
• API Key (required): provide the key given by your Conjur admin
• Account (required): the organization’s account name
• Username (required): the specific authenticated user for this service
• Public Key Certificate: include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting
the public key, if provided by CyberArk
Below shows an example of a configured CyberArk Conjur credential.
When HashiCorp Vault Secret Lookup is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Server URL (required): provide the URL used for communicating with HashiCorp Vault’s secret management
system
• Token: specify the access token used to authenticate HashiCorp’s server
• CA Certificate: specify the CA certificate used to verify HashiCorp’s server
• Approle Role_ID: specify the ID for Approle authentication
• Approle Secret_ID: specify the corresponding secret ID for Approle authentication
• Path to Approle Auth: specify a path if other than the default path of /approle
• API Version (required): select v1 for static lookups and v2 for versioned lookups
For more detail about Approle and its fields, refer to the Vault documentation for Approle Auth Method. Below shows
an example of a configured HashiCorp Vault Secret Lookup credential.
When HashiCorp Vault Signed SSH is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Server URL (required): provide the URL used for communicating with HashiCorp Signed SSH’s secret man-
agement system
• Token: specify the access token used to authenticate HashiCorp’s server
• CA Certificate: specify the CA certificate used to verify HashiCorp’s server
• Approle Role_ID: specify the ID for Approle authentication
• Approle Secret_ID: specify the corresponding secret ID for Approle authentication
• Path to Approle Auth: specify a path if other than the default path of /approle
For more detail about Approle and its fields, refer to the Vault documentation for Approle Auth Method.
Below shows an example of a configured HashiCorp SSH Secrets Engine credential.
When Microsoft Azure Key Vault is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Vault URL (DNS Name) (required): provide the URL used for communicating with MS Azure’s key manage-
ment system
• Client ID (required): provide the identifier as obtained by the Azure Active Directory
• Client Secret (required): provide the secret as obtained by the Azure Active Directory
• Tenant ID (required): provide the unique identifier that is associated with an Azure Active Directory instance
within an Azure subscription
• Cloud Environment: select the applicable cloud environment to apply
Below shows an example of a configured Microsoft Azure KMS credential.
When Thycotic DevOps Secrets Vault is selected for Credential Type, provide the following metadata to properly
configure your lookup:
• Tenant (required): provide the URL used for communicating with Thycotic’s secret management system
• Top-level Domain (TLD) : provide the top-level domain designation (e.g., com, edu, org) associated with the
secret vault you want to integrate
• Client ID (required): provide the identifier as obtained by the Thycotic secret management system
• Client Secret (required): provide the secret as obtained by the Thycotic secret management system
Below shows an example of a configured Thycotic DevOps Secrets Vault credential.
When Thycotic Secret Server is selected for Credential Type, provide the following metadata to properly configure
your lookup:
• Secret Server URL (required): provide the URL used for communicating with the Thycotic Secret Server
management system
• Username (required): specify the authenticated user for this service
• Password (required): provide the password associated with the user
Below shows an example of a configured Thycotic Secret Server credential.
THIRTEEN
APPLICATIONS
Creating and configuring token-based authentication for external applications is available starting in automation con-
troller 3.3. This makes it easier for external applications such as ServiceNow and Jenkins to integrate with automation
controller. OAuth 2 allows you to use tokens to share certain data with an application without disclosing login infor-
mation, and furthermore, these tokens can be scoped as “read-only”. You create an application that is representative
of the external application you are integrating with, then use it to create tokens for that application to use on behalf of
the users of the external application.
Having these tokens associated to an application resource gives you the ability to manage all tokens issued for a
particular application more easily. By separating token issuance under Applications, you can revoke all tokens based
on the Application without having to revoke all tokens in the system.
When integrating an external web app with automation controller, that web app may need to create OAuth2 Tokens on
behalf of users of that web app. Creating an application with the Authorization Code grant type is the preferred
way to do this because:
• external applications can obtain a token for users, using their credentials
• tokens issued for a particular application are compartmentalized, so they can be managed easily (for example,
revoking all tokens associated with that application)
Access the Applications page by clicking Applications from the left navigation bar. The Applications page displays a
searchable list of all available Applications currently managed by the controller and can be sorted by Name.
If no other applications exist, only a gray box with a message to add applications displays.
Selecting the Tokens view displays a list of the users that have tokens to access the application.
Tokens can only access resources that their associated user can access, and can be limited further by specifying the scope
of the token.
Add Tokens
Tokens are added through the Users screen and can be associated with an application at that time. Specifying an
application can be performed directly in the User’s token settings. You can create a token for your user in the Tokens
configuration tab, meaning only you can create and see your tokens in your own user screen. To add a token:
1. Access the Users list view by clicking Users from the left navigation bar then click on your user to configure
your OAuth 2 tokens.
Note: You can only create OAuth 2 Tokens for your user via the API or UI, which means you can only access your
own user profile in order to configure or view your tokens. If you are an admin and need to create or remove tokens
for other users, see the revoke and create commands in the Token and session management section of the Automation
Controller Administration Guide.
2. Click the Tokens tab from your user’s profile.
3. Click the Add button, which opens the Create Token window.
4. Enter the following details in Create Token window:
• Application: enter the name of the application with which you want to associate your token. Alternatively, you
can search for it by clicking the button. This opens a separate window that allows you to choose from the
available options. Use the Search bar to filter by name if the list is extensive. Leave this field blank if you want
to create a Personal Access Token (PAT) that is not linked to any application.
• Description: optionally provide a short description for your token.
• Scope (required): specify the level of access you want this token to have.
5. When done, click Save, or click Cancel to abandon your changes.
After the token is saved, the newly created token for the user displays with the token information and when it expires.
Note: This is the only time the token value and associated refresh token value will ever be shown.
In the user’s profile, the application to which the token is assigned and its expiration date display in the token list view.
To verify the application in the example above now shows the user with the appropriate token, go to the Tokens tab of
the Applications window:
FOURTEEN
EXECUTION ENVIRONMENTS
The ability to build and deploy Python virtual environments for automation has been replaced by Ansible execution
environments. Unlike legacy virtual environments, execution environments are container images that make it possible
to incorporate system-level dependencies and collection-based content. Each execution environment allows you to
have a customized image to run jobs, and each of them contains only what you need when running the job, nothing
more.
Using Ansible content that depends on non-default dependencies (custom virtual environments) can be tricky. Pack-
ages must be installed on each node, play nicely with other software installed on the host system, and be kept in
sync. Previously, jobs ran inside of a virtual environment at /var/lib/awx/venv/ansible by default, which
was pre-loaded with dependencies for ansible-runner and certain types of Ansible content used by the Ansible control
machine.
To help simplify this process, container images can be built that serve as Ansible control nodes. These container
images are referred to as automation execution environments, which you can create with ansible-builder and then
ansible-runner can make use of those images.
In order to build images, an installation of either podman or docker is required, along with the ansible-builder Python
package. The --container-runtime option needs to correspond to the Podman/Docker executable you intend
to use.
To install from PyPI:
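$ pip install ansible-builder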
To install from a specific tag or branch, replace <ref> in the following example:
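One way to express this is with pip’s VCS install support (the exact URL form shown in the original is not preserved here, so treat this as a sketch):
$ pip install "git+https://github.com/ansible/ansible-builder.git@<ref>"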
Once you have created a definition, use this procedure to build your execution environment.
The ansible-builder build command takes an execution environment definition as an input. It outputs the
build context necessary for building an execution environment image, and proceeds with building that image. The
image can be re-built with the build context elsewhere, and produces the same result. By default, it looks for a file
named execution-environment.yml in the current directory.
For illustration purposes, the following example execution-environment.yml file is used as a starting
point:
---
version: 1
dependencies:
  galaxy: requirements.yml
The requirements.yml file referenced above contains:
---
collections:
- name: awx.awx
$ ansible-builder build
...
STEP 7: COMMIT my-awx-ee
--> 09c930f5f6a
In addition to producing a ready-to-use container image, the build context is preserved, which can be rebuilt at a
different time and/or location with the tooling of your choice, such as docker build or podman build.
Depending on whether an execution environment is made available for global use or tied to an organization, you must
have the appropriate level of administrator privileges in order to use an execution environment in a job. Execution
environments tied to an organization require Organization administrators to be able to run jobs with those execution
environments.
In order to use an execution environment in a job, you must have created one using ansible-builder. See Build an
execution environment for detail. Once an execution environment is created, you can use it to run jobs. Use the
automation controller user interface to specify the execution environment to use in your job templates.
Note: Before running a job or job template that uses an execution environment that has a credential assigned to it, be
sure that the credential contains a username, host, and password.
1. Click Execution Environments from the left navigation bar.
2. Click the Add button.
3. Enter the appropriate details for your execution environment, including its name and image.
4. Click Save.
Now your newly added execution environment is ready to be used in a job template. To add an execution environment
to a job template, specify it in the Execution Environment field of the job template, as shown in the example below.
For more information on setting up a job template, see Job Templates in the Automation Controller User Guide.
Once you added an execution environment to a job template, you can see those templates listed in the Templates tab
of the execution environment:
Rebuilding an execution environment is one way to add certs, but inheriting certs from the host provides a more
convenient solution. For VM-based installs, the controller automatically mounts the system trust store in the execution
environment when jobs run.
Additionally, you may customize execution environment mount options and mount paths in the Paths to expose to
isolated jobs field of the Job Settings page, where it supports podman-style volume mount syntax. Refer to the
Podman documentation for detail.
In some cases where the /etc/ssh/* files were added to the execution environment image due to customization
of an execution environment, an SSH error may occur. For example, exposing the
/etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O path allows the container to be mounted, but the ownership permissions are not
mapped correctly.
If you encounter this error, or have upgraded from an older version of the controller (e.g. 3.8.x), perform the following
steps:
1. Change the container ownership on the mounted volume to root.
2. In the Paths to expose to isolated jobs field of the Job Settings page, using the current example, expose the
path as such:
/etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O
Note: The :O option is only supported for directories. It is highly recommended that you be as specific as possible,
especially when specifying system paths. Mounting /etc or /usr directly has an impact that makes it difficult to
troubleshoot.
This informs podman to run a command similar to the example below, where the configuration is mounted and the
ssh command works as expected.
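A hedged sketch of the kind of invocation podman performs; the image name and trailing command are placeholders, and only the overlay volume flag is the point of the example:
$ podman run -v /etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O my-ee-image ssh -V   # ':O' mounts the directory as an overlay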
To expose isolated paths in OpenShift or Kubernetes containers as HostPath, assume the following configuration:
Use the Expose host paths for Container Groups toggle to enable it.
Once the playbook runs, the resulting Pod spec will display similar to the example below. Note the details of the
volumeMounts and volumes sections.
FIFTEEN
EXECUTION ENVIRONMENT SETUP REFERENCE
This section contains reference information associated with setting up and building execution environments.
A definition file is a .yml file that is required to build an image for an execution environment. An example of an
execution environment definition schema is as follows:
---
version: 1

build_arg_defaults:
  EE_BASE_IMAGE: 'quay.io/ansible/ansible-runner:stable-2.10-devel'

ansible_config: 'ansible.cfg'

dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

additional_build_steps:
  prepend: |
    RUN whoami
    RUN cat /etc/os-release
  append:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc
Default values for build arguments can be specified in the definition file in the build_arg_defaults section as a
dictionary. This is an alternative to using the --build-arg CLI flag.
Build arguments used by ansible-builder are the following:
• ANSIBLE_GALAXY_CLI_COLLECTION_OPTS allows the user to pass the --pre flag to enable the installation
of pre-release collections.
• EE_BASE_IMAGE specifies the parent image for the execution environment.
• EE_BUILDER_IMAGE specifies the image used for compiling type tasks.
Values given inside of build_arg_defaults will be hard-coded into the Containerfile, so they will persist if
podman build is called manually.
If the same variable is specified in the CLI --build-arg flag, the CLI value will take higher precedence.
When using an ansible.cfg file to pass a token and other settings for a private account to an Automation Hub
server, listing the config file path here (as a string) will enable it to be included as a build argument in the initial phase
of the build.
The galaxy entry points to a valid requirements file for the ansible-galaxy collection install -r
... command.
The entry requirements.yml may be a relative path from the directory of the execution environment definition’s
folder, or an absolute path.
The python entry points to a valid requirements file for the pip install -r ... command.
The entry requirements.txt may be a relative path from the directory of the execution environment definition’s
folder, or an absolute path.
The system entry points to a bindep requirements file. This will be processed by bindep and then passed to dnf;
other platforms are not yet supported. For more information about bindep, refer to the OpenDev documentation.
Additional commands may be specified in the additional_build_steps section, either for before the main
build steps (prepend) or after (append). The syntax needs to be one of the following:
• a multi-line string (example shown in the prepend section above)
• a dictionary (as shown via append)
The options that can be used with the ansible-builder build command include --file (-f), --context (-c), --container-runtime, --tag (-t), --build-arg, and --verbosity (-v).
15.2.1 Examples
The example in test/data/pytz requires the awx.awx collection in the execution environment definition.
The lookup plugin awx.awx.tower_schedule_rrule requires the PyPI pytz and another library to work.
If the test/data/pytz/execution-environment.yml file is provided to the ansible-builder build
command, then it will install the collection inside the image, read the requirements.txt file inside of the collection,
and then install pytz into the image.
The image produced can be used inside of an ansible-runner project by placing these variables inside the env/
settings file, inside of the private data directory.
---
container_image: image-name
process_isolation_executable: podman # or docker
process_isolation: true
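With those settings in place, a run can be launched the usual ansible-runner way; the private data directory path and playbook name here are hypothetical:
$ ansible-runner run /path/to/private-data-dir -p playbook.yml   # placeholder paths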
The awx.awx collection is a subset of content included in the default AWX execution environment. More details can
be found in the awx-ee repository.
Collections inside of the galaxy entry of an execution environment will contribute their Python and system require-
ments to the image.
Requirements from a collection can be recognized in these ways:
• A file meta/execution-environment.yml references the Python and/or bindep requirements files
• A file named requirements.txt is in the root level of the collection
• A file named bindep.txt is in the root level of the collection
If any of these files are in the build_ignore of the collection, it will not work correctly.
Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the
introspect command, for example:
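A hedged sketch of such an invocation; the collections path is an assumption, and the available flags vary across ansible-builder versions:
$ ansible-builder introspect ~/.ansible/collections   # path is a placeholder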
Python requirements files are combined into a single file using the requirements-parser library in order to
support complex syntax like references to other files.
Entries from separate collections that give the same package name will be combined into the same entry, with the
constraints combined.
There are several package names which are specifically ignored by ansible-builder, meaning that if a collection lists
these, they will not be included in the combined file. These include test packages and packages that provide Ansible
itself. The full list can be found in EXCLUDE_REQUIREMENTS in the ansible_builder.requirements
module.
The bindep format provides a way of specifying cross-platform requirements. A minimum expectation is that
collections specify necessary requirements for [platform:rpm].
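For instance, a minimal bindep.txt entry restricted to RPM platforms might look like this (the package name is illustrative only):
libxml2-devel [platform:rpm]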
Entries from multiple collections will be combined into a single file. Only requirements with no profiles (runtime
requirements) will be installed to the image. Entries from multiple collections which are outright duplicates of each
other may be consolidated in the combined file.
SIXTEEN
PROJECTS
A Project is a logical collection of Ansible playbooks, represented in the controller.
Note: By default, the Project Base Path is /var/lib/awx/projects, but this may have been modified by
the administrator. It is configured in /etc/tower/conf.d/custom.py. Use caution when editing this file, as
incorrect settings can disable your installation.
The Projects page displays the list of the projects that are currently available. The default view is collapsed (Compact)
with project name and its status, but you can use the arrow next to each entry to expand for more information.
For each project listed, you can get the latest SCM revision, edit the project, or copy the project attributes,
using the respective icons next to each project. Projects are allowed to be updated while a related job is running.
In cases where you have a big project (around 10 GB), disk space on /tmp may be an issue.
Status indicates the state of the project and may be one of the following (note that you can also filter your view by
specific status types):
• Pending - The source control update has been created, but not queued or started yet. Any job (not just source
control updates) will stay in pending until it’s actually ready to be run by the system. Reasons for it not being
ready include dependencies that are currently running (so it has to wait until they are done), or not
enough capacity to run in the locations it is configured to run in.
• Waiting - The source control update is in the queue waiting to be executed.
• Running - The source control update is currently in progress.
• Successful - The last source control update for this project succeeded.
• Failed - The last source control update for this project failed.
• Error - The last source control update job failed to run at all. (To be deprecated.)
• Canceled - The last source control update for the project was canceled.
• Never updated - The project is configured for source control, but has never been updated.
• OK - The project is not configured for source control, and is correctly in place. (To be deprecated.)
• Missing - Projects are absent from the project base path of /var/lib/awx/projects (applicable for man-
ual or source control managed projects).
Note: Projects of credential type Manual cannot update or schedule source control-based actions without being
reconfigured as an SCM type credential.
Note: If deleting items that are used by other work items, a message opens listing the items that are affected by the
deletion and prompts you to confirm the deletion. Some screens will contain items that are invalid or were previously
deleted, and these will fail to run.
Note: If adding a manual project, each project path inside of the project root folder can only be assigned to one
project. If you receive the following message, ensure that you have not already assigned the project path to an existing
project:
All of the project paths have been assigned to existing projects, or
there are no directories found in the base path. You will need to add
a project path before creating a new project.
• Create one or more directories to store playbooks under the Project Base Path (for example,
/var/lib/awx/projects/)
• Create or copy playbook files into the playbook directory.
• Ensure that the playbook directory and files are owned by the same UNIX user and group that the automation
controller service runs as.
• Ensure that the permissions are appropriate for the playbook directories and files.
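As a minimal sketch of these steps, assuming the default Project Base Path and that the service account is awx (adjust the user, group, and permissions for your environment):
$ mkdir /var/lib/awx/projects/helloworld
$ cp helloworld.yml /var/lib/awx/projects/helloworld/
$ chown -R awx:awx /var/lib/awx/projects/helloworld
$ chmod -R u=rwX,g=rX,o= /var/lib/awx/projects/helloworld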
If you have trouble adding a project path, check the permissions and SELinux context settings for the project directory
and files.
Warning: If you have not added any Ansible playbook directories to the base project path, you will receive a
warning to that effect when creating a project.
Correct this issue by creating the appropriate playbook directories and checking out playbooks from your SCM or
otherwise copying playbooks into the appropriate playbook directories.
Tip: Using a GitHub link offers an easy way to use a playbook. To help get you started, use the
helloworld.yml file available at: https://github.com/ansible/tower-example.git
This link offers a very similar playbook to the one created manually in the instructions found in the
Automation Controller Quick Setup Guide. Using it will not alter or harm your system in any way.
To configure playbooks to use Red Hat Insights, in the Project Details tab:
1. Select Red Hat Insights from the SCM Type drop-down menu list.
2. Red Hat Insights requires a credential for authentication. Select from the Credential field the appropriate
credential for use with Insights.
3. In the SCM Update Options, optionally select the launch behavior, if applicable.
• Clean - Removes any local modifications prior to performing an update.
• Delete on Update - Deletes the local repository in its entirety prior to performing an update. De-
pending on the size of the repository this may significantly increase the amount of time required to
complete an update.
• Update Revision on Launch - Updates the revision of the project to the current revision in the
remote source control, and caches the roles directory from Galaxy or Collections. Automation
controller ensures that the local revision matches and that the roles and collections are up-to-date
with the last update. Also, to avoid job overflows if jobs are spawned faster than the project can
sync, selecting this allows you to configure a Cache Timeout to cache prior project syncs for a
certain number of seconds.
Playbooks using a remote archive allow projects to be provided based on a build process that produces a versioned
artifact, or release, containing all the requirements for that project in a single archive.
To configure playbooks to use a remote archive, in the Project Details tab:
1. Select Remote Archive from the SCM Type drop-down menu list.
2. Enter the appropriate details into the following fields:
• SCM URL - requires a URL to a remote archive, such as a GitHub Release or a build artifact stored in Artifactory,
which is unpacked into the project path for use
• SCM Credential - If authentication is required, select the appropriate SCM credential
3. In the SCM Update Options, optionally select the launch behavior, if applicable.
• Clean - Removes any local modifications prior to performing an update.
• Delete on Update - Deletes the local repository in its entirety prior to performing an update. De-
pending on the size of the repository this may significantly increase the amount of time required to
complete an update.
• Update Revision on Launch - Not recommended, as this option updates the revision of the project
to the current revision in the remote source control and caches the roles directory from Galaxy
or Collections.
• Allow Branch Override - Not recommended, as this option allows a job template that uses this
project to launch with a specified SCM branch or revision other than that of the project.
Note: Since this SCM type is intended to support the concept of unchanging artifacts, it is advisable to disable Galaxy
integration (for roles, at minimum).
1. Update an existing SCM-based project by selecting the project and clicking the sync button.
Note: Please note that immediately after adding a project set up to use source control, a “Sync” starts that
fetches the project details from the configured source control.
2. Click on project’s status under the Status column to get further details about the update process.
The set of permissions assigned to this project (role-based access controls) that provide the ability to read, modify, and
administer projects, inventories, job templates, and other automation controller elements are Privileges.
You can access the project permissions via the Access tab next to the Details tab. This screen displays a list of users
that currently have permissions to this project. The list may be sorted and searched by Username, First Name, or
Last Name.
5. Click the Save button to apply the roles to the selected user(s) or team(s) and to add them as members.
The Add Users/Teams window closes to display the updated roles assigned for each user and team.
To remove roles for a particular user, click the disassociate (x) button next to its resource.
Clicking the Notifications tab allows you to review any notification integrations you have setup.
Use the toggles to enable or disable the notifications to use with your particular project. For more detail, see Enable
and Disable Notifications.
If no notifications have been set up, you can configure them from the Notifications link from the left navigation bar to
create a new notification.
Refer to Notification Types for additional details on configuring various notification types.
Clicking on Job Templates allows you to add and review any job templates or workflow templates associated with
this project.
Click on the statuses of the jobs that ran using that template to see its details and other useful information. You can
sort this list by various criteria, and perform a search to filter the templates of interest.
From this view, you can also launch or edit the template configuration using the respective icons.
Clicking on Schedules allows you to review any schedules set up for this project.
At the end of a Project update, automation controller searches for a file called requirements.yml in the roles di-
rectory, located at <project-top-level-directory>/roles/requirements.yml. If this file is found,
the following command automatically runs:
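ansible-galaxy role install -r roles/requirements.yml -p <job private dir>/requirements_roles -vvv
Here <job private dir> is a placeholder for the private job directory described below.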
This file allows you to reference Galaxy roles or roles within other repositories which can be checked out in conjunction
with your own project. The addition of this Ansible Galaxy support eliminates the need to create git submodules for
achieving this result. Given that SCM projects (along with roles/collections) are pulled into and executed from a private
job environment, a <private job directory> specific to the project within /tmp is created by default. However, you can
specify another Job Execution Path based on your environment in the Jobs Settings tab of the Settings window:
The cache directory is a subdirectory inside the global projects folder. The content may be copied from the cache
location to the <job private directory>/requirements_roles location.
By default, automation controller has a system-wide setting that allows roles to be dynamically downloaded from the
roles/requirements.yml file for SCM projects. You may turn off this setting in the Jobs settings screen of
the Settings menu by switching the Enable Role Download toggle button to OFF.
Whenever a project sync runs, automation controller determines if the project source and any roles from Galaxy and/or
Collections are out of date with the project. Project updates will download the roles inside the update.
For instance, say you have two git repositories in source control. The first one is playbooks, and the project in
automation controller points to this URL. The second one is provision-role, and it is referenced by the
roles/requirements.yml file inside of the playbooks git repo. If jobs need to pick up a change made to the upstream
role (that is, a new commit was pushed to the provision-role source control), updating the project will ensure this
happens. To make this change take effect in a job, you do not need to push a new commit to the playbooks repo, but
you do need to update the project, which downloads roles to a local cache. In short, jobs download the most recent
roles before every job run. Because roles and collections are locally cached for performance reasons, you will need
to select Update Revision on Launch in the project SCM Update Options to ensure that the upstream role is
re-downloaded before each job run.
The update happens much earlier in the process than the sync, so this surfaces errors and details faster and in a more
logical place.
For more information and examples on the syntax of the requirements.yml file, refer to the role requirements
section in the Ansible documentation.
If there are any directories that should specifically be exposed, you can specify those in the Jobs section of the Settings
screen in the Paths to Expose to Isolated Jobs or by updating the following entry in the settings file:
AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']
Note: The primary file you may want to add to AWX_ISOLATION_SHOW_PATHS is /var/lib/awx/.ssh, if your
playbooks need to use keys or settings defined there.
If you made changes in the settings file, be sure to restart services with the automation-controller-service
restart command after your changes have been saved.
In the User Interface, you can configure these settings in the Jobs settings window.
Note: The Primary Galaxy Server Username and Primary Galaxy Server Password fields are no longer config-
urable in automation controller 3.8. We recommend using tokens to access Galaxy or Automation Hub instead.
Automation controller supports project-specific Ansible collections in job runs. If you specify a collections require-
ments file in the SCM at collections/requirements.yml, automation controller will install collections in
that file in the implicit project sync before a job run.
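As an illustrative sketch, a collections/requirements.yml could look like this (collection names and the version constraint are examples only):
---
collections:
- name: community.general
- name: awx.awx
  version: '>=21.0.0'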
By default, automation controller has a system-wide setting that allows collections to be dynamically downloaded
from the collections/requirements.yml file for SCM projects. You may turn off this setting in the Jobs
settings tab of the Settings menu by switching the Enable Collections Download toggle button to OFF.
Roles and collections are locally cached for performance reasons, and you will need to select Update Revision on
Launch in the project SCM Update Options to ensure this:
Before automation controller can use Automation Hub as the default source for collections content, you need to create
an API token in the Automation Hub UI so that it can be specified in automation controller. You may connect to a
private Automation Hub or a public Automation Hub collection; the only difference is which URL you specify.
1. Navigate to https://cloud.redhat.com/ansible/automation-hub/token and click Load token.
2. Click the copy icon to copy the API token to the clipboard.
3. To use the public Automation Hub, create an Automation Hub credential using the copied token and pointing to
the URLs shown in the Server URL and SSO URL fields of the token page:
• Galaxy Server URL = https://cloud.redhat.com/api/automation-hub/
• Auth Server URL = https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
4. To use a private Automation Hub, create an Automation Hub credential using a token retrieved from the Repo
Management dashboard of your local Automation Hub and pointing to the published repo URL as shown:
You can create different repos with different namespaces/collections in them, but for each repo in Automation Hub
you need to create a different Automation Hub credential. Copy the Ansible CLI URL from the Automation Hub UI,
in the format of https://<hub_url>/api/galaxy/content/<repo you want to pull from>, into
the Galaxy Server URL field of the Create Credential form:
Refer to Managing Red Hat Certified and Ansible Galaxy Collections in Ansible Hub for Automation Hub UI-specific
instructions.
5. Navigate to the organization for which you want to be able to sync content from Automation Hub and add the
new Automation Hub credential to the organization. This step allows you to associate each organization with
the Automation Hub credential (i.e. repo) that you want to be able to use content from.
6. If the Automation Hub has self-signed certificates, click the toggle to enable the setting Ignore Ansible Galaxy
SSL Certificate Verification. For public Automation Hub, which uses a signed certificate, click the toggle to
disable it instead. Note this is a global setting:
7. Create a project, where the source repository specifies the necessary collections in a require-
ments file located in the collections/requirements.yml file. Refer to the syntax described
in the Ansible documentation: https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#
install-multiple-collections-with-a-requirements-file.
8. In the Projects list view, click the sync button to run an update against this project. Automation controller fetches
the Galaxy collections from the collections/requirements.yml file and reports it as changed; the collections
will now be installed for any job template using this project.
Note: If updates are needed from Galaxy or Collections, a sync is performed that downloads the required roles,
consuming that much more space in your /tmp directory. In cases where you have a big project (around 10 GB), disk space
on /tmp may be an issue.
For more information on collections, refer to Using Collections. For more information on how Red Hat itself publishes
one of these official collections, which can be used to automate your automation controller install directly, refer to the
AWX Ansible Collection documentation. This page is accessible with your Red Hat customer credentials as part of
your Red Hat Ansible Automation Platform subscription.
SEVENTEEN
INVENTORIES
An Inventory is a collection of hosts against which jobs may be launched, the same as an Ansible inventory file.
Inventories are divided into groups and these groups contain the actual hosts. Groups may be sourced manually, by
entering host names into the automation controller, or from one of its supported cloud providers.
Note: If you have a custom dynamic inventory script, or a cloud provider that is not yet supported natively in the
controller, you can also import that into the controller. Refer to Inventory File Importing in the Automation Controller
Administration Guide.
The Inventories window displays a list of the inventories that are currently available. The inventory list may be sorted
by name and searched by type, organization, description, owners and modifiers of the inventory, or additional criteria
as needed.
• Name: The inventory name. Clicking the Inventory name navigates to the properties screen for the
inventory, which shows the inventory’s groups and hosts. (This view is also accessible from the icon.)
• Status
The statuses are:
• Success: when the inventory source sync completed successfully
• Disabled: no inventory source added to the inventory
• Error: when the inventory source sync completed with an error
An example of inventories of various states, including one with detail for a disabled state:
– Copy: Makes a copy of an existing inventory as a template for creating a new one
Note: If deleting items that are used by other work items, a message opens listing the items that are affected by the
deletion and prompts you to confirm the deletion. Some screens will contain items that are invalid or were previously
deleted, and these will fail to run.
A Smart Inventory is a collection of hosts defined by a stored search that can be viewed like a standard inventory
and easily used with job runs. Organization administrators have admin permission to inventories in their
organization and can create Smart Inventories. A Smart Inventory is identified by KIND=smart. You can define a
Smart Inventory using the same method used with Search. InventorySource is directly associated with an
Inventory.
The Inventory model has the following new fields that are blank by default but are set accordingly for Smart
Inventories:
• kind is set to smart for Smart Inventories
• host_filter is set AND kind is set to smart for Smart Inventories.
The host model has a related endpoint, smart_inventories, that identifies the set of all Smart Inventories a host
is associated with. The membership table is updated every time a job runs against a smart inventory.
Note: To update the memberships more frequently, you can change the file-based setting
AWX_REBUILD_SMART_MEMBERSHIP to True (default is False). This will update memberships in the fol-
lowing events:
• a new host is added
• an existing host is modified (updated or deleted)
• a new Smart Inventory is added
• an existing Smart Inventory is modified (updated or deleted)
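In the settings file, this is a one-line change:
AWX_REBUILD_SMART_MEMBERSHIP = True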
You can use a search filter to populate hosts for an inventory. This feature utilizes the fact searching capability.
Facts generated by an Ansible playbook during a Job Template run are stored by the automation con-
troller into the database whenever use_fact_cache=True is set per-Job Template. New facts are
merged with existing facts and are per-host. These stored facts can be used to filter hosts via the /
api/v2/hosts endpoint, using the GET query parameter host_filter. For example: /api/v2/hosts?
host_filter=ansible_facts__ansible_processor_vcpus=8
The host_filter parameter allows for:
• grouping via ()
• use of the boolean and operator:
– __ to reference related fields in relational fields
– __ is used on ansible_facts to separate keys in a JSON key path
– [] is used to denote a json array in the path specification
– "" can be used in the value when spaces are wanted in the value
• “classic” Django queries may be embedded in the host_filter
Examples:
/api/v2/hosts/?host_filter=name=localhost
/api/v2/hosts/?host_filter=ansible_facts__ansible_date_time__weekday_number="3"
/api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]="GenuineIntel"
/api/v2/hosts/?host_filter=ansible_facts__ansible_lo__ipv6[]__scope="host"
/api/v2/hosts/?host_filter=ansible_facts__ansible_processor_vcpus=8
/api/v2/hosts/?host_filter=ansible_facts__ansible_env__PYTHONUNBUFFERED="true"
/api/v2/hosts/?host_filter=(name=localhost or name=database) and (groups__name=east or groups__name="west coast") and ansible_facts__an
You can search host_filter by host name, group name, and Ansible facts.
The format for a group search is:
groups.name:groupA
The format for a fact search is:
ansible_facts.ansible_fips:false
You can also perform Smart Search searches, which consist of a host name and host description.
host_filter=name=my_host
If a search term in host_filter is of string type, but the value could be read as a number (e.g. 2.66) or a JSON
keyword (e.g. null, true or false), add double quotations around the value to prevent the controller from mistakenly
parsing it as a non-string:
host_filter=ansible_facts__packages__dnsmasq[]__version="2.66"
To use ansible_facts to define the host filter when creating Smart Inventories, perform the following steps:
1. In the Create new smart inventory screen, click the search button next to the Smart host filter field to open a
pop-up window to filter hosts for this inventory.
2. In the search pop-up window, change the search criteria from Name to Advanced and select ansible_facts from
the Key field.
/api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]="GenuineIntel"
In the search field, enter ansible_processor[]="GenuineIntel" (no extra spaces or __ before the value)
and press [Enter].
The resulting search criteria for the specified ansible fact populates in the lower part of the window.
5. From the Details view, you can edit the Smart host filter field by clicking Edit and delete existing filter(s),
clear all existing filters, or add new ones.
Inventory updates use dynamically-generated YAML files which are parsed by their respective inventory plugin. In
Automation Controller Version 4.2.1, users can provide the new style inventory plugin config directly to the controller
via the inventory source source_vars for all the following inventory sources:
• Amazon Web Services EC2
• Google Compute Engine
• Microsoft Azure Resource Manager
• VMware vCenter
• Red Hat Satellite 6
• Red Hat Insights
• OpenStack
• Red Hat Virtualization
• Red Hat Ansible Automation Platform
Newly created configurations for inventory sources will contain the default plugin configuration values. If you want
your newly created inventory sources in 3.8 to match the output of a 3.7 source, you must apply a specific set of
configuration values for that source. To ensure backward compatibility, the controller uses “templates” for each of
these sources to force the output of inventory plugins into the legacy format. Refer to Supported Inventory Plugin
Templates section of this guide for each source and their respective templates to help you migrate to the new style
inventory plugin output.
source_vars that contain plugin: foo.bar.baz as a top-level key will be replaced with the appropriate
fully-qualified inventory plugin name at runtime based on the InventorySource source. For example, if ec2 is
selected for the InventorySource then, at run-time, plugin will be set to amazon.aws.aws_ec2.
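For example, a minimal sketch of source_vars for an EC2 source (the keys other than plugin are illustrative aws_ec2 options):
---
plugin: foo.bar.baz # replaced with amazon.aws.aws_ec2 at run-time
regions:
- us-east-1
filters:
  instance-state-name: running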
• Smart Host Filter: (Only applicable to Smart Inventories) Click the button to open a separate window to
filter hosts for this inventory. These options are based on the organization you chose.
Filters are similar to tags in that tags are used to filter certain hosts that contain those names. Therefore, to
populate the Smart Host Filter field, you are specifying a tag that contains the hosts you want, not actually
selecting the hosts themselves. Enter the tag in the Search field and press [Enter]. Filters are case-sensitive.
Refer to the Smart Host Filter section for more information.
• Instance Groups: Click the button to open a separate window. Choose the instance groups for this
inventory to run on. If the list is extensive, use the search to narrow the options.
• Variables: Variable definitions and values to be applied to all hosts in this inventory. Enter variables using either
JSON or YAML syntax. Use the radio button to toggle between the two.
5. Click the Save button to apply the roles to the selected user(s) or team(s) and to add them as members.
The Add Users/Teams window closes to display the updated roles assigned for each user and team.
To remove roles for a particular user, click the disassociate (x) button next to its resource.
Inventories are divided into groups, which may contain hosts and other groups, and hosts. Groups are only applicable
to standard inventories and are not directly configurable through a Smart Inventory. You can associate an existing
group through host(s) that are used with standard inventories. There are several actions available for standard
inventories:
• Create a new Group
• Create a new Host
• Run a command on the selected Inventory
• Edit Inventory properties
• View activity streams for Groups and Hosts
• Obtain help building your Inventory
Note: Inventory sources are not associated with groups. Spawned groups are top-level and may still have child
groups, and all of these spawned groups may have hosts.
2. Enter the appropriate details into the required and optional fields:
• Name: Required
• Description: Enter an arbitrary description as appropriate (optional)
• Variables: Enter definitions and values to be applied to all hosts in this group. Enter variables using either
JSON or YAML syntax. Use the radio button to toggle between the two.
3. When done, click Save.
If you chose to add an existing group, available groups will appear in a separate selection window.
Once a group is selected, it displays as an entry in the list of groups associated with the group.
5. To configure additional groups and hosts under the subgroup, click on the name of the subgroup from the list of
groups and repeat the same steps described in this section.
The list view displays all your inventory groups at once, or you can filter it to only display the root group(s). An
inventory group is considered a root group if it is not a subset of another group.
You may be able to delete a subgroup without concern for dependencies, since the controller will look for dependencies
such as any child groups or hosts. If any exist, a confirmation dialog displays for you to choose whether to delete
the root group and all of its subgroups and hosts; or promote the subgroup(s) so they become the top-level inventory
group(s), along with their host(s).
You can configure hosts for the inventory as well as for groups and groups within groups. To configure hosts:
1. Click the Hosts tab.
2. Click the Add button, and select whether to add a host that already exists in your configuration or create a new
host.
3. If creating a new host, select the toggle to specify whether or not to include this host while running jobs.
4. Enter the appropriate details into the required and optional fields:
• Host Name: Required
• Description: Enter an arbitrary description as appropriate (optional)
• Variables: Enter definitions and values to be applied to all hosts in this group. Enter variables using either
JSON or YAML syntax. Use the radio button to toggle between the two.
5. When done, click Save.
The Create Host window closes and the newly created host displays as an entry in the list of hosts associated with the
group that it was created for.
If you chose to add an existing host, available hosts will appear in a separate selection window.
Once a host is selected, it displays as an entry in the list of hosts associated with the group. You can disassociate a
host from this screen by selecting the host and clicking the Disassociate button.
Note: You may also run ad hoc commands from this screen. Refer to Running Ad Hoc Commands for more detail.
6. To configure additional groups for the host, click on the name of the host from the list of hosts.
b. Click to select the group(s) to associate with the host and click Save.
Once a group is associated, it displays as an entry in the list of groups associated with the host.
8. If a host was used to run a job, you can view details about those jobs in the Completed Jobs tab of the host and
click Expanded to view details about each job.
Inventory sources are not associated with groups. Spawned groups are top-level and may still have child groups, and
all of these spawned groups may have hosts. Adding a source to an inventory only applies to standard inventories.
Smart inventories inherit their source from the standard inventories they are associated with. To configure the source
for the inventory:
1. In the inventory you want to add a source, click the Sources tab.
2. Click the Add button.
This opens the Create Source window.
3. Enter the appropriate details into the required and optional fields:
• Name: Required
• Description: Enter an arbitrary description as appropriate (optional)
• Execution Environment: Optionally search for or enter the name of the execution environment with which
you want to run your inventory imports. Refer to the Execution Environments section for details on building an
execution environment.
• Source: Choose a source for your inventory. Refer to the Inventory Sources section for more information about
each source and details for entering the appropriate information.
4. After completing the required information for your chosen inventory source, you can continue to optionally
specify other common parameters, such as verbosity, host filters, and variables.
Note: The Regions, Instance Filters, and Only Group By fields have been removed in automation
controller 3.8.
5. Select the appropriate level of output on any inventory source’s update jobs from the Verbosity drop-down
menu.
6. Use the Host Filter field to specify only matching host names to be imported into the controller.
7. In the Enabled Variable field, specify the host variable the controller should use to retrieve the enabled state
from the given dictionary of host variables. The enabled variable may be specified using dot notation as ‘foo.bar’,
in which case the lookup will traverse into nested dicts, equivalent to: from_dict.get('foo', {}).get('bar', default).
8. If you specified a dictionary of host variables in the Enabled Variable field, you can provide
a value to enable on import. For example, if enabled_var='status.power_state' and
enabled_value='powered_on' with the following host variables, the host would be marked enabled:
{
"status": {
"power_state": "powered_on",
"created": "2020-08-04T18:13:04+00:00",
"healthy": true
},
"name": "foobar",
"ip_address": "192.168.2.1"
}
If power_state were any value other than powered_on, then the host would be disabled when
imported into the controller. If the key is not found, then the host will be enabled.
9. All cloud inventory sources have the following update options:
• Overwrite: If checked, any hosts and groups that were previously present on the external source but
are now removed, will be removed from the controller inventory. Hosts and groups that were not
managed by the inventory source will be promoted to the next manually created group, or if there is
no manually created group to promote them into, they will be left in the “all” default group for the
inventory.
When not checked, local child hosts and groups not found on the external source will remain
untouched by the inventory update process.
• Overwrite Variables: If checked, all variables for child groups and hosts will be removed and
replaced by those found on the external source. When not checked, a merge will be performed,
combining local variables with those found on the external source.
• Update on Launch: Each time a job runs using this inventory, refresh the inventory from the se-
lected source before executing job tasks. To avoid job overflows if jobs are spawned faster than the
inventory can sync, selecting this allows you to configure a Cache Timeout to cache prior inventory
syncs for a certain number of seconds.
The “Update on Launch” setting refers to a dependency system for projects and inventory, and
it will not specifically exclude two jobs from running at the same time. If a cache timeout
is specified, then the dependencies for the second job are created and it uses the project and
inventory update that the first job spawned. Both jobs then wait for that project and/or inventory
update to finish before proceeding. If they are different job templates, they can then both
start and run at the same time, if the system has the capacity to do so. If you intend to use
the controller’s provisioning callback feature with a dynamic inventory source, “Update on
Launch” should be set for the inventory group.
10. Review your entries and selections and click Save when done. This allows you to configure additional details,
such as schedules and notifications.
11. To configure schedules associated with this inventory source, click the Schedules tab.
a. If schedules are already set up, review, edit, or enable/disable your schedule preferences.
b. If schedules have not been set up, refer to Schedules for more information.
Note: The Notifications tab is only present after you save the newly-created source.
12. To configure notifications for the source, click the Notifications tab.
a. If notifications are already set up, use the toggles to enable or disable the notifications to use with your particular
source. For more detail, see Enable and Disable Notifications.
b. If notifications have not been set up, refer to Notifications for more information.
13. Review your entries and selections and click Save when done.
Once a source is defined, it displays as an entry in the list of sources associated with the inventory. From the Sources
tab you can perform a sync on a single source, or sync all of them at once. You can also perform additional actions
such as scheduling a sync process, and edit or delete the source.
Inventory Sources
Choose a source which matches the inventory type against which a host can be entered:
An inventory that is sourced from a project means that it uses the SCM type from the project it is tied to. For example,
if the project’s source is from GitHub, then the inventory will use the same source.
1. To configure a project-sourced inventory, select Sourced from a Project from the Source field.
2. The Create Source window expands with additional fields. Enter the following details:
• Credential: Optionally specify the credential to use for this source.
• Project: Required. Specify the project this inventory is using as its source. Click the button
to choose from a list of projects. If the list is extensive, use the search to narrow the options.
• Inventory File: Required. Select an inventory file associated with the sourced project. If not already
populated, you can type it into the text field within the drop down menu to filter the extraneous file
types. In addition to a flat file inventory, you can point to a directory or an inventory script.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. In addition to the update options available for cloud inventory sources, you can specify whether or not to update
on project changes. Check the Update on Project Update option to refresh the inventory from the selected
source after every project update where the SCM revision changes before executing job tasks. For more detail,
refer to Update on Project Update in the Automation Controller Administration Guide.
5. To pass environment variables to a custom inventory script, you can optionally set them in the Environment
Variables field. You may also place inventory scripts in source control and then run them from a project. See
Inventory File Importing in the Automation Controller Administration Guide for detail.
Note: If you are executing a custom inventory script from SCM, please make sure you set the execution bit (i.e.
chmod +x) on the script in your upstream source control. If you do not, the controller will throw a [Errno 13]
Permission denied error upon execution.
1. To configure an AWS EC2-sourced inventory, select Amazon EC2 from the Source field.
2. The Create Source window expands with additional fields. Enter the following details:
• Credential: Optionally choose from an existing AWS credential (for more information, refer to Credentials).
If the controller is running on an EC2 instance with an assigned IAM Role, the credential may be omitted, and
the security credentials from the instance metadata will be used instead. For more information on using IAM
Roles, refer to the IAM Roles for Amazon EC2 documentation at Amazon.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the aws_ec2 inventory plugin. Enter variables
using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description
of these variables, view the aws_ec2 inventory plugin documentation.
Note: If you only use include_filters, the AWS plugin always returns all the hosts. To use this properly, the
first condition of the or must be on filters, and then build the rest of the OR conditions on a list of
include_filters, as sketched below.
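A minimal sketch of this shape in Source Variables (tag names and values are illustrative):
filters:
  tag:Name:
  - dev-*
include_filters:
- tag:Name:
  - share-*
- tag:Name:
  - public-*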
1. To configure a Google-sourced inventory, select Google Compute Engine from the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing GCE Creden-
tial. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the gcp_compute inventory plugin. Enter
variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed
description of these variables, view the gcp_compute inventory plugin documentation.
1. To configure an Azure Resource Manager-sourced inventory, select Microsoft Azure Resource Manager from
the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing Azure Creden-
tial. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the azure_rm inventory plugin. Enter variables
using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description
of these variables, view the azure_rm inventory plugin documentation.
VMware vCenter
1. To configure a VMWare-sourced inventory, select VMware vCenter from the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing VMware
Credential. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the vmware_inventory inventory plugin. Enter
variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed
description of these variables, view the vmware_inventory inventory plugin.
Starting with Ansible 2.9, VMWare properties have changed from lower case to camelCase. The controller
provides aliases for the top-level keys, but lower case keys in nested properties have been discontinued.
For a list of valid and supported properties starting with Ansible 2.9, refer to the primary documentation
for hostvars from VMWare inventory imports in GitHub.
1. To configure a Red Hat Satellite-sourced inventory, select Red Hat Satellite from the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing Satellite
Credential. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to specify parameters used by the foreman inventory source. Enter variables
using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed descrip-
tion of these variables, refer to the theforeman.foreman.foreman – Foreman inventory source in the Ansible
documentation.
If you encounter an issue with the controller inventory not having the “related groups” from Satellite, you might need
to define these variables in the inventory source. See the inventory plugins template example for Red Hat Satellite 6
in the Ansible Automation Platform Installation and Reference Guide for detail.
If you see the message, "no foreman.id" variable(s) when syncing the inventory, refer to the
solution on the Red Hat Customer Portal at: https://access.redhat.com/solutions/5826451. Be sure to login with your
customer credentials to access the full article.
1. To configure a Red Hat Insights-sourced inventory, select Red Hat Insights from the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing Insights Cre-
dential. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the insights inventory plugin. Enter variables
using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description
of these variables, view the insights inventory plugin.
Red Hat Virtualization
1. To configure a Red Hat Virtualization-sourced inventory, select Red Hat Virtualization from the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing Red Hat
Virtualization Credential. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the ovirt inventory plugin. Enter variables using
either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed description of
these variables, view the ovirt inventory plugin.
Note: Red Hat Virtualization (ovirt) inventory source requests are secure by default. To change this default setting,
set the key ovirt_insecure to true in source_variables, which is only available from the API details of
the inventory source at the /api/v2/inventory_sources/N/ endpoint.
1. To configure an automation controller-sourced inventory, select Red Hat Ansible Automation Platform from
the Source field.
2. The Create Source window expands with the required Credential field. Choose from an existing Ansible Au-
tomation Platform Credential. For more information, refer to Credentials.
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in
the main procedure for adding a source.
4. Use the Source Variables field to override variables used by the controller inventory plugin. Enter vari-
ables using either JSON or YAML syntax. Use the radio button to toggle between the two. For a detailed
description of these variables, view the controller inventory plugin (requires your Red Hat Customer login).
Despite the removal of the custom inventory scripts API, the scripts are still saved in the database. The commands
described in this section allow you to recover the scripts in a format that is suitable for you to subsequently check
into source control. Usage looks like this:
$ awx-manage export_custom_scripts --filename=my_scripts.tar
Dump of old custom inventory scripts at my_scripts.tar
The naming of the scripts is _<pk>__<name>. This is the naming scheme used for project folders.
$ ls my_scripts
_10__inventory_script_rawhook            _19__                                       _30__inventory_script_listenhospital
_11__inventory_script_upperorder         _1__inventory_script_commercialinternet45   _4__inventory_script_whitestring
_12__inventory_script_eastplant          _22__inventory_script_pinexchange           _5__inventory_script_literaturepossession
_13__inventory_script_governmentculture  _23__inventory_script_brainluck             _6__inventory_script_opportunitytelephone
_14__inventory_script_bottomguess        _25__inventory_script_buyerleague           _7__inventory_script_letjury
_15__inventory_script_wallisland         _26__inventory_script_lifesport             _8__random_inventory_script_
_17__inventory_script_bidstory           _28__inventory_script_boxchild
_18__p                                   _29__inventory_script_wearstress
Each file contains a script. Scripts can be bash/python/ruby/more, so the extension is not included. They are
all directly executable (assuming the scripts worked). If you execute the script, it dumps the inventory data.
$ ./my_scripts/_11__inventory_script_upperorder
{"group_\ud801\udcb0\uc20e\u7b0e\ud81c\udfeb\ub12b\ub4d0\u9ac6\ud81e\udf07\u6ff9\uc17b": {"hosts":
["host_\ud821\udcad\u68b6\u7a51\u93b4\u69cf\uc3c2\ud81f\uddbe\ud820\udc92\u3143\u62c7",
"host_\u6057\u3985\u1f60\ufefb\u1b22\ubd2d\ua90c\ud81a\udc69\u1344\u9d15",
"host_\u78a0\ud820\udef3\u925e\u69da\ua549\ud80c\ude7e\ud81e\udc91\ud808\uddd1\u57d6\ud801\ude57",
"host_\ud83a\udc2d\ud7f7\ua18a\u779a\ud800\udf8b\u7903\ud820\udead\u4154\ud808\ude15\u9711",
"host_\u18a1\u9d6f\u08ac\u74c2\u54e2\u740e\u5f02\ud81d\uddee\ufbd6\u4506"], "vars": {"ansible_host": "127.0.0.1", "ansible_connection": "local"}}}
You can verify functionality with ansible-inventory. This should give the same data, but reformatted.
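For example, using one of the exported scripts from the listing above:
$ ansible-inventory -i ./my_scripts/_11__inventory_script_upperorder --list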
In the above example, you could cd into my_scripts and then issue a git init command, add the scripts you
want, push it to source control, and then create an SCM inventory source in the automation controller user interface.
For more information on syncing or using custom inventory scripts, refer to Inventory File Importing in the
Automation Controller Administration Guide.
If an inventory was used to run a job, you can view details about those jobs in the Completed Jobs tab of the inventory
and click Expanded to view details about each job.
4. Click Next to choose the execution environment you want the ad-hoc command to be run against.
5. Click Next to choose the credential you want to use and click the Launch button.
The results display in the Output tab of the module’s job window.
EIGHTEEN
SUPPORTED INVENTORY PLUGIN TEMPLATES
Upon upgrade to 4.x, existing configurations will be migrated to the new format that will produce a backwards com-
patible inventory output. Use the templates below to help aid in migrating your inventories to the new style inventory
plugin output.
18.1 Amazon Web Services EC2
compose:
ansible_host: public_ip_address
ec2_account_id: owner_id
ec2_ami_launch_index: ami_launch_index | string
ec2_architecture: architecture
ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
ec2_client_token: client_token
ec2_dns_name: public_dns_name
ec2_ebs_optimized: ebs_optimized
ec2_eventsSet: events | default("")
ec2_group_name: placement.group_name
ec2_hypervisor: hypervisor
ec2_id: instance_id
ec2_image_id: image_id
ec2_instance_profile: iam_instance_profile | default("")
ec2_instance_type: instance_type
ec2_ip_address: public_ip_address
ec2_kernel: kernel_id | default("")
parent_group: security_groups
prefix: security_group
- key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values()
18.2 Google Compute Engine
auth_kind: serviceaccount
compose:
ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
gce_name: name
gce_network: networkInterfaces[0].network.name
gce_private_ip: networkInterfaces[0].networkIP
gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
gce_status: status
gce_subnetwork: networkInterfaces[0].subnetwork.name
gce_tags: tags.get("items", [])
gce_zone: zone
hostnames:
18.3 Microsoft Azure Resource Manager
conditional_groups:
azure: true
default_host_filters: []
fail_on_template_errors: false
hostvar_expressions:
computer_name: name
private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
provisioning_state: provisioning_state | title
public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
public_ip_id: public_ip_id if public_ip_id is defined else None
public_ip_name: public_ip_name if public_ip_name is defined else None
tags: tags if tags else None
type: resource_type
keyed_groups:
- key: location
prefix: ''
separator: ''
- key: tags.keys() | list if tags else []
prefix: ''
separator: ''
- key: security_group
prefix: ''
separator: ''
prefix: ''
separator: ''
plain_host_names: true
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true
18.4 VMware vCenter
compose:
ansible_host: guest.ipAddress
ansible_ssh_host: guest.ipAddress
ansible_uuid: 99999999 | random | to_uuid
availablefield: availableField
configissue: configIssue
configstatus: configStatus
customvalue: customValue
effectiverole: effectiveRole
guestheartbeatstatus: guestHeartbeatStatus
layoutex: layoutEx
overallstatus: overallStatus
parentvapp: parentVApp
recenttask: recentTask
resourcepool: resourcePool
rootsnapshot: rootSnapshot
triggeredalarmstate: triggeredAlarmState
filters:
- runtime.powerState == "poweredOn"
keyed_groups:
- key: config.guestId
prefix: ''
separator: ''
- key: '"templates" if config.template else "guests"'
prefix: ''
separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
18.5 Red Hat Satellite 6
group_prefix: foreman_
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
prefix: foreman_environment_
separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foreman_location_
separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foreman_organization_
separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foreman_lifecycle_environment_
separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
prefix: foreman_content_view_
separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: false
want_params: true
18.6 OpenStack
expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack.cloud.openstack
18.7 Red Hat Virtualization
compose:
ansible_host: (devices.values() | list)[0][0] if devices else None
keyed_groups:
- key: cluster
prefix: cluster
separator: _
- key: status
prefix: status
separator: _
- key: tags
prefix: tag
separator: _
ovirt_hostname_preference:
- name
- fqdn
ovirt_insecure: false
plugin: ovirt.ovirt.ovirt
18.8 Red Hat Ansible Automation Platform
include_metadata: true
inventory_id: <inventory_id or url_quoted_named_url>
plugin: awx.awx.tower
validate_certs: <true or false>
NINETEEN
JOB TEMPLATES
A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to execute the
same job many times. Job templates also encourage the reuse of Ansible playbook content and collaboration between
teams.
The Templates menu opens a list of the job templates that are currently available. The default view is collapsed
(Compact), showing the template name, template type, and the timestamp of last job that ran using that template. You
can click Expanded (arrow next to each entry) to expand to view more information. This list is sorted alphabetically
by name, but you can sort by other criteria, or search by various fields and attributes of a template.
From this screen, you can launch, edit, and copy a job template using the respective icons. To delete a job template,
you must select one or more templates and click the Delete button. Before deleting a job template, be sure it is not
used in a workflow job template.
Note: If deleting items that are used by other work items, a message opens listing the items that are affected by the
deletion and prompts you to confirm the deletion. Some screens will contain items that are invalid or were previously
deleted, and these will fail to run.
Note: Job templates can be used to build a workflow template. Templates that show the Workflow Visualizer
icon next to them are workflow templates. Clicking the icon allows you to graphically build a workflow. Many
parameters in a job template allow you to enable Prompt on Launch that can be modified at the workflow level, and
do not affect the values assigned at the job template level. For instructions, see the Workflow Visualizer section.
Note: More information on job types can be found in the Playbooks: Special Topics section of the
Ansible documentation.
• Inventory: Choose the inventory to be used with this job template from the inventories available to the currently
logged in user.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
an inventory to run this job template against.
• Project: Choose the project to be used with this job template from the projects available to the currently logged
in user.
• SCM Branch: This field is only present if you chose a project that allows branch override. Specify the over-
riding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the
project is used. For more detail, see job branch overriding.
– Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to
choose an SCM branch.
• Playbook: Choose the playbook to be launched with this job template from the available playbooks. This field
automatically populates with the names of the playbooks found in the project base path for the selected project.
Alternatively, you can enter the name of a playbook file (like foo.yml) even if it is not listed. If you enter a
filename that is not valid, the template will display an error, or cause the job to fail.
• Credentials: Click the button to open a separate window. Choose the credential from the available options
to be used with this job template. Use the drop-down menu list to filter by credential type if the list is extensive.
Note: Some credential types are not listed because they do not apply to certain job templates.
• Prompt on Launch: If selected, upon launching a job template that has a default machine credential, you will
not be able to remove the default machine credential in the Prompt dialog without replacing it with another
machine credential before it can launch. Alternatively, you can add more credentials as you see fit.
• Forks: The number of parallel or simultaneous processes to use while executing the playbook. A value of
zero uses the Ansible default setting, which is 5 parallel processes unless overridden in /etc/ansible/
ansible.cfg.
• Limit: A host pattern to further constrain the list of hosts managed or affected by the playbook. Multiple
patterns can be separated by colons (“:”). As with core Ansible, “a:b” means “in group a or b”, “a:b:&c” means
“in a or b but must be in c”, and “a:!b” means “in a, and definitely not in b”.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
a limit.
Note: For more information and examples refer to Patterns in the Ansible documentation.
• Verbosity: Control the level of output Ansible produces as the playbook executes. Choose the verbosity from
Normal to various Verbose or Debug settings. This only appears in the “details” report view. Verbose logging
includes the output of all commands. Debug logging is exceedingly verbose and includes information on SSH
operations that can be useful in certain support instances. Most users do not need to see debug mode output.
Warning: Verbosity 5 causes automation controller to block heavily when jobs are running, which could
delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
a verbosity.
• Job Tags: Provide a comma-separated list of playbook tags to specify what parts of the playbooks should be
executed. For more information and examples refer to Tags in the Ansible documentation.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
a job tag.
• Skip Tags: Provide a comma-separated list of playbook tags to skip certain tasks or parts of the playbooks to be
executed. For more information and examples refer to Tags in the Ansible documentation.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
tag(s) to skip.
• Labels: Supply optional labels that describe this job template, such as “dev” or “test”. Labels can be used to
group and filter job templates and completed jobs in the display.
– Labels are created when they are added to the Job Template. Labels are associated to a single Organization
using the Project that is provided in the Job Template. Members of the Organization can create labels on a
Job Template if they have edit permissions (such as the admin role).
– Once the Job Template is saved, the labels appear in the Job Templates overview in the Expanded view.
– Click on the “x” beside a label to remove it. When a label is removed, and is no longer associated with a
Job or Job Template, the label is permanently deleted from the list of Organization labels.
– Jobs inherit labels from the Job Template at the time of launch. If a label is deleted from a Job Template,
it is also deleted from the Job.
• Instance Groups: Click the button to open a separate window. Choose the instance groups on which you
want to run this job template. If the list is extensive, use the search to narrow the options.
• Job Slicing: Specify the number of slices you want this job template to run. Each slice will run the same tasks
against a portion of the inventory. For more information about job slices, see Job Slicing.
• Timeout: Allows you to specify the length of time (in seconds) that the job may run before it is canceled.
Defaults to 0 for no job timeout.
• Show Changes: Allows you to see the changes made by Ansible tasks.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
whether or not to show changes.
• Options: Specify options for launching this template, if necessary.
• Privilege Escalation: If checked, you enable this playbook to run as an administrator. This is the equivalent of
passing the --become option to the ansible-playbook command.
• Provisioning Callbacks: If checked, you enable a host to call back to automation controller via the REST
API and invoke the launch of a job from this job template. Refer to Provisioning Callbacks for additional
information.
• Enable Webhook: Turns on the ability to interface with a predefined SCM system web service that is used to
launch a job template. Currently supported SCM systems are GitHub and GitLab.
If you enable webhooks, other fields display, prompting for additional information:
• Webhook Key: Generated shared secret to be used by the webhook service to sign pay-
loads sent to automation controller. This must be configured in the settings on the web-
hook service in order for automation controller to accept webhooks from this service.
• Webhook Credential: Optionally, provide a GitHub or GitLab personal access token
(PAT) as a credential to use to send status updates back to the webhook service. Before
you can select it, the credential must exist. See Credential Types to create one.
For additional information on setting up webhooks, see Working with Webhooks.
• Concurrent Jobs: If checked, you are allowing jobs in the queue to run simultaneously if not
dependent on one another. Check this box if you want to run job slices simultaneously. Refer to
Automation controller Capacity Determination and Job Impact for additional information.
• Enable Fact Storage: When checked, automation controller will store gathered facts for all hosts
in an inventory related to the job running.
• Extra Variables:
– Pass extra command line variables to the playbook. This is the “-e” or “--extra-vars” command line
parameter for ansible-playbook that is documented in the Ansible documentation at Passing Variables on
the Command Line.
– Provide key/value pairs using either YAML or JSON. These variables take maximum precedence and
override other variables specified elsewhere. An example value might be:
git_branch: production
release_version: 1.5
Note: If you want to be able to specify extra_vars on a schedule, you must select Prompt on Launch for
EXTRA VARIABLES on the job template, or enable a survey on the job template; those answered survey
questions then become extra_vars.
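When extra variables can be prompted at launch (or provided via a survey), the same values can also be supplied
through the launch endpoint of the API. A minimal sketch using curl (the template ID and credentials are illustrative):
curl -u admin:password -X POST -H 'Content-Type: application/json' \
    -d '{"extra_vars": {"git_branch": "production", "release_version": "1.5"}}' \
    "https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/launch/"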
3. When you have completed configuring the details of the job template, click Save.
Saving the template does not exit the job template page but advances to the Job Template Details tab for viewing.
After saving the template, you can click Launch to launch the job, or click Edit to add or change the attributes of the
template, such as permissions, notifications, and surveys (if the job type is not a scan), or to view completed jobs.
You must first save the template prior to launching; otherwise, the Launch button remains grayed-out.
You can verify the template is saved when the newly created template appears on the Templates list view.
5. Click the Save button to apply the roles to the selected user(s) or team(s) and to add them as members.
The Add Users/Teams window closes to display the updated roles assigned for each user and team.
To remove roles for a particular user, click the disassociate (x) button next to its resource.
Clicking the Notifications tab allows you to review any notification integrations you have set up and their statuses, if
they have run.
Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see Enable
and Disable Notifications.
If no notifications have been set up, click the Add button to create a new notification. Refer to Notification Types for
additional details on configuring various notification types and extended messaging.
The Completed Jobs tab provides the list of jobs that have run from this template. Click Expanded to view details of each job,
including its status, ID, and name; type of job, time started and completed, who started the job; and which template,
inventory, project, and credential were used. You can filter the list of completed jobs using any of these criteria.
Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run.
19.5 Scheduling
Access the schedules for a particular job template from the Schedules tab.
Note: To be able to set extra_vars on schedules, you must select Prompt on Launch for EXTRA VARIABLES
on the job template, or enable a survey on the job template; those answered survey questions then become
extra_vars.
19.6 Surveys
Job types of Run or Check will provide a way to set up surveys in the Job Template creation or editing screens. Surveys
set extra variables for the playbook similar to ‘Prompt for Extra Variables’ does, but in a user-friendly question and
answer way. Surveys also allow for validation of user input. Click the Survey tab to create a survey.
Use cases for surveys are numerous. An example might be if operations wanted to give developers a “push to stage”
button they could run without advanced Ansible knowledge. When launched, this task could prompt for answers to
questions such as, “What tag should we release?”
Many types of questions can be asked, including multiple-choice questions.
To create a survey:
1. Click the Survey tab and click the Add button.
2. A survey can consist of any number of questions. For each question, enter the following information:
• Name: The question to ask the user
• Description: (optional) A description of what’s being asked of the user.
• Answer Variable Name: The Ansible variable name to store the user’s response in. This is the variable to be
used by the playbook. Variable names cannot contain spaces.
• Answer Type: Choose from the following question types.
– Text: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
– Textarea: A multi-line text field. You can set the minimum and maximum length (in characters) for this
answer.
– Password: Responses are treated as sensitive information, much like an actual password is treated. You
can set the minimum and maximum length (in characters) for this answer.
– Multiple Choice (single select): A list of options, of which only one can be selected at a time. Enter the
options, one per line, in the Multiple Choice Options box.
– Multiple Choice (multiple select): A list of options, any number of which can be selected at a time. Enter
the options, one per line, in the Multiple Choice Options box.
– Integer: An integer number. You can set the minimum and maximum integer value allowed for this answer.
– Float: A decimal number. You can set the minimum and maximum decimal value allowed for this answer.
• Required: Whether or not an answer to this question is required from the user.
• Minimum length and Maximum length: Specify if a certain length in the answer is required.
• Default Answer: The default answer to the question. This value is pre-filled in the interface and is used if the
answer is not provided by the user.
3. Once you have entered the question information, click Save to add the question.
The survey question displays in the Survey list. For any question, you can click the edit button to edit the question,
check the box next to a question and click Delete to delete it, or use the toggle button at the top of the screen
to enable or disable the survey prompt(s).
If you have more than one survey question, use the Edit Order button to rearrange the order of the questions by
clicking and dragging on the grid icon.
4. To add more questions, click the Add button to add additional questions.
The Required setting on a survey question determines whether the answer is optional or not for the user interacting
with it.
Behind the scenes, optional survey variables can be passed to the playbook in extra_vars, even when they aren’t
filled in.
• If a non-text variable (input type) is marked as optional, and is not filled in, no survey extra_var is passed to
the playbook.
• If a text input or text area input is marked as optional, is not filled in, and has a minimum length > 0, no
survey extra_var is passed to the playbook.
• If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0,
that survey extra_var is passed to the playbook, with the value set to an empty string ( “” ).
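As an illustration, suppose an optional Text question with a minimum length of 0 stores its answer in a variable
named my_text_var (a hypothetical name). If the user leaves it blank, the playbook still receives:
my_text_var: ""
If the question's minimum length were greater than 0, or the question were a non-text type, no my_text_var key
would be passed at all.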
A major benefit of automation controller is the push-button deployment of Ansible playbooks. You can easily configure
a template to store all of the parameters you would normally pass to ansible-playbook on the command line: not just the
playbooks, but the inventory, credentials, extra variables, and all options and settings you can specify on the command
line.
Easier deployments drive consistency, by running your playbooks the same way each time, and allow you to delegate
responsibilities: even users who are not Ansible experts can run playbooks written by others.
Launch a job template in any of the following ways:
• Access the job template list from the Templates menu on the left navigation bar or while in the Job Template
Details view, scroll to the bottom to access the button from the list of templates.
• While in the Job Template Details view of the job template you want to launch, click Launch.
A job may require additional information to run. The following data may be requested at launch:
• Credentials that were set up
• Any parameter for which the option Prompt on Launch is selected
• Passwords or passphrases that have been set to Ask
• A survey, if one has been configured for the job template
• Extra variables, if requested by the job template
Note: If a job has user-provided values, then those are respected upon relaunch. If the user did not specify a value,
then the job uses the default value from the job template. Jobs are not relaunched as-is. They are relaunched with the
user prompts re-applied to the job template.
Below is an example job launch that prompts for Job Tags, and runs the example survey created in Surveys.
Note: If you provide values on one tab, go back to a previous tab, and then continue on to the next tab, you will have
to re-provide the values on the remaining tabs. Fill in the tabs in the order the prompts appear to avoid this.
Along with any extra variables set in the job template and survey, automation controller automatically adds
the following variables to the job environment. Note that awx_* variables are defined by the system and
cannot be overridden. Variables about the job context, like awx_job_template_name, are not affected if
they are set in extra_vars.
• awx_job_id: The Job ID for this job run
• awx_job_launch_type: The description to indicate how the job was started:
– manual: Job was started manually by a user.
– relaunch: Job was started via relaunch.
– callback: Job was started via host callback.
– scheduled: Job was started from a schedule.
– dependency: Job was started as a dependency of another job.
– workflow: Job was started from a workflow job.
– sync: Job was started from a project sync.
– scm: Job was created as an Inventory SCM sync.
• awx_job_template_id: The Job Template ID that this job run uses
• awx_job_template_name: The Job Template name that this job uses
• awx_project_revision: The revision identifier for the source tree that this particular job uses (it is also
the same as the job’s field scm_revision)
• awx_project_scm_branch: The configured default project SCM branch for the project the job template
uses
• awx_job_scm_branch: If the SCM branch is overridden by the job, the value is shown here
• awx_user_email: The user email of the controller user that started this job. This is not available for callback
or scheduled jobs.
• awx_user_first_name: The user’s first name of the controller user that started this job. This is not avail-
able for callback or scheduled jobs.
• awx_user_id: The user ID of the controller user that started this job. This is not available for callback or
scheduled jobs.
• awx_user_last_name: The user’s last name of the controller user that started this job. This is not available
for callback or scheduled jobs.
• awx_user_name: The user name of the controller user that started this job. This is not available for callback
or scheduled jobs.
• awx_schedule_id: If applicable, the ID of the schedule that launched this job
• awx_schedule_name: If applicable, the name of the schedule that launched this job
• awx_workflow_job_id: If applicable, the ID of the workflow job that launched this job
• awx_workflow_job_name: If applicable, the name of the workflow job that launched this job. Note this is
also the same as the name of the workflow job template.
• awx_inventory_id: If applicable, the ID of the inventory this job uses
• awx_inventory_name: If applicable, the name of the inventory this job uses
For backwards compatibility, all variables are also given a “tower” prefix; for example, tower_job_id.
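Because these variables are injected as extra variables, a playbook can reference them directly. A minimal sketch
(the task is illustrative and assumes the playbook is launched from the controller, where these variables are defined):
- hosts: all
  gather_facts: false
  tasks:
    - name: Show which job and template produced this run
      debug:
        msg: "Job {{ awx_job_id }} was launched from template {{ awx_job_template_name }}"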
Upon launch, automation controller automatically redirects the web browser to the Job Status page for this job under
the Jobs tab.
Note: You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified
inventory. Refer to Jobs in the Automation Controller User Guide for more detail.
When slice jobs are running, job lists display the workflow and job slices, as well as a link to view their details
individually.
If you choose to copy a Job Template, the copy does not include any associated schedules, notifications, or permissions.
Schedules and notifications must be recreated by the user or admin creating the copy of the Job Template. The user
copying the Job Template is granted the admin permission, but no other permissions are assigned (copied) to the new
Job Template.
1. Access the job template list from the Templates menu on the left navigation bar or while in the Job Template
Details view, scroll to the bottom to access it from the list of templates.
2. Click the copy button associated with the template you want to copy.
The new template, named after the template from which you copied plus a timestamp, displays in the list of
templates.
3. Click the new template to open it, then click Edit.
4. Replace the contents of the Name field with a new name, and provide or modify the entries in the other fields to
complete this page.
5. Click Save when done.
Scan jobs are no longer supported starting with automation controller 3.2. This system tracking feature was used as
a way to capture and store facts as historical data. Facts are now stored in the controller via fact caching. For more
information, see Fact Caching.
If you have Job Template Scan Jobs in your system prior to automation controller 3.2, they have been converted to
type run (like normal job templates) and retained their associated resources (i.e. inventory, credential). Job Template
Scan Jobs that do not have a related project are assigned a special playbook by default, or you can specify a project
with your own scan playbook. A project was created for each organization that points to https://github.com/ansible/
tower-fact-modules and the Job Template was set to the playbook, https://github.com/ansible/tower-fact-modules/
blob/master/scan_facts.yml.
The scan job playbook, scan_facts.yml, contains invocations of three fact scan modules - packages, services,
and files, along with Ansible’s standard fact gathering. The scan_facts.yml playbook file looks like the
following:
- hosts: all
  vars:
    scan_use_checksum: false
    scan_use_recursive: false
  tasks:
    - scan_packages:
    - scan_services:
    - scan_files:
        paths: '{{ scan_file_paths }}'
        get_checksum: '{{ scan_use_checksum }}'
        recursive: '{{ scan_use_recursive }}'
      when: scan_file_paths is defined
The scan_files fact module is the only module that accepts parameters, passed via extra_vars on the scan job
template.
scan_file_paths: '/tmp/'
scan_use_checksum: true
scan_use_recursive: true
• The scan_file_paths parameter may have multiple settings (such as /tmp/ or /var/log).
• The scan_use_checksum and scan_use_recursive parameters may also be set to false or omitted.
An omission is the same as a false setting.
Scan job templates should enable become and use credentials for which become is a possibility. You can enable
become by checking Enable Privilege Escalation in the Options menu.
If you use the scan_facts.yml playbook with fact caching enabled, ensure that your OS is supported:
• Red Hat Enterprise Linux 5, 6, & 7
• Ubuntu 16.04 (Support for Ubuntu is deprecated and will be removed in a future release)
• OEL 6 & 7
• SLES 11 & 12
• Debian 6, 7, 8
• Fedora 22, 23, 24
• Amazon Linux 2016.03
• Windows Server 2008 and later
Note that some of these operating systems may require initial configuration in order to be able to run python and/or
have access to the python packages (such as python-apt) that the scan modules depend on.
The following is an example of a playbook that bootstraps a distribution so that scan jobs can be run against it,
given as a minimal sketch that uses the raw module (Ansible modules need Python on the target, so raw commands
install it first):
Bootstrap Ubuntu (16.04)
---
- name: Bootstrap Ubuntu 16.04 hosts for scan jobs
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: Install python and python-apt for the scan modules
      raw: apt-get -y update && apt-get -y install python python-apt
A playbook for a custom fact scan is similar to the example of the Fact Scan Playbook above. As an example, a
playbook that only uses a custom scan_foo Ansible fact module would look like this:
scan_custom.yml:
- hosts: all
  gather_facts: false
  tasks:
    - scan_foo:
scan_foo.py:
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict())
    # Return a hard-coded list of facts under the 'foo' key
    foo = [
        {"hello": "world"},
        {"foo": "bar"}
    ]
    module.exit_json(ansible_facts=dict(foo=foo))

main()
To use a custom fact module, ensure that it lives in the /library/ subdirectory of the Ansible project used in the
scan job template. This fact scan module is very simple, returning a hard-coded set of facts:
[
{
"hello": "world"
},
{
"foo": "bar"
}
]
Refer to the Module Provided ‘Facts’ section of the Ansible documentation for more information.
Automation controller can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This
behavior is configurable on a per-job-template basis. Fact caching is turned off by default but can be enabled to
serve fact requests for all hosts in an inventory related to the job running. This allows you to use job templates with
--limit while still having access to the entire inventory of host facts. A global timeout setting that the plugin
enforces per host can be specified (in seconds) through the Jobs settings menu.
Upon launching a job that uses fact cache (use_fact_cache=True), the controller will store all
ansible_facts associated with each host in the inventory associated with the job. The Ansible Fact
Cache plugin that ships with automation controller will only be enabled on jobs with fact cache enabled
(use_fact_cache=True).
When a job that has fact cache enabled (use_fact_cache=True) finishes running, the controller will restore all
records for the hosts in the inventory. Any records with update times newer than the currently stored facts per-host
will be updated in the database.
New and changed facts are logged via the controller’s logging facility, specifically to the system_tracking
namespace or logger. The logging payload includes the fields:
• host_name
• inventory_id
• ansible_facts
where ansible_facts is a dictionary of all Ansible facts for host_name in the controller inventory,
inventory_id.
Note: If a hostname includes a forward slash (/), fact cache will not work for that host. If you have an inventory with
100 hosts and one host has a / in the name, 99 of those hosts will still collect facts.
Fact caching saves a significant amount of time over running fact gathering. If you have a playbook in a job that
runs against a thousand hosts and forks, you could easily spend 10 minutes gathering facts across all of those hosts.
But if you run a job on a regular basis, the first run of it caches these facts and the next run will just pull them from
the database. This cuts the runtime of jobs against large inventories, including Smart Inventories, by an enormous
magnitude.
Note: Do not modify the ansible.cfg file to apply fact caching. Custom fact caching could conflict with the
controller’s fact caching feature. It is recommended to use the fact caching module that comes with automation
controller. Fact caching is not supported for isolated nodes.
You can choose to use cached facts in your job by enabling it in the Options field of the Job Templates window.
To clear facts, you need to run the Ansible clear_facts meta task. Below is an example playbook that uses the
Ansible clear_facts meta task.
- hosts: all
gather_facts: false
tasks:
- name: Clear gathered facts from all currently targeted hosts
meta: clear_facts
The API endpoint for fact caching can be found at: http://<controller server name>/api/v2/hosts/x/ansible_facts.
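For example, the cached facts for a single host can be retrieved with curl (the host ID and credentials are illustrative):
curl -s -u admin:password "https://<controller server name>/api/v2/hosts/42/ansible_facts/"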
19.11 Cloud Credentials
Cloud Credentials can be used when syncing a respective cloud inventory. Cloud Credentials may also be associated
with a Job Template and included in the runtime environment for use by a playbook. Cloud Credentials currently
supported:
• OpenStack
• Amazon Web Services
• Google
• Azure
• VMware
19.11.1 OpenStack
The sample playbook below invokes the nova_compute Ansible OpenStack cloud module and requires credentials
to do anything meaningful; specifically, it requires the following information: auth_url, username,
password, and project_name. These fields are made available to the playbook via the environment variable
OS_CLIENT_CONFIG_FILE, which points to a YAML file written by the controller based on the contents of
the cloud credential. This sample playbook loads the YAML file into the Ansible variable space.
OS_CLIENT_CONFIG_FILE example:
clouds:
  devstack:
    auth:
      auth_url: http://devstack.yoursite.com:5000/v2.0/
      username: admin
      password: your_password_here
      project_name: demo
Playbook example:
- hosts: all
  gather_facts: false
  vars:
    config_file: "{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}"
    nova_tenant_name: demo
    nova_image_name: "cirros-0.3.2-x86_64-uec"
    nova_instance_name: autobot
    nova_instance_state: 'present'
    nova_flavor_name: m1.nano
    nova_group:
      group_name: antarctica
      instance_name: deceptacon
      instance_count: 3
  tasks:
    - debug: msg="{{ config_file }}"
    - stat: path="{{ config_file }}"
      register: st
    - include_vars: "{{ config_file }}"
      when: st.stat.exists and st.stat.isreg
19.11.2 Amazon Web Services
Amazon Web Services cloud credentials are exposed as the following environment variables during playbook
execution (in the job template, choose the cloud credential needed for your setup):
• AWS_ACCESS_KEY_ID
• AWS_SECRET_ACCESS_KEY
All of the AWS modules will implicitly use these credentials when run via the controller without having to set the
aws_access_key_id or aws_secret_access_key module options.
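As a sketch of this implicit behavior, the following task authenticates using only those environment variables, with
no credential options set on the module (assumes the amazon.aws collection is available on the execution node; the
region value is illustrative):
- hosts: localhost
  gather_facts: false
  tasks:
    # boto3-based AWS modules read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    # from the environment, so no credential options are needed here.
    - name: List EC2 instances using the credentials injected by the controller
      amazon.aws.ec2_instance_info:
        region: us-east-1
      register: ec2_info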
19.11.3 Google
Google cloud credentials are exposed as the following environment variables during playbook execution (in the job
template, choose the cloud credential needed for your setup):
• GCE_EMAIL
• GCE_PROJECT
• GCE_CREDENTIALS_FILE_PATH
All of the Google modules will implicitly use these credentials when run via the controller without having to set the
service_account_email, project_id, or pem_file module options.
19.11.4 Azure
Azure cloud credentials are exposed as the following environment variables during playbook execution (in the job
template, choose the cloud credential needed for your setup):
• AZURE_SUBSCRIPTION_ID
• AZURE_CERT_PATH
All of the Azure modules implicitly use these credentials when run via the controller without having to set the
subscription_id or management_cert_path module options.
19.11.5 VMware
VMware cloud credentials are exposed as the following environment variables during playbook execution (in the job
template, choose the cloud credential needed for your setup):
• VMWARE_USER
• VMWARE_PASSWORD
• VMWARE_HOST
The sample playbook below demonstrates usage of these credentials:
- vsphere_guest:
    vcenter_hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    guest: newvm001
    from_template: yes
    template_src: linuxTemplate
Provisioning callbacks are a feature of Automation Controller that allow a host to initiate a playbook run against
itself, rather than waiting for a user to launch a job to manage the host from the Automation Controller console.
Note that provisioning callbacks are only used to run playbooks on the calling host. They are meant for cloud
bursting, i.e., new instances that need client-to-server communication for configuration (such as transmitting an
authorization key), not for running a job against another host. This provides for automatically configuring
a system after it has been provisioned by another system (such as AWS auto-scaling, or an OS provisioning system
like Kickstart or preseed), or for launching a job programmatically without invoking the Automation Controller API
directly. The Job Template launched only runs against the host requesting the provisioning.
Frequently this would be accessed via a firstboot type script, or from cron.
To enable callbacks, check the Provisioning Callbacks checkbox in the Job Template. This displays the Provisioning
Callback URL for this job template.
Note: If you intend to use Automation Controller’s provisioning callback feature with a dynamic inventory, Update
on Launch should be set for the inventory group used in the Job Template.
Callbacks also require a Host Config Key, to ensure that foreign hosts with the URL cannot request configuration.
Please provide a custom value for Host Config Key. The host key may be reused across multiple hosts to apply this
job template against multiple hosts. Should you wish to control what hosts are able to request configuration, the key
may be changed at any time.
To callback manually via REST, look at the callback URL in the UI, which is of the form:
https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
The ‘7’ in this sample URL is the job template ID in Automation Controller.
The request from the host must be a POST and must include the Host Config Key. Here is an example using curl
(the key value shown is illustrative; all on a single line):
curl -k -f -H 'Content-Type: application/json' -XPOST -d '{"host_config_key": "<HOST_CONFIG_KEY>"}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
The requesting host must be defined in your inventory for the callback to succeed. If Automation Controller fails to
locate the host either by name or IP address in one of your defined inventories, the request is denied. When running
a Job Template in this way, the host initiating the playbook run against itself must be in the inventory. If the host is
missing from the inventory, the Job Template will fail with a “No Hosts Matched” type error message.
Note: If your host is not in inventory and Update on Launch is set for the inventory group, Automation Con-
troller attempts to update cloud based inventory source before running the callback.
Successful requests result in an entry on the Jobs tab, where the results and history can be viewed.
While the callback can be accessed via REST, the suggested method of using the callback is to use one of the example
scripts that ships with Automation Controller - /usr/share/awx/request_tower_configuration.sh
(Linux/UNIX) or /usr/share/awx/request_tower_configuration.ps1 (Windows). Usage is described
in the source code of the file by passing the -h flag, as shown below:
./request_tower_configuration.sh -h
Usage: ./request_tower_configuration.sh <options>
OPTIONS:
-h Show this message
-s Controller server (e.g. https://ac.example.com) (required)
-k Allow insecure SSL connections and transfers
-c Host config key (required)
-t Job template ID (required)
-e Extra variables
This script has some intelligence: it knows how to retry commands and is therefore a more robust way to use callbacks
than a simple curl request. As written, the script retries once per minute for up to ten minutes.
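For example, a first-boot script might invoke it as follows (the server, key, and template ID are illustrative):
./request_tower_configuration.sh -s https://ac.example.com -c <HOST_CONFIG_KEY> -t 7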
Note: Please note that this is an example script. You should edit this script if you need more dynamic behavior when
detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry.
Most likely you will use callbacks with dynamic inventory in Automation Controller, such as pulling cloud inventory
from one of the supported cloud providers. In these cases, along with setting Update On Launch, be sure to configure
an inventory cache timeout for the inventory source, to avoid hammering of your Cloud’s API endpoints. Since the
request_tower_configuration.sh script polls once per minute for up to ten minutes, a suggested cache
invalidation time for inventory (configured on the inventory source itself) would be one or two minutes.
While we recommend against running the request_tower_configuration.sh script from a cron job, a sug-
gested cron interval would be perhaps every 30 minutes. Repeated configuration can be easily handled by scheduling
in Automation Controller, so the primary use of callbacks by most users is to enable a base image that is bootstrapped
into the latest configuration upon coming online. To do so, running at first boot is a better practice. First boot
scripts are just simple init scripts that typically self-delete, so you would set up an init script that called a copy of the
request_tower_configuration.sh script and make that into an autoscaling image.
Just as you can pass extra_vars in a regular Job Template, you can also pass them to provisioning callbacks. To
pass extra_vars, the data sent must be part of the body of the POST request as application/json (as the content
type). Use the following JSON format as an example when adding your own extra_vars to be passed:
'{"extra_vars": {"variable1":"value1","variable2":"value2",...}}'
You can also pass extra variables to the Job Template call using curl, as shown in the following example (the host
config key and variable values are illustrative):
curl -f -H 'Content-Type: application/json' -XPOST -d '{"host_config_key": "<HOST_CONFIG_KEY>", "extra_vars": {"variable1": "value1"}}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
Note: extra_vars passed to the job launch API are only honored if one of the following is true:
• They correspond to variables in an enabled survey
• ask_variables_on_launch is set to True
When you pass survey variables, they are passed as extra variables (extra_vars) within the controller. This can be
tricky, as passing extra variables to a job template (as you would do with a survey) can override other variables being
passed from the inventory and project.
For example, say that you have a defined variable for an inventory for debug = true. It is entirely possible that
this variable, debug = true, can be overridden in a job template survey.
To ensure that the variables you need to pass are not overridden, ensure they are included by redefining them in the
survey. Keep in mind that extra variables can be defined at the inventory, group, and host levels.
If specifying the ALLOW_JINJA_IN_EXTRA_VARS parameter, refer to the Controller Tips and Tricks section of the
Automation Controller Administration Guide to configure it in the Jobs Settings screen of the controller UI.
Note: The Job Template extra variables dictionary is merged with the Survey variables.
Here are some simplified examples of extra_vars in YAML and JSON formats:
The configuration in YAML format:
launch_to_orbit: true
satellites:
- sputnik
- explorer
- satcom
The same configuration in JSON format:
{
"launch_to_orbit": true,
"satellites": ["sputnik", "explorer", "satcom"]
}
The following table notes the behavior (hierarchy) of variable precedence in automation controller as it compares to
variable precedence in Ansible.
Automation Controller Variable Precedence Hierarchy (last listed wins)
Instead of manually relaunching a job, a relaunch is denoted by setting launch_type to relaunch. The relaunch
behavior deviates from the launch behavior in that it does not inherit extra_vars.
Job relaunching does not go through the inherit logic. It uses the same extra_vars that were calculated for the job
being relaunched.
For example, say that you launch a Job Template with no extra_vars which results in the creation of a Job called j1.
Next, say that you edit the Job Template and add in some extra_vars (such as adding {"hello": "world"}).
Relaunching j1 results in the creation of j2, but because there is no inherit logic and j1 had no extra_vars, j2 will
not have any extra_vars.
To continue upon this example, if you launched the Job Template with the extra_vars you added after the creation
of j1, the relaunch job created (j3) will include the extra_vars. And relaunching j3 results in the creation of j4,
which would also include extra_vars.
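A relaunch can also be requested through the API. A minimal sketch using curl (the job ID and credentials are
illustrative):
curl -u admin:password -X POST "https://<CONTROLLER_SERVER_NAME>/api/v2/jobs/123/relaunch/"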
TWENTY
JOB SLICING
A sliced job refers to the concept of a distributed job. Distributed jobs are used for running a job across a very
large number of hosts, allowing you to run multiple ansible-playbooks, each on a subset of an inventory, that can be
scheduled in parallel across a cluster.
By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration,
job slicing takes advantage of automation controller’s ability to distribute work to multiple nodes in a cluster. Job
slicing works by adding a Job Template field job_slice_count, which specifies the number of jobs into which to
slice the Ansible run. When this number is greater than 1, automation controller will generate a workflow from a job
template instead of a job. The inventory will be distributed evenly amongst the slice jobs. The workflow job is then
started, and proceeds as though it were a normal workflow. When launching a job, the API will return either a job
resource (if job_slice_count = 1) or a workflow job resource. The corresponding User Interface will redirect
to the appropriate screen to display the status of the run.
• These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignment to
instance groups based on inventory mapping).
Note: Job slicing is intended to scale job executions horizontally. Enabling job slicing on a job template divides an
inventory to be acted upon in the number of slices configured at launch time and then starts a job for each slice.
It is expected that the number of slices will be equal to or less than the number of controller nodes. Setting an extremely
high number of job slices (e.g., thousands), while allowed, can cause performance degradation, as the job scheduler is
not designed to simultaneously schedule thousands of workflow nodes, which is what the sliced jobs become.
• Sliced job templates with prompts and/or extra variables behave the same as standard job templates, applying all
variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit
to a Sliced Job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job
to fail.
• The job status of a sliced (distributed) job is calculated in the same manner as a workflow job’s: it fails if there
are any unhandled failures in its sub-jobs.
Warning: Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts)
should not be configured as a slice job. Any job that does so may fail, and automation controller will not attempt to
discover or account for playbooks that fail when run as slice jobs.
When jobs are sliced, they can run on any node and some may not run at the same time (insufficient capacity in the
system, for example). When slice jobs are running, job details display the workflow and job slice(s) currently running,
as well as a link to view their details individually.
By default, job templates are not normally configured to execute simultaneously (allow_simultaneous must
be checked in the API or Enable Concurrent Jobs in the UI). Slicing overrides this behavior and implies
allow_simultaneous even if that setting is unchecked. See Job Templates for information on how to specify
this, as well as the number of job slices on your job template configuration.
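As a sketch, both settings live on the job template resource in the API and can be set together (the template ID and
values are illustrative):
curl -u admin:password -X PATCH -H 'Content-Type: application/json' -d '{"job_slice_count": 3, "allow_simultaneous": true}' "https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/"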
The Job Templates section provides additional detail on performing the following operations in the User Interface:
• Launch workflow jobs with a job template that has a slice number greater than one
• Cancel the whole workflow or individual jobs after launching a slice job template
• Relaunch the whole workflow or individual jobs after slice jobs finish running
• View the details about the workflow and slice jobs after launching a job template
• Search slice jobs specifically after you create them (see subsequent section, Search job slices)
To make it easier to find slice jobs, use the Search functionality to apply a search filter to:
• job lists to show only slice jobs
• job lists to show only parent workflow jobs of job slices
• job templates lists to only show job templates that produce slice jobs
To show only slice jobs in job lists, as in most cases, you can filter either on the type (jobs here) or unified_jobs:
/api/v2/jobs/?job_slice_count__gt=1
/api/v2/workflow_jobs/?job_template__isnull=false
/api/v2/job_templates/?job_slice_count__gt=1
TWENTYONE
WORKFLOWS
Workflows allow you to configure a sequence of disparate job templates (or workflow templates) that may or may not
share inventory, playbooks, or permissions. However, workflows have ‘admin’ and ‘execute’ permissions, similar to
job templates. A workflow accomplishes the task of tracking the full set of jobs that were part of the release process
as a single unit.
Job or workflow templates are linked together using a graph-like structure made up of nodes. These nodes can be jobs,
project syncs, or inventory syncs. A template can be part of different workflows or used multiple times in the same
workflow. A copy of the graph structure is saved to a workflow job when you launch the workflow.
The example below shows a workflow that contains all three, as well as a workflow job template:
As the workflow runs, jobs are spawned from the node’s linked template. Nodes linking to a job template which
has prompt-driven fields (job_type, job_tags, skip_tags, limit) can contain those fields, and will not be
prompted on launch. Job templates with promptable credential and/or inventory, WITHOUT defaults, will not be
available for inclusion in a workflow.
• A node can have multiple parents, and children may be linked to any of the states of success, failure, or always.
If always, then the state is neither success nor failure. States apply at the node level, not at the workflow job
template level. A workflow job will be marked as successful unless it is canceled or encounters an error.
• If you remove a job or workflow template within the workflow, the nodes previously connected to the deleted
template are automatically connected upstream and retain their edge type, as in the example below:
• You could have a convergent workflow, where multiple jobs converge into one. In this scenario, either any one
of the jobs, or all of them, must complete before the next one runs, as shown in the example below:
In the example provided, automation controller runs the first two job templates in parallel. When they both finish and
succeed as specified, the third, downstream node (the convergence node) will trigger.
• Prompts for inventory and surveys will apply to workflow nodes in workflow job templates.
• If you launch from the API, running a get command displays a list of warnings and highlights missing compo-
nents. The basic workflow for a workflow job template is illustrated below.
• It is possible to launch several workflows simultaneously, and set a schedule for when to launch them. You can
set notifications on workflows, such as when a job completes, similar to that of job templates.
• You can build a recursive workflow, but if automation controller detects an error, it will stop at the time the
nested workflow attempts to run.
• Artifacts gathered in jobs in the sub-workflow will be passed to downstream nodes.
• An inventory can be set at the workflow level, or prompt for inventory on launch.
• When launched, all job templates in the workflow that have ask_inventory_on_launch=true will use
the workflow level inventory.
• Job templates that do not prompt for inventory will ignore the workflow inventory and run against their own
inventory.
• If a workflow prompts for inventory, schedules and other workflow nodes may provide the inventory.
• In a workflow convergence scenario, set_stats data will be merged in an undefined way, so it is recom-
mended that you set unique keys.
Also similar to job templates, workflows use surveys to specify variables to be used in the playbooks in the workflow,
called extra_vars. Survey variables are combined with extra_vars defined on the workflow job template, and saved to
the workflow job extra_vars. extra_vars in the workflow job are combined with job template variables when spawning
jobs within the workflow.
Workflows utilize the same behavior (hierarchy) of variable precedence as Job Templates with the exception of three
additional variables. Refer to the Variable Precedence Hierarchy in the Extra Variables section of the Job Templates
chapter of this guide. The three additional variables include:
Workflows included in a workflow will follow the same variable precedence - they will only inherit variables if they
are specifically prompted for, or defined as part of a survey.
In addition to the workflow extra_vars, jobs and workflows run as part of a workflow can inherit variables in the
artifacts dictionary of a parent job in the workflow (also combining with ancestors further upstream in its branch).
These can be defined by the set_stats Ansible module.
If you use the set_stats module in your playbook, you can produce results that can be consumed downstream by
another job, for example, notify users as to the success or failure of an integration run. In this example, there are two
playbooks that can be combined in a workflow to exercise artifact passing:
• invoke_set_stats.yml: first playbook in the workflow:
---
- hosts: localhost
  tasks:
    - name: "Artifact integration test results to the web"
      local_action: 'shell curl -F "file=@integration_results.txt" https://file.io'
      register: result

    - name: "Artifact the URL of the uploaded results into a variable"
      set_stats:
        data:
          integration_results_url: "{{ (result.stdout | from_json).link }}"
1. The first playbook, invoke_set_stats.yml, uploads integration_results.txt to a file sharing service on the
web and registers the result.
2. Through the invoke_set_stats playbook, set_stats is then invoked to artifact the URL of the uploaded
integration_results.txt into the Ansible variable “integration_results_url”.
3. The second playbook in the workflow consumes the Ansible extra variable “integration_results_url”. It calls
out to the web using the uri module to get the contents of the file uploaded by the previous Job Template job.
Then, it simply prints out the contents of the retrieved file.
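A minimal sketch of that second playbook, assuming the integration_results_url variable artifacted by the
first playbook:
---
- hosts: localhost
  tasks:
    - name: "Get the contents of the uploaded results file"
      uri:
        url: "{{ integration_results_url }}"
        return_content: true
      register: results

    - name: "Print the contents of the results file"
      debug:
        msg: "{{ results.content }}"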
Note: For artifacts to work, keep the default setting, per_host = False in the set_stats module.
The workflow job can have the following states:
• Waiting
• Running
• Success (finished)
• Cancel
• Error
• Failed
In the workflow scheme, canceling a job cancels the branch, while canceling the workflow job cancels the entire
workflow.
To edit and delete a workflow job template, you must have the admin role. To create a workflow job template, you
must be an organization admin or a system admin. However, you can run a workflow job template that contains job
templates you don’t have permissions for. Similar to projects, organization admins can create a blank workflow and
then grant an ‘admin_role’ to a low-level user, after which they can go about delegating more access and building the
graph. You must have execute access to a job template to add it to a workflow job template.
Other tasks such as the ability to make a duplicate copy and re-launch a workflow can also be performed, depending on
what kinds of permissions are granted to a particular user. Generally, you should have permissions to all the resources
used in a workflow (like job templates) before relaunching or making a copy.
For more information on performing the tasks described in this section, refer to the Administration Guide.
TWENTYTWO
WORKFLOW JOB TEMPLATES
A workflow job template links together a sequence of disparate resources that accomplishes the task of tracking the
full set of jobs that were part of the release process as a single unit. These resources may include:
• job templates
• workflow templates
• project syncs
• inventory source syncs
The Templates menu opens a list of the workflow and job templates that are currently available. The default view
is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that ran using that
template, but you can click Expanded to view more information. This list is sorted alphabetically by name, but you
can sort by other criteria, or search by various fields and attributes of a template. From this screen, you can launch,
edit, or copy a workflow job template.
Only workflow templates have the Workflow Visualizer icon as a shortcut for accessing the workflow editor.
Note: Workflow templates can be used as building blocks for another workflow template. Many parameters in a
workflow template allow you to enable Prompt on Launch that can be modified at the workflow job template level,
and do not affect the values assigned at the individual workflow template level. For instructions, see the Workflow
Visualizer section.
Note: To create a workflow template, you must have “Workflow Admin” permissions assigned at the Org level.
• Prompt on Launch: If selected, you can provide an inventory when this workflow template is launched, or
when this workflow template is used within another workflow template.
• Limit: Optionally specify a limit for the subset of servers that your workflow is going to run against. This value is a host
pattern to further constrain the list of hosts managed or affected by the playbook. Multiple patterns can be
separated by colons (“:”). As with core Ansible, “a:b” means “in group a or b”, “a:b:&c” means “in a or b but
must be in c”, and “a:!b” means “in a, and definitely not in b”.
• Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to choose
a limit.
• SCM Branch: Optionally specify the branch to override all job template nodes that prompt for a branch.
– Prompt on Launch: If selected, even if a default value is supplied, you will be prompted upon launch to
choose an SCM branch.
• Labels: Supply optional labels that describe this workflow template, such as “dev” or “test”. Labels can be used
to group and filter workflow templates and completed jobs in the display.
– Labels are created when they are added to the Workflow Template. Labels are associated to a single
Organization using the Project that is provided in the Workflow Template. Members of the Organization
can create labels on a Workflow Template if they have edit permissions (such as an admin role).
– Once the Workflow Template is saved, the labels appear in the Templates overview.
– Click on the “x” beside a label to remove it. When a label is removed, and is no longer associated with a
Workflow or Workflow Template, the label is permanently deleted from the list of Organization labels.
– Jobs inherit labels from the Workflow Template at the time of launch. If a label is deleted from a Workflow
Template, it is also deleted from the Job.
• Options:
• Check Enable Concurrent Jobs to allow simultaneous runs of this workflow. Refer to Automation controller
Capacity Determination and Job Impact for additional information.
• Check Enable Webhooks to turn on the ability to interface with a predefined SCM system web service that is
used to launch a job template. Currently supported SCM systems are GitHub and GitLab.
If you enable webhooks, other fields display, prompting for additional information:
• Webhook Service: Select which service to listen for webhooks from
• Webhook Credential: Optionally, provide a GitHub or GitLab personal access token
(PAT) as a credential to use to send status updates back to the webhook service. Before
you can select it, the credential must exist. See Credential Types to create one.
Upon Save, additional fields populate and the Workflow Visualizer automatically opens.
• Webhook URL: Automatically populated with the URL for the webhook service to POST
requests to.
• Webhook Key: Generated shared secret to be used by the webhook service to sign pay-
loads sent to automation controller. This must be configured in the settings on the web-
hook service in order for automation controller to accept webhooks from this service.
For additional information on setting up webhooks, see Working with Webhooks.
• Extra Variables:
– Pass extra command line variables to the playbook. This is the “-e” or “--extra-vars” command
line parameter for ansible-playbook that is documented in the Ansible documentation at Passing
Variables on the Command Line.
– Provide key/value pairs using either YAML or JSON. These variables have a maximum value of
precedence and overrides other variables specified elsewhere. An example value might be:
git_branch: production
release_version: 1.5
Note: If you want to be able to specify extra_vars on a schedule, you must select Prompt on Launch
for EXTRA VARIABLES on the workflow template, or enable a survey on the workflow template; those
answered survey questions then become extra_vars.
3. When you have completed configuring the workflow template, click Save.
Saving the template exits the Workflow Template page and the Workflow Visualizer opens to allow you to build a
workflow. See the Workflow Visualizer section for further instructions. Otherwise, you may close the Workflow Visu-
alizer to return to the Details tab of the newly saved template in order to review, edit, add permissions, notifications,
schedules, and surveys, or view completed jobs and build a workflow template at a later time. Alternatively, you can
click Launch to launch the workflow, but you must first save the template prior to launching, otherwise, the Launch
button remains grayed-out. Also, note the Notifications tab is present only after the template has been saved.
Clicking on Access allows you to review, grant, edit, and remove associated permissions for users as well as team
members.
Click the Add button to create new permissions for this workflow template by following the prompts to assign them
accordingly.
Clicking on Notifications allows you to review any notification integrations you have setup. The Notifications tab is
present only after the template has been saved.
Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see Enable
and Disable Notifications.
If no notifications have been set up, see Create a Notification Template for detail.
Refer to Notification Types for additional details on configuring various notification types.
The Completed Jobs tab provides the list of workflow jobs that have run. Click Expanded to view the various
details of each job.
From this view, you can click the job ID - name of the workflow job and see its graphical representation. The example
below shows the job details of a workflow job.
The nodes are marked with labels that help you identify them at a glance. See the legend in the Workflow Visualizer
section for more information.
Clicking on Schedules allows you to review any schedules set up for this template.
22.6 Surveys
Workflows containing job types of Run or Check provide a way to set up surveys in the Workflow Job Template
creation or editing screens. Surveys set extra variables for the playbook similar to ‘Prompt for Extra Variables’ does,
but in a user-friendly question and answer way. Surveys also allow for validation of user input. Click the Survey tab
to create a survey.
Use cases for surveys are numerous. An example might be if operations wanted to give developers a “push to stage”
button they could run without advanced Ansible knowledge. When launched, this task could prompt for answers to
questions such as, “What tag should we release?”
Many types of questions can be asked, including multiple-choice questions.
To create a survey:
1. Click the Survey tab to bring up the Add Survey window.
Use the ON/OFF toggle button at the top of the screen to quickly activate or deactivate this survey prompt.
2. A survey can consist of any number of questions. For each question, enter the following information:
• Name: The question to ask the user.
• Description: (optional) A description of what’s being asked of the user.
• Answer Variable Name: The Ansible variable name to store the user’s response in. This is the variable to be
used by the playbook. Variable names cannot contain spaces.
• Answer Type: Choose from the following question types.
– Text: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
– Textarea: A multi-line text field. You can set the minimum and maximum length (in characters) for this
answer.
– Password: Responses are treated as sensitive information, much like an actual password is treated. You
can set the minimum and maximum length (in characters) for this answer.
– Multiple Choice (single select): A list of options, of which only one can be selected at a time. Enter the
options, one per line, in the Multiple Choice Options box.
– Multiple Choice (multiple select): A list of options, any number of which can be selected at a time. Enter
the options, one per line, in the Multiple Choice Options box.
– Integer: An integer number. You can set the minimum and maximum integer value allowed for this answer.
– Float: A decimal number. You can set the minimum and maximum decimal value allowed for this answer.
• Default Answer: Depending on which type chosen, you can supply the default answer to the question. This
value is pre-filled in the interface and is used if the answer is not provided by the user.
• Required: Whether or not an answer to this question is required from the user.
3. Once you have entered the question information, click the Add button to add the question.
A stylized version of the survey is presented in the Preview pane. For any question, you can click on the Edit button
to edit the question, the Delete button to delete the question, and click and drag on the grid icon to rearrange the order
of the questions.
4. Return to the left pane to add additional questions.
5. When done, click Save to save the survey.
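Behind the scenes, the survey is saved as a JSON specification on the template. A sketch of what a one-question
survey might look like when retrieved through the API follows; the endpoint path (such as
/api/v2/job_templates/N/survey_spec/), field names, and values here are illustrative, so verify them against your
controller version:
{"name": "Push to stage",
"description": "Questions asked at launch",
"spec": [
{"question_name": "What tag should we release?",
"question_description": "Git tag to deploy",
"variable": "release_tag",
"type": "text",
"required": true,
"default": "v1.0.0",
"min": 1,
"max": 64}
]
}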
The Required setting on a survey question determines whether the answer is optional or not for the user interacting
with it.
Behind the scenes, optional survey variables can be passed to the playbook in extra_vars, even when they aren’t
filled in.
• If a non-text variable (input type) is marked as optional and is not filled in, no survey extra_var is passed to
the playbook.
• If a text input or text area input is marked as optional, is not filled in, and has a minimum length greater than 0,
no survey extra_var is passed to the playbook.
• If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0, that
survey extra_var is passed to the playbook, with the value set to an empty string ("").
22.7 Workflow Visualizer
The Workflow Visualizer provides a graphical way of linking together job templates, workflow templates, project
syncs, and inventory syncs to build a workflow template. Before building a workflow template, refer to the Workflows
section for considerations associated with various scenarios on parent, child, and sibling nodes.
You can set up any combination of two or more of the following node types to build a workflow: Template (Job
Template or Workflow Job Template), Project Sync, Inventory Sync, or Approval. Each node is represented by a
rectangle while the relationships and their associated edge types are represented by a line (or link) that connects them.
1. In the details/edit view of a workflow template, click the Visualizer tab or from the Templates list view, click
the ( ) icon to launch the Workflow Visualizer.
2. Click the Start button to display a list of nodes to add to your workflow.
3. On the right pane, select the type of node you want to add from the drop-down menu:
Note: If you select a job template that does not have a default inventory when populating a workflow graph, the
inventory of the parent workflow will be used. Though a credential is not required in a job template, you will not be
able to choose a job template for your workflow if it has a credential that requires a password, unless the credential is
replaced by a prompted credential.
4. Once a node is selected, the workflow begins to build, and you must specify the type of action to be taken for
the selected node. This action is also referred to as edge type.
5. If the node is a root node, the edge type defaults to Always and is non-editable.
For subsequent nodes, you can select one of the following scenarios (edge type) to apply to each:
• Always: Continue to execute regardless of success or failure.
• On Success: Upon successful completion, execute the next template.
• On Failure: Upon failure, execute a different template.
6. Select the behavior of the node if it is a convergent node from the Convergence field:
• Any is the default behavior, allowing any of the nodes to complete as specified before triggering
the next converging node. As long as the status of one parent meets one of those run conditions, an
ANY child node will run. In other words, an ANY node requires all parents to complete, but only
one parent must complete with the expected outcome.
• Choose All to ensure that all nodes complete as specified before converging and triggering the next
node. An ALL node runs the child only if every parent met its expected outcome; if any parent did
not, the child node will not run.
If selected, the graphical view will label the node as ALL.
Note: If a node is a root node, or a node that does not have any nodes converging into it, setting the Convergence
rule does not apply, as its behavior is dictated by the action that triggers it.
7. If a job template used in the workflow has Prompt on Launch selected for any of its parameters, a Prompt
button appears, allowing you to change those values at the node level. Use the wizard to change the value(s) in
each of the tabs and click Confirm in the Preview tab.
Likewise, if a workflow template used in the workflow has Prompt on Launch selected for the inventory option, use
the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it will override any
inventory that is supplied here.
Note: For job templates with promptable fields that are required, but don’t have a default, you must provide those
values when creating a node before the Select button becomes enabled. Two cases disable the Select button until a
value is provided via the Prompt button: 1) when you select the Prompt on Launch checkbox in a job template but
do not provide a default, or 2) when you create a survey question that is required but do not provide a default answer.
However, this is NOT the case with credentials. Credentials that require a password on launch are not
permitted when creating a workflow node, since everything needed to launch the node must be provided when the
node is created. So, if a job template prompts for credentials, automation controller prevents you from being able to
select a credential that requires a password.
You must also click Select when the prompt wizard closes in order to apply the changes at that node. Otherwise, any
changes you make will revert to the values set in the actual job template.
Once the node is created, it is labeled with its job type. A template that is associated with each workflow node will run
based on the selected run scenario as it proceeds. Click the compass ( ) icon to display the legend for each run
scenario and its job types.
8. Hovering over a node allows you to add another node, view info about the node, edit the node details, edit an
existing link, or delete the selected node.
9. When done adding/editing a node, click Select to save any modifications and render it on the graphical view.
For possible ways to build your workflow, see Node building scenarios.
10. When done with building your workflow template, click Save to save your entire workflow template and return
to the new Workflow Template details page.
Important: Clicking Close on this pane will not save your work, but instead, closes the entire Workflow Visualizer
and you will have to start over.
Approval nodes
Choosing an Approval node requires user intervention in order to advance the workflow. It serves as a means to
pause the workflow in between playbooks so that a user can give approval to continue on to the next playbook in the
workflow. This gives the user a specified amount of time to intervene, but also allows the user to continue as quickly
as possible without having to wait on some other trigger.
The default timeout is none, but you can specify the length of time before the request expires and is automatically
denied. After selecting and supplying the information for the approval node, it displays on the graphical view.
The approver is anyone who can execute the workflow job template containing the approval nodes, has org admin or
above privileges (for the org associated with that workflow job template), or any user who has the Approve permission
explicitly assigned to them within that specific workflow job template.
If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they
are denied, then they are marked as “timed out” or “failed”, respectively, and move on to the next “on fail node” or
“always node”. If approved, the “on success” path is taken. If you try to POST in the API to a node that has already
been approved, denied or timed out, an error message notifies you that this action is redundant, and no further steps
will be taken.
Below shows the various levels of permissions allowed on approval workflows:
You can add a sibling node by clicking the ( ) icon on the parent node:
You can insert another node in between nodes by hovering over the line that connects the two until the ( ) icon
appears. Clicking the ( ) icon automatically inserts the node between the two nodes.
To add a root node to depict a split scenario, click the ( ) button again:
At any node where you want to create a split scenario, hover over the node from which the split scenario begins and
click the ( ) icon. This essentially adds multiple nodes from the same parent node, creating sibling nodes:
Note: When adding a new node, the PROMPT button applies to workflow templates as well. Workflow templates
will prompt for inventory and surveys.
If you want to undo the last inserted node, click on another node without making a selection from the right pane. Or,
click Cancel from the right pane.
Below is an example of a workflow that contains all three types of jobs. It is initiated by a job template that, if it fails
to run, proceeds to the project sync job and, regardless of whether that fails or succeeds, proceeds to the inventory
sync job.
Remember to refer to the Key at the top of the window to identify the meaning of the symbols and colors associated
with the graphical depiction.
Note: In a workflow with a set of sibling nodes having varying edge types, if you remove a node that has a follow-on
node attached to it, the attached node automatically joins the set of sibling nodes and retains its edge type:
• To add a new link from one node to another, click the link icon that appears on each node. Doing this
highlights the nodes that are possible to link to. These feasible options are indicated by the dotted lines. Invalid
options are indicated by grayed out boxes (nodes) that would otherwise produce an invalid link. The example
below shows the Demo Project as a possible option for the e2e-ec20de52-project to link to, as indicated by the
arrows:
• To remove a link, click the link and click the Unlink button.
This button only appears in the right-hand panel if the target or child node has more than one parent. All nodes must
be linked to at least one other node at all times, so you must create a new link before removing an old one.
Click the settings icon ( ) to zoom, pan, or reposition the view. Alternatively, you can drag the workflow diagram
to reposition it on the screen or use the scroll on your mouse to zoom.
Launch a workflow template in one of the following ways:
• From the Templates list view, click the launch ( ) icon next to the workflow template.
• While in the Workflow Job Template Details view of the workflow template you want to launch, scroll to the
bottom and click Launch.
Along with any extra variables set in the job template and survey, automation controller automatically adds the same
variables as those added for a job template upon launch. Additionally, automation controller automatically redirects
the web browser to the Jobs Details page for this job, displaying the progress and the results.
Events related to approvals on workflows display in the Activity Stream ( ) with detailed information about the
approval requests, if any.
Automation controller allows you to copy a workflow template. Copying a workflow template does not copy any
associated schedules, notifications, or permissions; schedules and notifications must be recreated by the user or admin
creating the copy. To copy a workflow template, you must have Workflow Admin permissions assigned at the
organization level. The user copying the workflow template is granted admin permission on the copy, but no other
permissions are assigned (copied) to it.
1. Access the workflow template that you want to copy from the Templates menu on the left navigation bar or
while in the Workflow Job Template Details view, scroll to the bottom to access it from a list of templates.
2. Click the copy ( ) icon. Select the copied template and replace the contents of the Name field with a new
name, and provide or modify the entries in the other fields to complete this template.
3. Click Save when done.
Note: If a resource has a related resource for which you do not have the right level of permission, you cannot copy
the resource. For example, you cannot copy a project that uses a credential to which the current user only has Read
access. However, for a workflow template, even if some of its nodes use an unauthorized job template, inventory, or
credential, the workflow template can still be copied. In the copied workflow template, the corresponding fields in
those workflow template nodes will be absent.
Note: extra_vars passed to the job launch API are only honored if one of the following is true:
• They correspond to variables in an enabled survey
• ask_variables_on_launch is set to True
When you pass survey variables, they are passed as extra variables (extra_vars). This can be tricky, as passing
extra variables to a workflow template (as you would do with a survey) can override other variables being passed from
the inventory and project.
For example, say that you have a variable debug = true defined on an inventory. It is entirely possible that this
variable, debug = true, can be overridden in a workflow template survey.
To ensure that the variables you need to pass are not overridden, ensure they are included by redefining them in the
survey. Keep in mind that extra variables can be defined at the inventory, group, and host levels.
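As an illustration of launching with extra variables through the API, here is a minimal sketch; the host, credentials,
and template ID are placeholders, while the launch endpoint and extra_vars behavior are as described above:
import requests

CONTROLLER = "https://controller.example.com"  # placeholder host
AUTH = ("admin", "password")                   # placeholder credentials

# extra_vars are honored only if they match an enabled survey
# or ask_variables_on_launch is True on the template.
response = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/42/launch/",
    auth=AUTH,
    json={"extra_vars": {"debug": True}},
)
response.raise_for_status()
print(response.json()["job"])  # ID of the launched job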
The following table notes the behavior (hierarchy) of variable precedence in automation controller as it compares to
variable precedence in Ansible.
Variable Precedence Hierarchy (last listed wins)
TWENTYTHREE
INSTANCE GROUPS
An Instance Group provides the ability to group instances in a clustered environment. Additionally, policies dictate
how instance groups behave and how jobs are executed. The following view displays the capacity levels based on
policy algorithms:
For more information about the policy or rules associated with instance groups, see the Instance Groups section of the
Automation Controller Administration Guide.
If you want to connect your instance group to a container, refer to Container Groups for further detail.
For an in-depth discussion on these concepts, refer to the Feature Spotlight: Instance Groups and Isolated Nodes blog.
Note: Policy Instance fields are not required to create a new instance group. If you do not specify values, then the
Policy Instance Minimum and Policy Instance Percentage default to 0.
4. Click Save.
Once the instance group is successfully created, the Details tab of the newly created instance group remains, which
allows you to review and edit your instance group information. This is the same menu that is opened if the Edit ( )
button is clicked from the Instance Group link. You can also edit Instances and review Jobs associated with this
instance group.
3. In the following example, the instances added to the instance group display along with information about their
capacity.
This view also allows you to edit some key attributes associated with the instances in your instance group:
To view the jobs associated with the instance group, click the Jobs tab of the Instance Group window and then click
Expanded to expand the view to show details about each job.
Each job displays the job status, ID, and name; type of job, time started and completed, who started the job; and which
template, inventory, project, and credential were used.
The instances are run in accordance with instance group policies. Refer to Instance Group Policies in the Automation
Controller Administration Guide.
TWENTYFOUR
JOBS
A job is an instance of automation controller launching an Ansible playbook against an inventory of hosts.
The Jobs link displays a list of jobs and their statuses, shown as completed successfully or failed, or as an active
(running) job. The default view is collapsed (Compact) with the job name, status, job type, and start/finish times, but
you can expand to see more information. You can sort this list by various criteria, and perform a search to filter the
jobs of interest.
Actions you can take from this screen include viewing the details and standard output of a particular job, relaunching
( ) jobs, or removing selected jobs. The relaunch operation only applies to relaunches of playbook runs and does
not apply to project/inventory updates, system jobs, workflow jobs, etc.
When a job relaunches, you are directed to the Job Output screen as the job runs. Clicking on any type of job also takes
you to the Job Output View for that job, where you can filter jobs by various criteria:
• The Stdout option is the default display that shows the job processes and output
• The Event option allows you to filter by the event(s) of interest, such as errors, host failures, host retries, items
skipped, etc. You can include as many events in the filter as necessary.
• The Advanced option is a refined search that allows you a combination of including or excluding criteria,
searching by key, or by lookup type. For details about using Search, refer to the Search chapter.
When an inventory sync is executed, the full results automatically display in the Output tab. This shows the same
information you would see if you ran it through the Ansible command line, and can be useful for debugging. The
ANSIBLE_DISPLAY_ARGS_TO_STDOUT setting is set to False by default for all playbook runs, matching Ansible's
default behavior. This avoids displaying task arguments in task headers in the Job Detail interface, which could leak
certain sensitive module parameters to stdout. If you wish to restore the prior behavior (despite the security
implications), you can set ANSIBLE_DISPLAY_ARGS_TO_STDOUT to True via the AWX_TASK_ENV configuration
setting. For more details, refer to the Ansible documentation for ANSIBLE_DISPLAY_ARGS_TO_STDOUT.
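For illustration, a sketch of the corresponding settings payload follows, assuming you PATCH the
/api/v2/settings/jobs/ endpoint, where the AWX_TASK_ENV dictionary lives (verify the endpoint against your
controller version):
{"AWX_TASK_ENV": {"ANSIBLE_DISPLAY_ARGS_TO_STDOUT": "True"}}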
The icons at the top right corner of the Output tab allow you to relaunch ( ), download ( ) the job output, or
delete ( ) the job.
Note: An inventory update can be performed while a related job is running. In cases where you have a big project
(around 10 GB), disk space on /tmp may be an issue.
Access the Details tab to provide details about the job execution.
When an inventory sourced from an SCM is executed, the full results automatically display in the Output tab. This
shows the same information you would see if you ran it through the Ansible command line, and can be useful for
debugging. The icons at the top right corner of the Output tab allow you to relaunch ( ), download ( ) the job
output, or delete ( ) the job.
Access the Details tab to provide details about the job execution and its associated project.
When a playbook is executed, the full results automatically display in the Output tab. This shows the same information
you would see if you ran it through the Ansible command line, and can be useful for debugging.
The events summary captures a tally of events that were run as part of this playbook:
• the number of times this playbook has run in the Plays field
• the number of tasks associated with this playbook in the Tasks field
• the number of hosts associated with this playbook in the Hosts field
• the amount of time it took to complete the playbook run in the Elapsed field
The icons next to the events summary allow you to relaunch ( ), download ( ) the job output, or delete ( )
the job.
The host status bar runs across the top of the Output view. Hover over a section of the host status bar and the number
of hosts associated with that particular status displays.
The output for a Playbook job is also accessible after launching a job from the Jobs tab of its Job Templates page.
Clicking on the various line item tasks in the output, you can view its host details.
24.3.1 Search
Use Search to look up specific events, hostnames, and their statuses. To filter only certain hosts with a particular status,
specify one of the following valid statuses:
• OK: the playbook task returned “Ok”.
• Changed: the playbook task actually executed. Since Ansible tasks should be written to be idempotent, tasks
may exit successfully without executing anything on the host. In these cases, the task would return Ok, but not
Changed.
• Failed: the task failed. Further playbook execution was stopped for this host.
• Unreachable: the host was unreachable from the network or had another fatal error associated with it.
• Skipped: the playbook task was skipped because no change was necessary for the host to reach the target state.
• Rescued: introduced in Ansible 2.8, this shows the tasks that failed and then executes a rescue section.
• Ignored: introduced in Ansible 2.8, this shows the tasks that failed and have ignore_errors: yes con-
figured.
These statuses also display at bottom of each Stdout pane, in a group of “stats” called the Host Summary fields.
For more details about using the Search, refer to the Search chapter.
The standard output view displays all the events that occur on a particular job. By default, all rows are expanded so
that all the details are displayed. Use the collapse-all button ( ) to switch to a view that only contains the headers
for plays and tasks. Click the ( ) button to view all lines of the standard output.
Alternatively, you can display all the details of a specific play or task by clicking on the arrow icons next to them.
Click an arrow from sideways to downward to expand the lines associated with that play or task. Click the arrow back
to the sideways position to collapse and hide the lines.
• If querying for a particular play or task, it will appear collapsed at the end of its completed process.
• In some cases, an error message will appear, stating that the output may be too large to display. This occurs
when there are more than 4000 events. Use the search and filter for specific events to bypass the error.
Click on a line of an event from the Standard Out pane and a Host Events dialog displays in a separate window. This
window shows the host that was affected by that particular event.
Note: Upgrading to the latest versions of Ansible Automation Platform involves progressively migrating all historical
playbook output and events. This migration process is gradual, and happens automatically in the background after
installation is complete. Installations with very large amounts of historical job output (tens, or hundreds of GB of
output) may notice missing job output until migration is complete. Most recent data will show up at the top of the
output, followed by older events. Migrating jobs with a large amount of events may take longer than jobs with a
smaller amount.
The Host Details dialog shows information about the host affected by the selected event and its associated play and
task:
• the Host
• the Status
• the type of run in the Play field
• the type of Task
• if applicable, the Ansible Module for the task, and any arguments for that module
To view the results in JSON format, click the JSON tab. To view the output of the task, click Standard Out.
To view errors from the output, click Standard Error.
Access the Details tab to provide details about the job execution.
By clicking on these items, where appropriate, you can view the corresponding job templates, projects, and other
objects.
This section describes how to determine capacity for instance groups and its impact to your jobs. For container groups,
see Container capacity limits in the Automation Controller Administration Guide.
The automation controller capacity system determines how many jobs can run on an instance given the amount of
resources available to the instance and the size of the jobs that are running (referred to as Impact). The algorithm used
to determine this is based entirely on two things:
• How much memory is available to the system (mem_capacity)
• How much CPU is available to the system (cpu_capacity)
Capacity also impacts Instance Groups. Since groups are made up of instances, and instances can be assigned to
multiple groups, impact to one instance can potentially affect the overall capacity of other groups.
Instance Groups (not instances themselves) can be assigned to be used by jobs at various levels (see Clustering). When
the Task Manager is preparing its graph to determine which group a job will run on, it will commit the capacity of an
Instance Group to a job that hasn't started or isn't ready to start yet.
Finally, in smaller configurations, if only one instance is available for a job to run, the Task Manager will allow that
job to run on the instance even if it pushes the instance over capacity. This guarantees that jobs themselves won’t get
stuck as a result of an under-provisioned system.
Therefore, Capacity and Impact is not a zero-sum system relative to jobs and instances/Instance Groups.
For information on sliced jobs and their impact to capacity, see Job slice execution behavior.
The capacity algorithms are defined in order to determine how many forks a system is capable of running simultane-
ously. This controls how many systems Ansible itself will communicate with simultaneously. Increasing the number
of forks an automation controller system is running will, in general, allow jobs to run faster by performing more work in
parallel. The trade-off is that this will increase the load on the system, which could cause work to slow down overall.
Automation controller can operate in two modes when determining capacity. mem_capacity (the default) will allow
you to over-commit CPU resources while protecting the system from running out of memory. If most of your work is
not CPU-bound, then selecting this mode will maximize the number of forks.
mem_capacity is calculated relative to the amount of memory needed per fork. Taking into account the overhead for
internal components, this comes out to be about 100MB per fork. When considering the amount of memory available
to Ansible jobs, the capacity algorithm will reserve 2GB of memory to account for the presence of other services. The
algorithm formula for this is:

(mem - 2048) / mem_per_fork

As an example:

(4096 - 2048) / 100 == 20

Therefore, a system with 4GB of memory would be capable of running 20 forks. The value mem_per_fork can be
controlled by setting the settings value (or environment variable) SYSTEM_TASK_FORKS_MEM, which defaults to
100.
Often, Ansible workloads can be fairly CPU-bound. In these cases, sometimes reducing the simultaneous workload
allows more tasks to run faster and reduces the average time-to-completion of those jobs.
Just as the mem_capacity algorithm uses the amount of memory needed per fork, the cpu_capacity algorithm
looks at the amount of CPU resources needed per fork. The baseline value for this is 4 forks per core. The algorithm
formula for this is:

cpus * fork_per_cpu

As an example, for a 4-core system:

4 * 4 == 16
The value fork_per_cpu can be controlled by setting the settings value (or environment variable)
SYSTEM_TASK_FORKS_CPU which defaults to 4.
When selecting the capacity, it’s important to understand how each job type affects capacity.
It’s helpful to understand what forks mean to Ansible: https://www.ansible.com/blog/ansible-performance-tuning (see
the section on “Know Your Forks”).
The default forks value for Ansible is 5. However, if automation controller knows that you’re running against fewer
systems than that, then the actual concurrency value will be lower.
When a job is run, automation controller will add 1 to the number of forks selected to compensate for the Ansible
parent process. So if you are running a playbook against 5 systems with a forks value of 5, then the actual forks value
from the perspective of Job Impact will be 6.
Jobs and ad hoc jobs follow the above model, forks + 1. If you set a fork value on your job template, your job capacity
value will be the minimum of the forks value supplied and the number of hosts that you have, plus one. The plus one
accounts for the parent Ansible process.
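The calculation above can be summarized in a short sketch (the function is illustrative, not part of the controller API):
def job_impact(forks, num_hosts):
    # Ansible defaults to 5 forks when none are set; effective
    # concurrency is capped by the host count, and 1 is added
    # for the parent Ansible process.
    effective_forks = forks if forks else 5
    return min(effective_forks, num_hosts) + 1

print(job_impact(5, 5))   # 6, matching the example above
print(job_impact(10, 3))  # 4, capped by the number of hosts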
Instance capacity determines which jobs get assigned to any specific instance. Jobs and ad hoc commands use more
capacity if they have a higher forks value.
Other job types have a fixed impact:
• Inventory Updates: 1
• Project Updates: 1
• System Jobs: 5
If you don’t set a forks value on your job template, your job will use Ansible’s default forks value of five. Even though
Ansible defaults to five forks, it will use fewer if your job has fewer than five hosts. In general, setting a forks value
higher than what the system is capable of could cause trouble by running out of memory or over-committing CPU. So,
the job template fork values that you use should fit on the system. If you have playbooks using 1000 forks but none
of your systems individually has that much capacity, then your systems are undersized and at risk of performance or
resource issues.
Selecting a capacity out of the CPU-bound or the memory-bound capacity limits is, in essence, selecting between the
minimum or maximum number of forks. In the above examples, the CPU capacity would allow a maximum of 16
forks while the memory capacity would allow 20. For some systems, the disparity between these can be large, and
often you may want to have a balance between the two.
The instance field capacity_adjustment allows you to select how much of one or the other to consider. It is
represented as a value between 0.0 and 1.0. If set to a value of 1.0, then the largest value will be used. The above
example involves memory capacity, so a value of 20 forks would be selected. If set to a value of 0.0 then the smallest
value will be used. A value of 0.5 would be a 50/50 balance between the two algorithms, which would be 18:

16 + 0.5 * (20 - 16) == 18
To view or edit the capacity in the user interface, select the Instances tab of the Instance Group.
Projects specify the branch, tag, or reference to use from source control in the scm_branch field. These are repre-
sented by the values specified in the Project Details fields as shown.
Projects have the option to “Allow Branch Override”. When checked, project admins can delegate branch selection to
the job templates that use that project (requiring only project use_role).
Every job run has its own private data directory. This directory contains a copy of the project source tree for the given
scm_branch the job is running. Jobs are free to make changes to the project folder and make use of those changes
while it is still running. This folder is temporary and is cleaned up at the end of the job run.
If Clean is checked, automation controller discards modified files in its local copy of the repository through use of the
force parameter in its respective Ansible modules pertaining to git or Subversion.
Typically, during a project update, the revision of the default branch (specified in the SCM Branch field of the project)
is stored when updated, and jobs using that project will employ this revision. When a non-default SCM Branch (not
a commit hash or tag) is provided in a job, the newest revision is pulled from the source control remote immediately
before the job starts. This revision is shown in the Source Control Revision field of the job and its respective project update.
Consequently, offline job runs are impossible for non-default branches. To be sure that a job is running a static version
from source control, use tags or commit hashes. Project updates do not save the revision of all branches, only the
project default branch.
The SCM Branch field is not validated, so the project must be updated to verify it is valid. If this field is provided or
prompted for, the Playbook field of job templates will not be validated, and you will have to launch the job template
in order to verify presence of the expected playbook.
The SCM Refspec field specifies which extra references the update should download from the remote. Examples are:
1. refs/*:refs/remotes/origin/*: fetches all references, including remotes of the remote
2. refs/pull/*:refs/remotes/origin/pull/* (GitHub-specific): fetches all refs for all pull requests
3. refs/pull/62/head:refs/remotes/origin/pull/62/head: fetches the ref for that one GitHub
pull request
For large projects, you should consider the performance impact when using the first or second example here.
The SCM Refspec parameter affects the availability of the project branch, and can allow access to references not
otherwise available. The examples above allow the user to supply a pull request from the SCM Branch, which would
not be possible without the SCM Refspec field.
The Ansible git module fetches refs/heads/* by default. This means that a project’s branches and tags (and
commit hashes therein) can be used as the SCM Branch if SCM Refspec is blank. The value specified in the SCM
Refspec field affects which SCM Branch fields can be used as overrides. Project updates (of any type) will perform
an extra git fetch command to pull that refspec from the remote.
For example: you could set up a project that allows branch override using the first or second refspec example. Use this
project in a job template that prompts for the SCM Branch. A client could then launch the job template when a new
pull request is created, providing the branch pull/N/head. The job template would run against the provided GitHub
pull request reference.
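A sketch of the client step in that flow follows (placeholder host, credentials, and template ID; providing scm_branch
at launch assumes the job template prompts for the SCM Branch):
import requests

CONTROLLER = "https://controller.example.com"  # placeholder host
AUTH = ("admin", "password")                   # placeholder credentials

# Launch against a GitHub pull request ref; the project must allow
# branch override and fetch a refspec that includes pull/* refs.
response = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/42/launch/",
    auth=AUTH,
    json={"scm_branch": "pull/62/head"},
)
response.raise_for_status()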
For more information on the Ansible git module, see https://docs.ansible.com/ansible/latest/modules/git_module.html.
TWENTYFIVE
WEBHOOKS
A Webhook provides the ability to execute specified commands between apps over the web. Automation controller
currently provides webhook integration with GitHub and GitLab. This section describes the procedure for setting up a
webhook through their respective services.
The webhook post-status-back functionality for GitHub and GitLab is designed to work only under certain CI events.
Receiving another kind of event will result in messages like the one below in the service log:
awx.main.models.mixins Webhook event did not have a status API endpoint associated, skipping.
Automation controller has the ability to run jobs based on a triggered webhook event coming in. Job status information
(pending, error, success) can be sent back only for pull request events. If you determine you do not want automation
controller to post job statuses back to the webhook service, skip steps 1-2, and go directly to step 3.
1. Optionally generate a personal access token (PAT) for use with automation controller.
a. In the profile settings of your GitHub account, click Settings.
b. At the very bottom of the settings, click <> Developer Settings.
c. In the Developer settings, click Personal access tokens.
d. From the Personal access tokens screen, click Generate new token.
e. When prompted, enter your GitHub account password to continue.
f. In the Note field, enter a brief description about what this PAT will be used for.
g. In the Scope fields, the automation webhook only needs repo scope access, with the exception
of invites. For information about other scopes, click the link right above the table to access the
docs.
c. Go to the job template with which you want to enable webhooks, and select the webhook service
and credential you created in the previous step.
d. Click Save. Now your job template is set up to be able to post back to GitHub. An example of
one may look like this:
3. Go to a specific GitHub repo you want to configure webhooks and click Settings.
e. Next, you must select the types of events you want to trigger a webhook. Any such event will
trigger the Job or Workflow. In order to have job status (pending, error, success) sent back to
GitHub, you must select Pull requests in the individual events section.
7. After your webhook is configured, it displays in the list of webhooks active for your repo, along with the ability
to edit or delete it. Click on a webhook, and it brings you to the Manage webhook screen. Scroll to the very
bottom of the screen to view all the delivery attempts made to your webhook and whether they succeeded or
failed.
Automation controller has the ability to run jobs based on a triggered webhook event coming in. Job status information
(pending, error, success) can be sent back only for merge request events. If you determine you do not want automation
controller to post job statuses back to the webhook service, skip steps 1-2, and go directly to step 3.
1. Optionally, generate a personal access token (PAT). This token gives automation controller the ability to post
statuses back when we run jobs based on a webhook coming in.
a. In the profile settings of your GitLab account, click Settings.
b. On the sidebar, under User Settings, click Access Tokens.
c. In the Name field, enter a brief description about what this PAT will be used for.
d. Skip the Expires at field unless you want to set an expiration date for your webhook.
e. In the Scopes fields, select the ones applicable to your integration. For automation controller,
API is the only selection necessary.
c. Go to the job template with which you want to enable webhooks, and select the webhook service
and credential you created in the previous step.
d. Click Save. Now your job template is set up to be able to post back to GitLab. An example of
one may look like this:
3. Go to a specific GitLab repo you want to configure webhooks and click Settings > Integrations.
4. To complete the Integrations page, you need to enable webhooks in a job template (or in a workflow job tem-
plate), which will provide you with the following information:
a. Copy the contents of the Webhook URL from the job template above, and paste it in the URL field.
GitLab uses this address to send results to.
b. Copy the contents of the Webhook Key from the job template above and paste it in the Secret Token field.
c. Next, you must select the types of events you want to trigger a webhook. Any such event will trigger the
Job or Workflow. In order to have job status (pending, error, success) sent back to GitLab, you must select
Merge request events in the Trigger section.
d. Leave Enable SSL verification selected.
e. Click Add webhook.
5. After your webhook is configured, it displays in the list of Project Webhooks for your repo, along with the
ability to test events, edit or delete the webhook. Testing a webhook event displays the results at the top of the
page whether it succeeded or failed.
For more information, refer to the GitLab webhooks integrations documentation.
The entire payload is exposed as an extra variable. To view the payload information, go to the Jobs Detail view of the
job template that ran with the webhook enabled. In the Extra Variables field of the Details pane, view the payload
output from the awx_webhook_payload variable, as shown in the example below.
TWENTYSIX
NOTIFICATIONS
A Notification Template is an instance of a Notification type (Email, Slack, Webhook, etc.) with a name, description,
and a defined configuration.
For example:
• A username, password, server, and recipients are needed for an Email notification template
• The token and a list of channels are needed for a Slack notification template
• The URL and Headers are needed for a Webhook notification template
A Notification is a manifestation of the notification template; for example, when a job fails, a notification is sent using
the configuration defined by the notification template.
At a high level, the typical flow for the notification system works as follows:
• A user creates a notification template via the REST API at the /api/v2/notification_templates end-
point (either through the API or through the UI).
• A user assigns the notification template to any of the various objects that support it (all variants of job templates
as well as organizations and projects) and at the appropriate trigger level for which they want the notification
(started, success, or error). For example, a user may wish to assign a particular notification template to trigger
when Job Template 1 fails, in which case they will associate the notification template with the job template at the
/api/v2/job_templates/n/notification_templates_error API endpoint (see the sketch after this list).
• You can set notifications on job start, not just job end. Users and teams are also able to define their own
notifications that can be attached to arbitrary jobs.
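As a sketch of the association step mentioned above (placeholder host, credentials, and IDs; association in the API is
done by POSTing the notification template's id to the related endpoint):
import requests

CONTROLLER = "https://controller.example.com"  # placeholder host
AUTH = ("admin", "password")                   # placeholder credentials

# Fire notification template 5 whenever job template 42 errors.
response = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/42/notification_templates_error/",
    auth=AUTH,
    json={"id": 5},
)
response.raise_for_status()  # expect 204 No Content on success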
Notification templates assigned at certain levels will inherit templates defined on parent objects as such:
• Job Templates will use notification templates defined on it as well as inheriting notification templates from the
Project used by the Job Template and from the Organization that it is listed under (via the Project).
• Project Updates will use notification templates defined on the project and will inherit notification templates from
the Organization associated with it
• Inventory Updates will use notification templates defined on the Organization that it is listed under
• Ad-hoc commands will use notification templates defined on the Organization that the inventory is associated
with
26.2 Workflow
When a job succeeds or fails, the error or success handler will pull a list of relevant notification templates using the
procedure defined above. It will then create a Notification object for each one, containing relevant details about the
job, and send it to the destination (email addresses, Slack channels, SMS numbers, etc.). These Notification objects
are available as related resources on job types (jobs, inventory updates, project updates), and also at /api/v2/
notifications. You may also see what notifications have been sent from a notification template by examining
its related resources.
If a notification fails, it will not impact the job associated to it or cause it to fail. The status of the notification can be
viewed at its detail endpoint (/api/v2/notifications/<n>).
3. Enter the name of the notification and a description in their respective fields, and specify the organization
(required) it belongs to.
4. Choose a type of notification from the Type drop-down menu. Refer to the subsequent sections for additional
information.
5. Once all required information is complete, click Save to add the notification.
• Email
• Grafana
• IRC
• Mattermost
• PagerDuty
• Rocket.Chat
• Slack
• Twilio
• Webhook
– Webhook payloads
Each of these has its own configuration and behavioral semantics, and testing them may need to be approached in
different ways. Additionally, you can customize each type of notification down to a specific detail, or a set of criteria
to trigger a notification. See Create custom notifications for more detail on configuring custom notifications. The
following sections will give as much detail as possible on each type of notification.
26.4.1 Email
The email notification type supports a wide variety of SMTP servers and has support for TLS/SSL connections.
You must provide the following details to set up an email notification:
• Host
• Recipient list
• Sender email
• Port
• Timeout (in seconds): the length of time, up to 120 seconds, that automation controller may attempt connecting
to the email server before giving up.
26.4.2 Grafana
Grafana is a fairly straightforward integration. First, create an API Key in the Grafana system (this is the token that is
given to automation controller).
You must provide the following details to setup a Grafana notification:
• Grafana URL: The URL of the Grafana API service, generally http://yourcompany.grafana.com.
• Grafana API Key: The user must first create an API Key in the Grafana system (this is the token that is given to
automation controller).
The other options of note are:
• ID of the Dashboard: When you created an API Key for the Grafana account, you can set up a dashboard with
its own unique ID.
• ID of the Panel: If you added panels and graphs to your Grafana interface, you can specify its ID here.
• Tags for the Annotation: Enter keywords that help identify the type(s) of events(s) of the notification you are
configuring.
• Disable SSL Verification: SSL verification is on by default, but you can choose to turn off verification of the
authenticity of the target's certificate. Environments that use internal or private CAs should select this option to
disable verification.
26.4.3 IRC
The IRC notification takes the form of an IRC bot that will connect, deliver its messages to channel(s) or individual
user(s), and then disconnect. The notification bot also supports SSL authentication. The bot does not currently support
Nickserv identification. If a channel or user does not exist or is not online, the notification will not fail; the failure
scenario is reserved specifically for connectivity.
Connectivity information is straightforward:
• IRC Server Password (optional): IRC servers can require a password to connect. If the server does not require
one, leave blank
• IRC Server Port: The IRC server Port
• IRC Server Address: The host name or address of the IRC server
• IRC Nick: The bot’s nickname once it connects to the server
• Destination Channels or Users: A list of users and/or channels to which to send the notification.
• SSL Connection (optional): Should the bot use SSL when connecting
26.4.4 Mattermost
The Mattermost notification type provides a simple interface to Mattermost’s messaging and collaboration workspace.
The parameters that can be specified are:
• Target URL (required): The full URL that will be POSTed to
• Username
• Channel
• Icon URL: specifies the icon to display for this notification
• Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that
use internal or private CAs should select this option to disable verification.
26.4.5 PagerDuty
PagerDuty is a fairly straightforward integration. First, create an API Key in the PagerDuty system (this is the token
that is given to automation controller) and then create a “Service” which provides an “Integration Key” that will also
be given to automation controller. The other required options are:
• API Token: The user must first create an API Key in the PagerDuty system (this is the token that is given to
automation controller).
• PagerDuty Subdomain: When you sign up for the PagerDuty account, you receive a unique subdomain to
communicate with. For instance, if you signed up as “testuser”, the web dashboard will be at
testuser.pagerduty.com and you will give the API testuser as the subdomain (not the full domain).
• API Service/Integration Key
• Client Identifier: This will be sent along with the alert content to the pagerduty service to help identify the
service that is using the api key/service. This is helpful if multiple integrations are using the same API key and
service.
26.4.6 Rocket.Chat
The Rocket.Chat notification type provides an interface to Rocket.Chat’s collaboration and communication platform.
The parameters that can be specified are:
• Target URL (required): The full URL that will be POSTed to
• Username
• Icon URL: specifies the icon to display for this notification
• Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that
use internal or private CAs should select this option to disable verification.
26.4.7 Slack
Slack, a collaborative team communication and messaging tool, is pretty easy to configure.
You must supply the following to setup Slack notifications:
• A Slack app (refer to the Basic App Setup page of the Slack documentation for information on how to create
one)
• A token (refer to Enabling Interactions with Bots and specific details on bot tokens on the Token Types docu-
mentation page)
Once you have a bot/app set up, you must navigate to “Your Apps”, click on the newly-created app and then go to
Add features and functionality, which allows you to configure incoming webhooks, bots, and permissions; as well
as Install your app to your workspace.
You must also invite the notification bot to join the channel(s) in question in Slack. Note that private messages are not
supported.
26.4.8 Twilio
Twilio is a voice and SMS automation service. Once you are signed in, you must create a phone number from
which the message will be sent. You can then define a “Messaging Service” under Programmable SMS and associate
the number you created before with it.
Note that you may need to verify this number or some other information before you are allowed to use it to send to any
numbers. The Messaging Service does not need a status callback URL nor does it need the ability to Process inbound
messages.
Under your individual (or sub) account settings, you will have API credentials. Twilio uses two credentials to deter-
mine which account an API request is coming from. The “Account SID”, which acts as a username, and the “Auth
Token” which acts as a password.
To setup Twilio, provide the following details:
• Account Token
• Source Phone Number (this is the number associated with the messaging service above and must be given in the
form of “+15556667777”)
• Destination SMS number (this will be the list of numbers to receive the SMS and should be the 10-digit phone
number)
• Account SID
26.4.9 Webhook
The webhook notification type provides a simple interface to sending POSTs to a predefined web service. Automation
controller will POST to this address using an application/json content type, with the data payload containing all
relevant details in JSON format. Some web service APIs expect HTTP requests to be in a certain format with certain
fields. You can configure more of the webhook notification in the following ways:
• the HTTP method (POST or PUT)
• the body of the outgoing request
• authentication (using basic auth)
The parameters for configuring webhooks are:
• Username
• Basic Auth Password
• Target URL (required): The full URL to which the webhook notification will be PUT or POSTed.
• Disable SSL Verification: SSL verification is on by default, but you can choose to turn off verification of the
authenticity of the target's certificate. Environments that use internal or private CAs should select this option to
disable verification.
• HTTP Headers (required): Headers in JSON form where the keys and values are strings. For ex-
ample, {"Authentication": "988881adc9fc3655077dc2d4d757d480b5ea0e11",
"MessageType": "Test"}
• HTTP Method (required). Select the method for your webhook:
– POST: Creates a new resource. Also acts as a catch-all for operations that do not fit into the other cate-
gories. It is likely you need to POST unless you know your webhook service expects a PUT.
– PUT: Updates a specific resource (by an identifier) or a collection of resources. PUT can also be used to
create a specific resource if the resource identifier is known beforehand.
Webhook payloads
Automation controller sends by default the following data at the webhook endpoint:
job id
name
url
created_by
started
finished
status
traceback
inventory
project
playbook
credential
limit
extra_vars
hosts
http method
Automation controller returns by default the following data at the webhook endpoint for a success/fail status:
job id
name
url
created_by
started
finished
status
traceback
inventory
project
playbook
credential
limit
extra_vars
hosts
You can customize the text content of each of the Notification Types by enabling the Customize Messages portion at
the bottom of the notifications form using the toggle button.
The Message fields are pre-populated with a template containing a top-level variable, job coupled with an attribute,
such as id or name, for example. Templates are enclosed in curly braces and may draw from a fixed set of fields
provided by automation controller, as shown in the pre-populated Messages fields.
This pre-populated field suggests commonly displayed messages to a recipient who is notified of an event. You can,
however, customize these messages with different criteria by adding your own attribute(s) for the job as needed.
Custom notification messages are rendered using Jinja - the same templating engine used by Ansible playbooks.
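For instance, a one-line custom message built only from the documented variables might look like the following (an
illustrative template, not a shipped default):
{{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' finished with status {{ job.status }}: {{ url }}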
Messages and message bodies have different types of content:
• messages will always just be strings (one-liners only; new lines are not allowed)
• message bodies will be either a dictionary or block of text:
– the message body for Webhooks and PagerDuty uses dictionary definitions. The default message
body for these is {{ job_metadata }}, you can either leave that as is or provide your own
dictionary
– the message body for email uses a block of text or a multi-line string. The default message body
is:
You can tweak this text (leaving {{ job_metadata }} in, or drop {{ job_metadata }}
altogether). Since the body is a block of text, it can really be any string you want.
{{ job_metadata }} gets rendered as a dictionary containing fields that describe the job being
executed. In all cases, {{ job_metadata }} will include the following fields:
– id
– name
– url
– created_by
– started
– finished
– status
– traceback
Note: At the present time, you cannot query individual fields within {{ job_metadata }}.
When using {{ job_metadata }} in a notification template, all data will be returned.
{"id": 18,
"name": "Project - Space Procedures",
"url": "https://host/#/jobs/project/18",
"created_by": "admin",
"started": "2019-10-26T00:20:45.139356+00:00",
"finished": "2019-10-26T00:20:55.769713+00:00",
"status": "successful",
"traceback": ""
}
{"id": 12,
"name": "JobTemplate - Launch Rockets",
"url": "https://host/#/jobs/playbook/12",
"created_by": "admin",
"started": "2019-10-26T00:02:07.943774+00:00",
...}
{"id": 14,
"name": "Workflow Job Template - Launch Mars Mission",
"url": "https://host/#/workflows/14",
"created_by": "admin",
"started": "2019-10-26T00:11:04.554468+00:00",
"finished": "2019-10-26T00:11:24.249899+00:00",
"status": "successful",
"traceback": "",
"body": "Workflow job summary:
Note: If you save the notifications template without editing the custom message (or edit and revert back to the default
values), the Details screen assumes the defaults and will not display the custom message tables. If you edit and save
any of the values, the entire table displays in the Details screen.
You can select which notifications to notify you when a specific job starts, in addition to notifying you on success or
failure at the end of the job run. Some behaviors to keep in mind:
• if a workflow template (WFJT) has notification on start enabled, and a job template (JT) within that workflow
also has notification on start enabled, you will receive notifications for both
• you can enable notifications to run on many JTs within a WFJT
• you can enable notifications to run on a sliced job template (SJT) start and each slice will generate a notification
• when you enable a notification to run on job start, and that notification gets deleted, the JT continues to run, but
will result in an error message
You can enable notifications on job start, job success, and job failure, or any combination thereof, from the Notifica-
tions tab of the following resources:
• Job Template
• Workflow Template
• Projects (shown in the example below)
• Inventory Source
• Organizations
For workflow templates that have approval nodes, in addition to Start, Success, and Failure, you can enable or disable
certain approval-related events:
Refer to Approval nodes for additional detail on working with these types of nodes.
In the System Settings, you can replace the default value in the Base URL of the service field with your preferred
hostname to change the notification hostname.
Refreshing your license also changes the notification hostname. New installations of automation controller should not
have to set the hostname for notifications.
The primary way that automation controller determines how the base URL (TOWER_URL_BASE) is defined is by
looking at an incoming request and setting the server address based on that incoming request.
Automation controller takes settings values from the database first. If no settings values are found, it falls back to
using the values from the settings files. If a user posts a license by navigating to the automation controller host’s IP
address, the posted license is written to the settings entry in the database.
To change the TOWER_URL_BASE if the wrong address has been picked up, navigate to Miscellaneous System
settings from the Settings menu using the DNS entry you wish to appear in notifications, and re-add your license.
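A sketch of correcting the base URL through the API instead (placeholder host and credentials; TOWER_URL_BASE
is exposed through the system settings endpoint):
import requests

CONTROLLER = "https://controller.example.com"  # placeholder host
AUTH = ("admin", "password")                   # placeholder credentials

# Point notification links at the DNS name recipients should see.
response = requests.patch(
    f"{CONTROLLER}/api/v2/settings/system/",
    auth=AUTH,
    json={"TOWER_URL_BASE": "https://controller.example.com"},
)
response.raise_for_status()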
Notification templates can be associated with an organization for start, success, and error triggers, respectively, at the
following endpoints:
/api/v2/organizations/N/notification_templates_started/
/api/v2/organizations/N/notification_templates_success/
/api/v2/organizations/N/notification_templates_error/
CHAPTER
TWENTYSEVEN
SUPPORTED ATTRIBUTES FOR CUSTOM NOTIFICATIONS
This section describes the list of supported job attributes and the proper syntax for constructing the message text for
notifications. The supported job attributes are:
• allow_simultaneous - (boolean) indicates if multiple jobs can run simultaneously from the JT associated
with this job
• controller_node - (string) the instance that managed the isolated execution environment
• created - (datetime) timestamp when this job was created
• custom_virtualenv - (string) custom virtual environment used to execute job
• description - (string) optional description of the job
• diff_mode - (boolean) if enabled, textual changes made to any templated files on the host are shown in the
standard output
• elapsed - (decimal) elapsed time in seconds that the job ran
• execution_node - (string) node the job executed on
• failed - (boolean) true if job failed
• finished - (datetime) date and time the job finished execution
• force_handlers - (boolean) when handlers are forced, they will run when notified even if a task fails on
that host (note that some conditions - e.g. unreachable hosts - can still prevent handlers from running)
• forks - (int) number of forks requested for job
• id - (int) database id for this job
• job_explanation - (string) status field to indicate the state of the job if it wasn’t able to run and capture
stdout
• job_slice_count - (integer) if run as part of a sliced job, the total number of slices (if 1, job is not part of
a sliced job)
• job_slice_number - (integer) if run as part of a sliced job, the ID of the inventory slice operated on (if not
part of a sliced job, attribute is not used)
• job_tags - (string) only tasks with specified tags will execute
• job_type - (choice) run, check, or scan
• launch_type - (choice) manual, relaunch, callback, scheduled, dependency, workflow, sync, or scm
• limit - (string) playbook execution limited to this set of hosts, if specified
• modified - (datetime) timestamp when this job was last modified
• name - (string) name of this job
In addition to the job attributes, there are some other variables that can be added to the template:
• approval_node_name - (string) the approval node name
• approval_status - (choice) one of approved, denied, and timed_out
• url - (string) URL of the job for which the notification is emitted (this applies to start, success, fail, and
approval notifications)
• workflow_url - (string) URL to the relevant approval node. This allows the notification recipient to go
to the relevant workflow job page to see what’s going on (i.e., This node can be viewed at: {{
workflow_url }}). In cases of approval-related notifications, both url and workflow_url are the
same.
• job_friendly_name - (string) the friendly name of the job
• job_metadata - (string) job metadata as a JSON string, for example:
{'url': 'https://towerhost/#/jobs/playbook/13',
'traceback': '',
'status': 'running',
'started': '2019-08-07T21:46:38.362630+00:00',
'project': 'Stub project',
'playbook': 'ping.yml',
'name': 'Stub Job Template',
'limit': '',
'inventory': 'Stub Inventory',
'id': 42,
'hosts': {},
'friendly_name': 'Job',
'finished': False,
'credential': 'Stub credential',
'created_by': 'admin'}
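Putting these attributes together, a custom message can reference the job directly. For example, a completion message along these lines (a sketch only; the wording and choice of attributes are up to you):
{{ job_friendly_name }} #{{ job.id }} "{{ job.name }}" finished with status {{ job.status }} on {{ job.execution_node }} in {{ job.elapsed }} seconds. Details: {{ url }}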
CHAPTER
TWENTYEIGHT
SCHEDULES
You can access all your configured schedules by clicking Schedules from the left navigation bar. The schedules list
may be sorted by any of the attributes from each column using the directional arrows. You can also search by name,
date, or the name of the month in which a schedule runs.
Each schedule has a corresponding Actions column that has options to enable or disable that schedule using the ON/OFF toggle next to the schedule name, and to edit that schedule.
If you are setting up a template, a project, or an inventory source, clicking on the Schedules tab allows you to configure
schedules for these resources. Once schedules are created, they are listed by:
• Name: Clicking the schedule name opens its details
• Type: Identifies whether the schedule is associated with a source control update or a system-managed job
schedule
• Next Run: The next scheduled run of this task
Schedules can only be created from a template, project, or inventory source, and not directly on the main Schedules
screen itself. To create a new schedule:
1. Click the Schedules tab of the resource you are configuring (template, project, or inventory source).
2. Click the Add button, which opens the Create Schedule window.
Caution: Jobs are scheduled in UTC. Repeating jobs that run at a specific time of day may move relative to a local timezone when Daylight Saving Time shifts occur. The system resolves the local time zone based time to UTC when the schedule is saved. To ensure your schedules run as expected, set your schedules in UTC time.
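Schedules can also be created through the REST API. The rrule field uses iCalendar recurrence syntax, with DTSTART expressed in UTC as the caution above recommends. A minimal sketch; the job template id, hostname, and credentials are placeholders:
# Create a daily schedule on job template 7, starting at midnight UTC.
curl -k -u admin:password -H "Content-Type: application/json" \
     -X POST \
     -d '{"name": "Nightly run", "rrule": "DTSTART:20240101T000000Z RRULE:FREQ=DAILY;INTERVAL=1"}' \
     https://controller.example.com/api/v2/job_templates/7/schedules/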
CHAPTER
TWENTYNINE
SETTING UP INSIGHTS REMEDIATIONS
Automation controller supports integration with Red Hat Insights. Once a host is registered with Insights, it will be
continually scanned for vulnerabilities and known configuration conflicts. Each of the found problems may have an
associated fix in the form of an Ansible playbook. Insights users create a maintenance plan to group the fixes and,
ultimately, create a playbook to mitigate the problems. Automation controller tracks the maintenance plan playbooks
via an Insights project. Authentication to Insights via Basic Auth is backed by a special Insights Credential, which
must first be established in automation controller. To ultimately run an Insights Maintenance Plan, you need an Insights
project, and an Insights inventory.
To run Insights maintenance plans, first create an Insights credential: enter a valid Insights credential in the Username and Password fields. The Insights credential is the user's Red Hat Customer Portal account username and password.
Next, create an Insights project, noting the following Insights-related fields:
• Organization: Enter the name of the organization associated with this project, or click the button and select it from the pop-up window.
• SCM Type: Select Red Hat Insights. Upon selecting the SCM type, the Source Details field expands.
• Credential: This field is pre-populated with the Insights credential you previously created. If not, enter the credential, or click the button and select it from the pop-up window.
• Options: Select the update option(s) for this project and provide any additional values, if applicable. For information about each option, click the tooltip next to the options.
To bring the project up to date with what is current in Insights, manually update the SCM-based project by clicking the sync button under the project's available Actions.
This process syncs your Insights project with your Insights account. Notice that the status dot beside the name of the project updates once the sync has run.
The Insights playbook contains a hosts: line where the value is the hostname that Insights itself knows about, which may be different than the hostname that automation controller knows about. To use an Insights playbook, you will need an Insights inventory.
To create a new inventory for use with Insights, see Red Hat Insights.
Remediating an Insights inventory allows automation controller to run Insights playbooks with a single click. This is done by creating a Job Template to run the Insights remediation:
1. Click Job Templates from the left navigation bar to access the Job Templates page.
2. Create a new Job Template, entering the appropriate details into the required fields, at minimum. Note the following fields requiring specific Insights-related entries:
• Name: Enter the name of your Maintenance Plan.
• Job Type: If not already populated, select Run from the drop-down menu list.
• Inventory: Select the Insights Inventory you previously created.
• Project: Select the Insights project you previously created.
• Playbook: Select a playbook associated with the Maintenance Plan you want to run from the drop-down menu
list.
• Credential: Enter the credential to use for this project or click the button and select it from the pop-up
window. The credential does not have to be an Insights credential.
• Verbosity: Keep the default setting, or select the desired verbosity from the drop-down menu list.
CHAPTER
THIRTY
BEST PRACTICES
While automation controller supports playbooks stored directly on the server, best practice is to store your playbooks,
roles, and any associated details in source control. This way you have an audit trail describing when and why you
changed the rules that are automating your infrastructure. Plus, it allows for easy sharing of playbooks with other parts
of your infrastructure or team.
Please review the Ansible Tips and Tricks from the Ansible documentation. If creating a common set of roles to use
across projects, these should be accessed via source control submodules, or a common location such as /opt. Projects
should not expect to import roles or content from other projects.
Note: Playbooks should not use the vars_prompt feature, as automation controller does not interactively allow
for vars_prompt questions. If you must use vars_prompt, refer to and make use of the Surveys functionality.
Note: Playbooks should not use the pause feature of Ansible without a timeout, as automation controller does not
allow for interactively cancelling a pause. If you must use pause, ensure that you set a timeout.
Jobs run with the playbook directory as the current working directory, although jobs should be coded to use the playbook_dir variable rather than relying on this.
If you have an external source of truth for your infrastructure, whether it is a cloud provider or a local CMDB, it is best
to define an inventory sync process and use the support for dynamic inventory (including cloud inventory sources).
This ensures your inventory is always up to date.
Note: Edits and additions to Inventory host variables persist beyond an inventory sync as long as
--overwrite_vars is not set.
Keeping variable data along with the hosts and groups definitions (see the inventory editor) is encouraged, rather
than using group_vars/ and host_vars/. If you use dynamic inventory sources, the controller can sync such
variables with the database as long as the Overwrite Variables option is not set.
30.5 Autoscaling
Using the “callback” feature to allow newly booting instances to request configuration is very useful for auto-scaling
scenarios or provisioning integration.
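With provisioning callbacks, a newly booted instance only needs to POST its host config key to the job template's callback endpoint; the controller then runs the template against the requesting host. A sketch, where the key, job template id, and hostname are placeholders:
# Request configuration from a newly booted host via the provisioning callback.
curl -k -X POST \
     -d host_config_key=acme-config-key \
     https://controller.example.com/api/v2/job_templates/7/callback/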
30.6 Larger Host Counts
Consider setting “forks” on a job template to larger values to increase the parallelism of execution runs. For more information on tuning Ansible, see the Ansible blog.
30.7 Continuous Integration / Continuous Deployment
For a Continuous Integration system, such as Jenkins, to spawn a job, it should make a curl request to a job template, as shown in the sketch below. The credentials used for the job template should not require prompting for any particular passwords. Refer to the CLI documentation for configuration and usage instructions.
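A CI launch is a single POST to the job template's launch endpoint. A sketch, assuming an OAuth2 token stored in an environment variable; the token, job template id, and hostname are placeholders:
# Launch job template 7 from a CI pipeline step.
curl -k -H "Authorization: Bearer $CONTROLLER_TOKEN" \
     -X POST \
     https://controller.example.com/api/v2/job_templates/7/launch/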
30.8 LDAP Authentication
When an LDAP user authenticates, by default, all user-related attributes are updated in the database on each log in. In some environments, this operation can be skipped for performance reasons. To avoid it, you can disable the option AUTH_LDAP_ALWAYS_UPDATE_USER. Refer to the Knowledge Base Article 5823061 for its configuration and usage instructions. Note that new users are still created and get their attributes pushed to the database on their first login.
Warning: With this option set to False, changes to an LDAP user's attributes will not be updated. Attributes are only set the first time the user is created.
CHAPTER
THIRTYONE
SECURITY
The following sections will help you gain an understanding of how automation controller handles and lets you control
file system security.
All playbooks are executed via the awx file system user. For running jobs, automation controller offers job isolation via the use of Linux containers. This protection ensures that jobs can only access playbooks, roles, and data from the Project directory for that job template.
For credential security, users may choose to upload locked SSH keys and set the unlock password to “ask”. Users can also choose to have the system prompt for SSH credentials or sudo passwords at launch time rather than having them stored in the database.
Automation controller’s use of automation execution environments and Linux containers prevents playbooks from
reading files outside of their project directory.
By default, the only data exposed to the ansible-playbook process inside the container is the current project being used. You can customize this in the Job Settings and expose additional directories from the host into the container. Refer to the next section, Isolation functionality and variables, for more information.
Automation controller uses container technology to isolate jobs from each other. By default, only the current project
is exposed to the container running a job template.
You may find that you need to customize your playbook runs to expose additional directories. To fine tune your usage
of job isolation, there are certain variables that can be set.
By default, automation controller will use the system's tmp directory (/tmp by default) as its staging area. This can be changed in the Job Execution Path field of the Jobs settings screen, or in the REST API at /api/v2/settings/jobs:
AWX_ISOLATION_BASE_PATH = "/opt/tmp"
If there are any additional directories that should specifically be exposed from the host to the container that playbooks run in, you can specify those in the Paths to Expose to Isolated Jobs field of the Jobs settings screen, or in the REST API at /api/v2/settings/jobs:
AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']
Note: The primary file you may want to add to AWX_ISOLATION_SHOW_PATHS is /var/lib/awx/.ssh, if your playbooks need to use keys or settings defined there.
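Both isolation settings can also be changed through the REST API. A sketch; the hostname and credentials are placeholders:
# Update the job isolation settings; PATCH changes only the listed keys.
curl -k -u admin:password -H "Content-Type: application/json" \
     -X PATCH \
     -d '{"AWX_ISOLATION_BASE_PATH": "/opt/tmp", "AWX_ISOLATION_SHOW_PATHS": ["/var/lib/awx/.ssh"]}' \
     https://controller.example.com/api/v2/settings/jobs/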
Role-Based Access Controls (RBAC) are built into automation controller and allow administrators to delegate access to
server inventories, organizations, and more. Administrators can also centralize the management of various credentials,
allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow
the controller to help you increase security and streamline management.
RBACs are easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an
“object” for which a specific capability is being set. RBAC is the practice of granting roles to users or teams.
There are a few main concepts that you should become familiar with regarding automation controller's RBAC design: roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with “descendant” roles.
A role is essentially a collection of capabilities. Users are granted access to these capabilities and the controller’s
resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a
role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through
the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are
relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.
Imagine that you have an organization named “SomeCompany” and want to allow two people, “Josie” and “Carter”,
access to manage all the settings associated with that organization. You should make both people members of the
organization’s admin_role.
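Role membership like this can also be granted through the API by posting the role id to the user's roles endpoint. A sketch; the user id, role id, hostname, and credentials are placeholders you would look up first:
# Grant role 42 (e.g., an organization's admin role) to user 7.
curl -k -u admin:password -H "Content-Type: application/json" \
     -X POST -d '{"id": 42}' \
     https://controller.example.com/api/v2/users/7/roles/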
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of
other roles. For example, you may want a System Administrator to have access to everything that an Organization
Administrator has access to, who has everything that a Project Administrator has access to, and so on. We refer to this
concept as the ‘Role Hierarchy’ and it is represented by allowing Roles to have “Parent Roles”. Any capability that a
Role has is implicitly granted to any parent roles (or parents of those parents, and so on). Of course Roles can have
more than one parent, and capabilities are implicitly granted to all parents.
RBAC controls also give you the capability to explicitly permit Users and Teams of Users to run playbooks against certain sets of hosts. Users and teams are restricted to just the sets of playbooks and hosts to which they are granted capabilities. And, with automation controller, you can create or import as many Users and Teams as you require; create users and teams manually or import them from LDAP or Active Directory.
RBACs are easiest to think of in terms of who or what can see, change, or delete an “object” for which a specific
capability is being determined.
The following sections cover how to apply automation controller’s RBAC system in your environment.
Editing Users
When editing a user, an automation controller system administrator may specify the user as being either a System Administrator (also referred to as the Superuser) or a System Auditor.
• System administrators implicitly inherit all capabilities for all objects (read/write/execute) within the automation
controller environment.
• System Auditors implicitly inherit the read-only capability for all objects within the automation controller envi-
ronment.
Editing Organizations
When editing an organization, system administrators may specify the following roles:
• One or more users as organization administrators
• One or more users as organization auditors
• And one or more users (or teams) as organization members
Users/teams that are members of an organization can view their organization administrator.
Users who are organization administrators implicitly inherit all capabilities for all objects within that automation
controller organization.
Users who are organization auditors implicitly inherit the read-only capability for all objects within that automation
controller organization.
When editing a project in an organization for which they are the administrator, system administrators and organization
administrators may specify:
• One or more users/teams that are project administrators
• One or more users/teams that are project members
• And one or more users/teams that may update the project from SCM, from among the users/teams that are
members of that organization.
Users who are members of a project can view their project administrators.
Project administrators implicitly inherit the capability to update the project from SCM.
Administrators can also specify one or more users/teams (from those that are members of that project) that can use
that project in a job template.
All access that is granted to use, read, or write credentials is now handled through roles. You no longer set the “team”
or “user” for a credential. Instead, you use automation controller’s RBAC system to grant ownership, auditor, or usage
roles.
System administrators and organization administrators may create inventories and credentials within organizations
under their administrative capabilities.
Whether editing an inventory or a credential, System administrators and organization administrators may specify one
or more users/teams (from those that are members of that organization) to be granted the usage capability for that
inventory or credential.
System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) that have the capabilities to update an inventory (dynamically or manually). Administrators can also execute ad hoc commands for an inventory, as sketched below.
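Ad hoc commands can likewise be launched through the API against an inventory. A sketch; the inventory id, credential id, hostname, and login are placeholders:
# Run the ping module against all hosts in inventory 3.
curl -k -u admin:password -H "Content-Type: application/json" \
     -X POST \
     -d '{"module_name": "ping", "credential": 2, "limit": "all"}' \
     https://controller.example.com/api/v2/inventories/3/ad_hoc_commands/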
System administrators, organization administrators, and project administrators, within a project under their administrative capabilities, may create and modify new job templates for that project.
When editing a job template, administrators (automation controller, organization, and project) can select among the
inventory and credentials in the organization for which they have usage capabilities or they may leave those fields
blank so that they will be selected at runtime.
Additionally, they may specify one or more users/teams (from those that are members of that project) that have execution capabilities for that job template. The execution capability is valid regardless of any explicit capabilities the user/team may have been granted against the inventory or credential specified in the job template.
User View
A user can:
• See any organization or project for which they are a member
• Create their own credential objects which only belong to them
• See and execute any job template for which they have been granted execution capabilities
If a job template a user has been granted execution capabilities on does not specify an inventory or credential, the user will be prompted at run-time to select among the inventory and credentials in the organization that they own or for which they have been granted usage capabilities.
Users that are job template administrators can make changes to job templates; however, to change the inventory, project, playbook, or credentials used in the job template, the user must also have the “Use” role for the project and inventory currently being used or being set.
31.2.3 Roles
As stated earlier in this documentation, all access that is granted to use, read, or write credentials is now handled
through roles, and roles are defined for a resource.
Built-in roles
The following table lists the RBAC system roles with a brief description of how each role is defined with regard to privileges in automation controller.
A Singleton Role is a special role that grants system-wide permissions. Automation controller currently provides two built-in Singleton Roles, but the ability to create or customize a Singleton Role is not supported at this time.
Automation controller support personnel typically work on ensuring that the controller is available and manage it in a way that balances supportability and ease-of-use for users. Often, automation controller support will assign “Organization Owner/Admin” to users in order to allow them to create a new Organization and add members from their team with the respective access needed. This minimizes the number of individuals who need direct support and keeps the focus on maintaining uptime of the service and assisting users who are using automation controller.
Below are some common roles managed by the automation controller Organization:
Organization “resource roles” functionality, introduced in automation controller 3.3, is specific to a certain resource type, such as workflows. Being a member of such a role usually provides two types of permissions. In the case of workflows, a user who is given a “workflow admin role” for the organization “Default”:
• can create new workflows in the organization “Default”
• can edit all workflows in the “Default” organization
One exception is job templates, where having the role alone does not grant creation permission (more details in the job templates discussion below).
Resource-specific organization roles are independent of the organization roles of admin and member. Having the
“workflow admin role” for the “Default” organization will not allow a user to view all users in the organization, but
having a “member” role in the “Default” organization will. The two types of roles are delegated independently of each
other.
Users can edit fields not impacting job runs (non-sensitive fields) with a Job Template admin role alone. However, to
edit fields that impact job runs in a job template, a user needs the following:
• admin role to the job template
• use role to related project
• use role to related inventory
An “organization job template admin” role was introduced, but having this role isn’t sufficient by itself to edit a job
template within the organization if the user does not have use role to the project / inventory a job template uses.
In order to delegate full job template control (within an organization) to a user or team, you will need to grant the team or user all three organization-level roles:
• job template admin
• project admin
• inventory admin
This will ensure that the user (or all users who are members of the team with these roles) have full access to modify
job templates in the organization. If a job template uses an inventory or project from another organization, the user
with these organization roles may still not have permission to modify that job template. For clarity of managing
permissions, it is best-practice to not mix projects / inventories from different organizations.
RBAC permissions
Each role has a content object; for instance, the org admin role has a content object of the org. To delegate a role, you need admin permission to the content object, with some exceptions that would result in you being able to reset a user’s password.
In the rules below:
• Parent is the role from which this role inherits.
• Allow is what this new permission will explicitly allow.
• Scope is the parent resource that this new role will be created on. Example: Organization.project_create_role.
An assumption is being made that the creator of the resource should be given the admin role for that resource. If there
are any instances where resource creation does not also imply resource administration, they will be explicitly called
out.
Here are the rules associated with each admin type:
Project Admin
• Allow: Create, read, update, delete any project
• Scope: Organization
• User Interface: Project Add Screen - Organizations
Inventory Admin
• Parent: Org admin
• Allow: Create, read, update, delete any inventory
• Scope: Organization
• User Interface: Inventory Add Screen - Organizations
Note: As it is with the Use role, if you give a user Project Admin and Inventory Admin, it allows them to create Job
Templates (not workflows) for your organization.
Credential Admin
• Parent: Org admin
• Allow: Create, read, update, delete shared credentials
• Scope: Organization
• User Interface: Credential Add Screen - Organizations
Notification Admin
• Parent: Org admin
• Allow: Assignment of notifications
• Scope: Organization
Workflow Admin
• Parent: Org admin
• Allow: Create a workflow
• Scope: Organization
Org Execute
• Parent: Org admin
• Allow: Executing JTs and WFJTs
• Scope: Organization
The following is a sample scenario showing an organization with its roles and the resource(s) each has access to.
CHAPTER
THIRTYTWO
GLOSSARY
Ad Hoc Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook, and Playbooks can also glue lots of other operations together.
Callback Plugin Refers to some user-written code that can intercept results from Ansible and do something with
them. Some supplied examples in the GitHub project perform custom logging, send email, or even play sound
effects.
Control Groups Also known as ‘cgroups’, a control group is a feature in the Linux kernel that allows resources to be
grouped and allocated to run certain processes. In addition to assigning resources to processes, cgroups can also
report actual resource usage by all processes running inside of the cgroup.
Check Mode Refers to running Ansible with the --check option, which does not make any changes on the remote
systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous
to so-called “dry run” modes in other systems, though the user should be warned that this does not take into
account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use
this to get an idea of what might happen, but it is not a substitute for a good staging environment.
Container Groups Container Groups are a type of Instance Group that specify a configuration for provisioning a pod
in a Kubernetes or OpenShift cluster where a job is run. These pods are provisioned on-demand and exist only
for the duration of the playbook run.
Credentials Authentication details that may be utilized by the controller to launch jobs against machines, to synchro-
nize with inventory sources, and to import project content from a version control system.
Credential Plugin Python code that contains definitions for an external credential type, its metadata fields, and the
code needed for interacting with a secret management system.
Distributed Job A job that consists of a job template, an inventory, and slice size. When executed, a distributed job
slices each inventory into a number of “slice size” chunks, which are then used to run smaller job slices.
External Credential Type A managed credential type for automation controller used for authenticating with a secret
management system.
Facts Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and
templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered
when running plays by executing the internal setup module on the remote nodes. You never have to call the
setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of
users who are switching from other configuration management systems, the fact module also pulls in facts from
the ‘ohai’ and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
Forks Ansible and automation controller talk to remote nodes in parallel, and the level of parallelism can be set several ways: during the creation or editing of a Job Template, by passing --forks, or by editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.
Group A set of hosts in Ansible that can be addressed as a set, of which many may exist within a single Inventory.
Group Vars The group_vars/ files are files that live in a directory alongside an inventory file, with an optional
filename named after each group. This is a convenient place to put variables that will be provided to a given
group, especially complex data structures, so that these variables do not have to be embedded in the inventory
file or playbook.
Handlers Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains
a “notify” directive and also indicates that it changed something. For example, if a config file is changed then the
task referencing the config file templating operation may notify a service restart handler. This means services
can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but
service restarts are the most common usage.
Host A system managed by automation controller, which may include a physical, virtual, cloud-based server, or other
device. Typically an operating system instance. Hosts are contained in Inventory. Sometimes referred to as a
“node”.
Host Specifier Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to
a set of systems. This “hosts:” directive in each play is often called the hosts specifier. It may select one system,
many systems, one or more groups, or even some hosts that are in one group and explicitly not in another.
Instance Group A group that contains instances for use in a clustered environment. An instance group provides the
ability to group instances based on policy.
Inventory A collection of hosts against which Jobs may be launched.
Inventory Script A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and
variable information from an external resource–whether that be a SQL database, a CMDB solution, or something
like LDAP. This concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works
more or less exactly the same way.
Inventory Source Information about a cloud or other script that should be merged into the current inventory group,
resulting in the automatic population of Groups, Hosts, and variables about those groups and hosts.
Job One of many background tasks launched by the controller, this is usually the instantiation of a Job Template; the
launch of an Ansible playbook. Other types of jobs include inventory imports, project synchronizations from
source control, or administrative cleanup actions.
Job Detail The history of running a particular job, including its output and success/failure status.
Job Slice See Distributed Job.
Job Template The combination of an Ansible playbook and the set of parameters required to launch it.
JSON Ansible and automation controller use JSON for return data from remote modules. This allows modules to be
written in any language, not just Python.
Mesh Describes a network comprising nodes. Communication between nodes is established at the transport layer by protocols such as TCP, UDP, or Unix sockets. See also, node.
Metadata Information for locating a secret in the external system once authenticated. The user provides this information when linking an external credential to a target credential field.
Node A node corresponds to entries in the instance database model, or the /api/v2/instances/ endpoint, and is a machine participating in the cluster / mesh. The unified jobs API reports controller_node and execution_node fields. The execution node is where the job runs, and the controller node interfaces between the job and server functions.
Node Type   Description
Control     Nodes that run persistent Ansible Automation Platform services and delegate jobs to hybrid and execution nodes
Hybrid      Nodes that run persistent Ansible Automation Platform services and execute jobs
Hop         Nodes used for relaying across the mesh only
Execution   Nodes that run jobs delivered from control nodes (jobs submitted from the user’s Ansible automation)
Notification Template An instance of a notification type (Email, Slack, Webhook, etc.) with a name, description, and
a defined configuration.
Notification A manifestation of the notification template; for example, when a job fails a notification is sent using the
configuration defined by the notification template.
Notify The act of a task registering a change event and informing a handler task that another action needs to be run at
the end of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in
the order they are listed, not in the order that they are notified.
Organization A logical collection of Users, Teams, Projects, and Inventories. The highest level in the automation
controller object hierarchy is the Organization.
Organization Administrator An automation controller user with the rights to modify the Organization’s membership
and settings, including making new users and projects within that organization. An organization admin can also
grant permissions to other users within the organization.
Permissions The set of privileges assigned to Users and Teams that provide the ability to read, modify, and administer
Projects, Inventories, and other automation controller objects.
Plays A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier
(usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define
the role that those systems will perform. There can be one or many plays in a playbook.
Playbook An Ansible playbook. Refer to http://docs.ansible.com/ for more information.
Policy Policies dictate how instance groups behave and how jobs are executed.
Project A logical collection of Ansible playbooks, represented in automation controller.
Roles Roles are units of organization in Ansible and automation controller. Assigning a role to a group of hosts (or a
set of groups, or host patterns, etc.) implies that they should implement a specific behavior. A role may include
applying certain variable values, certain tasks, and certain handlers–or just one or more of these things. Because
of the file structure associated with a role, roles become redistributable units that allow you to share behavior
among playbooks–or even with other users.
Secret Management System A server or service for securely storing and controlling access to tokens, passwords,
certificates, encryption keys, and other sensitive data.
Schedule The calendar of dates and times for which a job should run automatically.
Sliced Job See Distributed Job.
Source Credential An external credential that is linked to the field of a target credential.
Sudo Ansible does not require root logins and, since it is daemonless, does not require root level daemons (which can
be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in
a sudo command, and can work with both password-less and password-based sudo. Some operations that do
not normally work with sudo (like scp file transfer) can be achieved with Ansible’s copy, template, and fetch
modules while running in sudo mode.
Superuser An admin of the automation controller server who has permission to edit any object in the system, whether or not it is associated with any organization. Superusers can create organizations and other superusers.
Survey Questions asked by a job template at job launch time, configurable on the job template.
Target Credential A non-external credential with an input field that is linked to an external credential.
Team A sub-division of an Organization with associated Users, Projects, Credentials, and Permissions. Teams provide
a means to implement role-based access control schemes and delegate responsibilities across Organizations.
User An automation controller operator with associated permissions and credentials.
Webhook Webhooks allow communication and information sharing between apps. They are used to respond to
commits pushed to SCMs and launch job templates or workflow templates.
Workflow Job Template A set consisting of any combination of job templates, project syncs, and inventory syncs,
linked together in order to execute them as a single unit.
YAML Ansible and automation controller use YAML to define playbook configuration languages and also variable
files. YAML has a minimum of syntax, is very clean, and is easy for people to skim. It is a good data format for
configuration files and humans, but is also machine readable. YAML is fairly popular in the dynamic language
community and the format has libraries available for serialization in many languages (Python, Perl, Ruby, etc.).
CHAPTER
THIRTYFOUR
COPYRIGHT © RED HAT, INC.
Ansible, Ansible Automation Platform, Red Hat, and Red Hat Enterprise Linux are trademarks of Red Hat, Inc.,
registered in the United States and other countries.
If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide
a link to the original version.
Third Party Rights
Ubuntu and Canonical are registered trademarks of Canonical Ltd.
The CentOS Project is copyright protected. The CentOS Marks are trademarks of Red Hat, Inc. (“Red Hat”).
Microsoft, Windows, Windows Azure, and Internet Explorer are trademarks of Microsoft, Inc.
VMware is a registered trademark or trademark of VMware, Inc.
“Amazon Web Services”, “AWS”, “Amazon EC2”, and “EC2” are trademarks of Amazon Web Services, Inc. or its affiliates.
OpenStack™ and the OpenStack logo are trademarks of OpenStack, LLC.
Chrome™ and Google Compute Engine™ are trademarks of Google Inc.
Safari® is a registered trademark of Apple, Inc.
Firefox® is a registered trademark of the Mozilla Foundation.
All other trademarks are the property of their respective owners.