Ansible Notes
Introduction to Ansible
What is Ansible?
Ansible is an open-source automation tool for configuration management, application deployment,
and orchestration. It uses a simple YAML-based language (Ansible Playbooks) to define automation
tasks and relies on SSH (for Linux) and WinRM (for Windows) to communicate with remote systems.
Ansible stands out from other automation tools like Puppet, Chef, and SaltStack due to the
following advantages:
1. Agentless Architecture
o Unlike Puppet, Chef, and SaltStack, Ansible does not require agents on managed
nodes. It communicates over SSH/WinRM, reducing system overhead.
2. Simple and Human-Readable YAML Syntax
o Ansible uses YAML (YAML Ain't Markup Language) for writing Playbooks,
making it easy to learn and understand.
3. Push-Based Model
o Ansible uses a push-based automation model (compared to pull-based models
in Puppet and Chef), which allows instant changes and easier debugging.
4. Idempotency
o Ensures that running the same playbook multiple times results in the same
system state, preventing unintended side effects.
5. No Dedicated Master Node Required
o Ansible does not require a centralized master-server-client model like Puppet.
Any machine with Ansible installed can act as a control node.
6. Extensive Built-in Modules
o Comes with hundreds of modules that support provisioning, configuration
management, cloud integrations (AWS, Azure, Google Cloud), networking
automation, and more.
7. Supports Both Declarative and Procedural Approaches
o You can define what state the system should be in (declarative) or how it should
get there (procedural).
8. Cross-Platform Support
o Works on Linux, Windows, macOS, and network devices.
9. Security and Compliance
o Uses OpenSSH, Kerberos, and other authentication methods, ensuring secure
communication.
10. Dynamic Inventory Management
o Supports static and dynamic inventories that pull host data from cloud services,
databases, or external APIs.
Ansible follows a control-node/managed-node architecture: the control node manages remote
nodes (managed hosts) over SSH or WinRM.
1. Control Node
o The system where Ansible is installed and from which automation tasks are
executed.
2. Managed Nodes (Hosts/Clients)
o The remote systems (Linux, Windows, or networking devices) being automated.
3. Inventory
o A file that lists managed nodes (hosts) and their groupings.
4. Modules
o Predefined scripts that perform automation tasks (e.g., install packages, create
users, configure files).
5. Playbooks
o YAML files defining tasks, roles, and configurations.
6. Tasks
o The individual automation steps inside a playbook.
7. Handlers
o Special tasks that trigger actions when changes occur.
8. Roles
o A structured way to organize playbooks and tasks for reusability.
9. Facts
o System information collected automatically about hosts.
10. Plugins
o Extend Ansible's core functionality (e.g., connection, lookup, filter, and callback plugins).
Verifying Installation
ansible --version
Ensure passwordless SSH access for Ansible to communicate with remote systems.
ssh-keygen -t rsa
ssh-copy-id user@remote-host
An inventory file defines the list of managed nodes (hosts) Ansible controls. It can be a simple
text file (/etc/ansible/hosts) or dynamically generated.
Example: inventory with hostnames
[webservers]
web1.example.com
web2.example.com
[dbservers]
db1.example.com
db2.example.com
Example: inventory with IP addresses
[webservers]
192.168.1.10
192.168.1.11
[dbservers]
192.168.1.20
192.168.1.21
Example: inventory with host variables
[webservers]
web1 ansible_host=192.168.1.10 ansible_user=ubuntu ansible_port=22
web2 ansible_host=192.168.1.11 ansible_user=ubuntu ansible_port=22
[dbservers]
db1 ansible_host=192.168.1.20 ansible_user=root ansible_port=22
Dynamic Inventory
For cloud-based environments (AWS, Azure, GCP), Ansible supports dynamic inventories using
plugins.
1. Ad-hoc Commands
Ad-hoc commands run a single module against hosts without writing a playbook. Common examples:
Gather Facts
ansible all -m setup
Collects system information such as OS version, network interfaces, CPU details, etc.
Copy Files
ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"
Restart a Service
ansible all -m service -a "name=nginx state=restarted"
1. ansible Command
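Runs ad-hoc tasks against inventory hosts. For example (the host pattern is illustrative):
ansible all -m ping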
2. ansible-playbook Command
ansible-playbook site.yml
3. ansible-doc Command
Used to view documentation on Ansible modules. Example:
ansible-doc -s copy
4. ansible-galaxy Command
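Manages roles and collections from Ansible Galaxy. For example (the role name is illustrative):
ansible-galaxy install geerlingguy.nginx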
5. ansible-inventory Command
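Displays the parsed inventory. For example:
ansible-inventory --list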
tasks:
  - name: Install Apache
    apt:
      name: apache2
      state: present
Example Playbook
---
- name: Deploy Web Server
  hosts: webservers
  become: yes
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
Default Locations
/etc/ansible/ansible.cfg (System-wide)
~/.ansible.cfg (User-specific)
ansible.cfg (Per-project directory)
Important Configurations
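A few commonly tuned settings, shown as an illustrative ansible.cfg sketch (the values are assumptions):
[defaults]
inventory = ./inventory
remote_user = ansible
forks = 10
host_key_checking = False
roles_path = ./roles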
A static inventory is a simple text file (typically INI or YAML format) where hosts and groups
are predefined. These inventories do not change unless manually edited.
[web]
web1 ansible_host=192.168.1.10
web2 ansible_host=192.168.1.11
[database]
db1 ansible_host=192.168.1.20
db2 ansible_host=192.168.1.21
[all:vars]
ansible_user=admin
ansible_ssh_private_key_file=/home/admin/.ssh/id_rsa
A dynamic inventory is generated in real-time from external sources like cloud providers (AWS,
Azure, GCP), CMDBs, or databases.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.Name
    prefix: ec2_
filters:
  instance-state-name: running
[web]
web1
web2
[database]
db1
db2
Hosts web1 and web2 are part of the [web] group, while db1 and db2 are in [database].
[frontend]
web1
web2
[backend]
db1
db2
[all_servers:children]
frontend
backend
[web]
serverA ansible_host=192.168.1.10
serverB ansible_host=192.168.1.11
Here, serverA and serverB are aliases for their respective IP addresses.
Targeting hosts with patterns (ad-hoc examples):
Single host: ansible web1 -m ping (pings web1)
All hosts: ansible all -m ping
Group of hosts: ansible web -m ping
Excluding hosts: ansible 'all:!database' -m ping
Multiple groups: ansible 'web:database' -m ping
Ansible has built-in plugins for AWS, Azure, GCP, Kubernetes, NetBox, etc.
Example: AWS EC2 Inventory Plugin
plugin: amazon.aws.aws_ec2
regions:
  - us-west-1
filters:
  instance-state-name: running
Example: NetBox Inventory Plugin
plugin: community.general.netbox
url: https://netbox.example.com
token: your_api_token
Example: Custom Dynamic Inventory Script (Python)
import json

def get_inventory():
    inventory = {
        "web": {
            "hosts": ["web1", "web2"],
            "vars": {"http_port": 80}
        },
        "_meta": {
            "hostvars": {
                "web1": {"ansible_host": "192.168.1.10"},
                "web2": {"ansible_host": "192.168.1.11"}
            }
        }
    }
    print(json.dumps(inventory))

if __name__ == "__main__":
    get_inventory()
With this knowledge, you can handle any Ansible inventory question in an interview.
Understanding variables and facts in Ansible is crucial because they control how automation
tasks execute. Variables store dynamic values, while facts provide system-specific details. Let’s
break down each aspect in detail so you can confidently answer any interview question.
Variables in Ansible are placeholders that store values and allow you to make playbooks more
dynamic. Instead of hardcoding values, you use variables, which can be reused, changed, or
assigned based on conditions.
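A minimal sketch of defining such variables in a play (the package and version values are assumptions):
- hosts: webservers
  vars:
    package_name: nginx
    version: "1.18.0-0ubuntu1"
  tasks:
    - name: Install a specific package version
      apt:
        name: "{{ package_name }}={{ version }}"
        state: present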
Here, package_name and version are variables, allowing flexibility to install different packages.
Example:
inventory/
├── hosts
├── host_vars/
│ ├── webserver1.yaml
│ ├── dbserver1.yaml
Content of host_vars/webserver1.yaml:
ansible_host: 192.168.1.10
server_role: web
Example:
inventory/
├── group_vars/
│ ├── webservers.yaml
│ ├── databases.yaml
Content of group_vars/webservers.yaml:
web_port: 80
firewall_enabled: true
Usage in a Playbook:
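For example, a minimal sketch referencing web_port from group_vars:
- hosts: webservers
  tasks:
    - name: Show the configured web port
      debug:
        msg: "Web port is {{ web_port }}"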
If a variable is defined in multiple places, Ansible uses a defined order of precedence. Simplified,
from highest to lowest: extra vars (-e on the command line), task vars, block vars, role and play
vars, host facts and registered vars, host_vars, group_vars, and finally role defaults.
3. Registered Variables
Registered variables store output from a task and can be used later in a playbook.
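An illustrative task that registers output (the task and variable name are assumptions, consistent
with the example output below):
- name: Get the hostname
  command: hostname
  register: result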
Example Output:
{
"stdout": "server1",
"stderr": "",
"rc": 0,
"changed": false
}
Facts are system properties (e.g., OS type, network interfaces, CPU details) collected by Ansible
from remote hosts before executing tasks.
How to View Facts
Run:
ansible all -m setup
Custom Facts
Example (/etc/ansible/facts.d/custom.fact):
[custom]
env=production
Usage in playbooks:
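A sketch assuming the file above is saved as /etc/ansible/facts.d/custom.fact on the managed host:
- debug:
    msg: "Environment is {{ ansible_local.custom.custom.env }}"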
5. Fact Caching
What is Fact Caching?
By default, Ansible gathers facts at the start of a playbook run. Fact caching allows Ansible to
store facts across multiple runs to reduce execution time.
Modify ansible.cfg:
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/facts
Final Thoughts
Mastering variables and facts in Ansible allows you to write efficient, flexible, and scalable
automation scripts.
tasks:
  - name: Install a package
    apt:
      name: "{{ package_name }}"
      state: present
    notify: Restart Nginx

handlers:
  - name: Restart Nginx
    service:
      name: nginx
      state: restarted
By default, handlers run at the end of a play. You can force immediate execution:
- meta: flush_handlers
Delegation (delegate_to)
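delegate_to runs a task on a different host than the one being targeted. A minimal sketch (the
load-balancer hostname and command are assumptions):
- name: Remove the host from the load balancer
  command: /usr/local/bin/remove_from_lb {{ inventory_hostname }}
  delegate_to: lb.example.com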
With this knowledge, you should be able to answer any interview question on Ansible playbooks with confidence.
Deep Dive into Ansible Roles and Modularization
Ansible roles and modularization are essential concepts for managing complex infrastructure as
code efficiently. Mastering these topics will help you confidently answer any interview
questions related to Ansible roles.
my_role/
│── defaults/ # Default variables (lowest precedence)
│ ├── main.yml
│
│── files/ # Static files to be copied
│ ├── example.conf
│
│── handlers/ # Tasks triggered by "notify"
│ ├── main.yml
│
│── meta/ # Role metadata (dependencies, author, license)
│ ├── main.yml
│
│── tasks/ # Main list of tasks to execute
│ ├── main.yml
│
│── templates/ # Jinja2 templates (e.g., config files)
│ ├── example.conf.j2
│
│── vars/ # Role-specific variables (higher precedence than defaults)
│ ├── main.yml
│
│── README.md # Documentation (best practice)
1. Follow the standard directory structure – This ensures roles are reusable and easy to maintain.
2. Keep tasks modular – Break down tasks into multiple files in the tasks/ directory for clarity.
3. Use handlers efficiently – Use handlers for actions that need to be triggered only when
something changes.
4. Parameterize variables – Store variables in defaults/ and vars/ for flexibility.
5. Avoid hardcoding values – Use variables and templates for configuration files.
6. Document roles – Include a README.md explaining how to use the role.
7. Keep tasks idempotent – Ensure tasks do not make unnecessary changes if they already exist.
8. Follow security best practices – Do not store sensitive data in plain text; use Ansible Vault if
needed.
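The directory layout below can be scaffolded with (the role name is illustrative):
ansible-galaxy init my_role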
my_role/
│── defaults/
│── files/
│── handlers/
│── meta/
│── tasks/
│── templates/
│── vars/
│── README.md
You can install a role from Ansible Galaxy (a repository of community-contributed roles):
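For example, installing a popular community role:
ansible-galaxy install geerlingguy.nginx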
Instead of installing roles manually, you can define dependencies in a requirements.yml file:
- name: geerlingguy.nginx
  version: 3.0.0
- name: my_custom_role
  src: git+https://github.com/myorg/my_custom_role.git
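Everything listed in requirements.yml can then be installed with:
ansible-galaxy install -r requirements.yml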
dependencies:
  - role: common
  - role: database
    vars:
      db_name: "my_app_db"
This ensures that the common and database roles are executed before the current role.
- hosts: web_servers
  roles:
    - common
    - database
    - web_server
Roles will be executed in the order they are defined.
Avoid circular dependencies (e.g., Role A depends on Role B, and Role B depends on Role A).
Use variables within dependent roles to keep them flexible.
If a dependency is required but not installed, Ansible will throw an error.
To publish a role, define its metadata in meta/main.yml:
galaxy_info:
  author: Your Name
  description: A role to install and configure Nginx
  license: MIT
  min_ansible_version: "2.9"
  platforms:
    - name: Ubuntu
      versions:
        - all
  galaxy_tags:
    - web
    - nginx
dependencies: []
Once a role is shared on Ansible Galaxy, others can install and use it:
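For example:
ansible-galaxy install your-github-username.my_role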
- hosts: all
  roles:
    - your-github-username.my_role
Summary
Best practices: use handlers, parameterize variables, document roles, and keep tasks idempotent.
Mastering these topics will ensure you're well-prepared for any interview question related to
Ansible roles and modularization.
Jinja2 is a powerful templating engine used in Flask to generate dynamic HTML content. It
allows you to inject variables, apply control structures (such as loops and conditionals), use
filters to modify data, and even create custom macros for reusable components. Understanding
Jinja2 thoroughly is crucial for Flask development, as it's the core mechanism for rendering
dynamic pages.
1. Using Jinja2 for Dynamic Content
What is Jinja2?
Jinja2 is a template engine for Python, inspired by Django’s template language. It enables
developers to embed Python-like expressions into HTML files to dynamically generate content.
Basic Syntax
Flask passes data from the backend to the frontend using the render_template() function.
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    user = "Alice"
    return render_template("index.html", username=user)

if __name__ == '__main__':
    app.run(debug=True)
<!DOCTYPE html>
<html>
<head>
<title>Jinja2 Example</title>
</head>
<body>
<h1>Welcome, {{ username }}!</h1>
</body>
</html>
Output:
If user = "Alice", the rendered HTML page will be:
<h1>Welcome, Alice!</h1>
How it Works
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template("index.html", name="John Doe", age=25)

if __name__ == '__main__':
    app.run(debug=True)
Example:
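A plausible template for the output below (the variable name names is an assumption):
<ul>
{% for name in names %}
  <li>{{ name }}</li>
{% endfor %}
</ul>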
Rendered Output:
<ul>
<li>Alice</li>
<li>Bob</li>
<li>Charlie</li>
</ul>
Control Structures
1. Conditionals (if-elif-else)
{% if age < 18 %}
<p>You are a minor.</p>
{% elif age >= 18 and age < 65 %}
<p>You are an adult.</p>
{% else %}
<p>You are a senior citizen.</p>
{% endif %}
2. Loops (for)
<ul>
{% for student in students %}
<li>{{ student }}</li>
{% endfor %}
</ul>
Defining a Macro:
{% macro greet(name) %}
<p>Hello, {{ name }}!</p>
{% endmacro %}
Using a Macro:
{{ greet("Alice") }}
{{ greet("Bob") }}
Rendered Output:
<p>Hello, Alice!</p>
<p>Hello, Bob!</p>
<!DOCTYPE html>
<html>
<head>
<title>{% block title %}Default Title{% endblock %}</title>
</head>
<body>
<header>Site Header</header>
<main>
{% block content %}{% endblock %}
</main>
<footer>Site Footer</footer>
</body>
</html>
{% extends "base.html" %}
{% block content %}
<h1>Welcome to My Website</h1>
{% endblock %}
Rendered Output:
<!DOCTYPE html>
<html>
<head>
<title>Home Page</title>
</head>
<body>
<header>Site Header</header>
<main>
<h1>Welcome to My Website</h1>
</main>
<footer>Site Footer</footer>
</body>
</html>
Example:
/project
/static
styles.css
/templates
base.html
index.html
Summary
Feature: Syntax (Example)
Variable Substitution: {{ var }} (e.g., {{ name }})
Control Structures: {% if %}, {% for %} (e.g., {% for item in list %} ... {% endfor %})
Filters: {{ var | filter }} (e.g., {{ name | upper }})
Macros: {% macro func() %} (e.g., {% macro greet(name) %} ... {% endmacro %})
Template Inheritance: {% extends "base.html" %} (with {% block content %} ... {% endblock %})
By mastering Jinja2, you can dynamically generate HTML, reuse components, and create
efficient web applications. If you can implement these techniques fluently, you’ll be well-
prepared for any interview question on Flask templates and Jinja2.
1. Ansible Modules
Modules are small programs that perform specific tasks in Ansible. They can be categorized into
core modules, community modules, and custom modules.
File Modules
Example:
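A minimal sketch using the file module (the path and mode are illustrative):
- name: Ensure a directory exists
  file:
    path: /opt/app
    state: directory
    mode: '0755'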
Package Modules
Example:
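A minimal sketch using the apt module:
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes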
Example:
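The original example here is missing; a plausible counterpart using the yum module:
- name: Install httpd
  yum:
    name: httpd
    state: present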
System Modules
Example:
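A minimal sketch using the user and service modules (the user name is illustrative):
- name: Create a deployment user
  user:
    name: deploy
    state: present
    shell: /bin/bash
- name: Ensure nginx is running and enabled
  service:
    name: nginx
    state: started
    enabled: yes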
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule

def main():
    module_args = dict(
        num1=dict(type='int', required=True),
        num2=dict(type='int', required=True)
    )
    module = AnsibleModule(argument_spec=module_args)
    # Compute the result so callers can read result.sum
    result = dict(sum=module.params['num1'] + module.params['num2'])
    module.exit_json(changed=False, result=result)

if __name__ == '__main__':
    main()
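An illustrative task invoking this module (the module file name add_numbers and the register
variable output are assumptions, consistent with the debug task below):
- name: Add two numbers
  add_numbers:
    num1: 2
    num2: 3
  register: output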
- debug:
    msg: "Sum is {{ output.result.sum }}"
Plugins extend Ansible’s functionality. There are different types, including lookup, filter, action,
and callback plugins.
import random
import string
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        length = int(terms[0]) if terms else 8
        return [''.join(random.choices(string.ascii_letters, k=length))]
class FilterModule(object):
    def filters(self):
        return {
            'reverse': self.reverse_string
        }

    def reverse_string(self, value):
        # Return the input string reversed
        return value[::-1]
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        result['changed'] = False
        result['msg'] = "This is a custom action plugin"
        return result
2. Use the plugin in a playbook
- debug:
    msg: "{{ result.msg }}"
from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    def v2_runner_on_ok(self, result):
        print(f"TASK SUCCESS: {result._task.get_name()} - {result._result}")
[defaults]
callback_whitelist = custom_logger
Managing secrets and sensitive data in Ansible is critical for security, especially when dealing
with infrastructure as code. Ansible provides Ansible Vault as a built-in tool to encrypt and
securely manage sensitive information. Below is an in-depth breakdown of how to manage
secrets effectively.
1. Understanding Ansible Vault
Ansible Vault is a security feature within Ansible that allows you to encrypt and protect
sensitive data such as passwords, API keys, SSH keys, and private information used in
playbooks.
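Creating or encrypting a vault file (the file name secrets.yml is illustrative):
ansible-vault create secrets.yml
ansible-vault encrypt secrets.yml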
It will prompt you for a password. This password will be needed to decrypt the file later.
$ANSIBLE_VAULT;1.1;AES256
6162636465666768696a6b6c6d6e6f7071727374757678797a303132333435
Decrypting a File
ansible-vault decrypt secrets.yml
Changing the Vault Password (Rekey)
ansible-vault rekey secrets.yml
This will prompt you for the old password and then ask for a new one.
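Encrypting a Single Variable
An inline-encrypted value like the one below is produced with encrypt_string (the secret value is
an assumption):
ansible-vault encrypt_string 'S3cr3tP@ss' --name 'db_password'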
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  323434356165663434643265346633...
There is no decrypt_string subcommand; encrypted strings are decrypted automatically at runtime
when the playbook is run with the vault password (for example, you can inspect the value with a
debug task).
db_user: admin
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  323434356165663434643265346633...
Run the playbook with --ask-vault-pass:
ansible-playbook site.yml --ask-vault-pass
This will prompt you for the vault password to decrypt and use the secrets.
To avoid interactive prompts, store the password in a protected file, then use:
ansible-playbook site.yml --vault-password-file ~/.vault_pass
This is useful for CI/CD pipelines where human intervention isn't ideal.
6. Best Practices for Storing and Using Secrets Securely
1. Never store vault passwords in version control.
o Use environment variables or dedicated secret management tools.
2. Use separate vault files for different environments
o Example:
group_vars/
├── production/
│   ├── vault.yml
├── staging/
│   ├── vault.yml
o This ensures that staging and production credentials don’t mix.
3. Limit access to vault files
o Use Linux permissions: chmod 600 vault.yml
o Restrict access using ansible.cfg:
[defaults]
vault_password_file = /etc/ansible/.vault_pass
4. Rotate secrets periodically
o Change vault passwords and rekey encrypted files regularly.
5. Use external secret management tools
o Ansible Vault is great, but for more security, consider integrating:
HashiCorp Vault
AWS Secrets Manager
Azure Key Vault
CyberArk Conjur
Ansible Vault is a feature that allows users to encrypt sensitive data such as passwords, API
keys, and SSH keys in Ansible projects. It ensures that secrets are not exposed in plaintext.
Files and variables are encrypted by using the ansible-vault encrypt, ansible-vault create, and
ansible-vault encrypt_string commands.
If the vault password is lost, there is no way to recover encrypted data. It’s important to store it
securely.
5. Can you use multiple Ansible Vault passwords for different environments?
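Yes. Vault IDs let you use a different password per environment (the labels and paths here are
illustrative):
ansible-playbook site.yml --vault-id dev@prompt --vault-id prod@~/.vault_pass_prod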
Final Thoughts
Ansible Vault is an excellent tool for securing secrets, but it must be used correctly. Following
best practices—such as restricting access, rotating passwords, and integrating external vaults—
ensures a strong security posture.
A collection is essentially a structured directory of Ansible content, bundled together for easy
distribution and use. It contains:
ansible_collections/
└── my_namespace/
└── my_collection/
├── docs/
├── plugins/
│ ├── modules/
│ ├── inventory/
│ ├── lookup/
│ ├── filter/
│ └── connection/
├── roles/
│ ├── role1/
│ ├── role2/
├── playbooks/
├── tests/
├── meta/
│ └── runtime.yml
├── README.md
├── galaxy.yml
1. Installing a Collection
Collections can be installed from Ansible Galaxy or from a tar.gz archive.
o From Ansible Galaxy:
ansible-galaxy collection install community.general
o From a tar.gz file:
ansible-galaxy collection install my_collection.tar.gz
o Install to a specific directory:
ansible-galaxy collection install -p ./collections my_namespace.my_collection
2. Using Installed Collections in Playbooks
Once installed, you can reference the collection in playbooks:
- name: Example using a collection module
  hosts: localhost
  tasks:
    - name: Use a module from a collection
      my_namespace.my_collection.my_module:
        param1: value

Roles from a collection can be referenced the same way:
roles:
  - my_namespace.my_collection.my_role
1. Initialize a Collection
Use the Ansible Galaxy CLI:
ansible-galaxy collection init my_namespace.my_collection
2. Adding Modules
Custom modules are placed in plugins/modules/. Example custom module:
# plugins/modules/custom_module.py
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec={"message": {"type": "str", "required": True}})
    response = {"message": module.params["message"]}
    module.exit_json(changed=False, response=response)

if __name__ == '__main__':
    main()
When working with collections, managing dependencies is crucial. Dependencies are specified
in requirements.yml, which allows Ansible to install all required collections automatically.
collections:
  - name: community.general
    version: 5.5.0
  - name: ansible.utils
    version: ">=1.0.0,<2.0.0"

collections:
  - name: community.general # Install from Ansible Galaxy
  - name: ansible.utils
    source: https://github.com/ansible-collections/ansible.utils.git
  - name: my_namespace.my_collection
    source: /path/to/local/collection
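All listed collections can then be installed with:
ansible-galaxy collection install -r requirements.yml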
[defaults]
collections_paths = ./collections
This deep dive covers everything needed for an interview.
Performance optimization in Ansible is crucial when dealing with large infrastructure, complex
playbooks, or frequent execution. Let’s break down each aspect in-depth.
Example:
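The original example is missing here; a likely candidate is raising the forks setting (the value is an
assumption), which controls how many hosts Ansible manages in parallel:
[defaults]
forks = 50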
Normally, Ansible waits for tasks to finish before moving to the next.
If a task takes a long time (e.g., installing software, database backups), it can block
execution.
Using asynchronous execution allows Ansible to start a task and move on, checking
status later.
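A sketch of asynchronous execution (the script path and timings are assumptions):
- name: Run a long backup job in the background
  command: /usr/local/bin/backup.sh
  async: 3600
  poll: 0
  register: backup_job

- name: Wait for the backup to finish
  async_status:
    jid: "{{ backup_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 60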
3. Accelerated Mode
What is Accelerated Mode?
Ansible normally uses SSH to connect to remote hosts, which is slow due to connection
overhead.
Accelerated Mode (introduced in Ansible 1.9) reduces this overhead by:
o Keeping persistent connections
o Using a lightweight daemon on the remote host
1. Modify ansible.cfg:
[defaults]
transport = accelssh
2. Ensure Python and acceleration support is available on remote hosts.
3. Advantages:
o Reduces SSH setup time.
o Speeds up execution for multiple tasks on the same host.
Modify ansible.cfg:
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 600
Summary
Forks & Parallel Execution: improves speed by increasing simultaneous connections.
Async & Polling: prevents blocking, allowing long tasks to run in the background.
Accelerated Mode: reduces SSH overhead by using persistent connections.
Fact Caching: avoids redundant fact gathering, improving execution speed.
Q1: Why would you increase the number of forks, and what is the trade-off?
A: Increasing forks allows Ansible to run tasks on multiple hosts simultaneously, reducing the
total execution time. However, excessive forks can cause network congestion and CPU
overload.
Q2: When should you use async and poll?
A: Async and poll are useful for long-running tasks (e.g., software installation, backups). Instead
of blocking execution, async allows tasks to run in the background, and poll periodically checks
for completion.
Q3: What is Ansible's accelerated mode, and when should you use it?
A: Accelerated mode reduces SSH connection overhead by keeping persistent connections and
running a lightweight daemon on remote hosts; it helps when many tasks run against the same
hosts.
Q4: What is fact caching, and why does it help?
A: Fact caching stores gathered facts to prevent redundant computation. This significantly
speeds up playbook execution, especially in large environments where gathering facts takes
time.
Final Thoughts
Mastering Ansible’s performance optimization techniques allows you to handle large-scale
automation efficiently. By tuning forks, leveraging async tasks, enabling accelerated mode, and
using fact caching, you can ensure optimal performance for your infrastructure.
1.2 Examples
ansible-playbook myplaybook.yml -v
ansible-playbook myplaybook.yml -vvv
Higher verbosity levels (-vv, -vvv, -vvvv) progressively show more detail, including connection-level debugging.
By default, Ansible does not log output to a file. You can enable logging by setting log_path in
ansible.cfg:
[defaults]
log_path = /var/log/ansible.log
Use case: If one task fails, but you want the playbook to keep running.
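A minimal sketch using ignore_errors (the service name is illustrative):
- name: Stop a service that may not exist
  service:
    name: legacy-agent
    state: stopped
  ignore_errors: yes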
Sometimes, a task does not return a failure even if something went wrong. You can force a
failure condition using failed_when:
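A sketch (the command and the matched string are assumptions):
- name: Check application health
  command: curl -s http://localhost:8080/health
  register: health
  failed_when: "'ok' not in health.stdout"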
- hosts: localhost
  vars:
    my_var: "I am defined!"
  tasks:
    - debug:
        var: my_var
Solution: Define the variable (as above) or guard the task with when: my_var is defined.
ERROR! no action detected in task. This often indicates a misspelled module name
Solution: Check the module name for typos and make sure the collection that provides it is
installed.
Final Tips
1. Use -vvv for deep debugging.
2. Use debug to inspect variables.
3. Enable logging in ansible.cfg.
4. Use failed_when, rescue, and always for robust error handling.
5. Check SSH and network connectivity when hosts are unreachable.
6. Ensure YAML syntax is correct (use yamllint).
With this level of understanding, you should be able to confidently answer any interview
question on Ansible debugging and troubleshooting.
Ansible Tower/AWX (Enterprise Automation) - In-Depth Guide
Ansible Tower is a commercial product developed by Red Hat to provide a web-based interface,
REST API, and centralized control for Ansible automation.
AWX is the open-source upstream project of Ansible Tower, providing almost the same features
without enterprise support.
Key Features: web UI and REST API, role-based access control (RBAC), credential management,
job templates, job scheduling, and centralized logging and auditing.
A job template in Ansible Tower/AWX defines how an Ansible Playbook is executed. It contains
details like the playbook to run, the inventory, the credentials to use, and any extra variables or
limits.
Scheduling Jobs
You can schedule jobs to run at specific times using the scheduling feature:
View active, pending, and completed jobs via the Jobs Dashboard.
Filter jobs based on status (failed, running, successful).
Check job execution history to track playbook runs.
Health Monitoring
1. What is Ansible Tower/AWX, and how does it differ from Ansible CLI?
2. How do job templates simplify automation in Ansible Tower?
3. Explain the different types of credentials stored in AWX.
Advanced Questions
4. How would you configure RBAC to limit access to sensitive job templates?
5. Describe how you would integrate AWX logs with a centralized logging system.
6. How do you schedule and manage recurring automation tasks?
Ansible for Network Automation is a powerful tool that simplifies the management and
configuration of network devices. It allows network engineers and DevOps teams to automate
tasks such as configuration changes, device provisioning, and compliance checks across multiple
vendors like Cisco, Juniper, and Arista.
SSH – Most commonly used for CLI-based interactions with network devices.
API (REST, NETCONF, gNMI) – Used for more advanced, structured communication.
Network CLI – A special Ansible connection method (ansible_connection: network_cli)
designed for network devices.
Cisco Modules
---
- name: Configure a Cisco IOS router
  hosts: routers
  gather_facts: no
  tasks:
    - name: Show version
      cisco.ios.ios_command:
        commands: show version
      register: version_output
Juniper Modules
For Juniper devices, Ansible provides the juniper.device collection, which supports:
---
- name: Configure a Juniper device
  hosts: juniper_routers
  gather_facts: no
  tasks:
    - name: Run a show command
      juniper.device.junos_command:
        commands: show interfaces terse
      register: interfaces_output
Arista Modules
Common modules:
---
- name: Configure an Arista switch
  hosts: arista_switches
  gather_facts: no
  tasks:
    - name: Show interfaces
      arista.eos.eos_command:
        commands: show interfaces
      register: output
To automate network devices using CLI, Ansible uses the network_cli connection type. This must
be specified in the inventory file.
Example inventory.yml:
all:
  children:
    network_devices:
      hosts:
        router1:
          ansible_host: 192.168.1.1
          ansible_network_os: cisco.ios
          ansible_connection: network_cli
          ansible_user: admin
          ansible_password: password
In a playbook, you don’t need to specify the connection again if it's in the inventory.
Ansible playbooks and configuration files use YAML (YAML Ain't Markup Language), which is
human-readable and structured.
When executing commands on network devices, the output is returned as structured data
(lists/dictionaries).
"stdout_lines": [
"Interface IP-Address OK? Method Status Protocol",
"FastEthernet0/0 192.168.1.1 YES manual up up",
"FastEthernet0/1 unassigned YES unset administratively down down"
]
To extract the IP of FastEthernet0/0, you can use Ansible filters like json_query or simple loops.
Example:
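A minimal sketch, assuming the command output above was registered as interface_output:
- name: Show the IP of FastEthernet0/0
  debug:
    msg: "{{ line.split()[1] }}"
  loop: "{{ interface_output.stdout_lines[0] }}"
  loop_control:
    loop_var: line
  when: line is search('FastEthernet0/0')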
Ansible simplifies the management of network devices by automating tasks like configuration,
monitoring, and compliance. It uses an agentless approach, relying on SSH or API-based
communication.
Ansible connects via network_cli (for SSH-based management) or API (NETCONF, REST). The
connection type is defined in the inventory.
4. How do you store and manage network device credentials securely in Ansible?
This guide should give you deep knowledge about Ansible for network automation, allowing
you to confidently answer any interview question.
Declarative & Idempotent: Ensures resources are always in the desired state.
Agentless: No need to install agents on managed nodes.
Multi-cloud support: Can automate infrastructure across AWS, Azure, and GCP.
Modular & Extensible: Uses modules and plugins to support cloud operations.
- name: Create a VM
  azure.azcollection.azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    vm_size: Standard_B1s
    admin_username: azureuser
    admin_password: "P@ssword123!"
    image:
      offer: UbuntuServer
      publisher: Canonical
      sku: "18.04-LTS"
      version: latest
[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_KEY
region=us-east-1
import boto3

ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
    ImageId='ami-12345678',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key'
)
Terraform
provider "aws" {
region = "us-east-1"
}
Ansible
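A comparable Ansible provisioning task using the amazon.aws collection (the AMI, key, and
instance name reuse the illustrative values above):
- name: Launch an EC2 instance
  amazon.aws.ec2_instance:
    name: demo-instance
    image_id: ami-12345678
    instance_type: t2.micro
    key_name: my-key
    state: present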
4. Summary
Ansible is great for cloud automation, managing AWS, Azure, and GCP.
Boto3 provides a Python-based API for AWS but requires manual idempotency.
Terraform vs. Ansible: Terraform is better for provisioning, while Ansible excels at
configuration management.
Best Practice: Combine Terraform for infrastructure and Ansible for post-deployment
configuration.
This guide will provide an in-depth understanding of using Ansible in CI/CD, so you can
confidently answer any interview question on this topic.
Jenkins is a widely used CI/CD tool that integrates well with Ansible for automation.
Modify molecule/default/converge.yml:
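A minimal converge.yml sketch (the role name is an assumption):
- name: Converge
  hosts: all
  tasks:
    - name: Include the role under test
      ansible.builtin.include_role:
        name: my_role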
An Ansible module is a standalone script that Ansible executes on target machines. Modules
return JSON output and follow Ansible’s standard response format.
#!/usr/bin/python
import os
from ansible.module_utils.basic import AnsibleModule

def main():
    module_args = dict(
        path=dict(type='str', required=True)
    )
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )
    path = module.params['path']
    file_exists = os.path.exists(path)
    result = dict(
        changed=False,
        path=path,
        exists=file_exists
    )
    if file_exists:
        module.exit_json(**result)
    else:
        module.fail_json(msg="File not found", **result)

if __name__ == '__main__':
    main()
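An illustrative test.yml that calls the module and registers result (the module name check_file
and the path are assumptions):
- hosts: localhost
  tasks:
    - name: Check whether a file exists
      check_file:
        path: /etc/hosts
      register: result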
- debug:
    var: result
ansible-playbook test.yml
AnsibleModule: A helper that simplifies argument parsing, error handling, and JSON
response formatting.
module.exit_json(): Used to return a successful response.
module.fail_json(): Used to return a failure response.
supports_check_mode: Allows the module to run in dry-run mode (--check).
Ansible plugins extend its core functionality. The most important types are:
from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'custom_callback'
Edit ansible.cfg:
[defaults]
callback_whitelist = custom_callback
ansible-playbook test.yml
Save it in connection_plugins/custom_connection.py:
from ansible.plugins.connection import ConnectionBase

class Connection(ConnectionBase):
    transport = 'custom'
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager

def run_playbook(playbook_path, inventory_path):
    # Build the objects PlaybookExecutor needs from the inventory file
    loader = DataLoader()
    inventory = InventoryManager(loader=loader, sources=[inventory_path])
    variable_manager = VariableManager(loader=loader, inventory=inventory)
    executor = PlaybookExecutor(
        playbooks=[playbook_path],
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        passwords={}
    )
    executor.run()

run_playbook('test.yml', 'inventory.ini')
Ansible Runner provides a more structured way to execute Ansible in Python applications.
Installing Ansible Runner
pip install ansible-runner
Executing a Playbook
import ansible_runner

r = ansible_runner.run(playbook='test.yml', inventory='inventory.ini')
print(r.status, r.rc)  # e.g. "successful 0"
Final Thoughts
By mastering custom modules, callback & connection plugins, and the Ansible SDK, you can
confidently tackle any interview question on this topic. The best way to solidify your knowledge
is through hands-on practice. Try writing different modules and plugins, integrate Ansible with
Python applications, and experiment with real-world automation scenarios.
Ansible best practices are essential for writing maintainable, scalable, and efficient automation.
Below is a deep dive into the key best practices you mentioned.
Proper structure makes playbooks easier to read, debug, and maintain. It also improves
scalability when working on larger projects.
Use roles for reusability: Each role should have a clear purpose (e.g., "webserver").
Separate variables: Use group_vars and host_vars instead of hardcoding values.
Use inventory directories: Separate staging, production, and testing environments.
Keep playbooks in a dedicated directory: Helps in organizing execution logic.
Use ansible.cfg to define paths: Example:
[defaults]
inventory = inventories/production
roles_path = ./roles
Using Tags
Tags allow you to run specific tasks selectively without executing the entire playbook.
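A minimal sketch of tagging and selective runs (the tag names are illustrative):
tasks:
  - name: Install packages
    apt:
      name: nginx
      state: present
    tags: [packages]
  - name: Deploy configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    tags: [config]
Run only the tagged tasks with: ansible-playbook site.yml --tags config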
Idempotency ensures that running a playbook multiple times produces the same result without
unnecessary changes.
Assertions help prevent incorrect configurations, while fail explicitly stops execution.
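A minimal sketch of assert and fail (the variable name and condition are assumptions):
- name: Verify a required variable
  assert:
    that:
      - app_port is defined
      - app_port | int > 0
    fail_msg: "app_port must be defined and positive"
- name: Stop on unsupported OS
  fail:
    msg: "Only Debian-family hosts are supported"
  when: ansible_facts['os_family'] != 'Debian'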
By following these best practices, you ensure your Ansible automation is efficient,
maintainable, and reliable. These techniques help in structuring projects properly, using tags
and includes effectively, ensuring idempotency, and enforcing validation with assert and fail.
Advanced Topics
Event-Driven Ansible (EDA) is built around three main components:
1. Sources – The event sources that generate triggers (e.g., monitoring tools, webhooks,
API calls).
2. Rules – Conditions that determine when an event should trigger an automation action.
3. Actions – The tasks executed when a rule is matched (e.g., running an Ansible
Playbook).
How It Works
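A sketch of an Event-Driven Ansible rulebook (the webhook source, condition, and playbook
name are assumptions):
- name: Respond to monitoring webhooks
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service when it is reported down
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: playbooks/restart_service.yml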
Use Cases
What is ServiceNow?
ServiceNow is an IT service management (ITSM) platform that helps organizations handle
incidents, changes, and service requests.
Use Cases
Use dynamic inventory scripts to pull real-time host lists from AWS, Azure, or VMware.
Organize large inventories into groups with host_vars and group_vars.
Conclusion
By mastering these topics, you can confidently tackle any advanced Ansible interview question.